Search Results

Search found 10285 results on 412 pages for 'enterprise repository'.

Page 330 of 412

  • How would you use version control for personal data, like a personal website?

    - by nn
    This is more of a use-case question: I generate static files for a personal website using txt2tags, and I was thinking of storing them in a git repository. Normally I use RCS since it's the simplest and I'm a single user, but there seems to be a large trend of people using git/svn/cvs/etc. for personal data, and I thought this might also be a good way to learn at least some of the basics of the tool (even though most of that learning obviously happens in a collaborative environment). So back to the question: how would you use a version control system such as git to manage a personal website?
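
    A minimal sketch of how such a repository could be set up (paths and messages are hypothetical):

        # one-time setup in the site's source directory
        cd ~/sites/mysite
        git init
        git add .
        git commit -m "Initial import of txt2tags sources"

        # day-to-day: edit, regenerate the static files, then record a snapshot
        git add -A
        git commit -m "Update about page"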

    Read the article

  • TortoiseHg : Can commit by command line but not with contextual menu

    - by nicon
    I just installed TortoiseHg (and I'm new to Mercurial). I haven't been able to execute any commit from the Tortoise context menu. Every time I try, I get the following error: Commit: Abort: The system cannot find the specified file. I get the error no matter what the changes in my repository are: new files, modifications to existing files. I also took the time to configure TortoiseHg as shown here: http://tortoisehg.bitbucket.org/manual/1.0/quick.html (section 3.1). The strange thing is that everything works fine when I do the commit from the command line. What should I look for?

    Read the article

  • git: better way for git revert without additional reverted commit

    - by Albert
    I have commits in a branch that exists both locally and remotely, and I want to throw one commit out of the history and move some of the others onto their own branch. Basically, right now I have:

        D---E---F---G   master

    And I want:

              E---G   topic
             /
        D   master

    That should apply both to my local repository and to the (single, origin) remote repository. What is the cleanest way to get there? Also, other people have cloned that repo and have checked out the master branch. If I make such a change in the remote repo, would 'git pull' work for them to reach the same state?
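
    A sketch of one way to do the local surgery (the commit names stand in for real SHA-1 ids):

        git checkout -b topic D      # start the new branch at D
        git cherry-pick E            # replay E onto it
        git cherry-pick G            # then G, skipping F
        git checkout master
        git reset --hard D           # move master back to D
        git push --force origin master
        git push origin topic

    Note that this rewrites published history: collaborators who have already pulled the old master would need to rebase or reset their local branches, since a plain 'git pull' would merge the old and new histories rather than converge on the rewritten state.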

    Read the article

  • Which entities should be Aggregate Roots?

    - by MylesRip
    If Book aggregates Chapter which in turn aggregates Page, then what should be the aggregate root? One possibility might be: Book is an aggregate root with Chapter as a leaf and Chapter is an aggregate with Page as a leaf. In this scenario, Chapter is a leaf in one aggregate and a root in another. Is this okay? Would it make sense in this scenario to have two repositories, one for Book and another for Chapter? If so, then couldn't the Chapter repository be used to circumvent the fact that access to Chapter should only happen via Book? What would be the best way to handle a situation like this?
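
    One common arrangement, sketched in C# with hypothetical names, is to keep Book as the only aggregate root and reach chapters exclusively through it, so that no ChapterRepository exists to act as a back door:

        using System;
        using System.Collections.Generic;
        using System.Collections.ObjectModel;

        public interface IBookRepository          // the only repository
        {
            Book GetById(Guid bookId);
            void Save(Book book);
        }

        public class Book
        {
            private readonly List<Chapter> chapters = new List<Chapter>();

            // chapters are reachable only through their Book
            public ReadOnlyCollection<Chapter> Chapters
            {
                get { return chapters.AsReadOnly(); }
            }

            public Chapter AddChapter(string title)
            {
                var chapter = new Chapter(title);
                chapters.Add(chapter);
                return chapter;
            }
        }

        public class Chapter
        {
            public string Title { get; private set; }
            public Chapter(string title) { Title = title; }
        }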

    Read the article

  • SQL Server 2000, how to automate import data from excel

    - by Stan
    Say the source data comes in Excel format; below is how I import it:

    1. Convert to CSV format via MS Excel.
    2. Roughly find bad rows/columns by inspecting the file.
    3. Back up the table that needs to be updated, in SQL Query Analyzer.
    4. Truncate the table (may need to drop foreign key constraints as well).
    5. Import data from the revised CSV file in SQL Server Enterprise Manager.
    6. If there's an error such as duplicate columns, check the original CSV and remove them.

    I was wondering how to make this procedure more efficient at every step. I have some ideas, but they are incomplete. For steps 2 and 6: use scripts that check automatically and print out all offending row/column data, so all errors can be removed at once. For steps 3 and 5: is there any way to update the table automatically without manually going through the import steps? Could the community advise, please? Thanks.
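
    For steps 3 to 5, the reload itself can be scripted so the Enterprise Manager import wizard is not needed; a hedged T-SQL sketch (table name, file path and CSV layout are hypothetical):

        -- back up the current contents, then reload from the revised CSV
        SELECT * INTO dbo.MyTable_backup FROM dbo.MyTable;

        TRUNCATE TABLE dbo.MyTable;   -- drop and re-create FK constraints around this if needed

        BULK INSERT dbo.MyTable
        FROM 'C:\imports\data.csv'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);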

    Read the article

  • How to OrderBy on a generic IEnumerable (IEnumerable<T>) using LINQ in C#?

    - by Jeffrey
    In my generic repository I have the method below:

        public virtual IEnumerable<T> GetAll<T>() where T : class
        {
            using (var ctx = new DataContext())
            {
                ctx.ObjectTrackingEnabled = false;
                var table = ctx.GetTable<T>().ToList().AsReadOnly();
                return table;
            }
        }

    T is a LINQ to SQL class and I want to be able to order by a particular property. Say T has a property named "SortOrder"; I want to OrderBy on that property, but I am not sure how to achieve this, so I could use some help. Thank you!
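
    One hedged option (a sketch, not necessarily the best way) is to pass an ordering expression into the repository and let LINQ to SQL translate it:

        // requires System.Linq and System.Linq.Expressions
        public virtual IEnumerable<T> GetAll<T, TKey>(Expression<Func<T, TKey>> orderBy) where T : class
        {
            using (var ctx = new DataContext())
            {
                ctx.ObjectTrackingEnabled = false;
                return ctx.GetTable<T>().OrderBy(orderBy).ToList().AsReadOnly();
            }
        }

        // usage, assuming the generated class has a SortOrder property:
        // var items = repository.GetAll<MenuItem, int>(x => x.SortOrder);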

    Read the article

  • Is there a name for a pure-data Objective-C class?

    - by BrianEnigma
    This is less of a code-specific question and more of an Objective-C nomenclature question. In C you have struct for pure data. In Enterprise Java, you have "bean" classes that are purely member variables with getters and setters, but no business logic. In Adobe FLEX, you have "Value Objects". In Objective-C, is there a proper name for an object (descended from NSObject, of course) that simply has ivars and getters/setters (or @property/@synthesize, if you want to get fancy) and no real business logic? A more concrete example might be a simple class with getters and setters for filename, file size, description, and assorted other metadata. You could then take a bunch of these and easily throw them into a container (NSDictionary, NSArray) without the need for messy NSValue wrapping of a C struct. It is also a little more structure than putting, say, a bunch of loosely-typed child NSDictionaries into a parent container object.
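
    For reference, such a class is small enough that a sketch of the file-metadata example fits in a few lines (the class and property names are hypothetical):

        #import <Foundation/Foundation.h>

        @interface FileInfo : NSObject
        {
            NSString *filename;
            NSString *fileDescription;
            unsigned long long fileSize;
        }
        @property (nonatomic, copy) NSString *filename;
        @property (nonatomic, copy) NSString *fileDescription;
        @property (nonatomic, assign) unsigned long long fileSize;
        @end

        @implementation FileInfo
        @synthesize filename, fileDescription, fileSize;
        @end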

    Read the article

  • span tag inside a dynamically generated td - jquery

    - by user1017268
    I have a dynamically generated page of an enterprise application. The data is inside a table structure, and I need to access the value of a span tag inside this table. The page code looks like:

        <td class="dCCItemValue" valign="bottom">
            <span id="S_0_1_5">Problem type</span>
        </td>

    The id of the span tag is also generated dynamically and I have no control over it. So the problem statement becomes: how do I get the value of the span inside a td with class "dCCItemValue"? I hope I have explained the problem correctly. Please help.
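
    A minimal sketch of a selector that ignores the generated id and keys off the stable class instead:

        // text of the span inside any td with class "dCCItemValue"
        var value = $("td.dCCItemValue span").text();

        // or, if there are several such cells, walk them all
        $("td.dCCItemValue span").each(function () {
            console.log($(this).text());
        });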

    Read the article

  • How do I Reload Ajax Call Parameters without Reloading the webpage

    - by Snowright
    I'm working with ExtJS 2.2.1 and Alfresco 3.2 Enterprise. I would like to update the authentication ticket used for calls to the Alfresco server on components that were loaded during login. The ticket expires after a set time, which is why I need to refresh it. Options that do not seem viable for me (but please let me know if I'm wrong):

    - Reload the components so their call parameters are rebuilt: I can't do this because it resets whatever the user was previously working on (the tree panel gets reloaded, grid filters reset, etc.). The actual web page never reloads, since everything uses Ajax calls to update the page.
    - Create a global variable that stores the ticket and attach it as a call parameter on every Ajax call: any components that were loaded during login will still use the original ticket to make calls to the server.
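
    A third option, sketched against Ext 2.x (variable names are hypothetical): keep the ticket in one place and append it to each request just before it is sent, so already-rendered components never have to be reloaded:

        var currentTicket = 'TICKET_abc123';   // refresh this value when re-authenticating

        Ext.Ajax.on('beforerequest', function (conn, options) {
            options.params = options.params || {};
            options.params.ticket = currentTicket;   // always send the latest ticket
        });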

    Read the article

  • Import files directly to SVN repo without checking out first

    - by Werner
    Hi, I am using SVN and have a repository on a remote machine. Sometimes, when working on my local machine, I realize that I need to add some new files to the repo. The usual procedure I know would be:

    1. Check out the whole SVN repo into the current folder on my local machine.
    2. Enter that working copy.
    3. Copy the interesting file into it.
    4. Commit.

    But this can be a bit tedious. I wonder whether I can somehow omit steps 1 to 3 and import the "interesting" file into SVN directly, without having to check out the repo first. Thanks
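
    svn import can add a file straight to a repository URL without a working copy; a sketch with hypothetical paths:

        svn import interesting-file.txt \
            http://svn.example.com/repo/trunk/docs/interesting-file.txt \
            -m "Add interesting-file.txt"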

    Read the article

  • Why is checking in files called a 'commit'?

    - by Kjetil Klaussen
    The act of checking in files to a source control repository like git, Mercurial or svn is called a commit. Does anyone know the reason behind calling it a commit instead of just a check-in? English is not my mother tongue, so it might be some linguistic nuance I don't quite get here, but what am I actually committing to? (Hopefully I'm not committing a crime, but you never know.) Is it in the sense of "to consign for preservation"? Is it related to transactions (commit at the end of a transaction)?

    Read the article

  • What's a good way to set up a development environment on OS X for ruby, rails, and git?

    - by Ein2015
    I'm going to start development on a web app using Ruby, Rails, probably either Postgres or MySQL, and most likely Apache. I'll be using a git repository with the master repo on another server. I've searched through Stack Overflow and done some Googling, and here's what I have so far. What are your opinions on what's described on this page: http://robots.thoughtbot.com/post/159805668/2009-rubyists-guide-to-a-mac-os-x-development ? What about this one: http://www.buildingwebapps.com/articles/79197-setting-up-rails-on-leopard-mac ? I don't need help finding an editor, there are plenty out there (TextMate, TextWrangler, MacVim), but I do need help making sure I'm set up correctly to code, build, and run the web app from my Mac. Here's a specific set of scenarios I could use some help on:

    - Testing various versions of Rails and/or Ruby.
    - Testing performance, vulnerabilities, monitoring queries, etc.
    - Testing different versions of gems.
    - Working on other projects on this same machine.
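
    For the first and third scenarios (multiple Ruby, Rails and gem versions side by side), one common approach at the time was RVM with per-project gemsets; a sketch with hypothetical versions:

        # install two Ruby versions and switch between them
        rvm install 1.8.7
        rvm install 1.9.1
        rvm use 1.8.7

        # keep each project's gems (including its Rails version) isolated
        rvm gemset create myapp
        rvm use 1.8.7@myapp
        gem install rails -v 2.3.5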

    Read the article

  • How to avoid StaleObjectStateException when transaction updates thousands of entities?

    - by ThinkFloyd
    We are using Hibernate 3.6.0.Final with JPA 2 and Spring 3.0.5 for a large-scale enterprise application running on Tomcat 7 and MySQL 5.5. Most transactions in the application live for less than a second and update 5-10 entities, but in some use cases we need to update more than 10-20K entities in a single transaction, which takes a few minutes; as a result, more than 70% of the time such a transaction fails with StaleObjectStateException because some of those entities were updated by another transaction. We maintain a version column in all tables, and on StaleObjectStateException we generally retry, but since these transactions are so long, I am not sure that retrying will ever let us escape the StaleObjectStateException. Also, a lot of activity keeps updating these entities during busy hours, so we cannot take a pessimistic approach, because it could potentially halt many activities in the system. Please suggest how to fix this long-transaction issue. We cannot spawn thousands of small, independent transactions, because we cannot afford inconsistent data if some of them fail and some succeed.

    Read the article

  • Visualizing branch topology in git

    - by Benjol
    I'm playing with git in isolation on my own machine, and even so I find it difficult to maintain a mental model of all my branches and commits. I know I can run git log to see the commit history from where I am, but is there a way to see the entire branch topography, something like these ASCII maps that seem to be used everywhere for explaining branches?

             .-A---M---N---O---P
            /     /   /   /   /
           I     B   C   D   E
            \   /   /   /   /
             `-------------'

    It just feels like someone coming along and trying to pick up my repository would have difficulty working out exactly what was going on. I guess I'm influenced by AccuRev's stream browser...
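
    git can draw that map itself; two commonly used forms:

        # ASCII graph of all branches, right in the terminal
        git log --graph --oneline --decorate --all

        # or the bundled GUI history browser
        gitk --all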

    Read the article

  • Why does SQL Server 2000 treat SELECT test.* and SELECT t.est.* the same?

    - by Chris Pebble
    I butter-fingered a query in SQL Server 2000 and added a period in the middle of the table name: SELECT t.est.* FROM test instead of: SELECT test.* FROM test. And the query still executed perfectly; even SELECT t.e.st.* FROM test executes without issue. I've tried the same query in SQL Server 2008, where it fails (error: the column prefix does not match with a table name or alias used in the query). Out of pure curiosity I have been trying to figure out how SQL Server 2000 handles table names in a way that allows the butter-fingered query to run, but I haven't had much luck so far. Do any SQL gurus know why SQL Server 2000 ran the query without issue? Update: The query appears to work regardless of the interface used (e.g. Enterprise Manager, SSMS, OSQL) and, as Jhonny pointed out below, it bizarrely even works when you try: SELECT TOP 1000 dbota.ble.* FROM dbo.table

    Read the article

  • strange SQL Server attach database error

    - by George2
    Hello everyone, I am using SQL Server 2008 Enterprise with VSTS 2008, and I am developing a simple web application using ASP.NET and Forms Authentication. When I use the ASP.NET configuration tool from VSTS on my project (I want to use it to manually add some Forms Authentication users), I get the following error (SqlException): trying to attach file D:\Projects\MyTest\App_Data\aspnetdb.mdf as an automatically named database failed; it may be caused by a database with the same name already existing, by the specified file not being openable, or by the file being located on a UNC share. On my computer there is no aspnetdb.mdf under D:\Projects\MyTest\App_Data, and I had used aspnet_regsql to generate the database successfully before running the configuration tool. Why is there such an error, and how do I fix it? Thanks in advance, George
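
    Since aspnet_regsql already created a server-hosted aspnetdb database, one hedged fix is to point the provider connection string at it instead of letting the tool try to attach a local .mdf; a web.config sketch (the server name is hypothetical):

        <connectionStrings>
            <remove name="LocalSqlServer" />
            <add name="LocalSqlServer"
                 connectionString="Data Source=.;Initial Catalog=aspnetdb;Integrated Security=True"
                 providerName="System.Data.SqlClient" />
        </connectionStrings>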

    Read the article

  • Request/Response pattern in SOA implementation

    - by UserControl
    In an enterprise-style project (.NET, WCF) I saw that all service contracts accept a single Request parameter and always return a Response:

        [DataContract]
        public class CustomerRequest : RequestBase
        {
            [DataMember]
            public long Id { get; set; }
        }

        [DataContract]
        public class CustomerResponse : ResponseBase
        {
            [DataMember]
            public CustomerInfo Customer { get; set; }
        }

    where RequestBase/ResponseBase contain common members such as ErrorCode, Context, etc. The bodies of both the service methods and the proxies are wrapped in try/catch, so the only way to check for errors is to look at ResponseBase.ErrorCode (which is an enumeration). I want to know what this technique is called, and why it is better than passing what's needed as method parameters and using the standard WCF context-passing and fault mechanisms.
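
    For context, the service side of this pattern typically looks something like the following sketch (the contract name is hypothetical):

        [ServiceContract]
        public interface ICustomerService
        {
            [OperationContract]
            CustomerResponse GetCustomer(CustomerRequest request);
        }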

    Read the article

  • Upload a .pdf file to a Sharepoint Document Library using Access vba

    - by Jim Shaffer
    Within an Access 2007 application, I'm creating a static report in .pdf format. I want to create it and then export the static report (not the data itself) to a SharePoint document library. The intent is for it to be a public repository, with no versioning. Each report will carry a unique name. I'm a seasoned VBA programmer, but using SharePoint services is new to me. How do I go about doing this? Assume I can identify the file name and location after I've generated it, and that I know the SharePoint library URL and have permissions. Where do I go from there?
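
    One hedged approach (a sketch with hypothetical URLs, no error handling, and assuming the site accepts the current Windows credentials): a document library accepts HTTP PUT, so the file can be pushed from VBA with ADODB.Stream and MSXML:

        Dim data() As Byte
        Dim stream As Object, http As Object

        ' read the generated PDF into a byte array
        Set stream = CreateObject("ADODB.Stream")
        stream.Type = 1                      ' adTypeBinary
        stream.Open
        stream.LoadFromFile "C:\Reports\Report_2013_01.pdf"
        data = stream.Read
        stream.Close

        ' PUT it into the document library under its unique name
        Set http = CreateObject("MSXML2.XMLHTTP")
        http.Open "PUT", "http://sharepoint/sites/public/Reports/Report_2013_01.pdf", False
        http.setRequestHeader "Content-Type", "application/pdf"
        http.Send data

        Debug.Print http.Status              ' 200 or 201 indicates success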

    Read the article

  • Is it possible to place Joomla under revision control?

    - by Tom
    We are a team of many developers working on a website that uses both Joomla and custom PHP scripts. The problem is that multiple developers work on various features that need to update information in Joomla (adding modules, changing existing ones or changing settings), and when one developer changes something, he usually makes the change locally first and then does the same thing (hopefully) on the production server. Not only is this very error-prone, but the developers often forget to tell other developers about the changes. The custom PHP scripts are easily shared between developers, but the changes in Joomla are often forgotten, and they lead to serious conflicts when a developer tries to replicate his local changes in production. I have thought about placing Joomla in a Mercurial repository, but how could we distribute the changes in the database between the development, testing and production machines?

    Read the article

  • Magento module - database not initialized?

    - by Magnus
    Hello, I've installed an extension in my Magento Enterprise installation, and I've been able to configure the new options in the admin interface after installing the module. However, in the frontend it complains "table not found", and checking the database confirms that the table is indeed missing. It seems the mysql4-*.php setup scripts have not been run, or have failed. Is there a log or something I can look at to see what went wrong? From what I've read (it's difficult to find documentation on this), the module's database tables should have been initialized on the first request after it was installed and activated. Any other suggestions as to what I can check to find out why it's not initialized properly?
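
    Two places worth checking (a sketch; the resource code depends on the module's config.xml): var/log/exception.log and var/log/system.log for setup-script errors, and the core_resource table, where Magento records which setup scripts it believes have already run:

        -- if a row exists with the expected version, Magento thinks the setup script already ran
        SELECT code, version, data_version
        FROM core_resource
        WHERE code LIKE '%mymodule%';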

    Read the article

  • Sharepoint 2010 web application development suitability evaluation/assessment

    - by Robert Koritnik
    I would like to know what kinds of applications are suitable to be developed on top of SharePoint 2010 and which should not be built on top of it; in other words, when to embrace or avoid SharePoint 2010 as a development platform for new web applications.

    Addendum: would you, as a SharePoint development specialist, choose it as the platform for your next enterprise application with these characteristics?

    - processor intensive
    - lots of different screens for entering and managing data
    - many complex business processes
    - no need to change the UI (i.e. reposition parts)
    - ERP integration
    - etc.

    I'm an ASP.NET MVC (former Web Forms) developer and would like to know whether typical multi-page, semi-complex web applications (intranet/extranet) should be built on top of SharePoint 2010, and why or why not.

    Read the article

  • Read Velocity Tokens/Tag from .vm file

    - by user1801660
    I have an application wherein I am trying to create a Velocity template repository that will centralize all my email templates and allow me to build a communication hub. All templates will be invoked at runtime and populated with data via services. My problem is that I need to provide users with the lists of optional and compulsory parameters when they define the inputs for a template. Is there a way to read the tokens/tags from the Velocity template file and extract them? For example, I want a list of tokens such as $name.address.streetName to be available to me from the .vm file. I do not want to resort to regex. I do not have to cache or reuse them; it's just going to be a one-time read, storing the default, compulsory and optional params in the database. I am following these patterns: http://kickjava.com/src/org/apache/velocity/test/view/TemplateNodeView.java.htm and "How to use String as Velocity Template?". Please advise.

    Read the article

  • Java import from other directory

    - by heldopslippers
    Hi people! I am building an Enterprise Service Bus (ESB) with Java. I won't get into details, but I have to build multiple servers that make use of the same classes. I have the following directory structure:

        /server1
            Main.java
        /server2
            Main.java
        /com
            Database.java

    From Main.java I want to import, for example, the Database class, but of course the following statement won't work: import com.Database;. I am working with the javac compiler on the command line (no Eclipse or anything, just TextMate and the command line). I found a (pretty clumsy) workaround by creating a symbolic link from each server directory to the com directory, but that is not really an ideal solution. Does anybody have a better one? Thanks in advance!
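
    A hedged sketch of the usual fix: give each source file a package declaration matching its directory and compile everything from the common parent directory, so javac can resolve the packages:

        # com/Database.java starts with:    package com;
        # server1/Main.java starts with:    package server1;   (plus "import com.Database;")

        # compile from the project root so the package directories line up
        mkdir -p build
        javac -d build com/Database.java server1/Main.java

        # run one of the servers
        java -cp build server1.Main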

    Read the article

  • can't make svn store password, even though the configuration allows it

    - by davka
    I did everything the book says, i.e. removed the authentication files from .subversion/auth and explicitly set the relevant config parameters to 'yes' even though that is the default, and yet the svn shell commands ask for the password every time. The repository is on cvsdude.com; the client is Linux. I also use the Subclipse plugin, which caches the password fine. I vaguely remember that when I started working with it, the command asked interactively whether I wanted to save the password in clear text, and I said no. Could that choice have been stored somewhere and be taking precedence over the configuration? Thanks!
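
    For reference, these are the runtime-configuration entries involved on the client (a sketch; the plaintext option applies to Subversion 1.6 and later):

        # ~/.subversion/config
        [auth]
        store-passwords = yes
        store-auth-creds = yes

        # ~/.subversion/servers
        [global]
        store-plaintext-passwords = yes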

    Read the article

  • Reasons for sticking with TEXT, NTEXT and IMAGE instead of (N)VARCHAR(max) and VARBINARY(max)

    - by John Assymptoth
    TEXT, NTEXT and IMAGE were deprecated a long time ago and will eventually be removed from SQL Server. However, they are not going to be discontinued right away, not even in the next version of SQL Server, so it's not convenient for my enterprise to convert thousands of columns right away, even though it is already on SQL Server 2012. What arguments can I use to postpone this migration? I know there are some advantages to the new types, but I'm strictly looking for reasons not to migrate data that is already functioning quite well in the old types.

    Read the article
