Search Results

Search found 1395 results on 56 pages for 'repo'.

Page 26/56 | < Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >

  • XenServer 5.5 local storage problem

    - by Jason Nerer
    Hi community, I have the following problem with a Citrix XenServer 5.5. I had to physically move the host, so I shut down all machines via console:

        xe vm-shutdown force=true vm=my-machine-uuis-s

    After that I shut down the machine itself by issuing:

        halt

    After the reboot today the local storage repository is unplugged. I was trying to repair it via XenCenter, but I don't trust this one. So I tried:

        [root@xenserver ~]# xe pbd-list
        uuid ( RO)               : ef6e2f3b-5825-393c-23e1-391d105c87ec
        host-uuid ( RO)          : c4bcf09c-2e52-448f-8210-df5d13bd33a9
        sr-uuid ( RO)            : 2fb3be9c-075c-53ed-acb6-42f0c4ad0614
        device-config (MRO)      : device: /dev/disk/by-id/scsi-SATA_WDC_WD5001ABYS-_WD-WCAS83698154,/dev/disk/by-id/scsi-SATA_WDC_WD5001ABYS-_WD-WCAS83694262
        currently-attached ( RO) : false

    To reattach the storage I issued:

        xe pbd-plug uuid=ef6e2f3b-5825-393c-23e1-391d105c87ec

    That one is running now for a while but not talking to me. The local repo has around 1TB. Should I wait, or are there any other options to reattach the local repo? What could have caused this problem? Any ideas? Thx. J
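
    One way to see whether that long-running pbd-plug is actually doing anything is to watch the task list from a second console. This is only a generic sketch using the standard xe CLI, not something from the original post:

        # list pending tasks and their progress (run in another console)
        xe task-list
        # re-check whether the PBD has come back
        xe pbd-list uuid=ef6e2f3b-5825-393c-23e1-391d105c87ec params=currently-attached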

    Read the article

  • NHibernate.PropertyValueException : not-null property references a null or transient

    - by frosty
    I am getting the following exception.

        NHibernate.PropertyValueException : not-null property references a null or transient

    Here are my mapping files.

    Product:

        <class name="Product" table="Products">
          <id name="Id" type="Int32" column="Id" unsaved-value="0">
            <generator class="identity"/>
          </id>
          <set name="PriceBreaks" table="PriceBreaks" generic="true" cascade="all" inverse="true" >
            <key column="ProductId" />
            <one-to-many class="EStore.Domain.Model.PriceBreak, EStore.Domain" />
          </set>
        </class>

    Price Breaks:

        <class name="PriceBreak" table="PriceBreaks">
          <id name="Id" type="Int32" column="Id" unsaved-value="0">
            <generator class="identity"/>
          </id>
          <property name="ProductId" column="ProductId" type="Int32" not-null="true" />
          <many-to-one name="Product" column="ProductId" not-null="true" cascade="all" class="EStore.Domain.Model.Product, EStore.Domain" />
        </class>

    I get the exception on the following method:

        [Test]
        public void Can_Add_Price_Break()
        {
            IPriceBreakRepository repo = new PriceBreakRepository();
            var priceBreak = new PriceBreak();
            priceBreak.ProductId = 19;
            repo.Add(priceBreak);
            Assert.Greater(priceBreak.Id, 0);
        }
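
    Worth noting: the mapping declares the Product association itself as not-null, so NHibernate validates the PriceBreak.Product reference rather than the raw ProductId value. A hedged C# sketch of the usual way around that, assuming the Product with id 19 can be loaded first (the productRepository helper is made up for illustration):

        // hypothetical sketch - assign the entity reference, not just the FK value,
        // so the not-null <many-to-one> check sees a real Product instead of null
        var product = productRepository.GetById(19);   // assumed helper
        var priceBreak = new PriceBreak { Product = product };
        repo.Add(priceBreak);
        Assert.Greater(priceBreak.Id, 0);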

    Read the article

  • Mercurial Remote Subrepos

    - by Travis G
    I'm trying to set up my Mercurial repository system to work with multiple subrepos. I've basically followed these instructions to set up the client repo with Mercurial client v1.5, and I'm using HgWebDir to host my multiple projects. I have an HgWebDir with the following structure:

        http://myserver/hg
            fooproj
            mylib

    where mylib is some collection of common template library to be consumed by fooproj. The structure of fooproj looks like this:

        fooproj
            doc/
            src/
            .hgignore
            .hgsub
            .hgsubstate

    And .hgsub looks like:

        src/mylib = http://myserver/hg/mylib

    This should work, per my interpretation of the documentation: the first 'nested' is the path in our working dir, and the second is a URL or path to pull from. So, let's say I pull down fooproj to my home folder with:

        ~$ hg clone http://myserver/hg/fooproj foo

    which pulls down the directory structure properly and adds the folder ~/foo/src/mylib, which is a local Mercurial repository. This is where the problems begin: the mylib folder is empty aside from the items in .hg. With 2 seconds of investigation, one can see that src/mylib/.hg/hgrc is:

        [paths]
        default = http://myserver/hg/fooproj/src/mylib

    which is completely wrong (attempting a pull of that repo will give a 404 because, well, that URL doesn't make any sense). Logically, the default value should be what I specified in .hgsub, or it would get the files from the repository in some way. None of the Mercurial commands return error codes (aside from a pull from within src/mylib), so it clearly believes that it is behaving properly (and just might be), although this does not seem logical at all. What am I doing wrong?

    Read the article

  • maven dependencies and jetty - avoiding deploy

    - by James Cooper
    Hi, I have a project with 3 artifacts:

        common   - entities, business logic. no UI code
        webapp-a - a public web app
        webapp-b - an admin web app

    webapp-a and webapp-b depend on common. common is configured to deploy to a local maven repo. so far so good. I have IntelliJ configured so that each artifact is a separate module. Module dependencies are configured properly. I can add a new method to a class in common and immediately use that method in a class in a webapp.

    However, when I run mvn jetty:run it uses the currently deployed common snapshot in my repository. It does not use my local classes. If I add a method to a class in common, it compiles fine, but blows up at runtime. So is it possible to either:

    a) Convince jetty:run to use my local common build output or
    b) Deploy my common output to my local ~/.m2/repo while I'm testing locally before I want to commit/deploy or
    c) some other solution?

    thank you! -- James
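
    For option (b), one low-tech sketch (assuming a parent aggregator POM and a Maven version recent enough for the -pl flag) is simply to reinstall the common snapshot into ~/.m2 before launching Jetty, so jetty:run resolves the freshly built classes:

        # rebuild and install only the 'common' module, then start the webapp
        mvn -pl common clean install
        mvn -pl webapp-a jetty:run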

    Read the article

  • Add jar to apache felix in Pom file?

    - by drozzy
    How can I add a jar to my bundle in Apache Felix? I am using maven, with maven-bundle-plugin to manage my bundles in OBR for me. But I am not sure where to declare the dependency inside my POM on the jar, so that maven correctly compiles it into the final bundle. This is how my plugin looks in pom:

        <plugin>
          <groupId>org.apache.felix</groupId>
          <artifactId>maven-bundle-plugin</artifactId>
          <version>2.1.0</version>
          <extensions>true</extensions>
          <configuration>
            <instructions>
              <Bundle-Category>sample</Bundle-Category>
              <Bundle-SymbolicName>${artifactId}</Bundle-SymbolicName>
              <Export-Package>
                //blahblah
              </Export-Package>
            </instructions>
            <!-- OBR -->
            <remoteOBR>repo-rel</remoteOBR>
            <prefixUrl>file:///C:/Users/blah/Projects/Eclipse3.6-RCP-64/Felix/obr-repo/releases</prefixUrl>
            <ignoreLock>true</ignoreLock>
          </configuration>
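
    If the goal is to have the jar's classes packaged inside the bundle itself, the usual route with maven-bundle-plugin is to declare the jar as an ordinary Maven dependency and then reference it from an Embed-Dependency instruction. A rough sketch (the group/artifact names are placeholders, not values from the original post):

        <!-- regular dependency so the compiler can see the jar -->
        <dependency>
          <groupId>com.example</groupId>
          <artifactId>some-library</artifactId>
          <version>1.0</version>
        </dependency>

        <!-- and inside <instructions> of the maven-bundle-plugin configuration -->
        <Embed-Dependency>some-library</Embed-Dependency>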

    Read the article

  • Asynchronous background processes in Python?

    - by Geuis
    I have been using this as a reference, but not able to accomplish exactly what I need:
    http://stackoverflow.com/questions/89228/how-to-call-external-command-in-python/92395#92395

    I also was reading this: http://www.python.org/dev/peps/pep-3145/

    For our project, we have 5 svn checkouts that need to update before we can deploy our application. In my dev environment, where speedy deployments are a bit more important for productivity than a production deployment, I have been working on speeding up the process. I have a bash script that has been working decently but has some limitations. I fire up multiple 'svn updates' with the following bash command:

        (svn update /repo1) & (svn update /repo2) & (svn update /repo3) &

    These all run in parallel and it works pretty well. I also use this pattern in the rest of the build script for firing off each ant build, then moving the wars to Tomcat. However, I have no control over stopping deployment if one of the updates or a build fails.

    I'm re-writing my bash script with Python so I have more control over branches and the deployment process. I am using subprocess.call() to fire off the 'svn update /repo' commands, but each one is acting sequentially. I try '(svn update /repo) &' and those all fire off, but the result code returns immediately. So I have no way to determine if a particular command fails or not in the asynchronous mode.

        import subprocess
        subprocess.call( 'svn update /repo1', shell=True )
        subprocess.call( 'svn update /repo2', shell=True )
        subprocess.call( 'svn update /repo3', shell=True )

    I'd love to find a way to have Python fire off each Unix command, and if any of the calls fails at any time the entire script stops.
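
    A minimal sketch of the parallel-then-check pattern being described, using subprocess.Popen instead of subprocess.call (paths are the ones from the question; this assumes svn is on the PATH):

        import subprocess
        import sys

        repos = ['/repo1', '/repo2', '/repo3']

        # start all updates at once; Popen returns immediately
        procs = [subprocess.Popen(['svn', 'update', path]) for path in repos]

        # wait for each one and stop the whole script on the first failure
        for path, proc in zip(repos, procs):
            if proc.wait() != 0:
                sys.exit('svn update failed for %s' % path)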

    Read the article

  • Storing class constants (for use as bitmask) in a database?

    - by fireeyedboy
    Let's say I have a class called Medium which can represent different types of media. For instance:

        uploaded video
        embedded video
        uploaded image
        embedded image

    I represent these types with constants, like this:

        class MediumAbstract
        {
            const UPLOAD = 0x0001;
            const EMBED  = 0x0010;
            const VIDEO  = 0x0100;
            const IMAGE  = 0x1000;

            const VIDEO_UPLOAD = 0x0101; // for convenience
            const VIDEO_EMBED  = 0x0110; // for convenience
            const IMAGE_UPLOAD = 0x1001; // for convenience
            const IMAGE_EMBED  = 0x1010; // for convenience

            const ALL = 0x1111; // for convenience
        }

    Thus, it is easy for me to do a combined search on them on an (abstract) repository, with something like:

        {
            public function findAllByType( $type ) { ... }
        }

        $media = $repo->findAllByType( MediumAbstract::VIDEO | MediumAbstract::IMAGE_UPLOAD );
        // or
        $media = $repo->findAllByType( MediumAbstract::ALL );
        // etc..

    How do you feel about using these constant values in a concrete repository like a database? Is it ok? Or should I substitute them with meaningful data in the database?

    Table medium:

        | id | type                | location   | etc..
        -------------------------------------------------
        | 1  | use constants here? | /some/path | etc..

    (Of course I'll only be using the meaningful constants: VIDEO_UPLOAD, VIDEO_EMBED, IMAGE_UPLOAD and IMAGE_EMBED.)
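
    If the raw integers do go into the table, writing them is as plain as binding the constant; a hedged PHP sketch (PDO is assumed here, it is not mentioned in the question), just to show that the column stays an ordinary INT:

        // the constant is stored as its plain integer value (0x0101 == 257)
        $stmt = $pdo->prepare('INSERT INTO medium (type, location) VALUES (:type, :location)');
        $stmt->execute(array(
            ':type'     => MediumAbstract::VIDEO_UPLOAD,
            ':location' => '/some/path',
        ));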

    Read the article

  • Best way to fork SVN project with Git

    - by Jeremy Thomerson
    I have forked an SVN project using Git because I needed to add features that they didn't want. But at the same time, I wanted to be able to continue pulling in features or fixes that they added to the upstream version down into my fork (where they don't conflict). So, I have my Git project with the following branches:

        master     - the branch I actually build and deploy from
        feature_*  - feature branches where I work or have worked on new things, which I then merge to master when complete
        vendor-svn - my local-only git-svn branch that allows me to "git svn rebase" from their svn repo
        vendor     - my local branch that I merge vendor-svn into. Then I push this (vendor) branch to the public git repo (github)

    So, my flow is something like this:

        git checkout vendor-svn
        git svn rebase
        git checkout vendor
        git merge vendor-svn
        git push origin vendor

    Now, the question comes here: I need to review each commit that they made (preferably individually, since at this point I'm about twenty commits behind them) before merging them into master. I know that I could run git checkout master; git merge vendor, but this would pull in all changes and commit them, without me being able to see if they conflict with what I need.

    So, what's the best way to do this? Git seems like a great tool for handling forks of projects since you can pull and push from multiple repos - I'm just not experienced with it enough to know the best way of doing this. Here's the original SVN project I'm talking about: https://appkonference.svn.sourceforge.net/svnroot/appkonference My fork is at github.com/jthomerson/AsteriskAudioKonf (sorry - I couldn't make it a link since I'm a new user here)
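
    One way to look at the incoming upstream commits individually before they land on master is sketched below; this is not from the original post, it just uses stock git commands (log, cherry-pick, and a merge that stops before committing):

        git checkout master
        git log --oneline master..vendor     # list the commits on vendor not yet in master
        git cherry-pick <sha>                # apply one upstream commit at a time for review

        # or stage the whole merge without committing, so conflicts can be inspected first
        git merge --no-commit --no-ff vendor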

    Read the article

  • Copying a directory that is version controlled

    - by ibz
    I am curious whether it is OK to copy a directory that is under version control and start working on both copies. I know it can be different from one VCS to another, but I intentionally don't specify any VCS since I am curious about different cases. I was talking to a coworker recently about doing it in SVN. I think it should be OK, but I am still not 100% sure, since I don't know what exactly SVN is storing in the working copy. However, if we talk about the DVCS world, things might be even more unclear, since every working copy is a repository by itself. Being faced with doing this in bzr now, I decided to ask the question. Later edit: Some people asked why I would want to do that. Here is the whole story: In the case of SVN it was because being out of the office, the connection to the SVN server was really slow, so me and my coworker decided to check out the sources only once and make a local copy. That's what we did and it worked OK, but I am still wondering whether it is guaranteed to work, or it just happened. In the bzr case, I am planning to move the "main" repo to another server. So I was thinking to just copy it there and start considering that the main repo. I guess the safest is to make a clone though.

    Read the article

  • DDD and MVC: Difference between 'Model' and 'Entity'

    - by Nathan Loding
    I'm seriously confused about the concept of the 'Model' in MVC. Most frameworks that exist today put the Model between the Controller and the database, and the Model almost acts like a database abstraction layer. The concept of 'Fat Model, Skinny Controller' is lost as the Controller starts doing more and more logic.

    In DDD, there is also the concept of a Domain Entity, which has a unique identity to it. As I understand it, a user is a good example of an Entity (unique userid, for instance). The Entity has a life-cycle -- its values can change throughout the course of the action -- and then it's saved or discarded. The Entity I describe above is what I thought the Model was supposed to be in MVC? How off-base am I?

    To clutter things more, you throw in other patterns, such as the Repository pattern (maybe putting a Service in there). It's pretty clear how the Repository would interact with an Entity -- how does it with a Model? Controllers can have multiple Models, which makes it seem like a Model is less a "database table" than it is a unique Entity.

    So, in very rough terms, which is better? No "Model" really ...

        class MyController
        {
            public function index()
            {
                $repo = new PostRepository();
                $posts = $repo->findAllByDateRange('within 30 days');
                foreach ($posts as $post) {
                    echo $post->Author;
                }
            }
        }

    Or this, which has a Model as the DAO?

        class MyController
        {
            public function index()
            {
                $model = new PostModel(); // maybe this returns a PostRepository?
                $posts = $model->findAllByDateRange('within 30 days');
                while ($posts->getNext()) {
                    echo $posts->Post->Author;
                }
            }
        }

    Neither of those examples even does what I was describing above. I'm clearly lost. Any input?

    Read the article

  • Can I recover lost commits in a SVN repository using a local tracking git-svn branch?

    - by Ian Stevens
    A SVN repo I use git-svn to track was recently corrupted and a backup was recovered. However, a week's worth of commits were lost in the recovery. Is it possible to recover those lost commits using git-svn dcommit on my local git repo? Is it sufficient to run git-svn dcommit with the SHA1 of the last recovered commit in SVN? eg.

        > svn info http://tracked-svn/trunk | sed -n "s/Revision: //p"
        252
        > git log --grep="git-svn-id:.*@252" --format=oneline | cut -f1 -d" "
        55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a
        > git svn dcommit 55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a

    Or will the git-svn-id need to be stripped from the intended commits? I tried this using --dry-run but couldn't tell whether it would try to submit all commits:

        > git svn dcommit --verbose --dry-run 55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a
        Committing to http://tracked-svn/trunk ...
        dcommitted on a detached HEAD because you gave a revision argument.
        The rewritten commit is: 55bb5c9cbb5fe11a90ec2e9e1e0c7c502908cf9a

    Thanks for your help.

    Read the article

  • How can I set up a git repository on windows, and then push to/pull from it on Mac OSX

    - by Eric S.
    I'm trying to set up a Windows-based web server, but do the development work on Mac OSX. I installed freeSSHd and msysGit on the Windows server, and set up a repository where I want it. I also have git on my Mac and set up a repo there too. When I try to clone, pull from, or push to the windows repo via SSH, it gives me an error:

        fatal: protocol error: bad line length character

    It doesn't matter what I set the remote to on my client (Mac OSX) machine - I can point it to a folder that doesn't exist and it still gives me that error. I also tried this on a Linux box I have sitting around and it works perfectly, so it's not my Mac. I have a couple of ideas:

    1. Maybe freeSSHd isn't behaving correctly (as suggested here), so I could get a different SSH server for Windows - perhaps OpenSSH.
    2. Perhaps I'm typing the code that combines Mac and Windows file paths incorrectly. I tried:

        sudo git clone ssh://[email protected]/C:/Users/[my_username]/[remote_repo_name]/.git [destination]

    and

        sudo git clone ssh://[email protected]/C:\Users\[my_username]\[remote_repo_name]\.git [destination]

    I'm getting the same error with both of these. Does anybody know what's going wrong? Better yet, is there anybody out there who has managed to do what I want to do (push to and pull from a windows repository via SSH)? Thanks!
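
    A side note on diagnosing that particular message: "bad line length character" generally means the remote end sent something other than git's packet protocol (an SSH banner, an error string, or shell output) as its first bytes. A quick way to see what actually comes back is to run the server-side command by hand; the host and path below are placeholders, not values from the post:

        # whatever this prints before the first "00xx" packet line is what confuses git
        ssh user@windows-server "git upload-pack 'C:/Users/username/repo/.git'"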

    Read the article

  • A way to specify a different host in an SSH tunnel from the host in use

    - by Tom
    I am trying to set up an SSH tunnel to access Beanstalk (to bypass an annoying proxy server). I can get this to work, but with one caveat: I have to map my Beanstalk host URL (username.svn.beanstalkapp.com) in my hosts file to 127.0.0.1 (and use the IP in place of the domain when setting up the tunnel).

    The reason (I think) is that I am creating the tunnel using the local SSH instance (on Snow Leopard), and if I use localhost or 127.0.0.1 when talking to Beanstalk, it rejects the authorisation credentials. I believe this is because Beanstalk uses the hostname specified in a request to determine which account the username / password combination should be checked against. If localhost is used, I think this information is missing (in some manner which Beanstalk requires) from the requests.

    At the moment I dig the IP for username.svn.beanstalkapp.com, map username.svn.beanstalkapp.com to 127.0.0.1 in my hosts file, then for the tunnel I use the command:

        ssh -L 8080:ip:443 -p 22 -l tom -N 127.0.0.1

    I can tell Subversion that the repo is located at:

        https://username.svn.beanstalkapp.com:8080/repo-name

    This uses my tunnel, and the username and password are accepted. So, my question is whether there is an option when setting up the SSH tunnel which would mean I wouldn't have to use my hosts file workaround.

    Read the article

  • Unwanted Shell expansion when assigning the output of a shell command to a variable

    - by Rob Goodwin
    I am exporting a portion of a local prototype svn repository to import into a different repo. We have a number of svn properties set throughout the repo, so I figured I would write a script to list the file elements and their corresponding properties. How hard can that be, right? So I started writing a bash script that would assign the output of svn proplist -v to a variable, so I could check if the specified file had any properties.

        #!/bin/bash
        svn proplist -v $1
        o=$(svn proplist -v "$1")
        echo $o

    Now this works fine and echoes the output of the svn proplist command. But if the proplist command returns something like:

        svn:ignore :
          *
          build

    it performs a shell expansion on the * and inserts the entire directory listing prior to the build property value. So if the directory had a.txt, b.txt and build files/dirs in it, the output would look like:

        svn:ignore a.txt b.txt build

    I figure I need to somehow escape the output or something to keep the expansion from happening, but have yet to find something that works. There are other ways to do this, but I hate it when I cannot figure something out, and I have to admit, I think this one beat me (well, given the time I can spend on it).
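
    For what it's worth, the expansion happens at the unquoted echo, not at the assignment: the command substitution keeps the * intact, but echo $o lets the shell glob it against the current directory. A tiny bash illustration of the difference (the property value is made up):

        # simulate a proplist-style value containing a bare *
        o=$(printf 'svn:ignore :\n  *\n  build\n')

        echo $o      # unquoted: the * is glob-expanded into the directory listing
        echo "$o"    # quoted: the * and the newlines are kept literally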

    Read the article

  • NHibernate slow mapping

    - by Rob A
    My question is: what can I do to determine the cause of the slowness, or what can I do to speed it up without knowing the exact cause? I am running a simple query and it appears that the mapping back to the entities is taking forever. The result set is 350 rows, which is not much data in my opinion.

        IRepository repo = ObjectFactory.GetInstance<IRepository>();
        var q = repo.Query<Order>(item => item.Ordereddate > DateTime.Now.AddDays(-40));
        foreach (var order in q)
        {
            Console.WriteLine(order.TransactionNumber);
        }

    The profiler is telling me it is executing the query 7ms / 35257ms; I am assuming that the former is the actual response from the db and the latter is the time it takes NH to do its magic. 35 seconds is too long. This is a simple mapping, one table, nested components, using the fluent interface to do mappings. I just start up a simple console app and run the one query; the slowness is measured after the SessionFactory is initialized, there should only be one session, and I am not using a transaction. Thanks

    Read the article

  • "Session is Closed!" - NHibernate

    - by Alexis Abril
    This is in a web application environment: an initial request is able to successfully complete, however any additional requests return a "Session is Closed" response from the NHibernate framework. I'm using an HttpModule approach with the following code:

        public class MyHttpModule : IHttpModule
        {
            public void Init(HttpApplication context)
            {
                context.EndRequest += ApplicationEndRequest;
                context.BeginRequest += ApplicationBeginRequest;
            }

            public void ApplicationBeginRequest(object sender, EventArgs e)
            {
                CurrentSessionContext.Bind(SessionFactory.Instance.OpenSession());
            }

            public void ApplicationEndRequest(object sender, EventArgs e)
            {
                ISession currentSession = CurrentSessionContext.Unbind(SessionFactory.Instance);
                currentSession.Dispose();
            }

            public void Dispose() { }
        }

    SessionFactory.Instance is my singleton implementation, using FluentNHibernate to return an ISessionFactory object. In my repository class, I attempt to use the following syntax:

        public class MyObjectRepository : IMyObjectRepository
        {
            public MyObject GetByID(int id)
            {
                using (ISession session = SessionFactory.Instance.GetCurrentSession())
                    return session.Get<MyObject>(id);
            }
        }

    This allows code in the application to be called as such:

        IMyObjectRepository repo = new MyObjectRepository();
        MyObject obj = repo.GetByID(1);

    I have a suspicion my repository code is to blame, but I'm not 100% sure on the actual implementation I should be using. I found a similar issue on SO here. I too am using WebSessionContext in my implementation; however, no solution was provided other than writing a custom SessionManager. For simple CRUD operations, is a custom session provider required apart from the built-in tools (i.e. WebSessionContext)?
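
    For what it's worth, one common culprit with this exact pattern is the using block in the repository: GetCurrentSession() hands back the session the HttpModule bound for the request, and the using block disposes it after the first call, so any later use of the contextual session finds it closed. A hedged sketch of the repository without the using block (lifetime stays with the HttpModule):

        public class MyObjectRepository : IMyObjectRepository
        {
            public MyObject GetByID(int id)
            {
                // use the request-scoped session; the HttpModule opens and disposes it
                ISession session = SessionFactory.Instance.GetCurrentSession();
                return session.Get<MyObject>(id);
            }
        }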

    Read the article

  • jquery load returns empty, possible MVC 2 problem?

    - by Max Fraser
    I have a site that needs to get some data from a different site that is using ASP.NET MVC. The data to be loaded is from these pages:

        http://charity.hondaclassic.com/home/totaldonations
        http://charity.hondaclassic.com/Home/CharityList

    This should be a no-brainer, but for some reason I get an empty response. Here is my JS:

        <script>
        jQuery.noConflict();
        jQuery(document).ready(function($){
            $('.totalDonations').load('http://charity.hondaclassic.com/home/totaldonations');
            $('#charityList').load('http://charity.hondaclassic.com/home/CharityList');
        });
        </script>

    In Firebug I see the request is made and comes back with a response of 200 OK, but the response is empty; if you browse to these pages they work fine! What the heck? Here are the controller actions from the MVC site:

        public ActionResult TotalDonations()
        {
            var total = "$" + repo.All<Customer>().Sum(x => x.AmountPaid).ToString();
            return Content(total);
        }

        public ActionResult CharityList()
        {
            var charities = repo.All<Company>();
            return View(charities);
        }

    Someone please point out what stupid little thing I am missing - this should have taken me 5 minutes and it's been hours!

    Read the article

  • How to setup Eclipselink with JPA?

    - by deamon
    The Eclipselink documentation says that I need the following entries in my pom.xml to get it with Maven:

        <dependencies>
          <dependency>
            <groupId>org.eclipse.persistence</groupId>
            <artifactId>eclipselink</artifactId>
            <version>2.0.0</version>
            <scope>compile</scope>
            ...
          </dependency>
        </dependencies>
        ...
        <repositories>
          <repository>
            <id>EclipseLink Repo</id>
            <url>http://www.eclipse.org/downloads/download.php?r=1&amp;nf=1&amp;file=/rt/eclipselink/maven.repo</url>
          </repository>
          ...
        </repositories>

    But when I try to use the @Entity annotation, NetBeans tells me that the class cannot be found. And indeed: there is no Entity class in the javax.persistence package from Eclipselink. How do I have to set up Eclipselink with Maven?
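
    The eclipselink artifact ships the persistence provider itself, while the javax.persistence annotations normally come from a separate JPA API artifact in the same EclipseLink repository. A hedged sketch of that extra dependency (the exact artifact id and version are an assumption to verify against the repository, not something stated in the question):

        <dependency>
          <groupId>org.eclipse.persistence</groupId>
          <artifactId>javax.persistence</artifactId>
          <version>2.0.0</version>
        </dependency>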

    Read the article

  • Syncing magento database from development to production

    - by ringerce
    I use git for version control. I have a development, staging and production environment. When I finish in development I push to staging for review by the client. When approved, I push changes from staging to production. That works fine as long as there are no database changes. What happens if I install modules via Magento Connect on local development and they make database modifications? How would I push those changes up to the production server, since the production server is always changing?

    Edit: I wrote two shell scripts. One pulls the production database down to my development server, replaces the base url with the development url and updates my development db accordingly. It also leaves the production sql dump behind to be added to my git repo. I'm not really sure if it's beneficial to keep the raw dumps in source control, but I'm going to try it out. The second script moves the development database up to staging and essentially performs the same operations as the first.

    Now when it comes time to move to production I pull the updated production repo into the production server and allow Magento to do its thing. I also started using SQLyog recently, and it has a database comparison wizard which will give me the differences between my development and production databases and allow me to merge the changes in selectively. It always creates a migration script that I add to source control as well. If anything goes wrong I can run the comparison to see if anything was missed. Does this sound like a decent workflow to you guys?
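
    A rough shell sketch of the kind of pull-down script described above; host names, database names and URLs are placeholders, not values from the post:

        # dump production, rewrite the base URL for the dev box, reload locally,
        # and keep the raw dump around so it can be committed to the repo
        ssh deploy@production "mysqldump magento_prod" > prod-dump.sql
        sed 's#http://www.example.com/#http://dev.example.local/#g' prod-dump.sql > dev-dump.sql
        mysql magento_dev < dev-dump.sql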

    Read the article

  • Named previously unnamed branch

    - by Jab
    It seems naming a previously unnamed branch doesn't really work out. It creates a nasty multiple-heads problem that I can't find a solution for. Here is the workflow...

    UserA starts working on a feature that they expect to be small, so they just start working (off the default branch). The change turns out to be a large project and will need multiple contributors. So UserA issues:

        hg branch "Feature1"

    and continues working, committing locally as needed. UserA then pulls down the changes from the central repo so he can push.

    At this point, why does hg heads return 3 heads? It shows 2 for default and 1 for Feature1. The first head for default is the latest change by another user on the branch (irrelevant). The second default head is the commit prior to the hg branch "Feature1" commit. The central repository has rules enforced so that only 1 head per branch is allowed, so forcing a push isn't an option. The repo doesn't want multiple heads on the default branch. UserA should be able to push these changes so that other users can see the Feature1 branch and help out.

    I can't seem to find a way to "correct" this. I don't think I can re-write the branch of the initial commits for the feature, before it was a named branch. I know the initial changes before the named branch are technically on the default branch, but does that mean they will be heads until that Feature1 branch is merged?

    Read the article

  • MVVM: where is the best place to integrate an id counter, in ViewModel or Repository or... ?

    - by msfanboy
    Hello, I am using http://loungerepo.codeplex.com/ this library needs a unique id when I persist my entities to the repository. I decided for integer not Guid. The question is now where do I retrieve a new integer and how do I do it? This is my current approach in the SchoolclassAdministrationViewModel.cs: public SchoolclassAdministrationViewModel() { _schoolclassRepo = new SchoolclassRepository(); _pupilRepo = new PupilRepository(); _subjectRepo = new SubjectRepository(); _currentSchoolclass = new SchoolclassModel(); _currentPupil = new PupilModel(); _currentSubject = new SubjectModel(); ... } private void AddSchoolclass() { // get the last free id for a schoolclass entity _currentSchoolclass.SchoolclassID = _schoolclassRepo.LastID; // add the new schoolclass entity to the repository _schoolclassRepo.Add(SchoolclassModel.SchoolclassModelToSchoolclass(_currentSchoolclass)); // add the new schoolclass entity to the ObservableCollection bound to the View Schoolclasses.Add(_currentSchoolclass); // Create a new schoolclass entity and the bound UI controls content gets cleaned CurrentSchoolclass = new SchoolclassModel(); } public class SchoolclassRepository : IRepository<Schoolclass> { private int _lastID; public SchoolclassRepository() { _lastID = FetchLastId(); } public void Add(Schoolclass entity) { //repo.Store(entity); } private int FetchLastId() { return // repo.GetIDOfLastEntryAndDoInc++ } public int LastID { get { return _lastID; } } } Explanation: Every time the user switches to the SchoolclassAdministrationViewModel which is datatemplated with a UserControl the saVM Ctor is called and the schoolclass repository is created wherein the FetchLastId() is called and I am up to date with the last ID doing a inc++ on it to get the free one... Do you have any better ideas? What I do not like about my current apporach: -Having a private method in repositry class because a repositry is to fetch data only not "entity logic" like the counter - Having to access from the ViewModel a public property - located in the repository -, actually its not the ViewModel concern to get a entity id and assign it. Actually the ViewModel should ask for a schoolclass POCO and get a SchoolclassModel to bind to the UI. But then I have again to re-read the Schoolclass properties into the SchoolclassModel properties what I want to avoid.

    Read the article

  • lightweight/portable VCS for server-hopping DBA?

    - by Aaron
    I'm looking for a VCS that'll help me keep all of my work scripts in sync. Requirements:

    - Portable (as in flash drive, not code-level)
    - Runs on Windows XP and Server 2003+
    - No installation dependencies (Cygwin, perl, Python)

    I use Mercurial on my work machine for version control of the various T-SQL, ksh, perl, and CMD/BAT scripts that I maintain as an MS SQL Server DBA and Unix sysadmin. So far, hg has worked for my AIX boxes - I mount my home directory as I log in, and deal with the repo as if it were local. I haven't been able to find a similar solution for the Windows machines I use. On most of them I do not have Local Admin rights; even if I did, I'd rather not install (and maintain) Python + Mercurial on all of them. I can't get to my home directory on them remotely, which leaves a client running on each machine as the only option.

    Bonus points for an answer that would let me use a single repo for both the Windows and Unix machines. :) I'm running WinXP, with heavy use of Cygwin and a CrunchBang VM.

    Read the article

  • Tagging in Subversion - how do I make the decision about continuing to work on my trunk vs. the new

    - by Howiecamp
    I'm running Tortoise SVN to manage a project. Obviously the principles around tagging apply to any implementation of SVN, but in this question I'll be referring to some TortoiseSVN-specific dialog boxes and messages.

    My working directory and the subversion repository structure both have a Source root directory and the Trunk, Tags and Branches directories underneath. (I couldn't figure out how to do a multilevel indented hierarchy in markdown without using bullets, so if someone could edit and fix this I'd appreciate it.)

    I'm working out of the Trunk directory in my working copy and it's pointing at the Trunk directory in the repo. I want to apply a tag "Release1", so I click the "Branch/tag..." menu option and set the repo path as my [repo_path/bla/Source/Tags/Release1" tag. This dialog box gives me the option to "Switch my working copy to new branch/tag". I understand that if this option is left unchecked, the new "Release1" branch under /Tags will be created but my working copy will remain on the previous "Trunk" path. If I do check this option (or use the Switch command), I understand that my working copy will switch to the new "Release1" branch under "/Tags".

    Where I'm missing a concept is how to make this decision. It doesn't seem like I want to switch my working directory to the recently created tag, since by definition (?) I want that tag to be a snapshot of my code as of a point in time. If I don't switch the working directory, I'll continue working off Trunk, and when I'm ready to take another snapshot I'll make another tag. And so on...

    Am I understanding this right, or am I stating something incorrectly in the previous paragraph (e.g. the statement about not wanting to switch to the tag since the tag should represent a point-in-time snapshot), or otherwise missing something regarding how to make this decision?

    Read the article

  • Why is "origin/HEAD" shown when running "git branch -r"?

    - by Ben Hamill
    When you run git branch -r why the blazes does it list origin/HEAD? For example, there's a remote repo on GitHub, say, with two branches: master and awesome-feature. If I do git clone to grab it and then go into my new directory and list the branches, I see this:

        $ git branch -r
        origin/HEAD
        origin/master
        origin/awesome-feature

    Or whatever order it would be in (alpha? I'm faking this example to keep the identity of an innocent repo secret). So what's the HEAD business? Is it what the last person to push had their HEAD pointed at when they pushed? Won't that always be whatever it was they pushed? HEADs move around... why do I care what someone's HEAD pointed at on another machine? I'm just getting a handle on remote tracking and such, so this is one lingering confusion. Thanks!

    EDIT: I was under the impression that dedicated remote repos (like GitHub, where no one will ssh in and work on that code, but only pull or push, etc.) didn't and shouldn't have a HEAD because there was, basically, no working copy. Not so?

    Read the article

  • How should I manage/declare dependencies between open source C# projects?

    - by munificent
    I've got a game (a roguelike, to be specific) in C# that I'm in the process of cleaning up to open source. One step I'd like to take is splitting it into three distinct pieces:

    1. A simple package of utility classes, things like 2D arrays, vectors, etc.
    2. A terminal UI package that gives you a curses-like display. It depends on 1.
    3. The actual game, which uses 1 and 2.

    Right now, these are all separate projects in the same solution, but I'd kind of like to make them completely separate projects (in the "open source project" sense, not the "visual studio project" use of the term) with their own names and repos. I think, at the very least, #1 is generally useful even if you aren't building a game, and I don't want someone to have to build an entire game just to get some handy functions.

    What I'm not sure about is how to handle the dependencies if I split up the solution. If someone decides they want to sync the game, how should I ensure they also get 1 and 2?

    - Include the built dependent .dlls in the game's repo?
    - Just document, "you need these other projects and they must be in a path relative to the game like this"?
    - Just leave it all one giant solution and a single repo?
    - Something I'm not thinking of?

    Read the article

< Previous Page | 22 23 24 25 26 27 28 29 30 31 32 33  | Next Page >