Search Results

Search found 5514 results on 221 pages for 'rpm repository'.

Page 112 of 221

  • How do I build the latest Tycho?

    - by hedefalk
    I've tried to build Tycho for a couple of hours now and just can't get it to work. I've followed these instructions: https://docs.sonatype.org/display/TYCHO/BuildingTycho

    So, I've downloaded Eclipse 3.6RC2 and the delta pack linked from this instruction (is it for 3.5 only?): http://aniefer.blogspot.com/2009/06/using-deltapack-in-eclipse-35.html

    I've added the delta pack to the target platform inside the Eclipse installation, and I've installed Maven: Apache Maven 3.0-beta-1 (r935667; 2010-04-19 19:00:39+0200). I can run the first bootstrap of the build, but the second fails:

        mvn clean install -e -V -Pbootstrap-2 -Dtycho.targetPlatform=$TYCHO_TARGET_PLATFORM

        [ERROR] Internal error: java.lang.RuntimeException: Could not resolve plugin org.eclipse.core.net.linux.x86_null -> [Help 1]

    I've tried different stuff: I built an older revision against 3.5 as in this blog post: http://divby0.blogspot.com/2010/03/im-in-love-with-tycho-08-and-maven-3.html That actually built a running Maven, but that version then can't find the Tycho plugin:

        org.apache.maven.plugin.version.PluginVersionResolutionException: Error resolving version for plugin 'org.codehaus.tycho:maven-tycho-plugin' from the repositories [local (/Users/viktor/.m2/repository), central (http://repo1.maven.org/maven2)]: Plugin not found in any plugin repository

    I thought the point was that the plugin would be built in once I had built a Tycho dist...? Sorry about the links; Stack Overflow's spam protection doesn't let me post more than one URL yet.


  • Maintaining state and data context between requests in ASP.NET + EF4

    - by Nick
    I have an EF4/ASP.NET web application that is structured to use POCOs and generic repositories, based essentially on this excellent article. The application is relatively sophisticated, with one page that involves selection and linking of multiple entities to build up a complex user profile. This requires access to multiple entity types (20 or so) and associated repositories across multiple posts. When a repository is first accessed, it uses the existing data context if one exists, else it creates a new context.

    The problem is that if the lifetime of the context is only per-request (as suggested in the article), then you have to deal with multiple contexts and the complexity around detaching and attaching entities between contexts.

    My solution is to share the context between posts by creating a single view model that includes all required repositories (initialised to share the same context) plus any associated data, storing this model in a Session variable, and retrieving it from Session on subsequent page requests, thereby maintaining the same context across all posts until the profile is saved. This works fine, BUT I am concerned that I don't actually know exactly what is stored in the model's Session variable or, more importantly, the size of the Session variable.

    So two questions, I suppose: firstly, should I look for a better solution to handle the shared context across posts (any suggestions welcome)? And secondly, what is actually stored in the Session when it includes a repository plus context? Any help appreciated!
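    For reference, a minimal sketch of the session-backed approach described above, assuming EF4's ObjectContext and repositories that take a context in their constructor (the type and property names here are hypothetical placeholders, not from the original):

        // Hypothetical view model holding repositories that share one context.
        public class ProfileEditModel
        {
            private readonly MyEntities _context = new MyEntities();

            public Repository<User> Users { get; private set; }
            public Repository<Skill> Skills { get; private set; }

            public ProfileEditModel()
            {
                // Every repository is initialised against the same context,
                // so entities never need to be detached and re-attached.
                Users = new Repository<User>(_context);
                Skills = new Repository<Skill>(_context);
            }
        }

        // In the controller: keep the model alive across posts via Session.
        var model = Session["ProfileEdit"] as ProfileEditModel ?? new ProfileEditModel();
        Session["ProfileEdit"] = model;

    As for what Session actually holds: with the default InProc mode it stores only an object reference, so the real footprint is the live object graph behind it: the context, every entity it is tracking, and the repositories. With an out-of-process session store this approach would fail outright, since ObjectContext is not serializable.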


  • ASP.NET MVC insert doesn't seem to work for me

    - by Pandiya Chendur
    My controller calls my repository's insert method and all the values are passed, but the row never gets inserted into my table. My controller method:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Create([Bind(Exclude = "Id")]FormCollection collection)
        {
            try
            {
                MaterialsObj materialsObj = new MaterialsObj();
                materialsObj.Mat_Name = collection["Mat_Name"];
                materialsObj.Mes_Id = Convert.ToInt64(collection["MeasurementType"]);
                materialsObj.Mes_Name = collection["Mat_Type"];
                materialsObj.CreatedDate = System.DateTime.Now;
                materialsObj.CreatedBy = Convert.ToInt64(1);
                materialsObj.IsDeleted = Convert.ToInt64(1);
                consRepository.createMaterials(materialsObj);
                return RedirectToAction("Index");
            }
            catch
            {
                return View();
            }
        }

    and my repository:

        public MaterialsObj createMaterials(MaterialsObj materialsObj)
        {
            Material mat = new Material();
            mat.Mat_Name = materialsObj.Mat_Name;
            mat.Mat_Type = materialsObj.Mes_Name;
            mat.MeasurementTypeId = materialsObj.Mes_Id;
            mat.Created_Date = materialsObj.CreatedDate;
            mat.Created_By = materialsObj.CreatedBy;
            mat.Is_Deleted = materialsObj.IsDeleted;
            db.Materials.InsertOnSubmit(mat);
            return materialsObj;
        }

    What am I missing here? Any suggestions?
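    A likely culprit, for what it's worth: assuming this is LINQ to SQL (which InsertOnSubmit suggests), InsertOnSubmit only queues the row in the data context; nothing reaches the table until SubmitChanges() is called. A minimal sketch of the repository method with that call added:

        public MaterialsObj createMaterials(MaterialsObj materialsObj)
        {
            Material mat = new Material();
            // ... field mapping as above ...
            db.Materials.InsertOnSubmit(mat); // only queues the insert
            db.SubmitChanges();               // actually writes the row
            return materialsObj;
        }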


  • Create dynamic factory method in PHP (< 5.3)

    - by fireeyedboy
    How would one typically create a dynamic factory method in PHP? By dynamic factory method, I mean a factory method that will autodiscover what objects there are to create, based on some aspect of the given argument, preferably without registering them with the factory first. I'm OK with having the possible objects placed in one common place (a directory), though.

    I want to avoid your typical switch statement in the factory method, such as this:

        public static function factory( $someObject )
        {
            $className = get_class( $someObject );
            switch( $className ) {
                case 'Foo':
                    return new FooRelatedObject();
                    break;
                case 'Bar':
                    return new BarRelatedObject();
                    break;
                // etc...
            }
        }

    My specific case deals with the factory creating a voting repository based on the item to vote for. The items all implement a Voteable interface. Something like this:

        Default_User implements Voteable
        ...
        Default_Comment implements Voteable
        ...
        Default_Event implements Voteable
        ...

        Default_VoteRepositoryFactory
        {
            public static function factory( Voteable $item )
            {
                // autodiscover what type of repository this item needs
                // for instance, Default_User needs a Default_VoteRepository_User
                // etc...
                return new Default_VoteRepository_OfSomeType();
            }
        }

    I want to be able to drop in new Voteable items and vote repositories for these items without touching the implementation of the factory.


  • DDD and MVC: Difference between 'Model' and 'Entity'

    - by Nathan Loding
    I'm seriously confused about the concept of the 'Model' in MVC. Most frameworks that exist today put the Model between the Controller and the database, and the Model almost acts like a database abstraction layer. The concept of 'Fat Model, Skinny Controller' is lost as the Controller starts doing more and more logic.

    In DDD, there is also the concept of a Domain Entity, which has a unique identity. As I understand it, a user is a good example of an Entity (unique user id, for instance). The Entity has a life-cycle -- its values can change throughout the course of the action -- and then it's saved or discarded.

    The Entity I describe above is what I thought the Model was supposed to be in MVC? How off-base am I?

    To clutter things more, you can throw in other patterns, such as the Repository pattern (maybe putting a Service in there). It's pretty clear how the Repository would interact with an Entity -- but how does it with a Model? Controllers can have multiple Models, which makes it seem like a Model is less a "database table" than it is a unique Entity.

    So, in very rough terms, which is better? No "Model", really...

        class MyController {
            public function index() {
                $repo = new PostRepository();
                $posts = $repo->findAllByDateRange('within 30 days');
                foreach($posts as $post) {
                    echo $post->Author;
                }
            }
        }

    Or this, which has a Model as the DAO?

        class MyController {
            public function index() {
                $model = new PostModel(); // maybe this returns a PostRepository?
                $posts = $model->findAllByDateRange('within 30 days');
                while($posts->getNext()) {
                    echo $posts->Post->Author;
                }
            }
        }

    Neither of those examples even does what I was describing above. I'm clearly lost. Any input?


  • Aggregate Pattern and Performance Issues

    - by Mosh
    Hello, I have read about the Aggregate Pattern, but I'm confused about something here. The pattern states that all the objects belonging to the aggregate should be accessed via the Aggregate Root, and not directly. I'm assuming that is the reason why they say you should have a single Repository per Aggregate.

    But I think this adds a noticeable overhead to the application. For example, in a typical web-based application, what if I want to get an object belonging to an aggregate (which is NOT the aggregate root)? I'll have to call Repository.GetAggregateRootObject(), which loads the aggregate root and all its child objects, and then iterate through the child objects to find the one I'm looking for. In other words, I'm loading lots of data and throwing it out except for the particular object I'm looking for. Is there something I'm missing here?

    PS: I know some of you may suggest that we can improve performance with lazy loading, but that's not what I'm asking here... the aggregate pattern requires that all objects belonging to the aggregate be loaded together, so we can enforce business rules.
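    To make the overhead concrete, here is a minimal sketch of the access pattern being questioned (Order/OrderLine and the repository API are hypothetical stand-ins, not from the original):

        // The aggregate rule: go through the root, even for one child.
        Order order = orderRepository.GetById(orderId);           // loads the root and ALL lines
        OrderLine line = order.Lines.Single(l => l.Id == lineId); // then search in memory

        // The direct access the pattern forbids, shown for contrast:
        // OrderLine line = orderLineRepository.GetById(lineId);  // one targeted query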


  • LINQ query code for complex merging of data.

    - by Stacey
    I've posted this before, but I worded it poorly. I'm trying again with a more well-thought-out structure. I have the following code and I am trying to figure out the shorter LINQ expression to do it 'inline'. Please examine the Run() method near the bottom. I am attempting to understand how to join two dictionaries together based on a matching identifier in one of the objects, so that I can use the query in this sort of syntax:

        var selected = from a in items.List()
                       // etc. etc.
                       select a;

    This is my class structure. The Run() method is what I am trying to simplify. I basically need to do this conversion inline in a couple of places, and I wanted to simplify it a great deal so that I can define it more 'cleanly'.

        class TModel
        {
            public Guid Id { get; set; }
        }

        class TModels : List<TModel> { }

        class TValue { }

        class TStorage
        {
            public Dictionary<Guid, TValue> Items { get; set; }
        }

        class TArranged
        {
            public Dictionary<TModel, TValue> Items { get; set; }
        }

        static class Repository
        {
            static public TItem Single<TItem, TCollection>(Predicate<TItem> expression)
            {
                return default(TItem); // access logic.
            }
        }

        class Sample
        {
            public void Run()
            {
                TStorage tStorage = new TStorage();
                // access tStorage logic here.
                Dictionary<TModel, TValue> d = new Dictionary<TModel, TValue>();
                foreach (KeyValuePair<Guid, TValue> kv in tStorage.Items)
                {
                    d.Add(Repository.Single<TModel, TModels>(m => m.Id == kv.Key), kv.Value);
                }
            }
        }
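    For what it's worth, a minimal sketch of the same loop collapsed into a single LINQ expression (requires System.Linq, and assumes Repository.Single behaves as declared above):

        Dictionary<TModel, TValue> d = tStorage.Items.ToDictionary(
            kv => Repository.Single<TModel, TModels>(m => m.Id == kv.Key), // key: matched model
            kv => kv.Value);                                               // value: carried over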


  • Setting up Mercurial/TortoiseHg to work with UltraCompare

    - by Tim Pietzcker
    Hi, I'm trying to get my favorite Windows diff/merge tool, UltraCompare (v7.00), to work with Mercurial/TortoiseHg. I have set up UltraCompare in my Mercurial.ini like this (only the relevant bits shown):

        [merge-tools]
        UltraCompare.executable = C:\Programme\IDM Computer Solutions\UltraCompare\uc.com
        UltraCompare.args = $base $local $other
        UltraCompare.priority = 1
        UltraCompare.gui = True
        UltraCompare.binary = True
        UltraCompare.checkconflicts = True
        UltraCompare.checkchanged = True

    However, the three-way merge fails: the path names get messed up if the path to the repository being merged to contains a space. I have done some more testing, and I've found out (using Process Explorer) that uc.com is called with a broken command line if there is a space in the repository's path. Compare:

        "C:\Programme\IDM Computer Solutions\UltraCompare\uc.exe" " "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~base.akr6au" "E:\Eigene Dateien\test\test-merge\test.txt" "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~other.b92442"

    and

        "C:\Programme\IDM Computer Solutions\UltraCompare\uc.com" "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~base.e7vryp" "E:\test\test-merge\test.txt" "c:\dokume~1\tim~1.pie\lokale~1\temp\test.txt~other.u_qxme"

    There is an extraneous " after the path of the executable in the first example, but not in the second (which works fine). To me, it seems as if UltraCompare is doing everything right and Mercurial/TortoiseHg is passing a defective command line to it. Would you say so too? Is there a workaround? I've just updated to Mercurial 1.5/TortoiseHg 1.0, and the problem persists. Support for other merge tools (Beyond Compare and others) has been added, but sadly not UltraCompare...


  • SpringMvc Annotations for DAO interface and DAO implementation

    - by dev_darin
    I would like to know if I am annotating these classes correctly, since I am new to annotations:

    Country.java

        @Component
        public class Country {

            private int countryId;
            private String countryName;
            private String countryCode;

            /**
             * No-args constructor
             */
            public Country() {
            }

            /**
             * @param countryId
             * @param countryName
             * @param countryCode
             */
            public Country(int countryId, String countryName, String countryCode) {
                this.countryId = countryId;
                this.countryName = countryName;
                this.countryCode = countryCode;
            }

            // getters and setters
        }

    CountryDAO.java

        @Repository
        public interface CountryDAO {
            public List<Country> getCountryList();
            public void saveCountry(Country country);
            public void updateCountry(Country country);
        }

    JdbcCountryDAO.java

        @Component
        public class JdbcCountryDAO extends JdbcDaoSupport implements CountryDAO {

            private final Logger logger = Logger.getLogger(getClass());

            @Autowired
            public List<Country> getCountryList() {
                int countryId = 6;
                String countryCode = "AI";
                logger.debug("In getCountryList()");
                String sql = "SELECT * FROM TBLCOUNTRY WHERE countryId = ? AND countryCode = ?";
                logger.debug("Executing getCountryList String " + sql);
                Object[] parameters = new Object[] { countryId, countryCode };
                logger.info(sql);
                //List<Country> countryList = getJdbcTemplate().query(sql, new CountryMapper());
                List<Country> countryList = getJdbcTemplate().query(sql, parameters, new CountryMapper());
                return countryList;
            }
        }

    CountryManagerIFace.java

        @Repository
        public interface CountryManagerIFace extends Serializable {
            public void saveCountry(Country country);
            public List<Country> getCountries();
        }

    CountryManager.java

        @Component
        public class CountryManager implements CountryManagerIFace {

            @Autowired
            private CountryDAO countryDao;

            public void saveCountry(Country country) {
                countryDao.saveCountry(country);
            }

            public List<Country> getCountries() {
                return countryDao.getCountryList();
            }

            public void setCountryDao(CountryDAO countryDao) {
                this.countryDao = countryDao;
            }
        }


  • DATE_FORMAT in DQL (Symfony2)

    - by schurtertom
    I would like to use some MySQL functions such as DATE_FORMAT in my QueryBuilder. I saw this post but did not totally understand how I should achieve it: "SELECT DISTINCT YEAR Doctrine".

    Doctrine repository class:

        class SubmissionManuscriptRepository extends EntityRepository
        {
            public function findLayoutDoneSubmissions( $fromDate, $endDate, $journals )
            {
                if( true === is_null($fromDate) )
                    return null;

                $commQB = $this->createQueryBuilder( 'c' )
                    ->join('c.submission_logs', 'k')
                    ->select("DATE_FORMAT(k.log_date,'%Y-%m-%d')")
                    ->addSelect('c.journal_id')
                    ->addSelect('COUNT(c.journal_id) AS numArticles');
                $commQB->where("k.hash_key = c.hash_key");
                $commQB->andWhere("k.log_date >= '$fromDate'");
                $commQB->andWhere("k.log_date <= '$endDate'");
                if( $journals != null && is_array($journals) && count($journals)>0 )
                    $commQB->andWhere("c.journal_id in (" . implode(",", $journals) . ")");
                $commQB->andWhere("k.new_status = '20'");
                $commQB->orderBy("k.log_date", "ASC");
                $commQB->groupBy("c.hash_key");
                $commQB->addGroupBy("c.journal_id");
                $commQB->addGroupBy("DATE_FORMAT(k.log_date,'%Y-%m-%d')");
                return $commQB->getQuery()->getResult();
            }
        }

    Entity SubmissionManuscript:

        /**
         * MDPI\SusyBundle\Entity\SubmissionManuscript
         *
         * @ORM\Entity(repositoryClass="MDPI\SusyBundle\Repository\SubmissionManuscriptRepository")
         * @ORM\Table(name="submission_manuscript")
         * @ORM\HasLifecycleCallbacks()
         */
        class SubmissionManuscript
        {
            ...
            /**
             * @ORM\OneToMany(targetEntity="SubmissionManuscriptLog", mappedBy="submission_manuscript")
             */
            protected $submission_logs;
            ...
        }

    Entity SubmissionManuscriptLog:

        /**
         * MDPI\SusyBundle\Entity\SubmissionManuscriptLog
         *
         * @ORM\Entity(repositoryClass="MDPI\SusyBundle\Repository\SubmissionManuscriptLogRepository")
         * @ORM\Table(name="submission_manuscript_log")
         * @ORM\HasLifecycleCallbacks()
         */
        class SubmissionManuscriptLog
        {
            ...
            /**
             * @ORM\ManyToOne(targetEntity="SubmissionManuscript", inversedBy="submission_logs")
             * @ORM\JoinColumn(name="hash_key", referencedColumnName="hash_key")
             */
            protected $submission_manuscript;
            ...
        }

    Any help would be much appreciated.

    EDIT 1: I have now successfully been able to add the custom function DATE_FORMAT. But now, if I try with my GROUP BY, I get the following error:

        [Semantical Error] line 0, col 614 near '(k.logdate,'%Y-%m-%d')': Error: Cannot group by undefined identification variable.

    Does anyone know about this?


  • git clone fails with "index-pack" failed?

    - by gct
    So I created a remote repo that's not bare (because I need Redmine to be able to read it), and it's set to be shared with the group (so git init --shared=group). I was able to push to the remote repo, and now I'm trying to clone it. If I clone it over the net I get this:

        remote: Counting objects: 4648, done.
        remote: Compressing objects: 100% (2837/2837), done.
        error: git-upload-pack: git-pack-objects died with error.
        fatal: git-upload-pack: aborting due to possible repository corruption on the remote side.
        remote: aborting due to possible repository corruption on the remote side.
        fatal: early EOF
        fatal: index-pack failed

    I'm able to clone it locally without a problem, and I ran git fsck, which only reports some dangling trees/blobs, which I understand aren't a problem. What could be causing this? I'm still able to pull from it, just not clone. I should note the remote git version is 1.5.6.5 while the local one is 1.6.0.4.

    I tried cloning my local copy of the repo, stripping out the .git folder, pushing to a new repo, and then cloning the new repo, and I get the same error, which leads me to believe it may be a file in the repo that's causing git-upload-pack to fail...

    Edit: I have a number of Windows binaries in the repo, because I built the Python modules and then stuck them in there so everyone else didn't have to build them as well. If I remove the Windows binaries and push to a new repo, I can clone again; perhaps that gives a clue. Trying to narrow down exactly what file is causing the problem now.
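    As an aside, one common way to hunt for the offending blobs (a hedged suggestion, not from the original) is to list the largest objects in the pack; the third column of git verify-pack's verbose output is the object size:

        git verify-pack -v .git/objects/pack/pack-*.idx | sort -k3 -n | tail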


  • Missing files on branch after cvs2svn import

    - by cafebabe
    A colleague has imported a CVS repository into a pre-existing SVN repository using a cvs2svn dumpfile (like svnadmin load --parent-dir /path < dumpfile), which I originally created from the CVS repo. Now that I'm trying to check out and build from SVN, I've noticed that some files seem to be missing from the SVN checkout that were present when I checked out the same branch from CVS, although the majority are present. They are mostly but not exclusively binary files (jars and gifs etc.), and I think (though I haven't checked exhaustively) that they are also files that have not been modified on the branch that I'm trying to check out.

    I should also point out that they don't show up using cvsweb (I would provide a link to the cvsweb documentation but I have no way of knowing its version etc.), although they do appear when doing a standard checkout of the branch.

    If anyone has any idea what's wrong here, or where to start looking to address this, I'd be very grateful! I'm new to SVN, so I'm not sure if this is normal. Also, I know I could fairly easily "fix" it by copying over the files, but I'd ideally like to keep their revision history, so a more complete solution would be preferable. Thanks!


  • Git force complete sync to master

    - by Jesse
    My workplace uses Subversion for source control, so I have been playing around with git-svn for the advantages of my own branches, committing as often as I want without touching the main repo, etc. Since my git-svn checkout is local, I have cloned it to a network share as well to act as a backup. My thinking is that if my desktop takes a dump, I will at least have the repo on the network share to recover changes that I have not had a chance to dcommit yet.

    My workflow is to work from the desktop, make changes, commit, etc. At the end of the day I want to update the repo on the network share with all of my current changes. I set up the repo on the network share using git clone repo_on_my_desktop, and then update it with git pull origin master.

    The problem I am running into is when I use git rebase to squash multiple commits before dcommitting to the main SVN repository. When I do this, I get merge conflicts in the repo on the network share when I try to back up at night. Is there a way to simply sync entirely with the repository on my desktop without doing a new git clone each night?
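    A hedged aside: since the share is purely a backup, one option is to push from the desktop instead of pulling on the share, force-updating every ref so rebased history never conflicts (this assumes the share is configured as a remote named backup, and works best if that copy is a bare repository):

        git push --mirror backup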


  • Running migrations on the server when deploying with Capistrano

    - by Pandafox
    Hi, I'm trying to deploy my Rails application with Capistrano, but I'm having some trouble running my migrations. In my development environment I just use SQLite as my database, but on my production server I use MySQL. The problem is that I want the migrations to run from my server and not my local machine, as I am not able to connect to my database from a remote location. My server setup: a Debian box running nginx, Passenger, MySQL and a git repository.

    What is the easiest way to do this?

    Update: here's my deploy script:

        set :application, "example.com"
        set :domain, "example.com"
        set :scm, :git
        set :repository, "[email protected]:project.git"
        set :use_sudo, false
        set :deploy_to, "/var/www/example.com"

        role :web, domain
        role :app, domain
        role :db, "localhost", :primary => true

        after "deploy", "deploy:migrate"

    When I run cap deploy, everything works fine until it tries to run the migration. Here's the error I'm getting:

        ** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))
        connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))

    This is why I need to run the migration from the server and not from my local machine. Any ideas?
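    A hedged observation on the error: Capistrano runs deploy:migrate on the host assigned to the :db role, so with role :db, "localhost" the machine running cap tries to open an SSH connection to its own localhost, which matches the refused connection shown above. Pointing the :db role at the server instead is one common fix (a sketch, assuming the same host serves the database):

        role :db, domain, :primary => true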


  • NHibernate 2 query is weird when fetching the collection from the proxy. Is this correct behavior?

    - by ensecoz
    This is my class:

        public class User
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IList<UserFriend> Friends { get; protected set; }
        }

        public class UserFriend
        {
            public virtual int Id { get; set; }
            public virtual User User { get; set; }
            public virtual User Friend { get; set; }
        }

    This is my mapping (Fluent NHibernate):

        public class UserMap : ClassMap<User>
        {
            public UserMap()
            {
                Id(x => x.Id, "UserId").GeneratedBy.Identity();
                HasMany<UserFriend>(x => x.Friends);
            }
        }

        public class UserFriendMap : ClassMap<UserFriend>
        {
            public UserFriendMap()
            {
                Id(x => x.Id, "UserFriendId").GeneratedBy.Identity();
                References<User>(x => x.User).TheColumnNameIs("UserId").CanNotBeNull();
                References<User>(x => x.Friend).TheColumnNameIs("FriendId").CanNotBeNull();
            }
        }

    The problem is when I execute this code:

        User user = repository.Load(1);
        User friend = repository.Load(2);

        UserFriend userFriend = new UserFriend();
        userFriend.User = user;
        userFriend.Friend = friend;

        friendRepository.Save(userFriend);

        var friends = user.Friends;

    At the last line, NHibernate generates this query for me:

        SELECT friends0_.UserId as UserId1_,
               friends0_.UserFriendId as UserFrie1_1_,
               friends0_.UserFriendId as UserFrie1_6_0_,
               friends0_.FriendId as FriendId6_0_,
               friends0_.UserId as UserId6_0_
        FROM "UserFriend" friends0_
        WHERE friends0_.UserId=@p0; @p0 = '1'

    QUESTION: Why does the query look so weird? It should select only three fields (UserFriendId, UserId, FriendId). Am I right, or is there something going on inside NHibernate?


  • svnsync loses revision properties although hook installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have set up an SVN mirror via a cron job (because it needs to go from inside to outside of a firewall, so no post-commit hook is possible) and svnsync. We installed a pre-revprop-change hook just as told. Everything seems to work fine, except that it doesn't. E.g. when manually executing the script:

        # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd>
        Committed revision 19817.
        Copied properties for revision 19817.

    No error, no complaints. But checking for the revision properties says:

        # svnlook info <path-to-mirror>
        0

        # svn info -r HEAD file://<path-to-mirror> 2>&1
        Path: <root-of-mirror>
        URL: file://<path-to-mirror>
        Repository Root: file://<path-to-mirror>
        Repository UUID: <uid>
        Revision: 19817
        Node Kind: directory
        Last Changed Rev: 19817

    So somehow the author and timestamp information gets lost, but we need that information for our internal processes. Since no error or warning is produced, I have absolutely no idea even where to start looking. Everything is local (except for the remote master), so there are no server logs to look at.

    I also tried to manually re-copy the properties via svnsync copy-revprops (http://chestofbooks.com/computers/revision-control/subversion-svn/svnsync-Copy-revprops-Ref-svnsync-C-Copy-revprops.html). It says:

        Copied properties for revision 19885.

    But when I query them, it's just the same. Any ideas how I could approach this problem, or even better, how to solve it? Any ideas appreciated.
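    For reference, the minimal pre-revprop-change hook that accepts every revision-property change looks like this (a sketch; the hook file must be executable and exit 0 for svnsync to be able to copy revprops):

        #!/bin/sh
        exit 0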


  • Sell me Distributed revision control

    - by ring bearer
    I know thousands of similar topics are floating around, and I read at least 5 threads here in SO. But why am I still not convinced about DVCS?

    I have only the following questions (note that I am selfishly worried only about Java projects):

    1. What is the advantage or value of committing locally? What? Really? All modern IDEs allow you to keep track of your changes, and if required you can restore a particular change. Also, they have a feature to label your changes/versions at the IDE level!
    2. What if I crash my hard drive? Where did my local repository go? (So how is it cool compared to checking in to a central repo?)
    3. Working offline or on an airplane. What is the big deal? In order for me to build a release with my changes, I must eventually connect to the central repository. Until then, it does not matter how I track my changes locally.
    4. OK, Linus Torvalds gives his life to Git and hates everything else. Is that enough to blindly sing its praises? Linus lives in a different world compared to the offshore developers in my mid-sized project.

    Pitch me!


  • SVN Serve, Missing a Directory

    - by Ryan Smith
    I'm sure this is an asinine question, and I blame myself for not fully understanding how the svnserve process works. I have an SVN repo, but it needed to be moved to a server within a client's cloud. I did this a while back and ran into the issue of the svnserve.exe process not getting set to the right directory. I now have the svnserve.exe process running as a Windows service and pointing to the right directory, and there are two other repos there that are being served out fine from the same directory.

    I copied out the new directory just like I did with the others, but I'm getting the error "No repository found". I thought that svnserve just looked at that directory and served out the repositories that were there, but I have had a hard time finding more information about that. I thought it was a Windows permission problem, but I set the whole folder to full control for EVERYONE, so that's not it. I feel horrible that I didn't fully understand this problem the first time I fought it, but it's late on a Sunday night and clients are yelling. Anyone know what I'm missing? Thanks.

    EDIT: It's specific to the repository. I tested the same process with some of the other repos we have on our server, and when I copied them up, they worked just as expected. This bug is breaking me, and I wish I could provide more details, but that's all I know. I'm going to try an svnadmin dump instead of an xcopy and see how that goes. I'll let you know.


  • Is there a way to restrict access to a public method to only a specific class in C#?

    - by Anon
    I have a class A with a public method in C#. I want to allow access to this method only from class B. Is this possible?

    UPDATE: This is what I'd like to do:

        public class Category
        {
            public int NumberOfInactiveProducts { get; }
            public IList<Product> Products { get; set; }

            public void ProcessInactiveProduct()
            {
                // do things...
                NumberOfInactiveProducts++;
            }
        }

        public class Product
        {
            public bool Inactive { get; }
            public Category Category { get; set; }

            public void SetInactive()
            {
                this.Inactive = true;
                Category.ProcessInactiveProduct();
            }
        }

    I'd like other programmers to do:

        var prod = Repository.Get<Product>(id);
        prod.SetInactive();

    I'd like to make sure they don't call ProcessInactiveProduct manually:

        var prod = Repository.Get<Product>(id);
        prod.SetInactive();
        prod.Category.ProcessInactiveProduct();

    I want to allow access to Category.ProcessInactiveProduct only from class Product. Other classes shouldn't be able to call Category.ProcessInactiveProduct.
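    C# has no friend-class modifier, but a common approximation, sketched here under the assumption that Category and Product live in the same assembly while other callers do not, is to make the method internal:

        public class Category
        {
            // Visible to Product (same assembly), invisible to outside callers.
            internal void ProcessInactiveProduct()
            {
                // do things...
            }
        }

    Note that internal widens access to the whole assembly rather than a single class; [InternalsVisibleTo] can extend it to one chosen friend assembly, but restricting a method to exactly one caller class generally takes a design change (for instance, Product raising an event that Category subscribes to).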


  • Problem writing XML to file with .NET MVC - timeout?

    - by Mark
    Hey, so I'm having an issue with writing out to an XML file. It works fine for single requests via the browser, but when I use something like Charles to perform 5-10 repeated requests concurrently, several of them will fail. The trace simply shows a 500 error with no content inside. Basically, I think they start timing out waiting for write access or something... This method is inside my repository class; I have also tried making the repository instance a singleton, but it doesn't appear to make any difference. Any help would be much appreciated. Cheers.

        public void Add(Request request)
        {
            try
            {
                XDocument requests;
                XmlReader xmlReader;
                using (xmlReader = XmlReader.Create(_requestsFilePath))
                {
                    requests = XDocument.Load(xmlReader);

                    XElement xmlRequest = new XElement("request",
                        new XElement("code", request.code),
                        new XElement("date", request.date),
                        new XElement("email", new XCData(request.email)),
                        new XElement("name", new XCData(request.name)),
                        new XElement("recieveOffers", request.recieveOffers)
                    );

                    requests.Root.Element("requests").Add(xmlRequest);
                    xmlReader.Close();
                }
                requests.Save(_requestsFilePath);
            }
            catch (Exception ex)
            {
                HttpContext.Current.Trace.Warn("Error writing to file: " + ex);
            }
        }
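    For reference, a minimal sketch of one way to rule out concurrent writers (an assumption about the failure mode, not a confirmed diagnosis): serialize the read-modify-write cycle with a static lock so only one request touches the file at a time.

        private static readonly object _fileLock = new object();

        public void Add(Request request)
        {
            lock (_fileLock) // one writer at a time, process-wide
            {
                XDocument requests = XDocument.Load(_requestsFilePath);
                // ... build and append the <request> element as above ...
                requests.Save(_requestsFilePath);
            }
        }

    (A static lock only covers a single process; under web-garden or multi-server setups, file-level locking or a different store would be needed.)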


  • Database and logic layer for ASP.NET MVC application

    - by Ismail
    I'm going to start a new project which is going to be small initially but may grow big over the years. I'm strongly convinced that I'm going to use ASP.NET MVC with jQuery for the UI. I want to go for MySQL as the database for some reasons, but I'm worried about a few things. I have a good few years of experience working on SQL Server databases, and on one project I had a bad experience creating and managing stored procedures on a MySQL database. I'm totally new to LINQ, but I see that it is easier to use once you are familiar with it.

    The first thing is that accessing data should be easy. So I thought I should use MySQL to LINQ, but somewhere I read that it is not directly supported; however, the MySQL .NET connector adds support for the Entity Framework. I don't know the pros and cons of it. I would love to implement the repository pattern, as it allows applying a filter in the logic layer rather than in the data access layer. Will that be possible if I use the Entity Framework? I'm not clear on how I should go about all this, or whether I should just forget everything and directly use LINQ to SQL on SQL Server. I'm also concerned about performance: someone told me that if we use the Entity Framework, it fetches a lot of data and then filters it. Is that right?

    So the questions basically are:

    1. Is MySQL to LINQ possible? If yes, where can I get more details on it?
    2. What are the pros and cons of using the Entity Framework with MySQL?
    3. Will it be easy to access data using the Entity Framework with MySQL?
    4. Will I be able to implement the repository pattern, which allows applying a filter in the logic layer rather than the data access layer (when I use the Entity Framework with MySQL)?
    5. Does it fetch a whole lot of data from the database and then apply the filter on it?

    If that sounds like too many questions from my side, then if you can just let me know what you would do (with a considerable reason) in this situation as an experienced person in this area, that should answer my question.
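    On the "fetches a lot of data and then filters it" worry, a hedged sketch: a LINQ provider translates filters applied to an IQueryable into the generated SQL rather than applying them in memory, so a repository that exposes IQueryable lets the logic layer filter without over-fetching. The names below are hypothetical, assuming EF4's ObjectContext:

        public class Repository<T> where T : class
        {
            private readonly ObjectContext _context;
            public Repository(ObjectContext context) { _context = context; }

            // Callers can add Where/OrderBy clauses that are translated
            // into SQL, not evaluated in memory.
            public IQueryable<T> Query()
            {
                return _context.CreateObjectSet<T>();
            }
        }

        // Logic layer: this filter becomes part of the SQL sent to the database.
        var active = repo.Query().Where(p => !p.IsDeleted).ToList();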


  • git crlf configuration in mixed environment

    - by Jonas Byström
    I'm running a mixed environment and keep a central, bare repository where I pull and push most of my stuff. This centralized repository runs on Linux, and I check out to Windows XP/7, Mac and Linux. In all repositories I put the following lines in my .git/config:

        [core]
            autocrlf = true

    I don't have the flag safecrlf=true anywhere. The first time I modify stuff on one Windows machine (XP) there is no problem, and when I look at the diff, it looks fine. But when I do the same on the other Windows machine (7), all lines are shown as changed, even though the local line endings are \r\n as expected (when checked in a hex editor). The same applies to a Mac OS X machine.

    Sometimes I get the feeling that the different systems are wrestling over line endings, but I can't be sure (I'm losing track of all the times I change specific files). I didn't use to have the autocrlf flag set, but set the flag many months back. Could that be causing my current problems? Do I need to clone everything again to lose some old baggage? Or are there other things that need configuring too? I've tried git checkout -- . about a million times, but with no success.
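    One hedged diagnostic step: setting core.safecrlf to warn makes git report whenever a CRLF conversion would not round-trip cleanly, which can help show which machine is rewriting the endings:

        git config core.safecrlf warn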


  • Which source control paradigm and solution to embed in a custom editor application?

    - by Greg Harman
    I am building an application that manages a number of custom objects, which may be edited concurrently by multiple users (using different instances of the application). These objects have an underlying serialized representation, and my plan is to persist them (through my application UI) in an external source control system. Of course this implies that my application can check the current version of an object for updates, provide a merging interface for each object, etc.

    My question is which source control paradigm(s) and specific solution(s) to support, and why. The way I (perhaps naively) see the source control world, there are three general paradigms:

    1. Single-repository, locked access (MS SourceSafe)
    2. Single-repository, concurrent access (CVS/SVN)
    3. Distributed (Mercurial, Git)

    I haven't heard of anyone using #1 for quite a number of years, so I am planning to disregard this case altogether (unless I get a compelling argument otherwise). However, I'm at a loss as to whether to support #2 or #3, and which specific implementations. I'm concerned that the usage paradigms are subtly different enough that I can't adequately capture basic operations in a single UI.

    The last bit of information I should convey is that this application is intended to be deployed in a commercial setting, where a source control system may already be in use. I would prefer not to support more than one solution unless it's really a deal-breaker, so wide adoption in a corporate setting is a plus.


  • How to manage sessions in NHibernate unit tests?

    - by Ben
    I am a little unsure as to how to manage sessions within my NUnit test fixtures. In the following test fixture, I am testing a repository. My repository constructor takes an ISession (since I will be using session-per-request in my web application). In my test fixture setup I configure NHibernate and build the session factory. In my test setup I create a clean SQLite database for each test executed.

        [TestFixture]
        public class SimpleRepository_Fixture
        {
            private static ISessionFactory _sessionFactory;
            private static Configuration _configuration;

            [TestFixtureSetUp] // called before any tests in the fixture are executed
            public void TestFixtureSetUp()
            {
                _configuration = new Configuration();
                _configuration.Configure();
                _configuration.AddAssembly(typeof(SimpleObject).Assembly);
                _sessionFactory = _configuration.BuildSessionFactory();
            }

            [SetUp] // called before each test method is called
            public void SetupContext()
            {
                new SchemaExport(_configuration).Execute(true, true, false);
            }

            [Test]
            public void Can_add_new_simpleobject()
            {
                var simpleObject = new SimpleObject() { Name = "Object 1" };

                using (var session = _sessionFactory.OpenSession())
                {
                    var repo = new SimpleObjectRepository(session);
                    repo.Save(simpleObject);
                }

                using (var session = _sessionFactory.OpenSession())
                {
                    var repo = new SimpleObjectRepository(session);
                    var fromDb = repo.GetById(simpleObject.Id);

                    Assert.IsNotNull(fromDb);
                    Assert.AreNotSame(simpleObject, fromDb);
                    Assert.AreEqual(simpleObject.Name, fromDb.Name);
                }
            }
        }

    Is this a good approach, or should I be handling the sessions differently? Thanks, Ben

