Search Results

Search found 4805 results on 193 pages for 'repository'.


  • Per directory read/write permissions in Mercurial

    - by pako
    I would like to convert my Subversion repository to Mercurial. I have a pretty big web project divided into many different folders. In Subversion I was able to set per-directory permissions for a repository. For example, I could say that a new developer could only read and write a subset of all the project's directories. Is it possible to have a similar setup in a single Mercurial repository?
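
    For reference, Mercurial has no per-directory read permissions inside a single repository, but the bundled acl extension can reject pushes that touch particular paths. A minimal server-side .hg/hgrc sketch (the user name and directories are made up for illustration):

        [hooks]
        pretxnchangegroup.acl = python:hgext.acl.hook

        [acl]
        # only check changesets arriving via push or serve
        sources = serve push

        [acl.deny]
        # the hypothetical user "newdev" may not modify these trees
        core/**  = newdev
        admin/** = newdev

    There is also an [acl.allow] section for whitelist-style rules; note this restricts what can be pushed, not what can be cloned or read.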

  • Can't get automated release working with Hudson + Git + Maven Release Plugin

    - by Christopher Maier
    As the title says, I'm trying to get an automated release job working on Hudson. It's a Maven project, and all the code is in Git. Manually, I do the release on my personal machine like so:

        git checkout master
        mvn -B release:prepare release:perform

    This works perfectly. The Maven release plugin properly pushes the release tag to the origin repository, as well as the next commit that bumps the version to the next SNAPSHOT. However, when I run this same Maven job through Hudson (either by creating my own "release" job or by using the M2 Release Plugin) it doesn't work so well. The release tag gets pushed out to the origin repository, and the release gets pushed out to our Nexus repository, but the subsequent commit that bumps the version to the next SNAPSHOT doesn't go out. Furthermore, the "master" branch in the origin repository doesn't get changed at all. I've looked in Hudson's workspace for the job, however, and the version has been updated. After looking at the output from the Hudson job, it appears that the Git plugin does not actually check out "master", but rather its SHA1 ID. That is, if the "master" branch label points to commit "f6af76f541f1a1719e9835cdb46a183095af6861", Hudson does

        git checkout -f f6af76f541f1a1719e9835cdb46a183095af6861

    instead of

        git checkout -f master

    As a result, the changes that the Maven release plugin is making are not actually on any branch (certainly not on "master") and these changes don't make it to the origin repository. It runs on the right code, but bookkeeping-wise, the changes seem to get lost because no branch label points to them. Has anybody gotten the Hudson + Git + Maven Release Plugin combo to work properly? Is there some additional configuration somewhere I can set to make this happen? Or is this a bug in the Hudson Git plugin? Thanks in advance.
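
    A hedged workaround for the detached-HEAD checkout is to put the workspace back on a real local branch before running the release, e.g. as a pre-build shell step (branch name assumed to be master):

        # put the workspace on a local branch at the tip of origin/master before releasing
        git branch -f master origin/master
        git checkout master
        mvn -B release:prepare release:perform

    Recent versions of the Git plugin also have a "Checkout/merge to local branch (optional)" setting that achieves much the same thing.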

  • Cloning just a particular directory with hg?

    - by leeand00
    I come from a Subversion background, but I am slowly migrating to Mercurial. When starting on many of my projects, I would set up a development environment that was configured to a particular starting point for developing an app/webapp/program (much like a Maven 2 archetype, but not necessarily Java/Maven). Later I would check out this archetype/template project from my svn repo by its particular path, and then export the working copy from version control, so that I could import the working copy into another repository without adding the changes I made to the working copy back to the base template/archetype project. I've tried doing the same thing in Mercurial, and I've run into a wall since I can't check out, er..um..no, clone a specific path from the hg repository. If I want to achieve the same sort of functionality using Mercurial, what should I do? Use tagged branches? The archetype/template projects are very different, but I'd like to keep them in the same repository.
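
    For what it's worth, the closest hg analogue to "svn export of one path" may be hg archive with an include pattern, which writes an unversioned snapshot of just that directory (the revision and path here are illustrative):

        # export only the webapp template directory at revision "1.0" into ../new-project
        hg archive -r 1.0 -I "path:templates/webapp" ../new-project

    The exported tree (which keeps its templates/webapp path inside the destination) can then be hg init-ed and committed as the starting point of a new repository.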

  • Mercurial: pull changes from unversioned copy

    - by Austin Hyde
    I am currently maintaining a Mercurial repository of the project I am working on. The rest of the team, however, doesn't. There is a "good" (unversioned) copy of the code base that I can access by SSH. What I would like to do is be able to do something like an hg pull from that good copy into my master repository whenever it gets updated. As far as I can tell, there's no obvious way to do this, as hg pull requires the source to be an hg repository. I suppose I could use a utility like rsync to update my repository, then commit, but I was wondering: is there an easier/less contrived way to do this?
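
    If the rsync route ends up being the practical one, a small sketch of what it might look like (host and paths are invented):

        # mirror the "good" copy into the working directory, protecting the local .hg
        rsync -av --delete --exclude='.hg' goodhost:/srv/goodcopy/ ./
        # record whatever changed, including added and removed files
        hg addremove
        hg commit -m "Sync from unversioned master copy"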

  • svn cat for git

    - by sanxiyn
    I am looking for the equivalent of svn cat in git. Yes, I am aware that a similar question was asked here. The answer is to use git show rev:path. However, svn cat can be used for a remote repository. That is, I can do svn cat url@rev and get the file from the specified revision of the remote repository, without getting the whole repository. My understanding is that git show only applies to the local repository. A workaround I found is to use the gitweb interface to get the blob.
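
    One possibility, if the server side permits it (git-upload-archive has to be enabled), is git archive against the remote, which fetches just the named file rather than the whole repository (URL, revision and path are illustrative):

        git archive --remote=ssh://git.example.com/project.git v1.2 path/to/file.c | tar -xO

    This pipes the single file to stdout much like svn cat; hosting services that disable upload-archive (GitHub, for example) won't serve it.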

  • Trying to set up the Android SDK on a PC with Windows XP

    - by John Donovan
    When I start setup I get this message: XML verification failed for http://dl-ssl.google.com/android/repository/repository.xml. Error: cvc-elt.1: Cannot find the declaration of element 'sdk:sdk-repository'. Failed to fetch URL, reason: Unknown. Even when I force the download to use http, nothing happens. I get no downloads etc. for the SDK. Any help would be greatly appreciated.

  • Mercurial repo inside a repo

    - by AkiRoss
    Is it possible to create a Mercurial repository inside an existing Mercurial repository? The idea is to handle subdirectories of a repository as different repositories; how do you do that? I'm not talking about subrepos (at least, if I understood the purpose of subrepos correctly...), but if that is actually what subrepos exist for, then I got it wrong and I'll try to get it right :) Thanks ~Aki
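
    For orientation, this is roughly what the subrepo mechanism mentioned above looks like; whether it fits depends on whether nested-but-independently-versioned directories are what is wanted (paths are illustrative):

        # inside the outer repository
        hg init libs/mylib                        # the nested repository
        echo "libs/mylib = libs/mylib" > .hgsub   # map the path to its source
        hg add .hgsub
        hg commit -m "Track libs/mylib as a subrepository"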

  • CVS to Mercurial conversion: end of line problem

    - by mizipzor
    I recently converted a CVS repository to Mercurial. From the looks of it, everything went perfectly. Except that every end-of-line character is in Unix style, and I want them in Windows style. I know the hg convert command can be used to "convert" a Mercurial repository to another Mercurial repository. Can I use it to do nothing to the repo but fix the line endings?
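
    For what it's worth, hg convert does not rewrite file contents, so one alternative is to leave the stored history LF-only and let the eol extension hand out CRLF in Windows working copies. A sketch (needs a reasonably recent Mercurial):

        # .hgeol, committed at the repository root
        [patterns]
        ** = native

        # hgrc
        [extensions]
        eol =

    With "native", Windows checkouts get CRLF while the repository itself keeps LF internally.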

  • Creating a Bazaar branch from an offline SVN working copy?

    - by Igor Brejc
    I'm doing some offline development on my SVN working copy. Since I won't have access to the SVN repository for a while, I wanted to use Bazaar as a helper version control to keep the intermediate commit history before I commit everything back to the SVN repository. Is this possible? When I try to create a branch using TortoiseBZR from the SVN working copy, it wants to access the SVN repository, which is a problem.
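
    One command-line workaround that might do it is to start a plain (non-svn-aware) Bazaar branch directly in the working copy, ignore the .svn administrative directories, and commit locally while offline (a sketch, untested with TortoiseBZR):

        cd my-svn-working-copy
        bzr init
        bzr ignore .svn
        bzr add
        bzr commit -m "Snapshot of SVN working copy before offline work"

    Subsequent bzr commits hold the intermediate history; once the SVN server is reachable again, the final state can go back with an ordinary svn commit.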

  • Linq is returning too many results when joined

    - by KallDrexx
    In my schema I have two database tables: relationships and relationship_memberships. I am attempting to retrieve all the entries from the relationships table that have a specific member in them, which means joining to the relationship_memberships table. I have the following method in my business object:

        public IList<DBMappings.relationships> GetRelationshipsByObjectId(int objId)
        {
            var results = from r in _context.Repository<DBMappings.relationships>()
                          join m in _context.Repository<DBMappings.relationship_memberships>()
                              on r.rel_id equals m.rel_id
                          where m.obj_id == objId
                          select r;
            return results.ToList<DBMappings.relationships>();
        }

    _context is my generic repository, using code based on the code outlined here. The problem is I have 3 records in the relationships table, and 3 records in the memberships table, each membership tied to a different relationship. Two membership records have an obj_id value of 2 and the other has 3. I am trying to retrieve a list of all relationships related to object #2. When this LINQ runs, _context.Repository<DBMappings.relationships>() returns the correct 3 records and _context.Repository<DBMappings.relationship_memberships>() returns 3 records. However, when results.ToList() executes, the resulting list has 2 issues: 1) the resulting list contains 6 records, all of type DBMappings.relationships; upon further inspection there are 2 for each real relationship record, each pair being exact copies of each other; 2) all relationships are returned, even those where m.obj_id == 3, even though the objId variable is correctly passed in as 2. Can anyone see what's going on? I've spent 2 days looking at this code and I am unable to understand what is wrong. I have joins in other LINQ queries that seem to be working great, and my unit tests show that they are still working, so I must be doing something wrong with this one. It seems like I need an extra pair of eyes on this one :)
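
    For comparison, a query shape that avoids duplicate rows from the join entirely, because it filters with an existence test instead of joining (a sketch against the same repository API as above):

        public IList<DBMappings.relationships> GetRelationshipsByObjectId(int objId)
        {
            var memberships = _context.Repository<DBMappings.relationship_memberships>();
            var results = from r in _context.Repository<DBMappings.relationships>()
                          where memberships.Any(m => m.rel_id == r.rel_id && m.obj_id == objId)
                          select r;
            return results.ToList();
        }

    If the original join still ignores obj_id, it is also worth checking that Repository<T>() composes a single IQueryable rather than re-enumerating a cached list.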

  • Using an SVN post-commit hook to update only files that have been committed

    - by fondie
    I am using an SVN repository for my web development work. I have a development site set up which holds a checkout of the repository. I have set up an SVN post-commit hook so that whenever a commit is made to the repository the development site is updated:

        cd /home/www/dev_ssl
        /usr/bin/svn up

    This works fine, but due to the size of the repository the updates take a long time (approx. 3 minutes), which is rather frustrating when making regular commits. What I'd like is to change the post-commit hook to only update those files/directories that have been committed, but I don't know how to go about doing this. Updating the "lowest common directory" would probably be the best solution, e.g. if committing the following files:

        /branches/feature_x/images/logo.jpg
        /branches/feature_x/css/screen.css

    it would update the directory:

        /branches/feature_x/

    Can anyone help me create a solution that achieves this please? Thanks! Update: The repository and development site are located on the same server so network issues shouldn't be involved. CPU usage is very low, and I/O should be OK (it's running on a hi-spec dedicated server). The development site is approx. 7.5GB in size and contains approx. 600,000 items; this is mainly due to having multiple branches/tags.
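
    A sketch of one way the hook could target only the committed paths, using svnlook to list them (this assumes the checkout at /home/www/dev_ssl maps to the repository root and that GNU xargs is available):

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        cd /home/www/dev_ssl || exit 1
        # svnlook prints "U   path", "A   path", etc.; strip the status column
        /usr/bin/svnlook changed -r "$REV" "$REPOS" | awk '{print $2}' | \
            xargs -r -d '\n' /usr/bin/svn up --

    Updating the lowest common directory instead would just mean reducing that path list to its shared prefix before calling svn up.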

  • Do git tags get pushed as well?

    - by vfclists
    Since I created my repository it appears that the tags I have been creating are not pushed to the repository. When I do git tag on the local directory all the tags are present, but when I logon to the remote repository and do a git tag, only the first few show up. What could the problem be?
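
    For reference, git push does not transfer tags unless asked to; assuming the remote is called origin, either of these is the usual fix:

        git push origin --tags     # push all local tags
        git push origin v1.4       # push a single tag (illustrative name)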

  • Partial push in mercurial

    - by Chris089
    I want to move a part, i.e. one subdirectory of an existing, private mercurial repository to a new, public repository on bitbucket. Is it possible to do this including the changesets or do I have to manually copy the directory to the new repository and commit it there (and lose the version history on the way)?
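
    The convert extension can do this while keeping history, using a filemap that includes only the one directory and hoists it to the new root (directory name is illustrative; enable the extension with "convert =" under [extensions]):

        $ cat filemap.txt
        include subdir
        rename subdir .
        $ hg convert --filemap filemap.txt private-repo new-public-repo

    The resulting new-public-repo contains only the changesets that touched subdir and can then be pushed to Bitbucket.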

  • Can't get correct package from Nexus? error in "mvn help:effective-settings"

    - by larry cai
    I use the Nexus open-source version and Maven 2.2.1. When I type "mvn help:effective-settings", I got the error below:

        [INFO] Scanning for projects...
        [INFO] Searching repository for plugin with prefix: 'help'.
        [INFO] ------------------------------------------------------------------------
        [ERROR] BUILD ERROR
        [INFO] ------------------------------------------------------------------------
        [INFO] Error building POM (may not be this project's POM).
        Project ID: org.apache.maven.plugins:maven-help-plugin
        Reason: Error getting POM for 'org.apache.maven.plugins:maven-help-plugin' from the repository: Failed to resolve artifact, possibly due to a repository list that is not appropriately equipped for this artifact's metadata.
          org.apache.maven.plugins:maven-help-plugin:pom:2.2-SNAPSHOT
        from the specified remote repositories:
          Nexus (http://192.168.56.191:8081/nexus/content/groups/public)
        for project org.apache.maven.plugins:maven-help-plugin

    When I check the local repository under ~.m2\repository\org\apache\maven\plugins\maven-help-plugin, it has a file maven-metadata-central.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <metadata>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-help-plugin</artifactId>
          <versioning>
            <latest>2.2-SNAPSHOT</latest>
            <release>2.1.1</release>
            <versions>
              <version>2.0</version>
              <version>2.0.1</version>
              <version>2.0.2</version>
              <version>2.1</version>
              <version>2.1.1</version>
              <version>2.2-SNAPSHOT</version>
            </versions>
            <lastUpdated>20100519065440</lastUpdated>
          </versioning>
        </metadata>

    And I can't find any jar files under that directory. What's wrong with the Nexus server? I can't easily find support information from Nexus. Any hints?
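
    One way to check whether the problem is only the 2.2-SNAPSHOT entry in that metadata (plugin prefix resolution picking a "latest" version whose POM the public group cannot supply) is to invoke an explicit released version:

        mvn org.apache.maven.plugins:maven-help-plugin:2.1.1:effective-settings

    If that works, pinning the plugin version in pluginManagement, or deleting the stale maven-metadata-*.xml files from the local repository so Nexus re-proxies them, are the usual follow-ups.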

  • Git-Based Source Control in the Enterprise: Suggested Tools and Practices?

    - by Bob Murphy
    I use git for personal projects and think it's great. It's fast, flexible, powerful, and works great for remote development. But now it's mandated at work and, frankly, we're having problems. Out of the box, git doesn't seem to work well for centralized development in a large (20+ developer) organization with developers of varying abilities and levels of git sophistication - especially compared with other source-control systems like Perforce or Subversion, which are aimed at that kind of environment. (Yes, I know, Linus never intended it for that.) But - for political reasons - we're stuck with git, even if it sucks for what we're trying to do with it. Here are some of the things we're seeing:

        - The GUI tools aren't mature
        - Using the command line tools, it's far too easy to screw up a merge and obliterate someone else's changes
        - It doesn't offer per-user repository permissions beyond global read-only or read-write privileges
        - If you have permission to ANY part of a repository, you can do that same thing to EVERY part of the repository, so you can't do something like make a small-group tracking branch on the central server that other people can't mess with
        - Workflows other than "anything goes" or "benevolent dictator" are hard to encourage, let alone enforce
        - It's not clear whether it's better to use a single big repository (which lets everybody mess with everything) or lots of per-component repositories (which make for headaches trying to synchronize versions)
        - With multiple repositories, it's also not clear how to replicate all the sources someone else has by pulling from the central repository, or to do something like get everything as of 4:30 yesterday afternoon

    However, I've heard that people are using git successfully in large development organizations. If you're in that situation - or if you generally have tools, tips and tricks for making it easier and more productive to use git in a large organization where some folks are not command line fans - I'd love to hear what you have to suggest. BTW, I've asked a version of this question already on LinkedIn, and got no real answers but lots of "gosh, I'd love to know that too!"
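
    On the per-user and per-branch permissions point, the usual workaround is to put an authorization layer such as gitolite (or gitosis) in front of the central repository; a gitolite-style conf sketch with invented users and branch names:

        @devs        = alice bob carol

        repo bigproject
            RW+ master      = alice      # only alice may rewrite or push master
            RW  feature/    = @devs      # devs may push feature/* branches
            R               = @all       # everyone may clone and fetch

    That still isn't per-directory, but it gives per-branch write control and per-repository read control.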

  • Reusing MSBuild targets for different build types

    - by Zbigniew Kawalec
    I have got a problem with reusing the same MSBuild targets for different build types on TFS. Let me describe the situation. I have two build types (CI - for continuous integration, and RC - for release candidate), so I have two build types defined in TFS. Their *.proj files are under:

        $/Repository/TeamBuildTypes/CI
        $/Repository/TeamBuildTypes/RC

    Also, I have got some common targets, like ChangeVersion.targets, Deploy.targets, etc., and I import them in the main *.proj file. Unfortunately, I have to keep two copies of them, one in each build type. I've been struggling to have only one copy of the common targets somewhere, but I gave up. I can't do it, because when the build starts on a build agent, the build files are downloaded from $/Repository/TeamBuildTypes/CI only. How can I make the build agent / TFS / whatever also download $/Repository/TeamBuildTypes/Common, for example?
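
    For what it's worth, once the Common folder is actually retrieved on the build agent (it has to be added to the build definition's workspace mapping alongside the build type folder), the import itself is just MSBuild (paths are illustrative):

        <!-- TFSBuild.proj -->
        <Import Project="$(MSBuildProjectDirectory)\..\Common\ChangeVersion.targets" />
        <Import Project="$(MSBuildProjectDirectory)\..\Common\Deploy.targets" />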

  • Can I use a static cache helper method in a .NET MVC controller?

    - by Euston
    I realise there have been a few posts regarding where to add a cache check/update and the separation of concerns between the controller, the model and the caching code. There are two great examples that I have tried to work with, but being new to MVC I wonder which one is the cleanest and suits the MVC methodology best? I know you need to take into account DI and unit testing.

    Example 1 (helper method with delegate), in the controller:

        var myObject = CacheDataHelper.Get(thisID, () => WebServiceServiceWrapper.GetMyObjectBythisID(thisID));

    Example 2 (check for cache in the model class), in the controller:

        var myObject = WebServiceServiceWrapper.GetMyObjectBythisID(thisID);

    then in the model class:

        if (!CacheDataHelper.Get(cachekey, out myObject))
        {
            // do some repository processing

            // add object to cache
            CacheDataHelper.Add(myObject, cachekey);
        }

    Both use a static cache helper class, but the first example uses a method signature with a delegate method passed in that has the name of the repository method being called. If the data is not in cache the method is called and the cache helper class handles the adding or updating of the current cache. In the second example the cache check is part of the repository method, with an extra line to call the cache helper Add method to update the current cache. Due to my lack of experience and knowledge I am not sure which one is best suited to MVC. I like the idea of calling the cache helper with the delegate method name in order to remove any cache code from the repository, but I am not sure if using the static method in the controller is ideal? The second example deals with the above, but now there is no separation between the caching check and the repository lookup. Perhaps that is not a problem as you know it requires caching anyway?
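
    For concreteness, the delegate-style helper from Example 1 is often written roughly like this (a sketch only; the real CacheDataHelper isn't shown in the question, and the expiry policy here is invented):

        using System;
        using System.Web;
        using System.Web.Caching;

        public static class CacheDataHelper
        {
            public static T Get<T>(object key, Func<T> fetch) where T : class
            {
                string cacheKey = typeof(T).FullName + ":" + key;
                var cached = HttpRuntime.Cache[cacheKey] as T;
                if (cached != null)
                    return cached;

                var fresh = fetch();                        // hit the repository/web service once
                if (fresh != null)
                    HttpRuntime.Cache.Insert(cacheKey, fresh, null,
                        DateTime.UtcNow.AddMinutes(10),     // invented absolute expiry
                        Cache.NoSlidingExpiration);
                return fresh;
            }
        }

    Hiding that static call behind an ICacheProvider interface keeps it mockable if the static dependency becomes a testing concern.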

  • Rhino Mocks Sample How to Mock Property

    - by guazz
    How can I test that "TestProperty" was set to a value when ForgotMyPassword(...) was called?

        public interface IUserRepository
        {
            User GetUserById(int n);
        }

        public interface INotificationSender
        {
            void Send(string name);
            int TestProperty { get; set; }
        }

        public class User
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class LoginController
        {
            private readonly IUserRepository repository;
            private readonly INotificationSender sender;

            public LoginController(IUserRepository repository, INotificationSender sender)
            {
                this.repository = repository;
                this.sender = sender;
            }

            public void ForgotMyPassword(int userId)
            {
                User user = repository.GetUserById(userId);
                sender.Send("Changed password for " + user.Name);
                sender.TestProperty = 1;
            }
        }

        // Sample test to verify that send was called
        [Test]
        public void WhenUserForgetPasswordWillSendNotification_WithConstraints()
        {
            var userRepository = MockRepository.GenerateStub<IUserRepository>();
            var notificationSender = MockRepository.GenerateStub<INotificationSender>();
            userRepository.Stub(x => x.GetUserById(5)).Return(new User { Id = 5, Name = "ayende" });

            new LoginController(userRepository, notificationSender).ForgotMyPassword(5);

            notificationSender.AssertWasCalled(x => x.Send(null),
                options => options.Constraints(Rhino.Mocks.Constraints.Text.StartsWith("Changed")));
        }
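
    Because the doubles come from GenerateStub, the property already has ordinary property behaviour, so one hedged way to verify the set is simply to assert on the value afterwards (NUnit-style assertion assumed):

        [Test]
        public void WhenUserForgetsPasswordTestPropertyIsSet()
        {
            var userRepository = MockRepository.GenerateStub<IUserRepository>();
            var notificationSender = MockRepository.GenerateStub<INotificationSender>();
            userRepository.Stub(x => x.GetUserById(5)).Return(new User { Id = 5, Name = "ayende" });

            new LoginController(userRepository, notificationSender).ForgotMyPassword(5);

            Assert.AreEqual(1, notificationSender.TestProperty);
        }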

  • Setting up a Git remote with a truncated history

    - by drg
    I am in the midst of doing some non-standard, probably doomed, experiments on a git repository. The goal is to create a remote repository with a truncated history which can still share commits with an internal repository which has a full history. I've had some success using a graft to connect the public history with the private history - when I push from my internal repository, only the post-graft contents are included. So my main question is: what is the simplest way of taking a commit, eliminating its parent and writing a graft in place of the parent? A more general question: is what I'm trying to do going to cause me pain in the long run, do you know if there's a better way?
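
    For reference, the graft mechanics being asked about look roughly like this, with a placeholder SHA1 (newer Gits can use git replace for the same effect):

        # declare a commit parentless: a grafts line listing the commit and no parents
        echo 3b1c9a7f0d2e4c6b8a1f5e9d7c3b2a4f6e8d0c1b >> .git/info/grafts

        # optionally rewrite history so the truncation becomes permanent
        git filter-branch --tag-name-filter cat -- --all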

  • Why are Maven Goals not added by IntelliJ?

    - by Jasper
    I have produced a new Maven project from the gae-archetype-gwt archetype from within IntelliJ, and everything is generated well, but the gae:... goals won't show up in the Maven view, and if I try to update repository indices, apart from the local repository I get errors only. When I run gae:unpack from the terminal, everything works fine. I'm running Ubuntu 10.04 Beta 1 and am using OpenJDK, for which IntelliJ is also configured. UPDATE: works fine with Ubuntu 10.04 final + the JDK from the partner repository.

  • Nested svn repositories

    - by singles
    I got a "Project A" in repository. But in that project I'm using a library, which is hosted on Google Code. There is my question: is there any way, to have that library files "hooked" to Google Code SVN, and simultaneously my project in my repo (it's parent to that library), so I can commit library files into my repository when I decide, that outer project revision is ok? I've tried to do checkout in the library folder, files were downloaded from Google's Code repository. But I that case wasn't able to add them to my repository - they weren't visible in "Add" window.

  • Is it possible to have Capistrano do a checkout over a reverse SSH tunnel?

    - by James A. Rosen
    I am developing an application that resides on a public host but whose source I must keep in a Git repository behind a corporate firewall. I'm getting very tired of the slowness of deploying via scp (copying the whole repository and shipping it over SSH on each deploy) and would like to have the remote host simply do a git pull to update. The problem is that the firewall prohibits incoming SSH connections. Would it be possible for me to set up an SSH tunnel from my computer to the deployment computer and use my repository as the source for the git pull? After all, git is distributed, so my copy is just as valid a repository as the central one. If this is possible, what would the tunnel command and the Capistrano configuration be? I think the tunnel will look something like ssh -R something:deployserver.com:something [email protected]
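
    A sketch of what the two halves might look like, with invented hosts, ports and paths (the -R tunnel makes a port on the deploy server forward back to the internal git server):

        # run from inside the firewall; keeps port 2222 on the deploy host pointed at the internal git box
        ssh -N -R 2222:git.internal.example:22 deployuser@deployserver.com

        # config/deploy.rb (Capistrano 2 style)
        set :scm,        :git
        set :repository, "ssh://git@localhost:2222/var/git/myapp.git"
        set :deploy_via, :remote_cache

    The tunnel has to stay up for the duration of the deploy, and the deploy server needs an SSH key the internal git server will accept.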

  • How can I generate a git diff of what's changed since the last time I pulled?

    - by Teflon Ted
    I'd like to script, preferably in rake, the following actions into a single command:

        1. Get the version of my local git repository.
        2. Git pull the latest code.
        3. Git diff from the version I extracted in step #1 to what is now in my local repository.

    In other words, I want to get the latest code from the central repository and immediately generate a diff of what's changed since the last time I pulled.
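
    A shell sketch of the same three steps; git also records the pre-merge position as ORIG_HEAD, which covers step 1 for free:

        old=$(git rev-parse HEAD)     # step 1: remember where we were
        git pull                      # step 2: fetch and merge the latest code
        git diff "$old"..HEAD         # step 3: everything that arrived with the pull

        # or, relying on ORIG_HEAD being set by the merge:
        git pull && git diff ORIG_HEAD..HEAD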

  • What database table structure should I use for versions, codebases, deployables?

    - by Zac Thompson
    I'm having doubts about my table structure, and I wonder if there is a better approach. I've got a little database for version control repositories (e.g. SVN), the packages (e.g. Linux RPMs) built therefrom, and the versions (e.g. 1.2.3-4) thereof. A given repository might produce no packages, or several, but if there are more than one for a given repository then a particular version for that repository will indicate a single "tag" of the codebase. A particular version "string" might be used to tag a version of the source code in more than one repository, but there may be no relationship between "1.0" for two different repos. So if packages P and Q both come from repo R, then P 1.0 and Q 1.0 are both built from the 1.0 tag of repo R. But if package X comes from repo Y, then X 1.0 has no relationship to P 1.0. In my (simplified) model, I have the following tables (the x_id columns are auto-incrementing surrogate keys; you can pretend I'm using a different primary key if you wish, it's not really important):

        repository
          - repository_id
          - repository_name (unique)
          ...

        version
          - version_id
          - version_string (unique for a particular repository)
          - repository_id
          ...

        package
          - package_id
          - package_name (unique)
          - repository_id
          ...

    This makes it easy for me to see, for example, what are valid versions of a given package: I can join with the version table using the repository_id. However, suppose I would like to add some information to this database, e.g., to indicate which package versions have been approved for release. I certainly need a new table:

        package_version
          - version_id
          - package_id
          - package_version_released
          ...

    Again, the nature of the keys that I use are not really important to my problem, and you can imagine that the data column is "promotion_level" or something if that helps. My doubts arise when I realize that there's really a very close relationship between the version_id and the package_id in my new table ... they must share the same repository_id. Only a small subset of package/version combinations are valid. So I should have some kind of constraint on those columns, enforcing that ... ... I don't know, it just feels off, somehow. Like I'm including somehow more information than I really need? I don't know how to explain my hesitance here. I can't figure out which (if any) normal form I'm violating, but I also can't find an example of a schema with this sort of structure ... not being a DBA by profession I'm not sure where to look. So I'm asking: am I just being overly sensitive?
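
    The "they must share the same repository_id" rule can be made declarative by carrying repository_id into the link table and pointing composite foreign keys at (id, repository_id) pairs; a Postgres-flavoured sketch with invented constraint names:

        ALTER TABLE version ADD CONSTRAINT uq_version_repo UNIQUE (version_id, repository_id);
        ALTER TABLE package ADD CONSTRAINT uq_package_repo UNIQUE (package_id, repository_id);

        CREATE TABLE package_version (
            package_id                INTEGER NOT NULL,
            version_id                INTEGER NOT NULL,
            repository_id             INTEGER NOT NULL,
            package_version_released  BOOLEAN NOT NULL DEFAULT FALSE,
            PRIMARY KEY (package_id, version_id),
            FOREIGN KEY (version_id, repository_id) REFERENCES version (version_id, repository_id),
            FOREIGN KEY (package_id, repository_id) REFERENCES package (package_id, repository_id)
        );

    The extra repository_id column is redundant in the storage sense, but it is what lets the database itself reject a package/version pair from two different repositories.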

  • Why should I use core.autocrlf in Git

    - by Rich
    I have a Git repository that is accessed from both Windows and OS X, and that I know already contains some files with CRLF line-endings. As far as I can tell, there are two ways to deal with this:

        1. Set core.autocrlf to false everywhere.
        2. Follow the instructions here (echoed on GitHub's help pages) to convert the repository to contain only LF line-endings, and thereafter set core.autocrlf to true on Windows and input on OS X. The problem with doing this is that if I have any binary files in the repository that: a) are not correctly marked as binary in gitattributes, and b) happen to contain both CRLFs and LFs, they will be corrupted. It is possible my repository contains such files.

    So why shouldn't I just turn off Git's line-ending conversion? There are a lot of vague warnings on the web about having core.autocrlf switched off causing problems, but very few specific ones; the only ones that I've found so far are that kdiff3 cannot handle CRLF endings (not a problem for me), and that some text editors have line-ending issues (also not a problem for me). The repository is internal to my company, and so I don't need to worry about sharing it with people with different autocrlf settings or line-ending requirements. Are there any other problems with just leaving line-endings as-is that I am unaware of?
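
    Whichever way the autocrlf decision goes, the binary-corruption worry can be contained with an explicit .gitattributes; a sketch (patterns are illustrative, and text=auto needs a reasonably recent Git):

        *        text=auto
        *.png    binary
        *.jar    binary
        *.sln    text eol=crlf

    Files matched by binary are never line-ending-converted regardless of anyone's core.autocrlf setting, which removes the main corruption risk described above.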
