Search Results

Search found 4805 results on 193 pages for 'repository'.


  • How to see a branch created in master

    - by richard
    Hi, I created a branch in my master repository (192.168.1.2). On my other computer I ran '$ git pull --rebase' and saw:

        Unpacking objects: 100% (16/16), done.
        From git+ssh://[email protected]/media/LINUXDATA/mozilla-1.9.1
           62d004e..b291703  master -> origin/master
         * [new branch]      improv -> origin/improv

    But when I run 'git branch' in my local repository I see only one branch, and checking out the new branch fails:

        $ git branch
        * master
        $ git checkout improv
        error: pathspec 'improv' did not match any file(s) known to git. Did you forget to 'git add'?
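
    One way to get a local branch for the new remote-tracking branch (a minimal sketch; branch and remote names are taken from the output above):

        $ git fetch origin
        $ git checkout -b improv origin/improv   # create a local 'improv' branch tracking origin/improv
        # on newer Git versions this shorthand does the same thing:
        $ git checkout --track origin/improv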


  • Subversion terminology. Difference between projects, modules and root directories

    - by aioobe
    I'm setting up a repository for me and some colleagues. I have a Subversion repository at hand, and all the required rights. The usual directory skeleton has been set up for me (branches, tags and trunk). Now I'm about to create a directory for me and my colleagues to put our files in, and I'm quite sure the right place for it is in trunk. Here and there in tutorials I see terms like "modules" and "projects", such as in "Checking Out a Project": svn checkout http://host_name/svn_dir/repository_name/project/trunk proj Is proj in the above line just some glorified directory in trunk? Should I do anything other than a checkout of trunk, mkdir and commit when creating a directory for me and my colleagues? What's the difference between a project, a directory in trunk and a module?
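
    In most setups a "project" is simply a top-level directory in the repository (often carrying its own trunk/branches/tags), so a checkout, mkdir and commit is all that is needed. A minimal sketch, reusing the host and repository names from the question with an assumed directory name:

        $ svn checkout http://host_name/svn_dir/repository_name/trunk trunk-wc
        $ cd trunk-wc
        $ svn mkdir our-project
        $ svn commit -m "Add a directory for our team's files"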


  • OO design for business logic

    - by hotyi
    I have one SellOperation object and two BuyOperation objects, and I want to implement behaviour where the sell operation discharges the two buy operations: sellOperation.Discharge(oneBuyOperation); sellOperation.Discharge(twoBuyOperation); My question is whether I should call the repository inside the Discharge method, or whether it is better to call the repository's save method outside Discharge, like: opRepository.Save(sellOperation); Can anyone give me some advice on how you would implement this scenario? Using a service class, or is there a better way?
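
    One common arrangement is to keep Discharge as pure domain logic and let a small service coordinate persistence outside it. A minimal C# sketch; every type and member name below is an assumption for illustration, not from the original post:

        public interface IOperationRepository
        {
            void Save(SellOperation operation);
        }

        public class DischargeService
        {
            private readonly IOperationRepository _repository;

            public DischargeService(IOperationRepository repository)
            {
                _repository = repository;
            }

            public void Discharge(SellOperation sell, params BuyOperation[] buys)
            {
                // Domain behaviour stays on the entity; no repository calls inside Discharge.
                foreach (var buy in buys)
                    sell.Discharge(buy);

                // Persistence is coordinated here, after the domain work is done.
                _repository.Save(sell);
            }
        }

    This keeps the entity unaware of persistence, which is usually easier to test; the trade-off is one more service class to maintain.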


  • Integrating Fisheye with Jenkins

    - by ramaperumal
    Jenkins is up and running in my organization. In the job configuration, under Repository browser, there is a Fisheye option. If I select Fisheye as the repository browser it asks for the Fisheye URL (such as http://fisheye6.cenqua.com/browse/ant/). My requirement is to produce reports from SVN: change reports, statistics on how many check-ins happen in each code base, how many cuts have been taken from each branch, etc. We need to achieve all of this in Jenkins itself. To do these things we would need to set up a separate Fisheye server, and if that is the case, why does Jenkins offer the Fisheye option at all? We could perform all these activities in Fisheye instead of Jenkins. Please suggest. Thanks, Rama


  • How to tag and go to a tag in hg

    - by michael
    Hi, according to this page, 'hg tag 1.0' gives a name to the current state of my hg repository: http://wiki.pylonshq.com/display/pylonscookbook/Mercurial+for+Subversion+Users How can I switch my repository back to that tag later?

        $ hg tag myTag1.0
        $ hg commit -m "a message"

    How do I go back to that tag? And if I make a new 'hg commit' there, what will happen? Will it go onto a branch at myTag1.0, or will it stay on the default branch? Thank you.
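
    A minimal sketch of going back to a tag and what happens if you commit there (tag name taken from the question):

        $ hg update -r myTag1.0     # switch the working copy to the tagged revision
        $ hg commit -m "..."        # a commit here creates a new head on the same branch
                                    # (usually 'default'), not a branch named myTag1.0;
                                    # you can merge the heads later if that is what you want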


  • SVN Path Based Authorization: Granting listing access but not read access

    - by Jim
    Hello, we're using path-based authorization for Apache SVN. It all works fine, except that when users try to check out code they do have access to, their SVN clients sometimes get confused if they don't have at least read access to the parent directories, all the way up to the root. Because SVN path-based authorization is applied recursively, we don't want to give all users read access to the root, as that would give them access to all source code in the repository. It would, however, be acceptable if users could get directory listings (just not the actual contents of files) for the entire repository; this would prevent the SVN clients from getting confused. Does anyone know how to grant permission to get directory listings without granting permission to read the actual contents of the files? Thanks!
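
    For reference, this is the shape of authz configuration the question describes (repository paths and group names here are assumptions, not from the original). Access granted on a path is inherited by everything beneath it, which is why read on / would expose the whole tree:

        [/]
        * =

        [/projects/team-a]
        @team-a = rw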


  • Maintaining Project with Git

    - by gkrdvl
    Hi all, I have two projects that are about 80% the same. The main differences are language and business model: one targets a larger audience, is in English and has a $9/month subscription model; the other uses the local language with a freemium model. Sometimes when I add a new feature I want it in both projects, but sometimes a feature is just for the local project. My question is: how do I maintain these two projects with git? Maintain a separate git repository for each project, maintain a single git repository with two main branches, or is there another approach you would suggest?
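
    One layout that fits this description is a single repository with a shared branch for the common 80% and one branch per product; shared work is committed on the shared branch and merged outward. A minimal sketch with assumed branch names:

        $ git branch core                    # shared features live here
        $ git branch product-en core         # English / subscription product
        $ git branch product-local core      # local-language / freemium product

        # feature for both products: commit on core, then merge out
        $ git checkout core
        # ...commit the feature...
        $ git checkout product-en    && git merge core
        $ git checkout product-local && git merge core

        # feature only for the local product: commit on its branch directly
        $ git checkout product-local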


  • Subversion error, but I don't know what it means

    - by DaveDev
    I'm trying to do an Update on my solution but I'm getting the following Subversion error:

        SharpSvn.SvnFileSystemException: Working copy path 'Path_to_image/logo LoRes.jpg' does not exist in repository

    but I can see that the image is in the repository. The stack trace is as follows:

        at SharpSvn.SvnClientArgs.HandleResult(SvnClientContext client, SvnException error)
        at SharpSvn.SvnClientArgs.HandleResult(SvnClientContext client, svn_error_t* error)
        at SharpSvn.SvnClient.Update(ICollection`1 paths, SvnUpdateArgs args, SvnUpdateResult& result)
        at SharpSvn.SvnClient.Update(String path, SvnUpdateArgs args, SvnUpdateResult& result)
        at Ankh.Commands.SolutionUpdateCommand.UpdateRunner.Work(Object sender, ProgressWorkerArgs e)
        at Ankh.ProgressRunnerService.ProgressRunner.Run(Object arg)

    Is there something else that could be wrong?


  • CMIS explorer webapp

    - by Nicolas Raoul
    CMIS is a recently approved standard for accessing ECM repositories. My idea is to create a repository explorer using CMIS, in the form of an open source Java/JEE web application. The main interest would probably be for integrators, who could use it as a framework on which to quickly build repository-access intranet/extranet applications. Of course, if such an open source project already exists, I would rather contribute to it than start a competing effort. So, does such an application/framework already exist, as open source? The only one I have found so far is chemistry-opencmis-test-browser, which is intended for tests and seems really inconvenient to extend for business use (no MVC, no IoC).


  • Maintaining both free and pro versions of an application

    - by Immortal
    I want to create a PRO version of my application for Android and was wondering how to structure my repository. For now I have a trunk and feature branches. I'd like to put the pro version in another branch, but maybe there is a better way? For example, maybe I should create two branches, one for the free version and one for the pro version? The pro version will have additional features and will be ad-free, so e.g. I don't want to include the AdMob libraries in it. Do you have any experience or suggestions as to the best way to structure the repository in this case?


  • WCF Multiple Services

    - by David
    Hi, I'm brand spanking new to WCF and I'm trying to understand how to correctly expose my BLL through it. I created my first Resource.svc and IResource.cs:

    Resource.svc

        [ServiceBehavior]
        public class Resources : IResources
        {
            #region IResources Members

            public List<Model.Resource> GetAll()
            {
                return Repository.Inventory.Resource.GetAll(true);
            }

            public List<Model.Resource> GetAllEnabled()
            {
                return Repository.Inventory.Resource.GetAllEnabled(true);
            }

            #endregion
        }

    IResource.cs

        [ServiceContract]
        public interface IResources
        {
            [OperationContract]
            List<Model.Resource> GetAll();

            [OperationContract]
            List<Model.Resource> GetAllEnabled();
        }

    This all works; my Windows app can talk to the service and all is great. Now I need to access some more information, so I created another .svc file called Project.svc with IProject.cs, which contains the same thing as Resource except that the type is Project. But this means I now have another web service. Surely this is not right!?
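
    One direction to explore (a sketch only, not necessarily the recommended WCF layout) is a single service contract whose operations cover several entity types, so each new entity does not require a new .svc endpoint. Model.Project is assumed from the question; the other names are made up for illustration:

        using System.Collections.Generic;
        using System.ServiceModel;

        [ServiceContract]
        public interface IInventoryService
        {
            [OperationContract]
            List<Model.Resource> GetAllResources();

            [OperationContract]
            List<Model.Resource> GetAllEnabledResources();

            [OperationContract]
            List<Model.Project> GetAllProjects();
        }

    Whether to group operations this way or keep one service per aggregate is a design trade-off: a fatter contract is simpler to host but harder to version than many small endpoints.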


  • Can you use Github App with Beanstalk?

    - by mikemick
    Being new to Git, I wanted to use a Windows-based GUI and preferred the GitHub app. However, I would like to use it with a Beanstalkapp account. I'm pretty sure this is possible, but I can't figure it out. Inside the GitHub app I navigate to my repository. When I choose "Tools Settings...", I place the Git clone URL for the repository provided by Beanstalk into the "Primary Remote (origin)" field. Now when I click "Publish" (which says "Click to publish this branch to server" when I hover over it) it changes to "Publishing...". After a few seconds I get this error: server failure - The remote server disconnected. Try again later, or if this persists, contact [email protected] I am pretty sure I set the SSH keys up properly (I've never done this before); I added the key to both Beanstalkapp and my GitHub web account.
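
    If the GUI keeps failing, the same setup can be sanity-checked from a command line (a sketch; replace the placeholders with the SSH host and clone URL Beanstalk shows for the repository):

        $ ssh -T <beanstalk-ssh-host>                       # confirm the SSH key is accepted
        $ git remote set-url origin <beanstalk-clone-url>   # point origin at Beanstalk
        $ git push origin master                            # command-line equivalent of the app's "Publish"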


  • Embedding Mercurial revision information in Visual Studio C# projects automatically

    - by Mark Booth
    Original Problem

    In building our projects, I want the Mercurial id of each repository to be embedded within the product(s) of that repository (the library, application or test application). I find it makes it so much easier to debug an application being run by customers 8 timezones away if you know precisely what went into building the particular version of the application they are using. As such, every project (application or library) in our system implements a way of getting at the associated revision information. I also find it very useful to be able to see whether an application has been compiled with clean (un-modified) changesets from the repository. 'hg id' usefully appends a + to the changeset id when there are uncommitted changes in a repository, so this allows us to easily see whether people are running a clean or a modified version of the code. My current solution is detailed below and fulfills the basic requirements, but there are a number of problems with it.

    Current Solution

    At the moment, to each and every Visual Studio solution, I add the following "Pre-build event command line" commands:

        cd $(ProjectDir)
        HgID

    I also add an HgID.bat file to the project directory:

        @echo off
        type HgId.pre > HgId.cs
        For /F "delims=" %%a in ('hg id') Do <nul >>HgId.cs set /p = @"%%a"
        echo ; >> HgId.cs
        echo } >> HgId.cs
        echo } >> HgId.cs

    along with an HgId.pre file, which is defined as:

        namespace My.Namespace {
            /// <summary> Auto generated Mercurial ID class. </summary>
            internal class HgID {
                /// <summary> Mercurial version ID [+ is modified] [Named branch]</summary>
                public const string Version =

    When I build my application, the pre-build event is triggered on all libraries, creating a new HgId.cs file (which is not kept under revision control) and causing the library to be re-compiled with the new 'hg id' string in 'Version'.

    Problems with the current solution

    The main problem is that since HgId.cs is re-created at each pre-build, every time we need to compile anything, all projects in the current solution are re-compiled. Since we want to be able to easily debug into our libraries, we usually keep many libraries referenced in our main application solution. This can result in build times which are significantly longer than I would like. Ideally I would like the libraries to compile only if the contents of the HgId.cs file have actually changed, as opposed to having been re-created with exactly the same contents.

    The second problem with this method is its dependence on specific behaviour of the Windows shell. I've already had to modify the batch file several times, since the original worked under XP but not Vista, the next version worked under Vista but not XP, and finally I managed to make it work with both. Whether it will work with Windows 7, however, is anyone's guess, and as time goes on I see it as more likely that contractors will expect to be able to build our apps on their Windows 7 boxen.

    Finally, I have an aesthetic problem with this solution: batch files and bodged-together template files feel like the wrong way to do this.

    My actual questions

    How would you solve, or how are you solving, the problem I'm trying to solve? What better options are out there than what I'm currently doing?

    Rejected solutions to these problems

    Before I implemented the current solution, I looked at Mercurial's Keyword extension, since it seemed like the obvious solution. However, the more I looked at it and read people's opinions, the more I came to the conclusion that it wasn't the right thing to do. I also remember the problems that keyword substitution has caused me in projects at previous companies (just the thought of ever having to use Source Safe again fills me with a feeling of dread *8'). Also, I don't particularly want to have to enable Mercurial extensions to get the build to complete. I want the solution to be self-contained, so that it isn't easy for the application to be accidentally compiled without the embedded version information just because an extension isn't enabled or the right helper software hasn't been installed. I also thought of writing this in a better scripting language, one where I would only write the HgId.cs file if the content had actually changed, but all of the options I could think of would require my co-workers, contractors and possibly customers to install software they might not otherwise want (for example Cygwin). Any other options people can think of would be appreciated.

    Update

    Partial solution: having played around with it for a while, I've managed to get the HgId.bat file to overwrite the HgId.cs file only if it changes:

        @echo off
        type HgId.pre > HgId.cst
        For /F "delims=" %%a in ('hg id') Do <nul >>HgId.cst set /p = @"%%a"
        echo ; >> HgId.cst
        echo } >> HgId.cst
        echo } >> HgId.cst
        fc HgId.cs HgId.cst >NUL
        if %errorlevel%==0 goto :ok
        copy HgId.cst HgId.cs
        :ok
        del HgId.cst

    Problems with this solution

    Even though HgId.cs is no longer being re-created every time, Visual Studio still insists on compiling everything every time. I've tried looking for solutions and tried checking "Only build startup projects and dependencies on Run" in Tools | Options | Projects and Solutions | Build and Run, but it makes no difference. The second problem also remains, and now I have no way to test whether it will work with Vista, since that contractor is no longer with us. If anyone can test this batch file on a Windows 7 and/or Vista box, I would appreciate hearing how it went. Finally, my aesthetic problem with this solution is even stronger than it was before, since the batch file is more complex and there is now more to go wrong. If you can think of any better solution, I would love to hear about it.
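
    One direction that sidesteps the Windows-shell quirks (a sketch only, not the author's adopted solution) is a tiny C# helper, invoked from the pre-build event in place of HgID.bat, that rebuilds HgId.cs from HgId.pre and 'hg id' but only touches the file when the content actually changes:

        using System.Diagnostics;
        using System.IO;

        // Hypothetical WriteHgId.exe: run from the project directory by the pre-build event.
        class WriteHgId
        {
            static void Main()
            {
                var psi = new ProcessStartInfo("hg", "id")
                {
                    RedirectStandardOutput = true,
                    UseShellExecute = false
                };

                string id;
                using (var p = Process.Start(psi))
                {
                    id = p.StandardOutput.ReadToEnd().Trim();
                    p.WaitForExit();
                }

                // Same layout the batch file produces: HgId.pre + @"<id>"; plus the closing braces.
                string content = File.ReadAllText("HgId.pre") + "@\"" + id + "\";\n    }\n}\n";

                // Only rewrite the file when the id (or template) has changed,
                // so the project is not needlessly marked dirty.
                if (!File.Exists("HgId.cs") || File.ReadAllText("HgId.cs") != content)
                    File.WriteAllText("HgId.cs", content);
            }
        }

    The same shape works as an MSBuild inline task or a small PowerShell script; the point is that the up-to-date check lives in code rather than in shell-version-dependent batch syntax.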


  • Git: how do you merge with remote repo?

    - by Marco
    Please help me understand how git works. I cloned my remote repository on two different machines and edited the same file on both. I successfully committed and pushed the update from the first machine to the remote repository. I then tried to push the update from the second machine, but got an error:

        ! [rejected] master -> master (non-fast-forward)

    I understand why I received the error. How can I merge my changes into the remote repo? Do I need to pull from the remote repo first?
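
    The usual sequence on the second machine is to pull (fetch and integrate the remote changes) before pushing. A minimal sketch, assuming the remote is named origin and the branch is master:

        $ git pull origin master     # merge the remote changes, resolving conflicts if any
        $ git push origin master     # the push is now a fast-forward on the remote
        # or, to keep a linear history instead of a merge commit:
        $ git pull --rebase origin master && git push origin master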


  • Tortoise SVN revision history

    - by rahul
    I want to know how long TortoiseSVN keeps revision history. Say I deleted a file from the repository through the repo browser a year ago: will I still be able to recover that file? If I can recover it, I also want to know how to permanently delete that earlier copy of the file and its related revision history, so that in future nobody is able to access that file. Is that possible? I have run into problems in my organisation because I made frequent updates and deletions assuming that files were being deleted permanently, and the repository's file system has now bloated. Please suggest how to fix it.
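
    Deleting a file only removes it from later revisions; it stays recoverable from the revision before the delete. A sketch of recovering it (URL, path and revision numbers are placeholders):

        $ svn log -v http://server/repo                    # find the revision that deleted the file, say r1234
        $ svn copy http://server/repo/trunk/some-file@1233 \
              http://server/repo/trunk/some-file -m "Restore file deleted in r1234"

    Permanently purging a file's history is a server-side operation: it generally means dumping the repository (svnadmin dump), filtering out the unwanted paths (svndumpfilter), and loading the result into a fresh repository.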


  • Subversion problem - commit access

    - by Calvin
    Hi everyone, I'm new to setting up Subversion. When I originally made a repository, all my team members could update and commit without problems. There was a problem with it, so we decided to recreate it, but now only I can commit changes to it, and my username/password doesn't work on their computers. I'm sure it's something obvious and silly, but I just don't know enough to know what's causing it. The passwd and svnserve.conf files are the same as in the original repository that worked for everyone. Any ideas? Thanks in advance.
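
    For comparison, the authentication settings in the new repository's conf/svnserve.conf normally look like the sketch below (values are illustrative); it is worth double-checking that these lines are uncommented in the recreated repository and that password-db points at the passwd file you expect:

        [general]
        anon-access = read
        auth-access = write
        password-db = passwd
        realm = My Repository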


  • NHibernate filters don't work with Session.Get

    - by Khash
    I'm trying to implement a soft-deletable repository. Usually this can be done easily with a delete event listener, and to filter out the deleted entities I can add a Where attribute to my class mapping. However, I also need to implement two more methods in the repository for this entity: Restore and Purge. Restore will "undelete" entities and Purge will hard-delete them. This means I can't use the Where attribute, since it blocks soft-deleted entities from any access. I tried using filters instead: I can create a filter and enable or disable it within the session to achieve the same result. But the problem is that filters have no effect on the Session.Get method (they only affect ICriteria-based access). Any ideas as to how to solve this problem? Thanks
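
    One workaround (a sketch; the repository shape is assumed) is to load by id through a criteria query instead of Session.Get for this entity, since criteria queries do go through the enabled filters:

        public T GetById<T>(NHibernate.ISession session, object id) where T : class
        {
            // Unlike Session.Get, a criteria query respects enabled filters.
            return session.CreateCriteria<T>()
                          .Add(NHibernate.Criterion.Restrictions.IdEq(id))
                          .UniqueResult<T>();
        }

    The trade-off is losing Session.Get's first-level-cache short-circuit, so it may be worth keeping this path only for the soft-deletable entities.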


  • How do I branch an individual file in SVN?

    - by Michael Carman
    The Subversion concept of branching appears to be focused on creating an [un]stable fork of the entire repository on which to do development. Is there a mechanism for creating branches of individual files? For a use case, think of a common header (*.h) file that has multiple platform-specific source (*.c) implementations. This type of branch is a permanent one. All of these branches would see ongoing development with occasional cross-branch merging. This is in sharp contrast to unstable development/stable release branches, which generally have a finite lifespan. I do not want to branch the entire repository (cheap or not) as it would create an unreasonable amount of maintenance to continuously merge between the trunk and all the branches. At present I'm using ClearCase, which has a different concept of branching that makes this easy. I've been asked to consider transitioning to SVN, but this paradigm difference is important. I'm much more concerned about being able to easily create alternate versions of individual files than about things like cutting a stable release branch.
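
    In Subversion a branch is just a copy, and copies can be made at any granularity, including a single file, so per-file branches are possible even though tooling support is thinner than ClearCase's. A sketch with placeholder URLs:

        $ svn copy http://server/repo/trunk/common.h \
              http://server/repo/branches/win32/common.h \
              -m "Per-file branch of common.h for the Win32 implementation"

        # later, in a working copy of the branched file, pull over selected trunk changes:
        $ svn merge -r 120:125 http://server/repo/trunk/common.h common.h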


  • Using 'git pull' vs 'git checkout -f' for website deployment

    - by Michelle
    I've found two common approaches to automatically deploying website updates using a bare remote repo. The first requires that the repo is cloned into the document root of the webserver; the post-update hook then does a git pull:

        cd /srv/www/siteA/ || exit
        unset GIT_DIR
        git pull hub master

    The second approach adds a 'detached work tree' to the bare repository. The post-receive hook uses git checkout -f to replicate the repository's HEAD into the work directory, which is the webserver's document root:

        GIT_WORK_TREE=/srv/www/siteA/ git checkout -f

    The first approach has the advantage that changes made in the website's working directory can be committed and pushed back to the bare repo (although files should not be updated on the live server). The second approach has the advantage that the git directory is not within the document root, but that is easily solved with htaccess. Is one method objectively better than the other in terms of best practice? What other advantages and disadvantages am I missing?


  • Is it possible to exclude some files from check-in (TFS)?

    - by Thomas Wanner
    We use configuration files in various projects under source control (TFS), and each developer has to make some adjustments in his local copy to configure his environment. The build process takes care of replacing the config files with the server configuration as part of the deployment, so it doesn't actually matter what is in the repository. However, we would still like to keep some kind of default, non-breaking version of the config files in the repository, so that e.g. people not involved in a particular project won't run into trouble because of local misconfiguration. We tried to solve this by introducing a check-in policy that simply forbids checking in the config files. This works fine, but because we're too lazy to always uncheck those checkboxes in the pending changes window, the question is: is it possible to transparently disable the check-in of particular files without keeping them out of source control (e.g. by locking their current version)?


  • Using C# Type as generic

    - by I Clark
    I'm trying to create a generic list from a specific Type that is retrieved from elsewhere:

        Type listType; // Passed in to the function, could be anything
        var list = _service.GetAll<listType>();

    However, I get a build error:

        The type or namespace name 'listType' could not be found (are you missing a using directive or an assembly reference?)

    Is this even possible, or am I setting foot into C# 4 dynamic territory? As background: I want to automatically load all lists with data from the repository. The code below gets passed a form model whose properties are iterated looking for any IEnumerable<T> (where T inherits from DomainEntity). I want to fill each list with objects of the type the list is made of, pulled from the repository.

        public void LoadLists(object model)
        {
            foreach (var property in model.GetType()
                .GetProperties(BindingFlags.Public | BindingFlags.Instance | BindingFlags.SetProperty))
            {
                if (IsEnumerableOfNssEntities(property.PropertyType))
                {
                    var listType = property.PropertyType.GetGenericArguments()[0];
                    var list = _repository.Query<listType>().ToList();
                    property.SetValue(model, list, null);
                }
            }
        }
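
    Generic type arguments have to be known at compile time, so a runtime Type cannot be used directly as a type argument; the usual workaround is to close the generic method over the runtime type via reflection. A sketch for the loop body above (it assumes Query<T>() is a public method on the repository with no non-generic overload, as the question implies):

        // Close _repository.Query<T>() over the runtime element type.
        var queryMethod = _repository.GetType()
                                     .GetMethod("Query")
                                     .MakeGenericMethod(listType);

        // Build a List<listType> and fill it with the query results.
        var list = (System.Collections.IList)System.Activator.CreateInstance(
            typeof(List<>).MakeGenericType(listType));
        foreach (var item in (System.Collections.IEnumerable)queryMethod.Invoke(_repository, null))
            list.Add(item);

        property.SetValue(model, list, null);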


  • Git to SVN trouble

    - by Kevin
    My boss has a Perforce repository for which he wants to make a read-only copy available on SourceForge via Subversion. He had a Perl script which would do this, but it's no longer functioning (we don't want to try debugging it yet) and it's really not that great anyway. So an alternative is to pull the Perforce repo into git as a remote ref, which I have already done successfully (including all the proper commit details and authors). The trouble I'm having now is pushing it out to a separate SVN repository. I can start the commit process with "git svn dcommit --add-author-from", but even though the correct author appears at the end of the commit message, the "real" committing author is my machine's user. I want to preserve the real author on each commit, and I'd also like to preserve the original timestamps. Is anyone familiar with how I could accomplish this?


  • Caching for a Custom Repository Adapter for WebSphere Portal Virtual Member Manager

    - by Spike Williams
    I'm looking at writing a custom repository adapter to interact with Virtual Member Manager on WebSphere Portal 6.1. Basically, it's a layer that takes a request in the form of a commonj.sdo.DataObject and passes it on to an external web service, to get various information about our logged-in users that is not otherwise available in LDAP. I'm concerned about the performance hit of going to a service every time we want to pull a permission from the back end. My question is: can Virtual Member Manager handle caching of the data going into and out of custom repository adapters, or is that something I'm going to have to build into the adapter myself?

