Search Results

Search found 4474 results on 179 pages for 'git workflow'.

Page 63 of 179

  • Git development → production workflow – how to set up repo?

    - by Blixt
    I'm working on a relatively small, but fast-changing project (a web application) with a few other developers. We're using Git for source control. We started out creating a stable branch which is what is deployed to the live production web server. The master branch is what is deployed to a secondary "unstable" server for testing purposes. Whenever we felt that the master branch was ready to go live, we merged it into stable. However, we came to a point where we wanted one of the later master commits, but not some of the commits before it, so we used cherry-pick to pull that change into stable. This creates a new commit with the same change as the one in master, and it feels as if we're losing the nice history that Git otherwise provides. Are there better ways of handling this type of unstable/stable deployment model? One solution I thought of was using feature branches, and only ever merging a feature branch into master once we want it to go live. Then we'll tag every deployment instead of having a stable branch.
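
    For reference, a minimal sketch of the feature-branch-plus-tag variant mentioned at the end, with hypothetical branch and tag names:

        git checkout -b feature/search master        # develop the feature on its own branch
        # ... commit work, review, test ...
        git checkout master
        git merge --no-ff feature/search             # merge only when it is ready to go live
        git tag -a v1.4.0 -m "production release"    # tag the deployment instead of keeping a stable branch
        git push origin master --tags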

    Read the article

  • How to check for changes on remote (origin) git repository?

    - by Lernkurve
    Question: what are the Git commands for the following workflow? Scenario: I cloned from a repository and made some commits of my own to my local repository. In the meantime, my colleagues made commits to the remote repository. Now I want to: (1) check whether there are any new commits from other people on the remote repository, i.e. "origin"; (2) say there were 3 new commits on the remote repository since my last pull, I would like to diff the remote repository's commits, i.e. HEAD~3 with HEAD~2, HEAD~2 with HEAD~1 and HEAD~1 with HEAD; (3) after knowing what changed remotely, get the latest commits from the others. My findings so far: for step 2, I know the caret notation HEAD^, HEAD^^ etc. and the tilde notation HEAD~2, HEAD~3 etc. For step 3, that is, I guess, just a git pull.
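
    A plausible command sequence for the three steps, assuming the shared branch is master; the branch name is the only assumption here:

        git fetch origin                           # step 1: update remote-tracking refs without touching local work
        git log --oneline HEAD..origin/master      # list commits that exist on origin but not locally
        git log -p HEAD..origin/master             # step 2: one patch per new remote commit
        git diff HEAD...origin/master              # or the combined incoming change since the common ancestor
        git merge origin/master                    # step 3: equivalent to the pull, now that the fetch is done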

    Read the article

  • Can Git or Mercurial be set to bypass the local repository and go straight to the central one?

    - by Jian Lin
    Using Git or Mercurial, if the working directory is 1GB, then the local repository will be another 1GB (at least), residing normally on the same hard drive. And then when pushed to a central repository, there will be another 1GB. Can Git or Mercurial be set to use only a working directory and a central repository, without having 3 copies of this 1GB of data? (Actually, when the central repository is also updated, there are 4 copies of the same data... can that be reduced? In the SVN scenario, with 5 users there will be 6GB of data in total; with distributed version control there will be 12GB?)
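
    Neither tool can drop the local repository entirely (the local history is what makes them work offline), but a shallow clone keeps the local copy small; a sketch with a placeholder URL:

        git clone --depth 1 https://example.com/project.git    # truncated history; placeholder URL
        du -sh project/.git project                             # compare repository size with working-tree size

    For mostly text content the compressed .git store is often smaller than the checked-out tree, so the "another 1GB" is an upper bound rather than the typical cost.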

    Read the article

  • Why does Git.pm on cygwin complain about 'Out of memory during "large" request'?

    - by Charles Ma
    Hi, I'm getting this error while doing a git svn rebase in cygwin:

        Out of memory during "large" request for 268439552 bytes, total sbrk() is 140652544 bytes
        at /usr/lib/perl5/site_perl/Git.pm line 898, <GEN1> line 3.

    268439552 is 256MB. Cygwin's maximum memory size is set to 1024MB, so I'm guessing that it has a different maximum memory size for Perl? How can I increase the maximum memory size that Perl programs can use? Update: this is where the error occurs (in Git.pm):

        while (1) {
            my $bytesLeft = $size - $bytesRead;
            last unless $bytesLeft;
            my $bytesToRead = $bytesLeft < 1024 ? $bytesLeft : 1024;
            my $read = read($in, $blob, $bytesToRead, $bytesRead);  # line 898
            unless (defined($read)) {
                $self->_close_cat_blob();
                throw Error::Simple("in pipe went bad");
            }
            $bytesRead += $read;
        }

    I've added a print before line 898 to print out $bytesToRead and $bytesRead; the result was 1024 for $bytesToRead and 134220800 for $bytesRead, so it's reading 1024 bytes at a time and has already read 128MB. Perl's read function must be out of memory and trying to request double its memory size... is there a way to specify how much memory to request, or is that implementation-dependent? UPDATE 2: While testing memory allocation in cygwin, this C program's output was 1536MB:

        #include <stdio.h>
        #include <stdlib.h>

        int main() {
            unsigned int bit = 0x40000000, sum = 0;
            char *x;
            while (bit > 4096) {
                x = malloc(bit);
                if (x) sum += bit;
                bit >>= 1;
            }
            printf("%08x bytes (%.1fMb)\n", sum, sum / 1024.0 / 1024.0);
            return 0;
        }

    while this Perl program crashed if the file size was greater than 384MB (but succeeded if it was less):

        open(F, "<400") or die("can't read\n");
        $size = -s "400";
        $read = read(F, $s, $size);

    The error is similar:

        Out of memory during "large" request for 536875008 bytes, total sbrk() is 217088 bytes at mem.pl line 6.

    Read the article

  • Save/restore git/cvs checkout changes when switching branches?

    - by Dale Forester
    Using cvs, git or another technique (file system level?), I would like to: (1) make modifications on branch A; (2) check out branch B: changes to branch A are "stowed away" (by name would be nice), and branch B is checked out such that my branch A changes are gone; (3) make modifications on branch B; (4) check out branch A: changes to branch B are "stowed away" (by name would be nice), and branch A is checked out such that my branch B changes are gone but my "saved" branch A changes from step 2 are back. Git-stash does not appear to fit the flow I'm describing, although my impression could be wrong. Techniques involving RCSs, the file system, command-line tools or otherwise are welcome.
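
    For what it's worth, named stashes come fairly close to the flow described above; a sketch on recent Git, with made-up labels and branch names:

        git stash push -m "branch-A-work"    # stow the uncommitted branch A changes under a label
        git checkout B
        # ... work on branch B ...
        git stash push -m "branch-B-work"
        git checkout A
        git stash list                       # locate the entry labelled branch-A-work
        git stash pop 'stash@{1}'            # restore it (the index depends on what was stashed in between)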

    Read the article

  • How to export all changed/added files from Git?

    - by dr Hannibal Lecter
    Hi all! I am very new to Git and I have a slight problem. In SVN [this feels like an Only Fools and Horses story by uncle Albert.."during the war..."] when I wanted to update a production site with my latest changes, I'd do a diff in TSVN and export all the changed/added files between two revisions. As you can imagine, it was easy to get those files to a production site afterwards. However, it seems like I'm unable to find an "export changed files" option in Git. I can do a diff and see the changes, I can get a list of files, but I can't actually export them. Is there a reasonable way to do this? Am I missing something simple? Just to clarify once again, I need to export all the changes between two specific commits. Thanks in advance!
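
    One combination that is often suggested for this, sketched here assuming the two commits are called A and B and the listed paths still exist in B (paths containing spaces would need extra quoting):

        git diff --name-only --diff-filter=ACMR A B > changed.txt    # files added/changed between the two commits
        git archive -o changed.zip B $(cat changed.txt)              # pack exactly those paths as they exist in B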

    Read the article

  • How can I "git log" only code published to trunk?

    - by Russell Silva
    At my workplace we have a "master" trunk branch that represents published code. To make a change, I check out a working copy, create a topic branch, commit to the topic branch, merge the topic branch into master, and push. For small changes, I might commit directly to master, then push. My problem is that when I use "git log", I don't care about my topic branches in my local working copy. I only want to see the changes to the master branch on the remote, shared git server. What's more, if I use --stat or -p or one of their friends, I want to see the files and changes associated with the merge commit to master, not with their original branch commits (which, like I said, I don't want to see at all). How do I go about doing this?
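
    A sketch of one way to get that view, assuming the shared branch is origin/master; --first-parent follows only the merge commits made on master, and -m limits each merge's diff to its first parent:

        git fetch origin
        git log --first-parent --oneline origin/master    # the published history, topic branches collapsed
        git log --first-parent -m --stat origin/master    # per-merge file lists against the first parent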

    Read the article

  • Why use a Rails-like deployment mechanism over 'git pull' for releasing?

    - by Chad Johnson
    To release my centralized webapp, I COULD have a vhost pointed at some directory and then just do a 'git pull' when I want to release, updating the files. But Rails has a different deployment mechanism: it copies files to a subdirectory and then points a symlink ('current') at that new subdirectory. I understand that it is probably more acceptable to do a Rails-like deployment because the release is built in a separate directory and the symlink is only then pointed at it, so the switch is much faster and it's less likely that users will experience weird issues while a release is happening. Are there any other advantages to the Rails approach? Or is a 'git pull' approach actually more widely accepted?
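
    A stripped-down sketch of the symlink-switch pattern for comparison; the paths, URL and use of a shallow clone are all illustrative:

        RELEASE=/var/www/app/releases/$(date +%Y%m%d%H%M%S)
        git clone --depth 1 https://example.com/app.git "$RELEASE"    # build the release in its own directory
        ln -sfn "$RELEASE" /var/www/app/current                       # repoint the vhost's docroot in one quick step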

    Read the article

  • Is it good to commit files often if using Mercurial or Git?

    - by Jian Lin
    It seems we are encouraged to commit often to keep track of intermediate changes to the code we write (as suggested on hginit.com, for example) when using Mercurial or Git. However, let's say we work on a project and commit files often. Now, for one reason or another, the manager wants part of the feature to go out, so we need to do a push, but I heard that with Mercurial or Git there is no way to push individual files or a folder: either everything committed gets pushed or nothing gets pushed. So either we have to revert all the files we don't want to push, or we should just never commit until right before we push -- commit, then immediately push?
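
    One way to keep committing often without pushing everything is to keep each piece of work on its own branch and push branches selectively; a sketch with hypothetical branch names:

        git checkout -b feature/export master    # commit freely here
        git checkout -b feature/search master    # and here, independently
        git push origin feature/export           # only the commits reachable from this branch leave the machine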

    Read the article

  • How to rebase one Git repository onto another one?

    - by kroimon
    Hi there! I had one Git repository (A) which contained the development of a project up to a certain point. Then I lost the USB stick this repo A was on. Luckily I had a backup of the latest commit, so I could create a new repository (B) later, where I imported the latest state of the project and continued development. Now I have recovered that lost USB stick, so I have two Git repositories. I think I just have to rebase repo B onto repo A somehow, but I have no idea how to do that. Maybe using fetch/pull and rebase? Thanks in advance for your help!
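
    A plausible recipe, assuming repo B's first commit matches the backed-up state of repo A (the paths and remote name are placeholders):

        cd /path/to/repoB
        git remote add recovered /path/to/repoA             # the repository recovered from the USB stick
        git fetch recovered
        git rebase --onto recovered/master --root master    # replay all of B's commits on top of A's history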

    Read the article

  • Why does git hash-object return a different hash than openssl sha1?

    - by user657606
    Context: I downloaded a file (Audirvana 0.7.1.zip) from code.google to my Macbook Pro (Mac OS X 10.6.6). (Current URL: http://code.google.com/p/audirvana/downloads/detail?name=Audirvana%200.7.1.zip&can=2&q= ) I wanted to verify the checksum, which for that particular file is posted as 862456662a11e2f386ff0b24fdabcb4f6c1c446a (SHA-1). git hash-object gave me a different hash, but openssl sha1 returned the expected 862456662a11e2f386ff0b24fdabcb4f6c1c446a. The following experiment seems to rule out any possible download corruption or newline differences, and to indicate that there are actually two different algorithms at play:

        $ echo A > foo.txt
        $ cat foo.txt
        A
        $ git hash-object foo.txt
        f70f10e4db19068f79bc43844b49f3eece45c4e8
        $ openssl sha1 foo.txt
        SHA1(foo.txt)= 7d157d7c000ae27db146575c08ce30df893d3a64

    What's going on?
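
    The difference comes from git hashing a "blob <size>\0" header followed by the file content, while openssl sha1 hashes only the raw bytes; a quick check of that (the printf escape emits the NUL byte of the header):

        printf 'blob 2\0A\n' | openssl sha1    # should reproduce git's f70f10e4... digest for the 2-byte file "A\n"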

    Read the article

  • git: Switch branch and ignore any changes without committing.

    - by boyfarrell
    Hello, I have got the git branch I'm working on to a nice place, so I make a commit with a useful commit message. I then absentmindedly make minor changes to the code that are not worth keeping. I now want to change branches, but git gives me: error: You have local changes to "X"; cannot switch branches. I thought that I could change branches without committing. If so, how can I set this up? If not, how do I get out of this problem? I want to ignore the minor changes without committing and just change branches! Cheers, Dan
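
    Two common ways out, sketched with a made-up branch name; the second throws the modifications away for good:

        git stash                        # park the minor changes (drop them later with: git stash drop)
        git checkout other-branch

        git checkout -f other-branch     # or discard the local modifications outright while switching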

    Read the article

  • Find git branch that got pushed to a bare repository.

    - by Senthil A Kumar
    Let's say we have 2 repositories: one containing the actual data (the data repo) and a bare repository which is loaded with deltas from the data repository by doing a git push from the data repo to the bare repo. I hope the model I'm using is clear. I create clones by cloning the bare repo, and I push from the branches in my local clone to the branches in the bare repository. When I push data from my branch to the bare repo, the data is automatically synced to the data repo by a hook. The question I have: is there a way to find from which branch the code came into the bare repo? I can see the source and target branch during a git push, but after pushing, can I see from the logs or in some other way which branch and repository the data was pushed from? If there are 5 developers pushing to the bare repo, can I find out in the bare repo which branch and clone a change was pushed from?
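
    The source branch name in a developer's clone is not transmitted by a push (only the destination ref is), but a hook on the bare repository can at least log which refs were updated and by whom; a minimal post-receive sketch, assuming each developer pushes over SSH as their own user:

        #!/bin/sh
        # hooks/post-receive in the bare repo: stdin carries one "<old> <new> <ref>" line per updated ref
        while read old new ref; do
            echo "$(date -u '+%Y-%m-%dT%H:%M:%SZ') user=$USER ref=$ref $old..$new" >> /var/log/git-pushes.log
        done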

    Read the article

  • Clear list of recent repositories in Git Extensions

    - by Marko Apfel
    Orphaned and wrongly specified repositories in the recent list are annoying. At first I did not find an option to clean up these entries, nor the place where they are persisted. So it was time for Process Explorer. The storage happens under HKEY_CURRENT_USER\Software\GitExtensions\GitExtensions\1.0.0.0 in the string value "history". You can edit the content of the string value or delete it; when Git Extensions restarts, the string value will be recreated with a default skeleton.
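
    If you prefer the command line to regedit, something along these lines should clear the value (version path taken from above; close Git Extensions first):

        reg delete "HKCU\Software\GitExtensions\GitExtensions\1.0.0.0" /v history /f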

    Read the article

  • Beginner's Guide to Git

    Make Tech Easier: "Git is the revision control system created by the Linux kernel's famous Linus Torvalds due to a lack of satisfaction with existing solutions. The main emphasis in the design was on speed, or more specifically, efficiency."

    Read the article

  • Tool to identify potential reviewers for a proposed change

    - by Lorin Hochstein
    Is there a tool that takes as input a proposed patch and a git repository, and identifies the developers who are the best candidates for reviewing the patch? It would use the git history to identify the authors with the most experience in the files / sections of code being changed. Edit: The use case is a large open source project (OpenStack Compute), where merge proposals come in. When I see a merge proposal on a chunk of code I'm not familiar with, I want to add somebody else's name to the list of suggested reviewers so that person gets a notification to look at the merge proposal.
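
    Short of a dedicated tool, the git history itself gets reasonably far; a sketch that ranks recent authors of the files a proposal touches (the branch names and time window are placeholders):

        git diff --name-only origin/master...proposal-branch \
          | xargs -r git log --since="1 year ago" --format='%an <%ae>' -- \
          | sort | uniq -c | sort -rn | head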

    Read the article

  • Advice Needed: Developers blocked by waiting on code to merge from another branch using GitFlow

    - by fogwolf
    Our team just made the switch from FogBugz & Kiln/Mercurial to Jira & Stash/Git. We are using the Git Flow model for branching, adding subtask branches off of feature branches (relating to Jira subtasks of Jira features). We use Stash to assign a reviewer when we create a pull request to merge back into the parent branch (usually develop, but for subtasks back into the feature branch).

    The problem we're finding is that even with the best planning and breakdown of feature cases, when multiple developers work together on the same feature, say on the front end and back end, and their code in separate branches is interdependent, one developer ends up blocking the other. We've tried pulling between each other's branches as we develop. We've also tried creating local integration branches that each developer can pull from multiple branches to test the integration as they develop. Finally, and this seems to work possibly the best for us so far, though with a bit more overhead, we have tried creating an integration branch off of the feature branch right away. When a subtask branch (off of the feature branch) is ready for a pull request and code review, we also manually merge those change sets into this feature integration branch. All interested developers can then pull from that integration branch into other dependent subtask branches. This prevents anyone from waiting for a branch they depend on to pass code review.

    I know this isn't necessarily a Git issue; it has to do with working on interdependent code in multiple branches, mixed with our own work process and culture. If we didn't have the strict code-review policy for develop (the true integration branch), then developer 1 could merge to develop for developer 2 to pull from. Another complication is that we are also required to do some preliminary testing as part of the code review process before handing the feature off to QA. This means that even if front-end developer 1 is pulling directly from back-end developer 2's branch as they go, if back-end developer 2 finishes and his/her pull request sits in code review for a week, then front-end developer 1 technically can't create his/her pull request/code review, because the reviewer can't test until back-end developer 2's code has been merged into develop.

    Bottom line: we find ourselves taking a much more serial than parallel approach in these instances, depending on which route we go, and we would like to find a process that avoids this. Last thing I'll mention: we realize that by sharing code across branches that haven't been code reviewed and finalized yet, we are in essence using each other's beta code. To a certain extent I don't think we can avoid that, and we are willing to accept it to a degree. Anyway, any ideas, input, etc. are greatly appreciated. Thanks!
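
    A sketch of the shared integration branch described above, with hypothetical branch names:

        git checkout -b feature/checkout-integration feature/checkout    # created alongside the feature branch
        git merge --no-ff feature/checkout-backend                       # merged as soon as a subtask is review-ready
        git merge --no-ff feature/checkout-frontend
        # dependent subtask branches pull from feature/checkout-integration instead of waiting on code review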

    Read the article

  • Is Perforce as good at merging as DVCSs?

    - by dukeofgaming
    I've heard that Perforce is very good at merging. I'm guessing this has to do with the fact that it tracks changes in the form of changelists, where you can add differences across several files in one go. I think this implies Perforce gathers more metadata and therefore has more information to do smarter merging (at least smarter than Subversion, Perforce being the other centralized system). Since this is similar to how Mercurial and Git handle changes (I know DVCSs track content rather than files), I was wondering if somebody knew what the subtle differences are that make Perforce better or worse at merging than a DVCS like Mercurial or Git.

    Read the article

  • Is there a decent way to maintain development of wordpress sites using the same base?

    - by Joakim Johansson
    We've been churning out WordPress sites for a while, and we'd like to keep a base repository that can be used when starting a new project, as well as for updating existing sites with changes to the WordPress base. Am I wrong in assuming this would be a good thing? We take care of updating the sites, so having a common base would make this easier. I've been looking at solutions using git, such as forking a base repository and using it to pull changes to the WordPress base, while committing the site to its own repository. Or maybe, if it's possible, storing the base as a git submodule, though this would require storing themes and plugins outside of it. Is there any common way to go about this kind of website development?
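
    The pull-from-a-base-remote idea might look roughly like this; the URLs and branch name are placeholders:

        git clone https://example.com/client-site.git && cd client-site
        git remote add base https://example.com/wordpress-base.git    # the shared WordPress base repository
        git fetch base
        git merge base/master                                          # bring base updates into this site's repo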

    Read the article
