Search Results

Search found 4953 results on 199 pages for 'git commit'.

Page 10 of 199

  • Git Svn Fetch More Revisions

    - by vigilant
    I am using git-svn for our svn repository. However, the repo is huge, so I first checked out the project like so:

        git svn clone svn://svn.server.com/project -s -r 12000:HEAD

    So now I have only revisions 12000 up to the current revision. I would like to check out some more revisions, but the following does nothing:

        git svn fetch -r 11000:HEAD

    Is there a way to fetch older revisions?
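
    A hedged sketch of one option that is known to work, at the cost of re-downloading history: re-run the clone with a wider (or unbounded) starting revision into a fresh directory. The target directory names below are illustrative.

        # re-clone with a lower starting revision (heavy, but reliable)
        git svn clone svn://svn.server.com/project -s -r 11000:HEAD project-deeper

        # or fetch the entire history, if time and disk space allow
        git svn clone svn://svn.server.com/project -s project-full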

    Read the article

  • Routinely sync a branch to master using git rebase

    - by m1755
    I have a Git repository with a branch that hardly ever changes (nobody else is contributing to it). It is basically the master branch with some code and files stripped out. Having this branch around makes it easy for me to package up a leaner version of my project without having to strip out the code and files manually every time. I have been using git rebase to keep this branch up to date with the master but I always get this warning when I try to push the branch after rebasing:

        To prevent you from losing history, non-fast-forward updates were rejected
        Merge the remote changes before pushing again. See the 'Note about
        fast-forwards' section of 'git push --help' for details.

    I then use git push --force and it works but I feel like this is probably bad practice. I want to keep this branch "in sync" with the master quickly and easily. Is there a better way of handling this task?
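
    For what it's worth, a sketch of the usual routine for a rebased branch that only one person pushes, on the understanding that rewriting a published branch makes the non-fast-forward rejection expected. In newer versions of git, --force-with-lease is a slightly safer variant of --force because it refuses to overwrite commits you have not already fetched (the branch name is a placeholder):

        git checkout lean-branch
        git rebase master
        git push --force-with-lease origin lean-branch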

    Read the article

  • Howto Nginx + git-http-backend + fcgiwrap (Debian Squeeze)

    - by brainsqueezer
    I am trying to set up git-http-backend with Nginx, but after 24 hours of wasted time and reading everything I could, I think this config should work but doesn't.

        server {
            listen 80;
            server_name mydevserver;
            access_log /var/log/nginx/dev.access.log;
            error_log /var/log/nginx/dev.error.log;
            location / {
                root /var/repos;
            }
            location ~ /git(/.*) {
                gzip off;
                root /usr/lib/git-core;
                fastcgi_pass unix:/var/run/fcgiwrap.socket;
                include /etc/nginx/fastcgi_params2;
                fastcgi_param SCRIPT_FILENAME /usr/lib/git-core/git-http-backend;
                fastcgi_param DOCUMENT_ROOT /usr/lib/git-core/;
                fastcgi_param SCRIPT_NAME git-http-backend;
                fastcgi_param GIT_HTTP_EXPORT_ALL "";
                fastcgi_param GIT_PROJECT_ROOT /var/repos;
                fastcgi_param PATH_INFO $1;
                #fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
            }
        }

    Content of /etc/nginx/fastcgi_params2:

        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_param REQUEST_URI $request_uri;
        fastcgi_param DOCUMENT_URI $document_uri;
        fastcgi_param DOCUMENT_ROOT $document_root;
        fastcgi_param SERVER_PROTOCOL $server_protocol;
        fastcgi_param GATEWAY_INTERFACE CGI/1.1;
        fastcgi_param SERVER_SOFTWARE nginx/$nginx_version;
        fastcgi_param REMOTE_ADDR $remote_addr;
        fastcgi_param REMOTE_PORT $remote_port;
        fastcgi_param SERVER_ADDR $server_addr;
        fastcgi_param SERVER_PORT $server_port;
        fastcgi_param SERVER_NAME $server_name;
        fastcgi_param REMOTE_USER $remote_user;
        # required if PHP was built with --enable-force-cgi-redirect
        fastcgi_param REDIRECT_STATUS 200;

    But the config does not seem to work:

        $ git clone http://mydevserver/git/myprojectname/
        Cloning into myprojectname...
        warning: remote HEAD refers to nonexistent ref, unable to checkout.

    And I can request a nonexistent project and get the same answer:

        $ git clone http://mydevserver/git/thisprojectdoesntexist/
        Cloning into thisprojectdoesntexist...
        warning: remote HEAD refers to nonexistent ref, unable to checkout.

    If I change root to /usr/lib I get a 403 error, and this is reported to the nginx error log:

        2011/11/23 15:52:46 [error] 5224#0: *55 FastCGI sent in stderr: "Cannot get script name, is DOCUMENT_ROOT and SCRIPT_NAME set and is the script executable?" while reading response header from upstream, client: 198.168.0.4, server: mydevserver, request: "GET /git/myprojectname/info/refs HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "mydevserver"

    My main trouble is finding the correct root value for this configuration. Maybe there are some permissions problems. Notes:

        - /var/repos/ is owned by www-data and contains folders with bare git repos. All of this works perfectly over ssh.
        - If I go with my browser to http://mydevserver/git/myproject/info/refs it is answered by git-http-backend asking me to send a command.
        - /var/run/fcgiwrap.socket has 777 permissions.
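
    Not an answer, but a couple of hedged checks that usually narrow this kind of setup down. fcgiwrap has to end up executing /usr/lib/git-core/git-http-backend (it locates the script via SCRIPT_FILENAME, or via DOCUMENT_ROOT plus SCRIPT_NAME on some builds), and a clone of an empty bare repository prints exactly this "remote HEAD refers to nonexistent ref" warning, so an empty repository can masquerade as a broken server. The repository path in the last line is a guess:

        # is the backend executable by the fcgiwrap user?
        sudo -u www-data test -x /usr/lib/git-core/git-http-backend && echo "backend is executable"

        # does the repository actually have any refs yet?
        git ls-remote http://mydevserver/git/myprojectname/
        git --git-dir=/var/repos/myprojectname.git show-ref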

    Read the article

  • git reference common directory/repo

    - by phillee
    Project layout:

        /project_a
            /shared
        /project_b
            /shared
        /shared

    project_a and project_b both need to contain the shared folder. With svn, we used svn:externals and that worked fine, since svn can reference subdirs (with relative paths too). However, we moved to git and it seems not to support checking out subdirs. Our solution now is to put project_a, project_b and shared all in different git repos, and use git submodules in project_a and project_b. However, this seems much more complicated than one monolithic svn repo with svn:externals. What's the correct way to handle common elements in git?
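
    If the submodule route is kept, the day-to-day commands are fairly short. A sketch, assuming the shared code lives in its own repository (the URLs are placeholders):

        # inside project_a (and likewise project_b)
        git submodule add git://example.com/shared.git shared
        git commit -m "Add shared as a submodule"

        # anyone cloning project_a then initializes the submodule once
        git clone git://example.com/project_a.git
        cd project_a
        git submodule update --init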

    Read the article

  • Git to svn: Adding commit date to log messages

    - by Arnauld VM
    What should I do to have the author (or committer) name/date added to the log message when "dcommitting" to svn? For example, if the log message in Git is:

        This is a nice modif

    I'd like the message in svn to be something like:

        This is a nice modif
        -----
        Author:    John Doo <[email protected] 2010-06-10 12:38:22
        Committer: Nice Guy <[email protected] 2010-06-10 14:05:42

    (Note that I'm mainly interested in the date, since I already mapped svn users in .svn-authors.) Any simple way? Is a hook needed? Other suggestions? (See also: http://article.gmane.org/gmane.comp.version-control.git/148861) Thank you in advance. Yours faithfully, -- Arnauld Van Muysewinkel
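
    As far as I know there is no built-in git-svn switch for this, but one hedged approach is to append the author/committer details to the messages of the not-yet-dcommitted commits first, for example with git filter-branch and a message filter. Since filter-branch rewrites history, it should only touch commits that svn has not seen yet; the revision range below is a placeholder for whatever ref git-svn tracks in your clone:

        git filter-branch --msg-filter '
            cat
            echo "-----"
            git log -1 --format="Author:    %an <%ae> %ai%nCommitter: %cn <%ce> %ci" $GIT_COMMIT
        ' remotes/trunk..HEAD
        git svn dcommit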

    Read the article

  • How to bridge git to ClearCase?

    - by Bas Bossink
    I've recently used git svn and enjoyed it very much. Now I'm starting a new project at a different customer. At that site the SCM of choice is ClearCase. I haven't found a ready-made equivalent of git svn for ClearCase. Has anybody tried to use git locally as a front-end to ClearCase, using some tricks, configuration or scripting, with any measure of success? If so, can you please explain the method you used?

    Read the article

  • Detach subdirectory into separate Git repository

    - by matli
    I have a Git repository which contains a number of subdirectories. Now I have found that one of the subdirectories is unrelated to the others and should be detached into a separate repository. How can I do this while keeping the history of the files within the subdirectory? I guess I could make a clone and remove the unwanted parts of each clone, but I suppose this would give me the complete tree when checking out an older revision, etc. This might be acceptable, but I would prefer to be able to pretend that the two repositories don't have a shared history. Just to make it clear, I have the following structure:

        XYZ/
            .git/
            XY1/
            ABC/
            XY2/

    But I would like this instead:

        XYZ/
            .git/
            XY1/
            XY2/
        ABC/
            .git/
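
    A hedged sketch of the classic recipe, run on a fresh clone so the original repository stays untouched (paths match the example layout):

        git clone XYZ ABC                 # work on a copy of the repository
        cd ABC
        git filter-branch --prune-empty --subdirectory-filter ABC -- --all
        # ABC's former contents are now the repository root, with only their own history

        # back in XYZ, drop the directory that moved out
        git rm -r ABC
        git commit -m "Move ABC to its own repository"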

    Read the article

  • git - how do we verify commit messages for a push?

    - by shovas
    Coming from CVS, we have a policy that commit messages should be tagged with a bug number (simple suffix "... [9999]"). A CVS script checks this during commits and rejects the commit if the message does not conform. The git hook commit-msg does this on the developer side but we find it helpful to have automated systems check and remind us of this. During a git push, commit-msg isn't run. Is there another hook during push that could check commit messages? How do we verify commit messages during a git push?
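
    A hedged sketch of a server-side check: on the repository being pushed to, the update hook runs once per ref and can reject the push when any new commit's subject lacks the "[9999]"-style suffix. The regex and the handling of brand-new branches below are simplified and would need tuning:

        #!/bin/sh
        # .git/hooks/update on the server, called as: <refname> <old-sha> <new-sha>
        refname="$1"; oldrev="$2"; newrev="$3"

        if expr "$oldrev" : '0*$' >/dev/null; then
            range="$newrev"            # new branch: check everything reachable (simplified)
        else
            range="$oldrev..$newrev"
        fi

        for commit in $(git rev-list "$range"); do
            subject=$(git log -1 --format=%s "$commit")
            if ! printf '%s\n' "$subject" | grep -Eq '\[[0-9]+\]$'; then
                echo "Rejecting $refname: commit $commit has no bug number suffix [NNNN]" >&2
                exit 1
            fi
        done
        exit 0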

    Read the article

  • How to specify an SSH key for Hudson with git plugin?

    - by jlpp
    I've got Hudson (continuous integration system) with the git plugin running on a Tomcat Windows service. msysgit is installed and the msysgit bin dir is in the path. PuTTY/Pageant/plink are installed and msysgit is configured to use them. The trouble I'm running into, I think, is that the user who owns the Tomcat/Hudson service (Local System) has no SSH key set up to be able to clone the git repository. When the git Hudson plugin tries to clone, it gives the error:

        $ git clone -o origin git@hostname:project.git "e:\HUDSON_HOME\jobs\Project Trunk\workspace"
        ERROR: Error cloning remote repo 'origin' : Could not clone git@hostname:project.git
        ERROR: Cause: Error performing git clone -o origin git@hostname:project.git e:\HUDSON_HOME\jobs\Project Trunk\workspace
        Trying next repository
        ERROR: Could not clone from a repository
        FATAL: Could not clone
        hudson.plugins.git.GitException: Could not clone

    My question is, how can I set things up so that the git plugin/msysgit knows to use a particular SSH private key when trying to clone? I don't think Pageant will work because the Tomcat service is running as the "Local System" user, but I may be wrong.

    Read the article

  • git workflow for separating commits

    - by gman
    Best practice with git (or any VCS for that matter) is supposed to be to have each commit do the smallest change possible. But that doesn't match how I work at all.

    For example, I recently needed to add some code that checked whether the version of a plugin to my system matched the versions the system supports, and if not, print a warning that the plugin probably requires a newer version of the system. While writing that code I decided I wanted the warnings to be colorized. I already had code that colorized error messages, so I edited that code. That code was in the startup module of one entry point to the system. The plugin checking code was in another path that didn't use that entry point, so I moved the colorization code into a separate module so both entry points could use it. On top of that, in order to test that my plugin checking code works, I needed to go edit UI/UX code to make sure it tells the user "You need to upgrade". When all is said and done I've edited 10 files, changed dependencies, the 2 entry points are now both dependent on the colorization code, etc. Being lazy, I'd probably just git add . && git commit -a the whole thing.

    Spending 10-15 minutes trying to manipulate all those changes into 3 to 6 smaller commits seems frustrating, which brings up the question: are there workflows that work for you or that make this process easier? I don't think I can somehow magically always modify stuff in the perfect order, since I don't know that order until after I start modifying and seeing what comes up. I know I can use git add --interactive etc., but it seems, at least for me, kind of hard to know whether I'm grabbing exactly the correct changes so that each commit is actually going to work. Also, since the changes are sitting in the working directory, it doesn't seem like it would be easy to run tests on each commit to make sure it's going to work, short of stashing all the changes. And then, if I were to stash and run the tests, and I had missed a few lines or accidentally added a few too many, I have no idea how I'd easily recover from that (as in either grab the missing lines from the stash and put the rest back, or take the few extra lines I shouldn't have grabbed and shove them into the stash for the next commit).

    Thoughts? Suggestions? PS: I hope this is an appropriate question. The help says development methodologies and processes are on topic.
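
    A hedged sketch of the usual answer to the "does each partial commit actually build?" worry: stage one logical change with git add -p (or --interactive), then temporarily stash everything that is not staged and run the tests against only the staged subset. Nothing is lost if the tests fail, because the remainder sits in the stash:

        git add -p                      # pick just the hunks for the first logical commit
        git stash --keep-index          # set aside the unstaged remainder
        make test                       # or whatever proves the staged subset works on its own
        git commit -m "Colorize plugin version warnings"
        git stash pop                   # bring the rest back and repeat for the next commit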

    Read the article

  • A file was added to git on commit n. How do I add it instead to commit n-m?

    - by carleeto
    I have a branch. Half way through I noticed git was not tracking a file that it should have been and so I added it as part of a commit and continued with my work. Now, I'm doing a git bisect and all commits before the file was added do not build. So I'm thinking, I need to split the commit that added the file into two parts: the file add and the rest of the commit. I then need to re-order the commits so that the file add commit will be at the beginning of my branch. Is this the correct solution or is there a better way of doing it?
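
    A hedged sketch of the split-and-reorder dance with interactive rebase, assuming the branch forked from master and abc1234 is a placeholder for the commit that added the file; since this rewrites the branch, it is best done before the branch is shared:

        git rebase -i master              # mark abc1234 as "edit" in the todo list
        # when the rebase stops at abc1234:
        git reset HEAD^                   # undo the commit, keep its changes in the working tree
        git add path/to/forgotten-file
        git commit -m "Add forgotten file"
        git add -A
        git commit -m "Original commit, minus the file add"
        git rebase --continue

        # then run "git rebase -i master" again and move the "Add forgotten file"
        # line to the top of the todo list so it sits at the start of the branch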

    Read the article

  • "git pull" broken

    - by Ovid
    I recently upgraded my MacBook Pro to Snow Leopard and "git pull" returns:

        rakudo $ git pull
        git: 'pull' is not a git-command. See 'git --help'

        Did you mean this?
                shell

        rakudo $ git-pull
        -bash: git-pull: command not found

    I've tried reinstalling via macports, but to no avail. Then I saw this:

        rakudo $ git --exec-path
        /Users/ovid/libexec/git-core

    That surprised me as that directory does not exist, nor has it ever existed. Google is not helping here. Hopefully you can :)
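
    A hedged guess at the diagnosis: git --exec-path reports the GIT_EXEC_PATH environment variable when one is set, so a stale export in a shell startup file would explain the phantom /Users/ovid/libexec/git-core. A quick check might look like:

        echo $GIT_EXEC_PATH               # does something export this path?
        grep -n GIT_EXEC_PATH ~/.bashrc ~/.bash_profile ~/.profile 2>/dev/null
        unset GIT_EXEC_PATH               # git should then fall back to its compiled-in path
        git --exec-path                   # e.g. /opt/local/libexec/git-core for a MacPorts install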

    Read the article

  • How to get the changes on a branch in git

    - by Greg Hewgill
    What is the best way to get a log of commits on a branch since the time it was branched from the current branch? My solution so far is:

        git log $(git merge-base HEAD branch)..branch

    The documentation for git-diff indicates that "git diff A...B" is equivalent to "git diff $(git-merge-base A B) B". On the other hand, the documentation for git-rev-parse indicates that "r1...r2" is defined as "r1 r2 --not $(git merge-base --all r1 r2)". Why are these different? Note that "git diff HEAD...branch" gives me the diffs I want, but the corresponding git log command gives me more than what I want. In pictures, suppose this:

                  x---y---z---branch
                 /
        ---a---b---c---d---e---HEAD

    I would like to get a log containing commits x, y, z. "git diff HEAD...branch" gives these commits. However, "git log HEAD...branch" gives x, y, z, c, d, e.
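
    For what it's worth, a short sketch of the asymmetry: for git log the two-dot range already means "reachable from branch but not from HEAD", which is exactly x, y, z in the picture, while the three-dot form is the symmetric difference (both sides minus the merge base), which is why c, d, e also show up. git diff treats the three-dot form as a special case meaning "diff from the merge base to branch":

        git log HEAD..branch              # x, y, z   (same as: git log branch --not HEAD)
        git log HEAD...branch             # x, y, z, c, d, e   (symmetric difference)
        git diff HEAD...branch            # changes from the merge base of HEAD and branch up to branch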

    Read the article

  • ssh_exchange_identification: Connection closed by remote host under Git bash

    - by MoreFreeze
    I work on Windows 7 and set up a git server with sshd. I ran git --bare init myapp.git, and cloning ssh://git@localhost/home/git/myapp.git from Cygwin works correctly. But rather than configuring git under Cygwin again, I want to git clone from Git Bash. I run "git clone ssh://git@localhost/home/git/myapp.git" and get the following message:

        ssh_exchange_identification: Connection closed by remote host

    Then I run "ssh -vvv git@localhost" in Git Bash and get this output:

        debug2: ssh_connect: needpriv 0
        debug1: Connecting to localhost [127.0.0.1] port 22.
        debug1: Connection established.
        debug1: identity file /c/Users/MoreFreeze/.ssh/identity type -1
        debug3: Not a RSA1 key file /c/Users/MoreFreeze/.ssh/id_rsa.
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug3: key_read: missing keytype
        debug3: key_read: missing whitespace
        // above it repeats 24 times
        debug2: key_type_from_name: unknown key type '-----END'
        debug3: key_read: missing keytype
        debug1: identity file /c/Users/MoreFreeze/.ssh/id_rsa type 1
        debug1: identity file /c/Users/MoreFreeze/.ssh/id_dsa type -1
        ssh_exchange_identification: Connection closed by remote host

    It seems my private key has the wrong format? I also find that there are exactly 25 lines in the private key, not counting the "BEGIN" and "END" lines. I'm confused why it says it is NOT an RSA1 key; I'm certain it is an RSA 2 key. Any advice is welcome. By the way, I have read the first three pages of Google results about this problem.

    Read the article

  • Git: Help an SVN novice translate trunk/branch concepts to Git

    - by Jasconius
    So I am not much of a source control expert; I've used SVN for projects in the past. I have to use Git for a particular project (the client supplied a Git repo). My workflow is such that I will be working on the files from two different computers, and I often need to check in changes that are unstable when I move from place to place so I can continue my work. What then occurs is that when, say, the client goes to get the latest version, they will also download the unstable code. In SVN, you can address this by creating a trunk and using working branches, or by using the trunk as the working version and creating stable branches. What is the equivalent concept in Git, and is there a simple way to do this via GitHub?
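
    A hedged sketch of the direct equivalent: keep master as the stable line the client pulls, and push unstable work to a separate branch that follows you between computers. GitHub hosts such a branch like any other; the branch name is a placeholder:

        git checkout -b wip               # unstable work-in-progress branch
        git push -u origin wip            # publish it so the other computer can fetch it

        # on the other computer
        git fetch origin
        git checkout --track origin/wip

        # once the work is stable, fold it into master
        git checkout master
        git merge wip
        git push origin master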

    Read the article

  • Git: Merge in only one commit

    - by Ivan
    Usually, I work with branches in Git, but I don't like to see hundreds of branches in my working tree (Git history). I'm wondering if there is a method in Git to "join" all commits in a branch into only one commit (ideally with a clear commit message). Something like this:

        git checkout -b branch
        <some work>
        git commit -a -m "commit 1"
        <some work>
        git commit -a -m "commit 2"
        <some work>
        git commit -a -m "commit 3"
        git checkout master
        git SUPER-JOIN branch -m "super commit"

    After this, only "super commit" will exist in the git log.
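
    A hedged sketch of the closest built-in to SUPER-JOIN: a squash merge stages the combined result of the branch without recording a merge, so the single follow-up commit is all that appears on master:

        git checkout master
        git merge --squash branch
        git commit -m "super commit"      # one commit containing all of branch's changes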

    Read the article

  • Git Project Dependencies on GitHub

    - by VirtuosiMedia
    I've written a PHP framework and a CMS on top of the framework. The CMS is dependent on the framework, but the framework exists as a self-contained folder within the CMS files. I'd like to maintain them as separate projects on GitHub, but I don't want to have the mess of updating the CMS project every time I update the framework. Ideally, I'd like to have the CMS somehow pull the framework files for inclusion into a predefined sub-directory rather than physically committing those files. Is this possible with Git/GitHub? If so, what do I need to know to make it work? Keep in mind that I'm at a very, very basic level of experience with Git - I can make repositories and commit using the Git plugin for Eclipse, connect to GitHub, and that's about it. I'm currently working solo on the projects, so I haven't had to learn much more about Git so far, but I'd like to open it up to others in the future and I want to make sure I have it right. Also, what should my ideal workflow be for projects with dependencies? Any tips on that subject would also greatly appreciated. If you need more info on my setup, just ask in the comments.
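
    A hedged sketch of how a submodule covers the "pull the framework into a predefined sub-directory" case, plus the routine for pointing the CMS at a newer framework commit later on (URLs and paths are placeholders):

        # in the CMS repository: pin the framework into a sub-directory
        git submodule add https://github.com/you/framework.git lib/framework
        git commit -m "Track the framework as a submodule"

        # later, to pick up new framework commits in the CMS
        cd lib/framework
        git pull origin master
        cd ../..
        git add lib/framework
        git commit -m "Update framework to latest"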

    Read the article

  • From TFS to Git

    - by Saeed Neamati
    I'm a .NET developer and I've used TFS (Team Foundation Server) as my source control software many times. Good features of TFS are:

        - Good integration with Visual Studio (so I do almost everything visually; no console commands)
        - Easy check-out, check-in process
        - Easy merging and conflict resolution
        - Easy automated builds
        - Branching

    Now, I want to use Git as the backbone, repository, and source control of my open source projects. My projects are in C#, JavaScript, or PHP, with MySQL or SQL Server databases as the storage mechanism. I just used github.com's help for this purpose: I created a profile there and downloaded a GUI for Git. Up to this point it was easy. But I'm almost stuck going any further. I just want to do some simple (really simple) operations, including:

        - Creating a project on Git and mapping it to a folder on my laptop
        - Checking out/checking in files and folders
        - Resolving conflicts

    That's all I need to do now. But it seems that the GUI is not that user friendly. I expect the GUI to have a Connect To... button or something like that, and then I expect a list of projects to be shown. When I choose one, I expect to see the list of files and folders of that project, just like exploring your TFS project in Visual Studio. Then I want to be able to right click a file and select check-in... or check-out, and stuff like that. Do I expect too much? What should I do to easily use Git just like TFS? What am I missing here?
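
    A hedged sketch of what those three operations look like on the command line, since Git has no TFS-style exclusive check-out; you edit freely and record snapshots instead (repository URL and file names are placeholders):

        # "create a project and map it to a folder": clone the GitHub repository
        git clone https://github.com/you/yourproject.git
        cd yourproject

        # "check in": stage and commit locally, then publish
        git add .
        git commit -m "Describe the change"
        git push origin master

        # "get latest" and resolve conflicts: pull, edit any conflicted files, then
        git pull origin master
        git add ConflictedFile.cs
        git commit                        # concludes the merge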

    Read the article

  • Understanding and memorizing git rebase parameters

    - by Robert Dailey
    So far the most confusing portion of git is rebasing onto another branch. Specifically, it's the command line arguments that are confusing. Each time I want to rebase a small piece of one branch onto the tip of another, I have to review the git rebase documentation and it takes me about 5-10 minutes to understand what each of the 3 main arguments should be.

        git rebase <upstream> <branch> --onto <newbase>

    What is a good rule of thumb to help me memorize what each of these 3 parameters should be set to, given any kind of rebase onto another branch? Bear in mind I have gone over the git-rebase documentation again, and again, and again, and again (and again), but it's always difficult to understand (like a boring scientific white-paper or something). So at this point I feel I need to involve other people to help me grasp it. My goal is that I should never have to review the documentation for these basic parameters. I haven't been able to memorize them so far, and I've done a ton of rebases already. So it's a bit unusual that I've been able to memorize every other command and its parameters so far, but not rebase with --onto.
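
    A hedged mnemonic that matches the documented behaviour: the commits that move are "everything in branch that is not in upstream", and they are replayed on top of newbase, after which branch points at the result. In other words, take upstream..branch and put it --onto newbase:

        git rebase --onto newbase upstream branch
        # 1. collect the commits in the range   upstream..branch
        # 2. replay them on top of              newbase
        # 3. move the ref                       branch   to the replayed tip

        # example: move topic's own commits (those not already in next) onto master
        git rebase --onto master next topic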

    Read the article

  • Working with Git on multiple machines

    - by Tesserex
    This may sound a bit strange, but I'm wondering about a good way to work in Git from multiple machines networked together in some way. It looks to me like I have two options, and I can see benefits on both sides:

        1. Use git itself for sharing: each machine has its own repo and you have to fetch between them. You can work on either machine even if the other is offline. This by itself is pretty big, I think.
        2. Use one repo that is shared over the network between machines. No need to do git pulls every time you switch machines, since your code is always up to date. Never worry that you forgot to push code from your other non-hosting machine, which is now out of reach, since you were working off a fileshare on this machine.

    My intuition says that everyone generally goes with the first option. But the downside I see is that you might not always be able to access code from your other machines, and I certainly don't want to push all my WIP branches to GitHub at the end of every day. I also don't want to have to leave my computers on all the time so I can fetch from them directly. Lastly, a minor point is that all the git commands to keep multiple branches up to date can get tedious. Is there a third way to handle this situation? Maybe some third-party tools are available that help make this process easier? If you deal with this situation regularly, what do you suggest?
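
    A hedged sketch of a middle ground that is often suggested: a bare repository on any always-on box you control (or a private hosted remote) that both machines push work-in-progress branches to, so neither working machine needs the other to be awake. Host names and paths are placeholders:

        # one-time, on the always-on host
        git init --bare ~/repos/project.git

        # on each working machine
        git remote add sync user@homeserver:repos/project.git
        git push sync my-wip-branch            # end of the day on machine A

        # next morning on machine B
        git fetch sync
        git checkout --track sync/my-wip-branch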

    Read the article

  • Combination of Operating Mode and Commit Strategy

    - by Kevin Yang
    If you want to populate a source into multiple targets, you may also want to ensure that every row from the source affects all targets uniformly (or separately). Let's consider the Example Mapping below. If a row from SOURCE causes different changes in multiple targets (TARGET_1, TARGET_2 and TARGET_3), for example, it can be successfully inserted into TARGET_1 and TARGET_3 but fail to be inserted into TARGET_2, and the current Mapping Property TLO (target load order) is "TARGET_1 -> TARGET_2 -> TARGET_3". What should Oracle Warehouse Builder do in order to commit the appropriate data to all affected targets at the same time? If it doesn't behave as you intended, the data could become inaccurate and possibly unusable.

        [Figure: Example Mapping]

    In OWB, we can use Mapping Configuration Commit Strategies and Operating Modes together to achieve this kind of requirement. Below we will explore the combination of these two features and how they affect the results in the target tables. Before going to the example, let's review some of the terms we will be using (details can be found in the white paper Oracle Warehouse Builder Data Modeling, ETL, and Data Quality Guide 11g Release 2):

    Operating Modes:

        - Set-Based Mode: Warehouse Builder generates a single SQL statement that processes all data and performs all operations.
        - Row-Based Mode: Warehouse Builder generates statements that process data row by row. The select statement is in a SQL cursor. All subsequent statements are PL/SQL.
        - Row-Based (Target Only) Mode: Warehouse Builder generates a cursor select statement and attempts to include as many operations as possible in the cursor. For each target, Warehouse Builder inserts each row into the target separately.

    Commit Strategies:

        - Automatic: Warehouse Builder loads and then automatically commits data based on the mapping design. If the mapping has multiple targets, Warehouse Builder commits and rolls back each target separately and independently of other targets. Use the automatic commit when the consequences of multiple targets being loaded unequally are not great or are irrelevant.
        - Automatic correlated: a specialized type of automatic commit that applies to PL/SQL mappings with multiple targets only. Warehouse Builder considers all targets collectively and commits or rolls back data uniformly across all targets. Use the correlated commit when it is important to ensure that every row in the source affects all affected targets uniformly.
        - Manual: select manual commit control for PL/SQL mappings when you want to interject complex business logic, perform validations, or run other mappings before committing data.

    Combination of the commit strategy and operating mode

    To understand the effects of each combination of operating mode and commit strategy, I'll illustrate using the following example Mapping. First we insert 100 rows into the SOURCE table and make sure that the 99th row and 100th row have the same ID value. Then we create a unique key constraint on the ID column of the TARGET_2 table. So while running the example mapping, OWB tries to load all 100 rows into each of the targets, but the mapping should fail to load the 100th row into TARGET_2, because it would violate the unique key constraint of table TARGET_2.

    With different combinations of Commit Strategy and Operating Mode, here are the results.

    Set-based / Correlated Commit

        [Figures: Configuration of Example mapping; Result]

        What's happening: A single error anywhere in the mapping triggers the rollback of all data. OWB encounters the error inserting into Target_2; it reports an error for the table and does not load the row. OWB rolls back all the rows inserted into Target_1 and does not attempt to load rows to Target_3. No rows are added to any of the target tables.

    Row-based / Correlated Commit

        [Figures: Configuration of Example mapping; Result]

        What's happening: OWB evaluates each row separately and loads it to all three targets. Loading continues in this way until OWB encounters an error loading row 100 to Target_2. OWB reports the error and does not load the row. It rolls back row 100 previously inserted into Target_1 and does not attempt to load row 100 to Target_3. Then, if there are remaining rows, OWB will continue loading them, resuming with loading rows to Target_1. The mapping completes with 99 rows inserted into each target.

    Set-based / Automatic Commit

        [Figures: Configuration of Example mapping; Result]

        What's happening: When OWB encounters the error inserting into Target_2, it does not load any rows and reports an error for the table. It does, however, continue to insert rows into Target_3 and does not roll back the rows previously inserted into Target_1. The mapping completes with one error message for Target_2, no rows inserted into Target_2, and 100 rows inserted into Target_1 and Target_3 separately.

    Row-based / Automatic Commit

        [Figures: Configuration of Example mapping; Result]

        What's happening: OWB evaluates each row separately for loading into the targets. Loading continues in this way until OWB encounters an error loading row 100 to Target_2 and reports the error. OWB does not roll back row 100 from Target_1, and does insert it into Target_3. If there are remaining rows, it will continue to load them. The mapping completes with 99 rows inserted into Target_2 and 100 rows inserted into each of the other targets.

    Note:

        - Automatic Correlated commit is not applicable for row-based (target only). If you design a mapping with the row-based (target only) and correlated commit combination, OWB runs the mapping but does not perform the correlated commit.
        - In set-based mode, correlated commit may impact the size of your rollback segments. Space for rollback segments may be a concern when you merge data (insert/update or update/insert).
        - Correlated commit operates transparently with PL/SQL bulk processing code.
        - The correlated commit strategy is not available for mappings run in any mode that are configured for Partition Exchange Loading or that include a Queue, Match Merge, or Table Function operator.

    If you want to practice in your own environment, you can follow these steps:

        1. Import the MDL file: commit_operating_mode.mdl
        2. Fix the location for oracle module ORCL and deploy all tables under it.
        3. Insert sample records into the SOURCE table, using the PL/SQL code below:

            begin
                for i in 1..99
                loop
                    insert into source values(i, 'col_'||i);
                end loop;
                insert into source values(99, 'col_99');
            end;

        4. Configure MAPPING_1 to any combination of operating mode and commit strategy you want to test, and make sure the TLO feature of the mapping is enabled.
        5. Deploy Mapping "MAPPING_1".
        6. Run the mapping and check the result.

    Read the article

  • Empirical Evidence of Popularity of Git and Mercurial

    - by ana
    It's 2012! Mercurial and Git are both still strong. I understand the trade-offs of both. I also understand everyone has some sort of preference for one or the other. That's fine. I'm looking for some information on level of usage of both. For example, on stackoverflow.com, searching for Git gets you 12000 hits, Mercurial gets you 3000. Google Trends says it's 1.9:1.0 for Git. What other empirical information is available to estimate the relative usage of both tools?

    Read the article

  • Git bug branching convention

    - by kisplit
    I've been following the successful Git branching model guide for most of my development. I still wonder if the way I handle bug tickets is correct. My current workflow: Once I accept a bug ticket I will do a git checkout -b bug/{ticket_number}, create a single commit as a fix and then checkout develop and do a git merge --no-ff. I'd love to hear from the experiences of others whether or not I am abusing the --no-ff option in this instance. If I am, could someone suggest a better approach?

    Read the article

  • (12.04 vm/server) Dist-upgrade to 3.2.0-63 wants to remove git (1.9.2) and git-core - is that the correct behavior?

    - by YellowShark
    I was wondering if anyone knows why dist-upgrade wants to remove git. FWIW, this is a pretty simple box, mainly used for web dev.

        $ uname -a
        Linux precise64 3.2.0-61-generic #93-Ubuntu SMP Fri May 2 21:31:50 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

        $ git --version
        git version 1.9.2

        $ sudo apt-get dist-upgrade
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Calculating upgrade... Done
        The following packages will be REMOVED:
          git git-core
        The following NEW packages will be installed:
          linux-headers-3.2.0-63 linux-headers-3.2.0-63-generic linux-image-3.2.0-63-generic
        The following packages will be upgraded:
          git-man linux-headers-server linux-image-server linux-server phpmyadmin
        5 upgraded, 3 newly installed, 2 to remove and 0 not upgraded.
        Need to get 58.8 MB of archives.
        After this operation, 199 MB of additional disk space will be used.
        Do you want to continue [Y/n]?
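
    A hedged diagnostic rather than a fix: git 1.9.2 is newer than what stock 12.04 provides, so it most likely comes from a PPA, and the git-man upgrade from the Ubuntu archive may be unsatisfiable next to it, which makes apt propose removing git. Comparing where each candidate comes from and simulating the upgrade should show the conflict:

        apt-cache policy git git-core git-man    # which repository or PPA each candidate version comes from
        apt-get -s dist-upgrade                  # -s: simulate only; lists the same removals without applying them
        sudo apt-get install git git-man         # often prints the exact unmet dependency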

    Read the article
