Search Results

Search found 2601 results on 105 pages for 'commit'.


  • Import/commit to svn branch from a different codebase

    - by publicRavi
    I am trying to migrate to svn from a not-so-famous version control system (let's call it nsfvc). The svn trunk was created some time ago from nsfvc's trunk. There is an active branch in nsfvc that I have to import into an svn branch. The diff between nsfvc's trunk and branch is huge (updates, renames, additions, deletions, moves). How do I go about doing this? I am guessing it is not as simple as...

        svn co http://mysvn/repo/branches/branch c:\workspace
        # replace files in c:\workspace
        svn add
        svn ci
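
    One rough way to script the bulk import, assuming the working-copy path and branch URL above, and accepting that renames come across as delete-plus-add rather than true moves:

        # Overlay the exported nsfvc branch onto an svn working copy,
        # schedule adds and deletes, and commit the result as one revision.
        import subprocess

        WC = r"c:\workspace"

        def run(*cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        run("svn", "checkout", "http://mysvn/repo/branches/branch", WC)
        # ... copy the files exported from the nsfvc branch over the working copy here ...

        # Schedule every unversioned file for addition.
        run("svn", "add", "--force", WC)

        # Schedule files that disappeared (status '!') for deletion.
        status = subprocess.run(["svn", "status", WC], check=True,
                                capture_output=True, text=True).stdout
        for line in status.splitlines():
            if line.startswith("!"):
                run("svn", "delete", line[1:].strip())

        run("svn", "commit", "-m", "Import nsfvc branch state", WC)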

    Read the article

  • Mercurial commit only tip

    - by kiw
    In my setup I have a central Hg repo to which I'm pushing my local changes. Say in my local clone I have a series of local commits and then I want to push the changes to the central repo. How can I push only the final state without including all of the "small" local commits that I made? I want this because sometimes I don't want to pollute the central repo's history with all of those small local commits.
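
    A minimal extension-free sketch, assuming a placeholder central URL and that losing the intermediate local history is acceptable (the rebase extension's --collapse option, or histedit, is the usual in-place alternative): commit the final state as a single changeset in a throw-away clone and push that.

        import subprocess

        def hg(*args, cwd="."):
            subprocess.run(["hg", *args], cwd=cwd, check=True)

        CENTRAL = "http://hg.example.com/central"     # placeholder URL
        hg("clone", CENTRAL, "squashed")

        # Copy the final state of your local working directory over 'squashed'
        # (everything except the .hg metadata directory), then:
        hg("addremove", cwd="squashed")               # record adds/removes in one go
        hg("commit", "-m", "Collapsed local work", cwd="squashed")
        hg("push", cwd="squashed")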

    Read the article

  • Git: Removing the object(s) associated with an old commit

    - by user362893
    A couple of months ago I added and committed a release tarball to a git code repository. A couple of commits later, I removed the file and committed the removal. This one file was nearly 10x the size of the whole repository, so the presence of that file in .git slows cloning down significantly. At this point there have been hundreds of commits since the pair of commits that added and removed the file. Is there a way to remove the two commits which cancel out (the add and the remove) and also remove the copy of the file in .git, without hosing the repository? Thanks..
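
    A rough sketch of the usual history rewrite for this, assuming a placeholder tarball name and keeping in mind that everyone who has cloned the repository will need to re-clone afterwards:

        import subprocess

        def git(*args):
            subprocess.run(["git", *args], check=True)

        # Rewrite every branch, dropping the tarball from any commit that had it;
        # --prune-empty discards the add/remove commits once they become empty.
        git("filter-branch", "--index-filter",
            "git rm --cached --ignore-unmatch release.tar.gz",
            "--prune-empty", "--", "--all")

        # filter-branch keeps backups under refs/original/; delete them, then
        # expire the reflog and garbage-collect so the big blob really goes away.
        refs = subprocess.run(["git", "for-each-ref", "--format=%(refname)", "refs/original/"],
                              check=True, capture_output=True, text=True).stdout.split()
        for ref in refs:
            git("update-ref", "-d", ref)

        git("reflog", "expire", "--expire=now", "--all")
        git("gc", "--prune=now", "--aggressive")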

    Read the article

  • Why does writing a file to an NFS share send a COMMIT operation to the NFS server?

    - by Antonis Christofides
    I have a Debian squeeze machine (2.6.32-5-amd64) which is at the same time an NFS4 server and client (it mounts itself through NFS4). The local directory that leads directly to disk is /nfs4exports/mydir, whereas /nfs4mounts/mydir is the same thing mounted through NFS, using the machine's external IP address. Here is the line from fstab:

        192.168.1.75:/mydir /nfs4mounts/mydir nfs4 soft 0 0

    I have an application that writes many small files. If I write directly to /nfs4exports/mydir, it writes thousands of files per second; but if I write to /nfs4mounts/mydir, it writes 4 files per second or so. I can greatly increase speed if I add async to /etc/exports. (Writing a single large file to the NFS-mounted directory goes at more than 100 MB/s.) I examine the server statistics and I see that whenever a file is written, it is "committed" (this also happens with NFSv3):

        root@debianvboxtest:~# mount -t nfs4 192.168.1.75:/mydir /mnt
        root@debianvboxtest:~# nfsstat | grep -A 2 'nfs v4 operations'
        Server nfs v4 operations:
        op0-unused   op1-unused   op2-future   access       close        commit
        0         0% 0         0% 0         0% 10        4% 1         0% 1         0%
        root@debianvboxtest:~# echo 'hello' >/mnt/test1056
        root@debianvboxtest:~# nfsstat | grep -A 2 'nfs v4 operations'
        Server nfs v4 operations:
        op0-unused   op1-unused   op2-future   access       close        commit
        0         0% 0         0% 0         0% 11        4% 2         0% 2         0%

    Now in the RFC, I read this: "The COMMIT operation is similar in operation and semantics to the POSIX fsync(2) system call that synchronizes a file's state with the disk (file data and metadata is flushed to disk or stable storage). COMMIT performs the same operation for a client, flushing any unsynchronized data and metadata on the server to the server's disk or stable storage for the specified file."

    I don't understand why the client commits. I don't think that the "echo" shell built-in command runs fsync; if echo wrote to a local file and then the machine went down, the file might be lost. In contrast, the NFS client appears to be sending a COMMIT upon completion of the echo. Why? I am reluctant to use the async NFS server option, because it would apparently ignore COMMIT. I feel as if I had a local filesystem and I had to choose between syncing every file upon close and ignoring fsync altogether. What have I understood wrong?

    Read the article

  • How can the Private Bytes of a process be significantly less than its effect on the system commit charge?

    - by bacar
    On a 64-bit Windows Server 2003, I can see using taskmgr or Process Explorer that the total commit charge is around 3.5GB, yet when I sum the Private Bytes consumed by each process (by running pslist -m and adding all values under the Priv column) the total comes in at 1.6GB. I know which process seems to be causing this (sqlservr.exe), as when I kill the process the commit charge drops dramatically. However, the process in question is consuming only ~220MB of Private Bytes, yet killing it drops the commit charge by ~1.6GB. How is this possible? How can the commit charge be so significantly greater than Private Bytes, which should represent the amount of committed memory? If some other factor contributes to the commit charge, what is that factor and how can I view its impact in Process Explorer?

    Note: I claim that I understand the difference between reserved and committed memory already:

      - my investigations above relate specifically to Private Bytes, which includes only committed memory and excludes reserved memory;
      - the Virtual Size of the process in this case is over 4GB, but this should be irrelevant - Virtual Size in procexp represents reserved, not committed, memory and should not contribute to the commit charge.

    I'm particularly interested in generalised answers to this question: I'm assuming that if sqlservr.exe can behave in this way, any process potentially could.

    Further investigations: I notice that pointing Sysinternals VMMap at this process reports a committed "Private Data" of 1.6GB, despite procexp's reported Private Bytes of 220MB. This is particularly strange given that the documentation for this field in the "Windows® Sysinternals Administrator's Reference" states that: "Private Data memory is memory that is allocated by VirtualAlloc and that is not further handled by the Heap Manager or the .NET runtime, or assigned to the Stack category... VMMap's definition of 'Private Data' is more granular than that of Process Explorer's 'private bytes.' Procexp's 'private bytes' includes all private committed memory belonging to the process." In other words, VMMap's committed "Private Data" should be smaller than procexp's "Private Bytes".

    Also, after reading the 'Process committed memory' section of Mark Russinovich's excellent "Pushing the Limits of Windows: Virtual Memory", he highlights two cases which won't show up in Private Bytes:

      - file mapping views with copy-on-write semantics (however, according to VMMap there is no significant space allocated to Mapped Files);
      - pagefile-backed virtual memory (however, I tried testlimit with the -l flag as suggested, and no significant memory is consumed by pagefile-backed sections).

    Read the article

  • How do I get Composer to download the latest commit in the master branch from GitHub for a package?

    - by pthurmond
    I am trying to get Composer to download the latest commit for the Behat/MinkSelenium2Driver package. That particular repo only has a master branch. I have tried every method I can think of, including deleting the files and letting Composer pull them back in, but it doesn't work. How would I get it to pull in the latest committed files, or at least those from the commit I list below? Specifically I want to get this commit: https://github.com/Behat/MinkSelenium2Driver/commit/2e73d8134ec8526b6e742f05c146fec2d5e1b8d6 Thanks, Patrick
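
    A sketch of the usual Composer incantations, assuming the inline "#commit" reference is accepted by the installed Composer version (the same constraints can also be placed in composer.json's require block instead):

        import subprocess

        # Track the tip of the package's master branch:
        subprocess.run(
            ["composer", "require", "behat/mink-selenium2-driver:dev-master"],
            check=True)

        # Or pin to the exact commit linked in the question:
        subprocess.run(
            ["composer", "require",
             "behat/mink-selenium2-driver:dev-master#2e73d8134ec8526b6e742f05c146fec2d5e1b8d6"],
            check=True)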

    Read the article

  • Clever way to add files to changeset after commit?

    - by Pekka
    It sometimes happens to me that I forget to include a file in a changeset (i.e. a commit of a number of changed files that belong together, e.g. "Fixes bug #45"). I will usually just make a second commit with the same commit message. Is there a clever and simple way to add the "latecomer" to the first commit somehow? Without resorting to svn dump and svndumpfilter?
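
    Subversion cannot amend a committed revision, so one common compromise is to commit the latecomer separately and edit both log messages to cross-reference each other. A minimal sketch, assuming placeholder revision numbers, file and repository names, and a repository whose pre-revprop-change hook permits revprop edits:

        import subprocess

        def svn(*args):
            subprocess.run(["svn", *args], check=True)

        # Commit the file that was forgotten in r1234.
        svn("commit", "-m", "Fixes bug #45 (file missed in r1234)", "latecomer.c")

        # Rewrite the earlier revision's log message to point at the follow-up.
        svn("propset", "--revprop", "-r", "1234", "svn:log",
            "Fixes bug #45 (see also r1235 for a file missed here)",
            "http://svn.example.com/repo")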

    Read the article

  • Git: Is there a way to figure out where a commit was cherry-pick'ed from?

    - by EricSchaefer
    If I cherry-pick from multiple branches, is there a simple way to figure out where the commit came from (e.g. the SHA of the original commit)? Example:

      - at the master branch
      - cherry-pick commit A from the dev branch
      - A becomes D on the master branch

    Before:

        * B (master) Feature Y
        | * C (dev) Feature Z
        | * A Feature X
        |/
        * 3
        * 2
        * 1

    After:

        * D (master) Feature X
        * B Feature Y
        | * C (dev) Feature Z
        | * A Feature X
        |/
        * 3
        * 2
        * 1

    Is it possible to figure out that D was cherry-picked from A (aside from searching for the commit message)?
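
    A sketch of the two usual tools here, using the revisions from the example above: cherry-picking with -x records "(cherry picked from commit <sha>)" in the new message, and for commits that are already in history, matching patch IDs identifies the source.

        import subprocess

        def out(cmd):
            return subprocess.run(cmd, shell=True, check=True,
                                  capture_output=True, text=True).stdout.strip()

        def patch_id(rev):
            # `git patch-id` reads a diff on stdin and prints "<patch-id> <commit>".
            return out(f"git show {rev} | git patch-id").split()[0]

        # D (tip of master) and A (parent of dev's tip) carry the same change
        # exactly when their patch IDs match.
        print(patch_id("master") == patch_id("dev^"))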

    Read the article

  • Transactions not working for SubSonic under Oracle?

    - by Fervelas
    The following code sample works perfectly under SQL Server 2005:

        using (TransactionScope ts = new TransactionScope())
        {
            using (SharedDbConnectionScope scope = new SharedDbConnectionScope())
            {
                MyTable t = new MyTable();
                t.Name = "Test";
                t.Comments = "Comments 123";
                t.Save();
                ts.Complete();
            }
        }

    But under Oracle 10g it throws an "ORA-02089: COMMIT is not allowed in a subordinate session" error. If I only execute the code inside the SharedDbConnectionScope block then everything works OK, but obviously I won't be able to execute operations under a transaction, thus risking data corruption. This is only a small sample of what my real application does. I'm not sure what may be causing this behavior; would anyone care to shed some light on this issue? Many thanks in advance.

    Read the article

  • SQL Queries for Creating a rollback point and to rollback to that specific point

    - by Santhosha
    Hi, as per my project requirements I want to perform two operations:

      - password change
      - unlock account (only unlocking the account, no password change!)

    I want to return success only if both operations succeed. If, say, the password change succeeds and the unlock fails, I can report neither success nor failure. So I want to create a rollback point before the password change; if both queries execute successfully I will commit the transaction, and if one of them fails I will discard the changes by rolling back to the rollback point. I am doing this in C++ using ADO. Are there SQL queries with which I can create the rollback point, revert to it, and commit the transaction? I am using the commands below.

    For the password change:

        ALTER LOGIN [username] WITH PASSWORD = N'password'

    For the account unlock:

        ALTER LOGIN [%s] WITH CHECK_POLICY = OFF
        ALTER LOGIN [%s] WITH CHECK_POLICY = ON

    Thanks in advance!! Santhosh
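
    A minimal sketch of the T-SQL savepoint pattern, shown through Python/pyodbc purely for illustration; the same statements would be issued from C++/ADO. The connection string and login name are placeholders, and it is worth verifying first that ALTER LOGIN is permitted inside an explicit transaction on the server version in use:

        import pyodbc

        cn = pyodbc.connect("DSN=mydb", autocommit=True)
        cur = cn.cursor()

        cur.execute("BEGIN TRANSACTION")
        cur.execute("SAVE TRANSACTION before_account_ops")    # the rollback point
        try:
            cur.execute("ALTER LOGIN [someuser] WITH PASSWORD = N'newpassword'")
            cur.execute("ALTER LOGIN [someuser] WITH CHECK_POLICY = OFF")
            cur.execute("ALTER LOGIN [someuser] WITH CHECK_POLICY = ON")
            cur.execute("COMMIT TRANSACTION")                  # both succeeded
        except pyodbc.Error:
            cur.execute("ROLLBACK TRANSACTION before_account_ops")  # undo both
            cur.execute("COMMIT TRANSACTION")                  # close the outer transaction
            raise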

    Read the article

  • svnlook always returns an error and no output

    - by Pierre-Alain Vigeant
    I'm running this small C# test program, launched from a pre-commit batch file:

        private static int Test(string[] args)
        {
            var processStartInfo = new ProcessStartInfo
            {
                FileName = "svnlook.exe",
                UseShellExecute = false,
                ErrorDialog = false,
                CreateNoWindow = true,
                RedirectStandardOutput = true,
                RedirectStandardError = true,
                Arguments = "help"
            };

            using (var svnlook = Process.Start(processStartInfo))
            {
                string output = svnlook.StandardOutput.ReadToEnd();
                svnlook.WaitForExit();
                Console.Error.WriteLine("svnlook exited with error 0x{0}.", svnlook.ExitCode.ToString("X"));
                Console.Error.WriteLine("Current output is: {0}", string.IsNullOrEmpty(output) ? "empty" : output);
                return 1;
            }
        }

    I am deliberately calling svnlook help and forcing an error so I can see what is going on when committing. When this program runs, SVN displays:

        svnlook exited with error 0xC0000135.
        Current output is: empty

    I looked up error 0xC0000135 and it means "the application failed to initialize properly", although that description wasn't specific to svnlook. Why is svnlook help not returning anything? Does it fail when executed through another process?

    Read the article

  • SVN hook script conflict

    - by user297303
    I am trying to write a pre-commit hook script that will alter a specific svn property of a folder/file. The script looks fairly similar to the one documented in the svn book. I figured out how to set/change the property of a node, and when executing the binding function svn.fs.commit_txn the property of the node actually gets set. But at the moment TortoiseSVN always gives me a conflict on the folder whose property I am altering. I wrote my script in Python, but I am new to Python and to hook scripts. I hope someone can give me a clue as to why I am getting this conflict.

    Read the article

  • MySQL Create tables without commiting current transaction

    - by user276648
    I'd like my program to be able to install plugins and roll back all the changes made if an error occurs, so I create a transaction that holds everything added while installing the plugin. The problem is that the plugin may want to create tables, and doing so automatically commits the current transaction in MySQL. See "Statements That Cause an Implicit Commit" on the MySQL web site. Any idea how I could do this? I thought of using temporary tables, as they are not automatically committed (unless they use too much memory), but it looks like temporary tables cannot be rolled back anyway (and I haven't found a way to convert them to permanent tables). I just found out about savepoints (http://dev.mysql.com/doc/refman/5.1/en/savepoint.html), but I don't really understand how/when they should be used, nor whether they can help me achieve what I want.
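
    A sketch of one manual fallback, assuming placeholder table names and credentials: since DDL implicitly commits in MySQL, record the tables the plugin creates and drop them yourself if a later step fails, alongside a normal rollback for the ordinary DML.

        import mysql.connector

        cn = mysql.connector.connect(user="app", password="secret", database="appdb")
        cur = cn.cursor()
        created = []                      # tables the plugin has created so far

        def create_table(ddl, name):
            cur.execute(ddl)              # DDL: this implicitly commits
            created.append(name)

        try:
            create_table("CREATE TABLE plugin_cfg (k VARCHAR(64), v TEXT)", "plugin_cfg")
            cur.execute("INSERT INTO plugin_cfg VALUES ('enabled', '1')")
            cn.commit()
        except mysql.connector.Error:
            cn.rollback()                 # undoes only the uncommitted DML
            for name in reversed(created):
                cur.execute("DROP TABLE IF EXISTS " + name)   # compensate for the DDL
            raise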

    Read the article

  • How can I create a custom cleanup mode for git?

    - by Danny
    Git's default cleanup mode, strip, removes all lines starting with a # character. Unfortunately, the Trac engine's wiki formatter uses hashes at the beginning of a code block to denote the syntax type. Additionally, any code added verbatim might include hashes, as they are a common comment prefix; Perl comes to mind. In the following example the comments all get destroyed by git's cleanup mode:

        {{{
        #!/usr/bin/perl
        use strict;
        # say hi to the user.
        print "hello world\n";
        }}}

    I'd like to use a custom filter that removes lines beginning with a hash from the bottom of the file upwards, leaving alone the hash-prefixed lines that are embedded in the commit message I wrote. Where or how can I specify this in git? Note, creating a sed or perl script to perform the operation is not a problem; knowing where to hook it into git is the question.
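
    Git does not expose the message cleanup step as a hook of its own; the closest built-in knobs are sketched below, assuming a git recent enough to have them: switch to the whitespace cleanup mode, which keeps '#' lines, or change the comment character so strip no longer eats them.

        import subprocess

        def git(*args):
            subprocess.run(["git", *args], check=True)

        # Option 1: 'whitespace' cleanup trims blank lines and trailing spaces
        # but leaves '#' lines alone (also available per commit via --cleanup=whitespace).
        git("config", "commit.cleanup", "whitespace")

        # Option 2: keep the default 'strip' mode but comment with a different
        # character, so '#!/usr/bin/perl' and Trac's '#!' blocks survive.
        git("config", "core.commentChar", ";")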

    Read the article

  • How can I ensure that nested transactions are committed independently of each other?

    - by Caldera
    If I have a stored procedure that executes another stored procedure several times with different arguments, is it possible to have each of these calls commit independently of the others? In other words, if the first two executions of the nested procedure succeed, but the third one fails, is it possible to preserve the results of the first two executions (and not roll them back)? I have a stored procedure defined something like this in SQL Server 2000:

        CREATE PROCEDURE toplevel_proc ..
        AS
        BEGIN
            ...
            while @row_count <= @max_rows
            begin
                select @parameter ... where rownum = @row_count
                exec nested_proc @parameter
                select @row_count = @row_count + 1
            end
        END

    Read the article

  • When should I make the first commit to source control?

    - by Kendall Frey
    I'm never sure when a project is far enough along to first commit to source control. I tend to put off committing until the project is 'framework-complete', and primarily commit features from then on. (I haven't done any personal projects large enough to have a core framework too big for this.) I have a feeling this isn't best practice, though I'm not sure what all could go wrong. Let's say, for example, I have a project which consists of a single code file. It will take about 10 lines of boilerplate code, and 100 lines to get the project working with extremely basic functionality (1 or 2 features). Should I first check in:

      - the empty file?
      - the boilerplate code?
      - the first features?
      - at some other point?

    Also, what are the reasons to check in at a specific point?

    Read the article

  • How do I obtain and use a CVSNT commit ID?

    - by skiphoppy
    I saw a reference on another question to a unique commit id auto-generated by CVSNT that marks each commit. I think most people in my department are using CVSNT or frontends to it. I found commit identifiers described in the CVSNT manual, but there is no explanation about how to determine what the CVSNT commit identifier is for a particular revision of a file. Is there a way to do this? I'd like to find out what commit identifiers are being generated for other people's checkins so I can group together the files involved in their commits.
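
    A minimal sketch, assuming the client is CVSNT or CVS 1.12+ and that cvs log prints a "commitid:" field on each revision line (worth verifying against local output before relying on it):

        import re, subprocess

        log = subprocess.run(["cvs", "-q", "log", "somefile.c"],
                             check=True, capture_output=True, text=True).stdout

        for line in log.splitlines():
            m = re.search(r"commitid:\s*(\S+)", line)
            if m:
                print("somefile.c has a revision with commitid", m.group(1))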

    Read the article

  • interesting network or git problem

    - by bogumbiker
    I have set up my own git repository with gitosis on a dedicated Debian server. The server is visible from outside via port 22 (port 22 is forwarded from my router to my git server). On the local network the git repository works perfectly. The problem happens when I try to do "git clone.." from a remote server: the clone hangs after around 20-30% of the (small, around 2MB) repository, and the percentage seems to vary randomly. I can scp to and from the git server without any problems. Also, as I mentioned, git clone, push, etc. work perfectly within my internal network. Any idea how to debug this problem? Thanks
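
    A debugging sketch, assuming placeholder host and repository names and a git recent enough to honour GIT_TRACE_PACKET: turn on git's and ssh's verbose tracing to see where the transfer stalls. A clone that reliably dies partway through a forwarded port often points at MTU/router trouble rather than at git itself.

        import os, subprocess

        env = dict(os.environ, GIT_TRACE="1", GIT_TRACE_PACKET="1")
        subprocess.run(
            ["git", "clone", "-v", "git@myrouter.example.com:myrepo.git"],
            env=env)

        # Check the ssh transport on its own as well:
        subprocess.run(["ssh", "-vvv", "git@myrouter.example.com", "true"])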

    Read the article

  • Which SCM/VCS cope well with moving text between files?

    - by pfctdayelise
    We are having havoc with our project at work, because our VCS is doing some awful merging when we move information across files. The scenario is thus: You have lots of files that, say, contain information about terms from a dictionary, so you have a file for each letter of the alphabet. Users entering terms blindly follow the dictionary order, so they will put an entry like "kick the bucket" under B if that is where the dictionary happened to list it (or it might have been listed under both B, bucket and K, kick). Later, other users move the terms to their correct files. Lots of work is being done on the dictionary terms all the time. e.g. User A may have taken the B file and elaborated on the "kick the bucket" entry. User B took the B and K files, and moved the "kick the bucket" entry to the K file. Whichever order they end up getting committed in, the VCS will probably lose entries and not "figure out" that an entry has been moved. (These entries are later automatically converted to an SQL database. But they are kept in a "human friendly" form for working on them, with lots of comments, examples etc. So it is not acceptable to say "make your users enter SQL directly".) It is so bad that we have taken to almost manually merging these kinds of files now, because we can't trust our VCS. :( So what is the solution? I would love to hear that there is a VCS that could cope with this. Or a better merge algorithm? Or otherwise, maybe someone can suggest a better workflow or file arrangement to try and avoid this problem?

    Read the article

  • Removing multiple files from a Git repo that have already been deleted from disk

    - by Codebeef
    I have a Git repo that I have deleted four files from using rm (not git rm), and my Git status looks like this:

        #       deleted:    file1.txt
        #       deleted:    file2.txt
        #       deleted:    file3.txt
        #       deleted:    file4.txt

    How do I remove these files from Git without having to manually go through and add each file like this:

        git rm file1 file2 file3 file4

    Ideally, I'm looking for something that works in the same way that git add . does, if that's possible.
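
    A minimal sketch of the usual one-liners: git add -u stages deletions (and modifications) of tracked files in one go, and git ls-files --deleted can feed the removed paths straight to git rm.

        import subprocess

        # Stage all deletions and modifications of tracked files at once:
        subprocess.run(["git", "add", "-u"], check=True)

        # Or, for deletions only: list what is gone and git-rm it.
        deleted = subprocess.run(["git", "ls-files", "--deleted"],
                                 check=True, capture_output=True, text=True).stdout.split()
        if deleted:
            subprocess.run(["git", "rm", "--", *deleted], check=True)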

    Read the article

  • How to safely backport specific linux kernel commits to an older kernel using git

    - by superc0w
    I'm currently on a stable 2.6.32 kernel, but I need certain fixes from the 2.6.33 branch to be incorporated into this 2.6.32 kernel so that I can create a custom kernel for testing purposes. I can't apply those fixes directly to the 2.6.32 source because they seem to have dependencies on other fixes. Is there any safe way to incorporate only the fixes (and all their dependencies) I need into the 2.6.32 kernel with git to create a custom kernel? Assuming there is a way to do the above, is there a way to track the fixes that have been applied to the custom kernel (i.e. track which commits have been applied to the 2.6.32 kernel to create the custom kernel source)?
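
    A sketch of the usual approach, with a placeholder SHA: cherry-pick each fix with -x, which appends "(cherry picked from commit <sha>)" to the backported commit's message, and let conflicts reveal which prerequisite commits still need picking first. The same annotations make the applied fixes easy to list later.

        import subprocess

        def git(*args):
            return subprocess.run(["git", *args], check=True,
                                  capture_output=True, text=True).stdout

        # On a branch based on v2.6.32, pull over one 2.6.33 fix at a time.
        git("cherry-pick", "-x", "abc123def4567890")

        # Later: which upstream commits does the custom kernel already carry?
        print(git("log", "--oneline", "--grep=cherry picked from commit"))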

    Read the article

  • overwrite parameters passed by querystring

    - by opensas
    I have the following problem: I have a web framework built with classic ASP that saves the page state in hidden textboxes and then issues a submit to itself. Before submitting, we have a javascript function that saves the action in a hidden "action" input and then performs the submit. The page loads the state from those hidden texts, reads the action issued, reads extra parameters (like the id of the record to edit), and then builds the page accordingly. I'd like to make a url link that automatically starts the page with the "edit" action on a given id. So I was thinking about building the following url, for example: http://myapp/user?action=edit&id=23. The problem is that when the page auto-submits, the url string keeps the parameters. I'd like to achieve the following: when the user clicks on http://myapp/user?action=edit&id=23, my page should receive the posted values action=edit and id=23, but the url should be just http://myapp/user, and both parameters should be kept in the hidden texts... (I wonder if I make myself clear...) Thanks a lot, regards, sas. PS: I have a couple of ideas about how to solve it, but I'll post them as answers...

    Read the article

  • How to read changed values with native query during one transaction? (Spring and JPA)

    - by knarf1983
    We have container-managed transactions with Spring and JPA (Hibernate). I need to update a table to "flag" some rows via native statements. Then we insert some rows into this table via the EntityManager from JPATemplate. After that, we need to calculate changes in the table via a native statement (with Oracle's union and minus, complex groups...). I see that the changes from steps 1 and 2 are not committed, and that's why the statement from step 3 fails. I already tried transaction propagation REQUIRES_NEW and EntityManager.flush... neither worked. The steps are:

      1) update SOMETABLE acolumn = somevalue (native)
      2) persist some values into SOMETABLE (via entity manager)
      3) select values from SOMETABLE

    Is there a possibility to read the changes from steps 1 and 2 in step 3?

    Read the article
