Search Results

Search found 2601 results on 105 pages for 'commit'.

Page 11/105 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • "unadd" a file to svn before commit

    - by Lowgain
    I was in the middle of a recursive svn add/commit, and a folder that did not have the proper ignore properties was included. I now have about 100 binary files scheduled for addition, but I haven't committed yet. What is the easiest way to 'undo' this without deleting the files? Thanks!
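
    A minimal sketch of the usual fix (the folder name below is illustrative): svn revert un-schedules the pending add but leaves the files on disk, and the missing ignore property keeps them out of future recursive adds.

        # Un-schedule the accidental add; nothing is deleted from disk.
        svn revert --recursive uploads/

        # Then set the ignore property that was missing (pattern illustrative;
        # note that propset replaces any existing svn:ignore value):
        svn propset svn:ignore "*" uploads/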

    Read the article

  • svn: trying to commit after removing a folder and creating it again (with the same name)

    - by user248959
    Hi, imagine I have made a checkout. Then I removed a folder and created another one with the same name. Now when I try to commit, I get:

        svn: Commit failed (details follow):
        svn: Directory '/opt/lampp/htdocs/prueba4/apps/frontend/modules/moto/.svn' containing working copy admin area is missing

        laptop@laptop:/opt/lampp/htdocs/prueba4$ sudo svn st
        ~ apps/frontend/modules/moto

    If I try to add that folder, I get:

        svn: warning: 'apps/frontend/modules/moto' is already under version control

    What should I do? Regards, Javi
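
    A sketch of one common recovery (paths taken from the question): set the new folder aside, let svn restore the pristine copy with its .svn admin area, then copy the new contents back in.

        mv apps/frontend/modules/moto /tmp/moto.new        # keep the new content safe
        svn update apps/frontend/modules/moto              # restore the versioned folder
        cp -r /tmp/moto.new/* apps/frontend/modules/moto/  # bring the new content back
        svn st                                             # review, then commit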

    Read the article

  • Review before or after code commit, which is better?

    - by fifth
    Traditionally we performed code review before commit. I had an argument with my colleague today, who preferred code review after commit. First, some background:

    - we have some experienced developers, and we also have new hires with almost no programming practice
    - we want fast, short iterations to release our product
    - all team members are located at the same site

    The advantages of code review before commit, as I've learned them:

    - mentors the new hires
    - tries to catch errors, failures and bad designs early in the development cycle
    - lets people learn from others
    - provides a knowledge backup if someone quits

    But I've also had some bad experiences:

    - low efficiency: some changes may sit in review for days
    - it's hard to balance speed and quality, especially for newbies
    - some developers felt distrusted

    As for post-commit review, I know little about it, but what worries me most is the risk of losing control: people may simply never review. Any opinions?

    Read the article

  • What is the term for a really BIG source code commit?

    - by Ida
    Sometimes when we check the commit history of a project, we may see a few commits that are really BIG - they may change 10 or 20 files with hundreds of changed source code lines (delta). I remember there is a commonly used term for such a BIG commit, but I can't recall exactly what that term is. Can anyone help me? What is the term programmers usually use to refer to such a BIG, giant commit? BTW, is committing a lot of changes all together a good practice? UPDATE: thank you guys for the inspiring discussion! But I think "code bomb" is the term I'm looking for.

    Read the article

  • Delivery of JMS message before the transaction is committed

    - by ewernli
    Hi, I have a very simple scenario involving a database and JMS in an application server (Glassfish). The scenario is dead simple:

    1. an EJB inserts a row in the database and sends a message
    2. when the message is delivered to an MDB, the row is read and updated

    The problem is that sometimes the message is delivered before the insert has been committed in the database. This is actually understandable if we consider the 2-phase commit protocol:

    1. prepare JMS
    2. prepare database
    3. commit JMS
    4. (tiny little gap where the message can be delivered before the insert has been committed)
    5. commit database

    I've discussed this problem with others, but the answer was always: "Strange, it should work out of the box". My questions are then: How could it work out of the box? My scenario sounds fairly simple, so why aren't there more people with similar troubles? Am I doing something wrong? Is there a way to solve this issue correctly? Here are a few more details about my understanding of the problem:

    - This timing issue exists only if the participants are treated in this order. If the 2PC treats them in the reverse order (database first, then message broker), it should be fine.
    - The problem happens randomly but is completely reproducible.
    - I found no way to control the order of the participants in a distributed transaction in the JTA, JCA and JPA specifications, nor in the Glassfish documentation. We could assume they are enlisted in the order in which they are used, but with an ORM such as JPA it's difficult to know when the data is flushed and when the database connection is really used.

    Any idea?
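
    A sketch of one common mitigation (class, table and property names below are illustrative, not from the question): treat "row not found" in the MDB as transient and mark the transaction rollback-only, so the broker redelivers the message after its configured redelivery delay, by which time the database commit has usually landed. Redelivery limits and delays are container configuration.

        import java.util.List;
        import javax.annotation.Resource;
        import javax.ejb.MessageDriven;
        import javax.ejb.MessageDrivenContext;
        import javax.jms.JMSException;
        import javax.jms.Message;
        import javax.jms.MessageListener;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        @MessageDriven
        public class RowUpdateMDB implements MessageListener {

            @PersistenceContext
            private EntityManager em;

            @Resource
            private MessageDrivenContext mdc;

            public void onMessage(Message msg) {
                try {
                    long id = msg.getLongProperty("rowId"); // hypothetical message property
                    List<?> rows = em.createNativeQuery(
                            "SELECT id FROM my_table WHERE id = ?") // my_table is illustrative
                            .setParameter(1, id)
                            .getResultList();
                    if (rows.isEmpty()) {
                        // The insert is not visible yet (the 2PC gap): roll back
                        // so the container redelivers the message later.
                        mdc.setRollbackOnly();
                        return;
                    }
                    em.createNativeQuery(
                            "UPDATE my_table SET processed = 1 WHERE id = ?")
                            .setParameter(1, id)
                            .executeUpdate();
                } catch (JMSException e) {
                    throw new RuntimeException(e);
                }
            }
        }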

    Read the article

  • Determining URLs updated via a series of commit logs

    - by adamrubin
    I'm working on a project where I programmatically need to know when a URL has been changed by a developer, post- or during deploy. The obvious answer may be to curl the URL one day, save the output, then curl again x days later and do a diff. That won't work in my case, as I'm only looking for changes the developer made. If the site is a blog, new comments, user-submitted photos, etc. would make that curl diff useless. A RoR example, using GitHub: let's assume I have access to the entire repository and all commit logs between iterations. Is there a way I could see that "/views/people/show.html.erb" was committed, then backtrack from there (maybe by inspecting routes.rb) to come up with the URL I can then hit via a browser?
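
    A minimal sketch of the first half, assuming the two deploy SHAs are known (placeholders below): list only the view files a developer touched between deploys, then map each one back to a URL via config/routes.rb.

        # Files changed between two deploys, restricted to the views:
        git diff --name-only OLD_SHA NEW_SHA -- app/views

        # e.g. app/views/people/show.html.erb -> PeopleController#show,
        # which a route like "map.resources :people" turns into /people/:id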

    Read the article

  • Subversion: Ignore a Directory in the Repo on Commit

    - by Charles
    I have all the boost header files in this repository, and when I do a check-in it takes a really long time to scan all those files that will never change. Because I want users who check out the project to be able to compile without installing boost, I am in a pickle. I want to check out everything, and then ignore updates (there will never be any) on one directory. TortoiseSVN has an ignore-on-commit change list, but I cannot find any way to add an entire directory to this list, and I do not fancy the idea of 'modifying' all the boost files just so I can add them to the change list. Is there a simple solution?
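
    A sketch of one way to fill that gap from the command line (directory name from the question), assuming a 1.5+ client that accepts a recursive changelist: changelists apply to files, so recursing puts every file under boost/ onto TortoiseSVN's special list in one shot.

        svn changelist ignore-on-commit --recursive boost/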

    Read the article

  • SQLAlchemy automatically converts str to unicode on commit

    - by Victor Stanciu
    Hello, when inserting an object into a database with SQLAlchemy, all its properties that correspond to String() columns are automatically transformed from <type 'str'> to <type 'unicode'>. Is there a way to prevent this behavior? Here is the code:

        from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData
        from sqlalchemy.orm import mapper, sessionmaker

        engine = create_engine('sqlite:///:memory:', echo=False)
        metadata = MetaData()
        table = Table('projects', metadata,
            Column('id', Integer, primary_key=True),
            Column('name', String(50))
        )

        class Project(object):
            def __init__(self, name):
                self.name = name

        mapper(Project, table)
        metadata.create_all(engine)
        session = sessionmaker(bind=engine)()

        project = Project("Lorem ipsum")
        print(type(project.name))
        session.add(project)
        session.commit()
        print(type(project.name))

    And here is the output:

        <type 'str'>
        <type 'unicode'>

    I know I should probably just work with unicode, but this would involve digging through some third-party code and I don't have the Python skills for that yet :)
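
    A sketch of one workaround, under the assumption that the unicode values come from the sqlite3 DBAPI when the committed row is reloaded: a TypeDecorator can encode results back to UTF-8 byte strings on the way out.

        from sqlalchemy.types import TypeDecorator, String

        class ByteString(TypeDecorator):
            """A String that hands back str (UTF-8 bytes) instead of unicode."""
            impl = String

            def process_result_value(self, value, dialect):
                # Called when a value is loaded from the database.
                if isinstance(value, unicode):
                    return value.encode('utf-8')
                return value

        # Then declare the column as Column('name', ByteString(50)).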

    Read the article

  • Forbidding developers from committing code while the weekly build is made

    - by Xinwang
    Our development team (about 40 developers) has a formal build every two weeks. We have a process whereby on the "build day" every developer is forbidden to commit code into SVN. I don't think this is a good idea, because:

    - the build can take days (even weeks in bad times) to make and pass BVT
    - people can't commit code as they wish, so they stop working
    - people then commit all their code in one huge batch, which makes the commit comment hard to write

    I want to know if your team has the same policy, and if not, how you handle this situation. Thanks
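
    A sketch of the usual alternative (repository URL and branch name are illustrative): branch at the build point, so trunk stays open for commits while the build and BVT run against the frozen branch.

        svn copy svn://server/trunk svn://server/branches/build-42 \
            -m "Freeze for bi-weekly build 42"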

    Read the article

  • Redoing Commit History in GIT Without Rebase

    - by yar
    Since asking my last question, which turned out to be about rebasing with Git, I have decided that I don't want to rebase at all. Instead I want to:

    - branch
    - work, work, work, checking in and pushing at all times
    - throw out all of those commits and pretend they never happened (so there is one clean commit at the end of the work)

    I do this currently by copying the files to a new directory and then copying them back into a new branch (branched at the same point as my working branch), and then merging that into master or wherever. Is this just plain bad, and why? More important: is there a better/Git way to do this? git rebase -i forces me to merge (and pick, and squash).
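
    A minimal sketch of the built-in equivalent (branch name illustrative): git merge --squash stages the net result of the branch as one pending change, with no merge commit and no rebase.

        git checkout master
        git merge --squash feature   # stage the combined result of the branch
        git commit -m "One clean commit for all the work"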

    Read the article

  • SQLite (C/C++ interface) - How to commit a transaction

    - by AJ
    I am using the SQLite C/C++ interface. Here is my scenario: I have 3 related tables, say A, B, C. There is a function called Set, which gets some input and, based on it, inserts rows into these three tables (sometimes it can be an update on one of the tables). Now I need two things. First, I don't want the autocommit feature; basically I would like to commit after every 1000 calls to the Set function. Secondly, within the Set function itself, if I find that after inserting into two tables the third insert fails, then I have to revert those particular changes made in that Set call. Now I don't see any sqlite3_commit function exposed; I only see a function called sqlite3_commit_hook(), whose documentation says it does something slightly different. Are there any functions exposed for this purpose? What is the way to achieve this behaviour? Can you help me with the best approach? Regards, Arjun
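
    A sketch of the standard approach (error checking trimmed, db is an open sqlite3*): transactions in the C API are driven with plain SQL, so BEGIN/COMMIT handles the batching and a SAVEPOINT gives the partial rollback inside Set.

        #include <sqlite3.h>

        void set_batch_example(sqlite3 *db, int third_insert_failed)
        {
            /* Batch boundary: an explicit transaction suspends autocommit. */
            sqlite3_exec(db, "BEGIN", 0, 0, 0);

            /* Inside Set(): a savepoint scopes just this call's changes. */
            sqlite3_exec(db, "SAVEPOINT set_call", 0, 0, 0);
            /* ... insert into A, B and C here ... */
            if (third_insert_failed)
                sqlite3_exec(db, "ROLLBACK TO set_call", 0, 0, 0);
            sqlite3_exec(db, "RELEASE set_call", 0, 0, 0);

            /* After every 1000 Set() calls: */
            sqlite3_exec(db, "COMMIT", 0, 0, 0);
            sqlite3_exec(db, "BEGIN", 0, 0, 0);
        }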

    Read the article

  • Eclipse Subversive revision numbers on multiple project commit

    - by CannyDuck
    I have 2 projects in Eclipse that refer to the same repository location:

        repository location: svn://server
        project-module1 - svn://server/trunk/project-module1
        project-module2 - svn://server/trunk/project-module2

    If I sync the projects with Subversive and have changes in module1 and module2 that belong to the same context, I select all files and perform one commit; but when I look at my project revisions afterwards, I see that 2 revisions were created, one for module1 and one for module2, with the same comment. How can I change this behavior so that only one revision number is created?

    Read the article

  • New to Git. Made a big mistake with git commit and ended up at an older commit

    - by Ramario Depass
    I'm new to Git and I've made a huge mistake. Git kept rejecting my push with "! [rejected] master -> master (non-fast-forward)", but I still pushed by using --force. This was disastrous: the whole project changed back to the state it was in about a week ago. I've lost so many changes and seem to have been pushed back to an earlier commit. Is there any way I can get back to one of my newer commits? I have made an enormous number of changes and need to get them back. Thanks.
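
    A sketch of the standard recovery, assuming the lost commits were ever made in this clone: the reflog still records them even after a forced push (the @{2} below is a placeholder; pick the entry that matches your last good state).

        git reflog                   # find the SHA of the last good state
        git reset --hard HEAD@{2}    # move the branch back to it
        git push -f origin master    # re-publish (coordinate with your team first)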

    Read the article

  • At which line in the following code should I commit my unit of work?

    - by Pure.Krome
    I have the following code, which is in a transaction. I'm not sure where/when I should be committing my unit of work. On purpose, I've not mentioned what type of Repository I'm using - e.g. Linq-to-Sql, Entity Framework 4, NHibernate, etc. If someone knows where, can they please explain WHY they have said where? (I'm trying to understand the pattern through example(s), as opposed to just getting my code to work.) Here's what I've got:

        using (TransactionScope transactionScope = new TransactionScope(
            TransactionScopeOption.RequiresNew,
            new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
        {
            _logEntryRepository.InsertOrUpdate(logEntry);
            //_unitOfWork.Commit(); // Here, commit #1 ?

            // Now, if this log entry was a NewConnection or a LostConnection,
            // then we need to make sure we update the ConnectedClients.
            if (logEntry.EventType == EventType.NewConnection)
            {
                _connectedClientRepository.Insert(
                    new ConnectedClient { LogEntryId = logEntry.LogEntryId });
                //_unitOfWork.Commit(); // Here, commit #2 ?
            }

            // A (PB) BanKick does _NOT_ register a lost connection,
            // so we need to make sure we handle those scenarios as a LostConnection.
            if (logEntry.EventType == EventType.LostConnection ||
                logEntry.EventType == EventType.BanKick)
            {
                _connectedClientRepository.Delete(
                    logEntry.ClientName, logEntry.ClientIpAndPort);
                //_unitOfWork.Commit(); // Here, commit #3 ?
            }

            _unitOfWork.Commit(); // Here, commit #4 ?
            transactionScope.Complete();
        }

    Read the article

  • Fluent NHibernate causes System.IndexOutOfRangeException on Commit()

    - by Moss
    Hey there. I have been trying to figure out how to configure this mapping with both NH and FluentNH for days, and I think I'm almost there, but not quite. What I need to do is basically map these two entities, which are simplified versions of the actual ones:

        Airlines
            airlineCode  varchar2(3)   // PK
            airlineName  varchar2(50)

        Aircraft
            aircraftCode varchar2(3)   // composite PK
            airlineCode  varchar2(3)   // composite PK, FK referencing PK in Airlines
            aircraftName varchar2(50)

    My classes look like:

        class Airline
        {
            string AirlineCode;
            string AirlineName;
            IList<Aircraft> Fleet;
        }

        class Aircraft
        {
            Airline Airline;
            string AircraftCode;
            string AircraftName;
        }

    Using FluentNH, I mapped them like so:

        // AirlineMap
        Table("Airlines");
        Id(x => x.AirlineCode);
        Map(x => x.AirlineName);
        HasMany<Aircraft>(x => x.Fleet)
            .KeyColumn("Airline");

        // AircraftMap
        Table("Aircraft");
        CompositeId()
            .KeyProperty(x => x.AircraftCode)
            .KeyReference(x => x.Airline);
        Map(x => x.AircraftName);
        References(x => x.Airline)
            .Column("Airline");

    Using NUnit, I'm testing the addition of another aircraft, but upon calling transaction.Commit() after session.Save(aircraft), I get an exception: "System.IndexOutOfRangeException : Invalid index 22 for this OracleParameterCollection with Count=22." The Aircraft class (and the table) has 22 properties. Anyone have any ideas?
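
    For what it's worth, a classic cause of "Invalid index n ... with Count=n" in NHibernate is a column mapped twice - here Airline is bound both by CompositeId().KeyReference(...) and by References(...), so the generated INSERT carries one more parameter than the SQL expects. A sketch of one possible fix (Fluent NHibernate's Not.Insert()/Not.Update() modifiers; verify against your version):

        // Keep the navigation property but let the composite key own the column:
        References(x => x.Airline)
            .Column("Airline")
            .Not.Insert()
            .Not.Update();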

    Read the article

  • CVS update doesn't fetch most recent commit

    - by mizipzor
    I just committed changes to a file (bringing it to revision 1.3). I then updated the file and, to my surprise, got back revision 1.2 (how the file looked before my commit). Checking the history of the file shows that there is a revision 1.3 (most recent) and a 1.2 before that (which is identical to my local copy). I've tried removing the file and updating the folder, causing the file to be downloaded again. I've also tried fetching a clean copy of the file off HEAD. Clearing the cache and fetching HEAD doesn't work, but forcing an update to revision 1.3 explicitly does - and doing a normal update after that sends it back to revision 1.2. This is on a Windows XP machine; the server is also a Windows box, and I'm using TortoiseCVS. Does anyone know what could be causing this? (If you don't know how to fix it, I will award bonus points to anyone who can tell me why CVS broke and, hopefully, how horrible it will be to fix. I want to add it to my list so that maybe some day I can convince my colleagues to finally give it up in favor of a more modern VCS.)
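
    A sketch of the first thing worth checking from the command line (file name illustrative): a sticky tag or sticky date pinned to the working file produces exactly this "update goes back to 1.2" behavior, and -A clears it.

        cvs status file.txt    # look at the "Sticky Tag" / "Sticky Date" lines
        cvs update -A file.txt # -A resets sticky tags/dates and fetches HEAD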

    Read the article

  • git commit best practices

    - by Ivan Z. Siu
    I am using git to manage a C++ project. When working on the project, I find it hard to organize the changes into commits when changing things that are related in many places. For example, I may change a class interface in a .h file, which will affect the corresponding .cpp file and also other files using it. I am not sure whether it is reasonable to put all the stuff into one big commit. Intuitively, I think commits should be modular: each one corresponds to a functional update/change, so that collaborators can pick things accordingly. But sometimes it seems inevitable to include lots of files and changes to make a functional change actually work. Searching did not yield any good suggestions or tips, so I wonder if anyone could give me some best practices for making commits. Thanks! PS. I've been using git for a while and I know how to interactively add/rebase/split/amend/... What I am asking about is the PHILOSOPHY part.
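
    One widely followed rule of thumb, sketched below with illustrative file names: a commit is the smallest change that builds and does one thing, even if that spans many files; git add -p keeps unrelated edits in the same files out of it.

        # Interface change plus every caller it breaks = one logical commit:
        git add -p widget.h widget.cpp callers.cpp
        git commit -m "Change Widget::resize() to take a Size struct"

        # Unrelated edits that were sitting in the same files go separately:
        git add -p widget.cpp
        git commit -m "Fix off-by-one in Widget::clamp()"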

    Read the article

  • How do I customize the format of git rebase --interactive commit messages?

    - by adamjford
    Hi everyone, I use git for my local work (and love it ever so much), and I follow a workflow similar to the one described in this article. So basically, when starting on a new feature, I create a branch for it, go through the usual hack-then-commit cycle, and when I think I'm done with it, I squash it into a single commit using git rebase --interactive master. These squashed commit messages always end up looking like the example in the article, reproduced here:

        [#3275] User Can Add A Comment To a Post

        * Adding Comment model, migrations, spec
        * Adding Comment controller, helper, spec
        * Adding Comment relationship with Post
        * Comment belongs to a User
        * Comment form on Post show page

    Of course, that's after a bunch of removing "# This is the xth commit message" lines and copy/pasting * in front of each commit message. Now, what I was wondering: is there any way to customize how git rebase -i outputs the merged commit messages, so I don't have to do all that hacking? (I use msysgit, if that matters.) Thanks!
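
    A sketch of one way to sidestep the hand-editing (branch name illustrative): git merge --squash pre-fills the commit message with every squashed commit's text, which is much closer to the bulleted form than what rebase -i produces.

        git checkout master
        git merge --squash 3275-user-comments
        git commit   # opens with "Squashed commit of the following:" plus each message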

    Read the article

  • How to automatically split git commits to separate changes to a single file

    - by Hercynium
    I'm just plain stuck as to how to accomplish this, or whether it's even possible. Even if it can be done, I wonder if it could be setting us up for a messed-up, unmanageable repository. I have set up two branches of the code base. One is "master" and the other is "prod". The HEAD of prod is always the latest code in production, and master is the main development branch. Here's the problem, though: we're converting from CVS here at $work and most of the developers are still getting used to git. Their CVS workflow involved tagging versions of individual files for production, then updating the servers using the tag. Unfortunately, this has led to sloppy practices like committing unrelated changes together and then tagging the files after the fact... and the devs want to know how they can do the following: in their local repos, they hack and commit to their hearts' delight, then at the end of the day run a command that takes a list of files and merges those files' commits from the day into their local prod - and only those files - even if those commits combine changes to other files. I know how to split commits with git rebase --interactive, but I have no clue how I would automate splitting commits at all, never mind the way I want to. I do realize the simplest thing would be to just tell them to switch to their prod branches, check out the files from their master branches into the working tree, and then commit to prod. My problem with that is losing the history of their commits over the day.
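
    For reference, a sketch of that "simplest thing" (file paths illustrative), which promotes file contents without per-commit history:

        git checkout prod
        git checkout master -- src/foo.c src/bar.c   # take only these files' content
        git commit -m "Promote today's foo.c and bar.c changes to prod"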

    Read the article

  • CM Synergy file merging

    - by Ravisha
    I am using CM Synergy version 6.4.3410. Each time I make a code change, if someone else checks in the same file it is a nightmare: we need to reconcile, take the latest version from the server, then manually do the merge and check in. There is an option for merge in the reconcile window, but it actually creates a parallel version. I have found that Synergy does not do a smart merge, even though the changes are on different lines of the file.

    Read the article

  • Mercurial hook to disallow committing large binary files

    - by hekevintran
    I want to have a Mercurial hook that will run before committing a transaction and abort the transaction if a binary file being committed is greater than 1 megabyte. I found the following code, which works fine except for one problem: if my changeset involves removing a file, this hook will throw an exception. The hook (I'm using pretxncommit = python:checksize.newbinsize):

        from mercurial import context, util
        from mercurial.i18n import _
        import mercurial.node as dpynode

        '''hooks to forbid adding binary file over a given size

        Ensure the PYTHONPATH is pointing where hg_checksize.py is
        and setup your repo .hg/hgrc like this:

        [hooks]
        pretxncommit = python:checksize.newbinsize
        pretxnchangegroup = python:checksize.newbinsize
        preoutgoing = python:checksize.nopull

        [limits]
        maxnewbinsize = 10240
        '''

        def newbinsize(ui, repo, node=None, **kwargs):
            '''forbid to add binary files over a given size'''
            forbid = False
            # default limit is 10 MB
            limit = int(ui.config('limits', 'maxnewbinsize', 10000000))
            tip = context.changectx(repo, 'tip').rev()
            ctx = context.changectx(repo, node)
            for rev in range(ctx.rev(), tip + 1):
                ctx = context.changectx(repo, rev)
                print ctx.files()
                for f in ctx.files():
                    fctx = ctx.filectx(f)
                    filecontent = fctx.data()
                    # check only for new files
                    if not fctx.parents():
                        if len(filecontent) > limit and util.binary(filecontent):
                            msg = 'new binary file %s of %s is too large: %ld > %ld\n'
                            hname = dpynode.short(ctx.node())
                            ui.write(_(msg) % (f, hname, len(filecontent), limit))
                            forbid = True
            return forbid

    The exception:

        $ hg commit -m 'commit message'
        error: pretxncommit hook raised an exception: apps/helpers/templatetags/include_extends.py@bced6272d8f4: not found in manifest
        transaction abort!
        rollback completed
        abort: apps/helpers/templatetags/include_extends.py@bced6272d8f4: not found in manifest!

    I'm not familiar with writing Mercurial hooks, so I'm pretty confused about what's going on. Why does the hook care that a file was removed if hg already knows about it? Is there a way to fix this hook so that it works all the time?

    Update (solved): I modified the hook to filter out files that were removed in the changeset.

        import itertools

        def newbinsize(ui, repo, node=None, **kwargs):
            '''forbid to add binary files over a given size'''
            forbid = False
            # default limit is 10 MB
            limit = int(ui.config('limits', 'maxnewbinsize', 10000000))
            ctx = repo[node]
            for rev in xrange(ctx.rev(), len(repo)):
                ctx = context.changectx(repo, rev)

                # Do not check the size of files that have been removed:
                # removed files do not have filecontexts, so test for the
                # existence of a filecontext.
                filecontexts = list(ctx)

                def file_was_removed(f):
                    """Returns True if the file was removed"""
                    if f not in filecontexts:
                        return True
                    else:
                        return False

                for f in itertools.ifilterfalse(file_was_removed, ctx.files()):
                    fctx = ctx.filectx(f)
                    filecontent = fctx.data()
                    # check only for new files
                    if not fctx.parents():
                        if len(filecontent) > limit and util.binary(filecontent):
                            msg = 'new binary file %s of %s is too large: %ld > %ld\n'
                            hname = dpynode.short(ctx.node())
                            ui.write(_(msg) % (f, hname, len(filecontent), limit))
                            forbid = True
            return forbid

    Read the article

  • git gui app to show detected renames

    - by karolrvn
    Hi. Is there a git GUI app (for committing) that shows detected renames? git-gui currently shows me a lot of deleted and new files instead of renames. TortoiseGit does not work at all on my system. IntelliJ's git integration somehow does not detect any modifications to commit. TIA.
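
    As a point of comparison, a sketch of how the command line surfaces renames (detection kicks in once both the deletion and the addition are staged; file names illustrative):

        git add -A
        git status             # now reports "renamed: old.c -> new.c"
        git diff --cached -M   # diff of the staged changes with rename detection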

    Read the article
