Search Results

Search found 175 results on 7 pages for 'changeset'.

Page 5/7 | < Previous Page | 1 2 3 4 5 6 7  | Next Page >

  • svn commit is hung at start of commit

    - by jwhitlock
    I'm committing a large changeset, including a large binary file (180 MB), over a slow VPN connection. It looks for all the world like it is stalled. How can I diagnose where it is stuck? The output is: $ svn commit -m "My commit message" Connecting to deprecated signal QDBusConnectionInterface::serviceOwnerChanged(QString,QString,QString) Local Subversion is 1.6.9 on Linux with KDE 4.3, and svn status shows: ML . L ws M ws/manage.py L ws/locales L ws/locales/ja_JP L ws/locales/ja_JP/LC_MESSAGES The process is barely using any resources. The server is Linux, served by Apache and mod_dav_svn, running the same Subversion 1.6.9. I can't see any process on the server that is handling the commit.
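
    A few generic diagnostics can narrow down where a stalled commit is actually stuck. This is a hedged sketch (process names, log paths, and tool availability vary by distribution), not an svn-specific answer:

        # Client: watch the svn process's system calls; repeated write()/select()
        # on a socket usually means the 180 MB file is still trickling over the VPN.
        strace -p "$(pgrep -x svn)" -e trace=network,read,write

        # Client: check whether bytes are still moving on the connection.
        netstat -tnp | grep svn

        # Server: confirm the request arrived and watch for mod_dav_svn errors.
        tail -f /var/log/apache2/access.log /var/log/apache2/error.log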

    Read the article

  • How do I do client-side validation in WPF using WCF RIA Services

    - by lsb
    Hi! I've created a WCF RIA Service that I'd like to use with a WPF application. I've added several System.ComponentModel.DataAnnotations validation rules to the entities' metadata, all of which work great on the server when I call .SubmitChanges(changeSet) from the client. I'd also like to validate my entities on the client side before I submit my changes to the server, but I have no idea how to do so. Any help in this regard would be greatly appreciated! Thanks.
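
    Since the DataAnnotations attributes already live on the entity metadata, one option is to run them manually on the client before calling SubmitChanges. A minimal sketch using System.ComponentModel.DataAnnotations.Validator (the customer instance and ShowErrors method in the usage comment are hypothetical):

        using System.Collections.Generic;
        using System.ComponentModel.DataAnnotations;

        public static class ClientValidation
        {
            // Runs every DataAnnotations rule declared on the entity's properties.
            // Returns true when the entity is valid; otherwise 'results' lists the failures.
            public static bool TryValidate(object entity, out List<ValidationResult> results)
            {
                results = new List<ValidationResult>();
                var context = new ValidationContext(entity, null, null);
                return Validator.TryValidateObject(entity, context, results, true);
            }
        }

        // Usage before submitting (customer is a hypothetical entity instance):
        // List<ValidationResult> errors;
        // if (ClientValidation.TryValidate(customer, out errors))
        //     SubmitChanges();          // proceed as before
        // else
        //     ShowErrors(errors);       // surface the messages in the WPF UI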

    Read the article

  • git rebase branch with all subbranches

    - by knittl
    Is it possible to rebase a branch with all its subbranches in Git? I often use branches as quick/mutable tags to mark certain commits. My history looks like: * master * * featureA-finished * * origin/master Now I want to rebase -i master onto origin/master to change/reword the commit featureA-finished^. After git rebase -i --onto origin/master origin/master master, I basically want the history to be: * master * * featureA-finished * (changed/reworded) * origin/master but what I get is: * master * * (same changeset as featureA-finished) * (changed/reworded) | * featureA-finished |.* (original commit I wanted to edit) * origin/master Is there a way around it, or am I stuck with recreating the branches on the new rebased commits?
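
    One workaround (a sketch against the history shown above, not the only option): let the rebase rewrite master, then move the sub-branch label onto the rewritten commit by hand with git branch -f.

        # rebase/reword as planned
        git rebase -i --onto origin/master origin/master master

        # find the rewritten commit that corresponds to featureA-finished
        git log --oneline origin/master..master

        # re-point the label (replace <sha> with the hash found above)
        git branch -f featureA-finished <sha>

    The original featureA-finished commits remain reachable only from the reflog once the label moves, so nothing needs to be deleted manually.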

    Read the article

  • How do you keep track of what you have released in production?

    - by systempuntoout
    Typically a deploy to production does not involve just a source code update (build) but requires a lot of other important tasks, for example: DB scripts (tables, queries, ...), configuration files (different between test and production), batch jobs to schedule, executables to move to the correct path, and so on. In our company we just send an email to a "release email address" describing the tasks in order: which changesets need to be published (TFS), which stored procedures need to be updated, which DB scripts to run, and so on. I believe there's no magic tool that does these tasks automagically in order, rollback included; but there's probably something better than email that helps to keep track of releases in production. Do you have any tools to suggest or practices to share?

    Read the article

  • TFS Automated Builds to Code Packages

    - by Adam Jenkin
    I would like to hear best practices, or know how people perform the following task, in TFS 2008. I intend to use TFS for building and storing web application projects. Sometimes these projects can contain hundreds of files (*.cs, *.ascx, etc.). During the lifetime of the website, a small bug will get raised, resulting in, say, a stylesheet change and a change to default.aspx.cs. On checking these changes in to TFS, an automated build is triggered (great!); however, to deploy the changes to the target production machine I only need to deploy, for example: style.css, default.aspx, and MyWebApplications.dll. So my question is: can MSBuild be customized to generate a "code pack" of only the files which need deploying to the production server, based on the changeset which caused the rebuild?
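
    MSBuild itself does not know which files a changeset touched, but a custom build step can ask the TFS API and copy just those items into a "code pack" folder. A rough sketch against the TFS 2008 client API follows; the server URL, drop path, and the idea of passing the changeset id in as an argument are all assumptions, and the freshly built assembly (MyWebApplications.dll) would still need to be copied in separately, since it is a build output rather than a checked-in file.

        using System;
        using System.IO;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.VersionControl.Client;

        class ChangesetPackager
        {
            static void Main(string[] args)
            {
                var tfs = TeamFoundationServerFactory.GetServer("http://tfsserver:8080");
                var vcs = (VersionControlServer)tfs.GetService(typeof(VersionControlServer));

                int changesetId = int.Parse(args[0]);           // id handed in by the build
                Changeset cs = vcs.GetChangeset(changesetId);

                foreach (Change change in cs.Changes)
                {
                    if ((change.ChangeType & ChangeType.Delete) != 0)
                        continue;                               // nothing to package for deletes

                    string serverPath = change.Item.ServerItem; // e.g. $/MyWebApp/style.css
                    string localPath = Path.Combine(@"C:\drops\codepack",
                        serverPath.TrimStart('$', '/').Replace('/', '\\'));
                    Directory.CreateDirectory(Path.GetDirectoryName(localPath));
                    change.Item.DownloadFile(localPath);        // grab that exact version
                    Console.WriteLine("packaged " + serverPath);
                }
            }
        }

    The build definition could then invoke this as a post-build step, passing in the changeset id that triggered the build.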

    Read the article

  • acl.allow not working in mercurial

    - by sagar
    When I try to apply some access control in the .hg/hgrc file on an Ubuntu machine, it's not working. I have added the configuration below to the hgrc file on Ubuntu:

        [web]
        allow_push = *
        allow_read = *
        push_ssl = false

        [hooks]
        pretxnchangegroup.acl = python:hgext.acl.hook

        [acl.allow]
        /home/test/testrepository/* = myid

    When I push some data from my Windows repository to testrepository on Ubuntu, it gives the message below:

        pushing to http://ubantuip:8000
        searching for changes
        remote: adding changesets
        remote: adding manifests
        remote: adding file changes
        remote: added 1 changesets with 1 changes to 1 files
        remote: error: pretxnchangegroup.acl hook failed: acl: access denied for changeset 69f00e372c67
        remote: transaction abort!
        remote: rollback completed
        remote: abort: acl: access denied for changeset 69f00e372c67

    Why am I not able to push the changes?
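
    For comparison, here is a hedged sketch of how the acl sections are usually written. Two details are worth checking: the patterns in [acl.allow] match paths inside the repository (glob syntax), not filesystem paths like /home/test/testrepository, and the user name must match the account Mercurial sees for the push; for a push over hg serve/hgweb that is the authenticated HTTP user, so an unauthenticated push arrives with an empty user name and never matches.

        [hooks]
        pretxnchangegroup.acl = python:hgext.acl.hook

        [acl]
        # only check changesets arriving over the network
        sources = serve

        [acl.allow]
        # repo-relative glob = comma-separated list of allowed users
        ** = myid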

    Read the article

  • Mercurial Branching Model for task features

    - by Stan
    My development environment: Windows 7, TortoiseHg, ASP.NET 4.0/MVC3. Test branch: code on the test server. Prod branch: code on the production server. This is my current branching model. The reason for branching out every task (feature) is that some features go live later than others. So in the graph above, task 1 finished earlier (changeset #5) and was merged into the test branch for testing. However, due to bugs or modifications to the original request, changesets #10 and #12 were made. Task 2, meanwhile, finished testing (#8) and was already pushed to live (#9). My problem is that every time I modify a task branch (like #10, #12), I have to do another merge to the test branch (#11, #13), and this makes the graph very messy. Is there any way to solve this issue, or a better branching model?

    Read the article

  • Mercurial adding new file gives no match found error

    - by Ivo
    I have a strange problem with updating Mercurial. Every time I add a file to my repository and then update the repository in another location (for example, within a CI process), the error "no match found" occurs. When I remove the whole folder and clone it again, there are no problems and the newly added file(s) are there. Updating and removing don't cause problems. When I run a verify, the following is shown:

        data/test.txt.i@54: missing revlog!
        54: empty or missing test.txt
        test.txt@54: b80de5d13875 in manifests not found
        3 integrity errors encountered!
        (first damaged changeset appears to be 54)

    Any idea what could be causing this?

    Read the article

  • Mercurial: Recommended way of sending a whole repository to someone

    - by Svish
    I have done some programming and I have used Mercurial for source control. I now need to send all of my code to someone else (because they are going to take over). Since every copy of a Mercurial repository is a full, real repository, my first thought is to do a clone of my repository without an update and then zip and email that clone. Is this a good way, or is there a better way? For example, when using the TortoiseHg Repository Explorer I can right-click on a changeset, and under Export there are various options that look like they could be doing something interesting, but I don't quite understand them or know which one to use.
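
    Zipping a clone certainly works. An alternative worth considering (a sketch; the file name is arbitrary) is a bundle, which packs the entire history into a single file that the recipient can clone from directly:

        hg bundle --all project-full.hg
        # recipient:
        hg clone project-full.hg project

    One small caveat: a repository cloned from a bundle has its default path pointing at the bundle file, so the recipient may want to edit [paths] in .hg/hgrc afterwards.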

    Read the article

  • Zero downtime uploads / Rollback in IIS

    - by NickatUship
    I'm not sure if this is the right way to ask this question, but here's basically what I'd like to do: 1.) Push a changeset to a site in IIS. 2.) Don't interrupt the users. 3.) Be able to roll back effortlessly. So, there are a few things that I know have to happen: 1.) Out-of-proc session - handled 2.) Out-of-proc cache - handled So the questions that remain: 1.) How do I keep from interrupting the users? If I just upload the files to bin, the app recycles and takes 10+ seconds to come back online. 2.) How do I roll back effortlessly? I was thinking a possible solution would be to have two sites set up in IIS, one public and one private. Uploads go to private and get warmed up. After warmup, the sites are swapped. A rollback only entails swapping to private without an upload. This seems sound in theory, but I'm not sure of the mechanics. Any ideas?

    Read the article

  • SQL Server source control from Visual Studio

    - by David Atkinson
    Developers have long had to context-switch between two IDEs: Visual Studio for application code development and SQL Server Management Studio for database development. While this is accepted, especially given the richness of the database development feature set in SSMS, loading a separate tool can seem a little like overkill. This is where SQL Connect comes in: an add-in to Visual Studio that provides a connected development experience for the SQL Server developer. Connected database development involves modifying a development sandbox database, as opposed to offline development, where SQL text files are modified independently of the database. One of the main complaints about Data Dude (VS DBPro) is that it enforces the offline approach. This gripe is what SQL Connect addresses. If you don't already use SQL Source Control, you can get up and running with SQL Connect by adding a new project to your Visual Studio solution. Then choose your existing development database and you're ready to go. If you already use SQL Source Control, you will need to link SQL Connect to your existing database scripts folder repository, so SQL Connect and SQL Source Control can be used collaboratively (note that SQL Source Control v.3.0.9.18 or later is required). Locate the repository (this can be found in the Setup tab in SQL Source Control), and create a working folder for it (here I'm using TortoiseSVN). Back in Visual Studio, locate the SQL Connect panel (in the View menu if it hasn't auto-loaded) and select Import SQL Source Control project. Locate your working folder and click Import. This creates a Red Gate database project under your solution. From here you can modify your development database and manage your changes in source control. To associate your development database with the project, right-click on the project node, select Properties, set the database, and Save. Now you're ready to make some changes. Locate the object you'd like to modify in the Solution Explorer and double-click it to invoke a query window or table designer. You also have the option to edit the creation SQL directly using Edit SQL File in Project. Keeping the development database and the Visual Studio project in sync is as easy as clicking a button. Once you've made your change, you can use whichever mechanism you choose to commit to source control. Here I'm using the free, open-source AnkhSVN to integrate Subversion with Visual Studio. Maintaining your database in a Visual Studio solution means that you can commit database changes and application code changes in the same changeset. This is desirable if you have continuous integration set up, as you want to ensure that all files related to a change are committed atomically, so you avoid an interim "broken build". More discussion of SQL Connect and its benefits can be found in the following article on Simple Talk: No More Disconnected SQL Development in Visual Studio. The SQL Connect project team is currently assessing the backlog for the next development effort, and they'd appreciate your feature suggestions, as well as your votes on their suggestions site: http://redgate.uservoice.com/forums/140800-sql-connect-for-visual-studio- A 28-day free trial of SQL Connect is available from the Red Gate website.

    Read the article

  • Adding custom interfaces to your mock instance.

    - by mehfuzh
    Previously, I made a post showing how you can leverage the dependent interfaces that are implemented by JustMock during the creation of a mock instance. It could be an informative post that lets you understand how JustMock behaves internally for classes or interfaces that implement other interfaces. But the question remains: how can you add your own custom interface to your target mock? In this post, I am going to show you just that. Today I will not start with a dummy class as usual; rather, I will use two of the most common interfaces in the .NET Framework and create a mock combining them. Before I start, I would like to point out that in the recent release of JustMock we have extended Mock.Create<T>(..) with support for additional settings through a closure. You can add your own custom interfaces, directly specify the real constructor that should be called, or even set the behavior of your target. Fast-forwarding directly to the point, here is the test code for creating a mock that contains the mix of ICloneable and IDisposable using the above-mentioned changeset:

        var myMock = Mock.Create<IDisposable>(x => x.Implements<ICloneable>());
        var myMockAsClonable = myMock as ICloneable;
        bool isCloned = false;

        Mock.Arrange(() => myMockAsClonable.Clone()).DoInstead(() => isCloned = true);

        myMockAsClonable.Clone();

        Assert.True(isCloned);

    Here we are creating the target mock for IDisposable and also implementing ICloneable. Finally, we use "as" to get the ICloneable reference, then arrange it, act on it, and assert that the expectation is met properly. This is a very rudimentary example; you can do the same for a given class:

        var realItem = Mock.Create<RealItem>(x =>
        {
            x.Implements<IDisposable>();
            x.CallConstructor(() => new RealItem(0));
        });
        var iDispose = realItem as IDisposable;

        iDispose.Dispose();

    Here I am also calling the real constructor for the RealItem class. Note that you can implement custom interfaces only for non-sealed classes, or else it will end up with a proper exception. Also, this feature doesn't require any profiler; if you are agile or running it inside the Silverlight runtime, feel free to try it with the JustMock add-in turned off :-). TIP: The ability to specify the real constructor can be a useful productivity boost when code changes, and you can refactor the usage with just one click using your favorite refactoring tool. That's it for now; hope that helps. Enjoy!

    Read the article

  • Pre Commit Hook for JSLint in Mercurial and Git

    - by jrburke
    I want to run JSLint before a commit into either a Mercurial or Git repo is done. I want this as an automatic step that is set up, instead of relying on the developer (mainly me) remembering to run JSLint beforehand. I normally run JSLint while developing, but want to enforce a contract on JS files that they pass JSLint before being committed to the repo. For Mercurial, this page spells out the precommit syntax, but the only variables that seem to be available are the parent1 and parent2 changeset IDs involved in the commit. What I really want is a list of the file names involved in the commit, so that I can then pick out the .js files and run jslint over them. It is a similar issue for Git: the default info available as part of the precommit script seems limited. What might work is calling hg status/git status as part of the precommit script, parsing that output to find JS files, and then doing the work that way. I was hoping for something easier though, and I am not sure whether calling status as part of a precommit hook reflects the correct information. For instance, in Git, if the changed files have not been added yet but the git commit uses -a, would the files show up in the correct section of the git status output as being part of the commit set? Update: I got something working; it is visible here: http://github.com/jrburke/dvcs_jslint/

    Read the article

  • Code review: Is it subjective or objective (quantifiable)?

    - by Ram
    I am putting together some guidelines for code reviews. We do not have a formal process yet and are trying to formalize one, and our team is geographically distributed. We are using TFS for source control (we used it for tasks/bug tracking/project management as well, but migrated that to JIRA), with VS2008 for development. What are the things you look for when doing a code review? These are the things I came up with:

        - Enforce FxCop rules (we are a Microsoft shop)
        - Check for performance (any tools?), security (thinking about using OWASP code crawler), and thread safety
        - Adhere to naming conventions
        - The code should cover edge cases and boundary conditions
        - Handle exceptions correctly (do not swallow exceptions)
        - Check if the functionality is duplicated elsewhere
        - A method body should be small (20-30 lines), and methods should do one thing and one thing only (no side effects / avoid temporal coupling)
        - Do not pass/return nulls in methods
        - Avoid dead code
        - Document public and protected methods/properties/variables

    What other things do you generally look for? I am trying to see if we can quantify the review process (so it would produce identical output when reviewed by different persons). Example: saying "the method body should be no longer than 20-30 lines of code" as opposed to saying "the method body should be small". Or is code review very subjective (and would differ from one reviewer to another)? The objective is to have a marking system (say, -1 point for each FxCop rule violation, -2 points for not following naming conventions, 2 points for refactoring, etc.) so that developers would be more careful when they check in their code. This way, we can identify developers who are consistently writing good/bad code. The goal is to have the reviewer spend about 30 minutes max on a review (I know this is subjective, considering that the changeset/revision might include multiple files or huge changes to the existing architecture, but you get the general idea: the reviewer should not spend days reviewing someone's code). What other objective/quantifiable systems do you follow to identify good/bad code written by developers? Book reference: Clean Code: A Handbook of Agile Software Craftsmanship by Robert Martin.

    Read the article

  • Is there a workaround for JDBC w/liquibase and MySQL session variables & client side SQL instructions

    - by David
    Slowly building a starter changeSet XML file for one of my employer's three primary schemas. The only showstopper has been incorporating the sizable library of MySQL stored procedures to be managed by Liquibase. One sproc has been somewhat of a pain to deal with; the first few statements go like:

        use TargetSchema;
        select "-- explanatory inline comment thats actually useful --" into vDummy;
        set @@session.sql_mode='TRADITIONAL' ;
        drop procedure if exists adm_delete_stats ;
        delimiter $$
        create procedure adm_delete_stats(
        ...rest of sproc

    I cut out the use statement as it's counterproductive, but the real issue is the set @@session.sql_mode statement, which causes an exception like:

        liquibase.exception.MigrationFailedException: Migration failed for change set ./foobarSchema/sprocs/adm_delete_stats.xml::1293560556-151::dward_autogen dward:
        Reason: liquibase.exception.DatabaseException: Error executing SQL ...

    And then the delimiter statement is another stumbling block. Doing due-diligence research, I found this rejected MySQL bug report and this MySQL forum thread that goes a little more in depth on the problem. Is there any way I can use the sproc scripts that currently exist with Liquibase, or would I have to rewrite several hundred stored procedures? I've tried the createProcedure, sqlFile, and sql Liquibase tags without much luck, as I think the core issue is that set, delimiter, and similar SQL commands are meant to be interpreted and acted upon by the client-side interpreter before being delivered to the server.
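
    One approach that may help (a hedged sketch, not a guaranteed fix): strip the client-only lines (use, delimiter) out of each procedure script, keep one procedure per file, and load it with splitStatements="false" so Liquibase sends the whole body as a single statement and never needs the delimiter trick. The file names, ids, and schema version below are made up; note too that each changeSet may run on its own pooled connection, so a session-level sql_mode set in one changeSet is not guaranteed to still be in effect in another.

        <databaseChangeLog
            xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                                http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd">

          <changeSet id="adm_delete_stats" author="dward" runOnChange="true">
            <sql>set @@session.sql_mode='TRADITIONAL'</sql>
            <sql>drop procedure if exists adm_delete_stats</sql>

            <!-- adm_delete_stats.sql contains only the CREATE PROCEDURE statement:
                 no use, no delimiter lines. splitStatements="false" stops Liquibase
                 from splitting on the semicolons inside the procedure body. -->
            <sqlFile path="foobarSchema/sprocs/adm_delete_stats.sql"
                     relativeToChangelogFile="true"
                     splitStatements="false"
                     stripComments="false"/>
          </changeSet>
        </databaseChangeLog>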

    Read the article

  • Quality assurance in small developer teams

    - by Kim L
    Ideally, in a project you will have developers, testers, QA manager(s), etc., who all make their contribution to the quality of the code. But what if you don't have that kind of resources? If you just have, for example, three developers and don't have the resources to hire a full-time QA manager, how do you ensure that the code quality meets set standards? What kinds of things do you pay attention to in quality assurance? Quality isn't just about the code doing what it is supposed to do (code being properly tested with automated tests). Quality is also about the code being clean (readable, maintainable, well structured, documented, etc.). I'm looking forward to hearing what kinds of processes you have applied to your team to ensure that the quality meets the set standards. We've applied a process where we rotate the QA role between the developers. Each developer is responsible for QA one week at a time. Each changeset is reviewed and checked to ensure that existing tests pass, required new tests have been written, the code is clean and, of course, the project builds.

    Read the article

  • QueryHistory against a codeplex project hangs indefinitely

    - by Robaticus
    I'm working on a TFS utility that gets the changesets for a particular project in TFS. I've got a home TFS 2010 server which I primarily use for testing, but I decided to give it a try against a CodePlex project to which I contribute. That way, I can test functionality against a larger number of changesets than I have locally. While it works fine in my environment, heading out over the wire to CodePlex has left me stumped. My application queries the history, but then, when trying to iterate through the history (which is when it lazy-loads the IEnumerable), my application hangs. Looking at IntelliTrace, I see a couple of "first chance" exceptions saying that the "item doesn't exist at the specified version" -- which is patently not true, as I'm trying to get history for "$/" at VersionSpec.Latest. I also see two or three consecutive server 500 errors being returned to me after forcing debugging to pause. Other operations (like GetItems()) work fine, so I'm pretty sure authentication isn't an issue. Any thoughts? Here's the code:

        IEnumerable items = vcs.QueryHistory("$/", VersionSpec.Latest, 1, RecursionType.None,
                                             null, null, null, 5, true, false);
        List<ChangesetItem> returnList = new List<ChangesetItem>();
        foreach (Changeset cs in items) // hangs here on first iteration
        {
            ChangesetItem newItem = new ChangesetItem()
            {
                ChangesetId = cs.ChangesetId,
                //ChangesetNote = cs.CheckinNote.Values[0].Value,
                Comment = cs.Comment,
                Committer = cs.Committer,
                CreationDate = cs.CreationDate
            };
            returnList.Add(newItem);
        }
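
    For what it's worth, a hedged variant to try while diagnosing (same QueryHistory overload as above, and assuming a using System.Linq directive for Cast/ToList): request full recursion, skip the per-changeset change lists, and force the enumeration immediately so the server round trips all happen in one place. The deletionId of 1 in the original call may also be worth a second look, since 0 is the usual value for an item that has not been deleted.

        var history = vcs.QueryHistory(
                "$/",
                VersionSpec.Latest,
                0,                      // deletionId: 0 = not a deleted item
                RecursionType.Full,     // RecursionType.None on "$/" queries only the root folder itself
                null, null, null,       // any user, no version range
                5,                      // maxCount
                false,                  // includeChanges: false keeps the payload small
                true)                   // slotMode
            .Cast<Changeset>()
            .ToList();                  // enumerate here rather than lazily later

        foreach (Changeset cs in history)
        {
            Console.WriteLine("{0} {1} {2}", cs.ChangesetId, cs.Committer, cs.CreationDate);
        }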

    Read the article

  • TFS How does merging work?

    - by Johannes Rudolph
    I have a release branch (RB, starting at C5) and a changeset on trunk (C10) that I now want to merge onto RB. The file has changes at C3 (common to both), one in C7 on RB, one in C9 (trunk), and one in C10. So the history for my changed file looks like this: RB: C5 -> C7; Trunk: C3 -> C9 -> C10. When I merge C10 from trunk to RB, I'd expect to see a merge window showing me C10 | C3 | C7, since C3 is the common ancestor revision and C10 and C7 are the tips of my two branches, respectively. However, my merge tool shows me C10 | C9 | C7. My merge tool is configured to show %1(OriginalFile)|%3(BaseFile)|%2(Modified File), so this tells me TFS chose C9 as the base revision. This is totally unexpected and completely contrary to the way I'm used to merges working in Mercurial or Git. Did I get something wrong, or is TFS trying to drive me nuts with merging? Is this the default TFS merge behavior? If so, can you provide insight into why they chose to implement it this way? I'm using TFS 2008 with VS2010 as a client.

    Read the article

  • LINQ to Sql: Insert instead of Update

    - by Christina Mayers
    I have been stuck on this problem for a long time now. All I am trying to do is insert a row in my DB if it's new information, and if not, update the existing one. I've updated many entities in my life before, but what's wrong with this code is beyond me (probably something pretty basic); I guess I can't see the wood for the trees...

        private Models.databaseDataContext db = new Models.databaseDataContext();

        internal void StoreInformations(IEnumerable<EntityType> iEnumerable)
        {
            foreach (EntityType item in iEnumerable)
            {
                EntityType type = db.EntityType.Where(t => t.Room == item.Room).FirstOrDefault();
                if (type == null)
                {
                    db.EntityType.InsertOnSubmit(item);
                }
                else
                {
                    type.Date = item.Date;
                    type.LastUpdate = DateTime.Now;
                    type.End = item.End;
                }
            }
        }

        internal void Save()
        {
            db.SubmitChanges();
        }

    Edit: I just checked the ChangeSet; there are no updates, only inserts. For now I've settled with

        foreach (EntityType item in iEnumerable)
        {
            EntityType type = db.EntityType.Where(t => t.Room == item.Room).FirstOrDefault();
            if (type != null)
            {
                db.EntityType.DeleteOnSubmit(type);
            }
            db.EntityType.InsertOnSubmit(item);
        }

    but I'd love to do updates and lose these unnecessary delete statements.
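
    For reference, a sketch of the insert-or-update pattern with a change-tracking check folded in ('db' is the same DataContext field as above). Two things commonly cause the "inserts only, no updates" symptom: mutating an object that did not come from (or was never attached to) this DataContext, and an entity class with no primary key mapped, since LINQ to SQL does not track changes to entities without one.

        internal void StoreInformations(IEnumerable<EntityType> items)
        {
            foreach (EntityType item in items)
            {
                EntityType existing = db.EntityType.Where(t => t.Room == item.Room)
                                                   .FirstOrDefault();
                if (existing == null)
                {
                    db.EntityType.InsertOnSubmit(item);   // genuinely new row
                }
                else
                {
                    existing.Date = item.Date;            // mutate the tracked entity;
                    existing.End = item.End;              // no InsertOnSubmit/Attach here
                    existing.LastUpdate = DateTime.Now;
                }
            }

            // Peek at what the context intends to send before submitting.
            System.Data.Linq.ChangeSet pending = db.GetChangeSet();
            Console.WriteLine("inserts={0} updates={1}", pending.Inserts.Count, pending.Updates.Count);
            db.SubmitChanges();
        }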

    Read the article

  • Pull/Clone a svn repository into hg with new default branch name?

    - by TheLQ
    I'm forking a project's SVN repo and need to integrate it into my Mercurial repo. To keep things simple, I have a local hgsubversion repo and a local hg repo. However, both the Mercurial and hgsubversion repos use default as their default branch name. My goal here is to put the original code and its updates on one branch and my code on the default branch. However, I have yet to be able to do this.

        W:\programming\tcsite-svn-test>hg clone http://*HG_SITE*/hg .
        no changes found
        updating to branch default
        0 files updated, 0 files merged, 0 files removed, 0 files unresolved

        W:\programming\tcsite-svn-test>hg branch blizzard
        marked working directory as branch blizzard

        W:\programming\tcsite-svn-test>hg commit

        W:\programming\tcsite-svn-test>hg log
        changeset:   0:be13a9580df0
        branch:      blizzard
        tag:         tip
        user:        Leon Blakey <[email protected]>
        date:        Fri Jan 14 23:44:25 2011 -0500
        summary:     Created Blizzard Branch

        W:\programming\tcsite-svn-test>hg pull http://*SVN_SITE*/svn/
        pulling from http://*SVN_SITE*/svn/
        ....
        pulled 23 revisions
        (run 'hg update' to get a working copy)

        W:\programming\tcsite-svn-test>hg branch
        blizzard

        W:\programming\tcsite-svn-test>hg branches
        default                       23:93642a8890ab   <------
        blizzard                       0:be13a9580df0

    Not surprisingly, hgsubversion puts the pulled commits into the default branch when I really need them in the blizzard branch. From the docs, there is no way to rename the branch that a commit came from. Frustratingly, I can't even come up with a way to do it on a repo with only the hgsubversion repo being pulled from, nothing else; all commits are tied to that one branch no matter what. Are there any suggestions on how to pull changes from an SVN repo and rename the branch to something else?

    Read the article

  • How does mercurial's bisect work when the range includes branching?

    - by Joshua Goldberg
    If the bisect range includes multiple branches, how does hg bisect's search work? Does it effectively bisect each sub-branch (I would think that would be inefficient)? For instance, borrowing, with gratitude, a diagram from an answer to this related question, what if the bisect got to changeset 7 on the "good" right-side branch first?

        @  12:8ae1fff407c8:bad6
        |
        o  11:27edd4ba0a78:bad5
        |
        o  10:312ba3d6eb29:bad4
        |\
        | o  9:68ae20ea0c02:good33
        | |
        | o  8:916e977fa594:good32
        | |
        | o  7:b9d00094223f:good31
        | |
        o |  6:a7cab1800465:bad3
        | |
        o |  5:a84e45045a29:bad2
        | |
        o |  4:d0a381a67072:bad1
        | |
        o |  3:54349a6276cc:good4
        |/
        o  2:4588e394e325:good3
        |
        o  1:de79725cb39a:good2
        |
        o  0:2641cc78ce7a:good1

    Will it then look only between 7 and 12 (thus using "dumb" numerical order), missing the real first-bad that we care about? Or is it smart enough to use the full topography and to know that the first bad could be below 7 on the right-side branch, or could still be anywhere on the left-side branch? The purpose of my question is both (a) just to understand the algorithm better, and (b) to understand whether I can liberally extend my initial bisect range without thinking hard about which branch I go to. I've been in high-branching bisect situations where it kept asking me after every test to extend beyond the next merge, so that the whole procedure was essentially O(n). I'm wondering if I can just throw the first "good" marker way back past some nest of merges without thinking about it much, and whether that would save time and give correct results.

    Read the article

  • Handling close-to-impossible collisions on should-be-unique values

    - by balpha
    There are many systems that depend on the uniqueness of some particular value. Anything that uses GUIDs comes to mind (e.g., the Windows registry or other databases), but also things that create a hash from an object to identify it and thus need this hash to be unique. A hash table usually doesn't mind if two objects have the same hash, because the hashing is just used to break down the objects into categories, so that on lookup, not all objects in the table, but only those objects in the same category (bucket), have to be compared for identity to the searched object. Other implementations, however, (seem to) depend on the uniqueness. My example (that's what led me to asking this) is Mercurial's revision IDs. An entry on the Mercurial mailing list correctly states: "The odds of the changeset hash colliding by accident in your first billion commits is basically zero. But we will notice if it happens. And you'll get to be famous as the guy who broke SHA1 by accident." But even the tiniest probability doesn't mean impossible. Now, I don't want an explanation of why it's totally okay to rely on the uniqueness (this has been discussed here, for example). This is very clear to me. Rather, I'd like to know (maybe by means of examples from your own work): Are there any best practices as to covering these improbable cases anyway? Should they be ignored, because it's more likely that particularly strong solar winds lead to faulty hard disk reads? Should they at least be tested for, if only to fail with an "I give up, you have done the impossible" message to the user? Or should even these cases get handled gracefully? For me, especially the following are interesting, although they are somewhat touchy-feely: If you don't handle these cases, what do you do against gut feelings that don't listen to probabilities? If you do handle them, how do you justify this work (to yourself and others), considering there are more probable cases you don't handle, like a supernova?

    Read the article

  • Should checkins be small steps or complete features?

    - by Caspin
    Two uses of version control seem to dictate different checkin styles. Distribution-centric: changesets will generally reflect a complete feature. In general these checkins will be larger. This style is more user/maintainer friendly. Rollback-centric: changesets will be individual small steps so the history can function like an incredibly powerful undo. In general these checkins will be smaller. This style is more developer friendly. I like to use my version control as a really powerful undo while I'm banging away at some stubborn code/bug. In this way I'm not afraid to make drastic changes just to try out a possible solution. However, this seems to give me a fragmented file history with lots of "well, that didn't work" checkins. If instead I try to have my changesets reflect complete features, I lose the use of my version control software for experimentation. However, it is much easier for users/maintainers to figure out how the code is evolving, which has great advantages for code reviews, managing multiple branches, etc. So what's a developer to do: check in small steps or complete features?

    Read the article

  • Is this a situation where I should "hg push -f"?

    - by user144182
    I have two machines, A and B, that both access an external hg repository. I did some development on A, wasn't ready to push the changesets to the external repository, and needed to switch machines, so I pushed the changesets to B using hg serve. Work continued on B; changesets were committed and then pushed to the external repo. I then pulled on A and updated to default/tip. This left the local changesets that had previously been pushed to B as a branch, but because of how I pushed things around, the changes in those local changesets are already in default/tip. I've now continued to make changes and commit locally on A, but when I try to push, hg asks me to merge or do push -f instead. I know push -f is almost never recommended. This situation is close to one where I should use rebase; however, the changesets that would be "rebased" aren't really needed locally or in the external repository, since they are already effectively in default/tip via the push to B. Now, I know I could merge with the latest local changeset and just discard the changes, but then I would still have to commit the merge, which gets me back into rebase territory. Is this a case where I could do hg push -f? Also, why would pushing from A create remote heads if I've updated to default/tip before I continued to commit changesets?

    Read the article
