Search Results

Search found 988 results on 40 pages for 'branching and merging'.


  • Using capistrano to deploy from different git branches

    - by Toms Mikoss
    I am using Capistrano to deploy a RoR application. The codebase is in a git repository, and branching is widely used in development. Capistrano uses the deploy.rb file for its settings, one of them being the branch to deploy from. My problem is this: let's say I create a new branch A from master. The deploy file will reference the master branch. I edit that, so A can be deployed to the test environment. I finish working on the feature and merge branch A into master. Since the deploy.rb file from A is fresher, it gets merged in, and now the deploy.rb in the master branch references A. Time to edit again. That's a lot of seemingly unnecessary manual editing - the parameter should always match the current branch name. On top of that, it is easy to forget to edit the settings each and every time. What would be the best way to automate this process? Edit: Turns out someone had already done exactly what I needed.
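
    A common way to automate this (a minimal sketch, not necessarily the solution the poster found; it assumes the deploy is run from a git working copy with a reasonably recent git) is to have deploy.rb ask git for the currently checked-out branch, with an environment variable as an override:

      # deploy.rb: derive the branch from the local checkout instead of hardcoding it
      set :branch, ENV['BRANCH'] || `git rev-parse --abbrev-ref HEAD`.strip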


  • Repository layout and sparse checkouts

    - by chuanose
    My team is considering a move from ClearCase to Subversion and we are thinking of organising the repository like this:

      \trunk\project1
      \trunk\project2
      \trunk\project3
      \trunk\staticlib1
      \trunk\staticlib2
      \trunk\staticlib3
      \branches\..
      \tags\..

    The issue here is that we have lots of projects (1000+) and each project is a DLL that links in several common static libraries. Therefore checking out everything in trunk is a non-starter, as it would take far too long (~2 GB), and is unwieldy for branching. Using svn:externals to pull in the relevant folders for each project doesn't seem ideal because it results in several working copies of each static library folder. We also cannot do an atomic commit if a change spans the project and some static libraries. Sparse checkouts sound very suitable for this, as we can write a script to pull out only the required directories. However, when we want to merge changes from a branch back to the trunk, we will need to first check out a full trunk. I wonder if there is some advice on 1) a better repository organization, or 2) a way to merge branch changes over to a trunk working copy that is sparse?
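
    For reference, the sparse-checkout workflow being described would look roughly like this (a sketch with placeholder URLs; sparse directories require Subversion 1.5 or later):

      # start with an empty working copy of trunk, then pull in only what one project needs
      svn checkout --depth empty http://server/repo/trunk trunk-wc
      cd trunk-wc
      svn update --set-depth infinity project1 staticlib1 staticlib2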


  • Has subversion lost some of my revisions in a branch?

    - by BombDefused
    I've been working on my project using a Subversion branch. I've used the branching feature a few times before without issue, until today. I've come to merge back into the trunk, and noticed that not everything from my branch was there. I go back to my project folder, which I've been committing to the branch, and look at the log messages using TortoiseSVN (the basic command-line log command shows the same). See the attached image. The revision numbers go up incrementally until revision 303 (the last trunk revision was 299). Then there are numbers missing. The latest commit, about half an hour ago, was 316, but it doesn't show up in the log for the branch. Trying to commit the files again doesn't do anything. I am the only person committing to this repository at present. The missing revisions do not show up in the log for the trunk project. What's going on here? Is this a bug or am I doing something wrong? Update - the revisions do show in the repo browser (thanks Antonio Perez), but I don't understand why they are not being included in the merge?
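
    When diagnosing this sort of thing, Subversion 1.5's mergeinfo command can report which branch revisions it considers merged and which it still considers eligible (a sketch with placeholder URLs):

      # revisions on the branch that svn thinks are still eligible to merge into the trunk WC
      svn mergeinfo --show-revs eligible http://server/repo/branches/mybranch trunk-wc
      # revisions it believes were already merged
      svn mergeinfo --show-revs merged http://server/repo/branches/mybranch trunk-wc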


  • What do you call a generalized (non-GUI-related) "Model-View-Controller" architecture?

    - by dcuccia
    I am currently refactoring code that coordinates multiple hardware components for data acquisition, and I'm feeling a bit like I'm recreating the wheel. In particular, an MVC-like pattern seems to be emerging. Except this has nothing to do with a GUI, and I'm worried that I'm forcing this particular pattern where another might be more appropriate. Here's my scenario: Individual hardware "component" classes obey interface contracts for each hardware type. Previously, component instances were orchestrated by a single monolithic InstrumentController class, which relied heavily on configuration plus branching logic for executing a specific acquisition sequence. After an iteration, I have a separate controller for each component, with these controllers all managed by a small InstrumentControllerBase (or its derivatives). The composite system will receive "input" either programmatically or via inter-hardware component triggering - in either case these interactions are routed to, and handled by, the appropriate controller. So I have something that feels MVC-esque, but I don't know if that's because I'm forcing the point. With little direct MVC experience in application development, it's hard to know if I'm just trying to make my scenario fit MVC, where another pattern might be a good alternative or complement. My problem is that search results and wiki documentation for this family of patterns immediately drop me into GUI-specific discussions. I understand "the M means Model data and the V means View" - but what do you call the superset pattern? Component-Commander-Controller? Whence can I exhume exemplary examples?


  • Is there an algorithm for finding an item that matches certain properties, like a 20 questions game?

    - by lala
    A question about 20-questions games was asked here. However, if I'm understanding it correctly, the answers seem to assume that each question will go down a hierarchical branching tree. A binary tree would work if the game went like this: Is it an animal? Yes. Is it a mammal? Yes. Is it a feline? Yes. That works because feline is an example of a mammal and mammal is an example of an animal. But what if the questions go like this? Is it a mammal? Yes. Is it a predator? Yes. Does it have a long nose? No. You can't branch down a tree with those kinds of questions, because there are plenty of predators that aren't mammals. So you can't have your program just narrow it down to mammal and have predators be a subset of mammals. So is there a way to use a binary search tree that I'm not understanding, or is there a different algorithm for this problem?
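
    One standard alternative (a hedged sketch, not from the original thread) is to treat each object as a bundle of independent yes/no properties and, at every turn, ask the question that best splits the remaining candidates, instead of walking a fixed tree:

      # Each candidate is a set of independent boolean properties, not a hierarchy.
      candidates = {
          "lion":  {"mammal": True,  "predator": True,  "long nose": False},
          "shark": {"mammal": False, "predator": True,  "long nose": False},
          "tapir": {"mammal": True,  "predator": False, "long nose": True},
      }

      def best_question(remaining):
          # Ask the property whose yes/no split is closest to half and half.
          props = {p for obj in remaining.values() for p in obj}
          return min(props, key=lambda p: abs(
              sum(obj.get(p, False) for obj in remaining.values()) - len(remaining) / 2))

      # assumes the remaining candidates are always distinguishable by some property
      remaining = dict(candidates)
      while len(remaining) > 1:
          q = best_question(remaining)
          ans = input("Is it '%s'? (y/n) " % q).strip().lower().startswith("y")
          remaining = {n: o for n, o in remaining.items() if o.get(q, False) == ans}
      print("My guess:", next(iter(remaining), "I give up"))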


  • C++ AMP, for loops to parallel_for_each loop

    - by user1430335
    I'm converting an algorithm to make use of the massive acceleration that C++ AMP provides. The stage I'm at is putting the for loops into the well-known parallel_for_each loop. Normally this should be a straightforward task, but it appears more complex than I first thought. It's a nested loop which I increment in steps of 4 per iteration:

      for(int j = 0; j < height; j += 4, data += width * 4 * 4) {
          for(int i = 0; i < width; i += 4) {

    The trouble I'm having is the use of the index. I can't seem to find a way to properly fit this into the parallel_for_each loop. Using an index of rank 2 is the way to go, but manipulating it via branching will harm the performance gain. I found a similar post: Controlling the index variables in C++ AMP. It also deals with index manipulation, but the increment aspect doesn't cover my issue. With kind regards, Forcecast
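
    One way to avoid branching on the index (a hedged sketch, assuming each 4x4 block can be processed independently, the data is float, and height and width are multiples of 4) is to shrink the compute domain so one thread maps to one block, then scale the index back up inside the kernel:

      #include <amp.h>
      using namespace concurrency;

      void process(float* data, int height, int width)
      {
          extent<2> blocks(height / 4, width / 4);        // one thread per 4x4 block
          array_view<float, 2> view(height, width, data);
          parallel_for_each(blocks, [=](index<2> idx) restrict(amp)
          {
              int j = idx[0] * 4;   // recover the original step-4 loop counters
              int i = idx[1] * 4;
              // ... process the 4x4 block starting at view(j, i) ...
          });
          view.synchronize();   // copy results back to the host
      }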


  • How can I make a "downstream only" copy of a file in TFS

    - by jcollum
    I've got this SQL script that needs to exist in two places in source control. I want to have only one real copy of this file and keep a virtual copy of the file in the other solution. One is needed for a unit test and the other for a development tool. The files should, by definition, always be the same. If they have differences then there's a problem with our process. In SourceGear I could make a virtual copy of a specific version of a file and keep it somewhere else in the source tree. That doesn't seem to be possible in TFS. Is it possible in SVN? So what are my options here? Branching/merging -- which is what the TFS team says I should be doing here -- means just another step that I have to remember to do. Plus it isn't automatic, and I would prefer that this be automated. Is there some way to run an exe on checkin of a specific file? I'm thinking if I could do that then I could do a checkout-edit-checkin of the downstream copy of the file.
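
    The checkin-triggered sync could be sketched with the tf.exe command line (an untested sketch with placeholder paths; in practice it would hang off a CI build or a scheduled task rather than a true server-side trigger):

      rem refresh the downstream copy whenever the master copy changes
      tf get Tool\script.sql
      tf checkout Tests\script.sql
      copy /y Tool\script.sql Tests\script.sql
      tf checkin /comment:"sync downstream copy of script.sql" Tests\script.sql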


  • Media Archive System with branches?

    - by Ian McEwen
    In short, how can I get VCS features (revisioning, branching, and deduplication) for a media collection that's far too large for most/all VCS systems?

    Background: I have a 300GB music folder; unfortunately, I only have the hard drive space for this on my desktop system. However, a good portion of my collection is FLAC; therefore, I could theoretically have a space-optimized version in which I transcode all the FLAC to mp3 or some other lossy format, and use only that version on the laptop. However, a portion of my collection isn't FLAC. And that which isn't FLAC shouldn't be transcoded to an equivalent format; it won't have any space savings, which is the point. Moreover, it shouldn't be duplicated: the mp3/ogg portions of the collection should probably be exactly the same files.

    Thoughts: One solution is to have format-specific organization of my music folders, and use some script to transcode the FLAC directory to mp3 or such into another directory. Another is some sort of hack using entirely separate copies and symbolic links for deduplication, or something similar. But these also have the disadvantage of lacking versioning; I'd like to be able to reorganize my music collection, retag things, etc. and save history. This isn't key, but would be awfully nice. I can't see it as entirely unreasonable to set up VCS hooks or something equivalent to keep directory structure synced between two copies, update tags, and transcode FLAC automatically into the space-optimized copy. Basically, the system I really want is a version control system with two branches: one archival/desktop branch including the FLAC, one space-optimized/laptop branch without it. Most VCSes would deal well with whole chunks being the same files, by compressing in a reasonable way (i.e. not keeping two copies of the same data), and I could also do a lot of what I talk about above with hooks. But I don't know of any VCS that would deal with a 300GB repository of almost 20k files. Many of them would just not even initialize the whole affair; others would just do it inexpressibly slowly or otherwise badly. checkpoint looks like it's designed for something close (it's at least for media), but wouldn't do deduplication well (and I'm not convinced I'd be able to script it to do things like automatic transcoding and directory-structure syncing). So: is there anything out there that can do all this, or should I consider it a programming project?
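
    The transcode-and-dedupe script mentioned under "Thoughts" might look roughly like this (a sketch assuming GNU find, a recent ffmpeg with libmp3lame, and that hard links are acceptable, which requires both copies on the same filesystem):

      src=$HOME/music-archive; dst=$HOME/music-laptop
      cd "$src"
      find . -type d -exec mkdir -p "$dst/{}" \;
      # non-FLAC files: hard-link rather than copy, so no data is duplicated
      find . -type f ! -name '*.flac' -exec ln -f {} "$dst/{}" \;
      # FLAC files: transcode into the space-optimized copy if not already there
      find . -type f -name '*.flac' | while read -r f; do
          out="$dst/${f%.flac}.mp3"
          [ -e "$out" ] || ffmpeg -i "$f" -codec:a libmp3lame -qscale:a 2 "$out"
      done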


  • How to rate-limit concurrent sessions with nginx or haproxy?

    - by bantic
    I'm currently using nginx to reverse-proxy requests from web clients that are doing long-polling to an upstream. Since we're doing long polling (as opposed to websockets), when a client connects it will make multiple HTTP connections to the server in series, re-establishing a connection every time the server sends it some data (or timing out and re-establishing if the server has nothing to say for 10 seconds). What I'd like to do is limit the number of concurrent web clients. Since the clients are constantly making new HTTP requests instead of keeping a single request open, it's a little tricky to count the total number of web clients (because it's not the same as the total number of concurrently connected HTTP clients). The method I've come up with is to track HTTP requests by the originating IP address, and store the IP address somewhere with a TTL of 20 seconds. If a request comes in whose IP isn't recognized, then we check the total number of unexpired stored IP addresses; if that's less than the maximum, then we allow this request through. And if a request comes in with an IP address that we can find in the look-up table and that hasn't yet expired, then it is allowed through as well. All requests that are allowed through have their IPs added to the table (if not there before) and the TTL refreshed to 20 seconds again. I had actually whipped something together that worked correctly this way using nginx along with the Redis 2.0 Nginx Module (and the nginx lua module to simplify the conditional branching), using redis to store my IP addresses with a TTL (the SETEX command), and checking the table size with the DBSIZE command. This worked, but the performance was horrible. nginx and redis ended up using lots of CPU and the machine could only handle a very small number of concurrent requests. The new stick-table and tracking counters that were added to haproxy in version 1.5 (via a commission from serverfault) seem like they might be ideal to implement exactly this sort of rate limiting, because the stick-table can track IP addresses and automatically expire entries. However, I don't see an easy way to get a total count of the unexpired entries in the stick table, which would be necessary to know the number of connected web clients. I'm curious if anyone has any suggestions, whether for nginx or haproxy or for something else entirely that I haven't thought of yet.
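
    In haproxy 1.5 terms, the idea might be expressed something like this (an untested sketch: it assumes the table_cnt and src_http_req_rate sample fetches behave as described in the 1.5 docs, and the limit is a placeholder):

      frontend longpoll
          bind *:80
          stick-table type ip size 100k expire 20s store http_req_rate(20s)
          # a client is "known" if its IP already has an unexpired table entry
          acl known_client src_http_req_rate gt 0
          # the table's entry count stands in for the number of connected web clients
          acl table_full table_cnt ge 500
          http-request deny if table_full !known_client
          http-request track-sc0 src
          default_backend longpoll_servers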


  • Core Data migration problem: "Persistent store migration failed, missing source managed object model"

    - by John Gallagher
    The Background: A Cocoa non-document Core Data project with two managed object models. Model 1 stays the same. Model 2 has changed, so I want to migrate the store. I've created a new version via Design > Data Model > Add Model Version in Xcode. The difference between versions is a single relationship that's been changed from a to-one to a to-many. I've made my changes to the model, then saved. I've made a new mapping model that has the old model as a source and the new model as a destination. I've ensured all mapping models and data models are being compiled, and all are copied to the Resources folder of my app bundle. I've switched on migrations by passing in a dictionary with the NSMigratePersistentStoresAutomaticallyOption key as [NSNumber numberWithBool:YES] when adding the persistent store. Rather than merging all models in the bundle, I've specified the two models I want to use (model 1 and the new version of model 2) and merged them using modelByMergingModels:.

    The Problem: No matter what I do to migrate, I get the error message: "Persistent store migration failed, missing source managed object model."

    What I've Tried: I clean after every single build. I've tried various combinations of having only the model I'm migrating to in Resources, being compiled, or both. Since the error message implies it can't find the source model for my migration, I've tried having every version of the model in both the Resources folder and being compiled. I've made sure I'm not making a really basic error by switching back to the original version of my data model - the app runs fine. I've deleted the mapping model and the new version of the model, cleaned, then recreated both. I've tried making a different change in the new model - deleting an entity instead. I'm at my wits' end. I can't help but think I've made a huge mistake somewhere that I'm not seeing. Any ideas?
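
    For reference, the setup being described amounts to something like this (a condensed sketch; storeURL, model1 and model2v2 are placeholders):

      NSManagedObjectModel *merged = [NSManagedObjectModel modelByMergingModels:
                                         [NSArray arrayWithObjects:model1, model2v2, nil]];
      NSPersistentStoreCoordinator *psc = [[NSPersistentStoreCoordinator alloc]
                                              initWithManagedObjectModel:merged];
      NSDictionary *options = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                               forKey:NSMigratePersistentStoresAutomaticallyOption];
      NSError *error = nil;
      // with this option set, Core Data searches the bundle for a source model
      // matching the store's metadata, plus a mapping model between the versions
      [psc addPersistentStoreWithType:NSSQLiteStoreType configuration:nil
                                  URL:storeURL options:options error:&error];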


  • How to add a blank page to a pdf using iTextSharp?

    - by Russell
    I am trying to do something I thought would be quite simple; however, it is not so straightforward and Google has not helped. I am using iTextSharp to merge PDF documents (letters) together so they can all be printed at once. If a letter has an odd number of pages I need to append a blank page, so we can print the letters double-sided. Here is the basic code I have at the moment for merging all of the letters:

      // initialise
      MemoryStream pdfStreamOut = new MemoryStream();
      Document document = null;
      MemoryStream pdfStreamIn = null;
      PdfReader reader = null;
      int numPages = 0;
      PdfWriter writer = null;
      for (int i = 0; i < letterList.Count; i++)
      {
          byte[] myLetterData = ...;
          pdfStreamIn = new MemoryStream(myLetterData);
          reader = new PdfReader(pdfStreamIn);
          numPages = reader.NumberOfPages;
          // open the streams to use for the iteration
          if (i == 0)
          {
              document = new Document(reader.GetPageSizeWithRotation(1));
              writer = PdfWriter.GetInstance(document, pdfStreamOut);
              document.Open();
          }
          PdfContentByte cb = writer.DirectContent;
          PdfImportedPage page;
          int importedPageNumber = 0;
          while (importedPageNumber < numPages)
          {
              importedPageNumber++;
              document.SetPageSize(reader.GetPageSizeWithRotation(importedPageNumber));
              document.NewPage();
              page = writer.GetImportedPage(reader, importedPageNumber);
              cb.AddTemplate(page, 1f, 0, 0, 1f, 0, 0);
          }
      }

    I have tried using:

      document.SetPageSize(reader.GetPageSizeWithRotation(1));
      document.NewPage();

    at the end of the for loop for an odd number of pages, without success. Any help would be much appreciated!
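
    One commonly cited workaround (hedged; it relies on the PageEmpty property that iTextSharp's PdfWriter exposes): NewPage() is a no-op while the current page is empty, so the blank page has to be marked non-empty before it will be emitted:

      if (numPages % 2 == 1)        // odd page count: force a trailing blank page
      {
          document.NewPage();
          writer.PageEmpty = false; // stop iTextSharp from discarding the empty page
      }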


  • Working with Subversion the same as with Visual Source Safe in Visual Studio

    - by Para
    Hi, at work I just started using Subversion with AnkhSVN instead of Visual SourceSafe. I managed to integrate it well enough, but it doesn't feel the same. Using VSS the following would happen: a user checks out a file by right-clicking and selecting "Check Out" or by editing it. If another user tried to modify the same file he would get an error. No two users could edit the same file at the same time. No fancy merging. No conflicts and no conflict resolutions. I understand that the philosophy behind Subversion is different, but is there any way that the behavior described above could be duplicated with Subversion? There is an option in AnkhSVN called "Automatically lock files on change...", but even if I activate this option, when I edit a file it never gets automatically locked. Even if this option worked, the other users wouldn't see the lock until they committed the file. They wouldn't get an error when they tried to edit it like they would in Visual SourceSafe. So basically: can Visual SourceSafe's behavior be duplicated using Subversion and AnkhSVN?
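
    Subversion's closest equivalent to the VSS model is the svn:needs-lock property, which makes working-copy files read-only until explicitly locked, so other users notice immediately rather than at commit time. A minimal sketch:

      # mark a file as requiring a lock before editing (auto-props can do this globally)
      svn propset svn:needs-lock '*' MyClass.cs
      svn commit -m "require locking" MyClass.cs
      # the file is now read-only in every working copy until someone runs:
      svn lock MyClass.cs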


  • Xcode: Unable to open project... cannot be opened because the project file cannot be parsed...

    - by Chris Butler
    Hi everyone. I have been working for a while to create an iPhone app. Today, when my battery was low, I was working and constantly saving my source files, and then the power went out... Now, when I plug my computer back in and it is getting good power, I try to open my project file and I get an error: "Unable to Open Project: Project ... cannot be opened because the project file cannot be parsed." Is there a way that people know of that I can recover from this? I tried using an older project file and reinserting it and then compiling. It gives me a funky error, which is probably because it isn't finding all the files it wants... I really don't want to rebuild my project from scratch if possible. Thanks in advance. EDIT: OK, I did a diff between this and a slightly older project file that worked and saw that there was some corruption in the file. After merging them (the good and newest parts) it is now working. Great points about the SVN. I have one, but there has been some funkiness trying to sync Xcode with it. I'll definitely spend more time with it now... ;-) Thanks for everyone's comments and suggestions.


  • Split WPF Style XAML Files

    - by anon
    Most WPF themes I have seen are kept in one very long Theme.xaml file. I want to split mine up for readability, so my Theme.xaml looks like this:

      <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
          <ResourceDictionary.MergedDictionaries>
              <ResourceDictionary Source="/PresentationFramework.Aero;v3.0.0.0;31bf3856ad364e35;component/themes/aero.normalcolor.xaml"/>
              <ResourceDictionary Source="Controls/Brushes.xaml"/>
              <ResourceDictionary Source="Controls/Buttons.xaml"/>
              ...
          </ResourceDictionary.MergedDictionaries>
      </ResourceDictionary>

    The problem is that this solution does not work. I have a default button style which is BasedOn the default Aero style for a button:

      <Style x:Key="{x:Type Button}" TargetType="{x:Type Button}" BasedOn="{StaticResource {x:Type Button}}">
          <Setter Property="FontSize" Value="14"/>
          ...
      </Style>

    If I place all of this in one file it works, but as soon as I split it up I get StackOverflowExceptions because it thinks the style is BasedOn itself. Is there a way around this? How does WPF add resources when merging resource dictionaries?
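
    One workaround worth trying (a hedged sketch, not a confirmed fix): merge the Aero dictionary into the child file that declares the BasedOn style, so the StaticResource lookup finds Aero's button style in that file's own merged scope instead of resolving circularly to the style being defined:

      <!-- Controls/Buttons.xaml -->
      <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                          xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
          <ResourceDictionary.MergedDictionaries>
              <ResourceDictionary Source="/PresentationFramework.Aero;v3.0.0.0;31bf3856ad364e35;component/themes/aero.normalcolor.xaml"/>
          </ResourceDictionary.MergedDictionaries>
          <Style x:Key="{x:Type Button}" TargetType="{x:Type Button}"
                 BasedOn="{StaticResource {x:Type Button}}">
              <Setter Property="FontSize" Value="14"/>
          </Style>
      </ResourceDictionary>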


  • Why might changes be populated from one NSManagedObjectContext to another without an explicit merge?

    - by Mike Laurence
    I'm working on an object import feature that utilizes multiple threads/NSManagedObjectContexts, using http://www.mac-developer-network.com/columns/coredata/may2009/ as my guide (note that I am developing for iPhone). For some reason, when I save one of my contexts the other is immediately updated with the changes, even though I have commented out my calls to mergeChangesFromContextDidSaveNotification. Are there any reasons the changes might be populated from one context to the other without an explicit merge? Here is a log of what's going on:

      // 1.) Main context is saved with "Peter Gabriel"
      // 2.) Test context is created, begins with same contents as main context
      // 3.) Main context is inserted with "Spoon"
      // 4.) Test context is inserted with "Phoenix"
      // Contents at this point:
      CoreTest[4341:903] Artists in main context: ( "Peter Gabriel", "Spoon" )
      CoreTest[4341:903] Artists in test context: ( "Peter Gabriel", "Phoenix" )
      // 5.) testContext is saved
      // New contents of contexts:
      CoreTest[4341:903] Artists in main context: ( "Peter Gabriel", "Phoenix", "Spoon" )
      CoreTest[4341:903] Artists in test context: ( "Peter Gabriel", "Phoenix" )

    As you can see, the test context is saved midway through, and the main context suddenly has the new objects from the test context, even though I haven't performed the whole NSManagedObjectContextDidSaveNotification/mergeChangesFromContext combo. My understanding is that no changes will ever be merged unless done so explicitly... does anyone know what's going on here?


  • Need advice or pointers on Release Management Strategies

    - by Murray
    I look after an internal web-based (Java, JSP, Mediasurface, etc.) system that is in constant use (24/5). Users raise tickets for enhancements, bug fixes and other business changes. These issues are signed off individually and assigned to one of three or four developers. Once an issue is complete, it is built and the code committed to SVN only. The changed files (templates, html, classes, jsp) are then copied to a dev server and committed to a different repository, from where they are checked out to the UAT server for testing (this often requires the Tomcat service to be restarted, and occasionally the Mediasurface service as well). The users then test and either reject or approve the release. If approved, the edited files are checked out to the live server and the same process as with UAT is undertaken. If rejected, the developer makes the relevant changes and starts the release process again. This is all done manually without much control. Where different developers are working on similar files, changes sometimes get overwritten by builds done on out-of-sync code; in other cases, changes in UAT are moved to live in error because they are mixed up in files associated with a signed-off release. I would like to move this to a more controlled and automated process where all source code and output files are held in SVN and releases to dev, UAT and live are managed by a CI system (we have TeamCity in house for our .NET applications). My question is on how to manage the releases of multiple changes where some will be signed off and moved on, and others rejected and returned to the developer. The changes may be on overlapping files, and simply merging each release into a release branch means that the rejected changes would have to be backed out of the branch. Is there a way to manage this using SVN and CI, or will I simply have to live with the current system?


  • Merge Word Documents (Office Interop & .NET), Keeping Formatting

    - by mbmccormick
    I'm having some difficulty merging multiple Word documents together using the Microsoft Office Interop Assemblies (Office 2007) and ASP.NET 3.5. I'm able to merge the documents, but some of my formatting is missing (namely the fonts and images). My current merge code is shown below.

      private void CombineDocuments()
      {
          object wdPageBreak = 7;
          object wdStory = 6;
          object oMissing = System.Reflection.Missing.Value;
          object oFalse = false;
          object oTrue = true;
          string fileDirectory = @"C:\documents\";
          Microsoft.Office.Interop.Word.Application WordApp = new Microsoft.Office.Interop.Word.Application();
          Microsoft.Office.Interop.Word.Document wDoc = WordApp.Documents.Add(ref oMissing, ref oMissing, ref oMissing, ref oMissing);
          string[] wordFiles = Directory.GetFiles(fileDirectory, "*.doc");
          for (int i = 0; i < wordFiles.Length; i++)
          {
              string file = wordFiles[i];
              wDoc.Application.Selection.Range.InsertFile(file, ref oMissing, ref oMissing, ref oMissing, ref oFalse);
              wDoc.Application.Selection.Range.InsertBreak(ref wdPageBreak);
              wDoc.Application.Selection.EndKey(ref wdStory, ref oMissing);
          }
          string combineDocName = Path.Combine(fileDirectory, "Merged Document.doc");
          if (File.Exists(combineDocName)) File.Delete(combineDocName);
          object combineDocNameObj = combineDocName;
          wDoc.SaveAs(ref combineDocNameObj, ref m_WordDocumentType, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing);
      }

    I don't care necessarily how this is accomplished. It could output via PDF if it had to. I just want the formatting to carry over. Any help or hints that you could provide would be appreciated! Thanks!
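
    Since InsertFile tends to re-style inserted content against the target document's template, one alternative to experiment with (an untested sketch using calls from the Word 2007 interop object model) is copying each source document and pasting it with its original formatting kept:

      object fileObj = file;
      Microsoft.Office.Interop.Word.Document src = WordApp.Documents.Open(
          ref fileObj, ref oMissing, ref oTrue, ref oMissing, ref oMissing, ref oMissing,
          ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing, ref oMissing,
          ref oMissing, ref oMissing, ref oMissing, ref oMissing);
      src.Content.Copy();                               // copy the whole source document
      wDoc.Application.Selection.EndKey(ref wdStory, ref oMissing);
      wDoc.Application.Selection.PasteAndFormat(
          Microsoft.Office.Interop.Word.WdRecoveryType.wdFormatOriginalFormatting);
      ((Microsoft.Office.Interop.Word._Document)src).Close(ref oFalse, ref oMissing, ref oMissing);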


  • Databinding in WinForms performing async data import

    - by burnside
    I have a scenario where I have a collection of objects bound to a DataGrid in WinForms. If a user drags and drops an item onto the grid, I need to add a placeholder row to the grid and kick off a lengthy async import process. I need to communicate the status of the async import process back to the UI, updating the row in the grid, and have the UI remain responsive to allow the user to edit the other rows. What's the best practice for doing this? My current solution is: bind a thread-safe implementation of BindingList to the grid, filled with the objects that are displayed as rows in the grid. When a user drags and drops an item onto the grid, I create a new object containing the sparse info obtained from the dropped item and add that to the BindingList, disabling the editing of that row. I then fire off a separate thread to do the import, passing it the newly bound object I have just created to fill with data. The import process periodically sets the status of the object and fires an event which is subscribed to by the UI, telling it to refresh the grid to see the new properties on the object. Should I be passing the same object that is bound to the grid to the import process thread to operate on, or should I be creating a copy and merging back the changes to the object on the UI thread using BeginInvoke? Are there any problems with, or advice on, this implementation? Thanks
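
    The copy-and-marshal option from the last question could look roughly like this (a hedged sketch; ImportItem, CopyFrom, ImportNextChunk and the grid name are all illustrative):

      // Background thread: mutate a private copy, then merge on the UI thread.
      private void RunImport(ImportItem boundItem)
      {
          ImportItem working = boundItem.Clone();   // never touched by the UI thread
          while (ImportNextChunk(working))          // long-running work stays off the UI thread
          {
              dataGridView1.BeginInvoke((MethodInvoker)delegate
              {
                  boundItem.CopyFrom(working);      // safe: runs on the thread that owns the grid
              });
          }
      }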


  • I want to merge two PostScript documents, pagewise. How?

    - by Peter Miehle
    Hello, I have a tricky question, so I need to describe my problem: I need to print 2-sided booklets (each a third of a sheet) on normal paper (German A4, but Letter is okay also) and cut the paper afterwards. The pages are in a PostScript Level 2 file (generated by an ancient printer driver, so no chance to patch that, except with ps2ps), generated by me with the ancient OS's printing driver facilities (GpiMove, GpiLine, GpiText, etc.). I do not want to throw away two-thirds of the paper, so my idea is: take files one, two and three, merge them (how?) onto new double-sided sheets by translating/moving files two and three down by one resp. two thirds, and print the resulting new pages. If it helps, I can manage to print one page of the booklet per file. I cannot "speak" PostScript natively, but I am capable of parsing, merging and manipulating files programmatically. Maybe you can point me to a webpage. I've read through the manuals on Adobe's site and followed the links on www.inkguides.com/postscript.asp. Maybe there are techniques with PDF that would help? I can translate with ps2pdf. Thanks for your help, Peter Miehle. PS: My current solution, for e.g. 8 pages: print pages 1, 4 and 7 on sheet one, 2, 5 and 8 on sheet two, and 3, 6 and a blank on sheet three, then cut the paper and restack. But I want to use an electric cutting machine, which works better with thicker stacks of paper.
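
    If installing tools is an option, the psutils package covers most of this translation work; a rough sketch (hedged: psmerge only handles files from the same generating application, which fits here, and the pages may still need reordering with psselect so each physical sheet receives one page from each booklet):

      # normalize the ancient driver's output first
      ps2ps input.ps normalized1.ps
      # concatenate the three booklet files into one stream
      psmerge -oall.ps normalized1.ps normalized2.ps normalized3.ps
      # place three logical pages per physical A4 sheet, stacked for cutting
      psnup -3 -pa4 all.ps printable.ps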


  • StarTeam trunk.

    - by Nix
    I have the unfortunate opportunity of source control via Borland's StarTeam. It unfortunately does very few things well, and one supreme weakness is its view management. I love SVN and come from an SVN mindset. Our issue is that post-production-release we are spending countless hours merging changes into a "production support" environment. Please do not harass me; this was not my doing. I inherited it and am trying to present a better way of managing the repository. It is not an option to switch to a different SCM tool. Current setup:

      Product.1.0 (TRUNK: current production code; pending bug fixes live at this level)
      Product.2.0 (true trunk: anything checked in gets tested, then released in the next production cycle; a lot of changes occur in this view)

    My proposal is going to be to swap them: have all development done on the trunk (Production), tag on releases, and as needed create child views to represent production-support bug fixes:

      Production
      Production.2.0.SP.1

    I cannot find any documentation to support the above proposal, so I am trying to get feedback on whether or not the change is a good idea and if there is anything you would recommend doing differently.


  • Problem reintegrating a branch into the trunk in Subversion 1.5

    - by pako
    I'm trying to reintegrate a development branch into the trunk in my Subversion 1.5 repository. I merged all the changes from the trunk to the development branch prior to this operation. Now when I try to reintegrate the changes from the branch I get the following error message:

      Command: Reintegrate merge https://dev/svn/branches/devel into C:\trunk
      Error: Reintegrate can only be used if revisions 280 through 325 were previously
      Error: merged from https://dev/svn/trunk to the reintegrate
      Error: source, but this is not the case:
      Error: branches/devel/images/test
      Error: Missing ranges: /trunk/images/test:280-324
      ...

    The message then goes on complaining about some folders in my project. But when I try to merge the changes from the trunk to the development branch again, TortoiseSVN tells me that there's nothing to merge (as I already merged all the changes before):

      Command: Merging revisions 1-HEAD of https://dev/svn/trunk into C:\devel, respecting ancestry
      Completed: C:\devel

    I'm trying to follow the instructions from here: http://svnbook.red-bean.com/en/1.5/svn.branchmerge.basicmerging.html, but there's nothing about solving such a problem. Any ideas? Perhaps I should just delete the trunk and then make a copy of my branch? But I'm not really sure if it's safe.
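
    One frequently suggested repair for this state (hedged; worth verifying against the svnbook chapter linked above) is to record the missing ranges as merged on the branch without changing any files, via a record-only merge, then retry the reintegrate:

      # run inside the branch working copy; marks the trunk ranges as already accounted for
      svn merge --record-only -r 279:324 https://dev/svn/trunk/images/test images/test
      svn commit -m "Record missing mergeinfo for images/test"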


  • Doubts About Core Data NSManagedObject Deep Copy

    - by Jigzat
    Hello everyone, I have a situation where I must copy one NSManagedObject from the main context into an editing context. It may sound unnecessary, as it has in similar situations described on Stack Overflow, but it looks like I need it. In my app there are many views in a tab bar, and every view handles information that is related to the other views. I think I need multiple MOCs, since the user may jump from tab to tab and leave unsaved changes in some tab; if the app then saves data in some other tab/view, the changes in the rest of the views are saved without user consent, and in the worst-case scenario the app crashes. For adding new information I got away with using an adding MOC and then merging changes in both MOCs, but for editing it is not that easy. I saw a similar situation here on Stack Overflow, but the app crashes since my data model doesn't seem to use NSMutableSet for the relationships (I don't think I have a many-to-many relationship, just one-to-many). I think the code can be modified so I can retrieve the relationships as if they were attributes:

      for (NSString *attr in relationships) {
          [cloned setValue:[source valueForKey:attr] forKey:attr];
      }

    but I don't know how to merge the changes of the cloned and original objects. I think I could just delete the object from the main context, then merge both contexts and save changes in the main context, but I don't know if that is the right way to do it. I'm also concerned about database integrity, since I'm not sure that the inverse relationships will keep the same reference to the cloned object as if it were the original one. Can someone please enlighten me about this?
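
    For what it's worth, the entity description can report attributes and relationships separately, so a shallow clone can handle each with the right type; a sketch (editingContext is a placeholder, and related objects are translated through their objectIDs because a relationship may not point at objects living in another context):

      NSEntityDescription *entity = [source entity];
      NSManagedObject *cloned = [NSEntityDescription
          insertNewObjectForEntityForName:[entity name]
                   inManagedObjectContext:editingContext];
      // attributes copy over directly
      for (NSString *attr in [entity attributesByName])
          [cloned setValue:[source valueForKey:attr] forKey:attr];
      // relationships must point at objects in the *editing* context
      NSDictionary *rels = [entity relationshipsByName];
      for (NSString *name in rels) {
          if ([[rels objectForKey:name] isToMany]) {
              NSMutableSet *targets = [NSMutableSet set];
              for (NSManagedObject *t in [source valueForKey:name])
                  [targets addObject:[editingContext objectWithID:[t objectID]]];
              [cloned setValue:targets forKey:name];
          } else {
              NSManagedObject *t = [source valueForKey:name];
              if (t) [cloned setValue:[editingContext objectWithID:[t objectID]] forKey:name];
          }
      }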


  • How do I manage application configuration in ASP.NET?

    - by GlennS
    I am having difficulty managing the configuration of an ASP.NET application to deploy for different clients. The sheer volume of different settings which need twiddling takes up large amounts of time, and the current configuration methods are too complicated to enable us to push this responsibility out to support partners. Any suggestions for better methods to handle this, or good sources of information to research?

    How we do things at present:

      - Various XML configuration files which are referenced in Web.config, for example an AppSettings.xml
      - Configurations for specific sites are kept in duplicate configuration files
      - Text files containing lists of data specific to the site
      - In some cases, manual one-off changes to the database
      - C# configuration for the Windsor IoC container

    The specific issues we are having:

      - Different sites with different features enabled, different external services we have to talk to, and different business rules
      - Different deployment types (live, test, training)
      - Configuration keys change across versions (get added, removed), meaning we have to update all the duplicate files
      - We still need to be able to alter keys while the application is running

    Our current thoughts on how we might approach this:

      - Move the configuration into dynamically compiled code (possibly Boo, Binsor or JavaScript)
      - Have some form of diffing/merging configuration: combine a default config with a live/test/training config and a site-specific config


  • Git-svn branch hoses dcommit when using an odd branch structure

    - by Chuck Vose
    I had a boss, past tense, who decided to put svn branches in the same folder as trunk. Normally this wouldn't affect me that much, but since I'm using git-svn, things are not going so well. After I did a fetch, it created a folder for each branch in my root folder, so I have three folders: drupal, trunk, and client. The drupal folder is git's master branch; client and trunk are the svn branches. Merging and committing work great; in fact, everything git-related is working superbly. However, dcommit is totally hosed: it's trying to commit a folder called client and one called trunk. I can't even imagine what havoc this would cause for svn later on. So my question is: what have I done wrong in my .git/config, and is there anything I can do to fix this, or am I going to have to suffer and go back to using svn? Please don't make me go back. I don't think I can take it anymore. Bastard boss knows how to leave a legacy.

      [svn-remote "svn"]
          url = https://svn.mydomain.com/svn/project_name
          fetch = trunk:refs/remotes/trunk
          branches = *:refs/remotes/*
          tags = tags/*:refs/remotes/tags/*

    Normally the branches line would look like this (when using --stdlayout):

      branches = branches/*:refs/remotes/branches/*

    ls output is thus:

      $ ls
      client/ docs/ drupal/ sql/ trunk/
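
    One thing to try (a hedged guess, untested): the glob branches = *:refs/remotes/* treats every top-level folder, trunk included, as a branch, which may be what confuses dcommit. git-svn accepts a brace list, so the actual branch folders could be named explicitly (adjust the list to whichever folders really are branches):

      [svn-remote "svn"]
          url = https://svn.mydomain.com/svn/project_name
          fetch = trunk:refs/remotes/trunk
          ; name the sibling branch folders instead of globbing everything at the root
          branches = {client,drupal}:refs/remotes/*
          tags = tags/*:refs/remotes/tags/*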


  • merge() multiple data frames (do.call ?)

    - by Vincent
    Hi everyone, here's my very simple question: merge() only takes two data frames as input. I need to merge a series of data frames from a list, using the same keys for every merge operation. Given a list named "test", I want to do something like: do.call("merge", test). I could write some kind of loop, but I'm wondering if there's a standard or built-in way to do this more efficiently. Any advice is appreciated. Thanks! Here's a subset of the dataset in dput format (note that merging on country is trivial in this case, but that there are more countries in the original data): test <- list(structure(list(country = c("United States", "United States", "United States", "United States", "United States"), NY.GNS.ICTR.GN.ZS = c(13.5054687, 14.7608697, 14.1115876, 13.3389063, 12.9048351), year = c(2007, 2006, 2005, 2004, 2003)), .Names = c("country", "NY.GNS.ICTR.GN.ZS", "year"), row.names = c(NA, 5L), class = "data.frame"), structure(list( country = c("United States", "United States", "United States", "United States", "United States"), NE.TRD.GNFS.ZS = c(29.3459277, 28.352838, 26.9861939, 25.6231246, 23.6615328), year = c(2007, 2006, 2005, 2004, 2003)), .Names = c("country", "NE.TRD.GNFS.ZS", "year"), row.names = c(NA, 5L), class = "data.frame"), structure(list( country = c("United States", "United States", "United States", "United States", "United States"), NY.GDP.MKTP.CD = c(1.37416e+13, 1.31165e+13, 1.23641e+13, 1.16309e+13, 1.0908e+13), year = c(2007, 2006, 2005, 2004, 2003)), .Names = c("country", "NY.GDP.MKTP.CD", "year"), row.names = c(NA, 5L), class = "data.frame"))
