Search Results

Search found 2823 results on 113 pages for 'perforce branch spec'.

Page 49/113

  • SVN supports historical merges, so how is Mercurial better?

    - by radman
    Hi, I'm a long-time SVN user and have been hearing a lot of brouhaha about Mercurial and decentralised version control systems in general. The main touted feature I am aware of is that merging in Mercurial is much easier because it records information for each merge, so each successive merge is aware of the previous ones. Now, as stated in the red book in the section on merging, SVN already supports this with mergeinfo. I have not actually used this feature (although I wanted to, our repo version wasn't recent enough), but is this SVN feature particularly different from what Mercurial offers? For anyone who is not aware, the suggested workflow for historical merging in SVN is this:

    1. Branch from the development trunk to do your own thing.
    2. Regularly merge changes from trunk into your branch to stay up to date.
    3. Merge back when you're done, with the mergeinfo to smooth the process.

    Without historical merge data this is a nightmare, because the comparison is strictly on the differences in the files and does not take into account the steps taken along the way. So each change in the development trunk puts you further into possible conflict when you merge back. Now what I would like to know is: does merging using Mercurial provide a significant advantage when compared with mergeinfo in SVN, or is this just a lot of hot air about nothing? Has anyone used the mergeinfo feature in SVN, and how good is it actually in practice?
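    For readers who want to try the workflow described above, here is a minimal sketch of the commands involved (the URLs are hypothetical; merge tracking requires Subversion 1.5+ on both client and repository):

        svn copy http://svn.example.com/repos/trunk \
                 http://svn.example.com/repos/branches/feature -m "Create feature branch"
        svn checkout http://svn.example.com/repos/branches/feature feature-wc
        cd feature-wc
        svn merge http://svn.example.com/repos/trunk .    # sync merge; mergeinfo is recorded
        svn commit -m "Sync with trunk"
        # when the feature is done, from an up-to-date trunk working copy:
        svn merge --reintegrate http://svn.example.com/repos/branches/feature .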

    Read the article

  • Git graph with ref logs

    - by Francisco Garcia
    I am trying to improve my custom git log format string. I have almost everything I want except the ref names. I can already get a log similar to what I want:

        > git log --all --source --pretty=oneline --graph
        * b7c7ad3855b54e94ad7ac03f2d2e5b96d6e5ac1d refs/heads/b1 na
        | * 695e1482622a79230fa1d83afb8d70e86847334a refs/heads/master Merge branch 'b1'
        | |\
        | |/
        |/|
        * | ec21f370f82096c0208f43b390da234d92e8c74a refs/heads/b1 beta
        * | c6bc1f55ab3b1bd568493a5de4298dfcb4f66d8d refs/heads/b1 alfa
        * | 762dd868ae87753afc1cbf9803744c76f9a9e121 refs/heads/b1 tango
        | * 57fb27bff06ee9bb569f93ba815e9dcd69521c13 refs/heads/master little last post commit
        |/
        | * 8d613d09b43152a7263b6e02d47ec8a4304f54be refs/heads/b3 the other commit
        | * e1f32b7cb86633351df06e37c2c58ef3f9fafc40 refs/heads/b3 something
        |/
        | * 01b5c6728cf25dd576733211ce75dd3ecc29c7ba refs/heads/b2 this time a

    I am fighting to get a customized output with my own format string like this:

        > git log --pretty=format:'%h - %gD %s' --source -g
        b7c7ad3 - HEAD@{0} na
        ec21f37 - HEAD@{1} beta
        01b5c67 - HEAD@{2} this time a
        01b5c67 - HEAD@{3} this time a
        695e148 - HEAD@{4} Merge branch 'b1'
        57fb27b - HEAD@{5} little last post commit

    My main problem is that I cannot get the ref names I want. I assume it is one of the %g? format strings, but none of them seem to give me the full ref name. Another problem is that the %g? format strings are empty unless I walk the reflogs (-g). However, git refuses to combine --graph with -g. How can I reproduce the first sample with a format string which I can further customize?
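    One possibility worth noting (a sketch, not verified against this exact repository): the %d placeholder prints ref names as decorations, comes from the commit graph rather than the reflog, and therefore combines with --graph; adding --decorate=full should give the full refs/heads/... form:

        git log --all --graph --decorate=full --pretty=format:'%h -%d %s'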

    Read the article

  • Trigger ad-hoc activity within a workflow

    - by Chris Taylor
    I am looking to use WF 4 to replace an existing workflow solution we have. One feature that is currently used in the existing workflow engine is the ability to cancel a current activity and loop back to a FlowSwitch-type activity. So consider the following crude workflow [the original post included an ASCII diagram: starting at 'O', a FlowSwitch routes to one of the activities A1, A2 or A3, with a path from the activities looping back to the FlowSwitch], where we start at 'O' and, based on the input data, the workflow follows the path to 'A2', which is currently blocking on a bookmark waiting for input. However, in the meantime some out-of-band data comes in that means we should cancel 'A2' and return to the FlowSwitch to re-evaluate based on the new data. The question is: what is the best way to handle the out-of-band data that arrived? My initial guess is to have a Parallel activity with one branch waiting for out-of-band data and the other branch containing the workflow sequence described above. If data came in on the branch waiting for the out-of-band data, how would I cancel the current activity in the workflow and force it to return to the FlowSwitch? Or, of course, is there a better way to handle this? I have not actually done any work with WF4, or WF3 for that matter, so I might be missing something obvious here.

    Read the article

  • SIP UAS asks for OPTIONS

    - by TacB0sS
    Hey, I have a UAC that registers to a UAS; after registration the UAS sends me an OPTIONS request. What should I answer it with? Only the audio media streams?

    Update I: Allow me to explain myself better... if I want to invite someone to a session I use the INVITE method and negotiate the media then, for that specific session. But once I register to the server and it asks me for OPTIONS, what should I supply? Everything my client supports? Once I answer, would it deduce that every INVITE I request from now on would use these media descriptions, or would I need to supply new media with every request?

    Update II: Hi Wiz, I was in the process of building a negotiation system, so I tried it out and replied to the UAS. Here is the short dialog we had:

        OPTIONS sip:[email protected] SIP/2.0
        Via: SIP/2.0/UDP xx.xx.xx.xx:5060;branch=z9hG4bK45b197cb;rport=5060;received=xx.xx.xx.xx
        From: "Unknown" <sip:[email protected]>;tag=as66cf26df
        To: <sip:[email protected]>
        Contact: <sip:[email protected]>
        Call-ID: [email protected]
        CSeq: 102 OPTIONS
        User-Agent: Freeswitch 1.2.3
        Max-Forwards: 70
        Date: Sat, 05 Jun 2010 12:06:43 GMT
        Allow: INVITE,ACK,CANCEL,OPTIONS,BYE,REFER,SUBSCRIBE,NOTIFY,INFO
        Supported: replaces
        Content-Length: 0

    OPTIONS In Response To 102:

        SIP/2.0 200 OK
        Via: SIP/2.0/UDP xx.xx.xx.xx:5060;branch=z9hG4bK45b197cb;rport=5060;received=xx.xx.xx.xx
        From: "Unknown" <sip:[email protected]>;tag=as66cf26df
        To: <sip:[email protected]>
        CSeq: 102 OPTIONS
        Call-ID: [email protected]
        Allow: INVITE,CANCEL,ACK,BYE,OPTIONS
        Content-Type: application/sdp
        Content-Length: 248

        v=0
        o=310 4515233118481497946 4515233118481497946 IN IP4 10.0.0.1
        s=-
        i=Nu-Art Software - TacB0sS VoIP information
        c=IN IP4 10.0.0.1
        m=audio 40000 RTP/AVP 0 8 101
        a=rtpmap:0 PCMU/8000
        a=rtpmap:8 PCMA/8000
        a=rtpmap:101 telephone-event/8000

    This response caused the server to stop sending me the OPTIONS request. Does this mean I can only use these parameters with the server now, or, as you said, does it not matter? Thanks, Adam.

    Read the article

  • How do you parse the XDG/gnome/kde menu/desktop item structure in C++?

    - by Joe Soul-bringer
    I would like to parse the menu structure for Gnome Panels (the standard Gnome desktop application launcher) and its KDE equivalent using C/C++ function calls. That is, I'd like a list of the base menu categories and submenus installed on a given machine. I would like to do this using fairly simple C/C++ function calls (with NO shelling out, please). I understand that these menus are in the standard XDG format, and that this menu structure is stored in XML files such as /home/user/.config/menus/applications.menu.

    I've looked here: http://www.freedesktop.org/wiki/Specifications/menu-spec?action=show&redirect=Standards%2Fmenu-spec - but all they offer is the standard and some shell files to insert item entries (I don't want shell scripts, I don't want installation, and I definitely don't want to create a C library from the XDG specification. I want to find the existing menu structure). I've also looked here: http://library.gnome.org/admin/system-admin-guide/stable/menustructure-13.html.en for more notes on these structures. None of this gives me a good idea of how to determine the menu structure from a C/C++ program.

    The actual Gnome menu files seem to be horrifically hairy things - they don't show the menu structure itself, but give an XML-coded description of all the changes the menus have gone through since installation. I assume Gnome Panels parses these files, so there's a function buried somewhere to do this, but I've yet to find it after scanning library.gnome.org for a couple of days. I've scanned the Nautilus source code as well, but Panels seems to live elsewhere, or is well buried. Thanks in advance.

    Read the article

  • Behavior of local variables in JavaScript's with() statement

    - by thr
    I noticed some weird (and, to my knowledge, undefined by the ECMA 3.0 spec, at least) behavior. Take the following snippet:

        var foo = { bar: "1", baz: "2" };
        alert(bar);
        with (foo) {
            alert(bar);
            alert(bar);
        }
        alert(bar);

    It crashes in both Firefox and Chrome, because "bar" doesn't exist at the first alert() statement; this is as expected. But if you add a declaration of bar inside the with() statement, so it looks like this:

        var foo = { bar: "1", baz: "2" };
        alert(bar);
        with (foo) {
            alert(bar);
            var bar = "g2";
            alert(bar);
        }
        alert(bar);

    it will produce the following: undefined, 1, g2, undefined. It seems that if you create a variable inside a with() statement, most browsers (tested on Chrome and Firefox) will make that variable exist outside that scope also; it's just set to undefined. Now, from my perspective, bar should only exist inside the with() statement. And if you make the example even weirder:

        var foo = { bar: "1", baz: "2" };
        var zoo;
        alert(bar);
        with (foo) {
            alert(bar);
            var bar = "g2";
            zoo = function() { return bar; }
            alert(bar);
        }
        alert(bar);
        alert(zoo());

    it will produce this: undefined, 1, g2, undefined, g2. So the bar inside the with() statement does not exist outside of it, yet the runtime somehow "automagically" creates a variable named bar that is undefined in its top-level scope (global or function); this variable does not refer to the same one as inside the with() statement, and it will only exist if a with() statement has a variable named bar defined inside it. Very weird, and inconsistent. Does anyone have an explanation for this behavior? There is nothing in the ECMA spec about this.

    Read the article

  • Is it possible to defer member initialization to the constructor body?

    - by Kjir
    I have a class with a member object which doesn't have a default constructor. I'd like to initialize this member in the constructor, but it seems that in C++ I can't do that. Here is the class:

        #include <boost/asio.hpp>
        #include <boost/array.hpp>

        using boost::asio::ip::udp;

        template<class T>
        class udp_sock {
        public:
            udp_sock(std::string host, unsigned short port);
        private:
            boost::asio::io_service _io_service;
            udp::socket _sock;
            boost::array<T,256> _buf;
        };

        template<class T>
        udp_sock<T>::udp_sock(std::string host = "localhost", unsigned short port = 50000)
        {
            udp::resolver res(_io_service);
            udp::resolver::query query(udp::v4(), host, "spec");
            udp::endpoint ep = *res.resolve(query);
            ep.port(port);
            _sock(_io_service, ep);
        }

    The compiler tells me, basically, that it can't find a default constructor for udp::socket, and from my research I understood that C++ implicitly initializes every member before entering the constructor body. Is there any way to do it the way I wanted to, or is that too "Java-oriented" and not feasible in C++? I worked around the problem by defining my constructor like this:

        template<class T>
        udp_sock<T>::udp_sock(std::string host = "localhost", unsigned short port = 50000)
            : _sock(_io_service)
        {
            udp::resolver res(_io_service);
            udp::resolver::query query(udp::v4(), host, "spec");
            udp::endpoint ep = *res.resolve(query);
            ep.port(port);
            _sock.bind(ep);
        }

    So my question is more out of curiosity, and to better understand OOP in C++.

    Read the article

  • Is the Java classpath final after JVM startup?

    - by Jens
    Hi, I have read a lot about the Java class-loading process lately. Often I came across texts claiming that it is not possible to add classes to the classpath during runtime and load them without class-loader hackery (URLClassLoaders etc.).

    As far as I know, classes are loaded dynamically. That means their bytecode representation is only loaded and transformed into a java.lang.Class object when needed. So shouldn't it be possible to add a JAR or *.class file to the classpath after the JVM has started and load those classes, provided they haven't been loaded yet? (To be clear: in this case the classpath is a simple folder on the filesystem. "Adding a JAR or *.class file" simply means dropping it into this folder.) And if not, does that mean that the classpath is searched on JVM startup and all fully qualified names of the found classes are cached in an internal "list"?

    It would be nice if you could point me to some sources in your answers, preferably the official Sun documentation: the Sun JVM spec. I have read the spec but could not find anything about the classpath and whether it's finalized on JVM startup.

    P.S. This is a theoretical question. I just want to know if it is possible; there is nothing practical I want to achieve. There is just my thirst for knowledge :)

    Read the article

  • Rails and Git workflow advice

    - by dannymcc
    Hi everyone, I need some advice on my desired setup with Git and Rails. For my first Rails application I used a base application template from GitHub; I then made a ton of changes and now have a full application which is fairly customised. I have now extracted all of the changes I made to the files of the base application template and have committed them to my fork of the GitHub repo.

    Ideally, I would like to have the base application template as a branch in my application and rebase it against my master. Is this actually possible? The reason I want to do this: I want to keep the base application template up to date and functional, so for my next project I can just clone the base application template from my GitHub fork and get working. Likewise, if anyone fixes any bugs in the base application template, could I merge those fixes into any application that has the base application template as a branch? Is this possible? Is there a better or more common way to do this? Thanks in advance! Thanks, Danny
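    One common way to get this effect is to track the template as a second remote rather than a branch. A rough sketch (the remote name and URL here are hypothetical):

        cd myapp
        git remote add template git://github.com/you/base-app.git   # your fork of the template
        git fetch template
        git merge template/master    # pull template bug fixes into this app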

    Read the article

  • How to handle not-enough-isolatedstorage issue deep in data loader?

    - by Edward Tanguay
    I have a Silverlight application which loads data from many external data sources into IsolatedStorage. While loading any of these sources, if there is not enough IsolatedStorage, it ends up in a catch statement. At that point, in that catch statement, I would like to ask the user to click a button to approve Silverlight increasing the IsolatedStorage capacity. The problem is that although I have a "SwitchPage()" method with which I display a page, if I access it at this point it is too deep in the loading process, and the application always goes into an endless loop, hangs and crashes. I need a way to branch out of the application flow completely, to an independent UserControl which has a button and code-behind that does the increase logic. What is a solution for an application to branch out of a loading-process catch statement like this and display a user control with a button asking the user to increase the IsolatedStorage?

        public static void SaveBitmapImageToIsolatedStorageFile(OpenReadCompletedEventArgs e, string fileName)
        {
            try
            {
                using (IsolatedStorageFile isf = IsolatedStorageFile.GetUserStoreForApplication())
                {
                    using (IsolatedStorageFileStream isfs = new IsolatedStorageFileStream(fileName, FileMode.Create, isf))
                    {
                        Int64 imgLen = (Int64)e.Result.Length;
                        byte[] b = new byte[imgLen];
                        e.Result.Read(b, 0, b.Length);
                        isfs.Write(b, 0, b.Length);
                        isfs.Flush();
                        isfs.Close();
                        isf.Dispose();
                    }
                }
            }
            catch (IsolatedStorageException)
            {
                // handle: present user with button to increase isolated storage
            }
            catch (TargetInvocationException)
            {
                // handle: not saved
            }
        }

    Read the article

  • Routing MVC on the web

    - by generic_noob
    Hi, I was wondering if anyone could give me some advice on how I could improve the routing (and/or architecture) for each 'section' of my application. (I'm writing in PHP5 and trying to use strict MVC.) Basically, I have a generic index page for the app, which spews out boilerplate stuff like jQuery and the CSS etc., and it also generates the main navigation for the entire site, but I'm unsure about the best approach to connect the 'main menu' items (hyperlinks) with their associated controllers.

    Up until now I have been appending strings to the URL and using a switch statement to branch to the correct controller (and view) by extracting the strings back out of $_GET[], to let it execute the code for the corresponding action. For instance, if I had a basic CRUD system for customer data, the URL to edit a customer's details would look like 'www.example.com/index.php?page=customer&action=edit&id=4'. I'm worried that there is a security concern in doing it this way, and I'm not sure of an alternative for branching from the main index.php file to the correct controller for each action once the user has clicked the link. Would it be better to use mod_rewrite to disguise the controller names? Or to create a system similar to the ASP MVC framework, where there is a separate routing system in which each URL is filtered to get the associated controller? Cheers!

    Read the article

  • Qt Creator CONFIG (debug, release) switches do NOT work

    - by killdaclick
    Problem: CONFIG(debug,debug|release) and CONFIG(release,debug|release) are both always evaluated, whichever of debug or release is chosen in Qt Creator 2.8.1 for Linux.

    My configuration in the Qt Creator application (stock - default for a new project):

    Projects->Build Settings->Debug Build Steps: qmake build configuration: Debug
    Effective qmake call: qmake2 proj.pro -r -spec linux-gnueabi-oe-g++ CONFIG+=debug

    Projects->Build Settings->Release Build Steps: qmake build configuration: Release
    Effective qmake call: qmake2 proj.pro -r -spec linux-gnueabi-oe-g++

    My configuration in proj.pro:

        message(Variable CONFIG:)
        message($$CONFIG)
        CONFIG(debug,debug|release) {
            message(Debug build)
        }
        CONFIG(release,debug|release) {
            message(Release build)
        }

    Output on the console for Debug:

        Project MESSAGE: Variable CONFIG:
        Project MESSAGE: lex yacc warn_on debug uic resources warn_on release incremental link_prl no_mocdepend release stl qt_no_framework debug console
        Project MESSAGE: Debug build
        Project MESSAGE: Release build

    Output on the console for Release:

        Project MESSAGE: Variable CONFIG:
        Project MESSAGE: lex yacc warn_on uic resources warn_on release incremental link_prl no_mocdepend release stl qt_no_framework console
        Project MESSAGE: Debug build
        Project MESSAGE: Release build

    Under Windows 7 I didn't experience any problem with such a .pro configuration; it worked fine. I was desperate and modified the .pro file:

        CONFIG = test
        message(Variable CONFIG:)
        message($$CONFIG)
        CONFIG(debug,debug|release) {
            message(Debug build)
        }
        CONFIG(release,debug|release) {
            message(Release build)
        }

    and I was surprised by the output:

        Project MESSAGE: Variable CONFIG:
        Project MESSAGE: test
        Project MESSAGE: Debug build
        Project MESSAGE: Release build

    So even if I completely clear CONFIG, it still sees both the debug and release configurations. What am I doing wrong?

    Read the article

  • Git and cloning

    - by jriff
    Hi all! I have done an app for a client called 'A' (not really). I have found out that it is very cool and that I want to sell it to other clients also. The directory 'A' is a Git repository, and I think I have a problem with cloning it. As far as I can see, I need to make a copy of the dir 'A' and call it 'Generic_A', then delete the dir 'A' and do a "git clone Generic_A A". Then I could start changing the 'Generic_A' repo to a generic design with all client references removed. But that is kind of the wrong way around: I should have started with the generic design and then cloned the repo to add the client-specific design. Can I:

    1. make a new branch
    2. do all the changes to make the design generic
    3. create a patch that reflects the changes between the two
    4. remove the client-specific branch
    5. rename the directory to 'Generic_A'
    6. clone the repo to a new dir 'A'
    7. apply the patch to get the client-specific stuff back

    And if yes - how do I make the patch and apply it? Regards, Jacob
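    For the patch step, one possible sketch (assuming the generic work ends up on a branch named generic while the client work stays on master; the patch file name is made up):

        git diff generic master > client-A.patch   # capture the client-specific delta
        # later, in a fresh clone of the generic repository:
        git apply --check client-A.patch           # dry run first
        git apply client-A.patch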

    Read the article

  • Cross-Origin Resource Sharing (CORS) - am I missing something here?

    - by David Semeria
    I was reading about CORS (https://developer.mozilla.org/en/HTTP_access_control) and I think the implementation is both simple and effective. However, unless I'm missing something, I think there's a big part missing from the spec. As I understand it, it's the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine. But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to authenticate the request. Hence, again if I'm not missing something, CORS actually makes it easier to steal sensitive information.

    I think it would have made much more sense if the original site could also supply an immutable list of servers its page is allowed to access. So the expanded sequence would be:

    1. Supply a page with a list of acceptable CORS servers (abc.com, xyz.com, etc.).
    2. The page wants to make an XHR request to abc.com - the browser allows this because it's in the allowed list, and authentication proceeds as normal.
    3. The page wants to make an XHR request to malicious.com - the request is rejected locally (i.e. by the browser) because the server is not in the list.

    I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply the closing of the script-tag multi-site loophole. I also checked the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.

    Read the article

  • Folder structure in a Mercurial repo?

    - by ajsie
    I have just switched from SVN to Mercurial and have read some tutorials about it. I've still got some confusions that I hope you can help me sort out. I wonder if I have understood the folder structure in a Mercurial repo right. In an SVN repo I usually have these folders:

        branches (branches/chat, branches/new_login etc.)
        tags (version1.0, version2.0 etc.)
        sandbox
        trunk

    Should a branch actually be another clone of the original/central repo in Mercurial? It seemed like that when I read the manual. And a tag is just a named identifier, but you should clone the original/central repo whenever you want to create a tag? How about the sandbox - should that be another clone too? So basically, in a repo you just have all the folders/files that you would have in the trunk folder? In Mercurial:

        central repo: project folders/files (not in any parent folder)
        tag repo:     cloned from the central repo at a given moment for release (version1.0, version2.0 etc.)
        branch repo:  cloned from the central repo for adding features (chat, new_login etc.)
        sandbox repo: experimental repo (could be pushed to the central repo, or just deleted)

    Is this correct?
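    For comparison, a sketch of how those SVN folders commonly map onto Mercurial commands (paths are hypothetical; named branches and bookmarks are alternatives to the clone-per-branch style):

        hg clone /repos/central new_login        # a "branch": a full clone you can push back or throw away
        hg clone /repos/central sandbox          # experimental clone; delete the directory to abandon it
        cd /repos/central && hg tag version1.0   # tags are recorded in .hgtags inside the repo, no clone needed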

    Read the article

  • Get the changes back after an accidental checkout?

    - by Millisami
    The following was the status of my repo:

        [~/rails_apps/jekyll_apps/nepalonrails (design)?] ? gst
        # On branch design
        # Changed but not updated:
        #   (use "git add/rm <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   _layouts/default.html
        #       deleted:    _site/blog/2010/04/07/welcome-to-niraj-blog/index.html
        #       deleted:    _site/blog/2010/04/08/the-code-syntax-highlight/index.html
        #       deleted:    _site/blog/2010/05/01/showing-demo-to-kalyan/index.html
        #       deleted:    _site/config.ru
        #       deleted:    _site/index.html
        #       deleted:    _site/static/css/style.css
        #       deleted:    _site/static/css/syntax.css
        #       modified:   static/css/style.css
        #
        # no changes added to commit (use "git add" and/or "git commit -a")

    Accidentally, I did git checkout -f, which I wasn't supposed to do, and now the changes are gone:

        [~/rails_apps/jekyll_apps/nepalonrails (design)?] ? git co -f
        [~/rails_apps/jekyll_apps/nepalonrails (design)] ? gst
        # On branch design
        nothing to commit (working directory clean)
        [~/rails_apps/jekyll_apps/nepalonrails (design)] ?

    Can I get the changes back?

    Read the article

  • Git add not working with .png files?

    - by D Lawson
    I have a dirty working tree - dirty because I made changes to source files and touched up some images. I was trying to add just the images to the index, so I ran this command:

        git add *.png

    But this doesn't add the files. There were a few new image files that were added, but none of the ones that were modified/pre-existing were added. What gives?

    Edit: Here is some relevant terminal output:

        $ git status
        # On branch master
        #
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   src/main/java/net/plugins/analysis/FormMatcher.java
        #       modified:   src/main/resources/icons/doctor_edit_male.png
        #       modified:   src/main/resources/icons/doctor_female.png
        #
        # Untracked files:
        #   (use "git add <file>..." to include in what will be committed)
        #
        #       src/main/resources/icons/arrow_up.png
        #       src/main/resources/icons/bullet_arrow_down.png
        #       src/main/resources/icons/bullet_arrow_up.png
        no changes added to commit (use "git add" and/or "git commit -a")

    Then I executed "git add *.png" (no output after the command). Then:

        $ git status
        # On branch master
        #
        # Changes to be committed:
        #   (use "git reset HEAD <file>..." to unstage)
        #
        #       new file:   src/main/resources/icons/arrow_up.png
        #       new file:   src/main/resources/icons/bullet_arrow_down.png
        #       new file:   src/main/resources/icons/bullet_arrow_up.png
        #
        # Changed but not updated:
        #   (use "git add <file>..." to update what will be committed)
        #   (use "git checkout -- <file>..." to discard changes in working directory)
        #
        #       modified:   src/main/java/net/plugins/analysis/FormMatcher.java
        #       modified:   src/main/resources/icons/doctor_edit_female.png
        #       modified:   src/main/resources/icons/doctor_edit_male.png
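    Two things worth ruling out (a guess, not a confirmed diagnosis for this tree): quote the pattern so that git, not the shell, expands it against the whole tree, and stage the tracked-but-modified files explicitly:

        git add '*.png'       # quoted: git expands the pathspec itself, recursively
        git add -u -- '*.png' # stage only already-tracked files that have been modified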

    Read the article

  • What is the subject of RSpec's its method?

    - by Steve Weet
    When you use the its method in RSpec as follows:

        its(:code) { should eql(0) }

    what is 'its' referring to? I have the following spec that works fine:

        describe AdminlwController do
          shared_examples_for "valid status" do
            it { should be_an_instance_of(Api::SoapStatus) }
            it "should have a code of 0" do
              subject.code.should eql(0)
            end
            it "should have an empty errors array" do
              subject.errors.should be_an(Array)
              subject.errors.should be_empty
            end
            # its(:code) { should eql(0) }
          end

          describe "Countries API Reply" do
            before :each do
              co1 = Factory(:country)
              co2 = Factory(:country)
              @result = invoke :GetCountryList, "empty_auth"
            end
            subject { @result }
            it { should be_an_instance_of(Api::GetCountryListReply) }

            describe "Country List" do
              subject { @result.country_list }
              it { should be_an_instance_of(Array) }
              it { should have(2).items }
              it "should have countries in the list" do
                subject.each { |c| c.should be_an_instance_of(Api::Country) }
              end
            end

            describe "result status" do
              subject { @result.status }
              it_should_behave_like "valid status"
            end
          end
        end

    However, if I then uncomment the line with its(:code), I get the following output:

        AdminlwController Countries API Reply
        - should be an instance of Api::GetCountryListReply

        AdminlwController Countries API Reply Country List
        - should be an instance of Array
        - should have 2 items
        - should have countries in the list

        AdminlwController Countries API Reply result status
        - should be an instance of Api::SoapStatus
        - should have a code of 0
        - should have an empty errors array

        AdminlwController Countries API Reply result status code
        - should be empty (FAILED - 1)

        1) NoMethodError in 'AdminlwController Countries API Reply result status code should be empty'
        undefined method `code' for #<AdminlwController:0x40fc4dc>
        /Users/steveweet/romad_current/romad/spec/controllers/adminlw_controller_spec.rb:29:

        Finished in 0.741599 seconds
        8 examples, 1 failure

    It seems as if "its" is referring to the subject of the whole test, namely AdminlwController, rather than the current subject. Am I doing something wrong, or is this an RSpec oddity?

    Read the article

  • Use SQL to clone a tree structure represented in a database

    - by AmoebaMan17
    Given a table that represents a hierarchical tree structure and has three columns:

        ID (primary key, not auto-incrementing)
        ParentGroupID
        SomeValue

    I know the lowest node of a branch, and I want to copy it to a new branch in which the same chain of parents is also cloned. I am trying to write a single SQL INSERT INTO statement that will make a copy of every row belonging to one branch into a new branch. Example beginning table:

        ID | ParentGroupID | SomeValue
        ------------------------------
        1  | -1            | a
        2  | 1             | b
        3  | 2             | c

    Goal after I run a simple INSERT INTO statement:

        ID | ParentGroupID | SomeValue
        ------------------------------
        1  | -1            | a
        2  | 1             | b
        3  | 2             | c
        4  | -1            | a-cloned
        5  | 4             | b-cloned
        6  | 5             | c-cloned

    Final tree structure:

        +--a (1)
        |  +--b (2)
        |     +--c (3)
        +--a-cloned (4)
        |  +--b-cloned (5)
        |     +--c-cloned (6)

    The IDs aren't always nicely spaced out as this demo data shows, so I can't assume that the parent's ID is always 1 less than the current ID for rows that have parents. Also, I am trying to do this in T-SQL (for Microsoft SQL Server 2005 and greater). This feels like a classic exercise that should have a pure-SQL answer, but I'm too used to programming, and my mind doesn't think in relational SQL.

    Read the article

  • Repository layout and sparse checkouts

    - by chuanose
    My team is considering moving from ClearCase to Subversion, and we are thinking of organising the repository like this:

        \trunk\project1
        \trunk\project2
        \trunk\project3
        \trunk\staticlib1
        \trunk\staticlib2
        \trunk\staticlib3
        \branches\..
        \tags\..

    The issue is that we have lots of projects (1000+), and each project is a DLL that links in several common static libraries. Therefore checking out everything in trunk is a non-starter, as it would take far too long (~2 GB) and is unwieldy for branching. Using svn:externals to pull out the relevant folders for each project doesn't seem ideal, because it results in several working copies of each static-library folder, and we also cannot do an atomic commit if the changes span the project and some static libraries. Sparse checkouts sound very suitable, as we can write a script to pull out only the required directories. However, when we want to merge changes from a branch back to the trunk, we will first need to check out a full trunk. I wonder if there is some advice on:

    1. a better repository organization, or
    2. a way to merge branch changes into a trunk working copy that is sparse?
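    For reference, a sketch of the sparse-checkout mechanics discussed above (Subversion 1.5+; the URL and project names are hypothetical):

        svn checkout --depth empty http://svn.example.com/repos/trunk trunk-wc
        cd trunk-wc
        svn update --set-depth infinity project1 staticlib1 staticlib2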

    Read the article

  • Convenient way to do "wrong way rebase" in git?

    - by Kaz
    I want to pull in newer commits from master into topic, but not in such a way that topic changes are replayed over top of master, but rather vice versa. I want the new changes from master to be played on top of topic, and the result to be installed as the new topic head. I can get exactly the right object if I rebase master to topic, the only problem being that the object is installed as the new head of master rather than topic. Is there some nice way to do this without manually shuffling around temporary head pointers? Edit: Here is how it can be achieved using a temporary branch head, but it's clumsy: git checkout master git checkout -b temp # temp points to master git rebase topic # topic is brought into temp, temp changes played on top Now we have the object we want, and it's pointed at by temp. git checkout topic git reset --hard temp Now topic has it; and all that is left is to tidy up by deleting temp: git branch -d temp Another way is to to do away with temp and just rebase master, and then reset topic to master. Finally, reset master back to what it was by pulling its old head from the reflog, or a cut-and-paste buffer.

    Read the article

  • J2EE and alternatives

    - by Ilya K
    Hello, I am J2SE developer but I have rich web-background (php, perl/cgi and so on) and now I am starting new project. It will have web interface, spaghetti business logic, relational database as storage and connections to other services. I do it from the scratch. My colleagues told me to use spring, spring security and struts. I look briefly at J2EE spec and found that it covers almost all aspects of enterprise application. I asked my colleagues why do they need spring and struts, but looks like they use technologies simply because they are familiar with them and not familiar with classic J2EE stack. So, my question is: what is bad about J2EE? Why do I need spring if there are JNDI lookups? It will take a day or two to create fake InitialContext for unit-tests. And that is all: I stand with out of external tools like spring. Why do I need spring-security if there is a security built in Servlets spec? I can map any request to any servlet using web.xml, no struts.xml is needed. I can use servlet-filters instead of struts interceptors. There is RMI, so I do not need spring-remote. And so on.. Why should I bother my self with all that fancy stuff if there is J2EE? I really want to find situation when J2EE is not enough. Do you have any? Thanks!

    Read the article

  • Will TFS 2010 support non-contiguous merging?

    - by steve_d
    I know that merging non-contiguous changesets at once may not be a good idea. However, there is at least one situation in which merging non-contiguous changesets is (probably) not going to break anything: when there are no intervening changes on the affected individual files. (At least, it wouldn't break any worse than a series of cherry-picked merges checked in each time, and at least this way you would discover breakage before checking in.)

    For instance, let's say you have a Main and a Development branch. They start out identical (e.g. after a release) and contain two files, foo.cs and bar.cs. Alice makes a change to Development\foo.cs and checks it in as changeset #1001. Bob makes a change to Development\bar.cs and checks it in as #1002. Alice makes another change to Development\foo.cs and checks it in as #1003. Now we could, in theory, merge both changes #1001 and #1003 from dev to main in a single operation. If we try to merge at the branch level, dev-to-main, we will have to do it as two operations. In this simple, contrived example it's easy enough to merge the one file, but in the real world, where many files would be involved, it's not so simple. Non-contiguous merging is one of the reasons given for why "merge by work item" is not implemented in TFS.

    Read the article

  • Help renaming svn repository

    - by rascher
    Here is the deal: I created an SVN repository, say, foo. It is at http://www.example.com/foo. Then I did an svn checkout. I made some updates and changes to my local copy of the code over the week. I haven't committed yet. I realized that I wanted to rename the repository, so I did this:

        svn copy http://example.com/foo http://example.com/bar
        svn delete http://example.com/foo

    I finish my changes (and my local working copy still thinks I'm working under "foo"). svn commit fails because the repo has been renamed. I try to use svn switch --relocate, but it yells at me because svn is awful. I try using the script here to replace "foo" with "bar" in my billion .svn/ folders. The replace takes a long time. I wonder if something hung? Or maybe sshfs failed? I kill it, Ctrl-C. I look and see that half my files have "foo" and the others have "bar" in the URLs in the sundry .svn/ folders.

    All I want to do is commit my files with the new name. I could re-checkout the branch, but then I would have no way to remember which files I changed, which is why I was using version control in the first place, and svn is so godawful at moving and renaming things. What do I need to do to:

    1. have a "clean" copy of my "bar" svn branch?

    and, most importantly:

    2. commit the changes I made?
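    For what it's worth, a sketch of the relocate syntax that would normally apply in this situation (the two-URL form, run from the working-copy root; on Subversion 1.7+ the equivalent command is svn relocate):

        svn switch --relocate http://example.com/foo http://example.com/bar .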

    Read the article

  • How to combine two separate unrelated Git repositories into one with a single history timeline

    - by Antony
    I have two unrelated Git repositories (they share no ancestor commits): one is a super-repository which contains a number of smaller projects (let's call it repository A); the other is just a makeshift local Git repository for a smaller project (let's call it repository B). Graphically, it looks like this:

        A0-B0-C0-D0-E0-F0-G0-HEAD (repo A)
        A0-B0-C0-D0-E0-F0-G0-HEAD (remote/master bare repo pulled & pushed from repo A)
        A1-B1-C1-D1-E1-HEAD (repo B)

    Ideally, I would really like to merge repo B into repo A with a single history timeline, so that it appears as if I had originally started the project in repo A. Graphically, this would be the ideal end result:

        A0-A1-B1-B0-D1-C0-D0-E0-F0-G0-E1-H(from repo B)-HEAD (new repo A)
        A0-A1-B1-B0-D1-C0-D0-E0-F0-G0-E1-H(from repo B)-HEAD (remote/master bare repo pulled & pushed from repo A)

    I have been doing some reading on submodules and subtrees (Pro Git is a pretty good book, by the way), but both of them seem to cater to maintaining two separate branches, with submodules being able to pull changes from upstream and subtrees being slightly less of a headache. Both solutions require additional, specialized git commands to handle check-ins and syncing between the master and the subtree/module branch. Both solutions also result in multiple timelines (with --squash you even get three timelines with subtree). The closest solution I found on SO seems to talk about "graft", but is that really it? The goal is to have a single unified repository where I can pull/push check-ins, so that there is no more repo B - just repo A in the end.
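    A sketch of the simplest variant (remote name and paths hypothetical): this appends B's commits after A's tip rather than interleaving them by date as shown above; true date-interleaving means rewriting history, e.g. with git filter-branch:

        cd repoA
        git remote add repoB /path/to/repoB
        git fetch repoB
        git checkout -b incoming repoB/master
        git rebase --onto master --root incoming   # replay B's entire history on top of A
        git checkout master
        git merge incoming                         # fast-forward master to the combined history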

    Read the article
