Search Results

Search found 1622 results on 65 pages for 'branch'.

Page 57 of 65

  • Designing DAOs around a JSON API for iPhone Development

    - by Bob Spryn
    So I've been trying to design a clean way of grabbing data for my models in iPhone land. All the data for my application comes from JSON APIs. Right now, when a VC needs some models, it makes the JSON call itself (async) and builds the models when the data arrives. It works, but I'm trying to think of a cleaner method whereby the DAOs retrieve the information for me and return the models, all asynchronously. My initial thought is to build a protocol for my DAOs, such that the VC would instantiate a DAO and make itself the delegate. When data is requested via [DAOinstance getAllUsers], the DAO would do all the network request work, and when it had the data it would call a method on its delegate (the VC) to pass the data back. I think that's a cool solution, but I realized that if I needed to use the same DAO for different purposes in the same VC, my delegate method would have to branch its logic depending on which DAO instance initiated the request. So my second thought was to pass 'handler' selectors to the DAO object, a la typical JavaScript patterns. Instead of an official protocol, I would say something like [DAOinstance getAllUsersWithSelector:"TheHandlerFunctionOnMyVC:"]. Then, when the DAO completed its network activity, it would call the passed selector on the VC and pass the data back. So am I headed in the wrong direction entirely here? It seems like maybe an OK way to go. Any pointers or articles on designing this kind of data layer would be sweet. Thanks! Bob

    Read the article

  • Managing aesthetic code changes in git

    - by Ollie Saunders
    I find that I make a lot of small changes to my source code, often things that have almost no functional effect. For example:

    - Refining or correcting comments.
    - Moving function definitions within a class for a more natural reading order.
    - Spacing and lining up some declarations for readability.
    - Collapsing something using multiple lines onto one.
    - Removing an old piece of commented-out code.
    - Correcting some inconsistent whitespace.

    I guess I have a formidable attention to detail in my code. But the problem is I don't know what to do about these changes, and they make it difficult to switch between branches etc. in git. I find myself not knowing whether to commit the minor changes, stash them, or put them in a separate branch of little tweaks and merge that in later. None of those options seems ideal. The main problem is that these sorts of changes are unpredictable. If I were to commit them, there would be endless commits with the message "Minor code aesthetic change.", because the second I make such a commit I notice another similar issue. What should I do when I make a minor change, then a significant change, and then another minor change? I'd like to merge the three minor changes into one commit. It's also annoying seeing files as modified in git status when the change barely warrants my attention. I know about git commit --amend, but I also know that amending is bad practice once a commit has been pushed, as it makes my repo inconsistent with remotes.
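
    One pattern that fits this workflow, sketched below: stage only the meaningful hunks with git add -p, park the cosmetic leftovers in commits marked for squashing, and fold them together before sharing (the commit message, the HEAD~2 target, and the base ref are placeholders):

      $ git add -p                                  # interactively stage only the significant hunks
      $ git commit -m "Add retry logic"
      $ git commit -a --squash=HEAD~2               # cosmetic leftovers, tied to an earlier tweak commit
      $ git rebase -i --autosquash origin/master    # folds the marked commits into their targets

    Because the rebase happens before pushing, it avoids the inconsistent-with-remotes problem that --amend has on published history.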

    Read the article

  • Mercurial central server file discrepancy (using 'diff to local')

    - by David Montgomery
    Newbie alert! OK, I have a working central Mercurial repository that I've been working with for several weeks. Everything has been great until I hit a really bizarre problem: my central server doesn't seem to be synced to itself. Only one file seems to be out of sync right now, but I really need to know how this happened so I can prevent it from happening in the future. Scenario:

    1) Created the Mercurial repository on the server using an existing project directory. The directory contained the file 'mypage.aspx'.
    2) On my workstation, I cloned the central repository.
    3) I made an edit to mypage.aspx.
    4) hg commit, then hg push from my workstation to the central server.
    5) Now, if I look at mypage.aspx in the server's repository using TortoiseHg's repository explorer, I see the change history for mypage.aspx -- an initial check-in and one edit. However, when I select 'Diff to local', it shows that the current version on the server's disk is the original version, not the edited version!

    I have not experimented with branching at all yet, so I'm sure it's not a branch problem. 'hg status' on the server or client returns no pending changes. If I create a clone of the server's repository in a new location, I see the same change history as I would expect, but the file on disk doesn't contain my edit. So, to recap:

    Central repository = original file, but shows the change in revision history (bad)
    Local repository 'A' = updated file, shows the change in revision history (good)
    Local repository 'B' = original file, but shows the change in revision history (bad)

    Help please! Thanks, David
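
    One thing worth ruling out first: hg push updates a repository's history but never its working directory, so files on the central server's disk remain at whatever revision was last checked out there. Refreshing the server's working copy looks like this (the path is a placeholder):

      $ cd /path/to/central/repo
      $ hg parents    # shows which revision the on-disk files were checked out from
      $ hg update     # brings the working copy up to the newest revision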

    Read the article

  • Windows FTP batch script to read & DL from external user list

    - by Will Sims
    I have several old, unused batch files that I'm redoing. One of them comes from an old network setup from several years ago, and the main thing I'd like it to do now is read a list of files. The setup: the server updates a complete list [CurrentMediaStores.txt] twice a day. The laptops can opt to download this list through their start.bat, which also runs add-ins and updates I apply to my PCs -- this gives my batches and myself a break from slavish folder assignments, adds a little more flexibility, and cuts down on admin. The batch files now work from a list the user makes by simply copying a line from the CurrentMediaStores.txt file and pasting it into their [Grab_List.txt]. My problem: right now I have commented out (with ::) the branch and the code that detects whether the LAN is connected, and switches to an FTP connection when it isn't. I'd like the FTP batch to use Grab_List.txt as well, but I just don't know how to pass the list into an FTP session and loop through however many files are in the user's request list. Any help would be greatly appreciated.
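
    The classic idiom for driving ftp.exe from a batch file is to generate a command script from the list with a for /f loop and hand it to ftp -s. A sketch in cmd.exe syntax, with the server name and credentials as placeholders:

      @echo off
      rem -- regenerate the ftp command script from the user's request list --
      > get.ftp echo open myserver
      >> get.ftp echo user myuser mypass
      >> get.ftp echo binary
      for /f "usebackq delims=" %%F in ("Grab_List.txt") do >> get.ftp echo get "%%F"
      >> get.ftp echo bye
      ftp -n -s:get.ftp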

    Read the article

  • Is there a way to easily convert a series of tarballs of a source tree into a git repository?

    - by Hotei
    I'm new to git and I have a moderately large number of weekly tarballs from a long-running project. Each tarball has on average a few hundred files in it. I'm looking for a git strategy that will allow me to add the expanded contents of each tarball to a new git repository, starting from version 1.001 and going through version 1.650. As of this stage of the project, 99.5% of tarball(n) is just a copy of version(n-1) -- in other words, a perfect candidate for git. The desired end result is to have only the master branch remaining at the end of the process. I think I know git well enough to do this "by hand". As I understand it, there is no possibility of a merge conflict, since there will be no opportunity to change master before the next version is added and committed. A shell script is my first guess, but I'm not sure how well bash will like it when git checkout branch_n gets processed while bash is executing in branch_n-1. For the purposes of this project the host environment is Ubuntu 10.4; available resources are 8 GB of RAM, 500 GB of free disk space, and a 4-core CPU at 3 GHz. I don't need someone else to solve the problem, but I could use a nudge in the right direction as to how a git expert would approach it. Any advice from someone who's "been there, done that" would be appreciated. Hotei PS: I have looked at the site's suggested "related questions" and found nothing relevant.
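
    A minimal sketch of the usual approach -- no branches are needed at all, since each tarball is simply the next state of master (the tarball path pattern is an assumption):

      $ git init project && cd project
      $ for tb in /archive/project-*.tar.gz; do
      >   find . -mindepth 1 -maxdepth 1 ! -name .git -exec rm -rf {} +   # empty the tree, keep .git
      >   tar -xzf "$tb"
      >   git add -A                      # stages additions, edits and deletions in one go
      >   git commit -m "import $(basename "$tb")"
      > done

    Since each commit is made on top of the previous one on the same branch, the merge-conflict question never arises, and bash never has a working directory pulled out from under it.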

    Read the article

  • Help me choose a web development framework/platform that will make me learn something

    - by Sergio Tapia
    I'm having a bit of an information overload these past two days. I'm planning to start my own website that will allow local businesses to list their items on sale; users can then search for "Abercrombie t-shirt" and see the stores that sell it. It's a neat little project I'm really excited about and I'm sure it'll take off, but I'm having problems from the get-go. Sure, I could use ASP.NET for it -- I'm a bit familiar with it, and the IDE for ASP.NET pages is second to none -- but I feel this is a great chance for me to learn something new, to branch out a bit and not regurgitate .NET like a robot. I've been looking and asking around, but it's all just noise and I can't make an educated decision. Can you help me choose a framework/platform that will teach me something that's nice to know in the job market, but that also lets me grow as a professional? So far I've looked at:

    - Ruby on Rails
    - Kohana
    - CakePHP
    - CodeIgniter
    - Symfony

    But they are all very esoteric to me, and I have trouble even finding out which IDE will give me auto-complete for each framework's keywords/methods. Thank you for your time.

    Read the article

  • How to automatically split git commits to separate changes to a single file

    - by Hercynium
    I'm just plain stuck as to how to accomplish this, or whether it's even possible. Even if it can be done, I wonder if it could be setting us up for a messed-up, unmanageable repository. I have set up two branches of the code base. One is "master" and the other is "prod". The HEAD of prod is always the latest code in production, and master is the main development branch. Here's the problem, though: we're converting from CVS here at $work and most of the developers are still getting used to git. Their CVS workflow involved tagging versions of individual files for production, then updating the servers using the tag. Unfortunately, this has led to sloppy practices like committing unrelated changes together and then tagging the files after the fact... and the devs want to know how they can do the following: in their local repos, they hack and commit to their hearts' delight, then at the end of the day run a command that takes a list of files and merges the day's commits to those files -- and only those files -- into their local prod, even if those commits combine changes to other files. I know how to split commits with git rebase --interactive, but I have no clue how I would automate splitting commits at all, never mind in the way I want to. I do realize the simplest thing would be to just tell them to switch to their prod branches, check out the files from their master branches into the working tree, and then commit to prod. My problem with that is losing the history of their commits over the day.
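
    For reference, the simple route mentioned at the end is a three-liner per promotion, sketched here with placeholder file names (losing per-file history on prod is indeed the price):

      $ git checkout prod
      $ git checkout master -- src/foo.c lib/bar.c    # take master's current version of just these files
      $ git commit -m "Promote foo.c and bar.c to prod"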

    Read the article

  • TFS: How does merging work?

    - by Johannes Rudolph
    I have a release branch (RB, starting at C5) and a changeset on trunk (C10) that I now want to merge onto RB. The file has changes at C3 (common to both), one in C7 on RB, and ones in C9 and C10 on trunk. So the history for my changed file looks like this:

    RB: C5 -> C7
    Trunk: C3 -> C9 -> C10

    When I merge C10 from trunk to RB, I'd expect to see a merge window showing me C10 | C3 | C7, since C3 is the common ancestor revision and C10 and C7 are the tips of my two branches respectively. However, my merge tool shows me C10 | C9 | C7. My merge tool is configured to show %1 (OriginalFile) | %3 (BaseFile) | %2 (ModifiedFile), so this tells me TFS chose C9 as the base revision. This is totally unexpected and completely contrary to the way I'm used to merges working in Mercurial or Git. Did I get something wrong, or is TFS trying to drive me nuts with merging? Is this the default TFS merge behavior? If so, can you provide insight into why they chose to implement it this way? I'm using TFS 2008 with VS 2010 as a client.

    Read the article

  • Git-Based Source Control in the Enterprise: Suggested Tools and Practices?

    - by Bob Murphy
    I use git for personal projects and think it's great. It's fast, flexible, powerful, and works great for remote development. But now it's mandated at work and, frankly, we're having problems. Out of the box, git doesn't seem to work well for centralized development in a large (20+ developer) organization with developers of varying abilities and levels of git sophistication -- especially compared with other source-control systems like Perforce or Subversion, which are aimed at that kind of environment. (Yes, I know Linus never intended it for that.) But -- for political reasons -- we're stuck with git, even if it sucks for what we're trying to do with it. Here are some of the things we're seeing:

    - The GUI tools aren't mature.
    - Using the command-line tools, it's far too easy to screw up a merge and obliterate someone else's changes.
    - It doesn't offer per-user repository permissions beyond global read-only or read-write privileges.
    - If you have permission to ANY part of a repository, you can do that same thing to EVERY part of the repository, so you can't do something like make a small-group tracking branch on the central server that other people can't mess with.
    - Workflows other than "anything goes" or "benevolent dictator" are hard to encourage, let alone enforce.
    - It's not clear whether it's better to use a single big repository (which lets everybody mess with everything) or lots of per-component repositories (which make for headaches trying to synchronize versions).
    - With multiple repositories, it's also not clear how to replicate all the sources someone else has by pulling from the central repository, or how to do something like get everything as of 4:30 yesterday afternoon.

    However, I've heard that people are using git successfully in large development organizations. If you're in that situation -- or if you generally have tools, tips, and tricks for making it easier and more productive to use git in a large organization where some folks are not command-line fans -- I'd love to hear what you have to suggest. BTW, I've asked a version of this question already on LinkedIn, and got no real answers but lots of "gosh, I'd love to know that too!"
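
    On the last point in the list, pinning a repository to a moment in time needs no special setup; a sketch (the branch name and date are placeholders):

      $ git checkout $(git rev-list -n 1 --before="2010-06-01 16:30" master)

    The permission items are usually handled by putting an authorization layer such as gitolite in front of the central repository, rather than by git itself.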

    Read the article

  • Which Stroustrup book should I use?

    - by Chris Simmons
    I'm a C# programmer looking to branch out. I'm bored of writing business software and want to start getting into graphics programming and games/simulators. So I figured that, although writing that stuff isn't impossible in managed code, the "right" way to do it would be to look to C++, focusing on the language first and then getting into OpenGL or DirectX (or whatever). Way back ('98? '99?) I tried and failed to really grasp Stroustrup's The C++ Programming Language. I know that this book is often not recommended for the beginner. Anyway, I picked it back up (in a much more recent printing) and I'm actually getting it and enjoying it. I also have a copy of his textbook, Programming: Principles and Practice Using C++, which, as I understand it, is really geared toward teaching programming, not necessarily C++. I'm certainly not arrogant enough to claim I don't have anything more to learn about programming, data structures, algorithms, etc., but I'm not a novice there either. So my question is: with the goal of gaining a broader and more real-world-useful understanding of C++, and given my background, on which should I focus? The denser (as I perceive it) TCPPPL or the gentler Programming? EDIT: I thank everyone for the responses. However, I've got a personal choice to make here between these two books. Granted, there are other very good books out there, but I'm already a good way into both of the books I mention and I'd like to finish one. So, can anyone say which would be the better choice, and why? Time is not an issue; I'm not looking (at this point) for an "accelerated" read.

    Read the article

  • Git + GitHub + Heroku

    - by Haseeb Khan
    Hi all, I am new to the world of Git, GitHub and Heroku. So far I am enjoying this paradigm, but coming from a background in SVN, some things in the world of Git seem a bit complicated to me. I am facing a problem for which I am looking for a solution. Scenario: I have set up a new private project on GitHub. I forked the private project, and now I have the following structure in my branch:

      /project
        /apps
          /my-apps
            /my-app-1
              ....
            /my-app-2
              ....
          /your-apps
            /your-app-1
              ....
            /your-app-2
              ....
        /plugins
          ....

    I can commit code to my fork on GitHub from my machine in any of the folders I want. Later on, these commits are pulled into the master repository by the admin of the project. For every individual application in the apps folder, I have set up an app on Heroku, which is a Git repo in itself, and that is where I push my changes from my local machine when I am done with my user stories. In short, every app in the apps folder is a Rails app hosted on Heroku. Problem: what I want is that when I push my changes to Heroku, they can also be committed to my project fork on GitHub, so it has the latest code at all times. The issue I see is that the code on Heroku is a Git repo, while the folders I have on GitHub are part of a repo. So far, what I have found is that there is something known as a submodule in the Git world that could come to the rescue, but I have not been able to find newbie-level instructions. Can someone in the community be kind enough to share thoughts and help me identify the solution to this problem? Thanks in advance. Regards, Haseeb Khan haseeb [AT] tkxel.com TkXel
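
    A minimal sketch of the submodule route (the remote URL uses Heroku's classic git@heroku.com:<app>.git form; the app and folder names are placeholders):

      $ cd project
      $ git submodule add git@heroku.com:my-app-1.git apps/my-apps/my-app-1
      $ git commit -m "Track Heroku app my-app-1 as a submodule"

    From then on, work pushed to Heroku from inside apps/my-apps/my-app-1 can be recorded in the fork by committing the updated submodule pointer, so both stay in step.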

    Read the article

  • Building a control-flow graph from an AST with a visitor pattern using Java

    - by omegatai
    Hi guys, I'm trying to figure out how to implement my LEParserCfgVisitor class so as to build a control-flow graph from an abstract syntax tree already generated with JavaCC. I know tools for this already exist, but I'm trying to do it in preparation for my compilers final. I know I need a data structure that keeps the graph in memory, and I want to be able to keep attributes like IN, OUT, GEN and KILL in each node so that I can do a control-flow analysis later on. My main problem is that I haven't figured out how to connect the different blocks together so as to have the right edge between blocks depending on their nature: branch, loop, etc. In other words, I haven't found an explicit algorithm that could help me build my visitor. Here is my empty visitor. You can see it works on basic language expressions, like if, while and basic operations (+,-,x,^,...):

      public class LEParserCfgVisitor implements LEParserVisitor {

          public Object visit(SimpleNode node, Object data) {
              return data;
          }

          public Object visit(ASTProgram node, Object data) {
              data = node.childrenAccept(this, data);
              return data;
          }

          // The remaining methods are still empty; each returns its data
          // unchanged so that the skeleton at least compiles.
          public Object visit(ASTBlock node, Object data)      { return data; }
          public Object visit(ASTStmt node, Object data)       { return data; }
          public Object visit(ASTAssignStmt node, Object data) { return data; }
          public Object visit(ASTIOStmt node, Object data)     { return data; }
          public Object visit(ASTIfStmt node, Object data)     { return data; }
          public Object visit(ASTWhileStmt node, Object data)  { return data; }
          public Object visit(ASTExpr node, Object data)       { return data; }
          public Object visit(ASTAddExpr node, Object data)    { return data; }
          public Object visit(ASTFactExpr node, Object data)   { return data; }
          public Object visit(ASTMultExpr node, Object data)   { return data; }
          public Object visit(ASTPowerExpr node, Object data)  { return data; }
          public Object visit(ASTUnaryExpr node, Object data)  { return data; }
          public Object visit(ASTBasicExpr node, Object data)  { return data; }
          public Object visit(ASTFctExpr node, Object data)    { return data; }
          public Object visit(ASTRealValue node, Object data)  { return data; }
          public Object visit(ASTIntValue node, Object data)   { return data; }
          public Object visit(ASTIdentifier node, Object data) { return data; }
      }

    Can anyone give me a hand? Thanks!

    Read the article

  • Weird camera Intent behavior

    - by David Erosa
    Hi all. I'm invoking the MediaStore.ACTION_IMAGE_CAPTURE intent with the MediaStore.EXTRA_OUTPUT extra so that it saves the image to that file. In onActivityResult I can check that the image is being saved in the intended file, which is correct. The weird thing is that the image is somehow also saved in a file named something like "/sdcard/Pictures/Camera/1298041488657.jpg" (the epoch time at which the image was taken). I've checked the Camera app source (froyo-release branch) and I'm almost sure the code path is correct and shouldn't save the image, but I'm a noob and I'm not completely sure. AFAIK, the image-saving process starts with this callback (comments are mine):

      private final class JpegPictureCallback implements PictureCallback {
          ...
          public void onPictureTaken(...) {
              ...
              // This is where the image is passed back to the invoking activity.
              mImageCapture.storeImage(jpegData, camera, mLocation);
              ...

      public void storeImage(final byte[] data, android.hardware.Camera camera, Location loc) {
          if (!mIsImageCaptureIntent) {  // Am I an intent?
              int degree = storeImage(data, loc);  // THIS SHOULD NOT BE CALLED WITHIN THE CAPTURE INTENT!!
          .......

      // And finally:
      private int storeImage(byte[] data, Location loc) {
          try {
              long dateTaken = System.currentTimeMillis();
              String title = createName(dateTaken);
              String filename = title + ".jpg";  // Eureka, timestamp filename!
              ...

    So, I'm receiving the correct data, but it's also being saved by that storeImage(data, loc) call, which should not be reached... It wouldn't be a problem if I could get the newly created filename from the intent result data, but I can't. When I found this out, I found about 20 image files from my tests that I didn't know were on my sdcard :) I'm getting this behavior both with my Nexus One with Froyo and my Huawei U8110 with Eclair. Could anyone please enlighten me? Thanks a lot.

    Read the article

  • git rebase without changing commit timestamps

    - by Olivier
    Would it make sense to perform git rebase while preserving the commit timestamps? I believe a consequence would be that the new branch will not necessarily have commit dates in chronological order. Is that theoretically possible at all? (e.g. using plumbing commands; just curious here.) If it is theoretically possible, then is it possible in practice with rebase not to change the timestamps? For example, assume I have the following tree:

      master <jun 2010>
      |
      :
      :    oldbranch <feb 1984>
      :   /
      oldcommit <jan 1984>

    Now, if I rebase oldbranch onto master, the date of the commit changes from feb 1984 to jun 2010. Is it possible to change that behaviour so that the commit timestamp is not changed? In the end I would thus obtain:

      oldbranch <feb 1984>
        /
      master <jun 2010>
      |
      :

    Would that make sense at all? Is it even allowed in git to have a history where an old commit has a more recent commit as a parent? Edit: A crucial question from Von C helped me understand what is going on: when you rebase, the committer's timestamp changes, but not the author's timestamp, which suddenly all makes sense. So my question was actually not precise enough. The answer is that rebase doesn't change the author's timestamps (you don't need to do anything for that), which suits me perfectly.
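
    For the committer side, rebase does expose a knob for this; a sketch using the branch names from the example above (note that older versions of git honor the flag only in the default, non-interactive backend):

      $ git checkout oldbranch
      $ git rebase --committer-date-is-author-date master   # committer dates copied from author dates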

    Read the article

  • How do I use a custom cookie session serializer in Rack?

    - by Damien Wilson
    Hello SO. I'm currently integrating Warden into a new Rack application I'm building. I'd like to use a recent patch to Rack that allows me to specify how sessions are serialized; specifically, I'd like to use Rack::Session::Cookie::Identity as the session processor. Unfortunately, the documentation is a little unclear as to what syntax I should use to configure Rack::Session::Cookie in my rackup file. Can anyone here tell me what I'm doing wrong?

    config.ru:

      require 'my_sinatra_app'

      app = self
      use Rack::Session::Cookie.new(app, Rack::Session::Cookie::Identity.new), {:key => "auth_token"}
      use Warden::Manager do |warden|
        # Must come AFTER Rack::Session
        warden.default_strategies :password
        warden.failure_app Jelli::Auth.run!
      end

      run MySinatraApp

    Error message from thin:

      !! Unexpected error while processing request: undefined method `new' for #<Rack::Session::Cookie:0x00000110124128>

    PS: I'm using bundler to manage my gem dependencies, and I've likewise included rack's master branch as the desired version. Update: As suggested in the comments below, I have read the documentation; sadly, the suggested syntax in the docs is not working. Update: Still no luck on my end; offering up a bounty to whoever can help me figure this out.

    Read the article

  • alternative to #include within namespace { } block

    - by Jeff
    Edit: I know that method 1 is essentially invalid and will probably use method 2, but I'm looking for the best hack or a better solution to mitigate rampant, mutable namespace proliferation. I have multiple class or method definitions in one namespace that have different dependencies, and I would like to use the fewest namespace blocks or explicit scopings possible, while grouping #include directives as closely as possible with the definitions that require them. I've never seen any indication that any preprocessor can be told to exclude namespace {} scoping from #include contents, but I'm here to ask if something like the following is possible (see the bottom for an explanation of why I want something dead simple):

      // NOTE: apple.h, etc., contents are *NOT* intended to be in namespace Foo!

      // would prefer something like this:
      namespace Foo {
      #include "apple.h"
      B *A::blah(B const *x) { /* ... */ }
      #include "banana.h"
      int B::whatever(C const &var) { /* ... */ }
      #include "blueberry.h"
      void B::something() { /* ... */ }
      } // namespace Foo
      ...

      // over this:
      #include "apple.h"
      #include "banana.h"
      #include "blueberry.h"
      namespace Foo {
      B *A::blah(B const *x) { /* ... */ }
      int B::whatever(C const &var) { /* ... */ }
      void B::something() { /* ... */ }
      } // namespace Foo
      ...

      // or over this:
      #include "apple.h"
      namespace Foo {
      B *A::blah(B const *x) { /* ... */ }
      } // namespace Foo
      #include "banana.h"
      namespace Foo {
      int B::whatever(C const &var) { /* ... */ }
      } // namespace Foo
      #include "blueberry.h"
      namespace Foo {
      void B::something() { /* ... */ }
      } // namespace Foo
      ...

    My real problem is that I have projects where a module may need to be branched but have coexisting components from both branches in the same program. I have classes like FooA, etc., that I've called Foo::A in the hope of being able to branch less painfully as Foo::v1_2::A, where some program may need both a Foo::A and a Foo::v1_2::A. I'd like "Foo" or "Foo::v1_2" to show up only once per file, as a single namespace block if possible. Moreover, I tend to locate blocks of #include directives immediately above the first definition in the file that requires them. What's my best choice -- or alternatively, what should I be doing instead of hijacking namespaces?

    Read the article

  • How to cope with null results in SQL Tasks that return single rows in SSIS 2005?

    - by JSacksteder
    In a data-flow task, I can slip a row count into the processing flow and place the count into a variable. I can later use that variable to conditionally perform some other work if the row count was 0. This works well for me, but I have no corresponding strategy for SQL tasks expected to return a single row. In that event, I'm returning those values into variables. If the lookup produces no rows, the SQL task fails when assigning values into those variables. I can branch on that component failing, but there's a side effect: if I'm running the job as a SQL Server Agent job step, the step returns DTSER_FAILURE, causing the step to fail. I can tell the SQL Agent to disregard the step failure, but then I won't know if I have a legitimate error in that step. This seems harder than it should be. The only strategy I can think of is to run the same query with a count(*) aggregate, test whether that returns a number > 0, and if so run the query again without the count. That's ugly, because I have the same query in two places that I need to keep in sync. Is there a better way?

    Read the article

  • Extjs 4.1: How to call a controller action method from a form

    - by Omar Faruq
    Extjs 4.1: how can I call a controller action method from a form? The method is already used in a button action, but I want to reuse it from a form field and don't know how. Here is my controller code:

      init: function() {
          this.control({
              'itemsgrid': {
                  removeitem: this.removeUser
              },
              'salewindow button[action=resetAll]': {
                  click: this.resertform
              },
              'salewindow button[action=saveOrder]': {
                  click: this.onsaveOrder
              },
              'salewindow button[action=PDF]': {
                  click: this.pdfreport
              }
          });
      },

      resertform: function(button) {
          var store = Ext.data.StoreManager.get('Items');
          store.destroy();
          var vatstore = Ext.data.StoreManager.get('Vats');
          vatstore.reload();
          var rebatestore = Ext.data.StoreManager.get('Rebates');
          rebatestore.reload();
          Ext.getCmp('calculation-form').getForm().reset();
          Ext.getCmp('itemform2').getForm().reset();
          Ext.getCmp('itemsgrid').subtotalquantity(p=0);
          store.reload();
          var co = store.getCount();
          console.log(co);
          Ext.getCmp('customerID').setValue('0');
          var a = Ext.getCmp('customerID').getValue();
          Ext.getCmp('itemform2').customerinfo(a);
      }

    And here is my form field listener:

      {
          xtype: 'textfield',
          name: 'BranchId',
          fieldLabel: 'Branch Id',
          allowNegative: false,
          id: 'branchid',
          value: '1',
          onBlur: function() {
              restoreItem(); // I want to call this method from here; it lives in the controller
          }
      }

    Read the article

  • How do I optimize this postfix expression tree for speed?

    - by Peter Stewart
    Thanks to the help I received in this post, I have a nice, concise recursive function to traverse a tree in postfix order:

      deque<char*> d;

      void Node::postfix() {
          if (left != __nullptr)  { left->postfix(); }
          if (right != __nullptr) { right->postfix(); }
          d.push_front(cargo);
          return;
      };

    This is an expression tree. The branch nodes are operators randomly selected from an array, and the leaf nodes are values or the variable 'x', also randomly selected from an array:

      char *values[10] = {"1.0","2.0","3.0","4.0","5.0","6.0","7.0","8.0","9.0","x"};
      char *ops[4]     = {"+","-","*","/"};

    As this will be called billions of times during a run of the genetic algorithm it is part of, I'd like to optimize it for speed. I have a number of questions on this topic, which I will ask in separate postings. The first is: how can I get access to each 'cargo' as it is found? That is, instead of pushing 'cargo' onto a deque and then processing the deque to get the value, I'd like to start processing it right away. I don't yet know about parallel processing in C++, but ideally this would be done concurrently on two different processors. In Python, I'd make the function a generator and access succeeding 'cargo's using .next(). But I'm using C++ to speed up the Python implementation. I'm thinking that this kind of tree has been around for a long time and somebody has probably optimized it already. Any ideas? Thanks

    Read the article

  • Doing without partial commits the "Mercurial way"

    - by David Moles
    Subversion shop considering switching to Mercurial, trying to figure out in advance what all the complaints from developers are going to be. There's one fairly common use case here that I can't see how to handle. I'm working on some largish feature, and I have a significant part of the code -- or possibly several significant parts of the code -- in pieces all over the garage floor, totally unsuitable for checkin, maybe not even compiling. An urgent bugfix request comes in. The fix is nice and local and doesn't touch any of the code I've been working on. I make the fix in my working copy. Now what? I've looked at "Mercurial cherry picking changes for commit" and "best practices in mercurial: branch vs. clone, and partial merges?" and all the suggestions seem to be extensions of varying complexity, from Record and Shelve to Queues. The fact that there apparently isn't any core functionality for this makes me suspect that in some sense this working style is Doing It Wrong. What would a Mercurial-like solution to this use case look like?
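
    For this exact case there is also a dependency-free idiom using nothing but core commands; a sketch, assuming the bugfix and the work in progress touch different files:

      $ hg diff > wip.patch               # save the unfinished work
      $ hg revert --all                   # put the working copy back to a clean state
      ... make and test the bugfix ...
      $ hg commit -m "Fix urgent bug"
      $ hg import --no-commit wip.patch   # restore the unfinished work, uncommitted

    The extensions mentioned above (record, shelve, queues) automate variations of this same dance.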

    Read the article

  • How to redirect registry access of a dll loaded by my program

    - by dummzeuch
    I have got a DLL that I load in my program which reads and writes its settings to the registry (HKCU). My program changes these settings prior to loading the DLL, so the DLL uses the settings my program wants it to use, which works fine. Unfortunately, I need to run several instances of my program with different settings for the DLL. The approach I have used so far no longer works reliably, because it is possible for one instance of the program to overwrite the settings that another instance just wrote, before the DLL has a chance to read them. I haven't got the source of the DLL in question, and I cannot ask the programmer who wrote it to change it. One idea I had was to hook the registry access functions and redirect them to a different branch of the registry specific to the instance of my program (e.g. using the process ID as part of the path). I think this should work, but maybe you have a different / more elegant idea. In case it matters: I am using Delphi 2007 for my program; the DLL is probably written in C or C++.

    Read the article

  • Design Philosophy Question - When to create new functions

    - by Eclyps19
    This is a general design question, not relating to any language. I'm a bit torn between going for minimum code and optimum organization. I'll use my current project as an example. I have a bunch of tabs on a form that perform different functions. Let's say tab 1 reads in a file with a specific layout, tab 2 exports a file to a specific location, etc. The problem I'm running into now is that I need these tabs to do something slightly different based on the contents of a variable. If it contains a 1, I may need to use layout A and perform some extra concatenation; if it contains a 2, I may need to use layout B, do no concatenation, but add two integer fields; etc. There could be 10+ codes that I will be looking at. Is it preferable to create an individual path for each code early on, or to attempt to create a single path that branches out only when absolutely required? Creating an individual path for each code would make my code extremely easy to follow at a glance, which in turn will help me later on down the road when debugging or making changes. The downside is that I will increase the amount of code written by calling some of the same functions in multiple places (for example, steps 3, 5, and 9 may be exactly the same for every single code). Creating a single path that branches out only when required will be a bit messier and more difficult to follow at a glance, but I would write less code by placing conditionals only at the steps that are unique. I realize that this may be a case-by-case decision, but in general, if you were handed a previously built program to work on, which would you prefer?

    Read the article

  • What is an elegant way to set up a leiningen project that requires different dependencies based on the build platform?

    - by Savanni D'Gerinel
    In order to do some multi-platform GUI development, I have just switched from GTK + Clojure (because it looks like the Java bindings for GTK never got ported to Windows) to SWT + Clojure. So far, so good, in that I have gotten an uberjar built for Linux. The catch, though, is that I want to build an uberjar for Windows, and I am trying to figure out a clean way to manage the project.clj file. At first, I thought I would set the classpath to point to the SWT libraries and then build the uberjar. This would require setting a classpath to the SWT libraries before running the jar, but I would likely need a launcher script anyway. However, leiningen seems to ignore the classpath in this instance. Currently, project.clj looks like this for me:

      (defproject alyra.mana-punk/character "1.0.0-SNAPSHOT"
        :description "FIXME: write"
        :dependencies [[org.clojure/clojure "1.2.0"]
                       [org.clojure/clojure-contrib "1.2.0"]
                       [org.eclipse/swt-gtk-linux-x86 "3.5.2"]]
        :main alyra.mana-punk.character.core)

    The relevant line is the org.eclipse/swt-gtk-linux-x86 line. If I want to make an uberjar for Windows, I have to depend on org.eclipse/swt-win32-win32-x86, another one for x86-64, and so on and so forth. My current solution is to simply create a separate branch for each build environment, with a different project.clj in each. This seems like using a semi to deliver a single gallon of milk, but I am using bazaar for version control, so branching and repeated integrations are easy. Maybe the better way is to have a project.linux.clj, project.win32.clj, etc., but I do not see any way to tell leiningen which project descriptor to use. What are other (preferably more elegant) ways to set up such an environment?

    Read the article

  • git changes modification time of files

    - by tanascius
    In the GitFaq I can read that Git sets the current time as the timestamp on every file it modifies, but only those. However, I tried this command sequence (EDIT: added the complete command sequence):

      $ git init test && cd test
      Initialized empty Git repository in d:/test/.git/

      exxxxxxx@wxxxxxxx /d/test (master)
      $ touch filea fileb

      exxxxxxx@wxxxxxxx /d/test (master)
      $ git add .

      exxxxxxx@wxxxxxxx /d/test (master)
      $ git commit -m "first commit"
      [master (root-commit) fcaf171] first commit
       0 files changed, 0 insertions(+), 0 deletions(-)
       create mode 100644 filea
       create mode 100644 fileb

      exxxxxxx@wxxxxxxx /d/test (master)
      $ ls -l > filea

      exxxxxxx@wxxxxxxx /d/test (master)
      $ touch fileb -t 200912301000

      exxxxxxx@wxxxxxxx /d/test (master)
      $ ls -l
      total 1
      -rw-r--r-- 1 exxxxxxx Administ 132 Feb 12 18:36 filea
      -rw-r--r-- 1 exxxxxxx Administ   0 Dec 30 10:00 fileb

      exxxxxxx@wxxxxxxx /d/test (master)
      $ git status -a
      warning: LF will be replaced by CRLF in filea
      # On branch master
      warning: LF will be replaced by CRLF in filea
      # Changes to be committed:
      #   (use "git reset HEAD <file>..." to unstage)
      #
      #   modified: filea
      #

      exxxxxxx@wxxxxxxx /d/test (master)
      $ git checkout .

      exxxxxxx@wxxxxxxx /d/test (master)
      $ ls -l
      total 0
      -rw-r--r-- 1 exxxxxxx Administ 0 Feb 12 18:36 filea
      -rw-r--r-- 1 exxxxxxx Administ 0 Feb 12 18:36 fileb

    Now my question: why did git change the timestamp of fileb? I'd expect that timestamp to be unchanged. Are my commands causing a problem? Maybe it is possible to do something like a git checkout . --modified instead? I am using git version 1.6.5.1.1367.gcd48 under mingw32/Windows XP.
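
    A plausible explanation, for what it's worth: git decides which files to rewrite on checkout by comparing cached stat data (including mtime), not file content, so touching fileb made it look modified and checkout rewrote it. Refreshing the stat cache first should avoid the rewrite (a sketch):

      $ git update-index --refresh   # re-examines content, records fresh stat data for unchanged files
      $ git checkout .               # now only genuinely modified files get rewritten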

    Read the article

  • UIButton stops responding after going into landscape mode - iPhone

    - by casey
    I've been trying different things for the last few days and I've run out of ideas, so I'm looking for help. The situation: I'm displaying my in-app purchasing store view after the user clicks a button. Button pressed, view is displayed. The store shows fine. Inside this view, I have a few labels with descriptions of the product, and below them the price and a Buy button, which triggers the in-app purchase. The problem is that when I rotate the phone to landscape, that Buy button no longer responds. Weird. It works fine in portrait. The behavior in landscape when I touch the button is: nothing. It doesn't appear to press down and be selected or anything; it just doesn't respond to my touches. But when I rotate back to portrait, or even to upside-down portrait, it works fine. Here is the rough structure of my view in IB; all the rotating and layout is set up in IB. I set the autoresizing in IB so that everything looks OK in landscape, and the Buy button expands horizontally a little bit. The only layout manipulation I do in code is setting the content size of the scroll view after loading.

      File's Owner (view outlet set to the scrollView)
      / scrollView
      ----/ view
      --------/ label
      --------/ label
      --------/ label
      --------/ label
      --------/ label
      --------/ label
      --------/ label
      --------/ label
      --------/ uibutton (Buy)

    After orientation changes, I printed out the userInteractionEnabled property of the scrollView and the button, and both were TRUE in all orientations. Ideas? Or maybe some other way of displaying a buy button that won't be nonfunctional? I've already started a branch that plays with a toolbar and placing the buy button there, but I can't seem to get the bar to stay in place while scrolling.

    Read the article
