Search Results

Search found 2157 results on 87 pages for 'sequential workflow'.

Page 63/87

  • How do I keep my branches up to date with the 'default' branch under Mercurial?

    - by Chad Johnson
    Let's say I have the following workflow with Mercurial:
      stable (clone on server)
        default (branch)
      development (clone on server)
        default (branch)
        bugs (branch)
          developer1 (clone on local machine)
          developer2 (clone on local machine)
          developer3 (clone on local machine)
        feature1 (branch)
          developer3 (clone on local machine)
        feature2 (branch)
          developer1 (clone on local machine)
          developer2 (clone on local machine)
    My main line of development which is always in a release ready state is 'default'. So the 'default' branch in the 'development' clone is always release-ready. Now suppose I'm developer1 working on feature2. And let's say also that feature2 takes several months. It's pretty obvious that I'm going to want to keep my 'feature2' branch up to date with the 'default' branch. Does this make sense? How would I go about doing this with Mercurial?

    Read the article

  • A GUID as the MySQL table's Primary Key or as a separate column

    - by Ben
    I have a multi-process program that performs, in a 2 hour period, 5-10 million inserts to a 34GB table within a single Master/Slave MySQL setup (plus an equal number of reads in that period). The table in question has only 5 fields and 3 (single field) indexes. The primary key is auto-incrementing. I am far from a DBA, but the database appears to be crippled during this two hour period. So, I have a couple of general questions.
    1) How much bang will I get out of batching these writes into units of 10? Currently, I am writing each insert serially because, after writing, I immediately need to know, in my program, the resulting primary key of each insert. The PK is the only unique field presently, and approximating the order of insertion with something like a Datetime field or a multi-column value is not acceptable. If I perform a bulk insert, I won't know these IDs, which is a problem.
    So, I've been thinking about turning the auto-increment primary key into a GUID and enforcing uniqueness. I've also been kicking around the idea of creating a new column just for the purposes of the GUID. I don't really see what that achieves, though, that the PK approach doesn't already offer. As far as I can tell, the big downside to making the PK a randomly generated number is that the index would take a long time to update on each insert (since insertion order would not be sequential). Is that an acceptable approach for a table that is taking this number of writes?
    Thanks, Ben
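
    A minimal sketch of the batching idea in Python, assuming a MySQL DB-API driver such as MySQLdb, an InnoDB table with an auto-increment PK, and an innodb_autoinc_lock_mode setting (0 or 1) under which a single multi-row INSERT receives a consecutive block of IDs; the table and column names are made up for illustration:

      import MySQLdb  # assumed driver; any DB-API module using %s placeholders works the same way

      def insert_batch(conn, rows):
          """Insert all rows in one multi-row INSERT and return their new primary keys.

          Relies on MySQL reporting the first auto-generated ID of the statement
          (LAST_INSERT_ID) and handing out the remaining IDs consecutively.
          """
          placeholders = ", ".join(["(%s, %s, %s, %s)"] * len(rows))
          sql = "INSERT INTO events (a, b, c, d) VALUES " + placeholders  # hypothetical table
          cur = conn.cursor()
          cur.execute(sql, [value for row in rows for value in row])
          first_id = cur.lastrowid              # ID of the first row in the batch
          conn.commit()
          return list(range(first_id, first_id + len(rows)))

    Whether batches of 10 actually help still has to be measured against the real table; treat this only as a way to keep the generated IDs without giving up batching.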

    Read the article

  • Subversion: setting up a remote repository and running my site off it?

    - by Matt Andrews
    Hi all. I'm new to SVN and have experimented with it locally on my Dreamhost test server (which has a Subversion "one-click-install" function). Having found my way around the functionality I'm definitely sold, but a little lost about using it to manage my work website (not hosted with Dreamhost, so not offering a one-click SVN installation). Am I correct in thinking that I can set up a repository on my website root (which contains all the files), and then when I develop new features and run a commit, this will update my site? Is this the proper workflow for this sort of thing? If so, is there a standard way to set this kind of thing up on my remote server? Thanks.

    Read the article

  • How do I use dependencies in a makefile without calling a target?

    - by rassie
    I'm using makefiles to convert an internal file format to an XML file which is sent to other colleagues. They would make changes to the XML file and send it back to us (Don't ask, this needs to be this way ;)). I'd like to use my makefile to update the internal files when this XML changes. So I have these rules:
      %.internal: $(DATAFILES)
          # Read changes from XML if any
          # Create internal representation here
      %.xml: %.internal
          # Convert to XML here
    Now the XML could change because of the workflow described above. But since no data files have changed, make would tell me that file.internal is up-to-date. I would like to avoid making %.internal a phony target, and a circular dependency on %.xml obviously doesn't work. Any other way I could force make to check for changes in the XML file and re-build %.internal?

    Read the article

  • How to parallelize this groovy code?

    - by lucas
    I'm trying to write a reusable component in Groovy to easily shoot off emails from some of our Java applications. I would like to pass it a List, where Email is just a POJO (POGO?) with some email info. I'd like it to be multithreaded, at least running all the email logic in a second thread, or make one thread per email. I am really foggy on multithreading in Java so that probably doesn't help! I've attempted a few different ways, but here is what I have right now:
      void sendEmails(List<Email> emails) {
          def threads = []
          def sendEm = emails.each { email ->
              def th = new Thread({
                  Random rand = new Random()
                  def wait = (long)(rand.nextDouble() * 1000)
                  println "in closure"
                  this.sleep wait
                  sendEmail(email)
              })
              println "putting thread in list"
              threads << th
          }
          threads.each { it.run() }
          threads.each { it.join() }
      }
    I was hoping the sleep would randomly slow some threads down so the console output wouldn't be sequential. Instead, I see this:
      putting thread in list
      putting thread in list
      putting thread in list
      putting thread in list
      putting thread in list
      putting thread in list
      putting thread in list
      putting thread in list
      putting thread in list
      putting thread in list
      in closure
      sending email1
      in closure
      sending email2
      in closure
      sending email3
      in closure
      sending email4
      in closure
      sending email5
      in closure
      sending email6
      in closure
      sending email7
      in closure
      sending email8
      in closure
      sending email9
      in closure
      sending email10
    sendEmail basically does what you'd expect, including the println statement, and the client that calls this follows:
      void doSomething() {
          Mailman emailer = MailmanFactory.getExchangeEmailer()
          def to = ["one","two"]
          def from = "noreply"
          def li = []
          def email
          (1..10).each {
              email = new Email(to,null,from,"email"+it,"hello")
              li << email
          }
          emailer.sendEmails li
      }

    Read the article

  • Morphing content based on window part focus

    - by Adrian A.
    What I'm trying to achieve is basically to have code that will morph (move around the page) based on the part of the window which is currently viewed. Scenario:
      - actual page height: 2000px
      - actual screen height (pc, laptop, whatever): 800px
      - 1 image of 600px
      - 3 divs or virtual boxes (just to prove what I want to do)
    Workflow: When you open the page, you'd see the first part of the page with the image loaded in the first div. What I want and need to achieve is that when scrolling the page, and the focus would be on the second div (or the image simply gets out of focus - you can no longer see it), the image would move (disappear from the first box) and appear in the second one, which is currently visible. The idea might seem pretty easy but I'm not Javascript savvy. Ideally, the answer should include a way to load a Javascript instead of that image.

    Read the article

  • SVN Feature Branch Method

    - by Seth
    I am getting an SVN server set up and will be using the feature branch method. I plan on having 1+ branches making up a release tag. How do I merge (?) multiple branches into the release tag, while still maintaining diffs and such? I've given an example of our workflow below.
      1. Multiple devs pull to local
      2. Create feature branch
      3. Commit to branch
      4. Use branch to build QA
    (Here is where my question starts) I need to have all the branches for the next build put into a build tag to be used to build Production.

    Read the article

  • Error_No_Token by adding printer driver (c#)

    - by user1686388
    I'm trying to add printer drivers using the Windows API function AddPrinterDriver. The Win32 error 1008 (An attempt was made to reference a token that does not exist.) is always generated. My code is shown below:
      [DllImport("Winspool.drv")]
      static extern bool AddPrinterDriver(string Name, Int32 Level, [in] ref DRIVER_INFO_3 DriverInfo);

      [StructLayout(LayoutKind.Sequential)]
      public struct DRIVER_INFO_3
      {
          public Int32 cVersion;
          public string Name;
          public string Environment;
          public string DriverPath;
          public string DataFile;
          public string ConfigFile;
          public string HelpFile;
          public string DependentFiles;
          public string MonitorName;
          public string DefaultDataType;
      }

      //.......................
      DRIVER_INFO_3 di = new DRIVER_INFO_3();
      //......................
      AddPrinterDriver(Environment.MachineName, 3, ref di);
    I have also tried to get a token via "ImpersonateSelf" before adding the printer driver, but error 1008 persists. Does anyone have an idea? Best regards.

    Read the article

  • TextMate/Macfusion combo for mounting projects over SSH

    - by Sam Lee
    Here is my workflow: I use Macfusion to mount a server over SSH, and then edit the root directory of the project in TextMate (using mate /Volumes/server/projectdir). I have a plug in installed that disables refreshing on refresh. This works ALMOST perfectly--the only thing I have problems with is "Find in Project": it's REALLY slow. Has anyone run into this problem before and been able to find any solutions? Currently I go to terminal when I have to do a search, but it would be great to be able to do it in TextMate. Thanks!

    Read the article

  • Aspect Oriented Programming vs List<IAction> To execute methods based on conditions

    - by David Robbins
    I'm new to AOP, so bear with me. Consider the following scenario: a state machine is used in a workflow engine, and after the state of the application is changed, a series of commands is executed. Depending on the state, different types of commands should be executed. As I see it, one implementation is to create a List<IAction> and have each individual action determine whether it should execute. Would an Aspect Oriented process work as well? That is, could you create an aspect that notifies a class when a property changes, and execute the appropriate processes from that class? Would this help centralize the state-specific rules?
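
    A minimal Python sketch of the two ideas side by side (all names here are illustrative, not taken from any framework): the state setter plays the role of the "property changed" hook, and each action in the list decides for itself whether it applies to the new state.

      class Action:
          """One item of the List<IAction> idea."""
          def applies_to(self, state):
              raise NotImplementedError
          def execute(self, state):
              raise NotImplementedError

      class NotifyApprovers(Action):
          def applies_to(self, state):
              return state == "approved"
          def execute(self, state):
              print("sending approval notifications")

      class Workflow:
          def __init__(self, actions):
              self._state = "draft"
              self._actions = actions

          @property
          def state(self):
              return self._state

          @state.setter
          def state(self, new_state):
              self._state = new_state
              # This setter stands in for the aspect: every state change
              # fans out to whichever actions say they apply.
              for action in self._actions:
                  if action.applies_to(new_state):
                      action.execute(new_state)

      wf = Workflow([NotifyApprovers()])
      wf.state = "approved"   # prints "sending approval notifications"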

    Read the article

  • How do you learn a class hierarchy quickly?

    - by rsteckly
    Hi, Something I don't enjoy about programming is learning a new API. For example, right now I'm trying to learn Windows Identity Foundation. It's frustrating because I'm going to spend the bulk of the time learning how a few classes work and actually only write several lines of code. In .NET, there are so many types that I seem to spend more time hunting around in MSDN for a class than writing code. It also interrupts my workflow while I'm working, because I have to type a little bit, then look something up. Obviously, I don't have to do this for the basic classes. Whenever new things come up, though, there is definitely some looking up to do. Then I often don't reuse that class enough to really review it or bring it into action. I'm wondering if anybody out there has found a way to memorize (or look up more efficiently) these object model hierarchies?

    Read the article

  • Python - react to custom keyboard interrupt

    - by flixic
    Hello. I am writing a Python chatbot that displays output through the console. Every half second it asks the server for updates and responds to messages. In the console I can see the chat log. This is sufficient in most cases; however, sometimes I want to interrupt the normal workflow and write a custom chat answer myself. I would love to be able to press a button (or combination) that would switch to "custom reply mode". What is the best way to do that, or achieve a similar result? Thanks a lot!
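
    A hedged sketch of one way to do it with only the standard library (Python 3 module names): a daemon thread blocks on stdin, and anything typed becomes a manual reply that the polling loop sends before going back to its normal answers. poll_server, respond_to_updates and send_reply are placeholders for the bot's existing functions, stubbed out here only so the sketch runs on its own.

      import queue
      import sys
      import threading
      import time

      def poll_server():                 # placeholder for the real update poll
          return []

      def respond_to_updates(updates):   # placeholder for the normal reply logic
          pass

      def send_reply(text):              # placeholder for sending a chat message
          print("manual reply:", text)

      manual_replies = queue.Queue()

      def watch_keyboard():
          # Blocks on stdin; every line typed is queued as a manual reply.
          for line in iter(sys.stdin.readline, ""):
              manual_replies.put(line.rstrip("\n"))

      threading.Thread(target=watch_keyboard, daemon=True).start()

      while True:
          try:
              send_reply(manual_replies.get_nowait())    # operator typed something
          except queue.Empty:
              respond_to_updates(poll_server())          # normal half-second cycle
          time.sleep(0.5)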

    Read the article

  • Is there a programmatic way to transform a sequence of image files into a PDF?

    - by Salim Fadhley
    I have a sequence of JPG images. Each of the scans is already cropped to the exact size of one page. They are sequential pages of a valuable and out-of-print book. The publishing application requires that these pages be submitted as a single PDF file. I could take each of these images and just paste them into a word processor (e.g. OpenOffice) - unfortunately the problem here is that it's a very big book and I've got quite a few of these books to get through. It would obviously be time-consuming. This is volunteer work! My second idea was to use LaTeX - I could make a very simple document that consists of nothing more than a series of in-line image includes. I'm sure that this approach could be made to work, it's just a little on the complex side for something which seems like a very simple job. It occurred to me that there must be a simpler way - so any suggestions? I'm on Ubuntu 9.10, my primary programming language is Python, but if the solution is super-simple I'd happily adopt any technology that works.
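
    One hedged possibility, assuming a reasonably recent Pillow (the PIL fork), which can write all the images into a multi-page PDF in a single call; the pages/ directory name is made up for illustration:

      import glob
      from PIL import Image

      scans = sorted(glob.glob("pages/*.jpg"))   # hypothetical directory of page scans
      images = [Image.open(path).convert("RGB") for path in scans]

      # The first image carries the save call; the rest become additional pages.
      images[0].save("book.pdf", save_all=True, append_images=images[1:])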

    Read the article

  • Finding a list of indices from master array using secondary array with non-unique entries

    - by fideli
    I have a master array of length n of id numbers that apply to other analogous arrays with corresponding data for elements in my simulation that belong to those id numbers (e.g. data[id]). Were I to generate a list of id numbers of length m separately and need the information in the data array for those ids, what is the best method of getting a list of indices idx of the original array of ids in order to extract data[idx]? That is, given:
      a = numpy.array([1,3,4,5,6])        # master array
      b = numpy.array([3,4,3,6,4,1,5])    # secondary array
    I would like to generate
      idx = numpy.array([1,2,1,4,2,0,3])
    The array a is typically in sequential order but it's not a requirement. Also, array b will most definitely have repeats and will not be in any order. My current method of doing this is:
      idx = numpy.array([numpy.where(a==bi)[0][0] for bi in b])
    I timed it using the following test:
      a = (numpy.random.uniform(100,size=100)).astype('int')
      b = numpy.repeat(a,100)
      timeit method1(a,b)
      10 loops, best of 3: 53.1 ms per loop
    Is there a better way of doing this?
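
    A hedged vectorized alternative: when a happens to be sorted, numpy.searchsorted(a, b) already gives the positions, and the sorter argument covers the general case. It assumes every value in b actually occurs somewhere in a (otherwise searchsorted silently returns an insertion point rather than a match).

      import numpy

      a = numpy.array([1, 3, 4, 5, 6])          # master array
      b = numpy.array([3, 4, 3, 6, 4, 1, 5])    # secondary array

      order = numpy.argsort(a)                  # identity permutation when a is already sorted
      idx = order[numpy.searchsorted(a, b, sorter=order)]
      # idx -> array([1, 2, 1, 4, 2, 0, 3])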

    Read the article

  • Keeping a web request alive.

    - by The Machine
    I have a web application that helps download reports. But the report generation sometimes takes a lot of time, and the web request times out through the intermediate proxy server (timeout: 90 secs). The workflow for downloading the report is straightforward: the client sends a request to the web server; the web server generates the report and makes it available to the client as an Excel download. The Excel file is generated using POI and the download is provided using Spring's AbstractExcelView. What could be the best way to keep the web request alive (without increasing the timeout, of course)?

    Read the article

  • Facebook Api - Local development, Testserver, Liveserver ... How?

    - by Thijs Kaspers
    I'm working on a new website that uses the Facebook API for users to log in and several implementations of the Graph API. My workflow usually is:
      - Development on localhost: development using MAMP/XAMPP or similar software
      - Push to server (testing domain): a team of people can test the changes for a few days to see if everything works as planned
      - Push to server (live domain): changes are live for the public
    Facebook uses the site URL in the app settings and, for security reasons, will only redirect to that URL... Problem is, I have localhost and 2 different domains. How can I make this work? Of course I could edit the hosts file, but that only fixes it for localhost - still no solution for the test domain. Please tell me this is somehow possible! I'm getting more and more depressed with the Facebook API.

    Read the article

  • saving file name from html form input

    - by user343934
    I am working with Python and Biopython right now. I have a file upload form, and whatever file is uploaded - suppose abc.fasta - I want to pass that same name as the execute() function parameter (abc.fasta) and the display() function parameter (abc.aln). The workflow goes like this:
      - If submit is not true, then display only the header and form part
      - If submit is true, then call execute() and get the file name from the form input
      - Then display the saved file result in the same page; the file name is the same as the input
    My raw code is here -- http://pastebin.com/FPUgZSSe
    Any suggestions, changes and algorithms are appreciated. Thanks
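
    A small sketch of the CGI side of this, assuming the form's file field is named "file" and the script runs via the standard cgi module; execute() and display() stand in for the question's own functions and are stubbed here only so the fragment is self-contained:

      import cgi
      import os

      def execute(fasta_name):       # placeholder for the question's execute()
          pass

      def display(aln_name):         # placeholder for the question's display()
          pass

      form = cgi.FieldStorage()
      if "file" in form and form["file"].filename:
          upload = form["file"]
          name = os.path.basename(upload.filename)        # e.g. abc.fasta
          with open(name, "wb") as out:                   # save under the uploaded name
              out.write(upload.file.read())
          execute(name)                                   # runs on abc.fasta
          display(os.path.splitext(name)[0] + ".aln")     # shows abc.aln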

    Read the article

  • How to use git feature branches with live updates and merge back to master?

    - by karlthorwald
    I have a production website where master is checked out and a development website where I develop in feature branches. When a feature is merged into master I do this on the development site (currently on the new-feature branch):
      $ git commit -m "new feature finished"
      $ git push
      $ git checkout master
      $ git merge new-feature
      $ git push
    And on the production site (currently on the master branch):
      $ git pull
    This works for me. But sometimes the client calls and needs a small change on the website quickly. I can do this on production on master and push master, and this works fine. But when I use a feature branch for the small change I get a gap (on production, on branch master):
      $ git branch quick-feature
      $ git checkout quick-feature
      $ git push origin quick-feature
      $ edit files...
      $ git add .
      $ git commit -m "quick changes"
      $ git push                  # until this point the changes are live
      $ git checkout master       # now the changes are not live anymore
      GAP
      $ git merge quick-feature   # now the changes are live again
      $ git push
    I hope I could make the intention of this workflow clear. Can you recommend something better?

    Read the article

  • Can I Use ASP.NET Wizard Control to Insert Data into Multiple Tables?

    - by SidC
    Hello All, I have an ASP.NET 3.5 webforms project written in VB that involves a multi-table SQL Server insert. That is, I want the customer to input all their contact information, order details etc. into one control (thinking wizard control). Then, I want to call a stored procedure that does the insert into the respective database tables. I'm familiar and comfortable with the ASP.NET wizard control. However, all the examples I've seen in my searches pertain to inserting data into one table. Questions:
      1. Given a typical order process - customer information, order information, order details - should a wizard control be used to insert data into multiple database tables? If not, what controls/workflow do you suggest?
      2. I've set primary keys and indexes on my order details, orders and customers tables. Is there special stored procedure syntax to use to ensure that referential integrity is maintained through the insert process?
    Thanks, Sid

    Read the article

  • Agile version control?

    - by Paul Dixon
    I'm trying to work out a good method to manage code changes on a large project with multiple teams. We use Subversion at the moment, but I want more flexibility in building a new release than I seem to be able to get with Subversion. Here's roughly what I want:
      - Each developer creates easily identifiable patches for the project. Each patch delivers a complete user story (a releasable feature or fix) and might encompass many changes to many files.
      - Developers are able to easily apply and remove their own and other patches to facilitate testing.
      - The release manager selects the patches to be used in the next release into a new branch.
      - The branch is tested, fixes merged in, and ultimately merged into live.
      - Teams can then pull these changes back down into their sandboxes.
    I'm looking at stacked git as a way of achieving this, but what other tools or techniques can deliver this sort of workflow?

    Read the article

  • Calling from C# to C function which accept a struct array allocated by caller

    - by lifey
    I have the following C struct:
      struct XYZ {
          void *a;
          char fn[MAX_FN];
          unsigned long l;
          unsigned long o;
      };
    And I want to call the following function from C#:
      extern "C" int func(int handle, int *numEntries, XYZ *xyzTbl);
    where xyzTbl is an array of XYZ of size numEntries which is allocated by the caller. I have defined the following C# struct:
      [System.Runtime.InteropServices.StructLayoutAttribute(System.Runtime.InteropServices.LayoutKind.Sequential, CharSet = System.Runtime.InteropServices.CharSet.Ansi)]
      public struct XYZ
      {
          public System.IntPtr rva;
          [System.Runtime.InteropServices.MarshalAsAttribute(System.Runtime.InteropServices.UnmanagedType.ByValTStr, SizeConst = 128)]
          public string fn;
          public uint l;
          public uint o;
      }
    and a method:
      [System.Runtime.InteropServices.DllImport(@"xyzdll.dll", CallingConvention = CallingConvention.Cdecl)]
      public static extern Int32 func(Int32 handle, ref Int32 numntries, [MarshalAs(UnmanagedType.LPArray)] XYZ[] arr);
    Then I try to call the function:
      XYZ xyz = new XYZ[numEntries];
      for (...)
          xyz[i] = new XYZ();
      func(handle, numEntries, xyz);
    Of course it does not work. Can someone shed light on what I am doing wrong?

    Read the article

  • How do I get a remote tracking branch to stay up to date with remote origin in a bare Git repository?

    - by Beau Simensen
    I am trying to maintain a bare copy of a Git repository and having some issues keeping the remote tracking branches up to date. I create the remote tracking branches like this: git branch -t 0.1 origin/0.1 This seems to do what I need to do for that point in time. However, if I make changes to origin and then fetch with the bare repo, things start to fall apart. My workflow looks like this: git fetch origin It looks like all of the commits come in at that point, but my local copy of 0.1 is not being updated. I can see that the changes have been brought into the repository by doing the following: git diff 0.1 refs/remotes/origin/0.1 What do I need to do to get my tracking branch updated with the remote's updates? I feel like I must be missing a step or a flag somewhere.

    Read the article

  • Is this an acceptable use of "ASCII arithmetic"?

    - by jmgant
    I've got a string value of the form 10123X123456 where 10 is the year, 123 is the day number within the year, and the rest is unique system-generated stuff. Under certain circumstances, I need to add 400 to the day number, so that the number above, for example, would become 10523X123456. My first idea was to substring those three characters, convert them to an integer, add 400 to it, convert them back to a string and then call replace on the original string. That works. But then it occurred to me that the only character I actually need to change is the third one, and that the original value would always be 0-3, so there would never be any "carrying" problems. It further occurred to me that the ASCII code points for the numbers are consecutive, so adding the number 4 to the character "0", for example, would result in "4", and so forth. So that's what I ended up doing. My question is, is there any reason that won't always work? I generally avoid "ASCII arithmetic" on the grounds that it's not cross-platform or internationalization friendly. But it seems reasonable to assume that the code points for numbers will always be sequential, i.e., "4" will always be 1 more than "3". Anybody see any problem with this reasoning? Here's the code:
      string input = "10123X123456";
      input[2] += 4;   // Output should be 10523X123456
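
    For what it's worth, the same digit-shift idea written out in Python (not the asker's language, just an illustration that the digits '0'-'9' are guaranteed to be consecutive code points in both ASCII and Unicode):

      s = "10123X123456"
      assert s[2] in "0123"          # the premise: the day's hundreds digit is 0-3
      shifted = s[:2] + chr(ord(s[2]) + 4) + s[3:]
      print(shifted)                 # 10523X123456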

    Read the article

  • how to deploy web application directly from git master branch

    - by mobile.linkr
    For educational purposes, I am writing a server instance in GCE (Google Compute Engine) to serve a few web apps mostly (to be) written in Dart and Polymer. My workflow is: when my students log in to the server above, they will automatically fork those web apps into their own repositories in their own server instances for further development. My issues are:
      - How do I serve web applications (they are git repositories as well) in GCE like GitHub Pages?
      - Is it possible to manipulate GitHub Pages to serve web apps mostly using Dart and Polymer packages?
    Thanks in advance.

    Read the article

  • How to evaluate "enterprise" platforms?

    - by Ran Biron
    Hi all, I'm tasked with evaluating an "enterprise" platform for the next-gen version of a product. We're currently considering two "types" of platforms - RAD (workflow engine, integrated UI, small cores of "technology plugins" to the workflows, automatic persisting of state...) like SalesForce.com / Service-Now.com and "cloud based" (EC2 / AppEngine...). While I have a few ideas on where to start, I'd like your opinions - how would you evaluate platforms for an enterprise suite of products? What factors would you consider? How would you eliminate weak options quickly enough to be able to concentrate on the few strong ones? Also interesting is how would you compare enterprise RAD (proven technology, quick to develop - but tends to look "the same as the competition") to cloud-based technology (lots of "buzz", not that many competitors - easy to justify to management, but probably lacking (?) enterprise tools and experience). As said before - I have a few ideas, but would like to see some answers before I post mine so I wouldn't drive the discussion to a specific place. RB.

    Read the article
