Search Results

Search found 28207 results on 1129 pages for 'tfs process template'.

Page 27/1129 | < Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >

  • Set process priority to High: Dangerous?

    - by eek142
    I have read that setting something to realtime is a big no-no, so I am not going to do that. But I do have an application that I need to make sure always has the highest priority on my system as it is critical for the rest of the applications I am running. Is there any danger in setting the priority to high, which is one level below realtime? Also, how would I be able to do this by changing the shortcut target? What is the command?
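    For the shortcut question, one approach (a sketch based on the standard Windows start command; the title string and path are placeholders) is to point the shortcut target at cmd.exe and let start apply the priority class:

        cmd /c start "MyApp" /high "C:\Path\To\MyApp.exe"

    The same switch family (/low, /belownormal, /normal, /abovenormal, /high) covers every priority class short of realtime.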

    Read the article

  • Background process and SIGHUP

    - by Charles Salvia
    My understanding is that a program that is associated with a terminal will receive the SIGHUP signal if that terminal is closed. This usually terminates the program. I also know that you can use the nohup command along with the & symbol to run the program in the background, and disassociate it from the terminal so that the program is not terminated when the terminal closes (on logout). However, suppose a program is run normally without nohup, but is then suspended using Ctrl-Z. If the program is then resumed in the background using the bg command, will it receive the SIGHUP signal on logout? Or to put it another way: if I have a program which is already running, and I don't want to stop it but I'd like to log out, can I suspend it using Ctrl-Z and run it in the background using bg? Or will the program be terminated when I log out?
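    In bash the usual answer (a sketch; whether a backgrounded job gets SIGHUP at logout also depends on the shell and its huponexit option) is to follow bg with the disown builtin, which marks the job so the shell does not forward SIGHUP to it:

        ^Z                # suspend the running program
        $ bg              # resume it in the background
        $ disown -h %1    # keep the job in the job table, but skip it when SIGHUP is sent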

    Read the article

  • Kill PHP process when connection is closed

    - by user838437
    I've posted the following question to SO, but thought there might be a server-based solution. http://stackoverflow.com/questions/9053964/php-script-with-sleep-does-not-exit-on-connection-close I'm running an Ubuntu VPS to run this script, and I'm trying to get the script to die when the user closes the window/tab of his browser. There are several PHP-based functions to check whether the connection is still open, but none of them works (trust me, I tested them all). Any creative ideas on how I can do this through the server, maybe?
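    For reference, the in-script pattern those functions are meant to support (a sketch; PHP generally only notices a dead connection when it next tries to push output to the client, and output buffering or a proxy can defeat the flush, which may be why the checks appear not to work) looks like this:

        <?php
        ignore_user_abort(false);       // abort the script when the client disconnects
        while (true) {
            do_chunk_of_work();         // hypothetical work function
            echo " ";                   // send something down the wire...
            ob_flush();
            flush();                    // ...so PHP can detect the closed socket
            if (connection_aborted()) {
                exit;
            }
            sleep(1);
        }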

    Read the article

  • Unable to capture standard output of process using Boost.Process

    - by Chris Kaminski
    Currently I am using Boost.Process from the Boost sandbox, and am having issues getting it to capture my standard output properly; wondering if someone can give me a second pair of eyeballs on what I might be doing wrong. I'm trying to take thumbnails out of RAW camera images using DCRAW (latest version), and capture them for conversion to Qt QImages. The process launch function:

        namespace bf = ::boost::filesystem;
        namespace bp = ::boost::process;

        QImage DCRawInterface::convertRawImage(string path) {
            // commandline: dcraw -e -c <srcfile> -> piped to stdout.
            if ( bf::exists( path ) ) {
                std::string exec = "bin\\dcraw.exe";
                std::vector<std::string> args;
                args.push_back("-v");
                args.push_back("-c");
                args.push_back("-e");
                args.push_back(path);
                bp::context ctx;
                ctx.stdout_behavior = bp::capture_stream();
                bp::child c = bp::launch(exec, args, ctx);
                bp::pistream &is = c.get_stdout();
                ofstream output("C:\\temp\\testcfk.jpg");
                streamcopy(is, output);
            }
            return (NULL);
        }

        inline void streamcopy(std::istream& input, std::ostream& out) {
            char buffer[4096];
            int i = 0;
            while (!input.eof()) {
                memset(buffer, 0, sizeof(buffer));
                int bytes = input.readsome(buffer, sizeof buffer);
                out.write(buffer, bytes);
                i++;
            }
        }

    Invoking the converter:

        DCRawInterface DcRaw;
        DcRaw.convertRawImage("test/CFK_2439.NEF");

    The goal is to simply verify that I can copy the input stream to an output file. Currently, if I comment out the following line:

        args.push_back("-c");

    then the thumbnail is written by DCRAW to the source directory with a name of CFK_2439.thumb.jpg, which proves to me that the process is getting invoked with the right arguments. What's not happening is connecting to the output pipe properly. FWIW: I'm performing this test on Windows XP under Eclipse 3.5/latest MinGW (GCC 4.4).
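    Two details in the code above stand out as likely culprits (an observation, not a verified fix): the destination file is opened in text mode, which corrupts JPEG bytes on Windows, and readsome() only drains the stream's already-buffered characters, so the loop can spin without ever blocking for more data. A more robust copy loop might look like:

        inline void streamcopy(std::istream& input, std::ostream& out) {
            char buffer[4096];
            // read() blocks until the buffer fills or the pipe closes;
            // gcount() reports what the final short read delivered.
            while (input.read(buffer, sizeof buffer) || input.gcount() > 0) {
                out.write(buffer, input.gcount());
            }
        }

        // ...and open the destination in binary mode:
        ofstream output("C:\\temp\\testcfk.jpg", std::ios::binary);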

    Read the article

  • Validating a linked item’s data template in Sitecore

    - by Kyle Burns
    I’ve been doing quite a bit of work in Sitecore recently and last week I encountered a situation that it appears many others have hit.  I was working with a field that had been configured originally as a grouped droplink, but now needed to be updated to support additional levels of hierarchy in the folder structure.  If you’ve done any work in Sitecore that statement makes sense, but if not it may seem a bit cryptic.  Sitecore offers a number of different field types and a subset of these field types focus on providing links either to other items on the content tree or to content that is not stored in Sitecore.  In the case of the grouped droplink, the field is configured with a “root” folder and each direct descendant of this folder is considered to be a header for a grouping of other items and displayed in a dropdown.  A picture is worth a thousand words, so consider the following piece of a content tree: If I configure a grouped droplink field to use the “Current” folder as its datasource, the control that gets to my content author looks like this: This presents a nicely organized display and limits the user to selecting only the direct grandchildren of the folder root.  It also presents the limitation that struck us as we were thinking through the content architecture and how it would hold up over time – the authors cannot further organize content under the root folder because of the structure required for the dropdown to work.  Over time, not allowing the hierarchy to go any deeper would prevent our authors from being able to organize their content in a way that it would be found when needed, so the grouped droplink data type was not going to fit the bill.

    I needed to look for an alternative data type that allowed for selection of a single item and limited my choices to descendants of a specific node on the content tree.  After looking at the options available for links in Sitecore and considering them against each other, one option stood out as nearly perfect – the droptree.  This field type stores its data identically to the droplink and allows for the selection of zero or one items under a specific node in the content tree.  By changing my data template to use droptree instead of grouped droplink, the author is now presented with the following when selecting a linked item: Sounds great, but I did say almost perfect – there’s still one flaw.  The code intended to display the linked item is expecting the selection to use a specific data template (or more precisely it makes certain assumptions about the fields that will be present), but the droptree does nothing to prevent the author from selecting a folder (since folders are items too) instead of one of the items contained within a folder.  I looked to see if anyone had already solved this problem.  I found many people discussing the problem, but the closest that I found to a solution was the statement “the best thing would probably be to create a custom validator” with no further discussion in regards to what this validator might look like.  I needed to create my own validator to ensure that the user had not selected a folder.  Since so many people had the same issue, I decided to make the validator as reusable as possible and share it here.

    The validator that I created inherits from StandardValidator.  In order to make the validator more intuitive to developers that are familiar with the TreeList controls in Sitecore, I chose to implement the following parameters:

    - ExcludeTemplatesForSelection – serves as a “deny list”. If the data template of the selected item is in this list it will not validate.
    - IncludeTemplatesForSelection – this can either be empty, to indicate that any template not contained in the exclusion list is acceptable, or it can contain the list of acceptable templates.

    Now that I’ve explained the parameters and the purpose of the validator, I’ll let the code do the rest of the talking:

        /// <summary>
        /// Validates that a link field value meets template requirements
        /// specified using the following parameters:
        /// - ExcludeTemplatesForSelection: If present, the item being
        ///   based on an excluded template will cause validation to fail.
        /// - IncludeTemplatesForSelection: If present, the item not being
        ///   based on an included template will cause validation to fail.
        ///
        /// ExcludeTemplatesForSelection trumps IncludeTemplatesForSelection
        /// if the same value appears in both lists. Lists are comma separated.
        /// </summary>
        [Serializable]
        public class LinkItemTemplateValidator : StandardValidator
        {
            public LinkItemTemplateValidator()
            {
            }

            /// <summary>
            /// Serialization constructor is required by the runtime
            /// </summary>
            /// <param name="info"></param>
            /// <param name="context"></param>
            public LinkItemTemplateValidator(SerializationInfo info, StreamingContext context) : base(info, context) { }

            /// <summary>
            /// Returns whether the linked item meets the template
            /// constraints specified in the parameters
            /// </summary>
            /// <returns>
            /// The result of the evaluation.
            /// </returns>
            protected override ValidatorResult Evaluate()
            {
                if (string.IsNullOrWhiteSpace(ControlValidationValue))
                {
                    return ValidatorResult.Valid; // let "required" validation handle
                }

                var excludeString = Parameters["ExcludeTemplatesForSelection"];
                var includeString = Parameters["IncludeTemplatesForSelection"];
                if (string.IsNullOrWhiteSpace(excludeString) && string.IsNullOrWhiteSpace(includeString))
                {
                    return ValidatorResult.Valid; // "allow anything" if no params
                }

                Guid linkedItemGuid;
                if (!Guid.TryParse(ControlValidationValue, out linkedItemGuid))
                {
                    return ValidatorResult.Valid; // probably put validator on wrong field
                }

                var item = GetItem();
                var linkedItem = item.Database.GetItem(new ID(linkedItemGuid));

                if (linkedItem == null)
                {
                    return ValidatorResult.Valid; // this validator isn't for broken links
                }

                var exclusionList = (excludeString ?? string.Empty).Split(',');
                var inclusionList = (includeString ?? string.Empty).Split(',');

                if ((inclusionList.Length == 0 || inclusionList.Contains(linkedItem.TemplateName))
                    && !exclusionList.Contains(linkedItem.TemplateName))
                {
                    return ValidatorResult.Valid;
                }

                Text = GetText("The field \"{0}\" specifies an item which is based on template \"{1}\". This template is not valid for selection", GetFieldDisplayName(), linkedItem.TemplateName);

                return GetFailedResult(ValidatorResult.FatalError);
            }

            protected override ValidatorResult GetMaxValidatorResult()
            {
                return ValidatorResult.FatalError;
            }

            public override string Name
            {
                get { return @"LinkItemTemplateValidator"; }
            }
        }

    In this blog entry, I have shared some code that I found useful in solving a problem that seemed fairly common.
Hopefully the next person that is looking for this answer finds it useful as well.

    Read the article

  • What is the correlation between the quality of the software development process and the quality of the product?

    - by Ophir Yoktan
    I used to believe that practicing "good" software development methods tends to yield a better product in the long run. However, I've seen quite a few cases where "quick-and-dirty" / "brute-force" / "copy-paste" programming appeared to give decent results quicker and cheaper. This appears especially in cases where time to market is more critical than maintenance overhead. Is there a correlation between the quality of the development process and techniques and the quality of the product?

    Read the article

  • Release Process Improvements

    - by wallismark
    The process of creating a new build and releasing it to production is a critical step in the SDLC, but it is often left as an afterthought and varies greatly from one company to the next. I'm hoping people will share improvements they have made to this process in their organisation so we can all take steps to 'reduce the pain'. So the question is: name one painful/time-consuming part of your release process and what you did to improve it. My example: at a previous employer all developers made database changes on one common development database. Then when it came to release time, we used Redgate's SQL Compare to generate a huge script from the differences between the Dev and QA databases. This works reasonably well, but the problems with this approach are:

    - ALL changes in the Dev database are included, some of which may still be 'works in progress'.
    - Sometimes developers made conflicting changes (that were not noticed until the release was in production).
    - It was a time-consuming and manual process to create and validate the script (by validate I mean, try to weed out issues like problems 1 and 2).
    - When there were problems with the script (e.g. the order in which things were run, such as creating a record which relies on a foreign key record which is in the script but not yet run) it took time to 'tweak' it so it ran smoothly.
    - It's not an ideal scenario for Continuous Integration.

    So the solution was:

    - Enforce a policy that all changes to the database must be scripted. A naming convention was important for ensuring the correct running order of the scripts.
    - Create/use a tool to run the scripts at release time (see the sketch below).
    - Developers had their own copy of the database to develop against (so there was no more 'stepping on each other's toes').

    The next release after we started this process was much faster with fewer problems; indeed the only problems found were due to people 'breaking the rules', e.g. not creating a script. Once the issues with releasing to QA were fixed, when it came time to release to production it was very smooth. We applied a few other changes (like introducing CI) but this was the most significant; overall we reduced release time from around 3 hours down to a max of 10-15 minutes.
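    For illustration, here is a minimal sketch of the kind of script-runner described above (the folder layout and connection string are invented, and real scripts containing GO batch separators would need splitting before being sent through SqlCommand):

        // Runs every .sql script in a folder, in file-name order.
        using System;
        using System.Data.SqlClient;
        using System.IO;
        using System.Linq;

        class ReleaseScriptRunner
        {
            static void Main(string[] args)
            {
                string scriptFolder = args[0];   // e.g. Scripts\2010-06-Release
                string connectionString = args[1];

                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open();
                    // The naming convention (001_xxx.sql, 002_yyy.sql, ...) encodes the run order.
                    foreach (var file in Directory.GetFiles(scriptFolder, "*.sql").OrderBy(f => f))
                    {
                        Console.WriteLine("Running " + Path.GetFileName(file));
                        using (var cmd = new SqlCommand(File.ReadAllText(file), conn))
                        {
                            cmd.ExecuteNonQuery();
                        }
                    }
                }
            }
        }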

    Read the article

  • Modify Build Failure Work Item in TFS 2010 Build

    - by Jakob Ehn
    The default behaviour in TFS Team Build (all versions) is to create a bug work item when a build fails. The main benefit of this is that you get a work item for something that needs to be done, namely to fix the build! When the developer responsible for the build failure has fixed the problem, he/she can associate that check-in with the work item that was created from the previous build failure. In TFS 2005/2008 you could modify the information in the created work item by changing some predefined properties in the TFSBuild.proj file:

        <!-- WorkItemType
         The type of the work item created on a build failure.
        -->
        <WorkItemType>Bug</WorkItemType>

        <!-- WorkItemFieldValues
         Fields and values of the work item created on a build failure.
         Note: Use reference names for fields if you want the build to be
         resistant to field name changes. Reference names are language
         independent while friendly names are changed depending on the
         installed language. For example, "System.Reason" is the reference
         name for the "Reason" field.
        -->
        <WorkItemFieldValues>System.Reason=Build Failure;System.Description=Start the build using Team Build</WorkItemFieldValues>

        <!-- WorkItemTitle
         Title of the work item created on build failure.
        -->
        <WorkItemTitle>Build failure in build:</WorkItemTitle>

        <!-- DescriptionText
         History comment of the work item created on a build failure.
        -->
        <DescriptionText>This work item was created by Team Build on a build failure.</DescriptionText>

        <!-- BuildLogText
         Additional comment text for the work item created on a build failure.
        -->
        <BuildlogText>The build log file is at:</BuildlogText>

        <!-- ErrorWarningLogText
         Additional comment text for the work item created on a build failure.
         This text will only be added if there were errors or warnings.
        -->
        <ErrorWarningLogText>The errors/warnings log file is at:</ErrorWarningLogText>

    In TFS 2010, with Windows Workflow, you change this by modifying the properties on the OpenWorkItem activity. The hardest part of this is to actually find where this activity is located in the build process workflow. If you open the build definition in XAML you can just search for OpenWorkItem. If you use the designer, you need to click your way down to the Catch section of the Try to Compile the Project sequence. To change the default values of the created work item, select the Created Work Item activity and look at the Properties window. Note the CustomFields property, which is a dictionary with key (work item field name) and value. If you add custom fields to your work item, you can add a value for each of them here by adding a new entry in the dictionary.

    Read the article

  • CCNet TFS Migration - Dealing with left over folders

    - by Michael Stephenson
    I'm currently in the process of migrating our many BizTalk projects from MKS source control to TFS. While we will be using TFS for work item tracking and source control etc., we will be continuing to use CruiseControl for continuous integration, although I'm updating this to CCNet 1.5 at the same time. I'll post a few things, as much as a reminder to myself, about some of the problems we came across.

    Problem: After the first build of our code, the next time a build is triggered an error is encountered by the TFS source control block refreshing the source code.

        System.IO.IOException: The directory is not empty.
           at System.IO.Directory.DeleteHelper(String fullPath, String userPath, Boolean recursive)
           at System.IO.Directory.Delete(String fullPath, String userPath, Boolean recursive)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.deleteDirectory(String path)
           at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Vsts.GetSource(IIntegrationResult result)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Build(IIntegrationResult result)
           at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

        Project: Bupa.BPI.Documents
        Date of build: 2011-01-28 14:54:21
        Running time: 00:00:05
        Integration Request: Build (ForceBuild) triggered from VMOPBZDEV11

    Solution: The problem seems to be with a folder called TestLocations, which is created by the build process and used along with the file adapter as a way to get messages into BizTalk. For some reason the source control block, when it does a full refresh of the code, does not get rid of this folder and then complains that's a problem and fails the build. Interestingly, there are other folders created by the build which are deleted fine. My assumption is that this is something to do with the file adapter polling the directory. However, note that we have not had this problem with other source control blocks in the past. To work around this I have added a prebuild task to the ccnet.config file to delete this folder before the source control block is executed. See below for an example:

        <prebuild>
          <exec>
            <executable>cmd.exe</executable>
            <buildArgs>/c "if exist "C:\<MyCode>\TestLocations" rd /s /q "C:\<MyCode>\TestLocations""</buildArgs>
          </exec>
        </prebuild>

    Read the article

  • Development process for an embedded project with significant hardware change

    - by pierr
    Hi, I have a good idea about the Agile development process, but it seems it does not fit well with an embedded project with significant hardware change. I will describe below what we are currently doing (an ad-hoc way, no defined process yet). The changes are divided into three categories and different processes are used for them:

    Complete hardware change, example: use a different video codec IP
    a) Study the new IP
    b) RTL/FPGA simulation
    c) Implement the legacy interface - go to b)
    d) Wait until hardware (tape out) is ready
    e) Test on the real hardware

    Hardware improvement, example: enhance the image display quality by improving the underlying algorithm
    a) RTL/FPGA simulation
    b) Wait until hardware is ready and test on the hardware

    Minor change, example: only change hardware register mapping
    a) Wait until hardware is ready and test on the hardware

    The worry is that it seems we don't have much control over, or confidence in, software maturity for the hardware change, as the bring-up schedule is always very tight and the customer desires a seamless change when updating to a new version of the hardware. How did you manage this kind of hardware change? Did you solve it with a Hardware Abstraction Layer (HAL)? Did you have automated tests for the HAL layer? How did you test when the hardware platform was not even ready? Do you have a well-documented process for this kind of change? Thanks for your insight.
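    For what it's worth, a HAL for the codec case often boils down to a table of function pointers that each chip revision fills in; the sketch below (all names invented for illustration) shows the idea, and gives the software a seam that can be faked out for testing before silicon or even the FPGA build arrives:

        /* Hypothetical HAL interface for a video codec block. */
        typedef struct {
            int  (*init)(void);
            int  (*decode_frame)(const unsigned char *in, unsigned char *out, int len);
            void (*shutdown)(void);
        } codec_hal;

        /* One implementation per hardware revision... */
        extern const codec_hal codec_hal_revA;
        extern const codec_hal codec_hal_revB;

        /* ...and a pure-software fake so the HAL's callers can be
           tested before the new hardware is available. */
        extern const codec_hal codec_hal_sim;

        int play(const codec_hal *hal, const unsigned char *frame, unsigned char *buf, int len)
        {
            if (hal->init() != 0)
                return -1;
            int rc = hal->decode_frame(frame, buf, len);
            hal->shutdown();
            return rc;
        }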

    Read the article

  • The First Microsoft Dynamics NAV Builds on TFS 2010 Server

    - by ssmantha
    We are now successfully able to build Dynamics NAV solutions using the TFS build workflow mechanisms. Lots of test builds were made; the builds can restore the NAV database and start from a fresh solution, take the latest NAV objects, import them into Navision and call the compile method. The workflow is also able to generate FOB files as output, which can be shipped directly to customers. I think this is the first implementation in the world of the TFS build concepts in conjunction with NAV. I think it is time to change our thinking and bring the practices of ALM into ERP product development.

    Read the article

  • [News] A TFS plugin for Eclipse

    Following the recent acquisition of the Teamprise company in November, Microsoft has announced the availability of a TFS plugin for Eclipse named Visual Studio Team Explorer 2010: "The beta release contains what we consider to be the essential features necessary to claim that we're a client for TFS 2010. We've been trying to strike a balance between including 2010 features, and getting the product to market, so you won't see everything here yet (...)". Screenshots of this apparently free plugin are available on the blog in question.

    Read the article

  • TFS 2010 and Expanding Possibilities

    - by ssmantha
    My organization is migrating to the new Team Foundation Server 2010, and we are all busy experimenting with the ALM features. I find it very exciting and I foresee quite a lot of possibilities for integrating TFS 2010 with the Dynamics product line. I hope most of us know about the version control capabilities. However, I see strong possibilities for team builds and other automation, thanks to technologies like Workflow Foundation in .NET, which is now being actively used in various TFS scenarios. Lots of experiments, so there is more to come!!

    Read the article

  • TFS Backup Plan Wizard Tool

    - by Enrique Lima
    With the September 2010 release of the TFS 2010 Power Tools came an addition to the Team Foundation Server Administration Console.  This addition is the Team Foundation Backups tree item.  The tool is used to create backup plans, and to work with it you run through a wizard, just like you would when configuring TFS or any of its extensions. The areas covered by the tool include:

    - Backup to a network backup path, and retention configuration.
    - Under Advanced Options, the extension to be used for the full and transactional backups.
    - The capability to include external databases, meaning the reporting databases and SharePoint databases, as part of the plan.

    There are further options, as you can see, that include being able to define a task scheduler account, set alerts for notifications on execution of the plans, and lastly configure the schedule for the plan execution.  All in all a very good tool and a great way to safeguard the investment you’ve made.

    Read the article

  • C# Application process hangs after some time

    - by Chris
    Hi, I implemented a simple C# application which inserts about 350000 records into the database. This used to work well and the process took approximately 20 minutes. I created a progress bar which shows you the approximate progress of the records insertion. When the progress bar reaches about 75% it stops progressing. I have to manually terminate the program as the process doesn't seem to complete. If I use less data (like 10000 records), the progress bar finishes and the process completes. However, when I try to insert all the records, this no longer happens. Note that if I wait longer before terminating the program manually, more records will have been inserted. For example, if I terminate the program after 15 minutes, 200000 records are inserted, whereas if I terminate it after 20 minutes, 250000 records are inserted. This program is using a single thread. In fact I can't do anything else until the process is complete. Does this have anything to do with threading or processes? Any feedback will be greatly appreciated. Thanks.
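    A common shape for this kind of job (a sketch, not a diagnosis of the hang; the records list and InsertRecord are stand-ins for whatever the application actually does) is to move the inserts off the UI thread with a BackgroundWorker, so the form stays responsive and the progress bar keeps painting:

        // Assumes a form with backgroundWorker1 (WorkerReportsProgress = true)
        // and progressBar1 added from the designer.
        private void StartImport()
        {
            backgroundWorker1.DoWork += (s, e) =>
            {
                for (int i = 0; i < records.Count; i++)
                {
                    InsertRecord(records[i]);   // hypothetical per-row insert
                    backgroundWorker1.ReportProgress((i + 1) * 100 / records.Count);
                }
            };
            backgroundWorker1.ProgressChanged += (s, e) =>
                progressBar1.Value = e.ProgressPercentage;   // raised on the UI thread
            backgroundWorker1.RunWorkerAsync();
        }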

    Read the article

  • Opening up process and grabbing window title. Where did I go wrong?

    - by user1632018
    In my application I allow the users to add a program from an open file dialog, and it then adds the item to a ListView and saves the item's location in the tag. So what I am trying to do is: when the program in the ListView is selected and the button is pressed, it starts a timer, and this timer checks to see if the process is running; if it isn't, it launches the process, and once the process is launched it gets the window title of the process and sends it to a textbox on another form. EDIT: The question is whether anyone can see why it is not working; by this I mean starting the process, then when it's started closing the form and adding the process window title to a textbox on another form. I have tried to get it working but I can't. I know that the process name it is getting is right. I think my problem is to do with my For loop. Basically it isn't doing anything visible right now. I feel like I am very close with my code and I'm hoping it just needs a couple of minor tweaks. Any help would be appreciated. Sorry if my coding practices aren't that great, I'm pretty new to this. EDIT: I thought I found a solution but it only works now if the process has been started already. It won't start it up.

        Private Sub Timer1_Tick(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Timer1.Tick
            Dim s As String = ListView1.SelectedItems(0).Tag
            Dim myFile As String = Path.GetFileName(s)
            Dim mu As String = myFile.Replace(".exe", "").Trim()
            Dim f As Process
            Dim p As Process() = Process.GetProcessesByName(mu)
            For Each f In p
                If p.Length > 0 Then
                    For i As Integer = 0 To p.Length - 1
                        ProcessID = (p(i).Id)
                        AutoMain.Name.Text = f.MainWindowTitle
                        Timer1.Enabled = False
                        Me.Close()
                    Next
                Else
                    ProcessID = 0
                End If
                If ProcessID = 0 Then
                    Process.Start(myFile)
                End If
            Next
        End Sub
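    For comparison, one possible restructuring (a sketch that keeps the original variable names; note that when GetProcessesByName returns an empty array the For Each body never runs, which would explain why the exe is never started):

        ' Start the exe from its full path if it isn't running,
        ' then read the window title once the process has one.
        Dim procs As Process() = Process.GetProcessesByName(mu)
        Dim proc As Process
        If procs.Length > 0 Then
            proc = procs(0)
        Else
            proc = Process.Start(s)        ' s holds the full path saved in the tag
            proc.WaitForInputIdle(5000)    ' give the main window time to appear
        End If
        proc.Refresh()                     ' refresh cached data such as the title
        AutoMain.Name.Text = proc.MainWindowTitle
        Timer1.Enabled = False
        Me.Close()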

    Read the article

  • Can 'screen' grab an existing process and tie itself to it?

    - by warren
    Scenario:
    - Started a process that's going to take "a while" to complete, outside of screen
    - Need to leave the terminal / network hiccups
    - Process lost

    Would be nice if:
    - Started a process outside of screen
    - Realize error
    - Run screen <magic-goes-here> and it grabs the active process to itself

    From the man pages and --help info, I don't see a way this can be done. Is this possible directly with screen? If not, is it possible to change the owning shell of a process, so that the bash (or other shell of your choosing) instance inside screen can have a command run which will change the parent shell of the initial process to itself from the originator?
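    As an aside, screen itself has no such command, but third-party tools attempt exactly this by re-attaching the process to a new terminal via ptrace; a typical session (assuming a tool such as reptyr is installed and ptrace is permitted) looks like:

        # in the original shell:
        ^Z
        $ bg
        $ disown %1        # so the old shell won't SIGHUP the process at logout
        # then, inside a fresh screen session:
        $ reptyr 12345     # attach PID 12345 to the current terminal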

    Read the article

  • Odd company release cycle: Go Distributed Source Control?

    - by MrLane
    Sorry about this long post, but I think it is worth it! I have just started with a small .NET shop that operates quite a bit differently to other places that I have worked. Unlike any of my previous positions, the software written here is targeted at multiple customers and not every customer gets the latest release of the software at the same time. As such, there is no "current production version." When a customer does get an update, they also get all of the features added to the software since their last update, which could be a long time ago. The software is highly configurable and features can be turned on and off: so-called "feature toggles." Release cycles are very tight here; in fact they are not on a schedule: when a feature is complete the software is deployed to the relevant customer. The team only last year moved from Visual SourceSafe to Team Foundation Server. The problem is they still use TFS as if it were VSS and enforce checkout locks on a single code branch. Whenever a bug fix gets put out into the field (even for a single customer) they simply build whatever is in TFS, test that the bug was fixed and deploy to the customer! (Myself coming from a pharma and medical devices software background, this is unbelievable!) The result is that half-baked dev code gets put into production without even being tested. Bugs are always slipping into release builds, but often a customer who just got a build will not see these bugs if they don't use the feature the bug is in. The director knows this is a problem as the company is starting to grow all of a sudden, with some big clients coming on board and more small ones.

    I have been asked to look at source control options in order to eliminate deploying buggy or unfinished code, but without sacrificing the somewhat asynchronous nature of the team's releases. I have used VSS, TFS, SVN and Bazaar in my career, but TFS is where most of my experience has been. Previously most teams I have worked with used a two- or three-branch solution of Dev-Test-Prod, where for a month developers work directly in Dev and then changes are merged to Test then Prod, or promoted "when it's done" rather than on a fixed cycle. Automated builds were used, using either CruiseControl or Team Build. In my previous job Bazaar was used sitting on top of SVN: devs worked in their own small feature branches then pushed their changes to SVN (which was tied into TeamCity). This was nice in that it was easy to isolate changes and share them with other people's branches. With both of these models there was a central dev and prod (and sometimes test) branch through which code was pushed (and labels were used to mark builds in prod from which releases were made... and these were made into branches for bug fixes to releases and merged back to dev). This doesn't really suit the way of working here, however: there is no order to when various features will be released; they get pushed when they are complete. With this requirement the "continuous integration" approach as I see it breaks down. To get a new feature out with continuous integration it has to be pushed via dev-test-prod, and that will capture any unfinished work in dev.

    I am thinking that to overcome this we should go down a heavily feature-branched model with NO dev-test-prod branches; rather, the source should exist as a series of feature branches which, when development work is complete, are locked, tested, fixed, locked, tested and then released. Other feature branches can grab changes from other branches when they need/want, so eventually all changes get absorbed into everyone else's. This fits very much the pure Bazaar model from what I experienced at my last job. As flexible as this sounds, it just seems odd to not have a dev trunk or prod branch somewhere, and I am worried about branches forking never to re-integrate, or small late changes made that never get pulled across to other branches, and developers complaining about merge disasters... What are people's thoughts on this?

    A second, final question: I am somewhat confused about the exact definition of distributed source control: some people seem to suggest it is about just not having a central repository like TFS or SVN, some say it is about being disconnected (SVN is 90% disconnected and TFS has a perfectly functional offline mode), and others say it is about feature branching and ease of merging between branches with no parent-child relationship (TFS also has baseless merging!). Perhaps this is a second question!

    Read the article

  • Execute C++ exe from C# form using Process.start()

    - by Dan
    Hi, I'm trying to create a C# forms app that will allow me to use all of my previous C++ programs from one central program. I'm able to open the exes with Process.Start; however, the program does not run correctly. Example code:

        Process.Start("C:\\Documents and Settings\\dan\\Desktop\\test.exe");

    This will bring up the console and act like it's running, but it does not run like when I normally compile and run from the C++ editor. Is there a StartInfo variable I need to set to signify that it's a C++ program, or something along that line? Also, is there any way to execute a C++ program using Process.Start that will allow me to pass it variables through the command line via argc and argv? Thanks
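    On the second point, Process.Start has no notion of the target's language; anything placed in Arguments arrives in the C++ program through argc/argv. A sketch (the argument values are placeholders, and WorkingDirectory is set explicitly because a program launched this way does not inherit the directory an IDE would give it, which is a frequent cause of "runs differently outside the editor"):

        var psi = new System.Diagnostics.ProcessStartInfo
        {
            FileName = @"C:\Documents and Settings\dan\Desktop\test.exe",
            Arguments = "first second \"third with spaces\"",   // argv[1], argv[2], argv[3]
            WorkingDirectory = @"C:\Documents and Settings\dan\Desktop",
            UseShellExecute = false
        };
        System.Diagnostics.Process.Start(psi);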

    Read the article

  • Failed to Kill Process in SQL 2008

    - by Andrea.Ko
    I have a process with the following information, and when I execute KILL on this SPID it returns "Only user processes can be killed."

        SPID: 11
        Status: BACKGROUND
        Login: sa
        HostName: .
        BlkBy: .
        DBName: SAFEMIG
        Command: CHECKPOINT

    Normally, any session logged in to this server should have a HostName showing the PC name, but this connection shows a dot, so I am not sure who is executing what process on this connection. I executed "dbcc inputbuffer(11)" and it returned EventType = No Event, Parameters = 0 and EventInfo = NULL. I'd appreciate any help/advice on this problem!

    Read the article

  • Programmatic resource monitoring per process in Linux

    - by tuxx
    Hi, I want to know if there is an efficient solution for monitoring a process's resource consumption (CPU, memory, network bandwidth) in Linux. I want to write a daemon in C++ that does this monitoring for some given PIDs. From what I know, the classic solution is to periodically read the information from /proc, but this doesn't seem the most efficient way (it involves many system calls). For example, to monitor the memory usage every second for 50 processes, I have to open, read and close 50 files (that means 150 system calls) every second from /proc. Not to mention the parsing involved when reading these files. Another problem is the network bandwidth consumption: this cannot be easily computed for each process I want to monitor. The solution adopted by NetHogs involves a pretty high overhead in my opinion: it captures and analyzes every packet using libpcap, then for each packet the local port is determined and searched in /proc to find the corresponding process. Do you know of more efficient alternatives to these methods, or of any libraries that deal with these problems?
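    For concreteness, the /proc pattern in question (a sketch; the statm fields are documented in proc(5), and the values are in pages until scaled by sysconf(_SC_PAGESIZE)) looks like:

        #include <cstdlib>
        #include <fstream>
        #include <iostream>
        #include <sstream>

        // Read resident-set size (in pages) for one PID from /proc/<pid>/statm.
        long residentPages(int pid) {
            std::ostringstream path;
            path << "/proc/" << pid << "/statm";
            std::ifstream statm(path.str().c_str());
            long size = 0, resident = 0;
            statm >> size >> resident;
            return statm ? resident : -1;
        }

        int main(int argc, char* argv[]) {
            if (argc > 1)
                std::cout << residentPages(std::atoi(argv[1])) << " pages resident\n";
        }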

    Read the article

  • Java respawn process

    - by Bart van Heukelom
    I'm making an editor-like program. If the user chooses File-Open in the main window, I want to start a new copy of the editor process with the chosen filename as an argument. However, for that I need to know what command was used to start the first process:

        java -jar myapp.jar blabalsomearguments   // --- need this information

        Open File (fileUrl)
        exec("java -jar myapp.jar blabalsomearguments fileUrl");

    I'm not looking for an in-process solution; I've already implemented that. I'd like to have the benefits that separate processes bring.
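    One commonly suggested approach (a sketch; sun.java.command is a non-standard property of Sun/Oracle JVMs, so treat it as an assumption rather than a guarantee) reconstructs the launch command from system properties:

        // Rebuild a plausible relaunch command for the current JVM.
        String javaBin = System.getProperty("java.home")
                + java.io.File.separator + "bin"
                + java.io.File.separator + "java";
        // For a "java -jar myapp.jar args..." launch, this property holds
        // "myapp.jar args..." on Sun/Oracle JVMs.
        String command = System.getProperty("sun.java.command");
        // JVM flags, if needed: ManagementFactory.getRuntimeMXBean().getInputArguments()
        Runtime.getRuntime().exec(javaBin + " -jar " + command + " " + fileUrl);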

    Read the article

  • Problem with System.Diagnostics.Process RedirectStandardOutput to appear in Winforms Textbox in real-time

    - by Jonathan Websdale
    I'm having problems with the redirected output from a console application being presented in a Winforms textbox in real time. The messages are being produced line by line; however, as soon as interaction with the form is called for, nothing appears to be displayed. Following the many examples on both Stack Overflow and other forums, I can't seem to get the redirected output from the process to display in the textbox on the form until the process completes. By adding debug lines to the 'stringWriter_StringWritten' method and writing the redirected messages to the debug window, I can see the messages arriving during the running of the process, but these messages will not appear in the form's textbox until the process completes. Grateful for any advice on this. Here's an extract of the code:

        public partial class RunExternalProcess : Form
        {
            private static int numberOutputLines = 0;
            private static MyStringWriter stringWriter;

            public RunExternalProcess()
            {
                InitializeComponent();

                // Create the output message writer
                RunExternalProcess.stringWriter = new MyStringWriter();
                stringWriter.StringWritten += new EventHandler(stringWriter_StringWritten);

                System.Diagnostics.ProcessStartInfo startInfo = new System.Diagnostics.ProcessStartInfo();
                startInfo.FileName = "myCommandLineApp.exe";
                startInfo.UseShellExecute = false;
                startInfo.RedirectStandardOutput = true;
                startInfo.CreateNoWindow = true;
                startInfo.WindowStyle = System.Diagnostics.ProcessWindowStyle.Hidden;

                using (var pProcess = new System.Diagnostics.Process())
                {
                    pProcess.StartInfo = startInfo;
                    pProcess.OutputDataReceived += new System.Diagnostics.DataReceivedEventHandler(RunExternalProcess.Process_OutputDataReceived);
                    pProcess.EnableRaisingEvents = true;
                    try
                    {
                        pProcess.Start();
                        pProcess.BeginOutputReadLine();
                        pProcess.BeginErrorReadLine();
                        pProcess.WaitForExit();
                    }
                    catch (Exception ex)
                    {
                        MessageBox.Show(ex.ToString());
                    }
                    finally
                    {
                        pProcess.OutputDataReceived -= new System.Diagnostics.DataReceivedEventHandler(RunExternalProcess.Process_OutputDataReceived);
                    }
                }
            }

            private static void Process_OutputDataReceived(object sender, System.Diagnostics.DataReceivedEventArgs e)
            {
                if (!string.IsNullOrEmpty(e.Data))
                {
                    RunExternalProcess.OutputMessage(e.Data);
                }
            }

            private static void OutputMessage(string message)
            {
                RunExternalProcess.stringWriter.WriteLine("[" + RunExternalProcess.numberOutputLines++.ToString() + "] - " + message);
            }

            private void stringWriter_StringWritten(object sender, EventArgs e)
            {
                System.Diagnostics.Debug.WriteLine(((MyStringWriter)sender).GetStringBuilder().ToString());
                SetProgressTextBox(((MyStringWriter)sender).GetStringBuilder().ToString());
            }

            private delegate void SetProgressTextBoxCallback(string text);

            private void SetProgressTextBox(string text)
            {
                if (this.ProgressTextBox.InvokeRequired)
                {
                    SetProgressTextBoxCallback callback = new SetProgressTextBoxCallback(SetProgressTextBox);
                    this.BeginInvoke(callback, new object[] { text });
                }
                else
                {
                    this.ProgressTextBox.Text = text;
                    this.ProgressTextBox.Select(this.ProgressTextBox.Text.Length, 0);
                    this.ProgressTextBox.ScrollToCaret();
                }
            }
        }

        public class MyStringWriter : System.IO.StringWriter
        {
            // Define the event.
            public event EventHandler StringWritten;

            public MyStringWriter() : base() { }

            public MyStringWriter(StringBuilder sb) : base(sb) { }

            public MyStringWriter(StringBuilder sb, IFormatProvider formatProvider) : base(sb, formatProvider) { }

            public MyStringWriter(IFormatProvider formatProvider) : base(formatProvider) { }

            protected virtual void OnStringWritten()
            {
                if (StringWritten != null)
                {
                    StringWritten(this, EventArgs.Empty);
                }
            }

            public override void Write(char value)
            {
                base.Write(value);
                this.OnStringWritten();
            }

            public override void Write(char[] buffer, int index, int count)
            {
                base.Write(buffer, index, count);
                this.OnStringWritten();
            }

            public override void Write(string value)
            {
                base.Write(value);
                this.OnStringWritten();
            }
        }
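    A closing observation on the code above (offered as a hypothesis, not a verified fix): pProcess.WaitForExit() runs in the form's constructor, i.e. on the UI thread, so the message pump is blocked and every BeginInvoke queued by SetProgressTextBox is only serviced once the process exits, which matches the symptom exactly. An event-driven sketch instead:

        // The Process must outlive the constructor (e.g. become a field),
        // so the using block and WaitForExit() both go away.
        pProcess.EnableRaisingEvents = true;
        pProcess.SynchronizingObject = this;   // raise events on the form's thread
        pProcess.Exited += (s, args) => OutputMessage("--- process finished ---");
        pProcess.Start();
        pProcess.BeginOutputReadLine();
        pProcess.BeginErrorReadLine();
        // OutputDataReceived and Exited now drive the textbox updates
        // while the UI thread keeps pumping messages.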

    Read the article
