Search Results

Search found 8897 results on 356 pages for 'msbuild task'.


  • How to execute a scheduled task with "schtasks" without opening a new command line window?

    - by Misha Moroshko
    I have a batch file that creates a scheduled task using schtasks, like this:

        schtasks /create /tn my_task_name /tr "...\my_path\my_task.bat" /sc daily /st 10:00:00 /s \\my_computer_name /u my_username /p my_password

    It works OK except for the fact that when my_task.bat is executed, a new command line window is opened (and closed after execution). I would like to avoid opening this new window (i.e. to run the task in quiet mode, in the background). I thought to use start /b ...\my_path\my_task.bat, but I don't know how, because since I have to call start from the batch file I need to precede it with cmd /c, which again causes the new window to open. How could I solve this problem?
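    A common workaround (a sketch from me, not part of the original question) is to schedule a tiny VBScript wrapper instead of the batch file, because WScript.Shell's Run method can launch a program with its window hidden (window style 0). The wrapper file name run_hidden.vbs is hypothetical, and the elided path is kept from the question:

        ' run_hidden.vbs -- runs the batch file with a hidden window (style 0), without waiting
        CreateObject("WScript.Shell").Run """...\my_path\my_task.bat""", 0, False

    The task is then registered against wscript.exe rather than the batch file:

        schtasks /create /tn my_task_name /tr "wscript.exe ...\my_path\run_hidden.vbs" /sc daily /st 10:00:00 /s \\my_computer_name /u my_username /p my_password

    Since wscript.exe is a GUI-subsystem host, no console window appears at all.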


  • Why does the Windows XP Task Manager icon disappear from the tray?

    - by Jason Owen
    One of the more annoying bugs in Windows XP is the tendency for the Task Manager icon to not show up in the tray (aka the notification area). Sometimes it does, sometimes it doesn't, and it's not consistent enough to have an obvious cause. Looking on Google turns up a bunch of forums that have the same problem but no working solution. Why does the Task Manager icon sometimes not show up? How can I repair it when this happens (how can I make it show up when it's missing)? And how can I prevent it from not working in the first place, so that I don't have to worry about repairing it after the fact?


  • How to configure permissions for a win2008 task running as Network Service to stop/start a service on a different system?

    - by weiji
    Well... title says it all, actually. We've got a Scheduled Task set up on a Windows Server 2003 box running as Network Service, and the batch file it runs invokes "sc" to stop and then start a service on another Windows box. However, sc reports: [SC] OpenService FAILED 5: Access is denied. Running the same batch file via Windows Explorer has no issues, and my user account is part of the Administrators group, so I believe this is why there are no issues when I try it manually. Is this a permissions thing I enable for Network Service on the first server? Or do I enable permissions for Network Service somehow on the target server? This question (http://serverfault.com/questions/19382/why-sc-query-fails-from-one-machine-but-works-from-another) touches on something similar, but I'm looking for a way to let Network Service access the service via the scheduled task.
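    One relevant fact (my note, not from the original question): when a process running as Network Service reaches across the network, it authenticates on the target machine as the source computer's domain account (DOMAIN\MACHINE$), so it is that account which needs start/stop rights on the remote service. A heavily hedged sketch using the built-in sc sdshow/sdset commands; the SDDL below is illustrative only, and the rights letters (RP = start, WP = stop, DT = pause/continue) and existing ACEs must be verified against the real sdshow output before applying anything:

        rem Inspect the current security descriptor of the service on the target box
        sc \\targetserver sdshow SomeService

        rem Re-apply it with an extra ACE for the source machine account's SID appended
        rem (S-1-5-21-... is a placeholder; note that sdset REPLACES the whole descriptor)
        sc \\targetserver sdset SomeService "D:(existing ACEs...)(A;;RPWPDT;;;S-1-5-21-...)"

    Alternatively, running the scheduled task under a dedicated domain account that has been granted rights on the remote service sidesteps the machine-account question entirely.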


  • Distribute Sort Sample Service

    - by kaleidoscope
    How does it work? Using the front-end of the service, a user can specify a size in MB for the input data set to sort. The algorithm then proceeds in four task types:

    · CreateAndSplit – The CreateAndSplit task generates the input data and stores it as 10 blobs in the utility storage. The URLs to these blobs are packaged as Separate work items and written to the queue.

    · Separate – The Separate task reads the blobs with the random numbers created in the CreateAndSplit task and places the random numbers into buckets. The interval of the numbers that go into one bucket is chosen so that the expected amount of numbers (assuming a uniform distribution of the numbers in the original data set) is around 100 kB. Each bucket is represented as a blob container in utility storage. Whenever there are 10 blobs in one bucket (i.e., the placement in this bucket is complete because we had 10 original splits), the Separate task will generate a new Sort task and write the task into the queue.

    · Sort – The Sort task merges all blobs in a single bucket and sorts them using a standard sort algorithm. The result is stored as a blob in utility storage.

    · Concat – The Concat task merges the results of all Sort tasks into a single blob. This blob can be downloaded as a text file using this Web page. As the resulting file is presented in text format, the size of the file is likely to be larger than the specified input file.

    Anish
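    The bucketing in the Separate step is what makes the distributed sort work: because bucket i holds only values smaller than every value in bucket i+1, each bucket can be sorted independently and the sorted buckets concatenated in order. A minimal C# sketch of that logic (an illustration only, not the sample's actual code; bucketRange is a hypothetical parameter derived from the ~100 kB-per-bucket target):

        using System.Collections.Generic;

        static class Separator
        {
            // Assign each value to the bucket covering its fixed interval of the key space.
            public static Dictionary<int, List<int>> Separate(IEnumerable<int> numbers, int bucketRange)
            {
                var buckets = new Dictionary<int, List<int>>();
                foreach (var n in numbers)
                {
                    int b = n / bucketRange;                 // bucket index for this value
                    if (!buckets.TryGetValue(b, out var list))
                        buckets[b] = list = new List<int>();
                    list.Add(n);
                }
                return buckets;
            }
        }

    In the real sample, each bucket is a blob container rather than an in-memory list, and a Sort task is queued once all 10 splits have contributed their blobs to a bucket.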


  • SharePoint Designer workflow with multiple tasks in sequence

    - by Triangle Man
    I have a multi-step SharePoint workflow on task list A that starts when a new task is created in that list and creates a task in another list, B. When that task in list B is completed, I would like the workflow on list A to create another task in list C. I am using SharePoint Designer 2007 to build all of this, and at the moment I have this represented as multiple steps. Step one creates the task in the other list and stores its ID in a variable. Step 2 is conditional on a value in the task created by step one being marked complete, and it creates a task in the next list, and so on. However, when I run the workflow, it marks its status as complete as soon as the item in the first list is completed, and does not go on to create the task outlined in Step 2 of the workflow. I would like to know why the workflow marks itself complete at the end of step one, and why the subsequent steps are not executed. Thanks in advance for your help.


  • Monkeypatch a model in a rake task to use a method provided by a plugin?

    - by gduquesnay.mp
    During some recent refactoring we changed how our user avatars are stored, not realizing that once deployed it would affect all the existing users. So now I'm trying to write a rake task to fix this by doing something like this:

        namespace :fix do
          desc "Create associated ImageAttachment using data in the Users photo fields"
          task :user_avatars => :environment do
            class User
              # Paperclip
              has_attached_file :photo
              ... <paperclip stuff, styles etc>
            end

            User.all.each do |user|
              i = ImageAttachment.new
              i.photo_url = user.photo.url
              user.image_attachments << i
            end
          end
        end

    When I try running that, though, I'm getting:

        undefined method `has_attached_file' for User:Class

    I'm able to do this in script/console, but it seems like it can't find the Paperclip plugin's methods from a rake task.
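    A likely explanation (my reading, not from the original question): inside the task, `class User` will quietly define a brand-new, plain Ruby class if the real model has not been autoloaded yet, and that fresh class knows nothing about Paperclip. Referencing the constant first (which triggers Rails autoloading of app/models/user.rb) and reopening it with class_eval avoids the trap. A sketch:

        namespace :fix do
          desc "Create associated ImageAttachment using data in the Users photo fields"
          task :user_avatars => :environment do
            # Constant lookup autoloads the real model before we reopen it
            User.class_eval do
              has_attached_file :photo   # plus whatever styles the app normally uses
            end

            User.all.each do |user|
              i = ImageAttachment.new
              i.photo_url = user.photo.url
              user.image_attachments << i
            end
          end
        end

    In script/console the model is already loaded by the time you experiment, which would explain why the same code works there but not in rake.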


  • How to handle all unhandled exceptions when using the Task Parallel Library?

    - by Buu Nguyen
    I'm using the TPL (Task Parallel Library) in .NET 4.0. I want to be able to centralize the handling logic of all unhandled exceptions by using the Thread.GetDomain().UnhandledException event. However, in my application, the event is never fired for threads started with TPL code, e.g. Task.Factory.StartNew(...). The event is indeed fired if I use something like new Thread(threadStart).Start(). This MSDN article suggests using Task.Wait() to catch the AggregateException when working with TPL, but that is not what I want, because it is not a "centralized" enough mechanism. Does anyone experience the same problem at all, or is it just me? Do you have any solution for this?
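    For what it's worth, .NET 4.0 does ship a TPL-specific counterpart to the AppDomain event: TaskScheduler.UnobservedTaskException. It fires on the finalizer thread when a faulted Task is collected without anyone having read its Exception property, so it is a last-chance central hook rather than an immediate one. A sketch:

        using System;
        using System.Threading.Tasks;

        static class TaskExceptionHandling
        {
            public static void Install()
            {
                TaskScheduler.UnobservedTaskException += (sender, e) =>
                {
                    // e.Exception is an AggregateException wrapping the task's failures
                    Console.Error.WriteLine("Unobserved task exception: {0}", e.Exception);

                    // Mark it observed so it doesn't escalate and tear the process down
                    e.SetObserved();
                };
            }
        }

    Caveat: because the event is raised during finalization, it suits centralized logging, not prompt recovery; work whose failures you must react to immediately still needs try/catch or Task.Wait.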


  • Can a rake task know about the other tasks in the invocation chain?

    - by andrewdotnich
    Rake (like make) is able to have many targets/tasks specified on invocation. Is it possible for a rake task to access the list of tasks the user invoked, in order to do its job? Scenario: consider a Rake-based build tool. A help task would like to know what tasks were also specified, in order to print their usage and halt the build process. The benefits of this, as opposed to rake-style parameter passing, are cleaner syntax (rake help build instead of rake help task=build) and chaining (rake help build run_tests would print usage for both).
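    Rake does expose the invocation list: Rake.application.top_level_tasks is the array of task names given on the command line. A sketch of the help scenario (the task names are illustrative):

        # Rakefile
        task :help do
          others = Rake.application.top_level_tasks - ["help"]
          others.each do |name|
            t = Rake::Task[name]
            puts "#{name}: #{t.comment || 'no description'}"
          end
          exit  # halt so the named tasks don't actually run
        end

    Invoked as `rake help build run_tests`, the help task sees ["help", "build", "run_tests"], prints usage for the rest, and exits before they execute.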


  • Oracle Fusion Supply Chain Management (SCM) Designs May Improve End User Productivity

    - by Applications User Experience
    Michele Molnar, Senior Usability Engineer, Applications User Experience (March 10, 2011)

    The Challenge: The SCM User Experience team, in close collaboration with product management and strategy, completely redesigned the user experience for Oracle Fusion applications. One of the goals of this redesign was to increase end user productivity by applying design patterns and guidelines and incorporating findings from extensive usability research. But a question remained: how do we know that the Oracle Fusion designs will actually increase end user productivity?

    The Test: To answer this question, the SCM usability engineers compared Oracle Fusion designs to their corresponding existing Oracle applications using the workflow time analysis method. The workflow time analysis method breaks tasks into a sequence of operators. By applying standard time estimates for all of the operators in the task, an estimate of the overall task time can be calculated. The workflow time analysis method has recently been adopted by the Applications User Experience group for use in predicting end user productivity. Using this method, a design can be tested and refined as needed to improve productivity even before the design is coded. For the study, we selected some of our recent designs for Oracle Fusion Product Information Management (PIM). The designs encompassed tasks performed by Product Managers to create, manage, and define products for their organization. (See Figure 1 for an example.) In applying this method, the SCM usability engineers collaborated with Product Management to compare the new Oracle Fusion Applications designs against Oracle's existing applications. Together, we performed the following activities:

    · Identified the five most frequently performed tasks
    · Created detailed task scenarios that provided the context for each task
    · Conducted task walkthroughs
    · Analyzed and documented the steps and flow required to complete each task
    · Applied standard time estimates to the operators in each task to estimate the overall task completion time

    Figure 1. The interactions on each Oracle Fusion Product Information Management screen were documented, as indicated by the red highlighting. The task scenario and script provided the context for each task.

    The Results: The workflow time analysis method predicted that the Oracle Fusion Applications designs would result in productivity gains in each task, ranging from 8% to 62%, with an overall productivity gain of 43%. All other factors being equal, the new designs should enable these tasks to be completed in about half the time it takes with existing Oracle applications. Further analysis revealed that these performance gains would be achieved by reducing the number of clicks and screens needed to complete the tasks.

    Conclusions: Using the workflow time analysis method, we can expect the Oracle Fusion Applications redesign to succeed in improving end user productivity. The workflow time analysis method appears to be an effective and efficient tool for testing, refining, and retesting designs to optimize productivity. It does not replace usability testing with end users, but it can be used as an early predictor of design productivity even before designs are coded. We are planning to conduct usability tests later in the development cycle to compare actual end user data with the workflow time analysis results. Such results can potentially be used to validate the productivity improvement predictions. Used together, the workflow time analysis method and usability testing will enable us to continue creating, evaluating, and delivering Oracle Fusion designs that exceed the expectations of our end users, both in the quality of the user experience and in productivity. (For more information about studying productivity, refer to the Measuring User Productivity blog.)


  • Syntax of passing lambda

    - by Astara
    Right now, I'm working on refactoring a program that calls its parts by polling into a more event-driven structure. I've created sched and task classes, with the sched to become a base class of the current main loop. The tasks will be created for each meter so they can be called off of that instead of polling. Each of the events main calls is a type of meter that gathers info and displays it. When the program is coming up, all enabled meters get 'constructed' by a main-sub. In that sub, I want to store off the "this" pointer associated with the meter, as well as the common name for the "action routine":

        void MeterMaker::Meter_n_Task (Meter * newmeter,) {
          push(newmeter);  // handle non-timed draw events
          Task t = new Task(now() + 0.5L);
          t.period = {0, 1U};
          t.work_meter = newmeter;
          t.work = [&newmeter](){newmeter.checkevent();};   // <<-- attempt at lambda
          t.flags = T_Repeat;
          t.enable_task();
          _xos->sched_insert(t);
        }

    A sample call to it:

        Meter_n_Task(new CPUMeter(_xos, "CPU "));

    I've made the scheduler a base class of the main routine (that handles the loop), and I've tried several variations to get the task class to be a base of the meter class, but keep running into roadblocks. It's a lot like "whack-a-mole": pound in something to fix one thing, and a new problem pops out elsewhere. Part of the problem is that the sched.h file that is trying to hold the Task queue includes the Task header file. The task file wants to refer to the most "base" Meter class. The meter class pulls in the main class of the parent, as it passes a copy of the parent to the children so they can access the draw routines in the parent. Two references in the task file are for the 'this' pointer of the meter and the meter's update sub (to be called via this):

        void *this_data = NULL;
        void (*this_func)() = NULL;

    Note -- I didn't really want to store these in the class, as I wanted to use a lambda in that meter & task routine above to store a routine+context to be used to call the meter's action routine. Couldn't figure out the syntax. But I'm running into other syntax problems trying to store the pointers... such as:

        g++: COMPILE lsched.cc
        In file included from meter.h:13:0,
                         from ltask.h:17,
                         from lsched.h:13,
                         from lsched.cc:13:
        xosview.h:30:47: error: expected class-name before '{' token
         class XOSView : public XWin, public Scheduler {

    Like above, where it asks for a class name where the class name "Scheduler" is. !?!? Huh? That IS a class name. I keep going in circles with things that don't make sense... Ideally I'd get the lambda to work right in the Meter_n_Task routine at the top. I wanted to store only one pointer in the 'Task' class, a pointer to my lambda that would have already captured the "this" value, but I couldn't get that syntax to work at all when I tried to store it into a var in the 'Task' class. This project, FWIW, is my teething project on the new C++... (of course it's simple!.. ;-))... I've made quite a bit of progress in other areas in the code, but this lambda syntax has me stumped... it's at times like these that I appreciate the ease of this type of operation in perl. Sigh. Not sure the best way to ask for help here, as this isn't a simple question. But thought I'd try!... ;-) Too bad I can't attach files to this Q.
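    A possible direction (my sketch, not from the original post): give Task a type-erased std::function<void()> member and capture the meter pointer by value. Capturing newmeter by reference is a bug here anyway, since the reference is to a parameter that dies when Meter_n_Task returns:

        #include <functional>

        class Meter { public: void checkevent(); /* ... */ };

        class Task {
        public:
            std::function<void()> work;   // stores any callable, lambdas included
            // ...
        };

        void example(Meter* newmeter) {
            Task* t = new Task;            // "Task t = new Task(...)" won't compile; new yields a pointer
            t->work = [newmeter]() {       // capture the pointer BY VALUE
                newmeter->checkevent();    // -> not . because newmeter is a pointer
            };
            t->work();                     // invoked later, from the scheduler
        }

    This also removes the need for the separate this_data/this_func pair, since the lambda carries its own context.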



  • TFS API: Change WorkItem CreatedDate and ChangedDate to Historic Dates

    - by Tarun Arora
    There may be times when you need to modify the value of the fields "System.CreatedDate" and "System.ChangedDate" on a work item. Richard Hundhausen has a great blog post with ample reasons why you might (or might not) need to set the values of these fields to historic dates. In this blog post I'll show you how to:

    · Create a PBI work item linked to a Task work item, pre-setting the value of the field 'System.ChangedDate' to a historic date
    · Change the value of the field 'System.CreatedDate' to a historic date
    · Simulate the historic burn down of a task type work item in a sprint
    · Explain the impact of updating the values of the fields CreatedDate and ChangedDate on the sprint burn down chart

    Rules of Play

    1. You need to be a member of the Project Collection Service Accounts group.
    2. You need to use 'WorkItemStoreFlags.BypassRules' when you instantiate the WorkItemStore service:

        // Instantiate the Work Item Store with the BypassRules flag
        _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);

    3. You cannot set the ChangedDate
       - less than the changed date of the previous revision, or
       - greater than the current date.

    Walkthrough

    The walkthrough contains five parts:

    00 – Required References
    01 – Connect to TFS Programmatically
    02 – Create a Work Item Programmatically
    03 – Set the values of fields 'System.ChangedDate' and 'System.CreatedDate' to historic dates
    04 – Results of our experiment

    Let's get started.

    00 – Required References

    Microsoft.TeamFoundation.dll
    Microsoft.TeamFoundation.Client.dll
    Microsoft.TeamFoundation.Common.dll
    Microsoft.TeamFoundation.WorkItemTracking.Client.dll

    01 – Connect to TFS Programmatically

    I have an in-depth blog post on how to connect to TFS programmatically in case you are interested. However, the code snippet below will enable you to connect to TFS using the Team Project Picker:

        // Services I need access to globally
        private static TfsTeamProjectCollection _tfs;
        private static ProjectInfo _selectedTeamProject;
        private static WorkItemStore _wis;

        // Connect to TFS using the Team Project Picker
        public static bool ConnectToTfs()
        {
            var isSelected = false;
            // The user is allowed to select only one project
            var tfsPp = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false);
            tfsPp.ShowDialog();
            // The TFS project collection
            _tfs = tfsPp.SelectedTeamProjectCollection;
            if (tfsPp.SelectedProjects.Any())
            {
                // The selected Team Project
                _selectedTeamProject = tfsPp.SelectedProjects[0];
                isSelected = true;
            }
            return isSelected;
        }

    02 – Create a Work Item Programmatically

    In the code snippet below I create a Product Backlog Item and a Task type work item and then link them together as parent and child. Note – you will have to set the ChangedDate to a historic date when you create the work item. Remember, if you try to set the ChangedDate to a value earlier than the last assigned value, you will receive the following exception:

        TF26212: Team Foundation Server could not save your changes. There may be problems with the work item type definition. Try again or contact your Team Foundation Server administrator.

    If you notice below, I have added a few seconds each time I have modified the 'ChangedDate', just to avoid running into the exception listed above.
        // Create linked work items and return their IDs
        private static List<int> CreateWorkItemsProgrammatically()
        {
            // Instantiate the Work Item Store with the BypassRules flag
            _wis = new WorkItemStore(_tfs, WorkItemStoreFlags.BypassRules);

            // List of work items to return
            var listOfWorkItems = new List<int>();

            // Create a new Product Backlog Item
            var p = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Product Backlog Item"]);
            p.Title = "This is a new PBI";
            p.Description = "Description";
            p.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name);
            p.AreaPath = _selectedTeamProject.Name;
            p["Effort"] = 10;

            // Just double checking that BypassRules is set to true
            if (_wis.BypassRules)
            {
                p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01");
            }

            if (p.Validate().Count == 0)
            {
                p.Save();
                listOfWorkItems.Add(p.Id);
            }
            else
            {
                Console.WriteLine(">> Following exception(s) encountered during work item save: ");
                foreach (var e in p.Validate())
                {
                    Console.WriteLine(" - '{0}' ", e);
                }
            }

            var t = new WorkItem(_wis.Projects[_selectedTeamProject.Name].WorkItemTypes["Task"]);
            t.Title = "This is a task";
            t.Description = "Task Description";
            t.IterationPath = string.Format("{0}\\Release 1\\Sprint 1", _selectedTeamProject.Name);
            t.AreaPath = _selectedTeamProject.Name;
            t["Remaining Work"] = 10;

            if (_wis.BypassRules)
            {
                t.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01");
            }

            if (t.Validate().Count == 0)
            {
                t.Save();
                listOfWorkItems.Add(t.Id);
            }
            else
            {
                Console.WriteLine(">> Following exception(s) encountered during work item save: ");
                foreach (var e in t.Validate())
                {
                    Console.WriteLine(" - '{0}' ", e);
                }
            }

            var linkTypEnd = _wis.WorkItemLinkTypes.LinkTypeEnds["Child"];
            p.Links.Add(new WorkItemLink(linkTypEnd, t.Id) { ChangedDate = Convert.ToDateTime("2012-01-01").AddSeconds(20) });

            if (_wis.BypassRules)
            {
                p.Fields["System.ChangedDate"].Value = Convert.ToDateTime("2012-01-01").AddSeconds(20);
            }

            if (p.Validate().Count == 0)
            {
                p.Save();
            }
            else
            {
                Console.WriteLine(">> Following exception(s) encountered during work item save: ");
                foreach (var e in p.Validate())
                {
                    Console.WriteLine(" - '{0}' ", e);
                }
            }

            return listOfWorkItems;
        }

    03 – Set the value of 'CreatedDate' and change the value of 'ChangedDate' to historic dates

    The CreatedDate can only be changed after a work item has been created. If you try to set the CreatedDate to a historic date at the time of creation of a work item, it will not work.
        // Let's do a work item effort burn down simulation by updating ChangedDate & CreatedDate to historic values
        private static void WorkItemChangeSimulation(IEnumerable<int> listOfWorkItems)
        {
            foreach (var id in listOfWorkItems)
            {
                var wi = _wis.GetWorkItem(id);
                switch (wi.Type.Name)
                {
                    case "ProductBacklogItem":
                        if (wi.State.ToLower() == "new")
                            wi.State = "Approved";
                        // Advance the changed date by a few seconds
                        wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                        // Set the CreatedDate to the ChangedDate
                        wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                        wi.Save();
                        break;
                    case "Task":
                        // Advance the changed date by a few seconds
                        wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                        // Set the CreatedDate to the ChangedDate
                        wi.Fields["System.CreatedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(10);
                        wi.Save();
                        break;
                }
            }

            // A mock sprint start date
            var sprintStart = DateTime.Today.AddDays(-5);
            // A mock sprint end date
            var sprintEnd = DateTime.Today.AddDays(5);
            // What is the total sprint duration
            var totalSprintDuration = (sprintEnd - sprintStart).Days;
            // How much of the sprint have we already covered
            var noOfDaysIntoSprint = (DateTime.Today - sprintStart).Days;
            // Get the effort assigned to our tasks
            var totalEffortRemaining = QueryTaskTotalEfforRemaining(listOfWorkItems);
            // Define how much effort to burn every day
            decimal dailyBurnRate = totalEffortRemaining / totalSprintDuration < 1 ? 1 : totalEffortRemaining / totalSprintDuration;
            // We have just created one task
            var totalNoOfTasks = 1;

            var simulation = sprintStart;
            var currentDate = DateTime.Today.Date;

            // Carry on till effort has been burned down from sprint start to today
            while (simulation.Date != currentDate.Date)
            {
                var dailyBurnRate1 = dailyBurnRate;
                // A fixed amount needs to be burned down each day
                while (dailyBurnRate1 > 0)
                {
                    // Burn down bit by bit from all unfinished task type work items
                    foreach (var id in listOfWorkItems)
                    {
                        var wi = _wis.GetWorkItem(id);
                        var isDirty = false;
                        // Set the status to in progress
                        if (wi.State.ToLower() == "to do")
                        {
                            wi.State = "In Progress";
                            isDirty = true;
                        }
                        // Ensure that there is enough effort remaining in tasks to burn down the daily burn rate
                        if (QueryTaskTotalEfforRemaining(listOfWorkItems) > dailyBurnRate1)
                        {
                            // If there is less than 1 unit of effort left in the task, burn it all
                            if (Convert.ToDecimal(wi["Remaining Work"]) <= 1)
                            {
                                wi["Remaining Work"] = 0;
                                dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]);
                                isDirty = true;
                            }
                            else
                            {
                                // How much to burn from each task?
                                var toBurn = (dailyBurnRate / totalNoOfTasks) < 1 ? 1 : (dailyBurnRate / totalNoOfTasks);
                                // Check that the task has enough effort to allow burnForTask effort
                                if (Convert.ToDecimal(wi["Remaining Work"]) >= toBurn)
                                {
                                    wi["Remaining Work"] = Convert.ToDecimal(wi["Remaining Work"]) - toBurn;
                                    dailyBurnRate1 = dailyBurnRate1 - toBurn;
                                    isDirty = true;
                                }
                                else
                                {
                                    wi["Remaining Work"] = 0;
                                    dailyBurnRate1 = dailyBurnRate1 - Convert.ToDecimal(wi["Remaining Work"]);
                                    isDirty = true;
                                }
                            }
                        }
                        else
                        {
                            dailyBurnRate1 = 0;
                        }
                        if (isDirty)
                        {
                            if (Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).Date == simulation.Date)
                            {
                                wi.Fields["System.ChangedDate"].Value = Convert.ToDateTime(wi.Fields["System.ChangedDate"].Value).AddSeconds(20);
                            }
                            else
                            {
                                wi.Fields["System.ChangedDate"].Value = simulation.AddSeconds(20);
                            }
                            wi.Save();
                        }
                    }
                }
                // Increase the date by 1 to perform the daily burn down day by day
                simulation = Convert.ToDateTime(simulation).AddDays(1);
            }
        }

        // Get the total effort remaining in the current sprint
        private static decimal QueryTaskTotalEfforRemaining(List<int> listOfWorkItems)
        {
            var unfinishedWorkInCurrentSprint = _wis.GetQueryDefinition(
                new Guid(QueryAndGuid.FirstOrDefault(c => c.Key == "Unfinished Work").Value));
            var parameters = new Dictionary<string, object> { { "project", _selectedTeamProject.Name } };
            var q = new Query(_wis, unfinishedWorkInCurrentSprint.QueryText, parameters);
            var results = q.RunLinkQuery();
            var wis = new List<WorkItem>();
            foreach (var result in results)
            {
                var _wi = _wis.GetWorkItem(result.TargetId);
                if (_wi.Type.Name == "Task" && listOfWorkItems.Contains(_wi.Id))
                    wis.Add(_wi);
            }
            return wis.Sum(r => Convert.ToDecimal(r["Remaining Work"]));
        }

    04 – The Results

    If you are still reading, the results are beautiful!

    Image 1 – Create a work item with ChangedDate pre-set to a historic date
    Image 2 – Set the CreatedDate to a historic date (same as the ChangedDate)
    Image 3 – Simulation of effort burn down on a task via the TFS API
    Image 4 – The history of changes on the Task. So, essentially this task has burned 1 hour per day.

    Sprint Burn Down Chart – What's not possible?

    The sprint burn down chart is calculated from System.AuthorizedDate and not from System.ChangedDate/System.CreatedDate. So, though you can change System.ChangedDate and System.CreatedDate to historic dates, you will not be able to synthesize the sprint burn down chart.

    Image 1 – By changing the CreatedDate and ChangedDate to '18/Oct/2012' you would have expected the burn down to have been impacted, but it won't be, because the sprint burn down chart uses the value of the field 'System.AuthorizedDate' to calculate the unfinished work points. The AsOf queries that are used to calculate the unfinished work points use the value of the field 'System.AuthorizedDate'.

    Image 2 – Using the above code I burned down 1 hour of effort per day over 5 days from the task work item. I would have expected the sprint burn down to show a constant burn down; instead the burn down shows the effort exhausted on the 24th itself, simply because the burn down is calculated using 'System.AuthorizedDate'.

    Now you would ask… "Can I change the value of the field System.AuthorizedDate to a historic date?" Unfortunately, that's not possible!
    You will run into a ValidationException: "TF26194: The value for field 'Authorized Date' cannot be changed."

    Conclusion

    - You need to be a member of the Project Collection Service Accounts group in order to set the fields 'System.ChangedDate' and 'System.CreatedDate' to historic dates.
    - You need to instantiate the WorkItemStore using the flag WorkItemStoreFlags.BypassRules.
    - System.ChangedDate needs to be set to a historic date at the time of work item creation. You cannot reset the ChangedDate to a date earlier than the existing ChangedDate, and you cannot reset the ChangedDate to a date greater than the current date time.
    - System.CreatedDate can only be reset after a work item has been created. You cannot set the CreatedDate at the time of work item creation. The CreatedDate cannot be greater than the current date. You can, however, reset the CreatedDate to a date earlier than the existing value.
    - You will not be able to synthesize the sprint burn down chart by changing the values of System.ChangedDate and System.CreatedDate to historic dates, since the burn down chart uses AsOf queries to calculate the unfinished work points, which internally use System.AuthorizedDate and NOT System.ChangedDate & System.CreatedDate.
    - System.AuthorizedDate cannot be set to a historic date using the TFS API.

    Read other posts on using the TFS API here… Enjoy!


  • rake migration aborted: could not find table 'roles'

    - by user464180
    I just inherited code that I'm attempting to run the migrations for, but I keep getting a rake aborted error. I've come across others that have what appear to be similar issues, but most involved Heroku, and I'm trying to run this locally (to start). I've tried troubleshooting using both PostgreSQL and SQLite, and both produce the same issue. The table "roles" referenced is the second migration called, so I'm having a hard time figuring out what is causing it to not get built. Any and all assistance is greatly appreciated. Thanks in advance. Here's the roles migration:

        class CreateRoles < ActiveRecord::Migration
          def change
            create_table :roles do |t|
              t.string :name
              t.timestamps
            end
          end
        end

    Here is the trace for SQLite: ** Invoke db:migrate (first_time) ** Invoke environment (first_time) ** Execute environment rake aborted! Could not find table 'roles' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/sqlite_adapter.rb:470:in `table_structure' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/sqlite_adapter.rb:351:in `columns' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/schema_cache.rb:12:in `block in initialize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:228:in `yield' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:228:in `default' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:228:in `columns' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:248:in `column_names' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:261:in `column_methods_hash' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/dynamic_matchers.rb:69:in `all_attributes_exists?' 
/Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/dynamic_matchers.rb:27:in `method_missing' /Users/sa/Documents/AptanaWorkspace/recprototype/config/initializ ers/constants.rb:1:in `<top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `block in load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:236:in `load_dependency' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:588:in `block (2 levels) in <class:Engine>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:587:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:587:in `block in <class:Engine>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:30:in `instance_exec' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:30:in `run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:55:in `block in run_initializers' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:54:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:54:in `run_initializers' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:136:in `initialize!' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/rail tie/configurable.rb:30:in `method_missing' /Users/sa/Documents/AptanaWorkspace/recprototype/config/environme nt.rb:5:in `<top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `block in require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:236:in `load_dependency' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:103:in `require_environment!' 
/Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:292:in `block (2 levels) in initialize_tasks' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :205:in `call' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :205:in `block in execute' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :200:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :200:in `execute' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :158:in `block in invoke_with_call_chain' /Users/sa/.rvm/rubies/ruby-1.9.2-p318/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :151:in `invoke_with_call_chain' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :176:in `block in invoke_prerequisites' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :174:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :174:in `invoke_prerequisites' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :157:in `block in invoke_with_call_chain' /Users/sa/.rvm/rubies/ruby-1.9.2-p318/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :151:in `invoke_with_call_chain' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :144:in `invoke' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:116:in `invoke_task' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `block (2 levels) in top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `block in top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:133:in `standard_exception_handling' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:88:in `top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:66:in `block in run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:133:in `standard_exception_handling' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:63:in `run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/bin/rake:33:in ` <top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/bin/rake:19:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/bin/rake:19:in `<main>' Tasks: TOP => db:migrate => environment Here is the trace for PostgreSQL: ** Invoke db:migrate (first_time) ** Invoke environment (first_time) ** Execute environment rake aborted! 
PG::Error: ERROR: relation "roles" does not exist LINE 4: WHERE a.attrelid = '"roles"'::regclass ^ : SELECT a.attname, format_type(a.atttypid, a.atttypmod), d.adsrc, a .attnotnull FROM pg_attribute a LEFT JOIN pg_attrdef d ON a.attrelid = d.adrelid AND a.attnum = d.adnum WHERE a.attrelid = '"roles"'::regclass AND a.attnum > 0 AND NOT a.attisdropped ORDER BY a.attnum /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:1106:in `async_exec' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:1106:in `exec_no_cache' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:650:in `block in exec_query' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/abstract_adapter.rb:280:in `block in log' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/notifications/instrumenter.rb:20:in `instrument' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/abstract_adapter.rb:275:in `log' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:649:in `exec_query' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:1231:in `column_definitions' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/postgresql_adapter.rb:845:in `columns' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/connection_adapters/schema_cache.rb:12:in `block in initialize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:228:in `yield' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:228:in `default' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:228:in `columns' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:248:in `column_names' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/model_schema.rb:261:in `column_methods_hash' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/dynamic_matchers.rb:69:in `all_attributes_exists?' 
/Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activerecord-3.2.1/lib/active _record/dynamic_matchers.rb:27:in `method_missing' /Users/sa/Documents/AptanaWorkspace/recprototype/config/initializ ers/constants.rb:1:in `<top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `block in load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:236:in `load_dependency' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:245:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:588:in `block (2 levels) in <class:Engine>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:587:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/engi ne.rb:587:in `block in <class:Engine>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:30:in `instance_exec' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:30:in `run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:55:in `block in run_initializers' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:54:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/init ializable.rb:54:in `run_initializers' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:136:in `initialize!' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/rail tie/configurable.rb:30:in `method_missing' /Users/sa/Documents/AptanaWorkspace/recprototype/config/environme nt.rb:5:in `<top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `block in require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:236:in `load_dependency' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/activesupport-3.2.1/lib/activ e_support/dependencies.rb:251:in `require' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:103:in `require_environment!' 
/Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/railties-3.2.1/lib/rails/appl ication.rb:292:in `block (2 levels) in initialize_tasks' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :205:in `call' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :205:in `block in execute' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :200:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :200:in `execute' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :158:in `block in invoke_with_call_chain' /Users/sa/.rvm/rubies/ruby-1.9.2-p318/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :151:in `invoke_with_call_chain' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :176:in `block in invoke_prerequisites' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :174:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :174:in `invoke_prerequisites' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :157:in `block in invoke_with_call_chain' /Users/sa/.rvm/rubies/ruby-1.9.2-p318/lib/ruby/1.9.1/monitor.rb:201:in `mon_synchronize' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :151:in `invoke_with_call_chain' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/task.rb :144:in `invoke' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:116:in `invoke_task' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `block (2 levels) in top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `each' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:94:in `block in top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:133:in `standard_exception_handling' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:88:in `top_level' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:66:in `block in run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:133:in `standard_exception_handling' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/lib/rake/applica tion.rb:63:in `run' /Users/sa/.rvm/gems/ruby-1.9.2-p318/gems/rake-0.9.2.2/bin/rake:33:in ` <top (required)>' /Users/sa/.rvm/gems/ruby-1.9.2-p318/bin/rake:19:in `load' /Users/sa/.rvm/gems/ruby-1.9.2-p318/bin/rake:19:in `<main>' Tasks: TOP => db:migrate => environment
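    One clue common to both traces (my reading, not part of the original question): the failure originates in config/initializers/constants.rb line 1, which calls a dynamic finder while Rails boots. Since rake db:migrate loads the environment first, the initializer touches the roles table before the migration has had a chance to create it, and boot aborts. A hedged sketch of a defensive guard; the constant and finder below are hypothetical placeholders for whatever constants.rb actually does:

        # config/initializers/constants.rb
        # Skip the lookup when the table doesn't exist yet (e.g. during db:migrate
        # on a fresh database); the real initializer logic goes inside the guard.
        if ActiveRecord::Base.connection.table_exists?(:roles)
          ADMIN_ROLE = Role.find_by_name("admin")
        end

    With a guard like this, the environment can load on an empty database, the migrations can run, and the constant is populated on the next normal boot.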


  • Ant MXMLC task with arbitrary list of source/lib paths?

    - by sascha
    Does anyone know of a way to use the mxmlc task of the Flex Ant tasks with a user-definable list of source paths or library paths? The idea is that the user can define an arbitrary list of source paths and/or library (swc) paths in an Ant properties file, and the build file takes these values and evaluates them for use in the mxmlc task. Just wondering if there are any tricks (maybe utilizing filtering/string replacing) to get this working?
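    One hedged approach (a sketch; the property names are examples, not part of the original question) is to let the properties file carry directories and feed them to mxmlc's nested compiler option elements, which take fileset-style includes for libraries:

        <!-- build.properties (user-editable):
             src.dir.1=src
             src.dir.2=generated/src
             lib.dir=libs -->
        <mxmlc file="${APP_ROOT}/Main.mxml" output="${DEPLOY_DIR}/Main.swf">
            <!-- one element per source path; add as many as the properties define -->
            <compiler.source-path path-element="${src.dir.1}"/>
            <compiler.source-path path-element="${src.dir.2}"/>
            <!-- pick up every SWC under the configured library directory -->
            <compiler.library-path dir="${lib.dir}" append="true">
                <include name="*.swc"/>
            </compiler.library-path>
        </mxmlc>

    For a truly arbitrary-length list, a <for>/<foreach> loop from the ant-contrib tasks (or an embedded <script> task) would be needed, since core Ant cannot iterate over a delimited property by itself.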


  • Non-Document-Centric SharePoint Workflow

    - by Dan Revell
    SharePoint workflows are document-centric, in that the base thing the workflow runs on has to be a thing: be it a document or just a list item. The workflow itself is task based, so stuff a user has to do. Now I can put any sort of code in these tasks that I want to, and even put complex InfoPath forms in for the user to perform the task. This has been fine on all my previous workflows. But what if I want the tasks to be actual official forms themselves? The item that the workflow runs on is just some abstract concept, like an event. An example could be: an accident has happened. There isn't an accident form, but a whole set of forms that need to be completed by different people. Task forms aren't really a nice way to go, because it locks all the forms into the task list. You can only access the forms by not deleting the tasks when complete and going to the workflow summary and following the task links to the InfoPath forms, or going straight to the tasks list and doing a filter on particular "accidents". These are official documents, so ideally there would be a library for each type of document, and the workflow would orchestrate the completion of the right forms. It would mean each task would have to create a new blank form and then link the user to that form. The user would go complete the form but then have to go back to the task form and click "yes, I've completed it" until the workflow could progress. Well, that is short of the workflow monitoring the forms library for some completion trigger. But then it all gets messy with the user experience, from clicking the link in the task email, to opening the InfoPath task form, to clicking the link in the subsequent InfoPath library form, and then returning through these forms on completion. It just gets messy trying to retrofit this non-document-centric sort of workflow into SharePoint. I would really appreciate any input on what might be the best way to do this:

    1. Store the forms as task forms.
    2. Store the forms as library forms and create/link from the task forms.
    3. Store the forms as different InfoPath views, and use a forms library. The workflow would trigger variables that progress the view the InfoPath form shows.
    4. Use the same form template for both task forms and a forms library; when a task form is complete, copy the XML into the forms library to have an official record outside of the workflow.

    Thanks


  • Troubles with list "dropdowns" and which list item gets the dropdown

    - by Andrew
    I'm working on a project for an MMO "guild" that gives members of the guild randomly generated tasks for the game. They can "block" three tasks from being assigned. The lists will look something like this:

        <ul>
          <li class="blocked">Task that is blocked</li>
          <li class="blocked-open">Click to block a task</li>
          <li class="blocked-open">Click to block a task</li>
        </ul>

    The blocked-open class means they haven't chosen a task to block yet. The blocked class means they've already blocked a task. When they click the list item, I want this to appear:

        <ul class="tasks-dropdown no-display">
          <li><h1>Click a Task to Block</h1></li>
          <ul class="task-dropdown-inner">
            <?php
            // output all tasks
            foreach ($tasks as $task) {
                echo '<li class="blocked-option"><span id="'.$task.'">'.$task.'</span></li>';
            }
            ?>
            <br class="clear" />
          </ul>
        </ul>

    I don't quite know how, when the user clicks the .blocked-open list item, to show that dropdown under only the one they clicked. My jQuery looked like this before I became confused:

        $("li.blocked-open").click(function() {
            $("ul.no-display").slideToggle("900");
        });

        $(".blocked-option span").click(function() {
            var task = $(this).attr('id');
            alert("You have blocked: " + task);
            location.reload(true);
        });

    I tested it by putting the dropdown under a line item in the code, and it worked fine, but when I have more than one dropdown in the code, clicking on one line item toggles all the dropdowns. I'm not sure what to do. :-p.
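    A hedged sketch of the usual fix (mine, not from the original post): the selector $("ul.no-display") matches every dropdown on the page, so the toggle has to be scoped to the clicked <li>. Assuming each dropdown <ul> is nested inside its own li.blocked-open:

        $("li.blocked-open").click(function () {
            // Toggle only the dropdown belonging to this list item.
            // Use .find()/.children() for a nested <ul>, or .next() if the
            // dropdown is the element immediately following the <li>.
            $(this).find("ul.no-display").slideToggle(900);
        });

    Note also that slideToggle expects a number of milliseconds (or "slow"/"fast"); an unrecognized string like "900" typically falls back to the default duration, so pass 900 as a number.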


  • How to assign an application icon that will display in the taskbar?

    - by viky
    I am working on a WPF desktop application. Whenever I run my application it shows me a window and an associated tab in the taskbar (normal Windows behavior). My problem is that the tab is using Windows' icon for an unknown file type. I tried the Icon property of Window: the icon gets assigned, but the taskbar tab still initially displays the unknown-file-type icon, and only when the window finishes loading does it change to the assigned icon. I want my icon there from the beginning. Any help?
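    A hedged suggestion (mine, not from the original post): the icon shown before the window loads is the one embedded in the executable, so setting the application icon in the project (the ApplicationIcon MSBuild property, surfaced in Visual Studio as Project Properties > Application > Icon) usually fixes the initial taskbar display; Window.Icon then only overrides it per window:

        <!-- In the .csproj file; App.ico is a placeholder name -->
        <PropertyGroup>
          <ApplicationIcon>App.ico</ApplicationIcon>
        </PropertyGroup>

    With the icon compiled into the EXE, the taskbar has a proper icon to show from the first moment, before WPF finishes loading the window.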


  • asp.net Web server control with child controls, event not firing

    - by bleeeah
    I have a simple web control (TaskList) that can have children (Task), which inherit from LinkButton and can be added declaratively or programmatically. This works OK, but I can't get the OnClick event of a Task to fire in my code-behind. The code:

        [ToolboxData("<{0}:TaskList runat=\"server\"> </{0}:TaskList>")]
        [ParseChildren(true)]
        [PersistChildren(false)]
        public class TaskList : System.Web.UI.Control
        {
            public TaskList() {}

            private List<Task> _taskList = new List<Task>();
            private string _taskHeading = "";

            public string Heading
            {
                get { return this._taskHeading; }
                set { this._taskHeading = value; }
            }

            [NotifyParentProperty(true)]
            [PersistenceMode(PersistenceMode.InnerProperty)]
            [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)]
            public List<Task> Tasks
            {
                get { return this._taskList; }
                set { this._taskList = value; }
            }

            protected override void CreateChildControls()
            {
                foreach (Task task in this._taskList)
                    this.Controls.Add(task);
                base.CreateChildControls();
            }

            protected override void Render(HtmlTextWriter writer)
            {
                writer.Write("<h2>" + this._taskHeading + "</h2>");
                writer.Write("<div class='tasks_container'>");
                writer.Write("<div class='tasks_list'>");
                writer.Write("<ul>");
                foreach (Task task in this._taskList)
                {
                    writer.Write("<li>");
                    task.RenderControl(writer);
                    writer.Write("</li>");
                }
                writer.Write("</ul>");
                writer.Write("</div>");
                writer.Write("</div>");
            }
        }

        public class Task : LinkButton
        {
            private string _key = "";

            public string Key
            {
                get { return this._key; }
                set { this._key = value; }
            }
        }

    Markup:

        <rf:TaskList runat="server" ID="tskList" Heading="Tasks">
          <Tasks>
            <rf:Task Key="ba" ID="L1" Text="Helllo" OnClick="task1_Click" runat="server" />
          </Tasks>
        </rf:TaskList>

    The OnClick handler task1_Click never fires when clicked (although a postback occurs).
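    One common cause in composite controls like this (a hedged guess, not a confirmed diagnosis): the child LinkButtons must exist in the control tree early in the postback life cycle so ASP.NET can route the click event to them, and the parent should be a naming container so their UniqueIDs stay stable between render and postback. A sketch of both changes:

        // Implement INamingContainer so child control IDs are scoped consistently,
        // and force child creation before postback events are processed.
        public class TaskList : System.Web.UI.Control, INamingContainer
        {
            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);
                EnsureChildControls();   // builds the tree before event handling
            }

            protected override void CreateChildControls()
            {
                Controls.Clear();
                foreach (Task task in this._taskList)
                    Controls.Add(task);
                base.CreateChildControls();
            }

            // ... rest of the control unchanged ...
        }

    If the IDs rendered to the page differ from the IDs present when the postback is processed, the event silently vanishes, which matches the "postback occurs but no event" symptom.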


  • Scrum in 5 Minutes

    - by Stephen.Walther
    The goal of this blog entry is to explain the basic concepts of Scrum in less than five minutes. You learn how Scrum can help a team of developers to successfully complete a complex software project. Product Backlog and the Product Owner Imagine that you are part of a team which needs to create a new website – for example, an e-commerce website. You have an overwhelming amount of work to do. You need to build (or possibly buy) a shopping cart, install an SSL certificate, create a product catalog, create a Facebook page, and at least a hundred other things that you have not thought of yet. According to Scrum, the first thing you should do is create a list. Place the highest priority items at the top of the list and the lower priority items lower in the list. For example, creating the shopping cart and buying the domain name might be high priority items and creating a Facebook page might be a lower priority item. In Scrum, this list is called the Product Backlog. How do you prioritize the items in the Product Backlog? Different stakeholders in the project might have different priorities. Gary, your division VP, thinks that it is crucial that the e-commerce site has a mobile app. Sally, your direct manager, thinks taking advantage of new HTML5 features is much more important. Multiple people are pulling you in different directions. According to Scrum, it is important that you always designate one person, and only one person, as the Product Owner. The Product Owner is the person who decides what items should be added to the Product Backlog and the priority of the items in the Product Backlog. The Product Owner could be the customer who is paying the bills, the project manager who is responsible for delivering the project, or a customer representative. The critical point is that the Product Owner must always be a single person and that single person has absolute authority over the Product Backlog. Sprints and the Sprint Backlog So now the developer team has a prioritized list of items and they can start work. The team starts implementing the first item in the Backlog — the shopping cart — and the team is making good progress. Unfortunately, however, half-way through the work of implementing the shopping cart, the Product Owner changes his mind. The Product Owner decides that it is much more important to create the product catalog before the shopping cart. With some frustration, the team switches their developmental efforts to focus on implementing the product catalog. However, part way through completing this work, once again the Product Owner changes his mind about the highest priority item. Getting work done when priorities are constantly shifting is frustrating for the developer team and it results in lower productivity. At the same time, however, the Product Owner needs to have absolute authority over the priority of the items which need to get done. Scrum solves this conflict with the concept of Sprints. In Scrum, a developer team works in Sprints. At the beginning of a Sprint the developers and the Product Owner agree on the items from the backlog which they will complete during the Sprint. This subset of items from the Product Backlog becomes the Sprint Backlog. During the Sprint, the Product Owner is not allowed to change the items in the Sprint Backlog. In other words, the Product Owner cannot shift priorities on the developer team during the Sprint. Different teams use Sprints of different lengths such as one month Sprints, two-week Sprints, and one week Sprints. 
For high-stress, time-critical projects, teams typically choose shorter Sprints, such as one-week Sprints. For more mature projects, longer one-month Sprints might be more appropriate. A team can pick whatever Sprint length makes sense for them, just as long as the team is consistent. You should pick a Sprint length and stick with it.

Daily Scrum

During a Sprint, the developer team needs to have meetings to coordinate their work on completing the items in the Sprint Backlog. For example, the team needs to discuss who is working on what and whether any blocking issues have been discovered. Developers hate meetings (well, sane developers hate meetings). Meetings take developers away from their work of actually implementing stuff, as opposed to talking about implementing stuff. However, a developer team which never has meetings and never coordinates their work also has problems. For example, Fred might get stuck on a programming problem for days and never reach out for help, even though Tom (who sits in the cubicle next to him) has already solved the very same problem. Or both Ted and Fred might have started working on the same item from the Sprint Backlog at the same time.

In Scrum, these conflicting needs – limiting meetings but enabling team coordination – are resolved with the idea of the Daily Scrum. The Daily Scrum is a meeting for coordinating the work of the developer team which happens once a day. To keep the meeting short, each developer answers only the following three questions:

1. What have you done since yesterday?
2. What do you plan to do today?
3. Any impediments in your way?

During the Daily Scrum, developers are not allowed to talk about issues with their cat, do demos of their latest work, or tell heroic stories of programming problems overcome. The meeting must be kept short — typically about 15 minutes. Issues which come up during the Daily Scrum should be discussed in separate meetings which do not involve the whole developer team.

Stories and Tasks

Items in the Product or Sprint Backlog – such as building a shopping cart or creating a Facebook page – are often referred to as User Stories or Stories. The Stories are created by the Product Owner and should represent some business need. Unlike the Product Owner, the developer team needs to think about how a Story should be implemented. At the beginning of a Sprint, the developer team takes the Stories from the Sprint Backlog and breaks the stories into tasks. For example, the developer team might take the Create a Shopping Cart story and break it into the following tasks:

· Enable users to add and remove items from the shopping cart
· Persist the shopping cart to the database between visits
· Redirect the user to the checkout page when the Checkout button is clicked

During the Daily Scrum, members of the developer team volunteer to complete the tasks required to implement the next Story in the Sprint Backlog. When a developer talks about what he did yesterday or plans to do today, the developer should be referring to a task. Stories are owned by the Product Owner and a story is all about business value. In contrast, tasks are owned by the developer team and a task is all about implementation details. A story might take several days or weeks to complete. A task is something which a developer can complete in less than a day. Some teams get lazy about breaking stories into tasks.
Neglecting to break stories into tasks can lead to “Never Ending Stories.” If you don’t break a story into tasks, then you can’t know how much of a story has actually been completed, because you don’t have a clear idea about the implementation steps required to complete the story.

Scrumboard

During the Daily Scrum, the developer team uses a Scrumboard to coordinate their work. A Scrumboard contains a list of the stories for the current Sprint, the tasks associated with each Story, and the state of each task. The developer team uses the Scrumboard so everyone on the team can see, at a glance, what everyone is working on. As a developer works on a task, the task moves from state to state and the state of the task is updated on the Scrumboard. Common task states are ToDo, In Progress, and Done. Some teams include additional task states such as Needs Review or Needs Testing.

Some teams use a physical Scrumboard. In that case, you use index cards to represent the stories and the tasks, and you tack the index cards onto a physical board. Using a physical Scrumboard has several disadvantages. A physical Scrumboard does not work well with a distributed team – for example, it is hard to share the same physical Scrumboard between Boston and Seattle. Also, generating reports from a physical Scrumboard is more difficult than generating reports from an online Scrumboard.

Estimating Stories and Tasks

Stakeholders in a project, the people investing in a project, need to have an idea of how a project is progressing and when the project will be completed. For example, if you are investing in creating an e-commerce site, you need to know when the site can be launched. It is not enough to just say that “the project will be done when it is done,” because the stakeholders almost certainly have a limited budget to devote to the project. The people investing in the project cannot determine the business value of the project unless they have an estimate of how long it will take to complete the project.

Developers hate to give estimates. The reason that developers hate to give estimates is that the estimates are almost always completely made up. For example, you really don’t know how long it takes to build a shopping cart until you finish building a shopping cart, and at that point, the estimate is no longer useful. The problem is that writing code is much more like Finding a Cure for Cancer than Building a Brick Wall. Building a brick wall is very straightforward. After you learn how to add one brick to a wall, you understand everything that is involved in adding a brick to a wall. There is no additional research required and no surprises. If, on the other hand, I assembled a team of scientists and asked them to find a cure for cancer, and estimate exactly how long it will take, they would have no idea. The problem is that there are too many unknowns. I don’t know how to cure cancer, I need to do a lot of research here, so I cannot even begin to estimate how long it will take.

So developers hate to provide estimates, but the Product Owner and other product stakeholders have a legitimate need for estimates. Scrum resolves this conflict by using the idea of Story Points. Different teams use different units to represent Story Points. For example, some teams use shirt sizes such as Small, Medium, Large, and X-Large. Some teams prefer to use Coffee Cup sizes such as Tall, Short, and Grande. Finally, some teams like to use numbers from the Fibonacci series.
These alternative units are converted into a Story Point value. Regardless of the type of unit which you use to represent Story Points, the goal is the same. Instead of attempting to estimate a Story in hours (which is doomed to failure), you use a much less fine-grained measure of work. A developer team is much more likely to be able to estimate that a Story is Small or X-Large than the exact number of hours required to complete the story. So you can think of Story Points as a compromise between the needs of the Product Owner and the developer team.

When a Sprint starts, the developer team devotes more time to thinking about the Stories in a Sprint and breaks the Stories into Tasks. In Scrum, you estimate the work required to complete a Story by using Story Points, and you estimate the work required to complete a task by using hours. The difference between Stories and Tasks is that you don’t create a task until you are just about ready to start working on it. A task is something that you should be able to complete within a day, so you have a much better chance of providing an accurate estimate of the work required to complete a task than a story.

Burndown Charts

In Scrum, you use Burndown charts to represent the remaining work on a project. You use Release Burndown charts to represent the overall remaining work for a project, and you use Sprint Burndown charts to represent the remaining work for a particular Sprint. You create a Release Burndown chart by calculating the remaining number of uncompleted Story Points for the entire Product Backlog every day. The vertical axis represents Story Points and the horizontal axis represents time. A Sprint Burndown chart is similar to a Release Burndown chart, but it focuses on the remaining work for a particular Sprint. There are two different types of Sprint Burndown charts: you can represent the remaining work in a Sprint either with Story Points or with task hours.

When each Product Backlog Story is completed, the Release Burndown chart slopes down. When each Story or task is completed, the Sprint Burndown chart slopes down. Burndown charts do not always slope down over time, however. As new work is added to the Product Backlog, the Release Burndown chart slopes up. If new tasks are discovered during a Sprint, the Sprint Burndown chart will also slope up. The purpose of a Burndown chart is to give you a way to track team progress over time. If, halfway through a Sprint, the Sprint Burndown chart is still climbing, then you know that you are in trouble.

Team Velocity

Stakeholders in a project always want more work done faster. For example, the Product Owner for the e-commerce site wants the website to launch before tomorrow. Developers tend to be overly optimistic. Rarely do developers acknowledge the physical limitations of reality. So project stakeholders and the developer team often collude to delude themselves about how much work can be done and how quickly. Too many software projects begin in a state of optimism and end in frustration as deadlines zoom by. In Scrum, this problem is overcome by calculating a number called the Team Velocity. The Team Velocity is a measure of the average number of Story Points which a team has completed in previous Sprints. A minimal sketch of this calculation appears below.
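As an illustration (not part of the original post), here is a minimal C# sketch of the velocity calculation and of the overcommitment warning that tools such as SonicAgile display. All Sprint numbers and names are made up:

  using System;
  using System.Linq;

  class VelocityExample
  {
      static void Main()
      {
          // Story Points completed in the last three Sprints (made-up numbers).
          int[] completedPoints = { 21, 18, 24 };

          // Team Velocity: the average Story Points completed per Sprint.
          double velocity = completedPoints.Average();
          Console.WriteLine("Team Velocity: " + velocity);

          // Compare a proposed Sprint Backlog against the velocity.
          int proposedPoints = 30;
          if (proposedPoints > velocity)
          {
              Console.WriteLine("Warning: this Sprint commits to more work than past Sprints suggest.");
          }
      }
  }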
Knowing the Team Velocity is important during the Sprint Planning meeting, when the Product Owner and the developer team work together to determine the number of stories which can be completed in the next Sprint. If you know the Team Velocity, then you can avoid committing to more work than the team has been able to accomplish in the past, and your team is much more likely to complete all of the work required for the next Sprint.

Scrum Master

There are three roles in Scrum: the Product Owner, the developer team, and the Scrum Master. I’ve already discussed the Product Owner. The Product Owner is the one and only person who maintains the Product Backlog and prioritizes the stories. I’ve also described the role of the developer team. The members of the developer team do the work of implementing the stories by breaking the stories into tasks. The final role, which I have not discussed, is the role of the Scrum Master.

The Scrum Master is responsible for ensuring that the team is following the Scrum process. For example, the Scrum Master is responsible for making sure that there is a Daily Scrum meeting and that everyone answers the standard three questions. The Scrum Master is also responsible for removing (non-technical) impediments which the team might encounter. For example, if the team cannot start work until everyone installs the latest version of Microsoft Visual Studio, then the Scrum Master has the responsibility of working with management to get the latest version of Visual Studio as quickly as possible. The Scrum Master can be a member of the developer team. Furthermore, different people can take on the role of the Scrum Master over time. The Scrum Master, however, cannot be the same person as the Product Owner.

Using SonicAgile

SonicAgile (SonicAgile.com) is an online tool which you can use to manage your projects using Scrum. You can use the SonicAgile Product Backlog to create a prioritized list of stories. You can estimate the size of the Stories using different Story Point units such as Shirt Sizes and Coffee Cup sizes. You can use SonicAgile during the Sprint Planning meeting to select the Stories that you want to complete during a particular Sprint. You can configure Sprints to be any length of time. SonicAgile calculates Team Velocity automatically and displays a warning when you add too many stories to a Sprint. In other words, it warns you when it thinks you are overcommitting in a Sprint. SonicAgile also includes a Scrumboard which displays the list of Stories selected for a Sprint and the tasks associated with each story. You can drag tasks from one task state to another. Finally, SonicAgile enables you to generate Release Burndown and Sprint Burndown charts. You can use these charts to view the progress of your team. To learn more about SonicAgile, visit SonicAgile.com.

Summary

In this post, I described many of the basic concepts of Scrum. You learned how a Product Owner uses a Product Backlog to create a prioritized list of stories. I explained why work is completed in Sprints so the developer team can be more productive. I also explained how a developer team uses the Daily Scrum to coordinate their work. You learned how the developer team uses a Scrumboard to see, at a glance, who is working on what and the state of each task. I also discussed Burndown charts. You learned how you can use both Release and Sprint Burndown charts to track team progress in completing a project.
Finally, I described the crucial role of the Scrum Master – the person who is responsible for ensuring that the rules of Scrum are being followed. My goal was not to describe all of the concepts of Scrum. This post was intended to be an introductory overview. For a comprehensive explanation of Scrum, I recommend reading Ken Schwaber’s book Agile Project Management with Scrum: http://www.amazon.com/Agile-Project-Management-Microsoft-Professional/dp/073561993X/ref=la_B001H6ODMC_1_1?ie=UTF8&qid=1345224000&sr=1-1

    Read the article

  • The curious case of SOA Human tasks' automatic completion

    - by Kavitha Srinivasan
    A large south-Asian insurance industry customer using Oracle BPM and SOA ran into this. I have survived this ordeal previously myself but didn't think to blog it then. However, it seems like a good idea to share this knowledge with this reader community, and so here goes...

Symptom: A human task (in a SOA/BPEL/BPM process) completes automatically when it should have been assigned to a proper user. There are no stack traces and no related exceptions in the logs.

Why: The product is designed to treat human tasks that don't have assignees as eligible for completion, and hence no warning/error messages are recorded in the logs.

Use case variant: A variant of this use case, where an assignee doesn't exist in the repository, is treated as a recoverable error. One can find such instances among the 'pending recovery' instances in EM and reactivate the task by changing the assignees in the BPM Workspace as a process owner/administrator. But back to the use case where tasks get completed automatically...

When: This happens when the users/groups assigned to a task are 'empty' or null. This has been seen only on tasks whose assignees are derived from an assignment expression – i.e., at runtime an XPath is used to determine who to assign the task to. (This should not happen if task assignees are populated via swim-lane roles.)

How to detect this in EM: For instances that are auto-completed this way, one will notice in the Audit Trail of such instances that the 'outcome' of the task is empty. The 'acquired by' element will also show as empty/null. Enabling the oracle.soa.services.workflow.* logger in EM should print more verbose messages about this.

How to fix this: The application code needs two fixes:
· Input to the human task: the XSLT/XPath used to set the task assignee, and the process itself, should be enhanced to handle nulls better. For example: if no data is found, set the assignees to an alternate value, force default assignees, etc.
· Output from the human task: additionally, in the application code, check that the 'outcome' of the human task is not null. If it is null, route the task to be performed again after setting the assignee correctly. Beginning with PS4FP, one should be able to use 'grab' to route back to the task and fire it again.

Hope this helps.

    Read the article

  • MSSQL: Copying data from one database to another

    - by DigiMortal
    I have a database with data imported from another server using the import and export wizard of SQL Server Management Studio. There is also an empty database with the same tables, but this one also has primary keys, foreign keys and indexes. How do you get the data from the first database to the other? Here is the description of my crusade. And believe me – it is not a nice one.

Bugs in import and export wizard

There are some awful bugs in the import and export wizard that make data imports and exports possible only in a very limited manner:
· the wizard is not able to analyze foreign keys,
· the wizard wants to create tables always, whatever you say in the settings.
The result is a faulty and useless package. Now let's go step by step and make things work in our scenario.

Databases

There are two databases. Let's name them like this:
· PLAIN – contains data imported from the remote server (no indexes, no keys, no nothing, just plain dumb data)
· CORRECT – empty database with the same structure as the remote database (indexes, keys and everything else, but no data)
Our goal is to get data from PLAIN to CORRECT.

1. Create import and export package

At this point we will create the faulty SSIS package using SQL Server Management Studio. Run the import and export wizard and let it create an SSIS package that reads data from CORRECT and writes it to, let's say, CORRECT-2. Make sure you enable identity insert. Make sure there are no views selected. Make sure you don't let the package create tables (you can skip this setting because it wants to create tables anyway). Save the package to SSIS.

2. Modify import and export package

Now let's clean up the package and remove all the faulty crap.
· Connect SQL Server Management Studio to the SSIS instance. Select the package you just saved and export it to your hard disc.
· Run Business Intelligence Studio. Create a new SSIS project (DON'T MISS THIS STEP). Add the package from disc as an existing item to the project and open it.
· Move to the Control Flow page and do one of the following: remove all preparation SQL tasks and connect the Data Flow tasks, or modify all preparation SQL tasks so the existence of a table is checked before the table is created (yes, you have to do it manually).
· Add a new Execute SQL task as the first task in the control flow. Open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

  EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
  GO

  EXEC sp_MSForEachTable 'DELETE FROM ?'
  GO

  Save the task.
· Add a new Execute SQL task as the last task in the control flow. Open the task properties, assign the destination connection as the connection to use, and insert the following SQL as the command:

  EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
  GO

  Save the task.
· Now connect the first Execute SQL task with the first Data Flow task, and the last Data Flow task with the second Execute SQL task.
· Move to the Package Explorer tab and change the connections under the Connection Managers folder: make the source connection use database PLAIN and the destination connection use database CORRECT.
· Save the package and rebuild the project. Update the package using SQL Server Management Studio. Some hints: make sure you take the package from the solution folder, because it is saved there now; don't overwrite the existing package – use a numeric suffix and let Management Studio create a new version of the package.

Now you are done with your package. Run it to test it and clean out all the errors you find. (For a code-only alternative that skips SSIS entirely, see the sketch after the references below.)

TRUNCATE vs DELETE

You can see that I used DELETE FROM instead of TRUNCATE. Why?
Because TRUNCATE has some nasty limits (taken from MSDN): “You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint; instead, use DELETE statement without a WHERE clause. Because TRUNCATE TABLE is not logged, it cannot activate a trigger. TRUNCATE TABLE may not be used on tables participating in an indexed view.” As I am not sure what tables you have and how they are used, I provided here the solution that should work for all scenarios. If you need better performance, then in some cases you can use TRUNCATE TABLE instead of DELETE.

Conclusion

My conclusion is bitter this time, although I am a very positive guy. It is A.D. 2010 and still we have to write stupid hacks for simple things. Simple tools that existed before are long gone, and we have to live with mysterious bloatware that is our only choice when using the default tools. If you take a look at the length of this posting and the count of steps I had to do for one easy thing, you should treat it as a signal that something has gone wrong in recent years. Although I got my job done, I would still be happier if the out-of-the-box tools were more intelligent one day.

References

· T-SQL Trick for Deleting All Data in Your Database (Mauro Cardarelli)
· TRUNCATE TABLE (MSDN Library)
· Error Handling in SQL 2000 – a Background (Erland Sommarskog)
· Disable/Enable Foreign Key and Check constraints in SQL Server (Decipher)
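As a supplement to the walkthrough above (not part of the original post), here is a minimal C# sketch that performs the same disable–delete–copy–enable sequence with SqlBulkCopy instead of an SSIS package. It assumes both databases sit on the same server, that table names match between PLAIN and CORRECT, and the connection strings and table list are made-up placeholders:

  using System;
  using System.Data.SqlClient;

  class CopyPlainToCorrect
  {
      static void Main()
      {
          string src = "Server=.;Database=PLAIN;Integrated Security=true";
          string dst = "Server=.;Database=CORRECT;Integrated Security=true";
          string[] tables = { "dbo.Customers", "dbo.Orders" }; // hypothetical list

          using (var source = new SqlConnection(src))
          using (var target = new SqlConnection(dst))
          {
              source.Open();
              target.Open();

              // Disable all constraints and clear old data, as in the SSIS package.
              Exec(target, "EXEC sp_MSForEachTable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'");
              Exec(target, "EXEC sp_MSForEachTable 'DELETE FROM ?'");

              foreach (string table in tables)
              {
                  using (var cmd = new SqlCommand("SELECT * FROM " + table, source))
                  using (var reader = cmd.ExecuteReader())
                  // KeepIdentity plays the role of "enable identity insert" in the wizard.
                  using (var bulk = new SqlBulkCopy(target, SqlBulkCopyOptions.KeepIdentity, null))
                  {
                      bulk.DestinationTableName = table;
                      bulk.WriteToServer(reader); // streams the rows across
                  }
              }

              // Re-enable constraint checking.
              Exec(target, "EXEC sp_MSForEachTable 'ALTER TABLE ? CHECK CONSTRAINT ALL'");
          }
      }

      static void Exec(SqlConnection conn, string sql)
      {
          using (var cmd = new SqlCommand(sql, conn)) { cmd.ExecuteNonQuery(); }
      }
  }

Because the constraints are disabled before the copy, the order of the tables in the list does not matter, which mirrors the trick used in the SSIS package.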

    Read the article

  • Running Teamsite User Admin tool IWUSERADM.exe from ASP.NET

    - by Narendra Tiwari
    It has really been a head-scratching task for me. I've tried many options but nothing worked. Finally I found a workaround on Google to achieve this with the Task Scheduler.

PROBLEM

When we run the Teamsite user administration command-line tool IWUSERADM.exe through ASP.NET, it gives the following error:

  Application popup: cmd.exe - Application Error : The application failed to initialize properly (0xc0000142). Click on OK to terminate the application.

CAUSE

No specific cause; it seems to be a bug, supposed to be resolved by this Microsoft patch: http://support.microsoft.com/kb/960266. There is nothing related to a permission issue – my web application is impersonated with an administrator account. Of course, running a batch file from an admin account is a potential security threat, but for this scenario let's confine our discussion to running the command-line tool.

RESOLUTION

I have not tried this patch, as I am not permitted to run it on the server. Below are the steps to achieve the requirement.

1/ Create a batch file which runs IWUSERADM.exe:

  echo Add Teamsite User
  cd E:\Appli\GN00\iw-home\bin
  iwuseradm add-user %1

2/ Temporarily create a scheduled task and run the .bat file from ASP.NET code using the TaskScheduler library: http://www.codeproject.com/KB/cs/tsnewlib.aspx.

3/ Here is the function (yourLogin and yourPassword are placeholders for the account the task should run under); a sample call site is shown after the resources below:

  private int AddTeamsiteUser(string strBatchFilePath, string strUser)
  {
      // Get a ScheduledTasks object for the local computer.
      ScheduledTasks st = new ScheduledTasks();

      // Create a task.
      Task t;
      try
      {
          t = st.CreateTask("~AddTeamsiteUser");
      }
      catch
      {
          throw new Exception("Scheduled task ~AddTeamsiteUser already exists.");
      }

      t.ApplicationName = strBatchFilePath;
      t.Parameters = strUser;
      t.Comment = "Adding user to Teamsite application";

      // Set the account under which the task should run.
      t.SetAccountInformation(yourLogin, yourPassword);
      t.Save();
      t.Run();

      Thread.Sleep(2000); // give the task time to start (sync issue)

      // Remove the temporary scheduled task.
      st.DeleteTask("~AddTeamsiteUser");
      return t.ExitCode;
  }

Below are a few resources related to the above scenario:
· Task Scheduler Class Library for .NET: http://www.codeproject.com/KB/cs/tsnewlib.aspx
· Run a .BAT file from ASP.NET: http://codebetter.com/blogs/brendan.tompkins/archive/2004/05/13/13484.aspx
· TaskScheduler Class: http://msdn.microsoft.com/en-us/library/system.threading.tasks.taskscheduler.aspx
· Application hangs while running iwuseradm.exe through ASP.Net: http://bytes.com/topic/asp-net/answers/733098-system-diagnostics-process-hangs
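For completeness, here is a hypothetical call site for the function above; the batch file path and user name are placeholders, not values from the original post:

  // Somewhere in an ASP.NET code-behind (hypothetical usage).
  int exitCode = AddTeamsiteUser(@"E:\Scripts\addteamsiteuser.bat", "jdoe");
  if (exitCode != 0)
  {
      throw new Exception("iwuseradm failed with exit code " + exitCode);
  }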

    Read the article
