Search Results

Search found 1678 results on 68 pages for 'workflow'.

Page 8/68 | < Previous Page | 4 5 6 7 8 9 10 11 12 13 14 15  | Next Page >

  • workflow assign task to multiple users

    - by Artru
    I have MOSS. I want to make a page where a user, say an administrator, can send instructions to the server (for example, using the standard library) and create a task for a group of users requiring them to read some files. After reading, each user would click "already read it", and the administrator would know who did and who did not. I created a simple workflow in SharePoint Designer and chose to assign the task to Group1, which exists on the SharePoint server. After the workflow runs, everyone in Group1 gets a message about the task, which is great. However, the task is shared by the whole group: if we go to the site's "current tasks" section, we see a single task, whereas I want a separate task for every person in Group1. A further question: is it possible to create a form where the administrator chooses the users for this task? Right now I set the group manually in the workflow.
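
    One way to get a separate task per user (a rough sketch, not from the thread, assuming the server-side object model is available; the URL, the list name "Tasks", and the group name "Group1" are placeholders) is to create one task list item per member of the group instead of assigning to the group itself:

        using Microsoft.SharePoint;

        static void AssignReadTasks()
        {
            using (SPSite site = new SPSite("http://server"))   // placeholder URL
            using (SPWeb web = site.OpenWeb())
            {
                SPList tasks = web.Lists["Tasks"];
                SPGroup group = web.SiteGroups["Group1"];
                foreach (SPUser user in group.Users)
                {
                    SPListItem task = tasks.Items.Add();
                    task["Title"] = "Please read the attached instructions";
                    // Individual assignee instead of the group, so completion is tracked per person.
                    task["Assigned To"] = new SPFieldUserValue(web, user.ID, user.Name);
                    task.Update();
                }
            }
        }

    The same loop could be driven from a form where the administrator picks the users, which would also cover the second question.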

    Read the article

  • Standard Workflow when working with JPA

    - by jschoen
    I am currently trying to wrap my head around working with JPA. I can't help but feel like I am missing something or doing it the wrong way; it just seems forced so far. What I think I know so far is that there are a couple of ways to work with JPA, with tools to support each. You can do everything in Java using annotations and let JPA (whatever implementation you decide to use) create your schema and update it when changes are made. Or you can use a tool to reverse engineer your database and generate the entity classes for you; when the schema is updated, you have to regenerate these classes or manually update them. There seem to be drawbacks and benefits to both (as with all things). My question is: in an ideal situation, what is the standard workflow with JPA? Most schemas will require updates during the maintenance phase, and especially during the development phase, so how is this handled?

    Read the article

  • Silverlight calling windows workflow foundation as a web service

    - by wissem
    Hi, I'm trying to call a Windows Workflow Foundation workflow, published as a web service, from a Silverlight project. When I call it from a console application it works fine: I can add a web reference, create an instance of the web service, and invoke the method I want. The problem is in the Silverlight project, because there I can only add a service reference, so I find myself working with the generated SOAP proxy, which doesn't work at all. Here is the code:

        private void btnUpdate_Click(object sender, RoutedEventArgs e)
        {
            xxxxxxx.Workflow1_WebServiceSoapClient zer = new xxxxx.Workflow1_WebServiceSoapClient();
            zer.demanderSubmitReportCompleted +=
                new EventHandler<xxxxxxxxxxxxxxx.demanderSubmitReportCompletedEventArgs>(service2);
            zer.demanderSubmitReportAsync("zzz", 20000);
        }

        public void service2(object sender, xxxxx.demanderSubmitReportCompletedEventArgs e)
        {
            string a = e.Result;
        }

    Please help, thanks.

    Read the article

  • Workflow Foundation: Asynchronous operations (lengthy network I/O)

    - by StormianRootSolver
    I have to create an application that will be started a few times per day (it's non-interactive). To operate, it needs LARGE amounts of data from the Internet (megabytes) via a rather slow connection, so the WCF service calls take quite some time. At the same time, it needs to perform local calculations and has a sophisticated initialization process. So what I want to do is create a workflow that asynchronously fetches the data (which takes a few minutes) while already initializing and calculating locally. Is there a way to accomplish this?
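
    Assuming WF4 (System.Activities), one common shape for this, sketched from the standard AsyncCodeActivity pattern rather than from the article: put the download in an AsyncCodeActivity inside one branch of a Parallel activity, and run the initialization in a sibling branch. The activity and its Url argument are invented names.

        using System;
        using System.Activities;
        using System.Net;

        public sealed class FetchDataActivity : AsyncCodeActivity<string>
        {
            public InArgument<string> Url { get; set; }   // hypothetical data endpoint

            protected override IAsyncResult BeginExecute(
                AsyncCodeActivityContext context, AsyncCallback callback, object state)
            {
                // The slow download runs on a worker thread, so the workflow thread is
                // free to execute the initialization branch of the Parallel activity.
                Func<string, string> download = url => new WebClient().DownloadString(url);
                context.UserState = download;
                return download.BeginInvoke(Url.Get(context), callback, state);
            }

            protected override string EndExecute(
                AsyncCodeActivityContext context, IAsyncResult result)
            {
                return ((Func<string, string>)context.UserState).EndInvoke(result);
            }
        }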

    Read the article

  • How do I access database from WF4?

    - by Patrol02
    Hi guys, I host a WCF workflow service within my ASP.NET MVC 2 application. I need to be able to load and save data inside my workflow (WF4). How can this be done? Should I just instantiate my Entity Framework context within my activities and read/write there? Cheers.
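
    Creating the context inside a custom activity is one workable pattern; here is a minimal sketch (the MyEntities context and Customer entity are made-up names, and nothing here is specific to MVC hosting). Keeping the context short-lived inside Execute avoids holding it open across persistence points:

        using System.Activities;
        using System.Linq;

        public sealed class LoadCustomerName : CodeActivity<string>
        {
            public InArgument<int> CustomerId { get; set; }

            protected override string Execute(CodeActivityContext context)
            {
                // Hypothetical Entity Framework context, created and disposed per execution.
                using (var db = new MyEntities())
                {
                    int id = CustomerId.Get(context);
                    return db.Customers.Single(c => c.Id == id).Name;
                }
            }
        }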

    Read the article

  • Does WF4 have the ability to search for instances?

    - by racingcow
    Hello, I have a WF4 workflow service deployed in AppFabric. Is there any built-in way to do a generic search across all currently active workflow instances? For example, "get me a list of all active instances that have variable x = 5". If someone could point me in the right direction on this, it would be much appreciated.
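
    As far as I know there is no built-in query over arbitrary workflow variables; the closest supported route is promoting selected values so they land in a queryable view of the persistence database. A rough sketch (the promotion name and XNames are invented, and the workflow additionally needs a PersistenceParticipant that surfaces x under the promoted XName):

        using System;
        using System.Activities.DurableInstancing;
        using System.Linq;
        using System.Xml.Linq;

        var store = new SqlWorkflowInstanceStore("persistence connection string here");
        XNamespace ns = "urn:my-app/search";               // illustrative namespace
        store.Promote("SearchableState",
                      new XName[] { ns + "x" },            // variant (queryable) properties
                      Enumerable.Empty<XName>());          // no binary properties

        // Promoted values can then be queried with plain SQL, e.g.:
        //   SELECT InstanceId
        //   FROM [System.Activities.DurableInstancing].[InstancePromotedProperties]
        //   WHERE PromotionName = 'SearchableState' AND Value1 = 5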

    Read the article

  • Problem with final branch in a parallel activity

    - by Dan Revell
    This might seem like a silly thing to say, "the final branch in a parallel activity", so I'll clarify. It's a parallel activity with three branches, each containing a simple create task, on task changed, and complete task. The branch containing the task that is last to complete seems to break. Every task works in its own right, but the last one encounters a problem. Say the user clicks the final task's link to open the attached InfoPath form and submits it. Execution gets to the event handler for that onTaskChanged, where a taskCompleted variable gets set to true, which should exit the while loop. I've successfully hit a breakpoint on this line, so I know that happens. However, the final activity in that branch, the completeTask, doesn't get hit. When submit is clicked in the final form, the "operation in progress" screen stays up for quite a while before returning to the workflow status page, and the task that was opened and submitted says "Not Started". I can disable any of the branches to leave only two, but the same problem happens with the last one to complete. Earlier in the workflow I do essentially the same thing: I have another 3-branch parallel activity with each branch containing a task. That one works correctly, which leads me to believe the problem might be with having two parallel activities in the same sequential workflow. I've considered the possibility that it might be a correlation token problem. The token that every task branch uses is unique to that branch, and its owner activity name is set to that of the branch. It stands to reason that if the task complete variable is indeed getting set to true but the while loop isn't being exited, then there's a crossed wire with the variable somewhere. However, I'd still have thought that the task status back on the workflow status page would at least say that the task is in progress. This is a frustrating showstopper of a bug for me. Any thoughts or suggestions would be much appreciated so I can investigate them.

    Read the article

  • Workflow for academic research projects, one-step builds, and the Joel Test

    - by Steve
    Working alone on academic research sometimes breeds bad habits. With no one else reading my code, I would write a lot of throw-away code, and I would lose track of intermediate results which, weeks or months later, I wish I had retained. My recent attempts to make my personal workflow conform to the Joel Test raised interesting questions. Academic research has inherently different goals than industrial software development, and therefore some aspects of the Joel Test become less valid. Nevertheless, I find these steps still valuable for academic research: Do you use source control? Can you make a build in one step? Do you have an up-to-date schedule? Do you have a spec? Of particular use is the one-step build. I find myself more organized now that I have implemented the following "one-step build": a single script, build.py, that accepts Python code, data, and TeX as inputs; the outputs are results, figures, and a paper with all the results filled in. (Yes, I know "build" is probably not accurate in this context, but you get the idea.) By consolidating many small steps into one, I am not backtracking as much as I used to... but I'm sure there is still room for improvement. Question: For research projects, which steps of the Joel Test do you still value? Do you have a one-step build? If so, what does yours consist of, i.e., what inputs does it accept, and what output does it generate?

    Read the article

  • Workflow Foundation (WF) -- Why does setting a DependencyProperty to a COM object using SetValue() throw an ArgumentException?

    - by stakx
    Assume that I have a .NET Workflow Foundation (WF) SequenceActivity class with the following property:

        public IWorkspace Workspace { get; set; }
        // ^^^^^^^^^^
        // important: this is a COM interface type!

        public static DependencyProperty WorkspaceProperty =
            DependencyProperty.Register(
                "Workspace",
                typeof(IWorkspace),
                typeof(FoobarActivity));   // <-- this activity class

    This activity executes some code that sets both of the above like this:

        this.Workspace = ...;   // exact code not relevant; property set to a COM object
        SetValue(WorkspaceProperty, this.Workspace);

    The last line (the call to SetValue) results in an ArgumentException for the second parameter (the value of this.Workspace):

        Type […].IWorkspace of dependency property Workspace does not match the value's type System.__ComObject.

    (translated from German; the English exception text might differ slightly) As soon as I register the dependency property with typeof(object) instead of typeof(IWorkspace) as the second parameter, the code executes just fine. However, that would make it possible to assign just about any value to the dependency property, and I do not want that. It seems to me that WF dependency properties don't work for COM interop objects. Does anyone have a solution to this?
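
    For reference, a sketch of the workaround the question itself hints at (not a confirmed fix): register the slot as object, so the runtime type check accepts System.__ComObject, and keep the IWorkspace contract on the CLR wrapper property.

        public static readonly DependencyProperty WorkspaceProperty =
            DependencyProperty.Register("Workspace", typeof(object), typeof(FoobarActivity));

        // The CLR wrapper still exposes IWorkspace, so ordinary callers cannot assign
        // arbitrary values; only raw SetValue calls bypass the compile-time contract.
        public IWorkspace Workspace
        {
            get { return (IWorkspace)GetValue(WorkspaceProperty); }
            set { SetValue(WorkspaceProperty, value); }
        }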

    Read the article

  • Liferay Document Management System Workflow

    - by Rajkumar
    I am creating a DMS in Liferay. So far I can upload documents into the Liferay document library, and I can also see the documents in the Documents and Media portlet. The problem is that although the status for the document is in the pending state, the workflow is not started. Below is my code; please help, this is very urgent.

        Folder folder = null;

        // getting folder
        try {
            folder = DLAppLocalServiceUtil.getFolder(10181, 0, folderName);
            System.out.println("getting folder");
        } catch (NoSuchFolderException e) {
            // creating folder
            System.out.println("creating folder");
            try {
                folder = DLAppLocalServiceUtil.addFolder(userId, 10181, 0, folderName,
                        description, serviceContext);
            } catch (PortalException e3) {
                e3.printStackTrace();
            } catch (SystemException e3) {
                e3.printStackTrace();
            }
        } catch (PortalException e4) {
            e4.printStackTrace();
        } catch (SystemException e4) {
            e4.printStackTrace();
        }

        // adding file
        try {
            System.out.println("New File");
            fileEntry = DLAppLocalServiceUtil.addFileEntry(userId, 10181, folder.getFolderId(),
                    sourceFileName, mimeType, title, "testing description", "changeLog",
                    sampleChapter, serviceContext);

            Map<String, Serializable> workflowContext = new HashMap<String, Serializable>();
            workflowContext.put("event", DLSyncConstants.EVENT_CHECK_IN);
            DLFileEntryLocalServiceUtil.updateStatus(userId,
                    fileEntry.getFileVersion().getFileVersionId(),
                    WorkflowConstants.ACTION_PUBLISH, workflowContext, serviceContext);

            System.out.println("after entry" + fileEntry.getFileEntryId());
        } catch (DuplicateFileException e) {
        } catch (PortalException e1) {
            e1.printStackTrace();
        } catch (SystemException e1) {
            e1.printStackTrace();
        }

        return fileEntry.getFileEntryId();

    I have even used

        WorkflowHandlerRegistryUtil.startWorkflowInstance(companyId, userId,
                fileEntry.getClass().getName(), fileEntry.getClassPK, fileEntry, serviceContext);

    but I still have the same problem.

    Read the article

  • Help a CRUD programmer think about an "approval workflow"

    - by gerdemb
    I've been working on a web application that is basically a CRUD application (Create, Read, Update, Delete). Recently, I've started working on what I'm calling an "approval workflow". Basically, a request is generated for a material and then sent for approval to a manager. Depending on what is requested, different people need to approve the request, or perhaps send it back to the requester for modification. The approvers need to keep track of what to approve and what has been approved, and the requesters need to see the status of their requests. As a "CRUD" developer, I'm having a hard time wrapping my head around how to design this. What database tables should I have? How do I keep track of the state of the request? How should I notify users of actions that have happened to their requests? Is there a design pattern that could help me with this? Should I be drawing state machines in my code? I think this is a generic programming question, but if it makes any difference I'm using Django with MySQL.

    Read the article

  • Typical Search, Result and Detail Workflow Staying Within an Android Tab

    - by Justin
    So, I've been banging my head looking for a good solution for a few days and am stuck. I have a search screen (Activity) in a tab, and after the user enters a value and clicks "search", I would like the results to come back in that same tab; then, if an item from the results is selected, to show more detailed results, in that same tab. I have it all working now in separate activities, and even the first step working in a tab, but as soon as I call the activity to process the search results, i.e. startActivity(i); for the results Activity, the results displayed are not in the tab! I am having a very difficult time getting this flow to work all under a tab. Any thoughts on how to make this happen? I keep hearing that Android views should be used instead of activities, but am I then to assume that all the logic I have right now for 3 activities needs to go inside 1 activity, and that I need to handle setting the content and state for each of these cases? Plus, won't the history stack not work, since pressing the back button will take the user out of the application instead of taking them from, say, the search results to the search screen, or the details to the search results, etc.? This seems like a mess. Can anyone show a more complex example of tabs, or how one might have a simple search, result, and detail workflow staying in a tab? I have seen a few questions on this concept of keeping activities "within a tab", but no good resolution. Please help.

    Read the article

  • Enabling EUS support in OUD 11gR2 using command line interface

    - by Sylvain Duloutre
    Enterprise User Security (EUS) allows Oracle Database to use users and roles stored in LDAP for authentication and authorization. Since the 11gR2 release, OUD natively supports EUS. EUS can easily be configured during OUD setup, and ODSM (the graphical admin console) can also be used to enable EUS for a new suffix. However, enabling EUS for a new suffix using the command line interface is currently not documented, so here is the procedure. Let's assume that EUS support was enabled during the initial setup, and let o=example be the new suffix I want to use to store enterprise users. The following sequence of commands must be applied for each new suffix:

        // Create a local database holding EUS context info
        dsconfig create-workflow-element --set base-dn:cn=OracleContext,o=example \
            --set enabled:true --type db-local-backend --element-name exampleContext -n

        // Add a workflow element in the call path to generate on-the-fly attributes required by EUS
        dsconfig create-workflow-element --set enabled:true --type eus-context \
            --element-name eusContext --set next-workflow-element:exampleContext -n

        // Add the context to a workflow for routing
        dsconfig create-workflow --set base-dn:cn=OracleContext,o=example --set enabled:true \
            --set workflow-element:eusContext --workflow-name exampleContext_workflow -n

        // Add the new workflow to the appropriate network group
        dsconfig set-network-group-prop --group-name network-group \
            --add workflow:exampleContext_workflow -n

        // Create the local database for o=example
        dsconfig create-workflow-element --set base-dn:o=example --set enabled:true \
            --type db-local-backend --element-name example -n

        // Create a workflow element in the call path to the user data to generate
        // on-the-fly attributes expected by EUS
        dsconfig create-workflow-element --set enabled:true --set eus-realm:o=example \
            --set next-workflow-element:example --type eus --element-name eusWfe

        // Add the db to a workflow for routing
        dsconfig create-workflow --set base-dn:o=example --set enabled:true \
            --set workflow-element:eusWfe --workflow-name example_workflow -n

        // Add the new workflow to the appropriate network group
        dsconfig set-network-group-prop --group-name network-group \
            --add workflow:example_workflow -n

        // Add the appropriate ACIs for EUS
        dsconfig set-access-control-handler-prop \
            --add global-aci:'(target="ldap:///o=example")(targetattr="authpassword")(version 3.0; acl "EUS reads authpassword"; allow (read,search,compare) userdn="ldap:///??sub?(&(objectclass=orclservice)(objectclass=orcldbserver))";)'

        dsconfig set-access-control-handler-prop \
            --add global-aci:'(target="ldap:///o=example")(targetattr="orclaccountstatusevent")(version 3.0; acl "EUS writes orclaccountstatusenabled"; allow (write) userdn="ldap:///??sub?(&(objectclass=orclservice)(objectclass=orcldbserver))";)'

    Last but not least, you must adapt the content of the ${OUD}/config/EUS/eusData.ldif file with your suffix value, then import it into OUD.

    Read the article

  • Can DVCSs enforce a specific workflow?

    - by dukeofgaming
    So, I have this little debate at work where some of my colleagues (who are actually in charge of administrating our Perforce instance) say that workflows are strictly a process thing, and that the tools we use (in this case, the version control system) have no bearing on them. In other words, their point is that workflows (and their execution) are tool-agnostic. My take is that DVCSs are better at encouraging people toward flexible and well-defined workflows, because of the inherent branching occurring in the background (anonymous branches), and because you can enforce workflows through the deployment model you establish (e.g. pull requests through repository management, dictator/lieutenant roles with their machines set up as servers, etc.). I think in CVCSs you have to enforce workflows through policies and policing, because there is only one way to share the code, while in DVCSs you just go with the flow based on the infrastructure/permissions that were set up for you. Even when I have provided the earlier arguments, I'm still unable to fully convince them. Am I saying something the wrong way? If not, what other arguments or examples do you think would be useful to convince them? Edit: The main workflow we have been focusing on, because it makes sense to both sides, is the dictator/lieutenants workflow. My argument for this particular workflow is that there is no pipeline in a CVCS (because there is just sharing work in a centralized way), whereas there is an actual pipeline in DVCSs depending on how you deploy read/write permissions. Their argument is that this workflow can be done through branching, and while they do this in some projects (due to policy/policing), in other projects they forbid developers from creating branches.

    Read the article

  • When to Use workflow engines?

    - by A01_
    I'm totally new to this concept from a design perspective. I've worked in the past on some workflow engines as a programmer, but never had clarity on why we chose a workflow engine in the first place. As a programmer, I know that there are at least 100 ways to do anything when you are writing code, but only a few of them are the best! I still don't understand which use cases are better solved by workflow engines (or rather their concept) than by designing a good DI-enabled application. I'm looking for general characteristics of domain-neutral use cases where workflow engines are one of the best options. So my question is: what are the general characteristics of a requirement that can be taken as a signal for opting for a good workflow engine and coding around it? Cheers!

    Read the article

  • Can we host a Workflow Service as a Windows Service?

    - by arsayed
    I am working on a logging application that requires me to have a workflow that is exposed as a service (a workflow service). We want to host it as a Windows service (we don't want to host the workflow service as a .svc file in IIS). Another reason for having it as a Windows service is to be able to communicate with the service through named pipes. Can we expose a workflow service through named pipes without hosting it in IIS?
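
    Self-hosting is possible in WF4; here is a minimal sketch (the .xamlx file name, contract name, and pipe address are placeholders, not from the question) that loads the workflow service into a WorkflowServiceHost inside a Windows service with a named-pipe endpoint:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Activities;
        using System.ServiceProcess;
        using System.Xaml;

        public class WorkflowHostService : ServiceBase
        {
            private WorkflowServiceHost host;

            protected override void OnStart(string[] args)
            {
                var service = (WorkflowService)XamlServices.Load("LoggingWorkflow.xamlx");
                host = new WorkflowServiceHost(service,
                    new Uri("net.pipe://localhost/LoggingService"));
                // Contract name must match the ServiceContractName on the Receive activity.
                host.AddServiceEndpoint("ILoggingService", new NetNamedPipeBinding(), "");
                host.Open();
            }

            protected override void OnStop()
            {
                if (host != null) host.Close();
            }
        }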

    Read the article

  • Get Started using Build-Deploy-Test Workflow with TFS 2012

    - by Jakob Ehn
    TFS 2012 introduces a new type of lab environment called a Standard Environment. This allows you to set up a full Build-Deploy-Test (BDT) workflow that will build your application, deploy it to your target machine(s), and then run a set of tests on that server to verify the deployment. In TFS 2010, you had to use System Center Virtual Machine Manager and involve half of your IT department to get going. Now all you need is a server (virtual or physical) where you want to deploy and test your application. You don't even have to install a test agent on the machine; TFS 2012 will do this for you! Although each step is rather simple, the entire process of setting it up consists of a bunch of steps, so I thought that it could be useful to run through a typical setup. I will also link to some good guidance from MSDN on each topic.

    High Level Steps:

    1. Install and configure Visual Studio 2012 Test Controller on target server
    2. Create Standard Environment
    3. Create test plan with test case
    4. Run test case
    5. Create coded UI test from test case
    6. Associate coded UI test with test case
    7. Create build definition using LabDefaultTemplate

    1. Install and Configure Visual Studio 2012 Test Controller on Target Server

    First of all, note that the test controller does not have to run on the target server. It can run on another server, as long as the test agent can communicate with the test controller and the test controller can communicate with the TFS server. If you have several machines in your environment (web server, database server, etc.), the test controller can be installed either on one of those machines or on a dedicated machine. To install the test controller, simply mount the Visual Studio Agents media on the server and browse to the vstf_controller.exe file located in the TestController folder. Run through the installation; you might need to reboot the server, since it installs .NET 4.5. When the test controller is installed, the Test Controller configuration tool launches automatically (if it doesn't, you can start it from the Start menu). Here you supply the credentials of the account running the test controller service; note that this account will be given the necessary permissions in TFS during the configuration. Make sure that you have entered a valid account by pressing the Test link. You also have to register the test controller with the TFS collection where your test plan is located (and usually the code base, of course). When you press Apply Settings, all the configuration is done. You might get some warnings at the end that might or might not cause a problem later; be sure to read them carefully. For more information about configuring test controllers, see Setting Up Test Controllers and Test Agents to Manage Tests with Visual Studio.

    2. Create Standard Environment

    Now you need to create a lab environment in Microsoft Test Manager. Since we are using an existing physical or virtual machine, we will create a Standard Environment. Open MTM and go to Lab Center, then click New to create a new environment. Enter a name for the environment; since this environment will only contain one machine, we will use the machine name for the environment (TargetServer in this case). On the next page, click Add to add a machine to the environment. Enter the name of the machine (TargetServer.Domain.Com) and give it the Web Server role. The name must be reachable both from your machine during configuration and from the TFS app tier server. You also need to supply an account that is a local administrator on the target server; this is needed in order to automatically install a test agent on the machine later. On the next page you can add tags to the machine; this is not needed in this scenario, so go to the next page. Here you specify which test controller to use and that you want to run UI tests on this environment. This will result in a test agent being automatically installed and configured on the target server. The name of the machine where you installed the test controller should be available in the drop-down list (TargetServer in this sample); if you can't see it, you might have selected a different TFS project collection. Press Next twice, then Verify to verify all the settings, and press Finish. This creates and prepares the environment, which means that a test agent is remotely installed on the machine. As part of this installation, the remote server will be restarted.

    3-5. Create Test Plan, Run Test Case, Create Coded UI Test

    I will not cover steps 3-5 here; there is plenty of information on how to create test plans and test cases and automate them using coded UI tests. In this example I have a test plan called My Application, and it contains, among other things, a test suite called Automated Tests where I plan to put test cases that should be automated and executed as part of the BDT workflow. For more information about coded UI tests, see Verifying Code by Using Coded User Interface Tests.

    6. Associate Coded UI Test with Test Case

    Now we want to automate our coded UI test and have it run as part of the BDT workflow. You might think that your coded UI test is already automated, but the meaning of the term here is that you link your coded UI test to an existing test case, thereby making the test case automated. The test case should be part of the test suite that we will run during the BDT. Open the solution that contains the coded UI test method, then open the test case work item that you want to automate. Go to the Associated Automation tab and click the "…" button, select the coded UI test that corresponds to the test case, press OK, and save the test case. For more information about associating an automated test with a test case, see How to: Associate an Automated Test with a Test Case.

    7. Create Build Definition using LabDefaultTemplate

    Now we are ready to create a build definition that implements the full BDT workflow. For this purpose we will use the LabDefaultTemplate.11.xaml that comes out of the box in TFS 2012. This build process template lets you take the output of another build and deploy it to each target machine. Since the deployment process runs on the target server, you will have fewer problems with permissions and firewalls than if you were to deploy your solution remotely. So, before creating a BDT build definition, make sure that you have an existing build definition that produces a release build of your application. Go to the Builds hub in Team Explorer and select New Build Definition. Give the build definition a meaningful name; here I called it MyApplication.Deploy. Set the trigger to Manual and define a workspace for the build definition. Note that a BDT build doesn't really need a workspace, since all it does is launch another build definition and deploy the output of that build, but TFS doesn't allow you to save a build definition without adding at least one mapping. On Build Defaults, select the build controller. Since this build won't actually produce any output, you can select the "This build does not copy output files to a drop folder" option. On the Process tab, select LabDefaultTemplate.11.xaml; this is usually located at $/TeamProject/BuildProcessTemplates/LabDefaultTemplate.11.xaml. To configure it, press the "…" button on the Lab Process Settings property. First, select the environment that you created before, then select which build you want to deploy and test. The "Select an existing build" option is very useful when developing the BDT workflow, because you do not have to run through the target build every time; instead the workflow basically just runs through the deployment and test steps, which speeds up the process. Here I have selected to queue a new build of the MyApplication.Test build definition. On the Deploy tab, you need to specify how the application should be installed on the target server. You can supply a list of deployment scripts with arguments that will be executed on the target server. In this example I execute the generated web deploy command file to deploy the solution. If you have databases, you can use sqlpackage.exe to deploy them; if your build produces MSI installers, you can run them using msiexec.exe, and so on. A good practice is to create a batch file that contains the entire deployment and that you can run both locally and on the target server; then you would just execute the deployment batch file here in one single step. The workflow defines some variables that are useful when running the deployments:

    $(BuildLocation) - the full path to where your build files are located
    $(InternalComputerName_<VM Name>) - the computer name for a virtual machine in an SCVMM environment
    $(ComputerName_<VM Name>) - the fully qualified domain name of the virtual machine

    As you can see, I specify the path to the myapplication.deploy.cmd file using the $(BuildLocation) variable, which is the drop folder of the MyApplication.Test build. Note: the test agent account must have read permission in this drop location. You can find more information in Building your Deployment Scripts. On the last tab, we specify which tests to run after deployment. Here I select the test plan and the Automated Tests test suite that we saw before. Note that I also selected the automated test settings (called TargetServer in this case) that I have defined for my test plan; these define what data should be collected as part of the test run. For more information about test settings, see Specifying Test Settings for Microsoft Test Manager Tests. We are done! Queue your BDT build and wait for it to finish; if the build succeeds, the build summary will show the deployment and the test run results.

    Read the article

  • WF 4.0: can't resume workflow on the staging/production environment

    - by Yasmine Atta Hajjaj
    I have developed various registration workflows using WF 4.0. Each workflow has various bookmarks, and I am using the registration workflows from an ASP.NET application. I tested the ASP.NET application locally and it works fine (starting workflows, persisting to the database, and resuming bookmarks). When I try to test it on the staging server, everything goes wrong: I can no longer resume workflows, and I get this error message:

        System.Runtime.DurableInstancing.InstancePersistenceCommandException was unhandled by user code
        Message=The execution of the InstancePersistenceCommand named
            {urn:schemas-microsoft-com:System.Activities.Persistence/command}LoadWorkflow
            was interrupted by an error.
        Source=System.Runtime.DurableInstancing
        StackTrace:
            at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
            at System.Runtime.DurableInstancing.InstancePersistenceContext.OuterExecute(InstanceHandle initialInstanceHandle, InstancePersistenceCommand command, Transaction transaction, TimeSpan timeout)
            at System.Runtime.DurableInstancing.InstanceStore.Execute(InstanceHandle handle, InstancePersistenceCommand command, TimeSpan timeout)
            at System.Activities.WorkflowApplication.PersistenceManager.Load(TimeSpan timeout)
            at System.Activities.WorkflowApplication.LoadCore(TimeSpan timeout, Boolean loadAny)
            at System.Activities.WorkflowApplication.Load(Guid instanceId, TimeSpan timeout)
            at System.Activities.WorkflowApplication.Load(Guid instanceId)
            at CEO_StartUpCEORegisterationTest.LoadInstance(Guid wfInstanceId) in c:\Users\Kunoichi\Documents\Visual Studio 2010\Projects\CMERegistrationSystem\RegistrationPortal\CEO\StartUpCEORegisterationTest.aspx.cs:line 64
            at CEO_StartUpCEORegisterationTest.Page_Load(Object sender, EventArgs e) in c:\Users\Kunoichi\Documents\Visual Studio 2010\Projects\CMERegistrationSystem\RegistrationPortal\CEO\StartUpCEORegisterationTest.aspx.cs:line 44
            at System.Web.Util.CalliHelper.EventArgFunctionCaller(IntPtr fp, Object o, Object t, EventArgs e)
            at System.Web.Util.CalliEventHandlerDelegateProxy.Callback(Object sender, EventArgs e)
            at System.Web.UI.Control.OnLoad(EventArgs e)
            at System.Web.UI.Control.LoadRecursive()
            at System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint)
        InnerException: System.Data.SqlClient.SqlException
            Message=Index 'NCIX_KeysTable_SurrogateInstanceId' on table 'KeysTable' (specified in the FROM clause) does not exist.
            Source=.Net SqlClient Data Provider
            ErrorCode=-2146232060
            Class=16
            LineNumber=211
            Number=308
            Procedure=LoadInstance
            Server=
            State=1
            StackTrace:
                at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
                at System.Activities.DurableInstancing.SqlWorkflowInstanceStoreAsyncResult.SqlCommandAsyncResultCallback(IAsyncResult result)

    I know that this is quite verbose, but I have been banging my head against the wall for more than a week. I did search, and all I came to know was to work on MS DTC. I enabled it on the staging server and installed the Application Server role there, and I am still getting the same error. I would appreciate it if anyone could help with the problem. Thanks in advance :)

    Read the article

  • Workflow Foundation 4 - DeclarativeServiceLibrary - Error while calling second ReceiveAndSendReply

    - by dotnetexperiments
    Hi, I have created a DeclarativeServiceLibrary using VS2010 Beta 2 (please see this image of the sequential service). The following code is used to call the two activities:

        int? data = 123;
        ServiceReference1.ServiceClient client1 = new ServiceReference1.ServiceClient();
        string result1 = client1.GetData(data);
        // This line shows the error :(
        string result2 = client1.Operation1();
        Response.Write(result1 + " :: ::" + result2);

    client1.GetData works perfectly, but client1.Operation1 throws the following error; please let me know how to fix this:

        There is no context attached to the incoming message for the service and the current
        operation is not marked with "CanCreateInstance = true". In order to communicate with
        this service check whether the incoming binding supports the context protocol and has
        a valid context initialized.
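
    For what it's worth, a hedged sketch of the usual shape of the fix (not verified against this exact project): the second call must reach the same workflow instance, so the client has to flow the context token returned by GetData, for example by enabling cookies on a context-aware binding instead of relying on the generated configuration:

        using System.ServiceModel;

        // Placeholder address; ServiceClient is the generated proxy from the service reference.
        var binding = new BasicHttpContextBinding { AllowCookies = true };
        var address = new EndpointAddress("http://localhost/Service1.xamlx");
        var client1 = new ServiceReference1.ServiceClient(binding, address);

        string result1 = client1.GetData(123);   // the Receive marked CanCreateInstance=true starts the instance
        string result2 = client1.Operation1();   // the context cookie routes this call to the same instance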

    Read the article

  • Loading a Workflow 4 from xaml file and adding it to workflowdesigner

    - by Jimmy Engtröm
    Hi, I have created a couple of activities and stored them as XAML. Opening them in the WorkflowDesigner works great, and I can execute them. Now I would like to create a new activity and add the activities I created to it: basically, loading them from the XAML and into the designer as part of another activity/flow. I have tried adding my activities to the toolbox, but they render as DynamicActivity and (understandably) do not work. Any suggestions? Is it even possible? /Jimmy
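
    It is possible at the API level; whether the designer toolbox cooperates is a separate issue. A rough sketch (file name invented) of loading the saved XAML back as an Activity and composing it into a new parent:

        using System.Activities;
        using System.Activities.Statements;
        using System.Activities.XamlIntegration;

        Activity saved = ActivityXamlServices.Load("MyActivity.xaml");  // comes back as a DynamicActivity

        var composite = new Sequence
        {
            Activities =
            {
                saved,
                new WriteLine { Text = "runs after the reloaded activity" }
            }
        };

        WorkflowInvoker.Invoke(composite);  // executes even though the designer shows DynamicActivity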

    Read the article

  • IVR-style dialog system / workflow / menu

    - by unbeli
    I need to build a dialog system similar to the IVR used in call centers. My system is not phone-based, but the dialog is similar. Something like:

        System: "Main menu: Enter [1] for menu1, [2] for menu2"
        User:   [1]
        System: "menu1: enter [1] for apples, [2] for oranges, [3] for main menu"
        User:   [7]
        System: "What??"
        System: "menu1: enter [1] for apples, [2] for oranges, [3] for main menu"
        User:   [2]
        ... and so on

    I want to have a nice declarative description of all the possible options and a nice way to run through that tree, guided by user input. Already considered: an ANTLR-generated lexer/parser (seems to be overkill) and an SCXML-based state machine (it seems like only transitions can be declared, and the rest needs to be coded).
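
    For comparison, a minimal sketch (invented here, not taken from the question) of the "declarative tree plus tiny interpreter" option in C#: the menus are plain data and a single loop walks the tree on each keypress.

        using System;
        using System.Collections.Generic;

        class Menu
        {
            public string Prompt;
            // Each option maps to either a sub-Menu or a string naming a terminal action.
            public Dictionary<char, object> Options = new Dictionary<char, object>();
        }

        class Program
        {
            static void Main()
            {
                var main  = new Menu { Prompt = "Main menu: Enter [1] for menu1" };
                var menu1 = new Menu { Prompt = "menu1: enter [1] for apples, [2] for oranges, [3] for main menu" };
                main.Options['1'] = menu1;
                menu1.Options['1'] = "apples";
                menu1.Options['2'] = "oranges";
                menu1.Options['3'] = main;

                Menu current = main;
                while (true)
                {
                    Console.WriteLine(current.Prompt);
                    char key = Console.ReadKey(true).KeyChar;
                    object next;
                    if (!current.Options.TryGetValue(key, out next))
                    {
                        Console.WriteLine("What??");   // unknown input: repeat the prompt
                        continue;
                    }
                    if (next is Menu) current = (Menu)next;
                    else Console.WriteLine("You chose: " + next);
                }
            }
        }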

    Read the article

  • Git subtree workflow

    - by Cedric
    In my current project I'm using an open source forum (https://github.com/vanillaforums/Garden.git). I was planning on doing something like this:

        git remote add vanilla_remote https://github.com/vanillaforums/Garden.git
        git checkout -b vanilla vanilla_remote/master
        git checkout master
        git read-tree --prefix=vanilla -u vanilla

    This way I can make changes inside the vanilla folder (like changing the config) and commit them to my master branch, and I can also switch to my vanilla branch to fetch updates. My problem is when I try to merge the branches together:

        git checkout vanilla
        git pull
        git checkout master
        git merge --squash -s subtree --no-commit vanilla

    The problem is that the "update commit" goes on top of my commits and overwrites my changes. I would rather have my commits replayed on top of the update. Is there a simple way to do that? I'm not very good with git, so maybe this is the wrong approach. Also, I really don't want to mix my history with the vanilla history.

    Read the article

  • How to improve workflow for creating a Lua-based Wireshark dissector

    - by piyo
    I've finally created a dissector for my UDP protocol in Lua for Wireshark, but the workflow is just horrendous. It consists of editing my custom Lua file in my editor, then double-clicking my example capture file to launch Wireshark to see the changes. If there was an error, Wireshark informs me via dialogs or a red line in the tree analysis sub-pane. I then re-edit my custom Lua file, close that Wireshark instance, and double-click my example capture file again. It's like compiling a C file and only seeing one compiler error at a time. Is there a better (faster) way of looking at my changes, without having to restart Wireshark all the time? At the time, I was using Wireshark 1.2.9 for Windows with Lua enabled.

    Read the article
