Search Results

Search found 2157 results on 87 pages for 'sequential workflow'.


  • DefaultSchedulerService in ASP.NET application

    - by Samir P
    Hi, my project has a requirement to implement look-ahead caching, i.e. triggering another request on invocation of a specific request. The implementation, in short: an HttpModule parses the SOAP request and matches an entry in a configuration file for a look-ahead candidate. If the request matches, it prepares the Parameters dictionary and starts the appropriate workflow. A single workflow runtime is shared across all requests; this is ensured by initializing the runtime instance at the Application_Start event and storing it in the Application dictionary. We are using the persistence service and the DefaultScheduler service. We can't implement a Windows Service model, as the current requirement mandates passing the SOAP request parameters as arguments. ManualSchedulerService is not in contention due to the synchronous nature of its behaviour. Still, the performance is pretty bad and the product team is not happy. Can anybody suggest a better solution? Thanks, Samir
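
    As a rough illustration of the hosting pattern described above (single runtime created at Application_Start and shared through the Application dictionary), here is a minimal WF 3.x sketch; the connection string, the simultaneous-workflow limit, and the class names are placeholders, not the poster's code:

      // Global.asax.cs -- hypothetical names; tuning DefaultWorkflowSchedulerService's
      // maxSimultaneousWorkflows is often the first knob to try when throughput is poor
      using System;
      using System.Web;
      using System.Workflow.Runtime;
      using System.Workflow.Runtime.Hosting;

      public class Global : HttpApplication
      {
          protected void Application_Start(object sender, EventArgs e)
          {
              WorkflowRuntime runtime = new WorkflowRuntime();
              runtime.AddService(new DefaultWorkflowSchedulerService(10)); // max simultaneous workflows
              runtime.AddService(new SqlWorkflowPersistenceService("persistence connection string"));
              runtime.StartRuntime();
              Application["WorkflowRuntime"] = runtime;   // read by the HttpModule per request
          }

          protected void Application_End(object sender, EventArgs e)
          {
              ((WorkflowRuntime)Application["WorkflowRuntime"]).StopRuntime();
          }
      }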

    Read the article

  • Space-based architecture?

    - by rcampbell
    One chapter in The Pragmatic Programmer recommends looking at a blackboard/space-based architecture + a rules engine as a more flexible alternative to a traditional workflow system. The project I'm working on currently uses a workflow engine, but I'd like to evaluate alternatives. I really feel like an SBA would be a better solution to our business problems, but I'm worried about a total lack of community support/user base/vendors/options. JavaSpaces is dead, and the Jini spin-off Apache River seems to be on life support. SemiSpace looks perfect, but it's a one-man show. The only viable solution seems to be GigaSpaces. I'd like to hear your thoughts on space-based architecture and any experiences you've had with real-world implementations.

    Read the article

  • Progressing Alfresco workflows through web script

    - by Domchi
    I have an Alfresco document reference; what I'm looking for is a way to access the workflow attached to that document and finish it (or progress it to the next transition) through JavaScript. Almost every example on the web shows how to start a workflow, and from the dashlet I could call the task command processor (/alfresco/command/task/end/[/transition]) if I knew the task ID, but how do I do the same thing from a server-side web script, starting only from the document reference? There must be a way to access workflows from a document and manage them programmatically.

    Read the article

  • Fixing warning from git

    - by japancheese
    My workflow has been to create a git repository on a remote central server, clone that repo onto my local dev machine, do some work, and then push the changes back to the same repo on the remote server. However (and I believe this started after an update I did to git recently), after pushing up a change, I'm getting the following warning:

      Counting objects: 2724, done.
      Delta compression using up to 2 threads.
      Compressing objects: 100% (2666/2666), done.
      Writing objects: 100% (2723/2723), 5.90 MiB | 313 KiB/s, done.
      Total 2723 (delta 219), reused 0 (delta 0)
      warning: updating the currently checked out branch; this may cause confusion,
      as the index and working tree do not reflect changes that are now in HEAD.

    Can someone explain to me exactly what this warning means, and what I'm doing wrong in my workflow that causes it?

    Read the article

  • Why is OnWorkflowItemChanged different between a list and a document library?

    - by Yongwei Xing
    I am writing a workflow for a document library. I added an OnWorkflowItemChanged activity, and I want to get the value of the column that was changed. I tried both workflowProperties.Item["column name"] and the afterProperties. But when I use workflowProperties.Item["column name"], I still get the original value, and when I use the afterProperties, it's NULL. Then I made another workflow, identical to the one above, for a list; there I can use workflowProperties.Item["column name"] to get the new value in OnWorkflowItemChanged. Has anyone come across this problem before? Can you give me some help?
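
    One workaround sometimes suggested for this symptom (a sketch only, not a confirmed explanation of the list/library difference): re-read the item inside the Invoked handler instead of relying on the cached copy. "Column Name" and the handler name are placeholders:

      // fragment of the workflow's code-beside class (SharePoint / WF 3.x);
      // workflowProperties is the usual SPWorkflowActivationProperties field
      private void onWorkflowItemChanged_Invoked(object sender, ExternalDataEventArgs e)
      {
          // workflowProperties.Item can still hold the values captured earlier,
          // so fetch a fresh copy of the item to see what the change actually wrote
          SPListItem fresh = workflowProperties.List.GetItemById(workflowProperties.ItemId);
          object newValue = fresh["Column Name"];
      }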

    Read the article

  • Will WF 4.0 make me obsolete?

    - by codemnky
    I saw a post on Oslo about making us obsolete. I just happened to listen to the latest Deep Fried episode with Brian Noyes. They were talking about SharePoint and Windows Workflow, and how the "dream" of Windows Workflow is to let mere business analysts drag and drop their way to a functioning service. I am a newbie .NET developer, and afraid that by the time I reach the consulting "level" my skills will be obsolete. Should I abandon learning basic skills and just learn how to work with frameworks and packaged applications such as SAP, SharePoint, and BizTalk? Am I wasting time trying to learn expression trees and Func of T's?

    Read the article

  • Windows Workflows - While Activity for creating multiple tasks not working

    - by Georgil Mathew
    I am using a While activity to create multiple tasks for a workflow. The code executes fine and the task is created when the loop runs only once. But when the loop runs twice or more, only one task gets created, and the workflow status shows as Error Occurred. All I want to do here is create multiple tasks (the number of tasks depends on an entered column value) for the same user. Is it possible to use a While activity in this scenario, or is there another way to go about it? NB: I am using a state machine workflow.
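
    A sketch of the usual fix for this symptom, assuming a CreateTask activity inside the While loop: give each iteration its own task id in MethodInvoking, and make sure the task's correlation token is scoped to the loop body (or use a ReplicatorActivity instead). The field names below are illustrative, not from the post:

      // MethodInvoking handler on the CreateTask activity inside the While loop
      private void createTask_MethodInvoking(object sender, EventArgs e)
      {
          // every iteration needs its own task id; reusing the previous Guid means
          // only the first task is created and later iterations fault the workflow
          createTask_TaskId = Guid.NewGuid();

          createTask_TaskProperties = new SPWorkflowTaskProperties();
          createTask_TaskProperties.Title = "Task " + taskIndex;       // taskIndex: assumed workflow field
          createTask_TaskProperties.AssignedTo = assignedUserLogin;    // assumed workflow field
      }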

    Read the article

  • Code reviews for larger ASP.NET MVC team using TFS

    - by Parrots
    I'm trying to find a good code review workflow for my team. Most questions similar to this on SO revolve around using shelved changes for the review, but I'm curious how this works for people with larger teams. We usually have 2-3 people working a story (UI person, Domain/Repository person, sometimes DB person). I've recommended the shelveset idea, but we're all concerned about how to manage that with multiple people working the same feature. How could you share a shelveset between multiple programmers at that point? We worry it would be clunky and we might easily have unintended consequences moving to this workflow. Of course, moving to shelvesets for each feature avoids having 10 or so check-ins per feature (as developers need to share code), which make seeing the diffs at code review time painful. Has anyone else been able to successfully deal with this? Are there any tools out there people have found useful aside from shelvesets in TFS (preferably open-source)?

    Read the article

  • What to use for repetitive (daily, weekly, monthly) tasks? Workflows, Windows Services, something else?

    - by mare
    I've been writing Windows Services for a while and they always seem to work fine for things that need to run every day, a few times a week, once a month, etc., but lately I've been thinking about going with Windows Workflow Foundation. However, I am unsure how workflows would run on a server without some container application (for instance SharePoint). I worked with SharePoint workflows before and I always had huge problems, at first with the bugs in the workflow architecture implementation (the problems with sleep and delay) and later, when they eventually started to work, they were difficult to manage and change. On the other hand, Windows Services were always quite easy to implement, easy to create a setup for and install, and they were always quite resilient (they often ran for months without crashing or anything else going wrong). What do you recommend? Please bear in mind we are working in .NET (the version is not a problem; if 4.0 brings something new on this subject, we can use it).
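
    For comparison, the plain Windows Service approach the poster describes can stay as small as this sketch (the names are made up; installer and Main boilerplate are omitted):

      using System;
      using System.ServiceProcess;
      using System.Timers;

      public class NightlyJobService : ServiceBase
      {
          private Timer _timer;

          protected override void OnStart(string[] args)
          {
              // poll every 15 minutes and let RunDueJobs decide what is actually due
              _timer = new Timer(TimeSpan.FromMinutes(15).TotalMilliseconds);
              _timer.Elapsed += delegate { RunDueJobs(); };
              _timer.Start();
          }

          protected override void OnStop()
          {
              _timer.Stop();
          }

          private void RunDueJobs()
          {
              // look up which daily/weekly/monthly jobs are due (config or a schedule table)
              // and run them; keeping each job idempotent makes a missed tick harmless
          }
      }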

    Read the article

  • Mercurial to Mercurial to Subversion Workflow Problem

    - by Dalroth
    We're migrating from Subversion to Mercurial. To facilitate the migration, we're creating an intermediate Mercurial repository that is a clone of our Subversion repository. All developers will begin switching over to the Mercurial repository, and we'll periodically push changes from the intermediate Mercurial repository to the existing Subversion repository. After a period of time, we'll simply obsolete the Subversion repository and the intermediate Mercurial repository will become the new system of record.

      Dev 1 Local --+--> Mercurial --+--> Subversion
      Dev 2 Local --+                +
      Dev 3 Local --+                +
      Dev 4 -------------------------+

    I've been testing this out, but I keep running into a problem when I push changes from my local repository to the intermediate Mercurial repository, and then up into our Subversion repository. On my local machine, I have a changeset that is committed and ready to be pushed to our intermediate Mercurial repository. Here you can see it is revision #2263 with hash 625... I push only this changeset to the remote repository. So far, everything looks good. The changeset has been pushed.

      hg update
      1 files updated, 0 files merged, 0 files removed, 0 files unresolved

    I now switch over to the remote repository, and update the working directory.

      hg push
      pushing to svn://...
      searching for changes
      [r3834] bmurphy: database namespace
      pulled 1 revisions
      saving bundle to /srv/hg/repository/.hg/strip-backup/62539f8df3b2-temp
      adding branch
      adding changesets
      adding manifests
      adding file changes
      added 1 changesets with 1 changes to 1 files
      rebase completed

    Next, I push the change up to Subversion, works great. At this point, the change is in the Subversion repository and I return attention back to my local client. I pull changes to my local machine. Huh? I've now got two changesets. My original changeset appears as a local branch now. The other changeset has a new revision number 2264, and a new hash 10c1... Anyway, I update my local repo to the new revision. I'm now switched over. So, I finally click the "determine and mark outgoing changesets" and as you can see Mercurial still wants to push out my previous changesets even though they've already been pushed. Clearly, I'm doing something wrong. I also can't merge the two revisions. If I merge the two revisions on my local machine, I end up with a "merge" commit. When I push that merge commit out to the intermediate Mercurial repository, I can no longer push changes out to our Subversion repository. I end up with the following problem:

      hg update
      0 files updated, 0 files merged, 0 files removed, 0 files unresolved
      hg push
      pushing to svn://...
      searching for changes
      abort: Sorry, can't find svn parent of a merge revision.

    and I have to roll back the merge to get back to a working state. What am I missing?

    Read the article

  • Best practices for team workflow with RoR/Github for designer + coder?

    - by Josh
    My friend and I have started to try to collaborate on some projects. For background, I come from a PHP/Wordpress/Drupal coding background, but recently I've become more experienced with the RoR framework, while he is more experienced as an HTML/CSS designer, working with PHP and WordPress. We're both relatively new to RoR I think, and so we're trying to figure out our collaborative workflow, but we have no idea where to start. For instance, we were trying to figure out how he could do some minor edits to the CSS file without having to do a full RoR deploy on his box. We still haven't figured out a solution, so I think it's best if we start to set some sort of workflow based on best practices. I was wondering if you guys have any insight or links to articles/case studies regarding this topic?

    Read the article

  • Windows Workflow and SQL script in declarative config like InRule

    - by Satish
    We have been using InRule for our rule needs but have found that it does not scale well, so we are investigating Windows Workflow. Within InRule we could configure pretty much any task; for example, our SQL scripts and stored procedures were all part of a separate rule config file. I am wondering if there is similar functionality within Windows Workflow where I could just call a declarative task and pass it a bunch of parameters. This task should contain the SQL script I would be executing, and we should be able to change the script at runtime without recompiling the WF code. Is this possible in Windows Workflow, and how can I accomplish it? Additionally, for SQL execution within Workflow, how does it get the connection string? Should it be passed from the calling program - is passing it as an input parameter from the calling app via the Dictionary object the best way, or can the workflow code have visibility into my calling program's app.config to get the connection string?
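
    One way to sketch this in WF4 (System.Activities) is a small custom activity that receives both the script and the connection string as arguments, so they can come from a config or rule file and change without recompiling; ExecuteSqlActivity and the "Default" connection string name are illustrative, and ConfigurationManager here reads the host process's app.config:

      using System.Activities;
      using System.Configuration;
      using System.Data.SqlClient;

      public sealed class ExecuteSqlActivity : CodeActivity
      {
          public InArgument<string> Sql { get; set; }
          public InArgument<string> ConnectionString { get; set; }

          protected override void Execute(CodeActivityContext context)
          {
              // fall back to the host's app.config if no explicit connection string is passed in
              string connStr = ConnectionString.Get(context)
                  ?? ConfigurationManager.ConnectionStrings["Default"].ConnectionString;

              using (var conn = new SqlConnection(connStr))
              using (var cmd = new SqlCommand(Sql.Get(context), conn))
              {
                  conn.Open();
                  cmd.ExecuteNonQuery();
              }
          }
      }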

    Read the article

  • Linq Query Help Needed

    - by Randy Minder
    Say I have the following LINQ queries:

      var source = from workflow in sourceWorkflowList
                   select new
                   {
                       SubID = workflow.SubID,
                       ReadTime = workflow.ReadTime,
                       ProcessID = workflow.ProcessID,
                       LineID = workflow.LineID
                   };

      var target = from workflow in targetWorkflowList
                   select new
                   {
                       SubID = workflow.SubID,
                       ReadTime = workflow.ReadTime,
                       ProcessID = workflow.ProcessID,
                       LineID = workflow.LineID
                   };

      var difference = source.Except(target);

    sourceWorkflowList and targetWorkflowList have the exact same column definitions, but they both contain more columns of data than what is shown in the queries above; those are just the columns needed for this particular issue. difference contains all rows in sourceWorkflowList that are not contained in targetWorkflowList. Now what I would like to do is remove all rows from sourceWorkflowList that do not exist in difference. Could someone show me a query that would do this? Thanks very much - Randy
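
    Assuming sourceWorkflowList is a List<T> (an assumption, not stated in the post), one possible sketch of the removal; the anonymous keys below match the projection above, so Contains compares them structurally:

      // keep only the source rows whose key appears in difference
      var diffKeys = difference.ToList();

      sourceWorkflowList.RemoveAll(workflow =>
          !diffKeys.Contains(new
          {
              SubID = workflow.SubID,
              ReadTime = workflow.ReadTime,
              ProcessID = workflow.ProcessID,
              LineID = workflow.LineID
          }));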

    Read the article

  • Can an arbitrary email address be used in a workflow send email activity?

    - by Greg McGuffey
    I'm wondering if there is any way to be able to include an arbitrary email address as the To:, From:, CC: or BCC: fields of a send email activity? It appears that they must be contacts in the CRM. I ask this because I have a requirement to cc a known group email (no actual user associated with the email...something like [email protected] it's not a queue at all). I'm concerned that if I create a CRM user for this email, that when I move to production, I'll have to change all the workflows using this email to point to the CRM entity on the production box (assuming GUID is saved with activity). If an arbitrary email isn't possible, any other suggestions? Thanks!

    Read the article

  • Need help with an AJAX workflow

    - by Anders
    Sorry I couldn't be more descriptive with the title; I will elaborate fully below. I have a web application that I want to implement some AJAX functionality into. Currently, it is running ASP.NET 3.5 with VB.NET codebehind. My current "problem" is that I want to be able to dynamically populate a DIV when a user clicks an item on a list. The list item currently contains an HttpUtility.UrlEncode() (ASP.NET) string of the content that should appear in the DIV. Example:

      <li onclick="setFAQ('The+maximum+number+of+digits+a+patient+account+number+can+contain+is+ten+(10).');">
      What is the maximum number of digits a patient account number can contain?</li>

    I can partially decode the string with the JavaScript function unescape(), but it does not fully decode it. I would much rather pass the JavaScript function the FAQ ID and then somehow pull the information from the database where it originates. I am 99% sure it is impossible to call an ASP.NET function from within a JavaScript function, so I am kind of stumped. I am kind of new to AJAX/ASP.NET so this is a learning experience for me.
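
    One era-appropriate sketch (ASP.NET AJAX page methods, which assume a ScriptManager with EnablePageMethods="true"; shown in C# although the post mentions VB.NET code-behind, and the Faq table and connection string name are made up): expose a static method that the onclick can call with just the FAQ id, and let it pull the text from the database:

      // in the page's code-behind; callable from script as PageMethods.GetFaqAnswer(id, callback)
      [System.Web.Services.WebMethod]
      public static string GetFaqAnswer(int faqId)
      {
          string connStr = System.Configuration.ConfigurationManager
              .ConnectionStrings["FaqDb"].ConnectionString;   // "FaqDb" is a placeholder

          using (var conn = new System.Data.SqlClient.SqlConnection(connStr))
          using (var cmd = new System.Data.SqlClient.SqlCommand(
              "SELECT Answer FROM Faq WHERE FaqId = @id", conn))
          {
              cmd.Parameters.AddWithValue("@id", faqId);
              conn.Open();
              return (string)cmd.ExecuteScalar();
          }
      }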

    Read the article

  • asp.net mvc - reusing pages/controllers in workflow

    - by fregas
    I have 2 workflows:
    1) The user signs up for the first time. They see 3 different screens: their basic user information, their credit card, and some additional profile information. They complete these 3 steps in a wizard-like fashion, where each time they hit "submit" they leave the current screen and move on to the next.
    2) The user is already signed up. He has links in the navigation to these 3 separate pages. He can update them in any order. When he hits save, he doesn't leave the page he's on; it just shows something at the top that says "Credit Card Info saved..." or whatever, possibly using AJAX or maybe a full page refresh.
    I would like to reuse the code, not only the view but also the controller, for these 3 screens between the two workflows, but without a ton of if...then logic to determine where to go next depending on whether it's a first signup in the wizard or an update to individual parts of a profile. Any ideas?
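
    One common sketch for this: let a single POST action do the save and have a small mode flag decide whether to advance the wizard or stay on the page. CreditCardModel, _profileService and the route names are illustrative, not from the post:

      [AcceptVerbs(HttpVerbs.Post)]
      public ActionResult CreditCard(CreditCardModel model, bool wizard)
      {
          if (!ModelState.IsValid)
              return View(model);                        // redisplay with validation errors

          _profileService.SaveCreditCard(User.Identity.Name, model);

          return wizard
              ? RedirectToAction("Profile")                            // next step of the sign-up wizard
              : RedirectToAction("CreditCard", new { saved = true });  // stay put and show "saved"
      }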

    Read the article

  • Verify my form workflow

    - by Shackrock
    I have a form with some sensitive info (CC numbers). My workflow is:
    1. One page takes all form items.
    2. Upon submission, values are validated. If all is well, all data is stored in a session variable, and the page reloads and displays this info from the session variable.
    3. If everything is ok on the review page, the user clicks submit and the session variable is sent to another form for processing (sending payment).
    4. Upon success, the session is destroyed.
    5. Upon failure (bad CC number, for example), the user is sent back to the form, with all of the fields filled in just like before, so that they can check for errors and try again (the session is NOT destroyed).
    Does anyone see anything wrong with this, from a security or best practices standpoint?
    UPDATE: I'm thinking I can get rid of a step - never storing the info in a session at all. Just have a one-page checkout, no review page... makes sense.

    Read the article

  • Workflow for statistical analysis and report writing

    - by ws
    Does anyone have any wisdom on workflows for data analysis related to custom report writing? The use-case is basically this: Client commissions a report that uses data analysis, e.g. a population estimate and related maps for a water district. The analyst downloads some data, munges the data and saves the result (e.g. adding a column for population per unit, or subsetting the data based on district boundaries). The analyst analyzes the data created in (2), gets close to her goal, but sees that needs more data and so goes back to (1). Rinse repeat until the tables and graphics meet QA/QC and satisfy the client. Write report incorporating tables and graphics. Next year, the happy client comes back and wants an update. This should be as simple as updating the upstream data by a new download (e.g. get the building permits from the last year), and pressing a "RECALCULATE" button, unless specifications change. At the moment, I just start a directory and ad-hoc it the best I can. I would like a more systematic approach, so I am hoping someone has figured this out... I use a mix of spreadsheets, SQL, ARCGIS, R, and Unix tools. Thanks!

    PS: Below is a basic Makefile that checks for dependencies on various intermediate datasets (w/ ".RData" suffix) and scripts (".R" suffix). Make uses timestamps to check dependencies, so if you 'touch ss07por.csv', it will see that this file is newer than all the files / targets that depend on it, and execute the given scripts in order to update them accordingly. This is still a work in progress, including a step for putting into SQL database, and a step for a templating language like sweave. Note that Make relies on tabs in its syntax, so read the manual before cutting and pasting. Enjoy and give feedback! http://www.gnu.org/software/make/manual/html%5Fnode/index.html#Top

      R=/home/wsprague/R-2.9.2/bin/R

      persondata.RData : ImportData.R ../../DATA/ss07por.csv Functions.R
              $R --slave -f ImportData.R

      persondata.Munged.RData : MungeData.R persondata.RData Functions.R
              $R --slave -f MungeData.R

      report.txt: TabulateAndGraph.R persondata.Munged.RData Functions.R
              $R --slave -f TabulateAndGraph.R > report.txt

    Read the article

  • Trying to determine the best design for this workflow - C# 3.0

    - by ltech
    Input server - files of type jpg, tif, raw, png, mov come in via FTP. Each file needs to be watermarked, if applicable, and have metadata added to it. Then each file needs to be moved to an orders directory, where an order file is generated, and then packaged as a zip file and moved to the processing server. The file names are of the form [orderid_userid_guid].[jpg|tif|mov|png...]. As I expect the volume to grow, I don't want to work on one file at a time and move it through the workflow; I would prefer a multi-threaded/asynchronous approach if possible.
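
    A rough sketch of fanning incoming files out to the thread pool (C# 3.0-friendly, no TPL; OrderFileProcessor and the step comments are illustrative only, not the poster's design):

      using System;
      using System.IO;
      using System.Threading;

      public class OrderFileProcessor
      {
          private FileSystemWatcher _watcher;

          public void Watch(string incomingDir)
          {
              _watcher = new FileSystemWatcher(incomingDir);
              // each new file is handed to a thread-pool thread so uploads are processed in parallel
              // (in practice you would wait for the FTP upload to finish before processing)
              _watcher.Created += (s, e) => ThreadPool.QueueUserWorkItem(_ => Process(e.FullPath));
              _watcher.EnableRaisingEvents = true;
          }

          private void Process(string path)
          {
              // 1. watermark if applicable
              // 2. add metadata
              // 3. move to the orders directory, write the order file, zip, move to the processing server
          }
      }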

    Read the article

  • Lego-Style Cocoa Workflow Application

    - by Armin Ronacher
    Hi, I currently have to develop a system very similar to MIT Scratch's UI. In case you don't know it, here is a screenshot: http://kidconfidence.com/blogs/wp-content/uploads/2007/10/scratch1.png Basically you have bricks in the library on the left that you can drop into the window on the right side. The problem I have is that I'm new to Cocoa and not sure what would be the best way to accomplish this. Because you can sometimes nest these bricks and other times stick them together, I wonder if there is something that would help with implementing that. I recognize this is not a very common interface, so there are probably no existing implementations, but maybe there are helpers for parts of it. Regards, Armin

    Read the article

  • What's the best version control/QA workflow for a legacy system?

    - by John Cromartie
    I am struggling to find a good balance with our development and testing process. We use Git right now, and I am convinced that ReinH's Git Workflow For Agile Teams is not just great for capital-A Agile, but for pretty much any team on DVCS. That's what I've tried to implement but it's just not catching. We have a large legacy system with a complex environment, hundreds of outstanding and undiscovered defects, and no real good way to set up a test environment with realistic data. It's also hard to release updates without disrupting users. Most of all, it's hard to do thorough QA with this process... and we need thorough testing with this legacy system. I feel like we can't really pull off anything as slick as the Git workflow outlined in the link. What's the way to do it?

    Read the article

  • Sencha Touch 2 / app workflow with navigation view

    - by eplatonov
    I am trying to understand how I can implement the same functionality that the navigation view provides in Sencha Touch 2, but... each item of the 'Ext.NavigationView' component should have its own unique set of 'navigationBar' elements - a set of buttons, for example. I know that I can do something like this:

      this.getMain().getNavigationBar().rightBox.removeAll();
      this.getMain().getNavigationBar().rightBox.add(this.getSettingButton());
      // 'getSettingButton' is a button predefined by me

    and do this each time a 'push' event happens (clear the 'navigationBar' and add the appropriate set of buttons). Of course, I could also implement an 'Ext.Panel' with 'layout: card' and a set of 'Ext.panel' elements in the 'items' property, each of which would have a unique 'toolbar', and use the 'setActiveItem' method to control the behavior. But I think each of these approaches is a bit weird, isn't it? I expected there would be a much more natural approach. Most likely I don't know what I need - confirm my doubts. What is the best way to do it?

    Read the article

  • SQL Server database change workflow best practices

    - by kubi
    The Background
    My group has 4 SQL Server databases: Production, UAT, Test, and Dev. I work in the Dev environment. When the time comes to promote the objects I've been working on (tables, views, functions, stored procs) I make a request of my manager, who promotes to Test. After testing, she submits a request to an Admin who promotes to UAT. After successful user testing, the same Admin promotes to Production.
    The Problem
    The entire process is awkward for a few reasons. Each person must manually track their changes. If I update, add, or remove any objects I need to track them so that my promotion request contains everything I've done. In theory, if I miss something, testing or UAT should catch it, but this isn't certain and it's a waste of the tester's time anyway. Lots of changes I make are iterative and done in a GUI, which means there's no record of what changes I made, only the end result (at least as far as I know). We're in the fairly early stages of building out a data mart, so the majority of the changes made, at least count-wise, are minor things: changing the data type for a column, altering the names of tables as we crystallize what they'll be used for, tweaking functions and stored procs, etc.
    The Question
    People have been doing this kind of work for decades, so I imagine there has got to be a much better way to manage the process. What I would love is if I could run a diff between two databases to see how the structure differs, use that diff to generate a change script, and use that change script as my promotion request. Is this possible? If not, are there any other ways to organize this process? For the record, we're a 100% Microsoft shop, just now updating everything to SQL Server 2008, so any tools available in that package would be fair game.

    Read the article

  • from ggplot2 to OOo workflow?

    - by Andreas
    This is not really a programming question, but I'll try here nonetheless. I once used LaTeX for my reports, but the people I work with need to make small edits and do not have LaTeX skillz. OpenOffice is then the way to go. But saving ggplot images with dpi = 100 makes for really ugly graphs, and dpi = 600 is a no-go (e.g. huge legend). So what to do? I currently save (still via ggsave) to EPS, which OpenOffice can import, but performance is not good at all. Googling, I found a bug report for the poor EPS performance in OOo, and also talk about a non-implemented SVG feature, but none of that helps me right now. If you work with ggplot2 and OOo, what do you do? I have been unsuccessful with PDF conversion for some reason.

    Read the article
