Search Results

Search found 1678 results on 68 pages for 'workflow'.

Page 11/68

  • GPG Workflow in 11.04

    - by Ross Bearman
    At work we handle the transfer of small bits of sensitive data with GPG, usually posted on a secure internal website. Until Firefox 4 was released, we used FireGPG for inline decryption; however, the IPC libraries it relied upon are no longer present in FF4, making it unusable, and it will no longer install in FF5. Currently I'm manually pasting the GPG blocks into a text file, then using the Nautilus context-menu plugin or the command line to decrypt the contents of the file. When we're handling large amounts of these small files throughout the day this starts to become a real chore. I've looked around but can't seem to find much information on useful GPG clients for Ubuntu. A client that allowed me to paste in a GPG block and instantly decrypt it, and also paste in plaintext and easily encrypt it for multiple recipients, would be ideal. So my question is: does this exist? I can't seem to find anything about this with obvious searches on Google, so hopefully someone here can help, or offer an alternative workflow.

    Read the article

  • Should I use a workflow engine?

    - by Fernando
    I need to add some new features to a PHP application. It is to follow the steps of an order: a process creates some orders, the order goes to confirmation, then if approved it is sent to a provider, later the provider confirms that it can deliver the order, a request is made to the provider, and so on... I need to register when every step is made and send notifications. Also, some steps have an estimated time, and if that time elapses I need to send notifications so everybody knows about the delay. When a process starts, it has a predefined set of steps, but in the middle the user should be able to create new sub-steps, and delete or skip future steps. Should I use a workflow engine? Which one do you suggest (free/open-source only)?

    Read the article

  • A declarative workflow does not start automatically after you install Windows SharePoint Services 3.0 Service Pack 1

    - by ira
    That's the title of kb947284, actually. Recently I was involved in a SharePoint project and ran into this problem where my workflow does not start automatically. If I run the workflow manually, it's just fine. I found kb947284. Apparently the cause of this problem is the installation of WSS 3 SP1. I did what it said in the Resolution section but it didn't work. The Resolution said to set the application pool account to use a different user account. How different does it need to be? What kind of user account will work? I changed the user account, but both the old and new accounts are in the administrators group for the server where SharePoint is installed. Oh! I also found a copy of kb947284, and one comment stated there is already a hotfix (kb956057). I've read the issues that it fixes but could not find anything about this workflow problem. Could anyone please tell me how it's supposed to be done? Thanks in advance.

    Read the article

  • Flexible design - customizable entity model, UI and workflow

    - by Ngm
    Hi All, I want to achieve the following aspects in the software I am building:

    1. Customizable entity model
    2. Customizable UI
    3. Customizable workflow

    I have thought about an approach to achieve this; I want you to review it and make suggestions:

    - Entity objects should be plain objects and will hold just data.
    - Separate the entity model and DB schema by using a framework (like NHibernate?). This will allow easy modification of entity objects.
    - Business logic to fetch/modify entities has to be granular enough that it can be invoked as part of the workflow. Business objects should not hold any state, and hence will contain only static methods.
    - The workflow will decide, depending upon the "state" of an entity/entities, which methods on which business object/objects to invoke. The workflow should obtain the results of the processing and then pass on the business objects to the appropriate UI screen.
    - The UI screen has to contain instructions about how to display a given entity/entities. Possibly the UI has to be generated dynamically based on a set of UI instructions (like XUL).

    What do you think about this approach? Suggest which existing frameworks (like NHibernate, Windows Workflow) fit into this model, so that I will not spend time on coding these frameworks. Also, is there any ASP.NET framework that can generate dynamic ASP.NET AJAX pages based on a set of UI instructions (like Mozilla XUL)? I have recently been exploring Apache OFBiz and was impressed by its ability to customize most areas of the application: UI, workflow, entities. Is there any similar (not necessarily an ERP system) application developed in C#/.NET which offers a similar level of customization? I am looking for examples of applications developed in C# that are highly customizable in terms of UI, workflow and entity model.

    Read the article

  • Can I use a specific model from within a behavior in CakePHP?

    - by Paul Willy
    I'm trying to write a behavior that will give my models access to a simple workflow engine I've devised. The workflow engine itself works as a CakePHP model, with workflow data stored in the database just as any other model data is stored. Basically what I want to do is have the behavior use the workflow model whenever an action is called on the base model. For example, if the edit() action is executed for Posts, then the Post (with the behavior attached) will trigger the workflow behavior with its own model name, action, and id as arguments (e.g. [Post, edit, 1]). Then the behavior will invoke the functionality of the Workflow model, which has a record for what to do when edit is run on Posts (e.g. send e-mail to users who are subscribed to that post) and will carry that out. My question is, what is the proper way to invoke model/controller methods from within the behavior? The model to be used from within the behavior will always be Workflow, but the behavior should be usable from basically any model (aside from Workflow itself). I know I could run SQL queries directly from the behavior, but of course this is not the Cake way :-) Or, am I going about this in the wrong way? I want to store a certain amount of logic in the database so that it is easily configurable by different users, and not have endless configuration checks within the model/controller logic itself so that workflow steps can be easily added/changed/removed in the future.

    Read the article

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction

    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.

    Hive Actions: Prepping for Pig

    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie.

    I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.

        CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2)
        PARTITIONED BY (yr string)
        STORED AS ...
        LOCATION '/user/oracle/weather/historic';

    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.

        ALTER TABLE historic_weather ADD IF NOT EXISTS
        PARTITION (yr='2011') LOCATION '/user/oracle/weather/historic/yr=2011';

        INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
        SELECT w.stn, w.wban, w.weather_year, w.weather_month,
               w.weather_day, w.temp, w.dewp, w.weather
        FROM (
          FROM historic_weather
          SELECT TRANSFORM(...)
          USING '/path/to/hive/filters/ncdc_parser.py'
          AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
        ) w;

    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql.

    Starting Our Workflow

    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. Coordinator jobs can take all the same actions as workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point:

        <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
          <start to="ParseNCDCData"/>
          <end name="end"/>
        </workflow-app>

    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.

        <action name="ParseNCDCData">
          <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <configuration>
              <property>
                <name>oozie.hive.defaults</name>
                <value>/user/oracle/weather_ooze/hive-default.xml</value>
              </property>
            </configuration>
            <script>ncdc_parse.hql</script>
          </hive>
          <ok to="WeatherMan"/>
          <error to="end"/>
        </action>

    There are a couple of things to note here:

    - I have to give the FQDN (or IP) and port of my JobTracker and NameNode.
    - I have to include a hive-default.xml file.
    - I have to include a script file.
    - The hive-default.xml and script file must be stored in HDFS.

    That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain:

        workflow.xml
        hive-default.xml (make sure this file contains your metastore connection data)
        ncdc_parse.hql

    Adding Pig to the Ooze

    Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:

        <action name="WeatherMan">
          <pig>
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <script>weather_train.pig</script>
          </pig>
          <ok to="end"/>
          <error to="end"/>
        </action>

    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes. While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding jars to the distributed cache from Oozie's Pig Cookbook.

    Making the Workflow Work

    We've got a workflow defined and have collected all the components we'll need to run. But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

        nameNode=hdfs://localhost:8020
        jobTracker=localhost:8021
        queueName=default
        weatherRoot=weather_ooze
        mapreduce.jobtracker.kerberos.principal=foo
        dfs.namenode.kerberos.principal=foo
        oozie.libpath=${nameNode}/user/oozie/share/lib
        oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
        outputDir=weather-ooze

    While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory.

    We're finally ready to submit our job! After all that work we only need to do a few more things:

    1. Validate our workflow.xml
    2. Copy our working directory to HDFS
    3. Submit our job to the Oozie server
    4. Run our workflow

    Let's do them in order. First validate the workflow:

        oozie validate workflow.xml

    Next, copy the working directory up to HDFS:

        hadoop fs -put working_dir /user/oracle/working_dir

    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument.

        oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

    We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output:

        14-20120525161321-oozie-oracle

    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job.

        oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run." This will prep and run the workflow immediately.

    Takeaway

    So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.

    Read the article

  • PHP remote development workflow: git, symfony and hudson

    - by user2022
    I'm looking to develop a website and all the work will be done remotely (no local dev server). The reason for this is that my shared hosting company a2hosting has a specific configuration (symfony, mysql, git) that I don't want to spend time duplicating when I can just ssh and develop remotely or through Netbeans remote editing features. My question is how I can use git to separate my site into three areas: live, staging and dev. Here's my initial thought:

    - public_html (live site and git repo)
    - testing: a mirror of the site used for visual tests (full git repo)
    - dev/ticket#: git branches of public_html used for features and bug fixes (full git repo)

    Version control with git. Initial setup:

        cd public_html
        git init
        git add *
        git commit -m 'initial commit of the site'
        cd ..
        git clone public_html testing
        mkdir dev

    Development:

        cd dev
        git clone ../testing ticket#

    All work is done in ./dev/ticket#, then visit www.domain.com/dev/ticket# to visually test. Make granular commits as necessary until dev is done, then:

        git push origin master:ticket#

    If the above fails, merge the latest testing state into the current dev work (git merge origin/master), then try the push again. Mark ticket# as ready for integration.

    Integration and deployment process:

        cd ../../testing
        git merge ticket# -m "integration test for ticket#" --no-ff

    (check for conflicts), run hudson tests, and visit www.domain.com/testing for a visual test. If all tests pass: if this ticket marks the end of a big dev sprint, make a snapshot with git tag and git push --tags origin, else git push origin; then cd ../public_html and git checkout -f (the live site should now have the latest dev from ticket#). Otherwise, revert the merge (git checkout master~1; git commit -m "reverting ticket#") and update ticket# that testing failed, with the failure details.

    Snapshots: each major deployment sprint should have a standard name and should be tracked. Method: git tag. Naming convention: TBD.

    Reverting the site to a previous state: if something goes wrong, revert to the previous snapshot and debug the issue in dev with a new ticket#. Once the bug is fixed, follow the deployment process again.

    My questions: Does this workflow make sense? If not, any recommendations? Is my approach for reverting correct, or is there a better way to say 'revert to before commit x'?

    Read the article

  • Try out the new free Oracle Data Miner 11gR2 with graphical workflow

    - by Fekete Zoltán
    The Oracle Data Mining technology information page. The Oracle Data Miner 11g Release 2 - Early Adopter page. Oracle Data Miner 11gR2, the new graphical interface for Oracle Data Mining, has been released and can be downloaded and tried out. To get Oracle Data Miner we simply need to download SQL Developer, since the data mining interface is launched from there. Oracle Data Mining is the data mining engine embedded in the Oracle database server, an option of Oracle Database Enterprise Edition. Data mining is a sophisticated tool and process for analyzing data warehouses. Oracle Data Mining delivers the advantages of in-database mining:

    - no unnecessary data movement; the data stays in the database throughout the entire data mining process
    - the data mining models also live in the Oracle database
    - the data mining results, cluster data, decisions, probabilities, etc. are likewise produced in the database and can be analyzed there directly

    The new free Data Miner interface has been greatly enriched compared to the previous version:

    - graphical data mining workflow editing and execution has arrived!
    - it is still free
    - the interface has been extended
    - it has gained new analysis capabilities
    - it can be launched from SQL Developer 3.0, which makes it easy to invoke the data mining functions from the database when we don't want to use the graphical interface for that

    The free Data Miner interface is available as an extension of Oracle SQL Developer, so analysts can work directly with the data in the database and with the Data Miner graphical interface as well: they can build, evaluate and run models, make predictions and analyses, with support for implementing the data mining methodology. The previous Oracle Data Miner interface now goes by the name Data Miner Classic and can still be downloaded from OTN. A screenshot from the new Data Miner GUI follows (not reproduced here). What kinds of tasks does Oracle Data Mining provide solutions for:

    - predicting customer behavior
    - effectively targeting the "best" customers
    - customer retention and churn management
    - finding and examining customer segments, clusters and profiles
    - detecting anomalies and fraud
    - etc.

    Read the article

  • Ruby workflow in Windows

    - by Rig
    I've done some searching and haven't quite come across the answer I am looking for. I do not think this is a duplicate of this question. I believe Windows could be a suitable development environment based on the mix of answers in that question. I have been developing in Ruby (mostly Rails but not entirely) for about a year now for personal projects on a MacBook Pro; however, that machine has faced an untimely death and has been replaced with a nice Windows 7 machine. Ruby development felt almost natural on the Mac after doing some research and setting up the typical stack. My environment then included the standard (Linux-like) stuff built into OS X, TextWrangler, Git, RVM, et al. Not too much of a deviation from what the 'devotees' tend to assume. Now I am setting up my new Windows box for continuing that development. What would my development environment look like? Should I just cave and run Linux in a VM? Ideally I would develop natively in Windows. I am aware of the Windows Ruby installer. It seems decent, but it's not exactly as nice as RVM in the OS X/Linux world. Mercurial/Git are available, so I would assume they play into the stack. Does one develop entirely in Windows? Does one run a webserver in a Linux VM and use it as a test bed while developing in Windows? Do it all in a VM? What does the standard Windows Ruby developer environment look like, and what is the workflow? What would a typical step-through be for adding a new feature to an ongoing project, and what would the technology stack look like?

    Read the article

  • Creating packages in code - Workflow

    This is just a quick one prompted by a question on the SSIS Forum: how to programmatically add a precedence constraint (aka workflow) between two tasks. To keep the code simple I've actually used two Sequence containers, which are often used as anchor points for a constraint. Very often this is when you have a task that you wish to conditionally execute based on an expression. If it's the first or only task in the package you need somewhere to anchor the constraint to, so you can then set the expression on it and control the flow of execution. Anyway, back to my code sample; here's a quick screenshot of the finished article (not reproduced here). Now for the code, which is actually pretty simple, and hopefully the comments should explain exactly what is going on.

        Package package = new Package();
        package.Name = "SequenceWorkflow";

        // Add the two sequence containers to provide anchor points for the constraint.
        // If you use tasks, it follows exactly the same pattern; they all derive from Executable.
        Sequence sequence1 = package.Executables.Add("STOCK:Sequence") as Sequence;
        sequence1.Name = "SEQ Start";
        Sequence sequence2 = package.Executables.Add("STOCK:Sequence") as Sequence;
        sequence2.Name = "SEQ End";

        // Add the precedence constraint; here we use the package's constraint collection
        // as it hosts the two objects we want to constrain (link).
        // The default constraint is a basic On Success constraint, just like in the designer.
        PrecedenceConstraint constraint = package.PrecedenceConstraints.Add(sequence1, sequence2);

        // Change the settings to use a (dummy) expression only
        constraint.EvalOp = DTSPrecedenceEvalOp.Expression;
        constraint.Expression = "1 == 1";

    The complete code file is available to download below. SequenceWorkflow.cs
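
    As a quick usage note: one way to verify the result, assuming you have the SSIS runtime assembly (Microsoft.SqlServer.ManagedDTS) referenced, is to save the generated package to disk and open it in the BIDS designer. The file path here is purely illustrative:

        // Persist the package built above to a .dtsx file for inspection in the designer.
        Microsoft.SqlServer.Dts.Runtime.Application app = new Microsoft.SqlServer.Dts.Runtime.Application();
        app.SaveToXml(@"C:\Temp\SequenceWorkflow.dtsx", package, null);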

    Read the article

  • What does your Lisp workflow look like?

    - by Duncan Bayne
    I'm learning Lisp at the moment, coming from a language progression that is Locomotive BASIC - Z80 Assembler - Pascal - C - Perl - C# - Ruby. My approach is to simultaneously: write a simple web-scraper using SBCL, QuickLisp, closure-html, and drakma; and watch the SICP lectures. I think this is working well; I'm developing good 'Lisp goggles', in that I can now read Lisp reasonably easily. I'm also getting a feel for how the Lisp ecosystem works, e.g. Quicklisp for dependencies. What I'm really missing, though, is a sense of how a seasoned Lisper actually works. When I'm coding for .NET, I have Visual Studio set up with ReSharper and VisualSVN. I write tests, I implement, I refactor, I commit. Then when I'm done enough of that to complete a story, I write some AUATs. Then I kick off a Release build on TeamCity to push the new functionality out to the customer for testing & hopefully approval. If it's an app that needs an installer, I use either WiX or InnoSetup, obviously building the installer through the CI system. So, my question is: as an experienced Lisper, what does your workflow look like? Do you work mostly in the REPL, or in the editor? How do you do unit tests? Continuous integration? Packaging & deployment? When you sit down at your desk, steaming mug of coffee to one side and a framed photo of John McCarthy to the other, what is it that you do? Currently, I feel like I am getting to grips with Lisp coding, but not Lisp development ...

    Read the article

  • Tomcat + Spring + CI workflow

    - by ex3v
    We're starting our very first project with Spring and the Java web stack. This project will mainly be about rewriting a quite large ERP/CRM from Zend Framework to Java. An important factor in my question is that I come from PHP territory, where things (in terms of quality) tend to look different than in the Java world. Facts:

    - there will be 2-3 developers,
    - at least one of the developers uses Windows, the rest use Linux,
    - there is one remote Linux-based machine, which should handle test and production instances,
    - after struggling with buggy legacy code, we want to introduce good programming and development practices (CI, tests, clean code and so on),
    - client: internal, frequent business logic changes, scrum, daily deployments.

    What I want to achieve is a good workflow across as many development stages as possible (coding - committing - testing - deploying). The problem is that I've never done this before, so I don't know what the best practices are. What I have so far is:

    - developers code locally,
    - there is a Vagrant instance on every development machine, managed by Puppet; it contains the same Linux, Jenkins and Tomcat versions as the production machine,
    - while coding, the developer deploys to the Vagrant machine,
    - after a local merge to the test branch, Jenkins on Vagrant handles the tests,
    - when everything is fine, the developer pushes commits and merges,
    - Jenkins on the remote machine pulls the commits from the test branch, runs the tests and so on,
    - if everything looks green, Jenkins deploys to the test Tomcat instance.

    Deployment to production is manual (although it can be done using helper scripts) once the business logic has been tested by other divisions and everything looks fine to the client. Now, the real question: does the above make any sense? Things that I'm not sure about:

    - Remote machine: won't there be any problems with two (or even three, as Jenkins might need one) instances of the same app on Tomcat?
    - Using Vagrant to develop in a PHP environment is just wise. Isn't it overkill with Tomcat? I mean, is there a higher probability that Tomcat will act the same on every machine?
    - Is there any sense in having a local Jenkins on Vagrant?

    Read the article

  • How to update the original InfoPath form from within Workflow?

    - by Allen.Zhang
    Hi All, I have created an InfoPath form (e.g. Form_ExpenseReport) to collect data from end users, and a number of task forms (also InfoPath forms, e.g. TaskForm_1, TaskForm_2) for use in my state machine workflow. The users want to see all the comments from the task forms (TaskForm_1 & TaskForm_2) in the original InfoPath form (Form_ExpenseReport). How can I update the first form from within the workflow? Can anybody give me some tips? My environment: MOSS 2007, Enterprise license, VS 2008
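
    Not an authoritative answer, but one common approach is to have the workflow write the task comments back to the originating form item whenever a task changes. A minimal C# sketch, assuming the usual VS 2008 state machine workflow members (workflowProperties, an OnTaskChanged activity bound to taskAfterProperties); the field names "Comments" and "AllTaskComments" are assumptions:

        // Sketch: append each task's comment to a column on the item the workflow runs on.
        // "Comments" and "AllTaskComments" are hypothetical field names.
        private void onTaskChanged_Invoked(object sender, ExternalDataEventArgs e)
        {
            string taskComment = taskAfterProperties.ExtendedProperties["Comments"] as string;
            if (!string.IsNullOrEmpty(taskComment))
            {
                SPListItem item = workflowProperties.Item; // the original Form_ExpenseReport item
                item["AllTaskComments"] = item["AllTaskComments"] + "\n" + taskComment;
                item.SystemUpdate(); // update without bumping the version or firing alerts
            }
        }

    For the comments to appear inside Form_ExpenseReport itself, that column would still need to be surfaced in the form, e.g. as a promoted field or via a secondary data connection to the form library.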

    Read the article

  • Do MOSS 2007 workflows support calling external methods?

    - by Mina Samy
    Hi all, I have a custom SharePoint workflow that needs to call an external method defined in a local service. It always throws an exception:

        System.InvalidOperationException: Could not find service of type 'ListItemCheckService.IListItemCheck' through the currently configured services. Consider adding the service to ExternalDataExchangeService.
          at System.Workflow.Activities.CallExternalMethodActivity.Execute(ActivityExecutionContext executionContext)
          at System.Workflow.ComponentModel.ActivityExecutor`1.Execute(T activity, ActivityExecutionContext executionContext)
          at System.Workflow.ComponentModel.ActivityExecutor`1.Execute(Activity activity, ActivityExecutionContext executionContext)
          at System.Workflow.ComponentModel.ActivityExecutorOperation.Run(IWorkflowCoreRuntime workflowCoreRuntime)
          at System.Workflow.Runtime.Scheduler.Run()

    The question is: does the SharePoint workflow system support calling external methods from a local service? Thanks
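
    For what it's worth, that exception usually means no ExternalDataExchangeService knows about the interface. In a self-hosted WF 3.x runtime the registration looks like the sketch below; SharePoint hosts its own workflow runtime and does not load your local service this way out of the box, which is why calling the service directly from a Code activity is a common MOSS 2007 workaround. A sketch of the generic (non-SharePoint) pattern:

        // Generic WF 3.x local service registration (self-hosted runtime, not MOSS).
        using System.Workflow.Activities;
        using System.Workflow.Runtime;

        [ExternalDataExchange]
        public interface IListItemCheck
        {
            bool CheckItem(string itemId);
        }

        public class ListItemCheckService : IListItemCheck
        {
            public bool CheckItem(string itemId) { return true; } // stub implementation
        }

        public static class WorkflowHost
        {
            public static WorkflowRuntime Configure()
            {
                WorkflowRuntime runtime = new WorkflowRuntime();
                ExternalDataExchangeService exchange = new ExternalDataExchangeService();
                runtime.AddService(exchange);
                // This is the step the exception is complaining about:
                exchange.AddService(new ListItemCheckService());
                return runtime;
            }
        }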

    Read the article

  • Any recommendations for implementing a user-defined workflow in Ruby?

    - by midas06
    I'm interested in creating a system where the user can define the steps in a workflow. Is there a gem that already handles this? I thought about one of the state machine gems, but they all seem to be for pre-defined states. I've been thinking maybe I can use a state machine for the individual step types... An email step could have a few states [New, Assigned, Done], and the workflow could just be lists of these stateful steps. Are there other solutions out there?

    Read the article

  • Implementing a post-notification function to perform custom validation

    - by Alejandro Sosa
    Introduction

    The Oracle Workflow Notification System can be extended to perform extra validation or processing via PL/SQL procedures when a notification is being responded to. These PL/SQL procedures are called post-notification functions, since they are executed after a notification action such as Approve, Reject, Reassign or Request Information is performed. The standard signature for a post-notification function is:

        procedure <procedure_name> (itemtype  in varchar2,
                                    itemkey   in varchar2,
                                    actid     in varchar2,
                                    funcmode  in varchar2,
                                    resultout in out nocopy varchar2);

    Modes

    The post-notification function provides the parameter 'funcmode', which will have the following values:

    - 'RESPOND', 'VALIDATE' and 'RUN' when a notification is responded to (Approve, Reject, etc.)
    - 'FORWARD' for a notification being forwarded to another user
    - 'TRANSFER' for a notification being transferred to another user
    - 'QUESTION' for a request for more information from one user to another
    - 'ANSWER' for a response to a request for more information
    - 'TIMEOUT' for a timed-out notification
    - 'CANCEL' when the notification is being re-executed in a loop

    Context Variables

    Oracle Workflow provides different context information, corresponding to the current notification being acted upon, to the post-notification function:

    - WF_ENGINE.context_nid - the notification ID
    - WF_ENGINE.context_new_role - the new role to which the action on the notification is directed
    - WF_ENGINE.context_user_comment - comments appended to the notification
    - WF_ENGINE.context_user - the user who is responsible for taking the action that updated the notification's state
    - WF_ENGINE.context_recipient_role - the role currently designated as the recipient of the notification. This value may be the same as the value of the WF_ENGINE.context_user variable, or it may be a group role of which the context user is a member.
    - WF_ENGINE.context_original_recipient - the role that has ownership of and responsibility for the notification. This value may differ from the value of the WF_ENGINE.context_recipient_role variable if the notification has previously been reassigned.

    Example

    Let us assume there is an EBS transaction that can only be approved by certain people, thus any attempt to transfer or delegate such a notification should be allowed only to users SPIERSON or CBAKER. The way to implement this functionality would be as follows:

    1. Edit the corresponding workflow definition in Workflow Builder and open the notification.

    2. In the Function Name, enter the name of the procedure where the custom code is handled, for instance TEST_PACKAGE.Post_Notification.

    3. In PL/SQL create the corresponding package TEST_PACKAGE with a procedure named Post_Notification, as follows:

        procedure Post_Notification (itemtype  in varchar2,
                                     itemkey   in varchar2,
                                     actid     in varchar2,
                                     funcmode  in varchar2,
                                     resultout in out nocopy varchar2) is
          l_count number;
        begin
          if funcmode in ('TRANSFER','FORWARD') then
            select count(1) into l_count
            from WF_ROLES
            where WF_ENGINE.context_new_role in ('SPIERSON','CBAKER');
            --and/or any other conditions
            if l_count<1 then
              WF_CORE.TOKEN('ROLE', WF_ENGINE.context_new_role);
              WF_CORE.RAISE('WFNTF_TRANSFER_FAIL');
            end if;
          end if;
        end Post_Notification;

    4. Launch the workflow process with the changed notification and attempt to reassign or transfer it. When trying to reassign the notification to user CBROWN, the screen would look like the screenshot below (not reproduced here).

    Check the Workflow API Reference Guide, section Post-Notification Functions, to see all the standard, seeded WF_ENGINE variables available for extending notification processing.

    Read the article

  • Best PerfCounters for monitoring system health of IIS, WCF, WWF and .Net for a Workflow based solution

    - by Gineer
    We have a solution built in .NET that will be installed into a client environment. The solution will span multiple servers and run on multiple tiers. The client makes use of MOM (Microsoft Operations Manager) to monitor the system. What are the best counters to use for monitoring the overall health of the system? Are there any built-in counters that we could add into a MOM pack (as an alert) to test a given scenario? Any thoughts/suggestions would be much appreciated. Thanks
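
    Not a definitive list, but a reasonable starting set is the stock Windows/.NET counters sampled in the C# sketch below; the category and counter names are the standard built-in ones, while the particular selection is just a suggestion:

        // Sample a few broadly useful counters for an IIS/.NET workflow host.
        using System;
        using System.Diagnostics;
        using System.Threading;

        class HealthProbe
        {
            static void Main()
            {
                PerformanceCounter[] counters =
                {
                    new PerformanceCounter("Processor", "% Processor Time", "_Total"),
                    new PerformanceCounter("Memory", "Available MBytes"),
                    new PerformanceCounter("ASP.NET", "Requests Queued"),
                    new PerformanceCounter(".NET CLR Exceptions", "# of Exceps Thrown / sec", "_Global_")
                };

                foreach (PerformanceCounter c in counters)
                    c.NextValue();          // first call primes the counter

                Thread.Sleep(1000);         // sampling interval

                foreach (PerformanceCounter c in counters)
                    Console.WriteLine("{0}\\{1}: {2:F1}", c.CategoryName, c.CounterName, c.NextValue());
            }
        }

    Since these are all built-in counters, MOM can collect them directly, so they can go straight into a management pack rule or alert threshold rather than through custom code.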

    Read the article

  • How to get notification of workflow errors?

    - by Greg McGuffey
    I am having issues where a workflow is stalled because there is an issue with sending an email (send email activity). Typically, this is simply solved by resuming the workflow. I'm wondering if there is any way to react to a workflow error, so the user knows they need to go in and resume the workflow. I'm also wondering about this relative to a workflow that is attempting to assign a task to a user who no longer exists in the CRM, or one that has an invalid email address, which I'm assuming would cause errors in workflows as well. Any other suggestions related to this sort of issue would be welcome. Thanks!
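
    One hedged option, sketched below against the CRM 4.0 SDK, is to poll the asyncoperation entity for failed workflow jobs and notify someone. The status code value used (31 = Failed) is the standard asyncoperation status, but treat this as a sketch under those assumptions rather than a drop-in answer:

        // Sketch (CRM 4.0 SDK): find failed async operations so users can be told
        // to resume or retry the stalled workflows. Assumes an authenticated CrmService.
        using Microsoft.Crm.Sdk;
        using Microsoft.Crm.Sdk.Query;
        using Microsoft.Crm.SdkTypeProxy;

        public static class WorkflowErrorMonitor
        {
            public static BusinessEntityCollection GetFailedJobs(CrmService service)
            {
                QueryExpression query = new QueryExpression("asyncoperation");

                ColumnSet cols = new ColumnSet();
                cols.Attributes = new string[] { "name", "message", "startedon" };
                query.ColumnSet = cols;

                // statuscode 31 = Failed for asyncoperation records
                ConditionExpression failed = new ConditionExpression();
                failed.AttributeName = "statuscode";
                failed.Operator = ConditionOperator.Equal;
                failed.Values = new object[] { 31 };

                FilterExpression filter = new FilterExpression();
                filter.Conditions = new ConditionExpression[] { failed };
                query.Criteria = filter;

                return service.RetrieveMultiple(query);
            }
        }

    The results could then feed an email or dashboard so users know which workflows need resuming.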

    Read the article

  • WorkFlow and WCF: dynamically launching workflows

    - by Raj73
    I have a WF workflow which will be hosted in WCF. The service contract will contain a single operation taking two parameters. Parameter1 will be a string and will contain the name of the workflow to invoke, and Parameter2 will contain the input for the invoked workflow. All operations will take the same parameters and return the same return value. I have created the service implementation, and I would like to, depending on the value of Parameter1, start executing the appropriate workflow and return the value. (There can be a number of workflow classes, say Operation1, Operation2..., whose names will be passed in as the value of Parameter1.) How can I instantiate the different workflow classes, pass parameters to them, and get the return values from them, which I should then pass back to the calling client? (Also, should I be using ReceiveActivities in all of my launchable workflow classes?) Any code samples or pointers would help.
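
    One possible shape for this, assuming WF 3.5 with the runtime hosted by your WCF service rather than through ReceiveActivity/WorkflowServiceHost: resolve the workflow type from Parameter1, start the instance through the runtime, and run it synchronously with ManualWorkflowSchedulerService. The "MyWorkflows" naming convention and the "Input"/"Result" parameter names below are assumptions:

        // Sketch: launch a workflow type chosen at runtime from Parameter1.
        using System;
        using System.Collections.Generic;
        using System.Workflow.Runtime;
        using System.Workflow.Runtime.Hosting;

        public class WorkflowLauncher
        {
            private readonly WorkflowRuntime runtime;
            private readonly ManualWorkflowSchedulerService scheduler;
            private string result;

            public WorkflowLauncher()
            {
                runtime = new WorkflowRuntime();
                scheduler = new ManualWorkflowSchedulerService();
                runtime.AddService(scheduler);
                // "Result" is a hypothetical public property on each workflow class.
                runtime.WorkflowCompleted += (s, e) =>
                    result = e.OutputParameters["Result"] as string;
                runtime.StartRuntime();
            }

            public string Launch(string workflowName, string input)
            {
                // Hypothetical naming convention: "Operation1" -> MyWorkflows.Operation1
                Type workflowType = Type.GetType("MyWorkflows." + workflowName, true);
                var parameters = new Dictionary<string, object> { { "Input", input } };

                WorkflowInstance instance = runtime.CreateWorkflow(workflowType, parameters);
                instance.Start();
                scheduler.RunWorkflow(instance.InstanceId); // runs the instance on this thread
                return result;
            }
        }

    With this approach the workflows are plain workflow classes with Input and Result properties and no ReceiveActivity is needed; ReceiveActivity only comes into play if you instead host each workflow itself as a WCF service via WorkflowServiceHost.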

    Read the article

  • What IDE setup and workflow is used for OSGi development?

    - by Falx
    I made quite a few easy OSGi test projects in Eclipse RCP. My typical workflow would always be:

    - Make 3 different projects: APIproject, Clientproject and Serverproject
    - Edit the MANIFEST.MF of APIproject to export the api package
    - Edit the MANIFEST.MF files of Clientproject and Serverproject to add the required API package
    - Choose "Run as..." -> "Plugin Framework"
    - The OSGi console starts in Eclipse and everything seems to work

    I also tried wiring things by using Declarative Services, which worked well like this too. Now recently I wanted to try out iPOJO. The problem is that I get the feeling that I've been doing my OSGi development the wrong way. Could it be that I should instead make one project and make it work as if no OSGi were involved, and then afterwards just export each package to its own bundle by means of (for instance) the BNDL tool? Should development be done in a normal Eclipse (Java, not RCP) or any other Java IDE for that matter? So that's why I have these questions: What IDE setup is normally used to develop OSGi with iPOJO? And what is the normal workflow to be used when developing OSGi projects (maybe with iPOJO)?

    Read the article

  • How would you structure your workflow for a web application?

    - by cx42net
    Hi! When designing a web application (or something else), it's good to have a workflow, and it's better to have a well-ordered one. Starting with this idea in mind, I'd like to know what your process is, from having an idea to maintaining a great working project. For me, actually, the process is the following:

    1. Having the idea
    2. Checking whether the project already exists, and how it works
    3. Describing its functionality on paper
    4. Finding a good and adequate name for it (and checking the domain availability with WHOISMyProject)
    5. Making a quick layout of the project on paper
    6. Designing the project (via The Gimp, Photoshop, etc.)
    7. Making a complete mockup of each page
    8. Developing a prototype of the client-side application (with fake data)
    9. Developing the server side
    10. Testing
    11. Making the documentation/help/FAQ
    12. Releasing the project
    13. Maintaining it

    Would you change the order of some points? Add or remove some? I would be pleased to know how you do it. I'm looking to set up a perfect workflow in order to make my project become real in the best way possible. Thank you for your opinion!

    Read the article
