Search Results

Search found 23901 results on 957 pages for 'deployment process'.


  • Process AJAX response with long running tasks

    - by mpz
    I have a long-running task in a controller action and I use delayed_job for it. (On Heroku this is also good practice for performance: the dyno should only be busy for a short time in each request.) But my client needs the result of that work, and users can wait for it. To be clear: there are no additional models or records involved, just a simple view and some JS. The approach I have in mind is: (1) the client runs an AJAX request with a very long timeout (5 minutes, for example); (2) the client makes the request to the server as usual; (3) the controller action1 (def start_work, which sets up the delayed job) sends no response to the client; (4) after the work is performed (the delayed job finishes), a new action2 runs and responds to the client; (5) the client receives the response after about 1-5 minutes. Is this possible?

    Read the article

  • Get return values from a stored procedure in C# (login process)

    - by Jin
    Hi all, I am trying to use a stored procedure which takes two parameters (login, pw) and returns the user info. If I execute the SP manually, I get:

        Session_UID  User_Group_Name  Sys_User_Name
        -----------  ---------------  -------------
        NULL         Administrators   NTMSAdmin

        No rows affected. (1 row(s) returned)
        @RETURN_VALUE = 0
        Finished running [dbo].[p_SYS_Login].

    But with the code below, I only get the return value. Do you know how to get the other values shown above (Session_UID, User_Group_Name, and Sys_User_Name)? As you can see from the commented parts of the code, I tried to add some output parameters, but that fails with an "incorrect number of parameters" error.

        string strConnection = Settings.Default.ConnectionString;
        using (SqlConnection conn = new SqlConnection(strConnection))
        {
            using (SqlCommand cmd = new SqlCommand())
            {
                SqlDataReader rdr = null;
                cmd.Connection = conn;
                cmd.CommandText = "p_SYS_Login";
                //cmd.CommandText = "p_sys_Select_User_Group";
                cmd.CommandType = CommandType.StoredProcedure;

                SqlParameter paramReturnValue = new SqlParameter();
                paramReturnValue.ParameterName = "@RETURN_VALUE";
                paramReturnValue.SqlDbType = SqlDbType.Int;
                paramReturnValue.SourceColumn = null;
                paramReturnValue.Direction = ParameterDirection.ReturnValue;

                //SqlParameter paramGroupName = new SqlParameter("@User_Group_Name", SqlDbType.VarChar, 50);
                //paramGroupName.Direction = ParameterDirection.Output;
                //SqlParameter paramUserName = new SqlParameter("@Sys_User_Name", SqlDbType.VarChar, 50);
                //paramUserName.Direction = ParameterDirection.Output;

                cmd.Parameters.Add(paramReturnValue);
                //cmd.Parameters.Add(paramGroupName);
                //cmd.Parameters.Add(paramUserName);
                cmd.Parameters.AddWithValue("@Sys_Login", textUserID.Text);
                cmd.Parameters.AddWithValue("@Sys_Password", textPassword.Text);

                try
                {
                    conn.Open();
                    object result = cmd.ExecuteNonQuery();
                    int returnValue = (int)cmd.Parameters["@RETURN_VALUE"].Value;
                    if (returnValue == 0)
                    {
                        Hide();
                        Program.MapForm.Show();
                    }
                    else if (returnValue == 1)
                    {
                        MessageBox.Show("The username or password you entered is incorrect", "NTMS Login", MessageBoxButtons.OK, MessageBoxIcon.Warning);
                    }
                    else if (returnValue == 2)
                    {
                        MessageBox.Show("This account is disabled", "NTMS Login", MessageBoxButtons.OK, MessageBoxIcon.Warning);
                    }
                    else
                    {
                        MessageBox.Show("Database error. Please contact administrator", "NTMS Login", MessageBoxButtons.OK, MessageBoxIcon.Warning);
                    }
                }
                catch (Exception ex)
                {
                    string message = ex.Message;
                    string caption = "MAVIS Exception";
                    MessageBoxButtons buttons = MessageBoxButtons.OK;
                    MessageBox.Show(message, caption, buttons, MessageBoxIcon.Warning, MessageBoxDefaultButton.Button1);
                }
            }
        }

    Thanks for your help.

    Read the article

  • Visual Studio: attaching to a process in debug mode

    - by user1612986
    I have a strange problem. The DLL that I built (let's call it my.dll) in C++ with Visual Studio 2010 uses a third-party library (say tp.lib), which in turn calls a third-party DLL (say tp.dll). For debugging purposes I have set Configuration Properties > Debugging > Command to Excel.exe and Configuration Properties > Debugging > Command Arguments to "$(TargetPath)". On my computer I have also added the directory where tp.dll resides to the PATH variable. Now when I hit F5 in Visual Studio, Excel opens with my.dll and crashes with a "cannot open in DOS mode" error. This happens because tp.dll is not deployed when the debug version of my.dll is deployed. When I open a separate instance of Excel and manually drop in the debug version of my.dll, everything works fine and I can see all the functions I wrote in my.dll. The only issue is that I now do not know how to debug, because I do not know how to attach Visual Studio to the instance of Excel I opened separately. My questions are: (1) how can I attach Visual Studio to an already opened instance of Excel, or (2) how can I hit F5 and still make Excel pick up the required tp.dll from the directory specified in the PATH variable before it deploys my.dll? Either of these would let me step through the code for debugging. Thanks in advance.

    Read the article

  • Simple servlet or filter to process form

    - by David
    Is there a simple framework for processing form submissions via a servlet? For my needs, a framework like Struts seems like overkill. My ideal processor would be a servlet that converts form elements into a bean object, possibly using type information in the form to help with the conversion. Does something like this exist, or is there another solution out there geared toward simpler needs? Thanks!
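
    One common lightweight approach is to bind request parameters onto a bean with Apache Commons BeanUtils. Below is a minimal sketch, not a specific framework recommendation; the FormToBeanServlet class, the UserForm bean, and its field names are hypothetical, and it assumes commons-beanutils is on the classpath:

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.apache.commons.beanutils.BeanUtils;

        public class FormToBeanServlet extends HttpServlet {

            // Hypothetical form-backing bean; property names must match the form's input names.
            public static class UserForm {
                private String name;
                private int age;
                public String getName() { return name; }
                public void setName(String name) { this.name = name; }
                public int getAge() { return age; }
                public void setAge(int age) { this.age = age; }
            }

            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                UserForm form = new UserForm();
                try {
                    // Copies request parameters onto matching bean properties and
                    // converts the raw strings to the target types (int here).
                    BeanUtils.populate(form, req.getParameterMap());
                } catch (ReflectiveOperationException e) {
                    throw new ServletException("Could not bind form to bean", e);
                }
                resp.getWriter().println("Hello " + form.getName() + ", age " + form.getAge());
            }
        }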

    Read the article

  • Process requires redirected input

    - by initialZero
    I have a native UNIX executable that requires its input to be fed in like this: prog.exe < foo.txt. foo.txt has two lines: bar and baz. I am using java.lang.ProcessBuilder to execute this command. Unfortunately, prog.exe will only work with the redirect from a file. Is there some way I can mimic this behavior in Java? Of course, ProcessBuilder pb = new ProcessBuilder("prog.exe", "bar", "baz"); does not work. Thanks!
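
    A sketch of two common ways to do this, assuming prog.exe and foo.txt are as described above. On Java 7+ ProcessBuilder can redirect stdin from the file directly; alternatively you can write the lines to the process's stdin yourself and close the stream so the program sees end-of-file:

        import java.io.File;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.nio.charset.StandardCharsets;

        public class RedirectStdinExample {
            public static void main(String[] args) throws IOException, InterruptedException {
                // Option 1 (Java 7+): the equivalent of "prog.exe < foo.txt" on the shell.
                Process p1 = new ProcessBuilder("prog.exe")
                        .redirectInput(new File("foo.txt"))
                        .redirectOutput(ProcessBuilder.Redirect.INHERIT)
                        .start();
                p1.waitFor();

                // Option 2: feed stdin by hand, then close it so prog.exe sees EOF.
                Process p2 = new ProcessBuilder("prog.exe").start();
                try (OutputStream stdin = p2.getOutputStream()) {
                    stdin.write("bar\nbaz\n".getBytes(StandardCharsets.UTF_8));
                }
                p2.waitFor();
            }
        }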

    Read the article

  • POP3 Transmission Process

    - by j-t-s
    Hi All I was wondering if anyone could help me out (not with code, although that would be appreciated), with the logic behind checking and retrieving messages from a POP3 mail server. I.e. Establish connection Validate credentials Enumerate message list Check each message to see if it's "new" Download "new" message(s). Would this be the correct way about doing this? Thank you
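
    That sequence is essentially what mail libraries do. As an illustration, here is a minimal JavaMail sketch of the same steps; the host, user name, and password are placeholders, and it assumes POP3 over SSL. One caveat: plain POP3 has no server-side "seen" flag, so "new" is usually tracked on the client, for example by remembering the UIDs of messages already downloaded:

        import java.util.Properties;
        import javax.mail.Folder;
        import javax.mail.Message;
        import javax.mail.MessagingException;
        import javax.mail.Session;
        import javax.mail.Store;

        public class Pop3Check {
            public static void main(String[] args) throws MessagingException {
                Properties props = new Properties();
                props.put("mail.store.protocol", "pop3s");

                Session session = Session.getInstance(props);
                Store store = session.getStore();                   // 1. prepare the connection
                store.connect("pop.example.com", "user", "secret"); // 2. connect and validate credentials

                Folder inbox = store.getFolder("INBOX");            // POP3 exposes only INBOX
                inbox.open(Folder.READ_ONLY);

                Message[] messages = inbox.getMessages();           // 3. enumerate the message list
                for (Message m : messages) {
                    // 4./5. decide whether the message is "new" (e.g. via UID bookkeeping)
                    // and download its content if so.
                    System.out.println(m.getMessageNumber() + ": " + m.getSubject());
                }

                inbox.close(false);
                store.close();
            }
        }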

    Read the article

  • Forcing an event not to be processed

    - by BDotA
    C# WinForms: the main form has a key binding for Ctrl+V, so anywhere in the main app pressing Ctrl+V runs something. Good. But there are also MDI forms that open inside this main app, and in one of them there is a text box. Now Ctrl+V also has a meaning for the text box, namely "Paste". So I added PreviewKeyDown to the text box and handled it; it now pastes, BUT it is ALSO triggering the main Ctrl+V key binding that I defined for the whole app, and I do not want that to happen. What can I do? (I cannot change the key binding of the main app; I must keep it.)

    Read the article

  • Communicate with a process in UTF-8 from a cp1252 console

    - by Mapad
    I need to control a program by sending commands in UTF-8 encoding to its standard input. For this I run the program using subprocess.Popen():

        proc = Popen("myexecutable.exe", shell=True, stdin=PIPE, stdout=PIPE, stderr=PIPE)
        proc.stdin.write(u'ééé'.encode('utf_8'))

    If I run this from a Cygwin UTF-8 console, it works. If I run it from a Windows console (encoding = 'cp1252'), it doesn't. Is there a way to make this work without having to install a Cygwin UTF-8 console on each computer I want it to run on? (NB: I don't need to output anything to the console.)

    Read the article

  • Access location variable in another process

    - by user295944
    I have a locationManager in my main view controller, HomeViewController, where I get the latitude and longitude and store them in LAT and LON. I can use LAT and LON in HomeViewController just fine. Once I go to another controller, which sends pictures and sets properties on Flickr, I want to use LAT and LON there; how do I do this? I tried HomeViewController.LAT, but obviously that does not work. I'm pretty new to the language, so I am confused.

    Read the article

  • Unable to return/process JSON in jQuery $.get()

    - by shyam
    I have a problem returning/processing JSON data when calling the $.get() function. Here's the code.

    jQuery:

        $.get("script.php?", function(data) {
            if (data.status) {
                alert('ok');
            } else {
                alert(data.error);
            }
        }, 'json');

    PHP:

        if ($r) {
            $ret = array("status" => true);
        } else {
            $ret = array("status" => false, "error" => $error);
        }
        echo json_encode($ret);

    So that's the code, but the response is always treated as a string in jQuery; data.status and data.error are undefined.

    Read the article

  • Read, parse and process JSON (Java)

    - by mac
    Guys, a simple situation: read a JSON file, discover all key-value pairs, and compare the key-value pairs. I tried Gson and the package from json.org, but can't seem to get far with either. Can someone please provide a clear sample in Java of how to take a file, read it, and end up with a JSON object I can get key/value pairs from? Thanks so much.
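
    For illustration, here is a minimal Gson sketch of that flow. It assumes a recent Gson (2.8.6 or later, which has the static JsonParser.parseReader) and a file data.json whose top level is a JSON object; the file name is a placeholder:

        import java.io.FileReader;
        import java.io.IOException;
        import java.io.Reader;
        import java.util.Map;
        import com.google.gson.JsonElement;
        import com.google.gson.JsonObject;
        import com.google.gson.JsonParser;

        public class JsonPairs {
            public static void main(String[] args) throws IOException {
                try (Reader reader = new FileReader("data.json")) {
                    // Parse the whole file into a tree and treat the root as an object.
                    JsonObject root = JsonParser.parseReader(reader).getAsJsonObject();

                    // Walk every top-level key/value pair; comparing two files would
                    // just mean doing this for both and diffing the resulting maps.
                    for (Map.Entry<String, JsonElement> entry : root.entrySet()) {
                        System.out.println(entry.getKey() + " = " + entry.getValue());
                    }
                }
            }
        }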

    Read the article

  • process the data after using str.split

    - by juju
    I parse a .txt file like this:

        def parse_file(src):
            for line in src.readlines():
                if re.search('SecId', line):
                    continue
                else:
                    cols = line.split(',')
                    Time = cols[4]
                    output_file.write('{}\n'.format(Time))

    I think cols is a list that I can index into. Although it succeeds in printing out the correct result, I also get an out-of-range error. What's the matter?

        File "./tdseq.py", line 37, in parse_file
          Time = cols[4]
        IndexError: list index out of range
        make: *** [all] Error 1

    The data I use:

        I10.FE,--,2008-04-16,15:15:00,13450,13488,13450,13470,490,359,16APR2008:09:15:00
        I10.FE,--,2008-04-16,15:16:00,13468,13473.8,13467,13467,306,521,16APR2008:09:16:00
        ....

    Read the article

  • SSAS Compare version 1.0 released

    - by Red Gate Software BI Tools Team
    We’re pleased to announce that SSAS Compare version 1.0 has been released as a free tool. Version 1.0 includes: comparisons of live databases and XMLA or Analysis Services Project files; MDX syntax diffs and highlighting; server comparisons; a deployment wizard with summaries of scripted actions; and bug fixes plus engine and UI refinements.

    We’ve tested it on as many cube configurations as we could find (not just good old AdventureWorks!), but we can’t provide support for free tools, so if you’re reliant on SSAS Compare for your cube deployment, use it at your own risk. See the user license agreement in the installer for more details.

    SSAS Compare has come a long way from its humble beginnings as an internal tool first developed for Red Gate’s own BI developers. Today’s SSAS Compare is much more stable, not to mention much easier to use, and something the team is proud to have released with Red Gate’s name on it.

    Next: Deployment Manager. We’re working on integrating SSAS Compare cube deployment with our new Deployment Manager tool, so you’ll be able to create cube deployment scripts and automate the deployment process, too. We’re documenting the process in a white paper we’ll publish online in the next week.

    Thank you! Thanks to all the SSAS Compare users out there. Without your feedback, we could never have produced such a stable product so quickly. We hope you continue to find it useful. See you in Deployment Manager!

    Read the article

  • iPhone reachability checking

    - by Sneakyness
    I've found several examples of code to do what I want (check for reachability), but none of them seems exact enough to be of use to me, and I can't figure out why this doesn't want to play nice. I have Reachability.h/.m in my project, I'm doing

        #import <SystemConfiguration/SystemConfiguration.h>

    and I have the framework added. I also have

        #import "Reachability.h"

    at the top of the .m in which I'm trying to use the reachability code.

        Reachability* reachability = [Reachability sharedReachability];
        [reachability setHostName:@"http://www.google.com"]; // set your host name here
        NetworkStatus remoteHostStatus = [reachability remoteHostStatus];
        if (remoteHostStatus == NotReachable) { NSLog(@"no"); }
        else if (remoteHostStatus == ReachableViaWiFiNetwork) { NSLog(@"wifi"); }
        else if (remoteHostStatus == ReachableViaCarrierDataNetwork) { NSLog(@"cell"); }

    This is giving me all sorts of problems. What am I doing wrong? I'm an alright coder; I just have a hard time figuring out what needs to be put where to enable what I want to do. (So frustrating.)

    Update: This is what's going on. The code above is in my view controller, which has the #import <SystemConfiguration/SystemConfiguration.h> and #import "Reachability.h" set up. This is my least favorite part of programming by far. FWIW, we never ended up implementing this in our code. The two features that required internet access (entering the sweepstakes and buying the DVD) were not main features, and nothing else required internet access. Instead of adding more code, we just set the background of both internet views to a notice telling users they must be connected to the internet to use the feature. It was in theme with the rest of the application's interface and was done well/tastefully. They said nothing about it during the approval process; however, we did get a personal phone call to verify that we were giving away items that actually pertained to the movie. According to their usually vague agreement, you aren't allowed to have sweepstakes otherwise. I would also think this adheres more strictly to their "only use things if you absolutely need them" ideology. Here's the iTunes link to the application: EvoScanner.

    Read the article

  • How to get the full result from asynchronous communication?

    - by rima
    Hi all, referring to these posts: here1 and here2. I finally solved my problem by building an asynchronous solution, and it works well. But there is a problem I face with it. My code now looks like this:

        class MyProcessStarter
        {
            private Process process;
            private StreamWriter myStreamWriter;
            private static StringBuilder shellOutput = null;

            public String GetShellOutput { get { return shellOutput.ToString(); } }

            public MyProcessStarter()
            {
                shellOutput = new StringBuilder("");
                process = new Process();
                process.StartInfo.FileName = "sqlplus";
                process.StartInfo.UseShellExecute = false;
                process.StartInfo.CreateNoWindow = true;
                process.OutputDataReceived += new DataReceivedEventHandler(ShellOutputHandler);
                process.StartInfo.RedirectStandardInput = true;
                process.StartInfo.RedirectStandardOutput = true;
                //process.StartInfo.RedirectStandardError = true;
                process.Start();
                myStreamWriter = process.StandardInput;
                process.BeginOutputReadLine();
            }

            private static void ShellOutputHandler(object sendingProcess, DataReceivedEventArgs outLine)
            {
                if (!String.IsNullOrEmpty(outLine.Data))
                    shellOutput.Append(Environment.NewLine + outLine.Data);
            }

            public void closeConnection()
            {
                myStreamWriter.Close();
                process.WaitForExit();
                process.Close();
            }

            public void RunCommand(string arguments)
            {
                myStreamWriter.WriteLine(arguments);
                myStreamWriter.Flush();
                process.WaitForExit(100);
                Console.WriteLine(shellOutput);
                Console.WriteLine("=============" + Environment.NewLine);
                process.WaitForExit(2000);
                Console.WriteLine(shellOutput);
            }
        }

    And my input is like this:

        myProcesStarter.RunCommand("myusername/mypassword");
        Console.WriteLine(myProcesStarter.GetShellOutput);

    But take a look at my output:

        SQL*Plus: Release 11.1.0.6.0 - Production on Thu May 20 11:57:38 2010
        Copyright (c) 1982, 2007, Oracle. All rights reserved.
        =============
        SQL*Plus: Release 11.1.0.6.0 - Production on Thu May 20 11:57:38 2010
        Copyright (c) 1982, 2007, Oracle. All rights reserved.
        Enter user-name:
        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 - Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

    As you can see, the output of running the same command is not the same each time. So how can I wait until all the output is done, in other words, how can I make my process wait until the output is finished? I want to write a SQL compiler, so I need the exact output of the shell. Please help. Thanks.

    Read the article

  • SOA Suite Integration: Part 3: Loading files

    - by Anthony Shorten
    One of the most common scenarios in SOA integration is the loading of a file into the product from an external source. In Oracle SOA Suite there is a File Adapter that can process many file types into your BPEL process. For this example I will use the File Adapter to load a file of users and emails to update the user object within the Oracle Utilities Application Framework. Remember, you can repeat this process with other objects and other file types. Again, I am illustrating the ease of integration.

    The first thing is to create an empty BPEL process that will hold our flow. In Oracle JDeveloper this can be achieved by specifying the Define Service Later template (as other templates have predefined inputs and outputs, and in this case we want to specify those ourselves). So I will create a simpleFileLoad process to house our process.

    You will start with an empty canvas, so you need to first specify the load part of the process using the File Adapter. Select the File Adapter from the Component Palette under BPEL Services and drag and drop it to the left side Partner Links (left is input). Name the service; in this case I chose LoadFile. Press Next.

    We will define the interface as part of the wizard, so select Define from operation and schema (specified later). Press Next.

    We are going to choose Read File to denote that we will read the file, and keep the default Operation Name of Read. Press Next.

    The next step is to tell the adapter the location of the files, how to process them, and what to do with them after they have been processed. I am using hardcoded locations in this example, but you can use logical locations as well. Press Next.

    I am now going to tell the adapter how to recognize the files I want to load. In my case I am using CSV files and, more importantly, I am telling the adapter to run the process for each record it encounters in the file. Press Next.

    Now I tell the adapter how often I want to poll for the files. I have taken the defaults. Press Next.

    At this stage I have no definition of the format of the input, so I am going to invoke the Native Format Wizard, which will guide me through the process of creating the file input format. Clicking the purple cog icon will start the wizard.

    After an introduction screen (not shown), you specify the format of the input file. The File Adapter supports multiple format types. For this example I will use Delimited, as I am going to load a CSV file. Press Next.

    The best way for the wizard to work is with a sample. I have a sample file, and the wizard will ask how much of the file to use as a template. I will use the defaults. Note: if you are using a character set other than US-ASCII, this is the point at which you specify the character set to use. Press Next.

    The sample contains multiple instances of a single record type. The wizard supports complex types as well. We will use the appropriate setting for our file. Press Next.

    You have to specify the file element and the record element. These will be used by the wizard to translate the CSV data into an XML structure (this will make sense later). I am using LoadUsers as my file delimiter (root element) and User Record as my record root element. Press Next.

    As the file is CSV, the delimiter is ",", and I also specify that the End Of Line (EOL) indicator marks the end of a record. Press Next.

    Up until this point you have not given the columns their names. In my case my sample includes the column names in the first record. This is not always the case, but you can specify the names and formats of columns in this dialog (not shown). Press Next.

    The wizard now generates the schema for the input file. You can specify a name for the schema; I have used userupdate.xsd. We want to verify the schema, so press Test. You can test the schema by specifying an input sample and pressing the green play button. You will see the delimiters you specified earlier for the file and the records. Press OK to continue. A confirmation screen will be displayed showing you the location of the schema in your project. Press Finish to return to the File Adapter configuration.

    You will now see the schema and elements prepopulated from the wizard. Press Next. The File Adapter configuration is now complete. Press Finish.

    Now you need to receive the input from the LoadFile component, so place a Receive node in the BPEL process by dragging and dropping the Receive component from the Component Palette under BPEL Constructs onto the BPEL process. Link the Receive node with the LoadFile component by dragging the leftmost connection node of the Receive node to the LoadFile component. Once the link is established, name the Receive node appropriately and, as in the previous post in this series, generate input variables for the BPEL process to hold the input records.

    You now need to add the product Web Service. The process is the same as described in the previous post in this series: drop the Web Service BPEL service onto the right side of the process and fill in the details of the WSDL URL. You also have to add an Invoke node to call the service and generate the input and output variables for the call in the Invoke node.

    Now, to get the inputs from the file to the service, you use a Transform (you can use an Assign action, but a Transform action is more flexible). Drag and drop the Transform component from the Component Palette under Oracle Extensions and place it between the Receive and Invoke nodes. We name the Transform node Mapper File and associate the source of the mapping with the schema from the Receive node; the output will be the input variable from the Invoke node.

    We now build the transform. We first map the user and email attributes by dragging and dropping the elements from the left to the right. The reason we needed the transform is that we will be telling the AS-User service that we want to issue an update action. Remember, when we registered the service we actually used Read as the default; if we do not otherwise inform the service to use the Update action, it will use the Read action instead (which is not desired). To specify the update action you need to click on the transactionType node on the right and select Set Text to set the action. You need to specify a transactionType of UPD (for update). The mapping is now complete.

    The final BPEL process is ready for deployment. You then deploy the BPEL process to the server and test the service by simply dropping a file, with the same pattern/name as you specified, into the directory you specified in the File Adapter. You will see each record as a separate instance entry in the Fusion Middleware Control console.

    You can now load files into the product, and you can repeat this process for each type of file to process. While this was a simple example, it illustrates how loading data can be achieved using SOA Suite in conjunction with our products.

    Read the article

  • How does the process of updating code with Continuous Integration work?

    - by BleakCabalist
    I want to draw a model of the process of updating source code using Continuous Integration. The main issue is that I don't really understand how it works when several programmers are working on various parts of the code at the same time; I can't visualize it. Here's what I know, but I might be wrong: (1) new code is sent to the repository; (2) the Continuous Integration server asks the version control system whether there is new code in the repository; (3) if there is, the CI server executes tests on the code; (4) if the tests show there are problems, the CI server orders the VCS to revert to the working version of the code and communicates this to the programmer; (5) if the tests pass, it compiles the repository code and makes a new build of the game? A new build is made not after every single change, but at the end of the day, I believe? Are my assumptions above correct? If yes, does it also work when several programmers update the repository at once? Is this enough to draw a model of the process, in your opinion, or did I miss something? Also, what software would I need for the above process? Can you give examples of CI server software and VCS software and whatever else I need? Does the CI software perform the code tests, or do I need another tool for that and integrate it with the CI server? Is there repository software?

    Read the article

  • .NET (C#): Getting child windows when you only have a process handle or PID?

    - by shea241
    Kind of a special-case problem: I start a process with System.Diagnostics.Process.Start(..). The process opens a splash screen, and this splash screen becomes the main window. The splash screen then closes and the 'real' UI is shown, so the main window (the splash screen) is now invalid. I still have the Process object, and I can query its handle, module, etc., but the main window handle is no longer valid. I need to get the process's UI (or UI handle) at this point. Assume I cannot change the behavior of the process to make this any easier (or saner). I have looked around online, but I'll admit I didn't look for more than an hour; it seemed like it should be somewhat trivial :-( Thanks

    Read the article

  • What are the pros and cons of AWS Elastic Beanstalk compared with other deployment strategies?

    - by James van Dyke
    I'm pretty new to the whole Netflix OSS stack and deployments in general. As background for my current level of knowledge ops-wise, my main role is as a front-end application engineer. However, I enjoy the operations side of things, so I'm attempting to set up a new deployment strategy and the tooling for a new project.

    Our goals:

    - Super easy deploys (we want to push a button to update production)
    - Automated deploys to test environments (using Jenkins)
    - Ease of maintenance (we have an app to write, and don't want to spend our time fiddling with production issues)
    - Ability to handle a service-oriented architecture (many small apps, various languages and data stores)
    - Enough flexibility to ensure we won't have to change strategies any time soon (we're already trying to get away from RightScale)

    We're OK with a little more initial setup time if doing so will save us some headaches in the future. So, along these lines, I've been listening to podcasts, watching ops talks, and reading tons of blog posts, and based on our goals and what I've taken to be some forming best practices, we've started forming a plan using Asgard, rolling our package into a JAR and rolling that into an AMI. We had this all planned out and liked the advantages of that process versus using a Chef server and converging instances on the fly (we felt that was error-prone given our limited timeline and lack of understanding of a Chef server workflow). However, a coworker did a little looking around on his own and felt Elastic Beanstalk met our needs. I've looked into it and spun up a test environment with a WAR file and an attached RDS database. Things seem to work, and I believe we can automate deploys to a testing environment using Jenkins via the AWS API. Seems simple enough... perhaps too simple.

    What I'm wondering is: what's the catch? If Elastic Beanstalk is so simple and effective, why isn't it talked about more? I'm having a hard time finding enough objective opinions and facts about the two different deployment strategies, so I thought I'd ask around:

    - Do you use Elastic Beanstalk? If so, why, and what factors led to that decision? What do you like and dislike?
    - If you don't use Elastic Beanstalk but considered it, what do you use and why didn't you use Elastic Beanstalk?
    - What are the advantages and disadvantages of an Elastic Beanstalk based deployment strategy for an SOA? That is, will Elastic Beanstalk work well with many small applications that rely on each other to work?

    Read the article

  • Is Apache ReverseProxy to Passenger Standalone an acceptable production deployment?

    - by davetron5000
    I need to deploy Rails 3 apps, using RVM and gemsets, and am expecting "public" traffic (i.e. this is not an internal-only app). I also must use Apache as the public interface to my app. I understand that Passenger Standalone can help accomplish the Rails/RVM end, and I have successfully set it up in my development environment. My question is how viable this setup is for a production deployment. Will deploying via Apache configured to reverse proxy to my Passenger-powered Rails app create problems? Since I'm designing the production deployment now, I want to understand whether I should spend the additional time to set up Passenger connected to Apache and have that Passenger communicate with the Passenger Standalone instance running my Rails app. So I'm looking for one of, I guess, three answers: (1) an Apache reverse proxy to Passenger Standalone will generally be fine; (2) you should not use the Apache/Passenger Standalone configuration, but set up Passenger on the Apache side as well; (3) your entire setup is just wrong, please RTFM (and include a link to "FM").

    Read the article

  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services. But as usual, you lack the time and resources that you need in order to develop the services properly. So you googled or bing'ed, found this blog post, and began crying in gratitude. Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about. So just close your eyes and count to 3 … now open them … and here it is…. Not.

    While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand. The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API. And that legacy API is built in the image of the silo applications. Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping.

    The Straight and Narrow Path

    This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory. Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide. Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch. According to Lafe, building their service inventory is a three-phase process:

    1. Model a business process. This requires intense collaboration between the IT and business wings of the organization, of course. The FBI uses IBM Websphere tools to model the process with BPMN.
    2. Identify candidate services to facilitate the business process.
    3. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process. The FBI uses ActiveVOS for orchestration services.

    The 12 Step Program to End Your Legacy API Addiction

    Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl's process adds a technology architecture definition phase, which allows for the technology environment to influence the inventory blueprint. For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing. Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services. Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat. (Astute readers will note that Erl's diagram, restricted to the analysis and modeling process, does not include the implementation phase that concludes the FBI service development methodology.)

    The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl's explanation.

    Read the article
