Search Results

Search found 2509 results on 101 pages for 'converting'.


  • Convert a PPT File into an Image or HTML File in .NET

    Get the .NET code for programmatically converting PowerPoint presentation files into images or HTML files.

    Read the article

  • NoSQL MongoDb Overview

    In the current software industry, which works around design patterns and OOP, there is a constant battle in converting data from the database into objects in the object graph and vice versa. MongoDb is a NoSQL database. By prim s.

    Read the article

  • Great Functionalities Of Kindle Portable Reader

    Technology has never stopped enhancing people's way of life as the years go by. It has contributed significantly to the field of reading by converting printed books into something that is c... [Author: Ben Dave - Computers and Internet - June 02, 2010]

    Read the article

  • Changing coordinate system from Z-up to Y-up

    - by Jari Komppa
    Blender's coordinate system is different from what I'm used to, in that Z points upwards instead of Y. What would be the simplest way of converting all the world data (so that all animations, texture coordinates, etc. still work) so that Y points upwards? Clarification: Object positions are defined as matrices, so just switching translation/rotation/scale information in matrices is not a trivial task.
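
    A minimal sketch of one way to handle the matrices (not from the question, and handling object matrices only, not texture coordinates; C# with System.Numerics, which uses the row-vector convention): treat the axis swap as a change of basis C that maps (x, y, z) to (x, z, -y), then conjugate every world matrix with it so that translation, rotation and scale all move together.

        using System.Numerics;

        static class UpAxisConverter
        {
            // Change of basis: old +X stays +X, old +Y maps to new -Z,
            // old +Z maps to new +Y. This preserves handedness.
            static readonly Matrix4x4 C = new Matrix4x4(
                1, 0,  0, 0,
                0, 0, -1, 0,
                0, 1,  0, 0,
                0, 0,  0, 1);

            public static Matrix4x4 ZUpToYUp(Matrix4x4 m)
            {
                // C is orthogonal, so its inverse is its transpose. With
                // row vectors, the conjugation is inverse(C) * M * C.
                return Matrix4x4.Transpose(C) * m * C;
            }
        }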

    Read the article

  • Benefits of PSD to HTML Service? For Whom and How

    With the advent of the Internet and the e-industry, many companies create websites and hire web development professionals. It is also true that the process of converting a design into web pages is ... [Author: Manish Rawat - Web Design and Development - June 13, 2010]

    Read the article

  • Convert MySQL to an MS SQL Server 2008 Database

    Converting a MySQL database to an MS SQL Server 2008 database is a bit tricky, but it is an important database migration. Is there some way to do it without resorting to costly database conversion software or facing issues with ODBC connectivity? This article will teach you a new method to help you accomplish this conversion.

    Read the article

  • Cursors Be Gone!

    A short tutorial on converting cursors to more conventional loops.

    Read the article

  • Why are there so many string classes in the face of std::string?

    - by fish
    It seems to me that many bigger C++ libraries end up creating their own string type. In the client code you either have to use the one from the library (QString, CString, fbstring etc., I'm sure anyone can name a few) or keep converting between the standard type and the one the library uses (which most of the time involves at least one copy). So, is there a particular misfeature or something wrong about std::string (just like auto_ptr semantics were bad)? Has it changed in C++11?

    Read the article

  • Your First REAL Website!

    Many companies have websites, but they need to know whether those sites are working for them. Are you making money off of yours? Are you converting your visitors into clients? Or do you simply have a website just to have a website?

    Read the article

  • Should I create inner class here or leave as is?

    - by AndroidNoob
    Folks, I have two separate classes. One of them makes HTTP requests to a server and receives the responses; the second one converts the received JSON objects into my models (separate classes). I'm thinking it might be an idea to include the data-conversion class as an inner class of my first class. Would my first class become too big because of it, so that I should leave everything as is, or is it good, common practice to do it that way?
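
    For what it's worth, a minimal sketch (in C#, all names hypothetical) of the arrangement the poster is weighing against: keep the two responsibilities in separate top-level classes and let the caller compose them, rather than nesting one inside the other.

        // Responsibility 1: talk to the server and hand back raw JSON.
        class ApiClient
        {
            public string GetJson(string url)
            {
                // HTTP request/response handling would live here.
                return "{\"id\": 1}";
            }
        }

        // Responsibility 2: turn received JSON into model objects.
        class UserMapper
        {
            public User FromJson(string json)
            {
                // JSON parsing (via whatever JSON library you use) lives here.
                return new User();
            }
        }

        class User { }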

    Read the article

  • Unicode To ASCII Conversion [closed]

    - by Yuvaraj
    Hi all, I am creating a small application in Delphi 2009. The problem is that when I run my application on Windows XP it works, but on Windows 95 it does not. I know the reason: Windows 95 does not support Unicode. If anyone knows a solution, please tell me. I also have one more idea: converting Unicode to ASCII. Is that possible? Please tell me how to do it. Thanks in advance. Warm regards, Yuvaraj
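
    The question is about Delphi, but as a language-neutral illustration of what a lossy Unicode-to-ASCII conversion involves, here is a sketch in C#: decompose accented characters, drop the combining marks, and replace anything still outside ASCII with '?'. Characters without an ASCII equivalent are simply lost, which is the inherent trade-off of such a conversion.

        using System.Globalization;
        using System.Text;

        static class AsciiFold
        {
            public static string ToAscii(string input)
            {
                var sb = new StringBuilder(input.Length);
                // FormD splits a character like 'é' into 'e' plus a combining accent.
                foreach (char c in input.Normalize(NormalizationForm.FormD))
                {
                    if (CharUnicodeInfo.GetUnicodeCategory(c) == UnicodeCategory.NonSpacingMark)
                        continue; // drop the accent mark
                    sb.Append(c <= 127 ? c : '?'); // anything else non-ASCII becomes '?'
                }
                return sb.ToString();
            }
        }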

    Read the article

  • I can't install Truecrypt on Ubuntu 12.04

    - by Duanek
    I have a Windows 7 machine with Truecrypt. I installed Ubuntu 12.04 with the intention of converting to an Ubuntu-only machine, but I can't install Truecrypt on Ubuntu. I have been looking through the forums and have followed all advice to the letter, and still Truecrypt doesn't work; I have only a non-functional icon in the Dash. Should I uninstall Ubuntu and start over (i.e. reinstall)? I love Ubuntu and greatly appreciate your efforts on this forum.

    Read the article

  • The Challenge of the Mobile Web

    Our innovative product allows you to do all that without the hassle. Converting and creating new mobile web sites is a breeze, as our system guides you on every step of the way. Do not hesitate, begin now, and build your mobile web site, all free of charge!

    Read the article

  • Guide to Maintaining an Effective Web Presence

    Most companies and professionals don't understand what it actually takes to maintain a solid web presence and how important it is, for both converting visits and maintaining a positive online public image, to rank well on search engines. Most people think that search engine optimization is simply optimizing a website for it to rank better on search engines.

    Read the article

  • Embed album art in OGG through command line in linux

    - by teratomata
    I want to convert my music from FLAC to Ogg, and currently oggenc does that perfectly except for album art. Metaflac can output album art, but there seems to be no command-line tool to embed album art into an Ogg file. MP3Tag and EasyTag are able to do it, and there is a specification for it here, which calls for the image to be base64 encoded. However, so far I have been unsuccessful in taking an image file, converting it to base64, and embedding it into an Ogg file.

    If I take a base64-encoded image from an Ogg file that already has the image embedded, I can easily embed it into another Ogg file using vorbiscomment:

        vorbiscomment -l withimage.ogg > textfile
        vorbiscomment -c textfile noimage.ogg

    My problem is taking something like a JPEG and converting it to base64. Currently I have:

        base64 --wrap=0 ./image.jpg

    which gives me the image file converted to base64. Using vorbiscomment and following the tagging rules, I can embed that into an Ogg file like so:

        echo "METADATA_BLOCK_PICTURE=$(base64 --wrap=0 ./image.jpg)" > ./textfile
        vorbiscomment -c textfile noimage.ogg

    However, this gives me an Ogg whose image does not work properly. When comparing the base64 strings, I noticed that all properly embedded pictures have a header, but the base64 strings I generate lack this header. Further analysis of the header:

        od -c header.txt
        0000000  \0  \0  \0 003  \0  \0  \0  \n   i   m   a   g   e   /   j   p
        0000020   e   g  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0  \0
        0000040  \0  \0  \0  \0  \0  \0  \0  \0 035 332
        0000052

    This follows the spec given above: notice that 003 corresponds to the front-cover picture type and image/jpeg is the MIME type. So finally, my question is: how can I base64 encode a file and generate this header along with it, for embedding into an Ogg file?
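
    Since the dump above matches the picture-block layout from the spec (big-endian 32-bit fields: picture type, MIME length, the MIME string, description length, width, height, colour depth, palette size, data length, then the image bytes), one approach is to build that header yourself and base64 the whole block. Here is a sketch in C# (an illustration, not a tested tool; it assumes a front-cover JPEG and leaves the unknown dimensions as 0, as in the dump above):

        using System;
        using System.IO;
        using System.Text;

        static class OggArt
        {
            public static string BuildMetadataBlockPicture(string jpegPath)
            {
                byte[] image = File.ReadAllBytes(jpegPath);
                byte[] mime = Encoding.ASCII.GetBytes("image/jpeg");
                var ms = new MemoryStream();

                // Every integer field in the block is a 32-bit big-endian value.
                void WriteBE(uint v)
                {
                    ms.WriteByte((byte)(v >> 24));
                    ms.WriteByte((byte)(v >> 16));
                    ms.WriteByte((byte)(v >> 8));
                    ms.WriteByte((byte)v);
                }

                WriteBE(3);                   // picture type: 3 = front cover
                WriteBE((uint)mime.Length);   // 10, the \n byte in the od dump
                ms.Write(mime, 0, mime.Length);
                WriteBE(0);                   // description length (no description)
                WriteBE(0); WriteBE(0);       // width, height (0 = unspecified)
                WriteBE(0); WriteBE(0);       // colour depth, palette size
                WriteBE((uint)image.Length);  // picture data length
                ms.Write(image, 0, image.Length);

                return Convert.ToBase64String(ms.ToArray());
            }
        }

    The returned string would then go on a METADATA_BLOCK_PICTURE= line fed to vorbiscomment -c, as in the commands above.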

    Read the article

  • Is there a good alternative to Videora iPod Converter?

    - by Richard
    I use Videora to convert my videos (in DivX/XviD format) to something I can play on my iPod Classic. I really dislike it. It's clunky, riddled with adverts, sometimes doesn't convert properly (the infamous "invalid public atom" error - see Google for more) and has a UI that truly stinks. On the upside, it's free, accepts a list of video files (via the oddly hard to find "1-click convert" button) and just gets on with the converting, as it already knows the correct settings for my iPod. One final nice touch is that once the videos are converted, it automatically uploads them into iTunes.

    Are there any alternatives which have all the upsides but none of the downsides? Bonus points if they can set the metadata in iTunes correctly for TV shows (season, show, episode) and delete the converted file afterwards (as my iTunes settings mean that a copy is made elsewhere).

    I've looked at a bunch of applications (HandBrake, VirtualDub, MediaCoder, Format Factory, Any Video Converter, ConvertXtoDVD) but many of them fail the "just select a list of files and get on with converting" test - let alone all the other features I want. I have no desire to individually set the video size of each file or the codec or the post-processing options.

    I'm currently using the command-line version of HandBrake (handbrakecli) and a hand-written DOS batch file to go through every file in a folder and convert it. It does most of what I want, just not in a very slick way. Can anyone recommend anything better? It needs to work on Windows 7 and be free.
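
    As an aside, the batch loop the poster describes could look like the following sketch in C# instead of a DOS batch file (the preset name "iPod" and the executable name are assumptions to adjust for your HandBrake version):

        using System.Diagnostics;
        using System.IO;

        static class BatchConvert
        {
            public static void ConvertFolder(string folder)
            {
                foreach (string input in Directory.GetFiles(folder, "*.avi"))
                {
                    string output = Path.ChangeExtension(input, ".m4v");
                    var psi = new ProcessStartInfo
                    {
                        FileName = "handbrakecli.exe",
                        Arguments = $"-i \"{input}\" -o \"{output}\" --preset \"iPod\"",
                        UseShellExecute = false
                    };
                    using (var p = Process.Start(psi))
                    {
                        p.WaitForExit(); // one file at a time, like the batch file does
                    }
                }
            }
        }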

    Read the article

  • emacs ORG-mode "headless" export-as commands?

    - by Seamus
    When I use org-export-as-latex or org-export-as-html, org-mode turns my buffer into a .tex file or .html file. But I don't want all the extra junk that it adds to the file: I want to handle the documentclass and everything myself and just \input the org-mode-generated file. (Or the analogous thing for HTML with PHP.) So if my org file just has:

        * Section
          - Stuff
          - Things

    I want the org-mode command to output just:

        \section{Section}
        \begin{itemize}
        \item Stuff
        \item Things
        \end{itemize}

    without any of the extra \tableofcontents junk that org adds to it. I know I could define my own kind of #+LaTeX_CLASS that could add the packages I want and so on, but I don't want to do things that way (and that wouldn't remove the \maketitle or the spurious \vspace* that org insists on inserting). Is there a command to do this "headless" parsing and converting? I had a look, but it's not obvious from the documentation. Presumably some low-level org command is doing the parsing and converting I want, but I couldn't find what it was called from looking at the docs and C-h pages... This is not a question about HTML or LaTeX but about emacs ORG mode. So don't kick it off to some other site...

    Read the article

  • Cut (smart edit) .mts (AVCHD Progressive) files on Ubuntu Lucid

    - by pts
    I have a bunch of .mts files containing AVCHD Progressive video recorded by a Panasonic camera, and I need software on Ubuntu Lucid with which I can remove the boring parts and concatenate the interesting parts, all without re-encoding the video stream. It's OK for me to cut at keyframe boundaries. If Avidemux were able to open the files, it would take about 60 hours of work for me to cut them. (At least that was the case last time I tried with similar videos, but in a file format supported by Avidemux.) So I need a fast, powerful and stable video editor, because I don't want those 60 hours of work to grow to 240 or even 480 hours just because the tool is too slow or unstable or has a terrible UI.

    I've tried Avidemux 2.5.5 and 2.5.6, but they crash trying to open such a file, even if I convert the file to .avi first using mencoder -oac copy -ovc copy. mplayer can play the files. I've tried Avidemux 2.6.0, which can open the file, but it cannot jump to the previous or next keyframe etc. (if I make it jump to the next keyframe, and then to the previous keyframe, it doesn't end up at the original keyframe, and sometimes displays an error). Also, I'm not sure if Avidemux 2.6.x would let me save the result without re-encoding.

    I've tried Kdenlive 0.7.7.1, but playback is very choppy, and it cannot play audio at all (complaining that SDL cannot find the device, although many other programs on the system can play audio). It would be a pain to work with.

    I've tried converting the .mts file to .mkv using ffmpeg -i input.mts -vcodec copy -sameq -acodec copy -f matroska output.mkv, but that caused too many visible distortions in the video in both mplayer and Avidemux. I've tried converting the .mts file with TsRemux.exe, but Avidemux 2.5.x still can't open that file.

    Is there another program to cut and concatenate the files? Or is there a preprocessor which would create a file (without re-encoding the video) on which Avidemux wouldn't crash?

    Read the article

  • Parallelism in .NET – Part 7, Some Differences between PLINQ and LINQ to Objects

    - by Reed
    In my previous post on Declarative Data Parallelism, I mentioned that PLINQ extends LINQ to Objects to support parallel operations. Although nearly all of the same operations are supported, there are some differences between PLINQ and LINQ to Objects. By introducing parallelism to our declarative model, we add some extra complexity. This, in turn, adds some extra requirements that must be addressed.

    In order to illustrate the main differences, and why they exist, let's begin by discussing some differences in how the two technologies operate, and look at the underlying types involved in LINQ to Objects and PLINQ.

    LINQ to Objects is mainly built upon a single class: Enumerable. The Enumerable class is a static class that defines a large set of extension methods, nearly all of which work upon an IEnumerable<T>. Many of these methods return a new IEnumerable<T>, allowing the methods to be chained together into a fluent-style interface. This is what allows us to write statements that chain together, and leads to the nice declarative programming model of LINQ:

        double min = collection
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    Other LINQ variants work in a similar fashion. For example, most data-oriented LINQ providers are built upon an implementation of IQueryable<T>, which allows the database provider to turn a LINQ statement into an underlying SQL query, to be performed directly on the remote database.

    PLINQ is similar, but instead of being built upon the Enumerable class, most of PLINQ is built upon a new static class: ParallelEnumerable. When using PLINQ, you typically begin with any collection which implements IEnumerable<T>, and convert it to a new type using an extension method defined on ParallelEnumerable: AsParallel(). This method takes any IEnumerable<T> and converts it into a ParallelQuery<T>, the core class for PLINQ. There is a similar ParallelQuery class for working with non-generic IEnumerable implementations.

    This brings us to our first subtle, but important, difference between PLINQ and LINQ – PLINQ always works upon specific types, which must be explicitly created. Typically, the type you'll use with PLINQ is ParallelQuery<T>, but it can sometimes be a ParallelQuery or an OrderedParallelQuery<T>. Instead of dealing with an interface, implemented by an unknown class, we're dealing with a specific class type. This works seamlessly from a usage standpoint – ParallelQuery<T> implements IEnumerable<T>, so you can always "switch back" to an IEnumerable<T>. The difference only arises at the beginning of our parallelization. When we're using LINQ, and we want to process a normal collection via PLINQ, we need to explicitly convert the collection into a ParallelQuery<T> by calling AsParallel().

    There is an important consideration here – AsParallel() does not need to be called on your specific collection, but rather on any IEnumerable<T>. This allows you to place it anywhere in the chain of methods involved in a LINQ statement, not just at the beginning. This can be useful if you have an operation which will not parallelize well or is not thread safe. For example, the following is perfectly valid, and similar to our previous examples:

        double min = collection
            .AsParallel()
            .Select(item => item.SomeOperation())
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    However, if SomeOperation() is not thread safe, we could just as easily do:

        double min = collection
            .Select(item => item.SomeOperation())
            .AsParallel()
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .Min(item => item.PerformComputation());

    In this case, we're using standard LINQ to Objects for the Select(...) method, then converting the results of that map routine to a ParallelQuery<T>, and processing our filter (the Where method) and our aggregation (the Min method) in parallel.

    PLINQ also provides us with a way to convert a ParallelQuery<T> back into a standard IEnumerable<T>, forcing sequential processing via standard LINQ to Objects. If SomeOperation() was thread safe, but PerformComputation() was not, we would need to handle this by using the AsEnumerable() method:

        double min = collection
            .AsParallel()
            .Select(item => item.SomeOperation())
            .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24)
            .AsEnumerable()
            .Min(item => item.PerformComputation());

    Here, we're converting our collection into a ParallelQuery<T>, doing our map operation (the Select(...) method) and our filtering in parallel, then converting the collection back into a standard IEnumerable<T>, which causes our aggregation via Min() to be performed sequentially.

    This could be written as two statements as well, which would allow us to use the language-integrated syntax for the first portion:

        var tempCollection = from item in collection.AsParallel()
                             let e = item.SomeOperation()
                             where (e.SomeProperty > 6 && e.SomeProperty < 24)
                             select e;
        double min = tempCollection.AsEnumerable().Min(item => item.PerformComputation());

    This allows us to use the standard LINQ-style language-integrated query syntax, but control whether it's performed in parallel or serial by adding AsParallel() and AsEnumerable() appropriately.

    The second important difference between PLINQ and LINQ deals with order preservation. PLINQ, by default, does not preserve the order of the source collection. This is by design. In order to process a collection in parallel, the system needs to deal with multiple elements at the same time. Maintaining the original ordering of the sequence adds overhead, which is, in many cases, unnecessary. Therefore, by default, the system is allowed to completely change the order of your sequence during processing. If you are doing a standard query operation, this is usually not an issue. However, there are times when keeping a specific ordering in place is important. If this is required, you can explicitly request that the ordering be preserved throughout all operations done on a ParallelQuery<T> by using the AsOrdered() extension method. This will cause our sequence ordering to be preserved.

    For example, suppose we wanted to take a collection, perform an expensive operation which converts it to a new type, and display the first 100 elements. In LINQ to Objects, our code might look something like:

        // Using IEnumerable<SourceClass> collection
        IEnumerable<ResultClass> results = collection
            .Select(e => e.CreateResult())
            .Take(100);

    If we just converted this to a parallel query naively, like so:

        IEnumerable<ResultClass> results = collection
            .AsParallel()
            .Select(e => e.CreateResult())
            .Take(100);

    we could very easily get a very different, and non-reproducible, set of results, since the ordering of elements in the input collection is not preserved. To get the same results as our original query, we need to use:

        IEnumerable<ResultClass> results = collection
            .AsParallel()
            .AsOrdered()
            .Select(e => e.CreateResult())
            .Take(100);

    This requests that PLINQ process our sequence in a way that guarantees that our resulting collection is ordered as if it were processed serially. This will cause our query to run slower, since there is overhead involved in maintaining the ordering. However, in this case, it is required, since the ordering is required for correctness.

    PLINQ is incredibly useful. It allows us to easily take nearly any LINQ to Objects query and run it in parallel, using the same methods and syntax we've used previously. There are some important differences in operation that must be considered, however – it is not a free pass to parallelize everything. When using PLINQ in order to parallelize your routines declaratively, the same guideline I mentioned before still applies: parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.

    Read the article

  • PDF Converter Elite Giveaway – Lets you create, convert and edit any type of PDF with ease

    - by Gopinath
    Are you looking for PDF editing software that lets you create, edit and convert any type of PDF with ease? Then here is a chance for you to win a lifetime free license of the PDF Converter Elite software. Tech Dreams, in partnership with pdfconverter.com, brings a giveaway contest exclusively for our readers. Continue reading to learn the features of the application and the giveaway contest details.

    Adobe Acrobat is the best software for creating, editing and converting PDF files, but you need to spend a lot of money to buy it. PDF Converter Elite, which is priced at $100, has a rich set of features that satisfies most of your PDF management needs. Here is a quick rundown of the features of the application:

    Create PDF files from almost every popular Windows file format - You can create a PDF from almost 300 popular file formats supported by Windows. Want to convert a Word document to PDF? It's just a click away. How about converting Excel files, PowerPoint presentations, text files, images, etc.? Yes, with a single click you will be able to turn them into PDF files.

    Convert PDF to Word, Excel, PowerPoint, Publisher, HTML - This is one of the best features I liked in this software. You can convert a PDF to any MS Office file format without losing the alignment and quality of the document. The converted documents look exactly the same as your PDF documents, and you would be surprised to see near-100% layout replication in the converted document. I fell in love with the perfection at which the files are converted.

    Edit PDF files easily - You can rework your PDF documents by inserting watermarks, numbers, headers, footers and more. Also you will be able to merge two PDF files, overlay pages, remove unwanted pages, and split a single PDF into multiple files.

    Secure PDF files by setting a password - You can secure PDF files by limiting how others can use them: set a password to open the documents and restrict various activities like printing, copy & paste, screen reading, form filling, etc.

    If you are looking for an affordable PDF editing application, then PDF Converter Elite is there for you.

    10 x PDF Converter Elite Licenses Giveaway

    Here come the details on winning a free single-user license for our readers - we have 10 PDF Converter Elite single-user licenses worth $100 each. To win a license, all you need to do is:

    1. Like the Tech Dreams fan page on Facebook
    2. Tweet or Like this post - buttons are available just below the post heading in the top section of this page
    3. Finally, drop a comment on how you would like to use PDF Converter Elite

    We will choose 10 winners through a lucky draw and the licenses will be sent to them in a personal email. Names of the winners will also be announced on Tech Dreams. So are you ready to grab a free copy of PDF Converter Elite worth $100?

    Read the article

  • Knowledge Pathways Designer - Recommended Settings

    - by ted.henson
    The General page of the Options dialog box contains the application preferences for Knowledge Pathways Designer. It is recommended that you leave certain settings as they are, unless you have a specific reason for changing them. The following are a few of the settings on the General page, with an explanation of the recommended setting, in the order they appear on the page:

    Allow version 2.0 style links: This option should remain disabled unless you were using content that was created using version 2.0 of Knowledge Pathways and you want the same linking functionality that existed in version 2.0. This feature enables you to reuse parts of titles that contain no AUs. However, keep in mind that this type of link is not a true link, but a cross between a copy and a link. To create a 2.0 style link, you drag and drop sections between titles. You can only create 2.0 style links to sections that belong to the Title AU. When creating a version 2.0 style link, your mouse pointer will change to indicate a 2.0 link is being created.

    Confirm deletion of outline items and Confirm deletion of titles: It is recommended that these options remain enabled to avoid deleting something by accident.

    Display tracking data loss warning when opening a published title: It is recommended that this option be enabled so you will receive the warning message when you open the development copy of a title, reminding you of the implications of your changes.

    Copy files when converting a Section to an Assignable Unit: This option should remain enabled unless you have a specific reason for not copying the files. If this is disabled, you will (in effect) lose your content files upon converting, because they will not be copied to the new AU directory on the content root. In that case, you would need to use Windows Explorer to copy your files manually.

    Working with Spelling Options

    All of the spelling options are enabled by default. Your design team can review these options to determine if you want to make changes, depending upon your specific needs.

    Understanding Dictionary Options

    You should leave the dictionary options as they are, unless you have a specific reason for changing them. While you can delete the user (customizable) dictionary, doing so is not recommended.

    Setting Check In/Check Out Options

    The ability to check in and check out titles and AUs will impact the efficiency of your design team. Decide what your check-in and check-out processes are before you start developing titles. The Check In/Check Out page of the Options dialog box contains two options that affect what happens when you open a title using the Open Title dialog box. Both of these options are enabled by default and are described below:

    Check Out for editing enabled: This option ensures that the Check Out for editing option will be selected when you open the development copy of a title from the Open Title dialog box. If this option is disabled, you must select the Check Out for editing option every time you want to check out a title for editing.

    Attempt to Check Out for entire branch: When this option is enabled, Designer checks out the selected title and all AUs and sections that are part of that title, provided they are available for check out. If this option is disabled, you will only check out the Title AU and anything that belongs to that Title AU (e.g., sections, questions, etc.), but not other AUs.

    The Check In/Check Out page of the Options dialog box also contains options that control what happens when you close a title. You can choose one option in the Check In when Closing a Title area. The option selected is a matter of preference, and you should determine which option is most appropriate for your design team.

    Read the article

  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services. But as usual, you lack the time and resources that you need in order to develop the services properly. So you googled or bing'ed, found this blog post, and began crying in gratitude. Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about. So just close your eyes and count to 3 ... now open them ... and here it is....

    Not. While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand. The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API. And that legacy API is built in the image of the silo applications. Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping.

    The Straight and Narrow Path

    This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory. Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide. Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch. According to Lafe, building their service inventory is a three-phase process:

    1. Model a business process. This requires intense collaboration between the IT and business wings of the organization, of course. The FBI uses IBM WebSphere tools to model the process with BPMN.
    2. Identify candidate services to facilitate the business process.
    3. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process. The FBI uses ActiveVOS for orchestration services.

    The 12 Step Program to End Your Legacy API Addiction

    Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl's process adds a technology architecture definition phase, which allows the technology environment to influence the inventory blueprint. For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing. Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services. Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat. (Astute readers will note that Erl's diagram, restricted to the analysis and modeling process, does not include the implementation phase that concludes the FBI service development methodology.)

    The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl's explanation.

    Read the article
