Search Results

Search found 32297 results on 1292 pages for 'thought process'.


  • How to Build Services from Legacy Applications

    - by Chris Falter
    The SOA consultants invaded the executive suite at your company or agency, preached the true religion, and converted the unbelievers. Now by divine imperative you must convert your legacy applications into a suite of reusable services. But as usual, you lack the time and resources that you need in order to develop the services properly. So you googled or bing’ed, found this blog post, and began crying in gratitude. Yes, as the title implies, I am going to reveal my easy, 3-step, works-every-time process for converting silos of legacy applications into the inventory of services your CIO has been dreaming about. So just close your eyes and count to 3 … now open them … and here it is….

    Not.

    While wishful thinking is too often the coin of the IT realm, even the most naive practitioner knows that converting legacy applications into reusable services requires more than a magic wand. The reason is simple: if your starting point is your legacy applications, then you will simply be bolting a web service technology layer on top of your legacy API. And that legacy API is built in the image of the silo applications. Enter the wide gate of the legacy API, follow the broad path of generating service interfaces from existing code, and you will arrive at the siloed enterprise destruction that you thought you were escaping.

    The Straight and Narrow Path

    This past week I had the opportunity to learn how the FBI Criminal Justice Information Systems department has been transitioning from silo applications to a service inventory. Lafe Hutcheson, IT Specialist in the architecture group and fellow attendee at an SOA Architect Certification Workshop, was my guide. Lafe has survived the chaos of an SOA initiative, so it is not surprising that he was able to return from a US Army deployment to Kabul, Afghanistan with nary a scratch. According to Lafe, building their service inventory is a three-phase process:

    1. Model a business process. This requires intense collaboration between the IT and business wings of the organization, of course. The FBI uses IBM WebSphere tools to model the process with BPMN.
    2. Identify candidate services to facilitate the business process.
    3. Convert the BPMN to an executable BPEL orchestration, model and develop the services, and use a BPEL engine to run the process. The FBI uses ActiveVOS for orchestration services.

    The 12 Step Program to End Your Legacy API Addiction

    Thomas Erl has documented a process for building a web service inventory that is quite similar to the FBI process. Erl’s process adds a technology architecture definition phase, which allows the technology environment to influence the inventory blueprint. For example, if you are using an enterprise service bus, you will probably not need to build your own utility services for logging or intermediate routing. Erl also lists a service-oriented analysis phase that highlights the 12-step process of applying the principles of service orientation to modeling your services. Erl depicts the modeling of a service inventory as an iterative process: model a business process, define the relevant technology architecture, define the service inventory blueprint, analyze the services, then model another business process, rinse and repeat. (Astute readers will note that Erl’s diagram, restricted to the analysis and modeling process, does not include the implementation phase that concludes the FBI service development methodology.)
The service-oriented analysis phase is where you find the 12 steps that will free you from your legacy API addiction. In a nutshell, you identify the steps in the process that need services; identify the different types of services (agnostic entity services, service compositions, and utility services) that are required; apply service-orientation principles; and normalize the inventory into cohesive service models. Rather than discuss each of the 12 steps individually, I will close by simply referring my readers to Erl’s explanation.

    Read the article

  • VSDBCMD returns "An unexpected failure occurred: Object reference not set to an instance of an object"

    - by Matt Wrock
    I have been successfully using the command line database deployment tool VSDBCMD on my dev and test environments, but the tool fails in our integration environment. I am using the VS 2010 version of the tool. The servers have all of the prerequisites, including:

    - .NET 4.0
    - SQL Server Compact Edition 3.5 SP1 (as well as the full edition of 2008)
    - SQL Server 2008 Server Management Objects
    - SQL Server 2008 Native Client
    - SQL Server System CLR Types
    - MSXML 6
    - all of the dependent DLLs included in:
      C:\Program Files\Microsoft SQL Server Compact Edition\v3.5\desktop\*.dll
      C:\Program Files\Microsoft SQL Server Compact Edition\v3.5\*.dll
      C:\Program Files (x86)\Microsoft Visual Studio 10.0\VSTSDB\Deploy\*.*

    The only reference to this error that I have been able to find has to do with a bug in the VS 2008 edition when the HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\9.0 key is missing. In my case the 10.0 version of the key exists. Has anyone else encountered this?
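
    Since HKEY_CURRENT_USER is per-account, one thing worth verifying is that the 10.0 key exists under the account that actually runs VSDBCMD on the failing server, not just under your own. A minimal PowerShell sketch of that check:

        # Check the per-user VS 10.0 key under the current account.
        if (Test-Path 'HKCU:\Software\Microsoft\VisualStudio\10.0') {
            'VS 10.0 user key present'
        } else {
            'VS 10.0 user key missing for this account'
        }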

    Read the article

  • Error handling in VS/C# build events.

    - by ProfK
    I have just written a small utility to be used in a pre-build event. The utility works fine when run standalone, but does nothing when used in the build event. Is there a standard way of noticing and dealing with error conditions in build events, or is that the domain of more advanced build control?
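
    For reference, Visual Studio does surface build-event failures: the event's commands run as a batch, and if the batch exits with a nonzero code the build fails and the exit code is reported in the Output window. So the standard approach is to have the utility return a meaningful exit code and guard multi-line events explicitly; a minimal sketch (the utility path is a placeholder):

        "$(ProjectDir)Tools\MyUtility.exe"
        if errorlevel 1 exit /b 1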

    Read the article

  • CruiseControl.Net Build Publisher - Only publish compiled files

    - by FlySwat
    While setting up CruiseControl, I added a buildpublisher block to the publisher tasks:

        <buildpublisher>
          <sourceDir>C:\MyBuild\</sourceDir>
          <publishDir>C:\MyBuildPublished\</publishDir>
          <alwaysPublish>false</alwaysPublish>
        </buildpublisher>

    This works, but it copies the entire file contents of the build; I only want to copy the DLLs and .aspx pages, and I don't need the source code to get published. Does anyone know of a way to filter this, or do I need to set up a task to run a RoboCopy script instead?
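
    The stock buildpublisher has no include/exclude filtering, so one workaround is an exec task running robocopy with file masks; a sketch (paths as above; note that robocopy signals success with exit codes 0-3, which newer CCNet versions can whitelist via successExitCodes, while older ones need a wrapper script that normalizes the exit code):

        <exec>
          <executable>robocopy</executable>
          <buildArgs>C:\MyBuild C:\MyBuildPublished *.dll *.aspx /S</buildArgs>
          <successExitCodes>0,1,2,3</successExitCodes>
        </exec>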

    Read the article

  • Interpreting w3wp.exe thread infos: does mscorwks.dll!StrongNameErrorInfo+0x7688 have a negative impact?

    - by Robert
    I am trying to interpret the meaning of "mscorwks.dll!StrongNameErrorInfo+0x7688". I guess it means that the assembly loaded by mscorwks.dll has no strong name? If so, does this have any negative impact on a web application? Is it safe to assume that the thread count of 107 means that the web application has needed a maximum of 107 concurrent threads to handle incoming requests?

    Read the article

  • Identifier for the “completed” stage of a process: 0, 99, something else?

    - by Arnold Sakhnov
    Say you are handling a multi-step process (like a complex registration form, with a number of steps the user has to go through in order). You need to be able to save the current state of the process (e.g. so the user can come back to that registration form later and continue from the step where they left off). Obviously, you’ll probably want to give each “step” an identifier you can refer to: 1, 2, 3, 4, etc. Your logic will check for this step_id (or whatever you call it) to render the appropriate data. The question: how would you identify the stage after the final step, like the completed registration state (assume that you have to give that last “step” its own id; that's how your logic is structured)? Would it be 0, 999, a non-integer value, something else entirely?
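
    One way to sidestep the magic-number question is a named constant or enum for the terminal state, so the stored value stays a plain integer but the code never compares against a bare 0 or 999; an illustrative sketch in Python (step names and values are placeholders):

        from enum import IntEnum

        class RegistrationStep(IntEnum):
            # The stored step_id stays an integer; the name carries the meaning.
            ACCOUNT = 1
            PROFILE = 2
            CONFIRM = 3
            COMPLETED = 99  # the value is arbitrary, since code refers to the name

        def is_done(step_id: int) -> bool:
            # Rendering logic compares against the name, never a magic number.
            return step_id == RegistrationStep.COMPLETED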

    Read the article

  • How can I best share Ant targets between projects?

    - by Rob Hruska
    Is there a well-established way to share Ant targets between projects? I have a solution currently, but it's a bit inelegant. Here's what I'm doing so far. I've got a file called ivy-tasks.xml hosted on a server on our network. This file contains, among other targets, boilerplate tasks for managing project dependencies with Ivy. For example:

        <project name="ant-ivy-tasks" default="init-ivy"
                 xmlns:ivy="antlib:org.apache.ivy.ant">
          ...
          <target name="ivy-download" unless="skip.ivy.download">
            <mkdir dir="${ivy.jar.dir}"/>
            <echo message="Installing ivy..."/>
            <get src="http://repo1.maven.org/maven2/org/apache/ivy/ivy/${ivy.install.version}/ivy-${ivy.install.version}.jar"
                 dest="${ivy.jar.file}" usetimestamp="true"/>
          </target>

          <target name="ivy-init" depends="ivy-download"
                  description="-> Defines ivy tasks and loads global settings">
            <path id="ivy.lib.path">
              <fileset dir="${ivy.jar.dir}" includes="*.jar"/>
            </path>
            <taskdef resource="org/apache/ivy/ant/antlib.xml"
                     uri="antlib:org.apache.ivy.ant"
                     classpathref="ivy.lib.path"/>
            <ivy:settings url="http://myserver/ivy/settings/ivysettings-user.xml"/>
          </target>
          ...
        </project>

    The reason this file is hosted is because I don't want to:

    - Check the file into every project that needs it - this will result in duplication, making maintaining the targets harder.
    - Have my build.xml depend on checking out a project from source control - this will make the build have more XML at the top-level just to access the file.

    What I do with this file in my projects' build.xmls is along the lines of:

        <property name="download.dir" location="download"/>
        <mkdir dir="${download.dir}"/>
        <echo message="Downloading import files to ${download.dir}"/>
        <get src="http://myserver/ivy/ivy-tasks.xml"
             dest="${download.dir}/ivy-tasks.xml" usetimestamp="true"/>
        <import file="${download.dir}/ivy-tasks.xml"/>

    The "dirty" part about this is that I have to do the above steps outside of a target, because the import task must be at the top-level. Plus, I still have to include this XML in all of the build.xml files that need it (i.e. there's still some amount of duplication). On top of that, there might be additional situations where I might have common (non-Ivy) tasks that I'd like imported. If I were to provide these tasks using Ivy's dependency management I'd still have problems, since by the time I'd have resolved the dependencies I would have to be inside of a target in my build.xml, and unable to import (due to the constraint mentioned above). Is there a better solution for what I'm trying to accomplish?
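
    One avenue worth checking: Ant 1.8 extended <import> to accept nested resources, which would remove the manual download step entirely; a sketch, assuming an upgrade to Ant 1.8+ is an option:

        <import>
          <url url="http://myserver/ivy/ivy-tasks.xml"/>
        </import>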

    Read the article

  • Recommendations for keeping a build server updated

    - by gareth_bowles
    As a guy who frequently switches between QA, build and operations, I keep running into the issue of what to do about operating system updates on the build server. The dichotomy is the same on Windows, Linux, MacOS, or any other o/s that can update itself via the internet:

    - The QA team wants to keep the build server exactly as it is from the beginning of the product release cycle to the end, since installing updates could destabilize the server and means that successive builds aren't made against the same baseline.
    - The ops team wants the software to be deployed on a system with all the latest security patches; this can mean that the software isn't deployed on exactly the same version of the o/s that it was built on.

    I usually mitigate this by taking release candidate builds and installing them on a test server that has a completely up-to-date o/s, repeating the automated tests that are run on the build server and doing some additional system level testing to make sure everything looks good before deployment. However, this seems inefficient to me; does anyone have a better way?

    Read the article

  • Does the export table contain all of a Win32 exe's functions?

    - by Usman
    Hello, I need to know whether all of a Win32 exe's functions, including class member functions, are contained in the export table of that Win32 exe (PE file). If not, how and where can I get this information? (I know the PE file format and all of its sections, and what those sections contain, but I still need help on how to proceed.) Regards Muhammad Usman
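
    For what it's worth, the export table only lists symbols the image explicitly exports (e.g. via __declspec(dllexport) or a .def file); internal functions and ordinary class members will not appear there. The Visual Studio toolchain can dump whatever is in it (the file name is a placeholder):

        dumpbin /exports MyApp.exe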

    Read the article

  • Details on the following Natural Language Processing terms?

    - by wefwgeweg
    - Named Entity Extraction (extract people, cities, organizations)
    - Content Tagging (extract topic tags by scanning doc)
    - Structured Data Extraction
    - Topic Categorization (taxonomy classification by scanning doc... Bayesian)
    - Text extraction (HTML page cleaning)

    Are there libraries I can use to do any of the above functions of NLP? I don't really feel like forking out cash to AlchemyAPI.

    Read the article

  • Visual Studio 2005 and Windows SDK 6.1 (Server 2008)

    - by bde
    I am trying to figure out how Visual Studio 2005 and the Windows SDK 6.1 integrate in a command line build environment (if at all). I am mostly interested in x64 development, but also just in how these two packages fit together. Is it possible/advisable to use the compiler and linker from Visual Studio 2005 and the headers/libraries from the newer Windows SDK 6.1? Also, is it possible to use the devenv command (part of VS2005) in the Windows SDK 6.1 build environment?

    Read the article

  • Using Essential Use Cases to design a UI-centric Application

    - by Bruno Brant
    Hello all, I'm beginning a new project (oh, how I love the fresh taste of a new project!) and we are just starting to design it. In short: the application is a UI that will enable users to model an execution flow (a Visio-like drag & drop interface). So our greatest concern is usability and features that will help the users model the execution flow fast and clearly. Our established methodology makes extensive use of Use Cases in order to create a harmonious view of the application between the programmers and users. This is a business concern, really: I'd prefer to use an Agile Method with User Stories rather than Use Cases, but we need to define a clear scope to sell the product to our clients. However, Use Cases have a number of flaws, most of which are related to the fact that they include technical details, like UI, etc., as can be seen here. But, since we can't use User Stories and a fully interactive design, I've decided that we compromise: I will be using Essential Use Cases in order to hide those details. Now I have another problem: it's essential (no pun intended) to have a clear description of UI interaction, so how should I document it? In other words, how do I specify an application through the use of Essential Use Cases where the UI interaction is vital to it? I can see some alternatives:

    - Abandon the use of Use Cases since they don't correctly represent the problem
    - Do not include interface descriptions in the use cases, but create other documentation (Story Boards) and link them to the Essential Use Cases
    - Include UI interaction descriptions in the Essential Use Cases, since they are part of the business rules in the perspective of the users and the application itself

    Read the article

  • Accessing TFS from Powershell

    - by w4ymo
    Hello, I am new to PowerShell and I am trying to get branches from TFS and merge them using a PowerShell script. Unfortunately I am failing at the first hurdle. I do have Visual Studio 2010 installed on my local machine and can access the TFS server (also 2010) fine. I am running the script from my local machine and have the following lines:

        $tfs = get-tfs http://TFSServerName:8080/TFSProject
        $branchfolders = $tfs.VCS.GetItems('$/Dev/Branches/', $tfs.RecursionType::OneLevel)

    and I receive the following error on the second line:

        Exception calling "GetItems" with "2" argument(s): "Unable to connect to the remote server"

    I have configured the TFS server to accept incoming connections on port 8080, which works, but I am now not too sure how to resolve this error. Is further configuration required? Thanks for any help given.
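
    For comparison, a minimal sketch against the TFS 2010 object model directly. Note that TFS 2010 URLs address a project collection (e.g. http://server:8080/tfs/DefaultCollection), a change from 2008-style URLs that commonly surfaces as "Unable to connect"; the collection name below is an assumption:

        [Reflection.Assembly]::LoadWithPartialName('Microsoft.TeamFoundation.Client') | Out-Null
        [Reflection.Assembly]::LoadWithPartialName('Microsoft.TeamFoundation.VersionControl.Client') | Out-Null

        # Connect to the project collection, then ask it for the version-control service.
        $uri = [Uri]'http://TFSServerName:8080/tfs/DefaultCollection'
        $collection = [Microsoft.TeamFoundation.Client.TfsTeamProjectCollectionFactory]::GetTeamProjectCollection($uri)
        $vcs = $collection.GetService([Microsoft.TeamFoundation.VersionControl.Client.VersionControlServer])
        $branchfolders = $vcs.GetItems('$/Dev/Branches/',
            [Microsoft.TeamFoundation.VersionControl.Client.RecursionType]::OneLevel)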

    Read the article

  • Background processing in rails

    - by hashpipe
    Hi, this might seem like a FAQ on Stack Overflow, but my requirements are a little different. While I have previously used BackgroundRB and DJ for running background processes in Ruby, my requirement this time is to run some heavy analytics and mathematical computations on a huge set of data, and I need to do this only during about the first 15 days of the month. Going by this, I am tempted to use cron and run a Ruby script to accomplish this goal. What I would like to know / understand is:

    1. Is using cron a good idea? (Because I'm not a system admin, so while I have a basic idea of cron, I'm not overly confident of doing it perfectly.)
    2. Can we somehow modify DJ to run only on the first 15 days of the month (with / without using cron), and then just stop and exit once all the jobs in the queue for the day are over? (I don't want it to ping the DB every time for a new job; whatever jobs are in the queue when DJ starts, that will be all.)

    I'm not sure if I have put the question in the right manner, but any help in this direction will be much appreciated. Thanks
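
    On the cron side, restricting a job to the first 15 days of the month is exactly what the day-of-month field is for. For example, this crontab line (the script path is a placeholder) runs nightly at 02:00 on days 1-15 only:

        0 2 1-15 * * /usr/bin/ruby /path/to/app/script/analytics_job.rb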

    Read the article

  • MSBuild Override Project Reference to resolve to Precompiled Assembly

    - by Ryu
    Situation: I have about 400 csproj files using project references. A separate team wants to fork about 3 of those and incorporate them into a standalone app. I branched the 3 projects of interest, and because the separate team uses a different SVN repo I used svn:externals to pull these projects into the folder of the standalone app. Obviously, since this team uses a different folder structure, the project references no longer resolve.

    Attempted solution: I figured setting the MSBuild properties ReferencePath and AdditionalLibPaths to point to a directory with all the precompiled dependencies would give the project references a fallback point so they resolve correctly. However, that doesn't appear to be the case.

    Question: Does anybody know a way to have a failed ProjectReference resolve to the precompiled DLL? Perhaps point me to an automated tool to convert ProjectReferences to DLL references? Or is there a better way to solve this problem? Thanks
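
    ProjectReference items don't fall back to ReferencePath, since they are resolved by building the referenced project rather than by assembly search. One pattern that can work is conditioning the reference on the project file's presence, so the forked tree silently picks up the prebuilt assembly instead; a sketch (project and DLL names/paths are placeholders):

        <Choose>
          <When Condition="Exists('..\Common\Common.csproj')">
            <ItemGroup>
              <ProjectReference Include="..\Common\Common.csproj" />
            </ItemGroup>
          </When>
          <Otherwise>
            <ItemGroup>
              <Reference Include="Common">
                <HintPath>..\Precompiled\Common.dll</HintPath>
              </Reference>
            </ItemGroup>
          </Otherwise>
        </Choose>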

    Read the article

  • Any tool to make git build every commit to a branch in a separate repository?

    - by Wayne
    A git tool that meets the specs below is needed. Does one already exist? If not, I will create a script and make it available on GitHub for others to use or contribute to. Is there a completely different and better way to solve the need to build/test every commit to a branch in a git repository? Not just the latest, but each one back to a certain starting point.

    Background: Our development environment uses a separate continuous integration server, which is wonderful. However, it is still necessary to do full builds locally on each developer's PC to make sure the commit won't "break the build" when pushed to the CI server. Unfortunately, with auto unit tests, those builds force the developer to wait 10 or 15 minutes every time. To solve this we have set up a "mirror" git repository on each developer PC. We develop in the main repository, and any time a local full build is needed we run a couple of commands in the mirror repository to fetch, check out the commit we want to build, and build. It works lovely, so we can continue working in the main one with the build going in parallel. There's only one main concern now. We want to make sure every single commit builds and tests fine. But we often get busy and neglect to build several fresh commits. Then if the build fails you have to do a bisect or manually build each interim commit to figure out which one broke.

    Requirements for this tool (see the sketch after this list):

    - The tool will look at another repo, origin by default, fetch, and compare all commits that are in branches to 2 lists of commits. One list must hold successfully built commits and the other list commits that failed. It identifies any commits not yet in either list and begins to build them in a loop in the order they were committed. It stops on the first one that fails.
    - The tool appropriately adds each commit to either the successful or failed list after it has attempted to build it.
    - The tool will ignore any "legacy" commits which are prior to the oldest commit in the success list. This logic makes the starting point possible in the next point.
    - Starting point. The tool builds a specific commit so that, if successful, it gets added to the success list. If it is the earliest commit in the success list, it becomes the "starting point", so that none of the commits prior to it are examined for builds.
    - Only linear tree support? Much like bisect, this tool works best on a commit tree which is, at least from its starting point, linear without any merges. That is, it should be a tree which was built and updated entirely via rebase and fast-forward commits. If it fails on one commit in a branch it will stop without building the rest that followed after that one. Instead it will just move on to another branch, if any.
    - The tool must do these steps once by default but allow a parameter to loop, with an option to set how many seconds between loops. Other tools like Hudson or CruiseControl could do more fancy scheduling options.
    - The tool must have good defaults but allow optional control. Which repo? origin by default. Which branches? All of them by default. What tool? By default an executable file to be provided by the user named "buildtest", "buildtest.sh", "buildtest.cmd", or "buildtest.exe" in the root folder of the repository. Loop delay? Run once by default, with an option to loop after a number of seconds between iterations.
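
    A minimal shell sketch of the core loop under the stated assumptions (linear history, a "start" tag marking the starting point, a single origin/master branch for brevity, and the pass/fail lists kept as plain-text files):

        #!/bin/sh
        # Build every commit after 'start' that is not yet recorded, oldest first.
        git fetch origin
        touch .build-passed .build-failed
        for sha in $(git rev-list --reverse start..origin/master); do
          grep -q "$sha" .build-passed .build-failed && continue
          git checkout -q "$sha"
          if ./buildtest; then
            echo "$sha" >> .build-passed
          else
            echo "$sha" >> .build-failed  # stop at the first failure, as specified
            break
          fi
        done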

    Read the article

  • How to setup an Eclipse Project with multiple Subprojects (OSGi-Bundles)

    - by stacker
    Sherlog is an OSGi-based log analyzer. If I import this project as a workspace snapshot I receive lots of projects in my workspace, but I would prefer to have them as subprojects in a single project. The other option would be to check out from SVN, but then I face other problems (I don't know how to set up the dependencies for an automatic build). Does anyone have an idea or good links on this topic? Thanks

    Read the article

  • How to trigger a Symbian C++ application within a J2ME application for Nokia phones using J2ME API?

    - by kennykee
    Hi all, does anyone know how to trigger a Symbian C++ application using any J2ME API call? I have a J2ME application that needs a customized photo-taking application in Symbian C++. The reason for separating this into two applications is that J2ME has a limit on heap size, and the J2ME side needs to know the path of the photo after taking it. Thanks a lot for your help. Regards, Kenny

    Read the article

  • How to determine the (natural) language of a document?

    - by Robert Petermeier
    I have a set of documents in two languages: English and German. There is no usable meta information about these documents; a program can look at the content only. Based on that, the program has to decide which of the two languages the document is written in. Is there any "standard" algorithm for this problem that can be implemented in a few hours' time? Or alternatively, a free .NET library or toolkit that can do this? I know about LingPipe, but it is:

    - Java
    - Not free for "semi-commercial" usage

    This problem seems to be surprisingly hard. I checked out the Google AJAX Language API (which I found by searching this site first), but it was ridiculously bad. Of six German web pages I pointed it at, only one guess was correct. The other guesses were Swedish, English, Danish and French... A simple approach I came up with is to use a list of stop words. My app already uses such a list for German documents in order to analyze them with Lucene.Net. If my app scans the documents for occurrences of stop words from either language, the one with more occurrences would win. A very naive approach, to be sure, but it might be good enough. Unfortunately I don't have the time to become an expert at natural-language processing, although it is an intriguing topic.
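
    The stop-word vote described above fits in a few lines; a sketch (shown in Python for brevity, with deliberately tiny illustrative word lists):

        # Count stop-word hits per language; the language with more hits wins.
        STOP_WORDS = {
            "en": {"the", "and", "of", "to", "is", "in", "that", "it"},
            "de": {"der", "die", "das", "und", "ist", "ein", "nicht", "zu"},
        }

        def guess_language(text):
            words = text.lower().split()
            scores = {lang: sum(w in sw for w in words)
                      for lang, sw in STOP_WORDS.items()}
            return max(scores, key=scores.get)

        print(guess_language("Das ist ein Test und nicht mehr"))  # -> "de"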

    Read the article

  • YUI Compressor and .NET Apps

    - by objektivs
    I want to use YUI Compressor (the original) as part of typical MS build processes (Visual Studio 2008, MSBuild). Does anyone have any guidance or thoughts on this? For example, good ways to incorporate it into the project, what to do with existing CSS and JS references, and the like. I am happy to hear about the benefits of YUI Compressor .NET and alternatives, but I'm more interested in using the original. Thanks Scott
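
    One low-friction option is overriding the AfterBuild target in the .csproj with an Exec task that calls the jar directly; a sketch (the jar version/location and file names are placeholders, and java must be on the PATH):

        <Target Name="AfterBuild">
          <!-- Minify a script in place next to the original. -->
          <Exec Command='java -jar "$(SolutionDir)tools\yuicompressor-2.4.2.jar" -o Scripts\site.min.js Scripts\site.js' />
        </Target>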

    Read the article

  • How to share code with continuous integration

    - by alchemical
    I've just started working in a continuous integration environment (TeamCity). I understand the basic idea of not getting so abstracted out in your code that you are never able to build it to test functionality, etc. However, when there is deep coding going on, occasionally it will take me several days to get buildable code--but in the interim other team members may need to see my code. If I check the code in, it breaks the build. However, if I don't check it in, my team members are unable to see the most recent work. I'm wondering how this situation is best dealt with.

    Read the article

  • Makefiles - Compile all .cpp files in src/ to .o's in obj/, then link to binary in /

    - by Austin Hyde
    So, my project directory looks like this:

        /project
            Makefile
            main
            /src
                main.cpp
                foo.cpp
                foo.h
                bar.cpp
                bar.h
            /obj
                main.o
                foo.o
                bar.o

    What I would like my makefile to do would be to compile all .cpp files in the /src folder to .o files in the /obj folder, then link all the .o files in /obj into the output binary in the root folder /project. The problem is, I have next to no experience with Makefiles, and am not really sure what to search for to accomplish this. Also, is this a "good" way to do this, or is there a more standard approach to what I'm trying to do?
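
    This layout maps cleanly onto a wildcard plus a pattern rule, which is the conventional approach; a minimal sketch (uses make's built-in CXX variable; recipe lines must start with a tab):

        # Gather sources and derive the matching object file names.
        SRCS := $(wildcard src/*.cpp)
        OBJS := $(patsubst src/%.cpp,obj/%.o,$(SRCS))

        # Link all objects into the binary at the project root.
        main: $(OBJS)
        	$(CXX) $(OBJS) -o $@

        # Compile each src/x.cpp to obj/x.o.
        obj/%.o: src/%.cpp
        	$(CXX) -c $< -o $@

        clean:
        	rm -f main obj/*.o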

    Read the article

  • Thoughts on moving to Maven in an enterprise environment

    - by Josh Kerr
    I'm interested in hearing from those who either A) use Maven in an enterprise environment or B) tried to use Maven in an enterprise environment. I work for a large company that is contemplating bringing Maven into our environment. Currently we use OpenMake to build/merge and home-grown software to deploy code to 100+ servers running various platforms (e.g. WAS and JBoss). OpenMake works fine for us; however, Maven does have some ideal features, most important being dependency management, but is it viable in a large environment? Also, what headaches did you incur, if any, in maintaining a Maven environment? Side note, I've read http://stackoverflow.com/questions/861382/why-does-maven-have-such-a-bad-rep, http://stackoverflow.com/questions/303853/what-are-your-impressions-of-maven, and a few other posts. It's interesting seeing the split between developers.

    Read the article
