Search Results

Search found 28207 results on 1129 pages for 'tfs process template'.


  • Where are TFS Alerts stored in the TFS databases? Receiving duplicate alerts after upgrade from 2008 to 2010

    - by MJ Hufford
    I recently performed a migration-upgrade from TFS 2008 to TFS 2010. Almost everything is working properly, but our team is now receiving duplicate email alerts. I'm guessing this is because I used the TFS 2008 Power Tools to set up alerts. After the upgrade, I installed the TFS 2010 Power Tools and noticed that no alerts were configured, so I set up new alerts, and now we get duplicates. Is it possible the old alert configuration is still floating around in the database somewhere?
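
    If it helps to see what alert subscriptions actually exist on the server, the sketch below (an illustration, not from the question) uses the TFS 2010 client object model to list them so stale pre-upgrade entries can be identified. The collection URL is a placeholder, and the exact Subscription members should be verified against your environment before deleting anything.

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.Framework.Client;

        class ListAlertSubscriptions
        {
            static void Main()
            {
                // Example collection URL only -- replace with your own server/collection.
                var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
                    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));

                var eventService = collection.GetService<IEventService>();

                // List every subscription so stale, pre-upgrade entries can be spotted.
                foreach (Subscription subscription in eventService.GetAllEventSubscriptions())
                {
                    Console.WriteLine("{0} {1} {2}",
                        subscription.ID, subscription.EventType, subscription.ConditionString);

                    // Once you are sure a subscription is an old duplicate, remove it by ID:
                    // eventService.UnsubscribeEvent(subscription.ID);
                }
            }
        }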

    Read the article

  • Visual Studio and TFS for Test manager

    - by bisjom
    Hi, I have Visual Studio Test Manager installed on my machine, and TFS is installed on another server. I want to connect to that TFS server with the new VS 2010. Do I need to install the full version of Visual Studio 2010, or just Test Manager? I installed Test Manager and it asks for a URL to add; I added the one we already have, but it does not connect to that site. Do I need to install the full version and TFS on the same machine? Please help. Thanks

    Read the article

  • ReSharper C# Live Template for Read-Only Dependency Property and Routed Event Boilerplate

    - by Bart Read
    Following on from my previous post, where I shared a Live Template for quickly declaring a normal read-write dependency property and its associated property change event boilerplate, here's an unsurprisingly similar template for creating a read-only dependency property.

        #region $PROPNAME$ Read-Only Property and Property Change Routed Event

        private static readonly DependencyPropertyKey $PROPNAME$PropertyKey =
            DependencyProperty.RegisterReadOnly(
                "$PROPNAME$", typeof ( $PROPTYPE$ ), typeof ( $DECLARING_TYPE$ ),
                new PropertyMetadata( $DEF_VALUE$, On$PROPNAME$Changed ) );

        public static readonly DependencyProperty $PROPNAME$Property =
            $PROPNAME$PropertyKey.DependencyProperty;

        public $PROPTYPE$ $PROPNAME$
        {
            get { return ( $PROPTYPE$ ) GetValue( $PROPNAME$Property ); }
            private set { SetValue( $PROPNAME$PropertyKey, value ); }
        }

        public static readonly RoutedEvent $PROPNAME$ChangedEvent =
            EventManager.RegisterRoutedEvent(
                "$PROPNAME$Changed",
                RoutingStrategy.$ROUTINGSTRATEGY$,
                typeof( RoutedPropertyChangedEventHandler< $PROPTYPE$ > ),
                typeof( $DECLARING_TYPE$ ) );

        public event RoutedPropertyChangedEventHandler< $PROPTYPE$ > $PROPNAME$Changed
        {
            add { AddHandler( $PROPNAME$ChangedEvent, value ); }
            remove { RemoveHandler( $PROPNAME$ChangedEvent, value ); }
        }

        private static void On$PROPNAME$Changed(
            DependencyObject d, DependencyPropertyChangedEventArgs e)
        {
            var $DECLARING_TYPE_var$ = d as $DECLARING_TYPE$;

            var args = new RoutedPropertyChangedEventArgs< $PROPTYPE$ >(
                ( $PROPTYPE$ ) e.OldValue,
                ( $PROPTYPE$ ) e.NewValue );
            args.RoutedEvent = $DECLARING_TYPE$.$PROPNAME$ChangedEvent;
            $DECLARING_TYPE_var$.RaiseEvent( args );$END$
        }

        #endregion

    The only real difference here is the addition of the DependencyPropertyKey, which allows your implementation to set the value of the dependency property without exposing the setter code to consumers of your type. You'll probably find that you create read-only dependency properties much less often than read-write properties, but this should still save you some typing when you do need to do so.

    Technorati Tags: resharper, live template, c#, dependency property, read-only, routed events, property change, boilerplate, wpf

    Read the article

  • Partial template specialization of free functions - best practices

    - by Poita_
    As most C++ programmers should know, partial template specialization of free functions is disallowed. For example, the following is illegal C++:

        template <class T, int N>
        T mul(const T& x) { return x * N; }

        template <class T>
        T mul<T, 0>(const T& x) { return T(0); }
        // error: function template partial specialization ‘mul<T, 0>’ is not allowed

    However, partial template specialization of classes/structs is allowed, and can be exploited to mimic the functionality of partial template specialization of free functions. For example, the target objective in the last example can be achieved by using:

        template <class T, int N>
        struct mul_impl
        {
            static T fun(const T& x) { return x * N; }
        };

        template <class T>
        struct mul_impl<T, 0>
        {
            static T fun(const T& x) { return T(0); }
        };

        template <class T, int N>
        T mul(const T& x)
        {
            return mul_impl<T, N>::fun(x);
        }

    It's more bulky and less concise, but it gets the job done -- and as far as users of mul are concerned, they get the desired partial specialization. My question is: when writing templated free functions (that are intended to be used by others), should you automatically delegate the implementation to a static member function of a class, so that users of your library may implement partial specializations at will, or do you just write the templated function the normal way, and live with the fact that people won't be able to specialize them?

    Read the article

  • How can I get TFS 2010 to build each project to a separate directory?

    - by Jonathan Schuster
    In our project, we'd like to have our TFS build put each project into its own folder under the drop folder, instead of dropping all of the files into one flat structure. To illustrate, we'd like to see something like this:

        DropFolder/
            Foo/
                foo.exe
            Bar/
                bar.dll
            Baz/
                baz.dll

    This is basically the same question as was asked here, but now that we're using workflow-based builds, those solutions don't seem to work. The solution using the CustomizableOutDir property looked like it would work best for us, but I can't get that property to be recognized. I customized our workflow to pass it to MSBuild as a command-line argument (/p:CustomizableOutDir=true), but MSBuild seems to ignore it and puts the output into the OutDir given by the workflow. I looked at the build logs, and I can see that the CustomizableOutDir and OutDir properties are both getting set in the command-line arguments to MSBuild. I still need OutDir to be passed in so that I can copy my files to TeamBuildOutDir at the end. Any idea why my CustomizableOutDir parameter isn't getting recognized, or is there a better way to achieve this?

    Read the article

  • TFS: How to detect changed files when loading a solution?

    - by marco.ragogna
    I am new to TFS integration with Visual Studio 2010, and I have a problem I would like to solve. When I open a solution, how can I detect, looking only at the Solution Explorer, which files have changed since I last logged in? I am able to discover the changed files if I look at the Latest column of Source Control Explorer, but it is not very intuitive. I have attached an image for better understanding. I would like to have a different icon, not the lock, for frmAbout.vb (in this case), associated with the item in Solution Explorer. Do you have any idea how I can achieve this behavior? Or some alternatives?

    Read the article

  • Is it possible to exclude some files from check-in (TFS)?

    - by Thomas Wanner
    We use configuration files within various projects under source control (TFS), where each developer has to make some adjustments in his local copy to configure his environment. The build process takes care of replacing the config files with the server configuration as part of the deployment, so it doesn't actually matter what is in the repository. However, we would still like to keep some kind of default, non-breaking version of the config files in the repository, so that, for example, people not involved in the particular project won't run into trouble because of local misconfiguration. We tried to resolve this by introducing a check-in policy that simply forbids checking in the config files. This works fine, but because we're too lazy to always uncheck those checkboxes in the pending changes window, the question is: is it possible to transparently disable the check-in of particular files without keeping them out of source control (e.g. locking their current version)?
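
    For reference, the kind of check-in policy described above is normally built on the TFS custom check-in policy extension point. The rough sketch below is an illustration rather than the poster's actual code, and the member names should be verified against your TFS version; it simply flags any *.config file that is still ticked in the pending changes list.

        using System;
        using System.Collections.Generic;
        using Microsoft.TeamFoundation.VersionControl.Client;

        [Serializable]
        public class NoConfigCheckinPolicy : PolicyBase
        {
            public override string Description
            {
                get { return "Warns when local configuration files are included in a check-in."; }
            }

            public override string Type
            {
                get { return "Forbid config file check-in"; }
            }

            public override string TypeDescription
            {
                get { return "Fails evaluation when a *.config file is part of the checked pending changes."; }
            }

            public override bool Edit(IPolicyEditArgs policyEditArgs)
            {
                // No configuration UI in this sketch.
                return true;
            }

            public override PolicyFailure[] Evaluate()
            {
                var failures = new List<PolicyFailure>();

                // Look only at the changes that are actually ticked in the Pending Changes window.
                foreach (PendingChange change in PendingCheckin.PendingChanges.CheckedPendingChanges)
                {
                    if (change.FileName.EndsWith(".config", StringComparison.OrdinalIgnoreCase))
                    {
                        failures.Add(new PolicyFailure(
                            "Configuration file should not be checked in: " + change.FileName, this));
                    }
                }

                return failures.ToArray();
            }
        }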

    Read the article

  • The Problem Should Define the Process, Not the Tool

    - by thatjeffsmith
    All around awesome tool, but not the only gadget in your toolbox. I'm stepping down from my SQL Developer pulpit today and standing up on my philosophical soap box. I'm frequently asked to help folks transition from one set of database tools over to Oracle SQL Developer, which I'm MORE than happy to do. But I'm not looking to simply change the way people interact with Oracle Database. What I care about is your productivity. Is there a faster, more efficient way for you to connect the dots, get from A to B, or just get home to your kids or to the pub for happy hour? If you have defined a business process around a specific tool, what happens when that tool 'goes away'? Does the business stop? No, you feel immediate pain until you are able to re-implement the process using another mechanism. Where I get confused, or even frustrated, is when someone asks me to redesign our tool to match their problem. Tools are just tools. Saying you 'can't load your data anymore because XYZ' isn't valid when you could easily do that same task via SQL*Loader, Create Table As Selects, or 9 other different mechanisms. Sometimes change brings opportunity for improvement in the process. Don't be afraid to step back and re-evaluate a problem with a fresh set of eyes. Just trying to replicate your process in another tool exactly as it was done in the 'old tool' doesn't always make sense. Quick sidebar: scheduling a Windows program to kick off thousands if not millions of table inserts from Excel, versus using a 'proper' server process using SQL*Loader and/or external tables, means sacrificing scalability and reliability for convenience. Don't let old habits blind you to new solutions and possibilities. Of course I'm not going to sit here and say that our tools aren't deficient in some areas or can't be improved upon. But I bet if we work together we can find something that's not only better for the business, but is also better for you. What do you 'miss' since you've started using SQL Developer as your primary Oracle database tool? I'd love to start a thread here and share ideas on how we can better serve you and your organization's needs. The end solution might not look exactly like what you have in mind starting out, but I had no idea I'd be a Product Manager when I started college either. What can you no longer 'do' since you picked up SQL Developer? What hurts more than it should? What keeps you from being great versus just good?

    Read the article

  • Why Does TFS Allow Orphaned Content and How Do I Get Rid of It?

    - by Chad
    My TfsVersionControl database has grown to 40+ GB in size. We recently did a TFS Destroy on a folder tree that should have cleared up at least 10 GB, but instead it seemed to have no effect. When I look at the tables in TfsVersionControl, I am first shocked to see that there are no foreign keys at all in the database. Running a few queries, I see that there is some orphaning going on:

        tbl_Content has 13.9 GB of records that don't have a related tbl_File record
        tbl_File and tbl_Content have 2.4 GB that don't have a related tbl_Namespace record

    The cleanup job seems to be running nightly (prc_DeleteUnusedContent), and running it against the database manually doesn't remove any orphans. I see in the log for the cleanup job that it failed on 3/16, which is the morning after I destroyed the large amount of data. The error was due to a full transaction log. Could that error be the reason I'm left with all this orphaned data that can't be deleted? How can I permanently destroy this unneeded content?

    Read the article

  • Template Syntax in C++

    - by Crystal
    I don't really understand templates and was trying to write a simple "find the minimum" for ints, doubles and chars. First question: why is template<typename T> sometimes used, and other times template<>? Second question: I do not know what I am doing wrong with the following code:

        #include <iostream>

        template <typename T>
        T minimum(T arg1, T arg2)
        {
            return arg1 < arg2 ? arg1 : arg2;
        }

        template <typename T> // first I tried template <> instead of the above, but wasn't sure of the difference
        T minimum<const char *>(const char *arg1, const char *arg2)
        {
            return strcmp(arg1, arg2) ? arg2 : arg1;
        }

        int main()
        {
            std::cout << minimum<int>(4, 2) << '\n';
            std::cout << minimum<double>(2.2, -56.7) << '\n';
            std::cout << minimum(2.2, 2) << '\n';
        }

    Compile errors:

        error C2768: 'minimum' : illegal use of explicit template arguments
        error C2783: 'T minimum(const char *,const char *)' : could not deduce template argument for 'T' : see declaration of 'minimum'
        error C2782: 'T minimum(T,T)' : template parameter 'T' is ambiguous : see declaration of 'minimum'

    Third, in getting familiar with separating .h and .cpp files, if I wanted this minimum() function to be a static function of my class, but it was the only function in that class, would I have to have a template class as well? I originally tried doing it that way instead of having it all in one file, and I got some compile errors as well that I can't remember right now, and I was unsure how I would do that. Thanks!

    Read the article

  • How to set AssemblyInfo.cs based on the TFS project build number?

    - by Ahok Rudraraju
    The project is hosted on a TFS server, and I need to access the build number, which I assume is automatically generated whenever you build a project. I need to retrieve that build number and display it on the web pages so that the QA and testing people know exactly which build they are working on. I found how to create customized build numbers in the following link: http://msdn.microsoft.com/en-us/library/aa395241(v=vs.100).aspx but it does not solve my problem, as I do not have access to the build definition file. I am looking for some kind of post-deployment task which can access the build number, or maybe generate one, and probably write it to a file from where I can read it. I don't know if that makes sense, as this is my first time working on .NET.
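
    One low-friction option, assuming the build already stamps the assemblies, is to read the version back at runtime instead of writing a separate file. The sketch below is only an illustration of that idea (the class name is invented); it uses the standard .NET calls for reading the file version that a build typically writes into AssemblyFileVersion.

        using System.Diagnostics;
        using System.Reflection;

        public static class BuildInfo
        {
            // Returns something like "1.0.0.1234", where the last part is the build increment,
            // assuming the build process writes the TFS build number into AssemblyFileVersion.
            public static string GetFileVersion()
            {
                Assembly assembly = typeof(BuildInfo).Assembly;
                FileVersionInfo info = FileVersionInfo.GetVersionInfo(assembly.Location);
                return info.FileVersion;
            }
        }

    A page footer or master page could then call BuildInfo.GetFileVersion() and render the result for the QA team, without needing access to the build definition at all.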

    Read the article

  • Visual Studio 2013 Static Code Analysis in depth: What? When and How?

    - by Hosam Kamel
    In this post I'll illustrate the following points in detail:

        What is static code analysis?
        When to use?
        Supported platforms
        Supported Visual Studio versions
        How to use
            Run Code Analysis Manually
            Run Code Analysis Automatically
            Run Code Analysis while checking in source code to TFS version control (TFSVC)
            Run Code Analysis as part of Team Build
        Understand the Code Analysis results & learn how to fix them
        Create your custom rule set
        Q & A
        References

    What is static code analysis?

    The Static Code Analysis feature of Visual Studio performs static analysis on code to help developers identify potential design, globalization, interoperability, performance, security, and many other categories of potential problems, according to Microsoft's rules that mainly target best practices in writing code. There is a large set of those rules included with Visual Studio, grouped into different categories targeting specific coding issues like security, design, interoperability, globalization and others. Static here means analyzing the source code without executing it, and this type of analysis can be performed through automated tools (like the Visual Studio 2013 Code Analysis tool) or manually through code review, which is already supported in Visual Studio 2012 and 2013 (check the Using Code Review to Improve Quality video on Channel9). There is also dynamic analysis, which is performed on executing programs using software testing techniques such as code coverage.

    When to use?

    Running the Code Analysis tool at regular intervals during your development process can enhance the quality of your software; examining your code for a set of common defects and violations is always a good programming practice. In addition, code analysis can find defects in your code that are difficult to discover through testing, allowing you to achieve a first-level quality gate for your application during the development phase, before you release it to the testing team.

    Supported platforms

    .NET Framework, native (C and C++) and database applications.

    Supported Visual Studio versions

    All versions of Visual Studio starting with Visual Studio 2013 (except Visual Studio Test Professional); check the feature comparisons. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate.

    How to use?

    Code Analysis can be run manually at any time from within the Visual Studio IDE, or even set up to run automatically as part of a Team Build or a check-in policy for Team Foundation Server.

    Run Code Analysis Manually

    To run code analysis manually on a project, on the Analyze menu, click Run Code Analysis on your project, or simply right-click the project name in Solution Explorer and choose Run Code Analysis from the context menu.

    Run Code Analysis Automatically

    To run code analysis each time that you build a project, select Enable Code Analysis on Build on the project's property page.

    Run Code Analysis while checking in source code to TFS version control (TFSVC)

    Team Foundation Version Control (TFVC) provides a way for organizations to enforce practices that lead to better code and more efficient group development through check-in policies, which are rules that are set at the team project level and enforced on developer computers before code is allowed to be checked in.
    (This is available only if you're using Team Foundation Server.) Required permissions on Team Foundation Server: you must have the Edit project-level information permission set to Allow; typically your account must be part of Project Administrators or Project Collection Administrators. For more information about Team Foundation permissions, check http://msdn.microsoft.com/en-us/library/ms252587(v=vs.120).aspx

    In Team Explorer, right-click the team project name, point to Team Project Settings, and then click Source Control. In the Source Control dialog box, select the Check-in Policy tab. Click Add to create a new check-in policy, or double-click the existing Code Analysis item in the Policy Type list to change the policy. Check or uncheck the policy options based on the configuration you need, as illustrated below:

        Enforce check-in to only contain files that are part of current solution: code analysis can run only on files specified in solution and project configuration files. This policy guarantees that all code that is part of a solution is analyzed.

        Enforce C/C++ Code Analysis (/analyze): requires that all C or C++ projects be built with the /analyze compiler option to run code analysis before they can be checked in.

        Enforce Code Analysis for Managed Code: requires that all managed projects run code analysis and build before they can be checked in.

    Check the Code Analysis rule set reference on MSDN. What is a rule set? A rule set is a group of code analysis rules, like the example below, where Microsoft.Design is the rule set name and "Do not declare static members on generic types" is the code analysis rule.

    Once you have configured the analysis rules, the policy is enabled for all the team members in this project; whenever a team member checks in any source code to TFSVC, the policy section will highlight the Code Analysis policy as below. TFS is a very extensible platform, so you can simply implement your own custom Code Analysis check-in policy; check this link for more details: http://msdn.microsoft.com/en-us/library/dd492668.aspx. You also have to be aware of compatibility between different TFS versions; check http://msdn.microsoft.com/en-us/library/bb907157.aspx

    Run Code Analysis as part of Team Build

    With Team Foundation Build (TFBuild), you can create and manage build processes that automatically compile and test your applications, and perform other important functions. Code Analysis can be enabled in the build definition by selecting the correct value for the build process parameter "Perform Code Analysis". Once configured, kick off your build definition to queue a new build; Code Analysis will run as part of the build workflow and you will be able to see code analysis warnings as part of the build report.

    Understand the Code Analysis results & learn how to fix them

    Now that we have been through the Code Analysis configurations and the different ways of running it, we will go through the Code Analysis results: how to understand them and how to resolve them. The Code Analysis window in Visual Studio shows all the analysis results based on the rule sets you configured in the project file properties. Let's dig into what each result item contains:

        1. Check ID: the unique identifier for the rule. CheckId and Category are used for in-source suppression of a warning.
        2. Title: the title of the warning message.
        3. Description: a description of the problem or suggested fix.
        4. File Name: the file name and the line number of the code which violates the code analysis rule set.
        5. Category: the code analysis category for this error.
        6. Warning/Error: depends on how you configure it in the rule set; the default is Warning level.
        7. Action:
            Copy: copy the warning information to the clipboard.
            Create Work Item: if you're connected to Team Foundation Server, you can create a work item (most probably a Task or Bug) and assign it to a developer to fix a certain code analysis warning.
            Suppress Message: there are times when you might decide not to fix a code analysis warning. You might decide that resolving the warning requires too much recoding in relation to the probability that the issue will arise in any real-world implementation of your code. Or you might believe that the analysis that is used in the warning is inappropriate for the particular context. You can suppress individual warnings so that they no longer appear in the Code Analysis window. Two options are available: In Source inserts a SuppressMessage attribute in the source file above the method that generated the warning, which makes the suppression more discoverable; In Suppression File adds a SuppressMessage attribute to the GlobalSuppressions.cs file of the project, which can make the management of suppressions easier. Note that the SuppressMessage attribute added to GlobalSuppressions.cs also targets the method that generated the warning. It does not suppress the warning globally.

    Visual Studio makes it very easy to fix a Code Analysis warning: if you are not sure how to fix it, click on the Check ID hyperlink and you'll be directed to MSDN online (or a local copy, based on the configuration you chose while installing Visual Studio), where you will find all the information about the warning, including how to fix it.

    Create a Custom Code Analysis Rule Set

    The Microsoft standard rule sets provide groups of rules that are organized by function and depth. For example, the Microsoft Basic Design Guidelines Rules and the Microsoft Extended Design Guidelines Rules contain rules that focus on usability and maintainability issues, with added emphasis on naming rules in the Extended rule set. You can create and modify a custom rule set to meet specific project needs associated with code analysis. To create a custom rule set, you open one or more standard rule sets in the rule set editor. Creating and modifying a custom rule set requires Visual Studio Premium or Ultimate. You can check How to: Create a Custom Rule Set on MSDN for more details: http://msdn.microsoft.com/en-us/library/dd264974.aspx

    Q & A

        Visual Studio static code analysis vs. FxCop vs. StyleCop: http://www.excella.com/blog/stylecop-vs-fxcop-difference-between-code-analysis-tools/
        Code Analysis for SharePoint Apps and SPDisposeCheck? This post lists some of the rule sets you can run specifically for SharePoint applications and how to integrate SPDisposeCheck as well.
        Code Analysis for SQL Server Database Projects? This post illustrates how to run static code analysis on T-SQL through SSDT.
        ReSharper 8 vs. Visual Studio 2013? This document lists some of the features that are provided by ReSharper 8 but are missing or not as fully implemented in Visual Studio 2013.
    References

        A Few Billion Lines of Code Later: Using Static Analysis to Find Bugs in the Real World
        http://cacm.acm.org/magazines/2010/2/69354-a-few-billion-lines-of-code-later/fulltext

        What is New in Code Analysis for Visual Studio 2013
        http://blogs.msdn.com/b/visualstudioalm/archive/2013/07/03/what-is-new-in-code-analysis-for-visual-studio-2013.aspx

        Analyze the code quality of Windows Store apps using Visual Studio static code analysis
        http://msdn.microsoft.com/en-us/library/windows/apps/hh441471.aspx

        [Hands-on-lab] Using Code Analysis with Visual Studio 2012 to Improve Code Quality
        http://download.microsoft.com/download/A/9/2/A9253B14-5F23-4BC8-9C7E-F5199DB5F831/Using%20Code%20Analysis%20with%20Visual%20Studio%202012%20to%20Improve%20Code%20Quality.docx

    Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel
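
    As a concrete footnote to the suppression options described in the article, here is a small illustrative example of the in-source SuppressMessage attribute; the class, rule ID and justification shown are examples rather than anything from the original post.

        using System;
        using System.Diagnostics.CodeAnalysis;

        public class OrderService
        {
            // The SuppressMessage attribute is compiled in only when the CODE_ANALYSIS symbol
            // is defined, so it adds no runtime cost to release builds.
            [SuppressMessage("Microsoft.Design", "CA1062:ValidateArgumentsOfPublicMethods",
                Justification = "Arguments are validated by the calling framework.")]
            public void Process(string orderId)
            {
                // Dereferencing the parameter without a null check is what CA1062 would normally flag.
                Console.WriteLine("Processing order: " + orderId.ToUpperInvariant());
            }
        }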

    Read the article

  • JMS Step 5 - How to Create an 11g BPEL Process Which Reads a Message Based on an XML Schema from a JMS Queue

    - by John-Brown.Evans
    Welcome to another post in the series of blogs which demonstrates how to use JMS queues in a SOA context. The previous posts were:

        JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g
        JMS Step 2 - Using the QueueSend.java Sample Program to Send a Message to a JMS Queue
        JMS Step 3 - Using the QueueReceive.java Sample Program to Read a Message from a JMS Queue
        JMS Step 4 - How to Create an 11g BPEL Process Which Writes a Message Based on an XML Schema to a JMS Queue

    Today we will create a BPEL process which will read (dequeue) the message from the JMS queue, which we enqueued in the last example. The JMS adapter will dequeue the full XML payload from the queue.

    1. Recap and Prerequisites

    In the previous examples, we created a JMS Queue, a Connection Factory and a Connection Pool in the WebLogic Server Console. Then we designed and deployed a BPEL composite, which took a simple XML payload and enqueued it to the JMS queue. In this example, we will read that same message from the queue, using a JMS adapter and a BPEL process.
    As many of the configuration steps required to read from that queue were done in the previous samples, this one will concentrate on the new steps. A summary of the required objects is listed below. To find out how to create them, please see the previous samples, which also include instructions on how to verify that the objects are set up correctly.

    WebLogic Server Objects

        Object Name            Type                JNDI Name
        TestConnectionFactory  Connection Factory  jms/TestConnectionFactory
        TestJMSQueue           JMS Queue           jms/TestJMSQueue
        eis/wls/TestQueue      Connection Pool     eis/wls/TestQueue

    Schema XSD File

    The following XSD file is used for the message format. It was created in the previous example and will be copied to the new process.

    stringPayload.xsd

        <?xml version="1.0" encoding="windows-1252" ?>
        <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                    xmlns="http://www.example.org"
                    targetNamespace="http://www.example.org"
                    elementFormDefault="qualified">
          <xsd:element name="exampleElement" type="xsd:string">
          </xsd:element>
        </xsd:schema>

    JMS Message

    After executing the previous samples, the following XML message should be in the JMS queue located at jms/TestJMSQueue:

        <?xml version="1.0" encoding="UTF-8" ?><exampleElement xmlns="http://www.example.org">Test Message</exampleElement>

    JDeveloper Connection

    You will need a valid Application Server Connection in JDeveloper pointing to the SOA server which the process will be deployed to.

    2. Create a BPEL Composite with a JMS Adapter Partner Link

    In the previous example, we created a composite in JDeveloper called JmsAdapterWriteSchema. In this one, we will create a new composite called JmsAdapterReadSchema. There are probably many ways of incorporating a JMS adapter into a SOA composite for incoming messages. One way is to design the process in such a way that the adapter polls for new messages and, when it dequeues one, initiates a SOA or BPEL instance. This is possibly the most common use case. Other use cases include mid-flow adapters, which are activated from within the BPEL process. In this example we will use a polling adapter, because it is the simplest to set up and demonstrate. But it has one disadvantage as a demonstrative model. When a polling adapter is active, it will dequeue all messages as soon as they reach the queue. This makes it difficult to monitor messages we are writing to the queue, because they will disappear from the queue as soon as they have been enqueued. To work around this, we will shut down the composite after deploying it and restart it as required. (Another solution for this would be to pause consumption for the queue and resume it again when needed. This can be done in the WLS console under JMS Modules -> queue -> Control -> Consumption -> Pause/Resume.) We will model the composite as a one-way incoming process. Usually, a BPEL process will do something useful with the message after receiving it, such as passing it to a database or file adapter, a human workflow or an external web service. But we only want to demonstrate how to dequeue a JMS message using BPEL and a JMS adapter, so we won't complicate the design with further activities. However, we do want to be able to verify that we have read the message correctly, so the BPEL process will include a small piece of embedded Java code, which will print the message to standard output, so we can view it in the SOA server's log file. Alternatively, you can view the instance in the Enterprise Manager and verify the message.
The following steps are all executed in JDeveloper. Create the project in the same JDeveloper application used for the previous examples or create a new one. Create a SOA Project Create a new project and choose SOA Tier > SOA Project as its type. Name it JmsAdapterReadSchema. When prompted for the composite type, choose Empty Composite. Create a JMS Adapter Partner Link In the composite editor, drag a JMS adapter over from the Component Palette to the left-hand swim lane, under Exposed Services. This will start the JMS Adapter Configuration Wizard. Use the following entries: Service Name: JmsAdapterRead Oracle Enterprise Messaging Service (OEMS): Oracle WebLogic JMS AppServer Connection: Use an application server connection pointing to the WebLogic server on which the JMS queue and connection factory mentioned under Prerequisites above are located. Adapter Interface > Interface: Define from operation and schema (specified later) Operation Type: Consume Message Operation Name: Consume_message Consume Operation Parameters Destination Name: Press the Browse button, select Destination Type: Queues, then press Search. Wait for the list to populate, then select the entry for TestJMSQueue , which is the queue created in a previous example. JNDI Name: The JNDI name to use for the JMS connection. As in the previous example, this is probably the most common source of error. This is the JNDI name of the JMS adapter’s connection pool created in the WebLogic Server and which points to the connection factory. JDeveloper does not verify the value entered here. If you enter a wrong value, the JMS adapter won’t find the queue and you will get an error message at runtime, which is very difficult to trace. In our example, this is the value eis/wls/TestQueue . (See the earlier step on how to create a JMS Adapter Connection Pool in WebLogic Server for details.) Messages/Message SchemaURL: We will use the XSD file created during the previous example, in the JmsAdapterWriteSchema project to define the format for the incoming message payload and, at the same time, demonstrate how to import an existing XSD file into a JDeveloper project. Press the magnifying glass icon to search for schema files. In the Type Chooser, press the Import Schema File button. Select the magnifying glass next to URL to search for schema files. Navigate to the location of the JmsAdapterWriteSchema project > xsd and select the stringPayload.xsd file. Check the “Copy to Project” checkbox, press OK and confirm the following Localize Files popup. Now that the XSD file has been copied to the local project, it can be selected from the project’s schema files. Expand Project Schema Files > stringPayload.xsd and select exampleElement: string . Press Next and Finish, which will complete the JMS Adapter configuration.Save the project. Create a BPEL Component Drag a BPEL Process from the Component Palette (Service Components) to the Components section of the composite designer. Name it JmsAdapterReadSchema and select Template: Define Service Later and press OK. Wire the JMS Adapter to the BPEL Component Now wire the JMS adapter to the BPEL process, by dragging the arrow from the adapter to the BPEL process. A Transaction Properties popup will be displayed. Set the delivery mode to async.persist. This completes the steps at the composite level. 3 . 
Complete the BPEL Process Design Invoke the BPEL Flow via the JMS Adapter Open the BPEL component by double-clicking it in the design view of the composite.xml, or open it from the project navigator by selecting the JmsAdapterReadSchema.bpel file. This will display the BPEL process in the design view. You should see the JmsAdapterRead partner link in the left-hand swim lane. Drag a Receive activity onto the BPEL flow diagram, then drag a wire (left-hand yellow arrow) from it to the JMS adapter. This will open the Receive activity editor. Auto-generate the variable by pressing the green “+” button and check the “Create Instance” checkbox. This will result in a BPEL instance being created when a new JMS message is received. At this point it would actually be OK to compile and deploy the composite and it would pick up any messages from the JMS queue. In fact, you can do that to test it, if you like. But it is very rudimentary and would not be doing anything useful with the message. Also, you could only verify the actual message payload by looking at the instance’s flow in the Enterprise Manager. There are various other possibilities; we could pass the message to another web service, write it to a file using a file adapter or to a database via a database adapter etc. But these will all introduce unnecessary complications to our sample. So, to keep it simple, we will add a small piece of Java code to the BPEL process which will write the payload to standard output. This will be written to the server’s log file, which will be easy to monitor. Add a Java Embedding Activity First get the full name of the process’s input variable, as this will be needed for the Java code. Go to the Structure pane and expand Variables > Process > Variables. Then expand the input variable, for example, "Receive1_Consume_Message_InputVariable > body > ns2:exampleElement”, and note variable’s name and path, if they are different from this one. Drag a Java Embedding activity from the Component Palette (Oracle Extensions) to the BPEL flow, after the Receive activity, then open it to edit. Delete the example code and replace it with the following, replacing the variable parts with those in your sample, if necessary.: System.out.println("JmsAdapterReadSchema process picked up a message"); oracle.xml.parser.v2.XMLElement inputPayload =    (oracle.xml.parser.v2.XMLElement)getVariableData(                           "Receive1_Consume_Message_InputVariable",                           "body",                           "/ns2:exampleElement");   String inputString = inputPayload.getFirstChild().getNodeValue(); System.out.println("Input String is " + inputPayload.getFirstChild().getNodeValue()); Tip. If you are not sure of the exact syntax of the input variable, create an Assign activity in the BPEL process and copy the variable to another, temporary one. Then check the syntax created by the BPEL designer. This completes the BPEL process design in JDeveloper. Save, compile and deploy the process to the SOA server. 3. Test the Composite Shut Down the JmsAdapterReadSchema Composite After deploying the JmsAdapterReadSchema composite to the SOA server it is automatically activated. If there are already any messages in the queue, the adapter will begin polling them. To ease the testing process, we will deactivate the process first Log in to the Enterprise Manager (Fusion Middleware Control) and navigate to SOA > soa-infra (soa_server1) > default (or wherever you deployed your composite to) and click on JmsAdapterReadSchema [1.0] . 
Press the Shut Down button to disable the composite and confirm the following popup. Monitor Messages in the JMS Queue In a separate browser window, log in to the WebLogic Server Console and navigate to Services > Messaging > JMS Modules > TestJMSModule > TestJMSQueue > Monitoring. This is the location of the JMS queue we created in an earlier sample (see the prerequisites section of this sample). Check whether there are any messages already in the queue. If so, you can dequeue them using the QueueReceive Java program created in an earlier sample. This will ensure that the queue is empty and doesn’t contain any messages in the wrong format, which would cause the JmsAdapterReadSchema to fail. Send a Test Message In the Enterprise Manager, navigate to the JmsAdapterWriteSchema created earlier, press Test and send a test message, for example “Message from JmsAdapterWriteSchema”. Confirm that the message was written correctly to the queue by verifying it via the queue monitor in the WLS Console. Monitor the SOA Server’s Output A program deployed on the SOA server will write its standard output to the terminal window in which the server was started, unless this has been redirected to somewhere else, for example to a file. If it has not been redirected, go to the terminal session in which the server was started, otherwise open and monitor the file to which it was redirected. Re-Enable the JmsAdapterReadSchema Composite In the Enterprise Manager, navigate to the JmsAdapterReadSchema composite again and press Start Up to re-enable it. This should cause the JMS adapter to dequeue the test message and the following output should be written to the server’s standard output: JmsAdapterReadSchema process picked up a message. Input String is Message from JmsAdapterWriteSchema Note that you can also monitor the payload received by the process, by navigating to the the JmsAdapterReadSchema’s Instances tab in the Enterprise Manager. Then select the latest instance and view the flow of the BPEL component. The Receive activity will contain and display the dequeued message too. 4 . Troubleshooting This sample demonstrates how to dequeue an XML JMS message using a BPEL process and no additional functionality. For example, it doesn’t contain any error handling. Therefore, any errors in the payload will result in exceptions being written to the log file or standard output. If you get any errors related to the payload, such as Message handle error ... ORABPEL-09500 ... XPath expression failed to execute. An error occurs while processing the XPath expression; the expression is /ns2:exampleElement. ... etc. check that the variable used in the Java embedding part of the process was entered correctly. Possibly follow the tip mentioned in previous section. If this doesn’t help, you can delete the Java embedding part and simply verify the message via the flow diagram in the Enterprise Manager. Or use a different method, such as writing it to a file via a file adapter. This concludes this example. In the next post, we will begin with an AQ JMS example, which uses JMS to write to an Advanced Queue stored in the database. Best regards John-Brown Evans Oracle Technology Proactive Support Delivery

    Read the article

  • How do I structure code and builds for continuous delivery of multiple applications in a small team?

    - by kingdango
    Background: 3-5 developers supporting (and building new) internal applications for a non-software company. We use TFS, although I don't think that matters much for my question. I want to be able to develop a deployment pipeline and adopt continuous integration / deployment techniques. Here's what our source tree looks like right now. We use a single TFS Team Project.

        $/MAIN/src/
        $/MAIN/src/ApplicationA/VSSOlution.sln
        $/MAIN/src/ApplicationA/ApplicationAProject1.csproj
        $/MAIN/src/ApplicationA/ApplicationAProject2.csproj
        $/MAIN/src/ApplicationB/...
        $/MAIN/src/ApplicationC
        $/MAIN/src/SharedInfrastructureA
        $/MAIN/src/SharedInfrastructureB

    My goal (a pretty typical promotion pipeline):

        1. When a code change is made to a given application, I want to be able to build that application and auto-deploy that change to a DEV server. I may also need to build dependencies on shared infrastructure components, and I often have some database scripts or changes as well.
        2. If developer testing passes, I want a manually triggered but automated deploy of that build on a STAGING server where end users will review new functionality.
        3. Once it's approved by end users, I want a manually triggered auto-deploy to production.

    Question: How can I best adopt continuous deployment techniques in a multi-application environment? A lot of the advice I see is single-application-specific; how is that best applied to multiple applications? For step 1, do I simply set up a separate Team Build for each application? What's the best approach to accomplishing steps 2 and 3 of promoting the latest build to new environments? I've seen this work well with web apps, but what about database changes?

    Read the article

  • SSH server not working (respawns until stopped)

    - by Khaled
    I have a running Ubuntu Server 10.04.1. When I tried to log in to the server via SSH, I could not; instead, I got a connection refused error. I tried to ping the machine and got a reply! So, the clear reason is that the SSH daemon is stopped. After a reboot, I was able to log in to my server via SSH. After some time, I looked at my logs in /var/log/syslog and found the following records:

        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2465) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2469) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2473) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2477) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2481) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2485) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2489) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2493) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2497) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh main process ended, respawning
        Jan 16 10:57:09 myserver init: ssh main process (2501) terminated with status 255
        Jan 16 10:57:09 myserver init: ssh respawning too fast, stopped

    I searched for a similar problem/solution. Some people said that this is caused by the SSH daemon trying to start before networking, and they suggest changing ListenAddress in /etc/ssh/sshd_config to 0.0.0.0. I think this is not the cause in my case, because my problem occurs after the system is up and running. Any idea what is causing this? This is Ubuntu Server and it should be running and accessible remotely using SSH.

    Read the article

  • TFS 2010 and SSL Configuration: Part 1 Certificates in Place

    - by Enrique Lima
    What is needed? For starters, an understanding of how to properly configure a certificate in IIS; many people have found challenges in working with certificates in IIS 7. The first thing is to get your certificate created, and then proceed to add it to IIS 7. By clicking on Server Certificates, we get to the certificates view, where we are able to see the certificate or certificates installed. What options do we have to get certificates? They can be generated "in-house" or purchased from certification authorities. If it is "in-house", you will find those options in the Server Certificates area of IIS Manager.

    Read the article

  • Role of "Refactoring" in good programming pratices?

    - by Niranjan Kala
    I have learned in Agile development that: "Refactoring is the process of clarifying and simplifying the design of existing code, without changing its behavior." I have heard about some GUI refactoring tools like ReSharper and DevExpress Refactor! Pro. Here are my questions: Question 1: How does refactoring take place in the software development process, and how far does it affect the system? Question 2: Does refactoring with these tools really speed up development and maintenance?
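
    To make that definition concrete, here is a small illustrative sketch (the names and scenario are invented) of a behavior-preserving refactoring: an Extract Method step of the kind tools such as ReSharper automate. Callers get exactly the same results before and after; only the internal design changes.

        public static class OrderPricing
        {
            // Before: one method that validates, calculates and formats in a single block.
            public static string DescribeTotal(decimal unitPrice, int quantity)
            {
                if (quantity <= 0) { throw new System.ArgumentOutOfRangeException("quantity"); }
                decimal total = unitPrice * quantity;
                return string.Format("Total: {0:C}", total);
            }

            // After: the calculation is pulled out into a named method ("Extract Method").
            // The design is clearer, yet every caller still gets exactly the same result.
            public static string DescribeTotalRefactored(decimal unitPrice, int quantity)
            {
                if (quantity <= 0) { throw new System.ArgumentOutOfRangeException("quantity"); }
                return string.Format("Total: {0:C}", CalculateTotal(unitPrice, quantity));
            }

            private static decimal CalculateTotal(decimal unitPrice, int quantity)
            {
                return unitPrice * quantity;
            }
        }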

    Read the article

  • ReSharper C# Live Template for Dependency Property and Property Change Routed Event Boilerplate Code

    - by Bart Read
    I don't know about you but it took me about 5 seconds to get royally fed up of typing the boilerplate code necessary for creating WPF (and Silverlight) dependency properties and, if you want them, their associated property change routed events. Being a ReSharper user, I wondered if there was any live template for doing this. It turns out there's nothing built in, but there are many examples of templates for creating dependency properties out there on the web, such as this excellent one from Roy...(read more)
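
    For readers who haven't seen it, this is roughly the boilerplate the post refers to: a standard read-write WPF dependency property with a property-changed callback. The class and property names below are illustrative, not taken from the template itself.

        using System.Windows;

        public class TemperatureControl : DependencyObject
        {
            public static readonly DependencyProperty TemperatureProperty =
                DependencyProperty.Register(
                    "Temperature",
                    typeof(double),
                    typeof(TemperatureControl),
                    new PropertyMetadata(0.0, OnTemperatureChanged));

            // CLR wrapper so the property can be used like a normal property from code and XAML.
            public double Temperature
            {
                get { return (double)GetValue(TemperatureProperty); }
                set { SetValue(TemperatureProperty, value); }
            }

            private static void OnTemperatureChanged(
                DependencyObject d, DependencyPropertyChangedEventArgs e)
            {
                // React to the changed value here (e.g. raise a routed event).
            }
        }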

    Read the article

  • Restarting shell script with &disown using Monit

    - by Solas Admin
    I have a shell script that runs a C++ backend mail system (PluginHandler). I need to monitor this process in Monit and restart it if it fails. The script:

        export LD_LIBRARY_PATH=/usr/local/lib/:/CONFIDENTAL/CONFIDENTAL/Common/
        cd PluginHandler/
        ./PluginHandler

    This script does not have a PID file, and we run it by executing ./rundaemon.sh &disown. ./PluginHandler starts the process and starts logging into /etc/output/output.log. I stop the process by identifying the process ID with [ps -f | grep PluginHandler] and then killing the process. I can check the process in Monit just fine, but I think Monit starts the process if it is not running yet cannot do &disown, so the process ends as soon as it starts. This is the code in the monitrc file for checking this process:

        check process Backend matching "PluginHandler"
            if not exist then alert
            start "PATH/TO/SCRIPT/rundaemon.sh &disown"
            alert [email protected] only on {timeout} with mail-format {subject: "[BLAH"}

    I tried to stop the script from terminating by modifying it as follows, but this does not work either:

        export LD_LIBRARY_PATH=/usr/local/lib/:/home/CONFIDENTAL/production/CONFIDENTAL/Common/
        cd PluginHandler/
        (nohup ./PluginHandler &)
        return

    Any help writing proper Monit rules to resolve this issue would be greatly appreciated :)

    Read the article

  • Team Foundation Server (TFS) Team Build Custom Activity C# Code for Assembly Stamping

    - by Bob Hardister
    For the full context and guidance on how to develop and implement a custom activity in Team Build, see the Microsoft Visual Studio Rangers Team Foundation Build Customization Guide V.1 at http://vsarbuildguide.codeplex.com/. There are many ways to stamp or set the version number of your assemblies; this approach is based on the build number.

        namespace CustomActivities
        {
            using System;
            using System.Activities;
            using System.IO;
            using System.Text.RegularExpressions;
            using Microsoft.TeamFoundation.Build.Client;

            [BuildActivity(HostEnvironmentOption.Agent)]
            public sealed class VersionAssemblies : CodeActivity
            {
                /// <summary>
                /// AssemblyInfoFileMask
                /// </summary>
                [RequiredArgument]
                public InArgument<string> AssemblyInfoFileMask { get; set; }

                /// <summary>
                /// SourcesDirectory
                /// </summary>
                [RequiredArgument]
                public InArgument<string> SourcesDirectory { get; set; }

                /// <summary>
                /// BuildNumber
                /// </summary>
                [RequiredArgument]
                public InArgument<string> BuildNumber { get; set; }

                /// <summary>
                /// BuildDirectory
                /// </summary>
                [RequiredArgument]
                public InArgument<string> BuildDirectory { get; set; }

                /// <summary>
                /// Publishes field values to the build report
                /// </summary>
                public OutArgument<string> DiagnosticTextOut { get; set; }

                // If your activity returns a value, derive from CodeActivity<TResult> and return the value from the Execute method.
                protected override void Execute(CodeActivityContext context)
                {
                    // Obtain the runtime value of the input arguments
                    string sourcesDirectory = context.GetValue(this.SourcesDirectory);
                    string assemblyInfoFileMask = context.GetValue(this.AssemblyInfoFileMask);
                    string buildNumber = context.GetValue(this.BuildNumber);
                    string buildDirectory = context.GetValue(this.BuildDirectory);

                    // ** Determine the version number values **
                    // Note: the format used here is: major.secondary.maintenance.build

                    // Obtain the build definition name
                    int nameStart = buildDirectory.LastIndexOf(@"\") + 1;
                    string buildDefinitionName = buildDirectory.Substring(nameStart);

                    // Set the primary.secondary.maintenance values
                    // NOTE: these are hard coded in this example, but could be sourced from a file or parsed from a build definition name that includes them
                    string p = "1";
                    string s = "5";
                    string m = "2";

                    // Initialize the build number
                    string b;
                    string na = "0"; // used for Assembly and Product Version instead of build number (see versioning best practices: **TBD reference)

                    // Set qualifying product version information
                    string productInfo = "RC2";

                    // Obtain the build increment number from the build number
                    // NOTE: this code assumes the default build definition name format
                    int buildIncrementNumberDelimterIndex = buildNumber.LastIndexOf(".");
                    b = buildNumber.Substring(buildIncrementNumberDelimterIndex + 1);

                    // Convert version to integer values
                    int pVer = Convert.ToInt16(p);
                    int sVer = Convert.ToInt16(s);
                    int mVer = Convert.ToInt16(m);
                    int bNum = Convert.ToInt16(b);
                    int naNum = Convert.ToInt16(na);

                    // ** Get all AssemblyInfo files and stamp them **
                    // Note: the mapping of AssemblyInfo.cs attributes to assembly display properties is as follows:
                    //  - AssemblyVersion              = Assembly Version - used for the assembly version (does not change unless p, s or m values are changed)
                    //  - AssemblyFileVersion          = File Version     - used for the file version (changes with every build)
                    //  - AssemblyInformationalVersion = Product Version  - used for the product version (can include additional version information)
                    Version assemblyVersion = new Version(pVer, sVer, mVer, naNum);
                    Version newAssemblyFileVersion = new Version(pVer, sVer, mVer, bNum);
                    Version productVersion = new Version(pVer, sVer, mVer);

                    // Setup diagnostic fields
                    int numberOfReplacements = 0;
                    string addedAssemblyInformationalAttribute = "No";

                    // Enumerate over the AssemblyInfo version attributes
                    foreach (string attribute in new[] { "AssemblyVersion", "AssemblyFileVersion", "AssemblyInformationalVersion" })
                    {
                        // Define the regular expression to find in each AssemblyInfo.cs file (which is, for example, 'AssemblyVersion("1.0.0.0")')
                        Regex regex = new Regex(attribute + @"\(""\d+\.\d+\.\d+\.\d+""\)");

                        foreach (string file in Directory.EnumerateFiles(sourcesDirectory, assemblyInfoFileMask, SearchOption.AllDirectories))
                        {
                            // Read the text from the AssemblyInfo file
                            string text = File.ReadAllText(file);

                            // If the AssemblyInformationalVersion attribute is not in the file, add it as the last line of the file
                            // Note: by default the AssemblyInfo.cs files will not contain the AssemblyInformationalVersion attribute
                            if (!text.Contains("[assembly: AssemblyInformationalVersion(\""))
                            {
                                string lastLine = Environment.NewLine + "[assembly: AssemblyInformationalVersion(\"1.0.0.0\")]";
                                text = text + lastLine;
                                addedAssemblyInformationalAttribute = "Yes";
                            }

                            // Search for the expression
                            Match match = regex.Match(text);
                            if (match.Success)
                            {
                                // Get the file attributes and clear the read-only flag so the file can be updated
                                FileAttributes fileAttributes = File.GetAttributes(file);
                                File.SetAttributes(file, fileAttributes & ~FileAttributes.ReadOnly);

                                // Replace the version attribute value in the file text
                                string newText = string.Empty;
                                if (attribute == "AssemblyVersion")
                                {
                                    newText = regex.Replace(text, attribute + "(\"" + assemblyVersion + "\")");
                                    numberOfReplacements++;
                                }
                                if (attribute == "AssemblyFileVersion")
                                {
                                    newText = regex.Replace(text, attribute + "(\"" + newAssemblyFileVersion + "\")");
                                    numberOfReplacements++;
                                }
                                if (attribute == "AssemblyInformationalVersion")
                                {
                                    newText = regex.Replace(text, attribute + "(\"" + productVersion + " " + productInfo + "\")");
                                    numberOfReplacements++;
                                }

                                // Publish diagnostics to the build report (diagnostic verbosity only)
                                context.SetValue(this.DiagnosticTextOut,
                                    " Added AssemblyInformational Attribute: " + addedAssemblyInformationalAttribute +
                                    " Number of replacements: " + numberOfReplacements +
                                    " Build number: " + buildNumber +
                                    " Build directory: " + buildDirectory +
                                    " Build definition name: " + buildDefinitionName +
                                    " Assembly version: " + assemblyVersion +
                                    " New file version: " + newAssemblyFileVersion +
                                    " Product version: " + productVersion +
                                    " AssemblyInfo.cs Text Last Stamped: " + newText);

                                // Write the new text to the AssemblyInfo file
                                File.WriteAllText(file, newText);

                                // Restore the file's original attributes
                                File.SetAttributes(file, fileAttributes);
                            }
                        }
                    }
                }
            }
        }

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, on to your production system? As your team of developers and DBAs works on the changes to the database that support your business-critical applications, how do those updates wend their way from dev environments, possibly to QA, hopefully through pre-production and eventually to production in a controlled, reliable and repeatable way?

    In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual through to advanced continuous delivery practices. I also provide a simple chart that will help you determine "How mature is our database change management process?"

    This process of managing changes to the database – which all of us who have worked in application/database development have had to deal with in one form or another – is sometimes known as Database Change Management (even if we've never used the term ourselves). And it's a difficult process, often painfully so. Some developers take the approach of "I've no idea how my changes get live – I just write the stored procedures and add columns to the tables. It's someone else's problem to get this stuff live. I think we've got a DBA somewhere who deals with it – I don't know, I've never met him/her". I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila!

    But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can't just overwrite the old database with the new version. Databases have state – more specifically, 4Tb of critical data built up over the last 12 years of running your business – and if your quick hotfix happened to accidentally delete that 4Tb of data, then you're "looking for a new role" pretty quickly after the failed release.

    There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:

    Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O'Reilly and Joanne Molesky provides a great discussion of how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It's no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).

    Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether that means managing schema changes, making sure that the data itself is being looked after correctly, or other mechanisms that provide an audit trail of changes.

    We've found, at Red Gate, that we have a very wide range of customers using every possible form of database change management imaginable. Everything from "Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook" to "A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!". And everything in between, of course.

    Because of the vast number of customers using so many different approaches, we found ourselves struggling to keep on top of what everyone was doing – struggling to identify patterns in customers' behavior. This matters to us because we want to fit the products we have to different needs – different products are relevant to different customers, and we waste everyone's time (most notably, our customers') if we suggest products that aren't appropriate for them. If someone visited a sports store looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, the assistant would likely lose that customer. All they needed was a pair of running shoes!

    To solve this issue – in an attempt to simplify how we understand our customers and our offerings – we built a model. This is an attempt to classify our customers into some sort of model, or "Customer Maturity Framework" as we rather grandly term it, that simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the "Box" in the Box-Jenkins time series model) gave us the famous quote: "Essentially all models are wrong, but some are useful". We've taken this quote to heart – we know the model is a gross over-simplification of the real world of how users work with complex legacy and new database developments. Almost nobody fits precisely into one of our categories. But we hope it's useful and interesting.

    There are actually a number of similar models that exist for more general application delivery. We've found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word "application" with "database". However, we hit a problem. From talking to our customers we know that users are far less advanced in their database change management than they are in application development. As a simple example, no application developer who wants to keep their job would develop an application for an organisation without source controlling that code. Sure, they might not be using an advanced Gitflow branching methodology, but they'll certainly be making sure their code gets managed in a repo somewhere, with all the benefits of history, auditing and so on. But this certainly isn't the case (yet) for the database – a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here, on the barriers people face getting a source control process implemented at their organisations.

    This difference in maturity persists as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers.

    But what are these stages? And what level are you? The table below describes our definitions for four levels of maturity – Baseline, Beginner, Intermediate and Advanced. As I say, this is a model – you won't fit any of these categories perfectly, but hopefully one will ring true more than others. We've also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here

    Level D1 – Baseline
    - Work directly on live databases
    - Sometimes work directly in production
    - Generate manual scripts for releases, sometimes using a product like SQL Compare or similar to do this
    - Any tests that we might have are run manually

    Level D2 – Beginner
    - Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
    - An attempt is made to keep production in sync with development environments
    - There is some documentation and planning of manual deployments
    - Some basic automated DB testing is in place

    Level D3 – Intermediate
    - The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
    - Database environments are managed
    - The production environment schema is reproducible from the source control system
    - There are some automated tests
    - Have looked at using migration scripts for difficult database refactoring cases

    Level D4 – Advanced
    - Using continuous integration for database changes
    - Build, testing and deployment of DB changes carried out through a proper database release process
    - Fully automated tests
    - The production system is monitored for fast feedback to developers

    (For a very rough illustration of the simplest script-based approach described under Level D2, see the short code sketch after this article.)

    Does this model reflect your team at all? Where are you on this journey? We'd be very interested to know how you get on. We're doing a lot of work at the moment, at Red Gate, trying to help people progress through these stages. For example, if you're currently not source controlling your database, then that is a natural next step. If you are already source controlling your database, what about the next stage – continuous integration and automated release management? To help with these issues, there's a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site, to help you get up and running with the tools you need to progress. All feedback is welcome, and it would be great to hear where you find yourself on this journey!

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles on version control, automated testing, continuous integration & deployment.
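
    As a very rough, hypothetical sketch of the Level D2 practice of keeping upgrade scripts in source control and applying them in order, the fragment below reads .sql files from a folder and runs them against a database in file-name order. The folder path, connection string and naming convention are assumptions made purely for illustration; a real process (or a tool such as Red Gate SQL Source Control, SSDT or a migrations framework) would also track which scripts have already been applied and handle batch separators, transactions and failures.

    using System;
    using System.Data.SqlClient;
    using System.IO;
    using System.Linq;

    internal static class MigrationRunner
    {
        private static void Main()
        {
            // Assumed for illustration: upgrade scripts kept in source control and named so that
            // alphabetical order is also execution order (001_CreateOrders.sql, 002_AddIndex.sql, ...)
            const string scriptsFolder = @"C:\src\MyApp\DatabaseScripts";
            const string connectionString = @"Server=.;Database=MyAppDb;Integrated Security=true";

            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();

                foreach (string scriptPath in Directory.EnumerateFiles(scriptsFolder, "*.sql")
                                                       .OrderBy(Path.GetFileName))
                {
                    // Note: this sketch does not split on GO batch separators and does not record
                    // which scripts have already run, both of which a real runner would need to do.
                    string sql = File.ReadAllText(scriptPath);
                    using (var command = new SqlCommand(sql, connection))
                    {
                        command.ExecuteNonQuery();
                    }

                    Console.WriteLine("Applied " + Path.GetFileName(scriptPath));
                }
            }
        }
    }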

    Read the article

< Previous Page | 37 38 39 40 41 42 43 44 45 46 47 48  | Next Page >