Search Results

Search found 93603 results on 3745 pages for 'one to many'.


  • Complex knowledge management system with CRM, written internally

    - by JonH
    We've all heard of Salesforce and SugarCRM and systems like them. Unfortunately, at my workplace we have been asked to write a similar system (rather than license or purchase one). The database is fairly large. Think of modules such as: corporate groups, customers, programs, projects, sub-projects, and issue management. In simple terms, a corporate group has one to many customers. A program has one or more projects. A project has one or more sub-projects. And an issue can be created on many sub-projects. Of course the system is a bit more complex, but instead of listing every single module I think it's best to keep it simple. In any event, the system in its current state has only two resources working on it (basically we have to do it all: CSS, database, jQuery, ASP.NET and C#). We've started off well by defining the UI master and footer pages so that we can reuse those across all of our pages. Now comes the hard part. The system will have about 4k end users, with say 5-10% of them concurrent. We are wondering if it makes sense to cache our database data (for say 5-10 minutes) rather than continuously hit our database. The reason is that some of these pages may have 5-10 search filters associated with them; imagine how many database hits every time a selection is made from a search box. Also, some of these search fields cascade, so selecting an initial drop-down may cascade several drop-down boxes under it. Is it wrong to cache? I am not finding many articles on whether it is a good idea or not. Remember, the system is similar to a CRM system where we manage our various customers, projects, sub-projects, issues, etc.
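    A minimal sketch of the kind of caching being asked about, assuming .NET 4+ and System.Runtime.Caching are available; the class and member names (FilterCache, loadFromDatabase, repository.GetPrograms, corporateGroupId) are illustrative, not from the original post:

    using System;
    using System.Collections.Generic;
    using System.Runtime.Caching;

    public static class FilterCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        // Returns cached lookup data for a search filter, reloading from the
        // database only when the entry is missing or has expired.
        public static IList<T> GetOrLoad<T>(string cacheKey,
                                            Func<IList<T>> loadFromDatabase,
                                            int minutesToLive = 5)
        {
            if (Cache.Get(cacheKey) is IList<T> cached)
            {
                return cached;
            }

            IList<T> fresh = loadFromDatabase();
            Cache.Set(cacheKey, fresh, new CacheItemPolicy
            {
                AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(minutesToLive)
            });
            return fresh;
        }
    }

    // Hypothetical usage for a cascading drop-down, so a selection or postback
    // does not hit the database for data that rarely changes:
    // var programs = FilterCache.GetOrLoad(
    //     "programs:" + corporateGroupId,
    //     () => repository.GetPrograms(corporateGroupId));

    With roughly 4k users and 5-10% concurrency, even a short 5-10 minute expiry like this typically removes the bulk of the repeated lookup queries, at the cost of filter lists being up to a few minutes stale.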

    Read the article

  • XSLT: Splitting Non-Contiguous Elements / Grouping Contiguous Elements

    - by Gerald
    Hi! I need some help implementing this in XSLT. I had already implemented it in Java using a SAX parser, but that has become troublesome due to a customer request to change something, so we are now doing it with XSLT, which doesn't need to be compiled and deployed to a web server. I have XML like the one below.

    Example 1:
    <ShotRows>
      <ShotRow row="3" col="3" bit="1" position="1"/>
      <ShotRow row="3" col="4" bit="1" position="2"/>
      <ShotRow row="3" col="5" bit="1" position="3"/>
      <ShotRow row="3" col="6" bit="1" position="4"/>
      <ShotRow row="3" col="7" bit="1" position="5"/>
      <ShotRow row="3" col="8" bit="1" position="6"/>
      <ShotRow row="3" col="9" bit="1" position="7"/>
      <ShotRow row="3" col="10" bit="1" position="8"/>
      <ShotRow row="3" col="11" bit="1" position="9"/>
    </ShotRows>

    Output 1:
    <ShotRows>
      <ShotRow row="3" colStart="3" colEnd="11" />
    </ShotRows>
    -- because col runs contiguously from 3 to 11.

    Example 2:
    <ShotRows>
      <ShotRow row="3" col="3" bit="1" position="1"/>
      <ShotRow row="3" col="4" bit="1" position="2"/>
      <ShotRow row="3" col="6" bit="1" position="3"/>
      <ShotRow row="3" col="7" bit="1" position="4"/>
      <ShotRow row="3" col="8" bit="1" position="5"/>
      <ShotRow row="3" col="10" bit="1" position="6"/>
      <ShotRow row="3" col="11" bit="1" position="7"/>
      <ShotRow row="3" col="15" bit="1" position="8"/>
      <ShotRow row="3" col="19" bit="1" position="9"/>
    </ShotRows>

    Output 2:
    <ShotRows>
      <ShotRow row="3" colStart="3" colEnd="4" />
      <ShotRow row="3" colStart="6" colEnd="8" />
      <ShotRow row="3" colStart="10" colEnd="11" />
      <ShotRow row="3" colStart="15" colEnd="15" />
      <ShotRow row="3" colStart="19" colEnd="19" />
    </ShotRows>
    -- The basic idea is to group each contiguous run of col values into one element: cols 3 to 4, cols 6 to 8, cols 10 to 11, col 15 on its own, and col 19 on its own. Thanks in advance. Sincerely, Gerald
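    The usual trick for this kind of grouping is that, within a contiguous run, col minus the element's position stays constant, so grouping on that difference splits the sequence at every gap. If an XSLT 2.0 processor is available, the same idea maps to xsl:for-each-group with group-adjacent="@col - position()". The C# below is only a sketch of that logic (not the requested XSLT), and it assumes the ShotRow elements are ordered by col within each row, as in the examples; the file name is hypothetical:

    using System;
    using System.Linq;
    using System.Xml.Linq;

    class ContiguousColumnGrouper
    {
        static void Main()
        {
            XDocument doc = XDocument.Load("shotrows.xml"); // hypothetical input file

            var grouped = doc.Root.Elements("ShotRow")
                .Select((e, i) => new
                {
                    Row = (int)e.Attribute("row"),
                    Col = (int)e.Attribute("col"),
                    Index = i
                })
                // Contiguous cols share the same (Col - Index) value, so this key
                // starts a new group exactly where the run of columns breaks.
                .GroupBy(x => new { x.Row, RunKey = x.Col - x.Index })
                .Select(g => new XElement("ShotRow",
                    new XAttribute("row", g.Key.Row),
                    new XAttribute("colStart", g.Min(x => x.Col)),
                    new XAttribute("colEnd", g.Max(x => x.Col))));

            Console.WriteLine(new XElement("ShotRows", grouped));
        }
    }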

    Read the article

  • Partner Infoline & Service Portal

    - by uwes
    As an EMEA-wide team, we support the daily work of our partners. Our team consists of 24 sales consultants; one third is specialized on the Partner Infoline. Partner Infoline's main focus is to deliver, actively and reactively, technical pre-sales knowledge about the Oracle hardware portfolio to our partners. With Infoline we assist our partners in their daily work; furthermore, we help educate our partners to be self-sufficient in all aspects and questions about hardware configurations and hardware quotes. For our Infoline service we use a ticketing system called Service Portal, which is widely used within Oracle and delivers good, stable functionality and availability. Our Infoline service provides answers to questions concerning technical pre-sales matters related to hardware and the corresponding hardware-related software.* You can address these types of questions by sending them to our mailing list: [email protected] The Service Portal will send you an auto-reply including a unique reference number, which will be the identification for your request until it is closed. Depending on the complexity of the request, it might be necessary to forward it to our specialists (servers, storage, tape, Solaris etc.) located all over Europe. In order to make the whole process smooth, here are some recommendations: write your request in English (this saves translation time when it has to be forwarded to the specialists); state your interest area clearly in the title, for example "memory in M4000 server"; and keep to one request per subject, which makes the correspondence easier to maintain and keeps it clear and simple. The rule of the service is to provide an answer quickly, which means the vast majority of requests are answered within a couple of hours. However, please keep in mind that some requests may need extra work, involving the appropriate person within Europe or even in the US. Therefore there is no official SLA for this service. * This excludes Oracle "classic" products and post-sales support. The latter should still be addressed through MOS (http://support.oracle.com).

    Read the article

  • Choosing a web development framework?

    - by Bob
    So, I've sort of reached a point where I want to start developing a website. Originally I planned to build said website using PHP and CodeIgniter. I'm familiar with both, but, truth be told, I'm not too fond of either. I find they just get rather messy; CodeIgniter helps somewhat, but no matter what, it seems that most PHP comes out more obfuscated than it has to be. Anyway, I've come to the point where I want to use either Python or Ruby. I'm familiar with both, though more so with Python, but I've never done any web development in them. I'll take the necessary time to learn the frameworks (and further my knowledge in the language of my choosing), but I need to choose one. I don't like either language more than the other; they both have their benefits. However, since I've never done any web development with either language, I was hoping that you could give me some pointers. What are the available frameworks for each language? What do you recommend and why? Note: I've primarily looked into Rails and Django - but I'm still open to others. I'm looking for one that will work for just one (or maybe two) developers. It has to be fairly easy to learn (but I will take the time to learn it). Also, I'd like it to easily support clean code and agile development.

    Read the article

  • How to eager load sibling data using LINQ to SQL?

    - by Scott
    The goal is to issue the fewest queries to SQL Server using LINQ to SQL without using anonymous types. The return type for the method will need to be IList<Child1>. The relationships between the entities (Parent, Child1, Child2 and Grandchild1) are as follows: Parent to Child1 is a one-to-many relationship; Child1 to Grandchild1 is a one-to-n relationship (where n is zero to infinity); Parent to Child2 is a one-to-n relationship (where n is zero to infinity). I am able to eager load the Parent, Child1 and Grandchild1 data, resulting in one query to SQL Server. This query with load options eager loads all of the data except the sibling data (Child2):

    DataLoadOptions loadOptions = new DataLoadOptions();
    loadOptions.LoadWith<Child1>(o => o.GrandChild1List);
    loadOptions.LoadWith<Child1>(o => o.Parent);
    dataContext.LoadOptions = loadOptions;
    IQueryable<Child1> children = from child in dataContext.Child1
                                  select child;

    I need to load the sibling data as well. One approach I have tried is splitting the query into two LINQ to SQL queries and merging the result sets together (not pretty); however, upon accessing the sibling data it is lazy loaded anyway. Adding the sibling load option will issue a query to SQL Server for each Grandchild1 and Child2 record (which is exactly what I am trying to avoid):

    DataLoadOptions loadOptions = new DataLoadOptions();
    loadOptions.LoadWith<Child1>(o => o.GrandChild1List);
    loadOptions.LoadWith<Child1>(o => o.Parent);
    loadOptions.LoadWith<Parent>(o => o.Child2List);
    dataContext.LoadOptions = loadOptions;
    IQueryable<Child1> children = from child in dataContext.Child1
                                  select child;

    exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=1
    exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=2
    exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=3
    exec sp_executesql N'SELECT * FROM [dbo].[Child2] AS [t0] WHERE [t0].[ForeignKeyToParent] = @p0',N'@p0 int',@p0=4

    I've also written LINQ to SQL queries to join in all of the data in hopes that it would eager load it; however, when the LINQ to SQL EntitySet of Child2 or Grandchild1 is accessed, it lazy loads the data. The reason for returning IList<Child1> is to hydrate business objects. My thoughts are that I am either approaching this problem the wrong way, or I have the option of calling a stored procedure, or my organization should not be using LINQ to SQL as an ORM. Any help is greatly appreciated. Thank you, -Scott
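    One pragmatic way to keep the round trips down (a sketch only, and not necessarily what the original poster ended up doing): keep the load options for the Child1/Grandchild1 path, then fetch every relevant Child2 row in a single second query and match them up in memory while hydrating the business objects, rather than touching the lazy-loading Child2List on Parent. ForeignKeyToParent comes from the captured SQL in the question; ParentId and BuildBusinessObject are assumed names, and the snippet continues the question's own data context:

    // Query 1: Child1 with Grandchild1 (and Parent) eager loaded, as before.
    DataLoadOptions loadOptions = new DataLoadOptions();
    loadOptions.LoadWith<Child1>(o => o.GrandChild1List);
    loadOptions.LoadWith<Child1>(o => o.Parent);
    dataContext.LoadOptions = loadOptions;

    List<Child1> children = dataContext.Child1.ToList();

    // Query 2: all sibling Child2 rows for those parents, in one round trip
    // (Contains translates to an IN clause for modest list sizes).
    var parentIds = children.Select(c => c.ParentId).Distinct().ToList();
    var child2ByParent = dataContext.Child2
        .Where(c2 => parentIds.Contains(c2.ForeignKeyToParent))
        .ToLookup(c2 => c2.ForeignKeyToParent);

    // Hydrate business objects from the two result sets without triggering
    // lazy loads; BuildBusinessObject is a hypothetical mapping helper.
    // var businessObjects = children
    //     .Select(c => BuildBusinessObject(c, child2ByParent[c.ParentId].ToList()))
    //     .ToList();

    This stays at two SQL statements no matter how many parents are involved, at the cost of the in-memory stitching step.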

    Read the article

  • Application Lifecycle Management Tools

    - by John K. Hines
    Leading a team comprised of three former teams means that we have three of everything.  Three places to gather requirements, three (actually eight or nine) places for customers to submit support requests, three places to plan and track work. We’ve been looking into tools that combine these features into a single product.  Not just Agile planning tools, but those that allow us to look in a single place for requirements, work items, and reports. One of the interesting choices is Software Planner by Automated QA (the makers of Test Complete).  It's a lovely tool with real end-to-end process support.  We’re probably not going to use it for one reason – cost.  I’m sure our company could get a discount, but it’s on a concurrent user license that isn’t cheap for a large number of users.  Some initial guesswork had us paying over $6,000 for 3 concurrent users just to get started with the Enterprise version.  Still, it’s intuitive, has great Agile capabilities, and has a reputation for excellent customer support. At the moment we’re digging deeper into Rational Team Concert by IBM.  Reading the docs on this product makes me want to submit my resume to Big Blue.  Not only does RTC integrate everything we need, but it’s free for up to 10 developers.  It has beautiful support for all phases of Scrum.  We’re going to bring the sales representative in for a demo. This marks one of the few times that we’re trying to resist the temptation to write our own tool.  And I think this is the first time that something so complex may actually be capably provided by an external source.   Hooray for less work! Technorati tags: Scrum Scrum Tools

    Read the article

  • Architecting Python application consisting of many small scripts

    - by Duke Dougal
    I am building an application which, at the moment, consists of many small Python scripts. Each Python script processes items from one Amazon SQS queue. Emails come into an initial queue and are processed by a script; typically the script will do a small unit of processing (for example, parse the email and store some database fields), then an item will be placed on the next queue for further processing, until eventually the email has finished going through the various scripts and queues. What I like about this approach is that it is very loosely coupled. However, I'm not sure how I should run this in a live environment. Should I make each script a daemon which is constantly polling its inbound queue for things to do? Or should there be some overarching orchestration program or process? Or maybe I should not have lots of small Python scripts but one large application? Specific questions: How should I run each of these scripts - as a daemon with some sort of restart monitor to restart them in case they stop for any reason? If yes, should I have some program which orchestrates this? Or is the idea of many small scripts not a good one; would it make more sense to have a larger Python program which contains all the functionality and does all the queue polling and execution of functionality for each queue? What is the current preferred approach to daemonising Python scripts? Broadly, I would welcome any comments or opinions on any aspect of this. Thanks.
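    For illustration only, this is the shape of the polling-worker pattern being described, sketched in C# rather than the poster's Python, with a hypothetical IQueueClient standing in for the SQS API; each small script in the pipeline would be one such worker, kept alive by a supervisor such as an init script or process monitor:

    using System;
    using System.Threading;

    // Hypothetical abstraction over one SQS queue.
    public interface IQueueClient
    {
        string ReceiveMessage();            // returns null when the queue is empty
        void DeleteMessage(string message);
        void Send(string nextQueueItem);
    }

    public class QueueWorker
    {
        private readonly IQueueClient _inbound;
        private readonly IQueueClient _outbound;

        public QueueWorker(IQueueClient inbound, IQueueClient outbound)
        {
            _inbound = inbound;
            _outbound = outbound;
        }

        // Runs until cancelled: poll, process one small unit of work, hand off to
        // the next queue, and back off briefly when there is nothing to do.
        public void Run(CancellationToken token)
        {
            while (!token.IsCancellationRequested)
            {
                string message = _inbound.ReceiveMessage();
                if (message == null)
                {
                    Thread.Sleep(TimeSpan.FromSeconds(5)); // idle back-off
                    continue;
                }

                string result = Process(message);   // the script's small unit of work
                _outbound.Send(result);             // place on the next queue
                _inbound.DeleteMessage(message);    // only after a successful hand-off
            }
        }

        private string Process(string message)
        {
            // Parse the email, store some database fields, etc.
            return message;
        }
    }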

    Read the article

  • Move a SQL Azure server between subscriptions

    - by jamiet
    In September 2011 I published a blog post, SSIS Reporting Pack v0.2 now available, in which I made available the credentials of a sample database that one could use to test SSIS Reporting Pack. That database was sitting on a paid-for Azure subscription and hence was costing me about £5 a month - not a huge amount, but when I later got a free Azure subscription through my MSDN subscription in January 2012 it made sense to migrate the database onto that subscription. Since then I have been endeavouring to make that move, but a few failed attempts combined with lack of time meant that I had not yet gotten round to it. That is, until this morning, when I heard about a new feature available in the Azure Management Portal that enables one to move a SQL Azure server from one subscription to another. Up to now I had been attempting to use a combination of SSIS packages and/or scripts to move the data but, as I alluded, I ran into a few roadblocks, hence the ability to move a SQL Azure server was a godsend to me. I fired up the Azure Management Portal and a few clicks later my server had been successfully migrated; moreover, the name of the server doesn't change and neither do any credentials, so I have no need to go and update my original blog post either. It's easy to be cynical about SQL Azure (and I maintain a healthy scepticism myself) but that, my friends, is cool! You can read more about the ability to move SQL Azure servers between subscriptions from the official blog post Moving SQL Azure Servers Between Subscriptions. @Jamiet

    Read the article

  • Is there a way to communicate or measure levels of abstraction?

    - by hydroparadise
    I'll be the first to say that this question is a bit... out there. But here are a couple of questions I bear in mind: Is abstraction continuous or discrete? Is there a single unit of abstraction? I'm not sure those questions are truly answerable or even really make sense. My naive answer would be something along the lines of arbitrarily discrete, but not necessarily having a single unit of measure. Here's what I mean... Take a Black Labrador; an abstraction that could be made is that a Black Lab is a type of animal. [Animal]<--[Black Lab] A Black Lab is also a type of dog. [Dog]<--[Black Lab] One way to establish a degree of abstraction is by comparing the two abstractions. We could say that [Animal] is more abstract than [Dog] in respect to a Black Lab. It just so happens [Animal] can also be used as an abstraction of [Dog]. So we might end up with something like [Animal]<--[Dog]<--[Black Lab] With the model above, one might be inclined to say that there are two hops of abstraction to get from [Black Lab] to [Animal]. But you can't exactly tell somebody they need one level of abstraction and reasonably expect they will come up with [Dog], given they aren't explicitly given the options above. If I needed to tell somebody in a single email that they needed an abstract class, without knowing what that abstract class is, is there a way to communicate a degree of abstraction such that they might end up on Dog instead of Animal? As a side note, what area of study might this type of analysis fall under?
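    Purely as an illustration of the chain being discussed (not an answer to the measurement question), the two hops map directly onto an inheritance hierarchy; the class and member names here are made up:

    // [Animal] <-- [Dog] <-- [BlackLab]: two hops from the concrete type to the
    // most abstract one. "One level of abstraction up" from BlackLab is Dog;
    // two levels up is Animal.
    public abstract class Animal { }

    public abstract class Dog : Animal
    {
        public abstract void Bark();
    }

    public sealed class BlackLab : Dog
    {
        public override void Bark() { /* woof */ }
    }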

    Read the article

  • Disable the Old Adobe Flash Plugin in Google Chrome

    - by The Geek
    If you’ve just updated to the Dev or Beta release of Google Chrome, you might have noticed that a special version of Adobe Flash is now integrated into the default distribution of Chrome. But what about your old plug-in? As it turns out, the old plug-in is generally still installed… but you can easily disable Chrome plug-ins in the latest version, so let’s get to work. Disable the Extra Flash Plug-in: Head over to about:plugins and look through the list—you should notice two Shockwave Flash plugins. The first one should be in your Google Chrome installation folder, and has the filename gcswf32.dll. This is the NEW one, so don’t disable it! If you keep scrolling down, you’ll see the old one, with the file name NPSWF32.dll. This is the OLD plugin, and you can safely disable it. Of course, if you only use Chrome you could just completely uninstall Adobe Flash from your system by heading into Control Panel’s Uninstall Programs screen, and then finding and uninstalling Adobe Flash Player Plugin. The ActiveX version is for Internet Explorer. We’ve not done any testing to see whether the old Flash plugin is even still active or not, but you may as well disable it just to be sure, right?

    Read the article

  • Naming conventions: camelCase versus underscore_case - what are your thoughts about it? [closed]

    - by poelinca
    I've been using underscore_case for about 2 years and I recently switched to camelCase because of my new job (I've been using the latter for about 2 months and I still think underscore_case is better suited to large projects where there are a lot of programmers involved, mainly because the code is easier to read). Now everybody at work uses camelCase because (so they say) the code looks more elegant. What are your thoughts about camelCase or underscore_case? P.S. Please excuse my bad English. Edit - some updates first: the platform used is PHP (but I'm not expecting strictly PHP-related answers; anybody can share their thoughts on which would be the best to use, which is why I came here in the first place); I use camelCase just like everybody else in the team (just as most of you recommend); and we use Zend Framework, which also recommends camelCase. Some examples (related to PHP): the CodeIgniter framework recommends underscore_case, and honestly the code is easier to read; ZF recommends camelCase, and I'm not the only one who thinks ZF code is a tad harder to follow. So my question would be rephrased: let's take a case where you have the platform Foo, which doesn't recommend any naming conventions, and it's the team leader's choice to pick one. You are that team leader: why would you pick camelCase, or why underscore_case? P.S. Thanks everybody for the prompt answers so far.

    Read the article

  • links for 2010-12-15

    - by Bob Rhubart
    Pravin Janardanam: Security in OBIEE 11g, Part 1 Guest blogger Pravin Janardanam kicks off a two-part series in which he tackles the differences in security between OBIEE 11g and 10g, and provides some hints on security migration from a 10g environment. (tags: oracle otn businessintelligence obiee) HttpClusterServlet Configuration (Weblogic Server Acting as a Proxy) Quick tips from Divay Dureja. (tags: oracle weblogic servlet configuration) Accelerating Deployment of Virtualized Infrastructures with the Oracle VM Blade Cluster Reference Configuration "The Oracle VM blade cluster reference configuration is a single-vendor solution that addresses every layer of the virtualization stack with Oracle hardware and software components." - from the white paper. (tags: oracle otn oraclevm virtualization) A SOA Safari (Antony Reynolds' Blog) SOA author Antony Reynolds shares links to some of his favorite SOA titles available for reading on Safari. (tags: oracle otn soa) Using Crossbow and Solaris 11 Express Zones for a single machine proof of concept environment with Puppet "My last blog entry was about my debugging experience with Puppet and promise to share the setup that I used. I now follow up that previous entry with this one which describes my Crossbow + NAT + S11 Zones proof of concept." - Michael Tin (tags: oracle solaris crossbow) @myfear: One thing you did not know about Java EE class loading in GlassFish 2.x "Be careful migrating apps from one app server to the other. And don't expect to have a strong hierarchical class loader in place. That is especially true for GF 2.x class loading." Oracle ACE Director Markus Eisele (tags: oracle otn oracleace java glassfish weblogic)

    Read the article

  • EVENT RECAP: Oracle Day & Product Fair - Atlanta

    - by cwarticki
    Are you attending any of the Oracle Days and other events? They are fantastic! Keep track of the Oracle events by following @OracleEvents on Twitter. Also, stay in the know by subscribing to one of the several Oracle newsletters. Those will also keep you posted on upcoming in-person and webcast events. From the Oracle Events website, simply navigate to your geography and refine your options to locate what interests you. You can also perform keyword searches. Yesterday, I had the opportunity to participate in the Oracle Day & Product Fair in Atlanta, Georgia. Thanks to those who stopped by to ask your support questions and watch me demo My Oracle Support features and best practices. It was a fantastic turnout. The Buckhead Theatre served as an excellent venue. It was standing room only for the double keynotes on topics of interest to our customers: Navigating Complexity by Simplifying I.T., and Oracle Exadata X3 - Transforming Data Management. The Product Fair was staffed by many Oracle professionals and our partners too. It was great meeting new people like the team representing OAUG. Many thanks to our sponsors: BIAS, Cloudera, Intel and TekStream Solutions. Come attend one of the many Oracle Days and other events planned for you. I'll be attending the one in Ft. Lauderdale, FL on November 16th. See you there. -Chris Warticki, Global Customer Management

    Read the article

  • Go Big or Go Home

    - by user12601034
    For those who don’t know, Oracle sponsors a group called “OWL” – Oracle Women’s Leadership - and the purpose of the group is to create local and global opportunities that support, educate and empower current and future women leaders at Oracle. This week, I had the opportunity to attend the Denver OWL roadshow, and I was really impressed with the quality of speakers and interactions that I experienced. One theme that arose throughout the day was that of “Lean In.” In a nutshell, “Lean In” requires you to take advantage of the opportunities that you’re given. One of my personal mantras is “Go big or go home."  That is, if you’re not willing to give it your all, don’t do it at all. Regardless of how you phrase it, it’s a life lesson that I believe needs to be tossed in our face every so often simply, if for no other reason, to get our attention. You are given a finite amount of time in your life; in your job role; in your interactions with others. Do you make the most of the opportunities given to you every day? Or do you believe that life just happens, and you have to deal with whatever is handed to you? I have a challenge for you. Set aside any concerns or fears you have about something and Lean In. Make the most of an opportunity presented to you…or make your own opportunity! If you start with just one thing, you’ll start building a mindset to make the most of additional opportunities. Not only will you be better for leaning in, but I’m betting that those around you will be better for it as well.

    Read the article

  • LINQ: Single vs. SingleOrDefault

    - by Paulo Morgado
    Like all other LINQ API methods that extract a scalar value from a sequence, Single has a companion, SingleOrDefault. The documentation of SingleOrDefault states that it returns a single, specific element of a sequence of values, or a default value if no such element is found, although, in my opinion, it should state that it returns a single, specific element of a sequence of values, or a default value if the sequence is empty. Nevertheless, what this method does is return the default value of the source type if the sequence is empty or, like Single, throw an exception if the sequence has more than one element. I received several comments on my last post saying that SingleOrDefault could be used to avoid an exception. Well, it only “solves” half of the “problem”. If the sequence has more than one element, an exception will be thrown anyway. In the end, it all comes down to semantics and intent. If it is expected that the sequence may have none or one element, then SingleOrDefault should be used. If it's not expected that the sequence is empty and the sequence is empty, then it's an exceptional situation and an exception should be thrown right there. And, in that case, why not use Single instead? In my opinion, when a failure occurs, it's better to fail fast and early than slow and late. Other methods in the LINQ API that use the same companion pattern are: ElementAt/ElementAtOrDefault, First/FirstOrDefault and Last/LastOrDefault.
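    A quick illustration of the behaviour being described (a minimal, self-contained sketch; the sequences are made up):

    using System;
    using System.Linq;

    class SingleVsSingleOrDefault
    {
        static void Main()
        {
            int[] empty = { };
            int[] one = { 42 };
            int[] many = { 1, 2 };

            Console.WriteLine(one.Single());            // 42
            Console.WriteLine(one.SingleOrDefault());   // 42

            Console.WriteLine(empty.SingleOrDefault()); // 0, i.e. default(int), no exception
            // empty.Single();                          // throws InvalidOperationException

            // Both methods throw when the sequence has more than one element:
            // many.Single();                           // throws InvalidOperationException
            // many.SingleOrDefault();                  // throws InvalidOperationException
            Console.WriteLine(many.Length);
        }
    }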

    Read the article

  • What does this WCF error mean: "Custom tool warning: Cannot import wsdl:portType"

    - by stiank81
    I created a WCF service library project in my solution, and have service references to this. I use the services from a class library, so I have references from my WPF application project in addition to the class library. The services are set up in a straightforward way - the only change was to use async service functions. Everything was working fine - until I wanted to update my service references. It failed, so I eventually rolled back and retried, but it failed even then! So - updating the service references fails without making any changes to them. Why?!

    The error I get is this one: Custom tool error: Failed to generate code for the service reference 'MyServiceReference'. Please check other error and warning messages for details.

    The warning gives more information: Custom tool warning: Cannot import wsdl:portType Detail: An exception was thrown while running a WSDL import extension: System.ServiceModel.Description.DataContractSerializerMessageContractImporter Error: List of referenced types contains more than one type with data contract name 'Patient' in namespace 'http://schemas.datacontract.org/2004/07/MyApp.Model'. Need to exclude all but one of the following types. Only matching types can be valid references: "MyApp.Dashboard.MyServiceReference.Patient, Medski.Dashboard, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" (matching) "MyApp.Model.Patient, MyApp.Model, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" (matching) XPath to Error Source: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:portType[@name='ISomeService']

    There are two similar warnings too, saying: Custom tool warning: Cannot import wsdl:binding Detail: There was an error importing a wsdl:portType that the wsdl:binding is dependent on. XPath to wsdl:portType: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:portType[@name='ISomeService'] XPath to Error Source: //wsdl:definitions[@targetNamespace='http://tempuri.org/']/wsdl:binding[@name='WSHttpBinding_ISomeService'] And the same for: Custom tool warning: Cannot import wsdl:port.

    I find this all confusing. I don't have a Patient class on the client-side Dashboard except the one I got through the service reference. So what does it mean? And why does it suddenly show? Remember: I didn't even change anything! Now, the solution to this was found here, but without an explanation of what it means. So, in the "Configure service reference" dialog for the service I uncheck the "Reuse types in referenced assemblies" checkbox. Rebuilding now, it all works fine without problems. But what did I really change? Will this have an impact on my application? And when should one uncheck this? I do want to reuse the types I've set up DataContract on, but no more. Will I still get access to those without this checked?
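    For context, the collision the warning describes looks roughly like the sketch below (a hypothetical reconstruction; the real contract lives in the poster's MyApp.Model assembly and its members are unknown). With "Reuse types in referenced assemblies" checked, the proxy generator finds two CLR types carrying the same data contract name and namespace - the shared MyApp.Model.Patient and a previously generated MyApp.Dashboard.MyServiceReference.Patient - and refuses to pick one. Unchecking the option side-steps the conflict by always generating a standalone proxy Patient instead of reusing MyApp.Model.Patient, which is presumably why the update then succeeds; the trade-off is that the client ends up with two distinct CLR types for the same wire-level contract, so MyApp.Model.Patient instances can no longer be passed straight to the generated client without mapping.

    using System.Runtime.Serialization;

    namespace MyApp.Model
    {
        // Shared contract assembly, referenced by the WPF app and class library.
        // The default contract namespace becomes
        // http://schemas.datacontract.org/2004/07/MyApp.Model
        [DataContract]
        public class Patient
        {
            [DataMember] public int Id { get; set; }      // members are illustrative
            [DataMember] public string Name { get; set; }
        }
    }

    namespace MyApp.Dashboard.MyServiceReference
    {
        // A previously generated proxy type with the same data contract identity.
        // Two candidates with one identity is exactly the "more than one type with
        // data contract name 'Patient'" condition the importer complains about.
        [DataContract(Name = "Patient",
                      Namespace = "http://schemas.datacontract.org/2004/07/MyApp.Model")]
        public class Patient
        {
            [DataMember] public int Id { get; set; }
            [DataMember] public string Name { get; set; }
        }
    }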

    Read the article

  • I thought everyone did it like this – Training Session Code Management

    - by Fatherjack
    One of an occasional series of blogs about things that I do that perhaps others don’t. From very early on in my dealings with SQL Server Management Studio I started using Solutions and Projects. This means that I started using them when writing sessions, and it wasn’t until speaking with someone at PASS Summit 2013 that I found out that this was a process that was unheard of by some people. So, here we go: a run through how I create and manage code and other documents that I use in presentations. For people unsure what solutions and projects are: • Solution – a container for one or more projects. • Project – a container for files; .sql files are grouped as Queries, all other files are stored as Misc. How do I start? Open Management Studio as normal, and then click File | New and select Project. This will bring up the New Project dialog box and you can select/add details as necessary in the places indicated. If this is the first project you are creating then be sure to select the Create directory for solution check box (4). If you know in advance that you are going to have more than one project in the solution then you may want to edit the Solution name (3), as by default it will take the name of the project that you enter at (2). This will lead you to the folder structure shown (depending on the location that you chose at (3) above). In SSMS you need to turn on the Solution Explorer, either via the View menu or by pressing Ctrl + Alt + L. This will bring up a dockable window that will let you quickly access the files that you choose to include in the Solution. Can we get to work and write some code yet, please? Yes, we can. As with many Microsoft products there are several ways to go about this; let’s look at the easiest way when creating new code. When writing a presentation I usually start from the position we are currently in – a brand new solution and project with no code. Later on we will look at incorporating existing code files into the Project where we need them. Right-click on the Project name and choose Add New Query. As soon as you click this you will be prompted to select the SQL Server that you want to connect to, and once you have done that you will have your new query open in the text editor and the Solution Explorer will show your server connection and your new query, with the Project folder updated to match. Now, once you have written your code, don’t press Save; choose Save As and give the code a better name than QueryX.sql. SSMS will interpret this as a request to rename Query1, and your Project and the Project folder will show that SQLQuery1.sql no longer exists but there is now a file named as you requested. If you happen to click Save in error then right-click the query in the project and choose Rename. You can then alter the name as you like, even when it is open in the SSMS text editor, and the file will be renamed. When creating a set of scripts for a presentation I name files with a numeric prefix so that when they are sorted by name they are in the order that I need to use them during the session. I love this idea but I’ve got loads of existing scripts I want to put in Projects. Excellent: adding existing files to a project is easy. Let’s consider that you have query files in your My Documents folder and you want to bring them into the Project we have just created.
    Right-click on the Project and choose Add | Existing Item. Navigate to the location of your chosen file and select it. The file will open in the SSMS text editor and the Project will be updated to show that the selected query is now part of your project. If you look in Windows Explorer you will see that the query file has been copied into the Project folder; the original file still remains in your My Documents (or wherever it existed). I’ll leave it as an exercise for the reader to explore creating further Projects within a solution, but I will happily answer questions if you get into difficulties. What other advantages do I get from this? Well, as all your code is neatly in one Solution folder and the folder contains only files that are pertinent to the session you are presenting, it makes it very easy to share this code: simply copy the whole folder onto a USB stick, blog, FTP location, wherever you choose, and it’s all there in one self-contained parcel. You don’t have to limit yourself to .sql query files; you can add any sort of document via the Add Existing Item method, just try it out. Right-click on the project and choose Add | Existing Item, then change the file type filter. You can multi-select items here using Ctrl as you click each item you want. When you are done, click the Add button and the items will be brought into your project. Again, using this process means the files are copied into the project folder, leaving your original files untouched in their original location. Once they are here you can double-click them in the SSMS Solution Explorer to open them; for files with a specific file type the appropriate application will be launched – i.e. Word, Excel, etc. However, if the files are something that the SSMS text editor can display then they will open in a tab in SSMS. Try it out with a text file or even a PS1 file. This sounds excellent, but what do I need to watch out for? One big thing to consider when working like this is the version of SSMS that you are using. There is something fundamentally different between the different versions in the way that the project (.ssmssqlproj) and solution (.sqlsuo and .ssmssln) files are formatted. If you create a solution in an older version of SSMS and then open it in a newer version you will be given the option to upgrade it. Once you do this upgrade, the older version of SSMS will not be able to open the solution any more. Now this ranks as more of an annoyance than a disaster, as the files within the projects are not affected in any way; you would just have to delete the files mentioned and recreate the solution in the older version again. Summary: here we have seen how using SSMS Projects and Solutions can help keep related code files (and other document types) together in a neat structure so that they can be quickly navigated during a presentation, and it also makes it incredibly simple to distribute your code and share it with others. I hope this is of use to you and helps you bring more order into your SQL files. Whether you are a person that does technical presentations or not, having your code grouped and managed can bring a lot of advantages as your code library expands.

    Read the article

  • Syslog/kernlog filling up with "did not claim interface N before use"

    - by Wayne Werner
    As I discovered when asking this question, it appears that demond_nscan is trying to use a device without claiming it. And it does it hundreds of times per second, apparently. This makes kern.log and syslog huge (100GB). In this particular case the problem is directly a result of some Lexmark drivers that were installed (found at /usr/local/lexmark/unix_scan_drivers/bin/demond_nscan). Here are a few things I know: The drivers are for an all-in-one printer/scanner device. There was a previous Lexmark printer-only driver that was installed with CUPS. This driver was the one for CUPS systems, and I think that it automatically added the printer to the list of printers in CUPS. The issue started spamming kern.log/syslog only after these drivers were installed, using the Lexmark installers. While googling around I found this thread, which is not completely related, but it does mention that this might happen when two drivers try to control the same device at the same time. How can I resolve this issue so that either I only have one driver, or the driver claims the device before using it?

    Read the article

  • Similar But Not The Same

    - by rickramsey
    A few weeks ago we published an article that explained how to use Oracle Solaris Cluster 3.3 5/11 to provide a virtual, multitiered architecture for Oracle Real Application Cluster (Oracle RAC) 11.2.0.2. We called it ... How to Deploy Oracle RAC on Zone Clusters Welllllll ... we just published another article just like it. Except that it's different. The earlier article was for Oracle RAC 11.2.0.2. This one is for Oracle RAC 11.2.0.3. This one describes how to do the same thing as the earlier one --create an Oracle Solaris Zone cluster, install and configure Oracle Grid Infrastructure and Oracle RAC in the zone cluster, and create an Oracle Solaris Cluster resource for Oracle RAC-- but for version 11.2.0.3 of Oracle RAC. Even though the objective is the same, and the version is only a dot-dot-dot release away, the process is quite different. So we decided to call it: How to Deploy Oracle RAC 11.2.0.3 on Zone Clusters Hope you can keep the different versions clear in your head. If not, let me know, and I'll try to make them easier to distinguish. - Rick Website Newsletter Facebook Twitter

    Read the article

  • How much system and business analysis should a programmer be reasonably expected to do?

    - by Rahul
    In most places I have worked, there were no formal system or business analysts and the programmers were expected to perform both roles. One had to understand all the subsystems and their interdependencies inside out. Further, one was also supposed to have a thorough knowledge of the business logic of the applications and interact directly with the users to gather requirements, answer their queries, etc. In my current job, for example, I spend about 70% of my time doing system analysis and only 30% programming. I consider myself a good programmer but struggle with developing a good understanding of the business rules of a complex application. Often, this creates a handicap because while I can write efficient algorithms and thread-safe code, I lose out to guys who may be average programmers but have a much better understanding of the business processes. So I want to know: How much business and systems knowledge should a programmer have? How does one go about getting this knowledge in an immensely complex software system (e.g. trading applications) with several interdependent business processes but poorly documented business rules?

    Read the article

  • SQLAuthority News – Resolution for New Year 2011

    - by pinaldave
    Today is the first day of the year, so I want to write something very light. Last Year: 2010. Last year was a blast; I really traveled a lot. My family and I went on vacation. There I enjoyed being a father, rolling on the floor and playing with my daughter. Here is the list of the countries I visited throughout 2010: Singapore (twice), Malaysia (twice), Sri Lanka (thrice), Nepal (once), United States of America (twice), United Arab Emirates (UAE) (once). My daughter, who completed 1 year on September 1, 2010, has so far visited three countries: Singapore, Malaysia and Sri Lanka, where I have done lots of community activities. The list containing all my activities can be found at Pinal Dave's Community Events. I wrote nearly 380 blog posts last year. It would be difficult for me to pick a few. However, I keep a running list of all of my articles over here: All Articles on SQLAuthority.com. I have so far received more than 10,000 email questions during the year, and consequently I have done my best to answer most of them. I strongly believe that if one were to search the SQLAuthority.com blog, they would find the answer quickly. The best part of 2010 for me was working on SQL Server Health Check and SQL Server Performance Tuning. This Year: 2011. This year, I came up with two simple goals: 1. Personal goal: reduce weight. 2. Professional goal: stay busy for the entire year with SQL Server performance tuning projects. (Currently January 2011 is booked with performance tuning projects and 40 other days are already booked throughout the year.) Future: The future is something one cannot exactly guess and one cannot see. I just want to wish all of you the very best for this coming New Year. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Future Air Plane – A new world

    - by Rekha
    For the first time in my life, I wished I had more years to live. The world has evolved from the life of the caveman to that of a man who is almost The Creator. When I was about 12 years old, I was taken to the Chennai Planetarium for my school excursion. That day we were made to lie down in a dark room and the ceiling was full of stars and planets. All of those were just videos, but the day still stands out in my mind. The same kind of experience, for real, is waiting for our future generations. Even though English movies have gone beyond imagination, we still have the chance to bring those imaginings into reality. You must be wondering why all this hype. Recently Airbus unveiled news of a transparent airplane planned for 2050. This airplane will have a transparent body, allowing passengers to view the sky from all sides of the plane while flying high above the ground. And it will have all possible technologies under one roof, giving immense pleasure to the passengers. The journey would be an unforgettable one for each of us. Image and News Credit: Daily Telegraph. This article, titled Future Air Plane – A new world, was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • Responsive website VS mobile website

    - by Saif Bechan
    I am creating a new blog. Nowadays, especially for a blog, it's important that the website is accessible on all devices. Now I have to make a choice about what to do. I have seen two options. Option 1 is to go with a normal fixed-width website, for example 960px wide (grid960), and have a separate mobile version for mobile users. This takes some more time, but then there are two good versions of the website. Option 2 I haven't seen a lot of yet: creating an adaptive website, also called a responsive website. I am now looking into the LESS framework, where the website automatically switches to the required width. The only downside is that when the normal browser is re-sized, everything re-sizes. Another problem I found is that pinch-to-zoom on devices does not work. Now the question is, which one would you prefer for a blog: one that constantly changes layout when you move your device, or one where you have the choice to view the mobile or normal version? If there are any other options, please let me know.

    Read the article

  • I Could Sure Use Some Windows Azure Code Samples…

    - by BuckWoody
    There are multiple ways to learn, and one of the most effective is with examples. You have multiple options with Windows Azure, including the Software Development Kit, the Windows Azure Training Kit and now another one: the Microsoft All-In-One Code Framework, a free, centralized code sample library driven by developers' real-world pains and needs. The goal is to provide customer-driven code samples for all Microsoft development technologies, and Windows Azure is included. Once you hit the site, you download an EXE that will create a web-app based installer for a Code Browser. Once inside, you can configure where the samples store data and other settings, and then search for what you want. You can also request a sample - if enough people ask, we do it. OneCode has also partnered with the Gallery and Visual Studio teams to develop the Sample Browser Visual Studio extension. It's an easy way for developers to find and download samples from within Visual Studio.

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – Stage 4: Automated Deployment. If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users” There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready? If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team; Environments; The Deployment Process; Rollback and Recovery; Development Practices.

    You and Your Team
    It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:
    • 2008 to present: overall development costs reduced by 40%
    • Number of programs under development increased by 140%
    • Development costs per program down 78%
    • Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)
    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans; get your boss involved too and make sure he/she is bought in to the investment.

    Environments
    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:
    • What we want: Each developer with their own dedicated database environment. What we’ve got: A single shared “development” environment, used by everyone at once.
    • What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we’ve got: In fact, if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
    • What we want: A separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production.”
    • What we want: A proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?
    • What we want: A production environment reproducible from source control. What we’ve got: A production box which has drifted significantly from anything in source control.
    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
    There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:
    • Dedicated development databases.
    • An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action.]
    • QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing.
    • Pre-production – the environment you use to test the production release process.
    • Production.
    * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user. Actions: Get your environments set up and ready; set access permissions appropriately; make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development).

    The Deployment Process
    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”
    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it into pre-prod.
    2. Again, use a schema compare tool to find the differences for the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script.
    3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
    5. If all is working, run the script on production.*
    * This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something into the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend.
And anything in between, of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, and “What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?”

NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes.

Actions:
- Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments: “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, while still maintaining some sort of continuous control over the process?”
- Repeat for earlier environments (QA and so on).

Rollback and Recovery

If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move to a more automated database deployment process, you’re far more likely to be deploying frequently – no longer once every six months, but perhaps once per week, or even daily. A quick rollback or recovery process therefore becomes paramount, and should be planned for.

NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can simply be rolled back – no problem.

There are various options, which we’ll explore in subsequent articles, such as:
- Immediately restore from backup;
- Have a pre-tested rollback script (remembering that this is really a “roll-forward” script – there’s not really such a thing as a rollback script for a database!);
- Have fallback environments – for example, using a blue-green deployment pattern.

Different options have pros and cons – some are easier to set up, some require more investment in infrastructure, and of course some work better than others (the key issue with using backups is the loss of the interim transaction data added between the failed deployment and the restore). The best mechanism will depend primarily on how your application works and how much you need a cast-iron failsafe mechanism.

Actions:
- Work out an appropriate rollback strategy based on how your application and business work, your appetite for investment, and your requirements for a completely failsafe process.
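As an illustration of the “pre-tested roll-forward script” option, the sketch below runs a simple post-deployment check and, if it fails, applies a roll-forward script that was written and tested alongside the release. The connection details, the check itself and the file name are all assumptions invented for the example.

```python
# Sketch: verify a release after it has committed and, if the check fails,
# apply the pre-tested roll-forward script shipped with that release.
# Names, checks and scripts here are illustrative only.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=prod-sql;DATABASE=MyApp;Trusted_Connection=yes;"
)
ROLL_FORWARD_SCRIPT = "artifacts/rollforward.sql"   # tested before release day

def release_looks_healthy(cursor) -> bool:
    """Placeholder verification: e.g. the column added by the release exists."""
    cursor.execute(
        "SELECT COUNT(*) FROM sys.columns "
        "WHERE object_id = OBJECT_ID('dbo.Customer') AND name = 'LoyaltyTier';"
    )
    return cursor.fetchone()[0] == 1

conn = pyodbc.connect(CONN_STR, autocommit=True)
cursor = conn.cursor()

if release_looks_healthy(cursor):
    print("Post-deployment checks passed.")
else:
    # The deployment transaction committed some time ago, so we roll *forward*:
    # a script that restores the old behaviour without discarding any data
    # written since the deployment.
    print("Checks failed - applying the pre-tested roll-forward script.")
    with open(ROLL_FORWARD_SCRIPT) as f:
        for batch in f.read().split("\nGO\n"):   # crude GO batch splitting
            if batch.strip():
                cursor.execute(batch)
```

Whichever option you choose, the crucial part is that the recovery path is rehearsed on pre-production as part of the release, not improvised on the night.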
Development Practices

This is perhaps the more difficult area for people to tackle. The process by which you deploy database updates is intrinsically linked with the patterns and practices used to develop that database and the linked application. So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier.

A good example is the “branch by abstraction” pattern. Explained nicely by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner, so that you can always roll back without data loss, by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace

As those slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – the release team gets the opportunity to carry out zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). A minimal sketch of this stepwise approach appears at the end of this section.

There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515

But the question is: how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start.

Actions:
- For your business, work out how far down the path you want to go in amending your database development patterns towards “best practice”. It’s a trade-off between implementing quality processes and the necessity of doing so (depending on how often you make complex changes).
- Socialise these changes with your development group. No one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas, and the rationale behind them, early.
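To make the branch-by-abstraction idea concrete, here’s the minimal sketch promised above: splitting a (hypothetical) address column out of a Customer table as a series of small, backward-compatible increments, each shipped as a release in its own right. The table and column names are invented, and the steps are expressed simply as a list of SQL strings that would flow through the same deployment pipeline as any other change.

```python
# Sketch: a table split carried out as independently deployable, backward-
# compatible increments. Each step keeps the previous application version
# working, so any step can be recovered from without data loss.
# All object names are hypothetical.

MIGRATION_STEPS = [
    # Release 1: add the new table alongside the existing Customer.Address column.
    """
    CREATE TABLE dbo.CustomerAddress (
        CustomerId INT NOT NULL PRIMARY KEY
            REFERENCES dbo.Customer (CustomerId),
        Address    NVARCHAR(400) NOT NULL
    );
    """,

    # Release 2: backfill the new table. The application starts writing to both
    # places (that change ships with the application code) but still reads the
    # old column, so nothing breaks if this step has to be backed out.
    """
    INSERT INTO dbo.CustomerAddress (CustomerId, Address)
    SELECT CustomerId, Address
    FROM dbo.Customer
    WHERE Address IS NOT NULL;
    """,

    # Release 3: application-only release - reads switch to dbo.CustomerAddress.
    "-- no schema change in this release; reads move to dbo.CustomerAddress",

    # Release 4: once nothing reads or writes the old column, remove it.
    """
    ALTER TABLE dbo.Customer DROP COLUMN Address;
    """,
]
```

The point is not the particular SQL, but the shape: four small releases, each boring on its own, instead of one large change that has to succeed completely or be untangled under pressure.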

Summary

The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning if you want to get the most out of the work and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly getting the team ready for the changes that are coming, and planning out your pipeline, environments, patterns and practices for development – though there will be more detail, depending on where you’re coming from and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles on version control, automated testing, continuous integration and deployment.
