Search Results

Search found 10149 results on 406 pages for 'acceptance testing'.

Page 131/406

  • SQLAuthority News – Book Signing Event – SQLPASS 2011 Event Log

    - by pinaldave
    I have been dreaming of writing a book for a really long time, and I finally got the chance – in fact, two chances!  I recently wrote two books: SQL Programming Joes 2 Pros: Programming and Development for Microsoft SQL Server 2008 [Amazon] | [Flipkart] | [Kindle] and SQL Wait Stats Joes 2 Pros: SQL Performance Tuning Techniques Using Wait Statistics, Types & Queues [Amazon] | [Flipkart] | [Kindle].  I had a lot of fun writing these two books, even though I sometimes had to sacrifice family time and time for other personal development. The good side of writing books is when the effort put into them is recognized by readers and kind organizations like expressor studio. Book writing is a complex process.  Even after you spend months, maybe years, writing the material, you still have to go through the editing and fact-checking processes.  And once the book is out there, there is no way to take back all the copies to change mistakes or add something you forgot.  Most of the time it is a one-way street. Just like every author, I had a dream that after the books were written, they would be loved by people and gain acceptance by an audience. My first book, SQL Programming Joes 2 Pros: Programming and Development for Microsoft SQL Server 2008, is extremely popular because it helps lots of people learn various fundamental topics. My second book covers learning SQL Server Wait Stats from the beginning, which is a relatively new subject. This book has had very good acceptance in the community. Helping my community is my primary focus, so I was happy to see this year’s SQLPASS tag line: ‘This is a Community.‘ At the event, the expressor studio guys came up with a very novel idea. They had previously used my books and found them very useful. They got 100 copies of the book and decided to give them away to community folks. They invited me and my co-author Rick Morelan to hold a book signing event. We did the signing on Thursday between 1 pm and 2 pm. This event was one of the best events for me. It was my first book signing event outside of India. I reached the book signing location around 20 minutes before the scheduled time, and what I saw was a big line for the event. I felt very honored looking at the crowd and all the people around the event location. I felt very humbled when I saw some of my very close friends standing in the line to get my signature. It was really heartwarming to see so many enthusiasts waiting for more than an hour to get my signature. I had the chance to have a conversation with every single person who showed up for a signature. I made sure that I repeated every single name and wrote it in every book with my signature. There is a saying that if we write a name once we will remember it forever. I want to remember all of you who saw me at the book signing. Your comments were wonderful, your feedback was amazing, and you were all very supportive. I have made a note of every conversation I had with all of you while I was signing the books. Once again, I just want to express my thanks for coming to my book signing event. The whole experience was very humbling. On top of that, I want to thank the expressor studio people who made it possible and who organized the whole signing event. I am so thankful to them for facilitating the whole experience, which is going to be hard to beat by any future experience.
My books Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL PASS, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL, Technology

    Read the article

  • UPK Hands-on Labs at OHUG

    - by Karen Rihs
    Going to OHUG, June 18-22? Be sure to attend one or more UPK hands-on labs! Choose from Basic, Advanced, What's New, and Prebuilt Content!

    Oracle User Productivity Kit 11.1 Workshop – Basic
    Stephen Armbruster, Oracle Corporation
    June 19, 2012, 11:00 a.m. – 12:00 p.m.
    June 20, 2012, 4:30 – 5:30 p.m.
    The User Productivity Kit (UPK) is a comprehensive, cost-effective, customizable solution that helps your organization quickly create the critical documentation, training, and support materials needed to drive project team and user productivity throughout the lifecycle of your software. The User Productivity Kit provides system process documentation, user acceptance test scripts, comprehensive instructor-led training materials, web-based training materials, role-based performance support, and complete documentation. Also provided is the UPK Developer, which serves as a single-source development and customization tool to enable rapid content creation and customization. The User Productivity Kit delivers:
    - Business process documentation for fit-gap analysis, providing time and cost savings that jump-start your implementation or upgrade
    - User acceptance test scripts to help test applications prior to go-live
    - State-of-the-art instructional design tools to rapidly build and tailor documentation, instructor-led training materials, and web-based training to fit organizational needs
    - Live-application performance support with transactional and procedural information to maximize user efficiency
    By registering for this hands-on UPK workshop, participants will use UPK to build an application job aid and simulation that can be used as performance support for the application. But hurry, space is limited!

    Oracle User Productivity Kit 11.1 Workshop – Advanced
    Stephen Armbruster, Oracle Corporation
    June 20, 2012, 1:30 – 2:30 p.m.
    This special workshop is for those already familiar with UPK and will cover advanced concepts. In this workshop, you will gain in-depth knowledge of working with the UPK Developer. Following this workshop, you will be able to:
    - Create publishing categories
    - Add a logo to a publishing project
    - Publish using the newly created category
    - Configure your own library view
    - Manage topic history in a multi-user environment

    Oracle User Productivity Kit 11.1 Workshop – What’s NEW!
    Stephen Armbruster, Oracle Corporation
    June 19, 2012, 1:30 – 2:30 p.m.
    June 21, 2012, 1:00 – 2:00 p.m.
    This special workshop is for those already familiar with UPK and will focus on the new features included in the latest version, 11.1. In this workshop, you will review most of the new features included in the UPK Developer.

    Oracle User Productivity Kit 11.1 Workshop – Prebuilt Content
    Stephen Armbruster, Oracle Corporation
    June 19, 2012, 4:30 – 5:30 p.m.
    June 21, 2012, 2:15 – 3:15 p.m.
    This special workshop is for those already familiar with UPK and will focus on the latest version, 11.1. At the end of this workshop, you will be able to demonstrate how to:
    - Import prebuilt content
    - Modify content frames
    - Add a decision frame
    - Translate a topic into Spanish

    Stephen Armbruster is a principal sales consultant who has specialized in HCM and UPK applications at Oracle over the past twelve years. In addition to his current role, he serves as an ambassador for the Fusion User Experience (UX) team and is tasked with evangelizing the UX for end users across all Oracle brands (Fusion, PSFT, JDE, and EBS).  He is also a trusted advisor to Oracle’s Product Management teams on Learning Management Systems (LMS).
Prior to joining Oracle, he was an instructor as well as an instructional technologist working in the medical diagnostics, high tech, and information management industries. As an expert in both LMS and UPK, he regularly speaks at Oracle conferences including Oracle OpenWorld and OHUG on topics that span using Oracle solutions to accomplish employee training, certification, and user adoption. His presentations are both entertaining and engaging.

    Read the article

  • Lease Accounting Closed for Comment

    - by Theresa Hickman
    December 15, 2010 marked the last day to send public comments to FASB and IASB on lease accounting. June 2011 is the deadline for the final consideration of the Leases Exposure Draft that will be given to standard setters in order to create a new lease accounting standard. Landlords, lessees, retailers, the airline industry, etc. are all worried right now about the changes to lease accounting. They feel the changes will be too costly and complex without adding significant improvement to the quality and relevance of financial statements. In a nutshell, IASB and FASB want to abolish operating leases, where the lessee records the periodic payments as an expense over time. The proposed changes will mean that the accounting for leases will move from the P&L and hit both the lessee's and lessor's balance sheets. For companies that occupy a lot of property, this could significantly increase their liabilities, not to mention front-load much of the costs that they were able to spread out over time before. Why are IASB and FASB doing this? Their goal is to have consistent accounting for both the lessees and lessors with higher quality financial statements. Leasing is one of four major projects being undertaken by the IASB and FASB in order to complete convergence between US GAAP and IFRS. I spoke to our resident accounting expert Seamus Moran about this to better understand how it might impact accounting software. He reminded me that the proposed changes to both US GAAP and IFRS in respect to leases are "proposed." It is still inappropriate to account for leases the way they are being proposed, and we still need to account for them in accordance with the current regulations, which is what current accounting software programs, such as E-Business Suite Release 12.1 and prior and PeopleSoft Enterprise, support. The FASB (US GAAP) and IASB (IFRS) exposure drafts (EDs) that outline the proposal were published. The FASB edition was published on August 17th, with comments due by December 15th. The IASB edition was published on the same date, and comments were due in London on the same date. Exposure drafts are the method both the FASB and the IASB use to solicit General Acceptance, the "GA" in GAAP. Both Boards will consider the input they have received, and perhaps revise the proposal. The proposal has come in for some criticism, both from the finance houses and from the users of the leased assets. There is, given the opposition to it, an excellent chance that the leasing proposal will be modified or rewritten. We will know this in about six months, the usual time it takes for the FASB and IASB to digest the comments they receive. If they feel the proposal has General Acceptance, they will issue the final Standard at that time; if not, they will issue a revised proposal with another year of comment and drafting. Oracle participates in the standard-setting process and is fully aware of the leasing proposal. We have designs that would reflect the proposal in hand. These designs will be finalized when the proposal is finalized. It is likely that customers will develop new financial arrangements if the proposal is finalized, and we are working with customers and partners to stay in touch with people's business responses to the proposal. The IASB and FASB are aware that ERP companies will have to revise their software, and that the companies filing results under IFRS or under US GAAP will have to implement such software.
The form and timing of the release of the updated software will depend on the schedule of the take up of the new standard, the complexity of the standard, and the releases supported at the time the standard becomes effective.

    Read the article

  • How to create a column containing a string of stars to indicate levels of a factor in a data frame i

    - by PaulHurleyuk
    (second question today - must be a bad day) I have a dataframe with various columns, including a concentration column (numeric), a flag highlighting invalid results (boolean) and a description of the problem (character):

        dput(df)
        structure(list(x = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10),
            rawconc = c(77.4, 52.6, 86.5, 44.5, 167, 16.2, 59.3, 123, 1.95, 181),
            reason = structure(c(NA, NA, 2L, NA, NA, NA, 2L, 1L, NA, NA),
                .Label = c("Fails Acceptance Criteria", "Poor Injection"), class = "factor"),
            flag = c("False", "False", "True", "False", "False", "False", "True", "True", "False", "False")),
            .Names = c("x", "rawconc", "reason", "flag"), row.names = c(NA, -10L), class = "data.frame")

    I can create a column with the numeric level of the reason column:

        df$level <- as.numeric(df$reason)
        df
           x rawconc                    reason  flag level
        1  1   77.40                      <NA> False    NA
        2  2   52.60                      <NA> False    NA
        3  3   86.50            Poor Injection  True     2
        4  4   44.50                      <NA> False    NA
        5  5  167.00                      <NA> False    NA
        6  6   16.20                      <NA> False    NA
        7  7   59.30            Poor Injection  True     2
        8  8  123.00 Fails Acceptance Criteria  True     1
        9  9    1.95                      <NA> False    NA
        10 10  181.00                     <NA> False    NA

    and here's what I want to do to create a column with 'level' many stars, but it fails:

        df$stars<-paste(rep("*",df$level)sep="",collapse="")
        Error: unexpected symbol in "df$stars<-paste(rep("*",df$level)sep"

        df$stars<-paste(rep("*",df$level),sep="",collapse="")
        Error in rep("*", df$level) : invalid 'times' argument

        rep("*",df$level)
        Error in rep("*", df$level) : invalid 'times' argument

        df$stars<-paste(rep("*",pmax(df$level,0,na.rm=TRUE)),sep="",collapse="")
        Error in rep("*", pmax(df$level, 0, na.rm = TRUE)) : invalid 'times' argument

    It seems that rep needs to be fed one value at a time. I feel that this should be possible (and my gut says 'use lapply' but my apply fu is v. poor). Anyone want to try?
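    The questioner's instinct is right: rep()'s times argument has to be a single value here, so the call needs to be applied row by row. A minimal sketch of one way to do it in R (untested against the exact data above; strrep() requires R >= 3.3):

        # One string of df$level[i] stars per row; NA levels become ""
        df$stars <- sapply(df$level, function(n) {
          if (is.na(n)) "" else strrep("*", n)
        })

        # Equivalent without strrep(), closer to the original attempt:
        df$stars <- sapply(df$level, function(n) {
          if (is.na(n)) "" else paste(rep("*", n), collapse = "")
        })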

    Read the article

  • How to fix Monogame WP8 Touch Position bug?

    - by Moses Aprico
    Normally the code below results in X:Infinity, Y:Infinity:

        TouchCollection touchState = TouchPanel.GetState();
        foreach (TouchLocation t in touchState)
        {
            if (t.State == TouchLocationState.Pressed)
            {
                vb.ButtonTouched((int)t.Position.X, (int)t.Position.Y);
            }
        }

    Then I followed https://github.com/mono/MonoGame/issues/1046 and added the code below as the first lines of the Update method. (I still don't know how it works, but it fixed that problem.)

        if (_firstUpdate)
        {
            // Reset MonoGame's private _touchScale field via reflection
            typeof(Microsoft.Xna.Framework.Input.Touch.TouchPanel)
                .GetField("_touchScale", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Static)
                .SetValue(null, Vector2.One);
            _firstUpdate = false;
        }

    And then, while randomly testing something, I found several areas that won't register the user's touch. The tile with the purple dude is the area which won't receive user input (it doesn't even detect "Pressed"; the TouchCollection.Count = 0). Any idea how to fix this?
    UPDATE 1: The second attempt at recompiling. The difference is weird. Dunno why the consistent clickable area is just the 2/3 of the screen to the left.
    UPDATE 2: After trying to rotate to landscape and back to portrait while randomly testing, the outcome became:

    Read the article

  • Data Masking Pack 12.1.0.3 Certified with E-Business Suite 12.1.3

    - by Elke Phelps (Oracle Development)
    I'm pleased to announce the certification of the E-Business Suite 12.1.3 Data Masking Template for the Data Masking Pack with Enterprise Manager Cloud Control 12.1.0.3. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager Grid Control 12c to scramble sensitive data in cloned E-Business Suite environments. You may scramble data in E-Business Suite cloned environments with EM 12.1.0.3 using the following template:
    - E-Business Suite 12.1.3 Data Masking Template for Data Masking Pack with EM12c (Patch 18462641)

    What does data masking do in E-Business Suite environments?
    Application data masking does the following:
    - De-identify the data: Scramble identifiers of individuals, also known as personally identifiable information or PII. Examples include information such as name, account, address, location, and driver's license number.
    - Mask sensitive data: Mask data that, if associated with personally identifiable information (PII), would cause privacy concerns. Examples include compensation, health and employment information.
    - Maintain data validity: Provide a fully functional application.

    How can EBS customers use data masking?
    The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table.  Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers.  Due to data dependencies, scrambling E-Business Suite data is not a trivial task.  The data needs to be scrubbed in such a way that allows the application to continue to function. The template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments.  The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack.  When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing.

    Is there a charge for this?
    Yes. You must purchase licenses for the Oracle Data Masking Pack to use the Oracle E-Business Suite 12.1.3 template. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license.  You can contact your Oracle account manager for more details about licensing.

    References
    Additional details and requirements are provided in the following My Oracle Support Note:
    - Using Oracle E-Business Suite Release 12.1.3 Template for the Data Masking Pack with Oracle Enterprise Manager 12.1 Data Masking Tool (Note 1481916.1)
    - Masking Sensitive Data in the Oracle Database Real Application Testing User's Guide 11g Release 2 (11.2)

    Related Articles
    - Scrambling Sensitive Data in E-Business Suite
    - E-Business Suite 12.1.3 Data Masking Certified with Enterprise Manager 12c

    Read the article

  • TestDriven.NET - Free unit test tool for Visual Studio

    - by Guilherme Cardoso
    Developers who use unit testing are familiar with ReSharper and its plugin for unit testing. For those who, like me, don't have great PC hardware (Pentium 4, 3 GHz, 1 GB RAM), ReSharper can be really slow and affect the performance of the PC. That's why I use TestDriven.NET. TestDriven.NET is a freeware-licensed tool (there are other licenses for this product) that lets us run unit tests with a plugin that is integrated with Visual Studio. You can check some screenshots here: http://www.testdriven.net/Screenshots.aspx It's compatible with: NUnit, MbUnit, MSTest, NCover, Reflector, TypeMock, dotTrace and MSBee. More information and free download here: http://www.testdriven.net
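    For readers new to these runners, the kind of test they execute is plain NUnit (or MbUnit/MSTest) code. A minimal, hypothetical NUnit example (Calculator is an assumed class under test, not part of any of these tools):

        using NUnit.Framework;

        [TestFixture]
        public class CalculatorTests
        {
            [Test]
            public void Add_TwoPositiveNumbers_ReturnsSum()
            {
                // Calculator is a hypothetical class under test
                var result = Calculator.Add(2, 3);
                Assert.AreEqual(5, result);
            }
        }

    With TestDriven.NET installed, a test like this can be run directly from the Visual Studio context menu on the method, class, or project.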

    Read the article

  • AVTest.org Results for March – April 2014 now Available

    - by Akemi Iwaya
    Do you like to keep up with how well the various anti-virus programs are doing, or just want to see how well your favorite one did? Then you will definitely want to have a look at the latest batch of test results from AVTest.org. The results for testing during March and April are now available for viewing at your leisure. One thing to keep in mind when viewing the latest set of results: the testing was performed on Windows 8.1 during this round. Current security products for Windows 8.1 put to the test [AVTest.org] Note: When you visit the page, you may need to scroll down just a tiny bit in order to see the results listing. [via ZDNet News]

    Read the article

  • T-4 Templates for ASP.NET Web Form Databound Control Friendly Logical Layers

    - by Mohammad Ashraful Alam
    I just released an open source project at CodePlex, which includes a set of T-4 templates that will enable you to build an ASP.NET Web Form data-bound-control-friendly, testable logical layer based on Entity Framework 4.0 with just a few clicks! In this open source project you will get Entity Framework 4.0 based T-4 templates for the following types of logical layers: Data Access Layer: Entity Framework 4.0 provides an excellent ORM data access layer. It also includes support for T-4 templates, as a built-in code generation strategy in Visual Studio 2010, where we can customize the default structure of the data access layer based on Entity Framework. The default structure of the data access layer has been enhanced to support mock testing in the Entity Framework 4.0 object model. Business Logic Layer: an ASP.NET Web Form data bound control friendly business logic layer, which will enable you, in a few clicks, to build data bound web applications on top of ASP.NET Web Forms and Entity Framework 4.0 quickly, with great support for mock testing. Download it to make your web development productive. Enjoy!

    Read the article

  • How to make FN keys working on Asus G75 laptop

    - by c_inconnu
    I just bought an Asus G75 and I cannot make the FN keys work. I only found how to control the brightness (http://askubuntu.com/questions/126441/brightness-controls-doesnt-work-on-a-macbook-pro-5-5-ubuntu-12-04-lts) but the other keys are not recognized. I didn't know much about key binding before digging in, but I tried:
    - testing with xev: no output...
    - testing with keymap: no output...
    - modprobe asus-laptop: FATAL: Error inserting asus_laptop (/lib/modules/3.2.0-25-generic/kernel/drivers/platform/x86/asus-laptop.ko): No such device (not sure what that means)
    - modprobe asus-nb-wmi: FATAL: Error inserting asus_nb_wmi (/lib/modules/3.2.0-25-generic/drivers/platform/x86/asus-nb-wmi.ko): No such device (not sure what that means)
    Thanks for your advice,
    David
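    A few more diagnostic steps commonly suggested in cases like this (a sketch, not a verified fix; the acpid and evtest packages are assumed to be installable on this Ubuntu version):

        # See whether the kernel logs anything when an FN key is pressed
        dmesg | tail -n 20

        # Check whether ACPI events fire for the keys (from the acpid package)
        sudo acpi_listen

        # Check low-level input events, bypassing X entirely (from the evtest package)
        sudo evtest

    If acpi_listen or evtest shows events but xev does not, the keys are reaching the kernel and the problem is in the keymap layer rather than the driver.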

    Read the article

  • VS 2010: SP1

    - by xamlnotes
    SP1 for VS 2010 just hit the web today. Check it out at http://support.microsoft.com/kb/983509/en-us HTH. This should fix lots of big and little things such as startup time, bugs and more. Plus there are tons of features in there too for web, XAML, and other application types.  I am really excited about the unit testing and load testing features that were added. There's also an update for the .NET 4 Framework. And check out the new Silverlight performance wizard. Lots of really cool stuff. Get it today! Download it from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=11ea69cb-cf12-4842-a3d7-b32a1e5642e2

    Read the article

  • VS 2010: SP1

    - by xamlnotes
    I posted this yesterday but had the wrong link at the bottom. SP1 for VS 2010 just hit the web today. Check it out at http://support.microsoft.com/kb/983509/en-us HTH. This should fix lots of big and little things such as startup time, bugs and more. Plus there are tons of features in there too for web, XAML, and other application types.  I am really excited about the unit testing and load testing features that were added. There's also an update for the .NET 4 Framework. And check out the new Silverlight performance wizard. Lots of really cool stuff. Get it today! For now it looks like only MSDN subscribers can download it. Download it from here: http://msdn.microsoft.com/en-us/vstudio/default

    Read the article

  • Writing the tests for FluentPath

    Writing the tests for FluentPath is a challenge. The library is a wrapper around a legacy API (System.IO) that wasn't designed to be easily testable. If it were more testable, the sensible testing methodology would be to tell System.IO to act against a mock file system, which would enable me to verify that my code is doing the expected file system operations without having to manipulate the actual, physical file system: what we are testing here is FluentPath, not System.IO. Unfortunately, that...
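    The mocking approach the author describes relies on putting an abstraction between the code and System.IO. A minimal sketch of such a seam (the interface and type names here are hypothetical illustrations, not FluentPath's actual API):

        using System.Collections.Generic;

        // A hypothetical file-system seam: production code depends on this
        // interface instead of calling System.IO.File directly.
        public interface IFileSystem
        {
            bool FileExists(string path);
            void DeleteFile(string path);
        }

        // The real implementation delegates to System.IO.
        public class PhysicalFileSystem : IFileSystem
        {
            public bool FileExists(string path) => System.IO.File.Exists(path);
            public void DeleteFile(string path) => System.IO.File.Delete(path);
        }

        // In a test, a fake records calls so assertions never touch the disk.
        public class FakeFileSystem : IFileSystem
        {
            public readonly List<string> Deleted = new List<string>();
            public bool FileExists(string path) => true;
            public void DeleteFile(string path) => Deleted.Add(path);
        }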

    Read the article

  • Finding an alert in the middle of your javascript

    - by Ariel Popovsky
    I was debugging a script injection issue the other day using some sample code with an alert in it. The alert was popping up, meaning the code got executed, leaving open the possibility for a hacker to put some nasty malicious code there. I knew my alert was being executed but didn't know how. So I tried something that worked perfectly for this problem: I replaced the native alert function with my own. All I had to do in Chrome was open the JavaScript console and type:

        alert = function(msg){
          console.log(msg);
          console.trace();
        };

    The next time the malicious code was executed, instead of the regular alert I got something similar to this:

        alert("testing")
        testing
        console.trace()
        alert:2
        (anonymous function):2
        InjectedScript._evaluateOn:312
        InjectedScript._evaluateAndWrap:294
        InjectedScript.evaluate:288
        undefined

    In my case I was able to see what was going on and find the offending function. I also tested this with Firebug in Firefox, and it works there as well.
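    A small variation on the same trick, if the page still needs working dialogs while you trace: keep a reference to the native function and delegate to it (a sketch, run from a browser console):

        // Save the native implementation before overriding it.
        var nativeAlert = window.alert;
        window.alert = function (msg) {
          console.log("alert called with:", msg);
          console.trace();               // log the call stack that reached alert()
          nativeAlert.call(window, msg); // still show the original dialog
        };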

    Read the article

  • Programming Test

    - by Travis Webb
    We are looking to hire some more Java developers onto our team, and plan to assess their coding abilities with a test. We currently use a web-based Java test that automatically compiles and runs the code, but it is very flaky and we're having problems with our candidates losing their work on the site. Not only is this frustrating for everyone, it makes us look like we don't know what we're doing. Is there a popular testing suite out there? What do you use? I'm not interested in dogmatic arguments about whether or not I should be testing my candidates in this way; I'm looking for a tool that will help me do it.

    Read the article

  • How can I merge two SubVersion branches to one working copy without committing?

    - by Eric Belair
    My current SubVersion workflow is like so: the trunk is used to make small content changes and bug fixes to the main source code; branches are used for adding/editing enhancements and projects. So, trunk changes are made, tested, committed and deployed pretty quickly, whereas enhancements and projects need additional user testing and approval. At times, I have two branches that need testing and approval at the same time. I don't want to merge to the trunk and commit until the changes are fully tested and approved. What I need to do is merge both branches to one working copy without any commits. I am using TortoiseSVN, and when I try to merge the second branch, I get an error message:

        Cannot merge into a working copy that has local modifications

    Is there a way that I can do this without committing either merge?
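    One commonly suggested workaround is to do the combined testing on a throwaway integration branch rather than in the trunk working copy, so nothing has to be committed to trunk before approval. A sketch with the command-line client (the repository URL and branch names are hypothetical):

        # Create a temporary integration branch from trunk
        svn copy http://svn.example.com/repo/trunk \
                 http://svn.example.com/repo/branches/integration-test \
                 -m "Temporary branch to test two enhancements together"

        # Check it out and merge both feature branches into it
        svn checkout http://svn.example.com/repo/branches/integration-test wc-int
        cd wc-int
        svn merge http://svn.example.com/repo/branches/feature-a .
        svn commit -m "Merged feature-a for combined testing"
        svn merge http://svn.example.com/repo/branches/feature-b .

        # User testing happens here; trunk never sees an unapproved change.
        # When both branches are approved, merge each into trunk as usual
        # and delete the integration branch.

    The intermediate commit lands only on the disposable branch, which sidesteps the "local modifications" error on the second merge while keeping trunk clean.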

    Read the article

  • What do you think a good performance engineer should have?

    - by Vance
    I believe performance tuning (or even testing) is one of the most challenging tasks for an engineer. Yet in lots of companies it gets a lower priority than other "important" things. My purpose in opening this post is to learn what you think a *good* performance engineer should have. I can list some things like:
    - Solid database and programming knowledge.
    - Able to do single-thread performance testing.
    - Good knowledge of using load generator tools to simulate concurrent loads.
    - Uses different tools to monitor/measure the app/DB server performance status.
    - Understands and can debug the code. Even tune the code.
    Any more ideas are always appreciated!

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
    Database delivery patterns & practices – STAGE 4: AUTOMATED DEPLOYMENT
    If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages, really three – source controlling a database, running continuous integration processes, then how to set up automated deployment (the middle stage is split in two – basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users” There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready? If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
    It’s also worth pointing out that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist:
    - You and Your Team
    - Environments
    - The Deployment Process
    - Rollback and Recovery
    - Development Practices

    You and Your Team
    It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as:
    - 2008 to present: overall development costs reduced by 40%
    - Number of programs under development increased by 140%
    - Development costs per program down 78%
    - Firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%)
    But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing. That they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value, I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
    There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions:
    - Get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do or, if you’re the DBA yourself, get the dev team up-to-speed with your plans,
    - Get your boss involved too and make sure he/she is bought in to the investment.

    Environments
    Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:
    - What we want: Each developer with their own dedicated database environment. What we’ve got: A single shared “development” environment, used by everyone at once.
    - What we want: An Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit-tests running on that machine. What we’ve got: In fact, if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!). Whether you have a full suite of unit tests running is a different question…
    - What we want: A separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production.”
    - What we want: A proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: Hopefully a pre-production box of some sort. But does it match production closely!?
    - What we want: A production environment reproducible from source control. What we’ve got: A production box which has drifted significantly from anything in source control.
    The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
    There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control, of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:
    - Dedicated development databases,
    - An Integration server used for testing continuous integration and running unit tests. [NB: This is the point at which deployments are automatic, without human intervention. Each deployment after this point is a one-click (but human) action],
    - QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
    - Pre-production – the environment you use to test the production release process,
    - Production.
    * A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user. Actions:
    - Get your environments set up and ready,
    - Set access permissions appropriately,
    - Make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development).

    The Deployment Process
    As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”:
    1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it into pre-prod,
    2. Again, use a schema compare tool to find the differences between pre-production and the latest version of the database ready to go live (i.e. what the team have been developing). This generates a script,
    3. A user (generally, the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar,
    4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped),
    5. If all is working, run the script on production.*
    * This assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script, that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint-hearted – and really not something we recommend.
    At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between, of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes. Actions:
    - Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintain some sort of continuous control over the process?”
    - Repeat for earlier environments (QA and so on).

    Rollback and Recovery
    If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move to a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles, things like:
    - Immediately restore from backup,
    - Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!),
    - Have fallback environments – for example, using a blue-green deployment pattern.
    Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism. Actions:
    - Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process.

    Development Practices
    This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application.
    So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “branch by abstraction”. Explained nicely here by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slide deck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily); a sketch of the idea follows this excerpt. There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start. Actions:
    - For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes).
    - Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early.

    Summary
    The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning out your pipeline, environments, patterns and practices for development” – though there will be more detail, depending on where you’re coming from, and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
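    As a concrete illustration of the branch-by-abstraction idea described above, here is a hedged sketch of moving a column in backward-compatible increments (table and column names are hypothetical; each step can ship, and be rolled back, on its own):

        -- Step 1: add the new structure alongside the old; existing code keeps working.
        ALTER TABLE Customer ADD EmailAddress VARCHAR(255) NULL;

        -- Step 2: backfill from the old column, and keep the two in sync
        -- (via a trigger or dual-writes in the application) while both exist.
        UPDATE Customer SET EmailAddress = ContactInfo WHERE EmailAddress IS NULL;

        -- Step 3: only once all application code reads and writes the new column
        -- does a later release drop the old one. Rolling back any earlier step
        -- loses no data, because the old column is still present until this point.
        ALTER TABLE Customer DROP COLUMN ContactInfo;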

    Read the article

  • Always use dtexec.exe to test performance of your dataflows. No exceptions.

    - by jamiet
    Earlier this evening I posted a blog post entitled Investigation: Can different combinations of components effect Dataflow performance? where I compared the performance of three different dataflows all working to the same overall goal. I wanted to make one last point related to the results, but I thought it warranted a blog post all of its own. Here is a screenshot of one of the dataflows that I was testing: Pretty complicated, I’m sure you’ll agree. Now, when I executed this dataflow in the test it executed in ~19 seconds; however, in that case I was executing using the command-line tool dtexec. I also tried executing inside the BIDS development environment, and in that case it took much longer – 139 seconds. That’s more than seven times as long. The point I want to make is very simple. If you are testing your dataflows for performance, please use dtexec. Nothing else will suffice. @Jamiet
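    For anyone who hasn't used it, dtexec ships with SQL Server Integration Services and runs a package from the command line; a minimal example (the package path is hypothetical):

        REM Run a file-based SSIS package and note the elapsed time it reports
        dtexec /f "C:\SSIS\MyDataflowPackage.dtsx"

    dtexec prints progress messages and a DTSER_SUCCESS/DTSER_FAILURE result; timing runs this way avoids the debugging and UI overhead that BIDS adds on top of package execution.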

    Read the article

  • Teaching programming to a non-CS graduate

    - by Shahzada
    I have a couple of friends interested in computer programming, but they're non-CS graduates; some of them have very little experience in the software testing field (some of them took basic software testing courses). I am going to be working with them, teaching basic computer programming and computer science fundamentals (data structures etc.). My questions are:
    - What language should I start with?
    - What are the essential computer science topics that I should cover before jumping them into computer programming?
    - What readings can I incorporate to make the topic interesting and non-overwhelming?
    - If we want to spend a year on it, what topics should take priority and must be covered in 12 months?
    Again, these are non computer science folks, and I want to keep the learning as much fun as possible. Thanks everyone.

    Read the article

  • DevConnections new "Fundamentals" Track!

    - by psheriff
    Hi All, I am now the new Track Chair for the "Fundamentals" track at DevConnections. I know many of my readers feel overwhelmed by all of the "advanced" topics out there. The folks at the DevConnections conference realized that too and have added many new sessions that help programmers who are at the beginning to intermediate stage get up to speed on all the new technology that is coming out so fast. I will be presenting a whole-day workshop at the DevConnections conference in Orlando on March 27th entitled "Essential Business Desktop Programming with .NET". In addition, I will be presenting the following sessions in the Fundamentals track:
    - MVVM Made Simple
    - Unit Testing Basics and Architecting Your Application for Unit Testing
    - Data Binding from A-Z in Silverlight
    - From Zero to Windows Phone 7 in 75 Minutes
    I hope I will see you there! Join me at DevConnections @devconnections in Orlando March 27-30. Save $200: use discount code DevCon1. Register today at bit.ly/fIZjXO

    Read the article

  • Exchange 2013 goes RTM!

    - by marc dekeyser
    Exchange 2013 has been signed off and is now RTM! Hoozaaa!! From the Exchange team blog:

    Today we reached an important milestone in the development of the new Exchange. Moments ago, the Exchange engineering team signed off on the Release to Manufacturing (RTM) build. This milestone means the coding and testing phase of the project is complete and we are now focused on releasing the new Exchange via multiple distribution channels to our business customers. General availability is planned for the first quarter of 2013. We have a number of programs that provide business customers with early access so they can begin testing, piloting and adopting Exchange within their organizations:
    - We will begin rolling out new capabilities to Office 365 Enterprise customers in our next service updates, starting in November through general availability.
    - Volume Licensing customers with Software Assurance will be able to download Exchange Server 2013 through the Volume Licensing Service Center by mid-November. These products will be available on the Volume Licensing price list on December 1.
    Read more…

    Read the article

  • Rewrite for robots.txt and favicon.ico

    - by BHare
    I have set up some rules by which subdomains (my users) will default to where I have located robots.txt, favicon.ico, and crossdomain.xml. Therefore, if a user creates a site, say testing.mywebsite.com, and they don't make their own favicon.ico at testing.mywebsite.com/favicon.ico, then it will use the favicon.ico I have in /misc/favicon.ico. This works perfectly, but it doesn't work for the main website. If you attempt to go to mywebsite.com/favicon.ico it will check if "/" exists, which it does, and then never redirects to /misc/favicon.ico. How can I get it so both instances redirect to /misc/favicon.ico?

        # Set all crossdomain (openpalace file), favorite icons and robots.txt:
        # if they don't exist on the user's side, then redirect to the site's
        # own, just to have something to go on.
        RewriteCond %{REQUEST_URI} crossdomain.xml$
        RewriteCond ^(.+)crossdomain.xml !-f
        RewriteRule ^(.*)$ /misc/crossdomain.xml [L]

        RewriteCond %{REQUEST_URI} favicon.ico$
        RewriteCond ^(.+)favicon.ico !-f
        RewriteRule ^(.*)$ /misc/favicon.ico [L]

        RewriteCond %{REQUEST_URI} robots.txt$
        RewriteCond ^(.+)robots.txt !-f
        RewriteRule ^(.*)$ /misc/robots.txt [L]
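    The likely culprit is the second RewriteCond in each block: a pattern like ^(.+)favicon.ico !-f tests the literal pattern string rather than the requested file's path, so its result never tracks whether the file actually exists. A sketch of the usual fix, testing %{REQUEST_FILENAME} instead (untested against this exact vhost layout):

        # Serve /misc/favicon.ico whenever the requested favicon.ico
        # does not exist as a real file, on any (sub)domain.
        RewriteCond %{REQUEST_URI} favicon\.ico$
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^ /misc/favicon.ico [L]

        # Same pattern for robots.txt and crossdomain.xml:
        RewriteCond %{REQUEST_URI} robots\.txt$
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^ /misc/robots.txt [L]

        RewriteCond %{REQUEST_URI} crossdomain\.xml$
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^ /misc/crossdomain.xml [L]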

    Read the article

  • "My stuff" vs. "Your stuff" in UI texts

    - by JD Isaacks
    When referring to a user's stuff, should you use My or Your? For example: My Cart | My Account | My Wishlist Or Your Cart | Your Account | Your Wishlist I found this article that argues for the use of your. It says Flickr does this. It also says MySpace and MyYahoo are wrong. I also noticed today that Amazon uses the term Your. However, I have heard they are the masters at testing variations and finding the best one, so what you see on their site might be the best variation, or simply something they are currently testing. I personally like the way my looks better, but that's just my opinion. What do you think? Which will have the better impact on customers? Does it really even matter?

    Read the article
