Search Results

Search found 28841 results on 1154 pages for 'simple'.


  • Accessing SQL Server data from iOS apps

    - by RobertChipperfield
    Almost all mobile apps need access to external data to be valuable. With a huge amount of existing business data residing in Microsoft SQL Server databases, and an ever-increasing drive to make more and more of it available to mobile users, how do you marry the rather separate worlds of Microsoft's SQL Server and Apple's iOS devices?

    The classic answer: write a web service layer

    Look at any of the questions on this topic asked in Internet discussion forums, and you'll inevitably see the answer, "just write a web service and use that!". But what does this process gain? For a well-designed database with a solid security model, and business logic in the database, writing a custom web service on top just to access some of the data from a different platform seems inefficient and unnecessary. Desktop applications interact with SQL Server directly - why should mobile apps be any different?

    The better answer: the iSql SDK

    Working along the lines of "if you do something more than once, make it shared," we set about coming up with a better solution for the general case. And so the iSql SDK was born: sitting between SQL Server and your iOS apps, it provides the simple API you're used to if you've been developing desktop apps using the Microsoft SQL Native Client. It turns out a web service remained a sensible idea: HTTP is much better suited to the Big Bad Internet than SQL Server's native TDS protocol, removing the need for complex configuration, firewall rules, and the like. However, rather than writing a web service for every app that needs data access, we made the web service generic, serving only as a proxy between SQL Server and a client library integrated into the iPhone or iPad app. This client library handles all the network communication, and provides a clean API.

    OSQL in 25 lines of code

    As an example of how to use the API, I put together a very simple app that allowed the user to enter one or more SQL statements, and displayed the results in a rather primitively formatted text field. The total amount of Objective-C code responsible for doing the work? About 25 lines. You can see this in action in the demo video.

    Beta out now - your chance to give us your suggestions!

    We've released the iSql SDK as a beta on the MobileFoo website: you're welcome to download a copy, have a play in your own apps, and let us know what we've missed using the Feedback button on the site. Software development should be fun and rewarding: no-one wants to spend their time writing boiler-plate code over and over again, so stop writing the same web service code, and start doing exciting things in the new world of mobile data!

    Read the article

  • Yet Another SQL Strategy for Versioned Data

    There is a popular design for a database that requires a built-in audit trail of amendments and additions, where data is never deleted, but merely superseded by a later version. Whilst this is conceptually simple, it has always made for complicated SQL for reporting the latest version of data. Alex joins the debate on the best way of doing this with an example using an indexed view and a filtered index.
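
    For readers who haven't met the pattern, here is a minimal sketch (table and column names are invented, not taken from the article) of the classic "latest version" query that this design forces on you, and which the article sets out to simplify:

        -- Hypothetical versioned table: rows are never deleted, only superseded.
        CREATE TABLE dbo.CustomerVersion
        (
            CustomerID    int            NOT NULL,
            VersionNo     int            NOT NULL,
            CustomerName  nvarchar(100)  NOT NULL,
            ValidFrom     datetime2      NOT NULL,
            CONSTRAINT PK_CustomerVersion PRIMARY KEY (CustomerID, VersionNo)
        );

        -- The classic way to report only the latest version of each customer.
        SELECT cv.CustomerID, cv.CustomerName, cv.ValidFrom
        FROM dbo.CustomerVersion AS cv
        WHERE cv.VersionNo =
              (SELECT MAX(cv2.VersionNo)
               FROM dbo.CustomerVersion AS cv2
               WHERE cv2.CustomerID = cv.CustomerID);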

    Read the article

  • SQL Server Unit Testing with tSQLt

    When one considers the amount of time and effort that Unit Testing consumes for the Database Developer, it is surprising how few good SQL Server test frameworks are around. tSQLt, which is open source and free to use, is one of the frameworks that provide a simple way to populate a table with test data as part of the unit test, and check the results against what is expected. Sebastian and Dennis, who created tSQLt, explain.
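
    As a rough idea of the style of test the framework encourages, here is a minimal sketch. The tSQLt procedures used (NewTestClass, FakeTable, AssertEquals, Run) are real, but the schema, table and function names are invented for illustration:

        -- Create a test class (a schema that groups related tests).
        EXEC tSQLt.NewTestClass 'OrderTests';
        GO
        CREATE PROCEDURE OrderTests.[test TotalValue sums order lines]
        AS
        BEGIN
            -- Replace the real table with an empty, constraint-free copy.
            EXEC tSQLt.FakeTable 'dbo.OrderLine';

            -- Populate it with known test data.
            INSERT INTO dbo.OrderLine (OrderID, Quantity, UnitPrice)
            VALUES (1, 2, 10.00), (1, 1, 5.00);

            -- Act on the (hypothetical) function under test, then assert.
            DECLARE @actual money = dbo.TotalValue(1);
            EXEC tSQLt.AssertEquals @Expected = 25.00, @Actual = @actual;
        END;
        GO
        -- Run every test in the class.
        EXEC tSQLt.Run 'OrderTests';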

    Read the article

  • How to build a Query Template Explorer

    Having introduced his cross-platform Query Template solution, Michael now gives us the technical details on how to integrate his .NET controls into applications both simple and complex. With screenshots and code samples, this has everything you need to build your own powerful SQL editor or Query Template explorer.

    Read the article

  • A TDD Journey: 4-Tests as Documentation; False Positive Results; Component Isolation

    In Test-Driven Development (TDD), the writing of a unit test is done more to design and to document than to verify. By writing a unit test you close a number of feedback loops, and verifying the functionality of the code is just a minor one: ideally, everything you need to know about your class under test is embodied in a simple list of the names of the tests. Michael Sorens continues his six-part TDD journey by discussing Tests as Documentation, False Positive Results, and Component Isolation.
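
    A small, hypothetical illustration of the "tests as documentation" idea (this is not code from the series; the class and behaviours are invented): the list of test names alone reads as a specification of the class under test.

        // Hypothetical NUnit tests: the names, not the bodies, document the class.
        using NUnit.Framework;

        [TestFixture]
        public class WidgetValidatorTests
        {
            [Test]
            public void Validate_ReturnsFalse_WhenNameIsEmpty() { /* arrange, act, assert */ }

            [Test]
            public void Validate_ReturnsFalse_WhenPriceIsNegative() { /* arrange, act, assert */ }

            [Test]
            public void Validate_ReturnsTrue_ForAWellFormedWidget() { /* arrange, act, assert */ }
        }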

    Read the article

  • Where to store short strings (with my key) on the internet?

    - by Vi
    Is there a simple service to store strings under my key that can be used by bots? Requirements:
    - Simple command-line access; automatic posting allowed
    - No need to keep a session with the service alive
    - I choose the key (so pastebins fail)
    - No requirement for registration/authentication (for simplicity)
    - The string should be kept for about a month.
    I want something like this. Store:
    $ echo some_data_0x1299C0FF | store_my_string testtest2011
    Retrieve:
    $ retrieve_my_string testtest2011
    some_data_0x1299C0FF
    Do you have ideas about what I should use for it? I can only think of using IRC somehow (channel topics, /whowas, ...), but this is too complex for this simple task. No security is needed: anyone can update my string. The task looks very simple, so I expect the solution to be similarly simple. Expecting something like a single simple curl call.

    Read the article

  • Problem Installing older TestNG plugin on Eclipse 3.5

    - by Stefan
    I'm trying to install TestNG 5.11 on eclipse 3.5 and gettign the following. eclipse.buildId=unknown java.version=1.6.0_19 java.vendor=Sun Microsystems Inc. BootLoader constants: OS=win32, ARCH=x86, WS=win32, NL=no_NO Framework arguments: -product org.eclipse.epp.package.jee.product Command-line arguments: -os win32 -ws win32 -arch x86 -product org.eclipse.epp.package.jee.product Error Mon Jun 07 15:45:53 CEST 2010 Artifact not found: org.eclipse.update.feature,org.testng.eclipse,5.11.0.28. java.io.FileNotFoundException: "http://beust.com/eclipse/features/org.testng.eclipse_5.11.0.28.jar" at org.eclipse.equinox.internal.p2.repository.RepositoryStatusHelper.checkFileNotFound(RepositoryStatusHelper.java:289) at org.eclipse.equinox.internal.p2.repository.FileReader.checkException(FileReader.java:352) at org.eclipse.equinox.internal.p2.repository.FileReader.sendRetrieveRequest(FileReader.java:326) at org.eclipse.equinox.internal.p2.repository.FileReader.readInto(FileReader.java:263) at org.eclipse.equinox.internal.p2.repository.RepositoryTransport.download(RepositoryTransport.java:71) at org.eclipse.equinox.internal.p2.repository.RepositoryTransport.download(RepositoryTransport.java:127) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.downloadArtifact(SimpleArtifactRepository.java:468) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.downloadArtifact(SimpleArtifactRepository.java:451) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.getArtifact(SimpleArtifactRepository.java:518) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.getArtifact(MirrorRequest.java:200) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.transferSingle(MirrorRequest.java:175) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.transfer(MirrorRequest.java:159) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.perform(MirrorRequest.java:95) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.getArtifact(SimpleArtifactRepository.java:507) at org.eclipse.equinox.internal.p2.artifact.repository.simple.DownloadJob.run(DownloadJob.java:64) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55) Error Mon Jun 07 15:45:53 CEST 2010 Artifact not found: osgi.bundle,org.testng.eclipse,5.11.0.28. 
java.io.FileNotFoundException: "http://beust.com/eclipse/plugins/org.testng.eclipse_5.11.0.28.jar" at org.eclipse.equinox.internal.p2.repository.RepositoryStatusHelper.checkFileNotFound(RepositoryStatusHelper.java:289) at org.eclipse.equinox.internal.p2.repository.FileReader.checkException(FileReader.java:352) at org.eclipse.equinox.internal.p2.repository.FileReader.sendRetrieveRequest(FileReader.java:326) at org.eclipse.equinox.internal.p2.repository.FileReader.readInto(FileReader.java:263) at org.eclipse.equinox.internal.p2.repository.RepositoryTransport.download(RepositoryTransport.java:71) at org.eclipse.equinox.internal.p2.repository.RepositoryTransport.download(RepositoryTransport.java:127) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.downloadArtifact(SimpleArtifactRepository.java:468) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.downloadArtifact(SimpleArtifactRepository.java:451) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.getArtifact(SimpleArtifactRepository.java:518) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.getArtifact(MirrorRequest.java:200) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.transferSingle(MirrorRequest.java:175) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.transfer(MirrorRequest.java:159) at org.eclipse.equinox.internal.p2.artifact.repository.MirrorRequest.perform(MirrorRequest.java:95) at org.eclipse.equinox.internal.p2.artifact.repository.simple.SimpleArtifactRepository.getArtifact(SimpleArtifactRepository.java:507) at org.eclipse.equinox.internal.p2.artifact.repository.simple.DownloadJob.run(DownloadJob.java:64) at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55) Error Mon Jun 07 15:45:53 CEST 2010 session context was:(profile=epp.package.jee, phase=org.eclipse.equinox.internal.provisional.p2.engine.phases.Collect, operand=, action=). I'm kinda stuck so I would really like help. Thanks

    Read the article

  • Getting a Database into Source Control

    - by Grant Fritchey
    For any number of reasons, from simple auditing, to change tracking, to automated deployment, to integration with application development processes, you’re going to want to place your database into source control. Using Red Gate SQL Source Control this process is extremely simple. SQL Source Control works within your SQL Server Management Studio (SSMS) interface.  This means you can work with your databases in any way that you’re used to working with them. If you prefer scripts to using the GUI, not a problem. If you prefer using the GUI to having to learn T-SQL, again, that’s fine. After installing SQL Source Control, this is what you’ll see when you open SSMS:   SQL Source Control is now a direct piece of the SSMS environment. The key point initially is that I currently don’t have a database selected. You can even see that in the SQL Source Control window where it shows, in red, “No database selected – select a database in Object Explorer.” If I expand my Databases list in the Object Explorer, you’ll be able to immediately see which databases have been integrated with source control and which have not. There are visible differences between the databases as you can see here:   To add a database to source control, I first have to select it. For this example, I’m going to add the AdventureWorks2012 database to an instance of the SVN source control software (I’m using uberSVN). When I click on the AdventureWorks2012 database, the SQL Source Control screen changes:   I’m going to need to click on the “Link database to source control” text which will open up a window for connecting this database to the source control system of my choice.  You can pick from the default source control systems on the left, or define one of your own. I also have to provide the connection string for the location within the source control system where I’ll be storing my database code. I set these up in advance. You’ll need two. One for the main set of scripts and one for special scripts called Migrations that deal with different kinds of changes between versions of the code. Migrations help you solve problems like having to create or modify data in columns as part of a structural change. I’ll talk more about them another day. Finally, I have to determine if this is an isolated environment that I’m going to be the only one use, a dedicated database. Or, if I’m sharing the database in a shared environment with other developers, a shared database.  The main difference is, under a dedicated database, I will need to regularly get any changes that other developers have made from source control and integrate it into my database. While, under a shared database, all changes for all developers are made at the same time, which means you could commit other peoples work without proper testing. It all depends on the type of environment you work within. But, when it’s all set, it will look like this: SQL Source Control will compare the results between the empty folders in source control and the database, AdventureWorks2012. You’ll get a report showing exactly the list of differences and you can choose which ones will get checked into source control. Each of the database objects is scripted individually. You’ll be able to modify them later in the same way. Here’s the list of differences for my new database:   You can select/deselect all the objects or each object individually. You also get a report showing the differences between what’s in the database and what’s in source control. 
If there was already a database in source control, you’d only see changes to database objects rather than every single object. You can see that the database objects can be sorted by name, by type, or other choices. I’m going to add a comment such as “Initial creation of database in source control.” And then click on the Commit button which will put all the objects in my database into the source control system. That’s all it takes to get the objects into source control initially. Now is when things can get fun with breaking changes to code, automated deployments, unit testing and all the rest.

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial Microsoft recently released the first CTP of a new development environment called WebMatrix, which along with some of its supporting technologies are squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let’s face it: ASP.NET development isn’t exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it’s not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft’s attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies: IIS Developer Express IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that’s been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server such as the inability to serve custom ISAPI extensions (i.e., things like PHP or ASP classic for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set providing much better compatibility between development and live deployment scenarios. SQL Server Compact 4.0 Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new-it’s been around for a few years, but it’s been severely hobbled in the past by terrible tool support and the inability to support more than a single connection in Microsoft’s attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don’t have to install a special system configuration to run SQL Compact as it is a drop-in database engine: Copy the small assembly into your BIN folder (or from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill. 
Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature. ASP.NET Razor View Engine What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There’s no support for a “page model” or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here’s a very simple example of what Razor markup looks like along with some comment annotations: <!DOCTYPE html> <html>     <head>         <title></title>     </head>     <body>     <h1>Razor Test</h1>          <!-- simple expressions -->     @DateTime.Now     <hr />     <!-- method expressions -->     @DateTime.Now.ToString("T")          <!-- code blocks -->     @{         List<string> names = new List<string>();         names.Add("Rick");         names.Add("Markus");         names.Add("Claudio");         names.Add("Kevin");     }          <!-- structured block statements -->     <ul>     @foreach(string name in names){             <li>@name</li>     }     </ul>           <!-- Conditional code -->        @if(true) {                        <!-- Literal Text embedding in code -->        <text>         true        </text>;    }    else    {        <!-- Literal Text embedding in code -->       <text>       false       </text>;    }    </body> </html> Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a “page” becomes a code file with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn’t look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine minus the overhead of the large Page object model. However, there are differences: -Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC’s view implementation as well as many of the helper classes found in MVC. It’s not hard to guess the motivation for this sort of view engine: For beginning developers the simple markup syntax is easier to work with, although you obviously still need to have some understanding of the .NET Framework in order to create dynamic content. 
The syntax is easier to read and grok and much shorter to type than ASP.NET alligator tags (<% %>) and also easier to understand aesthetically what’s happening in the markup code. Razor also is a better fit for Microsoft’s vision of ASP.NET MVC: It’s a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn’t carry all the features and object model of Web Forms with it and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script or meta driven output generators for many types of applications from code/screen generators, to simple form letters to data merging applications with user customizability. For me personally this is very useful side effect and who knows maybe Microsoft will actually standardize they’re scripting engines (die T4 die!) on this engine. Razor also better fits the “view-based” approach where the view is supposed to be mostly a visual representation that doesn’t hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn’t be surprised if Razor will become the new standard view engine for MVC in the future – and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that’s not working currently unless you manually configure the script mappings and add the appropriate assemblies. It’s possible to do it, but it’s probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages. WebMatrix Development Environment To tie all of these three technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via a custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There’s no debugging support for code at this time. Note that Razor pages don’t require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that’s needed to see changes while testing an application locally. It’s essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled during run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP Editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with Wizards taking you through installation of tools, dependencies, and configuration of the database as needed. 
WebMatrix leverages the Web Platform installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried this out on. Click a couple of check boxes and fill in a few simple configuration options and you end up with a running application that’s ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool also can handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step. Simplified Database Access The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods: @{       var db = Database.OpenFile("FirstApp.sdf");     string sql = "select * from customers where Id > @0"; } <ul> @foreach(var row in db.Query(sql,1)){         <li>@row.FirstName @row.LastName</li> } </ul> The query function takes a SQL statement plus any number of positional (@0,@1 etc.) SQL parameters by simple values. The result is returned as a collection of rows which in turn have a row object with dynamic properties for each of the columns giving easy (though untyped) access to each of the fields. Likewise Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter passing schemes. Note these queries use string-based queries rather than LINQ or Entity Framework’s strongly typed LINQ queries. While this may seem like a step back, it’s also in line with the expectations of non .NET script developers who are quite used to writing and using SQL strings in code rather than using OR/M frameworks. The only question is why was something not included from the beginning in .NET and Microsoft made developers build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax that uses simple, static, data object methods to perform simple data tasks with one line of code are common in scripting languages and are a good match for folks working in PHP/Python, etc. Seems like Microsoft has taken great advantage of .NET 4.0’s dynamic typing to provide this sort of interface for row iteration where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files - I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn’t accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code or even LINQ or Entity Framework models that are created outside of WebMatrix in separate assemblies as required. The good the bad the obnoxious - It’s still .NET The beauty (or curse depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it’s still .NET. Although the syntax may look foreign, it’s still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder. 
In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a host of HTML Helpers. If you’ve used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy-there’s a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference Who needs WebMatrix? Uhm… good Question Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity in providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started with. There’s also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for other simpler platforms like PHP or Python which are catering to the down and dirty developer. Microsoft will be hard pressed to win those folks-and other hardcore PHP developers-back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it’s still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC style helpers Microsoft provides are a good step in the right direction, but I suspect it’s not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers is trying to make .NET more accessible but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers you still are going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer the learning curve isn’t made any easier by WebMatrix it’s just been shifted a tad bit further along in your development endeavor when you run out of canned components that are supplied either by Microsoft or the community. The database helpers are interesting and actually I’ve heard a lot of discussion from various developers who’ve been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it’s practically trivial to create helpers like these or pick them up from countless libraries available), but there it is. It also shows that there are plenty of developers out there who are more interested in ‘getting stuff done’ easily than necessarily following the latest and greatest practices which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big life changing issues of development, rather than the bread and butter scenarios that many developers are interested in to get their work accomplished. 
And that in the end may be WebMatrix’s main raison d'être: To bring some focus back at Microsoft that simpler and more high level solutions are actually needed to appeal to the non-high end developers as well as providing the necessary tools for the high end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it really can be a tool that a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components) which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware with required functionality missing especially debugging and Intellisense, but also general editor support. It’s not clear whether these are because the product is still in an early alpha release or whether it’s simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support and WebMatrix just has that left out feeling to it. If anything WebMatrix’s technology pieces (which are really independent of the WebMatrix product) are what are interesting to developers in general. The compact IIS implementation is a nice improvement for development scenarios and SQL Compact 4.0 seems to address a lot of concerns that people have had and have complained about for some time with previous SQL Compact implementations. By far the most interesting and useful technology though seems to be the Razor view engine for its light weight implementation and it’s decoupling from the ASP.NET/HTTP pipeline to provide a standalone scripting/view engine that is pluggable. The first winner of this is going to be ASP.NET MVC which can now have a cleaner view model that isn’t inconsistent due to the baggage of non-implemented WebForms features that don’t work in MVC. But I expect that Razor will end up in many other applications as a scripting and code generation engine eventually. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there’s some synergy flowing both ways between Razor and MVC. As an existing ASP.NET developer who’s already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn’t give you anything that you want. The tools provided are minimal and provide nothing that you can’t get in Visual Studio today, except the minimal Razor syntax highlighting, so there’s little need to take a step back. With Visual Studio integration coming later there’s little reason to look at WebMatrix for tooling. It’s good to see that Microsoft is giving some thought about the ease of use of .NET as a platform For so many years, we’ve been piling on more and more new features without trying to take a step back and see how complicated the development/configuration/deployment process has become. Sometimes it’s good to take a step - or several steps - back and take another look and realize just how far we’ve come. 
WebMatrix is one of those reminders and one that likely will result in some positive changes on the platform as a whole. © Rick Strahl, West Wind Technologies, 2005-2010Posted in ASP.NET   IIS7  

    Read the article

  • Anatomy of a .NET Assembly - Signature encodings

    - by Simon Cooper
    If you've just joined this series, I highly recommend you read the previous posts in this series, starting here, or at least these posts, covering the CLR metadata tables. Before we look at custom attribute encoding, we first need to have a brief look at how signatures are encoded in an assembly in general. Signature types There are several types of signatures in an assembly, all of which share a common base representation, and are all stored as binary blobs in the #Blob heap, referenced by an offset from various metadata tables. The types of signatures are: Method definition and method reference signatures. Field signatures Property signatures Method local variables. These are referenced from the StandAloneSig table, which is then referenced by method body headers. Generic type specifications. These represent a particular instantiation of a generic type. Generic method specifications. Similarly, these represent a particular instantiation of a generic method. All these signatures share the same underlying mechanism to represent a type Representing a type All metadata signatures are based around the ELEMENT_TYPE structure. This assigns a number to each 'built-in' type in the framework; for example, Uint16 is 0x07, String is 0x0e, and Object is 0x1c. Byte codes are also used to indicate SzArrays, multi-dimensional arrays, custom types, and generic type and method variables. However, these require some further information. Firstly, custom types (ie not one of the built-in types). These require you to specify the 4-byte TypeDefOrRef coded token after the CLASS (0x12) or VALUETYPE (0x11) element type. This 4-byte value is stored in a compressed format before being written out to disk (for more excruciating details, you can refer to the CLI specification). SzArrays simply have the array item type after the SZARRAY byte (0x1d). Multidimensional arrays follow the ARRAY element type with a series of compressed integers indicating the number of dimensions, and the size and lower bound of each dimension. Generic variables are simply followed by the index of the generic variable they refer to. There are other additions as well, for example, a specific byte value indicates a method parameter passed by reference (BYREF), and other values indicating custom modifiers. Some examples... To demonstrate, here's a few examples and what the resulting blobs in the #Blob heap will look like. Each name in capitals corresponds to a particular byte value in the ELEMENT_TYPE or CALLCONV structure, and coded tokens to custom types are represented by the type name in curly brackets. A simple field: int intField; FIELD I4 A field of an array of a generic type parameter (assuming T is the first generic parameter of the containing type): T[] genArrayField FIELD SZARRAY VAR 0 An instance method signature (note how the number of parameters does not include the return type): instance string MyMethod(MyType, int&, bool[][]); HASTHIS DEFAULT 3 STRING CLASS {MyType} BYREF I4 SZARRAY SZARRAY BOOLEAN A generic type instantiation: MyGenericType<MyType, MyStruct> GENERICINST CLASS {MyGenericType} 2 CLASS {MyType} VALUETYPE {MyStruct} For more complicated examples, in the following C# type declaration: GenericType<T> : GenericBaseType<object[], T, GenericType<T>> { ... 
} the Extends field of the TypeDef for GenericType will point to a TypeSpec with the following blob: GENERICINST CLASS {GenericBaseType} 3 SZARRAY OBJECT VAR 0 GENERICINST CLASS {GenericType} 1 VAR 0 And a static generic method signature (generic parameters on types are referenced using VAR, generic parameters on methods using MVAR): TResult[] GenericMethod<TInput, TResult>( TInput, System.Converter<TInput, TOutput>); GENERIC 2 2 SZARRAY MVAR 1 MVAR 0 GENERICINST CLASS {System.Converter} 2 MVAR 0 MVAR 1 As you can see, complicated signatures are recursively built up out of quite simple building blocks to represent all the possible variations in a .NET assembly. Now we've looked at the basics of normal method signatures, in my next post I'll look at custom attribute application signatures, and how they are different to normal signatures.
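
    The "compressed format" for coded tokens mentioned above is the standard ECMA-335 compressed unsigned integer encoding (section II.23.2), which is also used for lengths and counts inside signature blobs. A minimal sketch of decoding it (this is not code from the series) might look like this:

        // Sketch: decode an ECMA-335 compressed unsigned integer from a blob.
        static class BlobReader
        {
            public static uint ReadCompressedUInt(byte[] blob, ref int offset)
            {
                byte first = blob[offset];
                if ((first & 0x80) == 0)              // 1 byte:  0bbbbbbb
                {
                    offset += 1;
                    return first;
                }
                if ((first & 0xC0) == 0x80)           // 2 bytes: 10bbbbbb bbbbbbbb
                {
                    uint value = (uint)(((first & 0x3F) << 8) | blob[offset + 1]);
                    offset += 2;
                    return value;
                }
                // 4 bytes: 110bbbbb bbbbbbbb bbbbbbbb bbbbbbbb
                uint result = (uint)(((first & 0x1F) << 24) | (blob[offset + 1] << 16)
                                   | (blob[offset + 2] << 8) | blob[offset + 3]);
                offset += 4;
                return result;
            }
        }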

    Read the article

  • Some Original Expressions

    - by Phil Factor
    Guest Editorial for Simple-Talk newsletterIn a guest editorial for the Simple-Talk Newsletter, Phil Factor wonders if we are still likely to find some more novel and unexpected ways of using the newer features of Transact SQL: or maybe in some features that have always been there! There can be a great deal of fun to be had in trying out recent features of SQL Expressions to see if  they provide new functionality.  It is surprisingly rare to find things that couldn’t be done before, but in a different   and more cumbersome way; but it is great to experiment or to read of someone else making that discovery.  One such recent feature is the ‘table value constructor’, or ‘VALUES constructor’, that managed to get into SQL Server 2008 from Standard SQL.  This allows you to create derived tables of up to 1000 rows neatly within select statements that consist of  lists of row values.  E.g. SELECT Old_Welsh, number FROM (VALUES ('Un',1),('Dou',2),('Tri',3),('Petuar',4),('Pimp',5),('Chwech',6),('Seith',7),('Wyth',8),('Nau',9),('Dec',10)) AS WelshWordsToTen (Old_Welsh, number) These values can be expressions that return single values, including, surprisingly, subqueries. You can use this device to create views, or in the USING clause of a MERGE statement. Joe Celko covered  this here and here.  It can become extraordinarily handy to use once one gets into the way of thinking in these terms, and I’ve rewritten a lot of routines to use the constructor, but the old way of using UNION can be used the same way, but is a little slower and more long-winded. The use of scalar SQL subqueries as an expression in a VALUES constructor, and then applied to a MERGE, has got me thinking. It looks very clever, but what use could one put it to? I haven’t seen anything yet that couldn’t be done almost as  simply in SQL Server 2000, but I’m hopeful that someone will come up with a way of solving a tricky problem, just in the same way that a freak of the XML syntax forever made the in-line  production of delimited lists from an expression easy, or that a weird XML pirouette could do an elegant  pivot-table rotation. It is in this sort of experimentation where the community of users can make a real contribution. The dissemination of techniques such as the Number, or Tally table, or the unconventional ways that the UPDATE statement can be used, has been rapid due to articles and blogs. However, there is plenty to be done to explore some of the less obvious features of Transact SQL. Even some of the features introduced into SQL Server 2000 are hardly well-known. Certain operations on data are still awkward to perform in Transact SQL, but we mustn’t, I think, be too ready to state that certain things can only be done in the application layer, or using a CLR routine. With the vast array of features in the product, and with the tools that surround it, I feel that there is generally a way of getting tricky things done. Or should we just stick to our lasts and push anything difficult out into procedural code? I’d love to know your views.
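
    The editorial mentions using the VALUES constructor in the USING clause of a MERGE statement; a rough sketch of what that looks like (the target table name is invented for illustration) is:

        -- A VALUES constructor used directly as the source of a MERGE.
        MERGE dbo.WelshNumbers AS target
        USING (VALUES ('Un',1),('Dou',2),('Tri',3)) AS source (Old_Welsh, number)
              ON target.number = source.number
        WHEN MATCHED THEN
            UPDATE SET Old_Welsh = source.Old_Welsh
        WHEN NOT MATCHED THEN
            INSERT (Old_Welsh, number) VALUES (source.Old_Welsh, source.number);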

    Read the article

  • Oh no! My padding's invalid!

    - by Simon Cooper
    Recently, I've been doing some work involving cryptography, and encountered the standard .NET CryptographicException: 'Padding is invalid and cannot be removed.' Searching on StackOverflow produces 57 questions concerning this exception; it's a very common problem encountered. So I decided to have a closer look. To test this, I created a simple project that decrypts and encrypts a byte array: // create some random data byte[] data = new byte[100]; new Random().NextBytes(data); // use the Rijndael symmetric algorithm RijndaelManaged rij = new RijndaelManaged(); byte[] encrypted; // encrypt the data using a CryptoStream using (var encryptor = rij.CreateEncryptor()) using (MemoryStream encryptedStream = new MemoryStream()) using (CryptoStream crypto = new CryptoStream( encryptedStream, encryptor, CryptoStreamMode.Write)) { crypto.Write(data, 0, data.Length); encrypted = encryptedStream.ToArray(); } byte[] decrypted; // and decrypt it again using (var decryptor = rij.CreateDecryptor()) using (CryptoStream crypto = new CryptoStream( new MemoryStream(encrypted), decryptor, CryptoStreamMode.Read)) { byte[] decrypted = new byte[data.Length]; crypto.Read(decrypted, 0, decrypted.Length); } Sure enough, I got exactly the same CryptographicException when trying to decrypt the data even in this simple example. Well, I'm obviously missing something, if I can't even get this single method right! What does the exception message actually mean? What am I missing? Well, after playing around a bit, I discovered the problem was fixed by changing the encryption step to this: // encrypt the data using a CryptoStream using (var encryptor = rij.CreateEncryptor()) using (MemoryStream encryptedStream = new MemoryStream()) { using (CryptoStream crypto = new CryptoStream( encryptedStream, encryptor, CryptoStreamMode.Write)) { crypto.Write(data, 0, data.Length); } encrypted = encryptedStream.ToArray(); } Aaaah, so that's what the problem was. The CryptoStream wasn't flushing all it's data to the MemoryStream before it was being read, and closing the stream causes it to flush everything to the backing stream. But why does this cause an error in padding? Cryptographic padding All symmetric encryption algorithms (of which Rijndael is one) operates on fixed block sizes. For Rijndael, the default block size is 16 bytes. This means the input needs to be a multiple of 16 bytes long. If it isn't, then the input is padded to 16 bytes using one of the padding modes. This is only done to the final block of data to be encrypted. CryptoStream has a special method to flush this final block of data - FlushFinalBlock. Calling Stream.Flush() does not flush the final block, as you might expect. Only by closing the stream or explicitly calling FlushFinalBlock is the final block, with any padding, encrypted and written to the backing stream. Without this call, the encrypted data is 16 bytes shorter than it should be. If this final block wasn't written, then the decryption gets to the final 16 bytes of the encrypted data and tries to decrypt it as the final block with padding. The end bytes don't match the padding scheme it's been told to use, therefore it throws an exception stating what is wrong - what the decryptor expects to be padding actually isn't, and so can't be removed from the stream. 
So, as well as closing the stream before reading the result, an alternative fix to my encryption code is the following: // encrypt the data using a CryptoStream using (var encryptor = rij.CreateEncryptor()) using (MemoryStream encryptedStream = new MemoryStream()) using (CryptoStream crypto = new CryptoStream( encryptedStream, encryptor, CryptoStreamMode.Write)) { crypto.Write(data, 0, data.Length); // explicitly flush the final block of data crypto.FlushFinalBlock(); encrypted = encryptedStream.ToArray(); } Conclusion So, if your padding is invalid, make sure that you close or call FlushFinalBlock on any CryptoStream performing encryption before you access the encrypted data. Flush isn't enough. Only then will the final block be present in the encrypted data, allowing it to be decrypted successfully.

    Read the article

  • How much is a subscriber worth?

    - by Tom Lewin
    This year at Red Gate, we’ve started providing a way to back up SQL Azure databases and Azure storage. We decided to sell this as a service, instead of a product, which means customers only pay for what they use. Unfortunately for us, it makes figuring out revenue much trickier. With a product like SQL Compare, a customer pays for it, and it’s theirs for good. Sure, we offer support and upgrades, but, fundamentally, the sale is a simple, upfront transaction: we’ve made this product, you need this product, we swap product for money and everyone is happy. With software as a service, it isn’t that easy. The money and product don’t change hands up front. Instead, we provide a service in exchange for a recurring fee. We know someone buying SQL Compare will pay us $X, but we don’t know how long service customers will stay with us, or how much they will spend. How do we find this out? We use lifetime value analysis. What is lifetime value? Lifetime value, or LTV, is how much a customer is worth to the business. For Entrepreneurs has a brilliant write up that we followed to conduct our analysis. Basically, it all boils down to this equation: LTV = ARPU x ALC To make it a bit less of an alphabet-soup and a bit more understandable, we can write it out in full: The lifetime value of a customer equals the average revenue per customer per month, times the average time a customer spends with the service Simple, right? A customer is worth the average spend times the average stay. If customers pay on average $50/month, and stay on average for ten months, then a new customer will, on average, bring in $500 over the time they are a customer! Average spend is easy to work out; it’s revenue divided by customers. The problem comes when we realise that we don’t know exactly how long a customer will stay with us. How can we figure out the average lifetime of a customer, if we only have six months’ worth of data? The answer lies in the fact that: Average Lifetime of a Customer = 1 / Churn Rate The churn rate is the percentage of customers that cancel in a month. If half of your customers cancel each month, then your average customer lifetime is two months. The problem we faced was that we didn’t have enough data to make an estimate of one month’s cancellations reliable (because barely anybody cancels)! To deal with this data problem, we can take data from the last three months instead. This means we have more data to play with. We can still use the equation above, we just need to multiply the final result by three (as we worked out how many three month periods customers stay for, and we want our answer to be in months). Now these estimates are likely to be fairly unreliable; when there’s not a lot of data it pays to be cautious with inference. That said, the numbers we have look fairly consistent, and it’s super easy to revise our estimates when new data comes in. At the very least, these numbers give us a vague idea of whether a subscription business is viable. As far as Cloud Services goes, the business looks very viable indeed, and the low cancellation rates are much more than just data points in LTV equations; they show that the product is working out great for our customers, which is exactly what we’re looking for!
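
    To make the three-month churn adjustment concrete, here is a quick worked example with made-up numbers (not figures from the article):

        Customers at the start of the three months: 200
        Cancellations during those three months:    6
        Churn rate (per three-month period) = 6 / 200 = 3%
        ALC = (1 / 0.03) three-month periods x 3 months ≈ 100 months
        LTV = ARPU x ALC = $50 x 100 = $5,000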

    Read the article

  • What Counts For a DBA – Depth

    - by Louis Davidson
    SQL Server offers very simple interfaces to many of its features. Most people could open up SSMS, connect to a server, write a simple query and see the results. Even several of the core DBA tasks are deceptively straightforward. It doesn’t take a rocket scientist to perform a basic database backup or run a trace (even using the newfangled Extended Events!). However, appearances can be deceptive, and often times it is really important that a DBA understands not just the basics of how to perform a task, but why we do a task, and how that task works. As an analogy, consider a child walking into a darkened room. Most would know that they need to turn on the light, and how to do it, so they flick the switch. But what happens if light fails to shine forth. Most would immediately tell you that you need to consider changing the light bulb. So you hop in the car and take them to the local home store and instruct them to buy a replacement. Confronted with a 40 foot display of light bulbs, how will they decide which of the hundreds of types of bulbs, of different types, fittings, shapes, colors, power and efficiency ratings, is the right choice? Obviously the main lesson the child is going to learn this day is how to use their cell phone as a flashlight so they don’t have to ask for help the next time. Likewise, when the metaphorical toddlers who use your database server have issues, they will instinctively know something is wrong, and may even have some idea what caused it, but will have no depth of knowledge to figure out the right solution. That is where the DBA comes in and attempts to save the day. However, when one looks beneath the shiny UI, SQL Server has its own “40 foot display of light bulbs”, in the form of the tremendous number of tools and the often-bewildering amount of information they can present to the DBA, to help us find issues. Unfortunately, resorting to guesswork, to trying different “bulbs” over and over, hoping to stumble on the answer. This is where the right depth of knowledge goes a long way. If we need to write a SELECT statement, then knowing the syntax and where to find the data is not enough. Knowledge of indexes and query plans is essential. Without it, we might hit on a query that “works”, but we are basically still a user, not a programmer, because we have no real control over our platform. Is that level of knowledge deep enough? Probably not, since knowledge of the underlying metadata and structures would be very useful in helping us make sense of any query plan. Understanding the structure of an index makes the “key lookup” operator not sound like what you do when someone tapes your car key to the ceiling. So is even this level of understanding deep enough? Do we need to understand the memory architecture used to process the query? It might be a comforting level of knowledge, and will doubtless come in handy at some point, but is not strictly necessary in most cases. Beyond that lies (more or less) full knowledge of SQL language and the intricacies of every step the SQL Server engine takes to process our query. My personal theory is that, as a professional, our knowledge of a given task should extend, at a minimum, one level deeper than is strictly necessary to perform the task. Anything deeper can be left to the ridiculously smart, or obsessive, or both. As an example. tasked with storing an integer value between 0 and 99999999, it’s essential that I know that choosing an Integer over Decimal(8,0) will likely offer performance benefits. 
It is then useful that I also understand the value of adding a CHECK constraint, to make sure the values are valid to the desired range; and comforting that I know a little about the underlying processors, registers and computer math. Anything further, I leave to the likes of Joe Chang, whose recent blog post on the topic offers depth by the bucketful!  
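
    A minimal sketch of that example (the table and column names are invented): an int column is the better fit for values from 0 to 99999999, with a CHECK constraint to enforce the valid range.

        CREATE TABLE dbo.Measurement
        (
            MeasurementID int IDENTITY(1,1) PRIMARY KEY,
            Reading       int NOT NULL
                CONSTRAINT CK_Measurement_Reading CHECK (Reading BETWEEN 0 AND 99999999)
        );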

    Read the article

  • The five steps of business intelligence adoption: where are you?

    - by Red Gate Software BI Tools Team
    When I was in Orlando and New York last month, I spoke to a lot of business intelligence users. What they told me suggested a path of BI adoption. The user’s place on the path depends on the size and sophistication of their organisation. Step 1: A company with a database of customer transactions will often want to examine particular data, like revenue and unit sales over the last period for each product and territory. To do this, they probably use simple SQL queries or stored procedures to produce data on demand. Step 2: The results from step one are saved in an Excel document, so business users can analyse them with filters or pivot tables. Alternatively, SQL Server Reporting Services (SSRS) might be used to generate a report of the SQL query for display on an intranet page. Step 3: If these queries are run frequently, or business users want to explore data from multiple sources more freely, it may become necessary to create a new database structured for analysis rather than CRUD (create, retrieve, update, and delete). For example, data from more than one system — plus external information — may be incorporated into a data warehouse. This can become ‘one source of truth’ for the business’s operational activities. The warehouse will probably have a simple ‘star’ schema, with fact tables representing the measures to be analysed (e.g. unit sales, revenue) and dimension tables defining how this data is aggregated (e.g. by time, region or product). Reports can be generated from the warehouse with Excel, SSRS or other tools. Step 4: Not too long ago, Microsoft introduced an Excel plug-in, PowerPivot, which allows users to bring larger volumes of data into Excel documents and create links between multiple tables.  These BISM Tabular documents can be created by the database owners or other expert Excel users and viewed by anyone with Excel PowerPivot. Sometimes, business users may use PowerPivot to create reports directly from the primary database, bypassing the need for a data warehouse. This can introduce problems when there are misunderstandings of the database structure or no single ‘source of truth’ for key data. Step 5: Steps three or four are often enough to satisfy business intelligence needs, especially if users are sophisticated enough to work with the warehouse in Excel or SSRS. However, sometimes the relationships between data are too complex or the queries which aggregate across periods, regions etc are too slow. In these cases, it can be necessary to formalise how the data is analysed and pre-build some of the aggregations. To do this, a business intelligence professional will typically use SQL Server Analysis Services (SSAS) to create a multidimensional model — or “cube” — that more simply represents key measures and aggregates them across specified dimensions. Step five is where our tool, SSAS Compare, becomes useful, as it helps review and deploy changes from development to production. For us at Red Gate, the primary value of SSAS Compare is to establish a dialog with BI users, so we can develop a portfolio of products that support creation and deployment across a range of report and model types. For example, PowerPivot and the new BISM Tabular model create a potential customer base for tools that extend beyond BI professionals. We’re interested in learning where people are in this story, so we’ve created a six-question survey to find out. Whether you’re at step one or step five, we’d love to know how you use BI so we can decide how to build tools that solve your problems. 
So if you have sixty seconds to spare, tell us on the survey!

    Read the article

  • Using Deployment Manager

    - by Jess Nickson
    One of the teams at Red Gate has been working very hard on a new product: Deployment Manager. Deployment Manager is a free tool that lets you deploy updates to .NET apps, services and databases through a central dashboard. Deployment Manager has been out for a while, but I must admit that even though I work in the same building, until now I hadn’t even looked at it. My job at Red Gate is to develop and maintain some of our community sites, which involves carrying out regular deployments. One of the projects I have to deploy on a fairly regular basis requires me to send my changes to our build server, TeamCity. The output is a Zip file of the build. I then have to go and find this file, copy it across to the staging machine, extract it, and copy some of the sub-folders to other places. In order to keep track of what builds are running, I need to rename the folders accordingly. However, even after all that, I still need to go and update the site and its applications in IIS to point at these new builds. Oh, and then, I have to repeat the process when I deploy on production. Did I mention the multiple configuration files that then need updating as well? Manually? The whole process can take well over half an hour. I’m ready to try out a new process. Deployment Manager is designed to massively simplify the deployment processes from what could be lots of manual copying of files, managing of configuration files, and database upgrades down to a few clicks. It’s a big promise, but I decided to try out this new tool on one of the smaller ASP.NET sites at Red Gate, Format SQL (the result of a Red Gate Down Tools week). I wanted to add some new functionality, but given it was a new site with no set way of doing things, I was reluctant to have to manually copy files around servers. I decided to use this opportunity as a chance to set the site up on Deployment Manager and check out its functionality. What follows is a guide on how to get set up with Deployment Manager, a brief overview of its features, and what I thought of the experience. To follow along with the instructions that follow, you’ll first need to download Deployment Manager from Red Gate. It has a free ‘Starter Edition’ which allows you to create up to 5 projects and agents (machines you deploy to), so it’s really easy to get up and running with a fully-featured version. The Initial Set Up After installing the product and setting it up using the administration tool it provides, I launched Deployment Manager by going to the URL and port I had set it to run on. This loads up the main dashboard. The dashboard does a good job of guiding me through the process of getting started, beginning with a prompt to create some environments. 1. Setting up Environments The dashboard informed me that I needed to add new ‘Environments’, which are essentially ways of grouping the machines you want to deploy to. The environments that get added will show up on the main dashboard. I set up two such environments for this project: ‘staging’ and ‘live’.   2. Add Target Machines Once I had created the environments, I was ready to add ‘target machine’s to them, which are the actual machines that the deployment will occur on.   To enable me to deploy to a new machine, I needed to download and install an Agent on it. The ‘Add target machine’ form on the ‘Environments’ page helpfully provides a link for downloading an Agent.   
Once the agent has been installed, it is just a case of copying the server key to the agent, and the agent key to the server, to link them up.   3. Run Health Check If, after adding your new target machine, the ‘Status’ flags an error, it is possible that the Agent and Server keys have not been entered correctly on both Deployment Manager and the Agent service.     You can ‘Check Health’, which will give you more information on any issues. It is probably worth running this regardless of what status the ‘Environments’ dashboard is claiming, just to be on the safe side.     4. Add Projects Going back to the main Dashboard tab at this point, I found that it was telling me that I needed to set up a new project.   I clicked the ‘project’ link to get started, gave my new project a name and clicked ‘Create’. I was then redirected to the ‘Steps’ page for the project under the Projects tab.   5. Package Steps The ‘Steps’ page was fairly empty when it first loaded.   Adding a ‘step’ allowed me to specify what packages I wanted to grab for the deployment. This part requires a NuGet package feed to be set up, which is where Deployment Manager will look for the packages. At Red Gate, we already have one set up, so I just needed to tell Deployment Manager about it. Don’t worry; there is a nice guide included on how to go about doing all of this on the ‘Package Feeds’ page in ‘Settings’, if you need any help with setting these bits up.    At Red Gate we use a build server, TeamCity, which is capable of publishing built projects to the NuGet feed we use. This makes the workflow for Format SQL relatively simple: when I commit a change to the project, the build server is configured to grab those changes, build the project, and spit out a new NuGet package to the Red Gate NuGet package feed. My ‘package step’, therefore, is set up to look for this package on our feed. The final part of package step was simply specifying which machines from what environments I wanted to be able to deploy the project to.     Format SQL Now the main Dashboard showed my new project and environment in a rather empty looking grid. Clicking on my project presented me with a nice little message telling me that I am now ready to create my first release!   Create a release Next I clicked on the ‘Create release’ button in the Projects tab. If your feeds and package step(s) were set up correctly, then Deployment Manager will automatically grab the latest version of the NuGet package that you want to deploy. As you can see here, it was able to pick up the latest build for Format SQL and all I needed to do was enter a version number and description of the release.   As you can see underneath ‘Version number’, it keeps track of what version the previous release was given. Clicking ‘Create’ created the release and redirected me to a summary of it where I could check the details before deploying.   I clicked ‘Deploy this release’ and chose the environment I wanted to deploy to and…that’s it. Deployment Manager went off and deployed it for me.   Once I clicked ‘Deploy release’, Deployment Manager started to automatically update and provide continuing feedback about the process. If any errors do arise, then I can expand the results to see where it went wrong. That’s it, I’m done! Keep in mind, if you hit errors with the deployment itself then it is possible to view the log output to try and determine where these occurred. You can keep expanding the logs to narrow down the problem. 
The screenshot below is not from my Format SQL deployment, but I thought I’d post one to demonstrate the logging output available. Features One of the best bits of Deployment Manager for me is the ability to very, very easily deploy the same release to multiple machines. Deploying this same release to production was just a case of selecting the deployment and choosing the ‘live’ environment as the place to deploy to. Following on from this is the fact that, as Deployment Manager keeps track of all of your releases, it is extremely easy to roll back to a previous release if anything goes pear-shaped! You can view all your previous releases and select one to re-deploy. I needed this feature more than once when differences in my production and staging machines lead to some odd behavior.     Another option is to use the TeamCity integration available. This enables you to set Deployment Manager up so that it will automatically create releases and deploy these to an environment directly from TeamCity, meaning that you can always see the latest version up and running without having to do anything. Machine Specific Deployments ‘What about custom configuration files?’ I hear you shout. Certainly, it was one of my concerns. Our setup on the staging machine is not in line with that on production. What this means is that, should we deploy the same configuration to both, one of them is going to break. Thankfully, it turns out that Deployment Manager can deal with this. Given I had environments ‘staging’ and ‘live’, and that staging used the project’s web.config file, while production (‘live’) required the config file to undergo some transformations, I simply added a web.live.config file in the project, so that it would be included as part of the NuGet package. In this file, I wrote the XML document transformations I needed and Deployment Manager took care of the rest. Another option is to set up ‘variables’ for your project, which allow you to specify key-value pairs for your configuration file, and which environment to apply them to. You’ll find Variables as a full left-hand submenu within the ‘Projects’ tab. These features will definitely be of interest if you have a large number of environments! There are still many other features that I didn’t get a chance to play around with like running PowerShell scripts for more personalised deployments. Maybe next time! Also, let’s not forget that my use case in this article is a very simple one – deploying a single package. I don’t believe that all projects will be equally as simple, but I already appreciate how much easier Deployment Manager could make my life. I look forward to the possibility of moving our other sites over to Deployment Manager in the near future.   Conclusion In this article I have described the steps involved in setting up and configuring an instance of Deployment Manager, creating a new automated deployment process, and using this to actually carry out a deployment. I’ve tried to mention some of the features I found particularly useful, such as error logging, easy release management allowing you to deploy the same release multiple times, and configuration file transformations. If I had to point out one issue, then it would be that the releases are immutable, which from a development point of view makes sense. However, this causes confusion where I have to create a new release to deploy to a newly set up environment – I cannot simply deploy an old release onto a new environment, the whole release needs to be recreated. 
I really liked how easy it was to get going with the product. Setting up Format SQL and making a first deployment took very little time, especially when you compare it to how long it takes me to manually deploy the other site, as I described earlier. I liked how it let me know what I needed to do next, with little messages flagging up that I needed to ‘create environments’ or ‘add some deployment steps’ before I could continue. I found the dashboard incredibly convenient. As the number of projects and environments increases, it might become awkward to try and search them and find out what state they are in. Instead, the dashboard handily keeps track of the latest deployments of each project and lets you know what version is running on each of the environments, and when that deployment occurred. Finally, do you remember my complaint about having to rename folders so that I could keep track of what build they came from? This is yet another thing that Deployment Manager takes care of for you. Each release is put into its own directory, which takes the name of whatever version number that release has, though these can be customised if necessary. If you’d like to take a look at Deployment Manager for yourself, then you can download it here.

    Read the article

  • Going to the Score Cards - Exceptional DBA Awards 2011

    - by Rodney
    This year marks my 4th year as a judge for the Exceptional DBA Awards, founded by Red Gate in 2008 to "recognize the essential but often overlooked contributions of DBAs, the unsung heroes of the IT community." As a professional DBA myself I have been honored to participate as a judge. It is not an easy job because there is a voluminous amount of nominees from all over the world. Each judge has to read through every word of the nominee's answers, deciding what makes each person special and stand out amongst their peers. What drives them? What single element of their submission will shine above all others? It is my hope that what I am about to divulge to you as a judge will prompt you to think about yourself or someone you know and decide that you may be the exceptional DBA who can take home the gold at this year's award ceremony in Seattle. We are more than a few weeks into the nomination process and there are quite a number of submissions already. I can not tell you how many as that would not be fair. I can say it is not 1 million or more. I can also say that it is not 100,000. But that is all I can say about that. However, I can tell you that it is enough this year that we are breaking records on the number of people who have been influenced, inspired or intrigued by the awards in the past. I remember them all like it were yesterday. fuzzy thought cloud here. It was a rainy day in Seattle (all memories for each award ceremony will start thusly) and I was in the hotel going over my notes on what I wanted to say about the winner of the 2008 Red Gate Exceptional DBA Award. The notes were on index cards that I had either bought or stolen from my wife, I do not recall, but I was nervous which was unlike me. This was, after all, a big night for the winner. Of course, we, the judges and the SQL community, had already decided the winner and now all that remained was to present the award. The room was packed. It was Casino night, sponsored by sqlservercentral.com. Money (fake), drinks (not fake) and camaraderie flowed through the room. Dan McClain won the award that year. He worked for Anheuser-Busch at the time. I promise that did not influence my decision. We presented Dan with the award. He was very proud of this achievement, rightfully so, as was the SQL community for him. I spoke with Dan throughout the conference and realized how huge this award was for him, not just personally but professionally. It was a rainy day in Seattle in 2009 and I was nervous. I was asked to speak to a group of people again as a judge for the Exceptional DBA Awards. This year, Josef Richberg would be the recipient of the award, but he would not be able to attend. We all prayed for him as he fought through an illness and congratulated him for his accomplishments as a DBA for his company. He got better and sallied forth and continued to give back to the SQL community that he saw as one big family. In 2010, and I am getting ahead of myself, he was asked to be a judge himself for the very award he had just received the year before. It was a sunny day in Seattle and I missed it, because it was in July and I was not there. It was a rainy day in Seattle and it is 2010 and Tracy Hamlin enters a submission that blows this judge away. She is managing a 50 Terabyte distributed database ("50 Gigabytes! Are you kidding me!!!", Rodney jokes.)  and loves her daily job as a DBA working with developers, mentoring them and teaching them best practices with kindness and patience. 
She is a people person who just happens to have 10+ years’ experience with RDBMSs. She wins the award and goes on to be recognized as famous at PASS. It will be a rainy day in Seattle this year when I sit amongst my old constituent judges and friends, Brad McGehee, http://www.simple-talk.com/books/sql-books/how-to-become-an-exceptional-dba,-2nd-edition/, Steve Jones, whom we all know and love at http://www.sqlservercentral.com, and a young upstart to the SQL Community, this cat named Brent Ozar, to announce the 2011 winner. I personally have not heard of Brent, but I am told I interviewed him for a DBA position several years ago and turned him down, http://www.brentozar.com/archive/2011/05/exceptional-dba-contest/ . I hope that did not jeopardize his future in the SQL world. I am a big-hearted oaf and would feel horrible. Hopefully I will meet him at PASS and we can work this all out and I can help him get a DBA job. The rain has stopped and a new year is upon us. The stakes are high...the competition is fierce...the rewards are incredible. The entry form awaits you. http://www.exceptionaldba.com/ I very much look forward to meeting you and presenting the award to you in front of hundreds of your envious but proud peers as the new Exceptional DBA for 2011 at the PASS Summit. Here is what you could win: the Exceptional DBA of the Year receives full conference registration for the 2011 PASS Summit in Seattle, where the awards ceremony will take place, four nights’ hotel accommodation, and $300 towards travel expenses. They will also be featured on Simple-Talk. Are you ready? Are you nervous?

    Read the article

  • Last chance to enter! Exceptional DBA Awards 2011

    - by Rebecca Amos
Only 1 day left to enter the Exceptional DBA Awards! Get started on your entry today, and you could be heading to Seattle for the PASS Summit in October. All you need to do is visit the Exceptional DBA website and answer a few questions about: your career and achievements as a SQL Server DBA; any mistakes you've made along the way (and how you tackled them); activities you're involved in within the SQL Server community – for example writing, blogging, contributing to forums, speaking at events, or organising user groups; and why you think you should be the Exceptional DBA of 2011. As well as the respect and recognition of your peers – and a great boost to your CV – you could win full conference registration to this year's PASS Summit in Seattle (including accommodation and $300 towards travel expenses) – where the award will be presented, as well as a copy of Red Gate's SQL DBA Bundle, and a chance to be featured here, on Simple-Talk.com. So why not give it a shot? Start your entry now at www.exceptionaldba.com (nominations close on 30 June).

    Read the article

  • How do you use blog content?

    - by fatherjack
Do you write a blog, or have you ever thought about it? I think people fall into one of a few categories when it comes to blogs, especially blogs with technical content. Writing articles furiously - daily, twice daily - and reading dozens of others. Writing the odd piece of content and reading plenty of others' output. Started a blog once and it's fizzled out, but reading lots. Thought about starting a blog someday but never got around to it, hopping into the occasional blog when a link or a Tweet takes them there. Never thought about writing one, but often catching content from them when Google (or other preferred search engine) finds content related to their search. Now I am not saying that either of these is right or wrong, nor am I saying that anyone should feel any compulsion to be in any particular category. What I would say is that you as a blog reader have the power to move blog writers from one category to another. How, you might ask? How do I have any power over a blog writer? It is very simple - feedback. If you give feedback then the blog writer knows that they are reaching an audience; if there is no response then we are simply writing down our thoughts for what could amount to nothing more than a feeble amount of exercise and a few more keystrokes towards the onset of RSI. Most blogs have a mechanism to alert the writer when there are comments, and personally speaking, if an email is received saying there has been a response to a blog article then there is a rush of enthusiasm, a moment of excitement that someone is actually reading and considering the text that was submitted and made available for the whole world to read. I am relatively new to this blog game and could be in some extended honeymoon period, as I have also recently been incorporated into the Simple Talk 'stable'. I can understand that once you get to the "Dizzy Heights of Ozar" (www.brentozar.com) then getting comments and feedback might not be such a pleasure and may even be rather more of a chore, but that, I guess, is the price of fame. For us mere mortals starting out blogging, getting feedback (or even, at the moment for me, simply the hope of getting feedback) is what keeps it going. The hope that you will pick a topic that hasn't been done recently by Brad McGehee, Grant Fritchey, Paul Randall, Thomas LaRock or any one of the dozens of rock star bloggers listed here or others from SQLServerPedia and so on, and then do it well enough to be found, reviewed, or <shudder> (re)tweeted to bring more visitors, is what we are striving for, along with the fact that the content we might produce is something that will be of benefit to others. There is only so much point to typing content that no-one is reading and putting it on a blog. You may as well just write it in a diary. A technical blog is not like, say, a blog covering photography techniques, where the way to frame and take a picture stands true whether it was written last week, last year or last century - technical content goes sour, quite quickly. There isn't much call for articles about yesterday's technology unless it's something that still applies to current versions too, so some content written no more than 2 years ago isn't worth having now. The combination of a piece of content that you know is not going to last long and the fact that no-one reads it is a strong force against writing anything else. Getting feedback counters that despair and gives a value to writing something new.
I would say that any feedback is good but there are obviously comments that are just so negative or otherwise badly phrased that they would hasten the demise of a blog but, in general most feedback will encourage a writer. It may not be a comment that supports or agrees with the main theme of a post but if it generates discussion or opens up a previously unexplored viewpoint it is contributing to the blog and is therefore encouraging to the writer. Even if you only say "thank you" before you leave a blog, having taken a section of script to use for yourself or having been given a few links to some content that has widened your knowledge it will be so welcome to the blog owner. Isn't it also the decent thing to do, acknowledging that you have benefited from another's efforts?

    Read the article

  • SQLBeat Podcast – Episode 8 – Interviewing Patrick LeBlanc On Interviewing

    - by SQLBeat
In this episode of the SQLBeat Podcast (@SQLBeat on Twitter) I had a chance to speak with Patrick LeBlanc, currently with Microsoft and former SQL Server MVP. We spend a good amount of time talking about his current gig and his apparent fascination with almost never being dressed. Fortunately he had on some jeans and a shirt for this interview, though he did look a bit dozy because he had not slept the night before. With his help we came up with what is going to be a recurring section on the podcast, and that is failed or embarrassing job interviews. Patrick has quite a humiliating one for the launch of “Tell Me About Your Worst Job Interview”. OK, I can surely come up with a better title for that section. Anyway, have a listen and, if you can think of something better, let me know in the comments section. You can listen to the episode here: http://www.simple-talk.com/blogbits/rodneylandrum/Patrick_LeBlanc.ogg

    Read the article

  • Subterranean IL: Constructor constraints

    - by Simon Cooper
    The constructor generic constraint is a slightly wierd one. The ECMA specification simply states that it: constrains [the type] to being a concrete reference type (i.e., not abstract) that has a public constructor taking no arguments (the default constructor), or to being a value type. There seems to be no reference within the spec to how you actually create an instance of a generic type with such a constraint. In non-generic methods, the normal way of creating an instance of a class is quite different to initializing an instance of a value type. For a reference type, you use newobj: newobj instance void IncrementableClass::.ctor() and for value types, you need to use initobj: .locals init ( valuetype IncrementableStruct s1 ) ldloca 0 initobj IncrementableStruct But, for a generic method, we need a consistent method that would work equally well for reference or value types. Activator.CreateInstance<T> To solve this problem the CLR designers could have chosen to create something similar to the constrained. prefix; if T is a value type, call initobj, and if it is a reference type, call newobj instance void !!0::.ctor(). However, this solution is much more heavyweight than constrained callvirt. The newobj call is encoded in the assembly using a simple reference to a row in a metadata table. This encoding is no longer valid for a call to !!0::.ctor(), as different constructor methods occupy different rows in the metadata tables. Furthermore, constructors aren't virtual, so we would have to somehow do a dynamic lookup to the correct method at runtime without using a MethodTable, something which is completely new to the CLR. Trying to do this in IL results in the following verification error: newobj instance void !!0::.ctor() [IL]: Error: Unable to resolve token. This is where Activator.CreateInstance<T> comes in. We can call this method to return us a new T, and make the whole issue Somebody Else's Problem. CreateInstance does all the dynamic method lookup for us, and returns us a new instance of the correct reference or value type (strangely enough, Activator.CreateInstance<T> does not itself have a .ctor constraint on its generic parameter): .method private static !!0 CreateInstance<.ctor T>() { call !!0 [mscorlib]System.Activator::CreateInstance<!!0>() ret } Going further: compiler enhancements Although this method works perfectly well for solving the problem, the C# compiler goes one step further. If you decompile the C# version of the CreateInstance method above: private static T CreateInstance() where T : new() { return new T(); } what you actually get is this (edited slightly for space & clarity): .method private static !!T CreateInstance<.ctor T>() { .locals init ( [0] !!T CS$0$0000, [1] !!T CS$0$0001 ) DetectValueType: ldloca.s 0 initobj !!T ldloc.0 box !!T brfalse.s CreateInstance CreateValueType: ldloca.s 1 initobj !!T ldloc.1 ret CreateInstance: call !!0 [mscorlib]System.Activator::CreateInstance<T>() ret } What on earth is going on here? Looking closer, it's actually quite a clever performance optimization around value types. So, lets dissect this code to see what it does. The CreateValueType and CreateInstance sections should be fairly self-explanatory; using initobj for value types, and Activator.CreateInstance for reference types. How does the DetectValueType section work? 
First, the stack transition for value types: ldloca.s 0 // &[!!T(uninitialized)] initobj !!T // ldloc.0 // !!T box !!T // O[!!T] brfalse.s // branch not taken When the brfalse.s is hit, the top stack entry is a non-null reference to a boxed !!T, so execution continues to to the CreateValueType section. What about when !!T is a reference type? Remember, the 'default' value of an object reference (type O) is zero, or null. ldloca.s 0 // &[!!T(null)] initobj !!T // ldloc.0 // null box !!T // null brfalse.s // branch taken Because box on a reference type is a no-op, the top of the stack at the brfalse.s is null, and so the branch to CreateInstance is taken. For reference types, Activator.CreateInstance is called which does the full dynamic lookup using reflection. For value types, a simple initobj is called, which is far faster, and also eliminates the unboxing that Activator.CreateInstance has to perform for value types. However, this is strictly a performance optimization; Activator.CreateInstance<T> works for value types as well as reference types. Next... That concludes the initial premise of the Subterranean IL series; to cover the details of generic methods and generic code in IL. I've got a few other ideas about where to go next; however, if anyone has any itching questions, suggestions, or things you've always wondered about IL, do let me know.

    Read the article

  • Justifiable Perks.

    - by Phil Factor
        I was once the director of a start-up IT Company, and had the task of recruiting a proportion of the management team. As my background was in IT management, I was rather more familiar with recruiting Geeks for technology jobs, but here, one of my early tasks was interviewing a Marketing Director.  The small group of financiers had suggested a rather strange Irishman called  Halleran.  From my background in City of London dealing-rooms, I was slightly unprepared for the experience of interviewing anyone wearing a pink suit. Many of my older City colleagues would have required resuscitation after seeing his white leather shoes. However, nobody will accuse me of prejudging an interviewee. After all, many Linux experts who I’ve come to rely on have appeared for interview dressed as hobbits. In fact, the interview went well, and we had even settled his salary.  I was somewhat unprepared for the coda.    ‘And I will need to be provided with a Ferrari  by the company.’    ‘Hmm. That seems reasonable.’    Initially, he looked startled, and then a slow smile of victory spread across his face.    ‘What colour would you like?’ I asked genially.    ‘It has to be red.’ He looked very earnest on this point.    ‘Fine. I have to go past Hamleys on the way home this evening, so I’ll pick one up then for you.’    ‘Er.. Hamley’s is a toyshop, not a Ferrari Dealership.’    I stared at him in bafflement for a few seconds. ‘You’re not seriously asking for a real Ferrari are you?’     ‘Well, yes. Not for my own sake, you understand. I’d much prefer a simple run-about, but my position demands it. How could I maintain the necessary status in the office without one? How could I do my job in marketing when my grey Datsun was all too visible in the car Park? It is a tool of the job.’    ‘Excuse me a moment, but I must confer with the MD’    I popped out to see Chris, the MD. ‘Chris, I’m interviewing a lunatic in a pink suit who is trying to demand that a Ferrari is a precondition of his employment. I tried the ‘misunderstanding trick’ but it didn’t faze him.’     ‘Sorry, Phil, but we’ve got to hire him. The VCs insist on it. You’ve got to think of something that doesn’t involve committing to the purchase of a Ferrari. Current funding barely covers the rent for the building.’    ‘OK boss. Leave it to me.’    On return, I slapped O’Halleran’s file on the table with a genial, paternalistic smile. ‘Of course you should have a Ferrari. The only trouble is that it will require a justification document that can be presented to the board. I’m sure you’ll have no problem in preparing this document in the required format.’ The initial look of despair was quickly followed by a bland look of acquiescence. He had, earlier in the interview, argued with great eloquence his skill in preparing the tiresome documents that underpin the essential corporate and government deals that were vital to the success of this new enterprise. The justification of a Ferrari should be a doddle.     After the interview, Chris nervously asked how I’d fared.     ‘I think it is all solved.’    ‘… without promising a Ferrari, I hope.’    ‘Well, I did actually; on condition he justified it in writing.’    Chris issued a stream of invective. The strain of juggling the resources in an underfunded startup was beginning to show.    ‘Don’t worry. In the unlikely event of him coming back with the required document, I’ll give him mine.’    ‘Yours?’ He strode over to the window to stare down at the car park.    
He needn’t have worried: I knew that his breed of marketing man could more easily lay an ostrich egg than prepare a decent justification document. My Ferrari is still there at the back of my garage. Few know of the Ferrari cultivator, a simple, inexpensive motorized device designed for the subsistence farmers of southern Italy. It is the very devil to start, but it creates a perfect tilth for the seedbed.

    Read the article

  • Towards Database Continuous Delivery – What Next after Continuous Integration? A Checklist

    - by Ben Rees
Database delivery patterns & practices – Stage 4: Automated Deployment. If you’ve been fortunate enough to get to the stage where you’ve implemented some sort of continuous integration process for your database updates, then hopefully you’re seeing the benefits of that investment – constant feedback on changes your devs are making, advanced warning of data loss (prior to the production release on Saturday night!), a nice suite of automated tests to check business logic, so you know it’s going to work when it goes live, and so on. But what next? What can you do to improve your delivery process further, moving towards a full continuous delivery process for your database? In this article I describe some of the issues you might need to tackle on the next stage of this journey, and how to plan to overcome those obstacles before they appear. Our Database Delivery Learning Program consists of four stages – really three: source controlling a database, running continuous integration processes, and then setting up automated deployment (the middle stage is split in two, basic and advanced continuous integration, making four stages in total). If you’ve managed to work through the first three of these stages – source control, basic, then advanced CI – then you should have a solid change management process set up where, every time one of your team checks in a change to your database (whether schema or static reference data), this change gets fully tested automatically by your CI server. But this is only part of the story. Great, we know that our updates work, that the upgrade process works, that the upgrade isn’t going to wipe our 4Tb of production data with a single DROP TABLE. But – how do you get this (fully tested) release live? Continuous delivery means being always ready to release your software at any point in time. There’s a significant gap between your latest version being tested, and it being easily releasable. Just a quick note on terminology – there’s a nice piece here from Atlassian on the difference between continuous integration, continuous delivery and continuous deployment. This piece also gives a nice description of the benefits of continuous delivery. These benefits have been summed up by Jez Humble at Thoughtworks as: “Continuous delivery is a set of principles and practices to reduce the cost, time, and risk of delivering incremental changes to users”. There’s another really useful piece here on Simple-Talk about the need for continuous delivery and how it applies to the database, written by Phil Factor – specifically the extra needs and complexities of implementing a full CD solution for the database (compared to just implementing CD for, say, a web app). So, hopefully you’re convinced of moving on to the next stage! The next step after CI is to get some sort of automated deployment (or “release management”) process set up. But what should I do next? What do I need to plan and think about for getting my automated database deployment process set up? Can’t I just install one of the many release management tools available and hey presto, I’m ready? If only it were that simple. Below I list some of the areas that it’s worth spending a little time on, where a little planning and prep could go a long way.
It’s also worth pointing out that this should really be an evolving process. Depending on your starting point of course, it can be a long journey from your current setup to a full continuous delivery pipeline. If you’ve got a CI mechanism in place, you’re certainly a long way down that path. Nevertheless, we’d recommend evolving your process incrementally. Pages 157 and 129-141 of the book on Continuous Delivery (by Jez Humble and Dave Farley) have some great guidance on building up a pipeline incrementally: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912 For now, in this post, we’ll look at the following areas for your checklist: You and Your Team; Environments; The Deployment Process; Rollback and Recovery; Development Practices. You and Your Team: It’s a cliché in the DevOps community that “It’s not all about processes and tools, really it’s all about a culture”. As stated in this DevOps report from Puppet Labs: “DevOps processes and tooling contribute to high performance, but these practices alone aren’t enough to achieve organizational success. The most common barriers to DevOps adoption are cultural: lack of manager or team buy-in, or the value of DevOps isn’t understood outside of a specific group”. Like most clichés, there’s truth in there – if you want to set up a database continuous delivery process, you need to get your boss, your department, your company (if relevant) onside. Why? Because it’s an investment with the benefits coming way down the line. But the benefits are huge – for HP, in the book A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware, these are summarized as: overall development costs reduced by 40% (2008 to present); the number of programs under development increased by 140%; development costs per program down 78%; and firmware resources now driving innovation increased by a factor of 8 (from 5% working on new features to 40%). But what does this mean? It means that, when moving to the next stage, to make that extra investment in automating your deployment process, it helps a lot if everyone is convinced that this is a good thing; that they understand the benefits of automated deployment and are willing to make the effort to transform to a new way of working. Incidentally, if you’re ever struggling to convince someone of the value, I’d strongly recommend just buying them a copy of this book – a great read, and a very practical guide to how it can really work at a large org. I’ve spoken to many customers who have implemented database CI who describe their deployment process as “The point where automation breaks down. Up to that point, the CI process runs, untouched by human hand, but as soon as that’s finished we revert to manual.” This deployment process can involve, for example, a DBA manually comparing an environment (say, QA) to production, creating the upgrade scripts, reading through them, checking them against an Excel document emailed to him/her the night before, turning to page 29 in his/her notebook to double-check how replication is switched off and on for deployments, and so on and so on. Painful, error-prone and lengthy. But the point is, if this is something like your deployment process, telling your DBA “We’re changing everything you do and your toolset next week, to automate most of your role – that’s okay isn’t it?” isn’t likely to go down well.
There’s some work here to bring him/her onside – to explain what you’re doing, why there will still be control of the deployment process and so on. Or of course, if you’re the DBA looking after this process, you have to do a similar job in reverse. You may have researched and worked out how you’d like to change your methodology to start automating your painful release process, but do the dev team know this? What if they have to start producing different artifacts for you? Will they be happy with this? Worth talking to them, to find out. As well as talking to your DBA/dev team, the other group to get involved before implementation is your manager. And possibly your manager’s manager too. As mentioned, unless there’s buy-in “from the top”, you’re going to hit problems when the implementation starts to get rocky (and what tool/process implementations don’t get rocky?!). You need to have support from someone senior in your organisation – someone you can turn to when you need help with a delayed implementation, lack of resources or lack of progress. Actions: get your DBA involved (or whoever looks after live deployments) and discuss what you’re planning to do – or, if you’re the DBA yourself, get the dev team up to speed with your plans; and get your boss involved too and make sure he/she is bought in to the investment. Environments: Where are you going to deploy to? And really this question is – what environments do you want set up for your deployment pipeline? Assume everyone has “Production”, but do you have a QA environment? Dedicated development environments for each dev? Proper pre-production? I’ve seen every setup under the sun, and there is often a big difference between “What we want, to do continuous delivery properly” and “What we’re currently stuck with”. Some of these differences are:
- What we want: each developer with their own dedicated database environment. What we’ve got: a single shared “development” environment, used by everyone at once.
- What we want: an Integration box used to test the integration of all check-ins via the CI process, along with a full suite of unit tests running on that machine. What we’ve got: in fact, if you have a CI process running, you’re likely to have some sort of integration server running (even if you don’t call it that!) – whether you have a full suite of unit tests running is a different question…
- What we want: a separate QA environment used explicitly for manual testing prior to release. What we’ve got: “We just test on the dev environments, or maybe pre-production”.
- What we want: a proper pre-production (or “staging”) box that matches production as closely as possible. What we’ve got: hopefully a pre-production box of some sort – but does it match production closely!?
- What we want: a production environment reproducible from source control. What we’ve got: a production box which has drifted significantly from anything in source control.
The big question is – how much time and effort are you going to invest in fixing these issues? In reality this just involves figuring out which new databases you’re going to create and where they’ll be hosted – VMs? Cloud-based? What about size/data issues – what data are you going to include on dev environments? Does it need to be masked to protect access to production data? And often the amount of work here really depends on whether you’re working on a new, greenfield project, or trying to update an existing, brownfield application.
There’s a world of difference between starting from scratch with 4 or 5 clean environments (reproducible from source control of course!), and trying to re-purpose and tweak a set of existing databases, with all of their surrounding processes and quirks. But for a proper release management process, ideally you have:
- Dedicated development databases,
- An Integration server used for testing continuous integration and running unit tests [NB: this is the point at which deployments are automatic, without human intervention; each deployment after this point is a one-click (but human) action],
- QA – QA engineers use a one-click deployment process to automatically* deploy chosen releases to QA for testing,
- Pre-production – the environment you use to test the production release process,
- Production.
* A note on the use of the word “automatic” – when carrying out automated deployments this does not mean that the deployment is happening without human intervention (i.e. that something is just deploying over and over again). It means that the process of carrying out the deployment is automatic in that it’s not a person manually running through a checklist or set of actions. The deployment still requires a single click from a user. Actions: get your environments set up and ready; set access permissions appropriately; and make sure everyone understands what the environments will be used for (it’s not a “free-for-all” with all environments to be accessed, played with and changed by development). The Deployment Process: As described earlier, most existing database deployment processes are pretty manual. The following is a description of a process we hear very often when we ask customers “How do your database changes get live? How does your manual process work?”
1. Check pre-production matches production (use a schema compare tool, like SQL Compare). Sometimes done by taking a backup from production and restoring it into pre-prod.
2. Again, use a schema compare tool to find the differences between the latest version of the database ready to go live (i.e. what the team have been developing) and pre-production. This generates a script.
3. The user (generally the DBA) reviews the script. This often involves manually checking updates against a spreadsheet or similar.
4. Run the script on pre-production, and check there are no errors (i.e. it upgrades pre-production to what you hoped).
5. If all is working, run the script on production.*
* this assumes there’s no problem with production drifting away from pre-production in the interim time period (i.e. someone has hacked something in to the production box without going through the proper change management process). This difference could undermine the validity of your pre-production deployment test. Red Gate is currently working on a free tool to detect this problem – sign up here at www.sqllighthouse.com, if you’re interested in testing early versions. There are several variations on this process – some better, some much worse! How do you automate this? In particular, step 3 – surely you can’t automate a DBA checking through a script that everything is in order!? The key point here is to plan what you want in your new deployment process. There are so many options. At one extreme, pure continuous deployment – whenever a dev checks something in to source control, the CI process runs (including extensive and thorough testing!), before the deployment process kicks in and automatically deploys that change to the live box. Not for the faint hearted – and really not something we recommend.
At the other extreme, you might be more comfortable with a semi-automated process – the pre-production/production matching process is automated (with an error thrown if these environments don’t match), followed by a manual intervention, allowing for script approval by the DBA. Once he/she clicks “Okay, I’m happy for that to go live”, the latter stages automatically take the script through to live. And anything in between of course – and other variations. But we’d strongly recommend sitting down with a whiteboard and your team, and spending a couple of hours mapping out “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” NB: Most of what we’re discussing here is about production deployments. It’s important to note that you will also need to map out a deployment process for earlier environments (for example QA). However, these are likely to be less onerous, and many customers opt for a much more automated process for these boxes. Actions: Sit down with your team and a whiteboard, and draw out the answers to the questions above for your production deployments – “What do we do now?”, “What do we actually want?”, “What will satisfy our needs for continuous delivery, but still maintaining some sort of continuous control over the process?” Repeat for earlier environments (QA and so on). Rollback and Recovery: If only every deployment went according to plan! Unfortunately they don’t – and when things go wrong, you need a rollback or recovery plan for what you’re going to do in that situation. Once you move into a more automated database deployment process, you’re far more likely to be deploying more frequently than before. No longer once every 6 months, maybe now once per week, or even daily. Hence the need for a quick rollback or recovery process becomes paramount, and should be planned for. NB: These are mainly scenarios for handling rollbacks after the transaction has been committed. If a failure is detected during the transaction, the whole transaction can just be rolled back, no problem. There are various options, which we’ll explore in subsequent articles – things like:
- Immediately restore from backup,
- Have a pre-tested rollback script (remembering that really this is a “roll-forward” script – there’s not really such a thing as a rollback script for a database!),
- Have fallback environments – for example, using a blue-green deployment pattern.
Different options have pros and cons – some are easier to set up, some require more investment in infrastructure; and of course some work better than others (the key issue with using backups is loss of the interim transaction data that has been added between the failed deployment and the restore). The best mechanism will be primarily dependent on how your application works and how much you need a cast-iron failsafe mechanism. Actions: Work out an appropriate rollback strategy based on how your application and business works, your appetite for investment and requirements for a completely failsafe process. Development Practices: This is perhaps the more difficult area for people to tackle. The process by which you can deploy database updates is actually intrinsically linked with the patterns and practices used to develop that database and linked application.
So you need to decide whether you want to implement some changes to the way your developers actually develop the database (particularly schema changes) to make the deployment process easier. A good example is the pattern “Branch by abstraction”. Explained nicely here, by Martin Fowler, this is a process that can be used to make significant database changes (e.g. splitting a table) in a step-wise manner so that you can always roll back, without data loss – by making incremental updates to the database backward compatible. Slides 103-108 of the following slidedeck, from Niek Bartholomeus, explain the process: https://speakerdeck.com/niekbartho/orchestration-in-meatspace As these slides show, by making a significant schema change in multiple steps – where each step can be rolled back without any loss of new data – this affords the release team the opportunity to have zero-downtime deployments with considerably less stress (because if an increment goes wrong, they can roll back easily). There are plenty more great patterns that can be implemented – the book Refactoring Databases, by Scott Ambler and Pramod Sadalage, is a great read, if this is a direction you want to go in: http://www.amazon.com/Refactoring-Databases-Evolutionary-paperback-Addison-Wesley/dp/0321774515 But the question is – how much of this investment are you willing to make? How often are you making significant schema changes that would require these best practices? Again, there’s a difference here between migrating old projects and starting afresh – with the latter it’s much easier to instigate best practice from the start. Actions: For your business, work out how far down the path you want to go, amending your database development patterns to “best practice”. It’s a trade-off between implementing quality processes, and the necessity to do so (depending on how often you make complex changes). Socialise these changes with your development group. No-one likes having “best practice” changes imposed on them, so it’s good to introduce these ideas and the rationale behind them early. Summary: The next stages of implementing a continuous delivery pipeline for your database changes (once you have CI up and running) require a little pre-planning, if you want to get the most out of the work, and for the implementation to go smoothly. We’ve covered some of the checklist of areas to consider – mainly in the areas of “Getting the team ready for the changes that are coming” and “Planning out your pipeline, environments, patterns and practices for development”, though there will be more detail, depending on where you’re coming from – and where you want to get to. This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.
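To give a flavour of the branch-by-abstraction idea described above, here is a rough T-SQL sketch of a single backward-compatible increment of a table split. The table and column names are entirely made up for illustration, and a real migration would follow the steps in the slides and book referenced above:

-- One increment of a table split (hypothetical schema), written so that it can
-- be released on its own without breaking code that still uses the old column.

-- 1. Expand: add the new table alongside dbo.Customer rather than altering it.
CREATE TABLE dbo.CustomerAddress
(
    CustomerID  int           NOT NULL
        CONSTRAINT PK_CustomerAddress PRIMARY KEY
        CONSTRAINT FK_CustomerAddress_Customer REFERENCES dbo.Customer (CustomerID),
    AddressLine nvarchar(200) NOT NULL
);
GO

-- 2. Backfill from the existing column, which is deliberately left in place,
--    so current readers and writers carry on working after this deployment.
INSERT INTO dbo.CustomerAddress (CustomerID, AddressLine)
SELECT CustomerID, AddressLine
FROM   dbo.Customer
WHERE  AddressLine IS NOT NULL;
GO

-- Later, separate releases: switch the application over to the new table, keep
-- the two copies in sync in the interim, and only then drop the old column.
-- Each increment can go live (or be backed out) without losing data.

Because nothing existing is altered or dropped in this step, the release can go out – and, if need be, be abandoned – without breaking the current application, which is exactly the property that makes frequent, low-stress deployments possible.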

    Read the article

  • Some notes on Reflector 7

    - by CliveT
    Both Bart and I have blogged about some of the changes that we (and other members of the team) have made to .NET Reflector for version 7, including the new tabbed browsing model, the inclusion of Jason Haley's PowerCommands add-in and some improvements to decompilation such as handling iterator blocks. The intention of this blog post is to cover all of the main new features in one place, and to describe the three new editions of .NET Reflector 7. If you'd simply like to try out the latest version of the beta for yourself you can do so here. Three new editions .NET Reflector 7 will come in three new editions: .NET Reflector .NET Reflector VS .NET Reflector VSPro The first edition is just the standalone Windows application. The latter two editions include the Windows application, but also add the power of Reflector into Visual Studio so that you can save time switching tools and quickly get to the bottom of a debugging issue that involves third-party code. Let's take a look at some of the new features in each edition. Tabbed browsing .NET Reflector now has a tabbed browsing model, in which the individual tabs have independent histories. You can open a new tab to view the selected object by using CTRL+CLICK. I've found this really useful when I'm investigating a particular piece of code but then want to focus on some other methods that I find along the way. For version 7, we wanted to implement the basic idea of tabs to see whether it is something that users will find helpful. If it is something that enhances productivity, we will add more tab-based features in a future version. PowerCommands add-in We have also included Jason Haley's PowerCommands add-in as part of version 7. This add-in provides a number of useful commands, including support for opening .xap files and extracting the constituent assemblies, and a query editor that allows C# queries to be written and executed against the Reflector object model . All of the PowerCommands features can be turned on from the options menu. We will be really interested to see what people are finding useful for further integration into the main tool in the future. My personal favourite part of the PowerCommands add-in is the query editor. You can set up as many of your own queries as you like, but we provide 25 to get you started. These do useful things like listing all extension methods in a given assembly, and displaying other lower-level information, such as the number of times that a given method uses the box IL instruction. These queries can be extracted and then executed from the 'Run Query' context menu within the assembly explorer. Moreover, the queries can be loaded, modified, and saved using the built-in editor, allowing very specific user customization and sharing of queries. The PowerCommands add-in contains many other useful utilities. For example, you can open an item using an external application, work with enumeration bit flags, or generate assembly binding redirect files. You can see Bart's earlier post for a more complete list. .NET Reflector VS .NET Reflector VS adds a brand new Reflector object browser into Visual Studio to save you time opening .NET Reflector separately and browsing for an object. A 'Decompile and Explore' option is also added to the context menu of references in the Solution Explorer, so you don't need to leave Visual Studio to look through decompiled code. We've also added some simple navigation features to allow you to move through the decompiled code as quickly and easily as you can in .NET Reflector. 
When this is selected, the add-in decompiles the given assembly, Once the decompilation has finished, a clone of the Reflector assembly explorer can be used inside Visual Studio. When Reflector generates the source code, it records the location information. You can therefore navigate from the source file to other decompiled source using the 'Go To Definition' context menu item. This then takes you to the definition in another decompiled assembly. .NET Reflector VSPro .NET Reflector VSPro builds on the features in .NET Reflector VS to add the ability to debug any source code you decompile. When you decompile with .NET Reflector VSPro, a matching .pdb is generated, so you can use Visual Studio to debug the source code as if it were part of the project. You can now use all the standard debugging techniques that you are used to in the Visual Studio debugger, and step through decompiled code as if it were your own. Again, you can select assemblies for decompilation. They are then decompiled. And then you can debug as if they were one of your own source code files. The future of .NET Reflector As I have mentioned throughout this post, most of the new features in version 7 are exploratory steps and we will be watching feedback closely. Although we don't want to speculate now about any other new features or bugs that will or won't be fixed in the next few versions of .NET Reflector, Bart has mentioned in a previous post that there are lots of improvements we intend to make. We plan to do this with great care and without taking anything away from the simplicity of the core product. User experience is something that we pride ourselves on at Red Gate, and it is clear that Reflector is still a long way off our usual standards. We plan for the next few versions of Reflector to be worked on by some of our top usability specialists who have been involved with our other market-leading products such as the ANTS Profilers and SQL Compare. I re-iterate the need for the really great simple mode in .NET Reflector to remain intact regardless of any other improvements we are planning to make. I really hope that you enjoy using some of the new features in version 7 and that Reflector continues to be your favourite .NET development tool for a long time to come.

    Read the article

< Previous Page | 91 92 93 94 95 96 97 98 99 100 101 102  | Next Page >