Search Results

Search found 24353 results on 975 pages for 'test coverage'.

Page 11/975 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Black box test cases for insertion procedure

    - by AJ
    insertion_procedure (int a[], int p[], int N)
    {
        int i, j, k;
        for (i = 0; i <= N; i++) p[i] = i;
        for (i = 2; i <= N; i++) {
            k = p[i];
            j = 1;
            while (a[p[j-1]] > a[k]) { p[j] = p[j-1]; j--; }
            p[j] = k;
        }
    }
    What would be a few good test cases for this particular insertion procedure?
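    A few boundary-focused cases can be sketched as executable checks. Below is a rough JUnit sketch against a hypothetical Java port of the procedure; the InsertionProcedure class, the method name, and the "indices order a non-decreasingly" property are assumptions for illustration, not something the original post defines. The cases cover the usual black-box boundaries: minimal input, already-sorted, reverse-sorted, duplicates, and negative values.

        import static org.junit.Assert.assertTrue;
        import org.junit.Test;

        public class InsertionProcedureBlackBoxTest {

            // Assumed contract of the hypothetical port: after the call, p holds a
            // permutation of 0..n such that a[p[0]] <= a[p[1]] <= ... <= a[p[n]]
            // (the original C code indexes 0..N inclusive).
            private void checkOrdersIndices(int[] a, int n) {
                int[] p = new int[n + 1];
                InsertionProcedure.insertionProcedure(a, p, n); // hypothetical Java port
                for (int i = 1; i <= n; i++) {
                    assertTrue("a[p[" + (i - 1) + "]] should not exceed a[p[" + i + "]]",
                            a[p[i - 1]] <= a[p[i]]);
                }
            }

            @Test public void smallestInput()  { checkOrdersIndices(new int[] {42}, 0); }
            @Test public void twoElements()    { checkOrdersIndices(new int[] {7, 5}, 1); }
            @Test public void alreadySorted()  { checkOrdersIndices(new int[] {1, 2, 3, 4}, 3); }
            @Test public void reverseSorted()  { checkOrdersIndices(new int[] {9, 7, 5, 3}, 3); }
            @Test public void duplicates()     { checkOrdersIndices(new int[] {4, 4, 2, 4}, 3); }
            @Test public void negativeValues() { checkOrdersIndices(new int[] {-3, 0, -7, 2}, 3); }
        }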

  • Timeout Considerations for Solicit Response – Part 2

    - by Michael Stephenson
    To follow up the previous article about timeouts and how they can affect your application, I have extended the sample we were using to include WCF. I will execute some test scenarios and discuss the results.

    The Sample
    We begin by consuming exactly the same web service, which is sitting on a remote server. This time I have created a .NET 3.5 application which will consume the web service using the basicHttp binding. The configuration for the consumption of this web service is shown in the diagram below. You can see that, as before, we also have the connectionManagement element in the configuration file. I have added a WCF service reference (also using the asynchronous proxy methods) and have the code sample below in the application, which will asynchronously make the web service calls and handle the responses on a callback method invoked by a delegate. If you have read the previous article you will notice that the code is almost the same.

    Sample 1 – WCF with Default Timeouts
    In this test I set about recreating the same scenario as before, but this time using WCF as the messaging component. For the first test I used the default configuration settings which WCF had set up when we added a reference to the web service. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
    The Test: We simulated 21 calls to the web service.
    Test Results: The client-side trace is shown first, then the server-side trace. Some observations on the results:
    - The timeouts happened more quickly than in the previous tests because some calls were timing out before they even attempted to connect to the server.
    - The first few calls that timed out did actually connect to the server and did execute successfully on the server.

    Test 2 – Increase Open Connection Timeout & Send Timeout
    In this test I wanted to increase both the send and open timeout values to try to give everything a chance to go through. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"
    The Test: We simulated 21 calls to the web service.
    Test Results: The client-side trace is shown first, then the server-side trace. Some observations on this test:
    - This test proved that if the timeouts are high enough, everything will just go through.

    Test 3 – Increase Just the Send Timeout
    In this test we wanted to increase just the send timeout. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:10:00"
    The Test: We simulated 21 calls to the web service.
    Test Results: The client-side trace is shown first, then the server-side trace. Some observations on this test:
    - In this test, from both the client and server perspective, everything ran through fine.
    - The open connection timeout did not seem to have any effect.

    Test 4 – Increase Just the Open Connection Timeout
    In this test I wanted to validate the change to the open connection setting by increasing just this on its own. The timeout values for this test are: closeTimeout="00:01:00" openTimeout="00:10:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
    The Test: We simulated 21 calls to the web service.
    Test Results: The client-side trace is shown first, then the server-side trace. Some observations on this test:
    - The increase to the open connection timeout (which relates to opening the channel) was not what stopped the calls from timing out; it is the send of the data which is timing out.
    - On the server you can see that the few successful calls were fine, but there were also a few calls which hit the server yet timed out on the client.
    - You can see that not all calls hit the server, which was one of the problems with the WSE and ASMX options.

    Test 5 – Smaller Increase in Send Timeout
    In this test I wanted to make a smaller increase to the send timeout than before, just to prove that it was the key setting controlling what was timing out. The timeout values for this test are: openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:02:30"
    The Test: We simulated 21 calls to the web service.
    Test Results: The client-side trace is shown first, then the server-side trace. Some observations on this test:
    - You can see that most of the calls got through fine.
    - On the client you can see that call 20 timed out, but it still hit the server and executed fine.

    Summary
    At this point, between the two articles, we have quite a lot of scenarios showing the different ways the timeout settings have played into our original performance issue, and now we can see how WCF could offer an improved way to handle the problem. To summarise the differences in the timeout properties for the three technology stacks:
    - ASMX: The timeout value applies only to the execution time of your request on the server. The timeout does not consider how long your code might be waiting client side to get a connection.
    - WSE: The timeout value includes both the time to obtain a connection and the time to execute the request. A timeout will not be thrown as an error until an attempt to connect to the server is made. This means a 40-second timeout setting may not throw the error until 60 seconds have passed, at the point the connection to the server is made. If the connection to the server is made, you should be aware that your message will be processed, and you should design for this.
    - WCF: The WCF send timeout is the setting most equivalent to the settings we were looking at previously. Like WSE, the counter for this setting includes the time to get a connection as well as the time to execute on the server. Unlike WSE and ASMX, an error will be thrown as soon as the send timeout has elapsed, measured from when your user code makes the call, regardless of whether we are still waiting for a connection or already have an open connection to the server. To a user, this may appear to give better latency in getting an error response compared to WSE or ASMX.

  • Visual Studio Load Testing using Windows Azure

    - by Tarun Arora
    In my opinion the biggest adoption barrier in performance testing on smaller projects is not the tooling but the high infrastructure and administration cost that comes with this phase of testing. If a reusable solution were possible and infrastructure management weren't as expensive, adoption would certainly spike. It certainly is possible if you bring Visual Studio and Windows Azure into the equation. It is possible to run your test rig in the cloud without getting tangled in SCVMM or Lab Management. All you need is an active Azure subscription, a Windows Azure endpoint-enabled developer workstation running Visual Studio Ultimate on premise, and Windows Azure endpoint-enabled worker roles on Azure compute instances set up to run as test controllers and test agents. My test rig is running SQL Server 2012 and Visual Studio 2012 RC agents. The beauty is that the solution is reusable: you can open the Azure project, change the subscription and certificate, click publish, and *BOOM*, in less than 15 minutes you could have your own test rig running in the cloud. In this blog post I intend to show you how you can use the power of Windows Azure to effectively abstract away the administration cost of infrastructure management and lower the total cost of load and performance testing. As a bonus, I will share a reusable solution that you can use to automate test rig creation for both VS 2010 agents and VS 2012 agents.

    Introduction
    The slide show below should help you understand the high-level details of what we are trying to achieve: Leveraging Azure for Performance Testing.

    Scenario 1 – Running a Test Rig in Windows Azure
    To start off with the basics, in the first scenario I plan to discuss how to:
    - Automate deployment & configuration of Windows Azure worker roles for the Test Controller and Test Agent
    - Automate deployment & configuration of the SQL database on the Test Controller worker role
    - Scale Test Agents on demand
    - Create a Web Performance Test and a simple Load Test
    - Manage Test Controllers right from the Visual Studio on-premise developer workstation
    - View the results of the Load Test
    - Clean up
    - Have the above work in the shape of a reusable solution for both a VS2010 and a VS2012 test rig

    Scenario 2 – The Scaled-Out Test Rig and Sharing Data Using SQL Azure
    A scaled-out version of this implementation would involve multiple test rigs running in the cloud. In this scenario I will show you how to sync the load test databases from these distributed test rigs into one SQL Azure database using Windows Azure Sync. The selling point for this scenario is being able to collate the load test efforts from across the organization into one data store.
    - Deploy multiple test rigs using the reusable solution from scenario 1
    - Set up and configure Windows Azure Sync
    - Test the SQL Azure load test result database created as a result of Windows Azure Sync
    - Clean up
    - Have the above work in the shape of a reusable solution for both a VS2010 and a VS2012 test rig

    The Ingredients
    Though with an active MSDN Ultimate subscription you would already have access to everything and more, you will essentially need the following to try out the scenarios:
    1. Windows Azure subscription
    2. Windows Azure Storage – blob storage
    3. Windows Azure Compute – worker role
    4. SQL Azure database
    5. SQL Data Sync
    6. Windows Azure Connect – endpoints
    7. SQL 2012 Express or SQL 2008 R2 Express
    8. Visual Studio All Agents 2012 or Visual Studio All Agents 2010
    9. A developer workstation set up with Visual Studio 2012 Ultimate or Visual Studio 2010 Ultimate
    10. Visual Studio Load Test Unlimited Virtual User Pack

    Walkthrough
    To set up the test rig in the cloud, the test controller, test agent and SQL Express installers need to be available when the worker role setup starts; the easiest and most efficient way is to upload the required software into Windows Azure blob storage in advance. SQL Express, the test controller and the test agent expose various switches which we can take advantage of, including the quiet install switch. Once all three have been installed, the test controller needs to be registered with the test agents and the SQL database needs to be associated with the test controller. By enabling Windows Azure Connect on the machines in the cloud and on the developer workstation on premise, we create a virtual network amongst the machines, enabling two-way communication. All of the above can be done programmatically; let's see step by step how.
    Scenario 1: Video walkthrough – Leveraging Windows Azure for Performance Testing.
    Scenario 2: Work in progress, watch this space for more.

    Solution
    If you are still reading and are interested in the solution, drop me an email with your Windows Live ID. I'll add you to my TFS Preview project, which has a reusable solution for both VS 2010 and VS 2012 test rigs as well as guidance and demo performance tests.

    Conclusion
    Other posts and resources are available here. Possibilities…. Endless!

  • Defining jUnit Test cases Correctly

    - by Epitaph
    I am new to Unit Testing and therefore wanted to do some practical exercise to get familiar with the jUnit framework. I created a program that implements a String multiplier:
        public String multiply(String number1, String number2)
    In order to test the multiplier method, I created a test suite consisting of the following test cases (with all the needed integer parsing, etc):
        @Test
        public class MultiplierTest {
            Multiplier multiplier = new Multiplier();
            // Test for 2 positive integers
            assertEquals("Result", 5, multiplier.multiply("5", "1"));
            // Test for 1 positive integer and 0
            assertEquals("Result", 0, multiplier.multiply("5", "0"));
            // Test for 1 positive and 1 negative integer
            assertEquals("Result", -1, multiplier.multiply("-1", "1"));
            // Test for 2 negative integers
            assertEquals("Result", 10, multiplier.multiply("-5", "-2"));
            // Test for 1 positive integer and 1 non number
            assertEquals("Result", , multiplier.multiply("x", "1"));
            // Test for 1 positive integer and 1 empty field
            assertEquals("Result", , multiplier.multiply("5", ""));
            // Test for 2 empty fields
            assertEquals("Result", , multiplier.multiply("", ""));
    In a similar fashion, I can create test cases involving boundary cases (considering numbers are int values) or even imaginary values.
    1) But, what should be the expected value for the last 3 test cases above? (a special number indicating error?)
    2) What additional test cases did I miss?
    3) Is the assertEquals() method enough for testing the multiplier method, or do I need other methods like assertTrue(), assertFalse(), assertSame() etc?
    4) Is this the RIGHT way to go about developing test cases? How am I "exactly" benefiting from this exercise?
    5) What should be the ideal way to test the multiplier method? I am pretty clueless here. If anyone can help answer these queries I'd greatly appreciate it. Thank you.
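    For comparison, here is one hedged way the same checks could be laid out so that they actually run under JUnit 4: each scenario as its own @Test method on an instance of the class, with the "what do I expect for bad input?" question answered by choosing an exception as the contract. The method names, the assumption that multiply returns the product as a String, and the choice of NumberFormatException are illustrative assumptions, not the only valid answers to questions 1) and 5).

        import static org.junit.Assert.assertEquals;
        import org.junit.Before;
        import org.junit.Test;

        public class MultiplierTest {

            private Multiplier multiplier;

            @Before
            public void setUp() {
                multiplier = new Multiplier();
            }

            // Each case is its own @Test method, so a failure pinpoints which
            // scenario broke; assertions live inside methods, not at class level.
            @Test
            public void multipliesTwoPositiveIntegers() {
                assertEquals("5", multiplier.multiply("5", "1"));
            }

            @Test
            public void multiplyingByZeroGivesZero() {
                assertEquals("0", multiplier.multiply("5", "0"));
            }

            @Test
            public void handlesMixedAndNegativeSigns() {
                assertEquals("-1", multiplier.multiply("-1", "1"));
                assertEquals("10", multiplier.multiply("-5", "-2"));
            }

            // One answer to "what should the expected value be?" for bad input:
            // decide that the method throws, and make that decision executable.
            @Test(expected = NumberFormatException.class)
            public void rejectsNonNumericInput() {
                multiplier.multiply("x", "1");
            }

            @Test(expected = NumberFormatException.class)
            public void rejectsEmptyInput() {
                multiplier.multiply("5", "");
            }
        }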

  • Test-Drive ASP.NET MVC Review

    - by Ben Griswold
    A few years back I started dallying with test-driven development, but I never fully committed to the practice. This wasn't because I didn't believe in the value of TDD; it was more a matter of not completely understanding how to incorporate "test first" into my everyday development. Back in my web forms days, I could point fingers at the framework for my ignorance and laziness. After all, web forms weren't exactly designed for testability, so who could blame me for not embracing TDD in those conditions, right? But when I switched to ASP.NET MVC, I quickly found myself fresh out of excuses, and it became instantly clear that it was time to get my head around red-green-refactor once and for all, or I would regretfully miss out on one of the biggest selling points the new framework had to offer. I have previously written about how I learned ASP.NET MVC. It was primarily hands-on learning, but I did read a couple of ASP.NET MVC books along the way. The books I read dedicated a chapter or two to TDD, and they certainly addressed the benefits of TDD and how MVC was designed with testability in mind, but TDD was merely an afterthought compared to, well, teaching one how to code the model, view and controller. This approach made some sense, and I learned a bunch about MVC from those books, but when it came to TDD the books were just a teaser and an opportunity missed.
    But then I got lucky – Jonathan McCracken contacted me and asked if I'd review his book, Test-Drive ASP.NET MVC, and it was just what I needed to get over the TDD hump. As the title suggests, Test-Drive ASP.NET MVC takes a different approach to learning MVC as it focuses on testing right from the very start. McCracken wastes no time and swiftly familiarizes us with the framework by building out a trivial Quote-O-Matic application, and then dedicates the better part of his book to testing first – first by explaining TDD and then coding a full-featured Getting Organized application inspired by David Allen's popular book, Getting Things Done. If you are a learn-by-example kind of coder (like me), you will instantly appreciate and enjoy McCracken's style – it's fast-moving, pragmatic and focused on only the most relevant information required to get you going with ASP.NET MVC and TDD.
    The book continues with the test-first theme, but McCracken moves away from the sample application and incorporates other practical skills like persisting models with NHibernate, leveraging Inversion of Control with the IControllerFactory and building a RESTful web service. What I most appreciated about this section was McCracken's use of and praise for open source libraries like Rhino Mocks, SQLite and StructureMap (to name just a few) and productivity tools like ReSharper, Web Platform Installer and ASP.NET SQL Server Setup Wizard. McCracken's emphasis on real-world, pragmatic development was clearly demonstrated in every tool choice, straightforward code block and developer tip. Whether one is already familiar with the tools and tips or not, McCracken's thought process is easily understood and appreciated.
    The final section of the book walks the reader through security and deployment – everything from error handling and logging with ELMAH, to ASP.NET Health Monitoring, to using MSBuild with automated builds, to the deployment of ASP.NET MVC to various web environments. These chapters, like those prior, offer enough information and explanation to simply help you get the job done.
Do I believe Test-Drive ASP.NET MVC will turn you into an expert MVC developer overnight?  Well, no.  I don’t think any book can make that claim.  If that were possible, I think book list prices would skyrocket!  That said, Test-Drive ASP.NET MVC provides a solid foundation and a unique (and dare I say necessary) approach to learning ASP.NET MVC.  Along the way McCracken shares loads of very practical software development tips and references numerous tools and libraries. The bottom line is it’s a great ASP.NET MVC primer – if you’re new to ASP.NET MVC it’s just what you need to get started.  Do I believe Test-Drive ASP.NET MVC will give you everything you need to start employing TDD in your everyday development?  Well, I used to think that learning TDD required a lot of practice and, if you’re lucky enough, the guidance of a mentor or coach.  I used to think that one couldn’t learn TDD from a book alone. Well, I’m still no pro, but I’m testing first now and Jonathan McCracken and his book, Test-Drive ASP.NET MVC, played a big part in making this happen.  If you are an MVC developer and a TDD newb, Test-Drive ASP.NET MVC is just the book for you.

  • Test descriptions/name, say what the test is? or what it means when it fails?

    - by xenoterracide
    The API doc for Test::More::ok is ok($got eq $expected, $test_name); right now, in one of my apps, I have $test_name describe what the test is testing. So, for example, in one of my tests I have set this to 'filename exists'. What I realized after I got a bug report recently is that the only time I ever see this message is when the test is failing; and if the test is failing, that means the file doesn't exist. In your opinion, should these $test_name's say what the test means if it succeeds? What it means if it fails? Or should they say something else? Please explain why.
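    The same tension shows up in other test frameworks. As a rough JUnit-flavoured illustration (the class name and file path below are invented for the example, and this is only one convention, not the answer to the question), one option is to phrase the test name as the expected behaviour and the message as what it means when it fails, so both readings are covered:

        import static org.junit.Assert.assertTrue;

        import java.io.File;

        import org.junit.Test;

        public class ExportJobTest {

            // The name reads as the expectation; the message reads as the diagnosis
            // when the test goes red.
            @Test
            public void exportShouldLeaveAFileOnDisk() {
                File exported = new File("out/report.csv"); // hypothetical path
                assertTrue("export file " + exported + " is missing, so the export never ran or wrote elsewhere",
                        exported.exists());
            }
        }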

  • Can you recommend a good test plan template?

    - by Ethel Evans
    Can you recommend a good test plan template for an agile testing team? I know there are templates for testing on the web and have already looked at some found by search engines, but I could really use something lightweight and something that has already been tried by skilled testers and is known to work well. Many templates I've seen give me the feeling that writing test documents is expected to be a third of the work that those testers are doing, but my team really prefers to use less documentation and more actual test writing. We use a wiki for documentation, so an approach that lends itself to living documents would be great. My hope is that using a more structured approach to test planning will increase the usefulness of my test plan while reducing the effort to create it by allowing me to think about the tests, and not the format and structure of the plan. My workplace does not have something already on hand, so whatever I start doing might be adopted by the company.

  • Can we generate multiple coverage reports using Hudson Emma plugin

    - by Subhashish
    We run both unit (junit) and system (fit) tests on instrumented code in our build. The consolidated coverage report for both is generated as part of the build itself. We then feed the unit test coverage report to the Hudson Emma plugin, configure benchmark numbers and things work nicely. Is it possible to also feed in the system test coverage report separately to the same plugin so that we can get that report and configure benchmarks for that as well? I know there is a workaround of creating a downstream project for the latter activity but it would be good to be able to do both in the same build.

  • I want to tell the VC++ Compiler to compile all code. Can it be done?

    - by KGB
    I am using VS2005 VC++ for unmanaged C++. I have VSTS and am trying to use the code coverage tool to accomplish two things with regard to unit tests: (1) see how much of my referenced code under test is getting executed, and (2) see how many methods of my code under test (if any) are not unit tested at all. Setting up the VSTS code coverage tool and accomplishing task #1 was straightforward. However, #2 has been a surprising challenge for me. Here is my test code.
        class CodeCoverageTarget
        {
        public:
            std::string ThisMethodRuns() { return "Running"; }
            std::string ThisMethodDoesNotRun() { return "Not Running"; }
        };

        #include <iostream>
        #include "CodeCoverageTarget.h"
        using namespace std;

        int main()
        {
            CodeCoverageTarget cct;
            cout << cct.ThisMethodRuns() << endl;
        }
    When both methods are defined within the class as above, the compiler automatically eliminates ThisMethodDoesNotRun() from the obj file. If I move its definition outside the class, then it is included in the obj file and the code coverage tool shows it has not been exercised at all. Under most circumstances I want the compiler to do this elimination for me, but for the code coverage tool it defeats a significant portion of the value (e.g. finding untested methods). I have tried a number of things to tell the compiler to stop being smart for me and compile everything, but I am stumped. It would be nice if the code coverage tool compensated for this (I suppose by scanning the source and matching it up with the linker output), but I didn't find anything to suggest it has a special mode to be turned on. Am I totally missing something simple here, or is this not possible with the VC++ compiler + VSTS code coverage tool? Thanks in advance, KGB

  • Step by Step screencasts to do Behavior Driven Development on WCF and UI using xUnit

    - by oazabir
    I am trying to encourage my team to get into Behavior Driven Development (BDD), so I made two quick video tutorials to show how BDD can be done from the early requirement collection stage to the late integration tests. The first explains how to break user stories into behaviors, and how developers and test engineers then take the behavior specs and write a WCF service and its unit tests in parallel, eventually integrating the WCF service and running the integration tests. It introduces how mocking is done using the Moq library. Moreover, it shows how you can write a test once and run it as both a unit test and an integration test at the flip of a config setting. Watch the screencast here: Doing BDD with xUnit, Subspec and on a WCF Service
    Warning: you might hear some noise in the audio in some places; something is wrong with the audio bit rate. I suggest you let the video download for a while and then play it. If you still get noise, go back a couple of seconds and then resume play; that eliminates the noise.
    The next video tutorial is about using BDD for automated UI tests. It shows how test engineers can take behaviors and then write tests that exercise a prototype UI in isolation (just like the service contract), to ensure the prototype conforms to the expected behaviors, while developers write the real code and build the real product in parallel. When the real thing is done, the same tests can be run against it to ensure the agreed behaviors are satisfied. I have used WatiN to automate the UI and test it for the expected behaviors. Doing BDD with xUnit and WatiN on a ASP.NET webform
    Hope you like it!

  • Mock the window.setTimeout in a Jasmine test to avoid waiting

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/21/mock-the-window.settimeout-in-a-jasmine-test-to-avoid-waiting.aspx
    Jasmine has a clock mocking feature, but I was unable to make it work in a function that I'm calling and want to test. The example only shows using clock for a setTimeout in the spec tests and I couldn't find a good example. Here is my current and slightly limited approach.
    If we have a method we want to test:
        var test = function(){
            var self = this;
            self.timeoutWasCalled = false;
            self.testWithTimeout = function(){
                window.setTimeout(function(){
                    self.timeoutWasCalled = true;
                }, 6000);
            };
        };
    Here's my testing code:
        var realWindowSetTimeout = window.setTimeout;
        describe('test a method that uses setTimeout', function(){
            var testObject;
            beforeEach(function () {
                // force setTimeout to be called right away, no matter what time they specify
                jasmine.getGlobal().setTimeout = function (funcToCall, millis) {
                    funcToCall();
                };
                testObject = new test();
            });
            afterEach(function() {
                jasmine.getGlobal().setTimeout = realWindowSetTimeout;
            });
            it('should call the method right away', function(){
                testObject.testWithTimeout();
                expect(testObject.timeoutWasCalled).toBeTruthy();
            });
        });
    I got a good pointer from Andreas in this StackOverflow question. This would also work for window.setInterval. Other possible approaches:
    - create a wrapper module of setTimeout and setInterval methods that can be mocked. This can be mocked with RequireJS or passed into the constructor.
    - pass the window.setTimeout function into the method (this could get messy)

  • Templated function with two type parameters fails compile when used with an error-checking macro

    - by SirPentor
    Because someone in our group hates exceptions (let's not discuss that here), we tend to use error-checking macros in our C++ projects. I have encountered an odd compilation failure when using a templated function with two type parameters. There are a few errors (below), but I think the root cause is a warning: warning C4002: too many actual parameters for macro 'BOOL_CHECK_BOOL_RETURN'. Probably best explained in code:
        #include "stdafx.h"

        template<class A, class B>
        bool DoubleTemplated(B & value) { return true; }

        template<class A>
        bool SingleTemplated(A & value) { return true; }

        bool NotTemplated(bool & value) { return true; }

        #define BOOL_CHECK_BOOL_RETURN(expr) \
            do                               \
            {                                \
                bool __b = (expr);           \
                if (!__b)                    \
                {                            \
                    return false;            \
                }                            \
            } while (false)

        bool call()
        {
            bool thing = true;
            // BOOL_CHECK_BOOL_RETURN(DoubleTemplated<int, bool>(thing));
            // Above line doesn't compile.
            BOOL_CHECK_BOOL_RETURN((DoubleTemplated<int, bool>(thing)));
            // Above line compiles just fine.
            bool temp = DoubleTemplated<int, bool>(thing);
            // Above line compiles just fine.
            BOOL_CHECK_BOOL_RETURN(SingleTemplated<bool>(thing));
            BOOL_CHECK_BOOL_RETURN(NotTemplated(thing));
            return true;
        }

        int _tmain(int argc, _TCHAR* argv[])
        {
            call();
            return 0;
        }
    Here are the errors, when the offending line is not commented out:
        1>------ Build started: Project: test, Configuration: Debug Win32 ------
        1>Compiling...
        1>test.cpp
        1>c:\junk\temp\test\test\test.cpp(38) : warning C4002: too many actual parameters for macro 'BOOL_CHECK_BOOL_RETURN'
        1>c:\junk\temp\test\test\test.cpp(38) : error C2143: syntax error : missing ',' before ')'
        1>c:\junk\temp\test\test\test.cpp(38) : error C2143: syntax error : missing ';' before '{'
        1>c:\junk\temp\test\test\test.cpp(41) : error C2143: syntax error : missing ';' before '{'
        1>c:\junk\temp\test\test\test.cpp(48) : error C2143: syntax error : missing ';' before '{'
        1>c:\junk\temp\test\test\test.cpp(49) : error C2143: syntax error : missing ';' before '{'
        1>c:\junk\temp\test\test\test.cpp(52) : error C2143: syntax error : missing ';' before '}'
        1>c:\junk\temp\test\test\test.cpp(54) : error C2065: 'argv' : undeclared identifier
        1>c:\junk\temp\test\test\test.cpp(54) : error C2059: syntax error : ']'
        1>c:\junk\temp\test\test\test.cpp(55) : error C2143: syntax error : missing ';' before '{'
        1>c:\junk\temp\test\test\test.cpp(58) : error C2143: syntax error : missing ';' before '}'
        1>c:\junk\temp\test\test\test.cpp(60) : error C2143: syntax error : missing ';' before '}'
        1>c:\junk\temp\test\test\test.cpp(60) : fatal error C1004: unexpected end-of-file found
        1>Build log was saved at "file://c:\junk\temp\test\test\Debug\BuildLog.htm"
        1>test - 12 error(s), 1 warning(s)
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
    Any ideas? Thanks!

  • Code Coverage Analysis for Embedded C++ projects

    - by Steve Hawkins
    I have recently started working on a very large C++ project where, after completing 90% of the implementation, the team has determined that it needs to demonstrate 100% branch coverage during testing. The project is hosted on an embedded platform (Green Hills Integrity). I'm looking for suggestions and experiences from others on StackOverflow who have used code coverage products in similar environments. I'm interested in both positive and negative comments regarding these types of tools.

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process: putting your database into a source control system, and running a continuous integration process each time changes are checked in. However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above:
    - Putting some actual integration tests into the CI process (otherwise, they don't really do much, do they!?),
    - Deploying the database changes with a managed, automated approach,
    - Monitoring what you've just put live, to make sure you haven't broken anything.
    This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system.
    In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I would do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn't have any choice!
    Background on Continuous Delivery
    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say, however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references:
    - Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage
    - Versioning Databases – Branching and Merging, by Scott Allen
    In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: "Continuous Delivery is about keeping your application in a state where it is always able to deploy into production." I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
That is possible (and what Martin calls Continuous Deployment). However, again, that’s more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!   Integration Testing Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn’t enormously useful without some sort of test process running. For this we’ll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate’s SQL Test found on http://www.red-gate.com/products/sql-development/sql-test/ or can be downloaded separately from www.tsqlt.org - though I’ll provide a step-by-step guide below for setting this up. Getting tSQLt set up via SQL Test Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu:   Clicking this link will give you the basic SQL Test dialogue: As yet, though we’ve installed the SQL Test product we haven’t yet installed the tSQLt test framework on to any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link:   In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the “Add SQL Cop tests” option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won’t be using them in this particular simple example: Once you’ve clicked on the OK button, the changes described in the dialogue will be made to your database. Some of these are shown in the left-hand-side below: We’ve now installed the framework. However, we haven’t actually created any tests, so this will be the next step. But, before we proceed, we’ve made an update to our database so should, again check this in to source control, adding comments as required:   Also worth a quick check that your build still runs with the new additions!: (And a quick check of the RedGateAppCI database shows that the changes have been made).   Creating and Testing a Unit Test There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I’m going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever but it illustrates the process and hopefully shows the method by which more interesting tests could be set up. Adding Reference Data to our Database To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). 
First of all I need to add some data in to my solitary table – this can be done a number of ways, but I’ll do this in SSMS for simplicity: I then add some reference data to my table: Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right click the table, select Design, then right-click on the first “id” row. Then click on “Set Primary Key”: NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right click on the table (dbo.Email) and selecting the following option:   In the next screen, link the data in the Email table, by selecting it from the list and clicking “save and close”: We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won’t show screenshots for the GitHub side of things – it’s the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo is taking it from). An interesting point to note here, when these changes are committed in SQL Source Control (right-click database and select “Commit Changes to Source Control..”): The display gives a warning about possibly needing a migration script for the “Add Primary Key” step of the changes. This isn’t actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database:   Creating and Running the Test As I mention, the test I’m going to use here is a very simple one - are the email addresses in my reference table valid? This isn’t of course, a full test of email validation (I expect the email addresses I’ve chosen here aren’t really the those of the Fab Four) – but just a very basic check of format used. I’ve taken the relevant SQL from this Stack Overflow article. In SSMS select “SQL Test” from the Tools menu, then click on + New Test: In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example: Click “Create Test”. After closing a couple of subsequent dialogues, you’ll see a dummy script for the test, that needs filling in:   We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I’m going to use here is as below. 
    This needs to be copied and pasted into the query window, to replace the default given by tSQLt:
        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON

            Declare @Output VarChar(max)
            Set @Output = ''

            SELECT @Output = @Output + Email + Char(13) + Char(10)
            FROM dbo.Email
            WHERE email NOT LIKE '%_@__%.__%'

            If @Output > ''
            Begin
                Set @Output = Char(13) + Char(10) + @Output
                EXEC tSQLt.Fail @Output
            End

        END;
    Once this script is entered, hit execute to add the stored procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see output like the following: a green tick to indicate success!
    But of course, what we also need to do is test that this is actually doing something, by showing a failed test. Edit one of the email addresses in your table to an incorrect format. Now, re-run the same SQL Test as before and you'll see the following: Great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.)
    The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).
    Checking that the Tests are Running as Integration Tests
    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the stored procedures for the RedGateAppCI database, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up.
    The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows the following: sadly, a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:
        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>
    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number): superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:
        21-Jun-2013 11:35:19 Build FAILED.
        21-Jun-2013 11:35:19
        21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
        21-Jun-2013 11:35:19 (sqlCI target) ->
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build: it worked!
    Summary
    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to test whether your database updates are working. The next post in this series will focus on automated deployment – we've tested our database changes; how can we now deploy these to target sites?

  • mocha testing for the lazies, single key-press for all possible tests

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like:
        Test. [U]nit, [I]ntegration : i   (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2
        User Interaction. Running "mocha integration\2userint.js" ...
    So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain. Is there something that does this or anything like this automatically? Like reads all the files and asks me which file(s) I want to test. A GUI with checkboxes would be ultimate! but I'll take anything. I'm working in node.js

  • Software, script or a tool to automate managing which tests to run

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like:
        Test. [U]nit, [I]ntegration : i   (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2
        User Interaction. Running "mocha integration\2userint.js" ...
    So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain: I have to update the batch file every time a new file is added or changed. Is there software, a script or a tool that does this automatically, or makes it easier for me to do so? I basically need it to be aware of the test files and ask me which file(s) I want to test. A GUI with checkboxes would be ultimate! but I'll take anything. I'm working in node.js

  • VerifyError When Running jUnit Test on Android 1.6

    - by DKnowles
    Here's what I'm trying to run on Android 1.6:
        package com.healthlogger.test;
        public class AllTests extends TestSuite {
            public static Test suite() {
                return new TestSuiteBuilder(AllTests.class).includeAllPackagesUnderHere().build();
            }
        }
    and:
        package com.healthlogger.test;
        public class RecordTest extends AndroidTestCase {
            /**
             * Ensures that the constructor will not take a null data tag.
             */
            @Test(expected=AssertionFailedError.class)
            public void testNullDataTagInConstructor() {
                Record r = new Record(null, Calendar.getInstance(), "Data");
                fail("Failed to catch null data tag.");
            }
        }
    The main project is HealthLogger. These are run from a separate test project (HealthLoggerTest). HealthLogger and jUnit4 are in HealthLoggerTest's build path. jUnit4 is also in HealthLogger's build path. The class "Record" is located in com.healthlogger. Commenting out the "@Test..." and "Record r..." lines allows this test to run. When they are uncommented, I get a VerifyError exception. I am severely blocked by this; why is it happening?
    EDIT: some info from logcat after the crash:
        E/AndroidRuntime( 3723): Uncaught handler: thread main exiting due to uncaught exception
        E/AndroidRuntime( 3723): java.lang.VerifyError: com.healthlogger.test.RecordTest
        E/AndroidRuntime( 3723): at java.lang.Class.getDeclaredConstructors(Native Method)
        E/AndroidRuntime( 3723): at java.lang.Class.getConstructors(Class.java:507)
        E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping$TestCasePredicate.hasValidConstructor(TestGrouping.java:226)
        E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping$TestCasePredicate.apply(TestGrouping.java:215)
        E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping$TestCasePredicate.apply(TestGrouping.java:211)
        E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping.select(TestGrouping.java:170)
        E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping.selectTestClasses(TestGrouping.java:160)
        E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping.testCaseClassesInPackage(TestGrouping.java:154)
        E/AndroidRuntime( 3723): at android.test.suitebuilder.TestGrouping.addPackagesRecursive(TestGrouping.java:115)
        E/AndroidRuntime( 3723): at android.test.suitebuilder.TestSuiteBuilder.includePackages(TestSuiteBuilder.java:103)
        E/AndroidRuntime( 3723): at android.test.InstrumentationTestRunner.onCreate(InstrumentationTestRunner.java:321)
        E/AndroidRuntime( 3723): at android.app.ActivityThread.handleBindApplication(ActivityThread.java:3848)
        E/AndroidRuntime( 3723): at android.app.ActivityThread.access$2800(ActivityThread.java:116)
        E/AndroidRuntime( 3723): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1831)
        E/AndroidRuntime( 3723): at android.os.Handler.dispatchMessage(Handler.java:99)
        E/AndroidRuntime( 3723): at android.os.Looper.loop(Looper.java:123)
        E/AndroidRuntime( 3723): at android.app.ActivityThread.main(ActivityThread.java:4203)
        E/AndroidRuntime( 3723): at java.lang.reflect.Method.invokeNative(Native Method)
        E/AndroidRuntime( 3723): at java.lang.reflect.Method.invoke(Method.java:521)
        E/AndroidRuntime( 3723): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:791)
        E/AndroidRuntime( 3723): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:549)
        E/AndroidRuntime( 3723): at dalvik.system.NativeStart.main(Native Method)
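    One likely angle to investigate: the Android 1.6 InstrumentationTestRunner is built on JUnit 3, so JUnit 4 annotations such as @Test(expected=...) are not understood, and pulling the separate JUnit 4 jar into the test project's build path is a common trigger for VerifyError when the class is loaded on the device. A hedged sketch of a JUnit 3-style rewrite, in case that is the cause here (the exception type caught below is an assumption about Record's contract, not something the original post states):

        package com.healthlogger.test;

        import java.util.Calendar;

        import android.test.AndroidTestCase;

        import com.healthlogger.Record;

        /**
         * JUnit 3-style test, as expected by the Android 1.6 instrumentation runner:
         * no JUnit 4 annotations and no external junit4 jar on the build path.
         */
        public class RecordTest extends AndroidTestCase {

            public void testNullDataTagInConstructor() {
                try {
                    new Record(null, Calendar.getInstance(), "Data");
                    fail("Expected the constructor to reject a null data tag.");
                } catch (IllegalArgumentException expected) {
                    // expected: the constructor refused the null tag
                    // (substitute whatever exception Record actually throws)
                }
            }
        }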

  • EBS 12.1.1 Test Starter Kit now Available for Oracle Application Testing Suite

    - by Steven Chan
    We've discussed automated testing tools for the E-Business Suite several times on this blog, since testing is such a key part of everyone's implementation lifecycle. An important part of our testing arsenal in E-Business Suite Development is the Oracle Application Testing Suite (OATS), built on the foundation of the e-TEST suite of products acquired from Empirix in 2008. The testing suite comprises:
    1. Oracle Load Testing, for scalability, performance, and load testing
    2. Oracle Functional Testing, for automated functional and regression testing
    3. Oracle Test Manager, for test process management, test execution, and defect tracking
    Oracle Application Testing Suite 9.0 has been supported for use with the E-Business Suite since 2009. I'm very pleased to let you know that our E-Business Suite Release 12.1.1 Test Starter Kit is now available for Oracle Application Testing Suite 9.1. You can download it here: Oracle Application Testing Suite Downloads

  • How to Test and Deploy Applications Faster

    - by rickramsey
    Photo courtesy of mtoleric via Flickr.
    If you want to test and deploy your applications much faster than you could before, take a look at these OTN resources. They won't disappoint.
    Developer Webinar: How to Test and Deploy Applications Faster - April 10. Our second developer webinar, conducted by engineers Eric Reid and Stephan Schneider, will focus on how the zones and ZFS filesystem in Oracle Solaris 11 can simplify your development environment. This is a cool topic because it will show you how to test and deploy apps in their likely real-world environments much quicker than you could before. April 10 at 9:00 am PT.
    Video Interview: Tips for Developing Faster Applications with Oracle Solaris 11 Express. We recorded this a while ago, and it talks about the Express version of Oracle Solaris 11, but most of it applies to the production release. George Drapeau, who manages a group of engineers whose sole mission is to help customers develop better, faster applications for Oracle Solaris, shares some tips and tricks for improving your applications:
    - How ZFS and Zones create the perfect developer sandbox.
    - What's the best way for a developer to use DTrace.
    - How Crossbow's network bandwidth controls can improve an application's performance.
    To borrow the classic Ed Sullivan accolade, it's a "really good show."
    White Paper: What's New For Application Developers. Excellent in-depth analysis of exactly how the capabilities of Oracle Solaris 11 help you test and deploy applications faster. It covers:
    - The tools in Oracle Solaris Studio and what you can do with each of them, plus source code management, scripting, and shells.
    - How to replicate your development, test, and production environments, and how to make sure your application runs as it should in those different environments.
    - How to migrate Oracle Solaris 10 applications to Oracle Solaris 11.
    - How to find and diagnose faults in your application.
    - And lots, lots more.
    – Rick

  • Announcing SO-Aware Test Workbench

    - by gsusx
    Yesterday was a big day for Tellago Studios. After a few months of heads-down work, we announced the release of the SO-Aware Test Workbench, a tool which brings sophisticated performance testing and test visualization capabilities to the WCF world. This work has been the result of the feedback received from many of our SO-Aware and Tellago customers about how to improve WCF testing. More importantly, with the SO-Aware Test Workbench we are trying to address what has been one of the biggest challenges... (read more)
