Search Results

Search found 7802 results on 313 pages for 'unit tests'.


  • Including latest test result info in the notification email with Hudson

    - by Roberto
    I have a project with a lot of failing tests, so it would be great for me to receive by email the number of failed tests compared with the latest build. What I need is just the info that appears on the project's page behind the test results link: Latest Test Result (10 failures / -2). Is this possible? I've already tried the email-ext plugin, but it doesn't give me that info (I can get the list of failing tests with their output etc., but I really just need the info above). Any ideas?

  • Not able to call the method asynchronously in the unit test

    - by user43838
    Hi everyone, I am trying to call a method, passing it an object called parameters.

        public void LoadingDataLockFunctionalityTest()
        {
            DataCache_Accessor target = DataCacheTest.getNewDataCacheInstance();
            target.itemsLoading.Add("WebFx.Caching.TestDataRetrieverFactorytestsync", true);
            DataParameters parameters = new DataParameters("WebFx.Core",
                "WebFx.Caching.TestDataRetrieverFactory", "testsync");
            parameters.CachingStrategy = CachingStrategy.TimerDontWait;
            parameters.CacheDuration = 0;
            string data = (string)target.performGetForTimerDontWaitStrategy(parameters);
            TestSyncDataRetriever.SimulateLoadingForFiveSeconds = true;
            Thread t1 = new Thread(delegate()
            {
                string s = (string)target.performGetForTimerDontWaitStrategy(parameters);
                Console.WriteLine(s ?? String.Empty);
            });
            t1.Start();
            t1.Join();
            Thread.Sleep(1000);
            ReaderWriterLockSlim rw = DataCache_Accessor.GetLoadingLock(parameters);
            Assert.IsTrue(rw.IsWriteLockHeld);
            Assert.IsNotNull(data);
        }

    My test is failing all the time and I am not able to step through the method. Can someone please point me in the right direction? Thanks

  • Auto Re-Running of Tests that fail

    - by Tangopop
    I have a set of Selenium/MbUnit tests that work fine, but they tend to take a while to run (over 4 hours). A problem I am finding is that about 1 in 20 tests seems to time out when running. I have confirmed the Selenium Grid is working and the Selenium RCs are all fine; it just seems to be a quirk of the system. What is really annoying, though, is that if I run these tests again they will usually pass. What I want to know is whether there is a way for me to auto re-run the tests (probably in the code) if a particular type of exception is caught... I have attempted to put a few lines of code in the catch statement, but I know this is a very hacky way of re-running the tests. Here is the code:

        catch (AssertionException e)
        {
            if (e.Message() == "TimeOut") // Something similar to this
            {
                this.Test();
            }
            else
            {
                verificationErrors.AppendLine(browserList[i] + " :: " + e.Message);
            }
        }

    Any suggestions?
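
    As a side note, one way to automate the re-run is to wrap the body of a flaky test in a small retry helper instead of recursing from the catch block. This is only a minimal sketch under the assumption that the timeouts surface as exceptions whose message identifies them; the helper name and the message check are illustrative, not from the original post:

        using System;

        public static class Retry
        {
            // Runs the test body up to maxAttempts times, retrying only when the
            // caught exception looks like a timeout; any other failure is rethrown.
            public static void Run(Action testBody, int maxAttempts)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        testBody();
                        return; // passed
                    }
                    catch (Exception e)
                    {
                        bool looksLikeTimeout = e.Message.Contains("TimeOut") ||
                                                e.Message.Contains("Timed out");
                        if (!looksLikeTimeout || attempt >= maxAttempts)
                        {
                            throw; // genuine failure, or out of retries
                        }
                    }
                }
            }
        }

        // Usage inside a test method:
        // Retry.Run(() => this.Test(), 3);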

  • Code generation tool, to create C# adapter classes for unit testing?

    - by RyBolt
    I know I wouldn't need this with Typemock; however, with something like Moq, I need to use the adapter pattern to enable the creation of mocks via interfaces for code I don't control. For example, TcpClient is a .NET class, so I use the adapter pattern to enable mocking of this object, because I need an interface for that class. I then produce an interface ITcpClient, which can then be implemented via a TcpClientAdapter class, which is just a plain vanilla adapter pattern implementation. I am looking for a tool to do this automatically (creation of the interface and the adapter); I would think there is one out there somewhere? (Or is everyone just hand coding these?)
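
    For reference, the hand-written version being described looks roughly like this minimal sketch; the interface covers only a few TcpClient members, and a real adapter would mirror whichever members the code under test actually uses:

        using System.Net.Sockets;

        // Interface extracted so tests can substitute a mock or fake.
        public interface ITcpClient
        {
            void Connect(string host, int port);
            NetworkStream GetStream();
            void Close();
        }

        // Plain adapter that forwards every call to the real TcpClient.
        public class TcpClientAdapter : ITcpClient
        {
            private readonly TcpClient _inner;

            public TcpClientAdapter(TcpClient inner)
            {
                _inner = inner;
            }

            public void Connect(string host, int port) { _inner.Connect(host, port); }
            public NetworkStream GetStream() { return _inner.GetStream(); }
            public void Close() { _inner.Close(); }
        }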

  • How to prevent unit test from using util from test project?

    - by calucier
    I am using Eclipse and I have two projects, project1 and project1-test. Below is the example layout of my projects:

        project1
        -src
        --my.package
        ----MyClass.java
        --my.package.util
        ----util.java

        project1-test
        -src
        --my.package
        ----MyClassTest.java
        --my.package.util
        ----util.java

    MyClass.java makes a static call to util.java in project1. MyClassTest.java is testing MyClass.java. When the test class runs, it fails and complains that MyClass.java is referencing a method in util.java that doesn't exist. Under project1, the method being referenced exists in util.java, but under project1-test it doesn't. When I run MyClassTest.java, the util.java that is being referenced from MyClass.java is the one from project1-test when it should be the one from project1. Is there some way to make MyClass.java not reference util.java from project1-test when running MyClassTest.java?

  • Can I use breakpoints (as when debugging) while unit testing?

    - by Richard77
    Hello, I'm walking through the FrontStore series tutorial on TDD in MVC (Part 3 by Rob Conery/ASP.NET). The test I'm concerned with is CatalogRepository_Each_Category_Contains_5_Products(). Until I got to that test, everything was working fine. Now, I've gone through every line that makes up this test (including the test itself, the TestCatalogRepository, ...). I've also compared my code to Rob's, but the test keeps failing. I also checked the source code from CodePlex; that test was not there. Now, I wonder if I can put a breakpoint somewhere to check the local values as the test is being executed? If not, is there something similar? Thanks for helping.
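
    As a general-purpose fallback (independent of which runner launches the test), a test can ask for a debugger explicitly via System.Diagnostics. This is a minimal sketch; the surrounding class is invented and only the debugger calls matter:

        using System.Diagnostics;

        public class CatalogRepositoryTests
        {
            // Body of the failing test, with an explicit debugger hook added.
            public void CatalogRepository_Each_Category_Contains_5_Products()
            {
                // Prompt to attach a debugger if none is attached, then break here,
                // so locals can be inspected even when the runner was not started
                // from the IDE's own "debug" command.
                if (!Debugger.IsAttached)
                {
                    Debugger.Launch();
                }
                Debugger.Break();

                // ...arrange / act / assert as in the tutorial...
            }
        }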

  • Unit tests logged (or run) multiple times

    - by HeavyWave
    I have this simple test:

        protected readonly ILog logger =
            LogManager.GetLogger(MethodBase.GetCurrentMethod().ReflectedType);
        private static int count = 0;

        [Test]
        public void TestConfiguredSuccessfully()
        {
            logger.Debug("in test method" + count++);
        }

    log4net is set up like this:

        [TestFixtureSetUp]
        public void SetUp()
        {
            log4net.Config.BasicConfigurator.Configure();
        }

    The problem is that if I run this test in NUnit once, I get the output (as expected):

        1742 [TestRunnerThread] DEBUG Tests.TestSomthing (null) - in test method0

    But if I press RUN in nunit.exe again (or more times), I get the following:

        1742 [TestRunnerThread] DEBUG Tests.TestSomthing (null) - in test method1
        1742 [TestRunnerThread] DEBUG Tests.TestSomthing (null) - in test method1

    And so on (if I run it 5 times, I'll get 5 repeated lines). Now, if I run the same test alone from ReSharper, the output is fine and does not repeat. However, if I run this test alongside 2 other tests in the same class, the output is repeated three times. I am totally confused. What the hell is going on here?
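
    For what it's worth, each call to BasicConfigurator.Configure() adds another console appender to the root logger, which is the usual cause of duplicated lines when fixture setup runs more than once. A minimal sketch of guarding the setup (the helper class and static flag are illustrative, not from the question):

        using log4net.Config;

        public static class LoggingSetup
        {
            private static bool _configured;

            // Call this from every fixture's setup; only the first call actually
            // configures log4net, so repeated runs don't stack extra appenders.
            public static void EnsureConfigured()
            {
                if (_configured)
                {
                    return;
                }
                BasicConfigurator.Configure();
                _configured = true;
            }
        }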

  • Your thoughts on Best Practices for Scientific Computing?

    - by John Smith
    A recent paper by Wilson et al. (2014) pointed out 24 best practices for scientific programming. It's worth a look. I would like to hear opinions about these points from programmers experienced in scientific data analysis. Do you think this advice is helpful and practical? Or is it good only in an ideal world?

    Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, Haddock SHD, Huff KD, Mitchell IM, Plumbley MD, Waugh B, White EP, Wilson P (2014) Best Practices for Scientific Computing. PLoS Biol 12:e1001745. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745

    Box 1. Summary of Best Practices
    1. Write programs for people, not computers. (a) A program should not require its readers to hold more than a handful of facts in memory at once. (b) Make names consistent, distinctive, and meaningful. (c) Make code style and formatting consistent.
    2. Let the computer do the work. (a) Make the computer repeat tasks. (b) Save recent commands in a file for re-use. (c) Use a build tool to automate workflows.
    3. Make incremental changes. (a) Work in small steps with frequent feedback and course correction. (b) Use a version control system. (c) Put everything that has been created manually in version control.
    4. Don't repeat yourself (or others). (a) Every piece of data must have a single authoritative representation in the system. (b) Modularize code rather than copying and pasting. (c) Re-use code instead of rewriting it.
    5. Plan for mistakes. (a) Add assertions to programs to check their operation. (b) Use an off-the-shelf unit testing library. (c) Turn bugs into test cases. (d) Use a symbolic debugger.
    6. Optimize software only after it works correctly. (a) Use a profiler to identify bottlenecks. (b) Write code in the highest-level language possible.
    7. Document design and purpose, not mechanics. (a) Document interfaces and reasons, not implementations. (b) Refactor code in preference to explaining how it works. (c) Embed the documentation for a piece of software in that software.
    8. Collaborate. (a) Use pre-merge code reviews. (b) Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems. (c) Use an issue tracking tool.

    I'm relatively new to serious programming for scientific data analysis. When I tried to write code for pilot analyses of some of my data last year, I encountered a tremendous number of bugs, both in my code and in my data. Bugs and errors had been around me all the time, but this time it was somewhat overwhelming. I managed to crunch the numbers at last, but I thought I couldn't put up with this mess any longer. Some action had to be taken. Without a sophisticated guide like the article above, I started to adopt a "defensive style" of programming. A book titled "The Art of Readable Code" helped me a lot. I deployed meticulous input validation or assertions for every function, renamed a lot of variables and functions for better readability, and extracted many subroutines as reusable functions. Recently, I introduced Git and SourceTree for version control. At the moment, because my co-workers are much more reluctant about these issues, the collaboration practices (8a, b, c) have not been introduced. Actually, as the authors admitted, because all of these practices take some amount of time and effort to introduce, it may be generally hard to persuade reluctant collaborators to comply with them.

    I think I'm asking for your opinions because I still suffer from many bugs despite all my effort on many of these practices. Bug fixing may be, or should be, faster than before, but I couldn't really measure the improvement. Moreover, much of my time has been invested in defence, meaning that I haven't actually done much data analysis (offence) these days. Where is the point I should stop at, in terms of productivity?

    I've already deployed: 1a,b,c, 2a, 3a,b,c, 4b,c, 5a,d, 6a,b, 7a,7b
    I'm about to have a go at: 5b,c
    Not yet: 2b,c, 4a, 7c, 8a,b,c

    (I could not really see the advantage of using GNU make (2c) for my purpose. Could anyone tell me how it helps my work with MATLAB?)

  • Tests for hard drive health

    - by Samik R
    I have a 5-year-old hard drive (bought new at the time), but it was sitting in my closet for 5 years, unused. I have just started using it, and it seems to be making a whirring sound (rather distinct from the other noises like fans etc.). I ran a few diagnostic tests, like Seagate's SeaTools and the SMART test, and a few generic tests, and all passed. Should I be concerned? Is there any other test that I should run? It's an internal IDE WD 5400 RPM drive, being used in a desktop which is itself pretty high-end (AMD Phenom II X6 1100T, AMD Radeon GPU etc.), but it would be used rather occasionally to begin with (avg. 1-2 hrs. per day). Thanks for any pointers.

  • Why is the unit test useful when the view is not unit testable in MVVM?

    - by BigTiger
    Why is the unit test useful when the view is not unit testable in MVVM? In MVVM, we have the models, view-models, and views. The claimed advantage is that MVVM makes the models and view-models unit testable. But all three parts belong to the same application. If the views are not unit testable, why test the other two? Will unit testing the other two while leaving one untested improve quality? Removing all the code-behind from the views sounds weird to me. What if the code-behind only handles pure UI operations?
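
    As an illustration of what testing just the view-model buys you, here is a minimal sketch; the OrderViewModel class and its rule are invented for the example and are not from the question:

        using System.ComponentModel;

        // A tiny view-model with presentation logic but no dependency on any view.
        public class OrderViewModel : INotifyPropertyChanged
        {
            private int _quantity;

            public event PropertyChangedEventHandler PropertyChanged;

            public int Quantity
            {
                get { return _quantity; }
                set
                {
                    _quantity = value;
                    Notify("Quantity");
                    Notify("CanSubmit");
                }
            }

            // The rule the view binds to; a unit test can pin this down
            // without ever instantiating a view.
            public bool CanSubmit
            {
                get { return _quantity > 0; }
            }

            private void Notify(string property)
            {
                if (PropertyChanged != null)
                {
                    PropertyChanged(this, new PropertyChangedEventArgs(property));
                }
            }
        }

        // Example assertions (NUnit/xUnit style):
        // var vm = new OrderViewModel { Quantity = 0 };
        // Assert.IsFalse(vm.CanSubmit);
        // vm.Quantity = 3;
        // Assert.IsTrue(vm.CanSubmit);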

  • Is it possible to compile IronRuby code to a .NET assembly (EXE or DLL)

    - by Chris Ammerman
    My scenario consists of the following points:

    • I have a packaged software product I am developing in C#
    • Since it is a packaged product, the public interfaces of the assemblies need to be tightly controlled...
    • All assemblies are strong-named
    • Any classes that don't absolutely have to be "public" are "internal"
    • I want to write unit tests for those "internal" classes, since they are the bulk of the code
    • And finally.... I want to try writing the unit tests in Ruby.

    Since the unit tests would be external to the assembly containing the code under test, the assemblies under test would each need to have an "InternalsVisibleTo" attribute specifying the name of the unit test assembly. Which of course would mean that the Ruby unit tests would have to compile down to a .NET assembly so they can be given access in this way.

    Can this be done? If so, how? All I can find on the web about "compiling IronRuby" is about building the actual IronRuby runtime from source.
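
    For context, the InternalsVisibleTo attribute mentioned above is declared in the assembly under test, typically in AssemblyInfo.cs; a minimal sketch, with the test assembly name and public key as placeholders:

        using System.Runtime.CompilerServices;

        // For an unsigned test assembly, the simple name is enough:
        [assembly: InternalsVisibleTo("MyProduct.Tests")]

        // Because the assemblies here are strong-named, the friend assembly must
        // also be strong-named and referenced with its full public key:
        // [assembly: InternalsVisibleTo("MyProduct.Tests, PublicKey=0024000004800000...")]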

  • Why does a 'uses' unit disappear when I add a new unit?

    - by TridenT
    I have a unit test project for my application using the DUnit framework. This project has a unit surrounded by an $IFDEF to output test results to an XML file instead of the GUI or just the command line. The XML_OUTPUT define is enabled by switching the build configuration.

        program DelphiCodeToDoc_Tests;

        uses
          TestFramework,
          TextTestRunner,
          Sysutils,
          Forms,
          GUITestRunner,
        {$IFDEF XML_OUTPUT}
          XmlTestRunner2 in 'DUnit_addon\XmlTestRunner2.pas',
        {$ENDIF}
          DCTDSetupTests in 'IntegrationTests\DCTDSetupTests.pas',
          ...

    This works perfectly. The issue starts when I add a new unit to this project from the IDE (a new unit via File > New > Unit). The test project is now:

        uses
          TestFramework,
          TextTestRunner,
          Sysutils,
          Forms,
          GUITestRunner,
          DCTDSetupTests in 'IntegrationTests\DCTDSetupTests.pas',
          ...
          MyNewUnit in 'IntegrationTests\MyNewUnit.pas';

    As you can see, the XML_OUTPUT conditional has disappeared... Each time I add a unit, the Delphi IDE deletes it. Do you know why, and how I can avoid it?

  • Custom validation works in development but not in unit test

    - by Geolev
    I want to validate that at least one of two columns has a value in my model. I found somewhere on the web that I could create a custom validator as follows:

        # Check for the presence of one or another field:
        # :validates_presence_of_at_least_one_field :last_name, :company_name - would require either last_name or company_name to be filled in
        # also works with arrays
        # :validates_presence_of_at_least_one_field :email, [:name, :address, :city, :state] - would require email or a mailing type address
        module ActiveRecord
          module Validations
            module ClassMethods
              def validates_presence_of_at_least_one_field(*attr_names)
                msg = attr_names.collect {|a| a.is_a?(Array) ? " ( #{a.join(", ")} ) " : a.to_s}.join(", ") +
                      "can't all be blank. At least one field must be filled in."
                configuration = { :on => :save, :message => msg }
                configuration.update(attr_names.extract_options!)

                send(validation_method(configuration[:on]), configuration) do |record|
                  found = false
                  attr_names.each do |a|
                    a = [a] unless a.is_a?(Array)
                    found = true
                    a.each do |attr|
                      value = record.respond_to?(attr.to_s) ? record.send(attr.to_s) : record[attr.to_s]
                      found = !value.blank?
                    end
                    break if found
                  end
                  record.errors.add_to_base(configuration[:message]) unless found
                end
              end
            end
          end
        end

    I put this in a file called lib/acs_validator.rb in my project and added "require 'acs_validator'" to my environment.rb. This does exactly what I want. It works perfectly when I manually test it in the development environment, but when I write a unit test it breaks my test environment.

    This is my unit test:

        require 'test_helper'

        class CustomerTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end

          test "customer not valid" do
            puts "customer not valid"
            customer = Customer.new
            assert !customer.valid?
            assert customer.errors.invalid?(:subdomain)
            assert_equal "Company Name and Last Name can't both be blank.", customer.errors.on(:contact_lname)
          end
        end

    This is my model:

        class Customer < ActiveRecord::Base
          validates_presence_of :subdomain
          validates_presence_of_at_least_one_field :customer_company_name, :contact_lname,
            :message => "Company Name and Last Name can't both be blank."
          has_one :service_plan
        end

    When I run the unit test, I get the following error:

        DEPRECATION WARNING: Rake tasks in vendor/plugins/admin_data/tasks, vendor/plugins/admin_data/tasks, and vendor/plugins/admin_data/tasks are deprecated. Use lib/tasks instead. (called from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/tasks/rails.rb:10)
        Couldn't drop acs_test : #<ActiveRecord::StatementInvalid: PGError: ERROR: database "acs_test" is being accessed by other users
        DETAIL: There are 1 other session(s) using the database.
        : DROP DATABASE IF EXISTS "acs_test">
        acs_test already exists
        NOTICE: CREATE TABLE will create implicit sequence "customers_id_seq" for serial column "customers.id"
        NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "customers_pkey" for table "customers"
        NOTICE: CREATE TABLE will create implicit sequence "service_plans_id_seq" for serial column "service_plans.id"
        NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "service_plans_pkey" for table "service_plans"
        /usr/bin/ruby1.8 -I"lib:test" "/usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/unit/customer_test.rb" "test/unit/service_plan_test.rb" "test/unit/helpers/dashboard_helper_test.rb" "test/unit/helpers/customers_helper_test.rb" "test/unit/helpers/service_plans_helper_test.rb"
        /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.8/lib/active_record/base.rb:1994:in `method_missing_without_paginate': undefined method `validates_presence_of_at_least_one_field' for #<Class:0xb7076bd0> (NoMethodError)
            from /usr/lib/ruby/gems/1.8/gems/will_paginate-2.3.12/lib/will_paginate/finder.rb:170:in `method_missing'
            from /home/george/projects/advancedcomfortcs/app/models/customer.rb:3
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:158:in `require'
            from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:265:in `require_or_load'
            from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:224:in `depend_on'
            from /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.8/lib/active_support/dependencies.rb:136:in `require_dependency'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:414:in `load_application_classes'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:413:in `each'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:413:in `load_application_classes'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:411:in `each'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:411:in `load_application_classes'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:197:in `process'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:113:in `send'
            from /usr/lib/ruby/gems/1.8/gems/rails-2.3.8/lib/initializer.rb:113:in `run'
            from /home/george/projects/advancedcomfortcs/config/environment.rb:9
            from ./test/test_helper.rb:2:in `require'
            from ./test/test_helper.rb:2
            from ./test/unit/customer_test.rb:1:in `require'
            from ./test/unit/customer_test.rb:1
            from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5:in `load'
            from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5
            from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5:in `each'
            from /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake/rake_test_loader.rb:5
        rake aborted!
        Command failed with status (1): [/usr/bin/ruby1.8 -I"lib:test" "/usr/lib/ru...]
        (See full trace by running task with --trace)

    It seems to have stepped on will_paginate somehow. Does anyone have any suggestions? Is there another way to do the validation I'm attempting to do?

    Thanks,
    George

  • tools for testing vim plugins

    - by intuited
    I'm looking for some tools for testing vim scripts: either vim scripts that do unit/functional testing, or classes for some other library (e.g. Python's unittest module) that make it convenient to run vim with parameters that cause it to do some tests on its environment, and determine from the output whether or not a given test passed.

    I'm aware of a couple of vim scripts that do unit testing, but they're sort of vaguely documented and may or may not actually be useful:

    • vim-unit: purports "To provide vim scripts with a simple unit testing framework and tools". Its first and only version (v0.1) was released in 2004, and the documentation doesn't mention whether or not it works reliably, other than to state that it is "fare [sic] from finished".

    • unit-test.vim: This one also seems pretty experimental, and may not be particularly reliable. It may have been abandoned or back-shelved: the last commit was in 2009-11 (about 6 months ago) and no tagged revisions have been created (i.e. no releases).

    So information from people who are using one of those two existing modules, and/or links to other, more clearly usable, options, is very welcome.

  • Integrating Hudson with MS Test?

    - by hangy
    Is it possible to integrate Hudson with MSTest? I am setting up a smaller CI server on my development machine with Hudson right now, just so that I can have some statistics (i.e. FxCop and compiler warnings). Of course, it would also be nice if it could just run my unit tests and present their output.

    Up to now, I have added the following batch task to Hudson, which makes it run the tests properly:

        "%PROGRAMFILES%\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe" /runconfig:LocalTestRun.testrunconfig /testcontainer:Tests\bin\Debug\Tests.dll

    However, as far as I know, Hudson does not yet support analysis of MSTest results. Does anyone know whether the TRX files generated by MSTest.exe can be transformed to the JUnit or NUnit result format (because those are supported by Hudson), or whether there is any other way to integrate MSTest unit tests with Hudson?

  • Specify test method name prefix for test suite in junit 3

    - by Marko Kocic
    Is it possible to tell JUnit 3 to use an additional method name prefix when looking up test method names? The goal is to have additional tests that run locally but should not be run on the continuous integration server. The CI server doesn't use test suites; it looks up all classes whose names end with "Test" and executes all methods that begin with "test". The goal is to be able to run locally not only the tests run by the integration server, but also tests whose method names start with, for example, "nocitest" or something like that. I don't mind having to organize tests into a test suite locally, since CI just ignores them.

  • How should I mock out my data connectivity

    - by BobTheBuilder
    I'm trying to unit test my data access layer, and I'm in the process of trying to mock my data connectivity to do so, but I'm coming unstuck trying to mock the creation of the commands. I thought about using a queue of IDbParameters for the creation of the parameters, but the unit tests would then require that the parameters are configured in the right order.

    I'm using Moq and, having looked around for some documentation to walk me through this, I'm finding lots of recommendations not to do this but to write a wrapper for the connection. It's my contention, though, that my DAL is supposed to be the wrapper for my database, and I don't feel I should be writing wrappers... and if I do, how do I unit test the connectivity to the database for my wrapper? By writing another wrapper? It seems like it's turtles all the way down. So does anyone have any recommendations or tutorials regarding this particular area of unit testing/mocking?
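
    As a rough illustration of the approach in question, here is a minimal sketch of mocking the ADO.NET interfaces directly with Moq; the repository class and the SQL are invented for the example:

        using System.Data;
        using Moq;

        public class CustomerRepository
        {
            private readonly IDbConnection _connection;

            public CustomerRepository(IDbConnection connection)
            {
                _connection = connection;
            }

            public int CountCustomers()
            {
                using (IDbCommand command = _connection.CreateCommand())
                {
                    command.CommandText = "SELECT COUNT(*) FROM Customers";
                    return (int)command.ExecuteScalar();
                }
            }
        }

        public class CustomerRepositoryTests
        {
            public void CountCustomers_ReturnsScalarFromCommand()
            {
                var command = new Mock<IDbCommand>();
                command.SetupProperty(c => c.CommandText);
                command.Setup(c => c.ExecuteScalar()).Returns(42);

                var connection = new Mock<IDbConnection>();
                connection.Setup(c => c.CreateCommand()).Returns(command.Object);

                var repository = new CustomerRepository(connection.Object);
                int count = repository.CountCustomers();

                // Assert.AreEqual(42, count); -- with whichever test framework is in use
            }
        }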

  • Regression testing with Selenium GRID

    - by Ben Adderson
    A lot of software teams out there are tasked with supporting and maintaining systems that have grown organically over time, and the web team here at Red Gate is no exception. We're about to embark on our first significant refactoring endeavour for some time, and as such it's clearly paramount that the code be tested thoroughly for regressions. Unfortunately we currently find ourselves with a codebase that isn't very testable - the three layers (database, business logic and UI) are currently tightly coupled. This leaves us with the unfortunate problem that, in order to confidently refactor the code, we need unit tests. But in order to write unit tests, we need to refactor the code :S

    To try and ease the initial pain of decoupling these layers, I've been looking into the idea of using UI automation to provide a sort of system-level regression test suite. The idea being that these tests can help us identify regressions whilst we work towards a more testable codebase, at which point the more traditional combination of unit and integration tests can take over. Ending up with a strong battery of UI tests is also a nice bonus :)

    Following on from my previous posts (here, here and here) I knew I wanted to use Selenium. I also figured that this would be a good excuse to put my xUnit [Browser] attribute to good use. Pretty quickly, I had a raft of tests that looked like the following (this particular example uses Reflector Pro). In a nutshell, the test traverses our shopping cart and, for a particular combination of number of users and months of support, checks that the price calculations all come up with the correct values.

        [BrowserTheory]
        [Browser(Browsers.Firefox3_6, "http://www.red-gate.com")]
        public void Purchase1UserLicenceNoSupport(SeleniumProvider seleniumProvider)
        {
            //Arrange
            _browser = seleniumProvider.GetBrowser();
            _browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");

            //Act
            _browser = ShoppingCartHelpers.TraverseShoppingCart(_browser, 1, 0, ".NET Reflector Pro");

            //Assert
            var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);
            Assert.Equal(priceResult.Price, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));
            Assert.Equal(priceResult.Tax, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));
            Assert.Equal(priceResult.Total, _browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total"));
        }

    These tests are pretty concise, with much of the common code in the TraverseShoppingCart() and GetNewPurchasePrice() methods. The (inevitable) problem arose when it came to execute these tests en masse. Selenium is a very slick tool, but it can't mask the fact that UI automation is very slow. To give you an idea, the set of cases that covers all of our products, for all combinations of users and support, came to 372 tests (for now only considering purchases in dollars). In the world of automated integration tests, that's a very manageable number. For unit tests, it's a trifle. However for UI automation, those 372 tests were taking just over two hours to run. Two hours may not sound like a lot, but those cases only cover one of the three currencies we deal with, and only one of the many different ways our systems can be asked to calculate a price. It was already pretty clear at this point that in order for this approach to be viable, I was going to have to find a way to speed things up.
    Up to this point I had been using Selenium Remote Control to automate Firefox, as this was the approach I had used previously and it had worked well. Fortunately, the guys at SeleniumHQ also maintain a tool for executing multiple Selenium RC tests in parallel: Selenium Grid. Selenium Grid uses a central 'hub' to handle allocation of Selenium tests to individual RCs. The Remote Controls simply register themselves with the hub when they start, and then wait to be assigned work. The (for me) really clever part is that, as far as the client driver library is concerned, the grid hub looks exactly the same as a vanilla remote control. To create a new browser session against Selenium RC, the following C# code suffices:

        new DefaultSelenium("localhost", 4444, "*firefox", "http://www.red-gate.com");

    This assumes that the RC is running on the local machine, and is listening on port 4444 (the default). Assuming the hub is running on your local machine, then to create a browser session in Selenium Grid, via the hub rather than directly against the control, the code is exactly the same! Behind the scenes, the hub will take this request and hand it off to one of the registered RCs that provides the "*firefox" execution environment. It will then pass all communications back and forth between the test runner and the remote control transparently. This makes running existing RC tests on a Selenium Grid a piece of cake, as the developers intended. For a more detailed description of exactly how Selenium Grid works, see this page.

    Once I had a test environment capable of running multiple tests in parallel, I needed a test runner capable of doing the same. Unfortunately, this does not currently exist for xUnit (boo!). MbUnit, on the other hand, has the concept of concurrent execution baked right into the framework. So after swapping out my assembly references, and fixing up the resulting mismatches in assertions, my example test now looks like this:

        [Test]
        public void Purchase1UserLicenceNoSupport()
        {
            //Arrange
            ISelenium browser = BrowserHelpers.GetBrowser();
            var db = DbHelpers.GetWebsiteDBDataContext();
            browser.Start();
            browser.Open("http://www.red-gate.com/dynamic/shoppingCart/ProductOption.aspx?Product=ReflectorPro");

            //Act
            browser = ShoppingCartHelpers.TraverseShoppingCart(browser, 1, 0, ".NET Reflector Pro");
            var priceResult = PriceHelpers.GetNewPurchasePrice(db, "ReflectorPro", 1, 0, Currencies.Euros);

            //Assert
            Assert.AreEqual(priceResult.Price, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl01_Price"));
            Assert.AreEqual(priceResult.Tax, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Tax"));
            Assert.AreEqual(priceResult.Total, browser.GetText("ctl00_content_InvoiceShoppingItemRepeater_ctl02_Total"));
        }

    This is pretty much the same as the xUnit version. The exceptions are that the attributes have changed, the //Arrange phase now has to handle setting up the ISelenium object, as the attribute that previously did this has gone away, and the test now sets up its own database connection. Previously I was using a shared database connection, but this approach becomes more complicated when tests are being executed concurrently. To avoid complexity each test has its own connection, which it is responsible for closing. For the sake of readability, I snipped out the code that closes the browser session and the db connection at the end of the test.
    With all that done, there was only one more step required before the tests would execute concurrently. It is necessary to tell the test runner which tests are eligible to run in parallel, via the [Parallelizable] attribute. This can be done at the test, fixture or assembly level. Since I wanted to run all tests concurrently, I marked mine at the assembly level in the AssemblyInfo.cs using the following:

        [assembly: DegreeOfParallelism(3)]
        [assembly: Parallelizable(TestScope.All)]

    The second attribute marks all tests in the assembly as [Parallelizable], whilst the first tells the test runner how many concurrent threads to use when executing the tests. I set mine to three since I was using 3 RCs in separate VMs.

    With everything now in place, I fired up the Icarus* test runner that comes with MbUnit. Executing my 372 tests three at a time instead of one at a time reduced the running time from 2 hours 10 minutes, to 55 minutes, that's an improvement of about 58%! I'd like to have seen an improvement of 66%, but I can understand that either inefficiencies in the hub code, my test environment or the test runner code (or some combination of all three most likely) contributes to a slightly diminished improvement. That said, I'd love to hear about any experience you have in upping this efficiency. Ultimately though, it was a saving that was most definitely worth having. It makes regression testing via UI automation a far more plausible prospect. The other obvious point to make is that this approach scales far better than executing tests serially. So if ever we need to improve performance, we just register additional RC's with the hub, and up the DegreeOfParallelism.

    *This was just my personal preference for a GUI runner. The MbUnit/Gallio installer also provides a command line runner, a TestDriven.net runner, and a Resharper 4.5 runner. For now at least, Resharper 5 isn't supported.

  • What is the best position for the power supply unit?

    - by guest86
    I would like to buy a new computer case. The last time I bought a computer was in 2008, and many things have changed since then. Many new computer cases have the power supply unit placed at the bottom. I'm thinking about buying one of those cases, but I'm not sure about something: if the power supply is placed at the bottom, it can't take hot air away from the case and pump it out, right? All my PC parts are silent: the CPU (E8200, placed below the 12cm Noctua fan of the power supply) has a heat-pipe cooler with a Noctua fan spinning at only 800rpm, and the GPU has a cooler powered at 7V instead of 12V. That's why I don't want to HAVE TO add another fan to pump out hot air in place of the power supply on top. That might make some noise. So I ask someone more experienced: if I buy a computer case with the power supply mounted at the bottom, do I HAVE TO add a fan to pump out the hot air?

  • Simplifying Testing through design considerations while utilizing dependency injection

    - by Adam Driscoll
    We are a few months into a green-field project to rework the Logic and Business layers of our product. By utilizing MEF (dependency injection) we have achieved high levels of code coverage and I believe that we have a pretty solid product. As we have been working through some of the more complex logic I have found it increasingly difficult to unit test. We are utilizing the CompositionContainer to query for types required by these complex algorithms. My unit tests are sometimes difficult to follow due to the lengthy mock object setup process that must take place, just right, to allow for certain circumstances to be verified. My unit tests often take me longer to write than the code that I'm trying to test. I realize this is not only an issue with dependency injection but with design as a whole. Is poor method design or lack of composition to blame for my overly complex tests? I've tried base classing tests, creating commonly used mock objects and ensuring that I utilize the container as much as possible to ease this issue but my tests always end up quite complex and hard to debug. What are some tips that you've seen to keep such tests concise, readable, and effective?
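
    One pattern that can shrink those lengthy arrange phases is to push the repetitive mock wiring into a small builder, so individual tests only state the one thing that is special about them. This is a minimal sketch using Moq, and every type name in it is invented for the example rather than taken from the project described above:

        using Moq;

        // Hypothetical collaborators of the class under test.
        public interface IPriceFeed { decimal GetPrice(string symbol); }
        public interface IAuditLog { void Record(string message); }

        public class PricingEngine
        {
            private readonly IPriceFeed _feed;
            private readonly IAuditLog _audit;

            public PricingEngine(IPriceFeed feed, IAuditLog audit)
            {
                _feed = feed;
                _audit = audit;
            }

            public decimal Quote(string symbol)
            {
                decimal price = _feed.GetPrice(symbol);
                _audit.Record("quoted " + symbol);
                return price;
            }
        }

        // Builder that supplies default mocks so tests only override what matters.
        public class PricingEngineBuilder
        {
            public Mock<IPriceFeed> Feed = new Mock<IPriceFeed>();
            public Mock<IAuditLog> Audit = new Mock<IAuditLog>();

            public PricingEngineBuilder WithPrice(string symbol, decimal price)
            {
                Feed.Setup(f => f.GetPrice(symbol)).Returns(price);
                return this;
            }

            public PricingEngine Build()
            {
                return new PricingEngine(Feed.Object, Audit.Object);
            }
        }

        // In a test, the arrange phase collapses to a single line:
        // var engine = new PricingEngineBuilder().WithPrice("RG", 42m).Build();
        // Assert.AreEqual(42m, engine.Quote("RG"));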

  • What's the best practice to setup testing for ASP.Net MVC? What to use/process/etc?

    - by melaos
    Hi there, I'm trying to learn how to properly set up testing for an ASP.NET MVC project. From what I've been reading here and there thus far, the definition of legacy code kind of piques my interest, where it mentions that legacy code is any code without unit tests. I did my project in a hurry, not having the time to properly set up unit tests for the app, and I'm still learning how to properly do TDD and unit testing at the same time. Then I came upon Selenium IDE/RC and was using it to test on the browser end. It was during that time, too, that I came upon the concept of integration testing. So from my understanding, it seems that unit testing should be done to define the test and basic assumptions of each function, and if the function is dependent on something else, that something else needs to be mocked so that the tests are always singular and can run fast.

    Questions: Am I right to say that the project should have started with unit tests and proper mocks, using something like Rhino Mocks, and then anything else which requires a 3rd-party DLL, database data access etc. should be covered by integration testing using Selenium? Because I have a function which calls a third-party DLL, I'm not sure whether to write a unit test in NUnit that just instantiates the object and passes it some dummy data (which breaks the mocking part), or to just cover that part in my Selenium integration testing when I submit my forms and the DLL gets called. And for user acceptance tests, is it safe to say we can just use Selenium again?

    Am I missing something, or is there a better way/framework? I'm trying to put in more tests for regression testing, and to ensure that nothing breaks when we put in new features. I also like the idea of TDD because it helps to better define the function, sort of like meta documentation. Thanks!! I hope this question isn't too subjective, because I need it for my case.
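
    One way to keep the third-party DLL out of the unit tests, so the unit test supplies a fake and the real DLL is only exercised by the Selenium integration run, is to hide it behind an interface the code under test depends on. A minimal sketch with invented names:

        // Interface owned by the application; the real implementation wraps the 3rd-party DLL.
        public interface IPostcodeLookup
        {
            string FindCity(string postcode);
        }

        public class RegistrationService
        {
            private readonly IPostcodeLookup _lookup;

            public RegistrationService(IPostcodeLookup lookup)
            {
                _lookup = lookup;
            }

            public string DescribeLocation(string postcode)
            {
                return "City: " + _lookup.FindCity(postcode);
            }
        }

        // Hand-rolled fake used by the unit test; no third-party DLL involved.
        public class FakePostcodeLookup : IPostcodeLookup
        {
            public string FindCity(string postcode)
            {
                return "Testville";
            }
        }

        // Unit test (any framework, or a Rhino Mocks stub instead of the fake):
        //   var service = new RegistrationService(new FakePostcodeLookup());
        //   Assert.AreEqual("City: Testville", service.DescribeLocation("12345"));
        // The adapter that calls the real DLL is then covered by the Selenium run.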

  • How do I run NUnit in debug mode from Visual Studio?

    - by Jon Cage
    I've recently been building a test framework for a bit of C# I've been working on. I have NUnit set up and a new project within my workspace to test the component. All works well if I load up my unit tests from NUnit (v2.4), but I've got to the point where it would be really useful to run in debug mode and set some breakpoints.

    I've tried the suggestions from several guides, which all suggest changing the 'Debug' properties of the test project:

        Start external program: C:\Program Files\NUnit 2.4.8\bin\nunit-console.exe
        Command line arguments: /assembly: <full-path-to-solution>\TestDSP\bin\Debug\TestDSP.dll

    I'm using the console version there, but have tried calling the GUI as well. Both give me the same error when I try and start debugging:

        Cannot start test project 'TestDSP' because the project does not contain any tests.

    Is this because I normally load \DSP.nunit into the NUnit GUI and that's where the tests are held? I'm beginning to think the problem may be that VS wants to run its own test framework and that's why it's failing to find the NUnit tests?

    [Edit] To those asking about test fixtures, one of my .cs files in the TestDSP project looks roughly like this:

        namespace Some.TestNamespace
        {
            // Testing framework includes
            using NUnit.Framework;

            [TestFixture]
            public class FirFilterTest
            {
                /// <summary>
                /// Tests that a FirFilter can be created
                /// </summary>
                [Test]
                public void Test01_ConstructorTest()
                {
                    ...some tests...
                }
            }
        }

    ...I'm pretty new to C# and the NUnit test framework, so it's entirely possible I've missed some crucial bit of information ;-)

    [FINAL SOLUTION] The big problem was the project type I'd used. If you pick:

        Other Languages -> Visual C# -> Test -> Test Project

    ...when you're choosing the project type, Visual Studio will try and use its own testing framework as far as I can tell. You should pick a normal C# class library project instead, and then the instructions in my selected answer will work.

  • Formatting XML using XSLT1.0

    - by DS
    Hi, I have the following XML:

        <Subscriptions>
          <Subscription>
            <Uplink><Size>15</Size><Unit>Mbps</Unit></Uplink>
            <Name>Class D</Name>
          </Subscription>
          <Subscription>
            <Uplink><Size>10</Size><Unit>Mbps</Unit></Uplink>
            <Name>Class A</Name>
          </Subscription>
          <Subscription>
            <Downlink><Size>50</Size><Unit>Mbps</Unit></Downlink>
            <Name>Class B</Name>
          </Subscription>
          <Subscription>
            <Uplink><Size>10</Size><Unit>Mbps</Unit></Uplink>
            <Name>Class B</Name>
          </Subscription>
          <Subscription>
            <Downlink><Size>40000</Size><Unit>Mbps</Unit></Downlink>
            <Name>Class A</Name>
          </Subscription>
          <Subscription>
            <Downlink><Size>20</Size><Unit>Mbps</Unit></Downlink>
            <Name>Class C</Name>
          </Subscription>
          <Subscription>
            <Downlink><Size>45</Size><Unit>Mbps</Unit></Downlink>
            <Name>Class D</Name>
          </Subscription>
        </Subscriptions>

    I want to group it in the following format, based on name, using XSLT 1.0. Please help.

        <?xml version="1.0" encoding="UTF-8"?>
        <Subscriptions>
          <Subscription>
            <Downlink><Size>45</Size><Unit>Mbps</Unit></Downlink>
            <Uplink><Size>15</Size><Unit>Mbps</Unit></Uplink>
            <Name>Class D</Name>
          </Subscription>
          <Subscription>
            <Downlink><Size>40000</Size><Unit>Mbps</Unit></Downlink>
            <Uplink><Size>10</Size><Unit>Mbps</Unit></Uplink>
            <Name>Class A</Name>
          </Subscription>
          <Subscription>
            <Downlink><Size>50</Size><Unit>Mbps</Unit></Downlink>
            <Uplink><Size>10</Size><Unit>Mbps</Unit></Uplink>
            <Name>Class B</Name>
          </Subscription>
          <Subscription>
            <Downlink><Size>20</Size><Unit>Mbps</Unit></Downlink>
            <Uplink><Size>0</Size><Unit>Mbps</Unit></Uplink>
            <Name>Class C</Name>
          </Subscription>
        </Subscriptions>

    Thanks & Regards,
    D
