Search Results

Search found 7245 results on 290 pages for 'meta tests'.

Page 10/290 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • JUnit: splitting integration tests and unit tests

    - by jeff porter
    Hello all, I've inherited a load of JUnit tests, but these tests (apart from most not working) are a mixture of actual unit tests and integration tests (requiring external systems, a database, etc.). So I'm trying to think of a way to actually separate them out, so that I can run the unit tests nice and quickly and the integration tests after that. The options are: 1: Split them into separate directories. 2: Move to JUnit 4 and annotate the classes to separate them. 3: Use a file naming convention to tell what a class is, i.e. AdapterATest and AdapterAIntegrationTest. Option 3 has the issue that Eclipse has the option to "Run all tests in the selected project/package or folder", so it would make it very hard to run just the integration tests. Option 2 runs the risk that developers might start writing integration tests in unit test classes and it just gets messy. Option 1 seems like the neatest solution, but my gut says there must be a better solution out there. So that is my question: how do you lot break apart integration tests and proper unit tests?

    Read the article

  • Doing unit and integration tests with the Web API HttpClient

    - by cibrax
    One of the nice things about the new HttpClient in System.Net.Http is the support for mocking responses or handling requests in an HTTP server hosted in-memory. While the first option is useful for scenarios in which we want to test our client code in isolation (unit tests, for example), the second one enables more complete integration testing scenarios that can include more components in the stack, such as model binders or message handlers. The HttpClient can receive an HttpMessageHandler as an argument in one of its constructors.

        public class HttpClient : HttpMessageInvoker
        {
            public HttpClient();
            public HttpClient(HttpMessageHandler handler);
            public HttpClient(HttpMessageHandler handler, bool disposeHandler);
        }

    For the first scenario, you can create a new HttpMessageHandler that fakes the response, which you can then use in your unit test. The only requirement is that you somehow inject an HttpClient with this custom handler into the client code.

        public class FakeHttpMessageHandler : HttpMessageHandler
        {
            HttpResponseMessage response;

            public FakeHttpMessageHandler(HttpResponseMessage response)
            {
                this.response = response;
            }

            protected override Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
            {
                var tcs = new TaskCompletionSource<HttpResponseMessage>();
                tcs.SetResult(response);
                return tcs.Task;
            }
        }

    In a unit test, you can do something like this.
        var fakeResponse = new HttpResponseMessage();
        var fakeHandler = new FakeHttpMessageHandler(fakeResponse);
        var httpClient = new HttpClient(fakeHandler);
        var customerService = new CustomerService(httpClient);
        // Do something
        // Asserts

    CustomerService in this case is the class under test, and the one that receives an HttpClient initialized with our fake handler. For the second scenario, integration tests, there is an in-memory host, System.Web.Http.HttpServer, that also derives from HttpMessageHandler and can be used with an HttpClient instance in your test. This has already been discussed in these two great posts from Pedro and Filip.
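    As a rough sketch of that second scenario (not taken from the post; the CustomersController behind the default route and the request URL are assumptions), an integration test can host an HttpServer in-memory and drive it through an HttpClient:

        // Minimal in-memory integration test sketch for ASP.NET Web API.
        using System.Net;
        using System.Net.Http;
        using System.Web.Http;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class CustomerApiIntegrationTests
        {
            [TestMethod]
            public void GetCustomer_ReturnsOk()
            {
                var config = new HttpConfiguration();
                config.Routes.MapHttpRoute(
                    name: "DefaultApi",
                    routeTemplate: "api/{controller}/{id}",
                    defaults: new { id = RouteParameter.Optional });

                // HttpServer derives from HttpMessageHandler, so it can back an HttpClient directly.
                using (var server = new HttpServer(config))
                using (var client = new HttpClient(server))
                {
                    var response = client.GetAsync("http://localhost/api/customers/1").Result;
                    Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);
                }
            }
        }

    Because nothing actually listens on a socket, the test runs entirely in-memory while still exercising routing, model binders, and message handlers.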

    Read the article

  • Writing the tests for FluentPath

    Writing the tests for FluentPath is a challenge. The library is a wrapper around a legacy API (System.IO) that wasn't designed to be easily testable. If it were more testable, the sensible testing methodology would be to tell System.IO to act against a mock file system, which would enable me to verify that my code is doing the expected file system operations without having to manipulate the actual, physical file system: what we are testing here is FluentPath, not System.IO. Unfortunately, that...

    Read the article

  • Jasmine BDD vs Integration Tests

    - by lfender6445
    Let's say I need to write a test for the front end. A user visits buysomething.com, saves something to their wishlist, and a saved-item count is updated; the DOM gets manipulated. In my heart I feel this is better suited to an integration test, but my team is currently using Jasmine to load fixtures and test such interactions. This leads to extremely brittle tests, as they are reliant on a static fixture instead of the actual markup. Are we misusing Jasmine here?

    Read the article

  • Learning a new language using broken unit tests

    - by Brian MacKay
    I was listening to a .NET Rocks episode the other day where they mentioned, almost in passing, a really intriguing tool for learning new languages -- I think they were specifically talking about F#. It's a solution you open up and there are a bunch of broken unit tests; fixing them walks you through the steps of learning the language. I want to check it out, but I was driving in my car and I have no idea what the name of the project is or which .NET Rocks episode it was. Google hasn't helped much. Any idea?

    Read the article

  • mocha testing for the lazies, single key-press for all possible tests

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like:

        Test. [U]nit, [I]ntegration : i      (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2
        User Interaction. Running "mocha integration\2userint.js" ...

    So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain. Is there something that does this, or anything like it, automatically? Something that reads all the files and asks me which file(s) I want to test. A GUI with checkboxes would be ultimate, but I'll take anything. I'm working in node.js.

    Read the article

  • Dynamic tests with mstest and T4

    - by Victor Hurdugaci
    If you have used MSTest and NUnit you might be aware of the fact that the former doesn't support dynamic, data-driven test cases. For example, the following scenario cannot be achieved with out-of-the-box MSTest: given a dataset, create distinct test cases for each entry in it, using a predefined generic test case. The best result that can be achieved using MSTest is a single test case that iterates through the dataset. There is one disadvantage: if the test fails for one entry in the dataset, the whole test case fails. So, in order to overcome the previously mentioned limitation, I decided to create a text template that will generate the test cases for me. As an example, I will write some tests for an integer multiplication function that has 2 bugs in it: Read more >> [Cross post from victorhurdugaci.com]
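    To make the idea concrete, here is a hedged sketch of the kind of C# such a template might emit (the Calculator.Multiply function, the dataset values, and the test names are invented for illustration): the template loops over the dataset and writes one [TestMethod] per entry, each delegating to a shared generic check, so one failing entry no longer hides the others.

        // Hypothetical generated output: one test per dataset entry, all delegating
        // to a shared check, so each entry passes or fails independently.
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public partial class MultiplicationTests
        {
            private static void AssertMultiply(int a, int b, int expected)
            {
                Assert.AreEqual(expected, Calculator.Multiply(a, b));
            }

            [TestMethod] public void Multiply_2_by_3_is_6()            { AssertMultiply(2, 3, 6); }
            [TestMethod] public void Multiply_0_by_5_is_0()            { AssertMultiply(0, 5, 0); }
            [TestMethod] public void Multiply_minus4_by_7_is_minus28() { AssertMultiply(-4, 7, -28); }
        }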

    Read the article

  • Software, script or a tool to automate managing which tests to run

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like:

        Test. [U]nit, [I]ntegration : i      (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2
        User Interaction. Running "mocha integration\2userint.js" ...

    So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain: I have to update the batch file every time a new file is added or changed. Is there a piece of software, a script, or a tool that does this automatically, or makes it easier for me to do so? I basically need it to be aware of my test files and ask me which file(s) I want to test. A GUI with checkboxes would be ultimate, but I'll take anything. I'm working in node.js.

    Read the article

  • Tip #15: How To Debug Unit Tests During Maven Builds

    - by ByronNevins
    It must be really, really hard to step through unit tests in a debugger during a Maven build. Right? Wrong! Here is how I do it:
    1) Set up these environment variables:

        MAVEN_OPTS=-Xmx1024m -Xms256m -XX:MaxPermSize=512m
        MAVEN_OPTS_DEBUG=-Xmx1024m -Xms256m -XX:MaxPermSize=512m -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=9999   (all on one line!)
        MAVEN_OPTS_REG=-Xmx1024m -Xms256m -XX:MaxPermSize=512m

    2) Create two scripts or aliases like so:

        maveny.bat: set MAVEN_OPTS=%MAVEN_OPTS_DEBUG%
        mavenn.bat: set MAVEN_OPTS=%MAVEN_OPTS_REG%

    To debug, do this: run maveny.bat, run mvn install, then attach your debugger to port 9999 (set breakpoints, of course). When Maven gets to the unit test phase it will hit your breakpoint and wait for you. When you're done debugging, simply run mavenn.bat.
    Notes: If the build takes a while, you don't really need to set the suspend=y flag. If you set suspend=n you can just leave it in place -- but only one Maven build can run at a time because of the debug port conflict.

    Read the article

  • Automated tests for differencing algorithm

    - by Matthew Rodatus
    We are designing a differencing algorithm (based on Longest Common Subsequence) that compares a source text and a modified copy to extract the new content (i.e. content that is only in the modified copy). I'm currently compiling a library of test case data. We need to be able to run automated tests that verify the test cases, but we don't want to verify strict accuracy. Given the heuristic nature of our algorithm, we need our test pass/failures to be fuzzy. We want to specify a threshold of overlap between the desired result and the actual result (i.e. the content that is extracted). I have a few sketches in my mind as to how to solve this, but has anyone done this before? Does anyone have guidance or ideas about how to do this effectively?
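    One way to encode that kind of fuzzy pass/fail (a sketch only; the whitespace tokenization and the example 0.9 threshold are assumptions, not the poster's design) is to assert on an overlap ratio between the expected and the extracted content instead of exact equality:

        // Sketch: a test passes when enough of the expected extracted content is present.
        using System;
        using System.Collections.Generic;
        using System.Linq;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        public static class FuzzyAssert
        {
            public static void OverlapAtLeast(string expected, string actual, double threshold)
            {
                var separators = new[] { ' ', '\t', '\r', '\n' };
                var expectedTokens = new HashSet<string>(expected.Split(separators, StringSplitOptions.RemoveEmptyEntries));
                var actualTokens = new HashSet<string>(actual.Split(separators, StringSplitOptions.RemoveEmptyEntries));

                int common = expectedTokens.Count(actualTokens.Contains);
                double overlap = expectedTokens.Count == 0 ? 1.0 : (double)common / expectedTokens.Count;

                Assert.IsTrue(overlap >= threshold,
                    string.Format("Overlap {0:P0} is below the required {1:P0}.", overlap, threshold));
            }
        }

        // Usage in a test case:
        // FuzzyAssert.OverlapAtLeast(expectedNewContent, extractedNewContent, 0.9);

    The threshold then becomes the knob for how much heuristic slack each test case is allowed.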

    Read the article

  • Google Analytics Content Experiments for non-simultaneous tests

    - by mnort9
    I really like how Google Analytics displays the results of content experiments. However, it seems the tool only works for simultaneous tests. I'd like to use the tool without implementing the page variation code on my site. For example, I want to test copy on an ecommerce category page. The original page variation would be the current page for the past 2500 visits. After making the copy changes, the new variation would be for the next 2500 visits. I realize I can simply record the metrics before and after each variation, but I'd like to take advantage of Google's presentation of the experiment. Is it possible to use Content Experiments in this way?

    Read the article

  • New WebKit tests

    I have updated the WebKit comparison table with data from Safari 5, Chrome 5, and Android 2.1. Improvements throughout! The top five WebKit browsers according to these tests are now: Chrome 5, Safari 5, Safari 4, Samsung WebKit (on bada), and Android 2.1. Interesting findings: Chrome and Android now support localStorage (Safari already did). Chrome and Android now support geolocation. Safari does in theory, but it doesn't give the actual coordinates, making the whole exercise a bit pointless. Chrome and Android...

    Read the article

  • What do well-written, readable tests look like?

    - by Industrial
    Doing unit testing for the first time at a large scale, I find myself writing a lot of repetitive unit tests for my business logic. Sure, to create complete test suites I need to test all possibilities, but readability feels compromised doing what I do, as shown in the pseudocode below. What would a well-written, readable test suite look like? describe "UserEntity" -> it "valid name validates" ... it "invalid name doesnt validate" ... it "valid list of followers validate" ..
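    One pattern that tends to keep such suites readable (a sketch in C# for illustration; UserEntity, IsValid, and the specific rules are assumptions based on the pseudocode above) is one test per validation rule, with the concrete example values visible as data:

        // Sketch: descriptive test names per rule, repetitive cases folded into data.
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class UserEntityValidationTests
        {
            [TestMethod]
            public void Valid_name_validates()
            {
                Assert.IsTrue(new UserEntity { Name = "Alice" }.IsValid());
            }

            [TestMethod]
            public void Invalid_names_do_not_validate()
            {
                var invalidNames = new[] { "", "   ", new string('x', 300) };
                foreach (var name in invalidNames)
                {
                    Assert.IsFalse(new UserEntity { Name = name }.IsValid(), "Name: \"" + name + "\"");
                }
            }

            [TestMethod]
            public void Valid_list_of_followers_validates()
            {
                Assert.IsTrue(new UserEntity { Name = "Alice", Followers = new[] { "bob", "carol" } }.IsValid());
            }
        }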

    Read the article

  • Is there a standardized (Meta?) Tag for the Date of a Website?

    - by Michael Stum
    One thing that search engines really suck at is the date when a website was created. You know the problem: you search for some CSS or JavaScript problem and Google returns a ton of results from 2002 explaining how to fix the problem in IE 5.5 and Netscape 4.6, while the helpful articles are buried on Page 3. There is only one use for Page 3, and meaningful search results are not it. Anyway, I just wonder if there is a standardized, or at least generally accepted, tag or meta tag that I can put on my own pages to indicate the date they were created? Not that it would help filter the old crap out of search results (especially since the people at #1 with their 2002 articles have zero incentive to change), but I'd just like to do my part :P

    Read the article

  • TeamCity sends inadequate responses after Selenium tests

    - by Dmitriy Sukharev
    I have TeamCity 7.0.2 on a CentOS 6.2 server without an X server. I've installed x11-fonts*, xvfb, firefox, and xauth, exported the env. variable DISPLAY=localhost:1, and started xvfb. After that I could start the Selenium tests using Maven. The tests are executed, but there's an issue with TeamCity. Usually TeamCity starts behaving absolutely inadequately (it confuses images on the page, sends XML or strange text, ampersands and numbers, in responses, and is a bit slower), and the tests run about 4 times slower on the server (1h 15m) than on the tester's Windows 7-based machine (25m). It's worth noticing that the tests launch two Jetty servers for the tested application (one for the REST-services application and another for the client). In TeamCity I set the JVM command line parameters -Xms256m -Xmx1224m -XX:MaxPermSize=320m, and the additional Maven command line parameters end with "-DMAVEN_OPTS=-Xmx1024m" (without quotes). Also, both the web services and TeamCity use the same Oracle server (but different Oracle users). Finally, TeamCity and its build agent are on the same server. The server has only 4 GB of RAM, but during testing there are 400 MB of RAM and 1.2 GB of swap in use. TeamCity and Firefox use about 65% of the CPU during testing. There's no firefox process left after testing ends. My knowledge of Selenium is weak. I only know that we use version 2.20.0 of the selenium-java Maven dependency. Please help me determine why TeamCity sends wrong responses after the Selenium tests. I've tried to give you all the information I have, but feel free to ask me for more.

    Read the article

  • Ethernet run tests green but won't connect

    - by Simon Gillbee
    I have a single ethernet run at home that I just added. I have a cable tester that tests for pin/pair crossover or miswired pins. The entire line tests green (all 4 LEDs light up green on the tester) but I can't get any PC to connect through the link. No link light on the ethernet connection. Any simple tests/fixes, or do I rip out the wall sockets and do it again?

    Read the article

  • Automated GUI tests fail when running from Jenkins

    - by adm
    Jenkins (master) is installed on a Linux system and runs automated tests on the slave node (Win-XP) via an ssh connection. But all the GUI tests fail when run this way; when the GUI tests run locally (on the Win-XP system), they pass. I tried tscon.exe 0 /dest:console to forward the calls to the console, but I am getting the error: Could not connect sessionID 0 to sessionname console, Error code 7045 Error [7045]: The requested session access is denied. Thanks

    Read the article

  • Weirdness with cabal, HTF, and HUnit assertions

    - by rampion
    So I'm trying to use HTF to run some HUnit-style assertions:

        % cat tests/TestDemo.hs
        {-# OPTIONS_GHC -Wall -F -pgmF htfpp #-}
        module Main where

        import Test.Framework
        import Test.HUnit.Base ((@?=))
        import System.Environment (getArgs)

        -- just run some tests
        main :: IO ()
        main = getArgs >>= flip runTestWithArgs Main.allHTFTests

        -- all these tests should fail
        test_fail_int1 :: Assertion
        test_fail_int1 = (0::Int) @?= (1::Int)

        test_fail_bool1 :: Assertion
        test_fail_bool1 = True @?= False

        test_fail_string1 :: Assertion
        test_fail_string1 = "0" @?= "1"

        test_fail_int2 :: Assertion
        test_fail_int2 = [0::Int] @?= [1::Int]

        test_fail_string2 :: Assertion
        test_fail_string2 = "true" @?= "false"

        test_fail_bool2 :: Assertion
        test_fail_bool2 = [True] @?= [False]

    And when I use ghc --make, it seems to work correctly.

        % ghc --make tests/TestDemo.hs
        [1 of 1] Compiling Main ( tests/TestDemo.hs, tests/TestDemo.o )
        Linking tests/TestDemo ...
        % tests/TestDemo
        ...
        * Tests: 6
        * Passed: 0
        * Failures: 6
        * Errors: 0

        Failures:
        * Main:fail_int1 (tests/TestDemo.hs:9)
        * Main:fail_bool1 (tests/TestDemo.hs:12)
        * Main:fail_string1 (tests/TestDemo.hs:15)
        * Main:fail_int2 (tests/TestDemo.hs:19)
        * Main:fail_string2 (tests/TestDemo.hs:22)
        * Main:fail_bool2 (tests/TestDemo.hs:25)

    But when I use cabal to build it, not all the tests that should fail, fail.

        % cat Demo.cabal
        ...
        executable test-demo
          build-depends: base >= 4, HUnit, HTF
          main-is: TestDemo.hs
          hs-source-dirs: tests
        % cabal configure
        Resolving dependencies...
        Configuring Demo-0.0.0...
        % cabal build
        Preprocessing executables for Demo-0.0.0...
        Building Demo-0.0.0...
        [1 of 1] Compiling Main ( tests/TestDemo.hs, dist/build/test-demo/test-demo-tmp/Main.o )
        Linking dist/build/test-demo/test-demo ...
        % dist/build/test-demo/test-demo
        ...
        * Tests: 6
        * Passed: 3
        * Failures: 3
        * Errors: 0

        Failures:
        * Main:fail_int2 (tests/TestDemo.hs:23)
        * Main:fail_string2 (tests/TestDemo.hs:26)
        * Main:fail_bool2 (tests/TestDemo.hs:29)

    What's going wrong and how can I fix it?

    Read the article

  • ASP.NET MVC 2 RTM Unit Tests not compiling

    - by nmarun
    I found something weird this time when it came to the ASP.NET MVC 2 release. Only a handful of people ‘made noise’ about the release, at least on the asp.net blog site; usually there’s a big ‘WOOHAA… <something> is released’ kind of a thing. Hmm… but here’s the reason I’m writing this post. I’m not sure how many of you read the release notes before downloading the version.. I did, I did, I did. Now there’s a ‘Known issues’ section in the document, and I’m quoting the text as-is from this section: Unit test project does not contain reference to ASP.NET MVC 2 project: If the Solution Explorer window is hidden in Visual Studio, when you create a new ASP.NET MVC 2 Web application project and you select the option Yes, create a unit test project in the Create Unit Test Project dialog box, the unit test project is created but does not have a reference to the associated ASP.NET MVC 2 project. When you build the solution, Visual Studio will display compilation errors and the unit tests will not run. There are two workarounds. The first workaround is to make sure that the Solution Explorer is displayed when you create a new ASP.NET MVC 2 Web application project. If you prefer to keep Solution Explorer hidden, the second workaround is to manually add a project reference from the unit test project to the ASP.NET MVC 2 project. This definitely looks like a bug to me; see below for a visual: at the top right corner you’ll see that the Solution Explorer is set to auto-hide and there’s no reference for the TestMvc2 project, and that is the reason we get compilation errors without even writing a single line of code. So thanks to <VeryBigFont>ME</VeryBigFont> and <VerySmallFont>Microsoft</VerySmallFont>, we’ve shown the world how to resolve a major issue and to live in Peace with the rest of humanity!

    Read the article

  • TFS 2010 RC does not run Visual Studio 2008 MSTest unit tests

    - by Bernard Vander Beken
    Steps: Run the build including unit tests. Expected result: the unit tests are executed and succeed. Actual result: the unit tests are built by the build, but this is the result: 1 test run(s) completed - 0% average pass rate (0% total pass rate) 0/4 test(s) passed, 0 failed, 4 inconclusive, View Test Results Other Errors and Warnings 1 error(s), 0 warning(s) TF270015: 'MSTest.exe' returned an unexpected exit code. Expected '0'; actual '1'. All the tests are enumerated (four), but the result for each test is "Not Executed". Context: Team Foundation Server 2010 release candidate A build definition that runs projects using the Visual Studio 2008 project format and .NET 3.5 SP1. The unit tests run on a development machine, within Visual Studio. The unit tests project references C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll Typical test class [TestClass] public class DemoTest { [TestMethod] public void DemoTestName() { } // etc }

    Read the article

  • New NCover 3.4.2 makes all my MSTest unit tests fail

    - by Steven
    Yesterday, I decided to install the newest NCover version (3.4.2). However, when I ran it on my existing .ncover configuration file, the NCover output suddenly reported that all my MSTest tests failed. Of course those tests succeed when ran within Visual Studio. Because of this, NCover isn't able to determine any coverage. Somehow the old configuration doesn't seem to work with the new version. Does anyone have any idea what the problem could be or how to solve it? Btw. Here is my ncover configuration. Project settings: Path to application to profile: c:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe Arguments for the application to profile: /testcontainer:D:\dev\MyApp\MyApp.Services.Tests.Unit\bin\Debug\MyApp.Services.Tests.Unit.dll /testcontainer:D:\dev\MyApp\MyApp.WS.Tests.Unit\bin\Debug\MyApp.WS.Tests.Unit.dll Working folder: D:\dev\MyApp

    Read the article

  • Prevent OCUnit tests from running when compilation fails

    - by mhenry1384
    I'm using Xcode 3.2.2 and the built-in OCUnit test stuff. One problem I'm running into is that every time I do a build my unit tests are run, even if the build failed. Let's say I make a syntax error in one of my tests. The test fails to compile and the last successfully compiled version of the unit tests is run. The same thing happens if one of the dependent targets fails to build: the tests are still run. Which is obviously not what I want. How can I prevent the tests from running if the build fails? If this is not possible then I'd rather have the tests never run automatically; is that possible? Sorry if this is obvious, I'm an Xcode noob. Should I be using a better unit testing framework?

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >