Search Results

Search found 8185 results on 328 pages for 'technical tests'.


  • How to set up a Hudson server to run cppunit tests

    - by kyue
    Hello, I'm having problems setting up my Hudson server to run CppUnit tests so that they output an XML results file. I've searched the web for more straightforward instructions on how to set this up, but I still don't understand how. It sounds like I need to set up Ant to run the tests, but how? I'm currently running Hudson version 1.352. Any suggestions will be greatly appreciated. Kat


  • Selenium RC Error when running tests

    - by Sheoque
    I get this error when running a number of tests in Selenium's Bromine. Selenium RC version 1.0.2 outputs this:

      WARN - GET /selenium-server/driver/?cmd=testComplete&1=&2=&sessionId=1274d41621c64fc08c1e7ea0a58f260b HTTP/1.0
      java.lang.IllegalStateException: unexpected command json={command:"open",target:"/Library/Security/Login.aspx?ReturnUrl=%2fIndex.aspx",value:""} in place before new command selectWindow could be added
        at org.openqa.selenium.server.CommandQueue.doCommandWithoutWaitingForAResponse(CommandQueue.java:121)

    Any ideas?
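
    For illustration: this exception typically appears when a second command (here selectWindow) reaches the server's CommandQueue while an earlier one is still pending, for example from overlapping sessions or parallel runs sharing one session. Below is only a hedged sketch of a single, strictly sequential Selenium RC session in Java; the host, port, browser string, and base URL are assumptions, not taken from the question.

      import com.thoughtworks.selenium.DefaultSelenium;
      import com.thoughtworks.selenium.Selenium;

      public class SequentialSessionExample {
          public static void main(String[] args) {
              // Hypothetical server host/port and application base URL.
              Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox",
                      "http://localhost/");
              selenium.start();
              try {
                  // One session, one command at a time: open, wait, then selectWindow,
                  // so no new command is queued while another is still in flight.
                  selenium.open("/Library/Security/Login.aspx?ReturnUrl=%2fIndex.aspx");
                  selenium.waitForPageToLoad("30000");
                  selenium.selectWindow(null);   // null selects the main window
              } finally {
                  selenium.stop();
              }
          }
      }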


  • Automated tests for Java Swing GUIs

    - by pauldoo
    What options are there for building automated tests for GUIs written in Java Swing? I'd like to test some GUIs that were built with the NetBeans Swing GUI Builder, so something that works without requiring special tampering with the code under test would be ideal.
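
    One family of options is a Swing UI-driving library such as FEST-Swing (Abbot, Jemmy, and UISpec4J are alternatives). The following is only a hedged sketch of what a FEST-Swing test can look like; the frame, component names, and behaviour are invented, and lookup by name assumes setName() has been called on the components, which builder-generated forms may need set explicitly.

      import java.awt.BorderLayout;
      import java.awt.event.ActionEvent;
      import java.awt.event.ActionListener;

      import javax.swing.JButton;
      import javax.swing.JFrame;
      import javax.swing.JLabel;

      import org.fest.swing.edt.GuiActionRunner;
      import org.fest.swing.edt.GuiQuery;
      import org.fest.swing.fixture.FrameFixture;
      import org.junit.After;
      import org.junit.Before;
      import org.junit.Test;

      public class GreetingFrameTest {

          private FrameFixture window;

          @Before
          public void setUp() {
              // Build the frame on the Event Dispatch Thread, as FEST expects.
              JFrame frame = GuiActionRunner.execute(new GuiQuery<JFrame>() {
                  @Override
                  protected JFrame executeInEDT() {
                      JFrame f = new JFrame("Greeting");
                      final JLabel label = new JLabel(" ");
                      label.setName("greetingLabel");
                      JButton button = new JButton("Say hello");
                      button.setName("helloButton");
                      button.addActionListener(new ActionListener() {
                          public void actionPerformed(ActionEvent e) {
                              label.setText("Hello, world!");
                          }
                      });
                      f.getContentPane().add(button, BorderLayout.NORTH);
                      f.getContentPane().add(label, BorderLayout.SOUTH);
                      f.pack();
                      return f;
                  }
              });
              window = new FrameFixture(frame);
              window.show();
          }

          @Test
          public void clickingTheButtonUpdatesTheLabel() {
              window.button("helloButton").click();   // components are looked up by name
              window.label("greetingLabel").requireText("Hello, world!");
          }

          @After
          public void tearDown() {
              window.cleanUp();
          }
      }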


  • Writing a Makefile.am to invoke googletest unit tests

    - by jmglov
    I am trying to add my first unit test to an existing open source project. Specifically, I added a new class, called audio_manager:

      src/audio/audio_manager.h
      src/audio/audio_manager.cc

    I created a src/test directory structure that mirrors the structure of the implementation files, and wrote my googletest unit tests:

      src/test/audio/audio_manager.cc

    Now I am trying to set up my Makefile.am to compile and run the unit test:

      src/test/audio/Makefile.am

    I copied Makefile.am from:

      src/audio/Makefile.am

    Does anyone have a simple recipe for me, or is it off to the cryptic automake documentation for me? :)


  • No tests found with test runner 'JUnit 4'

    - by lamisse
    Hello, my Java test used to work well from Eclipse. I don't know what changed, but when I relaunch the test from the Run menu I now get the following message:

      No tests found with test runner 'JUnit 4'

    In the .classpath file I have all the jar files, and at the end:

      <classpathentry exported="true" kind="con" path="org.eclipse.jdt.junit.JUNIT_CONTAINER/4"/>
      <classpathentry kind="output" path="bin"/>
      </classpath>

    Please help!
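
    For reference, the JUnit 4 runner discovers tests by the @Test annotation rather than by method name, so a class whose methods only follow the old test* naming convention (with no annotations), or a stale launch configuration pointing at the wrong class, is a typical way to hit this message. A minimal sketch of a class the JUnit 4 runner will pick up (the class and assertion are invented for illustration):

      import static org.junit.Assert.assertEquals;

      import org.junit.Test;

      public class CalculatorTest {

          // The JUnit 4 runner finds this method because of the @Test annotation;
          // a public void method without it is invisible to the runner.
          @Test
          public void addsTwoNumbers() {
              assertEquals(4, 2 + 2);
          }
      }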


  • How does rspec work with rails3 for integration tests?

    - by makevoid
    What I'm trying to achieve is to do integration tests with webrat in rails3, the way Yehuda does with test-unit in http://pivotallabs.com/talks/76-extending-rails-3 (minute 34). An example:

      describe SomeApp do
        it "should show the index page" do
          visit "/"
          body.should =~ /hello world/
        end
      end

    Does anyone know a way to do it?


  • DUnit: How to run tests?

    - by Ian Boyd
    How do I run TestCases from the IDE? I created a new project with a single, simple form:

      unit Unit1;

      interface

      uses
        Windows, Messages, SysUtils, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls;

      type
        TForm1 = class(TForm)
        private
        public
        end;

      var
        Form1: TForm1;

      implementation

      {$R *.DFM}

      end.

    Now I'll add a test case to check that pushing Button1 does what it should:

      unit Unit1;

      interface

      uses
        Windows, Messages, SysUtils, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls;

      type
        TForm1 = class(TForm)
          Button1: TButton;
          procedure Button1Click(Sender: TObject);
        private
        public
        end;

      var
        Form1: TForm1;

      implementation

      {$R *.DFM}

      uses
        TestFramework;

      type
        TForm1Tests = class(TTestCase)
        private
          f: TForm1;
        protected
          procedure SetUp; override;
          procedure TearDown; override;
        published
          procedure TestButton1Click;
        end;

      procedure TForm1.Button1Click(Sender: TObject);
      begin
        //todo
      end;

      { TForm1Tests }

      procedure TForm1Tests.SetUp;
      begin
        inherited;
        f := TForm1.Create(nil);
      end;

      procedure TForm1Tests.TearDown;
      begin
        f.Free;
        inherited;
      end;

      procedure TForm1Tests.TestButton1Click;
      begin
        f.Button1Click(nil);
        Self.CheckEqualsString('Hello, world!', f.Caption);
      end;

      end.

    Given what I've done (test code in the GUI project), how do I now trigger a run of the tests? If I push F9 then the form simply appears. Ideally there would be a button, or menu option, in the IDE saying Run DUnit Tests. Am I living in a dream world? A fantasy land, living in a gumdrop house on lollipop lane?


  • Loading SQL dump before running Django tests

    - by knutin
    I have a fairly complex Django project which makes it hard or impossible to use fixtures for loading data. What I would like to do is load a database dump from the production database server after all the tables have been created by the test runner and before the actual tests start running. I've tried various "magic" in MyTestCase.setUp(), but with no luck. Any suggestions would be most welcome. Thanks.


  • JPA - How to truncate tables between unit tests

    - by Theo
    I want to clean up the database after every test case without rolling back the transaction. I have tried DBUnit's DatabaseOperation.DELETE_ALL, but it does not work if a deletion violates a foreign key constraint. I know that I can disable foreign key checks, but that would also disable the checks for the tests themselves (which I want to prevent). I'm using JUnit 4, JPA 2.0 (EclipseLink), and Derby's in-memory database. Any ideas? Thanks, Theo
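
    One approach that keeps the constraints enabled is to issue native DELETE statements in an order that respects the foreign keys (children before parents). The sketch below is only an illustration: the persistence unit name "test-pu", the table names, and the resource-local transaction handling are assumptions, not taken from the question.

      import javax.persistence.EntityManager;
      import javax.persistence.EntityManagerFactory;
      import javax.persistence.Persistence;

      import org.junit.After;
      import org.junit.AfterClass;
      import org.junit.BeforeClass;

      public abstract class JpaTestBase {

          protected static EntityManagerFactory emf;

          // Hypothetical tables, listed child-first so no DELETE ever
          // violates a foreign key constraint.
          private static final String[] TABLES_CHILD_FIRST = {
              "ORDER_ITEM", "ORDERS", "CUSTOMER"
          };

          @BeforeClass
          public static void createFactory() {
              // "test-pu" is an assumed persistence unit pointing at in-memory Derby.
              emf = Persistence.createEntityManagerFactory("test-pu");
          }

          @After
          public void wipeTables() {
              EntityManager em = emf.createEntityManager();
              em.getTransaction().begin();
              for (String table : TABLES_CHILD_FIRST) {
                  em.createNativeQuery("DELETE FROM " + table).executeUpdate();
              }
              em.getTransaction().commit();
              em.close();
          }

          @AfterClass
          public static void closeFactory() {
              emf.close();
          }
      }

    Test classes would extend this base class; because the cleanup runs in its own committed transaction, it works regardless of what the test itself did with its transaction.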


  • Rails - How do you test that ActionMailer sent a specific email in tests

    - by adam
    Currently in my tests I do something like this to check that an email is queued to be sent:

      assert_difference('ActionMailer::Base.deliveries.size', 1) do
        get :create_from_spreedly, {:user_id => @logged_in_user.id}
      end

    But if a controller action can send two different emails, i.e. one to the user if sign-up goes fine or a notification to the admin if something went wrong, how can I test which one actually got sent? The code above would pass regardless.


  • How to write unit tests for an object having multiple properties

    - by jess
    Hi, I have various objects in my application, and each has an isValid method that checks whether the values of all its properties are set correctly (as per business rules). Now, to test that isValid returns false for each violation, I will have to write as many tests as there are rules checked in isValid. Is there a simpler way to do this? I am using MbUnit.
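
    Data-driven (row) tests are the usual way to collapse this into a single test, and MbUnit supports them through its row-test attributes. Since no code came with the question, the sketch below shows the same idea with JUnit 4's Parameterized runner and an invented Customer class; treat the names and rules as assumptions.

      import static org.junit.Assert.assertFalse;

      import java.util.Arrays;
      import java.util.Collection;

      import org.junit.Test;
      import org.junit.runner.RunWith;
      import org.junit.runners.Parameterized;
      import org.junit.runners.Parameterized.Parameters;

      @RunWith(Parameterized.class)
      public class CustomerIsValidTest {

          // Hypothetical object under test: one business rule per property.
          static class Customer {
              final String name;
              final String email;
              Customer(String name, String email) { this.name = name; this.email = email; }
              boolean isValid() {
                  return name != null && !name.isEmpty()
                      && email != null && email.contains("@");
              }
          }

          // One row per rule violation: adding a rule means adding a row,
          // not writing another test method.
          @Parameters
          public static Collection<Object[]> invalidCustomers() {
              return Arrays.asList(new Object[][] {
                  { new Customer(null, "x@example.com") },    // missing name
                  { new Customer("",   "x@example.com") },    // empty name
                  { new Customer("Ann", null) },              // missing email
                  { new Customer("Ann", "not-an-email") },    // malformed email
              });
          }

          private final Customer customer;

          public CustomerIsValidTest(Customer customer) {
              this.customer = customer;
          }

          @Test
          public void isValidReturnsFalseForEachViolation() {
              assertFalse(customer.isValid());
          }
      }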


  • Field name being converted in Unit Tests [rails]?

    - by yar
    I am noticing this strange behavior where one of my fields, receive_empresa_test_info, has worked fine even though it's always been referred to as receive_empresa_info. In functional tests, though, the real field name is receive_empresa_test_info. What is going on here? Might this be some part of the Rails environment that I'm missing during testing?


  • Navigate HTML Source while performing WatiN tests

    - by youwhut
    I am performing actions on the page during WatiN tests. What is the neatest method for asserting that certain elements are where they should be by evaluating the HTML source? I am scraping the source but looking for a clean way to navigate the tags pulled back.

    UPDATE: Right now I am thinking about grabbing certain elements within the HTML source using regular expressions and then analysing those to see if other elements exist within them. Other thoughts appreciated.


  • Does it make sense to write tests for legacy code when there is no time for a complete refactoring?

    - by is4
    I usually try to follow the advice of the book Working Effectively with Legacy Code. I break dependencies, move parts of the code to @VisibleForTesting public static methods and to new classes to make the code (or at least some part of it) testable, and I write tests to make sure that I don't break anything when I'm modifying or adding new functions (a sketch of that kind of extraction follows below). A colleague says that I shouldn't do this. His reasoning:

    - The original code might not work properly in the first place, and writing tests for it makes future fixes and modifications harder, since devs have to understand and modify the tests too.
    - If it's GUI code with some logic (~12 lines, 2-3 if/else blocks, for example), a test isn't worth the trouble since the code is too trivial to begin with.
    - Similar bad patterns could exist in other parts of the codebase, too (which I haven't seen yet; I'm rather new); it will be easier to clean them all up in one big refactoring. Extracting out logic could undermine this future possibility.

    Should I avoid extracting out testable parts and writing tests if we don't have time for a complete refactoring? Is there any disadvantage to this that I should consider?
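
    A hedged illustration of the extraction (the class, rules, and values are invented, not from the question): the branching logic moves out of a GUI event handler into a small static method marked with Guava's @VisibleForTesting, which gives JUnit something to exercise without a full refactoring.

      import static org.junit.Assert.assertEquals;

      import com.google.common.annotations.VisibleForTesting;

      import org.junit.Test;

      class DiscountPanel {

          // Before extraction this if/else chain lived inside an ActionListener
          // and could only be exercised by clicking through the GUI; as a pure
          // static method it can be tested directly.
          @VisibleForTesting
          static String discountLabel(boolean isMember, int orderTotal) {
              if (isMember && orderTotal > 100) {
                  return "20% off";
              } else if (isMember) {
                  return "10% off";
              } else {
                  return "No discount";
              }
          }
      }

      public class DiscountPanelTest {

          @Test
          public void memberWithLargeOrderGetsTwentyPercent() {
              assertEquals("20% off", DiscountPanel.discountLabel(true, 150));
          }

          @Test
          public void nonMemberGetsNoDiscount() {
              assertEquals("No discount", DiscountPanel.discountLabel(false, 150));
          }
      }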


  • Testing Workflows – Test-After

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-after.aspx

    In this post I'm going to outline a few common methods that can be used to increase the coverage of your test suite. This won't be yet another post on why you should be doing testing; there are plenty of those already out there. Assuming you know you should be testing, then comes the problem of how you actually fit that into your day job. When the opportunity to automate testing comes, do you take it, or do you even recognize it?

    There are a lot of ways (workflows) to go about creating automated tests, just like there are many workflows for writing a program. When writing a program you can take a top-down approach, where you write the main skeleton of the algorithm and call out to dummy stub functions, or a bottom-up approach, where the low-level functionality is fully implemented before it is quickly wired together at the end. Both approaches are perfectly valid in certain contexts.

    Each approach you are skilled at applying is another tool in your tool belt. The more vectors of attack you have on a problem, the better. So here is a short, incomplete list of some of the workflows that can be applied to increasing the amount of automation in your testing and the level of quality in general. Think of each workflow as an opportunity that is available for you to take.

    Test workflows basically fall into two categories: test first or test after. Test first is the best approach. However, this post isn't about the one and only best approach. I want to focus more on the lesser known, less ideal approaches that still provide an opportunity for adding tests. In this post I'll enumerate some test-after workflows. In my next post I'll cover test-first.

    Bug Reporting

    When someone calls you up or forwards you an email with a vague description of a bug, it's usually standard procedure to create or verify a reproduction plan for the bug via manual testing and log that in a bug tracking system. This can be problematic. Reproduction plans, when written down, often skip a step that seemed obvious to the tester at the time, or they might be missing some crucial environment setting.

    Instead of data entry into a bug tracking system, try opening up the test project and adding a failing unit test to prove the bug. The test project guarantees that all aspects of the environment are set up properly and no steps are missing. The language in the test project is much more precise than the English that goes into a bug tracking system.

    This workflow can easily be extended to enhancement requests as well as bug reporting.

    Exploratory Testing

    Exploratory testing comes in when you aren't sure how the system will behave in a new scenario. The scenario wasn't planned for in the initial system requirements and there isn't an existing test for it. By definition the system behaviour is "undefined".

    So write a new unit test to define that behaviour. Add assertions to the test to confirm your assumptions. The new test becomes part of the living system specification that is kept up to date with the test suite.

    Examples

    This workflow is especially good when developing APIs. When you are finally done with your production API, then comes the job of writing documentation on how to consume the API. Good documentation will also include code examples. Don't let these code examples merely exist in some accompanying manual; implement them in a test suite (a sketch of the idea follows).
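
    A minimal sketch of the idea, using an invented PriceCalculator API rather than anything from the post: the example that would normally sit in the manual lives in the test suite, so it fails the build the moment the API drifts.

      import static org.junit.Assert.assertEquals;

      import org.junit.Test;

      public class PriceCalculatorExamplesTest {

          // This test doubles as the "getting started" example in the documentation;
          // if the API changes, the example breaks loudly instead of going stale.
          @Test
          public void basicUsageExampleFromTheManual() {
              PriceCalculator calculator = new PriceCalculator();
              calculator.addItem("widget", 2, 9.99);
              assertEquals(19.98, calculator.total(), 0.001);
          }

          // Minimal stand-in so the sketch compiles; in a real project the
          // class under test lives in production code.
          static class PriceCalculator {
              private double total;
              void addItem(String sku, int quantity, double unitPrice) {
                  total += quantity * unitPrice;
              }
              double total() {
                  return total;
              }
          }
      }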

    Example tests and documentation do not have to be created after the production API is complete. It is best to write the example code (tests) as you go, just before the production code.

    Smoke Tests

    Every system has a typical use case. This represents the basic, core functionality of the system. If this fails after an upgrade, the end users will be hosed, and they will be scratching their heads as to how an update could possibly get released with this core functionality broken.

    The tests for this core functionality are referred to as "smoke tests". It is a good idea to have them automated and run with each build in order to avoid extreme embarrassment and angry customers.

    Coverage Analysis

    Code coverage analysis is a tool that reports how much of the production code base is exercised by the test suite. In Visual Studio this can be found under the Test main menu item.

    The tool will report a total number for the code coverage, which can be anywhere between 0 and 100%. Coverage analysis shouldn't be used strictly for numbers reporting. Companies shouldn't set minimum coverage targets that mandate that all projects must have at least 80% or 100% test coverage. These arbitrary requirements just invite gaming of the coverage analysis, which makes the numbers useless.

    The analysis tool will break down the coverage by the various classes and methods in the projects. Instead of focusing on the total number, drill down into this view and see which classes have high or low coverage. If you are surprised by a low number on a class, this is an opportunity to add tests.

    When drilling through the classes there will generally be two types of reaction to a surprisingly low test coverage number. The first reaction is recognition that there is low-hanging fruit to be picked: there may be some classes or methods that aren't being tested but easily could be. The other reaction is "OMG": this is where you find a critical piece of code that isn't under test. In both cases, go and add the missing tests.

    Test Refactoring

    The general theme of this post up to this point has been how to add more and more tests to a test suite. I'll step back from that a bit and remind you that every line of code is a liability. Each line of code has to be read and maintained, which costs money. This is true regardless of whether the code is production code or test code.

    Remember that the primary goal of the test suite is to be easy to read, so that people can easily determine the specifications of the system. Make sure that adding more and more tests doesn't interfere with this primary goal.

    Perform code reviews on the test suite as often as on production code. Hold the test code up to the same high readability standards as the production code. If the tests are hard to read, change them. Look to remove duplication: duplicate setup code shared between two or more test methods can be moved to a shared function (as sketched below). Entire test methods can be removed if the scenario they test is covered by other tests. It's OK to delete a test that isn't pulling its own weight anymore.

    Remember to only start refactoring when all the tests are green. Don't refactor the tests and the production code at the same time. An automated test suite can be thought of as a double-entry bookkeeping system: the unchanging, passing production code serves as the tests for the test suite while you refactor the tests.
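
    A hedged sketch of that duplication removal, with an invented Account class rather than anything from the post: the shared arrange step moves into an @Before method so each test states only what is specific to it.

      import static org.junit.Assert.assertFalse;
      import static org.junit.Assert.assertTrue;

      import org.junit.Before;
      import org.junit.Test;

      public class AccountTest {

          private Account account;

          // Setup that used to be copied into every test method now lives in one place.
          @Before
          public void newAccountWithOpeningBalance() {
              account = new Account();
              account.deposit(100);
          }

          @Test
          public void withdrawalWithinBalanceSucceeds() {
              assertTrue(account.withdraw(40));
          }

          @Test
          public void withdrawalBeyondBalanceIsRejected() {
              assertFalse(account.withdraw(140));
          }

          // Minimal stand-in so the sketch compiles.
          static class Account {
              private int balance;
              void deposit(int amount) { balance += amount; }
              boolean withdraw(int amount) {
                  if (amount > balance) {
                      return false;
                  }
                  balance -= amount;
                  return true;
              }
          }
      }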

    As with all refactoring, it is best to fit this into your regular work rather than asking for time later to get it done. Fit it into the standard red-green-refactor cycle. The refactor step applies not only to production code but also to the tests, though not at the same time. Perhaps the cycle should be called red-green-refactor production-refactor tests (not quite as catchy).

    That about covers most of the test-after workflows I can think of. In my next post I'll get into test-first workflows.

