Search Results

Search found 4783 results on 192 pages for 'tests'.

  • Test Results window in VS2008 not showing results

    - by TimK
    I have an existing solution that has been working for a long time, containing around 600 tests in a couple of test projects. I recently moved to a new PC - it's Win7 x64 - and installed a fresh copy of VS2008.

    When I first opened the solution on the new machine, the Test List Editor was completely empty. Trying to create a new test list caused the editor to refresh, and now it shows my test lists, but they're acting funny. I can select tests in the lists and run them, but the results window doesn't usually update automatically to show the results of the latest run. It has updated correctly a couple of times when running a single test, but even that is not consistent. The only way I can view the results is by manually going to the Test Runs window and connecting to individual test runs. When I do that, the results show up in the results list, but I can't check them to re-run the failed tests - the check boxes are all disabled.

    I should describe the way it used to work, in case that was unusual: I used to select some tests from the Test Lists window and tell it to run them; the results window would clear itself and then display the results from the current run. I could then check any tests that I wanted to re-run and use the run/debug button in the results window to do so.

    Any ideas what's going on here?

  • Debug using MbUnit/Gallio 3.1

    - by user314096
    When I use the [Debug] button in Gallio, the breakpoints in my unit tests are not being hit. The unit tests are written with MbUnit/Gallio. I am using MbUnit/Gallio version 3.1 build 397 with Visual Studio 2010 Beta 2. The unit tests run to completion in Gallio Icarus, but they run past the breakpoints. I see the symbol tables loading in VS, but execution does not stop at the expected breakpoint, so I am unable to debug.

  • NSInteger differences between CLI and GUI ?

    - by d11wtq
    I've been building a framework and writing unit tests in GHUnit. One of my framework's accessor methods returns an NSInteger. I assert the expected value in the tests like this:

        GHAssertEquals(1320, request.port, @"Port number should be 1320");

    When running my tests with an AppKit UI-based frontend, this assertion passes. However, when I run my tests on the command line, it fails with a type mismatch unless I type-cast my hard-coded 1320 as (NSInteger). What's causing the difference in the way the integer is interpreted by the compiler? Is xcodebuild on the command line using a different data type for hard-coded integers?

  • Getting black images with selenium.captureScreenshot

    - by Lidia
    I'm executing Selenium tests with TestNG, started on a remote system with Selenium RC via Hudson (over an SSH connection). The remote system is Windows XP with MKS Toolkit installed, hence SSH. The tests are NOT executed as a Windows service.

    I've tried using both the captureScreenshot and captureEntirePageScreenshot methods. The first one always produces a black image. The second one creates the correct screenshot, but it only works on Firefox, and our tests usually pass on Firefox and fail in other browsers, so it is crucial to capture screenshots for the other browsers (mainly IE and Safari).

    The tests are run in parallel, with many browser windows open at the same time. I'm not certain whether this is what's causing the problem. Any thoughts will be appreciated.

  • How to test UI interaction of Silverlight dialogs?

    - by Bernard Vander Beken
    I am using Silverlight 3.0 unit testing, Silverlight Toolkit November 2009 version. Apart from unit tests, it allows you to do UI interaction tests, typically using AutomationPeer subclasses (e.g. ButtonAutomationPeer to interact with a Button). Are there AutomationPeer classes to test the interaction with the following?

        - OpenFileDialog
        - SaveFileDialog
        - MessageBox

    In unit tests it would be possible to stub these, but for integration and browser testing it would be great to have this testable.
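
    For reference, the ButtonAutomationPeer interaction mentioned above looks roughly like this (a minimal sketch, assuming a Button already placed on the test surface; whether comparable peers exist for the three dialogs is exactly what is being asked):

        using System.Windows.Automation.Peers;
        using System.Windows.Automation.Provider;
        using System.Windows.Controls;

        public static class ButtonDriver
        {
            // Clicks a Button the way a UI interaction test would:
            // through its automation peer rather than by calling a handler.
            public static void Click(Button button)
            {
                var peer = new ButtonAutomationPeer(button);
                var invoker = (IInvokeProvider)peer.GetPattern(PatternInterface.Invoke);
                invoker.Invoke(); // raises the Click event asynchronously
            }
        }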

  • Surprising results with .NET multi-threading algorithm

    - by Myles J
    Hi, I recently wrote a C# console timetabling algorithm that is based on a combination of a genetic algorithm with a few brute-force routines thrown in. The initial results were promising, but I figured I could improve the performance by splitting the brute-force routines up to run in parallel on multi-processor architectures. To do this I used the well-documented producer/consumer model (as documented in this fantastic article: http://www.albahari.com/threading/part2.aspx#_ProducerConsumerQWaitHandle). I changed my code to create one thread per logical processor during the brute-force routines.

    The performance gains on my workstation were very pleasing. I am running Windows XP on the following hardware:

        - Intel Core 2 Quad CPU, 2.33 GHz
        - 3.49 GB RAM

    Initial tests indicated average performance gains of approx 40% when using 4 threads. The next step was to deploy the new multi-threaded version of the algorithm to our higher-spec UAT server. Here is the spec of our UAT server:

        - Windows 2003 Server R2 Enterprise x64
        - 8 CPUs (quad-core AMD Opteron), 2.70 GHz
        - 255 GB RAM

    After running the first round of tests we were all extremely surprised to find that the algorithm actually runs slower on the high-spec W2003 server than on my local XP workstation! In fact, the tests seem to indicate that it doesn't matter how many threads are generated (tests were run with the app spawning between 2 and 32 threads); the algorithm always runs significantly slower on the UAT W2003 server. How could this be? Surely the app should run faster on an 8-CPU (quad-core) machine than on my Core 2 Quad workstation? Why are we seeing no performance gains from multi-threading on the W2003 server, whilst the XP workstation tests show gains of up to 40%?

    Any help or pointers would be appreciated. Regards, Myles
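
    For reference, a minimal sketch of the producer/consumer queue pattern described above (modeled loosely on the linked Albahari article, with one worker thread per logical processor; this is not the poster's actual code):

        using System;
        using System.Collections.Generic;
        using System.Threading;

        class WorkQueue : IDisposable
        {
            readonly object _locker = new object();
            readonly Queue<Action> _tasks = new Queue<Action>();
            readonly List<Thread> _workers = new List<Thread>();

            public WorkQueue(int workerCount) // e.g. Environment.ProcessorCount
            {
                for (int i = 0; i < workerCount; i++)
                {
                    Thread t = new Thread(Consume);
                    _workers.Add(t);
                    t.Start();
                }
            }

            public void Enqueue(Action task)
            {
                lock (_locker)
                {
                    _tasks.Enqueue(task);
                    Monitor.Pulse(_locker); // wake one waiting worker
                }
            }

            void Consume()
            {
                while (true)
                {
                    Action task;
                    lock (_locker)
                    {
                        while (_tasks.Count == 0) Monitor.Wait(_locker);
                        task = _tasks.Dequeue();
                    }
                    if (task == null) return; // null is the shutdown signal
                    task();
                }
            }

            public void Dispose()
            {
                foreach (Thread t in _workers) Enqueue(null); // one shutdown signal per worker
                foreach (Thread t in _workers) t.Join();
            }
        }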

  • How to pass variables using Unittest suite

    - by chrissygormley
    Hello, I have tests using unittest. I have a test suite and I am trying to pass variables through into each of the tests. The below code shows the test suite used:

        class suite():
            def suite(self):
                # Function stores all the modules to be tested
                modules_to_test = ('testmodule1', 'testmodule2')
                alltests = unittest.TestSuite()
                for module in map(__import__, modules_to_test):
                    alltests.addTest(unittest.findTestCases(module))
                return alltests

    It calls the tests; I would like to know how to pass variables into the tests from this class. An example test script is below:

        class TestThis(unittest.TestCase):
            def runTest(self):
                assertEqual('1', '1')

        class TestThisTestSuite(unittest.TestSuite):
            # Tests to be tested by test suite
            def makeTestThisTestSuite():
                suite = unittest.TestSuite()
                suite.addTest("TestThis")
                return suite

        def suite():
            return unittest.makeSuite(TestThis)

        if __name__ == '__main__':
            unittest.main()

    So from the suite() class I would like to enter a value to change the value in the assert, e.g. assertEqual(self.value, '1'). I have tried sys.argv for unittest and it doesn't seem to work. Thanks for any help.

  • Too Many Public Methods Forced by Test Driven Development

    - by RoryG
    A very specific question from a novice to TDD: I separate my tests and my app into different packages. Thus, most of my app methods have to be public for the tests to access them. As I progress, it becomes obvious that some methods could become private, but if I make that change, the tests that access them won't work. Am I missing a step, or doing something wrong, or is this just one downfall of TDD?

  • TDD test data loading methods

    - by Dave Hanson
    I am a TDD newb and I would like to figure out how to test the following code. I am trying to write my tests first, but I am having trouble creating a test that touches my DataAccessor; I can't figure out how to fake it. I've done the "extend the Shipment class and override the Load() method" trick to continue testing the object, but I feel as though I end up unit testing my mock objects/stubs and not my real objects. I thought in TDD the unit tests were supposed to hit ALL of the methods on the object; however, I can never seem to test the real Load() code, only the overridden mock Load().

    My test was to write an object that contains a list of orders based off of a shipment number. I have an object that loads itself from the database:

        public class Shipment
        {
            //member variables
            protected List<string> _listOfOrders = new List<string>();
            protected string _id = "";

            //public properties
            public List<string> ListOrders
            {
                get { return _listOfOrders; }
            }

            public Shipment(string id)
            {
                _id = id;
                Load();
            }

            //PROBLEM METHOD
            // whenever I write code that needs this Shipment object, this method
            // tries to hit the DB and fubars my tests; the only way to get around
            // it is to have all my tests run on a fake Shipment object.
            protected void Load()
            {
                _listOfOrders = DataAccessor.GetOrders(_id);
            }
        }

    I create my fake Shipment class to test the rest of the class's methods, but I can't ever test the real Load() method without having an actual DB connection:

        public class FakeShipment : Shipment
        {
            protected new void Load()
            {
                _listOfOrders = new List<string>();
            }
        }

    Any thoughts? Please advise. Dave
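
    For reference, one commonly suggested way around this is to put the data access behind an interface and inject it, so the real Load() logic can run against a fake source without a database. A minimal sketch with hypothetical names (IOrderSource is not part of the question):

        using System.Collections.Generic;

        public interface IOrderSource
        {
            List<string> GetOrders(string shipmentId);
        }

        public class Shipment
        {
            protected List<string> _listOfOrders;
            protected string _id;

            public List<string> ListOrders { get { return _listOfOrders; } }

            public Shipment(string id, IOrderSource source)
            {
                _id = id;
                _listOfOrders = source.GetOrders(_id); // real or fake source
            }
        }

        // In a test, a hand-rolled fake stands in for the database:
        public class FakeOrderSource : IOrderSource
        {
            public List<string> GetOrders(string shipmentId)
            {
                return new List<string> { "order-1", "order-2" };
            }
        }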

  • How to get Eclipse + PyDev + App Engine + Unit testing to work?

    - by PEZ
    I want to run my unit tests for a Python Google App Engine project using Run As = Python unit-test. But when I try that, all my Model tests bail with the error message: BadArgumentError: app must not be empty. Has anyone got this to work? NB: The tests run fine using Nose --with-gae, but I want the PyDev integration, with hyperlinking of resources and such.

  • Is it possible to compile IronRuby code to a .NET assembly (EXE or DLL)

    - by Chris Ammerman
    My scenario consists of the following points:

        - I have a packaged software product I am developing in C#
        - Since it is a packaged product, the public interfaces of the assemblies need to be tightly controlled...
        - All assemblies are strong-named
        - Any classes that don't absolutely have to be "public" are "internal"
        - I want to write unit tests for those "internal" classes, since they are the bulk of the code
        - And finally... I want to try writing the unit tests in Ruby.

    Since the unit tests would be external to the assembly containing the code under test, the assemblies under test would each need to have an "InternalsVisibleTo" attribute specifying the name of the unit test assembly. Which of course would mean that the Ruby unit tests would have to compile down to a .NET assembly so they can be given access in this way.

    Can this be done? If so, how? All I can find on the web about "compiling IronRuby" is about building the actual IronRuby runtime from source.
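
    For reference, the InternalsVisibleTo attribute lives in the assembly under test, and because the assemblies here are strong-named it must name the test assembly together with its full public key (the assembly name and key below are placeholders, not values from the question):

        using System.Runtime.CompilerServices;

        // Typically placed in AssemblyInfo.cs of the assembly under test.
        // The real public key is a much longer hex string; it can be read
        // from the signed test assembly with "sn -Tp".
        [assembly: InternalsVisibleTo("MyProduct.Tests, PublicKey=00240000048000009400...")]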

  • How can I specify JUnit test dependencies?

    - by Egon Willighagen
    Our toolkit has over 15000 JUnit tests, and many tests are known to fail if some other test fails. For example, if the method X.foo() uses functionality from Y.foo() and YTest.testFoo() fails, then XTest.testFoo() will fail too. Obviously, XTest.testFoo() can also fail because of problems specific to X.foo(). While this is fine and I still want both tests run, it would be nice if one could annotate a test dependency, with XTest.testFoo() pointing to YTest.testFoo(). This way, one could immediately see what functionality used by X.foo() is also failing, and what is not. Is there such an annotation available in JUnit or elsewhere? Something like:

        public class YTests {
            @Test
            @DependsOn(method=org.example.tests.YTest#testFoo)
            public void testFoo() {
                // Assert.something();
            }
        }

  • Opposite of Bloom filter?

    - by abc
    Hi, I'm trying to optimize a piece of software which basically runs millions of tests. These tests are generated in such a way that there can be some repetitions. Of course, I don't want to spend time running tests which I have already run if I can avoid it efficiently. So I'm thinking about using a Bloom filter to store the tests which have already been run. However, the Bloom filter errs on the unsafe side for me: it gives false positives. That is, it may report that I've run a test which I haven't. Although this could be acceptable in the scenario I'm working on, I was wondering if there's an equivalent to a Bloom filter that errs on the opposite side, that is, one giving only false negatives. I've skimmed through the literature without any luck.
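
    For reference, one structure with the error profile being asked about is a fixed-size, direct-mapped table that stores the items themselves: a hit is always genuine because the stored item is compared for equality, but inserts can evict earlier entries, so misses may be false negatives. A minimal hypothetical sketch (not from the post; assumes non-null items):

        class RecentSet<T>
        {
            readonly T[] _slots;
            readonly bool[] _used;

            public RecentSet(int capacity)
            {
                _slots = new T[capacity];
                _used = new bool[capacity];
            }

            int IndexOf(T item)
            {
                return (item.GetHashCode() & 0x7fffffff) % _slots.Length;
            }

            public void Add(T item)
            {
                int i = IndexOf(item);
                _slots[i] = item; // may evict an earlier item (false negative later)
                _used[i] = true;
            }

            // True means the item was definitely added; false may be wrong.
            public bool DefinitelySeen(T item)
            {
                int i = IndexOf(item);
                return _used[i] && _slots[i].Equals(item);
            }
        }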

  • Questions about TDD and unit testing in ASP.NET MVC

    - by Diego
    I've been searching on how to do unit testing and find that it's quite easy, but what I want to know is: in an ASP.NET MVC application, what is REALLY important to test, and which methods do you use? I just can't find a clear answer about WHAT TO REALLY TEST when writing unit tests. I just don't want to write unnecessary tests and lose development time doing overkill tests.

  • Unit testing an iPhone static library with XCode 3

    - by teabot
    I am writing a number of static libraries for the iPhone and also wish to have suites of unit tests. Xcode 3 provides templates for both static libraries and unit tests, but I am wondering how they should fit together in a static library project. In my static library project I have created a target for unit testing, but I expect to also create an executable to kick off the unit tests that run against the classes in the static library. What is the procedure for doing this?

  • How do I test controllers and views?

    - by ryeguy
    I'm using Rails for the first time, and I love how test-oriented it is and how it encourages you to write tests. I'm just having a hard time figuring out what I should be testing when I test controllers and views. I know that you should test redirects and authorization in the controller tests, but what else? And what should go in view tests? If I'm "following the rules" and only putting loops, conditionals, and output in my views, then what is there left to test?

  • What's the best practice to set up testing for ASP.NET MVC? What to use/process/etc.?

    - by melaos
    Hi there, I'm trying to learn how to properly set up testing for an ASP.NET MVC project. From what I've been reading here and there thus far, the definition of legacy code kind of piques my interest, where it mentions that legacy code is any code without unit tests. I did my project in a hurry, not having the time to properly set up unit tests for the app, and I'm still learning how to properly do TDD and unit testing at the same time.

    Then I came upon Selenium IDE/RC and was using it to test on the browser end. It was during that time, too, that I came upon the concept of integration testing. From my understanding, it seems that unit testing should be done to define the test and basic assumptions of each function, and if the function is dependent on something else, that something else needs to be mocked, so that the tests are always singular and can run fast.

    Questions: am I right to say that the project should have started with unit tests with proper mocks, using something like Rhino Mocks, and then anything else which requires a third-party DLL, database data access, etc. should be done via integration testing using Selenium? I have a function which calls a third-party DLL, and I'm not sure whether to write a unit test in NUnit that just instantiates the object and passes it some dummy data (which breaks the mocking part), or to just cover that part in my Selenium integration testing when I submit my forms and call the DLL. And for user acceptance tests, is it safe to say we can just use Selenium again?

    Am I missing something, or is there a better way/framework? I'm trying to put in more tests for regression testing, and to ensure that nothing breaks when we put in new features. I also like the idea of TDD because it helps to better define the function, sort of like meta-documentation.

    Thanks! I hope this question isn't too subjective, because I need it for my case.
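
    For reference, a minimal sketch of the unit-test-with-mocks half of this, using Rhino Mocks 3.5-style stub syntax (all type and member names here are hypothetical, not from the question):

        using NUnit.Framework;
        using Rhino.Mocks;

        public interface IPriceService
        {
            decimal GetPrice(string sku);
        }

        public class Cart
        {
            readonly IPriceService _prices;
            public Cart(IPriceService prices) { _prices = prices; }
            public decimal Total(string sku, int qty) { return _prices.GetPrice(sku) * qty; }
        }

        [TestFixture]
        public class CartTests
        {
            [Test]
            public void Total_multiplies_unit_price_by_quantity()
            {
                // The external dependency is stubbed out, so the test
                // stays isolated and fast - no DLL or database involved.
                var prices = MockRepository.GenerateStub<IPriceService>();
                prices.Stub(p => p.GetPrice("ABC")).Return(10m);

                var cart = new Cart(prices);

                Assert.AreEqual(30m, cart.Total("ABC", 3));
            }
        }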

  • Testing with Qt's QTestLib module

    - by ak
    Hi, I started writing some tests with Qt's unit testing system. How do you usually organize the tests? Is it one test class per module class, or do you test the whole module with a single test class? The Qt docs (or some podcast that I recently watched) suggested following the former strategy.

    I want to write tests for a module. The module provides only one class that is going to be used by the module user, but there is a lot of logic abstracted in other classes, which I would also like to test, besides testing the public class. The problem is that Qt's proposed way to run tests involves the QTEST_MAIN macro:

        QTEST_MAIN(TestClass)
        #include "test_class.moc"

    and eventually one test program is capable of testing just one test class. And it kinda sucks to create test projects for every single class in the module. Of course, one could take a look at the QTEST_MAIN macro, rewrite it, and run other test classes. But is there something that works out of the box?

  • Easymock vs Mockito: Design vs Maintainability?

    - by RAbraham
    One way of thinking about this is:

        - If we care about the design of the code, then EasyMock is the better choice, as it gives you feedback through its concept of expectations.
        - If we care about the maintainability of tests (easier to read and write, and less brittle tests that are not affected much by change), then Mockito seems a better choice.

    My questions are:

        - If you have used EasyMock in large-scale projects, do you find that your tests are harder to maintain?
        - What are the limitations of Mockito (other than endo-testing)?

  • Why is RSpec so slow under Rails?

    - by Adrian Dunston
    Whenever I run rspec tests for my Rails application it takes forever and a day of overhead before it actually starts running tests. Why is rspec so slow? Is there a way to speed up Rails' initial load or single out the part of my Rails app I need (e.g. ActiveRecord stuff only) so it doesn't load absolutely everything to run a few tests?

  • Programmatically gathering NUnit results

    - by skb
    Hi. I am running some NUnit tests automatically when my nightly build completes. I have a console application which detects the new build, copies the built MSIs to a local folder, and deploys all of my components to a test server. After that, I have a bunch of tests in NUnit DLLs that I run by executing "nunit-console.exe" using Process/ProcessStartInfo. My question is: how can I programmatically get the numbers for total/successful/failed tests?
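
    For reference, one approach is to have nunit-console write its XML results file and read the counts from that. A minimal sketch, assuming the NUnit 2.x result schema (a root <test-results> element with total/failures/errors attributes; the switch name and attributes should be verified against your NUnit version):

        using System;
        using System.Diagnostics;
        using System.Xml;

        class NUnitRunner
        {
            static void Main()
            {
                // Run the tests and direct the XML results to a known file.
                var info = new ProcessStartInfo("nunit-console.exe",
                    "MyTests.dll /xml:results.xml") { UseShellExecute = false };
                using (Process p = Process.Start(info)) { p.WaitForExit(); }

                // Read the counts back out of the results file.
                var doc = new XmlDocument();
                doc.Load("results.xml");
                XmlElement root = doc.DocumentElement; // <test-results ...>
                int total = int.Parse(root.GetAttribute("total"));
                int failures = int.Parse(root.GetAttribute("failures"));
                int errors = int.Parse(root.GetAttribute("errors"));
                Console.WriteLine("Passed: {0}, Failed: {1}",
                    total - failures - errors, failures + errors);
            }
        }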

  • Grails - Link checking as part of continuous integration

    - by Reverend Gonzo
    So, we have a Grails app set up with a Hudson CI build process. We're running unit tests, integration tests, and are about to set up Selenium for some functional tests as well. However, are there any good ways of fully testing a site's links to make sure nothing has broken in a release? I know there are link checkers in general, but I'd like this to be a part of the build process, so a build outright fails if something isn't right.
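
    For reference, the shape of such a check in any stack is a test that fails the build when a known page stops responding; a minimal sketch (written here in C#/NUnit purely for illustration, with placeholder URLs; a real checker would crawl each page for its links):

        using System.Net;
        using NUnit.Framework;

        [TestFixture]
        public class LinkCheckTests
        {
            static readonly string[] Urls =
            {
                "http://localhost:8080/app/",
                "http://localhost:8080/app/about",
            };

            [Test]
            public void AllKnownPagesRespond()
            {
                foreach (string url in Urls)
                {
                    var request = (HttpWebRequest)WebRequest.Create(url);
                    request.Method = "HEAD";
                    // GetResponse throws on HTTP error codes, which also
                    // fails the test (and therefore the build).
                    using (var response = (HttpWebResponse)request.GetResponse())
                    {
                        Assert.AreEqual(HttpStatusCode.OK, response.StatusCode, url);
                    }
                }
            }
        }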

  • Jenkins plugin for different types of slaves

    - by user1195996
    We have some tests that need to be run on multiple types of specific hardware. It's possible that these tests might pass on some pieces of hardware but fail on others, and we want to know where they work and where they fail. So, for certain tests, we would like to provide a list of hardware they need to be tested on. We'd like to put all the needed hardware in a pool that Jenkins has access to, and then have Jenkins run the right tests on the right hardware, depending on the hardware list that comes with the test. And of course we'd like to keep track of which test worked where. Is there a plugin for Jenkins that can handle this sort of thing? Has anyone else solved this sort of problem?

  • PHP specifying a fixed include source for scripts in different directories

    - by Extrakun
    I am currently doing unit testing and use folders to organize my test cases. All cases pertaining to managing user accounts, for example, go under \tests\accounts. Over time, there are more test cases, and I have begun to separate the cases by type, such as \tests\accounts\create, \tests\accounts\update, etc. However, one annoying problem is that I have to specify the path to a set of common includes. I have to use includes like this:

        include_once ("../../../../autoload.php");
        include_once ("../../../../init.php");

    A test case one level up, in \tests\accounts\, would require a change to the includes (one less directory level down). Is there any way to have them somehow locate my two common includes? I understand I could set include paths in my PHP configuration or use server environment variables, but I would like to avoid such solutions, as they make the application less portable and coupled to another layer the programmer can't control (some web hosts don't allow changes to PHP configuration settings, for example).
