Search Results

Search found 7802 results on 313 pages for 'unit tests'.

Page 44/313 | < Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >

  • How can I specify JUnit test dependencies?

    - by Egon Willighagen
    Our toolkit has over 15000 JUnit tests, and many tests are known to fail if some other test fails. For example, if the method X.foo() uses functionality from Y.foo() and YTest.testFoo() fails, then XTest.testFoo() will fail too. Obviously, XTest.testFoo() can also fail because of problems specific to X.foo(). While this is fine and I still want both tests to run, it would be nice if one could annotate a test dependency, with XTest.testFoo() pointing to YTest.testFoo(). This way, one could immediately see which functionality used by X.foo() is also failing and which is not. Is there such an annotation available in JUnit or elsewhere? Something like:

        public class XTests {
            @Test
            @DependsOn(method = org.example.tests.YTest#testFoo)
            public void testFoo() {
                // Assert.something();
            }
        }
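
    As far as I know, JUnit itself has no such annotation out of the box; TestNG, however, does support declared dependencies via groups. A minimal sketch for comparison only (class, method, and group names are made up, and the X/Y classes from the question are assumed):

        import org.testng.annotations.Test;

        public class DependencyExampleTest {

            // Stands in for YTest.testFoo() from the question.
            @Test(groups = { "y-foo" })
            public void yFoo() {
                // assertions against Y.foo() would go here
            }

            // Stands in for XTest.testFoo(); TestNG reports it as skipped (not failed)
            // when yFoo fails, so a broken Y.foo() shows up as one failure plus one skip.
            @Test(dependsOnGroups = { "y-foo" })
            public void xFoo() {
                // assertions against X.foo() would go here
            }
        }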

    Read the article

  • How do I test controllers and views?

    - by ryeguy
    I'm using Rails for the first time, and I love how test-oriented it is and how it encourages you to write tests. I'm just having a hard time figuring out what I should be testing when I test controllers and views. I know that you should test redirects and authorization in the controller tests, but what else? And what should go in view tests? If I'm "following the rules" and only putting loops, conditionals, and echoes in my views, then what is there left to test?
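
    For reference, a minimal sketch of the kind of functional test this usually boils down to, assuming a PostsController, fixtures, and a login_path route (all placeholders, not from the question):

        require 'test_helper'

        class PostsControllerTest < ActionController::TestCase
          test "index renders successfully" do
            get :index
            assert_response :success
            assert_template 'index'
            assert_not_nil assigns(:posts)    # the controller exposed the data the view needs
          end

          test "edit redirects guests to the login page" do
            get :edit, :id => posts(:one).id
            assert_redirected_to login_path   # authorization behaviour
          end
        end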

    Read the article

  • Testing with Qt's QTestLib module

    - by ak
    Hi, I started writing some tests with Qt's unit testing system. How do you usually organize the tests? Is it one test class per module class, or do you test the whole module with a single test class? The Qt docs (or some podcast that I recently watched) suggested following the former strategy. I want to write tests for a module. The module provides only one class that is going to be used by the module user, but there is a lot of logic abstracted into other classes, which I would also like to test besides the public class. The problem is that Qt's proposed way to run tests involves the QTEST_MAIN macro:

        QTEST_MAIN(TestClass)
        #include "test_class.moc"

    so in the end one test program can test just one test class. And it kinda sucks to create test projects for every single class in the module. Of course, one could take a look at the QTEST_MAIN macro, rewrite it, and run other test classes. But is there something that works out of the box?
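
    One common workaround (a sketch, not an out-of-the-box Qt feature; the test class names and headers are hypothetical) is to drop QTEST_MAIN and write a small main() that runs several test classes in one binary, OR-ing their exit codes:

        #include <QtTest/QtTest>
        #include "test_public_class.h"
        #include "test_internal_helper.h"

        int main(int argc, char *argv[])
        {
            // construct a QApplication app(argc, argv); first if any test exercises widgets
            int status = 0;

            {
                TestPublicClass tc;
                status |= QTest::qExec(&tc, argc, argv);
            }
            {
                TestInternalHelper tc;
                status |= QTest::qExec(&tc, argc, argv);
            }

            return status;   // non-zero if any of the test classes had failures
        }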

    Read the article

  • JUnit Test method with randomized nature

    - by Peter
    Hey, I'm working on a small project for myself at the moment and I'm using it as an opportunity to get acquainted with unit testing and maintaining proper documentation. I have a Deck class which represents a deck of cards (it's very simple and, to be honest, I can be sure that it works without a unit test, but like I said I'm getting used to using unit tests), and it has a shuffle() method which changes the order of the cards in the deck. The implementation is very simple and will certainly work:

        public void shuffle() {
            Collections.shuffle(this.cards);
        }

    But how could I implement a unit test for this method? My first thought was to check whether the top card of the deck was different after calling shuffle(), but there is of course the possibility that it would be the same. My second thought was to check whether the entire order of the cards had changed, but again they could possibly end up in the same order. So, how could I write a test that ensures this method works in all cases? And, in general, how can you unit test methods whose outcome depends on some randomness? Cheers, Pete
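
    A sketch of two tests that stay meaningful despite the randomness, assuming the Deck exposes its cards and (for the second test) a shuffle(Random) overload delegating to Collections.shuffle(cards, random) — both assumptions, not part of the question's class:

        import static org.junit.Assert.*;
        import java.util.*;
        import org.junit.Test;

        public class DeckTest {

            @Test
            public void shuffleKeepsExactlyTheSameCards() {
                Deck deck = new Deck();
                List<Card> before = new ArrayList<Card>(deck.getCards());
                deck.shuffle();
                // For a deck of distinct cards: same set + same size means the same cards,
                // whatever order they ended up in.
                assertEquals(new HashSet<Card>(before), new HashSet<Card>(deck.getCards()));
                assertEquals(before.size(), deck.getCards().size());
            }

            @Test
            public void shuffleWithSeededRandomIsReproducible() {
                Deck a = new Deck();
                Deck b = new Deck();
                a.shuffle(new Random(42L));   // hypothetical overload: Collections.shuffle(cards, random)
                b.shuffle(new Random(42L));
                assertEquals(a.getCards(), b.getCards());
            }
        }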

    Read the article

  • Caching result of setUp() using Python unittest

    - by dbr
    I currently have a unittest.TestCase that looks like:

        class test_appletrailer(unittest.TestCase):
            def setUp(self):
                self.all_trailers = Trailers(res = "720", verbose = True)

            def test_has_trailers(self):
                self.failUnless(len(self.all_trailers) > 1)

            # ..more tests..

    This works fine, but the Trailers() call takes about 2 seconds to run. Given that setUp() is called before each test is run, the tests now take almost 10 seconds to run (with only 3 test functions). What is the correct way of caching the self.all_trailers variable between tests? Removing the setUp function and doing:

        class test_appletrailer(unittest.TestCase):
            all_trailers = Trailers(res = "720", verbose = True)

    ..works, but then it claims "Ran 3 tests in 0.000s", which is incorrect. The only other way I could think of is to have a cache_trailers global variable (which works correctly, but is rather horrible):

        cache_trailers = None

        class test_appletrailer(unittest.TestCase):
            def setUp(self):
                global cache_trailers
                if cache_trailers is None:
                    cache_trailers = self.all_trailers = all_trailers = Trailers(res = "720", verbose = True)
                else:
                    self.all_trailers = cache_trailers
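
    A sketch of the now-standard fix, assuming Python 2.7+ (or the unittest2 backport) and that Trailers is importable: setUpClass runs once per test class rather than once per test:

        import unittest

        class test_appletrailer(unittest.TestCase):
            @classmethod
            def setUpClass(cls):
                # Runs once for the whole class, so the 2-second call happens only once;
                # tests still read it as self.all_trailers via the class attribute.
                cls.all_trailers = Trailers(res="720", verbose=True)

            def test_has_trailers(self):
                self.assertTrue(len(self.all_trailers) > 1)

        if __name__ == "__main__":
            unittest.main()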

    Read the article

  • Convincing why testing is good

    - by FireAphis
    Hello, In my team of real-time-embedded C/C++ developers, most people don't have any culture of testing their code beyond casual manual sanity checks. I personally strongly believe in the advantages of autonomous automatic tests, but when I try to convince others I get some recurring arguments like: We will spend more time on writing the tests than writing the code. It takes a lot of effort to maintain the tests. Our code is spaghetti; there is no way we can unit-test it. Our requirements are not sealed – we'll have to rewrite all the tests every time the requirements change. Now, I'd gladly hear any convincing tips and advice, but what I am really looking for are references to research, articles, books or serious surveys that show (preferably in numbers) how testing is worth the effort. Something like "We at IBM/Microsoft/Google, surveying 3475 active projects, found that putting 50% more development time into testing decreased the time spent on fixing bugs by 75%" or "after half a year, the time needed to write code with tests was only marginally longer than it used to take without tests". Any ideas? P.S.: I'm adding the C++ tag too in case someone has specific experience with convincing this, usually elitist, type of developer :-)

    Read the article

  • Generating a random displacement on the unit sphere

    - by becko
    Given a unit vector n, I need to generate, as fast as possible, another random unit vector m. The deviation of m from n should be on the order of a positive parameter sigma, and the distribution of m on the unit sphere should be symmetrical around n. I have no specific requirements on the representation of unit vectors, so you can use spherical angles, Cartesian coordinates, or whatever turns out to be convenient. Also, there are no precise requirements on the probability distributions used, as long as it decays when m deviates more than sigma from n. I am working with gsl and C. I have come up with a somewhat convoluted method using Cartesian coordinates. I will post it later if it is useful, but I would like to see people's ideas.
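
    For what it's worth, a sketch of one simple Cartesian approach (an assumption on my part, not the poster's method): perturb each component of n with Gaussian noise of width sigma and renormalize; the result is symmetric about n and its angular spread decays with sigma:

        #include <math.h>
        #include <gsl/gsl_rng.h>
        #include <gsl/gsl_randist.h>

        /* Writes into m a random unit vector near the unit vector n. */
        void random_displacement(const gsl_rng *r, const double n[3], double sigma, double m[3])
        {
            double len = 0.0;
            for (int i = 0; i < 3; i++) {
                m[i] = n[i] + gsl_ran_gaussian(r, sigma);  /* isotropic perturbation */
                len += m[i] * m[i];
            }
            len = sqrt(len);
            for (int i = 0; i < 3; i++)
                m[i] /= len;                               /* project back onto the sphere */
        }

        int main(void)
        {
            gsl_rng *r = gsl_rng_alloc(gsl_rng_mt19937);
            double n[3] = { 0.0, 0.0, 1.0 }, m[3];
            random_displacement(r, n, 0.1, m);
            gsl_rng_free(r);
            return 0;
        }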

    Read the article

  • Given a short (2-week) sprint, is it ever acceptable to forgo TDD to "get things done"?

    - by Ben Aston
    Given a short sprint, is it ever acceptable to forgo TDD to "get things done" within the sprint? For example, a given piece of work might need, say, 1/3 of the sprint to design the object model around an existing implementation. Under this scenario you might well end up with implemented code, say halfway through the sprint, without any tests (implementing unit tests during this "design" stage would add significant effort, and the tests would likely be thrown away a few times until the final "design" is settled upon). You might then spend a day or two in the second week adding unit / integration tests after the fact. Is this acceptable?

    Read the article

  • Android Test testPreconditions

    - by user1184113
    In the Android developer documentation I've seen that the testPreconditions() method is supposed to be launched before all tests. But in my app's test, it's acting like a normal test: it does not run before all the tests. Is there something wrong? Here is the description of testPreconditions() from the Android developer documentation: "A preconditions test checks the initial application conditions prior to executing other tests. It's similar to setUp(), but with less overhead, since it only runs once."

    Read the article

  • How to run xunit in Visual Studio 2012?

    - by user1978421
    I am very new to unit testing. I have been following the procedure for creating a unit test in Visual Studio 2012 from http://channel9.msdn.com/Events/TechEd/Europe/2012/DEV214. The test just won't start, and it prompts me with "A project with an Output Type of Class Library cannot be started directly. In order to debug this project, add an executable project to this solution which references the library project. Set an executable project as the startup project." Even though I attached the unit test class code to a console program, the test does not start and the Test Explorer is empty. In the video, no runnable program is needed: the presenter only created a class library, and the tests run. What should I do? Note: there is no "create unit test" option on the right-click menu.

    Read the article

  • Running unittest with typical test directory structure.

    - by Major Major
    The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:

        new_project/
            antigravity/
                antigravity.py
            test/
                test_antigravity.py
            setup.py
            etc.

    (for example, see this Python project howto). My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail because the module is not on the path. I know I could modify PYTHONPATH and use other search-path-related tricks, but I can't believe that's the simplest way — it's fine if you're the developer, but it's not realistic to expect your users to do that if they just want to check that the tests pass. The other alternative is to copy the test file into the other directory, but that seems a bit dumb and misses the point of having the tests in a separate directory to start with. So, if you had just downloaded the source to my new project, how would you run the unit tests? I'd prefer an answer that lets me say to my users: "To run the unit tests, do X."
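
    A sketch of what this usually looks like once you can rely on the test discovery added in Python 2.7 (older interpreters need the unittest2 backport or a runner such as nose):

        # run from the top-level new_project/ directory
        python -m unittest discover
        # or be explicit about where the tests live and how the files are named
        # (give test/ an empty __init__.py so discovery can import test_antigravity)
        python -m unittest discover -s test -p 'test_*.py'

    Because the command runs from the project root (and -m puts the current directory on sys.path), the package under development is importable without PYTHONPATH tricks, so "cd into the project and run python -m unittest discover" is something you can reasonably ask users to do.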

    Read the article

  • JUnit - Testing a method that in turn invokes a few more methods

    - by stratwine
    Hi, my doubt is about what we regard as a "unit" while unit testing. Say I have a method like this:

        public String myBigMethod() {
            String resultOne = moduleOneObject.someOperation();
            String resultTwo = moduleTwoObject.someOtherOperation(resultOne);
            return resultTwo;
        }

    (I have unit tests written for someOperation() and someOtherOperation() separately.) Since myBigMethod() integrates ModuleOne and ModuleTwo by using them as above, is myBigMethod() still considered a "unit"? Should I be writing a test for it? Say I have written a test for myBigMethod(): if testSomeOperation() fails, it would also cause testMyBigMethod() to fail, and testMyBigMethod()'s failure might then point to a misleading location for the bug. One cause making two tests fail doesn't look good to me, but I don't know if there's a better way. Is there? Thanks!
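
    A sketch of one common way to keep myBigMethod() a unit of its own: stub the two collaborators so its test fails only for wiring problems in myBigMethod() itself. This assumes the collaborators can be injected and uses Mockito; neither the injection constructor nor the class names below appear in the question:

        import static org.junit.Assert.assertEquals;
        import static org.mockito.Mockito.*;

        import org.junit.Test;

        public class BigMethodTest {

            @Test
            public void myBigMethodWiresTheTwoModulesTogether() {
                ModuleOne one = mock(ModuleOne.class);
                ModuleTwo two = mock(ModuleTwo.class);
                when(one.someOperation()).thenReturn("first");
                when(two.someOtherOperation("first")).thenReturn("second");

                MyClass subject = new MyClass(one, two);  // hypothetical constructor injection

                assertEquals("second", subject.myBigMethod());
                // Only the chaining is under test here; the modules have their own tests.
                verify(two).someOtherOperation("first");
            }
        }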

    Read the article

  • Skipping scheduled self-tests and predicting drive EOL

    - by Steve Madsen
    For a few weeks now, smartd has been reporting that it is skipping some of its scheduled self-tests on the weekends:

        Apr 24 18:29:32 calvin smartd[4758]: Device: /dev/sda, skip scheduled Offline Immediate Test; 40% remaining of current Self-Test.
        Apr 24 18:29:33 calvin smartd[4758]: Device: /dev/sdb, skip scheduled Offline Immediate Test; 50% remaining of current Self-Test.

    The drives in this RAID-1 array are set to run an offline test four times a day, a short self-test at 2am every day, and a long self-test on Saturdays at 2am. For some reason, it looks like the long self-test is taking longer, causing the other scheduled tests to be skipped. First question: is this a sign of likely drive failure? Then today, smartd reported that a self-test failed. Here is the output of smartctl -a /dev/sdb:

        smartctl version 5.38 [i686-pc-linux-gnu] Copyright (C) 2002-8 Bruce Allen
        Home page is http://smartmontools.sourceforge.net/

        === START OF INFORMATION SECTION ===
        Model Family:     Seagate Barracuda 7200.8 family
        Device Model:     ST3250823AS
        Serial Number:    3ND1GNBC
        Firmware Version: 3.03
        User Capacity:    250,059,350,016 bytes
        Device is:        In smartctl database [for details use: -P show]
        ATA Version is:   7
        ATA Standard is:  Exact ATA specification draft version not indicated
        Local Time is:    Sun Apr 25 13:15:34 2010 EDT
        SMART support is: Available - device has SMART capability.
        SMART support is: Enabled

        === START OF READ SMART DATA SECTION ===
        SMART overall-health self-assessment test result: PASSED

        General SMART Values:
        Offline data collection status:  (0x82) Offline data collection activity was completed without error.
                                                Auto Offline Data Collection: Enabled.
        Self-test execution status:      (   0) The previous self-test routine completed without error
                                                or no self-test has ever been run.
        Total time to complete Offline data collection: ( 430) seconds.
        Offline data collection capabilities:   (0x5b) SMART execute Offline immediate.
                                                       Auto Offline data collection on/off support.
                                                       Suspend Offline collection upon new command.
                                                       Offline surface scan supported.
                                                       Self-test supported.
                                                       No Conveyance Self-test supported.
                                                       Selective Self-test supported.
        SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                                Supports SMART auto save timer.
        Error logging capability:        (0x01) Error logging supported.
                                                General Purpose Logging supported.
        Short self-test routine recommended polling time:    (   1) minutes.
        Extended self-test routine recommended polling time: (  84) minutes.

        SMART Attributes Data Structure revision number: 10
        Vendor Specific SMART Attributes with Thresholds:
        ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
          1 Raw_Read_Error_Rate     0x000f   047   039   006    Pre-fail  Always       -       168450357
          3 Spin_Up_Time            0x0003   098   098   000    Pre-fail  Always       -       0
          4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       33
          5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       9
          7 Seek_Error_Rate         0x000f   087   060   030    Pre-fail  Always       -       654745480
          9 Power_On_Hours          0x0032   055   055   000    Old_age   Always       -       40141
         10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
         12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       51
        194 Temperature_Celsius     0x0022   037   062   000    Old_age   Always       -       37 (0 17 0 0)
        195 Hardware_ECC_Recovered  0x001a   047   039   000    Old_age   Always       -       168450357
        197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
        198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
        199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
        200 Multi_Zone_Error_Rate   0x0000   100   253   000    Old_age   Offline      -       0
        202 TA_Increase_Count       0x0032   100   253   000    Old_age   Always       -       0

        SMART Error Log Version: 1
        No Errors Logged

        SMART Self-test log structure revision number 1
        Num  Test_Description    Status                   Remaining  LifeTime(hours)  LBA_of_first_error
        # 1  Short offline       Completed without error       00%       40131        -
        # 2  Extended offline    Completed: read failure       30%       40129        379795511
        # 3  Short offline       Completed without error       00%       40084        -
        # 4  Short offline       Completed without error       00%       40060        -
        # 5  Short offline       Completed without error       00%       40036        -
        # 6  Short offline       Completed without error       00%       40013        -
        # 7  Short offline       Completed without error       00%       39990        -
        # 8  Extended offline    Completed without error       00%       39977        -
        # 9  Short offline       Completed without error       00%       39919        -
        #10  Short offline       Completed without error       00%       39895        -
        #11  Short offline       Completed without error       00%       39872        -
        #12  Short offline       Completed without error       00%       39848        -
        #13  Short offline       Completed without error       00%       39824        -
        #14  Short offline       Completed without error       00%       39801        -
        #15  Extended offline    Completed without error       00%       39789        -
        #16  Short offline       Completed without error       00%       39754        -
        #17  Short offline       Completed without error       00%       39732        -
        #18  Short offline       Completed without error       00%       39707        -
        #19  Short offline       Completed without error       00%       39683        -
        #20  Short offline       Completed without error       00%       39660        -
        #21  Short offline       Completed without error       00%       39636        -

        SMART Selective self-test log data structure revision number 1
         SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
            1        0        0  Not_testing
            2        0        0  Not_testing
            3        0        0  Not_testing
            4        0        0  Not_testing
            5        0        0  Not_testing
        Selective self-test flags (0x0):
          After scanning selected spans, do NOT read-scan remainder of disk.
        If Selective self-test is pending on power-up, resume after 0 minute delay.

    Given that this drive is about 4.5 years old, I am probably tempting fate by keeping it in service. SMART doesn't seem to get much respect as a reliable way to predict drive failure. What else can I use to get an early indication of drive failure?

    Read the article

  • Intel site says VT-x is supported on my CPU, but tests say otherwise

    - by Anshul
    I have a laptop with an Intel 2nd Gen. i7-2729QM (http://ark.intel.com/products/50067/) and the Intel site says that it supports VT-x, but I've downloaded the Intel Processor Identification Utility (http://www.intel.com/support/processors/tools/piu/sb/CS-014921.htm) and the test says that my CPU does not have VT-x. There is no option in my BIOS that allows me to enable/disable VT-x. I researched my laptop model (SAGER NP8170) and most forums say that it's enabled by default and there's no option in the BIOS. So assuming that's true, what gives? I also downloaded another tool called SecurAble from GRC and it also says that my CPU doesn't support VT-x. VirtualBox also says that my CPU does not support virtualization. My mind is boggled by why it says on the Intel site that my CPU supports VT-x but all other tests show otherwise. Anyone know what's going on? Thanks in advance.

    Read the article

  • Host forwarding fails, server is up, domain name tests ambiguous

    - by jayunit100
    I have a domain name registered with http://www.registryrocket.com/. The "main" site, which is called rudolfcode.net, is registered under GoDaddy and forwards to a Heroku site (rudolfcode.herokuapp.com). I have found that the main site, rudolfcode.net, works, but the HostGator forwarding has stopped working (Firefox simply fails when you point to http://www.rudolflabs.com, which is the domain name registered by HostGator). How can I debug this issue? Finally, I have tried to run some DNS tests, and here are the results: I'm not sure what the failures mean, but I'm pretty sure that "Conecting to WWW Home Page" failing is a pretty bad sign! Thanks.

    Read the article

  • Orthographic unit translation mismatch on grid (e.g. 64 pixels translates incorrectly)

    - by Justin Van Horne
    I am looking for some insight into a small problem with unit translations on a grid.

    Setup: 512x448 window, 64x64 grid.

        gl_Position = projection * world * position;

    projection is defined by ortho(-w/2.0f, w/2.0f, -h/2.0f, h/2.0f), a textbook orthographic projection function. world is defined by a fixed camera position at (0, 0). position is defined by the sprite's position.

    Problem: In the screenshot below (1:1 scaling) the grid spacing is 64x64 and I am drawing the unit at (64, 64); however, the unit draws roughly 10px away from where it should be. I've tried uniform window dimensions to prevent any distortion of the pixel size, but now I am a bit lost as to the proper way of providing a 1:1 pixel-to-world-unit projection. Anyhow, here are some quick images to aid in the problem. I decided to superimpose a bunch of the sprites at what the engine believes are 64px offsets. When this seemed out of place, I went back to the base case of 1 unit, which seemed to line up as expected. The yellow shows a 1px difference in the movement.

    Vertices: It would appear that the vertices going into the vertex shader are correct. For example, in reference to the first image, the data looks like this in the VBO:

                 x     y           x     y
        ----------------------------------
        tl |    0.0  24.0        64.0  24.0
        bl |    0.0   0.0   ->   64.0   0.0
        tr |   16.0   0.0        80.0   0.0
        br |   16.0  24.0        80.0  24.0

    With that said, all I am left to believe is that I am munging up my actual projection. So, I am looking for any insight into maintaining the 1:1 pixel-to-world-unit projection.

    Read the article

  • Best way to calculate unit deaths in browser game combat?

    - by MikeCruz13
    My browser game's combat system is written and mechanically functioning well. It's written in PHP and uses a SQL database. I'm happy with how the units are balanced relative to one another. I am, however, a little worried about how I'm calculating unit deaths when one player attacks another, because the deaths seem to pile up a little fast for my taste. For this system, a battle doesn't just trigger, calculate the winner, and end. Instead, it is allowed to go on for several rounds (say one round every 15 minutes) until one side passes a threshold of being too strong for the other player, and players can send reinforcements between rounds. Each round, units pair up and attack each other. Essentially what I do is calculate the damage (AP = attack points, HP = hit points):

        damage = unit AP * quantity * random factors * other factors (such as attrition)

    I then divide that by the defending unit's HP to find the number of defending-unit casualties. So, for example (simplified to take out some factors), if I have 500 attackers with 50 AP vs 1000 defenders with 100 HP, that's 500 * 50 / 100 = 250 deaths. I wonder if that last step could be handled better to reduce the deaths piling up. Some ideas: I just give all the units more HP? I cap the attacking unit's AP at the defender's HP, so each attacker kills at most 1 unit (is that fair if I have a few huge units vs many small units?). I spread the damage around more by taking the defending unit's quantity into account, i.e. in that scenario some are dead and some are at 50% damage (how would I track this every round?). Other, better mathematical approaches?
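
    A sketch of the third idea — spreading the damage and carrying partial damage across rounds — assuming a defender stack represented as an array; the keys here are made up, not the game's real schema:

        <?php
        // Hypothetical defender stack: quantity of identical units, their per-unit HP,
        // and 'carry' = damage already sitting on the partially wounded top unit.
        function apply_damage(array $defender, $damage)
        {
            $pool   = $damage + $defender['carry'];
            $deaths = min($defender['quantity'], (int) floor($pool / $defender['hp']));

            $defender['quantity'] -= $deaths;
            // Leftover damage stays on the stack and counts towards next round's kills.
            $defender['carry'] = $defender['quantity'] > 0 ? $pool - $deaths * $defender['hp'] : 0;

            return $defender;
        }

        // The question's example: 500 attackers * 50 AP = 25000 damage vs 1000 defenders at 100 HP
        $defenders = apply_damage(array('quantity' => 1000, 'hp' => 100, 'carry' => 0), 500 * 50);
        // => 250 deaths this round; persisting 'carry' with the battle row is what
        //    spreads fractional damage across the 15-minute rounds.
        ?>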

    Read the article

  • How do I silence the following RightAWS messages when running tests

    - by Laurie Young
    I'm using the RightAWS gem, and mocking at the HTTP level so that the RightAWS code is being executed as part of my tests. When this happens I get the following output:

        ....New RightAws::S3Interface using per_request-connection mode
        Opening new HTTP connection to s3.amazonaws.com:80
        .New RightAws::S3Interface using per_request-connection mode
        .

    Even though all the tests pass, when I do have errors it's harder to scan for them because of this output. Is there a nice way to silence it?

    Read the article

  • How to run qtestlib unit tests from QtCreator

    - by extropy
    I am developing a GUI application in Qt Creator and want to write some unit tests for it. I followed this guide to make some unit tests with QTestLib, and the program compiles fine. But how do I run them? I would like them to run before the GUI app starts in a debug build, and not run at all in a release build.
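
    A sketch of one way to get that behaviour, assuming a hypothetical QObject-based test class MyTests compiled into the application itself; qmake defines QT_NO_DEBUG for release builds, so guarding on it makes the test run disappear from release binaries:

        #include <QApplication>
        #include <QtTest/QtTest>
        #include "mainwindow.h"   // the GUI app's main window (assumed)
        #include "mytests.h"      // hypothetical test class built into the app

        int main(int argc, char *argv[])
        {
            QApplication app(argc, argv);

        #ifndef QT_NO_DEBUG        // i.e. debug builds only
            MyTests tests;
            if (QTest::qExec(&tests, argc, argv) != 0)
                return 1;          // don't launch the GUI if the tests fail
        #endif

            MainWindow w;
            w.show();
            return app.exec();
        }

    An alternative is a separate test target in the .pro/subdirs project, but the variant above is the most direct reading of "run before the GUI app starts in a debug build".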

    Read the article

  • Integrating YUI tests with CruiseControl

    - by Dan R
    I am using YUI to test my JavaScript app, and want to integrate the test results into my CruiseControl build system. How can I use CruiseControl to run the tests? I initially thought about using the JUnit plugin to drive the tests, but that is a no-go. Does anyone else have this working? (Please note: changing either YUI or CruiseControl isn't an option for me.)

    Read the article

  • Common utility functions for Perl .t tests

    - by zedoo
    Hi, I am getting started with Test::More and already have a few .t test scripts. Now I'd like to define a function that will only be used in the tests, but across different .t files. Where's the best place to put such a function? Define another .t without any tests and require it where needed? (As a side note, I use the module structure created by Module::Starter.)
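
    One common layout is to put the shared helpers in a real module under t/lib rather than in another .t file, and point the test scripts at it; a minimal sketch with made-up names:

        # t/lib/MyTestHelpers.pm
        package MyTestHelpers;
        use strict;
        use warnings;
        use Exporter 'import';
        our @EXPORT_OK = qw(build_fixture);

        sub build_fixture {
            return { name => 'example', size => 3 };   # whatever the tests share
        }

        1;

        # then in each t/*.t that needs it ('t/lib' is relative to the dist root,
        # which is where prove / make test normally run from):
        use lib 't/lib';
        use MyTestHelpers qw(build_fixture);

    This keeps the helpers out of the test count entirely, which a tests-free .t pulled in via require does not quite manage as cleanly.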

    Read the article

  • Run NUnit 2.5.5 tests in Gallio 3.1

    - by Daz Lewis
    Hi, I'm trying to get Gallio to run some existing NUnit tests, but they're not showing up in Gallio. The assembly loads into Gallio fine, and I can run the tests fine via ReSharper within the VS IDE. I created a really simple NUnit test in VS and this also doesn't show up, so I know it's not something weird in my existing tests. Any ideas?

    Read the article
