Search Results

Search found 9492 results on 380 pages for 'logic unit'.


  • How to rewrite data-driven test suites of JUnit 3 in JUnit 4?

    - by rics
    I am using data-driven test suites running on JUnit 3, based on Rainsberger's JUnit Recipes. The purpose of these tests is to check whether a certain function is properly implemented for a set of input-output pairs. Here is the definition of the test suite:

        public static Test suite() throws Exception {
            TestSuite suite = new TestSuite();
            Calendar calendar = GregorianCalendar.getInstance();
            calendar.set(2009, 8, 05, 13, 23); // 2009. 09. 05. 13:23
            java.sql.Date date = new java.sql.Date(calendar.getTime().getTime());
            suite.addTest(new DateFormatTestToString(date, JtDateFormat.FormatType.YYYY_MON_DD, "2009-SEP-05"));
            suite.addTest(new DateFormatTestToString(date, JtDateFormat.FormatType.DD_MON_YYYY, "05/SEP/2009"));
            return suite;
        }

    and here is the definition of the test class:

        public class DateFormatTestToString extends TestCase {
            private java.sql.Date date;
            private JtDateFormat.FormatType dateFormat;
            private String expectedStringFormat;

            public DateFormatTestToString(java.sql.Date date, JtDateFormat.FormatType dateFormat, String expectedStringFormat) {
                super("testGetString");
                this.date = date;
                this.dateFormat = dateFormat;
                this.expectedStringFormat = expectedStringFormat;
            }

            public void testGetString() {
                String result = JtDateFormat.getString(date, dateFormat);
                assertTrue(expectedStringFormat.equalsIgnoreCase(result));
            }
        }

    How is it possible to test several input-output pairs of a method using JUnit 4? This question and the answers explained to me the distinction between JUnit 3 and 4 in this regard. This question and the answers describe how to create a test suite for a set of classes, but not for a single method with a set of different parameters.
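
    A minimal JUnit 4 sketch of the same data-driven idea, assuming the JtDateFormat class and the two input-output pairs from the suite above (the test class name is otherwise illustrative): the Parameterized runner constructs the test class once per row returned by the @Parameters method and runs every @Test against it.

        import static org.junit.Assert.assertTrue;

        import java.util.Arrays;
        import java.util.Calendar;
        import java.util.Collection;
        import java.util.GregorianCalendar;

        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Parameterized;
        import org.junit.runners.Parameterized.Parameters;

        @RunWith(Parameterized.class)
        public class DateFormatToStringTest {

            private final java.sql.Date date;
            private final JtDateFormat.FormatType dateFormat;
            private final String expected;

            public DateFormatToStringTest(java.sql.Date date, JtDateFormat.FormatType dateFormat, String expected) {
                this.date = date;
                this.dateFormat = dateFormat;
                this.expected = expected;
            }

            // Each Object[] is one input-output pair; adding a pair means adding a row here.
            @Parameters
            public static Collection<Object[]> data() {
                Calendar calendar = GregorianCalendar.getInstance();
                calendar.set(2009, 8, 5, 13, 23); // 2009. 09. 05. 13:23
                java.sql.Date date = new java.sql.Date(calendar.getTime().getTime());
                return Arrays.asList(new Object[][] {
                    { date, JtDateFormat.FormatType.YYYY_MON_DD, "2009-SEP-05" },
                    { date, JtDateFormat.FormatType.DD_MON_YYYY, "05/SEP/2009" }
                });
            }

            @Test
            public void testGetString() {
                String result = JtDateFormat.getString(date, dateFormat);
                assertTrue(expected.equalsIgnoreCase(result));
            }
        }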

    Read the article

  • Python: How to run unittest.main() for all source files in a subdirectory?

    - by Pete
    I am developing a Python module with several source files, each with its own test class derived from unittest right in the source. Consider the directory structure:

        dirFoo\
            test.py
            dirBar\
                __init__.py
                Foo.py
                Bar.py

    To test either Foo.py or Bar.py, I would add this at the end of the Foo.py and Bar.py source files:

        if __name__ == "__main__":
            unittest.main()

    and run Python on either source, i.e.

        $ python Foo.py
        ...........
        ----------------------------------------------------------------------
        Ran 11 tests in 2.314s

        OK

    Ideally, I would have "test.py" automagically search dirBar for any unittest-derived classes and make one call to "unittest.main()". What's the best way to do this in practice? I tried using Python to call execfile for every *.py file in dirBar, but that runs only for the first .py file found and then exits the calling test.py, plus I then have to duplicate my code by adding unittest.main() in every source file, which violates DRY principles.

    Read the article

  • How to flush coverage data when my test causes an app crash - for an iOS app

    - by Ypy
    I want to get the code coverage of my tests, so I configured the settings, built the app with .gcno files and ran it in the simulator. It collects the coverage data successfully if there is no crash, but if the app crashes I get nothing. So how can I get the code coverage data when the app crashes? My guess is that this happens because the __gcov_flush() method is not called when the app crashes. I have only added "app does not run in background" to my plist file, so __gcov_flush() is called only when I press the Home button. Is there any way to call __gcov_flush() before the app crashes?

    Read the article

  • Is it possible to talk to the iPhone simulator/device?

    - by Plumenator
    I need to automate the build/deploy process for my iPhone applications from a script. I can use xcodebuild to build the project, then use AppleScript to deploy and debug/run the application. Assuming the application will stop by itself after a while, I need to collect the generated logs for verification. But the problem is that I have no way to know, from outside the application itself, when the application has ended. If the running time were fixed, I could again use AppleScript to stop the application (Cmd+Shift+Enter). So there has to be a way to connect to the device/simulator and wait on the application somehow.

    Read the article

  • Why isn't JUnit's ComparisonFailure used by assertEquals(Object, Object)?

    - by Philippe Blayo
    In JUnit 4, do you see any drawback to throwing a ComparisonFailure instead of an AssertionError when assertEquals(Object, Object) fails? assertEquals(Object, Object) throws a ComparisonFailure if both expected and actual are Strings, and an AssertionError if either is not a String:

        @Test(expected = ComparisonFailure.class)
        public void twoString() {
            assertEquals("a String", "another String");
        }

        @Test(expected = AssertionError.class)
        public void oneString() {
            assertEquals("a String", new Object());
        }

    The two reasons why I ask the question:

    1. ComparisonFailure provides a far more readable way to spot the differences in the dialog box of Eclipse or IntelliJ IDEA (FEST-Assert throws this exception).
    2. JUnit 4 already uses String.valueOf(Object) to build the "expected ... but was ..." message (the format method invoked by Assert.assertEquals(message, Object, Object) in junit-4.8.2):

        static String format(String message, Object expected, Object actual) {
            ...
            String expectedString = String.valueOf(expected);
            String actualString = String.valueOf(actual);
            if (expectedString.equals(actualString))
                return formatted + "expected: " + formatClassAndValue(expected, expectedString) + " but was: " + formatClassAndValue(actual, actualString);
            else
                return formatted + "expected:<" + expectedString + "> but was:<" + actualString + ">";
        }

    Isn't it possible, in assertEquals(message, Object, Object), to replace

        fail(format(message, expected, actual));

    with

        throw new ComparisonFailure(message, formatClassAndValue(expectedObject, expectedString), formatClassAndValue(actualObject, actualString));

    Do you see any compatibility issue with other tools, or any algorithmic problem with that?
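
    A minimal sketch of the idea as a standalone helper rather than a patch to Assert itself (RichAssert is a hypothetical name, not part of JUnit, and it uses String.valueOf() in place of JUnit's private formatClassAndValue()): it always throws ComparisonFailure on mismatch, so IDE diff viewers can compare the two renderings.

        import org.junit.ComparisonFailure;

        // Hypothetical helper, not part of JUnit: always reports a mismatch as a
        // ComparisonFailure so the expected/actual strings show up in the IDE diff view.
        public final class RichAssert {

            private RichAssert() {}

            public static void assertEquals(String message, Object expected, Object actual) {
                if (expected == null ? actual == null : expected.equals(actual)) {
                    return;
                }
                throw new ComparisonFailure(message == null ? "" : message,
                        String.valueOf(expected), String.valueOf(actual));
            }
        }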

    Read the article

  • Fluent NHibernate CheckProperty and Dates

    - by Chris C
    I set up an NUnit test as follows:

        new PersistenceSpecification<MyTable>(_session)
            .CheckProperty(c => c.ActionDate, DateTime.Now);

    When I run the test via NUnit I get the following error:

        SomeNamespace.MapTest:
        System.ApplicationException : Expected '2/23/2010 11:08:38 AM' but got '2/23/2010 11:08:38 AM' for Property 'ActionDate'

    The ActionDate field is a datetime field in a SQL 2008 database. I use auto mapping and declare ActionDate as a DateTime property in C#. If I change the test to use DateTime.Today, the tests pass. My question is: why is the test failing with DateTime.Now? Is NHibernate losing some precision when saving the date to the database, and if so, how do I prevent the loss? Thank you.

    Read the article

  • Report Generation for a Web-Based Application Using Selenium

    - by Rahul Mendiratta
    Currently we are generating HTML reports for our automation, but those reports are not good enough to explain the number of scenarios we cover in automation. Is there anything we can use with Selenium to generate proper reports that give a complete overview and can be easily understood by anyone? First, we want to show a pie chart covering the number of test cases passed and failed. Second, we want to show which test cases are in this build.

    Read the article

  • Multi-language testing framework

    - by santiiiii
    I need to write two versions of the same application, one in .NET and the other one in Java. So I'd like to write a single test suite, and then use it against both codebases. Which testing tool would you advise me to use?

    Read the article

  • JUnit: Is there a way to skip a test belonging to a test class's parent?

    - by Jon
    I have two classes:

        public abstract class AbstractFoobar { ... }

    and

        public class ConcreteFoobar extends AbstractFoobar { ... }

    I have corresponding test classes for these two classes:

        public class AbstractFoobarTest { ... }

    and

        public class ConcreteFoobarTest extends AbstractFoobarTest { ... }

    When I run ConcreteFoobarTest (in JUnit), the annotated @Test methods in AbstractFoobarTest get run along with those declared directly on ConcreteFoobarTest, because they are inherited. Is there any way to skip them?
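
    One common workaround, as a sketch: override the inherited test in the subclass and mark it @Ignore, so the runner reports it as skipped instead of executing the parent's version. testFoo() here is a hypothetical name standing in for whichever AbstractFoobarTest method should not run.

        import org.junit.Ignore;
        import org.junit.Test;

        public class ConcreteFoobarTest extends AbstractFoobarTest {

            @Override
            @Ignore("inherited from AbstractFoobarTest; not applicable to ConcreteFoobar")
            @Test
            public void testFoo() {
                // intentionally empty: the parent implementation is deliberately skipped
            }
        }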

    Read the article

  • ASP.NET MVC: How to test that the view rendered

    - by Miau
    Hi there. I was wondering if there is a better way of testing that a view has rendered in MVC. I was thinking perhaps I should render the view to a string, but perhaps there are simpler methods? Basically, what I want to know is whether the view for a given action has rendered without errors. I'm already testing the view model, but I want to see that rendering the view, given a correct ViewData.Model, works.

    Read the article

  • How to test Guice Singleton?

    - by 01
    Guice singletons are weird for me. First I thought that

        IService ser = Guice.createInjector().getInstance(IService.class);
        System.out.println("ser=" + ser);
        ser = Guice.createInjector().getInstance(IService.class);
        System.out.println("ser=" + ser);

    would work as a singleton, but it returns

        ser=Service2@1975b59
        ser=Service2@1f934ad

    That's OK, it doesn't have to be easy. This, however,

        Injector injector = Guice.createInjector();
        IService ser = injector.getInstance(IService.class);
        System.out.println("ser=" + ser);
        ser = injector.getInstance(IService.class);
        System.out.println("ser=" + ser);

    works as a singleton:

        ser=Service2@1975b59
        ser=Service2@1975b59

    So I need a static field holding the Injector (a singleton for the singletons). How do I pass a Module to it for testing?
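
    A minimal sketch of one way to keep a single shared Injector while still swapping bindings in tests, assuming the IService and Service2 types from the question (InjectorHolder, ProductionModule and initForTest are hypothetical names): production code always asks the holder for the injector, and a test installs an override module before the first lookup.

        import com.google.inject.AbstractModule;
        import com.google.inject.Guice;
        import com.google.inject.Injector;
        import com.google.inject.Module;
        import com.google.inject.Singleton;
        import com.google.inject.util.Modules;

        public final class InjectorHolder {

            private static Injector injector;

            // Production entry point: lazily creates the injector with the real bindings.
            public static synchronized Injector get() {
                if (injector == null) {
                    injector = Guice.createInjector(new ProductionModule());
                }
                return injector;
            }

            // Test entry point: call from test setup *before* get() so the overrides
            // replace the production bindings (e.g. a fake IService).
            public static synchronized void initForTest(Module testOverrides) {
                injector = Guice.createInjector(
                        Modules.override(new ProductionModule()).with(testOverrides));
            }

            static class ProductionModule extends AbstractModule {
                @Override
                protected void configure() {
                    bind(IService.class).to(Service2.class).in(Singleton.class);
                }
            }
        }

    A test would then call InjectorHolder.initForTest(...) with a module that binds IService to a fake implementation, and every later InjectorHolder.get().getInstance(IService.class) resolves to that fake within the shared injector.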

    Read the article

  • How do I assign a rotating category to database entries in the order the records come in?

    - by Stomped
    I have a table which gets entries from a website, and as those entries go into the database, they need to be assigned the next category from a list of categories that may be changed at any time. For that reason I can't do something simple like mapping the first of 5 categories to IDs 1, 6, 11, 16. I've considered reading in the list of currently possible categories, checking the value of the last one inserted, and then giving the new record the next category, but I imagine that if two requests come in at the same moment, I could potentially assign them both the same category rather than the next ones in sequence. So, my current round of thinking is the following:

    1. lock the tables (categories and records)
    2. insert the newest row into records
    3. get the newest row's ID
    4. select the row previous to the insert (by using order by auto_inc_name desc 0, 1)
    5. take the previous row's category, and grab the next one from the category list
    6. update the newly inserted row
    7. unlock the tables

    I'm not 100% sure this will work right, and there's possibly a much easier way to do it, so I'm asking: A. Will this work as I described in the original problem? B. Do you have a better/easier way to do this? Thanks ~

    Read the article

  • Receiving an 'Expected message differs' error

    - by Mark
    I am quite new to TDD and am going with NUnit and Moq. I have a method where I expect an exception, so I wanted to play a little with the framework's features. My test code looks as follows:

        [Test]
        [ExpectedException(ExpectedException = typeof(MockException), ExpectedMessage = "Actual differs from expected")]
        public void Write_MessageLogWithCategoryInfoFail()
        {
            string message = "Info Test Message";
            Write_MessageLogWithCategory(message, "Info");
            _LogTest.Verify(writeMessage => writeMessage.Info("This should fail"),
                "Actual differs from expected");
        }

    But I always receive an error saying that the actual exception message differs from the expected message. What am I doing wrong?

    Read the article

  • Is it a bad idea to create tests that rely on each other within a test fixture?

    - by nbolton
    For example:

        // NUnit-like pseudo code (within a TestFixture)
        Ctor() { m_globalVar = getFoo(); }

        [Test] Create() { a(m_globalVar) }

        [Test] Delete() {
            // depends on Create being run
            b(m_globalVar)
        }

    ... or ...

        // NUnit-like pseudo code (within a TestFixture)
        [Test] CreateAndDelete() {
            Foo foo = getFoo();
            a(foo);
            // depends on Create being run
            b(foo);
        }

    I'm going with the latter, and assuming that the answer to my question is: No, at least not with NUnit, because according to the NUnit manual:

        The constructor should not have any side effects, since NUnit may construct the class multiple times in the course of a session.

    Also, can I assume it's bad practice in general, since tests can usually be run separately? The result of Create may then never be cleaned up by Delete.

    Read the article

  • How many lines of code does my class library have?

    - by zachary
    For metrics reasons I need to know how many lines of code my class library has. I'm doing this for code coverage. So if class library 1 has 50 lines of code and 100% coverage, and class library 2 has 500 lines of code and 0% coverage, my total coverage is about 9% (50 of 550 lines covered). Any idea how to do this? Is there a utility, or a way to do it in Visual Studio?

    Read the article

  • How to deal with test data in JUnit?

    - by user351637
    In a TDD (Test Driven Development) process, how should I deal with test data? Assume a scenario where I parse a log file to get a needed column. For a strong test, how do I prepare the test data? And is it proper for me to keep such files next to the test class files?
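
    As a sketch of the second part of the question, test fixture files can live on the test classpath next to the test class and be loaded through the classloader rather than a hard-coded path. LogParser.extractColumn(line, index), the sample-log.txt file and the expected "GET" value are all hypothetical here.

        import static org.junit.Assert.assertEquals;

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.nio.charset.StandardCharsets;

        import org.junit.Test;

        public class LogParserTest {

            @Test
            public void extractsThirdColumnFromSampleLine() throws Exception {
                // sample-log.txt sits in the same package as this test class,
                // e.g. under src/test/resources, so it ships with the tests.
                try (BufferedReader reader = new BufferedReader(new InputStreamReader(
                        getClass().getResourceAsStream("sample-log.txt"),
                        StandardCharsets.UTF_8))) {
                    String firstLine = reader.readLine();
                    // expected value depends entirely on the sample file's contents
                    assertEquals("GET", LogParser.extractColumn(firstLine, 2));
                }
            }
        }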

    Read the article

  • How to interpret TestNG console log messages for passing tests?

    - by Kunal Cholera
    I ran TestNG tests for my test classes and I get the following three types of log output for passing tests. I am using org.testng.AssertJUnit.assertTrue() methods in my tests.

        [testng] PASSED: testMethod1 on null(test.foo.bar.Class1)
        [testng] PASSED: testMethod2 on Default test name(test.foo.bar.jar.Class2)
        [testng] PASSED: testMethod3

    Can anyone please tell me why for some tests it says "on null", for some it says "on Default test name ...", and for some it does not say anything extra on the console output? Specifically, I want to make the message consistent. Environment: Linux. TestNG framework: 6.3.2beta. Please advise, thanks.

    Read the article

  • Are there any test data generator tools which can be used with Selenium/NUnit?

    - by Jon
    Hi, I was wondering if there is anything that provides test data for injecting into NUnit tests? I'm sure I came across something recently that does this, but I couldn't find it again. Basically the idea is that I could use Selenium and NUnit to create new customers within the system automatically. So I could have Selenium type in customer names generated from a test data generator (the < DataGenerator > below is just an imaginary class), e.g.

        dim sFirstName as string = < DataGenerator >.GetRandomFirstName()
        dim sLastName as string = < DataGenerator >.GetRandomLastName()
        selenium.type("firstname_field", sFirstName)
        selenium.type("lastname_field", sLastName)

    I've already seen SQL Data Generator from Redgate, which has a command-line wrapper class, but I was wondering if there was anything else.

    Read the article

  • How do I get the name of the test method that was run in a TestNG tear-down method?

    - by Zachary Spencer
    Basically I have a tear-down method from which I want to log to the console which test was just run. How would I go about getting that string? I can get the class name, but I want the actual method that was just executed.

        class TestSomething {
            @AfterMethod
            public void tearDown() {
                System.out.println("The test that just ran was.... " + getTestThatJustRanMethodName());
            }

            @Test
            public void testCase() {
                assertTrue(1 == 1);
            }
        }

    This should output to the screen: "The test that just ran was.... testCase". However, I don't know the magic that getTestThatJustRanMethodName should actually be.
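
    TestNG can inject the just-executed test method into configuration methods, so a sketch of the tear-down above using that (it can also take an ITestResult parameter if the outcome matters):

        import java.lang.reflect.Method;

        import org.testng.annotations.AfterMethod;
        import org.testng.annotations.Test;

        import static org.testng.AssertJUnit.assertTrue;

        public class TestSomething {

            // TestNG fills in the Method parameter with the test method that just ran.
            @AfterMethod
            public void tearDown(Method method) {
                System.out.println("The test that just ran was.... " + method.getName());
            }

            @Test
            public void testCase() {
                assertTrue(1 == 1);
            }
        }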

    Read the article

  • VC++ and VisualAssert

    - by C_Bevan
    Hi, for some reason I can't get my tests to load in my project. In the Test Explorer it says "This process exited without registering with the agent - this may be due to the module not containing any test fixtures". I've tried Right Click > Add Test Fixture, and adding them in various other ways. I can get it to work on a blank project. Any idea which settings I might need to change?

    Read the article

  • Returning all "positions" of a list

    - by Daymor
    I have a list of "a"s and "b"s; the "b"s form something of a path and the "a"s are walls. I'm writing a program to make a graph of all the possible moves. I got the code running to check the first "b" for possible moves, but I have no idea how I'm going to find all the "b"s, let alone check them all without repeating. The major issue I'm having is getting the tuple coordinates of the "b"s out of the list. Any pointers/tips?

    Read the article

  • [C++] Boost Test: catch user-defined exceptions

    - by user231536
    If I have user-defined exceptions in my code, I can't get Boost Test to consider them as failures. For example, with

        BOOST_AUTO_TEST_CASE_EXPECTED_FAILURES(MyTest, 1)
        BOOST_AUTO_TEST_CASE(MyTest)
        {
            // code which throws a user-defined exception, not derived from std::exception
        }

    I get a generic message:

        Caught exception: .... unknown location(0):....

    It does not recognize this error as a failure, since it is not a std::exception, so it does not honor the expected_failures clause. How do I enforce that the piece of code should always throw an exception? This seems to be a useful thing to want: in case future code changes cause the code to pass and the exception is not thrown, I want to know about it.

    Read the article
