Search Results

Search found 13748 results on 550 pages for 'split testing'.


  • NUnit doesn't work well with Assert.AreEqual

    - by stasal
    Hi! I'm new to unit testing and NUnit in particular. I'm typing in examples from a book that uses Java and JUnit, but I'm using C# instead. The problem is: I've got a class with overridden methods such as Equals() and GetHashCode(), but when I try to compare two objects of this class with Assert.AreEqual() my code is not called, so I get an exception. Assert.True(MyClass.Equals(MyClass2)) does work well, but I don't want to use this construction instead of Assert.AreEqual(). Where can the problem be? Here is the class:

        public class Money
        {
            public int amount;
            protected string currency;

            public Money(int amount, string currency)
            {
                this.amount = amount;
                this.currency = currency;
            }

            public new bool Equals(object obj)
            {
                if (obj == null)
                    return false;
                Money money = (Money)obj;
                return (amount == money.amount) && (Currency().Equals(money.Currency()));
            }

            public new int GetHashCode()
            {
                return (string.Format("{0}{1}", amount, currency)).GetHashCode();
            }

            public static Money Dollar(int amount)
            {
                return new Money(amount, "USD");
            }

            public static Money Franc(int amount)
            {
                return new Money(amount, "CHF");
            }

            public Money Times(int multiplier)
            {
                return new Money(amount * multiplier, currency);
            }

            public string Currency()
            {
                return currency;
            }
        }

    And the test method itself:

        [TestFixture]
        public class DollarTest
        {
            [Test]
            public void TestMultiplication()
            {
                Money five = Money.Dollar(5);
                Assert.True(Money.Dollar(10).Equals(five.Times(2))); // ok
                Assert.AreEqual(Money.Dollar(10), five.Times(2));    // fails
            }
        }

    Thanks.
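
    For reference: Assert.AreEqual ends up going through Object.Equals, which is dispatched virtually, and the `new` modifier above hides the base method instead of overriding it, so the custom comparison is never reached. The usual fix is to declare both methods with `override` - a sketch of just the changed members:

        public override bool Equals(object obj)
        {
            Money money = obj as Money;
            if (money == null)
                return false;
            return (amount == money.amount) && (Currency().Equals(money.Currency()));
        }

        public override int GetHashCode()
        {
            return string.Format("{0}{1}", amount, currency).GetHashCode();
        }

    With override in place, Assert.AreEqual(Money.Dollar(10), five.Times(2)) dispatches to the custom Equals.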


  • Configuring TeamCity + NUnit unit tests so files can be loaded properly

    - by Dave
    In a nutshell, I have a solution that builds fine in the IDE, and the unit tests all run fine with the NUnit GUI (via the NUnitit VS2008 plugin). However, when I execute my TeamCity build runner, all unit tests that require file access (e.g. tests that run against specific XML files) just throw System.IO.DirectoryNotFoundException. The reason for this is clear: the runner is looking for the supporting XML files loaded by various unit tests in the wrong folder. The way my unit tests are structured looks like this:

        +-- project folder
            +-- unit tests folder
            |   +-- test.xml
            |   +-- test.cs
            +-- project file.xaml
            +-- project file.xaml.cs

    All of my projects own their own unit tests folder, which contains the .cs file and any XML files, XML schemas, etc. that are necessary to run the tests. So when I write my test.cs, I have it look for "test.xml" in the code because they are in the same folder (actually, I do something like ....\unit tests\test.xml, but that's kind of silly).

    As I said before, the tests run great in NUnit. But that's because the unit tests are part of the project. When running the unit tests from TeamCity, I am executing them against the assemblies that get copied to the main app's output folder. These unit test XML files should not be copied willy-nilly to the output folder just to make the tests pass.

    Can anyone suggest a better method of organizing my unit tests in each project (which are dependencies for the main app), such that I can execute the unit tests from NUnit and from the TeamCity build runner? The only other option I can come up with is to put the testing XML data in code, rather than loading it from a file. I would rather not do this.
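
    One common fix, independent of any particular runner: mark the supporting XML files as content with "Copy to Output Directory" set, and resolve their paths against the folder the test assembly runs from rather than against the current working directory (which is what differs under a CI runner). A minimal sketch:

        using System;
        using System.IO;

        public static class TestPaths
        {
            // AppDomain.BaseDirectory points at the folder the test assembly was
            // loaded from (even when a runner shadow-copies assemblies), so the same
            // relative path works in the IDE, the NUnit GUI, and the TeamCity runner.
            public static string Resolve(string relativePath)
            {
                return Path.Combine(AppDomain.CurrentDomain.BaseDirectory, relativePath);
            }
        }

        // usage inside a test:
        //     doc.Load(TestPaths.Resolve("test.xml"));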


  • Seeking suggestions on redesigning the interface

    - by ratkok
    As part of maintaining a large piece of legacy code, we need to change part of the design, mainly to make it more testable (unit testing). One of the issues we need to resolve is the existing interface between components. The interface between two components is a class that contains static methods only. Simplified example:

        class ABInterface {
        public:
            static void methodA();
            static void methodB();
            // ...
            static void methodZ();
        };

    The interface is used by component A: different methods call ABInterface::methodA() in order to prepare some input data and then invoke the appropriate functions within component B. Now we are trying to redesign this interface for various reasons:

    - Extending our unit test coverage - we need to break this dependency between the components, and stubs/mocks are to be introduced.
    - The interface between these components has diverged from the original design (i.e. a lot of the newer functions used for the inter-component interface were created outside this interface class).
    - The code is old, has changed a lot over time, and needs to be refactored.

    The change should not be disruptive for the rest of the system, and we try to limit how many test-required artifacts are left in the production code. Performance is very important, and there should be no (or very minimal) degradation after the redesign. The code is OO, in C++. I am looking for ideas on what approach to take. Any suggestions on how to do this efficiently?
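
    One approach often used for exactly this situation is to leave the static class in place and route component A's calls through an instance interface whose production implementation simply forwards to the static methods; tests then substitute a stub or mock, and the only runtime cost is a virtual call. A sketch in C# for brevity (the same shape carries over to C++ with an abstract base class); all names below are illustrative, not from the original code:

        // Stand-in for the existing static-only interface class.
        public static class ABInterface
        {
            public static void MethodA() { /* existing code */ }
            public static void MethodB() { /* existing code */ }
        }

        // Instance interface that component A depends on (injected, e.g. via constructor).
        public interface IComponentB
        {
            void MethodA();
            void MethodB();
        }

        // Production implementation: a thin forwarding shim, so behavior is unchanged.
        public class ComponentBFacade : IComponentB
        {
            public void MethodA() { ABInterface.MethodA(); }
            public void MethodB() { ABInterface.MethodB(); }
        }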


  • Dangers when deploying Flash/Flex UI test automation hooks to production?

    - by Merlyn Morgan-Graham
    I am interested in doing automated testing against a Flex-based UI. I have found that my best options for UI automation (due to being controllable from C#, good licensing conditions, etc.) all seem to require that I compile test hooks into my application. Because of this, I am thinking of recommending that these hooks be compiled into our build. I have found a few places on the net that recommend not deploying bits with this instrumentation enabled, and I'd like to know why. Is it a performance drain, or a security risk? If it is a security risk, can you explain how the attack surface is increased? I am not a Flash or Flex developer, though I have some experience with threat modeling.

    For reference, here are the tools I'm specifically considering:

    - QTP
    - Selenium-Flex API

    I am having problems finding all the warnings/suggestions I found last night, but here's an example that I can find: http://www.riatest.com/products/getting-started.html

        Warning! Automation enabled applications expose all properties of all GUI
        components. This makes them vulnerable to malicious use. Never make automation
        enabled application publicly available. Always restrict access to such
        applications and to RIATest Loader to trusted users only.

    Related question (how to do conditional compilation to insert/remove those hooks): Conditionally including Flex libraries (SWCs) in mxmlc/compc ant tasks


  • How to mock a String using mockito?

    - by Alceu Costa
    I need to simulate a test scenario in which I call the getBytes() method of a String object and get an UnsupportedEncodingException. I have tried to achieve that using the following code:

        String nonEncodedString = mock(String.class);
        when(nonEncodedString.getBytes(anyString())).thenThrow(new UnsupportedEncodingException("Parsing error."));

    The problem is that when I run my test case I get a MockitoException that says that I can't mock the java.lang.String class. Is there a way to mock a String object using mockito or, alternatively, a way to make my String object throw an UnsupportedEncodingException when I call the getBytes method?

    Here are more details to illustrate the problem. This is the class that I want to test:

        public final class A {
            public static String f(String str) {
                try {
                    return new String(str.getBytes("UTF-8"));
                } catch (UnsupportedEncodingException e) {
                    // This is the catch block that I want to exercise.
                    ...
                }
            }
        }

    This is my testing class (I'm using JUnit 4 and mockito):

        public class TestA {
            @Test(expected = UnsupportedEncodingException.class)
            public void test() {
                String aString = mock(String.class);
                when(aString.getBytes(anyString())).thenThrow(new UnsupportedEncodingException("Parsing error."));
                A.f(aString);
            }
        }


  • Advanced All In One .NET Framework (should I go for a software factory?)

    - by alfredo dobrekk
    Hi, I'm starting a new project that would basically take input from the user and save it to a database, across about 30 screens, and I would like to find a framework that provides the maximum number of these features out of the box:

    - .NET, C#, Windows Forms
    - unit testing
    - continuous integration
    - logging
    - screens with lists, combo boxes, text boxes, add, delete, save, cancel that are easy to update when you add a property to your classes or a field to your database
    - auto-completion on controls to help the user find their way
    - use of an ORM like NHibernate
    - easy multithreading and display of wait screens for the user
    - easy undo/redo
    - tabbed child windows
    - search forms
    - ability to grant access to some functionalities according to user profiles
    - MVP/MVVM or whatever design patterns
    - either code generation from the database to C# classes, or generation of the database schema from the C# classes
    - some kind of database versioning/upgrade, to easily update the database when I release patches to the application once in production
    - automatic control resizing
    - code metrics analysis
    - some code generator I can use against my entities that would generate a rough form I can rearrange afterwards
    - a code documentation generator
    - ...

    At this point I have 3 options:

    1. Build from scratch on top of the CLR :(
    2. Find the functionality among several open source frameworks and use them as an infrastructure stack
    3. Find a "software factory"

    I know it's a lot, but I really would like to use existing code to build upon so I can focus on business rules. What open source tools would you use to achieve this?


  • Breaking dependencies when you can't make changes to other files?

    - by codemuncher
    I'm doing some stealth agile development on a project. The lead programmer sees unit testing, refactoring, etc. as a waste of resources, and there is no way to convince him otherwise. His philosophy is "if it ain't broke, don't fix it" and I understand his point of view. He's been working on the project for over a decade and knows the code inside and out. I'm not looking to debate development practices.

    I'm new to the project and I've been tasked with adding a new feature. I've worked on legacy projects before and used agile development practices with good results, but those teams were more receptive to the idea and weren't afraid of making changes to code. I've been told I can use whatever development methodology I want, but I have to limit my changes to only those necessary to add the feature.

    I'm using TDD for the new classes I'm writing, but I keep running into roadblocks caused by the liberal use of global variables and the high coupling in the classes I need to interact with. Normally I'd start extracting interfaces for these classes and make their dependence on the global variables explicit by injecting them as constructor arguments or public properties. I could argue that the changes are necessary, but considering the lead never had to make them, I doubt he would see it my way.

    What techniques can I use to break these dependencies without ruffling the lead developer's feathers? I've made some headway using:

    - Extract Interface (for the new classes I'm creating)
    - Extend and override the wayward classes with test stubs (luckily most methods are public virtual)

    But these two can only get me so far.
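
    For readers unfamiliar with the second technique - "Subclass and Override Method", from Michael Feathers' Working Effectively with Legacy Code - here is a minimal sketch with hypothetical names, showing how a public virtual method becomes a seam for canned test data without touching the production class:

        // Hypothetical legacy types for illustration only.
        public static class GlobalState
        {
            public static string Config = "prod-config";
        }

        public class LegacyProcessor
        {
            // The wayward dependency: touches global state in production.
            public virtual string ReadGlobalConfig()
            {
                return GlobalState.Config;
            }

            public string Process()
            {
                return "processed:" + ReadGlobalConfig();
            }
        }

        // Testing subclass overrides the dependency with canned data.
        public class TestableProcessor : LegacyProcessor
        {
            public override string ReadGlobalConfig()
            {
                return "test-config";
            }
        }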


  • Force orientation change in testcase with fragments

    - by user1202032
    I have an Android test project in which I wish to programmatically change the orientation. My test:

        public class MainActivityLandscapeTest extends
                ActivityInstrumentationTestCase2<MainActivity> {

            public MainActivityLandscapeTest() {
                super(MainActivity.class);
            }

            private MainActivity mActivity;
            private Fragment mDetailFragment;
            private Fragment mListFragment;
            private Solo mSolo;

            @Override
            protected void setUp() throws Exception {
                super.setUp();
                mSolo = new Solo(getInstrumentation(), getActivity());
                mSolo.setActivityOrientation(Solo.LANDSCAPE);
                mActivity = getActivity();
                mListFragment = (Fragment) mActivity.getSupportFragmentManager()
                        .findFragmentById(R.id.listFragment);
                mDetailFragment = (Fragment) mActivity.getSupportFragmentManager()
                        .findFragmentById(R.id.detailFragment);
            }

            public void testPreConditions() {
                assertTrue(mActivity != null);
                assertTrue(mSolo != null);
                assertTrue(mListFragment != null);
                assertTrue(getActivity().getResources().getConfiguration().orientation
                        == Configuration.ORIENTATION_LANDSCAPE);
            }

            /**
             * Only show detailFragment in landscape mode
             */
            public void testOrientation() {
                assertTrue(mListFragment.isVisible());
                assertTrue(mDetailFragment.isVisible());
            }
        }

    The layouts for the activity are in separate folders, layout-port and layout-land:

        layout-port/
            fragment_main.xml
        layout-land/
            fragment_main.xml

    In landscape mode the layout contains 2 fragments (detail and list), while in portrait it contains 1 (list only). If the device/emulator is already in landscape mode before testing begins, this test passes. If in portrait, it fails with a NullPointerException on mListFragment and mDetailFragment. Adding a delay (waitForIdleSync() and/or waitForActivity()) did NOT seem to solve my problem. How do I force the orientation to landscape in my test, while still being able to find the fragments using findFragmentById()?


  • How can I build a wrapper to wait for listening on a port?

    - by BillyBBone
    Hi, I am looking for a way of programmatically testing a script written with the asyncore Python module. My test consists of launching the script in question -- if a TCP listen socket is opened, the test passes. Otherwise, if the script dies before getting to that point, the test fails. The purpose of this is knowing whether a nightly build works (at least up to a point) or not.

    I was thinking the best way to test would be to launch the script in some kind of sandbox wrapper which waits for a socket request. I don't care about actually listening for anything on that port, just intercepting the request and using that as an indication that my test passed. I think it would be preferable to intercept the open-socket request, rather than polling at set intervals (I hate polling!). But I'm a bit out of my depth as far as how exactly to do this. Can I do this with a shell script? Or perhaps I need to override the asyncore module at the Python level?

    Thanks in advance, - B


  • Slightly different execution times between python2 and python3

    - by user557634
    Hi. Recently I wrote a simple generator of permutations in Python (an implementation of the "plain changes" algorithm described by Knuth in "The Art of Computer Programming", volume 4). I was curious about the differences in execution time between python2 and python3. Here is my function:

        def perms(s):
            s = tuple(s)
            N = len(s)
            if N <= 1:
                yield s[:]
                raise StopIteration()
            for x in perms(s[1:]):
                for i in range(0, N):
                    yield x[:i] + (s[0],) + x[i:]

    I tested both using the timeit module. My tests:

        $ echo "python2.6:" && ./testing.py && echo "python3:" && ./testing3.py
        python2.6:
        args    time[ms]
        1       0.003811
        2       0.008268
        3       0.015907
        4       0.042646
        5       0.166755
        6       0.908796
        7       6.117996
        8       48.346996
        9       433.928967
        10      4379.904032
        python3:
        args    time[ms]
        1       0.00246778964996
        2       0.00656183719635
        3       0.01419159912
        4       0.0406293644678
        5       0.165960511097
        6       0.923101452814
        7       6.24257639835
        8       53.0099868774
        9       454.540967941
        10      4585.83498001

    As you can see, for numbers of arguments less than 6, python3 is faster, but then the roles are reversed and python2.6 does better. As I am a novice in Python programming, I wonder why that is so? Or maybe my script is more optimized for python2? Thank you in advance for a kind answer :)


  • Should a developer write their own test plan for Q/A?

    - by Mat Nadrofsky
    Who writes the test plans in your shop? Who should write them? I realize developers (like me) regularly do their own unit testing whilst developing, and in some cases even their own Q/A depending on the size of the shop and the nature of the business. But in a big software shop with a full development team and a Q/A team, who should be writing those official "my changes are done now" test plans?

    Soon, we'll be bringing another Q/A member onto our development team. My question is: going forward, is it a good practice to get your developers to write their own test plans? Something tells me that part of that might make sense but another part might not...

    What I like about it: the developer is very familiar with the changes made, so it's easy to produce the document. What I don't like about it: the developer knows how it's supposed to work and might write a test plan that caters to this without knowing it.

    So, with the above in mind, what is the general stance on this topic? I'm of course already reading books like The Mythical Man-Month, Code Complete and a few others which really do help, but I'd like to get some input from the group as well.


  • Factorial function - design and test.

    - by lukas
    I'm trying to nail down some interview questions, so I started with a simple one: design the factorial function. This function is a leaf (no dependencies - easily testable), so I made it static inside a helper class.

        public static class MathHelper
        {
            public static int Factorial(int n)
            {
                Debug.Assert(n >= 0);
                if (n < 0)
                {
                    throw new ArgumentException("n cannot be lower than 0");
                }

                Debug.Assert(n <= 12);
                if (n > 12)
                {
                    throw new OverflowException("Overflow occurs above 12 factorial");
                }

                // by definition
                if (n == 0)
                {
                    return 1;
                }

                int factorialOfN = 1;
                for (int i = 1; i <= n; ++i)
                {
                    //checked
                    //{
                    factorialOfN *= i;
                    //}
                }

                return factorialOfN;
            }
        }

    Testing:

        [TestMethod]
        [ExpectedException(typeof(OverflowException))]
        public void Overflow()
        {
            int temp = FactorialHelper.MathHelper.Factorial(40);
        }

        [TestMethod]
        public void ZeroTest()
        {
            int factorialOfZero = FactorialHelper.MathHelper.Factorial(0);
            Assert.AreEqual(1, factorialOfZero);
        }

        [TestMethod]
        public void FactorialOf5()
        {
            int factOf5 = FactorialHelper.MathHelper.Factorial(5);
            Assert.AreEqual(120, factOf5);
        }

        [TestMethod]
        [ExpectedException(typeof(ArgumentException))]
        public void NegativeTest()
        {
            int factOfMinus5 = FactorialHelper.MathHelper.Factorial(-5);
        }

    I have a few questions:

    - Is it correct? (I hope so ;) )
    - Does it throw the right exceptions?
    - Should I use a checked context, or is this trick (n > 12) OK?
    - Is it better to use uint instead of checking for negative values?

    Future improvements: an overload for long, decimal, BigInteger, or maybe a generic method? Thank you.
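
    For comparison, here is a sketch of what the commented-out checked variant would look like. In a checked context the CLR throws OverflowException on its own as soon as the multiplication overflows Int32, which removes the need for the hand-picked limit of 12:

        public static int Factorial(int n)
        {
            if (n < 0)
            {
                throw new ArgumentException("n cannot be lower than 0");
            }

            int factorialOfN = 1;
            for (int i = 1; i <= n; ++i)
            {
                checked
                {
                    factorialOfN *= i; // throws OverflowException on int overflow
                }
            }
            return factorialOfN;
        }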


  • Are TestContext.Properties usable?

    - by DBJDBJ
    Have Visual Studio generate a unit test class, then uncomment the class initialization method. Inside it, add your own property using the testContext argument. Upon test app startup this method is indeed called by the testing infrastructure:

        // Use ClassInitialize to run code before running the first test in the class
        [ClassInitialize()]
        public static void MyClassInitialize(TestContext testContext)
        {
            /*
             * Any user-defined testContext.Properties
             * added here will be erased after this method exits
             */
            testContext.Properties.Add("key", 1); // place the break point here
        }

    After leaving MyClassInitialize, any properties added by the user are lost; only the 10 "official" ones are left. Actually, TestContext gets overwritten with the initial official one each time, before each test method is called. It is not overwritten only if the user has a test initialization method; the changes made over there are passed on to the test:

        // Use TestInitialize to run code before running each test
        [TestInitialize()]
        public void MyTestInitialize()
        {
            this.TestContext.Properties.Add("this is preserved", 1);
        }

    This effectively means TestContext.Properties is "mostly" read-only for users, which is not clearly documented in MSDN. It seems to me this is a very messy design and implementation. Why have TestContext.Properties as a collection at all? Users have many other ways to do class-wide initialization. Please discuss. --DBJ
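
    As the post notes, class-wide initialization doesn't have to go through TestContext at all. A minimal sketch of the usual alternative - plain static fields set in ClassInitialize, which MSTest leaves alone between tests:

        [TestClass]
        public class SharedStateTests
        {
            private static int s_sharedKey;

            [ClassInitialize()]
            public static void MyClassInitialize(TestContext testContext)
            {
                s_sharedKey = 1; // static state survives into every test method
            }

            [TestMethod]
            public void UsesSharedState()
            {
                Assert.AreEqual(1, s_sharedKey);
            }
        }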


  • Mocking non-virtual methods in C++ without editing production code?

    - by wk1989
    Hello, I am a fairly new software developer currently working on adding unit tests to an existing C++ project that started years ago. Due to a non-technical reason, I'm not allowed to modify any existing code. The base class of all my modules has a bunch of methods for setting/getting data and communicating with other modules. Since I just want to unit test each individual module, I want to be able to use canned values for all my inter-module communication methods. I.e. for a method Ping() which checks if another module is active, I want to have it return true or false based on what kind of test I'm doing.

    I've been looking into Google Test and Google Mock, and Google Mock does support mocking non-virtual methods. However, the approach described (http://code.google.com/p/googlemock/wiki/CookBook#Mocking_Nonvirtual_Methods) requires me to "templatize" the original methods to take in either real or mock objects. I can't go and templatize my methods in the base class due to the requirement mentioned earlier, so I need some other way of mocking these non-virtual methods.

    Basically, the methods I want to mock are in some base class, and the modules I want to unit test and create mocks of are derived classes of that base class. There are intermediate modules in between my base Module class and the modules that I want to test. I would appreciate any advice! Thanks, JW

    EDIT: A more concrete example. My base class is, let's say, rootModule; the module I want to test is leafModule. There is an intermediate module which inherits from rootModule, and leafModule inherits from this intermediate module. In my leafModule, I want to test the doStuff() method, which calls the non-virtual GetStatus(moduleName) defined in the rootModule class. I need to somehow make GetStatus() return a chosen canned value. Mocking is new to me, so is using mock objects even the right approach?


  • How do you unit test new code that uses a bunch of classes that cannot be instantiated in a test harness?

    - by trendl
    I'm writing a messaging layer that should handle communication with a third-party API. The API has a bunch of classes that cannot be easily (if at all) instantiated in a test harness. I decided to wrap each class that I need in my unit tests with an adapter/wrapper and expose the members I need through this adapter class. Often I need to expose the wrapped type as well, which I do by exposing it as an object. I have also provided an interface for each of the adapter classes so that I can use them with a mocking framework. This way I can substitute the classes in a test with whatever I need.

    The downside is that I have a bunch of adapter classes that so far serve no other purpose than testing. For me that is a good enough reason by itself, but others may find it isn't. Possibly, when I write an implementation for another third-party vendor's API, I may be able to reuse much of my code and only provide the adapters specific to that vendor's API. However, this is a bit of a long shot and I'm not actually sure it will work.

    What do you think? Is this approach viable, or am I writing unnecessary code that serves no real purpose? Let me say that I do want to write unit tests for my messaging layer, and I do not know how to do it otherwise.
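
    A minimal sketch of the adapter arrangement being described, with hypothetical names (VendorChannel and its Transmit method stand in for whatever the real third-party API exposes):

        // Interface the messaging layer codes against; easy to mock in tests.
        public interface IChannelAdapter
        {
            void Send(string message);
            object WrappedInstance { get; } // the wrapped type, exposed as object
        }

        // Thin adapter over the vendor class - the only code that touches the API.
        public class VendorChannelAdapter : IChannelAdapter
        {
            private readonly VendorChannel m_channel;

            public VendorChannelAdapter(VendorChannel channel)
            {
                m_channel = channel;
            }

            public void Send(string message)
            {
                m_channel.Transmit(message); // hypothetical vendor call
            }

            public object WrappedInstance
            {
                get { return m_channel; }
            }
        }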


  • How to test a Grails service that utilizes a criteria query (with Spock)?

    - by user569825
    I am trying to test a simple service method. That method mainly just returns the result of a criteria query, and I want to test whether it returns the one result or not (depending on what is queried for). The problem is that I am unaware of how to write the corresponding test correctly. I am trying to accomplish it via Spock, but doing the same with any other way of testing also fails. Can someone tell me how to amend the test in order to make it work for the task at hand? (BTW, I'd like to keep it a unit test if possible.)

    The EventService method:

        public HashSet<Event> listEventsForDate(Date date, int offset, int max) {
            date.clearTime()
            def c = Event.createCriteria()
            def results = c {
                and {
                    le("startDate", date + 1) // starts tonight at midnight or prior?
                    ge("endDate", date)       // ends today or later?
                }
                maxResults(max)
                order("startDate", "desc")
            }
            return results
        }

    The Spock specification:

        package myapp

        import grails.plugin.spock.*
        import spock.lang.*

        class EventServiceSpec extends Specification {
            def event
            def eventService = new EventService()

            def setup() {
                event = new Event()
                event.publisher = Mock(User)
                event.title = 'et'
                event.urlTitle = 'ut'
                event.details = 'details'
                event.location = 'location'
                event.startDate = new Date(2010, 11, 20, 9, 0)
                event.endDate = new Date(2011, 3, 7, 18, 0)
            }

            def "list the Events of a specific date"() {
                given: "An event ranging over multiple days"

                when: "I look up a date for its respective events"
                def results = eventService.listEventsForDate(searchDate, 0, 100)

                then: "The event is found or not - depending on the requested date"
                numberOfResults == results.size()

                where:
                searchDate           | numberOfResults
                new Date(2010,10,19) | 0 // one day before startDate
                new Date(2010,10,20) | 1 // at startDate
                new Date(2010,10,21) | 1 // one day after startDate
                new Date(2011, 1, 1) | 1 // someday during the event range
                new Date(2011, 3, 6) | 1 // one day before endDate
                new Date(2011, 3, 7) | 1 // at endDate
                new Date(2011, 3, 8) | 0 // one day after endDate
            }
        }

    The error:

        groovy.lang.MissingMethodException: No signature of method: static myapp.Event.createCriteria() is applicable for argument types: () values: []
            at myapp.EventService.listEventsForDate(EventService.groovy:47)
            at myapp.EventServiceSpec.list the Events of a specific date(EventServiceSpec.groovy:29)


  • How can I get 100% test coverage in a Perl module that uses DBI?

    - by BrianH
    I am a bit new to the Devel::Cover module, but have found it very useful in making sure I am not missing tests. A problem I am running into is understanding the report from Devel::Cover. I've looked at the documentation, but can't figure out what I need to test to get 100% coverage. Here is the output from the cover report:

        line  err   stmt  bran  cond  sub  pod  time  code
        ...
        36                                             sub connect_database {
        37            3                  3    1  1126      my $self = shift;
        38            3    100                      24     if ( !$self->{dsn} ) {
        39            1                             7          croak 'dsn not supplied - cannot connect';
        40                                                 }
        41    ***     2          33                 21     $self->{dbh} = DBI->connect( $self->{dsn}, q{}, q{} )
        42                                                     || croak "$DBI::errstr";
        43            1                             11     return $self;
        44                                             }

        line  err    %    l  !l&&r  !l&&!r  expr
        ----- --- ------ --- ------ ------ ----
        41    ***    33    1      0      0  'DBI'->connect($$self{'dsn'}, '', '') || croak("$DBI::errstr")

    And here is an example of my code that tests this specific line:

        my $database = MyModule::Database->new( { dsn => 'Invalid DSN' } );
        throws_ok(
            sub { $database->connect_database() },
            qr/Can't connect to data source/,
            'Test connection exception (invalid dsn)'
        );

    This test passes - the connect does throw an error and fulfills my throws_ok test. I do have some tests that test for a successful connection, which is why I think I have 33% coverage. But if I'm reading it correctly, cover thinks I am not testing the "|| croak" part of the statement. I thought I was, with the throws_ok test, but obviously I am missing something. Does anyone have advice on how I can test my DBI->connect line successfully? Thanks!


  • What would be a better implementation of a shared variable among subclasses?

    - by Churk
    So currently I have a Spring unit-testing application, and it requires me to get a session cookie from a foreign authentication source. The problem is that this authentication process is fairly expensive and time-consuming, and I am trying to create a structure where I authenticate once, in any subclass, and any subsequently created subclass will reuse this session cookie without hitting the authentication process again.

    My problem right now is that the static cookie is null each time another subclass is created. I've been reading that using a static as a global variable is a bad idea, but I couldn't think of another way to do this, given that the Spring framework sets things up at run time, and I need to set the cookie so that all other classes can use it.

    Another piece of information: the variable is in use but can change during run time. It is not a single user being signed in and used across the board. It's more like: Sub1 calls login and we have a cookie, then multiple tests use that login until SubX comes in and says "I am using different credentials, so I need to log in as something else", and this repeats.

    Here is an outline of my code:

        public class Parent implements InitializingBean {
            protected static String BASE_URL;
            public static Cookie cookie;

            // ... all default InitializingBean methods ...

            public void afterPropertiesSet() {
                cookie = // login process returns a cookie
            }
        }

        public class Sub1 extends Parent {
            @Resource
            public String baseURL;

            @PostConstruct
            public void init() {
                // set the parent's BASE_URL with my baseURL
                BASE_URL = baseURL;
            }

            public void doSomething() {
                // Do something with cookie, because it should have been set by the parent class
            }
        }

        public class Sub2 extends Parent {
            @Resource
            public String baseURL;

            @PostConstruct
            public void init() {
                // set the parent's BASE_URL with my baseURL
                BASE_URL = baseURL;
            }

            public void doSomethingElse() {
                // Do something with cookie, because it should have been set by the parent class
            }
        }


  • How to rewrite data-driven test suites from JUnit 3 to JUnit 4?

    - by rics
    I am using data-driven test suites running on JUnit 3, based on Rainsberger's JUnit Recipes. The purpose of these tests is to check whether a certain function is properly implemented for a set of input-output pairs. Here is the definition of the test suite:

        public static Test suite() throws Exception {
            TestSuite suite = new TestSuite();

            Calendar calendar = GregorianCalendar.getInstance();
            calendar.set(2009, 8, 05, 13, 23); // 2009. 09. 05. 13:23
            java.sql.Date date = new java.sql.Date(calendar.getTime().getTime());

            suite.addTest(new DateFormatTestToString(date,
                    JtDateFormat.FormatType.YYYY_MON_DD, "2009-SEP-05"));
            suite.addTest(new DateFormatTestToString(date,
                    JtDateFormat.FormatType.DD_MON_YYYY, "05/SEP/2009"));

            return suite;
        }

    And the definition of the testing class:

        public class DateFormatTestToString extends TestCase {
            private java.sql.Date date;
            private JtDateFormat.FormatType dateFormat;
            private String expectedStringFormat;

            public DateFormatTestToString(java.sql.Date date,
                    JtDateFormat.FormatType dateFormat,
                    String expectedStringFormat) {
                super("testGetString");
                this.date = date;
                this.dateFormat = dateFormat;
                this.expectedStringFormat = expectedStringFormat;
            }

            public void testGetString() {
                String result = JtDateFormat.getString(date, dateFormat);
                assertTrue(expectedStringFormat.equalsIgnoreCase(result));
            }
        }

    How is it possible to test several input-output parameters of a method using JUnit 4? This question and its answers explained to me the distinction between JUnit 3 and 4 in this regard. This question and its answers describe the way to create a test suite for a set of classes, but not for a method with a set of different parameters.


  • Creating mock objects in PHPUnit

    - by Mike
    Hi, I've searched but can't quite find what I'm looking for, and the manual isn't much help in this respect. I'm fairly new to unit testing, so I'm not sure if I'm on the right track at all. Anyway, on to the question. I have a class:

        <?php
        class testClass {
            public function doSomething($array_of_stuff) {
                return AnotherClass::returnRandomElement($array_of_stuff);
            }
        }
        ?>

    Now, clearly I want AnotherClass::returnRandomElement($array_of_stuff) to return the same thing every time. My question is: in my unit test, how do I mock up this object? I've tried adding AnotherClass to the top of the test file, but when I want to test AnotherClass I get the "Cannot redeclare class" error.

    I think I understand factory classes, but I'm not sure how I would apply that in this instance. Would I need to write an entirely separate AnotherClass class which contained test data, and then use the factory class to load that instead of the real AnotherClass? Or is using the factory pattern just a red herring?

    I tried this in the setUp() function:

        $RedirectUtils_stub = $this->getMockForAbstractClass('RedirectUtils');
        $o1 = new stdClass();
        $o1->id = 2;
        $o1->test_id = 2;
        $o1->weight = 60;
        $o1->data = "http://www.google.com/?ffdfd=fdfdfdfd?route=1";
        $RedirectUtils_stub->expects($this->any())
                           ->method('chooseRandomRoot')
                           ->will($this->returnValue($o1));
        $RedirectUtils_stub->expects($this->any())
                           ->method('decodeQueryString')
                           ->will($this->returnValue(array()));

    but these stubs are ignored, and I can't work out whether it's something I'm doing wrong or the way I'm accessing the AnotherClass methods. Help! This is driving me nuts.


  • A good way to write unit tests

    - by bobobobo
    So, I previously wasn't really in the practice of writing unit tests - now I kind of am, and I need to check if I'm on the right track. Say you have a class that deals with math computations:

        class Vector3
        {
        public:
            // Yes, public.
            float x, y, z ;

            // ... ctors ...
        } ;

        Vector3 operator+( const Vector3& a, const Vector3 &b )
        {
            return Vector3( a.x + b.y /* oops!! hence the need for unit testing.. */,
                            a.y + b.y,
                            a.z + b.z ) ;
        }

    There are 2 ways I can really think of to do a unit test on a Vector class:

    1) Hand-solve some problems, then hard-code the numbers into the unit test and pass only if the result is equal to your hand-solved, hard-coded result:

        bool UnitTest_ClassVector3_operatorPlus()
        {
            Vector3 a( 2, 3, 4 ) ;
            Vector3 b( 5, 6, 7 ) ;

            Vector3 result = a + b ;

            // "expected" is computed outside of the computer, and
            // hard coded here. For more complicated operations like
            // arbitrary axis rotation this takes a bit of paperwork,
            // but only the final result will ever be entered here.
            Vector3 expected( 7, 9, 11 ) ;

            if( result.isNear( expected ) )
                return PASS ;
            else
                return FAIL ;
        }

    2) Rewrite the computation code very carefully inside the unit test:

        bool UnitTest_ClassVector3_operatorPlus()
        {
            Vector3 a( 2, 3, 4 ) ;
            Vector3 b( 5, 6, 7 ) ;

            Vector3 result = a + b ;

            // "expected" is computed HERE. This
            // means all you've done is coded the
            // same thing twice, hopefully not having
            // repeated the same mistake again
            Vector3 expected( 2 + 5, 6 + 3, 4 + 7 ) ;

            if( result.isNear( expected ) )
                return PASS ;
            else
                return FAIL ;
        }

    Or is there another way to do something like this?
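
    A third option is to assert algebraic properties that any correct implementation must satisfy - commutativity, adding zero as an identity - so no expected value has to be computed at all, by hand or in the test. A sketch, written here in C# with NUnit, assuming an equivalent Vector3 type with an IsNear comparison (the names are illustrative):

        [TestFixture]
        public class Vector3AdditionProperties
        {
            [Test]
            public void AdditionIsCommutative()
            {
                // Deliberately asymmetric components, so mixed-up members
                // (like the a.x + b.y bug above) change the result.
                Vector3 a = new Vector3(2, 3, 4);
                Vector3 b = new Vector3(5, 7, 11);
                Assert.IsTrue((a + b).IsNear(b + a));
            }

            [Test]
            public void AddingZeroIsIdentity()
            {
                Vector3 a = new Vector3(2, 3, 4);
                Assert.IsTrue((a + new Vector3(0, 0, 0)).IsNear(a));
            }
        }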


  • Ways to support manually executed tests? (that can be used on a Mac)

    - by Rinzwind
    Are there any tools that can be used on a Mac to support manually executed tests? I have a number of tests that I'm executing manually, and which I'm currently documenting using merely a plain text file. "Tools" can be interpreted rather loosely here; anything that's a step up from the plain text file would be useful: a template for some suitable application, supporting AppleScript scripts, a web-based system, a full-blown application...

    Some things that would be great to have better support for (see also the example below):

    - Checking off each step while you're manually executing the test.
    - Showing the next step(s) in a small window that is always kept in front of all other windows.
    - Automatically updating the 'last tested' and 'using SVN revision' info.
    - Keeping a record of all previous testing rounds (not just the last one).
    - ...

    Any suggestions for any such "tools" that can be used on a Mac? An example (faked) entry from the plain text file, to give you a better idea of what I'm looking for:

        - Check that exported web pages render properly in Safari.
          Last tested: 2010-03-24
          Using SVN revision: 1000
          Steps:
          - Open a new document.
          - Add some items to the document.
          - Export the document to a web page "Test.html" in a new folder "Export Test" on the Desktop.
          - Open the web page in Safari, script:
                tell application "Finder"
                    open file "Test.html" of folder "Export Test" of desktop
                end tell
          Expected results:
          - The web page should appear properly with all items shown.
          Clean up steps:
          - Remove the folder "Export Test" from the Desktop.

    (Note: for those unaware, the snippet of AppleScript in the above can be executed from most text editing applications by selecting the snippet and using the Services menu: application menu > Services > Script Editor > Run as AppleScript. This is quite useful to automate some steps for tests that are difficult to automate as a whole.)


  • Where to start with the development of a first database-driven web app? (long question)

    - by Ryan
    Hi all, I've decided to develop a database-driven web app, but I'm not sure where to start. The end goal of the project is three-fold:

    1) to learn new technologies and practices,
    2) to deliver an unsolicited demo to management showing how information that the company stores as office documents, spread across a cumbersome network folder structure, can be consolidated and made easier to access and maintain, and
    3) to show my co-workers how Test Driven Development and prototyping via class diagrams can be very useful and reduce future maintenance headaches.

    I think this ends up being a basic CMS, for which I have generated a set of features, see below.

    1) Create a database to store the site structure (organized as a tree with a 'project group'-project structure).
    2) Pull the site structure from the database and display it as a tree using basic front-end technologies.
    3) Add administrator privileges/tools for modifying the site structure.
    4) Auto-create the required sub-pages when an admin adds a new project.
    4.1) There will be several sub-pages under each project, and the content for each sub-page is different.
    5) Add user privileges for assigning read and write privileges to sub-pages.

    What I would like to do is use Test Driven Development and class diagramming as part of my process for developing this project. My problem: I'm not sure where to start. I have read about unit testing and UML, but have never used them in practice. Also, having never worked with databases before, how do I incorporate these items into the models and unit tests? Thank you all in advance for your expertise.


  • Unit Tests Architecture Question

    - by Tom Tresansky
    So I've started to lay out unit tests for the following bit of code:

        public interface MyInterface {
            void MyInterfaceMethod1();
            void MyInterfaceMethod2();
        }

        public class MyImplementation1 implements MyInterface {
            public void MyInterfaceMethod1() {
                // do something
            }

            public void MyInterfaceMethod2() {
                // do something else
            }

            public void SubRoutineP() {
                // other functionality specific to this implementation
            }
        }

        public class MyImplementation2 implements MyInterface {
            public void MyInterfaceMethod1() {
                // do a 3rd thing
            }

            public void MyInterfaceMethod2() {
                // do something completely different
            }

            public void SubRoutineQ() {
                // other functionality specific to this implementation
            }
        }

    There are several implementations, with the expectation of more to come. My initial thought was to save myself time re-writing unit tests with something like this:

        public abstract class MyInterfaceTester {
            protected MyInterface m_object;

            @Before
            public void setUp() {
                m_object = getTestedImplementation();
            }

            public abstract MyInterface getTestedImplementation();

            @Test
            public void testMyInterfaceMethod1() {
                // use m_object to run tests
            }

            @Test
            public void testMyInterfaceMethod2() {
                // use m_object to run tests
            }
        }

    which I could then subclass easily to test the implementation-specific additional methods, like so:

        public class MyImplementation1Tester extends MyInterfaceTester {
            public MyInterface getTestedImplementation() {
                return new MyImplementation1();
            }

            @Test
            public void testSubRoutineP() {
                // use m_object to run tests
            }
        }

    and likewise for implementation 2 onwards. So my question really is: is there any reason not to do this? JUnit seems to like it just fine, and it serves my needs, but I haven't really seen anything like it in any of the unit testing books and examples I've been reading. Is there some best practice I'm unwittingly violating? Am I setting myself up for heartache down the road? Is there simply a much better way out there I haven't considered? Thanks for any help.
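
    For what it's worth, the shape above (an abstract test superclass run through concrete subclasses) also ports to other xUnit frameworks. A minimal NUnit sketch, assuming C# counterparts IMyInterface and MyImplementation1 to the Java types above; NUnit skips the abstract class itself but runs its inherited [Test] methods in each concrete fixture:

        public abstract class MyInterfaceTester
        {
            protected IMyInterface m_object;

            // Each concrete fixture supplies the implementation under test.
            protected abstract IMyInterface GetTestedImplementation();

            [SetUp]
            public void SetUp()
            {
                m_object = GetTestedImplementation();
            }

            [Test]
            public void TestMyInterfaceMethod1()
            {
                // shared contract tests against m_object
            }
        }

        [TestFixture]
        public class MyImplementation1Tester : MyInterfaceTester
        {
            protected override IMyInterface GetTestedImplementation()
            {
                return new MyImplementation1();
            }
        }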


  • Rails Functional test assert_select javascript respond_to

    - by Macint
    Hello, I am currently trying to write functional tests for a charging form which gets loaded onto the page via AJAX (jQuery). It loads the form from the charge_form action, which returns the consult_form.js.erb view. This all works, but I am having trouble with my testing. In the functional test I can go to the action, but I cannot use assert_select to find an element and verify that the form is in fact there.

    The error:

        1) Failure:
        test_should_create_new_consult(ConsultsControllerTest)
            [/test/functional/consults_controller_test.rb:8]:
        Expected at least 1 element matching "h4", found 0.
        <false> is not true.

    This is the view, consult_form.js.erb:

        <div id="charging_form">
          <h4>Charging form</h4>
          <div class="left" id="charge_selection">
            <%= select_tag("select_category", options_from_collection_for_select(@categories, :id, :name)) %><br/>
        ...

    consults_controller_test.rb:

        require 'test_helper'

        class ConsultsControllerTest < ActionController::TestCase
          def test_should_create_new_consult
            get_with_user :charge_form, :animal_id => animals(:one), :id => consults(:one), :format => 'js'
            assert_response :success
            assert_select 'h4', "Charging form" # can't find h4
          end
        end

    Is there a problem with using assert_select with types other than HTML? Thank you for any help!

