Search Results

Search found 26297 results on 1052 pages for 'unit test'.


  • Specify test method name prefix for test suite in junit 3

    - by Marko Kocic
    Is it possible to tell JUnit 3 to use an additional method name prefix when looking up test method names? The goal is to have additional tests that run locally but should not be run on the continuous integration server. The CI server doesn't use test suites; it looks up all classes whose names end with "Test" and executes all methods that begin with "test". I want to be able to locally run not only the tests run by the integration server, but also tests whose method names start with, for example, "nocitest" or something like that. I don't mind having to organize the extra tests into a test suite locally, since CI just ignores them.
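
    A sketch of one way to get there with plain JUnit 3.8 (not from the article; MyWidgetTest is a hypothetical TestCase subclass and "nocitest" is just the example prefix): build the local suite by reflecting over the test class and adding every matching method with TestSuite.createTest():

        import java.lang.reflect.Method;
        import junit.framework.Test;
        import junit.framework.TestSuite;

        public class LocalOnlySuite {

            // Collects the regular "test*" methods plus any "nocitest*" methods.
            public static Test suite() {
                TestSuite suite = new TestSuite("Local-only tests");
                addTestsWithPrefix(suite, MyWidgetTest.class, "test");      // MyWidgetTest is hypothetical
                addTestsWithPrefix(suite, MyWidgetTest.class, "nocitest");
                return suite;
            }

            private static void addTestsWithPrefix(TestSuite suite, Class<?> testClass, String prefix) {
                for (Method method : testClass.getMethods()) {
                    if (method.getName().startsWith(prefix) && method.getParameterTypes().length == 0) {
                        // createTest() builds a TestCase instance that runs exactly the named method
                        suite.addTest(TestSuite.createTest(testClass, method.getName()));
                    }
                }
            }
        }

    The CI server keeps scanning for "test*" methods as before; only the locally run suite picks up the extra prefix.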

    Read the article

  • Unit Testing Interfaces in Python

    - by Nicholas Mancuso
    I am currently learning Python in preparation for a class over the summer and have gotten started by implementing different types of heaps and priority-based data structures. I began to write a unit test suite for the project but ran into difficulties creating a generic unit test that only tests the interface and is oblivious to the actual implementation. I am wondering if it is possible to do something like this:

        suite = HeapTestSuite(BinaryHeap())
        suite.run()
        suite = HeapTestSuite(BinomialHeap())
        suite.run()

    What I am currently doing just feels... wrong (multiple inheritance? ACK!):

        class TestHeap:
            def reset_heap(self):
                self.heap = None

            def test_insert(self):
                self.reset_heap()
                # test that insert doesn't throw an exception...
                for x in self.inseq:
                    self.heap.insert(x)

            def test_delete(self):
                # assert we get the first value we put in
                self.reset_heap()
                self.heap.insert(5)
                self.assertEquals(5, self.heap.delete_min())
                # harder test: put a sequence in and check that it comes out right
                self.reset_heap()
                for x in self.inseq:
                    self.heap.insert(x)
                for x in xrange(len(self.inseq)):
                    val = self.heap.delete_min()
                    self.assertEquals(val, x)

        class BinaryHeapTest(TestHeap, unittest.TestCase):
            def setUp(self):
                self.inseq = range(99, -1, -1)
                self.heap = BinaryHeap()

            def reset_heap(self):
                self.heap = BinaryHeap()

        class BinomialHeapTest(TestHeap, unittest.TestCase):
            def setUp(self):
                self.inseq = range(99, -1, -1)
                self.heap = BinomialHeap()

            def reset_heap(self):
                self.heap = BinomialHeap()

        if __name__ == '__main__':
            unittest.main()

    Read the article

  • iOS - Unit tests for KVO/delegate codes

    - by ZhangChn
    I am going to design an MVC pattern. It could be designed either with a delegate pattern or with Key-Value Observing (KVO) to notify the controller about model changes. The project requires certain quality control procedures in order to conform to verification documents. My questions: Does the delegate pattern fit unit testing better than KVO? If KVO fits better, would you please suggest some sample code?

    Read the article

  • CakePHP Test Fixtures Drop My Tables Permanently After Running A Test Case

    - by Frank
    I'm not sure what I've done wrong in my CakePHP unit test configuration. Every time I run a test case, the model tables associated with my fixtures go missing from my test database. After running an individual test case I have to re-import my database tables using phpMyAdmin. Here are the relevant files. This is the class I'm trying to test, comment.php (its table is dropped after the test):

        App::import('Sanitize');
        class Comment extends AppModel {
            public $name = 'Comment';
            public $actsAs = array('Tree');
            public $belongsTo = array('User' => array('fields' => array('id', 'username')));
            public $validate = array(
                'text' => array(
                    'rule' => array('between', 1, 4000),
                    'required' => 'true',
                    'allowEmpty' => 'false',
                    'message' => "You can't leave your comment text empty!")
            );

    database.php:

        class DATABASE_CONFIG {
            var $default = array(
                'driver' => 'mysql',
                'persistent' => false,
                'host' => 'project.db',
                'login' => 'projectman',
                'password' => 'projectpassword',
                'database' => 'projectdb',
                'prefix' => ''
            );
            var $test = array(
                'driver' => 'mysql',
                'persistent' => false,
                'host' => 'project.db',
                'login' => 'projectman',
                'password' => 'projectpassword',
                'database' => 'testprojectdb',
                'prefix' => ''
            );
        }

    My comment.test.php file (this is the table that keeps getting dropped):

        <?php
        App::import('Model', 'Comment');
        class CommentTestCase extends CakeTestCase {
            public $fixtures = array('app.comment', 'app.user');

            function start() {
                $this->Comment =& ClassRegistry::init('Comment');
                $this->Comment->useDbConfig = 'test_suite';
            }

    This is my comment_fixture.php class:

        <?php
        class CommentFixture extends CakeTestFixture {
            var $name = "Comment";
            var $import = 'Comment';
        }

    And just in case, here is a typical test method in the CommentTestCase class:

        function testMsgNotificationUserComment() {
            $user_id = '1';
            $submission_id = '1';
            $parent_id = $this->Comment->commentOnModel('Submission', $submission_id, '0', $user_id, "Says: A");
            $other_user_id = '2';
            $msg_id = $this->Comment->commentOnModel('Submission', $submission_id, $parent_id, $other_user_id, "Says: B");
            $expected = array(array('Comment' => array('id' => $msg_id, 'text' => "Says: B", 'submission_id' => $submission_id, 'topic_id' => '0', 'ack' => '0')));
            $result = $this->Comment->getMessages($user_id);
            $this->assertEqual($result, $expected);
        }

    I've been dealing with this for a day now and I'm starting to be put off by CakePHP's unit testing. In addition to this issue, several times now I've had data inserted into my 'default' database configuration after running tests! What's going on with my configuration?!

    Read the article

  • During Spring unit test, data written to db but test not seeing the data

    - by richever
    I wrote a test case that extends AbstractTransactionalJUnit4SpringContextTests. The single test case I've written creates an instance of class User and attempts to write it to the database using Hibernate. The test code then uses SimpleJdbcTemplate to execute a simple select count(*) from the user table to determine whether the user was persisted to the database or not. The test always fails, though. This made me suspicious, because in the Spring controller I wrote, saving an instance of User to the db succeeds. So I added the Rollback annotation to the unit test and, sure enough, the data is written to the database; I can even see it in the appropriate table -- the transaction isn't rolled back when the test case is finished. Here's my test case:

        @ContextConfiguration(locations = {
                "classpath:context-daos.xml",
                "classpath:context-dataSource.xml",
                "classpath:context-hibernate.xml"})
        public class UserDaoTest extends AbstractTransactionalJUnit4SpringContextTests {

            @Autowired
            private UserDao userDao;

            @Test
            @Rollback(false)
            public void teseCreateUser() {
                try {
                    UserModel user = randomUser();
                    String username = user.getUserName();
                    long id = userDao.create(user);
                    String query = "select count(*) from public.usr where usr_name = '%s'";
                    long count = simpleJdbcTemplate.queryForLong(String.format(query, username));
                    Assert.assertEquals("User with username should be in the db", 1, count);
                } catch (Exception e) {
                    e.printStackTrace();
                    Assert.assertNull("testCreateUser: " + e.getMessage());
                }
            }
        }

    I think I was remiss in not adding the configuration files. context-hibernate.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

            <bean id="namingStrategy" class="org.springframework.beans.factory.config.FieldRetrievingFactoryBean">
                <property name="staticField">
                    <value>org.hibernate.cfg.ImprovedNamingStrategy.INSTANCE</value>
                </property>
            </bean>

            <bean id="sessionFactory" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean"
                  destroy-method="destroy" scope="singleton">
                <property name="namingStrategy">
                    <ref bean="namingStrategy"/>
                </property>
                <property name="dataSource" ref="dataSource"/>
                <property name="mappingResources">
                    <list>
                        <value>com/company/model/usr.hbm.xml</value>
                    </list>
                </property>
                <property name="hibernateProperties">
                    <props>
                        <prop key="hibernate.dialect">org.hibernate.dialect.PostgreSQLDialect</prop>
                        <prop key="hibernate.show_sql">true</prop>
                        <prop key="hibernate.use_sql_comments">true</prop>
                        <prop key="hibernate.query.substitutions">yes 'Y', no 'N'</prop>
                        <prop key="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</prop>
                        <prop key="hibernate.cache.use_query_cache">true</prop>
                        <prop key="hibernate.cache.use_minimal_puts">false</prop>
                        <prop key="hibernate.cache.use_second_level_cache">true</prop>
                        <prop key="hibernate.current_session_context_class">thread</prop>
                    </props>
                </property>
            </bean>

            <bean id="transactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
                <property name="sessionFactory" ref="sessionFactory"/>
                <property name="nestedTransactionAllowed" value="false" />
            </bean>

            <bean id="transactionInterceptor" class="org.springframework.transaction.interceptor.TransactionInterceptor">
                <property name="transactionManager">
                    <ref local="transactionManager"/>
                </property>
                <property name="transactionAttributes">
                    <props>
                        <prop key="create">PROPAGATION_REQUIRED</prop>
                        <prop key="delete">PROPAGATION_REQUIRED</prop>
                        <prop key="update">PROPAGATION_REQUIRED</prop>
                        <prop key="*">PROPAGATION_SUPPORTS,readOnly</prop>
                    </props>
                </property>
            </bean>
        </beans>

    context-dataSource.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

            <bean id="dataSource" class="com.mchange.v2.c3p0.ComboPooledDataSource" destroy-method="close">
                <property name="driverClass" value="org.postgresql.Driver" />
                <property name="jdbcUrl" value="jdbc\:postgresql\://localhost:5432/company_dev" />
                <property name="user" value="postgres" />
                <property name="password" value="postgres" />
            </bean>
        </beans>

    context-daos.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:schemaLocation="http://www.springframework.org/schema/beans
                                   http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">

            <bean id="extendedFinderNamingStrategy" class="com.company.dao.finder.impl.ExtendedFinderNamingStrategy"/>

            <bean id="finderIntroductionAdvisor" class="com.company.dao.finder.impl.FinderIntroductionAdvisor"/>

            <bean id="abstractDaoTarget" class="com.company.dao.impl.GenericDaoHibernateImpl"
                  abstract="true" depends-on="sessionFactory">
                <property name="sessionFactory">
                    <ref bean="sessionFactory"/>
                </property>
                <property name="namingStrategy">
                    <ref bean="extendedFinderNamingStrategy"/>
                </property>
            </bean>

            <bean id="abstractDao" class="org.springframework.aop.framework.ProxyFactoryBean" abstract="true">
                <property name="interceptorNames">
                    <list>
                        <value>transactionInterceptor</value>
                        <value>finderIntroductionAdvisor</value>
                    </list>
                </property>
            </bean>

            <bean id="userDao" parent="abstractDao">
                <property name="proxyInterfaces">
                    <value>com.company.dao.UserDao</value>
                </property>
                <property name="target">
                    <bean parent="abstractDaoTarget">
                        <constructor-arg>
                            <value>com.company.model.UserModel</value>
                        </constructor-arg>
                    </bean>
                </property>
            </bean>
        </beans>

    Some of this I've inherited from someone else. I wouldn't have used the proxying that is going on here, because I'm not sure it's needed, but this is what I'm working with. Any help is much appreciated.

    Read the article

  • Unit test distributed software

    - by user262646
    Is there any tool or framework that makes it easier to "unit test" distributed software written in Java? My system under test is peer-to-peer software, and I'd like to perform testing using something like PNUnit, but with Java instead of .Net.
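
    Absent a ready-made framework, a rough in-process fallback is plain JUnit with several peers started inside one JVM; a sketch (the Peer class and every method on it are hypothetical stand-ins for the system under test):

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class TwoPeerSmokeTest {

            @Test
            public void messageSentByOnePeerReachesTheOther() throws Exception {
                // Hypothetical peer API: run two nodes inside one JVM on different ports.
                Peer alice = new Peer(9001);
                Peer bob = new Peer(9002);
                alice.start();
                bob.start();
                try {
                    bob.connectTo("localhost", 9001);
                    bob.send("ping");
                    // Hypothetical blocking accessor that waits (with a timeout) for delivery.
                    assertEquals("ping", alice.awaitNextMessage(5000));
                } finally {
                    bob.stop();
                    alice.stop();
                }
            }
        }

    Multi-process setups (one peer per JVM or per machine) need extra coordination, which is exactly the gap PNUnit fills on the .Net side.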

    Read the article

  • Unit Testing DateTime – The Crazy Way

    - by João Angelo
    We all know that unit testing code that depends on DateTime, particularly on the current time provided through the static properties (Now, UtcNow and Today), is a PITA. If you ask how to unit test DateTime.Now on Stack Overflow, I'll bet that you'll get two kinds of answers: encapsulate the current time in your own interface and use a standard mocking framework; or pull out the big guns like Typemock Isolator, JustMock or Microsoft Moles/Fakes and mock the static property directly. Each alternative has its pros and cons, and I would have to say that I lean more toward the second approach, because the first adds a layer of abstraction just for the sake of testability. However, the second approach depends on commercial tools that not every shop wants to buy, or on the not so friendly Microsoft Moles. (Sidenote: Moles is now named Fakes and it will ship with VS 2012.) This tends to leave people without an acceptable and simple solution, so after reading another of these types of questions on SO I came up with yet another alternative, one based on the first alternative presented here but which tries really hard not to get in your way with yet another layer of abstraction. So, without further ado, I present you the Tardis. The Tardis is a single section of conditionally compiled code that overrides the meaning of the DateTime expression inside a single class. You still get the normal coding experience of using DateTime all over the place, but in a DEBUG compilation your tests will be able to mock every static method or property of the DateTime class. An example follows, while the full Tardis code can be downloaded from GitHub:

        using System;
        using NSubstitute;
        using NUnit.Framework;
        using Tardis;

        public class Example
        {
            public Example() : this(string.Empty) { }

            public Example(string title)
            {
        #if DEBUG
                this.DateTime = DateTimeProvider.Default;
                this.Initialize(title);
            }

            internal IDateTimeProvider DateTime { get; set; }

            internal Example(string title, IDateTimeProvider provider)
            {
                this.DateTime = provider;
        #endif
                this.Initialize(title);
            }

            private void Initialize(string title)
            {
                this.Title = title;
                this.CreatedAt = DateTime.UtcNow;
            }

            private string title;

            public string Title
            {
                get { return this.title; }
                set
                {
                    this.title = value;
                    this.UpdatedAt = DateTime.UtcNow;
                }
            }

            public DateTime CreatedAt { get; private set; }
            public DateTime UpdatedAt { get; private set; }
        }

        public class TExample
        {
            public void T001()
            {
                // Arrange
                var tardis = Substitute.For<IDateTimeProvider>();
                tardis.UtcNow.Returns(new DateTime(2000, 1, 1, 6, 6, 6));

                // Act
                var sut = new Example("Title", tardis);

                // Assert
                Assert.That(sut.CreatedAt, Is.EqualTo(tardis.UtcNow));
            }

            public void T002()
            {
                // Arrange
                var tardis = Substitute.For<IDateTimeProvider>();
                var sut = new Example("Title", tardis);
                tardis.UtcNow.Returns(new DateTime(2000, 1, 1, 6, 6, 6));

                // Act
                sut.Title = "Updated";

                // Assert
                Assert.That(sut.UpdatedAt, Is.EqualTo(tardis.UtcNow));
            }
        }

    This approach is also suitable for other similar classes with commonly used static methods or properties, like the ConfigurationManager class.

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • boost.test and eclipse

    - by Anton Potapov
    Hi all, I'm using Eclipse CDT and Boost.Test (with Boost.Build). I would like Eclipse to parse the output that Boost.Test generates when the test suites are run during the build. Does anybody know how to achieve this? Thanks in advance.

    Read the article

  • Problem while executing test case in VS2008 test project

    - by sukumar
    Hi all, I have the following situation: I have developed a test project in Visual Studio 2008 to test my target project. I was getting the following exception when I ran a test case on my PC:

        System.IO.FileNotFoundException: The specified module could not be found. (Exception from HRESULT: 0x8007007E)
            at System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection)
            at System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection)
            at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
            at System.Reflection.Assembly.InternalLoadFrom(String assemblyFile, Evidence securityEvidence, Byte[] hashValue, AssemblyHashAlgorithm hashAlgorithm, Boolean forIntrospection, StackCrawlMark& stackMark)
            at System.Reflection.Assembly.LoadFrom(String assemblyFile)
            at Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestExecuter.GetType(UnitTestElement unitTest, String type)
            at Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestExecuter.ResolveMethods()

    However, the same project runs successfully on my colleague's PC. As per my understanding, a System.IO.FileNotFoundException occurs when dlls are missing, so I checked with Dependency Walker to trace the missing dlls. Dependency Walker flagged the following dlls: 1) MFC90D.dll 2) msvcr90d.dll 3) msvcp90d.dll. I copied these dlls to C:\windows\system32 from the Microsoft Visual Studio 9.0 directory and ran Dependency Walker again. This time Dependency Walker was able to open the given test project dll with 0 errors, yet the same exception still comes up when I run the test. I am fed up with this. Can anyone tell me why it behaves as if it were PC-dependent? Is there anything I am still missing? Any suggestion would be helpful. Thanks in advance, Sukumar

    Read the article

  • Unit tests and Test Runner problems under .Net 4.0

    - by Brett Rigby
    Hi there, We're trying to migrate a .Net 3.5 solution to .Net 4.0, but are experiencing complications with the testing frameworks that can operate on an assembly built against version 4.0 of the .Net Framework. Previously we used NUnit 2.4.3.0 and NCover 1.5.8.0 within our NAnt scripts, but NUnit 2.4.3.0 doesn't like .Net 4.0 projects. So we upgraded to a newer version of the NUnit framework within the test project itself, but then found that NCover 1.5.8.0 doesn't support this version of NUnit. We get errors saying, in effect, that the assembly was built using a newer version of the .Net Framework than is currently in use, as the tools run on .Net Framework 2.0. We then tried Gallio's Icarus test runner GUI, but found that this and MbUnit only support up to version 3.5 of the .Net Framework, and the result is "the tests will be ignored". On the coverage side of things (for reporting into CruiseControl.net), we have found that PartCover is a good candidate for substituting-out NCover (the newer version of NCover is quite dear, and PartCover is free), but this is a few steps down the line yet, as we can't get the test runners to work first! Can anyone shed any light on a testing framework that will run under .Net 4.0 in the way I've described above? If not, I fear we may have to revert to .Net 3.5 until the manufacturers of the tooling we're currently using have a chance to upgrade to .Net 4.0. Thanks.

    Read the article

  • Running PHP Zend Test in Eclipse

    - by Carlos Eiroa
    Is it possible to run PHP Zend test cases (those that extend Zend_Test_PHPUnit_ControllerTestCase, etc.) through Eclipse PDT? I would like to be able to run them in a similar fashion as you run JUnit tests in Eclipse, by right-clicking the test file and selecting "Run as a JUnit test case." I'd love to see the green or red bar instead of having to go to the command line :). Thanks in advance.

    Read the article

  • Does new JUnit 4.8 @Category render test suites almost obsolete?

    - by grigory
    Given the question 'How to run all tests belonging to a certain Category?' and its answer, would the following approach be better for test organization?

    - define a master test suite that contains all tests (e.g. using ClasspathSuite)
    - design a sufficient set of JUnit categories (sufficient meaning that every desirable collection of tests is identifiable using one or more categories)
    - define targeted test suites based on the master test suite and the set of categories

    For example:

    - identify categories for speed (slow, fast), dependencies (mock, database, integration), function (), domain ()
    - demand that each test is properly qualified (tagged) with the relevant set of categories
    - create a master test suite using ClasspathSuite (all tests found in the classpath)
    - create targeted suites by qualifying the master test suite with categories, e.g. a mock test suite, a fast database test suite, a slow integration for domain X test suite, etc.

    My question is mostly soliciting an approval rating for this approach vs. the classic test suite approach. One unbeatable benefit is that every new test is immediately contained by the relevant suites with no suite maintenance. One concern is the proper categorization of each test.
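
    For reference, a minimal sketch of the JUnit 4.8 pieces this relies on (the marker interfaces and test class name are made up; in practice ClasspathSuite would stand in for the explicit @SuiteClasses list):

        import org.junit.Test;
        import org.junit.experimental.categories.Categories;
        import org.junit.experimental.categories.Categories.IncludeCategory;
        import org.junit.experimental.categories.Category;
        import org.junit.runner.RunWith;
        import org.junit.runners.Suite.SuiteClasses;

        // Category markers are plain types, usually empty interfaces.
        interface FastTests {}
        interface DatabaseTests {}

        class RepositoryTest {
            @Test
            @Category({ FastTests.class, DatabaseTests.class })  // a test can carry several categories
            public void findsUserById() { /* ... */ }
        }

        // A targeted suite: everything in the listed classes that is tagged FastTests.
        @RunWith(Categories.class)
        @IncludeCategory(FastTests.class)
        @SuiteClasses({ RepositoryTest.class })
        public class FastSuite {}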

    Read the article

  • Learning a new language using broken unit tests

    - by Brian MacKay
    I was listening to a .NET Rocks episode the other day where they mentioned, almost in passing, a really intriguing tool for learning new languages -- I think they were specifically talking about F#. It's a solution you open up that contains a bunch of broken unit tests. Fixing them walks you through the steps of learning the language. I want to check it out, but I was driving in my car and I have no idea what the name of the project is or which .NET Rocks episode it was. Google hasn't helped much. Any idea?
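
    Tools like this are usually "koan"-style projects: a solution full of deliberately failing tests that you fix one by one. A made-up illustration of the idea (Java here purely for illustration; the __ placeholder is the blank the learner fills in):

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class AboutStringsKoan {

            // The learner replaces __ with the value that makes the assertion pass.
            private static final Object __ = null;

            @Test
            public void substringTakesStartAndEndIndex() {
                String greeting = "hello world";
                assertEquals(__, greeting.substring(0, 5));   // expected answer: "hello"
            }
        }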

    Read the article

  • Unit-Testing functions which have parameters of classes where source code is not accessible

    - by McMannus
    Related to this question, I have another question regarding unit testing functions in utility classes. Assume you have function signatures like this: public function void doSomething(InternalClass obj, InternalElement element) where InternalClass and InternalElement are both classes whose source code is not available, because they are hidden in the API. Additionally, doSomething only operates on obj and element. I thought about mocking those classes away, but this option is not possible because they do not implement any interface that I could use for my mock classes. However, I need to fill obj with defined data to test doSomething. How can this problem be solved?
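
    If InternalClass and InternalElement are not final, one commonly suggested route is a mocking library such as Mockito, which can mock concrete classes without requiring an interface. A sketch, not from the post; the getters, UtilityUnderTest and the verified interaction are all invented for illustration:

        import static org.mockito.Mockito.mock;
        import static org.mockito.Mockito.verify;
        import static org.mockito.Mockito.when;

        import org.junit.Test;

        public class DoSomethingTest {

            @Test
            public void doSomethingReadsThePreparedElement() {
                // Mockito subclasses the concrete classes at runtime; no interface needed.
                InternalClass obj = mock(InternalClass.class);
                InternalElement element = mock(InternalElement.class);

                // Stub only what doSomething() actually reads (hypothetical getters).
                when(obj.getId()).thenReturn(42L);
                when(element.getName()).thenReturn("fixture");

                new UtilityUnderTest().doSomething(obj, element);

                // Check the interaction we care about (again, hypothetical).
                verify(element).getName();
            }
        }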

    Read the article

  • Adding unit tests to a legacy, plain C project

    - by Groo
    The title says it all. My company is reusing a legacy firmware project for a microcontroller device, written entirely in plain C. There are parts which are obviously wrong and need changing, and coming from a C#/TDD background I don't like the idea of randomly refactoring stuff with no tests to assure us that functionality remains unchanged. Also, I've seen hard-to-find bugs introduced on many occasions through the slightest changes (something I believe would be caught if regression testing were used). A lot of care needs to be taken to avoid these mistakes: it's hard to track a bunch of globals around the code. To summarize: How do you add unit tests to existing, tightly coupled code before refactoring? What tools do you recommend? (Less important, but still nice to know.) I am not directly involved in writing this code (my responsibility is an app which will interact with the device in various ways), but it would be bad if good programming principles were left behind when there was a chance they could be used.

    Read the article

  • Test driven development - convince me!

    - by Casebash
    I know some people are massive proponents of test driven development. I have used unit tests in the past, but only to test operations that can be tested easily or which I believe will quite possibly be correct. Complete or near complete code coverage sounds like it would take a lot of time. What projects do you use test-driven development for? Do you only use it for projects above a certain size? Should I be using it or not? Convince me!

    Read the article

  • What is the effect of creating unit tests during development on time to develop as well as time spent in maintenance activities?

    - by jgauffin
    I'm a consultant and I am going to introduce unit tests to all developers at my client site. My goal is to ensure that all new applications should have unit tests for all classes created. The client has a problem with high maintenance costs from fixing bugs in their existing applications. Their applications have a life span from between 5-15 years in which they continuously add new features. I'm quite confident that they will benefit greatly from starting with unit tests. I'm interested in the effect of unit tests on the time and cost of development: How much time will writing unit tests as part of the development process add? How much time will be saved in maintenance activities (testing and debugging) by having good unit tests?

    Read the article

  • First time unit testing (in silverlight)

    - by Jakob
    Hi, I've searched some other posts, but most of them assumed that people knew what they were doing in their unit testing, and frankly I don't. I see the idea behind unit testing, and I'm coding a Silverlight application much in the blind right now, and I'd like to write some unit tests to be reasonably sure I'm on the right path. I'd like to be able to use the SL4 VS 2010 Silverlight unit test project template, to keep it simple and not use external tools. So what I need answers for are questions like: What are the methods of unit testing? What are the differences between unit tests and automated unit tests? How do I meaningfully unit test in Silverlight? What should I be aware of while unit testing (in Silverlight)? Also, should I implement some kind of IRepository pattern in my Silverlight app to make unit testing easier?

    Read the article

  • Unit test SHA256 wrapper queries

    - by Sam Leach
    I am just beginning to write unit tests, so please bear with me. I have the following SHA256 wrapper:

        public static string SHA256(string plainText)
        {
            StringBuilder sb = new StringBuilder();
            SHA256CryptoServiceProvider provider = new SHA256CryptoServiceProvider();
            var hashedBytes = provider.ComputeHash(Encoding.UTF8.GetBytes(plainText));
            for (int i = 0; i < hashedBytes.Length; i++)
            {
                sb.Append(hashedBytes[i].ToString("x2").ToLower());
            }
            return sb.ToString();
        }

    Do I want to be testing it? If so, what do you recommend? My thought process is as follows: What logic is there here? The answer is my for loop and ToString("x2"), so from my understanding that is the part I want to be testing. I can assume Encoding.UTF8.GetBytes(plainText) works. Correct assumption? I can assume SHA256CryptoServiceProvider.ComputeHash() works. Correct assumption? I want to be testing only my logic, which in this case is limited to the printing of the hex-encoded hash. Correct? Thanks.

    Read the article

  • how to fully unit test functions and their internal validation

    - by Patrick
    I am just now getting into formal unit testing and have come across an issue in testing separate internal parts of functions. I have created a base class for data manipulation (i.e. moving files, chmodding files, etc.), and in moveFile() I have multiple levels of validation to pinpoint where a moveFile() fails (i.e. source file not readable, destination not writeable). I can't seem to figure out how to force a couple of particular validations to fail without tripping the previous validations. Example: I want the copying of a file to fail, but by the time I've gotten to the actual copy, I've checked for everything that can go wrong before copying. Code snippet (bad code on the fifth line...):

        // if the change permissions is set, change the file permissions
        if($chmod !== null)
        {
            $mod_result = chmod($destination_directory.DIRECTORY_SEPARATOR.$new_filename, $chmod);
            if($mod_result === false || $source_directory.DIRECTORY_SEPARATOR.$source_filename == '/home/k...../file_chmod_failed.qif')
            {
                DataMan::logRawMessage('File permissions update failed on moveFile [ERR0009] - ['.$destination_directory.DIRECTORY_SEPARATOR.$new_filename.' - '.$chmod.']', sfLogger::ALERT);
                return array('success' => false, 'type' => 'Internal Server Error [ERR0009]');
            }
        }

    So how do I simulate the copy failing? My stop-gap measure was to perform a validation on the filename being copied and, if its absolute path matched my testing file, force the failure. I know it is very bad to put testing code into the actual code that will run on the production server, but I'm not sure how else to do it. Note: I am on PHP 5.2, symfony, using lime_test(). EDIT: I am testing the chmodding and ensuring that array('success' => false, 'type' => ..) is returned.

    Read the article

  • Help create a unit test for the response header, specifically Cache-Control, in determining if caching is disabled

    - by VajNyiaj
    Scenario: I have a base controller which disables caching within the OnActionExecuting override:

        protected override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            filterContext.HttpContext.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(-1));
            filterContext.HttpContext.Response.Cache.SetValidUntilExpires(false);
            filterContext.HttpContext.Response.Cache.SetRevalidation(HttpCacheRevalidation.AllCaches);
            filterContext.HttpContext.Response.Cache.SetCacheability(HttpCacheability.NoCache); // IE
            filterContext.HttpContext.Response.Cache.SetNoStore(); // FireFox
        }

    How can I create a unit test to test this behavior?

    Read the article

  • Coded UI Test Method failed inconsistently

    - by Sunitha M
    The following exception is failing my UI automation test. Message: Test method CodedUITestMethod1 threw exception: The playback failed to find the control with the given search properties. Additional Details: TechnologyName: 'UIA' ControlType: 'MenuItem' Name: 'MyViewModel' ---> System.Runtime.InteropServices.COMException: Error HRESULT E_FAIL has been returned from a call to a COM component. Please, can anyone give me a solution for this type of exception?

    Read the article
