Search Results

Search found 7802 results on 313 pages for 'unit tests'.


  • If your unit test code "smells" does it really matter?

    - by Buttons840
    Usually I just throw my unit tests together using copy and paste and all kinds of other bad practices. The unit tests usually end up looking quite ugly; they're full of "code smell." But does this really matter? I always tell myself that as long as the "real" code is "good," that's all that matters. Plus, unit testing usually requires various "smelly hacks" like stubbing functions. How concerned should I be over poorly designed ("smelly") unit tests?

    Read the article

  • Good practices - database programming, unit testing

    - by Piotr Rodak
    Jason Brimhal wrote today on his blog that a new book, Defensive Database Programming, written by Alex Kuznetsov (blog), is coming to bookstores. Alex writes about various techniques that make your code safer to run. SQL injection is not the only vulnerability the code may be exposed to. Others include inconsistent search patterns, unsupported character sets, locale settings, issues that may occur under high-concurrency conditions, and logic that breaks when certain conditions are not met. The...(read more)

    Read the article

  • OOP for unit testing: The good, the bad and the ugly

    - by Jeff
    I have recently read Miško Hevery's PDF guide to writing testable code, in which it's stated that you should limit instantiation of classes in your constructors. I understand that this is what you should do, because it allows you to easily mock the objects that are passed as parameters to your class. But when it comes to writing actual code, I often end up with things like this (the example is in PHP using Zend Framework, but I think it's self-explanatory):

    class Some_class {
        private $_data;
        private $_options;
        private $_locale;

        public function __construct($data, $options = null) {
            $this->_data = $data;
            if ($options != null) {
                $this->_options = $options;
            }
            $this->_init();
        }

        private function _init() {
            if (isset($this->_options['locale'])) {
                $locale = $this->_options['locale'];
                if ($locale instanceof Zend_Locale) {
                    $this->_locale = $locale;
                } elseif (Zend_Locale::isLocale($locale)) {
                    $this->_locale = new Zend_Locale($locale);
                } else {
                    $this->_locale = new Zend_Locale();
                }
            }
        }
    }

    According to my understanding of Miško Hevery's guide, I shouldn't instantiate the Zend_Locale in my class but push it in through the constructor (which can be done through the options array in my example). I am wondering what would be the best practice to get the most flexibility for unit testing this code, and also for moving away from Zend Framework if I want to. Thanks in advance
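
    The usual recommendation, sketched below in C# since the idea is language-neutral (the interface and classes are illustrative stand-ins, not Zend or any real framework): take the dependency through the constructor and default it only as a fallback, so tests can inject a fake.

    using System;

    // Illustrative stand-in for something like Zend_Locale.
    public interface ILocale
    {
        string Name { get; }
    }

    public class DefaultLocale : ILocale
    {
        public string Name { get { return "en_US"; } }
    }

    public class SomeClass
    {
        private readonly object _data;
        private readonly ILocale _locale;

        // The locale is pushed in from outside; tests pass a stub,
        // production code (or an IoC container) passes the real one.
        public SomeClass(object data, ILocale locale)
        {
            _data = data;
            _locale = locale ?? new DefaultLocale(); // default only as a fallback
        }
    }

    The class no longer knows how to build its locale, which is exactly what makes it easy to unit test and to move off any particular framework later.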

    Read the article

  • Relationship between Repository and Unit of Work

    - by NullOrEmpty
    I am going to implement a repository, and I would like to use the UoW pattern, since the consumer of the repository could do several operations and I want to commit them at once. After reading several articles on the matter, I still don't get how to relate these two elements; depending on the article, it is done one way or the other. Sometimes the UoW is something internal to the repository:

    public class Repository {
        UnitOfWork _uow;

        public Repository() {
            _uow = IoC.Get<UnitOfWork>();
        }

        public void Save(Entity e) {
            _uow.Track(e);
        }

        public void SubmitChanges() {
            SaveInStorage(_uow.GetChanges());
        }
    }

    And sometimes it is external:

    public class Repository {
        public void Save(Entity e, UnitOfWork uow) {
            uow.Track(e);
        }

        public void SubmitChanges(UnitOfWork uow) {
            SaveInStorage(uow.GetChanges());
        }
    }

    Other times, it is the UoW that references the repository:

    public class UnitOfWork {
        Repository _repository;

        public UnitOfWork(Repository repository) {
            _repository = repository;
        }

        public void Save(Entity e) {
            this.Track(e);
        }

        public void SubmitChanges() {
            _repository.Save(this.GetChanges());
        }
    }

    How are these two elements related? The UoW tracks the elements that need to be changed, and the repository contains the logic to persist those changes, but... who calls whom? Does the last variant make more sense? Also, who manages the connection? If several operations have to be done in the repository, I think using the same connection, and even the same transaction, is more sound, so maybe putting the connection object inside the UoW, and that one inside the repository, makes sense as well. Cheers
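
    For what it's worth, one common answer to "who calls whom" (a minimal sketch; every name here is illustrative) inverts the last variant: the unit of work owns the connection and transaction, repositories are handed out by it and only register work, and a single commit flushes everything.

    using System;
    using System.Collections.Generic;
    using System.Data;

    public class Entity { }

    public class UnitOfWork : IDisposable
    {
        private readonly IDbConnection _connection;
        private readonly IDbTransaction _transaction;
        private readonly List<Entity> _tracked = new List<Entity>();

        // The UoW owns the connection, so every repository it hands out
        // shares the same connection and the same transaction.
        public UnitOfWork(IDbConnection connection)
        {
            _connection = connection;
            _connection.Open();
            _transaction = _connection.BeginTransaction();
        }

        public Repository CreateRepository()
        {
            return new Repository(this);
        }

        public void Track(Entity e)
        {
            _tracked.Add(e);
        }

        public void SubmitChanges()
        {
            foreach (Entity e in _tracked)
            {
                // persist e using _connection/_transaction here
            }
            _transaction.Commit();
        }

        public void Dispose()
        {
            _transaction.Dispose();
            _connection.Dispose();
        }
    }

    public class Repository
    {
        private readonly UnitOfWork _uow;

        public Repository(UnitOfWork uow)
        {
            _uow = uow;
        }

        // The repository only registers work; the UoW decides when it hits storage.
        public void Save(Entity e)
        {
            _uow.Track(e);
        }
    }

    With this shape the repository calls into the unit of work, and the unit of work is the only thing that ever touches the connection, which also answers the transaction question.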

    Read the article

  • Is it feasible and useful to auto-generate some unit test code?

    - by skiwi
    Earlier today I came up with an idea, based upon a particular real use case, which I would like to have checked for feasibility and usefulness. This question features a fair chunk of Java code, but the idea can be applied to all languages running inside a VM, and maybe even outside. While there is real code, it uses nothing language-specific, so please read it mostly as pseudo code.

    The idea: make unit testing less cumbersome by adding ways to auto-generate code based on human interaction with the codebase. I understand this goes against the principle of TDD, but I don't think anyone has ever proved that doing TDD is better than first creating code and then immediately thereafter the tests. This may even be adapted to fit into TDD, but that is not my current goal. To show how it is intended to be used, I'll copy one of my classes here, for which I need to write unit tests.

    public class PutMonsterOnFieldAction implements PlayerAction {
        private final int handCardIndex;
        private final int fieldMonsterIndex;

        public PutMonsterOnFieldAction(final int handCardIndex, final int fieldMonsterIndex) {
            this.handCardIndex = Arguments.requirePositiveOrZero(handCardIndex, "handCardIndex");
            this.fieldMonsterIndex = Arguments.requirePositiveOrZero(fieldMonsterIndex, "fieldMonsterIndex");
        }

        @Override
        public boolean isActionAllowed(final Player player) {
            Objects.requireNonNull(player, "player");
            Hand hand = player.getHand();
            Field field = player.getField();
            if (handCardIndex >= hand.getCapacity()) {
                return false;
            }
            if (fieldMonsterIndex >= field.getMonsterCapacity()) {
                return false;
            }
            if (field.hasMonster(fieldMonsterIndex)) {
                return false;
            }
            if (!(hand.get(handCardIndex) instanceof MonsterCard)) {
                return false;
            }
            return true;
        }

        @Override
        public void performAction(final Player player) {
            Objects.requireNonNull(player);
            if (!isActionAllowed(player)) {
                throw new PlayerActionNotAllowedException();
            }
            Hand hand = player.getHand();
            Field field = player.getField();
            field.setMonster(fieldMonsterIndex, (MonsterCard)hand.play(handCardIndex));
        }
    }

    We can observe the need for the following tests:
    - Constructor test with valid input
    - Constructor test with invalid inputs
    - isActionAllowed test with valid input
    - isActionAllowed test with invalid inputs
    - performAction test with valid input
    - performAction test with invalid inputs

    My idea mainly focuses on the isActionAllowed test with invalid inputs. Writing these tests is not fun: you need to ensure a number of conditions and check whether it really returns false. The same extends to performAction, where an exception needs to be thrown in that case. The goal of my idea is to generate those tests, by indicating (through the GUI of the IDE, hopefully) that you want to generate tests based on a specific branch.

    The implementation by example: the user clicks on "Generate code for branch if (handCardIndex >= hand.getCapacity())". Now the tool needs to find a case where that holds. (I haven't added the relevant code, as that may clutter the post.) To invalidate the branch, the tool needs to find a handCardIndex and hand.getCapacity() such that the condition >= holds. It needs to construct a Player with a Hand that has a capacity of at least 1. It notices that the capacity private int of Hand needs to be at least 1, and it searches for ways to set it to 1. Fortunately it finds a constructor that takes the capacity as an argument, and uses 1 for it. Some more work needs to be done to successfully construct a Player instance, involving the creation of objects that have constraints that can be seen by inspecting the source code. The tool has found the hand with the least capacity possible and is able to construct it. Now, to invalidate the test, it needs to set handCardIndex = 1. It constructs the test and asserts it to be false (the returned value of the branch).

    What does the tool need to work? In order to function properly, it will need the ability to scan through all source code (including JDK code) to figure out all constraints. Optionally this could be done through the Javadoc, but that is not always used to indicate all constraints. It could also do some trial and error, but it pretty much stops if you cannot attach source code to compiled classes. Then it needs some basic knowledge of what the primitive types are, including arrays. And it needs to be able to construct some form of "modification trees": the tool knows that it needs to change a certain variable to a different value in order to get the correct test case, hence it will need to list all possible ways to change it, without using reflection, obviously. What this tool will not replace is the need to create tailored unit tests that test all kinds of conditions when a certain method actually works. It is purely to be used to test methods when they invalidate constraints.

    My questions:
    - Is creating such a tool feasible? Would it ever work, or are there some obvious problems?
    - Would such a tool be useful? Is it even useful to automatically generate these test cases at all?
    - Could it be extended to do even more useful things?
    - Does, by chance, such a project already exist, and would I be reinventing the wheel?

    If not proven useful, but still possible to make such a thing, I will still consider it for fun. If it's considered useful, then I might make an open source project for it, depending on the time. For people searching for more background information about the Player and Hand classes used in my example, please refer to this repository. At the time of writing, the PutMonsterOnFieldAction has not been uploaded to the repo yet, but this will be done once I'm done with the unit tests.

    Read the article

  • Am I just not understanding TDD unit testing (Asp.Net MVC project)?

    - by KallDrexx
    I am trying to figure out how to correctly and efficiently unit test my ASP.NET MVC project. When I started this project I bought Pro ASP.NET MVC, and with that book I learned about TDD and unit testing. After seeing the examples, and given that I work as a software engineer in QA at my current company, I was amazed at how awesome TDD seemed to be. So I started working on my project and went gung-ho writing unit tests for my database layer, business layer, and controllers. Everything got a unit test prior to implementation. At first I thought it was awesome, but then things started to go downhill. Here are the issues I started encountering:

    I ended up writing application code in order to make it possible for unit tests to be performed. I don't mean this in a good way, as in my code was broken and I had to fix it so the unit tests pass. I mean that abstracting out the database to a mock database is impossible due to the use of LINQ for data retrieval (using the generic repository pattern). The reason is that with LINQ to SQL or LINQ to Entities you can do joins just by doing:

    var objs = from p in _container.Projects
               select p.Objects;

    However, if you mock the database layer out, in order to have that LINQ pass the unit test you must change it to:

    var objs = from p in _container.Projects
               join o in _container.Objects on p.Id equals o.ProjectId
               select o;

    Not only does this mean you are changing your application logic just so you can unit test it, you are also making your code less efficient for the sole purpose of testability, and giving up a lot of the advantages of using an ORM in the first place. Furthermore, since a lot of the IDs for my models are database generated, it turned out I had to write additional code to handle the non-database tests, since IDs were never generated, yet I still had to handle those cases for the unit tests to pass, even though they would never occur in real scenarios. Thus I ended up throwing out my database unit testing.

    Writing unit tests for controllers was easy as long as I was returning views. However, the major part of my application (and the one that would benefit most from unit testing) is a complicated AJAX web application. For various reasons I decided to change the app from returning views to returning JSON with the data I needed. After this my unit tests became extremely painful to write, as I have not found any good way to write unit tests for non-trivial JSON. After pounding my head and wasting a ton of time trying to find a good way to unit test the JSON, I gave up and deleted all of my controller unit tests (all controller actions are focused on this part of the app so far).

    So finally I was left with testing the service layer (BLL). Right now I am using EF4, though I had this issue with LINQ to SQL as well. I chose the EF4 model-first approach because, to me, it makes sense to do it that way (define my business objects and let the framework figure out how to translate them into the SQL backend). This was fine at the beginning, but now it is becoming cumbersome due to relationships. For example, say I have Project, User, and Object entities. An Object must be associated with a Project, and a Project must be associated with a User. These are not only database-specific rules, they are my business rules as well. However, say I want a unit test that checks I am able to save an Object (a simple example). I now have to do the following just to make sure the save worked:

    User usr = new User { Name = "Me" };
    _userService.SaveUser(usr);
    Project prj = new Project { Name = "Test Project", Owner = usr };
    _projectService.SaveProject(prj);
    Object obj = new Object { Name = "Test Object" };
    _objectService.SaveObject(obj);
    // Perform verifications

    There are several issues with having to do all this just to perform one unit test. For starters, if I add a new dependency, such as all projects must belong to a category, I must go into every single unit test that references a project, add code to save the category, then add code to add the category to the project. This can be a huge effort down the road for a very simple business logic change, and yet almost none of the unit tests I will be modifying for this requirement are actually meant to test that feature/requirement. If I then add verifications to my SaveProject method, so that projects cannot be saved unless they have a name with at least 5 characters, I have to go through every Object and Project unit test to make sure the new requirement doesn't make any unrelated unit tests fail. If there is an issue in the UserService.SaveUser() method, it will cause all Project and Object unit tests to fail, and the cause won't be immediately noticeable without digging through the exceptions. Thus I have removed all service layer unit tests from my project.

    I could go on and on, but so far I have not seen any way for unit testing to actually help me and not get in my way. I can see specific cases where I can, and probably will, implement unit tests, such as making sure my data verification methods work correctly, but those cases are few and far between. Some of my issues could probably be mitigated, but not without adding extra layers to my application, and thus more points of failure, just so I can unit test. Thus I have no unit tests left in my code. Luckily I use source control heavily, so I can get them back if I need to, but I just don't see the point. Everywhere on the internet I see people talking about how great TDD unit tests are, and I'm not just talking about the fanatical people. The few people who dismiss TDD/unit tests give bad arguments, claiming they are more efficient debugging by hand through the IDE, or that their coding skills are so amazing they don't need tests. I recognize that both of those arguments are utter bollocks, especially for a project that needs to be maintained by multiple developers, but valid rebuttals to TDD seem to be few and far between. So the point of this post is to ask: am I just not understanding how to use TDD and automatic unit tests?
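
    For the JSON pain specifically, one option (a sketch with made-up controller and type names, using the standard ASP.NET MVC JsonResult) is to assert against JsonResult.Data before any serialization happens, so tests never have to parse JSON strings.

    using System.Web.Mvc;
    using NUnit.Framework; // any test framework works the same way

    public class ProjectSummary
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class ProjectsController : Controller
    {
        public JsonResult Summary()
        {
            // return a strongly typed object; MVC serializes it later
            var data = new ProjectSummary { Id = 1, Name = "Test Project" };
            return Json(data, JsonRequestBehavior.AllowGet);
        }
    }

    [TestFixture]
    public class ProjectsControllerTests
    {
        [Test]
        public void Summary_returns_expected_project()
        {
            var controller = new ProjectsController();

            var result = (JsonResult)controller.Summary();
            var summary = (ProjectSummary)result.Data; // inspect the object, not the JSON text

            Assert.AreEqual("Test Project", summary.Name);
        }
    }

    The test stays readable because it works with the pre-serialization object graph; whether the JSON serializer itself behaves is a separate, framework-level concern.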

    Read the article

  • Run unit tests in Jenkins / Hudson in automated fashion from dev to build server

    - by Kevin Donde
    We are currently running a Jenkins (Hudson) CI server to build and package our .NET web projects and database projects. Everything is working great, but I want to start writing unit tests and then only pass the build if the unit tests pass. We are using the built-in MSBuild task to build the web project, with the following arguments:

    MSBuild Version: .NET 4.0
    MSBuild Build File: ./WebProjectFolder/WebProject.csproj
    Command Line Arguments: /target:Rebuild /p:Configuration=Release;DeployOnBuild=True;PackageLocation=".\obj\Release\WebProject.zip";PackageAsSingleFile=True

    We need automated tests over our code that run when we build on our own machines (post-build event, possibly) but also run when Jenkins does a build for that project. If you run it like this, it doesn't build the unit test project, because the web project doesn't reference the test project. The test project would reference the web project, but I'm pretty sure that would be butchering our automated builds, as they exist primarily to build and package our deployments. Running these tests should be a step in that automated build and package process.

    Options:
    1. Create two Jenkins jobs: one to run the tests; if the tests pass, another build is triggered which builds and packages the web project. Put the post-build event on the test project.
    2. Build the solution instead of the project (make sure the solution contains the required tests) and put post-build events on any test projects that would run the NUnit console to run the tests. Then use the command line to copy all the required files from each of the bin and content directories into a package.
    3. Just build the test project in Jenkins instead of the web project. The test project would reference the web project (depending on what you're testing) and build it.

    Problems:
    1. There are two jobs and not one, so two things to debug, not one: one to see if the tests passed, and one to build and compile the web project. The tests could pass but the build could fail if it's something that isn't used by what you're testing.
    2. This requires us to know exactly what goes into the build. Right now MSBuild does it all for us. If you have multiple teams working on a project, every time an extra folder is created you have to worry about the possibly brittle command-line statements.
    3. This seems like a corruption of our main purpose here. The tests should be a step in this process, not the overriding most important thing in it.

    I'm also not 100% sure that a triggered build is the same as a normal build: does it do all the same things as a normal build, move all the correct files in the same way, move them all into the same directories, etc.?

    Initial problem: we want to run our tests whenever our main project is built, but adding a post-build event to the web project that runs against the test project doesn't work, because the web project doesn't reference the test project and won't trigger a build of it. I could go on, but that's enough. We've spent about a week trying to make this work nicely but haven't succeeded. Feel free to edit this if you feel you can get a better response.

    Read the article

  • Why don't xUnit frameworks allow tests to run in parallel?

    - by Xavier Nodet
    Do you know of any xUnit framework that allows tests to be run in parallel, to make use of the multiple cores in today's machines? I don't... If none (or so few) of them do it, maybe there is a reason... Is it that tests are usually so quick that people simply don't feel the need to parallelize them? Is there something deeper that precludes distributing (at least some of) the tests over multiple threads? Thanks!
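
    One frequently cited reason is shared mutable state between tests. Here is a minimal sketch (NUnit syntax, illustrative names) of a fixture that is only deterministic when its tests run one at a time, which is the guarantee a serializing framework gives for free:

    using NUnit.Framework;

    [TestFixture]
    public class CounterTests
    {
        // Shared mutable state: fine when tests run one at a time,
        // a race condition when they run concurrently.
        private static int _counter;

        [SetUp]
        public void Reset()
        {
            _counter = 0;
        }

        [Test]
        public void Increment_once_yields_one()
        {
            _counter++;
            Assert.AreEqual(1, _counter); // fails if another test increments concurrently
        }

        [Test]
        public void Increment_twice_yields_two()
        {
            _counter++;
            _counter++;
            Assert.AreEqual(2, _counter);
        }
    }

    Static fields, databases, the file system, and singletons all play the role of _counter in real suites, which is presumably why frameworks have historically been conservative about parallel execution.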

    Read the article

  • Is it dangerous to substitute unit tests for user testing? [closed]

    - by MushinNoShin
    Is it dangerous to substitute unit tests for user testing? A co-worker believes we can reduce the manual user testing we need to do by adding more unit tests. Is this dangerous? Unit tests seem to have a very different purpose than user testing. Aren't unit tests to inform design and allow breaking changes to be caught early? Isn't that fundamentally different than determining if an aspect of the system is correct as a whole of the system? Is this a case of substituting apples for oranges?

    Read the article

  • How do I check that my tests were not removed by other developers?

    - by parxier
    I've just come across an interesting collaborative coding issue at work. I've written some unit/functional/integration tests and implemented new functionality in an application that has ~20 developers working on it. All tests passed and I checked in the code. The next day I updated my project and noticed (by chance) that some of my test methods had been deleted by other developers (merging problems on their end). The new application code was not touched. How can I detect such a problem automatically? I mean, I write tests to automatically check that my code still works (or was not deleted); how do I do the same for the tests? We're using Java, JUnit, Selenium, SVN and Hudson CI, if it matters.
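
    One low-tech safeguard, sketched here in C#/NUnit although the same reflection trick works in JUnit (the floor value and all names are assumptions), is a canary test that fails when the number of test methods in the assembly drops below a known minimum:

    using System;
    using System.Linq;
    using System.Reflection;
    using NUnit.Framework;

    [TestFixture]
    public class TestInventoryCanary
    {
        // Bump this deliberately whenever tests are intentionally removed;
        // an accidental merge deletion then fails the build instead of
        // disappearing silently.
        private const int ExpectedMinimumTestCount = 120;

        [Test]
        public void Test_methods_have_not_been_silently_deleted()
        {
            int count = Assembly.GetExecutingAssembly()
                .GetTypes()
                .SelectMany(t => t.GetMethods(BindingFlags.Public | BindingFlags.Instance))
                .Count(m => m.IsDefined(typeof(TestAttribute), false));

            Assert.GreaterOrEqual(count, ExpectedMinimumTestCount);
        }
    }

    A CI-side alternative is to compare the test count reported in each build's results against the previous build and flag any decrease for review.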

    Read the article

  • What Are Some Tips For Writing A Large Number of Unit Tests?

    - by joshin4colours
    I've recently been tasked with testing some COM objects of the desktop app I work on. What this means in practice is writing a large number (100) of unit tests to test different but related methods and objects. While the unit tests themselves are fairly straightforward (usually one or two Assert()-type checks per test), I'm struggling to figure out the best way to write these tests in a coherent, organized manner. What I have found is that copy-and-paste coding should be avoided: it creates more problems than it's worth, and it's even worse than copy-and-paste code in production code, because test code has to be updated and modified more frequently. I'm leaning toward trying an OO approach, but again, the sheer number makes even this approach daunting from an organizational standpoint, due to maintenance concerns. It also doesn't help that the tests are currently written in C++, which adds some complexity with memory management issues. Any thoughts or suggestions?
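
    One pattern that eliminates most of the copy-and-paste is a table-driven (parameterized) test, sketched here in C# with NUnit's [TestCase] for brevity (names and values are illustrative); the same shape can be hand-rolled in C++ test frameworks:

    using NUnit.Framework;

    public static class Calculator
    {
        public static int Add(int a, int b)
        {
            return a + b;
        }
    }

    [TestFixture]
    public class CalculatorTests
    {
        // Each row replaces an entire copy-pasted test method.
        [TestCase(1, 2, 3)]
        [TestCase(0, 0, 0)]
        [TestCase(-4, 4, 0)]
        [TestCase(int.MaxValue, 0, int.MaxValue)]
        public void Add_returns_expected_sum(int a, int b, int expected)
        {
            Assert.AreEqual(expected, Calculator.Add(a, b));
        }
    }

    Adding a case becomes a one-line change, and the assertion logic lives in exactly one place, which is where the maintenance saving comes from.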

    Read the article

  • Do you use unit tests at work? What benefits do you get from them?

    - by Anonymous
    I had planned to study and apply unit testing to my code, but after talking with my colleagues, some of them suggested that it's not necessary and has very little benefit. They also claim that only a few companies actually do unit testing on production software. I am curious how people have applied unit testing at work and what benefits they are getting from it, e.g., better code quality, reduced development time in the long term, etc.

    Read the article

  • Making Separate Assemblies For Different Types Of Tests For The Same Component?

    - by sooprise
    I was told by a few members here that splitting up my unit tests into different assemblies for different components is the best way to structure unit tests. Now, I have a few questions about that idea. What are the advantages of this? Organization, and isolation of errors? Let's say I have a component named "calculator", and I create an assembly for the unit tests on "calculator". Would I create a separate assembly for the integration tests I want to run on "calculator"? Or is the definition of an integration test a test across multiple components, like "calculator" and whatever else, which would require a separate assembly to test both of them together? In that case, would I have one assembly to do all of the integration testing for every component combination?

    Read the article

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing stubbed integration tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggests the use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.

    Stubs. Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems:

    - In contract-first scenarios, the external system interface will have been defined, but the interface may not have been set up, or even developed yet, for the BizTalk developers to work with.
    - By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
    - If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed, or it may be scheduled to be processed later.
    - Learning how to use the source/target systems, and investigating where things go wrong in those systems, will slow down the BizTalk development effort.
    - By the time the data is visible in a UI, it may have undergone further transformations.
    - In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have "cleaned up" your data.
    - It is harder to write BizUnit tests that clean up the data/logs after each test run.
    - What if your B2B partners' source or target system cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT/UAT teams.
    - There may be licensing costs for setting up instances of the external system.

    The stubs I like to use are generic stubs that can accept or return any message type. Usually I need to create one per protocol. They should be driven by BizUnit steps to validate the data received and to select a response message (or an error response). Once built, they can be re-used for many integration tests and from project to project. I'm not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints/services. The interface developers may ask you to send them some data to see if everything still works, or you might want some messages sent to BizTalk to gain confidence that everything still works beyond BizTalk.

    Tests. Automated "stubbed integration tests" are usually built using the BizUnit framework. These facilitate testing of the entire integration process, from source stub to target stub, ensuring that all of the BizTalk components are configured together correctly to meet the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides by far the easiest way to test some component types (e.g. orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Mike Stephenson delivers the following benefits (source: http://biztalkbddsample.codeplex.com, Video 1):

    - Requirements can be easily defined using Given/When/Then
    - Requirements are close to the code, so they are easier to manage as features and scenarios
    - Requirements are defined in domain language
    - The feature files can be used as part of the documentation
    - The documentation is accurate to the build of the code and can be published with a release
    - The scenarios are effective at documenting the scenarios and are not overly excessive
    - The scenarios are maintained with the code
    - There's an abstraction between the intention and implementation of tests, making them easier to understand
    - The requirements drive the testing

    These same tests can also be used to drive load testing, as described here.

    If you don't do this: if you don't follow the above "stubbed integration tests" approach, the developer will need to manually trigger the tests. This has the following risks:

    - Developers are unlikely to check all the scenarios, and all the expected conditions, each time.
    - After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    - There is no mechanism to prove adequate test coverage.

    A test team may attempt to automate integration test scenarios in a test environment by triggering tests from a source system UI. If this is a replacement for BizUnit tests, it carries the following risks:

    - It moves the tests downstream, so problems will be found later in the process.
    - Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    - These automated tests may also get in the way of manual tests run on those environments.
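
    As a concrete illustration of the "generic stub" idea (a hand-rolled sketch, not the BizUnit API; the endpoint and all names are illustrative): a tiny HTTP listener that accepts any message, records it so a test step can validate it, and replies with whatever canned response the test has selected.

    using System;
    using System.IO;
    using System.Net;
    using System.Text;

    public class GenericHttpStub
    {
        private readonly HttpListener _listener = new HttpListener();

        public string LastReceivedMessage { get; private set; }
        public string CannedResponse { get; set; }

        public GenericHttpStub(string prefix) // e.g. "http://localhost:8123/stub/"
        {
            _listener.Prefixes.Add(prefix);
            CannedResponse = "<Ack/>"; // default; the test can swap in an error response
        }

        public void Start()
        {
            _listener.Start();
            _listener.BeginGetContext(OnRequest, null);
        }

        private void OnRequest(IAsyncResult ar)
        {
            // Error handling around shutdown is omitted in this sketch.
            HttpListenerContext context = _listener.EndGetContext(ar);
            using (var reader = new StreamReader(context.Request.InputStream))
            {
                LastReceivedMessage = reader.ReadToEnd(); // captured for later assertions
            }
            byte[] body = Encoding.UTF8.GetBytes(CannedResponse);
            context.Response.OutputStream.Write(body, 0, body.Length);
            context.Response.Close();
            _listener.BeginGetContext(OnRequest, null); // keep serving further requests
        }

        public void Stop()
        {
            _listener.Stop();
        }
    }

    Because the stub is protocol-level rather than message-level, the same class serves every test that sends over HTTP; only the canned response and the assertions change per scenario.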

    Read the article

  • JUnit: splitting integration tests and unit tests

    - by jeff porter
    Hello all, I've inherited a load of JUnit tests, but these tests (apart from the fact that most don't work) are a mixture of actual unit tests and integration tests (requiring external systems, a DB, etc.). So I'm trying to think of a way to actually separate them out, so that I can run the unit tests nice and quickly and the integration tests after that. The options are:

    1. Split them into separate directories.
    2. Move to JUnit 4 and annotate the classes to separate them.
    3. Use a file naming convention to tell what a class is, i.e. AdapterATest and AdapterAIntergrationTest.

    Option 3 has the issue that Eclipse has the option to "Run all tests in the selected project/package or folder", so it would make it very hard to just run the integration tests. Option 2 runs the risk that developers might start writing integration tests in unit test classes and it just gets messy. Option 1 seems like the neatest solution, but my gut says there must be a better solution out there. So that is my question: how do you lot break apart integration tests and proper unit tests?

    Read the article

  • How to solve timing problems in automated UI tests with C# and Visual Studio?

    - by Lernkurve
    Question: what is the standard approach to solving timing problems in automated UI tests? Concrete example: I am using Visual Studio 2010 and Team Foundation Server 2010 to create automated UI tests, and I want to check whether my application has really stopped running:

    [TestMethod]
    public void MyTestMethod()
    {
        Assert.IsTrue(!IsMyAppRunning(), "App shouldn't be running, but is.");
        StartMyApp();
        Assert.IsTrue(IsMyAppRunning(), "App should have been started and should be running now.");
        StopMyApp();
        //Pause(500);
        Assert.IsTrue(!IsMyAppRunning(), "App was stopped and shouldn't be running anymore.");
    }

    private bool IsMyAppRunning()
    {
        foreach (Process runningProcess in Process.GetProcesses())
        {
            if (runningProcess.ProcessName.Equals("Myapp"))
            {
                return true;
            }
        }
        return false;
    }

    private void Pause(int pauseTimeInMilliseconds)
    {
        System.Threading.Thread.Sleep(pauseTimeInMilliseconds);
    }

    StartMyApp() and StopMyApp() have been recorded with MS Test Manager 2010 and reside in UIMap.uitest. The last assert fails because the assertion is executed while my application is still in the process of shutting down. If I put a delay after StopMyApp(), the test case passes. The above is just an example to explain my problem. What is the standard approach to solve these kinds of timing issues? One idea would be to wait with the assertion until I get some event notification that my app has been stopped.
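
    The usual alternative to a fixed Pause() is to poll the condition until it holds or a timeout expires. A minimal sketch of such a helper (names are illustrative, not part of the Coded UI API):

    using System;
    using System.Diagnostics;
    using System.Threading;

    public static class Wait
    {
        // Polls `condition` every `pollMs` milliseconds until it returns true
        // or `timeout` elapses; returns the final value of the condition.
        public static bool Until(Func<bool> condition, TimeSpan timeout, int pollMs = 100)
        {
            Stopwatch sw = Stopwatch.StartNew();
            while (sw.Elapsed < timeout)
            {
                if (condition())
                {
                    return true;
                }
                Thread.Sleep(pollMs);
            }
            return condition(); // one last check at the deadline
        }
    }

    Used in the last assert above, it would read:

    Assert.IsTrue(Wait.Until(() => !IsMyAppRunning(), TimeSpan.FromSeconds(10)),
                  "App was stopped and shouldn't be running anymore.");

    This keeps the test fast when the app exits quickly, while still tolerating a slow shutdown.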

    Read the article

  • Is it useful to unit test methods where the only logic is guards?

    - by Vaccano
    Say I have a method like this:

    public void OrderNewWidget(Widget widget)
    {
        if ((widget.PartNumber > 0) && (widget.PartAvailable))
        {
            WidgetOrderingService.OrderNewWidgetAsync(widget.PartNumber);
        }
    }

    I have several such methods in my code (the front half of an async web service call). I am debating whether it is useful to get them covered with unit tests. Yes, there is logic here, but it is only guard logic (meaning I make sure I have the stuff I need before I allow the web service call to happen). Part of me says "sure, you can unit test them, but it is not worth the time" (I am on a project that is already behind schedule). But the other side of me says: if you don't unit test them, and someone changes the guards, then there could be problems. And the first part of me says back: if someone changes the guards, then you are just making more work for them (because now they have to change the guards and the unit tests for the guards). For example, if my service assumes responsibility for checking widget availability, then I may not want that guard any more; if it is under unit test, I have to change two places now. I see pros and cons both ways, so I thought I would ask what others have done.
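
    If the guard does get covered, the usual shape is to swap the service for a hand-rolled fake and assert whether the call was let through. A minimal sketch (the interface and wrapper class are assumptions introduced for illustration, not from the original code):

    using NUnit.Framework;

    public class Widget
    {
        public int PartNumber { get; set; }
        public bool PartAvailable { get; set; }
    }

    public interface IWidgetOrderingService
    {
        void OrderNewWidgetAsync(int partNumber);
    }

    public class FakeOrderingService : IWidgetOrderingService
    {
        public bool WasCalled { get; private set; }

        public void OrderNewWidgetAsync(int partNumber)
        {
            WasCalled = true; // just record the call for the assertion
        }
    }

    public class WidgetOrderer
    {
        private readonly IWidgetOrderingService _service;

        public WidgetOrderer(IWidgetOrderingService service)
        {
            _service = service;
        }

        public void OrderNewWidget(Widget widget)
        {
            if ((widget.PartNumber > 0) && (widget.PartAvailable))
            {
                _service.OrderNewWidgetAsync(widget.PartNumber);
            }
        }
    }

    [TestFixture]
    public class WidgetOrdererTests
    {
        [Test]
        public void Guard_blocks_unavailable_part()
        {
            var fake = new FakeOrderingService();
            var orderer = new WidgetOrderer(fake);

            orderer.OrderNewWidget(new Widget { PartNumber = 5, PartAvailable = false });

            Assert.IsFalse(fake.WasCalled); // the guard should stop the call
        }
    }

    Each such test is only a few lines, which is one data point for the "is it worth the time" trade-off in the question.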

    Read the article

  • Should I make a separate unit test for a method, if it only modifies the parent state?

    - by Dante
    Should classes that modify the state of a parent class, but not their own, be unit tested separately? And by separately, I mean putting the test in the corresponding unit test class that tests that specific class. I'm developing a library based on chained methods that return a new instance of a new type in most cases where a chained method is called. The returned instances only modify the root parent state, not their own. An overly simplified example, to get the point across:

    public class BoxedRabbits
    {
        private readonly Box _box;

        public BoxedRabbits(Box box)
        {
            _box = box;
        }

        public void SetCount(int count)
        {
            _box.Items += count;
        }
    }

    public class Box
    {
        public int Items { get; set; }

        public BoxedRabbits AddRabbits()
        {
            return new BoxedRabbits(this);
        }
    }

    var box = new Box();
    box.AddRabbits().SetCount(14);

    Say, if I write a unit test under the Box class unit tests calling box.AddRabbits().SetCount(14), I could effectively say that I've already tested the BoxedRabbits class as well. Is this the wrong way of approaching this, even though it's far simpler to first write a test for the above call than to first write a unit test for the BoxedRabbits class separately?
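
    For comparison, a direct test of BoxedRabbits (reusing the Box and BoxedRabbits classes above; NUnit syntax) stays just as small, because the class's only observable effect is on the Box handed to it:

    using NUnit.Framework;

    [TestFixture]
    public class BoxedRabbitsTests
    {
        [Test]
        public void SetCount_adds_to_the_owning_box()
        {
            var box = new Box();

            new BoxedRabbits(box).SetCount(14);

            Assert.AreEqual(14, box.Items); // the effect lands on the parent, so assert there
        }
    }

    The direct test pins down BoxedRabbits on its own, so a later change to Box.AddRabbits() cannot mask a regression in SetCount.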

    Read the article

  • How to unit test models in MVC / MVR app?

    - by BBnyc
    I'm building a node.js web app and am trying to do so for the first time in a test-driven fashion. I'm using nodeunit for testing, which I find allows me to write tests quickly and painlessly. In this particular app, the heavy lifting primarily involves translating SQL data into complex JavaScript objects and serving them to the front-end via JSON. Likewise, the app also spends a great deal of code validating and translating complex, multidimensional JavaScript objects it receives from the front-end into SQL rows. Hence I have used a fat-model design for the app: most of the real code resides in the models, where the data translation happens. What's the best approach to testing such models with unit tests? I mean in particular the methods that create JavaScript objects from the SQL rows and serve them to the front-end. Right now what I'm doing is making particular requests of my models in the unit tests and checking the returned data for all of the fields that should be there. However, I have a suspicion that this is not the most robust kind of testing I could be doing. My current testing design also means I have to package my app code with some dummy data, so that my tests can anticipate the kind of data that the app should be returning when the tests run.

    Read the article

  • New NCover 3.4.2 makes all my MSTest unit tests fail

    - by Steven
    Yesterday, I decided to install the newest NCover version (3.4.2). However, when I ran it with my existing .ncover configuration file, NCover suddenly reported that all my MSTest tests failed. Of course those tests succeed when run within Visual Studio. Because of this, NCover isn't able to determine any coverage. Somehow the old configuration doesn't seem to work with the new version. Does anyone have any idea what the problem could be or how to solve it? By the way, here is my NCover configuration.

    Project settings:
    Path to application to profile: c:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe
    Arguments for the application to profile: /testcontainer:D:\dev\MyApp\MyApp.Services.Tests.Unit\bin\Debug\MyApp.Services.Tests.Unit.dll /testcontainer:D:\dev\MyApp\MyApp.WS.Tests.Unit\bin\Debug\MyApp.WS.Tests.Unit.dll
    Working folder: D:\dev\MyApp

    Read the article

  • How often should we write unit tests?

    - by Midnight Blue
    Hi, I was recently introduced to the test-driven approach to development by my mentor at work, and he encourages me to write a unit test whenever it "makes sense." I understand some of the benefits of having a thorough unit test suite for both regression testing and refactoring, but I do wonder how often, and how thoroughly, we should write unit tests. My mentor/development lead asked me to write a new unit test case for a newly written control flow in a method that is already covered by the existing test class, and I think it is overkill. How often do you write your unit tests, and how detailed do you think your unit tests should be? Thanks!

    Read the article

  • Tests that are 2-3 times bigger than the testable code

    - by HeavyWave
    Is it normal to have tests that are way bigger than the actual code being tested? For every line of code I am testing, I usually have 2-3 lines in the unit test, which ultimately leads to tons of time being spent just typing the tests in (mock, mock, and mock some more). Where are the time savings? Do you ever avoid tests for code that is borderline trivial? Most of my methods are less than 10 lines long, and testing each one of them takes a lot of time, to the point where, as you see, I start questioning writing most of the tests in the first place. I am not advocating against unit testing; I like it. I just want to see what factors people consider before writing tests. They come at a cost (in terms of time, hence money), so this cost must be evaluated somehow. How do you estimate the savings created by your unit tests, if ever?

    Read the article

  • Does it make sense to write tests for legacy code when there is no time for a complete refactoring?

    - by is4
    I usually try to follow the advice of the book Working Effectively with Legacy Code. I break dependencies, move parts of the code to @VisibleForTesting public static methods and to new classes, to make the code (or at least some part of it) testable. And I write tests to make sure that I don't break anything when I'm modifying or adding new functions. A colleague says that I shouldn't do this. His reasoning:

    - The original code might not work properly in the first place, and writing tests for it makes future fixes and modifications harder, since devs have to understand and modify the tests too.
    - If it's GUI code with some logic (~12 lines, 2-3 if/else blocks, for example), a test isn't worth the trouble, since the code is too trivial to begin with.
    - Similar bad patterns could exist in other parts of the codebase too (which I haven't seen yet; I'm rather new); it will be easier to clean them all up in one big refactoring. Extracting out logic could undermine this future possibility.

    Should I avoid extracting out testable parts and writing tests if we don't have time for a complete refactoring? Is there any disadvantage to this that I should consider?

    Read the article

  • Writing good tests for Django applications

    - by Ludwik Trammer
    I've never written any tests in my life, but I'd like to start writing tests for my Django projects. I've read some articles about tests and decided to try to write some tests for an extremely simple Django app, for a start. The app has two views (a list view and a detail view) and a model with four fields:

    class News(models.Model):
        title = models.CharField(max_length=250)
        content = models.TextField()
        pub_date = models.DateTimeField(default=datetime.datetime.now)
        slug = models.SlugField(unique=True)

    I would like to show you my tests.py file and ask: Does it make sense? Am I even testing for the right things? Are there best practices I'm not following that you could point me to?

    My tests.py (it contains 11 tests):

    # -*- coding: utf-8 -*-
    from django.test import TestCase
    from django.test.client import Client
    from django.core.urlresolvers import reverse
    import datetime
    from someproject.myapp.models import News

    class viewTest(TestCase):
        def setUp(self):
            self.test_title = u'Test title: bareksc'
            self.test_content = u'This is a content 156'
            self.test_slug = u'test-title-bareksc'
            self.test_pub_date = datetime.datetime.today()
            self.test_item = News.objects.create(
                title=self.test_title,
                content=self.test_content,
                slug=self.test_slug,
                pub_date=self.test_pub_date,
            )
            client = Client()
            self.response_detail = client.get(self.test_item.get_absolute_url())
            self.response_index = client.get(reverse('the-list-view'))

        def test_detail_status_code(self):
            """HTTP status code for the detail view"""
            self.failUnlessEqual(self.response_detail.status_code, 200)

        def test_list_status_code(self):
            """HTTP status code for the list view"""
            self.failUnlessEqual(self.response_index.status_code, 200)

        def test_list_numer_of_items(self):
            self.failUnlessEqual(len(self.response_index.context['object_list']), 1)

        def test_detail_title(self):
            self.failUnlessEqual(self.response_detail.context['object'].title, self.test_title)

        def test_list_title(self):
            self.failUnlessEqual(self.response_index.context['object_list'][0].title, self.test_title)

        def test_detail_content(self):
            self.failUnlessEqual(self.response_detail.context['object'].content, self.test_content)

        def test_list_content(self):
            self.failUnlessEqual(self.response_index.context['object_list'][0].content, self.test_content)

        def test_detail_slug(self):
            self.failUnlessEqual(self.response_detail.context['object'].slug, self.test_slug)

        def test_list_slug(self):
            self.failUnlessEqual(self.response_index.context['object_list'][0].slug, self.test_slug)

        def test_detail_template(self):
            self.assertContains(self.response_detail, self.test_title)
            self.assertContains(self.response_detail, self.test_content)

        def test_list_template(self):
            self.assertContains(self.response_index, self.test_title)

    Read the article

  • Where do you put your unit test?

    - by soulmerge
    I have found several conventions for housekeeping unit tests in a project, and I'm not sure which approach would be suitable for our next PHP project. I am trying to find the convention that best encourages easy development and accessibility of the tests when reviewing the source code. I would be very interested in your experience/opinion regarding each:

    - One folder for production code, another for unit tests: this separates unit tests from the logic files of the project. This separation of concerns is as much a nuisance as it is an advantage: someone looking into the source code of the project will, so I suppose, either browse the implementation or the unit tests (or, more commonly, the implementation only). The advantage of unit tests being another viewpoint on your classes is lost; those two viewpoints are just too far apart IMO.

    - Annotated test methods: any modern unit testing framework I know allows developers to create dedicated test methods, annotate them (@test) and embed them in the project code. The big drawback I see here is that the project files get cluttered. Even if these methods are separated using a comment header (like UNIT TESTS below this line), it just bloats the class unnecessarily.

    - Test files within the same folders as the implementation files: our file naming convention dictates that PHP files containing classes (one class per file) should end with .class.php. I could imagine that putting the unit tests for a class into another file ending in .test.php would make the tests much more present to other developers without tainting the class. Although it bloats the project folders instead of the implementation files, this is my favorite so far, but I have my doubts: I would think others have come up with this already and discarded it for some reason (i.e. I have not seen a Java project with the files Foo.java and FooTest.java in the same folder). Maybe it's because Java developers make heavier use of IDEs that allow them easier access to the tests, whereas in PHP no big editors have emerged (like Eclipse for Java), and many devs I know use vim/emacs or similar editors with little support for PHP development per se.

    What is your experience with any of these unit test placements? Do you have another convention I haven't listed here? Or am I just overrating unit test accessibility for reviewers?

    Read the article
