Search Results

Search found 68011 results on 2721 pages for 'unit of work'.

Page 5 of 2721

  • OOP for unit testing: The good, the bad and the ugly

    - by Jeff
    I have recently read Miško Hevery's PDF guide to writing testable code, which states that you should limit class instantiation in your constructors. I understand that this is what you should do, because it allows you to easily mock the objects that are passed as parameters to your class. But when it comes to writing actual code, I often end up with something like this (the example is in PHP using Zend Framework, but I think it's self-explanatory): class Some_class { private $_data; private $_options; private $_locale; public function __construct($data, $options = null) { $this->_data = $data; if ($options != null) { $this->_options = $options; } $this->_init(); } private function _init() { if(isset($this->_options['locale'])) { $locale = $this->_options['locale']; if ($locale instanceof Zend_Locale) { $this->_locale = $locale; } elseif (Zend_Locale::isLocale($locale)) { $this->_locale = new Zend_Locale($locale); } else { $this->_locale = new Zend_Locale(); } } } } According to my understanding of Miško Hevery's guide, I shouldn't instantiate the Zend_Locale in my class but push it in through the constructor (which can be done through the options array in my example). I am wondering what would be the best practice to get the most flexibility for unit testing this code, and also if I want to move away from Zend Framework. Thanks in advance

    Read the article

  • Relationship between Repository and Unit of Work

    - by NullOrEmpty
    I am going to implement a repository, and I would like to use the UOW pattern, since the consumer of the repository could do several operations and I want to commit them all at once. After reading several articles about the matter, I still don't get how to relate these two elements; depending on the article, it is done one way or the other. Sometimes the UOW is something internal to the repository: public class Repository { UnitOfWork _uow; public Repository() { _uow = IoC.Get<UnitOfWork>(); } public void Save(Entity e) { _uow.Track(e); } public void SubmitChanges() { SaveInStorage(_uow.GetChanges()); } } And sometimes it is external: public class Repository { public void Save(Entity e, UnitOfWork uow) { uow.Track(e); } public void SubmitChanges(UnitOfWork uow) { SaveInStorage(uow.GetChanges()); } } Other times, it is the UOW that references the Repository: public class UnitOfWork { Repository _repository; public UnitOfWork(Repository repository) { _repository = repository; } public void Save(Entity e) { this.Track(e); } public void SubmitChanges() { _repository.Save(this.GetChanges()); } } How are these two elements related? The UOW tracks the elements that need to be changed, and the repository contains the logic to persist those changes, but... who calls whom? Does the last one make more sense? Also, who manages the connection? If several operations have to be done in the repository, I think using the same connection, and even the same transaction, is sounder, so maybe putting the connection object inside the UOW, and the UOW inside the repository, makes sense as well. Cheers
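    One common resolution of this question - and the shape that EF's DbContext and LINQ to SQL's DataContext themselves embody - is to let the Unit of Work own the connection and transaction and hand out repositories that merely register changes with it. The sketch below is only one possible arrangement, in C#, with hypothetical names and a hypothetical Entities table; it is not presented as the canonical pattern:

        using System;
        using System.Collections.Generic;
        using System.Data.SqlClient;

        // Hedged sketch: the Unit of Work owns the connection and transaction;
        // repositories are created by it and only *register* changes, which
        // Commit() then persists in one batch. Names and SQL are hypothetical.
        public class Entity { public string Name; }

        public class Repository
        {
            private readonly UnitOfWork _uow;
            public Repository(UnitOfWork uow) { _uow = uow; }
            public void Save(Entity e) { _uow.RegisterNew(e); } // track only, no I/O here
        }

        public class UnitOfWork : IDisposable
        {
            private readonly SqlConnection _connection;
            private readonly List<Entity> _newEntities = new List<Entity>();

            public UnitOfWork(string connectionString)
            {
                _connection = new SqlConnection(connectionString);
            }

            // The repository is handed out by (and scoped to) this unit of work.
            public Repository Entities { get { return new Repository(this); } }

            internal void RegisterNew(Entity e) { _newEntities.Add(e); }

            public void Commit()
            {
                _connection.Open();
                using (var tx = _connection.BeginTransaction())
                {
                    foreach (var e in _newEntities)
                    {
                        using (var cmd = new SqlCommand(
                            "INSERT INTO Entities (Name) VALUES (@name)", _connection, tx))
                        {
                            cmd.Parameters.AddWithValue("@name", e.Name);
                            cmd.ExecuteNonQuery();
                        }
                    }
                    tx.Commit();
                }
            }

            public void Dispose() { _connection.Dispose(); }
        }

        // Consumer-side usage: several Save calls, one commit, one transaction.
        //   using (var uow = new UnitOfWork(connectionString))
        //   {
        //       uow.Entities.Save(new Entity { Name = "a" });
        //       uow.Entities.Save(new Entity { Name = "b" });
        //       uow.Commit();
        //   }

    With this arrangement the consumer works against a single UnitOfWork instance, saves through the repositories it exposes, and commits everything in one transaction with one Commit() call.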

    Read the article

  • Using Mock for event listeners in unit-testing

    - by phtrivier
    I keep having to test this kind of code (language irrelevant): public class Foo { public Foo(Dependency1 dep1) { this.dep1 = dep1; } public void setUpListeners() { this.dep1.addSomeEventListener(.... some listener code ...); } } Typically, you want to test that when the dependency fires the event, the class under test reacts appropriately (in some situations, the only purpose of such classes is to wire up lots of other components, which can be tested independently). So far, to test this, I always end up creating a 'stub' that implements both an addXXXXListener method, which simply stores the callback, and a fireXXXX method, which simply calls any registered listener. This is a bit tedious, since you have to create the stub with the right interface, although you can use an introspection framework that can 'spy' on a method, and inject the real dependency in the tests. Is there a cleaner way to do this kind of thing?
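    If a mocking framework is available, the fireXXXX plumbing can usually be skipped entirely, because most frameworks can raise an event directly on the mock. A minimal sketch in C# with Moq and NUnit; IDependency1 and Foo below are hypothetical stand-ins for the pseudo-code above:

        using System;
        using Moq;
        using NUnit.Framework;

        // Hypothetical stand-ins for the pseudo-code in the question.
        public interface IDependency1
        {
            event EventHandler SomethingHappened;
        }

        public class Foo
        {
            private readonly IDependency1 _dep1;
            public bool Reacted { get; private set; }

            public Foo(IDependency1 dep1) { _dep1 = dep1; }

            public void SetUpListeners()
            {
                _dep1.SomethingHappened += (sender, args) => Reacted = true;
            }
        }

        [TestFixture]
        public class FooTests
        {
            [Test]
            public void Reacts_when_dependency_fires_its_event()
            {
                var dep = new Mock<IDependency1>();
                var foo = new Foo(dep.Object);
                foo.SetUpListeners();

                // No hand-written fireXXXX needed: raise the event on the mock itself.
                dep.Raise(d => d.SomethingHappened += null, EventArgs.Empty);

                Assert.IsTrue(foo.Reacted);
            }
        }

    In Java, Mockito can get a similar effect by capturing the registered listener with an ArgumentCaptor and invoking it by hand, which is still less typing than a full hand-rolled stub.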

    Read the article

  • Naming your unit tests

    - by kerry
    When you create a test for your class, what kind of naming convention do you use for the tests? How thorough are your tests? I have lately switched from the conventional camel case test names to lower case letters with underscores. I have found this increases the readability and causes me to write better tests. A simple utility class: public class ArrayUtils { public static <T> T[] gimmeASlice(T[] anArray, Integer start, Integer end) { // implementation (feeling lazy today) } } I have seen some people who would write a test like this: public class ArrayUtilsTest { @Test public void testGimmeASliceMethod() { // do some tests } } A more thorough and readable test would be: public class ArrayUtilsTest { @Test public void gimmeASlice_returns_appropriate_slice() { // ... } @Test public void gimmeASlice_throws_NullPointerException_when_passed_null() { // ... } @Test public void gimmeASlice_returns_end_of_array_when_slice_is_partly_out_of_bounds() { // ... } @Test public void gimmeASlice_returns_empty_array_when_slice_is_completely_out_of_bounds() { // ... } } Looking at this test, you have no doubt what the method is supposed to do. And, when one fails, you will know exactly what the issue is.

    Read the article

  • Unit testing to prove balanced tree

    - by Darrel Hoffman
    I've just built a self-balancing tree (red-black) in Java (language should be irrelevant for this question though), and I'm trying to come up with a good means of testing that it's properly balanced. I've tested all the basic tree operations, but I can't think of a way to test that it is indeed well and truly balanced. I've tried inserting a large dictionary of words, both pre-sorted and un-sorted. With a balanced tree, those should take roughly the same amount of time, but an unbalanced tree would take significantly longer on the already-sorted list. But I don't know how to go about testing for that in any reasonable, reproducible way. (I've tried doing millisecond tests on these, but there's no noticeable difference - probably because my source data is too small.) Is there a better way to be sure that the tree is really balanced? Say, by looking at the tree after it's created and seeing how deep it goes? (That is, without modifying the tree itself by adding a depth field to each node, which is just wasteful if you don't need it for anything other than testing.)
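    One reproducible alternative to timing is to assert the structural property directly: a red-black tree with n nodes can never be deeper than 2*log2(n+1), so a test-only recursive height check (no extra depth field stored on the nodes) catches an unbalanced tree immediately. A hedged sketch, shown in C#/NUnit with hypothetical RedBlackTree/Node names standing in for the real implementation:

        using System;
        using NUnit.Framework;

        // Hypothetical minimal node shape; the real check would walk whatever
        // node type the tree actually exposes (a test-only traversal, nothing
        // added to the tree itself).
        public class Node
        {
            public int Key;
            public Node Left, Right;
        }

        public static class TreeTestHelpers
        {
            public static int Height(Node node)
            {
                if (node == null) return 0;
                return 1 + Math.Max(Height(node.Left), Height(node.Right));
            }
        }

        [TestFixture]
        public class BalanceTests
        {
            [Test]
            public void Height_stays_logarithmic_for_sorted_insertions()
            {
                var tree = new RedBlackTree();               // hypothetical class under test
                const int n = 100000;
                for (int i = 0; i < n; i++) tree.Insert(i);  // worst case for a naive BST

                int height = TreeTestHelpers.Height(tree.Root);
                int maxAllowed = (int)(2 * Math.Log(n + 1, 2)); // red-black bound: 2*log2(n+1)
                Assert.LessOrEqual(height, maxAllowed);
            }
        }

    The same test run against a deliberately broken rebalancing step fails loudly, which also makes it a useful regression guard.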

    Read the article

  • Configuring TeamCity + NUnit unit tests so files can be loaded properly

    - by Dave
    In a nutshell, I have a solution that builds fine in the IDE, and the unit tests all run fine with the NUnit GUI (via the NUnitit VS2008 plugin). However, when I execute my TeamCity build runner, all unit tests that require file access (e.g. tests that run against specific XML files) fail with System.IO.DirectoryNotFoundException. The reason for this is clear: it's looking for the supporting XML files loaded by various unit tests in the wrong folder. The way my unit tests are structured looks like this: +-- project folder +-- unit tests folder +-- test.xml +-- test.cs +-- project file.xaml +-- project file.xaml.cs All of my projects own their own UnitTests folder, which contains the .cs file and any XML files, XML schemas, etc. that are necessary to run the tests. So when I write my test.cs, I have it look for "test.xml" in the code because they are in the same folder (actually, I do something like ....\unit tests\test.xml, but that's kind of silly). As I said before, the tests run great in NUnit. But that's because the unit tests are part of the project. When running the unit tests from TeamCity, I am executing them against the assemblies that get copied to the main app's output folder. These unit test XML files should not be copied willy-nilly to the output folder just to make the tests pass. Can anyone suggest a better method of organizing my unit tests in each project (which are dependencies for the main app), such that I can execute the unit tests from NUnit and from the TeamCity build runner? The only other option I can come up with is to just put the testing XML data in code, rather than loading it from a file. I would rather not do this.
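    One approach that tends to work in the IDE, the NUnit GUI and TeamCity alike is to add the XML files to the test project with "Copy to Output Directory" set, and then resolve them relative to the test assembly's directory rather than the process's current working directory (which is the thing that differs under TeamCity). A hedged C# sketch, assuming a TestData folder in the test project:

        using System;
        using System.IO;
        using NUnit.Framework;

        [TestFixture]
        public class XmlLoadingTests
        {
            // AppDomain.CurrentDomain.BaseDirectory points at the folder containing
            // the test assembly (and is not affected by NUnit's shadow copying), so
            // the path no longer depends on whichever working directory the runner uses.
            private static string TestDataPath(string fileName)
            {
                return Path.Combine(AppDomain.CurrentDomain.BaseDirectory,
                                    Path.Combine("TestData", fileName));
            }

            [Test]
            public void Can_locate_test_xml()
            {
                string path = TestDataPath("test.xml"); // file marked "Copy to Output Directory"
                Assert.IsTrue(File.Exists(path), "Expected test data at: " + path);
            }
        }

    With that in place it stops mattering whether the runner executes the tests from the main app's output folder or from the test project's own output folder, as long as the data files are copied next to the test assembly.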

    Read the article

  • How to write unit tests to make sure a stored procedure is deleting rows from the database?

    - by aspdotnetuser
    I'm new to unit testing and I need some help with the following. I have created a small project to help me learn how to make Unit Tests. The functionality for one of the forms in my application deletes a user from the User table (and other rows in mapping tables). Currently, the unit test I have created to test this sets up the required objects and then calls the business rules method (passing in the user id) which calls the data access method to execute the stored procedure that deletes the rows in the tables. Is this the correct method to test whether something is being deleted successfully? Should the unit test / setup method first insert some test data which the unit test then deletes?
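    This is really an integration test, and one common way to keep it repeatable is to have each test insert the data it is about to delete and run inside a transaction that is rolled back afterwards, so the database is left untouched. A hedged C#/NUnit sketch with hypothetical table, connection string and business-rules names:

        using System.Data.SqlClient;
        using System.Transactions;
        using NUnit.Framework;

        [TestFixture]
        public class DeleteUserTests
        {
            // Hypothetical test database; adjust to the real test environment.
            private const string ConnectionString =
                "Server=.;Database=MyAppTests;Integrated Security=true";

            private TransactionScope _scope;

            [SetUp]
            public void SetUp()
            {
                // Everything the test does (including the stored procedure's deletes)
                // is rolled back, because the scope is disposed without Complete().
                _scope = new TransactionScope();
            }

            [TearDown]
            public void TearDown()
            {
                _scope.Dispose();
            }

            [Test]
            public void DeleteUser_removes_the_user_row()
            {
                int userId = InsertTestUser("delete-me");      // arrange: known data

                new UserBusinessRules().DeleteUser(userId);    // act: hypothetical BLL entry point

                Assert.AreEqual(0, CountUsers(userId));        // assert: the row is gone
            }

            private static int InsertTestUser(string name)
            {
                using (var conn = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand(
                    "INSERT INTO [User] (Name) VALUES (@n); SELECT CAST(SCOPE_IDENTITY() AS int);", conn))
                {
                    cmd.Parameters.AddWithValue("@n", name);
                    conn.Open();
                    return (int)cmd.ExecuteScalar();
                }
            }

            private static int CountUsers(int userId)
            {
                using (var conn = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand("SELECT COUNT(*) FROM [User] WHERE Id = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", userId);
                    conn.Open();
                    return (int)cmd.ExecuteScalar();
                }
            }
        }

    If the stored procedure also cleans up mapping tables, the same pattern extends naturally: insert the mapping rows in the arrange step and assert they are gone as well.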

    Read the article

  • Am I just not understanding TDD unit testing (Asp.Net MVC project)?

    - by KallDrexx
    I am trying to figure out how to correctly and efficiently unit test my ASP.NET MVC project. When I started on this project I bought the Pro ASP.NET MVC book, and with that book I learned about TDD and unit testing. After seeing the examples, and given the fact that I work as a software engineer in QA in my current company, I was amazed at how awesome TDD seemed to be. So I started working on my project and went gung-ho writing unit tests for my database layer, business layer, and controllers. Everything got a unit test prior to implementation. At first I thought it was awesome, but then things started to go downhill. Here are the issues I started encountering: I ended up writing application code in order to make it possible for unit tests to be performed. I don't mean this in a good way, as in my code was broken and I had to fix it so the unit tests pass. I mean that abstracting out the database to a mock database is impossible due to the use of LINQ for data retrieval (using the generic repository pattern). The reason is that with LINQ to SQL or LINQ to Entities you can do joins just by doing: var objs = from p in _container.Projects select p.Objects; However, if you mock the database layer out, in order to have that LINQ pass the unit test you must change it to: var objs = from p in _container.Projects join o in _container.Objects on p.Id equals o.ProjectId select o; Not only does this mean you are changing your application logic just so you can unit test it, but you are making your code less efficient for the sole purpose of testability, and getting rid of a lot of the advantages using an ORM has in the first place. Furthermore, since a lot of the IDs for my models are database generated, I found I had to write additional code to handle the non-database tests, since IDs were never generated, and I still had to handle those cases for the unit tests to pass, yet they would never occur in real scenarios. Thus I ended up throwing out my database unit testing. Writing unit tests for controllers was easy as long as I was returning views. However, the major part of my application (and the one that would benefit most from unit testing) is a complicated ajax web application. For various reasons I decided to change the app from returning views to returning JSON with the data I needed. After this occurred my unit tests became extremely painful to write, as I have not found any good way to write unit tests for non-trivial JSON. After pounding my head and wasting a ton of time trying to find a good way to unit test the JSON, I gave up and deleted all of my controller unit tests (all controller actions are focused on this part of the app so far). So finally I was left with testing the service layer (BLL). Right now I am using EF4; however, I had this issue with LINQ to SQL as well. I chose to do the EF4 model-first approach because, to me, it makes sense to do it that way (define my business objects and let the framework figure out how to translate them into the SQL backend). This was fine at the beginning, but now it is becoming cumbersome due to relationships. For example, say I have Project, User, and Object entities. One Object must be associated with a project, and a project must be associated with a user. These are not only database-specific rules; they are my business rules as well. However, say I want to do a unit test that I am able to save an object (for a simple example).
    I now have to do the following code just to make sure the save worked: User usr = new User { Name = "Me" }; _userService.SaveUser(usr); Project prj = new Project { Name = "Test Project", Owner = usr }; _projectService.SaveProject(prj); Object obj = new Object { Name = "Test Object" }; _objectService.SaveObject(obj); // Perform verifications There are several issues with having to do all this just to perform one unit test. For starters, if I add a new dependency, such as all projects must belong to a category, I must go into EVERY single unit test that references a project, add code to save the category, then add code to add the category to the project. This can be a HUGE effort down the road for a very simple business logic change, and yet almost none of the unit tests I will be modifying for this requirement are actually meant to test that feature/requirement. If I then add verifications to my SaveProject method, so that projects cannot be saved unless they have a name with at least 5 characters, I then have to go through every Object and Project unit test to make sure that the new requirement doesn't make any unrelated unit tests fail. If there is an issue in the UserService.SaveUser() method it will cause all project and object unit tests to fail, and the cause won't be immediately noticeable without having to dig through the exceptions. Thus I have removed all service layer unit tests from my project. I could go on and on, but so far I have not seen any way for unit testing to actually help me and not get in my way. I can see specific cases where I can, and probably will, implement unit tests, such as making sure my data verification methods work correctly, but those cases are few and far between. Some of my issues can probably be mitigated, but not without adding extra layers to my application, and thus making more points of failure just so I can unit test. Thus I have no unit tests left in my code. Luckily I heavily use source control, so I can get them back if I need to, but I just don't see the point. Everywhere on the internet I see people talking about how great TDD unit tests are, and I'm not just talking about the fanatical people. The few people who dismiss TDD/unit tests give bad arguments, claiming they are more efficient debugging by hand through the IDE, or that their coding skills are so amazing that they don't need it. I recognize that both of those arguments are utter bollocks, especially for a project that needs to be maintainable by multiple developers, but any valid rebuttals to TDD seem to be few and far between. So the point of this post is to ask, am I just not understanding how to use TDD and automatic unit tests?
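    For what it's worth, JSON-returning MVC controller actions are often easier to test than they first appear, because JsonResult.Data still holds the original, unserialized object graph; if the action returns a plain DTO rather than an anonymous type, the test can assert on it directly with no JSON parsing. A hedged sketch with hypothetical controller and DTO names:

        using System.Web.Mvc;
        using NUnit.Framework;

        // Hypothetical DTO and controller, for illustration only.
        public class ProjectDto
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class ProjectsController : Controller
        {
            public JsonResult GetProject(int id)
            {
                var dto = new ProjectDto { Id = id, Name = "Test Project" };
                return Json(dto, JsonRequestBehavior.AllowGet);
            }
        }

        [TestFixture]
        public class ProjectsControllerTests
        {
            [Test]
            public void GetProject_returns_requested_project()
            {
                var controller = new ProjectsController();

                JsonResult result = controller.GetProject(42);

                // JsonResult.Data is the unserialized object graph, so the
                // assertion needs no JSON string parsing at all.
                var dto = (ProjectDto)result.Data;
                Assert.AreEqual(42, dto.Id);
            }
        }

    This does not address the cascading-setup problem with the service layer, but it keeps the controller tests cheap even after switching the actions from views to JSON.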

    Read the article

  • Getting work done in a small office

    - by three-cups
    I work in an office area of ~450sqft. There are a total of 7 people working in the office. I've been finding it hard to concentrate on my work (writing code) because of the distractions going on around me. The distractions are both work-related and non-work-related conversations. I'm trying to figure out what to do in this situation. I want to be part of the team, and I want to get my work done to the best of my ability. I can easily think of two options that I don't like: Stay where I am, not be able to concentrate and get less work done Move somewhere else. (This is tough because I code on a desktop, so I'm not very mobile.) But what are other options? I'm going to talk this through with my team in the next couple days. Any advice or solutions would be great.

    Read the article

  • Are programmers tied to their code?

    - by Jason
    Assuming you are working for a company or in a team of developers, is the code bound to the person who wrote it? When someone develops a particular functionality or an area of the application, is this person the only one who can, is allowed to, or is simply able to make changes to it? I personally think that a well-done piece of code or program should be easy for any programmer to modify, but what about what you see in your environment? At my work, I'd say that the majority of the code can be modified by anyone (that's what coding standards are for, right?). There are some things, though, that are the 'property' of certain coworkers, like the module that handles pay or some important functionality of the production module (we are developing an ERP system). What about your workplace? Am I the only one living this?

    Read the article

  • Handling "related" work within a single agile work item

    - by Tesserex
    I'm on a project team of 4 devs, myself included. We've been having a long discussion on how to handle extra work that comes up in the course of a single work item. This extra work is usually things that are slightly related to the task, but not always necessary to accomplish the goal of the item (that may be an opinion). Examples include but are not limited to: refactoring of the code changed by the work item refactoring code neighboring the code changed by the item re-architecting the larger code area around the ticket. For example if an item has you changing a single function, you realize the entire class now could be redone to better accommodate this change. improving the UI on a form you just modified When this extra work is small we don't mind. The problem is when this extra work causes a substantial extension of the item beyond the original feature point estimation. Sometimes a 5 point item will actually take 13 points of time. In one case we had a 13 point item that in retrospect could have been 80 points or more. There are two options going around in our discussion for how to handle this. We can accept the extra work in the same work item, and write it off as a mis-estimation. Arguments for this have included: We plan for "padding" at the end of the sprint to account for this sort of thing. Always leave the code in better shape than you found it. Don't check in half-assed work. If we leave refactoring for later, it's hard to schedule and may never get done. You are in the best mental "context" to handle this work now, since you're waist deep in the code already. Better to get it out of the way now and be more efficient than to lose that context when you come back later. We draw a line for the current work item, and say that the extra work goes into a separate ticket. Arguments include: Having a separate ticket allows for a new estimation, so we aren't lying to ourselves about how many points things really are, or having to admit that all of our estimations are terrible. The sprint "padding" is meant for unexpected technical challenges that are direct barriers to completing the ticket requirements. It is not intended for side items that are just "nice-to-haves". If you want to schedule refactoring, just put it at the top of the backlog. There is no way for us to properly account for this stuff in an estimation, since it seems somewhat arbitrary when it comes up. A code reviewer might say "those UI controls (which you actually didn't modify in this work item) are a bit confusing, can you fix that too?" which is like an hour, but they might say "Well if this control now inherits from the same base class as the others, why don't you move all of this (hundreds of lines of) code into the base and rewire all this stuff, the cascading changes, etc.?" And that takes a week. It "contaminates the crime scene" by adding unrelated work into the ticket, making our original feature point estimates meaningless. In some cases, the extra work postpones a check-in, causing blocking between devs. Some of us are now saying that we should decide some cut off, like if the additional stuff is less than 2 FP, it goes in the same ticket, if it's more, make it a new ticket. Since we're only a few months into using Agile, what's the opinion of all the more seasoned Agile veterans around here on how to handle this?

    Read the article

  • Working for free?

    - by Jonny
    I came across this article Work for Free that got me thinking. The goal of every employer is to gain more value from workers than the firm pays out in wages; otherwise, there is no growth, no advance, and no advantage for the employer. Conversely, the goal of every employee should be to contribute more to the firm than he or she receives in wages, and thereby provide a solid rationale for receiving raises and advancement in the firm. I don't need to tell you that the refusenik didn't last long in this job. In contrast, here is a story from last week. My phone rang. It was the employment division of a major university. The man on the phone was inquiring about the performance of a person who did some site work on Mises.org last year. I was able to tell him about a remarkable young man who swung into action during a crisis, and how he worked three 19-hour days, three days in a row, how he learned new software with diligence, how he kept his cool, how he navigated his way with grace and expertise amidst some 80 different third-party plug-ins and databases, how he saw his way around the inevitable problems, how he assumed responsibility for the results, and much more. What I didn't tell the interviewer was that this person did all this without asking for any payment. Did that fact influence my report on his performance? I'm not entirely sure, but the interviewer probably sensed in my voice my sense of awe toward what this person had done for the Mises Institute. The interviewer told me that he had written down 15 different questions to ask me but that I had answered them all already in the course of my monologue, and that he was thrilled to hear all these specifics. The person was offered the job. He had done a very wise thing; he had earned a devotee for life. The harder the economic times, the more employers need to know what they are getting when they hire someone. The job applications pour in by the buckets, all padded with degrees and made to look as impressive as possible. It's all just paper. What matters today is what a person can do for a firm. The resume becomes pro forma but not decisive under these conditions. But for a former boss or manager to rave about you to a potential employer? That's worth everything. What do you think? Has anyone here worked for free? If so, has it benefited you in any way? Why should(nt) you work for free (presuming you have the money from other means to keep you going)? Can you share your experience? Me, I am taking a year out of college and haven't gotten a degree yet so that's probably why most of my job applications are getting ignored. So im thinking about working for free for the experience?

    Read the article

  • Where should I draw the line between unit tests and integration tests? Should they be separate?

    - by Earlz
    I have a small MVC framework I've been working on. Its code base definitely isn't big, but it's no longer just a couple of classes. I finally decided to take the plunge and start writing tests for it (yes, I know I should've been doing that all along, but its API was super unstable up until now). Anyway, my plan is to make it extremely easy to test, including integration tests. An example integration test would go something along these lines: fake HTTP request object -> MVC framework -> HTTP response object -> check the response is correct. Because this is all doable without any state or special tools (browser automation, etc.), I could actually do this with ease with regular unit test frameworks (I use NUnit). Now the big question. Where exactly should I draw the line between unit tests and integration tests? Should I only test one class at a time (as much as possible) with unit tests? Also, should integration tests be placed in the same testing project as my unit testing project?
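    One pragmatic convention is to keep both kinds of tests in the same NUnit project but tag the integration fixtures with a category, so either set can be included or excluded per run (for example with nunit-console's /include and /exclude switches). A hedged sketch; Router, MvcApplication and FakeHttpRequest are hypothetical framework classes used only to show the split:

        using NUnit.Framework;

        // One convention: both levels live in the same test project, but the
        // integration fixtures carry a category so they can be filtered, e.g.
        //   nunit-console MyTests.dll /exclude:Integration   (fast unit runs)
        //   nunit-console MyTests.dll /include:Integration   (full pipeline runs)
        [TestFixture]
        public class RouterTests                      // unit level: one class at a time
        {
            [Test]
            public void Maps_root_url_to_home_controller()
            {
                var router = new Router();            // hypothetical framework class
                Assert.AreEqual("Home", router.Resolve("/").Controller);
            }
        }

        [TestFixture]
        [Category("Integration")]
        public class RequestPipelineTests             // integration level: fake request in, response out
        {
            [Test]
            public void Known_route_returns_200()
            {
                var app = new MvcApplication();                    // hypothetical
                var response = app.Process(new FakeHttpRequest("/"));
                Assert.AreEqual(200, response.StatusCode);
            }
        }

    The line itself then becomes mostly a labelling question: tests that exercise one class in isolation stay untagged, and anything that drives the whole request pipeline gets the Integration category.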

    Read the article

  • Is it useful to unit test methods where the only logic is guards?

    - by Vaccano
    Say I have a method like this: public void OrderNewWidget(Widget widget) { if ((widget.PartNumber > 0) && (widget.PartAvailable)) { WigdetOrderingService.OrderNewWidgetAsync(widget.PartNumber); } } I have several such methods in my code (the front half to an async Web Service call). I am debating if it is useful to get them covered with unit tests. Yes there is logic here, but it is only guard logic. (Meaning I make sure I have the stuff I need before I allow the web service call to happen.) Part of me says "sure you can unit test them, but it is not worth the time" (I am on a project that is already behind schedule). But the other side of me says, if you don't unit test them, and someone changes the Guards, then there could be problems. But the first part of me says back, if someone changes the guards, then you are just making more work for them (because now they have to change the guards and the unit tests for the guards). For example, if my service assumes responsibility to check for Widget availability then I may not want that guard any more. If it is under unit test, I have to change two places now. I see pros and cons in both ways. So I thought I would ask what others have done.
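    If the ordering service can be put behind an injectable interface (or already is), each guard costs only a couple of one-line tests to pin down: one asserting the call is suppressed, one asserting it goes through. A hedged C#/NUnit sketch; the interface, the spy and the OrderingFrontEnd wrapper are hypothetical names introduced purely for the example, standing in for the static WigdetOrderingService in the snippet:

        using NUnit.Framework;

        // Hedged sketch: assumes the static ordering service is wrapped behind an
        // injectable interface so the guard can be observed from a test.
        public interface IWidgetOrderingService
        {
            void OrderNewWidgetAsync(int partNumber);
        }

        public class Widget { public int PartNumber; public bool PartAvailable; }

        public class OrderingFrontEnd
        {
            private readonly IWidgetOrderingService _service;
            public OrderingFrontEnd(IWidgetOrderingService service) { _service = service; }

            public void OrderNewWidget(Widget widget)
            {
                if (widget.PartNumber > 0 && widget.PartAvailable)
                    _service.OrderNewWidgetAsync(widget.PartNumber);
            }
        }

        class SpyOrderingService : IWidgetOrderingService
        {
            public int Calls;
            public void OrderNewWidgetAsync(int partNumber) { Calls++; }
        }

        [TestFixture]
        public class OrderNewWidgetGuardTests
        {
            [Test]
            public void Does_not_call_service_when_part_number_is_invalid()
            {
                var spy = new SpyOrderingService();
                var sut = new OrderingFrontEnd(spy);

                sut.OrderNewWidget(new Widget { PartNumber = 0, PartAvailable = true });

                Assert.AreEqual(0, spy.Calls);   // the guard must swallow the call
            }

            [Test]
            public void Calls_service_when_part_is_valid_and_available()
            {
                var spy = new SpyOrderingService();
                var sut = new OrderingFrontEnd(spy);

                sut.OrderNewWidget(new Widget { PartNumber = 7, PartAvailable = true });

                Assert.AreEqual(1, spy.Calls);
            }
        }

    Whether those few minutes are worth spending is still a judgement call, but once the service is injectable the cost side of the trade-off is small.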

    Read the article

  • Is it dangerous to substitute unit tests for user testing? [closed]

    - by MushinNoShin
    Is it dangerous to substitute unit tests for user testing? A co-worker believes we can reduce the manual user testing we need to do by adding more unit tests. Is this dangerous? Unit tests seem to have a very different purpose than user testing. Aren't unit tests there to inform design and allow breaking changes to be caught early? Isn't that fundamentally different from determining whether an aspect of the system is correct in the context of the system as a whole? Is this a case of substituting apples for oranges?

    Read the article

  • Should I make a separate unit test for a method if it only modifies the parent state?

    - by Dante
    Should classes that modify the state of a parent class, but not their own, be unit tested separately? And by separately, I mean putting the test in the corresponding unit test class that tests that specific class. I'm developing a library based on chained methods that, in most cases, return a new instance of a new type when a chained method is called. The returned instances only modify the root parent's state, not their own. An overly simplified example, to get the point across: public class BoxedRabbits { private readonly Box _box; public BoxedRabbits(Box box) { _box = box; } public void SetCount(int count) { _box.Items += count; } } public class Box { public int Items { get; set; } public BoxedRabbits AddRabbits() { return new BoxedRabbits(this); } } var box = new Box(); box.AddRabbits().SetCount(14); Say I write a unit test under the Box class unit tests: box.AddRabbits().SetCount(14). I could effectively say that I've already tested the BoxedRabbits class as well. Is this the wrong way of approaching this, even though it's far simpler to first write a test for the above call than to write a unit test for the BoxedRabbits class separately?
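    A minimal NUnit sketch of the test described above, written under the Box fixture and asserting only through the root object's state (it reuses the Box/BoxedRabbits classes from the example):

        using NUnit.Framework;

        [TestFixture]
        public class BoxTests
        {
            // One test written at the Box level also exercises BoxedRabbits.SetCount,
            // since the chained call is the only way the child class is ever used.
            [Test]
            public void AddRabbits_then_SetCount_updates_the_box_item_count()
            {
                var box = new Box();

                box.AddRabbits().SetCount(14);

                Assert.AreEqual(14, box.Items);
            }
        }

    Since SetCount is reachable only through AddRabbits(), this single test does cover BoxedRabbits; a separate fixture for BoxedRabbits mainly starts paying off once that class grows behaviour of its own.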

    Read the article

  • What's wrong with performing unit tests against a concrete implementation if your frameworks are not going to change?

    - by palm snow
    First, a bit of background: we are re-architecting our product suite that was written 10 years ago and has served its purpose. One thing that we cannot change is the database schema, as we have a client base of 500+ using this system. Our db schema has over 150 tables. We have decided on using Entity Framework 4.1 as the DAL and are still evaluating various frameworks for storing our business logic. I am investigating bringing unit testing into the mix, but I am also confused about how far I need to go in setting up a full-blown TDD environment. One aspect of setting up unit testing is implementing the Repository and Unit of Work patterns, mocking frameworks, etc. This means there will be a cost and an investment in the code bloat associated with all these frameworks. I understand some of this could be auto-generated, but when it comes to things like behaviors, it will be mostly hand-written. Just to be clear, I am not questioning the importance of unit testing your code. I am just not sure we need all its components (like repository, mocking, etc.) when we are fairly certain of the storage mechanism/framework (SQL Server/Entity Framework). All that code bloat with generic repositories makes sense when you need a generic layer with the ability to change it whenever you like; however, that is very likely YAGNI in our case. What we need is more integration testing, where we can test our code with concrete repository objects and test data in the database. In this scenario, just running integration tests seems to be more beneficial in our case. Any thoughts on whether I am missing anything here?
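    For the integration-style tests described here, one way to keep the real database clean without any repository or mocking layer is to run each test against the concrete EF context inside a TransactionScope that is never completed. A hedged sketch assuming a hypothetical MyEntities DbContext and Customer entity:

        using System.Linq;
        using System.Transactions;
        using NUnit.Framework;

        [TestFixture]
        public class CustomerPersistenceTests
        {
            [Test]
            public void Saving_a_customer_makes_it_queryable()
            {
                // Integration-style test against the concrete EF 4.1 context and a
                // real test database; the ambient transaction is never completed,
                // so everything is rolled back when the scope is disposed.
                using (new TransactionScope())
                using (var context = new MyEntities())          // hypothetical DbContext
                {
                    context.Customers.Add(new Customer { Name = "Acme" });
                    context.SaveChanges();

                    int saved = context.Customers.Count(c => c.Name == "Acme");
                    Assert.AreEqual(1, saved);
                }
            }
        }

    Tests like this exercise the schema, the mappings and the queries exactly as production will use them, which is the main thing the generic-repository-plus-mocks setup gives up.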

    Read the article

  • Do you use unit tests at work? What benefits do you get from them?

    - by Anonymous
    I had planned to study and apply unit testing to my code, but after talking with my colleagues, some of them suggested that it's not necessary and that it has very little benefit. They also claim that only a few companies actually do unit testing on production software. I am curious how people have applied unit testing at work and what benefits they are getting from it, e.g., better code quality, reduced development time in the long term, etc.

    Read the article

  • My boss is feuding with his boss and my workload is expanding. What should I do?

    - by steve
    These two have always had a somewhat shaky relationship, even when they were on the same level. The other guy was recently promoted to director and now my boss reports to him. On the surface, they appear to get along when they get together, but my boss despises the man and badmouths him every chance he gets (to peers, subordinates, etc.). He believes that the director is setting him up to fail. The director and upper management are holding my boss responsible for the not-so-great performance by the team as of late. He's been playing games to make my boss look bad. Due to layoffs, we don't have the manpower to deliver the results that we did before... but expectations have not been lowered... and my boss is taking the heat for it. Now he's on the warpath and starting to micromanage. He's giving everyone more work. He's forcing us mid-level guys to take responsibility for the level-one techs' performance. I'm spending less and less time coding... and more time babysitting vendors, techs, etc. I'm not so sure that's a bad thing, because I'm sort of burnt out on coding, but I don't really care for the idea of having to be responsible for others' poor performance... isn't that the manager's job? Anyway, do you guys have any suggestions for dealing with the situation?

    Read the article

  • I'm doing 90% maintenance and 10% development, is this normal?

    - by TiredProgrammer
    I have just recently started my career as a web developer for a medium-sized company. As soon as I started, I got the task of expanding an existing application (badly coded, developed by multiple programmers over the years, handling the same tasks in different ways, zero structure). So after I had successfully extended this application with the requested functionality, they gave me the task of fully maintaining the application. This was of course not a problem, or so I thought. But then I was told I wasn't allowed to improve the existing code and should only focus on bug fixes when a bug gets reported. From then on I have had 3 more projects just like the above that I now also have to maintain. And I got 4 projects where I was allowed to create the application from scratch, and I have to maintain those as well. At this moment I'm slowly beginning to go crazy from the daily mails from users (read: managers) for each application I have to maintain. They expect me to handle these mails directly while also working on 2 other new projects (and there are already 5 more projects lined up after those). The sad thing is I have yet to receive a bug report on anything that I have coded myself; for those projects I have only received the occasional "let's do things 180 degrees differently" change request. Anyway, is this normal? In my opinion I'm doing the work equivalent of a whole team of developers. Was I an idiot when I initially expected things to be different? I guess this post has turned into a big rant, but please tell me that this is not the same for every developer. P.S. My salary is almost equal to, if not lower than, that of a cashier at a supermarket.

    Read the article

  • Where do you put your unit test?

    - by soulmerge
    I have found several conventions for organizing unit tests in a project, and I'm not sure which approach would be suitable for our next PHP project. I am trying to find the best convention to encourage easy development and accessibility of the tests when reviewing the source code. I would be very interested in your experience/opinion regarding each: One folder for production code, another for unit tests: This separates unit tests from the logic files of the project. This separation of concerns is as much a nuisance as it is an advantage: someone looking into the source code of the project will - so I suppose - either browse the implementation or the unit tests (or more commonly: the implementation only). The advantage of unit tests being another viewpoint on your classes is lost - those two viewpoints are just too far apart, IMO. Annotated test methods: Any modern unit testing framework I know allows developers to create dedicated test methods, annotating them (@test) and embedding them in the project code. The big drawback I see here is that the project files get cluttered. Even if these methods are separated using a comment header (like UNIT TESTS below this line), it just bloats the class unnecessarily. Test files within the same folders as the implementation files: Our file naming convention dictates that PHP files containing classes (one class per file) should end with .class.php. I could imagine that putting the unit tests for a class file into another one ending in .test.php would make the tests much more visible to other developers without tainting the class. Although it bloats the project folders instead of the implementation files, this is my favorite so far, but I have my doubts: I would think others have come up with this already and discarded the option for some reason (e.g. I have not seen a Java project with the files Foo.java and FooTest.java within the same folder). Maybe it's because Java developers make heavier use of IDEs that allow them easier access to the tests, whereas in PHP no big editors have emerged (like Eclipse for Java) - many devs I know use vim/emacs or similar editors with little support for PHP development per se. What is your experience with any of these unit test placements? Do you have another convention I haven't listed here? Or am I just overrating unit test accessibility for reviewers?

    Read the article

  • TDD vs. Unit testing

    - by Walter
    My company is fairly new to unit testing our code. I've been reading about TDD and unit testing for some time and am convinced of their value. I've attempted to convince our team that TDD is worth the effort of learning and changing our mindsets on how we program but it is a struggle. Which brings me to my question(s). There are many in the TDD community who are very religious about writing the test and then the code (and I'm with them), but for a team that is struggling with TDD does a compromise still bring added benefits? I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code) and my assumption is that there is still value in writing those unit tests. What's the best way to bring a struggling team into TDD? And failing that is it still worth writing unit tests even if it is after the code is written? EDIT What I've taken away from this is that it is important for us to start unit testing, somewhere in the coding process. For those in the team who pickup the concept, start to move more towards TDD and testing first. Thanks for everyone's input. FOLLOW UP We recently started a new small project and a small portion of the team used TDD, the rest wrote unit tests after the code. After we wrapped up the coding portion of the project, those writing unit tests after the code were surprised to see the TDD coders already done and with more solid code. It was a good way to win over the skeptics. We still have a lot of growing pains ahead, but the battle of wills appears to be over. Thanks for everyone who offered advice!

    Read the article

  • Unit testing opaque structure based C API

    - by Nicolas Goy
    I have a library I wrote with an API based on opaque structures. Using opaque structures has a lot of benefits and I am very happy with it. Now that my API is stable in terms of specification, I'd like to write a complete battery of unit tests to ensure a solid base before releasing it. My concern is simple: how do you unit test an API based on opaque structures, where the main goal is to hide the internal logic? For example, let's take a very simple object, an array, with a very simple test: WSArray a = WSArrayCreate(); int foo = 5; WSArrayAppendValue(a, &foo); int *bar = WSArrayGetValueAtIndex(a, 0); if(&foo != bar) printf("Erroneous value returned\n"); else printf("Good value returned\n"); WSRelease(a); Of course, this tests some facts, like the array actually behaving as wanted with 1 value, but when I write unit tests, at least in C, I usually compare the memory footprint of my data structures with a known state. In my example, I don't know if some internal state of the array is broken. How would you handle that? I'd really like to avoid adding code to the implementation files only for unit testing; I really emphasize loose coupling of modules, and injecting unit tests into the implementation would seem rather invasive to me. My first thought was to include the implementation file in my unit test, linking my unit test statically to my library. For example: #include <WS/WS.h> #include <WS/Collection/Array.c> static void TestArray(void) { WSArray a = WSArrayCreate(); /* Structure members are available because we included Array.c */ printf("%d\n", a->count); } Is that a good idea? Of course, the unit tests won't benefit from encapsulation, but they are there to ensure it's actually working.

    Read the article

  • What is the appropriate "Allocation Unit Size" for an exFAT SD card with ReadyBoost?

    - by Revolter
    I've bought a 4GB SD card and I'm dedicating it to use by ReadyBoost (on Windows 7). I'm looking to get the best performance out of it, so I've formatted it with exFAT (as recommended by Microsoft), but I have some doubts about which Allocation Unit Size I should choose. Since I can't work out exactly how ReadyBoost and the SD card read/seek the data, can someone tell me what makes one Allocation Unit Size preferable to another for this scheme? Apparently, ReadyBoost allocates all the free space on the SD card as one huge file, so is a big Allocation Unit Size advisable for the fastest read times, or am I confusing this with ordinary HDDs?

    Read the article
