Search Results

Search found 5709 results on 229 pages for 'persistence unit'.

Page 25/229 | < Previous Page | 21 22 23 24 25 26 27 28 29 30 31 32  | Next Page >

  • How do I implement repository pattern and unit of work when dealing with multiple data stores?

    - by Jason
    I have a unique situation where I am building a DDD-based system that needs to use both Active Directory and a SQL database for persistence. Initially this wasn't a problem, because our design was set up with a unit of work that looked like this:

        public interface IUnitOfWork
        {
            void BeginTransaction();
            void Commit();
        }

    and repositories that looked like this:

        public interface IRepository<T>
        {
            T GetByID();
            void Save(T entity);
            void Delete(T entity);
        }

    In this setup, our load and save handled the mapping between both data stores, because we wrote it ourselves. The unit of work handled transactions and contained the LINQ to SQL data context that the repositories used for persistence. The Active Directory part was handled by a domain service implemented in infrastructure and consumed by the repositories in each Save() method. Save() was responsible for interacting with the data context to do all the database operations.

    Now we are trying to adapt this to Entity Framework and take advantage of POCOs. Ideally we would not need the Save() method on the repository, because the domain objects are tracked by the object context; we would just need a Save() method on the unit of work to have the object context save the changes, and a way to register new objects with the context. The new proposed design looks more like this:

        public interface IUnitOfWork
        {
            void BeginTransaction();
            void Save();
            void Commit();
        }

        public interface IRepository<T>
        {
            T GetByID();
            void Add(T entity);
            void Delete(T entity);
        }

    This solves the data access problem with Entity Framework, but it does not solve the problem of our Active Directory integration. Before, that logic lived in the repository's Save() method, but now it has no home: the unit of work knows nothing other than the Entity Framework data context. Where should this logic go? I would argue this design only works if you have a single data store behind Entity Framework. Any ideas on how best to approach this issue? Where should I put this logic?
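
    One possible direction, sketched below with hypothetical names (ICommitParticipant and CompositeUnitOfWork are inventions, not an established EF pattern): let the unit of work coordinate several "commit participants", so the Entity Framework context and an Active Directory writer both enlist in the same Commit(), and the AD logic regains a home without the repositories knowing about it.

        using System.Collections.Generic;

        public interface ICommitParticipant
        {
            // Flush pending changes to the underlying store (EF context, AD, ...).
            void Commit();
        }

        public class CompositeUnitOfWork : IUnitOfWork
        {
            private readonly List<ICommitParticipant> _participants =
                new List<ICommitParticipant>();

            public void Enlist(ICommitParticipant participant)
            {
                _participants.Add(participant);
            }

            public void BeginTransaction()
            {
                // A TransactionScope could be opened here to span both stores.
            }

            public void Save()
            {
                // The EF participant would call ObjectContext.SaveChanges() here.
            }

            public void Commit()
            {
                // Flush every store that takes part in this unit of work.
                foreach (var participant in _participants)
                    participant.Commit();
            }
        }

    Whether AD writes can honestly take part in the same transaction is a separate question; if they can't, Commit() at least gives the cross-store sequencing a single, explicit home.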

    Read the article

  • How to keep your unit test Arrange step simple and still guarantee DDD invariants?

    - by ian31
    DDD recommends that domain objects be in a valid state at all times. Aggregate roots are responsible for guaranteeing the invariants, and Factories for assembling objects with all the required parts so that they are initialized in a valid state. However, this seems to complicate the task of creating simple, isolated unit tests a lot.

    Let's assume we have a BookRepository that contains Books. A Book has: an Author, a Category, and a list of Bookstores you can find the book in. These are required attributes: a book has to have an author, a category, and at least one bookstore you can buy the book from. There's likely to be a BookFactory, since a Book is quite a complex object, and the Factory will initialize the Book with at least all the mentioned attributes.

    Now we want to unit test a method of the BookRepository that returns all the Books. To test that the method returns the books, we have to set up a test context (the Arrange step, in AAA terms) where some Books are already in the Repository. If the only tool at our disposal to create Book objects is the Factory, the unit test now also uses, and is dependent on, the Factory, and indirectly on Category, Author and Store, since we need those objects to build up a Book and then place it in the test context.

    Would you consider this a dependency in the same way that, in a Service unit test, we would be dependent on, say, a Repository that the Service would call? How would you solve the problem of having to re-create a whole cluster of objects in order to be able to test a simple thing? How would you break that dependency and get rid of all these attributes we don't need in our test? By using mocks or stubs? If you mock up things a Repository contains, what kind of mocks/stubs would you use, as opposed to when you mock up something the object under test talks to or consumes?
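
    A common way to shrink the Arrange step in this situation is a test data builder that delegates to the Factory in exactly one place. The sketch below assumes the domain types from the question; the builder, its defaults, and the BookFactory.Create signature are all hypothetical.

        using System.Collections.Generic;

        public class BookBuilder
        {
            // Invented defaults that satisfy the invariants; a test overrides
            // only the attribute it actually cares about.
            private Author _author = new Author("Default Author");
            private Category _category = new Category("Default");
            private Store _store = new Store("Default Store");

            public BookBuilder WrittenBy(Author author) { _author = author; return this; }
            public BookBuilder InCategory(Category category) { _category = category; return this; }
            public BookBuilder SoldAt(Store store) { _store = store; return this; }

            public Book Build()
            {
                // Delegating to the real Factory keeps the invariants enforced.
                return BookFactory.Create(_author, _category, new List<Store> { _store });
            }
        }

    A test then arranges with a single readable line, e.g. repository.Add(new BookBuilder().Build()). The dependency on the Factory still exists, but it is confined to one place, and individual tests no longer mention Author, Category or Store at all.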

    Read the article

  • ASP.NET MVC - How to Unit Test boundaries in the Repository pattern?

    - by JK
    Given a basic repository interface:

        public interface IPersonRepository
        {
            void AddPerson(Person person);
            List<Person> GetAllPeople();
        }

    With a basic implementation:

        public class PersonRepository : IPersonRepository
        {
            public void AddPerson(Person person)
            {
                ObjectContext.AddObject(person);
            }

            public List<Person> GetAllPeople()
            {
                return ObjectSet.AsQueryable().ToList();
            }
        }

    How can you unit test this in a meaningful way? Since it crosses the boundary and physically updates and reads from the database, that's not a unit test, it's an integration test. Or is it wrong to want to unit test this in the first place? Should I only have integration tests on the repository? I've been googling the subject, and blogs often say to make a stub that implements the IRepository:

        public class PersonRepositoryTestStub : IPersonRepository
        {
            private List<Person> people = new List<Person>();

            public void AddPerson(Person person)
            {
                people.Add(person);
            }

            public List<Person> GetAllPeople()
            {
                return people;
            }
        }

    But that doesn't unit test PersonRepository; it tests the implementation of PersonRepositoryTestStub (not very helpful).
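
    One way to keep a genuine unit test here, sketched under the assumption that you can put a hand-rolled seam between a reworked repository and Entity Framework (IPersonContext below is invented; the real ObjectContext is awkward to mock directly), is to verify the repository's interaction with the context rather than the database state:

        using System.Collections.Generic;
        using System.Linq;
        using Moq;
        using NUnit.Framework;

        public interface IPersonContext
        {
            void AddObject(Person person);
            IQueryable<Person> People { get; }
        }

        public class PersonRepository : IPersonRepository
        {
            private readonly IPersonContext _context;

            public PersonRepository(IPersonContext context) { _context = context; }

            public void AddPerson(Person person) { _context.AddObject(person); }

            public List<Person> GetAllPeople() { return _context.People.ToList(); }
        }

        [TestFixture]
        public class PersonRepositoryTests
        {
            [Test]
            public void AddPerson_ForwardsToContext()
            {
                var context = new Mock<IPersonContext>();
                var repository = new PersonRepository(context.Object);
                var person = new Person();

                repository.AddPerson(person);

                // Assert the interaction; the real database round trip stays
                // in a separate, slower integration test.
                context.Verify(c => c.AddObject(person), Times.Once());
            }
        }

    This doesn't remove the need for integration tests against a real database; it only separates "does my code call the context correctly" from "does the context talk to SQL Server correctly".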

    Read the article

  • Workflow Activity Extensions, Activity Packs and Unit Testing Framework

    - by JoshReuben
    http://wf.codeplex.com/ contains a plethora of infrastructure code and new activities for extending Workflow Foundation 4, also available as NuGet packages. These include:

    Activity Extensions
    Security Activity Pack
    ADO.NET Activity Pack
    Azure Activity Pack
    Activity Unit Testing Framework

    View my PowerPoint presentation on these and more here: http://www.slideshare.net/joshuareuben9/workflow-foundation-activity-packs-extensions-and-unit-testing

    Read the article

  • SQL Server Unit Testing with tSQLt

    When one considers the amount of time and effort that unit testing consumes for the database developer, it is surprising how few good SQL Server test frameworks are around. tSQLt, which is open source and free to use, is one of the frameworks that provide a simple way to populate a table with test data as part of the unit test, and to check the results against what should be expected. Sebastian Meine and Dennis Lloyd, who created tSQLt, explain.

    Read the article

  • How do you debug a unit test in Xcode 3?

    - by Dov
    I followed Apple's instructions to set up unit testing in my project, including the directions for making the tests dependent, so that they run with every build of my main project. This works: when my tests pass, the application runs; when they don't, I get build errors on the lines of the unit tests that failed.

    I would like, however, to be able to step through my application code when the tests are failing, but I can't get Xcode (3.2.5) configured properly. The project is a Mac project, not iOS. I tried the instructions here and here, but execution never stopped at the breakpoints I set, neither in the unit test code nor in my application code. After following the first set of instructions, the breakpoints I set turned yellow with blue outlines, and I don't know what that means either. What do I need to do to step through my tests?

    Read the article

  • What should be done first: Code reviews or Unit tests?

    - by goldenmean
    Hello. If a developer implements code for some module and wants to get it reviewed, what should the order be?

    Either: first unit test the module (after designing test cases for it), debug and fix the bugs, and then submit the modified code for peer code review. (Pros: the code under review is 'clean' to a good extent, which avoids some review comments and rework. Cons: the developer might spend a long time debugging/fixing a bug that peer code review could have pointed out or anticipated.)

    Or: first do the code review with peers, and then go on to unit testing.

    What are your thoughts/experience on this? I believe this question of how to order unit testing and code review is programming-language agnostic, but it would be interesting to know otherwise (if applicable), with specific examples. -AD

    Read the article

  • What's a good unit test framework for Common Lisp projects?

    - by Lorenzo V.
    I need to write a unit test suite for a project I am developing in my spare time. Being a CL newbie, I was overwhelmed by the number of choices for a CL implementation, and I spent quite some time choosing one. Now I am facing exactly the same thing with unit test frameworks. A quick glance at http://www.cliki.net/test%20framework shows 20 unit test frameworks! Choice is good, but for a novice like me this can be a bit confusing, and given the number of frameworks it would be painful to try them all. I would like to use a framework that:

    is reasonably well maintained;
    is easy to use, but with some degree of flexibility;
    offers some sort of integration with Emacs (or can easily be integrated with Emacs);
    integrates with git post-commit hooks;
    integrates with a continuous integration system (such as buildbot).

    What are your experiences in this field?

    Read the article

  • How can I start a TCP server in the background during a Perl unit test?

    - by John
    I am trying to write a unit test for a client-server application. To test the client, in my unit test I want to first start my TCP server (which is itself another Perl file). I tried to start the TCP server by forking:

        if (!fork()) {
            system("$^X server.pl") == 0
                or die "couldn't start server";
        }

    So when I call make test after perl Makefile.PL, this test starts and I can see the server starting, but after that the unit test just hangs. So I guess I need to start this server in the background. I tried appending & to force it to start in the background and then let the test continue, but I still couldn't succeed. What am I doing wrong? Thanks.

    Read the article

  • Working effectively with unit tests / Anyone tried the in-assembly approach?

    - by CodingCrapper
    I'm trying to re-introduce unit testing to my team, as our current coverage is very poor. Our system is quite large: 40+ projects/assemblies. We currently use a project named [SystemName].Test.csproj, where all the test code is dumped and organised into folders that mirror the namespaces. This approach is not very scalable and makes it difficult to find tests.

    I've been thinking about adding a Tests folder to each project. This would put the unit tests "in the developer's face" and make them easy to find. The downside is that the production release code would contain references to NUnit and NMock, as well as the test code and test data...

    Has anyone tried this approach? How is everyone else working with unit tests on large projects? Having a Tests project per "real" project/assembly would introduce too many new projects. Thanks in advance.
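
    One mitigation sometimes used with in-assembly tests, sketched here under the assumption that a UNIT_TESTS compilation symbol (an invented name) is defined only in non-release configurations, is to fence the test code off with conditional compilation; combined with a Condition attribute on the NUnit Reference element in the .csproj, release builds then carry neither the test code nor the NUnit dependency:

        #if UNIT_TESTS
        using NUnit.Framework;

        namespace SystemName.Orders
        {
            // Lives next to the production code it exercises, but compiles
            // only when UNIT_TESTS is defined (e.g. in Debug builds).
            // OrderCalculator and Order are hypothetical production types.
            [TestFixture]
            public class OrderCalculatorTests
            {
                [Test]
                public void Total_IsZero_ForEmptyOrder()
                {
                    var calculator = new OrderCalculator();
                    Assert.AreEqual(0m, calculator.Total(new Order()));
                }
            }
        }
        #endif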

    Read the article

  • How can I unit test an Android Activity that acts on Accelerometer?

    - by Corey Sunwold
    I am starting with an Activity based on this ShakeActivity, and I want to write some unit tests for it. I have written some small unit tests for Android activities before, but I'm not sure where to start here. I want to feed the accelerometer some different values and test how the activity responds to them. For now I'm keeping it simple: I just update a private int counter variable and a TextView when a "shake" event happens. So my question largely boils down to this: how can I send fake data to the accelerometer from a unit test?
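
    The usual shape of an answer, sketched below in C# (like the other examples on this page) rather than Android Java, with invented names throughout: put an interface between the shake logic and the real sensor, so a test can push arbitrary readings through the same seam the production code listens on.

        using System;

        public interface IAccelerometer
        {
            event Action<float, float, float> ReadingChanged;
        }

        // Test double: lets a unit test feed arbitrary (x, y, z) values.
        public class FakeAccelerometer : IAccelerometer
        {
            public event Action<float, float, float> ReadingChanged;

            public void Raise(float x, float y, float z)
            {
                if (ReadingChanged != null)
                    ReadingChanged(x, y, z);
            }
        }

        public class ShakeDetector
        {
            public int ShakeCount { get; private set; }

            public ShakeDetector(IAccelerometer accelerometer)
            {
                accelerometer.ReadingChanged += (x, y, z) =>
                {
                    // Crude magnitude threshold standing in for a real heuristic.
                    if (x * x + y * y + z * z > 100f)
                        ShakeCount++;
                };
            }
        }

    A test creates a FakeAccelerometer, wires a ShakeDetector to it, calls Raise with shake-sized values, and asserts on ShakeCount; the production Activity would wire the same detector to a thin adapter over the real SensorManager.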

    Read the article

  • Does Ruby/Rails require more unit testing than, say, a PHP app?

    - by Blankman
    I don't see the unit testing push in the PHP market that I see and read about in the Ruby/Rails arena. Could one just as easily NOT unit test in Ruby/Rails as in PHP, or is Ruby just so bendable and breakable that testing is "more" recommended in Ruby than in PHP? I mean, there are large code bases like vBulletin, and from what I can tell, they don't unit test. I hope you understand what I am asking here: not the pros/cons of testing, or whether one should test or not, but rather, does one language need to be tested more than another? Maybe it's easier to write buggy code in one, or to break things during refactoring, etc.

    Read the article

  • NullPointerException with static variables

    - by tomekK
    I just hit some very strange (to me) behaviour in Java. I have the following classes:

        public abstract class Unit {
            public static final Unit KM = KMUnit.INSTANCE;
            public static final Unit METERS = MeterUnit.INSTANCE;

            protected Unit() {
            }

            public abstract double getValueInUnit(double value, Unit unit);
            protected abstract double getValueInMeters(double value);
        }

    And:

        public class KMUnit extends Unit {
            public static final Unit INSTANCE = new KMUnit();

            private KMUnit() {
            }

            // abstract methods overridden here
        }

        public class MeterUnit extends Unit {
            public static final Unit INSTANCE = new MeterUnit();

            private MeterUnit() {
            }

            // abstract methods overridden here
        }

    And my test case:

        public class TestMetricUnits extends TestCase {
            @Test
            public void testConversion() {
                System.out.println("Unit.METERS: " + Unit.METERS);
                System.out.println("Unit.KM: " + Unit.KM);
                double meters = Unit.KM.getValueInUnit(102.11, Unit.METERS);
                assertEquals(0.10211, meters, 0.00001);
            }
        }

    1) KMUnit and MeterUnit are both singletons initialized statically, so during class loading. The constructors are private, so they can't be called anywhere else. 2) The Unit class contains static final references to KMUnit.INSTANCE and MeterUnit.INSTANCE.

    I would expect that: the KMUnit class is loaded and its instance created; the MeterUnit class is loaded and its instance created; the Unit class is loaded and both the KM and METERS variables are initialized; they are final, so they can't be changed.

    But when I run my test case in the console with Maven, my result is:

        T E S T S
        Running de.audi.echargingstations.tests.TestMetricUnits
        Unit.METERS: m
        Unit.KM: null
        Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.089 sec <<< FAILURE! - in de.audi.echargingstations.tests.TestMetricUnits
        testConversion(de.audi.echargingstations.tests.TestMetricUnits)  Time elapsed: 0.011 sec  <<< ERROR!
        java.lang.NullPointerException: null
            at de.audi.echargingstations.tests.TestMetricUnits.testConversion(TestMetricUnits.java:29)

        Results :

        Tests in error:
            TestMetricUnits.testConversion:29 NullPointer

    And the funny part is that when I run this test from Eclipse via the JUnit runner, everything is fine: I get no NullPointerException, and in the console I have:

        Unit.METERS: m
        Unit.KM: km

    So the question is: what can be the reason that the KM variable in Unit is null (while at the same time METERS is not null)?

    Read the article

  • Live-Ubuntu 12.04 ran fine, now stopped booting!

    - by user89743
    I've seen problems similar to this several times in the forum, but mine is a bit different, so the other posts I saw were no help to me. When I boot Ubuntu 12.04 64-bit from a live SD card (3 GB persistence) I suddenly get this error:

        (initramfs) mount: mounting /dev/loop0 on //filesystem.squashfs failed: Invalid argument
        Can not mount /dev/loop/0 (/cdrom/casper/filesystem.squashfs) on //filesystem.squashfs

    (It says I can type "help" for commands, but I don't know how to go on from there; I'm totally new to Linux.)

    The reason I say my case is different is that my Ubuntu worked fine for over a week, even pretty fast, and then this problem happened. Before that I used to run my live Ubuntu from USB sticks, but that was slower (especially booting, which took 15 minutes from a USB stick!). Also, I kept getting the same problem above after a while when booting, and had to re-create a live USB Linux several times. Installing on the hard drive is not an option, because my hard drive has physical damage and getting a replacement will take a while; therefore I can only use live-USB or live-SD-card Ubuntu.

    As I said, I used Ubuntu without problems for more than a week, and before that for several weeks on USB sticks, but the above problem occurred sooner or later. This time I paid attention to when it happened: I was rebooting my computer (HP 620 laptop, 4 GB RAM, 64-bit system) from the SD card, and when booting I pressed F6 and selected the first option, "no acpi" or something like that... I had used it before and noticed that it slowed down the time it took Linux to boot. This time it caused this error. Now I get this error even when I boot normally with the defaults.

    Now I'm accessing Ubuntu from my USB stick without a persistence file. When I check my SD card, all the files mentioned in the error message are there, and filesystem.squashfs is 691.2 MB, so nothing seems to have been deleted by accident. (I have already made many changes and downloaded programs to my SD card's persistent Ubuntu and would hate to lose them, since downloading is expensive for me, and since the problem seems to recur...)

    Can anyone help me, preferably without my having to create another startup disk on my SD card? I'm totally new to this. Sorry for the long post, I just didn't know what info is relevant and what isn't! Kon

    Read the article

  • Install to USB without touching the Windows MBR

    - by Robert
    I would like as full an install as possible (most notably no casper, just straight on the drive) on a USB stick, but I'm not allowed to touch the current configuration of the computer I'll be running it on at all (except, of course, changing the boot options to boot from USB). Most guides I read are about getting a live install with persistence on USB, or (I think) would still replace the MBR with GRUB. Is there any way to combine the two and not touch the underlying system?

    Read the article

  • Converting time period strings to value/unit pair

    - by randomtoor
    I need to parse the contents of a string that represents a time period. The format of the string is value/unit, e.g. 1s, 60min, 24h. I would like to separate the actual value (an int) and the unit (a str) into separate variables. At the moment I do it like this:

        def validate_time(time):
            binsize = time.strip()
            unit = re.sub('[0-9]', '', binsize)
            if unit not in ['s', 'm', 'min', 'h', 'l']:
                print "Error: unit {0} is not valid".format(unit)
                sys.exit(2)
            tmp = re.sub('[^0-9]', '', binsize)
            try:
                value = int(tmp)
            except ValueError:
                print "Error: {0} is not valid".format(time)
                sys.exit(2)
            return value, unit

    However, it is not ideal, as strings like 1m0 are also (wrongly) validated (value=10, unit=m). What is the best way to validate/parse this input?
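
    For comparison, a sketch in C# (the language of the other examples on this page, not the question's Python): a single anchored regular expression rejects interleaved forms like 1m0 outright, because the digits and the unit must each appear exactly once and in order.

        using System;
        using System.Text.RegularExpressions;

        static class TimePeriod
        {
            // Anchored: all digits first, then exactly one known unit, then end.
            private static readonly Regex Pattern = new Regex(@"^(\d+)(s|min|m|h|l)$");

            public static bool TryParse(string text, out int value, out string unit)
            {
                value = 0;
                unit = null;
                Match match = Pattern.Match(text.Trim());
                if (!match.Success)
                    return false; // "1m0" fails here: digits may not follow the unit
                value = int.Parse(match.Groups[1].Value);
                unit = match.Groups[2].Value;
                return true;
            }
        }

    The same anchored pattern translated back to Python, re.match(r'^(\d+)(s|min|m|h|l)$', text), would fix the 1m0 case in the original function as well.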

    Read the article

  • TDD - Outside In vs Inside Out

    - by Songo
    What is the difference between building an application Outside In and building it Inside Out using TDD?

    These are the books I have read about TDD and unit testing:

    Test Driven Development: By Example
    Test-Driven Development: A Practical Guide
    Real-World Solutions for Developing High-Quality PHP Frameworks and Applications
    Test-Driven Development in Microsoft .NET
    xUnit Test Patterns: Refactoring Test Code
    The Art of Unit Testing: With Examples in .NET
    Growing Object-Oriented Software, Guided by Tests (this one was really hard to understand, since Java isn't my primary language :))

    Almost all of them explained TDD basics and unit testing in general, but with little mention of the different ways the application can be constructed. Another thing I noticed is that most of these books (if not all) ignore the design phase when writing the application. They focus more on writing the test cases quickly and letting the design emerge by itself.

    However, I came across a paragraph in xUnit Test Patterns that discussed the ways people approach TDD. There are two schools out there: Outside In and Inside Out. Sadly, the book doesn't elaborate more on this point. I wish to know what the main difference between these two approaches is. When should I use each one of them? To a TDD beginner, which one is easier to grasp? What are the drawbacks of each method? Is there any material out there that discusses this topic specifically?
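
    To make the vocabulary concrete before any answers (a sketch only; Checkout and IPriceCatalog are invented, not quotes from any of the books above): an outside-in practitioner writes the first test against an outer object and mocks its collaborators into existence, discovering interfaces as the test demands them, while an inside-out practitioner test-drives the innermost domain pieces first and composes upward, so an object like Checkout is only written once its parts already exist.

        using Moq;
        using NUnit.Framework;

        public interface IPriceCatalog
        {
            decimal PriceOf(string item);
        }

        public class Checkout
        {
            private readonly IPriceCatalog _catalog;

            public decimal Total { get; private set; }

            public Checkout(IPriceCatalog catalog) { _catalog = catalog; }

            public void Scan(string item) { Total += _catalog.PriceOf(item); }
        }

        [TestFixture]
        public class CheckoutTests
        {
            // Outside-in style: IPriceCatalog did not exist before this test;
            // the mock is what forced the interface into being.
            [Test]
            public void Scan_AddsCatalogPriceToTotal()
            {
                var catalog = new Mock<IPriceCatalog>();
                catalog.Setup(c => c.PriceOf("book")).Returns(10m);

                var checkout = new Checkout(catalog.Object);
                checkout.Scan("book");

                Assert.AreEqual(10m, checkout.Total);
            }
        }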

    Read the article

  • Code Behaviour via Unit Tests

    - by Dewald Galjaard
    Some four months ago my car started acting up. Symptoms included a sputtering as my car's computer switched between gears intermittently. Imagine building up speed, then when you reach 80 km/h the car magically and mysteriously decides to switch back to third or even second gear. Clearly it was confused! I managed to track down a technician, an expert in his field, to help me out. As he fitted his handheld computer to some hidden port under the dash, he started to explain: "These cars are quite intelligent, you know. When they sense something is wrong they run a restrictive program, which probably accounts for how you managed to drive here in the first place..." I was surprised, and thought this was certainly going to be an interesting test drive.

    The car ran smoothly down the first couple of stretches as the technician ran through routine checks. Then he said: "OK, all looking good. We need to start testing aspects of the gearbox. Inside the gearbox there are a couple of sensors. One of them is a speed sensor which talks to the computer, which in turn decides which gear to switch to. The restrictive program avoids these sensors altogether and allows the computer to obtain its input from other [non-affected] sources." Then, as soon as he forced the speed sensor to come back online, the symptoms and ill behaviour re-emerged...

    What an incredible analogy for getting into a discussion on unit testing software! Besides, I should probably put my ill fortune to some good use, right? This example provides a lot of insight into how and why we should conduct unit tests when writing code. More importantly, it captures what is easily, and unfortunately often, the most overlooked goal of writing unit tests, by those new to the art and those who oppose it alike: the goal of writing unit tests is to test the behaviour of our code under predefined conditions.

    Although it is very possible to test the intrinsic workings of each and every component in your code, writing several tests for each method will in practice soon prove to be an exhausting and ultimately fruitless exercise, given the certain and ever-changing nature of business requirements. Consequently, while conducting proper unit tests, it is quite normal to call any single method several times as you examine and contemplate different scenarios.

    Let's write some code to demonstrate what I mean. In my example I make use of the Moq framework and NUnit to create my tests, though you can use whatever you're comfortable with. First we'll create an ISpeedSensor interface, to represent the speed sensor located in the gearbox. Then we'll create a Gearbox class, which we'll pass to a constructor when we instantiate an object of type Computer.
    All three are described below.

    ISpeedSensor.cs

        namespace AutomaticVehicle
        {
            public interface ISpeedSensor
            {
                int ReportCurrentSpeed();
            }
        }

    Gearbox.cs

        namespace AutomaticVehicle
        {
            public class Gearbox
            {
                private ISpeedSensor _speedSensor;

                public Gearbox(ISpeedSensor gearboxSpeedSensor)
                {
                    _speedSensor = gearboxSpeedSensor;
                }

                /// <summary>
                /// This method obtains its reading from the speed sensor.
                /// </summary>
                public int ReportCurrentSpeed()
                {
                    return _speedSensor.ReportCurrentSpeed();
                }
            }
        }

    Computer.cs

        namespace AutomaticVehicle
        {
            public class Computer
            {
                private Gearbox _gearbox;

                public Computer(Gearbox gearbox)
                {
                    _gearbox = gearbox;
                }

                public int GetCurrentSpeed()
                {
                    return _gearbox.ReportCurrentSpeed();
                }
            }
        }

    Since this post is about unit testing, that is exactly what we'll create next. Create a second project in your solution. I called mine AutomaticVehicleTests, and I immediately referenced the respective NUnit, Moq and AutomaticVehicle DLLs. We're going to write a test to examine what happens inside the Computer class.

    ComputerTests.cs

        namespace AutomaticVehicleTests
        {
            [TestFixture]
            public class ComputerTests
            {
                [Test]
                public void Computer_Gearbox_SpeedSensor_DoesThrow()
                {
                    // Mock ISpeedSensor in the gearbox.
                    Mock<ISpeedSensor> speedSensor = new Mock<ISpeedSensor>();
                    speedSensor.Setup(n => n.ReportCurrentSpeed()).Throws<Exception>();
                    Gearbox gearbox = new Gearbox(speedSensor.Object);

                    // Create a Computer instance to test its behaviour towards
                    // an exception in the gearbox.
                    Computer carComputer = new Computer(gearbox);

                    // For simplicity, let's assume for now the car only travels at 60 km/h.
                    Assert.AreEqual(60, carComputer.GetCurrentSpeed());
                }
            }
        }

    What is happening in this test? We have created a mocked object using the ISpeedSensor interface, which we've passed to our Gearbox object. Notice that I created the mocked object using the interface, not the implementation. I'll talk more about this in future posts, but in short I do this to accentuate the fact that I'm not really concerned with how SpeedSensor works internally at this particular point in time.

    Next, I've gone ahead and created a scenario where I've declared the speed sensor in Gearbox to be faulty, by forcing it to throw an exception should we ask Gearbox to report on its current speed. Sneaky, sneaky. This test is a simulation of how things may behave in the real world. Inevitably things break, whether caused by mechanical failure, some logical error on your part, or a fellow developer who didn't consult the documentation (or the lack thereof), and whether you're calling a speed sensor, making a call to a database, calling a web service or just trying to write a file to disk. It's a scenario I've created, and this test is about how the code within the Computer instance will behave towards any such error as I've depicted.

    Now, if you've followed my final assert method closely, you would have noticed I did something quite unexpected. I might be getting ahead of myself now, but I'm testing to see if the value returned is equal to what I expect it to be under perfect conditions; I'm not testing to see if an error has been thrown!
    Why is that? Well, in short, this is TDD. Test-Driven Development is about first writing your test to define the result we want, and then going back to change the implementation within your class to obtain the desired output (I need to make sure I can drive back to the repair shop, remember?). So let's go ahead and run our test as is. It fails miserably... Good! Let's go back to our Computer class and make a small change to the GetCurrentSpeed method.

    Computer.cs

        public int GetCurrentSpeed()
        {
            try
            {
                return _gearbox.ReportCurrentSpeed();
            }
            catch
            {
                // Fall back to the restrictive program, which is assumed
                // to return a safe speed of its own.
                return RunRestrictiveProgram();
            }
        }

    This is a simple solution, I know, but it does provide a way to allow for different behaviour. You're more than welcome to provide an implementation for RunRestrictiveProgram should you feel the need to; it's not within the scope of this post, or related to the point I'm trying to make. What is important is to notice how the focus of our approach has shifted from how things can break to how things behave when broken.

    Happy coding!

    Read the article

  • How can I reduce the amount of time it takes to fully regression test an application ready for release?

    - by DrLazer
    An app I work on is being developed with a modified version of Scrum. If you are not familiar with Scrum, it's just an alternative to the more traditional waterfall model, where a series of features are worked on for a set amount of time known as a sprint. The app is written in C# and makes use of WPF. We use Visual C# 2010 Express edition as an IDE.

    If we work on a sprint and add in a few new features, but do not plan to release until a further sprint is complete, then regression testing is not an issue as such. We just test the new features and give the app a good once-over. However, if a release is planned that our customers can download, a full regression test is factored in. In the past this wasn't a big deal: it took three or four days, and the devs simply fixed up any bugs found in the regression phase. But now, as the app gets larger and larger and incorporates more and more features, the regression phase is stretching out over weeks.

    I am interested in any methods that people know of or use that can decrease this time. At the moment the only ideas I have are to either start writing unit tests, which I have never fully tried out in a commercial environment, or to research the possibility of any UI automation APIs or tools that would allow me to write a program to perform a series of batch tests. I know literally nothing about the possibilities of UI automation, so any information would be valuable. I don't know that much about unit testing either: how complicated can the tests be? Is it possible to get unit tests to use the UI? Are there any other methods I should consider?

    Thanks for reading, and for any advice in advance.

    Edit: Thanks for the information. Does anybody know of any alternatives to what has been mentioned so far (NUnit, RhinoMocks and CodedUI)?
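
    On the "is it possible to get unit tests to use the UI" question: with WPF, one common compromise (a sketch with invented names, not a substitute for UI-automation tools such as CodedUI) is to keep screen logic in a view model and unit test that, leaving only a thin XAML layer for the slower UI passes:

        using System.ComponentModel;
        using NUnit.Framework;

        // Hypothetical view model: the unit-testable part of a WPF screen.
        public class SearchViewModel : INotifyPropertyChanged
        {
            private string _query = "";

            public event PropertyChangedEventHandler PropertyChanged;

            public bool CanSearch { get; private set; }

            public string Query
            {
                get { return _query; }
                set
                {
                    _query = value;
                    CanSearch = !string.IsNullOrEmpty(value);
                    if (PropertyChanged != null)
                        PropertyChanged(this, new PropertyChangedEventArgs("Query"));
                }
            }
        }

        [TestFixture]
        public class SearchViewModelTests
        {
            [Test]
            public void EmptyQuery_DisablesSearch()
            {
                var viewModel = new SearchViewModel { Query = "" };
                Assert.IsFalse(viewModel.CanSearch);
            }
        }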

    Read the article

  • Should we test all our methods?

    - by Zenzen
    So today I had a talk with my teammate about unit testing. The whole thing started when he asked me, "Hey, where are the tests for that class? I see only one." The whole class was a manager (or a service, if you prefer to call it that), and almost all of its methods simply delegated to a DAO, so they looked similar to:

        SomeClass getSomething(parameters) {
            return myDao.findSomethingBySomething(parameters);
        }

    A kind of boilerplate with no logic (or at least I do not consider such simple delegation to be logic), but a useful boilerplate in most cases (layer separation, etc.). And we had a rather lengthy discussion about whether or not I should unit test it (I think it is worth mentioning that I did fully unit test the DAO). His main arguments were that it was not TDD (obviously), that someone might want to see the test to check what the method does (I do not know how it could be more obvious), or that in the future someone might want to change the implementation and add new (or rather "any") logic to it (in which case, I guess, someone should simply test that logic).

    This made me think, though. Should we strive for the highest test coverage percentage? Or is it simply art for art's sake? I simply do not see any reason behind testing things like: getters and setters (unless they actually have some logic in them), or "boilerplate" code. Obviously a test for such a method (with mocks) would take me less than a minute to write, but I guess that is still time wasted, and a millisecond longer for every CI run. Are there any rational, non-"flammable" reasons why one should test every single line of code (or as many as one can)?

    Read the article
