Search Results

Search found 10170 results on 407 pages for 'regression testing'.

Page 49/407 | < Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >

  • Test a simple multi-player (up to four players) Android game on a single developer machine

    - by Kush
    I'm working on a multi-player Android game (it's so simple that it doesn't use any game engine). The game is based on Java sockets. Four devices connect to the game server, and a new thread manages their session. The game server will serve many such sessions (having 4 players each). What I'm worried about is the testing of this game. I know it is possible to run multiple Android emulators, but my development laptop is very limited in capabilities (3 GB RAM, 2 GHz Intel Core 2 Duo and on-board graphics). I'm already using Ubuntu to develop the game so that I have more user memory available than I'd have with Windows. Hence, the laptop will burn to death running 4 emulator instances. I don't have access to any Android device, nor do I have another machine with a higher configuration. And I still have to develop and test this game. P.S.: I'm a CS student and currently don't work anywhere, and this game is a college project, so if there are any paid solutions, I cannot afford them. What can I do to test the app seamlessly? The ability to test even only 4 clients (i.e. only 1 session) would suffice; it's all right if I can't simulate a real environment with some 10-20 active game sessions (having 4 players each).
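    Since the game talks plain Java sockets, one way to exercise a single session on modest hardware is to skip the emulators entirely and drive the server with lightweight socket clients. Below is a minimal sketch under invented assumptions: the server listens on localhost:5000 and speaks a line-based text protocol with a hypothetical JOIN command; adapt it to the real protocol.

        // Smoke test for one 4-player session using plain sockets (no emulators).
        // Port 5000 and the JOIN command are placeholders, not the real protocol.
        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;
        import java.net.Socket;
        import java.util.ArrayList;
        import java.util.List;

        public class SessionSmokeTest {
            public static void main(String[] args) throws Exception {
                List<Socket> players = new ArrayList<>();
                try {
                    // Connect four "players" so the server fills exactly one session.
                    for (int i = 0; i < 4; i++) {
                        Socket s = new Socket("localhost", 5000);
                        players.add(s);
                        new PrintWriter(s.getOutputStream(), true).println("JOIN player" + i);
                    }
                    // Read the server's reply to the first player as a basic sanity check.
                    BufferedReader in = new BufferedReader(
                            new InputStreamReader(players.get(0).getInputStream()));
                    System.out.println("Server said: " + in.readLine());
                } finally {
                    for (Socket s : players) {
                        s.close();
                    }
                }
            }
        }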

    Read the article

  • Unit Tests as a learning tool - a good idea?

    - by Ekkehard.Horner
    I'm interested in ways and means of learning (a) programming language(s) efficiently. I believe that using Unit Test concepts and infrastructure early in that process is a good thing, even better than starting with "Hello world". Why: To write a decent program even for a toy/restricted problem in a new language, you'll have to master many heterogeneous concepts (control flow & variables & IO ...), and you are tempted to gloss over details just to get your program 'to work'. Putting (your understanding of) the facts about the new language into assertions with good descriptions (= success messages) enforces thinking things through, clarity, and precision. Grouping topics and adding assertions to such groups is much easier than incorporating features from chapter 2 of your "Learning X" book into your chapter 1 program. Why not: 'Real' Unit Tests are meant to output "1234 tests ok; 1 failure: saveWorld() chokes on negative input"; 'didactic' Unit Tests should output relevant facts about the new language, like perl6 10-string.t:
        # ### p5chop ...
        ok 13 - p5chop( "cbä" ) returns "ä"
        ok 14 - after that, victim is changed to "cb"
        # ### (p6) chop ...
        ok 27 - (p6) chop( "cbä" ) returns chopped copy: "cb"
        ok 18 - after that, victim is unchanged: "cbä"
        # ### chomp ...
    So (mis?)using Unit Tests may be counterproductive: practicing actions while learning that you wouldn't use professionally. How: Writing 'didactic' Unit Tests in languages with lightweight testing systems (Perl 5/6) is easy; (mis?)using more elaborate systems (JUnit, CppUnit) may not be worth the effort or may not be suitable for a person just starting with a new language. So: Is using Unit Tests as a learning tool a bad idea? Can the Unit Test tool(s) of your favourite language(s) be used didactically? Should implementation details (eventually) be discussed here or over at stackoverflow.com?
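    For comparison, a minimal sketch of what such a 'didactic' test could look like in JUnit; the String facts below are invented examples, and the assertion messages carry the facts being learned rather than verifying production code.

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        // "Didactic" tests: the messages state facts about the language being
        // learned; there is no production code under test.
        public class StringFactsTest {
            @Test
            public void substringFacts() {
                String victim = "abcd";
                assertEquals("substring(1, 3) returns the characters at indices 1 and 2",
                        "bc", victim.substring(1, 3));
                assertEquals("after that, the victim is unchanged (Strings are immutable)",
                        "abcd", victim);
            }
        }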

    Read the article

  • How to analyze a scenario where a bug didn't get caught and adjust development workflow to prevent similar errors

    - by durron597
    I had a bug that was really difficult to track down, because all the unit tests were green, but the production application didn't work properly. Here's what happened: I had a filter class that set my application to ignore data that was not in some specified time windows. The unit test, which seemed thorough to me, turned green. Additionally, my integration tests also produced results as expected. Production, however, did not work. As a result of the first two points, this problem was very difficult to find. It turned out the problem was that my test dates were using my time zone (America/Chicago) but the production data was providing dates in UTC, which I did not realize, and the logic for the filter wasn't correct for UTC dates. (I was using Joda-Time DateTime objects.) Where did my workflow break down? Did I fail to produce a spec that specified that the logic needed to handle dates in any time zone? Did I fail to thoroughly consider all cases at the unit-test level? Did I fail to ensure the integration test was sufficiently similar to production? Other? What changes can I make to my workflow to better prevent this sort of mistake in the future? How can I more effectively debug a problem when there is an issue in production but not in testing?
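    As an illustration of the gap described above (not the original code), one workflow change is to build test fixtures in the time zone production actually supplies. A rough sketch, with a hypothetical TimeWindowFilter standing in for the real filter class:

        import static org.junit.Assert.assertTrue;
        import org.joda.time.DateTime;
        import org.joda.time.DateTimeZone;
        import org.junit.Test;

        public class TimeWindowFilterTest {
            @Test
            public void acceptsUtcTimestampInsideWindow() {
                // Hypothetical filter accepting data between 09:00 and 17:00 UTC.
                TimeWindowFilter filter = new TimeWindowFilter(9, 17, DateTimeZone.UTC);
                // Construct the test date in UTC explicitly, matching production input.
                DateTime productionLike = new DateTime(2014, 6, 1, 12, 30, DateTimeZone.UTC);
                assertTrue(filter.accepts(productionLike));
            }
        }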

    Read the article

  • When creating a library published on CodePlex, how "bad" would it be for the unit-test projects to rely on commercial products?

    - by Lasse V. Karlsen
    I have started a project on CodePlex for a WebDAV server implementation for .NET, so that I can host a WebDAV server in my own programs. This is both a learning/research project (WebDAV + the server portion) and a project I think I can have a lot of fun with, both in terms of making it and using it. However, I see a need to mock types here in order to unit-test properly. For instance, I will be relying on HttpListener for the web server portion of the WebDAV server, and since this type has no interface and is sealed, I cannot easily make mocks or stubs out of it. Unless I use something like TypeMock. So if I used TypeMock in the unit-test projects on this library, how bad would this be for potential users? The projects are made in C# 3.5 for .NET 3.5 and 4.0, and the project files were created with Visual Studio 2010 Professional. The actual class libraries you would end up referencing in your software would of course not be encumbered with anything remotely like this, only the unit-test libraries. What are your thoughts on this? As an example, I have in my old code-base, which is private, the ability to just initiate a WebDAV server with just this: var server = new WebDAVServer(); This constructs, and owns, an HttpListener instance internally, and I would like to verify through unit tests that if I dispose of this server object, the internal listener is disposed of. If, on the other hand, I use the overload where I hand it a listener object, this object should not be disposed of. Short of exposing the internal listener object to the outside world, something I'm a bit loath to do, how can I ensure in a good way that the object was disposed of? With TypeMock I can mock away parts of this object even though it isn't accessed through interfaces. The alternative would be for me to wrap everything in wrapper classes, where I have complete control.

    Read the article

  • Is there value in having new developers (graduates) start as testers / bug-fixers?

    - by Nico Huysamen
    Hi Programmers Community. What are your thoughts on the following: is there value in having new developers (graduates) start as testers / bug-fixers? There are two schools of thought here that I have come across. Having new developers (graduates) start as testers / bug-fixers / doing SLA (Service Level Agreement) work gets them familiar with the code base. It also allows them the opportunity to learn how to read [other people's] code. Furthermore, by fixing bugs, they will learn certain bad and good practices, which could hopefully help them in the future. The other way of thinking, though, is that if you immediately start new developers on something like testing / bug-fixing / SLA work, their appetite for the development world might go away, and/or they might leave the company, and you potentially lose out on a great future resource. Is there a balance that should be kept between these two? Currently where I work there is no clear-cut definition of what new starters do. Some go directly on to client work, while some fall into the SLA world. Should companies have such a policy? Or should it be handled on a case-by-case or opportunity basis? Hope to hear from some of you who have experience in this field. Thanks!

    Read the article

  • How to test the tests?

    - by Ryszard Szopa
    We test our code to make it more correct (actually, less likely to be incorrect). However, the tests are also code -- they can also contain errors. And if your tests are buggy, they hardly make your code better. I can think of three possible types of errors in tests: (1) logical errors, when the programmer misunderstood the task at hand, and the tests do what he thought they should do, which is wrong; (2) errors in the underlying testing framework (e.g. a leaky mocking abstraction); (3) bugs in the tests: the test does something slightly different from what the programmer thinks it does. Type (1) errors seem to be impossible to prevent (unless the programmer just... gets smarter). However, (2) and (3) may be tractable. How do you deal with these types of errors? Do you have any special strategies to avoid them? For example, do you write some special "empty" tests that only check the test author's presuppositions? Also, how do you approach debugging a broken test case?
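    On the "empty tests that only check the test author's presuppositions" idea, here is a rough sketch of what such a guard test might look like (the fixture and names are invented): it asserts nothing about production code, only that the shared test data really has the property the other tests rely on.

        import static org.junit.Assert.assertTrue;
        import java.util.Arrays;
        import java.util.List;
        import org.junit.Test;

        public class FixturePresuppositionTest {
            // Hypothetical fixture shared by the "real" tests.
            private final List<Integer> sampleInputs = Arrays.asList(-3, 0, 7);

            @Test
            public void fixtureContainsANegativeValueAsOtherTestsAssume() {
                // If this fails, the other tests never exercised the negative case.
                assertTrue("other tests assume at least one negative input",
                        sampleInputs.stream().anyMatch(n -> n < 0));
            }
        }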

    Read the article

  • Mock the window.setTimeout in a Jasmine test to avoid waiting

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/21/mock-the-window.settimeout-in-a-jasmine-test-to-avoid-waiting.aspx
    Jasmine has a clock mocking feature, but I was unable to make it work in a function that I'm calling and want to test. The example only shows using the clock for a setTimeout in the spec tests, and I couldn't find a good example. Here is my current and slightly limited approach. If we have a method we want to test:
        var test = function () {
            var self = this;
            self.timeoutWasCalled = false;
            self.testWithTimeout = function () {
                window.setTimeout(function () {
                    self.timeoutWasCalled = true;
                }, 6000);
            };
        };
    Here's my testing code:
        var realWindowSetTimeout = window.setTimeout;
        describe('test a method that uses setTimeout', function () {
            var testObject;
            beforeEach(function () {
                // force setTimeout to be called right away, no matter what time they specify
                jasmine.getGlobal().setTimeout = function (funcToCall, millis) {
                    funcToCall();
                };
                testObject = new test();
            });
            afterEach(function () {
                jasmine.getGlobal().setTimeout = realWindowSetTimeout;
            });
            it('should call the method right away', function () {
                testObject.testWithTimeout();
                expect(testObject.timeoutWasCalled).toBeTruthy();
            });
        });
    I got a good pointer from Andreas in this StackOverflow question. This would also work for window.setInterval. Other possible approaches: create a wrapper module around setTimeout and setInterval that can be mocked (this could be mocked with RequireJS or passed into the constructor), or pass the window.setTimeout function into the method (this could get messy).

    Read the article

  • Implementing an ILogger interface to log data

    - by Jon
    I have a need to write data to a file in one of my classes. Obviously I will pass an interface into my class to decouple it. I was thinking this interface will be used for testing and also in other projects. This is my interface:
        // This could be used by a filesystem, a webservice
        public interface ILogger
        {
            List<string> PreviousLogRecords { get; set; }
            void Log(string Data);
        }

        public interface IFileLogger : ILogger
        {
            string FilePath;
            bool ValidFileName;
        }

        public class MyClassUnderTest
        {
            public MyClassUnderTest(IFileLogger logger) { .... }
        }

        [Test]
        public void TestLogger()
        {
            var mock = new Mock<IFileLogger>();
            mock.Setup(x => x.Log(Is.Any<string>).AddsDataToList()); // Is this possible??
            var myClass = new MyClassUnderTest(mock.Object);
            myClass.DoSomethingThatWillSplitThisAndLog3Times("1,2,3");
            Assert.AreEqual(3, mock.PreviousLogRecords.Count);
        }
    This won't work, I believe, as nothing is storing the items. So is this possible using Moq? And also, what do you think of the design of the interface?

    Read the article

  • How to apply verification and validation to the following example

    - by user970696
    I have been following verification and validation questions here with my colleagues, yet we are unable to see the slight differences, probably because of a language barrier in technical English. An example: Requirement specification: The user wants to control the lights in 4 rooms by remote command sent from the UI, for each room separately. Functional specification: The UI will contain 4 checkboxes labelled according to the rooms they control. When a checkbox is checked, the signal is sent to the corresponding light, and a green dot appears next to the checkbox. When a checkbox is unchecked, the signal (turn off) is sent to the corresponding light, and a red dot appears next to the checkbox. Let me start with what I learned here: Verification, according to many great answers here, ensures that the product reflects the specified requirements; as the functional spec is written by the producer based on the requirements from the customer, it will be verified for completeness and correctness. Then the design document will be checked against the functional spec (does it design 4 checkboxes?), and the source code against the design (is there code for 4 checkboxes, functions to send the signals, etc.; is it traceable to the requirements?). Okay, the product is built and we need to test it, i.e. validate it. Here comes our understanding trouble: validation should ensure the product meets the requirements for its specific intended use, which is basically the business requirement (does it work? can I control the lights from the UI?), but testers will definitely work with the functional spec, making sure the checkboxes are there, working, labelled, etc. They are basically checking whether the requirements in the functional spec were met in the final product; isn't that verification? (It should not be; let's stick to ISO 12207, according to which only validation is the actual testing.)

    Read the article

  • Is this method of writing Unit Tests correct?

    - by aspdotnetuser
    I have created a small C# project to help me learn how to write good unit tests. I know that one important rule of unit testing is to test the smallest 'unit' of code possible, so that if it fails you know exactly what part of the code needs to be fixed. I need help with the following before I continue to implement more unit tests for the project: If I have a Car class, for example, that creates a new Car object which has various attributes that are calculated when its constructor method is called, would the two following tests be considered overkill? Should there be one test that tests all calculated attributes of the Car object instead?
        [Test]
        public void CarEngineCalculatedValue()
        {
            BusinessObjects.Car car = new BusinessObjects.Car();
            Assert.GreaterOrEqual(car.Engine, 1);
        }

        [Test]
        public void CarNameCalculatedValue()
        {
            BusinessObjects.Car car = new BusinessObjects.Car();
            Assert.IsNotNull(car.Name);
        }
    Should I have the above two test methods to test these things, or should I have one test method that asserts the Car object has first been created and then tests these things in the same test method?

    Read the article

  • When writing tests for a WordPress plugin, should I run them inside WordPress or in a normal browser?

    - by Nicola Peluchetti
    I have started using BDD for a WordPress plugin I'm working on, and I'm rewriting the JS codebase to do tests. I've encountered a few problems, but I'm going steady now. I was wondering if I had the right approach, because I'm writing tests that should pass in a normal browser environment and not inside WordPress. I chose to do this because I want my plugin to be totally independent of the WordPress environment; I'm using RequireJS in a way that I don't expose any globals, and I'm loading my version of jQuery that doesn't override the one that ships with WordPress. This way my plugin would work the same on every WordPress version, and my code would not break if they change the jQuery version or someone uses my plugin on an old WordPress version. I wonder if this is the right approach or if I should always test inside the environment I'm working in. Since WordPress implies some globals, I had to write some functions purely for testing purposes, like:
        "get_ajax_url": function () {
            if (typeof window.ajaxurl === "undefined") {
                return "http://localhost/wordpress/wp-admin/admin-ajax.php";
            } else {
                return window.ajaxurl;
            }
        },
    but apart from that I got everything working right. What do you think?

    Read the article

  • Advice on choosing a book to read

    - by Kioshiki
    I would like to ask for some recommendations on useful books to read. Initially I had intended on posting quite a long description of my current issue and asking for advice. But I realised that I didn’t have a clear idea of what I wanted to ask. One thing that is clear to me is that my knowledge in various areas needs improving and reading is one method of doing that. Though choosing the right book to read seems like a task in itself when there are so many books out there. I am a programmer but I also deal with analysis, design & testing. So I am not sure what type of book to read. One option might be to work through two books at the same time. I had thought maybe one about design or practices and another of a more technical focus. Recently I came across one book that I thought might be useful to read: http://xunitpatterns.com/index.html It seems like an interesting book, but the comments I read on amazon.co.uk show that the book is probably longer than it needs to be. Has anyone read it and can comment on this? Another book that I already own and would probably be a good one to finish reading is this: http://www.amazon.co.uk/Code-Complete-Practical-Handbook-Construction/dp/0735619670/ref=sr_1_1?ie=UTF8&qid=1309438553&sr=8-1 Has anyone else read this who can comment on its usefulness? Beyond these two I currently have no clear idea of what to read. I have thought about reading a book related to OO design or the GOF design patterns. But I wonder if I am worrying too much about the process and practices and not focusing on the actual work. I would be very grateful for any suggestions or comments. Many Thanks, Kioshiki

    Read the article

  • Are injectable classes allowed to have constructor parameters in DI?

    - by Songo
    Given the following code:
        class ClientClass {
            public function print() {
                // some code to calculate $inputString
                $parser = new Parser($inputString);
                $result = $parser->parse();
            }
        }

        class Parser {
            private $inputString;
            public function __construct($inputString) {
                $this->inputString = $inputString;
            }
            public function parse() {
                // some code
            }
        }
    Now ClientClass has a dependency on the Parser class. However, if I wanted to use dependency injection for unit testing, it would cause a problem, because now I can't send the input string to the Parser constructor like before, as it's calculated inside ClientClass itself:
        class ClientClass {
            private $parser;
            public function __construct(Parser $parser) {
                $this->parser = $parser;
            }
            public function print() {
                // some code to calculate $inputString
                $result = $this->parser->parse(); // --> will throw an exception since no string was provided
            }
        }
    The only solution I found was to modify all my classes that took parameters in their constructors to use setters instead (example: setInputString()). However, I think there might be a better solution than this, because sometimes modifying existing classes can cause more harm than benefit. So, are injectable classes not allowed to have input parameters? If a class must take input parameters in its constructor, what would be the way to inject it properly? UPDATE: Just for clarification, the problem happens when in my production code I decide to do this:
        $clientClass = new ClientClass(new Parser($inputString)); // ---> I have no way to predict $inputString as it is calculated inside `ClientClass` itself.
    UPDATE 2: Again for clarification, I'm trying to find a general solution to the problem, not for this example code only, because some of my classes have 2, 3 or 4 parameters in their constructors, not only one.

    Read the article

  • Access a PLESK website before propagation?

    - by RCNeil
    My web host uses Plesk and I want to know if there is any way to access and view a website (with PHP and other processes being functional) without propagation of the domain name. I have found countless forums on this, but they are all pretty old (circa 01-04) and involve either tricking your localhost or SSH commands, and some even result in terrible security risks. I would like to access a web page directory through a browser and see its contents, while having the PHP processes carry out, before I propagate its potential domain name. People claim this is pointless, but during a site migration why on earth would you not test a site before propagating it? I'm looking for something similar to what cPanel offers, i.e. http://IP.ADDRESS./~mydomain.com The only solution I could think of is storing the site in a new directory of an already functional site, then setting up databases and testing the site once it's complete. Once it is tested and working, I should easily be able to migrate the files to the "new" domain name's root directory, just set up new databases, and then propagate the domain name. I can't believe that Plesk V10+ still does not have a site preview method that includes PHP, JS, and Flash ability.

    Read the article

  • "TDD is about design, not verification"; concretely, what does that mean?

    - by sigo
    I've been wondering about this. What do we mean exactly by design and verification? Should I just apply TDD to make sure my code is SOLID, and not check whether its external behaviour is correct? Should I use BDD for verifying the behaviour is correct? Where I also get confused is regarding TDD code katas; to me they looked like they were more about verification than design. Shouldn't they be called BDD katas instead of TDD katas? I reckon that, for example, the Uncle Bob bowling kata leads in the end to a simple and nice internal design, but I felt that most of the process was centred more around verification than design. Design seemed to be a side effect of testing the external behaviour incrementally. I didn't feel so much that we were focusing most of our efforts on design, but more on verification. While normally we are told the contrary: that in TDD, verification is a side effect, design is the main purpose. So my question is: what should I focus on exactly when I do TDD? SOLID, external API usability, or something else? And how can I do that without being focused on verification? What do you guys focus your energy on when you are practising TDD?

    Read the article

  • Please recommend the best tools to build a test plan management tool

    - by fzkl
    I have mostly worked on hardware testing in my professional career and would like to get onto the software development side. I thought working on a practically usable project would help motivate me and help me acquire some skills. I have decided to build a test plan management tool for the QA team I work in (we use Excel sheets!). The test plan management tool should be browser-based and should support this: there would be many test plans, each test plan having test sets, test sets having test cases, and test cases having instructions, attachments, pass/fail status marking, and bug info in case of failure. It should also have an export-to-Excel option. I have a visual picture of the tool I am looking to build, but I don't have enough experience to figure out where to start. My current programming skills are limited to C and shell programming, and I want to pick up Python. What tools (programming language, database, and anything else?) would you recommend for me to get this done? Also, what are the key concepts in the recommended programming language that I should focus on to build a browser-based tool like this?

    Read the article

  • How to implement isValid correctly?

    - by Songo
    I'm trying to provide a mechanism for validating my object like this:
        class SomeObject
        {
            private $_inputString;
            private $_errors = array();

            public function __construct($inputString)
            {
                $this->_inputString = $inputString;
            }

            public function getErrors()
            {
                return $this->_errors;
            }

            public function isValid()
            {
                $isValid = preg_match("/Some regular expression here/", $this->_inputString);
                if ($isValid == 0) {
                    $this->_errors[] = 'Error was found in the input';
                }
                return $isValid == 1;
            }
        }
    Then when I'm testing my code I'm doing it like this:
        $obj = new SomeObject('an INVALID input string');
        $isValid = $obj->isValid();
        $errors = $obj->getErrors();
        $this->assertFalse($isValid);
        $this->assertNotEmpty($errors);
    Now the test passes correctly, but I noticed a design problem here. What if the user called $obj->getErrors() before calling $obj->isValid()? The test will fail because the user has to validate the object first before checking the errors resulting from validation. I think this way the user depends on a sequence of actions to work properly, which I think is a bad thing because it exposes the internal behaviour of the class. How do I solve this problem? Should I tell the user explicitly to validate first? Where do I mention that? Should I change the way I validate? Is there a better solution for this? UPDATE: I'm still developing the class, so changes are easy; renaming functions and refactoring them is possible.

    Read the article

  • How to use lists in equivalence partitioning?

    - by KhDonen
    I have read that equivalence partitioning can typically be used for intervals or lists, i.e. I assume it can be used for every set of inputs. Anyway, if the requirement says that the allowed colors are (RED, BLUE, BLACK, GREEN), I cannot treat them like a list, right? I mean, testing one of them would not be enough, because the developers most likely used some switch-case, and thus it is not a real "set" where one value could also represent the others. So how is this meant to work with lists? Also, what is not clear to me: I do not think it is always possible to do the initial partitioning and then design the test cases. What about checking whether two lines y = mx + c intersect (two inputs)? 1) The lines are parallel: m1 = m2, but c1 must be different from c2. 2) The lines are intersecting: m1 must be different from m2. 3) Coincident: they are the same. How can I use partitioning here? This is actually taken from a book, and it says that these sets are equivalence classes.

    Read the article

  • Tips for Making this Code Testable [migrated]

    - by Jesse Bunch
    So I'm writing an abstraction layer that wraps a telephony RESTful service for sending text messages and making phone calls. I should build this in such a way that the low-level provider, in this case Twilio, can be easily swapped without having to re-code the higher level interactions. I'm using a package that is pre-built for Twilio, and so I'm thinking that I need to create a wrapper interface to standardize the interaction between the Twilio service package and my application. Let us pretend that I cannot modify this pre-built package. Here is what I have so far (in PHP):
        <?php

        namespace Telephony;

        class Provider_Twilio implements Provider_Interface
        {
            public function send_sms(Provider_Request_SMS $request)
            {
                if (!$request->is_valid())
                    throw new Provider_Exception_InvalidRequest();

                $sms = \Twilio\Twilio::request('SmsMessage');

                $response = $sms->create(array(
                    'To' => $request->to,
                    'From' => $request->from,
                    'Body' => $request->body
                ));

                if ($this->_did_request_fail($response)) {
                    throw new Provider_Exception_RequestFailed($response->message);
                }

                $response = new Provider_Response_SMS(TRUE);
                return $response;
            }

            private function _did_request_fail($api_response)
            {
                return isset($api_response->status);
            }
        }
    So the idea is that I can write another file like this for any other telephony service, provided that it implements Provider_Interface, making them swappable. Here are my questions: First off, do you think this is a good design? How could it be improved? Second, I'm having a hard time testing this because I need to mock out the Twilio package so that I'm not actually depending on Twilio's API for my tests to pass or fail. Do you see any strategy for mocking this out? Thanks in advance for any advice!

    Read the article

  • How to test Chrome extensions?

    - by swampsjohn
    Is there a good way to do this? I'm writing an extension that interacts with a website as a content script and saves data using localStorage. Are there any tools, frameworks, etc. that I can use to test this behavior? I realize there are some generic tools for testing JavaScript, but are those sufficiently powerful to test an extension? Unit testing is most important, but I'm also interested in other types of testing (such as integration testing).

    Read the article

  • Testing HTTP status codes

    - by amusero
    I'm running an Apache Tomcat server. While doing some security testing, I noticed that my server returns a 200 HTTP status code together with the default error page when I try to access a non-existent resource, instead of returning a 404 status code and redirecting me to the default error page. I suspect this is not the only failure of this kind. Can anyone suggest a process to check the most common HTTP status codes?
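    As a rough illustration of the kind of check being asked about (the URL and path below are placeholders), one can request a path that should not exist and inspect the raw status code instead of trusting the rendered page:
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class StatusCodeCheck {
            public static void main(String[] args) throws Exception {
                // Placeholder URL; point it at a resource that should not exist.
                URL url = new URL("http://localhost:8080/definitely-not-here");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("GET");
                int code = conn.getResponseCode(); // expect 404; a 200 here is the soft-404 symptom
                System.out.println(url + " -> " + code);
                conn.disconnect();
            }
        }
    The same loop can be repeated over a small list of URLs with known expected codes (301, 403, 404, 500, and so on), comparing the returned code against the expected one.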

    Read the article

  • Plugins stopped working in Chromium in Debian Testing

    - by Jan Hudec
    A short time ago, plugins stopped working in Chromium. None of kpartsplugin, mozplugger, or flashplugin-nonfree seems to work. None of them shows up on the chrome://plugins page (only "Chromoting Viewer" does). Was there recently any change that would require reconfiguration? And if so, of what? I have Debian Testing (Jessie) amd64, recently updated, with chromium 35.0.1916.114-2, flashplugin-nonfree 1:3.4, kpartsplugin 20120605-1 and mozplugger 1.14.5-2.

    Read the article

  • Library to fake intermittent failures according to tester-defined policy?

    - by crosstalk
    I'm looking for a library that I can use to help mock a program component that works only intermittently: usually it works fine, but sometimes it fails. For example, suppose I need to read data from a file, and my program has to avoid crashing or hanging when a read fails due to a disk head crash. I'd like to model that by having a mock data reader function that returns mock data 90% of the time, but hangs or returns garbage otherwise. Or, if I'm stress-testing my full program, I could turn on debugging code in my real data reader module to make it return real data 90% of the time and hang otherwise. Now, obviously, in this particular example I could just code up my mock manually to test against a random() routine. However, I was looking for a system that allows implementing any failure policy I want, including: fail randomly 10% of the time; succeed 10 times, fail 4 times, repeat; fail semi-randomly, such that one failure tends to be followed by a burst of more failures; any policy the tester wants to define. Furthermore, I'd like to be able to change the failure policy at runtime, using either code internal to the program under test, or external knobs or switches (though the latter can be implemented with the former). In pig-Java, I'd envision a FailureFaker interface like so:
        interface FailureFaker {
            /** Return true if and only if the mocked operation succeeded.
                Implementors should override this method with versions consistent
                with their failure policy. */
            public boolean attempt();
        }
    And each failure policy would be a class implementing FailureFaker; for example, there would be a PatternFailureFaker that would succeed N times, then fail M times, then repeat, and an AlwaysFailFailureFaker that I'd use temporarily when I need to simulate, say, someone removing the external hard drive my data was on. The policy could then be used (and changed) in my mock object code like so:
        class MyMockComponent {
            FailureFaker faker;

            public void doSomething() {
                if (faker.attempt()) {
                    // ...
                } else {
                    throw new RuntimeException();
                }
            }

            void setFailurePolicy(FailureFaker policy) {
                this.faker = policy;
            }
        }
    Now, this seems like something that would be part of a mocking library, so I wouldn't be surprised if it's been done before. (In fact, I got the idea from Steve Maguire's Writing Solid Code, where he discusses this exact idea on pages 228-231, saying that such facilities were common in Microsoft code of that early-90's era.) However, I'm only familiar with EasyMock and jMockit for Java, and neither, AFAIK, has this function or something similar with a different syntax. Hence, the question: Do such libraries as I've described above exist? If they do, where have you found them useful? If you haven't found them useful, why not?
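    To make the described policy concrete, here is a minimal sketch (not an existing library) of the PatternFailureFaker mentioned above: succeed N times, then fail M times, then repeat. It could be handed to setFailurePolicy() from the example, e.g. new PatternFailureFaker(10, 4).
        // Sketch of the PatternFailureFaker described above; not from a real library.
        public class PatternFailureFaker implements FailureFaker {
            private final int successes;
            private final int failures;
            private int position = 0;

            public PatternFailureFaker(int successes, int failures) {
                this.successes = successes;
                this.failures = failures;
            }

            @Override
            public boolean attempt() {
                boolean ok = position < successes;                    // first N calls succeed
                position = (position + 1) % (successes + failures);   // then M fail, then repeat
                return ok;
            }
        }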

    Read the article
