Search Results

Search found 12476 results on 500 pages for 'unit testing'.

Page 13 of 500

  • Should the test and the fix be written by different people?

    - by Nutel
    In TDD it is common practice to write a test before the fix, to prevent regressions and simplify the fixing itself. I wonder: what if the test and the fix were written by different people? The total time spent would be almost the same, but since three people (including the tester) would now be thinking about possible failures, we increase the probability that the fix covers all possible failure scenarios. Does this practice make sense, or does it just waste the additional time needed for one more person to become familiar with the bug?

    Read the article

  • Should developers be involved in testing phases?

    - by LudoMC
    Hi, we are using a classical V-shaped development process: requirements, architecture, design, implementation, integration tests, system tests and acceptance. Testers prepare test cases during the first phases of the project. The issue is that, due to resource constraints (*), the test phases take too long and are often cut short due to time pressure (you know project managers... ;)). So my question is simple: should developers be involved in the test phases, and isn't that too 'dangerous'? I'm afraid it would give the project managers a false sense of improved quality just because the work has been done, but would the added man-days be of any real value? I'm not really confident in developers doing tests (no offense here, but we all know it's quite hard to break in a few clicks what you have built over several days). Thanks for sharing your thoughts. (*) For obscure reasons, increasing the number of testers is not an option as of today. (Just upfront: this is not a duplicate of 'Should programmers help testers in designing tests?', which is about test preparation, not test execution, where we want to avoid involving developers.)

    Read the article

  • Testing my model for hybrid scheduling in Embedded Systems

    - by markusian
    I am working on a school project in which I have to analyze the performance of a few fixed-priority server algorithms (polling server, deferrable server, priority exchange) using a simulator, in the case of hybrid scheduling, where we have both hard periodic tasks and soft aperiodic tasks. In my model I assume that the hard tasks have a period equal to their deadline and a known worst-case execution time (WCET), and that the soft tasks have a known WCET and random interarrival times; in both cases the actual execution time may be smaller than the WCET. In order to test these algorithms I need realistic case studies, so I am digging through the scientific literature, but I keep running into problems: sometimes I find a list of hard tasks with WCETs, but it is not specified how the soft task parameters were obtained. Given the WCET of a task, how can I model its actual execution time, i.e. what random distribution should I use relative to the WCET? And how can I model the random interarrival times of the soft aperiodic tasks?
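
    A possible starting point for the last two questions (my sketch, not something taken from the question): actual execution times are often drawn from a distribution bounded above by the WCET, and aperiodic arrivals are often modeled as a Poisson process, i.e. exponential interarrival times. A minimal Python sketch, where the choice of a scaled beta distribution and all parameter values are illustrative assumptions:

        import random

        WCET = 10.0                # worst-case execution time of a soft task (assumed units)
        MEAN_INTERARRIVAL = 50.0   # mean time between soft-task arrivals (assumed)

        def actual_execution_time(wcet):
            # Assumption: actual times cluster below the WCET; a beta
            # distribution scaled to [0, wcet] keeps every sample bounded by it.
            return wcet * random.betavariate(2.0, 1.0)

        def next_interarrival(mean):
            # Assumption: Poisson arrivals, hence exponential interarrival times.
            return random.expovariate(1.0 / mean)

        # Generate a small trace of (arrival time, execution time) pairs.
        t = 0.0
        for _ in range(5):
            t += next_interarrival(MEAN_INTERARRIVAL)
            print(f"arrival={t:8.2f}  exec={actual_execution_time(WCET):5.2f}")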

    Read the article

  • Web Form Testing [closed]

    - by Frank G.
    I created an application for a client that is along the lines of a ticket-tracking system, and I want to beta test its web forms. I am looking for software that can automatically populate whatever forms are on a web page with generic data. The purpose is to submit randomly populated data and see whether I get any errors on the page, and also to see how the form validation behaves. Does anyone know of a tool that can do this?
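
    As one hedged possibility (my suggestion, not a tool named in the question): the Selenium WebDriver Python bindings can fill every text field on a page with random data and submit the form. The URL and selectors below are placeholders, not details from the question:

        import random
        import string

        from selenium import webdriver
        from selenium.webdriver.common.by import By

        URL = "http://localhost/ticket-form"  # hypothetical form page

        def random_text(length=12):
            return "".join(random.choices(string.ascii_letters + string.digits, k=length))

        driver = webdriver.Firefox()
        try:
            driver.get(URL)
            # Populate every text input and textarea with generic random data.
            for field in driver.find_elements(By.CSS_SELECTOR, "input[type='text'], textarea"):
                field.clear()
                field.send_keys(random_text())
            # Submit the form and inspect the page for errors / validation messages.
            driver.find_element(By.CSS_SELECTOR, "form").submit()
        finally:
            driver.quit()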

    Read the article

  • How do you set a custom session when unit testing with wicket?

    - by vagabond
    I'm trying to run some unit tests on a Wicket page that only allows access after you've logged in. In my JUnit test I cannot start or render the page without setting the session. How do you set the session? I'm having trouble finding any documentation on how to do this.

        WicketTester tester = new WicketTester(new MyApp());
        ((MyCustomSession) tester.getWicketSession()).setItem(MyFactory.getItem("abc"));

        // Fails to start below; no session seems to be set
        tester.startPage(General.class);
        tester.assertRenderedPage(General.class);

    Read the article

  • White-box testing in JavaScript - how to deal with privacy?

    - by Max Shawabkeh
    I'm writing unit tests for a module in a small JavaScript application. In order to keep the interface clean, some of the implementation details are closed over by an anonymous function (the usual JS pattern for privacy). However, while testing I need to access/mock/verify the private parts. Most of the tests I've written previously have been in Python, where there are no real private variables (members, identifiers, whatever you want to call them); one simply suggests privacy with a leading underscore and freely ignores it when testing the code. In statically typed OO languages I suppose one could make private members accessible to tests by making them protected and subclassing the object under test. In JavaScript the latter doesn't apply, while the former seems like bad practice. I could always fall back to black-box testing and simply check the final results; it's the simplest and cleanest approach, but unfortunately not detailed enough for my needs. So, is there a standard way of keeping variables private while still retaining some backdoors for testing in JavaScript?

    Read the article

  • When creating a library published on CodePlex, how "bad" would it be for the unit-test projects to rely on commercial products?

    - by Lasse V. Karlsen
    I have started a project on CodePlex for a WebDAV server implementation for .NET, so that I can host a WebDAV server in my own programs. This is both a learning/research project (the WebDAV protocol plus the server portion) and a project I think I can have a lot of fun with, both in making it and in using it. However, I see a need for mocking types here in order to unit test properly. For instance, I will be relying on HttpListener for the web server portion of the WebDAV server, and since this type has no interface and is sealed, I cannot easily create mocks or stubs of it, unless I use something like TypeMock. So if I used TypeMock in the unit-test projects of this library, how bad would that be for potential users? The projects are made in C# 3.5 for .NET 3.5 and 4.0, and the project files were created with Visual Studio 2010 Professional. The actual class libraries you would end up referencing in your software would of course not be encumbered with anything remotely like this; only the unit-test libraries would. What are your thoughts on this? As an example, my old (private) code base has the ability to initiate a WebDAV server with just this:

        var server = new WebDAVServer();

    This constructs, and owns, an HttpListener instance internally, and I would like to verify through unit tests that if I dispose of the server object, the internal listener is disposed of as well. If, on the other hand, I use the overload where I hand it a listener object, that object should not be disposed of. Short of exposing the internal listener object to the outside world, something I'm a bit loath to do, how can I ensure in a good way that the object was disposed of? With TypeMock I can mock parts of this object even though it isn't accessed through interfaces. The alternative would be for me to wrap everything in wrapper classes, where I have complete control.

    Read the article

  • Unit testing methods decorated with custom attributes

    - by Joel Cunningham
    I am trying to retrofit unit tests onto an existing code base. Both the class and the method I want to unit test are decorated with custom attributes. These attributes are fairly sophisticated, and I don't want them to run as part of the unit test. The only solution I have come up with is to compile the attributes out when I want to unit test. I don't really like this solution and would prefer to either replace them with mocked attributes at runtime or prevent them from running in a more elegant way. How do you unit test code that has class and method attributes that you don't want to run as part of a unit test? Thanks in advance.

    Read the article

  • Unit test class inherited from ContextBoundObject and decorated with ContextAttribute

    - by Joel Cunningham
    I am trying to retrofit unit tests onto an existing code base. Both the class and the method I want to unit test are decorated with custom attributes that inherit from ContextBoundObject and ContextAttribute, and I don't want them to run as part of the unit test. The only solution I have come up with is to compile the attributes out when I want to unit test. I don't really like this solution and would prefer to either replace them with mocked attributes at runtime or prevent them from running in a more elegant way. How do you unit test code whose class and method attributes inherit from ContextBoundObject and ContextAttribute when you don't want them to run as part of a unit test? Thanks in advance.

    Read the article

  • Should a unit test replicate functionality or test output?

    - by Daniel Beardsley
    I've run into this dilemma several times: should my unit tests duplicate the functionality of the method they are testing in order to verify its integrity, or should they exercise the method with a number of manually created pairs of inputs and expected outputs? I'm mainly asking about situations where the method under test is reasonably simple and its correct operation can be verified by glancing at the code for a minute. Simplified example (in Ruby):

        def concat_strings(str1, str2)
          return str1 + " AND " + str2
        end

    A simplified functionality-replicating test for the above method:

        def test_concat_strings
          10.times do
            str1 = random_string_generator
            str2 = random_string_generator
            assert_equal (str1 + " AND " + str2), concat_strings(str1, str2)
          end
        end

    I understand that most of the time the method under test won't be simple enough to justify this approach. But my question remains: is this a valid methodology in some circumstances (why or why not)?
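
    For contrast, the alternative the question mentions, i.e. manually chosen inputs with hand-computed expected outputs and no re-implementation of the logic, might look like this (a Python sketch, since the idea is language-agnostic; the function mirrors the Ruby example above):

        import unittest

        def concat_strings(str1, str2):
            return str1 + " AND " + str2

        class TestConcatStrings(unittest.TestCase):
            def test_known_pairs(self):
                # The expected values are stated as facts, not re-derived
                # with the same expression the implementation uses.
                self.assertEqual("a AND b", concat_strings("a", "b"))
                self.assertEqual(" AND ", concat_strings("", ""))

        if __name__ == "__main__":
            unittest.main()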

    Read the article

  • How often should we write unit tests?

    - by Midnight Blue
    Hi, I was recently introduced to the test-driven approach to development by my mentor at work, and he encourages me to write a unit test whenever "it makes sense." I understand some of the benefits of having a thorough unit test suite, both for regression testing and for refactoring, but I do wonder how often, and how thoroughly, we should write unit tests. My mentor/development lead asks me to write a new unit test case for a newly written control flow in a method that is already covered by the existing test class, and I think that is overkill. How often do you write your unit tests, and how detailed do you think your unit tests should be? Thanks!

    Read the article

  • Isolating test data in acceptance tests

    - by Matt Phillips
    I'm looking for guidance on how to keep my acceptance tests isolated. Right now the thing stopping me from running the tests in parallel is the database records that the tests manipulate. I've written helpers that take care of doing inserts and deletes before tests are executed, to make sure the state is correct. But now I can't run the tests in parallel against the same database without generating unique test data fields for each test. For example, when testing the creation of a row, I delete everything where column A = foo and column B = bar, then navigate through the UI in the test and create a record with column A = foo and column B = bar. When testing that a duplicate row is not allowed, I insert a row with column A = foo and column B = bar and then use the UI to try to do the exact same thing, which displays an error message in the UI as expected. These tests work perfectly when run separately and serially, but I can't run them at the same time for fear that one will create or delete a record the other is expecting. Any tips on how to structure them better so they can be run in parallel?
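
    One common remedy (a suggestion on my part, not something from the question): generate a unique value per test for the columns each test touches, so parallel runs can never collide on the same rows. A minimal Python sketch, where ui and db stand for hypothetical page-driver and database helpers:

        import uuid

        def unique_value(prefix):
            # Every test gets its own namespace, so parallel tests never
            # create or delete each other's rows.
            return f"{prefix}-{uuid.uuid4().hex[:8]}"

        def test_create_row(ui, db):  # 'ui' and 'db' are hypothetical fixtures
            a, b = unique_value("foo"), unique_value("bar")
            ui.create_record(column_a=a, column_b=b)
            assert db.row_exists(column_a=a, column_b=b)

        def test_duplicate_row_rejected(ui, db):
            a, b = unique_value("foo"), unique_value("bar")
            db.insert_row(column_a=a, column_b=b)
            ui.create_record(column_a=a, column_b=b)  # same values again
            assert ui.error_message_shown()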

    Read the article

  • Who should write the test plan?

    - by Cheng Kiang
    Hi, I am on the in-house development team of my company, and we develop our company's web sites according to the requirements of the marketing team. Before releasing a site to them for acceptance testing, we are asked to give them a test plan to follow. However, the development team feels that since the requirements came from the requestors, the requestors have the best knowledge of what to test, what to look out for, how things should behave, and so on, and that a test plan is therefore not required. We are always arguing over this, and the developers find it a waste of time to write down things like: click on button A; key in XYZ in the form field and click button B; you should see behaviour C. This has to be repeated for each requested requirement/feature, and it basically rephrases what is already in the requirements document. We are moving towards an Agile approach for managing our projects, and such a plan is also requested at the end of each iteration. Unit and integration testing aside, who should be the one to come up with the end-user acceptance test plan? Should it be the requestors or the developers? Many thanks in advance. Regards, CK

    Read the article

  • Automated acceptance tests under specific constraints

    - by HH_
    This is a follow-up to my previous question, which was a bit general, so this time I'll describe a more precise situation. I want to automate acceptance testing of a web application. Briefly, this application allows the user to create contracts for subscribers, with two constraints: you cannot create more than one contract per subscriber, and once a contract is created it cannot be deleted (from the UI). Let's say TestCreate is a test case covering the normal creation of a contract. The constraints introduce complexities into the testing process, mainly dependencies between test cases and test executions. Before we run TestCreate, we need to make sure the application is in a suitable state (the subscriber has no contract). If we run TestCreate twice, the second run will fail, since the state of the application will have changed; we then need to revert to the initial state (i.e. delete the contract), which is impossible to do from the UI. More generally, after each test case we should guarantee that the state is reverted, and since in this case that is impossible from the UI, how do you handle it? Possible solution: I thought about taking a backup of the database in the desired state and, after each test case, running a script which drops the database and restores the backup. However, I find that too heavy to do for every single test case. In addition, what if some information is stored in files, or in multiple or inaccessible databases? My question: in this situation, what would an experienced tester do to write automated and maintainable tests? Thank you. More info: I'm trying to integrate the tests into a BDD framework, which I find to be a neat solution for test documentation and communication, but it does not solve this particular problem (it even makes it harder).
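
    The backup/restore idea can be cheap enough to run around every single test when the test database is small. As a hedged illustration only (it assumes the state lives in a single SQLite file, which is an assumption about the environment, not the questioner's actual stack), a pytest fixture can snapshot the known-good state and put it back after each test, undoing the irreversible contract creation:

        import shutil

        import pytest

        DB_PATH = "app.db"            # hypothetical SQLite database file
        SNAPSHOT = "app.db.snapshot"  # pristine state: subscriber has no contract

        @pytest.fixture(autouse=True)
        def pristine_database():
            # Save the known-good state, run the test, then restore the
            # state so the next test (or a rerun) starts clean.
            shutil.copyfile(DB_PATH, SNAPSHOT)
            yield
            shutil.copyfile(SNAPSHOT, DB_PATH)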

    Read the article

  • Writing Acceptance test cases

    - by HH_
    We are integrating a testing process into our Scrum process. My new role is to write acceptance tests for our web applications in order to automate them later. I have read a lot about how test cases should be written, but none of it gave me practical advice for writing test cases for complex web applications; instead, the sources offered conflicting principles that I found hard to apply: Test cases should be short: take the example of a CMS. Short test cases are easy to maintain and make it easy to identify inputs and outputs. But what if I want to test a long series of operations (e.g. adding a document, sending a notification to another user, the other user replying, the document changing state, the first user getting a notice)? It rather seems to me that test cases should represent complete scenarios, but I can see how this would produce overly complex test documents. Tests should identify inputs and outputs: what if I have a long form with many interacting fields with different behaviors? Do I write one test for everything, or one for each field? Test cases should be independent: but how can I apply that if testing the upload operation requires a successful connect operation? And how does it apply when writing test cases: should I write a test for each operation, with each test declaring its dependencies, or should I rewrite the whole scenario for each test? Test cases should be lightly documented: this principle is specific to Agile projects. Do you have any advice on how to implement it? Although I thought writing acceptance test cases was going to be simple, I find myself overwhelmed by every decision I have to make (FYI: I am a developer, not a professional tester). So my main question is: what steps or advice do you have for writing maintainable acceptance test cases for complex applications? Thank you.
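
    One way to reconcile "test cases should be independent" with "upload requires a successful connect" (my suggestion, not from the question) is to move the prerequisite into reusable setup code, so each test declares the dependency without re-testing it. A Python/pytest sketch in which the whole application driver is a hypothetical stand-in:

        import pytest

        class Document:
            def __init__(self, name):
                self.name = name
                self.state = "uploaded"

        class FakeApp:
            """Hypothetical stand-in for a driver of the real web application."""
            def connect(self, user, password):
                self.user = user  # pretend authentication succeeded
                return self
            def upload(self, name):
                return Document(name)
            def logout(self):
                self.user = None

        @pytest.fixture
        def connected_session():
            # The connect step is setup, not an assertion: if it breaks,
            # this test errors out rather than duplicating the connect test.
            app = FakeApp().connect("testuser", "secret")
            yield app
            app.logout()

        def test_upload_document(connected_session):
            doc = connected_session.upload("report.pdf")
            assert doc.state == "uploaded"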

    Read the article

  • How can I unit test a class which requires a web service call?

    - by Chris Cooper
    I'm trying to test a class which calls some Hadoop web services. The code is pretty much of the form:

        method() {
            ...use Jersey client to create WebResource...
            ...make request...
            ...do something with response...
        }

    e.g. there is a create directory method, a create folder method, etc. Given that the code is dealing with an external web service that I don't have control over, how can I unit test this? I could try to mock the web service client/responses, but that breaks a guideline I've seen a lot recently: "Don't mock objects you don't own." I could set up a dummy web service implementation; would that still constitute a unit test, or would it then be an integration test? Is it just not possible to unit test at this low a level? How would a TDD practitioner go about this?
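
    The usual compromise is to wrap the external client in a thin gateway that you do own, mock the gateway in unit tests, and cover the gateway itself with a few integration tests against the real (or dummy) service. The question is about Java and the Jersey client, but the pattern is language-agnostic; a Python sketch with purely illustrative names:

        import unittest
        from unittest import mock

        class HadoopGateway:
            """Thin wrapper we own around the external web service client.
            A real implementation would make the HTTP calls."""
            def create_directory(self, path):
                raise NotImplementedError("talks to the real service")

        class DirectoryManager:
            """Class under test: all service access goes through the gateway."""
            def __init__(self, gateway):
                self.gateway = gateway

            def ensure_directory(self, path):
                return "created" if self.gateway.create_directory(path) else "failed"

        class DirectoryManagerTest(unittest.TestCase):
            def test_ensure_directory_reports_success(self):
                gateway = mock.create_autospec(HadoopGateway, instance=True)
                gateway.create_directory.return_value = True
                manager = DirectoryManager(gateway)
                self.assertEqual("created", manager.ensure_directory("/data"))

        if __name__ == "__main__":
            unittest.main()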

    Read the article

  • Why does my unit test take much longer than normal to run in VS 2010 Premium? [on hold]

    - by kombo
    I have only 4 projects in my solution, and I am trying to run a unit test for one class in one of the projects. I created the unit test by right-clicking on the class, choosing the "Create Unit Tests" option, and following the wizard to the end, which resulted in the test being created. I then passed in the parameter values and ran the test, but the test keeps running and never finishes. Surprisingly, it runs fine on other developers' PCs. NB: my class connects to the database, and my application is an ASP.NET Web Forms application. I know this is not recommended, but I want to get my test running now. I have tried a lot of samples from the internet, but the problem persists. Could anyone tell me the cause of the extreme slowness (more than 30 minutes)?

    Read the article

  • Unit Testing -- fundamental goal?

    - by David
    My co-workers and I had a bit of a disagreement last night about unit testing in our PHP/MySQL application. Half of us argued that when unit testing a function within a class, you should mock everything outside of that class and its parents. The other half argued that you should NOT mock anything that is a direct dependency of the class. The specific example was our logging mechanism, which happens through a static Logging class, with a number of Logging::log() calls in various locations throughout our application. The first half of us said the logging mechanism should be faked (mocked) because it is already tested in the Logging unit tests. The second half argued that we should use the real Logging class in our unit tests, so that if we change the logging interface, we will see whether the change creates problems in other parts of the application where the call sites were not updated. So I guess the fundamental question is: do unit tests serve to test the functionality of a single unit in a closed environment, or to show the consequences of changes to a single unit in a larger environment? If it's one of these, how do you accomplish the other?
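
    The two positions map onto what is often called "solitary" versus "sociable" unit tests. As a language-agnostic illustration (a Python sketch, although the question concerns PHP; the log function below stands in for the static Logging::log()):

        import unittest
        from unittest import mock

        def log(message):
            # Stand-in for Logging::log() from the question.
            print(f"[log] {message}")

        def withdraw(balance, amount):
            if amount > balance:
                log("withdrawal rejected")
                return balance
            return balance - amount

        class SolitaryTest(unittest.TestCase):
            @mock.patch(f"{__name__}.log")  # the collaborator is mocked away
            def test_rejects_overdraft(self, fake_log):
                self.assertEqual(100, withdraw(100, 150))
                fake_log.assert_called_once()

        class SociableTest(unittest.TestCase):
            def test_rejects_overdraft_with_real_logger(self):
                # The real log() runs too, so a breaking change to its
                # interface would surface in this test as well.
                self.assertEqual(100, withdraw(100, 150))

        if __name__ == "__main__":
            unittest.main()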

    Read the article

  • How fast are my services? Comparing basicHttpBinding and ws2007HttpBinding using the SO-Aware Test Workbench

    - by gsusx
    When working on real-world WCF solutions, we become quite aware of the performance implications of the binding and behavior configuration of WCF services. However, while it is a known fact that different binding and behavior configurations have a direct impact on the performance of WCF services, developers often struggle to figure out the real performance behavior of their services. We can attribute this to the lack of tools for correctly testing the performance characteristics of WCF services...(read more)

    Read the article

  • SO-Aware at the Atlanta Connected Systems User Group

    - by gsusx
    Today my colleague Don Demsak will be presenting a session about WCF management, testing and governance using SO-Aware and the SO-Aware Test Workbench at the Connected Systems User Group in Atlanta. Don is a very engaging speaker and has prepared some very cool demos based on lessons from real-world WCF solutions. If you are in the ATL area and interested in WCF, AppFabric or BizTalk, you should definitely swing by Don's session. Don't forget to heckle him a bit (you can blame me for it ;))...(read more)

    Read the article

  • Unit testing in Python?

    - by yossi.ittach
    Hey, I'm new to Python, and I'm having a hard time grasping the concept of unit testing in it. I'm coming from Java, where unit testing makes sense because there you actually have a unit: a class. But a Python class is not necessarily the same as a Java class, and the way I use Python, as a scripting language, is more functional than OOP. So what do you "unit test" in Python? A flow? Thanks!
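
    In Python the "unit" is usually just a function or one small behavior, and the standard library's unittest (or pytest) exercises it directly; a minimal sketch with an invented example function:

        import unittest

        def slugify(title):
            # The unit under test is a plain function, not a class.
            return "-".join(title.lower().split())

        class SlugifyTest(unittest.TestCase):
            def test_spaces_become_hyphens(self):
                self.assertEqual("hello-world", slugify("Hello World"))

            def test_single_word_is_lowercased(self):
                self.assertEqual("report", slugify("Report"))

        if __name__ == "__main__":
            unittest.main()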

    Read the article

  • Unit testing an iPhone static library with Xcode 3

    - by teabot
    I am writing a number of static libraries for the iPhone and also wish to have suites of unit tests for them. Xcode 3 provides templates for both static libraries and unit tests, but I am wondering how they should fit together in a static library project. In my static library project I have created a target for unit testing, but I expect to also create an executable to kick off the unit tests that run against the classes in the static library. What is the procedure for doing this?

    Read the article

  • Are unit tests also used to find bugs?

    - by Draco
    I was reading the following article, and the author makes it quite clear that unit tests are NOT primarily for finding bugs. I would like to know your thoughts on this. I do know that unit tests make the design of your application much more robust, but isn't it partly the bugs found through unit tests that make the application robust, besides the other advantages? http://blog.stevensanderson.com/2009/08/24/writing-great-unit-tests-best-and-worst-practises/

    Read the article
