Search Results

Search found 10206 results on 409 pages for 'tooling and testing'.


  • How do I inject test objects when the real objects are created dynamically?

    - by JW01
    I want to make a class testable using dependency injection, but the class creates multiple objects at runtime and passes different values to their constructor. Here's a simplified example:

        public abstract class Validator {
            private ErrorList errors;

            public abstract void validate();

            public void addError(String text) {
                errors.add(new ValidationError(text));
            }

            public int getNumErrors() {
                return errors.count();
            }
        }

        public class AgeValidator extends Validator {
            public void validate() {
                addError("first name invalid");
                addError("last name invalid");
            }
        }

    (There are many other subclasses of Validator.) What's the best way to change this so I can inject a fake object instead of ValidationError? I can create an AbstractValidationErrorFactory and inject the factory instead. This would work, but it seems like I'll end up creating tons of little factories and factory interfaces for every dependency of this sort. Is there a better way?
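
    A minimal sketch of the factory approach the question already names, reusing ErrorList and ValidationError from the code above - the interface and its wiring are assumptions for illustration, not part of the original:

        public interface ValidationErrorFactory {
            ValidationError create(String text);
        }

        public abstract class Validator {
            private ErrorList errors;
            private final ValidationErrorFactory factory;

            protected Validator(ValidationErrorFactory factory) {
                this.factory = factory; // injected seam replaces the inline new
            }

            public abstract void validate();

            public void addError(String text) {
                errors.add(factory.create(text)); // a test injects a factory that returns fakes
            }

            public int getNumErrors() {
                return errors.count();
            }
        }

    A single generic Factory<T> interface is one way to avoid writing one little factory interface per dependency, at the cost of less descriptive names.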

    Read the article

  • Types of semantic bugs, logic errors [closed]

    - by C-Otto
    I am a PhD student and currently focus on automatically finding instances of new types of bugs in (Java) programs that cannot be found by existing tools like FindBugs. The existing tool is currently used to prove/disprove termination of (Java) programs. I have some ideas (see below), but I could use more input from you (experienced programmers, potential users of my tool). What kinds of bugs do you wish you could find? What types of bugs exist and might be suitable for my analysis?

    One strength of the approach I use is detailed information about the heap. So, in contrast to FindBugs, I can work with knowledge of the form "variable x and variable y are disjoint on the heap" or "variable z is not cyclic". It is also possible to see whether a method might have side effects (and if so, which variables may/may not be affected by it).

    Example 1: Vacuous call.

        Graph graphOne = createGraph();
        Graph graphTwo = createGraph();
        Node source = graphTwo.getRootNode();
        for (Node n : graphOne.getNodes()) {
            if (areConnected(source, n)) {
                graphTwo.addNode(n);
            }
        }

    Imagine createGraph() creates a fresh graph, so that graphOne and graphTwo are disjoint on the heap. Then, because source is taken from graphTwo instead of graphOne, the call to areConnected always returns false. In this situation I could find out that the call to areConnected is useless (because it has no side effect and the return value is always false), which helps in finding the real bug (taking source from the wrong graph). For this, the information that x and y are disjoint (because graphOne and graphTwo are disjoint) is crucial. This bug is related to calling x.equals(y) where x and y are objects of different classes. In that scenario, most implementations of equals() always return false, which most likely is not the intended result. FindBugs already finds this bug (hardcoded to equals(); the semantics of the implementation is not checked).

    Example 2: Useless code.

        someCode();
        while (something()) {
            yetMoreSomething();
        }
        moreCode();

    If the loop (i.e., the code in something() and yetMoreSomething()) does not modify anything visible outside the loop, it makes no sense to run it - the program has the same behaviour as someCode(); moreCode(); (i.e., without the loop). To find this out, one needs detailed information about the side effects of the (possibly useless) code. If I can prove that the code has no side effect that can be observed afterwards (in the example: in moreCode() or later), then the code is indeed useless. Of course, input/output of any form must be treated as a side effect, so that a System.out.println(...) is not considered useless.

    Example 3: Ignored return value. Instead of x = foo(); and making use of x, the method is called without storing the result: foo();. If the method has no side effect, its invocation is useless and can be dropped. Most likely, the bug here is that the returned value should have been used. Here, too, detailed information about side effects is needed.

    Can you think of similar types of bugs that might be detected (only) with detailed information about the heap, side effects, semantics of called methods, ...? Did you encounter bugs related to the ones shown above in "real life"? By the way, the tool is AProVE, and Java-related publications can be found on my homepage. Thanks a lot, Carsten

    Read the article

  • Onsite Interview: QA Engineer with More Emphasis on Java Skills

    - by coolrockers2007
    Hello, I have an onsite interview for a QA engineer position with a startup. During the phone interview, the interviewer said he would want to test my Java, JUnit and SQL skills on a whiteboard, with more importance placed on object-oriented skills. What kinds of questions can I expect? One more important issue: how do I overcome the fear of whiteboard interviews? I'm very bad at whiteboard sessions; I get completely tense. Please suggest tips to overcome my jinx.

    Read the article

  • Should mock objects for tests be created at a high or low level?

    - by Danack
    When creating unit tests, what is the best way to create mock objects that provide data to other objects? Should they be created at a 'high level' and intercept calls as soon as possible, or should they be created at a 'low level', so that as much of the real code as possible is still called?

    For example, I'm writing a test for some code that requires a NoteMapper object that allows Notes to be loaded from the DB:

        class NoteMapper {
            function getNote($sqlQueryFactory, $noteID) {
                // Create an SQL query from $sqlQueryFactory
                // Run that SQL
                // if null
                //     return null
                // else
                //     return new Note($dataFromSQLQuery)
            }
        }

    I could mock this object at a high level by creating a mock NoteMapper object, so that there are no calls to SQL at all, e.g.:

        class MockNoteMapper {
            function getNote($sqlQueryFactory, $noteID) {
                // $mockData = {'Test Note title', "Test note text"}
                // return new Note($mockData);
            }
        }

    Or I could do it at a very low level, by creating a MockSQLQueryFactory that, instead of actually querying the database, just provides mock data back, and passing that to the current NoteMapper object.

    It seems that creating mocks at a high level would be easier in the short term, but that in the long term doing it at a low level would be more powerful and possibly allow more automation of tests, e.g. by recording data in and out of a DB and then replaying that data for tests. Is there a recommended way of creating mocks? Are there any hard and fast rules about which is better, or should both be used where appropriate?
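
    A sketch of the two seams the question describes, written in Java since the idea is language-neutral - every name here is hypothetical, mirroring the PHP above:

        import java.util.Map;

        record Note(String title, String text) {}

        // High-level seam: replace the whole mapper with a canned fake.
        interface NoteMapper {
            Note getNote(SqlQueryFactory factory, int noteId);
        }

        class MockNoteMapper implements NoteMapper {
            public Note getNote(SqlQueryFactory factory, int noteId) {
                return new Note("Test Note title", "Test note text"); // no SQL involved at all
            }
        }

        // Low-level seam: keep the real mapper, fake only the query layer beneath it.
        interface SqlQueryFactory {
            Map<String, Object> run(String sql);
        }

        class MockSqlQueryFactory implements SqlQueryFactory {
            public Map<String, Object> run(String sql) {
                return Map.of("title", "Test Note title", "text", "Test note text");
            }
        }

    With the low-level fake, the real getNote() logic (null handling, Note construction) still executes under test; the high-level fake skips it entirely.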

    Read the article

  • bug: deviation from requirements vs deviation from expectations

    - by user970696
    I am not clear on this one. No matter the terminology, in the end a software fault/bug causes (according to a lot of sources):

    Deviation from requirements
    Deviation from expectations

    But if the expectations are not in the requirements, then a stakeholder could see a bug everywhere, since he expected it to work like this or that. So how can I really know? I did read that a specification can miss things, in which case a behaviour is of course expected but not specified (by mistake).

    Read the article

  • Dependency injection: what belongs in the constructor?

    - by Adam Backstrom
    I'm evaluating my current PHP practices in an effort to write more testable code. Generally speaking, I'm fishing for opinions on what types of actions belong in the constructor. Should I limit things to dependency injection? If I do have some data to populate, should that happen via a factory rather than as constructor arguments? (Here, I'm thinking about my User class, which takes a user ID and populates user data from the database during construction - something that obviously needs to change in some way.) I've heard it said that "initialization" methods are bad, but I'm sure that depends on what exactly is being done during initialization.

    At the risk of getting too specific, I'll also piggyback a more detailed example onto my question. For a previous project, I built a FormField class (which handled field value setting, validation, and output as HTML) and a Model class to contain these fields and do a bit of magic to ease working with fields. FormField had some prebuilt subclasses, e.g. FormText (<input type="text">) and FormSelect (<select>). Model would be subclassed so that a specific implementation (say, a Widget) had its own fields, such as a name and date of manufacture:

        class Widget extends Model {
            public function __construct( $data = null ) {
                $this->name         = new FormField('length=20&label=Name:');
                $this->manufactured = new FormDate;
                parent::__construct( $data ); // set above fields using incoming array
            }
        }

    Now, this does violate some rules I have read, such as "avoid new in the constructor," but to my eyes it does not seem untestable. These are properties of the object, not some black-box data generator reading from an external source. Unit tests would progressively build up to any test of Widget-specific functionality, so I could be confident that the underlying FormFields were working correctly during the Widget test. In theory I could provide the Model with a FieldFactory, which could supply custom field objects, but I don't believe I would gain anything from this approach. Is this a poor assumption?
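
    For comparison, a minimal sketch of the FieldFactory idea the question dismisses, translated into Java - all names are hypothetical stand-ins for the PHP classes above:

        import java.util.Map;

        interface FormField {
            void setValue(String raw);
        }

        // Hypothetical factory seam: a test can hand Widget fake fields
        // without Widget ever calling new on a concrete field class.
        interface FieldFactory {
            FormField text(String config);
            FormField date();
        }

        class Widget {
            final FormField name;
            final FormField manufactured;

            Widget(FieldFactory fields, Map<String, String> data) {
                this.name         = fields.text("length=20&label=Name:");
                this.manufactured = fields.date();
                data.forEach((k, v) -> { /* populate matching fields, as parent::__construct did */ });
            }
        }

    Whether the extra indirection pays off is exactly the question's trade-off: it buys substitutability in tests at the cost of one more interface.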

    Read the article

  • Quality of Code in unit tests?

    - by m3th0dman
    Is it worth spending time, when writing unit tests, to make sure the code written there has good quality and is very easy to read? When writing these kinds of tests I very often break the Law of Demeter, for faster writing and to avoid using so many variables. Technically, unit tests are not reused directly - they are strictly bound to the code - so I do not see any reason for spending much time on them; they only need to be functional.

    Read the article

  • Should tests be in the same Ruby file or in separated Ruby files?

    - by Junior Mayhé
    While using Selenium and Ruby to do some functional tests, I am worried about performance. So is it better to add all test methods to the same Ruby file, or should I put each one in a separate code file? Below is a sample with all tests in the same file:

        # encoding: utf-8
        require "selenium-webdriver"
        require "test/unit"

        class Tests < Test::Unit::TestCase
          def setup
            @driver = Selenium::WebDriver.for :firefox
            @base_url = "http://mysite"
            @driver.manage.timeouts.implicit_wait = 30
            @verification_errors = []
            @wait = Selenium::WebDriver::Wait.new :timeout => 10
          end

          def teardown
            @driver.quit
            assert_equal [], @verification_errors
          end

          def element_present?(how, what)
            @driver.find_element(how, what)
            true
          rescue Selenium::WebDriver::Error::NoSuchElementError
            false
          end

          def verify(&blk)
            yield
          rescue Test::Unit::AssertionFailedError => ex
            @verification_errors << ex
          end

          def test_1
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_2
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_3
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_4
            @driver.get(@base_url + "/")
            # a huge test here
          end

          def test_5
            @driver.get(@base_url + "/")
            # a huge test here
          end
        end

    Read the article

  • How to handle bugs that I think I fixed, but I'm not entirely sure

    - by vsz
    There are some types of bugs which are very hard to reproduce, which happen very rarely and seemingly at random. It can happen that I find a possible cause, fix it, test the program, and then can't reproduce the bug. However, as it was impossible to reliably reproduce the bug in the first place, and it happened so rarely, how can I indicate this in a bug tracker? What is the common way of doing it? If I set the status to fixed and the solution to fixed, it would imply something completely fixed, wouldn't it? Is it common practice to set the status to fixed and the solution to open, to indicate to the testers that "it's probably fixed, but needs more attention to make sure"?

    Edit: most (if not all) bug trackers have two properties for the state of a bug, though the names may differ. By status I mean new, assigned, fixed, closed, etc., and by solution I mean open (new), fixed, unsolvable, not reproducible, duplicate, not a bug, etc.

    Read the article

  • Mock RequireJS define dependencies with config.map

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/18/mock-requirejs-define-dependencies-with-config.map.aspx

    I had a module dependency that I'm pulling down with RequireJS, which I needed to use and write tests against. In this case, I don't care about the actual implementation of the module (it's simple enough that I'm just avoiding some AJAX calls). EDIT: make sure you look at the bottom example, after the edit, before using the config.map approach. I found that there is an easier way.

    I did not want to change the constructor of the consumer, as I had a chain of changes that would have to be made, and that would have been too invasive for this task. I found a question on StackOverflow with a short but helpful answer from "Artem Oboturov". We can use config.map from RequireJS to achieve this. Here is some code:

    A module example ("usefulModule" in Common/Modules/usefulModule.js):

        define([], function() {
            "use strict";

            var testMethod = function() {
                // ...
            };

            // add more functionality of the module

            return {
                testMethod: testMethod
            };
        });

    A consumer of usefulModule example:

        define([
            "Common/Modules/usefulModule"
        ], function(usefulModule) {
            "use strict";

            var consumerModule = function() {
                var self = this;
                // add functionality of the module
            };
        });

    Using config.map in the html of the test runner page (and in your Karma config - I'm still trying to figure this out):

        map: {
            '*': {
                // replace usefulModule with a mock
                'Common/Modules/usefulModule': '/Tests/Specs/Common/usefulModuleMock.js'
            }
        }

    With the new mapping, Require will load usefulModuleMock.js from Tests/Specs/Common instead of the real implementation. Some of the answers on StackOverflow mentioned Squire.js, which looked interesting, but I wasn't ready to introduce a new library at this time. That's all you need to be able to mock a dependency in RequireJS. However, there are many good cases when you should pass it in through the constructor instead of using this approach.

    EDIT: After all that, here's another, probably better way. The consumer class, updated:

        define([
            "Common/Modules/usefulModule"
        ], function(UsefulModule) {
            "use strict";

            var consumerModule = function() {
                var self = this;
                self.usefulModule = new UsefulModule();
                // add functionality of the module
            };
        });

    Jasmine test:

        define([
            "consumerModule",
            "/UnitTests/Specs/Common/Mocks/usefulModuleMock.js"
        ], function(consumerModule, UsefulModuleMock) {
            describe("when mocking out the module", function() {
                it("should probably just override the property", function() {
                    var consumer = new consumerModule();
                    consumer.usefulModule = new UsefulModuleMock();
                });
            });
        });

    Thanks for letting me think out loud :-).

    Read the article

  • Using automated BDD GUI tests to keep user-documentation screenshots up to date?

    - by k3b
    Are there developers out there who (ab)use the CaptureScreenshot() function of their automated GUI tests to also create up-to-date screenshots for the user documentation?

    Background: within the lifetime of an application, its GUI elements are constantly changing. It takes a lot of work to keep the user documentation up to date, especially if the example data in the pictures should match the textual description. If you already have automated BDD GUI tests, why not let them take screenshots at certain points? I am currently playing with web apps in .NET + SpecFlow + Selenium, but this topic also applies to other BDD engines (JRuby-Cucumber, MSpec, RSpec, ...) and GUI test frameworks (WatiN, Watir, MS White, ...).

    Any experience, thoughts or URL links on this topic would be helpful. How is the cost/benefit relation? Is it worth the effort? What are the drawbacks? See also: Is it practical to retroactively write specifications documenting a system via automated acceptance tests?
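
    As a sketch of the idea in Selenium's Java bindings (the question uses the .NET stack, so treat the helper name and docs path here as assumptions):

        import java.io.File;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.StandardCopyOption;
        import org.openqa.selenium.OutputType;
        import org.openqa.selenium.TakesScreenshot;
        import org.openqa.selenium.WebDriver;

        class DocScreenshots {
            // Call at a meaningful step of a GUI test to refresh one documentation image.
            static void captureForDocs(WebDriver driver, String name) throws IOException {
                File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
                Path target = Path.of("docs/screenshots/" + name + ".png"); // hypothetical docs layout
                Files.createDirectories(target.getParent());
                Files.copy(shot.toPath(), target, StandardCopyOption.REPLACE_EXISTING);
            }
        }

    Because the test already drove the app into a known state with known example data, the captured image stays consistent with the textual description around it.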

    Read the article

  • Isolated Unit Tests and Fine Grained Failures

    - by Winston Ewert
    One of the reasons often given for writing unit tests which mock out all dependencies, and are thus completely isolated, is to ensure that when a bug exists, only the unit tests for that unit will fail. (Obviously, an integration test may fail as well.) That way you can readily determine where the bug is. But I don't understand why this is a useful property. If my code were undergoing spontaneous failures, I could see why it's useful to readily identify the failure point. But if I have a failing test, it's either because I just wrote the test or because I just modified the code under test. In either case, I already know which unit contains the bug. What is the use of ensuring that a test fails only due to bugs in the unit under test? I don't see how it gives me any more precision in identifying the bug than I already had.

    Read the article

  • Testcase runner for parametrized testcases

    - by Razer
    Let me explain my situation. I'm planning a kind of test case runner for running test cases against external devices, which are microcontroller based. Let's consider the devices:

    Device 1
    Device 2

    There exist a lot of test cases which can be run with any of the devices above. For example:

    Testcase 1
    Testcase 2

    The main reason that all the test cases can be run with any device is that the test cases validate some standard, and this software should be extensible for future devices. The test cases themselves must be runnable with changing parameters. For example, Testcase 1 does some timing verification and needs the data rate as an input parameter: 4800, 9600, 19200.

    Now, hoping you understand the situation, let me explain my design questions. For implementing the test cases I thought about an attribute-based approach, like NUnit does it. The more complicated problem is how to define the parametrized test cases, like this:

    Device 1:
        Testcase 1: datarate: 4800, 9600, 19200
        Testcase 2: supply: 1, 2, 3

    Device 2:
        Testcase 1: datarate: 9600, 19200, 38400
        Testcase 2: supply: 3, 4, 5

    How would you design such a framework? I've done a similar design in Python, where I had for every device an XML containing the test case definitions, like:

        <Testcase name="Testcase 1" datarate="4800"/>
        <Testcase name="Testcase 1" datarate="9600"/>
        <Testcase name="Testcase 1" datarate="19200"/>
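
    For the attribute-based side, a minimal sketch of what a parametrized case looks like with JUnit 5 (NUnit's [TestCase] attribute is the analogue; names are hypothetical, and mapping the per-device parameter sets from the XML onto such a source would still be custom work):

        import static org.junit.jupiter.api.Assertions.assertTrue;

        import org.junit.jupiter.params.ParameterizedTest;
        import org.junit.jupiter.params.provider.ValueSource;

        class Testcase1 {
            // One invocation per data rate; a device-specific provider could
            // read these values from the per-device XML instead of hardcoding them.
            @ParameterizedTest
            @ValueSource(ints = {4800, 9600, 19200})
            void timingVerification(int datarate) {
                assertTrue(datarate > 0); // placeholder for the real timing check
            }
        }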

    Read the article

  • What if I can't make my unit test fail in "Red, Green, Refactor" of TDD?

    - by Joshua Harris
    So let's say that I have a test:

        @Test
        public void MoveY_MoveZero_DoesNotMove() {
            Point p = new Point(50.0, 50.0);
            p.MoveY(0.0);
            Assert.assertEquals(50.0, p.Y, 0.0);
        }

    This test then causes me to create the class Point:

        public class Point {
            double X;
            double Y;

            public Point(double x, double y) {
                X = x;
                Y = y;
            }

            public void MoveY(double yDisplace) {
                throw new NotYetImplementedException();
            }
        }

    Ok. It fails. Good. Then I remove the exception and I get green. Great, but of course I need to test whether it changes the value. So I write a test that calls p.MoveY(10.0) and checks that p.Y is equal to 60.0. It fails, so then I change the function to look like this:

        public void MoveY(double yDisplace) {
            Y += yDisplace;
        }

    Great, now I have green again and I can move on. I've tested not moving and moving in the positive direction, so naturally I should test a negative value. The only problem with this test is that, if I wrote the test correctly, it doesn't fail at first. That means I didn't follow the principle of "Red, Green, Refactor."

    Of course, this is a first-world problem of TDD, but getting a fail at first is helpful in that it shows that your test can fail. Otherwise this seemingly innocent test, passing for incorrect reasons, could fail later because it was written wrong. That might not be a problem if it happened 5 minutes later, but what if it happens to the poor sap who inherited your code two years later? What he knows is that MoveY does not work with negative values, because that is what the test is telling him. But it really could work, and the failure could just be a bug in the test. I don't think that would happen in this particular case, because the code sample is so simple, but if it were a large, complicated system that might not be the case. It seems crazy to say that I want to fail my tests, but that is an important step in TDD, for good reasons.
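
    To make the situation concrete, here is what that third test might look like, following the question's own style - it passes immediately against the finished MoveY, and one hedged way to see it red is noted in the comment:

        @Test
        public void MoveY_MoveNegative_MovesDown() {
            Point p = new Point(50.0, 50.0);
            p.MoveY(-10.0);
            // Passes on the first run. Temporarily reverting MoveY to the
            // throwing stub (or asserting a deliberately wrong value once)
            // is one way to confirm the test is actually able to fail.
            Assert.assertEquals(40.0, p.Y, 0.0);
        }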

    Read the article

  • Do you test your SQL/HQL/Criteria?

    - by 0101
    Do you test your SQL, or the SQL generated by your database framework? There are frameworks like DbUnit that allow you to create a real in-memory database and execute real SQL, but they are very hard to use (not developer-friendly, so to speak), because you need to prepare test data first (and it should not be shared between tests). P.S. I don't mean mocking the database or the framework's database methods, but tests that make you 99% sure that your SQL is working even after some hardcore refactoring.
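
    A minimal sketch of the in-memory approach using plain JDBC with H2 (H2 is an assumption here; DbUnit layers fixture loading on top of the same idea, and the table and values are invented for illustration):

        import static org.junit.Assert.assertEquals;

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import org.junit.Test;

        public class NoteQueryTest {
            @Test
            public void selectsTitleById() throws Exception {
                // Fresh, private in-memory database per test: no data shared between tests.
                try (Connection c = DriverManager.getConnection("jdbc:h2:mem:");
                     Statement s = c.createStatement()) {
                    s.execute("CREATE TABLE note(id INT PRIMARY KEY, title VARCHAR(64))");
                    s.execute("INSERT INTO note VALUES (1, 'hello')");
                    ResultSet rs = s.executeQuery("SELECT title FROM note WHERE id = 1");
                    rs.next();
                    assertEquals("hello", rs.getString("title")); // real SQL, really executed
                }
            }
        }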

    Read the article

  • What are the processes of true Quality Assurance?

    - by user970696
    Having read that Quality Assurance (QA) is focused on processes (while Quality Control (QC) is focused on the product), I note that the books often present QA as the verification process - doing peer reviews, inspections, etc. I still tend to think these are also QC, as they check intermediate products. Elsewhere I have read that a QA activity is, e.g., choosing the right bug tracker. That sounds better to me in terms of process improvement. The question (which the close-voting person obviously missed) is pretty clear: what are the activities that true QA should perform? I would appreciate references, as I am working on a thesis dealing with all these discrepancies and inconsistencies in the software quality world.

    Read the article

  • Elo system behaves oddly in program I've created

    - by adc
    Alright, so I'm looking to build a small program (C# and XAML) that, essentially, does this:

    1. Generate an array of players. Each player has a current rating and a true rating. I set current rating to 1200 as a starting point right now; I've also tried setting it to the true rating, and to the average of the two. True rating is what their skill level actually is. The true rating is calculated based on percentages from the current League of Legends rating system; generating an array of 970 thousand players produces results very similar to the published distribution data (link removed due to URL limit - but trust me, the results are very similar). This array is of a length specified by the user. If need be, the array is sorted from smallest to largest.

    2. Play X number of games, again specified by the user. This is done by taking the array of players (which is sorted by current rating after being created) and running through it in groups of 10. The first five are on team one, the second five are on team two. It then takes the true rating of these players and calculates an expected chance to win using the Elo system. It generates a random double and compares it to the expected chance to win; if the number is lower, team one wins - otherwise team two wins. I then update the rating of the players, again via the Elo system, giving the winning team a score of 1 and the losing team a score of 0. I use a K value of 36 (but have tried 12, 24, and even higher ones) and an F value of 400. After going through the entire loop of players (which I have conveniently forced to be a multiple of ten), it sorts the array again by current rating.

    This, if my understanding of the Elo system is correct, runs properly. However, it doesn't seem to work. I have a running test telling me how many players of the full array are within 100 current rating of their true rating. I would expect some portion of the population to be outside this range (as probability is not always going to go in their favor), but a full 40-45% of the population is outside it. I also have it output the maximum difference between true and current rating - and I have never seen this drop below 500! It hovers between 550-600, occasionally going over or under.

    I'm at a loss as to what to change - I've fiddled with the K and F values, where I start all the players, etc., but nothing changes the fact that eventually a good 40% of the population is outside the range. And it isn't that I have it playing too few games - it has now run through over 60 thousand games and the problem never disappears or really fluctuates.

    The full C# code, including everything except the XAML file and the Player class (pastebin is being very slow and I can only post two links, so I can't link to the XAML file): http://pastebin.com/rFcZRL84 The Player class: http://pastebin.com/4cJTdTRu

    I guess my question is: did I do anything wrong? Is there a problem with the way I implemented the system, or is it just that Riot uses a significantly modified Elo system? I don't think it's the latter, as that still wouldn't explain the massive differences between true and current rating.
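
    For reference, the standard Elo update the question describes, as a small Java sketch - averaging each side's ratings to get a team rating is one common convention, and whether it matches the pastebin code is an assumption:

        class EloMath {
            static final double K = 36.0;
            static final double F = 400.0;

            // Expected score for side A against side B.
            static double expected(double ratingA, double ratingB) {
                return 1.0 / (1.0 + Math.pow(10.0, (ratingB - ratingA) / F));
            }

            // Post-game update: score is 1.0 for a win, 0.0 for a loss.
            static double update(double rating, double score, double expected) {
                return rating + K * (score - expected);
            }
        }

    Checking the simulation's expected-score and update steps against these two formulas (and confirming that matchmaking by current rating while deciding outcomes by true rating is intended) is one way to narrow down where the drift comes from.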

    Read the article

  • Registration-free hosting for ASP.NET web service

    - by Andrew
    I've built a simple ASP.NET web service, tested it locally, and would like to test it when externally hosted. Are there free hosting services available where I can just upload the assembly and service description file and test it straight away, without registering an account, etc.? My service does not do anything malicious and I am OK with running it in a restricted environment (security sandbox, limited bandwidth, calls per second, etc.). I have heard about appharbor.com, but it looks like overkill for testing a simple web service.

    Read the article

  • Verification of requirements question

    - by user970696
    Doing a lot of reading about V&V, I need to clarify the following. A lot of definitions (the less formal ones found in books) define verification like this: Verification: the software should conform to its specification. But then they speak about requirements verification, design verification, etc. If I say that these items are "software" in terms of applying the definition, what should I check them against - what specification should the requirements, which are the most basic information, conform to? And one more thing: shouldn't requirements also be validated, to make sure they meet the customer's needs? All the texts I have speak only about software validation at the end of the development process.

    Read the article

  • Tender vs. Requirements vs. Solution Design

    - by Tom Tom
    Conventionally, which of the above documents is deemed to hold the most weight when it comes to system acceptance? I recently had a conversation along these lines.

    It was argued that the initial requirements / tender documentation should be used to determine system acceptance. It was said that the solution design only serves to describe the way in which the system will solve the problem, not the problem it will solve. Furthermore, it was argued that if requirements are missed during solution design, the requirements should be referenced during system acceptance, and that if any requirements were missed there, the original tender should be referenced.

    Conversely, I suggested that - while requirements may be based on the original tender - they supersede it once agreed with the stakeholders. Furthermore, during solution design, analysis is performed to address and refine these initial requirements, translating them into a system capable of meeting the actual requirements. Once signed off by the relevant users, this solution design should absolutely represent the requirements (by virtue of the fact that it's designed upon them), but it actually supersedes them as the basis for system acceptance.

    Is one of the above arguments more valid than the other?

    Read the article

  • What is considered third-party code?

    - by Songo
    Inspired by the question "Using third-party libraries - always use a wrapper?", I wanted to know what people actually consider third-party libraries. An example from PHP: if I'm building an application using the Zend Framework, should I treat the Zend Framework libraries as third-party code? An example from C#: if I'm building a desktop application, should I treat all .NET classes as third-party code? An example from Java: should I treat all libraries in the JDK as third-party libraries? Some people say that if a library is stable and won't change often, then one doesn't need to wrap it. However, I fail to see how one would test a class that depends on third-party code without wrapping it.
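
    A small Java sketch of the wrapping-for-testability argument the question ends on - the Clock example is an illustration of the pattern, not taken from the question:

        // Code that calls System.currentTimeMillis() directly cannot be
        // tested against a fixed point in time; a thin wrapper restores the seam.
        interface Clock {
            long now();
        }

        class SystemClock implements Clock {
            public long now() {
                return System.currentTimeMillis(); // real dependency, used in production
            }
        }

        class FixedClock implements Clock {
            private final long instant;
            FixedClock(long instant) { this.instant = instant; }
            public long now() { return instant; } // deterministic in tests
        }

    By this reasoning, "third party" is less about who ships the code and more about whether your tests can substitute it.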

    Read the article

  • Link tracking: Amazon or Google way

    - by Howard
    When building a shopping site, the best approach is to reference some successful stores, like Amazon. In the area of link tracking - for example, to see which section of your front page yields better conversion:

    The Amazon way: generate a unique URL for each link on the front page, such as

        http://www.amazon.com/gp/product/B0083Q04IQ/ref=s9_pop_gw_g424_ir04/175-6575053-9292830?pf_rd_m=ATVPDKIKX0DER&pf_rd_s=center-2&pf_rd_r=0AMJCKBBQA63EP0XHB86&pf_rd_t=101&pf_rd_p=1263340922&pf_rd_i=507846

    The Google way: use Google Analytics:

        <a href="/products/abc" onClick="javascript: pageTracker._trackPageview('/from-main-menu/products/abc');">

    What are the pros and cons of the above two approaches (besides Google requiring JS support)?

    Read the article

  • What is the best way to find a Python Google App Engine coach?

    - by David Haddad
    I'm a software engineer and have been building Google App Engine apps with Python for about a year. I have a pretty good familiarity with the main concepts: the web app framework, modeling, queues, memcache, Django templates, etc. Where I think I'm lacking is in methodology: architecting the app, using git for versioning, designing and writing unit tests. I'm totally convinced I should incorporate these practices in my development style, and have started reading up on them. However, I've learned that I'm a much faster learner when I have someone experienced to ask questions of and interact with. IRC channels and forums like Stack Overflow are great, but sometimes you want something more dynamic that produces results faster. So my question is: how can a person find an experienced engineer who is familiar with the technologies he uses and who is willing to give him a couple of hours of Skype coaching sessions per week in return for an hourly fee?

    Read the article

  • Unit test: How best to provide an XML input?

    - by TheSilverBullet
    I need to write a unit test which validates the serialization of two attributes of an XML file (size ~ 30 KB). What is the best way to provide the input for this test? Here are the options I have considered:

    Add the file to the project and use a file reader
    Pass the contents of the XML as a string
    Create the XML through a program and pass it

    Which is my best option, and why? If there is another way which you think is better, I would love to hear it.
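
    A Java sketch of the second option (the element names and attributes are invented stand-ins; for a ~30 KB document, the first option with the file as a classpath resource tends to read better):

        import static org.junit.Assert.assertEquals;

        import java.io.StringReader;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.junit.Test;
        import org.w3c.dom.Document;
        import org.xml.sax.InputSource;

        public class SerializationTest {
            @Test
            public void readsAttributesFromInlineXml() throws Exception {
                // Trimmed stand-in for the real 30 KB input.
                String xml = "<order id=\"42\" currency=\"EUR\"/>";
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder()
                        .parse(new InputSource(new StringReader(xml)));
                assertEquals("42", doc.getDocumentElement().getAttribute("id"));
                assertEquals("EUR", doc.getDocumentElement().getAttribute("currency"));
            }
        }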

    Read the article
