Search Results

Search found 17871 results on 715 pages for 'ui testing'.

  • SO-Aware sessions in Dallas and Houston

    - by gsusx
    Our WCF registry, SO-Aware, keeps being evangelized around the world. This week Tellago Studios' Dwight Goins will be speaking at Microsoft events in Dallas and Houston ( https://msevents.microsoft.com/cui/EventDetail.aspx?culture=en-US&EventID=1032469800&IO=ycqB%2bGJQr78fJBMJTye1oA%3d%3d ) about WCF management best practices using SO-Aware. If you are in the area and passionate about WCF, you should definitely swing by and give Dwight a hard time ;)...(read more)

    Read the article

  • Do Repeat Yourself in Unit Tests

    - by João Angelo
    Don’t get me wrong, I’m a big supporter of the DRY (Don’t Repeat Yourself) principle, except when it comes to unit tests. Why? Well, in my opinion a unit test should be a self-contained group of actions with the intent to test a very specific piece of code, and it should not depend on externals shared with other unit tests. In a typical unit test we can divide the code into two major groups: preparation of preconditions for the code under test, and invocation of the code under test. It’s in the first group that you are tempted to refactor code that several unit tests have in common into helper methods that each of them can then call. Another way to avoid duplicating code is to use the built-in infrastructure of some unit test frameworks, such as SetUp/TearDown methods that automatically run before and after each unit test. I must admit that in the past I was guilty of both charges, but what at first seemed a good idea, since I was removing code duplication, turned out to offer no added value and even complicated things when a given test failed. We love unit tests because of their rapid feedback when something goes wrong. However, that feedback usually requires reading the code of the failed test. Given this, what do you prefer: reading a single method, or wandering through several methods like SetUp/TearDown and private common methods? I say it again: do repeat yourself in unit tests. It may feel wrong at first, but I bet you won’t regret it later.
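
    A minimal JUnit 4 sketch of the idea, with Order as a hypothetical stand-in for the class under test (SetUp/TearDown correspond to @Before/@After here); each test repeats its own arrange step instead of sharing a fixture, so a failure reads top to bottom in one method:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        // Hypothetical class under test; stands in for whatever your tests exercise.
        class Order {
            private int total;
            void addItem(int price) { total += price; }
            void applyDiscount(int percent) { total -= total * percent / 100; }
            int total() { return total; }
        }

        public class OrderTest {
            @Test
            public void totalSumsItemPrices() {
                // Arrange inline, even though the next test starts the same way.
                Order order = new Order();
                order.addItem(100);
                order.addItem(50);
                assertEquals(150, order.total());
            }

            @Test
            public void discountReducesTotal() {
                Order order = new Order();
                order.addItem(100);
                order.applyDiscount(10);
                assertEquals(90, order.total());
            }
        }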

    Read the article

  • Why do some consider static analysis testing while others do not?

    - by user970696
    While preparing for the ISTQB certification, I found that they actually call static analysis a form of static testing, while some engineering books distinguish between static analysis and testing, the latter being a dynamic activity. I tend to think that static analysis is not testing in the true sense, because it does not test; it checks/verifies. But I would love to hear the opinion of the true experts here. Thank you

    Read the article

  • Separating physics and game logic from UI code

    - by futlib
    I'm working on a simple block-based puzzle game. The gameplay consists pretty much of moving blocks around in the game area, so it's a trivial physics simulation. My implementation, however, is in my opinion far from ideal and I'm wondering if you can give me any pointers on how to do it better. I've split the code up into two areas, game logic and UI, as I did with a lot of puzzle games:

    - The game logic is responsible for the general rules of the game (e.g. the formal rule system in chess).
    - The UI displays the game area and pieces (e.g. chess board and pieces) and is responsible for animations (e.g. animated movement of chess pieces).

    The game logic represents the game state as a logical grid, where each unit is one cell's width/height on the grid. So for a grid of width 6, you can move a block of width 2 four times until it collides with the boundary. The UI takes this grid and draws it by converting logical sizes into pixel sizes (that is, it multiplies them by a constant). However, since the game has hardly any game logic, my game logic layer [1] doesn't have much to do except collision detection. Here's how it works:

    - Player starts to drag a piece.
    - UI asks game logic for the legal movement area of that piece and lets the player drag it within that area.
    - Player lets go of a piece.
    - UI snaps the piece to the grid (so that it is at a valid logical position).
    - UI tells game logic the new logical position (via mutator methods, which I'd rather avoid).

    I'm not quite happy with that:

    - I'm writing unit tests for my game logic layer, but not the UI, and it turned out all the tricky code is in the UI: stopping the piece from colliding with others or the boundary, and snapping it to the grid.
    - I don't like the fact that the UI tells the game logic about the new state. I would rather have it call a movePieceLeft() method or something like that, as in my other games, but I didn't get far with that approach, because the game logic knows nothing about the dragging and snapping that's possible in the UI.

    I think the best thing to do would be to get rid of my game logic layer and implement a physics layer instead. I've got a few questions regarding that:

    - Is such a physics layer common, or is it more typical to have the game logic layer do this?
    - Would the snapping-to-grid and piece-dragging code belong to the UI or the physics layer?
    - Would such a physics layer typically work with pixel sizes or with some kind of logical unit, like my game logic layer?
    - I've seen event-based collision detection in a game's code base once: the player would just drag the piece, the UI would render that obediently and notify the physics system, and the physics system would call an onCollision() method on the piece once a collision is detected. What is more common, this approach or asking for the legal movement area first?

    [1] Layer is probably not the right word for what I mean, but subsystem sounds overblown and class is misleading, because each layer can consist of several classes.
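
    As a rough illustration of the movePieceLeft()-style API in question, here is a minimal logic-layer sketch in Java; all names (BoardLogic, Piece, tryMoveTo) are made up, it works purely in grid units, and collisions with other pieces are omitted:

        // Hypothetical logic-layer API, in grid units only; the UI converts
        // cells to pixels and handles dragging, the logic never sees either.
        class Piece {
            int x, width; // cell coordinate and size on the logical grid
            Piece(int x, int width) { this.x = x; this.width = width; }
        }

        class BoardLogic {
            private final int gridWidth;
            BoardLogic(int gridWidth) { this.gridWidth = gridWidth; }

            // The UI asks for the legal horizontal range before a drag starts...
            int minX(Piece p) { return 0; }
            int maxX(Piece p) { return gridWidth - p.width; }

            // ...and commits the final cell after snapping, instead of pushing
            // raw state in through a mutator.
            boolean tryMoveTo(Piece p, int cellX) {
                if (cellX < minX(p) || cellX > maxX(p)) return false;
                p.x = cellX;
                return true;
            }
        }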

    Read the article

  • Are there any formalized/mathematical theories of software testing?

    - by Erik Allik
    Googling "software testing theory" only seems to give theories in the soft sense of the word; I have not been able to find anything that would classify as a theory in the sense of mathematics, information theory, or some other scientific field. What I'm looking for is something that formalizes what testing is, the notions used, what a test case is, the feasibility of testing something, the practicality of testing something, the extent to which something should be tested, a formal definition/explanation of code coverage, etc. UPDATE: Also, I'm not sure, intuitively, about the connection between formal verification and what I asked, but there's clearly some sort of connection.

    Read the article

  • Weird Results A/B Test in Google Website Optimizer

    - by Yisroel
    I set up a test in Google Website Optimizer that has 3 variations: original (A), B, and C. In order to further validate the results of the test, I added variation C as an exact copy of the original. And that's where the results get weird. 6 days into the test, the best performing variation is C. It outperforms the original by 18.4%! How is that possible? Do I now discount the results of this test entirely?
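
    For what it's worth, an 18.4% gap between identical pages is usually plain sampling noise at low traffic. A rough two-proportion z-test, sketched in Java with made-up visit and conversion counts (400 visits per variation), shows how to check whether such a lift is statistically significant:

        // Rough two-proportion z-test for an A/A comparison; the visit and
        // conversion counts below are made up for illustration.
        public class ABCheck {
            public static void main(String[] args) {
                double nA = 400, convA = 38;   // original
                double nC = 400, convC = 45;   // identical variation C, ~18.4% "lift"
                double pA = convA / nA, pC = convC / nC;
                double pPool = (convA + convC) / (nA + nC);
                double se = Math.sqrt(pPool * (1 - pPool) * (1 / nA + 1 / nC));
                double z = (pC - pA) / se;
                // |z| < 1.96 means the gap is not significant at the 95% level,
                // i.e. an identical page "winning" like this is plain noise.
                System.out.printf("z = %.2f%n", z); // prints z = 0.81 here
            }
        }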

    Read the article

  • How can I unit test rendering output?

    - by stephelton
    I've been embracing Test-Driven Development (TDD) recently and it's had wonderful effects on my development output and the resiliency of my codebase. I would like to extend this approach to some of the rendering work that I do in OpenGL, but I've been unable to find any good approaches to it. I'll start with a concrete example so we know what kinds of things I want to test; let's say I want to create a unit cube that rotates about some axis, and I want to ensure that, for some number of frames, each frame is rendered correctly. How can I create an automated test case for this? Preferably, I'd even be able to write the test case before writing any code to render the cube (per usual TDD practice). Among many other things, I'd want to make sure that the cube's size, location, and orientation are correct in each rendered frame. I may even want to make sure that the lighting equations in my shaders are correct in each frame. The only remotely useful approach I've come across involves comparing rendered output to a reference output, which generally precludes TDD practice and is very cumbersome. I could go on about other desired requirements, but I'm afraid the ones I've listed are already out of reach.
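
    One pragmatic compromise that keeps the TDD loop is to unit test the transform math that feeds the renderer rather than the rasterized pixels. A dependency-free JUnit sketch (the hand-rolled rotateY helper and the scenario are illustrative, not from the question):

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        // Tests the cube's per-frame transform instead of the rendered pixels;
        // the rotation is hand-rolled here to avoid any GL or math dependency.
        public class CubeTransformTest {
            // Rotate a point about the Y axis by 'angle' radians.
            static double[] rotateY(double[] p, double angle) {
                double c = Math.cos(angle), s = Math.sin(angle);
                return new double[] { c * p[0] + s * p[2], p[1], -s * p[0] + c * p[2] };
            }

            @Test
            public void quarterTurnMovesFrontCornerToSide() {
                double[] corner = { 0.5, 0.5, 0.5 };       // unit-cube corner
                double[] r = rotateY(corner, Math.PI / 2); // 90 degrees
                assertEquals(0.5, r[0], 1e-9);
                assertEquals(0.5, r[1], 1e-9);
                assertEquals(-0.5, r[2], 1e-9);
            }
        }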

    Read the article

  • Writing Selenium tests: should I just get it done or get it right?

    - by Peter Smith
    I'm attempting to drive my user interface (heavy on JavaScript) through Selenium. I've already tested the rest of my Ajax interaction with Selenium successfully. However, this one particular method seems to be eluding me, because I can't seem to fake the correct click event. I could solve this problem by simply having the test wait for the user to click a point and then continuing, but that seems like a cop-out. And I'm really running out of time before my deadline to have this done and working. Should I just get this done and move on, or should I spend an extra (unknown) amount of time to fix this problem and have my Selenium tests 100% automated?
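
    For reference, two common workarounds before giving up on a stubborn click, sketched with the Selenium Java bindings (the locator and offsets are illustrative): building the native event sequence explicitly via the Actions API, or dispatching the click in JavaScript:

        import org.openqa.selenium.By;
        import org.openqa.selenium.JavascriptExecutor;
        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.WebElement;
        import org.openqa.selenium.interactions.Actions;

        public class ClickWorkarounds {
            static void clickStubbornElement(WebDriver driver) {
                WebElement target = driver.findElement(By.id("game-area")); // illustrative locator

                // Workaround 1: synthesize the full move-then-click sequence,
                // which many JS handlers need to see before reacting.
                new Actions(driver).moveToElement(target, 40, 25).click().perform();

                // Workaround 2: bypass native events and dispatch the click in JS.
                ((JavascriptExecutor) driver).executeScript("arguments[0].click();", target);
            }
        }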

    Read the article

  • Is it correct to fix bugs without adding new features when releasing software for system testing?

    - by Pratik
    This question is for experienced testers or test leads. Here is a scenario from a software project: say the dev team has completed the first iteration of 10 features and released it to system testing. The test team has created test cases for these 10 features and estimated 5 days for testing. The dev team of course cannot sit idle for 5 days, so they start building 10 new features for the next iteration. During this time the test team finds defects and raises bugs. The bugs are prioritised and some of them have to be fixed before the next iteration. The catch is that the testers will not accept a new release containing new features or changes to existing features until all those bugs are fixed. The test team asks: how else can we guarantee a stable release for testing if new features are introduced along with the bug fixes? They also cannot rerun all their test cases as regression tests every iteration. Apparently this is proper testing process according to ISTQB. This means the dev team has to create a branch of code solely for bug fixing and another branch where they continue development, which adds merging overhead, especially with refactoring and architectural changes. Do you agree that this is a common testing principle? Is the test team's concern valid? Have you encountered this in practice in your projects?

    Read the article

  • How to unit test image processing code?

    - by rold2007
    I'm working in image processing (mainly OCR) and I wonder how I should integrate unit tests into my development. I'm already using unit tests for more "common" types of code, but when dealing with image processing code I'm not sure how to approach it. This kind of code always needs some image data as input/output, and mocking this is not obvious. For now I'm mostly doing integration tests, but they take a while to run, and I would like some ideas on how to break this kind of code down into unit tests so that I can run them more quickly.
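
    One common approach is to generate tiny synthetic images inside the test itself and assert on a single pipeline stage at a time. A Java sketch, where binarize() is a hypothetical stage under test standing in for whatever your pipeline does:

        import static org.junit.Assert.assertFalse;
        import static org.junit.Assert.assertTrue;
        import java.awt.image.BufferedImage;
        import org.junit.Test;

        public class BinarizeTest {
            // Hypothetical stage under test: true where the pixel is darker
            // than the threshold (reads the blue channel of the ARGB int).
            static boolean[][] binarize(BufferedImage img, int threshold) {
                boolean[][] out = new boolean[img.getHeight()][img.getWidth()];
                for (int y = 0; y < img.getHeight(); y++) {
                    for (int x = 0; x < img.getWidth(); x++) {
                        out[y][x] = (img.getRGB(x, y) & 0xFF) < threshold;
                    }
                }
                return out;
            }

            @Test
            public void darkPixelBecomesForeground() {
                // A 2x1 fixture built in-memory: one black pixel, one white pixel.
                BufferedImage img = new BufferedImage(2, 1, BufferedImage.TYPE_INT_RGB);
                img.setRGB(0, 0, 0x000000);
                img.setRGB(1, 0, 0xFFFFFF);
                boolean[][] bits = binarize(img, 128);
                assertTrue(bits[0][0]);
                assertFalse(bits[0][1]);
            }
        }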

    Read the article

  • Is static universally "evil" for unit testing, and if so, why does ReSharper recommend it?

    - by Vaccano
    I have found that there are only 3 ways to unit test (mock/stub) static dependencies in C#.NET: Moles, TypeMock, JustMock. Given that two of these are not free and one has not hit release 1.0, mocking static stuff is not too easy. Does that make static methods and such "evil" (in the unit testing sense)? And if so, why does ReSharper want me to make anything that can be static, static? (Assuming ReSharper is not also "evil".) Clarification: I am talking about the scenario where you want to unit test a method and that method calls a static method in a different unit/class. By most definitions of unit testing, if you just let the method under test call the static method in the other unit/class, then you are not unit testing, you are integration testing. (Useful, but not a unit test.)
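
    For context, the framework-free workaround in any language is to put an instance seam in front of the static call so tests can substitute it. A sketch of the pattern in Java (the question is about C#, but the seam is identical there; all names here are illustrative):

        // Hypothetical seam around a static dependency.
        interface Clock {
            long now();
        }

        // Production implementation delegates to the static call...
        class SystemClock implements Clock {
            public long now() { return System.currentTimeMillis(); }
        }

        class ReceiptPrinter {
            private final Clock clock;
            ReceiptPrinter(Clock clock) { this.clock = clock; }
            String stamp() { return "printed at " + clock.now(); }
        }

        // ...while a test passes a fixed fake, no mocking framework needed:
        //   new ReceiptPrinter(() -> 42L).stamp()  ->  "printed at 42"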

    Read the article

  • How can I test database access methods in Java?

    - by javaStudent
    I want to write a test for a method that accesses a database, such as the following:

        public class MyClass {
            public String getAddress(int id) {
                String query = "Select * from Address where id=" + id;
                // some additional statements
                resultSet = statement.executeQuery(query);
                return resultSet.getString(ADDRESS);
            }
        }

    How can I test this method? I am using Java.
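
    One common answer is to run the method against an in-memory database such as H2. A sketch, assuming the H2 driver is on the classpath and that MyClass is refactored to accept the Connection it uses (parameterizing the query with a PreparedStatement would also remove the injection risk in the string concatenation):

        import static org.junit.Assert.assertEquals;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;
        import org.junit.Test;

        public class MyClassTest {
            @Test
            public void getAddressReadsRowById() throws Exception {
                // In-memory H2 database: nothing touches disk and each test
                // can build its own schema and rows.
                Connection conn = DriverManager.getConnection("jdbc:h2:mem:addressdb");
                try (Statement st = conn.createStatement()) {
                    st.execute("CREATE TABLE Address(id INT PRIMARY KEY, address VARCHAR(100))");
                    st.execute("INSERT INTO Address VALUES (1, '221B Baker Street')");
                }
                // Assumes MyClass takes its Connection via the constructor.
                MyClass myClass = new MyClass(conn);
                assertEquals("221B Baker Street", myClass.getAddress(1));
            }
        }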

    Read the article

  • Assignments in mock return values

    - by zerkms
    (I will show examples using PHP and PHPUnit, but this may apply to any programming language.) The case: let's say we have a method A::foo that delegates some work to class M and returns the value as-is. Which of these solutions would you choose?

        // 1st: repeat the literal in the assertion
        $mock = $this->getMock('M');
        $mock->expects($this->once())
             ->method('bar')
             ->will($this->returnValue('baz'));
        $obj = new A($mock);
        $this->assertEquals('baz', $obj->foo());

        // 2nd: assign inside the returnValue() call
        $mock = $this->getMock('M');
        $mock->expects($this->once())
             ->method('bar')
             ->will($this->returnValue($result = 'baz'));
        $obj = new A($mock);
        $this->assertEquals($result, $obj->foo());

        // 3rd: assign up front
        $result = 'baz';
        $mock = $this->getMock('M');
        $mock->expects($this->once())
             ->method('bar')
             ->will($this->returnValue($result));
        $obj = new A($mock);
        $this->assertEquals($result, $obj->foo());

    Personally I always follow the 2nd solution, but just 10 minutes ago I had a conversation with a couple of developers who said that it is "too tricky" and chose the 3rd or the 1st. So what would you usually do? And do you have any conventions to follow in such cases?

    Read the article

  • How do I check that my tests were not removed by other developers?

    - by parxier
    I've just come across an interesting collaborative coding issue at work. I wrote some unit/functional/integration tests and implemented new functionality in an application that has ~20 developers working on it. All tests passed and I checked in the code. The next day I updated my project and noticed (by chance) that some of my test methods had been deleted by other developers (merging problems on their end). The new application code was not touched. How can I detect such a problem automatically? I mean, I write tests to automatically check that my code still works (or was not deleted); how do I do the same for the tests themselves? We're using Java, JUnit, Selenium, SVN and Hudson CI, if it matters.

    Read the article

  • Google Analytics Content Experiments for non-simultaneous tests

    - by mnort9
    I really like how Google Analytics displays the results of content experiments. However, it seems the tool only works for simultaneous tests. I'd like to use the tool without implementing the page variation code on my site. For example, I want to test copy on an ecommerce category page. The original page variation would be the current page for the past 2,500 visits. After making the copy changes, the new variation would be shown for the next 2,500 visits. I realize I can simply record the metrics before and after each variation, but I'd like to take advantage of Google's presentation of the experiment. Is it possible to use Content Experiments in this way?

    Read the article

  • Should the test and the fix be written by different people?

    - by Nutel
    There is a common practice in TDD to write a test before the fix, to avoid regression and simplify fixing. I just wonder: what if the test and the fix are written by different people? The total time spent will be almost the same, but since three people (including the tester) will now have thought about possible failures, we increase the probability that the fix covers all possible failure scenarios. Does this practice make sense, or does it just waste the additional time needed for one more person to familiarize themselves with the bug?

    Read the article

  • Where can I find statistics / figures on how long testing should / could take?

    - by NoCarrier
    I'm trying to convince management that testing/QA takes considerably longer than non-developers think. Some smaller shops don't have budgets for testers, and PHBs automatically assume the developer will spend a few minutes after every build "testing" and deliver a perfectly functional system. Can someone point me to some numbers? E.g. testing should be XX% of your total man-hour count, etc.? Or perhaps some real-world experience? My goal is to have some numbers that are grounded in real life, so I can justify time/effort allocations for "proper" testing when preparing estimates and timelines for applications. Maybe not full-blown 100% TDD, but pragmatically close to it. I apologize if I seem vague.

    Read the article

  • What set of tools makes up "the rails way" of testing javascript in the browser?

    - by Jordan Feldstein
    What's the consensus for doing in-browser (either headless or remote-controlled) testing of JavaScript? Unit testing my JS is nice, but can't protect against irresponsible changes to the DOM. Unit testing the JS and functional testing the views to make sure they both provide and use the same, correct DOM might work, but then the link between JS and DOM is covered in two places, which seems brittle or cumbersome. Is there an acknowledged "Rails way" to implement full-stack tests, where I can run my JavaScript against the DOM rendered by the rest of the app and check the results? (Something like what PHPUnit and Selenium give us, but inside the Rails framework?)

    Read the article

  • Should I use a separate class per test?

    - by user460667
    Taking the following simple method, how would you suggest I write a unit test for it (I am using MSTest, however the concepts are similar in other tools)?

        public void MyMethod(MyObject myObj, bool validInput)
        {
            if (!validInput)
            {
                // Do nothing
            }
            else
            {
                // Update the object
                myObj.CurrentDateTime = DateTime.Now;
                myObj.Name = "Hello World";
            }
        }

    If I try to follow the rule of one assert per test, my logic would be that I should have a class initialise method which executes the method under test, and then individual tests which check each property on myObj:

        [TestClass]
        public class MyTest
        {
            MyObject myObj;

            [TestInitialize]
            public void MyTestInitialize()
            {
                this.myObj = new MyObject();
                MyMethod(myObj, true);
            }

            [TestMethod]
            public void IsValidName()
            {
                Assert.AreEqual("Hello World", this.myObj.Name);
            }

            [TestMethod]
            public void IsDateNotNull()
            {
                Assert.IsNotNull(this.myObj.CurrentDateTime);
            }
        }

    Where I am confused is around the TestInitialize. If I execute the method under TestInitialize, I would need separate classes per variation of parameter inputs. Is this correct? This would leave me with a huge number of files in my project (unless I have multiple classes per file). Thanks

    Read the article

  • How to populate a private container for unit test?

    - by Sardathrion
    I have a class that defines a private container (well, __container to be exact, since it is Python). I use the information within said container as part of the logic of what the class does, and I have the ability to add/delete its elements. For unit tests, I need to populate this container with some data. That data depends on the test being run, so putting it all in setUp() would be impractical and bloated, plus it could add unwanted side effects. Since the data is private, I can only add things via the public interface of the object. This runs code that need not run during a unit test, and in some cases is just a copy and paste from another test. Currently, I am mocking the whole container, but somehow that does not feel like an elegant solution. Due to the Python mocking framework (mock), this requires the container to be public, so I can use patch.dict(). I would rather keep that data private. What pattern can one use to populate the container without exercising the public methods, so I have data to test with? Is there a way to do this with mock's patch.dict() that I missed?

    Read the article

  • JMeter: how to assign a single distinct value from CSV Data Set Config to each thread in a thread group?

    - by JohnnyM
    I have to run a load test for a relatively large number of users, so I can't really use the User Parameters pre-processor to parametrize each thread with custom user data. I've read that I should use the CSV Data Set Config instead. However, I've run into a problem with how JMeter interprets the input of this config. Example: I have a thread group of 3 threads with Loop Count: 10, containing one HTTP request sampler with server www.example.com and path \${user}. The CSV file for the CSV Data Set Config to extract the user parameter (one value per line):

        1
        2
        3
        4
        5

    The expected output is that for thread 1-x the path of each request should be \x, so the output file should consist of 10 samples per thread, namely:

    - for thread 1-1: 10 requests to www.example.com\1
    - for thread 1-2: 10 requests to www.example.com\2
    - for thread 1-3: 10 requests to www.example.com\3

    Instead, I get requests to each of \1 - \5 and then to EOF. Does anyone know how to achieve the expected effect with CSV Data Set Config in JMeter 2.9?

    Read the article
