Search Results

Search found 10149 results on 406 pages for 'acceptance testing'.

  • Are there any language-agnostic unit testing frameworks?

    - by Bringer128
    I have always been skeptical of rewriting working code, and porting code is no exception. However, with the advent of TDD and automated testing it is much more reasonable to rewrite and refactor code. Does anyone know if there is a TDD tool that can be used for porting old code? Ideally you could do the following:

    1. Write language-agnostic unit tests for the old code that pass (or fail, if you find bugs!).
    2. Run the same tests against the new code base and watch them fail.
    3. Write code in your new language that passes the tests without looking at the old code.

    The alternative would be to split step 1 into "write unit tests in language 1" and "port the unit tests to language 2", which significantly increases the effort required and is difficult to justify if the old code base will stop being maintained after the port (that is, you don't get the benefit of continuous integration on this code base). EDIT: It's worth noting this question on StackOverflow.
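
    For illustration only (my sketch, not a named tool): one way to make step 1 language-agnostic is to express the tests as data, e.g. "input<TAB>expected" fixture files, and write a thin runner in each language against its own code base. The fixture path and systemUnderTest() below are hypothetical stand-ins:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.util.List;

        // Thin runner over language-neutral fixtures; each code base gets its own runner.
        public class FixtureRunner {
            public static void main(String[] args) throws IOException {
                // Hypothetical fixture file: one "input<TAB>expected" pair per line.
                List<String> cases = Files.readAllLines(Path.of("fixtures/cases.tsv"));
                int failures = 0;
                for (String line : cases) {
                    String[] parts = line.split("\t", 2);
                    String actual = systemUnderTest(parts[0]);
                    if (!actual.equals(parts[1])) {
                        failures++;
                        System.err.println(parts[0] + ": expected " + parts[1] + ", got " + actual);
                    }
                }
                if (failures > 0) throw new AssertionError(failures + " of " + cases.size() + " cases failed");
                System.out.println(cases.size() + " cases passed");
            }

            static String systemUnderTest(String input) {
                return input.toUpperCase(); // placeholder for the old (or newly ported) code
            }
        }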

  • How to remove the “AMD Testing use only” watermark from Ubuntu 12.10

    - by Lucio
    I've installed the latest Catalyst driver (beta) following the steps in this guide for Ubuntu Quantal Quetzal. My system is 64-bit and my graphics card is an ATI Radeon HD 6670. This card is officially supported (Catalyst & open source); you can confirm that in this AMD Linux Community thread. I don't have any problem except the "AMD Testing use only" watermark, which I see at every stage of the OS (logged in, logged out, etc.) except in the terminals. I found several versions of how to remove this image, but they vary according to the system, so I want an answer from this popular (trusted) site. How do I solve this issue in Ubuntu 12.10 32-bit? Is the procedure different on a 64-bit system?

  • Networking Programs Suitable For Symbolic Testing

    - by Milen
    Symbolic execution has been successfully used to test programs and automatically generate test cases. I've been working on my master's thesis, a tool that allows testing arbitrary networked programs (i.e., those communicating via sockets). Now that we have a working symbolic execution engine with support for sockets, we're looking for real-world pieces of software to test. Our engine has one important restriction (at the moment): it cannot execute multi-threaded programs. So, we're looking for programs that satisfy the criteria outlined below:

    - Written in C
    - Communicates via sockets (TCP/UDP are supported)
    - Does not rely on the filesystem to get the "job" done
    - Runs on Linux
    - Does not use multi-threading
    - Source is available (so that we can compile it to LLVM bitcode)

    Most programs that fall under these criteria will probably be implementations of distributed protocols solving a particular problem (e.g., consensus). Any suggestions are greatly appreciated.

  • A testing feedback/report tool?

    - by Mert
    I'm thinking of developing a pluggable test and assessment module. This tool would be used especially on desktop application projects to report and log errors, bugs, missing features, and suggestions from testers. The tool would be plugged into the application by putting a small icon in the application itself. When pressed, the tool becomes visible and the tester can create entries about the application. Is there already a tool like that? I am not talking about UI testing, btw. For example, this tool might have a form consisting of:

    - Page name
    - Environment information
    - Entry type (can be bug, feature request, suggestion)
    - Message
    - User info (name, contact, etc.)
    - Date

    I think such a tool could greatly help testers prepare reports, and developers could understand issues better and track all the reports.

  • Unit testing time-bound code

    - by maasg
    I'm currently working on an application that does a lot of time-bound operations. That is, based on long now = System.currentTimeMillis(); and combined with a scheduler, it will calculate periods of time that parametrize the execution of some operations, e.g.:

        public void execute(...) {
            // executed by a scheduler every x minutes
            final int now = (int) TimeUnit.MILLISECONDS.toSeconds(System.currentTimeMillis());
            final int alignedTime = now - now % getFrequency();
            final int startTime = alignedTime - 2 * getFrequency();
            final int endTimeSecs = alignedTime - getFrequency();
            uploadData(target, startTime, endTimeSecs);
        }

    Most parts of the application are unit-tested independently of time (in this case, uploadData has a natural unit test), but I was wondering about best practices for testing the time-bound parts that rely on System.currentTimeMillis().
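
    One common approach (an assumption on my part, not something from the thread) is to inject a java.time.Clock so tests can pin the current time. A minimal sketch; Scheduler and its window() method are hypothetical stand-ins for the code above:

        import java.time.Clock;
        import java.time.Instant;
        import java.time.ZoneOffset;

        // Hypothetical refactoring of the code above: depend on Clock instead of
        // calling System.currentTimeMillis() directly.
        class Scheduler {
            private final Clock clock;
            private final int frequencySecs;

            Scheduler(Clock clock, int frequencySecs) {
                this.clock = clock;
                this.frequencySecs = frequencySecs;
            }

            // Returns {startTime, endTimeSecs} as computed in execute() above.
            int[] window() {
                final int now = (int) (clock.millis() / 1000);
                final int aligned = now - now % frequencySecs;
                return new int[] { aligned - 2 * frequencySecs, aligned - frequencySecs };
            }
        }

        public class SchedulerTest {
            public static void main(String[] args) {
                // Production passes Clock.systemUTC(); the test pins the time.
                Clock fixed = Clock.fixed(Instant.ofEpochSecond(10_000), ZoneOffset.UTC);
                int[] w = new Scheduler(fixed, 300).window();
                if (w[0] != 9_300 || w[1] != 9_600)
                    throw new AssertionError("unexpected window: " + w[0] + ", " + w[1]);
                System.out.println("window = [" + w[0] + ", " + w[1] + "]");
            }
        }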

  • How to improve testing your own code

    - by Peter
    Hi guys, today I checked in a change to some code which turned out not to work at all due to something rather stupid yet very crucial. I feel really bad about it and I hope I finally learn something from it. The stupid thing is, I've done these things before, and I always tell myself: next time I won't be so stupid... Then it happens again and I feel even worse about it. I know you should keep your chin up and learn from your mistakes, but here's the thing: I try to improve myself, I just don't see how I can prevent these things from happening. So, now I'm asking you guys: do you have certain ground rules for testing your own code?

  • How come verification does not include actual testing?

    - by user970696
    Having read a lot about this topic, I still don't get it. Verification should prove that you are building the product right, while validation proves that you are building the right product. But only static techniques are mentioned as verification methods (code reviews, requirements checks...). How can you say whether something is implemented correctly if you do not test it? It is said that verification checks, e.g., the code for its correctness. Verification: ensure that the product meets the specified requirements. Again, if a function is specified to work in some way, only by testing can I say that it does. Could anyone explain this to me, please? EDIT: As the wiki says: Verification: preparing the test cases (based on analysis of the requirements). Validation: running the test cases.

  • Testing To Prevent Cascading Bugs

    - by jfrankcarr
    Yesterday, Twitter was hit with a "cascading bug", as described in this blog post: a "cascading bug" is a bug whose effect isn't confined to a particular software element but "cascades" into other elements as well. I've seen this kind of bug, on a smaller scale of course, on some projects I've worked on. They can be difficult to identify in dev/test environments, even with test-driven development. My questions are:

    1. What strategies do you use, beyond basic TDD and standard regression testing, to identify and prevent the potential trouble points that might only occur in the production environment?
    2. Does the presence of such problems indicate a breakdown in the software development process, or is it simply a by-product of complex software systems?

  • Testing web applications written in Java

    - by Vinoth Kumar
    How do you test web applications (both server-side and client-side code)? The testing method has to work irrespective of the framework used (Struts, Spring Web MVC, etc.). I am using Java for the server-side code and JavaScript and HTML for the client side. This is a sample of the kind of test case I am talking about:

    1. When you click on a link, a pop-up opens.
    2. Change some value in the pop-up (say, a drop-down value) and it gets saved in the DB.
    3. Click the pop-up again, and you get the changed values.

    Can we simulate this kind of thing using unit test cases? Is JUnit enough for this?
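
    A scenario like this is usually driven end-to-end through a real browser rather than with plain unit tests. As a sketch only, here is what the pop-up flow could look like with Selenium WebDriver (the element ids and URL are hypothetical); the method body can live in an ordinary JUnit test:

        import org.openqa.selenium.By;
        import org.openqa.selenium.WebDriver;
        import org.openqa.selenium.chrome.ChromeDriver;
        import org.openqa.selenium.support.ui.Select;

        // Drives the pop-up scenario through a browser; ids and URL are made up.
        public class PopupFlowTest {
            public static void main(String[] args) {
                WebDriver driver = new ChromeDriver();
                try {
                    driver.get("http://localhost:8080/app");              // open the page
                    driver.findElement(By.id("openPopupLink")).click();   // 1. click the link, pop-up opens
                    Select dropdown = new Select(driver.findElement(By.id("statusDropdown")));
                    dropdown.selectByVisibleText("Approved");             // 2. change a drop-down value
                    driver.findElement(By.id("saveButton")).click();      //    save, persisted to the DB
                    driver.findElement(By.id("openPopupLink")).click();   // 3. reopen the pop-up
                    dropdown = new Select(driver.findElement(By.id("statusDropdown")));
                    if (!dropdown.getFirstSelectedOption().getText().equals("Approved"))
                        throw new AssertionError("value did not round-trip through the DB");
                } finally {
                    driver.quit();
                }
            }
        }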

  • Dog-1 testing as a synonym for integration / system test

    - by SkeetJon
    In the company I'm working for, the phrase "Dog-1" is used for the software testing scenario where the first piece of data passes through the complete system. Has anyone heard this phrase in a similar context? There's nothing on Google about it in a software development context, so I'm guessing it's an idiosyncrasy of my current environment. Is there a recognized phrase in the industry for this kind of test? I don't think it's simply a 'system test' / 'integration test', as 'Dog-1' refers specifically to the first item/unit through the system.

  • Testing controller logic that uses ISession directly

    - by Rippo
    I have just read this blog post from Jimmy Bogard and was drawn to this comment:

    "Where this falls down is when a component doesn't support a given layering/architecture. But even with NHibernate, I just use the ISession directly in the controller action these days. Why make things complicated?"

    I then commented on the post and asked this question: what options do you have for testing the controller logic IF you do not mock out the NHibernate ISession? I am curious what options we have if we use the ISession directly in the controller.

  • unit testing on ARM

    - by NomadAlien
    We are developing application-level code that runs on an ARM processor. The BSP (low-level code) is being delivered by a third party, so our code sits just on top of this abstraction layer (the code is written in C++). To do unit testing, I assume we will have to mock/stub out the BSP library (essentially abstracting out the HW), but what I'm not sure of is: if I write/run the unit tests on my PC, do I compile them with, for example, GCC? Normally we use the RealView compiler to compile our code for the ARM. Can I assume that if the code compiles and the unit tests pass with an x86 compiler, they will also pass when compiled with the RealView compiler? I'm not sure how much difference the compiler makes, and whether you can trust that if the x86-compiled code passes the unit tests, the RealView-compiled code is also OK.

  • Cross-Platform Automated Mobile Application UI Testing

    - by thetaspark
    My dissertation is about developing a tool for testing mobile applications through the GUI. The primary target is Android, but it should also support BlackBerry, iOS, etc. Using Google I found some frameworks, e.g., MonkeyTalk. I am not so sure, but what I want to develop might be a mini MonkeyTalk with minimal functionality, focused on the GUI of the application(s) to be tested. My questions: Which framework(s) should I use? I am good with Java. Can I use the xUnit family for this, and how? What should I be reading/studying? Any suggestions and links to tutorials, documentation, how-tos, etc. would be very helpful. Thanks in advance.
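
    For a feel of the architecture, one plausible design (my assumption, not how MonkeyTalk works) is a thin platform-abstraction interface between the xUnit tests and each device, so tests are written once against the interface. A minimal runnable sketch in Java, with an in-memory fake standing in for a real platform driver; all names are hypothetical:

        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical platform abstraction; Android, BlackBerry and iOS would each
        // supply an implementation, and the xUnit tests are written once against it.
        interface DeviceDriver {
            void tap(String widgetId);
            void type(String widgetId, String text);
            String readText(String widgetId);
        }

        // In-memory fake, only here so the sketch runs without a device.
        class FakeDriver implements DeviceDriver {
            private final Map<String, String> widgets = new HashMap<>();
            public void tap(String id) {
                if (id.equals("loginButton"))
                    widgets.put("banner", "Welcome " + widgets.get("username"));
            }
            public void type(String id, String text) { widgets.put(id, text); }
            public String readText(String id) { return widgets.getOrDefault(id, ""); }
        }

        public class LoginScreenTest {
            public static void main(String[] args) {
                DeviceDriver driver = new FakeDriver();
                driver.type("username", "alice");
                driver.tap("loginButton");
                if (!driver.readText("banner").contains("Welcome"))
                    throw new AssertionError("login flow failed");
                System.out.println("login flow ok");
            }
        }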

  • Improve bad testing

    - by SetiSeeker
    We have a large team of developers and testers, with a ratio of one tester for every developer. We have full bug tracking and reporting systems in place. We have test plans in place. For every change to the product, the testing team is involved in the design of the feature and included in the development process as much as possible. We build in small iterative blocks using Scrum, and the testers are included in every sprint, including the grooming sessions, etc. But with every release of the product, they miss even the most simple and obvious defects. How can we improve this?

  • Has anyone used Genetify for A/B testing?

    - by Joshak
    I'm working on a small project that will likely run on WordPress. I always like to run some split testing to improve conversion rates for various goals. Typically, if it's a small site that I don't have a budget for, or want to keep as inexpensive as possible, I use Google Website Optimizer; if I do have a budget I go with Visual Website Optimizer. Both are great and affordable, but for fun I was checking out alternatives and found Genetify, which is an open-source project and has some neat features. In searching around I don't see many people talking about it, and I wondered if anyone here has used it. If so, what do you think about it?

  • Testing complex compositions

    - by phlipsy
    I have a rather large collection of classes which check and mutate a given data structure. They can be composed, via the composite pattern, into arbitrarily complex tree-like structures. The final product contains a lot of these composed structures. My question is: how can I test those? Although it is easy to test every single unit of these compositions, testing the whole compositions is expensive in the following sense:

    - Testing the correct layout of the composition tree results in a huge number of test cases.
    - Changes in the compositions result in a very laborious review of every single test case.

    What is the general guideline here?
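
    One possible tactic (an assumption, not from the post): give every node a cheap structural "fingerprint" so the layout of a composition tree can be asserted in one test per production tree, leaving behavioral coverage to the per-unit tests. A minimal Java sketch with hypothetical Check classes:

        import java.util.Arrays;
        import java.util.List;
        import java.util.stream.Collectors;

        // Minimal composite: leaves check a value, AllOf ANDs its children.
        interface Check {
            boolean apply(int value);
            String describe(); // structural fingerprint used by layout tests
        }

        class Positive implements Check {
            public boolean apply(int v) { return v > 0; }
            public String describe() { return "positive"; }
        }

        class Even implements Check {
            public boolean apply(int v) { return v % 2 == 0; }
            public String describe() { return "even"; }
        }

        class AllOf implements Check {
            private final List<Check> children;
            AllOf(Check... children) { this.children = Arrays.asList(children); }
            public boolean apply(int v) { return children.stream().allMatch(c -> c.apply(v)); }
            public String describe() {
                return children.stream().map(Check::describe)
                        .collect(Collectors.joining(" ", "allOf(", ")"));
            }
        }

        public class CompositionLayoutTest {
            public static void main(String[] args) {
                Check production = new AllOf(new Positive(), new Even());
                // One cheap layout assertion per production tree; behavior is
                // covered by each node's own unit tests.
                if (!production.describe().equals("allOf(positive even)"))
                    throw new AssertionError("layout changed: " + production.describe());
                System.out.println("layout ok: " + production.describe());
            }
        }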

  • What is the best way to test with Grails using IDEA?

    - by egervari
    I am seriously having a very unpleasant time testing with Grails. I will describe my experience, and I'd like to know if there's a better way.

    The first problem: Grails doesn't give immediate feedback to the developer when .save() fails inside an integration test. So let's say you have a domain class with 12 fields, and 1 of them violates a constraint and you don't know it when you create the instance... it just doesn't save. Naturally, the test code afterward is going to fail. This is most troublesome because the thing under test is probably fine... the real risk and pain is in the setup code for the test itself. I've tried to develop the habit of using .save(failOnError: true) to avoid this problem, but that's not something that can be easily enforced on everyone working on the project... and it's kind of bloaty. It'd be nice to turn this on automatically for code running as part of a unit test.

    Integration tests run slow. I cannot understand how 1 integration test that saves 1 object takes 15-20 seconds to run. With some careful test planning, I've been able to get 1000 tests talking to an actual database, doing dbunit dumps after every test, to run in about the same time! This is dumb.

    It is hard to run all the unit tests and not the integration tests in IDEA.

    Integration tests are a massive pain. IDEA actually shows a GREEN BAR when integration tests fail. The output given by Grails indicates that something failed, but it doesn't say what. It says to look in the test reports... which forces the developer to launch up their file system to hunt the stupid HTML file down. What a pain. Once you've got the HTML file and clicked through to the failing test, it'll tell you a line number. Since these reports are not in the IDE, you can't just click the stack trace to go to that line of code... you've got to go back and find it yourself. ARGGH!@!@!

    Maybe people put up with this, but I refuse. Testing should not be this painful. It should be fast and painless, or people won't do it. Please help. What is the solution? Rails instead of Grails? Something else entirely? I love the Grails framework, but they never demo their testing for a reason. They have a snazzy framework, but the testing is painful. After having used Scala for the last 1.5 months, and being totally spoiled by ScalaTest... I can't go back to this.

  • Zend Framework: How to start PHPUnit testing Forms?

    - by Andrew
    I am having trouble getting my filters/validators to work correctly on my form, so I want to create a unit test to verify that the data I am submitting to my form is being filtered and validated correctly. I started by auto-generating a PHPUnit test in Zend Studio, which gives me this:

        <?php
        require_once 'PHPUnit/Framework/TestCase.php';

        /**
         * Form_Event test case.
         */
        class Form_EventTest extends PHPUnit_Framework_TestCase
        {
            /**
             * @var Form_Event
             */
            private $Form_Event;

            /**
             * Prepares the environment before running a test.
             */
            protected function setUp()
            {
                parent::setUp();
                // TODO Auto-generated Form_EventTest::setUp()
                $this->Form_Event = new Form_Event(/* parameters */);
            }

            /**
             * Cleans up the environment after running a test.
             */
            protected function tearDown()
            {
                // TODO Auto-generated Form_EventTest::tearDown()
                $this->Form_Event = null;
                parent::tearDown();
            }

            /**
             * Constructs the test case.
             */
            public function __construct()
            {
                // TODO Auto-generated constructor
            }

            /**
             * Tests Form_Event->init()
             */
            public function testInit()
            {
                // TODO Auto-generated Form_EventTest->testInit()
                $this->markTestIncomplete("init test not implemented");
                $this->Form_Event->init(/* parameters */);
            }

            /**
             * Tests Form_Event->getFormattedMessages()
             */
            public function testGetFormattedMessages()
            {
                // TODO Auto-generated Form_EventTest->testGetFormattedMessages()
                $this->markTestIncomplete("getFormattedMessages test not implemented");
                $this->Form_Event->getFormattedMessages(/* parameters */);
            }
        }

    So then I open up a terminal, navigate to the directory, and try to run the test:

        $ cd my_app/tests/unit/application/forms
        $ phpunit EventTest.php
        Fatal error: Class 'Form_Event' not found in .../tests/unit/application/forms/EventTest.php on line 19

    So then I add a require_once at the top to include my Form class and try it again. Now it says it can't find another class. I include that one and try it again. Then it says it can't find another class, and another, and so on. I have all of these dependencies on other Zend_Form classes. What should I do? How should I go about testing my form to make sure my validators and filters are attached correctly, and that it does what I expect it to do? Or am I thinking about this the wrong way?

  • Problem with Unit testing of ASP.NET project (NullReferenceException when running the test)

    - by Alex
    Hi, I'm trying to create a bunch of MS Visual Studio unit tests for my n-tiered web app, but for some reason I can't run those tests and I get the following error: "Object reference not set to an instance of an object". What I'm trying to do is test my data access layer, where I use a LINQ data context class to execute a certain function and return a result. However, during debugging I found out that all the tests fail as soon as they reach the LINQ data context class; it has something to do with the connection string, but I can't figure out what the problem is. The debugging of tests fails here (the second line):

        public EICDataClassesDataContext() :
            base(global::System.Configuration.ConfigurationManager
                .ConnectionStrings["EICDatabaseConnectionString"].ConnectionString, mappingSource)
        {
            OnCreated();
        }

    And my test is as follows:

        [TestMethod()]
        public void OnGetCustomerIDTest()
        {
            FrontLineStaffDataAccess target = new FrontLineStaffDataAccess(); // TODO: Initialize to an appropriate value
            string regNo = "jonh"; // TODO: Initialize to an appropriate value
            int expected = 10; // TODO: Initialize to an appropriate value
            int actual;
            actual = target.OnGetCustomerID(regNo);
            Assert.AreEqual(expected, actual);
        }

    The method which I call from the DAL is:

        public int OnGetCustomerID(string regNo)
        {
            using (LINQDataAccess.EICDataClassesDataContext dataContext = new LINQDataAccess.EICDataClassesDataContext())
            {
                IEnumerable<LINQDataAccess.GetCustomerIDResult> sProcCustomerIDResult = dataContext.GetCustomerID(regNo);
                int customerID = sProcCustomerIDResult.First().CustomerID;
                return customerID;
            }
        }

    So basically everything fails after it reaches the first line of the DAL method, when it tries to instantiate the LINQ data access class... I've spent around 10 hours trying to troubleshoot the problem but with no result... I would really appreciate any help!

    UPDATE: Finally I've fixed this!!!! I don't know why, but for some reason the connection to my database in the app.config file was as follows: AttachDbFilename=|DataDirectory|\EICDatabase.MDF. So what I did is just change the path: instead of |DataDirectory| I put the actual path where my MDF file sits, i.e. C:\Users\1\Documents\Visual Studio 2008\Projects\EICWebSystem\EICWebSystem\App_Data\EICDatabase.mdf. After I had done that, it worked! But it's still not clear what the problem was... probably an incorrect path to the database? The web.config of my ASP.NET project contains the |DataDirectory|\EICDatabase.MDF path, though.

  • Testing Finite State Machines

    - by Pondidum
    I have inherited a large and fairly complex state machine at work. It has 31 possible states. It has the following inputs:

    - Enum: Current State (0-30)
    - Enum: Source (currently only 2 entries)
    - Boolean: Request
    - Boolean: Type
    - Enum: Status (3 states)
    - Enum: Handling (3 states)
    - Boolean: Completed

    The 31 states are really needed (it's a big business process). Breaking it into separate state machines doesn't seem feasible; each state is distinct. I have written tests for one set of inputs (the most common set), with one test per input (all inputs constant except for the State input):

        [Subject("Application Process States")]
        public class When_state_is_meeting2Requested : AppProcessBase
        {
            Establish context = () =>
            {
                // Setup....
            };

            Because of = () => process.Load(jas, vac);

            It Current_node_should_be_meeting2Requested = () => process.CurrentNode.ShouldBeOfType<meetingRequestedNode>();
            It Can_move_to_clientDeclined = () => Check(process, process.clientDeclined);
            It Can_move_to_meeting1Arranged = () => Check(process, process.meeting1Arranged);
            It Can_move_to_meeting2Arranged = () => Check(process, process.meeting2Arranged);
            It Can_move_to_Reject = () => Check(process, process.Reject);
            It Cannot_move_to_any_other_state = () => AllOthersFalse(process);
        }

    As no one is entirely sure what the output should be for each state and set of inputs, I have been starting to write tests for it. However, by my calculation I will need to write 4320 (30 * 2 * 2 * 2 * 3 * 3 * 2) tests for it. Does anyone have any suggestions on how I should go about testing this?
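
    One way to avoid hand-writing 4320 tests (a suggestion of mine, not from the post) is to make the expected outputs data: put each input combination and its expected next state in a reviewable table and drive one generated check per row. A minimal Java sketch (the original is C#/MSpec); the input tuple, the rows, and nextState() are hypothetical stand-ins for the real machine:

        import java.util.Map;

        // Hypothetical stand-ins for the machine's input tuple and transition function.
        record Inputs(int state, int source, boolean request, boolean type,
                      int status, int handling, boolean completed) {}

        public class TransitionTableTest {
            // Expected-output table: built once, reviewable by the business side,
            // and each row becomes one generated test case. These rows are invented.
            static final Map<Inputs, Integer> EXPECTED = Map.of(
                    new Inputs(7, 0, true, false, 1, 2, false), 9,
                    new Inputs(7, 1, true, false, 1, 2, false), 12);

            static int nextState(Inputs in) {
                // Placeholder for the production state machine under test.
                return in.source() == 0 ? 9 : 12;
            }

            public static void main(String[] args) {
                for (Map.Entry<Inputs, Integer> row : EXPECTED.entrySet()) {
                    int actual = nextState(row.getKey());
                    if (actual != row.getValue())
                        throw new AssertionError(row.getKey() + ": expected "
                                + row.getValue() + " but got " + actual);
                }
                System.out.println(EXPECTED.size() + " transitions verified");
            }
        }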

  • Collaboration using GitHub and testing the code

    - by wyred
    The procedure in my team is that we all commit our code to the same development branch. We have a test server that runs updated code from this branch so that we can test our code on the servers. The problem is that if we want to merge the development branch into the master branch in order to publish new features to our production servers, some features that are not ready yet will be applied to the production servers too. So we're considering having each developer work on a feature/topic branch, where each of them works on their own features, and when a feature is ready, merge it into the development branch for testing, and then into the master branch. However, because our test server only pulls changes from the development branch, the developers are unable to test their features on it. While this is not a huge issue, as they can test on their local machines, the only problem I foresee is if we want to test callbacks from third-party services like SendGrid (where you specify a URL for SendGrid to update you on the status of emails sent out). How should we handle this problem? Note: we're not advanced Git users. We use the GitHub app for Mac OS X and Windows to commit our work.

  • Customizing the NUnit GUI for data-driven testing

    - by rwong
    My test project consists of a set of input data files which are fed into a piece of legacy third-party software. Since the input data files for this software are difficult to construct (not something that can be done intentionally), I am not going to add new input data files. Each input data file will be subject to a set of "test functions". Some of the test functions can be invoked independently; others represent the stages of a sequential operation: if an earlier stage fails, the subsequent stages do not need to be executed. I have experimented with NUnit parametrized test cases (TestCaseAttribute and TestCaseSourceAttribute), passing in the list of data files as test cases. I am generally satisfied with the ability to select the input data for testing. However, I would like to see if it is possible to customize the GUI's tree structure, so that the "test functions" become children of the "input data". For example:

    - File #1
      - CheckFileTypeTest
      - GetFileTopLevelStructureTest
      - CompleteProcessTest
        - StageOneTest
        - StageTwoTest
        - StageThreeTest
    - File #2
      - CheckFileTypeTest
      - GetFileTopLevelStructureTest
      - CompleteProcessTest
        - StageOneTest
        - StageTwoTest
        - StageThreeTest

    This would be useful for identifying the stage that failed during the processing of a particular input file. Are there any tips and tricks that will enable the new tree layout? Do I need to customize NUnit to get this layout?
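
    For comparison, the layout I am after is exactly what JUnit 5 (Java) produces with dynamic containers; this is a sketch of that shape, not an NUnit answer, and the test bodies are hypothetical:

        import java.util.stream.Stream;
        import org.junit.jupiter.api.DynamicContainer;
        import org.junit.jupiter.api.DynamicNode;
        import org.junit.jupiter.api.DynamicTest;
        import org.junit.jupiter.api.TestFactory;

        // Each input file becomes a container node; each test function a child node.
        class DataFileTreeTest {
            static final String[] FILES = { "File #1", "File #2" }; // stand-ins for real fixtures

            @TestFactory
            Stream<DynamicNode> perFileTests() {
                return Stream.of(FILES).map(file ->
                    DynamicContainer.dynamicContainer(file, Stream.of(
                        DynamicTest.dynamicTest("CheckFileTypeTest", () -> checkFileType(file)),
                        DynamicTest.dynamicTest("GetFileTopLevelStructureTest", () -> topLevelStructure(file)),
                        DynamicContainer.dynamicContainer("CompleteProcessTest", Stream.of(
                            DynamicTest.dynamicTest("StageOneTest", () -> stage(file, 1)),
                            DynamicTest.dynamicTest("StageTwoTest", () -> stage(file, 2)),
                            DynamicTest.dynamicTest("StageThreeTest", () -> stage(file, 3)))))));
            }

            void checkFileType(String file) { /* parse header, assert file type */ }
            void topLevelStructure(String file) { /* assert top-level structure */ }
            void stage(String file, int n) { /* run stage n, assert its outcome */ }
        }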

  • Testing of visualization projects

    - by paxRoman
    We develop small to large visualization projects for different tasks and industries, and sometimes, while rewriting them a couple of times in the process, we hit walls because we discover that we need to add a lot of code to support new requirements. We have now established a design process that seems to work well (at least we reduced the development time for each new project quite a bit), but we're still left scratching our heads over this question: what exactly should we test when testing visualizations?

    - That everything we want to explore is on the screen (bounded visualizations)?
    - That the data is OK, i.e. valid (that's one of the nice things about visualizations: you can spot errors in your datasets)?
    - Usability?
    - User interaction?
    - Code quality?

    I can tell you for sure that a simple check of the code quality is certainly not enough! Is there a classic paper or book about how to test visualizations? Also, do you happen to know of classic design patterns for visualizations (except the obvious ones like pub-sub)?
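
    For the "is everything on the screen" part, one concrete technique (my assumption, not from the post) is golden-image regression testing: render the visualization off-screen and diff it against an approved baseline image. A Java sketch; renderChart() and the baseline path are hypothetical:

        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;

        // Golden-image regression test: render off-screen, diff against a baseline.
        public class GoldenImageTest {
            public static void main(String[] args) throws Exception {
                BufferedImage actual = renderChart(800, 600);
                BufferedImage expected = ImageIO.read(new File("baselines/chart.png")); // hypothetical path

                if (expected.getWidth() != actual.getWidth() || expected.getHeight() != actual.getHeight())
                    throw new AssertionError("rendered size changed");

                long diff = 0;
                for (int y = 0; y < expected.getHeight(); y++)
                    for (int x = 0; x < expected.getWidth(); x++)
                        if (expected.getRGB(x, y) != actual.getRGB(x, y)) diff++;

                // Small tolerance for anti-aliasing differences across platforms.
                if (diff > (long) expected.getWidth() * expected.getHeight() / 1000)
                    throw new AssertionError(diff + " pixels differ from the baseline");
                System.out.println("matches baseline (" + diff + " pixels differ)");
            }

            static BufferedImage renderChart(int w, int h) {
                // Placeholder: the real project would draw into image.createGraphics() here.
                return new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
            }
        }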

  • Do unit tests sometimes break encapsulation?

    - by user1288851
    I very often hear the following: "If you want to test private methods, you'd better put them in another class and expose them." While sometimes that's the case and we have a hidden concept inside our class, other times you end up with classes that have the same attributes (or, worse, every attribute of one class becomes an argument on a method in the other class) and that expose functionality which is, in fact, implementation detail. Especially with TDD, when you refactor public methods out of a previously tested class, those methods are now part of your interface but have no tests of their own (since you refactored them, and they are implementation detail). Now, I may be missing an obvious better answer, but if my reasoning is correct, that means that sometimes writing unit tests can break encapsulation and divide a single responsibility between different classes. A simple example would be testing a setter method when a getter is not actually needed for anything in the real code. When answering, please don't provide simple answers to the specific cases I have described. Rather, try to explain the generic case and the theoretical approach. And this is not language-specific. Thanks in advance.

    EDIT: The answer given by Matthew Flynn was really insightful, but it didn't quite answer the question. Although he made the fair point that you either don't test private methods or extract them because they really are another concern and responsibility (or at least that was what I could understand from his answer), I think there are situations where unit testing private methods is useful. My primary example is when you have a class that has one responsibility but the output (or input) it gives (takes) is just too complex. For example, a hashing function. There's no good way to break a hashing function apart and maintain cohesion and encapsulation. However, testing a hashing function can be really tough, since you would need to calculate the hash by hand (you can't use code calculation to test code calculation!) and test multiple cases where the hash changes. In that way (and this may be a question worthy of its own topic) I think private-method testing is the best way to handle it. Now, I'm not sure if I should ask another question or ask it here, but is there any better way to test such complex output (input)?

    OBS: Please, if you think I should ask another question on that topic, leave a comment. :)
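
    For the hashing example, one way out (an assumption of mine, not from the post) is a known-answer test: published test vectors let you assert complex output through the public API alone, without computing anything by hand or exposing internals. A sketch using the JDK's SHA-256 and NIST's published vector for "abc":

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;

        // Known-answer test: assert complex output through the public API alone.
        public class HashKnownAnswerTest {
            public static void main(String[] args) throws Exception {
                byte[] digest = MessageDigest.getInstance("SHA-256")
                        .digest("abc".getBytes(StandardCharsets.UTF_8));
                StringBuilder hex = new StringBuilder();
                for (byte b : digest) hex.append(String.format("%02x", b));
                // NIST's published vector for SHA-256("abc").
                String expected = "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad";
                if (!hex.toString().equals(expected))
                    throw new AssertionError("got " + hex);
                System.out.println("test vector ok");
            }
        }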

  • Should adapters or wrappers be unit tested?

    - by m3th0dman
    Suppose that I have a class that implements some logic:

        public class MyLogicImpl implements MyLogic {
            public void myLogicMethod() {
                // my logic here
            }
        }

    and somewhere else a test class:

        public class MyLogicImplTest {
            @Test
            public void testMyLogicMethod() {
                // test my logic
            }
        }

    I also have:

        @WebService
        public class MyWebServices {
            @Inject
            private MyLogic myLogic;

            @WebMethod
            public void myLogicWebMethod() {
                myLogic.myLogicMethod();
            }
        }

    Should there be a unit test for myLogicWebMethod, or should its testing be handled in integration testing?
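
    If myLogicWebMethod does get a unit test, about the only thing worth asserting is the delegation itself. A sketch using Mockito (my choice, not from the post); since the class above uses field injection, the mock is installed by reflection here, though a constructor or setter would be cleaner:

        import static org.mockito.Mockito.mock;
        import static org.mockito.Mockito.verify;

        import java.lang.reflect.Field;

        // Interaction test for the thin adapter: all it should do is delegate.
        public class MyWebServicesTest {
            public static void main(String[] args) throws Exception {
                MyLogic logic = mock(MyLogic.class);
                MyWebServices service = new MyWebServices();

                // Install the mock into the @Inject field by reflection.
                Field field = MyWebServices.class.getDeclaredField("myLogic");
                field.setAccessible(true);
                field.set(service, logic);

                service.myLogicWebMethod();
                verify(logic).myLogicMethod(); // the only behavior worth asserting
            }
        }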
