Search Results

Search found 19002 results on 761 pages for 'coded ui testing'.

Page 4 of 761

  • Testing loses its effectiveness if not all programmers use the tests

    - by Jeff O
    Let's assume you are convinced that the extra time spent on unit testing has merit and improves production. Does that still hold up when not everyone working on the same code uses the tests? It makes me wonder whether fixing tests that only some of the team run is a waste of time. If you correct a test so that the new code passes, you're assuming the new code is correct. The person updating the test had better have a firm understanding of the reasoning behind the code change and decide whether the test or the new code needs to be fixed. This much inconsistency in a team's approach to testing is probably an indication of other problems as well. There is also a real risk that someone else on the team will alter code that is covered by tests without updating them. Is this the point where testing becomes counter-productive?

    Read the article

  • Are verification and validation part of the testing process?

    - by user970696
    Based on many sources, I do not believe the simple definition that the aim of testing is to find as many bugs as possible; we test to ensure that the software works, or to show that it does not. For example, the following are the goals of testing from the ISTQB: determine that (software products) satisfy specified requirements (I think this is verification); demonstrate that (software products) are fit for purpose (I think this is validation); detect defects. I would therefore say that testing is verification, validation and defect detection. Is that correct?

    Read the article

  • JUnit Testing in a Multithreaded Application

    - by e2bady
    This is a problem my team and I face in almost all of our projects. Testing certain parts of the application with JUnit is not easy; you need to start early and stick with it, but that's not the question I'm asking. The actual problem is that with n threads, locking, possible exceptions within the threads and shared objects, the task is not as simple as testing a class: you have to test it under the endless possible situations that threading can produce. To be more precise, let me describe the design of one of our applications. When a user makes a request, several threads are started, each of which analyses a part of the data to complete the analysis. These threads run for a certain time depending on the size of the chunk of data to analyse (the chunks are endless and of uncertain quality), or they may fail if the data is insufficient or of poor quality. After each thread completes its analysis it calls a handler, which decides after each thread terminates whether the collected analysis data is sufficient to deliver an answer to the request. All of these analysers share certain parts of the application (some parts because the instances are very large, only a certain number can be loaded into memory, and those instances are reusable; some parts because they hold a standing connection, where connecting takes time, e.g. SQL connections), so locking is very common (done with reentrant locks). While the application runs very efficiently and fast, it is not easy to test it under real-world conditions. What we do right now is test each class and its predefined conditions, but there are no automated tests for interlocking and synchronization, which in my opinion is not good for quality assurance. Given this example, how would you handle testing the threading, interlocking and synchronization?
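
    One common way to exercise locking from JUnit is to fire the same operation from many threads at once and assert on the combined result. Below is a minimal sketch of that idea; the SharedAnalyser class, the thread count and the call count are placeholders invented for the example, not part of the application described above.

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        public class SharedAnalyserConcurrencyTest {

            // Hypothetical shared component standing in for the real analyser.
            static class SharedAnalyser {
                private int processed = 0;
                public synchronized void analyse(String chunk) { processed++; }
                public synchronized int processedCount() { return processed; }
            }

            @Test
            public void analyseIsSafeUnderConcurrentLoad() throws Exception {
                final SharedAnalyser analyser = new SharedAnalyser();
                final int threads = 8;
                final int callsPerThread = 1000;
                final CountDownLatch startGate = new CountDownLatch(1);
                ExecutorService pool = Executors.newFixedThreadPool(threads);
                List<Future<?>> workers = new ArrayList<Future<?>>();

                for (int t = 0; t < threads; t++) {
                    workers.add(pool.submit(new Callable<Void>() {
                        public Void call() throws Exception {
                            startGate.await();               // all workers start together to maximise contention
                            for (int i = 0; i < callsPerThread; i++) {
                                analyser.analyse("chunk");
                            }
                            return null;
                        }
                    }));
                }
                startGate.countDown();                       // release the workers
                for (Future<?> worker : workers) {
                    worker.get(30, TimeUnit.SECONDS);        // propagates any exception thrown inside a worker
                }
                pool.shutdown();

                assertEquals(threads * callsPerThread, analyser.processedCount());
            }
        }

    A test like this will not prove the absence of races, but run repeatedly it is good at flushing out missing locks, and any exception thrown inside a worker thread surfaces as a test failure instead of being lost.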

    Read the article

  • Database unit testing is now available for SSDT

    - by jamiet
    Good news was announced yesterday for those that are using SSDT and want to write unit tests: unit testing functionality is now available. The announcement was made on the SSDT team blog in the post Available Today: SSDT—December 2012. Here are a few thoughts about this news. Firstly, there seems to be a general impression that database unit testing was not previously available for SSDT; that's not entirely true. Database unit testing was most recently delivered in Visual Studio 2010, and any database unit tests written therein work perfectly well against SQL Server databases created using SSDT (why wouldn't they, it's just a database after all). In other words, if you're running SSDT inside Visual Studio 2010 then you could carry on freely writing database unit tests; some of the tight integration between the two (e.g. right-click on an object in SQL Server Object Explorer and choose to create a unit test) was not there, but I've never found that to be a problem. I am currently working on a project that uses SSDT for database development and have been happily running VS2010 database unit tests for a few months now. All that being said, delivery of database unit testing for SSDT is now with us and that is good news, not least because we now have the ability to create unit tests in VS2012. We also get tight integration with SSDT itself, of the kind I mentioned above. Having now had a look at the new features, I was delighted to find that one of my big complaints about database unit testing has been solved. As I reported here on Connect, a refactor operation would cause unit test code to get completely mangled. Here is the kind of query such an operation would produce:
        SELECT *
        FROM bi.ProcessMessageLog pml
        INNER JOIN bi.[LogMessageType] lmt ON pml.[LogMessageTypeId] = lmt.[LogMessageTypeId]
        WHERE pml.[LogMessage] = 'Ski[LogMessageTypeName]of message: IApplicationCanceled'
        AND lmt.[LogMessageType] = 'Warning';
    which is obviously not ideal. Thankfully that seems to have been solved with this latest release. One disappointment about this new release is that the process for running tests as part of a CI build has not changed from the horrendously complicated process required previously. Check out my blog post Setting up database unit testing as part of a Continuous Integration build process [VS2010 DB Tools - Datadude] for instructions on how to do it. In that blog post I describe it as "fiddly"; I was being kind when I said that! @Jamiet

    Read the article

  • Mock Objects for Testing - Test Automation Engineer Perspective

    - by user9009
    Hello. How often are QA engineers responsible for developing mock objects for unit testing? Or is dealing with mock objects just the developer's job? The reason I ask is that I'm interested in QA as a career and am learning tools like JUnit, TestNG and a couple of frameworks. I just want to know to what level unit testing is done by the developer, and at what point the QA engineer takes over testing for better test coverage. Thanks. Edit: Based on the answers below, I am providing more details about the kind of QA I was referring to. I'm interested in test automation rather than simple QA involving record-and-playback of scripts. Are test automation engineers responsible for developing frameworks, or do they have a team of developers dedicated to framework development? Yes, I was asking about the usage of mock objects for testing from a test automation engineer's perspective.
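
    For context, a mock-based unit test in Java typically looks like the sketch below, using JUnit and Mockito; the PaymentGateway and OrderService types are invented for the example.

        import org.junit.Test;
        import static org.junit.Assert.assertTrue;
        import static org.mockito.Mockito.*;

        public class OrderServiceTest {

            // Hypothetical collaborator that would normally call an external system.
            interface PaymentGateway {
                boolean charge(String account, double amount);
            }

            // Hypothetical class under test; it receives its dependency via the constructor.
            static class OrderService {
                private final PaymentGateway gateway;
                OrderService(PaymentGateway gateway) { this.gateway = gateway; }
                boolean placeOrder(String account, double amount) {
                    return gateway.charge(account, amount);
                }
            }

            @Test
            public void placeOrderChargesTheAccount() {
                PaymentGateway gateway = mock(PaymentGateway.class);     // the mock replaces the real dependency
                when(gateway.charge("acct-1", 9.99)).thenReturn(true);   // stub the expected interaction

                OrderService service = new OrderService(gateway);
                assertTrue(service.placeOrder("acct-1", 9.99));

                verify(gateway).charge("acct-1", 9.99);                  // assert the interaction happened
            }
        }

    The mock here stands in for an external dependency so that the test stays fast and deterministic.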

    Read the article

  • CppUnit for unit-testing executable files?

    - by hagubear
    I am not sure if anyone has done this. I am trying to do something that is, in general, uncommon: unit-testing executables (Windows) or ELF binaries (Linux). I know that CppUnit provides a good unit testing facility, but I have never used it for unit testing (I used UnitTest++). I hear rumours that you can unit-test executables too. Does anyone have experience with this? A relevant post regarding the philosophy of it was here
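
    For whole executables the usual pattern, whatever the test framework, is to launch the binary as a child process and assert only on externally observable behaviour: the exit code and the output. A rough sketch of that idea is below; it is written in Java with JUnit rather than CppUnit, and the ./my_tool path and its expected output are invented for the example (the same structure can be reproduced from a CppUnit test with popen or CreateProcess).

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class ExecutableSmokeTest {

            @Test
            public void toolPrintsVersionAndExitsCleanly() throws Exception {
                // Launch the binary under test as a child process (the path is a placeholder).
                Process process = new ProcessBuilder("./my_tool", "--version")
                        .redirectErrorStream(true)
                        .start();

                BufferedReader reader = new BufferedReader(
                        new InputStreamReader(process.getInputStream()));
                String firstLine = reader.readLine();
                int exitCode = process.waitFor();

                // Assertions are limited to what the executable exposes to the outside world.
                assertEquals(0, exitCode);
                assertEquals("my_tool 1.0.0", firstLine);   // the expected output is an assumption
            }
        }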

    Read the article

  • Scenario to illustrate how unit testing leads to better design

    - by Cocowalla
    For an internal training session, I'm trying to come up with a simple scenario that illustrates how unit testing leads to better design, by forcing you to think about things like coupling before you start coding. The idea is that I get the participants to code something first without considering unit testing, then we do it again, but this time considering unit testing. Hopefully the code produced the second time round will be more decoupled and maintainable. I'm struggling to come up with a scenario that can be coded quickly yet still demonstrates how unit testing leads to better overall design.
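
    One candidate scenario, offered only as a sketch: a report generator that reads the system clock directly is awkward to assert against, and making it testable pushes the hidden dependency out into an interface. The class and interface names below are invented for the exercise.

        import java.time.LocalDate;

        // Version the group writes first: a hidden dependency on the system clock.
        class ReportGeneratorBefore {
            String generate() {
                // Hard to test: the expected string changes every day.
                return "Report for " + LocalDate.now();
            }
        }

        // Version that emerges once a unit test is required: the dependency is passed in.
        interface DateProvider {
            LocalDate today();
        }

        class ReportGeneratorAfter {
            private final DateProvider dates;

            ReportGeneratorAfter(DateProvider dates) {
                this.dates = dates;
            }

            String generate() {
                // A test can supply a DateProvider that returns a fixed date.
                return "Report for " + dates.today();
            }
        }

    In the session, the second version can be unit tested with a stub DateProvider returning a fixed date, which is exactly the decoupling the exercise is meant to surface, and it is small enough to code in a few minutes.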

    Read the article

  • What is the aim of software testing?

    - by user970696
    Having read many books, I see a basic contradiction: some say "the goal of testing is to find bugs", while others say "the goal of testing is to assess the quality of the product", meaning that bugs are its by-products. I would also argue that if testing were aimed primarily at bug hunting, who would do the actual verification and provide the information that the software is ready? Even Kaner, for example, changed his original definition of the goal of testing from bug hunting to providing a quality assessment, but I still cannot see the clear difference. I perceive both as equally important. I can verify software against its specification to make sure it works, and in that case the bugs found are just by-products. But I also perform tests just to break things. Which definition is more accurate?

    Read the article

  • Code testing practice

    - by Robin Castlin
    So, like many others, I have come to the conclusion that having some way of constantly testing your code is good practice, since fewer people (colleagues and customers alike) get dragged in when you know what's wrong before someone else finds out the hard way. I've heard and read a bit about unit testing and understand what it's supposed to do. But there are so many different types of bugs. It can be anything from the web browser not being able to send correct values, JavaScript failing, or a global function messing up a piece of code somewhere, to a change that looked good when testing it but fails in some special case that was hard to anticipate. By finding these errors I learn to rarely repeat them, but there always seem to be new bugs to find and learn from. I would guess the best practice would be to run every page and its functions a couple of times, observe the result, and repeat this in Firefox, Chrome and Internet Explorer (and all smartphones, apparently) to make sure it works as intended. However, this would take quite some time, considering I don't work with patches/versions and do little fixes here and there a couple of times per week. What I would prefer is some kind of page I can just load that tests as many things as possible to make sure the site works as intended; basically, run a lot of cURL requests with POST values and see if I get the expected results. But how would I avoid increasing the IDs of the MySQL rows when I delete these testing rows? It feels silly to be on ID 1000 with maybe 50 rows in total. If I could build a new project from scratch I would probably implement some kind of smooth way to return "TRUE" when testing instead of rendering the actual page, but for now such a solution would have to be bolted onto existing projects. My question: what would you recommend as the best way to test my site, to make sure existing functions still do their job when I edit the code? Should I make a batch of edits first and then manually test the entire site to make sure it still works? Is there any nice way of testing without "hurting" the ID columns? Extra thoughts: would it be a good idea to associate each of my files with the parts of my site that they affect? For instance, if I edit home.php I will, via that documentation, test whether my homepage still works as intended, since it's the only part of my site the file should affect.
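
    The "page full of cURL calls" idea corresponds to an HTTP-level smoke test: POST known values at each page and assert on the response. A small sketch of that pattern is below, written in Java with JUnit and the built-in HttpClient rather than PHP; the URL, form fields and expected marker text are placeholders for whatever the real pages expect.

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertTrue;

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class SiteSmokeTest {

            private final HttpClient client = HttpClient.newHttpClient();

            @Test
            public void loginPageAcceptsKnownTestUser() throws Exception {
                // Form-encoded POST, equivalent to: curl -d "user=test&pass=secret" http://localhost/login.php
                HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost/login.php"))
                        .header("Content-Type", "application/x-www-form-urlencoded")
                        .POST(HttpRequest.BodyPublishers.ofString("user=test&pass=secret"))
                        .build();

                HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

                assertEquals(200, response.statusCode());
                assertTrue("expected a success marker in the page",
                        response.body().contains("Welcome"));   // the marker text is an assumption
            }
        }

    On the ID question, pointing tests like these at a separate test database or schema that is reset between runs keeps the production auto-increment columns untouched.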

    Read the article

  • Performance Testing Versus Unit Testing

    - by Mystagogue
    I'm reading Osherove's "The Art of Unit Testing", and though I've not yet seen him say anything about performance testing, two thoughts still cross my mind: (1) performance tests generally can't be unit tests, because performance tests generally need to run for long periods of time; (2) performance tests generally can't be unit tests, because performance issues too often manifest at an integration or system level (or at least the logic of a single unit test needed to re-create the performance of the integration environment would be too involved to be a unit test). Particularly for the first reason stated above, I doubt it makes sense for performance tests to be handled by a unit testing framework (such as NUnit). My question is: do my findings/leanings correspond with the thoughts of the community?

    Read the article

  • Animate jQuery UI slider handle to a specific value

    - by user1159555
    I'm trying to animate a jQuery UI slider handle to a specific value. My code so far is this:

        $("#slider1").slider({
            max: 350,
            min: 100,
            animate: 'slow',
            step: 10,
            animate: "true",
            value: 0,
            change: function() {
                // This setTimeout will allow the slider to animate correctly.
                setTimeout("$('#slider1').slider('value', 200);", 350);
            },
            slide: function(event, ui) {
                $("#amount").val(ui.value);
                $(this).find('.ui-slider-handle').html('<div class="sliderControl-label v-labelCurrent">' + ui.value + '</div>');
                update();
            }
        });
        $('#slider').slider('value', 200);

    How do I make it so that 1) the handle will go to the specific value on page load, and 2) the handle can be freely moved after the page has loaded and the animation has finished? Cheers, Jonah

    Read the article

  • jQuery UI autocomplete not working in IE

    - by Peter Di Cecco
    Hi all, I've got the new autocomplete widget in jQuery UI 1.8rc3 working great in Firefox. It doesn't work at all in IE. Can someone help me out?

    HTML:

        <input type="text" id="ctrSearch" size="30">
        <input type="hidden" id="ctrId">

    Javascript:

        $("#ctrSearch").autocomplete({
            source: "ctrSearch.do",
            minLength: 3,
            focus: function(event, ui) {
                $('#ctrSearch').val(ui.item.ctrLastName + ", " + ui.item.ctrFirstName);
                return false;
            },
            select: function(event, ui) {
                $('#ctrId').val(ui.item.ctrId);
                return false;
            }
        });

    Result (IE 8): the red box is the <ul> element created by jQuery. I also get this error: Line: 116 Error: Invalid argument. When I open it in the IE8 script debugger, it highlights f[b]=d on line 116 of jquery.min.js. Note that I'm using version 1.4.2 of jQuery hosted on Google's servers (https://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js). I've tried removing some of the options, but even when I call .autocomplete() with no options, or with only the source option, I still get the same result. Once again, it's working in Firefox but not in IE. Any suggestions? Thanks.

    Read the article

  • Drag and drop not working while using JQuery UI dialog

    - by user327854
    Hi guys! I am using jquery.ui.core.js version 1.7.2 and jquery-1.4.2.js. I get an error saying "proto.plugins" (in jquery.ui.core.js, for the draggable module) is undefined, hence I am not able to perform the drag operation. My code is given below.

        $("#linkfortag").click(function() {
            var $dialog = $("<div id='testing'></div>").dialog({ autoOpen: false });
            $dialog.dialog("destroy");
            $dialog.draggable();
            alert("$dialog is" + $dialog.html());
            $dialog.dialog('open');
            $dialog.dialog({
                title: 'Add Tag',
                resizable: true,
                closeOnEscape: true,
                modal: true,
                minWidth: 250,
                buttons: {
                    'Add Tag': function() {
                        alert($(this).children().children().children(':input').attr('value'));
                        var value = $("#addTag").attr('value');
                        var formobj = $(this).children();
                        validate_usertagform(value);
                    },
                    'Cancel': function() { $(this).dialog('close'); }
                }
            });
            $dialog.bind("dialogbeforeclose", function(event, ui) {
                alert("Dialog is gng to close");
                alert("<<<<<<<<<<<<<<<<<UI IS>>>>>>>>>>>>>>>>>>>>>>" + ui);
            });
            $dialog.bind("drag", function(event, ui) {
                alert("<<<<<<<<<<<<<<<<<drag is>>>>>>>>>>>>>" + event);
            });
            $dialog.dialog("option", "closeText", '');
            $dialog.dialog("option", "draggable", true);
        });

    Please help me with this.

    Read the article

  • Dynamic UI vs Static UI

    - by Damien
    I've been wondering, at what point should I give up the convenience of a static data entry form with designer support for a dynamic UI which removes a lot of code duplication? There seems to be a conflict in the programming world where people constantly try to remove code repetition to improve maintainability and yet when it comes to forms, that all goes out of the window and everything gets added explicitly to the forms. What signs should I look for to know when it's time to leave the designer in the dust and create a dynamic UI?

    Read the article

  • Unit testing and Test Driven Development questions

    - by Theomax
    I'm working on an ASP.NET MVC website which performs relatively complex calculations as one of its functions. This functionality was developed some time ago (before I started working on the website) and defects have occurred whereby the calculations are not being performed properly (basically these calculations are applied to each user that has certain flags on their record, etc.). Note: these defects have only been observed by users thus far, and not yet investigated in code while debugging. My questions are: Because the existing unit tests all pass and therefore do not indicate that the reported defects exist, does this suggest the original code that was implemented is incorrect? That is, either the requirements were incorrect and were coded accordingly, or the code simply wasn't written as it was supposed to be? If I use the TDD approach, would I disregard the existing unit tests, since they don't show any problems with the calculations functionality, and instead start by writing failing unit tests that prove these problems occur, then add code to make them pass? Note: if it's simply a bug that can be found while debugging the code, do the unit tests need to be updated, given that they are already passing?

    Read the article

  • Testing Workflows – Test-After

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-after.aspx

    In this post I'm going to outline a few common methods that can be used to increase the coverage of your test suite. This won't be yet another post on why you should be doing testing; there are plenty of those types of posts already out there. Assuming you know you should be testing, then comes the problem of how to actually fit that into your day job. When the opportunity to automate testing comes, do you take it, or do you even recognize it?

    There are a lot of ways (workflows) to go about creating automated tests, just like there are many workflows for writing a program. When writing a program you can take a top-down approach, where you write the main skeleton of the algorithm and call out to dummy stub functions, or a bottom-up approach, where the low-level functionality is fully implemented before it is quickly wired together at the end. Both approaches are perfectly valid under certain contexts. Each approach you are skilled at applying is another tool in your tool belt. The more vectors of attack you have on a problem, the better. So here is a short, incomplete list of some of the workflows that can be applied to increasing the amount of automation in your testing and the level of quality in general. Think of each workflow as an opportunity that is available for you to take.

    Test workflows basically fall into two categories: test first or test after. Test first is the best approach. However, this post isn't about the one and only best approach. I want to focus more on the lesser known, less ideal approaches that still provide an opportunity for adding tests. In this post I'll enumerate some test-after workflows. In my next post I'll cover test-first.

    Bug Reporting. When someone calls you up or forwards you an email with a vague description of a bug, it's usually standard procedure to create or verify a reproduction plan for the bug via manual testing and log that in a bug tracking system. This can be problematic. Reproduction plans, when written down, often skip a step that seemed obvious to the tester at the time, or they might be missing some crucial environment setting. Instead of data entry into a bug tracking system, try opening up the test project and adding a failing unit test to prove the bug. The test project guarantees that all aspects of the environment are set up properly and no steps are missing. The language in the test project is much more precise than the English that goes into a bug tracking system. This workflow can easily be extended to Enhancement Requests as well as Bug Reporting.

    Exploratory Testing. Exploratory testing comes in when you aren't sure how the system will behave in a new scenario. The scenario wasn't planned for in the initial system requirements and there isn't an existing test for it. By definition the system behaviour is "undefined". So write a new unit test to define that behaviour. Add assertions to the test to confirm your assumptions. The new test becomes part of the living system specification that is kept up to date with the test suite.

    Examples. This workflow is especially good when developing APIs. When you are finally done with your production API, then comes the job of writing documentation on how to consume the API. Good documentation will also include code examples. Don't let these code examples merely exist in some accompanying manual; implement them in a test suite. Example tests and documentation do not have to be created after the production API is complete. It is best to write the example code (tests) as you go, just before the production code.

    Smoke Tests. Every system has a typical use case. This represents the basic, core functionality of the system. If this fails after an upgrade the end users will be hosed, and they will be scratching their heads as to how it could be possible that an update got released with this core functionality broken. The tests for this core functionality are referred to as "smoke tests". It is a good idea to have them automated and run with each build in order to avoid extreme embarrassment and angry customers.

    Coverage Analysis. Code coverage analysis is a tool that reports how much of the production code base is exercised by the test suite. In Visual Studio this can be found under the Test main menu item. The tool will report a total number for the code coverage, which can be anywhere between 0 and 100%. Coverage analysis shouldn't be used strictly for numbers reporting. Companies shouldn't set minimum coverage targets that mandate that all projects must have at least 80% or 100% test coverage. These arbitrary requirements just invite gaming of the coverage analysis, which makes the numbers useless. The analysis tool will break down the coverage by the various classes and methods in the projects. Instead of focusing on the total number, drill down into this view and see which classes have high or low coverage. If you are surprised by a low number on a class, this is an opportunity to add tests. When drilling through the classes there will generally be two types of reaction to a surprisingly low test coverage number. The first reaction type is a recognition that there is low-hanging fruit to be picked: there may be some classes or methods that aren't being tested which easily could be. The other reaction type is "OMG": this is where you find a critical piece of code that isn't under test. In both cases, go and add the missing tests.

    Test Refactoring. The general theme of this post up to this point has been how to add more and more tests to a test suite. I'll step back from that a bit and remind you that every line of code is a liability. Each line of code has to be read and maintained, which costs money. This is true regardless of whether the code is production code or test code. Remember that the primary goal of the test suite is that it be easy to read, so that people can easily determine the specifications of the system. Make sure that adding more and more tests doesn't interfere with this primary goal. Perform code reviews on the test suite as often as on production code. Hold the test code up to the same high readability standards as the production code. If the tests are hard to read, then change them. Look to remove duplication: duplicate setup code shared between two or more test methods can be moved to a shared function. Entire test methods can be removed if it is found that the scenario they test is covered by other tests. It's OK to delete a test that isn't pulling its own weight anymore. Remember to only start refactoring when all the tests are green, and don't refactor the tests and the production code at the same time. An automated test suite can be thought of as a double-entry bookkeeping system: the unchanging, passing production code serves as the test for the test suite while you are refactoring the tests. As with all refactoring, it is best to fit this into your regular work rather than asking for time later to get it done. Fit it into the standard red-green-refactor cycle. The refactor step applies not only to production code but also to the tests, just not at the same time. Perhaps the cycle should be called red-green-refactor production-refactor tests (not quite as catchy).

    That about covers most of the test-after workflows I can think of. In my next post I'll get into test-first workflows.
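
    As an illustration of the bug-reporting workflow above, a reported defect can be captured directly as a failing test instead of a prose reproduction plan. This is only a sketch; the issue number, the class under test and the expected value are invented for the example.

        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class DiscountCalculatorBugReportTest {

            // Stand-in for the component the bug was reported against, reproducing the
            // shipped (buggy) behaviour: it uses > instead of >=, so 5 orders earn no discount.
            static class DiscountCalculator {
                double discountFor(int previousOrders) {
                    return previousOrders > 5 ? 0.10 : 0.0;
                }
            }

            // The test name carries the bug tracker reference; the test fails until the defect
            // is fixed, and then stays in the suite as a regression guard.
            @Test
            public void issue1234_returningCustomerWithFiveOrdersGetsTenPercentDiscount() {
                DiscountCalculator calculator = new DiscountCalculator();
                assertEquals(0.10, calculator.discountFor(5), 0.0001);
            }
        }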

    Read the article

  • Cloud Based Load Testing Using TF Service & VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx

    One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is 'Cloud Based Load Testing'. In this blog post I'll walk you through: What is Cloud Based Load Testing? How have I been using this feature (a success story)? Where can you find more resources on this feature?

    What is Cloud Based Load Testing? It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress, but also gives you the ability to test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry I have realized that the biggest barriers to Load Testing and Performance Testing adoption are:
    - the high infrastructure and administration cost that comes with this phase of testing
    - the time taken to procure and set up the test infrastructure
    - finding a use for this infrastructure investment after completion of testing
    Is cloud the answer? It is 100% Visual Studio compatible, scalable and realistic, you can start testing in under 2 minutes, it is intuitive, you pay only for what you need, and you can use existing on-premise tests in the cloud. There are a lot of vendors out there offering cloud based load testing, to name a few: Load Storm, Soasta, Blaze Meter, Blitz, and others. The question you may want to ask is why you should go with Microsoft's cloud based load test offering. If you are a Microsoft shop or already have investments in Microsoft technologies, you'll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing web tests authored in Visual Studio 2010 or Visual Studio 2012 will run on the cloud without requiring any modifications whatsoever. Microsoft's cloud test rig also supports API based testing; for example, if you are building a WPF application which consumes WCF services, you can write unit tests to invoke the WCF service, and these tests can be run on the cloud test rig and loaded with N concurrent users for performance testing. If your assets are already hosted in Azure, and possibly in the same data centre as the cloud test rig, your Azure app will not incur a usage cost because of the generated traffic, since the traffic is coming from the same data centre. The licensing or pricing information on Microsoft's cloud based load test service is yet to be announced, but I would expect this to be priced attractively to match the market competition. The only additional configuration required for running load tests on the Microsoft cloud based load test service is to select the test run location as "Run tests using Visual Studio Team Foundation Service".

    How have I been using Microsoft's Cloud Based Load Test Service? I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave me the opportunity to see the product shape up from concept to working solution. I was also the first person outside of Microsoft to try this offering out. This gave me the opportunity to test real world applications at various clients using the Microsoft Load Test Service and provide real world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test Service has been an insurance quote generation engine. This insurance quote generation engine is:
    - hosted in Windows Azure
    - expected to get quote requests from across the globe
    - expected to handle 5 million quote requests in a day (it is not clear how this load will be distributed across the day)
    There was no way I could simulate that kind of load from on premise without standing up additional hardware, but Microsoft's cloud based Load Test service allowed me to test my key performance testing scenarios: simulating expected load, endurance testing, threshold testing and testing for latency.

    Simulating expected load: approach to devising a load pattern. My approach to devising a load test pattern has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. So, for example, generating 1 quote from the quote engine takes 0.5 seconds. Now if you do the math:
    1 quote request by 1 user = 0.5 seconds
    quotes generated by 1 user in 24 hours = 2 per second * 60 * 60 * 24 = 172,800
    quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000
    This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, then you can devise your own load pattern. Some examples of load test patterns can be found here.

    Endurance Testing. To test for endurance, I loaded the quote generation engine with an expected fixed user load and ran the test for a very long duration, such as over 48 hours, and observed the effect of the long-running test on the Azure infrastructure. Currently the Microsoft Load Test service does not support metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test.

    Threshold Testing. To figure out how much user load the application could cope with before falling on its belly, I opted to step-load the quote generation engine by incrementing the user load, with different variations of incremental user load per minute, until the application crashed and forced an IIS reset.

    Testing for Latency. Currently the Microsoft Load Test service does not support generating geographically distributed load. I did, however, deploy the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (this feature is provided out of the box right from Visual Studio 2010), I could see the difference in response times.

    More resources on Microsoft's Cloud Based Load Test Service. A few important links to get you started: Download Visual Studio Ultimate 2013 Preview; Getting started guide for load testing using Team Foundation Service; Troubleshooting guide for FAQs and known issues; Team Foundation Service forum for questions and support; Detailed demo and presentation (link to Tech-Ed session recording); Detailed demo and presentation (link to Build session recording). There are a few limits on the usage of the Microsoft Cloud Based Load Test service that you can read about here. If you have any feedback on the Microsoft Cloud Based Load Test service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time out to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!

    Read the article

  • How to manage test fixtures for end-to-end testing?

    - by Peter Becker
    Having just set up a test framework for a new web application, I realized I missed one of the big questions: "How do I make tests independent from each other?" Years ago I set up some complicated Ant scripting to do full cycles of deleting all database tables, creating the schema again, adding test data, starting the application, running one test and then stopping the application. That was a pain to maintain and restricted us to nightly tests due to the time it took to run the full suite. It was still worth it, but I wonder if there is an easier way. Are there alternatives to this approach? The main criterion is that each test should not be affected by any other test in the suite, no matter whether it failed or succeeded.
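
    One lighter-weight alternative is to rebuild only the state each test needs in a per-test setup hook, rather than recreating the whole environment per run. A minimal sketch of that idea follows, using JUnit 4 with an in-memory H2 database standing in for the real one; the table and seed data are invented for the example.

        import org.junit.After;
        import org.junit.Before;
        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class CustomerFixtureTest {

            private Connection connection;

            @Before
            public void freshSchemaForEachTest() throws Exception {
                // A new in-memory database per test: nothing can leak from one test to the next.
                connection = DriverManager.getConnection("jdbc:h2:mem:fixture");
                try (Statement s = connection.createStatement()) {
                    s.execute("CREATE TABLE customers(id INT PRIMARY KEY, name VARCHAR(100))");
                    s.execute("INSERT INTO customers VALUES (1, 'Test Customer')");
                }
            }

            @After
            public void tearDown() throws Exception {
                connection.close();   // closing the last connection drops the in-memory database
            }

            @Test
            public void seededCustomerIsVisible() throws Exception {
                try (Statement s = connection.createStatement();
                     ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM customers")) {
                    rs.next();
                    assertEquals(1, rs.getInt(1));
                }
            }
        }

    For a full end-to-end suite the same shape applies: the per-test hook truncates and reseeds only the tables the test touches, or runs the test inside a transaction that is rolled back, which is far cheaper than dropping and rebuilding everything per run.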

    Read the article

  • How to do integration testing?

    - by StackUnderflow
    There is so much written about unit testing, but I have hardly found any books or blogs about integration testing. Could you please suggest something for me to read on this topic? What tests should be written when doing integration testing? What makes a good integration test? And so on. Thanks

    Read the article

  • tools for testing vim plugins

    - by intuited
    I'm looking for some tools for testing vim scripts: either vim scripts that do unit/functional testing, or classes for some other library (e.g. Python's unittest module) that make it convenient to run vim with parameters that cause it to do some tests on its environment, and to determine from the output whether or not a given test passed. I'm aware of a couple of vim scripts that do unit testing, but they're sort of vaguely documented and may or may not actually be useful. vim-unit: purports "To provide vim scripts with a simple unit testing framework and tools"; its first and only version (v0.1) was released in 2004, and the documentation doesn't mention whether or not it works reliably, other than to state that it is "fare [sic] from finished". unit-test.vim: this one also seems pretty experimental and may not be particularly reliable; it may have been abandoned or back-shelved, as the last commit was in 2009-11 (about 6 months ago) and no tagged revisions (i.e. releases) have been created. So information from people who are using one of those two existing modules, and/or links to other, more clearly usable options, is very welcome.

    Read the article

  • Why not speed up testing by using a function dependency graph?

    - by Maltrap
    It seems logical to me that if you have a dependency graph of your source code (a tree showing the call hierarchy of all functions in your code base) you should be able to save a tremendous amount of time doing functional and integration tests after each release. Essentially, you would be able to tell the testers exactly what functionality to test, as the rest of the features remain unchanged from a source code point of view. If, for instance, you fix a spelling mistake in one piece of the code, there is no reason to run through your whole test script again "just in case" you introduced a critical bug. My question: why are dependency trees not used in software engineering, and if you do use them, how do you maintain them? What tools are available that generate these trees for C# .NET, C++ and C source code?
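
    To make the idea concrete, here is a small sketch of change-based test selection over a hand-maintained reverse dependency map; every function and test name is invented, and a real tool would derive the graph from the source rather than hard-coding it.

        import java.util.*;

        public class ImpactedTestSelector {

            // Reverse dependency graph: function -> functions that call it (hypothetical names).
            static final Map<String, Set<String>> CALLERS = Map.of(
                    "formatCurrency", Set.of("renderInvoice", "renderQuote"),
                    "renderInvoice", Set.of("invoiceController"),
                    "renderQuote", Set.of("quoteController"),
                    "invoiceController", Set.of(),
                    "quoteController", Set.of());

            // Which tests exercise which entry points (hypothetical).
            static final Map<String, Set<String>> TESTS = Map.of(
                    "invoiceController", Set.of("InvoiceEndToEndTest"),
                    "quoteController", Set.of("QuoteEndToEndTest"));

            // Walk the reverse graph from a changed function and collect every test that can reach it.
            static Set<String> testsImpactedBy(String changedFunction) {
                Set<String> impacted = new TreeSet<>();
                Deque<String> toVisit = new ArrayDeque<>(List.of(changedFunction));
                Set<String> seen = new HashSet<>();
                while (!toVisit.isEmpty()) {
                    String fn = toVisit.pop();
                    if (!seen.add(fn)) continue;
                    impacted.addAll(TESTS.getOrDefault(fn, Set.of()));
                    toVisit.addAll(CALLERS.getOrDefault(fn, Set.of()));
                }
                return impacted;
            }

            public static void main(String[] args) {
                // A change to formatCurrency should trigger both end-to-end tests and nothing else.
                System.out.println(testsImpactedBy("formatCurrency"));
            }
        }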

    Read the article

  • BUILD 2013 Session – Testing Your C#-Based Windows Store Apps

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/build-2013-sessionndashtesting-your-c-base-windows-store-apps.aspx Testing an application is not what most people consider fun, and the number of situations that need to be tested seems to grow exponentially when building mobile apps. That is why I found the topic of this session interesting. When I found out that the speaker, Francis Cheung, was from the Patterns and Practices group, I knew I was in the right place. I have admired that team since I first met Ron Jacobs around 2001. So what did Francis have to offer? He started off in a rather confusing who's-on-first fashion. It seems that one of his testers was originally supposed to give the talk, but then it was decided that it would be better to have someone who does development present a testing topic. This didn't hinder the content of the talk in the least. He broke the process down in a logical manner that would be straightforward to understand, if not to implement. Francis hit the main areas we usually think of, such as tombstoning, network connectivity and asynchronous code, but he approached them with tools we may not have thought of until now. He relied heavily on Fiddler to intercept and change the behavior of network requests. Then there are the areas you might not normally think to check. These include localization, accessibility and updating client code to a new version. These are important aspects of your app that can severely impact how customers feel about it. Take the time to view this session and get a new appreciation for testing and where it fits in your development lifecycle. del.icio.us Tags: BUILD 2013, Testing, C#, Windows Store Apps, Fiddler

    Read the article

  • Step by Step screencasts to do Behavior Driven Development on WCF and UI using xUnit

    - by oazabir
    I am trying to encourage my team to get into Behavior Driven Development (BDD), so I made two quick video tutorials to show how BDD can be done from the early requirement collection stage to late integration tests. The first explains breaking user stories into behaviors, and then developers and test engineers taking the behavior specs and writing a WCF service and a unit test for it in parallel, and then eventually integrating the WCF service and doing the integration tests. It introduces how mocking is done using the Moq library. Moreover, it shows a way you can write a test once and run it as both a unit test and an integration test at the flip of a config setting. Watch the screencast here: Doing BDD with xUnit and SubSpec on a WCF Service. Warning: you might hear some noise in the audio in some places; something went wrong with the audio bit rate. I suggest you let the video download for a while and then play it. If you still get noise, go back a couple of seconds and then resume playing; that eliminates the noise. The next video tutorial is about using BDD for automated UI tests. It shows how test engineers can take behaviors and then write tests that test a prototype UI in isolation (just like the service contract) in order to ensure the prototype conforms to the expected behaviors, while developers write the real code and build the real product in parallel. When the real thing is done, the same tests can run against it and ensure the agreed behaviors are satisfied. I have used WatiN to automate the UI and test it for expected behaviors. Doing BDD with xUnit and WatiN on an ASP.NET WebForm. Hope you like it!
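
    The "write the test once, run it as a unit or integration test at the flip of a config setting" idea can be sketched roughly as below. This is a Java/JUnit analogue of what the screencasts do with xUnit and Moq; the QuoteService types and the test.mode property are invented for the illustration.

        import org.junit.Test;
        import static org.junit.Assert.assertTrue;

        public class QuoteServiceBehaviourTest {

            interface QuoteService {
                double quoteFor(String productCode);
            }

            // Stub used for the fast, isolated unit-level run.
            static class StubQuoteService implements QuoteService {
                public double quoteFor(String productCode) { return 42.0; }
            }

            // Placeholder for a client of the real service, used in the integration run.
            static class RealQuoteServiceClient implements QuoteService {
                public double quoteFor(String productCode) {
                    throw new UnsupportedOperationException("call the real service here");
                }
            }

            // The config flip: running with -Dtest.mode=integration points the same test at the real service.
            private QuoteService serviceUnderTest() {
                return "integration".equals(System.getProperty("test.mode"))
                        ? new RealQuoteServiceClient()
                        : new StubQuoteService();
            }

            // Behaviour: given a known product, when a quote is requested, then a positive price is returned.
            @Test
            public void givenKnownProduct_whenQuoteRequested_thenPositivePriceReturned() {
                assertTrue(serviceUnderTest().quoteFor("KNOWN-PRODUCT") > 0);
            }
        }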

    Read the article

  • Term for unit testing that separates test logic from test result data

    - by mario
    So I'm not doing any unit testing, but I've had an idea to make it more appropriate for my field of use. Yet it's not clear if something like this exists, and if so, what it would be called. Ordinary unit tests combine the test logic and the expected outcome. In essence the testing framework only checks booleans (did this match, did the expected result occur). To generalize, the test code itself references the audited functions and also spells out the expected result values, like so: unit::assert( test_me() == 17 ). What I'm looking for is a separation of concerns. The test itself should only contain the tested logic. The outcome and result data should be handled by the unit testing or assertion framework. As an example: unit::probe( test_me() ). Here the probe actually doubles as a collector on the first run, and afterwards as a verification method. The expected 17 is not mentioned in the test code, but stored or managed elsewhere. What is this scheme called? Or how would you call it? I hope I can find some actual implementations with the proper terminology. Obviously such a pattern is unfit for TDD; it's strictly for regression testing. Also obviously, it cannot be used for all cases: only the simpler test subjects can be analyzed that way, and for anything else the ordinary unit test setup and assertion steps are required. And yes, this could be manually accomplished by crafting a ResultWhateverObject, but that would still require hardwiring it to the test logic. Also keep in mind that I'm asking about scripting languages, not about Java. I'm aware that the xUnit pattern originates there, and why it's hence as elaborate as it is. By the way, I've discovered one test execution framework which allows for shortening simple test notations to: test_me(); // 17. While the result data is thus no longer coded in (it's a comment), that's still not a complete separation, and of course it would work only for scalar results.
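
    What is described here is usually called approval testing (also snapshot or golden-master testing): the framework records the observed result on the first run and verifies against that recording afterwards. Below is a rough sketch of the record-then-verify idea in Java; the probe helper and the file layout are invented for illustration.

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;

        public class Probe {

            // First run: record the observed value as the approved result.
            // Later runs: fail if the observed value no longer matches the recording.
            static void probe(String testName, Object observed) throws IOException {
                Path approvedFile = Path.of("approved", testName + ".txt");
                String actual = String.valueOf(observed);

                if (!Files.exists(approvedFile)) {
                    Files.createDirectories(approvedFile.getParent());
                    Files.writeString(approvedFile, actual);      // collector mode
                    return;
                }
                String approved = Files.readString(approvedFile); // verification mode
                if (!approved.equals(actual)) {
                    throw new AssertionError(testName + ": expected " + approved + " but got " + actual);
                }
            }

            static int testMe() { return 17; }                    // stand-in for the function under test

            public static void main(String[] args) throws IOException {
                probe("test_me", testMe());                       // the expected 17 lives in approved/test_me.txt, not here
            }
        }

    Existing implementations of this pattern include the ApprovalTests libraries and the snapshot-testing support in several JavaScript and Python test frameworks.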

    Read the article
