Search Results

Search found 14702 results on 589 pages for 'testing logic'.


  • JUnit Testing in Multithread Application

    - by e2bady
    This is a problem my team and I face in almost all of our projects. Testing certain parts of the application with JUnit is not easy and you need to start early and stick to it, but that's not the question I'm asking. The actual problem is that with n threads, locking, possible exceptions within the threads, and shared objects, the task is not simply testing each class, but testing it under the endless situations that threading can produce. To be more precise, let me describe the design of one of our applications: when a user makes a request, several threads are started that each analyse a part of the data to complete the analysis. These threads run for a time that depends on the size of the chunk of data to analyse (the data is endless and of uncertain quality), or they may fail if the data is insufficient or of poor quality. As each thread completes its analysis, it calls a handler that decides, after each thread terminates, whether the collected analysis data is sufficient to deliver an answer to the request. All of these analysers share certain parts of the application (some because the instances are very large, only a certain number can be loaded into memory, and those instances are reusable; some because they hold a standing connection where connecting takes time, e.g. SQL connections), so locking is very common (done with reentrant locks). While the application runs very efficiently and fast, it's not easy to test under real-world conditions. What we do right now is test each class and its predefined conditions, but there are no automated tests for interlocking and synchronization, which in my opinion is not good for quality assurance. Given this example, how would you handle testing the threading, interlocking and synchronization?
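    One common approach, sketched below under the assumption of a hypothetical shared AnalysisHandler guarded by a reentrant lock, is to hammer the shared object from many threads released at the same instant and then assert an invariant. It cannot prove the absence of races, but it reliably exposes gross locking mistakes such as lost updates:

        import static org.junit.Assert.assertEquals;
        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.locks.ReentrantLock;
        import org.junit.Test;

        public class AnalysisHandlerConcurrencyTest {

            // minimal stand-in for the shared, lock-protected handler described above (hypothetical)
            static class AnalysisHandler {
                private final ReentrantLock lock = new ReentrantLock();
                private int completed = 0;

                void analysisCompleted() {
                    lock.lock();
                    try { completed++; } finally { lock.unlock(); }
                }

                int completedCount() { return completed; }
            }

            @Test
            public void concurrentCompletionsAreAllCounted() throws Exception {
                final AnalysisHandler handler = new AnalysisHandler();
                final int workers = 50;
                final CountDownLatch start = new CountDownLatch(1);
                final CountDownLatch done = new CountDownLatch(workers);
                ExecutorService pool = Executors.newFixedThreadPool(workers);

                for (int i = 0; i < workers; i++) {
                    pool.submit(new Runnable() {
                        public void run() {
                            try {
                                start.await();                // release all workers at once
                                handler.analysisCompleted();  // the locked section under test
                            } catch (InterruptedException e) {
                                Thread.currentThread().interrupt();
                            } finally {
                                done.countDown();
                            }
                        }
                    });
                }

                start.countDown();
                done.await(10, TimeUnit.SECONDS);
                pool.shutdownNow();

                // lost updates caused by broken locking show up as a smaller count here
                assertEquals(workers, handler.completedCount());
            }
        }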

    Read the article

  • Database unit testing is now available for SSDT

    - by jamiet
    Good news was announced yesterday for those that are using SSDT and want to write unit tests: unit testing functionality is now available. The announcement was made on the SSDT team blog in the post Available Today: SSDT—December 2012. Here are a few thoughts about this news.

    Firstly, there seems to be a general impression that database unit testing was not previously available for SSDT – that's not entirely true. Database unit testing was most recently delivered in Visual Studio 2010, and any database unit tests written therein work perfectly well against SQL Server databases created using SSDT (why wouldn't they – it's just a database after all). In other words, if you're running SSDT inside Visual Studio 2010 then you could carry on freely writing database unit tests; some of the tight integration between the two (e.g. right-click on an object in SQL Server Object Explorer and choose to create a unit test) was not there – but I've never found that to be a problem. I am currently working on a project that uses SSDT for database development and have been happily running VS2010 database unit tests for a few months now.

    All that being said, delivery of database unit testing for SSDT is now with us and that is good news, not least because we now have the ability to create unit tests in VS2012. We also get tight integration with SSDT itself, the like of which I mentioned above.

    Having now had a look at the new features, I was delighted to find that one of my big complaints about database unit testing has been solved. As I reported here on Connect, a refactor operation would cause unit test code to get completely mangled. See here what such an operation produced: SELECT    * FROM    bi.ProcessMessageLog pml INNER JOIN bi.[LogMessageType] lmt     ON    pml.[LogMessageTypeId] = lmt.[LogMessageTypeId] WHERE    pml.[LogMessage] = 'Ski[LogMessageTypeName]of message: IApplicationCanceled' AND        lmt.[LogMessageType] = 'Warning'; which is obviously not ideal. Thankfully that seems to have been solved with this latest release.

    One disappointment about this new release is that the process for running tests as part of a CI build has not changed from the horrendously complicated process required previously. Check out my blog post Setting up database unit testing as part of a Continuous Integration build process [VS2010 DB Tools - Datadude] for instructions on how to do it. In that blog post I describe it as "fiddly" – I was being kind when I said that! @Jamiet

    Read the article

  • Mock Objects for Testing - Test Automation Engineer Perspective

    - by user9009
    Hello. How often are QA engineers responsible for developing mock objects for unit testing? Or is dealing with mock objects just the developer's job? The reason I ask is that I'm interested in QA as a career and am learning tools like JUnit, TestNG and a couple of frameworks. I just want to know up to what level unit testing is done by the developer, and from what point the QA engineer takes over testing for better test coverage. Thanks. Edit: Based on the answers below, I'm providing more details about the kind of QA I was referring to. I'm interested in test automation rather than simple QA involving record-and-playback scripts. So are test automation engineers responsible for developing frameworks, or do they have a team of developers dedicated to framework development? Yes, I was asking about the usage of mock objects for testing from a test automation engineer's perspective.
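    For context, a mock object in a JUnit test typically looks like the sketch below (using Mockito; the MailServer interface and NotificationService class are hypothetical stand-ins, not part of any real framework):

        import static org.junit.Assert.assertTrue;
        import static org.mockito.Mockito.*;
        import org.junit.Test;

        public class NotificationServiceTest {

            // hypothetical collaborator we do not want to call for real in a unit test
            interface MailServer {
                boolean send(String recipient, String body);
            }

            // hypothetical class under test
            static class NotificationService {
                private final MailServer mail;
                NotificationService(MailServer mail) { this.mail = mail; }
                boolean notifyUser(String user) { return mail.send(user, "Your report is ready"); }
            }

            @Test
            public void notifiesUserThroughMailServer() {
                MailServer mail = mock(MailServer.class);               // the mock object
                when(mail.send(eq("alice"), anyString())).thenReturn(true);

                assertTrue(new NotificationService(mail).notifyUser("alice"));
                verify(mail).send(eq("alice"), anyString());            // interaction check
            }
        }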

    Read the article

  • CppUnit for unit-testing executable files?

    - by hagubear
    I am not sure if anyone has done this. I am trying to do something that is, in general, uncommon: unit-testing executables (Windows) or ELF binaries (Linux). I know that CppUnit provides a good unit testing facility, but I have never used it for unit testing (I used UnitTest++). I hear rumours that you can unit-test executables too. Does anyone have experience with this? A relevant post regarding the philosophy of it was here.
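    Testing a whole executable usually means spawning it from the test and asserting on its exit code and output, rather than linking against its functions. A minimal sketch of that idea, written with JUnit rather than CppUnit and assuming a hypothetical ./myapp binary, might look like:

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertTrue;
        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import org.junit.Test;

        public class ExecutableSmokeTest {

            @Test
            public void printsVersionAndExitsCleanly() throws Exception {
                // assumption: the binary under test is at ./myapp
                Process p = new ProcessBuilder("./myapp", "--version")
                        .redirectErrorStream(true)
                        .start();

                StringBuilder output = new StringBuilder();
                try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
                    String line;
                    while ((line = r.readLine()) != null) {
                        output.append(line).append('\n');
                    }
                }

                assertEquals(0, p.waitFor());                       // process terminated normally
                assertTrue(output.toString().contains("myapp"));    // output sanity check
            }
        }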

    Read the article

  • Scenario to illustrate how unit testing leads to better design

    - by Cocowalla
    For an internal training session, I'm trying to come up with a simple scenario that illustrates how unit testing leads to better design, by forcing you to think about things like coupling before you start coding. The idea is that I get the participants to code something first, without considering unit testing, then we do it again, but considering unit testing. Hopefully the code produced the second time round will be more decoupled and maintainable. I'm struggling to come up with a scenario that can be coded quickly, yet can still demonstrate how unit testing can lead to better overall design.
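    One classic shape for such a scenario, sketched below with hypothetical names, is a class that creates its own dependency (hard to test) versus one that accepts it through the constructor, which is exactly what writing the test pushes you towards:

        // Before: ReportServiceCoupled creates its own clock, so a test cannot control the date.
        class ReportServiceCoupled {
            String title() {
                return "Report for " + java.time.LocalDate.now(); // hidden dependency on the system clock
            }
        }

        // After: the dependency is injected, so a test can substitute a fixed clock.
        class ReportService {
            private final java.time.Clock clock;
            ReportService(java.time.Clock clock) { this.clock = clock; }

            String title() {
                return "Report for " + java.time.LocalDate.now(clock);
            }
        }

        class ReportServiceTest {
            @org.junit.Test
            public void usesTheInjectedClock() {
                java.time.Clock fixed = java.time.Clock.fixed(
                        java.time.Instant.parse("2014-01-01T00:00:00Z"),
                        java.time.ZoneOffset.UTC);
                org.junit.Assert.assertEquals("Report for 2014-01-01", new ReportService(fixed).title());
            }
        }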

    Read the article

  • Code testing practice

    - by Robin Castlin
    So now I have come to the conclusion, like many others, that having some way of constantly testing your code is good practice, since it means fewer people (colleagues and customers alike) need to get involved, simply because you know what's wrong before someone else finds out the hard way. I've heard and read a bit about unit testing and understand what it's supposed to do. Then there are so many different types of bugs: everything from the web browser not being able to send correct values, JavaScript failing, or a global function messing up a piece of code somewhere, to a change that looked good when testing it but fails in some special case that was hard to anticipate. By simply finding these errors I learn to rarely repeat them, but there always seem to be new bugs to find and learn from. I would guess the best practice would be to run every page and its functions a couple of times, witness the result, and repeat this in Firefox, Chrome and Internet Explorer (and on all smartphones, apparently) to make sure it works as intended. However, this would take quite some time, considering I don't work with patches/versions and do little fixes here and there a couple of times per week. What I would prefer is some kind of page I can just load that tests as many things as possible to make sure the site works as intended. Basically, just run a lot of cURL requests with POST values and see if I get the expected result. But how would I avoid incrementing the auto-increment IDs of my MySQL tables when I delete these testing rows? It feels silly to be on ID 1000 with maybe 50 rows in total. If I could build a new project from scratch I would probably implement some kind of smooth way to return a "TRUE" on testing instead of the actual page, but for now this solution would have to be bolted onto existing projects. My question: What would you recommend as the best way to test my site to make sure that existing functions do their job when I edit the code? Should I make a batch of edits first and then manually test the entire site to make sure it still works? Is there any nice way of testing without "hurting" the ID columns? Extra thoughts: Would it be a good idea to document which parts of the site each of my files affects? For instance, if I edit home.php, I would use that documentation to test whether my homepage still works as intended, since it's the only part of the site that file should affect.
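    The "page I can just load that tests as much as possible" idea is essentially an automated smoke test. A rough sketch of the same thing driven from code instead of cURL (in Java purely for illustration, with a hypothetical endpoint URL and form fields) could be:

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertTrue;
        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.nio.charset.StandardCharsets;
        import java.util.Scanner;
        import org.junit.Test;

        public class SiteSmokeTest {

            // hypothetical endpoint of the page under test
            private static final String LOGIN_URL = "http://localhost/login.php";

            @Test
            public void loginPageAcceptsKnownUser() throws Exception {
                HttpURLConnection conn = (HttpURLConnection) new URL(LOGIN_URL).openConnection();
                conn.setRequestMethod("POST");
                conn.setDoOutput(true);

                byte[] form = "user=testuser&pass=testpass".getBytes(StandardCharsets.UTF_8);
                try (OutputStream out = conn.getOutputStream()) {
                    out.write(form);                       // the POST values, as with cURL
                }

                assertEquals(200, conn.getResponseCode());
                try (Scanner s = new Scanner(conn.getInputStream(), "UTF-8")) {
                    String body = s.useDelimiter("\\A").next();
                    assertTrue(body.contains("Welcome"));  // expected marker in the response
                }
            }
        }

    On the auto-increment concern, running such tests against a separate test database, or inside a transaction that is rolled back, keeps the ID sequence of the real data untouched.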

    Read the article

  • What is the aim of software testing?

    - by user970696
    Having read many books, there is a basic contradiction: some say "the goal of testing is to find bugs", while others say "the goal of testing is to assess the quality of the product", meaning that bugs are its by-products. I would also argue that if testing were aimed primarily at a bug hunt, who would do the actual verification and provide the information that the software is ready? Even Kaner, for example, changed his original definition of the goal of testing from bug hunting to providing a quality assessment, but I still cannot see the clear difference. I perceive both as equally important. I can verify software against its specification to make sure it works, and in that case the bugs found are just by-products. But I also run tests just to break things. So which definition is more accurate?

    Read the article

  • Performance Testing Versus Unit Testing

    - by Mystagogue
    I'm reading Osherove's "The Art of Unit Testing," and though I've not yet seen him say anything about performance testing, two thoughts still cross my mind: Performance tests generally can't be unit tests, because performance tests generally need to run for long periods of time. Performance tests generally can't be unit tests, because performance issues too often manifest at an integration or system level (or at least the logic of a single unit test needed to re-create the performance of the integration environment would be too involved to be a unit test). Particularly for the first reason stated above, I doubt it makes sense for performance tests to be handled by a unit testing framework (such as NUnit). My question is: do my findings / leanings correspond with the thoughts of the community?

    Read the article

  • Testing Workflows &ndash; Test-After

    - by Timothy Klenke
    Originally posted on: http://geekswithblogs.net/TimothyK/archive/2014/05/30/testing-workflows-ndash-test-after.aspx

    In this post I'm going to outline a few common methods that can be used to increase the coverage of your test suite. This won't be yet another post on why you should be doing testing; there are plenty of those already out there. Assuming you know you should be testing, then comes the problem of how you actually fit that into your day job. When the opportunity to automate testing comes, do you take it, or do you even recognize it?

    There are a lot of ways (workflows) to go about creating automated tests, just like there are many workflows for writing a program. When writing a program you can take a top-down approach, where you write the main skeleton of the algorithm and call out to dummy stub functions, or a bottom-up approach, where the low-level functionality is fully implemented before it is quickly wired together at the end. Both approaches are perfectly valid in certain contexts. Each approach you are skilled at applying is another tool in your tool belt; the more vectors of attack you have on a problem, the better. So here is a short, incomplete list of some of the workflows that can be applied to increasing the amount of automation in your testing and your level of quality in general. Think of each workflow as an opportunity that is available for you to take.

    Test workflows basically fall into two categories: test first or test after. Test first is the best approach. However, this post isn't about the one and only best approach. I want to focus more on the lesser known, less ideal approaches that still provide an opportunity for adding tests. In this post I'll enumerate some test-after workflows; in my next post I'll cover test-first.

    Bug Reporting. When someone calls you up or forwards you an email with a vague description of a bug, it's usually standard procedure to create or verify a reproduction plan for the bug via manual testing and log that in a bug tracking system. This can be problematic: reproduction plans, when written down, often skip a step that seemed obvious to the tester at the time, or they might be missing some crucial environment setting. Instead of data entry into a bug tracking system, try opening up the test project and adding a failing unit test to prove the bug (a sketch of this follows below). The test project guarantees that all aspects of the environment are set up properly and no steps are missing, and the language in the test project is much more precise than the English that goes into a bug tracking system. This workflow can easily be extended to enhancement requests as well as bug reporting.

    Exploratory Testing. Exploratory testing comes in when you aren't sure how the system will behave in a new scenario. The scenario wasn't planned for in the initial system requirements and there isn't an existing test for it; by definition the system behaviour is "undefined". So write a new unit test to define that behaviour. Add assertions to the test to confirm your assumptions. The new test becomes part of the living system specification that is kept up to date with the test suite.

    Examples. This workflow is especially good when developing APIs. When you are finally done your production API, then comes the job of writing documentation on how to consume it. Good documentation will also include code examples. Don't let these code examples merely exist in some accompanying manual; implement them in a test suite. Example tests and documentation do not have to be created after the production API is complete – it is best to write the example code (tests) as you go, just before the production code.

    Smoke Tests. Every system has a typical use case. This represents the basic, core functionality of the system. If this fails after an upgrade, the end users will be hosed and they will be scratching their heads as to how an update could be released with this core functionality broken. The tests for this core functionality are referred to as "smoke tests". It is a good idea to have them automated and run with each build in order to avoid extreme embarrassment and angry customers.

    Coverage Analysis. Code coverage analysis is a tool that reports how much of the production code base is exercised by the test suite. In Visual Studio this can be found under the Test main menu item. The tool will report a total number for the code coverage, which can be anywhere between 0 and 100%. Coverage analysis shouldn't be used strictly for numbers reporting. Companies shouldn't set minimum coverage targets that mandate that all projects must have at least 80% or 100% test coverage; these arbitrary requirements just invite gaming of the coverage analysis, which makes the numbers useless. The analysis tool will break down the coverage by the various classes and methods in projects. Instead of focusing on the total number, drill down into this view and see which classes have high or low coverage. If you are surprised by a low number on a class, this is an opportunity to add tests. When drilling through the classes there will generally be two types of reaction to a surprisingly low coverage number. The first is a recognition that there is low-hanging fruit to be picked: some classes or methods aren't being tested which easily could be. The other is "OMG" – this is where you find a critical piece of code that isn't under test. In both cases, go and add the missing tests.

    Test Refactoring. The general theme of this post up to this point has been how to add more and more tests to a test suite. I'll step back from that a bit and remind you that every line of code is a liability. Each line of code has to be read and maintained, which costs money, and this is true regardless of whether the code is production code or test code. Remember that the primary goal of the test suite is that it be easy to read, so that people can easily determine the specifications of the system; make sure that adding more and more tests doesn't interfere with this primary goal. Perform code reviews on the test suite as often as on production code, and hold the test code up to the same high readability standards. If the tests are hard to read, change them. Look to remove duplication: duplicate setup code shared between two or more test methods can be moved to a shared function, and entire test methods can be removed if the scenario they test is covered by other tests. It's OK to delete a test that isn't pulling its own weight anymore. Remember to only start refactoring when all the tests are green, and don't refactor the tests and the production code at the same time. An automated test suite can be thought of as a double-entry bookkeeping system: the unchanging, passing production code serves as the tests for the test suite while you are refactoring the tests. As with all refactoring, it is best to fit this into your regular work rather than asking for time later to get it done. Fit it into the standard red-green-refactor cycle; the refactor step applies not only to production code but also to the tests, just not at the same time. Perhaps the cycle should be called red-green-refactor production-refactor tests (not quite as catchy).

    That about covers most of the test-after workflows I can think of. In my next post I'll get into test-first workflows.
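    As a concrete illustration of the bug reporting workflow above, the "bug report" can simply be a new failing test checked in against the defect. Everything in this sketch – the formatter class and the bug it contains – is hypothetical:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class SummaryFormatterBugTest {

            // hypothetical class containing the reported defect
            static class SummaryFormatter {
                String truncate(String text, int max) {
                    // bug under report: cuts one character too many
                    return text.length() <= max ? text : text.substring(0, max - 1) + "...";
                }
            }

            // Reproduces the reported bug: a 10-character limit should keep 10 characters.
            // The test is checked in failing and turns green once the fix lands.
            @Test
            public void keepsExactlyMaxCharactersBeforeEllipsis() {
                assertEquals("Lorem ipsu...", new SummaryFormatter().truncate("Lorem ipsum dolor", 10));
            }
        }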

    Read the article

  • Cloud Based Load Testing Using TF Service &amp; VS 2013

    - by Tarun Arora [Microsoft MVP]
    Originally posted on: http://geekswithblogs.net/TarunArora/archive/2013/06/30/cloud-based-load-testing-using-tf-service-amp-vs-2013.aspx

    One of the new features announced as part of the Visual Studio 2013 Ultimate Preview is 'Cloud Based Load Testing'. In this blog post I'll walk you through: What is Cloud Based Load Testing? How have I been using this feature? (a success story!) Where can you find more resources on this feature?

    What is Cloud Based Load Testing? It goes without saying that performance testing your application not only gives you the confidence that the application will work under heavy levels of stress, but also lets you test how scalable the architecture of your application is. It is important to know how much is too much for your application! Working with various clients in the industry I have realized that the biggest barriers to Load Testing and Performance Testing adoption are: the high infrastructure and administration cost that comes with this phase of testing; the time taken to procure and set up the test infrastructure; and finding a use for this infrastructure investment after completion of testing. Is cloud the answer? 100% Visual Studio compatible, scalable and realistic, start testing in under 2 minutes, intuitive, pay only for what you need, use existing on-premise tests on the cloud. There are a lot of vendors out there offering Cloud Based Load Testing, to name a few: Load Storm, Soasta, Blaze Meter, Blitz, and others. The question you may want to ask is why you should go with Microsoft's Cloud Based Load Test offering. If you are a Microsoft shop or already have investments in Microsoft technologies, you'll see great benefit in the natural integration this offers with existing Microsoft products such as Visual Studio and Windows Azure. For example, your existing web tests authored in Visual Studio 2010 or Visual Studio 2012 will run on the cloud without requiring any modifications whatsoever. Microsoft's cloud test rig also supports API-based testing; for example, if you are building a WPF application which consumes WCF services, you can write unit tests to invoke the WCF service, and these tests can be run on the cloud test rig and loaded with 'N' concurrent users for performance testing. If you have your assets already hosted in Azure, possibly in the same data centre as the cloud test rig, your Azure app will not incur a usage cost because of the generated traffic, since the traffic is coming from the same data centre. The licensing or pricing information on Microsoft's cloud-based Load Test service is yet to be announced, but I would expect it to be priced attractively to match the market competition. The only additional configuration required for running load tests on the Microsoft Cloud Based Load Test service is to select the test run location as "Run tests using Visual Studio Team Foundation Service".

    How have I been using Microsoft's Cloud Based Load Test Service? I have been part of the Microsoft Cloud Based Load Test Service advisory council for the last 7 months. This gave me the opportunity to see the product shape up from concept to working solution, and I was also the first person outside of Microsoft to try this offering out. That gave me the opportunity to test real-world applications at various clients using the Microsoft Load Test Service and provide real-world feedback to the Microsoft product team. One of the most recent systems I tested using the Load Test Service was an insurance quote generation engine. This insurance quote generation engine is hosted in Windows Azure, expected to get quote requests from across the globe, and expected to handle 5 million quote requests in a day (it is not clear how this load will be distributed across the day). There was no way I could simulate that kind of load from on premise without standing up additional hardware. But Microsoft's Cloud Based Load Test service allowed me to test my key performance testing scenarios: simulating the expected load, endurance testing, threshold testing and testing for latency.

    Simulating expected load: approach to devising a load pattern. My approach to devising a load test pattern has been to run the test scenario with 1 user to figure out the response time, then work out how many users are required to reach the target load. So, for example, generating 1 quote from the quote engine takes 0.5 seconds. Now if you do the math: 1 quote request by 1 user = 0.5 seconds; quotes generated by 1 user in 24 hours = (2 * 60) * 60 * 24 = 172,800; quotes generated by 30 users in 24 hours = 172,800 * 30 = 5,184,000. This was a very simple example; if your application requires more concurrent users to test scenarios such as caching, you can devise your own load pattern. Some examples of load test patterns can be found here.

    Endurance Testing. To test for endurance, I loaded the quote generation engine with an expected fixed user load and ran the test for a very long duration, such as over 48 hours, and observed the effect of the long-running test on the Azure infrastructure. Currently the Microsoft Load Test service does not support metrics from the machine under test. I used Azure diagnostics to begin with, but later started using Cerebrata Azure Diagnostics Manager to capture the metrics of the machine under test.

    Threshold Testing. To figure out how much user load the application could cope with before falling on its belly, I opted to step-load the quote generation engine by incrementing the user load, with different variations of incremental user load per minute, until the application crashed and forced an IIS reset.

    Testing for Latency. Currently the Microsoft Load Test service does not support generating geographically distributed load. I did, however, deploy the insurance quote generation engine in different Azure data centres and ran the same set of performance tests to measure latency. Because I could compare load test results from different runs by exporting the results to Excel (this feature is provided out of the box right from Visual Studio 2010), I could see the difference in response times.

    More resources on Microsoft Cloud Based Load Test Service. A few important links to get you started: Download Visual Studio Ultimate 2013 Preview; Getting started guide for load testing using Team Foundation Service; Troubleshooting guide for FAQs and known issues; Team Foundation Service forum for questions and support; Detailed demo and presentation (link to Tech-Ed session recording); Detailed demo and presentation (link to Build session recording). There are a few limits on the usage of the Microsoft Cloud Based Load Test service that you can read about here. If you have any feedback on the service, feel free to share it with the product team via the Visual Studio User Voice forum. I hope you found this useful. Thank you for taking the time to read this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Stay tuned!
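    The sizing arithmetic above is easy to turn into a small helper. This sketch simply restates the author's numbers (0.5 s per quote, a 5 million-per-day target) rather than anything specific to the load test service itself:

        public class LoadSizing {

            /** Concurrent users needed to reach a daily request target, given one request's duration. */
            static int usersNeeded(double secondsPerRequest, long requestsPerDay) {
                double requestsPerUserPerDay = (24 * 60 * 60) / secondsPerRequest;  // 172,800 for 0.5 s
                return (int) Math.ceil(requestsPerDay / requestsPerUserPerDay);
            }

            public static void main(String[] args) {
                // 5,000,000 quotes per day at 0.5 s per quote -> 29 users, matching the ~30 in the post
                System.out.println(usersNeeded(0.5, 5_000_000L));
            }
        }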

    Read the article

  • Pair programming business logic with a non-IT person

    - by user1598390
    Have you had any experience in which a non-IT person works with a programmer during the coding process? It's like pair programming, but one person is a non-IT person who knows a lot about the business, maybe a process engineer with a maths background who knows how things are calculated and can understand non-idiomatic, procedural code. I've found that some procedural, domain-specific languages like PL/SQL are quite understandable by non-IT engineers. These people end up being co-authors of the code and guarantee the correctness of formulas, factors, etc. I've found this kind of pair programming quite productive: these engineer-users feel they are also "owners" and "authors" of the code, and it helps minimize misunderstanding in the communication process. They even help design the test cases. Is this practice common? Does it have a name? Have you had similar experiences?

    Read the article

  • Business Logic Layer in MVC Application

    - by Subin Jacob
    In my ASP MVC application I decided to add another business layer and made the model contain only properties. All other functionality, like saving to and loading from the database, is done in this new business layer. So now the controller calls this business layer and the model for various operations. Is this a good approach to the design? I decided not to put this in the model because I would need a number of models for different actions (for example, one for edit and another for create).
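    The layering being described is roughly the following (sketched in Java purely for illustration, since the idea is language-agnostic; in ASP.NET MVC the shapes would be a C# POCO model, a service class and a controller, and all names here are hypothetical):

        // Model: properties only, no persistence or business rules.
        class Customer {
            private int id;
            private String name;
            // getters/setters omitted for brevity
        }

        // Data access abstraction used by the business layer.
        interface CustomerRepository {
            void save(Customer c);
            Customer load(int id);
        }

        // Business layer: owns the rules and talks to the data access code.
        class CustomerService {
            private final CustomerRepository repository;
            CustomerService(CustomerRepository repository) { this.repository = repository; }

            void create(Customer c) {
                // validation and business rules live here, not in the model or the controller
                repository.save(c);
            }

            Customer findById(int id) { return repository.load(id); }
        }

        // Controller: thin, delegates to the business layer.
        class CustomerController {
            private final CustomerService service;
            CustomerController(CustomerService service) { this.service = service; }

            void createAction(Customer formModel) {
                service.create(formModel);   // the controller never touches the database directly
            }
        }

    A common companion to this layout is keeping thin, per-action view models (one for edit, one for create) that map onto a single business model, so the "many models" concern stays in the presentation layer.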

    Read the article

  • Improve the business logic

    - by Victor
    In my application, I have a feature like this: the user wants to add a new address to the database. Before adding the address, he needs to perform a search (using input parameters like country, city, street, etc.), and when the list comes up, he manually checks whether the address he wants to add is already present. If it is, he does not add the address. Is there a way to make this process better – maybe somehow eliminate a step, avoid the need for manual verification, etc.?
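    One common way to remove the manual step, sketched below with hypothetical names, is to normalize the address and let the application (or a unique constraint in the database) do the duplicate check at save time:

        import java.util.HashSet;
        import java.util.Locale;
        import java.util.Objects;
        import java.util.Set;

        class Address {
            final String country, city, street;
            Address(String country, String city, String street) {
                this.country = country; this.city = city; this.street = street;
            }

            /** Canonical form so differences in case and spacing do not create duplicates. */
            String normalized() {
                return (country + "|" + city + "|" + street)
                        .toLowerCase(Locale.ROOT)
                        .replaceAll("\\s+", " ")
                        .trim();
            }
        }

        class AddressBook {
            private final Set<String> known = new HashSet<>();   // stand-in for a unique index in the database

            /** Returns false instead of inserting when the address already exists. */
            boolean addIfAbsent(Address a) {
                return known.add(Objects.requireNonNull(a).normalized());
            }
        }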

    Read the article

  • What is the logic behind this C Program?

    - by iamanimesh19
    Here is a small piece of a program (14 lines) which counts the number of bits set in a number. Input/output: 0 -> 0 (0000000), 5 -> 2 (0000101), 7 -> 3 (0000111).

        int CountBits(unsigned int x)
        {
            static unsigned int mask[] = { 0x55555555,   /* alternating single bits */
                                           0x33333333,   /* alternating 2-bit groups */
                                           0x0F0F0F0F,   /* alternating nibbles */
                                           0x00FF00FF,   /* alternating bytes */
                                           0x0000FFFF }; /* 16-bit halves */
            int i;
            int shift;  /* number of positions to shift to the right */

            for (i = 0, shift = 1; i < 5; i++, shift *= 2)
                x = (x & mask[i]) + ((x >> shift) & mask[i]);
            return x;
        }

    Can someone explain the algorithm used here / why this works?

    Read the article

  • Logic or Algorithm to solve this problem [closed]

    - by jade
    I have two lists: List1 {a,b,c,d,e} and List2 {f,g,h,i,j}. The relation between the two lists is as follows: a->g, a->h, h->c, h->d, d->i, d->j. Now I have these two lists displayed. Based on the relation above, selecting element a from List1 shows g,h in List2. Selecting h from List2 shows c,d in List1. Selecting d from List1 shows i,j in List2. How do I trace back to the initial state by deselecting the elements in the reverse order in which they were selected?
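    The "trace back in reverse order" requirement is exactly what a stack gives you: push a snapshot of both lists before each selection, and pop to undo. A minimal sketch of the idea (element names taken from the question; the Selection record and list snapshots are hypothetical) could be:

        import java.util.ArrayDeque;
        import java.util.Deque;

        public class SelectionHistory {

            /** One user action: which element was picked and what both lists showed before the pick. */
            static class Selection {
                final String element;
                final String previousList1, previousList2;
                Selection(String element, String previousList1, String previousList2) {
                    this.element = element;
                    this.previousList1 = previousList1;
                    this.previousList2 = previousList2;
                }
            }

            private final Deque<Selection> history = new ArrayDeque<>();

            void select(String element, String list1Before, String list2Before) {
                history.push(new Selection(element, list1Before, list2Before));
            }

            /** Undo the most recent selection and recover the list contents it replaced. */
            Selection deselectLast() {
                return history.pop();   // LIFO order is exactly the reverse selection order
            }

            public static void main(String[] args) {
                SelectionHistory h = new SelectionHistory();
                h.select("a", "a,b,c,d,e", "f,g,h,i,j");   // picking a shows g,h
                h.select("h", "a,b,c,d,e", "g,h");         // picking h shows c,d
                h.select("d", "c,d", "g,h");               // picking d shows i,j
                while (!h.history.isEmpty()) {
                    System.out.println("undo " + h.deselectLast().element);   // prints d, h, a
                }
            }
        }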

    Read the article

  • Techniques to increase logic at programming

    - by u3050
    I have been programming for the last 3 years, but I seem to be lost in it. I am not able to get good at it even though I code every day. Suppose I solve one problem: I will wander from solution to solution and end up implementing something else entirely. I can't focus much. I get many defects in the code I write. I am afraid of the code, and I don't know why; I worry that if I don't finish it on time my boss will fire me, and so on. I enjoy coding, but not all the time. How do I increase my patience? I always wonder how to become a great coder like the many exceptional programmers out there. I know this sounds subjective, but I think it will help the programming community get better at this, especially average programmers like me and beginners.

    Read the article

  • How to manage test fixtures for end-to-end testing?

    - by Peter Becker
    Having just set up a test framework for a new web application, I realized I missed one of the big questions: "How do I make tests independent from each other?" Years ago I set up some complicated Ant scripting to do full cycles of deleting all database tables, creating the schema again, adding test data, starting the application, running one test and then stopping the application. That was a pain to maintain and restricted us to nightly tests due to the time it took to run the full suite. It was still worth it, but I wonder if there is an easier way. Are there alternatives to this approach? The main criterion is that each test should not be affected by any other test in the suite, no matter whether it failed or succeeded.
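    One lighter-weight alternative to rebuilding the schema for every run, sketched here with JUnit and a hypothetical JDBC URL, is to wrap each test in a transaction that is rolled back, so whatever a test inserts disappears again without the full delete/create cycle:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import org.junit.After;
        import org.junit.Before;
        import org.junit.Test;

        public class OrderEndToEndTest {

            private Connection connection;

            @Before
            public void openTransaction() throws Exception {
                // hypothetical test database; each test works inside its own transaction
                connection = DriverManager.getConnection("jdbc:postgresql://localhost/test_db", "test", "test");
                connection.setAutoCommit(false);
            }

            @After
            public void rollBack() throws Exception {
                // everything the test inserted is discarded, so tests cannot affect each other
                connection.rollback();
                connection.close();
            }

            @Test
            public void placingAnOrderStoresIt() throws Exception {
                // exercise the code under test against `connection` here and assert on the result
            }
        }

    The rollback trick only works when the test and the code under test share the connection; for tests that go through the running application, truncating or reseeding only the tables each test touches in a setup step is the analogous approach.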

    Read the article

  • How to do integration testing?

    - by StackUnderflow
    There is so much written about unit testing, but I have hardly found any books or blogs about integration testing. Could you please suggest something to read on this topic? What tests should be written when doing integration testing? What makes a good integration test? Etc. Thanks.

    Read the article

  • tools for testing vim plugins

    - by intuited
    I'm looking for some tools for testing vim scripts: either vim scripts that do unit/functional testing, or classes for some other library (e.g. Python's unittest module) that make it convenient to run vim with parameters that cause it to run some tests on its environment, and determine from the output whether or not a given test passed. I'm aware of a couple of vim scripts that do unit testing, but they're sort of vaguely documented and may or may not actually be useful. vim-unit: purports "To provide vim scripts with a simple unit testing framework and tools"; its first and only version (v0.1) was released in 2004; the documentation doesn't mention whether or not it works reliably, other than to state that it is "fare [sic] from finished". unit-test.vim: this one also seems pretty experimental and may not be particularly reliable; it may have been abandoned or back-shelved (last commit was in 2009-11, 6 months ago); no tagged revisions have been created (i.e. no releases). So information from people who are using one of those two existing modules, and/or links to other, more clearly usable, options, is very welcome.

    Read the article

  • Why not speed up testing by using function dependency graph?

    - by Maltrap
    It seems logical to me that if you have a dependency graph of your source code (a tree showing the call relationships of all functions in your code base), you should be able to save a tremendous amount of time doing functional and integration tests after each release. Essentially you will be able to tell the testers exactly what functionality to test, as the rest of the features remain unchanged from a source code point of view. If, for instance, you fix a spelling mistake in one piece of the code, there is no reason to run through your whole test script again "just in case" you introduced a critical bug. My question: why are dependency trees not used in software engineering, and if you use them, how do you maintain them? What tools are available that generate these trees for C# .NET, C++ and C source code?
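    The core of such a tool is just a walk over a reverse call graph: given the functions changed in a commit, collect everything that can reach them and map that back onto tests. A toy sketch of that selection step (the graph contents here are made up):

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.HashSet;
        import java.util.Map;
        import java.util.Set;

        public class ImpactedTests {

            /** Walks callers-of-callers so every function that can reach a changed one is flagged. */
            static Set<String> impactedBy(Set<String> changed, Map<String, Set<String>> callers) {
                Set<String> impacted = new HashSet<>(changed);
                Deque<String> queue = new ArrayDeque<>(changed);
                while (!queue.isEmpty()) {
                    for (String caller : callers.getOrDefault(queue.pop(), Set.of())) {
                        if (impacted.add(caller)) {
                            queue.push(caller);     // newly discovered caller, follow it upwards too
                        }
                    }
                }
                return impacted;
            }

            public static void main(String[] args) {
                // made-up reverse call graph: function -> the functions that call it
                Map<String, Set<String>> callers = Map.of(
                        "formatName", Set.of("buildInvoice"),
                        "buildInvoice", Set.of("InvoiceTest.testTotals"),
                        "applyDiscount", Set.of("checkout"),
                        "checkout", Set.of("CheckoutTest.testHappyPath"));

                // a spelling fix inside formatName flags the invoice test but not the checkout test
                System.out.println(impactedBy(Set.of("formatName"), callers));
            }
        }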

    Read the article

  • BUILD 2013 Session&ndash;Testing Your C# Base Windows Store Apps

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/build-2013-sessionndashtesting-your-c-base-windows-store-apps.aspx

    Testing an application is not what most people consider fun, and the number of situations that need to be tested seems to grow exponentially when building mobile apps. That is why I found the topic of this session interesting. When I found out that the speaker, Francis Cheung, was from the Patterns and Practices group, I knew I was in the right place. I have admired that team since I first met Ron Jacobs around 2001. So what did Francis have to offer? He started off in a rather confusing who's-on-first fashion. It seems that one of his testers was originally supposed to give the talk, but then it was decided that it would be better to have someone who does development present a testing topic. This didn't hinder the content of the talk in the least. He broke the process down in a logical manner that would be straightforward to understand, if not to implement. Francis hit the main areas we usually think of, such as tombstoning, network connectivity and asynchronous code, but he approached them with tools we may not have thought of until now. He relied heavily on Fiddler to intercept and change the behavior of network requests. Then there are the areas you might not normally think to check. These include localization, accessibility and updating client code to a new version. These are important aspects of your app that can severely impact how customers feel about it. Take the time to view this session and get a new appreciation for testing and where it fits in your development lifecycle. del.icio.us Tags: BUILD 2013,Testing,C#,Windows Store Apps,Fiddler

    Read the article

  • Testing on Device Other Than the Known Brand Question (Local and Imported Phone Question)

    - by David Dimalanta
    I have a question. When testing on a device using Eclipse, it's easy to install and add the device software for the specific brands commonly used in game testing, like Samsung, Google, T-Mobile and HTC, according to the Android Developers website. But what if I'm using other brands that run Android to test the program via Eclipse (i.e. MyPhone, Starmobile)? What should I look for and download in order to enable testing on phones from brands other than the well-known, commonly used ones: the model number, or simply the brand? Here are some examples of such brands, other than the well-known ones that run Android: Starmobile Engage 7 (http://www.lazada.com.ph/Starmobile-Engage-7-Android-40-4GB-with-Wi-Fi-Black-Starmobile-Mercury-B201-COMBO-39833.html/) and My|Phone A898 Duo (http://www.myphone.com.ph/#!a898-duo/c1yt). Also, take note that I'm a Filipino programmer working in the Philippines, testing our local smartphones against the Android game or app we created. I hope you can understand me; thanks for the help.

    Read the article

  • How do you do ASP.Net performance testing?

    - by John
    Our team is in need of a performance testing process. We use ASP.NET (both Web Forms and MVC) and performance testing is not currently built into our projects. We occasionally do some ad-hoc analysis, such as checking the load on the server or running SQL Server Profiler, but we don't have a true end-to-end performance testing methodology built into the project. Where is a good place to start? I'm interested in both: process – general knowledge, including best practices – and an essential list of tools. I'm aware of a few tools, such as what's built into the pricier versions of VS 2010 and the JetBrains products, though I haven't used them.

    Read the article

  • Looking for a very subtle unit testing example

    - by Stéphane Bruckert
    In the context of Continuous Integration, I need to teach unit testing to a 20-person audience of programmers. Everything will be all right, but I am still trying to find the perfect unit testing example. More than writing tests like a robot, I want to show that unit testing can help prevent very subtle errors. I am thinking of the following scenario for a live TDD demo: the test cases would already be written; we would have to write the methods together; most of us would naturally forget to handle a specific case for a method; everyone would then be surprised to see that not all tests pass; the failing test would make us think harder and realize that we forgot an important case. My question will probably end up closed as "too broad" or "not clear what you are asking", but we never know – one of you might have a great idea. Your answer can use Java and JUnit, though any other language is fine since only the idea matters.
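    One shape such a demo can take, offered purely as an illustration of the kind of scenario described (the date-range check and all names below are hypothetical), is a pre-written test for a boundary that is easy to forget; the last assertion is the one most of the room tends to miss when they write the method with strict comparisons:

        import static org.junit.Assert.assertFalse;
        import static org.junit.Assert.assertTrue;
        import java.time.LocalDate;
        import org.junit.Test;

        public class DateRangeTest {

            // The method the participants write live. A common first attempt uses strict
            // comparisons and fails the "ranges that merely touch" case below.
            static boolean overlaps(LocalDate aStart, LocalDate aEnd, LocalDate bStart, LocalDate bEnd) {
                return !aEnd.isBefore(bStart) && !bEnd.isBefore(aStart);
            }

            @Test
            public void clearlySeparateRangesDoNotOverlap() {
                assertFalse(overlaps(d("2014-01-01"), d("2014-01-10"), d("2014-02-01"), d("2014-02-10")));
            }

            @Test
            public void containedRangeOverlaps() {
                assertTrue(overlaps(d("2014-01-01"), d("2014-01-31"), d("2014-01-10"), d("2014-01-15")));
            }

            @Test
            public void rangesSharingASingleDayStillOverlap() {
                // the subtle one: the end of the first range equals the start of the second
                assertTrue(overlaps(d("2014-01-01"), d("2014-01-10"), d("2014-01-10"), d("2014-01-20")));
            }

            private static LocalDate d(String iso) { return LocalDate.parse(iso); }
        }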

    Read the article

  • Game testing on Android - emulator or real devices?

    - by n00bfuscator
    I am working at a localization agency and we have been approached by a client about testing their games on iOS as well as Android. Testing on iOS seems fairly easy, as we can just buy a couple of devices and we should be covered. For Android it seems to be completely different. From what I have found, the emulator can cover all API levels, screen sizes and such, but I hear it's buggy and nothing can replace testing on real devices. With the vast number of Android devices out there and the rate at which new devices are released, it seems impossible to keep up. How can I test games (localization and functional) on Android while covering all compatible devices?

    Read the article
