Search Results

Search found 10170 results on 407 pages for 'regression testing'.

Page 14/407 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • What is unit testing?

    - by Alon
    What is unit testing, and what are unit testing libraries like xUnit? I understand it's testing specific code, so what's the difference between that and just opening a new project and testing that specific code?
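
    To make it concrete, here is a minimal sketch of a unit test written with xUnit in C# (the Calculator class is invented for illustration). The difference from an ad-hoc test project is that the framework discovers every [Fact] method, runs it in isolation, and reports pass or fail, so the checks are repeatable and can run on every build instead of being thrown away.

        using Xunit;

        public class Calculator
        {
            public int Add(int a, int b)
            {
                return a + b;
            }
        }

        public class CalculatorTests
        {
            [Fact] // xUnit discovers and runs every [Fact] method automatically
            public void Add_sums_two_numbers()
            {
                var calc = new Calculator();
                Assert.Equal(5, calc.Add(2, 3));
            }
        }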

    Read the article

  • Is Unit Testing worth the effort?

    - by The Talking Walnut
    I am working to integrate unit testing into the development process on the team I work on, and there are some skeptics. What are some good ways to convince the skeptical developers on the team of the value of unit testing? In my specific case, we would be adding unit tests as we add functionality or fix bugs. Unfortunately, our code base does not lend itself to easy testing.

    Read the article

  • Unit Testing a CSV Parser and Column Mapping Tool

    - by PieterG
    I am really starting to enjoy unit testing and have the following question for the gurus of unit testing. Let's say, for example, that I have the following class:

        public class FileMapper
        {
            public Dictionary<string, string> ReadFile(string filename, string delimiter) { }
        }

    How do you generally go about unit testing a parser, or the ReadFile method in my case?
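
    One approach, sketched below on the assumption that ReadFile maps the first delimited column of each line to the second: write a tiny, known input file in the test itself and assert on the resulting dictionary. The file contents and expected values here are invented; for heavier coverage you could also extract the line-parsing logic behind a method that takes a string, so most tests avoid the file system entirely.

        using System.Collections.Generic;
        using System.IO;
        using Xunit;

        public class FileMapperTests
        {
            [Fact]
            public void ReadFile_maps_first_column_to_second()
            {
                // Arrange: write a tiny, known CSV to a temp file.
                string path = Path.GetTempFileName();
                File.WriteAllLines(path, new[] { "US,United States", "DE,Germany" });

                // Act: FileMapper is the class from the question, assumed implemented.
                var mapper = new FileMapper();
                Dictionary<string, string> result = mapper.ReadFile(path, ",");

                // Assert
                Assert.Equal(2, result.Count);
                Assert.Equal("United States", result["US"]);

                File.Delete(path);
            }
        }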

    Read the article

  • Getting started with unit testing in VS2010?

    - by Herb Caudill
    I'm new to both unit testing and Visual Studio 2010 (just upgraded from 2008). I'm interested in using VS2010's new built-in unit testing tools, but would like to get the lay of the land first. I haven't been able to find any resources or tutorials on unit testing with VS2010 specifically - has anyone found a good walk-through? I'm also open to persuasion that we should stick with NUnit or the like, if anyone knows a reason to avoid the built-in tools.
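
    For orientation, the built-in tooling (MSTest) looks a lot like NUnit: a test project references Microsoft.VisualStudio.TestTools.UnitTesting, and Visual Studio discovers anything marked with the attributes below. A minimal sketch, with an invented class under test:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class PriceCalculatorTests
        {
            [TestMethod]
            public void Discount_is_applied_to_the_list_price()
            {
                var calculator = new PriceCalculator();
                decimal discounted = calculator.Apply(100m, 0.10m); // 10% discount
                Assert.AreEqual(90m, discounted);
            }
        }

        // Hypothetical class under test, included so the sketch is self-contained.
        public class PriceCalculator
        {
            public decimal Apply(decimal listPrice, decimal discount)
            {
                return listPrice * (1 - discount);
            }
        }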

    Read the article

  • DHCP and router load testing

    - by John H
    I manage a campground wifi network with an average of 10-60 active users. I have encountered issues where the router starts acting flaky (failing to assign DHCP addresses or failing to pass traffic) without any clear warning (low CPU utilization, etc.). I upgraded the router a couple of times and ended up with a Netgear ProSafe VPN router that seems to be handling the traffic. Interestingly, the Netgear has lower specs than the Buffalo router it replaced, which points to the DD-WRT firmware as the culprit. While I'll be pursuing that issue on the DD-WRT forums, I need a way to test routers. My vision is to have 1-2 computers connected on the LAN side and 1-2 computers connected on the WAN side, with the LAN computers generating various types of traffic and connections as well as requesting DHCP addresses. A few notes:

    - The wireless aspect should be a non-issue. Most clients connect to a wireless bridge and come into the router through a network cable.
    - I had a monitoring server with Nagios running check_dhcp against the router. This server was connected directly by a network cable, eliminating wifi bridges and other devices from the equation.
    - This question is somewhat related, but not exactly: Load testing wireless LANs
    - I am going to look at IxChariot. While I'd ideally like to use one computer on each side running Linux, and preferably free software, I can entertain running Windows, multiple computers, or non-free software.
    - Total bandwidth doesn't seem to be the issue. I can transfer large files all day. Even on the busiest days, the users only pull around 5 Mbps.
    - There is very little LAN-to-LAN traffic, and most of it may never have reached the main router.
    - The issue I need to test for seems to be tied to active users or, more appropriately, active sessions. I know "active users" or "active clients" is a meaningless term from a router's standpoint and wouldn't mind having more appropriate terms to use.

    Summary: I need a way to test a router's ability to handle traffic from a large number of clients. My current strategy is to purchase a router, deploy it, and see how it fails in the live environment.
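
    In the absence of a dedicated tool, a rough way to approximate "many active sessions" (as opposed to raw bandwidth) is to open and hold a large number of concurrent TCP connections from a LAN-side machine through the router to a WAN-side listener, watching where connects start to fail. The sketch below is only an illustration: it assumes a simple web or echo server is running on the WAN-side computer, the address, port, and counts are placeholders, and a tool like IxChariot will produce far more realistic traffic mixes.

        using System;
        using System.Net.Sockets;
        using System.Threading.Tasks;

        class SessionSoak
        {
            static async Task Main()
            {
                string wanHost = "203.0.113.10";   // placeholder: WAN-side test server
                int port = 80;                     // placeholder: any listening service
                int sessions = 500;                // ramp this up until the router misbehaves

                var tasks = new Task<bool>[sessions];
                for (int i = 0; i < sessions; i++)
                    tasks[i] = OpenAndHold(wanHost, port, TimeSpan.FromSeconds(60));

                bool[] results = await Task.WhenAll(tasks);
                int failures = 0;
                foreach (bool ok in results) if (!ok) failures++;
                Console.WriteLine($"{sessions - failures}/{sessions} sessions held successfully");
            }

            static async Task<bool> OpenAndHold(string host, int port, TimeSpan holdTime)
            {
                try
                {
                    using (var client = new TcpClient())
                    {
                        await client.ConnectAsync(host, port);
                        await Task.Delay(holdTime);   // keep the connection/NAT table entry alive
                        return true;
                    }
                }
                catch (SocketException)
                {
                    return false;                     // failed connects hint at session-table exhaustion
                }
            }
        }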

    Read the article

  • Free Book from Microsoft - Testing for Continuous Delivery with Visual Studio 2012

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/10/16/free-book-from-microsoft---testing-for-continuous-delivery-with.aspx

    At http://msdn.microsoft.com/en-us/library/jj159345.aspx, Microsoft have made available a free e-book, Testing for Continuous Delivery with Visual Studio 2012: "As more software projects adopt a continuous delivery cycle, testing threatens to be the bottleneck in the process. Agile development frequently revisits each part of the source code, but every change requires a re-test of the product. While the skills of the manual tester are vital, purely manual testing can't keep up. Visual Studio 2012 provides many features that remove roadblocks in the testing and debugging process and also help speed up and automate re-testing."

    Read the article

  • Review: Backbone.js Testing

    - by george_v_reilly
    Title: Backbone.js Testing
    Author: Ryan Roemer
    Rating: 4.5 stars
    Publisher: Packt
    Copyright: 2013
    ISBN: 178216524X
    Pages: 168
    Keywords: programming, testing, javascript, backbone, mocha, chai, sinon
    Reading period: October 2013

    Backbone.js Testing is a short, dense introduction to testing JavaScript applications with three testing libraries: Mocha, Chai, and Sinon.JS. Although the author uses a sample application of a personal note manager written with Backbone.js throughout the book, much of the material would apply to any JavaScript client or server framework. Mocha is a test framework that runs your tests and can be executed in the browser or by Node.js. Chai is a framework-agnostic TDD/BDD assertion library. Sinon.JS provides standalone test spies, stubs, and mocks for JavaScript. They complement each other, and the author does a good job of explaining when and how to use each.

    I've written a lot of tests in Python (unittest and mock, primarily) and C# (NUnit), but my experience with JavaScript unit testing was both limited and years out of date. The JavaScript ecosystem continues to evolve rapidly, with new browser frameworks and Node packages springing up everywhere. JavaScript has some particular challenges in testing, notably asynchrony and callbacks. Mocha, Chai, and Sinon meet those challenges, though they can't take away all the pain.

    The author describes how to test Backbone models, views, and collections; how to deal with asynchrony; useful testing heuristics, including isolating components to reduce dependencies; when to use stubs, mocks, and fake servers; and test automation with PhantomJS. He does not, however, teach you Backbone.js itself; for that, you'll need another book.

    There are a few areas which I thought were dealt with too lightly. There's no real discussion of test-driven development or behavior-driven development, which provide the intellectual foundations of much of the book. Nor does he have much to say about testability and how to make legacy code more testable. The sample Notes app has plenty of testing seams (much of this falls naturally out of the architecture of Backbone); other apps are not so lucky. The chapter on automation is extremely terse (it could be expanded into a very large book!), but it does provide useful pointers to many areas for exploration.

    I learned a lot from this book and I have no hesitation in recommending it.

    Disclosure: Thanks to Ryan Roemer and Packt for a review copy of this book.

    Read the article

  • Free E-Book - Testing for Continuous Delivery with Visual Studio 2012

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/11/05/free-e-book---testing-for-continuous-delivery-with-visual-studio.aspx

    At http://msdn.microsoft.com/en-us/library/jj159345.aspx, Microsoft Press are offering the free e-book Testing for Continuous Delivery with Visual Studio 2012: "As more software projects adopt a continuous delivery cycle, testing threatens to be the bottleneck in the process. Agile development frequently revisits each part of the source code, but every change requires a re-test of the product. While the skills of the manual tester are vital, purely manual testing can't keep up. Visual Studio 2012 provides many features that remove roadblocks in the testing and debugging process and also help speed up and automate re-testing."

    Read the article

  • How are design-by-contract and property-based testing (QuickCheck) related?

    - by Todd Owen
    Is their only similarity the fact that they are not xUnit-style (or, more precisely, not based on enumerating specific test cases), or is it deeper than that? Property-based testing (using QuickCheck, ScalaCheck, etc.) seems well suited to a functional programming style where side effects are avoided. On the other hand, Design by Contract (as implemented in Eiffel) is better suited to OOP languages: you can express post-conditions about the effects of methods, not just their return values. But both of them involve testing assertions that are true in general (rather than assertions that should be true for a specific test case). And both can be tested using randomly generated inputs (with QuickCheck this is the only way, whereas with Eiffel I believe it is an optional feature of the AutoTest tool). Is there an umbrella term that encompasses both approaches? Or am I imagining a relationship that doesn't really exist?
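
    To make the shared idea concrete, here is a hand-rolled property-style test in C# with xUnit; it is only a sketch, and a library such as FsCheck would generate and shrink the inputs for you. It illustrates exactly the common ground the question identifies: an assertion stated for all inputs, checked against many randomly generated cases rather than a few enumerated ones.

        using System;
        using System.Linq;
        using Xunit;

        public class ReversePropertyTests
        {
            // Property: reversing a sequence twice yields the original sequence.
            // Checked against many randomly generated inputs rather than a few
            // hand-picked examples; this is the idea QuickCheck automates.
            [Fact]
            public void Reversing_twice_is_the_identity()
            {
                var rng = new Random(42); // fixed seed so a failure is reproducible
                for (int i = 0; i < 100; i++)
                {
                    int[] input = Enumerable.Range(0, rng.Next(0, 50))
                                            .Select(_ => rng.Next())
                                            .ToArray();
                    int[] roundTripped = input.Reverse().Reverse().ToArray();
                    Assert.Equal(input, roundTripped);
                }
            }
        }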

    Read the article

  • What is the most appropriate testing method in this scenario?

    - by Daniel Bruce
    I'm writing some Objective-C apps (for OS X/iOS) and I'm currently implementing a service to be shared across them. The service is intended to be fairly self-contained. For the current functionality I'm envisioning, there will be only one method that clients call; it performs a fairly complicated series of steps, both using private methods on the class and passing data through a bunch of "data mangling classes", to arrive at an end result. The gist of the code is to fetch a log of changes, stored in a service-internal data store, that have occurred since a particular time, simplify the log to include only the last applicable change for each object, attach the serialized values for the affected objects, and return this all to the client.

    My question then is, how do I unit-test this entry point method? Obviously, each class would have thorough unit tests to ensure that its functionality works as expected, but the entry point seems harder to "disconnect" from the rest of the world. I would rather not send in each of these internal classes IoC-style, because they're small and are only made classes to satisfy the single-responsibility principle. I see a couple of possibilities:

    - Create a "private" interface header for the tests with methods that call the internal classes, and test each of these methods separately. Then, to test the entry point, make a partial mock of the service class with these private methods mocked out and just test that the methods are called with the right arguments.
    - Write a series of fatter tests for the entry point without mocking out anything, testing the entire functionality in one go. This looks, to me, more like "integration testing" and seems brittle, but it does satisfy the "only test via the public interface" principle.
    - Write a factory that returns these internal services and take that in the initializer, then write a factory that returns mocked versions of them to use in tests. This has the downside of making the construction of the service annoying, and it leaks internal details to the client.
    - Write a "private" initializer that takes these services as extra parameters, use that to provide mocked services, and have the public initializer call through to this one. This would ensure that the client code still sees the easy/pretty initializer and no internals are leaked.

    I'm sure there are more ways to solve this problem that I haven't thought of yet, but my question is: what's the most appropriate approach according to unit testing best practices? Especially considering I would prefer to write this test-first, meaning I should preferably only create these services as the code indicates a need for them.
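
    For what it's worth, the last option is a well-established pattern: a designated initializer that accepts the collaborators, with the convenience initializer supplying the real ones. A sketch of the shape in C#, since the pattern is language-agnostic; every type name here is hypothetical:

        // Hypothetical names throughout; this only illustrates the shape of the last option.
        public interface IChangeStore { /* fetches the raw change log */ }
        public interface ILogSimplifier { /* collapses the log to the last change per object */ }

        public class SyncService
        {
            private readonly IChangeStore store;
            private readonly ILogSimplifier simplifier;

            // Public constructor: what clients see. Wires up the real collaborators.
            public SyncService() : this(new FileChangeStore(), new DefaultLogSimplifier()) { }

            // "Private" (internal) constructor: what tests use to pass fakes.
            internal SyncService(IChangeStore store, ILogSimplifier simplifier)
            {
                this.store = store;
                this.simplifier = simplifier;
            }
        }

        public class FileChangeStore : IChangeStore { }
        public class DefaultLogSimplifier : ILogSimplifier { }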

    Read the article

  • Should I use a unit testing framework to validate XML documents?

    - by christofr
    From http://www.w3.org/XML/Schema: [XML Schemas] provide a means for defining the structure, content and semantics of XML documents. I'm using an XML Schema (XSD) to validate several large XML documents. While I'm finding plenty of support within XSD for checking the structure of my documents, there are no procedural if/else features that allow me to say, for instance, "if Country is USA, then Zipcode cannot be empty". I'm comfortable using unit testing frameworks, and could quite happily use a framework to test content integrity. Am I asking for trouble doing it this way, rather than an alternative approach? Has anybody tried this with good/bad results?

    Edit: I didn't include this information to keep it technology agnostic, but I would be using C# / Linq / xUnit for deserialization / testing.
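
    A rule like "if Country is USA, then Zipcode cannot be empty" is a co-occurrence constraint that XSD 1.0 cannot express (XSD 1.1 assertions or Schematron can, where the toolchain supports them), so checking it with a unit test over the parsed document is a reasonable fallback. A sketch using xUnit and LINQ to XML; the element names and file path are assumptions about the document shape:

        using System.Linq;
        using System.Xml.Linq;
        using Xunit;

        public class AddressRulesTests
        {
            [Fact]
            public void Us_addresses_must_have_a_zipcode()
            {
                var doc = XDocument.Load("customers.xml"); // path is a placeholder

                // Find every Address whose Country is USA but whose Zipcode is missing or blank.
                var usAddressesWithoutZip =
                    from address in doc.Descendants("Address")
                    where (string)address.Element("Country") == "USA"
                       && string.IsNullOrWhiteSpace((string)address.Element("Zipcode"))
                    select address;

                Assert.Empty(usAddressesWithoutZip);
            }
        }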

    Read the article

  • Unit testing a class in a web service in .NET

    - by Dan Bailiff
    After some digging here, I took the advice in this thread: http://stackoverflow.com/questions/371961/how-to-unit-test-c-web-service-with-visual-studio-2008

    I've created a separate class, and my web service class is just a wrapper for that one. The problem is that when I try to create a unit test project in VS2008, it insists on creating a unit test that acts like I'm testing the web service calls instead of the class I specified. I can't get to the class I'm trying to test. I have a web service "subscription_api.asmx". The code-behind is "subscription_api.cs", which contains the web method wrapper calls to the real code in "subscription.cs". I would expect to be able to do the following:

        [TestMethod()]
        public void GetSystemStatusTest()
        {
            subscription sub = new subscription();
            XmlNode node = sub.GetSystemStatus();
            Assert.IsNotNull(node);
        }

    But instead I get this mess, which is autogenerated by VS '08:

        /// <summary>
        ///A test for GetSystemStatus
        ///</summary>
        // TODO: Ensure that the UrlToTest attribute specifies a URL to an ASP.NET page (for example,
        // http://.../Default.aspx). This is necessary for the unit test to be executed on the web server,
        // whether you are testing a page, web service, or a WCF service.
        [TestMethod()]
        [HostType("ASP.NET")]
        [AspNetDevelopmentServerHost("C:\\CVSROOT\\rnr\\pro\\product\\wms\\ss\\subscription_api", "/subscription_api")]
        [UrlToTest("http://localhost/subscription_api")]
        public void GetSystemStatusTest()
        {
            subscription_Accessor target = new subscription_Accessor(); // TODO: Initialize to an appropriate value
            XmlNode expected = null; // TODO: Initialize to an appropriate value
            XmlNode actual;
            actual = target.GetSystemStatus();
            Assert.AreEqual(expected, actual);
            Assert.Inconclusive("Verify the correctness of this test method.");
        }

    Additionally, there is a "subscription_api.accessor" in the Test References folder. When I try this:

        [TestMethod()]
        public void GetSystemStatusTest2()
        {
            subscription_Accessor sub = new subscription_Accessor();
            XmlNode node = sub.GetSystemStatus();
            Assert.IsNotNull(node);
        }

    I get an error:

        Test method subscription_api.Test.subscriptionTest.GetSystemStatusTest2 threw exception:
        System.TypeInitializationException: The type initializer for 'subscription_Accessor' threw an exception.
        ---> System.ArgumentNullException: Value cannot be null.

    I'm really new to unit testing and feel lost. How can I create a unit test just for my subscription class in "subscription.cs" without testing the web service? Am I limited to testing within the same project (I hope not)? Do I have to put the target class in its own project outside of the web service project?
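
    One way out, sketched under the assumption that the project layout can be changed: move the subscription class into a plain class library, reference that library from both the .asmx project and an ordinary test project, and test the class directly with no HostType, UrlToTest, or accessor involved. The ASP.NET-hosted test that Visual Studio generates is only needed when you genuinely want to exercise the service inside its web host.

        using System.Xml;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class SubscriptionTests
        {
            [TestMethod]
            public void GetSystemStatus_returns_a_node()
            {
                // 'subscription' now lives in its own class library; the .asmx stays a thin wrapper.
                subscription sub = new subscription();
                XmlNode node = sub.GetSystemStatus();
                Assert.IsNotNull(node);
            }
        }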

    Read the article

  • Example of testing a RPC call using GWT-TestCase with GAE

    - by Stephen Cagle
    How is that for a lot of acronyms! I am having trouble testing GWT's RPC mechanism using GWT's GWTTestCase. I created a test class using the junitCreator tool included with GWT. I am attempting to test against the built-in Google App Engine, using the "hosted mode" testing profile created by junitCreator. When I run the test, I keep getting errors like:

        Starting HTTP on port 0
        HTTP listening on port 49569
        The development shell servlet received a request for 'greet' in module 'com.google.gwt.sample.stockwatcher.StockWatcher.JUnit.gwt.xml'
        [WARN] Resource not found: greet; (could a file be missing from the public path or a <servlet> tag misconfigured in module com.google.gwt.sample.stockwatcher.StockWatcher.JUnit.gwt.xml ?)
        com.google.gwt.user.client.rpc.StatusCodeException: Cannot find resource 'greet' in the public path of module 'com.google.gwt.sample.stockwatcher.StockWatcher.JUnit'

    I hope that someone somewhere has successfully run JUnit tests (using GWTTestCase or just plain TestCase) that allow for the testing of GWT RPC. If this is the case, could you please mention the steps you took, or better yet, just post code that works. Thanks.
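
    The warning itself points at the usual cause: the JUnit module has no <servlet> mapping for the RPC endpoint, so the hosted-mode test server cannot answer the request for 'greet'. A hedged sketch of the entry that would go inside StockWatcher.JUnit.gwt.xml; the implementation class name is an assumption, and the path must match what the client-side proxy actually requests:

        <!-- Inherit the production module, then map the RPC servlet for the test server. -->
        <inherits name="com.google.gwt.sample.stockwatcher.StockWatcher"/>
        <servlet path="/greet"
                 class="com.google.gwt.sample.stockwatcher.server.GreetingServiceImpl"/>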

    Read the article

  • TDD vs. Unit testing

    - by Walter
    My company is fairly new to unit testing our code. I've been reading about TDD and unit testing for some time and am convinced of their value. I've attempted to convince our team that TDD is worth the effort of learning and changing our mindsets on how we program, but it is a struggle. Which brings me to my question(s).

    There are many in the TDD community who are very religious about writing the test and then the code (and I'm with them), but for a team that is struggling with TDD, does a compromise still bring added benefits? I can probably succeed in getting the team to write unit tests once the code is written (perhaps as a requirement for checking in code), and my assumption is that there is still value in writing those unit tests. What's the best way to bring a struggling team into TDD? And failing that, is it still worth writing unit tests even if it is after the code is written?

    EDIT: What I've taken away from this is that it is important for us to start unit testing somewhere in the coding process. Those on the team who pick up the concept can start to move more towards TDD and testing first. Thanks for everyone's input.

    FOLLOW UP: We recently started a new small project; a small portion of the team used TDD, and the rest wrote unit tests after the code. After we wrapped up the coding portion of the project, those writing unit tests after the code were surprised to see the TDD coders already done and with more solid code. It was a good way to win over the skeptics. We still have a lot of growing pains ahead, but the battle of wills appears to be over. Thanks to everyone who offered advice!

    Read the article

  • Unit Testing a Java Chat Application

    - by Epitaph
    I have developed a basic chat application in Java. It consists of a server and multiple clients. The server continually monitors for incoming messages and broadcasts them to all the clients. The client is made up of a Swing GUI with a text area (for messages sent by the server and other clients), a text field (to send text messages), and a button (SEND). The client also continually monitors for incoming messages from other clients (via the server). This is achieved with threads and event listeners, and the application works as expected.

    But how do I go about unit testing my chat application? As the methods involve establishing a connection with the server and sending/receiving messages from the server, I am not sure whether these methods should be unit tested. As per my understanding, unit testing shouldn't be done for tasks like connecting to a database or network. The few test cases that I could come up with are:

    1) The max limit of the text field
    2) Client can connect to the server
    3) Server can connect to the client
    4) Client can send a message
    5) Client can receive a message
    6) Server can send a message
    7) Server can receive a message
    8) Server can accept connections from multiple clients

    But since most of the above involve some kind of network communication, I cannot perform unit testing. How should I go about unit testing my chat application?

    Read the article

  • SAP related testing

    - by mgj
    Dear all,

    One notion that circulates, mostly as rumour, among aspiring programmers is that the testing phase of the SDLC (Software Development Life Cycle) is not as challenging or interesting as development, because a tester's job becomes monotonous after a while: a person does the same thing over and over again. That's why many are inclined to look for a developer's job rather than a tester's. Don't testers have room to grow within software companies? Please feel free to express your views for or against this.

    How true is this? Could you please give examples of situations (they need not be practical; theoretical ones would suffice) that contradict this statement with respect to a tester's career, specifically in the SAP domain? Examples from other domains are also welcome. This question is not meant to hurt the feelings of anyone in the testing domain. I just want to know what challenges a tester could face in real-life situations, something that would make the job interesting and fun as well. I myself am pro-testing and interested in pursuing testing as a profession in a software company; I'm just curious to know more about it. Thanks. :)

    Read the article

  • Penetration testing with Nikto, unknown results found

    - by heldrida
    I've scanned my new web server and I'm surprised to find results for programs that I never installed. This is a fresh install of Ubuntu 12.04 on which I've just installed PHP 5.3, MySQL, fail2ban, Apache 2, git, and a few other things. Not sure if it's related, but I have WordPress installed; that doesn't have anything to do with myphpnuke, does it? I'd like to understand why I am getting these results:

        + OSVDB-27071: /phpimageview.php?pic=javascript:alert(8754): PHP Image View 1.0 is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
        + OSVDB-3931: /myphpnuke/links.php?op=search&query=[script]alert('Vulnerable);[/script]?query=: myphpnuke is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
        + OSVDB-3931: /myphpnuke/links.php?op=MostPopular&ratenum=[script]alert(document.cookie);[/script]&ratetype=percent: myphpnuke is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
        + /modules.php?op=modload&name=FAQ&file=index&myfaq=yes&id_cat=1&categories=%3Cimg%20src=javascript:alert(9456);%3E&parent_id=0: Post Nuke 0.7.2.3-Phoenix is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
        + /modules.php?letter=%22%3E%3Cimg%20src=javascript:alert(document.cookie);%3E&op=modload&name=Members_List&file=index: Post Nuke 0.7.2.3-Phoenix is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
        + OSVDB-4598: /members.asp?SF=%22;}alert('Vulnerable');function%20x(){v%20=%22: Web Wiz Forums ver. 7.01 and below is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.
        + OSVDB-2946: /forum_members.asp?find=%22;}alert(9823);function%20x(){v%20=%22: Web Wiz Forums ver. 7.01 and below is vulnerable to Cross Site Scripting (XSS). http://www.cert.org/advisories/CA-2000-02.html.

    Thanks for looking!

    Read the article

  • JMeter Stress testing

    - by mcondiff
    I have a MAMP server hosting a Joomla instance. I'd like to hear the community's thoughts on the best way to stress test the server and find its breaking point in terms of concurrent users. Currently I have set up a test plan that hits the home page, grabbing index.php, the CSS, the JS, and all images, and I have run tests with 1 to 100 users and a varying number of loops. What I'd like to know is: how do I determine the number of concurrent or looping requests that gives a good gauge of whether my server can handle the proposed increase in traffic? What are good values for KB/sec, throughput, average, max, and min in the Aggregate Report, and at what number of threads/loops? I have googled and not found immediate answers to these questions, so I thought I'd come here. So far I have mostly used http://jakarta.apache.org/jmeter/usermanual/jmeter_proxy_step_by_step.pdf to guide me and have been winging it in terms of thread and loop numbers. Any light shed on these subjects would be much appreciated.

    Read the article
