Search Results

Search found 12476 results on 500 pages for 'unit testing'.

Page 23/500 | < Previous Page | 19 20 21 22 23 24 25 26 27 28 29 30  | Next Page >

  • Value of Step-by-Step Asserts in Unit Tests

    - by Eric J.
    When writing unit tests, there are cases where one can create an Assert for each condition that could fail, or a single Assert that would catch all such conditions. C# example:

        Dictionary<string, string> dict = LoadDictionary();

        // Optional asserts:
        Assert.IsNotNull(dict);
        Assert.IsTrue(dict.Count > 0);
        Assert.IsTrue(dict.ContainsKey("ExpectedKey"));

        // Condition actually of interest:
        Assert.IsTrue(dict["ExpectedKey"] == "ExpectedValue");

    In this kind of situation, is there value for a large, multi-person project in adding the "optional" asserts? It is more work (if you have lots of unit tests), but when a test fails it is more immediately clear where the problem lies. I'm using VS 2010 and the integrated testing tools, but I intend the question to be generic.
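
    A hedged middle ground (a sketch, not taken from the question): keep the guard asserts but give each one a failure message, so a red test still points at the failing precondition without obscuring the assertion that actually matters. LoadDictionary() is the method from the question; the messages are invented.

        Dictionary<string, string> dict = LoadDictionary();

        Assert.IsNotNull(dict, "LoadDictionary() returned null");
        Assert.IsTrue(dict.ContainsKey("ExpectedKey"), "Dictionary is missing 'ExpectedKey'");
        Assert.AreEqual("ExpectedValue", dict["ExpectedKey"]);   // the condition under test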

    Read the article

  • Throwing special type of exception to terminate unit test

    - by trendl
    Assume I want to write a unit test for a particular piece of functionality that is implemented within a method. If I wanted to execute the method completely, I would have to do some extra set-up work (mock object expectations etc.). Instead of doing that I use the following approach: I set up the expectations I'm interested in verifying and then make the tested method throw a special type of exception (e.g. TerminateTestException); further down in the unit test I catch the exception and verify the mock object expectations. It works fine, but I'm not sure it is good practice. I do not do this regularly, only in cases where it saves me time and effort. One thing that comes to mind as an argument against it is that throwing exceptions takes a long time, so the tests execute more slowly than they would with a different approach.
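
    A minimal sketch of the pattern being described, assuming NUnit and Moq (TerminateTestException, INotifier and OrderProcessor are invented names used only for illustration):

        using System;
        using Moq;
        using NUnit.Framework;

        // Thrown only by tests, to cut the method under test short.
        public class TerminateTestException : Exception { }

        public interface INotifier { void Notify(string message); }

        public class OrderProcessor
        {
            private readonly INotifier _notifier;
            public OrderProcessor(INotifier notifier) { _notifier = notifier; }

            public void Process()
            {
                _notifier.Notify("starting");
                // ...expensive work the test is not interested in...
            }
        }

        [TestFixture]
        public class OrderProcessorTests
        {
            [Test]
            public void Notifies_Before_Doing_The_Expensive_Work()
            {
                var notifier = new Mock<INotifier>();
                var sut = new OrderProcessor(notifier.Object);

                // The last call we care about throws, so nothing beyond it needs set-up.
                notifier.Setup(n => n.Notify(It.IsAny<string>()))
                        .Throws(new TerminateTestException());

                try
                {
                    sut.Process();
                }
                catch (TerminateTestException)
                {
                    // Expected: execution was terminated on purpose.
                }

                notifier.Verify(n => n.Notify(It.IsAny<string>()), Times.Once());
            }
        }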

    Read the article

  • Load Testing Linux Virtual Server

    - by Anubhav Agarwal
    I have configured a Linux virtual network with the following configuration:

        172.17.6.112 - VIP
        172.17.6.111 - Linux Director
            |---------- 172.17.6.113 --- Real Server 1
            |---------- 172.17.6.114 --- Real Server 2

    I am using the direct routing technique. I am unable to test my LVS network. Are there some good scripts/software available for load testing? I am running the apache2.0 service on the real servers. I came across testlvs on the internet but am unable to understand its documentation. Are there simpler ones? I want to test the response time of the server using various scheduling algorithms.

    Read the article

  • Developing and implementing a testing plan for a software app deployed on a web server

    - by Abhzoo
    A company in the USA is building a new Web App that will be offered as SaaS to customers, and the development is being done by a software development team located in a different country (India). They are about to take delivery of a first demo to provide live feedback to the team in India. The overseas team requires a cloud server (Windows + SQL Standard, 8 GB RAM, 8 vCPUs, 40 GB SSD system disk, 80 GB SSD data disk, 1600 Mb/s network bandwidth) to serve as a test server. When the test server is set up, the team will install the app on it to get live feedback. Q: Explain in detail how you will develop and implement a testing plan for the software App. Be sure to explain the specifics. PLEASE HELP, NEED ANSWER ASAP

    Read the article

  • How to do integration testing?

    - by Enthusiastic Programmer
    So I have been reading a lot of books about testing, but they all have the same flaw: they will tell you the definitions of testing, but I have not found a single book that will guide you through integration testing (or pretty much anything higher than unit testing). Is integration testing that elusive, or am I reading the wrong books? I'm a hands-on person, so I would appreciate it if someone could help me with a simple program.

    Let's say you need to make some sort of calculation program that calculates something (it doesn't matter what) and exports it to a *.txt file. Let's assume we use the Model View Controller pattern, plus one class for the actual calculating (used by the model) and one for writing the text file. So: View, Controller, Model, CalculationClass, FileClass.

    For unit testing: you'd test CalculationClass, and I'd personally focus most of my unit tests there and spend less time unit testing the View/Controller/FileClass. I personally wouldn't see the use of unit testing those unless you want a really robust program.

    Integration testing: now this is where I run into a wall. What would I have to test to call it an integration test? I could stub the view and feed the controller data, which it would pass on to the model and so forth, and then check what the view gets back in the end. But couldn't I just run the (in this case small) program and test it manually? Would this be considered an integration test too, or does it have to be automated? Also, can I check multiple items to see if they are correct? I cannot seem to find any book that offers a hands-on approach to methods of integration testing.
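
    A hedged sketch of what an automated integration test for this setup could look like (NUnit shown; CalculationClass and FileClass are the names from the question, but their members and the Controller wiring are invented for illustration):

        using System.IO;
        using NUnit.Framework;

        public class CalculationClass
        {
            public int Double(int x) { return x * 2; }
        }

        public class FileClass
        {
            public void Write(string path, string text) { File.WriteAllText(path, text); }
        }

        public class Controller
        {
            private readonly CalculationClass _calc;
            private readonly FileClass _file;

            public Controller(CalculationClass calc, FileClass file) { _calc = calc; _file = file; }

            // Runs the real model and the real file writer together.
            public void Run(int input, string outputPath)
            {
                _file.Write(outputPath, _calc.Double(input).ToString());
            }
        }

        [TestFixture]
        public class CalculationExportIntegrationTests
        {
            [Test]
            public void Controller_Calculates_And_Exports_Result_To_Text_File()
            {
                string path = Path.Combine(Path.GetTempPath(), "result.txt");
                var controller = new Controller(new CalculationClass(), new FileClass());

                controller.Run(21, path);

                // Integration tests assert on the observable end result of the
                // collaborating pieces, not on any one class in isolation.
                Assert.IsTrue(File.Exists(path));
                Assert.IsTrue(File.ReadAllText(path).Contains("42"));
            }
        }

    Whether such a test runs in an automated build or is checked by hand once is exactly the trade-off the question raises; the automated version keeps paying off every time the code changes.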

    Read the article

  • VirtualBox HTTP load testing, host CPU overload issues

    - by aschuler
    I'm doing HTTP load testing benchmarks (using Apache Benchmark and Siege) on a small Java EE 1.7.0 / Tomcat 7.0.26 application running on a Debian Squeeze 6.0.4 x64 guest virtualized with VirtualBox 4.1.8. The host computer is Ubuntu 11.10 x64. I've modified these parameters in the Tomcat server.xml:

        <Connector port="8080" protocol="HTTP/1.1"
                   connectionTimeout="200000"
                   redirectPort="8443"
                   acceptCount="2000"
                   maxThreads="150"
                   minSpareThreads="50" />

    The request served by the application takes around 300 ms. The app runs well up to a certain number of concurrent connections, for example:

        ab -n 500 -c 150 http://xx.xx.xx.xx:8080/myapp/
        ab -n 1000 -c 50 http://xx.xx.xx.xx:8080/myapp/
        siege -b -c 100 -r 20 http://xx.xx.xx.xx:8080/myapp/

    Beyond that, a lot of socket connection timeouts occur and the host processor is completely overloaded (while the CPU load inside the VM stays normal). Running htop on the host, I can see the VirtualBox process at around 300% CPU, and it never comes down even after the load test is finished. (I've allocated 4 processors to the VM; if I allocate only one, CPU load stays under 100%.) Restarting Tomcat doesn't help; I'm forced to restart the whole VM. I've tried launching the same ab/siege commands locally on the VM, and everything goes well. I first thought it was related to a Linux network limit, as explained here: Running some benchmarks using ab, and tomcat starts to really slow down. So I've modified these TCP parameters:

        echo 15 > /proc/sys/net/ipv4/tcp_fin_timeout
        echo 30 > /proc/sys/net/ipv4/tcp_keepalive_intvl
        echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
        echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse

    It seems a bit better, but the host CPU still gets overloaded and socket connection timeouts still appear past a certain number of concurrent connections. I'm wondering whether this is related to how VirtualBox handles external concurrent connections.

    Read the article

  • Load Testing a Security/Gateway Appliance

    - by Joel Coel
    In a couple of weeks I will be load testing a security/gateway appliance. We're a small residential college, and that "residential" means the traffic moving through the appliance is a bit like the Wild West. We have everything from Facebook to World of Warcraft, BitTorrent to Netflix, Halo to YouTube... basically anything you might find in the home of a high-school or college-aged person. Somewhere in there some real academic work gets done as well. We rely on our current appliance for traffic shaping, antivirus, malware filtering, intrusion detection on our servers, logging and abuse reporting, and even some content filtering. All of this puts a decent load on it when we have students around, and I'm concerned about the ability of the new candidate to keep up. On paper it should handle things, but I'm worried; prior experience is that vendors greatly over-report what an appliance can handle. The product also includes a licensed session limit, and I'm also worried that just a few misbehaving students could unwittingly bring us to that limit and cause service disruptions. I need to know this will work for our campus in order to commit to it; going a performance level higher in that product takes the pricing way out of line with what we expect and have done in the past. What I need is a good way to load test this appliance. My problem is that our current level of summer traffic is less than one percent of what it will be when students come back just six weeks from now. Any ideas on how to really stress this thing and see what it can do, in a way that will give me some clear idea of how it will scale for our campus? For the curious, I'm looking at a Watchguard 515, but it could be anything; if I were evaluating a competitor, I'd ask the same question.

    Read the article

  • How do I test UrlHelper.RouteUrl()?

    - by Jeff Putz
    I'm having a tough go trying to figure out what I need to mock in my tests to show that UrlHelper.RouteUrl() is returning the right URL. It works, but I'd like to have the right test coverage. The meat of the controller method looks like this:

        var urlHelper = new UrlHelper(ControllerContext.RequestContext);
        return Json(new BasicJsonMessage
        {
            Result = true,
            Redirect = urlHelper.RouteUrl(new
            {
                controller = "TheController",
                action = "TheAction",
                id = somerecordnumber
            })
        });

    Testing the result object is easy enough, like this:

        var controller = new MyController();
        var result = controller.DoTheNewHotness();
        Assert.IsInstanceOf<JsonResult>(result);
        var data = (BasicJsonMessage)result.Data;
        Assert.IsTrue(data.Result);

    result.Redirect is always null because the controller obviously doesn't know anything about the routing. What do I have to do to the controller to let it know? As I said, I know it works when I exercise the production code, but I'd like some testing assurance. Thanks for your help!
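
    One way to exercise RouteUrl() in a test (a hedged sketch, meant to go inside the test method; Moq is assumed, and MvcApplication.RegisterRoutes stands in for wherever the application declares its routes): populate the route table the UrlHelper will consult and give the controller a real ControllerContext backed by a stubbed HttpContextBase.

        // UrlHelper(RequestContext) resolves against RouteTable.Routes, so the test
        // has to populate it (or re-declare the routes) before calling the action.
        RouteTable.Routes.Clear();
        MvcApplication.RegisterRoutes(RouteTable.Routes);    // assumption: the Global.asax route setup

        var httpContext = new Mock<HttpContextBase>();
        httpContext.Setup(c => c.Request.ApplicationPath).Returns("/");
        httpContext.Setup(c => c.Response.ApplyAppPathModifier(It.IsAny<string>()))
                   .Returns((string url) => url);            // pass virtual paths through unchanged

        var controller = new MyController();
        controller.ControllerContext =
            new ControllerContext(httpContext.Object, new RouteData(), controller);

        var result = controller.DoTheNewHotness();
        var data = (BasicJsonMessage)result.Data;
        Assert.IsNotNull(data.Redirect);                     // now resolves instead of returning null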

    Read the article

  • Unit testing several implementations of the same trait/interface

    - by paradigmatic
    I program mostly in Scala and Java, using ScalaTest in Scala and JUnit for unit testing. I would like to apply the very same tests to several implementations of the same interface/trait. The idea is to verify that the interface contract is enforced and to check the Liskov substitution principle. For instance, when testing implementations of lists, tests could include:

        - An instance should be empty if and only if it has zero size.
        - After calling clear, the size should be zero.
        - Adding an element in the middle of a list will increment by one the indices of the elements to its right.
        - etc.

    What are the best practices?
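
    The usual shape of the answer, sketched here in NUnit terms rather than ScalaTest (the mapping to an abstract ScalaTest suite with an abstract factory method is direct): one abstract fixture holds the contract tests, and each implementation gets a small subclass that supplies the instance. IMyList and the two backing classes are hypothetical names used only for illustration.

        using NUnit.Framework;

        public interface IMyList<T>
        {
            bool IsEmpty { get; }
            int Size { get; }
            void Add(T item);
            void Clear();
        }

        public abstract class ListContractTests
        {
            // Each implementation's fixture overrides this factory.
            protected abstract IMyList<int> CreateEmptyList();

            [Test]
            public void New_List_Is_Empty_And_Has_Zero_Size()
            {
                var list = CreateEmptyList();
                Assert.IsTrue(list.IsEmpty);
                Assert.AreEqual(0, list.Size);
            }

            [Test]
            public void Clear_Resets_Size_To_Zero()
            {
                var list = CreateEmptyList();
                list.Add(42);
                list.Clear();
                Assert.AreEqual(0, list.Size);
            }
        }

        [TestFixture]
        public class ArrayBackedListTests : ListContractTests
        {
            protected override IMyList<int> CreateEmptyList() { return new ArrayBackedList<int>(); }
        }

        [TestFixture]
        public class LinkedBackedListTests : ListContractTests
        {
            protected override IMyList<int> CreateEmptyList() { return new LinkedBackedList<int>(); }
        }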

    Read the article

  • Robust unit-testing of HTML in PHP

    - by asbja
    I'm adding unit tests to an older PHP codebase at work. I will be testing and then rewriting a lot of HTML generation code, and currently I'm just testing whether the generated strings are identical to the expected string, like so (using PHPUnit):

        public function testConntype_select()
        {
            $this->assertEquals(
                '<select><option value="blabla">Some text</option></select>',
                conntype_select(1)  // A value from the test dataset.
            );
        }

    This approach has the downside that attribute ordering, whitespace and a lot of other irrelevant details are tested as well. I'm wondering if there are any better ways to do this, for example if there are any good and easy ways to compare the generated DOM trees. I found very similar questions for Ruby, but couldn't find anything for PHP.

    Read the article

  • using vb6 for testing

    - by codeModuler
    A friend has got an interview for a testing job. Apparently the job requires knowledge of VB6. My friend knows VB6 and she knows testing, but she and I are both wondering what is the relevance of VB6 to testing. Is there some well-known standard way to test applications using VB6 that my friend should learn for this interview?

    Read the article

  • Ruby on Rails testing: How can I test or at the very least see a form_for's error_messages_for?

    - by williamjones
    I'm working on creating tests, and I can't figure out why the creation of a model from a form_for is failing in the test but works in real browsers. Is there a straightforward way for me to see what the problems are in the model creation? Even better, is there a straightforward way for me to test the error output that I access via error_messages_for? In that case, I'd also like to add tests that make sure malformed forms output the correct errors.

    Read the article

  • strerror_r returns trash when I manually set errno during testing

    - by Robert S. Barnes
    During testing I have a mock object which sets errno = ETIMEDOUT;. The object I'm testing sees the error and calls strerror_r to get back an error string:

        if (ret) {
            if (ret == EAI_SYSTEM) {
                char err[128];
                strerror_r(errno, err, 128);
                err_string.assign(err);
            } else {
                err_string.assign(gai_strerror(ret));
            }
            return ret;
        }

    I don't understand why strerror_r is returning trash. I even tried calling strerror_r(ETIMEDOUT, err, 128) directly and still got trash. I must be missing something. It seems I'm getting the GNU version of the function rather than the POSIX one, but that shouldn't make any difference in this case.

    Read the article

  • Should I unit test my JavaScript?

    - by Joseph Silvashy
    I'm curious whether it would be valuable. I'd like to start using QUnit, but I really don't know where to get started; actually, I'm not going to lie, I'm new to testing in general, not just with JS. I'm hoping to get some tips on how to start unit testing an app that already has a fair amount of JavaScript (OK, about 500 lines; not huge, but enough to make me wonder whether I have regressions that go unnoticed). How would you recommend getting started, and where would I put my tests? For example, it's a Rails app; where is a logical place to keep my JS tests? It would be cool if they could go in the /test directory, but that's outside the public directory and thus not possible... or is it?

    Read the article

  • What are some good books on software testing/quality?

    - by mjh2007
    I'm looking for a good book on software quality. It would be helpful if the book covered:

        - The software development process (requirements, design, coding, testing, maintenance)
        - Testing roles (who performs each step in the process)
        - Testing methods (white box and black box)
        - Testing levels (unit testing, integration testing, etc.)
        - Testing process (Agile, waterfall, spiral)
        - Testing tools (simulators, fixtures, and reporting software)
        - Testing of embedded systems

    The goal here is to find an easy to read book that summarizes the best practices for ensuring software quality in an embedded system. It seems most texts cover the testing of application software, where it is simpler to generate automated test cases or run a debugger. A book that provided solutions for improving quality in a system where the tests must be performed manually, and therefore minimized, would be ideal.

    Read the article

  • Google Analytics testing/sandbox environment?

    - by Laimoncijus
    Is there any Google Analytics testing/sandbox environment for testing your custom JS code before putting it on a live system? I don't want to use my real tracking ID to check that everything is correct in my dev environment, nor do I want to put my code live untested... Are there any techniques, or maybe some fake Analytics tracking lib, I could use for testing?

    Read the article

  • Linker Error: iPhone Unit Test Bundle referencing App classes

    - by ohhorob
    Starting with an app already in development, I have carried out the instructions in the iPhone Development Guide – Unit Testing Applications. I can successfully include and use my App's classes in application-style tests that run on the device and output their results to the console. If I add the following line of code:

        STAssertTrue([viewController isKindOfClass:[LoginViewController class]],
                     @"Top view controller is not LoginViewController");

    the following build error is generated:

        Undefined symbols:
          "_OBJC_CLASS_$_LoginViewController", referenced from:
              __objc_classrefs__DATA@0 in LoginViewTest.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    I can provide more configuration information for the project and the testing target, but the setup works fine without the [LoginViewController class] line in the test source. Without that line, I can reference the class, use its properties and send it messages successfully. Is there a linking build setting, or bundle loading option, that is required when attempting to use an App class in this fashion? Or should I find another type of test to confirm that the class of an object is the expected one?

    Read the article

  • Unit testing the app.config file with NUnit

    - by Dana
    How do you unit test an application that relies on values from an app.config file? How do you test that those values are read in correctly, and how your program reacts to incorrect values entered into a config file? It would be ridiculous to have to modify the config file for the NUnit app, but I can't read in the values from the app.config I want to test. Edit: I think I should clarify. I'm not worried about the ConfigurationManager failing to read the values, but I am concerned with testing how my program reacts to the values read in.
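
    One common way to make this testable (a sketch only; ISettings, FakeSettings, SettingsConsumer and RetryCount are invented names, not from the question): hide ConfigurationManager behind a small interface, so NUnit tests can feed in any values, including deliberately broken ones, without touching an app.config.

        using System.Collections.Generic;
        using System.Configuration;
        using NUnit.Framework;

        public interface ISettings
        {
            string Get(string key);
        }

        // Production implementation reads the real app.config.
        public class AppConfigSettings : ISettings
        {
            public string Get(string key) { return ConfigurationManager.AppSettings[key]; }
        }

        // Test implementation returns whatever the test put in.
        public class FakeSettings : ISettings
        {
            private readonly Dictionary<string, string> _values;
            public FakeSettings(Dictionary<string, string> values) { _values = values; }
            public string Get(string key)
            {
                string value;
                return _values.TryGetValue(key, out value) ? value : null;
            }
        }

        // Example class under test: reacts to a bad value by falling back to a default.
        public class SettingsConsumer
        {
            public int RetryCount { get; private set; }
            public SettingsConsumer(ISettings settings)
            {
                int parsed;
                RetryCount = int.TryParse(settings.Get("RetryCount"), out parsed) ? parsed : 3;
            }
        }

        [TestFixture]
        public class SettingsConsumerTests
        {
            [Test]
            public void Invalid_RetryCount_Falls_Back_To_Default()
            {
                var settings = new FakeSettings(new Dictionary<string, string> { { "RetryCount", "not-a-number" } });
                var consumer = new SettingsConsumer(settings);
                Assert.AreEqual(3, consumer.RetryCount);
            }
        }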

    Read the article

  • Creating TCP network errors for unit testing

    - by Robert S. Barnes
    I'd like to create various network errors during testing. I'm using the Berkeley sockets API directly in C++ on Linux, and I'm running a mock server in another thread from within Boost.Test, listening on localhost. For instance, I'd like to create a timeout during connect. So far I've tried not calling accept in my mock server and setting the backlog to 1, then making multiple connections, but all of them seem to connect successfully. I would have thought that if there wasn't room in the backlog queue I would at least get a connection-refused error, if not a timeout. I'd like to do this all programmatically if possible, but I'd consider using something external like IPchains to intentionally drop certain packets to certain ports during testing; I'd then need to automate creating and removing rules so I could do it from within my Boost.Test unit tests. I suppose I could mock the various system calls involved, but I'd rather go through a real TCP stack if possible. Ideas?

    Read the article

  • DHCP and Router load testing

    - by John H
    I manage a campground wifi network with an average of 10-60 active users. I have encountered issues where the router starts acting flaky (failing to assign DHCP leases or failing to pass traffic) without any clear warning (low CPU utilization, etc.). I upgraded the router a couple of times and ended up with a Netgear ProSafe VPN router that seems to be handling the traffic. The interesting thing is that the Netgear has lower specs than the Buffalo router it replaced, indicating the issue is with the DD-WRT firmware. While I'll be pursuing this issue on the DD-WRT forums, I need a way to test routers. My vision is having 1-2 computers connected on the LAN side and 1-2 computers connected on the WAN side. I want the LAN computers to generate various types of traffic and connections, as well as requesting DHCP addresses. A few notes:

        - The wireless aspect should be a non-issue. Most clients would connect to a wireless bridge and come into the router through a network cable.
        - I had a monitoring server with Nagios running check_dhcp against the router. This server was connected directly by a network cable, eliminating wifi bridges and other devices from the equation.
        - This question is somewhat related, but not exactly: Load testing wireless LANs. I am going to look at IxChariot.
        - While I'd ideally like to use one computer on each side running Linux and preferably free software, I can entertain running Windows, multiple computers, or non-free software.
        - Total bandwidth doesn't seem to be the issue. I can transfer large files all day, and even on the busiest days the users seemed to only pull ~5 Mbps.
        - There is very little "LAN to LAN" traffic, and most of it might never have reached the main router.
        - The issue I need to test for seems to be tied to active users, or more appropriately, active sessions. I know active users or active clients is a meaningless term from a router standpoint and wouldn't mind having more appropriate terms to use.

    Summary: I need a way to test a router's ability to handle traffic from a large number of clients. My current strategy is to purchase a router, deploy it, and see how it fails in the live environment.

    Read the article

  • Create System.Data.Linq.Table in Code for Testing

    - by S. DePouw
    I have an adapter class for LINQ to SQL:

        public interface IAdapter : IDisposable
        {
            Table<Data.User> Users { get; }
        }

    Data.User is an object defined by LINQ to SQL pointing to the User table in persistence. The implementation is as follows:

        public class Adapter : IAdapter
        {
            private readonly SecretDataContext _context = new SecretDataContext();

            public void Dispose()
            {
                _context.Dispose();
            }

            public Table<Data.User> Users
            {
                get { return _context.Users; }
            }
        }

    This makes mocking the persistence layer easy in unit testing, as I can just return whatever collection of data I want for Users (Rhino.Mocks):

        Expect.Call(_adapter.Users).Return(users);

    The problem is that I cannot create the object 'users', since the constructors are not accessible and the class Table is sealed. One option I tried is to just make IAdapter return IEnumerable or IQueryable, but the problem there is that I then do not have access to the methods ITable provides (e.g. InsertOnSubmit()). Is there a way I can create the fake Table in the unit test scenario so that I may be a happy TDD developer?
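
    Since Table<T> cannot be constructed or subclassed, one common workaround (sketched here, not taken from the question) is to expose a minimal repository interface instead of Table<Data.User>: the production adapter forwards to the real Table<T>, while tests use an in-memory fake. This builds on the question's Data.User and SecretDataContext types; IUserRepository and both implementations are invented names.

        using System.Collections.Generic;
        using System.Linq;

        public interface IUserRepository
        {
            IQueryable<Data.User> Query();
            void InsertOnSubmit(Data.User user);
        }

        // Production implementation wraps the real DataContext.
        public class LinqUserRepository : IUserRepository
        {
            private readonly SecretDataContext _context;
            public LinqUserRepository(SecretDataContext context) { _context = context; }
            public IQueryable<Data.User> Query() { return _context.Users; }
            public void InsertOnSubmit(Data.User user) { _context.Users.InsertOnSubmit(user); }
        }

        // In-memory fake used by unit tests; no mocking framework required.
        public class FakeUserRepository : IUserRepository
        {
            public readonly List<Data.User> Users = new List<Data.User>();
            public IQueryable<Data.User> Query() { return Users.AsQueryable(); }
            public void InsertOnSubmit(Data.User user) { Users.Add(user); }
        }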

    Read the article

  • Asp.Net MVC Tutorial Unit Tests

    - by Nicholas
    I am working through Steve Sanderson's book Pro ASP.NET MVC Framework and I am having some issues with two unit tests which produce errors. In the example below the test exercises the CheckOut ViewResult:

        [AcceptVerbs(HttpVerbs.Post)]
        public ViewResult CheckOut(Cart cart, FormCollection form)
        {
            // Empty carts can't be checked out
            if (cart.Lines.Count == 0)
            {
                ModelState.AddModelError("Cart", "Sorry, your cart is empty!");
                return View();
            }

            // Invoke model binding manually
            if (TryUpdateModel(cart.ShippingDetails, form.ToValueProvider()))
            {
                orderSubmitter.SubmitOrder(cart);
                cart.Clear();
                return View("Completed");
            }
            else // Something was invalid
                return View();
        }

    with the following unit test:

        [Test]
        public void Submitting_Empty_Shipping_Details_Displays_Default_View_With_Error()
        {
            // Arrange
            CartController controller = new CartController(null, null);
            Cart cart = new Cart();
            cart.AddItem(new Product(), 1);

            // Act
            var result = controller.CheckOut(cart, new FormCollection { { "Name", "" } });

            // Assert
            Assert.IsEmpty(result.ViewName);
            Assert.IsFalse(result.ViewData.ModelState.IsValid);
        }

    I have resolved any issues surrounding TryUpdateModel by upgrading to ASP.NET MVC 2 (Release Candidate 2), and the website runs as expected. The associated error message is:

        Tests.CartControllerTests.Submitting_Empty_Shipping_Details_Displays_Default_View_With_Error:
        System.ArgumentNullException : Value cannot be null.
        Parameter name: controllerContext

    with the more detailed stack trace:

        at System.Web.Mvc.ModelValidator..ctor(ModelMetadata metadata, ControllerContext controllerContext)
        at System.Web.Mvc.DefaultModelBinder.OnModelUpdated(ControllerContext controllerContext, ModelBindingContext bindingContext)
        at System.Web.Mvc.DefaultModelBinder.BindComplexModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
        at System.Web.Mvc.Controller.TryUpdateModel[TModel](TModel model, String prefix, String[] includeProperties, String[] excludeProperties, IValueProvider valueProvider)
        at System.Web.Mvc.Controller.TryUpdateModel[TModel](TModel model, IValueProvider valueProvider)
        at WebUI.Controllers.CartController.CheckOut(Cart cart, FormCollection form)

    Has anyone run into a similar issue or indeed got the test to pass?
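
    The null controllerContext comes from calling the action on a controller that was newed up directly: in MVC 2, the validation triggered by TryUpdateModel walks ControllerContext, which is never set in that case. A hedged sketch of the usual workaround (Moq assumed as the mocking framework) is to assign a minimal context in the Arrange step before calling CheckOut:

        var controller = new CartController(null, null);
        var httpContext = new Mock<HttpContextBase>();
        controller.ControllerContext =
            new ControllerContext(httpContext.Object, new RouteData(), controller);
        // ...then Act and Assert exactly as in the test above.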

    Read the article

  • jQuery validator not working in unit testing

    - by Dbugger
    I have this small HTML file:

        <html>
          <head></head>
          <body>
            <form id='MyForm'>
              <input type='text' required />
              <input type='submit' />
            </form>
            <script src="/js/jquery-1.9.0.js"></script>
            <script src="/js/jquery.validate.js"></script>
            <script>
              var validator = $("#MyForm").validate();
              alert(validator.form());
            </script>
          </body>
        </html>

    This alerts me with "false", which is the expected behaviour. The problem comes when I go to unit testing with js-test-driver:

        TestCase("MyTests", {
          setUp: function() {
            this.myform = "<form id='MyForm'><input type='text' required /><input type='submit' /></form>";
            this.validator = $(this.myform).validate();
            jstestdriver.console.log("Does the form validate? " + this.validator.form());
          },
          test_empty: function() {
          }
        });

    This code logs the string "Does the form validate? true". This is a simplified version of my project, of course, but the point is that I don't seem to be able to unit test the validation module I'm developing, since the jQuery validate plugin doesn't seem to work. What am I missing?

    Read the article
