Search Results

Search found 11400 results on 456 pages for 'automated testing'.


  • Simulating Ajax failures for QA testing

    - by womp
    Our first ASP.NET MVC/jQuery product is about to go to QA, and we're looking for a way for our QA guys to easily simulate bad Ajax requests (without modifying the application code). A typical integration/UI test plan might be:

      1. Load page, click button "DoStuff"
      2. "DoStuff" fails
      3. Attempt button "DoStuff" again
      4. "DoStuff" succeeds
      5. Verify application state

    This is a simple test case - there will be cases with multiple failures and successes interspersed. Aside from "unplug your network cable", I'm looking for an easy way for our guys to simulate intermittent bad server responses. I'm open to any ideas, so I won't go into too many details about our application setup or dependencies. How have you handled this?
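
    One language-agnostic option is to point the app (or the QA browser) at a small failure-injecting reverse proxy instead of the real backend. Below is a minimal Python sketch of the idea, not a finished tool; the backend address, proxy port, and failure rate are assumptions to adjust for your setup.

        # fault_proxy.py - failure-injecting reverse proxy (sketch).
        # Point the client at http://localhost:8080; a fraction of requests
        # get an injected 500, the rest are forwarded to the real backend.
        import random
        import urllib.request
        from http.server import BaseHTTPRequestHandler, HTTPServer

        BACKEND = "http://localhost:5000"  # assumed backend address
        FAILURE_RATE = 0.3                 # fail ~30% of requests

        class FaultInjectingHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                if random.random() < FAILURE_RATE:
                    # Simulate an intermittent server error.
                    self.send_response(500)
                    self.end_headers()
                    self.wfile.write(b"Injected failure for QA")
                    return
                # Otherwise forward the request to the real backend.
                with urllib.request.urlopen(BACKEND + self.path) as resp:
                    body = resp.read()
                self.send_response(resp.status)
                self.send_header("Content-Type",
                                 resp.headers.get("Content-Type", "text/html"))
                self.end_headers()
                self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("localhost", 8080), FaultInjectingHandler).serve_forever()

    Because the failure decision sits outside the application, the fail/retry/succeed plan above can be driven without touching application code - for example by making the proxy fail deterministically every other request instead of randomly.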


  • Why is django.test.client.Client not keeping me logged in?

    - by Mystic
    I'm using django.test.client.Client to test whether some text shows up when a user is logged in. However, the Client object doesn't seem to be keeping me logged in. This test passes if done manually with Firefox, but not when done with the Client object.

        class Test(TestCase):
            def test_view(self):
                user.set_password(password)
                user.save()
                client = self.client

                # I thought a more manual way would work, but no luck:
                # client.post('/login', {'username': user.username, 'password': password})
                login_successful = client.login(username=user.username, password=password)

                # this assert passes
                self.assertTrue(login_successful)

                # whether follow=True or not doesn't seem to matter
                response = client.get("/path", follow=True)
                self.assertContains(response, "needle")

    When I print response, it returns the login form that is hidden by:

        {% if not request.user.is_authenticated %} ... form ... {% endif %}

    This is confirmed when I run ipython manage.py shell. The problem seems to be that the Client object is not keeping the session authenticated.
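
    For reference, a common cause of exactly this symptom is the template rather than the session: request is only available in the template context when the request context processor is enabled, so {% if not request.user.is_authenticated %} can behave as if the user were anonymous even though the login succeeded. A sketch of a test that separates the two possibilities (the username, password, and "/path" are assumptions):

        # Sketch: check the client's session directly, independently of the
        # template. If these pass but the page still shows the login form,
        # the template is at fault, not the test client.
        from django.contrib.auth.models import User
        from django.test import TestCase

        class LoginSessionTest(TestCase):
            def test_client_keeps_session(self):
                User.objects.create_user("alice", "alice@example.com", "secret")
                self.assertTrue(self.client.login(username="alice", password="secret"))
                # The session itself records the authenticated user id.
                self.assertIn("_auth_user_id", self.client.session)

                response = self.client.get("/path")
                # The rendered context sees the logged-in user as "user".
                self.assertEqual("alice", response.context["user"].username)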


  • JRuby-friendly method for parallel-testing Rails app

    - by Toby Hede
    I am looking for a system to parallelise a large suite of tests in a Ruby on Rails app (using RSpec and Cucumber) that works under JRuby. Cucumber is actually not too bad, but the full RSpec suite currently takes nearly 20 minutes to run. The systems I can find (hydra, parallel-test) look like they use forking, which isn't the ideal solution for the JRuby environment since the JVM does not support fork.


  • "rake test" doesn't load fixtures?

    - by Pavel K.
    When I run rake test --trace, here's what happens:

        ** Invoke test (first_time)
        ** Execute test
        ** Invoke test:units (first_time)
        ** Invoke db:test:prepare (first_time)
        ** Invoke db:abort_if_pending_migrations (first_time)
        ** Invoke environment (first_time)
        ** Execute environment
        ** Execute db:abort_if_pending_migrations
        ** Execute db:test:prepare
        ** Invoke db:test:load (first_time)
        ** Invoke db:test:purge (first_time)
        ** Invoke environment
        ** Execute db:test:purge
        ** Execute db:test:load
        ** Invoke db:schema:load (first_time)
        ** Invoke environment
        ** Execute db:schema:load
        ** Execute test:units
        /usr/bin/ruby1.8 -I"lib:test"....

    ... and after that it fails because no fixtures are loaded. Why doesn't it load fixtures (I thought that would be the default behaviour), and how do I make it load fixtures before executing tests?

    P.S. My test/test_helper.rb content is:

        ENV["RAILS_ENV"] = "test"
        require File.expand_path(File.dirname(__FILE__) + "/../config/environment")
        require 'test_help'

        class ActiveSupport::TestCase
          self.use_transactional_fixtures = true
          self.use_instantiated_fixtures = false
          fixtures :all
        end

    (Rails 2.3.4)


  • Tools for website/web application load testing?

    - by zarko.susnjar
    Before going into production, our client demands actual numbers for how many users our web application can handle. We have all kinds of features implemented, including asset management (file uploads/downloads), document import/export, various statistics, web services, etc. Which tool (or set of tools) could do this? Application details:

      - XHTML/jQuery
      - ColdFusion 8
      - SQL Server 2008
      - Windows Server 2008
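
    Dedicated tools such as Apache JMeter cover this ground properly (ramp-up profiles, distributed load generation, reporting) and would be the natural fit for the stack above. Before committing to one, even a short script gives first-order latency numbers; a minimal Python sketch, stdlib only, with the URL, concurrency, and request count as placeholders:

        # load_probe.py - crude concurrent load probe (sketch). It gives a
        # first feel for latency under load; use a real load testing tool
        # for the numbers you hand to the client.
        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        URL = "http://staging.example.com/index.cfm"  # placeholder target
        CONCURRENCY = 50
        REQUESTS = 500

        def timed_fetch(_):
            start = time.perf_counter()
            with urllib.request.urlopen(URL, timeout=30) as resp:
                resp.read()
            return time.perf_counter() - start

        with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
            latencies = sorted(pool.map(timed_fetch, range(REQUESTS)))

        print(f"median: {latencies[len(latencies) // 2]:.3f}s")
        print(f"p95:    {latencies[int(len(latencies) * 0.95)]:.3f}s")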


  • Starting Jetty with Spring XML as a background process/thread

    - by compass
    My goal is to set up a Jetty test server and inject a custom servlet to test some REST classes in my project. I was able to launch the server with Spring XML and run tests against it. The issue I'm having is that sometimes, after the server has started, the process stops before running the tests - it seems Jetty didn't go to the background. It works every time on my computer, but it doesn't work when deployed to my CI server. It also doesn't work when I'm on VPN. (Strange.) The server must be completely initialized, because while the tests were stuck I was able to access the server using a browser. Here is my Spring context XML:

        ....
        <bean id="servletHolder" class="org.eclipse.jetty.servlet.ServletHolder">
            <constructor-arg ref="courseApiServlet"/>
        </bean>

        <bean id="servletHandler" class="org.eclipse.jetty.servlet.ServletContextHandler"/>

        <!-- Adding the servlet holders to the handlers -->
        <bean id="servletHandlerSetter"
              class="org.springframework.beans.factory.config.MethodInvokingFactoryBean">
            <property name="targetObject" ref="servletHandler"/>
            <property name="targetMethod" value="addServlet"/>
            <property name="arguments">
                <list>
                    <ref bean="servletHolder"/>
                    <value>/*</value>
                </list>
            </property>
        </bean>

        <bean id="httpTestServer" class="org.eclipse.jetty.server.Server"
              init-method="start" destroy-method="stop" depends-on="servletHandlerSetter">
            <property name="connectors">
                <list>
                    <bean class="org.eclipse.jetty.server.nio.SelectChannelConnector">
                        <property name="port" value="#{settings['webservice.server.port']}" />
                    </bean>
                </list>
            </property>
            <property name="handler">
                <ref bean="servletHandler" />
            </property>
        </bean>

    Running the latest Jetty 8.1.8 server and Spring 3.1.3. Any ideas?


  • EBS 12.1.1 Test Starter Kit now Available for Oracle Application Testing Suite

    - by Steven Chan
    We've discussed automated testing tools for the E-Business Suite several times on this blog, since testing is such a key part of everyone's implementation lifecycle. An important part of our testing arsenal in E-Business Suite Development is the Oracle Application Testing Suite (OATS), which is built on the foundation of the e-TEST suite of products acquired from Empirix in 2008. The testing suite comprises:

      1. Oracle Load Testing for scalability, performance, and load testing
      2. Oracle Functional Testing for automated functional and regression testing
      3. Oracle Test Manager for test process management, test execution, and defect tracking

    Oracle Application Testing Suite 9.0 has been supported for use with the E-Business Suite since 2009. I'm very pleased to let you know that our E-Business Suite Release 12.1.1 Test Starter Kit is now available for Oracle Application Testing Suite 9.1. You can download it here: Oracle Application Testing Suite Downloads


  • Is there a name for a testing method where you compare a set of very different designs?

    - by DVK
    "A/B testing" is defined as "a method of marketing testing by which a baseline control sample is compared to a variety of single-variable test samples in order to improve response rates". The point here, of course, is to know which small single-variable changes are more optimal, with the goal of finding the local optimum. However, one can also envision a somewhat related but different scenario for testing the response rate of major re-designs: take a baseline control design, take one or more completely different designs, and run test samples on those redesigns to compare response rates. As a practical but contrived example, imagine testing a set of designs for the same website, one being minimalist "googly" design, one being cluttered "Amazony" design, and one being an artsy "designy" design (e.g. maximum use of design elements unlike Google but minimal simultaneously presented information, like Google but unlike Amazon) Is there an official name for such testing? It's definitely not A/B testing, since the main component of it (finding local optimum by testing single-variable small changes that can be attributed to response shift) is not present. This is more about trying to compare a set of local optimums, and compare to see which one works better as a global optimum. It's not a multivriable, A/B/N or any other such testing since you don't really have specific variables that can be attributed, just different designs.


  • Unit test SHA256 wrapper queries

    - by Sam Leach
    I am just beginning to write unit tests, so please bear with me. I have the following SHA256 wrapper:

        public static string SHA256(string plainText)
        {
            StringBuilder sb = new StringBuilder();
            SHA256CryptoServiceProvider provider = new SHA256CryptoServiceProvider();
            var hashedBytes = provider.ComputeHash(Encoding.UTF8.GetBytes(plainText));
            for (int i = 0; i < hashedBytes.Length; i++)
            {
                sb.Append(hashedBytes[i].ToString("x2").ToLower());
            }
            return sb.ToString();
        }

    Do I want to be testing it? If so, what do you recommend? My thought process is as follows:

      - What logic is there here? The answer is my for loop and ToString("x2"), so from my understanding that is the part I want to be testing.
      - I can assume Encoding.UTF8.GetBytes(plainText) works. Correct assumption?
      - I can assume SHA256CryptoServiceProvider.ComputeHash() works. Correct assumption?
      - I want to be testing only my logic, which in this case is limited to the printing of the hex-encoded hash. Correct?

    Thanks.
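
    One standard way to cover the hex-encoding logic end to end is a known-answer test against published SHA-256 test vectors, which exercises the loop and ToString("x2") without re-testing the crypto primitive itself. Here is the idea sketched in Python, with a stand-in for the wrapper above; the expected digests are the standard published vectors for "" and "abc":

        # Known-answer test (sketch). sha256_hex stands in for the C#
        # wrapper above; the expected digests are the published vectors.
        import hashlib
        import unittest

        def sha256_hex(plain_text: str) -> str:
            return hashlib.sha256(plain_text.encode("utf-8")).hexdigest()

        class Sha256WrapperTest(unittest.TestCase):
            KNOWN_ANSWERS = {
                "": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
                "abc": "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad",
            }

            def test_known_vectors(self):
                for text, expected in self.KNOWN_ANSWERS.items():
                    self.assertEqual(expected, sha256_hex(text))

            def test_output_is_lowercase_hex(self):
                digest = sha256_hex("anything")
                self.assertEqual(64, len(digest))       # 32 bytes, 2 chars each
                self.assertEqual(digest, digest.lower())

        if __name__ == "__main__":
            unittest.main()

    This answers the assumptions in the list above: the known-answer test only passes if GetBytes, ComputeHash, and your hex loop are all correct together, while the lowercase/length test pins down just your formatting logic.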


  • How to unit test models in MVC / MVR app?

    - by BBnyc
    I'm building a node.js web app and am trying to do so for the first time in a test-driven fashion. I'm using nodeunit for testing, which I find allows me to write tests quickly and painlessly. In this particular app, the heavy lifting primarily involves translating SQL data into complex JavaScript objects and serving them to the front-end via JSON. Likewise, the app also spends a great deal of code validating and translating complex, multidimensional JavaScript objects it receives from the front-end into SQL rows. Hence I have used a fat-model design for the app - most of the real code resides in the models, where the data translation happens.

    What's the best approach to testing such models with unit tests? I mean in particular the methods that create JavaScript objects from SQL rows and serve them to the front-end. Right now what I'm doing is making particular requests of my models in the unit tests and checking the returned data for all of the fields that should be there. However, I have a suspicion that this is not the most robust kind of testing I could be doing. My current test design also means I have to package my app code with some dummy data, so that my tests can anticipate the kind of data the app should be returning when they run.
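
    The pattern that usually makes such translation methods robust to test is to feed the mapper hand-written fixture rows in memory, rather than querying a live database, so each test owns the exact input it asserts against. The app here is node.js, but as a neutral illustration, here is the shape of such a test in Python (the mapper, field names, and fixture are all hypothetical):

        # Sketch: test a row->object translation in isolation by feeding it
        # an in-memory fixture row instead of a database query result.
        import unittest

        def build_course(row):
            """Translate a flat SQL row (dict) into the nested object the API serves."""
            return {
                "id": row["course_id"],
                "title": row["title"],
                "teacher": {"id": row["teacher_id"], "name": row["teacher_name"]},
            }

        class BuildCourseTest(unittest.TestCase):
            FIXTURE_ROW = {
                "course_id": 7,
                "title": "Algebra",
                "teacher_id": 3,
                "teacher_name": "Ada",
            }

            def test_nests_teacher(self):
                course = build_course(self.FIXTURE_ROW)
                self.assertEqual({"id": 3, "name": "Ada"}, course["teacher"])

            def test_copies_scalar_fields(self):
                course = build_course(self.FIXTURE_ROW)
                self.assertEqual(7, course["id"])
                self.assertEqual("Algebra", course["title"])

        if __name__ == "__main__":
            unittest.main()

    Keeping the fixtures next to the tests (rather than packaged with the app) also removes the need to ship dummy data with the application code.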


  • How can I write automated tests for iptables?

    - by Phil Frost
    I am configuring a Linux router with iptables. I want to write acceptance tests for the configuration that assert things like: traffic from some guy on the internet is not forwarded, and TCP to port 80 on the webserver in the DMZ from hosts on the corporate LAN is forwarded.

    An ancient FAQ alludes to an iptables -C option which allows one to ask something like "given a packet from X, to Y, on port Z, would it be accepted or dropped?" Although the FAQ suggests it works like this for iptables (but maybe not for ipchains, which its examples use), the -C option seems not to simulate a test packet running through all the rules, but rather to check for the existence of an exactly matching rule. This has little value as a test: I want to assert that the rules have the desired effect, not just that they exist.

    I've considered creating yet more test VMs and a virtual network, then probing with tools like nmap for effects. However, I'm avoiding this solution due to the complexity of creating all those additional virtual machines, which is really quite a heavy way to generate some test traffic. It would also be nice to have an automated testing methodology which can also work on a real server in production. How else might I solve this problem? Is there some mechanism I might use to generate or simulate arbitrary traffic, then know whether it was (or would be) dropped or accepted by iptables?
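
    Short of building a full virtual network, real test traffic can be generated from a single host on the far side of the firewall with plain TCP connect probes: an ACCEPTed port connects, a REJECTed one refuses quickly, and a DROPped one times out. A minimal Python sketch (hostnames, ports, and the expected outcomes are placeholders for your rule set):

        # firewall_probe.py - assert expected iptables behaviour via TCP
        # probes (sketch). Run from a host on the relevant side of the
        # firewall; it cannot distinguish DROP from an unrelated routing
        # problem, so treat failures as a prompt to investigate.
        import socket

        def probe(host, port, timeout=3.0):
            """Return 'accepted', 'rejected', or 'dropped' for a TCP connect attempt."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return "accepted"
            except ConnectionRefusedError:
                return "rejected"   # REJECT target (or nothing listening)
            except socket.timeout:
                return "dropped"    # DROP target (or a routing problem)

        # Placeholder expectations for the rules described above.
        assert probe("webserver.dmz.example", 80) == "accepted"
        assert probe("webserver.dmz.example", 22) == "dropped"

    Probes like this only exercise rules reachable from where the script runs; testing anti-spoofing or internet-facing rules still needs a vantage point (or network namespace) on that side.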


  • What's the best practice to set up testing for ASP.NET MVC? What to use/process/etc?

    - by melaos
    Hi there, I'm trying to learn how to properly set up testing for an ASP.NET MVC project. From what I've been reading here and there, the definition of legacy code piques my interest: legacy code is any code without unit tests. I did my project in a hurry, not having the time to properly set up unit tests for the app, and I'm still learning how to properly do TDD and unit testing at the same time. Then I came upon Selenium IDE/RC and was using it to test on the browser end. It was during that time that I came upon the concept of integration testing. From my understanding, unit testing should be done to define the test and basic assumptions of each function, and if the function depends on something else, that something else needs to be mocked so that the tests are always singular and can run fast.

    Questions:

      - Am I right to say that the project should have started with unit tests and proper mocks, using something like Rhino Mocks, and that anything which requires a third-party DLL, database access, etc. should be covered by integration testing using Selenium?
      - I have a function which calls a third-party DLL. I'm not sure whether to write a unit test in NUnit that just instantiates the object and passes it some dummy data (which breaks the mocking part), or to just cover that part in my Selenium integration testing when I submit my forms and the DLL gets called.
      - And for user acceptance tests, is it safe to say we can just use Selenium again? Am I missing something, or is there a better way/framework?

    I'm trying to put in more tests for regression testing, and to ensure that nothing breaks when we put in new features. I also like the idea of TDD because it helps to better define the function, sort of like meta-documentation. Thanks! I hope this question isn't too subjective, because I need it for my case.
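
    On the third-party DLL question, a common unit-level answer is to wrap the dependency behind a small interface you own and mock that wrapper, leaving the real DLL to the integration run. A sketch of the pattern in Python with unittest.mock (all names are made up; the same shape works with Rhino Mocks in .NET):

        # Sketch: isolate a third-party dependency behind a thin wrapper so
        # the unit test mocks the wrapper, not the vendor library.
        import unittest
        from unittest import mock

        class GeocoderWrapper:
            """Thin seam over the third-party library; integration tests cover the real one."""
            def locate(self, address):
                raise NotImplementedError  # real implementation calls the vendor library

        def shipping_zone(geocoder, address):
            lat, _lon = geocoder.locate(address)
            return "north" if lat >= 0 else "south"

        class ShippingZoneTest(unittest.TestCase):
            def test_southern_address(self):
                geocoder = mock.Mock(spec=GeocoderWrapper)
                geocoder.locate.return_value = (-33.9, 18.4)
                self.assertEqual("south", shipping_zone(geocoder, "Cape Town"))
                geocoder.locate.assert_called_once_with("Cape Town")

        if __name__ == "__main__":
            unittest.main()

    The unit test then verifies your logic around the dependency, while a handful of integration tests (Selenium or otherwise) confirm the real DLL behaves as the wrapper assumes.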


  • What's a good way to test a plug-in on multiple Windows and Outlook versions?

    - by Andrei
    Hello! We're building a plug-in for Outlook that should work on multiple Windows versions (XP, Vista, 7) and also with different Outlook versions (2003, 2007, 2010). The testing problem I am facing right now is that I can't figure out a good/convenient/thorough way to test the application on multiple Windows and Outlook versions.

    At the moment, I have a VirtualBox which runs many virtual machines with different Windows and Outlook versions. So I would have one virtual machine with Windows 7 testing Outlook 2010, another with Windows 7 testing Outlook 2007, one with Windows Vista and Outlook 2010, and so on, covering some of the possible combinations. It kind of gets the job done, although it is cumbersome and takes a long time to test.

    Some of the testing included in the application is unit testing, but this is also rather tied to the machine I test it on (Windows 7 with Outlook 2010). For example, I was using ManagementObject recently, which worked fine on my system (and thus passed the unit test for that method); however, using that object threw an exception on another person's system, which crashed the application. I work in Visual Studio 2010 Ultimate.

    The questions:

      - Is there a more elegant way to make the testing process more streamlined and efficient?
      - Are there any other testing methods you'd recommend?
      - How would you deal with this problem?

    Thanks! Looking forward to your replies.


  • Project ideas for automated deduction/automated theorem proving?

    - by wsh
    Dear Stack Overflow brethren, I'm a second-semester junior who will embark upon my thesis soon, and I have an interest in automated deduction and automated theorem provers. That is, I'd like to advance the art in some way (I don't mean that pretentiously, but I do want to do something productive). I've Googled pretty far and wide, and so far few promising ideas have emerged. There are a few student project idea pages, but most seem either horribly outdated or too advanced.

    (I was originally going to attempt to synthesize postmodernist thought (hahaha), abstract its logical content, build a complete and consistent model - if possible, of course - and attempt to automate it, grounding said model in a nonstandard logic a la these. My advisor thought that gave postmodernist thought too much credit; I am tempted to concur. A while ago I reimplemented the Postmodernism Generator in Haskell with Parsec, which is in part where the idea came from.)

    So, yeah. Does anyone have ideas? I apologize if there is some obvious gap in my approach here, or if I haven't appropriately done my homework (and if so, please tell me!), but in large part I don't even know where to start. Thank you for reading all that.


  • Stress testing OpenCL/ATI GPU on Ubuntu 12.04

    - by lurscher
    What do people normally use to stress test their GPU on Ubuntu 12.04? I tried installing the Phoronix Test Suite Ubuntu .deb, but it installs freeglut3-dev and at the same time complains about it:

        $ phoronix-test-suite benchmark pts/opencl
        The following dependencies are needed and will be installed: freeglut3-dev
        This process may take several minutes.
        Reading package lists...
        Building dependency tree...
        Reading state information...
        freeglut3-dev is already the newest version.
        0 upgraded, 0 newly installed, 0 to remove and 90 not upgraded.

        There are dependencies still missing from the system:
        - OpenGL Utility Kit / GLUT

        1: Ignore missing dependencies and proceed with installation.
        2: Skip installing the tests with missing dependencies.
        3: Re-attempt to install the missing dependencies.
        4: Quit the current Phoronix Test Suite process.

        Missing dependencies action: 4

    So even though it installs freeglut3-dev, it still complains about missing GLUT. You can't win against GLUT, it seems. So, what do people use for this? I mean, really, because I have tried googling for an hour and it is not paying off. Thanks!


  • Good practices - database programming, unit testing

    - by Piotr Rodak
    Jason Brimhal wrote today on his blog that a new book, Defensive Database Programming, written by Alex Kuznetsov (blog), is coming to bookstores. Alex writes about various techniques that make your code safer to run. SQL injection is not the only vulnerability code may be exposed to; others include inconsistent search patterns, unsupported character sets, locale settings, issues that may occur under high-concurrency conditions, and logic that breaks when certain conditions are not met. The...(read more)


  • Does software testing methodology rely on flawed data?

    - by Konrad Rudolph
    It’s a well-known fact in software engineering that the cost of fixing a bug increases exponentially the later in development that bug is discovered. This is supported by data published in Code Complete and adapted in numerous other publications. However, it turns out that this data never existed. The data cited by Code Complete apparently does not show such a cost / development time correlation, and similar published tables only showed the correlation in some special cases and a flat curve in others (i.e. no increase in cost). Is there any independent data to corroborate or refute this? And if true (i.e. if there simply is no data to support this exponentially higher cost for late discovered bugs), how does this impact software development methodology?


  • Unit testing statically typed functional code

    - by back2dos
    I wanted to ask you people in which cases it makes sense to unit test statically typed functional code, as written in Haskell, Scala, OCaml, Nemerle, F#, or haXe (the last is what I am really interested in, but I wanted to tap into the knowledge of the bigger communities). I ask this because, from my understanding:

      - One aspect of unit tests is to have the specs in runnable form. However, when employing a declarative style that directly maps the formalized specs to language semantics, is it even possible to express the specs in runnable form in a separate way that adds value?
      - The more obvious aspect of unit tests is to track down errors that cannot be revealed through static analysis. Type-safe functional code is a good tool for coding extremely close to what your static analyzer understands. However, a simple mistake like using x instead of y (both being coordinates) in your code cannot be caught. Then again, such a mistake could also arise while writing the test code, so I am not sure whether it's worth the effort.
      - Unit tests do introduce redundancy, which means that when requirements change, the code implementing them and the tests covering it must both be changed. This overhead is roughly constant, so one could argue that it doesn't really matter. In fact, in languages like Ruby it really doesn't, compared to the benefits. But given how much of the ground unit tests are intended for is already covered by statically typed functional programming, it feels like a constant overhead one can simply drop without penalty.

    From this I'd deduce that unit tests are somewhat obsolete in this programming style. Of course, such a claim can only lead to religious wars, so let me boil it down to a simple question: when you use such a programming style, to what extent do you use unit tests, and why (what quality do you hope to gain for your code)? Or the other way round: do you have criteria by which you can qualify a unit of statically typed functional code as covered by the static analyzer, and hence needing no unit test coverage?


  • Where should I draw the line between unit tests and integration tests? Should they be separate?

    - by Earlz
    I have a small MVC framework I've been working on. Its code base definitely isn't big, but it's no longer just a couple of classes. I finally decided to take the plunge and start writing tests for it (yes, I know I should've been doing that all along, but its API was super unstable until now).

    Anyway, my plan is to make it extremely easy to test, including integration tests. An example integration test would go something along these lines: fake HTTP request object -> MVC framework -> HTTP response object -> check the response is correct. Because this is all doable without any state or special tools (browser automation, etc.), I could actually do it with ease using regular unit test frameworks (I use NUnit).

    Now the big question: where exactly should I draw the line between unit tests and integration tests? Should I only test one class at a time (as much as possible) with unit tests? Also, should integration tests be placed in the same testing project as my unit tests?
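
    The fake-request -> framework -> response flow described above maps naturally onto an ordinary unit test framework, precisely because no server or browser is involved. A sketch of that shape in Python (FakeRequest, Application, and the routing API are hypothetical stand-ins, not the framework's real types):

        # Sketch of the fake-request -> framework -> response integration
        # test. All types here are made-up stand-ins for the framework.
        import unittest

        class FakeRequest:
            def __init__(self, method, path, body=b""):
                self.method, self.path, self.body = method, path, body

        class Application:
            """Stand-in for the framework entry point: routes a request to a controller."""
            def __init__(self, routes):
                self.routes = routes

            def handle(self, request):
                controller = self.routes[request.path]
                return controller(request)

        def hello_controller(request):
            return (200, b"hello, " + request.body)

        class EndToEndTest(unittest.TestCase):
            def test_request_flows_through_routing_to_response(self):
                app = Application({"/hello": hello_controller})
                status, body = app.handle(FakeRequest("POST", "/hello", b"world"))
                self.assertEqual(200, status)
                self.assertEqual(b"hello, world", body)

        if __name__ == "__main__":
            unittest.main()

    A workable line to draw: a unit test constructs one class directly and fakes its collaborators; the test above, which exercises routing, dispatch, and the controller together through the public entry point, is an integration test even though it runs under the same framework.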


  • Is paying programmers to "test" for bugs normal? [on hold]

    - by user106277
    I recently hired a programming team to port my iPad app to the iPhone and Android platforms. I also wanted them to implement a bunch of tips on how to play the app, similar to what you would find in Candy Crush or Cut the Rope. They want to charge 12 hours @ $35/hr for "testing all of the tips", telling me that normally it would take them more than 25 hours but that they will 'bear the difference'. I have never heard of this, but maybe it's a new practice? I am used to devs doing their own quality control and then having a testing/acceptance period... Am I missing something? Thanks for any help and advice you can give!


  • Large sparse (stiff) ODE system needed for testing

    - by macydanim
    I hope this is the right place for this question. I have been working on a sparse stiff implicit ODE solver and have finished the code so far. I have tested the solver with the Van der Pol equation and with another stiff problem of dimension 4. But to perform better tests I am searching for a bigger system, on the order of N = 100...1000, if possible stiff and sparse. Does anybody have an example I could use? I really don't know where to search.
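
    Classic scalable test problems with exactly these properties come from method-of-lines discretisations of parabolic PDEs (for instance, the Brusselator reaction-diffusion examples in Hairer and Wanner's book on stiff ODEs). The simplest member of that family, sketched below in Python, is the 1-D heat equation u_t = u_xx on N interior grid points: the dimension is whatever you choose, stiffness grows like N^2, and the Jacobian is tridiagonal, hence sparse.

        # Method-of-lines heat equation: u_t = u_xx on (0, 1) with zero
        # boundary values, discretised on N interior points. Eigenvalues
        # reach about -4/h^2, so the system is stiff for large N.
        import numpy as np

        N = 500
        h = 1.0 / (N + 1)

        def rhs(t, u):
            du = np.empty_like(u)
            du[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
            du[0] = (-2.0 * u[0] + u[1]) / h**2       # left boundary (u = 0 outside)
            du[-1] = (u[-2] - 2.0 * u[-1]) / h**2     # right boundary
            return du

        def jacobian():
            J = np.zeros((N, N))
            idx = np.arange(N)
            J[idx, idx] = -2.0 / h**2
            J[idx[:-1], idx[:-1] + 1] = 1.0 / h**2    # superdiagonal
            J[idx[1:], idx[1:] - 1] = 1.0 / h**2      # subdiagonal
            return J  # constant and tridiagonal; store sparsely in a real solver

        u0 = np.sin(np.pi * (1 + np.arange(N)) * h)   # smooth initial profile

    For a nonlinear variant with the same sparsity, adding a reaction term (as in the Brusselator) keeps the banded Jacobian structure while making the problem a sterner test of the Newton iteration.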


  • OOP for unit testing: The good, the bad and the ugly

    - by Jeff
    I have recently read Miško Hevery's PDF guide to writing testable code, in which it's stated that you should limit class instantiation in your constructors. I understand why: it allows you to easily mock the objects that are passed as parameters to your class. But when it comes to writing actual code, I often end up with things like this (the example is in PHP using Zend Framework, but I think it's self-explanatory):

        class Some_class
        {
            private $_data;
            private $_options;
            private $_locale;

            public function __construct($data, $options = null)
            {
                $this->_data = $data;
                if ($options != null) {
                    $this->_options = $options;
                }
                $this->_init();
            }

            private function _init()
            {
                if (isset($this->_options['locale'])) {
                    $locale = $this->_options['locale'];
                    if ($locale instanceof Zend_Locale) {
                        $this->_locale = $locale;
                    } elseif (Zend_Locale::isLocale($locale)) {
                        $this->_locale = new Zend_Locale($locale);
                    } else {
                        $this->_locale = new Zend_Locale();
                    }
                }
            }
        }

    According to my understanding of Miško Hevery's guide, I shouldn't instantiate Zend_Locale in my class but push it through the constructor (which can be done through the options array in my example). I am wondering what would be the best practice to get the most flexibility for unit testing this code, and also to keep my options open if I want to move away from Zend Framework. Thanks in advance.
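
    One common compromise consistent with that guide is constructor injection with a default: the collaborator can be passed in (and therefore faked in tests), while production code keeps the convenient no-argument path. A Python sketch of the shape, with Locale standing in for Zend_Locale:

        # Constructor injection with a default (sketch). Locale stands in
        # for Zend_Locale; tests inject a fake, production uses the default.
        class Locale:
            def __init__(self, code="en_US"):
                self.code = code

        class SomeClass:
            def __init__(self, data, locale=None):
                self.data = data
                # The only construction happens at this seam, and a test
                # can override it by passing its own object.
                self.locale = locale if locale is not None else Locale()

        class FakeLocale:
            code = "xx_TEST"

        def test_uses_injected_locale():
            obj = SomeClass({"k": 1}, FakeLocale())
            assert obj.locale.code == "xx_TEST"

    This also answers the framework-independence concern: since the class only ever receives a locale-shaped object, swapping Zend_Locale for another implementation touches the call sites, not the class itself.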


  • mocha testing for the lazies, single key-press for all possible tests

    - by laggingreflex
    I have a batch file that lists all the test files I have and asks me which test I want to perform, like:

        Test. [U]nit, [I]ntegration : i (user input)
        Integration. [A]ll, [2][U]serInteraction, [3][R]esultGeneration : u 2
        User Interaction. Running "mocha integration\2userint.js" ...

    So essentially I have configured a batch "option" for each test file I have, which I can choose to run individually or all together. But adding and removing tests is a pain. Is there something that does this automatically - something that reads all the files and asks me which file(s) I want to test? A GUI with checkboxes would be the ultimate, but I'll take anything. I'm working in node.js.
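
    Absent a ready-made tool, the discovery half is easy to script: glob the test directories and build the menu from whatever is on disk, so adding or removing a test file requires no configuration change. A Python sketch of the idea (the directory layout and the mocha invocation are assumptions):

        # pick_tests.py - build the test menu from the files on disk
        # (sketch). Assumes tests live under unit/ and integration/ as
        # *.js mocha files, and that mocha is on the PATH.
        import glob
        import subprocess

        files = sorted(glob.glob("unit/*.js") + glob.glob("integration/*.js"))
        for i, name in enumerate(files, start=1):
            print(f"[{i}] {name}")

        choice = input("Run which tests (numbers, or 'a' for all)? ").strip()
        selected = files if choice == "a" else [files[int(n) - 1] for n in choice.split()]

        # mocha accepts test files as positional arguments.
        subprocess.run(["mocha"] + selected)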


  • Android Live Testing

    - by Matthew Dockerty
    I am making a game for Android, and in it I am using sensors which are not available in the emulator. At the moment I am connecting my device and transferring the apk, then installing it to test, but that is a pain to do, and I have gotten to the stage where I need to start logging values for debugging. I have gone into the run configs of my app and set it to prompt me to pick a device, but my device is never in the list when it is connected to my PC and I try to run. How am I supposed to set this up so it works properly? Thanks for the help.

