Search Results

Search found 14022 results on 561 pages for 'coded ui tests'.


  • How do I give each test its own TestResults folder?

    - by izb
    I have a set of unit test classes, each with a bunch of methods, each of which produces output in the TestResults folder. At the moment all the test files are jumbled up in this folder, and I'd like to bring some order to the chaos; ideally, I'd like a folder per test method. I know I could go round adding code to each test to make it write to a subfolder instead, but is there a way to control the output folder location through the Visual Studio unit test framework itself? Perhaps an initialization method on each test class, so that any new tests automatically get their own output folder without copy/pasted boilerplate code.


  • Unit testing several implementations of the same trait/interface

    - by paradigmatic
    I program mostly in Scala and Java, using ScalaTest in Scala and JUnit for unit testing. I would like to apply the very same tests to several implementations of the same interface/trait. The idea is to verify that the interface contract is enforced and to check the Liskov substitution principle. For instance, when testing implementations of lists, the tests could include: an instance should be empty if and only if it has zero size; after calling clear, the size should be zero; adding an element in the middle of a list increments by one the index of the elements to its right; and so on. What are the best practices?
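
    One common JUnit 4 pattern for this is an abstract "contract" test that each implementation subclasses, so every implementation runs against exactly the same assertions. The sketch below is only an illustration: it uses java.util.List with ArrayList and LinkedList standing in for the implementations under test, and the class and test names are made up for the example.

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertFalse;
        import static org.junit.Assert.assertTrue;

        import java.util.List;

        import org.junit.Test;

        // Shared contract: every List implementation must satisfy these expectations.
        public abstract class ListContractTest {

            // Each concrete subclass supplies the implementation under test.
            protected abstract List<String> newList();

            @Test
            public void isEmptyIfAndOnlyIfSizeIsZero() {
                List<String> list = newList();
                assertTrue(list.isEmpty());
                assertEquals(0, list.size());
                list.add("a");
                assertFalse(list.isEmpty());
                assertEquals(1, list.size());
            }

            @Test
            public void clearLeavesSizeAtZero() {
                List<String> list = newList();
                list.add("a");
                list.add("b");
                list.clear();
                assertEquals(0, list.size());
            }

            @Test
            public void insertingInTheMiddleShiftsLaterElementsRight() {
                List<String> list = newList();
                list.add("a");
                list.add("c");
                list.add(1, "b");                   // insert in the middle
                assertEquals(2, list.indexOf("c")); // "c" moved from index 1 to index 2
            }
        }

        // One tiny subclass per implementation (each in its own file);
        // JUnit runs all of the inherited contract tests against each one.
        public class ArrayListContractTest extends ListContractTest {
            @Override
            protected List<String> newList() { return new java.util.ArrayList<String>(); }
        }

        public class LinkedListContractTest extends ListContractTest {
            @Override
            protected List<String> newList() { return new java.util.LinkedList<String>(); }
        }

    ScalaTest has a comparable idea in its "shared tests" support, where the same block of behaviour can be exercised from several suites.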


  • How to make efficient code emerge through unit testing

    - by Jean
    Hi, I participate in a TDD Coding Dojo, where we try to practice pure TDD on simple problems. It has occurred to me, however, that the code which emerges from the unit tests isn't the most efficient. That is fine most of the time, but what if the code's usage grows so that efficiency becomes a problem? I love the way the code emerges from unit testing, but is it possible to make the efficiency property emerge through further tests?

    Here is a trivial example in Ruby: prime factorization. I followed a pure TDD approach, making the tests pass one after the other and validating my original acceptance test (commented at the bottom). What further steps could I take if I wanted to make one of the generic prime factorization algorithms emerge? To reduce the problem domain, let's say I want to get a quadratic sieve implementation. Now, in this precise case I know the "optimal" algorithm, but in most cases the client will simply add a requirement that the feature runs in less than "x" time for a given environment.

        require 'shoulda'
        require 'lib/prime'

        class MathTest < Test::Unit::TestCase
          context "The math module" do
            should "have a method to get primes" do
              assert Math.respond_to? 'primes'
            end
          end

          context "The primes method of Math" do
            should "return [] for 0" do
              assert_equal [], Math.primes(0)
            end
            should "return [1] for 1 " do
              assert_equal [1], Math.primes(1)
            end
            should "return [1,2] for 2" do
              assert_equal [1,2], Math.primes(2)
            end
            should "return [1,3] for 3" do
              assert_equal [1,3], Math.primes(3)
            end
            should "return [1,2] for 4" do
              assert_equal [1,2,2], Math.primes(4)
            end
            should "return [1,5] for 5" do
              assert_equal [1,5], Math.primes(5)
            end
            should "return [1,2,3] for 6" do
              assert_equal [1,2,3], Math.primes(6)
            end
            should "return [1,3] for 9" do
              assert_equal [1,3,3], Math.primes(9)
            end
            should "return [1,2,5] for 10" do
              assert_equal [1,2,5], Math.primes(10)
            end
          end

          # context "Functionnal Acceptance test 1" do
          #   context "the prime factors of 14101980 are 1,2,2,3,5,61,3853" do
          #     should "return [1,2,3,5,61,3853] for ${14101980*14101980}" do
          #       assert_equal [1,2,2,3,5,61,3853], Math.primes(14101980*14101980)
          #     end
          #   end
          # end
        end

    And the naive algorithm I created through this approach:

        module Math
          def self.primes(n)
            if n==0
              return []
            else
              primes=[1]
              for i in 2..n do
                if n%i==0
                  while(n%i==0)
                    primes<<i
                    n=n/i
                  end
                end
              end
              primes
            end
          end
        end


  • Continuous integration with .NET and SVN

    - by stiank81
    We're currently not applying the automated building and testing of continuous integration in our project. We haven't bothered so far, as there are only 2 developers working on it, but even with a team of 2 I still think it would be valuable to use continuous integration and get confirmation that our builds don't break or that tests haven't started failing. We're using .NET with C# and WPF. We have created Python scripts for building the application - using MSBuild - and for running all tests. Our source is in SVN. What would be the best approach to apply continuous integration with this setup? Which tool should we get? It should be one that doesn't require a lot of setup; simple procedures to get started and little maintenance are a must.


  • XML intellisense with Eclipse Europa?

    - by Maddy.Shik
    Please suggest some free tools, or a built-in feature of Eclipse, for XML intellisense. I am using Eclipse Europa, the edition for EE Java developers.

        <?xml version="1.0" encoding="UTF-8"?>
        <tests xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xsi:noNamespaceSchemaLocation="http://jtestcase.sourceforge.net/dtd/jtestcase2.xsd">
        </tests>

    The XML above is part of the file I am working on. I expect Eclipse to suggest tags based on the XSD specified above, and I am using Ctrl+Space to ask for a new tag suggestion. Please tell me if I am using the wrong key for intellisense, or whether I am missing some tool or plugin. Thanks


  • NUnit vs Visual Studio 2010's MSTest?

    - by David White
    I realise that there are many older questions addressing the general question of NUnit vs MSTest for versions of Visual Studio up to 2008 (such as this one). Microsoft have a history of getting things right in their 3rd version; for MSTest, that is VS2010. Have they done so with MSTest? Would you use it in a new project in preference to NUnit? My specific concerns:

      - speed
      - running tests within CruiseControl.NET (either command line or MSBuild task)
      - code coverage reports from CC.NET
      - can you run MSTest tests in debug mode?

    (We use ReSharper, so test runners are not an issue for us. We have used NUnit for the last few years. We do not have TFS.)


  • Possible to register Selenium RC's with the Hudson Selenium Grid Hub w/o the RC's being slaves in the Hudson cluster?

    - by Rodreegez
    I am trying to get Hudson to run my Ruby-based Selenium tests. I have installed the Selenium Grid plugin, but I don't want to have the RC's running as slaves in a Hudson cluster. The reason for this is I don't want to waste the next six years of my life trying to configure each of my projects in various Windows environments. Hudson currently pulls each project from GitHub and builds it just fine. With a regular Selenium Grid setup, I am able to edit the grid_configuration.yml file to represent the various environments I wish to test against, then pass environment variables to the rake task that runs the tests, i.e. which browser/platform to run on and the URL of the application under test -- usually a port on the hub machine running in a specific environment. In this way, the machines on which the RC's run don't need to know anything about the source code of my apps; they just need to have selenium-grid installed and have registered with the hub. Is there a way of elegantly emulating this with Hudson?


  • WatiN from TeamCity not running as a Windows Service

    - by peter.swallow
    I'm trying to run WatiN from within a TeamCity build, using NUnit. All tests run fine locally. I know you cannot run the full WatiN tests (i.e. POST) from TeamCity if it is running as a Windows Service; you must start the build agent from a .bat file. But I don't want to have to log in to the server for it to start. I've tried getting a Scheduled Task (in Windows Server 2008) to fire the agent.bat file on StartUp (not Login), but with no luck. Has anyone else got WatiN/TeamCity running from a Scheduled Task? Thanks, Pete


  • Can Hudson branch promotion be based on project stability?

    - by Wayne
    The Hudson CI server displays stability "weather", which is cool, and it allows one project's build to kick off based on the successful build of another. However, how can you make that secondary project additionally dependent on the stability of multiple builds of the first project? Specifically, project "stable_deploy" should only kick off to promote a version to "stable" if project "integrate" with version 8.3.4.1233 has built and tested successfully at least 8 times - in a row. Until then, it's still in integration mode.

    IMPORTANT: A significant caveat is that a single set of Hudson projects gets used as a "pipeline" to process each new version through to release. So a project may have built successfully 8 times in a row, but the latest version 8.3.4.1233 may account for only the 2 most recent builds; the builds prior to that may be an earlier version. We're open to completely reorganizing this, but the pipeline idea seemed to greatly reduce the amount of manual project creation and deletion. Is there a better way to track a version release "pipeline"? In particular, we will have multiple versions in this pipeline simultaneously in the future due to fixes or patches to older versions. We don't see how to do that yet, except to create new pipeline projects for each version, which is a real hassle.

    Here are some background details. The TickZoom application has some very complete unit tests, some of which simulate real-time trading environments. Add to that, TickZoom makes elaborate use of parallelization to leverage multi-core computers. Needless to say, during development of a new version there can be stability issues during integration testing which get uncovered by running the build and automated tests repeatedly. A version which builds and tests cleanly 8 times in a row without change, and has undergone some real-world testing by users, can be deemed "stable" and promoted to the stable branch. Our Hudson projects look like this:

      - test - Only for testing a build; zero user visibility.
      - integrate_deploy - Promotes a test project build to the integrate branch and makes it available to the public for UA testing.
      - integrate - Repeatedly builds the integrate branch to determine if it's stable enough to promote to the stable branch. This runs the builds and tests hourly throughout every night.
      - stable_deploy - Promotes an integrate project build to the stable branch and makes it public for users who want the latest and greatest.
      - stable - Builds the stable branch once every night. After 2 weeks of successful builds (14 builds) it can go to "release candidate".

    And so on... it continues with "release candidate" and then "release".


  • Rspec-rails doesn't seem to find my models

    - by sa125
    Hi - I'm trying out rspec, and immediately hit a wall when it doesn't seem to load db records I know exist. Here's my fairly simple spec (no tests yet):

        require File.expand_path(File.dirname(__FILE__) + '../spec_helper')

        describe SomeModel do
          before :each do
            @user1 = User.find(1)
            @user2 = User.find(2)
          end

          it "should do something fancy"
        end

    I get an ActiveRecord::RecordNotFound exception saying it couldn't find a User with ID=1 or ID=2, which I know for a fact exist. I set both the test and development databases to point to the same schema in database.yml, so this shouldn't be a database mixup. I also ran script/generate rspec after installing the gems (rspec, rspec-rails), and set up gem.config in both environment.rb and test.rb. Any idea what I'm missing? Thanks.

    EDIT: It seems I was running the tests with rake spec:models, which emptied the db, and thus no records were found. When I used % spec spec/models/some_model_spec.rb instead, everything worked as expected.


  • django sphinx automodule -- basics

    - by haras.pl
    Hi, I have a project with several large apps, where the settings and app files are split. The directory structure goes something like this:

        project_name
            __init__.py
            apps
                __init__.py
                app1
                app2
            3rdparty
                __init__.py
                lib1
                lib2
            settings
                __init__.py
                installed_apps.py
                path.py
                templates.py
                locale.py
                ...
            urls.py

    Every app looks like this:

        __init__.py
        admin
            __init__.py
            file1.py
            file2.py
        models
            __init__.py
            model1.py
            model2.py
        tests
            __init__.py
            test1.py
            test2.py
        views
            __init__.py
            view1.py
            view2.py
        urls.py

    How do I use Sphinx to auto-generate documentation for that? I want something like this: for each app in the settings module or INSTALLED_APPS (not starting with django.* or 3rdparty.*), give me automatically generated documentation output based on the docstrings, and run the tests before each git commit. By the way, I tried writing the .rst files by hand with

        .. automodule:: module_name
           :members:

    but it sucks for such a big project, and it does not work for settings. Is there an autogen method or something? I am not tied to Sphinx - is there a better solution for my problem?


  • How do I generate a coverage XML report for a single package?

    - by Wraith
    I'm using nose and coverage to generate coverage reports. I only have one package right now, ae, so I specify to only cover that:

        nosetests -w tests/unit --with-xunit --with-coverage --cover-package=ae

    And here are the results, which look good:

        Name      Stmts   Exec  Cover   Missing
        ----------------------------------------------
        ae            1      1   100%
        ae.util     253    224    88%   39, 63-65, 284, 287, 362, 406
        ----------------------------------------------
        TOTAL       263    234    88%
        ----------------------------------------------------------------------
        Ran 68 tests in 5.292s

    However, when I run coverage xml, coverage pulls in more packages than necessary, including the Python email and logging packages, which have nothing to do with my code. If I run coverage xml ae, I get this error:

        No source for code: '/home/wraith/dev/projects/trimurti/src/ae':
        [Errno 21] Is a directory: '/home/wraith/dev/projects/trimurti/src/ae'

    Is there a way to generate the XML for just the ae package?


  • Maven multi-module project with many reports: looking for an example

    - by hstoerr
    Is there an open source project that can serve as a good example of how to use the Maven site plugin to generate reports? I would prefer it to:

      - consist of many modules, possibly hierarchically structured
      - use as many plugins as possible (surefire, jxr, pmd, findbugs, javadoc, checkstyle, you name it)
      - aggregate the reports: if some tests fail, you want a single page that shows all modules with failing tests, not just a gazillion individual pages to check
      - include enterprisey stuff (WAR, EAR etc.), but this is not so important

    The idea is to have something where you can gather ideas on how it is done and what is possible.


  • Node.js vs PHP processing speed

    - by Cody Craven
    I've been looking into node.js recently and wanted to see a true comparison of processing speed for PHP vs Node.js. In most of the comparisons I had seen, Node trounced Apache/PHP setups handily. However, all of those tests were small 'hello worlds' that would not accurately reflect any webpage's markup. So I decided to create a basic HTML page with 10,000 hello world paragraph elements. In these tests, Node with Cluster was beaten to a pulp by PHP on Nginx utilizing PHP-FPM. So I'm curious if I am misusing Node somehow or if Node is really just this bad at processing power. Note that my results were equivalent when outputting "Hello world\n" with text/plain instead of the HTML, but I only included the HTML version as it's closer to the use case I was investigating.

    My testing box:

      - Core i7-2600 Intel CPU (has 8 threads with 4 cores)
      - 8GB DDR3 RAM
      - Fedora 16 64bit
      - Node.js v0.6.13
      - Nginx v1.0.13
      - PHP v5.3.10 (with PHP-FPM)

    My test scripts:

    Node.js script (adapted from Node.js' documentation at http://nodejs.org/docs/latest/api/cluster.html):

        var cluster = require('cluster');
        var http = require('http');
        var numCPUs = require('os').cpus().length;

        if (cluster.isMaster) {
          // Fork workers.
          for (var i = 0; i < numCPUs; i++) {
            cluster.fork();
          }
          cluster.on('death', function (worker) {
            console.log('worker ' + worker.pid + ' died');
          });
        } else {
          // Worker processes have an HTTP server.
          http.Server(function (req, res) {
            res.writeHead(200, {'Content-Type': 'text/html'});
            res.write('<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n');
            for (var i = 0; i < 10000; i++) {
              res.write('<p>Hello world</p>\n');
            }
            res.end('</body>\n</html>');
          }).listen(80);
        }

    PHP script:

        <?php
        echo "<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n";
        for ($i = 0; $i < 10000; $i++) {
            echo "<p>Hello world</p>\n";
        }
        echo "</body>\n</html>";

    My results - Node.js:

        $ ab -n 500 -c 20 http://speedtest.dev/
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking speedtest.dev (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Finished 500 requests

        Server Software:
        Server Hostname:        speedtest.dev
        Server Port:            80

        Document Path:          /
        Document Length:        190070 bytes

        Concurrency Level:      20
        Time taken for tests:   14.603 seconds
        Complete requests:      500
        Failed requests:        0
        Write errors:           0
        Total transferred:      95066500 bytes
        HTML transferred:       95035000 bytes
        Requests per second:    34.24 [#/sec] (mean)
        Time per request:       584.123 [ms] (mean)
        Time per request:       29.206 [ms] (mean, across all concurrent requests)
        Transfer rate:          6357.45 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.2      0       2
        Processing:    94  547 405.4    424    2516
        Waiting:        0  331 399.3    216    2284
        Total:         95  547 405.4    424    2516

        Percentage of the requests served within a certain time (ms)
          50%    424
          66%    607
          75%    733
          80%    813
          90%   1084
          95%   1325
          98%   1843
          99%   2062
         100%   2516 (longest request)

    My results - PHP/Nginx:

        $ ab -n 500 -c 20 http://speedtest.dev/test.php
        This is ApacheBench, Version 2.3 <$Revision: 655654 $>
        Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
        Licensed to The Apache Software Foundation, http://www.apache.org/

        Benchmarking speedtest.dev (be patient)
        Completed 100 requests
        Completed 200 requests
        Completed 300 requests
        Completed 400 requests
        Completed 500 requests
        Finished 500 requests

        Server Software:        nginx/1.0.13
        Server Hostname:        speedtest.dev
        Server Port:            80

        Document Path:          /test.php
        Document Length:        190070 bytes

        Concurrency Level:      20
        Time taken for tests:   0.130 seconds
        Complete requests:      500
        Failed requests:        0
        Write errors:           0
        Total transferred:      95109000 bytes
        HTML transferred:       95035000 bytes
        Requests per second:    3849.11 [#/sec] (mean)
        Time per request:       5.196 [ms] (mean)
        Time per request:       0.260 [ms] (mean, across all concurrent requests)
        Transfer rate:          715010.65 [Kbytes/sec] received

        Connection Times (ms)
                      min  mean[+/-sd] median   max
        Connect:        0    0   0.2      0       1
        Processing:     3    5   0.7      5       7
        Waiting:        1    4   0.7      4       7
        Total:          3    5   0.7      5       7

        Percentage of the requests served within a certain time (ms)
          50%      5
          66%      5
          75%      5
          80%      6
          90%      6
          95%      6
          98%      6
          99%      6
         100%      7 (longest request)

    Additional details: again, what I'm looking for is to find out whether I'm doing something wrong with Node.js or if it is really just that slow compared to PHP on Nginx with FPM. I certainly think Node has a real niche that it could fit well; however, these test results (which I really hope I made a mistake with, as I like the idea of Node) lead me to believe that it is a horrible choice for even a modest processing load when compared to PHP (let alone the JVM or various other fast solutions). As a final note, I also tried running an Apache Bench test against Node with $ ab -n 20 -c 20 http://speedtest.dev/ and consistently received a total test time of greater than 0.900 seconds.


  • Why does headless PDE Build omit directories I've specified in build.properties's bin.includes?

    - by Woody Zenfell III
    One of my Eclipse plug-ins (OSGi bundles) is supposed to contain a directory (Database Elements) of .sql files. My build.properties shows:

        bin.includes = META-INF/,\
                       .,\
                       Database Elements/

    (...which looks right to me.) When I build and run from within my interactive Eclipse IDE, everything works fine: calls to Bundle.getEntry(String) and Bundle.findEntries(String, String, bool) return valid URL objects; my tests are happy; my code is happy. When I build via headless ant script (using PDE Build), those same calls end up returning null. My tests break; my code breaks. I find that Database Elements is quietly but simply missing from my plug-in's JAR package. (META-INF and the built classes still make it in there fine.) I scoured the build log (even eventually invoking ant -verbose on the relevant portion of the build script) but saw no mention of anything helpful. What gives?


  • Override variables while testing a standalone Perl script

    - by BrianH
    There is a Perl script in our environment that I now need to maintain. It is full of bad practices, including using (and re-using) global variables throughout the script. Before I start making changes to the script, I was going to try to write some test scripts so I can have a good regression base. To do this, I was going to use a method described on this page. I was starting by writing tests for a single subroutine. I put this line somewhat near the top of the script I am testing:

        return 1 if ( caller() );

    That way, in my test script, I can require 'script_to_test.pl'; and it won't execute the whole script. The first subroutine I was going to test makes a lot of use of global variables that are set throughout the script. My thought was to try to override these variables in my test script, something like this:

        require_ok('script_to_test.pl');
        $var_from_other_script = 'Override Value';
        ok( sub_from_other_script() );

    Unfortunately (for me), the script I am testing has a massive "my" block at the top, where it declares all variables used in the script. This prevents my test script from seeing/changing the variables in the script I'm running tests against. I've played with Exporter, Test::Mock..., and some other modules, but it looks like if I want to be able to change any variables, I am going to have to modify the other script in some fashion. My goal is to not change the other script, but to get some good tests running so that when I do start changing it, I can make sure I didn't break anything. The script is about 10,000 lines (3,000 of them in the main block), so I'm afraid that if I start changing things, I will affect other parts of the code; having a good test suite would help. Is this possible? Can a calling script modify variables declared with "my" in another script? And please don't jump in with answers like "Just re-write the script from scratch", etc. That may be the best solution, but it doesn't answer my question, and we don't have the time/resources for a re-write.


  • Eclipse refresh taking too long

    - by Nash0
    I am doing TDD on a large Java project in Eclipse and am finding it frustrating because every time I run a test I have to wait 30+ seconds for Eclipse to compile and refresh. I estimate that 80%+ of that time is spent refreshing. Is there a way I can drastically reduce the amount of refreshing it is doing? I have looked at several other similar questions but could not see anything that helps. One way I reduced the compile/refresh time was to split the unit tests and the code into separate projects. There are 4,700 classes in the src project and 300 in the tests project. I am running Eclipse 3.5.1 on Java 1.6.0_17-b04 (eclipse.vm). My computer is running Windows XP with 3.1 gigs of usable RAM. The only plugin I have installed is Subclipse.


  • How much of Grails GORM to test?

    - by Lloyd Meinholz
    Is there a "best practice" or defacto standard with how much of the GORM functionality one should test in the unit/functional tests? My take is that one should probably do most of the domain testing as functional tests so that you get the full grails environment. But what do you test? Inserts, updates, deletes? Do you test constraints even though they were probably more thoroughly tested by the grails release? Or do you just assume that GORM does what it is supposed to do and move to other parts of the application?


  • JUnit 4.8.1: "Could not find class"

    - by Patrick
    OK, I am, like others, new to JUnit and having a difficult time trying to get it working. I have searched the forum, but the answers provided are just not clicking for me. If anyone out there could lend me a hand I would greatly appreciate it. Let me provide the basics:

        OS: Mac OS X.6
        export JUNIT_HOME="/Developer/junit/junit4.8.1"
        export CVSROOT="/opt/cvsroot"
        export PATH="/usr/local/bin:/usr/local/sbin:/usr/localmysql/bin:/opt/PalmSDK/Current/bin/:/usr/local/mysql/bin:$PATH:$JUNIT_HOME:$CVSROOT"
        export CLASSPATH="$CLASSPATH:$JUNIT_HOME/junit-4.8.1.jar:$JUNIT_HOME"

    I can compile a test class from a java file, however when I try to then run the test

        java org.junit.runner.JUnitCore MyTest.class

    I get the following:

        JUnit version 4.8.1
        Could not find class: MyTest.class
        Time: 0.001

        OK (0 tests)

    Now, I have been in the directory with the MyTest.class, which is just somewhere in my file system. I tried moving the source folder to the "junit" folder and the "junit/junit4.8.1" folder, with the same result. I cannot even run the tests that came with JUnit. Thanks, Patrick
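
    For what it's worth, the command-line runner expects a class name rather than a file name (MyTest, not MyTest.class), and the directory containing the compiled MyTest.class has to be on the classpath. The same runner can also be driven from Java code, which sidesteps shell classpath quoting; a minimal sketch, assuming MyTest is the test class from the question and is on the classpath:

        import org.junit.runner.JUnitCore;
        import org.junit.runner.Result;
        import org.junit.runner.notification.Failure;

        // Tiny driver that runs MyTest through the same runner the command line uses.
        public class RunMyTest {
            public static void main(String[] args) {
                Result result = JUnitCore.runClasses(MyTest.class);
                for (Failure failure : result.getFailures()) {
                    System.out.println(failure.toString());
                }
                System.out.println("Tests run: " + result.getRunCount()
                        + ", successful: " + result.wasSuccessful());
            }
        }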


  • How to test a project with multiple python versions in a sequential way?

    - by ecolell
    I am developing a Python adapter to interact with a 3rd party website, without any JSON or XML API (http://www.class.noaa.gov/). I have a problem when Travis CI runs multiple Python tests (of the Travis CI build matrix) concurrently. The project is on GitHub at ecolell/noaaclass and the .travis.yml file is:

        language: python
        python:
          - "2.6"
          - "2.7"
          - "3.2"
          - "3.3"
        install:
          - "make deploy"
        script: "make test-coverage-travis-ci" #nosetests
        after_success:
          - "make test-coveralls"

    Specifically, I have a problem when at least 2 Python versions are running their unit tests at the same time, because they use the same account on a website. Is there any option to tell the build matrix to execute each Python version sequentially? Or maybe, is there a better way to do this?


  • Implementing a Stack using Test-Driven Development

    - by devoured elysium
    I am taking my first steps with TDD. The problem is (as probably with everyone starting with TDD) that I never know very well what kind of unit tests to write when I start working on my projects. Let's assume I want to write a Stack class with the following members (I chose it as it's an easy example):

        Stack<T>
          - Push(element : T)
          - Pop() : T
          - Seek() : T
          - Count : int
          - IsEmpty : boolean

    How would you approach this? I never understood whether the idea is to test a few corner cases for each method of the Stack class, or to start by writing a few "use cases" with the class, like adding 10 elements and removing them. What is the idea? To write test code that uses the Stack as close as possible to how I'll use it in my real code? Or just make simple "add one element" unit tests where I test whether IsEmpty and Count were changed by adding that element? How am I supposed to start with this?
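
    One way to start is to write the smallest observable behaviour first and let each new test force the next bit of implementation, rather than designing the whole suite up front. A minimal JUnit 4 sketch of a first few tests, assuming a hand-rolled Stack<T> that does not exist yet (in TDD the tests come first, then just enough code to make them pass; the method names are adapted to Java conventions and the exception for popping an empty stack is a design choice, not a given):

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertFalse;
        import static org.junit.Assert.assertTrue;

        import org.junit.Before;
        import org.junit.Test;

        public class StackTest {

            private Stack<String> stack;

            @Before
            public void createEmptyStack() {
                stack = new Stack<String>();
            }

            @Test
            public void newStackIsEmptyAndHasCountZero() {
                assertTrue(stack.isEmpty());
                assertEquals(0, stack.count());
            }

            @Test
            public void pushMakesStackNonEmptyAndIncrementsCount() {
                stack.push("a");
                assertFalse(stack.isEmpty());
                assertEquals(1, stack.count());
            }

            @Test
            public void popReturnsLastPushedElementAndRemovesIt() {
                stack.push("a");
                stack.push("b");
                assertEquals("b", stack.pop());
                assertEquals(1, stack.count());
            }

            @Test(expected = IllegalStateException.class)
            public void popOnEmptyStackFails() {
                stack.pop(); // assumed to throw; pick whatever contract you decide on
            }
        }

    Corner cases (Seek on an empty stack, push/pop ordering over many elements) then arrive one test at a time, each written only when the previous ones pass.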


  • Working with a Java Mail Server for Testing

    - by Charlie
    I'm in the process of testing an application that takes mail out of a mailbox, performs some action based on the content of that mail, and then sends a response mail depending on the result of the action. I'm looking for a way to write tests for this application. Ideally, I'd like these tests to bring up their own mail server, push my test emails to a folder on this mail server, and have my application scrape the mail out of the mail server that my test started. Configuring the application to use the mail server is not difficult, but I do not know where to look for a programmatic way of starting a mail server in Java. I've looked at JAMES, but I am unable to figure out how to start the server from within my test. So the question is this: what can I use as a mail server in Java that I can configure and start entirely from Java?
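
    If JAMES turns out to be awkward to embed, one alternative worth knowing about (a suggestion on my part, not what the question asked for) is the GreenMail library, which exists specifically to be started and stopped from test code. A rough sketch, assuming GreenMail and JavaMail are on the test classpath; the addresses and subject are made up for the example:

        import static org.junit.Assert.assertEquals;

        import javax.mail.internet.MimeMessage;

        import org.junit.After;
        import org.junit.Before;
        import org.junit.Test;

        import com.icegreen.greenmail.util.GreenMail;
        import com.icegreen.greenmail.util.GreenMailUtil;
        import com.icegreen.greenmail.util.ServerSetupTest;

        public class MailProcessingTest {

            private GreenMail mailServer;

            @Before
            public void startMailServer() {
                // In-memory SMTP/POP3/IMAP servers on GreenMail's offset test ports.
                mailServer = new GreenMail(ServerSetupTest.ALL);
                mailServer.start();
            }

            @After
            public void stopMailServer() {
                mailServer.stop();
            }

            @Test
            public void testMailLandsInTheServerStore() throws Exception {
                // Drop a message into the in-memory server via its test SMTP port.
                GreenMailUtil.sendTextEmailTest("app@localhost", "tester@localhost",
                        "do-something", "body that should trigger the action");

                // Point the application at the test server's POP3/IMAP port here,
                // let it scrape the mailbox, then assert on the outcome.
                MimeMessage[] received = mailServer.getReceivedMessages();
                assertEquals(1, received.length);
                assertEquals("do-something", received[0].getSubject());
            }
        }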

