Search Results

Search found 4783 results on 192 pages for 'tests'.

Page 56/192 | < Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >

  • springTestContextBeforeTestMethod failed in Maven spring-test

    - by joejax
    I am trying to set up a project with spring-test using TestNG in Maven. The code looks like this: @ContextConfiguration(locations={"test-context.xml"}) public class AppTest extends AbstractTestNGSpringContextTests { @Test public void testApp() { assert true; } } The test-context.xml simply defines a bean: <bean id="app" class="org.sonatype.mavenbook.simple.App"/> I get a "Failed to load ApplicationContext" error when running mvn test from the command line; it seems it cannot find the test-context.xml file. However, it runs correctly inside Eclipse (with the TestNG plugin). test-context.xml is under src/test/resources/, so how do I indicate this in the pom.xml so that the 'mvn test' command will work? Thanks. UPDATE: Thanks for the reply. The "cannot load context file" error was caused by my moving the file around to different locations, since I thought the classpath was the problem. Now, judging from the Maven output, the context file seems to be loaded, but the test fails: Running TestSuite May 25, 2010 9:55:13 AM org.springframework.beans.factory.xml.XmlBeanDefinitionReader loadBeanDefinitions INFO: Loading XML bean definitions from class path resource [test-context.xml] May 25, 2010 9:55:13 AM org.springframework.context.support.AbstractApplicationContext prepareRefresh INFO: Refreshing org.springframework.context.support.GenericApplicationContext@171bbc9: display name [org.springframework.context.support.GenericApplicationContext@171bbc9]; startup date [Tue May 25 09:55:13 PDT 2010]; root of context hierarchy May 25, 2010 9:55:13 AM org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory INFO: Bean factory for application context [org.springframework.context.support.GenericApplicationContext@171bbc9]: org.springframework.beans.factory.support.DefaultListableBeanFactory@1df8b99 May 25, 2010 9:55:13 AM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1df8b99: defining beans [app,org.springframework.context.annotation.internalCommonAnnotationProcessor,org.springframework.context.annotation.internalAutowiredAnnotationProcessor,org.springframework.context.annotation.internalRequiredAnnotationProcessor]; root of factory hierarchy Tests run: 3, Failures: 2, Errors: 0, Skipped: 1, Time elapsed: 0.63 sec <<< FAILURE!
Results : Failed tests: springTestContextBeforeTestMethod(org.sonatype.mavenbook.simple.AppTest) springTestContextAfterTestMethod(org.sonatype.mavenbook.simple.AppTest) Tests run: 3, Failures: 2, Errors: 0, Skipped: 1 If I use spring-test version 3.0.2.RELEASE, the error becomes: org.springframework.test.context.testng.AbstractTestNGSpringContextTests.springTestContextPrepareTestInstance() is depending on nonexistent method null Here is the structure of the project: simple |-- pom.xml `-- src |-- main | `-- java `-- test |-- java `-- resources |-- test-context.xml `-- testng.xml testng.xml: <suite name="Suite" parallel="false"> <test name="Test"> <classes> <class name="org.sonatype.mavenbook.simple.AppTest"/> </classes> </test> </suite> test-context.xml: <beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd" default-lazy-init="true"> <bean id="app" class="org.sonatype.mavenbook.simple.App"/> </beans> In the pom.xml, I added the testng, spring, and spring-test artifacts, and the plugins: <dependency> <groupId>org.testng</groupId> <artifactId>testng</artifactId> <version>5.1</version> <classifier>jdk15</classifier> <scope>test</scope> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring</artifactId> <version>2.5.6</version> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-test</artifactId> <version>2.5.6</version> <scope>test</scope> </dependency> <build> <finalName>simple</finalName> <plugins> <plugin> <artifactId>maven-compiler-plugin</artifactId> <configuration> <source>1.6</source> <target>1.6</target> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <configuration> <suiteXmlFiles> <suiteXmlFile>src/test/resources/testng.xml</suiteXmlFile> </suiteXmlFiles> </configuration> </plugin> </plugins> Basically, I took 'A Simple Maven Project' and replaced JUnit with TestNG, hoping it would work. UPDATE: I think I have found the problem (though I still don't know why): whenever I extend AbstractTestNGSpringContextTests or AbstractTransactionalTestNGSpringContextTests, the test fails with this error: Failed tests: springTestContextBeforeTestMethod(org.sonatype.mavenbook.simple.AppTest) springTestContextAfterTestMethod(org.sonatype.mavenbook.simple.AppTest) Eventually the error went away when I overrode the two methods. I don't think this is the right way; I didn't find much info in the spring-test docs. If you know the Spring test framework, please shed some light on this.
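
    One thing worth double-checking here (an assumption on my part, not something the post confirms): the TestNG version. spring-test's AbstractTestNGSpringContextTests hooks the Spring container lifecycle in through TestNG configuration-method dependencies, and very old TestNG releases such as 5.1 handle those differently from the later 5.x line that spring-test 2.5.6/3.0.x were built against, which can surface as exactly these springTestContext* configuration failures. A sketch of the dependency change (the version number is illustrative):

    ```xml
    <!-- Hypothetical fix: try a newer TestNG release than 5.1 -->
    <dependency>
      <groupId>org.testng</groupId>
      <artifactId>testng</artifactId>
      <version>5.14.10</version>
      <scope>test</scope>
    </dependency>
    ```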

    Read the article

  • Documenting user scenarios and measuring/testing

    - by Rimian
    Please forgive me as I don't quite remember the exact terms for what I am talking about... hence my question. Recently I worked on a large Agile team where I encountered a method of defining user scenarios (much like user stories). These scenarios were a few very basic short sentences with keywords and a structure that could be understood by humans (especially project managers) and could also be coded against using some Java Framework (for verifying tests). The exact structure of this mini language used keywords like "when" and "and" or "if" which was how the framework parsed and verified the result. The purpose of this framework was to interface between management and the acceptance testing framework. So essentially management could write the tests themselves using English. The scenario went something like this: "When a user visits URL and User clicks on X Something happens (that can be measured)" Can anyone help me remember exactly what I am talking about? Many thanks
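
    This sounds like Behaviour-Driven Development (BDD) and its Given/When/Then scenario language (often called Gherkin), as used by frameworks such as Cucumber or JBehave, which bind the plain-English steps to Java step definitions. A hypothetical scenario in that style, purely for illustration:

    ```gherkin
    Feature: Shopping basket
      Scenario: Adding an item to the basket
        Given a user is on the product page
        When the user clicks "Add to basket"
        Then the basket count increases by 1
    ```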

    Read the article

  • UnitTest++ constructing fixtures multiple times?

    - by Peter
    I'm writing some unit tests in UnitTest++ and want to write a bunch of tests which share some common resources. I thought that this should work via their TEST_FIXTURE setup, but it seems to be constructing a new fixture for every test. Sample code: #include <UnitTest++.h> struct SomeFixture { SomeFixture() { // this line is hit twice } }; TEST_FIXTURE(SomeFixture, FirstTest) { } TEST_FIXTURE(SomeFixture, SecondTest) { } I feel like I must be doing something wrong; I had thought that the whole point of having the fixture was so that the setup/teardown code only happens once. Am I wrong on this? Is there something else I have to do to make it work that way?
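
    For what it's worth, building the fixture once per test is the standard xUnit model (each test gets a fresh setup so tests stay independent), and UnitTest++ follows it. If there is a genuinely expensive resource that is safe to share, one workaround is to hold it outside the fixture, e.g. behind a function-local static - a sketch, with illustrative names:

    ```cpp
    #include <UnitTest++.h>

    // Hypothetical expensive resource, constructed lazily and only once.
    struct ExpensiveResource {
        ExpensiveResource() { /* slow one-time setup */ }
    };

    static ExpensiveResource& sharedResource() {
        static ExpensiveResource instance;  // created on first use, reused afterwards
        return instance;
    }

    struct SomeFixture {
        SomeFixture() : resource(sharedResource()) {}  // per-test constructor stays cheap
        ExpensiveResource& resource;
    };

    TEST_FIXTURE(SomeFixture, FirstTest)  { /* use resource */ }
    TEST_FIXTURE(SomeFixture, SecondTest) { /* use resource */ }
    ```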

    Read the article

  • How to manage test fixtures for end-to-end testing?

    - by Peter Becker
    Having just set up a test framework for a new web application, I realized I missed one of the big questions: "How do I make tests independent from each other?" Years ago I have set up some complicated Ant scripting to do full cycles of deleting all database tables, creating the schema again, adding test data, starting the application, running one test and then stopping the application. That was a pain to maintain and restricted us to nightly tests due to the time it took to run the full suite. It was still worth it, but I wonder if there is an easier way. Are there alternatives to this approach? The main criterion is that each test should not be affected by any other test in the suite, no matter if it failed or succeeded.
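
    One lighter-weight alternative, where the data access layer allows it, is to give every test its own database transaction and roll it back afterwards, so nothing a test writes is ever visible to the next one. A minimal JUnit/JDBC sketch of the idea (the connection URL and credentials are placeholders):

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import org.junit.After;
    import org.junit.Before;

    public abstract class TransactionalTestBase {
        protected Connection connection;

        @Before
        public void beginTransaction() throws SQLException {
            // Placeholder URL/credentials - point this at the test database.
            connection = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", "");
            connection.setAutoCommit(false);   // everything the test does stays uncommitted
        }

        @After
        public void rollbackTransaction() throws SQLException {
            connection.rollback();             // undo this test's changes
            connection.close();
        }
    }
    ```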

    Read the article

  • Alpha transparent PNGs not displaying correctly in Mobile Safari

    - by worksology
    I'm using some semi-transparent PNGs as background images on various websites. These are usually something like a 1x1 image with a 30-percent-opaque white layer. I've noticed that Mobile Safari does not display them correctly, giving them a darker/grayish tint. I've created a couple of test pages to illustrate. View them both in your normal browser, and then in Mobile Safari, and you should see what I mean. This shows 11 red images of varying opacities on white: http://thecompleteworks.org/alpha-tests/index-red.html This shows 11 white images of varying opacities on blue: http://thecompleteworks.org/alpha-tests/index.html Is this a Mobile Safari bug (I can't imagine so), or do I need to do something different, either to my pages or to the PNGs? (Here's how I create the PNGs: in Photoshop, create a 1x1 transparent canvas, draw a white rectangle in Layer 1, set opacity to, say, 30 percent, and Save for Web as a 24-bit PNG with transparency.)
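
    If the overlays really are just a uniform semi-transparent colour, one workaround that sidesteps PNG gamma/colour-profile handling entirely is to drop the image and use a CSS rgba() background instead (a sketch of the alternative, not a diagnosis of the Mobile Safari behaviour):

    ```css
    /* 30%-opaque white overlay with no PNG involved */
    .overlay {
        background-color: rgba(255, 255, 255, 0.3);
    }
    ```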

    Read the article

  • Building v8 without JIT

    - by rames
    Hello, I would like to run some tests on v8 with and without JIT to compare performance. I know the JIT will improve my average speed, but it would be nice to have some more detailed test results, as I want to work with mobile platforms. I haven't found how to enable or disable the JIT the way it can be done in SquirrelFish (cf. ENABLE_JIT in JavaScriptCore/wtf/Platform.h). Does somebody know how to do that with v8? Thanks. Alexandre

    Read the article

  • Python if statement efficiency

    - by Dennis
    A friend (a fellow low-skill recreational Python scripter) asked me to look over some code. I noticed that he had 7 separate statements that basically said: if ( a and b and c): do something. The statements a, b, c all tested for equality or inequality against set values. As I looked at it I found that, because of the nature of the tests, I could rewrite the whole logic block into 2 branches that never went more than 3 deep and rarely got past the first level (by putting the test for the rarest condition first). if a: if b: if c: else: if c: else: if b: if c: else: if c: To me, logically it seems like it should be faster if you are making fewer, simpler tests that fail fast and move on. My real questions are: 1) If the if condition is true, is the else branch skipped entirely? 2) In theory, would if (a and b and c) take as much time as the three separate if statements?
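
    A small sketch of the semantics in play: Python's and short-circuits, so if a and b and c: stops evaluating at the first false operand, and the else branch runs only when the if condition is false - otherwise it is skipped entirely:

    ```python
    def a():
        print("a evaluated")
        return False

    def b():
        print("b evaluated")
        return True

    if a() and b():      # b() is never called: 'and' short-circuits on the first False
        print("if branch")
    else:                # entered only because the condition was False
        print("else branch")

    # Output:
    #   a evaluated
    #   else branch
    ```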

    Read the article

  • How do I give each test its own TestResults folder?

    - by izb
    I have a set of unit tests, each with a bunch of methods, each of which produces output in the TestResults folder. At the moment, all the test files are jumbled up in this folder, but I'd like to bring some order to the chaos. Ideally, I'd like to have a folder for each test method. I know I can go round adding code to each test to make it produce output in a subfolder instead, but I was wondering if there was a way to control the output folder location with the Visual Studio unit test framework, perhaps using an initialization method on each test class so that any new tests added automatically get their own output folder without needing copy/pasted boilerplate code?
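
    One possible approach with the Visual Studio unit test framework (sketched under the assumption that per-test subfolders under the results directory are acceptable) is to build the folder from the injected TestContext, which knows the results location and the current test's name - property names vary a little between Visual Studio versions:

    ```csharp
    using System.IO;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [TestClass]
    public class MyTests
    {
        // MSTest assigns this automatically before each test runs.
        public TestContext TestContext { get; set; }

        private string outputFolder;

        [TestInitialize]
        public void CreatePerTestFolder()
        {
            // TestDir points into the TestResults area; TestName is the current method.
            outputFolder = Path.Combine(TestContext.TestDir, TestContext.TestName);
            Directory.CreateDirectory(outputFolder);
        }

        [TestMethod]
        public void SomeTest()
        {
            File.WriteAllText(Path.Combine(outputFolder, "result.txt"), "output goes here");
        }
    }
    ```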

    Read the article

  • Unit testing several implementations of the same trait/interface

    - by paradigmatic
    I program mostly in Scala and Java, using ScalaTest in Scala and JUnit for unit testing. I would like to apply the very same tests to several implementations of the same interface/trait. The idea is to verify that the interface contract is enforced and to check the Liskov substitution principle. For instance, when testing implementations of lists, tests could include: An instance should be empty if and only if it has size zero. After calling clear, the size should be zero. Adding an element in the middle of a list should increment the indices of the elements to its right by one. Etc. What are the best practices?
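
    One common pattern (shown in JUnit 4 here, though the same shape works in ScalaTest) is an abstract contract test with a factory method: the contract tests are written once against the interface, and each implementation gets a tiny subclass that only supplies the factory and inherits every test. A sketch:

    ```java
    import static org.junit.Assert.*;
    import java.util.LinkedList;
    import java.util.List;
    import org.junit.Test;

    // Contract tests written once against the interface.
    abstract class ListContractTest {

        protected abstract List<String> createList();   // each implementation supplies this

        @Test
        public void newListIsEmptyAndHasSizeZero() {
            List<String> list = createList();
            assertTrue(list.isEmpty());
            assertEquals(0, list.size());
        }

        @Test
        public void clearResetsSizeToZero() {
            List<String> list = createList();
            list.add("x");
            list.clear();
            assertEquals(0, list.size());
        }

        @Test
        public void insertingInTheMiddleShiftsRightHandElementsByOne() {
            List<String> list = createList();
            list.add("a");
            list.add("c");
            list.add(1, "b");                 // insert in the middle
            assertEquals("c", list.get(2));   // "c" moved one index to the right
        }
    }

    // One small subclass per implementation under test; it inherits every contract test.
    public class LinkedListContractTest extends ListContractTest {
        @Override
        protected List<String> createList() { return new LinkedList<String>(); }
    }
    ```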

    Read the article

  • How to make efficient code emerge through unit testing

    - by Jean
    Hi, I participate in a TDD Coding Dojo, where we try to practice pure TDD on simple problems. It occured to me however that the code which emerges from the unit tests isn't the most efficient. Now this is fine most of the time, but what if the code usage grows so that efficiency becomes a problem. I love the way the code emerges from unit testing, but is it possible to make the efficiency property emerge through further tests ? Here is a trivial example in ruby: prime factorization. I followed a pure TDD approach making the tests pass one after the other validating my original acceptance test (commented at the bottom). What further steps could I take, if I wanted to make one of the generic prime factorization algorithms emerge ? To reduce the problem domain, let's say I want to get a quadratic sieve implementation ... Now in this precise case I know the "optimal algorithm, but in most cases, the client will simply add a requirement that the feature runs in less than "x" time for a given environment. require 'shoulda' require 'lib/prime' class MathTest < Test::Unit::TestCase context "The math module" do should "have a method to get primes" do assert Math.respond_to? 'primes' end end context "The primes method of Math" do should "return [] for 0" do assert_equal [], Math.primes(0) end should "return [1] for 1 " do assert_equal [1], Math.primes(1) end should "return [1,2] for 2" do assert_equal [1,2], Math.primes(2) end should "return [1,3] for 3" do assert_equal [1,3], Math.primes(3) end should "return [1,2] for 4" do assert_equal [1,2,2], Math.primes(4) end should "return [1,5] for 5" do assert_equal [1,5], Math.primes(5) end should "return [1,2,3] for 6" do assert_equal [1,2,3], Math.primes(6) end should "return [1,3] for 9" do assert_equal [1,3,3], Math.primes(9) end should "return [1,2,5] for 10" do assert_equal [1,2,5], Math.primes(10) end end # context "Functionnal Acceptance test 1" do # context "the prime factors of 14101980 are 1,2,2,3,5,61,3853"do # should "return [1,2,3,5,61,3853] for ${14101980*14101980}" do # assert_equal [1,2,2,3,5,61,3853], Math.primes(14101980*14101980) # end # end # end end and the naive algorithm I created by this approach module Math def self.primes(n) if n==0 return [] else primes=[1] for i in 2..n do if n%i==0 while(n%i==0) primes<<i n=n/i end end end primes end end end
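
    One way to let efficiency "emerge" the same way behaviour does is to encode the non-functional requirement itself as a test - an acceptance test that asserts the factorization finishes within a time budget on a representative input - and then keep refactoring until it passes. A sketch in plain Test::Unit (the one-second budget is invented for illustration; the input is borrowed from the commented-out acceptance test above):

    ```ruby
    require 'benchmark'
    require 'test/unit'
    require 'lib/prime'

    class PrimePerformanceTest < Test::Unit::TestCase
      def test_factors_large_number_within_time_budget
        elapsed = Benchmark.realtime do
          Math.primes(14101980 * 14101980)
        end
        assert elapsed < 1.0, "factorization took #{elapsed}s, budget is 1s"
      end
    end
    ```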

    Read the article

  • Continuous integration with .NET and SVN

    - by stiank81
    We're currently not applying the automated building and testing of continuous integration in our project. We haven't bothered this far as there are only 2 developers working on it, but even with a team of 2 I still think it would be valuable to use continuous integration and get early warning when our builds break or tests start failing. We're using .NET with C# and WPF. We have created Python scripts for building the application - using MSBuild - and for running all tests. Our source is in SVN. What would be the best approach to apply continuous integration with this setup? What tool should we get? It should be one which doesn't require a lot of setup. Simple procedures to get started and little maintenance are a must.

    Read the article

  • xml intellisense with Eclipse Europa?

    - by Maddy.Shik
    Please suggest some free tools or a built-in Eclipse feature for XML IntelliSense (content assist). I am using Eclipse Europa, the edition for EE Java developers. <?xml version ="1.0" encoding = "UTF-8"?> <tests xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://jtestcase.sourceforge.net/dtd/jtestcase2.xsd"> </tests> The XML code above is part of the XML file I am working on. I am expecting Eclipse to suggest tags based on the XSD specified above. I am using Ctrl+Space to get suggestions for a new tag. Please tell me if I am using the wrong key for content assist, or whether I am missing some tool or plugin. Thanks
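
    One thing worth checking (an assumption about the setup, not a confirmed diagnosis): content assist can only propose tags if Eclipse can actually resolve the schema, and the xsi:noNamespaceSchemaLocation above points at a remote URL. Downloading the XSD and pointing at the local copy (or registering the URL in WTP's XML Catalog preferences) often makes the Ctrl+Space proposals appear. A sketch with a local path:

    ```xml
    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Hypothetical: jtestcase2.xsd saved next to this file -->
    <tests xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:noNamespaceSchemaLocation="jtestcase2.xsd">

    </tests>
    ```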

    Read the article

  • NUnit vs Visual Studio 2010's MSTest?

    - by David White
    I realise that there are many older questions addressing the general question of NUnit vs MSTest for versions of Visual Studio up to 2008 (such as this one). Microsoft have a history of getting things right in their 3rd version. For MSTest, that is VS2010. Have they done so with MSTest? Would you use it in a new project in preference to NUnit? My specific concerns: speed; running tests within CruiseControl.NET (either command line or MSBuild task); code coverage reports from CC.NET; whether you can run MSTest tests in debug mode. (We use ReSharper, so test runners are not an issue for us. We have used NUnit for the last few years. We do not have TFS.)

    Read the article

  • WatiN from TeamCity not running as a Windows Service

    - by peter.swallow
    I'm trying to run WatiN from within a TeamCity build, using NUnit. All tests run fine locally. I know you cannot run the full WatiN tests (i.e. POST) from TeamCity if it is running as a Windows service; you must start the build agent from a .bat file. But I don't want to have to log in to the server for it to start. I've tried getting a Scheduled Task (in Windows Server 2008) to fire the agent.bat file on StartUp (not Login), but with no luck. Has anyone else got WatiN/TeamCity running from a Scheduled Task? Thanks, Pete

    Read the article

  • Possible to register Selenium RCs with the Hudson Selenium Grid Hub without the RCs being slaves in the cluster?

    - by Rodreegez
    I am trying to get Hudson to run my Ruby-based Selenium tests. I have installed the Selenium Grid plugin, but I don't want to have the RCs running as slaves in a Hudson cluster. The reason for this is I don't want to waste the next six years of my life trying to configure each of my projects in various Windows environments. Hudson currently pulls each project from GitHub and builds it just fine. With a regular Selenium Grid setup, I am able to edit the grid_configuration.yml file to represent the various environments I wish to test against, then pass environment variables to the rake task that runs the tests, i.e. which browser/platform to run on and the URL of the application under test -- usually a port on the hub machine running in a specific environment. In this way, the machines on which the RCs run don't need to know anything about the source code of my apps; they just need to have selenium-grid installed and be registered with the hub. Is there a way of elegantly emulating this with Hudson?

    Read the article

  • Rspec-rails doesn't seem to find my models

    - by sa125
    Hi - I'm trying out RSpec, and immediately hit a wall: it doesn't seem to load DB records I know exist. Here's my fairly simple spec (no tests yet). require File.expand_path(File.dirname(__FILE__) + '../spec_helper') describe SomeModel do before :each do @user1 = User.find(1) @user2 = User.find(2) end it "should do something fancy" end I get an ActiveRecord::RecordNotFound exception saying it couldn't find a User with ID=1 or ID=2, which I know for a fact exist. I set both the test and development databases to point to the same schema in database.yml, so this shouldn't be a database mix-up. I also ran script/generate rspec after installing the gems (rspec, rspec-rails), and added the gem.config entries to both environment.rb and test.rb. Any idea what I'm missing? Thanks. EDIT: It seems I was running the tests with rake spec:models, which emptied the DB, and thus no records were found. When I used % spec spec/models/some_model_spec.rb, everything worked as expected.
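
    As the edit suggests, the spec task works against the (emptied) test database, so specs generally shouldn't rely on rows that happen to exist in development. A common alternative is to have each example create what it needs, for instance (attribute names here are placeholders):

    ```ruby
    describe SomeModel do
      before :each do
        # Create the records the example needs instead of assuming IDs 1 and 2 exist.
        @user1 = User.create!(:name => "Alice")
        @user2 = User.create!(:name => "Bob")
      end

      it "should do something fancy"
    end
    ```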

    Read the article

  • django sphinx automodule -- basics

    - by haras.pl
    Hi, I have a project with several large apps, where the settings and app files are split. The directory structure goes something like this: project_name __init__.py apps __init__.py app1 app2 3rdparty __init__.py lib1 lib2 settings __init__.py installed_apps.py path.py templates.py locale.py ... urls.py Every app looks like this: __init__.py admin __init__.py file1.py file2.py models __init__.py model1.py model2.py tests __init__.py test1.py test2.py views __init__.py view1.py view2.py urls.py How do I use Sphinx to auto-generate documentation for that? I want something like this: for each module in settings or in INSTALLED_APPS (not starting with django.* or 3rdparty.*), give me auto-generated documentation output based on the docstrings, and run the tests before git commit. By the way, I tried writing the .rst files by hand with .. automodule:: module_name :members: but that sucks for such a big project, and it does not work for settings. Is there an autogen method or something? I am not tied to Sphinx; is there a better solution for my problem?
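
    Sphinx ships with a generator (sphinx-apidoc, in reasonably recent releases) that walks a package tree and writes one .rst per module with the automodule directives already filled in, so the files don't have to be written by hand. A sketch of how it might be run for this layout (paths are illustrative); note that the modules must be importable when the docs build, which for Django code usually means setting DJANGO_SETTINGS_MODULE - that is often why documenting settings appears "not to work":

    ```sh
    # Generate .rst stubs (with automodule directives) for everything under apps/,
    # skipping the third-party code; output goes to docs/api/.
    sphinx-apidoc -o docs/api apps 3rdparty
    ```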

    Read the article

  • Can Hudson branch promotion be based on project stability?

    - by Wayne
    Hudson CI server displays stability "weather", which is cool. And it allows one project build to kick off based on the successful build of another. However, how can you make that secondary project additionally dependent on the stability of multiple builds of the first project? Specifically, project "stable_deploy" needs to kick off to promote a version to "stable" only if project "integrate" with version 8.3.4.1233 has built and tested successfully at least 8 times--in a row. Until then, it's still in integration mode. IMPORTANT: A significant caveat is that a single set of Hudson projects gets used as a "pipeline" to process each new version through to release. So a project may have built successfully 8 times in a row, but the latest version 8.3.4.1233 may account for only the 2 most recent builds; the builds prior to that may be of an earlier version. We're open to completely reorganizing this, but the pipeline idea seemed to greatly reduce the amount of manual project creation and deletion. Is there a better way to track a version release "pipeline"? In particular, we will have multiple versions in this pipeline simultaneously in the future due to fixes or patches to older versions. We don't see how to do that yet, except to create new pipeline projects for each version, which is a real hassle. Here are some background details: The TickZoom application has some very complete unit tests, some of which simulate real-time trading environments. Add to that, TickZoom makes elaborate use of parallelization to leverage multi-core computers. Needless to say, during development of a new version there can be stability issues during integration testing which get uncovered by running the build and automated tests repeatedly. A version which builds and tests cleanly 8 times in a row without change, and has undergone some real-world testing by users, can be deemed "stable" and promoted to the stable branch. Our Hudson projects look like this: test - Only for testing a build; zero user visibility. integrate_deploy - Promotes a test project build to the integrate branch and makes it available to the public for UA testing. integrate - Repeatedly builds the integrate branch to determine if it's stable enough to promote to the stable branch. This runs the builds and tests hourly throughout every night. stable_deploy - Promotes an integrate project build to the stable branch and makes it public for users who want the latest and greatest. stable - Builds the stable branch once every night. After 2 weeks of successful builds (14 builds) it can go to "release candidate". And so on... it continues with "release candidate" and then "release".

    Read the article

  • How do I generate a coverage XML report for a single package?

    - by Wraith
    I'm using nose and coverage to generate coverage reports. I only have one package right now, ae, so I specify to only cover that: nosetests -w tests/unit --with-xunit --with-coverage --cover-package=ae And here are the results, which look good: Name Stmts Exec Cover Missing ---------------------------------------------- ae 1 1 100% ae.util 253 224 88% 39, 63-65, 284, 287, 362, 406 ---------------------------------------------- TOTAL 263 234 88% ---------------------------------------------------------------------- Ran 68 tests in 5.292s However when I run coverage xml, coverage pulls in more packages than necessary, including python email and logging packages which have nothing to do with my code. If I run coverage xml ae, I get this error: No source for code: '/home/wraith/dev/projects/trimurti/src/ae': [Errno 21] Is a directory: '/home/wraith/dev/projects/trimurti/src/ae' Is there a way to generate the XML for just the ae package?
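
    One way to keep the XML report scoped the same way as the console report is to put the filter in a .coveragerc, so every coverage command (report, xml, html) uses it, rather than passing a directory to coverage xml - a sketch, assuming the package is importable as ae:

    ```ini
    # .coveragerc - hypothetical minimal config limiting measurement to the ae package
    [run]
    source = ae
    ```

    Newer coverage.py releases also accept --include/--omit filters directly on the xml command; check the installed version's help before relying on that.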

    Read the article

  • Experiences with UI Automation and WPF

    - by soren.enemaerke
    We are developing a rather large WPF-based application and would like to include some automated UI testing in our test suite (which already contains a number of unit tests). The UI Automation Framework from Microsoft sounds, in part, like a perfect fit for programmatically launching and interacting with the application in a test setup. However, I've struggled to find solid references for samples and experiences with the technology; the articles and small samples available on MSDN are not enough to convince me that it is a solid choice. So, does anybody have real-world experience using the UI Automation Framework in their test suite? What are the caveats and gotchas? Any best practices when writing test scripts? Can you "record and replay" to a scriptable format? How much should you facilitate the testing from the application side, and how did you incorporate it into the automatic build? Should we be looking in a direction other than the UI Automation Framework? Feel free to post your experiences here or link to some good references I might have missed.
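
    For a flavour of what test code against the managed UI Automation API looks like, here is a minimal sketch (the executable path, window title, and AutomationId are hypothetical, and real tests need polling/waits rather than a fixed sleep):

    ```csharp
    using System.Diagnostics;
    using System.Threading;
    using System.Windows.Automation;   // UIAutomationClient / UIAutomationTypes references

    class UiSmokeTest
    {
        static void Main()
        {
            Process.Start(@"C:\path\to\OurWpfApp.exe");   // placeholder path
            Thread.Sleep(3000);                           // crude wait for the window

            // Find the main window by its title (hypothetical).
            AutomationElement window = AutomationElement.RootElement.FindFirst(
                TreeScope.Children,
                new PropertyCondition(AutomationElement.NameProperty, "Our WPF App"));

            // Find a button via the AutomationId set in XAML (hypothetical) and click it.
            AutomationElement okButton = window.FindFirst(
                TreeScope.Descendants,
                new PropertyCondition(AutomationElement.AutomationIdProperty, "OkButton"));

            var invoke = (InvokePattern)okButton.GetCurrentPattern(InvokePattern.Pattern);
            invoke.Invoke();
        }
    }
    ```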

    Read the article

  • Node.js vs PHP processing speed

    - by Cody Craven
    I've been looking into node.js recently and wanted to see a true comparison of processing speed for PHP vs Node.js. In most of the comparisons I had seen, Node trounced Apache/PHP set ups handily. However all of the tests were small 'hello worlds' that would not accurately reflect any webpage's markup. So I decided to create a basic HTML page with 10,000 hello world paragraph elements. In these tests Node with Cluster was beaten to a pulp by PHP on Nginx utilizing PHP-FPM. So I'm curious if I am misusing Node somehow or if Node is really just this bad at processing power. Note that my results were equivalent outputting "Hello world\n" with text/plain as the HTML, but I only included the HTML as it's closer to the use case I was investigating. My testing box: Core i7-2600 Intel CPU (has 8 threads with 4 cores) 8GB DDR3 RAM Fedora 16 64bit Node.js v0.6.13 Nginx v1.0.13 PHP v5.3.10 (with PHP-FPM) My test scripts: Node.js script var cluster = require('cluster'); var http = require('http'); var numCPUs = require('os').cpus().length; if (cluster.isMaster) { // Fork workers. for (var i = 0; i < numCPUs; i++) { cluster.fork(); } cluster.on('death', function (worker) { console.log('worker ' + worker.pid + ' died'); }); } else { // Worker processes have an HTTP server. http.Server(function (req, res) { res.writeHead(200, {'Content-Type': 'text/html'}); res.write('<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n'); for (var i = 0; i < 10000; i++) { res.write('<p>Hello world</p>\n'); } res.end('</body>\n</html>'); }).listen(80); } This script is adapted from Node.js' documentation at http://nodejs.org/docs/latest/api/cluster.html PHP script <?php echo "<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n"; for ($i = 0; $i < 10000; $i++) { echo "<p>Hello world</p>\n"; } echo "</body>\n</html>"; My results Node.js $ ab -n 500 -c 20 http://speedtest.dev/ This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking speedtest.dev (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Finished 500 requests Server Software: Server Hostname: speedtest.dev Server Port: 80 Document Path: / Document Length: 190070 bytes Concurrency Level: 20 Time taken for tests: 14.603 seconds Complete requests: 500 Failed requests: 0 Write errors: 0 Total transferred: 95066500 bytes HTML transferred: 95035000 bytes Requests per second: 34.24 [#/sec] (mean) Time per request: 584.123 [ms] (mean) Time per request: 29.206 [ms] (mean, across all concurrent requests) Transfer rate: 6357.45 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.2 0 2 Processing: 94 547 405.4 424 2516 Waiting: 0 331 399.3 216 2284 Total: 95 547 405.4 424 2516 Percentage of the requests served within a certain time (ms) 50% 424 66% 607 75% 733 80% 813 90% 1084 95% 1325 98% 1843 99% 2062 100% 2516 (longest request) PHP/Nginx $ ab -n 500 -c 20 http://speedtest.dev/test.php This is ApacheBench, Version 2.3 <$Revision: 655654 $> Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/ Licensed to The Apache Software Foundation, http://www.apache.org/ Benchmarking speedtest.dev (be patient) Completed 100 requests Completed 200 requests Completed 300 requests Completed 400 requests Completed 500 requests Finished 500 requests Server Software: nginx/1.0.13 Server Hostname: 
speedtest.dev Server Port: 80 Document Path: /test.php Document Length: 190070 bytes Concurrency Level: 20 Time taken for tests: 0.130 seconds Complete requests: 500 Failed requests: 0 Write errors: 0 Total transferred: 95109000 bytes HTML transferred: 95035000 bytes Requests per second: 3849.11 [#/sec] (mean) Time per request: 5.196 [ms] (mean) Time per request: 0.260 [ms] (mean, across all concurrent requests) Transfer rate: 715010.65 [Kbytes/sec] received Connection Times (ms) min mean[+/-sd] median max Connect: 0 0 0.2 0 1 Processing: 3 5 0.7 5 7 Waiting: 1 4 0.7 4 7 Total: 3 5 0.7 5 7 Percentage of the requests served within a certain time (ms) 50% 5 66% 5 75% 5 80% 6 90% 6 95% 6 98% 6 99% 6 100% 7 (longest request) Additional details Again what I'm looking for is to find out if I'm doing something wrong with Node.js or if it is really just that slow compared to PHP on Nginx with FPM. I certainly think Node has a real niche that it could fit well, however with these test results (which I really hope I made a mistake with - as I like the idea of Node) lead me to believe that it is a horrible choice for even a modest processing load when compared to PHP (let alone JVM or various other fast solutions). As a final note, I also tried running an Apache Bench test against node with $ ab -n 20 -c 20 http://speedtest.dev/ and consistently received a total test time of greater than 0.900 seconds.
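
    One factor worth ruling out before blaming Node itself: the Node handler issues 10,000 separate res.write() calls, each of which becomes its own chunk, while PHP's output is buffered and flushed in large pieces. Building the body as one string and sending it with a single call makes the comparison fairer - a sketch (precomputing the body here just to isolate the write-call overhead):

    ```javascript
    var http = require('http');

    // Build the 10,000-paragraph body once, outside the request handler.
    var body = '<html>\n<head>\n<title>Speed test</title>\n</head>\n<body>\n';
    for (var i = 0; i < 10000; i++) {
        body += '<p>Hello world</p>\n';
    }
    body += '</body>\n</html>';

    http.createServer(function (req, res) {
        res.writeHead(200, {
            'Content-Type': 'text/html',
            'Content-Length': Buffer.byteLength(body)
        });
        res.end(body);               // one write instead of 10,000
    }).listen(80);
    ```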

    Read the article

  • Maven multi-module project with many reports: looking for an example

    - by hstoerr
    Is there an open source project that can serve as a good example on how to use the maven site plugin to generate reports? I would prefer it to consist of many modules, possibly hierarchically structured use as many plugins as possible (surefire, jxr, pmd, findbugs, javadoc, checkstyle, you name it) the reports should be aggregated: if some tests fail you want to have a single page that shows all modules with failing tests, not only a gazillion individual pages to check include enterprisey stuff (WAR, EAR etc), but this is not so important. The idea is to have something where you can gather ideas on how it is done and what is possible.

    Read the article

  • Override variables while testing a standalone Perl script

    - by BrianH
    There is a Perl script in our environment that I now need to maintain. It is full of bad practices, including using (and re-using) global variables throughout the script. Before I start making changes to the script, I was going to try to write some test scripts so I can have a good regression base. To do this, I was going to use a method described on this page. I was starting by writing tests for a single subroutine. I put this line somewhat near the top of the script I am testing: return 1 if ( caller() ); That way, in my test script, I can require 'script_to_test.pl'; and it won't execute the whole script. The first subroutine I was going to test makes a lot of use of global variables that are set throughout the script. My thought was to try to override these variables in my test script, something like this: require_ok('script_to_test.pl'); $var_from_other_script = 'Override Value'; ok( sub_from_other_script() ); Unfortunately (for me), the script I am testing has a massive "my" block at the top, where it declares all variables used in the script. This prevents my test script from seeing/changing the variables in the script I'm running tests against. I've played with Exporter, Test::Mock..., and some other modules, but it looks like if I want to be able to change any variables I am going to have to modify the other script in some fashion. My goal is to not change the other script, but to get some good tests running so when I do start changing the other script, I can make sure I didn't break anything. The script is about 10,000 lines (3,000 of them in the main block), so I'm afraid that if I start changing things, I will affect other parts of the code, so having a good test suite would help. Is this possible? Can a calling script modify variables in another script declared with "my"? And please don't jump in with answers like, "Just re-write the script from scratch", etc. That may be the best solution, but it doesn't answer my question, and we don't have the time/resources for a re-write.
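
    For reference, this is exactly the my/our distinction: a my variable is lexical and no other file can reach it, while an our (package) variable can be assigned through its fully qualified name. A tiny sketch of the mechanism, under the hypothetical assumption that the script's globals could be declared with our instead:

    ```perl
    # script_to_test.pl (hypothetical excerpt)
    package main;
    our $var_from_other_script = 'original';   # package variable: reachable from outside
    my  $hidden_lexical        = 'original';   # lexical: nothing outside can touch this

    sub sub_from_other_script { return $var_from_other_script; }

    return 1 if caller();

    # test script (for comparison):
    # require 'script_to_test.pl';
    # $main::var_from_other_script = 'Override Value';   # works for 'our', never for 'my'
    ```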

    Read the article

  • Why does headless PDE Build omit directories I've specified in build.properties's bin.includes?

    - by Woody Zenfell III
    One of my Eclipse plug-ins (OSGi bundles) is supposed to contain a directory (Database Elements) of .sql files. My build.properties shows: bin.includes = META-INF/,\ .,\ Database Elements/ (...which looks right to me.) When I build and run from within my interactive Eclipse IDE, everything works fine: calls to Bundle.getEntry(String) and Bundle.findEntries(String, String, bool) return valid URL objects; my tests are happy; my code is happy. When I build via headless ant script (using PDE Build), those same calls end up returning null. My tests break; my code breaks. I find that Database Elements is quietly but simply missing from my plug-in's JAR package. (META-INF and the built classes still make it in there fine.) I scoured the build log (even eventually invoking ant -verbose on the relevant portion of the build script) but saw no mention of anything helpful. What gives?

    Read the article

< Previous Page | 52 53 54 55 56 57 58 59 60 61 62 63  | Next Page >