Search Results

Search found 6839 results on 274 pages for 'functional tests'.

  • Automating XNA Performance Testing?

    - by Grofit
    I was wondering what people's approaches or thoughts were on automating performance testing in XNA. Currently I am only working in 2D, but that still poses many areas where performance can be improved by different implementations. An example would be two different implementations of spatial partitioning: one may be faster than the other, but without doing some actual performance testing you couldn't tell which for sure (unless the code was blatantly slow in certain parts). You could write a unit test which, for a given time frame, kept adding/updating/removing entities in both implementations, counted how many operations completed in that time frame, and declared the higher count the faster implementation (in this example). Another, higher-level example would be seeing roughly how many entities you can have on screen without dropping below 60fps. The problem is that to automate this you would need to use the hidden-form trick or something similar to kick off a mock game, test only the parts you care about, and disable everything else.

    I know this isn't a simple affair: even if you can automate the tests, it is ultimately up to a human to interpret whether the results are performant enough. But as part of a build step you could run these tests and publish the results somewhere for comparison. That way, if you go from version 1.1 to 1.2 but have changed a few underlying algorithms, you may notice that the overall performance score has gone up, meaning you have improved the application's overall performance; then from 1.2 to 1.3 you may notice that overall performance has dropped a bit.

    So has anyone automated this sort of thing in their projects? If so, how do you measure your performance comparisons at a high level, and what frameworks do you use to test? Provided you have written your code so it is testable/mockable for the most part, you can just use your tests as a mechanism for getting performance results...

    === Edit ===

    Just for clarity: I am interested in the best way to use automated tests within XNA to track your performance, not in play testing or in guessing by manually running your game on a machine. This is completely different from seeing whether your game is playable on X hardware; it is about tracking the change in performance as your game engine/framework changes. As mentioned in one of the comments, you could easily test "how many nodes can I insert/remove/update in QuadTreeA within 2 seconds", but you have to physically look at these results every time to see whether they have changed. That may be fine, and it is still better than just playing the game and seeing if you notice any difference between versions. However, if you put in an Assert to flag a failure when the count drops below, let's say, 5000 in 2 seconds, you have a brittle test: it is now contextual to the hardware, not just the implementation. That said, these automated tests are only really useful if you run them in some sort of build pipeline, i.e.:

        Checkout -> Run Unit Tests -> Run Integration Tests -> Run Performance Tests -> Package

    Then you can easily compare the stats from one build to another on the CI server in a report of some sort; again, this may not mean much to anyone who is not used to continuous integration. The main crux of this question is to see how people manage this between builds and how they find it best to report upon. As I said, it can be subjective, but as knowledge will be gained from the answers it seems a worthwhile question.
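
    A minimal sketch of the time-boxed throughput test described above, written in Java rather than C#/XNA purely for illustration, with an ArrayList standing in for the hypothetical QuadTreeA. The point is the shape: record a raw number for the CI report to trend from build to build instead of asserting a brittle, hardware-dependent threshold.

        import java.util.ArrayList;
        import java.util.List;

        // Time-boxed throughput test: counts how many insert operations
        // complete in a fixed window and prints the raw number for a CI
        // server to track, instead of asserting a hardware-dependent minimum.
        public class InsertThroughputBenchmark {

            private static final long WINDOW_NANOS = 2_000_000_000L; // 2 seconds

            public static void main(String[] args) {
                // Stand-in for the structure under test (e.g. a quadtree).
                List<Integer> structureUnderTest = new ArrayList<Integer>();

                long start = System.nanoTime();
                long operations = 0;
                while (System.nanoTime() - start < WINDOW_NANOS) {
                    structureUnderTest.add((int) operations);
                    operations++;
                }

                // Publish the metric; comparing it across builds shows the trend.
                System.out.println("inserts-in-2s=" + operations);
            }
        }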

  • What is the most idiomatic way to emulate Perl's Test::More::done_testing?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl5.8, with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea not to catch when your unit test exits prematurely, say due to some logic not executing when you expected it to. What is the most idiomatic way of running a configurable number of tests, assuming neither no_plan nor done_testing() is used?

    Details: My unit tests usually take this form:

        use Test::More;

        my @test_set = (
             [ "Test #1", $param1, $param2, ... ]
            ,[ "Test #2", $param1, $param2, ... ]
            # ,...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;

        BEGIN {
            our @test_set = ();    # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set;    # Must do!!! Since first "our" was in BEGIN's scope :(
        foreach my $test (@test_set) {    # Same
            run_test($test);
        }

        sub run_test { ... }    # Same

    I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN {} and after it. Another approach is to emulate done_testing() by calling Test::More->builder->plan(tests => $total_tests_calculated); I'm not sure whether that is any better idiomatically.

  • Why can I run JUnit tests for my Spring project, but not a main method?

    - by FarmBoy
    I am using JDBC to connect to MySQL for a small application. In order to test without altering the real database, I'm using HSQL in memory for JUnit tests. I'm using Spring for DI and DAOs. Here is how I'm configuring my HSQL DataSource:

        <bean id="mockDataSource" class="org.springframework.jdbc.datasource.SingleConnectionDataSource">
            <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
            <property name="url" value="jdbc:hsqldb:mem:mockSeo"/>
            <property name="username" value="sa"/>
        </bean>

    This works fine for my JUnit tests, which use the mock DB. But when I try to run a main method, I get the following error:

        Error creating bean with name 'mockDataSource' defined in class path resource [beans.xml]:
        Error setting property values; nested exception is
        org.springframework.beans.PropertyBatchUpdateException;
        nested PropertyAccessExceptions (1) are:
        PropertyAccessException 1: org.springframework.beans.MethodInvocationException:
        Property 'driverClassName' threw exception; nested exception is
        java.lang.IllegalStateException: Could not load JDBC driver class [org.hsqldb.jdbcDriver]

    I'm running from Eclipse, and I'm using the Maven plugin. Is there a reason why this would work as a test, but not as a main()? I know that the main method itself is not the problem, because it works if I remove all references to the HSQL DataSource from my Spring configuration file.
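
    As an aside, the classic cause of exactly this split (tests pass, main() cannot load the driver) is the hsqldb jar sitting on the test classpath only, for example declared with Maven's test scope; that is an assumption about this setup, not something the error alone proves. A minimal main() that bootstraps the same context to check this looks like:

        import javax.sql.DataSource;
        import org.springframework.context.support.ClassPathXmlApplicationContext;

        // Minimal bootstrap: loads the same beans.xml the tests use. If hsqldb
        // is declared with <scope>test</scope> in the POM, this fails with the
        // "Could not load JDBC driver class" error while the JUnit run succeeds,
        // because main() runs against the compile/runtime classpath.
        public class Main {
            public static void main(String[] args) {
                ClassPathXmlApplicationContext context =
                        new ClassPathXmlApplicationContext("beans.xml");
                try {
                    DataSource ds = context.getBean("mockDataSource", DataSource.class);
                    System.out.println("Got DataSource: " + ds);
                } finally {
                    context.close();
                }
            }
        }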

  • How should we set up complex situations for tests?

    - by ShaneC
    I'm currently working on what I would call integration tests. I want to verify that if a WCF service is called it will do what I expect. Let's take a very simple scenario. Assume we have a contract object that we can put on hold or take off hold. Now, writing the put-on-hold test is quite simple: you create a contract instance and execute the code that puts it on hold. The question I have comes when we want to test the take-off-hold service call. The problem is that putting a contract on hold can actually be quite complicated, leading to various objects all being modified. So usually I would use the Builder pattern and do something like this:

        var onHoldContract = new ContractBuilder().PutOnHold().Build();

    The problem I have with this is that I now have to replicate a large part of my put-on-hold service, so when I change what putting something on hold means, I have two places to modify. The other option that immediately jumps out at me is to just use the put-on-hold service as part of my test setup, but now I'm coupling my test to the success of another piece of code, which is something I don't like to do, since it can lead to failures in one spot breaking unrelated tests elsewhere (if put-on-hold failed, for example). Any other options I'm missing here? Or opinions on which method is preferable, and why?
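
    For what it's worth, here is the trade-off sketched in Java with hypothetical names: the builder writes the end state directly, which keeps the test isolated but means the rules for "on hold" now live in two places, which is exactly the duplication described above.

        // Hypothetical domain class, reduced to the single flag this sketch needs.
        class Contract {
            final boolean onHold;
            Contract(boolean onHold) { this.onHold = onHold; }
        }

        // Hypothetical test-data builder: it sets the on-hold end state directly
        // rather than going through the real put-on-hold service. That keeps the
        // test isolated, but any rule the service applies must be mirrored here.
        class ContractBuilder {
            private boolean onHold;

            ContractBuilder putOnHold() {
                this.onHold = true; // the real service may also modify other objects
                return this;
            }

            Contract build() {
                return new Contract(onHold);
            }
        }

        public class TakeOffHoldTestSetup {
            public static void main(String[] args) {
                Contract onHoldContract = new ContractBuilder().putOnHold().build();
                System.out.println("contract on hold: " + onHoldContract.onHold);
            }
        }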

  • How can I run Ruby specs and/or tests in MacVim without locking up MacVim?

    - by Henry
    About 6 months ago I switched from TextMate to MacVim for all of my development work, which primarily consists of coding in Ruby, Ruby on Rails and JavaScript. With TextMate, whenever I needed to run a spec or a test, I could just hit command+R on the test or spec file; another window would open and the results would be displayed with the 'pretty' format applied. If the spec or test was a lengthy one, I could continue working with the codebase, since the test/spec was running in a separate process/window. After the test ran, I could click through the results directly to the corresponding line in the spec file.

    Tim Pope's excellent rails.vim plugin comes very close to emulating this behavior within the MacVim environment. Running :Rake when the current buffer is a test or spec runs the file, then splits the buffer to display the results. You can navigate through the results and key through to the corresponding spot in the file. The problem with the rails.vim approach is that it locks up the MacVim window while the test runs. This can be an issue with big apps that might have a lot of setup/teardown built into the tests. Also, the visual red/green HTML results that TextMate displays (via --format pretty, I'm assuming) are a bit easier to scan than the split window.

    This guy came close about 18 months ago: http://cassiomarques.wordpress.com/2009/01/09/running-rspec-files-from-vim-showing-the-results-in-firefox/ - the script he presents worked with a bit of hacking, but the tests still ran within MacVim and locked up the current window.

    Any ideas on how to fully replicate the TextMate behavior described above in MacVim? Thanks!

  • What's the use of writing tests matching configuration-like code line by line?

    - by Pascal Van Hecke
    Hi, I have been wondering about the usefulness of writing tests that match code one-to-one. Just an example: in Rails, you can define 7 RESTful routes in one line of routes.rb using:

        resources :products

    BDD/TDD prescribes that you test first and then write code. In order to test the full effect of this line, devs come up with macros, e.g. for shoulda: http://kconrails.com/2010/01/27/route-testing-with-shoulda-in-ruby-on-rails/

        class RoutingTest < ActionController::TestCase
          # simple
          should_map_resources :products
        end

    I'm not trying to pick on the guy that wrote the macros; this is just an example of a pattern that I see all over Rails. I'm just wondering what the use of it is... in the end you're just duplicating code, and the only thing you test is that Rails works. You could as well write a tool that transforms your test macros into actual code... When I ask around, people answer me that "the tests should document your code, so yes, it makes sense to write them, even if it's just one line corresponding to one line." What are your thoughts?

  • Ant JUnit tests are running much slower via Ant than via IDE - what to look at?

    - by Alex B
    I am running my JUnit tests via Ant and they are running substantially slower than via the IDE. My Ant call is:

        <junit fork="yes" forkmode="once" printsummary="off">
            <classpath refid="test.classpath"/>
            <formatter type="brief" usefile="false"/>
            <batchtest todir="${test.results.dir}/xml">
                <formatter type="xml"/>
                <fileset dir="src" includes="**/*Test.java"/>
            </batchtest>
        </junit>

    The same test that runs near-instantaneously in my IDE (0.067s) takes 4.632s when run through Ant. In the past, I've been able to speed up test problems like this by using the junit fork parameter, but that doesn't seem to be helping in this case. What properties or parameters can I look at to speed up these tests?

    More info: I am comparing the time reported by the IDE with the time the junit task outputs, not the total time reported at the end of the Ant run. Also, bizarrely, this problem has since resolved itself. What could have caused it? The system runs on a local disk, so that is not the problem.

  • How to create a fully functional server of my own? [closed]

    - by Saransh Sharma
    Hi, I have been looking at various companies for cloud hosting and dedicated servers, but I am not pleased with the service being provided in our country, and I don't want to go with a provider outside India. I am also watching the cost factor. So what I am looking at is creating our own server that we can manage ourselves: basically, we would build a machine/PC, put it online and use it as a server. The problem is that we need a fully managed solution and we are not getting one anywhere, and we are a startup looking to save money and create something new on the server side. So, is it possible to create a server, specifically a web server, with a home/desktop PC? Please help. Saransh Sharma

  • Both nginx and php5-fpm init.d startup scripts are non-functional and return no errors, though they used to work perfectly

    - by Ollie Treend
    I have been using nginx and php5-fpm on my Ubuntu box for a while now. Everything has been configured and set up correctly, and it ran like a charm. I have been keeping the packages updated and upgraded as usual, but haven't touched the nginx OR php5-fpm config files at all (thus I'm pretty sure this isn't my fault...).

    Basically, I noticed nginx wasn't running as it should be. I ran the command sudo service nginx start, and the script did nothing. The same thing happens when trying to do anything - start, stop, restart or reload. This also happens for the php5-fpm init script, although all other init scripts seem to be functioning correctly. When trying to start nginx OR php5-fpm, this is what happens:

        root@HAL:/etc# service php5-fpm start
        root@HAL:/etc#

    I can't understand what is going wrong. The script isn't returning errors, but similarly it isn't starting the daemon or reporting success as usual. For reference, both installations are from the official nginx and php5-fpm PPAs. The fact that both started doing this at the same time has thrown me, since they are otherwise unrelated packages.

    I have since purged both sets of packages from my system with apt-get purge ... and also apt-get remove --purge ..., both of which successfully removed the packages, their config files, and their init.d startup scripts. After reinstalling nginx, I now have a functioning startup script again and can start the web server as usual. However, php5-fpm is still experiencing the strange premature exiting of the startup script, and I really can't figure out what's causing it.

    I have no idea what caused this to occur initially, but I have managed to fix nginx; now I need to fix the php5-fpm startup script. If anybody could shed some light on this situation, I would be very grateful! The chances are both these issues are related and were caused by me doing something stupid, but now I need to fix it. This time I was lucky, because these problems are just on my development server, but I have 2 other live servers which are configured in a similar way, and I am worried the same thing will happen to them as well! Has anybody else come across this? Do you have any words of advice? Thank you

  • Is there a tool that will let me take 'functional screenshots'?

    - by subpixel
    It's one thing to grab screenshots of web sites and return to them for design and interface ideas and inspiration. But increasingly I've found that when I want to examine something that previously caught my eye, the actual web site has since been altered, so I can no longer learn from the code. I wonder: is there a tool available that would allow me to save interfaces in a format that retains the HTML/CSS? Thanks

  • Are there any language-agnostic unit testing frameworks?

    - by Bringer128
    I have always been skeptical of rewriting working code, and porting code is no exception. However, with the advent of TDD and automated testing it is much more reasonable to rewrite and refactor code. Does anyone know if there is a TDD tool that can be used for porting old code? Ideally you could do the following:

      1. Write language-agnostic unit tests for the old code that pass (or fail, if you find bugs!).
      2. Run the unit tests against your other code base, where they will initially fail.
      3. Write code in your new language that passes the tests without looking at the old code.

    The alternative would be to split step 1 into "write unit tests in language 1" and "port unit tests to language 2", which significantly increases the effort required and is difficult to justify if the old code base is going to stop being maintained after the port (that is, you don't get the benefit of continuous integration on this code base). EDIT: It's worth noting this question on StackOverflow.
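
    One way to make step 1 concrete is to move the cases themselves out of any programming language into a plain data file that both code bases read through thin runners; only the runner gets ported, never the cases. A minimal Java sketch of such a runner (the cases.txt format and the add() function are assumptions of this sketch):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.List;

        // Language-agnostic test cases live in a plain text file, one case per
        // line: "input1,input2,expected". The old and new code bases each get
        // a thin runner like this one against the same file.
        public class DataDrivenRunner {
            // Stand-in for the function under test in this sketch.
            static int add(int a, int b) { return a + b; }

            public static void main(String[] args) throws IOException {
                List<String> cases = Files.readAllLines(Paths.get("cases.txt"));
                int failures = 0;
                for (String line : cases) {
                    String[] f = line.split(",");
                    int actual = add(Integer.parseInt(f[0].trim()),
                                     Integer.parseInt(f[1].trim()));
                    int expected = Integer.parseInt(f[2].trim());
                    if (actual != expected) {
                        failures++;
                        System.out.println("FAIL: " + line + " -> " + actual);
                    }
                }
                System.out.println(failures == 0 ? "OK" : failures + " failure(s)");
            }
        }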

  • Authlogic openid: getting an 'undefined method openid_identifier?' error in a functional test

    - by usr
    I use Authlogic with the Authlogic openid addon (I gem-installed ruby-openid and ran script/plugin install git://github.com/rails/open_id_authentication.git) and get two errors.

    First, when running functional tests I get an undefined method openid_identifier? message on a line in my new.html.erb file when running the UsersControllerTest. The line is:

        <% if @user.openid_identifier? %>

    When running script/console I can access this method without any problem.

    Second, when testing the openid functionality and registering a new user to my application using openid (using my blogspot account for that), I get the following in my log file:

        Generated checkid_setup request to http://www.blogger.com/openid-server.g with association ...
        Redirected to http://www.blogger.com/openid-server.g?openid.assoc_handle=...

        NoMethodError (You have a nil object when you didn't expect it!
        The error occurred while evaluating nil.call):
          app/controllers/users_controller.rb:44:in `create'
          /usr/lib/ruby/1.8/webrick/httpserver.rb:104:in `service'
          /usr/lib/ruby/1.8/webrick/httpserver.rb:65:in `run'
          /usr/lib/ruby/1.8/webrick/server.rb:173:in `start_thread'
          /usr/lib/ruby/1.8/webrick/server.rb:162:in `start'
          /usr/lib/ruby/1.8/webrick/server.rb:162:in `start_thread'
          /usr/lib/ruby/1.8/webrick/server.rb:95:in `start'
          /usr/lib/ruby/1.8/webrick/server.rb:92:in `each'
          /usr/lib/ruby/1.8/webrick/server.rb:92:in `start'
          /usr/lib/ruby/1.8/webrick/server.rb:23:in `start'
          /usr/lib/ruby/1.8/webrick/server.rb:82:in `start'

    The code in the users_controller is straightforward:

        def create
          respond_to do |format|
            @user.save do |result|
              if result
                flash[:notice] = t('Thanks for signing up!')
                format.html { redirect_to :action => 'index' }
                format.xml  { render :xml => @user, :status => :created, :location => @user }
              else
                format.html { render :action => "new" }
                format.xml  { render :xml => @user.errors, :status => :unprocessable_entity }
              end
            end
          end
        end

    The line giving the error is @user.save do |result|. I feel I'm missing something pretty basic, but I have been staring at this for too long because I can't find what it is. I checked against the code in Railscasts episodes 160 and 170 and the bones GitHub project, but found nothing. Thanks for your help, usr

  • Is it possible to profile memory usage of unit tests?

    - by Rowland Shaw
    I'm looking at building some unit tests to ascertain whether resources are leaking (or not), using the unit testing framework that comes with Visual Studio. At present, I'm evaluating the latest version of ANTS Profiler, but I can't quite work out if it allows me to force a snapshot from code (so that I can take a snapshot, run a unit test a few hundred times, force a garbage collection, take another snapshot, and save the results out for later analysis). Is this possible to do with ANTS/Visual Studio, or should I be exploring options with other profilers?
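
    The snapshot / run-many-times / force-GC / snapshot pattern itself can be sketched outside any particular profiler. The question is about .NET and ANTS, so the Java version below is only an illustration of the pattern; Runtime heap numbers are approximate and System.gc() is only a hint, so the delta is a trend signal rather than an exact leak size.

        // Illustrates the snapshot / run-N-times / force-GC / snapshot pattern.
        public class LeakCheck {
            static long usedHeap() {
                Runtime rt = Runtime.getRuntime();
                return rt.totalMemory() - rt.freeMemory();
            }

            public static void main(String[] args) throws InterruptedException {
                System.gc();
                Thread.sleep(100);                 // give the collector a moment
                long before = usedHeap();

                for (int i = 0; i < 1000; i++) {
                    codeUnderTest();               // the operation being checked
                }

                System.gc();
                Thread.sleep(100);
                long after = usedHeap();
                System.out.println("heap delta after 1000 runs: "
                        + (after - before) + " bytes");
            }

            static void codeUnderTest() {
                // stand-in for the real work in this sketch
                StringBuilder sb = new StringBuilder("x");
                sb.reverse();
            }
        }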

  • Is it against best practice to throw Exception on most JUnit tests?

    - by Chris Knight
    Almost all of my JUnit tests are written with the following signature:

        public void testSomething() throws Exception

    My reasoning is that I can focus on what I'm testing rather than on exception handling, which JUnit appears to give me for free. But am I missing anything by doing this? Is it against best practice? Would I gain anything by explicitly catching specific exceptions in my test and then fail()'ing on them?
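
    A side-by-side sketch in JUnit 4 style (Parser is a hypothetical class under test): with throws Exception, an unexpected exception is reported by JUnit with its full stack trace, which is usually all that is needed; an explicit catch plus fail() only adds value when the bare stack trace would not explain the failure.

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.fail;
        import org.junit.Test;

        public class ParserTest {
            // Common style: let JUnit report any unexpected exception,
            // stack trace included. No handling code obscures the test body.
            @Test
            public void testParse() throws Exception {
                assertEquals(42, Parser.parse("42"));
            }

            // Explicit catch: only worth it when the plain stack trace would
            // not explain the failure well enough on its own.
            @Test
            public void testParseWithContext() {
                try {
                    assertEquals(42, Parser.parse("42"));
                } catch (Exception e) {
                    fail("parsing '42' blew up: " + e);
                }
            }
        }

        // Hypothetical class under test, just to keep the sketch self-contained.
        class Parser {
            static int parse(String s) throws Exception {
                return Integer.parseInt(s);
            }
        }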

  • How to unit test functions which return results asynchronously in Xcode?

    - by DevDevDev
    I have something like:

        - (void)getData:(SomeParameter *)param {
            // Remotely call out for data; it is returned asynchronously
            // via a delegate method
        }

        - (void)handleDataDelegateMethod:(NSData *)data {
            // Handle returned data
        }

    I want to write a unit test for this. How can I do something better than:

        NSData *returnedData = nil;

        - (void)handleDataDelegateMethod:(NSData *)data {
            returnedData = data;
        }

        - (void)test {
            [obj getData:param];
            while (!returnedData) {
                [NSThread sleepForTimeInterval:1];
            }
            // Make assertions on returnedData
        }
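
    Language aside, the usual improvement over the sleep loop is to block on a latch with a timeout, so a callback that never arrives fails the test instead of hanging it forever. A Java sketch of the shape (the worker thread stands in for the asynchronous delegate):

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicReference;

        // Waits for an asynchronous callback with a timeout, instead of
        // spinning in a sleep loop that can hang the test run forever.
        public class AsyncCallbackTest {
            public static void main(String[] args) throws InterruptedException {
                CountDownLatch done = new CountDownLatch(1);
                AtomicReference<byte[]> returned = new AtomicReference<>();

                // Stand-in for getData: delivers the result on another thread.
                new Thread(() -> {
                    returned.set(new byte[] {1, 2, 3});  // "delegate" fires here
                    done.countDown();
                }).start();

                // Fail instead of hanging if nothing arrives within 5 seconds.
                if (!done.await(5, TimeUnit.SECONDS)) {
                    throw new AssertionError("no callback within 5 seconds");
                }
                System.out.println("received " + returned.get().length + " bytes");
            }
        }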

  • Is it possible to run NUnit 2.5.3 tests in Gallio?

    - by Allrameest
    When I try to run NUnit tests in ReSharper I get this:

        Detected a probable test framework assembly version mismatch.
        Referenced test frameworks: 'nunit.framework, Version=2.5.3.9345, Culture=neutral, PublicKeyToken=96d09a1eb7f44a77'.
        Supported test frameworks: 'nunit.framework, Version=2.5.0.0-2.5.2.65535', 'nunit.framework, Version=2.4.8.0-2.4.8.65535'.

    I am using Gallio v3.1 build 397.

  • How can I prevent JavaScript errors in my Selenium tests from hanging my build system?

    - by lowellk
    We are having a problem with our Selenium tests, which run in IE on a virtual machine. Whenever there is a JavaScript error, a popup shows up and puts our system into a 'stuck' state: a user has to go clear the dialog and restart the Selenium test run. Is there a way to prevent the JavaScript error popup from putting the system into its stuck state? Would setting window.onerror be of any help here?
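
    Setting window.onerror is indeed one known lever (the other is IE's own "display a notification about every script error" setting in Internet Options). A sketch using the Selenium RC Java client, assuming a Selenium server on localhost:4444; a handler that returns true tells the browser the error was handled, and it has to be re-installed after every page load.

        import com.thoughtworks.selenium.DefaultSelenium;
        import com.thoughtworks.selenium.Selenium;

        // Installs a window.onerror handler in the page under test so JS errors
        // are recorded instead of raising IE's script-error dialog. Returning
        // true from onerror marks the error as handled.
        public class SuppressJsErrorDialog {
            public static void main(String[] args) {
                Selenium selenium = new DefaultSelenium(
                        "localhost", 4444, "*iexplore", "http://example.com/");
                selenium.start();
                selenium.open("/");

                // runScript injects into the current test window; repeat this
                // after each open()/page load, since a new document loses it.
                selenium.runScript(
                        "window.onerror = function (msg, url, line) {"
                      + "  window.__jsErrors = window.__jsErrors || [];"
                      + "  window.__jsErrors.push(msg + ' @ ' + url + ':' + line);"
                      + "  return true;"
                      + "};");

                // ... exercise the application here ...

                // Read the recorded errors back out of the application window.
                String errors = selenium.getEval(
                        "var w = selenium.browserbot.getCurrentWindow();"
                      + "(w.__jsErrors || []).join('; ');");
                System.out.println("JS errors seen: " + errors);

                selenium.stop();
            }
        }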
