Search Results

Search found 24159 results on 967 pages for 'droid test'.


  • playframework auto-test Jenkins CI wait for completion?

    - by notbrain
    I am trying to set up Jenkins CI for a playframework.org application but am having trouble properly launching Play after the auto-test command is run. The tests all run fine, but it seems as though my script is launching both play auto-test and play start --%ci at the same time. When the play start --%ci command runs, it gets a pid and everything, but it's not running.

    FILE: auto-test.sh (Jenkins runs this with an Execute Shell build step):

      #!/bin/bash
      # pwd is jenkins workspace dir
      # change into approot dir
      cd customer-portal;
      # kill any previous play launches
      if [ -e "server.pid" ]
      then
          kill `cat server.pid`;
          rm -rf server.pid;
      fi
      # drop and re-create the DB
      mysql --user=USER --password=PASS --host=HOSTNAME < ../setupdb.sql
      # auto-test the most recent build
      /usr/local/lib/play/play auto-test;
      # this is inadequate for waiting for auto-test to complete?
      # how to wait for actual process completion?
      # sleep 60;
      wait;
      # Conditional start based on tests
      # Launch normal on pass, test on fail
      #
      if [ -e "./test-result/result.passed" ]
      then
          /usr/local/lib/play/play start --%ci;
          exit 0;
      else
          /usr/local/lib/play/play test;
          exit 1;
      fi

    Read the article

  • Test Results window in VS2008 not showing results

    - by TimK
    I have an existing solution that has been working for a long time, containing around 600 tests in a couple of test projects. I recently moved to a new PC - it's Win7-x64, and I installed a fresh copy of VS2008. When I first opened the solution on the new machine, the Test List Editor was completely empty. Trying to create a new test list caused the editor to refresh, and now it shows my test lists, but they're acting funny.

    I can select tests in the lists, and run them, but the results window doesn't usually update automatically to show the results of the latest test. It has done this when running a single test a couple of times, but even that is not consistent. The only way I can view the results is by manually going to the Test Runs window and connecting to individual test runs. When I do that, the results show up in the results list, but I can't check them to re-run the failed tests - the check boxes are all disabled.

    I guess I should describe the way it used to work, in case that was unusual - I used to select some tests from the Test Lists window, tell it to run them, and the results window would clear itself, and then display the results from the current run. I could then check any tests that I wanted to re-run, and use the run/debug button in the results window to do so.

    Any ideas what's going on here?

    Read the article

  • postgresql table for storing automation test results

    - by Martin
    I am building an automation test suite which is running on multiple machines, all reporting their status to a postgresql database. We will run a number of automated tests for which we will store the following information:

      - test ID (a GUID)
      - test name
      - test description
      - status (running, done, waiting to be run)
      - progress (%)
      - start time of test
      - end time of test
      - test result
      - latest screenshot of the running test (updated every 30 seconds)

    The number of tests isn't huge (say a few thousand) and each machine (say, 50 of them) has a service which checks the database and figures out if it's time to start a new automated test on that machine.

    How should I organize my SQL table to store all the information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and create a new table with more columns), how should I proceed? Should the new attributes just be in a different table?

    I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I just store the screenshots in their own table to simplify the replication? Thanks!

    Read the article

  • A new name for unit tests

    - by Will
    I never used to like unit testing. I always thought it increased the amount of work I had to do. Turns out, that's only true in terms of the actual number of lines of code you write, and furthermore, this is completely offset by the increase in the number of lines of useful code that you can write in an hour with tests and test-driven development. Now I love unit tests, as they allow me to write useful code that quite often works first time! (knock on wood)

    I have found that people are reluctant to do unit tests or start a project with test-driven development if they are under strict timelines or in an environment where others don't do it, so they don't. Kinda like a cultural refusal to even try.

    I think one of the most powerful things about unit testing is the confidence that it gives you to undertake refactoring. It also gives new-found hope that I can give my code to someone else to refactor/improve, and if my unit tests still work, I can use the new version of the library that they modified, pretty much, without fear.

    It's this last aspect of unit testing that I think needs a new name. The unit test is more like a contract of what this code should do now, and in the future. When I hear the word testing, I think of mice in cages, with multiple experiments done on them to see the effectiveness of a compound. This is not what unit testing is; we're not trying out different code to see what is the most effective approach, we're defining what outputs we expect with what inputs. In the mice example, unit tests are more like the definitions of how the universe will work as opposed to the experiments done on the mice.

    Am I on crack, or does anyone else see this refusal to do testing, and do they think it's for a similar reason that they don't want to do it? What reasons do you / others give for not testing? What do you think their motivations are in not unit testing? And as a new name for unit testing that might get over some of the objections, how about jContract? (A bit Java-centric, I know :), or Unit Contracts?

    Read the article

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a single-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do:

    I have a list of tests, with the following attributes:

      - uri: a URI to test (could be HTTP/HTTPS/SSH/local);
      - depends: an associative array of tests/values that this test depends on;
      - join: a list of DB joins to be added when selecting items to process in this test;
      - depends_db: additional conditions to add to the DB request when selecting items to process in this test.

    The program builds a dependency tree, beginning with the tests that have no dependencies. For each test:

      - a list of items is selected from the database using the conditions (results of depending tests, joins and depends_db);
      - the list of items is sent to the URI (using POST or stdin);
      - the result is retrieved as a YAML file listing the state and comments for the test for each tested item;
      - the results are stored in the DB;
      - the test returns, allowing depending tests to be performed.

    Finally, the program generates reports (CSV, DB, graphviz) of the performed tests.

    The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:

      - backup: hosted on the backup machine(s), called through HTTP, checks if the machines' backup went well;
      - DNS: hosted on the local machine, called via stdin, checks if the machines' fqdn have a valid DNS entry.

    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?

    Read the article

  • android instrumentation testsuite

    - by siri
    Hi, I have written two test cases in the package com.app.myapp.test. When I try to run them, only one test case gets executed and then it stops. I have written the following test suite, AllTests.java, in the same package:

      public class AllTests extends TestSuite {
          public static Test suite() {
              return new TestSuiteBuilder(AllTests.class)
                      .includePackages("./src/com.ni.mypaint.test", "./src/com.ni.mpaint.test")
                      .build();
              /* .includeAllPackagesUnderHere()
                 .build(); */
          }
      }

    Are the code and location for this test suite correct?
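    For reference, a minimal sketch (not the asker's code) of how Android's TestSuiteBuilder is usually invoked: includePackages() expects dot-separated Java package names rather than "./src/..." file paths, so a conventional suite would look roughly like this, reusing the package names from the question:

      import android.test.suitebuilder.TestSuiteBuilder;
      import junit.framework.Test;
      import junit.framework.TestSuite;

      public class AllTests extends TestSuite {
          public static Test suite() {
              // Java package names, not file-system paths.
              return new TestSuiteBuilder(AllTests.class)
                      .includePackages("com.ni.mypaint.test", "com.ni.mpaint.test")
                      .build();
          }
      }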

    Read the article

  • 12.04 on Pentium Dual Core with 1GB of RAM running slow

    - by Alex
    Hey, I have a Lenovo ThinkPad laptop with Ubuntu 12.04 installed, and it runs slow. I tried "System Profiler and Benchmark" to test the computer, but the application quits and closes after the first few benchmark tests, before it even gets to the other tests. So I tried "Hardinfo", which is installed on the Puppy Linux live CD; that did the same thing (the apps look just alike).

    Memory usage isn't the problem on this PC; it's the CPU processes. Just running the "System Profiler" app that comes with Ubuntu uses about 34% on each core; by default, with nothing running, it's 5-10% on each core. I can't really find what the deal is, other than that Ubuntu is a CPU hog, so I'm testing Unity 2D at the moment to see how it goes. If you have any other suggestions, feel free to answer this question. Thanks.

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl5.8 with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_tests - it's generally a bad idea not catching when your unit test dies prematurely.

    What is the most idiomatic way of running a configurable number of tests, assuming no no_tests or done_testing() is used?

    Details: My unit tests usually take the form of:

      use Test::More;
      my @test_set = ( [ "Test #1", $param1, $param2, ... ]
                      ,[ "Test #1", $param1, $param2, ... ]
                      # ,...
                     );
      foreach my $test (@test_set) {
          run_test($test);
      }
      sub run_test {
          # $expected_tests += count_tests($test);
          ok(test1($test)) || diag("Test1 failed");
          # ...
      }

    The standard approach of use Test::More tests => 23; or BEGIN {plan tests => 23} does not work since both are obviously executed before @tests is known. My current approach involves making @tests global and defining it in the BEGIN {} block as follows:

      use Test::More;
      BEGIN {
          our @test_set = (); # Same set of tests as above
          my $expected_tests = 0;
          foreach my $test (@tests) {
              my $expected_tests += count_tests($test);
          }
          plan tests = $expected_tests;
      }
      our @test_set; # Must do!!! Since first "our" was in BEGIN's scope :(
      foreach my $test (@test_set) {  # Same
          run_test($test);
      }
      sub run_test {} # Same

    I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate our @test_set declarations - in BEGIN{} and after it.

    Read the article

  • Regular expression test can't decide between true and false (JavaScript)

    - by nw
    I get this behavior in both Chrome (Developer Tools) and Firefox (Firebug). Note the regex test returns alternating true/false values:

      > var re = /.*?\bbl.*\bgr.*/gi;
      undefined
      > re
      /.*?\bbl.*\bgr.*/gi
      > re.test("Blue-Green");
      true
      > re.test("Blue-Green");
      false
      > re.test("Blue-Green");
      true
      > re.test("Blue-Green");
      false

    However, testing the same regex as a literal:

      > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
      true
      > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
      true
      > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
      true
      > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
      true

    I can't explain this and it's making debugging very difficult. Can anyone explain this behavior?

    Read the article

  • A basic load test question

    - by user236131
    I have a very basic load test question. I am running a load test using VSTS 2008, and I have a test rig with a controller + 10 agents. This load test is against a SharePoint farm I have. My goal for the load test is to find out the resource utilization on the web/app/DB tiers of my farm for any given load scenario. An example of a load scenario is:

      - Usage profile: Average collaboration (as defined by SCCP)
      - User load: 500 (using a step load pattern = a step of 50 every 2 mins and a warm-up time of 2 mins for every step)
      - Think time: 0
      - Load duration: 8 hrs

    Now, the question is: is it fair to expect that metrics like requests/sec, % processor time on the web front end / app / DB, tests/sec, etc. become flat or enter a steady state at some point in time during the load test? Like I said, the goal is not to create a bottleneck but only to measure the utilization of resources by the above load profile.

    I am asking this question because I see something different. At one point in the load test, requests/sec becomes more or less flat, but processor utilization on the web/DB servers keeps increasing. After digging through the data a bit, I see that the "tests running" counter also steadily increased over time. So, if I run the load test for more than 8 hrs, % processor may go up further. This way, I don't know what to consider as the load exerted by the load profile.

    What does this "tests running" counter really signify? How is it different from tests/sec? Another question is: how can I find out why the "tests running" counter shows an increase over time? Thanks for your time.

    Read the article

  • Pass command line arguments to JUnit test case being run programmatically

    - by __nv__
    I am attempting to run a JUnit test from a Java class with:

      JUnitCore core = new JUnitCore();
      core.addListener(new RunListener());
      core.run(classToRun);

    The problem is my JUnit test requires a database connection that is currently hardcoded in the JUnit test itself. What I am looking for is a way to run the JUnit test programmatically (above) but pass it a database connection that I create in my Java class that runs the test, rather than hardcoding it within the JUnit class. Basically something like:

      JUnitCore core = new JUnitCore();
      core.addListener(new RunListener());
      core.addParameters(java.sql.Connection);
      core.run(classToRun);

    Then within the classToRun:

      @Test
      public void Test1(Connection dbConnection){
          Statement st = dbConnection.createStatement();
          ResultSet rs = st.executeQuery("select total from dual");
          rs.next();
          String myTotal = rs.getString("TOTAL");
          // btw my tests are selenium testcases :)
          selenium.isTextPresent(myTotal);
      }

    I know about @Parameters, but it doesn't seem applicable here, as it is more for running the same test case multiple times with differing values. I want all of my test cases to share a database connection that I pass in through a configuration file to my Java client, which then runs those test cases (also passed in through the configuration file). Is this possible?

    P.S. I understand this seems like an odd way of doing things.
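    One common workaround, sketched below (this is not the asker's code; the test class name and system property are made up): since JUnit 4 test methods cannot take parameters, expose the connection through a static holder that the launcher populates before calling JUnitCore, and have the test class read it in a @BeforeClass method or in the test itself.

      import java.sql.Connection;
      import java.sql.DriverManager;
      import org.junit.runner.JUnitCore;
      import org.junit.runner.Result;
      import org.junit.runner.notification.RunListener;

      public class TestLauncher {
          // Static holder the test classes read from instead of hardcoding a connection.
          public static Connection sharedConnection;

          public static void main(String[] args) throws Exception {
              // JDBC URL supplied from configuration, e.g. -Dtest.db.url=jdbc:...
              sharedConnection = DriverManager.getConnection(System.getProperty("test.db.url"));

              JUnitCore core = new JUnitCore();
              core.addListener(new RunListener());
              Result result = core.run(DatabaseSeleniumTest.class); // hypothetical test class
              System.exit(result.wasSuccessful() ? 0 : 1);
          }
      }

    Inside the test class, the test would then pick up TestLauncher.sharedConnection rather than receiving it as a method parameter.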

    Read the article

  • Homoscedasticity test for Two-Way ANOVA

    - by aL3xa
    I've been using var.test and bartlett.test to check basic ANOVA assumptions, among others homoscedasticity (homogeneity, equality of variances). The procedure is quite simple for One-Way ANOVA:

      bartlett.test(x ~ g) # where x is numeric, and g is a factor
      var.test(x ~ g)

    But, for 2x2 tables, i.e. Two-Way ANOVAs, I want to do something like this:

      bartlett.test(x ~ c(g1, g2)) # or with a list; see latter:
      var.test(x ~ list(g1, g2))

    Of course, ANOVA assumptions can be checked with graphical procedures, but what about "an arithmetic option"? Is that, at all, manageable? How do you test homoscedasticity in Two-Way ANOVA?

    Read the article

  • JUnit Best Practice: Different Fixtures for each @Test

    - by Juri Glass
    Hi, I understand that there are @Before and @BeforeClass, which are used to define fixtures for the @Test methods. But what should I use if I need different fixtures for each @Test? Should I define the fixture in the @Test itself? Should I create a test class for each @Test?

    I am asking for the best practices here, since neither solution is clean in my opinion. With the first solution, the test would also be exercising the initialization code. And with the second solution I would break the "one test class for each class" pattern.
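    A common compromise, sketched below with JUnit 4 and made-up names: keep only fixture-neutral setup in @Before and build each test's specific fixture through small private helper methods called at the start of that test, which keeps one test class per class without bloating the shared fixture.

      import static org.junit.Assert.assertEquals;
      import static org.junit.Assert.assertTrue;
      import java.util.ArrayList;
      import java.util.Arrays;
      import java.util.List;
      import org.junit.Before;
      import org.junit.Test;

      public class ListFixtureTest {
          private List<String> list;

          @Before
          public void setUp() {
              // Shared, fixture-neutral setup only.
              list = new ArrayList<String>();
          }

          // Per-test fixtures live in small helpers instead of @Before.
          private void givenEmptyList()  { /* nothing to add */ }
          private void givenThreeItems() { list.addAll(Arrays.asList("a", "b", "c")); }

          @Test
          public void sizeIsZeroForEmptyFixture() {
              givenEmptyList();                 // fixture for this test only
              assertEquals(0, list.size());
          }

          @Test
          public void containsItemForPopulatedFixture() {
              givenThreeItems();                // different fixture, same class
              assertTrue(list.contains("b"));
          }
      }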

    Read the article

  • Unit Testing in QTestLib - running single test / tests in class / all tests

    - by Dave
    I'm just starting to use QTestLib. I have gone through the manual and tutorial. Although I understand how to create tests, I'm just not getting how to make those tests convenient to run. My unit test background is NUnit and MSTest. In those environments, it was trivial (using a GUI, at least) to alternate between running a single test, or all tests in a single test class, or all tests in the entire project, just by clicking the right button.

    All I'm seeing in QTestLib is either you use the QTEST_MAIN macro to run the tests in a single class, then compile and test each file separately; or use QTest::qExec() in main() to define which objects to test, and then manually change that and recompile when you want to add/remove test classes. I'm sure I'm missing something. I'd like to be able to easily:

      - Run a single test method
      - Run the tests in an entire class
      - Run all tests

    Any of those would call the appropriate setup / teardown functions.

    Read the article

  • Perl unit test - start a tcp server & continue

    - by John
    I am trying to write a unit test for a client-server application. To test the client, in my unit test I want to first start my TCP server (which itself is another Perl file). I tried to start the TCP server by forking:

      if (! fork()) {
          system ("$^X server.pl") == 0
              or die "couldn't start server";
      }

    So when I call "make test" after "perl Makefile.PL", this test starts and I can see the server starting, but after that the unit test just hangs there. So I guess I need to start this server in the background, and I tried the "&" at the end to force it to start in the background and then let the test continue. But I still couldn't succeed. What am I doing wrong? Thanks.

    Read the article

  • MS Test : How do I enforce exception message with ExpectedException attribute

    - by CRice
    I thought these two tests should behave identically; in fact I wrote the test in my project using MS Test, only to find out now that it does not respect the expected message in the same way that NUnit does.

    NUnit (fails):

      [Test, ExpectedException(typeof(System.FormatException), ExpectedMessage = "blah")]
      public void Validate()
      {
          int.Parse("dfd");
      }

    MS Test (passes):

      [TestMethod, ExpectedException(typeof(System.FormatException), "blah")]
      public void Validate()
      {
          int.Parse("dfd");
      }

    No matter what message I give the MS Test version, it will pass. Is there any way to get the MS Test to fail if the message is not right? Can I even create my own exception attribute? I would rather not have to write a try/catch block for every test where this occurs.

    Read the article

  • Maven - Selenium - Possible to run only one test

    - by Jonas Söderström
    Hi, we are using JUnit - Selenium for our web tests. We use Maven to start them and build a Surefire report. The test suite is pretty large and takes a while to run, and sometimes single tests fail because the browser won't start. I want to be able to run a SINGLE test using Maven so I can retest the tests that fail and update the report.

    I can use mvn test -Dtest=TESTCLASSNAME to run all the tests in one test class, but this is not good enough, since it takes about 10 minutes to run all the tests in our most complicated test classes and it's very likely that some other test will fail (because the browser won't start), which will mess up my report. I know I can run one test from Eclipse, but that is not what I am looking for. Any help on this would be very appreciated.
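    For completeness, a single test method can also be run programmatically with the JUnit 4 API, which some teams wrap in a small re-run utility when a whole-class run is too slow. A rough sketch with made-up names (this is separate from driving the run through Maven/Surefire):

      import org.junit.runner.JUnitCore;
      import org.junit.runner.Request;
      import org.junit.runner.Result;

      public class SingleTestRunner {
          public static void main(String[] args) throws Exception {
              // args[0] = fully qualified test class, args[1] = test method name
              Class<?> testClass = Class.forName(args[0]);
              Request request = Request.method(testClass, args[1]);

              Result result = new JUnitCore().run(request);
              System.out.println(result.getFailureCount() + " failure(s) out of "
                      + result.getRunCount() + " test(s)");
              System.exit(result.wasSuccessful() ? 0 : 1);
          }
      }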

    Read the article

  • Unit Test this - Simple method but don't know what to test!

    - by user309705
    A very simple method, but I don't know what to test! I'd like to test this method in the Business Logic Layer, and the _dataAccess apparently comes from the data layer.

      public DataSet GetLinksByAnalysisId(int analysisId)
      {
          DataSet result = new DataSet();
          result = _dataAccess.SelectAnalysisLinksOverviewByAnalysisId(analysisId);
          return result;
      }

    All I'm really testing is that _dataAccess.SelectAnalysisLinksOverviewByAnalysisId() gets called! Here's my test code (using Rhino Mocks):

      [TestMethod]
      public void Test()
      {
          var _dataAccess = MockRepository.GenerateMock<IDataAccess>();
          _dataAccess.Expect(x => x.SelectAnalysisLinksOverviewByAnalysisId(_settings.UserName, 0, out dateExecuted));
          var analysisBusinessLogic = new AnalysisLinksBusinessLogic(_dataAccess);
          analysisBusinessLogic.GetLinksByAnalysisId(_settings, 0);
          _dataAccess.VerifyAllExpectations();
      }

    If you were writing the test for this method, what would you test against? Many thanks!

    Read the article

  • Rails test across multiple environments

    - by DSimon
    Is there some way to change Rails environments mid-way through a test? Or, alternately, what would be the right way to set up a test suite that can start up Rails in one environment, run the first half of my test in it, then restart Rails in another environment to finish the test? The two environments have separate databases. Some necessary context: I'm writing a Rails plugin that allows multiple installations of a Rails app to communicate with each other with user assistance, so that a user without Internet access can still use the app. They'll run a local version of an app, and upload their work to the online app by saving a file to a thumbdrive and taking it to an Internet cafe. The plugin adds two special environments to Rails: "offline-production" and "offline-test". I want to write functional tests that involve both the "test" and "offline-test" environments, to represent the main online version of the app and the local offline version of the app respectively.

    Read the article

  • adding org.springframework.test package to spring-2.5.6-SEC01.jar in netbeans

    - by John
    This is probably very simple, however I can't get it to work. I want to use the AbstractDependencyInjectionSpringContextTests class, which is in the org.springframework.test package. This package is not included in NetBeans' Spring library, so I want to add it. What I have tried so far:

      - Copy and paste the "test" directory (downloaded from Spring) into NetBeans' spring-2.5.6-SEC01.jar file (copying it to the org.springframework directory in that jar so I can use org.springframework.test to import it). If I go to project/libraries in NetBeans it is there, but when I try to import org.springframework.test.*; the autocompletion doesn't give me the option to choose the test directory from the org.springframework package.
      - Create a new library which points to the "test" directory and add it to the project - as there is no jar file in "test", I'm not sure what path I should use to import it.

    I'm pretty sure this is something very simple, but I'm still a novice and can't figure this out.
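    For context, once the Spring 2.5 test module is on the classpath (the cleaner route is usually adding the separate spring-test jar from the Spring distribution as its own library rather than repacking spring.jar), the class is typically used roughly like this. The context file name and injected bean are assumptions made for illustration:

      import javax.sql.DataSource;
      import org.springframework.test.AbstractDependencyInjectionSpringContextTests;

      // Rough usage sketch; context location and injected bean are assumptions.
      public class DataSourceWiringTest extends AbstractDependencyInjectionSpringContextTests {

          private DataSource dataSource; // injected by type via the setter below

          public void setDataSource(DataSource dataSource) {
              this.dataSource = dataSource;
          }

          protected String[] getConfigLocations() {
              return new String[] { "classpath:applicationContext-test.xml" };
          }

          public void testDataSourceIsWired() {
              assertNotNull(dataSource);
          }
      }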

    Read the article

  • Cannot get principal id on my Spock test

    - by Ant's
    I have a controller like this:

      @Secured(['ROLE_USER','IS_AUTHENTICATED_FULLY'])
      def userprofile(){
          def user = User.get(springSecurityService.principal.id)
          params.id = user.id
          redirect (action : "show", params:params)
      }

    I want to test the controller above in Spock, so I wrote a test like this:

      def 'userProfile test'() {
          setup:
              mockDomain(User,[new User(username:"amtoasd",password:"blahblah")])
          when:
              controller.userprofile()
          then:
              response.redirectUrl == "/user/show/1"
      }

    When I run my test, it fails with this error message:

      java.lang.NullPointerException: Cannot get property 'principal' on null object
          at mnm.schedule.UserController.userprofile(UserController.groovy:33)

    And in the case of an integration test:

      class UserSpec extends IntegrationSpec {
          def springSecurityService

          def 'userProfile test'() {
              setup:
                  def userInstance = new User(username:"antoaravinth",password:"secrets").save()
                  def userInstance2 = new User(username:"antoaravinthas",password:"secrets").save()
                  def usercontroller = new UserController()
                  usercontroller.springSecurityService = springSecurityService
              when:
                  usercontroller.userprofile()
              then:
                  response.redirectUrl == "/user/sho"
          }
      }

    I get the same error as well. What went wrong? Thanks in advance.

    Read the article

  • Junit test that creates other tests

    - by Benju
    Normally I would have one JUnit test that shows up in my integration server of choice as one test that passes or fails (in this case I use TeamCity). What I need for this specific test is the ability to loop through a directory structure, testing that our data files can all be parsed without throwing an exception.

    Because we have 30,000+ files that take 1-5 seconds each to parse, this test will be run in its own suite. The problem is that I need a way to have one piece of code run as one JUnit test per file, so that if 12 files out of 30,000 fail I can see which 12 failed, not just that one test failed, threw a RuntimeException and stopped the run.

    I realize that this is not a true "unit" test way of doing things, but this simulation is very important to make sure that our content providers are kept in check and do not check in invalid files. Any suggestions?
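    One way to get one reported test per file is JUnit 4's Parameterized runner, sketched below; the directory path and the ContentParser call are placeholders for the real parsing entry point:

      import static org.junit.Assert.assertNotNull;

      import java.io.File;
      import java.util.ArrayList;
      import java.util.List;

      import org.junit.Test;
      import org.junit.runner.RunWith;
      import org.junit.runners.Parameterized;
      import org.junit.runners.Parameterized.Parameters;

      @RunWith(Parameterized.class)
      public class DataFileParseTest {

          @Parameters
          public static List<Object[]> dataFiles() {
              List<Object[]> files = new ArrayList<Object[]>();
              // Placeholder directory; recurse here if the files are nested.
              for (File f : new File("/data/content").listFiles()) {
                  files.add(new Object[] { f });
              }
              return files;
          }

          private final File file;

          public DataFileParseTest(File file) {
              this.file = file;
          }

          @Test
          public void parsesWithoutException() throws Exception {
              // Each file becomes its own test, so a bad file fails one
              // test instead of aborting the entire run.
              assertNotNull(ContentParser.parse(file)); // ContentParser is hypothetical
          }
      }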

    Read the article

  • SSSD Authentication

    - by user24089
    I just built a test server running OpenSuSE 12.1 and am trying to learn how to configure sssd, but am not sure where to begin to look for why my config cannot allow me to authenticate.

      server:/etc/sssd # cat sssd.conf
      [sssd]
      config_file_version = 2
      reconnection_retries = 3
      sbus_timeout = 30
      services = nss,pam
      domains = test.local

      [nss]
      filter_groups = root
      filter_users = root
      reconnection_retries = 3

      [pam]
      reconnection_retries = 3

      # Section created by YaST
      [domain/mose.cc]
      access_provider = ldap
      ldap_uri = ldap://server.test.local
      ldap_search_base = dc=test,dc=local
      ldap_schema = rfc2307bis
      id_provider = ldap
      ldap_user_uuid = entryuuid
      ldap_group_uuid = entryuuid
      ldap_id_use_start_tls = True
      enumerate = False
      cache_credentials = True
      chpass_provider = krb5
      auth_provider = krb5
      krb5_realm = TEST.LOCAL
      krb5_kdcip = server.test.local

      server:/etc # cat ldap.conf
      base dc=test,dc=local
      bind_policy soft
      pam_lookup_policy yes
      pam_password exop
      nss_initgroups_ignoreusers root,ldap
      nss_schema rfc2307bis
      nss_map_attribute uniqueMember member
      ssl start_tls
      uri ldap://server.test.local
      ldap_version 3
      pam_filter objectClass=posixAccount

      server:/etc # cat nsswitch.conf
      passwd:     compat sss
      group:      files sss
      hosts:      files dns
      networks:   files dns
      services:   files
      protocols:  files
      rpc:        files
      ethers:     files
      netmasks:   files
      netgroup:   files
      publickey:  files
      bootparams: files
      automount:  files ldap
      aliases:    files
      shadow:     compat

      server:/etc # cat krb5.conf
      [libdefaults]
      default_realm = TEST.LOCAL
      clockskew = 300

      [realms]
      TEST.LOCAL = {
          kdc = server.test.local
          admin_server = server.test.local
          database_module = ldap
          default_domain = test.local
      }

      [logging]
      kdc = FILE:/var/log/krb5/krb5kdc.log
      admin_server = FILE:/var/log/krb5/kadmind.log
      default = SYSLOG:NOTICE:DAEMON

      [dbmodules]
      ldap = {
          db_library = kldap
          ldap_kerberos_container_dn = cn=krbContainer,dc=test,dc=local
          ldap_kdc_dn = cn=Administrator,dc=test,dc=local
          ldap_kadmind_dn = cn=Administrator,dc=test,dc=local
          ldap_service_password_file = /etc/openldap/ldap-pw
          ldap_servers = ldaps://server.test.local
      }

      [domain_realm]
      .test.local = TEST.LOCAL

      [appdefaults]
      pam = {
          ticket_lifetime = 1d
          renew_lifetime = 1d
          forwardable = true
          proxiable = false
          minimum_uid = 1
          clockskew = 300
          external = sshd
          use_shmem = sshd
      }

    If I log onto the server as root I can su into an LDAP user; however, if I try to log in on the console locally or ssh remotely I am unable to authenticate. getent doesn't show the LDAP entries for users. I'm not sure if I need to look at LDAP, nsswitch, or what:

      server:~ # ssh localhost -l test
      Password:
      Password:
      Password:
      Permission denied (publickey,keyboard-interactive).

      server:~ # su test
      test@server:/etc> id
      uid=1000(test) gid=100(users) groups=100(users)

      server:~ # tail /var/log/messages
      Nov 24 09:36:44 server login[14508]: pam_sss(login:auth): system info: [Client not found in Kerberos database]
      Nov 24 09:36:44 server login[14508]: pam_sss(login:auth): authentication failure; logname=LOGIN uid=0 euid=0 tty=/dev/ttyS1 ruser= rhost= user=test
      Nov 24 09:36:44 server login[14508]: pam_sss(login:auth): received for user test: 4 (System error)
      Nov 24 09:36:44 server login[14508]: FAILED LOGIN SESSION FROM /dev/ttyS1 FOR test, System error

      server:~ # vi /etc/pam.d/common-auth
      auth     required       pam_env.so
      auth     sufficient     pam_unix2.so
      auth     required       pam_sss.so use_first_pass

      server:~ # vi /etc/pam.d/sshd
      auth     requisite      pam_nologin.so
      auth     include        common-auth
      account  requisite      pam_nologin.so
      account  include        common-account
      password include        common-password
      session  required       pam_loginuid.so
      session  include        common-session
      session  optional       pam_lastlog.so  silent noupdate showfailed

    Read the article

  • Ten Things I Wish I’d Known When I Started Using tSQLt and SQL Test

    The open-source Unit Test framework tSQLt is a great way of writing unit tests in the same language as the one being tested. In retrospect, after using tSQLt for a while, what are the 'gotchas'; those things that you'd have been better off knowing about before you get started? David Green lists a few tips he wished he'd read beforehand.

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010-Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010 I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010 and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010 I discussed the details of web performance & load tests as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET Profiler and some closing thoughts.

    Test Results – I see some creepy worms!

    In Part 2 we put together a web performance test and a load test; let's run the load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close to real time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

      - Monitor a running load test - A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.
      - After the load test run is completed - The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. Below you can see a screen shot of the summary view; this provides key results in a format that is compact and easy to read. You can also print the load test summary; this is generated after the test has completed or been stopped.
      - Analyse the load test results of a previously run load test – We'll see this in the section where I discuss comparison between two test runs.

    The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, or drill down to the user activity chart where you can hover over data points to see more details of the test run.

    Generate Report => Test Run Comparisons

    The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side by side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET profiler report to conduct further analysis as well.

    View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

    Compare results

    This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screen shots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
    Leaving Thoughts

    While load testing the application with an excessive load for a long duration, I managed to bring IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is by increasing the default number of allocated threads, but this might escalate the problem. The better suggestion is to try and drill down to the actual root cause of the problem.

    Whenever garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up; realistically, though, the garbage collection completes in a fraction of a second. To understand this better let's look at the .NET heap. It is divided into the large object heap and the small object heap; anything greater than 85 KB in size will be allocated on the large object heap. The large object heap is non-compacting, and remember that large objects are expensive to move around, so if you are allocating something on the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations: all objects that are supposed to be short-lived should live in Gen-0, and the long-living objects eventually move to Gen-2 as garbage collection goes through.

    As you can see in the picture below, all objects smaller than 85 KB are first assigned to Gen-0. When a new object comes in and finds Gen-0 full, the garbage collection process is started; it checks for all the dead objects and marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well.

    When you fill up Gen-0 again, all of Gen-1's dead objects are collected and the rest are moved to Gen-2, while Gen-0 objects are moved to Gen-1 to free up Gen-0 - but by this time your garbage collection process has started to take much more time than it usually takes. Now, as I mentioned earlier, whenever garbage collection is running, all page requests that come in during that period are queued up. Does this explain why page requests might be getting queued up? Apart from this, it could also be the case that you are waiting for a long-running database process to complete.

    Let's explore the heap a bit more… What is really a crisis is when objects live long enough to make it to Gen-2 and then die; this is definitely a high-cost operation. But sometimes you need objects in memory, for example when you cache data: you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you want to see what extreme caching can do to your server, write a simple application that chucks a lot of data into cache and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory.

    If you get to such a state where you start running out of memory, IIS, as a mode of recovery, restarts the worker process. It is a great way to free up all the memory in the heap, but it also clears the cache. The problem with this is that if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user's basket will now be empty, forcing them either to get frustrated and go to a competitor website or, if the customer is really patient, to give it another try!
    How can you address this? Well, there are two ways:

    1. Workaround – A 32-bit (x86) processor only allows a maximum of 4 GB of RAM to be addressed, which means the machine effectively has around 3.4 GB of RAM available. The OS needs about 1.5 GB of RAM to run efficiently, and IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because team builds by default build your application with the 'Any CPU' platform target, the application will run in 32-bit mode on an x86 processor and in 64-bit mode on an x64 processor. The problem with this is that not all applications are really x64-compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode by changing the 'Any CPU' selection to x86 in the team build, you will be able to run your application on an x64 machine in x86 mode (WOW – by running Windows on Windows), and what that means is the machine can use 8 GB+ worth of RAM; if you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB you have either built software for NASA or there is something fundamentally wrong in your application.

    2. Solution – Now that you have put a workaround in place, IIS will not restart the worker process as regularly, which means you can take a breather and start working to get to the root cause of the memory leak. But this begs the question "How do I identify possible memory leaks in my application?" Well, I won't say that there is one single tool that can tell you where the memory leak is, but trust me, 'Performance Profiling' is a great starting point; it definitely gets you moving in the right direction. Let's have a look at how.

    Performance Wizard - Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right click the performance session settings and choose Properties from the context menu to bring up the Performance session properties page and, as shown in the screen shot below, check the check boxes in the group '.NET memory profiling collection', namely 'Collect .NET object allocation information' and 'Also collect the .NET object lifetime information'.

    Now if you fire off the profiling session on your pages you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large object heap, etc. Another great feature of the profiler is that if your application has more than 5% of cases where objects die right after making it to Gen-2, a threshold alert is generated to warn you. Since you also have the option to view the most expensive methods, and by capturing the IntelliTrace data you can drill in, you can narrow down to the line of code that is the root cause of the problem.

    Well, now we have seen how crucial memory management is and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best-of-breed tools in the product.

    Caching

    One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request, you keep the data on the web server, and when the user asks for it you serve it from the web server itself. BUT that can have consequences!
    Let's look at some code; trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database, and the second time the data is served from the cache - a significant performance improvement - EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the snippet below. So, as long as a user comes in and finds that the cache is empty, the user takes the lock and starts to build the cache - no more concurrency issues.

    But let's say you are processing 10 requests per second. By the time I have locked the operation to get the results from the database, 9 other users have come in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster, because the next set of 10 users, and so on, would continue to get data from the cache. BUT if we added another null check after locking, before the actual call to the DB, then the 9 users who follow me would not make the extra trip to the database at all, and that would really increase the performance. But didn't I say that the code won't be very intuitive? Maybe you should leave a comment - you don't want another developer to come in and think "what a fresher, why is he checking for the cache key being null twice!!!"

    The downside of caching is that you are storing the data outside of the database, and the cached data could be wrong, because updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data - let's say only one entry point to the data update - you could write some logic to say that every time new data is entered, set the cache object to null. But this approach will not work as soon as you have several ways of feeding data into the system or your system is scaled out across a farm of web servers.

    The perfect solution to this is micro caching, which means you cache the query for a set time duration and invalidate the cache after that duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, no calls are made to the database and the data is served right from the server, which makes the response immensely quick. Now, figuring out the appropriate time span for which you micro cache the query results really depends on the application. Let's say your website gets 10 requests per second; if you retain the cached results for even 1 minute you will have immense performance gains - you would reduce hits to the database for searching by 90%.

    Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight, and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is not valid any more? Yes, exactly - that is a cache failure! These travel sites or price comparison engines are not going to hit the database every time you hit the compare button; instead the results will be served from the cache, because the query results are micro cached. It's a perfect trade-off: by micro caching the results the site gains 100% of the performance benefits but every once in a while annoys a customer because the fare has expired.
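    The snippet the author refers to is not reproduced in this excerpt. As a rough illustration of the double null-check-plus-lock pattern being described (the original is an ASP.NET/C# cache; this sketch uses Java and made-up names purely to show the shape of the pattern):

      import java.util.Map;
      import java.util.concurrent.ConcurrentHashMap;

      public class SearchResultCache {
          // Stand-in for the ASP.NET application cache in the article.
          private static final Map<String, Object> CACHE = new ConcurrentHashMap<String, Object>();
          private static final Object LOCK = new Object();

          public static Object getResults(String cacheKey) {
              Object results = CACHE.get(cacheKey);
              if (results == null) {                      // first check, no lock taken
                  synchronized (LOCK) {
                      results = CACHE.get(cacheKey);      // second check: a concurrent thread
                      if (results == null) {              // may have filled the cache while we waited
                          results = queryDatabase(cacheKey);  // hypothetical expensive call
                          CACHE.put(cacheKey, results);
                      }
                  }
              }
              return results;
          }

          private static Object queryDatabase(String cacheKey) {
              // Placeholder for the real data-access call.
              return "results for " + cacheKey;
          }
      }

    The second check is the non-intuitive part the author mentions: it looks redundant, but it is what stops the 9 waiting requests from repeating the database query after the first one has already populated the cache.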
    But the trade-off works in favour of these sites, as they are still able to process 30+ page requests per second, which means they cater to the site traffic while maybe losing 1 customer every once in a while to a competitor who is also using a similar caching technique - and what are the odds that the user will not come back to their site sooner or later?

    Recap

    Resources

    Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug web performance tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

    The Road Ahead

    Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions, etc. - please leave a comment. Next up is 'Load Testing in the Cloud', where I'll be exploring the possibilities of running the test controller/agents in the cloud. See you on the other side! Thank you!

    Read the article
