Search Results

Search found 23949 results on 958 pages for 'test'.

Page 15/958 | < Previous Page | 11 12 13 14 15 16 17 18 19 20 21 22  | Next Page >

  • Test your internet connection - Emtel Mobile Internet

    After yesterday's report on Emtel Fixed Broadband (I'm still wondering where the 'fixed' part is), I did the same tests on Emtel Mobile Internet. For this I'm using the Huawei E169G HSDPA USB stick, connected to the same machine. Actually, this is my fail-safe internet connection, and the system automatically switches to it if a problem, say a timeout, is detected on the main line. For better comparison I used exactly the same servers on Speedtest.net.

    The results

    Following are the results for Rose Hill (hosted by Emtel) and Frankfurt, Germany (hosted by Vodafone DE), respectively: Speedtest.net result of 31.05.2013 between Flic en Flac and Rose Hill, Mauritius (Emtel - Mobile Internet); Speedtest.net result of 31.05.2013 between Flic en Flac and Frankfurt, Germany (Emtel - Mobile Internet). As you can easily see, there is a big difference in speed between national and international connections. More interesting are the results related to the download/upload ratio. I'm not sure whether connections over Emtel Mobile Internet are asymmetric or symmetric like the Fixed Broadband; it might be interesting to find out. The first test result might give us a clue that the connection could be asymmetric with a ratio of 3:1, but again I'm not sure. I'll find out and post an update on this.

    It depends on network coverage

    Later today I was on tour with my tablet, a Samsung Galaxy Tab 10.1 (model GT-P7500) running Android 4.0.4 (Ice Cream Sandwich), and did some more tests using the Speedtest.net app. The results are as expected: in areas with better network coverage you get better results, at least as long as you stay inside the national networks. For anything abroad, it doesn't really matter. But see for yourselves: Speedtest.net result of 31.05.2013 between Cascavelle and servers in Rose Hill, Mauritius (Emtel - Mobile Internet), Port Louis, Mauritius and Kuala Lumpur, Malaysia. It's rather shocking and frustrating to see how the speed to international destinations goes down, and the full capability of the tablet's integrated modem (HSDPA: 21 Mbps; HSUPA: 5.76 Mbps) isn't being used either. I guess this demands more tests in other areas of the island, like Ebene, Pailles or Port Louis. I'll keep you updated...

    The question remains: Alternatives?

    After the publication of the test results on Fixed Broadband I had some exchange with others on Facebook. Sadly, it seems that there are really no alternatives to what Emtel is offering at the moment. There are the various internet packages by Mauritius Telecom feat. Orange, like ADSL, MyT and Mobile Internet, and there is Bharat Telecom with their Bees offer, which is currently limited to Ebene and parts of Quatre Bornes.

    Read the article

  • How to unit test with lots of IO

    - by Eric
    I write Linux embedded software which closely integrates with hardware. My modules include:
    - CMOS video input with a kernel driver (v4l2)
    - Hardware H.264/MPEG-4 encoders (Texas Instruments)
    - Audio capture/playback (ALSA)
    - Network IO
    I'd like to have automated testing for these functionalities, such as integration testing. I am not sure how I can automate this process, since most of the top-level functionality I face is IO-bound. Sure, it is easy to test functions individually, but checking a whole process means depending on tons of external dependencies that are only available at runtime.
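
    For what it's worth, one common way to make this kind of code unit-testable (a sketch only, not taken from the post; the interface and function names below are hypothetical) is to hide each hardware dependency behind a small function-pointer interface, so that a test can substitute a fake that needs no device nodes:

        #include <assert.h>
        #include <stddef.h>
        #include <string.h>

        /* Hypothetical capture interface: production code points read_frame at the
         * real v4l2-backed implementation, while tests plug in a fake. */
        struct frame_source {
            int (*read_frame)(void *ctx, unsigned char *buf, size_t len);
            void *ctx;
        };

        /* Code under test only talks to the interface, never to /dev/video0 directly. */
        static int count_nonzero_frames(struct frame_source *src, int n_frames)
        {
            unsigned char buf[64];
            int count = 0;
            for (int i = 0; i < n_frames; i++) {
                if (src->read_frame(src->ctx, buf, sizeof buf) > 0 && buf[0] != 0)
                    count++;
            }
            return count;
        }

        /* Fake used by the unit test: no hardware involved. */
        static int fake_read_frame(void *ctx, unsigned char *buf, size_t len)
        {
            (void)ctx;
            memset(buf, 1, len);
            return (int)len;
        }

        int main(void)
        {
            struct frame_source fake = { fake_read_frame, NULL };
            assert(count_nonzero_frames(&fake, 5) == 5);
            return 0;
        }

    The real end-to-end paths (actual v4l2 capture, the TI encoders, ALSA) would still need integration tests on the target hardware; the abstraction only carves out the logic that can run anywhere.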

    Read the article

  • Test Your Web Application Using Free Web Apps Security Tools

    Budget restrictions and limited time to test are common factors, and this is where a handful of free and open source web application security testing tools proves practical. The following are tools that must be in your toolkit, or at least on your radar, particularly if you're not able to justify spending the money needed by commercial alternatives. It may be a little more time-consuming and painful, but in the end you're still going to get good results.

    Read the article

  • IE 8 Finishes Last on Google JavaScript Test

    Google last week provided an additional means for users to test JavaScript performance in Web browsers...

    Read the article

  • Google wants to track your purchases outside the Internet sphere, a project in its test phase

    Google wants to track your purchases outside the Internet sphere, a project in its test phase. In early October, Google had announced its intention to better measure the effectiveness of its online ads on sales in physical stores. It must be said that the company is far from the only one; many companies have already tried to establish a link between the number of clicks on an ad and a purchase actually made. Digiday reports that, to achieve this, Google has launched a program which...

    Read the article

  • Deploying, but without those pesky test files!

    - by Chris Skardon
    Silverlight testing is great, we all know that (don’t we??), we’re expected to do it as part of the development process, but once we’ve got an awesome application written and we come to deploy it, we don’t want the test files going out with it… You might be like me, have the files in a Web project – let’s face it, that’s how we’re pushed into doing it… So let’s stick with it!

    Now. I’m deploying via the wonders of the Web Deployment shizzle, but this also applies to the classic ‘installer’ project as well.. Baaaasically, we’re going to use the ‘Debug’ / ‘Release’ configurations to include given files. ??

    OK, you know in the top of your visual studio editor, you (usually) have a drop down which predominantly reads ‘Debug’? Those are ‘configurations’. Mostly we don’t bother changing it, primarily due to laziness, but also the fact that we generally don’t see ‘Release’ as actually doing anything other than making it harder to find problems :) Well today my friends we’re going to change that bad boy… The next few steps are just helping you set up a new ‘Debug’ configuration, but you can just switch to the ‘Release’ configuration and skip to the end…

    First let’s go to the Configuration Manager. There are multiple ways, through the ‘Build’ menu (at the bottom), or via the drop down which currently has ‘Debug’ in it :) Got it? Select ‘New’ from the ‘Active solution configuration’ drop down: Create a new configuration, kind of like the picture below shows (or for those graphically challenged – Name: DebugWithNoTests, and Copy settings from: ‘Debug’, ensuring the ‘Create new project configurations’ checkbox is checked). Press OK. VS will do some shizzle, and in the Configuration manager, you will see pretty much exactly what you did before, only with ‘Debug’ replaced with ‘DebugWithNoTests’. Turn off the build options for the test projects. We won’t need them..

    IF you skipped down from the top, this is where you’ll be wanting to stop!!!

    Close and now we’re one notepad step away from achieving our goals. Yes, I said notepad. You can’t do what we’re going to do in VS. (Pity). Go to the folder where your web project is, and right click on the ‘.csproj’ file. Now open it with notepad. Head on down to the ‘<Content Include’ bits, they’ll look like this:

        <ItemGroup>
          <Content Include="ClientBin\Tests.xap" />
          ...
        </ItemGroup>

    Take this and modify each of the files you don’t want deployed and change to:

        <Content Include="ClientBin\Tests.xap" Condition="'$(Configuration)' == 'Debug'" />

    Once you’ve got that sorted publish your project, once with the Debug configuration selected, and another with any other configuration (‘Release’, ‘DebugWithNoTests’ etc).. No files! Huzzah!

    Read the article

  • New Test Fest Available

    - by Cinzia Mascanzoni
    Test Fest is back by popular demand, and has been included as one of the many partner benefits for attending OPN Exchange this year. Remind your partners to join us from October 1 - 4 in the Marriott Marquis, Juniper Room at Oracle OpenWorld, and be sure to watch and share the new video.

    Read the article

  • Quick introduction to the Web Load Test features of Visual Studio 2010

    Many developers are not even aware that you can set up and run some very sophisticated web load tests for an ASP.NET application right from within Visual Studio. This article provides a quick introduction to the Web Load Test features of Visual Studio 2010. By Peter Bromberg.

    Read the article

  • Test of the Raspberry Pi 5M camera, a tutorial by Nicolargo

    Hello, We previously published the tutorial: Raspberry Pi: Unboxing and installation. As a follow-up, we offer this tutorial: Test of the Raspberry Pi 5M camera. Quote: Raspberry has recently been offering, for less than €25, a camera dedicated to its Pi range. This camera, weighing only a few grams, connects to a Raspberry Pi (model A or B) through a dedicated CSI v2 interface (MIPI camera interface). Thanks to Kubii (a Farnell supplier in France), I was quickly able to obtain...

    Read the article

  • Copied the Reachability test from Apple, but the linker gives a failure

    - by nico
    I have tried to use the Reachability project published by Apple to detect reachability in an example of my own. I copied most of the initialization, but I get this failure from the linker:

        Ld build/switchViews.build/Debug-iphoneos/test.build/Objects-normal/armv6/test normal armv6
        cd /Users/uid04100/Documents/TEST
        setenv IPHONEOS_DEPLOYMENT_TARGET 3.1.3
        setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin"
        /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc-4.2 -arch armv6 -isysroot /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.3.sdk -L/Users/uid04100/Documents/TEST/build/Debug-iphoneos -F/Users/uid04100/Documents/TEST/build/Debug-iphoneos -filelist /Users/uid04100/Documents/TEST/build/switchViews.build/Debug-iphoneos/test.build/Objects-normal/armv6/test.LinkFileList -dead_strip -miphoneos-version-min=3.1.3 -framework Foundation -framework UIKit -framework CoreGraphics -o /Users/uid04100/Documents/TEST/build/switchViews.build/Debug-iphoneos/test.build/Objects-normal/armv6/test

        Undefined symbols:
        "_SCNetworkReachabilitySetCallback", referenced from:
            -[Reachability startNotifer] in Reachability.o
        "_SCNetworkReachabilityCreateWithAddress", referenced from:
            +[Reachability reachabilityWithAddress:] in Reachability.o
        "_SCNetworkReachabilityScheduleWithRunLoop", referenced from:
            -[Reachability startNotifer] in Reachability.o
        "_SCNetworkReachabilityGetFlags", referenced from:
            -[Reachability connectionRequired] in Reachability.o
            -[Reachability currentReachabilityStatus] in Reachability.o
        "_SCNetworkReachabilityUnscheduleFromRunLoop", referenced from:
            -[Reachability stopNotifer] in Reachability.o
        "_SCNetworkReachabilityCreateWithName", referenced from:
            +[Reachability reachabilityWithHostName:] in Reachability.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    My delegate.h:

        #import <UIKit/UIKit.h>

        @class Reachability;

        @interface testAppDelegate : NSObject {
            UIWindow *window;
            UINavigationController *navigationController;
            Reachability* hostReach;
            Reachability* internetReach;
            Reachability* wifiReach;
        }

        @property (nonatomic, retain) IBOutlet UIWindow *window;
        @property (nonatomic, retain) IBOutlet UINavigationController *navigationController;

        @end

    My delegate.m:

        #import "testAppDelegate.h"
        #import "SecondViewController.h"
        #import "Reachability.h"

        @implementation testAppDelegate

        @synthesize window;
        @synthesize navigationController;

        - (void) updateInterfaceWithReachability: (Reachability*) curReach {
            if(curReach == hostReach) {
                BOOL connectionRequired = [curReach connectionRequired];
                if(connectionRequired) {
                    // in these brackets should be some code with sense, if I'm getting it to run
                }
                else {
                }
            }
            if(curReach == internetReach) {
            }
            if(curReach == wifiReach) {
            }
        }

        // Called by Reachability whenever status changes.
        - (void) reachabilityChanged: (NSNotification* )note {
            Reachability* curReach = [note object];
            NSParameterAssert([curReach isKindOfClass: [Reachability class]]);
            [self updateInterfaceWithReachability: curReach];
        }

        - (void)applicationDidFinishLaunching:(UIApplication *)application {
            // Override point for customization after application launch
            // Observe the kNetworkReachabilityChangedNotification. When that notification is posted, the
            // method "reachabilityChanged" will be called.
            //
            [[NSNotificationCenter defaultCenter] addObserver: self selector: @selector(reachabilityChanged:) name: kReachabilityChangedNotification object: nil];
            // Change the host name here to change the server you're monitoring
            hostReach = [[Reachability reachabilityWithHostName: @"www.apple.com"] retain];
            [hostReach startNotifer];
            [self updateInterfaceWithReachability: hostReach];
            internetReach = [[Reachability reachabilityForInternetConnection] retain];
            [internetReach startNotifer];
            [self updateInterfaceWithReachability: internetReach];
            wifiReach = [[Reachability reachabilityForLocalWiFi] retain];
            [wifiReach startNotifer];
            [self updateInterfaceWithReachability: wifiReach];
            [window addSubview:[navigationController view]];
            [window makeKeyAndVisible];
        }

        - (void)dealloc {
            [navigationController release];
            [window release];
            [super dealloc];
        }

        @end
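
    For context (an addition, not part of the question): the undefined _SCNetworkReachability* symbols are declared in the SystemConfiguration framework, which does not appear in the link command above, so a project using Apple's Reachability sample would normally link SystemConfiguration.framework and import its header, roughly:

        // Reachability.h/.m depend on the SystemConfiguration framework; the target
        // must link SystemConfiguration.framework in addition to Foundation, UIKit
        // and CoreGraphics shown in the link line above.
        #import <SystemConfiguration/SystemConfiguration.h>
        #import "Reachability.h"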

    Read the article

  • Kruskal-Wallis test with details on pairwise comparisons

    - by dalloliogm
    The standard stats::kruskal.test function allows me to calculate the Kruskal-Wallis test on a dataset:

        > data(diamonds)
        > kruskal.test(price ~ carat, data = diamonds)

                Kruskal-Wallis rank sum test

        data:  price by carat
        Kruskal-Wallis chi-squared = 50570.15, df = 272, p-value < 2.2e-16

    This is correct: it is giving me a probability that all the groups in the data have the same distribution. However, I would like to have the details for each pairwise comparison, e.g. whether diamonds of colors D and E have the same mean price, as some other software (SPSS) does when you ask for a Kruskal test. I have found kruskalmc from the package pgirmess, which allows me to do what I want:

        > kruskalmc(diamonds$price, diamonds$color)
        Multiple comparison test after Kruskal-Wallis
        p.value: 0.05
        Comparisons
              obs.dif critical.dif difference
        D-E  571.7459     747.4962      FALSE
        D-F 2237.4309     751.5684       TRUE
        D-G 2643.1778     726.9854       TRUE
        D-H 4539.4392     774.4809       TRUE
        D-I 6002.6286     862.0150       TRUE
        D-J 8077.2871    1061.7451       TRUE
        E-F 2809.1767     680.4144       TRUE
        E-G 3214.9237     653.1587       TRUE
        E-H 5111.1851     705.6410       TRUE
        E-I 6574.3744     800.7362       TRUE
        E-J 8649.0330    1012.6260       TRUE
        F-G  405.7470     657.8152      FALSE
        F-H 2302.0083     709.9533       TRUE
        F-I 3765.1977     804.5390       TRUE
        F-J 5839.8562    1015.6357       TRUE
        G-H 1896.2614     683.8760       TRUE
        G-I 3359.4507     781.6237       TRUE
        G-J 5434.1093     997.5813       TRUE
        H-I 1463.1894     825.9834       TRUE
        H-J 3537.8479    1032.7058       TRUE
        I-J 2074.6585    1099.8776       TRUE

    However, this package only allows for one categorical variable (e.g. I can't study the prices clustered by color and by carat, as I can do with kruskal.test), and I don't know anything about the pgirmess package, whether it is maintained or not, or whether it is tested. Can you recommend a package to execute the Kruskal-Wallis test which returns details for every comparison? How would you handle the problem?
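
    Since kruskalmc() above takes only a single grouping factor, one workaround (a sketch, not from the post, and a purely mechanical one: whether the resulting comparisons are meaningful for a two-factor design is a separate statistical question) is to collapse the two classification variables into one combined factor with interaction(), or to fall back on base R's pairwise.wilcox.test() for pairwise comparisons with p-value adjustment:

        library(pgirmess)                      # for kruskalmc()
        data(diamonds, package = "ggplot2")

        # Collapse two categorical variables into a single grouping factor
        grp <- interaction(diamonds$color, diamonds$cut, drop = TRUE)

        # Same call pattern as above, but on the combined factor
        kruskalmc(diamonds$price, grp)

        # Base-R alternative: pairwise Wilcoxon tests with multiplicity adjustment
        pairwise.wilcox.test(diamonds$price, grp, p.adjust.method = "bonferroni")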

    Read the article

  • .NET Test Harness - what should it have?

    - by Conor
    Hi Folks, we have a software house developing code for us on a project, a .NET web service (WCF), and we are also paying for a test harness to be built as a separate billable task on a daily rate. I have just joined the company and am reviewing what we are getting from the software house, and wanted to know what you guys in industry thought about it. Basically what we got was a WinForm that called the web service, with an input area (Web Service Request) to drop our XML into, a Submit button, and a response area for the result of the web response, and that's it... Our internal BA has created all the XML request documents, so there was no logic put into the harness around this. Looking on the net for a definition of a test harness I got this: http://en.wikipedia.org/wiki/Test_harness It states it should have these three things:
    - Automate the testing process.
    - Execute test suites of test cases.
    - Generate associated test reports.
    Clearly we have got none of this, apart from a partial "automate the testing process" via a WinForm. OK, from my development background I would expect someone to produce a WinForm as a test harness five years ago; today I really expect some sort of tooling around this. I explicitly told the software house I expected some sort of tooling (NUnit, NBUnit, SoapUI) so we could create a regression test pack for future use. [Didn't get it, but I asked for this after the requirements were signed off, as I wasn't employed then :)] Would someone be able to clarify whether my requirement for this is unrealistic? I know if I did this, I would use NUnit and TDD and then reuse the test harness as a regression test pack in future. I am interested to see what the community thinks. Cheers

    Read the article

  • playframework auto-test with Jenkins CI - how to wait for completion?

    - by notbrain
    I am trying to set up Jenkins CI for a playframework.org application but am having trouble properly launching play after the auto-test command is run. The tests all run fine, but it seems as though my script is launching both play auto-test and play start --%ci at the same time. When the play start --%ci command runs, it gets a pid and everything, but it's not running.

    FILE: auto-test.sh, jenkins runs this with execute shell

        #!/bin/bash
        # pwd is jenkins workspace dir
        # change into approot dir
        cd customer-portal;

        # kill any previous play launches
        if [ -e "server.pid" ]
        then
            kill `cat server.pid`;
            rm -rf server.pid;
        fi

        # drop and re-create the DB
        mysql --user=USER --password=PASS --host=HOSTNAME < ../setupdb.sql

        # auto-test the most recent build
        /usr/local/lib/play/play auto-test;

        # this is inadequate for waiting for auto-test to complete?
        # how to wait for actual process completion?
        # sleep 60;
        wait;

        # Conditional start based on tests
        # Launch normal on pass, test on fail
        #
        if [ -e "./test-result/result.passed" ]
        then
            /usr/local/lib/play/play start --%ci;
            exit 0;
        else
            /usr/local/lib/play/play test;
            exit 1;
        fi

    Read the article

  • Test Results window in VS2008 not showing results

    - by TimK
    I have an existing solution that has been working for a long time, containing around 600 tests in a couple of test projects. I recently moved to a new PC - it's Win7-x64, and I installed a fresh copy of VS2008. When I first opened the solution on the new machine, the Test List Editor was completely empty. Trying to create a new test list caused the editor to refresh, and now it shows my test lists, but they're acting funny. I can select tests in the lists, and run them, but the results window doesn't usually update automatically to show the results of the latest test. It has done this when running a single test a couple of times, but even that is not consistent. The only way I can view the results is by manually going to the Test Runs window and connecting to individual test runs. When I do that, the results show up in the results list, but I can't check them to re-run the failed tests - the check boxes are all disabled. I guess I should describe the way it used to work, in case that was unusual - I used to select some tests from the Test Lists window, tell it to run them, and the results window would clear itself, and then display the results from the current run. I could then check any tests that I wanted to re-run, and use the run/debug button in the results window to do so. Any ideas what's going on here?

    Read the article

  • postgresql table for storing automation test results

    - by Martin
    I am building an automation test suite which is running on multiple machines, all reporting their status to a postgresql database. We will run a number of automated tests, for which we will store the following information:
    - test ID (a GUID)
    - test name
    - test description
    - status (running, done, waiting to be run)
    - progress (%)
    - start time of test
    - end time of test
    - test result
    - latest screenshot of the running test (updated every 30 seconds)
    The number of tests isn't huge (say a few thousand) and each machine (say, 50 of them) has a service which checks the database and figures out if it's time to start a new automated test on that machine. How should I organize my SQL table to store all the information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and create a new table with more columns), how should I proceed? Should the new attributes just be in a different table? I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I just store the screenshots in their own table to simplify the replication? Thanks!
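
    For what it's worth, a minimal single-table sketch of the attributes listed above (PostgreSQL; the column names and types are assumptions, and this is not meant to settle the versioning or replication questions raised):

        CREATE TABLE test_run (
            test_id      uuid PRIMARY KEY,   -- the GUID
            name         text NOT NULL,
            description  text,
            status       text NOT NULL,      -- 'waiting', 'running', 'done'
            progress     integer,            -- percent complete
            started_at   timestamp,
            finished_at  timestamp,
            result       text,
            screenshot   bytea               -- latest screenshot, refreshed ~every 30 s
        );

    Splitting the frequently rewritten screenshot column into a separate table keyed by test_id is the obvious variant if it turns out to get in the way of replication.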

    Read the article

  • A new name for unit tests

    - by Will
    I never used to like unit testing. I always thought it increased the amount of work I had to do. Turns out, that's only true in terms of the actual number of lines of code you write, and furthermore, this is completely offset by the increase in the number of lines of useful code that you can write in an hour with tests and test-driven development. Now I love unit tests, as they allow me to write useful code that quite often works first time! (knock on wood) I have found that people are reluctant to do unit tests or start a project with test-driven development if they are under strict timelines or in an environment where others don't do it, so they don't. Kinda like a cultural refusal to even try. I think one of the most powerful things about unit testing is the confidence that it gives you to undertake refactoring. It also gives new-found hope that I can give my code to someone else to refactor/improve, and if my unit tests still work, I can use the new version of the library that they modified, pretty much, without fear. It's this last aspect of unit testing that I think needs a new name. The unit test is more like a contract of what this code should do now, and in the future. When I hear the word testing, I think of mice in cages, with multiple experiments done on them to see the effectiveness of a compound. This is not what unit testing is; we're not trying out different code to see which is the most effective approach, we're defining what outputs we expect with what inputs. In the mice example, unit tests are more like the definitions of how the universe will work, as opposed to the experiments done on the mice. Am I on crack, or does anyone else see this refusal to do testing, and do they think it's for a similar reason that they don't want to do it? What reasons do you / others give for not testing? What do you think their motivations are in not unit testing? And as a new name for unit testing that might get over some of the objections, how about jContract? (A bit Java-centric I know :), or Unit Contracts?

    Read the article

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a mono-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do: I have a list of tests, with the following attributes:
    - uri: a URI to test (could be HTTP/HTTPS/SSH/local);
    - depends: an associative array of tests/values that this test depends on;
    - join: a list of DB joins to be added when selecting items to process in this test;
    - depends_db: additional conditions to add to the DB request when selecting items to process in this test.
    The program builds a dependency tree, beginning with the tests that have no dependencies. For each test:
    - a list of items is selected from the database using the conditions (results of depending tests, joins and depends_db);
    - the list of items is sent to the URI (using POST or stdin);
    - the result is retrieved as a YAML file listing the state and comments for the test for each tested item;
    - the results are stored in the DB;
    - the test returns, allowing depending tests to be performed.
    The program then generates reports (CSV, DB, graphviz) of the performed tests. The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:
    - backup: hosted on the backup machine(s), called through HTTP, checks whether the machines' backup went well;
    - DNS: hosted on the local machine, called via stdin, checks whether the machines' fqdn have a valid DNS entry.
    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?
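
    As a sketch of the dependency handling only (the test definitions below are made up, and the field names are taken from the list above), the ordering implied by the depends attribute maps directly onto a topological sort, which Python's standard library can provide:

        from graphlib import TopologicalSorter  # Python 3.9+

        # Hypothetical test definitions, shaped roughly as described above.
        tests = {
            "backup": {"uri": "https://backup.example/check", "depends": {}},
            "dns":    {"uri": "stdin:check_dns",              "depends": {}},
            "report": {"uri": "https://report.example/run",   "depends": {"backup": "ok", "dns": "ok"}},
        }

        # Each test depends on the keys of its 'depends' map.
        graph = {name: set(spec["depends"]) for name, spec in tests.items()}

        # static_order() yields tests with all dependencies first; independent
        # tests could instead be grouped into batches and dispatched in parallel.
        for name in TopologicalSorter(graph).static_order():
            print("would run", name, "->", tests[name]["uri"])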

    Read the article

  • Android instrumentation test suite

    - by siri
    Hi, I have written two test cases in the package com.app.myapp.test. When I try to run them, not both of them get executed; only one test case runs and then it stops. I have written the following test suite, AllTests.java, in the same package:

        public class AllTests extends TestSuite {
            public static Test suite() {
                return new TestSuiteBuilder(AllTests.class)
                        .includePackages("./src/com.ni.mypaint.test", "./src/com.ni.mpaint.test")
                        .build();
                /* .includeAllPackagesUnderHere()
                   .build(); */
            }
        }

    Is the code and location for this test suite correct?
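
    For comparison (a sketch, not a verdict on the setup above): TestSuiteBuilder expects Java package names rather than source paths, so an AllTests living in com.app.myapp.test would more typically look like this, assuming the test classes really are in that package:

        import android.test.suitebuilder.TestSuiteBuilder;
        import junit.framework.Test;
        import junit.framework.TestSuite;

        public class AllTests extends TestSuite {
            public static Test suite() {
                // Package names, not "./src/..." paths; includeAllPackagesUnderHere()
                // is the usual shortcut when AllTests sits in the test package itself.
                return new TestSuiteBuilder(AllTests.class)
                        .includePackages("com.app.myapp.test")
                        .build();
            }
        }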

    Read the article

  • 12.04 on Pentium Dual Core with 1GB of RAM running slow

    - by Alex
    Hey, I have a Lenovo ThinkPad laptop with Ubuntu 12.04 installed. It runs slow. I tried "System Profiler and Benchmark" to test the computer, but the application quits and closes after the first few benchmark tests, before it even gets to the other tests. So I tried "Hardinfo", which is installed on the Puppy Linux live CD; that did the same thing (the apps look just alike). Memory usage isn't the problem on this PC, it's the CPU processes. Just running the "System Profiler" app that comes with Ubuntu uses about 34% on each core; by default, with nothing running, it's 5-10% on each core. I can't really find what the deal is, other than that Ubuntu is a CPU hog, so I'm testing Unity 2D at the moment to see how it goes. If you have any other suggestions, feel free to answer this question. Thanks

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl5.8 with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea not to catch when your unit test dies prematurely. What is the most idiomatic way of running a configurable number of tests, assuming no no_plan or done_testing() is used? Details: My unit tests usually take the form of:

        use Test::More;
        my @test_set = (
            [ "Test #1", $param1, $param2, ... ]
           ,[ "Test #2", $param1, $param2, ... ]
            # ,...
        );
        foreach my $test (@test_set) {
            run_test($test);
        }
        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;
        BEGIN {
            our @test_set = ();    # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }
        our @test_set;    # Must do!!! Since the first "our" was in BEGIN's scope :(
        foreach my $test (@test_set) { run_test($test); }    # Same
        sub run_test {}    # Same

    I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicated our @test_set declaration - in BEGIN{} and after it.
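
    One arrangement that avoids both the BEGIN block and the duplicated our declaration (a sketch under the assumption that plan() may be called at run time as long as it happens before the first assertion, which Test::More supported well before 0.80; count_tests() here is a stand-in) is to compute the count first and then plan:

        use strict;
        use warnings;
        use Test::More;    # no plan yet

        my @test_set = (
            [ "Test #1", 'param1', 'param2' ],
            [ "Test #2", 'param1', 'param2' ],
        );

        # Work out the plan at run time, before any test has been run.
        my $expected_tests = 0;
        $expected_tests += count_tests($_) for @test_set;
        plan tests => $expected_tests;

        run_test($_) for @test_set;

        sub count_tests { return 1 }    # stand-in: one ok() per test below
        sub run_test {
            my ($test) = @_;
            ok(defined $test->[0], "$test->[0] has a name") || diag("failed: $test->[0]");
        }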

    Read the article

  • Regular expression test can't decide between true and false (JavaScript)

    - by nw
    I get this behavior in both Chrome (Developer Tools) and Firefox (Firebug). Note that the regex test returns alternating true/false values:

        > var re = /.*?\bbl.*\bgr.*/gi;
        undefined
        > re
        /.*?\bbl.*\bgr.*/gi
        > re.test("Blue-Green");
        true
        > re.test("Blue-Green");
        false
        > re.test("Blue-Green");
        true
        > re.test("Blue-Green");
        false

    However, testing the same regex as a literal:

        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true
        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true
        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true
        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true

    I can't explain this, and it's making debugging very difficult. Can anyone explain this behavior?
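
    For what it's worth (explanation added here, it is not part of the original post): this is the documented behaviour of the g flag rather than a browser bug. A global regex keeps its position in the lastIndex property between test() calls, so the second call resumes the search after the previous match and fails, which resets lastIndex and makes the third call succeed again. Resetting lastIndex, or dropping the g flag, gives consistent results:

        var re = /.*?\bbl.*\bgr.*/gi;

        console.log(re.test("Blue-Green"), re.lastIndex); // true, lastIndex advances past the match
        console.log(re.test("Blue-Green"), re.lastIndex); // false, the failed match resets lastIndex to 0
        console.log(re.test("Blue-Green"));               // true again: hence the alternating results

        re.lastIndex = 0;                                 // reset after each successful test() for consistent answers
        console.log(re.test("Blue-Green"));               // true
        re.lastIndex = 0;
        console.log(re.test("Blue-Green"));               // true

        console.log(/.*?\bbl.*\bgr.*/i.test("Blue-Green")); // or simply drop the g flag: true every time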

    Read the article

  • A basic load test question

    - by user236131
    I have a very basic load test question. I am running a load test using VSTS 2008 and I have a test rig with a controller + 10 agents. This load test is against a SharePoint farm I have. My goal for the load test is to find out the resource utilization on the web + app + DB tiers of my farm for any given load scenario. An example of a load scenario is:
    - Usage profile: Average collaboration (as defined by SCCP)
    - User load: 500 (using a step load pattern = a step of 50 every 2 mins and a warm-up time of 2 mins for every step)
    - Think time: 0
    - Load duration: 8 hrs
    Now, the question is: is it fair to expect that metrics like requests/sec, % processor time on the web front end / app / DB, tests/sec, etc. become flat or enter a steady state at some point during the load test? Like I said, the goal is not to create a bottleneck but only to measure the utilization of resources by the above load profile. I am asking this question because I see something different. At one point in the load test, requests/sec becomes more or less flat, but processor utilization on the web/DB servers keeps increasing. After digging through the data a bit, I see that the "tests running" counter also steadily increased over time. So, if I run the load test for more than 8 hrs, % processor may go up further. This way, I don't know what to consider as the load exerted by the load profile. What does this "tests running" counter really signify? How is this different from tests/sec? Another question is: how can I find out why the "tests running" counter shows an increase over time? Thanks for your time

    Read the article

  • Pass command line arguments to JUnit test case being run programmatically

    - by __nv__
    I am attempting to run a JUnit test from a Java class with:

        JUnitCore core = new JUnitCore();
        core.addListener(new RunListener());
        core.run(classToRun);

    The problem is that my JUnit test requires a database connection that is currently hardcoded in the JUnit test itself. What I am looking for is a way to run the JUnit test programmatically (as above) but pass a database connection to it that I create in the Java class that runs the test, rather than having it hardcoded within the JUnit class. Basically something like:

        JUnitCore core = new JUnitCore();
        core.addListener(new RunListener());
        core.addParameters(java.sql.Connection);
        core.run(classToRun);

    Then within the classToRun:

        @Test
        public void Test1(Connection dbConnection) {
            Statement st = dbConnection.createStatement();
            ResultSet rs = st.executeQuery("select total from dual");
            rs.next();
            String myTotal = rs.getString("TOTAL");
            // btw my tests are selenium testcases :)
            selenium.isTextPresent(myTotal);
        }

    I know about @Parameters, but it doesn't seem applicable here, as it is more for running the same test case multiple times with differing values. I want all of my test cases to share a database connection that I pass in through a configuration file to my Java client, which then runs those test cases (also passed in through the configuration file). Is this possible? P.S. I understand this seems like an odd way of doing things.
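
    One widely used workaround (a sketch only; JUnit 4 test methods cannot take parameters, and the property name and JDBC URL below are made up for the example) is to hand the connection settings over through a system property, or a static holder, that the test reads in its own setup:

        import org.junit.Before;
        import org.junit.Test;
        import org.junit.runner.JUnitCore;

        import java.sql.Connection;
        import java.sql.DriverManager;

        public class DbTest {
            private Connection dbConnection;

            @Before
            public void setUp() throws Exception {
                // The launcher sets "test.db.url" before invoking JUnitCore;
                // the property name is hypothetical.
                dbConnection = DriverManager.getConnection(System.getProperty("test.db.url"));
            }

            @Test
            public void totalIsPresent() throws Exception {
                // ... use dbConnection exactly as in the original Test1 example ...
            }

            // The launching class, after reading its configuration file, would do roughly:
            public static void main(String[] args) {
                System.setProperty("test.db.url", "jdbc:h2:mem:demo"); // placeholder URL from the config file
                new JUnitCore().run(DbTest.class);
            }
        }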

    Read the article

  • Homoscedasticity test for Two-Way ANOVA

    - by aL3xa
    I've been using var.test and bartlett.test to check basic ANOVA assumptions, among others homoscedasticity (homogeneity, equality of variances). The procedure is quite simple for a One-Way ANOVA:

        bartlett.test(x ~ g) # where x is numeric, and g is a factor
        var.test(x ~ g)

    But for 2x2 tables, i.e. Two-Way ANOVAs, I want to do something like this:

        bartlett.test(x ~ c(g1, g2)) # or with a list; see the latter:
        var.test(x ~ list(g1, g2))

    Of course, ANOVA assumptions can be checked with graphical procedures, but what about "an arithmetic option"? Is that, at all, manageable? How do you test homoscedasticity in a Two-Way ANOVA?
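
    One way to get a single grouping out of the two factors (a sketch of one common approach, not necessarily the only arithmetic option the post is after) is to combine them with interaction(), so bartlett.test() still sees a one-factor grouping:

        # g1 and g2 are the two factors of the two-way design, x is the numeric response
        bartlett.test(x ~ interaction(g1, g2))

        # equivalently, build the combined factor explicitly and use the default interface
        g12 <- interaction(g1, g2)
        bartlett.test(x, g12)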

    Read the article

  • JUnit Best Practice: Different Fixtures for each @Test

    - by Juri Glass
    Hi, I understand that there are @Before and @BeforeClass, which are used to define fixtures for the @Tests. But what should I use if I need a different fixture for each @Test? Should I define the fixture in the @Test? Should I create a test class for each @Test? I am asking for best practices here, since neither solution seems clean in my opinion. With the first solution, I would be testing the initialization code. And with the second solution I would break the "one test class for each class" pattern.
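
    A middle ground that often comes up in practice (a sketch, not a recommendation from the post; the Account class is invented only to make the example self-contained) is to keep one test class and let each @Test build exactly the fixture it needs through small private helpers, reserving @Before for what is genuinely shared:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class AccountTest {

            // Invented class under test, only so the sketch compiles on its own.
            static class Account {
                private int balance;
                Account(int balance) { this.balance = balance; }
                void deposit(int amount) { balance += amount; }
                int balance() { return balance; }
            }

            // One tiny helper per fixture instead of one test class per fixture.
            private Account emptyAccount() { return new Account(0); }
            private Account richAccount()  { return new Account(1000); }

            @Test
            public void depositIncreasesBalance() {
                Account account = emptyAccount();   // fixture specific to this test
                account.deposit(50);
                assertEquals(50, account.balance());
            }

            @Test
            public void richAccountKeepsItsOpeningBalance() {
                Account account = richAccount();    // a different fixture, same test class
                assertEquals(1000, account.balance());
            }
        }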

    Read the article
