Search Results

Search found 25284 results on 1012 pages for 'test driven'.

Page 20/1012 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • Is there any way to test how the site will perform under load

    - by Pankaj Upadhyay
    I have made an ASP.NET MVC website and hosted it with a shared hosting provider. Since my website is built around a fairly general idea, it might attract a number of concurrent users at some point in the future. So I was thinking of a way to test my website's performance under load: how will the site behave when 100 or 1,000 users are online at the same time and browsing the website? This will also help me understand whether my LINQ queries are well written or not.
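
    A minimal sketch of that kind of load test, using only the Python standard library and a placeholder URL rather than the author's actual site, is to fire a batch of concurrent requests and look at the latency spread:

      import time
      from concurrent.futures import ThreadPoolExecutor
      from urllib.request import urlopen

      URL = "http://example.com/"   # hypothetical target, not the actual site
      CONCURRENT_USERS = 100        # simulated simultaneous visitors

      def hit(url):
          start = time.perf_counter()
          with urlopen(url, timeout=30) as resp:
              resp.read()
              status = resp.status
          return status, time.perf_counter() - start

      with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
          results = list(pool.map(hit, [URL] * CONCURRENT_USERS))

      latencies = sorted(t for _, t in results)
      print("requests:", len(results))
      print("errors  :", sum(1 for s, _ in results if s != 200))
      print("median  :", latencies[len(latencies) // 2])
      print("slowest :", latencies[-1])

    Dedicated load-testing tools (if the shared host allows them) give more realistic ramp-up behaviour, but even a small script like this tends to expose slow LINQ queries quickly.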

    Read the article

  • Test your internet connection - Emtel Mobile Internet

    After yesterday's report on Emtel Fixed Broadband (I'm still wondering where the 'fixed' part is), I did the same tests on Emtel Mobile Internet. For this I'm using the Huawei E169G HSDPA USB stick, connected to the same machine. Actually, this is my fail-safe internet connection, and the system automatically switches over to it if a problem (say, a timeout) is detected on the main line. For better comparison I used exactly the same servers on Speedtest.net.

    The results: Following are the results for Rose Hill (hosted by Emtel) and Frankfurt, Germany (hosted by Vodafone DE) respectively: Speedtest.net result of 31.05.2013 between Flic en Flac and Rose Hill, Mauritius (Emtel - Mobile Internet); Speedtest.net result of 31.05.2013 between Flic en Flac and Frankfurt, Germany (Emtel - Mobile Internet). As you can easily see, there is a big difference in speed between national and international connections. More interesting are the results related to the download and upload ratio. I'm not sure whether connections over Emtel Mobile Internet are asymmetric or symmetric like the Fixed Broadband; it might be interesting to find out. The first test result might actually give us a clue that the connection could be asymmetric, with a ratio of 3:1, but again I'm not sure. I'll find out and post an update on this.

    It depends on network coverage: Later today I was on tour with my tablet, a Samsung Galaxy Tab 10.1 (model GT-P7500) running Android 4.0.4 (Ice Cream Sandwich), and did some more tests using the Speedtest.net app. The results are as expected: in areas with better network coverage you will get better results, at least as long as you stay inside the national networks. For anything abroad, it doesn't really matter. But see for yourselves: Speedtest.net result of 31.05.2013 between Cascavelle and servers in Rose Hill, Mauritius (Emtel - Mobile Internet), Port Louis, Mauritius and Kuala Lumpur, Malaysia. It's rather shocking and frustrating to see how the speed to international destinations goes down. And the full capability of the tablet's integrated modem (HSDPA: 21 Mbps; HSUPA: 5.76 Mbps) isn't used either. I guess this demands more tests in other areas of the island, like Ebene, Pailles or Port Louis. I'll keep you updated...

    The question remains: alternatives? After the publication of the test results on Fixed Broadband I had some exchanges with others on Facebook. Sadly, it seems that there are really no alternatives to what Emtel is offering at the moment. There are the various internet packages by Mauritius Telecom feat. Orange, like ADSL, MyT and Mobile Internet, and there is Bharat Telecom with their Bees offer, which is currently limited to Ebene and parts of Quatre Bornes.

    Read the article

  • How to unit test with lots of IO

    - by Eric
    I write embedded Linux software which integrates closely with hardware. My modules include:
    - CMOS video input with a kernel driver (v4l2)
    - hardware h264/mpeg4 encoders (Texas Instruments)
    - audio capture/playback (ALSA)
    - network IO
    I'd like to have automated testing for these functionalities, such as integration testing. I am not sure how I can automate this process since most of the top-level functionalities I face are IO bound. Sure, it is easy to test functions individually, but checking the whole process means depending on tons of external dependencies that are only available at runtime.
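
    One common way to make this testable, sketched below under invented names, is to hide each IO-bound dependency behind a small interface so the same pipeline code can run against a fake source on a build machine and against the real v4l2/ALSA devices on the target:

      from abc import ABC, abstractmethod

      class FrameSource(ABC):
          """Boundary for the capture hardware; the real one would wrap v4l2."""
          @abstractmethod
          def read_frame(self) -> bytes: ...

      class FakeFrameSource(FrameSource):
          """Feeds canned frames so the pipeline can be exercised without hardware."""
          def __init__(self, frames):
              self._frames = iter(frames)

          def read_frame(self) -> bytes:
              return next(self._frames)

      def run_pipeline(source: FrameSource, n_frames: int) -> int:
          # Stand-in for "capture -> encode -> send"; here it just counts bytes consumed.
          return sum(len(source.read_frame()) for _ in range(n_frames))

      def test_pipeline_consumes_all_frames():
          source = FakeFrameSource([b"\x00" * (640 * 480)] * 3)
          assert run_pipeline(source, 3) == 3 * 640 * 480

    The hardware-facing implementations then get a smaller set of on-target tests, while everything above the boundary runs automatically on every build.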

    Read the article

  • Test Your Web Application Using Free Web Apps Security Tools

    Budget restrictions and limited time to test are common factors, and this is where a handful of free and open source web application security testing tools proves practical. The following are tools that should be in your toolkit, or at least on your radar, particularly if you're not able to justify spending the money that commercial alternatives demand. It may be a little more time-consuming and painful, but in the end you're still going to get good results.

    Read the article

  • IE 8 Finishes Last on Google JavaScript Test

    Google last week provided an additional means for users to test JavaScript performance in Web browsers....

    Read the article

  • Google wants to track your purchases outside the internet sphere, a project in its test phase

    In early October, Google announced its intention to better measure the impact of its online ads on sales in physical stores. It must be said that the company is far from the only one; many companies have already sought to establish a link between the number of clicks on an ad and a purchase actually made. Digiday reports that, to achieve this, Google has launched a program that...

    Read the article

  • Use a custom value object or a Guid as an entity identifier in a distributed system?

    - by Kazark
    tl;dr: I've been told that in domain-driven design, an identifier for an entity could be a custom value object, i.e. something other than Guid, string, int, etc. Can this really be advisable in a distributed system?

    Long version: I will invent a situation analogous to the one I am currently facing. Say I have a distributed system in which a central concept is an egg. The system allows you to order eggs and see spending reports and inventory-centric data such as quantity on hand, usage, valuation and what have you. There are a variety of services backing these behaviors. And say there is also another app which allows you to compose recipes that link to a particular egg type. Now egg type is broken down by the species—ostrich, goose, duck, chicken, quail. This is fine and dandy because it means that users don't end up with ostrich eggs when they wanted quail eggs and whatnot. However, we've been getting complaints because jumbo chicken eggs are not even close to equivalent to small ones. The price is different, and they really aren't substitutable in recipes. And here we thought we were doing users a favor by not overwhelming them with too many options.

    Currently each of the services (say, OrderSubmitter, EggTypeDefiner, SpendingReportsGenerator, InventoryTracker, RecipeCreator, RecipeTracker, or whatever) is identifying egg types with an industry-standard integer representing the species (let's call it speciesCode). We realize we've goofed up because this change could affect every service. There are two basic proposed solutions:
    1. Use a predefined identifier type like Guid as the eggTypeID throughout all the services, but make EggTypeDefiner the only service that knows that this maps to a speciesCode and eggSizeCode (and potentially to an isOrganic flag in the future, or whatever).
    2. Use an EggTypeID value object which is a combination of speciesCode and eggSizeCode in every service.

    I've proposed the first solution because I'm hoping it better encapsulates the definition of what an egg type is in the EggTypeDefiner and will be more resilient to changes, say if some people now want to differentiate eggs by whether or not they are "organic". The second solution is being suggested by some people who understand DDD better than I do, in the hope that less enrichment and lookup will be necessary that way, with the justification that in DDD using a value object as an ID is fine. Also, they are saying that EggTypeDefiner is not a domain and EggType is not an entity and as such should not have a Guid for an ID. However, I'm not sure the second solution is viable. This "value object" is going to have to be serialized into JSON and into URLs for GET requests and used with a variety of technologies (C#, JavaScript...), which breaks encapsulation and thus removes any behavior of the identifier value object (is either of the fields optional? etc.). Is this a case where we want to avoid something that would normally be fine in DDD because we are trying to do DDD in a distributed fashion?

    Summary: Can it be a good idea to use a custom value object as an identifier in a distributed system (solution #2)?
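
    To make the serialization concern concrete, here is a rough sketch of solution #2 as an immutable value object that round-trips through a plain string so it can appear in JSON and in GET URLs. The field names come from the question; the "species-size" string encoding is purely an assumption for illustration:

      from dataclasses import dataclass

      @dataclass(frozen=True)
      class EggTypeID:
          species_code: int
          egg_size_code: int

          def __str__(self) -> str:
              # Wire format that every service (and every language) would have to agree on.
              return f"{self.species_code}-{self.egg_size_code}"

          @classmethod
          def parse(cls, raw: str) -> "EggTypeID":
              species, size = raw.split("-")
              return cls(int(species), int(size))

      egg_id = EggTypeID(species_code=3, egg_size_code=7)
      assert EggTypeID.parse(str(egg_id)) == egg_id   # round-trips for URL/JSON use

    Any behavior beyond that round-trip (optional fields, validation, future attributes such as isOrganic) would have to be re-implemented in each consuming technology, which is exactly the trade-off the question is weighing.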

    Read the article

  • Deploying, but without those pesky test files!

    - by Chris Skardon
    Silverlight testing is great, we all know that (don’t we??), we’re expected to do it as part of the development process, but once we’ve got an awesome application written and we come to deploy it, we don’t want the test files going out with it… You might be like me, have the files in a Web project – let’s face it, that’s how we’re pushed into doing it… So let’s stick with it!

    Now. I’m deploying via the wonders of the Web Deployment shizzle, but this also applies to the classic ‘installer’ project as well.. Baaaasically, we’re going to use the ‘Debug’ / ‘Release’ configurations to include given files.

    OK, you know in the top of your visual studio editor, you (usually) have a drop down which predominantly reads ‘Debug’? Those are ‘configurations’. Mostly we don’t bother changing it, primarily due to laziness, but also the fact that we generally don’t see ‘Release’ as actually doing anything other than making it harder to find problems :) Well today my friends we’re going to change that bad boy…

    The next few steps are just helping you set up a new ‘Debug’ configuration, but you can just switch to the ‘Release’ configuration and skip to the end…

    First let’s go to the Configuration Manager. There are multiple ways, through the ‘Build’ menu (at the bottom), or via the drop down which currently has ‘Debug’ in it :) Got it?

    Select ‘New’ from the ‘Active solution configuration’ drop down: Create a new configuration, kind of like the picture below shows (or for those graphically challenged – Name: DebugWithNoTests, and Copy settings from: ‘Debug’, ensuring the ‘Create new project configurations’ checkbox is checked). Press OK.

    VS will do some shizzle, and in the Configuration manager, you will see pretty much exactly what you did before, only with ‘Debug’ replaced with ‘DebugWithNoTests’. Turn off the build options for the test projects. We won’t need them..

    IF you skipped down from the top, this is where you’ll be wanting to stop!!!

    Close and now we’re one notepad step away from achieving our goals. Yes, I said notepad. You can’t do what we’re going to do in VS. (Pity). Go to the folder where your web project is, and right click on the ‘.csproj’ file. Now open it with notepad.

    Head on down to the ‘<Content Include’ bits, they’ll look like this:

      <ItemGroup>
        <Content Include="ClientBin\Tests.xap" />
        ...
      </ItemGroup>

    Take this and modify each of the files you don’t want deployed and change to:

      <Content Include="ClientBin\Tests.xap" Condition="'$(Configuration)' == 'Debug'" />

    Once you’ve got that sorted publish your project, once with the Debug configuration selected, and another with any other configuration (‘Release’, ‘DebugWithNoTests’ etc).. No files! Huzzah!

    Read the article

  • New Test Fest Available

    - by Cinzia Mascanzoni
    Test Fest is back by popular demand, and has been included as one of the many partner benefits for attending OPN Exchange this year. Remind your partners to join us from October 1 - 4 in the Marriott Marquis, Juniper Room at Oracle OpenWorld, and be sure to watch and share the new video.

    Read the article

  • Quick introduction to the Web Load Test features of Visual Studio 2010

    Many developers are not even aware that you can set up and run some very sophisticated web load tests for an ASP.NET application right from within Visual Studio. This article provides a quick introduction to the Web Load Test features of Visual Studio 2010. By Peter Bromberg.

    Read the article

  • Test of the Raspberry Pi 5M camera, a tutorial by Nicolargo

    Hello, we previously published the tutorial: Raspberry Pi: unboxing and installation. As a follow-up, we propose this tutorial: Test of the Raspberry Pi 5M camera. Quote: Raspberry has recently been offering, for less than €25, a camera dedicated to its Pi range. This camera, weighing only a few grams, connects to a Raspberry Pi (model A or B) through a dedicated CSI v2 interface (MIPI camera interface). Thanks to Kubii (a Farnell supplier in France), I was able to obtain one quickly...

    Read the article

  • Free-space driven log rotation on linux?

    - by kdt
    Someone just asked me 'how long should we keep logs for our application', and my answer was 'until the disk is full' as there's no reason to throw them away other than running out of space. However, standard logrotate wants us to specify a specific period + number of rotations. Is there something similar that would let us say "rotate daily, and keep as much history as you like until there is only 5% space free"? The platform is Redhat Linux.
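
    Taking the question's premise that standard logrotate only counts periods and rotations, a rough sketch of the "keep history until only 5% is free" idea, meant to run from cron after the normal daily rotation and using an assumed /var/log/myapp directory rather than any real setup, would be to prune the oldest archives until the filesystem is back above the threshold:

      import os
      import shutil
      from pathlib import Path

      LOG_DIR = Path("/var/log/myapp")   # hypothetical directory of rotated logs
      MIN_FREE_FRACTION = 0.05           # stop deleting once 5% of the disk is free

      def free_fraction(path: Path) -> float:
          usage = shutil.disk_usage(path)
          return usage.free / usage.total

      # Oldest archives first; only rotated files (e.g. app.log.1.gz), never the live log.
      rotated = sorted(LOG_DIR.glob("*.log.*"), key=lambda p: p.stat().st_mtime)

      for old_log in rotated:
          if free_fraction(LOG_DIR) >= MIN_FREE_FRACTION:
              break
          os.remove(old_log)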

    Read the article

  • Empty mails received from Plesk-driven server

    - by goreSplatter
    From my Plesk-managed server I keep receiving emails addressed to the configured administrator (me). Nothing special there. These mails come from an empty sender, with no body, no subject, nor any other relevant headers. I have changed the server administrator's e-mail address to an alternate one, which is then the recipient. The mails arrive at irregular intervals: allowing for variations in the timestamp, I get an email every 30 minutes; then there's a break, and two and a half hours later I get the same email again. I have already turned off crond for more than a day to see if it is the source of the problem. There is no "foreign" software running on that server which would cause this behavior. I have no more guesses as to why this is happening. Any suggestions?

    Read the article

  • Tool to launch a script driven by modem activity

    - by Will M
    Can anyone suggest a software tool (preferably under Windows XP or later) that would launch an application or script in response to a phone call being received on a landline connected to a data modem on the same PC? Or, better, in response to a sequence of touch-tones being played over such a phone line. This would allow, for example, using the telephone to manipulate firewall settings so as to create another layer of security in connection with remote internet access to that computer. I seem to recall seeing tools to do this sort of thing in the days before broadband internet access, when there was more attention to various tips and tricks for the dial-up modem, but a few attempts at Google haven't turned anything up.

    Read the article

  • copied the reachability-test from apple, but the linker gives a failure

    - by nico
    i have tried to use the reachability-project published by apple to detect a reachability in an own example. i copied the most initialization, but i get this failure in the linker: Ld build/switchViews.build/Debug-iphoneos/test.build/Objects-normal/armv6/test normal armv6 cd /Users/uid04100/Documents/TEST setenv IPHONEOS_DEPLOYMENT_TARGET 3.1.3 setenv PATH "/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Developer/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin" /Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/gcc-4.2 -arch armv6 -isysroot /Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS3.1.3.sdk -L/Users/uid04100/Documents/TEST/build/Debug-iphoneos -F/Users/uid04100/Documents/TEST/build/Debug-iphoneos -filelist /Users/uid04100/Documents/TEST/build/switchViews.build/Debug-iphoneos/test.build/Objects-normal/armv6/test.LinkFileList -dead_strip -miphoneos-version-min=3.1.3 -framework Foundation -framework UIKit -framework CoreGraphics -o /Users/uid04100/Documents/TEST/build/switchViews.build/Debug-iphoneos/test.build/Objects-normal/armv6/test Undefined symbols: "_SCNetworkReachabilitySetCallback", referenced from: -[Reachability startNotifer] in Reachability.o "_SCNetworkReachabilityCreateWithAddress", referenced from: +[Reachability reachabilityWithAddress:] in Reachability.o "_SCNetworkReachabilityScheduleWithRunLoop", referenced from: -[Reachability startNotifer] in Reachability.o "_SCNetworkReachabilityGetFlags", referenced from: -[Reachability connectionRequired] in Reachability.o -[Reachability currentReachabilityStatus] in Reachability.o "_SCNetworkReachabilityUnscheduleFromRunLoop", referenced from: -[Reachability stopNotifer] in Reachability.o "_SCNetworkReachabilityCreateWithName", referenced from: +[Reachability reachabilityWithHostName:] in Reachability.o ld: symbol(s) not found collect2: ld returned 1 exit status my delegate.h: import @class Reachability; @interface testAppDelegate : NSObject { UIWindow *window; UINavigationController *navigationController; Reachability* hostReach; Reachability* internetReach; Reachability* wifiReach; } @property (nonatomic, retain) IBOutlet UIWindow *window; @property (nonatomic, retain) IBOutlet UINavigationController *navigationController; @end my delegate.m: import "testAppDelegate.h" import "SecondViewController.h" import "Reachability.h" @implementation testAppDelegate @synthesize window; @synthesize navigationController; (void) updateInterfaceWithReachability: (Reachability*) curReach { if(curReach == hostReach) { BOOL connectionRequired= [curReach connectionRequired]; if(connectionRequired) { //in these brackets schould be some code with sense, if i´m getting it to run } else { } } if(curReach == internetReach) { } if(curReach == wifiReach) { } } //Called by Reachability whenever status changes. - (void) reachabilityChanged: (NSNotification* )note { Reachability* curReach = [note object]; NSParameterAssert([curReach isKindOfClass: [Reachability class]]); [self updateInterfaceWithReachability: curReach]; } (void)applicationDidFinishLaunching:(UIApplication *)application { // Override point for customization after application launch // Observe the kNetworkReachabilityChangedNotification. When that notification is posted, the // method "reachabilityChanged" will be called. 
// [[NSNotificationCenter defaultCenter] addObserver: self selector: @selector(reachabilityChanged:) name: kReachabilityChangedNotification object: nil]; //Change the host name here to change the server your monitoring hostReach = [[Reachability reachabilityWithHostName: @"www.apple.com"] retain]; [hostReach startNotifer]; [self updateInterfaceWithReachability: hostReach]; internetReach = [[Reachability reachabilityForInternetConnection] retain]; [internetReach startNotifer]; [self updateInterfaceWithReachability: internetReach]; wifiReach = [[Reachability reachabilityForLocalWiFi] retain]; [wifiReach startNotifer]; [self updateInterfaceWithReachability:wifiReach]; [window addSubview:[navigationController view]]; [window makeKeyAndVisible]; } (void)dealloc { [navigationController release]; [window release]; [super dealloc]; } @end

    Read the article

  • Kruskal-Wallis test with details on pairwise comparisons

    - by dalloliogm
    The standard stats::kruskal.test function lets you run the Kruskal-Wallis test on a dataset:

      > data(diamonds)
      > kruskal.test(price ~ carat, data = diamonds)

      Kruskal-Wallis rank sum test
      data: price by carat by color
      Kruskal-Wallis chi-squared = 50570.15, df = 272, p-value < 2.2e-16

    This is correct; it gives me a probability that all the groups in the data have the same mean. However, I would like to have the details for each pairwise comparison (for example, whether diamonds of colors D and E have the same mean price), as some other software (SPSS) does when you ask for a Kruskal test. I have found kruskalmc from the package pgirmess, which allows me to do what I want:

      > kruskalmc(diamonds$price, diamonds$color)
      Multiple comparison test after Kruskal-Wallis
      p.value: 0.05
      Comparisons
          obs.dif critical.dif difference
      D-E  571.7459    747.4962      FALSE
      D-F 2237.4309    751.5684       TRUE
      D-G 2643.1778    726.9854       TRUE
      D-H 4539.4392    774.4809       TRUE
      D-I 6002.6286    862.0150       TRUE
      D-J 8077.2871   1061.7451       TRUE
      E-F 2809.1767    680.4144       TRUE
      E-G 3214.9237    653.1587       TRUE
      E-H 5111.1851    705.6410       TRUE
      E-I 6574.3744    800.7362       TRUE
      E-J 8649.0330   1012.6260       TRUE
      F-G  405.7470    657.8152      FALSE
      F-H 2302.0083    709.9533       TRUE
      F-I 3765.1977    804.5390       TRUE
      F-J 5839.8562   1015.6357       TRUE
      G-H 1896.2614    683.8760       TRUE
      G-I 3359.4507    781.6237       TRUE
      G-J 5434.1093    997.5813       TRUE
      H-I 1463.1894    825.9834       TRUE
      H-J 3537.8479   1032.7058       TRUE
      I-J 2074.6585   1099.8776       TRUE

    However, this package only allows for one categorical variable (e.g. I can't study prices clustered by color and by carat, as I can with kruskal.test), and I don't know anything about the pgirmess package, whether it is maintained or tested. Can you recommend a package to run the Kruskal-Wallis test which returns details for every comparison? How would you handle the problem?

    Read the article

  • .NET Test Harness - what should it have?

    - by Conor
    Hi folks, we have a software house developing code for us on a project, a .NET Web Service (WCF), and we are also paying for a test harness to be built as a separate billable task on a daily rate. I have just joined the company, am reviewing what we are getting from the software house, and wanted to know what you guys in industry think about it. Basically, what we got was a WinForm that calls the web service, with an input area (Web Service Request) to drop our XML into, a Submit button, and a response area for the Web Response result, and that's it... Our internal BA has created all the XML request documents, so no logic was put into the harness around this.

    Looking on the Net for a definition of a test harness I got this: http://en.wikipedia.org/wiki/Test_harness It states it should do these three things:
    - Automate the testing process.
    - Execute test suites of test cases.
    - Generate associated test reports.
    Clearly we have got none of this apart from a partial "Automate the testing process" via a WinForm. From my development background, I would have expected someone to produce a WinForm as a test harness 5 years ago; today there really should be some sort of tooling around this. I explicitly told the software house I expected some sort of tooling (NUnit, NBUnit, SoapUI) so we could create a regression test pack for future use. [Didn’t get it, but I asked for this after the requirements were signed off, as I wasn’t employed then :)] Could someone tell me whether my requirement here is unrealistic? I know that if I did this myself, I would use NUnit and TDD and then reuse the test harness as a regression test pack in future. I am interested to see what the community thinks. Cheers
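
    For what the post calls a regression test pack, a rough sketch of the usual data-driven shape (shown here in Python with pytest, with hypothetical file paths and endpoint, since the BA's XML requests already exist) is to turn every request document into one automated test case:

      from pathlib import Path
      from urllib.request import Request, urlopen

      import pytest

      REQUEST_DIR = Path("testdata/requests")      # hypothetical: one .xml file per case
      ENDPOINT = "http://localhost:8080/Service"   # hypothetical service address

      @pytest.mark.parametrize("request_file",
                               sorted(REQUEST_DIR.glob("*.xml")),
                               ids=lambda p: p.stem)
      def test_service_accepts_request(request_file):
          body = request_file.read_bytes()
          req = Request(ENDPOINT, data=body, headers={"Content-Type": "text/xml"})
          with urlopen(req, timeout=30) as resp:
              payload = resp.read()
              status = resp.status
          assert status == 200
          # If an expected response exists alongside the request, compare against it.
          expected = request_file.with_suffix(".expected.xml")
          if expected.exists():
              assert payload == expected.read_bytes()

    The same pattern works equally well with NUnit's TestCaseSource in .NET; the point is that the existing request documents become the regression suite, with no manual WinForm clicking involved.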

    Read the article

  • playframework auto-test Jenkins CI wait for completion?

    - by notbrain
    I am trying to set up Jenkins CI for a playframework.org application but am having trouble properly launching Play after the auto-test command is run. The tests all run fine, but it seems as though my script is launching both play auto-test and play start --%ci at the same time. When the play start --%ci command runs, it gets a pid and everything, but it's not running. FILE: auto-test.sh (Jenkins runs this with "Execute shell"):

      #!/bin/bash
      # pwd is jenkins workspace dir
      # change into approot dir
      cd customer-portal;

      # kill any previous play launches
      if [ -e "server.pid" ]
      then
          kill `cat server.pid`;
          rm -rf server.pid;
      fi

      # drop and re-create the DB
      mysql --user=USER --password=PASS --host=HOSTNAME < ../setupdb.sql

      # auto-test the most recent build
      /usr/local/lib/play/play auto-test;

      # this is inadequate for waiting for auto-test to complete?
      # how to wait for actual process completion?
      # sleep 60;
      wait;

      # Conditional start based on tests
      # Launch normal on pass, test on fail
      #
      if [ -e "./test-result/result.passed" ]
      then
          /usr/local/lib/play/play start --%ci;
          exit 0;
      else
          /usr/local/lib/play/play test;
          exit 1;
      fi

    Read the article

  • Test Results window in VS2008 not showing results

    - by TimK
    I have an existing solution that has been working for a long time, containing around 600 tests in a couple of test projects. I recently moved to a new PC - it's Win7-x64, and I installed a fresh copy of VS2008. When I first opened the solution on the new machine, the Test List Editor was completely empty. Trying to create a new test list caused the editor to refresh, and now it shows my test lists, but they're acting funny. I can select tests in the lists, and run them, but the results window doesn't usually update automatically to show the results of the latest test. It has done this when running a single test a couple of times, but even that is not consistent. The only way I can view the results is by manually going to the Test Runs window and connecting to individual test runs. When I do that, the results show up in the results list, but I can't check them to re-run the failed tests - the check boxes are all disabled. I guess I should describe the way it used to work, in case that was unusual - I used to select some tests from the Test Lists window, tell it to run them, and the results window would clear itself, and then display the results from the current run. I could then check any tests that I wanted to re-run, and use the run/debug button in the results window to do so. Any ideas what's going on here?

    Read the article

  • postgresql table for storing automation test results

    - by Martin
    I am building an automation test suite which runs on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests, for which we will store the following information:
    - test ID (a GUID)
    - test name
    - test description
    - status (running, done, waiting to be run)
    - progress (%)
    - start time of test
    - end time of test
    - test result
    - latest screenshot of the running test (updated every 30 seconds)
    The number of tests isn't huge (say a few thousand), and each machine (say, 50 of them) has a service which checks the database and figures out whether it's time to start a new automated test on that machine. How should I organize my SQL table to store all this information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and re-create the table with more columns), how should I proceed? Should the new attributes just go in a different table? I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I store the screenshots in their own table to simplify replication? Thanks!
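
    As a starting point for the single-table, column-per-attribute layout, here is a rough sketch with assumed table and column names, issued through psycopg2 against a hypothetical database. Keeping the bulky screenshot in a separate table keyed by test_id would also make it easier to leave out of replication:

      import psycopg2

      DDL = """
      CREATE TABLE IF NOT EXISTS automated_tests (
          test_id      uuid PRIMARY KEY,
          name         text NOT NULL,
          description  text,
          status       text NOT NULL CHECK (status IN ('waiting', 'running', 'done')),
          progress     integer NOT NULL DEFAULT 0 CHECK (progress BETWEEN 0 AND 100),
          started_at   timestamptz,
          finished_at  timestamptz,
          result       text
      );

      -- Screenshots kept apart so they can be refreshed every 30 seconds and
      -- excluded from replication without touching the main table.
      CREATE TABLE IF NOT EXISTS test_screenshots (
          test_id      uuid PRIMARY KEY REFERENCES automated_tests (test_id),
          captured_at  timestamptz NOT NULL,
          image        bytea NOT NULL
      );
      """

      with psycopg2.connect("dbname=automation") as conn:   # hypothetical connection string
          with conn.cursor() as cur:
              cur.execute(DDL)

    Adding a future attribute is then just an ALTER TABLE ... ADD COLUMN with a default, which does not break existing readers.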

    Read the article

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a single-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do. I have a list of tests, with the following attributes:
    - uri: a URI to test (could be HTTP/HTTPS/SSH/local);
    - depends: an associative array of tests/values that this test depends on;
    - join: a list of DB joins to be added when selecting items to process in this test;
    - depends_db: additional conditions to add to the DB query when selecting items to process in this test.
    The program builds a dependency tree, beginning with the tests that have no dependencies. For each test:
    - a list of items is selected from the database using the conditions (results of the depending tests, joins and depends_db);
    - the list of items is sent to the URI (using POST or stdin);
    - the result is retrieved as a YAML file listing the state and comments of the test for each tested item;
    - the results are stored in the DB;
    - the test returns, allowing depending tests to be performed.
    Finally, the program generates reports (CSV, DB, graphviz) of the performed tests. The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:
    - backup: hosted on the backup machine(s), called through HTTP, checks whether the machines' backups went well;
    - DNS: hosted on the local machine, called via stdin, checks whether the machines' FQDNs have valid DNS entries.
    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?
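
    Since the question leans towards Python, here is a rough sketch of the dependency-ordering core using the standard library's graphlib (Python 3.9+). The test names and the dependency map are invented; the real program would also feed each depending test's results into the item-selection query:

      from graphlib import TopologicalSorter

      # test name -> set of tests it depends on (the "depends" attribute above)
      depends = {
          "dns":    set(),
          "backup": {"dns"},
          "report": {"backup", "dns"},
      }

      ts = TopologicalSorter(depends)
      ts.prepare()
      while ts.is_active():
          ready = ts.get_ready()        # tests whose dependencies have all completed
          for test in ready:            # these could be dispatched to workers in parallel
              print("running", test)    # select items, POST to the URI, store results...
              ts.done(test)

    Everything returned by one get_ready() call is independent, which is exactly the set of tests that can safely run in parallel.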

    Read the article

  • android instrumentation testsuite

    - by siri
    Hi, I have written two test cases in the package com.app.myapp.test. When I try to run them, not both of them get executed; only one test case runs and then it stops. I have written the following test suite, AllTests.java, in the same package:

      public class AllTests extends TestSuite {
          public static Test suite() {
              return new TestSuiteBuilder(AllTests.class)
                  .includePackages("./src/com.ni.mypaint.test", "./src/com.ni.mpaint.test")
                  .build();
              /* .includeAllPackagesUnderHere()
                 .build(); */
          }
      }

    Are the code and the location of this test suite correct?

    Read the article

  • 12.04 on Pentium Dual Core with 1GB of RAM running slow

    - by Alex
    Hey, I have a Lenovo ThinkPad laptop with Ubuntu 12.04 installed, and it runs slow. I tried "System Profiler and Benchmark" to test the computer, but the application quits and closes after the first few benchmark tests, before it even gets to the other ones. So I tried "Hardinfo", which is installed on the Puppy Linux live CD; that did the same thing (the apps look just alike). Memory usage isn't the problem on this PC, it's the CPU load: just running the "System Profiler" app that comes with Ubuntu uses about 34% on each core, while by default, with nothing running, it's 5-10% on each core. I can't really find what the deal is, other than that Ubuntu is a CPU hog, so I'm testing Unity 2D at the moment to see how it goes. If you have any other suggestions, feel free to answer this question. Thanks.

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl5.8 with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_tests - it's generally a bad idea not to catch when your unit test dies prematurely. What is the most idiomatic way of running a configurable number of tests, assuming no no_tests or done_testing() is used?

    Details: My unit tests usually take the form of:

      use Test::More;
      my @test_set = (
          [ "Test #1", $param1, $param2, ... ]
         ,[ "Test #1", $param1, $param2, ... ]
         # ,...
      );
      foreach my $test (@test_set) {
          run_test($test);
      }
      sub run_test {
          # $expected_tests += count_tests($test);
          ok(test1($test)) || diag("Test1 failed");
          # ...
      }

    The standard approach of use Test::More tests => 23; or BEGIN {plan tests => 23} does not work since both are obviously executed before @tests is known. My current approach involves making @tests global and defining it in the BEGIN {} block as follows:

      use Test::More;
      BEGIN {
          our @test_set = ();     # Same set of tests as above
          my $expected_tests = 0;
          foreach my $test (@tests) {
              $expected_tests += count_tests($test);
          }
          plan tests => $expected_tests;
      }
      our @test_set; # Must do!!! Since first "our" was in BEGIN's scope :(
      foreach my $test (@test_set) { # Same
          run_test($test);
      }
      sub run_test {} # Same

    I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate our @test_set declarations - in BEGIN{} and after it.

    Read the article

  • Regular expression test can't decide between true and false (JavaScript)

    - by nw
    I get this behavior in both Chrome (Developer Tools) and Firefox (Firebug). Note the regex test returns alternating true/false values:

      > var re = /.*?\bbl.*\bgr.*/gi;
      undefined
      > re
      /.*?\bbl.*\bgr.*/gi
      > re.test("Blue-Green");
      true
      > re.test("Blue-Green");
      false
      > re.test("Blue-Green");
      true
      > re.test("Blue-Green");
      false

    However, testing the same regex as a literal:

      > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
      true
      > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
      true
      > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
      true
      > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
      true

    I can't explain this and it's making debugging very difficult. Can anyone explain this behavior?

    Read the article
