Search Results

Search found 30312 results on 1213 pages for 'tactial test team'.

Page 31/1213 | < Previous Page | 27 28 29 30 31 32 33 34 35 36 37 38  | Next Page >

  • playframework auto-test Jenkins CI wait for completion?

    - by notbrain
    I am trying to set up Jenkins CI for a playframework.org application but am having trouble properly launching play after the auto-test command is run. The tests all run fine, but it seems as though my script is launching both play auto-test and play start --%ci at the same time. When the play start --%ci command runs, it gets a pid and everything, but it's not running.

    FILE: auto-test.sh (Jenkins runs this with an Execute Shell build step):

        #!/bin/bash
        # pwd is jenkins workspace dir
        # change into approot dir
        cd customer-portal;

        # kill any previous play launches
        if [ -e "server.pid" ]
        then
            kill `cat server.pid`;
            rm -rf server.pid;
        fi

        # drop and re-create the DB
        mysql --user=USER --password=PASS --host=HOSTNAME < ../setupdb.sql

        # auto-test the most recent build
        /usr/local/lib/play/play auto-test;

        # this is inadequate for waiting for auto-test to complete?
        # how to wait for actual process completion?
        # sleep 60;
        wait;

        # Conditional start based on tests
        # Launch normal on pass, test on fail
        if [ -e "./test-result/result.passed" ]
        then
            /usr/local/lib/play/play start --%ci;
            exit 0;
        else
            /usr/local/lib/play/play test;
            exit 1;
        fi

    Read the article

  • Test Results window in VS2008 not showing results

    - by TimK
    I have an existing solution that has been working for a long time, containing around 600 tests in a couple of test projects. I recently moved to a new PC - it's Win7-x64, and I installed a fresh copy of VS2008. When I first opened the solution on the new machine, the Test List Editor was completely empty. Trying to create a new test list caused the editor to refresh, and now it shows my test lists, but they're acting funny. I can select tests in the lists, and run them, but the results window doesn't usually update automatically to show the results of the latest test. It has done this when running a single test a couple of times, but even that is not consistent. The only way I can view the results is by manually going to the Test Runs window and connecting to individual test runs. When I do that, the results show up in the results list, but I can't check them to re-run the failed tests - the check boxes are all disabled. I guess I should describe the way it used to work, in case that was unusual - I used to select some tests from the Test Lists window, tell it to run them, and the results window would clear itself, and then display the results from the current run. I could then check any tests that I wanted to re-run, and use the run/debug button in the results window to do so. Any ideas what's going on here?

    Read the article

  • Test Driven Development with vxml

    - by Malcolm Anderson
    It's been 3 years since I did any coding and I am starting back up with Java using NetBeans and Glassfish. Right off the bat I noticed two things about Java's ease of use. The Java IDE (NetBeans) has finally caught up with Visual Studio, and JUnit has finally caught up with NUnit: NetBeans IntelliSense exists and I don't have to subclass everything in JUnit. Now on to the point of this very short post (request): I'm trying to figure out how to do test driven development with VXML and have not found anything yet. I've done my Google search, but unfortunately, TDD in IVR land has something to do with helping the hearing impaired. I've found a VXML simulator or two, but none of their marketing is getting my hopes up. My request: if you have done any agile engineering work with VXML, contact me - I need to pick your brain and bring some ideas back to my team. Thanks in advance.

    Read the article

  • Team matchups for Dota Bot

    - by Dan
    I have a Ghost++ bot that hosts games of Dota (a Warcraft 3 map that is played 5 players versus 5 players) and I'm trying to come up with good formulas to balance the players going into a match based on their records (I have game history for several thousand games). I'm familiar with some of the concepts required to match up players, like confidence based on the sample size of the number of games they played, and also parameter approximation and degrees of freedom, and thus throwing out any variables that don't contribute enough to the r^2. My bot collects quite a few variables for each player from each game.

    The important ones:
    - Win/Lose/Game did not finish
    - # of Player Kills
    - # of Player Deaths
    - # of Kills the player assisted

    The not so important ones:
    - # of enemy creep kills
    - # of creep sneak attacks
    - # of neutral creep kills
    - # of Tower kills
    - # of Rax kills
    - # of courier kills

    Quick explanation: the kills/deaths don't determine who wins, but the gold gained and lost from them is usually enough to tilt the game. Tower/rax kills are the goal of the game (once a team loses all their towers/rax their throne can be attacked; if that is destroyed they lose), but I don't really count these as important because it is pretty random who gets the credit for the tower kill, and chances are if you destroy a tower it is only because some other player is doing well and distracting the other team elsewhere on the map.

    I'm getting a bit confused when trying to deal with the fact that 5 players are on a team, so ultimately each individual isn't that responsible for the team winning or losing. Take a player that is really good at killing and has 40 kills and only 10 deaths, but in their 5 games they've only won 1. Should I give him extra credit for such a high kill score despite losing? (When losing it is hard to keep a positive kill/death ratio.) Or should I dock him for losing, assuming that despite the nice kill/death ratio he probably plays in a really greedy way, only looking out for himself and not helping the team? Ultimately I don't think I have to guess at questions like this because I have so much data... but I don't really know how to look at the data to answer questions like this. Can anyone help me come up with formulas to help team balance and predict the outcome? Thanks, Dan
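    Not part of the original question, but as an illustration of the kind of formula being asked about, here is a minimal sketch of an Elo-style team rating update. Everything here is an assumption made for illustration: the K constant, scoring an unfinished game as 0.5, and averaging individual ratings into a team rating.

        public class EloSketch {
            static final double K = 32.0; // assumed update step size

            // Expected probability that a team rated rA beats a team rated rB.
            static double expectedScore(double rA, double rB) {
                return 1.0 / (1.0 + Math.pow(10.0, (rB - rA) / 400.0));
            }

            // Team rating taken as the mean of its players' ratings (an assumption).
            static double teamRating(double[] playerRatings) {
                double sum = 0.0;
                for (double r : playerRatings) {
                    sum += r;
                }
                return sum / playerRatings.length;
            }

            // Rating delta applied to every player on team A after one game.
            // actualScore: 1.0 for a win, 0.0 for a loss, 0.5 if the game did not finish.
            static double ratingDelta(double[] teamA, double[] teamB, double actualScore) {
                double expected = expectedScore(teamRating(teamA), teamRating(teamB));
                return K * (actualScore - expected);
            }

            public static void main(String[] args) {
                double[] sentinel = {1500, 1480, 1520, 1600, 1450};
                double[] scourge = {1550, 1500, 1490, 1510, 1530};
                System.out.println("Delta per Sentinel player on a win: "
                        + ratingDelta(sentinel, scourge, 1.0));
            }
        }

    Kill/death/assist counts could then be folded in as a small adjustment on top of the win/loss delta rather than as a separate score, which keeps winning or losing the dominant signal.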

    Read the article

  • postgresql table for storing automation test results

    - by Martin
    I am building an automation test suite which runs on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests, for which we will store the following information:

    - test ID (a GUID)
    - test name
    - test description
    - status (running, done, waiting to be run)
    - progress (%)
    - start time of test
    - end time of test
    - test result
    - latest screenshot of the running test (updated every 30 seconds)

    The number of tests isn't huge (say a few thousand) and each machine (say, 50 of them) has a service which checks the database and figures out if it's time to start a new automated test on that machine. How should I organize my SQL table to store all the information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and create a new table with more columns), how should I proceed? Should the new attributes just be in a different table? I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I just store the screenshots in their own table to simplify the replication? Thanks!

    Read the article

  • A new name for unit tests

    - by Will
    I never used to like unit testing. I always thought it increased the amount of work I had to do. Turns out, that's only true in terms of the actual number of lines of code you write, and furthermore, this is completely offset by the increase in the number of lines of useful code that you can write in an hour with tests and test driven development. Now I love unit tests, as they allow me to write useful code that quite often works first time! (knock on wood)

    I have found that people are reluctant to do unit tests or start a project with test driven development if they are under strict timelines or in an environment where others don't do it, so they don't. Kinda like a cultural refusal to even try.

    I think one of the most powerful things about unit testing is the confidence that it gives you to undertake refactoring. It also gives new-found hope that I can give my code to someone else to refactor/improve, and if my unit tests still work, I can use the new version of the library that they modified, pretty much, without fear. It's this last aspect of unit testing that I think needs a new name. The unit test is more like a contract of what this code should do now, and in the future.

    When I hear the word testing, I think of mice in cages, with multiple experiments done on them to see the effectiveness of a compound. This is not what unit testing is; we're not trying out different code to see what is the most effective approach, we're defining what outputs we expect with what inputs. In the mice example, unit tests are more like the definitions of how the universe will work as opposed to the experiments done on the mice.

    Am I on crack, or does anyone else see this refusal to do testing, and do they think it's a similar reason they don't want to do it? What reasons do you / others give for not testing? What do you think their motivations are in not unit testing? And as a new name for unit testing that might get over some of the objections, how about jContract? (A bit Java centric I know :), or Unit Contracts?

    Read the article

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a mono-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do: I have a list of tests, with the following attributes:

    - uri: a URI to test (could be HTTP/HTTPS/SSH/local);
    - depends: an associative array of tests/values that this test depends on;
    - join: a list of DB joins to be added when selecting items to process in this test;
    - depends_db: additional conditions to add to the DB request when selecting items to process in this test.

    The program builds a dependency tree, beginning with the tests that have no dependencies. For each test:

    - a list of items is selected from the database using the conditions (results of depending tests, joins and depends_db);
    - the list of items is sent to the URI (using POST or stdin);
    - the result is retrieved as a YAML file listing the state and comments for the test for each tested item;
    - the results are stored in the DB;
    - the test returns, allowing depending tests to be performed.

    Finally, the program generates reports (CSV, DB, Graphviz) of the performed tests. The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:

    - backup: hosted on the backup machine(s), called through HTTP, checks if the machines' backups went well;
    - DNS: hosted on the local machine, called via stdin, checks if the machines' fqdn have a valid DNS entry.

    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?

    Read the article

  • Android instrumentation test suite

    - by siri
    Hi, I have written two test cases in a package com.app.myapp.test. When I try to run them, only one test case gets executed and then it stops; the other never runs. I have written the following test suite, AllTests.java, in the same package:

        public class AllTests extends TestSuite {
            public static Test suite() {
                return new TestSuiteBuilder(AllTests.class)
                        .includePackages("./src/com.ni.mypaint.test", "./src/com.ni.mpaint.test")
                        .build();
                /* .includeAllPackagesUnderHere()
                   .build(); */
            }
        }

    Is the code and location for this test suite correct?
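    Not from the question, but for comparison: as far as I recall, the stock android.test.suitebuilder.TestSuiteBuilder expects dotted Java package names rather than filesystem paths, along the lines of the sketch below (the package name used here is just the one mentioned in the question).

        import android.test.suitebuilder.TestSuiteBuilder;

        import junit.framework.Test;
        import junit.framework.TestSuite;

        public class AllTests extends TestSuite {
            public static Test suite() {
                // includePackages() takes package names, not "./src/..." paths
                return new TestSuiteBuilder(AllTests.class)
                        .includePackages("com.app.myapp.test")
                        .build();
            }
        }

    The suite class itself would normally live inside that same test package and be referenced from the instrumentation runner configuration.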

    Read the article

  • Web part error on Team Foundation Server Project Portal Sharepoint site...

    - by user304671
    Hi all, I recently installed TFS 2010 with the included SharePoint services on a single server. I am getting the following error multiple times on the project dashboard web page of the TFS 2010 project portal, after creating a brand new project in the default project collection. I am not an expert with WSS, so any guidance will be greatly appreciated. After reading a few articles I understand there are some DLLs that are probably not declared as safe in a web.config file, but I am not sure which DLLs they are and where the web.config file is. I looked at IIS to determine the directory structure, but the structure in IIS is quite different from the URL path... Thanks very much.

    Web Part Error: A Web Part or Web Form Control on this Page cannot be displayed or imported. The type is not registered as safe.

    Thanks in advance.

    Read the article

  • 12.04 on Pentium Dual Core with 1GB of RAM running slow

    - by Alex
    Hey, I have a Lenovo ThinkPad laptop with Ubuntu 12.04 installed. It runs slow. I tried "System Profiler and Benchmark" to test the computer, but the application quits and closes after the first few benchmark tests, before it even gets to the other tests. So I tried "Hardinfo", which is installed on the Puppy Linux live CD; that did the same thing (the apps look just alike). Memory usage isn't the problem on this PC, it's the CPU processes. Just running the "System Profiler" app that comes with Ubuntu uses about 34% on each core; by default, with nothing running, it's 5-10% on each core. I can't really find what the deal is, other than that Ubuntu is a CPU hog, so I'm testing Unity 2D at the moment to see how it goes. If you have any other suggestions, feel free to answer this question. Thanks.

    Read the article

  • Team Foundation Server vs. SVN and other source control systems

    - by micha12
    We are currently looking for a version control system to use in our projects. Up to now we have been using VSS, but nowadays more powerful source control systems exist, like TFS, SVN, etc. We are planning to migrate our projects to Visual Studio 2010, so the first idea coming to mind is to start using TFS 2010. I have never worked with SVN or other version control systems. My question is: how good is TFS compared to other source control systems? Is it a good idea to use it, or should we rather use SVN (or any other system)? Thank you.

    Read the article

  • Visual Studio Team System 2008 Code Coverage Window Closes

    - by ThoughtCrhyme
    Trying to run the code coverage tool in Visual Studio for a set of unit tests. Adam from Think First, Code Later has had the same problem: "I wanted to get the code coverage metrics for the project. Naturally, I fire up the solution in Visual Studio 2008, go to the Test menu, click Edit Test Run Configurations, and click Local Test Run. I then click Code Coverage to turn on code coverage for a given assembly and POOF the Local Test Run Configuration window just disappears." He recommends installing this hotfix to fix the problem; however, a) when I run that hotfix I get the message "None of the products that are addressed by this software update are installed on this computer. Click Cancel to exit setup." and b) there is no Silverlight in our solution. Any other ideas for a fix?

    Read the article

  • Dot Net Code Coverage Test Tools - there is now a choice

    - by TATWORTH
    I have been pleasantly surprised this week to discover that there is a choice of tools for measuring code coverage. If you have a Visual Studio Team edition and you are using MSTest, then you have built-in code coverage; however, even then you may need a standalone tool. The tools I have found are (costs are per seat):

    1) NCover - http://www.ncover.com/ (from $199 to $658 per seat). I have used it but it is very expensive.
    2) PartCover - http://sourceforge.net/projects/partcover/ - Free! Steep initial learning curve to get it to work.
    3) dotCover from http://www.jetbrains.com/dotcover/ - Personal licence normally $99 but at an introductory price of $75, and free for open-source developers (details at http://www.jetbrains.com/dotcover/buy/buy.jsp#opensource_).
    4) Test Matrix from http://submain.com/products/testmatrix.aspx - $149 for a licence.

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl 5.8 with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_tests - it's generally a bad idea not to catch when your unit test dies prematurely. What is the most idiomatic way of running a configurable number of tests, assuming no no_tests or done_testing() is used?

    Details: My unit tests usually take the form of:

        use Test::More;
        my @test_set = (
             [ "Test #1", $param1, $param2, ... ]
            ,[ "Test #1", $param1, $param2, ... ]
            # ,...
        );
        foreach my $test (@test_set) {
            run_test($test);
        }
        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN {plan tests => 23} does not work since both are obviously executed before @tests is known. My current approach involves making @tests global and defining it in the BEGIN {} block as follows:

        use Test::More;
        BEGIN {
            our @test_set = (); # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@tests) {
                my $expected_tests += count_tests($test);
            }
            plan tests = $expected_tests;
        }
        our @test_set; # Must do!!! Since first "our" was in BEGIN's scope :(
        foreach my $test (@test_set) {
            run_test($test);
        } # Same
        sub run_test {} # Same

    I feel this can be done more idiomatically but am not certain how to improve. Chief among the smells is the duplicate "our @test_set" declaration - in BEGIN{} and after it.

    Read the article

  • Regular expression test can't decide between true and false (JavaScript)

    - by nw
    I get this behavior in both Chrome (Developer Tools) and Firefox (Firebug). Note the regex test returns alternating true/false values:

        > var re = /.*?\bbl.*\bgr.*/gi;
        undefined
        > re
        /.*?\\bbl.*\\bgr.*/gi
        > re.test("Blue-Green");
        true
        > re.test("Blue-Green");
        false
        > re.test("Blue-Green");
        true
        > re.test("Blue-Green");
        false

    However, testing the same regex as a literal:

        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true
        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true
        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true
        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true

    I can't explain this and it's making debugging very difficult. Can anyone explain this behavior?

    Read the article

  • How can I copy files in the middle of a build in Team System?

    - by Dana
    I have two solutions that I want to include in a build. Solution two requires the DLLs from solution one to build successfully. Solution two has a Binaries folder where the DLLs from solution one need to be copied before building solution two. I've been trying an AfterBuild target, hoping that it would copy the items after the first SolutionToBuild, but it doesn't fire then. I'm guessing that it would probably fire after both solutions have compiled, but that's not what I want.

        <SolutionToBuild Include="$(BuildProjectFolderPath)/../../Main/Framework.sln">
          <Targets>AfterCompileFramework</Targets>
          <Properties></Properties>
        </SolutionToBuild>
        <SolutionToBuild Include="$(BuildProjectFolderPath)/../../../Dashboard/Main/Dashboard.sln">
          <Targets></Targets>
          <Properties></Properties>
        </SolutionToBuild>

        <ItemGroup>
          <FrameworkBinaries Include="$(DropLocation)\$(BuildNumber)\Release\Framework.*.dll"/>
        </ItemGroup>
        <Message Text="FrameworkBinaries: @(FrameworkBinaries)" Importance="high"/>
        <Copy SourceFiles="@(FrameworkBinaries)" DestinationFolder="$(BuildProjectFolderPath)/../../../Dashboard/Main/Binaries"/>

    Read the article

  • A basic load test question

    - by user236131
    I have a very basic load test question. I am running a load test using VSTS 2008 and I have a test rig with a controller + 10 agents. This load test is against a SharePoint farm I have. The goal of my load test is to find out the resource utilization on the web/app/DB tiers of my farm for any given load scenario. An example of a load scenario is:

    - Usage profile: Average collaboration (as defined by SCCP)
    - User load: 500 (using a step load pattern = a step of 50 every 2 mins and a warm-up time of 2 mins for every step)
    - Think time: 0
    - Load duration: 8 hrs

    Now, the question is: is it fair to expect that metrics like requests/sec, % processor time on web front end / app / DB, tests/sec, etc. become flat or enter a steady state at some point in time during the load test? Like I said, the goal is not to create a bottleneck but only to measure the utilization of resources by the above load profile. I am asking this question because I see something different. At one point in the load test, requests/sec becomes more or less flat, but processor utilization on the web/DB servers keeps increasing. After digging through the data a bit, I see that the "tests running" counter also steadily increased over time. So, if I run the load test for more than 8 hrs, % processor may go up further. This way, I don't know what to consider as the load exerted by the load profile. What does this "tests running" counter really signify? How is this different from tests/sec? Another question is: how can I find out why the "tests running" counter shows an increase over time? Thanks for your time.

    Read the article

  • Pass command line arguments to JUnit test case being run programmatically

    - by __nv__
    I am attempting to run a JUnit test from a Java class with:

        JUnitCore core = new JUnitCore();
        core.addListener(new RunListener());
        core.run(classToRun);

    Problem is my JUnit test requires a database connection that is currently hardcoded in the JUnit test itself. What I am looking for is a way to run the JUnit test programmatically (above) but pass it a database connection that I create in my Java class that runs the test, rather than hardcoding it within the JUnit class. Basically something like:

        JUnitCore core = new JUnitCore();
        core.addListener(new RunListener());
        core.addParameters(java.sql.Connection);
        core.run(classToRun);

    Then within the classToRun:

        @Test
        public void Test1(Connection dbConnection) {
            Statement st = dbConnection.createStatement();
            ResultSet rs = st.executeQuery("select total from dual");
            rs.next();
            String myTotal = rs.getString("TOTAL");
            // btw my tests are selenium testcases :)
            selenium.isTextPresent(myTotal);
        }

    I know about @Parameters, but it doesn't seem applicable here, as it is more for running the same test case multiple times with differing values. I want all of my test cases to share a database connection that I pass in through a configuration file to my Java client, which then runs those test cases (also passed in through the configuration file). Is this possible? P.S. I understand this seems like an odd way of doing things.
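    Not from the question: JUnit 4 test methods cannot take arbitrary parameters, so one common workaround is to publish the connection somewhere the test class can reach before invoking the runner. A minimal sketch under that assumption follows; all class and method names here are made up for illustration.

        import static org.junit.Assert.assertNotNull;

        import java.sql.Connection;
        import java.sql.DriverManager;

        import org.junit.Before;
        import org.junit.Test;
        import org.junit.runner.JUnitCore;

        public class RunnerSketch {
            // Shared holder the runner fills in before the tests execute.
            static Connection connection;

            public static class DatabaseTest {
                private Connection db;

                @Before
                public void setUp() {
                    db = RunnerSketch.connection; // injected by the runner below
                }

                @Test
                public void connectionIsAvailable() {
                    assertNotNull(db);
                }
            }

            public static void main(String[] args) throws Exception {
                // URL, user and password come from the command line or a config file
                // instead of being hardcoded in the test class.
                connection = DriverManager.getConnection(args[0], args[1], args[2]);
                JUnitCore core = new JUnitCore();
                core.run(DatabaseTest.class);
            }
        }

    A system property (for example -Ddb.url=...) read in a @Before method works the same way and keeps the runner and the tests decoupled.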

    Read the article

  • New TFS Template Available - "Agile Dev in a Waterfall Environment"–GovDev

    - by Hosam Kamel
    Microsoft Team Foundation Server (TFS) 2010 is the collaboration platform at the core of Microsoft's application lifecycle management solution. In addition to core features like source control, build automation and work-item tracking, TFS enables teams to align projects with industry processes such as Agile, Scrum and CMMI via the use of customizable XML process templates. Since 2005, TFS has been a welcome addition to the Microsoft developer tool line-up for government agencies of all sizes and missions. However, many government development teams consistently struggle with leveraging an iterative development process while still providing the structure, visibility and status reporting that is required by many government, waterfall-centric project methodologies. GovDev is an open source TFS process template that combines the formality of CMMI/Waterfall with the flexibility of Agile/Iterative. The GovDev for TFS Accelerator also implements two new custom reports to support the customized process and provide real-time visibility across the lifecycle, with full traceability and drill-down to tasks, tests and code.

    The TFS Accelerator contains:

    - A custom TFS process template that implements a requirements-centric, yet iterative, process with extreme traceability throughout the lifecycle.
    - A custom "Requirements Traceability Report" that provides a single view of traceability for the project. Within the Traceability Report, you can also view live status indicators and "click through" to the individual assets (even changesets).
    - A custom report that focuses on "Contributions by Team Member", tracking things like "number of check-ins" and "net lines added".
    - Fully integrated documentation on the entire process and features.

    For a 45-minute demo of GovDev, visit: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032508359&culture=en-us Download it from CodePlex here. Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • Which web site gives the most accurate indication of a programmer's capabilities?

    - by Jerry Coffin
    If you were hiring programmers, and could choose between one of (say) the top 100 coders on topcoder.com, or one of the top 100 on stackoverflow.com, which would you choose? At least to me, it would appear that topcoder.com gives a more objective evaluation of pure ability to solve problems and write code. At the same time, despite obvious technical capabilities, this person may lack any hint of social skills -- he may be purely a "lone coder", with little or no ability to help/work with others, may lack mentoring ability to help transfer his technical skills to others, etc. On the other hand, stackoverflow.com would at least appear to give a much better indication of peers' opinion of the coder in question, and the degree to which his presence is useful and helpful to others on the "team". At the same time, the scoring system is such that somebody who just throws up a lot of mediocre (or even poor) answers will almost inevitably accumulate a positive total of "reputation" points -- a single up-vote (perhaps just out of courtesy) will counteract the effects of no fewer than 5 down-votes, and others are discouraged (to some degree) from down-voting because they have to sacrifice their own reputation points to do so. At the same time, somebody who makes little or no technical contribution seems unlikely to accumulate a reputation that lands them (even close to) the top of the heap, so to speak. So, which provides a more useful indication of the degree to which a particular coder is likely to be useful to your organization? If you could choose between them, which set of coders would you rather have working on your team?

    Read the article

  • Moving all UI logic to Client Side?

    - by Mag20
    Our team originally consisted of mostly server-side developers with minimal expertise in JavaScript. In ASP.NET we used to write a lot of UI logic in code-behind or, more recently, through controllers in MVC. A little while ago two highly skilled client-side developers joined our team. They can do in HTML/CSS/JavaScript pretty much anything that we could previously do with server-side code and server-side web controls:

    - Show/hide controls
    - Do validation
    - Control AJAX refreshing

    So I started to think that maybe it would be more efficient to just create a high-level API around our business logic, kinda like the Amazon Fulfillment API (http://docs.amazonwebservices.com/fws/latest/APIReference/), so that client-side developers would fully take over the UI, while server-side developers would only concentrate on business logic. So for an ordering system you would have a high-level API like:

        OrderService.asmx
        - CreateOrderResponse CreateOrder(CreateOrderRequest)
        - AddOrderItem
        - AddPayment
        - SubmitPayment
        - GetOrderByID
        - FindOrdersByCriteria
        - ...

    There would be JSON/REST access to the API, so it would be easy to consume from a client-side UI. We could use this API for both internal UI development and also for third parties to create their own applications. With advances in JavaScript and the availability of good client-side developers, is it a good time to get rid of code-behind/controllers and just concentrate on developing high-level APIs (a la Amazon) that client-side developers can consume?

    Read the article

  • How to correct a junior, but encourage him to think for himself? [closed]

    - by Phil
    I am the lead of a small team where everyone has less than a year of software development experience. I wouldn't by any means call myself a software guru, but I have learned a few things in the few years that I've been writing software. When we do code reviews I do a fair bit of teaching and correcting mistakes. I will say things like "This is overly complex and convoluted, and here's why," or "What do you think about moving this method into a separate class?" I am extra careful to communicate that if they have questions or dissenting opinions, that's OK and we need to discuss. Every time I correct someone, I ask "What do you think?" or something similar. However, they rarely if ever disagree or ask why. And lately I've been noticing more blatant signs that they are blindly agreeing with my statements and not forming opinions of their own. I need a team who can learn to do things right autonomously, not just follow instructions. How does one correct a junior developer, but still encourage him to think for himself?

    Edit: Here's an example of one of these obvious signs that they're not forming their own opinions:

    Me: I like your idea of creating an extension method, but I don't like how you passed a large complex lambda as a parameter. The lambda forces others to know too much about the method's implementation.

    Junior (after misunderstanding me): Yes, I totally agree. We should not use extension methods here because they force other developers to know too much about the implementation.

    There was a misunderstanding, and that has been dealt with. But there was not even an OUNCE of logic in his statement! He thought he was regurgitating my logic back to me, thinking it would make sense, when really he had no clue why he was saying it.

    Read the article

  • Homoscedasticity test for Two-Way ANOVA

    - by aL3xa
    I've been using var.test and bartlett.test to check basic ANOVA assumptions, among others homoscedasticity (homogeneity, i.e. equality of variances). The procedure is quite simple for a One-Way ANOVA:

        bartlett.test(x ~ g) # where x is numeric, and g is a factor
        var.test(x ~ g)

    But for 2x2 tables, i.e. Two-Way ANOVAs, I want to do something like this:

        bartlett.test(x ~ c(g1, g2)) # or with a list:
        var.test(x ~ list(g1, g2))

    Of course, ANOVA assumptions can be checked with graphical procedures, but what about "an arithmetic option"? Is that at all manageable? How do you test homoscedasticity in a Two-Way ANOVA?

    Read the article

  • JUnit Best Practice: Different Fixtures for each @Test

    - by Juri Glass
    Hi, I understand that there are @Before and @BeforeClass, which are used to define fixtures for the @Test methods. But what should I use if I need a different fixture for each @Test? Should I define the fixture in the @Test itself? Should I create a test class for each @Test? I am asking for the best practice here, since neither solution is clean in my opinion. With the first solution, I would be testing the initialization code as well, and with the second solution I would break the "one test class for each class" pattern.
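    Not from the question, but one common middle ground is a single test class whose @Before covers only the genuinely shared setup, with small private helper methods building the per-test fixtures. A minimal sketch (all names are illustrative):

        import static org.junit.Assert.assertEquals;

        import java.util.ArrayList;
        import java.util.List;

        import org.junit.Test;

        public class CartTest {
            // Per-test fixture helpers: each @Test builds only the state it needs.
            private List<String> emptyCart() {
                return new ArrayList<String>();
            }

            private List<String> cartWithOneItem() {
                List<String> cart = emptyCart();
                cart.add("book");
                return cart;
            }

            @Test
            public void newCartIsEmpty() {
                assertEquals(0, emptyCart().size());
            }

            @Test
            public void addingAnItemGrowsTheCart() {
                assertEquals(1, cartWithOneItem().size());
            }
        }

    This keeps the "one test class per class" pattern while still giving every @Test its own fixture.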

    Read the article
