Search Results

Search found 8185 results on 328 pages for 'technical tests'.


  • ai: Determining what tests to run to get most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com. I have a system (see the about page on the site for details) where:

    - I need to output a ranked list, with confidences, of categories that match a particular feature vector
    - the binary feature vectors are a list of site IDs & whether this session detected a hit
    - feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit)
    - categories are a large, non-closed set (user IDs)
    - my total feature space is approximately 50 million items (URLs)
    - for any given test, I can only query approx. 0.2% of that space
    - I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc.)
    - getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries
    - I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user ID)

    I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (test based on prior test data) against exploration (test things that haven't been tested enough to find out how they perform). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far. Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine that this is a fairly standard problem in AI - having a cheap heuristic for what expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?
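
    One way to frame the exploit/explore trade-off described above is a UCB-style score per candidate site: favour sites that have discriminated well between the currently plausible users, plus a bonus for sites that have rarely been probed. The following is only a minimal sketch under those assumptions; the class and field names (CandidateSite, Discrimination, TimesProbed) are hypothetical, not part of the asker's system.

        // Hypothetical sketch: rank candidate sites by an upper-confidence-bound style score.
        // Exploitation term: how well the site separated plausible users in past sessions.
        // Exploration term:  bonus for sites we have probed rarely.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        class CandidateSite
        {
            public long SiteId;           // one of the ~50M feature ids
            public double Discrimination; // e.g. fraction of past probes that split the candidate users
            public int TimesProbed;       // how often this site has been queried at all
        }

        static class SiteSelector
        {
            // Pick the top 'budget' sites (~0.2% of the space) to probe next.
            public static IEnumerable<CandidateSite> ChooseNext(
                IEnumerable<CandidateSite> candidates, int totalProbes, int budget,
                double explorationWeight = 1.0)
            {
                return candidates
                    .Select(s => new
                    {
                        Site = s,
                        Score = s.Discrimination
                                + explorationWeight * Math.Sqrt(
                                      Math.Log(totalProbes + 1) / (s.TimesProbed + 1))
                    })
                    .OrderByDescending(x => x.Score)
                    .Take(budget)
                    .Select(x => x.Site);
            }
        }

    The score itself is the kind of thing that can be precomputed or cached with a few cheap SQL aggregates, keeping the per-decision cost low.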

    Read the article

  • Rails Controller Tests for Captcha using Shoulda, Factory Girl, Mocha

    - by Siva
    Can someone provide a strategy/code samples/pointers to test Captcha validations + Authlogic using Shoulda, Factory Girl and Mocha? For instance, my UsersController is something like:

        class UsersController < ApplicationController
          validates_captcha
          ...
          def create
            ...
            if captcha_validated?
              # code to deal with user attributes
            end
            ...
          end

    In this case, how do you mock/stub using Shoulda / Factory Girl / Mocha to test valid and invalid responses to the Captcha image? Appreciate your help, Siva

    Read the article

  • Regression tests for T-SQL stored procedures

    - by Achim
    Hi, I would like to regression test T-SQL stored procedures. My idea is to specify multiple input parameter sets for each SP. The SP should be executed with these parameters and the results written to disk. The next time around, the new results should be compared with the results stored before. Does anybody know a good tool for something like that? It should not be that hard to implement, but in practice you will need functionality like "ignore that column" or something similar. And I would assume that such a tool already exists!? cheers, Achim
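
    As a rough illustration of the approach described above (a sketch, not an existing tool): run each stored procedure with a saved parameter set, serialize the result set to a text file, and diff it against the previously stored baseline. The procedure name, parameter and file names below are hypothetical.

        // Minimal sketch of a stored-procedure regression check (hypothetical names throughout).
        using System;
        using System.Data;
        using System.Data.SqlClient;
        using System.IO;
        using System.Text;

        class SpRegression
        {
            static string RunProcedure(string connStr, string procName, int customerId)
            {
                var sb = new StringBuilder();
                using (var conn = new SqlConnection(connStr))
                using (var cmd = new SqlCommand(procName, conn) { CommandType = CommandType.StoredProcedure })
                {
                    cmd.Parameters.AddWithValue("@CustomerId", customerId); // hypothetical parameter
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            for (int i = 0; i < reader.FieldCount; i++)
                                sb.Append(reader[i]).Append('\t');   // "ignore that column" filtering would go here
                            sb.AppendLine();
                        }
                    }
                }
                return sb.ToString();
            }

            static void Main()
            {
                string actual = RunProcedure("Server=.;Database=Test;Integrated Security=true",
                                             "dbo.GetCustomerOrders", 42);
                string baselineFile = "GetCustomerOrders_42.baseline.txt";
                if (!File.Exists(baselineFile))
                    File.WriteAllText(baselineFile, actual);          // first run: record the baseline
                else if (File.ReadAllText(baselineFile) != actual)
                    Console.WriteLine("REGRESSION: output differs from stored baseline");
            }
        }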

    Read the article

  • Grails Unit Tests: Why does this statement fail?

    - by leeand00
    I've developed in Java in the past, and now I'm trying to learn Grails/Groovy using this slightly dated tutorial.

        import grails.test.*

        class DateTagLibTests extends TagLibUnitTestCase {
          def dateTagLib

          protected void setUp() {
            super.setUp()
            dateTagLib = new DateTagLib()
          }

          protected void tearDown() {
            super.tearDown()
          }

          void testThisYear() {
            String expected = Calendar.getInstance().get(Calendar.YEAR)
            // NOTE: This statement fails
            assertEquals("the years dont match and I dont know why.", expected, dateTagLib.thisYear())
          }
        }

    DateTagLibTests.groovy (Note: this TagLibUnitTestCase is for Grails 1.2.1 and not the version used in the tutorial.) For some reason the above test fails with: expected:<2010> but was:<2010> I've tried replacing the test above with the following alternate version of the test, and the test passes just fine:

        void testThisYear() {
          String expected = Calendar.getInstance().get(Calendar.YEAR)
          String actual = dateTagLib.thisYear()

          // NOTE: The following two assertions work:
          assertEquals("the years don\'t match", expected, actual)
          assertTrue("the years don\'t match", expected.equals(actual))
        }

    These two versions of the test are basically the same thing, right? Unless there's something new in Grails 1.2.1 or Groovy that I'm not understanding. They should be of the same type because the values are both the value returned by Calendar.getInstance().get(Calendar.YEAR)

    Read the article

  • Embedded systems code with good unit tests?

    - by rmk
    I am looking at approaches to unit testing embedded systems code written in C. At the same time, I am also looking for a good UT framework that I can use. The framework should have a reasonably small number of dependencies. Any great open-source products that have good UTs?

    Read the article

  • Re-using unit tests for models using STI

    - by TenJack
    I have a number of models that use STI and I would like to use the same unit test to test each model. For example, I have:

        class RegularList < List
        class OtherList < List

        class ListTest < ActiveSupport::TestCase
          fixtures :lists

          def test_word_count
            list = lists(:regular_list)
            assert_equal(0, list.count)
          end
        end

    How would I go about using the test_word_count test for the OtherList model? The test is much longer so I would rather not have to retype it for each model. Thanks.

    Read the article

  • How to run Clojure tests on Windows?

    - by anta40
    I put Clojure in C:\clojure-1.1.0, and start the REPL by:

        java -cp clojure.jar clojure.main

    In \test\clojure\test_clojure, there are a bunch of test files. How do I run these? For example, I tried:

        java -cp ......\clojure.jar clojure.main data_structures.clj

    and it didn't work.

    Read the article

  • Gradle java.util.logging.Logger output in unit tests

    - by Misha Koshelev
    Dear All: Sorry, this is probably a very simple question. I am using gradle http://www.gradle.org/ for my development environment. It works quite well! I have written a simple unit test that uses HtmlUnit and my own package. For my own package, I use java.util.Logger. HtmlUnit seems to use commons logging: http://htmlunit.sourceforge.net/logging.html I would like to see console output of my logging messages from java.util.Logger. However, it seems that even messages at the info level are not displayed in my Unit Test Results GUI (System.err link), although the HtmlUnit messages are all displayed. Please let me know if you have suggestions. Thank you! Misha

    Read the article

  • C#/Java: Proper Implementation of CompareTo when Equals tests reference identity

    - by Paul A Jungwirth
    I believe this question applies equally well to C# as to Java, because both require that {c,C}ompareTo be consistent with {e,E}quals: Suppose I want my equals() method to be the same as a reference check, i.e.:

        public bool equals(Object o) {
            return this == o;
        }

    In that case, how do I implement compareTo(Object o) (or its generic equivalent)? Part of it is easy, but I'm not sure about the other part:

        public int compareTo(Object o) {
            if (! (o instanceof MyClass)) return false;
            MyClass other = (MyClass)o;
            if (this == other) {
                return 0;
            } else {
                int c = foo.CompareTo(other.foo)
                if (c == 0) {
                    // what here?
                } else {
                    return c;
                }
            }
        }

    I can't just blindly return 1 or -1, because the solution should adhere to the normal requirements of compareTo. I can check all the instance fields, but if they are all equal, I'd still like compareTo to return a value other than 0. It should be true that a.compareTo(b) == -(b.compareTo(a)), and the ordering should stay consistent as long as the objects' state doesn't change. I don't care about ordering across invocations of the virtual machine, however. This makes me think that I could use something like memory address, if I could get at it. Then again, maybe that won't work, because the Garbage Collector could decide to move my objects around. hashCode is another idea, but I'd like something that will be always unique, not just mostly unique. Any ideas?
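
    One commonly suggested direction for the "what here?" gap (not necessarily what the asker ends up wanting) is to hand every instance a unique, monotonically increasing sequence number at construction time and use it as the final tie-breaker; it is stable for the lifetime of the process and never collides, unlike hashCode or a memory address. A minimal C# sketch, with hypothetical class and field names:

        // Sketch: reference-identity Equals plus a CompareTo that never returns 0 for distinct objects.
        using System;
        using System.Threading;

        class MyClass : IComparable<MyClass>
        {
            private static long _counter;                      // shared across all instances
            private readonly long _sequence = Interlocked.Increment(ref _counter);

            public int Foo { get; set; }                       // hypothetical "real" sort key

            public override bool Equals(object o) => ReferenceEquals(this, o);
            public override int GetHashCode() => base.GetHashCode();

            public int CompareTo(MyClass other)
            {
                if (other == null) return 1;
                if (ReferenceEquals(this, other)) return 0;
                int c = Foo.CompareTo(other.Foo);
                if (c != 0) return c;
                // Distinct objects with equal fields: fall back to creation order,
                // so the result is non-zero, antisymmetric, and stable within one process.
                return _sequence.CompareTo(other._sequence);
            }
        }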

    Read the article

  • Rails performance tests "rake test:benchmark" and "rake test:profile" give me errors

    - by go minimal
    I'm trying to run a blank default performance test with Ruby 1.9 and Rails 2.3.5 and I just can't get it to work! What am I missing here???

        rails testapp
        cd testapp
        script/generate scaffold User name:string
        rake db:migrate
        rake test:benchmark

        - /usr/local/bin/ruby19 -I"lib:test" "/usr/local/lib/ruby19/gems/1.9.1/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/performance/browsing_test.rb" -- --benchmark
        Loaded suite /usr/local/lib/ruby19/gems/1.9.1/gems/rake-0.8.7/lib/rake/rake_test_loader
        Started
        /usr/local/lib/ruby19/gems/1.9.1/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:105:in `rescue in const_missing': uninitialized constant BrowsingTest::STARTED (NameError)
            from /usr/local/lib/ruby19/gems/1.9.1/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:94:in `const_missing'
            from /usr/local/lib/ruby19/gems/1.9.1/gems/activesupport-2.3.5/lib/active_support/testing/performance.rb:38:in `run'
            from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:415:in `block (2 levels) in run_test_suites'
            from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:409:in `each'
            from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:409:in `block in run_test_suites'
            from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:408:in `each'
            from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:408:in `run_test_suites'
            from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:388:in `run'
            from /usr/local/lib/ruby19/1.9.1/minitest/unit.rb:329:in `block in autorun'
        rake aborted!
        Command failed with status (1): [/usr/local/bin/ruby19 -I"lib:test" "/usr/l...]

    Read the article

  • unit, integration and system tests for PHP applications

    - by Sara
    Hi, We were given an assignment to develop a prototype for a customer community. PHP was suggested as the programming language (but we're not supposed to actually code it; just a prototype with documentation is required). I'm wondering what the best practices/tools are for unit testing, integration testing and system testing for such a PHP app. Thanks

    Read the article

  • N-tier architecture and unit tests (using Java)

    - by Alexandre FILLATRE
    Hi there, I'd like to have your expert explanations about an architectural question. Imagine a Spring MVC webapp with the validation API (JSR 303). So for a request, I have a controller that handles the request, then passes it to the service layer, which passes it to the DAO layer. Here's my question: at which layer should validation occur, and how? My thought is that the controller has to handle basic validation (are mandatory fields empty? Is the field length OK? etc.). Then the service layer can do some trickier stuff that involves other objects. The DAO does no validation at all. BUT, if I want to implement some unit testing (i.e. test layers below service, not the controllers), I'll end up with unexpected behavior, because some validations should have been done in the controller layer, and since we don't go through it in unit testing, there is a problem. What is the best way to deal with this? I know there is no universal answer, but your personal experience is very welcome. Thanks a lot. Regards.

    Read the article

  • How to run unit tests in STAThread mode?

    - by Peter
    I would like to test an app that uses the Clipboard (Windows Forms), and I need the Clipboard in my unit tests as well. In order to use it, the code should run in STA mode, but since the NUnit TestFixture does not have a main method, I don't know where/how to annotate it... Thanks!
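
    One sketch of a workaround for the missing entry point (recent NUnit releases also offer attributes/configuration for apartment state, but the manual thread below works with any runner): run the clipboard-touching code on a dedicated thread that the test explicitly puts into the STA apartment. Names and the sample assertion are illustrative only.

        // Sketch: force a block of test code onto an STA thread so Clipboard calls work.
        using System;
        using System.Threading;
        using System.Windows.Forms;
        using NUnit.Framework;

        [TestFixture]
        public class ClipboardTests
        {
            // Helper: run an action on a fresh STA thread and rethrow any failure.
            private static void RunSta(Action body)
            {
                Exception failure = null;
                var thread = new Thread(() =>
                {
                    try { body(); } catch (Exception ex) { failure = ex; }
                });
                thread.SetApartmentState(ApartmentState.STA);
                thread.Start();
                thread.Join();
                if (failure != null) throw failure;
            }

            [Test]
            public void CopiesTextToClipboard()
            {
                RunSta(() =>
                {
                    Clipboard.SetText("hello");                 // requires STA
                    Assert.AreEqual("hello", Clipboard.GetText());
                });
            }
        }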

    Read the article

  • How to create tests for poco objects

    - by Simon G
    Hi, I'm new to mocking/testing and want to know what level you should go to when testing. For example, in my code I have the following object:

        public class RuleViolation
        {
            public string ErrorMessage { get; private set; }
            public string PropertyName { get; private set; }

            public RuleViolation( string errorMessage )
            {
                ErrorMessage = errorMessage;
            }

            public RuleViolation( string errorMessage, string propertyName )
            {
                ErrorMessage = errorMessage;
                PropertyName = propertyName;
            }
        }

    This is a relatively simple object, so my question is: does it need a unit test? If it does, what do I test and how? Thanks
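
    If one does decide to cover an object like this, the test is little more than checking that the constructors store what they are given; a sketch (using NUnit here purely as an example framework, with made-up message values):

        using NUnit.Framework;

        [TestFixture]
        public class RuleViolationTests
        {
            [Test]
            public void SingleArgumentConstructor_SetsErrorMessageOnly()
            {
                var violation = new RuleViolation("Name is required");

                Assert.AreEqual("Name is required", violation.ErrorMessage);
                Assert.IsNull(violation.PropertyName);
            }

            [Test]
            public void TwoArgumentConstructor_SetsBothProperties()
            {
                var violation = new RuleViolation("Name is required", "Name");

                Assert.AreEqual("Name is required", violation.ErrorMessage);
                Assert.AreEqual("Name", violation.PropertyName);
            }
        }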

    Read the article

  • Why are my rails tests so slow?

    - by ryeguy
    Is it normal for my test suite to take 5 seconds just to launch? Even when running an empty suite, it still takes this long. Is it because it's firing up a new instance of Rails on each run? If so, is there any way to keep it persistent? I'm using Test::Unit with Shoulda.

    Read the article

  • Zero code coverage with cobertura 1.9.2 but tests are working

    - by eraonel
    I run the code coverage target:

        <junit fork="yes" dir="${basedir}" failureProperty="test.failed">
          <!-- Note the classpath order: instrumented classes are before the
               original (uninstrumented) classes. This is important. -->
          <classpath path="${instrumented.dir}" />
          <classpath path="${classes.dir}" />
          <classpath refid="classpath" />
          <!-- The instrumented classes reference classes used by the Cobertura runtime,
               so Cobertura and its dependencies must be on your classpath. -->
          <classpath refid="cobertura.classpath" />
          <formatter type="xml" />
          <!--<test name="${testcase}" todir="${reports.xml.dir}" if="testcase" />-->
          <batchtest fork="yes" todir="${reports.xml.dir}">
            <fileset dir="${classes.dir}">
              <include name="**/generated/AllTests.class" />
            </fileset>
          </batchtest>
        </junit>

        <junitreport todir="${reports.xml.dir}">
          <fileset dir="${reports.xml.dir}">
            <include name="TEST-*.xml" />
          </fileset>
          <report format="frames" todir="${reports.html.dir}" />
        </junitreport>

    Then I get the following output (when using fork="true"):

        java.lang.reflect.InvocationTargetException
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:585)
            at net.sourceforge.cobertura.util.FileLocker.lock(FileLocker.java:124)
            at net.sourceforge.cobertura.coveragedata.ProjectData.saveGlobalProjectData(ProjectData.java:331)
            at net.sourceforge.cobertura.coveragedata.SaveTimer.run(SaveTimer.java:31)
            at java.lang.Thread.run(Thread.java:595)
        Caused by: java.io.IOException: No locks available
            at sun.nio.ch.FileChannelImpl.lock0(Native Method)
            at sun.nio.ch.FileChannelImpl.lock(FileChannelImpl.java:784)
            at java.nio.channels.FileChannel.lock(FileChannel.java:865)
            ... 8 more
        ---------------------------------------
        Unable to get lock on /vobs/rnc/rrt/roam2/roamSs/RoamMao_swb/RoamMao_bldu/ant_build/cobertura.ser.lock: null
        This is known to happen on Linux kernel 2.6.20.
        Make sure cobertura.jar is in the root classpath of the jvm process running the instrumented code.
        If the instrumented code is running in a web server, this means cobertura.jar should be in the web server's lib directory.
        Don't put multiple copies of cobertura.jar in different WEB-INF/lib directories.
        Only one classloader should load cobertura. It should be the root classloader.

    I am using Ant 1.7.0 and cobertura 1.9.2. Any ideas why there is no coverage? Tests run ok, as I see in my target. I have tried switching Java versions (1.5.0_06 and 1.6.0_10) but it makes no difference.

    Read the article

  • Why am I not able to run rails tests

    - by dorelal
    This is what I did:

        > git clone git://github.com/rails/rails.git
        > cd rails
        > cd railties
        > rake

    And I got the following error:

        (in /Users/dorelal/dev/scratch/rails/railties)
        ./test/isolation/abstract_unit.rb:236:in `initialize': No such file or directory - /Users/dorelal/dev/scratch/rails/railties/tmp/app_template/config/boot.rb (Errno::ENOENT)
            from ./test/isolation/abstract_unit.rb:236:in `open'
            from ./test/isolation/abstract_unit.rb:236
            from ./test/isolation/abstract_unit.rb:222:in `initialize'
            from ./test/isolation/abstract_unit.rb:222:in `new'
            from ./test/isolation/abstract_unit.rb:222
            from test/application/configuration_test.rb:1:in `require'
            from test/application/configuration_test.rb:1
        rake aborted!

    I checked ~/railties/tmp and this directory is empty. I know rails is not broken, so what am I missing?

    Read the article

  • Mock dll methods for unit tests

    - by sanjeev40084
    I am trying to write a unit test for a method which has a call to a method from a dll. Is there any way I can mock the dll method so that I can unit test?

        public string GetName(dllobject, int id)
        {
            var eligibileEmp = dllobject.GetEligibleEmp(id);   // <--------- trying to mock this method
            if(eligibleEmp.Equals(empValue)
            {
                ..........
            }
        }
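
    If the dll type is not directly mockable (sealed class, non-virtual methods), the usual workaround is to hide the call behind a small interface that the method under test depends on, and then stub that interface in the test. A sketch using Moq, where IEmployeeSource, NameService and the string return type are hypothetical stand-ins for the asker's types:

        using Moq;
        using NUnit.Framework;

        // Thin wrapper interface over the dll call, so tests can substitute it.
        public interface IEmployeeSource
        {
            string GetEligibleEmp(int id);
        }

        public class NameService
        {
            private readonly IEmployeeSource _source;
            public NameService(IEmployeeSource source) { _source = source; }

            public string GetName(int id)
            {
                var eligibleEmp = _source.GetEligibleEmp(id);
                return eligibleEmp ?? "unknown";
            }
        }

        [TestFixture]
        public class NameServiceTests
        {
            [Test]
            public void GetName_ReturnsValueFromSource()
            {
                var mock = new Mock<IEmployeeSource>();
                mock.Setup(s => s.GetEligibleEmp(42)).Returns("Jane");

                var service = new NameService(mock.Object);

                Assert.AreEqual("Jane", service.GetName(42));
                mock.Verify(s => s.GetEligibleEmp(42), Times.Once());
            }
        }

    In production code the concrete implementation of the interface would simply delegate to the real dll object.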

    Read the article

  • How to Run NUnit Tests from C# Code

    - by Dror Helper
    I'm trying to write a simple method that receives a file and runs it using NUnit. The code I managed to build using NUnit's source does not work:

        if(openFileDialog1.ShowDialog() != DialogResult.OK)
        {
            return;
        }

        var builder = new TestSuiteBuilder();
        var testPackage = new TestPackage(openFileDialog1.FileName);
        var directoryName = Path.GetDirectoryName(openFileDialog1.FileName);
        testPackage.BasePath = directoryName;
        var suite = builder.Build(testPackage);
        TestResult result = suite.Run(new NullListener(), TestFilter.Empty);

    The problem is that I keep getting an exception thrown by builder.Build stating that the assembly was not found. What am I missing? Is there some other way to run the test from the code (without using Process.Start)?

    Read the article
