Search Results

Search found 28325 results on 1133 pages for 'test cases'.


  • Datamapper In Memory Database

    - by Daniel Ribeiro
    It is easy to set up DataMapper with an SQLite3 in-memory database with: DataMapper.setup :default, 'sqlite3::memory:'. However, when testing, I'd like to destroy the whole in-memory database after each test, instead of invoking auto_migrate! as a shortcut for dropping everything. Is that possible? Or is it enough to set the default repository to nil and let the garbage collector dispose of it?

    Read the article

  • Best way to solve programming problem without a computer?

    - by Kevin
    What is the best way to solve programming questions when you are given a program to write, for example in an exam, and have no computer to test it? Is there a certain technique that people use to help them solve these types of written problems, or does it all come down to knowledge of the language?

    Read the article

  • Which use cases make temporary JMS queues a better choice than persistent queues?

    - by Stephen Harmon
    When you are designing a JMS application, which use cases make you pick temporary queues over persistent queues? We use temporary queues for response messages. We're having some issues maintaining connections to the temp queues, though, so I am testing persistent response queues, instead. One clear disadvantage of persistent queues is that your application has to "know" about them beforehand. If that's not a big deal, though, are there use cases where temp queues are the obvious choice?
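    For context, here is a minimal sketch (not from the question) of the request/response pattern that temporary queues are typically used for, using the standard JMS 1.1 API; how the ConnectionFactory and request queue are obtained, and the timeout value, are placeholder assumptions:

        import javax.jms.*;

        public class TempQueueRequester {
            // Sketch: send a request and wait for the reply on a connection-scoped temporary queue.
            public static String request(ConnectionFactory factory, Queue requestQueue, String body)
                    throws JMSException {
                Connection connection = factory.createConnection();
                try {
                    connection.start();
                    Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

                    // The temporary queue needs no administrative setup and lives only
                    // as long as the connection that created it.
                    TemporaryQueue replyQueue = session.createTemporaryQueue();

                    TextMessage message = session.createTextMessage(body);
                    message.setJMSReplyTo(replyQueue);
                    session.createProducer(requestQueue).send(message);

                    // The responder is expected to send its answer to the JMSReplyTo destination.
                    MessageConsumer consumer = session.createConsumer(replyQueue);
                    TextMessage reply = (TextMessage) consumer.receive(5000);
                    return reply == null ? null : reply.getText();
                } finally {
                    connection.close();
                }
            }
        }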

    Read the article

  • What are the pitfalls to watch out for when upgrading MediaWiki?

    - by Mark Robinson
    We've got MediaWiki 1.13.2 and we'll soon be upgrading to the latest & greatest version (probably 1.16). We've got lots of extensions installed (for which we'll probably also need to get the latest versions) and we've done some minor configuring (e.g. adding new edit buttons). What should we watch out for during the upgrade? And what should we test? Does anyone have any experience of things that went wrong?

    Read the article

  • Creating Tests at Runtime

    - by James Thigpen
    Are there any .NET testing frameworks which allow dynamic creation of tests without having to deal with a hokey Attribute syntax? Something like: foreach (var t in tests) { TestFx.Run(t.Name, t.TestDelegate); } But with the test reporting as you would expect... I could do something like this with RowTests et al, but that seems hokey.

    Read the article

  • Does dict.update affect a function's argspec?

    - by sbox32
    import inspect

    class Test:
        def test(self, p, d={}):
            d.update(p)
            return d

    print inspect.getargspec(getattr(Test, 'test'))[3]
    print Test().test({'1':True})
    print inspect.getargspec(getattr(Test, 'test'))[3]

    I would expect the argspec for Test.test not to change, but because of dict.update it does. Why?

    Read the article

  • How important is unit testing in software development?

    - by Lo Wai Lun
    We are doing software testing by testing a lot of I/O cases, so developers and system analysts can open reviews and test their committed code within a given time period (e.g. one week). But when it comes to extracting information from a database, how should we work out the cases, and what methodology should we start with? That is more likely a case study, because unit testing depends on the project involved, which is too specific and particular most of the time. What is a general overview of the steps and precautions for unit testing?

    Read the article

  • Can I run C# built-in unit tests on a build machine?

    - by 5YrsLaterDBA
    Can I run the built-in C# unit tests on a build machine which doesn't have Visual Studio installed? We are thinking of adding unit tests to our Visual Studio 2008 C# project. Our build machine doesn't have VS installed, and we want to integrate the new unit tests with our auto-build system. Is MSTest the executable to launch the Team Test unit tests?

    Read the article

  • How to deal with test data in JUnit?

    - by user351637
    In a TDD (Test Driven Development) process, how should I deal with the test data? Assume a scenario: parse a log file to get the needed column. For a strong test, how do I prepare the test data? And is it proper for me to locate such files next to the test class files?
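    One common sketch (an assumption, not something stated in the question) is to keep the sample log file as a classpath resource next to the test sources and load it from the test itself; the fixture name "sample.log" and the class names below are made up for illustration:

        import static org.junit.Assert.assertNotNull;

        import java.io.BufferedReader;
        import java.io.InputStream;
        import java.io.InputStreamReader;

        import org.junit.Test;

        public class LogParserTest {

            // Loads a fixture from the classpath, so it travels with the test classes
            // no matter which working directory the tests are run from.
            private BufferedReader openFixture(String name) {
                InputStream in = getClass().getResourceAsStream(name);
                assertNotNull("missing test fixture: " + name, in);
                return new BufferedReader(new InputStreamReader(in));
            }

            @Test
            public void sampleLogIsAvailableToTheTest() throws Exception {
                BufferedReader reader = openFixture("/sample.log");
                String firstLine = reader.readLine();
                reader.close();
                assertNotNull(firstLine);
                // A real test would hand firstLine (or the whole file) to the parser
                // under test and assert on the extracted column.
            }
        }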

    Read the article

  • What happened to Debug test in current context (Ctrl+R, Ctrl+T) in VS2012?

    - by Nilzor
    One of the hot-keys I used most in Visual Studio 2010 was Ctrl+R, Ctrl+T, which ran the unit test the cursor was currently on in debug mode. I think the command is named "Debug tests in current context". Now, you still have a command named Test.DebugTestsInCurrentcontext, but when I assign it to a key combination and activate it, it always yields "currently not available". I do know that there is a new function in the Test menu named "Debug selected test", but I think that maps to the selected tests in the Test Explorer, not the file editor. What gives, Microsoft? Are you removing features?

    Read the article

  • Creating XML with Rebol and rebelxml

    - by Rebol Tutorial
    The example in the doc http://www.rebol.org/documentation.r?script=rebelxml.r to create XML works:

        >> clear-xml-data
        == ""
        >> set-xml-data/content 'test/test "test"
        == "<test><test>test</test></test>"
        >>

    but when I want to create some variants it doesn't seem to work:

        >> clear-xml-data
        == ""
        >> set-xml-data/content 'test "test"
        ** Script Error: foreach expected data argument of type: series
        ** Where: set-xml-data
        ** Near: foreach tag path [ sub-rule: copy [] append sub-rule reduce [ 'thru to-open-tag tag ] if all [...
        >>

    This one doesn't work either:

        >> clear-xml-data
        == ""
        >> set-xml-data/content/with-attribute 'test/test "test" 'id "500"
        == ""
        >>

    Is there something wrong in my syntax?

    Read the article

  • Load testing a quicktime streaming server from ubuntu machine

    - by ebeland
    I have software that can launch and control multiple Firefox browsers on Ubuntu EC2 images. I need to run a small load test against a QuickTime Streaming Server. The stream starts automatically when loaded in a browser that has the QuickTime plugin, so I don't need to automate the stream once it starts. Alternatively, I can also make these machines run arbitrary Ruby code or executables. How can I get these Ubuntu machines to pull in the stream? Also, how can I capture bandwidth usage (maybe a shell script?) on the worker machines?

    Read the article

  • jtreg update, December 2012

    - by jjg
    There is a new version of jtreg available. The primary new feature is support for tests that have been written for use with TestNG, the popular open source testing framework. TestNG is supported by a variety of tools and plugins, which means that it is now possible to develop tests for OpenJDK using those tools, while still retaining the ability to have the tests be part of the OpenJDK test suite, and run with a single test harness, jtreg. jtreg can be downloaded from the OpenJDK jtreg page: http://openjdk.java.net/jtreg.

    TestNG support

    jtreg supports both single TestNG tests, which can be freely intermixed with other types of jtreg tests, and groups of TestNG tests. A single TestNG test class can be compiled and run by providing a test description using the new action tag:

        @run testng classname

    The test will be executed by using org.testng.TestNG. No main method is required.

    A group of TestNG tests organized in a standard package hierarchy can also be compiled and run by jtreg. Any such group must be identified by specifying the root directory of the package hierarchy. You can either do this in the top-level TEST.ROOT file, or in a TEST.properties file in any subdirectory enclosing the group of tests. In either case, add a line to the file of the form:

        TestNG.dirs = dir ...

    Directories beginning with '/' are evaluated relative to the root directory of the test suite; otherwise they are evaluated relative to the directory containing the declaring file. In particular, note that you can simply use "TestNG.dirs = ." in a TEST.properties file in the root directory of the test group's package hierarchy. No additional test descriptions are necessary, but test descriptions containing information tags, such as @bug, @summary, etc., are permitted. All the Java source files in the group will be compiled if necessary, before any of the tests in the group are run. The selected tests within the group will be run, one at a time, using org.testng.TestNG.

    Library classes

    The specification for the @library tag has been extended so that any paths beginning with '/' will be evaluated relative to the root directory of the test suite. In addition, some bugs have been fixed that prevented sharing the compiled versions of library classes between tests in different directories. Note: this has uncovered some issues in tests that use a combination of @build and @library tags, such that some tests may fail unexpectedly with ClassNotFoundException. The workaround for now is to ensure that library classes are listed before the test classes in any @build tags.

    To specify one or more library directories for a group of TestNG tests, add a line of the following form to the TEST.properties file in the root directory of the group's package hierarchy:

        lib.dirs = dir ...

    As before, directories beginning with '/' are evaluated relative to the root directory of the test suite; otherwise they are evaluated relative to the directory containing the declaring file. The libraries will be available to all classes in the group; you cannot specify different libraries for different tests within the group.

    Coming soon ...

    From this point on, jtreg development will be using the new jtreg repository in the OpenJDK code-tools project. There is a new email alias jtreg-dev at openjdk.java.net for discussions about jtreg development. The existing alias jtreg-use at openjdk.java.net will continue to be available for questions about using jtreg.

    For more information ...

    An updated version of the jtreg Tag Language Specification is being prepared, and will be made available when it is ready. In the meantime, you can find more information about the support for TestNG by executing the following command:

        $ jtreg -onlinehelp TestNG

    For more information on TestNG itself, visit testng.org.
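
    As an illustration of the @run testng action described above, a minimal TestNG-based jtreg test might look like the sketch below; the class name, summary text, and assertion are made up for the example:

        /*
         * @test
         * @summary Minimal illustrative example of a TestNG-based jtreg test
         * @run testng HelloTestNG
         */
        import org.testng.annotations.Test;
        import static org.testng.Assert.assertEquals;

        public class HelloTestNG {
            @Test
            public void stringConcatenationWorks() {
                assertEquals("Hello, " + "jtreg", "Hello, jtreg");
            }
        }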

    Read the article

  • Run EJB3Unit against Oracle Database

    - by justastefan
    I want to run an EJB3Unit test against my Oracle 10g database. Therefore I use this configuration (ejb3unit.properties):

        ### The ejb3unit configuration file ###
        ejb3unit.inMemoryTest=false
        ejb3unit.connection.url=jdbc:oracle:thin:....:1432:SID
        ejb3unit.connection.driver_class=oracle.jdbc.OracleDriver
        ejb3unit.connection.username=user
        ejb3unit.connection.password=name
        ejb3unit.dialect=org.hibernate.dialect.Oracle10gDialect
        ejb3unit.show_sql=true
        ## values are create-drop, create, update ##
        ejb3unit.schema.update=create

    It results in the following error:

        Caused by: HibernateException: cannot instantiate dialect class
        ...
        org.hibernate.dialect.Oracle10gDialect cannot be cast to org.ejb3unit.hibernate.dialect.Dialect

    How can EJB3Unit testing be done against an Oracle DB?

    Read the article

  • Resources for TDD aimed at Python Web Development

    - by Null Route
    I am a hacker and not a full-time programmer, but am looking to start my own full application development experiment. I apologize if I am missing something easy here. I am looking for recommendations for books, articles, sites, etc. for learning more about test driven development, specifically compatible with or aimed at Python web application programming. I understand that Python has built-in tools to assist. What would be the best way to learn about these outside of RTFM? I have searched on Stack Overflow and found Kent Beck's and David Astels' books on the subject. I have also bookmarked the Wikipedia article, as it has many of these types of resources. Are there any particular ones you would recommend for this language/application?

    Read the article

  • MongoDB vs CouchDB (Speed optimization)

    - by Edward83
    Hi! I made some speed tests to compare MongoDB and CouchDB. Only inserts were measured while testing. I got MongoDB 15x faster than CouchDB. I know that it is because of sockets vs HTTP. But it is very interesting to me: how can I optimize inserts in CouchDB? Test platform: Windows XP SP3, 32-bit. I used the latest versions of MongoDB, the MongoDB C# driver, and the latest installation package of CouchDB for Windows. Thanks!

    Read the article

  • How to access project files from NUnit tests

    - by Daren Thomas
    I have some tests that I run with ReSharper's "Run All Tests from Solution" feature. One of the classes being tested has a dependency on a file in the same folder as the assembly containing it. This file is copied to the output directory via MSBuild (with "Copy To Output Directory" set to "Copy always"). Problem: the tests are not being run from the normal assembly output directory, but instead from some temporary location in my user profile. Therefore, I don't really know where to look for the file; the test runner does not copy it there. Can I force it to?

    Read the article

  • How can I detect if the Solution is initializing using the DTE in a VisualStudio extension?

    - by justin.m.chase
    I am using the DTE to track when projects are loaded and removed from the solution so that I can update a custom Test Explorer extension. I then trigger a container discovery process. But when the solution is first loaded it does an asynchronous load of some projects and fires a lot of Project Added events. What I would really like to do is to ignore all of these events until the solution is done loading. I can't quite figure out the order of events such that I know for sure that this initialization process has completed. It would be really nice to be able to just query the solution object and ask it. Does anyone know if there is a property or interface or event that I can use to determine this?

    Read the article

  • Log information inside a JUnit Suite

    - by Alex Marinescu
    I'm currently trying to write the total number of failed tests from a JUnit suite to a log file. My test suite is defined as follows:

        @RunWith(Suite.class)
        @SuiteClasses({Class1.class, Class2.class, etc.})
        public class SimpleTestSuite {}

    I tried to define a rule which would increase the total number of errors when a test fails, but apparently my rule is never called:

        @Rule
        public MethodRule logWatchRule = new TestWatchman() {
            public void failed(Throwable e, FrameworkMethod method) {
                errors += 1;
            }
            public void succeeded(FrameworkMethod method) {
            }
        };

    Any ideas on what I should do to achieve this behaviour?
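
    One possible sketch (an assumption, not part of the question): @Rule fields declared on the suite class itself are generally not applied by the Suite runner, so an alternative is to run the suite programmatically through JUnitCore with a RunListener and read the failure count from the Result; the class name and the way the count is logged are illustrative only:

        import org.junit.runner.JUnitCore;
        import org.junit.runner.Result;
        import org.junit.runner.notification.Failure;
        import org.junit.runner.notification.RunListener;

        public class SuiteRunner {
            public static void main(String[] args) {
                JUnitCore core = new JUnitCore();
                // Report each failure as it happens.
                core.addListener(new RunListener() {
                    @Override
                    public void testFailure(Failure failure) {
                        System.err.println("FAILED: " + failure.getDescription());
                    }
                });
                Result result = core.run(SimpleTestSuite.class);
                // Total number of failed tests, ready to be written to a log file.
                System.out.println("Failed tests: " + result.getFailureCount());
            }
        }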

    Read the article
