Search Results

Search found 4783 results on 192 pages for 'tests'.


  • How do I write an RSpec test to unit-test this interesting metaprogramming code?

    - by Kyle Kaitan
    Here's some simple code that, for each argument specified, will add specific get/set methods named after that argument. If you write attr_option :foo, :bar, then you will see #foo/foo= and #bar/bar= instance methods on Config:

        module Configurator
          class Config
            def initialize()
              @options = {}
            end

            def self.attr_option(*args)
              args.each do |a|
                if not self.method_defined?(a)
                  define_method "#{a}" do
                    @options[:"#{a}"] ||= {}
                  end
                  define_method "#{a}=" do |v|
                    @options[:"#{a}"] = v
                  end
                else
                  throw Exception.new("already have attr_option for #{a}")
                end
              end
            end
          end
        end

    So far, so good. I want to write some RSpec tests to verify this code is actually doing what it's supposed to. But there's a problem! If I invoke attr_option :foo in one of the test methods, that method is now forever defined in Config. So a subsequent test will fail when it shouldn't, because foo is already defined:

        it "should support a specified option" do
          c = Configurator::Config
          c.attr_option :foo
          # ...
        end

        it "should support multiple options" do
          c = Configurator::Config
          c.attr_option :foo, :bar, :baz  # Error! :foo already defined by a previous test.
          # ...
        end

    Is there a way I can give each test an anonymous "clone" of the Config class which is independent of the others?

    Read the article

  • Testing When Correctness is Poorly Defined?

    - by dsimcha
    I generally try to use unit tests for any code that has easily defined correct behavior given some reasonably small, well-defined set of inputs. This works quite well for catching bugs, and I do it all the time in my personal library of generic functions. However, a lot of the code I write is data mining code that basically looks for significant patterns in large datasets. Correct behavior in this case is often not well defined and depends on a lot of different inputs in ways that are not easy for a human to predict (i.e. the math can't reasonably be done by hand, which is why I'm using a computer to solve the problem in the first place). These inputs can be very complex, to the point where coming up with a reasonable test case is near impossible. Identifying the edge cases that are worth testing is extremely difficult. Sometimes the algorithm isn't even deterministic. Usually, I do the best I can by using asserts for sanity checks and creating a small toy test case with a known pattern and informally seeing if the answer at least "looks reasonable", without it necessarily being objectively correct. Is there any better way to test these kinds of cases?

    Read the article
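
    The "toy test case with a known pattern" idea above can be automated rather than checked informally: plant a pattern in synthetic data, fix the random seed, and assert a loose, directional property instead of an exact answer. A minimal JUnit 4 sketch, where the Pearson-correlation helper is only a stand-in for the real mining code (which the excerpt doesn't show):

        import org.junit.Test;
        import static org.junit.Assert.*;
        import java.util.Random;

        public class PlantedPatternTest {

            // Stand-in for the real pattern detector: plain Pearson correlation.
            private static double correlation(double[] x, double[] y) {
                double mx = 0, my = 0;
                for (int i = 0; i < x.length; i++) { mx += x[i]; my += y[i]; }
                mx /= x.length;
                my /= y.length;
                double sxy = 0, sxx = 0, syy = 0;
                for (int i = 0; i < x.length; i++) {
                    sxy += (x[i] - mx) * (y[i] - my);
                    sxx += (x[i] - mx) * (x[i] - mx);
                    syy += (y[i] - my) * (y[i] - my);
                }
                return sxy / Math.sqrt(sxx * syy);
            }

            @Test
            public void recoversPlantedLinearPattern() {
                Random rng = new Random(42);   // fixed seed keeps the test repeatable
                int n = 500;
                double[] x = new double[n], y = new double[n];
                for (int i = 0; i < n; i++) {
                    x[i] = rng.nextGaussian();
                    y[i] = 2.0 * x[i] + 0.1 * rng.nextGaussian();   // planted pattern plus noise
                }
                // Assert a loose, directional property rather than an exact value.
                assertTrue("planted correlation should be strong", correlation(x, y) > 0.9);
            }

            @Test
            public void reportsNoPatternInPureNoise() {
                Random rng = new Random(7);
                int n = 500;
                double[] x = new double[n], y = new double[n];
                for (int i = 0; i < n; i++) {
                    x[i] = rng.nextGaussian();
                    y[i] = rng.nextGaussian();
                }
                assertTrue("pure noise should show little correlation",
                           Math.abs(correlation(x, y)) < 0.2);
            }
        }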

  • Integration tests - "no exceptions are thrown" approach. Does it make sense?

    - by Andrew Florko
    Sometimes integration tests are rather complex to write, or developers don't have enough time to check the output. Does it make sense to write tests that only make sure "no exceptions are thrown"? Such tests provide one or more sets of input parameters and don't check the result; they only make sure the code didn't fail with an exception. Maybe such tests are not very useful, but are they appropriate in situations when you have no time?

    Read the article
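
    For what it's worth, the "no exceptions are thrown" style is trivial to express in JUnit, since any exception escaping a test method fails the test; the sketch below assumes a hypothetical ReportGenerator entry point and deliberately makes no assertions about its output:

        import org.junit.Test;

        public class ReportGeneratorSmokeTest {

            // ReportGenerator is a hypothetical entry point, used only for illustration.
            @Test
            public void generatesMonthlyReportWithoutThrowing() {
                // No assertions on the result: the test fails only if an exception escapes.
                new ReportGenerator().generate("2024-01", "/tmp/report.out");
            }

            @Test
            public void generatesReportForEmptyRangeWithoutThrowing() {
                new ReportGenerator().generate("2024-02", "/tmp/empty.out");
            }
        }

    Such smoke tests catch crashes and wiring regressions but say nothing about correctness, which is roughly the trade-off the question is asking about.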

  • Do you do unit tests for non production code?

    - by Ikaso
    I am interested in the following scenario specifically. Suppose you have team that writes production code and a team that writes automatic tests. The team that writes automatic tests has a dedicated framework intended to write the automatic tests. Should the testing team write unit tests for their framework although the framework is not used in production?

    Read the article

  • jQuery 1.9.0 b1 is available for testing; deprecated and obsolete methods are no longer available

    jQuery 1.9.0 b1 is available for testing. Deprecated and obsolete methods are no longer available. This release brings numerous changes. The development team is asking for a particular effort on testing and bug reports. You should test all of your code in order to make the necessary changes. Most of the API methods that had been flagged as deprecated or obsolete, sometimes for several releases, are no longer available. A new plugin, Migrate, restores several removed features so that your older code can work with version 1.9.0. This plugin is a stopgap that should never be us...

    Read the article

  • Google accuses Bing of copying its search results, following tests it conducted

    Google accuses Bing of copying its search results, following tests it conducted. Google has just made serious accusations against its rival Bing. Mountain View carried out various tests in the greatest secrecy: one hundred terms that normally return no results on the web were "rigged" by Google, which created fake results for them, results that logically should never have appeared. Then the firm waited... And after only fifteen days, they showed up on Bing. This led a furious Google to declare that Microsoft's search engine copies its results, even adding that "Microsoft does not deny it".

    Read the article

  • Firefox's mobile browser is available as a pre-Alpha version, for testing on Android phones

    Firefox's mobile browser is available as a pre-Alpha version, for testing on Android phones. For a few weeks, an early build of Fennec had been circulating on certain websites. The mobile version of Firefox was available for download for Android phones. However, that build was not official, since it was offered by an individual and optimized for the Droid. Today, Mozilla fills that gap and makes a pre-Alpha build of Fennec available, compatible with at least the Droid and the Nexus One. This version is intended solely for testing. It does not yet support automatic updates, but Mozilla considers it polished enough to be dissect...

    Read the article

  • Why not write all tests at once when doing TDD?

    - by RichK
    The Red - Green - Refactor cycle for TDD is well established and accepted. We write one failing unit test and make it pass as simply as possible. What are the benefits of this approach over writing many failing unit tests for a class and making them all pass in one go? The test suite still protects you against writing incorrect code or making mistakes in the refactoring stage, so what's the harm? Sometimes it's easier to write all the tests first as a form of 'brain dump', to quickly write down all the expected behavior in one go.

    Read the article
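
    One middle ground, if the appeal is the "brain dump": write all the test names up front but disable every one except the test currently being driven, so the red/green/refactor rhythm is preserved. A sketch in JUnit 4 (the stack example is illustrative):

        import org.junit.Ignore;
        import org.junit.Test;
        import static org.junit.Assert.*;

        public class StackTest {

            @Test
            public void newStackIsEmpty() {
                assertTrue(new java.util.ArrayDeque<Integer>().isEmpty());
            }

            // The rest of the brain dump is recorded, but not yet driving any code.
            @Ignore("pending - enable when driving this behaviour")
            @Test
            public void pushThenPopReturnsLastElement() {
                fail("not written yet");
            }

            @Ignore("pending")
            @Test
            public void popOnEmptyStackThrows() {
                fail("not written yet");
            }
        }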

  • Is using built-in sorting considered cheating in practice tests?

    - by user10326
    I am using one of the practice online judges, where a practice problem is posed, one submits an answer, and gets back whether it is accepted or not based on test inputs. My question is the following: in one of the practice tests, I needed to sort an array as part of the solution algorithm. If it matters, the problem was: find 2 numbers in an array that add up to a specific target. As part of my algorithm I sorted the array, but to do that I used Java's built-in quicksort rather than implementing sorting as part of the same method. To do that I had to call: java.util.Arrays.sort(array); Since I had to use the fully qualified name, I am wondering if this is a kind of "cheating" (I mean, perhaps an online judge does not expect this). Is it? In a formal interview (since these tests are practice for interviews, as I understand), would this be acceptable?

    Read the article
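
    For context, the sort-based approach the poster describes usually looks like the sketch below: sort a copy, then walk two pointers inward. It returns the two values rather than their original indices, which is one reason a hash-map solution is sometimes preferred; the class and method names here are illustrative.

        import java.util.Arrays;

        public class TwoSum {

            // Returns two values from the array that add up to target, or null if none exist.
            static int[] findPair(int[] array, int target) {
                int[] sorted = array.clone();
                Arrays.sort(sorted);                 // the library call in question
                int lo = 0, hi = sorted.length - 1;
                while (lo < hi) {
                    int sum = sorted[lo] + sorted[hi];
                    if (sum == target) {
                        return new int[] { sorted[lo], sorted[hi] };
                    }
                    if (sum < target) {
                        lo++;
                    } else {
                        hi--;
                    }
                }
                return null;
            }

            public static void main(String[] args) {
                System.out.println(Arrays.toString(findPair(new int[] { 2, 7, 11, 15 }, 9)));  // [2, 7]
            }
        }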

  • How can I run NUnit (Selenium Grid) tests in parallel?

    - by Benjamin Lee
    My current project uses NUnit for unit tests and to drive UATs written with Selenium. Developers normally run tests using ReSharper's test runner in VS.Net 2003, and our build box kicks them off via NAnt. We would like to run the UAT tests in parallel so that we can take advantage of Selenium Grid/RCs and they will be able to run much faster. Does anyone have any thoughts on how this might be achieved, and/or best practices for running Selenium tests against multiple browser environments without writing duplicate tests? Thank you.

    Read the article

  • Working effectively with unit tests / Anyone tried the in-assembly approach?

    - by CodingCrapper
    I'm trying to re-introduce unit testing into my team, as our current coverage is very poor. Our system is quite large: 40+ projects/assemblies. We currently use a project named [SystemName].Test.csproj where all the test code is dumped and organised to represent the namespaces using folders. This approach is not very scalable and makes it difficult to find tests. I've been thinking about adding a Tests folder to each project; this would put the unit tests "in the developers' faces" and make them easy to find. The downside is that the production release code would contain references to nunit and nmocks, as well as the test code and test data... Has anyone tried this approach? How is everyone else working with unit tests on large projects? Having a Tests project per "real" project/assembly would introduce too many new projects. Thanks in advance

    Read the article

  • When I run cxxtest I get this error that I don't understand?

    - by user299648
        ./cxxtest/cxxtestgen.py -o tests.cpp --error-printer DrawTestSuite.h
        g++ -I./cxxtest/ -c tests.cpp
        g++ -o tests tests.o Color.o
        tests.o: In function `DrawTestSuite::testLinewidthOne()':
        tests.cpp:(.text._ZN13DrawTestSuite16t…): undefined reference to `Linewidth::Linewidth(double)'
        tests.cpp:(.text._ZN13DrawTestSuite16t…): undefined reference to `Linewidth::draw(std::basic_ostream &)'
        collect2: ld returned 1 exit status
        make: *** [tests] Error 1

    DrawTestSuite.h contains the unit test, and the test function calls on Linewidth.h to execute the constructor and the member function draw. I have included "Linewidth.h" in DrawTestSuite.h.

    Read the article

  • jtreg update, March 2012

    - by jjg
    There is a new update for jtreg 4.1, b04, available. The primary changes have been to support faster and more reliable test runs, especially for tests in the jdk/ repository. [ For users inside Oracle, there is preliminary direct support for gathering code coverage data using jcov while running tests, and for generating a coverage report when all the tests have been run. ] jtreg can be downloaded from the OpenJDK jtreg page: http://openjdk.java.net/jtreg/.

    Scratch directories: On platforms like Windows, if a test leaves a file open when the test is over, that can cause a problem for downstream tests, because the scratch directory cannot be emptied beforehand. This is addressed in agentvm mode by discarding any agents using that scratch directory and starting new agents using a new empty scratch directory. Successive directories use suffixes _1, _2, etc. If you see such directories appearing in the work directory, that is an indication that files were left open in the preceding directory in the series.

    Locking support: Some tests use shared system resources such as fixed port numbers. This causes a problem when running tests concurrently. So, you can now mark a directory such that all the tests within all such directories will be run sequentially, even if you use -concurrency:N on the command line to run the rest of the tests in parallel. This is seen as a short-term solution: it is recommended that tests not use shared system resources whenever possible. If you are running multiple instances of jtreg on the same machine at the same time, you can use a new option -lock:file to specify a file to be used for file locking; otherwise, the locking will just be within the JVM used to run jtreg.

    "autovm mode": By default, if no options to the contrary are given on the command line, tests will be run in othervm mode. Now, a test suite can be marked so that the default execution mode is "agentvm" mode. In conjunction with this, you can now mark a directory such that all the tests within that directory will be run in "othervm" mode. Conceptually, this is equivalent to putting /othervm on every appropriate action on every test in that directory and any subdirectories. This is seen as a short-term solution: it is recommended that tests be adapted to use agentvm mode, or use "@run main/othervm" explicitly.

    Info in test result files: The user name and jtreg version info are now stored in the properties near the beginning of the .jtr file.

    Build: The makefiles used to build and test jtreg have been reorganized and simplified. jtreg is now using JT Harness version 4.4.

    Other: jtreg provides access to GNOME_DESKTOP_SESSION_ID when set. jtreg ensures that shell tests are given an absolute path for the JDK under test. jtreg now honors the "first sentence rule" for the description given by @summary. jtreg saves the default locale before executing a test in samevm or agentvm mode, and restores it afterwards.

    Bug fixes: jtreg tried to execute a test even if the compilation failed in agentvm mode because of a JVM crash. jtreg did not correctly handle the -compilejdk option.

    Acknowledgements: Thanks to Alan, Amy, Andrey, Brad, Christine, Dima, Max, Mike, Sherman, Steve and others for their help, suggestions, bug reports and for testing this latest version.

    Read the article
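
    For reference, the "@run main/othervm" and "@summary" tags mentioned above live in the comment header of a jtreg test; a minimal illustrative example (the test body itself is made up):

        /*
         * @test
         * @summary Example test that runs in its own VM. Only this first sentence
         *          is used as the description, per the "first sentence rule".
         * @run main/othervm ExampleOtherVmTest
         */
        public class ExampleOtherVmTest {
            public static void main(String[] args) throws Exception {
                // Any uncaught exception marks the test as failed.
                if (Integer.parseInt("42") != 42) {
                    throw new AssertionError("unexpected result");
                }
            }
        }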

  • Acceptance tests done first...how can this be accomplished?

    - by Crazy Eddie
    The basic gist of most Agile methods is that a feature is not "done" until it's been developed, tested, and in many cases released. This is supposed to happen in quick turnaround chunks of time such as "Sprints" in the Scrum process. A common part of Agile is also TDD, which states that tests are done first.

    My team works on a GUI program that does a lot of specific drawing and such. In order to provide tests, the testing team needs to be able to work with something that at least attempts to perform the things they are trying to test. We've found no way around this problem. I can very much see where they are coming from, because if I were trying to write software that targeted some basically mysterious interface I'd have a very hard time. Although we have behavior fairly well specified, the exact process of interacting with various UI elements, when it comes to automation, seems to be too unique to a feature to allow testers to write automated scripts to drive something that does not exist. Even if we could, a lot of things end up turning up later as having been missing from the specification.

    One thing we considered doing was having the testers write test "scripts" that are more like a set of steps that must be performed, as described from a use-case perspective, so that they can be "automated" by a human being. This can then be performed by the developer(s) writing the feature and/or verified by someone else. When the testers later get an opportunity, they automate the "script", mainly for regression purposes. This didn't end up catching on in the team though. The testing part of the team is actually falling behind us by quite a margin. This is one reason why the apparently extra time of developing a "script" for a human being to perform just did not happen... they're under a crunch to keep up with us developers. If we waited for them, we'd get nothing done. It's not their fault really; they're a bottleneck, but they're doing what they should be and working as fast as possible. The process itself seems to be set up against them. Very often we end up having to go back a month or more in what we've done to fix bugs that the testers have finally gotten around to checking. It's an ugly truth that I'd like to do something about.

    So what do other teams do to solve this fail cascade? How can we get testers ahead of us, and how can we make it so that there's actually time for them to write tests for the features we do in a sprint, without making us sit and twiddle our thumbs in the meantime? As it's currently going, the way to get a feature "done", using agile definitions, would be to have developers work for 1 week, then testers work the second week, with developers hopefully being able to fix all the bugs they come up with in the last couple of days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas...

    Read the article

  • Why do I get an IllegalAccessError when running my Android tests?

    - by Janusz
    I get the following stack trace when running my Android tests on the Emulator:

        java.lang.NoClassDefFoundError: client.HttpHelper
            at client.Helper.<init>(Helper.java:14)
            at test.Tests.setUp(Tests.java:15)
            at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:164)
            at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:151)
            at android.test.InstrumentationTestRunner.onStart(InstrumentationTestRunner.java:425)
            at android.app.Instrumentation$InstrumentationThread.run(Instrumentation.java:1520)
        Caused by: java.lang.IllegalAccessError: cross-loader access from pre-verified class
            at dalvik.system.DexFile.defineClass(Native Method)
            at dalvik.system.DexFile.loadClass(DexFile.java:193)
            at dalvik.system.PathClassLoader.findClass(PathClassLoader.java:203)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:573)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:532)
            ... 11 more

    I run my tests from an extra project, and it seems there are some problems with loading the classes from the other project. I have run the tests before, but now they are failing. The project under test runs without problems. Line 14 of the Helper class is:

        this.httpHelper = new HttpHelper(userProfile);

    It constructs an HttpHelper object that is responsible for executing HTTP queries. I think somehow this helper class is not available anymore, but I have no clue why.

    Read the article

  • How do I programmatically run all the JUnit tests in my Java application?

    - by Andrew McKinlay
    From Eclipse I can easily run all the JUnit tests in my application. I would like to be able to run the tests on target systems from the application jar, without Eclipse (or Ant or Maven or any other development tool). I can see how to run a specific test or suite from the command line. I could manually create a suite listing all the tests in my application, but that seems error-prone - I'm sure at some point I'll create a test and forget to add it to the suite. The Eclipse JUnit plugin has a wizard to create a test suite, but for some reason it doesn't "see" my test classes. It may be looking for JUnit 3 tests, not JUnit 4 annotated tests. I could write a tool that would automatically create the suite by scanning the source files. Or I could write code so the application would scan its own jar file for tests (either by naming convention or by looking for the @Test annotation). It seems like there should be an easier way. What am I missing?

    Read the article
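
    Setting the jar-scanning idea aside, the simpler option of running a known list of test classes from inside the application can be done with JUnit 4's own API via org.junit.runner.JUnitCore; a sketch, where FooTest and BarTest are placeholders:

        import org.junit.runner.JUnitCore;
        import org.junit.runner.Result;
        import org.junit.runner.notification.Failure;

        public class TestMain {
            public static void main(String[] args) {
                // Placeholder test classes; keeping this list current is exactly the
                // "forget to add it to the suite" risk the question mentions, which is
                // why scanning the jar for @Test-annotated classes is attractive.
                Result result = JUnitCore.runClasses(FooTest.class, BarTest.class);
                for (Failure failure : result.getFailures()) {
                    System.err.println(failure);
                }
                System.exit(result.wasSuccessful() ? 0 : 1);
            }
        }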

  • Unit Testing TSQL

    - by Grant Fritchey
    I went through a period of time where I spent a lot of effort figuring out how to set up unit tests for TSQL. It wasn't easy. There are a few tools out there that help, but mostly it involves lots of programming. Well, not as much as before. Thanks to the latest Down Tools Week at Red Gate, a new utility has been built and released into the wild: SQL Test. Like a lot of the new tools coming out of Red Gate these days, this one is directly integrated into SSMS, which means you're working where you're comfortable and where you already have lots of tools at your disposal.

    After the install, when you launch SSMS and get connected, you're prompted to install the tSQLt example database. Go for it. It's a quick way to see how the tool works. I'd suggest using it. It gives you a quick leg up. The concepts are pretty straightforward. There are a series of CLR commands that you use to configure a test and the test assertions. In between you're calling TSQL, either calls to your structure, queries, or stored procedures. They already have the one thing that I always found wanting in database tests: a way to compare tables of results. I also like the ability to create a dummy copy of tables for the tests. It lets you control structures and behaviors so that the tests are more focused. One of the issues I always ran into with the other testing tools is that setting up the tests might require potentially destructive changes to the structure of the database (dropping FKs, etc.), which added lots of time and effort to setting up the tests, making testing more difficult, and therefore, less useful.

    Functionally, this is pretty similar to the Visual Studio tests and TSQLUnit tests that I used to use. The primary improvement over the Visual Studio tests is that I'm working in SSMS instead of Visual Studio. The primary improvement over TSQLUnit is the SQL Test interface itself. A lot of the functionality is the same, but having a sweet little tool to manage & run the tests from makes a huge difference. Oh, and don't worry. You can still run these tests directly from TSQL too, so automation has not gone away.

    I'm still thinking about how I'd use this in a dev environment where I also had source control to fret over. That might be another blog post right there. I'm just getting started with SQL Test, so this is the first of several blog posts & videos. Watch this space. Try the tool.

    Read the article

  • Is there a better way to organize my module tests that avoids an explosion of new source files?

    - by luser droog
    I've got a neat (so I thought) way of having each of my modules produce a unit-test executable if compiled with the -DTESTMODULE flag. This flag guards a main() function that can access all static data and functions in the module, without #including a C file. From the README:

    -- Modules -- The various modules were written and tested separately before being coupled together to achieve the necessary basic functionality. Each module retains its unit-test, its main() function, guarded by #ifdef TESTMODULE. `make test` will compile and execute all the unit tests, producing copious output, but importantly exiting with an appropriate success or failure code, so the `make test` command will fail if any of the tests fail.

        Module TOC
        __________
        test  obj       src       header    structures                CONSTANTS
        ----  ---       ---       ---       --------------------
        m     m.o       m.c       m.h       mfile mtab                TABSZ
        s     s.o       s.c       s.h       stack                     STACKSEGSZ
        v     v.o       v.c       v.h       saverec_
              f.o       f.c       f.h       file
        ob    ob.o      ob.c      ob.h      object
        ar    ar.o      ar.c      ar.h      array
        st    st.o      st.c      st.h      string
        di    di.o      di.c      di.h      dichead dictionary
        nm    nm.o      nm.c      nm.h      name
        gc    gc.o      gc.c      gc.h      garbage collector
        itp             itp.c     itp.h     context
              osunix.o  osunix.c  osunix.h  unix-dependent functions

    It's compiled by a tricky bit of makefile,

        m: m.c ob.h ob.o err.o $(CORE) itp.o $(OP)
                cc $(CFLAGS) -DTESTMODULE $(LDLIBS) -o $@ $< err.o ob.o s.o ar.o st.o v.o di.o gc.o nm.o itp.o $(OP) f.o

    where the module is compiled with its own C file plus every other object file except itself. But it's creating difficulties for the kindly programmer who offered to write the Autotools files for me. So the obvious way to make it "less weird" would be to bust out all the main functions into separate source files. But, but ... Do I gotta?

    Read the article

  • What are the design principles that promote testable code? (designing testable code vs driving design through tests)

    - by bot
    Most of the projects that I work on consider development and unit testing in isolation, which makes writing unit tests at a later stage a nightmare. My objective is to keep testing in mind during the high-level and low-level design phases themselves. I want to know if there are any well-defined design principles that promote testable code. One such principle that I have come to understand recently is Dependency Inversion, through Dependency Injection and Inversion of Control. I have read that there is something known as SOLID. I want to understand whether following the SOLID principles indirectly results in code that is easily testable. If not, are there any well-defined design principles that promote testable code? I am aware that there is something known as Test-Driven Development. However, I am more interested in designing code with testing in mind during the design phase itself, rather than driving design through tests. I hope this makes sense. One more question related to this topic is whether it's alright to refactor an existing product/project and make changes to code and design for the purpose of being able to write a unit test case for each module.

    Read the article
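
    Of the principles mentioned, Dependency Inversion plus constructor injection is the most mechanical to see in code: the business logic depends on an interface, the collaborator is passed in, and a test can substitute a controlled fake. A small illustrative sketch:

        // Abstraction the business code depends on (Dependency Inversion).
        interface Clock {
            long nowMillis();
        }

        // Production implementation.
        class SystemClock implements Clock {
            public long nowMillis() {
                return System.currentTimeMillis();
            }
        }

        // The collaborator is injected, so a unit test can control "time".
        class SessionValidator {
            private final Clock clock;
            private final long timeoutMillis;

            SessionValidator(Clock clock, long timeoutMillis) {
                this.clock = clock;
                this.timeoutMillis = timeoutMillis;
            }

            boolean isExpired(long sessionStartMillis) {
                return clock.nowMillis() - sessionStartMillis > timeoutMillis;
            }
        }

        // In a test, a fixed fake clock makes the behaviour deterministic:
        //   SessionValidator v = new SessionValidator(() -> 10_000L, 5_000L);
        //   v.isExpired(1_000L)  -> true   (9s elapsed, 5s timeout)
        //   v.isExpired(8_000L)  -> false  (2s elapsed)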

  • Coded UI Test - How to change the exe it runs

    - by Vaccano
    I created a Coded UI Test from a Microsoft Test Manager recording. The exe it runs is the one the tester recorded against. I want this to be a test I run with my build. How do I change the exe that the coded UI test uses to be the output of:

      - the TFS build, when a TFS build is being run
      - the local build, when the test is being run on my machine

    Read the article

  • image focus calculation

    - by Oren Mazor
    Hi folks, I'm trying to develop an image focusing algorithm for some test automation work. I've chosen to use AForge.net, since it seems like a nice mature .net friendly system. Unfortunately, I can't seem to find information on building autofocus algorithms from scratch, so I've given it my best try:

      1. Take an image.
      2. Apply a Sobel edge detection filter, which generates a greyscale edge outline.
      3. Generate a histogram and save the standard deviation.
      4. Move the camera one step closer to the subject and take another picture.
      5. If the standard deviation is smaller than the previous one, we're getting more in focus. Otherwise, we've passed the optimal distance for taking pictures.

    Is there a better way?

    Update: there is a HUGE flaw in this, by the way. As I get past the optimal focus point, my "image in focus" value continues growing. You'd expect a parabolic-ish function looking at distance vs. focus value, but in reality you get something that's more logarithmic.

    Read the article
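
    The loop described above (compute a sharpness score per frame, step the camera, stop once the score stops improving) is essentially a hill climb over focus position. A rough Java sketch, with the camera and the metric left as hypothetical hooks; the sketch maximises the score, so a real version would put AForge.net's Sobel filter and histogram standard deviation behind edgeSharpness, negating the metric first if "smaller is better".

        public class AutoFocus {

            // Hypothetical camera hook: capture the current frame and move the stage.
            interface Camera {
                double[] captureGreyscale();   // pixel intensities of the current frame
                void stepCloser();
                void stepBack();
            }

            // Stand-in sharpness metric: variance of pixel intensities. A real version
            // would apply Sobel edge detection first, then take the standard deviation.
            static double edgeSharpness(double[] pixels) {
                double mean = 0;
                for (double p : pixels) {
                    mean += p;
                }
                mean /= pixels.length;
                double var = 0;
                for (double p : pixels) {
                    var += (p - mean) * (p - mean);
                }
                return var / pixels.length;
            }

            // Simple hill climb: keep stepping while sharpness improves, back up once it drops.
            static void focus(Camera cam, int maxSteps) {
                double best = edgeSharpness(cam.captureGreyscale());
                for (int i = 0; i < maxSteps; i++) {
                    cam.stepCloser();
                    double score = edgeSharpness(cam.captureGreyscale());
                    if (score <= best) {       // past the peak: step back and stop
                        cam.stepBack();
                        return;
                    }
                    best = score;
                }
            }
        }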
