Search Results

Search found 13653 results on 547 pages for 'integration testing'.

  • Is there a JUnit equivalent to NUnit's TestCase attribute?

    - by Steph
    I've googled for JUnit test cases, and what comes up looks a lot more complicated to implement: you have to create a new class that extends TestCase, which you then run:

        public class MathTest extends TestCase {
            protected double fValue1;
            protected double fValue2;

            protected void setUp() {
                fValue1 = 2.0;
                fValue2 = 3.0;
            }

            public void testAdd() {
                double result = fValue1 + fValue2;
                assertTrue(result == 5.0);
            }
        }

    But what I want is something really simple, like the NUnit test cases:

        [TestCase(1, 2)]
        [TestCase(3, 4)]
        public void testAdd(int fValue1, int fValue2) {
            double result = fValue1 + fValue2;
            assertIsTrue(result == 5.0);
        }

    Is there any way to do this in JUnit?
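
    A minimal sketch of one common JUnit 4 answer, the Parameterized runner. Unlike NUnit's inline attributes it needs a data method and a constructor, and the expected value is carried as a third parameter here (names and values are illustrative):

        import java.util.Arrays;
        import java.util.Collection;

        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Parameterized;
        import org.junit.runners.Parameterized.Parameters;

        import static org.junit.Assert.assertEquals;

        @RunWith(Parameterized.class)
        public class AddTest {
            private final int value1;
            private final int value2;
            private final int expected;

            public AddTest(int value1, int value2, int expected) {
                this.value1 = value1;
                this.value2 = value2;
                this.expected = expected;
            }

            // Each Object[] becomes one constructor call, i.e. one test case.
            @Parameters
            public static Collection<Object[]> data() {
                return Arrays.asList(new Object[][] {
                    { 1, 2, 3 },
                    { 3, 4, 7 },
                });
            }

            @Test
            public void add() {
                assertEquals(expected, value1 + value2);
            }
        }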

  • Is there a Java unit-test framework that auto-tests getters and setters?

    - by Michael Easter
    There is a well-known debate in Java (and other communities, I'm sure) over whether trivial getter/setter methods should be tested, usually with respect to code coverage. Let's agree that this is an open debate and not try to answer it here. There have been several blog posts on using Java reflection to auto-test such methods. Does any framework (e.g. JUnit) provide such a feature? For example, an annotation that says "this test T should auto-test all the getters/setters on class C, because I assert that they are standard". It seems to me that this would add value, and if it were configurable, the 'debate' would be left as an option to the user.
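
    Absent framework support, a hand-rolled reflection check is not much code. A minimal sketch using java.beans introspection; sampleValueFor is a hypothetical helper you would extend with one representative value per property type:

        import java.beans.Introspector;
        import java.beans.PropertyDescriptor;

        import static org.junit.Assert.assertEquals;

        public final class BeanPropertyTester {

            // Round-trips every read/write property: set a sample value, read it back.
            public static void assertGettersAndSetters(Object bean) throws Exception {
                PropertyDescriptor[] properties =
                        Introspector.getBeanInfo(bean.getClass(), Object.class).getPropertyDescriptors();
                for (PropertyDescriptor pd : properties) {
                    if (pd.getReadMethod() == null || pd.getWriteMethod() == null) {
                        continue; // not a standard getter/setter pair
                    }
                    Object sample = sampleValueFor(pd.getPropertyType());
                    pd.getWriteMethod().invoke(bean, sample);
                    assertEquals(pd.getName(), sample, pd.getReadMethod().invoke(bean));
                }
            }

            // Hypothetical helper: map a type to a representative test value.
            private static Object sampleValueFor(Class<?> type) {
                if (type == int.class || type == Integer.class) return 42;
                if (type == boolean.class || type == Boolean.class) return Boolean.TRUE;
                if (type == String.class) return "sample";
                throw new IllegalArgumentException("no sample value for " + type);
            }
        }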

  • Python mock patch: how can I check that a method of an instance was called?

    - by JuanPablo
    In Python 2.7, I have this function:

        from slacker import Slacker

        def post_message(token, channel, message):
            channel = '#{}'.format(channel)
            slack = Slacker(token)
            slack.chat.post_message(channel, message)

    With mock and patch, I can check that the token is used in the Slacker class:

        import unittest
        from mock import patch
        from slacker_cli import post_message

        class TestMessage(unittest.TestCase):

            @patch('slacker_cli.Slacker')
            def test_post_message_use_token(self, mock_slacker):
                token = 'aaa'
                channel = 'channel_name'
                message = 'message string'
                post_message(token, channel, message)
                mock_slacker.assert_called_with(token)

    How can I check the string used in post_message? I tried

        mock_slacker.chat.post_message.assert_called_with('#channel')

    but I get:

        AssertionError: Expected call: post_message('#channel')
        Not called
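
    The call is recorded on the instance created inside the function, which the patch exposes as mock_slacker.return_value, not on the class mock itself. A minimal sketch of the assertion (argument values follow the test above):

        @patch('slacker_cli.Slacker')
        def test_post_message_formats_channel(self, mock_slacker):
            post_message('aaa', 'channel_name', 'message string')
            # Slacker(token) returns mock_slacker.return_value, so chat.post_message
            # calls are recorded on that instance mock.
            instance = mock_slacker.return_value
            instance.chat.post_message.assert_called_with('#channel_name', 'message string')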

  • Unit Test Sessions Window Closes when debugging

    - by Daniel Dyson
    When I select an NUnit test in the Unit Test Sessions window and click Debug, the window disappears. My breakpoints are hit, but if I press F5, the Unit Test Sessions window does not return until the test returns a result or I stop the debugging session. This prevents me from viewing any console output during tests. Any ideas?

  • Rails + RSpec problem

    - by FancyDancy
    I have just installed RSpec and rspec-rails. When I try to run the test, it says:

        rake aborted!
        Command /opt/local/bin/ruby -I"lib" "/opt/local/lib/ruby/gems/1.8/gems/rspec-1.3.0/bin/spec" "spec/controllers/free_controller_spec.rb" --options "/Volumes/Trash/dev/app/trunk/spec/spec.opts" failed

    Full log here: http://pastie.org/939211. However, my second "test" application, which uses SQLite, works with it, so I think the problem is in my DB. My Ruby version is 1.8.7, and I use MySQL as the database. My files: specs/spec_helper.rb, config/environment.rb, config/environments/test.rb, and the list of my gems. My test is just:

        require 'spec_helper'

        describe FreeController do
          it "should respond with success" do
            get 'index'
            response.should be_success
          end
        end

    I really can't understand the error, so I don't know how to fix it. Additional question: should I use fixtures and ActiveRecord if I am going to use Machinist for creating test data? What should I do to disable them?

  • Unit testing JSPs

    - by Avi Y
    Hi, I would like to ask what technologies exist out there for creating unit tests for JSPs. I am already aware of the HtmlUnit/HttpUnit/JWebUnit/Selenium possibilities. Thank you!

  • Visual Studio 2010 and Test Driven Development

    - by devoured elysium
    I'm taking my first steps in test-driven development with Visual Studio, and I have some questions about implementing generic classes with VS 2010.

    First, let's say I want to implement my own version of an ArrayList. I start by creating the following test (I'm using MSTest in this case):

        [TestMethod]
        public void Add_10_Items_Remove_10_Items_Check_Size_Is_Zero() {
            var myArrayList = new MyArrayList<int>();
            for (int i = 0; i < 10; ++i) {
                myArrayList.Add(i);
            }
            for (int i = 0; i < 10; ++i) {
                myArrayList.RemoveAt(0);
            }
            int expected = 0;
            int actual = myArrayList.Size;
            Assert.AreEqual(expected, actual);
        }

    I'm using VS 2010's ability to hit Ctrl+. and have it implement classes/methods on the go, but I have been having some trouble with generic classes. For example, when I define an .Add(10) method, VS doesn't know whether I intend a generic method (as the class is generic) or an Add(int number) method. Is there any way to differentiate these?

    The same can happen with return types. Let's assume I'm implementing a MyStack stack and I want to test that after I push an element and pop it, the stack is still empty. We all know Pop should return something, but usually the code of this test shouldn't care about it. Visual Studio would then think that Pop is a void method, which is not what one would want. How should I deal with this (see the sketch below)? For each method, should I start by writing tests that are "very specific", such that it is obvious the method should return something, so I don't get this kind of ambiguity? Even if I'm not using the result, should I write something like int popValue = myStack.Pop()?

    How should I write tests for generic classes? Is testing with only one type enough? I have been using ints, as they are easy to use, but should I also test with other kinds of objects? How do you usually approach this?

    Finally, I see there is a popular tool called TestDriven.NET. With the VS 2010 release, is it still useful, or are a lot of its features now part of VS 2010, rendering it kind of useless? Thanks
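
    On the return-type question, one workaround is to bind the result to a typed local even when the test never asserts on it, so the generated stub gets a non-void signature. A minimal sketch; MyStack, Push, Pop, and Size are the hypothetical members under test:

        [TestMethod]
        public void Push_Then_Pop_Leaves_Stack_Empty() {
            var myStack = new MyStack<int>();
            myStack.Push(10);
            // The typed local tells Generate Method to stub "int Pop()" rather
            // than "void Pop()", at the cost of an unused variable.
            int popped = myStack.Pop();
            Assert.AreEqual(0, myStack.Size);
        }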

  • Python: How to run unittest.main() for all source files in a subdirectory?

    - by Pete
    I am developing a Python module with several source files, each with its own test class derived from unittest.TestCase right in the source. Consider the directory structure:

        dirFoo\
            test.py
            dirBar\
                __init__.py
                Foo.py
                Bar.py

    To test either Foo.py or Bar.py, I would add this at the end of those source files:

        if __name__ == "__main__":
            unittest.main()

    and run Python on either source, i.e.:

        $ python Foo.py
        ...........
        ----------------------------------------------------------------------
        Ran 11 tests in 2.314s

        OK

    Ideally, I would have test.py automagically search dirBar for any unittest-derived classes and make one call to unittest.main(). What's the best way to do this in practice? I tried using Python to call execfile for every *.py file in dirBar, but that runs once for the first .py file found and then exits the calling test.py; plus I then have to duplicate my code by adding unittest.main() to every source file, which violates DRY principles.
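
    If your unittest is new enough to include test discovery (Python 2.7+; the unittest2 backport offers the same API for older interpreters), a minimal test.py sketch:

        import unittest

        if __name__ == "__main__":
            # Import every module under dirBar matching the pattern and collect
            # all TestCase subclasses into a single suite.
            suite = unittest.defaultTestLoader.discover("dirBar", pattern="*.py")
            unittest.TextTestRunner(verbosity=2).run(suite)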

  • How to configure .NET test assembly to use website web.config?

    - by Morten Christiansen
    I've run into a problem setting up Selenium tests for an ASP.NET MVC project in cases where I need the settings provided in the web.config of the site under test. The problem is that I want to create a dummy user before running the test, and this causes an error saying that the password-answer supplied is invalid. This is due to the test assembly not using the web.config and instead using default values for the membership configuration. I've tried to copy the relevant section (the membership configuration) into the app.config of the assembly, without luck, but I admit I'm just grasping at straws here.

  • When mocking a class with Moq, how can I CallBase for just specific methods?

    - by Daryn
    I really appreciate Moq's loose mocking behaviour that returns default values when no expectations are set. It's convenient, saves me code, and also acts as a safety measure: dependencies won't get unintentionally called during the unit test (as long as they are virtual).

    However, I'm confused about how to keep these benefits when the method under test happens to be virtual. In this case I do want to call the real code for that one method, while still having the rest of the class loosely mocked.

    All I have found in my searching is that I could set mock.CallBase = true to ensure that the method gets called. However, that affects the whole class. I don't want to do that, because it puts me in a dilemma about all the other properties and methods in the class that hide dependencies: if CallBase is true then I have to either

    1. set up stubs for all of the properties and methods that hide dependencies, even though my test doesn't think it needs to care about those dependencies, or
    2. hope that I don't forget to set up any stubs (and that no new dependencies get added to the code in the future), and risk unit tests hitting a real dependency.

    Q: With Moq, is there any way to test a virtual method when I mocked the class to stub just a few dependencies, i.e. without resorting to CallBase = true and having to stub all of the dependencies?

    Example code to illustrate (uses MSTest, InternalsVisibleTo DynamicProxyGenAssembly2). In the following example, TestNonVirtualMethod passes, but TestVirtualMethod fails: it returns null.

        public class Foo {
            public string NonVirtualMethod() { return GetDependencyA(); }
            public virtual string VirtualMethod() { return GetDependencyA(); }

            internal virtual string GetDependencyA() { return "! Hit REAL Dependency A !"; }

            // [... Possibly many other dependencies ...]
            internal virtual string GetDependencyN() { return "! Hit REAL Dependency N !"; }
        }

        [TestClass]
        public class UnitTest1 {
            [TestMethod]
            public void TestNonVirtualMethod() {
                var mockFoo = new Mock<Foo>();
                mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString);

                string result = mockFoo.Object.NonVirtualMethod();

                Assert.AreEqual(expectedResultString, result);
            }

            [TestMethod]
            public void TestVirtualMethod() // Fails
            {
                var mockFoo = new Mock<Foo>();
                mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString);
                // (I don't want to set up GetDependencyB ... GetDependencyN here)

                string result = mockFoo.Object.VirtualMethod();

                Assert.AreEqual(expectedResultString, result);
            }

            string expectedResultString = "Hit mock dependency A - OK";
        }
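
    For what it's worth, newer Moq versions support routing a single member to the real implementation with a per-setup CallBase(), keeping the rest of the mock loose. A minimal sketch, assuming your Moq version has it:

        [TestMethod]
        public void TestVirtualMethod_PerSetupCallBase()
        {
            var mockFoo = new Mock<Foo>(); // loose; CallBase stays false class-wide
            mockFoo.Setup(m => m.GetDependencyA()).Returns(expectedResultString);
            // Only VirtualMethod() runs real code; unstubbed members still return defaults.
            mockFoo.Setup(m => m.VirtualMethod()).CallBase();

            Assert.AreEqual(expectedResultString, mockFoo.Object.VirtualMethod());
        }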

  • Is there an equivalent to RSpec's before(:all) in MiniTest?

    - by bergyman
    Since it now seems to have replaced Test::Unit in 1.9.1, I can't find an equivalent to this. There ARE times when you really just want a method to run once for the whole suite of tests. For now I've resorted to some lovely hackery along the lines of:

        class ParseStandardWindTest < MiniTest::Unit::TestCase
          @@reader ||= PolicyDataReader.new(Time.now)
          @@data ||= @@reader.parse

          def test_stuff
            transaction = @@data[:transaction]
            assert true, transaction
          end
        end
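
    MiniTest has no built-in before(:all); one common workaround, sketched here, is to memoize the expensive setup in a class-level method so it runs at most once regardless of which test touches it first:

        class ParseStandardWindTest < MiniTest::Unit::TestCase
          # Parses once, on first access, and caches the result for the suite.
          def self.parsed_data
            @parsed_data ||= PolicyDataReader.new(Time.now).parse
          end

          def test_stuff
            transaction = self.class.parsed_data[:transaction]
            assert transaction, "expected parsed data to contain :transaction"
          end
        end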

  • JUnit 4 test suite and individual test classes

    - by Hypnus
    I have a JUnit 4 test suite with @BeforeClass and @AfterClass methods that do the setup/teardown for the test classes it contains. I also need to run the test classes by themselves, and for that each of them needs its own setup/teardown scenario (@BeforeClass and @AfterClass, or something like that). The catch is that when I run the suite, I do not want to execute the per-class setup/teardown before and after each test class; I only want to execute the suite's setup/teardown (once). Is that possible? Thanks in advance.
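
    One common pattern, a sketch rather than the only way: the suite sets a static flag in its own @BeforeClass, and each member class skips its standalone setup when the flag is set (FooTest and BarTest stand in for your test classes):

        import org.junit.AfterClass;
        import org.junit.BeforeClass;
        import org.junit.runner.RunWith;
        import org.junit.runners.Suite;

        @RunWith(Suite.class)
        @Suite.SuiteClasses({ FooTest.class, BarTest.class })
        public class AllTests {
            static boolean runningAsSuite = false;

            @BeforeClass
            public static void suiteSetUp() {
                runningAsSuite = true;
                // ... shared, once-per-suite setup ...
            }

            @AfterClass
            public static void suiteTearDown() {
                // ... shared teardown ...
                runningAsSuite = false;
            }
        }

    And in each member class:

        public class FooTest {
            @BeforeClass
            public static void setUpClass() {
                if (!AllTests.runningAsSuite) {
                    // standalone setup, runs only outside the suite
                }
            }
            // guard the @AfterClass teardown the same way
        }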

  • Preconfigure Android Emulator with location?

    - by Janusz
    I want to run automated tests with location data on the Android emulator. I can set up coordinates via telnet, but that means starting a console and manually configuring the emulator before running my JUnit tests. Is there a way to preconfigure the emulator with a KML file or something similar, to ensure that coordinates are always available?
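
    Not quite preconfiguration, but the telnet step can at least be scripted from the test harness instead of typed by hand, since adb can forward emulator console commands; a one-line sketch (coordinates are arbitrary, and longitude comes first):

        # Sends "geo fix <longitude> <latitude>" to the emulator console.
        adb emu geo fix -122.084 37.422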

  • How can I change a connection string, or other app settings, at test time in Visual Studio 2008?

    - by David
    I need to test a class library project in VS. The project itself does not have a web.config file, but the classes read one on the web server to which they are deployed. I access the settings like this:

        ConfigurationManager.ConnectionStrings["stringname"].ConnectionString;

    Can I adjust these strings while running unit tests in VS? Should I have considered a different design to avoid this problem?
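
    One common approach, sketched here: give the test project its own app.config defining the same connection-string names, since ConfigurationManager in a test run reads the test assembly's config file rather than the site's web.config (the connection string value below is a placeholder):

        <?xml version="1.0"?>
        <configuration>
          <connectionStrings>
            <!-- Same name the library looks up; the value points at a test database. -->
            <add name="stringname"
                 connectionString="Data Source=(local);Initial Catalog=TestDb;Integrated Security=True"
                 providerName="System.Data.SqlClient" />
          </connectionStrings>
        </configuration>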

  • Can I write a test without any assert in it?

    - by stratwine
    Hi, I'd like to know if it is OK to write a test without any assert in it, so that the test fails only when an exception or error occurs. For example, a test containing a simple SELECT query, to ensure that the database configuration is right: when I change some DB configuration, I re-run this test and check that the configuration still works. Thanks!

  • Is there a way to extract the message from a JavaScript dialog in Chrome?

    - by Samuel
    I've been working on an extension for automating tests in Chrome, and I came across an obscure issue with JavaScript dialogs: the message shown in the dialog can't be readily retrieved or copied. I've used the GetWindowText and InternalGetWindowText functions, but they only return the title of the dialog and the text from the buttons, not the actual message. I even looked at programs that extract text from forms, but no luck. So, does anyone know of a way to retrieve the text from these JavaScript dialogs in Chrome?
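
    One workaround from the page side rather than the window side, sketched under the assumption that your extension can inject a content script before page scripts run: wrap window.alert (and likewise confirm/prompt) and stash the message somewhere readable; __lastDialogMessage is an illustrative name:

        // Injected before page scripts run.
        (function () {
          var realAlert = window.alert;
          window.alert = function (message) {
            // Stash the text so the automation layer can read it back out.
            window.__lastDialogMessage = String(message);
            return realAlert.apply(window, arguments);
          };
        })();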

  • Where should test classes be stored in the project?

    - by limc
    I build all my web projects at work using RAD/Eclipse, and I'm interested to know where you normally store your tests' *.class files. All my web projects have two source folders: "src" for source and "test" for test cases. The generated *.class files for both source folders are currently placed under the WebContent/WEB-INF/classes folder. I want to separate the test *.class files from the src *.class files for two reasons:

    1. There's no point in storing them in WebContent/WEB-INF/classes and deploying them to production.
    2. Sonar and some other static code analysis tools don't produce an accurate analysis, because they take account of my crappy yet correct test-case code.

    So, right now, I have the following output folders:

    • The "src" source folder compiles to the WebContent/WEB-INF/classes folder.
    • The "test" source folder compiles to the target/test-classes folder.

    Now I'm getting this warning from RAD:

        Broken single-root rule: A project may not contain more than one output folder.

    So it seems Eclipse-based IDEs prefer one project = one output folder, yet the "build path" dialog provides an option to set up a custom output folder for my additional source folder, and then it barks at me. I know I can just disable this warning myself, but I want to know how you handle this. Thanks.

  • Measuring the CPU frequency scaling effect

    - by Bryan Fok
    Recently I have been trying to measure the effect of CPU frequency scaling. Is it accurate to use this clock to measure it?

        template<std::intmax_t clock_freq>
        struct rdtsc_clock {
            typedef unsigned long long rep;
            typedef std::ratio<1, clock_freq> period;
            typedef std::chrono::duration<rep, period> duration;
            typedef std::chrono::time_point<rdtsc_clock> time_point;
            static const bool is_steady = true;

            static time_point now() noexcept
            {
                unsigned lo, hi;
                asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
                return time_point(duration(static_cast<rep>(hi) << 32 | lo));
            }
        };

    Update: According to a comment on another post of mine, I believe rdtsc cannot be used to measure the effect of CPU frequency scaling, because the counter read by rdtsc is not affected by the CPU frequency. Am I right?
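
    Regarding the update: on Linux you can check whether your CPU advertises a fixed-rate TSC, in which case rdtsc measures wall-clock time rather than cycles and will not reflect frequency scaling; a quick check:

        # Non-empty output means the TSC ticks at a constant rate,
        # independent of the current CPU frequency.
        grep -m1 -o 'constant_tsc' /proc/cpuinfo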

  • where is "create instance" menu in visual studio 2010?

    - by austin powers
    Hi, in Visual Studio 2008 there is a sub-menu called "Create Instance" which resides in the class designer. Today I opened VS 2010, opened the class designer, and created my class there, but when I wanted to test my class with the help of the "Create Instance" option, no such option was available in VS 2010. I've googled about it a little bit but found no answer at all, so I decided to mention it here. Where can I find this menu in VS 2010? Regards.

  • Second Unit Test Not Running

    - by TomJ
    I am having trouble getting my MethodB test to run. The logic is fine, but when the unit tests are run, only the MethodA test will run. If MethodA and MethodB are switched in terms of spots, only the MethodB test will run, so clearly the code is wrong at some point. Do I need to call MethodB's test from inside MethodA's in order to get both unit tests to run? I'm pretty new to C#, so forgive my basic question.

        using redacted;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using System;

        namespace UnitTests
        {
            [TestClass()]
            public class ClassTest
            {
                public TestContext TestContext { get; set; }

                [TestMethod()]
                public void MethodATest()
                {
                    // the unit test
                }

                [TestMethod()]
                public void MethodBTest()
                {
                    // the unit test
                }
            }
        }

  • What is the most idiomatic way of emulating Perl's Test::More::done_testing?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (Perl 5.8, with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan: it's generally a bad idea not to catch when your unit test exits prematurely, say due to some logic not executing when you expected it to.

    What is the most idiomatic way of running a configurable number of tests, assuming no no_plan or done_testing() is used?

    Details: my unit tests usually take the form of:

        use Test::More;

        my @test_set = (
             [ "Test #1", $param1, $param2, ... ]
            ,[ "Test #2", $param1, $param2, ... ]
            # ,...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block, as follows:

        use Test::More;

        BEGIN {
            our @test_set = (); # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set; # Must do!!! Since the first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) { # Same
            run_test($test);
        }

        sub run_test {} # Same

    I feel this can be done more idiomatically, but I'm not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration, in BEGIN {} and after it. Another approach is to emulate done_testing() by calling Test::More->builder->plan(tests => $total_tests_calculated). I'm not sure if that's any better, idiomatically speaking.
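
    One tidier option, assuming your Test::More supports a runtime plan (calling plan() after an argument-less use Test::More, which long predates done_testing()): build @test_set first, compute the count, then plan, then run. A sketch reusing the count_tests and run_test shapes from above; both bodies here are stand-ins:

        use strict;
        use warnings;
        use Test::More;    # no plan given yet; nothing has run

        my @test_set = (
             [ "Test #1", "param1", "param2" ]
            ,[ "Test #2", "param1", "param2" ]
        );

        # Compute the plan at run time, before the first assertion fires.
        my $expected_tests = 0;
        $expected_tests += count_tests($_) for @test_set;
        plan tests => $expected_tests;

        run_test($_) for @test_set;

        sub count_tests { return 1 }               # stand-in: one ok() per entry
        sub run_test    { ok($_[0], $_[0][0]) }    # stand-in for the real checks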
