Search Results

Search found 10178 results on 408 pages for 'testing metaprogramming'.


  • How do I connect StaticListableBeanFactory with ClassPathXmlApplicationContext?

    - by Aaron Digulla
    In the setup of my test cases, I have this code:

        ApplicationContext context = new ClassPathXmlApplicationContext("spring/common.xml");
        StaticListableBeanFactory testBeanFactory = new StaticListableBeanFactory();

    How do I connect the two in such a way that tests can register beans in the testBeanFactory during setup, and the rest of the application uses them instead of the ones defined in common.xml? Note: I need to mix a static (common.xml) and a dynamic configuration. I can't use XML for the latter, because that would mean writing 1000 XML files.
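
    One possible direction (a sketch, not from the linked article): register the dynamic test beans in a plain parent context and pass it to the ClassPathXmlApplicationContext constructor that accepts a parent. The bean name and MyTestBean class below are hypothetical. Note that definitions in common.xml shadow same-named parent beans, so this suits beans that common.xml does not define itself.

        // Parent context holds the beans registered dynamically during test setup
        GenericApplicationContext parent = new GenericApplicationContext();
        parent.getBeanFactory().registerSingleton("someService", new MyTestBean()); // hypothetical bean
        parent.refresh();

        // The XML context resolves anything it does not define by asking the parent
        ApplicationContext context = new ClassPathXmlApplicationContext(
                new String[] { "spring/common.xml" }, parent);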

    Read the article

  • Rails + RSpec problem

    - by FancyDancy
    I have just installed RSpec and rspec-rails. When I try to run the tests, it says:

        rake aborted!
        Command /opt/local/bin/ruby -I"lib" "/opt/local/lib/ruby/gems/1.8/gems/rspec-1.3.0/bin/spec" "spec/controllers/free_controller_spec.rb" --options "/Volumes/Trash/dev/app/trunk/spec/spec.opts" failed

    Full log here: http://pastie.org/939211 However, my second "test" application with sqlite works with it, so I think the problem is in my DB. My Ruby version is 1.8.7, and I use MySQL as the database. My files: specs/spec_helper.rb, config/environment.rb, config/environments/test.rb, and the list of my gems. My test is just:

        require 'spec_helper'

        describe FreeController do
          it "should respond with success" do
            get 'index'
            response.should be_success
          end
        end

    I really can't understand the error, so I don't know how to fix it. Additional question: should I use fixtures and ActiveRecord if I am going to use Machinist for creating test data? What should I do to disable them?

    Read the article

  • Rails test across multiple environments

    - by DSimon
    Is there some way to change Rails environments mid-way through a test? Or, alternately, what would be the right way to set up a test suite that can start up Rails in one environment, run the first half of my test in it, then restart Rails in another environment to finish the test? The two environments have separate databases. Some necessary context: I'm writing a Rails plugin that allows multiple installations of a Rails app to communicate with each other with user assistance, so that a user without Internet access can still use the app. They'll run a local version of an app, and upload their work to the online app by saving a file to a thumbdrive and taking it to an Internet cafe. The plugin adds two special environments to Rails: "offline-production" and "offline-test". I want to write functional tests that involve both the "test" and "offline-test" environments, to represent the main online version of the app and the local offline version of the app respectively.
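
    One possible direction (a sketch, not from the linked article; the helper class is hypothetical): rather than restarting Rails mid-test, open a second ActiveRecord connection pointed at the other environment's database and run the second half of the assertions through it.

        # Hypothetical helper: models inheriting from this class read the
        # "offline-test" entry in config/database.yml
        class OfflineRecord < ActiveRecord::Base
          self.abstract_class = true
          establish_connection "offline-test"
        end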

    Read the article

  • Dependency injection in C++

    - by Yorgos Pagles
    This is also a question that I asked in a comment on one of Miško Hevery's Google talks dealing with dependency injection, but it got buried in the comments. I wonder how the factory/builder step of wiring the dependencies together can work in C++. I.e. we have a class A that depends on B. The builder will allocate B on the heap, pass a pointer to B in A's constructor while also allocating A on the heap, and return a pointer to A. Who cleans up afterwards? Is it good to let the builder clean up after it's done? It seems to be the correct method, since the talk says that the builder should set up objects that are expected to have the same lifetime, or at least dependencies with longer lifetimes (I also have a question on that). What I mean in code:

        class builder {
        public:
            builder() : m_ClassA(NULL), m_ClassB(NULL) {}

            ~builder() {
                delete m_ClassA;   // delete A first: its destructor may still use B
                delete m_ClassB;
            }

            ClassA *build() {
                m_ClassB = new ClassB;
                m_ClassA = new ClassA(m_ClassB);
                return m_ClassA;
            }

        private:
            ClassA *m_ClassA;
            ClassB *m_ClassB;
        };

    Now if there is a dependency that is expected to last longer than the lifetime of the object we are injecting it into (say ClassC is that dependency), I understand that we should change the build method to something like:

        ClassA *builder::build(ClassC *classC) {
            m_ClassB = new ClassB;
            m_ClassA = new ClassA(m_ClassB, classC);
            return m_ClassA;
        }

    What is your preferred approach?
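
    One way to sidestep the manual cleanup (a sketch, not from the linked talk; assumes C++11): encode the ownership policy in smart pointers, so the builder needs no destructor at all.

        #include <memory>

        class ClassB { /* ... */ };

        class ClassA {
        public:
            explicit ClassA(std::shared_ptr<ClassB> b) : m_b(std::move(b)) {}
        private:
            std::shared_ptr<ClassB> m_b;   // keeps B alive for as long as A lives
        };

        std::unique_ptr<ClassA> build() {
            auto b = std::make_shared<ClassB>();           // lifetime tracked by refcount
            return std::unique_ptr<ClassA>(new ClassA(b)); // caller owns A outright
        }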

    Read the article

  • What is the most idiomatic way to emulate Perl's Test::More::done_testing?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl5.8, with $Test::More::VERSION being '0.80') that predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea not to catch when your unit test exits prematurely, say due to some logic not executing when you expected it to. What is the most idiomatic way of running a configurable number of tests, assuming neither no_plan nor done_testing() is used? Details: My unit tests usually take the form of:

        use Test::More;

        my @test_set = (
             [ "Test #1", $param1, $param2, ... ]
            ,[ "Test #2", $param1, $param2, ... ]
            # ,...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block, as follows:

        use Test::More;

        BEGIN {
            our @test_set = ();   # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set;   # Must do!!! Since the first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) {   # Same
            run_test($test);
        }

        sub run_test { }   # Same

    I feel this can be done more idiomatically, but I am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN{} and after it. Another approach is to emulate done_testing() by calling Test::More->builder->plan(tests => $total_tests_calculated). I'm not sure if that's any better idiomatically.
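
    For what it's worth (a sketch, not from the linked discussion): plan() only has to run before the first assertion, not at compile time, so loading Test::More without a plan and calling plan() at runtime avoids both the BEGIN block and the duplicated our declaration:

        use Test::More;   # no plan given yet

        my @test_set = (
            [ "Test #1", $param1, $param2 ],
            [ "Test #2", $param1, $param2 ],
            # ...
        );

        # Count first, then declare the plan - still before any ok() has run
        my $expected_tests = 0;
        $expected_tests += count_tests($_) for @test_set;
        plan tests => $expected_tests;

        run_test($_) for @test_set;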

    Read the article

  • Is there a way to extract the message from a JavaScript dialog in Chrome?

    - by Samuel
    I’ve been working on an extension for automating tests in Chrome, and I came across an obscure issue with JavaScript dialogs. The message shown in the dialog can’t be readily retrieved/copied. I’ve used the GetWindowText and InternalGetWindowText functions, but they only return the title of the dialog and the text from the buttons, not the actual message itself. I even looked at programs that extract text from forms, but no luck. So does anyone know of a way to retrieve the text from these JavaScript dialogs in Chrome?
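
    One workaround sketch (not from the linked thread; it assumes the extension can inject a content script before page scripts run, and the __lastDialogMessage stash is hypothetical): wrap window.alert so the message is captured in JavaScript instead of being scraped out of the native dialog.

        // Capture the dialog text at its source rather than reading the native window
        const nativeAlert = window.alert;
        window.alert = (message?: any): void => {
          (window as any).__lastDialogMessage = String(message); // hypothetical stash
          nativeAlert.call(window, message);
        };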

    Read the article

  • Element not found blocks execution in Selenium

    - by Mariano
    In my test, I try to verify whether certain text exists (after an action) using find_element_by_xpath. If I use the right expression and my test passes, the routine ends correctly in no time. However, if I try a wrong text (meaning that the test will fail), it hangs forever and I have to kill the script, otherwise it does not end. Here is my test (the expression "Thx user, client or password you entered is incorrect" does not exist in the system, no matter what the user does):

        # -*- coding: utf-8 -*-
        import gettext
        import unittest

        from selenium import webdriver


        class TestWrongLogin(unittest.TestCase):
            def setUp(self):
                self.driver = webdriver.Firefox()
                self.driver.get("http://10.23.1.104:8888/")
                # let's check the language
                try:
                    self.lang = self.driver.execute_script("return navigator.language;")
                    self.lang = self.lang.split("-")[0]
                except:
                    self.lang = "en"
                language = gettext.translation('app', '/app/locale', [self.lang], fallback=True)
                language.install()
                self._ = gettext.gettext

            def tearDown(self):
                self.driver.quit()

            def test_wrong_client(self):
                # test wrong client
                inputElement = self.driver.find_element_by_name("login")
                inputElement.send_keys("root")
                inputElement = self.driver.find_element_by_name("client")
                inputElement.send_keys("Unleash")
                inputElement = self.driver.find_element_by_name("password")
                inputElement.send_keys("qwerty")
                self.driver.find_element_by_name("form.submitted").click()
                # wait for the db answer
                self.driver.implicitly_wait(10)
                ret = self.driver.find_element_by_xpath(
                    "//*[contains(.,'{0}')]".format(
                        self._(u"Thx user, client or password you entered is incorrect")))
                self.assertTrue(isinstance(ret, webdriver.remote.webelement.WebElement))


        if __name__ == '__main__':
            unittest.main()

    Why does it do that, and how can I prevent it?
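
    A sketch of one way around the blocking (not from the linked article): combine a zero implicit wait with an explicit WebDriverWait, which gives up with a TimeoutException after a bounded delay instead of hanging.

        from selenium.common.exceptions import TimeoutException
        from selenium.webdriver.support.ui import WebDriverWait

        self.driver.implicitly_wait(0)  # don't let implicit waiting stack on top
        try:
            ret = WebDriverWait(self.driver, 10).until(
                lambda d: d.find_element_by_xpath("//*[contains(.,'some text')]"))
        except TimeoutException:
            self.fail("expected message never appeared")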

    Read the article

  • How to test Gem Extensions in Rails

    - by rube_noob
    I have written an extension to an existing gem (it is stored in lib) and a corresponding test for my extension. How could I go about running the gem's tests as well as my own automatically? What is the best practice for this case?
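
    One possible direction (a sketch, not from the linked article; it assumes the installed gem actually ships its test files, the gem name and paths are hypothetical, and find_by_name requires RubyGems 1.8+): a Rake task that folds the gem's bundled tests into your own suite.

        require 'rake/testtask'

        Rake::TestTask.new(:test_all) do |t|
          # Hypothetical: locate the installed gem to reach its bundled tests
          gem_dir = Gem::Specification.find_by_name('somegem').gem_dir
          t.test_files = FileList['test/**/*_test.rb', "#{gem_dir}/test/**/*_test.rb"]
        end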

    Read the article

  • How do I get the name of the test method that was ran in a testng tear down method?

    - by Zachary Spencer
    Basically I have a tear-down method, and I want to log to the console which test was just run. How would I go about getting that string? I can get the class name, but I want the actual method that was just executed.

        class TestSomething {
            @AfterMethod
            public void tearDown() {
                System.out.println("The test that just ran was.... " + getTestThatJustRanMethodName());
            }

            @Test
            public void testCase() {
                assertTrue(1 == 1);
            }
        }

    This should output to the screen: "The test that just ran was.... testCase". However, I don't know the magic that getTestThatJustRanMethodName should actually be.
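
    If memory serves (worth verifying against the TestNG documentation for your version), TestNG can inject the just-executed test method into a configuration method, which yields exactly this name:

        import java.lang.reflect.Method;

        import org.testng.annotations.AfterMethod;

        public class TestSomething {
            @AfterMethod
            public void tearDown(Method method) {
                // TestNG supplies the @Test method that just finished
                System.out.println("The test that just ran was.... " + method.getName());
            }
        }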

    Read the article

  • How to test a soft-deletion event listener without setting up NHibernate sessions

    - by isuruceanu
    I have overridden the default NHibernate DefaultDeleteEventListener according to this source: http://nhforge.org/blogs/nhibernate/archive/2008/09/06/soft-deletes.aspx so I have:

        protected override void DeleteEntity(
            IEventSource session,
            object entity,
            EntityEntry entityEntry,
            bool isCascadeDeleteEnabled,
            IEntityPersister persister,
            ISet transientEntities)
        {
            if (entity is ISoftDeletable)
            {
                var e = (ISoftDeletable)entity;
                e.DateDeleted = DateTime.Now;
                CascadeBeforeDelete(session, persister, entity, entityEntry, transientEntities);
                CascadeAfterDelete(session, persister, entity, transientEntities);
            }
            else
            {
                base.DeleteEntity(session, entity, entityEntry, isCascadeDeleteEnabled,
                                  persister, transientEntities);
            }
        }

    How can I test only this piece of code, without configuring an NHibernate session?

    Read the article

  • Need help mocking a ASP.NET Controller in RhinoMocks

    - by Pure.Krome
    Hi folks, I'm trying to mock up a fake ASP.NET Controller. I don't have any concrete controllers, so I was hoping to just mock a Controller and have it work. This is what I have currently:

        _fakeRequestBase = MockRepository.GenerateMock<HttpRequestBase>();
        _fakeRequestBase.Stub(x => x.HttpMethod).Return("GET");

        _fakeContextBase = MockRepository.GenerateMock<HttpContextBase>();
        _fakeContextBase.Stub(x => x.Request).Return(_fakeRequestBase);

        var controllerContext = new ControllerContext(
            _fakeContextBase, new RouteData(),
            MockRepository.GenerateMock<ControllerBase>());

        _fakeController = MockRepository.GenerateMock<Controller>();
        _fakeController.Stub(x => x.ControllerContext).Return(controllerContext);

    Everything works except the last line, which throws a runtime error asking me for some Rhino.Mocks source code or something (which I don't have). See how I'm trying to mock up an abstract Controller - is that allowed? Can someone help me?

    Read the article

  • Unit Test Sessions Window Closes when debugging

    - by Daniel Dyson
    When I select an NUnit test in the Unit Test Sessions window and click debug, the window disappears. My breakpoints are hit, but if I hit F5, the Unit Test Sessions window does not return until the test returns a result or I stop the debugging session. This is preventing me from viewing any console output during tests. Any ideas?

    Read the article

  • Python: How to run unittest.main() for all source files in a subdirectory?

    - by Pete
    I am developing a Python module with several source files, each with its own test class derived from unittest right in the source. Consider the directory structure:

        dirFoo\
            test.py
            dirBar\
                __init__.py
                Foo.py
                Bar.py

    To test either Foo.py or Bar.py, I would add this at the end of the Foo.py and Bar.py source files:

        if __name__ == "__main__":
            unittest.main()

    and run Python on either source, i.e.:

        $ python Foo.py
        ...........
        ----------------------------------------------------------------------
        Ran 11 tests in 2.314s

        OK

    Ideally, I would have test.py automagically search dirBar for any unittest-derived classes and make one call to unittest.main(). What's the best way to do this in practice? I tried using Python to call execfile for every *.py file in dirBar, but that runs only once for the first .py file found and then exits the calling test.py; plus I then have to duplicate my code by adding unittest.main() in every source file, which violates DRY principles.
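
    A sketch of one way to do this (not from the linked article; discover() requires Python 2.7 or later, and test.py is run from dirFoo):

        # test.py - build one suite from every module in dirBar and run it once
        import unittest

        if __name__ == "__main__":
            suite = unittest.TestLoader().discover("dirBar", pattern="*.py")
            unittest.TextTestRunner(verbosity=2).run(suite)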

    Read the article

  • How to know if your Unit Test is "right-sized"?

    - by leeand00
    One thing that I've always noticed with my unit tests is that they get to be kind of verbose; seeing as they could also be not verbose enough, how do you get a sense of when your unit tests are the right size? I know a good quote for this: "Perfection is achieved, not when there is nothing left to add, but when there is nothing left to remove." - Antoine de Saint-Exupéry.

    Read the article

  • Method parameters have incorrect values when using RowTest in VB.Net

    - by simon_bellis
    Hello, I have the following test method (VB.NET):

        <RowTest()> _
        <Row(1, 2, 3)> _
        Public Sub AddMultipleNumbers(ByVal number1 As Integer, ByVal number2 As Integer, ByVal result As Integer)
            Dim dvbc As VbClass = New VbClass()
            Dim actual As Integer = dvbc.Add(number1, number2)
            Assert.That(actual, [Is].SameAs(result))
        End Sub

    My problem is that when the test runs using TestDriven.Net, the three method parameters are 0 and not the values I am expecting. I have referenced NUnit.Framework (v2.5.3.9345) and NUnitExtension.RowTest (v1.2.3.0).

    Read the article

  • How do I fix my Unit Test to have global access to everything?

    - by SLC
    Usually when you add one (in Visual Basic), it pops up a message asking if you want to enable an option that lets the test access things like private methods etc. However, I am editing a solution that does not have this enabled. I'd like to enable it so my unit tests will work, but I can't find the setting. Can anyone tell me how to enable it after the project has been created?

    Read the article

  • Recipe for creating a corrupt mysql table

    - by Chaim Geretz
    We had a process that crashed while trying to manipulate an expected MySQL record set. Running the offending query from the mysql CLI showed the following:

        mysql> SELECT ...;
        ERROR 1030: Got error 127 from table handler

    Is there a way to easily recreate this condition so we can validate our fix? (The production DB was already repaired.)

    Read the article

  • Configure lua prosody for localhost only

    - by Alfred
    I want to use Prosody, or maybe another XMPP server, to test my XMPP bot. I want it to accept connections only from localhost (I don't want to have to configure the firewall to block access). I would like to know the easiest way to accomplish this.
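
    A sketch of one way (check the Prosody documentation for your version, as the option name has varied across releases):

        -- In prosody.cfg.lua: bind only to the loopback interface
        interfaces = { "127.0.0.1" }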

    Read the article

  • How to flush coverage data when my test causes an app crash (iOS)

    - by Ypy
    I want to get the code coverage of my tests, so I configured the settings, built the app with .gcno files, and ran it in the simulator. It gets the coverage data successfully if there is no crash, but if the app crashes, I get nothing. So how can I get the code coverage data when the app crashes? My understanding is that this happens because __gcov_flush() is not called when the app crashes. I have only added 'Application does not run in background' to my plist file, so __gcov_flush() is called only when I press the Home button. Is there any way to call __gcov_flush() before the app crashes?
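
    One best-effort sketch (not from the linked article; __gcov_flush() is not guaranteed to be async-signal-safe, so treat this strictly as a testing aid): install signal handlers that flush the counters before the process dies.

        #include <signal.h>

        extern void __gcov_flush(void);   /* provided by the coverage runtime */

        static void crash_flush(int sig) {
            __gcov_flush();               /* write out the .gcda files */
            signal(sig, SIG_DFL);         /* restore default handling... */
            raise(sig);                   /* ...and let the crash proceed */
        }

        void install_coverage_crash_handlers(void) {
            signal(SIGSEGV, crash_flush);
            signal(SIGBUS,  crash_flush);
            signal(SIGABRT, crash_flush);
        }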

    Read the article

  • Measuring the CPU frequency scaling effect

    - by Bryan Fok
    Recently I have been trying to measure the effect of CPU frequency scaling. Is it accurate to use this clock to measure it?

        template<std::intmax_t clock_freq>
        struct rdtsc_clock {
            typedef unsigned long long rep;
            typedef std::ratio<1, clock_freq> period;
            typedef std::chrono::duration<rep, period> duration;
            typedef std::chrono::time_point<rdtsc_clock> time_point;
            static const bool is_steady = true;

            static time_point now() noexcept
            {
                unsigned lo, hi;
                asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
                return time_point(duration(static_cast<rep>(hi) << 32 | lo));
            }
        };

    Update: According to a comment on another of my posts, I believe rdtsc cannot be used to measure the effect of CPU frequency scaling, because the counter read by rdtsc is not affected by the CPU frequency. Am I right?
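
    A sketch of an alternative (not from the linked post): time a fixed workload with a wall clock instead. If frequency scaling changes the core clock, the elapsed time for the same work changes - exactly the effect that a constant-rate TSC would hide.

        #include <chrono>
        #include <cstdio>

        volatile unsigned long long sink = 0;   // defeat optimization

        int main() {
            const auto start = std::chrono::steady_clock::now();
            for (unsigned long long i = 0; i < 200000000ULL; ++i)
                sink += i;                      // fixed amount of work
            const auto stop = std::chrono::steady_clock::now();
            std::printf("elapsed: %lld ms\n", static_cast<long long>(
                std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()));
        }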

    Read the article

  • Why is JUnit's ComparisonFailure not used by assertEquals(Object, Object)?

    - by Philippe Blayo
    In JUnit 4, do you see any drawback to throwing a ComparisonFailure instead of an AssertionError when assertEquals(Object, Object) fails? assertEquals(Object, Object) throws a ComparisonFailure if both expected and actual are Strings, and an AssertionError if either is not a String:

        @Test(expected = ComparisonFailure.class)
        public void twoString() {
            assertEquals("a String", "another String");
        }

        @Test(expected = AssertionError.class)
        public void oneString() {
            assertEquals("a String", new Object());
        }

    The two reasons why I ask the question: First, ComparisonFailure provides a far more readable way to spot the differences in the dialog box of Eclipse or IntelliJ IDEA (FEST-Assert throws this exception). Second, JUnit 4 already uses String.valueOf(Object) to build the "expected ... but was ..." message (the format method invoked by Assert.assertEquals(message, Object, Object) in junit-4.8.2):

        static String format(String message, Object expected, Object actual) {
            ...
            String expectedString = String.valueOf(expected);
            String actualString = String.valueOf(actual);
            if (expectedString.equals(actualString))
                return formatted + "expected: " + formatClassAndValue(expected, expectedString)
                        + " but was: " + formatClassAndValue(actual, actualString);
            else
                return formatted + "expected:<" + expectedString + "> but was:<" + actualString + ">";
        }

    Isn't it possible in assertEquals(message, Object, Object) to replace fail(format(message, expected, actual)); with:

        throw new ComparisonFailure(message,
                formatClassAndValue(expectedObject, expectedString),
                formatClassAndValue(actualObject, actualString));

    Do you see any compatibility issue with other tools, or any algorithmic problem with that?

    Read the article
