Search Results

Search found 23949 results on 958 pages for 'test me'.


  • Zend_Test_PHPUnit_ControllerTestCase: Test view parameters and not rendered output

    - by erenon
    Hi, I'm using Zend_Test_PHPUnit_ControllerTestCase to test my controllers. This class provides various ways to test the rendered output, but I don't want to involve my view scripts; I'd like to test my view's variables instead. Is there a way to access the controller's view object? Here is an example of what I'm trying to do:

        <?php
        class Controller extends Zend_Controller_Action
        {
            public function indexAction()
            {
                $this->view->foo = 'bar';
            }
        }

        class ControllerTest extends Zend_Test_PHPUnit_ControllerTestCase
        {
            public function testShowCallsServiceFind()
            {
                $this->dispatch('/controller');

                // doesn't work, there is no such method:
                $this->assertViewVar('foo', 'bar');

                // doesn't work either:
                $this->assertEquals('bar', $this->getView()->foo);
            }
        }
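
    A sketch of one possible workaround, assuming the usual Zend Framework 1 setup that Zend_Test_PHPUnit_ControllerTestCase implies: after dispatching, the view the controller wrote to can be fetched from the ViewRenderer action helper.

        // inside the test method, after $this->dispatch('/controller');
        $view = Zend_Controller_Action_HelperBroker::getExistingHelper('ViewRenderer')->view;
        $this->assertEquals('bar', $view->foo);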

    Read the article

  • Unit test with Authlogic on Rails 3

    - by Puru puru rin..
    Hello, I would like to write some unit tests with a logged-in user using Authlogic. To start, I used the code hosted at http://github.com/binarylogic/authlogic_example. But I get an error after rake test, because of "test_helper.rb" and the following class:

        class ActionController::TestCase
          setup :activate_authlogic
        end

    Here is my error:

        NameError: undefined local variable or method `activate_authlogic' for

    I think this Authlogic example targets Rails 2; maybe it's a little bit different on Rails 3. Is there another example of unit testing with Authlogic that I could follow? Many thanks.
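
    A sketch of the test setup that the missing activate_authlogic usually points to; this assumes the authlogic gem is loaded in the test environment, and the controller/fixture names are placeholders:

        # test/test_helper.rb
        require 'authlogic/test_case'

        class ActiveSupport::TestCase
          include Authlogic::TestCase   # defines activate_authlogic
          setup :activate_authlogic
        end

        # logging a user in inside a functional test
        class SecretsControllerTest < ActionController::TestCase
          test "index is shown to a logged-in user" do
            UserSession.create(users(:one))   # assumes a users fixture and an Authlogic UserSession model
            get :index
            assert_response :success
          end
        end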

    Read the article

  • Ideal dev/test/QA environment for development

    - by Nick
    I am working to rebuild my company's dev/test/QA environment. We have 10-15 programmers involved in a number of projects. They currently all develop locally on their PCs and use the dev environment for testing. We currently do not have a QA environment, so deployments are frequently a pain because bugs are usually found after something has gone live. Here's what I envision:

        - Do away with everyone's local admin privileges and have everyone develop on a dev server.
        - Create a QA environment that is identical to our production systems, so deployments can be tested there.
        - Create a new test environment that is more locked down than the dev server, so that proper testing can be done.

    What are your thoughts? What is the best way to set up an environment like this? We develop ASP.NET applications using MS Visual Studio 2008 (if that helps).

    Read the article

  • How much of Grails GORM to test?

    - by Lloyd Meinholz
    Is there a "best practice" or defacto standard with how much of the GORM functionality one should test in the unit/functional tests? My take is that one should probably do most of the domain testing as functional tests so that you get the full grails environment. But what do you test? Inserts, updates, deletes? Do you test constraints even though they were probably more thoroughly tested by the grails release? Or do you just assume that GORM does what it is supposed to do and move to other parts of the application?

    Read the article

  • How can I specify JUnit test dependencies?

    - by Egon Willighagen
    Our toolkit has over 15,000 JUnit tests, and many tests are known to fail if some other test fails. For example, if the method X.foo() uses functionality from Y.foo() and YTest.testFoo() fails, then XTest.testFoo() will fail too. Obviously, XTest.testFoo() can also fail because of problems specific to X.foo(). While this is fine and I still want both tests run, it would be nice if one could annotate a test dependency on XTest.testFoo() pointing to YTest.testFoo(). This way, one could immediately see which functionality used by X.foo() is also failing, and which is not. Is there such an annotation available in JUnit or elsewhere? Something like:

        public class YTests {
            @Test
            @DependsOn(method = org.example.tests.YTest#testFoo)
            public void testFoo() {
                // Assert.something();
            }
        }
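
    For comparison, TestNG expresses roughly this relationship with its dependsOnMethods attribute; a rough sketch, with made-up class and method names:

        import org.testng.annotations.Test;

        public class XTests {

            @Test
            public void yFooWorks() {
                // exercises Y.foo() directly
            }

            // Skipped rather than failed when yFooWorks fails, so the report separates
            // problems in X.foo() from problems inherited from Y.foo().
            @Test(dependsOnMethods = {"yFooWorks"})
            public void xFooWorks() {
                // exercises X.foo()
            }
        }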

    Read the article

  • Did test server port change in Rails 2.3?

    - by kareem
    I upgraded Rails to 2.3.2 from 2.1.1 yesterday and a bunch of my tests started failing. When I was running under 2.1.1, the test server was running on port 3000, so I had a HOST_DOMAIN variable that included the port - HOST_DOMAIN = "localhost.tst:3000" - so that my assert_redirected_to assertions would succeed. Now, however, it seems that the test server is running on port 80, so the port in HOST_DOMAIN is causing tests to fail. There's no specific reason I'm keeping the port in HOST_DOMAIN. I mainly want to know whether something in Rails 2.3 changed the port the test server runs on, and where I can read more about why. I've searched a ton and can't find anything, so I'm going to my go-to place to ask development questions :) Thanks in advance.
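
    Not an answer to the "what changed" question, but a sketch of how the expected host can be pinned in the tests themselves instead of baking a port into HOST_DOMAIN (the host name below is just the one from the question, and the test class names are placeholders):

        # functional tests: override the mock request's host
        class SessionsControllerTest < ActionController::TestCase
          setup do
            @request.host = 'localhost.tst'
          end
        end

        # integration tests: use host!
        class SignupFlowTest < ActionController::IntegrationTest
          def setup
            host! 'localhost.tst'
          end
        end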

    Read the article

  • How to set a global before PHPUnit's skeleton-test is run

    - by ministerOfPower
    We set a global in our prepend file that is used to form the path for our require_once calls. For example:

        require_once($GLOBALS['root'].'/library/particleboard/JsonUtil.php');

    The problem is that when I run PHPUnit's skeleton test builder, the prepend file is not run, so the global is never set. When I run

        cd /company/trunk/queue/process; phpunit --skeleton-test QueueProcessView

    PHPUnit tries to resolve a require_once in QueueProcessView, but since $GLOBALS['root'] is never set, I get a fatal error when including the required file. For example, what should be

        require_once(/code/trunk/library/particleboard/JsonUtil.php)

    is resolved by PHPUnit as

        require_once(/library/particleboard/JsonUtil.php)

    Notice the missing root. Does anyone know whether the skeleton-test code has some way to run a PHP file before it executes? That way I could set my $GLOBALS['root'] in that file. Any other creative solutions would be appreciated.
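
    One possibility, sketched under the assumption that your PHPUnit version supports a bootstrap file and that the skeleton generator honours it the same way the test runner does: point PHPUnit at a script that replays the prepend logic before anything else loads. The path below is a placeholder for wherever $GLOBALS['root'] is actually defined.

        phpunit --bootstrap /company/trunk/prepend.php --skeleton-test QueueProcessView

    The same setting can live in phpunit.xml so it applies to every run:

        <phpunit bootstrap="/company/trunk/prepend.php">
        </phpunit>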

    Read the article

  • Windows IIS test server setup

    - by chopps
    Hello everyone, I picked up a new server to do some testing and need a little help setting up my environment at home. Here is what I would like to do: the test server will be used to test new code and configurations for a SaaS product. I would like to enter www.acme.com from my laptop and have it hit that server. The server is connected to a wireless router. I have Windows Server 2008 with IIS running on an IP of 192.168.1.4. What is the best way to set this up? I want to hit the test server for www.acme.com and not go out to the internet. Do I need to mess with the LMHosts file? Thanks for the help. I'm sure it's easy, but I have never done this before.
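
    A minimal sketch of the usual approach - a static name mapping in the laptop's hosts file rather than LMHosts; the host name and IP are the ones from the question:

        # %SystemRoot%\System32\drivers\etc\hosts  (edit as Administrator)
        192.168.1.4    www.acme.com

    With that entry, the laptop resolves www.acme.com to the IIS box on the LAN instead of going out to the internet; on the IIS side, the site just needs a binding (host header) for www.acme.com.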

    Read the article

  • Prevent Visual Studio Web Test from changing request details

    - by keithwarren7
    I have a service that accepts XMLA queries for Analysis Services; often those queries contain a string with a fragment that looks something like {{[Time].[Year].[All]}}. Recording these requests works fine, but when I try to re-run the test I get an error from the test runner:

        Request failed: Exception occurred: There is no context parameter with the name ' [Time].[Year].[All]' in the WebTestContext

    This was confusing for some time, but when I asked VS to generate a coded version of the test I could see the problem a bit better. VS searches for the '{{' and '}}' tokens and treats those areas as references to context parameters; the generated code looks like

        this.Context["\n\t[Time].[Year].[All]"].ToString()

    Does anyone know how to instruct Visual Studio not to perform this replacement operation? Or another way around this issue?

    Read the article

  • Confusion testing fftw3 - 2D Poisson equation test

    - by user3699736
    I am having trouble explaining/understanding the following phenomenon. To test fftw3 I am using the 2D Poisson test case

        laplacian(f(x,y)) = -g(x,y)

    with periodic boundary conditions. After applying the Fourier transform to the equation we obtain

        F(kx,ky) = G(kx,ky) / (kx² + ky²)        (1)

    If I take g(x,y) = sin(x) + sin(y), with (x,y) in [0, 2π], I immediately have f(x,y) = g(x,y), which is what I am trying to obtain with the FFT: I compute G from g with a forward Fourier transform, from this I compute the Fourier transform of f with (1), and finally I compute f with the backward Fourier transform (without forgetting to normalize by 1/(nx*ny)). In practice, the results are pretty bad: for instance, the amplitude for N = 256 is twice the amplitude obtained with N = 512. Even worse, if I try g(x,y) = sin(x)*sin(y), the curve does not even have the same shape as the solution (note that I must change the equation in this case: dividing the Laplacian by two, (1) becomes F(kx,ky) = 2*G(kx,ky)/(kx² + ky²)).

    Here is the code:

        /*
         * fftw test -- double precision
         */
        #include <iostream>
        #include <stdio.h>
        #include <stdlib.h>
        #include <math.h>
        #include <fftw3.h>

        using namespace std;

        int main()
        {
            int N = 128;
            int i, j;
            double pi = 3.14159265359;

            double *X, *Y;
            X = (double*) malloc(N*sizeof(double));
            Y = (double*) malloc(N*sizeof(double));

            fftw_complex *out1, *in2, *out2, *in1;
            fftw_plan p1, p2;

            double L  = 2.*pi;
            double dx = L/((N - 1)*1.0);

            in1  = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*(N*N));
            out2 = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*(N*N));
            out1 = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*(N*N));
            in2  = (fftw_complex*) fftw_malloc(sizeof(fftw_complex)*(N*N));

            p1 = fftw_plan_dft_2d(N, N, in1, out1, FFTW_FORWARD,  FFTW_MEASURE);
            p2 = fftw_plan_dft_2d(N, N, in2, out2, FFTW_BACKWARD, FFTW_MEASURE);

            for(i = 0; i < N; i++){
                X[i] = -pi + (i*1.0)*2.*pi/((N - 1)*1.0);
                for(j = 0; j < N; j++){
                    Y[j] = -pi + (j*1.0)*2.*pi/((N - 1)*1.0);
                    in1[i*N + j][0] = sin(X[i]) + sin(Y[j]);     // row major ordering
                    //in1[i*N + j][0] = sin(X[i]) * sin(Y[j]);   // 2nd test case
                    in1[i*N + j][1] = 0;
                }
            }

            fftw_execute(p1); // FFT forward

            for(i = 0; i < N; i++){        // f = g / ( kx² + ky² )
                for(j = 0; j < N; j++){
                    in2[i*N + j][0] = out1[i*N + j][0]/(i*i + j*j + 1e-16);
                    in2[i*N + j][1] = out1[i*N + j][1]/(i*i + j*j + 1e-16);
                    //in2[i*N + j][0] = 2*out1[i*N + j][0]/(i*i + j*j + 1e-16);   // 2nd test case
                    //in2[i*N + j][1] = 2*out1[i*N + j][1]/(i*i + j*j + 1e-16);   // 2nd test case
                }
            }

            fftw_execute(p2); // FFT backward

            // checking the computed results
            double erl1 = 0.;
            for(i = 0; i < N; i++){
                for(j = 0; j < N; j++){
                    erl1 += fabs(in1[i*N + j][0] - out2[i*N + j][0]/N/N)*dx*dx;
                    cout << i << " " << j << " " << sin(X[i]) + sin(Y[j]) << " "
                         << out2[i*N + j][0]/N/N << " " << endl;    // > output
                }
            }
            cout << erl1 << endl; // L1 error

            fftw_destroy_plan(p1);
            fftw_destroy_plan(p2);
            fftw_free(out1);
            fftw_free(out2);
            fftw_free(in1);
            fftw_free(in2);

            return 0;
        }

    I can't find any (more) mistakes in my code (I installed the fftw3 library last week), and I don't see a problem with the maths either, but I don't think it's the FFT's fault. Hence my predicament. I am all out of ideas and all out of Google as well. Any help solving this puzzle would be greatly appreciated.

    Note: I compile with g++ test.cpp -lfftw3 -lm, run with ./a.out > output, and use gnuplot to plot the curves (in gnuplot: splot "output" u 1:2:4 for the computed solution).
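
    One thing worth checking, offered as a guess rather than a verified fix: with FFTW's complex DFT, array index i corresponds to wavenumber i only for i <= N/2; the indices above N/2 hold the negative wavenumbers i - N, so dividing by (i*i + j*j) is not the same as dividing by (kx² + ky²) over the upper half of the spectrum, and the error would change with N. The usual mapping looks like this:

        // map DFT bin indices to signed wavenumbers before dividing by k²
        for (int i = 0; i < N; i++) {
            int kx = (i <= N/2) ? i : i - N;
            for (int j = 0; j < N; j++) {
                int ky = (j <= N/2) ? j : j - N;
                double k2 = (double)(kx*kx + ky*ky) + 1e-16;   // keep the k = 0 mode from blowing up
                in2[i*N + j][0] = out1[i*N + j][0] / k2;
                in2[i*N + j][1] = out1[i*N + j][1] / k2;
            }
        }

    The grid spacing may also matter: for a periodic domain of length 2π sampled at N points, dx is usually taken as 2π/N rather than 2π/(N-1), so the last sample does not duplicate the first.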

    Read the article

  • C# / Visual Studio: production and test code placement

    - by Patrick Linskey
    Hi, in JavaLand I'm used to creating projects that contain both production and test code. I like this practice because it simplifies testing of internal code without artificially exposing the internals in a project's published API. So far, in my experience with C# / Visual Studio / ReSharper / NUnit, I've created separate projects (i.e., separate DLLs) for production and test code. Is this the idiom, or am I off base? If this is idiomatically correct, what's the right way to deal with exposing classes and methods for test purposes? Thanks, -Patrick
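
    One common way to keep tests in a separate assembly while still reaching internal members, sketched with placeholder assembly names:

        // in the production project's AssemblyInfo.cs
        using System.Runtime.CompilerServices;

        [assembly: InternalsVisibleTo("MyProduct.Tests")]

        // if the test assembly is strong-named, the attribute needs the full public key instead:
        // [assembly: InternalsVisibleTo("MyProduct.Tests, PublicKey=0024...")]

    With this in place, internal classes and methods in the production DLL are visible to the named test assembly, so NUnit tests can exercise them without anything being exposed in the published API.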

    Read the article

  • Integration Testing an Entire *Existing* Application (w/ automatic execution of test suite)

    - by Ev
    Hi there, I have just joined a team working on an existing Java web app. I have been tasked with creating an automated integration test suite that should run when developers commit to our continuous integration server (TeamCity), which automatically deploys to our staging server - so really the tests will be run against our staging web app server. I have read a lot about automated integration testing with frameworks like Watir, Selenium and RWebSpec. I have created tests in all of these, and while I prefer Watir, I am open to anything. The thing that hasn't become clear to me is how to create an entire test suite for an application, and how to have that suite execute in its entirety when some script is run. I can happily create individual tests of varying complexity, but there is a gap in my knowledge about how to tie everything together into something useful. Does anyone have any advice on how to create a full test suite and have it execute automatically? Thanks!
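
    A minimal sketch of one way to tie individual Watir/Test::Unit scripts into a single suite that a CI step can invoke; the file layout and task name are assumptions:

        # Rakefile
        require 'rake/testtask'

        Rake::TestTask.new(:integration) do |t|
          t.libs << 'test'
          t.test_files = FileList['test/integration/**/*_test.rb']
          t.verbose = true
        end

    Each Watir script is written as an ordinary Test::Unit test case under test/integration/; running rake integration (for example as a TeamCity build step that fires after the deployment to staging) then executes the whole suite and fails the build if any test fails.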

    Read the article

  • Action works, but test doesn't (Shoulda)

    - by trobrock
    I am trying to test my update action in Rails with this:

        context "on PUT to :update" do
          setup do
            @countdown = Factory(:countdown)
            @new_countdown = Factory.stub(:countdown)
            put :update, :id => @countdown.id, :name => @new_countdown.name, :end => @new_countdown.end
          end

          should_respond_with :redirect
          should_redirect_to("the countdowns view") { countdown_url(assigns(:countdown)) }
          should_assign_to :countdown
          should_set_the_flash_to /updated/i

          should "save :countdown with new attributes" do
            @countdown = Countdown.find(@countdown.id)
            assert_equal @new_countdown.name, @countdown.name
            assert_equal 0, (@new_countdown.end - @countdown.end).to_i
          end
        end

    When I actually go through the updating process using the scaffold that was built, it updates the record fine, but the tests give me this error:

        1) Failure:
        test: on PUT to :update should save :countdown with new attributes. (CountdownsControllerTest)
        [/test/functional/countdowns_controller_test.rb:86:in `__bind_1276353837_121269'
         /Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `call'
         /Library/Ruby/Gems/1.8/gems/thoughtbot-shoulda-2.10.2/lib/shoulda/context.rb:351:in `test: on PUT to :update should save :countdown with new attributes. ']:
        <"Countdown 8"> expected but was
        <"Countdown 7">.
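
    A guess at the cause, since a scaffolded update action normally reads params[:countdown]: the put in the setup sends :name and :end as top-level parameters, so the controller may never see them. A sketch of the nested form:

        put :update, :id => @countdown.id,
                     :countdown => { :name => @new_countdown.name, :end => @new_countdown.end }

    This assumes the action calls something like @countdown.update_attributes(params[:countdown]), which is what Rails scaffolding generates.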

    Read the article

  • XPath or XQuery to test the order

    - by mada
    Hi, using SoapUI (great tool for web services, by the way), I have the following XML result:

        <code>c</code>
        <code>b</code>
        <code>a</code>

    For the sample above, I would like to test that the code values are ordered ascending. Of course, for this sample the test would fail, as expected. Is there any solution with XQuery or XPath? (I can use Groovy inside the test if necessary.) Thanks in advance.
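
    A sketch of the Groovy-script route, assuming the <code> elements sit somewhere under the response of a test step named "Request 1" (the step name is a placeholder):

        def response = context.expand('${Request 1#Response}')
        def codes = new XmlSlurper().parseText(response)
                        .'**'.findAll { it.name() == 'code' }*.text()

        // the list must already equal its own ascending sort
        assert codes == codes.collect().sort()

    context.expand with the ${Step#Response} property expansion is the usual way to grab another step's response in a SoapUI Groovy script, and XmlSlurper's '**' walks every node, so the check does not depend on the surrounding structure.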

    Read the article

  • Test for empty jQuery selection result

    - by fsb
    Say I do

        var s = $('#something');

    and next I want to test whether jQuery found #something, i.e. I want to test whether s is empty. I could use my trusty isempty() on it:

        function isempty(o) {
            for (var i in o) return false;
            return true;
        }

    Or, since jQuery objects are array-like, I suppose I could test s.length. But neither seems quite in the idiom of jQuery, not very jQueryesque. What do you suggest?
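
    For what it's worth, a sketch of the length-based check, which is the form jQuery's own documentation tends to use:

        var s = $('#something');

        if (s.length === 0) {
            // nothing matched the selector
        }

        // or, relying on 0 being falsy:
        if (!s.length) {
            // nothing matched
        }

    A for..in loop over a jQuery object also walks its methods and bookkeeping properties, so isempty() as written would report even an empty selection as non-empty; length reflects only the matched elements.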

    Read the article

  • ReSharper Unit Test Runner: Support for Deployment Items

    - by driis
    I like the unit test runner in ReSharper 4.5 and would like to use it with my MSTest tests, but one thing annoys me: in some of our solutions, we have set up some Deployment Items in the .testrunconfig file. The ReSharper unit test runner does not seem to respect this, so I get errors when trying to run the unit tests from ReSharper. Is there any workaround for this? Update: citizenmatt's answer was correct; the option to use a .testrunconfig with ReSharper exists in ReSharper's Options dialog. You have to select the unit test provider in the list, and then the controls to do that appear. (That was not obvious or discoverable, at least not for me ;-)

    Read the article

  • Devising a test strategy

    - by Simon Callan
    As part of a new job, I have to devise and implement a complete test strategy for the company's new product. So far, all I really know about it is that it is written in C++, uses an SQL database, and has a web API which is used by a browser client written using GWT. As far as I know, there isn't much of an existing strategy, except for some Python scripts used to test the web API. I need to develop and implement a suitable strategy for unit, system, regression and release testing, preferably a fully automated one. I'm looking for good references on:

        - Devising the complete test strategy.
        - Testing the web API.
        - Testing the GWT-based application.
        - Unit testing C++ code.

    In addition, any suitable tools would be appreciated.

    Read the article

  • Is Pex (test generation) really a useful tool?

    - by Yauheni Sivukha
    Yes, it is possible to generate tests on boundary values for functions like "Sum" or "Divide"; Pex is a good tool there. But more often we create tests of business behaviour. Let's consider an example from Beck's classic TDD book:

        [Test]
        public void ShouldRoundOnCreation()
        {
            Money money = new Money(20.678);
            Assert.AreEqual(20.68, money.Amount);
            Assert.AreEqual(2068, money.Cents);
        }

    Can this test be generated? No :) 95% of the tests in my projects check business logic and cannot be generated. Pex (especially paired with Moles) can give 100% code coverage, but a high code coverage rate for a test suite never indicates that the code is well tested; it only gives false confidence that everything is tested. And that is very dangerous. So, the question is: is Pex really a useful tool?

    Read the article

  • How to test the XML sent to a web service in Ruby/Rails

    - by Jason Langenauer
    I'm looking for the best way to write unit tests for code that POSTs to an external web service. The body of the POST request is an XML document which describes the actions and data for the web service to perform. Now, I've wrapped the web service in its own class (similar to ActiveResource), and I can't see any way to test the exact XML being generated by the class without breaking encapsulation by exposing some of the internal XML generation as public methods on the class. This seems to be a code smell: from the point of view of the users of the class, they should not know, nor care, how the class actually implements the web service call, be it with XML, JSON or carrier pigeons. For an example of the class:

        class Resource
          def new
            # initialize the class
          end

          def save!
            Http.post("http://webservice.com", self.to_xml)
          end

          private

          def to_xml
            # returns an XML representation of self
          end
        end

    I want to be able to test the XML generated to ensure it conforms to what the specs for the web service are expecting. So how can I best do this, without making to_xml a public method?
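
    One approach that keeps to_xml private, sketched under the assumption that the Http wrapper can be stubbed (flexmock is used here, but any mocking library would do): intercept the POST and assert on the body the class hands to it. The <resource> element name is a placeholder for whatever the web service spec requires.

        require 'test_helper'
        require 'flexmock/test_unit'

        class ResourceTest < ActiveSupport::TestCase
          test "save! posts the expected XML" do
            sent_body = nil
            flexmock(Http).should_receive(:post).once.and_return { |url, body| sent_body = body }

            Resource.new.save!

            assert_match(/<resource>/, sent_body)
          end
        end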

    Read the article

  • Test JBoss JMS externally - outside of the application server

    - by bmatsliah
    Hi all, I'm pretty new to JMS. I need to test a remote JMS destination which is on JBoss 4.2.1. I want to develop an external test (e.g. from a main class) which sends and receives messages to/from the app server. (I do have full access to the remote server.) My questions are:

        1) How do I send messages to the JBoss JMS?
        2) Where on the JBoss server is the information needed to send a message (IP, port, queue, user, password, etc.) so I can set up the local messaging test?
        3) Is there a way to configure/develop a mechanism on the remote JBoss JMS server that sends acknowledgments back to the sender (in this case)?

    I greatly appreciate your help. Thanks, Ben
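
    A minimal standalone client sketch for question 1, with placeholder host, queue name and credentials; it assumes the JBoss 4.x client jars (e.g. jbossall-client.jar) are on the classpath and that the queue is already deployed:

        import java.util.Properties;
        import javax.jms.*;
        import javax.naming.Context;
        import javax.naming.InitialContext;

        public class JmsSmokeTest {
            public static void main(String[] args) throws Exception {
                Properties props = new Properties();
                props.put(Context.INITIAL_CONTEXT_FACTORY, "org.jnp.interfaces.NamingContextFactory");
                props.put(Context.PROVIDER_URL, "jnp://your-jboss-host:1099"); // default JNDI port
                props.put(Context.URL_PKG_PREFIXES, "org.jboss.naming:org.jnp.interfaces");

                Context ctx = new InitialContext(props);
                ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
                Queue queue = (Queue) ctx.lookup("queue/testQueue"); // placeholder queue name

                Connection conn = cf.createConnection(); // pass user/password here if the destination is secured
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.send(session.createTextMessage("hello from outside the app server"));

                conn.close();
                ctx.close();
            }
        }

    For question 3, a common pattern is to set a JMSReplyTo destination on the outgoing message and have the consumer on the server post its acknowledgment message back to that destination.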

    Read the article

  • Need help with developing a class for my JUnit test

    - by alpdog14
    I have a JUnit test that I need help developing an interface and class for. Here is the test:

        Box b1 = new DefaultBox( "abc" );
        Box b2 = new DefaultBox( "def" );
        Box b3 = new DefaultBox( "" );

        assertEquals("abc", b1.contents());
        assertEquals("[abc]", b1.toString());
        assertTrue(b1.equals(b1));
        assertFalse(b1.equals(b2));
        assertFalse(b1.equals(null));
        assertEquals("cba", b1.flip().contents());
        assertEquals("", b3.flip().contents());

    Can anyone help me develop a DefaultBox class and a Box interface that make these tests pass? Any help would be most appreciated.
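
    A sketch of one interface/class pair that would satisfy those assertions; the names follow the test, and the rule that equality compares contents is an assumption read off the asserts:

        public interface Box {
            String contents();
            Box flip();
        }

        public class DefaultBox implements Box {
            private final String contents;

            public DefaultBox(String contents) {
                this.contents = contents;
            }

            @Override
            public String contents() {
                return contents;
            }

            @Override
            public Box flip() {
                // reverse the contents, e.g. "abc" -> "cba"
                return new DefaultBox(new StringBuilder(contents).reverse().toString());
            }

            @Override
            public String toString() {
                return "[" + contents + "]";
            }

            @Override
            public boolean equals(Object other) {
                if (this == other) return true;
                if (!(other instanceof DefaultBox)) return false;
                return contents.equals(((DefaultBox) other).contents);
            }

            @Override
            public int hashCode() {
                return contents.hashCode();
            }
        }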

    Read the article

  • Proper structure for many test cases in Python with unittest

    - by mellort
    I am looking into the unittest package, and I'm not sure of the proper way to structure my test cases when writing a lot of them for the same method. Say I have a fact function which calculates the factorial of a number; would this testing file be OK?

        import unittest

        class functions_tester(unittest.TestCase):
            def test_fact_1(self):
                self.assertEqual(1, fact(1))
            def test_fact_2(self):
                self.assertEqual(2, fact(2))
            def test_fact_3(self):
                self.assertEqual(6, fact(3))
            def test_fact_4(self):
                self.assertEqual(24, fact(4))
            def test_fact_5(self):
                self.assertFalse(1 == fact(5))
            def test_fact_6(self):
                self.assertRaises(RuntimeError, fact, -1)  # fact(-1)

        if __name__ == "__main__":
            unittest.main()

    It seems sloppy to have so many test methods for one method. I'd like to just have one testing method and put a ton of basic test cases in it (i.e. 4! == 24, 3! == 6, 5! == 120, and so on), but unittest doesn't let you do that. What is the best way to structure a testing file in this scenario? Thanks in advance for the help.
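
    One common shape is to keep a single test method and drive it from a table of (input, expected) pairs, so adding a case is a one-line change. A self-contained sketch; the inline fact below is only a stand-in so the example runs, to be replaced by an import of the real function under test:

        import unittest

        def fact(n):
            # stand-in implementation; replace with the real fact being tested
            if n < 0:
                raise RuntimeError("factorial of a negative number")
            result = 1
            for k in range(2, n + 1):
                result *= k
            return result

        class FactorialKnownValues(unittest.TestCase):
            known_values = [(1, 1), (2, 2), (3, 6), (4, 24), (5, 120)]

            def test_known_values(self):
                for n, expected in self.known_values:
                    self.assertEqual(expected, fact(n))

            def test_negative_input_raises(self):
                self.assertRaises(RuntimeError, fact, -1)

        if __name__ == "__main__":
            unittest.main()

    The trade-off is that a plain loop stops at the first failing pair; newer unittest versions add subTest to report each pair separately, and nose or pytest can generate one test per pair.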

    Read the article

  • Using FlexMock in a Rails functional test

    - by dagda1
    Hi, I have the following index action:

        class ExpensesController < ApplicationController
          def index()
            @expenses = Expense.all
          end
        end

    I want to mock the call to all in a functional test. I am using flexmock and have written the following test:

        require 'test_helper'
        require 'flexmock'
        require 'flexmock/test_unit'

        class ExpensesControllerTest < ActionController::TestCase
          test "should render index" do
            flexmock(Expense).should_receive(:all).and_return([])
            get :index
            assert_response :success
            assert_template :index
            assert_equal [], assigns(:presentations)
          end
        end

    The problem is that the last assertion fails with the following error message:

        <[]> expected but was nil

    I am confused about what I am doing wrong. Should this not work? Cheers, Paul
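
    Judging only from the code shown, one likely explanation: the action assigns @expenses, but the assertion reads assigns(:presentations), which is never set and is therefore nil. A sketch of the matching assertion:

        assert_equal [], assigns(:expenses)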

    Read the article

  • Is a new thread in a Visual Studio test project aborted when the test ends?

    - by Michel
    Hi, I have to do some message exchange with a third party (in a website). When the client posts a page, I start the message exchange. When that doesn't succeed for some reason, I report this to the client by rendering the page with a message. In the background, in a separate thread, I start a process to send abort messages to the third party. I can't do this while the user is waiting for the page to come back, because it might take a few minutes. But in a test project, the test ends when the message to the third party is sent, after the new thread has been started. It seems, though, that the new thread also ends when the test is done. Is that normal behaviour? I do start the thread in a new class with references to two objects from the class that tries to send the message in the first place; might that be a problem?

    Read the article

  • Integrating Hudson with MS Test?

    - by hangy
    Is it possible to integrate Hudson with MSTest? I am setting up a smaller CI server on my development machine with Hudson right now, just so that I can have some statistics (i.e. FxCop and compiler warnings). Of course, it would also be nice if it could just run my unit tests and present their output. Up to now, I have added the following batch task to Hudson, which makes it run the tests properly:

        "%PROGRAMFILES%\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe" /runconfig:LocalTestRun.testrunconfig /testcontainer:Tests\bin\Debug\Tests.dll

    However, as far as I know, Hudson does not yet support analysis of MSTest results. Does anyone know whether the TRX files generated by MSTest.exe can be transformed to the JUnit or NUnit result format (because those are supported by Hudson), or whether there is any other way to integrate MSTest unit tests with Hudson?

    Read the article
