Search Results

Search found 10010 results on 401 pages for 'a b testing'.

Page 34 of 401

  • How can I tell JWebUnit to contact specific Selenium servers

    - by Peter Tillemans
    We want to run Selenium-backed JWebUnit tests from our Hudson server. We already have a couple of Selenium RC servers on our network which I'd like to reuse. How can I configure JWebUnit to use those servers? I would like to avoid installing a Selenium RC server on the Hudson machine; the build server has enough work to do without starting and stopping Firefox instances.

    Read the article

  • How should we set up complex situations for tests?

    - by ShaneC
    I'm currently working on what I would call integration tests. I want to verify that if a WCF service is called it will do what I expect. Let's take a very simple scenario: assume we have a contract object that we can put on hold or take off hold.

    Writing the put-on-hold test is quite simple: you create a contract instance and execute the code that puts it on hold. The question comes when we want to test the take-off-hold service call. The problem is that putting a contract on hold can be quite complicated, leading to various objects all being modified. So usually I would use the Builder pattern and do something like this:

        var onHoldContract = new ContractBuilder().PutOnHold().Build();

    The problem I have with this is that I now have to replicate a large part of my put-on-hold service, so when I change what putting something on hold means, I have two places to modify. The other option that immediately jumps out at me is to just use the put-on-hold service as part of my test setup, but now I'm coupling my test to the success of another piece of code, which is something I don't like to do since it can lead to failures in one spot breaking unrelated tests elsewhere (if put on hold failed, for example). Any other options I'm missing here? Or opinions on which method is preferable, and why?
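    For illustration only, a Java sketch of the builder approach mentioned above, with hypothetical Contract, ContractStatus and ContractBuilder types; the builder sets the on-hold state directly instead of calling the real put-on-hold service, which is exactly the duplication trade-off the question describes:

        // Hypothetical types, sketched only to illustrate the builder trade-off.
        enum ContractStatus { ACTIVE, ON_HOLD }

        class Contract {
            private final ContractStatus status;
            Contract(ContractStatus status) { this.status = status; }
            ContractStatus getStatus() { return status; }
        }

        class ContractBuilder {
            private ContractStatus status = ContractStatus.ACTIVE;

            // Duplicates only the *state* that "on hold" implies, not the service logic --
            // this is the duplication the question is worried about.
            ContractBuilder putOnHold() {
                this.status = ContractStatus.ON_HOLD;
                return this;
            }

            Contract build() {
                return new Contract(status);
            }
        }

        // Usage in a test:
        // Contract onHoldContract = new ContractBuilder().putOnHold().build();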

    Read the article

  • Need help with writing a test

    - by London
    I'm trying to write a test for this class, called Receiver:

        public void get(People person) {
            if(null != person) {
                LOG.info("Person with ID " + person.getId() + " received");
                processor.process(person);
            } else {
                LOG.info("Person not received abort!");
            }
        }

    Here is the test:

        @Test
        public void testReceivePerson(){
            context.checking(new Expectations() {{
                receiver.get(person);
                atLeast(1).of(person).getId();
                will(returnValue(String.class));
            }});
        }

    Note: receiver is an instance of the Receiver class (real, not a mock), processor is an instance of the Processor class (real, not a mock) which processes the person (a mock object of the People class). getId() returns a String, not an int; that is not a mistake. The test fails with: unexpected invocation of person.getId(). I'm using jMock. As I understand it, to execute this get method properly I need to mock person.getId(), and I've been going around in circles for a while now. Any help would be appreciated.
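    A hedged jMock 2 sketch of one way to restructure the test, assuming People is an interface (a concrete class would need jMock's ClassImposteriser) and that Receiver is wired up with its real Processor as in the original setup; "42" is just a placeholder ID. The key changes are that the expectations only cover the mocked person, and the real receiver is exercised after they are declared rather than inside the checking block:

        import org.jmock.Expectations;
        import org.jmock.Mockery;
        import org.junit.Test;

        public class ReceiverTest {
            private final Mockery context = new Mockery();
            private final People person = context.mock(People.class);
            private final Receiver receiver = new Receiver();  // construction/wiring assumed, as in the original test

            @Test
            public void testReceivePerson() {
                context.checking(new Expectations() {{
                    // stub the collaborator call the real code under test will make
                    atLeast(1).of(person).getId();
                    will(returnValue("42"));
                }});

                // exercise the real object only after the expectations are declared
                receiver.get(person);

                context.assertIsSatisfied();
            }
        }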

    Read the article

  • How to assert a certain method is called with the Ruby minitest framework?

    - by steven.yang
    I want to test whether a function invokes other functions properly with Ruby's minitest, but I cannot find a suitable assertion in the documentation. The source code:

        class SomeClass
          def invoke_function(name)
            name == "right" ? right() : wrong()
          end

          def right
            #...
          end

          def wrong
            #...
          end
        end

    The test code:

        describe SomeClass do
          it "should invoke right function" do
            # assert right() is called
          end

          it "should invoke other function" do
            # assert wrong() is called
          end
        end

    Read the article

  • Am I mocking this helper function right in my Django test?

    - by CppLearner
    lib.py:

        from django.core.urlresolvers import reverse

        def render_reverse(f, kwargs):
            """kwargs is a dictionary, usually of the form {'args': [cbid]}"""
            return reverse(f, **kwargs)

    tests.py:

        from lib import render_reverse, print_ls

        class LibTest(unittest.TestCase):
            def test_render_reverse_is_correct(self):
                #with patch('webclient.apps.codebundles.lib.reverse') as mock_reverse:
                with patch('django.core.urlresolvers.reverse') as mock_reverse:
                    from lib import render_reverse
                    mock_f = MagicMock(name='f', return_value='dummy_views')
                    mock_kwargs = MagicMock(name='kwargs', return_value={'args': ['123']})
                    mock_reverse.return_value = '/natrium/cb/details/123'
                    response = render_reverse(mock_f(), mock_kwargs())
                    self.assertTrue('/natrium/cb/details/' in response)

    But instead, I get:

        File "/var/lib/graphyte-webclient/graphyte-webenv/lib/python2.6/site-packages/django/core/urlresolvers.py", line 296, in reverse
            "arguments '%s' not found." % (lookup_view_s, args, kwargs))
        NoReverseMatch: Reverse for 'dummy_readfile' with arguments '('123',)' and keyword arguments '{}' not found.

    Why is it calling reverse instead of my mock_reverse (it is looking up my urls.py!)? The author of the Mock library, Michael Foord, did a video cast here (around 9:17), and in the example he passed the mock object request to the view function index. Furthermore, he patched Poll and assigned an expected return value. Isn't that what I am doing here? I patched reverse? Thanks.

    Read the article

  • Why are all response bodies after the first blank in Cucumber?

    - by James A. Rosen
    I'm using Cucumber (0.6.3), Cucumber-Rails (0.3.0), Webrat (0.7.0), and Rails (2.3.5) for some tests. The following scenario passes just fine:

        Scenario: load one page
          Given I am on the home page
          Then I should see "Welcome"

    The following, however, fails:

        Scenario: load two pages
          Given I am on the FAQ page
          When I go to the home page
          Then I should see "Welcome"

    The problem is that the second @response.body is blank. I added a Rack middleware to get a little more information:

        class LogEachRequest
          def initialize(app); @app = app; @count = 0; end

          def call(env)
            puts "Processing request # #{@count += 1}"
            @app.call(env)
          end
        end

    It shows me only one request processed. That is, it only ever prints out:

        Processing request # 1

    Read the article

  • Why is a menu item disabled when using SWTBot?

    - by reprogrammer
    I've written a GUI test using SWTBot to test the Extract Method refactoring. I use editor.selectRange() to select a statement to extract into a method. But when I run the unit test, the Extract Method refactoring menu item is disabled, so SWTBot fails to invoke the refactoring. When we change org.eclipse.jdt.ui.actions.ExtractMethodAction so that the "Extract Method..." menu item is always enabled, our SWTBot test passes. But SWTBot should let us select the menu item without hacking the org.eclipse.jdt.ui plugin. The whole project containing the above unit test is available at github. I've also reported the problem on the Eclipse forum for SWTBot, but we haven't received a solution from the forum.

    Read the article

  • Role change from Software Testing to Business Analyst [closed]

    - by Ankit
    After working for 4 years in software testing, I have finally got a chance to switch my career to a BA profile. It has been my dream to get a BA profile. But as I prepare myself to switch to a new profile and a new city, I ask myself whether it is really worth taking the risk. I am fairly senior in my testing role and make a good amount of money, but the appeal of the BA profile is too good to miss. Any comments? Any suggestions?

    Read the article

  • Why is django.test.client.Client not keeping me logged in?

    - by Mystic
    I'm using django.test.client.Client to test whether some text shows up when a user is logged in. However, the Client object doesn't seem to be keeping me logged in. The test passes if done manually with Firefox, but not when done with the Client object.

        class Test(TestCase):
            def test_view(self):
                user.set_password(password)
                user.save()
                client = self.client

                # I thought a more manual way would work, but no luck
                # client.post('/login', {'username':user.username, 'password':password})

                login_successful = client.login(username=user.username, password=password)
                # this assert passes
                self.assertTrue(login_successful)

                # whether follow=True or not doesn't seem to matter
                response = client.get("/path", follow=True)
                self.assertContains(response, "needle")

    When I print response, it returns the login form that is hidden by:

        {% if not request.user.is_authenticated %}
        ... form ...
        {% endif %}

    This is confirmed when I run ipython manage.py shell. The problem seems to be that the Client object is not keeping the session authenticated.

    Read the article

  • Problems using User model in django unit tests

    - by theycallmemorty
    I have the following Django test case that is giving me errors:

        class MyTesting(unittest.TestCase):
            def setUp(self):
                self.u1 = User.objects.create(username='user1')
                self.up1 = UserProfile.objects.create(user=self.u1)

            def testA(self):
                ...

            def testB(self):
                ...

    When I run my tests, testA will pass successfully, but before testB starts I get the following error:

        IntegrityError: column username is not unique

    It's clear that it is trying to create self.u1 before each test case and finding that it already exists in the database. How do I get it to properly clean up after each test case so that subsequent cases run correctly?

    Read the article

  • Integration tests - "no exceptions are thrown" approach. Does it make sense?

    - by Andrew Florko
    Sometimes integration tests are rather complex to write, or developers don't have enough time to check the output. Does it make sense to write tests that only make sure "no exceptions are thrown"? Such tests provide one or more sets of input parameters and don't check the result; they only make sure the code doesn't fail with an exception. Maybe such tests are not very useful, but are they appropriate in situations when you have no time?
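    A minimal JUnit 4 sketch of such a smoke test, assuming a hypothetical OrderImporter service and sample input file; the only thing asserted is that the call completes:

        import org.junit.Test;
        import static org.junit.Assert.fail;

        public class OrderImporterSmokeTest {

            @Test
            public void importCompletesWithoutThrowing() {
                OrderImporter importer = new OrderImporter();  // hypothetical service under test
                try {
                    // representative input; the result itself is deliberately not checked
                    importer.importFrom("fixtures/sample-orders.csv");
                } catch (Exception e) {
                    fail("Import threw an unexpected exception: " + e);
                }
            }
        }

    In JUnit 4 an uncaught exception fails the test on its own, so the try/catch adds nothing functionally; it just makes the "no exception is thrown" intent explicit and produces a clearer failure message.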

    Read the article

  • How to fully unit test functions and their internal validation

    - by Patrick
    I am just now getting into formal unit testing and have come across an issue in testing separate internal parts of functions. I have created a base class of data manipulation (i.e. moving files, chmodding files, etc.) and in moveFile() I have multiple levels of validation to pinpoint when a moveFile() fails (i.e. source file not readable, destination not writeable). I can't seem to figure out how to force a couple of particular validations to fail without tripping the previous validations.

    Example: I want the copying of a file to fail, but by the time I've gotten to the actual copying, I've checked for everything that can go wrong before copying.

    Code snippet (bad code on the fifth line):

        // if the change permissions is set, change the file permissions
        if($chmod !== null)
        {
            $mod_result = chmod($destination_directory.DIRECTORY_SEPARATOR.$new_filename, $chmod);
            if($mod_result === false || $source_directory.DIRECTORY_SEPARATOR.$source_filename == '/home/k...../file_chmod_failed.qif') {
                DataMan::logRawMessage('File permissions update failed on moveFile [ERR0009] - ['.$destination_directory.DIRECTORY_SEPARATOR.$new_filename.' - '.$chmod.']', sfLogger::ALERT);
                return array('success' => false, 'type' => 'Internal Server Error [ERR0009]');
            }
        }

    So how do I simulate the copy failing? My stop-gap measure was to perform a validation on the filename being copied and, if its absolute path matched my testing file, force the failure. I know it is very bad to put testing code into the actual code that will run on the production server, but I'm not sure how else to do it.

    Note: I am on PHP 5.2, symfony, using lime_test().

    EDIT: I am testing the chmodding and ensuring that array('success' => false, 'type' => ..) is returned.

    Read the article

  • Best way to do TDD in Express versions of Visual Studio (e.g. VB Express)

    - by Nathan W
    I have been looking into doing some test-driven development for one of the applications that I'm currently writing (an OLE wrapper for an OLE object). The only problem is that I am using the Express versions of Visual Studio (for now); at the moment I am using VB Express, but sometimes I use C# Express. Is it possible to do TDD in the Express versions? If so, what are the best ways to go about it? Cheers.

    EDIT: By the looks of things I will have to buy the full Visual Studio so that I can do integrated TDD; hopefully there is money in the budget to buy a copy :). For now I think I will use NUnit like everyone is saying.

    Read the article

  • Is there an effective way to test XSL transforms/BizTalk maps?

    - by nlawalker
    Creating repeatable tests for BizTalk maps is frustrating. I can't find a way to handle testing them like I'd do unit testing, because I can't find ways to break them into logical chunks. They tend to be one big monolithic unit, and any change has the potential to ripple through the map and break a lot of unit tests. Even if I could break it up, creating XML test inputs is painful and error-prone. Is there any effective way of testing these? I'd settle for recommendations for testing XSL transforms in general, but I specifically mention BizTalk maps because, when using the mapper, there really isn't any way to break your XSLT into templates (which I'd imagine you could use to break up your logic into testable chunks, but I've honestly never gotten that far with XSLT).
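    One general approach, independent of the BizTalk tooling, is to treat the XSLT itself as the unit under test: apply it to a small hand-written input document and assert on the output. A hedged Java sketch using the standard javax.xml.transform API, assuming a hypothetical maps/invoice.xslt and hypothetical element names:

        import java.io.StringReader;
        import java.io.StringWriter;
        import javax.xml.transform.Transformer;
        import javax.xml.transform.TransformerFactory;
        import javax.xml.transform.stream.StreamResult;
        import javax.xml.transform.stream.StreamSource;
        import org.junit.Test;
        import static org.junit.Assert.assertTrue;

        public class InvoiceMapTest {

            // Applies the stylesheet to an in-memory input document and returns the output as a string.
            private String transform(String xsltPath, String inputXml) throws Exception {
                Transformer transformer = TransformerFactory.newInstance()
                        .newTransformer(new StreamSource(xsltPath));
                StringWriter output = new StringWriter();
                transformer.transform(new StreamSource(new StringReader(inputXml)), new StreamResult(output));
                return output.toString();
            }

            @Test
            public void mapsCustomerNumberToBuyerId() throws Exception {
                String input = "<Order><CustomerNumber>42</CustomerNumber></Order>";
                String result = transform("maps/invoice.xslt", input);
                assertTrue(result.contains("<BuyerId>42</BuyerId>"));
            }
        }

    Each such test exercises one mapping rule against a tiny input document, which is one way to get the "logical chunks" the mapper UI itself doesn't expose.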

    Read the article

  • Specify a test method name prefix for a test suite in JUnit 3

    - by Marko Kocic
    Is it possible to tell JUnit 3 to use an additional method name prefix when looking up test methods? The goal is to have additional tests that run locally but should not be run on the continuous integration server. The CI server doesn't use test suites; it looks up all classes whose names end with "Test" and executes all methods that begin with "test". The goal is to be able to locally run not only the tests run by the integration server, but also tests whose method names start with, for example, "nocitest" or something like that. I don't mind having to organize the tests into a test suite locally, since CI just ignores them.
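    JUnit 3 itself only looks for the "test" prefix, but a hand-rolled suite() can pull in extra methods by name via reflection. A rough sketch, assuming a hypothetical MyWidgetTest TestCase that contains both test* and nocitest* methods:

        import java.lang.reflect.Method;
        import junit.framework.Test;
        import junit.framework.TestSuite;

        public class LocalOnlyTests {

            // Collects the usual "test*" methods plus the local-only "nocitest*" methods.
            public static Test suite() {
                TestSuite suite = new TestSuite(MyWidgetTest.class); // picks up test* as usual
                for (Method method : MyWidgetTest.class.getMethods()) {
                    if (method.getName().startsWith("nocitest")) {
                        // createTest builds a TestCase instance bound to the given method name
                        suite.addTest(TestSuite.createTest(MyWidgetTest.class, method.getName()));
                    }
                }
                return suite;
            }
        }

    Running this suite locally executes both groups, while a CI server that scans only for test* methods keeps ignoring the nocitest* ones.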

    Read the article

  • NUnit - Multiple properties of the same name? Linking to requirements

    - by Ryan Ternier
    I'm linking all of our system tests to test cases and to our requirements. Every requirement has an ID. Every test case / system test tests a variety of requirements. Every module of code links to multiple requirements. I'm trying to find the best way to link every system test to its driving requirements. I was hoping to do something like:

        [NUnit.Framework.Property("Release", "6.0.0")]
        [NUnit.Framework.Property("Requirement", "FR50082")]
        [NUnit.Framework.Property("Requirement", "FR50084")]
        [NUnit.Framework.Property("Requirement", "FR50085")]
        [TestCase(....)]
        public void TestSomething(string a, string b...)

    However, that will break because Property is a key-value pair: the system will not allow me to have multiple properties with the same key. The reason I want this is to be able to test specific requirements in our system if a module changes that touches those requirements. Rather than run over 1,000 system tests on every build, this would allow us to target what to test based on changes made to our code. Some system tests run upwards of 5 minutes (enterprise healthcare system), so "just run all of them" isn't a viable solution. We do that, but only before promoting through our environments. Thoughts?

    Read the article
