Search Results

Search found 22065 results on 883 pages for 'performance testing'.

Page 71 of 883

  • Need help with writing a test

    - by London
    I'm trying to write a test for this class; it's called Receiver:

        public void get(People person) {
            if (null != person) {
                LOG.info("Person with ID " + person.getId() + " received");
                processor.process(person);
            } else {
                LOG.info("Person not received abort!");
            }
        }

    Here is the test:

        @Test
        public void testReceivePerson() {
            context.checking(new Expectations() {{
                receiver.get(person);
                atLeast(1).of(person).getId();
                will(returnValue(String.class));
            }});
        }

    Note: receiver is an instance of the Receiver class (real, not a mock), and processor is an instance of the Processor class (real, not a mock) which processes the person (a mock of the People class). getId returns a String, not an int; that is not a mistake. The test fails with: unexpected invocation of person.getId(). I'm using jMock. As I understand it, to execute the get method properly I need to mock person.getId(), but I've been going around in circles for a while now; any help would be appreciated.
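
    A minimal sketch of one common fix, offered as a suggestion since only the snippets above are known (the "42" return value is a placeholder): in jMock, expectations must be declared before the code under test runs, so the call to the real receiver belongs after the Expectations block, and returnValue needs an actual String rather than String.class:

        @Test
        public void testReceivePerson() {
            context.checking(new Expectations() {{
                // declare the expectation first, with a concrete return value
                atLeast(1).of(person).getId();
                will(returnValue("42"));
            }});

            // exercise the real Receiver only after the expectations exist
            receiver.get(person);

            // verify that getId() was indeed called at least once
            context.assertIsSatisfied();
        }

    As written in the question, receiver.get(person) runs inside the Expectations block, so getId() fires before any expectation has been recorded, which is exactly the "unexpected invocation" jMock reports.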

    Read the article

  • Why are all response bodies after the first blank in Cucumber?

    - by James A. Rosen
    I'm using Cucumber (0.6.3), Cucumber-Rails (0.3.0), Webrat (0.7.0), and Rails (2.3.5) for some tests. The following scenario passes just fine:

        Scenario: load one page
          Given I am on the home page
          Then I should see "Welcome"

    The following, however, fails:

        Scenario: load two pages
          Given I am on the FAQ page
          When I go to the home page
          Then I should see "Welcome"

    The problem is that the second @response.body is blank. I added a Rack middleware to get a little more information:

        class LogEachRequest
          def initialize(app); @app = app; @count = 0; end

          def call(env)
            puts "Processing request # #{@count += 1}"
            @app.call(env)
          end
        end

    It shows only one request being processed; that is, it only ever prints Processing request # 1.

    Read the article

  • Am I mocking this helper function right in my Django test?

    - by CppLearner
    lib.py:

        from django.core.urlresolvers import reverse

        def render_reverse(f, kwargs):
            """kwargs is a dictionary, usually of the form {'args': [cbid]}"""
            return reverse(f, **kwargs)

    tests.py:

        from lib import render_reverse, print_ls

        class LibTest(unittest.TestCase):
            def test_render_reverse_is_correct(self):
                #with patch('webclient.apps.codebundles.lib.reverse') as mock_reverse:
                with patch('django.core.urlresolvers.reverse') as mock_reverse:
                    from lib import render_reverse
                    mock_f = MagicMock(name='f', return_value='dummy_views')
                    mock_kwargs = MagicMock(name='kwargs', return_value={'args': ['123']})
                    mock_reverse.return_value = '/natrium/cb/details/123'
                    response = render_reverse(mock_f(), mock_kwargs())
                self.assertTrue('/natrium/cb/details/' in response)

    But instead, I get:

        File "/var/lib/graphyte-webclient/graphyte-webenv/lib/python2.6/site-packages/django/core/urlresolvers.py", line 296, in reverse
          "arguments '%s' not found." % (lookup_view_s, args, kwargs))
        NoReverseMatch: Reverse for 'dummy_readfile' with arguments '('123',)' and keyword arguments '{}' not found.

    Why is it calling reverse instead of my mock_reverse (it is looking up my urls.py!)? The author of the Mock library, Michael Foord, did a video cast (around 9:17) in which he passed a mock request object to the view function index. Furthermore, he patched Poll and assigned an expected return value. Isn't that what I am doing here? I patched reverse? Thanks.
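
    A hedged observation rather than a confirmed fix: lib.py does from django.core.urlresolvers import reverse, which copies reverse into lib's own namespace at import time, so patching django.core.urlresolvers.reverse afterwards never touches the name that render_reverse actually calls. The usual pattern with the Mock library is to patch the name where it is looked up; the commented-out line in the test was close, and with the module importable as plain lib the target would be 'lib.reverse':

        import unittest
        from mock import patch

        from lib import render_reverse

        class LibTest(unittest.TestCase):
            def test_render_reverse_is_correct(self):
                # patch 'reverse' in lib's namespace, not where it was defined
                with patch('lib.reverse') as mock_reverse:
                    mock_reverse.return_value = '/natrium/cb/details/123'
                    response = render_reverse('dummy_views', {'args': ['123']})
                self.assertTrue('/natrium/cb/details/' in response)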

    Read the article

  • Why is a menu item disabled when using SWTBot?

    - by reprogrammer
    I've written a GUI test using SWTBot to test the Extract Method refactoring. I use editor.selectRange() to select a statement to extract into a method. But when I run the unit test, the Extract Method refactoring menu item is disabled, so SWTBot fails to invoke the refactoring. When we change org.eclipse.jdt.ui.actions.ExtractMethodAction so that the "Extract Method..." menu item is always enabled, our SWTBot test passes. But SWTBot should let us select the menu item without hacking the org.eclipse.jdt.ui plugin. The whole project containing the above unit test is available on github. I've also reported the problem on the Eclipse forum for SWTBot, but we haven't received a solution yet.

    Read the article

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one for the production code and another for the unit tests. I did this as per the suggestions I got here on SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable, and they all fail trying to find it, so the test project definitely depends on it. I then did a quick read to find out which is better, a DLL or an executable. It seems that a DLL is faster, since it is loaded into the caller's address space, whereas communication between separate executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project also relies on code written in another library, which is likewise in executable form at the moment. Should the projects that expose some sort of SDK rather be compiled to a DLL, and the projects that use the SDK be compiled to executables?

    Read the article

  • Why is django.test.client.Client not keeping me logged in?

    - by Mystic
    I'm using django.test.client.Client to test whether some text shows up when a user is logged in. However, the Client object doesn't seem to keep me logged in. The test passes if done manually in Firefox, but not when done with the Client object.

        class Test(TestCase):
            def test_view(self):
                user.set_password(password)
                user.save()
                client = self.client

                # I thought a more manual way would work, but no luck:
                # client.post('/login', {'username': user.username, 'password': password})
                login_successful = client.login(username=user.username, password=password)

                # this assert passes
                self.assertTrue(login_successful)

                # whether follow=True or not doesn't seem to matter
                response = client.get("/path", follow=True)
                self.assertContains(response, "needle")

    When I print response, it returns the login form that is hidden by:

        {% if not request.user.is_authenticated %}
        ... form ...
        {% endif %}

    This is confirmed when I run ipython manage.py shell. The problem seems to be that the Client object is not keeping the session authenticated.
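
    A hedged way to narrow this down (the session key below is Django's internal '_auth_user_id', which holds for the 1.x series this question targets): check the test client's session directly after login, which separates "the Client isn't keeping me logged in" from "the template never sees request.user":

        login_successful = client.login(username=user.username, password=password)
        self.assertTrue(login_successful)

        # if this passes, the Client *is* keeping the session authenticated,
        # and the problem is on the template side instead
        self.assertTrue('_auth_user_id' in client.session)

    If the session check passes, one common cause is the template itself: request is only available in templates when 'django.core.context_processors.request' is enabled (the auth context processor provides user, not request), and an undefined request.user.is_authenticated silently evaluates as false in Django templates, so the login form renders even for an authenticated user.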

    Read the article

  • Role change from Software Testing to Business Analyst [closed]

    - by Ankit
    After working for 4 years in software testing, I have finally got a chance to switch my career to a BA profile. It has been my dream to get a BA profile, but as I prepare myself to switch to a new profile and a new city, I ask myself: is it really worth taking the risk? I am fairly senior in a testing role and make a good amount of money, but the charm of the BA profile is too good to miss. Any comments? Any suggestions?

    Read the article

  • Problems using User model in django unit tests

    - by theycallmemorty
    I have the following Django test case that is giving me errors:

        class MyTesting(unittest.TestCase):
            def setUp(self):
                self.u1 = User.objects.create(username='user1')
                self.up1 = UserProfile.objects.create(user=self.u1)

            def testA(self):
                ...

            def testB(self):
                ...

    When I run my tests, testA passes successfully, but before testB starts I get the following error:

        IntegrityError: column username is not unique

    It's clear that it is trying to create self.u1 before each test case and finding that it already exists in the database. How do I get it to properly clean up after each test case so that subsequent cases run correctly?
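
    A minimal sketch of the usual fix, assuming an otherwise standard Django project: subclass django.test.TestCase instead of unittest.TestCase. Django's TestCase resets the test database between test methods (by flushing it, or by rolling back a transaction on later versions), so the objects created in setUp no longer leak from testA into testB:

        from django.test import TestCase

        class MyTesting(TestCase):
            def setUp(self):
                # recreated for every test method; Django cleans the DB afterwards
                self.u1 = User.objects.create(username='user1')
                self.up1 = UserProfile.objects.create(user=self.u1)

    The alternative, if staying on unittest.TestCase, is a tearDown that deletes whatever setUp created.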

    Read the article

  • Approximate timings for various operations on a "typical desktop PC" anno 2010

    - by knorv
    In the article "Teach Yourself Programming in Ten Years" Peter Norvig (Director of Research, Google) gives the following approximate timings for various operations on a typical 1GHz PC back in 2001: execute single instruction = 1 nanosec = (1/1,000,000,000) sec fetch word from L1 cache memory = 2 nanosec fetch word from main memory = 10 nanosec fetch word from consecutive disk location = 200 nanosec fetch word from new disk location (seek) = 8,000,000 nanosec = 8 millisec What would the corresponding timings be for your definition of a typical PC desktop anno 2010?

    Read the article

  • Integration tests - "no exceptions are thrown" approach. Does it make sense?

    - by Andrew Florko
    Sometimes integration tests are rather complex to write, or developers don't have enough time to check the output. Does it make sense to write tests that only make sure "no exceptions are thrown"? Such tests feed in one or more sets of input parameters and don't check the result, only that the code doesn't fail with an exception. Maybe such tests are not very useful, but are they appropriate when you have no time for anything better?

    Read the article

  • How to fully unit test functions and their internal validation

    - by Patrick
    I am just now getting into formal unit testing and have come across an issue in testing separate internal parts of functions. I have created a base class for data manipulation (i.e. moving files, chmodding files, etc.), and in moveFile() I have multiple levels of validation to pinpoint why a moveFile() fails (i.e. source file not readable, destination not writeable). I can't figure out how to force a couple of particular validations to fail without tripping the previous validations. Example: I want the copying of a file to fail, but by the time I've gotten to the actual copying, I've already checked for everything that can go wrong before copying. Code snippet (the bad code is the hard-coded filename check in the inner if):

        // if the change permissions flag is set, change the file permissions
        if ($chmod !== null) {
            $mod_result = chmod($destination_directory.DIRECTORY_SEPARATOR.$new_filename, $chmod);
            if ($mod_result === false || $source_directory.DIRECTORY_SEPARATOR.$source_filename == '/home/k...../file_chmod_failed.qif') {
                DataMan::logRawMessage('File permissions update failed on moveFile [ERR0009] - ['.$destination_directory.DIRECTORY_SEPARATOR.$new_filename.' - '.$chmod.']', sfLogger::ALERT);
                return array('success' => false, 'type' => 'Internal Server Error [ERR0009]');
            }
        }

    So how do I simulate the copy failing? My stop-gap measure was to perform a validation on the filename being copied and, if its absolute path matched my testing file, force the failure. I know it is very bad to put testing code into the code that will run on the production server, but I'm not sure how else to do it. Note: I am on PHP 5.2, symfony, using lime_test(). EDIT: I am testing the chmodding and ensuring that array('success' => false, 'type' => ...) is returned.
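
    One common way out, sketched under the assumption that moveFile() lives in a class you control (the doChmod wrapper below is hypothetical, not part of the original code): route each native filesystem call through a small protected method, then override that method in a test-only subclass to force each failure path without putting test logic in production code:

        <?php
        class DataMan
        {
            // thin seam around the native call; production behaviour is unchanged
            protected function doChmod($path, $mode)
            {
                return chmod($path, $mode);
            }

            // ... moveFile() calls $this->doChmod(...) instead of chmod(...) ...
        }

        // lives only in the test suite:
        class DataManWithFailingChmod extends DataMan
        {
            protected function doChmod($path, $mode)
            {
                return false; // force the ERR0009 branch
            }
        }

    The lime_test() script then exercises DataManWithFailingChmod and asserts that array('success' => false, ...) comes back. The same trick works for copy(), is_readable(), and the other checks.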

    Read the article

  • Best way to do TDD in express versions of visual studio(eg VB Express)

    - by Nathan W
    I have been looking into doing some test-driven development for one of the applications I'm currently writing (an OLE wrapper for an OLE object). The only problem is that I am using the express versions of Visual Studio (for now); at the moment I am using VB Express, but sometimes I use C# Express. Is it possible to do TDD in the express versions? If so, what are the best ways to go about it? Cheers. EDIT: By the looks of things I will have to buy the full Visual Studio so that I can do integrated TDD; hopefully there is money in the budget to buy a copy :). For now I think I will use NUnit like everyone is saying.

    Read the article

  • Is there an effective way to test XSL transforms/BizTalk maps?

    - by nlawalker
    Creating repeatable tests for BizTalk maps is frustrating. I can't find a way to treat them like unit tests, because I can't find a way to break them into logical chunks. They tend to be one big monolithic unit, and any change has the potential to ripple through the map and break a lot of unit tests. Even if I could break one up, creating XML test inputs is painful and error-prone. Is there any effective way of testing these? I'd settle for recommendations on testing XSL transforms in general, but I mention BizTalk maps specifically because, when using the mapper, there really isn't any way to break your XSLT into templates (which I'd imagine you could use to split your logic into testable chunks, but I've honestly never gotten that far with XSLT).
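
    For the general XSLT half of the question, a minimal sketch of a repeatable black-box transform test in .NET (all file names are placeholders, and the naive string comparison assumes you normalize whitespace or swap in an XML diff): run a known input through the stylesheet and compare against a checked-in expected output:

        using System.IO;
        using System.Xml;
        using System.Xml.Xsl;
        using NUnit.Framework;

        [TestFixture]
        public class OrderMapTests
        {
            [Test]
            public void KnownInputProducesExpectedOutput()
            {
                var xslt = new XslCompiledTransform();
                xslt.Load("OrderMap.xslt"); // XSLT extracted from the BizTalk map

                var output = new StringWriter();
                using (var writer = XmlWriter.Create(output))
                {
                    xslt.Transform("SampleOrder.xml", writer);
                }

                Assert.AreEqual(
                    File.ReadAllText("ExpectedOrder.xml").Trim(),
                    output.ToString().Trim());
            }
        }

    Whole-map tests like this don't give template-level granularity, but a small library of input/expected pairs at least makes the ripple effects of a map change visible immediately.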

    Read the article

  • Specify test method name prefix for test suite in junit 3

    - by Marko Kocic
    Is it possible to tell JUnit 3 to use an additional method-name prefix when looking up test methods? The goal is to have additional tests that run locally but should not run on the continuous integration server. The CI server doesn't use test suites; it looks up all classes whose names end with "Test" and executes all methods that begin with "test". So the goal is to be able to run locally not only the tests the integration server runs, but also tests whose method names start with, for example, "nocitest" or something like that. I don't mind having to organize the tests into a test suite locally, since the CI server just ignores suites.
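
    A minimal sketch of one way to get this locally, assuming plain JUnit 3.8 (MyTest and the "nocitest" prefix are examples): build a suite by reflection and add every zero-argument method matching either prefix via TestSuite.createTest, which runs a TestCase method by name:

        import java.lang.reflect.Method;
        import junit.framework.Test;
        import junit.framework.TestSuite;

        public class LocalSuite {
            // gathers both testXxx() and nocitestXxx() methods from one class
            public static Test suite() {
                TestSuite suite = new TestSuite("local-only suite");
                for (Method m : MyTest.class.getMethods()) {
                    String name = m.getName();
                    if ((name.startsWith("test") || name.startsWith("nocitest"))
                            && m.getParameterTypes().length == 0) {
                        suite.addTest(TestSuite.createTest(MyTest.class, name));
                    }
                }
                return suite;
            }
        }

    The CI server never sees the "nocitest" methods because its name-based scan only picks up the "test" prefix, while the local runner is pointed at LocalSuite.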

    Read the article

  • NUnit - Multiple properties of the same name? Linking to requirements

    - by Ryan Ternier
    I'm linking all of our system tests to test cases and to our requirements. Every requirement has an ID. Every test case / system test tests a variety of requirements, and every module of code links to multiple requirements. I'm trying to find the best way to link every system test to its driving requirements. I was hoping to do something like:

        [NUnit.Framework.Property("Release", "6.0.0")]
        [NUnit.Framework.Property("Requirement", "FR50082")]
        [NUnit.Framework.Property("Requirement", "FR50084")]
        [NUnit.Framework.Property("Requirement", "FR50085")]
        [TestCase(....)]
        public void TestSomething(string a, string b...)

    However, that breaks because Property is a key-value pair: the system will not allow me to have multiple properties with the same key. The reason I want this is to be able to test specific requirements in our system when a module changes that touches those requirements. Rather than running over 1,000 system tests on every build, this would let us target what to test based on the changes made to our code. Some system tests run upwards of 5 minutes (enterprise healthcare system), so "just run all of them" isn't a viable solution. We do that, but only before promoting through our environments. Thoughts?
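
    One commonly suggested workaround, sketched with the requirement IDs from the question as examples: NUnit's PropertyAttribute is designed to be subclassed, so a derived attribute can accept all the requirement IDs at once and store them under a single key, sidestepping the duplicate-key restriction:

        using System;
        using NUnit.Framework;

        [AttributeUsage(AttributeTargets.Method | AttributeTargets.Class)]
        public class RequirementsAttribute : PropertyAttribute
        {
            // stores e.g. "FR50082,FR50084,FR50085" under one "Requirement" key
            public RequirementsAttribute(params string[] ids)
                : base("Requirement", string.Join(",", ids))
            {
            }
        }

        // usage:
        //   [Requirements("FR50082", "FR50084", "FR50085")]
        //   [TestCase(...)]
        //   public void TestSomething(string a, string b) { ... }

    Selecting tests for a changed requirement then becomes a substring match on that one property value rather than an exact key-value lookup.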

    Read the article

  • Is count(*) really expensive?

    - by Anil Namde
    I have a page with 4 tabs displaying 4 different reports based on different tables. I obtain the row count of each table using a select count(*) from <table> query and display the number of rows available in each table on the tabs. As a result, each page postback executes 5 count(*) queries (4 to get the counts and 1 for pagination) plus 1 query for the report content. Now my question: are count(*) queries really that expensive? Should I keep the row counts (at least those displayed on the tabs) in the page's view state instead of querying for them every time? How expensive are COUNT(*) queries?
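
    Whatever COUNT(*) costs on the particular engine, one cheap structural win, sketched with placeholder table names, is cutting the round trips: the four tab counts can come back in a single query as scalar subqueries, leaving just this plus the pagination and content queries per postback:

        SELECT
            (SELECT COUNT(*) FROM report_a) AS report_a_rows,
            (SELECT COUNT(*) FROM report_b) AS report_b_rows,
            (SELECT COUNT(*) FROM report_c) AS report_c_rows,
            (SELECT COUNT(*) FROM report_d) AS report_d_rows;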

    Read the article
