Search Results

Search found 11953 results on 479 pages for 'functional testing'.


  • Rails 2.3.5 table populated by fixtures at end of test run rather than at start

    - by rlandster
    I start with a test database containing the schema but with no data in the tables. I run a test like so:

        cd test/
        ruby unit/directive_test.rb

    I get failures indicating that the code found no data in the data tables. However, when I look at the tables after running that test, the data is now in the table. In fact, if I immediately run the test again I get no failures. So it appears that the fixture is being loaded into the table too late for one of my modules to find it. When are the fixtures loaded? Before or after the app/model/*.rb files are executed? If it is after the models are executed, is there a way to delay the loading? This issue is also relevant when running rake test:units, since that task clears the test data after it finishes.

    Read the article

  • HTTP Proxy with monitor UI for local install

    - by zedoo
    Hi, I'm looking for an HTTP proxy/GUI combination that can be installed locally on my Windows PC. The UI should display something similar to Firebug's "Network" tab, showing request/response headers and content as plain text. It would be cool if I could attach requests to a sort of node for later comparison, similar to what you can do with the proxy that comes with JMeter.

    Read the article

  • Embedded MongoDB when running integration tests

    - by seanhodges
    My question is a variation of this one. Since my Java web-app project requires a lot of read filters/queries and interfaces with tools like GridFS, I'm struggling to think of a sensible way to simulate MongoDB in the way the above solution suggests. Therefore, I'm considering running an embedded instance of MongoDB alongside my integration tests. I'd like it to start up automatically (either for each test or for the whole suite), flush the database for every test, and shut down at the end. These tests might be run on development machines as well as the CI server, so my solution will also need to be portable. Can anyone with more knowledge of MongoDB help me get an idea of the feasibility of this approach, and/or perhaps suggest any reading material that might help me get started? I'm also open to other suggestions on how I could approach this problem...
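
    One portable shape for this is to spawn a throwaway mongod against a temporary data directory once per suite, flush state between tests, and tear the process down at the end. The sketch below shows the lifecycle in Python rather than Java purely for brevity; the mongod binary being on the PATH, the chosen port, and the helper names are all assumptions:

        import shutil
        import socket
        import subprocess
        import tempfile
        import time
        import unittest

        MONGOD = "mongod"   # assumed to be on PATH on dev machines and the CI server
        PORT = 27027        # arbitrary non-default port for the test instance

        def wait_for_port(port, timeout=15.0):
            # Poll until mongod accepts connections, or give up.
            deadline = time.time() + timeout
            while time.time() < deadline:
                try:
                    socket.create_connection(("localhost", port), timeout=1.0).close()
                    return
                except OSError:
                    time.sleep(0.2)
            raise RuntimeError("mongod did not start in time")

        class MongoBackedTest(unittest.TestCase):
            @classmethod
            def setUpClass(cls):
                cls.dbpath = tempfile.mkdtemp(prefix="embedded-mongo-")
                cls.proc = subprocess.Popen(
                    [MONGOD, "--dbpath", cls.dbpath, "--port", str(PORT)],
                    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
                wait_for_port(PORT)

            def setUp(self):
                # Flush state between tests, e.g. drop the test database
                # through whatever driver the application already uses.
                pass

            @classmethod
            def tearDownClass(cls):
                cls.proc.terminate()
                cls.proc.wait()
                shutil.rmtree(cls.dbpath, ignore_errors=True)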

    Read the article

  • xUnit false positive when comparing null-terminated strings

    - by mr.b
    I've come across odd behavior when comparing strings. The first assert passes, but I don't think it should. The second assert fails, as expected.

        [Fact]
        public void StringTest()
        {
            string testString_1 = "My name is Erl. I am a program\0";
            string testString_2 = "My name is Erl. I am a program";

            Assert.Equal<string>(testString_1, testString_2);
            Assert.True(testString_1.Equals(testString_2));
        }

    Any ideas?

    Read the article

  • Wrong code coverage of unit tests

    - by KamilPyc
    I'm using code coverage for unit tests in Xcode. Everything is working except some special cases; for example, a protocol declaration shows wrong values. If I have:

        @protocol SomeProtocole <NSObject>

        @property (nonatomic, readonly) NSObject *example;

        @end

    I will get 0% code coverage for this file, even though I have a unit test that uses a class conforming to that protocol. The only solution I have found so far is to filter the code coverage report so that it does not include protocols, but I would like to see real values for protocols. Does anyone have a solution for this?

    Read the article

  • How to disable translations during unit tests in Django?

    - by Denilson Sá
    I'm using Django internationalization tools to translate some strings from my application. The code looks like this:

        from django.utils.translation import ugettext as _

        def my_view(request):
            output = _("Welcome to my site.")
            return HttpResponse(output)

    Then, I'm writing unit tests using the Django test client. These tests make a request to the view and compare the returned contents. How can I disable the translations while running the unit tests? I'm aiming to do this:

        class FoobarTestCase(unittest.TestCase):
            def setUp(self):
                # Do something here to disable the string translation. But what?
                # I've already tried this, but it didn't work:
                django.utils.translation.deactivate_all()

            def testFoobar(self):
                c = Client()
                response = c.get("/foobar")
                # I want to compare to the original string without translations.
                self.assertEquals(response.content.strip(), "Welcome to my site.")
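
    The exact cause depends on which locale machinery is active, but two adjustments are usually enough. This is a sketch against the code above: deactivate_all() is swapped for an explicit activation, and the test request carries an explicit Accept-Language header so LocaleMiddleware (if enabled) cannot pick another language during request processing:

        import unittest

        from django.test.client import Client
        from django.utils import translation

        class FoobarTestCase(unittest.TestCase):
            def setUp(self):
                # Pin a known language; assuming the source strings are written
                # in English, activating "en" makes ugettext return them untranslated.
                translation.activate("en")
                self.client = Client()

            def tearDown(self):
                translation.deactivate()

            def testFoobar(self):
                response = self.client.get("/foobar", HTTP_ACCEPT_LANGUAGE="en")
                self.assertEqual(response.content.decode().strip(),
                                 "Welcome to my site.")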

    Read the article

  • How do you unit-test a method with complex input-output

    - by Dan
    When you have a simple method, like for example sum(int x, int y), it is easy to write unit tests. You can check that the method sums two sample integers correctly, for example 2 + 3 should return 5, and then check the same for some "extraordinary" numbers, for example negative values and zero. Each of these should be a separate unit test, as a single unit test should contain a single assert.

    What do you do when you have complex input and output? Take an XML parser, for example. You can have a single method parse(String xml) that receives a String and returns a DOM object. You can write separate tests that check that a certain text node is parsed correctly, that attributes are parsed OK, that a child node belongs to its parent, etc. For all of these I can write a simple input, for example <root><child/></root>, that will be used to check parent-child relationships between nodes, and so on for the rest of the expectations.

    Now, take a look at the following XML:

        <root>
          <child1 attribute11="attribute 11 value" attribute12="attribute 12 value">Text 1</child1>
          <child2 attribute21="attribute 21 value" attribute22="attribute 22 value">Text 2</child2>
        </root>

    In order to check that the method worked correctly, I need to check many conditions, like that attribute11 and attribute12 belong to child1, that Text 1 belongs to child1, etc. I do not want to put more than one assert in my unit test. How can I accomplish that?
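
    The usual way to keep one assert per test without duplicating the parsing work is to parse the shared document once in a fixture and give every expectation its own narrowly named test. Here is a sketch of that shape in Python's unittest with the standard-library parser; the question is language-agnostic, so the same layout carries over to JUnit or NUnit:

        import unittest
        import xml.etree.ElementTree as ET

        SAMPLE = (
            '<root>'
            '<child1 attribute11="attribute 11 value" attribute12="attribute 12 value">Text 1</child1>'
            '<child2 attribute21="attribute 21 value" attribute22="attribute 22 value">Text 2</child2>'
            '</root>'
        )

        class ParseResultTest(unittest.TestCase):
            def setUp(self):
                # The shared "complex input" is parsed once; each test below
                # then checks exactly one fact about the resulting tree.
                self.root = ET.fromstring(SAMPLE)

            def test_root_has_two_children(self):
                self.assertEqual(2, len(list(self.root)))

            def test_child1_text(self):
                self.assertEqual("Text 1", self.root.find("child1").text)

            def test_child1_attribute11(self):
                self.assertEqual("attribute 11 value",
                                 self.root.find("child1").get("attribute11"))

            def test_child2_attribute22(self):
                self.assertEqual("attribute 22 value",
                                 self.root.find("child2").get("attribute22"))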

    Read the article

  • Attribute to skip over a statement in a C# unit test

    - by Eli Perpinyal
    I am looking to skip a certain statement in my unit tests, e.g.:

        if (MessageBox.Show("Are you sure you want to remove " + contact.CompanyName + " from the contacts?",
                "Confirm Delete", MessageBoxButton.YesNo, MessageBoxImage.Question,
                MessageBoxResult.Yes) == MessageBoxResult.Yes)

    Is there an attribute I can place above the statement to prevent the unit test from executing it?
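
    There is no test-framework attribute that can skip an individual statement; the usual fix is to route the confirmation prompt through a seam the test controls and inject a canned answer, so no dialog is ever shown. Here is a sketch of that shape in Python with hypothetical names; in C# the seam would typically be a small confirmation-service interface injected into the class under test:

        import unittest

        class ContactList:
            """Class under test; asks for confirmation through an injected callable."""

            def __init__(self, confirm):
                # In production `confirm` would show a real dialog;
                # tests inject a stub that returns a canned answer.
                self._confirm = confirm
                self._contacts = []

            def add(self, contact):
                self._contacts.append(contact)

            def contains(self, contact):
                return contact in self._contacts

            def remove(self, contact):
                if self._confirm("Are you sure you want to remove %s?" % contact):
                    self._contacts.remove(contact)

        class RemoveContactTest(unittest.TestCase):
            def test_remove_deletes_contact_when_confirmed(self):
                contacts = ContactList(confirm=lambda message: True)  # never pops a dialog
                contacts.add("Acme Ltd")
                contacts.remove("Acme Ltd")
                self.assertFalse(contacts.contains("Acme Ltd"))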

    Read the article

  • How to unit test functions which return results asynchronously in Xcode?

    - by DevDevDev
    I have something like:

        - (void)getData:(SomeParameter *)param
        {
            // Remotely call out for data returned asynchronously;
            // returns data via a delegate method
        }

        - (void)handleDataDelegateMethod:(NSData *)data
        {
            // Handle returned data
        }

    I want to write a unit test for this. How can I do something better than:

        NSData *returnedData = nil;

        - (void)handleDataDelegateMethod:(NSData *)data
        {
            returnedData = data;
        }

        - (void)test
        {
            [obj getData:param];
            while (!returnedData) {
                [NSThread sleepForTimeInterval:1];
            }
            // Make assertions on returnedData
        }
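
    Whatever the framework, the usual improvement over an unbounded sleep loop is a bounded wait: block on a signal from the delegate with a timeout and fail the test if it never arrives (in OCUnit this is typically done by spinning the run loop or waiting on a condition until a deadline). A language-neutral sketch of that shape in Python; the fetcher, delegate, and method names are hypothetical:

        import threading
        import unittest

        class FakeFetcher:
            """Stand-in for the object under test: delivers data to a delegate asynchronously."""

            def get_data(self, param, delegate):
                threading.Timer(0.05, lambda: delegate.handle_data(b"payload-" + param)).start()

        class RecordingDelegate:
            def __init__(self):
                self.received = None
                self.done = threading.Event()

            def handle_data(self, data):
                self.received = data
                self.done.set()   # wake the waiting test

        class AsyncFetchTest(unittest.TestCase):
            def test_delegate_receives_data(self):
                delegate = RecordingDelegate()
                FakeFetcher().get_data(b"x", delegate)

                # Bounded wait: fail after two seconds instead of spinning forever.
                self.assertTrue(delegate.done.wait(timeout=2.0), "delegate was never called")
                self.assertEqual(b"payload-x", delegate.received)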

    Read the article

  • How do big teams work with a database?

    - by Michael Riva
    Let's say I have a team of 20 developers, and we are building a project on .NET. Everyone on the team can easily create the tables for the module they are working on. We are thinking of using an ORM; can you tell me how, and which ORM tools are good for working as a team? Is there a proven way? I'm asking because I have never worked in a team before, so I don't know the best practices. What kind of patterns do you use? I really wonder. The team members can write their unit tests and apply the necessary design patterns. What kind of approach do I need in order to manage the team? What kind of ORM tools do we have to use?

    Read the article

  • How can I unit test my custom validation attribute

    - by MightyAtom
    I have a custom ASP.NET MVC class validation attribute. My question is: how can I unit test it? It would be one thing to test that the class has the attribute, but that would not actually test the logic inside it, and that logic is what I want to test.

        [Serializable]
        [EligabilityStudentDebtsAttribute(ErrorMessage = "You must answer yes or no to all questions")]
        public class Eligability
        {
            [BooleanRequiredToBeTrue(ErrorMessage = "You must agree to the statements listed")]
            public bool StatementAgree { get; set; }

            [Required(ErrorMessage = "Please choose an option")]
            public bool? Income { get; set; }

            // .....removed for brevity
        }

        [AttributeUsage(AttributeTargets.Class)]
        public class EligabilityStudentDebtsAttribute : ValidationAttribute
        {
            // If AnyDebts is true then
            // StudentDebts must be true or false
            public override bool IsValid(object value)
            {
                Eligability elig = (Eligability)value;
                bool ok = true;
                if (elig.AnyDebts == true)
                {
                    if (elig.StudentDebts == null)
                    {
                        ok = false;
                    }
                }
                return ok;
            }
        }

    I have tried to write a test as follows, but it does not work:

        [TestMethod]
        public void Eligability_model_StudentDebts_is_required_if_AnyDebts_is_true()
        {
            // Arrange
            var eligability = new Eligability();
            var controller = new ApplicationController();

            // Act
            controller.ModelState.Clear();
            controller.ValidateModel(eligability);
            var actionResult = controller.Section2(eligability, null, string.Empty);

            // Assert
            Assert.IsInstanceOfType(actionResult, typeof(ViewResult));
            Assert.AreEqual(string.Empty, ((ViewResult)actionResult).ViewName);
            Assert.AreEqual(eligability, ((ViewResult)actionResult).ViewData.Model);
            Assert.IsFalse(((ViewResult)actionResult).ViewData.ModelState.IsValid);
        }

    The ModelStateDictionary does not contain a key for this custom attribute; it only contains entries for the standard validation attributes. Why is this? What is the best way to test these custom attributes? Thanks.
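
    The logic of a class-level rule can also be covered without going through a controller at all, by constructing the model by hand and invoking the rule directly; with the code above that means calling IsValid on a new EligabilityStudentDebtsAttribute instance in the test. A minimal sketch of that shape in Python, where the rule function and model class are hypothetical stand-ins mirroring the attribute:

        import unittest

        class Eligibility:
            """Hand-built stand-in for the view model checked by the rule below."""

            def __init__(self, any_debts, student_debts):
                self.any_debts = any_debts
                self.student_debts = student_debts

        def student_debts_answered(elig):
            # Mirrors the attribute: if AnyDebts is true, StudentDebts must be answered.
            return not (elig.any_debts and elig.student_debts is None)

        class StudentDebtsRuleTest(unittest.TestCase):
            def test_invalid_when_debts_claimed_but_question_unanswered(self):
                self.assertFalse(student_debts_answered(Eligibility(True, None)))

            def test_valid_when_debts_claimed_and_question_answered(self):
                self.assertTrue(student_debts_answered(Eligibility(True, False)))

            def test_valid_when_no_debts(self):
                self.assertTrue(student_debts_answered(Eligibility(False, None)))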

    Read the article

  • Redundant assert statements in .NET unit test framework

    - by Prabhu
    Isn't it true that every assert statement can be translated to an Assert.IsTrue, since by definition you are asserting whether something is true or false? Why do test frameworks introduce options like Assert.AreEqual, Assert.IsNotNull, and especially Assert.IsFalse? I feel I spend too much time thinking about which assert to use when I write unit tests.
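
    The practical reason is diagnostics: the specific asserts know both the expected and the actual value (or the relevant condition) and can report them on failure, whereas a bare boolean assert can only say the expression was false. The same difference holds for Assert.AreEqual versus Assert.IsTrue in the .NET frameworks; here is a small illustration with Python's unittest, where both tests fail on purpose so the messages can be compared:

        import unittest

        class FailureMessageDemo(unittest.TestCase):
            def test_specific_assert(self):
                total = 2 + 2
                # Failure message reads roughly "5 != 4" -- both values are shown.
                self.assertEqual(5, total)

            def test_boolean_assert(self):
                total = 2 + 2
                # Failure message reads roughly "False is not true" -- the values are lost.
                self.assertTrue(total == 5)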

    Read the article

  • How to run a recorded (HTML) Selenium test from .NET

    - by Gaspar Nagy
    I run Selenium tests with Selenium RC from .NET (C#). In some cases, I would like to keep the test case source as HTML (to be able to modify it from the Selenium IDE), but I would like to run/include these tests from my C# unit tests. Maybe it is obvious, but I can't find the API method in Selenium Core to achieve this. Any idea how to do that? (I think the "includePartial" command in Selenium on Rails does what I need, but for C#.)

    Read the article

  • From a shell script open a new tab in a specific instance of Firefox.

    - by toc777
    Hi everyone, I have a shell script that creates Firefox profiles and then uses them to open multiple instances of Firefox simultaneously. The problem is: how can I open a URL in a particular instance of Firefox? I have tried

        firefox -CREATEPROFILE test
        firefox -P 'test' -no-remote
        firefox -P test -url www.google.ie

    but the last part, which tries to open the URL using the test profile, does not work; it always opens in the default profile. Is there any way to tell Firefox from the command line to open a URL using a particular profile? Thanks.

    Read the article

  • Is there a library available which can easily record and replay the results of API calls?

    - by Billy ONeal
    I'm working on writing various things that call relatively complicated Win32 API functions. Here's an example:

        //Encapsulates calling NtQuerySystemInformation buffer management.
        WindowsApi::AutoArray NtDll::NtQuerySystemInformation(
            SystemInformationClass toGet) const
        {
            AutoArray result;
            ULONG allocationSize = 1024;
            ULONG previousSize;
            NTSTATUS errorCheck;
            do
            {
                previousSize = allocationSize;
                result.Allocate(allocationSize);
                errorCheck = WinQuerySystemInformation(toGet, result.GetAs<void>(),
                    allocationSize, &allocationSize);
                if (allocationSize <= previousSize)
                    allocationSize = previousSize * 2;
            } while (errorCheck == 0xC0000004L);
            if (errorCheck != 0)
            {
                THROW_MANUAL_WINDOWS_ERROR(WinRtlNtStatusToDosError(errorCheck));
            }
            return result;
        }

        //Client of the above.
        ProcessSnapshot::ProcessSnapshot()
        {
            using Dll::NtDll;
            NtDll ntdll;
            AutoArray systemInfoBuffer = ntdll.NtQuerySystemInformation(
                NtDll::SystemProcessInformation);
            BYTE * currentPtr = systemInfoBuffer.GetAs<BYTE>();
            //Loop through the results, creating Process objects.
            SYSTEM_PROCESSES * asSysInfo;
            do
            {
                // Loop book keeping
                asSysInfo = reinterpret_cast<SYSTEM_PROCESSES *>(currentPtr);
                currentPtr += asSysInfo->NextEntryDelta;

                //Create the process for the current iteration and fill it with data.
                std::auto_ptr<ProcImpl> currentProc(ProcFactory(
                    static_cast<unsigned __int32>(asSysInfo->ProcessId), this));
                NormalProcess* nptr = dynamic_cast<NormalProcess*>(currentProc.get());
                if (nptr)
                {
                    nptr->SetProcessName(asSysInfo->ProcessName);
                }

                // Populate process threads
                for (ULONG idx = 0; idx < asSysInfo->ThreadCount; ++idx)
                {
                    SYSTEM_THREADS& sysThread = asSysInfo->Threads[idx];
                    Thread thread(
                        currentProc.get(),
                        static_cast<unsigned __int32>(sysThread.ClientId.UniqueThread),
                        sysThread.StartAddress);
                    currentProc->AddThread(thread);
                }
                processes.push_back(currentProc);
            } while (asSysInfo->NextEntryDelta != 0);
        }

    My problem is in mocking out the NtDll::NtQuerySystemInformation method -- namely, that the data structure returned is complicated (well, here it's actually relatively simple, but it can be complicated), and writing a test which builds the data structure the way the API call does can take 5-6 times as long as writing the code that uses the API.

    What I'd like to do is take a call to the API and record it somehow, so that I can return that recorded value to the code under test without actually calling the API. The returned structures cannot simply be memcpy'd, because they often contain inner pointers (pointers to other locations in the same buffer). The library in question would need to check for these kinds of things and be able to restore pointer values to a similar buffer upon replay (i.e. check each pointer-sized value to see whether it could be interpreted as a pointer within the buffer, change that to an offset, and remember to change it back to a pointer on replay -- a false positive rate here is acceptable). Is there anything out there that does anything like this?

    Read the article

  • How can I use the compile time constant __LINE__ in a string?

    - by John
    I can use __LINE__ as a method parameter just fine, but I would like an easy way to use it in a function that uses strings. For instance, say I have this:

        11 string myTest()
        12 {
        13     if(!testCondition)
        14         return logError("testcondition failed");
        15 }

    And I want the result of the function to be: "myTest line 14: testcondition failed". How can I write logError? Does it have to be some monstrosity of a macro?

    Read the article

  • Unit test helper methods?

    - by Aly
    Hi, I have classes which previously had massive methods, so I subdivided the work of those methods into 'helper' methods. These helper methods are declared private to enforce encapsulation. However, I want to unit test the big public methods. Is it good to unit test the helper methods too? If one of them fails, the public method that calls it will also fail, but this way we can identify why it failed. Also, in order to test these using a mock object I would need to change their visibility from private to protected; is this desirable?

    Read the article

  • Get/save parameters to an expected JMock method call?

    - by Tayeb
    Hi, I want to test an "Adapter" object that when it receives an xml message, it digest it to a Message object, puts message ID + CorrelationID both with timestamps and forwards it to a Client object.=20 A message can be correlated to a previous one (e.g. m2.correlationID =3D m1.ID). I mock the Client, and check that Adapter successfully calls "client.forwardMessage(m)" twice with first message with null correlationID, and a second with a not-null correlationID. However, I would like to precisely test that the correlationIDs are set correctly, by grabing the IDs (e.g. m1.ID). But I couldn't find anyway to do so. There is a jira about adding the feature, but no one commented and it is unassigned. Is this really unimplemented? I read about the alternative of redesigning the Adapter to use an IdGenerator object, which I can stub, but I think there will be too many objects.=20 Don't you think it adds unnecessary complexity to split objects to a so fine granularity? Thanks, and I appreciate any comments :-) Tayeb

    Read the article
