Search Results

Search found 10010 results on 401 pages for 'a b testing'.

Page 100 of 401

  • What sort of Circular Dependencies does Oracle allow?

    - by Neil
    Hi all, I am creating test cases and need to cover circular dependencies. So far I have been able to create two tables such that table A has a foreign key to table B, and table B has a foreign key back to table A. What other circular dependencies exist, or are allowed, between Oracle objects? I tried to create a cycle between views, but Oracle rejected that outright.

  • How to test onLowMemory conditions?

    - by Samuh
    I have put some code in the onLowMemory() callback and want to test it. Is there a "direct" way to trigger onLowMemory() on my Application subclass, or will I have to overload the phone by starting many apps and running memory-intensive tasks? Thanks.

  • Using jMock, how to reuse a parameter

    - by BenZen
    I'm building a test in which I need to send a question and wait for the answer. Message passing is not the problem. To figure out which answer corresponds to which question, I use an id, generated with a UUID, and I want to retrieve this id, which is given as a parameter to a mocked object. It looks like this:

        oneOf(message).setJMSCorrelationID(with(correlationId));
        inSequence(sequence);

    where correlationId is the string I'd like to keep for another expectation like this one:

        oneOf(session).createBrowser(with(inputChannel),
            with("JMSType = 'pong' AND JMSCorrelationId = '" + correlationId + "'"));

    Have you got an answer?

  • "dynamic" keyword and JSON data

    - by Peter Perhác
    An action method in my ASP.NET MVC2 application returns a JsonResult object, and in my unit test I would like to check that the returned JSON object indeed contains the expected values. I tried this:

        1. dynamic json = ((JsonResult)myActionResult).Data;
        2. Assert.AreEqual(JsonMessagesHelper.ErrorLevel.ERROR.ToString(), json.ErrorLevel);

    But I get a RuntimeBinderException: "'object' does not contain a definition for 'ErrorLevel'". However, when I place a breakpoint on line 2 and inspect the json dynamic variable in the debugger, it obviously does contain the ErrorLevel string with the expected value, so if the runtime binder weren't playing funny the test would pass. What am I not getting? What am I doing wrong, and how can I fix this? How can I make the assertion pass?

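    A likely cause, though the question doesn't confirm it: JsonResult.Data holds an anonymous type, and anonymous types are internal to the MVC assembly, so the dynamic binder in the test assembly cannot see their members even though the debugger can. A minimal sketch that sidesteps the visibility issue with reflection (only JsonMessagesHelper and myActionResult come from the question):

        // Read the property via plain reflection, which is not subject to
        // the anonymous type's internal visibility.
        object data = ((JsonResult)myActionResult).Data;
        object errorLevel = data.GetType().GetProperty("ErrorLevel").GetValue(data, null);
        Assert.AreEqual(JsonMessagesHelper.ErrorLevel.ERROR.ToString(), errorLevel);

    Alternatively, an [assembly: InternalsVisibleTo("YourTestAssembly")] attribute in the MVC project should make the original dynamic version work as written.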

  • Comparing two objects that are the same in MbUnit

    - by Coppermill
    From MbUnit I am trying to check whether the values of two objects are the same, using Assert.AreSame(RawDataRow, result); however, I am getting the following failure:

        ======================
        Expected Value & Actual Value : {RawDataRow: CentreID = "CentreID1",
        CentreLearnerRef = "CentreLearnerRef1", ContactID = 1,
        DOB = 2010-05-05T00:00:00.0000000, Email = "Email1",
        ErrorCodes = "ErrorCodes1", ErrorDescription = "ErrorDescription1",
        FirstName = "FirstName1"}
        Remark : Both values look the same when formatted but they are distinct instances.
        ======================

    I don't want to have to go through each property. Can I do this from MbUnit?

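    Assert.AreSame tests reference identity, which is why two structurally identical instances fail; what's wanted here is a value comparison. A hedged sketch of a reflection helper that asserts each public property without writing the assertions by hand (the helper is hypothetical, not an MbUnit API):

        // Hypothetical helper: compare every public readable property.
        private static void AssertPropertiesEqual<T>(T expected, T actual)
        {
            foreach (var property in typeof(T).GetProperties())
            {
                if (!property.CanRead) continue;
                Assert.AreEqual(property.GetValue(expected, null),
                                property.GetValue(actual, null),
                                "Property '" + property.Name + "' differs");
            }
        }

        // Usage:
        AssertPropertiesEqual(RawDataRow, result);

    If you are on MbUnit v3 (Gallio), it also ships a StructuralEqualityComparer<T> that can be handed to Assert.AreEqual for the same effect.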

  • Managing logs/warnings in Python extensions

    - by Dimitri Tcaciuc
    TL;DR version: what do you use for configurable (and preferably captured) logging inside the C++ bits of a Python project? Details follow.

    Say you have a few compiled .so modules that may need to do some error checking and warn the user of (partially) incorrect data. Currently I have a pretty simplistic setup where I'm using the logging framework from Python code and the log4cxx library from C/C++. The log4cxx log level is defined in a file (log4cxx.properties) and is currently fixed, and I'm thinking about how to make it more flexible. A couple of choices that I see:

    1. One way to control it would be to have a module-wide configuration call:

        # foo/__init__.py
        import sys
        from _foo import bar, baz, configure_log
        configure_log(sys.stdout, WARNING)

        # tests/test_foo.py
        def test_foo():
            # Maybe a custom context to change the logfile for
            # the module and restore it at the end.
            with CaptureLog(foo) as log:
                assert foo.bar() == 5
                assert log.read() == "124.24 - foo - INFO - Bar returning 5"

    2. Have every compiled function that does logging accept optional log parameters:

        // foo.c
        int bar(PyObject* x, PyObject* logfile, PyObject* loglevel)
        {
            LoggerPtr logger = default_logger("foo");
            if (logfile != Py_None)
                logger = file_logger(logfile, loglevel);
            ...
        }

        # tests/test_foo.py
        def test_foo():
            with TemporaryFile() as logfile:
                assert foo.bar(logfile=logfile, loglevel=DEBUG) == 5
                assert logfile.read() == "124.24 - foo - INFO - Bar returning 5"

    3. Some other way?

    The second one seems somewhat cleaner, but it requires altering function signatures (or accepting kwargs and parsing them). The first one is probably somewhat awkward, but it sets up the entire module in one go and removes the logic from each individual function. What are your thoughts on this? I'm all ears for alternative solutions as well. Thanks!

  • Stub web calls in Scala

    - by Dennis Laumen
    I'm currently writing a wrapper for the Spotify Metadata API to learn Scala. Everything's fine and dandy, but I'd like to unit test the code. To do this properly I'll need to stub the Spotify API so I get consistent return values (things like track popularity change very frequently). Does anybody know how to stub web calls in Scala, on the JVM in general, or with some external tool I could hook into my Maven setup? PS: I'm basically looking for something like Ruby's FakeWeb... Thanks in advance!

  • How can I write a unit test to determine whether an object can be garbage collected?

    - by driis
    In relation to my previous question, I need to check whether a component that will be instantiated by Castle Windsor can be garbage collected after my code has finished using it. I have tried the suggestion in the answers to the previous question, but it does not seem to work as expected, at least for my code. So I would like to write a unit test that tests whether a specific object instance can be garbage collected after some of my code has run. Is that possible to do in a reliable way?

    EDIT: I currently have the following test based on Paul Stovell's answer, which succeeds:

        [TestMethod]
        public void ReleaseTest()
        {
            WindsorContainer container = new WindsorContainer();
            container.Kernel.ReleasePolicy = new NoTrackingReleasePolicy();
            container.AddComponentWithLifestyle<ReleaseTester>(LifestyleType.Transient);

            Assert.AreEqual(0, ReleaseTester.refCount);
            var weakRef = new WeakReference(container.Resolve<ReleaseTester>());
            Assert.AreEqual(1, ReleaseTester.refCount);

            GC.Collect();
            GC.WaitForPendingFinalizers();

            Assert.AreEqual(0, ReleaseTester.refCount, "Component not released");
        }

        private class ReleaseTester
        {
            public static int refCount = 0;

            public ReleaseTester()
            {
                refCount++;
            }

            ~ReleaseTester()
            {
                refCount--;
            }
        }

    Am I right assuming that, based on the test above, I can conclude that Windsor will not leak memory when using the NoTrackingReleasePolicy?

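    For comparison, a common alternative to finalizer counting is to assert directly on the WeakReference the test already creates; a sketch under the same container setup as above:

        var weakRef = new WeakReference(container.Resolve<ReleaseTester>());

        // Collect, let finalizers run, then collect again to reclaim
        // anything that was resurrected onto the finalizer queue.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        Assert.IsFalse(weakRef.IsAlive, "Component was not garbage collected");

    As for the conclusion: a passing test shows the container isn't rooting the transient instance under NoTrackingReleasePolicy, which is the usual meaning of "Windsor won't leak" in this scenario, though it says nothing about other lifestyles or tracked components.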

  • Intelligent serial port mocks with Moq

    - by Padu Merloti
    I have to write a lot of code that deals with serial ports. Usually there will be a device connected at the other end of the wire, and I usually create my own mocks to simulate their behavior. I'm starting to look at Moq to help with my unit tests. It's pretty simple to use when you only need a stub, but I want to know whether it is possible, and if so how, to create a mock for a hardware device that responds differently according to what I want to test. A simple example: one of the devices I interface with receives a command (move to position x), gives back an ACK message and goes into a "moving" state until it reaches the ordered position. I want to create a test where I send the move command and then keep querying the state until it reaches the final position. I want to create two versions of the mock for two different tests: one where I expect the device to reach the final position successfully, and one where it will fail. Too much to ask?

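    Not too much to ask: a Moq setup can return a different value on each call by computing the result in a Returns(Func<T>) callback. A sketch against a hypothetical IDevice abstraction (the interface, enum, and helper are assumptions, not from the question; this also assumes the production code talks to the device through an interface rather than SerialPort directly):

        using Moq;
        using NUnit.Framework;

        public interface IDevice
        {
            bool MoveTo(int position);   // true = ACK
            DeviceState QueryState();
        }

        public enum DeviceState { Moving, AtPosition, Error }

        [TestFixture]
        public class DeviceProtocolTests
        {
            [Test]
            public void Move_command_eventually_reaches_position()
            {
                var device = new Mock<IDevice>();
                var polls = 0;
                device.Setup(d => d.MoveTo(It.IsAny<int>())).Returns(true);
                device.Setup(d => d.QueryState())
                      .Returns(() => ++polls < 3 ? DeviceState.Moving
                                                 : DeviceState.AtPosition);

                Assert.IsTrue(device.Object.MoveTo(42));
                Assert.AreEqual(DeviceState.AtPosition, PollUntilSettled(device.Object));
            }

            private static DeviceState PollUntilSettled(IDevice device)
            {
                DeviceState state;
                do { state = device.QueryState(); } while (state == DeviceState.Moving);
                return state;
            }
        }

    The failing variant is the same mock with QueryState() set up to end in DeviceState.Error instead of DeviceState.AtPosition.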

  • Selenium RC cannot test compressed HTML

    - by JH
    To improve site speed, the web server compresses (gzips) the HTML files before sending them to clients. When running Selenium tests, a pop-up appears saying: You have chosen to open ... which is a: Bin file from: http://... Would you like to save this file? "Cancel" "Save File". It seems that the compressed HTML is not being decompressed, and the browser treats it as a binary file.

  • What's your release process for your commercial application?

    - by dr. evil
    If you are developing a commercial desktop application, what's your release process? A sample process:

    1. Develop it: patch bugs, add features, etc.
    2. Feature freeze (do not fix or add anything unless it's absolutely required)
    3. Test it
    4. If everything is OK, release it; if not, fix it, test it, release it

    I think the most crucial question is: what's your approach to the "feature freeze, test, release" cycle? Or do you test frequently enough that you don't need such a cycle, so your software is always ready for public release?

  • How do I write a spec to verify the rendering of partials?

    - by TheDeeno
    I'm using rr and rspec. Also, I'm using the collection shorthand for partial rendering. My question: how do I correctly fill out the following spec?

        before(:each) do
          assigns[:models] = Array.new(10, stub(Model))
        end

        it "should render the 'listing' partial for each model" do
          # help me write something that actually verifies this
        end

    I've tried a few examples from the RSpec book, the RSpec docs, and the rr docs. Everything I try seems to leave me with runtime errors in the test, not failed assertions. Rather than show all the variations I've tried, I figured all I need is for someone to show me one that actually works. I'd be good to go from there.

  • Is there an efficient way to run multiple test cases in C++?

    - by Ahmed Abdelaal
    I use MS Visual Studio and I am new to C++, so I am just wondering if there is a faster, more efficient way to run multiple test cases, instead of repeatedly pressing Ctrl+F5 and re-opening the console. For example, if I have this code:

        #include <iostream>
        using namespace std;

        int main()
        {
            int x;
            cout << "Enter a number" << endl;
            cin >> x;
            cout << x * 2 << endl;
        }

    Is there a way I could try different values of x at once and get the results together? Thanks

  • How can you unit test a DelegateCommand?

    - by Damian
    I am trying to unit test my ViewModel and my SaveItem(save, CanSave) delegate command. I want to ensure that CanSave is called and returns the correct value given certain conditions. Basically, how can I invoke the delegate command from my unit test? (Actually it's more of an integration test.) Obviously I could just test the return value of the CanSave method, but I am trying to use BDD to the letter, i.e. no code without a test first.

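    Since a DelegateCommand implements ICommand, the test can drive it exactly as the view would, through the interface, rather than calling CanSave directly. A sketch against a hypothetical view model (all names below are assumptions, not from the question):

        [TestMethod]
        public void Save_command_is_disabled_until_the_item_changes()
        {
            var vm = new ItemViewModel();              // hypothetical view model

            ICommand save = vm.SaveItemCommand;        // the DelegateCommand under test
            Assert.IsFalse(save.CanExecute(null));     // nothing to save yet

            vm.Name = "edited";                        // make the item dirty
            Assert.IsTrue(save.CanExecute(null));      // CanSave now returns true

            save.Execute(null);                        // runs the save delegate
        }

    Calling CanExecute on the command is what invokes the CanSave delegate in the common DelegateCommand implementations (Prism's included), so the assertions above exercise it directly.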

  • MSTest/NUnit Writing BDD style "Given, When, Then" tests

    - by Charlie
    I have been using MSpec to write my unit tests and really prefer the BDD style; I think it's a lot more readable. I'm now using Silverlight, which MSpec doesn't support, so I'm having to use MSTest, but I would still like to maintain a BDD style, so I am trying to work out a way to do this. Just to explain what I'm trying to achieve, here's how I'd write an MSpec test:

        [Subject(typeof(Calculator))]
        public class when_I_add_two_numbers : with_calculator
        {
            Establish context = () => this.Calculator = new Calculator();
            Because I_add_2_and_4 = () => this.Calculator.Add(2).Add(4);
            It should_display_6 = () => this.Calculator.Result.ShouldEqual(6);
        }

        public class with_calculator
        {
            protected static Calculator Calculator;
        }

    So with MSTest I would try to write the test like this (although you can see it won't work because I've put in 2 TestInitialize attributes, but you get what I'm trying to do):

        [TestClass]
        public class when_I_add_two_numbers : with_calculator
        {
            [TestInitialize]
            public void GivenIHaveACalculator()
            {
                this.Calculator = new Calculator();
            }

            [TestInitialize]
            public void WhenIAdd2And4()
            {
                this.Calculator.Add(2).Add(4);
            }

            [TestMethod]
            public void ThenItShouldDisplay6()
            {
                this.Calculator.Result.ShouldEqual(6);
            }
        }

        public class with_calculator
        {
            protected Calculator Calculator { get; set; }
        }

    Can anyone come up with some more elegant suggestions for writing tests in this way with MSTest? Thanks

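    One workaround in the same spirit (a sketch, not an established MSTest feature) is a base class whose single [TestInitialize] runs overridable Given/When steps, so each spec class keeps the BDD shape with only one initializer; MSTest runs inherited initializers on derived [TestClass]es:

        [TestClass]
        public abstract class SpecificationBase
        {
            [TestInitialize]
            public void Init()
            {
                Given();   // establish context
                When();    // because: the action under test
            }

            protected virtual void Given() { }
            protected virtual void When() { }
        }

        [TestClass]
        public class when_I_add_two_numbers_bdd : SpecificationBase
        {
            private Calculator _calculator;

            protected override void Given() { _calculator = new Calculator(); }
            protected override void When() { _calculator.Add(2).Add(4); }

            [TestMethod]
            public void then_it_should_display_6()
            {
                Assert.AreEqual(6, _calculator.Result);
            }
        }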

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one for the production code and another for the unit tests. I did this as per the suggestions I got here on SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable, and they all fail trying to find it, so it is definitely being loaded. I then did a quick read to find out which is better, a DLL or an executable. It seems that a DLL is much faster, since DLLs are loaded into the caller's memory space, whereas communication between executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project also relies on code written in another library, which is also in executable format at the moment. Should the projects that expose some sort of SDK rather be compiled to DLLs, and only the projects that use the SDK be compiled to executables?

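    A common way to resolve this (a sketch with hypothetical names, not the poster's actual projects) is to push everything testable into a class library and leave the executable as a thin shell, so neither the unit tests nor the SDK consumers ever load an .exe:

        // MyApp.Core (class library): all the logic, referenced by the tests.
        namespace MyApp.Core
        {
            public class Engine
            {
                public int Run(string[] args)
                {
                    // real work happens here, where tests can reach it
                    return 0;
                }
            }
        }

        // MyApp (executable): nothing but an entry point delegating to the DLL.
        namespace MyApp
        {
            public static class Program
            {
                public static int Main(string[] args)
                {
                    return new Core.Engine().Run(args);
                }
            }
        }

    The same split answers the SDK question: projects exposing an SDK compile to DLLs, and only the top-level hosts compile to executables.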

  • Is it possible to access a running instance of an app using JNA/JNI?

    - by Carlos Blanco
    I'm writing a test engine for a Java application that has some of its code written in C. This application uses JNI to access its native part. In the engine I'm writing, I use FEST to control the UI and perform the tests. However, I'm blind when dealing with the part that is written in C. I wonder if I can use JNA or JNI to access the native part of the app. I believe the fact that the application is already running is a huge issue here.

  • How do I unit test the methods in a method object?

    - by Sancho
    I've performed the "Replace Method with Method Object" refactoring described by Beck. Now, I have a class with a "run()" method and a bunch of member functions that decompose the computation into smaller units. How do I test those member functions? My first idea is that my unit tests be basically copies of the "run()" method (with different initializations), but with assertions between each call to the member functions to check the state of the computation. (I'm using Python and the unittest module.)

  • Any teams out there using TypeMock? Is it worth the hefty price tag?

    - by dferraro
    Hi, I hope this question is not 'controversial'. I'm just basically asking: has anyone here purchased TypeMock and been happy (or unhappy) with the results? We are a small dev shop of only 12 developers, including the 2 dev managers. We've been using NMock so far, but there are limitations. I have done research and started playing with TypeMock, and I love it. It has super clean syntax and lets you mock basically everything, which is great for legacy code. The problem is: how do I justify to my boss spending $800-1200 per license for an API that has 4-5 competitors that are completely free? $800-1200 is what Infragistics or Telerik cost per license, and there sure as hell aren't 4-5 comparable open-source UI frameworks, which is why I find it a bit overpriced, albeit an awesome library. Any opinions / experiences are greatly appreciated.

    EDIT: after finding Moq I thought I had fallen in love, until I found out that it's not fully supported in VB.NET, because VB lacks lambda sub-routines =(. Is anyone using Moq with VB.NET? The problem is we are a mixed shop: we use C# for our CRM development and VB for everything else. Any guidance is greatly appreciated again.

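    For context on the VB.NET complaint in the EDIT: Moq setups for void methods take a statement lambda, and VB.NET only gained Sub lambdas in VB 10 (Visual Studio 2010). A C# sketch of the kind of setup that was the sticking point (IRepository and Order are hypothetical):

        // Setting up and verifying a void method needs a statement lambda,
        // the construct older VB.NET could not express.
        var repository = new Mock<IRepository>();
        repository.Setup(r => r.Save(It.IsAny<Order>()));
        repository.Verify(r => r.Save(It.IsAny<Order>()), Times.Once());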

  • unit test for proxy checking

    - by zubin71
    The proxy configuration of a machine can be easily fetched using:

        def check_proxy():
            import urllib2
            http_proxy = urllib2.getproxies().get('http')

    I need to write a test for the function written above. In order to do that I need to either:

    1. Set the system-wide proxy to an invalid URL during the test (sounds like a bad idea), or
    2. Supply an invalid URL to http_proxy.

    How can I achieve either of the above?

  • jQuery ajax load of JSON in unit tests

    - by wmitchell
    I'm trying to load a dataset in Jasmine for my tests, like this. However, since it's a JSON call, I can't seem to get the test denoted by "it" to wait until the JSON call has finished before using its array. I tried using the ajaxStop function to no avail. Any ideas?

        describe("simple checks", function() {
          var exampleArray = new Array();

          beforeEach(function() {
            $(document).ajaxStop(function() {
              $(this).unbind("ajaxStop");
              $.getJSON('/jasmine/obj.json', function(data) {
                $.each(data.jsonattr, function(i, widgetElement) {
                  exampleArray.push(new widget(widgetElement));
                });
              });
            });
          });

          it("use the exampleArray", function() {
            doSomething(exampleArray[0]); // frequently this is coming up as undefined
          });
        });

  • documenting black-box test cases

    - by Blux
    Hi everyone, I want to write initial (black-box) test cases for one of my university projects. I haven't started coding yet; I'm still completing the SRS document, and I should specify the test cases I'm going to implement after the coding. The project is web based, and I should follow this template for each test case:

        +++++
        Test case ID:
        Author:
        Initial state:
        Preconditions:
        Use Case:
        Test input:
        Expected output:
        ++++++

    The thing is, I don't know the difference between "initial state" and "preconditions". In some of the test cases it's hard to differentiate between them. For example, in "Edit Page", what should be the initial state and what should be the preconditions? Any help will be appreciated. =)

  • How to make sure web services are kept stable from one release to the next?

    - by Tor Hovland
    The company where I work is a software vendor with a suite of applications. There are also a number of web services, and of course they have to be kept stable even if the applications change. We haven't always succeeded with this, and sometimes a customer finds that a service is not behaving as before after upgrading. We now want to handle this better. In general, web services shouldn't change, and if they have to, at least we will know about it and document the change. But how do we ensure this? One idea is to compare the WSDL files with the previous versions at every release. That will make sure the interfaces don't change, but it won't detect that the behavior changes, for example if a bug is introduced in some common library. Another idea is to build up a suite of service tests, for example using soapUI. But then we'll never know if we have covered enough cases. What are some best practices regarding this?

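    On the WSDL-comparison idea: one lightweight way to enforce it is a test that diffs the live WSDL against a checked-in golden copy, so any interface change fails the build. A sketch with a hypothetical endpoint and paths (NUnit-style):

        using System.IO;
        using System.Net;
        using System.Text.RegularExpressions;
        using NUnit.Framework;

        [TestFixture]
        public class ContractTests
        {
            [Test]
            public void Service_contract_has_not_changed()
            {
                string liveWsdl;
                using (var client = new WebClient())
                {
                    liveWsdl = client.DownloadString("http://localhost:8080/OrderService?wsdl");
                }
                string goldenWsdl = File.ReadAllText(@"Contracts\OrderService.wsdl");

                // Normalize whitespace so formatting-only changes don't fail the build.
                Assert.AreEqual(Normalize(goldenWsdl), Normalize(liveWsdl),
                    "WSDL changed: update the golden copy and document the change.");
            }

            private static string Normalize(string xml)
            {
                return Regex.Replace(xml, @"\s+", " ").Trim();
            }
        }

    As the question itself notes, this only guards the interface; behavioral regressions still need service-level tests (for example in soapUI) run against every release.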

  • How to assert/unit-test a server's JSON response?

    - by shazax
    My current project uses JSON as its data interchange format. Both the front-end and back-end teams agree upon a JSON structure before starting to integrate a service. At times, due to unannounced changes in the JSON structure by the back-end team, the front-end code breaks. Is there any external library that we could use to compare a mock JSON fixture with the server's JSON response? Basically it should assert the whole JSON object and throw an error if there is any violation of the agreed JSON format. Additional info: the app is built on jQuery, consuming REST JSON services.
