Search Results

Search found 13748 results on 550 pages for 'split testing'.


  • Moq and accessing called parameters

    - by lozzar
    I've just started to implement unit tests (using xUnit and Moq) on an already established project of mine. The project makes extensive use of dependency injection via the Unity container. I have two services, A and B. Service A is the one being tested in this case. Service A calls B and gives it a delegate to an internal function. This 'callback' is used to notify A when a message has been received that it must handle. Hence A calls (where b is an instance of service B):

        b.RegisterHandler(Guid id, Action<byte[]> messageHandler);

    In order to test service A, I need to be able to call messageHandler, as this is the only way it currently accepts messages. Can this be done using Moq? I.e. can I mock service B such that, when RegisterHandler is called, the value of messageHandler is passed out to my test? Or do I need to redesign this? Are there any design patterns I should be using in this case? Does anyone know of any good resources on this kind of design?
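
    One way to capture that delegate with Moq is the Callback hook. A minimal sketch; IServiceB, ServiceA, and a.Start() are hypothetical names, since the question doesn't show B's interface:

        Action<byte[]> captured = null;
        var b = new Mock<IServiceB>();  // hypothetical interface for service B
        b.Setup(x => x.RegisterHandler(It.IsAny<Guid>(), It.IsAny<Action<byte[]>>()))
         .Callback<Guid, Action<byte[]>>((id, handler) => captured = handler);

        var a = new ServiceA(b.Object);    // constructor injection assumed
        a.Start();                         // hypothetical call that makes A register its handler
        captured(new byte[] { 1, 2, 3 });  // now drive A as if B had delivered a message

    This keeps the current design intact; the mock simply hands the registered handler out to the test.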

  • ASP.NET MVC Tutorial Unit Tests

    - by Nicholas
    I am working through Steve Sanderson's book Pro ASP.NET MVC Framework and I am having some issues with two unit tests which produce errors. The example below tests the CheckOut ViewResult:

        [AcceptVerbs(HttpVerbs.Post)]
        public ViewResult CheckOut(Cart cart, FormCollection form)
        {
            // Empty carts can't be checked out
            if (cart.Lines.Count == 0)
            {
                ModelState.AddModelError("Cart", "Sorry, your cart is empty!");
                return View();
            }
            // Invoke model binding manually
            if (TryUpdateModel(cart.ShippingDetails, form.ToValueProvider()))
            {
                orderSubmitter.SubmitOrder(cart);
                cart.Clear();
                return View("Completed");
            }
            else // Something was invalid
                return View();
        }

    with the following unit test:

        [Test]
        public void Submitting_Empty_Shipping_Details_Displays_Default_View_With_Error()
        {
            // Arrange
            CartController controller = new CartController(null, null);
            Cart cart = new Cart();
            cart.AddItem(new Product(), 1);
            // Act
            var result = controller.CheckOut(cart, new FormCollection { { "Name", "" } });
            // Assert
            Assert.IsEmpty(result.ViewName);
            Assert.IsFalse(result.ViewData.ModelState.IsValid);
        }

    I have resolved any issues surrounding TryUpdateModel by upgrading to ASP.NET MVC 2 (Release Candidate 2), and the website runs as expected. The associated error message is:

        Tests.CartControllerTests.Submitting_Empty_Shipping_Details_Displays_Default_View_With_Error:
        System.ArgumentNullException : Value cannot be null.
        Parameter name: controllerContext

    and, in more detail:

        at System.Web.Mvc.ModelValidator..ctor(ModelMetadata metadata, ControllerContext controllerContext)
        at System.Web.Mvc.DefaultModelBinder.OnModelUpdated(ControllerContext controllerContext, ModelBindingContext bindingContext)
        at System.Web.Mvc.DefaultModelBinder.BindComplexModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
        at System.Web.Mvc.Controller.TryUpdateModel[TModel](TModel model, String prefix, String[] includeProperties, String[] excludeProperties, IValueProvider valueProvider)
        at System.Web.Mvc.Controller.TryUpdateModel[TModel](TModel model, IValueProvider valueProvider)
        at WebUI.Controllers.CartController.CheckOut(Cart cart, FormCollection form)

    Has anyone run into a similar issue or indeed got the test to pass?
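
    The stack trace shows TryUpdateModel reaching for a ControllerContext, which is null when the controller is constructed directly in a test. A hedged sketch of one common workaround, giving the controller a mocked context before calling the action (Moq assumed; the (HttpContextBase, RouteData, ControllerBase) constructor is the MVC 2 one):

        var httpContext = new Mock<HttpContextBase>();
        controller.ControllerContext = new ControllerContext(
            httpContext.Object, new RouteData(), controller);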

  • C# unit test code questions

    - by 5YrsLaterDBA
    We have started using C#'s built-in unit test functionality; Visual Studio 2008 generated the unit test code for me. I have a few questions about the generated code. The following is copied from the generated file:

        #region Additional test attributes
        //
        // You can use the following additional attributes as you write your tests:
        //
        // Use ClassInitialize to run code before running the first test in the class
        // [ClassInitialize()]
        // public static void MyClassInitialize(TestContext testContext)
        // {
        // }
        //
        // Use ClassCleanup to run code after all tests in a class have run
        // [ClassCleanup()]
        // public static void MyClassCleanup()
        // {
        // }
        //
        // Use TestInitialize to run code before running each test
        // [TestInitialize()]
        // public void MyTestInitialize()
        // {
        // }
        //
        // Use TestCleanup to run code after each test has run
        // [TestCleanup()]
        // public void MyTestCleanup()
        // {
        // }
        //
        #endregion

    If I need the initialize and cleanup methods, do I need to remove the "My" from the method names when I enable them? Do I need to call the MyClassInitialize method somewhere before running the first test, or will it be called automatically before the other methods are called? Similar questions for the other three methods: are they called automatically at the right times?
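
    For what it's worth, MSTest keys on the attribute rather than the method name, and the runner invokes these methods automatically; a minimal sketch with freely chosen names:

        [TestClass]
        public class WidgetTests
        {
            // Runs automatically, once, before the first test in the class;
            // must be static and take a TestContext. The name is arbitrary.
            [ClassInitialize]
            public static void SetUpClass(TestContext context) { /* ... */ }

            // Runs automatically before every test; again, any name works.
            [TestInitialize]
            public void SetUpTest() { /* ... */ }

            [TestMethod]
            public void Widget_Does_Something() { /* ... */ }
        }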

  • TestNG's groups

    - by the qwerty
    If we have <include name="web" /> and <include name="weekend" />, TestNG runs all the methods that belong to either web or weekend. Is it possible to change this behaviour so that TestNG would run only the methods that belong to both web and weekend? Does anyone know a way to accomplish this?
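
    Group includes in TestNG are a union by default; one documented way to get an intersection is a BeanShell method selector in testng.xml. A sketch (the test name and classes element are placeholders):

        <test name="web-and-weekend">
          <method-selectors>
            <method-selector>
              <script language="beanshell"><![CDATA[
                // select only methods tagged with BOTH groups
                groups.containsKey("web") && groups.containsKey("weekend")
              ]]></script>
            </method-selector>
          </method-selectors>
          <classes>...</classes>
        </test>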

  • How does rspec work with rails3 for integration tests?

    - by makevoid
    What I'm trying to achieve is to do integration tests with webrat in rails3, like Yehuda does with test-unit in http://pivotallabs.com/talks/76-extending-rails-3 (minute 34). An example:

        describe SomeApp do
          it "should show the index page" do
            visit "/"
            body.should =~ /hello world/
          end
        end

    Does anyone know a way to do it?

  • How to turn off Turbo Boost temporarily?

    - by actual
    In our application we have many versions of the same routine, each optimized for a different kind of processor architecture. During install we run performance tests and select the best version of the routine. The latest processors can boost their frequency when only a few cores are in use, so sometimes our tests pick the wrong version of the routine. Is there some way to temporarily turn off Turbo Boost?
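
    On Linux, one knob I'd try, assuming a kernel with the intel_pstate driver (other cpufreq drivers expose a similar "boost" file; on other platforms a BIOS/UEFI setting may be the only option):

        # disable turbo before the timing runs (as root)
        echo 1 > /sys/devices/system/cpu/intel_pstate/no_turbo

        # ... run the routine-selection tests ...

        # restore turbo afterwards
        echo 0 > /sys/devices/system/cpu/intel_pstate/no_turbo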

  • How to mock Request.Files[] in MVC unit test class?

    - by kapil
    I want to test a controller method in an MVC unit test. The controller method under test requires a Request.Files collection with length one. I want to mock Request.Files, as I have used a file-upload control on the view rendered by the controller method. Can anyone please suggest how I can mock the Request.Files collection in my unit test? thanks, kapil
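
    A sketch of one way to do it with Moq, stubbing the abstract HttpContextBase / HttpRequestBase / HttpFileCollectionBase chain; UploadController is a placeholder for the controller under test:

        var file = new Mock<HttpPostedFileBase>();
        file.Setup(f => f.FileName).Returns("photo.jpg");
        file.Setup(f => f.ContentLength).Returns(1024);

        var files = new Mock<HttpFileCollectionBase>();
        files.Setup(f => f.Count).Returns(1);
        files.Setup(f => f[0]).Returns(file.Object);

        var request = new Mock<HttpRequestBase>();
        request.Setup(r => r.Files).Returns(files.Object);

        var httpContext = new Mock<HttpContextBase>();
        httpContext.Setup(c => c.Request).Returns(request.Object);

        var controller = new UploadController();
        controller.ControllerContext =
            new ControllerContext(httpContext.Object, new RouteData(), controller);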

  • How to run a concurrency unit test?

    - by janetsmith
    Hi, how do I use JUnit to run a concurrency test? Let's say I have a class:

        public class MessageBoard {
            public synchronized void postMessage(String message) {
                ....
            }
            public void updateMessage(Long id, String message) {
                ....
            }
        }

    I want to test multiple concurrent accesses to postMessage. Any advice on this? I wish to run this kind of concurrency test against all my setter functions (or any method that involves a create/update/delete operation). Thanks
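
    A minimal JUnit sketch of one common shape for this: hammer the method from a thread pool, release all threads together, and fail on any error (the thread and iteration counts are arbitrary):

        import java.util.concurrent.*;
        import org.junit.Test;
        import static org.junit.Assert.assertTrue;

        public class MessageBoardConcurrencyTest {
            @Test
            public void postMessageIsSafeUnderConcurrentCalls() throws Exception {
                final MessageBoard board = new MessageBoard();
                final int threads = 10;
                final int callsPerThread = 1000;
                final ExecutorService pool = Executors.newFixedThreadPool(threads);
                final CountDownLatch start = new CountDownLatch(1);
                final CountDownLatch done = new CountDownLatch(threads);
                final ConcurrentLinkedQueue<Throwable> errors =
                    new ConcurrentLinkedQueue<Throwable>();

                for (int t = 0; t < threads; t++) {
                    pool.execute(new Runnable() {
                        public void run() {
                            try {
                                start.await(); // line all threads up before the stampede
                                for (int i = 0; i < callsPerThread; i++) {
                                    board.postMessage("hello " + i);
                                }
                            } catch (Throwable e) {
                                errors.add(e);
                            } finally {
                                done.countDown();
                            }
                        }
                    });
                }
                start.countDown();
                assertTrue("timed out", done.await(30, TimeUnit.SECONDS));
                pool.shutdown();
                assertTrue("failures: " + errors, errors.isEmpty());
            }
        }

    A test like this can't prove thread safety, but it reliably smokes out the crashes and lost updates that a synchronized implementation should prevent.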

  • Mocking ImportError in Python

    - by Attila Oláh
    I've been trying this for almost two hours now, without any luck. I have a module that looks like this:

        try:
            from zope.component import queryUtility  # and things like this
        except ImportError:
            # do some fallback operations  <-- how to test this?

    Later in the code:

        try:
            queryUtility(foo)
        except NameError:
            # do some fallback actions  <-- this one is easy: mock
            # zope.component.queryUtility to raise a NameError

    Any ideas?
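
    One approach I'd try: patch the import machinery so the zope import raises, then re-import the module under test so its try/except runs again. A sketch for modern Python; mymodule is a placeholder for the module in question:

        import builtins
        import sys
        from unittest import mock

        real_import = builtins.__import__

        def fail_zope(name, *args, **kwargs):
            # pretend only the zope package is missing
            if name.startswith("zope"):
                raise ImportError(name)
            return real_import(name, *args, **kwargs)

        def test_fallback_when_zope_is_missing():
            with mock.patch("builtins.__import__", side_effect=fail_zope):
                sys.modules.pop("mymodule", None)  # force the try/except to re-run
                import mymodule
                # ... assert the fallback behaviour here ...

    (On the Python 2 of the question's era, the same trick patches __builtin__.__import__.)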

  • How to see the code generated by the compiler

    - by atch
    Guys, in one of the exercises (ch. 5, e. 8) from TC++PL, Bjarne asks the following: "Run some tests to see if your compiler really generates equivalent code for iteration using pointers and iteration using indexing. If different degrees of optimization can be requested, see if and how that affects the quality of the generated code." Any idea how to tackle this, and with what? Thanks in advance.
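
    With g++ (or clang++) the usual approach is to ask for assembly output and diff it; a sketch, with arbitrary file and function names:

        // loops.cpp - compare the code generated for the two idioms
        void by_index(int* a, int n) {
            for (int i = 0; i < n; ++i)
                a[i] *= 2;
        }

        void by_pointer(int* a, int n) {
            for (int* p = a; p != a + n; ++p)
                *p *= 2;
        }

    Then compare g++ -S -O0 loops.cpp -o loops_O0.s against g++ -S -O2 loops.cpp -o loops_O2.s; at higher optimization levels the two functions often compile to identical bodies.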

  • How do I use test Perl modules from test Perl scripts?

    - by DVK
    If my Perl code has a production code location and a "test" code location (e.g. production Perl scripts are in /usr/code/scripts and test Perl scripts are in /usr/code/test/scripts; production Perl libraries are in /usr/code/lib/perl and test versions of those libraries are in /usr/code/test/lib/perl), is there an easy way for me to achieve such a setup? The exact requirements are:

    - The code must be THE SAME in the production and test locations. To clarify: to promote any code (library or script) from test to production, the ONLY thing which needs to happen is literally issuing a cp command from the test to the prod location; both the file name AND the file contents must remain identical.
    - Test versions of scripts must call other test scripts and test libraries (if they exist) or production libraries (if test libraries do not exist).
    - The code paths must be the same between test and production, with the exception of the base directory (/usr/code/ vs /usr/code/test/).

    I will present how we solved the problem as an answer to this question, but I'd like to know if there's a better way.
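
    One sketch that seems to satisfy all three constraints: keep the files untouched and express the test/prod difference entirely in the environment, so the search path prefers test copies and falls back to production (paths as in the question; this assumes scripts invoke each other by name via PATH):

        # set only in the test environment; production sets neither
        export PERL5LIB=/usr/code/test/lib/perl:/usr/code/lib/perl   # test libs first, prod fallback
        export PATH=/usr/code/test/scripts:/usr/code/scripts:$PATH   # same idea for scripts

    Perl searches PERL5LIB directories in order before the default @INC, which gives requirement 2 for free, and since nothing in the files encodes a location, a bare cp remains the whole promotion step.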

  • How To Run Integration Tests

    - by Vladimir
    In our project we have plenty of unit tests; they help keep the project well tested. Besides them, we have a set of tests which are written like unit tests but depend on some kind of external resource; we call them external tests. They may, for example, access a web service. While the unit tests are easy to run, the integration tests sometimes fail, for example due to timeout errors, and they can take too long to run. Currently we keep the integration/external unit tests around and run them only when developing the corresponding functionality. For the plain unit tests we use TeamCity for continuous integration. How do you run your integration unit tests, and when do you run them?
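
    One common arrangement is to tag the external tests and give them their own (e.g. nightly) TeamCity configuration; a sketch using NUnit categories, with an arbitrary category name:

        [Test, Category("Integration")]
        public void Gateway_Responds_Within_Timeout()
        {
            // touches the real web service; excluded from the fast suite
        }

    The fast build then runs nunit-console with /exclude:Integration, while a scheduled build runs with /include:Integration, so the slow, flaky tests still run regularly without blocking every commit.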

  • What makes static initialization functions good, bad, or otherwise?

    - by Richard Levasseur
    Suppose you had code like this:

        _READERS = None
        _WRITERS = None

        def Init(num_readers, reader_params, num_writers, writer_params, ...args...):
            global _READERS, _WRITERS
            ...logic...
            _READERS = ReaderPool(num_readers, reader_params)
            _WRITERS = WriterPool(num_writers, writer_params)
            ...more logic...

        class Doer:
            def __init__(...args...):
                ...
            def Read(self, ...args...):
                c = _READERS.get()
                try:
                    ...work with conn...
                finally:
                    _READERS.put(c)
            def Writer(...):
                ...similar to Read()...

    To me, this is a bad pattern to follow. Some cons:

    - Doers can be created without their preconditions being satisfied.
    - The code isn't easily testable, because ConnPool can't be directly mocked out.
    - Init has to be called right the first time. If it's changed so it can be called multiple times, extra logic has to be added to check whether the variables are already defined, and lots of NULL values have to be passed around to skip re-initializing.
    - In the event of threads, the above becomes more complicated by adding locking.
    - Globals are being used to communicate state (which isn't strictly bad, but a code smell).

    On the other hand, some pros:

    - It's very convenient to call Init(5, "user/pass", 2, "user/pass").
    - It's simple and "clean".

    Personally, I think the cons outweigh the pros; that is, testability and assured preconditions outweigh simplicity and convenience.
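
    A sketch of the dependency-injected alternative that the cons point toward: the pools become explicit constructor arguments, so preconditions are enforced at construction time and tests can pass in fakes:

        class Doer:
            def __init__(self, readers, writers):
                # explicit dependencies: a Doer can't exist without its pools,
                # and a test can hand in in-memory fakes
                self._readers = readers
                self._writers = writers

            def read(self):
                c = self._readers.get()
                try:
                    ...  # work with the connection
                finally:
                    self._readers.put(c)

        # composition root, replacing the module-level Init():
        # doer = Doer(ReaderPool(5, "user/pass"), WriterPool(2, "user/pass"))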

  • UnitTest++ constructing fixtures multiple times?

    - by Peter
    I'm writing some unit tests in UnitTest++ and want to write a bunch of tests which share some common resources. I thought that this should work via their TEST_FIXTURE setup, but it seems to be constructing a new fixture for every test. Sample code:

        #include <UnitTest++.h>

        struct SomeFixture
        {
            SomeFixture()
            {
                // this line is hit twice
            }
        };

        TEST_FIXTURE(SomeFixture, FirstTest)
        {
        }

        TEST_FIXTURE(SomeFixture, SecondTest)
        {
        }

    I feel like I must be doing something wrong; I had thought that the whole point of having the fixture was so that the setup/teardown code only happens once. Am I wrong on this? Is there something else I have to do to make it work that way?
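
    Per-test construction is the usual xUnit-style contract: every test gets a fresh fixture so tests can't contaminate each other. If some resource genuinely must be built once, one sketch is a function-local static inside the fixture (ExpensiveResource is a placeholder):

        struct SharedFixture
        {
            // constructed on first use, then shared by every test that calls it
            static ExpensiveResource& resource()
            {
                static ExpensiveResource instance;
                return instance;
            }
        };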

  • DUnit: How to run tests?

    - by Ian Boyd
    How do I run TestCases from the IDE? I created a new project with a single, simple form:

        unit Unit1;

        interface

        uses
          Windows, Messages, SysUtils, Classes, Graphics, Controls, Forms,
          Dialogs, StdCtrls;

        type
          TForm1 = class(TForm)
          private
          public
          end;

        var
          Form1: TForm1;

        implementation

        {$R *.DFM}

        end.

    Now I'll add a test case to check that pushing Button1 does what it should:

        unit Unit1;

        interface

        uses
          Windows, Messages, SysUtils, Classes, Graphics, Controls, Forms,
          Dialogs, StdCtrls;

        type
          TForm1 = class(TForm)
            Button1: TButton;
            procedure Button1Click(Sender: TObject);
          private
          public
          end;

        var
          Form1: TForm1;

        implementation

        {$R *.DFM}

        uses
          TestFramework;

        type
          TForm1Tests = class(TTestCase)
          private
            f: TForm1;
          protected
            procedure SetUp; override;
            procedure TearDown; override;
          published
            procedure TestButton1Click;
          end;

        procedure TForm1.Button1Click(Sender: TObject);
        begin
          //todo
        end;

        { TForm1Tests }

        procedure TForm1Tests.SetUp;
        begin
          inherited;
          f := TForm1.Create(nil);
        end;

        procedure TForm1Tests.TearDown;
        begin
          f.Free;
          inherited;
        end;

        procedure TForm1Tests.TestButton1Click;
        begin
          f.Button1Click(nil);
          Self.CheckEqualsString('Hello, world!', f.Caption);
        end;

        end.

    Given what I've done (test code in the GUI project), how do I now trigger a run of the tests? If I push F9 then the form simply appears. Ideally there would be a button, or menu option, in the IDE saying Run DUnit Tests. Am I living in a dream-world? A fantasy land, living in a gumdrop house on lollipop lane?
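
    DUnit doesn't hook the IDE's Run command by itself; the usual sketch is to register the case and start a runner from the project, using units from the standard DUnit distribution:

        // at the bottom of the unit that declares TForm1Tests:
        initialization
          TestFramework.RegisterTest(TForm1Tests.Suite);
        end.

        // and in the project file (.dpr), with GUITestRunner in the uses clause:
        //   GUITestRunner.RunRegisteredTests;

    Running the project then brings up the green/red DUnit window instead of the bare form.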

  • MbUnit (Gallio) and Visual Studio .NET Tests Not Completing or Debugging

    - by Davy
    Hi, I'm using Gallio/MbUnit 3.1 with ReSharper and Visual Studio 2008. Everything is working well except this type of test:

        [Test]
        [Row("test@badEmail@_test.com")]
        [Row("test@badEmail@_test.")]
        public void IsValidEmail_Invalid_Emails_Should_Return_False(string invalidEmail)
        {
            Assert.IsFalse(AppHelper.IsValidEmail(invalidEmail),
                "Email validation failed for " + invalidEmail);
        }

    The test doesn't complete or go into debug mode, but only when I pass in a parameter, e.g. 'string invalidEmail'. If I remove that parameter it seems to work normally. It will run the test if I have:

        [Test]
        public void IsValidEmail_Invalid_Emails_Should_Return_False()
        {
            var invalidEmail = "test@badEmail@_test.com";
            Assert.IsFalse(AppHelper.IsValidEmail(invalidEmail),
                "Email validation failed for " + invalidEmail);
        }

    I appreciate that there may be better ways to achieve this test, but I'm trying to work my way through a book and this is how it explains things. Any help is appreciated. Davy

  • Using SimpleModal with jstestdriver?

    - by leeand00
    Hello, I'm using SimpleModal to display a dialog, and I want to be able to tell whether it has been displayed or not using jstestdriver. In my browser, when I display the dialog, it sets the className to "simplemodal-data" on the id of the content I'm displaying, but when I do the same thing in jstestdriver it's as if the .modal() function doesn't do anything. Is there something I'm missing here? Also see this related thread.

  • UI-Automation cmdlet not finding the control

    - by sc_ray
    Hi, I am trying to test a WPF application using the UI Automation framework that Microsoft provides. A few PowerShell scripts were written that invoke the cmdlets created to manipulate the visual controls of the application. There is a dropdown within my application that has an entry 'DropDownEntry'. In my cmdlet, I am trying to do something as follows:

        AutomationElement getItem = DropDown.FindFirst(TreeScope.Descendants,
            new AndCondition(
                new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.ListItem),
                new PropertyCondition(AutomationElement.NameProperty, "DropDownEntry",
                    PropertyConditionFlags.IgnoreCase)));

    The above snippet returns null upon execution, which essentially means that the logic was unable to find my dropdown entry. Can somebody tell me why this might be happening? I checked the name of my control and the values; everything seems to be in order. Any help would be much appreciated. Thanks
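
    One thing worth ruling out: a collapsed combo box often doesn't expose its ListItem children in the automation tree until it has been expanded. A sketch, assuming the DropDown element supports ExpandCollapsePattern:

        // open the list so its items materialize in the tree
        var expand = (ExpandCollapsePattern)DropDown.GetCurrentPattern(ExpandCollapsePattern.Pattern);
        expand.Expand();

        AutomationElement getItem = DropDown.FindFirst(TreeScope.Descendants,
            new PropertyCondition(AutomationElement.NameProperty, "DropDownEntry",
                PropertyConditionFlags.IgnoreCase));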

  • When using a mocking framework and MSpec, where do you set your stubs?

    - by Kev Hunter
    I am relatively new to using MSpec, and as I write more and more tests it becomes obvious that, to reduce duplication, you often have to use a base class for your setup, as per Rob Conery's article. I am happy with using the AssertWasCalled method to verify my expectations, but where do you set up a stub's return value? I find it useful to set the context in the base class, injecting my dependencies, but that (I think) means I need to set my stubs up in the Because delegate, which just feels wrong. Is there a better approach I am missing?
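
    FWIW, MSpec runs a base class's Establish before the subclass's, so stubs can live in a context-specific Establish rather than in Because. A sketch in Rhino Mocks syntax (to match AssertWasCalled); the service and repository names are hypothetical:

        public abstract class with_order_service
        {
            protected static IOrderRepository repository;
            protected static OrderService subject;

            Establish base_context = () =>
            {
                repository = MockRepository.GenerateStub<IOrderRepository>();
                subject = new OrderService(repository);
            };
        }

        public class when_fetching_an_order : with_order_service
        {
            static Order result;

            // stub return values here, not in Because
            Establish context = () =>
                repository.Stub(r => r.Get(42)).Return(new Order());

            Because of = () => result = subject.Fetch(42);

            It returns_the_order = () => result.ShouldNotBeNull();
        }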

  • How can I mock this asynchronous method?

    - by Charlie
    I have a class that roughly looks like this:

        public class ViewModel
        {
            public ViewModel(IWebService service)
            {
                this.WebService = service;
            }

            private IWebService WebService { get; set; }
            private IEnumerable<SomeData> MyData { get; set; }

            private void GetReferenceData()
            {
                this.WebService.BeginGetStaticReferenceData(GetReferenceDataOnComplete, null);
            }

            private void GetReferenceDataOnComplete(IAsyncResult result)
            {
                this.MyData = this.WebService.EndGetStaticReferenceData(result);
            }

            . . .
        }

    I want to mock my IWebService interface so that when BeginGetStaticReferenceData is called, it is able to call the callback method. I'm using Moq and I can't work out how to do this. My unit test setup code looks something like:

        // Arrange
        var service = new Mock<IWebService>();
        service.Setup(x => x.BeginGetStaticReferenceData(/* .......don't know..... */));
        service.Setup(x => x.EndGetStaticReferenceData(It.IsAny<IAsyncResult>()))
            .Returns(new List<SomeData> { new SomeData { Name = "blah" } });
        var viewModel = new ViewModel(service.Object);
        . .
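
    Assuming the Begin method follows the usual APM shape (an AsyncCallback plus a state object), Moq's Callback can invoke the callback immediately, making the whole round-trip synchronous in the test. A sketch:

        // assumes: IAsyncResult BeginGetStaticReferenceData(AsyncCallback callback, object state)
        service.Setup(x => x.BeginGetStaticReferenceData(
                It.IsAny<AsyncCallback>(), It.IsAny<object>()))
            .Callback<AsyncCallback, object>((callback, state) =>
                callback(new Mock<IAsyncResult>().Object))  // complete immediately
            .Returns((IAsyncResult)null);                   // the view model ignores this

        service.Setup(x => x.EndGetStaticReferenceData(It.IsAny<IAsyncResult>()))
            .Returns(new List<SomeData> { new SomeData { Name = "blah" } });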

  • How do you test your blackberry application on the device?

    - by Kai
    This may sound very noobish, but I can't seem to get my app onto my BlackBerry. I was trying to follow the guide in the Beginning BlackBerry Development book, but maybe I just missed the point somewhere. For remote download, is it really as simple as dropping the COD and JAD files in the same folder on your server and then navigating to the URL with your device's browser? The book says it should prompt a download screen, but all I get is a page full of cryptic characters. My app is a simple slideshow; it uses no signed APIs and is not MDS-enabled. Did I forget something? Any help would be appreciated. Thanks
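
    A page full of cryptic characters is the classic symptom of the server sending the files with the wrong MIME type, so the browser renders the JAD/COD as text. A sketch of the usual fix for Apache (.htaccess or server config):

        AddType text/vnd.sun.j2me.app-descriptor  jad
        AddType application/vnd.rim.cod           cod

    With those types registered, browsing to the .jad should produce the download prompt the book describes.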

  • ExpectedExceptionAttribute is not working in MSTest

    - by Micah
    This is weird, but all of a sudden the ExpectedExceptionAttribute quit working for me the other day. Not sure what's gone wrong. I'm running VS 2010 and VS 2005 side-by-side; it's not working in VS 2010. This test should pass, however it is failing:

        [TestMethod]
        [ExpectedException(typeof(ArgumentNullException))]
        public void Test_Exception()
        {
            throw new ArgumentNullException("test");
        }

    Any ideas? This really sucks.
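
    One thing I'd rule out first: several frameworks define an ExpectedException attribute (NUnit, MSTest, others), and if a using directive resolves the wrong one, the MSTest runner silently ignores it. A sketch of fully qualifying the MSTest attribute to check:

        using MSTest = Microsoft.VisualStudio.TestTools.UnitTesting;

        [MSTest.TestMethod]
        [MSTest.ExpectedException(typeof(ArgumentNullException))]
        public void Test_Exception()
        {
            throw new ArgumentNullException("test");
        }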

  • Any tool to make git build every commit to a branch in a separate repository?

    - by Wayne
    A git tool that meets the specs below is needed. Does one already exist? If not, I will create a script and make it available on GitHub for others to use or contribute to. Is there a completely different and better way to solve the need to build/test every commit to a branch in a git repository, not just the latest but each one back to a certain starting point?

    Background: Our development environment uses a separate continuous integration server, which is wonderful. However, it is still necessary to do full builds locally on each developer's PC to make sure a commit won't "break the build" when pushed to the CI server. Unfortunately, with automated unit tests, those builds force the developer to wait 10 or 15 minutes every time. To solve this, we have set up a "mirror" git repository on each developer's PC: we develop in the main repository, and any time a local full build is needed we run a couple of commands in the mirror repository to fetch, check out the commit we want to build, and build. It works beautifully, so we can continue working in the main repository while the build runs in parallel. There's only one main concern now: we want to make sure every single commit builds and tests fine, but we often get busy and neglect to build several fresh commits. Then, if the build fails, you have to bisect or manually build each interim commit to figure out which one broke.

    Requirements for this tool (a rough sketch of the core loop follows the list):

    - The tool will look at another repo (origin by default), fetch, and compare all commits that are in branches against two lists of commits: one holding successfully built commits and the other holding commits that failed. It identifies any commits not yet in either list and builds them in a loop, in the order they were committed, stopping at the first one that fails. After each attempt, it adds the commit to either the success or the failure list.
    - The tool will ignore any "legacy" commits which are prior to the oldest commit in the success list. This logic makes the starting point in the next requirement possible.
    - Starting point: the tool can build a specific commit so that, if successful, it is added to the success list. If it is the earliest commit in the success list, it becomes the "starting point", and none of the commits prior to it are examined for builds.
    - Only linear tree support? Much like bisect, this tool works best on a commit tree which is, at least from its starting point, linear without any merges; that is, a tree built and updated entirely via rebase and fast-forward commits. If it fails on one commit in a branch, it stops without building the ones that followed, and instead moves on to another branch, if any.
    - The tool must do these steps once by default, but allow a parameter to loop, with an option setting how many seconds to wait between iterations. Other tools like Hudson or CruiseControl could layer fancier scheduling on top.
    - The tool must have good defaults but allow optional control. Which repo? origin by default. Which branches? All of them by default. What build command? By default an executable file provided by the user, named "buildtest", "buildtest.sh", "buildtest.cmd", or "buildtest.exe", in the root folder of the repository. Loop delay? Run once by default, with an option to loop after a given number of seconds.
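
    For a sense of scale, the core of the requested tool is close to a loop over rev-list; a rough shell sketch of the "build everything newer than the last success" idea, with list bookkeeping and multi-branch handling omitted:

        # build every commit on origin/master after $LAST_GOOD, oldest first
        git fetch origin
        for c in $(git rev-list --reverse "${LAST_GOOD}..origin/master"); do
            git checkout --quiet "$c" || exit 1
            ./buildtest || { echo "first failure: $c"; exit 1; }
            echo "$c" >> .build-succeeded
        done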
