Search Results

Search found 26297 results on 1052 pages for 'unit test'.


  • Does it make sense to write tests for legacy code when there is no time for a complete refactoring?

    - by is4
    I usually try to follow the advice of the book Working Effectively with Legacy Code. I break dependencies, move parts of the code into @VisibleForTesting public static methods and into new classes to make the code (or at least some part of it) testable. And I write tests to make sure that I don't break anything when I'm modifying or adding new functions. A colleague says that I shouldn't do this. His reasoning:

    - The original code might not work properly in the first place, and writing tests for it makes future fixes and modifications harder, since devs have to understand and modify the tests too.
    - If it's GUI code with some logic (~12 lines, 2-3 if/else blocks, for example), a test isn't worth the trouble since the code is too trivial to begin with.
    - Similar bad patterns could exist in other parts of the codebase too (which I haven't seen yet; I'm rather new), and it will be easier to clean them all up in one big refactoring. Extracting logic now could undermine that future possibility.

    Should I avoid extracting testable parts and writing tests if we don't have time for a complete refactoring? Is there any disadvantage to this that I should consider? (A sketch of the kind of extraction in question follows below.)
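
    To make the discussion concrete, here is a minimal sketch of that kind of extraction, in C# with NUnit; the class, method, and discount rule are hypothetical stand-ins for the "~12 lines of GUI logic" case:

        using NUnit.Framework;

        // Logic pulled out of a button-click handler so a test can reach it
        // without instantiating any GUI class.
        public static class DiscountRules
        {
            public static decimal ApplyDiscount(decimal total, bool isPreferredCustomer)
            {
                if (total <= 0)
                    return 0m;

                return isPreferredCustomer ? total * 0.9m : total;
            }
        }

        [TestFixture]
        public class DiscountRulesTests
        {
            [Test]
            public void Preferred_customers_get_ten_percent_off()
            {
                Assert.AreEqual(90m, DiscountRules.ApplyDiscount(100m, isPreferredCustomer: true));
            }
        }

    The extraction itself is mechanical; the trade-off the colleague objects to is that the test now pins down behaviour that a later big-bang refactoring would have to preserve or rewrite.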


  • How do functional languages handle a mocking situation when using Interface based design?

    - by Programmin Tool
    Typically in C# I use dependency injection to help with mocking:

        public class UserService
        {
            public UserService(IUserQuery userQuery,
                               IUserCommunicator userCommunicator,
                               IUserValidator userValidator)
            {
                UserQuery = userQuery;
                UserValidator = userValidator;
                UserCommunicator = userCommunicator;
            }

            // ...

            public UserResponseModel UpdateAUserName(int userId, string userName)
            {
                var result = UserValidator.ValidateUserName(userName);

                if (result.Success)
                {
                    var user = UserQuery.GetUserById(userId);

                    if (user == null)
                    {
                        throw new ArgumentException();
                    }

                    user.UserName = userName;
                    UserCommunicator.UpdateUser(user);
                }

                // ...
            }

            // ...
        }

        public class WhenUpdatingAUserName
        {
            public void AndTheUserDoesNotExistThrowAnException()
            {
                var userQuery = Substitute.For<IUserQuery>();
                userQuery.GetUserById(Arg.Any<int>()).Returns((User)null);

                // (the validator substitute would also need to be stubbed to
                // return a successful result for the flow to reach GetUserById)
                var userService = new UserService(userQuery,
                                                  Substitute.For<IUserCommunicator>(),
                                                  Substitute.For<IUserValidator>());

                AssertionExtensions.ShouldThrow<ArgumentException>(
                    () => userService.UpdateAUserName(-121, "newName"));
            }
        }

    Now, in something like F#: if I don't go down the hybrid path, how would I test workflow situations like the one above, which would normally touch the persistence layer, without using interfaces and mocks? I realize that every step above would be tested on its own and kept as atomic as possible. The problem is that at some point they all have to be called in line, and I'll want to make sure everything is called correctly. (One function-based alternative is sketched below.)
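
    One idiomatic functional answer, sketched here in C# so it lines up with the snippet above (the types and names are hypothetical), is to inject functions instead of interfaces, so the "mock" in a test is just a lambda:

        using System;

        public class User
        {
            public string UserName { get; set; }
        }

        public class UserNameUpdater
        {
            private readonly Func<int, User> getUserById;
            private readonly Action<User> updateUser;

            // Dependencies are plain functions; no interface and no mocking
            // library are needed to substitute them in a test.
            public UserNameUpdater(Func<int, User> getUserById, Action<User> updateUser)
            {
                this.getUserById = getUserById;
                this.updateUser = updateUser;
            }

            public void UpdateUserName(int userId, string userName)
            {
                var user = getUserById(userId);

                if (user == null)
                    throw new ArgumentException("no such user", "userId");

                user.UserName = userName;
                updateUser(user);
            }
        }

        // In a test, the persistence layer is replaced by lambdas:
        //   var updater = new UserNameUpdater(id => null, u => { });
        //   Assert.Throws<ArgumentException>(() => updater.UpdateUserName(-121, "x"));

    The F# equivalent passes the functions as parameters (or partially applies them), which is why the mocking question largely dissolves into choosing which functions to hand in.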


  • Writing tests for Rails plugins

    - by Adam
    I'm working on a plugin for Rails that would add limited in-memory caching to ActiveRecord's finders. The functionality itself is mature enough, but I can't for the life of me get unit tests to work with the plugin. Under vendor/plugins/my_plugin/test/my_plugin_test.rb I now have a standard subclass of ActiveSupport::TestCase with a couple of basic tests. I try running 'rake test' from the plugin directory, and I have confirmed that this task loads the Ruby file with the test case, but it doesn't actually run any of the tests. I followed the Rails plugin guide (http://guides.rubyonrails.org/plugins.html) where applicable, but it seems to be horribly outdated (it suggests things that Rails now does automatically, etc.). The only output I get is this:

        Kakadu:ingenious_record adam$ rake test
        (in /Users/adam/Sites/1_PRK/vendor/plugins/ingenious_record)
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -Ilib:lib:test "/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake/rake_test_loader.rb" "test/ingenious_record_test.rb"

    The simplest test case looks like this:

        require 'test_helper'
        require 'active_record'

        class IngeniousRecordTest < ActiveSupport::TestCase
          test "example" do
            assert false
          end
        end

    This should definitely produce at least some output, and the only test in that file should produce a failed assertion. Any ideas what I could do to get Rails to run my tests?


  • Zend Framework: How to start PHPUnit testing Forms?

    - by Andrew
    I am having trouble getting my filters/validators to work correctly on my form, so I want to create a unit test to verify that the data I am submitting to my form is being filtered and validated correctly. I started by auto-generating a PHPUnit test in Zend Studio, which gives me this:

        <?php
        require_once 'PHPUnit/Framework/TestCase.php';

        /**
         * Form_Event test case.
         */
        class Form_EventTest extends PHPUnit_Framework_TestCase
        {
            /**
             * @var Form_Event
             */
            private $Form_Event;

            /**
             * Prepares the environment before running a test.
             */
            protected function setUp ()
            {
                parent::setUp();
                // TODO Auto-generated Form_EventTest::setUp()
                $this->Form_Event = new Form_Event(/* parameters */);
            }

            /**
             * Cleans up the environment after running a test.
             */
            protected function tearDown ()
            {
                // TODO Auto-generated Form_EventTest::tearDown()
                $this->Form_Event = null;
                parent::tearDown();
            }

            /**
             * Constructs the test case.
             */
            public function __construct ()
            {
                // TODO Auto-generated constructor
            }

            /**
             * Tests Form_Event->init()
             */
            public function testInit ()
            {
                // TODO Auto-generated Form_EventTest->testInit()
                $this->markTestIncomplete("init test not implemented");
                $this->Form_Event->init(/* parameters */);
            }

            /**
             * Tests Form_Event->getFormattedMessages()
             */
            public function testGetFormattedMessages ()
            {
                // TODO Auto-generated Form_EventTest->testGetFormattedMessages()
                $this->markTestIncomplete("getFormattedMessages test not implemented");
                $this->Form_Event->getFormattedMessages(/* parameters */);
            }
        }

    So then I open up Terminal, navigate to the directory, and try to run the test:

        $ cd my_app/tests/unit/application/forms
        $ phpunit EventTest.php

        Fatal error: Class 'Form_Event' not found in .../tests/unit/application/forms/EventTest.php on line 19

    So then I add a require_once at the top to include my Form class and try it again. Now it says it can't find another class. I include that one and try it again. Then it says it can't find another class, and another class, and so on; I have all of these dependencies on all these other Zend_Form classes. What should I do? How should I go about testing my form to make sure my validators and filters are being attached correctly, and that it's doing what I expect it to do? Or am I thinking about this the wrong way?


  • Is there a way to unit test an async method?

    - by Jiho Han
    I am using xUnit and NMock on the .NET platform. I am testing a presentation model where one method is asynchronous: it creates an async task and starts it, so the method returns immediately and the state I need to check isn't ready yet. I could set a flag on completion without modifying the SUT, but then the test would have to keep checking the flag in a while loop, for example, perhaps with a timeout. What are my options? (One option is sketched below.)
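
    One common option: block the test on a synchronization primitive that the completion callback signals, with a timeout so a hung task fails the test instead of hanging the run. A minimal sketch; the PresentationModel here is a hypothetical stand-in for the real SUT and its completion hook:

        using System;
        using System.Threading;
        using Xunit;

        // Hypothetical stand-in for the presentation model under test: it runs
        // work on a background thread and raises an event when it finishes.
        public class PresentationModel
        {
            public event EventHandler OperationCompleted;
            public string State { get; private set; }

            public void BeginOperation()
            {
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    State = "Ready";
                    var handler = OperationCompleted;
                    if (handler != null)
                        handler(this, EventArgs.Empty);
                });
            }
        }

        public class PresentationModelTests
        {
            [Fact]
            public void Async_operation_eventually_updates_state()
            {
                var done = new ManualResetEvent(false);
                var model = new PresentationModel();
                model.OperationCompleted += (s, e) => done.Set();

                model.BeginOperation();

                // Block until the completion callback fires, failing after five
                // seconds instead of hanging the test run.
                Assert.True(done.WaitOne(TimeSpan.FromSeconds(5)), "operation timed out");
                Assert.Equal("Ready", model.State);
            }
        }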


  • SQLite/Fluent NHibernate integration test harness initialization not repeatable after large data set

    - by Mark Rogers
    In one of my main data integration test harnesses I create and use Fluent NHibernate's SingleConnectionSessionSourceForSQLiteInMemoryTesting to get a fresh session for each test. After each test, I close the connection, session, and session factory, and throw out the nested StructureMap container they came from. This works for almost any simple data integration test I can think of, including ones that use Fluent NHibernate's PersistenceSpecification object. When I test the application's lengthy database bootstrapping process, which creates and saves thousands of domain objects, I start seeing issues. It's not that the setup and teardown fail; in fact, the test successfully bootstraps the in-memory database just as the application would bootstrap the real database in the production environment. The problem occurs when the database bootstrapping runs a second time on a new in-memory database, with a new session and session factory. The error is:

        NHibernate.StaleStateException : Unexpected row count: 0; expected: 1

    The row count is indeed unexpected: the row that the application under test is looking for should be in the session. It's not that any data from the last integration test is sticking around; for some reason the session just stops working mid-database-bootstrap. I've looked everywhere for a place I might be holding on to an old session and I can't find one. I've searched through the code for static singleton objects, but there are none anywhere near the code in question. I have a couple of StructureMap InstanceScope singletons, but they are thrown out with each nested container that is discarded after every test teardown. I've tried every possible variation on disposing and closing every object involved in each test teardown, and it still fails on this lengthy database bootstrap, while non-bootstrap database tests appear to work fine. I'm starting to run out of options and may have to surrender lengthy database integration tests in favor of WatiN-based acceptance tests. Can anyone give me any clue about why some of my SingleConnectionSessionSourceForSQLiteInMemoryTesting sessions aren't repeatable, or any advice at all about how to make an NHibernate SQLite database integration test harness repeatable? (A stripped-down version of the per-test lifecycle appears below.)
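
    For reference, a stripped-down skeleton of the per-test lifecycle being described, using NUnit and plain NHibernate types; BuildInMemoryConfiguration() is a hypothetical stand-in for whatever assembles the SQLite in-memory configuration (or the SingleConnectionSessionSourceForSQLiteInMemoryTesting equivalent) in the real harness:

        using System;
        using NHibernate;
        using NUnit.Framework;

        [TestFixture]
        public class BootstrapIntegrationTests
        {
            private ISessionFactory sessionFactory;
            private ISession session;

            [SetUp]
            public void SetUp()
            {
                // Everything is rebuilt per test: nothing from a previous test
                // may be reused, or the in-memory databases start to interact.
                var configuration = BuildInMemoryConfiguration();
                sessionFactory = configuration.BuildSessionFactory();
                session = sessionFactory.OpenSession();
            }

            [TearDown]
            public void TearDown()
            {
                // Dispose in reverse order of creation; with in-memory SQLite,
                // closing the connection also destroys the database.
                session.Dispose();
                sessionFactory.Dispose();
            }

            private static NHibernate.Cfg.Configuration BuildInMemoryConfiguration()
            {
                // Hypothetical: fill in the SQLite in-memory connection settings
                // and mappings used by the real harness.
                throw new NotImplementedException("wire up the in-memory configuration here");
            }
        }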


  • Classes with the same name - is it restricted only within the same translation unit?

    - by LeopardSkinPillBoxHat
    Let's just say I had the following code:

        // foo.h
        class Foo {
            // ...
        };

        // foo.cpp
        #include "foo.h"
        // Functions for class Foo defined here...

    Let's say that Foo is built into a static library foo.lib. Now let's say I have the following:

        // foo2.h
        class Foo {
            // ...
        };

        // foo2.cpp
        #include "foo2.h"
        // Functions for class Foo defined here...

    This is built into a separate static library foo2.lib. Now, if I link foo.lib and foo2.lib into an executable program foo.exe, should anything complain that class Foo has been defined twice? In my experience, neither the compiler nor the linker complains. I wouldn't expect the compiler to complain, because the two classes are defined in separate translation units. But why doesn't the linker complain? How does the linker differentiate between the two versions of the Foo class? Does it work by decorating the symbols?


  • Should a developer write their own test plan for Q/A?

    - by Mat Nadrofsky
    Who writes the test plans in your shop? Who should write them? I realize developers (like me) regularly do their own unit testing whilst developing, and in some cases even their own QA, depending on the size of the shop and the nature of the business. But in a big software shop with a full development team and QA team, who should be writing those official "my changes are done now" test plans? Soon, we'll be bringing another QA member onto our development team. My question is: going forward, is it good practice to have your developers write their own test plans? Something tells me that part of that might make sense and another part might not...

    What I like about it: the developer is very familiar with the changes made, so it's easy for them to produce the document.

    What I don't like about it: the developer knows how it's supposed to work and might, without realizing it, write a test plan that caters to that.

    So, with the above in mind, what is the general stance on this topic? I'm of course already reading books like The Mythical Man-Month, Code Complete, and a few others which really do help, but I'd like to get some input from the group as well.


  • Code generation tool, to create C# adapter classes for unit testing?

    - by RyBolt
    I know I wouldn't need this with Typemock; however, with something like Moq, I need to use the adapter pattern to enable the creation of mocks via interfaces for code I don't control. For example, TcpClient is a .NET class, so I use the adapter pattern to enable mocking of this object, because I need an interface for that class. I then produce an interface ITcpClient, which can be implemented by a TcpClientAdapter class, a plain-vanilla adapter pattern implementation like the sketch below. I am looking for a tool that does this automatically (creation of the interface and the adapter). I would think there is one out there somewhere; or is everyone just hand-coding these?
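
    For concreteness, the hand-coded artifact such a tool would generate looks roughly like this, trimmed to three members; a generated version would mirror every public member of the wrapped class:

        using System.Net.Sockets;

        public interface ITcpClient
        {
            void Connect(string host, int port);
            NetworkStream GetStream();
            void Close();
        }

        // Plain-vanilla adapter: delegates every member to the wrapped
        // framework class, so tests can substitute ITcpClient with a mock.
        public class TcpClientAdapter : ITcpClient
        {
            private readonly TcpClient inner = new TcpClient();

            public void Connect(string host, int port) { inner.Connect(host, port); }
            public NetworkStream GetStream() { return inner.GetStream(); }
            public void Close() { inner.Close(); }
        }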


  • Your thoughts on Best Practices for Scientific Computing?

    - by John Smith
    A recent paper by Wilson et al. (2014) set out 24 best practices for scientific programming. It's worth a look. I would like to hear opinions about these points from programmers experienced in scientific data analysis. Do you think this advice is helpful and practical, or is it good only in an ideal world?

    Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, Haddock SHD, Huff KD, Mitchell IM, Plumbley MD, Waugh B, White EP, Wilson P (2014) Best Practices for Scientific Computing. PLoS Biol 12:e1001745. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745

    Box 1. Summary of Best Practices

    1. Write programs for people, not computers. (a) A program should not require its readers to hold more than a handful of facts in memory at once. (b) Make names consistent, distinctive, and meaningful. (c) Make code style and formatting consistent.
    2. Let the computer do the work. (a) Make the computer repeat tasks. (b) Save recent commands in a file for re-use. (c) Use a build tool to automate workflows.
    3. Make incremental changes. (a) Work in small steps with frequent feedback and course correction. (b) Use a version control system. (c) Put everything that has been created manually in version control.
    4. Don't repeat yourself (or others). (a) Every piece of data must have a single authoritative representation in the system. (b) Modularize code rather than copying and pasting. (c) Re-use code instead of rewriting it.
    5. Plan for mistakes. (a) Add assertions to programs to check their operation. (b) Use an off-the-shelf unit testing library. (c) Turn bugs into test cases. (d) Use a symbolic debugger.
    6. Optimize software only after it works correctly. (a) Use a profiler to identify bottlenecks. (b) Write code in the highest-level language possible.
    7. Document design and purpose, not mechanics. (a) Document interfaces and reasons, not implementations. (b) Refactor code in preference to explaining how it works. (c) Embed the documentation for a piece of software in that software.
    8. Collaborate. (a) Use pre-merge code reviews. (b) Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems. (c) Use an issue tracking tool.

    I'm relatively new to serious programming for scientific data analysis. When I tried to write code for pilot analyses of some of my data last year, I encountered a tremendous number of bugs in both my code and my data. Bugs and errors had been around me all the time, but this time it was somewhat overwhelming. I managed to crunch the numbers at last, but I thought I couldn't put up with this mess any longer; some action had to be taken.

    Without a sophisticated guide like the article above, I started to adopt a "defensive style" of programming. A book titled The Art of Readable Code helped me a lot. I deployed meticulous input validations or assertions for every function, renamed a lot of variables and functions for better readability, and extracted many subroutines as reusable functions. Recently, I introduced Git and SourceTree for version control. At the moment, because my co-workers are much more reluctant about these issues, the collaboration practices (8a, 8b, 8c) have not been introduced. Actually, as the authors admit, because all of these practices take some amount of time and effort to introduce, it may be generally hard to persuade reluctant collaborators to comply with them.

    I think I'm asking your opinions because I still suffer from many bugs despite all my effort on many of these practices. Bug fixing may be, or should be, faster than before, but I couldn't really measure the improvement. Moreover, much of my time has been invested in defence, meaning that I haven't actually done much data analysis (offence) these days. Where is the point I should stop at in terms of productivity?

    I've already deployed: 1a, 1b, 1c, 2a, 3a, 3b, 3c, 4b, 4c, 5a, 5d, 6a, 6b, 7a, 7b. I'm about to have a go at: 5b, 5c. Not yet: 2b, 2c, 4a, 7c, 8a, 8b, 8c. (I could not really see the advantage of using GNU make (2c) for my purpose. Could anyone tell me how it helps my work with MATLAB?) A small illustration of practice 5(c) follows below.
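
    Since practice 5(c), "turn bugs into test cases", is the one most directly aimed at the bug problem described here, a tiny illustration in C# with NUnit, for consistency with the rest of this page; the Statistics routine and the chosen fix are hypothetical:

        using NUnit.Framework;

        public static class Statistics
        {
            // Hypothetical routine: the original version crashed on empty input.
            public static double Mean(double[] values)
            {
                if (values.Length == 0)
                    return 0.0;   // the fix under test

                double sum = 0.0;
                foreach (double v in values)
                    sum += v;
                return sum / values.Length;
            }
        }

        [TestFixture]
        public class StatisticsRegressionTests
        {
            // The bug report "Mean crashed on an empty series" becomes a
            // permanent test case, so the fix can never silently regress.
            [Test]
            public void Mean_of_empty_series_is_zero_rather_than_crashing()
            {
                Assert.AreEqual(0.0, Statistics.Mean(new double[0]));
            }
        }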


  • Unit Tests in Visual Studio 2010

    - by Ben
    Hi, I am trying to create a unit test for a WinForm in a Visual Studio 2010 project. I add a new "Coded UI Test" to my project, open up the code file, then right-click and select "Generate Code for Coded UI Test" > "Use Coded UI Test Builder". I then start my application and select "Record" on the UIMap control. I run my tests (in this case, I simply select a textbox, type in a random value, then click a button). I then select "Generate Code" from the UIMap control, which generates the code the test will use. When running this test, I get the error:

        Test method HelloWorldTest.CodedUITest1.CodedUITestMethod1 threw exception:
        Microsoft.VisualStudio.TestTools.UITest.Extension.UITestControlNotFoundException: The playback failed to find the control with the given search properties. Additional Details: TechnologyName: 'MSAA' ControlType: 'Window' Name: 'Form1' ClassName: 'WindowsForms10.Window' ---> System.Runtime.InteropServices.COMException: Error HRESULT E_FAIL has been returned from a call to a COM component.

    Does anyone know where I am going wrong? Thanks


  • TFS 2010 RC does not run Visual Studio 2008 MSTest unit tests

    - by Bernard Vander Beken
    Steps: Run the build including unit tests.

    Expected result: the unit tests are executed and succeed.

    Actual result: the unit tests are built by the build, but this is the result:

        1 test run(s) completed - 0% average pass rate (0% total pass rate)
        0/4 test(s) passed, 0 failed, 4 inconclusive
        Other Errors and Warnings: 1 error(s), 0 warning(s)
        TF270015: 'MSTest.exe' returned an unexpected exit code. Expected '0'; actual '1'.

    All the tests are enumerated (four), but the result for each test is "Not Executed".

    Context: Team Foundation Server 2010 release candidate; a build definition that runs projects using the Visual Studio 2008 project format and .NET 3.5 SP1. The unit tests run on a development machine, within Visual Studio. The unit tests project references C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll

    Typical test class:

        [TestClass]
        public class DemoTest
        {
            [TestMethod]
            public void DemoTestName()
            {
            }

            // etc
        }


  • Why is the unit test useful when the view is not unit testable in MVVM?

    - by BigTiger
    Why is a unit test useful when the view is not unit testable in MVVM? In MVVM, we have models, view-models, and views. The claimed advantage is that MVVM makes the models and view-models unit testable. But all three parts belong to the same application: if the views are not unit testable, why test the other two? Will unit testing two of the three, and leaving one untested, improve quality? Removing all the code-behind from the views sounds weird to me. What if the code-behind only handles pure UI operations? (A minimal view-model test is sketched below.)
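
    As a minimal illustration of what view-model testing buys, a sketch with hypothetical names; the pattern is the standard INotifyPropertyChanged one:

        using System.ComponentModel;
        using NUnit.Framework;

        public class LoginViewModel : INotifyPropertyChanged
        {
            private string userName;

            public event PropertyChangedEventHandler PropertyChanged;

            public string UserName
            {
                get { return userName; }
                set
                {
                    userName = value;
                    // Raising the event lets any bound view update itself; the
                    // test below never needs that view to exist.
                    if (PropertyChanged != null)
                        PropertyChanged(this, new PropertyChangedEventArgs("UserName"));
                }
            }

            public bool CanLogin
            {
                get { return !string.IsNullOrEmpty(userName); }
            }
        }

        [TestFixture]
        public class LoginViewModelTests
        {
            [Test]
            public void Login_is_disabled_until_a_user_name_is_entered()
            {
                var viewModel = new LoginViewModel();

                Assert.IsFalse(viewModel.CanLogin);
                viewModel.UserName = "alice";
                Assert.IsTrue(viewModel.CanLogin);
            }
        }

    The view that binds to CanLogin stays outside the unit tests, but it is also reduced to markup plus bindings, which is the part integration or UI tests are meant to cover.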


  • Anyone have an XSL to convert Boost.Test XML logs to a presentable format?

    - by Stuart Lange
    I have some C++ projects running through CruiseControl.NET. As part of the build process, we compile and run Boost.Test unit test suites, which I have configured to dump XML log files. While the format is similar to JUnit/NUnit, it's not quite the same (and lacks some information), so CruiseControl.NET is unable to pick the files up. I am wondering if anyone has created (or knows of) an existing XSL transform that converts Boost.Test results to the JUnit/NUnit format, or alternatively, directly to a presentable (HTML) format. Thanks!


  • Why does 'uses unit' disappear when I add a new unit?

    - by TridenT
    I have a unit test project for my application using the DUnit framework. This project has a unit surrounded by an $IFDEF so it can output test results to an XML file instead of the GUI or just the command line. The XML_OUTPUT define is enabled by switching the build configuration:

        program DelphiCodeToDoc_Tests;

        uses
          TestFramework,
          TextTestRunner,
          Sysutils,
          Forms,
          GUITestRunner,
        {$IFDEF XML_OUTPUT}
          XmlTestRunner2 in 'DUnit_addon\XmlTestRunner2.pas',
        {$ENDIF}
          DCTDSetupTests in 'IntegrationTests\DCTDSetupTests.pas',
          ...

    This works perfectly. The issue starts when I add a new unit to this project from the IDE (File > New > Unit). The test project is now:

        uses
          TestFramework,
          TextTestRunner,
          Sysutils,
          Forms,
          GUITestRunner,
          DCTDSetupTests in 'IntegrationTests\DCTDSetupTests.pas',
          ...
          MyNewUnit in 'IntegrationTests\MyNewUnit.pas';

    As you see, the {$IFDEF XML_OUTPUT} block has disappeared... Each time I add a unit, the Delphi IDE deletes this conditional. Do you know why, and how I can avoid it?


  • Is it possible to compile IronRuby code to a .NET assembly (EXE or DLL)

    - by Chris Ammerman
    My scenario consists of the following points:

    - I have a packaged software product I am developing in C#.
    - Since it is a packaged product, the public interfaces of the assemblies need to be tightly controlled: all assemblies are strong-named, and any classes that don't absolutely have to be public are internal.
    - I want to write unit tests for those internal classes, since they are the bulk of the code.
    - And finally... I want to try writing the unit tests in Ruby.

    Since the unit tests would be external to the assembly containing the code under test, each assembly under test would need an InternalsVisibleTo attribute specifying the name of the unit test assembly. Which of course means that the Ruby unit tests would have to compile down to a .NET assembly so they can be given access in this way. Can this be done? If so, how? All I can find on the web about "compiling IronRuby" is about building the actual IronRuby runtime from source.


  • tools for testing vim plugins

    - by intuited
    I'm looking for some tools for testing vim scripts: either vim scripts that do unit/functional testing, or classes for some other library (e.g. Python's unittest module) that make it convenient to run vim with parameters that cause it to do some tests on its environment, and to determine from the output whether or not a given test passed. I'm aware of a couple of vim scripts that do unit testing, but they're sort of vaguely documented and may or may not actually be useful:

    vim-unit:
    - purports "To provide vim scripts with a simple unit testing framework and tools"
    - first and only version (v0.1) was released in 2004
    - the documentation doesn't mention whether or not it works reliably, other than to state that it is "fare [sic] from finished"

    unit-test.vim:
    - also seems pretty experimental, and may not be particularly reliable
    - may have been abandoned or back-shelved: the last commit was in 2009-11 (about 6 months ago)
    - no tagged revisions have been created (i.e. no releases)

    So information from people who are using one of those two existing modules, and/or links to other, more clearly usable, options is very welcome.


  • When using Microsoft Test Manager 2010 with SfTS, how do QA engineers know what tests they have to run?

    - by MADCookie
    We are moving our projects to TFS 2010 using the SfTS v3 (Scrum for Team System) template. We need to understand how Microsoft Test Manager is supposed to be used in this Scrum process.

    Specific scenario and question: the QA manager uses Test Manager to create Acceptance Test work items (WIs). These new WIs are created and "assigned to" him. The manager doesn't run all the tests himself; instead, he wants to give that responsibility to his staff. How is a QA engineer supposed to know that he has tests to run, when everything says it is assigned to the manager?


  • Difference in techniques for setting a stubbed method's return value with Rhino Mocks

    - by CRice
    What is the main difference between the following two ways of giving a stubbed method a fake implementation? I was using the second way without trouble in one test, but in another test the behaviour cannot be achieved unless I go with the first way. Both are set up via:

        IMembershipService service = test.Stub<IMembershipService>();

    So (the first), record/replay style:

        using (test.Record())   // test is a MockRepository instance
        {
            service.GetUser("dummyName");
            LastCall.Return(new LoginUser());
        }

    versus (the second), AAA style:

        service.Stub(r => r.GetUser("dummyName")).Return(new LoginUser());

    Edit: The problem is that the second technique returns null in the test, when I expect it to return a new LoginUser. The first technique behaves as expected by returning a new LoginUser. All other test code used in both cases is identical. (A comparison sketch follows below.)
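
    One plausible explanation, offered as an assumption rather than a diagnosis: the AAA-style Stub() extension only takes effect on a mock that is in replay mode. A stub from the static factory is born in replay mode, while one from an instance MockRepository starts in record mode and returns defaults (null for reference types) until ReplayAll() is called. A sketch, with the entry's types stubbed in minimally:

        using Rhino.Mocks;

        public interface IMembershipService
        {
            LoginUser GetUser(string userName);
        }

        public class LoginUser
        {
        }

        public class StubStyleComparison
        {
            public void AaaStyle()
            {
                // Created by the static factory, the stub is already in replay
                // mode, so the Stub() extension takes effect immediately.
                var service = MockRepository.GenerateStub<IMembershipService>();
                service.Stub(s => s.GetUser("dummyName")).Return(new LoginUser());

                var user = service.GetUser("dummyName");   // the configured LoginUser

                // A stub from an instance repository (test.Stub<T>()) starts in
                // record mode; until test.ReplayAll() runs, its methods return
                // defaults, which for a reference type means null.
            }
        }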


  • How to run xunit in Visual Studio 2012?

    - by user1978421
    I am very new to unit testing. I have been following the procedure for creating a unit test in Visual Studio 2012 from http://channel9.msdn.com/Events/TechEd/Europe/2012/DEV214, but the test just won't start. It prompts me: "A project with an Output Type of Class Library cannot be started directly. In order to debug this project, add an executable project to this solution which references the library project. Set an executable project as the startup project." Even though I attached the unit test class code to a console program, the test does not start, and Test Explorer is empty. In the video, no runnable program was needed: the presenter only created a class library, and the test would run. What should I do?

    Note: there is no "Create Unit Tests" option on the right-click menu. (A minimal xUnit test is shown below for reference.)
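
    For reference, this is all an xUnit test needs; it lives in a plain class library that is never "started". What executes it is a test runner, so (and this is an assumption about the VS2012 setup) Test Explorer stays empty unless an xUnit runner, such as the xUnit.net runner extension for Visual Studio 2012, is installed:

        using Xunit;

        public class CalculatorTests
        {
            [Fact]
            public void Two_plus_two_is_four()
            {
                Assert.Equal(4, 2 + 2);
            }
        }

    Once a runner is present, tests are run from Test Explorer (Run All), not by pressing F5 on the library project, which is what produces the "Output Type of Class Library" error.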


  • How should I mock out my data connectivity

    - by BobTheBuilder
    I'm trying to unit test my data access layer, and in the process of mocking my data connectivity I'm coming unstuck at mocking the creation of the commands. I thought about using a queue of IDbParameters for the creation of the parameters, but the unit tests would then require that the parameters are configured in the right order. I'm using Moq, and having looked around for some documentation to walk me through this, I'm finding lots of recommendations not to do it, but to write a wrapper for the connection instead. My contention is that my DAL is supposed to be the wrapper for my database, and I don't feel I should be writing wrappers... and if I do, how do I unit test my wrapper's connectivity to the database? By writing another wrapper? It seems like it's turtles all the way down. So does anyone have any recommendations or tutorials regarding this particular area of unit testing/mocking? (A rough sketch of the direction I've been trying is below.)
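
    Here is one rough Moq-based sketch of the command-creation half of the problem; it captures added parameters into a list so assertions can find them by name, which sidesteps the ordering problem of a queue. Everything beyond the ADO.NET interfaces is a hypothetical name:

        using System.Collections.Generic;
        using System.Data;
        using Moq;

        public static class CommandCreationFake
        {
            // Fake the ADO.NET interfaces directly and record every parameter
            // added, so a test can assert on parameters by name instead of
            // depending on the order in which the DAL configured them.
            public static Mock<IDbConnection> CreateConnection(List<IDbDataParameter> captured)
            {
                var parameterCollection = new Mock<IDataParameterCollection>();
                parameterCollection
                    .Setup(pc => pc.Add(It.IsAny<object>()))
                    .Callback<object>(p => captured.Add((IDbDataParameter)p))
                    .Returns(0);

                var command = new Mock<IDbCommand>();
                command.Setup(c => c.CreateParameter())
                       .Returns(() => new Mock<IDbDataParameter>().SetupAllProperties().Object);
                command.SetupGet(c => c.Parameters).Returns(parameterCollection.Object);

                var connection = new Mock<IDbConnection>();
                connection.Setup(c => c.CreateCommand()).Returns(command.Object);
                return connection;
            }
        }

        // Usage in a test (UserDal is hypothetical):
        //   var captured = new List<IDbDataParameter>();
        //   var connection = CommandCreationFake.CreateConnection(captured);
        //   new UserDal(connection.Object).GetUser(42);
        //   // assert that captured contains a parameter named "@userId" with value 42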


  • Write your Tests in RSpec with IronRuby

    - by kazimanzurrashid
    [Note: This is not a continuation of my previous post; treat it as an experiment out in the wild.]

    Let's consider the following class, a fictitious fund transfer service:

        public class FundTransferService : IFundTransferService
        {
            private readonly ICurrencyConvertionService currencyConvertionService;

            public FundTransferService(ICurrencyConvertionService currencyConvertionService)
            {
                this.currencyConvertionService = currencyConvertionService;
            }

            public void Transfer(Account fromAccount, Account toAccount, decimal amount)
            {
                decimal convertionRate = currencyConvertionService.GetConvertionRate(fromAccount.Currency, toAccount.Currency);
                decimal convertedAmount = convertionRate * amount;

                fromAccount.Withdraw(amount);
                toAccount.Deposit(convertedAmount);
            }
        }

        public class Account
        {
            public Account(string currency, decimal balance)
            {
                Currency = currency;
                Balance = balance;
            }

            public string Currency { get; private set; }
            public decimal Balance { get; private set; }

            public void Deposit(decimal amount)
            {
                Balance += amount;
            }

            public void Withdraw(decimal amount)
            {
                Balance -= amount;
            }
        }

    We can write the spec with MSpec + Moq like the following:

        public class When_fund_is_transferred
        {
            const decimal ConvertionRate = 1.029m;
            const decimal TransferAmount = 10.0m;
            const decimal InitialBalance = 100.0m;

            static Account fromAccount;
            static Account toAccount;
            static FundTransferService fundTransferService;

            Establish context = () =>
            {
                fromAccount = new Account("USD", InitialBalance);
                toAccount = new Account("CAD", InitialBalance);

                var currencyConvertionService = new Moq.Mock<ICurrencyConvertionService>();
                currencyConvertionService.Setup(ccv => ccv.GetConvertionRate(Moq.It.IsAny<string>(), Moq.It.IsAny<string>())).Returns(ConvertionRate);

                fundTransferService = new FundTransferService(currencyConvertionService.Object);
            };

            Because of = () =>
            {
                fundTransferService.Transfer(fromAccount, toAccount, TransferAmount);
            };

            It should_decrease_from_account_balance = () =>
            {
                fromAccount.Balance.ShouldBeLessThan(InitialBalance);
            };

            It should_increase_to_account_balance = () =>
            {
                toAccount.Balance.ShouldBeGreaterThan(InitialBalance);
            };
        }

    and if you run the spec it will give you a nice little output like the following:

        When fund is transferred
        » should decrease from account balance
        » should increase to account balance

        2 passed, 0 failed, 0 skipped, took 1.14 seconds (MSpec).

    Now, let's see how we can write the exact same spec in RSpec:

        require File.dirname(__FILE__) + "/../FundTransfer/bin/Debug/FundTransfer"
        require "spec"
        require "caricature"

        describe "When fund is transferred" do
          Convertion_Rate = 1.029
          Transfer_Amount = 10.0
          Initial_Balance = 100.0

          before(:all) do
            @from_account = FundTransfer::Account.new("USD", Initial_Balance)
            @to_account = FundTransfer::Account.new("CAD", Initial_Balance)

            currency_convertion_service = Caricature::Isolation.for(FundTransfer::ICurrencyConvertionService)
            currency_convertion_service.when_receiving(:get_convertion_rate).with(:any, :any).return(Convertion_Rate)

            fund_transfer_service = FundTransfer::FundTransferService.new(currency_convertion_service)
            fund_transfer_service.transfer(@from_account, @to_account, Transfer_Amount)
          end

          it "should decrease from account balance" do
            @from_account.balance.should be < Initial_Balance
          end

          it "should increase to account balance" do
            @to_account.balance.should be > Initial_Balance
          end
        end

    I think the above code is self-explanatory. Treat the require statements (lines 1-4) as the "add reference" of our Visual Studio projects; we are adding all the required libraries with them. Next comes describe, which is an RSpec keyword. before does exactly the same job as NUnit's Setup or MSTest's TestInitialize attribute, but above we are using before(:all), which acts like MSTest's ClassInitialize: it executes only once, before all the test methods. In before(:all) we first instantiate the from and to accounts, which is the same as creating them with the full name (including namespace), like fromAccount = new FundTransfer.Account(.., ..). Next, we create a mock object of ICurrencyConvertionService. Note that for creating the mock we are not using Moq like the MSpec version. This is a somewhat interesting issue of IronRuby, or maybe the DLR: it seems that it is not possible in IronRuby to use the lambda expressions that most mocking tools use in the arrange phase, like:

        currencyConvertionService.Setup(ccv => ccv.GetConvertionRate(Moq.It.IsAny<string>(), Moq.It.IsAny<string>())).Returns(ConvertionRate);

    But the good news is that there is already an excellent mocking tool called Caricature, written completely in IronRuby, which we can use to mock .NET classes. Maybe all the mocking tool providers should give some thought to adding support for the DLR, so that we can use the tools we are already familiar with. I think the rest of the code is too simple to need explanation, so I am skipping it.

    Now, the last thing: how are we going to run it with RSpec? Let's first install the required gems. Open your command prompt and type the following:

        igem sources -a http://gems.github.com

    This will add GitHub as a gem source. Next type:

        igem install uuidtools caricature rspec

    And at last we have to create a batch file so that we can execute the specs in Notepad++. Create a batch file in the IronRuby bin directory, like in my previous post, and put the following in it:

        @echo off
        cls
        call spec %1 --format specdoc
        pause

    Next, add a run menu and shortcut in Notepad++ like in my previous post. Now when we run it, it will show the following output:

        When fund is transferred
        - should decrease from account balance
        - should increase to account balance

        Finished in 0.332042 seconds

        2 examples, 0 failures
        Press any key to continue . . .

    You will find the complete code of this post at the bottom. That's it for today.

    Download: RSpecIntegration.zip

