Search Results

Search found 544 results on 22 pages for 'tdd'.

Page 11/22

  • Test-First development tool for SQL Server 2005?

    - by Jeff Jones
    For several years I have been using a testing tool called qmTest that allows me to do test-driven database development for some Firebird databases. I write a test for a new feature (table, trigger, stored procedure, etc.) until it fails, then modify the database until the test passes. If necessary, I do more work on the test until it fails again, then modify the database until the test passes. Once the test for the feature is complete and passes 100% of the time, I save it in a suite of other tests for the database. Before moving on to another test or a deployment, I run all the tests as a suite to make sure nothing is broken. Tests can have dependencies on other tests, and the results are recorded and displayed in a browser. Nothing new here, I am sure. Our shop is aiming toward standardizing on MSSQLServer and I want to use the same procedure for developing our databases. Does anyone know of tools that allow or encourage this kind of development? I believe the Team System does, but we do not own that at this point, and probably will not for some time. I am not opposed to scripting, but would welcome a more graphical environment. Any suggestions?
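    As a rough illustration of the scripted route mentioned above, a test-first check against SQL Server written from C# with NUnit and plain ADO.NET might look something like this (the procedure name, connection string, and expected value are all hypothetical):

        using System.Data;
        using System.Data.SqlClient;
        using NUnit.Framework;

        [TestFixture]
        public class CustomerProcedureTests
        {
            // Hypothetical connection string; point it at a disposable dev database.
            private const string ConnectionString =
                "Server=.;Database=DevDb;Integrated Security=true";

            [Test]
            public void GetCustomerCount_ReturnsZero_OnEmptyTable()
            {
                using (var connection = new SqlConnection(ConnectionString))
                using (var command = new SqlCommand("dbo.GetCustomerCount", connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    connection.Open();

                    // Write the test before the procedure exists, watch it fail,
                    // then alter the database until it passes.
                    var count = (int)command.ExecuteScalar();
                    Assert.AreEqual(0, count);
                }
            }
        }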

    Read the article

  • Seeding repository Rhino Mocks

    - by ahsteele
    I am embarking upon my first journey of test-driven development in C#. To get started I'm using MSTest and Rhino.Mocks. I am attempting to write my first unit tests against my ICustomerRepository. It seems tedious to new up a Customer for each test method. In Ruby on Rails I'd create a seed file and load the customer for each test. It seems logical that I could put this boilerplate Customer into a property of the test class, but then I would run the risk of it being modified. What are my options for simplifying this code?

        [TestClass]
        public class CustomerTests : TestClassBase
        {
            [TestMethod]
            public void CanGetCustomerById()
            {
                // arrange
                var customer = new Customer()
                {
                    CustId = 5,
                    DifId = "55",
                    CustLookupName = "The Dude",
                    LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" } }
                };
                var repository = Stub<ICustomerRepository>();

                // act
                repository.Stub(rep => rep.GetById(5)).Return(customer);

                // assert
                Assert.AreEqual(customer, repository.GetById(5));
            }

            [TestMethod]
            public void CanGetCustomerByDifId()
            {
                // arrange
                var customer = new Customer()
                {
                    CustId = 5,
                    DifId = "55",
                    CustLookupName = "The Dude",
                    LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" } }
                };
                var repository = Stub<ICustomerRepository>();

                // act
                repository.Stub(rep => rep.GetCustomerByDifID("55")).Return(customer);

                // assert
                Assert.AreEqual(customer, repository.GetCustomerByDifID("55"));
            }

            [TestMethod]
            public void CanGetCustomerByLogin()
            {
                // arrange
                var customer = new Customer()
                {
                    CustId = 5,
                    DifId = "55",
                    CustLookupName = "The Dude",
                    LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" } }
                };
                var repository = Stub<ICustomerRepository>();

                // act
                repository.Stub(rep => rep.GetCustomerByLogin("tdude")).Return(customer);

                // assert
                Assert.AreEqual(customer, repository.GetCustomerByLogin("tdude"));
            }
        }

    Test base class:

        public class TestClassBase
        {
            protected T Stub<T>() where T : class
            {
                return MockRepository.GenerateStub<T>();
            }
        }

    ICustomerRepository and IRepository:

        public interface ICustomerRepository : IRepository<Customer>
        {
            IList<Customer> FindCustomers(string q);
            Customer GetCustomerByDifID(string difId);
            Customer GetCustomerByLogin(string loginName);
        }

        public interface IRepository<T>
        {
            void Save(T entity);
            void Save(List<T> entity);
            bool Save(T entity, out string message);
            void Delete(T entity);
            T GetById(int id);
            ICollection<T> FindAll();
        }
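    One way to cut the duplication above, sketched as a suggestion rather than the asker's code, is to build the Customer in a [TestInitialize] method so every test gets a fresh instance and nothing shared can be accidentally modified:

        [TestClass]
        public class CustomerTests : TestClassBase
        {
            private Customer customer;

            [TestInitialize]
            public void CreateTestCustomer()
            {
                // A fresh instance per test, so no test can corrupt another's data.
                customer = new Customer
                {
                    CustId = 5,
                    DifId = "55",
                    CustLookupName = "The Dude",
                    LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" } }
                };
            }

            [TestMethod]
            public void CanGetCustomerById()
            {
                var repository = Stub<ICustomerRepository>();
                repository.Stub(rep => rep.GetById(5)).Return(customer);

                Assert.AreEqual(customer, repository.GetById(5));
            }
        }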

    Read the article

  • Visual Studio 2010 failed tests throw exceptions

    - by Dave Hanson
    In Visual Studio 2010 Ultimate RC I cannot figure out how to suppress {"CollectionAssert.AreEqual failed. (Element at index 0 do not match.)"} thrown as a Microsoft.VisualStudio.TestTools.UnitTesting.AssertFailedException. If I press Ctrl+Alt+E I get the Exceptions dialog; however, that exception doesn't seem to be listed there to be suppressed. Does anyone else have any experience with this? I don't remember having to suppress these assert failures in Visual Studio 2008 when running unit tests. My tests would fail and I could just click on the Test Results to see which tests failed instead of fighting through these dialogs. For now I guess I'll just run my tests through the command window.

    Read the article

  • how to avoid returning mocks from a mocked object list

    - by koen
    I'm trying out mock/responsibility-driven design. I seem to have problems avoiding returning mocks from mocks in the case of finder objects. An example could be an object that checks whether the bills from last month are paid. It needs a service that retrieves a list of bills for that. So I need to mock that service that retrieves the bills. At the same time I need that mock to return mocked Bills (since I don't want my test to rely on the correctness of the Bill implementation). Is my design flawed? Is there a better way to test this? Or is this the way it will need to be when using finder objects (the finding of the bills in this case)?
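    As a sketch of one way around this in C# (the Bill, BillChecker, and IBillService names are hypothetical), the trick is to keep Bill a plain value object, so only the finder needs to be stubbed and the bills themselves can be real instances:

        using System;
        using System.Collections.Generic;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using Rhino.Mocks;

        public interface IBillService
        {
            IList<Bill> GetBillsFor(DateTime month);
        }

        [TestClass]
        public class BillCheckerTests
        {
            [TestMethod]
            public void LastMonthIsPaid_WhenEveryBillIsPaid()
            {
                // Bills are cheap value objects, so real instances are used;
                // only the service that finds them is stubbed.
                var bills = new List<Bill>
                {
                    new Bill { Amount = 10m, Paid = true },
                    new Bill { Amount = 25m, Paid = true }
                };

                var service = MockRepository.GenerateStub<IBillService>();
                service.Stub(s => s.GetBillsFor(Arg<DateTime>.Is.Anything)).Return(bills);

                var checker = new BillChecker(service);

                Assert.IsTrue(checker.LastMonthIsPaid());
            }
        }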

    Read the article

  • Where to put my xUnit tests for an F# assembly?

    - by Benjol
    I'm working on my first 'real' F# assembly, and trying to do things right. I've managed to get xUnit working too, but currently my test module is inside the same assembly. This bothers me a bit, because it means I'll be shipping an assembly where nearly half the code (and 80% of the API) is test methods. What is the 'right' way to do this? If I put the tests in another assembly, I think that means I have to expose internals that I'd rather keep private. I know that in C# there is a friend mechanism for tests (if that's the right terminology), is there an equivalent in F#? Alternatively, can anyone point me to an example project where this is being done 'properly'?
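    For reference, the C# "friend" mechanism mentioned above is the assembly-level InternalsVisibleTo attribute, and F# accepts the same attribute (with its [<assembly: ...>] syntax); a minimal C# sketch, with a hypothetical test assembly name:

        // In the production assembly, e.g. in AssemblyInfo.cs:
        using System.Runtime.CompilerServices;

        // Lets the named test assembly see this assembly's internal types,
        // so the tests can live in a separate project without widening the public API.
        [assembly: InternalsVisibleTo("MyLibrary.Tests")]

    If the assemblies are strong-named, the attribute string also has to include the test assembly's full public key.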

    Read the article

  • Testing Finite State Machines

    - by Pondidum
    I have inherited a large and fairly complex state machine at work. It has 31 possible states to be in. It has the following inputs:

        Enum: Current State (so 0 - 30)
        Enum: source (currently only 2 entries)
        Boolean: Request
        Boolean: type
        Enum: Status (3 states)
        Enum: Handling (3 states)
        Boolean: Completed

    The 31 states are really needed (big business process). Breaking it into separate state machines doesn't seem feasible - each state is distinct. I have written tests for one set of inputs (the most common set), with one test per input (all inputs constant, except for the State input):

        [Subject("Application Process States")]
        public class When_state_is_meeting2Requested : AppProcessBase
        {
            Establish context = () =>
            {
                //Setup....
            };

            Because of = () => process.Load(jas, vac);

            It Current_node_should_be_meeting2Requested = () => process.CurrentNode.ShouldBeOfType<meetingRequestedNode>();
            It Can_move_to_clientDeclined = () => Check(process, process.clientDeclined);
            It Can_move_to_meeting1Arranged = () => Check(process, process.meeting1Arranged);
            It Can_move_to_meeting2Arranged = () => Check(process, process.meeting2Arranged);
            It Can_move_to_Reject = () => Check(process, process.Reject);
            It Cannot_move_to_any_other_state = () => AllOthersFalse(process);
        }

    As no one is entirely sure what the output should be for each state and set of inputs, I have started writing tests for it; however, by my calculation I will need to write 4320 (30*2*2*2*3*3*2) tests for it. Does anyone have any suggestions on how I should go about testing this?
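    One common way to keep the combination count manageable, offered here as a sketch rather than a drop-in answer, is to drive a single parameterized test from a transition table, so each expected transition is a row of data rather than a hand-written test (NUnit syntax assumed; the state names, IProcessUnderTest, and BuildProcess are hypothetical):

        using System.Collections.Generic;
        using NUnit.Framework;

        [TestFixture]
        public class ProcessTransitionTests
        {
            // One row per (state, inputs) combination that matters: the state under
            // test, a couple of the other inputs, and the transitions that must be
            // allowed. In practice the table could be loaded from a spreadsheet that
            // the business signs off on.
            private static IEnumerable<TestCaseData> Transitions()
            {
                yield return new TestCaseData("meeting2Requested", "web", true,
                    new[] { "clientDeclined", "meeting1Arranged", "meeting2Arranged", "Reject" });
                yield return new TestCaseData("meeting1Arranged", "web", false,
                    new[] { "meeting2Requested", "Reject" });
                // ... more rows ...
            }

            [TestCaseSource("Transitions")]
            public void Allowed_transitions_match_the_table(
                string currentState, string source, bool request, string[] expectedTargets)
            {
                var process = BuildProcess(currentState, source, request);

                CollectionAssert.AreEquivalent(expectedTargets, process.AllowedTransitionNames());
            }

            public interface IProcessUnderTest
            {
                IEnumerable<string> AllowedTransitionNames();
            }

            private static IProcessUnderTest BuildProcess(string currentState, string source, bool request)
            {
                // Placeholder: wire this to whatever loads the real state machine
                // into the given state with the given inputs.
                throw new System.NotImplementedException();
            }
        }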

    Read the article

  • How do I test database-related code with NUnit?

    - by Michael Haren
    I want to write unit tests with NUnit that hit the database. I'd like to have the database in a consistent state for each test. I thought transactions would allow me to "undo" each test so I searched around and found several articles from 2004-05 on the topic: http://weblogs.asp.net/rosherove/archive/2004/07/12/180189.aspx http://weblogs.asp.net/rosherove/archive/2004/10/05/238201.aspx http://davidhayden.com/blog/dave/archive/2004/07/12/365.aspx http://haacked.com/archive/2005/12/28/11377.aspx These seem to revolve around implementing a custom attribute for NUnit which builds in the ability to roll back DB operations after each test executes. That's great but... Does this functionality exist somewhere in NUnit natively? Has this technique been improved upon in the last 4 years? Is this still the best way to test database-related code? Edit: it's not that I want to test my DAL specifically, it's more that I want to test pieces of my code that interact with the database. For these tests to be "no-touch" and repeatable, it'd be awesome if I could reset the database after each one. Further, I want to ease this into an existing project that has no testing in place at the moment. For that reason, I can't practically script up a database and data from scratch for each test.
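    One lightweight version of the rollback idea, without a custom NUnit attribute, is to open a TransactionScope in SetUp and dispose it in TearDown without calling Complete, so everything each test did against the database is rolled back (a sketch; the data-access class is hypothetical):

        using System.Transactions;
        using NUnit.Framework;

        [TestFixture]
        public class CustomerDataTests
        {
            private TransactionScope scope;

            [SetUp]
            public void BeginTransaction()
            {
                // Connections opened inside the test enlist in this ambient transaction.
                scope = new TransactionScope();
            }

            [TearDown]
            public void RollBack()
            {
                // Disposing without Complete() rolls back everything the test changed.
                scope.Dispose();
            }

            [Test]
            public void InsertedCustomer_CanBeQueried()
            {
                var dal = new CustomerDal();   // hypothetical data-access class
                dal.Insert(new Customer { Name = "Test" });

                Assert.IsNotNull(dal.FindByName("Test"));
            }
        }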

    Read the article

  • Mongomapper - unit testing with shoulda on rails 2.3.5

    - by egarcia
    I'm trying to implement shoulda unit tests on a Rails 2.3.5 app using MongoMapper. So far I've:

        Configured a Rails app that uses MongoMapper (the app works)
        Added shoulda to my gems, and installed it with rake gems:install
        Added config.frameworks -= [ :active_record, :active_resource ] to config/environment.rb so ActiveRecord isn't used

    My models look like this:

        class Account
          include MongoMapper::Document

          key :name, String, :required => true
          key :description, String
          key :company_id, ObjectId
          key :_type, String

          belongs_to :company
          many :operations
        end

    My test for that model is this one:

        class AccountTest < Test::Unit::TestCase
          should_belong_to :company
          should_have_many :operations
          should_validate_presence_of :name
        end

    It fails on the first should_belong_to:

        ./test/unit/account_test.rb:3: undefined method `should_belong_to' for AccountTest:Class (NoMethodError)

    Any ideas why this doesn't work? Should I try something different from shoulda? I must point out that this is the first time I've tried to use shoulda, and I'm pretty new to testing itself.

    Read the article

  • Any teams out there using TypeMock? Is it worth the hefty price tag?

    - by dferraro
    Hi, I hope this question is not 'controversial' - I'm just basically asking - has anyone here purchased TypeMock and been happy (or unhappy) with the results? We are a small dev shop of only 12 developers including the 2 dev managers. We've been using NMock so far but there are limitations. I have done research and started playing with TypeMock and I love it. It has super clean syntax and lets you mock basically everything, which is great for legacy code. The problem is - how do I justify to my boss spending $800-1200 per license for an API which has 4-5 competitors that are completely free? $800-1200 is how much Infragistics or Telerik cost per license - and there sure as hell aren't 4-5 comparable open source UI frameworks... Which is why I find it a bit overpriced, albeit an awesome library... Any opinions / experiences are greatly appreciated.

    EDIT: After finding Moq I thought I fell in love - until I found out that it's not fully supported in VB.NET, because VB lacks lambda subroutines =(. Is anyone using Moq for VB.NET? The problem is we are a mixed shop - we use C# for our CRM development and VB for everything else. Any guidance is greatly appreciated again.

    Read the article

  • how to test or describe endless possibilities?

    - by koen
    Example class in pseudocode:

        class SumCalculator
          method calculate(int1, int2) returns int

    What is a good way to test this? In other words, how should I describe the behavior I need?

        test1: canDetermineSumOfTwoIntegers
        test2: returnsSumOfTwoIntegers
        test3: knowsFivePlusThreeIsEight

    Test1 and test2 seem vague, and each would still need to test a specific calculation, so the name doesn't really describe what is being tested. Yet test3 is very limited. What is a good way to test such classes?
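    One way to split the difference, offered as a sketch rather than a definitive answer and assuming a C# port of the pseudocode with a Calculate method, is a parameterized test: the test name states the behaviour while the rows supply the concrete examples (NUnit syntax):

        using NUnit.Framework;

        [TestFixture]
        public class SumCalculatorTests
        {
            // The name describes the behaviour; the cases pin it to concrete examples.
            [TestCase(5, 3, 8)]
            [TestCase(0, 0, 0)]
            [TestCase(-2, 7, 5)]
            public void Calculate_returns_the_sum_of_its_two_arguments(int a, int b, int expected)
            {
                var calculator = new SumCalculator();

                Assert.AreEqual(expected, calculator.Calculate(a, b));
            }
        }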

    Read the article

  • "autotest/rails [...] doesn't [...] exist. Aborting"

    - by Ethan
    I'm finding that autotest has stopped working...

        $ autotest
        loading autotest/rails
        Autotest style autotest/rails doesn't seem to exist. Aborting.

    According to this blog post, the common reason for this error is that people don't have the autotest-rails gem installed. However, I definitely have that installed:

        autotest-rails (4.1.0)
        ZenTest (4.1.4, 4.1.3, 4.1.1, 4.0.0, 3.11.1, 3.11.0, 3.10.0, 3.9.3, 3.9.2)

    I haven't installed any new gems today or yesterday, though I might have done a gem update yesterday. Another issue I saw mentioned was incompatibility with Ruby 1.9, but I'm using MRI Ruby 1.8.6.

    Read the article

  • Should a Unit-test replicate functionality or Test output?

    - by Daniel Beardsley
    I've run into this dilemma several times. Should my unit tests duplicate the functionality of the method they are testing to verify its integrity? Or should unit tests strive to test the method with numerous manually created instances of inputs and expected outputs? I'm mainly asking the question for situations where the method you are testing is reasonably simple and its proper operation can be verified by glancing at the code for a minute. Simplified example (in Ruby):

        def concat_strings(str1, str2)
          return str1 + " AND " + str2
        end

    Simplified functionality-replicating test for the above method:

        def test_concat_strings
          10.times do
            str1 = random_string_generator
            str2 = random_string_generator
            assert_equal (str1 + " AND " + str2), concat_strings(str1, str2)
          end
        end

    I understand that most times the method you are testing won't be simple enough to justify doing it this way. But my question remains; is this a valid methodology in some circumstances (why or why not)?
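    For contrast, the non-replicating form of the same kind of test, sketched here in C#/NUnit (the StringUtil class is hypothetical) since that is the stack used elsewhere on this page, writes the expected values out by hand so the test cannot share a bug with the code it checks:

        using NUnit.Framework;

        [TestFixture]
        public class ConcatStringsTests
        {
            // Expected outputs are literal, not derived by repeating the implementation.
            [TestCase("cats", "dogs", "cats AND dogs")]
            [TestCase("", "dogs", " AND dogs")]
            public void Joins_the_two_strings_with_AND(string first, string second, string expected)
            {
                Assert.AreEqual(expected, StringUtil.ConcatStrings(first, second));
            }
        }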

    Read the article

  • Rhino Mocks Partial Mock

    - by dotnet crazy kid
    I am trying to test the logic from some existing classes. It is not possible to refactor the classes at present as they are very complex and in production. What I want to do is create a mock object and test a method that internally calls another method that is very hard to mock. So I want to just set a behaviour for the secondary method call. But when I set up the behaviour for the method, the code of the method is invoked and fails. Am I missing something, or is this just not possible to test without refactoring the class? I have tried all the different mock types (Strict, Stub, Dynamic, Partial, etc.) but they all end up calling the method when I try to set up the behaviour.

        using System;
        using MbUnit.Framework;
        using Rhino.Mocks;

        namespace MMBusinessObjects.Tests
        {
            [TestFixture]
            public class PartialMockExampleFixture
            {
                [Test]
                public void Simple_Partial_Mock_Test()
                {
                    const string param = "anything";

                    // setup mocks
                    MockRepository mocks = new MockRepository();
                    var mockTestClass = mocks.StrictMock<TestClass>();

                    // record behaviour *** actually calls into the real method ***
                    Expect.Call(mockTestClass.MethodToMock(param)).Return(true);

                    // never get to here
                    mocks.ReplayAll();

                    // this is what I want to test
                    Assert.IsTrue(mockTestClass.MethodIWantToTest(param));
                }

                public class TestClass
                {
                    public bool MethodToMock(string param)
                    {
                        // some logic that is very hard to mock
                        throw new NotImplementedException();
                    }

                    public bool MethodIWantToTest(string param)
                    {
                        // this method calls MethodToMock
                        if (MethodToMock(param))
                        {
                            // some logic I want to test
                        }
                        return true;
                    }
                }
            }
        }
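    For what it's worth, Rhino Mocks can only replace members it is able to override, so one sketch of a fix, assuming the class can be touched just enough to mark the hard method virtual, is a partial mock:

        public class TestClass
        {
            // virtual is what lets the mocking framework intercept this call
            public virtual bool MethodToMock(string param)
            {
                throw new NotImplementedException();
            }

            public bool MethodIWantToTest(string param)
            {
                if (MethodToMock(param))
                {
                    // some logic to test
                }
                return true;
            }
        }

        [Test]
        public void Partial_mock_replaces_only_the_hard_method()
        {
            var mockTestClass = MockRepository.GeneratePartialMock<TestClass>();

            // Only MethodToMock is stubbed out; MethodIWantToTest runs for real.
            mockTestClass.Stub(t => t.MethodToMock("anything")).Return(true);

            Assert.IsTrue(mockTestClass.MethodIWantToTest("anything"));
        }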

    Read the article

  • Is this an idiomatic way to pass mocks into objects?

    - by Billy ONeal
    I'm a bit confused about passing this mock class into an implementation class. It feels wrong to have all this explicitly managed memory flying around. I'd just pass the class by value, but that runs into the slicing problem. Am I missing something here?

    Implementation:

        namespace detail
        {
            struct FileApi
            {
                virtual HANDLE CreateFileW(
                    __in LPCWSTR lpFileName,
                    __in DWORD dwDesiredAccess,
                    __in DWORD dwShareMode,
                    __in_opt LPSECURITY_ATTRIBUTES lpSecurityAttributes,
                    __in DWORD dwCreationDisposition,
                    __in DWORD dwFlagsAndAttributes,
                    __in_opt HANDLE hTemplateFile)
                {
                    return ::CreateFileW(lpFileName, dwDesiredAccess, dwShareMode,
                        lpSecurityAttributes, dwCreationDisposition,
                        dwFlagsAndAttributes, hTemplateFile);
                }

                virtual void CloseHandle(HANDLE handleToClose)
                {
                    ::CloseHandle(handleToClose);
                }
            };
        }

        class File : boost::noncopyable
        {
            HANDLE hWin32;
            boost::scoped_ptr<detail::FileApi> fileApi;
        public:
            File(
                __in LPCWSTR lpFileName,
                __in DWORD dwDesiredAccess,
                __in DWORD dwShareMode,
                __in_opt LPSECURITY_ATTRIBUTES lpSecurityAttributes,
                __in DWORD dwCreationDisposition,
                __in DWORD dwFlagsAndAttributes,
                __in_opt HANDLE hTemplateFile,
                __in detail::FileApi * method = new detail::FileApi())
            {
                fileApi.reset(method);
                hWin32 = fileApi->CreateFileW(lpFileName, dwDesiredAccess, dwShareMode,
                    lpSecurityAttributes, dwCreationDisposition,
                    dwFlagsAndAttributes, hTemplateFile);
            }

            ~File()
            {
                fileApi->CloseHandle(hWin32);
            }
        };

    Tests:

        namespace detail
        {
            struct MockFileApi : public FileApi
            {
                MOCK_METHOD7(CreateFileW, HANDLE(LPCWSTR, DWORD, DWORD, LPSECURITY_ATTRIBUTES, DWORD, DWORD, HANDLE));
                MOCK_METHOD1(CloseHandle, void(HANDLE));
            };
        }

        using namespace detail;
        using namespace testing;

        TEST(Test_File, OpenPassesArguments)
        {
            MockFileApi * api = new MockFileApi;
            EXPECT_CALL(*api, CreateFileW(Eq(L"BozoFile"), Eq(56), Eq(72),
                Eq(reinterpret_cast<LPSECURITY_ATTRIBUTES>(67)), Eq(98), Eq(102),
                Eq(reinterpret_cast<HANDLE>(98))))
                .Times(1).WillOnce(Return(reinterpret_cast<HANDLE>(42)));

            File test(L"BozoFile", 56, 72, reinterpret_cast<LPSECURITY_ATTRIBUTES>(67),
                98, 102, reinterpret_cast<HANDLE>(98), api);
        }

    Read the article

  • Using RhinoMocks, how do you mock or stub a concrete class without an empty constructor?

    - by Mark Rogers
    Mocking a concrete class with Rhino Mocks seems to work pretty easily when you have an empty constructor on the class:

        public class MyClass
        {
            public MyClass() {}
        }

    But if I add a constructor that takes parameters and remove the one that doesn't:

        public class MyClass
        {
            public MyClass(MyOtherClass instance) {}
        }

    I tend to get an exception:

        System.MissingMethodException : Can't find a constructor with matching arguments

    I've tried putting nulls in my call to Mock or Stub, but it doesn't work. Can I create mocks or stubs of concrete classes with Rhino Mocks, or must I always supply (implicitly or explicitly) a parameterless constructor?
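    For reference, the Rhino Mocks generate methods accept the constructor arguments as a trailing params array, so a sketch of one way through (assuming the dependency can itself be built or stubbed) is:

        // Constructor arguments go after the type parameter as a params array.
        MyOtherClass other = new MyOtherClass();                  // or a stub of it
        MyClass mock = MockRepository.GenerateMock<MyClass>(other);

        // Note: the members being mocked still have to be virtual, otherwise
        // Rhino Mocks cannot override them even though construction now succeeds.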

    Read the article

  • how often should the entire suite of a system's unit tests be run?

    - by gerryLowry
    Generally, I'm still very much a unit testing neophyte. BTW, you may also see this question on other forums like xUnit.net, et cetera, because it's an important question to me. I apologize in advance for my cross-posting; your opinions are very important to me and not everyone in this forum belongs to the other forums too. I was looking at a large, decade-old legacy system which has had over 700 unit tests written recently (700 is just a small beginning). The tests happen to be written in MSTest, but this question applies to all testing frameworks AFAIK. When I ran "ALL TESTS" via VS2008, the final count was only seven tests. That's about 1% of the total tests that have been written to date.

    MORE INFORMATION: The ASP.NET MVC 2 RTM source code, including its unit tests, is available on CodePlex; those unit tests are also written in MSTest even though (an irrelevant fact) Brad Wilson later joined the ASP.NET MVC team as its Senior Programmer. All 2000-plus tests get run, not just a few.

    QUESTION: Given that AFAIK the purpose of unit tests is to identify breakages in the SUT, am I correct in thinking that the "best practice" is to always, or at least very frequently, run all of the tests? Thank you. Regards, Gerry (Lowry)

    Read the article

  • How do you unit test a class that's meant to talk to data?

    - by Arda Xi
    I have a few repository classes that are meant to talk to different kinds of data, deriving from an IRepository interface laid out like so: In implementations, the code talks to a data source, be this a directory of XML files or a database or even just a cache. Is it possible to reliably unit test any of these implementations? I don't see a mock implementation working, because then I'm only testing the mock code and not the actual code.
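    As one illustration of testing the real implementations instead of mocking them, an XML-directory-backed repository can be exercised against a throwaway folder; the repository and entity names below are hypothetical:

        using System.IO;
        using NUnit.Framework;

        [TestFixture]
        public class XmlCustomerRepositoryTests
        {
            private string dataDir;

            [SetUp]
            public void CreateDataDirectory()
            {
                // A fresh, isolated directory per test keeps tests repeatable.
                dataDir = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
                Directory.CreateDirectory(dataDir);
            }

            [TearDown]
            public void DeleteDataDirectory()
            {
                Directory.Delete(dataDir, true);
            }

            [Test]
            public void SavedEntities_CanBeReadBack()
            {
                var repository = new XmlCustomerRepository(dataDir);   // hypothetical implementation
                repository.Save(new Customer { Id = 1, Name = "Test" });

                Assert.AreEqual("Test", repository.GetById(1).Name);
            }
        }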

    Read the article

  • Free Testing / Code Coverage systems for C++

    - by Billy ONeal
    I'd like to start using a Test Driven Development system for a private project, since I saw my employer using it and realized it was very useful. My employer's project was in C#, but mine are in C and C++. I looked around and saw that several packages exist for both Java and .NET (for example: NCover, NUnit, ...). Unfortunately I found it difficult to find good C++ testing frameworks. Do you know of any unit testing frameworks that satisfy the following requirements?

        IMPORTANT: Must provide code coverage statistics, as I'd like to have some idea of how well my tests cover my code base
        Must be free
        Usable with C++ projects

    EDIT: To be clear, I know of many existing unit test frameworks. The code coverage piece is what's most important.

    Read the article

  • Unit testing a SQL code generator

    - by Tom H.
    The team I'm on is currently writing code in TSQL to generate TSQL code that will be saved as scripts and later run. We're having a little difficulty in separating our unit tests between testing the code generator parts and testing the actual code that they generate. I've read through another similar question, but I was hoping to get some specific examples of what kind of unit test cases we might have. As an example, let's say that I have a bit of code that simply generates a DROP statement for a view, given the view schema and name. Do I just test that the generated code matches some expected outcome using string comparisons and then in a later integration or system test make sure that the drop actually drops the view if it exists, does nothing if the view doesn't exist, or raises an error if the view is one that we are marking as not allowing a drop? Thanks for any advice!

    Read the article

  • Silverlight Unit Testing Framework running tests in external class library

    - by Jonas Follesø
    I'm currently looking into different options for unit testing Silverlight applications. One of the frameworks available is the Silverlight Unit Test Framework from Microsoft (developed primarily by Jeff Wilcox, http://www.jeff.wilcox.name/2010/05/sl3-utf-bits/). One of the scenarios I'm looking into is running the same tests on both Silverlight 3 (PC) and Windows Phone 7. The Silverlight Unit Test Framework (SLUT) runs on both PC and phone. To avoid having to copy or link files, I would like to put my tests into a shared test library that can be loaded either by a WP7 application using SLUT or by a Silverlight 3 application using SLUT. So my question is: will SLUT load unit tests defined in a referenced class library, or only in the executing assembly?
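    On the referenced-library question: the framework's test page is driven by a settings object, and write-ups from around the same time show the runner being pointed at extra assemblies roughly as below; treat the exact members as an assumption to verify against the SLUT build in use:

        // In the App startup of the Silverlight / WP7 test host application.
        private void Application_Startup(object sender, StartupEventArgs e)
        {
            UnitTestSettings settings = UnitTestSystem.CreateDefaultSettings();

            // Add the shared class library that contains the tests,
            // in addition to (or instead of) the host assembly itself.
            settings.TestAssemblies.Add(typeof(SharedTests.SomeTestClass).Assembly);

            RootVisual = UnitTestSystem.CreateTestPage(settings);
        }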

    Read the article

  • Jasmine testing coffeescript expect(setTimeout).toHaveBeenCalledWith

    - by Lee Quarella
    In the process of learning Jasmine, I've come to this issue. I want a basic function to run, then set a timeout to call itself again... simple stuff.

        class @LoopObj
          constructor: ->

          loop: (interval) ->
            # do some stuff
            setTimeout((=> @loop(interval)), interval)

    But I want to test to make sure the setTimeout was called with the proper args:

        describe "loop", ->
          xit "does nifty things", ->

          it "loops at a given interval", ->
            my_nifty_loop = new LoopObj
            interval = 10
            spyOn(window, "setTimeout")
            my_nifty_loop.loop(interval)
            expect(setTimeout).toHaveBeenCalledWith((-> my_nifty_loop.loop(interval)), interval)

    I get this error:

        Expected spy setTimeout to have been called with [ Function, 10 ] but was called with [ [ Function, 10 ] ]

    Is this because the (-> my_nifty_loop.loop(interval)) function does not equal the (=>@loop(interval)) function? Or does it have something to do with the extra square brackets around the second [ [ Function, 10 ] ]? Something else altogether? Where have I gone wrong?

    Read the article

  • Why do we need mocking frameworks like Easymock , JMock or Mockito?

    - by Praneeth
    Hi, we use hand-written stubs in our unit tests, and I'm exploring the need for a mocking framework like EasyMock or Mockito in our project. I do not find a compelling reason for switching to mocking frameworks from hand-written stubs. Can anyone please explain why one would opt for mocking frameworks when they are already doing unit tests using hand-written mocks/stubs? Thanks

    Read the article

  • Usage of Assert.Inconclusive

    - by Johannes Rudolph
    Hi, I'm wondering how someone should use Assert.Inconclusive(). I'm using it if my unit test is about to fail for a reason other than what it is for. E.g. I have a method on a class that calculates the sum of an array of ints. On the same class there is also a method to calculate the average of the elements. It is implemented by calling Sum and dividing it by the length of the array. Writing a unit test for Sum() is simple. However, when I write a test for Average() and Sum() fails, Average() is likely to fail also. The failure of Average is not explicit about the reason it failed; it failed for a reason other than what it should test for. That's why I would check whether Sum() returns the correct result, and otherwise I Assert.Inconclusive(). Is this to be considered good practice? What is Assert.Inconclusive intended for? Or should I rather solve the previous example by means of an isolation framework?
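    A minimal sketch of the guard described above, using MSTest's Assert.Inconclusive (the Calculator class is hypothetical):

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class CalculatorTests
        {
            [TestMethod]
            public void Average_DividesSumByLength()
            {
                var calc = new Calculator();
                var values = new[] { 2, 4, 6 };

                // If Sum is already broken, this test cannot say anything useful
                // about Average, so report it as inconclusive instead of failed.
                if (calc.Sum(values) != 12)
                    Assert.Inconclusive("Sum is broken; Average cannot be verified.");

                Assert.AreEqual(4, calc.Average(values));
            }
        }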

    Read the article

  • Test Driven Development with C++: How to test a class which depends on other classes?

    - by Nikhil
    Suppose I have a class A which depends on 3 other classes X, Y and Z. Whether A uses these through a reference or a pointer, or A is templated to be instantiated with X, Y and Z, doesn't matter; the key is that in order to test A, I need to have X, Y and Z. So I need to have fakes for X, Y and Z. Suppose I write them. Now, how do I swap real and fake objects easily? I can see that this works very easily in the case of templates. In order to make it work when A depends on X, Y and Z through a reference or a pointer, I would need to have a base class, say X_Interface, from which I can inherit X_Real and X_Fake. So basically, I would end up having 3 times the number of classes for every class that needs a fake. I am most likely missing something. There has to be a simpler way to do this. Having a base class X_Interface is also quite expensive, as I will be using more space and making virtual calls. I guess I could use CRTP since I know whether it's an X_Real or X_Fake at compile time, but still there must be a better way.

    Read the article
