Search Results

Search found 10052 results on 403 pages for 'mutation testing'.


  • The pImpl idiom and Testability

    - by Rimo
    The pImpl idiom in C++ aims to hide the implementation details (= private members) of a class from the users of that class. However, it also hides some of the dependencies of that class, which is usually regarded as bad from a testing point of view. For example, if class A hides its implementation details in class AImpl, which is only accessible from A.cpp, and AImpl depends on a lot of other classes, it becomes very difficult to unit test class A, since the testing framework has no access to the methods of AImpl and also no way to inject dependencies into AImpl. This has been a problem for me lately, and I am beginning to think that the pImpl idiom and writing testable code don't mix well. Has anyone come across this problem before? And have you found a solution?
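
    One way the two can be reconciled is to pass the Impl's dependencies in through the outer class's constructor, so a test can substitute fakes without ever seeing AImpl. A minimal sketch of that idea (the Logger dependency is hypothetical, not from the original post):

        // A.h -- the public class takes its dependencies as abstract interfaces
        #include <memory>

        class Logger {                       // hypothetical dependency
        public:
            virtual ~Logger() {}
            virtual void log(const char* msg) = 0;
        };

        class A {
        public:
            explicit A(std::shared_ptr<Logger> logger);  // tests inject a fake here
            ~A();                            // defined in A.cpp, where AImpl is complete
            void doWork();
        private:
            class AImpl;                     // still hidden: defined only in A.cpp
            std::unique_ptr<AImpl> impl_;
        };

        // A.cpp -- the Impl keeps the injected dependency; tests never see this file
        class A::AImpl {
        public:
            explicit AImpl(std::shared_ptr<Logger> logger) : logger_(std::move(logger)) {}
            void doWork() { logger_->log("working"); }
        private:
            std::shared_ptr<Logger> logger_;
        };

        A::A(std::shared_ptr<Logger> logger) : impl_(new AImpl(std::move(logger))) {}
        A::~A() = default;
        void A::doWork() { impl_->doWork(); }

    The implementation stays hidden, but the seams a test needs are now part of A's public constructor.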

    Read the article

  • Testing instance variables from controllers with rspec

    - by Thiago
    Hi, I am trying to get the following spec to run:

        describe BlacklistController, "GET index" do
          it "should display the list of universally blocked numbers" do
            get :index
            debugger
            assigns[:blocked_numbers].should contain "190"
          end
        end

    Here's the action:

        def index
          @blocked_numbers << "190"
          respond_to do |format|
            format.html
          end
        end

    And the failure simply says that assigns[:blocked_numbers] is nil. Why's that?

    Read the article

  • Testing for Adjacent Cells In a Multi-level Grid

    - by Steve
    I'm designing an algorithm to test whether cells on a grid are adjacent or not. The catch is that the cells are not on a flat grid. They are on a multi-level grid such as the one drawn below.

        Level 1 (Top Level)
        | - - - - - |
        | A | B | C |
        | - - - - - |
        | D | E | F |
        | - - - - - |
        | G | H | I |
        | - - - - - |

        Level 2
        | -Block A- | -Block B- |
        | 1 | 2 | 3 | 1 | 2 | 3 |
        | - - - - - | - - - - - |
        | 4 | 5 | 6 | 4 | 5 | 6 | ...
        | - - - - - | - - - - - |
        | 7 | 8 | 9 | 7 | 8 | 9 |
        | - - - - - | - - - - - |
        | -Block D- | -Block E- |
        | 1 | 2 | 3 | 1 | 2 | 3 |
        | - - - - - | - - - - - |
        | 4 | 5 | 6 | 4 | 5 | 6 | ...
        | - - - - - | - - - - - |
        | 7 | 8 | 9 | 7 | 8 | 9 |
        | - - - - - | - - - - - |
              .           .
              .           .
              .           .

    This diagram is simplified from my actual need, but the concept is the same. There is a top-level block with many cells within it (level 1). Each block is further subdivided into many more cells (level 2). Those cells are further subdivided into levels 3, 4 and 5 for my project, but let's just stick to two levels for this question. I'm receiving inputs for my function in the form of "A8, A9, B7, D3". That's a list of cell ids, where each cell id has the format (level 1 id)(level 2 id). Let's start by comparing just 2 cells, A8 and A9. That's easy because they are in the same block:

        private static RelativePosition getRelativePositionInTheSameBlock(String v1, String v2) {
            RelativePosition relativePosition;
            int diff = Integer.parseInt(v1) - Integer.parseInt(v2);  // within a block the ids are numeric
            if (diff == -1) {
                relativePosition = RelativePosition.LEFT_OF;
            } else if (diff == 1) {
                relativePosition = RelativePosition.RIGHT_OF;
            } else if (diff == -BLOCK_WIDTH) {
                relativePosition = RelativePosition.TOP_OF;
            } else if (diff == BLOCK_WIDTH) {
                relativePosition = RelativePosition.BOTTOM_OF;
            } else {
                relativePosition = RelativePosition.NOT_ADJACENT;
            }
            return relativePosition;
        }

    An A9 - B7 comparison could be done by checking if A is a multiple of BLOCK_WIDTH and whether B is (A - BLOCK_WIDTH + 1). Either that, or just check naively if the A/B pair is 3-1, 6-4 or 9-7 for better readability. For B7 - D3, they are not adjacent, but D3 is adjacent to A9, so I can do a similar adjacency test as above. So, getting away from the little details and focusing on the big picture: is this really the best way to do it? Keeping in mind the following points:

    - I actually have 5 levels, not 2, so I could potentially get a list like "A8A1A, A8A1B, B1A2A, B1A2B".
    - Adding a new cell to compare still requires me to compare all the other cells before it (it seems like the best I could do for this step is O(n)).
    - The cells aren't all 3x3 blocks; they're just that way for my example. They could be MxN blocks with different M and N for different levels.
    - In my current implementation above, I have separate functions to check adjacency if the cells are in the same block, if they are in separate horizontally adjacent blocks, or if they are in separate vertically adjacent blocks. That means I have to know the position of the two blocks at the current level before I call one of those functions for the layer below.

    Judging by the complexity of having to deal with multiple functions for different edge cases at different levels, and having 5 levels of nested if statements, I'm wondering if another design is more suitable. Perhaps a more recursive solution, use of other data structures, or perhaps mapping the entire multi-level grid to a single-level grid (my quick calculation gives me about 700,000+ atomic cell ids). Even if I go that route, mapping from multi-level to single level is a non-trivial task in itself.
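
    One alternative that sidesteps the per-level edge cases entirely (a sketch of the "map to a single-level grid" idea; it assumes each level's index has been parsed into a 0-based, row-major number, e.g. "A8" -> {0, 7}, which is left out here): flatten each id to absolute row/column coordinates once, and adjacency at any depth becomes a Manhattan-distance check.

        public class GridAdjacency {
            // Rows/columns per level; the post's example is 3x3 at both levels,
            // but MxN per level works the same way.
            static final int[] LEVEL_ROWS = {3, 3};
            static final int[] LEVEL_COLS = {3, 3};

            // levelIndex[i] is the 0-based, row-major cell index at level i,
            // e.g. "A8" -> {0, 7} and "B7" -> {1, 6}.
            static int[] toAbsolute(int[] levelIndex) {
                int row = 0, col = 0;
                for (int lvl = 0; lvl < levelIndex.length; lvl++) {
                    row = row * LEVEL_ROWS[lvl] + levelIndex[lvl] / LEVEL_COLS[lvl];
                    col = col * LEVEL_COLS[lvl] + levelIndex[lvl] % LEVEL_COLS[lvl];
                }
                return new int[] { row, col };
            }

            static boolean isAdjacent(int[] a, int[] b) {
                int[] p = toAbsolute(a), q = toAbsolute(b);
                // orthogonal neighbours are exactly Manhattan distance 1 apart
                return Math.abs(p[0] - q[0]) + Math.abs(p[1] - q[1]) == 1;
            }
        }

    With this, "D3 adjacent to A9" falls out without any block-level special cases: A9 maps to (2,2) and D3 to (3,2). No 700,000-entry lookup table is needed either, since the mapping is computed on the fly.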

    Read the article

  • Testing shared memory, strange things happen

    - by barfatchen
    I have 2 programs, compiled with gcc 4.1.2 and running on RedHat 5.5. It is a simple job to test shared memory. shmem1.c looks like the following:

        #define STATE_FILE "/program.shared"
        #define NAMESIZE 1024
        #define MAXNAMES 100

        typedef struct
        {
            char name[MAXNAMES][NAMESIZE];
            int heartbeat;
            int iFlag;
        } SHARED_VAR;

        int main(void)
        {
            int first = 0;
            int shm_fd;
            int idx;
            static SHARED_VAR *conf;

            if ((shm_fd = shm_open(STATE_FILE, (O_CREAT | O_EXCL | O_RDWR),
                                   (S_IREAD | S_IWRITE))) > 0) {
                first = 1; /* We are the first instance */
            }
            else if ((shm_fd = shm_open(STATE_FILE, (O_CREAT | O_RDWR),
                                        (S_IREAD | S_IWRITE))) < 0) {
                printf("Could not create shm object. %s\n", strerror(errno));
                return errno;
            }
            if ((conf = mmap(0, sizeof(SHARED_VAR), (PROT_READ | PROT_WRITE),
                             MAP_SHARED, shm_fd, 0)) == MAP_FAILED) {
                return errno;
            }
            if (first) {
                for (idx = 0; idx < 1000000000; idx++) {
                    conf->heartbeat = conf->heartbeat + 1;
                }
            }
            printf("conf->heartbeat=(%d)\n", conf->heartbeat);
            close(shm_fd);
            shm_unlink(STATE_FILE);
            exit(0);
        } /* main */

    And shmem2.c like the following:

        #define STATE_FILE "/program.shared"
        #define NAMESIZE 1024
        #define MAXNAMES 100

        typedef struct
        {
            char name[MAXNAMES][NAMESIZE];
            int heartbeat;
            int iFlag;
        } SHARED_VAR;

        int main(void)
        {
            int first = 0;
            int shm_fd;
            static SHARED_VAR *conf;

            if ((shm_fd = shm_open(STATE_FILE, (O_RDWR), (S_IREAD | S_IWRITE))) < 0) {
                printf("Could not create shm object. %s\n", strerror(errno));
                return errno;
            }
            ftruncate(shm_fd, sizeof(SHARED_VAR));
            if ((conf = mmap(0, sizeof(SHARED_VAR), (PROT_READ | PROT_WRITE),
                             MAP_SHARED, shm_fd, 0)) == MAP_FAILED) {
                return errno;
            }
            int idx;
            for (idx = 0; idx < 1000000000; idx++) {
                conf->heartbeat = conf->heartbeat + 1;
            }
            printf("conf->heartbeat=(%d)\n", conf->heartbeat);
            close(shm_fd);
            exit(0);
        }

    After compiling:

        gcc shmem1.c -lpthread -lrt -o shmem1.exe
        gcc shmem2.c -lpthread -lrt -o shmem2.exe

    and running both programs almost at the same time in 2 terminals:

        [test]$ ./shmem1.exe
        First creation of the shm. Setting up default values
        conf->heartbeat=(840825951)

        [test]$ ./shmem2.exe
        conf->heartbeat=(1215083817)

    I feel confused!! Since shmem1.c loops 1,000,000,000 times, how can the answer be 840,825,951? When I run shmem1.exe and shmem2.exe this way, most of the time conf->heartbeat ends up larger than 1,000,000,000, but seldom and randomly I will see a result where conf->heartbeat is less than 1,000,000,000, in either shmem1.exe or shmem2.exe!! If I run shmem1.exe alone, it always prints 1,000,000,000. My question is: what causes conf->heartbeat=(840825951) in shmem1.exe?

    Update: Although I'm not sure, I think I have figured out what is going on. If shmem1.exe has counted to 10, for example, then conf->heartbeat = 10. At this point shmem1.exe takes a rest and then comes back, reads conf->heartbeat = 8 from shared memory, and continues from 8. Why is conf->heartbeat = 8? I think it is because shmem2.exe updated the shared memory data to 8, and shmem1.exe did not write 10 back to shared memory before it took its rest... that is just my theory... I don't know how to prove it!!
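
    What the update describes is a classic lost-update race: conf->heartbeat = conf->heartbeat + 1 compiles into a separate load, add, and store, so both processes can read the same old value and one of the two increments vanishes ("both read 7, both write 8"). One way to demonstrate it is to make the increment atomic and watch the anomaly disappear; a sketch using gcc's __sync builtins (available from gcc 4.1 on):

        /* Each increment is now a single atomic read-modify-write on the
         * shared counter, so neither process can overwrite the other's
         * update. Run both programs again: the combined final value should
         * never fall below 1,000,000,000. */
        for (idx = 0; idx < 1000000000; idx++) {
            __sync_fetch_and_add(&conf->heartbeat, 1);
        }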

    Read the article

  • SHTML - Testing

    - by Michael
    I am creating an SHTML website and I am wondering how you can test the website in Dreamweaver. Do you simply change the extensions back to html?

    Read the article

  • Testing iPhone code on Windows

    - by steve
    I'm picking up a new Dell laptop. My primary machine is an iMac. I will most likely have to write some iPhone projects for someone in the future. While I do most of my work on the iMac, there would be maybe 25% of the time where I work from my laptop. Can anyone tell me, if I use Objective-C / the iPhone SDKs, whether there is a generic Objective-C compiler I can use to see if my code would in theory work? Not looking to do a hackintosh or anything like that. My other option is to just get a discounted Mac mini (I think this is most likely) as well as the Dell. Thanks for any advice.

    Read the article

  • Why am I getting "ArgumentError: wrong number of arguments (1 for 0)" when running my Rails functional tests?

    - by Hisham
    I'm stumped on what's causing this. I get this error and stack trace in all my functional tests where I call 'post'. Here is the full stack trace:

        7) Error:
        test_should_validate(UsersControllerTest):
        ArgumentError: wrong number of arguments (1 for 0)
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:48:in `to_query'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:48:in `build_query_string'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:46:in `each'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:46:in `build_query_string'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:233:in `append_query_string'
            generated code (/Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route.rb:154):3:in `generate'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route_set.rb:365:in `__send__'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route_set.rb:365:in `generate'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route_set.rb:364:in `each'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/routing/route_set.rb:364:in `generate'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/url_rewriter.rb:208:in `rewrite_path'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/url_rewriter.rb:187:in `rewrite_url'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/url_rewriter.rb:165:in `rewrite'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/test_process.rb:450:in `build_request_uri'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/test_process.rb:406:in `process'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/actionpack/lib/action_controller/test_process.rb:376:in `post'
            functional/users_controller_test.rb:57:in `test_should_validate'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/activesupport/lib/active_support/testing/setup_and_teardown.rb:60:in `__send__'
            /Users/hisham/src/rails/ftuBackend/vendor/rails/activesupport/lib/active_support/testing/setup_and_teardown.rb:60:in `run'

    This is the test I'm running:

        def test_should_validate
          post :validate, :user => {
            :email => '[email protected]',
            :password => 'quire',
            :password_confirmation => 'quire',
            :agreed_to_terms => "true"
          }
          assert assigns(:user).errors.empty?
          assert_response :success
        end

    Read the article

  • Ruby on Rails generates tests for you. Do those give a false sense of a safety net?

    - by Hamish Grubijan
    Disclaimer: I have not used RoR, and I have not generated tests. But I will still dare to post this question. Quality assurance is theoretically impossible to get 100% right in general (an undecidable problem ;)), and it is hard in practice. Many developers do not understand that writing good automated tests is an art, and that it is hard. When I hear that RoR generates the tests for you, I get very skeptical. It cannot be that easy. Testing is a general concept; it applies across languages. So does the concept of code contracts, which is similar for languages that support it. Code contracts do not generate themselves. The programmer must add the requirements and the promises manually, after doing some thinking about the algorithm / function. If a human gets it wrong, then the tools will propagate the error. Similarly with testing - it takes human judgement about what should happen. Tests do not write themselves, and we are far from the day when a business analyst can just have a conversation with a computer, tell it informally what the requirements are, and have the computer do all the work. There is no magic ... how can RoR generate good tests for you? Please shed some light on this. Opinions are OK, for this is a community wiki. Thanks!
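
    For what it's worth, the generated tests are skeletons rather than finished tests: the scaffold generator emits boilerplate that asserts only the generic CRUD mechanics, and the developer is expected to flesh it out. Roughly what such a generated test looks like (paraphrased from memory; details vary by Rails version):

        # test/functional/users_controller_test.rb (generated skeleton)
        class UsersControllerTest < ActionController::TestCase
          test "should get index" do
            get :index
            assert_response :success       # only checks that the page renders
            assert_not_nil assigns(:users) # ...and that something was assigned
          end
        end

    Nothing here encodes domain rules; that judgement still has to come from a human, which is consistent with the skepticism above.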

    Read the article

  • How do I run JUnit from NetBeans?

    - by FarmBoy
    I've been trying to understand how to start writing and running JUnit tests. When I'm reading this article: http://junit.sourceforge.net/doc/testinfected/testing.htm I get to the middle of the page, where they write, "JUnit comes with a graphical interface to run tests. Type the name of your test class in the field at the top of the window. Press the Run button." I don't know how to launch this program. I don't even know which package it is in, or how you run a library class from an IDE. Being stuck, I tried this NetBeans tutorial: http://www.netbeans.org/kb/docs/java/junit-intro.html It seemed to be going OK, but then I noticed that the menu options in this tutorial for testing a Java Class Library are different from those for a regular Java application, or for a Java web app. So the instructions in this tutorial don't apply generally. I'm using NetBeans 6.7, and I've imported JUnit 4.5 into the libraries folder. What would be the normal way to run JUnit, after having written the tests? The JUnit FAQ describes the process from the console, and I'm willing to do that if that is what is typical, but given all that I can do inside NetBeans, it seems hard to believe that there isn't an easier way. Thanks much.

    EDIT: If I right-click on the project and select "Test", the output is:

        init:
        deps-jar:
        compile:
        compile-test:
        test-report:
        test:
        BUILD SUCCESSFUL (total time: 0 seconds)

    This doesn't strike me as the desired output of a test, especially since this doesn't change whether the test condition is true or not. Any ideas?
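
    For reference, the console route the FAQ mentions comes down to invoking the JUnit 4 runner class directly (the test class here is a made-up example):

        // MyClassTest.java -- a minimal JUnit 4 test
        import org.junit.Test;
        import static org.junit.Assert.assertEquals;

        public class MyClassTest {
            @Test
            public void addition() {
                assertEquals(4, 2 + 2);   // deliberately trivial
            }
        }

    compiled and run with the JUnit jar on the classpath (use ; instead of : on Windows):

        javac -cp junit-4.5.jar MyClassTest.java
        java -cp .:junit-4.5.jar org.junit.runner.JUnitCore MyClassTest

    That prints a pass/fail summary, which is also handy for checking that a failing assertion really does show up, unlike the silent build output above.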

    Read the article

  • Will IOC solve our problems?

    - by user127954
    Just trying to implement unit testing into a brownfield-type system. Be aware I'm relatively new to the unit testing world. It's going to be a gradual migration, of course, because there are just so many areas of pain. The current problem I'm trying to solve is that we followed a lot of bad practices from our VB6 days and in the conversion of our app to .Net. We have lots and lots of shared/static functions which call other shared functions, and those call others, and so on. Sometimes dependencies are passed in as parameters, and sometimes they are just newed up within the calling function. I've already instructed our developers to stop creating shared functions and instead create instance members and only use those instance members off of interfaces, but that doesn't alleviate the current situation. So you must recursively pass in each and every dependency at the top layer for each function in your code path, and method signatures are turning into a mess. I'm hoping this is something that IOC will fix. Currently we are using NUnit/Moq, and I'm starting to investigate StructureMap. So far I understand that you pretty much tell StructureMap: for interface X, I want to default to the concrete class Y:

        ObjectFactory.Initialize(x => {
            x.ForRequestedType<IInterface>().TheDefaultIsConcreteType<MyClass>();
        });

    Then at runtime:

        var mytype = ObjectFactory.GetInstance<IInterface>();

    the IOC container will initialize the correct type for you. Not sure yet how to swap a fake in for the concrete type, but hopefully that's simple. Again, will IOC solve the problems I was talking about above? Is there a specific IOC framework that will do it better than StructureMap, or can they all handle this situation? Any help would be much appreciated.
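
    On the "swap a fake in" question: once the consuming classes take their dependencies through constructors, a unit test usually doesn't need the container at all - it can hand in fakes directly. A sketch using Moq (already in the toolset here; ClassUnderTest and DoSomething are made-up names):

        // The container wires the real graph in production; the test bypasses it.
        var fake = new Mock<IInterface>();
        fake.Setup(f => f.DoSomething()).Returns(42);   // DoSomething is hypothetical

        var sut = new ClassUnderTest(fake.Object);      // constructor injection
        // ... exercise sut, then verify the interaction if needed
        fake.Verify(f => f.DoSomething(), Times.Once());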

    Read the article

  • Externalizing JUnit stub objects

    - by Ajay
    Hi! In my project we created stub objects for JUnit testing in Java itself (factories). However, we now have to externalize these stubs. After looking at a number of serializers/deserializers, we settled on using XStream to serialize and deserialize these stub objects. XStream works like a charm; it's pretty good at what it claims to do. Previously, we had a single factory class, say AFactory, which produced all the stubs needed for testing different test cases:

        public final class AFactory {
            public static A createStub1() { /* Code here */ }
            public static A createStub2() { /* Code here */ }
            public static A createStub3() { /* Code here */ }
        }

    Now, when externalizing each of the stubs generated, we hit a roadblock: we had to create one XML file for each stub produced by the factory (A-stub1.xml, A-stub2.xml and A-stub3.xml). The problem with this approach is that it leads to a proliferation of XML stub files. I was thinking, how about keeping all the stubs related to a single bean class in a single XML file?

        <?xml version="1.0"?>
        <stubs class="A">
            <stub id="stub1">
                <!-- Here comes the externalized xml stub representation -->
            </stub>
            <stub id="stub2">
            </stub>
        </stubs>

    Is there a framework which allows you to keep all the stubs in XML representation in a single XML file, as above? Or what do you guys suggest should be the right approach to adhere to?
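
    I'm not aware of a framework that does exactly this out of the box, but XStream alone can get close if the file is deserialized into a small catalog object and individual stubs are picked out by id. A rough sketch (StubEntry, StubCatalog and the file name are invented for illustration; A is the bean class from the post):

        import com.thoughtworks.xstream.XStream;
        import java.io.FileReader;
        import java.util.ArrayList;
        import java.util.List;

        class StubEntry {
            String id;      // e.g. "stub1"
            A payload;      // the actual externalized stub
        }

        class StubCatalog {
            List<StubEntry> stubs = new ArrayList<StubEntry>();

            A find(String id) {
                for (StubEntry e : stubs) {
                    if (e.id.equals(id)) return e.payload;
                }
                return null;
            }
        }

        public class LoadStubs {
            public static void main(String[] args) throws Exception {
                XStream xstream = new XStream();
                xstream.alias("stubs", StubCatalog.class);
                xstream.alias("stub", StubEntry.class);
                xstream.addImplicitCollection(StubCatalog.class, "stubs");
                xstream.useAttributeFor(StubEntry.class, "id");  // id is an attribute
                StubCatalog catalog = (StubCatalog) xstream.fromXML(new FileReader("A-stubs.xml"));
                A stub1 = catalog.find("stub1");   // one file, many stubs
            }
        }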

    Read the article

  • java.lang.IllegalStateException: missing behavior definition for the preceding method call getMessage

    - by user362199
    Hi all, I'm using EasyMock (version 2.4) and TestNG for writing unit tests. I have the following scenario, and I cannot change the way the class hierarchy is defined. I'm testing ClassB, which extends ClassA. ClassB looks like this:

        public class ClassB extends ClassA {
            public ClassB() {
                super("title");
            }

            @Override
            public String getDisplayName() {
                return ClientMessages.getMessages("ClassB.title");
            }
        }

    ClassA code:

        public abstract class ClassA {
            private String title;

            public ClassA(String title) {
                this.title = ClientMessages.getMessages(title);
            }

            public String getDisplayName() {
                return this.title;
            }
        }

    ClientMessages class code:

        public class ClientMessages {
            private static MessageResourse messageResourse;

            public ClientMessages(MessageResourse messageResourse) {
                this.messageResourse = messageResourse;
            }

            public static String getMessages(String code) {
                return messageResourse.getMessage(code);
            }
        }

    MessageResourse class code:

        public class MessageResourse {
            public String getMessage(String code) {
                return code;
            }
        }

    Testing ClassB:

        import static org.easymock.classextension.EasyMock.createMock;
        import org.easymock.classextension.EasyMock;
        import org.testng.Assert;
        import org.testng.annotations.Test;

        public class ClassBTest {
            private MessageResourse mockMessageResourse = createMock(MessageResourse.class);
            private ClassB classToTest;
            private ClientMessages clientMessages;

            @Test
            public void testGetDisplayName() {
                EasyMock.expect(mockMessageResourse.getMessage("ClassB.title")).andReturn("someTitle");
                clientMessages = new ClientMessages(mockMessageResourse);
                classToTest = new ClassB();
                Assert.assertEquals("someTitle", classToTest.getDisplayName());
                EasyMock.replay(mockMessageResourse);
            }
        }

    When I run this test I get the following exception:

        java.lang.IllegalStateException: missing behavior definition for the preceding method call getMessage("title")

    While debugging, what I found is that it's not considering the mock method call mockMessageResourse.getMessage("ClassB.title"), as it has been called from the constructor (ClassB object creation). Can anyone please help me figure out how to test in this case? Thanks.

    Read the article

  • Basic JUnit Questions

    - by Epitaph
    I was testing a String multiplier class with a multiply() method that takes 2 numbers as inputs (as String) and returns the result number (as String):

        public String multiply(String num1, String num2);

    I have done the implementation and created a test class with test cases involving the input String parameters as:

        1) valid numbers
        2) characters
        3) special symbols
        4) empty string
        5) null value
        6) 0
        7) negative number
        8) float
        9) boundary values
        10) numbers that are valid but whose product is out of range
        11) numbers with a + sign (+23)

    My questions:

    1) I'd like to know if "each and every" assertEquals() should be in its own test method. Or can I group similar test cases like testInvalidArguments() to contain all asserts involving invalid characters, since ALL of them throw the same NumberFormatException?

    2) If testing an input value like a character ("a"), do I need to include test cases for ALL scenarios?
       - "a" as the first argument
       - "a" as the second argument
       - "a" and "b" as the 2 arguments

    3) As per my understanding, the benefit of these unit tests is to find out the cases where input from a user might fail and result in an exception. Then we can give the user a meaningful message (asking them to provide valid input) instead of an exception. Is that correct? And is it the only benefit?

    4) Are the 11 test cases mentioned above sufficient? Did I miss something? Did I overdo it? When is enough enough?

    5) Following from the above point, have I successfully tested the multiply() method?
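
    On question 1, one common JUnit 4 idiom is a dedicated test per behaviour, with the expected exception declared on the annotation; a sketch (the Multiplier class is assumed from the post):

        import org.junit.Test;

        public class MultiplierTest {
            private final Multiplier multiplier = new Multiplier();

            // One behaviour per test: an invalid character should raise
            // NumberFormatException no matter which argument position it is in.
            @Test(expected = NumberFormatException.class)
            public void rejectsNonNumericFirstArgument() {
                multiplier.multiply("a", "1");
            }

            @Test(expected = NumberFormatException.class)
            public void rejectsNonNumericSecondArgument() {
                multiplier.multiply("1", "a");
            }
        }

    Grouping several invalid inputs into one method also works, but then the test passes as soon as the first input throws, and the remaining inputs are never exercised.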

    Read the article

  • Defining JUnit Test Cases Correctly

    - by Epitaph
    I am new to unit testing and therefore wanted to do some practical exercise to get familiar with the JUnit framework. I created a program that implements a String multiplier:

        public String multiply(String number1, String number2)

    In order to test the multiplier method, I created a test suite consisting of the following test cases (with all the needed integer parsing, etc.):

        public class MultiplierTest {
            Multiplier multiplier = new Multiplier();

            // Test for 2 positive integers
            assertEquals("Result", 5, multiplier.multiply("5", "1"));
            // Test for 1 positive integer and 0
            assertEquals("Result", 0, multiplier.multiply("5", "0"));
            // Test for 1 positive and 1 negative integer
            assertEquals("Result", -1, multiplier.multiply("-1", "1"));
            // Test for 2 negative integers
            assertEquals("Result", 10, multiplier.multiply("-5", "-2"));
            // Expected values intentionally left blank -- see question 1 below
            // Test for 1 positive integer and 1 non number
            assertEquals("Result", , multiplier.multiply("x", "1"));
            // Test for 1 positive integer and 1 empty field
            assertEquals("Result", , multiplier.multiply("5", ""));
            // Test for 2 empty fields
            assertEquals("Result", , multiplier.multiply("", ""));
        }

    In a similar fashion, I can create test cases involving boundary cases (considering the numbers are int values) or even imaginary values.

    1) But what should be the expected value for the last 3 test cases above? (A special number indicating an error?)
    2) What additional test cases did I miss?
    3) Is the assertEquals() method enough for testing the multiplier method, or do I need other methods like assertTrue(), assertFalse(), assertSame(), etc.?
    4) Is this the RIGHT way to go about developing test cases? How am I "exactly" benefiting from this exercise?
    5) What should be the ideal way to test the multiplier method?

    I am pretty clueless here. If anyone can help answer these queries I'd greatly appreciate it. Thank you.

    Read the article

  • TDD test data loading methods

    - by Dave Hanson
    I am a TDD newb and I would like to figure out how to test the following code. I am trying to write my tests first, but I am having trouble creating a test that touches my DataAccessor. I can't figure out how to fake it. I've done the "extend the Shipment class and override the Load() method" trick to continue testing the object, but I feel as though I end up unit testing my mock objects/stubs and not my real objects. I thought in TDD the unit tests were supposed to hit ALL of the methods on the object; however, I can never seem to test that Load() code, only the overridden mock Load(). My task was to write an object that contains a list of orders based off of a shipment number. I have an object that loads itself from the database:

        public class Shipment
        {
            // member variables
            protected List<string> _listOfOrders = new List<string>();
            protected string _id = "";

            // public properties
            public List<string> ListOrders
            {
                get { return _listOfOrders; }
            }

            public Shipment(string id)
            {
                _id = id;
                Load();
            }

            // PROBLEM METHOD
            // whenever I write code that needs this Shipment object, this method tries
            // to hit the DB and fubars my tests
            // the only way to get around it is to have all my tests run on a fake Shipment object
            protected void Load()
            {
                _listOfOrders = DataAccessor.GetOrders(_id);
            }
        }

    I create my fake Shipment class to test the rest of the class's methods, but I can't ever test the real Load() method without having an actual DB connection:

        public class FakeShipment : Shipment
        {
            protected new void Load()
            {
                _listOfOrders = new List<string>();
            }
        }

    Any thoughts? Please advise. Dave
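
    One well-worn way out (a sketch, not the poster's code: IDataAccessor is an assumed interface extracted from the static DataAccessor) is to pass the data access in through the constructor. Then the real Load() runs in every test, just against an in-memory implementation:

        public interface IDataAccessor
        {
            List<string> GetOrders(string id);
        }

        public class Shipment
        {
            private readonly IDataAccessor _accessor;
            protected List<string> _listOfOrders = new List<string>();
            protected string _id;

            public Shipment(string id, IDataAccessor accessor)
            {
                _id = id;
                _accessor = accessor;   // tests hand in a fake, production the real one
                Load();
            }

            protected void Load()
            {
                // genuinely exercised by unit tests now -- no DB needed
                _listOfOrders = _accessor.GetOrders(_id);
            }
        }

    A test then supplies a trivial IDataAccessor that returns a canned List<string>, and no FakeShipment subclass is needed at all.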

    Read the article

  • asp.net mvc - How to create fake test objects quickly and efficiently

    - by Simon G
    Hi, I'm currently testing the controller in my MVC app and I'm creating a fake repository for testing. However, I seem to be writing more code and spending more time on the fakes than I do on the actual repositories. Is this right? The code I have is as follows.

    Controller:

        public partial class SomeController : Controller
        {
            IRepository repository;

            public SomeController(IRepository rep)
            {
                repository = rep;
            }

            public virtual ActionResult Index()
            {
                // Some logic
                var model = repository.GetSomething();
                return View(model);
            }
        }

    IRepository:

        public interface IRepository
        {
            List<Something> GetSomething();
        }

    Fake repository:

        public class FakeRepository : IRepository
        {
            private List<Something> somethingList;

            public FakeRepository(List<Something> somethings)
            {
                somethingList = somethings;
            }

            public List<Something> GetSomething()
            {
                return somethingList;
            }
        }

    Fake data:

        class FakeSomethingData
        {
            public static List<Something> CreateSomethingData()
            {
                var somethings = new List<Something>();
                for (int i = 0; i < 100; i++)
                {
                    somethings.Add(new Something
                    {
                        value1 = String.Format("value{0}", i),
                        value2 = String.Format("value{0}", i),
                        value3 = String.Format("value{0}", i)
                    });
                }
                return somethings;
            }
        }

    Actual test:

        [TestClass]
        public class SomethingControllerTest
        {
            SomeController CreateSomethingController()
            {
                var testData = FakeSomethingData.CreateSomethingData();
                var repository = new FakeRepository(testData);
                SomeController controller = new SomeController(repository);
                return controller;
            }

            [TestMethod]
            public void SomeTest()
            {
                // Arrange
                var controller = CreateSomethingController();

                // Act
                // Some test here

                // Assert
            }
        }

    All this seems to be a lot of extra code, especially as I have more than one repository. Is there a more efficient way of doing this? Maybe using mocks? Thanks
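
    Since these fakes mostly hand back canned data, a mocking library can replace most of the hand-written classes; a sketch with Moq (Setup/Returns/Object are its standard API):

        [TestMethod]
        public void SomeTest()
        {
            // Arrange: no FakeRepository class needed at all
            var testData = FakeSomethingData.CreateSomethingData();
            var repository = new Mock<IRepository>();
            repository.Setup(r => r.GetSomething()).Returns(testData);

            var controller = new SomeController(repository.Object);

            // Act / Assert as before
        }

    The fake data builder can stay, but each additional repository then costs one Setup line instead of a whole class.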

    Read the article

  • Web Automation Tool

    - by Aaron
    I've realized I need a full-fledged browser automation tool for testing user interactions with our JavaScript widget library. I was using QUnit, starting with unit testing, and then I unwisely started incorporating more and more functional tests. That was a bad idea: trying to simulate a lot of user actions with JavaScript. The timing issues have gotten out of control and have made the suite too brittle. Now I spend more time fixing the tests than I do developing. Is it possible to find a browser automation tool that works in:

    - Windows XP: IE6, 7, 8, FF3
    - OS X: Safari, FF3

    I've looked into Selenium IDE and RC, but there seem to be some IE8 problems. I've also seen some things about Google's WebDriver, which confusingly seems to work with Selenium. Our organization has licenses for IBM's Rational Functional Tester, but I don't think that will work on the Mac. The idea is to try to run tests on all the browsers our organization supports. Doable? Are my requirements unrealistic? Any recommendations as far as software to try? Thanks!

    Read the article

  • How can I test this SQL Server performance Utility?

    - by Martin Smith
    As part of my MSc I need to do a three month project later this year. I have decided to do something which will likely be useful for me in the workplace and spend the time getting to understand SQL Server internals. The deliverable for this project will be a performance advisor looking at a variety of different rules. Some static such as finding redundant indexes, some more dynamic such as using XEvents to find outlying invocations of stored procedure execution times when certain parameters are passed. I am struggling to come up with a good way of testing this though. I can obviously design a "bad" database and a synthetic workload that my tool will pick up issues on but I also need to demonstrate that it has real world utility. Looking at the self tuning database literature it is common to use TPC benchmarks but I've had a look at the TPCC site and it looks very time consuming to implement and not that good a fit to my project's testing needs in any event (I would still be able to "rig" it by the decisions I made on indexing or physical architecture). Plan A would be to find willing beta tester(s) but in the event that isn't possible I will need a fallback plan. The best idea I have come up with so far is to use the various MS sample applications as examples of real world applications. e.g. http://msftdpprodsamples.codeplex.com/ http://www.asp.net/community/projects/ Does anyone have any better suggestions?

    Read the article

  • Why not lump all service classes into a Factory method (instead of injecting interfaces)?

    - by Andrew
    We are building an ASP.NET project and encapsulating all of our business logic in service classes. Some is in the domain objects, but generally those are rather anemic (due to the ORM we are using; that won't change). To better enable unit testing, we define interfaces for each service and utilize D.I. E.g., here are a couple of the interfaces:

        IEmployeeService
        IDepartmentService
        IOrderService
        ...

    All of the methods in these services are basically groups of tasks, and the classes contain no private member variables (other than references to the dependent services). Before we worried about unit testing, we'd just declare all these classes as static and have them call each other directly. Now we'll set up the class like this if the service depends on other services:

        public class EmployeeService : IEmployeeService
        {
            private readonly IOrderService _orderSvc;
            private readonly IDepartmentService _deptSvc;
            private readonly IEmployeeRepository _empRep;

            public EmployeeService(IOrderService orderSvc,
                                   IDepartmentService deptSvc,
                                   IEmployeeRepository empRep)
            {
                _orderSvc = orderSvc;
                _deptSvc = deptSvc;
                _empRep = empRep;
            }

            // methods down here
        }

    This really isn't usually a problem, but I wonder: why not set up a factory class that we pass around instead? I.e.:

        public class ServiceFactory
        {
            virtual IEmployeeService GetEmployeeService();
            virtual IDepartmentService GetDepartmentService();
            virtual IOrderService GetOrderService();
        }

    Then instead of calling:

        _orderSvc.CalcOrderTotal(orderId)

    we'd call:

        _svcFactory.GetOrderService().CalcOrderTotal(orderId)

    What's the downfall of this method? It's still testable, it still allows us to use D.I. (and handle external dependencies like database contexts and e-mail senders via D.I. within and outside the factory), and it eliminates a lot of D.I. setup and consolidates dependencies more. Thanks for your thoughts!

    Read the article

  • N-tier architecture and unit tests (using Java)

    - by Alexandre FILLATRE
    Hi there, I'd like to have your expert explanations about an architectural question. Imagine a Spring MVC webapp with the validation API (JSR 303). For a request, I have a controller that handles the request, then passes it to the service layer, which passes it to the DAO layer. Here's my question: at which layer should validation occur, and how? My thought is that the controller has to handle basic validation (are mandatory fields empty? is the field length OK? etc.). Then the service layer can do some trickier stuff that involves other objects. The DAO does no validation at all. BUT, if I want to implement some unit testing (i.e. test the layers below, not the controllers), I'll end up with unexpected behavior, because some validations will only have been done in the controller layer, which those tests bypass. What is the best way to deal with this? I know there is no universal answer, but your personal experience is very welcome. Thanks a lot. Regards.
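
    For the basic checks, JSR 303 lets the constraints live on the model itself, so the same rules can be triggered from the controller and re-validated programmatically below it; a sketch (the form class and field are invented):

        import javax.validation.constraints.NotNull;
        import javax.validation.constraints.Size;

        public class CustomerForm {
            @NotNull
            @Size(min = 1, max = 50)   // mandatory field with a length bound
            private String name;

            // getters/setters omitted
        }

        // In a Spring MVC controller, @Valid triggers these constraints, e.g.:
        // public String submit(@Valid CustomerForm form, BindingResult result) { ... }

    Because the annotations travel with the bean rather than the controller, a service-layer test (or the service itself, via a Validator instance) can enforce the same rules, which removes the "controller-only validation" gap described above.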

    Read the article

  • Best way to test a Delphi application

    - by Osama ALASSIRY
    I have a Delphi application that has many dependencies, and it would be difficult to refactor it to use DUnit (it's huge), so I was thinking about using something like AutomatedQA's TestComplete to do the testing from the front-end UI. My main problem is that a bugfix or new feature sometimes breaks old code that was previously tested (manually) and used to work. I have set the application up to use command-line switches to open a specific form that can be tested, and I can create a set of values and clicks that need to be done. But I have a few questions before I do anything drastic (and before purchasing anything):

    1) Is it worth it? Would this be a good way to test?
    2) The results of the tests should end up in my database (Oracle); is there an easy way in TestComplete to check these values (multiple fields in multiple tables)?
    3) I would need to set up a test database to do all the automated testing; would there be an easy way to automate resetting the test DB, other than drop user cascade, create user, ..., impdp?
    4) Is there a way in TestComplete to specify command-line parameters for an exe?

    Does anybody have any similar experiences?
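
    On question 3, the drop/recreate/import cycle can at least be scripted so the reset runs unattended before each suite; a rough sketch (user, password and file names are placeholders):

        -- reset_testdb.sql, run e.g. via: sqlplus system/password @reset_testdb.sql
        DROP USER testuser CASCADE;
        CREATE USER testuser IDENTIFIED BY testpwd;
        GRANT CONNECT, RESOURCE TO testuser;
        EXIT;

    followed by reloading the baseline data with Data Pump:

        impdp system/password SCHEMAS=testuser DUMPFILE=baseline.dmp DIRECTORY=DATA_PUMP_DIR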

    Read the article

  • how often should the entire suite of a system's unit tests be run?

    - by gerryLowry
    Generally, I'm still very much a unit testing neophyte. BTW, you may also see this question on other forums like xUnit.net, et cetera, because it's an important question to me. I apologize in advance for my cross-posting; your opinions are very important to me, and not everyone in this forum belongs to the other forums too. I was looking at a large, decade-old legacy system which has had over 700 unit tests written recently (700 is just a small beginning). The tests happen to be written in MSTest, but this question applies to all testing frameworks AFAIK. When I ran "ALL TESTS" via VS2008, the final count was only seven tests. That's about 1% of the total tests that have been written to date.

    MORE INFORMATION: The ASP.NET MVC 2 RTM source code, including its unit tests, is available on CodePlex; those unit tests are also written in MSTest, even though (an irrelevant fact) Brad Wilson later joined the ASP.NET MVC team as its Senior Programmer. All 2000-plus tests get run, not just a few.

    QUESTION: Given that AFAIK the purpose of unit tests is to identify breakages in the SUT, am I correct in thinking that the "best practice" is to always, or at least very frequently, run all of the tests? Thank you. Regards, Gerry (Lowry)

    Read the article

  • Unit tests - The benefit from unit tests with contract changes?

    - by Stefan Hendriks
    Recently I had an interesting discussion with a colleague about unit tests. We were discussing when maintaining unit tests becomes less productive, once your contracts change. Perhaps someone can enlighten me on how to approach this problem. Let me elaborate: let's say there is a class which does some nifty calculations. The contract says that it should calculate a number, or return -1 when it fails for some reason. I have contract tests that test that. And in all my other tests I stub this nifty calculator thingy. So now I change the contract: whenever it cannot calculate, it will throw a CannotCalculateException. My contract tests will fail, and I will fix them accordingly. But all my mocked/stubbed objects will still use the old contract rules. These tests will succeed, while they should not! The question that arises: with this faith in unit testing, how much faith can be placed in such changes... The unit tests succeed, but bugs will occur when testing the application. The tests using this calculator will need to be fixed, which costs time, and it may even be stubbed/mocked in a lot of places... How do you think about this case? I never thought about it thoroughly. In my opinion, these changes to unit tests would be acceptable. If I did not use unit tests, I would also see such bugs arise within the test phase (by testers). Yet I am not confident enough to point out which will cost more time (or less). Any thoughts?

    Read the article

  • How to (unit-)test a data-intensive PL/SQL application

    - by doom2.wad
    Our team is willing to unit-test new code written under a running project extending an existing, huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored-procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is no exception. Most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it), or transform one complicated data structure into another (complicated in a different way). What is the best approach to unit-test such code? There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" stuff) and necessary data transformations are deployed via a channel other than version control. There is no way to change this within our project's scope. Maintaining a testing data set seems to be impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing the data structure (add a column here, remove one there). I'd be glad for any suggestion or reference to help us. Some team members tend to be tired of figuring out how to even start, for our experience with unit testing does not cover PL/SQL data-intensive legacy systems (only those "from-the-book" greenfield Java projects).

    Read the article

  • Prove correctness of unit test

    - by Timo Willemsen
    I'm creating a graph framework for learning purposes. I'm using a TDD approach, so I'm writing a lot of unit tests. However, I'm still figuring out how to prove the correctness of my unit tests. For example, I have this class (not including the implementation, and I have simplified it):

        public class SimpleGraph {
            // Returns true on success
            public boolean addEdge(Vertex v1, Vertex v2) { ... }

            // Returns true on success
            public boolean addVertex(Vertex v1) { ... }
        }

    I have also created this unit test:

        @Test
        public void SimpleGraph_addVertex_noSelfLoopsAllowed() {
            SimpleGraph g = new SimpleGraph();
            Vertex v1 = new Vertex("Vertex 1");
            g.addVertex(v1);

            boolean expected = false;
            boolean actual = g.addEdge(v1, v1);
            Assert.assertEquals(expected, actual);
        }

    Okay, awesome, it works. There is only one crux here: I have proved that the function works for this case only. However, in my graph theory courses, all I'm doing is proving theorems mathematically (induction, contradiction, etc.). So I was wondering: is there a way I can mathematically prove my unit tests correct? Is there a good practice for this, so that we are testing the unit for correctness instead of testing it for one certain outcome?

    Read the article
