Search Results

Search found 4783 results on 192 pages for 'tests'.

Page 27 of 192

  • Can I configure NUnit so that Debug.Fail doesn't show a message box when I run my tests?

    - by panamack
    I have this property:

        public SubjectStatus Status
        {
            get { return status; }
            set
            {
                if (Enum.IsDefined(typeof(SubjectStatus), value))
                {
                    status = value;
                }
                else
                {
                    Debug.Fail("Error setting Subject.Status",
                               "There is no SubjectStatus enum constant defined for that value.");
                    return;
                }
            }
        }

    and this unit test:

        [Test]
        public void StatusProperty_StatusChangedToValueWithoutEnumDefinition_StatusUnchanged()
        {
            Subject subject = new TestSubjectImp("1");
            // assigned by casting from an int to a defined value
            subject.Status = (SubjectStatus)2;
            Assert.AreEqual(SubjectStatus.Completed, subject.Status);
            // assigned by casting from an int to an undefined value
            subject.Status = (SubjectStatus)100;
            // no change to previous value
            Assert.AreEqual(SubjectStatus.Completed, subject.Status);
        }

    Is there a way I can prevent Debug.Fail displaying a message box when I run my tests, but allow it to show me one when I debug my application?
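    One common way to get that behaviour (a hedged sketch, not from the original question) is to switch off the DefaultTraceListener's assertion UI inside the NUnit fixture's setup. Debug.Fail then writes the failure message to the trace output instead of opening a dialog during a test run, while a normal debug session of the application is unaffected:

        // inside the NUnit TestFixture (assumes using System.Diagnostics and NUnit.Framework)
        [SetUp]
        public void DisableAssertDialogs()
        {
            var listener = Debug.Listeners["Default"] as DefaultTraceListener;
            if (listener != null)
            {
                // Debug.Fail now writes to the trace output instead of showing a message box
                listener.AssertUiEnabled = false;
            }
        }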

    Read the article

  • Is problem solving of puzzles/logic tests a skill that can be developed with practise or only someth

    - by dotnetdev
    Programming is essentially problem solving and applying a lot of logic. Is solving puzzles (like the ones recruiters such as MS ask) a skill that can be developed with practice, or is it a skill that only someone who is gifted has? (I assume the former, since many people can pass these tests.) Even so, I keep thinking it is a special skill for the gifted rather than something gained through practice. I guess that with practice you become more open-minded and start to think outside the box more (solving technical problems in development may also foster this mindset). Thanks

    Read the article

  • Do Django tests run slower on the Mac compared to Linux?

    - by Thierry Lam
    I'm currently developing my Django projects on both:

        Mac OS X 10.5, 32-bit
        Ubuntu Server 9.10, 64-bit (1 CPU, 512MB RAM)

    Both of the above OSes are using Python 2.6.4, Django 1.1.1 and MySQL 5.1. Running 12 tests for one of my applications takes:

        Mac: 57.513s
        Linux: 30.935s

    EDIT: Mac hardware spec: MacBook Pro, 2.2 GHz Intel Core 2 Duo, 3GB RAM. I'm running the Ubuntu OS on the same Mac above through VMware Fusion 2.0.6. You might argue that Ubuntu Server 64-bit is faster, but I have observed a similar speed difference on Ubuntu 8.10 32-bit desktop edition. Even if I turn off my Linux VM and other Mac applications, I still experience the slowness. Has anyone else experienced this Django test speed difference across those two OSes?

    Read the article

  • What basic things should you keep in mind while writing functional tests?

    - by piemesons
    Hello, what kind of functionality should be covered by functional tests? How do you prioritize which functionality must be covered? I understand it depends on the project and what functionality is present in it. Let's take the example of Stack Overflow: suppose we are developing a basic working model of it. What kind of functional tests must be covered there? Just briefly, what are the key points to keep in mind while writing functional tests? (Take any basic functionality from this site just to illustrate.) The question is otherwise platform independent, but I am a Ruby on Rails developer, so keeping Ruby on Rails in mind would be preferable.
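    For a Stack Overflow-style Rails app, one reasonable starting point is to cover the core user-visible flows (browsing, asking, answering, authentication rules) before anything else. A minimal Rails 2.x-style sketch, in which the controller, routes and parameters are assumptions rather than anything from the question:

        require 'test_helper'

        class QuestionsControllerTest < ActionController::TestCase
          test "index lists questions" do
            get :index
            assert_response :success
            assert_not_nil assigns(:questions)
          end

          test "a guest cannot post a question" do
            post :create, :question => { :title => "How do I X?", :body => "..." }
            assert_redirected_to new_session_path   # hypothetical login route
          end
        end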

    Read the article

  • Getting Assert to work in Visual C++ Unit Tests?

    - by garsh0p
    I'm using Visual Studio 2008's built-in testing framework in my Visual C++ project. I'm adding a new Test Project, then a new Unit Test. However, I can't use any of the functions provided by Assert. Assert shows up in the Intellisense, but I can't do anything with it. I've done unit tests fine in Visual C#. Am I forgetting to do anything? EDIT: There isn't much code because everything I'm doing is auto-generated by Visual Studio 2008. Here are the steps I'm doing:

        1. File - New Project - Visual C++ - General - Empty Project
        2. Right click solution in Solution Explorer - Add - New Project...
        3. Visual C++ - Test - Test Project
        4. Open UnitTest1.cpp (auto-generated)
        5. Go to TestMethod1()

    From here, when I try to use the Assert class (like Assert.AreEqual), I can't do it. If I do the same in a Visual C# project, it works fine.
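    A Visual C++ test project generates C++/CLI code, so the managed Assert class is reached with the scope operator rather than a dot. A hedged sketch of what the generated test might look like once that is in place (it assumes the project reference to the unit test framework that the wizard normally adds):

        #include "stdafx.h"
        using namespace System;
        using namespace Microsoft::VisualStudio::TestTools::UnitTesting;

        [TestClass]
        public ref class UnitTest1
        {
        public:
            [TestMethod]
            void TestMethod1()
            {
                // C++/CLI uses :: to reach the managed Assert members,
                // so Assert.AreEqual from C# becomes Assert::AreEqual here
                Assert::AreEqual(4, 2 + 2);
                Assert::IsTrue(true);
            }
        };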

    Read the article

  • File.Exists("SDF File Path") returns a FALSE when run through Unit tests even though it exists in th

    - by shiva-hv
    I am writing unit tests for a Windows project. The executable project on the client side of this Windows project has code that calls File.Exists("LanguageLookups.sdf"), which is used to check and return a bool indicating whether the sdf file exists in the execution directory or not. But when I execute the same piece of code through a unit test, File.Exists("LanguageLookups.sdf") returns FALSE. It's not able to find this SDF file. Can anybody help me with this?
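    A relative path like that is resolved against the test host's directory, which is usually not the client executable's output folder. If these are Visual Studio (MSTest) tests, one hedged way around it is to deploy the file with the test run and resolve the path explicitly; the file name comes from the question, everything else here is an assumption:

        using System;
        using System.IO;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class LookupFileTests
        {
            [TestMethod]
            [DeploymentItem("LanguageLookups.sdf")]   // copies the .sdf into the test run directory
            public void SdfFile_IsFoundInTheTestRunDirectory()
            {
                string path = Path.Combine(AppDomain.CurrentDomain.BaseDirectory,
                                           "LanguageLookups.sdf");
                Assert.IsTrue(File.Exists(path));
            }
        }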

    Read the article

  • What's the best way to avoid try...catch...finally... in my unit tests?

    - by Bruce Li
    I'm writing many unit tests in VS 2010 with Microsoft Test. In each test class I have many test methods similar to the one below:

        [TestMethod]
        public void This_is_a_Test()
        {
            try
            {
                // do some test here
                // assert
            }
            catch (Exception ex)
            {
                // test failed, log error message in my log file and make the test fail
            }
            finally
            {
                // do some cleanup with different parameters
            }
        }

    When each test method looks like this I feel it's kind of ugly. But so far I haven't found a good solution to make my test code cleaner, especially the cleanup code in the finally block. Could someone here give me some advice on this? Thanks in advance.
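    A hedged sketch of the usual MSTest shape: let an unexpected exception fail the test on its own (the framework already records it), and move the cleanup into a [TestCleanup] method, stashing whatever parameters it needs in fields during the test. The field and its use are assumptions standing in for the "different parameters" mentioned above:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class MyTests
        {
            private string cleanupParameter;   // hypothetical state the cleanup needs

            [TestMethod]
            public void This_is_a_Test()
            {
                cleanupParameter = "whatever this test created";
                // act and assert only; no try/catch/finally needed
            }

            [TestCleanup]
            public void Cleanup()
            {
                // runs after every test, whether it passed or failed; use cleanupParameter here
            }
        }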

    Read the article

  • Is it a bad idea to create tests that rely on each other within a test fixture?

    - by nbolton
    For example:

        // NUnit-like pseudo code (within a TestFixture)
        Ctor()
        {
            m_globalVar = getFoo();
        }

        [Test]
        Create()
        {
            a(m_globalVar)
        }

        [Test]
        Delete()
        {
            // depends on Create being run
            b(m_globalVar)
        }

    ... or ...

        // NUnit-like pseudo code (within a TestFixture)
        [Test]
        CreateAndDelete()
        {
            Foo foo = getFoo();
            a(foo);
            // depends on Create being run
            b(foo);
        }

    ... I'm going with the latter, and assuming that the answer to my question is: No, at least not with NUnit, because according to the NUnit manual: "The constructor should not have any side effects, since NUnit may construct the class multiple times in the course of a session." ... Also, can I assume it's bad practice in general? Since tests can usually be run separately, the result of Create may never be cleaned up by Delete.
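    One hedged way to keep the tests independent while still exercising both operations is to give each test its own state through [SetUp]/[TearDown], so Delete sets up its own precondition instead of relying on Create having run first. The sketch below reuses the placeholder names from the pseudo code above, so it is not compilable as-is:

        [TestFixture]
        public class FooTests
        {
            private Foo m_foo;

            [SetUp]
            public void PerTestSetUp()   { m_foo = getFoo(); }    // fresh state for every test

            [TearDown]
            public void PerTestCleanUp() { /* dispose or delete m_foo here */ }

            [Test]
            public void Create()         { a(m_foo); }

            [Test]
            public void Delete()         { a(m_foo); b(m_foo); }  // creates, then deletes, in one test
        }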

    Read the article

  • Is it a good idea to write tests for environments other than development?

    - by jcollum
    Let's say I have a (fairly typical) set of environments: PROD, UAT, QA, DEV. Is it a good idea to run your tests across all environments? Here's what I'm thinking of. I have a proc in SQL that my code depends on; I'll call it proc_getActiveCustomers. If that proc isn't present my app will go south real fast. So I write a test that checks for the existence of this proc in the database. Nothing new here. But when I then deploy my app to the QA environment, would I also want to have a test that checks that environment for the existence of proc_getActiveCustomers? I think this is a good idea, but I've never heard much about testing in environments outside of development. Makes me wonder if there's some downside I'm not aware of. The direction I'm going is to have a list of environments in code and then pass that environment into my unit test.
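    A hedged sketch of that last idea: an NUnit test parameterised by environment name that looks the connection string up from config and checks the proc is deployed. The connection string names and the query are assumptions, not from the question:

        using System.Configuration;
        using System.Data.SqlClient;
        using NUnit.Framework;

        [TestFixture]
        public class EnvironmentSmokeTests
        {
            [TestCase("DEV")]
            [TestCase("QA")]
            [TestCase("UAT")]
            public void proc_getActiveCustomers_exists_in(string environment)
            {
                var cs = ConfigurationManager.ConnectionStrings[environment].ConnectionString;

                using (var conn = new SqlConnection(cs))
                using (var cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM sys.procedures WHERE name = 'proc_getActiveCustomers'", conn))
                {
                    conn.Open();
                    Assert.AreEqual(1, (int)cmd.ExecuteScalar());
                }
            }
        }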

    Read the article

  • Does Visual Studio run Tests with a less privileged process?

    - by Filip Ekberg
    I have an application that is supposed to read from the Registry and when executing a console application my registry access works perfectly. However when I move it over to a test this returns null: var masterKey = Registry.LocalMachine.OpenSubKey("path_to_my_key"); So my question is: Does Visual Studio run Tests with a less privileged process? I tested to see what user this gave me: var x = WindowsIdentity.GetCurrent().Name; and it gives me the same as in the console application. So I am a bit confused here.
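    It may not be a privilege issue at all; one common cause (a guess, not something confirmed in the question) is that the test host runs as a 32-bit process on a 64-bit machine, so HKLM reads get redirected to Wow6432Node and the key looks missing. On .NET 4 you can request the 64-bit view explicitly:

        using Microsoft.Win32;

        var baseKey   = RegistryKey.OpenBaseKey(RegistryHive.LocalMachine,
                                                RegistryView.Registry64);
        var masterKey = baseKey.OpenSubKey("path_to_my_key");   // same path as in the question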

    Read the article

  • Where do you put your unit test?

    - by soulmerge
    I have found several conventions for housekeeping unit tests in a project and I'm not sure which approach would be suitable for our next PHP project. I am trying to find the best convention to encourage easy development and accessibility of the tests when reviewing the source code. I would be very interested in your experience/opinion regarding each:

    One folder for productive code, another for unit tests: This separates unit tests from the logic files of the project. This separation of concerns is as much a nuisance as it is an advantage: someone looking into the source code of the project will - so I suppose - either browse the implementation or the unit tests (or more commonly: the implementation only). The advantage of unit tests being another viewpoint on your classes is lost - those two viewpoints are just too far apart IMO.

    Annotated test methods: Any modern unit testing framework I know allows developers to create dedicated test methods, annotating them (@test) and embedding them in the project code. The big drawback I see here is that the project files get cluttered. Even if these methods are separated using a comment header (like UNIT TESTS below this line) it just bloats the class unnecessarily.

    Test files within the same folders as the implementation files: Our file naming convention dictates that PHP files containing classes (one class per file) should end with .class.php. I could imagine that putting unit tests for a class file into another one ending in .test.php would make the tests much more present to other developers without tainting the class. Although it bloats the project folders instead of the implementation files, this is my favorite so far, but I have my doubts: I would think others have come up with this already and discarded the option for some reason (i.e. I have not seen a Java project with the files Foo.java and FooTest.java within the same folder). Maybe it's because Java developers make heavier use of IDEs that allow them easier access to the tests, whereas in PHP no big editors have emerged (like Eclipse for Java) - many devs I know use vim/emacs or similar editors with little support for PHP development per se.

    What is your experience with any of these unit test placements? Do you have another convention I haven't listed here? Or am I just overrating unit test accessibility to reviewers?

    Read the article

  • SQL Server with XML and selecting child nodes

    - by Zenox
    I have the following XML:

        <tests>
          <test>1</test>
          <test>2</test>
          <test>3</test>
        </tests>

    And I am trying the following query:

        CREATE PROCEDURE [dbo].[test]
            @Tests xml = null
        AS
        BEGIN
            SELECT doc.col.value('(test)[1]', 'nvarchar(50)')
            FROM @Tests.nodes('//tests') AS doc(col)
        END

    But it only returns a value from the first <test> element. What am I missing here?
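    The usual fix (a hedged sketch, but it follows the standard .nodes() pattern) is to shred on the repeating element so you get one row per <test>, then read each node's own text instead of always taking element [1]:

        CREATE PROCEDURE [dbo].[test]
            @Tests xml = null
        AS
        BEGIN
            -- one row per <test> element
            SELECT doc.col.value('.', 'nvarchar(50)')
            FROM @Tests.nodes('/tests/test') AS doc(col);
        END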

    Read the article

  • Passthrough Objects – Duck Typing++

    - by EltonStoneman
    [Source: http://geekswithblogs.net/EltonStoneman]

    Can't see a genuine use for this, but I got the idea in my head and wanted to work it through. It's an extension to the idea of duck typing, for scenarios where types have similar behaviour, but implemented in differently-named members. So you may have a set of objects you want to treat as an interface, which don't implement the interface explicitly, and don't have the same member names, so they can't be duck-typed into implicitly implementing the interface. In a fictitious example, I want to call Get on whichever ICache implementation is current, and have the call passed through to the relevant method - whether it's called Read, Retrieve or whatever. A sample implementation is up on github here: PassthroughSample. This uses Castle's DynamicProxy behind the scenes in the same way as my duck typing sample, but allows you to configure the passthrough to specify how the inner (implementation) and outer (interface) members are mapped:

        var setup = new Passthrough();
        var cache = setup.Create("PassthroughSample.Tests.Stubs.AspNetCache, PassthroughSample.Tests")
                         .WithPassthrough("Name", "CacheName")
                         .WithPassthrough("Get", "Retrieve")
                         .WithPassthrough("Set", "Insert")
                         .As<ICache>();

    - or using some ugly lambdas to avoid the strings:

        Expression<Func<ICache, string, object>> get = (o, s) => o.Get(s);
        Expression<Func<Memcached, string, object>> read = (i, s) => i.Read(s);
        Expression<Action<ICache, string, object>> set = (o, s, obj) => o.Set(s, obj);
        Expression<Action<Memcached, string, object>> insert = (i, s, obj) => i.Put(s, obj);

        ICache cache = new Passthrough<ICache, Memcached>()
                        .Create()
                        .WithPassthrough(o => o.Name, i => i.InstanceName)
                        .WithPassthrough(get, read)
                        .WithPassthrough(set, insert)
                        .As();

    - or even in config:

        ICache cache = Passthrough.GetConfigured<ICache>();
        ...
        <passthrough>
          <types>
            <type name="PassthroughSample.Tests.Stubs.ICache, PassthroughSample.Tests"
                  passesThroughTo="PassthroughSample.Tests.Stubs.AppFabricCache, PassthroughSample.Tests">
              <members>
                <member name="Name" passesThroughTo="RegionName"/>
                <member name="Get" passesThroughTo="Out"/>
                <member name="Set" passesThroughTo="In"/>
              </members>
            </type>

    Possibly useful for injecting stubs for dependencies in tests, when your application code isn't using an IoC container. Possibly it also has an alternative implementation using .NET 4.0 dynamic objects, rather than the dynamic proxy.

    Read the article

  • Getting from a user-story to code while using TDD (scrum)

    - by Ittai
    I'm getting into Scrum and TDD and I think I have some confusion which I'd like your feedback about. Let's assume I have a user story in my backlog; in order for me to start developing it as part of TDD I need to have requirements, right so far? Is it true to say that the product manager and QA should be responsible for taking the user story and breaking it down into acceptance tests? I think the above is true since the acceptance tests need to be formal, so they can be used as tests, but also human readable, so that the product manager can approve that they are the requirements, right? Is it also true that I later take these acceptance tests and use them as my requirements, i.e. they are a set of use cases which I implement (through TDD)? I hope I'm not making too much of a mess, but that's the current flow I have in mind right now.

    Update: I think my initial intentions were unclear, so I'll try to rephrase. I want to know more details about the Scrum flow of turning a user story into code while using TDD. The starting point is obvious: a user surfaces a need (or the user's representative, such as the product manager), which becomes a short 1-2 line description in the known format and is added to the product backlog. When there is a sprint planning meeting, user stories are taken from the backlog and assigned to developers. In order for a developer to write code they need requirements (especially in TDD, since the requirements are what the tests are derived from). When, by whom and in which format are the requirements compiled? What I had in mind was that the product manager and QA define the requirements via acceptance tests (I'm thinking of automated ones using FitNesse or the like, but that's not the core issue), which serve two purposes at the same time: they define "done" properly, and they give a developer something to derive tests from. I wasn't sure when these should be written (if before the sprint in which the story is picked, that might be a waste, since additional information will arrive or the story might not be picked; if during the iteration, the developer might get stuck waiting for them...)

    Read the article

  • Windows 2003 server: can't join domain

    - by phill
    I originally tried to rejoin a computer to a network, which led to a "cannot find domain" error. The username/password box doesn't even come up. Some tests I ran: I can ping the server, however I can't ping the domain name domain1.local. nslookup can't find the domain either. It looks to the ISP's DNS instead of my own to resolve the local machines. So I go to the DNS server and run netdiag.exe, which gives me this error:

        DNS test . . . . . . . . . . . . . : Failed
          [WARNING] Cannot find a primary authoritative DNS server for the name
            'stmartinsrv.stmartin.local.'. [RCODE_SERVER_FAILURE]
            The name 'srv.domain1.local.' may not be registered in DNS.
          [WARNING] The DNS entries for this DC are not registered correctly on DNS
            server '68.94.156.1'. Please wait for 30 minutes for DNS server replication.
          [WARNING] The DNS entries for this DC are not registered correctly on DNS
            server '68.94.157.1'. Please wait for 30 minutes for DNS server replication.
          [FATAL] No DNS servers have the DNS records for this DC registered.

        Redir and Browser test . . . . . . : Passed
          List of NetBt transports currently bound to the Redir
            NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B}
          The redir is bound to 1 NetBt transport.
          List of NetBt transports currently bound to the browser
            NetBT_Tcpip_{04BB0F6B-06AE-4D60-80C8-2A7A24C1D87B}
          The browser is bound to 1 NetBt transport.

    Then running dcdiag:

        C:\Program Files\Support Tools>dcdiag

        Domain Controller Diagnosis

        Performing initial setup:
           Done gathering initial info.

        Doing initial required tests
           Testing server: Default-First-Site-Name\SRV
              Starting test: Connectivity
                 The host 1c99f63c-49ec-40db-b3d3-6265c00fbd3e._msdcs.domain1.local could not
                 be resolved to an IP address. Check the DNS server, DHCP, server name, etc.
                 Although the Guid DNS name
                 (1c99f63c-49ec-40db-b3d3-6265c00fbd3e._msdcs.domain1.local) couldn't be
                 resolved, the server name (srv.domain1.local) resolved to the IP address
                 (192.168.1.21) and was pingable. Check that the IP address is registered
                 correctly with the DNS server.
                 ......................... SRV failed test Connectivity

        Doing primary tests
           Testing server: Default-First-Site-Name\SRV
              Skipping all tests, because server SRV is not responding to directory
              service requests

           Running partition tests on : ForestDnsZones
              Starting test: CrossRefValidation
                 ......................... ForestDnsZones passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... ForestDnsZones passed test CheckSDRefDom

           Running partition tests on : DomainDnsZones
              Starting test: CrossRefValidation
                 ......................... DomainDnsZones passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... DomainDnsZones passed test CheckSDRefDom

           Running partition tests on : Schema
              Starting test: CrossRefValidation
                 ......................... Schema passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... Schema passed test CheckSDRefDom

           Running partition tests on : Configuration
              Starting test: CrossRefValidation
                 ......................... Configuration passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... Configuration passed test CheckSDRefDom

           Running partition tests on : domain1
              Starting test: CrossRefValidation
                 ......................... domain1 passed test CrossRefValidation
              Starting test: CheckSDRefDom
                 ......................... domain1 passed test CheckSDRefDom

           Running enterprise tests on : domain1.local
              Starting test: Intersite
                 ......................... domain1.local passed test Intersite
              Starting test: FsmoCheck
                 ......................... domain1.local passed test FsmoCheck

    From previous postings, I've tried adding the domain suffix to the NIC IP properties on both the client machine and the DC server, which didn't help. Note: there is only one NIC on the server. Any ideas? Thanks in advance.

    Read the article

  • Thoughts on Thoughts on TDD

    Brian Harry wrote a post entitled Thoughts on TDD that I thought I was going to let lie, but I find that I need to write a response. I find myself in agreement with Brian on many points in the post, but I disagree with his conclusion.

    Not surprisingly, I agree with the things that he likes about TDD. Focusing on the usage rather than the implementation is really important, and this is important whether you use TDD or not. And YAGNI was a big theme in my Seven Deadly Sins of Programming series.

    Now, on to what he doesn't like. He says that he finds it inefficient to have tests that he has to change every time he refactors. Here is where we part company. If you are having to do a lot of test rewriting (say, more than a couple of minutes' work to get back to green) *often* when you are refactoring your code, I submit that either you are testing things that you don't need to test (internal details rather than external behaviour), your code perhaps isn't as decoupled as it could be, or maybe you need a visit to refactorers anonymous.

    I also like to refactor like crazy, but as we all know, the huge downside of refactoring is that we often break things. Important things. Subtle things. Which makes refactoring risky. *Unless* we have a set of tests that have great coverage. And TDD (or Example-based Design, which I prefer as a term) gives those to us. Now, I don't know what sort of coverage Brian gets with the unit tests that he writes, but I do know that for the majority of the developers I've worked with - and I count myself in that bucket - the coverage of unit tests written afterwards is considerably inferior to the coverage of unit tests that come from TDD.

    For me, it all comes down to the answer to the following question: how do you ensure that your code works now and will continue to work in the future? I'm willing to put up with a little inefficiency on the front side to get that benefit later. It's not the writing of the code that's the expensive part, it's everything else that comes after. I don't think that stepping through test cases in the debugger gets you what you want. You can verify what the current behavior is, sure, and do it fairly cheaply, but you don't help the guy in the future who doesn't know what conditions were important if he has to change your code.

    The second thing he doesn't like is "backing into an architecture" (go read his post to see what he means). I've certainly had to work with code that was like this before, and it's a nightmare - the code that nobody wants to touch. But that's not at all the kind of code that you get with TDD, because if you're doing it right you're doing the "write a failing test, make it pass, refactor" approach. Now, you may miss some useful refactorings and generalizations this way, but if you do, you can refactor later because you have the tests that make it safe to do so, and your code tends to be easy to refactor because the same things that make code easy to write unit tests for make it easy to refactor.

    I also think Brian is missing an important point. We aren't all as smart as he is. I'm reminded a bit of the lesson of Intentional Programming, Charles Simonyi's paradigm for making programming easier. I played around with Intentional Programming when it was young, and came to the conclusion that it was a pretty good thing if you were as smart as Simonyi is, but it was pretty much a disaster if you were an average developer.

    In this case, TDD gives you a way to work your way into a good, flexible, and functional architecture when you don't have somebody of Brian's talents to help you out. And that's a good thing.

    Read the article

  • Emacs equivalent of this Vim command to run my tests?

    - by eightbitraptor
    I'm using Emacs at the moment and experimenting with it for my Rails development, and there is one thing that I do quite regularly in Vim; I'd like to know if an equivalent exists in Emacs, or an alternative workflow to achieve the behavior that I need. The command in Vim is:

        :map ;t :!rspec --no-color %<cr>

    Essentially this maps a key combination to run a bash/shell command on the file represented by the current buffer (% expands to the filename at runtime, and the <cr> is just a carriage return at the end to execute the command). I map all sorts of random little commands as and when I need them and I really miss the immediacy of this approach. How can I achieve something similar?
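    A minimal Emacs Lisp sketch of one way to do this (the key binding and function name are assumptions): wrap the shell command in a small interactive function that uses the current buffer's file name, run it through compile so the output lands in a navigable buffer, and bind it to a key:

        (defun my-run-rspec-on-current-file ()
          "Run rspec --no-color on the file visited by the current buffer."
          (interactive)
          (compile (concat "rspec --no-color "
                           (shell-quote-argument (buffer-file-name)))))

        (global-set-key (kbd "C-c t") 'my-run-rspec-on-current-file)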

    Read the article

  • Django's self.client.login(...) does not work in unit tests

    - by thebossman
    I have created users for my unit tests in two ways:

    1) Create a fixture for "auth.user" that looks roughly like this:

        {
            "pk": 1,
            "model": "auth.user",
            "fields": {
                "username": "homer",
                "is_active": 1,
                "password": "sha1$72cd3$4935449e2cd7efb8b3723fb9958fe3bb100a30f2",
                ...
            }
        }

    I've left out the seemingly unimportant parts.

    2) Use 'create_user' in the setUp function (although I'd rather keep everything in my fixtures class):

        def setUp(self):
            User.objects.create_user('homer', '[email protected]', 'simpson')

    Note that the password is simpson in both cases. I've verified that this info is correctly being loaded into the test database time and time again. I can grab the User object using User.objects.get. I can verify the password is correct using 'check_password.' The user is active. Yet, invariably,

        self.client.login(username='homer', password='simpson')

    FAILS. I'm baffled as to why. I think I've read every single Internet discussion pertaining to this. Can anybody help? The login code in my unit test looks like this:

        login = self.client.login(username='homer', password='simpson')
        self.assertTrue(login)

    Thanks.
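    A hedged sketch of the checks that usually resolve this (none of it confirmed by the original post): make sure the password stored in the test database is a hash that actually corresponds to 'simpson' by creating the user in setUp, and confirm the login plumbing (session app, auth backends) is enabled in the settings used for tests:

        from django.contrib.auth.models import User
        from django.test import TestCase

        class LoginTest(TestCase):
            def setUp(self):
                # create_user hashes the password for us; 'homer@example.com' is a placeholder
                self.user = User.objects.create_user('homer', 'homer@example.com', 'simpson')

            def test_login(self):
                # fails if 'django.contrib.sessions' is missing from INSTALLED_APPS
                # or an authentication backend rejects the user
                self.assertTrue(self.client.login(username='homer', password='simpson'))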

    Read the article

  • PHP unit tests for a controller return no errors and no success message.

    - by Mallika Iyer
    I'm using the Zend modular directory structure, i.e.

        application/
            modules/
                users/
                    controllers/
                    ...
                lessons/
                reports/
                blog/

    I have a unit test for a controller in 'blog' that goes something like the section of code below. I'm definitely doing something very wrong, or missing something, as when I run the test I get no error and no success message (the kind that usually goes ...OK (2 tests, 2 assertions)). I get all the text from layout.phtml, where I have the global site layout. This is my first endeavor writing a unit test for the Zend MVC structure, so probably I'm missing something important? Here goes...

        require_once '../../../../public/index.php';
        require_once '../../../../application/Bootstrap.php';
        require_once '../../../../application/modules/blog/controllers/BrowseController.php';
        require_once '../../../TestConfiguration.php';

        class Blog_BrowseControllerTest extends Zend_Test_PHPUnit_ControllerTestCase
        {
            public function setUp()
            {
                $this->bootstrap = array($this, 'appBootstrap');
                Blog_BrowseController::setUp();
            }

            public function appBootstrap()
            {
                require_once dirname(__FILE__) . '/../../bootstrap.php';
            }

            public function testAction()
            {
                $this->dispatch('/');
                $this->assertController('browse');
                $this->assertAction('index');
            }

            public function tearDown()
            {
                $this->resetRequest();
                $this->resetResponse();
                Blog_BrowseController::tearDown();
            }
        }
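    Two things in that fixture commonly cause this symptom (a hedged guess, not a confirmed diagnosis): setUp() calls Blog_BrowseController::setUp() instead of parent::setUp(), so the Zend test case never initialises its request/response/front controller plumbing, and dispatching '/' may not hit the blog module at all. A sketch of the usual shape, where the application.ini path and the route are assumptions:

        class Blog_BrowseControllerTest extends Zend_Test_PHPUnit_ControllerTestCase
        {
            public function setUp()
            {
                $this->bootstrap = new Zend_Application(
                    'testing',
                    APPLICATION_PATH . '/configs/application.ini'
                );
                parent::setUp();   // lets Zend_Test set up its own MVC plumbing
            }

            public function testIndexAction()
            {
                $this->dispatch('/blog/browse/index');
                $this->assertModule('blog');
                $this->assertController('browse');
                $this->assertAction('index');
            }
        }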

    Read the article

  • Why should I bother with unit testing if I can just use integration tests?

    - by CodeGrue
    Ok, I know I am going out on a limb making a statement like that, so my question is for everyone to convince me I am wrong. Take this scenario: I have method A, which calls method B, and they are in different layers. So I unit test B, which delivers null as a result. So I test that null is returned, and the unit test passes. Nice. Then I unit test A, which expects an empty string to be returned from B. So I mock the layer B is in, an empty string is returned, the test passes. Nice again. (Assume I don't realize the relationship of A and B, or that maybe two different people are building these methods.) My concern is that we don't find the real problem until we test A and B together, i.e. integration testing. Since an integration test provides coverage over the unit test area, it seems like a waste of effort to build all these unit tests that really don't tell us anything (or very much) meaningful. Why am I wrong?
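    A hedged illustration of the scenario described above (all names here are invented): the unit test for A only proves that A behaves correctly *if* B returns an empty string, so the stubbed expectation is really a statement about B's contract, and only a test that exercises A and B together (or a contract test shared by both) exposes the disagreement:

        using NUnit.Framework;

        public interface ILayerB { string GetValue(); }

        public class LayerA
        {
            private readonly ILayerB b;
            public LayerA(ILayerB b) { this.b = b; }
            public int ValueLength() { return b.GetValue().Length; }   // assumes B never returns null
        }

        public class StubB : ILayerB
        {
            public string Value;
            public string GetValue() { return Value; }
        }

        [TestFixture]
        public class LayerATests
        {
            [Test]
            public void ValueLength_IsZero_WhenBReturnsEmptyString()
            {
                var stub = new StubB { Value = "" };   // the assumption about B lives here
                Assert.AreEqual(0, new LayerA(stub).ValueLength());
                // if the real B returns null, this test still passes; only an integration
                // or contract test catches the resulting NullReferenceException in A
            }
        }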

    Read the article

  • How to keep your unit tests simple and isolated and still guarantee DDD invariants?

    - by ian31
    DDD recommends that domain objects should be in a valid state at any time. Aggregate roots are responsible for guaranteeing the invariants, and Factories for assembling objects with all the required parts so that they are initialized in a valid state. However this seems to complicate the task of creating simple, isolated unit tests a lot. Let's assume we have a BookRepository that contains Books. A Book has:

        an Author
        a Category
        a list of Bookstores you can find the book in

    These are required attributes: a book has to have an author, a category and at least one book store you can buy the book from. There's likely to be a BookFactory, since it is quite a complex object, and the Factory will initialize the Book with at least all the mentioned attributes. Now we want to unit test a method of the BookRepository that returns all the Books. To test if the method returns the books, we have to set up a test context (the Arrange step in AAA terms) where some Books are already in the Repository. If the only tool at our disposal to create Book objects is the Factory, the unit test now also uses and is dependent on the Factory, and indirectly on Category, Author and Store, since we need those objects to build up a Book and then place it in the test context. Would you consider this a dependency in the same way that in a Service unit test we would be dependent on, say, a Repository that the Service would call? How would you solve the problem of having to re-create a whole cluster of objects in order to be able to test a simple thing? How would you break that dependency and get rid of all these attributes we don't need in our test? By using mocks or stubs? If you mock up things a Repository contains, what kind of mocks/stubs would you use, as opposed to when you mock up something the object under test talks to or consumes?
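    One common answer (a hedged sketch; the builder and its defaults are assumptions built on the domain names in the question, so it will not compile without those types) is a test data builder that routes through the real Factory. The invariants still hold, but each test only spells out the detail it actually cares about:

        public class BookBuilder
        {
            private string author = "any author";
            private string category = "any category";
            private readonly List<Bookstore> stores =
                new List<Bookstore> { new Bookstore("any store") };

            public BookBuilder WrittenBy(string name)  { author = name; return this; }
            public BookBuilder InCategory(string name) { category = name; return this; }

            public Book Build()
            {
                // delegate to the domain Factory so the Book is still created in a valid state
                return BookFactory.Create(author, category, stores);
            }
        }

        // the Arrange step of the repository test stays one line per book:
        repository.Add(new BookBuilder().WrittenBy("Evans").Build());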

    Read the article

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering 5x better single-thread performance over previous generations. Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand whether this new processor stands up to its promises. Several tests were performed, mainly focused on: single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor, and overall throughput of the SPARC T4-1 server using multiple threads.

    The tests consisted in reading large amounts of data --tens of gigabytes--, processing them and writing them back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project --CPU performance--, bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided into two ZFS pools for read and write operations.

    Multi-thread: Testing throughput on the Oracle T4-1

    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1GB-large file containing 12.5M lines with the following structure:

        customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
        1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
        2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
        3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
        4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
        […]

    The following graphs present the results of our tests. Unsurprisingly, up to 16 threads all files fit in the ZFS cache, a.k.a. L2ARC: once the cache is hot there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck; having a good IO subsystem is thus key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant with just a slow decline. For the database load tests, only the best IO configuration --using external storage devices-- was used, hosting the Oracle table spaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second inserted into an Oracle table using 48 parallel threads.

    Single-thread: Testing the single-thread performance

    Seven different tests were performed on both servers. Given the fact that only one thread, thus one file, was read, no IO bottleneck was involved, all data being served from the ZFS cache.

        Read File -> Filter -> Write File: Read file, filter data, write the filtered data in a new file. The filter is set on the "Status" column: only lines with status set to "A" are selected. This limits each output file to about 500 MB.
        Read File -> Load Database Table: Read file, insert into a single Oracle table.
        Average: Read file, compute the average of a numeric column, write the result in a new file.
        Division & Square Root: Read file, perform a division and square root on a numeric column, write the result data in a new file.
        Oracle DB Dump: Dump the content of an Oracle table (12.5M rows) into a CSV file.
        Transform: Read file, transform, write the result data in a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
        Sort: Read file, sort a numeric and an alphanumeric column, write the result data in a new file.

    The following table presents the final results of the tests. The throughput unit is thousand lines per second processed (K lines/second). Improvement is the % of improvement between the T5140 and the T4-1.

        Test                     T4-1 (Time s.)   T5140 (Time s.)   Improvement   T4-1 (Throughput)   T5140 (Throughput)
        Read/Filter/Write              125              806             645%             100                  16
        Read/Load Database             195             1111             570%              64                  11
        Average                         96              557             580%             130                  22
        Division & Square Root         161             1054             655%              78                  12
        Oracle DB Dump                 164              945             576%              76                  13
        Transform                      159             1124             707%              79                  11
        Sort                           251             1336             532%              50                   9

    The improvement in single-thread performance is quite dramatic: depending on the test, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way towards filling the gap in single-thread performance, without sacrificing the multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs. Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!

    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" - Cedric Carbone, Talend CTO.

    Read the article
