Search Results

Search found 9492 results on 380 pages for 'logic unit'.

Page 3/380 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • Looking for a very subtle unit testing example

    - by Stéphane Bruckert
    In the context of Continuous Integration, I need to teach unit testing to a 20-person audience of programmers. Everything will be all right, but I am still trying to find the perfect unit testing example. More than writing tests like a robot, I want to show that unit testing can help prevent very subtle errors. I am thinking of the following scenario for a live TDD demo: the test cases would already be written; we would have to write the methods together; most of us would naturally have forgotten to handle a specific case for a method; everyone would then be surprised on seeing that not all tests pass; the failing test would make us think more and realize that we forgot an important case. My question will probably be closed as "too broad" or "not clear what you are asking", but you never know, one of you might have a great idea. Your answer can use Java and JUnit, though any other language will be fine since only the idea matters.
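
    One hedged possibility for the demo, sketched in Python/pytest since the question allows any language: hand out pre-written parameterized tests for a hypothetical days_in_month() helper. Writing the body live, most people remember the simple leap-year rule but forget the century exceptions, so exactly one pre-written test fails and triggers the discussion the scenario calls for. The correct version is shown below; the naive version written live typically omits the 100/400 checks.

        import pytest

        def days_in_month(month, year):
            # Correct version; a naive live version tends to test only
            # "year % 4 == 0" and so returns 29 for February 1900.
            if month == 2:
                leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
                return 29 if leap else 28
            return 30 if month in (4, 6, 9, 11) else 31

        @pytest.mark.parametrize("month, year, expected", [
            (1, 2013, 31),
            (2, 2012, 29),  # the leap year everyone remembers
            (2, 1900, 28),  # the subtle case: century years are not leap years...
            (2, 2000, 29),  # ...unless divisible by 400
        ])
        def test_days_in_month(month, year, expected):
            assert days_in_month(month, year) == expected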

    Read the article

  • How to unit test image processing code?

    - by rold2007
    I'm working in image processing (mainly OCR) and I wonder how I should integrate unit tests into my development. I'm already using unit tests for more "common" types of code, but when dealing with image processing code I'm not sure how to approach it. This kind of code always needs some image data as input/output, and mocking this is not obvious. For now I'm mostly doing integration tests, but they take a while to run and I would like some ideas on how to break this kind of code down into unit tests so that I can run them more quickly.
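
    A hedged sketch of one common answer: split the pipeline into small pure stages and feed each stage a tiny synthetic image built inside the test itself, so no files are read and each test states its own input. numpy and the binarize() stage below are illustrative assumptions, not the asker's actual code.

        import numpy as np

        def binarize(img, threshold=128):
            # One isolated pipeline stage: grayscale image -> 0/255 mask.
            return np.where(img >= threshold, 255, 0).astype(np.uint8)

        def test_binarize_splits_on_threshold():
            img = np.array([[0, 127], [128, 255]], dtype=np.uint8)
            expected = np.array([[0, 0], [255, 255]], dtype=np.uint8)
            assert (binarize(img) == expected).all()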

    Read the article

  • Are unit tests really used as documentation?

    - by stijn
    I cannot count the number of times I have read statements in the vein of "unit tests are a very important source of documentation of the code under test". I do not deny they are true. But personally I haven't found myself using them as documentation, ever. For the typical frameworks I use, the method declarations document their behaviour and that's all I need. And I assume the unit tests back up everything stated in that documentation, plus likely some more internal stuff, so on one side they duplicate the documentation while on the other they might add some more that is irrelevant. So the question is: when are unit tests used as documentation? When the comments do not cover everything? By developers extending the source? And what do they expose that can be useful and relevant that the documentation itself cannot expose?
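
    A hedged illustration of the case where a test does act as documentation: the function below is hypothetical, and the point is that the test name and assertions pin down edge-case behaviour that the bare declaration normalize_path(p) never could.

        def normalize_path(p):
            return "/" + "/".join(part for part in p.split("/") if part)

        def test_normalize_path_collapses_duplicate_and_trailing_slashes():
            # Neither fact is visible from the declaration alone.
            assert normalize_path("a//b/") == "/a/b"
            assert normalize_path("/a/b") == "/a/b"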

    Read the article

  • Is static universally "evil" for unit testing and if so why does resharper recommend it?

    - by Vaccano
    I have found that there are only 3 ways to unit test (mock/stub) dependencies that are static in C#.NET: Moles, TypeMock, and JustMock. Given that two of these are not free and one has not hit release 1.0, mocking static stuff is not too easy. Does that make static methods and such "evil" (in the unit testing sense)? And if so, why does ReSharper want me to make anything that can be static, static? (Assuming ReSharper is not also "evil".) Clarification: I am talking about the scenario when you want to unit test a method and that method calls a static method in a different unit/class. By most definitions of unit testing, if you just let the method under test call the static method in the other unit/class then you are not unit testing, you are integration testing. (Useful, but not a unit test.)
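
    One hedged workaround that needs none of the paid tools (shown in Python for brevity; the same shape works in C# by accepting a Func<> or an interface as a parameter): route the static call through an injectable callable so the test can substitute it. TaxTable and net_price are illustrative names, not real APIs.

        class TaxTable:
            @staticmethod
            def lookup(region):
                raise RuntimeError("hits a database in production")

        def net_price(gross, region, rate_lookup=TaxTable.lookup):
            # Production callers use the default; tests pass a stub.
            return round(gross * (1 - rate_lookup(region)), 2)

        # In a test, the static dependency is swapped without any tooling:
        assert net_price(100.0, "EU", rate_lookup=lambda region: 0.2) == 80.0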

    Read the article

  • What is the value of checking in failing unit tests?

    - by Adam W.
    While there are ways of keeping unit tests from being executed, what is the value of checking in failing unit tests? I will use a simple example: case sensitivity. The current code is case sensitive. A valid input into the method is "Cat" and it would return an enum of Animal.Cat. However, the desired functionality of the method should not be case sensitive. So if the method described was passed "cat" it could possibly return something like Animal.Null instead of Animal.Cat and the unit test would fail. Though a simple code change would make this work, a more complex issue may take weeks to fix, but identifying the bug with a unit test could be a less complex task. The application currently being analyzed has 4 years of code that "works". However, recent discussions regarding unit tests have found flaws in the code. Some just need explicit implementation documentation (e.g. case sensitive or not), or code that does not execute the bug based on how it is currently called. But unit tests can be created executing specific scenarios that will cause the bug to be seen and are valid inputs. What is the value of checking in unit tests that exercise the bug until someone can get around to fixing the code? Should this unit test be flagged with ignore, priority, category etc., to determine whether a build was successful based on tests executed? Eventually the unit test should be created to execute the code once someone fixes it. On one hand it shows that identified bugs have not been fixed. On the other, there could be hundreds of failed unit tests showing up in the logs, and weeding out the ones that are expected to fail from failures caused by a new code check-in would be difficult.
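
    On the flagging question, most frameworks have a marker for exactly this situation; a hedged sketch using Python's built-in unittest.expectedFailure (NUnit's Ignore/Category attributes or JUnit's @Ignore play the same role), with the question's Animal example recreated as hypothetical code:

        import enum
        import unittest

        class Animal(enum.Enum):
            Cat = 1
            Null = 2

        def parse_animal(name):
            # Current behaviour: case sensitive, which is the known bug.
            return Animal.Cat if name == "Cat" else Animal.Null

        class AnimalParsing(unittest.TestCase):
            @unittest.expectedFailure  # documents the open bug without breaking the build
            def test_lowercase_cat_is_recognised(self):
                self.assertEqual(Animal.Cat, parse_animal("cat"))

            def test_uppercase_cat_is_recognised(self):
                self.assertEqual(Animal.Cat, parse_animal("Cat"))

    Once the bug is fixed, the marker is removed and the test graduates into the regular suite; until then the build stays green while the known failure remains visible in every test report.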

    Read the article


  • Adding dynamic business logic/business process checks to a system

    - by Jordan Reiter
    I'm wondering if there is a good extant pattern (the language here is Python/Django, but I'm also interested at the more abstract level) for creating a business logic layer that can be extended without coding. For example, suppose that a house rental should only be available during a specific time. A coder might create the following class:

        from bizlogic import rules, LogicRule
        from orders.models import Order

        class BeachHouseAvailable(LogicRule):
            def check(self, reservation):
                house = reservation.house_reserved
                if not (house.earliest_available < reservation.starts < house.latest_available):
                    raise RuleViolationWhen("Beach house is available only between %s and %s"
                                            % (house.earliest_available, house.latest_available))
                return True

        rules.add(Order, BeachHouseAvailable, name="BeachHouse Available")

    This is fine, but I don't want to have to code something like this each time a new rule is needed. I'd like to create something dynamic, ideally something that can be stored in a database. The thing is, it would have to be flexible enough to encompass a wide variety of rules: avoiding duplicates/overlaps (to continue the example, "You already have a reservation for this time/location"); logic rules ("You can't rent a house to yourself", "This house is in a different place from your chosen destination"); sanity tests ("You've set a rental price that's 10x the normal rate. Are you sure this is the right price?"). Things like that. Before I reinvent the wheel, I'm wondering if there are already methods out there for doing something like this.
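
    Before reaching for a full rules engine, one hedged minimal pattern is to store each rule as a database row naming two attribute paths and a comparison, and evaluate it through a whitelist of operators. Everything below (BusinessRule, RuleViolation, the field names) is illustrative, not an existing package.

        import operator
        from django.db import models

        OPERATORS = {"lt": operator.lt, "lte": operator.le, "gt": operator.gt,
                     "gte": operator.ge, "eq": operator.eq, "ne": operator.ne}

        class RuleViolation(Exception):
            pass

        class BusinessRule(models.Model):
            name = models.CharField(max_length=100)
            left_attr = models.CharField(max_length=100)   # e.g. "starts"
            op = models.CharField(max_length=3, choices=[(k, k) for k in OPERATORS])
            right_attr = models.CharField(max_length=100)  # e.g. "house_reserved.latest_available"
            message = models.TextField()

            @staticmethod
            def _resolve(obj, dotted):
                # Follow a dotted attribute path like "house_reserved.starts".
                for part in dotted.split("."):
                    obj = getattr(obj, part)
                return obj

            def check(self, instance):
                left = self._resolve(instance, self.left_attr)
                right = self._resolve(instance, self.right_attr)
                if not OPERATORS[self.op](left, right):
                    raise RuleViolation(self.message)

    Rules then become admin-editable rows rather than deployments; overlap detection and soft "are you sure?" warnings need richer rule types, which is roughly where a real rules engine or a small DSL starts to pay off.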

    Read the article

  • Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

    - by RoyOsherove
    One of the talks I did at QCon London was about a subject that I've come across fairly recently, when I was building SilverUnit – a "pure" unit test framework for Silverlight objects that depend on the Silverlight runtime to run. It is the concept of "cogs in the machine" – when your piece of code needs to run inside a host framework or runtime that you have little or no control over for testability-related matters. Examples of such cogs and machines can be: your custom control running inside the Silverlight runtime in the browser; your plug-in running inside an IDE; your activity running inside a Windows Workflow; your code running inside a Java EE bean; your code inheriting from a COM+ (Enterprise Services) component; etc. Not all of these are necessarily testability problems. The main testability problem usually comes when your code actually inherits from something inside the system. For example, one of the biggest problems with testing objects like Silverlight controls is the way they depend on the Silverlight runtime – they don't implement some Silverlight interface, they don't just call external static methods against the framework runtime that surrounds them – they actually inherit parts of the framework: they all inherit (in this case) from the Silverlight DependencyObject.

    Wrapping it up? An inheritance dependency is uniquely challenging to bring under test, because "classic" methods such as wrapping the object under test with a framework wrapper will not work, and the only way to do it manually is to create parallel testable objects that get delegated all the possible actions from the dependencies. In Silverlight's case, that would mean creating your own custom logic class that would be called directly from controls that inherit from Silverlight, and would be tested independently of these controls. The pro side is that you get the benefit of understanding the "contract" and the "roles" your system plays against your logic, but unfortunately, more often than not, it can be very tedious to create, and may sometimes feel unnecessary or like code duplication.

    About perimeters: a perimeter is that invisible line that you draw around your pieces of logic during a test, which separates the code under test from any dependencies that it uses. Most of the time, a test perimeter around an object will be the list of seams (dependencies that can be replaced, such as interfaces, virtual methods etc.) that are actually replaced for that test or for all the tests.

    Role-based perimeters: in the case of creating a wrapper around an object, one really creates a "role-based" perimeter around the logic that is being tested – that wrapper takes on roles that are required by the code under test, and also communicates with the host system to implement those roles and provide any inputs to the logic under test. In the talk's first diagram, the code we want to test is represented as a star with no perimeter drawn around it yet (we haven't wrapped it in anything); the second diagram shows what happens when you wrap your logic with a role-based wrapper – you get a role-based perimeter anywhere your code interacts with the system. There's another way to bring that code under test – using isolation frameworks like Typemock, Rhino Mocks and Moq (but if your code inherits from the system, Typemock might be the only way to isolate the code from the system interaction).
    Ad-hoc isolation perimeters: the next diagram shows what I call an ad-hoc perimeter, which might be vastly different between different tests. This perimeter's surface is much smaller, because for that specific test, that is all the "change" that is required to the host system behavior.

    The third way of isolating the code from the host system is the main "meat" of this post: subterranean perimeters. Subterranean perimeters are deep-rooted perimeters – "always on" seams that can lie very deep in the heart of the host system, where they are fully invisible even to the test itself, not just to the code under test. Because they lie deep inside a system you can't control, the only way I've found to control them is with runtime (not compile-time) interception of method calls on the system. One way to get such abilities is by using aspect-oriented frameworks – for example, in SilverUnit I've used the CThru AOP framework, based on Typemock hooks and CLR profilers, to intercept such system-level method calls and effectively turn them into seams that lie deep down at the heart of the Silverlight runtime. As the diagram of such a perimeter shows, the actual seams can be very far away from the actual code under test, and as you'll discover, that's actually a very good thing. Here is only a partial list of examples of such deep-rooted seams: disabling the constructor of a base class five levels below the code under test (this.base.base.base.base); faking static methods of a type that's being called several levels down the stack (method x() calls y(), which calls z(), which calls SomeType.StaticMethod()); replacing an async mechanism with a synchronous one (replacing all timers with your own timer behavior that always ticks immediately upon calls to start(), on the same caller thread, for example); replacing event mechanisms with your own event mechanism (to allow "firing" system events); changing the way the system saves information with your own saving behavior (in SilverUnit, I replaced all DependencyProperty set and get calls with calls to an in-memory value store, instead of using the one built into Silverlight, which threw exceptions without a browser).

    Several questions could jump in: How do you know what to fake (how do you discover the perimeter)? How do you fake it? Wouldn't it be problematic to fake something you don't own, since it might change in the future?

    How do you discover the perimeter to fake? To discover a perimeter, all you have to do is start with a wishful invocation. A wishful invocation is the act of trying to invoke a method (or even just create an instance) of an object using "regular" test code. You invoke the thing that you'd like to do in a real unit test, to see what happens: Can I even create an instance of this object without getting an exception? Can I invoke this method on that instance without getting an exception? Can I verify that some call into the system happened? You make the invocation, get an exception (because there is a dependency), and look at the stack trace. Choose a location in the stack trace and disable it. Then try the invocation again. If you don't get an exception, the perimeter is good for that invocation, so you can move on to trying out other methods on that object. In a future post I will show the process using CThru, and how you end up with something close to a domain-specific test framework after you're done creating the perimeter you need.
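
    A hedged Python translation of the wishful-invocation idea (the post itself uses Typemock and CThru on .NET): runtime interception can neutralize a base-class constructor and fake a deep static call, turning both into seams without touching the code under test. The "host" classes below stand in for an uncontrollable runtime.

        import unittest
        from unittest import mock

        class DependencyObject:                # the host's base class
            def __init__(self):
                raise RuntimeError("needs the real host runtime")

        class HostStorage:                     # deep static dependency
            @staticmethod
            def save(key, value):
                raise RuntimeError("needs the real host runtime")

        class FancyControl(DependencyObject):  # the code under test
            def store(self, key, value):
                HostStorage.save(key, value)

        class SubterraneanPerimeterTest(unittest.TestCase):
            def test_store_without_the_host(self):
                # The wishful invocation FancyControl() fails until the
                # constructor seam is disabled; then the deep save() call
                # is verified through its fake.
                with mock.patch.object(DependencyObject, "__init__", return_value=None), \
                     mock.patch.object(HostStorage, "save") as fake_save:
                    control = FancyControl()
                    control.store("key", 42)
                    fake_save.assert_called_once_with("key", 42)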

    Read the article

  • Mock Objects for Unit Testing

    - by user9009
    Hello. How often are QA engineers responsible for developing mock objects for unit testing? Is dealing with mock objects just a developer's job? The reason I ask is that I'm interested in QA as my career and am learning tools like JUnit, TestNG and a couple of frameworks. I just want to know up to what level unit testing is done by the developer, and from what point the QA engineer takes over testing for better test coverage. Thanks

    Read the article

  • Quality of Code in unit tests?

    - by m3th0dman
    Is it worth spending time, when writing unit tests, to make sure the code written there has good quality and is very easy to read? When writing these kinds of tests I very often break the Law of Demeter, for faster writing and to avoid using so many variables. Technically, unit tests are not reused directly – they are strictly bound to the code – so I do not see any reason for spending much time on them; they only need to be functional.

    Read the article

  • How have Guava unit tests been generated automatically?

    - by dzieciou
    Guava has unit test cases automatically generated: Guava has staggering numbers of unit tests: as of July 2012, the guava-tests package includes over 286,000 individual test cases. Most of these are automatically generated, not written by hand, but Guava's test coverage is extremely thorough, especially for com.google.common.collect. How were they generated? What techniques and technologies were used to design and generate them?

    Read the article

  • Unit-testing code that relies on untestable 3rd party code

    - by DudeOnRock
    Sometimes, especially when working with third-party code, I write unit-test-specific code in my production code. This happens when third-party code uses singletons, relies on constants, accesses the file system or a resource I don't want to access in a test situation, or overuses inheritance. The form my unit-test-specific code takes is usually the following:

        if (accessing or importing a certain resource fails)
            assume this is a test case and load a mock object

    Is this poor form, and if it is, what is normally done when writing tests for code that uses untestable third-party code?
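
    A hedged sketch of the usual alternative: instead of the production code detecting "am I under test?", let it accept the collaborator as a parameter with a production default, so tests hand in a fake explicitly. All names here are illustrative.

        class ReportBuilder:
            def __init__(self, storage=None):
                # Default to the real third-party client; tests pass a fake.
                if storage is None:
                    from thirdparty import SingletonStorage  # hypothetical import
                    storage = SingletonStorage.instance()
                self._storage = storage

            def build(self, key):
                return "report:" + self._storage.read(key)

        # In a test, no resource is touched and no test-detection runs:
        class FakeStorage:
            def read(self, key):
                return "canned-" + key

        assert ReportBuilder(storage=FakeStorage()).build("x") == "report:canned-x"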

    Read the article

  • Dynamically evaluating simple boolean logic in Python

    - by a paid nerd
    I've got some dynamically-generated boolean logic expressions, like:

        (A or B) and (C or D)
        A or (A and B)
        A
        empty - evaluates to True

    The placeholders get replaced with booleans. Should I: convert this information to a Python expression like True or (True or False) and eval it? Create a binary tree where a node is either a bool or a Conjunction/Disjunction object and recursively evaluate it? Convert it into nested S-expressions and use a Lisp parser? Something else? Suggestions welcome.
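
    A hedged sketch of the first option without the risks of raw eval(): parse the expression with Python's ast module and walk only boolean operators and names, so nothing outside the expected grammar can execute.

        import ast

        def eval_bool(expr, bindings):
            """Evaluate e.g. '(A or B) and (C or D)' given {'A': True, ...}.
            An empty expression evaluates to True, as in the question."""
            if not expr.strip():
                return True

            def walk(node):
                if isinstance(node, ast.Expression):
                    return walk(node.body)
                if isinstance(node, ast.BoolOp):
                    values = [walk(v) for v in node.values]
                    return all(values) if isinstance(node.op, ast.And) else any(values)
                if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.Not):
                    return not walk(node.operand)
                if isinstance(node, ast.Name):
                    return bool(bindings[node.id])
                raise ValueError("disallowed syntax: %r" % node)

            return walk(ast.parse(expr, mode="eval"))

        assert eval_bool("(A or B) and (C or D)", {"A": True, "B": False, "C": False, "D": True})
        assert eval_bool("", {})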

    Read the article

  • Why (not) logic programming?

    - by Anto
    I have not yet heard about any uses of a logic programming language (such as Prolog) in the software industry, nor do I know of usage of it in hobby programming or open source projects. It (Prolog) is used as an academic language to some extent, though (why is it used in academia?). This makes me wonder: why should you use logic programming, and why not? Why is it not getting any detectable industry usage?

    Read the article

  • unit/integration testing web service proxy client

    - by cori
    I'm rewriting a PHP client/proxy library that provides an interface to a SOAP-based .Net web service, and in the process I want to add some unit and integration tests so future modifications are less risky. The work the library performs is to marshall the calls to the web service and do a little reorganizing of the responses to present a slightly more object-oriented interface to the underlying service. Since this library is little else than a thin layer on top of web service calls, my basic assumption is that I'll really be writing integration tests more than unit tests – for example, I don't see any reason to mock away the web service – the work that's performed by the code I'm working on is very light; it's almost passing the response from the service right back to its consumer. Most of the calls are basic CRUD operations: CreateRole(), CreateUser(), DeleteUser(), FindUser(), etc. I'll be starting from a known database state – the system I'm using for these tests is isolated for testing purposes, so the results will be more or less predictable. My question is this: is it natural to use web service calls to confirm the results of operations within the tests and to reset the state of the application within the scope of each test? Here's an example: one test might be createUserReturnsValidUserId() and might go like this:

        public function createUserReturnsValidUserId()
        {
            // we're assuming a global connection to the service
            $newUserId = $client->CreateUser("user1");
            assertNotNull($newUserId);
            assertNotNull($client->FindUser($newUserId));
            $client->deleteUser($newUserId);
        }

    So I'm creating a user, making sure I get an ID back and that it represents a user in the system, and then cleaning up after myself (so that later tests don't rely on the success or failure of this test w/r/t the number of users in the system, for example). However this still seems pretty fragile – lots of dependencies and opportunities for tests to fail and affect the results of later tests, which I definitely want to avoid. Am I missing some options or ways to decouple these tests from the system under test, or is this really the best I can do? I think this is a fairly general unit/integration testing question, but if it matters I'm using PHPUnit for the testing framework.
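
    One hedged way to shrink the fragility is to move cleanup into tearDown, which runs whether or not the assertions pass (PHPUnit has the same setUp/tearDown hooks); sketched in Python with an in-memory stand-in for the proxy so the shape is runnable:

        import itertools
        import unittest

        class ServiceClient:
            # Stand-in for the real SOAP proxy, here an in-memory fake so
            # the sketch runs; the pattern is what matters, not this class.
            _ids = itertools.count(1)
            _users = {}

            def create_user(self, name):
                user_id = next(self._ids)
                self._users[user_id] = name
                return user_id

            def find_user(self, user_id):
                return self._users.get(user_id)

            def delete_user(self, user_id):
                self._users.pop(user_id, None)

        class CreateUserTest(unittest.TestCase):
            def setUp(self):
                self.client = ServiceClient()
                self.created = []

            def tearDown(self):
                # Runs even when an assertion in the test body fails, so a
                # failing test can no longer leak users into later tests.
                for user_id in self.created:
                    self.client.delete_user(user_id)

            def test_create_user_returns_valid_user_id(self):
                user_id = self.client.create_user("user1")
                self.created.append(user_id)
                self.assertIsNotNone(user_id)
                self.assertIsNotNone(self.client.find_user(user_id))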

    Read the article

  • how to architect this to make it unit testable

    - by SOfanatic
    I'm currently working on a project where I'm receiving an object via web service (WSDL). The overall process is the following: receive the object, add/delete/update parts (or all) of it, and return the object with the changes made. The thing is that sometimes these changes are complicated and there is some logic involved – other databases, other web services, etc. – so to facilitate this I'm creating a custom object that mimics the original one but has some enhanced functionality to make some things easier. So I'm trying to have this process: receive original object – convert/copy it to custom object – add/delete/update – convert/copy it back to original object – return original object. Example:

        public class Row
        {
            public List<Field> Fields { get; set; }
            public string RowId { get; set; }

            public Row()
            {
                this.Fields = new List<Field>();
            }
        }

        public class Field
        {
            public string Number { get; set; }
            public string Value { get; set; }
        }

    So for example, one of the "actions" to perform on this would be to find all Fields in a Row that match a Value equal to something, and update them with some other value. I have a CustomRow class that represents the Row class; how can I make this class unit testable? Do I have to create an interface ICustomRow to mock it in the unit test? If one of the actions is to sum all of the Values in the Fields that have a Number equal to 10, like the sample function below, how can I design the custom class to facilitate unit tests?

        public int Sum(FieldNumber number)
        {
            return row.Fields.Where(x => x.FieldNumber.Equals(number)).Sum(x => x.FieldValue);
        }

    Am I approaching this the wrong way?
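
    A hedged sketch of one answer: if CustomRow takes its fields as plain constructor data, the Sum logic can be unit tested with in-memory objects, and no ICustomRow interface or mocking is needed for this kind of pure computation (Python used for brevity; the names mirror the question).

        class Field:
            def __init__(self, number, value):
                self.number = number
                self.value = value

        class CustomRow:
            def __init__(self, fields):
                self.fields = fields

            def sum(self, number):
                # Sum the values of all fields carrying the given number.
                return sum(f.value for f in self.fields if f.number == number)

        # The unit test needs nothing but constructors:
        row = CustomRow([Field(10, 3), Field(10, 4), Field(7, 99)])
        assert row.sum(10) == 7

    Interfaces or mocks only become necessary for the collaborators that talk to other databases and web services; the conversion and calculation steps can stay as plain, directly constructible objects.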

    Read the article

  • Implementing a "state-machine" logic for methods required by an object in C++

    - by user827992
    What I have: 1 hypothetical object/class, plus other classes and related methods that give me functionality. What I want: to link this object to 0 to N methods in realtime, on request, when an event is triggered. Each event is related to a single method or a class, so a single event does not necessarily mean "connect this 1 method only" but can also mean "connect all the methods from that class or a group of methods". I want to avoid linked lists, because I would have to browse the entire list to know which methods are linked, because they do not ensure that the linked methods are kept in a particular order (let's say an alphabetic order by their names or classes), and also because they involve a massive amount of pointer usage. Example: I have an object Employee called Jon. Jon acquires knowledge and forgets things pretty easily, so his skills may vary over a period of time, and I'm responsible for what Jon can add to or remove from his CV. How can I implement this logic?
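
    A hedged sketch of the shape this often takes (Python used to show the idea; in C++ a std::map<std::string, std::function<...>> keyed by name plays the same role, giving ordered iteration and direct lookup without walking a list):

        class Employee:
            def __init__(self, name):
                self.name = name
                self.skills = {}  # skill name -> callable, keyed rather than listed

            def learn(self, name, method):
                self.skills[name] = method

            def forget(self, name):
                self.skills.pop(name, None)

            def learn_all(self, cls):
                # "connect all the methods from that class"
                for attr in dir(cls):
                    if not attr.startswith("_"):
                        self.skills[attr] = getattr(cls, attr)

            def cv(self):
                # Alphabetical by name, with no list traversal to find a skill.
                return sorted(self.skills)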

    Read the article

  • MVC - Business Logic

    - by BriskLabs Pakistan
    I have created an MVC-based simple Java application. It helps the user to add records through data forms to a database. I want the data that I put into the database as a record to be worked upon, i.e. by performing calculations on it. The original data should remain unaffected, while the new data, after the calculations are performed, must be stored as a new entity record in the database. Where should I write the code for this background calculation, as it is the rules and business logic? In a new Java beans file? Please guide. Regards
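
    A hedged sketch of the common answer: keep the calculation in a service object that sits between the controller and the data layer, so the original record is read, a derived copy is computed, and a new record is inserted (Python shown for brevity; in Java this would be a plain service class, e.g. a bean, that the controller delegates to). All names are illustrative.

        class RecordService:
            def __init__(self, dao):
                self.dao = dao  # data-access object that talks to the database

            def derive_and_store(self, record_id):
                original = self.dao.find(record_id)   # original record untouched
                derived = dict(original)              # perform work on a copy
                derived["total"] = derived["price"] * derived["qty"]
                derived["source_id"] = record_id
                return self.dao.insert(derived)       # stored as a new entity record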

    Read the article

  • GH-Unit for unit testing Objective-C code, why am I getting linking errors?

    - by djhworld
    Hi there, I'm trying to dive into the quite frankly terrible world of unit testing using Xcode (such a convoluted process it seems). Basically I have this test class, attempting to test my Show.h class:

        #import <GHUnit/GHUnit.h>
        #import "Show.h"

        @interface ShowTest : GHTestCase { }
        @end

        @implementation ShowTest

        - (void)testShowCreate {
            Show *s = [[Show alloc] init];
            GHAssertNotNil(s, @"Was nil.");
        }

        @end

    However when I try to build and run my tests it moans with this error:

        Undefined symbols:
          "_OBJC_CLASS_$_Show", referenced from:
              __objc_classrefs__DATA@0 in ShowTest.o
        ld: symbol(s) not found
        collect2: ld returned 1 exit status

    Now I'm presuming this is a linking error. I tried following every step in the instructions located here: http://github.com/gabriel/gh-unit/blob/master/README.md Step 2 of these instructions confused me: in the Target 'Tests' Info window, General tab: add a linked library, and under the Mac OS X 10.5 SDK section select GHUnit.framework; add a linked library and select your project; add a direct dependency and select your project (this will cause your application or framework to build before the test target). How am I supposed to add my project to the linked library list when all it accepts is .dylib, .framework and .o files? I'm confused! Thanks for any help that is received.

    Read the article

  • ASP.NET MVC unit testing

    - by Simon Lomax
    Hi, I'm getting started with unit testing and trying to do some TDD. I've read a fair bit about the subject and written a few tests. I just want to know if the following is the right approach. I want to add the usual "contact us" facility on my web site. You know the thing: the user fills out a form with their email address, enters a brief message and hits a button to post the form back. The model binders do their stuff and my action method accepts the posted data as a model. The action method would then parse the model and use SMTP to send an email to the web site administrator informing him/her that somebody filled out the contact form on their site. Now for the question... In order to test this, would I be right in creating an interface IDeliver that has a method Send(emailAddress, message) to accept the email address and message body? Implement the interface in a concrete class and let that class deal with SMTP stuff and actually send the mail. If I add the interface as a parameter to my controller constructor I can then use DI and IoC to inject the concrete class into the controller. But when unit testing I can create a fake or mock version of my IDeliver and do assertions on that. The reason I ask is that I've seen other examples of people generating interfaces for SmtpClient and then mocking that. Is there really any need to go that far, or am I not understanding this stuff?
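
    A hedged sketch of the asker's plan, in Python for brevity (the question is ASP.NET MVC, but the shape is identical): depend on a small delivery interface, inject the real sender in production, and inject a recording fake in tests. Names are illustrative.

        class SmtpDeliver:
            def send(self, email_address, message):
                ...  # wraps the real SMTP client in production

        class FakeDeliver:
            def __init__(self):
                self.sent = []

            def send(self, email_address, message):
                self.sent.append((email_address, message))

        class ContactController:
            def __init__(self, deliver):
                self._deliver = deliver

            def contact(self, email, body):
                self._deliver.send("admin@example.com", "%s wrote: %s" % (email, body))

        # The unit test asserts against the fake, never touching SMTP:
        fake = FakeDeliver()
        ContactController(fake).contact("user@example.com", "hello")
        assert fake.sent == [("admin@example.com", "user@example.com wrote: hello")]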

    Read the article

  • Creating mock objects in PHPUnit

    - by Mike
    Hi, I've searched but can't quite find what I'm looking for, and the manual isn't much help in this respect. I'm fairly new to unit testing, so I'm not sure if I'm on the right track at all. Anyway, on to the question. I have a class:

        <?php
        class testClass {
            public function doSomething($array_of_stuff) {
                return AnotherClass::returnRandomElement($array_of_stuff);
            }
        }
        ?>

    Now, clearly I want AnotherClass::returnRandomElement($array_of_stuff); to return the same thing every time. My question is, in my unit test, how do I mock up this object? I've tried adding the AnotherClass to the top of the test file, but when I want to test AnotherClass I get the "Cannot redeclare class" error. I think I understand factory classes, but I'm not sure how I would apply that in this instance. Would I need to write an entirely separate AnotherClass class which contained test data and then use the factory class to load that instead of the real AnotherClass? Or is using the Factory pattern just a red herring? I tried this:

        $RedirectUtils_stub = $this->getMockForAbstractClass('RedirectUtils');
        $o1 = new stdClass();
        $o1->id = 2;
        $o1->test_id = 2;
        $o1->weight = 60;
        $o1->data = "http://www.google.com/?ffdfd=fdfdfdfd?route=1";
        $RedirectUtils_stub->expects($this->any())
                           ->method('chooseRandomRoot')
                           ->will($this->returnValue($o1));
        $RedirectUtils_stub->expects($this->any())
                           ->method('decodeQueryString')
                           ->will($this->returnValue(array()));

    in the setUp() function, but these stubs are ignored and I can't work out whether it's something I'm doing wrong, or the way I'm accessing the AnotherClass methods. Help! This is driving me nuts.

    Read the article

  • The importance of Unit Testing in BI

    - by Davide Mauri
    One of the main steps in the process we internally use to develop a BI solution is the implementation of unit tests of your BI data. As you may already know, I've created a simple (for now) tool that leverages NUnit to allow us to quickly create unit testing without having to resort to using Visual Studio Database Professional: http://queryunit.codeplex.com/ Once you have a tool like this one, you can also start to make sure that your BI solution (DWH and cube) is not only structurally sound (I mean, the cube or the report gets processed correctly), but you can also check that the logical integrity of your business rules is enforced. For example, let's say that the customer tells you that they will never create an invoice for a specific product line in 2010, since that product line is dismissed and will never be sold again. OK, we know that this in theory is true, but a lot of this business rule's effectiveness depends on the fact that people do not make mistakes while inserting new orders/invoices and that the ERP used implements a check for this business logic. Unfortunately these last two hypotheses are not always true, so you may find yourself really having some invoices for a product line that doesn't exist anymore. Maybe this kind of situation will in future be solved using Master Data Management but, meanwhile, how can you give an idea of the data quality to your customers? How can you check that the logical integrity of the analytical data you produce is exactly what you expect? Well, unit testing of a DWH or a cube can be a solution. Once you have defined your test suite, by writing SQL and MDX queries that check that your data is what you expect it to be, if you use NUnit (and QueryUnit does), you can then use a tool like NUnit2Report to create a nice HTML report that can be shipped via email to give information on data quality. In addition to that, since NUnit produces an XML file as a result, you can also import it into a SQL Server database and then monitor the quality of data over time. I'll be speaking about this approach (and more in general about how to "engineer" a BI solution) at the next European SQL PASS: Adaptive BI Best Practices http://www.sqlpass.org/summit/eu2010/Agenda/ProgramSessions/AdaptiveBIBestPratices.aspx I'll enjoy discussing all this with you, so see you there! And remember: "if it ain't tested it's broken!" (Sorry, I don't remember who said that in the first place :-))
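
    A hedged sketch of the kind of check a tool like QueryUnit automates: assert that a business rule holds in the warehouse by running a SQL probe that must return zero rows. Shown here as plain Python so the shape is visible; the connection string, table and column names are illustrative, and pyodbc is an assumed driver (any DB-API module works the same way).

        import unittest
        import pyodbc

        class ProductLineRules(unittest.TestCase):
            def test_no_2010_invoices_for_dismissed_product_line(self):
                conn = pyodbc.connect("DSN=dwh")  # hypothetical DSN
                count = conn.execute(
                    """SELECT COUNT(*)
                       FROM fact_invoice f
                       JOIN dim_product p ON p.product_key = f.product_key
                       WHERE p.product_line = 'DISMISSED-LINE'
                         AND f.invoice_year = 2010"""
                ).fetchone()[0]
                self.assertEqual(0, count,
                                 "Dismissed product line must have no 2010 invoices")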

    Read the article


  • Unit testing in Django

    - by acjohnson55
    I'm really struggling to write effective unit tests for a large Django project. I have reasonably good test coverage, but I've come to realize that the tests I've been writing are definitely integration/acceptance tests, not unit tests at all, and I have critical portions of my application that are not being tested effectively. I want to fix this ASAP. Here's my problem. My schema is deeply relational and heavily time-oriented, giving my model objects high internal coupling and lots of state. Many of my model methods query based on time intervals, and I've got a lot of auto_now_add going on in timestamped fields. So take a method that looks like this for example:

        def summary(self, startTime=None, endTime=None):
            # ... logic to assign a proper start and end time
            # if none was provided, probably using datetime.now()
            objects = self.related_model_set.manager_method.filter(...)
            return sum(object.key_method(startTime, endTime) for object in objects)

    How does one approach testing something like this? Here's where I am so far. It occurs to me that the unit testing objective should be: given some mocked behavior of key_method on its arguments, does summary correctly filter and aggregate to produce a correct result? Mocking datetime.now() is straightforward enough, but how can I mock out the rest of the behavior? I could use fixtures, but I've heard pros and cons of using fixtures for building my data (poor maintainability being a con that hits home for me). I could also set up my data through the ORM, but that can be limiting, because then I have to create related objects as well. And the ORM doesn't let you mess with auto_now_add fields manually. Mocking the ORM is another option, but not only is it tricky to mock deeply nested ORM methods, but the logic in the ORM code gets mocked out of the test, and mocking seems to make the test really dependent on the internals and dependencies of the function-under-test. The toughest nuts to crack seem to be functions like this, that sit on a few layers of models and lower-level functions and are very dependent on the time, even though these functions may not be super complicated. My overall problem is that no matter how I seem to slice it, my tests are looking way more complex than the functions they are testing.
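
    A hedged sketch of one way to unit test summary() itself: freeze "now" and stub key_method, so the test exercises only the filtering and aggregation. Parent and Related are placeholders for the asker's actual models, and "myapp.models" for wherever datetime is imported; this will not run outside a configured Django project.

        import datetime
        from unittest import mock
        from django.test import TestCase

        class SummaryTest(TestCase):
            def test_summary_sums_key_method_over_related_objects(self):
                parent = Parent.objects.create()           # hypothetical model
                for _ in range(3):
                    Related.objects.create(parent=parent)  # hypothetical model

                frozen = datetime.datetime(2012, 6, 1, 12, 0)
                with mock.patch("myapp.models.datetime") as fake_dt, \
                     mock.patch.object(Related, "key_method", return_value=5):
                    fake_dt.datetime.now.return_value = frozen
                    # 3 related objects x stubbed key_method() == 5 -> 15
                    self.assertEqual(15, parent.summary())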

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >