Search Results

Search found 7802 results on 313 pages for 'unit tests'.

Page 29/313 | < Previous Page | 25 26 27 28 29 30 31 32 33 34 35 36  | Next Page >

  • Auto Mocking using JustMock

    - by mehfuzh
    Auto mocking containers are designed to reduce the friction of keeping unit test beds in sync with the code being tested as systems are updated and evolve over time. That is the one-sentence, more or less formal definition of auto mocking. Put informally, an auto mocking container is nothing but a tool that keeps your tests in sync so that you don't have to go back and change them every time you add a new dependency to your SUT (System Under Test). With Q3 2012, JustMock ships with a built-in auto mocking container. Developers keep all the fun they are already having with JustMock, and on top of that they can now mock objects with dependencies in a more elegant way, without the homework of managing the dependency graph themselves. If you are not familiar with auto mocking, I won't try to teach it here; rather, I'd ask you to learn it from the content the community has already made available, as that is well beyond the scope of this post.

    Moving forward, getting started with JustMock auto mocking is pretty simple. First, I reference Telerik.JustMock.Container.DLL from the installation folder along with Telerik.JustMock.DLL (of course), which it uses internally, and next I write my tests with the mocking container. It's that simple! In this post I first mock the target with its dependencies using the current method, and then do the same with the auto mocking container. In short, the sample is a report builder that goes through all the existing reports, sends emails, and logs any exception in that process. The Reporter class depends on the following interfaces: IReportBuilder, used to create and get the available reports; IReportSender, used to send the reports; and ILogger, used to log any exception.

    If I write the test without an auto mocking container, it looks fine, but the one issue is that I am creating a mock for each dependency, which is grunt work; and with an ever-changing list of dependencies it becomes really hard to keep the tests in sync. The typical example is an ASP.NET MVC controller, where the number of service dependencies grows along with the project. When the same test is written with the auto mocking container, a few things stand out: I didn't create a mock for each dependency; there is no extra step of creating the Reporter class and passing in the dependencies; and since ILogger is not required for the purpose of this test, I can be completely ignorant of it. How cool is that?

    Auto mocking in JustMock has just been released, and we also want to extend it even further using the profiler, which will let it resolve not just interfaces but concrete classes as well. But that, of course, starts the debate of code smell vs. working with legacy code. Feel free to send in your expert opinion in that regard using one of Telerik's official channels. Hope that helps.

    Read the article

  • Do you test your SQL/HQL/Criteria?

    - by 0101
    Do you test your SQL, or the SQL generated by your database framework? There are frameworks like DbUnit that allow you to create a real in-memory database and execute real SQL. But it's very hard to use (not developer-friendly, so to speak), because you first need to prepare test data (and it should not be shared between tests). P.S. I don't mean mocking the database or the framework's database methods, but tests that make you 99% sure that your SQL is working even after some hardcore refactoring.

    Read the article

  • Workflow Activity Extensions, Activity Packs and Unit Testing Framework

    - by JoshReuben
    http://wf.codeplex.com/ contains a plethora of infrastructure code and new activities for extending Workflow Foundation 4. These are also available as NuGet packages. They include: Activity Extensions, the Security Activity Pack, the ADO.NET Activity Pack, the Azure Activity Pack, and the Activity Unit Testing Framework. View my PowerPoint presentation on these and more here: http://www.slideshare.net/joshuareuben9/workflow-foundation-activity-packs-extensions-and-unit-testing

    Read the article

  • Has test driven development (TDD) actually benefited a real world project?

    - by James
    I am not new to coding. I have been coding (seriously) for over 15 years now. I have always had some testing for my code. However, over the last few months I have been learning test-driven design/development (TDD) using Ruby on Rails. So far, I'm not seeing the benefit. I see some benefit to writing tests for some things, but very few. And while I like the idea of writing the test first, I find I spend substantially more time trying to debug my tests to get them to say what I really mean than I do debugging actual code. This is probably because the test code is often substantially more complicated than the code it tests. I hope this is just inexperience with the available tools (RSpec in this case). I must say, though, at this point, the level of frustration mixed with the disappointing lack of performance is beyond unacceptable. So far, the only value I'm seeing from TDD is a growing library of RSpec files that serve as templates for other projects/files, which is not much more useful, and maybe less useful, than the actual project code files.

    In reading the available literature, I notice that TDD seems to be a massive time sink up front, but pays off in the end. I'm just wondering, are there any real-world examples? Does this massive frustration ever pay off in the real world? I really hope I did not miss this question somewhere else on here. I searched, but all the questions/answers are several years old at this point. It was a rare occasion when I found a developer who would say anything bad about TDD, which is why I have spent as much time on this as I have. However, I noticed that nobody seems to point to specific real-world examples.

    I did read one answer that said the guy debugging the code in 2011 would thank you for having a complete unit testing suite (I think that comment was made in 2008). So, I'm just wondering, after all these years, do we finally have any examples showing the payoff is real? Has anybody actually inherited or gone back to code that was designed/developed with TDD and has a complete set of unit tests, and actually felt a payoff? Or did you find that you were spending so much time trying to figure out what the test was testing (and why it was important) that you just tossed out the whole mess and dug into the code?

    Read the article

  • SQL Server Unit Testing with tSQLt

    When one considers the amount of time and effort that Unit Testing consumes for the Database Developer, it is surprising how few good SQL Server Test frameworks are around. tSQLt, which is open source and free to use, is one of the frameworks that provide a simple way to populate a table with test data as part of the unit test, and to check the results against what should be expected. Sebastian Meine and Dennis Lloyd, who created tSQLt, explain.

    Read the article

  • What is the most appropriate testing method in this scenario?

    - by Daniel Bruce
    I'm writing some Objective-C apps (for OS X/iOS) and I'm currently implementing a service to be shared across them. The service is intended to be fairly self-contained. For the current functionality I'm envisioning, there will be only one method that clients call, which performs a fairly complicated series of steps, both using private methods on the class and passing data through a bunch of "data mangling classes" to arrive at an end result. The gist of the code is to fetch a log of changes, stored in a service-internal data store, that have occurred since a particular time, simplify the log to include only the last applicable change for each object, attach the serialized values for the affected objects, and return all of this to the client. My question then is: how do I unit-test this entry point method? Obviously, each class would have thorough unit tests to ensure that its functionality works as expected, but the entry point seems harder to "disconnect" from the rest of the world. I would rather not send in each of these internal classes IoC-style, because they're small and are only made classes to satisfy the single-responsibility principle. I see a couple of possibilities:

    1. Create a "private" interface header for the tests with methods that call the internal classes, and test each of these methods separately. Then, to test the entry point, make a partial mock of the service class with these private methods mocked out and just test that the methods are called with the right arguments.

    2. Write a series of fatter tests for the entry point without mocking out anything, testing the entire functionality in one go. This looks, to me, more like "integration testing" and seems brittle, but it does satisfy the "only test via the public interface" principle.

    3. Write a factory that returns these internal services and take that in the initializer, then write a factory that returns mocked versions of them to use in tests. This has the downside of making the construction of the service annoying, and it leaks internal details to the client.

    4. Write a "private" initializer that takes these services as extra parameters, use that to provide mocked services, and have the public initializer back-end to this one. This would ensure that the client code still sees the easy/pretty initializer and no internals are leaked (see the sketch below).

    I'm sure there are more ways to solve this problem that I haven't thought of yet, but my question is: what's the most appropriate approach according to unit testing best practices? Especially considering I would prefer to write this test-first, meaning I should preferably only create these services as the code indicates a need for them.
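
    A minimal sketch of that last option - written here in Python with hypothetical class names (LogStore, ChangeSimplifier, Serializer, ChangeLogService) purely to illustrate the shape of the pattern, since the question itself is about Objective-C: the public initializer wires up the real collaborators, while a "private" one lets a test inject fakes without leaking anything to clients.

        class LogStore:
            # Hypothetical internal store holding the change log.
            def changes_since(self, timestamp):
                return []

        class ChangeSimplifier:
            # Hypothetical helper keeping only the last change per object.
            def last_change_per_object(self, changes):
                return changes

        class Serializer:
            # Hypothetical helper attaching serialized values to a change.
            def serialize(self, change):
                return change

        class ChangeLogService:
            def __init__(self):
                # Public initializer: the easy/pretty one that clients see.
                self._init(LogStore(), ChangeSimplifier(), Serializer())

            def _init(self, store, simplifier, serializer):
                # "Private" initializer: tests call this to swap in fakes or mocks.
                self._store = store
                self._simplifier = simplifier
                self._serializer = serializer

            def changes_since(self, timestamp):
                # The single public entry point described in the question.
                raw = self._store.changes_since(timestamp)
                latest = self._simplifier.last_change_per_object(raw)
                return [self._serializer.serialize(change) for change in latest]

    In a test, svc = ChangeLogService() followed by svc._init(fake_store, fake_simplifier, fake_serializer) swaps in the fakes, while production callers never see the extra parameters.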

    Read the article

  • What's a good unit test framework for Common Lisp projects?

    - by Lorenzo V.
    I need to write a unit test suite for a project I am developing in my spare time. Being a CL newbie I was overwhelmed by the number of choices for a CL implementation, and I spent quite some time choosing one. Now I am facing exactly the same thing with unit test frameworks. A quick glance at http://www.cliki.net/test%20framework shows 20 unit test frameworks! Choice is good, but for a novice like me this can be a bit confusing, and given the number of frameworks it would be painful to try them all. I would like to use a framework which: is reasonably well maintained; is easy to use but with some degree of flexibility; offers some sort of integration with Emacs (or can easily be integrated with Emacs); integrates with git post-commit hooks; and integrates with a continuous integration system (such as buildbot). What are your experiences in this field?

    Read the article

  • How can I start a TCP server in the background during a Perl unit test?

    - by John
    I am trying to write a unit test for a client-server application. To test the client, in my unit test, I want to first start my TCP server (which itself is another Perl file). I tried to start the TCP server by forking:

        if (! fork()) {
            system ("$^X server.pl") == 0 or die "couldn't start server"
        }

    So when I call make test after perl Makefile.PL, this test starts and I can see the server starting, but after that the unit test just hangs there. So I guess I need to start this server in the background; I tried the & at the end to force it to start in the background and let the test continue, but I still couldn't succeed. What am I doing wrong? Thanks.

    Read the article

  • How can I unit test an Android Activity that acts on Accelerometer?

    - by Corey Sunwold
    I am starting with an Activity based off of this ShakeActivity and I want to write some unit tests for it. I have written some small unit tests for Android activities before but I'm not sure where to start here. I want to feed the accelerometer some different values and test how the activity responds to it. For now I'm keeping it simple and just updating a private int counter variable and a TextView when a "shake" event happens. So my question largely boils down to this: How can I send fake data to the accelerometer from a unit test?

    Read the article

  • Does Ruby/Rails require more unit testing than say a PHP app?

    - by Blankman
    I don't see the unit testing push in the PHP market the way I see/read about it in the Ruby/Rails arena. Could one just as easily NOT unit test in Ruby/Rails as in PHP, or is Ruby so bendable and breakable that it is "more" recommended to test in Ruby than in PHP? Meaning, there are large code bases like vBulletin, and from what I can tell, they don't unit test. I hope you understand what I am asking here: not the pros/cons of testing, or whether one should test or not, but rather, does one language need to be tested more than another? Maybe it's easier to write buggy code, or to break things during refactoring, etc.

    Read the article

  • Ways to Unit Test OAuth for different services in Ruby?

    - by viatropos
    Are there any best practices for writing unit tests when, 90% of the time while building the OAuth connecting class, I need to actually be logged in to the remote service? I am building a Ruby gem that logs in to Twitter/Google/MySpace, etc., and the hardest part is making sure I have the settings right for that particular provider; I would like to write tests for that. Is there a recommended way to do that? If I used mocks or stubs, I'd still have to spend that 90% of the time figuring out how to use the service, and would end up writing tests after the fact instead of before...

    Read the article

  • NullPointerException with static variables

    - by tomekK
    I just hit a very strange (to me) behaviour of Java. I have the following classes:

        public abstract class Unit {
            public static final Unit KM = KMUnit.INSTANCE;
            public static final Unit METERS = MeterUnit.INSTANCE;

            protected Unit() { }

            public abstract double getValueInUnit(double value, Unit unit);
            protected abstract double getValueInMeters(double value);
        }

    And:

        public class KMUnit extends Unit {
            public static final Unit INSTANCE = new KMUnit();
            private KMUnit() { }
            // here are abstract methods overriden
        }

        public class MeterUnit extends Unit {
            public static final Unit INSTANCE = new MeterUnit();
            private MeterUnit() { }
            // abstract methods overriden
        }

    And my test case:

        public class TestMetricUnits extends TestCase {
            @Test
            public void testConversion() {
                System.out.println("Unit.METERS: " + Unit.METERS);
                System.out.println("Unit.KM: " + Unit.KM);
                double meters = Unit.KM.getValueInUnit(102.11, Unit.METERS);
                assertEquals(0.10211, meters, 0.00001);
            }
        }

    1) KMUnit and MeterUnit are both singletons initialized statically, so during class loading. Constructors are private, so they can't be instantiated anywhere else. 2) The Unit class contains static final references to KMUnit.INSTANCE and MeterUnit.INSTANCE. I would expect that: the KMUnit class is loaded and its instance is created; the MeterUnit class is loaded and its instance is created; the Unit class is loaded and both the KM and METERS variables are initialized; they are final, so they can't be changed. But when I run my test case in the console with Maven, my result is:

        T E S T S
        Running de.audi.echargingstations.tests.TestMetricUnits
        Unit.METERS: m
        Unit.KM: null
        Tests run: 3, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.089 sec <<< FAILURE! - in de.audi.echargingstations.tests.TestMetricUnits
        testConversion(de.audi.echargingstations.tests.TestMetricUnits) Time elapsed: 0.011 sec <<< ERROR!
        java.lang.NullPointerException: null
            at de.audi.echargingstations.tests.TestMetricUnits.testConversion(TestMetricUnits.java:29)

        Results :
        Tests in error: TestMetricUnits.testConversion:29 NullPointer

    And the funny part is that when I run this test from Eclipse via the JUnit runner, everything is fine: I have no NullPointerException, and in the console I get:

        Unit.METERS: m
        Unit.KM: km

    So the question is: what can be the reason that the KM variable in Unit is null (while at the same time METERS is not null)?

    Read the article

  • Given a typical Rails 3 environment, why am I unable to execute any tests?

    - by Tom
    I'm working on writing simple unit tests for a Rails 3 project, but I'm unable to actually execute any tests. Case in point, attempting to run the test auto-generated by Rails fails:

        require 'test_helper'

        class UserTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end
        end

    It results in the following error:

        <internal:lib/rubygems/custom_require>:29:in `require': no such file to load -- test_helper (LoadError)
            from <internal:lib/rubygems/custom_require>:29:in `require'
            from user_test.rb:1:in `<main>'

    Commenting out the require 'test_helper' line and attempting to run the test results in this error:

        user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)

    The Action Pack gems appear to be properly installed and up to date:

        actionmailer (3.0.3, 2.3.5)
        actionpack (3.0.3, 2.3.5)
        activemodel (3.0.3)
        activerecord (3.0.3, 2.3.5)
        activeresource (3.0.3, 2.3.5)
        activesupport (3.0.3, 2.3.5)

    Ruby is at 1.9.2p0 and Rails is at 3.0.3. The sample dump of my test directory is as follows:

        /fixtures
        /functional
        /integration
        /performance
        /unit
        -- /helpers
        -- user_helper_test.rb
        -- user_test.rb
        test_helper.rb

    I've never seen this problem before - I've run the typical rake tasks for preparing the test environment. I have nothing out of the ordinary in my application or environment configuration files, nor have I installed any unusual gems that would interfere with the test environment. Edit: Xavier Holt's suggestion of explicitly specifying the path to the test_helper worked; however, this revealed an issue with ActiveSupport. Now when I attempt to run the test, I receive the following error message (as also listed above):

        user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)

    But as you can see above, Action Pack is all installed and up to date.

    Read the article

  • How do I structure my tests with Python unittest module?

    - by persepolis
    I'm trying to build a test framework for automated web testing with Selenium and unittest, and I want to structure my tests into distinct scripts. So I've organised it as follows:

    base.py - This will contain, for now, the base Selenium test case class for setting up a session.

        import unittest
        from selenium import webdriver

        # Base Selenium Test class from which all test cases inherit.
        class BaseSeleniumTest(unittest.TestCase):
            def setUp(self):
                self.browser = webdriver.Firefox()

            def tearDown(self):
                self.browser.close()

    main.py - I want this to be the overall test suite from which all the individual tests are run.

        import unittest
        import test_example

        if __name__ == "__main__":
            SeTestSuite = test_example.TitleSpelling()
            unittest.TextTestRunner(verbosity=2).run(SeTestSuite)

    test_example.py - An example test case; it might be nice to make these run on their own too.

        from base import BaseSeleniumTest

        # Test the spelling of the title
        class TitleSpelling(BaseSeleniumTest):
            def test_a(self):
                self.assertTrue(False)

            def test_b(self):
                self.assertTrue(True)

    The problem is that when I run main.py I get the following error:

        Traceback (most recent call last):
          File "H:\Python\testframework\main.py", line 5, in <module>
            SeTestSuite = test_example.TitleSpelling()
          File "C:\Python27\lib\unittest\case.py", line 191, in __init__
            (self.__class__, methodName))
        ValueError: no such test method in <class 'test_example.TitleSpelling'>: runTest

    I suspect this is due to the very special way in which unittest runs, and I must have missed a trick in how the docs expect me to structure my tests. Any pointers?
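
    For reference, a minimal sketch of an alternative main.py (assuming the same base.py and test_example.py layout as above) that lets the standard library's loader collect the test_* methods, so no runTest method is needed:

        # main.py - sketch only; assumes base.py and test_example.py as shown above.
        import unittest

        import test_example

        if __name__ == "__main__":
            # Let the loader discover test_a and test_b instead of instantiating
            # TitleSpelling() directly, which is what raises the runTest error.
            loader = unittest.TestLoader()
            suite = loader.loadTestsFromTestCase(test_example.TitleSpelling)
            unittest.TextTestRunner(verbosity=2).run(suite)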

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl5.8 with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_tests - it's generally a bad idea not to catch when your unit test dies prematurely. What is the most idiomatic way of running a configurable number of tests, assuming no no_tests or done_testing() is used? Details: my unit tests usually take the form of:

        use Test::More;

        my @test_set = (
             [ "Test #1", $param1, $param2, ... ]
            ,[ "Test #1", $param1, $param2, ... ]
            # ,...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;

        BEGIN {
            our @test_set = (); # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set; # Must do!!! Since first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) { # Same
            run_test($test);
        }

        sub run_test {} # Same

    I feel this can be done more idiomatically, but I am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN {} and after it.

    Read the article

  • Converting time period strings to value/unit pair

    - by randomtoor
    I need to parse the contents of a string that represents a time period. The format of the string is value/unit, e.g. 1s, 60min, 24h. I want to separate the actual value (an int) and the unit (a str) into separate variables. At the moment I do it like this:

        def validate_time(time):
            binsize = time.strip()
            unit = re.sub('[0-9]', '', binsize)
            if unit not in ['s', 'm', 'min', 'h', 'l']:
                print "Error: unit {0} is not valid".format(unit)
                sys.exit(2)
            tmp = re.sub('[^0-9]', '', binsize)
            try:
                value = int(tmp)
            except ValueError:
                print "Error: {0} is not valid".format(time)
                sys.exit(2)
            return value, unit

    However, it is not ideal, as things like 1m0 are also (wrongly) validated (value=10, unit=m). What is the best way to validate/parse this input?
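
    For comparison, a minimal sketch of a stricter parser built around a single anchored regular expression (the unit list simply mirrors the one in the question), so that inputs such as 1m0 are rejected instead of being silently reassembled:

        import re

        # A value followed by exactly one known unit and nothing else; "min" is
        # listed before "m" so the longer unit is preferred.
        _PERIOD_RE = re.compile(r'^(\d+)(s|min|m|h|l)$')

        def parse_time(text):
            match = _PERIOD_RE.match(text.strip())
            if match is None:
                raise ValueError("invalid time period: {0!r}".format(text))
            return int(match.group(1)), match.group(2)

        # parse_time("60min") -> (60, 'min'); parse_time("1m0") raises ValueError.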

    Read the article

  • Writing the tests for FluentPath

    - by Latest Microsoft Blogs
    Writing the tests for FluentPath is a challenge. The library is a wrapper around a legacy API (System.IO) that wasn't designed to be easily testable. If it were more testable, the sensible testing methodology would be to tell System.IO to act against... (read more)

    Read the article

  • New OBI 11g on-line Sales & Pre-sales Partner Assessment Tests

    - by Mike.Hallett(at)Oracle-BI&EPM
    Our OBI partners can now update their specialisation certification to the latest product version 11g for OBI: until recently, the accreditation had examined skills for OBI 10g. New OPN on-line Sales & Pre-sales Assessment Tests are available: Oracle Business Intelligence Foundation Suite 11g Sales Specialist, Oracle Business Intelligence Foundation Suite 11g PreSales Specialist, and Oracle Business Intelligence Foundation Suite 11g Support Specialist.

    Read the article

  • New OBI 11G Online Sales & Pre-Sales Partner Assessment Tests

    - by Cinzia Mascanzoni
    OBI partners can now update their specialization certification to the latest product version 11g for OBI: until recently, the accreditation had examined skills for OBI 10g. New OPN on-line Sales & Pre-sales Assessment Tests are available: Oracle Business Intelligence Foundation Suite 11g Sales Specialist, Oracle Business Intelligence Foundation Suite 11g PreSales Specialist, and Oracle Business Intelligence Foundation Suite 11g Support Specialist. Read more on Specialization.

    Read the article

  • Converting NUnit tests to MSUnit.

    - by TATWORTH
    I created the MSTest project by creating a new class library project and copying the test classes to it. I then followed the instructions in the following posts: http://social.msdn.microsoft.com/Forums/en/vststest/thread/eeb42224-bc1f-476d-98b4-93d0daf44aad and http://dangerz.blogspot.co.uk/2012/01/converting-nunit-to-mstest.html. However, I did not need to add the GUID fix, as I used ReSharper to run both sets of tests.

    Read the article

  • Should selenium tests be written in imperative style?

    - by Amogh Talpallikar
    Is an automation tester supposed to know the concepts of OOP and design patterns in order to write tests in a way where changes and code re-use are possible? For example, I pick up Java to write Cucumber step definitions that instruct a Selenium WebDriver. Should I be using a lot of inheritance, interfaces, delegation, etc. to make life easier, or would that be overly complicated for something that should just be line-by-line instructions?

    Read the article

  • TDD - Outside In vs Inside Out

    - by Songo
    What is the difference between building an application Outside In vs. building it Inside Out using TDD? These are the books I read about TDD and unit testing: Test Driven Development: By Example; Test-Driven Development: A Practical Guide; Real-World Solutions for Developing High-Quality PHP Frameworks and Applications; Test-Driven Development in Microsoft .NET; xUnit Test Patterns: Refactoring Test Code; The Art of Unit Testing: With Examples in .NET; and Growing Object-Oriented Software, Guided by Tests (this one was really hard to understand since Java isn't my primary language :)). Almost all of them explained TDD basics and unit testing in general, but with little mention of the different ways the application can be constructed. Another thing I noticed is that most of these books (if not all) ignore the design phase when writing the application. They focus more on writing the test cases quickly and letting the design emerge by itself. However, I came across a paragraph in xUnit Test Patterns that discussed the ways people approach TDD. There are two schools out there: Outside In vs. Inside Out. Sadly the book doesn't elaborate more on this point. I wish to know what the main difference between these two approaches is. When should I use each one of them? To a TDD beginner, which one is easier to grasp? What are the drawbacks of each method? Are there any materials out there that discuss this topic specifically?

    Read the article

  • Introducing a (new) test method to a team

    - by Jon List
    A couple of months ago I was hired in a new job (I'm fresh out of my Masters in software engineering). The company mainly consists of ERP consultants, but I was hired into their fairly small web department (6 developers); our main task is ERP/ecom integration (ERP-integrated web shops). The department is growing, and recently my manager asked me to start thinking about introducing tests to the team. I love a challenge, but frankly I'm a bit scared (I'm the least experienced member of the team). Currently the method of testing is clicking around in the web shop and asking the customer if the products are there, if they look okay, and if orders are posted correctly to the ERP. We are getting a lot of support cases on previous projects, where a customer or a customer's customer has run into errors, which - I suppose - is why my manager wants more structured testing. Off the top of my head, I thought of some (obvious?) improvements, like looking at the requirement specification, having an issue tracker, enabling team members to register their time on a "tests" line in the budget, and circulating tasks amongst members of the team. But as I see it we have three main challenges: general website testing (JavaScript, C#, ASP.NET and CMS integration tests); (live) ERP integration testing (customers rarely want to pay for test environments); and adopting a method in the team. I like the responsibility, but I am afraid that I'm a little bit in over my head. I expect that my manager expects me to set up some kind of workshop for the team where I present some techniques and ideas and where we (the team) can find some solutions together. What I learned in school was mostly unit testing and program verification, not so much testing across multiple systems and applications. What I'm looking for here is references/advice/pointers/anecdotes; anything that might help me to get smarter and to improve the current method of my team. Thanks!! (TL;DR: read the bold parts)

    Read the article

  • Should we test all our methods?

    - by Zenzen
    So today I had a talk with my teammate about unit testing. The whole thing started when he asked me, "Hey, where are the tests for that class? I see only one." The whole class was a manager (or a service, if you prefer to call it that), and almost all the methods were simply delegating stuff to a DAO, so they looked similar to:

        SomeClass getSomething(parameters) {
            return myDao.findSomethingBySomething(parameters);
        }

    A kind of boilerplate with no logic (or at least I do not consider such simple delegation to be logic), but a useful boilerplate in most cases (layer separation etc.). And we had a rather lengthy discussion about whether or not I should unit test it (I think it is worth mentioning that I did fully unit test the DAO). His main arguments were that it was not TDD (obviously), that someone might want to see the test to check what this method does (I do not know how it could be more obvious), or that in the future someone might want to change the implementation and add new (or more like "any") logic to it (in which case I guess someone should simply test that logic). This made me think, though. Should we strive for the highest test coverage percentage, or is it simply art for art's sake? I simply do not see any reason behind testing things like getters and setters (unless they actually have some logic in them) or "boilerplate" code. Obviously a test for such a method (with mocks) would take me less than a minute, but I guess that is still time wasted, and a millisecond longer for every CI run. Are there any rational/not "flammable" reasons why one should test every single (or as many as one can) line of code?

    Read the article
