Search Results

Search found 12476 results on 500 pages for 'unit testing'.


  • Books or Articles on Using NUnit to Test Entire Features

    - by INTPnerd
    Are there any books or articles that show you how to use NUnit to test entire features of a program? Is there a name for this type of testing? It is different from the typical use of NUnit for unit testing, where you test individual classes, and similar to acceptance testing, except that it is written by the developer to verify that the program does what they understood the customer to want. I don't need it to be readable by non-programmers, or to produce a readable specification for non-programmers. The problem I am having is keeping this feature-testing code maintainable. I need help organizing my feature-testing code, and also help organizing the program code so it can be driven this way: I am having a hard time issuing commands to the program while still keeping a good design.
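
    One common way to keep feature-level tests maintainable is to drive the program through a thin application facade, so the tests never touch UI or infrastructure details directly. A minimal NUnit sketch (the OrderingFeature facade and all its methods are hypothetical, purely for illustration):

        using NUnit.Framework;

        [TestFixture]
        public class OrderingFeatureTests
        {
            private OrderingFeature app;  // hypothetical facade over the real program

            [SetUp]
            public void StartApplication()
            {
                // Boot the program the way Main() would, minus the UI.
                app = OrderingFeature.StartWithEmptyDatabase();
            }

            [Test]
            public void Customer_can_place_and_then_see_an_order()
            {
                app.PlaceOrder("Alice", "Widget", 2);

                var orders = app.OrdersFor("Alice");

                Assert.That(orders.Count, Is.EqualTo(1));
                Assert.That(orders[0].Item, Is.EqualTo("Widget"));
            }
        }

    Keeping the facade in the test project, not the product, separates the "issue commands to the program" concern from the production design.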

    Read the article

  • Good tool to test WCF service (not SOAP/HTTP webservice)

    - by Kangkan
    I recently found SOAPUI and discovered that it is a great tool for testing any SOAP/HTTP service. Conventionally we have developed our own driver to test our services (WCF-based, netTcp binding). But after the SOAPUI experience, I am really looking for a tool that can be used with the same ease, with built-in facilities for load testing, functional testing, etc. The other thought in my mind is that services I intend to deploy over netTcp could first be tested over an HTTP binding using SOAPUI; once found suitable, the binding could be changed to the intended one. I would like to hear the views of the experts here. Thanks.
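
    For what it's worth, swapping the binding for test purposes is purely a configuration change in WCF; nothing in the service code needs to move. A sketch (service and contract names, addresses, and ports are placeholders):

        <system.serviceModel>
          <services>
            <service name="MyService">
              <!-- temporary endpoint so SOAPUI can exercise the service -->
              <endpoint address="http://localhost:8000/MyService"
                        binding="basicHttpBinding"
                        contract="IMyService" />
              <!-- the intended production endpoint -->
              <endpoint address="net.tcp://localhost:8001/MyService"
                        binding="netTcpBinding"
                        contract="IMyService" />
            </service>
          </services>
        </system.serviceModel>

    One caveat: load-test numbers gathered over basicHttpBinding will not transfer to netTcp, so this substitution is best suited to functional testing.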

    Read the article

  • Unittest and mock

    - by user1410756
    I'm testing with unittest in Python and it's OK. Now I have introduced mock and I need to resolve a question. This is my code:

        from mock import Mock
        import unittest

        class Matematica(object):
            def __init__(self, op1, op2):
                self.op1 = op1
                self.op2 = op2
            def adder(self):
                return self.op1 + self.op2
            def subs(self):
                return abs(self.op1 - self.op2)
            def molt(self):
                return self.op1 * self.op2
            def divid(self):
                return self.op1 / self.op2

        class TestMatematica(unittest.TestCase):
            """Tests for the Matematica class"""
            def testing(self):
                """Addition"""
                mat = Matematica(10, 20)
                self.assertEqual(mat.adder(), 30)
                """Subtraction"""
                self.assertEqual(mat.subs(), 10)

        class test_mock(object):
            def __init__(self, matematica):
                self.matematica = matematica
            def execute(self):
                self.matematica.adder()
                self.matematica.adder()
                self.matematica.subs()

        if __name__ == "__main__":
            result = unittest.TextTestRunner(verbosity=2).run(TestMatematica('testing'))
            a = Matematica(10, 20)
            b = test_mock(a)
            b.execute()
            mock_foo = Mock(b.execute)  # return_value = 'rafa'
            mock_foo()
            print mock_foo.called
            print mock_foo.call_count
            print mock_foo.method_calls

    This code runs, and the prints output: True, 1, []. Now I need to count how many times self.matematica.adder() and self.matematica.subs() are called. Thanks!
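
    One way to get those counts (a sketch, not part of the original post) is to wrap the real object in a Mock with wraps=, so the real methods still execute but every call is recorded:

        from mock import Mock

        a = Matematica(10, 20)
        spy = Mock(wraps=a)   # delegates to the real object, records every call

        b = test_mock(spy)
        b.execute()

        print spy.adder.call_count  # -> 2
        print spy.subs.call_count   # -> 1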

    Read the article

  • Do isolation frameworks (Moq, RhinoMock, etc.) lead to test over-specification?

    - by Marius
    In Osherove's great book "The Art of Unit Testing", one of the test anti-patterns is over-specification, which is basically the same as testing the internal state of the object instead of some expected output. In my experience, using isolation frameworks can cause the same unwanted side effects as testing internal behavior, because one tends to implement only the behavior necessary to make the stub interact with the object under test. If the implementation changes later on (but the contract remains the same), the test suddenly breaks because it expects some data from the stub that was never implemented. So what do you think is the best approach to counter this?

      1. Implement your stubs/mocks fully; this has the negative side effect of potentially making your test less readable and of specifying more than is necessary to make the test pass.
      2. Favor manual, fully implemented fakes.
      3. Implement your stubs/fakes so that they make your test just pass, and then deal with the brittleness that this might introduce.

    Read the article

  • Constructing mocks in unit tests

    - by Flynn1179
    Is there any way to have a mock constructed instead of a real instance when testing code that calls a constructor? For example:

        public class ClassToTest
        {
            public void MethodToTest()
            {
                MyObject foo = new MyObject();
                Console.WriteLine(foo.ToString());
            }
        }

    In this example, I need to create a unit test that confirms that calling MethodToTest on an instance of ClassToTest will indeed output the result of the ToString() method of a newly created instance of MyObject. I can't see a way of realistically testing the ClassToTest class in isolation; testing this method would actually test the myObject.ToString() method as well as the MethodToTest method.
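
    The usual escape hatch (a sketch, not from the original question; FakeMyObject is hypothetical and assumes MyObject is substitutable, e.g. via a subclass or an interface) is to inject a factory so the test controls what gets constructed:

        using System;

        public class ClassToTest
        {
            private readonly Func<MyObject> factory;

            // Production code uses the default factory; tests pass their own.
            public ClassToTest() : this(() => new MyObject()) { }

            public ClassToTest(Func<MyObject> factory)
            {
                this.factory = factory;
            }

            public void MethodToTest()
            {
                MyObject foo = factory();
                Console.WriteLine(foo.ToString());
            }
        }

        // In a test: var sut = new ClassToTest(() => new FakeMyObject("known output"));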

    Read the article

  • Python unittest with expensive setup

    - by Staale
    My test file is basically:

        import unittest

        class Test(unittest.TestCase):
            def testOk(self):
                pass

        if __name__ == "__main__":
            expensiveSetup()
            try:
                unittest.main()
            finally:
                cleanUp()

    However, I do wish to run my tests through the NetBeans testing tools, and to do that I need unit tests that don't rely on an environment setup done in main. Looking at http://stackoverflow.com/questions/402483/caching-result-of-setup-using-python-unittest - it recommends using Nose. However, I don't think NetBeans supports this; I didn't find any information indicating that it does. Additionally, I am the only one here actually writing tests, so I don't want to introduce additional dependencies for the other 2 developers unless they are needed. How can I do the setup and cleanup once for all the tests in my TestSuite? The expensive setup here is creating some files with dummy data, as well as setting up and tearing down a simple xml-rpc server. I also have 2 test classes, one testing locally and one testing all methods over xml-rpc.
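
    One runner-agnostic pattern (a sketch; expensiveSetup and cleanUp are the question's own functions) is to make the setup lazy and cached at module level, so whichever tool runs the tests triggers it exactly once:

        import atexit
        import unittest

        _environment = None

        def get_environment():
            """Create the dummy files and xml-rpc server on first use, once."""
            global _environment
            if _environment is None:
                _environment = expensiveSetup()
                atexit.register(cleanUp)   # torn down when the process exits
            return _environment

        class LocalTest(unittest.TestCase):
            def setUp(self):
                self.env = get_environment()   # cheap after the first call

    (On Python 2.7+ the stdlib also offers setUpModule()/tearDownModule() hooks for exactly this, but the lazy-cache version works on older unittest versions too.)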

    Read the article

  • Sequence numbers when testing a Spring application with JUnit (Hibernate, Spring MVC)

    - by MBK
    I am testing a DAO in a Spring application:

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = "classpath:/applicationContext.xml")
        @TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
        @Transactional
        public class CommentDAOImplTest {
            @Autowired
            // testing methods here
        }

    The tests are running fine. I am able to add a comment, and since I have the defaultRollback property set, the added comment is deleted automatically. Happy! Now the problem is with the sequence number for the comment. Can I roll back the sequence number in any way? Any suggestions on that? I don't want to mess up the sequence. The business requires the comment ID to be shown (I still don't know why). I know an in-memory DB is an option, but I am guessing the purpose of defaultRollback is to eliminate the need for in-memory DB testing and mocking (just my opinion).

    Read the article

  • “It’s only test code…”

    - by Chris George
    “Let me hack this in, it’s only test code”, “Don’t worry about getting it reviewed, it’s only test code”, “It doesn’t have to be elegant or efficient, it’s only test code”… do these phrases sound familiar? Chances are, if you’ve been working with test automation, at one point or another you will have heard these phrases; you have probably even used them yourself! What is certain is that code written under this “it’s only test code” mantra will come back and bite you in the arse!

    I’ve recently encountered a case where a test was giving a false positive, thereby hiding a real product bug, because that test code was very badly written. Firstly, it was very difficult to understand what the test was actually trying to achieve, let alone how it was doing it, and this complexity masked a simple logic error. These issues are real and they do happen.

    Let’s take a step back and look at what we are trying to do. We are writing test code that tests product code, and we do this to create a suite of tests that will help protect our software against regressions. This test code makes sure that the product behaves as it should by employing some sort of expected-result verification. The simple cases are generally not a problem. However, automation allows us to explore more complex scenarios in many more permutations. As this complexity increases, so does the complexity of the test code. It is at this point that code which has not been architected properly will cause problems.

    Keep your friends close…

    So, how do we make sure we are doing it right? The development teams I have worked on have always had Test Engineers working very closely with their Software Engineers. This is something I have always tried to take full advantage of. They are coding experts! So run your ideas past them; ask for advice on how to structure your code and for help designing your data structures. This may require a shift in your team’s viewpoint, as, contrary to this section title and folklore, Software Engineers are not actually the mortal enemy of Test Engineers. As time progresses, and test automation becomes more and more ingrained in what we do, the two roles are converging more than ever. Over the 16 years I have spent as a Test Engineer, I have seen the grey area between the two roles grow significantly larger. This serves to strengthen the relationship and common bond between the two roles, which helps to make test code activities so much easier!

    Pair for the win

    Possibly the best thing you could do to write good test code is to pair program on the task. This serves a few purposes:

      • you get the benefit of the Software Engineer’s knowledge and experience
      • the Software Engineer gains knowledge of the testing process. Sharing the love is a wonderful thing!
      • two pairs of eyes are always better than one… and so are two brains

    Between the two of you, I will guarantee you will derive more useful test cases than if it was just one of you.

    Code reviews

    Another policy which certainly pays dividends is the practice of code reviews. Having one of your peers review your code before you commit it serves two purposes. Firstly, it forces you to explain your code, and just the act of doing this will often pick up errors. Secondly, it gets yet another pair of eyes on your code! I cannot stress enough how important code reviews are. The benefits they offer apply as much to product code as to test code. In short, Software and Test Engineers should all be doing them! It can be extended even further by getting test code reviewed by both a Software Engineer and a Test Engineer, and likewise product code. This keeps both functions in the loop with changes going on within your code base.

    Learn from your devs

    I briefly touched on this earlier, but I’d like to go into more detail here. Pairing with your Software Engineers when writing your test code is an amazing opportunity to improve your coding skills. As I sit here writing this article waiting to be called into court for jury service, it reminds me that it takes a lot of patience to be a Test Engineer, almost as much as it takes to be a juror! However tempting it is to go rushing in and start writing your automated tests, resist that urge. Discuss what you want to achieve, then talk through the approach you’re going to take. Then code it up together. I find it really enlightening to ask questions like “is there a better way to do this?” or “is this how you would code it?” The latter question, especially, is where I learn the most. I’ve found that most Software Engineers are reluctant to show you the “right way” to code something when writing tests, because they perceive the “right way” to be too complicated for the Test Engineer (e.g. not mentioning LINQ and instead doing something verbose). So asking how THEY would code it unleashes their true dev-ness, and advanced code usually ensues! I would like to point out, however, that you don’t have to accept their method as the final answer. On numerous occasions I have opted for the more simple/verbose solution, because I found the code written by the Software Engineer too advanced and therefore unreadable when I returned to it in a month’s time! Always keep the target audience in mind when writing clever code, and in my case that is mostly Test Engineers.

    Read the article

  • Apache keeps resetting while testing on localhost...

    - by Scott
    Hello everyone. I'm getting errors while testing web pages on localhost. I'm running Windows 7 64-bit. I'm not using Wamp or Xampp. This is what the error.log tells me (the errors in question are the two "Could not reliably determine" lines):

        [Sat Mar 06 05:10:55 2010] [notice] Apache/2.2.14 (Win32) PHP/5.2.13 configured -- resuming normal operations
        [Sat Mar 06 05:10:55 2010] [notice] Server built: Sep 28 2009 22:41:08
        [Sat Mar 06 05:10:55 2010] [notice] Parent: Created child process 6588
        httpd.exe: Could not reliably determine the server's fully qualified domain name, using 192.168.2.2 for ServerName
        httpd.exe: Could not reliably determine the server's fully qualified domain name, using 192.168.2.2 for ServerName
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Child process is running
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Acquired the start mutex.
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Starting 1000 worker threads.
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Starting thread to listen on port 80.

    Any input would be greatly appreciated. Thanks.
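
    The "Could not reliably determine the server's fully qualified domain name" warning is normally silenced by giving Apache an explicit ServerName in httpd.conf (adjust the name and port to your setup):

        # httpd.conf
        ServerName localhost:80

    That warning by itself is usually harmless, so if Apache is genuinely restarting, the cause may lie elsewhere in the log.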

    Read the article

  • testing ssl cert for smtps => "secure connection could not be established with this website"

    - by cc young
    Testing an SSL cert on a server using a web service: https, imaps, and pop3s all check out, but smtps yields the message "we advise you not to submit any confidential or personal data to this website because a secure connection could not be established with this website." Running postfix, TLS logging shows:

        connect from s097.networking4all.com[213.249.64.242]
        lost connection after UNKNOWN from s097.networking4all.com[213.249.64.242]
        disconnect from s097.networking4all.com[213.249.64.242]

    These work correctly:

        telnet mydomain.net 587
        openssl s_client -starttls smtp -crlf -connect mydomain.net:587

    but I cannot get email clients to log in over SSL on either 587 or 564; I get the same "UNKNOWN" problem. Email over plain SMTP (without SSL) works fine. The test site is http://www.networking4all.com/en/support/tools/site+check/
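
    For comparison: if the client or checker is attempting implicit TLS rather than STARTTLS, the test is different. The registered smtps port is 465, and openssl is then invoked without -starttls (hostname is a placeholder):

        openssl s_client -connect mydomain.net:465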

    Read the article

  • Methodologies for performance-testing a WAN link

    - by Chopper3
    We have a pair of new diversely-routed 1Gbps Ethernet links between locations about 200 miles apart. The 'client' is a new reasonably-powerful machine (HP DL380 G6, dual E56xx Xeons, 48GB DDR3, R1 pair of 300GB 10krpm SAS disks, W2K8R2-x64) and the 'server' is a decent enough machine too (HP BL460c G6, dual E55xx Xeons, 72GB, R1 pair of 146GB 10krpm SAS disks, dual-port Emulex 4Gbps FC HBA linked to dual Cisco MDS9509s then onto dedicated HP EVA 8400 with 128 x 450GB 15krpm FC disks, RHEL 5.3-x64). Using SFTP from the client we're only seeing about 40Kbps of throughput using large (2GB) files. We've performed server to 'other local server' tests and see around 500Mbps through the local switches (Cat 6509s), we're going to do the same on the client side but that's a day or so away. What other testing methods would you use to prove to the link providers that the problem is theirs?
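
    One common methodology is to take SFTP, which is often CPU- and TCP-window-bound, out of the equation and measure raw TCP throughput end-to-end with a tool such as iperf, in both directions and with parallel streams (hostname is a placeholder):

        # on the far end
        iperf -s

        # on the near end: 4 parallel streams, 60-second run, then the reverse direction
        iperf -c farend.example.com -P 4 -t 60 -r

    If a single stream is slow but several streams in parallel fill the link, the limit is usually the bandwidth-delay product (latency times TCP window) rather than the providers' circuit.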

    Read the article

  • gtk2/mate apps are choppy on debian testing [migrated]

    - by b0ti
    I have recently upgraded to a Core i7 system, and now all the MATE apps (GTK2-based) are very slow. Basically, when I switch to a workspace that has a couple of mate-terminals open, they are redrawn as if they were being sent over the network. GTK3 apps and Firefox work properly and are refreshed instantly. I'm running Debian testing. I have tried reinstalling xorg and related packages, and everything seems to work fine except for this. Here is my xorg.log. Any hints?

    Read the article

  • Install package from debian stable unavailable in testing/unstable repositories

    - by overprescribed
    I'm currently running Debian testing and would like to install a package only available in the stable repositories. (I'm surprised I haven't come across this issue before.) I could download the .deb directly and use dpkg to install it manually, but installing packages from one release into another is usually frowned upon. What's the best course of action? EDIT: Zoredache is right; I didn't realize this package has been removed from future versions of Debian because it no longer has a maintainer. It is of course important, as Zoredache also points out, to find out why a particular package has been removed before attempting to install it. I've altered the title slightly to reflect the actual issue.
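
    For the general case, where the package does still exist in stable, the usual mechanism is a stable entry in the sources plus an explicit target release at install time. A sketch (mirror URL and package name are placeholders):

        # /etc/apt/sources.list: add stable alongside testing
        deb http://deb.debian.org/debian stable main

        # then pull just this one package from stable
        apt-get update
        apt-get -t stable install somepackage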

    Read the article

  • Missing kernel on debian-testing-amd64-DVD-1

    - by Kyrol
    I want to install Debian testing, but at a certain point of the installation an error occurs, something like: "Impossible to install kernel: a compatible kernel version is missing." I see that the ISO image of the DVD is less than 4 GB, but it seems impossible that there is no compatible kernel version for the system. I have an Asus X53Sc with an Intel Core i7-2630QM and a GeForce GT 520MX.

    Read the article

  • ipv6 ssh tunnel service for testing?

    - by Geuis
    I need to do some testing on a service that I run to make sure that it can handle ipv6 addresses. Basically, I need to connect to it from an ipv6 address. I've created a tunnel via tunnelbroker.net, but I'm finding the steps required to get a tunnel configured on my machine and router to be a lot of trouble. Given that I'm not a networking specialist and that I haven't had to dig into routing configuration in years, I'd like to know if there's an existing service that I can just ssh into and use it as my ipv6 endpoint. Simply being able to curl or wget from such an endpoint to my service would be more than enough to test what I need. Thanks!

    Read the article

  • Unit testing a controller in ASP.NET MVC 2 with RedirectToAction

    - by Rob Walker
    I have a controller that implements a simple Add operation on an entity and redirects to the Details page:

        [HttpPost]
        public ActionResult Add(Thing thing)
        {
            // ... do validation, db stuff ...
            return this.RedirectToAction(c => c.Details(thing.Id));
        }

    This works great (using the RedirectToAction from the MvcContrib assembly). When unit testing this method, I want to access the ViewData that is returned from the Details action (so I can get the newly inserted thing's primary key and prove it is now in the database). The test has:

        var result = controller.Add(thing);

    But result here is of type System.Web.Mvc.RedirectToRouteResult (which is a System.Web.Mvc.ActionResult); it hasn't yet executed the Details method. I've tried calling ExecuteResult on the returned object, passing in a mocked-up ControllerContext, but the framework wasn't happy with the lack of detail in the mocked object. I could try filling in the details, etc., but then my test code is way longer than the code I'm testing, and I feel I need unit tests for the unit tests! Am I missing something in the testing philosophy? How do I test this action when I can't get at its returned state?
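
    A common approach (a sketch, not from the original question; ThingController stands in for the real controller) is to treat the redirect itself as the action's output: assert on the returned route values, then verify the database directly instead of executing Details:

        [Test]
        public void Add_redirects_to_Details_for_the_new_thing()
        {
            var controller = new ThingController(/* test dependencies */);
            var thing = new Thing();

            var result = controller.Add(thing) as RedirectToRouteResult;

            Assert.IsNotNull(result);
            Assert.AreEqual("Details", result.RouteValues["action"]);
            // the new primary key travels in the route values, not in ViewData
            var newId = (int)result.RouteValues["id"];
            // ...load the entity with newId from the test database and assert on it...
        }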

    Read the article

  • testing devise with shoulda and machinist

    - by mattherick
    Hello! I'd like to test my app with shoulda and machinist. I use the devise authentication gem. I get the following error:

        $ ruby unit/page_test.rb
        c:/Ruby/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/rails/gem_dependency.rb:119:Warning: Gem::Dependency#version_requirements is deprecated and will be removed on or after August 2010. Use #requirement
        c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:443:in `load_missing_constant': uninitialized constant Admins (NameError)
            from c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:80:in `const_missing'
            from c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:92:in `const_missing'
            from c:/Users/Mattherick/Desktop/heimspiel/heimspiel_app/app/controllers/admins_controller.rb:1
            from c:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from c:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:158:in `require'
            from c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:265:in `require_or_load'
            from c:/Ruby/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:224:in `depend_on'
            ... 12 levels...
            from ./unit/../test_helper.rb:2
            from c:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
            from c:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
            from unit/page_test.rb:1

    Does somebody have an idea what's wrong? If I don't use devise, my tests are okay. And my second question: does somebody have a good tutorial on introducing different roles with the devise gem? If I generate my own views and add a few attributes to my devise model, they won't be saved in the database. I read the documentation at GitHub but didn't really understand it. mattherick

    Read the article

  • How to implement or emulate an "abstract" OCUnit test class?

    - by Quinn Taylor
    I have a number of Objective-C classes organized in an inheritance hierarchy. They all share a common parent which implements all the behaviors shared among the children. Each child class defines a few methods that make it work, and the parent class raises an exception for the methods designed to be implemented/overridden by its children. This effectively makes the parent a pseudo-abstract class (since it's useless on its own) even though Objective-C doesn't explicitly support abstract classes. The crux of this problem is that I'm unit testing this class hierarchy using OCUnit, and the tests are structured similarly: one test class that exercises the common behavior, with a subclass corresponding to each of the child classes under test. However, running the test cases on the (effectively abstract) parent class is problematic, since the unit tests will fail in spectacular fashion without the key methods. (The alternative of repeating the common tests across 5 test classes is not really an acceptable option.) The non-ideal solution I've been using is to check (in each test method) whether the instance is the parent test class, and bail out if it is. This leads to repeated code in every test method, a problem that becomes increasingly annoying if one's unit tests are highly granular. In addition, all such tests are still executed and reported as successes, skewing the number of meaningful tests that were actually run. What I'd prefer is a way to signal to OCUnit "Don't run any tests in this class, only run them in its child classes." To my knowledge, there isn't (yet) a way to do that, something similar to a +(BOOL)isAbstractTest method I can implement/override. Any ideas on a better way to solve this problem with minimal repetition? Does OCUnit have any ability to flag a test class in this way, or is it time to file a Radar? Edit: Here's a link to the test code in question. Notice the frequent repetition of if (...) return; to start a method, including use of the NonConcreteClass() macro for brevity.
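
    If memory serves, SenTestCase builds its list of tests via +defaultTestSuite, so one possible (unverified) workaround is to override it in the abstract class and return an empty suite unless the receiver is a concrete subclass. Treat this strictly as a sketch; AbstractThingTests is a placeholder name:

        // Sketch only: assumes SenTestingKit's +defaultTestSuite hook.
        + (id)defaultTestSuite
        {
            if (self == [AbstractThingTests class]) {
                // The abstract class itself contributes no test cases.
                return [SenTestSuite testSuiteWithName:@"AbstractThingTests (abstract, skipped)"];
            }
            return [super defaultTestSuite];
        }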

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (perl 5.8, with $Test::More::VERSION being '0.80') which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan - it's generally a bad idea not to catch when your unit test dies prematurely. What is the most idiomatic way of running a configurable number of tests, assuming no no_plan or done_testing() is used? Details: my unit tests usually take the form:

        use Test::More;

        my @test_set = (
              [ "Test #1", $param1, $param2, ... ]
            , [ "Test #2", $param1, $param2, ... ]
            # , ...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are obviously executed before @test_set is known. My current approach involves making @test_set global and defining it in the BEGIN {} block as follows:

        use Test::More;

        BEGIN {
            our @test_set = (); # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set; # Must do!!! Since the first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) { # Same
            run_test($test);
        }

        sub run_test {} # Same

    I feel this can be done more idiomatically but am not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration - in BEGIN {} and after it.
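
    For what it's worth, even in old Test::More versions plan() may be called at run time, as long as it runs before the first assertion, which removes the need for BEGIN (and the duplicate declaration) entirely. A sketch, with count_tests and run_test as in the question and illustrative data:

        use strict;
        use warnings;
        use Test::More;    # note: no plan given at compile time

        my @test_set = (
            [ "Test #1", 1, 2 ],
            [ "Test #2", 3, 4 ],
        );

        my $expected_tests = 0;
        $expected_tests += count_tests($_) for @test_set;
        plan tests => $expected_tests;

        run_test($_) for @test_set;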

    Read the article

  • Mocking non-virtual methods in C++ without editing production code?

    - by wk1989
    Hello, I am a fairly new software developer currently working on adding unit tests to an existing C++ project that started years ago. For non-technical reasons, I'm not allowed to modify any existing code. The base class of all my modules has a bunch of methods for setting/getting data and communicating with other modules. Since I just want to unit test each individual module, I want to be able to use canned values for all my inter-module communication methods. I.e., for a method Ping() which checks whether another module is active, I want to have it return true or false based on what kind of test I'm doing. I've been looking into Google Test and Google Mock, and they do support mocking non-virtual methods. However, the approach described (http://code.google.com/p/googlemock/wiki/CookBook#Mocking_Nonvirtual_Methods) requires me to "templatize" the original methods to take either real or mock objects. I can't go and templatize my methods in the base class due to the requirement mentioned earlier, so I need some other way of mocking these non-virtual methods. Basically, the methods I want to mock are in some base class, and the modules I want to unit test and create mocks of are derived classes of that base class. There are intermediate modules in between my base Module class and the modules that I want to test. I would appreciate any advice! Thanks, JW EDIT: A more concrete example. My base class is, let's say, rootModule; the module I want to test is leafModule. There is an intermediate module which inherits from rootModule, and leafModule inherits from this intermediate module. In my leafModule, I want to test the doStuff() method, which calls the non-virtual GetStatus(moduleName) defined in the rootModule class. I need to somehow make GetStatus() return a chosen canned value. Mocking is new to me, so is using mock objects even the right approach?
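
    One seam that requires no edits to production code is link-time substitution: the unit-test binary links a stub translation unit that provides the same non-virtual symbols, instead of the real rootModule.cpp. A sketch (the GetStatus signature is assumed; this only links if the real rootModule.cpp is excluded from the test build and GetStatus is not defined inline in the header):

        // test_stubs/root_module_stub.cpp -- linked instead of rootModule.cpp
        #include "rootModule.h"

        namespace {
            bool g_canned_status = true;   // value GetStatus() should report
        }

        // test code calls this to choose the canned value per test case
        void SetCannedStatus(bool status) { g_canned_status = status; }

        bool rootModule::GetStatus(const std::string& moduleName)
        {
            (void)moduleName;              // canned result instead of real module lookup
            return g_canned_status;
        }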

    Read the article

  • Implementing a musical instrument using Audio Units

    - by Develop.Kim
    (Posted the same question at the Apple developer forum, too.) Hi, first, sorry that my English is poor. I want to develop an iPhone application that plays a musical instrument like an ocarina, but I don't need the blow-into-the-mic feature. So I first tried to find out how to implement a "virtual musical instrument" in iPhone development, and after reading this article (link) I decided to implement it using Audio Units. I have two questions.

    1. I understand that an instrument's sound can be divided into three phases: "attack", "sustain", and "release", with "decay" perhaps included as well (link). How do I implement each of these phases on top of AUInstrumentBase?

    2. I downloaded the "SinSynth" sample (link). I want to play notes with this instrument unit so I can analyze and study the source. Is there a way to do this using AU Lab? I expected to use MIDI input, but I don't have a MIDI device.

    In addition, I wonder whether this is even the right approach. Thanks for reading my poor English.
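
    On question 1: attack/decay/sustain/release describe an ADSR envelope, a time-varying gain applied to the oscillator's output, which is how most synths shape a note. A language-level sketch of the idea, independent of AUInstrumentBase (all parameter values are illustrative):

        // Per-sample envelope gain; all times are in samples.
        // t counts from note-on; releaseAt < 0 means the key is still held.
        float AdsrGain(long t, long attack, long decay, float sustain,
                       long releaseAt, long release)
        {
            float level;
            if (t < attack)                        // attack: ramp 0 -> 1
                level = (float)t / attack;
            else if (t < attack + decay)           // decay: ramp 1 -> sustain
                level = 1.0f - (1.0f - sustain) * (float)(t - attack) / decay;
            else                                   // sustain: hold
                level = sustain;

            if (releaseAt < 0 || t < releaseAt)    // note still held
                return level;

            // release: scale the held level down to 0 over 'release' samples
            long dt = t - releaseAt;
            return dt >= release ? 0.0f : level * (1.0f - (float)dt / release);
        }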

    Read the article

  • Running unittest with typical test directory structure.

    - by Major Major
    The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:

        new_project/
            antigravity/
                antigravity.py
            test/
                test_antigravity.py
            setup.py
            etc.

    for example, see this Python project howto. My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail since the module is not on the path. I know I could modify PYTHONPATH and use other search-path-related tricks, but I can't believe that's the simplest way - it's fine if you're the developer, but not realistic to expect your users to use it if they just want to check that the tests are passing. The other alternative is just to copy the test file into the other directory, but that seems a bit dumb and misses the point of having them in a separate directory to start with. So, if you had just downloaded the source to my new project, how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests, do X."
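
    One self-contained answer (a sketch; it assumes antigravity/ has an __init__.py) is to have the test file put the project root on sys.path itself, so "python test/test_antigravity.py" works from anywhere, and then tell users exactly that:

        # test/test_antigravity.py
        import os
        import sys
        sys.path.insert(0, os.path.join(os.path.dirname(__file__), os.pardir))

        import unittest
        from antigravity import antigravity

        class TestAntigravity(unittest.TestCase):
            def test_something(self):
                self.assertTrue(antigravity)   # placeholder assertion

        if __name__ == '__main__':
            unittest.main()

    (Python 2.7+ can also do this without any path tweaking via "python -m unittest discover" run from the project root.)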

    Read the article

  • What are various methods for discovering test cases?

    - by NativeByte
    All, I am a developer, but I like to know more about the testing process and methods. I believe this helps me write more solid code, as it improves the cases I can cover with my unit tests before delivering the product to the test team. I have recently started looking at Test Driven Development and the exploratory testing approach to software projects. Now it's easier for me to find test cases for code that I have written. But I am curious to know how to discover test cases when I am not the developer of the functionality under test. Take, for example, a basic user registration form of the kind we see on various websites. Assuming the person testing it is not the developer of the form, how should one go about testing the input fields on the form? What would be your strategy? How would you discover test cases? I believe this kind of testing benefits from the exploratory testing approach; I may be wrong here, though. I would appreciate your views on this. Thanks, Byte

    Read the article

  • How can this Ambient Context become null?

    - by Mark Seemann
    Can anyone help me explain how TimeProvider.Current can become null in the following class?

        public abstract class TimeProvider
        {
            private static TimeProvider current = DefaultTimeProvider.Instance;

            public static TimeProvider Current
            {
                get { return TimeProvider.current; }
                set
                {
                    if (value == null)
                    {
                        throw new ArgumentNullException("value");
                    }
                    TimeProvider.current = value;
                }
            }

            public abstract DateTime UtcNow { get; }

            public static void ResetToDefault()
            {
                TimeProvider.current = DefaultTimeProvider.Instance;
            }
        }

    Observations:

      • All unit tests that directly reference TimeProvider also invoke ResetToDefault() in their fixture teardown.
      • There is no multithreaded code involved.
      • Once in a while, one of the unit tests fails because TimeProvider.Current is null (a NullReferenceException is thrown). This only happens when I run the entire suite, not when I just run a single unit test, suggesting to me that there is some subtle test interdependence going on.
      • It happens approximately once every five or six test runs.
      • When a failure occurs, it seems to occur in the first executed test that involves TimeProvider.Current. More than one test can fail, but only one fails in a given test run.

    FWIW, here's the DefaultTimeProvider class as well:

        public class DefaultTimeProvider : TimeProvider
        {
            private readonly static DefaultTimeProvider instance = new DefaultTimeProvider();

            private DefaultTimeProvider() { }

            public override DateTime UtcNow
            {
                get { return DateTime.UtcNow; }
            }

            public static DefaultTimeProvider Instance
            {
                get { return DefaultTimeProvider.instance; }
            }
        }

    I suspect that there's some subtle interplay going on with static initialization, where the runtime is actually allowed to access TimeProvider.Current before all static initialization has finished, but I can't quite put my finger on it. Any help is appreciated.

    Read the article

  • Quick, Linux-compatible unit-aware calculator

    - by endolith
    I want to be able to press a keyboard combination, start typing a mathematical expression that includes units and slightly advanced math (not just a four-function calculator), and get a result immediately, in units that I specify, that I can copy and paste.

    Currently I open Firefox and press Ctrl+K, type in the search box, and it usually gives me a result in the drop-down from Google Calculator. It doesn't always, though, so I press "=" at the end, wait for a result, remove the equals, wait for a result, realize it doesn't understand the way I typed a unit, open the result in a new tab, etc. It sucks. Wolfram Alpha is smarter, but very much slower, the output is all images rather than text, and I don't have a quick widget for it, if such a thing could even exist.

    GNU units has a ton of units, which is great, and I can define my own units, which is great, but they have to be written in specific, unintuitive ways, it doesn't handle much advanced math, and I'd need to open a terminal, start units, etc. I hate the command line. I wasted a lot of time trying to make front-ends for units in Deskbar and Launchy, but I'm not a real coder and I don't use either of those anymore. Any other solutions or enhancements of these?
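
    For reference, GNU units can at least be driven non-interactively, which makes it scriptable behind any launcher or hotkey tool; the -t flag gives terse output suitable for copy and paste (the expressions are just examples):

        $ units -t '60 mph' 'km/hour'
        96.56064
        $ units -t '2 hours + 15 minutes' 'seconds'
        8100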

    Read the article
