Search Results

Search found 94 results on 4 pages for 'teardown'.

Page 3 of 4

  • Making ehcache read-write for test code and read-only for production code

    - by Rick
    I would like to annotate many of my Hibernate entities that contain reference data and/or configuration data with @Cache(usage = CacheConcurrencyStrategy.READ_ONLY) However, my JUnit tests are setting up and tearing down some of this reference/configuration data using the Hibernate entities. Is there a recommended way of having entities be read-write during test setup and teardown but read-only for production code? Two of my immediate thoughts for non-ideal workarounds are: Using NONSTRICT_READ_WRITE, but I am not sure what the hidden downsides are. Creating subclassed entities in my test code to override the read-only cache annotation. Any recommendations on the cleanest way to handle this? (Note: Project uses maven.)

    Read the article

  • Unity: How to remove (unregister) a registered instance from a Unity mapping.

    - by bug0r
    Hello, I've run into a problem that I can't solve. I have the following: UnityHelper.DefaultContainer.RegisterInstance(typeof(IMyInterface), "test", instance); where UnityHelper.DefaultContainer is my helper for getting the Unity container with its configuration loaded. Here I registered instance as an instance of IMyInterface. Some time later I want to remove this mapping entirely. How can I do it? I have tried UnityHelper.DefaultContainer.Teardown(instance) but it was unsuccessful, and the following code returns the instance anyway: UnityHelper.DefaultContainer.ResolveAll() Any ideas? Thank you.

    Read the article

  • Eclipse Outline View - Visible JavaScript Categories in Eclipse?

    - by leeand00
    I just found an option in the little white down arrow in Eclipse that reads "Visible Categories..." How can I use this? It seems to me that it could be used to only show functions that have an @category in their comments, but I haven't been able to make that work. If it did work it would be incredibly useful for separating out Unit Tests from their common-functions and separating them from the setUp and tearDown methods, so what is it really for? By the way I'm editing a Javascript file in the Eclipse "Javascript Editor", I don't know if that makes any difference or not.

    Read the article

  • UnitTest++ constructing fixtures multiple times?

    - by Peter
    I'm writing some unit tests in UnitTest++ and want to write a bunch of tests which share some common resources. I thought that this should work via their TEST_FIXTURE setup, but it seems to be constructing a new fixture for every test. Sample code: #include <UnitTest++.h> struct SomeFixture { SomeFixture() { // this line is hit twice } }; TEST_FIXTURE(SomeFixture, FirstTest) { } TEST_FIXTURE(SomeFixture, SecondTest) { } I feel like I must be doing something wrong; I had thought that the whole point of having the fixture was so that the setup/teardown code only happens once. Am I wrong on this? Is there something else I have to do to make it work that way?

    Read the article

  • Unit Testing in QTestLib - running single test / tests in class / all tests

    - by Dave
    I'm just starting to use QTestLib. I have gone through the manual and tutorial. Although I understand how to create tests, I'm just not getting how to make those tests convenient to run. My unit test background is NUnit and MSTest. In those environments, it was trivial (using a GUI, at least) to alternate between running a single test, or all tests in a single test class, or all tests in the entire project, just by clicking the right button. All I'm seeing in QTestLib is either you use the QTEST_MAIN macro to run the tests in a single class, then compile and test each file separately; or use QTest::qExec() in main() to define which objects to test, and then manually change that and recompile when you want to add/remove test classes. I'm sure I'm missing something. I'd like to be able to easily: Run a single test method Run the tests in an entire class Run all tests Any of those would call the appropriate setup / teardown functions.

    Read the article

  • what is the right way to exit Windows Service OnStart if configuration is wrong and nothing to do in

    - by matti
    Is something like this ok? protected override void OnStart(string[] args) { if (SomeApp.Initialize()) { SomeApp.StartMonitorAndWork(); base.OnStart(args); } } protected override void OnStop() { SomeApp.TearDown(); base.OnStop(); } Here Initialize reads a config file, and if it's wrong there's nothing to do, so the service should STOP! If the config is ok, StartMonitorAndWork starts: Timer(new TimerCallback(DoWork), null, startTime, loopTime); and DoWork polls the database periodically. The question is: "Is exiting OnStart without doing anything enough if Initialize returns false?" OR should there be something like this: private void ExitService() { this.OnStop(); System.Environment.Exit(1); } protected override void OnStart(string[] args) { if (ObjectFolderApp.Initialize()) { SomeApp.StartMonitorAndWork(); base.OnStart(args); } else { ExitService(); } } Thanks & BR - Matti

    Read the article

  • Using StructureMap, how do you explicitly trigger the reinstantiation of an object with InstanceScope

    - by Mark Rogers
    I have an integration test harness where I want to tear down and then re-instantiate some of the singleton-scoped objects I've registered with StructureMap, after and before each test. This way I can simulate the actual runtime environment, but not have the singleton's state being passed from one test to another. Maybe this isn't a great way to do an integration test, but I'm running out of alternative solutions (read: open to any advice). So can an object with InstanceScope.Singleton be re-instantiated? What's the best way to re-instantiate a singleton-scoped object with StructureMap?

    Read the article

  • using ruby test and selenium grid how can I keep the same browser window for multiple tests?

    - by George Horlacher
    Each of my tests start a new selenium client browser and tear it down so they can run stand alone with this code: def setup if $selenium @selenium = $selenium else @selenium = Selenium::SeleniumDriver.new("#$sell_server", 4444, "#$browser", "http://#$network.#$host:2086", 10000); @selenium.start end @selenium.set_context("test_login") end def teardown @selenium.stop unless $selenium assert_equal [], @verification_errors end What I'd like is to run a suite of tests that all use the same browser and don't keep opening and closing new browsers for every test. I've tried using $selenium as a global object / browser but each test still opens up a new browser and closes it. How should this be done?

    Read the article

  • How to use @BeforeClass and @AfterClass in JUnitPerf?

    - by allenzzzxd
    Hi, I want to do some actions before the whole test suite (and also after it). So I wrote something like this: public class PerformanceTest extends TestCase { @BeforeClass public static void suiteSetup() throws Exception { //do something } @AfterClass public static void suiteTearDown() throws Exception { //do something } @Before public void setUp() throws Exception { //do something } @After public void tearDown() throws Exception { //do something } public PerformanceTest(String testName){ super(testName); } public static Test suite() { TestSuite suite = new TestSuite(); Test testcase1 = new PerformanceTest("DoTest1"); Test loadTest1 = new LoadTest(testcase1, n); Test testcase2 = new PerformanceTest("DoTest2"); Test loadTest2 = new LoadTest(testcase2, n); suite.addTest(loadTest1); suite.addTest(loadTest2); return suite; } public void DoTest1() throws Throwable { //do something } public void DoTest2() throws Throwable { //do something } } But I found that it never reaches the code in @BeforeClass and @AfterClass. What can I do to solve this problem? Or is there another way to achieve this? Thank you for your help.

    Read the article

  • Sending object C from class A to class B

    - by user278618
    Hi, I can't figure out how to design the classes in my system. In class A I create a selenium object (it simulates user actions at the website). In this class A I also create other objects like SearchScreen, Payment_Screen and Summary_Screen. # -*- coding: utf-8 -*- from selenium import selenium import unittest, time, re class OurSiteTestCases(unittest.TestCase): def setUp(self): self.verificationErrors = [] self.selenium = selenium("localhost", 5555, "*chrome", "http://www.someaddress.com/") time.sleep(5) self.selenium.start() def test_buy_coffee(self): sel = self.selenium sel.open('/') sel.window_maximize() search_screen=SearchScreen(self.selenium) search_screen.choose('lavazza') payment_screen=PaymentScreen(self.selenium) payment_screen.fill_test_data() summary_screen=SummaryScreen(selenium) summary_screen.accept() def tearDown(self): self.selenium.stop() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main() Here's an example SearchScreen module: class SearchScreen: def __init__(self,selenium): self.selenium=selenium def search(self): self.selenium.click('css=button.search') I want to know whether the design of those classes is OK.
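
    Not from the original thread, but one common way to keep this kind of design tidy is to give every screen object a small shared base class that just stores the selenium instance handed to it, so the test case creates the driver once and passes the same object around. A minimal sketch (class names and locator strings are illustrative, not from the post):

        class BaseScreen(object):
            """Base class so every screen object shares one selenium instance."""
            def __init__(self, selenium):
                self.selenium = selenium

        class SearchScreen(BaseScreen):
            def search(self, phrase):
                # Locator strings here are purely illustrative.
                self.selenium.type('css=input.search', phrase)
                self.selenium.click('css=button.search')

        class PaymentScreen(BaseScreen):
            def fill_test_data(self):
                self.selenium.type('css=input.card-number', '4111111111111111')

    The test case then builds each screen with the same self.selenium it created in setUp, so only one browser session exists per test.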

    Read the article

  • python unit testing: os.remove fails on the file system

    - by hwjp
    I'm doing a bit of unit testing on a function which attempts to open a new file, but should fail if the file already exists. When the function runs successfully, the new file is created, so I want to delete it after every test run, but it doesn't seem to be working: class MyObject_Initialisation(unittest.TestCase): def setUp(self): if os.path.exists(TEMPORARY_FILE_NAME): try: os.remove(TEMPORARY_FILE_NAME) except WindowsError: #TODO: can't figure out how to fix this... #time.sleep(3) #self.setUp() #this just loops forever pass def tearDown(self): self.setUp() Any thoughts? The WindowsError thrown seems to suggest the file is in use... could it be that the tests are run in parallel threads? I've read elsewhere that it's 'bad practice' to use the filesystem in unit testing, but really? Surely there's a way around this that doesn't involve dummying the filesystem?
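
    One pattern that often clears this up (assuming the WindowsError really means a handle to the temporary file is still open) is to make sure the code under test closes the file it opens, and to retry the delete a bounded number of times instead of recursing into setUp. A rough sketch; the file name constant is illustrative:

        import os
        import time
        import unittest

        TEMPORARY_FILE_NAME = 'unit_test_scratch.txt'  # illustrative value

        class MyObject_Initialisation(unittest.TestCase):

            def _remove_temp_file(self, retries=5, delay=0.5):
                # Bounded retry instead of recursing forever; WindowsError is a
                # subclass of OSError, so catching OSError covers it portably.
                for _ in range(retries):
                    try:
                        if os.path.exists(TEMPORARY_FILE_NAME):
                            os.remove(TEMPORARY_FILE_NAME)
                        return
                    except OSError:
                        time.sleep(delay)
                self.fail('could not delete %s; is a handle still open?' % TEMPORARY_FILE_NAME)

            def setUp(self):
                self._remove_temp_file()

            def tearDown(self):
                self._remove_temp_file()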

    Read the article

  • How do I get the name of the test method that was run in a TestNG tear down method?

    - by Zachary Spencer
    Basically I have a tear down method that I want to log to the console which test was just run. How would I go about getting that string? I can get the class name, but I want the actual method that was just executed. class TestSomething { @AfterMethod public void tearDown() { System.out.println("The test that just ran was.... " + getTestThatJustRanMethodName()); } @Test public void testCase() { assertTrue(1==1); } } should output to the screen: "The test that just ran was.... testCase" However I don't know the magic that getTestThatJustRanMethodName should actually be.

    Read the article

  • python - selenium script syntax error

    - by William Hawkes
    Okay, I used selenium to test some automation, which I got to work. I did an export of the script for python. When I tried to run the python script it generated, it gave me a "SyntaxError: invalid syntax" error message. Here's the python script in question: from selenium import selenium import unittest, time, re class WakeupCall(unittest.TestCase): def setUp(self): self.verificationErrors = [] self.selenium = selenium("localhost", 4444, "*chrome", "http://the.web.site") self.selenium.start() def test_wakeup_call(self): sel = self.selenium sel.open("/index.php#deposit") sel.wait_for_page_to_load("30000") sel.click("link=History") sel.wait_for_page_to_load("30000") try: self.failUnless(sel.is_text_present("key phrase number 1.")) except AssertionError, e: self.verificationErrors.append(str(e)) The last line is what generated the "SyntaxError: invalid syntax" error message. A "^" was under the comma. The rest of the script goes as follows: def tearDown(self): self.selenium.stop() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main()
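
    A caret under the comma usually means the script is being run with Python 3, which no longer accepts the old "except AssertionError, e" form; the "as" form works on Python 2.6+ and 3.x alike. A small self-contained sketch of the corrected exception handling (the assertion text is just a stand-in for the real page check):

        import unittest

        class SyntaxDemo(unittest.TestCase):
            def test_exception_handling(self):
                verification_errors = []
                try:
                    # "except AssertionError, e" is a SyntaxError on Python 3;
                    # "except AssertionError as e" is valid on 2.6+ and 3.x.
                    self.assertTrue('key phrase number 1.' in 'page text')
                except AssertionError as e:
                    verification_errors.append(str(e))
                self.assertEqual(1, len(verification_errors))

        if __name__ == '__main__':
            unittest.main()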

    Read the article

  • Whitelist IP from google-authenticator in sshd pam

    - by spudwaffle
    My Ubuntu 12.04 server uses the google-authenticator pam module to provide two step authentication for ssh. I need to make it so that a certain IP does not need to type the verification code. The /etc/pam.d/sshd file is below: # PAM configuration for the Secure Shell service # Read environment variables from /etc/environment and # /etc/security/pam_env.conf. auth required pam_env.so # [1] # In Debian 4.0 (etch), locale-related environment variables were moved to # /etc/default/locale, so read that as well. auth required pam_env.so envfile=/etc/default/locale # Standard Un*x authentication. @include common-auth # Disallow non-root logins when /etc/nologin exists. account required pam_nologin.so # Uncomment and edit /etc/security/access.conf if you need to set complex # access limits that are hard to express in sshd_config. # account required pam_access.so # Standard Un*x authorization. @include common-account # Standard Un*x session setup and teardown. @include common-session # Print the message of the day upon successful login. session optional pam_motd.so # [1] # Print the status of the user's mailbox upon successful login. session optional pam_mail.so standard noenv # [1] # Set up user limits from /etc/security/limits.conf. session required pam_limits.so # Set up SELinux capabilities (need modified pam) # session required pam_selinux.so multiple # Standard Un*x password updating. @include common-password auth required pam_google_authenticator.so I've already tried adding a auth sufficient pam_exec.so /etc/pam.d/ip.sh line above the google-authenticator line, but I can't understand how to check an IP adress in the bash script.

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
    Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me, it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days now. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: The scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as something like  'Doing real-world TDD in .NET , with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make:  It’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and me myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spent my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude. Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider to do professional work without having this add-in installed... The three parts of a software component Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:   First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. In either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. It’s development is entirely driven by the requirements and their executable formulation. This is the delivery, the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how  the product is developed, he is only interested in the fact that it is developed as cost-effective as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article… An example To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here… The requirement As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces high componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with: namespace Calculator {     /// <summary>     /// Defines a very simple calculator component for demo purposes.     /// </summary>     public interface ICalculator     {         /// <summary>         /// Gets the result of the last successful operation.         /// </summary>         /// <value>The last result.</value>         /// <remarks>         /// Will be <see langword="null" /> before the first successful operation.         /// </remarks>         double? LastResult { get; }       } // interface ICalculator   } // namespace Calculator So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation than about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point, where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part… The ‘Green’ Making the test green is quite trivial. Just make LastResult an automatic property:     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult { get; private set; }     }         One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:         ...   /// <summary>         /// Adds the specified operands.         /// </summary>         /// <param name="operand1">The operand1.</param>         /// <param name="operand2">The operand2.</param>         /// <returns>The result of the additon.</returns>         /// <exception cref="ArgumentException">         /// Argument <paramref name="operand1"/> is &lt; 0.<br/>         /// -- or --<br/>         /// Argument <paramref name="operand2"/> is &lt; 0.         /// </exception>         double Add(double operand1, double operand2);       } // interface ICalculator A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:   Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location.  This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).     Back to our Calculator again: Two more R# – clicks implement the Add() skeleton:         ...           public double Add(double operand1, double operand2)         {             throw new NotImplementedException();         }       } // class Calculator As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test: [Test] [Row(-0.5, 2)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); } As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I only will have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like that: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         #region ICalculator           public double? LastResult { get; private set; }           public double Add(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 + operand2).Value;         }           public double Subtract(double operand1, double operand2)         {             ThrowIfOneOperandIsInvalid(operand1, operand2);               return (this.LastResult = operand1 - operand2).Value;         }           #endregion // ICalculator           #region Implementation (Helper)           private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)         {             if (operand1 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand1");             }               if (operand2 < 0.0)             {                 throw new ArgumentException("Value must not be negative.", "operand2");             }         }           #endregion // Implementation (Helper)       } // class Calculator   } // namespace Calculator But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precise. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can be even the difference between project success and failure… Conclusions The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call it eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software, code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to go into the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is. With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…).  As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (This might sound heavy, but that is mainly due to the fact that software development standards only begin to evolve. The entire software development profession is very young, historically seen; only at the very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio sounds no longer extraordinary…) Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks -  is just not acceptable and a very bad decision in business terms (though I quite some times have seen and heard that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows… The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the so manufactured ‘product’ is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • Selenium error on Win7

    - by hawkeye
    I'm starting Selenium server with the following on a command line: java -jar selenium-server.jar Here is the code: import com.thoughtworks.selenium.*; import java.util.regex.Pattern; import org.openqa.selenium.server.SeleniumServer; import junit.framework.*; public class orkut extends SeleneseTestCase { public void setUp() throws Exception { //SeleniumServer server = new SeleniumServer(); //server.start(); setUp("https://www.google.com/", "*firefox C:\\Program Files (x86)\\Mozilla Firefox\\firefox.exe"); } public void testOrkut() throws Exception { selenium.setTimeout("10000"); selenium.open("/accounts/ServiceLogin?service=orkut&hl=en-US&rm=false&continue=http%3A%2F%2Fwww.orkut.com%2FRedirLogin%3Fmsg%3D0&cd=IN&skipvpage=true&sendvemail=false"); selenium.type("Email", "username"); selenium.type("Passwd", "password"); selenium.click("signIn"); selenium.selectFrame("orkutFrame"); selenium.click("link=Communities"); selenium.waitForPageToLoad("10000"); } public static Test suite() { return new TestSuite(orkut.class); } public void tearDown(){ selenium.stop(); } public static void main(String args[]) { junit.textui.TestRunner.run(suite()); } } Here is the error: .E Time: 33.386 There was 1 error: 1) testOrkut(orkut)java.lang.RuntimeException: Could not start Selenium session: Failed to start new browser session: Unable to delete file C:\Users\user\AppData\Local\Temp\customProfileDir78cf02e3efca4772a71525c4a7523cac\parent.lock at com.thoughtworks.selenium.DefaultSelenium.start(DefaultSelenium.java:89) at com.thoughtworks.selenium.SeleneseTestBase.setUp(SeleneseTestBase.java:123) at com.thoughtworks.selenium.SeleneseTestBase.setUp(SeleneseTestBase.java:104) at com.thoughtworks.selenium.SeleneseTestCase.setUp(SeleneseTestCase.java:78) at orkut.setUp(orkut.java:14) at com.thoughtworks.selenium.SeleneseTestCase.runBare(SeleneseTestCase.java:212) at orkut.main(orkut.java:37) Caused by: com.thoughtworks.selenium.SeleniumException: Failed to start new browser session: Unable to delete file C:\Users\M022534\AppData\Local\Temp\customProfileDir78cf02e3efca4772a71525c4a7523cac\parent.lock at com.thoughtworks.selenium.HttpCommandProcessor.throwAssertionFailureExceptionOrError(HttpCommandProcessor.java:97) at com.thoughtworks.selenium.HttpCommandProcessor.doCommand(HttpCommandProcessor.java:91) at com.thoughtworks.selenium.HttpCommandProcessor.getString(HttpCommandProcessor.java:262) at com.thoughtworks.selenium.HttpCommandProcessor.start(HttpCommandProcessor.java:223) at com.thoughtworks.selenium.DefaultSelenium.start(DefaultSelenium.java:81) ... 16 more FAILURES!!! Tests run: 1, Failures: 0, Errors: 1

    Read the article

  • Selenium - Could not start Selenium session: Failed to start new browser session: Error while launching browser

    - by Yatendra Goel
    I am new to Selenium. I generated my first java selenium test case and it has compiled successfully. But when I run that test I got the following RuntimeException java.lang.RuntimeException: Could not start Selenium session: Failed to start new browser session: Error while launching browser at com.thoughtworks.selenium.DefaultSelenium.start <DefaultSelenium.java:88> Kindly tell me how can I fix this error. This is the java file I want to run. import com.thoughtworks.selenium.*; import java.util.regex.Pattern; import junit.framework.*; public class orkut extends SeleneseTestCase { public void setUp() throws Exception { setUp("https://www.google.com/", "*chrome"); } public void testOrkut() throws Exception { selenium.setTimeout("10000"); selenium.open("/accounts/ServiceLogin?service=orkut&hl=en-US&rm=false&continue=http%3A%2F%2Fwww.orkut.com%2FRedirLogin%3Fmsg%3D0&cd=IN&skipvpage=true&sendvemail=false"); selenium.type("Email", "username"); selenium.type("Passwd", "password"); selenium.click("signIn"); selenium.selectFrame("orkutFrame"); selenium.click("link=Communities"); selenium.waitForPageToLoad("10000"); } public static Test suite() { return new TestSuite(orkut.class); } public void tearDown(){ selenium.stop(); } public static void main(String args[]) { junit.textui.TestRunner.run(suite()); } } I first started the selenium server through the command prompt and then execute the above java file through another command prompt. Second Question: Can I do right click on a specified place on a webpage with selenium.

    Read the article

  • Webdriver with Python

    - by vishal kharge
    I had written a script in Java with WebDriver and it worked fine; below is the code for the sample: import org.junit.After; import org.junit.AfterClass; import org.junit.Before; import org.junit.BeforeClass; import org.openqa.selenium.WebDriver; import org.openqa.selenium.WebDriverBackedSelenium; import org.openqa.selenium.firefox.FirefoxDriver; import com.thoughtworks.selenium.Selenium; import java.util.*; import java.lang.Thread.*; public class Login { @BeforeClass public static void setUpBeforeClass() throws Exception { } @AfterClass public static void tearDownAfterClass() throws Exception { } @Before public void setUp() throws Exception { } @After public void tearDown() throws Exception { } public static void main(String[] args) { WebDriver driver = new FirefoxDriver(); Selenium selenium = new WebDriverBackedSelenium(driver, "http://192.168.10.10:8080/"); selenium.open("/"); selenium.keyPress("name=user_id", "admin"); } } But my requirement is to implement the same in Python with WebDriver. Can you please let me know how this can be done with the above example and the WebDriver binaries, and how to do the setup for the same?
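
    Not from the original thread, but a rough Python counterpart of the Java snippet above, assuming the Python selenium package is installed (pip install selenium) and reusing the URL and field name from the Java code; the older find_element_by_name style of the WebDriver API is assumed here:

        from selenium import webdriver

        # Rough Python equivalent of the Java WebDriver sample above.
        driver = webdriver.Firefox()
        try:
            driver.get('http://192.168.10.10:8080/')
            driver.find_element_by_name('user_id').send_keys('admin')
        finally:
            driver.quit()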

    Read the article

  • Selenium RC: how to capture/handle error?

    - by KenBurnsFan1
    Hi, My test uses Selenium to loop through a CSV list of URLs via an HTTP proxy (working script below). As I watch the script run I can see about 10% of the calls produce "Proxy error: 502" ("Bad_Gateway"); however, the errors are not captured by my catch-all "except Exception" clause -- ie: instead of writing 'error' in the appropriate row of the "output.csv", they get passed to the else clause and produce a short piece of html that starts: "Proxy error: 502 Read from server failed: Unknown error." Also, if I collect all the URLs which returned 502s and re-run the script, they all pass, which leads me to believe that this is a sporadic network path issue. Question: Can the script be made to recognize the the 502 errors, sleep a minute, and then retry the URL instead of moving on to the next URL in the list? The only alternative that I can think of is to apply re.search("Proxy error: 502") after "get_html_source" as a way to catch the bad calls. Then, if the RE matches, put the script to sleep for a minute and then retry 'sel.open(row[0]' on the URL which produced the 502. Any advice would be much appreciated. Thanks! #python 2.6 from selenium import selenium import unittest, time, re, csv, logging class Untitled(unittest.TestCase): def setUp(self): self.verificationErrors = [] self.selenium = selenium("localhost", 4444, "*firefox", "http://baseDomain.com") self.selenium.start() self.selenium.set_timeout("60000") def test_untitled(self): sel = self.selenium spamReader = csv.reader(open('ListOfSubDomains.csv', 'rb')) for row in spamReader: try: sel.open(row[0]) except Exception: ofile = open('output.csv', 'ab') ofile.write("error" + '\n') ofile.close() else: time.sleep(5) html = sel.get_html_source() ofile = open('output.csv', 'ab') ofile.write(html.encode('utf-8') + '\n') ofile.close() def tearDown(self): self.selenium.stop() self.assertEqual([], self.verificationErrors) if __name__ == "__main__": unittest.main()
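
    A sketch of the retry idea described above: check get_html_source for the 502 marker, sleep, and retry a bounded number of times before giving up (MAX_RETRIES and the delay values are illustrative, not from the post):

        import re
        import time

        MAX_RETRIES = 3            # illustrative
        RETRY_DELAY_SECONDS = 60   # illustrative

        def fetch_with_retry(sel, url):
            """Open url via Selenium RC, retrying while the proxy answers with a 502 page."""
            for _ in range(MAX_RETRIES):
                sel.open(url)
                time.sleep(5)
                html = sel.get_html_source()
                if not re.search(r'Proxy error: 502', html):
                    return html
                time.sleep(RETRY_DELAY_SECONDS)
            return 'error'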

    Read the article

  • Selenium tests not building due to NUnit error (Mono+OS X)

    - by Jem
    I'm running Selenium RC on my Mac and driving my tests using NUnit in C#. My problem is that when I try and build a simple test in Mono I get the following error. Error CS0433: The imported type `NUnit.Framework.Assert' is defined multiple times (CS0433) (TestProject) When I comment out the Assert's it runs fine. The code I'm using currently is just a dump from the openqa site using System; using System.Text; using System.Text.RegularExpressions; using System.Threading; using NUnit.Framework; using Selenium; namespace SeleniumTests { [TestFixture] public class AllTests { private ISelenium selenium; private StringBuilder verificationErrors; [SetUp] public void SetupTest () { selenium = new DefaultSelenium ("localhost", 4444, "*safari", "http://www.google.co.uk"); selenium.Start (); verificationErrors = new StringBuilder (); } [TearDown] public void TeardownTest () { try { selenium.Stop (); } catch (Exception) { // Ignore errors if unable to close the browser } Assert.AreEqual ("", verificationErrors.ToString ()); } [Test] public void GoogleHomepageTests () { // Open Google search engine. selenium.Open ("http://www.google.com/"); // Assert Title of page. Assert.AreEqual ("Google", selenium.GetTitle ()); // Provide search term as "Selenium OpenQA" selenium.Type ("q", "Selenium OpenQA"); // Read the keyed search term and assert it. Assert.AreEqual ("Selenium OpenQA", selenium.GetValue ("q")); // Click on Search button. selenium.Click ("btnG"); // Wait for page to load. selenium.WaitForPageToLoad ("5000"); // Assert that "www.openqa.org" is available in search results. Assert.IsTrue (selenium.IsTextPresent ("www.openqa.org")); // Assert that page title is - "Selenium OpenQA - Google Search" Assert.AreEqual ("Selenium OpenQA - Google Search", selenium.GetTitle ()); } } } Any ideas? Is it a OSX/Mono thing?

    Read the article

  • Multiple responses from identical calls in asynch QUnit + Mockjax tests

    - by NickL
    I'm trying to test some jQuery ajax code using QUnit and Mockjax and have it return different JSON for different tests, like this: $(document).ready(function() { function functionToTest() { return $.getJSON('/echo/json/', { json: JSON.stringify({ "won't": "run" }) }); } module("first"); test("first test", function() { stop(); $.mockjax({ url: '/echo/json/', responseText: JSON.stringify({ hello: 'HEYO!' }) }); functionToTest().done(function(json) { ok(true, json.hello); start(); }); }); test("second test", function() { stop(); $.mockjax({ url: '/echo/json/', responseText: JSON.stringify({ hello: 'HELL NO!' }) }); functionToTest().done(function(json) { ok(true, json.hello); start(); }); }); }); Unfortunately it returns the same response for each call, and order can't be guaranteed, so was wondering how I could set it up so that it was coupled to the actual request and came up with this: $.mockjax({ url: '/echo/json/', response: function(settings) { if (JSON.parse(settings.data.json).order === 1) { this.responseText = JSON.stringify({ hello: 'HEYO!' }); } else { this.responseText = JSON.stringify({ hello: 'HELL NO!' }); } } }); This relies on parameters being sent to the server, but what about requests without parameters, where I still need to test different responses? Is there a way to use QUnit's setup/teardown to do this?

    Read the article

  • Unable to use nMock GetProperty routine on a property of an inherited object...

    - by Chris
    I am getting this error when trying to set an expectation on an object I mocked that inherits from MembershipUser: ContactRepositoryTests.UpdateTest : FailedSystem.InvalidProgramException: JIT Compiler encountered an internal limitation. Server stack trace: at MockObjectType1.ToString() Exception rethrown at [0]: at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg) at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(ref MessageData msgData, Int32 type) at System.Object.ToString() at NMock2.Internal.ExpectationBuilder.On(Object receiver) Here are the tools I am using... VS2008 (SP1) Framework 3.5 nUnit 2.4.8 nMock 2.0.0.44 Resharper 4.1 I am at a loss as to why this would be happening. Any help would be appreciated. Test Class... [TestFixture] public class AddressRepositoryTests { private Mockery m_Mockery; private Data.IAddress m_MockDataAddress; private IUser m_MockUser; [SetUp] public void Setup() { m_Mockery = new Mockery(); m_MockDataAddress = m_Mockery.NewMock<Data.IAddress>(); m_MockUser = m_Mockery.NewMock<IUser>(); } [TearDown] public void TearDown() { m_Mockery.Dispose(); } [Test] public void CreateTest() { string line1 = "unitTestLine1"; string line2 = "unitTestLine2"; string city = "unitTestCity"; int stateId = 1893; string postalCode = "unitTestPostalCode"; int countryId = 223; bool active = false; int createdById = 1; Expect.Once .On(m_MockUser) .GetProperty("Identity") .Will(Return.Value(createdById)); Expect.Once .On(m_MockDataAddress) .Method("Insert") .With( line1, line2, city, stateId, postalCode, countryId, active, createdById, Is.Anything ) .Will(Return.Value(null)); IAddressRepository addressRepository = new AddressRepository(m_MockDataAddress); IAddress address = addressRepository.Create( line1, line2, city, stateId, postalCode, countryId, active, m_MockUser ); Assert.IsNull(address); } } User Class... public interface IUser { int? Identity { get; set; } int? CreatedBy { get; set; } DateTime CreatedOn { get; set; } int? ModifiedBy { get; set; } DateTime? ModifiedOn { get; set; } string UserName { get; } object ProviderUserKey { get; } string Email { get; set; } string PasswordQuestion { get; } string Comment { get; set; } bool IsApproved { get; set; } bool IsLockedOut { get; } DateTime LastLockoutDate { get; } DateTime CreationDate { get; } DateTime LastLoginDate { get; set; } DateTime LastActivityDate { get; set; } DateTime LastPasswordChangedDate { get; } bool IsOnline { get; } string ProviderName { get; } string ToString(); string GetPassword(); string GetPassword(string passwordAnswer); bool ChangePassword(string oldPassword, string newPassword); bool ChangePasswordQuestionAndAnswer(string password, string newPasswordQuestion, string newPasswordAnswer); string ResetPassword(string passwordAnswer); string ResetPassword(); bool UnlockUser(); } public class User : MembershipUser, IUser { #region Public Properties private int? m_Identity; public int? Identity { get { return m_Identity; } set { if (value <= 0) throw new Exception("Address.Identity must be greater than 0."); m_Identity = value; } } public int? CreatedBy { get; set; } private DateTime m_CreatedOn = DateTime.Now; public DateTime CreatedOn { get { return m_CreatedOn; } set { m_CreatedOn = value; } } public int? ModifiedBy { get; set; } public DateTime? ModifiedOn { get; set; } #endregion Public Properties #region Public Constructors public User() { } #endregion Public Constructors } Address Class... public interface IAddress { int? 
Identity { get; set; } string Line1 { get; set; } string Line2 { get; set; } string City { get; set; } string PostalCode { get; set; } bool Active { get; set; } int? CreatedBy { get; set; } DateTime CreatedOn { get; set; } int? ModifiedBy { get; set; } DateTime? ModifiedOn { get; set; } } public class Address : IAddress { #region Public Properties private int? m_Identity; public int? Identity { get { return m_Identity; } set { if (value <= 0) throw new Exception("Address.Identity must be greater than 0."); m_Identity = value; } } public string Line1 { get; set; } public string Line2 { get; set; } public string City { get; set; } public string PostalCode { get; set; } public bool Active { get; set; } public int? CreatedBy { get; set; } private DateTime m_CreatedOn = DateTime.Now; public DateTime CreatedOn { get { return m_CreatedOn; } set { m_CreatedOn = value; } } public int? ModifiedBy { get; set; } public DateTime? ModifiedOn { get; set; } #endregion Public Properties } AddressRepository Class... public interface IAddressRepository { IAddress Create(string line1, string line2, string city, int stateId, string postalCode, int countryId, bool active, IUser createdBy); } public class AddressRepository : IAddressRepository { #region Private Properties private Data.IAddress m_DataAddress; private Data.IAddress DataAddress { get { if (m_DataAddress == null) m_DataAddress = new Data.Address(); return m_DataAddress; } set { m_DataAddress = value; } } #endregion Private Properties #region Public Constructor public AddressRepository() { } public AddressRepository(Data.IAddress dataAddress) { DataAddress = dataAddress; } #endregion Public Constructor #region Public Methods public IAddress Create(string line1, string line2, string city, int stateId, string postalCode, int countryId, bool active, IUser createdBy) { if (String.IsNullOrEmpty(line1)) throw new Exception("You must enter a Address Line 1 to register."); if (String.IsNullOrEmpty(city)) throw new Exception("You must enter a City to register."); if (stateId <= 0) throw new Exception("You must select a State to register."); if (String.IsNullOrEmpty(postalCode)) throw new Exception("You must enter a Postal Code to register."); if (countryId <= 0) throw new Exception("You must select a Country to register."); DataSet dataSet = DataAddress.Insert( line1, line2, city, stateId, postalCode, countryId, active, createdBy.Identity, DateTime.Now ); return null; } #endregion Public Methods } DataAddress Class... public interface IAddress { DataSet GetByAddressId (int? AddressId); DataSet Update (int? AddressId, string Address1, string Address2, string City, int? StateId, string PostalCode, int? CountryId, bool? IsActive, Guid? ModifiedBy); DataSet Insert (string Address1, string Address2, string City, int? StateId, string PostalCode, int? CountryId, bool? IsActive, int? CreatedBy, DateTime? CreatedOn); } public class Address : IAddress { public DataSet GetByAddressId (int? AddressId) { Database database = DatabaseFactory.CreateDatabase(); DbCommand dbCommand = database.GetStoredProcCommand("prAddress_GetByAddressId"); DataSet dataSet; try { database.AddInParameter(dbCommand, "AddressId", DbType.Int32, AddressId); dataSet = database.ExecuteDataSet(dbCommand); } catch (SqlException sqlException) { string callMessage = "prAddress_GetByAddressId " + "@AddressId = " + AddressId; throw new Exception(callMessage, sqlException); } return dataSet; } public DataSet Update (int? AddressId, string Address1, string Address2, string City, int? 
StateId, string PostalCode, int? CountryId, bool? IsActive, Guid? ModifiedBy) { Database database = DatabaseFactory.CreateDatabase(); DbCommand dbCommand = database.GetStoredProcCommand("prAddress_Update"); DataSet dataSet; try { database.AddInParameter(dbCommand, "AddressId", DbType.Int32, AddressId); database.AddInParameter(dbCommand, "Address1", DbType.AnsiString, Address1); database.AddInParameter(dbCommand, "Address2", DbType.AnsiString, Address2); database.AddInParameter(dbCommand, "City", DbType.AnsiString, City); database.AddInParameter(dbCommand, "StateId", DbType.Int32, StateId); database.AddInParameter(dbCommand, "PostalCode", DbType.AnsiString, PostalCode); database.AddInParameter(dbCommand, "CountryId", DbType.Int32, CountryId); database.AddInParameter(dbCommand, "IsActive", DbType.Boolean, IsActive); database.AddInParameter(dbCommand, "ModifiedBy", DbType.Guid, ModifiedBy); dataSet = database.ExecuteDataSet(dbCommand); } catch (SqlException sqlException) { string callMessage = "prAddress_Update " + "@AddressId = " + AddressId + ", @Address1 = " + Address1 + ", @Address2 = " + Address2 + ", @City = " + City + ", @StateId = " + StateId + ", @PostalCode = " + PostalCode + ", @CountryId = " + CountryId + ", @IsActive = " + IsActive + ", @ModifiedBy = " + ModifiedBy; throw new Exception(callMessage, sqlException); } return dataSet; } public DataSet Insert (string Address1, string Address2, string City, int? StateId, string PostalCode, int? CountryId, bool? IsActive, int? CreatedBy, DateTime? CreatedOn) { Database database = DatabaseFactory.CreateDatabase(); DbCommand dbCommand = database.GetStoredProcCommand("prAddress_Insert"); DataSet dataSet; try { database.AddInParameter(dbCommand, "Address1", DbType.AnsiString, Address1); database.AddInParameter(dbCommand, "Address2", DbType.AnsiString, Address2); database.AddInParameter(dbCommand, "City", DbType.AnsiString, City); database.AddInParameter(dbCommand, "StateId", DbType.Int32, StateId); database.AddInParameter(dbCommand, "PostalCode", DbType.AnsiString, PostalCode); database.AddInParameter(dbCommand, "CountryId", DbType.Int32, CountryId); database.AddInParameter(dbCommand, "IsActive", DbType.Boolean, IsActive); database.AddInParameter(dbCommand, "CreatedBy", DbType.Int32, CreatedBy); database.AddInParameter(dbCommand, "CreatedOn", DbType.DateTime, CreatedOn); dataSet = database.ExecuteDataSet(dbCommand); } catch (SqlException sqlException) { string callMessage = "prAddress_Insert " + "@Address1 = " + Address1 + ", @Address2 = " + Address2 + ", @City = " + City + ", @StateId = " + StateId + ", @PostalCode = " + PostalCode + ", @CountryId = " + CountryId + ", @IsActive = " + IsActive + ", @CreatedBy = " + CreatedBy + ", @CreatedOn = " + CreatedOn; throw new Exception(callMessage, sqlException); } return dataSet; } }
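
    The stack trace dies inside MockObjectType1.ToString() while NMock2 is building the expectation, and IUser redeclares string ToString() on top of the one every object already has, so that redeclaration is one thing worth ruling out. The sketch below is only a diagnostic experiment, not a known fix: it mocks a hypothetical trimmed interface with no ToString() member and sets the same GetProperty expectation used in the failing test, relying only on the NMock2/NUnit calls that already appear above.

    using NMock2;
    using NUnit.Framework;

    // Hypothetical trimmed interface: same idea as IUser, minus the
    // redeclared ToString() that Object already provides.
    public interface ILeanUser
    {
        int? Identity { get; set; }
        string UserName { get; }
    }

    [TestFixture]
    public class LeanUserMockTests
    {
        private Mockery m_Mockery;
        private ILeanUser m_MockUser;

        [SetUp]
        public void Setup()
        {
            m_Mockery = new Mockery();
            m_MockUser = m_Mockery.NewMock<ILeanUser>();
        }

        [TearDown]
        public void TearDown()
        {
            m_Mockery.Dispose();
        }

        [Test]
        public void GetPropertyExpectationWorksWithoutToStringRedeclaration()
        {
            // Same expectation style as the failing CreateTest above.
            Expect.Once
                .On(m_MockUser)
                .GetProperty("Identity")
                .Will(Return.Value(1));

            Assert.AreEqual(1, m_MockUser.Identity);
            m_Mockery.VerifyAllExpectationsHaveBeenMet();
        }
    }

    If this version passes while the original still throws, the ToString() member on IUser (or inherited from MembershipUser) is the place to dig; if it fails the same way, the problem lies elsewhere in the proxy generation.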

    Read the article

  • How do I work with constructs in PHPUnit?

    - by Ben Dauphinee
    I am new to PHPUnit and just digging through the manual. I cannot find a decent end-to-end example of how to build a complete test, so I am left with questions. One of them is: how can I prepare my environment to properly test my code? I am trying to figure out how to pass the various configuration values needed both by the test setup/teardown methods and by the class under test.

    // How can I set these variables when testing starts?
    protected $_db = null;
    protected $_config = null;

    // So that this function runs properly?
    public function setUp() {
        $this->_acl = new acl(
            $this->_db,     // The database connection for the class, passed
                            // from whatever test constructor
            $this->_config  // Config values passed in from the constructor
        );
    }

    // Can I just drop in a constructor like this and have it work properly?
    // And if so, how can I set up the constructor call properly?
    public function __construct(
        Zend_Db_Adapter_Abstract $db,
        $config = array(),
        $baselinedatabase = NULL,
        $databaseteardown = NULL
    ) {
        $this->_db = $db;
        $this->_config = $config;
        $this->_baselinedatabase = $baselinedatabase;
        $this->_databaseteardown = $databaseteardown;
    }

    // Or is this the wrong idea to be pursuing?
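
    One pattern that fits PHPUnit's lifecycle, sketched below purely as an illustration rather than a canonical answer: leave the test-case constructor alone (PHPUnit reserves its signature for the framework's own bookkeeping) and build the connection and config inside setUp(), undoing that work in tearDown(). The acl class and its two-argument constructor come from the question above; the tests/config.php path, the config keys and the Zend_Db settings are assumptions, not part of PHPUnit's API.

    <?php
    // A minimal sketch, assuming Zend Framework 1 is on the include path
    // and that tests/config.php returns an array of settings.
    require_once 'Zend/Db.php';

    class AclTest extends PHPUnit_Framework_TestCase
    {
        protected $_db = null;
        protected $_config = null;
        protected $_acl = null;

        // Runs before every test: build the environment here instead of
        // in a custom constructor.
        public function setUp()
        {
            $this->_config = include 'tests/config.php';
            $this->_db = Zend_Db::factory(
                $this->_config['db']['adapter'],
                $this->_config['db']['params']
            );
            $this->_acl = new acl($this->_db, $this->_config);
        }

        // Runs after every test: undo whatever setUp() created.
        public function tearDown()
        {
            if ($this->_db !== null) {
                $this->_db->closeConnection();
            }
            $this->_db = null;
            $this->_acl = null;
        }

        public function testAclCanBeConstructed()
        {
            $this->assertNotNull($this->_acl);
        }
    }

    If the values really are global to the whole suite, another common place to load them is a bootstrap file referenced from phpunit.xml, which again keeps the test-case constructor untouched.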

    Read the article

  • How can I run Ruby specs and/or tests in MacVim without locking up MacVim?

    - by Henry
    About six months ago I switched from TextMate to MacVim for all of my development work, which primarily consists of coding in Ruby, Ruby on Rails and JavaScript. With TextMate, whenever I needed to run a spec or a test, I could just press Command+R on the test or spec file; another window would open and the results would be displayed with the 'pretty' format applied. If the spec or test was a lengthy one, I could continue working with the codebase, since the test or spec was running in a separate process and window. After the test ran, I could click through the results directly to the corresponding line in the spec file. Tim Pope's excellent rails.vim plugin comes very close to emulating this behavior within the MacVim environment: running :Rake when the current buffer is a test or spec runs the file and then splits the window to display the results, and you can navigate through the results and key through to the corresponding spot in the file. The problem with the rails.vim approach is that it locks up the MacVim window while the test runs, which can be an issue with big apps that have a lot of setup/teardown built into their tests. Also, the visual red/green HTML results that TextMate displays (via --format pretty, I'm assuming) are a bit easier to scan than the split window. This guy came close about 18 months ago: http://cassiomarques.wordpress.com/2009/01/09/running-rspec-files-from-vim-showing-the-results-in-firefox/ The script he has works with a bit of hacking, but the tests still run within MacVim and lock up the current window. Any ideas on how to fully replicate the TextMate behavior described above in MacVim? Thanks!

    Read the article

< Previous Page | 1 2 3 4  | Next Page >