Search Results

Search found 8185 results on 328 pages for 'technical tests'.


  • mercurial: how to synchronize mq patches from a master repo as mq patches to a set of clone repos

    - by dim
    I have to run a dozen different build tests on a code base maintained in a Mercurial repository. I don't want to run these tests serially on the same repository because they modify a set of common files, and I want to run them in parallel on different machines. Also, after all tests have run I want access to the latest test results from those test work areas. Currently I clone the master repository a dozen times and run a different test in each clone. Before each test execution I do a pull/update/purge preparation sequence in order to start the test from the latest clean state. That works well for me. I'm also preparing new changes using the mq extension, and I would like to test them on all clones, as above, before committing them. For testing candidate mq patches I want to somehow deploy/synchronize them so they are available in the test clones, and apply the ones ready for testing using a guard before running the test. Has anybody done this kind of synchronization before? What's the simplest way to do it? Do I need versioned mq patches for that?

    Read the article

  • Quality assurance in small developer teams

    - by Kim L
    Ideally, in a project you will have developers, testers, QA manager(s), etc., who all contribute to the quality of the code. But what if you don't have those kinds of resources? If you have, for example, three developers and don't have the resources to hire a full-time QA manager, how do you ensure that the code quality meets the set standards? What kinds of things do you pay attention to in quality assurance? Quality isn't just about the code doing what it is supposed to do (code that is properly tested with automated tests). Quality is also about the code being clean (readable, maintainable, well structured, documented, etc.). I'm looking forward to hearing what kinds of processes you have applied in your team to ensure that quality meets the set standards. We've applied a process where we rotate the QA role between the developers. Each developer is responsible for QA one week at a time. Each changeset is reviewed and checked to confirm that existing tests pass, that any required new tests have been written, that the code is clean and, of course, that the project builds.

    Read the article

  • Why is ipdb not displaying stdout after my input?

    - by BryanWheelock
    I can't figure out why ipdb is not displaying stdout properly. I'm trying to debug why a test is failing, so I attempt to use the ipdb debugger. For some reason my input seems to be accepted, but the stdout is not displayed until I (c)ontinue. Is this something broken in ipdb? It makes it very difficult to debug a program. Below is an example ipdb session; notice how I attempt to display the values of the attributes with user.is_authenticated(), user_profile.reputation and user.is_superuser. The results are not displayed until the 'begin captured stdout' section:

        In [13]: !python manage.py test
        Creating test database...
        <SNIP: removed loading tables>
        nosetests ...E..
        > /Users/Bryan/work/APP/forum/auth.py(93)can_retag_questions()
             92     import ipdb; ipdb.set_trace()
        ---> 93     return user.is_authenticated() and (
             94         RETAG_OTHER_QUESTIONS <= user_profile.reputation < EDIT_OTHER_POSTS or
        user.is_authenticated()
        user_profile.reputation
        user.is_superuser
        c
        F
        > /Users/Bryan/work/APP/forum/auth.py(93)can_retag_questions()
             92     import ipdb; ipdb.set_trace()
        ---> 93     return user.is_authenticated() and (
             94         RETAG_OTHER_QUESTIONS <= user_profile.reputation < EDIT_OTHER_POSTS or
        c
        .....EE......
        FAIL: test_can_retag_questions (APP.forum.tests.test_views.AuthorizationFunctionsTestCase)
        Traceback (most recent call last):
          File "/Users/Bryan/work/APP/../APP/forum/tests/test_views.py", line 71, in test_can_retag_questions
            self.assertTrue(auth.can_retag_questions(user))
        AssertionError:
        -------------------- >> begin captured stdout << ---------------------
        ipdb> True
        ipdb> 4001
        ipdb> False
        ipdb>
        --------------------- >> end captured stdout << ----------------------

        Ran 20 tests in 78.032s
        FAILED (errors=3, failures=1)
        Destroying test database...

        In [14]:

    Here is the actual function I'm trying to test:

        def can_retag_questions(user):
            """Determines if a User can retag Questions."""
            user_profile = user.get_profile()
            import ipdb; ipdb.set_trace()
            return user.is_authenticated() and (
                RETAG_OTHER_QUESTIONS <= user_profile.reputation < EDIT_OTHER_POSTS or
                user.is_superuser)

    I've also tried to use pdb, but that doesn't display anything. I see my test progress ...., and then nothing, and it is not responsive to keyboard input. Is this a problem with readline?

    Read the article

  • Web Automation Tool

    - by Aaron
    I've realized I need a full-fledged browser automation tool for testing user interactions with our JavaScript widget library. I was using QUnit, starting with unit testing, and then I unwisely started incorporating more and more functional tests. That was a bad idea: trying to simulate a lot of user actions with JavaScript. The timing issues have gotten out of control and have made the suite too brittle. Now I spend more time fixing the tests than I do developing. Is it possible to find a browser automation tool that works on Windows XP (IE6, 7, 8, FF3) and OS X (Safari, FF3)? I've looked into Selenium IDE and RC, but there seem to be some IE8 problems. I've also seen some things about Google's WebDriver, which confusingly seems to work with Selenium. Our organization has licenses for IBM's Rational Functional Tester, but I don't think that will work on the Mac. The idea is to try to run tests on all the browsers our organization supports. Doable? Are my requirements unrealistic? Any recommendations as far as software to try? Thanks!

    Read the article

  • SQLite/Fluent NHibernate integration test harness initialization not repeatable after large data set

    - by Mark Rogers
    In one of my main data integration test harnesses I create and use Fluent NHibernate's SingleConnectionSessionSourceForSQLiteInMemoryTesting to get a fresh session for each test. After each test, I close the connection, session, and session factory, and throw out the nested StructureMap container they came from. This works for almost any simple data integration test I can think of, including ones that utilize Fluent NHibernate's PersistenceSpecification object. When I test the application's lengthy database bootstrapping process, which creates and saves thousands of domain objects, I start seeing issues. It's not that the setup and tear-down fail; in fact, the test successfully bootstraps the in-memory database just as the application would bootstrap the real database in the production environment. The problem occurs when the database bootstrapping happens a second time on a new in-memory database, with a new session and session factory. The error is:

        NHibernate.StaleStateException : Unexpected row count: 0; expected: 1

    The row count is indeed unexpected; the row that the application under test is looking for should be in the session. You see, it's not that any data from the last integration test is sticking around, it's that for some reason the session just stops working mid-database-bootstrap. And I've looked everywhere for a place I might be holding on to an old session and I can't find one. I've searched through the code for static singleton objects, but there are none anywhere near the code in question. I have a couple of StructureMap InstanceScope singletons, but they are getting thrown out with each nested container that is lost after every test teardown. I've tried every possible variation on disposing and closing every object involved with each test teardown, and it still fails on this lengthy database bootstrap. But non-bootstrap-related database tests appear to work fine. I'm starting to run out of options and may have to surrender lengthy database integration tests in favor of WatiN-based acceptance tests. Can anyone give me any clue about how I can figure out why some of my SingleConnectionSessionSourceForSQLiteInMemoryTesting sessions aren't repeatable? Any advice at all about how to make an NHibernate SQLite database integration test harness repeatable?

    Read the article

  • NHibernate error recovery

    - by Berryl
    I downloaded Rhino Security today and started going through some of the tests. Several that run perfectly in isolation start getting errors after one that purposely raises an exception runs, though. Here is that test:

        [Test]
        public void EntitesGroup_CanCreate()
        {
            var group = _authorizationRepository.CreateEntitiesGroup("Accounts");
            _session.Flush();
            _session.Evict(group);

            var fromDb = _session.Get<EntitiesGroup>(group.Id);
            Assert.NotNull(fromDb);
            Assert.That(fromDb.Name, Is.EqualTo(group.Name));
        }

    And here are the tests and error messages that fail:

        [Test]
        public void User_CanSave()
        {
            var ayende = new User { Name = "ayende" };
            _session.Save(ayende);
            _session.Flush();
            _session.Evict(ayende);

            var fromDb = _session.Get<User>(ayende.Id);
            Assert.That(fromDb, Is.Not.Null);
            Assert.That(ayende.Name, Is.EqualTo(fromDb.Name));
        }

        ----> System.Data.SQLite.SQLiteException : Abort due to constraint violation column Name is not unique

        [Test]
        public void UsersGroup_CanCreate()
        {
            var group = _authorizationRepository.CreateUsersGroup("Admininstrators");
            _session.Flush();
            _session.Evict(group);

            var fromDb = _session.Get<UsersGroup>(group.Id);
            Assert.NotNull(fromDb);
            Assert.That(fromDb.Name, Is.EqualTo(group.Name));
        }

        failed: NHibernate.AssertionFailure : null id in Rhino.Security.Tests.User entry (don't flush the Session after an exception occurs)

    Does anyone see how I can reset the state of the in-memory SQLite db after the first test? I changed the code to use NUnit instead of xUnit, so maybe that is part of the problem here as well. Cheers, Berryl
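
    Not from the Rhino Security code itself, but one hedged way to get each test back to a clean in-memory database is to re-export the schema onto the open connection in the per-test setup, so every test starts empty regardless of what the previous (intentionally failing) test left behind. The exact SchemaExport.Execute overload varies between NHibernate versions, so treat this as a sketch, not the harness's actual code:

        using NHibernate;
        using NHibernate.Cfg;
        using NHibernate.Tool.hbm2ddl;

        public static class InMemoryDb
        {
            // An in-memory SQLite database lives only as long as its connection, so
            // exporting the schema onto the freshly opened session's connection
            // (re)creates empty tables for each test.
            public static ISession OpenCleanSession(Configuration configuration, ISessionFactory sessionFactory)
            {
                ISession session = sessionFactory.OpenSession();
                new SchemaExport(configuration).Execute(false, true, false, session.Connection, null);
                return session;
            }
        }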

    Read the article

  • Use external datasource with NUnit's TestCaseAttribute

    - by Hamman359
    Is it possible to get the values for a TestCaseAttribute from an external data source such as an Excel spreadsheet, CSV file or database? I.e. have a .csv file with one row of data per test case and pass that data to NUnit one row at a time. Here's the specific situation I'd like to use this for. I'm currently merging some features from one system into another. This is pretty much just a copy-and-paste process from the old system into the new one. Unfortunately, the code being moved not only does not have any tests, but is not written in a testable manner (i.e. it is tightly coupled with the database and other code). Taking the time to make the code testable isn't really possible since it's a big mess, I'm on a tight schedule, and the entire feature is scheduled to be rewritten from the ground up in the next 6-9 months. However, since I don't like the idea of not having any tests around the code, I'm going to create some simple Selenium tests using WebDriver to test the page through the UI. While this is not ideal, it's better than nothing. The page in question has about 10 input values and about 20 values that I need to assert against after the calculations are completed, with about 30 valid combinations of values that I'd like to test. I already have the data in a spreadsheet, so it'd be nice to simply be able to pull that out rather than having to re-type it all in Visual Studio.
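
    TestCaseAttribute arguments have to be compile-time constants, so they can't come straight from a file; NUnit's companion TestCaseSourceAttribute (available since NUnit 2.5) can point at a factory member instead, and that member is free to read a CSV file or a database. A minimal, hedged sketch, with a hypothetical CSV path and placeholder assertion:

        using System.Collections.Generic;
        using System.IO;
        using NUnit.Framework;

        [TestFixture]
        public class CalculationPageTests
        {
            // Hypothetical CSV file: one row per case, inputs first, expected values after.
            private static IEnumerable<string> CsvRows()
            {
                foreach (string line in File.ReadAllLines(@"TestData\calculation-cases.csv"))
                    yield return line;
            }

            [Test, TestCaseSource("CsvRows")]
            public void CalculationPage_ProducesExpectedValues(string csvRow)
            {
                string[] fields = csvRow.Split(',');
                // Drive the page through WebDriver with the input fields here, then
                // assert the ~20 calculated values against the expected fields.
                Assert.That(fields.Length, Is.GreaterThan(0));
            }
        }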

    Read the article

  • Getting a "No default module defined for this application" exception while running controller unit tests

    - by Doron
    I have an application with the default directory structure, for an application without custom modules (see the structure at the end). I have written a ControllerTestCase.php file as instructed in many tutorials, and I've created the appropriate bootstrap file as well (once again, see the listings at the end). I've written some model tests which run just fine, but when I started writing the index controller test file, with only one test in it, containing only one line ("$this->dispatch('/');"), I get the following exception when running phpunit (while navigating to the same location with the browser works fine):

        1) IndexControllerTest::test_indexAction_noParams
        Zend_Controller_Exception: No default module defined for this application

    Why is this? What have I done wrong?

    Appendixes. Directory structure:

        proj/
            application/
                controllers/
                    IndexController.php
                    ErrorController.php
                config/
                layouts/
                models/
                views/
                    helpers/
                    scripts/
                        error/
                            error.phtml
                        index/
                            index.phtml
                Bootstrap.php
            library/
            tests/
                application/
                    controllers/
                        IndexControllerTest.php
                    models/
                    bootstrap.php
                    ControllerTestCase.php
                library/
                phpunit.xml
            public/
                index.php

    (Basically I have some more files in the models directory, but that's not relevant to this question.)

    application.ini file:

        [production]
        phpSettings.display_startup_errors = 0
        phpSettings.display_errors = 0
        includePaths.library = APPLICATION_PATH "/../library"
        bootstrap.path = APPLICATION_PATH "/Bootstrap.php"
        bootstrap.class = "Bootstrap"
        appnamespace = "My"
        resources.frontController.controllerDirectory = APPLICATION_PATH "/controllers"
        resources.frontController.params.displayExceptions = 0
        resources.layout.layoutPath = APPLICATION_PATH "/layouts/scripts/"
        resources.view[] =
        phpSettings.date.timezone = "America/Los_Angeles"

        [staging : production]

        [testing : production]
        phpSettings.display_startup_errors = 1
        phpSettings.display_errors = 1

        [development : production]
        phpSettings.display_startup_errors = 1
        phpSettings.display_errors = 1
        resources.frontController.params.displayExceptions = 1

    tests/application/bootstrap.php file:

        <?php
        error_reporting(E_ALL);

        // Define path to application directory
        defined('APPLICATION_PATH')
            || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../../application'));

        // Define application environment
        defined('APPLICATION_ENV')
            || define('APPLICATION_ENV', (getenv('APPLICATION_ENV') ? getenv('APPLICATION_ENV') : 'testing'));

        // Ensure library/ is on include_path
        set_include_path(implode(PATH_SEPARATOR, array(
            realpath(APPLICATION_PATH . '/../library'), // this project's library
            get_include_path(),
        )));

        /** Zend_Application */
        require_once 'Zend/Application.php';
        require_once 'ControllerTestCase.php';

    Read the article

  • Continuous Integration with 64-bit Sharepoint and TFS 2008?

    - by Hirvox
    I've set up a 64-bit TFS 2008 build server with Sharepoint, continuous integration and out-of-the-box MSTest. Unit tests for plain business logic classes run just fine and test results are published into TFS. However, any test that uses Sharepoint's API fails horribly, SPFarm.Local returning null and so on. Is there a way to fix this? The tests run fine in an otherwise identical 32-bit development environment (Windows Server 2008 under Hyper-V, Sharepoint patched up to June 2009 cumulative update) from both Visual Studio and command line, so the problem is not about improper use of SPContext.Current or any other part of the API that needs to be run in a web server context. I've ruled out permissions issues, because the build agent account can deploy the solution and create site collections just fine with stsadm. The next culprit could be that the unit tests were being run with a 32-bit process, which couldn't access the 64-bit Sharepoint API properly. I tried a workaround, but it has the side effect of disabling TFS support in MSTest. Do I have to wait for 2010 versions of MS tools (and hope for the best) or is there a third-party test framework available that runs natively in 64 bit and can publish test results into TFS 2008?

    Read the article

  • Is it possible to run PHPUnit with a dynamically loaded extension library?

    - by therefromhere
    I have a suite of PHPUnit tests for my extension, and I want to run them as part of the extension's Hudson build process. So I want to run PHPUnit specifying the extension library to load at runtime, but I can't figure out how to do this. My directory structure is as follows:

        /myextension.c
        /otherextensionfiles.*
        /modules/myextension.so
        /tests/unittests.php

    I've tried running PHPUnit with a configuration XML file as follows:

        <phpunit>
          <php>
            <ini name="extension_dir" value="../modules/"/>
            <ini name="extension" value="myextension.so"/>
          </php>
        </phpunit>

    and then running it as follows (from the tests directory):

        phpunit --configuration config.xml unittests.php

    But then I get Fatal error: Call to undefined function myfunction(), so it's not loading the library. I've also tried:

        phpunit -d extension_dir=../modules/ -d extension=myextension.so unittests.php

    and also adding dl('myextension.so') to the test setup, but no joy. If it's relevant, this is using PHP 5.2 and PHPUnit 3.4.11.

    Read the article

  • C#: How to run NUnit from my code

    - by Flavio
    Hello, I'd like to use NUnit to run unit tests in my plug-in, but it needs to be run in the context of my application. To solve this, I was trying to develop a plug-in that runs NUnit, which in turn will execute my tests in the application's context. I didn't find specific documentation on this subject, so I dug up a piece of information here and there and came up with the following piece of code (which is similar to one I found here on StackOverflow):

        SimpleTestRunner runner = new SimpleTestRunner();
        TestPackage package = new TestPackage("Test");
        string loc = Assembly.GetExecutingAssembly().Location;
        package.Assemblies.Add(loc);
        if (runner.Load(package))
        {
            TestResult result = runner.Run(new NullListener());
        }

    The result variable says "has no TestFixture" although I know for sure it is there. In fact my test file contains two tests. Using another approach I found, which is summarized by the following code:

        TestSuiteBuilder builder = new TestSuiteBuilder();
        TestSuite testSuite = builder.Build(package);
        // Run tests
        TestResult result = testSuite.Run(new NullListener(), NUnit.Core.TestFilter.Empty);

    I saw NUnit data structures with only 1 test, and I had the same error. For the sake of completeness, I am using the latest version of NUnit, which is 2.5.5.10112. Does anyone know what I'm missing? Sample code would be appreciated. Thanks
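
    One detail that is easy to miss when hosting NUnit 2.5 in-process (and, to be clear, this is a hedged assumption about the setup rather than something visible in the snippet above) is that NUnit's core services usually need to be initialized before a runner is created; without that, a loaded assembly can appear to contain no TestFixtures. A minimal sketch of that initialization placed in front of the original code:

        using System.Reflection;
        using NUnit.Core;
        using NUnit.Core.Filters;

        // Hedged guess, not verified against this exact NUnit build: initialize the
        // core extension host before loading the package.
        CoreExtensions.Host.InitializeService();

        SimpleTestRunner runner = new SimpleTestRunner();
        TestPackage package = new TestPackage("Test");
        package.Assemblies.Add(Assembly.GetExecutingAssembly().Location);
        if (runner.Load(package))
        {
            TestResult result = runner.Run(new NullListener());
        }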

    Read the article

  • What guidelines should be followed when using an unstable/testing/stable branching scheme?

    - by Elliot
    My team is currently using feature branches while doing development. For each user story in our sprint, we create a branch and work on it in isolation. Hence, according to Martin Fowler, we practice Continuous Building, not Continuous Integration. I am interested in promoting an unstable/testing/stable scheme, similar to that of Debian, so that code is promoted from unstable -> testing -> stable. Our definition of done, I'd recommend, is when unit tests pass (TDD always), minimal documentation is complete, automated functional tests pass, and the feature has been demo'd and accepted by the PO. Once accepted by the PO, the story will be merged into the testing branch. Our test developers spend most of their time in this branch banging on the software and continuously running our automated tests. This scares me, however, because commits from another incomplete story may now make it into the testing branch. Perhaps I'm missing something, because this seems like an undesired consequence. So, if moving to a code promotion strategy to solve our problems with feature branches, what strategy/guidelines do you recommend? Thanks.

    Read the article

  • Testing Finite State Machines

    - by Pondidum
    I have inherited a large and fairly complex state machine at work. It has 31 possible states to be in. It has the following inputs:

        Enum: Current State (so 0 - 30)
        Enum: source (currently only 2 entries)
        Boolean: Request
        Boolean: type
        Enum: Status (3 states)
        Enum: Handling (3 states)
        Boolean: Completed

    The 31 states are really needed (big business process). Breaking it into separate state machines doesn't seem feasible - each state is distinct. I have written tests for one set of inputs (the most common set), with one test per input (all inputs constant, except for the State input):

        [Subject("Application Process States")]
        public class When_state_is_meeting2Requested : AppProcessBase
        {
            Establish context = () =>
            {
                //Setup....
            };

            Because of = () => process.Load(jas, vac);

            It Current_node_should_be_meeting2Requested = () => process.CurrentNode.ShouldBeOfType<meetingRequestedNode>();
            It Can_move_to_clientDeclined = () => Check(process, process.clientDeclined);
            It Can_move_to_meeting1Arranged = () => Check(process, process.meeting1Arranged);
            It Can_move_to_meeting2Arranged = () => Check(process, process.meeting2Arranged);
            It Can_move_to_Reject = () => Check(process, process.Reject);
            It Cannot_move_to_any_other_state = () => AllOthersFalse(process);
        }

    As no one is entirely sure what the output should be for each state and set of inputs, I have started writing tests for it; however, on calculation I will need to write 4320 (30*2*2*2*3*3*2) tests for it. Does anyone have any suggestions on how I should go about testing this?
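
    For what it's worth, the thousands of combinations don't have to be hand-written specs: a parameterised test framework can generate the cross product. The sketch below uses NUnit's [Combinatorial]/[Values]/[Range] attributes rather than the spec style above, and the class name and the idea of an expected-transition table are assumptions, not part of the original code:

        using NUnit.Framework;

        [TestFixture]
        public class StateMachineTransitionTests
        {
            // NUnit builds the full cross product of the attributed parameters,
            // so every input combination becomes one generated test case.
            [Test, Combinatorial]
            public void Allowed_transitions_match_the_expected_table(
                [Range(0, 30)] int currentState,
                [Values(0, 1)] int source,
                [Values(true, false)] bool request,
                [Values(true, false)] bool type,
                [Range(0, 2)] int status,
                [Range(0, 2)] int handling,
                [Values(true, false)] bool completed)
            {
                // Hypothetical: load the process with these inputs, then assert its
                // reachable states against an expected-transition table built from
                // the business rules (that table still has to be defined somewhere).
                Assert.Inconclusive("expected-transition table not yet defined");
            }
        }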

    Read the article

  • Calculate an Internet (aka IP, aka RFC791) checksum in C#

    - by Pat
    Interestingly, I can find implementations of the Internet Checksum in almost every language except C#. Does anyone have an implementation to share? Remember, the Internet Protocol specifies that: "The checksum field is the 16 bit one's complement of the one's complement sum of all 16 bit words in the header. For purposes of computing the checksum, the value of the checksum field is zero." More explanation can be found at Dr. Math. There are some efficiency pointers available, but that's not really a large concern for me at this point. Please include your tests! (Edit: Valid comment regarding testing someone else's code - but I am going off of the protocol and don't have test vectors of my own, and I would rather unit test it than put it into production to see if it matches what is currently being used! ;-) Edit: Here are some unit tests that I came up with. They test an extension method which iterates through the entire byte collection. Please comment if you find fault in the tests.

        [TestMethod()]
        public void InternetChecksum_SimplestValidValue_ShouldMatch()
        {
            IEnumerable<byte> value = new byte[1]; // should work for any-length array of zeros
            ushort expected = 0xFFFF;
            ushort actual = value.InternetChecksum();
            Assert.AreEqual(expected, actual);
        }

        [TestMethod()]
        public void InternetChecksum_ValidSingleByteExtreme_ShouldMatch()
        {
            IEnumerable<byte> value = new byte[] { 0xFF };
            ushort expected = 0xFF;
            ushort actual = value.InternetChecksum();
            Assert.AreEqual(expected, actual);
        }

        [TestMethod()]
        public void InternetChecksum_ValidMultiByteExtrema_ShouldMatch()
        {
            IEnumerable<byte> value = new byte[] { 0x00, 0xFF };
            ushort expected = 0xFF00;
            ushort actual = value.InternetChecksum();
            Assert.AreEqual(expected, actual);
        }
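
    One way to satisfy those tests is the sketch below: a hedged, unoptimized extension method whose name and behaviour are inferred from the tests above, not from any canonical source. It forms 16-bit words big-endian, pads a trailing odd byte with a zero low byte, folds the end-around carry, and returns the one's complement of the sum.

        using System.Collections.Generic;

        public static class ChecksumExtensions
        {
            public static ushort InternetChecksum(this IEnumerable<byte> value)
            {
                uint sum = 0;
                int highByte = -1;                    // pending high-order byte of the current word

                foreach (byte b in value)
                {
                    if (highByte < 0)
                    {
                        highByte = b;                 // first byte of the pair
                    }
                    else
                    {
                        sum += (uint)((highByte << 8) | b);
                        highByte = -1;
                    }
                }

                if (highByte >= 0)
                    sum += (uint)(highByte << 8);     // odd trailing byte, padded with a zero low byte

                while ((sum >> 16) != 0)              // fold end-around carries into the low 16 bits
                    sum = (sum & 0xFFFF) + (sum >> 16);

                return (ushort)~sum;                  // one's complement of the one's complement sum
            }
        }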

    Read the article

  • DSL to generate test data

    - by queen3
    There are several ways to generate data for tests (not only unit tests), for example Object Mother, builders, etc. Another useful approach is to write test data as plain text:

        product: Main; prices: 145, 255; Expire: 10-Apr-2011; qty: 2; includes: Sub
        product: Sub; prices: 145, 255; Expire: 10-Apr-2011; qty: 2

    and then parse it into C# objects. This is easy to use in unit tests (because deep inner collections can be written on a single line), and it is even more convenient in a FitNesse-like system (because this DSL naturally fits into a wiki), and so on. So I use this and write a parser, but it's tedious to write one each time. I'm not a big expert in DSL/language parsers, but I think they can help here. Which would be the right one to use? I have only heard about: a DSL (I mean, any DSL), Boo (which I think can do DSLs), and ANTLR, but I don't even know which one to pick or where to start. So the question: is it reasonable to use some kind of DSL to generate test data? How would you suggest doing it? Are there any existing examples?
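
    Before reaching for a parser generator, the plain-text format above is simple enough that a small hand-rolled splitter may be all that's needed; the sketch below (names hypothetical, not from the original question) turns each line into a name/value dictionary that a builder or Object Mother could then consume:

        using System;
        using System.Collections.Generic;

        static class TestDataParser
        {
            // Each non-empty line becomes one record; "name: value" fields are separated by ';'.
            public static IEnumerable<Dictionary<string, string>> Parse(string text)
            {
                foreach (string line in text.Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries))
                {
                    var record = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
                    foreach (string field in line.Split(';'))
                    {
                        string[] parts = field.Split(new[] { ':' }, 2);
                        if (parts.Length == 2)
                            record[parts[0].Trim()] = parts[1].Trim();
                    }
                    yield return record;
                }
            }
        }

    A builder layer would then map keys like "product", "prices" and "includes" onto real domain objects, resolving "includes" by record name in a second pass.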

    Read the article

  • Using Reflection.Emit to emit a "using (x) { ... }" block?

    - by Lasse V. Karlsen
    I'm trying to use Reflection.Emit in C# to emit a using (x) { ... } block. At the point I am in code, I need to take the current top of the stack, which is an object that implements IDisposable, store this away in a local variable, implement a using block on that variable, and then inside it add some more code (I can deal with that last part). Here's a sample C# piece of code I tried to compile and look at in Reflector:

        public void Test()
        {
            TestDisposable disposable = new TestDisposable();
            using (disposable)
            {
                throw new Exception("Test");
            }
        }

    This looks like this in Reflector:

        .method public hidebysig instance void Test() cil managed
        {
            .maxstack 2
            .locals init (
                [0] class LVK.Reflection.Tests.UsingConstructTests/TestDisposable disposable,
                [1] class LVK.Reflection.Tests.UsingConstructTests/TestDisposable CS$3$0000,
                [2] bool CS$4$0001)
            L_0000: nop
            L_0001: newobj instance void LVK.Reflection.Tests.UsingConstructTests/TestDisposable::.ctor()
            L_0006: stloc.0
            L_0007: ldloc.0
            L_0008: stloc.1
            L_0009: nop
            L_000a: ldstr "Test"
            L_000f: newobj instance void [mscorlib]System.Exception::.ctor(string)
            L_0014: throw
            L_0015: ldloc.1
            L_0016: ldnull
            L_0017: ceq
            L_0019: stloc.2
            L_001a: ldloc.2
            L_001b: brtrue.s L_0024
            L_001d: ldloc.1
            L_001e: callvirt instance void [mscorlib]System.IDisposable::Dispose()
            L_0023: nop
            L_0024: endfinally
            .try L_0009 to L_0015 finally handler L_0015 to L_0025
        }

    I have no idea how to deal with that ".try ..." part at the end there when using Reflection.Emit. Can someone point me in the right direction?
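
    For reference, ILGenerator has explicit support for protected regions, so the .try/finally pair does not have to be written by hand: BeginExceptionBlock, BeginFinallyBlock and EndExceptionBlock bracket the region and emit the implicit leave/endfinally instructions. Below is a minimal, hedged sketch; it uses a DynamicMethod purely so the snippet is self-contained (the same calls apply to a MethodBuilder's ILGenerator), and it assumes the IDisposable arrives as an argument rather than already sitting on the stack.

        using System;
        using System.Reflection.Emit;

        static class UsingBlockEmitter
        {
            // Builds a delegate roughly equivalent to: void Invoke(IDisposable d) { using (d) { } }
            public static Action<IDisposable> BuildEmptyUsing()
            {
                var method = new DynamicMethod("EmptyUsing", null, new[] { typeof(IDisposable) });
                ILGenerator il = method.GetILGenerator();
                LocalBuilder local = il.DeclareLocal(typeof(IDisposable));

                il.Emit(OpCodes.Ldarg_0);
                il.Emit(OpCodes.Stloc, local);           // stash the disposable in a local
                il.BeginExceptionBlock();                // opens the .try region
                // ...emit the body of the using block here...
                il.BeginFinallyBlock();                  // opens the finally handler
                Label skipDispose = il.DefineLabel();
                il.Emit(OpCodes.Ldloc, local);
                il.Emit(OpCodes.Brfalse_S, skipDispose); // null check before calling Dispose()
                il.Emit(OpCodes.Ldloc, local);
                il.Emit(OpCodes.Callvirt, typeof(IDisposable).GetMethod("Dispose"));
                il.MarkLabel(skipDispose);
                il.EndExceptionBlock();                  // emits endfinally and closes the region
                il.Emit(OpCodes.Ret);

                return (Action<IDisposable>)method.CreateDelegate(typeof(Action<IDisposable>));
            }
        }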

    Read the article

  • Operation is not valid due to the current state of the object?

    - by Bill
    I am programming in C#; the code was working about a week ago, but now it throws an exception and I don't understand at all what could be wrong with it.

        var root = new CalculationNode();   // -> throws the exception

    In the call stack that's the only thing listed. I've been told that it could be that I need a clean build, but I am open to any ideas or suggestions. Thanks, -Bill

    Update: the exception's detail:

        System.InvalidOperationException was unhandled by user code
        Message=Operation is not valid due to the current state of the object.
        Source=Calculator.Logic
        StackTrace:
            at ~.Calculator.Logic.MyBaseExpressionParser.Parse(String expression) in ~\Source\Calculator.Logic\MyBaseExpressionParser.cs:line 44
            at ~.Calculator.Logic.Tests.MyBaseCalculatorServiceTests.BasicMathDivision() in ~\Projects\Tests\Calculator.Logic.Tests\MyBaseCalculatorServiceTests.cs:line 60
        InnerException:

    CalculationNode's code:

        public sealed class CalculationNode
        {
            public CalculationNode()
            {
                this.Left = null;
                this.Right = null;
                this.Element = new CalculationElement();
            }

            public CalculationNode Left { get; set; }
            public CalculationNode Right { get; set; }
            public CalculationElement Element { get; set; }
        }

    CalculationElement's code:

        public sealed class CalculationElement
        {
            public CalculationElement()
            {
                Value = string.Empty;
                IsOperator = false;
            }

            public string Value { get; set; }
            public bool IsOperator { get; set; }
        }

    Read the article

  • Testing subpackage modules in Python 3

    - by Mitchell Model
    I have been experimenting with various uses of hierarchies like this and the differences between absolute and relative imports, and I can't figure out how to do routine things with the package, subpackages, and modules without simply putting everything on sys.path. I have a two-level package hierarchy:

        MyApp/
            __init__.py
            Application/
                __init__.py
                Module1
                Module2
                ...
            Domain/
                __init__.py
                Module1
                Module2
                ...
            UI/
                __init__.py
                Module1
                Module2
                ...

    I want to be able to do the following:

    1. Run test code in a module's "if main" block when the module imports from other modules in the same directory.
    2. Have one or more test code modules in each subpackage that run unit tests on the modules in that subpackage.
    3. Have a set of unit tests that reside someplace reasonable, but outside the subpackages - either in a sibling package, at the top-level package, or outside the top-level package (though all these might end up doing is running the tests in each subpackage).
    4. "Enter" the structure from any of the three subpackage levels, e.g. run code that just uses Domain modules; run code that just uses Application modules, where Application uses code from both Application and Domain modules; and run code from the GUI that uses code from both GUI and Application. For instance, Application test code would import Application modules but not Domain modules.
    5. After developing the bulk of the code without subpackages, continue developing and testing after organizing the modules into this hierarchy.

    I know how to use relative imports so that external code that puts MyApp on its sys.path can import MyApp, import any subpackages it wants, and import things from their modules, while the modules in each subpackage can import other modules from the same subpackage or from sibling packages. However, the development needs listed above seem incompatible with subpackage structuring - in other words, I can't have it both ways: a well-structured multi-level package hierarchy used from the outside and also used from within, in particular for testing, but also because modules from one design level (in particular the UI) should not import modules from a design level below the next one down. Sorry for the long essay, but I think it fairly represents the struggles a lot of people have been having adapting to the new relative import mechanisms.

    Read the article

  • Is JPA persistence.xml classpath located?

    - by Vinnie
    Here's what I'm trying to do. I'm using JPA persistence in a web application, but I have a set of unit tests that I want to run outside of a container. I have my primary persistence.xml in the META-INF folder of my main app and it works great in the container (Glassfish). I placed a second persistence.xml in the META-INF folder of my test-classes directory. This contains a separate persistence unit that I want to use only for testing. In Eclipse, I placed this folder higher in the classpath than the default folder and it seems to work. However, when I run the Maven build directly from the command line and it attempts to run the unit tests, the persistence.xml override is ignored. I can see the override in the META-INF folder of the Maven-generated test-classes directory, and I expected the Maven tests to use this file, but they don't. My Spring test configuration overrides, achieved in a similar fashion, are working. I'm confused as to whether persistence.xml is located through the classpath. If it were, my override should work like the Spring override, since the Maven Surefire plugin explains that "[The test class directory] will be included at the beginning the test classpath". Did I wrongly anticipate how the persistence.xml file is located? I could create (and have created) a second persistence unit in the production persistence.xml file, but it feels dirty to place test configuration into this production file. Any other ideas on how to achieve my goal are welcome.

    Read the article

  • Assemblies mysteriously loaded into new AppDomains

    - by Eric
    I'm testing some code that does work whenever assemblies are loaded into an appdomain. For unit testing (in VS2k8's built-in test host) I spin up a new, uniquely-named appdomain prior to each test with the idea that it should be "clean":

        [TestInitialize()]
        public void CalledBeforeEachTestMethod()
        {
            AppDomainSetup appSetup = new AppDomainSetup();
            appSetup.ApplicationBase = @"G:\<ProjectDir>\bin\Debug";
            Evidence baseEvidence = AppDomain.CurrentDomain.Evidence;
            Evidence evidence = new Evidence(baseEvidence);
            _testAppDomain = AppDomain.CreateDomain("myAppDomain" + _appDomainCounter++, evidence, appSetup);
        }

        [TestMethod]
        public void MissingFactoryCausesAppDomainUnload()
        {
            SupportingClass supportClassObj = (SupportingClass)_testAppDomain.CreateInstanceAndUnwrap(
                GetType().Assembly.GetName().Name,
                typeof(SupportingClass).FullName);

            try
            {
                supportClassObj.LoadMissingRegistrationAssembly();
                Assert.Fail("Should have nuked the app domain");
            }
            catch (AppDomainUnloadedException)
            {
            }
        }

        [TestMethod]
        public void InvalidFactoryMethodCausesAppDomainUnload()
        {
            SupportingClass supportClassObj = (SupportingClass)_testAppDomain.CreateInstanceAndUnwrap(
                GetType().Assembly.GetName().Name,
                typeof(SupportingClass).FullName);

            try
            {
                supportClassObj.LoadInvalidFactoriesAssembly();
                Assert.Fail("Should have nuked the app domain");
            }
            catch (AppDomainUnloadedException)
            {
            }
        }

        public class SupportingClass : MarshalByRefObject
        {
            public void LoadMissingRegistrationAssembly()
            {
                MissingRegistration.Main();
            }

            public void LoadInvalidFactoriesAssembly()
            {
                InvalidFactories.Main();
            }
        }

    If every test is run individually, I find that it works correctly; the appdomain is created and has only the few intended assemblies loaded. However, if multiple tests are run in succession then each _testAppDomain already has assemblies loaded from all previous tests. Oddly enough, the two tests get appdomains with different names. The test assemblies that define MissingRegistration and InvalidFactories (two different assemblies) are never loaded into the unit test's default appdomain. Can anyone explain this behavior?
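
    Not an explanation of the behaviour, but a hedged housekeeping sketch that may help narrow it down: pairing the TestInitialize above with a TestCleanup that explicitly unloads the per-test domain (it assumes the _testAppDomain field from the snippet above) ensures a domain from one test cannot outlive it, which at least rules out leftover domains as the source of the extra assemblies.

        [TestCleanup()]
        public void CalledAfterEachTestMethod()
        {
            // Tear down the per-test domain created in CalledBeforeEachTestMethod.
            if (_testAppDomain != null)
            {
                AppDomain.Unload(_testAppDomain);
                _testAppDomain = null;
            }
        }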

    Read the article

  • Unit test project doesn't recognize the classes it was generated from

    - by DougLeary
    I have a fairly simple file-system website consisting of one aspx page and several classes in separate .cs files. Everything is on my own HD. The web app itself builds and runs fine. Out of curiosity I decided to try out Visual Studio's nifty, easy-to-use unit test feature. So I opened each class file and clicked Create Unit Tests. VS generated a test project containing a set of test classes and some other files. Easy! But when I try to build or run the test project it throws a series of build errors, one for every class: The type or namespace name 'class-name' could not be found (are you missing a using directive or an assembly reference?). Somebody asked if my test project has a reference to the original project. Well no, because the original project is a file-system website. It has no bin folder and no DLL, so there's nothing to reference as far as I can tell. I would think that since VS generated these unit tests it would generate whatever references it needs, but apparently not. Is generating unit tests for file-system web apps an undocumented no-no, or is there a magic trick to getting it to work?

    Read the article

  • Why doesn't my test run tearDownAfterTestClass when it fails

    - by Memor-X
    In a test I am writing, setUpBeforeTests creates a new customer in the database who is then used to perform the tests, so naturally when the tests finish I should get rid of this test customer in tearDownAfterTestClass, so that I can create them anew when I rerun the tests and not get any false positives. When the tests all run fine I have no problem, but if a test fails and I go to rerun it, my setUpBeforeTests fails, because I check for MySQL errors in it like this:

        try {
            if (!mysqli_query($connection, $query)) {
                $this->assertTrue(false);
            }
        } catch (Exception $exc) {
            $msg = '[tearDownAfterTestClass] Exception Error' . PHP_EOL . PHP_EOL;
            $msg .= 'Could not run query - ' . mysqli_error($connection) . PHP_EOL;
            $this->fail($msg);
        }

    The error I get is that there is a primary key violation, which is expected because I'm trying to create a new customer using the same data (the primary key is on email, which is also used to log in). However, that means that when the test failed it didn't run tearDownAfterTestClass. Now, I could just move everything in tearDownAfterTestClass to the start of setUpBeforeTests, but to me that seems like bad programming since it defeats the purpose of even having tearDownAfterTestClass. So I am wondering: why isn't my tearDownAfterTestClass running when a test fails?

    NOTE: the database is a fundamental part of the system I'm testing, and the database and system are on a separate development environment, not the live one. The backup files for the database are almost 2 GB and take almost half an hour to restore; the purpose of the tear down is to remove any data we have added because of the test, so that we don't have to restore the database every time we run the tests.

    Read the article

  • Django unit testing: South-migrated DB works in MySQL, throws duplicate PK error in PostgreSQL. Am I

    - by unclaimedbaggage
    Hi folks, (Worth starting off with a disclaimer: I'm very new to PostgreSQL.) I have a Django site which involves a standard app/tests.py testing file. If I migrate the DB to MySQL (through South), the tests all pass. However, in PostgreSQL I'm getting the following error:

        IntegrityError: duplicate key value violates unique constraint "business_contact_pkey"

    Note this happens while unit testing only - the actual page runs fine in both MySQL and PostgreSQL. Really having a heckuva time figuring this one out. Anyone have ideas? Below are the PostgreSQL "\d business_contact" output and the offending tests.py method, if they help. No changes were made to either DB except the (same) South migrations. Thanks.

        first_name   | character varying(200)   | not null
        mobile_phone | character varying(100)   |
        surname      | character varying(200)   | not null
        business_id  | integer                  | not null
        created      | timestamp with time zone | not null
        deleted      | boolean                  | not null default false
        updated      | timestamp with time zone | not null
        slug         | character varying(150)   | not null
        phone        | character varying(100)   |
        email        | character varying(75)    |
        id           | integer                  | not null default nextval('business_contact_id_seq'::regclass)
        Indexes:
            "business_contact_pkey" PRIMARY KEY, btree (id)
            "business_contact_slug_key" UNIQUE, btree (slug)
            "business_contact_business_id" btree (business_id)
        Foreign-key constraints:
            "business_id_refs_id_772cc1b7b40f4b36" FOREIGN KEY (business_id) REFERENCES business(id) DEFERRABLE INITIALLY DEFERRED
        Referenced by:
            TABLE "business" CONSTRAINT "primary_contact_id_refs_id_dfaf59c4041c850" FOREIGN KEY (primary_contact_id) REFERENCES business_contact(id) DEFERRABLE INITIALLY DEFERRED

    TEST DEF:

        def test_add_business_contact(self):
            """ Add a business contact """
            contact_slug = 'test-new-contact-added-new-adf'
            business_id = 1
            business = Business.objects.get(id=business_id)
            postdata = {
                'first_name': 'Test',
                'surname': 'User',
                'business': '1',
                'slug': contact_slug,
                'email': '[email protected]',
                'phone': '12345678',
                'mobile_phone': '9823452',
                'business': 1,
                'business_id': 1,
            }

            # Test to ensure contacts that should not exist are not returned
            contact_not_exists = Contact.objects.filter(slug=contact_slug)
            self.assertFalse(contact_not_exists)

            # Add the contact and ensure it is present in the DB afterwards
            contact_add_url = '%s%s/contact/add/' % (settings.BUSINESS_URL, business.slug)
            self.client.post(contact_add_url, postdata)

            added_contact = Contact.objects.filter(slug=contact_slug)
            print added_contact
            try:
                self.assertTrue(added_contact)
            except:
                formset = ContactForm(postdata)
                print formset.errors
                self.assertFalse(True, "Contact not found in the database - most likely, the post values in the test didn't validate against the form")

    Read the article

  • Accessing Web.config directly in ASP.NET MVC 1

    - by Neil T.
    I'm trying to implement integration testing in my ASP.NET MVC 1.0 solution. The technologies in use are LINQ-to-SQL, NUnit and WatiN. I recently discovered a pattern that will allow me to create a testing version of the database on the fly without modifying the development version of the database. I needed this behavior in order to run my user interface tests in WatiN that may modify the database. The plan is to modify the connection string in the Web.config file, and pass that new connection string to the DataContext constructor. This way, I don't have to add routes or modify my URLs in order to perform the integration testing. I've set up the project so that the test setup can modify the connection string to point to the test database when the tests are running. The connection string is stored in web.config. The problem I'm having is that when I try to run the tests, I get a NullReferenceException when trying to access the HttpContext. From everything that I have read so far, the HttpContext is only available within the context of a controller. Here is the code for the property that is supposed to give me the reference to the Web.config file:

        private System.Configuration.Configuration WebConfig
        {
            get
            {
                ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap();
                // NullReferenceException occurs on this line.
                fileMap.ExeConfigFilename = HttpContext.Current.Server.MapPath("~\\web.config");
                System.Configuration.Configuration config =
                    ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);
                return config;
            }
        }

    Is there something that I am missing in order to make this work? Is there a better way to accomplish what I'm trying to achieve?
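
    One hedged workaround sketch (not from the original post): derive the physical path without going through HttpContext, so the same code works both under the web server and from an NUnit/WatiN run. The WEB_ROOT environment variable below is hypothetical; any mechanism that hands the test process the site's physical folder would do.

        using System;
        using System.Configuration;
        using System.IO;

        public static class TestableConfig
        {
            // Opens the site's web.config from an explicit physical folder, so no
            // HttpContext is needed. WEB_ROOT is a hypothetical variable set by the
            // test run; web code falls back to the application's base directory.
            public static System.Configuration.Configuration OpenWebConfig()
            {
                string webRoot = Environment.GetEnvironmentVariable("WEB_ROOT")
                                 ?? AppDomain.CurrentDomain.BaseDirectory;

                var fileMap = new ExeConfigurationFileMap();
                fileMap.ExeConfigFilename = Path.Combine(webRoot, "web.config");
                return ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None);
            }
        }

    The WebConfig property shown above could then simply return TestableConfig.OpenWebConfig().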

    Read the article

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one for the production code and another for the unit tests. I did this as per the suggestions I got here from SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable, and they all fail trying to find it, so it is definitely looking for the executable. I then did a quick read to find out which is better, a DLL or an executable. It seems that a DLL is much faster, since DLLs share memory space whereas communication between executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project also relies on code written in another library, which is also in executable format at the moment. Should the projects that expose some sort of SDK rather be compiled to a DLL, and then the projects that use the SDK be compiled to executables?

    Read the article
