Search Results

Search found 6839 results on 274 pages for 'functional tests'.

  • Using Reflection.Emit to emit a "using (x) { ... }" block?

    - by Lasse V. Karlsen
    I'm trying to use Reflection.Emit in C# to emit a using (x) { ... } block. At the point I am in code, I need to take the current top of the stack, which is an object that implements IDisposable, store this away in a local variable, implement a using block on that variable, and then inside it add some more code (I can deal with that last part.) Here's a sample C# piece of code I tried to compile and look at in Reflector: public void Test() { TestDisposable disposable = new TestDisposable(); using (disposable) { throw new Exception("Test"); } } This looks like this in Reflector: .method public hidebysig instance void Test() cil managed { .maxstack 2 .locals init ( [0] class LVK.Reflection.Tests.UsingConstructTests/TestDisposable disposable, [1] class LVK.Reflection.Tests.UsingConstructTests/TestDisposable CS$3$0000, [2] bool CS$4$0001) L_0000: nop L_0001: newobj instance void LVK.Reflection.Tests.UsingConstructTests/TestDisposable::.ctor() L_0006: stloc.0 L_0007: ldloc.0 L_0008: stloc.1 L_0009: nop L_000a: ldstr "Test" L_000f: newobj instance void [mscorlib]System.Exception::.ctor(string) L_0014: throw L_0015: ldloc.1 L_0016: ldnull L_0017: ceq L_0019: stloc.2 L_001a: ldloc.2 L_001b: brtrue.s L_0024 L_001d: ldloc.1 L_001e: callvirt instance void [mscorlib]System.IDisposable::Dispose() L_0023: nop L_0024: endfinally .try L_0009 to L_0015 finally handler L_0015 to L_0025 } I have no idea how to deal with that ".try ..." part at the end there when using Reflection.Emit. Can someone point me in the right direction?
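
    Since the question is specifically about the .try/finally region, here is a minimal, hypothetical sketch (not from the post; "il" and the local names are assumptions) using ILGenerator's structured exception helpers from System.Reflection.Emit. BeginExceptionBlock, BeginFinallyBlock and EndExceptionBlock emit the protected region and the endfinally instruction for you:

        // Hypothetical fragment: emit "try { ... } finally { if (local != null) local.Dispose(); }"
        // "il" is the ILGenerator of whatever MethodBuilder or DynamicMethod is being built.
        LocalBuilder local = il.DeclareLocal(typeof(IDisposable));
        il.Emit(OpCodes.Stloc, local);              // store the IDisposable currently on the stack

        il.BeginExceptionBlock();                   // opens the .try region
        // ... emit the body of the using block here ...

        il.BeginFinallyBlock();                     // opens the finally handler
        Label skipDispose = il.DefineLabel();
        il.Emit(OpCodes.Ldloc, local);
        il.Emit(OpCodes.Brfalse_S, skipDispose);    // skip Dispose() when the local is null
        il.Emit(OpCodes.Ldloc, local);
        il.Emit(OpCodes.Callvirt, typeof(IDisposable).GetMethod("Dispose"));
        il.MarkLabel(skipDispose);
        il.EndExceptionBlock();                     // emits endfinally and closes the region

    The brfalse branch stands in for the ceq/brtrue/stloc sequence the C# compiler produced above; the effect is the same null check before the Dispose call.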

  • Testing subpackage modules in Python 3

    - by Mitchell Model
    I have been experimenting with various uses of hierarchies like this and the differences between absolute and relative imports, and can't figure out how to do routine things with the package, subpackages, and modules without simply putting everything on sys.path. I have a two-level package hierarchy: MyApp __init__.py Application __init__.py Module1 Module2 ... Domain __init__.py Module1 Module2 ... UI __init__.py Module1 Module2 ... I want to be able to do the following: Run test code in a Module's "if main" when the module imports from other modules in the same directory. Have one or more test code modules in each subpackage that run unit tests on the modules in the subpackage. Have a set of unit tests that reside someplace reasonable, but outside the subpackages, either in a sibling package, at the top-level package, or outside the top-level package (though all these might end up doing is running the tests in each subpackage). "Enter" the structure from any of the three subpackage levels, e.g. run code that just uses Domain modules, run code that just uses Application modules (but Application uses code from both Application and Domain modules), and run code from the GUI that uses code from both GUI and Application; for instance, Application test code would import Application modules but not Domain modules. After developing the bulk of the code without subpackages, continue developing and testing after organizing the modules into this hierarchy. I know how to use relative imports so that external code that puts MyApp on its sys.path can import MyApp, import any subpackages it wants, and import things from their modules, while the modules in each subpackage can import other modules from the same subpackage or from sibling packages. However, the development needs listed above seem incompatible with subpackage structuring -- in other words, I can't have it both ways: a well-structured multi-level package hierarchy used from the outside and also used from within, in particular for testing but also because modules from one design level (in particular the UI) should not import modules from a design level below the next one down. Sorry for the long essay, but I think it fairly represents the struggles a lot of people have been having adapting to the new relative import mechanisms.

  • Operation is not valid due to the current state of the object?

    - by Bill
    I am programming in C#; the code was working about a week ago, however it throws an exception and I don't understand at all what could be wrong with it. var root = new CalculationNode(); -> throws an exception. In the call stack that's the only thing listed. I've been told that it could be that I need a clean build, but I am open to any ideas or suggestions. Thanks, -Bill Update: Exception's Detail System.InvalidOperationException was unhandled by user code Message=Operation is not valid due to the current state of the object. Source=Calculator.Logic StackTrace: at ~.Calculator.Logic.MyBaseExpressionParser.Parse(String expression) in ~\Source\Calculator.Logic\MyBaseExpressionParser.cs:line 44 at ~.Calculator.Logic.Tests.MyBaseCalculatorServiceTests.BasicMathDivision() in ~\Projects\Tests\Calculator.Logic.Tests\MyBaseCalculatorServiceTests.cs:line 60 InnerException: CalculationNode's code: public sealed class CalculationNode { public CalculationNode() { this.Left = null; this.Right = null; this.Element = new CalculationElement(); } public CalculationNode Left {get;set;} public CalculationNode Right {get;set;} public CalculationElement Element {get; set;} } CalculationElement's code: public sealed class CalculationElement { public CalculationElement() { Value = string.Empty; IsOperator = false; } public string Value {get; set;} public bool IsOperator {get; set;} }

  • Is JPA persistence.xml classpath located?

    - by Vinnie
    Here's what I'm trying to do. I'm using JPA persistence in a web application, but I have a set of unit tests that I want to run outside of a container. I have my primary persistence.xml in the META-INF folder of my main app and it works great in the container (Glassfish). I placed a second persistence.xml in the META-INF folder of my test-classes directory. This contains a separate persistence unit that I want to use for testing only. In Eclipse, I placed this folder higher in the classpath than the default folder and it seems to work. Now when I run the Maven build directly from the command line and it attempts to run the unit tests, the persistence.xml override is ignored. I can see the override in the META-INF folder of the Maven-generated test-classes directory and I expected the Maven tests to use this file, but it isn't. My Spring test configuration overrides, achieved in a similar fashion, are working. I'm confused as to whether the persistence.xml is located through the classpath. If it were, my override should work like the Spring override, since the Maven Surefire plugin explains "[The test class directory] will be included at the beginning the test classpath". Did I wrongly anticipate how the persistence.xml file is located? I could (and have) create a second persistence unit in the production persistence.xml file, but it feels dirty to place test configuration into this production file. Any other ideas on how to achieve my goal are welcome.

  • Unit test project doesn't recognize the classes it was generated from

    - by DougLeary
    I have a fairly simple file-system website consisting of one aspx page and several classes in separate .cs files. Everything is on my own HD. The web app itself builds and runs fine. Out of curiosity I decided to try out Visual Studio's nifty, easy-to-use unit test feature. So I opened each class file and clicked Create Unit Tests. VS generated a test project containing a set of test classes and some other files. Easy! But when I try to build or run the test project it throws a series of build errors, one for every class: The type or namespace name 'class-name' could not be found (are you missing a using directive or an assembly reference?). Somebody asked if my test project has a reference to the original project. Well no, because the original project is a file-system website. It has no bin folder and no DLL, so there's nothing to reference as far as I can tell. I would think that since VS generated these unit tests it would generate whatever references it needs, but apparently not. Is generating unit tests for file-system web apps an undocumented no-no, or is there a magic trick to getting it to work?

  • Assemblies mysteriously loaded into new AppDomains

    - by Eric
    I'm testing some code that does work whenever assemblies are loaded into an appdomain. For unit testing (in VS2k8's built-in test host) I spin up a new, uniquely-named appdomain prior to each test with the idea that it should be "clean": [TestInitialize()] public void CalledBeforeEachTestMethod() { AppDomainSetup appSetup = new AppDomainSetup(); appSetup.ApplicationBase = @"G:\<ProjectDir>\bin\Debug"; Evidence baseEvidence = AppDomain.CurrentDomain.Evidence; Evidence evidence = new Evidence( baseEvidence ); _testAppDomain = AppDomain.CreateDomain( "myAppDomain" + _appDomainCounter++, evidence, appSetup ); } [TestMethod] public void MissingFactoryCausesAppDomainUnload() { SupportingClass supportClassObj = (SupportingClass)_testAppDomain.CreateInstanceAndUnwrap( GetType().Assembly.GetName().Name, typeof( SupportingClass ).FullName ); try { supportClassObj.LoadMissingRegistrationAssembly(); Assert.Fail( "Should have nuked the app domain" ); } catch( AppDomainUnloadedException ) { } } [TestMethod] public void InvalidFactoryMethodCausesAppDomainUnload() { SupportingClass supportClassObj = (SupportingClass)_testAppDomain.CreateInstanceAndUnwrap( GetType().Assembly.GetName().Name, typeof( SupportingClass ).FullName ); try { supportClassObj.LoadInvalidFactoriesAssembly(); Assert.Fail( "Should have nuked the app domain" ); } catch( AppDomainUnloadedException ) { } } public class SupportingClass : MarshalByRefObject { public void LoadMissingRegistrationAssembly() { MissingRegistration.Main(); } public void LoadInvalidFactoriesAssembly() { InvalidFactories.Main(); } } If every test is run individually I find that it works correctly; the appdomain is created and has only the few intended assemblies loaded. However, if multiple tests are run in succession then each _testAppDomain already has assemblies loaded from all previous tests. Oddly enough, the two tests get appdomains with different names. The test assemblies that define MissingRegistration and InvalidFactories (two different assemblies) are never loaded into the unit test's default appdomain. Can anyone explain this behavior?

  • Why doesn't my test run tearDownAfterTestClass when it fails

    - by Memor-X
    In a test I am writing, setUpBeforeTests creates a new customer in the database who is then used to perform the tests, so naturally when I finish the test I should get rid of this test customer in tearDownAfterTestClass so that I can create them anew when I rerun the test and not get any false positives. When the tests all run fine I have no problem, but if a test fails and I go to rerun it, my setUpBeforeTests fails because I check for MySQL errors in it like this: try { if(!mysqli_query($connection,$query)) { $this->assertTrue(false); } } catch (Exception $exc) { $msg = '[tearDownAfterTestClass] Exception Error' . PHP_EOL . PHP_EOL; $msg .= 'Could not run query - '.mysqli_error($connection). PHP_EOL;; $this->fail($msg); } The error I get is that there is a primary key violation, which is expected because I'm trying to create a new customer using the same data (the primary key is on email, which is also used to log in). However, that means when the test failed it didn't run tearDownAfterTestClass. Now I could just move everything in tearDownAfterTestClass to the start of setUpBeforeTests, but to me that seems like bad programming since it defeats the purpose of even having tearDownAfterTestClass. So I am wondering: why isn't my tearDownAfterTestClass running when a test fails? NOTE: the database is a fundamental part of the system I'm testing, and the database and system are on a separate development environment, not the live one. The backup files for the database are almost 2 GB and take almost half an hour to restore; the purpose of the tear down is to remove any data we have added because of the test, so that we don't have to restore the database every time we run the tests.

  • Accessing Web.config directly in ASP.NET MVC 1

    - by Neil T.
    I'm trying to implement integration testing in my ASP.NET MVC 1.0 solution. The technologies in use are LINQ-to-SQL, NUnit and WatiN. I recently discovered a pattern that will allow me to create a testing version of the database on the fly without modifying the development version of the database. I needed this behavior in order to run my user interface tests in WatiN that may modify the database. The plan is to modify the connection string in the Web.config file, and pass that new connection string to the DataContext constructor. This way, I don't have to add routes or modify my URLs in order to perform the integration testing. I've set up the project so that the test setup can modify the connection string to point to the test database when the tests are running. The connection string is stored in web.config. The problem I'm having is that when I try to run the tests, I get a NullReferenceException when trying to access the HTTPContext. From everything that I have read so far, the HTTPContext is only available within the context of a controller. Here is the code for the property that is supposed to give me the reference to the Web.config file: private System.Configuration.Configuration WebConfig { get { ExeConfigurationFileMap fileMap = new ExeConfigurationFileMap(); // NullReferenceException occurs on this line. fileMap.ExeConfigFilename = HttpContext.Current.Server.MapPath("~\\web.config"); System.Configuration.Configuration config = ConfigurationManager.OpenMappedExeConfiguration(fileMap, ConfigurationUserLevel.None); return config; } } Is there something that I am missing in order to make this work? Is there a better way to accomplish what I'm trying to achieve?
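
    One possible workaround, sketched here as an assumption rather than the project's actual code (the webRootFallback parameter and class name are made up), is to compute the physical path of Web.config without Server.MapPath whenever HttpContext.Current is null, which is exactly the situation inside an NUnit fixture:

        using System.Configuration;
        using System.IO;
        using System.Web;

        public static class TestableConfig
        {
            // Falls back to a caller-supplied web root when no HTTP request is in flight,
            // e.g. when the code runs inside an NUnit/WatiN test rather than under IIS.
            public static Configuration OpenWebConfig(string webRootFallback)
            {
                string configPath = HttpContext.Current != null
                    ? HttpContext.Current.Server.MapPath("~/Web.config")
                    : Path.Combine(webRootFallback, "Web.config");

                var fileMap = new ExeConfigurationFileMap { ExeConfigFilename = configPath };
                return ConfigurationManager.OpenMappedExeConfiguration(
                    fileMap, ConfigurationUserLevel.None);
            }
        }

    The test setup can then pass the path of the site under test, and the connection string can be swapped before the WatiN run starts.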

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one project for the production code and another project for the unit tests. I did this as per the suggestions I got here from SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable and they all fail trying to find the executable. So it definitely is trying to find it. I then did a quick read to find out which is better, a DLL or an executable. It seems that a DLL is much faster as they share memory space, whereas communication between executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project does rely on code written in another library which is also in executable format at the moment. Should the projects that expose some sort of SDK rather be compiled to a DLL, and then the projects that use the SDK be compiled to an executable?

  • Django unit testing: South-migrated DB works in MySQL, throws duplicate PK error in PostGreSQL. Am I

    - by unclaimedbaggage
    Hi folks, (Worth starting off with a disclaimer: I'm very new to PostGreSQL) I have a django site which involves a standard app/tests.py testing file. If I migrate the DB to MySQL (through South),, the tests all pass. However in PostGresQL, I'm getting the following error: IntegrityError: duplicate key value violates unique constraint "business_contact_pkey" Note this happens while unit testing only - the actual page runs fine in both MySQL & PostGresql. Really having a heckuva time figuring this one out. Anyone have ideas? Below are the Postgresql "\d business_contact" & offending tests.py method if they help. No changes made to either DB except the (same) South migrations Thanks first_name | character varying(200) | not null mobile_phone | character varying(100) | surname | character varying(200) | not null business_id | integer | not null created | timestamp with time zone | not null deleted | boolean | not null default false updated | timestamp with time zone | not null slug | character varying(150) | not null phone | character varying(100) | email | character varying(75) | id | integer | not null default nextval('business_contact_id_seq'::regclass) Indexes: "business_contact_pkey" PRIMARY KEY, btree (id) "business_contact_slug_key" UNIQUE, btree (slug) "business_contact_business_id" btree (business_id) Foreign-key constraints: "business_id_refs_id_772cc1b7b40f4b36" FOREIGN KEY (business_id) REFERENCES business(id) DEFERRABLE INITIALLY DEFERRED Referenced by: TABLE "business" CONSTRAINT "primary_contact_id_refs_id_dfaf59c4041c850" FOREIGN KEY (primary_contact_id) REFERENCES business_contact(id) DEFERRABLE INITIALLY DEFERRED TEST DEF: def test_add_business_contact(self): """ Add a business contact """ contact_slug = 'test-new-contact-added-new-adf' business_id = 1 business = Business.objects.get(id=business_id) postdata = { 'first_name': 'Test', 'surname': 'User', 'business': '1', 'slug': contact_slug, 'email': '[email protected]', 'phone': '12345678', 'mobile_phone': '9823452', 'business': 1, 'business_id': 1, } #Test to ensure contacts that should not exist are not returned contact_not_exists = Contact.objects.filter(slug=contact_slug) self.assertFalse(contact_not_exists) #Add the contact and ensure it is present in the DB afterwards """ contact_add_url = '%s%s/contact/add/' % (settings.BUSINESS_URL, business.slug) self.client.post(contact_add_url, postdata) added_contact = Contact.objects.filter(slug=contact_slug) print added_contact try: self.assertTrue(added_contact) except: formset = ContactForm(postdata) print formset.errors self.assertFalse(True, "Contact not found in the database - most likely, the post values in the test didn't validate against the form")

  • Testing approach for multi-threaded software

    - by Shane MacLaughlin
    I have a piece of mature geospatial software that has recently had areas rewritten to take better advantage of the multiple processors available in modern PCs. Specifically, display, GUI, spatial searching, and main processing have all been hived off to separate threads. The software has a pretty sizeable GUI automation suite for functional regression, and another smaller one for performance regression. While all automated tests are passing, I'm not convinced that they provide nearly enough coverage in terms of finding bugs relating to race conditions, deadlocks, and other nasties associated with multi-threading. What techniques would you use to see if such bugs exist? What techniques would you advocate for rooting them out, assuming there are some in there to root out? What I'm doing so far is running the GUI functional automation on the app running under a debugger, such that I can break out of deadlocks and catch crashes, and I plan to make a bounds-checker build and repeat the tests against that version. I've also carried out a static analysis of the source via PC-Lint with the hope of locating potential deadlocks, but haven't had any worthwhile results. The application is C++, MFC, multiple document/view, with a number of threads per doc. The locking mechanism I'm using is based on an object that includes a pointer to a CMutex, which is locked in the ctor and freed in the dtor. I use local variables of this object to lock various bits of code as required, and my mutex has a timeout that fires a warning if the timeout is reached. I avoid locking where possible, using resource copies where possible instead. What other tests would you carry out?

  • Unit testing code which uses Umbraco v4 API

    - by Jason Evans
    I'm trying to write a suite of integration tests, where I have custom code which uses the Umbraco API. The Umbraco database lives in a SQL Server CE 4.0 database (*.sdf file) and I've managed to get that association working fine. My problem looks to be dependencies in the Umbraco code. For example, I would like to create a new user for my tests, so I try: var user = User.MakeNew("developer", "developer", "mypassword", "[email protected]", adminUserType); Now as you can see, I have to pass a user type, which is an object. I've tried two separate ways to create the user type, both of which fail due to a null object exception: var adminUserType = UserType.GetUserType(1); var adminUserType2 = new UserType(1); The problem is that in each case the UserType code calls its Cache method, which uses the HttpRuntime class, which naturally is null. My question is this: Can someone suggest a way of writing integration tests against Umbraco code? Will I have to end up using a mocking framework such as TypeMock or JustMock?

  • Java: design for using many executors services and only few threads

    - by Guillaume
    I need to run multiple threads in parallel to perform some tests. My 'test engine' will have n tests to perform, each one doing k sub-tests. Each test result is stored for later usage. So I have n*k processes that can be run concurrently. I'm trying to figure out how to use the Java concurrency tools efficiently. Right now I have an executor service at the test level and n executor services at the subtest level. I create my list of Callables for the test level. Each test callable will then create another list of callables for the subtest level. When invoked, a test callable will subsequently invoke all subtest callables test 1 subtest a1 subtest ...1 subtest k1 test n subtest a2 subtest ...2 subtest k2 call sequence: test manager create test 1 callable test1 callable create subtest a1 to k1 testn callable create subtest an to kn test manager invoke all test callables test1 callable invoke all subtest a1 to k1 testn callable invoke all subtest an to kn This is working fine, but a lot of new threads get created. I cannot share executor services since I need to call 'shutdown' on the executors. My idea to fix this problem is to provide the same fixed-size thread pool to each executor service. Do you think it is a good design? Am I missing something more appropriate/simple for doing this?

  • Difference between float and double

    - by VaioIsBorn
    I know, I've read about the difference between double precision and single precision, etc. But they should give the same results in most cases, right? I was solving a problem in a programming contest and there were calculations with floating point numbers that were not really big, so I decided to use float instead of double, and I checked it - I was getting the correct results. But when I sent the solution, it said only 1 of 10 tests was correct. I checked again and again, until I found that using float is not the same as using double. I put double for the calculations and double for the output, and the program gave the SAME results, but this time it passed all 10 tests correctly. I repeat, the output was the SAME, the results were the SAME, but using float didn't work - only double. The values were not that big either, and the program gave the same results on the same tests both with float and double, but the online judge accepted only the double version of the solution. Why? What is the difference?
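
    The short answer is that float carries roughly 7 significant decimal digits (a 24-bit significand) against double's 15-16, so intermediate results that print identically can still differ in the low-order bits an online judge compares exactly. A small C# illustration (the effect is the same in any language):

        using System;

        class FloatVsDouble
        {
            static void Main()
            {
                // 16777217 = 2^24 + 1 is the smallest positive integer float cannot represent exactly,
                // so it silently rounds to 16777216; double (53-bit significand) keeps it exact.
                float f = 16777217f;
                double d = 16777217d;

                Console.WriteLine(f == 16777216f);   // True  -- the value was quietly changed
                Console.WriteLine(d == 16777216d);   // False -- double still tells them apart
            }
        }

    Default output formatting often rounds both to the same printed text, which is why the visible output matched even though the graded values did not.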

  • What is the purpose of unit testing an interface repository

    - by ahsteele
    I am unit testing an ICustomerRepository interface used for retrieving objects of type Customer. As a unit test what value am I gaining by testing the ICustomerRepository in this manner? Under what conditions would the below test fail? For tests of this nature is it advisable to do tests that I know should fail? i.e. look for id 4 when I know I've only placed 5 in the repository I am probably missing something obvious but it seems the integration tests of the class that implements ICustomerRepository will be of more value. [TestClass] public class CustomerTests : TestClassBase { private Customer SetUpCustomerForRepository() { return new Customer() { CustId = 5, DifId = "55", CustLookupName = "The Dude", LoginList = new[] { new Login { LoginCustId = 5, LoginName = "tdude" }, new Login { LoginCustId = 5, LoginName = "tdude2" } } }; } [TestMethod] public void CanGetCustomerById() { // arrange var customer = SetUpCustomerForRepository(); var repository = Stub<ICustomerRepository>(); // act repository.Stub(rep => rep.GetById(5)).Return(customer); // assert Assert.AreEqual(customer, repository.GetById(5)); } } Test Base Class public class TestClassBase { protected T Stub<T>() where T : class { return MockRepository.GenerateStub<T>(); } } ICustomerRepository and IRepository public interface ICustomerRepository : IRepository<Customer> { IList<Customer> FindCustomers(string q); Customer GetCustomerByDifID(string difId); Customer GetCustomerByLogin(string loginName); } public interface IRepository<T> { void Save(T entity); void Save(List<T> entity); bool Save(T entity, out string message); void Delete(T entity); T GetById(int id); ICollection<T> FindAll(); }

  • read xml in javascript problem

    - by Najmi
    Hi all, I have a problem with my code to read XML. I have used Ajax to read the XML data and populate it in a combobox. My problem is that it only reads the first item. Here is my code. My XML looks like this: <area> <code>1</code> <name>area1</name> </area> <area> <code>2</code> <name>area2</name> </area> and my JavaScript: if(http.readyState == 4 && http.status == 200) { //get select elements var item = document.ProblemMaintenanceForm.elements["probArea"]; //empty combobox item.options.length = 0; //read xml data from action file var test = http.responseXML.getElementsByTagName("area"); alert(test.length); for ( var i=0; i < test.length; i++ ){ var tests = test[i]; item.options[item.options.length] = new Option(tests.getElementsByTagName("name")[i].childNodes[i].nodeValue,tests.getElementsByTagName("code")[i].childNodes[i].nodeValue); } }

  • grailsApplication access in Grails unit Test

    - by Reza
    I am trying to write unit tests for a service which use grailsApplication.config to do some settings. It seems that in my unit tests that service instance could not access the config file (null pointer) for its setting while it could access that setting when I run "run-app". How could I configure the service to access grailsApplication service in my unit tests. class MapCloudMediaServerControllerTests { def grailsApplication @Before public void setUp(){ grailsApplication.config= ''' video{ location="C:\\tmp\\" // or shared filesystem drive for a cluster yamdi{ path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\yamdi" } ffmpeg { fileExtension = "flv" // use flv or mp4 conversionArgs = "-b 600k -r 24 -ar 22050 -ab 96k" path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\ffmpeg" makethumb = "-an -ss 00:00:03 -an -r 2 -vframes 1 -y -f mjpeg" } ffprobe { path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\ffprobe" params="" } flowplayer { version = "3.1.2" } swfobject { version = "" qtfaststart { path= "C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\qtfaststart" } } ''' } @Test void testMpegtoFlvConvertor() { log.info "In test Mpg to Flv Convertor function!" def controller=new MapCloudMediaServerController() assert controller!=null controller.videoService=new VideoService() assert controller.videoService!=null log.info "Is the video service null? ${controller.videoService==null}" controller.videoService.grailsApplication=grailsApplication log.info "Is grailsApplication null? ${controller.videoService.grailsApplication==null}" //Very important part for simulating the HTTP request controller.metaClass.request = new MockMultipartHttpServletRequest() controller.request.contentType="video/mpg" controller.request.content= new File("..\\MapCloudMediaServer\\web-app\\videoclips\\sample3.mpg").getBytes() controller.mpegtoFlvConvertor() byte[] videoOut=IOUtils.toByteArray(controller.response.getOutputStream()) def outputFile=new File("..\\MapCloudMediaServer\\web-app\\videoclips\\testsample3.flv") outputFile.append(videoOut) } }

  • About Interview structure for test automation lab developers

    - by Ikaso
    Hi, I am interviewing new applicants for a team that is doing test automation on our company product(s). The team is composed of junior software developers and a team leader. The product runs on Windows and has both managed and unmanaged parts. The test automation is done on both the client side (user mode and kernel mode) and the server side (IIS, Windows Services, backend). We are doing mainly integration tests and black-box tests. I am trying to figure out how to organize my interview. My overall idea is to ask about a project they have done, then ask some technical questions (multithreading, GC, design patterns) and one programming question. Please note that there is another interview done before mine with 2 programming questions. My programming question is rather simple (for example: reversing a singly linked list). My coworkers think that my questions will not find good developers since my questions are rather simple and well known, but so far most of the applicants fail those questions. My questions are: Should I change the structure of my interview for this kind of job? What questions do you ask to figure out if the applicant is test oriented? (Maybe I should provide a buggy implementation of a problem, let them find the bugs, and then ask them what tests they would have done.) Regards,

  • How to map a test onto a list of numbers

    - by Arthur Ulfeldt
    I have a function with a bug: user> (-> 42 int-to-bytes bytes-to-int) 42 user> (-> 128 int-to-bytes bytes-to-int) -128 user> Looks like I need to handle overflow when converting back... Better write a test to make sure this never happens again. This project is using clojure.contrib.test-is, so I write: (deftest int-to-bytes-to-int (let [lots-of-big-numbers (big-test-numbers)] (map #(is (= (-> % int-to-bytes bytes-to-int) %)) lots-of-big-numbers))) This should test that converting to a seq of bytes and back again produces the original result on a list of 10000 random numbers. Looks OK in theory, except none of the tests ever run. Testing com.cryptovide.miscTest Ran 23 tests containing 34 assertions. 0 failures, 0 errors. Why don't the tests run? What can I do to make them run?

  • Does new JUnit 4.8 @Category render test suites almost obsolete?

    - by grigory
    Given question 'How to run all tests belonging to a certain Category?' and the answer would the following approach be better for test organization? define master test suite that contains all tests (e.g. using ClasspathSuite) design sufficient set of JUnit categories (sufficient means that every desirable collection of sets is identifiable using one or more categories) define targeted test suites based on master test suite and set of categories For example: identify categories for speed (slow, fast), dependencies (mock, database, integration), function (), domain ( demand that each test is properly qualified (tagged) with relevant set of categories. create master test suite using ClasspathSuite (all tests found in classpath) create targeted suites by qualifying master test suite with categories, e.g. mock test suite, fast database test suite, slow integration for domain X test suite, etc. My question is more like soliciting approval rate for such approach vs. classic test suite approach. One unbeatable benefit is that every new test is immediately contained by relevant suites with no suite maintenance. One concern is proper categorization of each test.

  • using spring, hibernate and scala, is there a better way to load test data than dbunit?

    - by egervari
    Here are some things I really dislike about dbunit: 1) You cannot specify the exact ordering the inserts because dbunit likes to group your inserts by table name, and not by the order you define them in the XML file. This is a problem when you have records depending on other records in other tables, so you have to disable foreign key constraints during your tests... which actually sucks because these foreign key constraints will get fired in production while your tests won't be aware of them! 2) They seem hellbent on forcing you to use an xml namespace to define your xml... and I honestly can't be bothered to do this. I like the data.xml without any namespace. It works. But they are so hellbent on deprecating it. 3) Creating different xml files is hard on a per test basis, so it actually encourages creating data for your entire app. Unfortunately, this process is a little bloated too once the data grows in size and things get inter tangled. There has got to be a better way to split up your test data into chunks without having to copy/paste a lot of the test data across all of your tests. 4) Keeping track of id references in a big xml file is just impossible. If you have 130 domain classes, it just gets bewildering. This model simply does not scale. Is there something less bloated and better in the Spring/Hibernate space? db unit has worn out its welcome and I'm really looking for something better.

  • How do I prevent qFatal() from aborting the application?

    - by Dave
    My Qt application uses Q_ASSERT_X, which calls qFatal(), which (by default) aborts the application. That's great for the application, but I'd like to suppress that behavior when unit testing the application. (I'm using the Google Test Framework.) I have by unit tests in a separate project, statically linking to the class I'm testing. The documentation for qFatal() reads: Calls the message handler with the fatal message msg. If no message handler has been installed, the message is printed to stderr. Under Windows, the message is sent to the debugger. If you are using the default message handler this function will abort on Unix systems to create a core dump. On Windows, for debug builds, this function will report a _CRT_ERROR enabling you to connect a debugger to the application. ... To supress the output at runtime, install your own message handler with qInstallMsgHandler(). So here's my main.cpp file: #include <gtest/gtest.h> #include <QApplication> void testMessageOutput(QtMsgType type, const char *msg) { switch (type) { case QtDebugMsg: fprintf(stderr, "Debug: %s\n", msg); break; case QtWarningMsg: fprintf(stderr, "Warning: %s\n", msg); break; case QtCriticalMsg: fprintf(stderr, "Critical: %s\n", msg); break; case QtFatalMsg: fprintf(stderr, "My Fatal: %s\n", msg); break; } } int main(int argc, char **argv) { qInstallMsgHandler(testMessageOutput); testing::InitGoogleTest(&argc, argv); return RUN_ALL_TESTS(); } But my application is still stopping at the assert. I can tell that my custom handler is being called, because the output when running my tests is: My Fatal: ASSERT failure in MyClass::doSomething: "doSomething()", file myclass.cpp, line 21 The program has unexpectedly finished. What can I do so that my tests keep running even when an assert fails?

  • How to unit test generic classes

    - by Rowland Shaw
    I'm trying to set up some unit tests for an existing compact framework class library. However, I've fallen at the first hurdle, where it appears that the test framework is unable to load the types involved (even though they're both in the class library being tested) Test method MyLibrary.Tests.MyGenericClassTest.MyMethodTest threw exception: System.MissingMethodException: Could not load type 'MyLibrary.MyType' from assembly 'MyLibrary, Version=1.0.3778.36113, Culture=neutral, PublicKeyToken=null'.. My code is loosely: public class MyGenericClass<T> : List<T> where T : MyType, new() { public bool MyMethod(T foo) { throw new NotImplementedException(); } } With test methods: public void MyMethodTestHelper<T>() where T : MyType, new() { MyGenericClass<T> target = new MyGenericClass<T>(); foo = new T(); expected = true; actual = target.MyMethod(foo); Assert.AreEqual(expected, actual); } [TestMethod()] public void MyMethodTest() { MyMethodTestHelper<MyType>(); } I'm a bit stumped though, as I can't even get it to break in the debugger to get to the inner exception, so what else do I check? EDIT this does seem to be something specific to the Compact Framework - recompiling the class libraries and the unit tests for the full framework, gives the expected output (i.e. the debugger stops when I'm going to throw a NotImplementedException).

  • How to invoke make install for one subdirectory of Qt project

    - by chalup
    I'm working on a custom library and I wish users could just use it by adding: CONFIG += mylib to their pro files. This can be done by installing a mylib.prf file to %QTDIR%/mkspec/features. I've checked out how the Qt Mobility project creates and installs such a file, but there is one thing I'd like to do differently. If I correctly understood the pro/pri files of Qt Mobility, inside the example projects they don't really use CONFIG += mobility; instead they add the QtMobility sources to the include path and share the *.obj directory with the main library project. For my library I'd like to have examples that are as independent as possible, i.e. projects that can be compiled from anywhere once MyLib is compiled and installed. I have the following directory structure: mylib | |- examples |- src |- tests \- mylib.pro It seems that the easiest way to achieve what I described above is creating a mylib.pro like this: TEMPLATE = subdirs SUBDIRS += src SUBDIRS += examples tests:SUBDIRS += tests and somehow enforcing invocation of "cd src && make install" after building src. What is the best way to do this? Of course, any other suggestions for automatic library deployment before the examples are compiled are welcome.

  • django test client trouble

    - by Anton Koval'
    I've got a problem... we're writing project using django, and i'm trying to use django.test.client with nose test-framework for tests. Our code is like this: from simplejson import loads from urlparse import urljoin from django.test.client import Client TEST_URL = "http://smakly.localhost:9090/" def test_register(): cln = Client() ref_data = {"email": "[email protected]", "name": "???????", "website": "http://hot.bear.com", "xhr": "true"} print urljoin(TEST_URL, "/accounts/register/") response = loads(cln.post(urljoin(TEST_URL, "/accounts/register/"), ref_data)) print response["message"] and in nose output I catch: Traceback (most recent call last): File "/home/psih/work/svn/smakly/eggs/nose-0.11.1-py2.6.egg/nose/case.py", line 183, in runTest self.test(*self.arg) File "/home/psih/work/svn/smakly/src/smakly.tests/smakly/tests/frontend/test_profile.py", line 25, in test_register response = loads(cln.post(urljoin(TEST_URL, "/accounts/register/"), ref_data)) File "/home/psih/work/svn/smakly/parts/django/django/test/client.py", line 313, in post response = self.request(**r) File "/home/psih/work/svn/smakly/parts/django/django/test/client.py", line 225, in request response = self.handler(environ) File "/home/psih/work/svn/smakly/parts/django/django/test/client.py", line 69, in __call__ response = self.get_response(request) File "/home/psih/work/svn/smakly/parts/django/django/core/handlers/base.py", line 78, in get_response urlconf = getattr(request, "urlconf", settings.ROOT_URLCONF) File "/home/psih/work/svn/smakly/parts/django/django/utils/functional.py", line 273, in __getattr__ return getattr(self._wrapped, name) AttributeError: 'Settings' object has no attribute 'ROOT_URLCONF' My settings.py file does have this attribute. If I get the data from the server with standard urllib2.urllopen().read() it works in the proper way. Any ideas how I can solve this case?
