Search Results

Search found 6281 results on 252 pages for 'automated tests'.

Page 16/252 | < Previous Page | 12 13 14 15 16 17 18 19 20 21 22 23  | Next Page >

  • Global.asax for Unit Tests?

    - by AngryHacker
    In my MSTest unit test project, I need to execute some commands before any tests run. Is there a feature, something like what Global.asax provides for web-based projects, that will let me kick something off before any tests run?
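
    MSTest does have an assembly-level hook for this: a public static method marked [AssemblyInitialize] in any [TestClass] runs once before the first test in the assembly. A minimal sketch:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class AssemblySetup
        {
            // Runs once, before any test in this assembly executes.
            [AssemblyInitialize]
            public static void RunBeforeAnyTests(TestContext context)
            {
                // execute the one-time commands here
            }

            // Optional counterpart, run once after the last test finishes.
            [AssemblyCleanup]
            public static void RunAfterAllTests()
            {
            }
        }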

    Read the article

  • How to run tests in plugins?

    - by Daniel Engmann
    We have split our Grails application into several in-place plugins. Now we want to keep the tests in the same plugin as the classes they test. Is it possible to configure our application (e.g. in BuildConfig.groovy) so that the tests in the plugins are also executed when we run "test-app"?

    Read the article

  • Creating Tests at Runtime

    - by James Thigpen
    Are there any .NET testing frameworks which allow dynamic creation of tests without having to deal with a hokey Attribute syntax? Something like: foreach (var t in tests) { TestFx.Run(t.Name, t.TestDelegate); } But with the test reporting as you would expect... I could do something like this with RowTests et al, but that seems hokey.
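
    One way to get close without a custom runner is NUnit's TestCaseSource: a single attributed method executes every case yielded by a factory, and each case reports under its own name. A rough sketch, assuming NUnit 2.5 or later:

        using System;
        using System.Collections.Generic;
        using NUnit.Framework;

        [TestFixture]
        public class DynamicTests
        {
            // Each yielded case shows up as a separately named test in the results.
            private static IEnumerable<TestCaseData> Cases()
            {
                yield return new TestCaseData((Action)(() => Assert.AreEqual(4, 2 + 2))).SetName("Addition");
                yield return new TestCaseData((Action)(() => Assert.IsTrue("abc".StartsWith("a")))).SetName("StringPrefix");
            }

            [TestCaseSource("Cases")]
            public void Run(Action testBody)
            {
                testBody();
            }
        }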

    Read the article

  • Programming Concepts That Were "Automated" By Modern Languages

    - by Ygam
    Weird question, but here it is. What are the programming concepts that were "automated" by modern languages? What I mean are the things you used to have to do manually. Here is an example: I have just read that in C you have to manage memory yourself, whereas "modern" languages take care of it for you with garbage collection. Do you know of any others, or aren't there any more?

    Read the article

  • Python/Django tests running only one test at a time

    - by user2876296
    I have a unit test for my view:

        class TestFromAllAdd(TestCase):
            fixtures = ['staging_accounts_user.json', 'staging_main_category.json',
                        'staging_main_dashboard.json', 'staging_main_location.json',
                        'staging_main_product.json', 'staging_main_shoppinglist.json']

            def setUp(self):
                self.factory = RequestFactory()
                self.c = Client()
                self.c.login(username='admin', password='admin')

            def from_all_products_html404_test(self):
                request = self.factory.post('main/adding_from_all_products', {'product_id': ''})
                request.user = User.objects.get(username='admin')
                response = adding_from_all_products(request)
                self.assertEqual(response.status_code, 404)

    But I have a few more classes with tests and I can't run them all at the same time: "python manage.py test main" doesn't run any tests, but "python manage.py test main.TestFromAllAdd.from_all_products_html404_test" runs that one test.

    Read the article

  • Using automated bdd-gui-tests to keep user-documentation-screenshots up to date?

    - by k3b
    Are there developers out there who (ab)use the CaptureScreenshot() function of their automated GUI tests to also create up-to-date screenshots for the user documentation? Background: within the lifetime of an application, its GUI elements are constantly changing. It takes a lot of work to keep the user documentation up to date, especially if the example data in the pictures should match the textual description. If you already have automated BDD GUI tests, why not let them take screenshots at certain points? I am currently playing with web apps in .NET + SpecFlow + Selenium, but this topic also applies to other BDD engines (JRuby-Cucumber, MSpec, RSpec, ...) and GUI test frameworks (WatiN, Watir, White, ...). Any experience, thoughts or links on this topic would be helpful. How does the cost/benefit work out? Is it worth the effort? What are the drawbacks? See also: Is it practical to retroactively write specifications documenting a system via automated acceptance tests?
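
    For the SpecFlow + Selenium combination mentioned above, a screenshot-per-step hook is only a few lines. A minimal sketch, assuming the bindings expose the current WebDriver through some shared holder (WebDriverHolder here is a placeholder for whatever your project uses):

        using System.IO;
        using OpenQA.Selenium;
        using TechTalk.SpecFlow;

        [Binding]
        public class DocumentationScreenshots
        {
            private static int _shot;

            // After every step, dump a PNG named for the running scenario.
            [AfterStep]
            public void CaptureForUserDocs()
            {
                IWebDriver driver = WebDriverHolder.Current;          // placeholder accessor
                string scenario = ScenarioContext.Current.ScenarioInfo.Title;
                byte[] png = ((ITakesScreenshot)driver).GetScreenshot().AsByteArray;

                Directory.CreateDirectory("doc-screenshots");
                File.WriteAllBytes(
                    Path.Combine("doc-screenshots", scenario + "-" + (_shot++) + ".png"), png);
            }
        }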

    Read the article

  • Is it dangerous to substitute unit tests for user testing? [closed]

    - by MushinNoShin
    Is it dangerous to substitute unit tests for user testing? A co-worker believes we can reduce the manual user testing we need to do by adding more unit tests. Is this dangerous? Unit tests seem to have a very different purpose from user testing. Aren't unit tests meant to inform design and catch breaking changes early? Isn't that fundamentally different from determining whether an aspect of the system is correct as part of the system as a whole? Is this a case of substituting apples for oranges?

    Read the article

  • How do I check that my tests were not removed by other developers?

    - by parxier
    I've just come across an interesting collaborative coding issue at work. I wrote some unit/functional/integration tests and implemented new functionality in an application that has ~20 developers working on it. All tests passed and I checked in the code. The next day I updated my project and noticed (by chance) that some of my test methods had been deleted by other developers (merging problems on their end). The new application code was not touched. How can I detect such a problem automatically? I mean, I write tests to automatically check that my code still works (or was not deleted); how do I do the same for the tests? We're using Java, JUnit, Selenium, SVN and Hudson CI, if it matters.

    Read the article

  • What Are Some Tips For Writing A Large Number of Unit Tests?

    - by joshin4colours
    I've recently been tasked with testing some COM objects of the desktop app I work on. What this means in practice is writing a large number (100) of unit tests to cover different but related methods and objects. While the unit tests themselves are fairly straightforward (usually one or two Assert()-type checks per test), I'm struggling to figure out the best way to write them in a coherent, organized manner. What I have found is that copy-and-paste coding should be avoided: it creates more problems than it's worth, and it's even worse in test code than in production code, because test code has to be updated and modified more frequently. I'm leaning toward trying an OO approach, but again, the sheer number of tests makes even that daunting from an organizational standpoint, because of maintenance concerns. It also doesn't help that the tests are currently written in C++, which adds some complexity with memory management issues. Any thoughts or suggestions?
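
    The question is about C++, but the shape of one OO approach is quick to show. Here is a hedged C#/MSTest sketch of the idea (a base fixture owning the shared setup and helper assertions, with Calculator standing in for the object under test); the same pattern carries over to fixture-based C++ frameworks:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        public class Calculator { public int Add(int a, int b) { return a + b; } }   // stand-in for the real object

        // Shared setup and helper assertions live once in a base fixture,
        // so each individual test stays at one or two Assert calls.
        [TestClass]
        public abstract class CalculatorFixtureBase
        {
            protected Calculator Subject;

            [TestInitialize]
            public void CreateSubject() { Subject = new Calculator(); }

            protected void AssertAdds(int a, int b, int expected)
            {
                Assert.AreEqual(expected, Subject.Add(a, b));
            }
        }

        [TestClass]
        public class AdditionTests : CalculatorFixtureBase
        {
            [TestMethod]
            public void Adds_Small_Numbers() { AssertAdds(2, 3, 5); }

            [TestMethod]
            public void Adds_Negatives() { AssertAdds(-2, -3, -5); }
        }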

    Read the article

  • Making Separate Assemblies For Different Types Of Tests For The Same Component?

    - by sooprise
    I was told by a few members here that splitting up my unit tests into different assemblies for different components is the best way to structure unit tests. Now, I have a few questions about that idea. What are the advantages of this? Organization, and isolation of errors? Let's say I have a component named "calculator", and I create an assembly for the unit tests on "calculator". Would I create a separate assembly for the integration tests I want to run on "calculator"? Or is the definition of an integration test a test across multiple components, like "calculator" and whatever else, which would require a separate assembly to test both of them together? In that case, would I have one assembly to do all of the integration testing for every component combination?
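
    Separate assemblies are one way to draw the unit/integration line; another option worth comparing is keeping one test assembly per component and tagging the slower tests with a category so the runner can filter them. A hedged MSTest sketch (the TestCategory attribute needs VS2010 or later):

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class CalculatorTests
        {
            [TestMethod]                                   // plain unit test, runs everywhere
            public void Add_Returns_Sum()
            {
                Assert.AreEqual(5, 2 + 3);
            }

            [TestMethod]
            [TestCategory("Integration")]                  // slower test that touches real collaborators
            public void Calculation_Is_Persisted_To_Database()
            {
                // ...exercise the calculator plus the real database here...
            }
        }

    The runner can then include or exclude the "Integration" category (for example via mstest's /category switch), which keeps one test assembly per component without mixing fast and slow runs.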

    Read the article

  • What may be wrong with String::ToIdentifier::EN tests?

    - by wk01
    I am trying to install the Perl module String::ToIdentifier::EN (as a dependency of DBIx::Class::Schema::Loader), but it fails its tests. I googled the errors but got no clear picture of where the problem is:

        Building and testing String-ToIdentifier-EN-0.07
        cp lib/String/ToIdentifier/EN.pm blib/lib/String/ToIdentifier/EN.pm
        cp lib/String/ToIdentifier/EN/Unicode.pm blib/lib/String/ToIdentifier/EN/Unicode.pm
        Manifying blib/man3/String::ToIdentifier::EN.3pm
        Manifying blib/man3/String::ToIdentifier::EN::Unicode.3pm
        PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-e" "test_harness(0, 'inc', 'blib/lib', 'blib/arch')" t/00_basic.t t/10_ascii.t t/20_capitalization.t
        Byte order is not compatible at ../../lib/Storable.pm (autosplit into ../../lib/auto/Storable/_retrieve.al) line 380, at /home/wanradt/perl5/lib/perl5/Lingua/EN/Tagger.pm line 167
        # Looks like you planned 25 tests but ran 4.
        # Looks like your test exited with 25 just after 4.
        t/00_basic.t ........... Dubious, test returned 25 (wstat 6400, 0x1900)
        Failed 21/25 subtests
        Byte order is not compatible at ../../lib/Storable.pm (autosplit into ../../lib/auto/Storable/_retrieve.al) line 380, at /home/wanradt/perl5/lib/perl5/Lingua/EN/Tagger.pm line 167
        # Looks like you planned 768 tests but ran 512.
        # Looks like your test exited with 25 just after 512.
        t/10_ascii.t ........... Dubious, test returned 25 (wstat 6400, 0x1900)
        Failed 256/768 subtests
        t/20_capitalization.t .. ok

        Test Summary Report
        -------------------
        t/00_basic.t   (Wstat: 6400 Tests: 4 Failed: 0)
          Non-zero exit status: 25
          Parse errors: Bad plan. You planned 25 tests but ran 4.
        t/10_ascii.t   (Wstat: 6400 Tests: 512 Failed: 0)
          Non-zero exit status: 25
          Parse errors: Bad plan. You planned 768 tests but ran 512.
        Files=3, Tests=528, 1 wallclock secs ( 0.07 usr 0.02 sys + 0.42 cusr 0.04 csys = 0.55 CPU)
        Result: FAIL
        Failed 2/3 test programs. 0/528 subtests failed.
        make: *** [test_dynamic] Error 255
        -> FAIL Installing String::ToIdentifier::EN failed. See /home/wanradt/.cpanm/build.log for details.

    "Byte order is not compatible at..." seems to be the key, but to what?

    Read the article

  • Debugging Cactus Tests in Eclipse

    - by Th3sandm4n
    Side note: this is inherited code; I didn't do any of the setup and am new to the project. I'm trying to set up remote debugging in Eclipse for these unit tests, which use Cactus. I've read around a bit, but I can't seem to find any real information on how to set this up. The closest I've found is http://www.eclipse.org/webtools/community/tutorials/CactusInWTP/CactusInWTP.html, but it just says to use Debug -> Debug on Server; nowhere does it say where the debug port is set, and I can't find anything on how to enable or configure it. I'm asking in case anyone has set this up before; it would really help to step through the code rather than just logging. The runner plugin (http://jakarta.apache.org/cactus/integration/eclipse/runner_plugin.html) looks promising, but I don't even know where to download it, as the page doesn't link to a location. The project uses Ant, Cactus, and I'm using Eclipse. Thanks.

    EDIT: Here is the target I'm using:

        <junit fork="no" forkmode="perTest" printsummary="yes"
               haltonfailure="no" haltonerror="no" failureproperty="tests.failed">
            <jvmarg value="-Xdebug" />
            <jvmarg value="-Xrunjdwp:transport=dt_socket,address=localhost:8005,server=y,suspend=y" />
            <formatter type="xml" usefile="true" />
            <formatter type="plain" usefile="false" />
            <classpath>
                <pathelement location="${clover.jar}"/>
                <path refid="cactus.classpath.id" />
                <pathelement location="../ejb/src" />
            </classpath>
            <sysproperty key="cactus.contextURL" value="${cactus.contextURL}"/>
            <test name="com.test.AllTests" outfile="TESTS" />
        </junit>

    Read the article

  • Asp.Net MVC Tutorial Unit Tests

    - by Nicholas
    I am working through Steve Sanderson's book Pro ASP.NET MVC Framework and I am having issues with two unit tests that produce errors. The example below tests the CheckOut ViewResult:

        [AcceptVerbs(HttpVerbs.Post)]
        public ViewResult CheckOut(Cart cart, FormCollection form)
        {
            // Empty carts can't be checked out
            if (cart.Lines.Count == 0)
            {
                ModelState.AddModelError("Cart", "Sorry, your cart is empty!");
                return View();
            }

            // Invoke model binding manually
            if (TryUpdateModel(cart.ShippingDetails, form.ToValueProvider()))
            {
                orderSubmitter.SubmitOrder(cart);
                cart.Clear();
                return View("Completed");
            }
            else // Something was invalid
                return View();
        }

    with the following unit test:

        [Test]
        public void Submitting_Empty_Shipping_Details_Displays_Default_View_With_Error()
        {
            // Arrange
            CartController controller = new CartController(null, null);
            Cart cart = new Cart();
            cart.AddItem(new Product(), 1);

            // Act
            var result = controller.CheckOut(cart, new FormCollection { { "Name", "" } });

            // Assert
            Assert.IsEmpty(result.ViewName);
            Assert.IsFalse(result.ViewData.ModelState.IsValid);
        }

    I have resolved any issues surrounding TryUpdateModel by upgrading to ASP.NET MVC 2 (Release Candidate 2), and the website runs as expected. The associated error message is:

        Tests.CartControllerTests.Submitting_Empty_Shipping_Details_Displays_Default_View_With_Error:
        System.ArgumentNullException : Value cannot be null.
        Parameter name: controllerContext

    and, in more detail:

        at System.Web.Mvc.ModelValidator..ctor(ModelMetadata metadata, ControllerContext controllerContext)
        at System.Web.Mvc.DefaultModelBinder.OnModelUpdated(ControllerContext controllerContext, ModelBindingContext bindingContext)
        at System.Web.Mvc.DefaultModelBinder.BindComplexModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
        at System.Web.Mvc.Controller.TryUpdateModel[TModel](TModel model, String prefix, String[] includeProperties, String[] excludeProperties, IValueProvider valueProvider)
        at System.Web.Mvc.Controller.TryUpdateModel[TModel](TModel model, IValueProvider valueProvider)
        at WebUI.Controllers.CartController.CheckOut(Cart cart, FormCollection form)

    Has anyone run into a similar issue, or indeed got the test to pass?
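
    The stack trace suggests TryUpdateModel is reaching for controller.ControllerContext, which a controller constructed directly in a test doesn't have. A hedged guess at a fix is to give the controller one in the Arrange step before calling CheckOut, for example with the Moq library:

        // Arrange (additional setup; assumes Moq is referenced by the test project)
        controller.ControllerContext = new ControllerContext(
            new Mock<HttpContextBase>().Object,   // minimal fake HTTP context
            new RouteData(),
            controller);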

    Read the article

  • NAnt not running NUnit tests

    - by ctford
    I'm using NUnit 2.5 and NAnt 0.85 to compile a .NET 3.5 library. Because NAnt 0.85 doesn't support .NET 3.5 out of the box, I've added an entry for the 3.5 framework to NAnt.exe.config. 'MyLibrary' builds, but when I hit the "test" target to execute the NUnit tests, none of them seem to run:

        [nunit2] Tests run: 0, Failures: 0, Not run: 0, Time: 0.012 seconds

    Here are the entries in my NAnt.build file for building and running the tests:

        <target name="build_tests" depends="build_core">
            <mkdir dir="Target" />
            <csc target="library" output="Target\Test.dll" debug="true">
                <references>
                    <include name="Target\MyLibrary.dll"/>
                    <include name="Libraries\nunit.framework.dll"/>
                </references>
                <sources>
                    <include name="Test\**\*.cs" />
                </sources>
            </csc>
        </target>

        <target name="test" depends="build_tests">
            <nunit2>
                <formatter type="Plain" />
                <test assemblyname="Target\Test.dll" />
            </nunit2>
        </target>

    Is there some versioning issue I need to be aware of? Test.dll runs fine in the NUnit GUI. The test DLL is definitely being found, because if I move it I get the following error:

        Failure executing test(s). If your assembly is not built using NUnit 2.2.8.0...
        Could not load file or assembly 'Test' or one of its dependencies...

    I would be grateful if anyone could point me in the right direction or describe a similar situation they have encountered.

    Edit: I have since tried NAnt 0.86 Beta 1, and the same problem occurs.

    Read the article

  • Unit tests for deep cloning

    - by Will Dean
    Let's say I have a complex .NET class, with lots of arrays and other class object members. I need to be able to generate a deep clone of this object - so I write a Clone() method, and implement it with a simple BinaryFormatter serialize/deserialize - or perhaps I do the deep clone using some other technique which is more error prone and I'd like to make sure is tested. OK, so now (ok, I should have done it first) I'd like write tests which cover the cloning. All the members of the class are private, and my architecture is so good (!) that I haven't needed to write hundreds of public properties or other accessors. The class isn't IComparable or IEquatable, because that's not needed by the application. My unit tests are in a separate assembly to the production code. What approaches do people take to testing that the cloned object is a good copy? Do you write (or rewrite once you discover the need for the clone) all your unit tests for the class so that they can be invoked with either a 'virgin' object or with a clone of it? How would you test if part of the cloning wasn't deep enough - as this is just the kind of problem which can give hideous-to-find bugs later?
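
    One accessor-free option is to compare serialized snapshots of the original and the clone, since BinaryFormatter already walks the private state. A hedged NUnit sketch (ComplexObjectMother.CreateFullyPopulated and MutateNestedState are hypothetical test helpers, and the first check is somewhat circular if Clone() itself is implemented via BinaryFormatter):

        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;
        using NUnit.Framework;

        [TestFixture]
        public class CloneTests
        {
            // Serializes the whole graph, private fields included, so no accessors are needed.
            private static byte[] Snapshot(object graph)
            {
                using (var stream = new MemoryStream())
                {
                    new BinaryFormatter().Serialize(stream, graph);
                    return stream.ToArray();
                }
            }

            [Test]
            public void Clone_Copies_All_State()
            {
                var original = ComplexObjectMother.CreateFullyPopulated();   // hypothetical builder
                var clone = original.Clone();

                Assert.AreNotSame(original, clone);
                CollectionAssert.AreEqual(Snapshot(original), Snapshot(clone));
            }

            [Test]
            public void Clone_Is_Deep_Enough_To_Survive_Mutation_Of_The_Original()
            {
                var original = ComplexObjectMother.CreateFullyPopulated();
                var clone = original.Clone();
                var cloneStateBefore = Snapshot(clone);

                ComplexObjectMother.MutateNestedState(original);             // hypothetical helper

                // A clone that isn't deep enough shares nested objects, so mutating
                // the original would change the clone's snapshot as well.
                CollectionAssert.AreEqual(cloneStateBefore, Snapshot(clone));
            }
        }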

    Read the article

  • starting and stopping hsqldb from unit tests

    - by Casey
    I'm trying to create integration tests using HSQLDB in in-memory mode. At the moment, I have to start the HSQLDB server from the command line before running the unit tests. I would like to be able to control the HSQLDB server from my integration tests, but I can't seem to get it all working from code. Thanks, Casey

    Update: This appears to work, along with having a hibernate.cfg.xml file on the classpath:

        org.hsqldb.Server.main(new String[]{});

    and in my hibernate.cfg.xml file:

        <property name="connection.driver_class">org.hsqldb.jdbcDriver</property>
        <property name="connection.url">jdbc:hsqldb:mem:ww</property>
        <property name="connection.username">sa</property>
        <property name="connection.password"></property>
        <property name="connection.pool_size">1</property>
        <property name="dialect">org.hibernate.dialect.HSQLDialect</property>
        <property name="current_session_context_class">thread</property>
        <property name="cache.provider_class">org.hibernate.cache.NoCacheProvider</property>
        <property name="hbm2ddl.auto">update</property>

    Read the article

  • Layout of Ant build.xml with junit tests

    - by Derk
    In a project I have a folder src for all application source code and a separate folder test for all JUnit tests (both with a similar package hierarchy). Now I want Ant to run all the tests in the test folder, but the problem is that files in the src folder with "Test" in the filename are also included. This is the test target in the build.xml:

        <target name="test" depends="build">
            <mkdir dir="reports/tests" />
            <junit fork="yes" printsummary="yes">
                <formatter type="xml"/>
                <classpath>
                    <path location="${build}/WEB-INF/classes"/>
                    <fileset dir="${projectname.home}/lib">
                        <include name="servlet-api.jar"/>
                        <include name="httpunit.jar"/>
                        <include name="Tidy.jar"/>
                        <include name="js.jar"/>
                        <include name="junit.jar"/>
                    </fileset>
                </classpath>
                <batchtest todir="reports/tests">
                    <fileset dir="${build}/WEB-INF/classes">
                        <include name="**/*Test.class"/>
                    </fileset>
                </batchtest>
            </junit>
        </target>

    And I have added the test folder to the build target:

        <target name="build" depends="init,init-props,prepare">
            <javac source="1.5" debug="true" destdir="${build}/WEB-INF/classes">
                <src path="src" />
                <src path="test" />
                <classpath refid="classpath"/>
            </javac>
        </target>

    Read the article

  • MSTest unit test passes by itself, fails when other tests are run

    - by Sarah Vessels
    I'm having trouble with some MSTest unit tests that pass when I run them individually but fail when I run the entire unit test class. The tests exercise some code that SLaks helped me with earlier, and he warned me that what I was doing wasn't thread-safe. However, now my code is more complicated and I don't know how to go about making it thread-safe. Here's what I have:

        public static class DLLConfig
        {
            private static string _domain;

            public static string Domain
            {
                get
                {
                    return _domain = AlwaysReadFromFile
                        ? readCredentialFromFile(DOMAIN_TAG)
                        : _domain ?? readCredentialFromFile(DOMAIN_TAG);
                }
            }
        }

    And my test is simple:

        string expected = "the value I know exists in the file";
        string actual = DLLConfig.Domain;
        Assert.AreEqual(expected, actual);

    When I run this test by itself, it passes. When I run it alongside all the other tests in the test class (which perform similar checks on different properties), actual is null and the test fails. I note this is not a problem for a property whose type is a custom enum; maybe I'm having this problem with the Domain property because it is a string? Or maybe it's a multi-threading issue with how MSTest works?
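
    One pattern that produces exactly this symptom is shared static state surviving between tests: MSTest creates a fresh instance of the test class for each test, but static fields such as _domain (and anything another test stubs, for example a redirected config file) live for the whole run. A hedged sketch of one way to rule that out, assuming DLLConfig gains a hypothetical test-only reset hook:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class DLLConfigTests
        {
            // Runs before every test, so no test inherits the cached _domain
            // (or other static state) left behind by an earlier test.
            [TestInitialize]
            public void ResetSharedState()
            {
                DLLConfig.ResetForTests();   // hypothetical hook that sets _domain back to null
            }

            [TestMethod]
            public void Domain_Is_Read_From_File()
            {
                Assert.AreEqual("the value I know exists in the file", DLLConfig.Domain);
            }
        }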

    Read the article

  • Some unit tests fail in automated Team Build task

    - by weenet
    I have an odd situation. I have a suite of unit tests that pass on my dev machine. They pass on the build machine if run from visual studio. But 5 of them reliably fail during the automated build. There is nothing noteworthy about the ones that fail that I can see (and I've stared at them a long time). Anyone seen anything like this? Is there a way to see the test output in the Team Build log? All I get is Passed or Failed messages, but not the Assert message. Thanks!

    Read the article

  • Automated builds of BizTalk 2009 projects using Team System 2008 Build

    - by Doug
    I'm trying to configure automated builds of BizTalk 2009 projects using Team Foundation Server 2008. We have a staging server which has BizTalk 2009 installed. I ran the Team Foundation Server Build Setup on this server, and it can build non-BizTalk projects OK. However, BizTalk projects fail to build. I suspected something was amiss when "Deployment" was not a valid build type! I tried copying various things over from a developer PC which has BizTalk and Visual Studio 2008 installed, but still couldn't get it to work. I don't really want to install Visual Studio on the staging server, but without it the "Developer Tools and SDK" option in the BizTalk install is greyed out. I guess I need this in order for BizTalk projects to compile. So, my question is: can a BizTalk 2009 server be used as a TFS build agent to build BizTalk projects without having Visual Studio installed? If the answer is no, what's the smallest part of VS that can be installed to get this to work? Thanks in advance.

    Read the article

  • Automated payment notification with php

    - by Rob Y
    I'm about to integrate automated payments into a site. To date, I've successfully used PayPal for a number of projects, but these have always been sites selling physical goods, where I upload the cart contents, the user pays, and a person physically ships the goods. This site takes a one-off payment to enable extra features on a web app. My current thinking is to go down the PayPal IPN route to get a notification back and update the user's account based on the successful payment. The question is in two parts:

    1. Is there a better / simpler way? (Any payment processor considered.)
    2. Does anyone know of a code library or plug-in for PHP which will speed up my integration?

    Thanks for your help. Rob

    Read the article

  • SQL Server 2008 automated database drop, create and fill

    - by lox
    For the database in my project I have a drop/create script for the database, a script for creating tables and SPs, and an Access 2003 .mdb file with some exported values. To set up the database from scratch I can use SQL Server Management Studio to run one script, then the other, and lastly run the somewhat tedious import task manually. But I would like to make this as automated as possible, hopefully something like putting the three files in a folder along with a fourth script to execute, looking something like:

        run script "dropcreate.sql"
        run script "createtables.sql"
        import "values.mdb"

    How is this done? I hope to avoid using SSIS and the like. The tricky part is of course the data import, where I can't seem to find a simple way. It is also important that the files are left as they are and not embedded into anything.
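
    One hedged way to script this without SSIS is a small console driver: shell out to sqlcmd for the two SQL scripts (it handles GO batch separators and leaves the files untouched) and bulk-copy the Access data across. Server, database and table names below are placeholders; repeat the copy per exported table:

        using System.Data;
        using System.Data.OleDb;
        using System.Data.SqlClient;
        using System.Diagnostics;

        class RebuildDatabase
        {
            static void Main()
            {
                // 1-2. Run the drop/create and table scripts unchanged, straight from their files.
                foreach (var script in new[] { "dropcreate.sql", "createtables.sql" })
                    Process.Start("sqlcmd", "-S . -E -i " + script).WaitForExit();

                // 3. Pull rows out of the Access 2003 file and bulk-copy them into the new database.
                using (var access = new OleDbConnection(
                    @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=values.mdb"))
                using (var sql = new SqlConnection(@"Server=.;Database=MyProject;Integrated Security=true"))
                {
                    access.Open();
                    sql.Open();

                    var rows = new DataTable();
                    new OleDbDataAdapter("SELECT * FROM SomeTable", access).Fill(rows);

                    using (var bulk = new SqlBulkCopy(sql) { DestinationTableName = "SomeTable" })
                        bulk.WriteToServer(rows);
                }
            }
        }

    (Note that the Jet OLE DB provider for .mdb files only loads in a 32-bit process.)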

    Read the article

  • how often should the entire suite of a system's unit tests be run?

    - by gerryLowry
    Generally, I'm still very much a unit testing neophyte. BTW, you may also see this question on other forums like xUnit.net, et cetera, because it's an important question to me. I apologize in advance for my cross-posting; your opinions are very important to me and not everyone in this forum belongs to the other forums too.

    I was looking at a large, decade-old legacy system which has had over 700 unit tests written recently (700 is just a small beginning). The tests happen to be written in MSTest, but this question applies to all testing frameworks AFAIK. When I ran "ALL TESTS" via VS2008, the final count was only seven tests. That's about 1% of the total tests that have been written to date.

    MORE INFORMATION: The ASP.NET MVC 2 RTM source code, including its unit tests, is available on CodePlex; those unit tests are also written in MSTest, even though (an irrelevant fact) Brad Wilson later joined the ASP.NET MVC team as its Senior Programmer. All 2000-plus tests get run, not just a few.

    QUESTION: Given that AFAIK the purpose of unit tests is to identify breakages in the SUT, am I correct in thinking that the "best practice" is to always, or at least very frequently, run all of the tests? Thank you. Regards, Gerry (Lowry)

    Read the article

  • Ideas on simulating webservices for local automated testing.

    - by novice123
    I am testing an app which talks to different webservices over the internet. For my automated testing, I don't want to go over the network. To achieve this, I need to simulate the webservices on my machine with another app. My initial thought is to record all the requests and responses between the client and the webservice, and then write a simulation app which simply replays those responses. The disadvantage is that every time the webservice protocol changes a bit, I have to modify all my recorded responses, so I am looking for more elegant solutions. Has anyone solved a similar problem? Any thoughts or suggestions are appreciated.
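
    If replaying recorded traffic proves brittle, a small local stub that serves canned responses keyed by request path is another option; the canned files can be regenerated when the protocol changes instead of hand-editing captures. A rough sketch (shown here in C#, though the same idea works in any language; the port, paths and file layout are all assumptions):

        using System;
        using System.IO;
        using System.Net;
        using System.Text;

        // A tiny local stand-in for the remote webservice: every request is answered
        // with a canned response loaded from disk, keyed by the request path.
        class FakeWebService
        {
            static void Main()
            {
                var listener = new HttpListener();
                listener.Prefixes.Add("http://localhost:8080/");   // point the app under test here
                listener.Start();
                Console.WriteLine("Stub service listening on http://localhost:8080/");

                while (true)
                {
                    HttpListenerContext ctx = listener.GetContext();
                    // e.g. a request to /weather/today is answered with canned/weather/today.xml
                    string canned = Path.Combine("canned",
                        ctx.Request.Url.AbsolutePath.TrimStart('/') + ".xml");
                    byte[] body = File.Exists(canned)
                        ? File.ReadAllBytes(canned)
                        : Encoding.UTF8.GetBytes("<error>no canned response recorded</error>");

                    ctx.Response.ContentType = "text/xml";
                    ctx.Response.OutputStream.Write(body, 0, body.Length);
                    ctx.Response.Close();
                }
            }
        }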

    Read the article

  • Web Search API which allows automated queries

    - by Spi1988
    I need to develop a Java desktop application which sends queries to a search engine in order to obtain only the very first, highest-ranked pages (for example, the first 4 pages only). Some heavy processing needs to be performed on the retrieved pages, so the time between one query and the next won't be less than a minute. I would like to know whether there is any web search API for Java suitable for my situation, i.e. one which allows the use of automated queries (since in my case the queries are generated programmatically, not through user interaction). I have checked Google's AJAX Search API and also Yahoo's Search BOSS; however, they only allow queries triggered by direct user interaction.

    Read the article
