Search Results

Search found 8185 results on 328 pages for 'technical tests'.


  • What's the best practice to set up testing for ASP.NET MVC? What to use/process/etc?

    - by melaos
    Hi there, I'm trying to learn how to properly set up testing for an ASP.NET MVC app. From what I've been reading here and there so far, the definition of legacy code piques my interest, where it says that legacy code is any code without unit tests. I did my project in a hurry, not having the time to properly set up unit tests for the app, and I'm still learning how to do TDD and unit testing properly at the same time. Then I came upon Selenium IDE/RC and was using it to test on the browser end. It was during that time that I also came upon the concept of integration testing. From my understanding, unit testing should define the tests and basic assumptions of each function, and if the function depends on something else, that something else needs to be mocked so that each test stays isolated and can run fast. Questions: Am I right to say that the project should have started with unit tests and proper mocks, using something like Rhino Mocks, and that anything else which requires a third-party DLL, database access, etc. should be covered by integration testing using Selenium? I have a function which calls a third-party DLL, and I'm not sure whether to write a unit test in NUnit that just instantiates the object and passes it some dummy data (which breaks the mocking part) or to just cover that part in my Selenium integration tests when I submit my forms and the DLL gets called. And for user acceptance tests, is it safe to say we can just use Selenium again? Am I missing something, or is there a better way/framework? I'm trying to put in more tests for regression testing, to ensure that nothing breaks when we put in new features. I also like the idea of TDD because it helps to better define the function, sort of like meta-documentation. Thanks! I hope this question isn't too subjective, because I need it for my case.


  • Testing with Qt's QTestLib module

    - by ak
    Hi, I started writing some tests with Qt's unit testing system. How do you usually organize the tests? Is it one test class per module class, or do you test the whole module with a single test class? The Qt docs (or some podcast that I recently watched) suggested following the former strategy. I want to write tests for a module. The module exposes only one class to its users, but there is a lot of logic abstracted into other classes, which I would also like to test besides the public class. The problem is that Qt's proposed way to run tests involves the QTEST_MAIN macro:

        QTEST_MAIN(TestClass)
        #include "test_class.moc"

    and eventually one test program is capable of testing just one test class. And it kind of sucks to create a test project for every single class in the module. Of course, one could take a look at the QTEST_MAIN macro, rewrite it, and run other test classes. But is there something that works out of the box?


  • Why is RSpec so slow under Rails?

    - by Adrian Dunston
    Whenever I run the RSpec tests for my Rails application, it takes forever and a day of overhead before it actually starts running tests. Why is RSpec so slow? Is there a way to speed up Rails' initial load, or to single out the part of my Rails app I need (e.g. ActiveRecord stuff only) so it doesn't load absolutely everything to run a few tests?


  • EasyMock vs Mockito: Design vs Maintainability?

    - by RAbraham
    One way of thinking about this is: if we care about the design of the code, then EasyMock is the better choice, as it gives you feedback through its concept of expectations. If we care about the maintainability of tests (tests that are easier to read and write, and less brittle tests that are not affected much by change), then Mockito seems the better choice. My questions are: If you have used EasyMock in large-scale projects, do you find that your tests are harder to maintain? What are the limitations of Mockito (other than endo-testing)?


  • Programmatically Gathering NUnit results

    - by skb
    Hi. I am running some NUnit tests automatically when my nightly build completes. I have a console application which detects the new build, copies the built MSIs to a local folder, and deploys all of my components to a test server. After that, I have a bunch of tests in NUnit DLLs that I run by executing nunit-console.exe using Process/ProcessStartInfo. My question is: how can I programmatically get the numbers for total/successful/failed tests?


  • Grails - Link checking as part of continuous integration

    - by Reverend Gonzo
    So, we have a Grails app set up with a Hudson CI build process. We're running unit tests and integration tests, and we're about to set up Selenium for some functional tests as well. However, are there any good ways of fully checking a site's links to make sure nothing has broken in a release? I know there are link checkers in general, but I'd like this to be part of the build process, so that a build outright fails if something isn't right.
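
    As a rough illustration only (the question does not name a tool), the sketch below shows the kind of stand-alone link check a Hudson job could run as an extra build step, written here in Python for brevity; a non-zero exit code is what would fail the build. The start URL is a placeholder, and the script only checks links found on that single page rather than crawling the whole site.

        # Minimal link check: fetch the start page, collect <a href> targets,
        # request each one, and exit non-zero if any request fails.
        import sys
        import urllib.request
        from html.parser import HTMLParser
        from urllib.parse import urljoin

        START = "http://localhost:8080/myapp/"   # placeholder URL

        class LinkCollector(HTMLParser):
            def __init__(self):
                super().__init__()
                self.links = []

            def handle_starttag(self, tag, attrs):
                if tag == "a":
                    for name, value in attrs:
                        if name == "href" and value:
                            self.links.append(urljoin(START, value))

        page = urllib.request.urlopen(START, timeout=10).read().decode("utf-8", "replace")
        collector = LinkCollector()
        collector.feed(page)

        broken = []
        for link in collector.links:
            if not link.startswith("http"):
                continue                      # skip mailto:, javascript:, etc.
            try:
                urllib.request.urlopen(link, timeout=10)
            except Exception as exc:
                broken.append((link, exc))

        for link, exc in broken:
            print("BROKEN: %s (%s)" % (link, exc))
        sys.exit(1 if broken else 0)          # non-zero exit fails the CI build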


  • Jenkins plugin for different types of slaves

    - by user1195996
    We have some tests that need to be run on multiple types of specific hardware. It's possible that these tests might pass on some pieces of hardware but fail on others, and we want to know where they work and where they fail. So, for certain tests, we would like to provide a list of hardware they need to be tested on. We'd like to put all the needed hardware in a pool that Jenkins has access to, and then have Jenkins run the right tests on the right hardware, depending on the hardware list that comes with the test. And of course we'd like to keep track of which test worked where. Is there a plugin for Jenkins that can handle this sort of thing? Has anyone else solved this sort of problem?


  • PHP specifying a fixed include source for scripts in different directories

    - by Extrakun
    I am currently doing unit testing, and I use folders to organize my test cases. All cases pertaining to managing user accounts, for example, go under \tests\accounts. Over time there are more test cases, and I have begun to separate them by type, such as \tests\accounts\create, \tests\accounts\update, etc. However, one annoying problem is that I have to specify the path to a set of common includes:

        include_once ("../../../../autoload.php");
        include_once ("../../../../init.php");

    A test case directly in \tests\accounts\ would require changing the include (one less directory level down). Is there any way to have them locate my two common includes? I understand I could set include paths in my PHP configuration, or use server environment variables, but I would like to avoid such solutions, as they make the application less portable and couple it to a layer the programmer can't control (some web hosts don't allow changing PHP configuration settings, for example).


  • Smoke testing a .NET web application

    - by pdr
    I cannot believe I'm the first person to go through this thought process, so I'm wondering if anyone can help me out with it. Current situation: developers write a web site, operations deploy it. Once deployed, a developer smoke tests it to make sure the deployment went smoothly. To me this feels wrong; it essentially means it takes two people to deploy an application, and in our case those two people are on opposite sides of the planet, so time zones come into play and cause havoc. But the fact remains that developers know what the minimum set of tests is, and that may change over time (particularly for the web service portion of our app). Operations, with all due respect to them (and they would say this themselves), are button-pushers who need a set of instructions to follow.

    The manual solution is that we document the test cases and operations follow that document each time they deploy. That sounds painful, plus they may be deploying different versions to different environments (specifically UAT and production) and may need a different set of instructions for each. On top of this, one of our near-future plans is an automated daily deploy environment, so then we'll have to instruct a computer how to deploy a given version of our app. I would dearly like to add to that the instructions for how to smoke test the app.

    Now, developers are better at documenting instructions for computers than they are for people, so the obvious solution seems to be a combination of NUnit (I know these aren't unit tests per se, but it is a built-for-purpose test runner) and either the WatiN or Selenium APIs to run through the obvious browser steps and call the web service, and then explain to the operations guys how to run those tests. I can do that; I have mostly done it already. But wouldn't it be nice if I could make that process simpler still?

    At this point, the operations guys and the computer are going to have to know which set of tests relates to which version of the app, and tell the NUnit runner which base URL it should point to (say, www.example.com = v3.2 or test.example.com = v3.3). Wouldn't it be nicer if the test runner itself could be given a base URL and could download, say, a zip file, unpack it, and edit a configuration file automatically before running any test fixtures it found in there? Is there an open source app that would do that? Is there a need for one? Is there a solution using something other than NUnit, maybe FitNesse? For the record, I'm looking at .NET-based tools first because most of our developers are primarily .NET developers, but we're not married to it. If such a tool exists using other languages to write the tests, we'll happily adapt, as long as there is a test runner that works on Windows.
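
    Since the poster says the tests need not be written in .NET, here is a minimal, hedged sketch of the parameterised smoke test being described, using only Python's standard library. The SMOKE_BASE_URL environment variable name and the /service/ping endpoint are invented for illustration; a real suite would assert whatever the deployed site and web service are actually expected to return.

        # Smoke tests that take their target from an environment variable,
        # so the same suite can point at UAT or Production.
        import os
        import unittest
        import urllib.request

        BASE_URL = os.environ.get("SMOKE_BASE_URL", "http://test.example.com")

        class SmokeTests(unittest.TestCase):
            def get_status(self, path):
                # Return the HTTP status code for BASE_URL + path.
                with urllib.request.urlopen(BASE_URL + path, timeout=15) as resp:
                    return resp.status

            def test_home_page_responds(self):
                self.assertEqual(self.get_status("/"), 200)

            def test_web_service_responds(self):
                # "/service/ping" is a made-up endpoint; substitute a real call.
                self.assertEqual(self.get_status("/service/ping"), 200)

        if __name__ == "__main__":
            unittest.main()

    Run it with, for example, SMOKE_BASE_URL=http://www.example.com python smoke_tests.py (file name invented); unittest exits non-zero on failure, which is what a deployment script or CI job can key off.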


  • Correct approach to store in database

    - by John
    I'm developing a website (using Django and MySQL). I have a Tests table and a User table. There are 50 tests in the table, and each user completes them at their own pace. How do I store the status of the tests in my DB? One idea that came to my mind is to create an additional column in the User table containing the completed test IDs separated by commas or some other delimiter:

        userid | username | testscompleted
        1      | john     | 1, 5, 34
        2      | tom      | 1, 10, 23, 25

    Another idea was to create a separate table to store userid and testid. I would have only two columns, but thousands of rows (number of tests * number of users), and they will always continue to increase:

        userid | testid
        1      | 1
        1      | 5
        2      | 1
        1      | 34
        2      | 10
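
    For what it's worth, the second idea is the standard join-table design, and it maps directly onto Django's ORM. A rough sketch with invented model and field names (the question does not show its actual models):

        from django.db import models
        from django.contrib.auth.models import User

        class Test(models.Model):
            name = models.CharField(max_length=100)

        class CompletedTest(models.Model):
            # One row per (user, test) pair, i.e. the "separate table" approach.
            user = models.ForeignKey(User, on_delete=models.CASCADE)
            test = models.ForeignKey(Test, on_delete=models.CASCADE)
            completed_at = models.DateTimeField(auto_now_add=True)

            class Meta:
                unique_together = ("user", "test")

    A comma-separated column, by contrast, cannot be indexed, joined, or constrained per value by the database, which is why the row count of a join table is usually the lesser concern.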


  • JUnit test failing - complaining of missing data that was just inserted

    - by Collin Peters
    I have an extremely odd problem in my JUnit tests that I just can't seem to nail down. I have a multi-module Java webapp project with a fairly standard structure (DAOs, service classes, etc.). Within this project I have a 'core' module which contains some abstracted setup code that inserts a test user along with the items a user needs (in this case an 'enterprise': a user must belong to an enterprise, and this is enforced at the database level). Fairly simple so far... but here is where the strangeness begins. Some tests fail to run and throw a database exception complaining that a user cannot be inserted because an enterprise does not exist, even though the enterprise was just created in the preceding line of code, and there were no errors on the insertion of the enterprise. Stranger yet, if the test class is run by itself, everything works fine. It only fails when the test is run as part of the whole project, and the exact same abstracted code was run by 10+ tests before the one that fails. I have been banging my head against a wall with this for days and haven't really made any progress. I'm not even sure what information to offer up to help diagnose this. Using JUnit 4.4, Spring 2.5.6, iBatis 2.3.0, PostgreSQL 8.3. Switching from org.apache.commons.dbcp.BasicDataSource to org.springframework.jdbc.datasource.DriverManagerDataSource changed the problem: with DriverManagerDataSource the tests work for the first time, but now all of a sudden a lot of data isn't rolled back in the database; it leaves everything behind, all with no errors. The tests fail when run via both Eclipse and Maven. Please ask for any info which may help me solve my problem!


  • Unit testing custom controls in Silverlight

    - by Hrvoje
    I have several custom controls (some of them frames for content and layout management, like a wrap panel), and I would like to write unit tests for them. It's hard to find good examples apart from the Silverlight Control Toolkit, which has some helper classes for unit tests and is quite complicated. For MVVM classes it's easy to write tests, because they don't use the Silverlight dependency system and infrastructure. My questions: How do I unit test a DependencyProperty, and what do I need to test? How do I test an attached property? Do I test bindings with a theme or a UserControl, like a simple TextBlock content binding, or command/event binding in MVVM with a UserControl? What else should I test in my custom controls, besides my business logic? Any good tutorial for achieving tests like those in the Control Toolkit? How do I start? Is the Silverlight Control Toolkit the only option for learning? For the testing framework I'm using the one from the Control Toolkit, and for continuous integration on the TFS build server I planned to use StatLight (from CodePlex). Any advice on that?


  • MSBuild CreateItem: conditional include based on config file

    - by Mac
    I'm trying to select a list of test DLLs that have corresponding config files:

        MyTest.Tests.dll
        MyTest.Tests.config

    I have to use CreateItem because the DLLs are not available at the time the script is loaded:

        <CreateItem Include="$(AssemblyFolder)\*.Tests.dll" Condition="???">
          <Output TaskParameter="Include" ItemName="TestBinariesWithConfig"/>
        </CreateItem>

    Is there a condition I can use, or is this the wrong approach? Thanks, Mac


  • Caching result of setUp() using Python unittest

    - by dbr
    I currently have a unittest.TestCase that looks like:

        class test_appletrailer(unittest.TestCase):
            def setUp(self):
                self.all_trailers = Trailers(res = "720", verbose = True)

            def test_has_trailers(self):
                self.failUnless(len(self.all_trailers) > 1)

            # ..more tests..

    This works fine, but the Trailers() call takes about 2 seconds to run. Given that setUp() is called before each test is run, the tests now take almost 10 seconds to run (with only 3 test functions). What is the correct way of caching the self.all_trailers variable between tests? Removing the setUp function and doing:

        class test_appletrailer(unittest.TestCase):
            all_trailers = Trailers(res = "720", verbose = True)

    works, but then it claims "Ran 3 tests in 0.000s", which is incorrect. The only other way I could think of is to have a cache_trailers global variable (which works correctly, but is rather horrible):

        cache_trailers = None

        class test_appletrailer(unittest.TestCase):
            def setUp(self):
                global cache_trailers
                if cache_trailers is None:
                    cache_trailers = self.all_trailers = all_trailers = Trailers(res = "720", verbose = True)
                else:
                    self.all_trailers = cache_trailers
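
    Assuming Python 2.7 or newer, unittest also offers a class-level hook, setUpClass, which runs once per test class; that caches the expensive call without a module-level global. A minimal sketch (the import of Trailers is invented, since the question does not show where that class lives):

        import unittest
        from trailers import Trailers   # wherever the question's class actually lives

        class test_appletrailer(unittest.TestCase):
            @classmethod
            def setUpClass(cls):
                # Runs once for the whole class, not before every test.
                cls.all_trailers = Trailers(res="720", verbose=True)

            def test_has_trailers(self):
                self.assertTrue(len(self.all_trailers) > 1)

        if __name__ == "__main__":
            unittest.main()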


  • Recommendations for Continuous integration for Mercurial/Kiln + MSBuild + MSTest

    - by TDD
    We have our source code stored in Kiln/Mercurial repositories; we use MSBuild to build our product, and we have unit tests that use MSTest (Visual Studio unit tests). What solutions exist for setting up a continuous integration (i.e. build) machine? The requirements are:

    - A build should be kicked off when necessary (i.e. code has changed in the repositories we care about).
    - Before the actual build, the latest version of the source code must be pulled from the repository we are building from.
    - The build must build the entire product.
    - The build must build all unit tests.
    - The build must execute all unit tests.
    - A summary of success/failure must be sent out after the build has finished; this must include information about the build itself, and also about which unit tests failed and which ones succeeded.
    - The summary must list which changesets were in this build that were not yet in the previous successful (!) build.
    - The system must be configurable so that it can build from multiple branches/repositories.

    Ideally, this system would run on a single box (our product isn't that big) without any server components. What solutions are currently available? What are their pros and cons? From the list above, what can be done and what cannot? Thanks


  • Convincing why testing is good

    - by FireAphis
    Hello. In my team of real-time embedded C/C++ developers, most people don't have any culture of testing their code beyond casual manual sanity checks. I personally strongly believe in the advantages of autonomous, automatic tests, but when I try to convince people I get recurring arguments like:

    - We will spend more time writing the tests than writing the code.
    - It takes a lot of effort to maintain the tests.
    - Our code is spaghetti; there is no way we can unit-test it.
    - Our requirements are not sealed; we'll have to rewrite all the tests every time the requirements change.

    Now, I'd gladly hear any convincing tips and advice, but what I am really looking for are references to research, articles, books or serious surveys that show (preferably in numbers) how testing is worth the effort. Something like "We at IBM/Microsoft/Google, surveying 3475 active projects, found that putting 50% more development time into testing decreased the time spent on fixing bugs by 75%", or "after half a year, the time needed to write code with tests was only marginally longer than what it used to take without tests". Any ideas? P.S.: I'm adding the C++ tag too, in case someone has specific experience with convincing this, usually elitist, type of developer :-)


  • How to do concurrent modification testing for a Grails application

    - by werner5471
    I'd like to run tests that simulate users modifying certain data at the same time in a Grails application. Are there any plug-ins, tools or mechanisms I can use to do this efficiently? They don't have to be Grails-specific. It should be possible to fire multiple actions in parallel. I'd prefer to run the tests at the functional level (so far I'm using Selenium for other tests) to see the results from the user's perspective. Of course this can be done in addition to integration testing, if you'd recommend running concurrent modification tests at the integration level as well.
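
    Purely as an illustration of the "fire multiple actions in parallel" requirement (and not Grails-specific), here is a small Python sketch that submits the same update from several threads at once and reports each response. The URL and form fields are placeholders for whatever action the application actually exposes.

        import concurrent.futures
        import urllib.parse
        import urllib.request

        URL = "http://localhost:8080/app/item/update"          # placeholder
        PAYLOAD = urllib.parse.urlencode({"id": "42", "name": "edited"}).encode()

        def submit_update(i):
            # POST the same modification; the interesting part is what the
            # application does when several of these arrive together.
            req = urllib.request.Request(URL, data=PAYLOAD)
            with urllib.request.urlopen(req, timeout=10) as resp:
                return i, resp.status

        with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
            for i, status in pool.map(submit_update, range(5)):
                print("request %d -> HTTP %d" % (i, status))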


  • Is there a way to undo Mocha stubbing of any_instance?

    - by Steve Weet
    Within my controller specs I am stubbing out valid? for some routing tests (based on Ryan Bates' nifty_scaffold), as follows:

        it "create action should render new template when model is invalid" do
          Company.any_instance.stubs(:valid?).returns(false)
          post :create
          response.should render_template(:new)
        end

    This is fine when I test the controllers in isolation. I also have the following in my model spec:

        it "is valid with valid attributes" do
          @company.should be_valid
        end

    Again, this works fine when tested in isolation. The problem comes if I run the specs for both models and controllers: the model test always fails, because the valid? method has been stubbed out. Is there a way for me to remove the stubbing of any_instance when the controller test is torn down? I have got around the problem by running the tests in reverse alphabetical order, so the model tests run before the controllers, but I really don't like my tests being sequence-dependent.


  • Python doctests / Sphinx: style guide, how to use them and keep the code readable?

    - by Sébastien Piquemal
    Hi! I love doctests; it is the only testing framework I use, because it is so quick to write, and because, used with Sphinx, it makes great documentation with almost no effort. However, very often I end up doing things like this:

        """
        Description
        ===========

        bla bla bla ...

        >>> test 1
        bla bla bla + tests

        ... 200 more lines of tests = poor readability of the actual code
        """

    What I mean is that I put all my tests, with documentation and explanations, at the top of the module, so you have to scroll a long way to find the actual code, and this is quite ugly (in my opinion). However, I think the doctests should still stay in the module, because you should be able to read them while reading the source code. So here comes my question: Sphinx/doctest lovers, how do you organize your doctests so that the readability of the code doesn't suffer?
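
    One common compromise, sketched below with invented file names: keep a couple of short examples in each function's docstring, and move the long tutorial-style sessions into a separate text file that doctest can still execute and that Sphinx can pull into the rendered documentation.

        def add(a, b):
            """Return the sum of a and b.

            >>> add(1, 2)
            3
            """
            return a + b

        if __name__ == "__main__":
            import doctest
            doctest.testmod()                    # short examples stay next to the code
            doctest.testfile("docs/usage.txt")   # long narrative sessions live elsewhere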


  • NUnit - Multiple properties of the same name? Linking to requirements

    - by Ryan Ternier
    I'm linking all of our system tests to test cases and to our requirements. Every requirement has an ID. Every test case / system test exercises a variety of requirements, and every module of code links to multiple requirements. I'm trying to find the best way to link every system test to its driving requirements. I was hoping to do something like:

        [NUnit.Framework.Property("Release", "6.0.0")]
        [NUnit.Framework.Property("Requirement", "FR50082")]
        [NUnit.Framework.Property("Requirement", "FR50084")]
        [NUnit.Framework.Property("Requirement", "FR50085")]
        [TestCase(....)]
        public void TestSomething(string a, string b...)

    However, that breaks because Property is a key-value pair: the system will not allow me to have multiple properties with the same key. The reason I want this is to be able to test specific requirements in our system when a module that touches those requirements changes. Rather than run over 1,000 system tests on every build, this would allow us to target what to test based on the changes made to our code. Some system tests take upwards of 5 minutes to run (enterprise healthcare system), so "just run all of them" isn't a viable solution; we do that, but only before promoting through our environments. Thoughts?


  • Given a short (2-week) sprint, is it ever acceptable to forgo TDD to "get things done"?

    - by Ben Aston
    Given a short sprint, is it ever acceptable to forgo TDD to "get things done" within the sprint? For example, a given piece of work might need, say, a third of the sprint to design the object model around an existing implementation. Under this scenario you might well end up with implemented code halfway through the sprint without any tests (implementing unit tests during this "design" stage would add significant effort, and the tests would likely be thrown away several times until the final design is settled upon). You might then spend a day or two in the second week adding unit and integration tests after the fact. Is this acceptable?


  • Is it possible to use the ScalaTest BDD syntax in a JUnit environment?

    - by ebruchez
    I would like to describe tests in BDD style, e.g. with FlatSpec, but keep JUnit as the test runner. The ScalaTest quick start does not seem to show an example of this: http://www.scalatest.org/getting_started_with_junit_4 I first tried naively to write tests within @Test methods, but that doesn't work and the assertion is never executed:

        @Test def foobarBDDStyle {
          "The first name control" must "be valid" in {
            assert(isValid("name·1"))
          }
          // etc.
        }

    Is there any way to achieve this? It would be even better if regular tests could be mixed and matched with BDD-style tests.


  • Is it possible to use Maven only for running the Selenium plugin?

    - by tputkonen
    Our pom.xml currently contains both the build settings and the execution of Selenium via the selenium-maven-plugin. I would like to split it into two POM files, one for the build and unit tests and a second one for executing the Selenium tests. (This way I could first build the project in Hudson, and after a successful build execute the Selenium tests as a separate project.) Is it possible to configure Maven to execute only the selenium-maven-plugin?


  • Android Test testPreconditions

    - by user1184113
    In the Android developer documentation I've seen that the testPreconditions() method is supposed to run before all the other tests. But in my app's tests it acts like a normal test; it does not run before all the others. Is there something wrong? Here is the description of testPreconditions() from the Android developer docs: "A preconditions test checks the initial application conditions prior to executing other tests. It's similar to setUp(), but with less overhead, since it only runs once."

