Search Results

Search found 6281 results on 252 pages for 'automated tests'.

Page 9/252 | < Previous Page | 5 6 7 8 9 10 11 12 13 14 15 16  | Next Page >

  • Ethernet run tests green but won't connect

    - by Simon Gillbee
    I have a single Ethernet run at home that I just added. I have a cable tester that tests for pin/pair crossover or miswired pins. The entire line tests green (all 4 LEDs light up green on the tester), but I can't get any PC to connect through the link: no link light on the Ethernet connection. Any simple tests/fixes, or do I rip out the wall sockets and do it again?

    Read the article

  • Weirdness with cabal, HTF, and HUnit assertions

    - by rampion
    So I'm trying to use HTF to run some HUnit-style assertions:

        % cat tests/TestDemo.hs
        {-# OPTIONS_GHC -Wall -F -pgmF htfpp #-}
        module Main where

        import Test.Framework
        import Test.HUnit.Base ((@?=))
        import System.Environment (getArgs)

        -- just run some tests
        main :: IO ()
        main = getArgs >>= flip runTestWithArgs Main.allHTFTests

        -- all these tests should fail
        test_fail_int1 :: Assertion
        test_fail_int1 = (0::Int) @?= (1::Int)

        test_fail_bool1 :: Assertion
        test_fail_bool1 = True @?= False

        test_fail_string1 :: Assertion
        test_fail_string1 = "0" @?= "1"

        test_fail_int2 :: Assertion
        test_fail_int2 = [0::Int] @?= [1::Int]

        test_fail_string2 :: Assertion
        test_fail_string2 = "true" @?= "false"

        test_fail_bool2 :: Assertion
        test_fail_bool2 = [True] @?= [False]

    And when I use ghc --make, it seems to work correctly:

        % ghc --make tests/TestDemo.hs
        [1 of 1] Compiling Main ( tests/TestDemo.hs, tests/TestDemo.o )
        Linking tests/TestDemo ...
        % tests/TestDemo
        ...
        * Tests:    6
        * Passed:   0
        * Failures: 6
        * Errors:   0

        Failures:
          * Main:fail_int1 (tests/TestDemo.hs:9)
          * Main:fail_bool1 (tests/TestDemo.hs:12)
          * Main:fail_string1 (tests/TestDemo.hs:15)
          * Main:fail_int2 (tests/TestDemo.hs:19)
          * Main:fail_string2 (tests/TestDemo.hs:22)
          * Main:fail_bool2 (tests/TestDemo.hs:25)

    But when I use cabal to build it, not all of the tests that should fail do fail:

        % cat Demo.cabal
        ...
        executable test-demo
          build-depends:  base >= 4, HUnit, HTF
          main-is:        TestDemo.hs
          hs-source-dirs: tests
        % cabal configure
        Resolving dependencies...
        Configuring Demo-0.0.0...
        % cabal build
        Preprocessing executables for Demo-0.0.0...
        Building Demo-0.0.0...
        [1 of 1] Compiling Main ( tests/TestDemo.hs, dist/build/test-demo/test-demo-tmp/Main.o )
        Linking dist/build/test-demo/test-demo ...
        % dist/build/test-demo/test-demo
        ...
        * Tests:    6
        * Passed:   3
        * Failures: 3
        * Errors:   0

        Failures:
          * Main:fail_int2 (tests/TestDemo.hs:23)
          * Main:fail_string2 (tests/TestDemo.hs:26)
          * Main:fail_bool2 (tests/TestDemo.hs:29)

    What's going wrong and how can I fix it?

    Read the article

  • ASP.NET MVC 2 RTM Unit Tests not compiling

    - by nmarun
    I found something weird this time when it came to the ASP.NET MVC 2 release. Only a handful of people 'made noise' about the release, at least on the asp.net blog site; usually there's a big 'WOOHAA… <something> is released' kind of a thing. Hmm… but here's the reason I'm writing this post. I'm not sure how many of you read the release notes before downloading the version.. I did, I did, I did. Now there's a 'Known Issues' section in the document, and I'm quoting the text as-is from this section:

        Unit test project does not contain reference to ASP.NET MVC 2 project: If the Solution Explorer window is hidden in Visual Studio, when you create a new ASP.NET MVC 2 Web application project and you select the option Yes, create a unit test project in the Create Unit Test Project dialog box, the unit test project is created but does not have a reference to the associated ASP.NET MVC 2 project. When you build the solution, Visual Studio will display compilation errors and the unit tests will not run. There are two workarounds. The first workaround is to make sure that the Solution Explorer is displayed when you create a new ASP.NET MVC 2 Web application project. If you prefer to keep Solution Explorer hidden, the second workaround is to manually add a project reference from the unit test project to the ASP.NET MVC 2 project.

    This definitely looks like a bug to me; see below for a visual. At the top right corner you'll see that Solution Explorer is set to auto-hide, and the TestMvc2 unit test project has no reference to the MVC project, which is the reason we get compilation errors without writing a single line of code. So thanks to <VeryBigFont>ME</VeryBigFont> and <VerySmallFont>Microsoft</VerySmallFont>, we've shown the world how to resolve a major issue and to live in Peace with the rest of humanity!

    Read the article

  • TFS 2010 RC does not run Visual Studio 2008 MSTest unit tests

    - by Bernard Vander Beken
    Steps: Run the build, including unit tests.

    Expected result: the unit tests are executed and succeed.

    Actual result: the unit tests are built by the build, but this is the result:

        1 test run(s) completed - 0% average pass rate (0% total pass rate)
        0/4 test(s) passed, 0 failed, 4 inconclusive, View Test Results
        Other Errors and Warnings
        1 error(s), 0 warning(s)
        TF270015: 'MSTest.exe' returned an unexpected exit code. Expected '0'; actual '1'.

    All the tests are enumerated (four), but the result for each test is "Not Executed".

    Context:

      - Team Foundation Server 2010 release candidate
      - A build definition that runs projects using the Visual Studio 2008 project format and .NET 3.5 SP1
      - The unit tests run on a development machine, within Visual Studio
      - The unit tests project references C:\Program Files (x86)\Microsoft Visual Studio 9.0\Common7\IDE\PublicAssemblies\Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll

    A typical test class:

        [TestClass]
        public class DemoTest
        {
            [TestMethod]
            public void DemoTestName()
            {
            }

            // etc
        }

    Read the article

  • New NCover 3.4.2 makes all my MSTest unit tests fail

    - by Steven
    Yesterday, I decided to install the newest NCover version (3.4.2). However, when I ran it with my existing .ncover configuration file, the NCover output suddenly reported that all my MSTest tests failed. Of course those tests succeed when run within Visual Studio. Because of this, NCover isn't able to determine any coverage. Somehow the old configuration doesn't seem to work with the new version. Does anyone have any idea what the problem could be or how to solve it? Btw, here is my ncover configuration:

        Project settings:

        Path to application to profile:
            c:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe

        Arguments for the application to profile:
            /testcontainer:D:\dev\MyApp\MyApp.Services.Tests.Unit\bin\Debug\MyApp.Services.Tests.Unit.dll
            /testcontainer:D:\dev\MyApp\MyApp.WS.Tests.Unit\bin\Debug\MyApp.WS.Tests.Unit.dll

        Working folder:
            D:\dev\MyApp

    Read the article

  • Prevent OCUnit tests from running when compilation fails

    - by mhenry1384
    I'm using Xcode 3.2.2 and the built-in OCUnit test stuff. One problem I'm running into is that every time I do a build, my unit tests are run, even if the build failed. Let's say I make a syntax error in one of my tests: the test fails to compile, and the last successfully compiled version of the unit tests is run. The same thing happens if one of the dependent targets fails to build; the tests are still run. Which is obviously not what I want. How can I prevent the tests from running if the build fails? If this is not possible, then I'd rather have the tests never run automatically; is that possible? Sorry if this is obvious, I'm an Xcode noob. Should I be using a better unit testing framework?

    Read the article

  • Intermittent NoClassDefFoundError error running Selenium JUnit tests

    - by Matt Sheppard
    For some time, I've been running a substantial set of JUnit / Selenium tests against a number of platforms on a nightly basis. Intermittently (about once in every 40 runs), all the tests for a given platform fail with a NoClassDefFoundError on the common superclass of all my tests, as follows:

        java.lang.NoClassDefFoundError: [common super class of all my selenium tests]
            at java.lang.Class.getDeclaredConstructors0(Native Method)
            at java.lang.Class.privateGetDeclaredConstructors(Class.java:2389)
            at java.lang.Class.getConstructors(Class.java:1459)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
            at java.lang.reflect.Constructor.newInstance(Constructor.java:513)

    Re-invoking the tests will generally get the tests running normally, so it's clearly something dependent on some condition I am not considering. What might be causing this error to occur seemingly randomly?

    Read the article

  • Tests that are 2-3 times bigger than the testable code

    - by HeavyWave
    Is it normal to have tests that are way bigger than the actual code being tested? For every line of code I am testing, I usually have 2-3 lines in the unit test, which ultimately leads to tons of time being spent just typing the tests in (mock, mock, and mock some more). Where are the time savings? Do you ever avoid tests for code that is more or less trivial? Most of my methods are less than 10 lines long, and testing each one of them takes a lot of time, to the point where, as you see, I start questioning writing most of the tests in the first place. I am not advocating against unit testing; I like it. I just want to see what factors people consider before writing tests. They come at a cost (in terms of time, hence money), so this cost must be evaluated somehow. How do you estimate the savings created by your unit tests, if ever?

    Read the article

  • Is there any way to delete an HttpOnly cookie from C# Selenium tests?

    - by BenA
    I have a set of C# Selenium tests that need to delete a cookie that has the HttpOnly flag set. Unfortunately the DefaultSelenium.GetCookie() and DefaultSelenium.DeleteCookie() commands aren't able to access the cookie, because it has that HttpOnly flag set. I've confirmed this by removing the flag by hand, and checking that subsequent calls to either of those methods are then happily able to manipulate the cookie in question. Is there any other way to do this via the Selenium .NET client driver? All ideas welcome!
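
    Since HttpOnly exists precisely to hide a cookie from script, JavaScript-injection tricks (e.g. via GetEval) won't see it either. One workaround is to have the web application expose a test-support page that re-issues the cookie with a past expiry date, and to navigate the browser there from the test. A minimal sketch against the Selenium RC .NET client; the path below is invented and would have to exist in the application under test:

        using Selenium;

        public static class HttpOnlyCookieHelper
        {
            // Hypothetical test-support page: it must answer with a Set-Cookie
            // header that re-issues the HttpOnly cookie with an expiry date in
            // the past, which makes the browser discard it.
            private const string ClearCookiePath = "/test-support/clear-session-cookie";

            public static void ClearHttpOnlySessionCookie(ISelenium selenium)
            {
                // Navigating the browser (rather than requesting the URL from C#)
                // matters: the expiring Set-Cookie must land in the browser's own
                // cookie jar, not in a separate HttpWebRequest session.
                selenium.Open(ClearCookiePath);
            }
        }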

    Read the article

  • Tests for JUnit. How?

    - by Belun
    How is the JUnit framework itself tested? How are the tests for its framework code created, considering that JUnit is a testing framework itself? What technology are they using? Their own testing framework? A smaller, more basic version of it? Another framework? Can anyone in the know please provide some details?

    Read the article

  • How should one import large amounts of data for FIT/Fitnesse tests?

    - by Lachlan
    We have a scheduling engine with large amounts of test data to test all the scenarios, so test automation is critical. We're currently hoping to use FIT/Fitnesse. However, a single test has quite a large table of test data, so it doesn't fit very well into the mould of "two or three inputs, one or more outputs" that Fitnesse uses in its examples. Hopefully the other functionality of Fitnesse makes it worth using. I hear that there is a way to initialize an application for a FIT test with an Excel spreadsheet (not the Spreadsheet to Fitnesse function, mind you), but I haven't been able to find it so far. Once the whole spreadsheet is loaded into the application, and the application does its thing, we plan to compare either a number of output rows, or perhaps just the last row, to see if the test passes. The application currently pulls test data from a database for manual tests, but writing to a database and then initializing from it is not preferred because of the performance impact. The application is written in C#.
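
    Whichever FIT mechanism ends up loading the spreadsheet, the comparison step described above (load all rows, run the engine once, check only the final output row) is small enough to sketch in plain C#. Everything here is invented for illustration; in particular, ScheduleEngine is a stub standing in for the real scheduling engine:

        using System.Collections.Generic;
        using System.IO;
        using System.Linq;

        // Stand-in for the real scheduling engine; invented for this sketch.
        public class ScheduleEngine
        {
            private readonly List<string[]> inputs = new List<string[]>();
            public void AddInput(string[] row) { inputs.Add(row); }
            public void Run() { /* the real engine would compute the schedule here */ }
            public string[] LastOutputRow() { return inputs.Last(); }
        }

        public static class SpreadsheetDrivenTest
        {
            // Loads every data row of a CSV export of the test spreadsheet,
            // runs the engine once, and compares only the final output row.
            public static bool RunScenario(string csvPath, string[] expectedLastRow)
            {
                var engine = new ScheduleEngine();
                foreach (var line in File.ReadAllLines(csvPath).Skip(1)) // skip header
                {
                    engine.AddInput(line.Split(','));
                }
                engine.Run();
                return engine.LastOutputRow().SequenceEqual(expectedLastRow);
            }
        }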

    Read the article

  • Writing good tests for Django applications

    - by Ludwik Trammer
    I've never written any tests in my life, but I'd like to start writing tests for my Django projects. I've read some articles about tests and decided to try to write some tests for an extremely simple Django app, for a start. The app has two views (a list view and a detail view) and a model with four fields:

        class News(models.Model):
            title = models.CharField(max_length=250)
            content = models.TextField()
            pub_date = models.DateTimeField(default=datetime.datetime.now)
            slug = models.SlugField(unique=True)

    I would like to show you my tests.py file and ask: Does it make sense? Am I even testing for the right things? Are there best practices I'm not following that you could point me to? My tests.py (it contains 11 tests):

        # -*- coding: utf-8 -*-
        from django.test import TestCase
        from django.test.client import Client
        from django.core.urlresolvers import reverse
        import datetime
        from someproject.myapp.models import News

        class viewTest(TestCase):
            def setUp(self):
                self.test_title = u'Test title: bareksc'
                self.test_content = u'This is a content 156'
                self.test_slug = u'test-title-bareksc'
                self.test_pub_date = datetime.datetime.today()
                self.test_item = News.objects.create(
                    title=self.test_title,
                    content=self.test_content,
                    slug=self.test_slug,
                    pub_date=self.test_pub_date,
                )
                client = Client()
                self.response_detail = client.get(self.test_item.get_absolute_url())
                self.response_index = client.get(reverse('the-list-view'))

            def test_detail_status_code(self):
                """HTTP status code for the detail view"""
                self.failUnlessEqual(self.response_detail.status_code, 200)

            def test_list_status_code(self):
                """HTTP status code for the list view"""
                self.failUnlessEqual(self.response_index.status_code, 200)

            def test_list_numer_of_items(self):
                self.failUnlessEqual(len(self.response_index.context['object_list']), 1)

            def test_detail_title(self):
                self.failUnlessEqual(self.response_detail.context['object'].title, self.test_title)

            def test_list_title(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].title, self.test_title)

            def test_detail_content(self):
                self.failUnlessEqual(self.response_detail.context['object'].content, self.test_content)

            def test_list_content(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].content, self.test_content)

            def test_detail_slug(self):
                self.failUnlessEqual(self.response_detail.context['object'].slug, self.test_slug)

            def test_list_slug(self):
                self.failUnlessEqual(self.response_index.context['object_list'][0].slug, self.test_slug)

            def test_detail_template(self):
                self.assertContains(self.response_detail, self.test_title)
                self.assertContains(self.response_detail, self.test_content)

            def test_list_template(self):
                self.assertContains(self.response_index, self.test_title)

    Read the article

  • PHPUnit won't run tests by directory

    - by Frank Schwieterman
    I'm new to PHP, trying to get multiple tests to run at once. I was hoping to just run all tests in a directory, which seemed worthwhile (instead of using a phpunit.xml). I am able to run a test individually like so:

        phpunit FirstUnitTest sites\all\modules\experiment\unit-tests\FirstUnitTest.lua

    But when I try to run the same test by directory, it's not found. I try using:

        phpunit sites\all\modules\experiment\unit-tests

    Does anyone know why this may not work?

    Read the article

  • Are unit tests also used to find bugs?

    - by Draco
    I was reading the following article, and the author made it quite clear that unit tests are NOT used to find bugs. I would like to know what your thoughts are on this. I do know that unit tests make the design of your application much more robust, but isn't it the finding of bugs through unit tests that makes the application robust, besides their other advantages? http://blog.stevensanderson.com/2009/08/24/writing-great-unit-tests-best-and-worst-practises/

    Read the article

  • Unit tests and fixtures

    - by Wizzard
    We have a bunch of unit tests which test a lot of webpages and REST API services. Currently when our tests run, they pull from these pages live, but this can sometimes take ages to run, and it also feels like the tests should be testing more of our code, not just relying on the pages being up and responding (if that makes sense). Is it better practice to save a valid API response and load it in during setup with the unit tests? Thoughts?
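
    For what it's worth, the record-and-replay approach hinted at in the question is a common pattern: capture a known-good response once, check it in next to the tests, and load it from disk in setup, so the tests exercise your own parsing code rather than the remote service. A minimal NUnit-style sketch; the file name and the <user> element are invented:

        using System.IO;
        using System.Text.RegularExpressions;
        using NUnit.Framework;

        [TestFixture]
        public class RestClientTests
        {
            private string cannedResponse;

            [SetUp]
            public void LoadCannedResponse()
            {
                // A response captured once from the live service and kept in
                // source control next to the tests.
                cannedResponse = File.ReadAllText(@"fixtures\user-list-response.xml");
            }

            [Test]
            public void Response_ContainsExpectedNumberOfUsers()
            {
                // Exercises our own parsing logic against the canned payload
                // instead of hitting the live endpoint on every run.
                int userCount = Regex.Matches(cannedResponse, "<user>").Count;
                Assert.AreEqual(3, userCount);
            }
        }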

    Read the article

  • How can you write tests for Selenium (or similar) which don't fail because of minor or cosmetic changes?

    - by Sam
    I've been spending the last week or so learning Selenium and building a series of web tests for a website we're about to launch. It's been great to learn, and I've picked up some XPath and CSS location techniques. The problem for me, though, is seeing little changes break the tests: any change to a div, an id, or some auto-generated id that helps identify widgets breaks any number of tests. It just seems to be very brittle. So, have you written Selenium (or similar) tests, and how do you deal with the brittle nature of the tests (or how do you stop them being brittle)? And what sort of tests do you use Selenium for?
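
    The usual mitigation is twofold: prefer stable ids over positional XPath (so a locator like //div[3]/span[2]/input never appears in a test), and funnel every test through a Page Object, so that when the markup does change, one locator definition is updated rather than dozens of tests. A minimal sketch against the Selenium RC .NET client; the page and the element ids are invented:

        using Selenium;

        // Page Object: tests call LogInAs(...) and never mention locators,
        // so a cosmetic change to the page touches only this class.
        public class LoginPage
        {
            private readonly ISelenium selenium;

            public LoginPage(ISelenium selenium)
            {
                this.selenium = selenium;
            }

            public void LogInAs(string user, string password)
            {
                selenium.Type("id=username", user);
                selenium.Type("id=password", password);
                selenium.Click("id=login-button");
                selenium.WaitForPageToLoad("30000");
            }
        }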

    Read the article

  • What is the effect of creating unit tests during development on time to develop as well as time spent in maintenance activities?

    - by jgauffin
    I'm a consultant, and I am going to introduce unit tests to all developers at my client site. My goal is to ensure that all new applications have unit tests for all classes created. The client has a problem with high maintenance costs from fixing bugs in their existing applications. Their applications have a life span of between 5 and 15 years, during which they continuously add new features. I'm quite confident that they will benefit greatly from starting with unit tests. I'm interested in the effect of unit tests on the time and cost of development:

      - How much time will writing unit tests as part of the development process add?
      - How much time will be saved in maintenance activities (testing and debugging) by having good unit tests?

    Read the article

  • Unit testing code paths

    - by Michael
    When unit testing using expectations, you define a set of method calls and the corresponding results for those calls. These define the path through the method that you want to test. I have read that unit tests should not duplicate the code. But when you define these expectations, aren't you duplicating the code, or at least the process? How do you know when you're duplicating the functionality under test?
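
    One way to see where the line falls: an expectation that pins down a collaborator's contract is not duplication, while an expectation that mirrors each internal call of the method in sequence usually is. In the sketch below (Moq-style, all names invented), the test states only "given region DE, the provider answers 0.19" and asserts on the result, so it survives any refactoring that keeps that contract:

        using Moq;
        using NUnit.Framework;

        public interface ITaxRateProvider
        {
            decimal RateFor(string region);
        }

        public class InvoiceCalculator
        {
            private readonly ITaxRateProvider rates;

            public InvoiceCalculator(ITaxRateProvider rates)
            {
                this.rates = rates;
            }

            public decimal Total(decimal net, string region)
            {
                return net * (1 + rates.RateFor(region));
            }
        }

        [TestFixture]
        public class InvoiceCalculatorTests
        {
            [Test]
            public void Total_AppliesRegionalTaxRate()
            {
                // The expectation fixes the collaborator's contract, not the
                // internals of Total(), so this is behaviour, not duplication.
                var rates = new Mock<ITaxRateProvider>();
                rates.Setup(r => r.RateFor("DE")).Returns(0.19m);

                var calculator = new InvoiceCalculator(rates.Object);

                Assert.AreEqual(119m, calculator.Total(100m, "DE"));
            }
        }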

    Read the article

  • What are some of the best automated web QA tools out there?

    - by Kant
    Issue: We have a series of websites built in ASP.NET that we deploy frequently. Due to the lack of a QA team, we are unable to test the functionality and load of every web page within the site.

    Question: What are some of the top tools for doing QA testing? The tool should include some basic functionality, such as:

      - Notifying parties when unexpected results occur.
      - Configurable expected results (i.e. hit web page A, and if the response doesn't contain the string "My intranet portal", notify the appropriate parties).

    Any help is appreciated.
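
    The "hit web page A, check for a string, notify the appropriate parties" requirement is small enough that, with or without a dedicated tool, it can be sketched directly in C#. The URL, addresses, and SMTP host below are placeholders:

        using System.Net;
        using System.Net.Mail;

        public static class PageCheck
        {
            public static void Run()
            {
                string url = "http://intranet.example.com/portal";
                string expected = "My intranet portal";

                // Fetch the page as a plain HTTP client would.
                string html;
                using (var client = new WebClient())
                {
                    html = client.DownloadString(url);
                }

                if (!html.Contains(expected))
                {
                    // Notify the appropriate parties when the expected result
                    // is missing from the response.
                    var mail = new MailMessage("qa@example.com", "team@example.com",
                        "Page check failed: " + url,
                        "Expected text not found: \"" + expected + "\"");
                    new SmtpClient("smtp.example.com").Send(mail);
                }
            }
        }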

    Read the article

  • Automated browser testing: How to test JavaScript in web pages?

    - by Dave
    I am trying to write an application that will test a series of web pages programmatically. The web pages being tested have JavaScript embedded within them which alters the structure of the HTML when it completes execution. The goal is then to take the final HTML (post-execution of the embedded JavaScript) and compare it against a known output. Essentially, the input/output flow for the test application is:

        URL ---[retrieve HTML]--- HTML ---[execute JS, then compare]--- PASS/FAIL

    Here is the challenge: I have been unable to find a solution that can take the HTML I retrieve from the URL and process the JavaScript, as a browser would, to generate the final HTML a user might see from "View Source" on the same page within the browser. It would be very surprising if this sort of approach has not been attempted before, so I'm hoping someone out there knows of a fitting solution for this application/problem. If at all possible, I'm hoping for a solution that integrates with .NET (I've tried using the WebBrowser control, with no luck). However, if there is an existing 3rd-party application that can do exactly this, that would be quite acceptable. Thanks in advance for the suggestions! Dave
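
    For the record, the WebBrowser control can do this, but only on an STA thread that pumps Windows messages, which is the usual reason it appears not to work from a console program or test harness. A sketch of the pattern (requires a reference to System.Windows.Forms; note that ReadyState only covers scripts that run during page load, so scripts that fire later on timers would need an extra settling delay):

        using System;
        using System.Windows.Forms;

        public static class PageRenderer
        {
            // Returns the DOM as mutated by the page's JavaScript,
            // rather than the raw downloaded source.
            public static string GetRenderedHtml(string url)
            {
                using (var browser = new WebBrowser())
                {
                    browser.ScriptErrorsSuppressed = true;
                    browser.Navigate(url);

                    // Pump messages until navigation, and the scripts that run
                    // during it, have completed.
                    while (browser.ReadyState != WebBrowserReadyState.Complete)
                    {
                        Application.DoEvents();
                    }

                    return browser.Document.GetElementsByTagName("html")[0].OuterHtml;
                }
            }

            [STAThread]
            static void Main()
            {
                Console.WriteLine(GetRenderedHtml("http://example.com/"));
            }
        }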

    Read the article

  • Completely automated DVD insert-rip-compress-eject workflow

    - by Kevin L.
    (Partially inspired by this question.)

    Background: I have a PC hidden away behind an HD LCD in a custom-built entertainment center. The only visible part of the PC is an external DVD drive, mounted above the Wii. The PC happens to have Windows XP on it; Hackintoshing and Linux might be possible, but I've had issues with drivers for the sound card before. Let's just assume that OS X and Linux are a no-go unless they provide a truly awesome and simple solution for this particular problem.

    Goal: I would like to have a completely automated workflow for ripping DVDs, something like this:

      1. Push the eject button on the DVD drive, insert the DVD.
      2. PC recognizes that this is a video DVD (as opposed to data).
      3. PC rips the DVD to the hard drive.
      4. PC finishes ripping, and ejects the DVD tray.
      5. PC compresses the DVD image into some format that an Xbox 360 can read.
      6. PC copies the finished compressed video file to a particular folder, so that it can be read into a WMP11 library and seamlessly played by the Xbox 360.
      7. PC cleans up all temporary files.
      8. Done.

    The impetus for having this be completely automated is that I'll never need to switch the TV to the PC's input and fiddle with the wireless keyboard; that's just needless user intervention. The UI doesn't have to be pretty, nor do I care about speed, and I can probably bridge several of the gaps with some creative Perl use. But it seems likely that many (or all) of the parts should already exist. Any thoughts?

    Read the article

  • Good tools which generate NUnit unit tests for .NET assemblies in Visual Studio 2008

    - by andy
    Hey guys, I'm pretty new to unit testing, so bear with me. I realize that best practice is not to auto-generate unit tests; however, I'd like to use code generation to set up the basic skeleton of the tests. Now, I know Visual Studio 2008 already has the built-in "create tests" feature; however, it just creates a flat list of all the classes it's going to test, and it's not for NUnit, right? Ideally, I'd like the code generation to follow the folder AND namespace structure of the assembly it's generating tests for. Can you guys recommend any good tools which generate NUnit unit tests for .NET assemblies in Visual Studio 2008? Cheers!
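
    For reference, the per-class skeleton such a generator needs to emit is small; a hand-written example of the NUnit shape, with the test namespace mirroring the assembly's namespace as requested (all names invented):

        using NUnit.Framework;

        // Mirrors the namespace MyCompany.MyAssembly.Billing
        // of the assembly under test.
        namespace MyCompany.MyAssembly.Tests.Billing
        {
            [TestFixture]
            public class InvoiceServiceTests
            {
                [SetUp]
                public void SetUp()
                {
                    // construct the class under test here
                }

                [Test]
                public void CalculateTotal_NotYetWritten()
                {
                    Assert.Fail("Generated skeleton: write the real test.");
                }
            }
        }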

    Read the article

  • Python unit-testing with nose: Making sequential tests

    - by cool-RR
    I am just learning how to do unit-testing. I'm on Python / nose / Wing IDE. (The project that I'm writing tests for is a simulations framework, and among other things it lets you run simulations both synchronously and asynchronously, and the results of the simulation should be the same in both.) The thing is, I want some of my tests to use simulation results that were created in other tests. For example, synchronous_test calculates a certain simulation in synchronous mode, but then I want to calculate it in asynchronous mode, and check that the results came out the same. How do I structure this? Do I put them all in one test function, or make a separate asynchronous_test? Do I pass these objects from one test function to another? Also, keep in mind that all these tests will run through a test generator, so I can do the tests for each of the simulation packages included with my program.

    Read the article

  • Programming tests: are they relevant?

    - by BlackVoid
    Do online programming tests have any value (except for providing evidence to potential employers) in terms of evaluating your knowledge, or are they too broad or too narrow in general? For example, brainbench.com and similar websites. From my experience, I have never found myself scoring particularly high, although I have many years of commercial experience and am doing great at work. These tests mostly refer to things I have never worked with (WebForms or ADO.NET; who works with ADO.NET directly anyway?), yet these tests claim to be C# tests. If you were hiring a programmer, would you consider online tests as evidence of real skill?

    Read the article

  • Measuring code coverage for selenium tests that reside in separate project

    - by ilu
    I have two separate Java Maven projects: one is my web app itself, and the other one is the Tellurium + Selenium automation tests for my web app. (I moved these tests to a separate project as their code doesn't really belong with the web app project code and doesn't use the Java classes of my web app; also, I want to reuse some parts of those tests for testing my other web apps.) Therefore, the project where my tests reside doesn't know anything about my web app, except the Tellurium/Selenium conf files (host name, credentials, browser). So the question: is there any way to measure code coverage of my web app backend that is invoked by my Tellurium/Selenium tests that reside in a separate project? Thanks in advance. Any help is highly appreciated.

    Read the article
