Search Results

Search found 6839 results on 274 pages for 'functional tests'.

Page 19 of 274

  • Silverlight tests not working unless RDP connection open

    - by Duncan Bayne
    I have a few Silverlight UI tests that I'm automating with White. These tests are subsequently run by a TFS build agent, which is running interactively so it can access the desktop. The build passes if I have a Remote Desktop connection open to the build agent as the tests are run; I can see the mouse pointer moving around, and when the test clicks on a HyperlinkButton, navigation takes place and is subsequently verified by assertions within the test. The build fails if I do not have a Remote Desktop connection open to the build agent as the tests are run: the Internet Explorer window is created and the Silverlight app loads, but no clicks happen; the application remains on the initial page and the test assertions subsequently fail. Has anyone out there found a solution to this problem?

    Read the article

  • Running NUnit tests in Visual Studio 2010 with code coverage

    - by adrianbanks
    We have recently upgraded from Visual Studio 2008 to Visual Studio 2010. As part of our code base, we have a very large set of NUnit tests. We would like to be able to run these unit tests within Visual Studio, but with code coverage enabled. We have ReSharper, so we can run the tests within Visual Studio, but that does not allow the code coverage tool to do its thing and generate the coverage statistics. Is there any way to make this work, or will we have to convert the tests over to MSTest?

    Read the article

  • Visual Studio 2010 does not discover new unit tests

    - by driis
    I am writing some unit tests in Visual Studio 2010. I can run all tests by using "Run all Tests in Current Context". However, if I write a new unit test, it does not get picked up by the environment - in other words, I am not able to find it in Test List Editor, by running all tests, or anywhere else. If I unload the project and then reload it, the new test becomes available to run. When I add a unit test, I simply add a new method to an already existing TestClass and decorate it with the [TestMethod] attribute - nothing fancy. What might be causing this behaviour, and how do I make it work?

    Read the article

  • Is it possible to have mspec & NUnit tests in a single project

    - by Mike Scott
    I've got a unit test project using NUnit. When I add the mspec (machine.specifications) assembly to the references, both ReSharper and TestDriven.Net stop running the NUnit tests and only run the mspec tests. Is there a way or setting that allows both NUnit & mspec tests to co-exist and run in the same project using R# & TD.Net test runners?

    Read the article

  • Should tests be self written in TDD?

    - by martin
    We are running a project that we want to develop using test-driven development. While initiating the project I thought about several questions, one of which was: who should write the unit test for a feature? Should the unit test be written by the programmer implementing the feature? Or should it be written by another programmer, who defines what a method should do, while the feature-implementing programmer implements the method until the tests pass? If I understand the concept of TDD correctly, the feature-implementing programmer has to write the test himself, because TDD is a procedure of mini-iterations, so it would be too cumbersome to have the tests written by someone else. What would you say - should the tests in TDD be written by the programmer himself, or should another programmer write the tests that describe what a method can do?

    Read the article

  • Problem with SQLite related nUnit-tests after upgrade to VS2010 and Re#5

    - by stiank81
    After converting to Visual Studio 2010 with ReSharper 5, some of my unit tests started failing. More specifically, this applies to all unit tests that use NHibernate with SQLite; the unit tests that do not involve NHibernate and SQLite still run fine. The exception is as follows:

        NHibernate.HibernateException : Could not create the driver from NHibernate.Driver.SQLite20Driver, NHibernate, Version=2.1.2.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4.
        ----> System.Reflection.TargetInvocationException : Exception has been thrown by the target of an invocation.
        ----> NHibernate.HibernateException : The IDbCommand and IDbConnection implementation in the assembly System.Data.SQLite could not be found. Ensure that the assembly System.Data.SQLite is located in the application directory or in the Global Assembly Cache. If the assembly is in the GAC, use <qualifyAssembly/> element in the application configuration file to specify the full name of the assembly.
        TearDown : System.NullReferenceException : Object reference not set to an instance of an object.

    The NullReferenceException on TearDown comes from cleaning up NHibernate objects that weren't successfully created, but the underlying problem seems to be related to SQLite somehow. I run my unit tests through ReSharper, but I get the same exception when running them directly through the NUnit.exe application. However, running them through the x86 variant (NUnit-x86.exe), all tests run fine. Can it be related to some mixing of 64-bit and 32-bit DLLs? Everything still runs fine through VS2008 + ReSharper 4.5. Note that the target framework of my projects is still .NET 3.5. Anyone seen this problem before?
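
    One quick way to check the 32/64-bit hypothesis is to have a test report the bitness of the process hosting it - the mixed-mode System.Data.SQLite assembly is architecture-specific, so a 64-bit runner cannot load a 32-bit build of it. A minimal sketch (the fixture and test names are invented for illustration):

        using System;
        using NUnit.Framework;

        [TestFixture]
        public class BitnessDiagnostics   // hypothetical fixture, illustration only
        {
            [Test]
            public void ReportProcessBitness()
            {
                // IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit process,
                // so this shows which bitness the test runner is actually using.
                Console.WriteLine("Tests are running in a {0}-bit process", IntPtr.Size * 8);
            }
        }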

    Read the article

  • Visual Studio 2010 failed tests throw exceptions

    - by Dave Hanson
    In Visual Studio 2010 Ultimate RC I cannot figure out how to suppress the {"CollectionAssert.AreEqual failed. (Element at index 0 do not match.)"} exception dialog from Microsoft.VisualStudio.TestTools.UnitTesting.AssertFailedException. If I press Ctrl+Alt+E I get the Exceptions dialog; however, that exception doesn't seem to be in there to be suppressed. Does anyone else have any experience with this? I don't remember having to suppress these assert failures in Visual Studio 2008 when running unit tests. My tests would fail and I could just click on the Test Results window to see which tests failed, instead of fighting through these dialogs. For now I guess I'll just run my tests through the command window.

    Read the article

  • Reducing the pain writing integration and system tests

    - by mdma
    I would like to write integration tests and system tests for my applications, but producing good integration and system tests has often required so much effort that I have not bothered. The few times I tried, I wrote custom, application-specific test harnesses, which felt like re-inventing the wheel each time. I wonder if this is the wrong approach. Is there a "standard" approach to integration and full system testing? EDIT: To clarify, I mean automated tests for desktop and web applications - ideally a complete test suite that exercises the full functionality of the application.

    Read the article

  • TeamCity run Nunit tests in Parallel

    - by Bob Sinclar
    So I was thinking that there must be a better way to run NUnit tests for a .NET project via TeamCity. Currently the build of the project takes about 10 minutes, and the testing step takes around 30 minutes. I was thinking about splitting the NUnit tests into 3 groups, assigning each group to a different agent, and then making sure they have a build dependency on the initial build before they start. This was the best way I could think of doing it - is there a different approach I should also consider? On a side note, is it possible to combine all the NUnit results at the end to get one report from the tests being run on 3 different machines? I don't think this is possible unless someone has thought of a clever hack.
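
    One way to realize the three-group split described above is to tag fixtures with NUnit categories and have each TeamCity build configuration run a different category filter. This is only a sketch under that assumption - the category names and fixture below are invented, not taken from the original post:

        using NUnit.Framework;

        [TestFixture]
        [Category("GroupA")]            // the whole fixture is assigned to the first agent's group
        public class CheckoutTests      // hypothetical fixture name
        {
            [Test]
            public void AddsItemToBasket()
            {
                Assert.That(1 + 1, Is.EqualTo(2));   // placeholder assertion
            }
        }

        // Each of the three build configurations would then invoke the console runner
        // with a different filter, e.g.:
        //   nunit-console MyProject.Tests.dll /include:GroupA
        // so the agents execute disjoint subsets of the suite.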

    Read the article

  • Run django tests from a browser

    - by phoebebright
    I'd like to provide a browser page to help non-techies run the various tests I've created using the standard django test framework. The ideal would be for a way to display all the tests found for an application with tick boxes against each one, so the user could choose to run all tests or just a selection. Output would be displayed in a window/frame for review. Anyone know of such a thing?

    Read the article

  • Debugging maven junit tests with filtered resources?

    - by hstoerr
    We are using filtered testResources in JUnit tests that are usually executed by the Maven Surefire plugin. That is, the pom contains a section like this:

        <build>
          <testResources>
            <testResource>
              <directory>src/test/resources</directory>
              <filtering>true</filtering>
            </testResource>
          </testResources>
          ...

    How can I run such JUnit tests in the debugger? If I execute the tests in Eclipse, they fail since the test resources are not filtered. If the filtered test resources were written somewhere into the target directory, I could just use that as an additional source path - but this is not the case. If I try to run the Maven build in Eclipse with Debug As / Maven test, the build does not stop at the breakpoints. Any other ideas?

    Read the article

  • ReSharper no longer runs unit tests

    - by Ed Woodcock
    Hey folks, I'm trying to write some unit tests for an app I work on at work (in the vague hope that others might follow suit), and I was originally running these tests using NUnit and the ReSharper plugin. However, ReSharper will no longer run tests for me for some reason: it simply crosses them out with a red strikeout. There's no error code I'm afraid, and there's no mention of such behaviour on the JetBrains site. Has anyone else experienced similar behaviour? Cheers, Ed

    Read the article

  • Dynamically create PHPUnit tests from data-file

    - by DeletedAccount
    I have a data file with input and expected outputs. An example could be:

        input:   output:
        2        3
        3        5
        4        Exception
        5        8
        ...      ...

    Currently I have a custom solution to read from the data file and perform a test for each {input,output} pair. I would like to convert this into a PHPUnit based solution and I would like to have one test per input using the test name forXassertY. So the first three tests would be called for2assert3(), for3assert5() and for4assertException(). I do not want to convert my existing data to tests if it's possible to create the test methods dynamically and keep the data file as the basis of these tests. I want to convert it to PHPUnit as I want to add some other tests later on and also process and view the output using Hudson. Suggestions?

    Read the article

  • Having Latest Tests Results info in the notified email with Hudson

    - by Roberto
    I have a project with a lot of failing tests, so it would be great for me to receive by email the number of failed tests compared with the previous build. What I need is just the info that appears on the project's page next to the test results link: Latest Test Result (10 failures / -2). Is this possible? I've already tried the email-ext plugin, but it does not give me that info (I can get the list of failing tests with output etc., but I really just need the info above). Any ideas?

    Read the article

  • Team City + Gallio runs tests, but results are not shown

    - by Twindagger
    We recently updated to Visual Studio 2010, and as part of our upgrade we started using Gallio 3.2 prerelease builds. Everything runs fine in Visual Studio (through ReSharper), but I'm having problems with TeamCity integration. The tests seem to run during TeamCity builds just fine (our build takes long enough to run all our tests), but the tests are not showing up in TeamCity's test area. Here is the test target from our NAnt build file (this hasn't changed in our upgrade at all). Is there a trick to getting the tests to show up in TeamCity, or is this something that's broken in the latest builds of Gallio?

        <target name="runTests">
          <gallio result-property="exitCode" failonerror="false">
            <runner-extension value="TeamCityExtension,Gallio.TeamCityIntegration" />
            <assemblies>
              <include name="..\Source\Tests\${testProject}\bin\Debug\${testProject}.dll" />
            </assemblies>
          </gallio>
        </target>

    Read the article

  • Auto Re-Running of Tests that fail

    - by Tangopop
    I have a set of Selenium/MbUnit tests that work fine, but tend to take a while to run (over 4 hours). A problem I am finding is that about 1 in 20 tests seems to time out when running. I have confirmed that the Selenium Grid is working and the Selenium RCs are all fine; it just seems to be a quirk of the system. What is really annoying though is that if I run these tests again they will usually pass. What I want to know is if there is a way for me to automatically rerun the tests (probably in code) if a particular type of exception is caught. I have attempted to put a few lines of code in the catch statement, but I know this is a very hacky way of re-running the tests. Here is the code:

        catch (AssertionException e)
        {
            if (e.Message == "TimeOut")   // something similar to this
            {
                this.Test();
            }
            else
            {
                verificationErrors.AppendLine(browserList[i] + " :: " + e.Message);
            }
        }

    Any suggestions?
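
    A minimal sketch of one alternative to calling the test again from inside the catch block: wrap the flaky step in a bounded retry loop. This assumes the step can be passed as an Action; the helper name and attempt count are illustrative, not an MbUnit or Selenium API:

        using System;

        public static class RetryHelper
        {
            // Runs the supplied test body until it passes or maxAttempts is reached;
            // the last failure is rethrown so the test still reports as failed.
            public static void RunWithRetry(Action testBody, int maxAttempts)
            {
                for (int attempt = 1; attempt <= maxAttempts; attempt++)
                {
                    try
                    {
                        testBody();
                        return;              // passed, stop retrying
                    }
                    catch (Exception)        // in practice, narrow this to the timeout exception actually seen
                    {
                        if (attempt == maxAttempts)
                            throw;           // out of attempts, surface the real failure
                    }
                }
            }
        }

        // usage inside a test: RetryHelper.RunWithRetry(() => this.Test(), 3);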

    Read the article

  • How can I change ruby log level in unit tests based on context

    - by Stuart
    I'm new to Ruby, so forgive me if this is simple or I get some terminology wrong. I've got a bunch of unit tests (actually they're integration tests for another project, but they use Ruby test/unit) and they all include a module that sets up an instance variable for the log object. When I run the individual tests I'd like log.level to be DEBUG, but when I run a suite I'd like log.level to be ERROR. Is it possible to do this with the approach I'm taking, or does the code need to be restructured? Here's a small example of what I have so far. The logging module:

        #!/usr/bin/env ruby
        require 'logger'

        module MyLog
          def setup
            @log = Logger.new(STDOUT)
            @log.level = Logger::DEBUG
          end
        end

    A test:

        #!/usr/bin/env ruby
        require 'test/unit'
        require 'mylog'

        class Test1 < Test::Unit::TestCase
          include MyLog

          def test_something
            @log.info("About to test something")
            # Test goes here
            @log.info("Done testing something")
          end
        end

    A test suite made up of all the tests in its directory:

        #!/usr/bin/env ruby
        Dir.foreach(".") do |path|
          if /it-.*\.rb/.match(File.basename(path))
            require path
          end
        end

    Read the article

  • Unit tests logged (or run) multiple times

    - by HeavyWave
    I have this simple test:

        protected readonly ILog logger =
            LogManager.GetLogger(MethodBase.GetCurrentMethod().ReflectedType);

        private static int count = 0;

        [Test]
        public void TestConfiguredSuccessfully()
        {
            logger.Debug("in test method" + count++);
        }

    log4net is set up like this:

        [TestFixtureSetUp]
        public void SetUp()
        {
            log4net.Config.BasicConfigurator.Configure();
        }

    The problem is that if I run this test in NUnit once, I get the output (as expected):

        1742 [TestRunnerThread] DEBUG Tests.TestSomthing (null) - in test method0

    But if I press Run in nunit.exe again (or more times), I get the following:

        1742 [TestRunnerThread] DEBUG Tests.TestSomthing (null) - in test method1
        1742 [TestRunnerThread] DEBUG Tests.TestSomthing (null) - in test method1

    And so on (if I run it 5 times, I'll get 5 repeating lines). Now, if I run the same test alone from ReSharper the output is fine and does not repeat. However, if I run this test alongside 2 other tests in the same class, the output is repeated three times. I am totally confused. What the hell is going on here?
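
    For what it's worth, one common explanation (an assumption here, not something stated in the post) is that BasicConfigurator.Configure() appends a new appender to the repository every time it is called, and the NUnit GUI keeps the test AppDomain alive between runs, so every extra Run adds another appender and each log call is then written once per appender. A minimal sketch of a guard against that:

        [TestFixtureSetUp]
        public void SetUp()
        {
            // Configure log4net only once per AppDomain; alternatively call
            // log4net.LogManager.ResetConfiguration() first so repeated
            // setups never stack appenders.
            if (!log4net.LogManager.GetRepository().Configured)
            {
                log4net.Config.BasicConfigurator.Configure();
            }
        }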

    Read the article

  • Unit Tests Architecture Question

    - by Tom Tresansky
    So I've started to lay out unit tests for the following bit of code:

        public interface MyInterface {
            void MyInterfaceMethod1();
            void MyInterfaceMethod2();
        }

        public class MyImplementation1 implements MyInterface {
            public void MyInterfaceMethod1() {
                // do something
            }
            public void MyInterfaceMethod2() {
                // do something else
            }
            public void SubRoutineP() {
                // other functionality specific to this implementation
            }
        }

        public class MyImplementation2 implements MyInterface {
            public void MyInterfaceMethod1() {
                // do a 3rd thing
            }
            public void MyInterfaceMethod2() {
                // do something completely different
            }
            public void SubRoutineQ() {
                // other functionality specific to this implementation
            }
        }

    with several implementations and the expectation of more to come. My initial thought was to save myself time re-writing unit tests with something like this:

        public abstract class MyInterfaceTester {
            protected MyInterface m_object;

            @Before
            public void setUp() {
                m_object = getTestedImplementation();
            }

            public abstract MyInterface getTestedImplementation();

            @Test
            public void testMyInterfaceMethod1() {
                // use m_object to run tests
            }

            @Test
            public void testMyInterfaceMethod2() {
                // use m_object to run tests
            }
        }

    which I could then subclass easily to test the implementation-specific additional methods, like so:

        public class MyImplementation1Tester extends MyInterfaceTester {
            public MyInterface getTestedImplementation() {
                return new MyImplementation1();
            }

            @Test
            public void testSubRoutineP() {
                // use m_object to run tests
            }
        }

    and likewise for implementation 2 onwards. So my question really is: is there any reason not to do this? JUnit seems to like it just fine, and it serves my needs, but I haven't really seen anything like it in any of the unit testing books and examples I've been reading. Is there some best practice I'm unwittingly violating? Am I setting myself up for heartache down the road? Is there simply a much better way out there I haven't considered? Thanks for any help.

    Read the article

  • Intel NIC X540-T1 non-functional in Ubuntu Server 12.04

    - by Jeff Carr
    I have installed three Intel X540-T1s in servers running Ubuntu Server 12.04, but all are non-functional: no link lights, no packets sent or received, and no connection on IPv4 or IPv6 whether set up as DHCP or static. Also, dmesg doesn't detect cable connection or disconnection. I updated the default ixgbe driver to Intel's latest version (3.11.33) with no change. The Ethernet controller is being reported as X540-AT2 (which might be a problem that I can't figure out how to fix), but the subsystem is X540-T1, so I believe that might be intended. Does anyone have any experience with this that could assist?

        ifconfig eth2
        eth2      Link encap:Ethernet  HWaddr a0:36:9f:14:5f:ea
                  inet addr:192.168.101.1  Bcast:192.168.101.255  Mask:255.255.255.0
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        ethtool -i eth2
        driver: ixgbe
        version: 3.11.33
        firmware-version: 0x8000037c
        bus-info: 0000:08:00.0
        supports-statistics: yes
        supports-test: yes
        supports-eeprom-access: yes
        supports-register-dump: yes

        lspci -vvnns 08:00.0
        08:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10 Gigabit X540-AT2 [8086:1528] (rev 01)
                Subsystem: Intel Corporation Ethernet Converged Network Adapter X540-T1 [8086:0002]
                Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr+ Stepping- SERR+ FastB2B- DisINTx+
                Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
                Latency: 0, Cache Line Size: 32 bytes
                Interrupt: pin A routed to IRQ 16
                Region 0: Memory at e8000000 (64-bit, prefetchable) [size=2M]
                Region 4: Memory at e8200000 (64-bit, prefetchable) [size=16K]
                [virtual] Expansion ROM at e8280000 [disabled] [size=512K]
                Capabilities: <access denied>
                Kernel driver in use: ixgbe
                Kernel modules: ixgbe

    Read the article

  • Is there any functional-like unix shell?

    - by Caruccio
    I'm (really) a newbie to functional programming (in fact I've only had contact with it using Python), but it seems to be a good approach for some list-intensive tasks in a shell environment. I'd love to do something like this:

        $ [ git clone $host/$repo for repo in repo1 repo2 repo3 ]

    Is there any Unix shell with this kind of feature? Or maybe some feature to allow easy shell access (commands, env/vars, readline, etc.) from within Python (the idea is to use Python's interactive interpreter as a replacement for bash). EDIT: Maybe a comparative example would clarify. Let's say I have a list composed of dir/file:

        $ FILES=( build/project.rpm build/project.src.rpm )

    And I want to do a really simple task: copy all files to dist/ AND install them in the system (it's part of a build process). Using bash:

        $ cp ${FILES[*]} dist/
        $ cd dist && rpm -Uvh $(for f in ${FILES[*]}; do basename $f; done)

    Using a "pythonic shell" approach (caution: this is imaginary code):

        $ cp [ os.path.join('dist', os.path.basename(file)) for file in FILES ] 'dist'

    Can you see the difference? THAT is what I'm talking about. How can a shell with this kind of feature not exist yet? It's a real pain to handle lists in shell, even though it's such a common task: lists of files, lists of PIDs, lists of everything. And a really, really important point: using syntax/tools/features everybody already knows - sh and python. IPython seems to be heading in a good direction, but it's bloated: if a var name starts with '$' it does this, if '$$' it does that. Its syntax is not "natural", so many rules and "workarounds" ([ ln.upper() for ln in !ls ] -- syntax error).

    Read the article

  • CodeStock 2012 Review: Eric Landes( @ericlandes ) - Automated Tests in to automated Builds! How to put the right type of automated tests in to the right automated builds.

    Automated Tests into Automated Builds! How to put the right type of automated tests into the right automated builds. Speaker: Eric Landes. Twitter: @ericlandes. Blog: http://ericlandes.com/ This was one of the first sessions I attended during CodeStock 2012. Eric’s talk focused mostly on unit testing, and on the idea that the lack of proper unit testing can be compared to stealing from an employer. His point was that if you’re not doing proper unit testing, then all of the time wasted on fixing issues that could have been detected with unit tests is like stealing money from the employer. He makes the assumption that the time spent on fixing these issues could have been better spent developing new features that drive the business. To a point I can agree with Eric’s argument regarding unit testing and stealing from a company’s perspective. I can see how he relates resources being shifted from new development to bug fixes as stealing, based on the fact that the resources used to fix bugs are taken directly from other projects. He also states that boring/redundant and build/test tasks should be automated, because it reduces the chance of errors and frees up developers to do what they do best: DEVELOP! When he refers to testing, he breaks testing down into four distinct types: unit tests, acceptance tests (which also include integration tests), performance tests, and UI tests. With this he also recommends that developers should not go buck wild striving for 100% code coverage, because some tests may not provide a great return on investment. In his experience, 70% test coverage was a very acceptable rate.

    Read the article

  • Tests for hard drive health

    - by Samik R
    I have a 5-year-old hard drive (bought new at the time), but it was sitting in my closet for 5 years, unused. I have just started using it, and it seems to be making a whirring sound (rather distinct from the other noises like fans etc.). I ran a few diagnostic tests, like Seagate's SeaTools and the SMART test, plus a few generic tests, and all passed. Should I be concerned? Is there any other test that I should run? It's an internal IDE WD 5400 RPM drive, being used in a desktop which is itself pretty high-end (AMD Phenom II X6 1100T, AMD Radeon GPU, etc.), but it would be used rather occasionally to begin with (avg. 1-2 hrs. per day). Thanks for any pointers.

    Read the article
