Search Results

Search found 8185 results on 328 pages for 'technical tests'.


  • Groovy support in Java EE projects

    - by Martin Janicek
    As requested in issue 144038, I've implemented support for Groovy in Java enterprise projects. You should be able to combine Java/Groovy files and run them, and thanks to the new Groovy JUnit test support you can also run Groovy tests together with your existing Java tests. I hope it will make your enterprise development (and especially enterprise testing) easier and more productive. Note: the changes will be propagated to the NetBeans daily builds in a few days, so please stay in touch!

    Read the article

  • Should developers be involved in testing phases?

    - by LudoMC
    Hi, we are using a classical V-model development process: requirements, architecture, design, implementation, integration tests, system tests and acceptance. Testers prepare test cases during the first phases of the project. The issue is that, due to resource issues (*), the test phases are too long and are often shortened due to time constraints (you know project managers... ;)). So my question is simple: should developers be involved in the test phases, and isn't it too 'dangerous'? I'm afraid it will give the project managers a false feeling of better quality because the work has been done, but would the added man-days be of any value? I'm not really confident in developers doing tests (no offense here, but we all know it's quite hard to break in a few clicks what you have made in several days). Thanks for sharing your thoughts. (*) For obscure reasons, increasing the number of testers is not an option as of today. (Just upfront: this is not a duplicate of "Should programmers help testers in designing tests?", which talks about test preparation, not test execution, where we avoid the involvement of developers.)

    Read the article

  • Is there a Java unit-test framework that auto-tests getters and setters?

    - by Michael Easter
    There is a well-known debate in Java (and other communities, I'm sure) about whether trivial getter/setter methods should be tested, usually with respect to code coverage. Let's agree that this is an open debate and not try to answer it here. There have been several blog posts on using Java reflection to auto-test such methods. Does any framework (e.g. JUnit) provide such a feature? For example, an annotation that says "this test T should auto-test all the getters/setters on class C, because I assert that they are standard". It seems to me that it would add value, and if it were configurable, the 'debate' would be left as an option to the user.
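
    For illustration, a minimal reflection-based sketch of the idea, assuming JUnit 4 and standard JavaBean naming; the Customer bean and its single String property are placeholders standing in for the class C under test:

        import static org.junit.Assert.assertEquals;

        import java.beans.BeanInfo;
        import java.beans.Introspector;
        import java.beans.PropertyDescriptor;

        import org.junit.Test;

        public class BeanPropertiesTest {

            // Placeholder bean; in practice this would be the class C under test.
            public static class Customer {
                private String name;
                public String getName() { return name; }
                public void setName(String name) { this.name = name; }
            }

            @Test
            public void settersAndGettersRoundTrip() throws Exception {
                Customer bean = new Customer();
                BeanInfo info = Introspector.getBeanInfo(Customer.class, Object.class);
                for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
                    if (pd.getReadMethod() == null || pd.getWriteMethod() == null) {
                        continue; // only check standard read/write properties
                    }
                    if (pd.getPropertyType() == String.class) { // one sample type kept short for the sketch
                        pd.getWriteMethod().invoke(bean, "sample");
                        assertEquals("sample", pd.getReadMethod().invoke(bean));
                    }
                }
            }
        }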

    Read the article

  • Make audible Ding! sound, or growl notification, when `rake test` finishes!

    - by Jordan Feldstein
    I lose a ton of productivity by getting distracted while waiting for my tests to run. Usually I'll start to look at something while they're running, and 15-20 minutes later I realize my tests have long since finished and I've spent 10 minutes reading online. Make a small change... rerun tests... another 10-15 minutes wasted! How can I make my computer produce some kind of alert (a sound or a Growl notification) when my tests finish, so I can snap back to what I was doing?
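
    The question is about rake, but purely to illustrate the idea in this page's dominant language, here is a hedged JUnit 4 sketch: a RunListener that beeps when the run finishes (on a desktop the beep could be swapped for a call to an external notifier such as Growl):

        import java.awt.Toolkit;

        import org.junit.runner.JUnitCore;
        import org.junit.runner.Result;
        import org.junit.runner.notification.RunListener;

        public class DingWhenDone {

            // Beeps when the whole test run finishes and prints a short summary.
            static class BeepListener extends RunListener {
                @Override
                public void testRunFinished(Result result) {
                    Toolkit.getDefaultToolkit().beep();
                    System.out.println("Tests done: " + result.getRunCount() + " run, "
                            + result.getFailureCount() + " failed.");
                }
            }

            // Test class names are passed on the command line.
            public static void main(String[] args) throws ClassNotFoundException {
                JUnitCore core = new JUnitCore();
                core.addListener(new BeepListener());
                Class<?>[] classes = new Class<?>[args.length];
                for (int i = 0; i < args.length; i++) {
                    classes[i] = Class.forName(args[i]);
                }
                core.run(classes);
            }
        }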

    Read the article

  • Agile team with no dedicated Tester members. Insane or efficient?

    - by MetaFight
    I'm a software developer. I've been thinking a lot about the efficiency of the Software Testers I've worked with so far in my career. In fact, I've been thinking a lot about the Software Tester role in general, and have reached a potentially contentious conclusion: non-developer Software Testers are less efficient at software testing than developers. Now, before everyone gets upset, hear me out. This isn't mere opinion: Software Testing and Software Development require a lot of skills in common:

      - Problem solving
      - Thinking about corner cases
      - Analytical skills
      - The ability to define clear and concise step-by-step scenarios

    What developers have in addition to this is the ability to automate their tests. Yes, I know non-dev testers can automate their tests too, but that often becomes a test maintenance issue. Because automating UI tests is essentially programming, non-dev members encounter all the same difficulties software developers encounter: copy-pasta, lack of code reusability/maintainability, etc. So I was wondering: why not replace all non-dev roles with developer roles? Developers have the skills required to perform Software Testing tasks, and they have the skills to automate tests and keep them maintainable. Would the following work? Hire a bunch of developers and split them into two roles:

      - Software developers
      - Software developers doing testing (some manual, mostly automated by writing integration tests, unit tests, etc.)
      - Software developers doing application support (I've removed this as it is probably a separate question altogether)

    And, in our case, since we're doing Agile development, rotate the roles every sprint or two. Also, if at all possible, try to have people spend their Developer stints and Testing stints on different projects. Ideally you would want to reduce the turnover rate per rotation. So maybe you could have two groups and make sure their rotation cycles are staggered. For example, if each rotation was two sprints long, the two groups would have their rotations one sprint apart; that way there's only a 50% turnover rate per sprint. Am I crazy, or could this work? (Obviously a key component to this working is that all devs want to rotate through these roles. Let's assume I'm starting a new company and I can hire these ideal people.) Edit: I've removed the phrase "QA", as apparently we are using it incorrectly where I work.

    Read the article

  • How best to organize project folders for unit tests in .NET?

    - by Dan Bailiff
    So I'm trying to introduce unit testing to my group. I've successfully upgraded a VS'05 web site project to a VS'08 web application, and now have a solution with the web app project and a unit test project. The issue now is how to fit this back into the source repository so that we don't break the build system and the unit test projects are persisted as well. Right now we have something like this:

        c:\root
        c:\root\projectA
        c:\root\projectB
        c:\root\projectC

    where projectA contains the sln file and all other related files/folders for the project. Now I have this new solution that looks like this:

        c:\root\projectA               (parent folder)
        c:\root\projectA\projectA      (the production code project)
        c:\root\projectA\projectA_Test (the unit test project)
        c:\root\projectA\TestResults
        c:\root\projectA\projectA.sln

    How do I integrate this new structure back into the code repository? I'd really prefer to keep the production code folder where it was in the source repository for the sake of the build, but is this necessary? If I keep the production code project in its usual place, then where do I keep my unit test projects, and how do I connect them with a sln file? Is it better to use this new structure and adjust the build process? I'd love to hear how other people are dealing with this issue of upgrading legacy projects to unit testing.

    Read the article

  • How Can I Run a Regex that Tests Text for Characters in a Particular Alphabet or Script?

    - by Eli
    I'd like to write a regex in Perl that will test a string for characters in a particular script. This would be something like: $text =~ .*P{'Chinese'}.* Is there a simple way of doing this? For English it's pretty easy, by just testing for [a-zA-Z], but for a script like Chinese, or one of the Japanese scripts, I can't figure out any way of doing this short of writing out every character explicitly, which would make for some very ugly code. Ideas? I can't be the first/only person that's wanted to do this.
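
    Both Perl and Java expose Unicode script properties for exactly this (in Perl, $text =~ /\p{Han}/ matches a Chinese character). Since the rest of this page leans on Java, here is a minimal Java 7+ sketch of the same technique using java.util.regex:

        import java.util.regex.Pattern;

        public class ScriptMatch {

            // Unicode script properties: Han covers Chinese characters;
            // Hiragana and Katakana cover the Japanese syllabaries.
            private static final Pattern HAS_HAN = Pattern.compile("\\p{script=Han}");
            private static final Pattern ALL_HAN = Pattern.compile("\\p{script=Han}+");

            public static void main(String[] args) {
                System.out.println(HAS_HAN.matcher("abc中").find());    // true: contains a Han character
                System.out.println(ALL_HAN.matcher("abc中").matches()); // false: not entirely Han
            }
        }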

    Read the article

  • Test run errors with MSTest in VS2010

    - by Tomas Lycken
    When I run my unit tests, all tests pass, but instead of "Test run succeeded" (or whatever the success message is), I get "Test run error" in the little bar that tells me how many of my tests passed, even though all of them did. When I click the text, I'm taken to a page that tells me the following two things happened:

      - Warning: conflict during test run deployment: deployment item '[...]\Booking.Web.dll' directly or indirectly referenced by the test container [...]\Booking.Web.Tests.dll cannot be deployed to 'Booking.Web.dll' because otherwise the file '[...]\Booking.Web.dll' would override deployment item '[...]\Booking.Web.dll' directly or indirectly referenced by '[...]\Booking.Web.Tests.dll'

      - Error: Cannot initialize the ASP.NET project 'Booking.Web'. Exception was thrown: The website could not be configured correctly; getting ASP.NET proccess information failed. Requesting 'http://localhost:54131/VSEnterpriseHelper.axd' returned an error: The remote server returned an error: (500) Internal Server Error.

    I don't understand half of what it's complaining about. How do I get rid of these errors? (For reference: Booking.Web is an ASP.NET MVC 2 project, Booking.Web.Tests is a test project, and [...] is the full local path to the projects in my environment, in most cases the /bin/debug/ folder inside the Booking.Web project.)

    Read the article

  • How to change TestNG dataProvider order

    - by momad
    Hi, I am running hundreds of tests against a large publishing system and would like to parallelize the tests using TestNG. However, I cannot find an easy way of doing this. Each test case instantiates an instance of this publisher, sends some messages, waits for those messages to be published, then dumps out the contents of the publish queues and compares them against the expected outcome. With so many tests this takes a very long time to complete (a day or more), even when parallelized using threads. We've found that in testing this sort of system, it's best to start the system up once, have all tests send their messages, wait for publishing to finish, dump all outputs, and then match the outputs back to the tests and verify. For example, instead of the following:

        @Test
        public void testRule1() {
            Publisher pub = new Publisher();
            pub.sendRule(new Rule("test1-a"));
            sleep(10); // wait 10 seconds
            pub.dumpRules();
            verifyRule("test1-a");
        }

    we wanted to do something like this:

        @Test
        public void testRule1(boolean sendMode) {
            if (sendMode) {
                this.pub.sendRule(new Rule("test1-a"));
            } else {
                verifyRule("test1-a");
            }
        }

    where a dataProvider runs through all the tests with sendMode = true, then dumpAllRules() is performed, and then all the tests run again with sendMode = false. The problem is that TestNG calls the same method twice in a row: once with sendMode = true, immediately followed by sendMode = false. Is there any way to accomplish this in TestNG? Thanks!
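
    One way to get the "all sends first, then all verifies" ordering, not taken from the post but sketched here under the assumption that each case can be split in two, is to use TestNG groups instead of a dataProvider flag: put the send half and the verify half in separate groups, let dependsOnGroups enforce the phase boundary, and hook the dump step to @BeforeGroups:

        import org.testng.annotations.BeforeGroups;
        import org.testng.annotations.Test;

        public class Rule1Test {

            // Publisher and Rule are the post's own classes, shared across the run.
            private final Publisher pub = new Publisher();

            @Test(groups = "send")
            public void sendRule1() {
                pub.sendRule(new Rule("test1-a"));
            }

            // Runs once, after every "send" test and before the first "verify" test.
            @BeforeGroups(groups = "verify")
            public void dumpAllRules() {
                pub.dumpRules();
            }

            @Test(groups = "verify", dependsOnGroups = "send")
            public void verifyRule1() {
                // ... compare the dumped output for "test1-a" against the expected result ...
            }
        }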

    Read the article

  • How to implement or emulate an "abstract" OCUnit test class?

    - by Quinn Taylor
    I have a number of Objective-C classes organized in an inheritance hierarchy. They all share a common parent which implements all the behaviors shared among the children. Each child class defines a few methods that make it work, and the parent class raises an exception for the methods designed to be implemented/overridden by its children. This effectively makes the parent a pseudo-abstract class (since it's useless on its own) even though Objective-C doesn't explicitly support abstract classes. The crux of this problem is that I'm unit testing this class hierarchy using OCUnit, and the tests are structured similarly: one test class that exercises the common behavior, with a subclass corresponding to each of the child classes under test. However, running the test cases on the (effectively abstract) parent class is problematic, since the unit tests will fail in spectacular fashion without the key methods. (The alternative of repeating the common tests across 5 test classes is not really an acceptable option.) The non-ideal solution I've been using is to check (in each test method) whether the instance is the parent test class, and bail out if it is. This leads to repeated code in every test method, a problem that becomes increasingly annoying if one's unit tests are highly granular. In addition, all such tests are still executed and reported as successes, skewing the number of meaningful tests that were actually run. What I'd prefer is a way to signal to OCUnit "Don't run any tests in this class, only run them in its child classes." To my knowledge, there isn't (yet) a way to do that, something similar to a +(BOOL)isAbstractTest method I can implement/override. Any ideas on a better way to solve this problem with minimal repetition? Does OCUnit have any ability to flag a test class in this way, or is it time to file a Radar? Edit: Here's a link to the test code in question. Notice the frequent repetition of if (...) return; to start a method, including use of the NonConcreteClass() macro for brevity.
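
    OCUnit itself may indeed need a Radar for this, but for comparison, the JUnit analogue of the situation is usually handled by declaring the shared test class abstract: the runner never instantiates it, while each concrete subclass inherits and runs all of its tests. A hedged Java sketch with placeholder names:

        import static org.junit.Assert.assertEquals;

        import java.util.ArrayDeque;
        import java.util.Deque;

        import org.junit.Test;

        // Never run directly: JUnit only instantiates concrete test classes.
        abstract class AbstractStackContractTest {

            // Each concrete subclass supplies the implementation under test.
            protected abstract Deque<Integer> newStack();

            @Test
            public void popReturnsMostRecentlyPushedElement() {
                Deque<Integer> stack = newStack();
                stack.push(1);
                stack.push(2);
                assertEquals(Integer.valueOf(2), stack.pop());
            }
        }

        // One small concrete subclass per implementation; only these are executed.
        public class ArrayDequeStackTest extends AbstractStackContractTest {
            @Override
            protected Deque<Integer> newStack() {
                return new ArrayDeque<Integer>();
            }
        }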

    Read the article

  • What Scheme Does Ghuloum Use?

    - by Don Wakefield
    I'm trying to work my way through Compilers: Backend to Frontend (and Back to Front Again) by Abdulaziz Ghuloum. It seems abbreviated from what one would expect of a full course/seminar, so I'm trying to fill in the pieces myself. For instance, I have tried to use his testing framework in the R5RS flavor of DrScheme, but it doesn't seem to like the macro stuff:

        src/ghuloum/tests/tests-driver.scm:6:4: read: illegal use of open square bracket

    I've read his intro paper on the course, An Incremental Approach to Compiler Construction, which gives a great overview of the techniques used and mentions a couple of Schemes with features one might want to implement for 'extra credit', but he doesn't mention the Scheme he uses in the course.

    Update: I'm still digging into the original question (investigating options such as Petit Scheme, suggested by Eli below), but found an interesting link relating to Ghuloum's work, so I am including it here. Ikarus Scheme (http://en.wikipedia.org/wiki/Ikarus_(Scheme_implementation)) is the actual implementation of Ghuloum's ideas and appears to have been part of his Ph.D. work. It's supposed to be one of the first implementations of R6RS. I'm trying to install Ikarus now, but the configure script doesn't want to recognize my system's install of libgmp.so, so my problems are still unresolved.

    Example: The following example seems to work in PLT 2.4.2, running in DrEd using the Pretty Big language:

        (require lang/plt-pretty-big)
        (load "/Users/donaldwakefield/ghuloum/tests/tests-driver.scm")
        (load "/Users/donaldwakefield/ghuloum/tests/tests-1.1-req.scm")

        (define (emit-program x)
          (unless (integer? x) (error "---"))
          (emit "    .text")
          (emit "    .globl scheme_entry")
          (emit "    .type scheme_entry, @function")
          (emit "scheme_entry:")
          (emit "    movl $~s, %eax" x)
          (emit "    ret"))

    Attempting to replace the require directive with #lang scheme results in the error message

        foo.scm:7:3: expand: unbound identifier in module in: emit

    which appears to be due to a failure to load tests-driver.scm. Attempting to use #lang r6rs disables the REPL, which I'd really like to use, so I'm going to try to continue with Pretty Big. My thanks to Eli Barzilay for his patient help.

    Read the article

  • How do I make the manifest available during a Maven/Surefire unit test run ("mvn test")?

    - by Ernst de Haan
    How do I make the manifest available during a Maven/Surefire unit test run ("mvn test")? I have an open-source project that I am converting from Ant to Maven, including its unit tests. Here's the source repository with the Maven project: http://github.com/znerd/logdoc. My question pertains to the primary module, called "base". This module has a unit test that tests the behaviour of the static method getVersion() in the class org.znerd.logdoc.Library. This method returns Library.class.getPackage().getImplementationVersion(), and getImplementationVersion() returns the value of a setting in the manifest file. So far, so good. I have tested this in the past and it works well, as long as the manifest is indeed available on the classpath at the path META-INF/MANIFEST.MF (either on the file system or inside a JAR file). My challenge now is that the manifest file is not available when I run the unit tests with mvn test. Surefire runs the unit tests, but my unit test fails with a message indicating that Library.getVersion() returned null. When I check, I find that the JAR has not even been generated: Maven/Surefire runs the unit tests against the classes, before the resources are added to the classpath. So can I either run the unit tests against the JAR (implicitly requiring the JAR to be generated first), or can I make sure the resources (including the manifest file) are generated/copied under target/classes before the unit tests are run? Note that I use Maven 2.2.0 and Java 1.6.0_17 on Mac OS X 10.6.2, with JUnit 4.8.1.
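
    One common workaround, not taken from the post, is to give getVersion() a fallback that reads the version from a Maven-filtered resource: resources are copied to target/classes during process-resources (before Surefire runs), whereas the manifest only exists inside the packaged JAR. A hedged sketch, assuming a hypothetical src/main/resources/version.properties containing version=${project.version} with filtering enabled:

        import java.io.IOException;
        import java.io.InputStream;
        import java.util.Properties;

        public final class Library {

            private Library() {
            }

            public static String getVersion() {
                // Works when running from the packaged JAR, where META-INF/MANIFEST.MF is present.
                String version = Library.class.getPackage().getImplementationVersion();
                if (version != null) {
                    return version;
                }
                // Fallback for Surefire runs against target/classes (hypothetical resource name).
                InputStream in = Library.class.getResourceAsStream("/version.properties");
                if (in == null) {
                    return null;
                }
                try {
                    Properties props = new Properties();
                    props.load(in);
                    return props.getProperty("version");
                } catch (IOException e) {
                    return null;
                } finally {
                    try { in.close(); } catch (IOException ignored) { }
                }
            }
        }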

    Read the article

  • Chicken and egg problem (restore database) when trying to write unit tests against SQL Server 2008.

    - by Hamish Grubijan
    OK, they are not unit tests but end-to-end tests. The setup is somewhat involved. The tests will use C# and an ODBC connection. Every test will try to clean up after itself, but every 20 tests or so (once per C# class) we need to do a full database restore. I do not think I can do that over an ODBC connection, according to this document: http://www.sql-server-performance.com/articles/dba/Obtain_Exclusive_Access_to_Restore_SQL_Server_p1.aspx ("Msg 6104, Level 16, State 1, Line 1: Cannot use KILL to kill your own process.") However, I would like to, so that the other 199 tests do not go amok because of a bad clean-up. Is there another way? Perhaps I can open a different "connection", such as COM automation or something of that sort, and then kill all database connections from there? If so, how can I do that? Also, will the clients be able to reconnect automatically after a restore, or would I have to dismantle everything once every 20 tests or so? If you find this question confusing, please let me know what your questions are. Thanks!
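
    The post uses C# over ODBC, but the usual fix is plain T-SQL, so here is a hedged sketch in Java/JDBC (this page's common language): connect to master rather than to the database being restored, force the test database into SINGLE_USER to roll back and disconnect every other session, restore it, then reopen it. The server URL, credentials, database name and backup path are all placeholders:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.sql.Statement;

        public class RestoreTestDatabase {

            public static void restore() throws SQLException {
                // Connect to master, never to the database being restored (placeholder URL).
                String url = "jdbc:sqlserver://localhost;databaseName=master;user=sa;password=secret";
                try (Connection con = DriverManager.getConnection(url);
                     Statement st = con.createStatement()) {
                    // Kick off all other connections, rolling back their open work immediately.
                    st.execute("ALTER DATABASE [TestDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE");
                    st.execute("RESTORE DATABASE [TestDb] FROM DISK = N'C:\\backups\\TestDb.bak' WITH REPLACE");
                    st.execute("ALTER DATABASE [TestDb] SET MULTI_USER");
                }
                // Existing client connections are gone after this; the test harness has to
                // open fresh connections for the next batch of tests.
            }
        }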

    Read the article

  • Running Flash on a headless Solaris box

    - by Marty Pitt
    Our build server is a Solaris box, and I'm trying to run a suite of FlexUnit tests as part of the automated build process. This works by compiling a swf movie with a suite of automated unit tests. The build script launches this movie, which automatically begins running the tests. Results of each test are sent back to the launching script across a port and written out to a local xml file. Once the tests are completed, the movie closes down and the build script interrogates the results to see if all the tests passed. The FlexUnit wiki provides information about how to achieve this on a Unix server by using Xvnc to provide a virtual display for the flash movie to run its tests in. I've passed this information on to our sysadmin team (along with the link to the article) and have been told that because this is a Solaris box, we can't use that approach: Xvnc isn't supported on Solaris. Unfortunately, I know very little about servers, *nix vs Solaris, or Xvnc. Can someone please provide some advice about how we can achieve the same outcome on a Solaris box?

    Read the article

  • Running RSpec on Google App Engine via JRuby

    - by Carl
    I'm trying to write some tests (RSpec) against the AppEngine and its datastore. I've tried to load the environment and tests via: appcfg.rb run -S spec app/tests/ And I end up with the following error: spec:19: undefined method `bin_path' for Gem:Module (NoMethodError) I can run non-appengine specs just fine by running: spec app/tests/ Any suggestions on how to get RSpec up and running with JRuby and Google App Engine would be greatly appreciated. Thank you.

    Read the article

  • Integration testing - Hibernate & DbUnit

    - by Marco
    Hi, I'm writing some integration tests in JUnit. When I run all the tests together in a row (rather than separately), the data persisted in the database keeps changing, and the tests find unexpected data (inserted by the previous test) during their execution. I was thinking of using DbUnit, but I wonder whether it resets the auto-increment index between executions or not (because the tests also check the IDs of the persisted entities). Thanks, M.
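
    For reference, DbUnit's CLEAN_INSERT deletes and re-inserts the dataset before each test, but it does not reset auto-increment counters by itself, so asserting on generated IDs usually needs an explicit, vendor-specific reset in the setup. A hedged JUnit 4 + DbUnit sketch; the in-memory H2 URL, the customer table and the dataset file are placeholders:

        import java.sql.Connection;
        import java.sql.DriverManager;

        import org.dbunit.database.DatabaseConnection;
        import org.dbunit.database.IDatabaseConnection;
        import org.dbunit.dataset.IDataSet;
        import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
        import org.dbunit.operation.DatabaseOperation;
        import org.junit.Before;
        import org.junit.Test;

        public class PersistenceIT {

            private IDatabaseConnection dbunit;

            @Before
            public void resetDatabase() throws Exception {
                Connection jdbc = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", "");
                dbunit = new DatabaseConnection(jdbc);
                // Reset the identity counter so tests can assert on generated IDs (H2 syntax; vendor-specific).
                jdbc.createStatement().execute("ALTER TABLE customer ALTER COLUMN id RESTART WITH 1");
                // CLEAN_INSERT wipes the listed tables and re-inserts the seed rows, but on its
                // own it leaves auto-increment counters wherever the previous test left them.
                IDataSet seed = new FlatXmlDataSetBuilder()
                        .build(getClass().getResourceAsStream("/datasets/seed.xml"));
                DatabaseOperation.CLEAN_INSERT.execute(dbunit, seed);
            }

            @Test
            public void persistsEntityWithExpectedId() throws Exception {
                // ... exercise the Hibernate code here and assert on the generated IDs ...
            }
        }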

    Read the article

  • Null reference to DataContext when testing an ASP.NET MVC app with NUnit

    - by user252160
    I have an ASP.NET MVC application with a separate project added for tests. I know the pluses and minuses of using a connection to the database when running unit tests, and I still want to use it. Yet every time I run the tests with the NUnit tool, they all fail because my DataContext is null. I heard something about having a separate config file for the test assembly, but I am not sure whether I did it properly, or whether that works at all.

    Read the article
