Search Results

Search found 32596 results on 1304 pages for 'test developer'.


  • PostgreSQL table for storing automation test results

    - by Martin
    I am building an automation test suite that runs on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests, and for each one we will store the following information:

      - test ID (a GUID)
      - test name
      - test description
      - status (running, done, waiting to be run)
      - progress (%)
      - start time of test
      - end time of test
      - test result
      - latest screenshot of the running test (updated every 30 seconds)

    The number of tests isn't huge (say a few thousand), and each machine (say, 50 of them) has a service that checks the database and figures out whether it's time to start a new automated test on that machine. How should I organize my SQL tables to store all this information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to drop the table and create a new one with more columns), how should I proceed? Should the new attributes just go in a different table? I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I just store the screenshots in their own table to simplify the replication? Thanks!

    Read the article

  • A new name for unit tests

    - by Will
    I never used to like unit testing. I always thought it increased the amount of work I had to do. Turns out, that's only true in terms of the actual number of lines of code you write, and furthermore, it's completely offset by the increase in the number of lines of useful code that you can write in an hour with tests and test-driven development. Now I love unit tests, as they allow me to write useful code that quite often works the first time! (knock on wood) I have found that people are reluctant to do unit tests or to start a project with test-driven development if they are under strict timelines or in an environment where others don't do it, so they don't either. Kinda like a cultural refusal to even try. I think one of the most powerful things about unit testing is the confidence it gives you to undertake refactoring. It also gives newfound hope that I can hand my code to someone else to refactor or improve, and if my unit tests still pass, I can use the new version of the library they modified pretty much without fear. It's this last aspect of unit testing that I think needs a new name. The unit test is more like a contract of what this code should do, now and in the future. When I hear the word "testing", I think of mice in cages, with multiple experiments done on them to see the effectiveness of a compound. That is not what unit testing is: we're not trying out different code to see which is the most effective approach, we're defining what outputs we expect for given inputs. In the mice example, unit tests are more like the definitions of how the universe will work, as opposed to the experiments done on the mice. Am I on crack, or does anyone else see this refusal to do testing, and do they think it's for a similar reason? What reasons do you or others give for not testing? What do you think their motivations are in not unit testing? And as a new name for unit testing that might get past some of the objections, how about jContract? (A bit Java-centric, I know :) Or Unit Contracts?

    Read the article

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a single-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python. Here is what I need to do. I have a list of tests, with the following attributes:

      - uri: a URI to test (could be HTTP/HTTPS/SSH/local);
      - depends: an associative array of tests/values that this test depends on;
      - join: a list of DB joins to be added when selecting items to process in this test;
      - depends_db: additional conditions to add to the DB request when selecting items to process in this test.

    The program builds a dependency tree, beginning with the tests that have no dependencies. Then, for each test:

      - a list of items is selected from the database using the conditions (results of depending tests, joins and depends_db);
      - the list of items is sent to the URI (using POST or stdin);
      - the result is retrieved as a YAML file listing the state and comments for the test for each tested item;
      - the results are stored in the DB;
      - the test returns, allowing depending tests to be performed.

    Finally, the program generates reports (CSV, DB, graphviz) of the performed tests. The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:

      - backup: hosted on the backup machine(s), called through HTTP, checks whether the machines' backups went well;
      - DNS: hosted on the local machine, called via stdin, checks whether the machines' FQDNs have valid DNS entries.

    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?

    Read the article

  • Android instrumentation test suite

    - by siri
    Hi, I have written two test cases in a package com.app.myapp.test. When I try to run them, they are not both executed: only one test case runs, and then it stops. I have written the following test suite, AllTests.java, in the same package:

        public class AllTests extends TestSuite {
            public static Test suite() {
                return new TestSuiteBuilder(AllTests.class)
                        .includePackages("./src/com.ni.mypaint.test", "./src/com.ni.mpaint.test")
                        .build();
                /* .includeAllPackagesUnderHere()
                   .build(); */
            }
        }

    Are the code and location for this test suite correct?
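
    For reference, here is a minimal sketch of how the Android TestSuiteBuilder API is usually invoked; it takes dotted Java package names rather than ./src file-system paths (the package name below is taken from the question, so adjust it to your own):

        import android.test.suitebuilder.TestSuiteBuilder;
        import junit.framework.Test;
        import junit.framework.TestSuite;

        public class AllTests extends TestSuite {
            public static Test suite() {
                // includePackages() expects package names such as "com.app.myapp.test",
                // not file-system paths; build() then collects every test found there.
                return new TestSuiteBuilder(AllTests.class)
                        .includePackages("com.app.myapp.test")
                        .build();
            }
        }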

    Read the article

  • Online Launch of 3 new Telerik products JustMock, TeamPulse and WebUI Test Studio

    As you probably already know, we have introduced 3 new products in the last 10 days, two of them at DevConnections in Las Vegas alone. If you didn't get a chance to attend DevConnections, we have organized an online launch so you get to see our new products firsthand. Here is the schedule:

      - Introduction to Telerik JustMock: Tuesday, April 20 @ 11am ET

    Join the online launch of JustMock, a new developer productivity tool from Telerik designed to make it easy to create unit tests. In this webinar you will find out what is in the current release and learn about JustMock's future. JustMock cuts your development time and helps you create better unit tests without requiring you to change your code. It allows you to perform fast and controlled tests that are independent of external dependencies like databases, web services, or proprietary code. With JustMock, there are ...

    Read the article

  • Hiring Developers - Any tips on being more efficient?

    - by DotnetDude
    I represent a software company that is in the process of building a large software development team. We are picky about who we hire and have a really good retention rate (most of the devs have been here for an average of 5-6 years). We've been spending a lot of developer and HR time, and we have a low applications-to-hires ratio. Here's the process we use:

      - HR phone interview: basic behavioral and tech questions
      - Online test: a 30-minute technical test
      - Technical phone interview: a 60-minute interview by a developer
      - Onsite interview: a 60-90 minute interview by several senior developers

    Although this process has been working, we've been spending way too much time on interviews. Any thoughts on how this could be done differently? Our goal is to automate tasks where possible while still retaining the quality of talent.

    Read the article

  • 12.04 on Pentium Dual Core with 1GB of RAM running slow

    - by Alex
    Hey, I have a Lenovo ThinkPad laptop with Ubuntu 12.04 installed, and it runs slow. I tried "System Profiler and Benchmark" to test the computer, but the application quits and closes after the first few benchmark tests, before it even gets to the other tests. So I tried "Hardinfo", installed from the Puppy Linux live CD; it did the same thing (the apps look just alike). Memory usage isn't the problem on this PC; it's the CPU. Just running the "System Profiler" app that comes with Ubuntu uses about 34% on each core, while by default, with nothing running, it's 5-10% on each core. I can't really find what the deal is, other than that Ubuntu is a CPU hog, so I'm testing Unity 2D at the moment to see how it goes. If you have any other suggestions, feel free to answer this question. Thanks.

    Read the article

  • Emulating Test::More::done_testing - what is the most idiomatic way?

    - by DVK
    I have to build unit tests in an environment with a very old version of Test::More (Perl 5.8, with $Test::More::VERSION being '0.80'), which predates the addition of done_testing(). Upgrading to a newer Test::More is out of the question for practical reasons. And I am trying to avoid using no_plan: not catching when your unit test dies prematurely is generally a bad idea. What is the most idiomatic way of running a configurable number of tests, assuming neither no_plan nor done_testing() is used? Details: my unit tests usually take the form:

        use Test::More;

        my @test_set = (
            [ "Test #1", $param1, $param2, ... ],
            [ "Test #2", $param1, $param2, ... ],
            # ...
        );

        foreach my $test (@test_set) {
            run_test($test);
        }

        sub run_test {
            # $expected_tests += count_tests($test);
            ok(test1($test)) || diag("Test1 failed");
            # ...
        }

    The standard approach of use Test::More tests => 23; or BEGIN { plan tests => 23 } does not work, since both are executed before @test_set is known. My current approach involves making @test_set global and defining it in a BEGIN {} block, as follows:

        use Test::More;

        BEGIN {
            our @test_set = (); # Same set of tests as above
            my $expected_tests = 0;
            foreach my $test (@test_set) {
                $expected_tests += count_tests($test);
            }
            plan tests => $expected_tests;
        }

        our @test_set; # Must do!!! Since the first "our" was in BEGIN's scope :(

        foreach my $test (@test_set) { # Same as above
            run_test($test);
        }

        sub run_test {} # Same as above

    I feel this can be done more idiomatically, but I'm not certain how to improve it. Chief among the smells is the duplicate our @test_set declaration, once in BEGIN {} and once after it.

    Read the article

  • Regular expression test can't decide between true and false (JavaScript)

    - by nw
    I get this behavior in both Chrome (Developer Tools) and Firefox (Firebug). Note that the regex test returns alternating true/false values:

        > var re = /.*?\bbl.*\bgr.*/gi;
        undefined
        > re
        /.*?\bbl.*\bgr.*/gi
        > re.test("Blue-Green");
        true
        > re.test("Blue-Green");
        false
        > re.test("Blue-Green");
        true
        > re.test("Blue-Green");
        false

    However, testing the same regex as a literal:

        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true
        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true
        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true
        > /.*?\bbl.*\bgr.*/gi.test("Blue-Green");
        true

    I can't explain this, and it's making debugging very difficult. Can anyone explain this behavior?

    Read the article

  • A basic load test question

    - by user236131
    I have a very basic load-test question. I am running a load test using VSTS 2008, and I have a test rig with a controller + 10 agents. The load test runs against a SharePoint farm I have. My goal for the load test is to find out the resource utilization on the web/app/DB tiers of my farm for any given load scenario. An example load scenario is:

      - Usage profile: Average collaboration (as defined by SCCP)
      - User load: 500 (using a step load pattern: a step of 50 every 2 mins, with a warm-up time of 2 mins for every step)
      - Think time: 0
      - Load duration: 8 hrs

    Now, the question is: is it fair to expect that metrics like requests/sec, % processor time on the web front end / app / DB tiers, tests/sec, etc. become flat or enter a steady state at some point during the load test? Like I said, the goal is not to create a bottleneck but only to measure the utilization of resources under the above load profile. I am asking this question because I see something different. At one point in the load test, requests/sec becomes more or less flat, but processor utilization on the web/DB servers keeps increasing. After digging through the data a bit, I see that the "tests running" counter also steadily increased over time. So if I ran the load test for more than 8 hrs, % processor might go up further. This way, I don't know what to consider as the load exerted by the load profile. What does this "tests running" counter really signify, and how is it different from tests/sec? Another question: how can I find out why the "tests running" counter shows an increase over time? Thanks for your time.

    Read the article

  • Pass command line arguments to JUnit test case being run programmatically

    - by __nv__
    I am attempting to run a JUnit test from a Java class with:

        JUnitCore core = new JUnitCore();
        core.addListener(new RunListener());
        core.run(classToRun);

    The problem is that my JUnit test requires a database connection that is currently hardcoded in the JUnit test itself. What I am looking for is a way to run the JUnit test programmatically (as above) but pass in a database connection that I create in the Java class that runs the test, rather than hardcoding it within the JUnit class. Basically something like:

        JUnitCore core = new JUnitCore();
        core.addListener(new RunListener());
        core.addParameters(java.sql.Connection); // wished-for API; does not exist
        core.run(classToRun);

    Then, within classToRun:

        @Test
        public void Test1(Connection dbConnection) {
            Statement st = dbConnection.createStatement();
            ResultSet rs = st.executeQuery("select total from dual");
            rs.next();
            String myTotal = rs.getString("TOTAL");
            // btw, my tests are Selenium test cases :)
            selenium.isTextPresent(myTotal);
        }

    I know about @Parameters, but it doesn't seem applicable here, as it is more for running the same test case multiple times with differing values. I want all of my test cases to share a database connection that I pass in through a configuration file to my Java client, which then runs those test cases (also specified in the configuration file). Is this possible? P.S. I understand this seems like an odd way of doing things.
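
    For what it's worth, JUnit 4 @Test methods cannot take parameters and JUnitCore has no addParameters(), so one common workaround is to park the connection in a shared holder before invoking the runner. A minimal sketch, assuming a hypothetical TestEnv holder and ClassToRun test class (neither is a JUnit API), with connection details that would really come from your configuration file:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import org.junit.runner.JUnitCore;
        import org.junit.runner.notification.RunListener;

        // Hypothetical holder shared between the runner and the tests.
        class TestEnv {
            private static Connection connection;
            static void set(Connection c) { connection = c; }
            static Connection get() { return connection; }
        }

        public class TestRunner {
            public static void main(String[] args) throws Exception {
                // In practice, read the URL and credentials from the configuration file.
                Connection db = DriverManager.getConnection(
                        "jdbc:oracle:thin:@dbhost:1521:orcl", "scott", "tiger");
                TestEnv.set(db);                  // publish before the tests run
                JUnitCore core = new JUnitCore();
                core.addListener(new RunListener());
                core.run(ClassToRun.class);       // tests call TestEnv.get() inside
            }
        }

    Inside ClassToRun, each test method then takes no arguments and fetches the shared connection with TestEnv.get() instead of receiving it as a parameter.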

    Read the article

  • New TFS Template Available - "Agile Dev in a Waterfall Environment" - GovDev

    - by Hosam Kamel
      Microsoft Team Foundation Server (TFS) 2010 is the collaboration platform at the core of Microsoft's application lifecycle management solution. In addition to core features like source control, build automation and work-item tracking, TFS enables teams to align projects with industry processes such as Agile, Scrum and CMMI via the use of customizable XML process templates. Since 2005, TFS has been a welcome addition to the Microsoft developer tool line-up for government agencies of all sizes and missions. However, many government development teams consistently struggle to leverage an iterative development process while still providing the structure, visibility and status reporting required by many waterfall-centric government project methodologies. GovDev is an open-source TFS process template that combines the formality of CMMI/waterfall with the flexibility of Agile/iterative. The GovDev for TFS Accelerator also implements two new custom reports to support the customized process and provide real-time visibility across the lifecycle, with full traceability and drill-down to tasks, tests and code. The TFS Accelerator contains:

      - A custom TFS process template that implements a requirements-centric, yet iterative, process with extreme traceability throughout the lifecycle.
      - A custom "Requirements Traceability Report" that provides a single view of traceability for the project. Within the Traceability Report, you can also view live status indicators and click through to the individual assets (even changesets).
      - A custom report that focuses on "Contributions by Team Member", tracking things like number of check-ins and net lines added.
      - Fully integrated documentation on the entire process and features.

    For a 45-minute demo of GovDev, visit: https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032508359&culture=en-us Download it from CodePlex here. Originally posted at "Hosam Kamel | Developer & Platform Evangelist" http://blogs.msdn.com/hkamel

    Read the article

  • Click No Browse: How to Navigate Objects Without Opening Them

    - by thatjeffsmith
    Oracle SQL Developer by default automatically opens the object editor when you click on an object in your connection tree or schema browser. For most folks this is very convenient. But if you are selecting objects to drag them to a model or to the worksheet, this can get annoying, as the focus of the screen changes when you don't want it to. The other scenario where this feature might disrupt more than delight is when you want to click around the database in the tree: every time you click on an object, the object editor automatically switches to the selected object. You can disable this automatic browsing behavior in SQL Developer by modifying this preference: Tools > Preferences > Database > ObjectViewer > Open Object on Single Click. Disable it if you don't want an object to open when you click on it. OK, I realize my description of the problem may have confused the heck out of you just now. So instead of more words, how about a couple of animations of the object-click behavior with the option on and off? [Animation: preference disabled - a single click does not open the object, a double click does.] [Animation: preference enabled (default) - as you click on objects, they are automatically opened.]

    Read the article

  • Including BLOB images in your PDF Reports

    - by thatjeffsmith
    Earlier this year we walked through how to work with BLOBs in Oracle SQL Developer. So you already know how to INSERT, UPDATE and view the BLOBs stored in your tables. But now I want to show you how to include those images in your PDF reports. You know how to work with SQL Developer reports, right? No? OK, let's take a quick trip down memory lane then:

      - How to build a bar chart
      - Child reports: click on a parent record for on-the-fly child records

    Alright. If you have a grid report that contains a BLOB column, you have the option of including the BLOB contents when you create a PDF export. [Screenshot: at design time, specify how you want the BLOB content to be treated when you export to PDF.] Note that you must specify the treatment of the BLOBs in the report design; you won't be prompted when you launch the Export wizard dialog. When you open your PDF, there will be a link to the image. Click it, then confirm, and it will launch the default image viewer on your machine. I hope your pictures are more exciting than mine.

    Read the article

  • Homoscedasticity test for two-way ANOVA

    - by aL3xa
    I've been using var.test and bartlett.test to check basic ANOVA assumptions, among others homoscedasticity (homogeneity, equality of variances). The procedure is quite simple for one-way ANOVA:

        bartlett.test(x ~ g) # where x is numeric and g is a factor
        var.test(x ~ g)

    But for 2x2 tables, i.e. two-way ANOVAs, I want to do something like this:

        bartlett.test(x ~ c(g1, g2)) # or with a list:
        var.test(x ~ list(g1, g2))

    Of course, ANOVA assumptions can be checked with graphical procedures, but what about "an arithmetic option"? Is that manageable at all? How do you test homoscedasticity in two-way ANOVA?

    Read the article

  • JUnit Best Practice: Different Fixtures for each @Test

    - by Juri Glass
    Hi, I understand that there are @Before and @BeforeClass, which are used to define fixtures for the @Tests. But what should I use if I need a different fixture for each @Test? Should I define the fixture inside the @Test? Should I create a test class for each @Test? I am asking for best practices here, since neither solution seems clean to me. With the first solution, I would be testing the initialization code as well. And with the second solution, I would break the "one test class per class" pattern.
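
    As a point of comparison, here is a minimal sketch of the first option, using hypothetical Account/AccountService classes: the setup that every test shares stays in @Before, while each @Test builds its own fixture through a small named helper, so the per-test initialization is a single readable line.

        import static org.junit.Assert.assertFalse;
        import static org.junit.Assert.assertTrue;
        import org.junit.Before;
        import org.junit.Test;

        public class AccountServiceTest {
            private AccountService service; // hypothetical class under test

            @Before
            public void setUp() {
                service = new AccountService(); // only the setup all tests share
            }

            // Per-test fixtures live in named helpers instead of shared fields
            private Account emptyAccount()  { return new Account(0);   }
            private Account fundedAccount() { return new Account(100); }

            @Test
            public void withdrawFailsOnEmptyAccount() {
                assertFalse(service.withdraw(emptyAccount(), 10));
            }

            @Test
            public void withdrawSucceedsWhenFunded() {
                assertTrue(service.withdraw(fundedAccount(), 10));
            }
        }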

    Read the article

  • Unit Testing in QTestLib - running single test / tests in class / all tests

    - by Dave
    I'm just starting to use QTestLib. I have gone through the manual and the tutorial. Although I understand how to create tests, I'm just not getting how to make those tests convenient to run. My unit-testing background is NUnit and MSTest. In those environments it was trivial (using a GUI, at least) to alternate between running a single test, all tests in a single test class, or all tests in the entire project, just by clicking the right button. All I'm seeing in QTestLib is that you either use the QTEST_MAIN macro to run the tests in a single class, then compile and test each file separately, or use QTest::qExec() in main() to define which objects to test, and then manually change that and recompile when you want to add or remove test classes. I'm sure I'm missing something. I'd like to be able to easily:

      - run a single test method,
      - run the tests in an entire class,
      - run all tests.

    Any of those would call the appropriate setup/teardown functions.

    Read the article

  • Perl unit test - start a TCP server & continue

    - by John
    I am trying to write a unit test for a client-server application. To test the client, in my unit test I want to first start my TCP server (which itself is another Perl file). I tried to start the TCP server by forking:

        if (!fork()) {
            system("$^X server.pl") == 0
                or die "couldn't start server";
        }

    So when I call "make test" after "perl Makefile.PL", this test starts, and I can see the server starting, but after that the unit test just hangs there. So I guess I need to start this server in the background, and I tried the "&" at the end to force it to start in the background, but I still couldn't succeed. What am I doing wrong? Thanks.

    Read the article

  • MS Test: How do I enforce the exception message with the ExpectedException attribute?

    - by CRice
    I thought these two tests should behave identically; in fact, I wrote the test in my project using MS Test, only to find out now that it does not respect the expected message in the same way that NUnit does. NUnit (fails):

        [Test, ExpectedException(typeof(System.FormatException), ExpectedMessage = "blah")]
        public void Validate()
        {
            int.Parse("dfd");
        }

    MS Test (passes):

        [TestMethod, ExpectedException(typeof(System.FormatException), "blah")]
        public void Validate()
        {
            int.Parse("dfd");
        }

    No matter what message I give the MS Test version, it passes. Is there any way to get the MS Test to fail if the message is not right? Can I even create my own exception attribute? I would rather not have to write a try/catch block for every test where this occurs.

    Read the article

  • Maven - Selenium - Possible to run only one test

    - by Jonas Söderström
    Hi, we are using JUnit + Selenium for our web tests. We use Maven to run them and to build a Surefire report. The test suite is pretty large and takes a while to run, and sometimes single tests fail because the browser won't start. I want to be able to rerun a SINGLE test using Maven, so I can retest the tests that failed and update the report. I can use mvn test -Dtest=TESTCLASSNAME to run all the tests in one test class, but this is not good enough, since it takes about 10 minutes to run all the tests in our most complicated test classes, and it's very likely that some other test will fail (because the browser won't start), which will mess up my report. I know I can run one test from Eclipse, but that is not what I am looking for. Any help on this would be very appreciated.
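
    Note that recent versions of the Maven Surefire plugin (roughly 2.7.3 onward, with JUnit 4) also accept a method selector in the same property, which would let one failed test be rerun on its own; a hedged example, with a hypothetical class and method name:

        mvn test -Dtest=CheckoutFlowTest#testLogin

    Older Surefire versions do not understand the #method suffix, so whether this works depends on the plugin version in the build.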

    Read the article

  • Unit test this - a simple method, but I don't know what to test!

    - by user309705
    A very simple method, but I don't know what to test! I'd like to test this method in the business logic layer; _dataAccess apparently comes from the data layer.

        public DataSet GetLinksByAnalysisId(int analysisId)
        {
            DataSet result = new DataSet();
            result = _dataAccess.SelectAnalysisLinksOverviewByAnalysisId(analysisId);
            return result;
        }

    All I'm really testing is that _dataAccess.SelectAnalysisLinksOverviewByAnalysisId() gets called! Here's my test code (using Rhino Mocks):

        [TestMethod]
        public void Test()
        {
            var _dataAccess = MockRepository.GenerateMock<IDataAccess>();
            _dataAccess.Expect(x => x.SelectAnalysisLinksOverviewByAnalysisId(_settings.UserName, 0, out dateExecuted));
            var analysisBusinessLogic = new AnalysisLinksBusinessLogic(_dataAccess);
            analysisBusinessLogic.GetLinksByAnalysisId(_settings, 0);
            _dataAccess.VerifyAllExpectations();
        }

    If you were writing the test for this method, what would you test against? Many thanks!

    Read the article

  • Rails test across multiple environments

    - by DSimon
    Is there some way to change Rails environments midway through a test? Or, alternately, what would be the right way to set up a test suite that can start up Rails in one environment, run the first half of my test in it, then restart Rails in another environment to finish the test? The two environments have separate databases. Some necessary context: I'm writing a Rails plugin that allows multiple installations of a Rails app to communicate with each other with user assistance, so that a user without Internet access can still use the app. They'll run a local version of the app and upload their work to the online app by saving a file to a thumb drive and taking it to an Internet cafe. The plugin adds two special environments to Rails: "offline-production" and "offline-test". I want to write functional tests that involve both the "test" and "offline-test" environments, representing the main online version of the app and the local offline version respectively.

    Read the article

  • adding org.springframework.test package to spring-2.5.6-SEC01.jar in netbeans

    - by John
    This is probably very simple; however, I can't get it to work. I want to use the AbstractDependencyInjectionSpringContextTests class, which is in the org.springframework.test package. This package is not included in NetBeans' Spring library, so I want to add it. Here is what I have tried so far:

      - Copy and paste the "test" directory (downloaded from Spring) into NetBeans' spring-2.5.6-SEC01.jar file (copying it into the org.springframework directory in that jar, so I can import org.springframework.test). If I go to Project > Libraries in NetBeans, it is there, but when I try to import org.springframework.test.*, autocompletion doesn't offer the test directory from the org.springframework package.
      - Create a new library that points to the "test" directory and add it to the project. As there is no jar file in "test", I'm not sure what path I should use to import it.

    I'm pretty sure this is something very simple, but I'm still a novice and can't figure it out.

    Read the article

  • Cannot get principal id on my Spock test

    - by Ant's
    I have a controller action like this:

        @Secured(['ROLE_USER','IS_AUTHENTICATED_FULLY'])
        def userprofile() {
            def user = User.get(springSecurityService.principal.id)
            params.id = user.id
            redirect(action: "show", params: params)
        }

    I want to test the controller above in Spock, so I wrote a unit test like this:

        def 'userProfile test'() {
            setup:
            mockDomain(User, [new User(username: "amtoasd", password: "blahblah")])

            when:
            controller.userprofile()

            then:
            response.redirectUrl == "/user/show/1"
        }

    When I run my test, it fails with this error message:

        java.lang.NullPointerException: Cannot get property 'principal' on null object
        at mnm.schedule.UserController.userprofile(UserController.groovy:33)

    And in the case of the integration test:

        class UserSpec extends IntegrationSpec {
            def springSecurityService

            def 'userProfile test'() {
                setup:
                def userInstance = new User(username: "antoaravinth", password: "secrets").save()
                def userInstance2 = new User(username: "antoaravinthas", password: "secrets").save()
                def usercontroller = new UserController()
                usercontroller.springSecurityService = springSecurityService

                when:
                usercontroller.userprofile()

                then:
                response.redirectUrl == "/user/sho"
            }
        }

    I get the same error. What went wrong? Thanks in advance.

    Read the article
