Search Results

Search found 11400 results on 456 pages for 'automated testing'.

Page 24 of 456

  • Regression Testing and Deployment Strategy

    - by user279516
    I'd like some advice on a deployment strategy. If a development team creates an extensive framework, and many (20-30) applications consume it, and the business would like application updates at least every 30 days, what is the best deployment strategy? The reason I ask is that there seems to be a lot of waste (and risk) in an agile approach of deploying changes monthly when 90% of the applications don't change. What I mean is that the framework can change during the month, and so can a few applications. Because the framework changed, all applications should be regression-tested. If, say, 10 of the applications don't change at all during the year, those 10 applications are still regression-tested EVERY MONTH, even though they had no feature changes or hot fixes. They have to be tested simply because the business is rolling updates every month. And consider the risk involved: if a mission-critical application takes a few weeks, and multiple departments, to test, is it realistic to expect to regression-test it constantly? One option is to make any framework updates backward-compatible. While this would mean that applications don't need to change their code, they would still need to be tested because the underlying framework changed. The risk is also great: a constantly changing framework (and constant redeployment of that framework) means the mission-critical app never gets to sit on a stable code base for long. These applications share the same database, hence the need for the constant testing. I'm aware of TDD and automated tests, but they don't exist at the moment. Any advice?

    Read the article

  • Testing approach for multi-threaded software

    - by Shane MacLaughlin
    I have a piece of mature geospatial software that has recently had areas rewritten to take better advantage of the multiple processors available in modern PCs. Specifically, display, GUI, spatial searching, and main processing have all been hived off to separate threads. The software has a pretty sizeable GUI automation suite for functional regression, and another smaller one for performance regression. While all automated tests are passing, I'm not convinced that they provide nearly enough coverage in terms of finding bugs relating to race conditions, deadlocks, and other nasties associated with multi-threading. What techniques would you use to see if such bugs exist? What techniques would you advocate for rooting them out, assuming there are some in there to root out? What I'm doing so far is running the GUI functional automation on the app under a debugger, so that I can break out of deadlocks and catch crashes, and I plan to make a bounds-checker build and repeat the tests against that version. I've also carried out a static analysis of the source via PC-Lint in the hope of locating potential deadlocks, but haven't had any worthwhile results. The application is C++, MFC, multiple document/view, with a number of threads per document. The locking mechanism I'm using is based on an object that holds a pointer to a CMutex, which is locked in the ctor and freed in the dtor. I use local variables of this object to lock various bits of code as required, and my mutex has a timeout that fires a warning if the timeout is reached. I avoid locking where possible, using resource copies instead. What other tests would you carry out?

    Read the article

  • Best current framework for unit testing EJB3 / JPA

    - by kennygrimm
    Starting a new project using EJB 3 / JPA, mainly stateless session beans and batch jobs. I've used JUnit in the past on standard Java webapps and it seemed to work pretty well. In EJB 2, unit testing was a pain and required a running container such as JBoss to make the calls into. Now that we're going to be working in EJB 3 / JPA, I'd like to know what companies are using to write and run these tests. Are JUnit and JMock still considered relevant, or are there newer frameworks that have come around that we should investigate?

    Read the article

  • Unit Testing iPhone - Linker Errors

    - by sliver
    I've followed the Unit Testing Applications guide from the iPhone development documentation. I followed all the steps and it worked with the TestCase from the documentation. But as soon as I changed the TestCase to test real code from my project, I ended up with linker errors: all classes used in the TestCase are reported as missing. I've already searched the internet and found that the Bundle Loader property must be set to "$(BUILT_PRODUCTS_DIR)/MyApplication.app/Contents/MacOS/MyApplication". But this also fails because the file could not be found. Any ideas what I have to do to tell the linker where to search for the missing files?

    Read the article

  • iPhone - strange error when testing on simulator

    - by lostInTransit
    Hi, I was testing my app on the simulator when it crashed on clicking a button of a UIAlertView. I stopped debugging there, made some changes to the code, and built the app again. Now when I run the application, I get this error in the console: "Couldn't register com.myApp.debug with the bootstrap server. Error: unknown error code. This generally means that another instance of this process was already running or is hung in the debugger." Program received signal: “SIGABRT”. I tried removing the app from the simulator and doing a clean build, but I still get this error when I try to run the app. Can someone please let me know what I should do to be able to run the app on my simulator again? Thanks.

    Read the article

  • Nginx A/B testing

    - by Alex
    Hey, I'm trying to do A/B testing and I'm using Nginx for this purpose. My Nginx config file looks like this:

        events {
            worker_connections 1024;
        }

        error_log /usr/local/experiments/apps/reddit_test/error.log notice;

        http {
            rewrite_log on;

            server {
                listen 8081;
                access_log /usr/local/experiments/apps/reddit_test/access.log combined;

                location / {
                    if ($remote_addr ~ "[02468]$") {
                        rewrite ^(.+)$ /experiment$1 last;
                    }
                    rewrite ^(.+)$ /main$1 last;
                }

                location /main {
                    internal;
                    proxy_pass http://www.reddit.com/r/lisp;
                }

                location /experiment {
                    internal;
                    proxy_pass http://www.reddit.com/r/haskell;
                }
            }
        }

    This is kind of working, but CSS and JS files won't load. Can anyone tell me what's wrong with this config file, or what would be the right way to do it? Thanks, Alex

    Read the article

  • Selecting Users For A/B (Champion/Challenger) Testing

    - by Gordon Guthrie
    We have a framework that offers A/B split testing. You have a 'champion' version of a page and you develop a 'challenger' version of it. Then you run the website, allocate some of your users the champion and some the challenger, and measure their different responses. If the challenger is better than the champion at achieving your metrics, then you dethrone the champion and develop a new challenger... My question is: what mechanisms should I consider for allocating the versions? A number of options spring to mind:

    - odd or even IP address (or sub-segments): ****.****.****.123 gets champion but .124 gets challenger
    - cookie push: check for a champion/challenger cookie; if it doesn't exist, randomly allocate the user to one and push the cookie

    Best practice? Suggestions? Comments? Experience?
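
    Either option can be made deterministic by hashing a stable user key (an IP address or a first-visit cookie value) into a bucket, so the same visitor always sees the same version. A minimal sketch in Python; assign_variant, CHALLENGER_FRACTION, and the 50/50 split are illustrative assumptions, not part of the framework described above:

        import hashlib

        CHALLENGER_FRACTION = 0.5  # assumed 50/50 split between versions

        def assign_variant(user_key: str) -> str:
            # Hash a stable key (IP address, session id, or cookie value) so the
            # same visitor always lands in the same bucket, with no server state.
            digest = hashlib.md5(user_key.encode("utf-8")).hexdigest()
            bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to [0, 1]
            return "challenger" if bucket < CHALLENGER_FRACTION else "champion"

        # The same key gets the same variant on every request.
        print(assign_variant("203.0.113.42"))
        print(assign_variant("203.0.113.42"))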

    Read the article

  • Documenting user scenarios and measuring/testing

    - by Rimian
    Please forgive me as I don't quite remember the exact terms for what I am talking about... hence my question. Recently I worked on a large Agile team where I encountered a method of defining user scenarios (much like user stories). These scenarios were a few very basic short sentences with keywords and a structure that could be understood by humans (especially project managers) and could also be coded against using a Java framework (for verifying tests). The exact structure of this mini-language used keywords like "when", "and", or "if", which is how the framework parsed and verified the result. The purpose of this framework was to act as an interface between management and the acceptance testing framework, so essentially management could write the tests themselves in English. A scenario went something like this:

        When a user visits URL
        And user clicks on X
        Something happens (that can be measured)

    Can anyone help me remember exactly what I am talking about? Many thanks

    Read the article

  • Unit testing .Net CF apps on Windows Mobile 6.5.3 in Visual Studio 2008

    - by Johann Gerell
    Did anyone get that to work? I mean, unit testing .Net CF apps on Windows Mobile 6.5.3 in Visual Studio 2008. It works great for a WM 6 Pro target, but not for a WM 6.5.3 target. I get this error: The test adapter ('Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestAdapter, Microsoft.VisualStudio.QualityTools.Tips.UnitTest.Adapter, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a') required to execute this test could not be loaded. Check that the test adapter is installed properly. Not enough storage is available to process this command. Yes, I can read the error text, but I don't understand the failed run. Any clues?

    Read the article

  • Testing harness for online teaching?

    - by candeira
    I have been asked to teach an online programming course, and I am looking for a test harness especially geared to education. Some students will have significant coding experience, but others will be total newbies. The course is an introduction to software development, mostly taught in C with some C++ and Java thrown in. In any case, I would like to read their source code only after a test suite has made sure that it compiles and executes properly. The students will also benefit from having a tool they can check their code against before submitting it. However, the Learning Management System my employer is using doesn't have such a system. Do you know of any LMS software that includes this feature? Which testing harness would you recommend in case I have to roll my own?

    Read the article

  • Unit Testing Codeigniter Classes with fooStack - clashes

    - by DrPep
    I'm having 'fun' testing interactions in a CodeIgniter based web app. It seems that when running the entire test suite ("phpunit AllTests.php") it loads all of the test classes and their targets (systems under test) and creates a PHPUnit_Framework_TestSuite instance, which presumably iterates over the classes that extend CIUnit_TestCase and runs them. The problem comes when you have multiple classes referencing another class (such as a library). As all the classes are loaded into the same process space, PHP reports "cannot redefine class xyz". Have I missed something here, or am I doing something heinously wrong? In my test class I'm doing something like:

        include_once dirname(__FILE__).'/../CIUnit.php';
        include_once dirname(__FILE__).'/../../libraries/ProductsService.php';

        class testProductsService extends CIUnit_TestCase
        {
            public function testGetProducts_ReturnsArrayOfProducts() {
                $service = new ProductsService();
                $products = $service->getProducts();
                $this->assertTrue(is_array($products));
            }
        }

    The problem manifests because I have a controller which does:

        $this->load->library('ProductsService');

    Read the article

  • Python unit-testing with nose: Making sequential tests

    - by cool-RR
    I am just learning how to do unit-testing. I'm on Python / nose / Wing IDE. (The project that I'm writing tests for is a simulations framework, and among other things it lets you run simulations both synchronously and asynchronously, and the results of the simulation should be the same in both.) The thing is, I want some of my tests to use simulation results that were created in other tests. For example, synchronous_test calculates a certain simulation in synchronous mode, but then I want to calculate it in asynchronous mode, and check that the results came out the same. How do I structure this? Do I put them all in one test function, or make a separate asynchronous_test? Do I pass these objects from one test function to another? Also, keep in mind that all these tests will run through a test generator, so I can do the tests for each of the simulation packages included with my program.
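
    One structure that keeps the tests independent while avoiding recomputation is to run both simulations once in a module-level setup and have each test read from a shared cache. A minimal sketch; run_simulation() and its mode argument are hypothetical stand-ins for the real framework call:

        # test_simulation_modes.py
        _results = {}

        def run_simulation(mode):
            # Hypothetical stand-in for the real simulation entry point.
            return {"answer": 42}

        def setup_module():
            # nose calls this once per module, so the (possibly slow) simulations
            # run a single time and every test reads from the shared cache.
            _results["sync"] = run_simulation(mode="synchronous")
            _results["async"] = run_simulation(mode="asynchronous")

        def test_synchronous_run_completes():
            assert _results["sync"] is not None

        def test_async_matches_sync():
            # The actual cross-mode check: both modes should give identical results.
            assert _results["sync"] == _results["async"]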

    Read the article

  • Recommended Model Based Testing Tools

    - by ModelTester
    Does anyone have any suggestions on what model-based testing tools to use? Is Spec Explorer/Spec# worth its weight in tester training? What I have traditionally done is create a Visio model where I call out the states and the associated variables, outputs, and expected results for each state. Then, in a completely disconnected way, I data-drive my test scripts with those variables based on that model. But they are not connected. I want a way to create a model, associate the variables in a business-friendly way, and have that build the data parameters for the scripts. I can't be the first person to need this. Is there a tool out there that will do basically that, short of developing it myself?

    Read the article

  • Unit Testing, IDataContext and Stored Procedures via Linq

    - by Terry_Brown
    Hey folks, I'm currently using Stephen Walther's approach to unit testing LINQ to SQL and the DataContext (http://stephenwalther.com/blog/archive/2008/08/17/asp-net-mvc-tip-33-unit-test-linq-to-sql.aspx), which is working a treat for all things LINQ. My DAL layer takes in an IDataContext, I use DI (via Unity) to resolve that to the concrete DataContext object, and all works well. But we have a company policy here that writes go via stored procs. Has anyone come up with a solution for mocking/faking the data context while still allowing stored procs to be unit tested effectively? I've got no real experience of any of the mocking frameworks (Rhino etc.), so if these are the correct means of doing this, I'd love some pointers to guides for them. Many thanks, Terry

    Read the article

  • Working with a Java Mail Server for Testing

    - by Charlie
    I'm in the process of testing an application that takes mail out of a mailbox, performs some action based on the content of that mail, and then sends a response mail depending on the result of the action. I'm looking for a way to write tests for this application. Ideally, I'd like these tests to bring up their own mail server, push my test emails to a folder on this mail server, and have my application scrape the mail out of the mail server that my test started. Configuring the application to use the mail server is not difficult, but I do not know where to look for a programmatic way of starting a mail server in Java. I've looked at JAMES, but I am unable to figure out how to start the server from within my test. So the question is this: what can I use for a mail server in Java that I can configure and start entirely within Java?

    Read the article

  • Web service performance testing plan, Microsoft .NET WS, SQL

    - by zxed
    Trying to answer a question to come up with a testing plan. It has to do with using a website and/or web service that queries a SQL Server to get data and display it to the user.

    - The solution must be able to handle an estimated 2,000 users, approximately 700 concurrent users, and 10,000+ website hits a month. Database calls should handle 100,000 queries via the website/web service a month. The system is used at multiple times during a 24-hour period; however, networking and bandwidth traffic decrease after 5 pm.
    - Two Windows 2003 servers are used, one for web, another for SQL. Both are located in the same room. User access is varied and users can be far/near (it's a centralized system); users access via www.
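
    To sanity-check the concurrency target before a formal plan and tooling are in place, one option is a small script that fires requests from many workers at once and reports latency percentiles. A minimal sketch in Python; the endpoint URL and the request counts are placeholders, not part of the system described above:

        import time
        from concurrent.futures import ThreadPoolExecutor
        from urllib.request import urlopen

        ENDPOINT = "http://test-server/service/query?id=1"   # placeholder URL
        CONCURRENT_USERS = 700                                # target from the plan
        REQUESTS_PER_USER = 5

        def one_user(_):
            # Each simulated user issues a few requests and records latencies.
            latencies = []
            for _ in range(REQUESTS_PER_USER):
                start = time.time()
                with urlopen(ENDPOINT, timeout=30) as response:
                    response.read()
                latencies.append(time.time() - start)
            return latencies

        with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
            samples = [t for user in pool.map(one_user, range(CONCURRENT_USERS)) for t in user]

        samples.sort()
        print("median %.3fs, 95th percentile %.3fs"
              % (samples[len(samples) // 2], samples[int(len(samples) * 0.95)]))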

    Read the article

  • Grails Testing hiccups

    - by egervari
    I have two testing questions. Both are probably easily answered. The first is that I wrote this unit test in Grails:

        void testCount() {
            mockDomain(UserAccount)
            new UserAccount(firstName: "Ken").save()
            new UserAccount(firstName: "Bob").save()
            new UserAccount(firstName: "Dave").save()
            assertEquals(3, UserAccount.count())
        }

    For some reason, I get 0 returned back. Did I forget to do something? The second question is for those who use IDEA. What should I be running - IDEA's JUnit tests, or Grails targets? I have two options. Also, why does IDEA say that my tests pass and give a green light even though the test above actually fails? This will really drive me nuts if I have to check the test reports in HTML every time I run my tests..... Help?

    Read the article

  • Legacy Database, Fluent NHibernate, and Testing my mappings

    - by sdanna
    As the post title implies, I have a legacy database (not sure if that matters), I'm using Fluent NHibernate, and I'm attempting to test my mappings using the Fluent NHibernate PersistenceSpecification class. My question is really a process one: I want to run these tests when I build locally in Visual Studio, using the built-in unit testing framework for now. Obviously this implies (I think) that I'm going to need a database. What are some options for getting this into the build? If I use an in-memory database, does NHibernate or Fluent NHibernate have some mechanism for pulling the database schema from a target database, or can the in-memory database do this? Will I need to manually get the schema to feed to an in-memory database? Ideally I would like to get this set up so that the other developers don't really have to think about it, other than when they break the build because the tests don't pass.

    Read the article

  • Rails Unit Testing with MyISAM Tables

    - by tadman
    I've got an application that requires the use of MyISAM on a few tables, but the rest are the traditional InnoDB type. The application itself is not concerned with transactions where it applies to these records, but performance is a concern. The Rails testing environment assumes the engine used is transactional, though, so when the test database is generated from the schema.rb it is imported with the same engine. Is it possible to over-ride this behaviour in a simple manner? I've resorted to an awful hack to ensure the tables are the correct type by appending this to test_helper.rb:

        (ActiveRecord::Base.connection.select_values("SHOW TABLES") - %w[ schema_info ]).each do |table_name|
          ActiveRecord::Base.connection.execute("ALTER TABLE `#{table_name}` ENGINE=InnoDB")
        end

    Is there a better way to make a MyISAM-backed model be testable?

    Read the article

  • Unit Testing Model Classes that inherit from NSManagedObject

    - by Matt Baker
    So...I'm trying to get unit tests set up in my iPhone App but I'm having some issues. I'm trying to test my model classes but they inherit directly from NSManagedObject. I'm sure this is a problem but I don't know how to get around it. Everything is building and running as expected but I get this error when calling any method on the class I'm testing: Unknown.m:0:0 unrecognized selector sent to instance 0xc2b120 If I follow this structure (http://chanson.livejournal.com/115621.html) to create my object in my tests I end up with another error entirely but it still doesn't help me. Basically my question is this: how can I test a class that inherits from NSManagedObject?

    Read the article

  • Should a Unit-test replicate functionality or Test output?

    - by Daniel Beardsley
    I've run into this dilemma several times. Should my unit tests duplicate the functionality of the method they are testing to verify its integrity? OR should unit tests strive to test the method with numerous manually created instances of inputs and expected outputs? I'm mainly asking the question for situations where the method you are testing is reasonably simple and its proper operation can be verified by glancing at the code for a minute. Simplified example (in Ruby):

        def concat_strings(str1, str2)
          return str1 + " AND " + str2
        end

    Simplified functionality-replicating test for the above method:

        def test_concat_strings
          10.times do
            str1 = random_string_generator
            str2 = random_string_generator
            assert_equal (str1 + " AND " + str2), concat_strings(str1, str2)
          end
        end

    I understand that most times the method you are testing won't be simple enough to justify doing it this way. But my question remains; is this a valid methodology in some circumstances (why or why not)?
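
    For contrast, the second style from the question (manually chosen inputs paired with literal expected outputs, rather than re-deriving them with the same logic) might look like the following. A minimal sketch in Python; the function is re-declared only so the snippet is self-contained, and the cases are illustrative:

        import unittest

        def concat_strings(str1, str2):
            return str1 + " AND " + str2

        class ConcatStringsTest(unittest.TestCase):
            # Hand-picked cases: the expected values are written out literally,
            # so the test never duplicates the implementation's own logic.
            CASES = [
                (("cats", "dogs"), "cats AND dogs"),
                (("", "x"), " AND x"),
                (("a ", " b"), "a  AND  b"),
            ]

            def test_known_pairs(self):
                for (str1, str2), expected in self.CASES:
                    self.assertEqual(expected, concat_strings(str1, str2))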

    Read the article

  • What is wrong with Stubs for unit testing?

    - by MatthewMartin
    I just watched this funny YouTube video about unit testing (it's Hitler with fake subtitles chewing out his team for not doing good unit tests--skip it if you're humor impaired) where stubs get roundly criticized. But I don't understand what is wrong with stubs. I haven't started using a mocking framework and I haven't started feeling the pain from not using one. Am I in for a world of hurt sometime down the line, having chosen handwritten stubs and fakes instead of mocks (like Rhino Mocks, etc.)? What are the considerations for picking between a mock and a handwritten stub (using Fowler's taxonomy)?

    Read the article

  • Silverlight Unit Testing Framework running tests in external class library

    - by Jonas Follesø
    I'm currently looking into different options for unit testing Silverlight applications. One of the frameworks available is the Silverlight Unit Test Framework from Microsoft (developed primarily by Jeff Wilcox, http://www.jeff.wilcox.name/2010/05/sl3-utf-bits/). One of the scenarios I'm looking into is running the same tests on both Silverlight 3 (PC) and Windows Phone 7. The Silverlight Unit Test Framework (SLUT) runs on both PC and phone. To avoid having to copy or link files, I would like to put my tests into a shared test library that can be loaded by either a WP7 application using SLUT or a Silverlight 3 application using SLUT. So my question is: will SLUT load unit tests defined in a referenced class library, or only in the executing assembly?

    Read the article

  • Testing with Unittest Python

    - by chrissygormley
    Hello, I am running tests with Python's unittest. I want to do negative testing: if a function throws an exception, the test should pass, but if no exception is thrown, the test should fail. The script I have is:

        try:
            result = self.client.service.GetStreamUri(self.stream, self.token)
            self.assertFalse
        except suds.WebFault, e:
            self.assertTrue
        else:
            self.assertTrue

    This always passes as True even when the function works perfectly. I have also tried various other ways, including:

        try:
            result = self.client.service.GetStreamUri(self.stream, self.token)
            self.assertFalse
        except suds.WebFault, e:
            self.assertTrue
        except Exception, e:
            self.assertTrue

    Does anyone have any suggestions? Thanks
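
    One common way to express this kind of negative test with unittest is assertRaises, which fails the test unless the call raises the given exception. A minimal sketch, reusing the service call and suds.WebFault exception from the question above (the test method name is illustrative):

        def test_get_stream_uri_raises_webfault(self):
            # Passes only if the call raises suds.WebFault; any other outcome
            # (no exception, or a different exception type) fails the test.
            self.assertRaises(suds.WebFault,
                              self.client.service.GetStreamUri,
                              self.stream, self.token)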

    Read the article
