Search Results

Search found 10010 results on 401 pages for 'a b testing'.


  • Test massive website

    - by Ant
    My company has just migrated all the code for our website to 3 identical servers at an off-site location. Now it is our job to test them. However, the number of websites and features we have to test is exorbitant, and we have to multiply that by 3! Checking every single link and every single function is a daunting task, and we are in the process of doing it manually right now. My question to you guys/girls is this: is there a way to automate the testing so we don't have to waste our time clicking, waiting, and checking the response, times 3? ;-) Let me know if you need any other info. Thanks!
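
    One way to take the grunt work out of the link checking is a small crawler that asserts on HTTP status codes. A minimal sketch, assuming a .NET shop; the URL list is hypothetical and would in practice come from a crawl or a sitemap:

        using System;
        using System.Net;

        class LinkChecker
        {
            // True if the URL answers with a success status code.
            static bool IsAlive(string url)
            {
                try
                {
                    var request = (HttpWebRequest)WebRequest.Create(url);
                    request.Method = "HEAD"; // status only, skip the body
                    using (var response = (HttpWebResponse)request.GetResponse())
                    {
                        return (int)response.StatusCode < 400;
                    }
                }
                catch (WebException)
                {
                    // 4xx/5xx answers surface as WebException
                    return false;
                }
            }

            static void Main()
            {
                // Hypothetical sample; repeat per server to cover all 3.
                string[] urls = { "http://server1/products", "http://server1/checkout" };
                foreach (var url in urls)
                    Console.WriteLine("{0} -> {1}", url, IsAlive(url) ? "OK" : "FAIL");
            }
        }

    For flows that need real clicking and form filling, a browser-automation tool such as Selenium can script the scenario once and replay it against each of the three servers.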

    Read the article

  • How to instantiate a Singleton multiple times?

    - by Sebi
    I need a singleton in my code. I implemented it in Java and it works well. The reason I did it is to ensure that in a multi-device environment there is only one instance of this class. But now I want to test my singleton object locally with a unit test. For this reason I need to simulate another instance of this singleton (the object that would come from another device). So is there a possibility to instantiate a singleton a second time for testing purposes, or do I have to mock it? I'm not sure, but I think it could be possible by using a different class loader?

    Read the article

  • JUnit test that creates other tests

    - by Benju
    Normally I would have one JUnit test that shows up in my integration server of choice as one test that passes or fails (in this case I use TeamCity). What I need for this specific test is the ability to loop through a directory structure testing that our data files can all be parsed without throwing an exception. Because we have 30,000+ files that take 1-5 seconds each to parse, this test will be run in its own suite. The problem is that I need a way to have one piece of code run as one JUnit test per file, so that if 12 files out of 30,000 fail I can see which 12 failed, not just that one test failed, threw a RuntimeException, and stopped. I realize that this is not a true "unit" test way of doing things, but this simulation is very important to make sure that our content providers are kept in check and do not check in invalid files. Any suggestions?

    Read the article

  • How to use condition in a TestGen4Web script for a child popup window?

    - by GotoError
    I have a TestGen4Web script for automating testing on a web-based user interface that has a popup window (hey, I didn't write that UI..). In order to write a complete test script that branches the flow based on the presence of some content in the popup window, I need to write a simple if condition that does something like if document.getElementById("xyz").value, and it must run against the popup window, not the parent window. Any ideas on how to accomplish this? Currently the condition fails because it runs against the parent window. Also, how do I extract some text from the DOM and spit it out to a file at the end of the test?

    Read the article

  • Best way to test class methods without running __init__

    - by KenFar
    I've got a simple class that gets most of its arguments via __init__, which also runs a variety of private methods that do most of the work. Output is available either through access to object variables or through public methods. Here's the problem: I'd like my unittest framework to directly call the private methods invoked by __init__ with different data, without going through __init__. What's the best way to do this? So far, I've been refactoring these classes so that __init__ does less and data is passed in separately. This makes testing easy, but I think the usability of the class suffers a little.

    Read the article

  • How to prevent unit test from using util from test project?

    - by calucier
    I am using Eclipse and I have two projects, project1 and project1-test. Below is the example layout of my projects:

        project1
          src
            my.package
              MyClass.java
            my.package.util
              util.java

        project1-test
          src
            my.package
              MyClassTest.java
            my.package.util
              util.java

    MyClass.java makes a static call to the util.java in project1. MyClassTest.java is testing MyClass.java. When the test class runs, it fails and complains that MyClass.java is referencing a method in util.java that doesn't exist. Under project1 the referenced method exists in util.java, but under project1-test it doesn't. When I run MyClassTest.java, the util.java that MyClass.java ends up referencing comes from project1-test when it should come from project1. Is there some way to make MyClass.java not reference util.java from project1-test when running MyClassTest.java?

    Read the article

  • Is it a good idea to write tests for environments other than development?

    - by jcollum
    Let's say I have a (fairly typical) set of environments: PROD, UAT, QA, DEV. Is it a good idea to run your tests across all environments? Here's what I'm thinking of. I have a proc in SQL that my code depends on; I'll call it proc_getActiveCustomers. If that proc isn't present my app will go south real fast. So I write a test that checks for the existence of this proc in the database. Nothing new here. But when I then deploy my app to the QA environment, would I also want a test that checks that environment for the existence of proc_getActiveCustomers? I think this is a good idea, but I've never heard much about testing in environments outside of development, which makes me wonder if there's some downside I'm not aware of. The direction I'm going is to have a list of environments in code and then pass the environment into my unit test.
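
    A minimal sketch of such an environment check, assuming SQL Server and MSTest; the per-environment connection-string names are an assumption:

        using System.Data.SqlClient;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class EnvironmentSmokeTests
        {
            // Hypothetical helper: one named connection string per environment.
            static string GetConnectionString(string environment)
            {
                return System.Configuration.ConfigurationManager
                    .ConnectionStrings[environment].ConnectionString;
            }

            [TestMethod]
            public void ProcGetActiveCustomers_Exists_InQA()
            {
                using (var conn = new SqlConnection(GetConnectionString("QA")))
                using (var cmd = new SqlCommand(
                    "SELECT COUNT(*) FROM sys.procedures WHERE name = 'proc_getActiveCustomers'", conn))
                {
                    conn.Open();
                    Assert.AreEqual(1, (int)cmd.ExecuteScalar());
                }
            }
        }

    The same test body, parameterized over the environment list, gives one smoke suite that can be pointed at DEV, QA, UAT, or PROD after each deployment.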

    Read the article

  • Finding data file location while using Microsoft Test Framework

    - by Nair
    I have been using NUnit and now I am switching to the Microsoft unit test framework. In my test project I have a folder called TestData where I keep all my test input data files. I want to use those files in my unit tests. In my test code I have tried the Application and Assembly namespaces, but I cannot get to the data folder unless I write code that finds and replaces part of the path so it points to the data folder. I am sure someone else has run into the same problem. Is the solution to fix up the path in code, or is there an API call that will let us get to the executing assembly's folder? Thanks,
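
    A minimal sketch of the assembly-relative approach, assuming the TestData files are set to copy to the output directory; the file name is hypothetical:

        using System.IO;
        using System.Reflection;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class DataFileTests
        {
            [TestMethod]
            public void CanLocateTestData()
            {
                // Folder that the executing test assembly was loaded from.
                string assemblyDir = Path.GetDirectoryName(
                    Assembly.GetExecutingAssembly().Location);
                string dataFile = Path.Combine(assemblyDir, @"TestData\input.xml");

                Assert.IsTrue(File.Exists(dataFile), "Missing test data: " + dataFile);
            }
        }

    MSTest also has a [DeploymentItem(@"TestData\input.xml")] attribute that copies individual files into the test run's deployment folder, which avoids the path arithmetic entirely.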

    Read the article

  • How should I mock out my data connectivity

    - by BobTheBuilder
    I'm trying to unit test my data access layer, and I'm coming unstuck trying to mock out the creation of the commands. I thought about using a queue of IDbParameters for the creation of the parameters, but the unit tests then require that the parameters are configured in the right order. I'm using Moq, and having looked around for documentation to walk me through this, I'm finding lots of recommendations not to do this but to write a wrapper for the connection. It's my contention, though, that my DAL is supposed to be the wrapper for my database, and I don't feel I should be writing wrappers... and if I do, how do I unit test the connectivity to the database for my wrapper? By writing another wrapper? It seems like it's turtles all the way down. So does anyone have any recommendations or tutorials regarding this particular area of unit testing/mocking?
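
    For what it's worth, the ADO.NET interfaces themselves can be mocked, so the DAL can stay the only wrapper. A minimal sketch, assuming the DAL asks an injected IDbConnection for its commands; the names are illustrative:

        using System.Data;
        using Moq;

        public static class DataConnectivityMocks
        {
            public static IDbConnection ConnectionReturning(object scalarResult)
            {
                var command = new Mock<IDbCommand>();
                command.SetupGet(c => c.Parameters)
                       .Returns(new Mock<IDataParameterCollection>().Object);
                command.Setup(c => c.ExecuteScalar()).Returns(scalarResult);

                var connection = new Mock<IDbConnection>();
                // Every command the DAL creates comes back canned.
                connection.Setup(c => c.CreateCommand()).Returns(command.Object);
                return connection.Object;
            }
        }

    The real connectivity then gets covered by a handful of integration tests against an actual database, rather than by more wrappers.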

    Read the article

  • How do I set up gaeunit 2.0a with my Django app?

    - by J. Frankenstein
    I am trying to set up Google App Engine unit testing for my web application. I downloaded the file from here. I followed the instructions in the readme by copying the gaeunit directory into the directory with the rest of my apps and registering 'gaeunit' in settings.py. This didn't seem sufficient to actually get things going, so I also stuck url('^test(.*)', include('gaeunit.urls')) into my urls.py file. When I go to the URL http://localhost:8000/test, I get the following error: [Errno 2] No such file or directory: '../../gaeunit/test' Any suggestions? I'm not sure what I've done wrong. Thanks!

    Read the article

  • How to write a unit test for WCF behaviors?

    - by katie77
    I am new to unit testing. How do I write a unit test for a method when I am extending a WCF behavior, given that I am not sure when the class is instantiated and I cannot change the method signature? In the behavior implementation, I am grabbing the header and looking up a value in the config.

        public class IncomingValidator : IDispatchMessageInspector
        {
            public object AfterReceiveRequest(ref Message request, IClientChannel channel, InstanceContext instanceContext)
            {
                // Grab the header and see if one of the particular values (read from config) is there.
            }

            public void BeforeSendReply(ref Message reply, object correlationState) { }
        }
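
    One thing in the inspector's favor: behaviors are plain classes, so a test can construct one directly and hand it a hand-built Message. A minimal sketch, assuming the header lookup needs nothing beyond the message itself; the action URI and header names are hypothetical:

        using System.ServiceModel.Channels;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        public class IncomingValidatorTests
        {
            [TestMethod]
            public void AfterReceiveRequest_AcceptsKnownHeaderValue()
            {
                var inspector = new IncomingValidator();

                Message request = Message.CreateMessage(
                    MessageVersion.Soap11, "http://example.org/hypothetical-action");
                request.Headers.Add(MessageHeader.CreateHeader(
                    "MyHeader", "http://example.org/ns", "expected-value"));

                // Channel and instance context are unused by this inspector, so null is fine here.
                object state = inspector.AfterReceiveRequest(ref request, null, null);

                // Assert whatever contract the inspector promises for a valid header.
                Assert.IsNull(state);
            }
        }

    The config lookup can be satisfied by giving the test project its own app.config with the values the inspector expects.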

    Read the article

  • Dev Environment Tests Not 100% Compatible with Staging/Production in Rails

    - by aronchick
    We use a bunch of specific apps/APIs that (unfortunately) differ quite a bit from dev to staging/production. We use tests and continuous integration at each stage, but in dev the tests fail annoyingly (throwing dialogs, etc. - thanks, Windows, for the 64-bit notification!). I hate to write custom code, but are there some best practices for allowing a subset of testing in Ruby/Rails, or for patching out specific tests when you're running on Windows? Some specific situations: Identify.exe does not support 64-bit Windows and throws a dialog. Sethostname is not supported and throws an error (at least it's command-line).

    Read the article

  • How do you manage the testing of your Android software on physical devices?

    - by Philip Regan
    I'm in charge of managing mobile application development at my company, and I am currently building a mobile device "library" for testing. Essentially, we want to have a representative device in-house for each of the OSes we are developing for, currently iOS (iPhone-only), Blackberry, and Android. Simulators only go so far, so I'm placing a step into the process to test software on the devices themselves. The problem we're finding is with Android. I don't think any of us here ever really understood just how fragmented the whole platform is until we started looking at devices to acquire. We are going to wait until v2.3 of Android is released, but which products do we choose? Do we go with the most popular by market share? Do we get a small range of products whose specs run from least to most powerful overall? We're trying to avoid having to manage a dozen different devices to test each app, if not because of the cost then because of the repeated time sink. How do you manage the testing of your Android software on physical devices?

    Read the article

  • Performance Testing – Quick Reference Guide – Released on CodePlex

    - by Shawn Cicoria
    Why performance test at all, right? Well, physics still plays a role in what we do, so why not take a better look at your application? Need help? The Rangers team just released the following, with both VS2008 and VS2010 content: http://vstt2008qrg.codeplex.com/ - Visual Studio Performance Testing Quick Reference Guide (Version 2.0). The final released copy is there and ready for full-time use. Please enjoy and post feedback on the discussion board. This document is a collection of items from public blog sites, Microsoft® internal discussion aliases (sanitized), and experiences of various test consultants in the Microsoft Services Labs. The idea is to provide quick reference points around various aspects of Microsoft Visual Studio® performance testing features that may not be covered in core documentation or may not be easily understood. The different types of information covered: How does this feature work under the covers? How can I implement a workaround for this missing feature? This is a known bug and here is a fix or workaround. How do I troubleshoot issues I am having?

    Read the article

  • How to mock an SqlDataReader using Moq - Update

    - by Simon G
    Hi, I'm new to Moq and setting up mocks, so I could do with a little help. The title says it all really: how do I mock up an SqlDataReader using Moq? Thanks

    Update

    After further testing, this is what I have so far:

        private IDataReader MockIDataReader()
        {
            var moq = new Mock<IDataReader>();
            moq.Setup(x => x.Read()).Returns(true);
            moq.Setup(x => x.Read()).Returns(false); // replaces the setup above
            moq.SetupGet<object>(x => x["Char"]).Returns('C');
            return moq.Object;
        }

        private class TestData
        {
            public char ValidChar { get; set; }
        }

        private TestData GetTestData()
        {
            var testData = new TestData();
            using (var reader = MockIDataReader())
            {
                while (reader.Read())
                {
                    testData = new TestData { ValidChar = reader.GetChar("Char").Value };
                }
            }
            return testData;
        }

    The issue is that when I call reader.Read() in my GetTestData() method, the reader is always empty. I need to know how to do something like reader.Stub(x => x.Read()).Repeat.Once().Return(true), as per this Rhino Mocks example: http://stackoverflow.com/questions/1792984/mocking-a-datareader-and-getting-a-rhino-mocks-exceptions-expectationviolationexc
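
    A minimal sketch of one common fix, written as a drop-in replacement for the MockIDataReader method above and assuming this era of Moq (later versions add SetupSequence for exactly this): feed Read() from a queue so each call consumes the next value. The indexer setup is copied from the question.

        using System.Collections.Generic;
        using System.Data;
        using Moq;

        private IDataReader MockIDataReader()
        {
            // One true then one false: the while loop runs exactly once.
            var reads = new Queue<bool>(new[] { true, false });

            var moq = new Mock<IDataReader>();
            // The lambda is re-evaluated on every call, so the queue advances each time.
            moq.Setup(x => x.Read()).Returns(() => reads.Dequeue());
            moq.SetupGet<object>(x => x["Char"]).Returns('C');
            return moq.Object;
        }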

    Read the article

  • Assert.AreEqual() Exception in VS2010

    - by Tom Miller
    I am fairly new to unit testing and am using VS2010 to develop in and run my tests. I have a simple test, illustrated below, that simply compares two System.Data.DataTableReader objects. I know that they are equal: they are both created using the same object types and the same input file, and I have verified that the objects "look" the same. I realize I may be dealing with a couple of issues here. One is whether this is the proper use of Assert.AreEqual, or even the proper way to test this scenario; the other, the main issue, is why this test fails with this exception:

        Failed 00:00:00.1000660 0
        Assert.AreEqual failed. Expected:<System.Data.DataTableReader>. Actual:<System.Data.DataTableReader>.

    Here is the unit test code that is failing:

        public void EntriesTest()
        {
            AuditLog target = new AuditLog();
            target.Init();
            DataSet ds = new DataSet();
            ds.ReadXml(TestContext.DataRow["AuditLogPath"].ToString());
            DataTableReader expected = ds.Tables[0].CreateDataReader();
            DataTableReader actual = target.Entries.Tables[0].CreateDataReader();
            Assert.AreEqual<DataTableReader>(expected, actual);
        }

    Any help would be greatly appreciated!
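
    The failure is expected behavior: DataTableReader does not override Equals, so Assert.AreEqual falls back to reference equality, and two distinct reader instances can never be "equal". A minimal sketch of a content-based comparison instead; the helper name is illustrative:

        using System.Data;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        public static class DataReaderAssert
        {
            // Walks both readers row by row, field by field.
            public static void AreEqual(DataTableReader expected, DataTableReader actual)
            {
                Assert.AreEqual(expected.FieldCount, actual.FieldCount, "Column counts differ.");
                while (true)
                {
                    bool expectedHasRow = expected.Read();
                    bool actualHasRow = actual.Read();
                    Assert.AreEqual(expectedHasRow, actualHasRow, "Row counts differ.");
                    if (!expectedHasRow) return;

                    for (int i = 0; i < expected.FieldCount; i++)
                        Assert.AreEqual(expected.GetValue(i), actual.GetValue(i),
                            "Mismatch in column " + expected.GetName(i));
                }
            }
        }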

    Read the article

  • How can I beta test web Perl modules under Apache/mod_perl on production web server?

    - by DVK
    We have a setup where most code, before being promoted to full production, is deployed in BETA mode - meaning it runs in the full production environment (using the production database, usually with production data, and the production web server). We call that stage BETA testing. One of the main requirements is that BETA code promotion to production must be a simple "cp" command from the beta directory to the production directory - no code/filename changes. For non-web Perl code, achieving a seamless BETA test is quite doable (see details here): Perl programs live in a standard location under the production root (/usr/code/scripts), with production Perl modules living under the same root (/usr/code/lib/perl). The BETA code has 100% the same code paths, except under the beta root (/usr/code/beta/). A special module manipulates @INC of any script, based on whether the script was called from /usr/code/scripts or /usr/code/test/scripts, to include the beta libraries for beta scripts. This setup works fine up until we need to beta test our web Perl code (the setup is EmbPerl and Apache/mod_perl). The hang-up is as follows: if both a production Perl module and a BETA Perl module have the same name (e.g. /usr/code/lib/perl/MyLib1.pm and /usr/code/beta/lib/perl/MyLib1.pm), then mod_perl will only be able to load ONE of these modules into memory, and we're not aware of any way for a particular web page to affect which version of the module is currently loaded, due to concurrency issues. Leaving aside the obvious non-programming solution (get a bloody BETA web server), which for political/organizational reasons is not feasible, is there any way we can hack around this problem in either Perl or mod_perl? I played around with various approaches to unloading Perl modules that %INC has listed, but the problem remains that another user might load a beta page at just the right (or rather wrong) moment and have the beta module loaded, which will then be used for my production page.

    Read the article

  • How should I test a genetic algorithm?

    - by James Brooks
    I have made quite a few genetic algorithms; they work (they find a reasonable solution quickly). But I have now discovered TDD. Is there a way to write a genetic algorithm (which relies heavily on random numbers) in a TDD way? To pose the question more generally: how do you test a non-deterministic method/function? Here is what I have thought of: Use a specific seed, which won't help if I make a mistake in the code in the first place, but will help in finding bugs when refactoring. Use a known list of numbers: similar to the above, but I could follow the code through by hand (which would be very tedious). Use a constant number: at least I know what to expect. It would be good to ensure that a die always reads 6 when RandomFloat(0,1) always returns 1. Try to move as much of the non-deterministic code out of the GA as possible, which seems silly, as randomness is the core of its purpose. Links to very good books on testing would be appreciated too.
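
    A minimal sketch of the first two ideas combined, in C# for illustration: give the GA its randomness through a seam, so production uses a seeded generator and tests can substitute a canned sequence. The interface is hypothetical:

        using System;

        // Hypothetical seam: the GA draws randomness from this
        // interface instead of newing up its own generator.
        public interface IRandomSource
        {
            double NextDouble();
        }

        // Production and refactoring tests: reproducible runs via an explicit seed.
        public class SeededRandom : IRandomSource
        {
            private readonly Random rng;
            public SeededRandom(int seed) { rng = new Random(seed); }
            public double NextDouble() { return rng.NextDouble(); }
        }

        // Unit tests: a known list of numbers, consumed in order.
        public class FixedRandom : IRandomSource
        {
            private readonly double[] values;
            private int next;
            public FixedRandom(params double[] values) { this.values = values; }
            public double NextDouble() { return values[next++]; }
        }

    With new SeededRandom(42) a whole run replays exactly during refactoring; with new FixedRandom(1.0) the "die always reads 6" expectation becomes a plain assertion.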

    Read the article

  • Problem with Authlogic and Unit/Functional Tests in Rails

    - by mmacaulay
    I'm learning how unit testing is done in Rails, and I've run into a problem involving Authlogic. According to the documentation there are a few things required to use Authlogic stuff in your tests. In test_helper.rb:

        require "authlogic/test_case"

        class ActiveSupport::TestCase
          setup :activate_authlogic
        end

    Then in my functional tests I can log in users:

        UserSession.create(users(:tester))

    The problem seems to stem from the setup :activate_authlogic line in test_helper.rb; whenever that is included, I get the following errors when running functional tests:

        NoMethodError: undefined method `request=' for nil:NilClass
        authlogic (2.1.3) lib/authlogic/controller_adapters/abstract_adapter.rb:63:in `send'
        authlogic (2.1.3) lib/authlogic/controller_adapters/abstract_adapter.rb:63:in `method_missing'

    If I remove setup :activate_authlogic and instead add

        Authlogic::Session::Base.controller = Authlogic::ControllerAdapters::RailsAdapter.new(self)

    to test_helper.rb, my functional tests seem to work but now my unit tests fail:

        NoMethodError: undefined method `params' for ActiveSupport::TestCase:Class
        authlogic (2.1.3) lib/authlogic/controller_adapters/abstract_adapter.rb:30:in `params'
        authlogic (2.1.3) lib/authlogic/session/params.rb:96:in `params_credentials'
        authlogic (2.1.3) lib/authlogic/session/params.rb:72:in `params_enabled?'
        authlogic (2.1.3) lib/authlogic/session/params.rb:66:in `persist_by_params'
        authlogic (2.1.3) lib/authlogic/session/callbacks.rb:79:in `persist'
        authlogic (2.1.3) lib/authlogic/session/persistence.rb:55:in `persisting?'
        authlogic (2.1.3) lib/authlogic/session/persistence.rb:39:in `find'
        authlogic (2.1.3) lib/authlogic/acts_as_authentic/session_maintenance.rb:96:in `get_session_information'
        authlogic (2.1.3) lib/authlogic/acts_as_authentic/session_maintenance.rb:95:in `each'
        authlogic (2.1.3) lib/authlogic/acts_as_authentic/session_maintenance.rb:95:in `get_session_information'
        /test/unit/user_test.rb:23:in `test_should_save_user_with_email_password_and_confirmation'

    What am I doing wrong?

    Read the article

  • How to set EnqueueCallback for my generic callback

    - by CrazyJoe
        using System;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Documents;
        using System.Windows.Ink;
        using System.Windows.Input;
        using System.Windows.Media;
        using System.Windows.Media.Animation;
        using System.Windows.Shapes;
        using Microsistec.Domain;
        using Microsistec.Client;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using System.Collections.Generic;
        using Microsistec.Tools;
        using System.Json;
        using Microsistec.SystemConfig;
        using System.Threading;
        using Microsoft.Silverlight.Testing;

        namespace Test
        {
            [TestClass]
            public class SampleTest : SilverlightTest
            {
                [TestMethod, Asynchronous]
                public void login()
                {
                    List<PostData> data = new List<PostData>();
                    data.Add(new PostData("email", "xxx"));
                    data.Add(new PostData("password", MD5.GetHashString("xxx")));
                    WebClient.sendData(Config.DataServerURL + "/user/login", data, LoginCallBack);
                    EnqueueCallback(?????????);
                    EnqueueTestComplete();
                }

                [Asynchronous]
                public void LoginCallBack(object sender, System.Net.UploadStringCompletedEventArgs e)
                {
                    string json = Microsistec.Client.WebClient.ProcessResult(e);
                    var result = JsonArray.Parse(json);
                    Assert.Equals("1", result["value"].ToString());
                    TestComplete();
                }
            }
        }

    I'm trying to fill in the ????????? value, but my callback is generic; it is set up by my WebClient.sendData. How do I implement my EnqueueCallback with my already-written LoginCallBack function?
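
    A minimal sketch of one way through, assuming the Silverlight Unit Test Framework's EnqueueConditional is available and that sendData accepts a lambda in place of LoginCallBack: let the callback only capture the result, and queue the assertions to run once a flag flips. The types are taken from the question:

        [TestMethod, Asynchronous]
        public void login()
        {
            bool done = false;
            string json = null;

            List<PostData> data = new List<PostData>();
            data.Add(new PostData("email", "xxx"));
            data.Add(new PostData("password", MD5.GetHashString("xxx")));

            // The callback only records the response; no asserts in here.
            WebClient.sendData(Config.DataServerURL + "/user/login", data,
                (sender, e) => { json = Microsistec.Client.WebClient.ProcessResult(e); done = true; });

            EnqueueConditional(() => done);   // wait until the callback has fired
            EnqueueCallback(() =>
            {
                var result = JsonArray.Parse(json);
                Assert.AreEqual("1", result["value"].ToString());
            });
            EnqueueTestComplete();
        }

    EnqueueTestComplete() then replaces the explicit TestComplete() call, since the queue only reaches it after the assertions have run.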

    Read the article

  • Dashboard for collaborative science / data processing projects

    - by rescdsk
    Hi, Continuous Integration servers like Hudson are a pretty amazing addition to software development. I work in an academic research lab, and I'd love to apply similar principles to scientific data analysis. I want a dashboard-like view of which collections of data are fine, which ones are failing their tests (simple shell scripts, mostly), and so on. A lot like the Chromium dashboard (WARNING: page takes a long time to load). It takes work from at least 4 people, and maybe 10 or 12 hours of computer time, to bring our data (from behavioral studies) from its raw form to its final, easily-analyzed form. I've tried Hudson and buildbot, but neither is really appropriate to our workflow. We just want to run a bunch of tests on maybe fifty independent collections of subject data, and display the results nicely. SO! Does anyone have a recommendation of a way to generate this kind of report easily? Or, can you think of a good way to shoehorn this kind of workflow into a continuous integration server? Or, can you recommend a unit testing dashboard that could deal with tests that are little shell scripts rather than little functions? Thank you!

    Read the article

  • How to test a class that makes an HTTP request and parses the response data in Obj-C?

    - by GuidoMB
    I have a class that needs to make an HTTP request to a server in order to get some information. For example:

        - (NSUInteger)newsCount {
            NSHTTPURLResponse *response;
            NSError *error;
            NSURLRequest *request = ISKBuildRequestWithURL(ISKDesktopURL, ISKGet, cookie, nil, nil);
            NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
            if (!data) {
                NSLog(@"The user's(%@) news count could not be obtained:%@", username, [error description]);
                return 0;
            }
            NSString *regExp = @"Usted tiene ([0-9]*) noticias? no leídas?";
            NSString *stringData = [[NSString alloc] initWithData:data encoding:NSASCIIStringEncoding];
            NSArray *match = [stringData captureComponentsMatchedByRegex:regExp];
            [stringData release];
            if ([match count] < 2) return 0;
            return [[match objectAtIndex:1] intValue];
        }

    The thing is that I'm unit testing the whole framework (using OCUnit), but I need to simulate/fake what NSURLConnection responds in order to test different scenarios, and because I can't rely on the server to test my framework. So the question is: what is the best way to do this?

    Read the article

  • Unit tests for deep cloning

    - by Will Dean
    Let's say I have a complex .NET class with lots of arrays and other class object members. I need to be able to generate a deep clone of this object, so I write a Clone() method and implement it with a simple BinaryFormatter serialize/deserialize; or perhaps I do the deep clone using some other technique that is more error-prone and that I'd like to make sure is tested. OK, so now (OK, I should have done it first) I'd like to write tests that cover the cloning. All the members of the class are private, and my architecture is so good (!) that I haven't needed to write hundreds of public properties or other accessors. The class isn't IComparable or IEquatable, because that's not needed by the application. My unit tests are in a separate assembly from the production code. What approaches do people take to testing that the cloned object is a good copy? Do you write (or rewrite, once you discover the need for the clone) all your unit tests for the class so that they can be invoked with either a 'virgin' object or with a clone of it? How would you test whether part of the cloning wasn't deep enough, as this is just the kind of problem that can give hideous-to-find bugs later?
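
    A minimal sketch of one content-based check that needs no public accessors, assuming the class is [Serializable] (the BinaryFormatter clone already requires that) and that some public operation mutates the original's state; the helper names are illustrative:

        using System;
        using System.IO;
        using System.Runtime.Serialization.Formatters.Binary;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        public static class CloneAssert
        {
            static byte[] ToBytes(object graph)
            {
                using (var stream = new MemoryStream())
                {
                    new BinaryFormatter().Serialize(stream, graph);
                    return stream.ToArray();
                }
            }

            public static void AssertDeepClone<T>(T original, T clone, Action<T> mutateOriginal)
            {
                // Same serialized content: the clone starts out as a faithful copy.
                CollectionAssert.AreEqual(ToBytes(original), ToBytes(clone));

                // A shallow copy shares inner objects, so mutating the original
                // would drag the clone's state along with it.
                byte[] before = ToBytes(clone);
                mutateOriginal(original);
                CollectionAssert.AreEqual(before, ToBytes(clone));
            }
        }

    This leans on BinaryFormatter producing the same bytes for the same object graph within one process, which holds for plain data graphs but is worth verifying for more exotic members.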

    Read the article

  • How to test a DAO with a JPA implementation?

    - by smallufo
    Hi, I came from the Spring camp. I don't want to use Spring and am migrating to Java EE 6, but I have a problem testing DAO + JPA. Here is my simplified sample:

        public interface PersonDao {
            public Person get(long id);
        }

    This is a very basic DAO. Because I came from Spring, I believe the DAO still has value, so I decided to add a DAO layer.

        public class PersonDaoImpl implements PersonDao, Serializable {
            @PersistenceContext(unitName = "test", type = PersistenceContextType.EXTENDED)
            EntityManager entityManager;

            public PersonDaoImpl() { }

            @Override
            public Person get(long id) {
                return entityManager.find(Person.class, id);
            }
        }

    This is a JPA-implemented DAO. I hope the EE container or the test container is able to inject the EntityManager.

        public class PersonDaoImplTest extends TestCase {
            @Inject
            protected PersonDao personDao;

            @Override
            protected void setUp() throws Exception {
                //personDao = new PersonDaoImpl();
            }

            public void testGet() {
                System.out.println("personDao = " + personDao); // NULL!
                Person p = personDao.get(1L);
                System.out.println("p = " + p);
            }
        }

    This is my test file. OK, here comes the problem: because JUnit doesn't understand @javax.inject.Inject, the PersonDao will not be injected and the test will fail. How do I find a test framework that is able to inject the EntityManager into PersonDaoImpl, and @Inject the PersonDaoImpl into the PersonDao of the TestCase? I tried unitils.org, but cannot find a sample like this; it just injects the EntityManagerFactory directly into the TestCase, which is not what I want...

    Read the article

  • Mocking with Boost::Test

    - by Billy ONeal
    Hello everyone :) I'm using the Boost::Test library for unit testing, and in general I've been hacking up my own mocking solutions that look something like this:

        // In header for clients
        struct RealFindFirstFile {
            static HANDLE FindFirst(LPCWSTR lpFileName, LPWIN32_FIND_DATAW lpFindFileData) {
                return FindFirstFile(lpFileName, lpFindFileData);
            };
        };

        template <typename FirstFile_T = RealFindFirstFile>
        class DirectoryIterator {
            //.. Implementation
        };

        // In unit tests (cpp)
        #define THE_ANSWER_TO_LIFE_THE_UNIVERSE_AND_EVERYTHING 42

        struct FakeFindFirstFile {
            static HANDLE FindFirst(LPCWSTR lpFileName, LPWIN32_FIND_DATAW lpFindFileData) {
                return THE_ANSWER_TO_LIFE_THE_UNIVERSE_AND_EVERYTHING;
            };
        };

        BOOST_AUTO_TEST_CASE( MyTest )
        {
            DirectoryIterator<FakeFindFirstFile> LookMaImMocked;
            //Test
        }

    I've grown frustrated with this because it requires that I implement almost everything as a template, and it is a lot of boilerplate code to achieve what I'm looking for. Is there a good method of mocking up code using Boost::Test beyond my ad-hoc method? I've seen several people recommend Google Mock, but it requires a lot of ugly hacks if your functions are not virtual, which I would like to avoid. Oh: one last thing. I don't need assertions that a particular piece of code was called. I simply need to be able to inject data that would normally be returned by Windows API functions.

    Read the article
