Search Results

Search found 26297 results on 1052 pages for 'unit test'.


  • How to pass variables using Unittest suite

    - by chrissygormley
    Hello, I have tests using unittest. I have a test suite and I am trying to pass variables into each of the tests. The code below shows the test suite used:

        class suite():
            def suite(self):
                # Function stores all the modules to be tested
                modules_to_test = ('testmodule1', 'testmodule2')
                alltests = unittest.TestSuite()
                for module in map(__import__, modules_to_test):
                    alltests.addTest(unittest.findTestCases(module))
                return alltests

    It calls the tests; I would like to know how to pass variables into the tests from this class. An example test script is below:

        class TestThis(unittest.TestCase):
            def runTest(self):
                assertEqual('1', '1')

        class TestThisTestSuite(unittest.TestSuite):
            # Tests to be tested by test suite
            def makeTestThisTestSuite():
                suite = unittest.TestSuite()
                suite.addTest("TestThis")
                return suite

            def suite():
                return unittest.makeSuite(TestThis)

        if __name__ == '__main__':
            unittest.main()

    So from the class suite() I would like to pass in a value to change the value used in the assert, e.g. assertEqual(self.value, '1'). I have tried sys.argv for unittest and it doesn't seem to work. Thanks for any help.
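
    A minimal sketch of one common workaround, assuming the layout above (an illustration, not the asker's code): since the runner instantiates TestCase classes itself, the suite builder can inject the variable by setting a class attribute before the tests are created.

        import unittest

        class TestThis(unittest.TestCase):
            value = '1'  # default; overridden by the suite builder below

            def runTest(self):
                self.assertEqual(self.value, '1')

        def make_suite(value):
            # Inject the variable before the runner instantiates TestThis.
            TestThis.value = value
            return unittest.defaultTestLoader.loadTestsFromTestCase(TestThis)

        if __name__ == '__main__':
            unittest.TextTestRunner().run(make_suite('1'))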


  • .NET developer has a few hours to cram for a Java proficiency test. What to do?

    - by Paul Sasik
    I have an interview tomorrow morning and just found out that I will also be taking an hour-long Java proficiency test! I am a certified C# .NET developer but have barely touched Java since college. (Yes, I am thinking about switching from .NET development to Java!) I'm not going to be able to cram the whole of the Java library by tomorrow morning, so what are some key ideas I should study that might actually make a difference in such a short time frame? (The examiners will be taking into account that I have had little exposure to Java.) Thanks! Edit: I think the exam is going to be written, so no specific IDE.


  • Separate Database for Integration Testing

    - by john doe
    I am performing integration testing where I fire up the ASPX pages using WatiN, fill the fields, and insert into the database. There are a couple of problems that I am facing.

    1) Should I use a completely separate database for integration testing? I already have db_test and db_dev. db_test is for unit testing and is cleared after each test. db_dev is for developers.

    2) My WatiN tests are contained in a separate assembly (not the unit test assembly; keeping them separate should be better since WatiN tests take so much time to run). The WatiN tests fire up the WebApps project and use its web.config, which points to the dev database. Is there any way I can tell WatiN to use a separate web.config that contains a different database name?


  • Including uncovered files in Devel::Cover reports

    - by Markus
    I have a project set up like this:

        bin/fizzbuzz-game.pl
        lib/FizzBuzz.pm
        test/TestFizzBuzz.pm
        test/TestFizzBuzz.t

    When I run coverage on this, using

        perl -MDevel::Cover=-db,/tmp/cover_db test/*.t

    ... I get the following output:

        ----------------------------------- ------ ------ ------ ------ ------ ------
        File                                  stmt   bran   cond    sub   time  total
        ----------------------------------- ------ ------ ------ ------ ------ ------
        lib/FizzBuzz.pm                      100.0  100.0    n/a  100.0    1.4  100.0
        test/TestFizzBuzz.pm                 100.0    n/a    n/a  100.0   97.9  100.0
        test/TestFizzBuzz.t                  100.0    n/a    n/a  100.0    0.7  100.0
        Total                                100.0  100.0    n/a  100.0  100.0  100.0
        ----------------------------------- ------ ------ ------ ------ ------ ------

    That is: the totally-uncovered file bin/fizzbuzz-game.pl is not included in the results. How do I fix this?


  • Problem using System.Xml in unit test in MonoDevelop (MonoTouch)

    - by hambonious
    I'm new to the MonoDevelop and MonoTouch environment, so hopefully I'm just missing something easy here. When I have a unit test that requires the System.Xml or System.Xml.Linq namespaces, I get the following error when I run the test:

        System.IO.FileNotFoundException : Could not load file or assembly 'System.Xml.Linq, Version=2.0.5.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies.

    Things I've verified:

    - I have the proper usings in the test.
    - The project builds with no problems.
    - Using these namespaces works fine when I run the app in the emulator.
    - I've written a very simple unit test to prove that unit testing works at all (and it does).

    I'm a test-driven kind of guy, so I can't wait to get this working so I can progress with my app. Thanks in advance.


  • Python: How to run unittest.main() for all source files in a subdirectory?

    - by Pete
    I am developing a Python module with several source files, each with its own test class derived from unittest right in the source. Consider the directory structure:

        dirFoo\
            test.py
            dirBar\
                __init__.py
                Foo.py
                Bar.py

    To test either Foo.py or Bar.py, I would add this at the end of the Foo.py and Bar.py source files:

        if __name__ == "__main__":
            unittest.main()

    And run Python on either source, i.e.:

        $ python Foo.py
        ...........
        ----------------------------------------------------------------------
        Ran 11 tests in 2.314s

        OK

    Ideally, I would have test.py automagically search dirBar for any unittest-derived classes and make one call to unittest.main(). What's the best way to do this in practice? I tried using Python to call execfile for every *.py file in dirBar, but that runs once for the first .py file found and then exits the calling test.py. Plus I would then have to duplicate my code by adding unittest.main() to every source file, which violates DRY principles.
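
    A sketch of the usual answer, using unittest's built-in test discovery (hedged: discover() needs Python 2.7 or later, and the paths follow the question's layout):

        # test.py -- collect and run every test under dirBar with one call
        import unittest

        if __name__ == "__main__":
            loader = unittest.TestLoader()
            # Finds TestCase subclasses in every matching module under dirBar.
            suite = loader.discover(start_dir="dirBar", pattern="*.py")
            unittest.TextTestRunner(verbosity=2).run(suite)

    With this in place, no source file needs its own unittest.main() block.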


  • how to interpret testng console log message for passing tests?

    - by Kunal Cholera
    I ran TestNG tests for my test classes, and I get the following three types of log output when I run the passing tests. I am using org.testng.AssertJUnit.assertTrue() methods in my tests.

        [testng] PASSED: testMethod1 on null(test.foo.bar.Class1)
        [testng] PASSED: testMethod2 on Default test name(test.foo.bar.jar.Class2)
        [testng] PASSED: testMethod3

    Can anyone please tell me why for some tests it says "on null", for some it says "on Default test name ...", and for some it does not say anything on the console output? Specifically, I want to make the message consistent.

    Environment: Linux
    TestNG framework: 6.3.2beta

    Please advise. Thanks.


  • How can I know when SQL Full Text Index Population is finished?

    - by GarethOwen
    We are writing unit tests for our ASP.NET application that run against a test SQL Server database. That is, the ClassInitialize method creates a new database with test data, and the ClassCleanup method deletes the database. We do this by running .bat scripts from code. The classes under test are given a connection string that connects to the unit test database rather than a production database.

    Our problem is that the database contains a full-text index, which needs to be fully populated with the test data in order for our tests to run as expected. As far as I can tell, the full-text index is always populated in the background. I would like to be able to either:

    - create the full-text index, fully populated, with a synchronous (Transact-SQL?) statement, or
    - find out when the full-text population is finished. Is there a callback option, or can I ask repeatedly?

    My current solution is to force a delay at the end of the ClassInitialize method (5 seconds seems to work), because I can't find anything in the documentation.
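
    On the "can I ask repeatedly?" option: SQL Server exposes the population state through the FULLTEXTCATALOGPROPERTY function, so test setup can poll until the catalog goes idle. A minimal sketch in Python (hedged: the catalog name, connection string, and the pyodbc driver are illustrative assumptions; the same query works from any client, including the C# test code):

        import time
        import pyodbc  # assumed driver; any SQL Server client works the same way

        def wait_for_fulltext_population(conn_str, catalog="TestCatalog", timeout=60):
            """Poll until the full-text catalog finishes populating (0 = idle)."""
            cursor = pyodbc.connect(conn_str).cursor()
            deadline = time.time() + timeout
            while time.time() < deadline:
                cursor.execute(
                    "SELECT FULLTEXTCATALOGPROPERTY(?, 'PopulateStatus')", catalog)
                if cursor.fetchone()[0] == 0:
                    return
                time.sleep(0.5)
            raise TimeoutError("full-text population did not finish in time")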


  • understand SimpleTimeZone and DST Test

    - by Cygnusx1
    I have an issue with the use of the SimpleTimeZone class in Java. First, the JavaDoc is nice but not quite easy to understand in regards to the start and end rules. But with the help of some examples found on the web, I managed to get it right (I still don't understand why 8 represents the second week of a month in day_of_month, but whatever). Now I have written a simple JUnit test to validate what I understand:

        package test;

        import static org.junit.Assert.assertEquals;

        import java.sql.Timestamp;
        import java.util.Calendar;
        import java.util.GregorianCalendar;
        import java.util.SimpleTimeZone;

        import org.apache.log4j.Logger;
        import org.junit.Test;

        public class SimpleTimeZoneTest {
            Logger log = Logger.getLogger(SimpleTimeZoneTest.class);

            @Test
            public void testTimeZoneWithDST() throws Exception {
                Calendar testDateEndOut = new GregorianCalendar(2012, Calendar.NOVEMBER, 4, 01, 59, 59);
                Calendar testDateEndIn = new GregorianCalendar(2012, Calendar.NOVEMBER, 4, 02, 00, 00);
                Calendar testDateStartOut = new GregorianCalendar(2012, Calendar.MARCH, 11, 01, 59, 59);
                Calendar testDateStartIn = new GregorianCalendar(2012, Calendar.MARCH, 11, 02, 00, 00);

                SimpleTimeZone est = new SimpleTimeZone(-5 * 60 * 60 * 1000, "EST");
                est.setStartRule(Calendar.MARCH, 8, -Calendar.SUNDAY, 2 * 60 * 60 * 1000);
                est.setEndRule(Calendar.NOVEMBER, 1, Calendar.SUNDAY, 2 * 60 * 60 * 1000);

                Calendar theCal = new GregorianCalendar(est);

                theCal.setTimeInMillis(testDateEndOut.getTimeInMillis());
                log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName());
                log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime());
                log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime()));
                log.info("offset = " + theCal.getTimeZone().getOffset(theCal.getTimeInMillis()));
                log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings());
                assertEquals("End date Should be In DST", true, theCal.getTimeZone().inDaylightTime(theCal.getTime()));

                theCal.setTimeInMillis(testDateEndIn.getTimeInMillis());
                log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName());
                log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime());
                log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime()));
                log.info("offset = " + theCal.getTimeZone().getOffset(theCal.getTimeInMillis()));
                log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings());
                assertEquals("End date Should be Out DST", false, theCal.getTimeZone().inDaylightTime(theCal.getTime()));

                theCal.setTimeInMillis(testDateStartIn.getTimeInMillis());
                log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName());
                log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime());
                log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime()));
                log.info("offset = " + theCal.getTimeZone().getOffset(theCal.getTimeInMillis()));
                log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings());
                assertEquals("Start date Should be in DST", true, theCal.getTimeZone().inDaylightTime(theCal.getTime()));

                theCal.setTimeInMillis(testDateStartOut.getTimeInMillis());
                log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName());
                log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime());
                log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime()));
                log.info("offset = " + theCal.getTimeZone().getOffset(theCal.getTimeInMillis()));
                log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings());
                assertEquals("Start date Should be Out DST", false, theCal.getTimeZone().inDaylightTime(theCal.getTime()));
            }
        }

    OK, I want to test the date limits to see if inDaylightTime returns the right thing. My rules are: DST starts the second Sunday of March at 2am, and DST ends the first Sunday of November at 2am. In 2012 (now) this gives us March 11 at 2am and November 4 at 2am, and you can see my test dates are set accordingly. Here is the output of my test run:

        2012-11-01 18:22:44,344 INFO [test.SimpleTimeZoneTest] - < Cal date = 2012-11-04 01:59:59.0 : Eastern Standard Time>
        2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - < Cal use DST = true>
        2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - < Cal In DST = false>
        2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - <offset = -18000000>
        2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - <DTS offset= 3600000>

    My first assert just fails and tells me that 2012-11-04 01:59:59 is not in DST... If I put 2012-11-04 00:59:59, the test passes! This 1-hour gap just puzzles me; can anyone explain this behavior? Oh, by the way, if anyone could elaborate on:

        est.setStartRule(Calendar.MARCH, 8, -Calendar.SUNDAY, 2 * 60 * 60 * 1000);

    Why does 8 mean the second week of March, and what does the -SUNDAY do? I can't figure this thing out on a real calendar example. Thanks.


  • Using DNFS for test purposes

    - by rene.kundersma
    Because of other priorities, such as bringing the first V2 Database Machine in the Netherlands into production, I spent less time on my blog than planned. I would however like to tell some things about DNFS, the built-in NFS client we have in Oracle RDBMS since 11.1. What DNFS is and how to set it up can all be found here. As you see, this documentation is actually the "Clusterware Installation Guide". I think that is weird; I would expect this to be part of the Admin Guide, especially the "Tablespace" chapter. I do however want to show what I did not find in the documentation that quickly (and solved after talking to my famous colleague "the prutser"). First, a quick setup:

    1. The standard ODM library needs to be replaced with the NFS ODM library:

        [oracle@ocm01 ~]$ cp $ORACLE_HOME/lib/libodm11.so $ORACLE_HOME/lib/libodm11.so_stub
        [oracle@ocm01 ~]$ ln -s $ORACLE_HOME/lib/libnfsodm11.so $ORACLE_HOME/lib/libodm11.so

    After changing to this library you will notice the following in your alert.log:

        Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 2.0

    2. The intention is to mount the datafiles over normal NAS (like NetApp). But, in case you want to test yourself and use an exported NFS filesystem, it should look like the following:

        [oracle@ocm01 ~]$ cat /etc/exports
        /u01/scratch/nfs *(rw,sync,insecure)

    Please note the "insecure" option in the export, since you will not be able to use DNFS without it if you export a filesystem from a host. Without the "insecure" option the NFS server considers the port used by the database "insecure" and the database is unable to acquire the mount:

        Direct NFS: NFS3ERR 1 Not owner. path ocm01.nl.oracle.com mntport 930 nfsport 2049

    3. Before configuring the new Oracle stanza for NFS we still need to configure a regular kernel NFS mount:

        [root@ocm01 ~]# cat /etc/fstab | grep nfs
        ocm01.nl.oracle.com:/u01/scratch/nfs /incoming nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600

    4. Then a so-called Oracle 'nfstab' needs to be created that specifies the available exports to use:

        [oracle@ocm01 ~]$ cat /etc/oranfstab
        server:ocm01.nl.oracle.com
        path:192.168.1.40
        export:/u01/scratch/nfs mount:/incoming

    5. Creating a tablespace with a datafile on the NFS location:

        SQL> create tablespace rk datafile '/incoming/rk.dbf' size 10M;

        Tablespace created.

    Be aware that it may happen that you do not specify the insecure option (like I did). In that case you will still see output from the query v$dnfs_servers:

        SQL> select * from v$dnfs_servers;

        ID SVRNAME              DIRNAME           MNTPORT NFSPORT  WTMAX  RTMAX
        -- -------------------- ----------------- ------- ------- ------ ------
         1 ocm01.nl.oracle.com  /u01/scratch/nfs      684    2049  32768  32768

    But querying v$dnfs_files and v$dnfs_channels will not return any result, and indeed, you will see the following message in the alert log when you create a file:

        Direct NFS: NFS3ERR 1 Not owner. path ocm01.nl.oracle.com mntport 930 nfsport 2049

    After correcting the export:

        SQL> select * from v$dnfs_files;

        FILENAME          FILESIZE PNUM SVR_ID
        ----------------- -------- ---- ------
        /incoming/rk.dbf  10493952   20      1

    Rene Kundersma
    Oracle Technology Services, The Netherlands


  • Is it possible to force the WCF test client to accept a self-signed certificate?

    - by Lawrence Johnston
    I have a WCF web service running in IIS 7 using a self-signed certificate (it's a proof of concept to make sure this is the route I want to go). It's required to use SSL. Is it possible to use the WCF Test Client to debug this service without needing a non-self-signed certificate? When I try, I get this error:

        Error: Cannot obtain Metadata from https:///Service1.svc
        If this is a Windows (R) Communication Foundation service to which you have access, please check that you have enabled metadata publishing at the specified address. For help enabling metadata publishing, please refer to the MSDN documentation at http://go.microsoft.com/fwlink/?LinkId=65455.
        WS-Metadata Exchange Error
        URI: https:///Service1.svc
        Metadata contains a reference that cannot be resolved: 'https:///Service1.svc'.
        Could not establish trust relationship for the SSL/TLS secure channel with authority ''.
        The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.
        The remote certificate is invalid according to the validation procedure.
        HTTP GET Error
        URI: https:///Service1.svc
        There was an error downloading 'https:///Service1.svc'.
        The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.
        The remote certificate is invalid according to the validation procedure.


  • Unit Test For NpgsqlCommand With Rhino Mocks

    - by J Pollack
    My unit test keeps getting the following error: "System.InvalidOperationException: The Connection is not open."

    The test:

        [TestFixture]
        public class Test
        {
            [Test]
            public void Test1()
            {
                NpgsqlConnection connection = MockRepository.GenerateStub<NpgsqlConnection>();
                // Tried to fake the open connection
                connection.Stub(x => x.State).Return(ConnectionState.Open);
                connection.Stub(x => x.FullState).Return(ConnectionState.Open);

                DbQueries queries = new DbQueries(connection);
                bool procedure = queries.ExecutePreProcedure("201003");
                Assert.IsTrue(procedure);
            }
        }

    Code under test:

        using System.Data;
        using Npgsql;

        public class DbQueries
        {
            private readonly NpgsqlConnection _connection;

            public DbQueries(NpgsqlConnection connection)
            {
                _connection = connection;
            }

            public bool ExecutePreProcedure(string date)
            {
                var command = new NpgsqlCommand("name_of_procedure", _connection);
                command.CommandType = CommandType.StoredProcedure;
                NpgsqlParameter parameter = new NpgsqlParameter { DbType = DbType.String, Value = date };
                command.Parameters.Add(parameter);
                command.ExecuteScalar();
                return true;
            }
        }

    How would you test the code using Rhino Mocks 3.6? PS. NpgsqlConnection is a connection to a PostgreSQL server.


  • Android's RelativeLayout Unit Test setup

    - by dqminh
    I'm trying to write a unit test for my Android RelativeLayout. Currently, my test code setup is as follows:

        public class SampleRelativeLayoutTest extends AndroidTestCase {
            private ViewGroup testView;
            private ImageView icon;
            private TextView title;

            @Override
            protected void setUp() throws Exception {
                super.setUp();
                // inflate the layout
                final Context context = getContext();
                final LayoutInflater inflater = LayoutInflater.from(context);
                testView = (ViewGroup) inflater.inflate(R.layout.sample_layout, null);
                // manually measure and layout
                testView.measure(500, 500);
                testView.layout(0, 0, 500, 500);
                // init variables
                icon = (ImageView) testView.findViewById(R.id.icon);
                title = (TextView) testView.findViewById(R.id.title);
            }
        }

    However, I encountered a NullPointerException with the following stack trace:

        java.lang.NullPointerException
            at android.widget.RelativeLayout.onMeasure(RelativeLayout.java:427)
            at android.view.View.measure(View.java:7964)
            at com.dqminh.test.view.SampleRelativeLayoutTest.setUp(SampleRelativeLayoutTest.java:33)
            at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:169)
            at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:154)
            at android.test.InstrumentationTestRunner.onStart(InstrumentationTestRunner.java:430)
            at android.app.Instrumentation$InstrumentationThread.run(Instrumentation.java:1447)

    What should I change in my setUp() code to make the test run properly?


  • How can I Fail a WebTest?

    - by craigb
    I'm using Microsoft WebTest and want to be able to do something similar to NUnit's Assert.Fail(). The best I have come up with is to throw new webTestException(), but this shows in the test results as an Error rather than a Failure. Other than reflecting on the WebTest to set a private member variable to indicate the failure, is there something I've missed?

    EDIT: I have also used the Assert.Fail() method, but this still shows up as an error rather than a failure when used from within WebTest, and the Outcome property is read-only (has no public setter).

    EDIT: Well, now I'm really stumped. I used reflection to set the Outcome property to Failed, but the test still passes! Here's the code that sets the Outcome to failed:

        public static class WebTestExtensions
        {
            public static void Fail(this WebTest test)
            {
                var method = test.GetType().GetMethod("set_Outcome",
                    BindingFlags.NonPublic | BindingFlags.Instance);
                method.Invoke(test, new object[] { Outcome.Fail });
            }
        }

    And here's the code that I'm trying to fail:

        public override IEnumerator<WebTestRequest> GetRequestEnumerator()
        {
            this.Fail();
            yield return new WebTestRequest("http://google.com");
        }

    Outcome is getting set to Outcome.Fail, but apparently the WebTest framework doesn't really use this to determine test pass/fail results.


  • I need to duplicate software packages installed on one HPUX system to a test system

    - by oneodd1
    I am very inexperienced with HPUX and need to duplicate a production server for a test environment. I have 11.11 on the prod server and have completed the base install on the test server. What I need is a way to add the installed packages from the prod server to the test server. Unfortunately I haven't the foggiest idea how to do this. I've thought of using ignite backup and restore but don't have matching tape types between the two. The other thought was using swlist to gather the installed packages and then going to the website to download and install on the test environment. Has anyone successfully done something like this? Pointers?


  • Basic jUnit Questions

    - by Epitaph
    I was testing a String multiplier class with a multiply() method that takes 2 numbers as inputs (as String) and returns the result number (as String):

        public String multiply(String num1, String num2);

    I have done the implementation and created a test class with test cases involving the input String parameter as:

    1) valid numbers
    2) characters
    3) special symbols
    4) empty string
    5) null value
    6) 0
    7) negative number
    8) float
    9) boundary values
    10) numbers that are valid but whose product is out of range
    11) numbers with + sign (+23)

    My questions:

    1) I'd like to know if "each and every" assertEquals() should be in its own test method, or whether I can group similar test cases like testInvalidArguments() to contain all asserts involving invalid characters, since ALL of them throw the same NumberFormatException.

    2) If testing an input value like a character ("a"), do I need to include test cases for ALL scenarios: "a" as the first argument, "a" as the second argument, and "a" and "b" as the 2 arguments?

    3) As per my understanding, the benefit of these unit tests is to find out the cases where the input from a user might fail and result in an exception; then we can give the user a meaningful message (asking them to provide valid input) instead of an exception. Is that correct? And is it the only benefit?

    4) Are the 11 test cases mentioned above sufficient? Did I miss something? Did I overdo it? When is enough?

    5) Following from the above point, have I successfully tested the multiply() method?
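
    On question 1, a sketch of the grouping idea (hedged: illustrated in Python's unittest rather than JUnit, and the multiply() stand-in below is hypothetical): similar invalid inputs can share one test method as long as each case is still reported individually.

        import unittest

        def multiply(num1, num2):
            # hypothetical stand-in for the String multiplier under test
            return str(int(num1) * int(num2))

        class MultiplyInvalidInputTests(unittest.TestCase):
            def test_invalid_arguments(self):
                # One method, many invalid inputs, each reported as its own subtest.
                for bad in ("a", "@", "", None, "1.5"):
                    with self.subTest(arg=bad):
                        with self.assertRaises((ValueError, TypeError)):
                            multiply(bad, "2")

        if __name__ == "__main__":
            unittest.main()

    JUnit 4 offers the same trade-off through parameterized tests or one method per case.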


  • test of ICMP block

    - by Marcos
    In my bash scripts I have been using something like:

        until fping -u google.com; do
            echo "$0[$$] Network/DNS down?? $(date)" 1>&2 && sleep $(($RANDOM%(1 + ++trynum * 1) +1)).222
        done

    to test for online connectivity. It halts in place, sleeping for growing random intervals, until it can ping google.com again.

    Problem: at some sites ICMP pings are blocked altogether, yet web pages are still reachable. What's a short way to test for this general case? Based on that test I will switch over to an HTTP-based test, like the exit status of curl -s google.com >/dev/null, if that is a good one.
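
    For comparison, here is a minimal HTTP-based probe sketched in Python (hedged: google.com as the probe host follows the question, and treating any HTTP response as "online" is one of several reasonable definitions):

        import urllib.error
        import urllib.request

        def online(url="http://google.com", timeout=5):
            """True if an HTTP round trip to url completes within timeout seconds."""
            try:
                urllib.request.urlopen(url, timeout=timeout)
                return True
            except urllib.error.HTTPError:
                return True   # got an HTTP status back, so the network is up
            except OSError:
                return False  # DNS failure, refused connection, timeout, ...

    Like the curl exit-status idea, this succeeds wherever TCP port 80 is open, regardless of whether ICMP is blocked.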


  • Google App Engine: Unit testing concurrent access to memcache

    - by Phuong Nguyen de ManCity fan
    Would you guys show me a way to simulate concurrent access to memcache on Google App Engine? I'm trying with LocalServiceTestHelpers and threads but don't have any luck. Every time I try to access memcache within a thread, I get this error:

        ApiProxy$CallNotFoundException: The API package 'memcache' or call 'Increment()' was not found

    I guess that the testing library of the GAE SDK tries to mimic the real environment and thus sets up the environment for only one thread (the thread running the test), so it cannot be seen by other threads. Here is a piece of code that can reproduce the problem:

        package org.seamoo.cache.memcacheImpl;

        import org.testng.Assert;
        import org.testng.annotations.AfterMethod;
        import org.testng.annotations.BeforeMethod;
        import org.testng.annotations.Test;

        import com.google.appengine.api.memcache.MemcacheService;
        import com.google.appengine.api.memcache.MemcacheServiceFactory;
        import com.google.appengine.tools.development.testing.LocalMemcacheServiceTestConfig;
        import com.google.appengine.tools.development.testing.LocalServiceTestHelper;

        public class MemcacheTest {
            LocalServiceTestHelper helper;

            public MemcacheTest() {
                LocalMemcacheServiceTestConfig memcacheConfig = new LocalMemcacheServiceTestConfig();
                helper = new LocalServiceTestHelper(memcacheConfig);
            }

            @BeforeMethod
            public void setUp() {
                helper.setUp();
            }

            /**
             * @see LocalServiceTest#tearDown()
             */
            @AfterMethod
            public void tearDown() {
                helper.tearDown();
            }

            @Test
            public void memcacheConcurrentAccess() throws InterruptedException {
                final MemcacheService service = MemcacheServiceFactory.getMemcacheService();
                Runnable runner = new Runnable() {
                    @Override
                    public void run() {
                        service.increment("test-key", 1L, 1L);
                        try {
                            Thread.sleep(200L);
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                        service.increment("test-key", 1L, 1L);
                    }
                };
                Thread t1 = new Thread(runner);
                Thread t2 = new Thread(runner);
                t1.start();
                t2.start();
                while (t1.isAlive()) {
                    Thread.sleep(100L);
                }
                Assert.assertEquals((Long) (service.get("test-key")), new Long(4L));
            }
        }


  • Unit test for Web Forms MVP presenter has a null Model

    - by jacksonakj
    I am using Web Forms MVP to write a DotNetNuke user control. When the SubmitContactUs event is raised in my unit test, the presenter attempts to set the Message property on the Model. However, View.Model is null in the presenter. Shouldn't the Web Forms MVP framework automatically build a new View.Model object in the presenter? It could be that the "Arrange" portion of my test is missing something that the presenter needs. Any help would be appreciated. Here is my test:

        using System;
        using AthleticHost.ContactUs.Core.Presenters;
        using AthleticHost.ContactUs.Core.Views;
        using Xunit;
        using Moq;

        namespace AthleticHost.ContactUs.Tests
        {
            public class ContactUsPresenterTests
            {
                [Fact]
                public void ContactUsPresenter_Sets_Message_OnSubmit()
                {
                    // Arrange
                    var view = new Mock<IContactUsView>();
                    var presenter = new ContactUsPresenter(view.Object);

                    // Act
                    view.Raise(v => v.Load += null, new EventArgs());
                    view.Raise(v => v.SubmitContactUs += null,
                        new SubmitContactUsEventArgs("Chester", "Tester",
                            "[email protected]", "http://www.test.com",
                            "This is a test of the emergancy broadcast system..."));
                    presenter.ReleaseView();

                    // Assert
                    Assert.Contains("Chester Tester", view.Object.Model.Message);
                }
            }
        }


  • Rhino Mocks Partial Mock

    - by dotnet crazy kid
    I am trying to test the logic from some existing classes. It is not possible to re-factor the classes at present, as they are very complex and in production. What I want to do is create a mock object and test a method that internally calls another method that is very hard to mock, so I want to just set a behaviour for the secondary method call. But when I set up the behaviour for the method, the code of the method is invoked and fails. Am I missing something, or is this just not possible to test without re-factoring the class? I have tried all the different mock types (Strict, Stub, Dynamic, Partial, etc.) but they all end up calling the method when I try to set up the behaviour.

        using System;
        using MbUnit.Framework;
        using Rhino.Mocks;

        namespace MMBusinessObjects.Tests
        {
            [TestFixture]
            public class PartialMockExampleFixture
            {
                [Test]
                public void Simple_Partial_Mock_Test()
                {
                    const string param = "anything";

                    // set up mocks
                    MockRepository mocks = new MockRepository();
                    var mockTestClass = mocks.StrictMock<TestClass>();

                    // record behaviour *** actually calls into the real method stub ***
                    Expect.Call(mockTestClass.MethodToMock(param)).Return(true);

                    // never get to here
                    mocks.ReplayAll();

                    // this is what I want to test
                    Assert.IsTrue(mockTestClass.MethodIWantToTest(param));
                }

                public class TestClass
                {
                    public bool MethodToMock(string param)
                    {
                        // some logic that is very hard to mock
                        throw new NotImplementedException();
                    }

                    public bool MethodIWantToTest(string param)
                    {
                        // this method calls MethodToMock
                        if (MethodToMock(param))
                        {
                            // some logic I want to test
                        }
                        return true;
                    }
                }
            }
        }


  • Multiple test Active Directory environments hand in hand with production domain controllers

    - by MadBoy
    What's the best approach to having multiple test environments next to the production one? We have multiple programming teams that build solutions that use Active Directory very often. We have tried different approaches, starting with their own domain controllers (in the same subnet), or additional OUs in our production AD that a team gets control over and can create/delete accounts within. We have thought of 4 possible solutions:

    1. Setting up separate OUs in our production environment.
    2. Creating subdomains of our contoso.com domain, like test.contoso.com or something.contoso.com, and delegating control to the teams (would we need additional DCs for this, or would the two we already have be enough to hold it?).
    3. Setting up an additional test domain controller that has a trust to our main domain, which all teams can use as they please.
    4. Setting up a single domain controller for every team/project.

    We're taking into consideration the amount of resources needed, security (for example, having multiple domain controllers with multiple passwords may lead users to use simpler passwords), and overall best practices for this scenario.


  • Why doesn't my test run tearDownAfterTestClass when it fails

    - by Memor-X
    In a test I am writing, setUpBeforeTests creates a new customer in the database who is then used to perform the tests. Naturally, when I finish the tests I should get rid of this test customer in tearDownAfterTestClass, so that I can create them anew when I rerun the tests and not get any false positives. When the tests all run fine I have no problem, but if a test fails and I go to rerun it, my setUpBeforeTests fails, because I check for mysql errors in it like this:

        try {
            if (!mysqli_query($connection, $query)) {
                $this->assertTrue(false);
            }
        } catch (Exception $exc) {
            $msg = '[tearDownAfterTestClass] Exception Error' . PHP_EOL . PHP_EOL;
            $msg .= 'Could not run query - ' . mysqli_error($connection) . PHP_EOL;
            $this->fail($msg);
        }

    The error I get is a primary key violation, which is expected, because I'm trying to create a new customer using the same data (the primary key is on email, which is also used to log in). That means that when the test failed, it didn't run tearDownAfterTestClass. Now I could just move everything from tearDownAfterTestClass to the start of setUpBeforeTests, but to me that seems like bad programming, since it defeats the purpose of even having tearDownAfterTestClass. So I am wondering: why isn't my tearDownAfterTestClass running when a test fails?

    NOTE: The database is a fundamental part of the system I'm testing, and the database and system are on a separate development environment, not the live one. The backup files for the database are almost 2 GB and take almost half an hour to restore. The purpose of the teardown is to remove any data we have added because of the test, so that we don't have to restore the database every time we run the tests.
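
    A framework-agnostic aside, sketched in Python's unittest purely as an analogy (hedged: this mirrors, not diagnoses, the PHPUnit case): class-level teardown is typically only guaranteed after class-level setup succeeded, so a common defence is an idempotent setup that removes leftovers before creating them. The in-memory "database" below is a hypothetical stand-in.

        import unittest

        _db = {}  # hypothetical stand-in for the customer table

        def delete_test_customer():
            _db.pop("test@example.com", None)

        def create_test_customer():
            if "test@example.com" in _db:
                raise RuntimeError("duplicate key on email")
            _db["test@example.com"] = {"name": "test customer"}

        class CustomerTests(unittest.TestCase):
            @classmethod
            def setUpClass(cls):
                delete_test_customer()  # clear leftovers from an aborted run
                create_test_customer()

            @classmethod
            def tearDownClass(cls):
                delete_test_customer()

            def test_customer_exists(self):
                self.assertIn("test@example.com", _db)

        if __name__ == "__main__":
            unittest.main()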


  • Mocking non-virtual methods in C++ without editing production code?

    - by wk1989
    Hello, I am a fairly new software developer currently working on adding unit tests to an existing C++ project that started years ago. Due to a non-technical reason, I'm not allowed to modify any existing code. The base class of all my modules has a bunch of methods for setting/getting data and communicating with other modules. Since I just want to unit test each individual module, I want to be able to use canned values for all my inter-module communication methods. I.e., for a method Ping() which checks if another module is active, I want to have it return true or false based on what kind of test I'm doing.

    I've been looking into Google Test and Google Mock, and they do support mocking non-virtual methods. However, the approach described (http://code.google.com/p/googlemock/wiki/CookBook#Mocking_Nonvirtual_Methods) requires me to "templatize" the original methods to take in either real or mock objects. I can't go and templatize my methods in the base class due to the requirement mentioned earlier, so I need some other way of mocking these non-virtual methods.

    Basically, the methods I want to mock are in some base class; the modules I want to unit test and create mocks of are derived classes of that base class. There are intermediate modules in between my base Module class and the modules that I want to test. I would appreciate any advice! Thanks, JW

    EDIT: A more concrete example. My base class is, let's say, rootModule; the module I want to test is leafModule. There is an intermediate module which inherits from rootModule, and leafModule inherits from this intermediate module. In my leafModule, I want to test the doStuff() method, which calls the non-virtual GetStatus(moduleName) defined in the rootModule class. I need to somehow make GetStatus() return a chosen canned value. Mocking is new to me, so is using mock objects even the right approach?


  • Test-Path returns different results in 64-bit and 32-bit PowerShell

    - by StarSpace
    I am developing a script which should run under both 64-bit and 32-bit PowerShell. Unfortunately, it seems that Test-Path returns different results in the 64-bit and 32-bit environments. Both sessions are running under the same user, and this user has full access to the specific registry key.

    64-bit PowerShell:

        > test-path HKLM:\SOFTWARE\Citrix\ProvisioningServices
        True

    32-bit PowerShell (x86):

        > test-path HKLM:\SOFTWARE\Citrix\ProvisioningServices
        False

    Any idea?
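
    The usual culprit here is WOW64 registry redirection: a 32-bit process asking for HKLM\SOFTWARE is silently redirected to HKLM\SOFTWARE\Wow6432Node unless it requests the 64-bit view explicitly. A sketch probing both views, written in Python's winreg for illustration (hedged: this demonstrates the redirection mechanism rather than a PowerShell fix, and the Citrix key path is taken from the question):

        import winreg

        PATH = r"SOFTWARE\Citrix\ProvisioningServices"

        def key_exists(view_flag):
            try:
                winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, PATH, 0,
                               winreg.KEY_READ | view_flag)
                return True
            except FileNotFoundError:
                return False

        # From a 32-bit process the default view is the redirected (Wow6432Node)
        # one; KEY_WOW64_64KEY asks for the native 64-bit view explicitly.
        print("64-bit view:", key_exists(winreg.KEY_WOW64_64KEY))
        print("32-bit view:", key_exists(winreg.KEY_WOW64_32KEY))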


  • TDD with limited resources

    - by bunglestink
    I work in a large company, but on a just-two-man team developing desktop LOB applications. I have been researching TDD for quite a while now, and although it is easy to see its benefits for larger applications, I am having a hard time justifying the time to begin using TDD on the scale of our applications. I understand its advantages in automating testing, improving maintainability, etc., but on our scale, writing even basic unit tests for all of our components could easily double development time. Since we are already undermanned and facing extreme deadlines, I am not sure what direction to take. While other practices such as agile iterative development make perfect sense, I am torn over the productivity trade-offs of TDD on a small team. Are the advantages of TDD worth the extra development time on small teams with very tight schedules?

