Search Results

Search found 24159 results on 967 pages for 'droid test'.


  • How to unit test business rules?

    - by Robert Lamb
    I need a unit test to make sure I am accumulating vacation hours properly. But vacation hours accumulate according to a business rule; if the rule changes, the unit test breaks. Is this acceptable? Should I expose the rule through a method and then call that method from both my code and my test to ensure that the unit test isn't so fragile? My question is: what is the right way to unit test business rules that may change?
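
    A common answer, sketched below in Java with hypothetical names (VacationPolicy, Employee, and the accrual rates are all invented for illustration, not taken from the question): put the rule behind its own small interface, test the rule in one focused place, and let everything else depend on the interface. When the rule changes, only the rule class and its test change with it.

        // All names and rates here are illustrative assumptions.
        interface VacationPolicy {
            double hoursAccruedFor(Employee employee, int daysWorked);
        }

        class Employee {
            private final int yearsOfService;
            Employee(int yearsOfService) { this.yearsOfService = yearsOfService; }
            int yearsOfService() { return yearsOfService; }
        }

        // The rule lives in exactly one class...
        class TenureBasedVacationPolicy implements VacationPolicy {
            public double hoursAccruedFor(Employee employee, int daysWorked) {
                double hoursPerDay = employee.yearsOfService() >= 5 ? 0.75 : 0.5; // assumed rates
                return hoursPerDay * daysWorked;
            }
        }

        // ...so a rule change breaks exactly one test class, which is acceptable:
        // the test specifies the current rule, not the rule for all time.
        public class TenureBasedVacationPolicyTest {
            @org.junit.Test
            public void longTenuredEmployeesAccrueAtTheHigherRate() {
                VacationPolicy policy = new TenureBasedVacationPolicy();
                org.junit.Assert.assertEquals(7.5, policy.hoursAccruedFor(new Employee(6), 10), 0.001);
            }
        }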

    Read the article

  • How can you unit test a DelegateCommand

    - by Damian
    I am trying to unit test my ViewModel and my SaveItem(save, CanSave) delegate command. I want to ensure that CanSave is called and returns the correct value given certain conditions. Basically, how can I invoke the delegate command from my unit test? (Actually it's more of an integration test.) Obviously I could just test the return value of the CanSave method, but I am trying to use BDD to the letter, i.e. no code without a test first.
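
    The question is about WPF's DelegateCommand, but the shape of the test is language-independent: arrange the view-model state, then call the command's can-execute guard directly and assert on it. A Java-flavoured sketch of that shape (every name here is hypothetical; in actual WPF code you would call CanExecute on the ICommand itself):

        // A minimal stand-in for ICommand's CanExecute/Execute pair.
        interface Command {
            boolean canExecute();
            void execute();
        }

        class ItemViewModel {
            private boolean dirty;
            final Command saveItem = new Command() {
                public boolean canExecute() { return dirty; }      // the CanSave guard
                public void execute() { dirty = false; /* persist */ }
            };
            void edit() { dirty = true; }
        }

        public class ItemViewModelTest {
            @org.junit.Test
            public void saveIsOnlyAllowedAfterAChange() {
                ItemViewModel vm = new ItemViewModel();
                org.junit.Assert.assertFalse(vm.saveItem.canExecute());
                vm.edit();
                org.junit.Assert.assertTrue(vm.saveItem.canExecute());
            }
        }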

    Read the article

  • Programming Practice/Test Contest?

    - by Emmanuel
    My situation: I'm on a programming team, and this year we want to weed out the weak links by holding a competition to pick the best coder from our group of candidates. The focus is on IEEExtreme-like contests. What I've done: I've been trying for 2 weeks already to find a practice or test site, like UVa or CodeChef. The plan once I find one: send the candidates a list of direct links to the problems (making them the contest's problem list), have them email me their correct answers' code at the time the judge says they have solved it, and accept the fastest one onto the team. Issues: We had already practiced on UVa (and on Programming Challenges too), so our former teammate (who will be in the candidate group) has an advantage if we use it. CodeChef has all its answers public, and since it shows the latest ones it will be extremely hard to verify whether an answer was copied. I've found other sites, like SPOJ, but they share at least some problems with CodeChef, so they inherit CodeChef's issue. So, what alternatives do you think there are? Any site that may work? Any place to get all the stuff to set up a Mooshak or similar contest (as in the stuff to get the problems; instructions to set up the server itself are easy to Google)? Any other idea?

    Read the article

  • Automated Error Reporting in .NET Reflector - harnessing the most powerful test rig in existence

    - by Alex.Davies
    I know a testing system that will find more bugs than all the unit testing, integration testing, and QA you could possibly do. And the chances are you're not using it. It's called your users. It's a cliché that you should test so that you find your bugs rather than your users. Of course you should. But it's also a cliché that no software is ever shipped bug-free. Lost cause? No, opportunity! I think .NET Reflector 6 is pretty stable. In fact I know exactly how stable it is, because some (surprisingly high) proportion of its users tell me every time it crashes: if they press "Send Error Report", I get the full report, stack trace and local variables included, and then I fix it. As a rough guess, while a standard stack trace is enough to fix a problem 30% of the time, having all those local variables in the stack trace means I can fix it about 80% of the time. How does this all happen? Did it take ages to code this swish system? Nope, it was one checkbox in SmartAssembly. It adds some clever code to your assembly to capture local variables every time an exception is thrown, and to ask your user to report it to you, with a variety of other useful information. Of course not all bugs show up as exceptions. But if you get used to knowing that SmartAssembly will tell you when an exception happens, you begin to change your coding style. Now, as long as an exception gets thrown in any situation you don't expect, you'll fix it if it ever happens. You'll start throwing exceptions liberally, and stop having to think about whether tiny edge cases are possible, as long as they throw an exception if they happen.
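
    SmartAssembly injects its capture post-build, and the product is .NET, so the following is only an analogy; but the core "report every unhandled exception" hook is easy to sketch by hand, here in Java with an invented reporting endpoint:

        import java.io.PrintWriter;
        import java.io.StringWriter;

        public class ErrorReporter {
            public static void install() {
                Thread.setDefaultUncaughtExceptionHandler((thread, error) -> {
                    StringWriter trace = new StringWriter();
                    error.printStackTrace(new PrintWriter(trace));
                    // A real reporter would POST this to a collection endpoint,
                    // ideally after asking the user's consent; local-variable
                    // capture (SmartAssembly's trick) needs bytecode rewriting
                    // and is well beyond a plain handler like this.
                    System.err.println("Would report from " + thread.getName() + ":\n" + trace);
                });
            }

            public static void main(String[] args) {
                install();
                throw new IllegalStateException("unexpected edge case"); // gets reported
            }
        }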

    Read the article

  • JUnit test in BlueJ [closed]

    - by user1721929
    Can someone make a JUnit test of this?

        public class PersonName {
            int NumberNames(String wholename) {
                // store the name passed in to the method
                String testname = wholename;
                // initialize number of names found
                int numnames = 0;
                // on each iteration remove one name
                while (testname.length() > numnames) {
                    // take the "white space" from the beginning and end
                    testname = testname.trim();
                    // determine the position of the first blank
                    // .. end of the first word
                    int posBlank = testname.indexOf(' ');
                    // cut off word
                    /**
                     * it continues to the stop sign because that is where you commanded it to end
                     */
                    testname = testname.substring(posBlank + 1, testname.length());
                    // System.out.println(numnames);
                    // System.out.println(testname);
                    numnames++;
                    System.out.println(testname);
                }
                return numnames;
            }

            public static void main(String args[]) {
                PersonName One = new PersonName();
                System.out.println(One.NumberNames("Bobby"));
                System.out.println(One.NumberNames("Bobby Smith"));
                System.out.println(One.NumberNames("Bobby L. Smith"));
                System.out.println(One.NumberNames("  Bobby Paul Smith Jr.  "));
            }
        }
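
    A minimal JUnit 4 sketch of such a test, asserting the counts the method looks intended to return (1, 2, 3 and 4). Note that tracing the loop condition testname.length() > numnames suggests several of these assertions will fail as the code stands (for example, "Bobby" keeps looping until numnames reaches the word's length); surfacing exactly that kind of bug is what the test is for:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class PersonNameTest {
            private final PersonName personName = new PersonName();

            @Test
            public void countsASingleName() {
                assertEquals(1, personName.NumberNames("Bobby"));
            }

            @Test
            public void countsFirstAndLastName() {
                assertEquals(2, personName.NumberNames("Bobby Smith"));
            }

            @Test
            public void countsAMiddleInitialAsAName() {
                assertEquals(3, personName.NumberNames("Bobby L. Smith"));
            }

            @Test
            public void ignoresLeadingAndTrailingWhitespace() {
                assertEquals(4, personName.NumberNames("  Bobby Paul Smith Jr.  "));
            }
        }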

    Read the article

  • Service Testing made easy with SO-Aware Test Workbench

    - by cibrax
    I am happy to announce today a new addition to our SO-Aware service repository toolset: SO-Aware Test Workbench, a WPF desktop application for doing functional and load testing against existing WCF services. This tool is completely integrated with the SO-Aware service repository, which makes configuring new load and functional tests for WCF SOAP and REST services a breeze. From now on, the service repository can play a very important role in an organization by facilitating collaboration between developers and testers. Developers can create and register new services in the repository with all the related artifacts, like configuration. Testers, on the other hand, can just pick one of the existing services in the repository and create functional or load tests from there, with no need to deal with specific details of the service implementation, location or configuration settings. Developers and testers can later use the results of those tests to modify the services or adjust different settings on the tests or service configuration. Gustavo Machado, one of the developers behind this project, has written an excellent post describing all the functionality you can find in the tool today. You can also see the tool in action in this Endpoint TV episode with Jesus and Ron Jacobs.

    Read the article

  • Dreamweaver CS5 Test server works but cannot connect to host server through files window

    - by Toni
    I've been managing this site for a long time and update coupons on it approximately every 60 days. For some reason, I'm now having problems: I opened DW CS5 today and made the changes necessary to update coupons. I was able to connect to the host server with no problem, but most of my coupon images were not showing up. DW tells me I have 70 broken links, which can't be the case because I've reviewed them. Some links work and are the same as the broken links other than the file name. Unable to figure it out, I thought maybe restarting my Mac would help. However, upon logging back into DW, I am now unable to connect to the host server. I get an FTP error notice that the file doesn't exist or there is a permissions problem. The funny thing is, I can connect successfully if I test the connection through the Site Management window. I have connected to my host server through FileZilla and can see all the files there; unfortunately, I still can't get the web pages to display the coupons. Has anyone else had this issue, and if so, what is the solution? I feel like this is probably a simple fix, but I cannot for the life of me determine what it is! If anyone knows a solution, I'd really appreciate the help! -Toni

    Read the article

  • Introducing a (new) test method to a team

    - by Jon List
    A couple of months ago I was hired in a new job (I'm fresh out of my Masters in software engineering). The company mainly consists of ERP consultants, but I was hired into their fairly small web department (6 developers); our main task is ERP/e-commerce integration (ERP-integrated web shops). The department is growing, and recently my manager asked me to start thinking about introducing tests to the team. I love a challenge, but frankly I'm a bit scared (I'm the least experienced member of the team). Currently the method of testing is clicking around in the web shop and asking the customer if the products are there, if they look okay, and if orders are posted correctly to the ERP. We are getting a lot of support cases on previous projects, where a customer or a customer's customer has run into errors, which, I suppose, is why my manager wants more structured testing. Off the top of my head, I thought of some (obvious?) improvements, like looking at the requirement specification, having an issue tracker, enabling team members to register their time on a "tests" line on the budget, and circulating tasks amongst members of the team. But as I see it we have three main challenges: 1) general website testing (JavaScript, C#, ASP.NET and CMS integration tests); 2) (live) ERP integration testing (customers rarely want to pay for test environments); 3) adopting a method in the team. I like the responsibility, but I am afraid that I'm in a little bit over my head. I expect that my manager expects me to set up some kind of workshop for the team where I present some techniques and ideas and where we (the team) can find some solutions together. What I learned in school was mostly unit testing and program verification, not so much testing across multiple systems and applications. What I'm looking for here is references/advice/pointers/anecdotes; anything that might help me get smarter and improve the current method of my team. Thanks!!

    Read the article

  • Alternative to NV Occlusion Query - getting the number of fragments which passed the depth test

    - by Etan
    In "modern" environments, the "NV Occlusion Query" extension provides a method to get the number of fragments which passed the depth test. However, on the iPad/iPhone using OpenGL ES, the extension is not available. What is the most performant approach to implement similar behaviour in the fragment shader? Some of my ideas: Render the object completely in white, then count all the colors together using a two-pass shader, where first a vertical line is rendered and for each of its fragments the shader computes the sum over the whole row; then a single vertex is rendered whose fragment sums all the partial sums of the first pass. Doesn't seem to be very efficient. Render the object completely in white over a black background. Downsample recursively, abusing the hardware's linear interpolation between texels, until the image is at a reasonably small resolution. This leads to fragments whose greyscale level depends on the number of white pixels in their corresponding region. Is this even accurate enough? ... ?
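
    On the accuracy question for the downsampling idea: each 2x2 box average preserves the total "white mass" exactly in real arithmetic, so all of the error comes from the render target quantizing the intermediate averages at every pass (8-bit targets lose a lot after a few levels). A CPU model of the reduction in Java, purely for intuition:

        public class DownsampleCount {
            // Repeatedly 2x2 box-average a power-of-two square image down to one value.
            static double reduceToOnePixel(double[][] img) {
                for (int size = img.length; size > 1; size /= 2) {
                    double[][] next = new double[size / 2][size / 2];
                    for (int y = 0; y < size / 2; y++)
                        for (int x = 0; x < size / 2; x++)
                            next[y][x] = (img[2 * y][2 * x] + img[2 * y][2 * x + 1]
                                        + img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0;
                    img = next;
                }
                return img[0][0];
            }

            public static void main(String[] args) {
                int n = 256;
                double[][] img = new double[n][n];
                int white = 0;
                for (int y = 0; y < n; y++)
                    for (int x = 0; x < n; x++)
                        if ((x * x + y * y) % 7 == 0) { img[y][x] = 1.0; white++; }
                double mean = reduceToOnePixel(img);
                // Exact in doubles; on the GPU each pass rounds to the texture
                // format's precision, which is where the count drifts.
                System.out.println("actual=" + white + ", estimated=" + Math.round(mean * n * n));
            }
        }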

    Read the article

  • Unit testing and Test Driven Development questions

    - by Theomax
    I'm working on an ASP.NET MVC website which performs relatively complex calculations as one of its functions. This functionality was developed some time ago (before I started working on the website) and defects have occurred whereby the calculations are not being performed properly (basically these calculations are applied to each user which has certain flags on their record, etc). Note: these defects have only been observed by users thus far, and not yet investigated in code while debugging. My questions are: Because the existing unit tests all pass, and therefore do not indicate that the reported defects exist, does this suggest the original code that was implemented is incorrect? i.e. either the requirements were incorrect and were coded accordingly, or the code just doesn't do what it was supposed to do? If I use the TDD approach, would I disregard the existing unit tests, as they don't show there are any problems with the calculations functionality, and start by writing some failing unit tests which test/prove these problems occur, then add code to make them pass? Note: if it's simply a bug that can be found while debugging the code, do the unit tests need to be updated, since they are already passing?
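
    On the second question, the usual TDD move is to keep the existing (still valuable) tests and add a new test that encodes the defect report: it fails first, proving the defect, and once the fix is in it passes and stays as a regression guard. A sketch with invented stand-in types, since the real calculation isn't shown:

        // Minimal invented stand-ins so the sketch compiles; the real types live in the app.
        class UserRecord {
            private final java.util.Set<String> flags = new java.util.HashSet<>();
            void addFlag(String flag) { flags.add(flag); }
            boolean hasFlag(String flag) { return flags.contains(flag); }
        }

        class CalculationService {
            double calculateFor(UserRecord user) {
                return user.hasFlag("SPECIAL_RATE") ? 42.0 : 10.0; // placeholder logic
            }
        }

        public class FlaggedUserCalculationRegressionTest {
            // Encodes the report: users carrying a certain flag are calculated wrongly.
            @org.junit.Test
            public void usersWithTheFlagGetTheAdjustedResult() {
                UserRecord user = new UserRecord();
                user.addFlag("SPECIAL_RATE");            // hypothetical flag name
                CalculationService service = new CalculationService();
                // The expected value comes from the requirement/defect report,
                // not from whatever the current code happens to produce.
                org.junit.Assert.assertEquals(42.0, service.calculateFor(user), 0.001);
            }
        }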

    Read the article

  • .NET developer has a few hours to cram for a Java proficiency test. What to do?

    - by Paul Sasik
    I have an interview tomorrow morning and just found out that I will also be taking an hour-long Java proficiency test! I am a certified C# .NET developer but have barely touched Java since college. (Yes, I am thinking about switching from .NET development to Java!) I'm not going to be able to effectively cram the whole of the Java library by tomorrow morning. What are some key ideas that I should study that might actually make a difference in such a short time frame? (The examiners will be taking into account that I have had little exposure to Java.) Thanks! Edit NOTE: I think that the exam is going to be written, so no specific IDE.

    Read the article

  • understand SimpleTimeZone and DST Test

    - by Cygnusx1
    I have an issue with the use of the SimpleTimeZone class in Java. First, the Javadoc is nice but not quite easy to understand in regard to the start and end rules. But with the help of some examples found on the web, I managed to get it right (I still don't understand why 8 represents the second week of a month in day_of_month!!! but whatever). Now I have written a simple JUnit test to validate what I understand:

        package test;

        import static org.junit.Assert.assertEquals;

        import java.sql.Timestamp;
        import java.util.Calendar;
        import java.util.GregorianCalendar;
        import java.util.SimpleTimeZone;

        import org.apache.log4j.Logger;
        import org.junit.Test;

        public class SimpleTimeZoneTest {
            Logger log = Logger.getLogger(SimpleTimeZoneTest.class);

            @Test
            public void testTimeZoneWithDST() throws Exception {
                Calendar testDateEndOut = new GregorianCalendar(2012, Calendar.NOVEMBER, 4, 01, 59, 59);
                Calendar testDateEndIn = new GregorianCalendar(2012, Calendar.NOVEMBER, 4, 02, 00, 00);
                Calendar testDateStartOut = new GregorianCalendar(2012, Calendar.MARCH, 11, 01, 59, 59);
                Calendar testDateStartIn = new GregorianCalendar(2012, Calendar.MARCH, 11, 02, 00, 00);

                SimpleTimeZone est = new SimpleTimeZone(-5 * 60 * 60 * 1000, "EST");
                est.setStartRule(Calendar.MARCH, 8, -Calendar.SUNDAY, 2 * 60 * 60 * 1000);
                est.setEndRule(Calendar.NOVEMBER, 1, Calendar.SUNDAY, 2 * 60 * 60 * 1000);

                Calendar theCal = new GregorianCalendar(est);

                theCal.setTimeInMillis(testDateEndOut.getTimeInMillis());
                log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName());
                log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime());
                log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime()));
                log.info("offset = " + theCal.getTimeZone().getOffset(theCal.getTimeInMillis()));
                log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings());
                assertEquals("End date Should be In DST", true, theCal.getTimeZone().inDaylightTime(theCal.getTime()));

                theCal.setTimeInMillis(testDateEndIn.getTimeInMillis());
                log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName());
                log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime());
                log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime()));
                log.info("offset = " + theCal.getTimeZone().getOffset(theCal.getTimeInMillis()));
                log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings());
                assertEquals("End date Should be Out DST", false, theCal.getTimeZone().inDaylightTime(theCal.getTime()));

                theCal.setTimeInMillis(testDateStartIn.getTimeInMillis());
                log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName());
                log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime());
                log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime()));
                log.info("offset = " + theCal.getTimeZone().getOffset(theCal.getTimeInMillis()));
                log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings());
                assertEquals("Start date Should be in DST", true, theCal.getTimeZone().inDaylightTime(theCal.getTime()));

                theCal.setTimeInMillis(testDateStartOut.getTimeInMillis());
                log.info(" Cal date = " + new Timestamp(theCal.getTimeInMillis()) + " : " + theCal.getTimeZone().getDisplayName());
                log.info(" Cal use DST = " + theCal.getTimeZone().useDaylightTime());
                log.info(" Cal In DST = " + theCal.getTimeZone().inDaylightTime(theCal.getTime()));
                log.info("offset = " + theCal.getTimeZone().getOffset(theCal.getTimeInMillis()));
                log.info("DTS offset= " + theCal.getTimeZone().getDSTSavings());
                assertEquals("Start date Should be Out DST", false, theCal.getTimeZone().inDaylightTime(theCal.getTime()));
            }
        }

    OK, I want to test the date limits to see if inDaylightTime returns the right thing! So, my rules are: DST starts the second Sunday of March at 2am; DST ends the first Sunday of November at 2am. In 2012 (now) this gives us March 11 at 2am and November 4 at 2am. You can see my test dates are set properly!!! Well, here is the output of my test run:

        2012-11-01 18:22:44,344 INFO [test.SimpleTimeZoneTest] - < Cal date = 2012-11-04 01:59:59.0 : Eastern Standard Time>
        2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - < Cal use DST = true>
        2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - < Cal In DST = false>
        2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - <offset = -18000000>
        2012-11-01 18:22:44,345 INFO [test.SimpleTimeZoneTest] - <DTS offset= 3600000>

    My first assert just fails and tells me that 2012-11-04 01:59:59 is not in DST... !!!!??? If I put 2012-11-04 00:59:59, the test passes! This 1-hour gap just puzzles me... can anyone explain this behavior? Oh, by the way, if anyone could elaborate on:

        est.setStartRule(Calendar.MARCH, 8, -Calendar.SUNDAY, 2 * 60 * 60 * 1000);

    Why does 8 mean the second week of March... and the -SUNDAY? I can't figure this thing out on a real calendar example!!! Thanks
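
    For what it's worth on the 8 / -SUNDAY question: per the SimpleTimeZone Javadoc, a positive dayOfMonth combined with a negative dayOfWeek means "the first dayOfWeek on or after dayOfMonth". The first Sunday on or after the 8th can fall no earlier than the 8th and no later than the 14th, which is exactly the second Sunday of the month. A small sketch that makes the 2012 transition visible (the dates are chosen purely for illustration):

        import java.util.Calendar;
        import java.util.GregorianCalendar;
        import java.util.SimpleTimeZone;

        public class StartRuleDemo {
            public static void main(String[] args) {
                SimpleTimeZone est = new SimpleTimeZone(-5 * 60 * 60 * 1000, "EST");
                // dayOfMonth = 8, dayOfWeek = -SUNDAY:
                // "first Sunday on or after March 8" = second Sunday of March.
                est.setStartRule(Calendar.MARCH, 8, -Calendar.SUNDAY, 2 * 60 * 60 * 1000);
                // dayOfWeekInMonth = 1, dayOfWeek = SUNDAY: first Sunday of November.
                est.setEndRule(Calendar.NOVEMBER, 1, Calendar.SUNDAY, 2 * 60 * 60 * 1000);

                GregorianCalendar cal = new GregorianCalendar(est);
                for (int day = 8; day <= 14; day++) {
                    cal.set(2012, Calendar.MARCH, day, 12, 0, 0); // noon, clear of the 2am switch
                    System.out.println("March " + day + " in DST: " + est.inDaylightTime(cal.getTime()));
                }
                // Prints false for March 8-10 and true from March 11 on,
                // March 11 being the second Sunday of March 2012.
            }
        }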

    Read the article

  • Using DNFS for test purposes

    - by rene.kundersma
    Because of other priorities, such as bringing the first v2 Database Machine in the Netherlands into production, I spent less time on my blog than planned. I would however like to tell some things about DNFS, the built-in NFS client we have in Oracle RDBMS since 11.1. What DNFS is and how to set it up can all be found here. As you can see, that documentation is actually the "Clusterware Installation Guide". I think that is weird; I would expect this to be part of the Admin Guide, especially the "Tablespace" chapter. I do however want to show what I did not find in the documentation that quickly (and solved after talking to my famous colleague "the prutser"). First, a quick setup:

    1. The standard ODM library needs to be replaced with the NFS ODM library:

        [oracle@ocm01 ~]$ cp $ORACLE_HOME/lib/libodm11.so $ORACLE_HOME/lib/libodm11.so_stub
        [oracle@ocm01 ~]$ ln -s $ORACLE_HOME/lib/libnfsodm11.so $ORACLE_HOME/lib/libodm11.so

    After changing to this library you will notice the following in your alert.log:

        Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 2.0

    2. The intention is to mount the datafiles over normal NAS (like NetApp). But in case you want to test for yourself with an exported NFS filesystem, it should look like the following:

        [oracle@ocm01 ~]$ cat /etc/exports
        /u01/scratch/nfs *(rw,sync,insecure)

    Please note the "insecure" option in the export; you will not be able to use DNFS without it if you export a filesystem from a host. Without the "insecure" option the NFS server considers the port used by the database "insecure" and the database is unable to acquire the mount:

        Direct NFS: NFS3ERR 1 Not owner. path ocm01.nl.oracle.com mntport 930 nfsport 2049

    3. Before configuring the new Oracle stanza for NFS we still need to configure a regular kernel NFS mount:

        [root@ocm01 ~]# cat /etc/fstab | grep nfs
        ocm01.nl.oracle.com:/u01/scratch/nfs /incoming nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600

    4. Then a so-called 'oranfstab' needs to be created that specifies the available exports to use:

        [oracle@ocm01 ~]$ cat /etc/oranfstab
        server:ocm01.nl.oracle.com
        path:192.168.1.40
        export:/u01/scratch/nfs mount:/incoming

    5. Creating a tablespace with a datafile on the NFS location:

        SQL> create tablespace rk datafile '/incoming/rk.dbf' size 10M;

        Tablespace created.

    Note that it may happen that you do not specify the insecure option (like I did at first). In that case you will still see output from the query v$dnfs_servers:

        SQL> select * from v$dnfs_servers;

        ID SVRNAME             DIRNAME          MNTPORT NFSPORT WTMAX RTMAX
        -- ------------------- ---------------- ------- ------- ----- -----
         1 ocm01.nl.oracle.com /u01/scratch/nfs     684    2049 32768 32768

    But querying v$dnfs_files and v$dnfs_channels will not return any results, and indeed, you will see the following message in the alert log when you create a file:

        Direct NFS: NFS3ERR 1 Not owner. path ocm01.nl.oracle.com mntport 930 nfsport 2049

    After correcting the export:

        SQL> select * from v$dnfs_files;

        FILENAME         FILESIZE PNUM SVR_ID
        ---------------- -------- ---- ------
        /incoming/rk.dbf 10493952   20      1

    Rene Kundersma
    Oracle Technology Services, The Netherlands

    Read the article

  • Is it possible to force the WCF test client to accept a self-signed certificate?

    - by Lawrence Johnston
    I have a WCF web service running in IIS 7 using a self-signed certificate (it's a proof of concept to make sure this is the route I want to go). It's required to use SSL. Is it possible to use the WCF Test Client to debug this service without needing a non-self-signed certificate? When I try, I get this error:

        Error: Cannot obtain Metadata from https:///Service1.svc
        If this is a Windows (R) Communication Foundation service to which you have access, please check that you
        have enabled metadata publishing at the specified address. For help enabling metadata publishing, please
        refer to the MSDN documentation at http://go.microsoft.com/fwlink/?LinkId=65455.

        WS-Metadata Exchange Error
        URI: https:///Service1.svc
        Metadata contains a reference that cannot be resolved: 'https:///Service1.svc'.
        Could not establish trust relationship for the SSL/TLS secure channel with authority ''.
        The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.
        The remote certificate is invalid according to the validation procedure.

        HTTP GET Error
        URI: https:///Service1.svc
        There was an error downloading 'https:///Service1.svc'.
        The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.
        The remote certificate is invalid according to the validation procedure.
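
    As far as I know the WCF Test Client has no switch for this; the workaround usually suggested is to install the self-signed certificate into the client machine's Trusted Root Certification Authorities store so that validation simply succeeds. In a custom test client you can instead relax validation in code; the idea is language-independent, so here is a Java-flavoured, test-only sketch of it (never ship this):

        import javax.net.ssl.HttpsURLConnection;
        import javax.net.ssl.SSLContext;
        import javax.net.ssl.TrustManager;
        import javax.net.ssl.X509TrustManager;
        import java.security.SecureRandom;
        import java.security.cert.X509Certificate;

        // TEST ONLY: accepts any server certificate, self-signed included.
        public final class TrustAllForTests {
            public static void install() throws Exception {
                TrustManager[] trustAll = { new X509TrustManager() {
                    public X509Certificate[] getAcceptedIssuers() { return new X509Certificate[0]; }
                    public void checkClientTrusted(X509Certificate[] chain, String authType) { }
                    public void checkServerTrusted(X509Certificate[] chain, String authType) { }
                } };
                SSLContext ctx = SSLContext.getInstance("TLS");
                ctx.init(null, trustAll, new SecureRandom());
                HttpsURLConnection.setDefaultSSLSocketFactory(ctx.getSocketFactory());
                // Self-signed certs usually fail hostname checks too, so relax that as well.
                HttpsURLConnection.setDefaultHostnameVerifier((host, session) -> true);
            }
        }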

    Read the article

  • I need to duplicate software packages installed on one HPUX system to a test system

    - by oneodd1
    I am very inexperienced with HPUX and need to duplicate a production server for a test environment. I have 11.11 on the prod server and have completed the base install on the test server. What I need is a way to add the installed packages from the prod server to the test server. Unfortunately I haven't the foggiest idea how to do this. I've thought of using ignite backup and restore but don't have matching tape types between the two. The other thought was using swlist to gather the installed packages and then going to the website to download and install on the test environment. Has anyone successfully done something like this? Pointers?

    Read the article

  • Bad linking in Qt unit test -- missing the link to the moc file?

    - by dwj
    I'm trying to unit test a class that inherits QObject; the class itself is located up one level in my directory structure. When I build the unit test I get the standard unresolved errors if a class' MOC file cannot be found:

        test.obj : error LNK2001: unresolved external symbol "public: virtual void * __thiscall UnitToTest::qt_metacast(char const *)" (?qt_metacast@UnitToTest@@UAEPAXPBD@Z)

    plus 2 more missing functions. The MOC file is created but appears to not be linking. I've been poking around SO, the web, and Qt's docs for quite a while and have hit a wall. How do I get the unit test to include the MOC file in the link?

    My project file is dead simple:

        TEMPLATE = app
        TARGET = test
        DESTDIR = .
        CONFIG += qtestlib
        INCLUDEPATH += . ..
        DEPENDPATH += .
        HEADERS += test.h
        SOURCES += test.cpp ../UnitToTest.cpp stubs.cpp
        DEFINES += UNIT_TEST

    My directory structure and files (Makefiles removed for clarity):

        C:.
        |   UnitToTest.cpp
        |   UnitToTest.h
        |
        \---test
            |   test.cpp
            |   test.h
            |   test.pro
            |   stubs.cpp
            |
            +---debug
                    UnitToTest.obj
                    test.obj
                    test.pdb
                    moc_test.cpp
                    moc_test.obj
                    stubs.obj

    Edit: Additional information. The generated Makefile.Debug shows the moc file missing:

        SOURCES = test.cpp \
                  ..\test.cpp \
                  stubs.cpp debug\moc_test.cpp
        OBJECTS = debug\test.obj \
                  debug\UnitToTest.obj \
                  debug\stubs.obj \
                  debug\moc_test.obj

    Read the article

  • test of ICMP block

    - by Marcos
    In my bash scripts I have been using something like:

        until fping -u google.com; do
            echo "$0[$$] Network/DNS down?? $(date)" 1>&2 && sleep $(($RANDOM%(1 + ++trynum * 1) +1)).222
        done

    to test for online connectivity. It halts in place, sleeping growing random intervals, until it can ping google.com again. Problem: at some sites ICMP pings are blocked altogether, and web pages are still reachable. What's a short way to test for this general case? Based on that test I will switch over to an HTTP-based test, like the exit status of curl -s google.com >/dev/null, if that is a good one.
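
    The curl exit-status idea generalises to any HTTP client; for comparison, here is a minimal Java sketch of an HTTP-based reachability probe that sidesteps ICMP entirely (the URL and timeout are arbitrary choices):

        import java.net.HttpURLConnection;
        import java.net.URL;

        public class HttpProbe {
            // Returns true if any HTTP response comes back within the timeout.
            static boolean reachable(String url, int timeoutMillis) {
                try {
                    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                    conn.setConnectTimeout(timeoutMillis);
                    conn.setReadTimeout(timeoutMillis);
                    conn.setRequestMethod("HEAD");   // no body needed, just the handshake
                    conn.getResponseCode();
                    return true;
                } catch (Exception e) {
                    return false;                    // DNS failure, timeout, refused, ...
                }
            }

            public static void main(String[] args) {
                System.out.println(reachable("http://google.com", 3000));
            }
        }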

    Read the article

  • Multiple test Active Directory environments hand in hand with production domain controllers

    - by MadBoy
    What's the best approach to having multiple test environments next to the production one? We have multiple programming teams that build solutions that use Active Directory very often. We have tried different approaches, starting with their own domain controllers (in the same subnet), or additional OUs in our production AD that a team gets control over and can create/delete accounts within. We have thought of 4 possible solutions: 1. Setting up separate OUs in our production environment. 2. Creating subdomains of our contoso.com domain, like test.contoso.com or something.contoso.com, and delegating control to the teams (would we need additional DCs, or would the two we already have be enough to hold this?). 3. Setting up an additional test domain controller that has a trust to our main domain, which all teams can use as they please. 4. Setting up a single domain controller for every team/project. We're taking into consideration the amount of resources needed, security (for example, having multiple domain controllers with multiple passwords may lead users to use simpler passwords) and overall best practices for this scenario.

    Read the article

  • Is an Android-based Phone a Suitable MP3 Player for Music Streamed over the Internet?

    - by James McFarland
    I am considering getting an HTC phone running Android from Verizon Wireless when I next upgrade my phone. I also have an online account with a music vendor, where I have the right to listen to my collection, but not to download the MP3s. Further, I have an unlimited data plan and Wi-Fi, so I have full access to bandwidth without any concerns. I am especially interested in mounting my phone in a car kit and streaming my online music to my car's sound system while driving. If you are experienced in this scenario, or have tried it: is it reasonable to expect my HTC Android phone to provide me with streaming music via my cell data plan anywhere I get cell service?

    Read the article

  • Why doesn't my test run tearDownAfterTestClass when it fails

    - by Memor-X
    In a test I am writing, setUpBeforeTests creates a new customer in the database, who is then used to perform the tests. Naturally, when I finish the tests I should get rid of this test customer in tearDownAfterTestClass, so that I can create them anew when I rerun the tests and not get any false positives. Now, when the tests all run fine I have no problem, but if a test fails and I go to rerun it, my setUpBeforeTests fails, because I check for MySQL errors in it like this:

        try {
            if (!mysqli_query($connection, $query)) {
                $this->assertTrue(false);
            }
        } catch (Exception $exc) {
            $msg = '[tearDownAfterTestClass] Exception Error' . PHP_EOL . PHP_EOL;
            $msg .= 'Could not run query - ' . mysqli_error($connection) . PHP_EOL;
            $this->fail($msg);
        }

    The error I get is a primary key violation, which is expected, because I'm trying to create a new customer using the same data (the primary key is on email, which is also used to log in). However, that means when the test failed it didn't run tearDownAfterTestClass. Now, I could just move everything in tearDownAfterTestClass to the start of setUpBeforeTests, but to me that seems like bad programming, since it defeats the purpose of even having tearDownAfterTestClass. So I am wondering: why isn't my tearDownAfterTestClass running when a test fails? NOTE: the database is a fundamental part of the system I'm testing, and the database and system are on a separate development environment, not the live one. The backup files for the database are almost 2 GB and take almost half an hour to restore; the purpose of the teardown is to remove any data we have added because of the test, so that we don't have to restore the database every time we run the tests.
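
    I can't speak for the framework behind setUpBeforeTests/tearDownAfterTestClass here, but as a point of comparison, JUnit 4 runs class-level teardown even when a test method fails, which is exactly what this fixture needs. A tiny sketch like the following is a cheap way to probe what your own framework does (and a reminder to keep teardown tolerant of half-built fixtures):

        import org.junit.AfterClass;
        import org.junit.BeforeClass;
        import org.junit.Test;

        public class TeardownBehaviourTest {
            @BeforeClass
            public static void createTestCustomer() {
                System.out.println("setup: insert test customer");
                // throw new RuntimeException("duplicate key"); // uncomment to probe setup failure
            }

            @Test
            public void failingTest() {
                throw new AssertionError("deliberate failure");
            }

            @AfterClass
            public static void deleteTestCustomer() {
                // With JUnit 4 this still prints even though the test above failed.
                System.out.println("teardown: delete test customer");
            }
        }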

    Read the article

  • Test-Path returns different results in 64-bit and 32-bit PowerShell

    - by StarSpace
    I am developing a script which should run under both 64-bit and 32-bit PowerShell. Unfortunately, it seems that Test-Path returns different results in the 64-bit and 32-bit environments. Both sessions are running under the same user, and this user has full access to the specific registry key.

    64-bit PowerShell:

        > test-path HKLM:\SOFTWARE\Citrix\ProvisioningServices
        True

    32-bit PowerShell (x86):

        > test-path HKLM:\SOFTWARE\Citrix\ProvisioningServices
        False

    Any idea?

    Read the article

  • How to skip certain tests with Test::Unit

    - by Daniel Abrahamsson
    In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, which is partly why I have some test code that interacts with test servers, just to see that everything works as expected. However, accessing these servers is quite slow, and therefore I do not want to run these tests every time I run my test suite. My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable, BACKEND_TEST, and a conditional statement which checks if the variable is set, for each test I would like to skip. But sometimes I would like to skip all tests in a test file without having to add an extra row to the beginning of each test. The tests which have to interact with the test servers are not many, as I use flexmock in other situations. However, you can't mock yourself away from reality. As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.
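
    For comparison with other frameworks: the equivalent of the BACKEND_TEST guard in JUnit 4 is an assumption, which marks tests as skipped rather than passed, and putting it in a per-class setup method covers every test in the file at once. A minimal sketch (the env-var name is taken from the question; the test body is invented):

        import static org.junit.Assume.assumeTrue;
        import org.junit.Before;
        import org.junit.Test;

        public class BackendIntegrationTest {
            @Before
            public void onlyWhenBackendTestingIsEnabled() {
                // Skips (not fails) every test in this class unless the env var is set.
                assumeTrue(System.getenv("BACKEND_TEST") != null);
            }

            @Test
            public void ordersArePostedToTheErp() {
                // ... slow test against the real test server ...
            }
        }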

    Read the article
