Search Results

Search found 4783 results on 192 pages for 'tests'.


  • PHPUnit installed but class PHPUnit_TestCase not found

    - by Greg K
    Talk about falling at the first hurdle. My test script: <?php require_once('PHPUnit/Framework.php'); class TransferResponseTest extends PHPUnit_TestCase { ... } Running my test case: $ phpunit TransferResponseTest Fatal error: Class 'PHPUnit_TestCase' not found in /Volumes/Data/greg/code/syndicate/tests/TransferResponseTest.php on line 5 $ php -i | grep include_path include_path => .:/usr/lib/php => .:/usr/lib/php $ ls -l /usr/lib/php/PHPUnit/ total 8 drwxr-xr-x 16 root wheel 544 27 Mar 19:03 Extensions drwxr-xr-x 28 root wheel 952 27 Mar 19:03 Framework -rw-r--r-- 1 root wheel 3193 27 Mar 19:03 Framework.php drwxr-xr-x 8 root wheel 272 27 Mar 19:03 Runner drwxr-xr-x 5 root wheel 170 27 Mar 19:03 TextUI drwxr-xr-x 32 root wheel 1088 27 Mar 19:03 Util I copied /etc/php.ini-default to /etc/php.ini and explicitly specified the include path as /usr/lib/php/ with an end / but still no success. $ php -i | grep include_path include_path => .:/usr/lib/php/ => .:/usr/lib/php/ $ phpunit TransferResponseTest.php PHP Fatal error: Class 'PHPUnit_TestCase' not found in /Volumes/Data/greg/code/syndicate/tests/TransferResponseTest.php on line 5 $ phpunit --version PHPUnit 3.4.11 by Sebastian Bergmann. Any ideas?
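    A likely explanation: PHPUnit_TestCase was the base class of the old PEAR PHPUnit 1.x; in PHPUnit 3.x the framework's base class is PHPUnit_Framework_TestCase. A minimal sketch of what the test might look like under PHPUnit 3.4 (the test body is just a placeholder to show the class wiring):

        <?php
        require_once 'PHPUnit/Framework.php';

        class TransferResponseTest extends PHPUnit_Framework_TestCase
        {
            public function testSomething()
            {
                // placeholder assertion just to show the base class and autoloading work
                $this->assertTrue(true);
            }
        }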

  • Internet Explorer 8 64bit and Selenium Not working.

    - by chobo2
    I am trying to get Selenium tests to run, yet every time I run a test that should drive IE I get an error on line 863 of htmlutils.js saying that I should disable my popup blocker. The thing is, I went to IE Tools and turned the popup blocker off, so it is disabled and I still get this error. Is there something else I need to disable? I actually don't even know which version of Internet Explorer it is running, since I am using the Windows 7 Pro 64-bit version. So when I do use IE I use the 64-bit version, but my understanding is that if the site or something like that does not support 64-bit it falls back to 32-bit. So I'm not sure what I need to do to make it work. These are the lines where it fails: function openSeparateApplicationWindow(url, suppressMozillaWarning) { // resize the Selenium window itself window.resizeTo(1200, 500); window.moveTo(window.screenX, 0); var appWindow = window.open(url + '?start=true', 'selenium_main_app_window'); if (appWindow == null) { var errorMessage = "Couldn't open app window; is the pop-up blocker enabled?" LOG.error(errorMessage); throw new Error("Couldn't open app window; is the pop-up blocker enabled?"); } Where is this LOG.error message stored? Maybe I can post that too.
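    If the 64-bit/32-bit mismatch turns out to be the culprit, one thing worth trying is pointing the Selenium RC launcher explicitly at the 32-bit iexplore.exe. A rough Java sketch - the executable path and the launcher string are assumptions about the setup, not something the question confirms:

        import com.thoughtworks.selenium.DefaultSelenium;

        public class IeLauncherSketch {
            public static void main(String[] args) {
                // Point the *iexplore launcher at the 32-bit IE binary explicitly (path is an assumption).
                DefaultSelenium selenium = new DefaultSelenium(
                        "localhost", 4444,
                        "*iexplore C:\\Program Files (x86)\\Internet Explorer\\iexplore.exe",
                        "http://www.example.com/");
                selenium.start();   // opens the browser session
                selenium.open("/"); // navigate to the application under test
                selenium.stop();
            }
        }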

  • Unit Testing.... a data provider ?

    - by TomTom
    Given problem: I like unit tests. I develop connectivity software to external systems that often use a C++ library. The output of these systems is nondeterministic: data is received while running, but making sure it is all correctly interpreted is hard. How can I test this properly? I can run a unit test that does a connect. Sadly, it will then process a live data stream. I can say I run the test for 30 or 60 seconds before disconnecting, but getting code coverage is impossible - I simply don't even come close to hitting all code paths even once per day (error code paths are rarely run). I also cannot really assert every result. Depending on the time of day we are talking about 20,000 data callbacks per second - none of which are deterministic enough to validate individually for consistency. Mocking? Well, that would leave me testing an empty shell, because the code handling the events is basically the code under test, and in many cases we are dealing with a complex C-level structure - it is hard to find mocking frameworks that integrate from C# to C++. Anyone have any ideas? I am close to giving up on unit tests for this part of the application.
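    One approach that keeps the real event-handling code under test without a live connection is record-and-replay: capture a batch of raw callbacks once, then feed them back deterministically in the test. A minimal C# sketch - the IFeedHandler interface and the recording format are invented for illustration, not taken from the original code:

        // Hypothetical handler interface wrapping the real event-processing code.
        public interface IFeedHandler
        {
            void OnData(byte[] rawCallbackPayload);
        }

        public static class RecordedFeedReplayer
        {
            // Replays previously captured callback payloads into the real handler,
            // so the parsing/interpretation logic runs deterministically in a test.
            public static void Replay(string recordingPath, IFeedHandler handler)
            {
                foreach (var line in System.IO.File.ReadAllLines(recordingPath))
                {
                    handler.OnData(System.Convert.FromBase64String(line));
                }
            }
        }

    A test can then replay a file known to contain the rare error-path messages and assert on the handler's resulting state, which is what makes the error paths reachable at all.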

  • PHPUnit Selenium captureScreenshotOnFailure does not work?

    - by user342775
    I am using PHPUnit 3.4.12 to drive my selenium tests. I'd like to be able to get a screenshot taken automatically when a test fails. This should be supported, as explained at http://www.phpunit.de/manual/current/en/selenium.html#selenium.seleniumtestcase.examples.WebTest2.php class WebTest { protected $captureScreenshotOnFailure = true; protected $screenshotPath = 'C:\selenium'; protected $screnshotUrl = 'http://localhost/screenshots'; public function testLandingPage($selenium) { $selenium->open("http://www.example.com"); $selenium->fail("fail"); ... } } As you can see, I am making the test fail, and in theory when it does it should take a screenshot and put it in C:\selenium, as I am running the Selenium RC server on Windows. However, when I run the test it will just give me the following: [root@testbox selenium]$ sh run PHPUnit 3.4.12 by Sebastian Bergmann. F Time: 8 seconds, Memory: 5.50Mb There was 1 failure: 1) WebTest::testLandingPage fail /home/root/selenium/WebTest.php:32 FAILURES! Tests: 1, Assertions: 0, Failures: 1. I do not see any screenshot in C:\selenium. I can however get a screenshot with $selenium->captureScreenshot("C:/selenium/image.png"); Any ideas or suggestions most welcome. Thanks
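    For what it's worth, captureScreenshotOnFailure is only honoured when the test class actually extends PHPUnit_Extensions_SeleniumTestCase and the properties are spelled exactly as documented (note the snippet above has $screnshotUrl). A minimal sketch of the documented setup, assuming PHPUnit 3.4 with the Selenium extension installed:

        <?php
        require_once 'PHPUnit/Extensions/SeleniumTestCase.php';

        class WebTest extends PHPUnit_Extensions_SeleniumTestCase
        {
            // These protected properties are read by SeleniumTestCase when a test fails.
            protected $captureScreenshotOnFailure = true;
            protected $screenshotPath = 'C:\selenium';   // path on the machine running the RC server/browser
            protected $screenshotUrl  = 'http://localhost/screenshots';

            protected function setUp()
            {
                $this->setBrowser('*iexplore');
                $this->setBrowserUrl('http://www.example.com/');
            }

            public function testLandingPage()
            {
                $this->open('http://www.example.com/');
                $this->fail('fail'); // force a failure; a screenshot should land in $screenshotPath
            }
        }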

  • validate that URI is valid http URI

    - by Alfred
    Hi all, my problem: first off, hopefully this is not a duplicate, but I could not find the right answer (right away). I would like to validate that a URI (http) is valid in Java. I came up with the following tests but I can't get them to pass. First I used getPort(), but then http://www.google.nl will return -1 on getPort(). These are the tests I want to pass. Test: @Test public void testURI_Isvalid() throws Exception { assertFalse(HttpUtils.validateHTTP_URI("ttp://localhost:8080")); assertFalse(HttpUtils.validateHTTP_URI("ftp://localhost:8080")); assertFalse(HttpUtils.validateHTTP_URI("http://localhost:8a80")); assertTrue(HttpUtils.validateHTTP_URI("http://localhost:8080")); final String justWrong = "/schedule/get?uri=http://localhost:8080&time=1000000"; assertFalse(HttpUtils.validateHTTP_URI(justWrong)); assertTrue(HttpUtils.validateHTTP_URI("http://www.google.nl")); } This is what I came up with after I removed the getPort() part, but it does not pass all my unit tests. Production code: public static boolean validateHTTP_URI(String uri) { final URI u; try { u = URI.create(uri); } catch (Exception e1) { return false; } return "http".equals(u.getScheme()); } This is the first test that is failing because I am no longer validating the getPort() part. Hopefully somebody can help me out. I think I am not using the right class to validate URLs? P.S.: I don't want to connect to the server to validate that the URI is correct - at least not yet in this step. I only want to validate the scheme.
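    One detail that may help: java.net.URI accepts "http://localhost:8a80" by falling back to a registry-based authority, in which case getHost() is null; calling parseServerAuthority() forces strict host/port parsing instead. A sketch of a validator along those lines (the class and method names mirror the question; the logic itself is only a suggestion):

        import java.net.URI;
        import java.net.URISyntaxException;

        public final class HttpUtils {

            public static boolean validateHTTP_URI(String uri) {
                final URI u;
                try {
                    // parseServerAuthority() rejects authorities like "localhost:8a80"
                    // that URI.create() would otherwise accept as registry-based.
                    u = new URI(uri).parseServerAuthority();
                } catch (URISyntaxException e) {
                    return false;
                }
                return "http".equals(u.getScheme()) && u.getHost() != null;
            }
        }

    With this version "http://www.google.nl" passes (host present, port simply unset), while "http://localhost:8a80" and the relative "/schedule/get?..." string fail.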

  • iPhone UnitTesting UITextField value and otest error 133

    - by Justin Galzic
    Are UITextFields not meant to be part of the LogicTests and instead part of the ApplicationTest target? I have a factory class that is responsible for creating and returning an (iPhone) UITextField and I'm trying to unit test it. It is part of my Logic Test target and when I try to build and run the tests, I get a build error about: /Developer/Tools/RunPlatformUnitTests.include:451:0 Test rig '/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.2.sdk/ 'Developer/usr/bin/otest' exited abnormally with code 133 (it may have crashed). In the build window, this points to the following line in: "RunPlatformUnitTests.include" RPUTIFail ${LINENO} "Test rig '${TEST_RIG}' exited abnormally with code ${TEST_RIG_RESULT} (it may have crashed)." My unit test looks like this: #import <SenTestingKit/SenTestingKit.h> #import <UIKit/UIKit.h> // Test-subject headers. #import "TextFieldFactory.h" @interface TextFieldFactoryTests : SenTestCase { } @end @implementation TextFieldFactoryTests #pragma mark Test Setup/teardown - (void) setUp { NSLog(@"%@ setUp", self.name); } - (void) tearDown { NSLog(@"%@ tearDown", self.name); } #pragma mark Tests - (void) testUITextField_NotASecureField { NSLog(@"%@ start", self.name); UITextField *textField = [TextFieldFactory createTextField:YES]; NSLog(@"%@ end", self.name); } The class I'm trying to test: // Header file #import <Foundation/Foundation.h> #import <UIKit/UIKit.h> @interface TextFieldFactory : NSObject { } +(UITextField *)createTextField:(BOOL)isSecureField; @end // Implementation file #import "TextFieldFactory.h" @implementation TextFieldFactory +(UITextField *)createTextField:(BOOL)isSecureField { // x,y,z,w are constants declared else where UITextField *textField = [[[UITextField alloc] initWithFrame:CGRectMake(x, y, z, w)] autorelease]; // some initialization code return textField; } @end

  • flushing database cache in SWI-Prolog

    - by JPro
    We are using SWI-Prolog to run our test cases. Whenever a test starts, I open a connection to the MySQL database, store the name of the test that is being run, and then close the DB. These tests run for about 2 days continuously. After the tests are done, the results get stored in a folder on the server. There is a predicate in another Prolog file that is called to update the results to the MySQL database. The code is simple: I use the odbc library and just call odbc_* predicates to connect and update MySQL by issuing direct queries. The actual problem is: if I try to call the predicate from the same Prolog window where the test just completed, I get an error when updating to the DB server, although I do not get any error on the connection itself. If I close that Prolog session with halt, close all the open Prolog windows, and then open a completely new instance of Prolog and run the predicate, the update goes fine. I have a feeling that there is some stale connection reference to the MySQL DB in the Prolog database. Is there any way to clear it in Prolog so that I can run the same predicate without closing any existing Prolog windows? Any ideas appreciated. Thanks.
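    If the suspicion about a leftover connection handle is right, one way to make each update self-contained is to wrap the ODBC work in setup_call_cleanup/3 so the connection is always closed, even when the query throws. A rough sketch using SWI-Prolog's odbc library - the DSN name, table, and columns are placeholders, not taken from the actual test suite:

        :- use_module(library(odbc)).

        % Connect, run one insert, and always disconnect - even if the query fails.
        update_result(TestName, Outcome) :-
            setup_call_cleanup(
                odbc_connect('results_dsn', Connection, []),   % DSN name is a placeholder
                insert_result(Connection, TestName, Outcome),
                odbc_disconnect(Connection)).

        insert_result(Connection, TestName, Outcome) :-
            odbc_prepare(Connection,
                         'INSERT INTO results(test, outcome) VALUES (?, ?)',
                         [default, default],
                         Statement),
            odbc_execute(Statement, [TestName, Outcome]),
            odbc_free_statement(Statement).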

  • Ant target generate empty suite xml file

    - by user200317
    I am using Ant for my project and I have been trying to generate a JUnit report using an Ant target. The problem I run into is that at the end of the execution my TESTS-TestSuites.xml is empty, but all the other individual test XML files have data. Because of this my HTML reports are empty, in the sense that the results show "0". Here is my Ant target: <!-- JUnit Reporting --> <target name="test-report" depends="build-all" description="Generate Test Results as HTML"> <taskdef name="junitreport" classname="org.apache.tools.ant.taskdefs.optional.junit.XMLResultAggregator"/> <junit printsummary="on" haltonfailure="off" haltonerror="off" fork="yes"> <batchtest fork="yes" todir="${test.reports}" filtertrace="on"> <fileset dir="${build.classes}" includes="**/Test*Selenium.class"/> </batchtest> <formatter type="plain" usefile="false"/> <formatter type="xml" usefile="true"/> <classpath> <path refid="classpath"/> <path refid="application"/> </classpath> </junit> <echo message="running JUnit Report" /> <junitreport todir="${test.reports}"> <fileset dir="${test.reports}"> <include name="Test-*.xml" /> </fileset> <report format="frames" todir="${test.reports.html}" /> </junitreport> </target> This is what I get as the Ant print summary: [junitreport] Processing C:\YukonSelenium\reports\TESTS-TestSuites.xml to C:\DOCUME~1\user\LOCALS~1\Temp\null1848051184 [junitreport] Loading stylesheet jar:file:/C:/DevApps/apache-ant-1.7.1/lib/ant-junit.jar!/org/apache/tools/ant/taskdefs/optional/junit/xsl/junit-frames.xsl [junitreport] Transform time: 859ms [junitreport] Deleting: C:\DOCUME~1\user\LOCALS~1\Temp\null1848051184 Here's what the JUnit report looks like: http://www.freeimagehosting.net/image.php?43dd69d3b8.jpg Thanks in advance,
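    One thing that stands out: the junit task's XML formatter names its output files TEST-<classname>.xml (upper-case TEST), while the junitreport fileset here includes Test-*.xml, and Ant filesets are case-sensitive by default, so the aggregator may simply be matching nothing. A sketch of the adjusted include, with everything else left as in the target above:

        <junitreport todir="${test.reports}">
            <fileset dir="${test.reports}">
                <!-- the junit XML formatter writes files named TEST-*.xml -->
                <include name="TEST-*.xml" />
            </fileset>
            <report format="frames" todir="${test.reports.html}" />
        </junitreport>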

  • iPad receiving memory warning with low memory use

    - by Fer
    I have a UIWebView displaying an HTML document. The HTML has several images and text, but just displaying it gives me a memory warning. So I did some tests: the same HTML with the images at full size, and then with the same images reduced to 50% of their original size (I reduced all the images by 50% in Preview). The surprising part is the 50% test: even with 16 images, the memory peak is 4.90MB. That's really surprising. Note that these values are not always the same; they change, but there's no huge difference between runs. In the 50% case, with 8 and 16 images, a memory warning sometimes still appears even though memory is low, but the performance improvement compared to the full-size images is noticeable. ("Standing still" = memory after scrolling through the whole article.)
    Full size: 1 image = [standing still 5MB] [rotating 5.6MB]; 2 images = [standing still 6.99MB] [rotating 7.7MB]; 3 images = [standing still 9.04MB] [rotating 10.9MB]; 4 images = [standing still 10.89MB] [rotating 13.20MB]; 8 images = [standing still 23.14MB] [rotating 25.20MB] (sometimes crashes); 16 images = [standing still 27.14MB and the app crashes].
    50%: 1 image = [standing still 3.2MB] [rotating 3.67MB]; 2 images = [standing still 3.2MB] [rotating 3.70MB]; 3 images = [standing still 3.3MB] [rotating 3.79MB]; 4 images = [standing still 3.3MB] [rotating 3.80MB]; 8 images = [standing still 4.29MB] [rotating 4.63MB] (sometimes crashes); 16 images = [standing still 4.79MB] [rotating 4.90MB] (sometimes crashes).
    My question is: the app sometimes crashed with 16 small images - why, when the memory was much lower? What is the limit of memory use? These numbers are only helpful if you also know the maximum, but the maximum seemed different with the 50% images: 13.2MB works for large images and 3.8MB for small images, and anything higher sometimes crashes. That makes no sense.
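    Part of the pattern is consistent with how images are held once decoded: the in-memory footprint depends on pixel dimensions, not compressed file size, at roughly 4 bytes per pixel, so halving both dimensions cuts the decoded size to about a quarter. A back-of-the-envelope calculation - the 1024 x 768 dimensions are an assumed example, not taken from the question:

        full size : 1024 * 768 * 4  =  3,145,728 bytes  (about 3.0 MB per decoded image)
        50% size  :  512 * 384 * 4  =    786,432 bytes  (about 0.75 MB, a quarter of the above)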

  • Explicit behavior with checks vs. implicit behavior

    - by Silviu
    I'm not sure how to phrase the question, but I'm interested to know what you think of the following situations and which one you would prefer. We're working on a client-server WinForms application, and we have a control where some fields are calculated automatically when another field is filled in. So we have a currency field which, when filled by the user, triggers the automatic filling of another field, maybe more fields. When the user fills the currency field, a Currency object is retrieved from a cache based on the string the user entered. If the entered currency is not found in the cache, the cache object returns a null reference. Further down, when the application layer is asked to compute the other fields based on the currency, a null currency produces a null for each dependent field. This way the default, implicit behavior is to clear all fields - which is the expected behavior. What I would call the explicit implementation would be to verify that the Currency object is null, in which case the dependent fields are cleared explicitly. I think the latter version is clearer, less error prone and more testable, but it implies a form of redundancy. The former version is not as clear, and it implies a certain behavior from the application layer which is not expressed in the tests - maybe in the lower-layer tests, but when the need arises to modify the lower layers so that a null currency returns something else, I don't think a test that says just that, without a motivation, is going to prevent a bug from being introduced in the upper layers. What do you guys think?

  • file layout and setuptools configuration for the python bit of a multi-language library

    - by dan mackinlay
    So we're writing a full-text search framework for MongoDB. MongoDB is pretty much javascript-native, so we wrote the javascript library first, and it works. Now I'm trying to write a python framework for it, which will be partially in python, but partially use those same stored javascript functions - the javascript functions are an intrinsic part of the library. On the other hand, the javascript framework does not depend on python. Since they are pretty intertwined it seems worthwhile keeping them in the same repository. I'm trying to work out a way of structuring the whole project to give the javascript and python frameworks equal status (maybe a ruby driver or whatever in the future?), but still allow the python library to install nicely. Currently it looks like this (simplified a little): javascript/jstest/test1.js javascript/mongo-fulltext/search.js javascript/mongo-fulltext/util.js python/docs/indext.rst python/tests/search_test.py python/tests/__init__.py python/mongofulltextsearch/__init__.py python/mongofulltextsearch/mongo_search.py python/mongofulltextsearch/util.py python/setup.py I've skipped a few files for simplicity, but you get the general idea; it's a pretty much standard python project... except that it depends critically on a whole bunch of javascript which is stored in a sibling directory tree. What's the preferred setup for dealing with this kind of thing when it comes to setuptools? I can work out how to use package_data etc. to install data files that live inside my python project as per the setuptools docs. The problem is if I want to use setuptools to install stuff, including the javascript files from outside the python code tree, and then also access them in a consistent way when I'm developing the python code and when it is easy_installed to someone's site. Is that supported behaviour for setuptools? Should I be using paver or distutils2 or Distribute or something? (basic distutils is not an option; the whole reason I'm doing this is to enable requirements tracking) How should I be reading the contents of those files into python scripts?
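    One common pattern, sketched here rather than an established answer for this particular project: keep the canonical javascript where it is, copy (or symlink) it into the python package as part of the build/sdist step, list it in package_data, and always read it back through pkg_resources so the same code works from a checkout and from an installed egg. The paths below assume the layout shown above, with the .js files copied into python/mongofulltextsearch/js/ before building:

        # setup.py - assumes a small build step has already copied the .js files
        # into mongofulltextsearch/js/ (they stay canonical in ../javascript/).
        from setuptools import setup, find_packages

        setup(
            name='mongofulltextsearch',
            version='0.1',
            packages=find_packages(),
            package_data={'mongofulltextsearch': ['js/*.js']},
            include_package_data=True,
        )

    Reading the bundled files back then goes through pkg_resources rather than relative paths:

        # mongofulltextsearch/loader.py - works from a source checkout and from an egg.
        import pkg_resources

        def load_js(name):
            return pkg_resources.resource_string('mongofulltextsearch', 'js/' + name)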

  • TDD - testing business rules/validation in ASP.NET MVC

    - by csetzkorn
    Hi, I am using the sharp architecture so I can easily use mocks etc. in my unit tests and/or during TDD. I have quite complicated business rules and would like to test them at the controller level. I am just wondering how other people do this? For me validation tests business rules at three levels: (1) Property level (e.g. property is required) (2) Intra property level (e.g. start date < end date) (3) Persistence level (e.g. name is unique, parent cannot be child of child) My validation framework also assigns errors to properties. I am just wondering what other people do? Do you write a test for each business rule and check whether the correct error message is assigned to the correct property (i.e. looking at the ASP.MVC ModelState)? I hope my question makes sense. Thanks a lot! Best wishes, Christian
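    One concrete way to express rules like (2) at the controller level is to invoke the action and then assert against ModelState, checking both that validation failed and that the error landed on the right property. A self-contained sketch with a deliberately tiny controller - the names and the rule are invented for illustration, not taken from the actual application:

        using System;
        using System.Web.Mvc;
        using NUnit.Framework;

        // Minimal controller defined inline so the sketch compiles on its own.
        public class ProjectController : Controller
        {
            public ViewResult Create(DateTime startDate, DateTime endDate)
            {
                if (startDate >= endDate)
                    ModelState.AddModelError("EndDate", "End date must be after the start date.");
                return View();
            }
        }

        [TestFixture]
        public class ProjectControllerTests
        {
            [Test]
            public void Create_rejects_start_date_after_end_date()
            {
                var controller = new ProjectController();

                var result = controller.Create(new DateTime(2010, 2, 1), new DateTime(2010, 1, 1));

                Assert.IsNotNull(result);
                Assert.IsFalse(controller.ModelState.IsValid);
                // The error message is attached to the property the view will highlight.
                Assert.IsTrue(controller.ModelState.ContainsKey("EndDate"));
            }
        }

    Persistence-level rules like uniqueness usually end up as slower integration tests against a real or in-memory database rather than controller tests.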

  • PyQt and unittest - how to handle signals and slots

    - by Einar
    Hello, a small application I'm developing uses a module I have written to check certain web services via a REST API. I've been trying to add unit tests to it so I don't break stuff, and I stumbled upon a problem. I use a lot of signal-slot connections to perform operations asynchronously. For example a typical test would be (pseudo-Python), with postDataDownloaded as a signal: def testConnection(self): "Test connection and posts retrieved" def length_test(): self.assertEqual(len(self.client.post_data), 5) self.client.postDataReady.connect(length_test) self.client.get_post_list(limit=5) Now, unittest will report this test as "ok" when running, regardless of the result (as another slot is being called), even if asserts fail (I will get an unhandled AssertionError). Example when deliberately making the test fail: Test connection and posts retrieved ... ok [... more tests...] OK Traceback (most recent call last): [...] AssertionError: 4 != 5 The slot inside the test is merely an experiment: I get the same results if it's outside (an instance method). I also have to add that the various methods I'm calling all make HTTP requests, which means they take a bit of time (I need to mock the request - in the meantime I'm using SimpleHTTPServer to fake the connections and give them proper data). Is there a way around this problem?
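    The underlying issue is that the test method returns before the slot ever runs, so unittest sees a pass and the AssertionError fires later, outside its control. One workaround is to spin a local event loop until the signal arrives (or a timeout expires) and only then assert, from inside the test method itself. A sketch - the client and signal names mirror the question, and a QApplication instance is assumed to exist as in the original suite:

        import unittest
        from PyQt4.QtCore import QEventLoop, QTimer

        def wait_for_signal(signal, timeout_ms=5000):
            """Process events until `signal` fires or the timeout expires."""
            loop = QEventLoop()
            signal.connect(loop.quit)
            QTimer.singleShot(timeout_ms, loop.quit)  # safety net so a missing signal can't hang the suite
            loop.exec_()

        class ClientTest(unittest.TestCase):
            def testConnection(self):
                "Test connection and posts retrieved"
                self.client.get_post_list(limit=5)
                wait_for_signal(self.client.postDataReady)
                # Assert back in the test method, where unittest can see a failure.
                self.assertEqual(len(self.client.post_data), 5)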

  • Netbeans Profile JUnit 4 problem

    - by Krishna K
    I have a unit test that takes 200 sec to run. I am trying to use NetBeans profiler to speed it up. But the profiler doesn't run the unit test. It just creates an object of the test and exits. Doesn't run the actual test methods or @Before / @After methods. This is a maven project with surefire and junit 4. And partial output is below. Profiler Agent: Waiting for connection on port 5140, timeout 10 seconds (Protocol version: 9) Profiler Agent: Established local connection with the tool ------------------------------------------------------- T E S T S ------------------------------------------------------- Running com.cris.puzzle.solvers.SudokuSolverTest Tests run: 0, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.031 sec Results : Tests run: 0, Failures: 0, Errors: 0, Skipped: 0 Profiler Agent: Connection with agent closed Profiler Agent: Connection with agent closed Profiler Agent: Initializing... Profiler Agent: Options: >C:/Program Files/NetBeans 6.8/profiler3/lib,5140,10< Profiler Agent: Initialized succesfully ------------------------------------------------------------------------ BUILD SUCCESSFUL ------------------------------------------------------------------------ Total time: 14 seconds Does anyone know how to make it work? Thank you.

  • Robotium - Write to file in eclipse workspace or computer file system

    - by Flavio Capaccio
    I'm running some tests using Robotium on an Android application that interacts with a web portal. I'd like to save some information to a file; for example, I need to save the id of the user I created from the app, and I want Selenium to read it so the tests against the web portal can verify that a web page for that user has been created. Is this possible? Could someone suggest a solution or a workaround? This is an example of the code, but it doesn't work (I want to write a string to a file, for example c:\myworkspace\filename.txt): public void test_write_file(){ if(!solo.searchText("HOME")){ signIn("39777555333", VALID_PASSWORD); } try { String content = "This is the content to write into file"; File file = new File("filename.txt"); // if file doesnt exists, then create it if (!file.exists()) { file.createNewFile(); } FileWriter fw = new FileWriter(file.getAbsoluteFile()); BufferedWriter bw = new BufferedWriter(fw); bw.write(content); bw.close(); } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } assertTrue(solo.searchText("HOME")); }
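    The catch is that the Robotium test executes on the device/emulator, so new File("filename.txt") or any C:\ path refers to device storage, never to the workstation's file system. A sketch of one workaround, assuming the usual two-step of writing to external storage on the device and then pulling the file with adb before the Selenium run (names and paths are illustrative):

        import java.io.File;
        import java.io.FileWriter;
        import java.io.IOException;
        import android.os.Environment;

        // Inside the Robotium test class: write the value somewhere adb can reach.
        private void saveUserId(String userId) throws IOException {
            // WRITE_EXTERNAL_STORAGE permission is assumed in the app under test.
            File out = new File(Environment.getExternalStorageDirectory(), "created_user_id.txt");
            FileWriter writer = new FileWriter(out);
            writer.write(userId);
            writer.close();
        }

        // Afterwards, on the build machine, something like
        //   adb pull /sdcard/created_user_id.txt C:\myworkspace\created_user_id.txt
        // makes the value available to the Selenium tests against the web portal.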

  • Google App Engine: JDO does the job, JPA does not

    - by Phuong Nguyen de ManCity fan
    I have set up a project using both JDO and JPA. I used JPA annotations to declare my entity. Then I set up my test cases based on LocalTestHelper (from the Google App Engine documentation). When I run the tests, a call to makePersistent on the JDO PersistenceManager is perfectly OK; a call to persist on the JPA EntityManager raises an error: java.lang.IllegalArgumentException: Type ("org.seamoo.persistence.jpa.model.ExampleModel") is not that of an entity but needs to be for this operation at org.datanucleus.jpa.EntityManagerImpl.assertEntity(EntityManagerImpl.java:888) at org.datanucleus.jpa.EntityManagerImpl.persist(EntityManagerImpl.java:385) Caused by: org.datanucleus.exceptions.NoPersistenceInformationException: The class "org.seamoo.persistence.jpa.model.ExampleModel" is required to be persistable yet no Meta-Data/Annotations can be found for this class. Please check that the Meta-Data/annotations is defined in a valid file location. at org.datanucleus.ObjectManagerImpl.assertClassPersistable(ObjectManagerImpl.java:3894) at org.datanucleus.jpa.EntityManagerImpl.assertEntity(EntityManagerImpl.java:884) ... 27 more How can this be the case? Below is a link to the source code of the maven projects that reproduce the problem: http://seamoo.com/jpa-bug-reproduce.tar.gz If you execute the maven test goal over the parent pom, you will notice that 3/4 tests from org.seamoo.persistence.jdo.JdoGenericDAOImplTest pass, while all tests from org.seamoo.persistence.jpa.JpaGenericDAOImplTest fail.
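    The NoPersistenceInformationException from DataNucleus usually points at one of two things: the persistence unit cannot see the entity's metadata, or the class never went through byte-code enhancement. A sketch of a persistence.xml that lists the class explicitly - the unit name and properties are the usual GAE placeholders, not taken from the linked project:

        <?xml version="1.0" encoding="UTF-8"?>
        <persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
            <persistence-unit name="transactions-optional">
                <provider>org.datanucleus.store.appengine.jpa.DatastorePersistenceProvider</provider>
                <!-- Listing the entity explicitly avoids relying on jar scanning. -->
                <class>org.seamoo.persistence.jpa.model.ExampleModel</class>
                <exclude-unlisted-classes>true</exclude-unlisted-classes>
                <properties>
                    <property name="datanucleus.NontransactionalRead" value="true"/>
                    <property name="datanucleus.NontransactionalWrite" value="true"/>
                </properties>
            </persistence-unit>
        </persistence>

    The other usual suspect is enhancement: DataNucleus enhances JDO and JPA classes separately, so if only the JDO module is wired to the enhancer (for example via the maven-datanucleus-plugin), the JPA entity fails with exactly this exception.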

  • Linking errors when building against Boost Unit Test Framework

    - by Rafid
    I am trying to use Boost Unit Test Framework by building a stand alone library as detailed here: http://www.boost.org/doc/libs/1_35_0/libs/test/doc/components/utf/compilation.html So I created a VC library project containing the mentioned files and build it and it was successful. Then I created a test project and referenced the library project I just created, but when I tried to build it, I got the following linking errors: 1>Type.obj : error LNK2019: unresolved external symbol "bool __cdecl boost::test_tools::tt_detail::check_impl(class boost::test_tools::predicate_result const &,class boost::unit_test::lazy_ostream const &,class boost::unit_test::basic_cstring<char const >,unsigned __int64,enum boost::test_tools::tt_detail::tool_level,enum boost::test_tools::tt_detail::check_type,unsigned __int64,...)" (?check_impl@tt_detail@test_tools@boost@@YA_NAEBVpredicate_result@23@AEBVlazy_ostream@unit_test@3@V?$basic_cstring@$$CBD@63@_KW4tool_level@123@W4check_type@123@3ZZ) referenced in function "public: void __cdecl test1::test_method(void)" (?test_method@test1@@QEAAXXZ) 1>BoostUnitTestFramework.lib(framework.obj) : error LNK2019: unresolved external symbol "void __cdecl boost::debug::break_memory_alloc(long)" (?break_memory_alloc@debug@boost@@YAXJ@Z) referenced in function "void __cdecl boost::unit_test::framework::init(class boost::unit_test::test_suite * (__cdecl*)(int,char * * const),int,char * * const)" (?init@framework@unit_test@boost@@YAXP6APEAVtest_suite@23@HQEAPEAD@ZH0@Z) 1>BoostUnitTestFramework.lib(framework.obj) : error LNK2019: unresolved external symbol "void __cdecl boost::debug::detect_memory_leaks(bool)" (?detect_memory_leaks@debug@boost@@YAX_N@Z) referenced in function "void __cdecl boost::unit_test::framework::init(class boost::unit_test::test_suite * (__cdecl*)(int,char * * const),int,char * * const)" (?init@framework@unit_test@boost@@YAXP6APEAVtest_suite@23@HQEAPEAD@ZH0@Z) 1>BoostUnitTestFramework.lib(execution_monitor.obj) : error LNK2019: unresolved external symbol "bool __cdecl boost::debug::attach_debugger(bool)" (?attach_debugger@debug@boost@@YA_N_N@Z) referenced in function "public: int __cdecl boost::detail::system_signal_exception::operator()(unsigned int,struct _EXCEPTION_POINTERS *)" (??Rsystem_signal_exception@detail@boost@@QEAAHIPEAU_EXCEPTION_POINTERS@@@Z) 1>BoostUnitTestFramework.lib(execution_monitor.obj) : error LNK2019: unresolved external symbol "bool __cdecl boost::debug::under_debugger(void)" (?under_debugger@debug@boost@@YA_NXZ) referenced in function "public: int __cdecl boost::execution_monitor::execute(class boost::unit_test::callback0<int> const &)" (?execute@execution_monitor@boost@@QEAAHAEBV?$callback0@H@unit_test@2@@Z) 1>BoostUnitTestFramework.lib(unit_test_main.obj) : error LNK2019: unresolved external symbol "class boost::unit_test::test_suite * __cdecl init_unit_test_suite(int,char * * const)" (?init_unit_test_suite@@YAPEAVtest_suite@unit_test@boost@@HQEAPEAD@Z) referenced in function main 1>C:\Users\Rafid\Workspace\MyPhysics\Builds\VC10\Tests\Debug\Tests.exe : fatal error LNK1120: 6 unresolved externals They seem to be mainly caused by Boost debug library, but I can't see a reason why I should get linking errors putting in mind that Boost debug library only need to be included as header files, rather than linking against as a library! Any ideas?!
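    Several of those missing symbols are typically supplied either by the Boost.Test sources themselves or by the test-module boilerplate, so this often comes down to the standalone library build not compiling every .cpp the compilation page lists, or the test file never declaring a test module (which is what the unresolved init_unit_test_suite suggests). A minimal consumer sketch for the static-library setup - the module name is arbitrary:

        // Defining BOOST_TEST_MODULE generates main()/init_unit_test_suite for this
        // translation unit, which covers the "unresolved init_unit_test_suite" error.
        #define BOOST_TEST_MODULE MyPhysicsTests
        #include <boost/test/unit_test.hpp>

        BOOST_AUTO_TEST_CASE(sanity_check)
        {
            BOOST_CHECK_EQUAL(2 + 2, 4);
        }

    The remaining boost::debug::* and check_impl symbols normally come from the library's own sources (debug.cpp, test_tools.cpp and friends); if the hand-built library project is missing any of the .cpp files from the linked page, exactly this kind of unresolved-external list appears.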

  • playframework auto-test Jenkins CI wait for completion?

    - by notbrain
    I am trying to set up Jenkins CI for a playframework.org application but am having trouble properly launching play after the auto-test command is run. The tests all run fine, but it seems as though my script is launching both play auto-test and play start --%ci at the same time. When the play start --%ci command runs, it gets a pid and everything, but it's not running. FILE: auto-test.sh, jenkins runs this with execute shell #!/bin/bash # pwd is jenkins workspace dir # change into approot dir cd customer-portal; # kill any previous play launches if [ -e "server.pid" ] then kill `cat server.pid`; rm -rf server.pid; fi # drop and re-create the DB mysql --user=USER --password=PASS --host=HOSTNAME < ../setupdb.sql # auto-test the most recent build /usr/local/lib/play/play auto-test; # this is inadequate for waiting for auto-test to complete? # how to wait for actual process completion? # sleep 60; wait; # Conditional start based on tests # Launch normal on pass, test on fail # if [ -e "./test-result/result.passed" ] then /usr/local/lib/play/play start --%ci; exit 0; else /usr/local/lib/play/play test; exit 1; fi
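    One thing worth checking besides the script ordering: Jenkins' ProcessTreeKiller reaps any process spawned by the build once the job finishes, which matches the symptom of "play start --%ci gets a pid but isn't running". The usual workaround is to give the spawned process a different BUILD_ID. A sketch of just the tail of the script, with everything else as in the question (play auto-test normally runs synchronously, so no extra wait should be needed):

        # auto-test blocks until the test run completes
        /usr/local/lib/play/play auto-test

        if [ -e "./test-result/result.passed" ]
        then
            # Prevent Jenkins' ProcessTreeKiller from killing the detached Play server
            # when the build ends (it identifies build children via BUILD_ID).
            BUILD_ID=dontKillMe /usr/local/lib/play/play start --%ci
            exit 0
        else
            /usr/local/lib/play/play test
            exit 1
        fi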

  • GH-Unit for unit testing Objective-C code, why am I getting linking errors?

    - by djhworld
    Hi there, I'm trying to dive into the quite frankly terrible world of unit testing using Xcode (such a convoluted process it seems). Basically I have this test class, attempting to test my Show.h class #import <GHUnit/GHUnit.h> #import "Show.h" @interface ShowTest : GHTestCase { } @end @implementation ShowTest - (void)testShowCreate { Show *s = [[Show alloc] init]; GHAssertNotNil(s,@"Was nil."); } @end However when I try to build and run my tests it moans with this error: - Undefined symbols: "_OBJC_CLASS_$_Show", referenced from: __objc_classrefs__DATA@0 in ShowTest.o ld: symbol(s) not found collect2: ld returned 1 exit status Now I'm presuming this is a linking error. I tried following every step in the instructions located here: - http://github.com/gabriel/gh-unit/blob/master/README.md And step 2 of these instructions confused me: - In the Target 'Tests' Info window, General tab: Add a linked library, under Mac OS X 10.5 SDK section, select GHUnit.framework Add a linked library, select your project. Add a direct dependency, and select your project. (This will cause your application or framework to build before the test target.) How am I supposed to add my project to the linked library list when all it accepts is .dylib, .framework and .o files? I'm confused! Thanks for any help that is received.

  • Javassist failure in hibernate: invalid constant type: 60

    - by Kaleb Pederson
    I'm creating a cli tool to manage an existing application. Both the application and the tests build fine and run fine but despite that I receive a javassist failure when running my cli tool that exists within the jar: INFO: Bytecode provider name : javassist ... INFO: Hibernate EntityManager 3.5.1-Final Exception in thread "main" javax.persistence.PersistenceException: Unable to configure EntityManagerFactory at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:371) at org.hibernate.ejb.HibernatePersistence.createEntityManagerFactory(HibernatePersistence.java:55) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:48) at javax.persistence.Persistence.createEntityManagerFactory(Persistence.java:32) ... at com.sophware.flexipol.admin.AdminTool.<init>(AdminTool.java:40) at com.sophware.flexipol.admin.AdminTool.main(AdminTool.java:69) Caused by: java.lang.RuntimeException: Error while reading file:flexipol-jar-with-dependencies.jar at org.hibernate.ejb.packaging.NativeScanner.getClassesInJar(NativeScanner.java:131) at org.hibernate.ejb.Ejb3Configuration.addScannedEntries(Ejb3Configuration.java:467) at org.hibernate.ejb.Ejb3Configuration.addMetadataFromScan(Ejb3Configuration.java:457) at org.hibernate.ejb.Ejb3Configuration.configure(Ejb3Configuration.java:347) ... 11 more Caused by: java.io.IOException: invalid constant type: 60 at javassist.bytecode.ConstPool.readOne(ConstPool.java:1027) at javassist.bytecode.ConstPool.read(ConstPool.java:970) at javassist.bytecode.ConstPool.<init>(ConstPool.java:127) at javassist.bytecode.ClassFile.read(ClassFile.java:693) at javassist.bytecode.ClassFile.<init>(ClassFile.java:85) at org.hibernate.ejb.packaging.AbstractJarVisitor.checkAnnotationMatching(AbstractJarVisitor.java:243) at org.hibernate.ejb.packaging.AbstractJarVisitor.executeJavaElementFilter(AbstractJarVisitor.java:209) at org.hibernate.ejb.packaging.AbstractJarVisitor.addElement(AbstractJarVisitor.java:170) at org.hibernate.ejb.packaging.FileZippedJarVisitor.doProcessElements(FileZippedJarVisitor.java:119) at org.hibernate.ejb.packaging.AbstractJarVisitor.getMatchingEntries(AbstractJarVisitor.java:146) at org.hibernate.ejb.packaging.NativeScanner.getClassesInJar(NativeScanner.java:128) ... 14 more Since I know the jar is fine as the unit and integration tests run against it, I thought it might be a problem with javassist, so I tried cglib. The bytecode provider then shows as cglib but I still get the exact same stack trace with javassist present in it. cglib is definitely in the classpath: $ unzip -l flexipol-jar-with-dependencies.jar | grep cglib | wc -l 383 I've tried with both hibernate 3.4 and 3.5 and get the exact same error. Is this a problem with javassist?

  • asp mvc unit test HttpContext.Current.Cache?

    - by Paul Creasey
    Here is the first part of my controller code: public class ControlMController : Controller { IControlMService _controlMservice; public IList<User> Users { get { if (System.Web.HttpContext.Current.Cache["users"] == null) { System.Web.HttpContext.Current.Cache["users"] = _controlMservice.GetUsers(); } return (IList<User>)System.Web.HttpContext.Current.Cache["users"]; } } public ControlMController(IControlMService controlMservice) { this._controlMservice = controlMservice; var users = Users; ViewData["Users"] = users; ViewData["jqSelectUsers"] = string.Join(";", users.Select(x => x.UserID + ":" + x.Name).ToArray()); } I'm trying to test it, and because i'm caching using the HttpContext, i'm struggling with null reference exceptions. I've tried using MvcContrib.TestHelper; here is my sample test... [TestMethod] public void EventDetails_Returns_view_with_correct_event() { var builder = new TestControllerBuilder(); var controller = builder.CreateController<ControlMController>( new ControlMService( new MockControlMRepository() )); var view = (controller.EventDetails(1) as ViewResult); Assert.AreEqual(1, (view.ViewData.Model as Event).EventId); } (I haven't quite got round to using DI for my tests! I'm still getting the same null reference exception when the code hits the httpcontext: Error 1 TestCase 'SupportTool.Tests.Services.ControlM.ControlMControllerTests.EventDetails_Returns_view_with_correct_event' failed: System.NullReferenceException: Object reference not set to an instance of an object. at SupportTool.web.Controllers.ControlMController.get_Users() Any ideas?
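    Two common routes here: swap System.Web.HttpContext.Current.Cache for HttpRuntime.Cache, which is available without a request context and so does not blow up in unit tests, or push the caching behind an abstraction the test can fake. A sketch of the second option - the interface and class names are invented for illustration:

        public interface ICacheProvider
        {
            object Get(string key);
            void Set(string key, object value);
        }

        // Production implementation backed by the ASP.NET cache.
        public class HttpRuntimeCacheProvider : ICacheProvider
        {
            public object Get(string key) { return System.Web.HttpRuntime.Cache[key]; }
            public void Set(string key, object value) { System.Web.HttpRuntime.Cache[key] = value; }
        }

        // In-memory fake for unit tests.
        public class DictionaryCacheProvider : ICacheProvider
        {
            private readonly System.Collections.Generic.Dictionary<string, object> store =
                new System.Collections.Generic.Dictionary<string, object>();
            public object Get(string key) { object v; store.TryGetValue(key, out v); return v; }
            public void Set(string key, object value) { store[key] = value; }
        }

    The controller would then take an ICacheProvider in its constructor alongside IControlMService, and the Users property would read through it instead of HttpContext.Current.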

  • No Commons Logging in Android?

    - by Joe Boese
    Hello all, I have a pretty big library I developed specifically for use in my Android Application. However business logic itself has no dependency on Android. To preserve that, I used Commons Logging throughout this library and it's respective JUnit tests (which I run in Eclipse). However now that I am starting to integrate it into an Activity which I launch on Android, I am unable to get my logging to work. In Eclipse/JUnit, I had simply pulled in log4j's jar file as well as a log4j.properties file. This doesn't seem to work when deploying to a device. After struggling with attempting to get that to work for several hours, I gave up and tried replacing all my commons logging stuff with android.util.Log. Now I can log on the device.. but all JUnit tests are broken. When any JUnit tries to log using android.util.Log, it throws a RuntimeException 'Stub!'. I would prefer to revert to my commons logging approach.. if anyone can help with that.. otherwise.. what can I do to get my JUnit test cases running using 'android.util.Log'? Many thanks in advance.. I've spent more than a few hours on this and I'd like to move on to writing real code again! Joe
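    If the goal is simply "library code with no Android dependency that still logs sensibly both under plain JUnit and on the device", one option worth considering (an alternative to commons-logging/log4j, not a fix for them) is a tiny logging facade of your own with two pluggable backends; it sidesteps both the log4j.properties issue and the 'Stub!' RuntimeException that android.util.Log throws on the desktop JVM. A rough sketch with made-up names:

        // Library-side facade: no Android imports anywhere in the business logic.
        public interface AppLogger {
            void debug(String tag, String message);
        }

        // Backend used by the JUnit tests on the desktop JVM.
        public class ConsoleLogger implements AppLogger {
            public void debug(String tag, String message) {
                System.out.println(tag + ": " + message);
            }
        }

        // Lives only in the Android project, so the stubbed android.jar never runs under JUnit.
        public class AndroidLogger implements AppLogger {
            public void debug(String tag, String message) {
                android.util.Log.d(tag, message);
            }
        }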

  • ocunit testing on iPhone

    - by Magnus Poromaa
    Hi, I am trying to get OCUnit working in my project in Xcode. Since I also need to debug in the unit tests, I am using a script that automates the setup (see below). I just include it in the project under resources and change the name to the .ocunit file I want it to run. The problem I get is that it can't find the bundle file and therefore exits with an error. Can anyone who has a clue about Xcode and Objective-C take a look at it and tell me what is wrong? Also, how am I supposed to produce the .ocunit file that I need to run - by setting up a new unit test target for the iPhone and adding tests to it, or something else? Hope someone has a clue, since I just started my iPhone development and need to get it up and running quickly. Apple Script -- The only customized value we need is the name of the test bundle tell me to activate tell application "Xcode" activate set thisProject to project of active project document tell thisProject set testBundleName to name of active target set unitTestExecutable to make new executable at end of executables set name of unitTestExecutable to testBundleName set path of unitTestExecutable to "/Applications/TextEdit.app" tell unitTestExecutable -- Add a "-SenTest All" argument make new launch argument with properties {active:true, name:"-SenTest All"} -- Add the magic set injectValue to "$(BUILT_PRODUCTS_DIR)/" & testBundleName & ".octest" make new environment variable with properties {active:true, name:"XCInjectBundle", value:injectValue} make new environment variable with properties {active:true, name:"XCInjectBundleInto", value:"/Applications/TextEdit.app/Contents/MacOS/TextEdit"} make new environment variable with properties {active:true, name:"DYLD_INSERT_LIBRARIES", value:"$(DEVELOPER_LIBRARY_DIR)/PrivateFrameworks/DevToolsBundleInjection.framework/DevToolsBundleInjection"} make new environment variable with properties {active:true, name:"DYLD_FALLBACK_FRAMEWORK_PATH", value:"$(DEVELOPER_LIBRARY_DIR)/Frameworks"} end tell end tell end tell Cheers Magnus

  • Assigning static final int in a JUnit (4.8.1) test suite

    - by Dr. Monkey
    I have a JUnit test class in which I have several static final ints that can be redefined at the top of the tester code to allow some variation in the test values. I have logic in my @BeforeClass method to ensure that the developer has entered values that won't break my tests. I would like to improve variation further by allowing these ints to be set to (sensible) random values in the @BeforeClass method if the developer sets a boolean useRandomValues = true;. I could simply remove the final keyword to allow the random values to overwrite the initialisation values, but I have final there to ensure that these values are not inadvertently changed, as some tests rely on the consistency of these values. Can I use a constructor in a JUnit test class? Eclipse starts putting red underlines everywhere if I try to make my @BeforeClass into a constructor for the test class, and making a separate constructor doesn't seem to allow assignment to these variables (even if I leave them unassigned at their declaration); Is there another way to ensure that any attempt to change these variables after the @BeforeClass method will result in a compile-time error? Can I make something final after it has been initialised?
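    A final field can only be assigned once, at its declaration or in a static initializer, so neither a constructor nor a @BeforeClass method can overwrite it - but the randomization can be folded into the declaration itself, which keeps the compile-time protection against later reassignment. A sketch of that idea (the field names and ranges are illustrative, not from the actual test class):

        import java.util.Random;

        public class WidgetTest {

            private static final boolean useRandomValues = true;
            private static final Random random = new Random();

            // Assigned exactly once, when the class is loaded, so it is constant for the
            // whole test run but can still vary between runs when useRandomValues is true.
            private static final int ITEM_COUNT = useRandomValues ? randomBetween(1, 50) : 10;

            private static int randomBetween(int min, int max) {
                return min + random.nextInt(max - min + 1);
            }
        }

    The existing @BeforeClass sanity checks can stay as they are; they simply see whichever value the declaration produced.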

  • Subversion versus Vault

    - by WebDude
    I'm currently reviewing the benefits of moving from SVN to SourceGear Vault. Has anyone got advice or a link to a detailed comparison between the two? Bear in mind I would have to move my current source control system across, which works strongly in SVN's favor. Here is some info I have found out thus far from my own investigations. I have been running some timing tests between the two, and Vault seems to perform most operations much faster. The time tests used the same server as the repository, the same workstation client, and the same project.
    Time comparisons - SVN: Add/Commit 12:30; Get Latest Revision 5:35; Tagging/Labelling 0:01; Branching N/A - I don't think true branching exists in SVN.
    Time comparisons - Vault: Add/Commit 4:45; Get Latest Revision 0:51; Tagging/Labelling 0:30; Branching 3:23.
    I also found an online source comparing some other points. This is the kind of information I'm looking for. Usage comparisons: Subversion is edit/merge/commit only; Vault allows you to do either edit/merge/commit or checkout/edit/checkin. Vault looks and acts just like VSS, which makes the learning curve effectively zero for VSS users. Vault has a VS plugin, but it only works if you're going to run in checkout mode. Subversion has clients for pretty much every OS you can imagine; Vault has a GUI client for Windows and a command-line client for Mono. Both will support remote work, since both use HTTP as their transport (Subversion uses extended DAV, Vault uses SOAP). Subversion installation, especially with Apache, is more complex. Subversion has a lot of third-party support; Vault has just a few things.
    My question: has anyone got advice or a link to a detailed comparison between the two?
