Search Results

Search found 8185 results on 328 pages for 'technical tests'.


  • Unit testing code which uses Umbraco v4 API

    - by Jason Evans
    I'm trying to write a suite of integration tests, where I have custom code which uses the Umbraco API. The Umbraco database lives in a SQL Server CE 4.0 database (*.sdf file) and I've managed to get that association working fine. My problem looks to be dependencies in the Umbraco code. For example, I would like to create a new user for my tests, so I try:

        var user = User.MakeNew("developer", "developer", "mypassword", "[email protected]", adminUserType);

    Now as you can see, I have to pass a user type, which is an object. I've tried two separate ways to create the user type, both of which fail with a null reference exception:

        var adminUserType = UserType.GetUserType(1);
        var adminUserType2 = new UserType(1);

    The problem is that in each case the UserType code calls its Cache method, which uses the HttpRuntime class, which naturally is null. My question is this: can someone suggest a way of writing integration tests against Umbraco code? Will I have to end up using a mocking framework such as TypeMock or JustMock?

    Read the article

  • Testing approach for multi-threaded software

    - by Shane MacLaughlin
    I have a piece of mature geospatial software that has recently had areas rewritten to take better advantage of the multiple processors available in modern PCs. Specifically, display, GUI, spatial searching, and main processing have all been hived off to separate threads. The software has a pretty sizeable GUI automation suite for functional regression, and another smaller one for performance regression. While all automated tests are passing, I'm not convinced that they provide nearly enough coverage in terms of finding bugs relating to race conditions, deadlocks, and other nasties associated with multi-threading. What techniques would you use to see if such bugs exist? What techniques would you advocate for rooting them out, assuming there are some in there to root out?

    What I'm doing so far is running the GUI functional automation on the app under a debugger, so that I can break out of deadlocks and catch crashes, and I plan to make a bounds-checker build and repeat the tests against that version. I've also carried out a static analysis of the source via PC-Lint in the hope of locating potential deadlocks, but haven't had any worthwhile results.

    The application is C++, MFC, multiple document/view, with a number of threads per document. The locking mechanism I'm using is based on an object that includes a pointer to a CMutex, which is locked in the ctor and freed in the dtor. I use local variables of this object to lock various bits of code as required, and my mutex has a timeout that fires a warning if the timeout is reached. I avoid locking where possible, working on copies of resources instead. What other tests would you carry out?

    Read the article

  • Difference between float and double

    - by VaioIsBorn
    I know, I've read about the difference between double precision and single precision, etc. But they should give the same results in most cases, right? I was solving a problem in a programming contest, and the calculations involved floating point numbers that were not really big, so I decided to use float instead of double, and I checked it - I was getting the correct results. But when I sent the solution, it said only 1 of 10 tests was correct. I checked again and again, until I found that using float is not the same as using double. I put double for the calculations and double for the output, and the program gave the SAME results, but this time it passed all 10 tests. I repeat, the output was the SAME, the results were the SAME, but float didn't work - only double. The values were not so big either, and the program gave the same results on the same tests both with float and double, but the online judge accepted only the double-based solution. Why? What is the difference?
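    For illustration, a minimal Java sketch (not from the original post; the values are invented) of why the two types diverge: float carries a 24-bit significand, good for roughly 7 decimal digits, while double carries 53 bits, roughly 15-16, so intermediate float results can round differently even when the printed output looks the same.

        public class FloatVsDouble {
            public static void main(String[] args) {
                // 16777217 = 2^24 + 1 is the first integer a float cannot
                // represent exactly, because IEEE 754 single precision has
                // only a 24-bit significand (double has 53 bits).
                float f = 16777217.0f;   // silently rounds to 16777216
                double d = 16777217.0;   // stored exactly
                System.out.println(f == 16777216.0f);  // true
                System.out.println(d == 16777217.0);   // true

                // Rounding also accumulates in ordinary arithmetic, long
                // before the values themselves get "big":
                float fsum = 0.0f;
                double dsum = 0.0;
                for (int i = 0; i < 10000; i++) {
                    fsum += 0.1f;
                    dsum += 0.1;
                }
                System.out.println(fsum);  // visibly off from 1000
                System.out.println(dsum);  // very close to 1000.0
            }
        }

    A judge that compares more digits than the program displays can therefore reject a float solution even when its printed results look identical to the double version's.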

    Read the article

  • Java: design for using many executors services and only few threads

    - by Guillaume
    I need to run multiple threads in parallel to perform some tests. My 'test engine' will have n tests to perform, each one doing k sub-tests. Each test result is stored for later use. So I have n*k processes that can be run concurrently. I'm trying to figure out how to use the Java concurrency tools efficiently. Right now I have one executor service at the test level and n executor services at the subtest level. I create my list of Callables for the test level. Each test callable will then create another list of callables for the subtest level. When invoked, a test callable will subsequently invoke all its subtest callables:

        test 1
            subtest a1
            subtest ...
            subtest k1
        test n
            subtest an
            subtest ...
            subtest kn

    Call sequence:

        test manager creates test callables 1..n
        test 1 callable creates subtest callables a1..k1
        test n callable creates subtest callables an..kn
        test manager invokes all test callables
        test 1 callable invokes all subtest callables a1..k1
        test n callable invokes all subtest callables an..kn

    This is working fine, but a lot of new threads are created. I cannot share an executor service, since I need to call 'shutdown' on the executors. My idea to fix this problem is to provide the same fixed-size thread pool to each executor service. Do you think this is a good design? Am I missing something more appropriate/simple for doing this?
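    A sketch of one way to flatten this onto a single shared pool (not from the original post; TestEngine, runAll, and the String result type are invented). Flattening matters because a test-level task that blocks inside invokeAll while its subtests wait for threads on the same bounded pool can deadlock once every worker is a parent waiting on children:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class TestEngine {

            private final ExecutorService pool = Executors.newFixedThreadPool(8);

            // tests.get(i) holds the k subtest callables of test i
            public List<String> runAll(List<List<Callable<String>>> tests) throws Exception {
                List<Future<String>> futures = new ArrayList<Future<String>>();
                for (List<Callable<String>> subtests : tests) {
                    for (Callable<String> subtest : subtests) {
                        futures.add(pool.submit(subtest));  // every subtest shares one pool
                    }
                }
                List<String> results = new ArrayList<String>();
                for (Future<String> future : futures) {
                    results.add(future.get());  // gathers results, rethrows subtest failures
                }
                return results;
            }

            public void shutdown() {
                pool.shutdown();  // one shutdown call, made by the engine's owner
            }
        }

    Since only the engine owns the ExecutorService, the "I can't share it because I must call shutdown" constraint disappears.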

    Read the article

  • read xml in javascript problem

    - by Najmi
    Hi all, I have a problem with my code to read XML. I have used Ajax to read the XML data and populate a combobox with it. My problem is that it only reads the first item. Here is my XML:

        <area>
          <code>1</code>
          <name>area1</name>
        </area>
        <area>
          <code>2</code>
          <name>area2</name>
        </area>

    and my JavaScript:

        if (http.readyState == 4 && http.status == 200) {
            // get select element
            var item = document.ProblemMaintenanceForm.elements["probArea"];
            // empty combobox
            item.options.length = 0;
            // read xml data from action file
            var test = http.responseXML.getElementsByTagName("area");
            alert(test.length);
            for (var i = 0; i < test.length; i++) {
                var tests = test[i];
                item.options[item.options.length] = new Option(
                    tests.getElementsByTagName("name")[i].childNodes[i].nodeValue,
                    tests.getElementsByTagName("code")[i].childNodes[i].nodeValue);
            }
        }

    Read the article

  • What is the purpose of unit testing an interface repository

    - by ahsteele
    I am unit testing an ICustomerRepository interface used for retrieving objects of type Customer. As a unit test, what value am I gaining by testing the ICustomerRepository in this manner? Under what conditions would the test below fail? For tests of this nature, is it advisable to write tests that I know should fail? I.e. look for id 4 when I know I've only placed a customer with id 5 in the repository. I am probably missing something obvious, but it seems the integration tests of the class that implements ICustomerRepository will be of more value.

        [TestClass]
        public class CustomerTests : TestClassBase
        {
            private Customer SetUpCustomerForRepository()
            {
                return new Customer()
                {
                    CustId = 5,
                    DifId = "55",
                    CustLookupName = "The Dude",
                    LoginList = new[]
                    {
                        new Login { LoginCustId = 5, LoginName = "tdude" },
                        new Login { LoginCustId = 5, LoginName = "tdude2" }
                    }
                };
            }

            [TestMethod]
            public void CanGetCustomerById()
            {
                // arrange
                var customer = SetUpCustomerForRepository();
                var repository = Stub<ICustomerRepository>();

                // act
                repository.Stub(rep => rep.GetById(5)).Return(customer);

                // assert
                Assert.AreEqual(customer, repository.GetById(5));
            }
        }

    Test base class:

        public class TestClassBase
        {
            protected T Stub<T>() where T : class
            {
                return MockRepository.GenerateStub<T>();
            }
        }

    ICustomerRepository and IRepository:

        public interface ICustomerRepository : IRepository<Customer>
        {
            IList<Customer> FindCustomers(string q);
            Customer GetCustomerByDifID(string difId);
            Customer GetCustomerByLogin(string loginName);
        }

        public interface IRepository<T>
        {
            void Save(T entity);
            void Save(List<T> entity);
            bool Save(T entity, out string message);
            void Delete(T entity);
            T GetById(int id);
            ICollection<T> FindAll();
        }

    Read the article

  • grailsApplication access in Grails unit Test

    - by Reza
    I am trying to write unit tests for a service which uses grailsApplication.config for some settings. It seems that in my unit tests the service instance cannot access the config (null pointer) for its settings, while it can access those settings when I use "run-app". How can I configure the service to access grailsApplication in my unit tests?

        class MapCloudMediaServerControllerTests {
            def grailsApplication

            @Before
            public void setUp() {
                grailsApplication.config = '''
                    video {
                        location="C:\\tmp\\" // or shared filesystem drive for a cluster
                        yamdi {
                            path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\yamdi"
                        }
                        ffmpeg {
                            fileExtension = "flv" // use flv or mp4
                            conversionArgs = "-b 600k -r 24 -ar 22050 -ab 96k"
                            path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\ffmpeg"
                            makethumb = "-an -ss 00:00:03 -an -r 2 -vframes 1 -y -f mjpeg"
                        }
                        ffprobe {
                            path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\ffprobe"
                            params=""
                        }
                        flowplayer {
                            version = "3.1.2"
                        }
                        swfobject {
                            version = ""
                        }
                        qtfaststart {
                            path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\qtfaststart"
                        }
                    }
                '''
            }

            @Test
            void testMpegtoFlvConvertor() {
                log.info "In test Mpg to Flv Convertor function!"
                def controller = new MapCloudMediaServerController()
                assert controller != null
                controller.videoService = new VideoService()
                assert controller.videoService != null
                log.info "Is the video service null? ${controller.videoService == null}"
                controller.videoService.grailsApplication = grailsApplication
                log.info "Is grailsApplication null? ${controller.videoService.grailsApplication == null}"
                // Very important part for simulating the HTTP request
                controller.metaClass.request = new MockMultipartHttpServletRequest()
                controller.request.contentType = "video/mpg"
                controller.request.content = new File("..\\MapCloudMediaServer\\web-app\\videoclips\\sample3.mpg").getBytes()
                controller.mpegtoFlvConvertor()
                byte[] videoOut = IOUtils.toByteArray(controller.response.getOutputStream())
                def outputFile = new File("..\\MapCloudMediaServer\\web-app\\videoclips\\testsample3.flv")
                outputFile.append(videoOut)
            }
        }

    Read the article

  • How to map a test onto a list of numbers

    - by Arthur Ulfeldt
    I have a function with a bug:

        user> (-> 42 int-to-bytes bytes-to-int)
        42
        user> (-> 128 int-to-bytes bytes-to-int)
        -128
        user>

    Looks like I need to handle overflow when converting back. Better write a test to make sure this never happens again. This project is using clojure.contrib.test-is, so I write:

        (deftest int-to-bytes-to-int
          (let [lots-of-big-numbers (big-test-numbers)]
            (map #(is (= (-> % int-to-bytes bytes-to-int) %))
                 lots-of-big-numbers)))

    This should be testing that converting to a seq of bytes and back again produces the original result, on a list of 10000 random numbers. Looks OK in theory, except none of the tests ever run:

        Testing com.cryptovide.miscTest
        Ran 23 tests containing 34 assertions.
        0 failures, 0 errors.

    Why don't the tests run? What can I do to make them run?

    Read the article

  • About Interview structure for test automation lab developers

    - by Ikaso
    Hi, I am interviewing applicants for a team that does test automation on our company's product(s). The team is composed of junior software developers and a team leader. The product runs on Windows and has both managed and unmanaged parts. The test automation covers both the client side (user mode and kernel mode) and the server side (IIS, Windows services, backend). We are doing mainly integration tests and black-box tests.

    I am trying to figure out how to organize my interview. My overall idea is to ask about a project they have done, then ask some technical questions (multithreading, GC, design patterns) and one programming question. Please note that there is another interview before mine with two programming questions. My programming question is rather simple (for example: reversing a singly-linked list). My coworkers think that my questions will not find good developers, since they are rather simple and well known, but so far most of the applicants fail them.

    My questions are: Should I change the structure of my interview for this kind of job? What questions do you ask to figure out if an applicant is test oriented? (Maybe I should provide a buggy implementation of a problem, let them find the bugs, and then ask them what tests they would have done.)

    Regards,

    Read the article

  • Does new JUnit 4.8 @Category render test suites almost obsolete?

    - by grigory
    Given the question 'How to run all tests belonging to a certain Category?' and its answer, would the following approach be better for test organization?

    - define a master test suite that contains all tests (e.g. using ClasspathSuite)
    - design a sufficient set of JUnit categories (sufficient meaning that every desirable collection of tests is identifiable using one or more categories)
    - define targeted test suites based on the master test suite and the set of categories

    For example:

    - identify categories for speed (slow, fast), dependencies (mock, database, integration), function (), domain (
    - demand that each test is properly qualified (tagged) with the relevant set of categories
    - create a master test suite using ClasspathSuite (all tests found in the classpath)
    - create targeted suites by qualifying the master test suite with categories, e.g. a mock test suite, a fast database test suite, a slow integration test suite for domain X, etc.

    My question is really soliciting opinions on this approach vs. the classic test suite approach. One unbeatable benefit is that every new test is immediately contained in the relevant suites with no suite maintenance. One concern is the proper categorization of each test.
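    As a concrete sketch of the moving parts being discussed (the category names and test class here are invented; with ClasspathSuite the class list would be discovered rather than spelled out):

        import org.junit.Test;
        import org.junit.experimental.categories.Categories;
        import org.junit.experimental.categories.Categories.IncludeCategory;
        import org.junit.experimental.categories.Category;
        import org.junit.runner.RunWith;
        import org.junit.runners.Suite.SuiteClasses;

        public class CategorizedTests {

            // categories are plain marker types
            public interface Fast {}
            public interface Database {}

            public static class CustomerTests {
                @Test @Category(Fast.class)
                public void quickCheck() {}

                @Test @Category({ Fast.class, Database.class })
                public void quickDatabaseCheck() {}
            }

            // a targeted suite: the master set of classes qualified by one category
            @RunWith(Categories.class)
            @IncludeCategory(Database.class)
            @SuiteClasses({ CustomerTests.class })
            public static class DatabaseSuite {}
        }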

    Read the article

  • How do I prevent qFatal() from aborting the application?

    - by Dave
    My Qt application uses Q_ASSERT_X, which calls qFatal(), which (by default) aborts the application. That's great for the application, but I'd like to suppress that behavior when unit testing the application. (I'm using the Google Test Framework.) I have my unit tests in a separate project, statically linking to the class I'm testing. The documentation for qFatal() reads:

        Calls the message handler with the fatal message msg. If no message handler
        has been installed, the message is printed to stderr. Under Windows, the
        message is sent to the debugger.

        If you are using the default message handler this function will abort on
        Unix systems to create a core dump. On Windows, for debug builds, this
        function will report a _CRT_ERROR enabling you to connect a debugger to
        the application.
        ...
        To suppress the output at runtime, install your own message handler with
        qInstallMsgHandler().

    So here's my main.cpp file:

        #include <gtest/gtest.h>
        #include <QApplication>

        void testMessageOutput(QtMsgType type, const char *msg)
        {
            switch (type) {
            case QtDebugMsg:
                fprintf(stderr, "Debug: %s\n", msg);
                break;
            case QtWarningMsg:
                fprintf(stderr, "Warning: %s\n", msg);
                break;
            case QtCriticalMsg:
                fprintf(stderr, "Critical: %s\n", msg);
                break;
            case QtFatalMsg:
                fprintf(stderr, "My Fatal: %s\n", msg);
                break;
            }
        }

        int main(int argc, char **argv)
        {
            qInstallMsgHandler(testMessageOutput);
            testing::InitGoogleTest(&argc, argv);
            return RUN_ALL_TESTS();
        }

    But my application still stops at the assert. I can tell that my custom handler is being called, because the output when running my tests is:

        My Fatal: ASSERT failure in MyClass::doSomething: "doSomething()", file myclass.cpp, line 21
        The program has unexpectedly finished.

    What can I do so that my tests keep running even when an assert fails?

    Read the article

  • using spring, hibernate and scala, is there a better way to load test data than dbunit?

    - by egervari
    Here are some things I really dislike about DbUnit:

    1) You cannot specify the exact ordering of the inserts, because DbUnit likes to group your inserts by table name and not by the order you define them in the XML file. This is a problem when you have records depending on records in other tables, so you have to disable foreign key constraints during your tests... which actually sucks, because those foreign key constraints will fire in production while your tests won't be aware of them!

    2) They seem hellbent on forcing you to use an XML namespace to define your XML... and I honestly can't be bothered to do this. I like the data.xml without any namespace. It works. But they are so hellbent on deprecating it.

    3) Creating different XML files on a per-test basis is hard, so it actually encourages creating data for your entire app. Unfortunately, that process gets bloated too once the data grows in size and things get entangled. There has got to be a better way to split your test data into chunks without having to copy/paste a lot of it across all of your tests.

    4) Keeping track of id references in a big XML file is just impossible. If you have 130 domain classes, it gets bewildering. This model simply does not scale.

    Is there something less bloated and better in the Spring/Hibernate space? DbUnit has worn out its welcome and I'm really looking for something better.
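    For contrast, a sketch of the fixture-in-code style often suggested in the Spring/Hibernate space (the context file and the Customer entity here are invented, not from the post): each test builds exactly the rows it needs, in an explicit order that respects foreign keys, inside a transaction that the Spring test framework rolls back afterwards.

        import static org.junit.Assert.assertNotNull;

        import org.hibernate.SessionFactory;
        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.test.context.ContextConfiguration;
        import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
        import org.springframework.transaction.annotation.Transactional;

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = "classpath:test-context.xml")
        @Transactional // rolled back after each test by default
        public class CustomerRepositoryTest {

            @Autowired
            private SessionFactory sessionFactory;

            @Test
            public void findsTheCustomerItJustSaved() {
                // inserts run in exactly this order, so FK constraints can stay on
                Customer customer = new Customer("The Dude");
                sessionFactory.getCurrentSession().save(customer);

                Customer found = (Customer) sessionFactory.getCurrentSession()
                        .createQuery("from Customer where name = :name")
                        .setParameter("name", "The Dude")
                        .uniqueResult();
                assertNotNull(found);
            }
        }

    This trades the shared XML dataset for small per-test fixtures, which speaks to points 1, 3, and 4 above; whether it is actually "better" than DbUnit is exactly what the question is asking.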

    Read the article

  • How to unit test generic classes

    - by Rowland Shaw
    I'm trying to set up some unit tests for an existing Compact Framework class library. However, I've fallen at the first hurdle, where it appears that the test framework is unable to load the types involved (even though they're both in the class library being tested):

        Test method MyLibrary.Tests.MyGenericClassTest.MyMethodTest threw exception:
        System.MissingMethodException: Could not load type 'MyLibrary.MyType' from
        assembly 'MyLibrary, Version=1.0.3778.36113, Culture=neutral, PublicKeyToken=null'.

    My code is loosely:

        public class MyGenericClass<T> : List<T> where T : MyType, new()
        {
            public bool MyMethod(T foo)
            {
                throw new NotImplementedException();
            }
        }

    With test methods:

        public void MyMethodTestHelper<T>() where T : MyType, new()
        {
            MyGenericClass<T> target = new MyGenericClass<T>();
            T foo = new T();
            bool expected = true;
            bool actual = target.MyMethod(foo);
            Assert.AreEqual(expected, actual);
        }

        [TestMethod()]
        public void MyMethodTest()
        {
            MyMethodTestHelper<MyType>();
        }

    I'm a bit stumped, though, as I can't even get it to break in the debugger to get to the inner exception, so what else do I check?

    EDIT: this does seem to be something specific to the Compact Framework - recompiling the class libraries and the unit tests for the full framework gives the expected output (i.e. the debugger stops when I'm about to throw a NotImplementedException).

    Read the article

  • How to invoke make install for one subdirectory of a Qt project

    - by chalup
    I'm working on a custom library, and I wish users could use it just by adding:

        CONFIG += mylib

    to their .pro files. This can be done by installing a mylib.prf file to %QTDIR%/mkspecs/features. I've checked how the Qt Mobility project creates and installs such a file, but there is one thing I'd like to do differently. If I've understood the pro/pri files of Qt Mobility correctly, their example projects don't really use CONFIG += mobility; instead they add the QtMobility sources to the include path and share the *.obj directory with the main library project. For my library I'd like to have examples that are as independent as possible, i.e. projects that can be compiled from anywhere once MyLib is compiled and installed. I have the following directory structure:

        mylib
        |- examples
        |- src
        |- tests
        \- mylib.pro

    It seems that the easiest way to achieve what I described above is creating mylib.pro like this:

        TEMPLATE = subdirs
        SUBDIRS += src
        SUBDIRS += examples
        tests:SUBDIRS += tests

    and somehow enforcing that "cd src && make install" is invoked after building src. What is the best way to do this? Of course, any other suggestions for automatic library deployment before examples compilation are welcome.

    Read the article

  • django test client trouble

    - by Anton Koval'
    I've got a problem... we're writing a project using Django, and I'm trying to use django.test.client with the nose test framework for tests. Our code is like this:

        from simplejson import loads
        from urlparse import urljoin
        from django.test.client import Client

        TEST_URL = "http://smakly.localhost:9090/"

        def test_register():
            cln = Client()
            ref_data = {"email": "[email protected]", "name": "???????",
                        "website": "http://hot.bear.com", "xhr": "true"}
            print urljoin(TEST_URL, "/accounts/register/")
            response = loads(cln.post(urljoin(TEST_URL, "/accounts/register/"), ref_data))
            print response["message"]

    and in the nose output I catch:

        Traceback (most recent call last):
          File "/home/psih/work/svn/smakly/eggs/nose-0.11.1-py2.6.egg/nose/case.py", line 183, in runTest
            self.test(*self.arg)
          File "/home/psih/work/svn/smakly/src/smakly.tests/smakly/tests/frontend/test_profile.py", line 25, in test_register
            response = loads(cln.post(urljoin(TEST_URL, "/accounts/register/"), ref_data))
          File "/home/psih/work/svn/smakly/parts/django/django/test/client.py", line 313, in post
            response = self.request(**r)
          File "/home/psih/work/svn/smakly/parts/django/django/test/client.py", line 225, in request
            response = self.handler(environ)
          File "/home/psih/work/svn/smakly/parts/django/django/test/client.py", line 69, in __call__
            response = self.get_response(request)
          File "/home/psih/work/svn/smakly/parts/django/django/core/handlers/base.py", line 78, in get_response
            urlconf = getattr(request, "urlconf", settings.ROOT_URLCONF)
          File "/home/psih/work/svn/smakly/parts/django/django/utils/functional.py", line 273, in __getattr__
            return getattr(self._wrapped, name)
        AttributeError: 'Settings' object has no attribute 'ROOT_URLCONF'

    My settings.py file does have this attribute. If I get the data from the server with the standard urllib2.urlopen().read(), it works properly. Any ideas how I can solve this case?

    Read the article

  • How do you prove a function works?

    - by glenn I.
    I've recently gotten the testing religion and have started primarily with unit testing. I write unit tests which illustrate that a function works under certain cases, specifically using the exact inputs I'm using. I may write a number of unit tests to exercise the function. Still, I haven't actually proved anything other than that the function does what I expect it to do under the scenarios I've tested. There may be other inputs and scenarios I haven't thought of, and thinking of edge cases is expensive, particularly at the margins. This is all not very satisfying to me. When I start to think of having to come up with tests to satisfy branch and path coverage, and then integration testing, the prospective permutations become a little maddening.

    So, my question is: how can one prove (in the same vein as proving a theorem in mathematics) that a function works (and, in a perfect world, compose these 'proofs' into a proof that a system works)? Is there a certain area of testing that covers an approach where you seek to prove a system works by proving that all of its functions work? Does anybody outside of academia bother with an approach like this? Are there tools and techniques to help?

    I realize that my use of the word 'work' is not precise. I guess I mean that a function works when it does what some spec (written or implied) states it should do, and does nothing other than that. Note, I'm not a mathematician, just a programmer.

    Read the article

  • Compiling + testing an Android library with the JDK?

    - by Jarle Hansen
    Hi all, I am creating a library for Android that others can include in their own projects. So far I have been working on it as a normal Java project with JDK 1.6 set up as the system library. This works just fine in Eclipse when I add android.jar. The issue comes when I run my build script. I am running Gradle, doing a normal compile and test build cycle. My thought was that it does not matter if I compile the library with a normal JDK, since it is not a standalone application. The benefit of creating a normal Java project is that Gradle supports it much better. My project also does not contain any UI at all. However, the problem is that android.jar and the JDK of course contain many of the same classes, and I think this is what messes up my build: everything crashes when running the tests (the tests are in the same project under src/test/java).

    My question is: how should I create this project, which is meant to be included in Android projects as a third-party library? Should I create it as an Android project in Eclipse even though I am only creating a library that does not use any of the UI features? Also, should the tests be in a separate project? Thanks for all responses!

    Read the article

  • Xcode Unit Testing - Accessing Resources from the application's bundle?

    - by Ben Scheirman
    I'm running into an issue and I wanted to confirm that I'm doing things the correct way. I can test simple things with my SenTestingKit tests, and that works okay. I've set up a unit test bundle and set it as a dependency of the main application target. It successfully runs all tests whenever I press cmd+B.

    Here's where I'm running into issues. I have some XML files that I need to load from the resources folder as part of the application. Being a good unit tester, I want to write unit tests around this to make sure they are loading properly. So I have some code that looks like this:

        NSString *filePath = [[NSBundle mainBundle] pathForResource:@"foo" ofType:@"xml"];

    This works when the application runs, but during a unit test mainBundle points to the wrong bundle, so this line of code returns nil. So I changed it to utilize a known class, like this:

        NSString *filePath = [[NSBundle bundleForClass:[Config class]] pathForResource:@"foo" ofType:@"xml"];

    This doesn't work either, because in order for the test to even compile code like this, Config needs to be part of the unit test target. If I add that, then the bundle for that class becomes the unit test bundle. (Ugh!) Am I approaching this the wrong way?

    Read the article

  • How can I get 100% test coverage in a Perl module that uses DBI?

    - by BrianH
    I am a bit new to the Devel::Cover module, but have found it very useful in making sure I am not missing tests. A problem I am running into is understanding the report from Devel::Cover. I've looked at the documentation, but can't figure out what I need to test to get 100% coverage. Here is the output from the cover report:

        line  err  stmt bran cond sub  pod  time  code
        ...
        36                                         sub connect_database {
        37          3              3    1    1126      my $self = shift;
        38          3    100                 24        if ( !$self->{dsn} ) {
        39          1                        7             croak 'dsn not supplied - cannot connect';
        40                                             }
        41    ***   2         33             21        $self->{dbh} = DBI->connect( $self->{dsn}, q{}, q{} )
        42                                                 || croak "$DBI::errstr";
        43          1                        11        return $self;
        44                                         }

    and the condition-coverage detail:

        line  err      %    l  !l&&r  !l&&!r  expr
        ----- --- ------ ---- ------ ------- ----
        41    ***     33    1      0       0  'DBI'->connect($$self{'dsn'}, '', '') || croak("$DBI::errstr")

    And here is an example of my code that tests this specific line:

        my $database = MyModule::Database->new( { dsn => 'Invalid DSN' } );
        throws_ok( sub { $database->connect_database() },
                   qr/Can't connect to data source/,
                   'Test connection exception (invalid dsn)' );

    This test passes - the connect does throw an error and satisfies my throws_ok check. I do have some tests that test for a successful connection, which is why I think I have 33% coverage, but if I'm reading it correctly, cover thinks I am not testing the "|| croak" part of the statement. I thought I was, with the throws_ok test, but obviously I am missing something. Does anyone have advice on how I can test my DBI->connect line successfully? Thanks!

    Read the article

  • How to setup and teardown temporary django db for unit testing?

    - by blokeley
    I would like to have a Python module containing some unit tests that I can pass to hg bisect --command. The unit tests test some functionality of a Django app, but I don't think I can use hg bisect --command manage.py test mytestapp, because mytestapp would have to be enabled in settings.py, and the edits to settings.py would be clobbered when hg bisect updates the working directory. Therefore, I would like to know if something like the following is the best way to go:

        import functools, os, sys, unittest

        sys.path.append(path_to_myproject)
        os.environ['DJANGO_SETTINGS_MODULE'] = 'myapp.settings'

        def with_test_db(func):
            """Decorator to set up and tear down a test db."""
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                try:
                    # Set up temporary django db
                    func(*args, **kwargs)
                finally:
                    pass  # Tear down temporary django db
            return wrapper

        class TestCase(unittest.TestCase):
            @with_test_db
            def test(self):
                # Do some tests using the temporary django db
                self.fail('Mark this revision as bad.')

        if '__main__' == __name__:
            unittest.main()

    I should be most grateful if you could advise either:

    1. if there is a simpler way, perhaps subclassing django.test.TestCase but not editing settings.py; or, if not,
    2. what the lines above that say "Set up temporary django db" and "Tear down temporary django db" should be.

    Read the article

  • Python 2.6 does not like appending to existing archives in zip files

    - by user313661
    Hello, in some Python unit tests of a program I'm working on, we use in-memory zip files for end-to-end tests. In setUp() we create a simple zip file, but in some tests we want to overwrite some archive members. For this we do zip.writestr(archive_name, zip.read(archive_name) + new_content). Something like:

        import zipfile
        from StringIO import StringIO

        def Foo():
            zfile = StringIO()
            zip = zipfile.ZipFile(zfile, 'a')
            zip.writestr("foo", "foo content")
            zip.writestr("bar", "bar content")
            zip.writestr("foo", zip.read("foo") + "some more foo content")
            print zip.read("bar")

        Foo()

    The problem is that this works fine in Python 2.4 and 2.5, but not in 2.6. In Python 2.6 it fails on the print line with:

        BadZipfile: File name in directory "bar" and header "foo" differ.

    It seems that it is reading the correct file, bar, but thinks it should be reading foo instead. I'm at a loss. What am I doing wrong? Is this not supported? I tried searching the web but could find no mention of similar problems. I read the zipfile documentation, but could not find anything (that I thought was) relevant, especially since I'm calling read() with the filename string. Any ideas? Thank you in advance!

    Read the article

  • White-box testing in Javascript - how to deal with privacy?

    - by Max Shawabkeh
    I'm writing unit tests for a module in a small JavaScript application. In order to keep the interface clean, some of the implementation details are closed over by an anonymous function (the usual JS pattern for privacy). However, while testing I need to access/mock/verify the private parts.

    Most of the tests I've written previously have been in Python, where there are no real private variables (members, identifiers, whatever you want to call them). One simply suggests privacy via a leading underscore for the users, and freely ignores it while testing the code. In statically typed OO languages, I suppose one could make private members accessible to tests by converting them to protected and subclassing the object to be tested. In JavaScript, the latter doesn't apply, while the former seems like bad practice.

    I could always fall back to black-box testing and simply check the final results. It's the simplest and cleanest approach, but unfortunately not really detailed enough for my needs. So, is there a standard way of keeping variables private while still retaining some backdoors for testing in JavaScript?

    Read the article

  • How can this Ambient Context become null?

    - by Mark Seemann
    Can anyone help me explain how TimeProvider.Current can become null in the following class?

        public abstract class TimeProvider
        {
            private static TimeProvider current = DefaultTimeProvider.Instance;

            public static TimeProvider Current
            {
                get { return TimeProvider.current; }
                set
                {
                    if (value == null)
                    {
                        throw new ArgumentNullException("value");
                    }
                    TimeProvider.current = value;
                }
            }

            public abstract DateTime UtcNow { get; }

            public static void ResetToDefault()
            {
                TimeProvider.current = DefaultTimeProvider.Instance;
            }
        }

    Observations:

    - All unit tests that directly reference TimeProvider also invoke ResetToDefault() in their fixture teardown.
    - There is no multithreaded code involved.
    - Once in a while, one of the unit tests fails because TimeProvider.Current is null (a NullReferenceException is thrown).
    - This only happens when I run the entire suite, not when I run a single unit test, suggesting to me that there is some subtle test interdependence going on.
    - It happens approximately once every five or six test runs.
    - When a failure occurs, it seems to occur in the first executed test that involves TimeProvider.Current.
    - More than one test can fail, but only one fails in a given test run.

    FWIW, here's the DefaultTimeProvider class as well:

        public class DefaultTimeProvider : TimeProvider
        {
            private readonly static DefaultTimeProvider instance = new DefaultTimeProvider();

            private DefaultTimeProvider() { }

            public override DateTime UtcNow
            {
                get { return DateTime.UtcNow; }
            }

            public static DefaultTimeProvider Instance
            {
                get { return DefaultTimeProvider.instance; }
            }
        }

    I suspect that there's some subtle interplay going on with static initialization, where the runtime is actually allowed to access TimeProvider.Current before all static initialization has finished, but I can't quite put my finger on it. Any help is appreciated.

    Read the article

  • Error with threads during automatic testing on TeamCity 5

    - by yeyeyerman
    Hello, I'm having problems executing the tests of the application I'm developing. All the tests execute normally with ReSharper and in NCover. However, the execution of one of these tests in TeamCity generates an error. This test initializes two objects: the object under test and a simulator of a real object. The two objects communicate over a serial link, reproducing the real scenario.

        ObjectSimulator r_simulator = new ObjectSimulator(...);
        ObjectDriver r_driver = new ObjectDriver(...);
        Assert.IsTrue(r_driver.Connect() == ErrorCode.Success);

    The simulator just does the following in its constructor:

        public class ObjectSimulator
        {
            ...
            public ObjectSimulator()
            {
                // serial port configuration
                m_port = new SerialPort();
                m_port.DataReceived += DataReceivedEvent;
            }
            ...
        }

    The main object has two threads: the main thread of the application and a timer to refresh a watchdog timer in the real object.

        public ErrorCode Connect()
        {
            ...
            StartSynchroTimer();
            Thread.Sleep(4); // to check if the timer is working properly
            ...
        }

    The problem comes from the Thread.Sleep() call, as everything works when I remove it. It seems like the ObjectSimulator also sleeps and doesn't receive the DataReceived event. How can I resolve this issue?

    Read the article

  • Ensure that all getter methods were called in a JUnit test

    - by Freiheit
    I have a class that uses XStream and is used as a transfer format for my application. I am writing tests for other classes that map this transfer format to and from a different messaging standard. I would like to ensure that all getters on my class are called within a test, so that if a new field is added, my test properly checks for it. A rough outline of the XStream class:

        @XStreamAlias("thing")
        public class Thing implements Serializable {
            private int id;
            private int someField;

            public int getId() { ... }
            public int getSomeField() { ... }
        }

    So now if I update that class to be:

        @XStreamAlias("thing")
        public class Thing implements Serializable {
            private int id;
            private int someField;
            private String newField;

            public int getId() { ... }
            public int getSomeField() { ... }
            public String getNewField() { ... }
        }

    I would want my test to fail, because the old tests are not calling getNewField(). The goal is to ensure that if new getters are added, we have some way of ensuring that the tests check them. Ideally, this would be contained entirely in the test and not require modifying the underlying Thing class. Any ideas? Thanks for looking!
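    One self-contained way to get that (a sketch, not from the original post; the verify helper and the inlined Thing stand-in are invented): funnel every getter assertion through a small recording helper, then reflect over Thing at the end of the test and fail on any zero-argument get* method that was never registered. Adding getNewField() to Thing then breaks the test until an assertion is written for it, with no change to Thing itself.

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertTrue;

        import java.lang.reflect.Method;
        import java.util.HashSet;
        import java.util.Set;

        import org.junit.Test;

        public class ThingMappingTest {

            // minimal stand-in for the real XStream class
            public static class Thing {
                public int getId() { return 5; }
                public int getSomeField() { return 7; }
            }

            private final Set<String> verified = new HashSet<String>();

            // every getter assertion goes through here, recording the getter's name
            private <T> T verify(String getterName, T value) {
                verified.add(getterName);
                return value;
            }

            @Test
            public void everyGetterIsCoveredByAnAssertion() {
                Thing thing = new Thing();
                assertEquals(5, (int) verify("getId", thing.getId()));
                assertEquals(7, (int) verify("getSomeField", thing.getSomeField()));

                // reflection sweep: any unregistered get* method fails the test
                for (Method m : Thing.class.getDeclaredMethods()) {
                    if (m.getName().startsWith("get") && m.getParameterTypes().length == 0) {
                        assertTrue("getter not asserted on: " + m.getName(),
                                verified.contains(m.getName()));
                    }
                }
            }
        }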

    Read the article
