Search Results

Search found 4935 results on 198 pages for 'organizational unit'.

  • Implementing Unit Testing on the iPhone

    - by james.ingham
    Hi. I've followed this tutorial to set up unit testing on my app, but I got a little stuck. At bullet point 8 the tutorial shows a screenshot of what I should be expecting when I build. However, that isn't what I get: the build fails with "Command /bin/sh failed with exit code 1" as well as the error message the unit test has created. When I expand the first error I get this:

      PhaseScriptExecution "Run Script" "build/3D Pool.build/Debug-iphonesimulator/LogicTests.build/Script-1A6BA6AE10F28F40008AC2A8.sh"
      cd "/Users/james/Desktop/FYP/3D Pool"
      setenv ACTION build
      setenv ALTERNATE_GROUP staff
      ...
      setenv XCODE_VERSION_MAJOR 0300
      setenv XCODE_VERSION_MINOR 0320
      setenv YACC /Developer/usr/bin/yacc
      /bin/sh -c "\"/Users/james/Desktop/FYP/3D Pool/build/3D Pool.build/Debug-iphonesimulator/LogicTests.build/Script-1A6BA6AE10F28F40008AC2A8.sh\""
      /Developer/Tools/RunPlatformUnitTests.include:412: note: Started tests for architectures 'i386'
      /Developer/Tools/RunPlatformUnitTests.include:419: note: Running tests for architecture 'i386' (GC OFF)
      objc[12589]: GC: forcing GC OFF because OBJC_DISABLE_GC is set
      Test Suite '/Users/james/Desktop/FYP/3D Pool/build/Debug-iphonesimulator/LogicTests.octest(Tests)' started at 2010-01-04 21:05:06 +0000
      Test Suite 'LogicTests' started at 2010-01-04 21:05:06 +0000
      Test Case '-[LogicTests testFail]' started.
      /Users/james/Desktop/FYP/3D Pool/LogicTests.m:17: error: -[LogicTests testFail] : Must fail to succeed.
      Test Case '-[LogicTests testFail]' failed (0.000 seconds).
      Test Suite 'LogicTests' finished at 2010-01-04 21:05:06 +0000.
      Executed 1 test, with 1 failure (0 unexpected) in 0.000 (0.000) seconds
      Test Suite '/Users/james/Desktop/FYP/3D Pool/build/Debug-iphonesimulator/LogicTests.octest(Tests)' finished at 2010-01-04 21:05:06 +0000.
      Executed 1 test, with 1 failure (0 unexpected) in 0.000 (0.002) seconds
      /Developer/Tools/RunPlatformUnitTests.include:448: error: Failed tests for architecture 'i386' (GC OFF)
      /Developer/Tools/RunPlatformUnitTests.include:462: note: Completed tests for architectures 'i386'
      Command /bin/sh failed with exit code 1

    Now this is very odd, because the tests are running (you can see my deliberate STFail firing), and if I add a different test that passes I get no errors, so the tests themselves are working fine. But why am I getting this extra build failure? It may also be worth noting that I get the same error with downloaded solutions/templates that should work out of the box. I'm guessing I've set something up wrong, but I've followed the tutorial to the letter. If anyone could help I'd be very grateful. Thanks.

    EDIT: According to this blog, this post and a few other websites, I'm not the only one getting this problem. It has behaved this way since the release of Xcode 3.2, assuming the Apple Dev Center documents and tutorials are pre-3.2 as well. Some say it's a known issue, whereas others think the change was intentional. I for one would like both the extended console output and the in-code messages, and I certainly do not like the "Command /bin/sh..." error; I would also have expected such a change to be documented. Hopefully it will be fixed soon.

    UPDATE: Here's confirmation that something changed with the release of Xcode 3.2.1: one screenshot is from my test build using 3.2.1, the other is from an older version (3.1.4), and the project was unchanged between the two.

    Read the article

  • Best current framework for unit testing EJB3 / JPA

    - by kennygrimm
    We're starting a new project using EJB 3 / JPA, mainly stateless session beans and batch jobs. I've used JUnit in the past on standard Java webapps and it seemed to work pretty well. In EJB 2, unit testing was a pain and required a running container such as JBoss to make the calls into. Now that we're moving to EJB 3 / JPA, I'd like to know what companies are using to write and run these tests. Are JUnit and JMock still considered relevant, or are there newer frameworks we should investigate?

    Read the article

  • Visual Studio 2010 and TFS 2008: Building unit test projects

    - by Peter
    Hi. We are currently taking VS 2010 for a test drive and so far we are a little stumped by how it just won't cooperate with our existing Team Foundation Server 2008. All our projects are still on .NET 3.5, but whenever we build a solution that contains a unit test project (which is automatically created targeting .NET 4.0), TFS won't build it, even though the .NET 4.0 framework is installed on the TFS 2008 machine. The error we're receiving is:

      [Any CPU/Release] c:\WINDOWS\Microsoft.NET\Framework\v3.5\Microsoft.Common.targets(0,0): warning MSB3245: Could not resolve this reference. Could not locate the assembly "Microsoft.VisualStudio.QualityTools.UnitTestFramework, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=MSIL". Check to make sure the assembly exists on disk. If this reference is required by your code, you may get compilation errors.

    As a temporary workaround we are now forced to remove all our test projects in order for our solutions to build.

    Read the article

  • Unit Testing iPhone - Linker Errors

    - by sliver
    I've followed the Unit Testing Applications guide from the iPhone development documentation. I followed all the steps and it worked with the test case from the documentation. But as soon as I changed the test case to test real code from my project, I ended up with linker errors: all classes used in the test case are reported as missing. I've already searched the internet and found that the Bundle Loader property must be set to "$(BUILT_PRODUCTS_DIR)/MyApplication.app/Contents/MacOS/MyApplication", but this also fails because the file could not be found. Any ideas what I have to do to tell the linker where to search for the missing files?

    Read the article

  • Some unit tests fail in automated Team Build task

    - by weenet
    I have an odd situation. I have a suite of unit tests that pass on my dev machine. They also pass on the build machine if run from Visual Studio. But five of them reliably fail during the automated build. There is nothing noteworthy about the ones that fail that I can see (and I've stared at them for a long time). Has anyone seen anything like this? Is there a way to see the test output in the Team Build log? All I get is Passed or Failed messages, but not the Assert message. Thanks!

    Read the article

  • Where to place unit test project

    - by Karsten
    I'm thinking about where to put the unit/integration test project. I follow the one-test-project-per-project convention. I can think of three approaches that all seem good to me, which makes it kind of hard to choose :-)

      1. The test project is put in a Tests subfolder of the project it tests.
      2. The test project is put next to the project it tests, in a "Project".Tests folder. I believe this is what Roy Osherove recommends.
      3. All test projects are put in a subfolder in the root, e.g. \Tests\"Project".Tests.

    Something else? Which do you choose, and why?
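    For what it's worth, a sketch of how the second option tends to look on disk, with purely hypothetical project names:

      MySolution.sln
      Billing\Billing.csproj
      Billing.Tests\Billing.Tests.csproj
      Shipping\Shipping.csproj
      Shipping.Tests\Shipping.Tests.csproj

    Keeping each test project next to the project it exercises makes the pairing obvious in the solution tree, while option 3 keeps the production tree free of test code; either works, so consistency matters more than the choice itself.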

    Read the article

  • I'm going to write 'Unit of Work', please help me find all the pitfalls

    - by o..o
    Hi everybody. I'm going to write my own DAL in C#. I decided to use the 'Unit of Work' pattern (referred to below as UoW), with the request as its scope and an identity map stored in HttpContext.Items. My question is about implementing the CRUD methods: how and where are they implemented? Are they implemented in every single business class (as in the Active Record pattern), or somehow in the UoW class itself (and if so, how)? I also suppose that the scope should cover not just the request but also the DB connection. But how? Should I open the connection at the start of the request and close it when the UoW is disposed? Every piece of advice is strongly appreciated, especially your real-world experiences. Thank you all :)
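    To make the shape of the pattern concrete, here is a minimal sketch in Python rather than C# (so the HttpContext plumbing is left out); every name in it is illustrative, and it assumes a connection object exposing insert/update/delete/close. The point it tries to show is that the CRUD calls live in the UoW (or in mappers the UoW delegates to) rather than in each business class, and that the connection shares the UoW's request scope:

      # Minimal Unit of Work sketch (illustrative names, hypothetical connection API).
      class UnitOfWork:
          """Tracks changes during one request and flushes them in a single commit."""

          def __init__(self, connection):
              self.connection = connection              # opened once per scope
              self.new, self.dirty, self.removed = [], [], []
              self.identity_map = {}                    # id -> already-loaded entity

          def register_new(self, entity):
              self.new.append(entity)

          def register_dirty(self, entity):
              if entity not in self.new and entity not in self.dirty:
                  self.dirty.append(entity)

          def register_removed(self, entity):
              self.removed.append(entity)

          def commit(self):
              # The CRUD calls live here, not in the business classes.
              for entity in self.new:
                  self.connection.insert(entity)
              for entity in self.dirty:
                  self.connection.update(entity)
              for entity in self.removed:
                  self.connection.delete(entity)

          # Scope handling: commit and close once, when the request ends.
          def __enter__(self):
              return self

          def __exit__(self, exc_type, exc, tb):
              if exc_type is None:
                  self.commit()
              self.connection.close()

    In C# terms, the __exit__ method plays the role of Dispose at the end of the request: open the connection when the UoW is created, commit and close when the request finishes.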

    Read the article

  • Loading fixtures in Django unit tests

    - by loder
    I'm trying to start writing unit tests for Django and I have some questions about fixtures. I made a fixture of my whole project DB (not of a single app) and I want to load it for each test, because loading only the fixture for a single app won't be enough. I'd like to have the fixture stored in /proj_folder/fixtures/proj_fixture.json. I've set FIXTURE_DIRS = ('/fixtures/',) in my settings.py. Then in my test case I try fixtures = ['proj_fixture.json'], but my fixtures don't load. How can this be solved? How do I add the place to search for fixtures? In general, is it OK to load the fixture for the whole test DB for each test in each app (if it's quite small)? Thanks!
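    The likely culprit is that FIXTURE_DIRS expects filesystem paths, so '/fixtures/' points at the root of the disk rather than at the project folder. A hedged sketch of a setup that should let the project-wide fixture load (the paths and the model used in the check are illustrative):

      # settings.py -- build an absolute path to <project>/fixtures/
      import os

      PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))
      FIXTURE_DIRS = (os.path.join(PROJECT_ROOT, 'fixtures'),)

      # tests.py -- the fixture is then referenced by file name and reloaded per test
      from django.contrib.auth.models import User
      from django.test import TestCase

      class WholeProjectFixtureTest(TestCase):
          fixtures = ['proj_fixture.json']

          def test_fixture_loaded(self):
              # assumes the dump contains at least one auth user (illustrative check)
              self.assertTrue(User.objects.count() > 0)

    Loading a small project-wide fixture for every test is workable; it just gets slower as the dump grows, at which point smaller per-app fixtures become attractive again.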

    Read the article

  • ASP.NET MVC unit test controller with HttpContext

    - by user299592
    I am trying to write a unit test for one of my controllers to verify that a view is returned properly, but this controller has a base controller that accesses HttpContext.Current.Session. Every time I create a new instance of my controller, the base controller constructor is called and the test fails with a null reference exception on HttpContext.Current.Session. Here is the code:

      public class BaseController : Controller
      {
          protected BaseController()
          {
              ViewData["UserID"] = HttpContext.Current.Session["UserID"];
          }
      }

      public class IndexController : BaseController
      {
          public ActionResult Index()
          {
              return View("Index.aspx");
          }
      }

      [TestMethod]
      public void Retrieve_IndexTest()
      {
          // Arrange
          const string expectedViewName = "Index";
          IndexController controller = new IndexController();

          // Act
          var result = controller.Index() as ViewResult;

          // Assert
          Assert.IsNotNull(result, "Should have returned a ViewResult");
          Assert.AreEqual(expectedViewName, result.ViewName, "View name should have been {0}", expectedViewName);
      }

    Any ideas on how to mock the Session that is accessed in the base controller, so the test in the derived controller will run?

    Read the article

  • Easiest way of unit testing C code with Python

    - by Jon Mills
    I've got a pile of C code that I'd like to unit test using Python's unittest library (on Windows), but I'm trying to work out the best way of interfacing with the C code so that Python can execute it (and get the results back). Does anybody have any experience of the easiest way to do it? Some ideas:

      - Wrapping the code as a Python C extension using the Python API
      - Wrapping the C code using SWIG
      - Adding a DLL wrapper to the C code and loading it into Python using ctypes
      - Adding a small XML-RPC server to the C code and calling it using xmlrpclib (yes, I know this seems a bit far out!)

    Is there a canonical way of doing this? I'm going to be doing it quite a lot, with different C modules, so I'd like to find the approach that takes the least effort.
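    Of those options, the ctypes route usually involves the least ceremony once the C code builds as a DLL. A hedged sketch, assuming a hypothetical mymath.dll that exports int add(int a, int b):

      # Hypothetical example: mymath.dll exporting  int add(int a, int b);
      import ctypes
      import unittest

      lib = ctypes.CDLL("mymath.dll")      # use ctypes.WinDLL instead for stdcall exports
      lib.add.argtypes = (ctypes.c_int, ctypes.c_int)
      lib.add.restype = ctypes.c_int

      class AddTests(unittest.TestCase):
          def test_adds_small_numbers(self):
              self.assertEqual(lib.add(2, 3), 5)

          def test_adds_negative_numbers(self):
              self.assertEqual(lib.add(-2, -3), -5)

      if __name__ == "__main__":
          unittest.main()

    The main cost is declaring argtypes/restype for anything beyond ints and simple pointers; if the C API is large or struct-heavy, SWIG or a hand-written extension starts to pay off.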

    Read the article

  • Unit testing several implementations of the same trait/interface

    - by paradigmatic
    I program mostly in Scala and Java, using ScalaTest in Scala and JUnit for unit testing. I would like to apply the very same tests to several implementations of the same interface/trait. The idea is to verify that the interface contract is enforced and to check the Liskov substitution principle. For instance, when testing implementations of lists, the tests could include:

      - An instance should be empty if and only if it has zero size.
      - After calling clear, the size should be zero.
      - Adding an element in the middle of a list increments the index of the elements after it by one.
      - etc.

    What are the best practices?
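    The common shape of the answer, regardless of framework, is an abstract contract test that each implementation's test class completes with a factory method. A sketch of that idea in Python (the question is about Scala/Java, so treat the names and framework as illustrative only):

      import unittest

      class ListContractTests:
          """Shared contract tests; concrete subclasses supply make_list()."""

          def make_list(self):
              raise NotImplementedError

          def test_empty_iff_size_zero(self):
              lst = self.make_list()
              self.assertEqual(len(lst) == 0, not lst)

          def test_clear_resets_size_to_zero(self):
              lst = self.make_list()
              lst.append(1)
              lst.clear()
              self.assertEqual(len(lst), 0)

      # One concrete test class per implementation under test.
      class BuiltinListTests(ListContractTests, unittest.TestCase):
          def make_list(self):
              return []

      if __name__ == "__main__":
          unittest.main()

    Because the mixin does not itself extend the test base class, only the concrete per-implementation classes are collected and run. In JUnit the same effect comes from an abstract test class with an abstract factory method; in ScalaTest, from shared-test traits.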

    Read the article

  • Best practices for file system dependencies in unit/integration tests

    - by Olvagor
    I just started writing tests for a lot of code. There's a bunch of classes with dependencies on the file system, that is, they read CSV files, read/write configuration files and so on. Currently the test files are stored in the test directory of the project (it's a Maven 2 project), but for several reasons this directory doesn't always exist, so the tests fail. Do you know best practices for coping with file system dependencies in unit/integration tests? Edit: I'm not looking for an answer to the specific problem described above; that was just an example. I'd prefer general recommendations on how to handle dependencies on the file system, databases, etc.
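    Two tactics come up again and again, sketched here in Python purely as an illustration (the project in the question is Java/Maven): have each test create what it needs in a throwaway directory, and design the code under test to accept streams so pure unit tests never touch the disk. All names below are made up:

      import csv
      import io
      import os
      import shutil
      import tempfile
      import unittest

      def read_scores(stream):
          # Code under test depends on a file-like object, not on a path.
          return {row[0]: int(row[1]) for row in csv.reader(stream)}

      class CsvReadingTests(unittest.TestCase):
          def setUp(self):
              self.tmpdir = tempfile.mkdtemp()      # guaranteed to exist for this test

          def tearDown(self):
              shutil.rmtree(self.tmpdir)

          def test_reads_csv_from_disk(self):
              path = os.path.join(self.tmpdir, "scores.csv")
              with open(path, "w", newline="") as f:
                  f.write("alice,3\nbob,5\n")
              with open(path, newline="") as f:
                  self.assertEqual(read_scores(f), {"alice": 3, "bob": 5})

          def test_reads_csv_without_touching_disk(self):
              self.assertEqual(read_scores(io.StringIO("alice,3\nbob,5\n")),
                               {"alice": 3, "bob": 5})

      if __name__ == "__main__":
          unittest.main()

    The Java equivalents would be JUnit's temporary-folder support and Reader/InputStream parameters instead of File paths, but the principle is the same: the test provisions its own files, or avoids files entirely.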

    Read the article

  • Unit Testing in QTestLib - running single test / tests in class / all tests

    - by Dave
    I'm just starting to use QTestLib. I have gone through the manual and tutorial. Although I understand how to create tests, I'm just not getting how to make those tests convenient to run. My unit test background is NUnit and MSTest. In those environments it was trivial (using a GUI, at least) to alternate between running a single test, all tests in a single test class, or all tests in the entire project, just by clicking the right button. All I'm seeing in QTestLib is that you either use the QTEST_MAIN macro to run the tests in a single class, then compile and test each file separately, or use QTest::qExec() in main() to define which objects to test, then manually change that and recompile when you want to add or remove test classes. I'm sure I'm missing something. I'd like to be able to easily:

      - run a single test method,
      - run the tests in an entire class, or
      - run all tests,

    with any of those calling the appropriate setup/teardown functions.

    Read the article

  • Python unit-testing with nose: Making sequential tests

    - by cool-RR
    I am just learning how to do unit testing. I'm on Python / nose / Wing IDE. (The project I'm writing tests for is a simulation framework; among other things, it lets you run simulations both synchronously and asynchronously, and the results should be the same in both modes.) The thing is, I want some of my tests to use simulation results that were created in other tests. For example, synchronous_test calculates a certain simulation in synchronous mode, but then I want to calculate it in asynchronous mode and check that the results come out the same. How do I structure this? Do I put them all in one test function, or make a separate asynchronous_test? Do I pass these objects from one test function to another? Also, keep in mind that all these tests will run through a test generator, so I can run the tests for each of the simulation packages included with my program.
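    One way to keep the tests independent is to avoid passing results between separate test functions (which makes them order-dependent) and instead compute both results inside the same check, which combines nicely with a nose test generator. A hedged sketch; run_simulation and the package names are stand-ins for the real framework:

      from nose.tools import assert_equal

      PACKAGES = ["life", "prisoner"]          # illustrative simulation packages

      def run_simulation(package, asynchronous):
          # Stand-in for the real framework call.
          return {"package": package, "steps": 10}

      def check_sync_equals_async(package):
          sync_result = run_simulation(package, asynchronous=False)
          async_result = run_simulation(package, asynchronous=True)
          assert_equal(sync_result, async_result)

      def test_sync_equals_async():
          # nose test generator: yields one comparison case per simulation package.
          for package in PACKAGES:
              yield check_sync_equals_async, package

    If the simulations are expensive, the synchronous result can be computed once per package in a module- or class-level setup and cached there, which keeps the comparison in one place without tests feeding each other.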

    Read the article

  • Xcode 3.2 does not mark unit test assert failures in the editor

    - by Cliff
    I've been off in Java land for about a month, and now, upon returning to Xcode, I feel lost. I upgraded first to 3.1.2 and then recently to 3.2, and I also got a new Mac with Snow Leopard, so I'm not exactly sure when the problem surfaced. I just know that I used to get little red bubbles in my unit test file next to the failing asserts, and that no longer seems to happen. Is there a way to restore this? I've been trying to use Apple's own SenTesting framework instead of Google Toolbox for Mac like I used to. Should I revert to Google Toolbox? Does anyone have an answer?

    Read the article

  • Re-using unit tests for models using STI

    - by TenJack
    I have a number of models that use STI, and I would like to use the same unit test for each model. For example, I have:

      class RegularList < List
      class OtherList < List

      class ListTest < ActiveSupport::TestCase
        fixtures :lists

        def test_word_count
          list = lists(:regular_list)
          assert_equal(0, list.count)
        end
      end

    How would I go about reusing the test_word_count test for the OtherList model? The real test is much longer, so I would rather not retype it for each model. Thanks.

    Read the article

  • How to set up Flex Unit 4

    - by macke
    Does anyone know of any guides or documentation on how to set this up with the new Flex Builder 4 beta? I've been pulling my hair out all day long: none of my tests are executed and I can't for the life of me understand what's wrong. There are no errors at all. It's as if the metadata tags are not recognized. Is there some special compiler argument needed to recognize them? I seem to remember that this is not necessary as long as the included swc files were compiled with said arguments. I'm using Beta 1 of FlexUnit 4, which can be found here: http://opensource.adobe.com/wiki/display/flexunit/FlexUnit+4+feature+overview I'm actually using the very same code that is included with that package to run the tests, but no tests are executed, neither mine nor the ones included in the FlexUnit 4 package. The lack of documentation certainly doesn't help much...

    Read the article

  • Typical size of unit tests compared to production code

    - by Frank Schwieterman
    I'm curious what a reasonable / typical value is for the ratio of test code to production code when people are doing TDD. Looking at one component, I have 530 lines of test code for 130 lines of production code. Another component has 1000 lines of test code for 360 lines of production code. So the unit tests require roughly 3x to 4x as much code. This is JavaScript code. I don't have much tested C# code handy, but I think on another project I was seeing 2x to 3x as much test code as production code. It would seem to me that a lower value, assuming the tests are sufficient, would reflect higher-quality tests. Pure speculation; I just wonder what ratios other people see. I know lines of code is a loose metric, but since I code in the same style for both test and production code (same spacing format, same amount of comments, etc.) the values are comparable.

    Read the article

  • Robust unit-testing of HTML in PHP

    - by asbja
    I'm adding unit tests to an older PHP codebase at work. I will be testing and then rewriting a lot of HTML-generation code, and currently I'm just testing whether the generated strings are identical to the expected string, like so (using PHPUnit):

      public function testConntype_select()
      {
          $this->assertEquals(
              '<select><option value="blabla">Some text</option></select>',
              conntype_select(1) // A value from the test dataset.
          );
      }

    This approach has the downside that attribute ordering, whitespace and a lot of other irrelevant details are tested as well. I'm wondering if there are any better ways to do this, for example good and easy ways to compare the generated DOM trees. I found very similar questions for Ruby, but couldn't find anything for PHP.
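    Whatever the PHP-side answer turns out to be, the underlying idea is to parse both fragments and compare the resulting trees rather than the raw strings, so attribute order and insignificant whitespace stop mattering. Here is that idea sketched in Python as an illustration only, not as a PHP solution:

      import unittest
      import xml.etree.ElementTree as ET

      def trees_equal(a, b):
          # Compare tag, attributes (order-insensitive dicts), trimmed text and children.
          if a.tag != b.tag or a.attrib != b.attrib:
              return False
          if (a.text or "").strip() != (b.text or "").strip():
              return False
          if len(a) != len(b):
              return False
          return all(trees_equal(x, y) for x, y in zip(a, b))

      class SelectMarkupTests(unittest.TestCase):
          def assert_same_dom(self, expected, actual):
              self.assertTrue(trees_equal(ET.fromstring(expected), ET.fromstring(actual)))

          def test_attribute_order_is_ignored(self):
              self.assert_same_dom(
                  '<select><option value="blabla" class="x">Some text</option></select>',
                  '<select><option class="x" value="blabla">Some text</option></select>',
              )

      if __name__ == "__main__":
          unittest.main()

    In PHP the same comparison can presumably be built on DOMDocument plus a recursive node walk, but treat that as a direction to verify rather than a ready-made recipe.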

    Read the article

  • Unit test service layer - NUnit, NHibernate

    - by csetzkorn
    Hi. I would like to unit test a dependent service layer, one that performs CRUD operations against a real database rather than against mocks, using NUnit. I know this is probably bad practice, but I want to give it a try anyway, even if the tests have to run overnight. My data is persisted using NHibernate, and I have implemented a little library that bootstraps the database, which I could use in a [SetUp] method. I am just wondering if someone has done something similar and what the fastest method for bootstrapping the database is. I am using something like this to establish the DB schema:

      var cfg = new Configuration();
      cfg.Configure();
      cfg.AddAssembly("Bla");
      new SchemaExport(cfg).Execute(false, true, false);

    After that I populate some lookup tables from some Excel tables. Any feedback would be very much appreciated. Thanks, Christian
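    The usual trick for keeping a "real CRUD, no mocks" suite fast is to export the schema into an in-memory or throwaway database in [SetUp] rather than into the full server. The same idea looks like this with Python and SQLAlchemy, shown only as an analogy to the NHibernate/NUnit setup above (all names illustrative):

      import unittest
      from sqlalchemy import Column, Integer, String, create_engine
      from sqlalchemy.orm import Session, declarative_base

      Base = declarative_base()

      class Customer(Base):
          __tablename__ = "customers"
          id = Column(Integer, primary_key=True)
          name = Column(String(50))

      class CustomerServiceTests(unittest.TestCase):
          def setUp(self):
              # Equivalent of running SchemaExport before every test, but against
              # an in-memory SQLite database, so it takes milliseconds.
              self.engine = create_engine("sqlite:///:memory:")
              Base.metadata.create_all(self.engine)
              self.session = Session(self.engine)

          def tearDown(self):
              self.session.close()

          def test_insert_and_read_back(self):
              self.session.add(Customer(name="Christian"))
              self.session.commit()
              self.assertEqual(self.session.query(Customer).count(), 1)

      if __name__ == "__main__":
          unittest.main()

    NHibernate supports the same pattern with an in-memory SQLite driver, provided the mappings are compatible; the lookup-table load from Excel would then run once per fixture rather than per test.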

    Read the article

  • Should I unit test my JavaScript?

    - by Joseph Silvashy
    I'm curious whether it would be valuable. I'd like to start using QUnit, but I really don't know where to get started. Actually, I'm not going to lie: I'm new to testing in general, not just with JS. I'm hoping to get some tips on how to start unit testing an app that already has a fair amount of JavaScript (OK, about 500 lines, not huge, but enough to make me wonder if I have regressions that go unnoticed). How would you recommend getting started, and where would I write my tests? (For example, it's a Rails app; where is a logical place to keep my JS tests? It would be cool if they could go in the /test directory, but that's outside the public directory and thus not possible... err, is it?)

    Read the article
