Search Results

Search found 22986 results on 920 pages for 'allocation unit size'.

Page 70 of 920

  • Array help needed for unit conversion application

    - by Manolis
    I have a project to do in Visual Basic. My problem is that the outcome is always wrong (e.g. instead of 2011 it gives 2000). I cannot select Inch (1) or Feet (3) as the desired unit; that gives an Infinity error. And if I use Inch (1) as both the original and desired unit, the outcome is "Not a Number". Here's the code I have made so far. The project is about arrays. Any help appreciated.

        Public Class Form1
            Private Sub btnConvert_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles btnConvert.Click
                Dim original(9) As Long
                Dim desired(9) As Long
                Dim a As Integer
                Dim o As Integer
                Dim d As Integer
                Dim inch As Long, fathom As Long, furlong As Long, kilometer As Long
                Dim meter As Long, miles As Long, rod As Long, yard As Long, feet As Long
                a = Val(Input3.Text)
                o = Val(Input1.Text)
                d = Val(Input2.Text)
                inch& = 0.0833
                rod& = 16.5
                yard& = 3
                furlong& = 660
                meter& = 3.28155
                kilometer& = 3281.5
                fathom& = 6
                miles& = 5280
                original(1) = inch
                original(2) = fathom
                original(3) = feet
                original(4) = furlong
                original(5) = kilometer
                original(6) = meter
                original(7) = miles
                original(8) = rod
                original(9) = yard
                desired(1) = inch
                desired(2) = fathom
                desired(3) = feet
                desired(4) = furlong
                desired(5) = kilometer
                desired(6) = meter
                desired(7) = miles
                desired(8) = rod
                desired(9) = yard
                If o < 1 Or o > 9 Or d < 1 Or d > 9 Then
                    MessageBox.Show("Units must range from 1-9.", "Error", _
                        MessageBoxButtons.OK, _
                        MessageBoxIcon.Information)
                    Return
                End If
                Output.Text = (a * original(o)) / desired(d)
            End Sub
        End Class
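    A likely culprit (not confirmed in the question) is that the conversion factors are declared As Long, an integer type, so values such as 0.0833 are truncated to 0 and feet is never assigned at all; the floating-point division then yields Infinity or NaN. Below is a minimal sketch of the same factor-table idea in C# (the original is VB.NET, where declaring the factors As Double plays the same role); the UnitConverter name and the exact factor values are illustrative assumptions.

        using System;

        // Sketch: convert via factors expressed in a common base unit (feet),
        // stored as doubles so fractional factors such as 1/12 are not truncated.
        class UnitConverter
        {
            // Index: 1=inch, 2=fathom, 3=feet, 4=furlong, 5=kilometer, 6=meter, 7=mile, 8=rod, 9=yard
            static readonly double[] FeetPerUnit =
                { 0, 1.0 / 12.0, 6, 1, 660, 3280.84, 3.28084, 5280, 16.5, 3 };

            static double Convert(double amount, int originalUnit, int desiredUnit)
            {
                if (originalUnit < 1 || originalUnit > 9 || desiredUnit < 1 || desiredUnit > 9)
                    throw new ArgumentOutOfRangeException("unit", "Units must range from 1-9.");
                // amount -> feet -> desired unit
                return amount * FeetPerUnit[originalUnit] / FeetPerUnit[desiredUnit];
            }

            static void Main()
            {
                Console.WriteLine(Convert(12, 1, 3)); // 12 inches = 1 foot
                Console.WriteLine(Convert(1, 7, 9));  // 1 mile = 1760 yards
            }
        }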

    Read the article

  • How to not pass around the container when using IoC in Winforms

    - by L2Type
    I'm new to the world of IoC and am having a problem implementing it in a Winforms application. I have an extremely basic Winforms application that uses MVC: one controller does all the work, plus a working dialog (with its own controller). I load all my classes into my IoC container in program.cs and create the main form controller using the container. But this is where I am having problems: I only want to create the working dialog controller when it's used, and inside a using statement. At first I passed in the container, but I've read this is bad practice; moreover, the container is static and I want to unit test this class. So how do you create classes in a unit-test-friendly way without passing in the container? I was considering the abstract factory pattern, but that alone would solve my problem without using the IoC. I'm not using any famous framework; I borrowed a basic one from this blog post http://www.kenegozi.com/Blog/2008/01/17/its-my-turn-to-build-an-ioc-container-in-15-minutes-and-33-lines.aspx How do I do this with IoC? Is this the wrong use for IoC?
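    One common answer (a sketch, not from the question) is to register a small factory abstraction in the container and inject that, so only the composition root in program.cs ever touches the container; the interface and class names below, such as IWorkingDialogControllerFactory, are invented for illustration.

        using System;

        // The main controller depends on this small abstraction instead of the container.
        public interface IWorkingDialogControllerFactory
        {
            WorkingDialogController Create();
        }

        public class WorkingDialogController : IDisposable
        {
            public void Run() { /* show the dialog, do the work */ }
            public void Dispose() { /* clean up */ }
        }

        // Registered in the container at the composition root (program.cs); it is the
        // only place that knows how the dialog controller is built.
        public class WorkingDialogControllerFactory : IWorkingDialogControllerFactory
        {
            public WorkingDialogController Create() { return new WorkingDialogController(); }
        }

        public class MainFormController
        {
            private readonly IWorkingDialogControllerFactory _dialogFactory;

            public MainFormController(IWorkingDialogControllerFactory dialogFactory)
            {
                _dialogFactory = dialogFactory;
            }

            public void DoLongRunningWork()
            {
                // Created only when needed and disposed when done.
                using (var dialog = _dialogFactory.Create())
                {
                    dialog.Run();
                }
            }
        }

    In a unit test the main form controller is then constructed with a stub factory, so neither the test nor the controller ever needs the container.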

    Read the article

  • Why not lump all service classes into a Factory method (instead of injecting interfaces)?

    - by Andrew
    We are building an ASP.NET project and encapsulating all of our business logic in service classes. Some is in the domain objects, but generally those are rather anemic (due to the ORM we are using; that won't change). To better enable unit testing, we define interfaces for each service and utilize D.I. E.g. here are a couple of the interfaces: IEmployeeService, IDepartmentService, IOrderService, ... All of the methods in these services are basically groups of tasks, and the classes contain no private member variables (other than references to the dependent services). Before we worried about unit testing, we'd just declare all these classes as static and have them call each other directly. Now we'll set up the class like this if the service depends on other services:

        public class EmployeeService : IEmployeeService
        {
            private readonly IOrderService _orderSvc;
            private readonly IDepartmentService _deptSvc;
            private readonly IEmployeeRepository _empRep;

            public EmployeeService(IOrderService orderSvc, IDepartmentService deptSvc, IEmployeeRepository empRep)
            {
                _orderSvc = orderSvc;
                _deptSvc = deptSvc;
                _empRep = empRep;
            }

            // methods down here
        }

    This really isn't usually a problem, but I wonder: why not set up a factory class that we pass around instead? I.e.

        public class ServiceFactory
        {
            virtual IEmployeeService GetEmployeeService();
            virtual IDepartmentService GetDepartmentService();
            virtual IOrderService GetOrderService();
        }

    Then instead of calling _orderSvc.CalcOrderTotal(orderId) we'd call _svcFactory.GetOrderService().CalcOrderTotal(orderId). What's the downfall of this method? It's still testable, it still allows us to use D.I. (and handle external dependencies like database contexts and e-mail senders via D.I. within and outside the factory), and it eliminates a lot of D.I. setup and consolidates dependencies more. Thanks for your thoughts!
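    For what it's worth, the trade-off often shows up in test setup: with constructor-injected interfaces a test mocks only what the class uses, whereas with a passed-around factory the test also has to stub the factory and every service resolved through it, and the class's real dependencies are no longer visible from its constructor. A rough Moq-flavoured sketch of that extra setup, assuming a hypothetical EmployeeService constructor that takes the factory (not shown in the question):

        using Moq;

        public class EmployeeServiceTests
        {
            public void CalcOrderTotal_UsesOrderService()
            {
                // Factory-style arrangement: stub the factory *and* the service it hands out
                // (the factory's methods must stay virtual for Moq to override them).
                var orderSvc = new Mock<IOrderService>();
                var factory = new Mock<ServiceFactory>();
                factory.Setup(f => f.GetOrderService()).Returns(orderSvc.Object);

                var employeeService = new EmployeeService(factory.Object);  // hypothetical factory-taking ctor

                // ... act, then verify expectations on orderSvc ...

                // Interface-style arrangement, for comparison: dependencies are explicit in the ctor.
                // var employeeService2 = new EmployeeService(orderSvc.Object, deptSvc.Object, empRep.Object);
            }
        }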

    Read the article

  • Using Moq callbacks correctly according to AAA

    - by Hadi Eskandari
    I've created a unit test that tests interactions on my ViewModel class in a Silverlight application. To be able to do this test, I'm mocking the service interface injected into the ViewModel, using the Moq framework. To verify that the bound object in the ViewModel is converted properly, I've used a callback:

        [Test]
        public void SaveProposal_Will_Map_Proposal_To_WebService_Parameter()
        {
            var vm = CreateNewCampaignViewModel();
            var proposal = CreateNewProposal(1, "New Proposal");
            Services.Setup(x => x.SaveProposalAsync(It.IsAny<saveProposalParam>())).Callback((saveProposalParam p) =>
            {
                Assert.That(p.plainProposal, Is.Not.Null);
                Assert.That(p.plainProposal.POrderItem.orderItemId, Is.EqualTo(1));
                Assert.That(p.plainProposal.POrderItem.orderName, Is.EqualTo("New Proposal"));
            });
            proposal.State = ObjectStates.Added;
            vm.CurrentProposal = proposal;
            vm.Save();
        }

    It is working fine, but if you've noticed, with this mechanism the Assert and Act parts of the unit test have switched places (the Assert is set up before the Act). Is there a better way to do this, while preserving the correct AAA order?
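    A common way to keep the AAA order is to have the callback only capture the argument and to assert on the captured value after the Act; a minimal rearrangement of the test above, assuming the same Moq setup and types:

        [Test]
        public void SaveProposal_Will_Map_Proposal_To_WebService_Parameter()
        {
            // Arrange
            var vm = CreateNewCampaignViewModel();
            var proposal = CreateNewProposal(1, "New Proposal");
            saveProposalParam captured = null;
            Services.Setup(x => x.SaveProposalAsync(It.IsAny<saveProposalParam>()))
                    .Callback((saveProposalParam p) => captured = p);   // capture only, no asserts here

            // Act
            proposal.State = ObjectStates.Added;
            vm.CurrentProposal = proposal;
            vm.Save();

            // Assert
            Assert.That(captured, Is.Not.Null);
            Assert.That(captured.plainProposal.POrderItem.orderItemId, Is.EqualTo(1));
            Assert.That(captured.plainProposal.POrderItem.orderName, Is.EqualTo("New Proposal"));
        }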

    Read the article

  • Problem while executing test case in VS2008 test project

    - by sukumar
    Hi all, here is my situation: I have developed a test project in Visual Studio 2008 to test my target project. I was getting the following exception when I ran a test case on my PC:

        System.IO.FileNotFoundException: The specified module could not be found. (Exception from HRESULT: 0x8007007E)
            at System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection)
            at System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection)
            at System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection)
            at System.Reflection.Assembly.InternalLoadFrom(String assemblyFile, Evidence securityEvidence, Byte[] hashValue, AssemblyHashAlgorithm hashAlgorithm, Boolean forIntrospection, StackCrawlMark& stackMark)
            at System.Reflection.Assembly.LoadFrom(String assemblyFile)
            at Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestExecuter.GetType(UnitTestElement unitTest, String type)
            at Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestExecuter.ResolveMethods()

    But the same project runs successfully on my colleague's PC. As per my understanding, System.IO.FileNotFoundException occurs when DLLs are missing, so I checked with Dependency Walker to trace the missing DLLs. Dependency Walker reported the following: 1) MFC90D.dll 2) msvcr90d.dll 3) msvcp90d.dll. I copied these DLLs to C:\windows\system32 from the Microsoft Visual Studio 9.0 directory and ran Dependency Walker again; this time it is able to open the given test project DLL with 0 errors. Even then, the same exception comes up when I run the test. I'm fed up with this. Can anyone tell me why it behaves as PC-dependent? Is there anything I am still missing? Any suggestion would be helpful. Thanks in advance, Sukumar

    Read the article

  • simpletest - Why does setReturnValue() seem to change behaviour depending on whether the test is run in isolation?

    - by JW
    I am using SimpleTest version 1.0.1 for a unit test. I create a new mock object within a test method and on it I do: $MockDbAdaptor->setReturnValue('query',1); Now, when I run this as a standalone unit test, my tested object is happy to see 1 returned when query() is called on the mock db adaptor. However, when this exact same test is run as part of my 'all_tests' TestSuite, the test fails. This happens because a call to the mock's query() method does not appear to return any value, thus causing my test subject to complain and trigger an unexpected exception that fails the test. So the behaviour of setReturnValue() seems to change depending on whether the test is run in isolation or not. I can get it to work in both standalone and TestSuite contexts by using this instead: $MockDbAdaptor->setReturnValueAt(0,'query',1); So my immediate problem can be fixed, but it feels like a hack. If I create a new mock within a test method, why is the setReturnValue() behaviour affected by the context in which the test class instance is run? It feels like a bug.

    Read the article

  • PHPUnit - multiple stubs of same class

    - by keithjgrant
    I'm building unit tests for class Foo, and I'm fairly new to unit testing. A key component of my class is an instance of BarCollection which contains a number of Bar objects. One method in Foo iterates through the collection and calls a couple methods on each Bar object in the collection. I want to use stub objects to generate a series of responses for my test class. How do I make the Bar stub class return different values as I iterate? I'm trying to do something along these lines: $stubs = array(); foreach ($array as $value) { $barStub->expects($this->any()) ->method('GetValue')) ->will($this->returnValue($value)); $stubs[] = $barStub; } // populate stubs into `Foo` // assert results from `Foo->someMethod()` So Foo->someMethod() will produce data based on the results it receives from the Bar objects. But this gives me the following error whenever the array is longer than one: There was 1 failure: 1) testMyTest(FooTest) with data set #2 (array(0.5, 0.5)) Expectation failed for method name is equal to <string:GetValue> when invoked zero or more times. Mocked method does not exist. /usr/share/php/PHPUnit/Framework/MockObject/Mock.php(193) : eval()'d code:25 One thought I had was to use ->will($this->returnCallback()) to invoke a callback method, but I don't know how to indicate to the callback which Bar object is making the call (and consequently what response to give). Another idea is to use the onConsecutiveCalls() method, or something like it, to tell my stub to return 1 the first time, 2 the second time, etc, but I'm not sure exactly how to do this. I'm also concerned that if my class ever does anything other than ordered iteration on the collection, I won't have a way to test it.

    Read the article

  • Convert a Unit Vector to a Quaternion

    - by Hmm
    So I'm very new to quaternions, but I understand the basics of how to manipulate stuff with them. What I'm currently trying to do is compare a known quaternion to two absolute points in space. I'm hoping what I can do is simply convert the points into a second quaternion, giving me an easy way to compare the two. What I've done so far is to turn the two points into a unit vector. From there I was hoping I could plug the i, j, k components directly into the imaginary portion of the quaternion, with a scalar of zero. Then I could multiply one quaternion by the other's conjugate, resulting in a third quaternion. This third quaternion could be converted into an axis-angle, giving me the angle by which the original two quaternions differ. Is this thought process correct? So it should just be [ 0 i j k ]. I may need to normalize the quaternion afterwards, but I'm not sure about that. I have a bad feeling that it's not a direct mapping from a vector to a quaternion. I tried converting the unit vector to an axis-angle, but I'm not sure this would work, since I don't know what angle to give as an input.
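    As a side note on the math (standard quaternion identities, added here for illustration): a unit vector embedded as a pure quaternion is already unit-length, so no extra normalization is needed, but the product of two such pure quaternions does not follow the usual half-angle convention of rotation quaternions:

        p_1 = (0,\; \mathbf{v}_1), \qquad p_2 = (0,\; \mathbf{v}_2), \qquad \lVert\mathbf{v}_1\rVert = \lVert\mathbf{v}_2\rVert = 1 \;\Rightarrow\; \lVert p_1\rVert = \lVert p_2\rVert = 1

        p_2\,\bar{p}_1 = \bigl(\mathbf{v}_1\cdot\mathbf{v}_2,\; \mathbf{v}_1\times\mathbf{v}_2\bigr) = \bigl(\cos\varphi,\; \sin\varphi\,\hat{\mathbf{n}}\bigr)

    Here phi is the angle between the two vectors and n-hat is the unit vector perpendicular to both. Read as a rotation quaternion, the axis-angle conversion theta = 2*arccos(w) would therefore report 2*phi, twice the angle between the vectors; the angle itself is simply phi = arccos(v1 . v2).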

    Read the article

  • DSL to generate test data

    - by queen3
    There are several ways to generate data for tests (not only unit tests), for example Object Mother, builders, etc. Another useful approach is to write test data as plain text:

        product: Main; prices: 145, 255; Expire: 10-Apr-2011; qty: 2; includes: Sub
        product: Sub; prices: 145, 255; Expire: 10-Apr-2011; qty: 2

    and then parse it into C# objects. This is easy to use in unit tests (because deep inner collections can be written in a single line), and it is even more convenient in a FitNesse-like system (because this DSL naturally fits into a wiki), and so on. So I use this and write a parser, but it's tedious to write one each time. I'm not a big expert in DSLs/language parsers, but I think they can help here. What would be the right one to use? I have only heard about: a DSL (I mean, any DSL), Boo (which I think can do DSLs), and ANTLR, but I don't even know which one to pick or where to start. So the question: is it reasonable to use some kind of DSL to generate test data? How would you suggest doing so? Are there any existing cases?
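    For data this regular, a hand-rolled parser is only a few lines, which may be worth weighing against adopting a parser generator; a minimal, hypothetical C# sketch that turns each "key: value; key: value" line into a dictionary (the ParseLine name and the dictionary representation are assumptions for illustration, not from the question):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class TestDataParser
        {
            // "product: Main; prices: 145, 255; qty: 2" -> { product = "Main", prices = "145, 255", qty = "2" }
            public static Dictionary<string, string> ParseLine(string line)
            {
                return line.Split(';')
                           .Select(field => field.Split(new[] { ':' }, 2))
                           .Where(parts => parts.Length == 2)
                           .ToDictionary(parts => parts[0].Trim(), parts => parts[1].Trim());
            }
        }

        // Usage: map the dictionary onto a domain object in the test fixture, e.g.
        // var row = TestDataParser.ParseLine("product: Main; prices: 145, 255; qty: 2");
        // var qty = int.Parse(row["qty"]);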

    Read the article

  • Visual Studio 2010 Professional - Problem Unit-Testing Web Services

    - by Ben
    I have created a very simple web service (asmx) in Visual Studio 2010 Professional and am trying to use the auto-generated unit test cases. I get something that seems quite familiar on this site: "The web site could not be configured correctly; getting ASP.NET process information failed. Requesting http://localhost:81/zfp/VSEnterpriseHelper.axd returned an error: The remote server returned an error: (500) Internal Server Error." (see http://stackoverflow.com/questions/260432/500-error-running-visual-studio-asp-net-unit-test). I have tried:
        1. Running the tests on IIS rather than the ASP.NET Development Server
        2. Adding and then removing the XML fragment in my web service's .config file
        3. Giving the MACHINE\ASPNET account Full Control on the local folder
    My current questions:
        1. Why am I being bothered with this instrumentation / code coverage DLL, when it doesn't seem to be something that ships with Visual Studio 2010 Professional? Is there any way I can turn it off?
        2. I'm placing the node under in Web.config - is that the correct node?
        3. Is it possible to bind to a web service without using the webby test attributes? I've seen other people advise making the web service as lightweight as possible. I'm trying to call it with jQuery / AJAX / JSON, so being able to debug the actual web service would be really helpful.
    Best wishes, Ben

    Read the article

  • Automatic testing of GUI related private methods

    - by Stein G. Strindhaug
    When it comes to GUI programming (at least for the web), I feel that often the only thing that would be useful to unit test is some of the private methods*. While unit testing makes perfect sense for back-end code, I feel it doesn't quite fit GUI classes. What is the best way to add automatic testing of these? * Why I think the only methods useful to test are private: often when I write GUI classes they don't even have any public methods except for the constructor. The public methods, if any, are trivial, and the constructor does most of the job by calling private methods. They receive some data from the server, do a lot of trivial output, and feed data to the constructors of other classes contained inside them, adding listeners that (more or less directly) call the server... Most of it is pretty trivial (the hardest part is the layout: CSS, IE, etc.), but sometimes I create a private method that does some advanced tricks, which I definitely do not want to be publicly visible (because it's closely coupled to the implementation of the layout, and likely to change), but which is sufficiently complicated to break. These are often only called by the constructor or repeatedly by events in the code, not by any public methods at all. I'd like to have a way to test this type of method without making it public or resorting to reflection trickery. (BTW: I'm currently using GWT, but I feel this applies to most languages/frameworks I've used when coding for GUIs.)

    Read the article

  • Testing subpackage modules in Python 3

    - by Mitchell Model
    I have been experimenting with various uses of hierarchies like this and the differences between absolute and relative imports, and can't figure out how to do routine things with the package, subpackages, and modules without simply putting everything on sys.path. I have a two-level package hierarchy:

        MyApp
            __init__.py
            Application
                __init__.py
                Module1
                Module2
                ...
            Domain
                __init__.py
                Module1
                Module2
                ...
            UI
                __init__.py
                Module1
                Module2
                ...

    I want to be able to do the following:
        - Run test code in a module's "if main" when the module imports from other modules in the same directory.
        - Have one or more test code modules in each subpackage that run unit tests on the modules in the subpackage.
        - Have a set of unit tests that resides someplace reasonable, but outside the subpackages, either in a sibling package, at the top-level package, or outside the top-level package (though all these might end up doing is running the tests in each subpackage).
        - "Enter" the structure from any of the three subpackage levels, e.g. run code that just uses Domain modules; run code that just uses Application modules, where Application uses code from both Application and Domain modules; and run code from UI that uses code from both UI and Application. For instance, Application test code would import Application modules but not Domain modules.
        - After developing the bulk of the code without subpackages, continue developing and testing after organizing the modules into this hierarchy.
    I know how to use relative imports so that external code that puts MyApp on its sys.path can import MyApp, import any subpackages it wants, and import things from their modules, while the modules in each subpackage can import other modules from the same subpackage or from sibling packages. However, the development needs listed above seem incompatible with subpackage structuring -- in other words, I can't have it both ways: a well-structured multi-level package hierarchy used from the outside and also used from within, in particular for testing, but also because modules from one design level (in particular the UI) should not import modules from a design level below the next one down. Sorry for the long essay, but I think it fairly represents the struggles a lot of people have been having adapting to the new relative import mechanisms.

    Read the article

  • Using ZLib unit to compress files vs using ZipForge

    - by user193655
    There are many questions on zipping in Delphi, but this is not a duplicate. I am using ZipForge for zip/unzip capability in my application. Currently I use 2 features of ZipForge: 1) zip and unzip (!) 2) password-protect the archives. Now I am removing the password from all the archives, so I only need to zip and unzip files. I zip them just to minimize bandwidth when uploading/downloading files from the server. So my idea is to process all files once, unzipping them (with the password) and rezipping them without a password. I have nothing against ZipForge, but it is an extra component: every time I upgrade to a newer Delphi version I have to wait for the new IDE support, and moreover the more components, the more problems during installation. So since what I do is very simple, I'd like to replace ZipForge with 2 simple functions using the ZLib unit. I found (and tested) the functions here on Torry's. What do you think of using the ZLib unit? Do you see any potential problem that I would not have with ZipForge? Can you comment on speed?

    Read the article

  • Maven Cobertura: unit test failed but build success

    - by Pavel Drobushevich
    Hi all, I've configured Cobertura code coverage in my pom:

        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>cobertura-maven-plugin</artifactId>
            <version>2.4</version>
            <configuration>
                <instrumentation>
                    <excludes>
                        <exclude>**/*Exception.class</exclude>
                    </excludes>
                </instrumentation>
                <formats>
                    <format>xml</format>
                    <format>html</format>
                </formats>
            </configuration>
        </plugin>

    and start the tests with the following command: mvn clean cobertura:cobertura. But if one of the unit tests fails, Cobertura only logs this information and doesn't mark the build as failed:

        Tests run: 287, Failures: 1, Errors: 1, Skipped: 0
        Flushing results...
        Flushing results done
        Cobertura: Loaded information on 139 classes.
        Cobertura: Saved information on 139 classes.
        [ERROR] There are test failures.
        .................................
        [INFO] BUILD SUCCESS

    How do I configure Cobertura to mark the build as failed when a unit test fails? Thanks in advance.

    Read the article

  • Should the code being tested compile to a DLL or an executable file?

    - by uriDium
    I have a solution with two projects: one project for the production code and another project for the unit tests. I did this as per the suggestions I got here on SO. I noticed that the Debug folder includes the production code in executable form. I used NUnit to run the tests after removing the executable and they all fail trying to find the executable, so it definitely is trying to find it. I then did a quick read to find out which is better, a DLL or an executable. It seems that a DLL is much faster, as they share memory space, whereas communication between executables is slower. Unfortunately our production code needs to be an executable, so the unit tests will be slightly slower. I am not too worried about that. But the project does rely on code written in another library which is also in executable format at the moment. Should the projects that expose some sort of SDK rather be compiled to a DLL, and then the projects that use the SDK be compiled to an executable?

    Read the article

  • Access violation using LocalAlloc()

    - by PaulH
    I have a Visual Studio 2008 Windows Mobile 6 C++ application that is using an API that requires the use of LocalAlloc(). To make my life easier, I created an implementation of a standard allocator that uses LocalAlloc() internally: /// Standard library allocator implementation using LocalAlloc and LocalReAlloc /// to create a dynamically-sized array. /// Memory allocated by this allocator is never deallocated. That is up to the /// user. template< class T, int max_allocations > class LocalAllocator { public: typedef T value_type; typedef size_t size_type; typedef ptrdiff_t difference_type; typedef T* pointer; typedef const T* const_pointer; typedef T& reference; typedef const T& const_reference; pointer address( reference r ) const { return &r; }; const_pointer address( const_reference r ) const { return &r; }; LocalAllocator() throw() : c_( NULL ) { }; /// Attempt to allocate a block of storage with enough space for n elements /// of type T. n>=1 && n<=max_allocations. /// If memory cannot be allocated, a std::bad_alloc() exception is thrown. pointer allocate( size_type n, const void* /*hint*/ = 0 ) { if( NULL == c_ ) { c_ = LocalAlloc( LPTR, sizeof( T ) * n ); } else { HLOCAL c = LocalReAlloc( c_, sizeof( T ) * n, LHND ); if( NULL == c ) LocalFree( c_ ); c_ = c; } if( NULL == c_ ) throw std::bad_alloc(); return reinterpret_cast< T* >( c_ ); }; /// Normally, this would release a block of previously allocated storage. /// Since that's not what we want, this function does nothing. void deallocate( pointer /*p*/, size_type /*n*/ ) { // no deallocation is performed. that is up to the user. }; /// maximum number of elements that can be allocated size_type max_size() const throw() { return max_allocations; }; private: /// current allocation point HLOCAL c_; }; // class LocalAllocator My application is using that allocator implementation in a std::vector< #define MAX_DIRECTORY_LISTING 512 std::vector< WIN32_FIND_DATA, LocalAllocator< WIN32_FIND_DATA, MAX_DIRECTORY_LISTING > > file_list; WIN32_FIND_DATA find_data = { 0 }; HANDLE find_file = ::FindFirstFile( folder.c_str(), &find_data ); if( NULL != find_file ) { do { // access violation here on the 257th item. file_list.push_back( find_data ); } while ( ::FindNextFile( find_file, &find_data ) ); ::FindClose( find_file ); } // data submitted to the API that requires LocalAlloc()'d array of WIN32_FIND_DATA structures SubmitData( &file_list.front() ); On the 257th item added to the vector<, the application crashes with an access violation: Data Abort: Thread=8e1b0400 Proc=8031c1b0 'rapiclnt' AKY=00008001 PC=03f9e3c8(coredll.dll+0x000543c8) RA=03f9ff04(coredll.dll+0x00055f04) BVA=21ae0020 FSR=00000007 First-chance exception at 0x03f9e3c8 in rapiclnt.exe: 0xC0000005: Access violation reading location 0x01ae0020. LocalAllocator::allocate is called with an n=512 and LocalReAlloc() succeeds. 
    The actual Access Violation exception occurs within the std::vector code after the LocalAllocator::allocate call:

        0x03f9e3c8
        0x03f9ff04
        > MyLib.dll!stlp_std::priv::__copy_trivial(const void* __first = 0x01ae0020, const void* __last = 0x01b03020, void* __result = 0x01b10020) Line: 224, Byte Offsets: 0x3c C++
        MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::_M_insert_overflow(_WIN32_FIND_DATAW* __pos = 0x01b03020, _WIN32_FIND_DATAW& __x = {...}, stlp_std::__true_type& __formal = {...}, unsigned int __fill_len = 1, bool __atend = true) Line: 112, Byte Offsets: 0x5c C++
        MyLib.dll!stlp_std::vector<_WIN32_FIND_DATAW,LocalAllocator<_WIN32_FIND_DATAW,512> >::push_back(_WIN32_FIND_DATAW& __x = {...}) Line: 388, Byte Offsets: 0xa0 C++
        MyLib.dll!Foo(unsigned long int cbInput = 16, unsigned char* pInput = 0x01a45620, unsigned long int* pcbOutput = 0x1dabfbbc, unsigned char** ppOutput = 0x1dabfbc0, IRAPIStream* __formal = 0x00000000) Line: 66, Byte Offsets: 0x1e4 C++

    If anybody can point out what I may be doing wrong, I would appreciate it. Thanks, PaulH

    Read the article

  • Seeking suggestions on redesigning the interface

    - by ratkok
    As part of maintaining a large piece of legacy code, we need to change part of the design, mainly to make it more testable (unit testing). One of the issues we need to resolve is the existing interface between components. The interface between two components is a class that contains static methods only. Simplified example:

        class ABInterface {
            static methodA();
            static methodB();
            ...
            static methodZ();
        };

    The interface is used by component A so that different methods can use ABInterface::methodA() in order to prepare some input data and then invoke appropriate functions within component B. Now we are trying to redesign this interface for various reasons:
        - Extending our unit test coverage - we need to break this dependency between the components, and stubs/mocks are to be introduced.
        - The interface between these components has diverged from the original design (i.e. a lot of newer functions used for the inter-component interface are created outside this interface class).
        - The code is old, has changed a lot over time, and needs to be refactored.
    The change should not be disruptive for the rest of the system, we try to limit leaving many test-required artifacts in the production code, and performance is very important, so there should be no (or very minimal) degradation after the redesign. The code is OO C++. I am looking for ideas on what approach to take. Any suggestions on how to do this efficiently?

    Read the article

  • Given a typical Rails 3 environment, why am I unable to execute any tests?

    - by Tom
    I'm working on writing simple unit tests for a Rails 3 project, but I'm unable to actually execute any tests. Case in point, attempting to run the test auto-generated by Rails fails:

        require 'test_helper'

        class UserTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end
        end

    Results in the following error:

        <internal:lib/rubygems/custom_require>:29:in `require': no such file to load -- test_helper (LoadError)
            from <internal:lib/rubygems/custom_require>:29:in `require'
            from user_test.rb:1:in `<main>'

    Commenting out the require 'test_helper' line and attempting to run the test results in this error:

        user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError)

    The Action Pack gems appear to be properly installed and up to date: actionmailer (3.0.3, 2.3.5), actionpack (3.0.3, 2.3.5), activemodel (3.0.3), activerecord (3.0.3, 2.3.5), activeresource (3.0.3, 2.3.5), activesupport (3.0.3, 2.3.5). Ruby is at 1.9.2p0 and Rails is at 3.0.3. A sample dump of my test directory is as follows:

        /fixtures
        /functional
        /integration
        /performance
        /unit
        -- /helpers
        -- user_helper_test.rb
        -- user_test.rb
        test_helper.rb

    I've never seen this problem before - I've run the typical rake tasks for preparing the test environment. I have nothing out of the ordinary in my application or environment configuration files, nor have I installed any unusual gems that would interfere with the test environment. Edit: Xavier Holt's suggestion of explicitly specifying the path to the test_helper worked; however, this revealed an issue with ActiveSupport. Now when I attempt to run the test, I receive the following error message (also listed above): user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError). But as you can see above, Action Pack is all installed and up to date.

    Read the article

  • Set aside space for ADT when reading integers from file

    - by That Guy
    I'm using C to make a maze solver. The program should be able to read a maze from a text file containing a grid of 1s and 0s representing walls and paths. This file could be of any size, as the user selects which maze to use. The program should then show the maze being solved; as it is being solved it should show where it has walked and how many steps have been taken. I have made an ADT called Cell containing a bool for wall-or-path and an integer for steps taken. I now need to populate a 2D array of Cells, which means I need to set aside enough space to store a Cell for every integer in the maze file. What would be the best way to do this?

    Read the article

  • How do I prevent qFatal() from aborting the application?

    - by Dave
    My Qt application uses Q_ASSERT_X, which calls qFatal(), which (by default) aborts the application. That's great for the application, but I'd like to suppress that behavior when unit testing the application. (I'm using the Google Test Framework.) I have by unit tests in a separate project, statically linking to the class I'm testing. The documentation for qFatal() reads: Calls the message handler with the fatal message msg. If no message handler has been installed, the message is printed to stderr. Under Windows, the message is sent to the debugger. If you are using the default message handler this function will abort on Unix systems to create a core dump. On Windows, for debug builds, this function will report a _CRT_ERROR enabling you to connect a debugger to the application. ... To supress the output at runtime, install your own message handler with qInstallMsgHandler(). So here's my main.cpp file: #include <gtest/gtest.h> #include <QApplication> void testMessageOutput(QtMsgType type, const char *msg) { switch (type) { case QtDebugMsg: fprintf(stderr, "Debug: %s\n", msg); break; case QtWarningMsg: fprintf(stderr, "Warning: %s\n", msg); break; case QtCriticalMsg: fprintf(stderr, "Critical: %s\n", msg); break; case QtFatalMsg: fprintf(stderr, "My Fatal: %s\n", msg); break; } } int main(int argc, char **argv) { qInstallMsgHandler(testMessageOutput); testing::InitGoogleTest(&argc, argv); return RUN_ALL_TESTS(); } But my application is still stopping at the assert. I can tell that my custom handler is being called, because the output when running my tests is: My Fatal: ASSERT failure in MyClass::doSomething: "doSomething()", file myclass.cpp, line 21 The program has unexpectedly finished. What can I do so that my tests keep running even when an assert fails?

    Read the article

  • method used like a type error in a unit test

    - by Josepth Vodary
    I am trying to unit test a simple factory, but it keeps telling me that I am trying to use a method like a type. My unit test:

        using System;
        using System.Text;
        using System.Collections.Generic;
        using System.Linq;
        using Microsoft.VisualStudio.TestTools.UnitTesting;
        using Home;

        namespace HomeTest
        {
            [TestClass]
            public class TestFactory
            {
                [TestMethod]
                public void DoTestFactory()
                {
                    InventoryType.InventorySelect select = new InventoryType.InventorySelect();
                    select.inventoryTypes.Add("cds");
                    Home.Services.Factory.CreateInventory get = new Home.Services.Factory.CreateInventory();
                    get.InventoryImpl();
                    if (select.Validate() == true)
                        Console.WriteLine("Test Passed");
                    else if (select.Validate() == false)
                        Console.WriteLine("Test Returned False");
                    else
                        Console.WriteLine("Test Failed To Run");
                    Console.ReadLine();
                }
            }
        }

    My factory:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;

        namespace Home.Services
        {
            public class Factory
            {
                public InventorySvc CreateInventory()
                {
                    return new InventoryImpl();
                }
            }
        }
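    The "method used like a type" error means CreateInventory is being used as if it were a type, when it is actually an instance method of Factory. A minimal sketch of the likely fix, reusing the names from the question:

        // Inside DoTestFactory(): instantiate the factory, then call CreateInventory()
        // instead of using the method name as a type.
        Home.Services.Factory factory = new Home.Services.Factory();
        InventorySvc inventory = factory.CreateInventory();   // returns the InventoryImpl instance

        Assert.IsNotNull(inventory);
        // ... continue exercising select / inventory as before ...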

    Read the article

  • Ability to switch Persistence Unit dynamically within the application (JPA)

    - by MVK
    My application data access layer is built using Spring and EclipseLink and I am currently trying to implement the following feature - Ability to switch the current/active persistence unit dynamically for a user. I tried various options and finally ended up doing the following. In the persistence.xml, declare multiple PUs. Create a class with as many EntityManagerFactory attributes as there are PUs defined. This will act as a factory and return the appropriate EntityManager based on my logic public class MyEntityManagerFactory { @PersistenceUnit(unitName="PU_1") private EntityManagerFactory emf1; @PersistenceUnit(unitName="PU_2") private EntityManagerFactory emf2; public EntityManager getEntityManager(int releaseId) { // Logic goes here to return the appropriate entityManeger } } My spring-beans xml looks like this.. <!-- First persistence unit --> <bean class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean" id="emFactory1"> <property name="persistenceUnitName" value="PU_1" /> </bean> <bean class="org.springframework.orm.jpa.JpaTransactionManager" id="transactionManager1"> <property name="entityManagerFactory" ref="emFactory1"/> </bean> <tx:annotation-driven transaction-manager="transactionManager1"/> The above section is repeated for the second PU (with names like emFactory2, transactionManager2 etc). I am a JPA newbie and I know that this is not the best solution. I appreciate any assistance in implementing this requirement in a better/elegant way! Thanks!

    Read the article

  • What's a good way to test a plug-in on multiple Windows and Outlook versions?

    - by Andrei
    Hello, we're building a plug-in for Outlook that should work on multiple Windows versions (XP, Vista, 7) and also with different Outlook versions (2003, 2007, 2010). The testing problem I am facing right now is that I can't figure out a good/convenient/thorough way to test the application on multiple Windows and Outlook versions. At the moment, I have a VirtualBox host which runs many virtual machines with different Windows and Outlook versions. So I would have a virtual machine with Windows 7 testing Outlook 2010, another one with Windows 7 testing Outlook 2007, Windows Vista with Outlook 2010, and so on, going through some of the possible combinations. It kind of gets the job done, although it is cumbersome and takes a long time to test. Some of the testing included in the application is unit testing, but this is also rather tied to the machine I test it on (Windows 7 with Outlook 2010). For example, I was using ManagementObject recently, which worked fine on my system (and thus passed the unit test for that method); however, using that object threw an exception on another person's system, which crashed the application. I work in Visual Studio 2010 Ultimate. The questions: Is there a more elegant way to make the testing process more streamlined and more efficient? Are there any other testing methods you recommend? How would you deal with this problem? Thanks! Looking forward to your replies.

    Read the article

  • How can I split abstract testcases in JUnit?

    - by Willi Schönborn
    I have an abstract testcase "AbstractATest" for an interface "A". It has several test methods (@Test) and one abstract method: protected abstract A unit(); which provides the unit under test. Now I have multiple implementations of "A", e.g. "DefaultA", "ConcurrentA", etc. My problem: the testcase is huge (~1500 LOC) and it's growing, so I wanted to split it into multiple testcases. How can I organize/structure this in JUnit 4 without needing a concrete testcase for every combination of implementation and abstract testcase? I want e.g. "AInitializeTest", "AExecuteTest" and "AStopTest", each being abstract and containing multiple tests. But for my concrete "ConcurrentA", I only want to have one concrete testcase, "ConcurrentATest". I hope my "problem" is clear. EDIT: It looks like my description was not that clear. Is it possible to pass a reference to a test? I know parameterized tests, but these require static methods, which is not applicable to my setup. Subclasses of an abstract testcase decide about the parameter.

    Read the article

  • allocator with no template

    - by Merni
    Every STL container takes an allocator as a second template parameter:

        template < class T, class Allocator = allocator<T> > class vector;

    If you write your own class, it is possible to use your own allocator. But is it possible to write your own allocator without using templates? For example, writing this function is not easy if you are not allowed to use templates:

        pointer allocate(size_type n, const_pointer = 0)
        {
            void* p = std::malloc(n * sizeof(T));
            if (!p)
                throw std::bad_alloc();
            return static_cast<pointer>(p);
        }

    Because how could you know the size of T?

    Read the article
