Search Results

Search found 32072 results on 1283 pages for 'catch unit test'.

Page 19/1283 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • Is application-specific data required for good unit testing?

    - by stinkycheeseman
    I am writing unit tests for a fairly simple function that depends on a fairly complicated set of data. Essentially, the object I am manipulating represents a graph, and this function decides whether to draw a line, bar, or pie chart based on the data that came back from the server. This is a simplified version, using jQuery:

        setDefaultChartType: function (graphObject) {
            var prop1 = graphObject.properties.key;
            var numCols = 0;
            $.each(graphObject.columns, function (colIndex, column) {
                numCols++;
            });
            if ( numCols > 6 || ( prop1 > 1 && graphObject.data.length == 1) ) {
                graphObject.setChartType("line");
            } else if ( numCols <= 6 && prop1 == 1 ) {
                graphObject.setChartType("bar");
            } else if ( numCols <= 6 && prop1 > 1 ) {
                graphObject.setChartType("pie");
            }
        }

    My question is: should I use mock data procured from the actual database, or can I just fabricate data that fits the different cases? I'm afraid that fabricating data will not expose bugs arising from changes in the database; on the other hand, keeping database-derived test data up to date would require a lot more effort, and I'm not sure that effort is necessary.

    Read the article

  • Any pre-rolled System.IO abstraction libraries out there for Unit Testing?

    - by Binary Worrier
    To test methods that use the file system, we basically need to put System.IO behind a set of interfaces that we can then mock; I do this with a DiskIO class and interface. As my DiskIO code gets larger (and the grumblings from the "we're unconvinced about this TDD thing" crowd here at work get louder), I went looking for a comprehensive open source library that already does this and found . . . nothing. I may be looking in the wrong place, or I may have approached this problem in completely the wrong way. I can't be the only one in this position: do these libraries exist, and if so, where are they? Are there any you've used and would recommend? Thanks. P.S. I'm happy with my current approach, i.e. starting with what we need and adding only when the need arises. Unfortunately the "we're unconvinced about this TDD thing" crowd remain unconvinced, and think that I can't be right.
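
    The pattern being described here — a thin DiskIO interface with a real implementation and a test fake — is language-agnostic. A minimal sketch of the same shape in Java (DiskIO, RealDiskIO and FakeDiskIO are hypothetical names, not an existing library):

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Paths;
        import java.util.HashMap;
        import java.util.Map;

        // Hypothetical abstraction over the pieces of the filesystem the code under test needs.
        interface DiskIO {
            String readAllText(String path) throws IOException;
            void writeAllText(String path, String content) throws IOException;
            boolean exists(String path);
        }

        // Production implementation delegating to java.nio.file.
        class RealDiskIO implements DiskIO {
            public String readAllText(String path) throws IOException {
                return new String(Files.readAllBytes(Paths.get(path)));
            }
            public void writeAllText(String path, String content) throws IOException {
                Files.write(Paths.get(path), content.getBytes());
            }
            public boolean exists(String path) {
                return Files.exists(Paths.get(path));
            }
        }

        // In-memory fake for unit tests: no disk access, fully deterministic.
        class FakeDiskIO implements DiskIO {
            private final Map<String, String> files = new HashMap<>();
            public String readAllText(String path) throws IOException {
                if (!files.containsKey(path)) throw new IOException("No such file: " + path);
                return files.get(path);
            }
            public void writeAllText(String path, String content) {
                files.put(path, content);
            }
            public boolean exists(String path) {
                return files.containsKey(path);
            }
        }

    Production code depends only on the interface; tests construct the fake, pre-load it with whatever files the scenario needs, and never touch the real disk.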

    Read the article

  • Generic test suite for ASP.NET Membership/Role/Profile/Session Providers

    - by SztupY
    Hi! I've just created custom ASP.NET Membership, Role, Profile and Session State providers, and I was wondering whether there exists a test suite or something similar to test the implementation of the providers. I've checked some of the open source providers I could find (like the NauckIt.PostgreSQL provider), but none of them contained unit tests, and all of the forum topics I've found mentioned only a few test cases (like checking whether creating a user works), which is clearly not a complete test suite for a Membership provider. (And I couldn't find anything for the other three providers.) Are there more or less complete test suites for the above mentioned providers, or are there custom providers out there that have at least some testing available?
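
    A generic suite for providers like these is usually structured as a "contract test": an abstract test class written against the provider interface, subclassed once per implementation. A minimal sketch of that shape (the MembershipProvider interface below is a hypothetical stand-in for the real ASP.NET contract, and the example is in Java/JUnit only because this listing mixes languages):

        import static org.junit.Assert.assertFalse;
        import static org.junit.Assert.assertTrue;
        import org.junit.Before;
        import org.junit.Test;

        // Hypothetical minimal provider contract used for illustration.
        interface MembershipProvider {
            boolean createUser(String name, String password);
            boolean validateUser(String name, String password);
        }

        // Abstract contract test: every provider implementation gets the same checks.
        abstract class MembershipProviderContractTest {
            protected MembershipProvider provider;

            protected abstract MembershipProvider createProvider();

            @Before
            public void setUp() {
                provider = createProvider();
            }

            @Test
            public void createdUserCanBeValidated() {
                assertTrue(provider.createUser("alice", "secret"));
                assertTrue(provider.validateUser("alice", "secret"));
            }

            @Test
            public void wrongPasswordIsRejected() {
                assertTrue(provider.createUser("bob", "secret"));
                assertFalse(provider.validateUser("bob", "wrong"));
            }
        }

    Each concrete provider then only needs a small subclass that implements createProvider(), so the same suite runs against every implementation.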

    Read the article

  • How to unit test synchronized code

    - by gillJ
    Hi, I am new to Java and JUnit. I have the following piece of code that I want to test, and I would appreciate your ideas about the best way to go about testing it. Basically, the code is about electing a leader from a cluster. The leader holds a lock on the shared cache, and the leader's services get resumed and disposed if it somehow loses the lock on the cache. How can I make sure that a leader/thread still holds the lock on the cache, and that another thread cannot get its services resumed while the first is in execution?

        public interface ContinuousService {
            public void resume();
            public void pause();
        }

        public abstract class ClusterServiceManager {
            private volatile boolean leader = false;
            private volatile boolean electable = true;
            private List<ContinuousService> services;

            protected synchronized void onElected() {
                if (!leader) {
                    for (ContinuousService service : services) {
                        service.resume();
                    }
                    leader = true;
                }
            }

            protected synchronized void onDeposed() {
                if (leader) {
                    for (ContinuousService service : services) {
                        service.pause();
                    }
                    leader = false;
                }
            }

            public void setServices(List<ContinuousService> services) {
                this.services = services;
            }

            @ManagedAttribute
            public boolean isElectable() {
                return electable;
            }

            @ManagedAttribute
            public boolean isLeader() {
                return leader;
            }
        }

        public class TangosolLeaderElector extends ClusterServiceManager implements Runnable {
            private static final Logger log = LoggerFactory.getLogger(TangosolLeaderElector.class);
            private String election;
            private long electionWaitTime = 5000L;
            private NamedCache cache;

            public void start() {
                log.info("Starting LeaderElector ({})", election);
                Thread t = new Thread(this, "LeaderElector (" + election + ")");
                t.setDaemon(true);
                t.start();
            }

            public void run() {
                // Give the connection a chance to start itself up
                try { Thread.sleep(1000); } catch (InterruptedException e) {}
                boolean wasElectable = !isElectable();
                while (true) {
                    if (isElectable()) {
                        if (!wasElectable) {
                            log.info("Leadership requested on election: {}", election);
                            wasElectable = isElectable();
                        }
                        boolean elected = false;
                        try {
                            // Try and get the lock on the LeaderElectorCache for the current election
                            if (!cache.lock(election, electionWaitTime)) {
                                // We didn't get the lock. Cycle round again.
                                // This code ensures we check the electable flag every now & then
                                continue;
                            }
                            elected = true;
                            log.info("Leadership taken on election: {}", election);
                            onElected();
                            // Wait here until the services fail in some way.
                            while (true) {
                                try { Thread.sleep(electionWaitTime); } catch (InterruptedException e) {}
                                if (!cache.lock(election, 0)) {
                                    log.warn("Cache lock no longer held for election: {}", election);
                                    break;
                                } else if (!isElectable()) {
                                    log.warn("Node is no longer electable for election: {}", election);
                                    break;
                                }
                                // We're fine - loop round and go back to sleep.
                            }
                        } catch (Exception e) {
                            if (log.isErrorEnabled()) {
                                log.error("Leadership election " + election + " failed (try bfmq logs for details)", e);
                            }
                        } finally {
                            if (elected) {
                                cache.unlock(election);
                                log.info("Leadership resigned on election: {}", election);
                                onDeposed();
                            }
                            // On deposition, do not try and get re-elected for at least the standard wait time.
                            try { Thread.sleep(electionWaitTime); } catch (InterruptedException e) {}
                        }
                    } else {
                        // Not electable - wait a bit and check again.
                        if (wasElectable) {
                            log.info("Leadership NOT requested on election ({}) - node not electable", election);
                            wasElectable = isElectable();
                        }
                        try { Thread.sleep(electionWaitTime); } catch (InterruptedException e) {}
                    }
                }
            }

            public void setElection(String election) {
                this.election = election;
            }

            @ManagedAttribute
            public String getElection() {
                return election;
            }

            public void setNamedCache(NamedCache nc) {
                this.cache = nc;
            }
        }
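
    One way to pin down the locking behaviour without Coherence is to test the synchronized onElected()/onDeposed() logic in isolation, driving it from several threads and asserting that the services are only resumed once. A rough JUnit sketch (it exercises a small ClusterServiceManager subclass with a stub ContinuousService; the NamedCache interaction is not covered and would need a fake cache as a seam):

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertFalse;
        import static org.junit.Assert.assertTrue;
        import java.util.Arrays;
        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.atomic.AtomicInteger;
        import org.junit.Test;

        public class ClusterServiceManagerTest {

            // Stub service that just counts resume/pause calls.
            static class CountingService implements ContinuousService {
                final AtomicInteger resumes = new AtomicInteger();
                final AtomicInteger pauses = new AtomicInteger();
                public void resume() { resumes.incrementAndGet(); }
                public void pause() { pauses.incrementAndGet(); }
            }

            // Concrete subclass so the protected callbacks can be reached from the test.
            static class TestableManager extends ClusterServiceManager {
                public void elect() { onElected(); }
                public void depose() { onDeposed(); }
            }

            @Test
            public void servicesAreResumedOnlyOnceEvenWhenElectedConcurrently() throws Exception {
                CountingService service = new CountingService();
                TestableManager manager = new TestableManager();
                manager.setServices(Arrays.<ContinuousService>asList(service));

                int threads = 8;
                CountDownLatch start = new CountDownLatch(1);
                CountDownLatch done = new CountDownLatch(threads);
                for (int i = 0; i < threads; i++) {
                    new Thread(() -> {
                        try {
                            start.await();
                            manager.elect();   // all threads race to become leader
                        } catch (InterruptedException ignored) {
                        } finally {
                            done.countDown();
                        }
                    }).start();
                }
                start.countDown();
                done.await();

                assertTrue(manager.isLeader());
                assertEquals(1, service.resumes.get());  // resumed exactly once

                manager.depose();
                assertFalse(manager.isLeader());
                assertEquals(1, service.pauses.get());
            }
        }

    Verifying that a competing node cannot take over while the lock is held really needs a fake NamedCache whose lock() behaviour the test can script; the cache is the natural seam to fake.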

    Read the article

  • Test-driven Development: Writing tests for private / protected variables

    - by Chetan
    I'm learning TDD, and I have a question about private / protected variables. My question is: if a function I want to test operates on a private variable, how should I test it? Here is the example I'm working with: I have a class called Table that contains an instance variable called internalRepresentation, which is a 2D array. I want to create a function called multiplyValuesByN that multiplies all the values in the 2D array by the argument n. So I write the test for it (in Python):

        def test_multiplyValuesByN(self):
            t = Table(3, 3)   # 3x3 table, filled with 0's
            t.set(0, 0, 4)    # Set value at position (0,0) to 4
            t.multiplyValuesByN(3)
            self.assertEqual(t.internalRepresentation, [[12, 0, 0], [0, 0, 0], [0, 0, 0]])

    Now, if I make internalRepresentation private or protected, this test will not work. How am I supposed to write the test so it doesn't depend on internalRepresentation but still verifies that it looks correct after calling multiplyValuesByN?
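
    The example above is Python, but the usual TDD answer is language-independent: assert through the public behaviour that made the representation necessary (for example a get(row, col) accessor) rather than reaching into internalRepresentation. A sketch of that shape in Java, with a hypothetical Table API:

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class TableTest {

            @Test
            public void multiplyValuesByNScalesEveryCell() {
                Table t = new Table(3, 3);   // hypothetical 3x3 table, filled with 0s
                t.set(0, 0, 4);

                t.multiplyValuesByN(3);

                // Observe the result through the same public API used to put data in,
                // instead of asserting against the private internal representation.
                assertEquals(12, t.get(0, 0));
                assertEquals(0, t.get(1, 1));
                assertEquals(0, t.get(2, 2));
            }
        }

    If no public accessor exists, that is often the hint that either one is needed, or that the representation should stay an implementation detail verified indirectly through other public behaviour (rendering, equality, iteration).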

    Read the article

  • Problem with Authlogic and Unit/Functional Tests in Rails

    - by mmacaulay
    I'm learning how unit testing is done in Rails, and I've run into a problem involving Authlogic. According to the documentation, there are a few things required to use Authlogic in your tests. In test_helper.rb:

        require "authlogic/test_case"

        class ActiveSupport::TestCase
          setup :activate_authlogic
        end

    Then in my functional tests I can log in users:

        UserSession.create(users(:tester))

    The problem seems to stem from the setup :activate_authlogic line in test_helper.rb; whenever that is included, I get the following errors when running functional tests:

        NoMethodError: undefined method `request=' for nil:NilClass
        authlogic (2.1.3) lib/authlogic/controller_adapters/abstract_adapter.rb:63:in `send'
        authlogic (2.1.3) lib/authlogic/controller_adapters/abstract_adapter.rb:63:in `method_missing'

    If I remove setup :activate_authlogic and instead add

        Authlogic::Session::Base.controller = Authlogic::ControllerAdapters::RailsAdapter.new(self)

    to test_helper.rb, my functional tests seem to work, but now my unit tests fail:

        NoMethodError: undefined method `params' for ActiveSupport::TestCase:Class
        authlogic (2.1.3) lib/authlogic/controller_adapters/abstract_adapter.rb:30:in `params'
        authlogic (2.1.3) lib/authlogic/session/params.rb:96:in `params_credentials'
        authlogic (2.1.3) lib/authlogic/session/params.rb:72:in `params_enabled?'
        authlogic (2.1.3) lib/authlogic/session/params.rb:66:in `persist_by_params'
        authlogic (2.1.3) lib/authlogic/session/callbacks.rb:79:in `persist'
        authlogic (2.1.3) lib/authlogic/session/persistence.rb:55:in `persisting?'
        authlogic (2.1.3) lib/authlogic/session/persistence.rb:39:in `find'
        authlogic (2.1.3) lib/authlogic/acts_as_authentic/session_maintenance.rb:96:in `get_session_information'
        authlogic (2.1.3) lib/authlogic/acts_as_authentic/session_maintenance.rb:95:in `each'
        authlogic (2.1.3) lib/authlogic/acts_as_authentic/session_maintenance.rb:95:in `get_session_information'
        /test/unit/user_test.rb:23:in `test_should_save_user_with_email_password_and_confirmation'

    What am I doing wrong?

    Read the article

  • Using Unit Tests While Developing Static Libraries in Obj-C

    - by macinjosh
    I'm developing a static library in Obj-C for a CocoaTouch project. I've added unit testing to my Xcode project by using the built-in OCUnit framework. I can run tests successfully upon building the project, and everything looks good. However, I'm a little confused about something. Part of what the static library does is connect to a URL and download the resource there. I constructed a test case that invokes the method that creates a connection and ensures the connection is successful. However, when my tests run, a connection is never made to my testing web server (where the connection is set to go). It seems my code is not actually being run when the tests happen? Also, I am doing some NSLog calls in the unit tests and in the code they exercise, but I never see those. I'm new to unit testing, so I'm obviously not fully grasping what is going on here. Can anyone help me out? P.S. By the way, these are "Logical Tests" as Apple calls them, so they are not linked against the library; instead, the implementation files are included in the testing target.
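
    A common cause of this symptom is that the test method returns before the asynchronous connection has had a chance to run, so the request never actually happens. The general fix is to make the test block until the completion callback fires (or a timeout passes). The idea, sketched in Java with a CountDownLatch since it carries across languages (Downloader and Callback are hypothetical names):

        import static org.junit.Assert.assertTrue;
        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.TimeUnit;
        import org.junit.Test;

        public class DownloaderTest {

            // Hypothetical asynchronous API under test.
            interface Callback { void onFinished(boolean success); }
            interface Downloader { void fetch(String url, Callback callback); }

            @Test
            public void fetchEventuallyReportsCompletion() throws Exception {
                CountDownLatch finished = new CountDownLatch(1);
                final boolean[] result = new boolean[1];

                Downloader downloader = newTestDownloader(); // wire up the implementation being tested here
                downloader.fetch("http://test-server.local/resource", success -> {
                    result[0] = success;
                    finished.countDown();   // signal the waiting test thread
                });

                // Block the test until the callback fires, or fail after 5 seconds.
                assertTrue("callback never fired", finished.await(5, TimeUnit.SECONDS));
                assertTrue(result[0]);
            }

            private Downloader newTestDownloader() {
                // Placeholder: return the real implementation under test.
                return (url, callback) -> callback.onFinished(true);
            }
        }

    In OCUnit logic tests the usual equivalent is spinning the run loop (or waiting on a semaphore) until the delegate callback arrives, since the test harness does not pump the application's run loop for you.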

    Read the article

  • Unit Testing - Validation of ViewModel ASP.NET MVC 2

    - by dean nolan
    I am currently unit testing a service that adds users to a repository. I am using dependency injection to test with a fake repository. The repository has a method CreateUser(User user) which just adds it to the database, or in this case a List of Users. The logic for the creation is in the UserServices class. The application has a form for creating a user that requires some properties such as name and address. This is an MVC 2 app and I will be using the new validation based on data annotations. This makes me wonder about a few things:

    1) Should I annotate a POCO object that will map to the database? Or should I create a specific view model that has these annotations and pass this data to the UserServices class?

    2) Should the UserServices class also check this data? Would I be best constructing a User out of the view model and passing this into the service as a parameter?

    3) The actual unit testing would depend on 2): I either populate a User object and pass that in, or I pass a large list of strings to the method CreateUser.

    Writing this out, I get the basic idea that I should probably annotate the view model only, pass in a User (constructed from the view model if the data is valid), and also just construct the User in the unit test. Is this the best way to go?

    Read the article

  • codingBat separateThousands using regex (and unit testing how-to)

    - by polygenelubricants
    This question is a combination of regex practice and unit testing practice.

    Regex part

    I authored this problem, separateThousands, for personal practice: given a number as a string, introduce commas to separate thousands. The number may contain an optional minus sign and an optional decimal part. There will not be any superfluous leading zeroes. Here's my solution:

        String separateThousands(String s) {
            return s.replaceAll(
                String.format("(?:%s)|(?:%s)",
                    "(?<=\\G\\d{3})(?=\\d)",
                    "(?<=^-?\\d{1,3})(?=(?:\\d{3})+(?!\\d))"
                ),
                ","
            );
        }

    The way it works is that it classifies two types of commas: the first, and the rest. In the above regex, the rest subpattern actually appears before the first. A match will always be zero-length, which will be replaceAll'd with ",". The rest basically looks behind to see if there was a match followed by 3 digits, and looks ahead to see if there's a digit. It's some sort of a chain reaction mechanism triggered by the previous match. The first basically looks behind for the ^ anchor, followed by an optional minus sign, and between 1 to 3 digits. The rest of the string from that point must match triplets of digits, followed by a non-digit (which could either be $ or \.). My question for this part is: can this regex be simplified? Can it be optimized further?

        - Ordering rest before first is deliberate, since first is only needed once
        - No capturing group

    Unit testing part

    As I've mentioned, I'm the author of this problem, so I'm also the one responsible for coming up with test cases for it. Here they are:

        INPUT                       OUTPUT
        "1000"                      "1,000"
        "-12345"                    "-12,345"
        "-1234567890.1234567890"    "-1,234,567,890.1234567890"
        "123.456"                   "123.456"
        ".666666"                   ".666666"
        "0"                         "0"
        "123456789"                 "123,456,789"
        "1234.5678"                 "1,234.5678"
        "-55555.55555"              "-55,555.55555"
        "0.123456789"               "0.123456789"
        "123456.789"                "123,456.789"

    I haven't had much experience with industrial-strength unit testing, so I'm wondering if others can comment on whether this is good coverage, whether I've missed anything important, etc. (I can always add more tests if there's a scenario I've missed.)
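
    On the unit-testing side, the table above drops straight into a JUnit 4 parameterized test, which keeps the cases in one place and makes it cheap to add more. A sketch (it assumes separateThousands is reachable as a static method on a hypothetical Thousands holder class):

        import static org.junit.Assert.assertEquals;
        import java.util.Arrays;
        import java.util.Collection;
        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Parameterized;
        import org.junit.runners.Parameterized.Parameters;

        @RunWith(Parameterized.class)
        public class SeparateThousandsTest {

            @Parameters
            public static Collection<Object[]> cases() {
                return Arrays.asList(new Object[][] {
                    { "1000", "1,000" },
                    { "-12345", "-12,345" },
                    { "-1234567890.1234567890", "-1,234,567,890.1234567890" },
                    { "123.456", "123.456" },
                    { ".666666", ".666666" },
                    { "0", "0" },
                    { "123456789", "123,456,789" },
                    { "1234.5678", "1,234.5678" },
                    { "-55555.55555", "-55,555.55555" },
                    { "0.123456789", "0.123456789" },
                    { "123456.789", "123,456.789" },
                });
            }

            private final String input;
            private final String expected;

            public SeparateThousandsTest(String input, String expected) {
                this.input = input;
                this.expected = expected;
            }

            @Test
            public void insertsCommasCorrectly() {
                // Thousands is a hypothetical class holding the method under test.
                assertEquals(expected, Thousands.separateThousands(input));
            }
        }

    Cases possibly worth adding: a negative fraction with no integer part such as "-.666666", and boundary values like "999" and "1000000".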

    Read the article

  • How do I implement / build / create an 'in-memory database' for my unit test?

    - by Michel
    Hi all, I started unit testing a while ago, and as it turned out I was doing more regression testing than unit testing, because I also included my database layer and thus went to the database every time. So I implemented Unity to inject a fake database layer, but of course I still want to store some data, and the main opinion was: "create an in-memory database". But what is that, and how do I implement it? The main question is: I think I have to fake the database layer, but doesn't that mean I end up creating a 'simple database' myself? Or: how can I keep it simple and not rebuild SQL Server just for my unit tests? :) At the end of this question I'll give an explanation of the situation I found on the project I just started on, and I was wondering if this was the way to go. Michel

    The current situation I've seen at this client is that test data is contained in XML files, and there is a 'fake' database layer that connects all the XML files together. For the real database we're using the Entity Framework, and this works very simply. And now, in the 'fake' layer, I have to create all kinds of classes to load, save, persist etc. the data. It seems weird that there is so much work in the fake layer, and so little in the real layer. I hope this all makes sense :)
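
    For unit tests, an "in-memory database" usually does not need to be a database at all: a fake repository backed by a plain dictionary, sitting behind the same interface the real data layer implements, covers most cases. A minimal sketch of that shape in Java (the interface and names are hypothetical):

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // The seam: production code and tests both talk to this interface.
        interface CustomerRepository {
            void save(Customer customer);
            Customer findById(int id);
            List<Customer> findAll();
        }

        class Customer {
            final int id;
            final String name;
            Customer(int id, String name) { this.id = id; this.name = name; }
        }

        // In-memory fake used only by tests: just a map, no SQL, no files.
        class InMemoryCustomerRepository implements CustomerRepository {
            private final Map<Integer, Customer> store = new HashMap<>();

            public void save(Customer customer) { store.put(customer.id, customer); }
            public Customer findById(int id) { return store.get(id); }
            public List<Customer> findAll() { return new ArrayList<>(store.values()); }
        }

    The real implementation wraps the actual data access; the fake needs no extra load/save/persist plumbing beyond the map, which is usually the sign that the fake layer has stayed as simple as it should be.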

    Read the article

  • grailsApplication access in Grails unit Test

    - by Reza
    I am trying to write unit tests for a service which uses grailsApplication.config for some settings. It seems that in my unit tests the service instance cannot access the config (null pointer) for its settings, while it can access those settings when I run "run-app". How can I configure the service to access grailsApplication in my unit tests?

        class MapCloudMediaServerControllerTests {

            def grailsApplication

            @Before
            public void setUp() {
                grailsApplication.config = '''
                    video {
                        location="C:\\tmp\\"  // or shared filesystem drive for a cluster
                        yamdi {
                            path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\yamdi"
                        }
                        ffmpeg {
                            fileExtension = "flv"  // use flv or mp4
                            conversionArgs = "-b 600k -r 24 -ar 22050 -ab 96k"
                            path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\ffmpeg"
                            makethumb = "-an -ss 00:00:03 -an -r 2 -vframes 1 -y -f mjpeg"
                        }
                        ffprobe {
                            path="C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\ffprobe"
                            params=""
                        }
                        flowplayer {
                            version = "3.1.2"
                        }
                        swfobject {
                            version = ""
                            qtfaststart {
                                path= "C:\\FFmpeg\\ffmpeg-20121125-git-26c531c-win64-static\\bin\\qtfaststart"
                            }
                        }
                    }
                '''
            }

            @Test
            void testMpegtoFlvConvertor() {
                log.info "In test Mpg to Flv Convertor function!"
                def controller = new MapCloudMediaServerController()
                assert controller != null
                controller.videoService = new VideoService()
                assert controller.videoService != null
                log.info "Is the video service null? ${controller.videoService==null}"
                controller.videoService.grailsApplication = grailsApplication
                log.info "Is grailsApplication null? ${controller.videoService.grailsApplication==null}"
                // Very important part for simulating the HTTP request
                controller.metaClass.request = new MockMultipartHttpServletRequest()
                controller.request.contentType = "video/mpg"
                controller.request.content = new File("..\\MapCloudMediaServer\\web-app\\videoclips\\sample3.mpg").getBytes()
                controller.mpegtoFlvConvertor()
                byte[] videoOut = IOUtils.toByteArray(controller.response.getOutputStream())
                def outputFile = new File("..\\MapCloudMediaServer\\web-app\\videoclips\\testsample3.flv")
                outputFile.append(videoOut)
            }
        }

    Read the article

  • Android Unit Testing - Resolution & Verification Problems

    - by Bill
    I just switched the way my Android project is being built, and none of my unit tests work any more... I get errors like:

        WARN/dalvikvm(575): VFY: unable to resolve static field X in .....
        WARN/dalvikvm(575): VFY: unable to find class referenced in signature

    These errors only come from my unit tests, where classes defined in the test project can't even see other classes defined in the same unit test. Before, each project had its own directory with copies of the 3rd party jar files. I've read around that Dex does weird things with references, but I haven't been able to figure out how to fix this problem. Is there a better way to do this? I would love to see an example of a large Android workspace where there are multiple projects, jar references, etc... Is it possible to fix this with an Order/Export tweak? The project is structured like this:

        Eclipse Workspace (PROJECT_HOME classpath variable)
            lib
                3rd-party jars
                android.jar
            Java Project A
                Looks in PROJECT_HOME
            Java Project B
                Looks in PROJECT_HOME
                Depends on project A
            Android Project
                Depends on A & B
                Looks in PROJECT_HOME
            Android Test Project
                Depends on A, B, Android Project
                Looks in PROJECT_HOME

    Read the article

  • Unit Tests Architecture Question

    - by Tom Tresansky
    So I've started to lay out unit tests for the following bit of code:

        public interface MyInterface {
            void MyInterfaceMethod1();
            void MyInterfaceMethod2();
        }

        public class MyImplementation1 implements MyInterface {
            void MyInterfaceMethod1() {
                // do something
            }
            void MyInterfaceMethod2() {
                // do something else
            }
            void SubRoutineP() {
                // other functionality specific to this implementation
            }
        }

        public class MyImplementation2 implements MyInterface {
            void MyInterfaceMethod1() {
                // do a 3rd thing
            }
            void MyInterfaceMethod2() {
                // do something completely different
            }
            void SubRoutineQ() {
                // other functionality specific to this implementation
            }
        }

    with several implementations and the expectation of more to come. My initial thought was to save myself time re-writing unit tests with something like this:

        public abstract class MyInterfaceTester {
            protected MyInterface m_object;

            @Setup
            public void setUp() {
                m_object = getTestedImplementation();
            }

            public abstract MyInterface getTestedImplementation();

            @Test
            public void testMyInterfaceMethod1() {
                // use m_object to run tests
            }

            @Test
            public void testMyInterfaceMethod2() {
                // use m_object to run tests
            }
        }

    which I could then subclass easily to test the implementation-specific additional methods, like so:

        public class MyImplementation1Tester extends MyInterfaceTester {
            public MyInterface getTestedImplementation() {
                return new MyImplementation1();
            }

            @Test
            public void testSubRoutineP() {
                // use m_object to run tests
            }
        }

    and likewise for implementation 2 onwards. So my question really is: is there any reason not to do this? JUnit seems to like it just fine, and it serves my needs, but I haven't really seen anything like it in any of the unit testing books and examples I've been reading. Is there some best practice I'm unwittingly violating? Am I setting myself up for heartache down the road? Is there simply a much better way out there I haven't considered? Thanks for any help.

    Read the article

  • Python, unit test - Pass command line arguments to setUp of unittest.TestCase

    - by sberry2A
    I have a script that acts as a wrapper for some unit tests written using the Python unittest module. In addition to cleaning up some files, creating an output stream and generating some code, it loads test cases into a suite using unittest.TestLoader().loadTestsFromTestCase(). I am already using optparse to pull out several command-line arguments used for determining the output location, whether to regenerate code and whether to do some clean up. I also want to pass a configuration variable, namely an endpoint URI, for use within the test cases. I realize I can add an OptionParser to the setUp method of the TestCase, but I want to instead pass the option to setUp. Is this possible using loadTestsFromTestCase()? I can iterate over the returned TestSuite's TestCases, but can I manually call setUp on the TestCases?

    ** EDIT ** I wanted to point out that I am able to pass the arguments to setUp if I iterate over the tests and call setUp manually, like:

        (options, args) = op.parse_args()
        suite = unittest.TestLoader().loadTestsFromTestCase(MyTests.TestSOAPFunctions)
        for test in suite:
            test.setUp(options.soap_uri)

    However, I am using xmlrunner for this and its run method takes a TestSuite as an argument. I assume it will run the setUp method itself, so I would need the parameters available within the XMLTestRunner. I hope this makes sense.

    Read the article

  • Spring Test / JUnit problem - unable to load application context

    - by HDave
    I am using Spring for the first time and must be doing something wrong. I have a project with several bean implementations, and now I am trying to create a test class with Spring Test and JUnit. I am trying to use Spring Test to inject a customized bean into the test class. Here is my test-applicationContext.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns=".............">
            <bean id="MyUuidFactory" class="com.myapp.UuidFactory" scope="singleton">
                <property name="typeIdentifier" value="CLS" />
            </bean>
            <bean id="ThingyImplTest" class="com.myapp.ThingyImplTest" scope="singleton">
                <property name="uuidFactory">
                    <idref local="MyUuidFactory" />
                </property>
            </bean>
        </beans>

    The injection of the MyUuidFactory instance goes along with the following code from within the test class:

        private UuidFactory uuidFactory;

        public void setUuidFactory(UuidFactory uuidFactory) {
            this.uuidFactory = uuidFactory;
        }

    However, when I go to run the test (in Eclipse or on the command line) I get the following error (stack trace omitted for brevity):

        Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name
        'MyImplTest' defined in class path resource [test-applicationContext.xml]: Initialization of bean
        failed; nested exception is org.springframework.beans.ConversionNotSupportedException: Failed to
        convert property value of type 'java.lang.String' to required type 'com.myapp.UuidFactory' for
        property 'uuidFactory'; nested exception is java.lang.IllegalStateException: Cannot convert value
        of type [java.lang.String] to required type [com.myapp.UuidFactory] for property 'uuidFactory':
        no matching editors or conversion strategy found

    The funny thing is, the Eclipse/Spring XML editor shows errors if I misspell any of the types or idrefs. If I leave the bean in but comment out the dependency injection, everything works until I get a NullPointerException while running the test... which makes sense.
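
    The nested exception shows the container handing a String to the uuidFactory property, which is what <idref> produces: it injects the referenced bean's id as a String, whereas <ref> injects the bean itself. For comparison, a minimal sketch of letting the Spring TestContext framework inject the bean straight into the test class (class names follow the question; the annotations are standard Spring Test):

        import static org.junit.Assert.assertNotNull;
        import com.myapp.UuidFactory;
        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.test.context.ContextConfiguration;
        import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = "classpath:test-applicationContext.xml")
        public class UuidFactoryWiringTest {

            @Autowired
            private UuidFactory uuidFactory;   // injected by type from the test application context

            @Test
            public void uuidFactoryIsWired() {
                assertNotNull(uuidFactory);
            }
        }

    This is only a wiring check, but it shows the usual shape of a Spring/JUnit test: the context is loaded once per test class and dependencies arrive through @Autowired rather than hand-written setters in the test.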

    Read the article

  • Unit testing Monorail's RenderText method

    - by MikeWyatt
    I'm doing some maintenance on an older web application written in Monorail v1.0.3. I want to unit test an action that uses RenderText(). How do I extract the content in my test? Reading from controller.Response.OutputStream doesn't work, since the response stream is either not set up properly in PrepareController(), or is closed in RenderText().

    Example Action:

        public DeleteFoo( int id )
        {
            var success = false;
            var foo = Service.Get<Foo>( id );
            if( foo != null && CurrentUser.IsInRole( "CanDeleteFoo" ) )
            {
                Service.Delete<Foo>( id );
                success = true;
            }
            CancelView();
            RenderText( "{ success: " + success + " }" );
        }

    Example Test (using Moq):

        [Test]
        public void DeleteFoo()
        {
            var controller = new FooController ();
            PrepareController ( controller );
            var foo = new Foo { Id = 123 };
            var mockService = new Mock < Service > ();
            mockService.Setup ( s => s.Get<Foo> ( foo.Id ) ).Returns ( foo );
            controller.Service = mockService.Object;
            controller.DeleteTicket ( foo.Id );
            mockService.Verify ( s => s.Delete<Foo> ( foo.Id ) );
            Assert.AreEqual ( "{success:true}", GetResponse ( Response ) );
        }

        // response.OutputStream.Seek throws a "System.ObjectDisposedException: Cannot access a closed Stream." exception
        private static string GetResponse( IResponse response )
        {
            response.OutputStream.Seek ( 0, SeekOrigin.Begin );
            var buffer = new byte[response.OutputStream.Length];
            response.OutputStream.Read ( buffer, 0, buffer.Length );
            return Encoding.ASCII.GetString ( buffer );
        }

    Read the article

  • Juju stuck in "pending" state when using LXC

    - by Andre
    So I'm trying to get started with Juju, and tried to do this locally using LXC. I followed the instructions here: How do I configure juju for local usage? Unfortunately this doesn't seem to work for me. status shows the following: $ juju status machines: 0: agent-state: running dns-name: localhost instance-id: local instance-state: running services: mysql: charm: cs:precise/mysql-1 relations: db: - wordpress units: mysql/0: agent-state: pending machine: 0 public-address: null wordpress: charm: cs:precise/wordpress-0 exposed: true relations: db: - mysql units: wordpress/0: agent-state: pending machine: 0 open-ports: [] public-address: null 2012-05-10 14:09:38,155 INFO 'status' command finished successfully As you can see the agent-state is 'pending' and there is no public address where I'm able to access the newly created site. Am I missing something here? UPDATE: Tried destroying the environment an doing everything again (multiple times). This is the output for debug-log: ~$ juju debug-log 2012-05-11 08:50:23,790 INFO Enabling distributed debug log. 2012-05-11 08:50:23,806 INFO Tailing logs - Ctrl-C to stop. 2012-05-11 08:50:42,338 Machine:0: juju.agents.machine DEBUG: Units changed old:set([]) new:set(['mysql/0']) 2012-05-11 08:50:42,339 Machine:0: juju.agents.machine DEBUG: Starting service unit: mysql/0 ... 2012-05-11 08:50:42,459 Machine:0: unit.deploy DEBUG: Downloading charm cs:precise/mysql-1 to /home/andre/.juju/data/andre-local/charms 2012-05-11 08:50:42,620 Machine:0: unit.deploy DEBUG: Using <juju.machine.unit.UnitContainerDeployment object at 0x9c54b6c> for mysql/0 in /home/andre/.juju/data/andre-local 2012-05-11 08:50:42,648 Machine:0: unit.deploy DEBUG: Starting service unit mysql/0... 2012-05-11 08:50:42,649 Machine:0: unit.deploy DEBUG: Creating master container... 2012-05-11 08:54:33,992 Machine:0: unit.deploy DEBUG: Created master container andre-local-0-template 2012-05-11 08:54:33,993 Machine:0: unit.deploy INFO: Creating container mysql-0... 2012-05-11 08:56:18,760 Machine:0: unit.deploy INFO: Container created for mysql/0 2012-05-11 08:56:19,466 Machine:0: unit.deploy DEBUG: Charm extracted into container 2012-05-11 08:56:19,569 Machine:0: unit.deploy DEBUG: Starting container... 2012-05-11 08:56:22,707 Machine:0: unit.deploy INFO: Started container for mysql/0 2012-05-11 08:56:22,707 Machine:0: unit.deploy INFO: Started service unit mysql/0 2012-05-11 08:56:23,012 Machine:0: juju.agents.machine DEBUG: Units changed old:set(['mysql/0']) new:set(['wordpress/0', 'mysql/0']) 2012-05-11 08:56:23,039 Machine:0: juju.agents.machine DEBUG: Starting service unit: wordpress/0 ... 2012-05-11 08:56:23,154 Machine:0: unit.deploy DEBUG: Downloading charm cs:precise/wordpress-0 to /home/andre/.juju/data/andre-local/charms 2012-05-11 08:56:23,396 Machine:0: unit.deploy DEBUG: Using <juju.machine.unit.UnitContainerDeployment object at 0x9c519cc> for wordpress/0 in /home/andre/.juju/data/andre-local 2012-05-11 08:56:23,620 Machine:0: unit.deploy DEBUG: Starting service unit wordpress/0... 2012-05-11 08:56:23,621 Machine:0: unit.deploy INFO: Creating container wordpress-0... 2012-05-11 08:58:24,739 Machine:0: unit.deploy INFO: Container created for wordpress/0 2012-05-11 08:58:25,163 Machine:0: unit.deploy DEBUG: Charm extracted into container 2012-05-11 08:58:25,397 Machine:0: unit.deploy DEBUG: Starting container... 
2012-05-11 08:58:27,982 Machine:0: unit.deploy INFO: Started container for wordpress/0 2012-05-11 08:58:27,983 Machine:0: unit.deploy INFO: Started service unit wordpress/0 This is the result for the status command (with verbose flag): ~$ juju -v status 2012-05-11 08:51:53,464 DEBUG Initializing juju status runtime 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@658: Client environment:zookeeper.version=zookeeper C client 3.3.5 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@662: Client environment:host.name=andre-ufo 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@669: Client environment:os.name=Linux 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@670: Client environment:os.arch=3.2.0-24-generic-pae 2012-05-11 08:51:53,625:4030(0xb7345b00):ZOO_INFO@log_env@671: Client environment:os.version=#37-Ubuntu SMP Wed Apr 25 10:47:59 UTC 2012 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@679: Client environment:user.name=andre 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@687: Client environment:user.home=/home/andre 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@log_env@699: Client environment:user.dir=/home/andre 2012-05-11 08:51:53,626:4030(0xb7345b00):ZOO_INFO@zookeeper_init@727: Initiating client connection, host=192.168.122.1:41779 sessionTimeout=10000 watcher=0xb7780620 sessionId=0 sessionPasswd=<null> context=0x9242ee8 flags=0 2012-05-11 08:51:53,627:4030(0xb6b90b40):ZOO_INFO@check_events@1585: initiated connection to server [192.168.122.1:41779] 2012-05-11 08:51:53,649:4030(0xb6b90b40):ZOO_INFO@check_events@1632: session establishment complete on server [192.168.122.1:41779], sessionId=0x1373ae057d90007, negotiated timeout=10000 2012-05-11 08:51:53,651 DEBUG Environment is initialized. machines: 0: agent-state: running dns-name: localhost instance-id: local instance-state: running services: mysql: charm: cs:precise/mysql-1 relations: db: - wordpress units: mysql/0: agent-state: pending machine: 0 public-address: null wordpress: charm: cs:precise/wordpress-0 relations: db: - mysql units: wordpress/0: agent-state: pending machine: 0 public-address: null

    Read the article

  • The Agile Engineering Rules of Test Code

    - by Malcolm Anderson
    Lots of test code gets written, a lot of it is waste, some of it is well engineered waste. Companies hire Agile Engineering Coaches because agile engineering is easy to do wrong. Very easy. So here's a quick tool you can use for self-coaching. It's what I call "The Agile Engineering Rules of Test Code", and it's going to act as a sort of table of contents for some future posts.

    The Agile Engineering Rules of Test Code
    Malcolm Anderson

    - Test code is not throw-away code
    - Test code is production code

    8 questions to determine the quality of your test code:

    1. Does the test code have appropriate comments?
    2. Is the test code executed as part of the build?
    3. Every time?
    4. Is the test code getting refactored?
    5. Does everyone use the same test code?
    6. Can the test code be described as "Well Maintained"?
    7. Can a bright six year old tell you why any particular test failed?
    8. Are the tests independent and infinitely repeatable?

    Read the article

  • castle monorail unit test rendertext

    - by MikeWyatt
    I'm doing some maintenance on an older web application written in Monorail v1.0.3. I want to unit test an action that uses RenderText(). How do I extract the content in my test? Reading from controller.Response.OutputStream doesn't work, since the response stream is either not setup properly in PrepareController(), or is closed in RenderText().

    Example Action:

        public DeleteFoo( int id )
        {
            var success = false;
            var foo = Service.Get<Foo>( id );
            if( foo != null && CurrentUser.IsInRole( "CanDeleteFoo" ) )
            {
                Service.Delete<Foo>( id );
                success = true;
            }
            CancelView();
            RenderText( "{ success: " + success + " }" );
        }

    Example Test (using Moq):

        [Test]
        public void DeleteFoo()
        {
            var controller = new MyController ();
            PrepareController ( controller );
            var foo = new Foo { Id = 123 };
            var mockService = new Mock < Service > ();
            mockService.Setup ( s => s.Get<Foo> ( foo.Id ) ).Returns ( foo );
            controller.Service = mockService.Object;
            controller.DeleteTicket ( ticket.Id );
            mockService.Verify ( s => s.Delete<Foo> ( foo.Id ) );
            Assert.AreEqual ( "{success:true}", GetResponse ( Response ) );
        }

        // response.OutputStream.Seek throws an "System.ObjectDisposedException: Cannot access a closed Stream." exception
        private static string GetResponse( IResponse response )
        {
            response.OutputStream.Seek ( 0, SeekOrigin.Begin );
            var buffer = new byte[response.OutputStream.Length];
            response.OutputStream.Read ( buffer, 0, buffer.Length );
            return Encoding.ASCII.GetString ( buffer );
        }

    Read the article

  • How do I unit test a finalizer?

    - by GraemeF
    I have the following class which is a decorator for an IDisposable object (I have omitted the stuff it adds) which itself implements IDisposable using a common pattern:

        public class DisposableDecorator : IDisposable
        {
            private readonly IDisposable _innerDisposable;

            public DisposableDecorator(IDisposable innerDisposable)
            {
                _innerDisposable = innerDisposable;
            }

            #region IDisposable Members

            public void Dispose()
            {
                Dispose(true);
                GC.SuppressFinalize(this);
            }

            #endregion

            ~DisposableDecorator()
            {
                Dispose(false);
            }

            protected virtual void Dispose(bool disposing)
            {
                if (disposing)
                    _innerDisposable.Dispose();
            }
        }

    I can easily test that innerDisposable is disposed when Dispose() is called:

        [Test]
        public void Dispose__DisposesInnerDisposable()
        {
            var mockInnerDisposable = new Mock<IDisposable>();
            new DisposableDecorator(mockInnerDisposable.Object).Dispose();
            mockInnerDisposable.Verify(x => x.Dispose());
        }

    But how do I write a test to make sure innerDisposable does not get disposed by the finalizer? I want to write something like this, but it fails, presumably because the finalizer hasn't been called by the GC thread:

        [Test]
        public void Finalizer__DoesNotDisposeInnerDisposable()
        {
            var mockInnerDisposable = new Mock<IDisposable>();
            new DisposableDecorator(mockInnerDisposable.Object);
            GC.Collect();
            mockInnerDisposable.Verify(x => x.Dispose(), Times.Never());
        }

    Read the article

  • Unit testing with Mocks when SUT is leveraging Task Parallel Library

    - by StevenH
    I am trying to unit test / verify that a method is being called on a dependency by the system under test. The dependency is IFoo. The dependent class is IBar, implemented as Bar. Bar will call Start() on IFoo in a new (System.Threading.Tasks.)Task when Start() is called on a Bar instance.

    Unit Test (Moq):

        [Test]
        public void StartBar_ShouldCallStartOnAllFoo_WhenFoosExist()
        {
            //ARRANGE
            //Create a foo, and setup expectation
            var mockFoo0 = new Mock<IFoo>();
            mockFoo0.Setup(foo => foo.Start());
            var mockFoo1 = new Mock<IFoo>();
            mockFoo1.Setup(foo => foo.Start());

            //Add mock objects to a collection
            var foos = new List<IFoo> { mockFoo0.Object, mockFoo1.Object };

            IBar sutBar = new Bar(foos);

            //ACT
            sutBar.Start(); //Should call mockFoo.Start()

            //ASSERT
            mockFoo0.VerifyAll();
            mockFoo1.VerifyAll();
        }

    Implementation of IBar as Bar:

        class Bar : IBar
        {
            private IEnumerable<IFoo> Foos { get; set; }

            public Bar(IEnumerable<IFoo> foos)
            {
                Foos = foos;
            }

            public void Start()
            {
                foreach(var foo in Foos)
                {
                    Task.Factory.StartNew( () =>
                    {
                        foo.Start();
                    });
                }
            }
        }

    It appears that the issue is due to the fact that the call to foo.Start() is taking place on another thread (/task), and the mock (Moq framework) can't detect it. But I could be wrong. Thanks
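
    The diagnosis sounds right: Verify can run before the tasks started by Start() have executed. The usual cure is to give the test something to wait on before verifying. A Java/ExecutorService analogue of the same situation, using a hand-rolled fake with a latch (a sketch, not the original C# code; the Foo/Bar names are kept for parallelism):

        import static org.junit.Assert.assertTrue;
        import java.util.Arrays;
        import java.util.List;
        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;
        import org.junit.Test;

        public class BarTest {

            interface Foo { void start(); }

            // System under test: starts each Foo on a background thread.
            static class Bar {
                private final List<Foo> foos;
                private final ExecutorService executor = Executors.newCachedThreadPool();
                Bar(List<Foo> foos) { this.foos = foos; }
                void start() {
                    for (Foo foo : foos) {
                        executor.submit(foo::start);
                    }
                }
            }

            // Fake dependency that records the call by releasing a latch.
            static class LatchedFoo implements Foo {
                final CountDownLatch called = new CountDownLatch(1);
                public void start() { called.countDown(); }
            }

            @Test
            public void startCallsStartOnEveryFoo() throws Exception {
                LatchedFoo foo0 = new LatchedFoo();
                LatchedFoo foo1 = new LatchedFoo();
                Bar bar = new Bar(Arrays.<Foo>asList(foo0, foo1));

                bar.start();

                // Wait for the background tasks instead of asserting immediately.
                assertTrue(foo0.called.await(2, TimeUnit.SECONDS));
                assertTrue(foo1.called.await(2, TimeUnit.SECONDS));
            }
        }

    In the Moq version the equivalent is typically a Setup(...).Callback(...) that signals an event the test waits on before calling Verify, or injecting a scheduler/factory the test can drive synchronously.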

    Read the article

  • Why is testing MVC Views frowned upon?

    - by Peter Bernier
    I'm currently setting the groundwork for an ASP.NET MVC application, and I'm looking into what sort of unit tests I should be prepared to write. I've seen in multiple places people essentially saying 'don't bother testing your views; there's no logic, it's trivial, and it will be covered by an integration test'. I don't understand how this has become the accepted wisdom. Integration tests serve an entirely different purpose than unit tests. If I break something, I don't want to know a half-hour later when my integration tests break; I want to know immediately.

    Sample scenario: Let's say we're dealing with a standard CRUD app with a Customer entity. The customer has a name and an address. At each level of testing, I want to verify that the Customer retrieval logic gets both the name and the address properly. To unit-test the repository, I write an integration test to hit the database. To unit-test the business rules, I mock out the repository, feed the business rules appropriate data, and verify my expected results are returned.

    What I'd like to do: To unit-test the UI, I mock out the business rules, set up my expected customer instance, render the view, and verify that the view contains the appropriate values for the instance I specified.

    What I'm stuck doing: To unit-test the repository, I write an integration test, set up an appropriate login, create the required data in the database, open a browser, navigate to the customer, and verify the resulting page contains the appropriate values for the instance I specified.

    I realize that there is overlap between the two scenarios discussed above, but the key difference is the time and effort required to set up and execute the tests. If I (or another dev) removes the address field from the view, I don't want to wait for the integration test to discover this. I want it discovered and flagged in a unit test that gets run multiple times daily. I get the feeling that I'm just not grasping some key concept. Can someone explain why wanting immediate test feedback on the validity of an MVC view is a bad thing? (Or if not bad, then not the expected way to get said feedback.)

    Read the article

  • Sensible unit test possible?

    - by nkr1pt
    Could a sensible unit test be written for this code, which extracts a rar archive by delegating to a capable tool on the host system if one exists? I can write a test case based on the fact that my machine runs Linux and has the unrar tool installed, but if another developer who runs Windows checked out the code, the test would fail, although there would be nothing wrong with the extractor code. I need to find a way to write a meaningful test which is not bound to the system and the unrar tool installed on it. How would you tackle this?

        public class Extractor {

            private EventBus eventBus;
            private ExtractCommand[] linuxExtractCommands = new ExtractCommand[]{new LinuxUnrarCommand()};
            private ExtractCommand[] windowsExtractCommands = new ExtractCommand[]{};
            private ExtractCommand[] macExtractCommands = new ExtractCommand[]{};

            @Inject
            public Extractor(EventBus eventBus) {
                this.eventBus = eventBus;
            }

            public boolean extract(DownloadCandidate downloadCandidate) {
                for (ExtractCommand command : getSystemSpecificExtractCommands()) {
                    if (command.extract(downloadCandidate)) {
                        eventBus.fireEvent(this, new ExtractCompletedEvent());
                        return true;
                    }
                }
                eventBus.fireEvent(this, new ExtractFailedEvent());
                return false;
            }

            private ExtractCommand[] getSystemSpecificExtractCommands() {
                String os = System.getProperty("os.name");
                if (Pattern.compile("linux", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
                    return linuxExtractCommands;
                } else if (Pattern.compile("windows", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
                    return windowsExtractCommands;
                } else if (Pattern.compile("mac os x", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
                    return macExtractCommands;
                }
                return null;
            }
        }
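
    One way to make this testable on any machine is to split the two concerns apart: which commands apply to a given OS name (pure logic that can run anywhere) versus whether the external unrar binary actually works (an integration test that can be skipped where the tool is missing). A rough sketch of the first half, with the OS name passed in as a parameter instead of read inside the method (the String return values are simplified stand-ins for the command arrays):

        import static org.junit.Assert.assertEquals;
        import static org.junit.Assert.assertNull;
        import java.util.regex.Pattern;
        import org.junit.Test;

        public class ExtractCommandSelectionTest {

            // Extracted, parameterised version of getSystemSpecificExtractCommands:
            // the OS name arrives as an argument, so tests need no System.getProperty call.
            static String selectPlatform(String osName) {
                if (Pattern.compile("linux", Pattern.CASE_INSENSITIVE).matcher(osName).find()) {
                    return "linux";
                } else if (Pattern.compile("windows", Pattern.CASE_INSENSITIVE).matcher(osName).find()) {
                    return "windows";
                } else if (Pattern.compile("mac os x", Pattern.CASE_INSENSITIVE).matcher(osName).find()) {
                    return "mac";
                }
                return null;
            }

            @Test
            public void selectsLinuxCommandsOnLinux() {
                assertEquals("linux", selectPlatform("Linux"));
            }

            @Test
            public void selectsWindowsCommandsOnWindows() {
                assertEquals("windows", selectPlatform("Windows 7"));
            }

            @Test
            public void unknownOsYieldsNothing() {
                assertNull(selectPlatform("Solaris"));
            }
        }

    The extract() flow itself can then be covered by passing in stub ExtractCommand instances and asserting that ExtractCompletedEvent or ExtractFailedEvent is fired on the EventBus, leaving only one optional, environment-guarded test that really shells out to unrar.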

    Read the article

  • Visual Studio Unit Test failure to start

    - by swmi
    Hi, I am having an issue when starting tests in debug mode in Visual Studio 2008 Team Test, where it gives the following error: "Failed to queue test run '{user@machinename}': Object reference not set to an instance of an object." I googled for the error but no joy. I don't even understand what it means, as it is too brief. Has anyone come across this? Note that I can run tests fine if I am not debugging, and I get the same error irrespective of the test I run. Thank you, Swati

    ETA: Being new to Visual Studio Team Test, I didn't know there was a better exception log than what I was seeing. Anyhow, here it is:

        <Exception> System.NullReferenceException: Object reference not set to an instance of an object.
            at Microsoft.VisualStudio.TestTools.TestCaseManagement.QualityToolsPackage.ShowToolWindow[T](T& toolWindow, String errorMessage, Boolean show)
            at Microsoft.VisualStudio.TestTools.TestCaseManagement.QualityToolsPackage.OpenTestResultsToolWindow()
            at Microsoft.VisualStudio.TestTools.TestCaseManagement.SolutionIntegrationManager.DebugTarget(DebugInfo debugInfo, Boolean prepareEnvironment)
            at Microsoft.VisualStudio.TestTools.TestManagement.DebugProcessLauncher.Launch(String exeFileName, String args, String workingDir, EventHandler processExitedHandler, Process& process)
            at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.StartProcess(TestRun run)
            at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.RestartProcess(TestRun run)
            at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.PrepareProcess(TestRun run)
            at Microsoft.VisualStudio.TestTools.TestManagement.LocalControllerProxy.InitializeController(TestRun run)
            at Microsoft.VisualStudio.TestTools.TestManagement.ControllerProxy.QueueTestRunWorker(Object state)
        </Exception>

    Read the article

  • Copy-and-Pasted Test Code: How Bad is This?

    - by joshin4colours
    My current job is mostly writing GUI test code for various applications that we work on. However, I find that I tend to copy and paste a lot of code within tests. The reason for this is that the areas I'm testing tend to be similar enough to need repetition but not quite similar enough to encapsulate code into methods or objects. I find that when I try to use classes or methods more extensively, tests become more cumbersome to maintain and sometimes outright difficult to write in the first place. Instead, I usually copy a big chunk of test code from one section and paste it to another, and make any minor changes I need. I don't use more structured ways of coding, such as using more OO-principles or functions. Do other coders feel this way when writing test code? Obviously I want to follow DRY and YAGNI principles, but I find that test code (automated test code for GUI testing anyway) can make these principles tough to follow. Or do I just need more coding practice and a better overall system of doing things? EDIT: The tool I'm using is SilkTest, which is in a proprietary language called 4Test. As well, these tests are mostly for Windows desktop applications, but I also have tested web apps using this setup as well.

    Read the article

< Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >