Search Results

Search found 108585 results on 4344 pages for 'test user'.


  • User-generated articles: how to handle the meta description?

    - by Tom Gullen
    If users submit a lot of good-quality articles on the site, what is the best way to approach the meta description tag? I see two options:

    1. Provide a description box and rely on users to fill it in sensibly and well.
    2. Omit the meta description entirely.

    Option 1 is bad initially, but I'm willing to put in the time to edit and check every description on an ongoing basis. Option 2 is what the Stack Exchange sites do: it lets the search bots extract the most relevant part of the page for the SERP.

    Thoughts? Ideas? My feeling is that, at the end of the day, a badly formed description tag is more damaging than no tag at all. I don't expect the content to ever become unwieldy or too much to manage.
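    If option 1 is chosen, part of the editing burden can be shifted to code before a human ever reviews the text. A minimal sketch in Python (the function name and the 155-character budget are assumptions for illustration, not from the question):

        import html
        import re

        MAX_DESCRIPTION_LENGTH = 155  # assumed SERP snippet budget, tune as needed

        def normalize_description(raw: str) -> str:
            """Collapse whitespace and truncate on a word boundary so a
            user-supplied description is at least well-formed."""
            text = re.sub(r"\s+", " ", raw).strip()
            if len(text) > MAX_DESCRIPTION_LENGTH:
                text = text[:MAX_DESCRIPTION_LENGTH].rsplit(" ", 1)[0] + "..."
            return html.escape(text)  # safe to embed in a meta description attribute

        print(normalize_description("A  long\nuser-written   summary " * 20))

    A filter like this does not make a bad description good, but it rules out the malformed-tag case the question worries about.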


  • Term for unit testing that separates test logic from test result data

    - by mario
    So I'm not doing any unit testing yet. But I've had an idea to make it more appropriate for my field of use, and it's not clear whether something like this already exists or, if it does, what it would be called.

    Ordinary unit tests combine the test logic and the expected outcome. In essence the testing framework only checks booleans (did this match, did the expected result occur). More generally, the test code itself references the audited functions and also spells out the expected result values, like so:

        unit::assert( test_me() == 17 )

    What I'm looking for is a separation of concerns. The test itself should contain only the tested logic. The outcome and result data should be handled by the unit-testing or assertion framework. For example:

        unit::probe( test_me() )

    Here the probe doubles as a collector on the first run, and afterwards as a verification method. The expected 17 is not mentioned in the test code; it is stored or managed elsewhere.

    What is this scheme called? Or what would you call it? I hope I can find some actual implementations with the proper terminology.

    Obviously such a pattern is unfit for TDD; it's strictly for regression testing. Also obviously, it cannot be used for all cases: only the simpler test subjects can be analyzed that way, and for anything else the ordinary unit-test setup and assertion steps are required. And yes, this could be accomplished manually by crafting a ResultWhateverObject, but that would still require hardwiring it to the test logic.

    Also keep in mind that I'm asking about scripting languages, not Java. I'm aware that the xUnit pattern originates there, and why it is hence as elaborate as it is. By the way, I've discovered one test-execution framework which allows simple test notations to be shortened to:

        test_me(); // 17

    While the result data is thus no longer coded in (it's a comment), that's still not a complete separation, and of course it works only for scalar results.
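    What is described here is usually called snapshot testing (also golden master, approval, or characterization testing), and as the question notes it is chiefly a regression-testing technique. A minimal self-contained Python sketch of the probe idea; the file name and the probe() API are hypothetical, not taken from any particular framework:

        import json
        import os

        SNAPSHOT_FILE = "snapshots.json"  # hypothetical store for expected results

        def probe(name, value):
            """Record value on the first run; verify it on every later run."""
            snapshots = {}
            if os.path.exists(SNAPSHOT_FILE):
                with open(SNAPSHOT_FILE) as f:
                    snapshots = json.load(f)
            if name not in snapshots:  # first run: collect
                snapshots[name] = value
                with open(SNAPSHOT_FILE, "w") as f:
                    json.dump(snapshots, f, indent=2)
            else:  # later runs: verify
                assert snapshots[name] == value, (
                    f"{name}: expected {snapshots[name]!r}, got {value!r}")

        def test_me():
            return 17

        probe("test_me", test_me())  # the expected 17 lives in snapshots.json

    The JSON store also illustrates the stated limitation: only results that serialize cleanly can be probed this way. Intentional behavior changes are handled by deleting (re-approving) the stored snapshot.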


  • User-generated / user-specific functions

    - by pedalpete
    I'm looking for the most elegant and secure way to do the following. I have a calendar and groups of users. Users can add events to specific days on the calendar and specify how long each event lasts.

    I've had a few requests from users to add the ability to define that events of a specific length include a break of a certain amount of time, or that a specific amount of time must be left between events. For example: if an event is 2 hours, include a 20-minute break; for each event, require 30 minutes before the start of the next event. The same group that asks for a 2-hour event to include a 20-minute break could also require that a 3-hour event include a 30-minute break.

    In the end, what the users want is an elapsed time excluding breaks, calculated for them. Currently I provide a total elapsed time, but they are looking for a running time. Each of these requests differs per group: one group may want a 30-minute break in a 2-hour event, another only 10 minutes in each 3-hour event.

    I was thinking I could write the functions into a PHP file per group, then include that file, do the calculations in PHP, and return a calculated total to the user, but something about that doesn't sit right with me. Another option is to output the group's functions as JavaScript and run them client-side, since I already return the duration of the event; but where a user belongs to more than one group with different rules, this could get rather messy.

    I currently store the start and end time in the database, but no durations, and I don't think I should store the calculated totals in the database either, because if a group changes its calculations I'd need to change the values throughout the database. Is there a better way of doing this? I would just store the variables in MySQL, but I don't see how I can then tell MySQL to calculate based on those variables. I'm REALLY lost here. Any suggestions? I'm hoping somebody has done something similar and can provide some insight into the best direction.

    If it helps, my table contains eventid, user, group, startDate, startTime, endDate, endTime, type. The JSON for the event which I return to the user is built like this (PHP string concatenation):

        {"eventid":"'.$eventId.'", "user":"'.$userId.'", "group":"'.$groupId.'",
         "type":"'.$type.'", "startDate":"'.$startDate.'", "startTime":"'.$startTime.'",
         "endDate":"'.$endDate.'", "endTime":"'.$endTime.'",
         "durationLength":"'.$duration.'", "durationHrs":"'.$durationHrs.'"}

    where, for example, durationLength is 2.5 and durationHrs is 2:30.
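    Since each rule reduces to data (an event-length threshold plus a break length), one option is to store the rules as rows in a per-group table and apply them in a single shared function, rather than generating per-group code. A minimal Python sketch under that assumption (the table and field names are hypothetical):

        from datetime import timedelta

        # Hypothetical rows loaded from a table such as
        # group_rules(groupId, min_event_hours, break_minutes)
        rules = [
            {"min_event_hours": 2, "break_minutes": 20},
            {"min_event_hours": 3, "break_minutes": 30},
        ]

        def running_time(elapsed: timedelta, rules) -> timedelta:
            """Elapsed time minus the break for the largest matching threshold."""
            hours = elapsed.total_seconds() / 3600
            for rule in sorted(rules, key=lambda r: r["min_event_hours"], reverse=True):
                if hours >= rule["min_event_hours"]:
                    return elapsed - timedelta(minutes=rule["break_minutes"])
            return elapsed

        print(running_time(timedelta(hours=2), rules))  # 1:40:00

    Because the numbers live in a table and the calculation lives in one place, a group changing its rule only affects future calculations, which fits the decision above not to store derived totals in the database.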


  • Tips on creating user interfaces and optimizing the user experience

    - by Saif Bechan
    I am currently working on a project where a lot of user interaction is going to take place. There is also a commercial side, as people can buy certain items and services. In my opinion a good blend of user interface, speed, and security is essential for these types of websites.

    It is fairly easy nowadays to use Ajax and JavaScript to do almost everything, as there are a lot of libraries available, such as jQuery and others. But this can bring performance and incompatibility issues, and those can lead to users simply going to the next website.

    The overall look of the website is important too: where to place certain buttons; where to place certain types of articles, such as FAQ and support; where and how to display error messages so that the user sees them without being bothered by them. An overall color scheme matters as well.

    The basic question is: how do you create an interface that encourages a user to buy/use your services?

    I know psychology also plays a huge role in how users interact with your website. The color scheme, for example, is important; when the colors on a website are irritating, you just want to click away. I have not found any articles that explain those concepts. Does anyone have any tips and/or resources with articles that guide you in making the correct choices for your website?


  • Windows service fails to start with custom user until started once with local user

    - by Gauls
    All of a sudden, my Windows service does not start after installation; the Service Control Manager reports "Some services stop automatically if they have no work to do." The service is configured to log on as a custom user. If I change the logon setting to use the Local System account, the service starts fine. Then, when I go back and change the logon setting to the custom account again (a local user in the Users group), the service also starts. Why doesn't it work in the first place?


  • Apache Simple Configuration Issue: per-user directory is accessing /~user instead of ~user

    - by Huckphin
    Hello. I am just getting Apache 2.2 running on Fedora 13 Beta 64-bit, and I am running into issues setting up my per-user directory. The goal is to make localhost/~user map to /home/user/public_html. I think the permissions are right: /home/user is 755, /home/user/public_html is 755, and everything inside /home/user/public_html is recursively 777.

    My mod_userdir configuration looks like this:

        <IfModule mod_userdir.c>
            #
            # UserDir is disabled by default since it can confirm the presence
            # of a username on the system (depending on home directory
            # permissions).
            #
            UserDir disabled root
            UserDir enabled huckphin

            #
            # To enable requests to /~user/ to serve the user's public_html
            # directory, remove the "UserDir disabled" line above, and uncomment
            # the following line instead:
            #
            UserDir public_html
        </IfModule>

    The error that I am seeing in the error log is this:

        [Sat May 15 09:54:29 2010] [error] [client 127.0.0.1] (13)Permission denied: access to /~huckphin/index.html denied

    When I log in as the apache user, I know that /~huckphin does not exist, and this is not what I want. I want it to be accessing ~huckphin, not /~huckphin. What do I need to change in my configuration for this to work?


  • Is your test method self-validating?

    - by mehfuzh
    Writing state-of-the-art unit tests that can validate every part of your framework is challenging and interesting at the same time; it's like becoming a samurai. One of the key concepts is to keep our tests in sync as the underlying code changes, and thus to break them down into the smallest units possible. This also means we should avoid multiple conditions embedded in a single test. Consider the following example of transferring funds:

        [Fact]
        public void ShouldAssertTranserFunds()
        {
            var currencyService = Mock.Create<ICurrencyService>();
            //// current rate
            Mock.Arrange(() => currencyService.GetConversionRate("AUS", "CAD")).Returns(0.88f);

            Account to = new Account { Currency = "AUS", Balance = 120 };
            Account from = new Account { Currency = "CAD" };

            AccountService accService = new AccountService(currencyService);

            Assert.Throws<InvalidOperationException>(() => accService.TranferFunds(to, from, 200f));

            accService.TranferFunds(to, from, 100f);

            Assert.Equal(from.Balance, 88);
            Assert.Equal(20, to.Balance);
        }

    At first look it seems OK, but on closer inspection it is actually doing two tasks in one test. First, with the Assert.Throws call, it tries to validate the exception for an invalid fund transfer; then it asserts that the currency conversion was made successfully. The name of the test itself is pretty vague.

    The first rule of writing unit tests is that a test's name should reflect the inner workings of the target code, so that it is self-explanatory at a glance. An obscure test-method name not only increases the chance of cluttering the test code, it also invites adding multiple paths to the test and eventually makes things as messy as possible. I would rather have two test methods that explicitly describe their intent and are more self-validating:

        ShouldThrowExceptionForInvalidTransferOperation
        ShouldAssertTransferForExpectedConversionRate

    This kind of breakdown also helps us pinpoint reported bugs easily, rather than wasting time debugging something more general, and it can minimize confusion among team members.

    Finally, we should always make our tests F.I.R.S.T. (Fast, Independent, Repeatable, Self-validating, Timely) [Bob Martin, Clean Code]. Only this will be enough to ensure that our tests are as simple and clean as possible.

    Hope that helps.
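    For contrast, here is a minimal sketch of the suggested split in Python with pytest rather than the C#/JustMock of the original; the classes are simplified in-memory stand-ins, and the fixed rate replaces the mocked ICurrencyService:

        import pytest

        class Account:
            def __init__(self, currency, balance=0.0):
                self.currency, self.balance = currency, balance

        class AccountService:
            def __init__(self, rate):  # rate stands in for ICurrencyService
                self.rate = rate

            def transfer_funds(self, src, dst, amount):
                if amount > src.balance:
                    raise ValueError("insufficient funds")
                src.balance -= amount
                dst.balance += amount * self.rate

        def test_should_throw_exception_for_invalid_transfer_operation():
            service = AccountService(rate=0.88)
            with pytest.raises(ValueError):
                service.transfer_funds(Account("AUS", 120), Account("CAD"), 200.0)

        def test_should_assert_transfer_for_expected_conversion_rate():
            src, dst = Account("AUS", 120), Account("CAD")
            AccountService(rate=0.88).transfer_funds(src, dst, 100.0)
            assert dst.balance == pytest.approx(88.0)
            assert src.balance == 20.0

    When either test fails, its name alone identifies the broken behavior, which is the pin-pointing benefit described above.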


  • Xuggler errors as soon as you import the Git project

    - by user3241507
    I downloaded the Xuggler source straight into Eclipse from its Git repository. But as soon as it loads, there are so many errors I don't know what to do. Most of them are "cannot be resolved" errors. A representative excerpt from the Problems view (Description / Resource / Path / Location / Type):

        The import org.junit cannot be resolved        AtomicIntegerTest.java  /xuggle-xuggler-main/test/src/com/xuggle/ferry    line 22  Java Problem
        The import junit cannot be resolved            AtomicIntegerTest.java  /xuggle-xuggler-main/test/src/com/xuggle/ferry    line 28  Java Problem
        The import org.slf4j cannot be resolved        AtomicIntegerTest.java  /xuggle-xuggler-main/test/src/com/xuggle/ferry    line 23  Java Problem
        TestCase cannot be resolved to a type          AtomicIntegerTest.java  /xuggle-xuggler-main/test/src/com/xuggle/ferry    line 30  Java Problem
        Test cannot be resolved to a type              AtomicIntegerTest.java  /xuggle-xuggler-main/test/src/com/xuggle/ferry    line 97  Java Problem
        The method assertTrue(boolean) is undefined
        for the type AtomicIntegerTest                 AtomicIntegerTest.java  /xuggle-xuggler-main/test/src/com/xuggle/ferry    line 94  Java Problem
        NameAwareTestClassRunner cannot be resolved    BufferTest.java         /xuggle-xuggler-main/test/src/com/xuggle/ferry    line 44  Java Problem
        The import org.junit cannot be resolved        AudioSamplesTest.java   /xuggle-xuggler-main/test/src/com/xuggle/xuggler  line 22  Java Problem
        Logger cannot be resolved to a type            AudioSamplesTest.java   /xuggle-xuggler-main/test/src/com/xuggle/xuggler  line 35  Java Problem

    The list continues in the same pattern for well over a hundred entries: in AtomicIntegerTest.java, BufferTest.java, and AudioSamplesTest.java, the JUnit imports and types (Test, Before, After, Ignore, TestCase), the assertion methods (assertTrue, assertEquals, assertNotNull, fail), and the SLF4J types (Logger, LoggerFactory) all fail to resolve.

    For a school project (final year in high school), I would like to build a simple live video streaming program like Skype, except not as complicated. Can anyone help me solve these errors? Or is there another platform I can use that would be better/easier?


  • py.test import context problems (causes Django unit test failure)

    - by dhill
    I made the following test:

        # main.py
        import imported
        print imported.f.__module__

        # imported.py
        def f(): pass

        # test_imported.py (py.test test case)
        import imported

        def test_imported():
            result = imported.f.__module__
            assert result == 'imported'

    Running python main.py gives me imported, but running py.test gives me an error, and the result value is moduletest.imported (moduletest is the name of the directory I keep the test in; it doesn't contain __init__.py, and moduletest is the only directory containing *.py files in ~/tmp). How can I fix the result value?

    The long story: I'm getting strange errors while testing a Django application. A call to reverse() (from django.urlresolvers) with a function object foo as its argument crashes in tests with NoReverseMatch: Reverse for 'site.app.views.foo'. The same call works inside the application. I checked, and in tests the name is converted to 'app.views.foo' (without the site prefix). I first suspected my customised test setup for Django, but then I made the test above.
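    The two values are consistent with Python importing the same file under two different module names: f.__module__ is set from the module's __name__ at definition time. A self-contained sketch of the effect (run it from an empty scratch directory; the files are created by the script):

        import os
        import sys

        # Recreate the layout from the question, as a package.
        os.makedirs("moduletest", exist_ok=True)
        open(os.path.join("moduletest", "__init__.py"), "w").close()
        with open(os.path.join("moduletest", "imported.py"), "w") as f:
            f.write("def f():\n    pass\n")

        sys.path.insert(0, "moduletest")            # lets "import imported" work
        import imported                             # imported as a top-level module
        from moduletest import imported as via_pkg  # imported via the package

        print(imported.f.__module__)   # imported
        print(via_pkg.f.__module__)    # moduletest.imported

    Which of the two names py.test ends up using depends on how it locates and imports the test file; the presence or absence of __init__.py and the directory py.test is launched from both matter, which is presumably why the test sees the package-qualified name.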


  • How long can a user remember what they were working on?

    - by GlenPeterson
    A web application lets the user browse its screens for future or past months. The time period the user is currently viewing follows the user through every screen of the system, but users can stay logged in for a month or more. After a certain period of inactivity, we will prompt the user:

        You were viewing November 2008 when you last clicked. Want to view the current (default) time period instead?

    How long between user clicks should we wait before showing this message? I'm guessing that somewhere between 30 minutes and 3 hours most people will forget what they were doing, but I'd love to have some data, or someone's experience, to base it on. Any other suggestions related to this issue?
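    Whatever threshold the answers suggest, the mechanics reduce to comparing a stored last-click timestamp against it on each request. A minimal server-side sketch in Python (the 90-minute value is a placeholder to be tuned, not a recommendation):

        from datetime import datetime, timedelta

        INACTIVITY_THRESHOLD = timedelta(minutes=90)  # placeholder, tune with data

        def should_offer_reset(last_click, viewed_period, default_period):
            """True when the user has been idle long enough that the remembered
            time period is probably stale and worth confirming."""
            idle = datetime.utcnow() - last_click
            return idle >= INACTIVITY_THRESHOLD and viewed_period != default_period

        # Idle for two hours while viewing November 2008 -> show the prompt.
        print(should_offer_reset(datetime.utcnow() - timedelta(hours=2),
                                 "2008-11", "2010-06"))  # True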


  • Can't build and run an Android test project created using "ant create test-project" when tested proj

    - by Mike
    I have a module that builds an app called MyApp, and another that builds some test cases for that app, called MyAppTests. They both build their own APKs, and they both work fine from within my IDE. I'd like to build them using Ant so that I can take advantage of continuous integration. Building the app module works fine; I'm having difficulty getting the test module to compile and run.

    Using Christopher's tip from a previous question, I used

        android create test-project -p MyAppTests -m ../MyApp -n MyAppTests

    to create the necessary build files to build and run my test project. This seems to work great (once I remove an unnecessary test case that it constructed for me and revert my AndroidManifest.xml to the one I was using before it got replaced by android create), but I have two problems.

    The first problem: the project doesn't compile because it's missing libraries.

        $ ant run-tests
        Buildfile: build.xml
        ...
        install:
            [echo] Installing /Users/mike/Projects/myapp/android/MyApp/bin/MyApp-debug.apk onto default emulator or device...
            [exec] Success
        ...
        compile:
            [javac] Compiling 5 source files to /Users/mike/Projects/myapp/android/MyAppTests/bin/classes
            [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:4: package roboguice.test does not exist
            [javac] import roboguice.test.RoboUnitTestCase;
            [javac] /Users/mike/Projects/myapp/android/MyAppTests/src/com/myapp/test/GsonTest.java:8: package com.google.gson does not exist
            [javac] import com.google.gson.JsonElement;
            ...
            [javac] 39 errors
            [javac] 6 warnings

        BUILD FAILED
        /opt/local/android-sdk-mac/platforms/android-1.6/templates/android_rules.xml:248: Compile failed; see the compiler error output for details.

    (The remaining javac errors all follow the same pattern: every roboguice.* and com.google.gson.* import in GsonTest.java, HttpTest.java, LinksTest.java, SafeAsyncTest.java, and StoriesTest.java fails with "package does not exist", and every type and assertion method that depends on those imports then fails with "cannot find symbol".)

    That's not a hard problem to solve. I'm not sure it's the right thing to do, but I copied the missing libraries (roboguice and gson) from the MyApp/libs directory to the MyAppTests/libs directory, and everything seems to compile fine. But that leads to the second problem, which I'm currently stuck on. The tests compile fine, but they won't run:

        $ cp ../MyApp/libs/gson-r538.jar libs/
        $ cp ../MyApp/libs/roboguice-1.1-SNAPSHOT.jar libs/
        $ ant run-tests
        ...
        compile:
            [javac] Compiling 5 source files to /Users/mike/Projects/myapp/android/MyAppTests/bin/classes
        ...
        install:
            [echo] Installing /Users/mike/Projects/myapp/android/MyAppTests/bin/MyAppTests-debug.apk onto default emulator or device...
            [exec] Success

        run-tests:
            [echo] Running tests ...
            [exec] android.test.suitebuilder.TestSuiteBuilder$FailedToCreateTests:INSTRUMENTATION_RESULT: shortMsg=Class ref in pre-verified class resolved to unexpected implementation
            [exec] INSTRUMENTATION_RESULT: longMsg=java.lang.IllegalAccessError: Class ref in pre-verified class resolved to unexpected implementation
            [exec] INSTRUMENTATION_CODE: 0

        BUILD SUCCESSFUL

    Any idea what's causing the "Class ref in pre-verified class resolved to unexpected implementation" error?


  • Integrating JavaScript Unit Tests with Visual Studio

    - by Stephen Walther
    Modern ASP.NET web applications take full advantage of client-side JavaScript to provide better interactivity and responsiveness. If you are building an ASP.NET application in the right way, you quickly end up with lots and lots of JavaScript code. When writing server code, you should be writing unit tests. One big advantage of unit tests is that they provide you with a safety net that enable you to safely modify your existing code – for example, fix bugs, add new features, and make performance enhancements -- without breaking your existing code. Every time you modify your code, you can execute your unit tests to verify that you have not broken anything. For the same reason that you should write unit tests for your server code, you should write unit tests for your client code. JavaScript is just as susceptible to bugs as C#. There is no shortage of unit testing frameworks for JavaScript. Each of the major JavaScript libraries has its own unit testing framework. For example, jQuery has QUnit, Prototype has UnitTestJS, YUI has YUI Test, and Dojo has Dojo Objective Harness (DOH). The challenge is integrating a JavaScript unit testing framework with Visual Studio. Visual Studio and Visual Studio ALM provide fantastic support for server-side unit tests. You can easily view the results of running your unit tests in the Visual Studio Test Results window. You can set up a check-in policy which requires that all unit tests pass before your source code can be committed to the source code repository. In addition, you can set up Team Build to execute your unit tests automatically. Unfortunately, Visual Studio does not provide “out-of-the-box” support for JavaScript unit tests. MS Test, the unit testing framework included in Visual Studio, does not support JavaScript unit tests. As soon as you leave the server world, you are left on your own. The goal of this blog entry is to describe one approach to integrating JavaScript unit tests with MS Test so that you can execute your JavaScript unit tests side-by-side with your C# unit tests. The goal is to enable you to execute JavaScript unit tests in exactly the same way as server-side unit tests. You can download the source code described by this project by scrolling to the end of this blog entry. Rejected Approach: Browser Launchers One popular approach to executing JavaScript unit tests is to use a browser as a test-driver. When you use a browser as a test-driver, you open up a browser window to execute and view the results of executing your JavaScript unit tests. For example, QUnit – the unit testing framework for jQuery – takes this approach. The following HTML page illustrates how you can use QUnit to create a unit test for a function named addNumbers(). 
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <title>Using QUnit</title> <link rel="stylesheet" href="http://github.com/jquery/qunit/raw/master/qunit/qunit.css" type="text/css" /> </head> <body> <h1 id="qunit-header">QUnit example</h1> <h2 id="qunit-banner"></h2> <div id="qunit-testrunner-toolbar"></div> <h2 id="qunit-userAgent"></h2> <ol id="qunit-tests"></ol> <div id="qunit-fixture">test markup, will be hidden</div> <script type="text/javascript" src="http://code.jquery.com/jquery-latest.js"></script> <script type="text/javascript" src="http://github.com/jquery/qunit/raw/master/qunit/qunit.js"></script> <script type="text/javascript"> // The function to test function addNumbers(a, b) { return a+b; } // The unit test test("Test of addNumbers", function () { equals(4, addNumbers(1,3), "1+3 should be 4"); }); </script> </body> </html> This test verifies that calling addNumbers(1,3) returns the expected value 4. When you open this page in a browser, you can see that this test does, in fact, pass. The idea is that you can quickly refresh this QUnit HTML JavaScript test driver page in your browser whenever you modify your JavaScript code. In other words, you can keep a browser window open and keep refreshing it over and over while you are developing your application. That way, you can know very quickly whenever you have broken your JavaScript code. While easy to setup, there are several big disadvantages to this approach to executing JavaScript unit tests: You must view your JavaScript unit test results in a different location than your server unit test results. The JavaScript unit test results appear in the browser and the server unit test results appear in the Visual Studio Test Results window. Because all of your unit test results don’t appear in a single location, you are more likely to introduce bugs into your code without noticing it. Because your unit tests are not integrated with Visual Studio – in particular, MS Test -- you cannot easily include your JavaScript unit tests when setting up check-in policies or when performing automated builds with Team Build. A more sophisticated approach to using a browser as a test-driver is to automate the web browser. Instead of launching the browser and loading the test code yourself, you use a framework to automate this process. There are several different testing frameworks that support this approach: · Selenium – Selenium is a very powerful framework for automating browser tests. You can create your tests by recording a Firefox session or by writing the test driver code in server code such as C#. You can learn more about Selenium at http://seleniumhq.org/. LTAF – The ASP.NET team uses the Lightweight Test Automation Framework to test JavaScript code in the ASP.NET framework. You can learn more about LTAF by visiting the project home at CodePlex: http://aspnet.codeplex.com/releases/view/35501 jsTestDriver – This framework uses Java to automate the browser. jsTestDriver creates a server which can be used to automate multiple browsers simultaneously. This project is located at http://code.google.com/p/js-test-driver/ TestSwam – This framework, created by John Resig, uses PHP to automate the browser. Like jsTestDriver, the framework creates a test server. You can open multiple browsers that are automated by the test server. 
Learn more about TestSwarm by visiting the following address: https://github.com/jeresig/testswarm/wiki Yeti – This is the framework introduced by Yahoo for automating browser tests. Yeti uses server-side JavaScript and depends on Node.js. Learn more about Yeti at http://www.yuiblog.com/blog/2010/08/25/introducing-yeti-the-yui-easy-testing-interface/ All of these frameworks are great for integration tests – however, they are not the best frameworks to use for unit tests. In one way or another, all of these frameworks depend on executing tests within the context of a “living and breathing” browser. If you create an ASP.NET Unit Test then Visual Studio will launch a web server before executing the unit test. Why is launching a web server so bad? It is not the worst thing in the world. However, it does introduce dependencies that prevent your code from being tested in isolation. One of the defining features of a unit test -- versus an integration test – is that a unit test tests code in isolation. Another problem with launching a web server when performing unit tests is that launching a web server can be slow. If you cannot execute your unit tests quickly, you are less likely to execute your unit tests each and every time you make a code change. You are much more likely to fall into the pit of failure. Launching a browser when performing a JavaScript unit test has all of the same disadvantages as launching a web server when performing an ASP.NET unit test. Instead of testing a unit of JavaScript code in isolation, you are testing JavaScript code within the context of a particular browser. Using the frameworks listed above for integration tests makes perfect sense. However, I want to consider a different approach for creating unit tests for JavaScript code. Using Server-Side JavaScript for JavaScript Unit Tests A completely different approach to executing JavaScript unit tests is to perform the tests outside of any browser. If you really want to test JavaScript then you should test JavaScript and leave the browser out of the testing process. There are several ways that you can execute JavaScript on the server outside the context of any browser: Rhino – Rhino is an implementation of JavaScript written in Java. The Rhino project is maintained by the Mozilla project. Learn more about Rhino at http://www.mozilla.org/rhino/ V8 – V8 is the open-source Google JavaScript engine written in C++. This is the JavaScript engine used by the Chrome web browser. You can download V8 and embed it in your project by visiting http://code.google.com/p/v8/ JScript – JScript is the JavaScript Script Engine used by Internet Explorer (up to but not including Internet Explorer 9), Windows Script Host, and Active Server Pages. Internet Explorer is still the most popular web browser. Therefore, I decided to focus on using the JScript Script Engine to execute JavaScript unit tests. Using the Microsoft Script Control There are two basic ways that you can pass JavaScript to the JScript Script Engine and execute the code: use the Microsoft Windows Script Interfaces or use the Microsoft Script Control. The difficult and proper way to execute JavaScript using the JScript Script Engine is to use the Microsoft Windows Script Interfaces. You can learn more about the Script Interfaces by visiting http://msdn.microsoft.com/en-us/library/t9d4xf28(VS.85).aspx The main disadvantage of using the Script Interfaces is that they are difficult to use from .NET. 
There is a great series of articles on using the Script Interfaces from C# located at http://www.drdobbs.com/184406028. I picked the easier alternative and used the Microsoft Script Control. The Microsoft Script Control is an ActiveX control that provides a higher-level abstraction over the Windows Script Interfaces. You can download the Microsoft Script Control from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac

After you download the Microsoft Script Control, you need to add a reference to it to your project. Select the Visual Studio menu option Project, Add Reference to open the Add Reference dialog. Select the COM tab and add the Microsoft Script Control 1.0. Using the Script Control is easy. You call the Script Control AddCode() method to add JavaScript code to the Script Engine. Next, you call the Script Control Run() method to run a particular JavaScript function. The reference documentation for the Microsoft Script Control is located at the MSDN website: http://msdn.microsoft.com/en-us/library/aa227633%28v=vs.60%29.aspx

Creating the JavaScript Code to Test

To keep things simple, let’s imagine that you want to test the following JavaScript function named addNumbers() which simply adds two numbers together:

MvcApplication1\Scripts\Math.js

function addNumbers(a, b) {
    return 5;
}

Notice that the addNumbers() method always returns the value 5. Right now, it will not pass a good unit test. Create this file and save it in your project with the name Math.js in your MVC project’s Scripts folder (save the file in your actual MVC application and not your MVC test application).

Creating the JavaScript Test Helper Class

To make it easier to use the Microsoft Script Control in unit tests, we can create a helper class. This class contains two methods:

LoadFile() – Loads a JavaScript file. Use this method to load the JavaScript file being tested or the JavaScript file containing the unit tests.
ExecuteTest() – Executes the JavaScript code. Use this method to execute a JavaScript unit test.

Here’s the code for the JavaScriptTestHelper class:

JavaScriptTestHelper.cs

using System;
using System.IO;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using MSScriptControl;

namespace MvcApplication1.Tests
{
    public class JavaScriptTestHelper : IDisposable
    {
        private ScriptControl _sc;
        private TestContext _context;

        /// <summary>
        /// You need to use this helper with Unit Tests and not
        /// Basic Unit Tests because you need a Test Context
        /// </summary>
        /// <param name="testContext">Unit Test Test Context</param>
        public JavaScriptTestHelper(TestContext testContext)
        {
            if (testContext == null)
            {
                throw new ArgumentNullException("testContext");
            }
            _context = testContext;

            _sc = new ScriptControl();
            _sc.Language = "JScript";
            _sc.AllowUI = false;
        }

        /// <summary>
        /// Load the contents of a JavaScript file into the
        /// Script Engine.
        /// </summary>
        /// <param name="path">Path to JavaScript file</param>
        public void LoadFile(string path)
        {
            var fileContents = File.ReadAllText(path);
            _sc.AddCode(fileContents);
        }

        /// <summary>
        /// Pass the name of the test that you want to execute. 
        /// </summary>
        /// <param name="testMethodName">JavaScript function name</param>
        public void ExecuteTest(string testMethodName)
        {
            try
            {
                _sc.Run(testMethodName, new object[] { });
            }
            catch
            {
                var error = ((IScriptControl)_sc).Error;
                if (error == null)
                {
                    throw;
                }
                if (_context != null)
                {
                    var details = String.Format("{0} \r\nLine: {1} Column: {2}",
                        error.Source, error.Line, error.Column);
                    _context.WriteLine(details);
                }
                throw new AssertFailedException(error.Description);
            }
        }

        public void Dispose()
        {
            _sc = null;
        }
    }
}

Notice that the JavaScriptTestHelper class requires a Test Context to be instantiated. For this reason, you can use the JavaScriptTestHelper only with a Visual Studio Unit Test and not a Basic Unit Test (these are two different types of Visual Studio project items). Add the JavaScriptTestHelper file to your MVC test application (for example, MvcApplication1.Tests).

Creating the JavaScript Unit Test

Next, we need to create the JavaScript unit test function that we will use to test the addNumbers() function. Create a folder in your MVC test project named JavaScriptTests and add the following JavaScript file to this folder:

MvcApplication1.Tests\JavaScriptTests\MathTest.js

/// <reference path="JavaScriptUnitTestFramework.js"/>
function testAddNumbers() {
    // Act
    var result = addNumbers(1, 3);

    // Assert
    assert.areEqual(4, result, "addNumbers did not return right value!");
}

The testAddNumbers() function takes advantage of another JavaScript library named JavaScriptUnitTestFramework.js. This library contains all of the code necessary to make assertions. Add the following JavaScriptUnitTestFramework.js file to the same folder as the MathTest.js file:

MvcApplication1.Tests\JavaScriptTests\JavaScriptUnitTestFramework.js

var assert = {
    areEqual: function (expected, actual, message) {
        if (expected !== actual) {
            throw new Error("Expected value " + expected
                + " is not equal to " + actual + ". " + message);
        }
    }
};

There is only one type of assertion supported by this file: the areEqual() assertion. Most likely, you would want to add additional types of assertions to this file to make it easier to write your JavaScript unit tests.

Deploying the JavaScript Test Files

This step is non-intuitive. When you use Visual Studio to run unit tests, Visual Studio creates a new folder and executes a copy of the files in your project. After you run your unit tests, your Visual Studio Solution will contain a new folder named TestResults that includes a subfolder for each test run. You need to configure Visual Studio to deploy your JavaScript files to the test run folder, or Visual Studio won’t be able to find your JavaScript files when you execute your unit tests. You will get an error that looks something like this when you attempt to execute your unit tests:

You can configure Visual Studio to deploy your JavaScript files by adding a Test Settings file to your Visual Studio Solution. It is important to understand that you need to add this file to your Visual Studio Solution and not a particular Visual Studio project. Right-click your Solution in the Solution Explorer window and select the menu option Add, New Item. Select the Test Settings item and click the Add button. After you create a Test Settings file for your solution, you can indicate that you want a particular folder to be deployed whenever you perform a test run. 
Select the menu option Test, Edit Test Settings to edit your test configuration file. Select the Deployment tab and select your MVC test project’s JavaScriptTests folder to deploy. Click the Apply button and the Close button to save the changes and close the dialog.

Creating the Visual Studio Unit Test

The very last step is to create the Visual Studio unit test (the MS Test unit test). Add a new unit test to your MVC test project by selecting the menu option Add, New Item and selecting the Unit Test project item (do not select the Basic Unit Test project item). The difference between a Basic Unit Test and a Unit Test is that a Unit Test includes a Test Context. We need this Test Context to use the JavaScriptTestHelper class that we created earlier. Enter the following test method for the new unit test:

[TestMethod]
public void TestAddNumbers()
{
    var jsHelper = new JavaScriptTestHelper(this.TestContext);

    // Load JavaScript files
    jsHelper.LoadFile("JavaScriptUnitTestFramework.js");
    jsHelper.LoadFile(@"..\..\..\MvcApplication1\Scripts\Math.js");
    jsHelper.LoadFile("MathTest.js");

    // Execute JavaScript Test
    jsHelper.ExecuteTest("testAddNumbers");
}

This code uses the JavaScriptTestHelper to load three files:

JavaScriptUnitTestFramework.js – Contains the assert functions.
Math.js – Contains the addNumbers() function from your MVC application which is being tested.
MathTest.js – Contains the JavaScript unit test function.

Next, the test method calls the JavaScriptTestHelper ExecuteTest() method to execute the testAddNumbers() JavaScript function.

Running the Visual Studio JavaScript Unit Test

After you complete all of the steps described above, you can execute the JavaScript unit test just like any other unit test. You can use the keyboard combination CTRL-R, CTRL-A to run all of the tests in the current Visual Studio Solution. Alternatively, you can use the buttons in the Visual Studio toolbar to run the tests. (Unfortunately, the Run All Impacted Tests button won’t work correctly because Visual Studio won’t detect that your JavaScript code has changed. Therefore, you should use either the Run Tests in Current Context or Run All Tests in Solution options instead.)

The results of running the JavaScript tests appear side-by-side with the results of running the server tests in the Test Results window. For example, if you Run All Tests in Solution then you will get the following results: Notice that the TestAddNumbers() JavaScript test has failed. That is good because our addNumbers() function is hard-coded to always return the value 5. If you double-click the failing JavaScript test, you can view additional details such as the JavaScript error message and the line number of the JavaScript code that failed.

Summary

The goal of this blog entry was to explain an approach to creating JavaScript unit tests that can be easily integrated with Visual Studio and Visual Studio ALM. I described how you can use the Microsoft Script Control to execute JavaScript on the server. By taking advantage of the Microsoft Script Control, we were able to execute our JavaScript unit tests side-by-side with all of our other unit tests and view the results in the standard Visual Studio Test Results window. 
You can download the code discussed in this blog entry from here: http://StephenWalther.com/downloads/Blog/JavaScriptUnitTesting/JavaScriptUnitTests.zip Before running this code, you need to first install the Microsoft Script Control which you can download from here: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=d7e31492-2595-49e6-8c02-1426fec693ac
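To make the AddCode()/Run() flow described above concrete outside of the unit-test plumbing, here is a minimal C# sketch of the bare Script Control round-trip. It assumes the same Microsoft Script Control COM reference as the helper class; the inline script strings are made up for illustration:

using System;
using MSScriptControl;

class ScriptControlDemo
{
    static void Main()
    {
        // Create the JScript engine via the Microsoft Script Control (COM).
        ScriptControl sc = new ScriptControl();
        sc.Language = "JScript";
        sc.AllowUI = false;

        // Add the code under test and a hypothetical test function to the engine.
        sc.AddCode("function addNumbers(a, b) { return a + b; }");
        sc.AddCode(
            "function testAddNumbers() {" +
            "  if (addNumbers(1, 3) !== 4) { throw new Error('1+3 should be 4'); }" +
            "}");

        // Run the test function; a JavaScript error surfaces as a .NET exception.
        sc.Run("testAddNumbers", new object[] { });
        Console.WriteLine("testAddNumbers passed.");
    }
}

Everything here mirrors the calls the JavaScriptTestHelper makes; only the inline scripts are invented for the example.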

    Read the article

  • How do you unit test a unit test?

    - by FlySwat
I was watching Rob Connery's webcasts on the MVCStoreFront App, and I noticed he was unit testing even the most mundane things, things like:

public Decimal DiscountPrice
{
    get
    {
        return this.Price - this.Discount;
    }
}

would have a test like:

[TestMethod]
public void Test_DiscountPrice()
{
    Product p = new Product();
    p.Price = 100;
    p.Discount = 20;
    Assert.AreEqual(80, p.DiscountPrice);
}

While I am all for unit testing, I sometimes wonder if this form of test-first development is really beneficial. For example, in a real process, you have 3-4 layers above your code (Business Request, Requirements Document, Architecture Document), where the actual defined business rule (Discount Price is Price - Discount) could be misdefined. If that's the situation, your unit test means nothing to you. Additionally, your unit test is another point of failure:

[TestMethod]
public void Test_DiscountPrice()
{
    Product p = new Product();
    p.Price = 100;
    p.Discount = 20;
    Assert.AreEqual(90, p.DiscountPrice);
}

Now the test is flawed. Obviously in a simple test it's no big deal, but say we were testing a complicated business rule. What do we gain here? Fast forward two years into the application's life, when maintenance developers are maintaining it. Now the business changes its rule, and the test breaks again; some rookie developer then fixes the test incorrectly... we now have another point of failure. All I see is more possible points of failure, with no real beneficial return. If the discount price is wrong, the test team will still find the issue, so how did unit testing save any work? What am I missing here? Please teach me to love TDD, as I'm having a hard time accepting it as useful so far. I want to, because I want to stay progressive, but it just doesn't make sense to me. EDIT: A couple of people have mentioned that testing helps enforce the spec. It has been my experience that the spec has been wrong as well, more often than not, but maybe I'm doomed to work in an organization where the specs are written by people who shouldn't be writing specs.
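One commonly suggested way to blunt the "the test is another point of failure" objection is to state the rule's examples as data rather than as hand-typed assertions scattered across methods, so a transposed expectation stands out in a single reviewable table. A hedged sketch, assuming the question's Product class and the newer MSTest (v2) data-row support:

[DataTestMethod]
[DataRow(100, 20, 80)]
[DataRow(50, 0, 50)]
[DataRow(10, 10, 0)]
public void DiscountPrice_Is_Price_Minus_Discount(int price, int discount, int expected)
{
    // Arrange: Product is assumed to look like the one in the question.
    Product p = new Product();
    p.Price = price;
    p.Discount = discount;

    // Assert: the cast keeps the comparison in decimal, matching DiscountPrice.
    Assert.AreEqual((decimal)expected, p.DiscountPrice);
}

This doesn't remove the risk of a wrong expectation, but it concentrates the business rule's examples in one place where a reviewer can check them against the spec.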

    Read the article

  • CakePHP Test Fixtures Drop My Tables Permanently After Running A Test Case

    - by Frank
I'm not sure what I've done wrong in my CakePHP unit test configuration. Every time I run a test case, the model tables associated with my fixtures are missing from my test database. After running an individual test case I have to re-import my database tables using phpMyAdmin. Here are the relevant files:

This is the class I'm trying to test, comment.php. This table is dropped after the test.

App::import('Sanitize');
class Comment extends AppModel {
    public $name = 'Comment';
    public $actsAs = array('Tree');
    public $belongsTo = array('User' => array('fields' => array('id', 'username')));
    public $validate = array(
        'text' => array(
            'rule' => array('between', 1, 4000),
            'required' => 'true',
            'allowEmpty' => 'false',
            'message' => "You can't leave your comment text empty!")
    );

database.php

class DATABASE_CONFIG {
    var $default = array(
        'driver' => 'mysql',
        'persistent' => false,
        'host' => 'project.db',
        'login' => 'projectman',
        'password' => 'projectpassword',
        'database' => 'projectdb',
        'prefix' => ''
    );
    var $test = array(
        'driver' => 'mysql',
        'persistent' => false,
        'host' => 'project.db',
        'login' => 'projectman',
        'password' => 'projectpassword',
        'database' => 'testprojectdb',
        'prefix' => ''
    );
}

My comment.test.php file. This is the table that keeps getting dropped.

<?php
App::import('Model', 'Comment');
class CommentTestCase extends CakeTestCase {
    public $fixtures = array('app.comment', 'app.user');

    function start() {
        $this->Comment =& ClassRegistry::init('Comment');
        $this->Comment->useDbConfig = 'test_suite';
    }

This is my comment_fixture.php class:

<?php
class CommentFixture extends CakeTestFixture {
    var $name = "Comment";
    var $import = 'Comment';
}

And just in case, here is a typical test method in the CommentTestCase class:

function testMsgNotificationUserComment() {
    $user_id = '1';
    $submission_id = '1';
    $parent_id = $this->Comment->commentOnModel('Submission', $submission_id, '0', $user_id, "Says: A");
    $other_user_id = '2';
    $msg_id = $this->Comment->commentOnModel('Submission', $submission_id, $parent_id, $other_user_id, "Says: B");
    $expected = array(array('Comment' => array('id' => $msg_id, 'text' => "Says: B", 'submission_id' => $submission_id, 'topic_id' => '0', 'ack' => '0')));
    $result = $this->Comment->getMessages($user_id);
    $this->assertEqual($result, $expected);
}

I've been dealing with this for a day now and I'm starting to be put off by CakePHP's unit testing. In addition to this issue -- several times now I've had data inserted into my 'default' database configuration after running tests! What's going on with my configuration?!

    Read the article

  • Need to make a scheduled task run as another user but keep the current user’s environment

    - by Chad Marmon
I need to back up users' .pst files. The current method I am trying is making a shadow copy using Diskshadow. My script works great, but Diskshadow needs to be run as administrator while also retaining the logged-on user's environment variables; specifically, the %USERNAME% and %HOMESHARE% variables, so the right user’s files get copied up to the right network location. I have, for the most part, got this to work, but there’s no straightforward (or secure, at least) way to pass the password. If I set up a scheduled task to run the script as a domain user with local admin privs, the environment variables get lost. I need to run this script automatically, with no user interaction. If I could figure out how to make a scheduled task run as another user but keep the current user’s environment, I think this would work, but I’ve been beating my head against that for a while now, without any luck.
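The general technique that tends to work here is to capture the variables while still in the user's context and hand them to the elevated side as explicit arguments, instead of relying on environment inheritance. A hedged C# sketch of that idea follows; the script name and the UAC "runas" launch are illustrative assumptions, not a drop-in solution (a runas launch still prompts, so a fully unattended setup would pass the same arguments to a pre-registered task instead):

using System;
using System.Diagnostics;

class ElevatedBackupLauncher
{
    static void Main()
    {
        // Read the interactive user's variables *before* elevating; the
        // elevated process would otherwise see the admin account's values.
        string userName  = Environment.GetEnvironmentVariable("USERNAME");
        string homeShare = Environment.GetEnvironmentVariable("HOMESHARE");

        ProcessStartInfo psi = new ProcessStartInfo();
        psi.FileName = "cmd.exe";
        // backup.cmd is a hypothetical script that takes the values as arguments.
        psi.Arguments = String.Format("/c backup.cmd \"{0}\" \"{1}\"", userName, homeShare);
        psi.Verb = "runas";          // request elevation
        psi.UseShellExecute = true;  // required for Verb to take effect

        Process.Start(psi);
    }
}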

    Read the article

  • What's the better user experience: Waiting once at startup for a long time or waiting frequently for a short time?

    - by Roflcoptr
I'm currently designing an application that involves a lot of calculation. I now have two general approaches, both of which I have tested: 1) During startup of the application I calculate only the most important values and the values that consume a lot of time. The user then has to wait approximately 15 seconds during startup. But on the other hand, a lot of user interactions require recalculation, so the user often has to wait 2-3 seconds after clicking somewhere until the application has calculated and loaded all values. 2) I load everything during startup. This takes from 90 to 120 seconds... This is quite a long time, but the big advantage is that all the user interactions are executed immediately. So what would you generally consider the better approach? Loading all time-consuming operations during startup, or loading them when needed?
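A third option worth weighing: start the full calculation in the background at startup, so the user gets the UI immediately and only blocks if a value is requested before it is ready. A minimal C# sketch of that shape, with the computation itself as a stand-in:

using System;
using System.Threading.Tasks;

class CalculationCache
{
    // Kicked off once at startup; runs the expensive work off the UI thread.
    private readonly Task<double[]> _allValues;

    public CalculationCache()
    {
        _allValues = Task.Run(() => ComputeEverything());
    }

    // Returns immediately once the background work has finished;
    // blocks only if a value is requested before it is ready.
    public double GetValue(int index)
    {
        return _allValues.Result[index];
    }

    private static double[] ComputeEverything()
    {
        var values = new double[1000];
        for (int i = 0; i < values.Length; i++)
        {
            values[i] = Math.Sqrt(i) * Math.PI; // stand-in for the real calculation
        }
        return values;
    }
}

With this shape, the perceived startup cost shrinks to whatever the most important values take, and the 2-3 second per-click waits disappear once the background task completes.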

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it’s not enough to have a test red or green, but it’s also important to have it red or green for the right reasons. While for me it’s sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he’s right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense, see the rest of this article). This made me think deeply for some days. In the end I found out that the ‘right reason’ changes in my understanding depending on what development phase I’m in.

To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing ‘only’ on the ‘right reason’ issue to something more general, which you might describe as ‘Doing real-world TDD in .NET, with massive use of third-party add-ins’. This is because I feel that there is a more general statement about Test-driven development to make: it’s high time to speak about the ‘How’ of TDD, not always only the ‘Why’. Much has been said about this, and I myself have also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run, it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I’m somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don’t want to spend my time exclusively on stating the obvious… So, again, let’s say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. – I know that there are many people out there who will disagree with this radical statement, and I also know that it’s not a description of the real world but more of a mission statement or something. But nevertheless I’m absolutely sure that in some years this statement will be nothing but a platitude.

Side note: Some parts of this post read as if I were paid by Jetbrains (the manufacturer of the ReSharper add-in – R#), but I swear I’m not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn’t even consider doing professional work without having this add-in installed...

The three parts of a software component

Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below: First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer’s brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not. 
– At the C# micro-level, the best way that I found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are ‘only’ there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed, he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer’s craftsmanship, and this is what I want to talk about during the remainder of this article…

An example

To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…

The requirement

As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question “intf or not” doesn’t even come to mind. I need them for my usual workflow and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

namespace Calculator
{
    /// <summary>
    /// Defines a very simple calculator component for demo purposes.
    /// </summary>
    public interface ICalculator
    {
        /// <summary>
        /// Gets the result of the last successful operation.
        /// </summary>
        /// <value>The last result.</value>
        /// <remarks>
        /// Will be <see langword="null" /> before the first successful operation.
        /// </remarks>
        double? LastResult { get; }

    } // interface ICalculator

} // namespace Calculator

So, I’m not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here: Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process. In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element. I normally use an IoC container or some sort of self-written provider-like model in my architecture. 
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason as that is is very simple to use. The ‘Red’ (pt. 1)   First I create a folder for the project’s third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I’m ready to write the first test, which will look like the following: namespace Calculator.Test {     [TestFixture]     public class CalculatorTest     {         private readonly ServiceContainer container = new ServiceContainer();           [Test]         public void CalculatorLastResultIsInitiallyNull()         {             ICalculator calculator = container.GetService<ICalculator>();               Assert.IsNull(calculator.LastResult);         }       } // class CalculatorTest   } // namespace Calculator.Test       This is basically the executable formulation of what the interface definition states (part of). Side note: There’s one principle of TDD that is just plain wrong in my eyes: I’m talking about the Red is 'does not compile' thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that, it just makes no sense to me. (Or, in Derick’s terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: Your code is incorrect, but nothing more.  Instead, the ‘Red’ part of the red-green-refactor cycle has a clearly defined meaning to me: It means that the test works as intended and fails only if its assumptions are not met for some reason. Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: There’s no implementation that the IoC-container could load, of course. So let’s fix that. With R#, this is very easy: First, create an ICalculator - derived type:        Next, implement the interface members: And finally, move the new class to its own file: So far my ‘work’ was six mouse clicks long, the only thing that’s left to do manually here, is to add the Ioc-specific wiring-declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces: This is what my Calculator class looks like as of now: using System; using LinFu.IoC.Configuration;   namespace Calculator {     [Implements(typeof(ICalculator))]     internal class Calculator : ICalculator     {         public double? LastResult         {             get             {                 throw new NotImplementedException();             }         }     } } Back to the test fixture, we have to put our IoC container to work: [TestFixture] public class CalculatorTest {     #region Fields       private readonly ServiceContainer container = new ServiceContainer();       #endregion // Fields       #region Setup/TearDown       [FixtureSetUp]     public void FixtureSetUp()     {        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");     }       ... Because I have a R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more… The ‘Red’ (pt. 2) Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called. 
And this is the point where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it’s still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I’m not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that’s the case, I will happily go on to the ‘Green’ part…

The ‘Green’

Making the test green is quite trivial. Just make LastResult an automatic property:

    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        public double? LastResult { get; private set; }
    }

One more round… Now on to something slightly more demanding (cough…). Let’s state that our Calculator exposes an Add() method:

        ...

        /// <summary>
        /// Adds the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the addition.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Add(double operand1, double operand2);

    } // interface ICalculator

A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That’s certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this: Apart from that, I’m heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location. This way, a team always has first class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: You have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…).

Back to our Calculator again: Two more R# clicks implement the Add() skeleton:

        ...

        public double Add(double operand1, double operand2)
        {
            throw new NotImplementedException();
        }

    } // class Calculator

As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let’s start implementing that. Here’s the test:

[Test]
[Row(-0.5, 2)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

As you can see, I’m using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I will only have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out. 
This can be a valuable documentation feature even when everything is green: One can quickly review what values were tested exactly - the complete Gallio HTML-report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example). Back to our Calculator development again, this is what the test result tells us at the moment: So we’re red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here’s the test and the method implementation at the end of the second cycle: // in CalculatorTest:   [Test] [Row(-0.5, 2)] [Row(295, -123)] public void AddThrowsOnNegativeOperands(double operand1, double operand2) {     ICalculator calculator = container.GetService<ICalculator>();       Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2)); }   // in Calculator: public double Add(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }     if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }     throw new NotImplementedException(); } So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method’s successful outcomes. First let’s write another test for that: [Test] [Row(1, 1, 2)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } Again, I’m regularly using row based test methods for these kinds of unit tests. The above shown pattern proved to be extremely helpful for my development work, I call it the Defined-Input/Expected-Output test idiom: You define your input arguments together with the expected method result. There are two major benefits from that way of testing: In the course of refining a method, it’s very likely to come up with additional test cases. In our case, we might add tests for some edge cases like ‘one of the operands is zero’ or ‘the sum of the two operands causes an overflow’, or maybe there’s an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test. Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven). 
So your test method might look something like that in the end: [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 2)] [Row(0, 999999999, 999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, double.MaxValue)] [Row(4, double.MaxValue - 2.5, double.MaxValue)] public void TestAdd(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Add(operand1, operand2);       Assert.AreEqual(expectedResult, result); } And this will produce the following HTML report (with Gallio):   Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review… The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don’t show this here, it’s trivial enough and brings nothing new… And finally: Refactor (for the right reasons) To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here’s the code (tests and production): // CalculatorTest.cs:   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtract(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       double result = calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, result); }   [Test, Description("Arguments: operand1, operand2, expectedResult")] [Row(1, 1, 0)] [Row(0, 999999999, -999999999)] [Row(0, 0, 0)] [Row(0, double.MaxValue, -double.MaxValue)] [Row(4, double.MaxValue - 2.5, -double.MaxValue)] public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult) {     ICalculator calculator = container.GetService<ICalculator>();       calculator.Subtract(operand1, operand2);       Assert.AreEqual(expectedResult, calculator.LastResult); }   ...   // ICalculator.cs: /// <summary> /// Subtracts the specified operands. /// </summary> /// <param name="operand1">The operand1.</param> /// <param name="operand2">The operand2.</param> /// <returns>The result of the subtraction.</returns> /// <exception cref="ArgumentException"> /// Argument <paramref name="operand1"/> is &lt; 0.<br/> /// -- or --<br/> /// Argument <paramref name="operand2"/> is &lt; 0. /// </exception> double Subtract(double operand1, double operand2);   ...   // Calculator.cs:   public double Subtract(double operand1, double operand2) {     if (operand1 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand1");     }       if (operand2 < 0.0)     {         throw new ArgumentException("Value must not be negative.", "operand2");     }       return (this.LastResult = operand1 - operand2).Value; }   Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines of the production code, we do an Extract Method refactoring. 
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#: Having done that, our production code finally looks like this:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        #region ICalculator

        public double? LastResult { get; private set; }

        public double Add(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 + operand2).Value;
        }

        public double Subtract(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 - operand2).Value;
        }

        #endregion // ICalculator

        #region Implementation (Helper)

        private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }
        }

        #endregion // Implementation (Helper)

    } // class Calculator

} // namespace Calculator

But is the above worth the effort at all? It’s obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It’s not immediately clear how this refactoring work adds value to the project. Derick puts it like this: STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I’m speaking heresy, here. Toss me to the wolves, I’ve gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add? Derick immediately answers his own question: So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern’s intentions, less architecturally sound, less DRY, etc, then you should refactor it. I couldn’t state it more precisely. From my personal perspective, I’d add the following: You have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It’s the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic about the individual, seemingly trivial cases. My job regularly requires the reading and understanding of ‘foreign’ code. 
So code quality/readability really makes a HUGE difference for me – sometimes it can even be the difference between project success and failure…

Conclusions

The development process described above emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between): Test-driven development is the normal, natural way of writing software; code-first is exceptional. So ‘doing TDD or not’ is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I’ve never seen high-quality code – and high-quality code is code that stood the test of time and causes low maintenance costs – that was produced code-first…) It’s the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction…). The test code serves ‘only’ to make the production code work. But it’s the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is.

With these two things in mind, I tried to optimize my coding process for coding speed – or, in business terms: productivity - without sacrificing the principles of TDD (more than I’d do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code (this might sound heavy, but that is mainly because software development standards are only beginning to evolve. The entire software development profession is very young; historically speaking, it is only at its very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…)

Although the above might look like very much unnecessary work at first sight, it’s not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for ‘saving’ a few 100 bucks - is just not acceptable and a very bad decision in business terms (though I have quite often seen and heard of that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows…

The round-trip described here will take me about five to ten minutes in my real-world development practice. I guess it’s about 30% more time compared to developing the ‘traditional’ (code-first) way. But the ‘product’ manufactured this way is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development… But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The development method described here might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it’s not - e.g. 
if time-to-market is crucial for a software project. So this is a business decision in the end. It’s just that you have to know what you’re doing and what consequences this might have… Some last words First, I’d like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend for reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn’t have done that without this inspiration. I really enjoy that kind of discussions… I agree with him in all respects. But I don’t know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very “good enough” in my practical experience. But of course, I’m open to suggestions here… My rationale for now is: If the test is initially red during the red-green-refactor cycle, the ‘right reason’ is: it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, ‘red’ certainly must occur for the ‘right reason’: in this phase, ‘red’ MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!
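To restate that closing distinction in code, using the article's own Calculator and test fixture: during the red-green-refactor cycle a stub may go red by exception, but once the tests join the automated CI suite, red may only come from the assertion. A small sketch:

// During the red-green-refactor cycle: the stub makes the test red
// by throwing - acceptable as a starting point in the author's view.
public double Add(double operand1, double operand2)
{
    throw new NotImplementedException();
}

// What 'Fail By Assertion' demands once the cycle is finished:
[Test]
public void TestAdd()
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(1, 1);            // must not throw anymore
    Assert.AreEqual(2, result, "1 + 1 should be 2"); // red may only come from here
}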

    Read the article

  • copying user profile on windows 7

    - by SwissCoder
Is there a tool or a trick to easily duplicate a Windows profile? My problem is that I have a local user profile and I would like to copy it for another user. Additionally, that profile was created locally when a domain user logged in, and I would like to create a copy of that profile for a non-domain user. I hope it's clear what my problem is. Thank you for reading! I've just seen there is a similar question: Copy Windows 7 profile from one domain user to another. Now I would like to know if it is possible to simply change the user profile's name and password. Is this somehow possible?

    Read the article

  • SQL Server User Mapping - Limit view of databases for a user

    - by Jaime
Hi there, I am adding a new Login with SQL Server Authentication. I set its Server Role as public and then went into User Mapping, selecting the only database this user should have access to. I then changed the Default Schema to dbo and made this user the db_owner. I then connect to the instance using the new user's credentials and I can see not only the database he should have access to, but also all the other attached databases. How can I limit this user to seeing just the database he has access to? Thanks in advance!
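In SQL Server 2005 and later, what a login sees in the database list is governed by the server-level VIEW ANY DATABASE permission; the usual recipe is to deny that permission and then make the login the owner of its one database so that it stays visible. A hedged sketch wrapping the two statements in C# (they can just as well be run directly in Management Studio; the login and database names are placeholders, and note that making the login the database owner maps it into the database as dbo):

using System.Data.SqlClient;

class LimitDatabaseVisibility
{
    static void Main()
    {
        using (var conn = new SqlConnection("Server=.;Integrated Security=true"))
        {
            conn.Open();
            var cmd = conn.CreateCommand();
            cmd.CommandText =
                // Hide all databases from the login by default...
                "DENY VIEW ANY DATABASE TO [SomeLogin]; " +
                // ...then make it the owner of its database so that one stays visible.
                "ALTER AUTHORIZATION ON DATABASE::[SomeDb] TO [SomeLogin];";
            cmd.ExecuteNonQuery();
        }
    }
}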

    Read the article

  • Icon on user account desktop before the user has logged in.

    - by JHamill
I am currently working on a Windows 7 deployment project and I have a requirement to place an RDP icon on a specific user's desktop; let's call this user 'Guest'. The image itself will be completely vanilla and all user accounts will be created using commands in the Unattend file. The 'Guest' account will not be a local admin, so it will not be the account used for autologon during the application of the unattend file. As a result, the 'Guest' profile will not have been created, so I'm unable to simply place the icon at C:\Users\Guest\Desktop. Is there a way to place an icon on this specific user's desktop prior to logging in with it? I know there are ways around this, i.e. including this account in the base image and logging in with it in order to create the profile, but I'd like to keep the base image as vanilla as possible. Any ideas or pointers would be greatly appreciated. Thanks in advance.
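One approach that avoids pre-creating the profile: on Windows 7, new profiles are built by copying the hidden Default profile, so a file seeded into C:\Users\Default\Desktop appears on every newly created user's desktop, including Guest's first logon. A hedged C# sketch of the seeding step (the source path is a placeholder; if the icon must appear only for Guest, a first-logon script would additionally have to check the user name and remove the file for everyone else):

using System;
using System.IO;

class SeedDefaultDesktop
{
    static void Main()
    {
        // New Windows 7 profiles are copied from the hidden Default profile.
        string defaultDesktop = @"C:\Users\Default\Desktop";
        Directory.CreateDirectory(defaultDesktop);

        // C:\Deploy\Guest.rdp is a placeholder for wherever the image keeps the file.
        File.Copy(@"C:\Deploy\Guest.rdp",
                  Path.Combine(defaultDesktop, "Guest.rdp"),
                  true); // overwrite if already seeded
    }
}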

    Read the article

  • Microsoft Test Manager error in displaying test steps caused by malware

    - by terje
Sometimes the tool is blamed for errors which are not the fault of the tool – this is one such story. It was, however, not so easy to get to the bottom of it, so I hope sharing this story can help some others. One of our test developers started to get this message inside the test steps part of a test case in MTM, saying “Could not load file or assembly ‘0 bytes from System, Version=4.0.0.0,……..” The same error came up inside Visual Studio when we opened a test case there. Then we noted a similar error on another piece of software – a System.BadImageFormatException with the same message as above, but for framework 2.0. We found this description which pointed to a malware problem (see bottom of that post), namely a fake anti-spyware program called “Additional Guard”. We checked the computer in question using the Malwarebytes Anti-Malware tool. It found and cleaned out 753 registry keys!! After this cleanup operation the error was gone. This is a great tool! The “Additional Guard” program had been inadvertently installed, and then uninstalled afterwards, but the corrupted keys were of course not removed. We also noted that this computer had full corporate virus scanning and malware protection, but this nasty little thing still slipped through. Technorati Tags: Malware, BadImageFormatException, Microsoft Test Manager, Malwarebytes

    Read the article

  • Should a developer create test cases and then run through test cases

    - by Eben Roux
I work for a company where the development manager expects a developer to create test cases before writing any code. These test cases then have to be maintained by the developers. Every so often a developer will be expected to run through the test cases. From this you should be able to gather that the company in question is rather small and there are no testers. Coming from a Software Architect position and having to write / execute test cases wearing my 'tester' hat is somewhat of a shock to the system. I do it anyway, but it does seem to be a rather expensive exercise :) EDIT: I seem to need to elaborate here: I am not talking about unit testing, TDD, etc. :) I am talking about that bit of testing a tester does. Once I have developed a system (with my unit tests / TDD / etc.) the software goes through a testing phase. Should a developer be that tester and develop those test cases? I think the misunderstanding may stem from the fact that developers, typically, are not involved with this type of testing and, therefore, assumed I am referring to the testing we do do: unit testing. But alas, no. I hope that clears it up.

    Read the article

  • Should devs, testers and business users have one unified test script?

    - by Carlos Jaime C. De Leon
In development, I would normally have my own test scripts that document the data, scenarios and execution steps that I plan to test; this is my dev test plan. When the functionality has been deployed to Test, testers test it using their own test script that they wrote. In UAT, the business user then tests using their own test plan. In retrospect, it looks like this provides better coverage, with dev tests having a mix of black and white box testing, while testers and business users focus on black box testing. But on the other hand, this brings up distinct test cases that are only executed in a particular stage (i.e. some cases which the testers thought of are only executed in the Test stage), and it would look like the dev missed them, which makes each one a finding/bug. Is it worth consolidating the test scripts from the start, thus using one unified test script, or is it a bit difficult to do this upfront?

    Read the article

  • User Interface Annoyances

    - by Jim McKeeth
I am looking for some of the most annoying user interface features that are common and keep being repeated. The first one that comes to mind is the modal pop-up message box that developers like to use to let you know you did something right, but which gets frustrating the 1000th time you have to close it. I would rather see the annoyances that are common in many applications than the really odd ones that appear in only one or two applications. Please: one per answer.

    Read the article

< Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >