Hello all,
I am currently developing a location-based iPhone application. Is there any way to test the app other than taking the iPhone to different places?
Thanks
I set up an NUnit test like this:
new PersistenceSpecification<MyTable>(_session)
.CheckProperty(c => c.ActionDate, DateTime.Now);
When I run the test via NUnit I get the following error:
SomeNamespace.MapTest:
System.ApplicationException : Expected '2/23/2010 11:08:38 AM' but got
'2/23/2010 11:08:38 AM' for Property 'ActionDate'
The ActionDate field is a datetime field in a SQL 2008 database. I use Auto Mapping and declare the ActionDate as a DateTime property in C#.
If I change the test to use DateTime.Today the tests pass.
My question is: why is the test failing with DateTime.Now? Is NHibernate losing some precision when saving the date to the database, and if so, how do I prevent the loss? Thank you.
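For reference, here is the kind of sub-second difference I suspect: both values print identically with the default format even though they differ by a few milliseconds. This is an illustration only, not taken from my actual test or mapping:
// Illustration only: SQL Server's datetime type has roughly 3 ms precision,
// so a round-tripped DateTime.Now can differ from the original by a few ticks
// even though both render the same with the default ToString() format.
var original = DateTime.Now;
var truncated = new DateTime(
    original.Year, original.Month, original.Day,
    original.Hour, original.Minute, original.Second);

Console.WriteLine(original == truncated);                       // almost always False
Console.WriteLine(original.ToString() == truncated.ToString()); // True - the sub-second part is not shown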
Hi,
What I want to achieve is this: after loading my object from the database, generate a code block that initializes the object based on its current values, so that I can reuse this block in my unit tests again and again without loading the object from the DB anymore.
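For example, for a hypothetical Customer entity, I'd want the tool to spit out something like the following block, which I could paste straight into a test (entity and values made up):
// Hypothetical example of the generated initialization block I'm after,
// reproducing the object's state exactly as it was loaded from the database.
var customer = new Customer
{
    Id = 42,
    Name = "Acme Corp.",
    CreatedOn = new DateTime(2010, 2, 23, 11, 8, 38),
    Orders = new List<Order>
    {
        new Order { Id = 1001, Total = 99.95m },
        new Order { Id = 1002, Total = 12.50m }
    }
};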
Is there any tool around to achieve such a goal for VS?
thanks
I need a URL just for testing basic HTTP connectivity. It needs to be consistent and:
Always be up
Never change drastically based on IP or user agent (i.e., no 301 Location redirects or huge differences in content; minor differences would be tolerable)
Have a consistent content-length (i.e., it never varies by more than about 2 KB)
A few examples, yet none match all 3 criteria:
One example that is always up: www.google.com (yet it 301-redirects based on IP location).
Another good one is http://www.google.com/webhp?hl=en, but the problem there is that around a given holiday the content-length can vary quite a bit.
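For context, the check I intend to run against such a URL is roughly this (a minimal sketch in C#; the URL and timeout are just placeholders, and it assumes System and System.Net):
// Minimal sketch of the connectivity check; the URL and timeout are placeholders.
var request = (HttpWebRequest)WebRequest.Create("http://www.google.com/webhp?hl=en");
request.Timeout = 5000; // fail fast when there is no connectivity

using (var response = (HttpWebResponse)request.GetResponse())
{
    // A stable status code and a roughly constant Content-Length are what I rely on.
    Console.WriteLine("{0}, {1} bytes", response.StatusCode, response.ContentLength);
}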
In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, which is partly why I have some test code that interacts with test servers just to see that everything works as expected. However, accessing these servers is quite slow, and therefore I do not want to run these tests every time I run my test suite.
My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable 'BACKEND_TEST' and a conditional statement that checks whether the variable is set, for each test I would like to skip. But sometimes I would like to skip all the tests in a test file without having to add an extra line to the beginning of each test.
The tests which have to interact with the test servers are not many, as I use flexmock in other situations. However, you can't mock yourself away from reality.
As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.
Hi everyone,
I am trying to call a method, passing it an object called parameters.
public void LoadingDataLockFunctionalityTest()
{
    DataCache_Accessor target = DataCacheTest.getNewDataCacheInstance();
    target.itemsLoading.Add("WebFx.Caching.TestDataRetrieverFactorytestsync", true);

    DataParameters parameters = new DataParameters("WebFx.Core",
                                                   "WebFx.Caching.TestDataRetrieverFactory",
                                                   "testsync");
    parameters.CachingStrategy = CachingStrategy.TimerDontWait;
    parameters.CacheDuration = 0;

    string data = (string)target.performGetForTimerDontWaitStrategy(parameters);
    TestSyncDataRetriever.SimulateLoadingForFiveSeconds = true;

    Thread t1 = new Thread(delegate()
    {
        string s = (string)target.performGetForTimerDontWaitStrategy(parameters);
        Console.WriteLine(s ?? String.Empty);
    });
    t1.Start();
    t1.Join();
    Thread.Sleep(1000);

    ReaderWriterLockSlim rw = DataCache_Accessor.GetLoadingLock(parameters);
    Assert.IsTrue(rw.IsWriteLockHeld);
    Assert.IsNotNull(data);
}
My test is failing all the time and I am not able to step through the method.
Can someone please point me in the right direction?
Thanks
Currently we are generating HTML reports for our automation runs, but those reports are not good enough to convey the number of scenarios we cover in automation. Is there anything we can use with Selenium to generate proper reports that give a complete overview and can be easily understood by anyone?
First, we could show a pie chart covering the number of test cases passed and failed.
Second, we could show which test cases are included in this build.
So I've installed PHPUnit via PEAR (all the files are there, I've checked). However, when I try to run a test I get:
Warning: require_once(PHPUnit/Framework.php) [function.require-once]: failed to open stream: No such file or directory in C:\WAMP\www\ExampleTests\arraytest.php on line 2
I'm guessing this has something to do with my PHPUnit installation not updating the include_path properly, but I'm not sure what to update it to.
I'm on Windows (7), using WAMP.
Cheers!
EDIT: The bottom of PHP.ini contains:
;***** Added by go-pear
include_path=".;C:\WAMP\bin\php\php5.3.10\pear"
;*****
I also get the error:
Fatal error: require_once() [function.require]: Failed opening required 'PHPUnit/Framework.php' (include_path='.;C:\php\pear')
However, after looking in PHP.ini, there's no include path that points to C:\php\pear, so where is that coming from?
The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:
new_project/
    antigravity/
        antigravity.py
    test/
        test_antigravity.py
    setup.py
    etc.
For example, see this Python project howto.
My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail because the module is not on the path.
I know I could modify PYTHONPATH and use other search-path-related tricks, but I can't believe that's the simplest way; it's fine if you're the developer, but it's not realistic to expect your users to do that if they just want to check that the tests pass.
The other alternative is just to copy the test file into the other directory, but it seems a bit dumb and misses the point of having them in a separate directory to start with.
So, if you had just downloaded the source to my new project how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests do X."
I'm using Selenium's IWebDriver to write Unit Tests in C#.
Here is an example:
IWebDriver defaultDriver = new InternetExplorerDriver();
var ddl = defaultDriver.FindElements(By.TagName("select"));
The last line retrieves the select HTML elements, each wrapped in an IWebElement.
I need a way to simulate selecting a specific option in that select list, but I can't figure out how to do it.
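What I'm after is something along these lines, although I don't know whether this is the intended way with WebDriver (a sketch only; the option text is made up):
// Sketch: grab the <option> children of the first <select> and click the one
// whose visible text matches, to simulate a user selecting it.
var options = ddl[0].FindElements(By.TagName("option"));
foreach (var option in options)
{
    if (option.Text == "Some option text")
    {
        option.Click();
        break;
    }
}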
Upon some research, I found examples where people use the ISelenium / DefaultSelenium class to accomplish this, but I am not making use of that class because I'm doing everything with IWebDriver and INavigation (from defaultDriver.Navigate()).
I also noticed that ISelenium DefaultSelenium contains a ton of other methods that aren't available in the concrete implementations of IWebDriver.
So is there any way I can use IWebDriver and INavigation in conjunction with ISelenium / DefaultSelenium?
So I had a class that referenced a class that referenced another class that called a web service.
So I learned how to create an interface using partial classes.
I inject the web service through the constructor.
Then my unit test failed because I was newing up the actual web service in the second-level class. So I ended up modifying all three classes to pass the web service down through the constructors... I was not happy :-( and gave up.
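To make the structure concrete, the code is roughly shaped like this (hypothetical names, heavily simplified):
// Hypothetical names, just to show the chain: every class now has to accept
// the web service dependency and pass it down to the level below it.
public interface IPricingService   // interface extracted from the web service via partial classes
{
    decimal GetPrice(string productCode);
}

public class OrderProcessor
{
    private readonly PriceCalculator _calculator;

    public OrderProcessor(IPricingService pricingService)
    {
        _calculator = new PriceCalculator(pricingService);
    }
}

public class PriceCalculator
{
    private readonly DiscountResolver _resolver;

    public PriceCalculator(IPricingService pricingService)
    {
        _resolver = new DiscountResolver(pricingService);
    }
}

public class DiscountResolver
{
    private readonly IPricingService _pricingService;

    public DiscountResolver(IPricingService pricingService)
    {
        _pricingService = pricingService;   // this is the level that actually calls the web service
    }
}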
What should I be doing in this case?
I've created a method that calculates the harmonic mean based on a list of doubles.
But when I run the test it keeps failing, even though the output results appear to be the same.
My harmonic mean method:
public static double GetHarmonicMean(List<double> parameters)
{
    var cumReciprocal = 0.0d;
    var countN = parameters.Count;

    foreach (var param in parameters)
    {
        cumReciprocal += 1.0d / param;
    }

    return 1.0d / (cumReciprocal / countN);
}
My test method:
[TestMethod()]
public void GetHarmonicMeanTest()
{
    var parameters = new List<double> { 1.5d, 2.3d, 2.9d, 1.9d, 5.6d };
    const double expected = 2.32432293165495;

    var actual = OwnFunctions.GetHarmonicMean(parameters);

    Assert.AreEqual(expected, actual);
}
After running the test the following message is showing:
Assert.AreEqual failed. Expected:<2.32432293165495>. Actual:<2.32432293165495>.
To me those are both the same value.
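One thing I considered is whether the two values only look the same because of rounding in the failure message; printing them with the round-trip format would show that (a quick sketch, not part of the test above):
// Quick sketch: print both values with the round-trip ("R") format to see whether
// they are really bit-for-bit equal or only equal when rounded for display.
const double expected = 2.32432293165495;
var actual = OwnFunctions.GetHarmonicMean(parameters);
Console.WriteLine(expected.ToString("R"));
Console.WriteLine(actual.ToString("R"));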
Can somebody explain this? Or am I doing something wrong?
There is a simple controller in which a query string is read in the constructor.
public class ProductController : Controller
{
    private string productName;

    public ProductController()
    {
        productName = Request.QueryString["productname"];
    }

    public ActionResult Index()
    {
        ViewData["Message"] = productName;
        return View();
    }
}
I also have a unit test method that creates an instance of this controller and fills the query string with a Mock object, like below.
[TestClass]
public class ProductControllerTest
{
    [TestMethod]
    public void test()
    {
        // Arrange
        var querystring = new System.Collections.Specialized.NameValueCollection { { "productname", "sampleproduct" } };
        var mock = new Mock<ControllerContext>();
        mock.SetupGet(p => p.HttpContext.Request.QueryString).Returns(querystring);
        var controller = new ProductController();
        controller.ControllerContext = mock.Object;

        // Act
        var result = controller.Index() as ViewResult;

        // Assert
        Assert.AreEqual("Index", result.ViewName);
    }
}
Unfortunately, Request.QueryString["productname"] is null in the constructor of ProductController when I run the unit test.
Is there any way to fill a query string via mocking and read it in the constructor of a controller?
I have two classes:
public abstract class AbstractFoobar { ... }
and
public class ConcreteFoobar extends AbstractFoobar { ... }
I have corresponding test classes for these two classes:
public class AbstractFoobarTest { ... }
and
public class ConcreteFoobarTest extends AbstractFoobarTest { ... }
When I run ConcreteFoobarTest (in JUnit), the annotated @Test methods in AbstractFoobarTest get run along with those declared directly on ConcreteFoobarTest because they are inherited.
Is there any way to skip them?
I am quite new to TDD and am going with NUnit and Moq. I have a method where I expect an exception, so I wanted to play a little with the frameworks' features.
My test code looks as follows:
[Test]
[ExpectedException(ExpectedException = typeof(MockException), ExpectedMessage = "Actual differs from expected")]
public void Write_MessageLogWithCategoryInfoFail()
{
    string message = "Info Test Message";
    Write_MessageLogWithCategory(message, "Info");

    _LogTest.Verify(writeMessage =>
        writeMessage.Info("This should fail"),
        "Actual differs from expected"
    );
}
But I always receive an error saying that the actual exception message differs from the expected message. What am I doing wrong?
We have split our Grails application into several in-place plugins.
Now we want to have the tests in the same plugin as the classes they test.
Is it possible to configure our application (e.g. in BuildConfig.groovy) so that the tests in the plugins are executed too when we run "test-app"?
Can anyone help me explain how TimeProvider.Current can become null in the following class?
public abstract class TimeProvider
{
    private static TimeProvider current =
        DefaultTimeProvider.Instance;

    public static TimeProvider Current
    {
        get { return TimeProvider.current; }
        set
        {
            if (value == null)
            {
                throw new ArgumentNullException("value");
            }
            TimeProvider.current = value;
        }
    }

    public abstract DateTime UtcNow { get; }

    public static void ResetToDefault()
    {
        TimeProvider.current = DefaultTimeProvider.Instance;
    }
}
Observations
All unit tests that directly reference TimeProvider also invoke ResetToDefault() in their fixture teardown.
There is no multithreaded code involved.
Once in a while, one of the unit tests fail because TimeProvider.Current is null (NullReferenceException is thrown).
This only happens when I run the entire suite, but not when I just run a single unit test, suggesting to me that there is some subtle test interdependence going on.
It happens approximately once every five or six test runs.
When a failure occurs, it seems to occur in the first executed test that involves TimeProvider.Current.
More than one test can fail, but only one fails in a given test run.
FWIW, here's the DefaultTimeProvider class as well:
public class DefaultTimeProvider : TimeProvider
{
    private readonly static DefaultTimeProvider instance =
        new DefaultTimeProvider();

    private DefaultTimeProvider() { }

    public override DateTime UtcNow
    {
        get { return DateTime.UtcNow; }
    }

    public static DefaultTimeProvider Instance
    {
        get { return DefaultTimeProvider.instance; }
    }
}
I suspect that there's some subtle interplay going on with static initialization where the runtime is actually allowed to access TimeProvider.Current before all static initialization has finished, but I can't quite put my finger on it.
Any help is appreciated.
I'm unable to run any Selenium tests since I updated Firefox to 3.6. Is it happening just to me, or is it happening to everybody?
The error message I get is: Could not start Selenium session: Failed to start browser session
We are running a project that we want to approach with test-driven development. I thought about some questions that came up when initiating the project. One question was: who should write the unit tests for a feature? Should the unit tests be written by the feature-implementing programmer? Or should they be written by another programmer, who defines what a method should do, while the feature-implementing programmer implements the method until the tests pass?
If I understand the concept of TDD correctly, the feature-implementing programmer has to write the tests himself, because TDD is a procedure with mini-iterations, so it would be too complex to have the tests written by another programmer. Is that right?
What would you say? Should the tests in TDD be written by the programmer himself, or should another programmer write the tests that describe what a method should do?
Is there any good framework for comparing whole objects?
Right now I do:
assertEquals("[email protected]", obj.email);
assertEquals("5", obj.shop);
If a bad email is returned, I never get to know whether it had the right shop; I would like to get a list of the incorrect fields.
I am using JUnit to write some higher level tests for legacy code that does not have unit tests.
Much of this code "swallows" a variety of unchecked exceptions like NullPointerException (e.g., by just printing the stack trace and returning null). Therefore the unit test can pass even though there is a cascade of disasters at various points in the lower-level code.
Is there any way to have a test fail on the first unchecked exception, even if it is swallowed?
The only alternative I can think of is to write a custom JUnit wrapper that redirects System.err and then analyzes the output for exceptions.
Django (1.2 beta) resets the database(s) between each test that runs, meaning each test runs against an empty DB. However, the database(s) are not flushed. One of the effects of flushing the database is that the auto_increment counters are reset.
Consider a test which pulls data out of the database by primary key:
class ChangeLogTest(django.test.TestCase):
    def test_one(self):
        do_something_which_creates_two_log_entries()

        log = LogEntry.objects.get(id=1)
        assert_log_entry_correct(log)

        log = LogEntry.objects.get(id=2)
        assert_log_entry_correct(log)
This will pass because only two log entries were ever created. However, if another test is added to ChangeLogTest and it happens to run before test_one, the primary keys of the log entries are no longer 1 and 2; they might be 2 and 3, and now test_one fails.
This is actually a two part question:
Is it possible to force ./manage.py test to flush the database between each test case?
Since Django doesn't flush the DB between each test by default, maybe there is a good reason. Does anyone know?
Is it better to write/record Selenium tests in HTML format and run them directly on the server with "-htmlSuite", or to write the tests in Java/C#/... and run them on the server using Selenium RC?
What is the recommended solution?