Hello all,
I am currently developing a location-based iPhone application. Is there any way to test the app other than taking the iPhone to different places?
Thanks
In one of my projects I need to integrate with several backend systems. Some of them are somewhat lacking in documentation, which is partly why I have test code that interacts with their test servers just to check that everything works as expected. However, accessing these servers is quite slow, so I do not want to run these tests every time I run my test suite.
My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable, 'BACKEND_TEST', and each test I want to skip starts with a conditional that checks whether the variable is set. But sometimes I would like to skip all the tests in a test file without having to add an extra line at the beginning of each test.
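Roughly, every backend test currently starts like this (the test name is just an example):

def test_fetch_orders_from_backend
  return unless ENV['BACKEND_TEST']   # silently skipped when BACKEND_TEST is not set
  # ... talk to the slow backend test server here ...
end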
The tests which have to interact with the test servers are not many, as I use flexmock in other situations. However, you can't mock yourself away from reality.
As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.
I'm using Selenium's IWebDriver to write Unit Tests in C#.
Here is an example:
IWebDriver defaultDriver = new InternetExplorerDriver();
var ddl = defaultDriver.FindElements(By.TagName("select"));
The last line retrieves the select HTML elements, each wrapped in an IWebElement.
I need a way to simulate selecting a specific option in that select list, but I can't figure out how to do it.
After some research, I found examples where people use the ISelenium / DefaultSelenium class to accomplish this, but I am not using that class because I'm doing everything with IWebDriver and INavigation (from defaultDriver.Navigate()).
I also noticed that ISelenium / DefaultSelenium contains a ton of other methods that aren't available in the concrete implementations of IWebDriver.
So is there any way I can use IWebDriver and INavigation in conjunction with ISelenium / DefaultSelenium?
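From more digging, the WebDriver support library (OpenQA.Selenium.Support.UI) seems to have a SelectElement wrapper; if I've understood it correctly, something like the sketch below would do what I want. Is that the recommended route with IWebDriver, or do I really need DefaultSelenium?

using OpenQA.Selenium;
using OpenQA.Selenium.IE;
using OpenQA.Selenium.Support.UI;   // SelectElement lives in the WebDriver.Support assembly

IWebDriver defaultDriver = new InternetExplorerDriver();
defaultDriver.Navigate().GoToUrl("http://example.com/some-form");   // placeholder URL

// Wrap the first <select> element and pick an option by its visible text.
IWebElement selectTag = defaultDriver.FindElement(By.TagName("select"));
var select = new SelectElement(selectTag);
select.SelectByText("Some option");   // SelectByValue and SelectByIndex also exist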
Given an application, how can I measure
the amount of data read and written by that application?
the time spent reading/writing to disk?
The specific application is Java-based (JBoss), and multi-threaded, and running as a service on Windows 7/2008 x64.
The overall goal I have is determining whether and why file access is a bottleneck in my application. Therefore, running the application in a defined and repeatable scenario is a given.
File access may be local as well as on network shares.
Windows Performance Monitor appears to be too hard to use (unless someone can point me to a helpful explanation).
Any ideas?
The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:
new_project/
    antigravity/
        antigravity.py
    test/
        test_antigravity.py
    setup.py
    etc.
For example, see this Python project howto.
My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail because the module is not on the path.
I know I could modify PYTHONPATH and use other search-path tricks, but I can't believe that's the simplest way; it's fine if you're the developer, but it's not realistic to expect your users to do that if they just want to check that the tests are passing.
The other alternative is just to copy the test file into the other directory, but it seems a bit dumb and misses the point of having them in a separate directory to start with.
So, if you had just downloaded the source to my new project how would you run the unit tests? I'd prefer an answer that would let me say to my users: "To run the unit tests do X."
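(One candidate I can think of is a tiny runner script next to setup.py; run from the project root it puts new_project/ on sys.path, so the import works, assuming antigravity/ is a package. I don't know whether this is what people usually do, though.)

# runtests.py -- run as "python runtests.py" from the project root
import unittest

if __name__ == "__main__":
    # Discover test_*.py files under test/; the project root is on sys.path
    # because it is the directory this script lives in.
    suite = unittest.defaultTestLoader.discover("test")
    unittest.TextTestRunner(verbosity=2).run(suite)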
We have a number of integration tests that fail when our staging server goes down for weekly maintenance. When the staging server is down, it sends a specific response that I can detect in my integration tests. When I get this response, I'm wondering whether it is possible to skip/ignore that test instead of failing it, even though it has already started running. This would keep our test reports a bit cleaner.
Does anybody have suggestions?
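For what it's worth, this is the kind of mechanism I'm hoping exists, sketched in JUnit 4 terms (the helper names are invented): a failed assumption marks the test as skipped rather than failed.

import static org.junit.Assume.assumeTrue;

import org.junit.Test;

public class OrderSyncIntegrationTest {

    @Test
    public void ordersAreSynced() {
        String response = stagingClient().ping();   // hypothetical client helper

        // If we got the maintenance response, skip the rest of the test
        // instead of failing it.
        assumeTrue(!isMaintenanceResponse(response));

        // ... the real assertions would go here ...
    }

    // Placeholders so the sketch is self-contained.
    private StagingClient stagingClient() { return new StagingClient(); }

    private boolean isMaintenanceResponse(String response) {
        return response != null && response.contains("maintenance");
    }

    private static class StagingClient {
        String ping() { return "ok"; }   // stand-in for the real call
    }
}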
Currently we are generating HTML reports for our automation, but those reports are not good enough to convey the number of scenarios we cover in automation. Is there anything we can use with Selenium to generate proper reports that give a complete overview and can be easily understood by anyone?
First, we would like to show a pie chart covering the number of test cases passed and failed.
Second, we would like to show which test cases are included in this build.
I've created a method that calculates the harmonic mean based on a list of doubles.
But when I run the test it keeps failing, even though the output results look the same.
My harmonic mean method:
public static double GetHarmonicMean(List<double> parameters)
{
    var cumReciprocal = 0.0d;
    var countN = parameters.Count;

    foreach (var param in parameters)
    {
        cumReciprocal += 1.0d / param;
    }

    return 1.0d / (cumReciprocal / countN);
}
My test method:
[TestMethod()]
public void GetHarmonicMeanTest()
{
    var parameters = new List<double> { 1.5d, 2.3d, 2.9d, 1.9d, 5.6d };
    const double expected = 2.32432293165495;

    var actual = OwnFunctions.GetHarmonicMean(parameters);

    Assert.AreEqual(expected, actual);
}
After running the test, the following message is shown:
Assert.AreEqual failed. Expected:<2.32432293165495>. Actual:<2.32432293165495>.
To me those are both the same value.
Can somebody explain this? Or am I doing something wrong?
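(If it's just floating-point noise in digits beyond what the message displays, I guess the assert overload that takes a tolerance would cover it; a quick sketch, with the delta chosen arbitrarily:)

const double expected = 2.32432293165495;
var actual = OwnFunctions.GetHarmonicMean(parameters);

// Overload with a tolerance: the third argument is the allowed difference.
Assert.AreEqual(expected, actual, 0.00000001);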
So I had a class that referenced a class that referenced another class that called a web service.
So I learned how to create an interface using partial classes.
I inject the web service through the constructor.
Then my unit test fails because I am newing up the actual web service at the second level of the chain. So I end up modifying all three classes to pass the web service down through their constructors... was not happy :-( gave up...
What should I be doing in this case?
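To make the shape concrete, this is roughly what the modified classes looked like before I gave up; all names are invented for the example.

// Hand-written interface over the generated web service proxy.
public interface IOrderService
{
    string GetOrderStatus(int orderId);
}

// The real implementation delegates to the actual web service proxy.
public class OrderServiceClient : IOrderService
{
    public string GetOrderStatus(int orderId)
    {
        // ... call the generated web service proxy here ...
        return "unknown";   // placeholder
    }
}

// The middle class no longer news up the web service; it gets handed in.
public class OrderProcessor
{
    private readonly IOrderService orders;

    public OrderProcessor(IOrderService orders)
    {
        this.orders = orders;
    }
}

In the unit test, the top-level class then gets a fake IOrderService instead of the real client.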
I have two classes:
public abstract class AbstractFoobar { ... }
and
public class ConcreteFoobar extends AbstractFoobar { ... }
I have corresponding test classes for these two classes:
public class AbstractFoobarTest { ... }
and
public class ConcreteFoobarTest extends AbstractFoobarTest { ... }
When I run ConcreteFoobarTest (in JUnit), the annotated @Test methods in AbstractFoobarTest get run along with those declared directly on ConcreteFoobarTest because they are inherited.
Is there any way to skip them?
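One workaround I can think of, though it feels clumsy, is to override each inherited test in the subclass and mark it ignored, roughly like this (the method name is just an example):

import org.junit.Ignore;
import org.junit.Test;

public class ConcreteFoobarTest extends AbstractFoobarTest {

    @Ignore("already covered by AbstractFoobarTest")
    @Test
    @Override
    public void testSomeAbstractBehaviour() {
        // intentionally empty: only here to shadow the inherited test
    }

    // ... tests that belong specifically to ConcreteFoobar ...
}

Is there something less manual than that?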
Here is a simple controller whose constructor reads a query string value:
public class ProductController : Controller
{
    private string productName;

    public ProductController()
    {
        productName = Request.QueryString["productname"];
    }

    public ActionResult Index()
    {
        ViewData["Message"] = productName;
        return View();
    }
}
I also have a unit test method that creates an instance of this controller and fills the query string with a Moq mock, like below:
[TestClass]
public class ProductControllerTest
{
    [TestMethod]
    public void test()
    {
        // Arrange
        var querystring = new System.Collections.Specialized.NameValueCollection { { "productname", "sampleproduct" } };

        var mock = new Mock<ControllerContext>();
        mock.SetupGet(p => p.HttpContext.Request.QueryString).Returns(querystring);

        var controller = new ProductController();
        controller.ControllerContext = mock.Object;

        // Act
        var result = controller.Index() as ViewResult;

        // Assert
        Assert.AreEqual("Index", result.ViewName);
    }
}
Unfortunately, Request.QueryString["productname"] is null in the constructor of ProductController when I run the unit test.
Is there any way to fill a query string with a mock and read it in the constructor of a controller?
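(For comparison, reading the value in the action instead of the constructor should see the mocked query string, since ControllerContext is attached after the controller is constructed; the sketch below is just to illustrate that, but I'd like to keep the read in the constructor.)

public class ProductController : Controller
{
    public ActionResult Index()
    {
        // By the time an action runs, ControllerContext (and thus Request)
        // has been set, so the mocked QueryString is visible here.
        ViewData["Message"] = Request.QueryString["productname"];
        return View();
    }
}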
Here is the mystery:
I have a scope which looks like this (in Image.rb):
scope :moderate_all, delegates.where("moderation_flag = #{$moderation_flags[:not_moderated]}")
Note that delegates is another scope that I define before moderate_all.
I have a test that checks that once an image has been "checked out" it is no longer available. I won't include the test code, because it doesn't actually matter here.
With the scope written as above, the test fails when I run "rake test", but passes when I run "ruby test/unit/image_test.rb" directly! I was starting to think I was having a bad day. Then I tried:
scope :moderate_all, lambda {
delegates.where("moderation_flag = #{$moderation_flags[:not_moderated]}")
}
And "rake test" passes!
So my problem is solved, but why?
I'm unable to run any Selenium tests since I updated Firefox to 3.6. Is this happening just to me, or is it affecting everybody?
The error message I get is: Could not start Selenium session: Failed to start browser session
Is there any good framework for comparing whole objects?
Right now I do:
assertEquals("[email protected]", obj.email);
assertEquals("5", obj.shop);
If a bad email is returned, I never get to know whether it had the right shop; I would like to get a list of all the incorrect fields.
Django (1.2 beta) will reset the database(s) between every test that runs, meaning each test runs on an empty DB. However, the database(s) are not flushed. One of the effects of flushing the database is that the auto_increment counters are reset.
Consider a test which pulls data out of the database by primary key:
class ChangeLogTest(django.test.TestCase):
    def test_one(self):
        do_something_which_creates_two_log_entries()

        log = LogEntry.objects.get(id=1)
        assert_log_entry_correct(log)

        log = LogEntry.objects.get(id=2)
        assert_log_entry_correct(log)
This will pass because only two log entries were ever created. However, if another test is added to ChangeLogTest and it happens to run before test_one, the primary keys of the log entries are no longer 1 and 2, they might be 2 and 3. Now test_one fails.
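(Just to illustrate the dependence on the counters: a version of the test that doesn't hard-code the ids could grab the newest rows instead, though that sidesteps the question rather than answering it.)

class ChangeLogTest(django.test.TestCase):
    def test_one(self):
        do_something_which_creates_two_log_entries()

        # Fetch the two most recently created entries instead of assuming ids 1 and 2.
        second, first = LogEntry.objects.order_by('-id')[:2]
        assert_log_entry_correct(first)
        assert_log_entry_correct(second)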
This is actually a two part question:
Is it possible to force ./manage.py test to flush the database between each test case?
Since Django doesn't flush the DB between each test by default, maybe there is a good reason. Does anyone know?
I am quite new to TDD and am going with NUnit and Moq. I have a method where I expect an exception, so I wanted to play a little with the frameworks' features.
My test code looks as follows:
[Test]
[ExpectedException(ExpectedException = typeof(MockException), ExpectedMessage = "Actual differs from expected")]
public void Write_MessageLogWithCategoryInfoFail()
{
    string message = "Info Test Message";
    Write_MessageLogWithCategory(message, "Info");

    _LogTest.Verify(writeMessage =>
        writeMessage.Info("This should fail"),
        "Actual differs from expected"
    );
}
But the test always fails with a message saying that the actual exception message differs from the expected message. What am I doing wrong?
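(In case it's relevant: I believe NUnit 2.5's attribute also supports substring matching via MatchType, so a looser variant would presumably look like the following, though I'm not sure I have that right either.)

[Test]
[ExpectedException(ExpectedException = typeof(MockException),
                   ExpectedMessage = "Actual differs from expected",
                   MatchType = MessageMatch.Contains)]   // match as a substring instead of exactly
public void Write_MessageLogWithCategoryInfoFail()
{
    // ... same body as above ...
}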
Can anyone help me understand how TimeProvider.Current can become null in the following class?
public abstract class TimeProvider
{
    private static TimeProvider current =
        DefaultTimeProvider.Instance;

    public static TimeProvider Current
    {
        get { return TimeProvider.current; }
        set
        {
            if (value == null)
            {
                throw new ArgumentNullException("value");
            }
            TimeProvider.current = value;
        }
    }

    public abstract DateTime UtcNow { get; }

    public static void ResetToDefault()
    {
        TimeProvider.current = DefaultTimeProvider.Instance;
    }
}
Observations
All unit tests that directly reference TimeProvider also invoke ResetToDefault() in their fixture teardown.
There is no multithreaded code involved.
Once in a while, one of the unit tests fail because TimeProvider.Current is null (NullReferenceException is thrown).
This only happens when I run the entire suite, but not when I just run a single unit test, suggesting to me that there is some subtle test interdependence going on.
It happens approximately once every five or six test runs.
When a failure occurs, it seems to happen in one of the first executed tests that involve TimeProvider.Current.
More than one test can fail, but only one fails in a given test run.
FWIW, here's the DefaultTimeProvider class as well:
public class DefaultTimeProvider : TimeProvider
{
    private static readonly DefaultTimeProvider instance =
        new DefaultTimeProvider();

    private DefaultTimeProvider() { }

    public override DateTime UtcNow
    {
        get { return DateTime.UtcNow; }
    }

    public static DefaultTimeProvider Instance
    {
        get { return DefaultTimeProvider.instance; }
    }
}
I suspect that there's some subtle interplay going on with static initialization where the runtime is actually allowed to access TimeProvider.Current before all static initialization has finished, but I can't quite put my finger on it.
Any help is appreciated.
In my MSTest unit test project, before running any tests, I need to execute some commands. Is there a feature, kind of like Global.asax for web-based projects, that will let me kick off something before any tests run?
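To sketch what I'm picturing, assuming MSTest's assembly-level attributes are the right feature for this (class and method names are arbitrary):

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class GlobalTestSetup
{
    // Runs once, before any test in the assembly.
    [AssemblyInitialize]
    public static void RunBeforeAnyTests(TestContext context)
    {
        // execute the one-off commands here
    }

    // Optional counterpart that runs once after all tests have finished.
    [AssemblyCleanup]
    public static void RunAfterAllTests()
    {
    }
}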
Is it better to write/record Selenium tests in HTML format and run them directly on the server with "-htmlSuite", or to write the tests in Java/C#/... and run them on the server using Selenium RC?
What is the recommended solution?
For example:
// NUnit-like pseudo code (within a TestFixture)
Ctor()
{
    m_globalVar = getFoo();
}

[Test]
Create()
{
    a(m_globalVar);
}

[Test]
Delete()
{
    // depends on Create being run
    b(m_globalVar);
}
… or…
// NUnit-like pseudo code (within a TestFixture)
[Test]
CreateAndDelete()
{
    Foo foo = getFoo();
    a(foo);

    // depends on Create being run
    b(foo);
}
… I'm going with the latter, and assuming that the answer to my question is:
No, at least not with NUnit, because according to the NUnit manual:
The constructor should not have any side effects, since NUnit may construct the class multiple times in the course of a session.
... also, can I assume this is bad practice in general, since tests can usually be run separately, and so the result of Create might never be cleaned up by Delete?
We have split our Grails application into several in-place plugins.
Now we want the tests to live in the same plugin as the classes they test.
Is it possible to configure our application (e.g. in BuildConfig.groovy) so that the tests in the plugins are executed too when we run "test-app"?
I am using JUnit to write some higher level tests for legacy code that does not have unit tests.
Much of this code "swallows" a variety of unchecked exceptions like NullPointerException (e.g., by just printing the stack trace and returning null). Therefore the unit test can pass even though there is a cascade of disasters at various points in the lower-level code.
Is there any way to have a test fail on the first unchecked exception even if they are swallowed?
The only alternative I can think of is to write a custom JUnit wrapper that redirects System.err and then analyzes the output for exceptions.
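To make that alternative concrete, the rough shape I have in mind is a JUnit 4 @Rule that captures System.err and fails the test if anything that looks like a stack trace shows up (names are invented and the heuristic is deliberately crude):

import java.io.ByteArrayOutputStream;
import java.io.PrintStream;

import org.junit.rules.ExternalResource;

// Captures System.err for the duration of each test; after the test, fails if
// anything that looks like an exception was printed.
public class FailOnSwallowedExceptionRule extends ExternalResource {

    private final ByteArrayOutputStream captured = new ByteArrayOutputStream();
    private PrintStream originalErr;

    @Override
    protected void before() {
        originalErr = System.err;
        System.setErr(new PrintStream(captured, true));
    }

    @Override
    protected void after() {
        System.setErr(originalErr);
        String output = captured.toString();
        if (output.contains("Exception")) {   // crude check for a printed stack trace
            throw new AssertionError("Swallowed exception detected on System.err:\n" + output);
        }
    }
}

// In a test class it would be used as:
//   @Rule
//   public FailOnSwallowedExceptionRule failOnSwallowed = new FailOnSwallowedExceptionRule();

Is there a cleaner, more reliable way than scraping System.err?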
We are running a project that we want to approach with test-driven development, and some questions came up when starting it. One question was: who should write the unit tests for a feature? Should the unit tests be written by the feature-implementing programmer? Or should they be written by another programmer, who defines what a method should do, while the feature-implementing programmer implements the method until the tests pass?
If I understand the concept of TDD correctly, the feature-implementing programmer has to write the tests himself, because TDD is a procedure of mini-iterations, so it would be too cumbersome to have the tests written by another programmer. Is that right?
What would you say? Should the tests in TDD be written by the implementing programmer himself, or should another programmer write the tests that describe what a method should do?