Given an application, how can I measure:
- the amount of data read and written by that application?
- the time spent reading/writing to disk?
The specific application is Java-based (JBoss), and multi-threaded, and running as a service on Windows 7/2008 x64.
My overall goal is to determine whether and why file access is a bottleneck in my application; running the application in a defined, repeatable scenario is therefore a given.
File access may be local as well as on network shares.
Windows performance monitor appears to be too hard to use (unless someone can point me to a helpful explanation).
Any ideas?
Currently we are generating HTML reports for our automation runs, but those reports are not good enough to convey the number of scenarios we cover. Is there anything we can use with Selenium to generate proper reports that give a complete overview and can be easily understood by anyone?
First, we would like to show a pie chart covering the number of test cases passed and failed.
Second, we would like to show which test cases are included in this build.
So I had a class that referenced a class that referenced another class that called a web service.
So I learned how to create an interface using partial classes.
I inject the web service through the constructor.
Then my unit test fails because I am newing up the actual web service in the second level of the class. So I ended up modifying all three classes to pass the web service down through the constructor... I was not happy :-( and gave up.
What should I be doing in this case?
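A hedged sketch of the usual way out: extract an interface for the web service (e.g. via a partial class on the generated proxy) and let only the innermost class depend on it, so the intermediate classes never new it up themselves. IOrderService and OrderReader are hypothetical names:

// Hypothetical interface; a partial class on the generated proxy
// can declare that the proxy implements it.
public interface IOrderService
{
    string GetOrderStatus(int orderId);
}

// Only the class that actually calls the service takes the dependency.
public class OrderReader
{
    private readonly IOrderService service;

    public OrderReader(IOrderService service)
    {
        this.service = service;
    }

    public string ReadStatus(int orderId)
    {
        return service.GetOrderStatus(orderId);
    }
}

The outer classes can then take an OrderReader (or a factory for one) instead of threading the raw web service through every constructor, which is what forced the three-class change.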
Django (1.2 beta) will reset the database(s) between every test that runs, meaning each test runs on an empty DB. However, the database(s) are not flushed. One of the effects of flushing the database is the auto_increment counters are reset.
Consider a test which pulls data out of the database by primary key:
class ChangeLogTest(django.test.TestCase):
    def test_one(self):
        do_something_which_creates_two_log_entries()
        log = LogEntry.objects.get(id=1)
        assert_log_entry_correct(log)
        log = LogEntry.objects.get(id=2)
        assert_log_entry_correct(log)
This will pass because only two log entries were ever created. However, if another test is added to ChangeLogTest and it happens to run before test_one, the primary keys of the log entries are no longer 1 and 2, they might be 2 and 3. Now test_one fails.
This is actually a two part question:
Is it possible to force ./manage.py test to flush the database between each test case?
Since Django doesn't flush the DB between each test by default, maybe there is a good reason. Does anyone know?
I've created a method that calculates the harmonic mean from a list of doubles.
But when I run the test it keeps failing, even though the output results appear to be the same.
My harmonic mean method:
public static double GetHarmonicMean(List<double> parameters)
{
    var cumReciprocal = 0.0d;
    var countN = parameters.Count;

    foreach (var param in parameters)
    {
        cumReciprocal += 1.0d / param;
    }

    return 1.0d / (cumReciprocal / countN);
}
My test method:
[TestMethod()]
public void GetHarmonicMeanTest()
{
    var parameters = new List<double> { 1.5d, 2.3d, 2.9d, 1.9d, 5.6d };
    const double expected = 2.32432293165495;

    var actual = OwnFunctions.GetHarmonicMean(parameters);

    Assert.AreEqual(expected, actual);
}
After running the test the following message is showing:
Assert.AreEqual failed. Expected:<2.32432293165495>. Actual:<2.32432293165495>.
To me those are both the same value.
Can somebody explain this? Or am I doing something wrong?
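The two doubles almost certainly differ a few digits beyond the 15 shown in the failure message, so exact equality fails. A minimal sketch of comparing with a tolerance instead, assuming MSTest's delta overload of Assert.AreEqual:

[TestMethod()]
public void GetHarmonicMeanTest()
{
    var parameters = new List<double> { 1.5d, 2.3d, 2.9d, 1.9d, 5.6d };
    const double expected = 2.32432293165495;

    var actual = OwnFunctions.GetHarmonicMean(parameters);

    // Exact equality on doubles fails when values differ only in the
    // last few bits; compare within an absolute tolerance instead.
    Assert.AreEqual(expected, actual, 1e-10);
}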
I'm unable to run any Selenium tests since I updated Firefox to 3.6. Is it happening just to me, or to everybody?
The error message I get is: Could not start Selenium session: Failed to start browser session
Is there any good framework for comparing whole objects?
Right now I do:
assertEquals("[email protected]", obj.email);
assertEquals("5", obj.shop);
If a bad email is returned, I never get to know whether it had the right shop; I would like to get a list of all incorrect fields.
There are two directories in a clojure project - src/ and test/.
There's a file my_methods.clj in the src/calc/ directory which starts with
(ns calc.my_methods...).
I want to create a test file for it in the test directory - test/my_methods-test.clj:
(ns test.my_methods-test
  (:require [calc.my_methods])
  (:use clojure.test))
Both the project root directory and the src/ directory are on the $CLASSPATH, but the exception is still
"Could not locate calc/my_methods__init.class or calc/my_methods.clj on classpath". What is the problem with requiring it in the test file?
echo $CLASSPATH gives this:
~/project:~/project/src
Is it better to write/record Selenium tests in HTML format and run them directly on the server with "-htmlSuite", or to write the tests in Java/C#/... and run them on the server using Selenium RC?
What is the recommended solution?
In my MSTest UnitTest project, before running any tests, I need to execute some commands. Is there a feature, kind of like Global.asax is for web based projects, that will let me kick off something before any tests run?
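A minimal sketch, assuming MSTest's [AssemblyInitialize] hook, which runs once before any test in the assembly:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class GlobalTestSetup
{
    // Runs exactly once, before any test in this assembly executes.
    [AssemblyInitialize]
    public static void AssemblyInit(TestContext context)
    {
        // Execute the one-time commands here.
    }

    // Optional counterpart: runs once after all tests have finished.
    [AssemblyCleanup]
    public static void AssemblyCleanup()
    {
    }
}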
People are still wondering what the differences are between the two most popular unit testing frameworks in the .NET world: the open-source NUnit and the commercial MSTest. Here's a short list of what I remember instantly:
- NUnit contains a [TestCase] attribute that allows implementing parameterized tests; this does not exist in MSTest.
- MSTest's ExpectedException attribute has a bug where the expected message is never actually asserted; even if it's wrong, the test will pass.
- NUnit has an...
Can anyone help me explain how TimeProvider.Current can become null in the following class?
public abstract class TimeProvider
{
    private static TimeProvider current =
        DefaultTimeProvider.Instance;

    public static TimeProvider Current
    {
        get { return TimeProvider.current; }
        set
        {
            if (value == null)
            {
                throw new ArgumentNullException("value");
            }
            TimeProvider.current = value;
        }
    }

    public abstract DateTime UtcNow { get; }

    public static void ResetToDefault()
    {
        TimeProvider.current = DefaultTimeProvider.Instance;
    }
}
Observations
All unit tests that directly reference TimeProvider also invoke ResetToDefault() in their fixture teardown.
There is no multithreaded code involved.
Once in a while, one of the unit tests fail because TimeProvider.Current is null (NullReferenceException is thrown).
This only happens when I run the entire suite, but not when I just run a single unit test, suggesting to me that there is some subtle test interdependence going on.
It happens approximately once every five or six test runs.
When a failure occurs, it seems to occur in the first executed test that involves TimeProvider.Current.
More than one test can fail, but only one fails in a given test run.
FWIW, here's the DefaultTimeProvider class as well:
public class DefaultTimeProvider : TimeProvider
{
    private readonly static DefaultTimeProvider instance =
        new DefaultTimeProvider();

    private DefaultTimeProvider() { }

    public override DateTime UtcNow
    {
        get { return DateTime.UtcNow; }
    }

    public static DefaultTimeProvider Instance
    {
        get { return DefaultTimeProvider.instance; }
    }
}
I suspect that there's some subtle interplay going on with static initialization where the runtime is actually allowed to access TimeProvider.Current before all static initialization has finished, but I can't quite put my finger on it.
Any help is appreciated.
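For what it's worth, neither class declares an explicit static constructor, so both types are beforefieldinit and the runtime may run their type initializers lazily and interleaved, even though they reference each other. A sketch of forcing deterministic initialization with explicit static constructors (an assumption about the cause, not a confirmed fix):

public abstract class TimeProvider
{
    private static TimeProvider current = DefaultTimeProvider.Instance;

    // An explicit static constructor clears the beforefieldinit flag,
    // so the initializer runs just before the first static access.
    static TimeProvider() { }

    public static TimeProvider Current
    {
        get { return current; }
        set { current = value; } // null guard as in the original
    }

    public abstract DateTime UtcNow { get; }
}

public class DefaultTimeProvider : TimeProvider
{
    private static readonly DefaultTimeProvider instance =
        new DefaultTimeProvider();

    // Same reasoning: make this type's initialization deterministic too.
    static DefaultTimeProvider() { }

    private DefaultTimeProvider() { }

    public override DateTime UtcNow
    {
        get { return DateTime.UtcNow; }
    }

    public static DefaultTimeProvider Instance
    {
        get { return instance; }
    }
}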
We have split our Grails application into several in-place plugins.
Now we want to keep the tests in the same plugin as the classes they test.
Is it possible to configure our application (e.g. in BuildConfig.groovy) so that the tests in the plugins are executed too when we run "test-app"?
Here is the mystery:
I have a scope which looks like this (in Image.rb)
scope :moderate_all, delegates.where("moderation_flag = #{$moderation_flags[:not_moderated]}")
Note that delegates is another scope that I define before moderate_all.
When I leave it like this, I can run my test that checks that once an image has been "checked out" it is no longer available. I won't include the test code, because it doesn't actually matter.
With this code, when I run "rake test" it fails, but if I do "ruby test/unit/image_test.rb" it works! I was starting to think I was having a bad day. Then I tried:
scope :moderate_all, lambda {
  delegates.where("moderation_flag = #{$moderation_flags[:not_moderated]}")
}
And "rake test" passes!
So my problem is solved, but why?
I have two classes:
public abstract class AbstractFoobar { ... }
and
public class ConcreteFoobar extends AbstractFoobar { ... }
I have corresponding test classes for these two classes:
public class AbstractFoobarTest { ... }
and
public class ConcreteFoobarTest extends AbstractFoobarTest { ... }
When I run ConcreteFoobarTest (in JUnit), the annotated @Test methods in AbstractFoobarTest get run along with those declared directly on ConcreteFoobarTest because they are inherited.
Is there any way to skip them?
There is a simple controller whose constructor reads a query string parameter.
public class ProductController : Controller
{
    private string productName;

    public ProductController()
    {
        productName = Request.QueryString["productname"];
    }

    public ActionResult Index()
    {
        ViewData["Message"] = productName;
        return View();
    }
}
I also have a unit test method that creates an instance of this controller and fills the query string with a mock object, like below.
[TestClass]
public class ProductControllerTest
{
    [TestMethod]
    public void test()
    {
        // Arrange
        var querystring = new System.Collections.Specialized.NameValueCollection
        {
            { "productname", "sampleproduct" }
        };
        var mock = new Mock<ControllerContext>();
        mock.SetupGet(p => p.HttpContext.Request.QueryString).Returns(querystring);
        var controller = new ProductController();
        controller.ControllerContext = mock.Object;

        // Act
        var result = controller.Index() as ViewResult;

        // Assert
        Assert.AreEqual("Index", result.ViewName);
    }
}
Unfortunately, Request.QueryString["productname"] is null in the constructor of ProductController when I run the unit test.
Is there any way to fill a query string by mocking and read it in the constructor of a controller?
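For what it's worth, MVC assigns ControllerContext only after the constructor has already run, so Request cannot be mocked in time for constructor access. A minimal sketch of deferring the read until the context exists (an illustrative rework, not the only option):

public class ProductController : Controller
{
    // Read the query string lazily, after ControllerContext is assigned.
    private string ProductName
    {
        get { return Request.QueryString["productname"]; }
    }

    public ActionResult Index()
    {
        ViewData["Message"] = ProductName;
        return View();
    }
}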
I am quite new to TDD and am going with NUnit and Moq. I have a method where I expect an exception, so I wanted to play a little with the frameworks' features.
My test code looks as follows:
[Test]
[ExpectedException(ExpectedException = typeof(MockException), ExpectedMessage = "Actual differs from expected")]
public void Write_MessageLogWithCategoryInfoFail()
{
    string message = "Info Test Message";
    Write_MessageLogWithCategory(message, "Info");

    _LogTest.Verify(writeMessage =>
        writeMessage.Info("This should fail"),
        "Actual differs from expected"
    );
}
But the test always fails with an error saying that the actual exception message differs from the expected message. What am I doing wrong?
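One hedged guess: Moq wraps the message passed to Verify in its own failure report, so an exact ExpectedMessage match will not line up. A sketch of catching the exception and asserting on a substring instead, assuming NUnit's Assert.Throws and StringAssert:

[Test]
public void Write_MessageLogWithCategoryInfoFail()
{
    string message = "Info Test Message";
    Write_MessageLogWithCategory(message, "Info");

    // Capture the exception and assert on part of its message, since
    // Moq embeds the supplied text inside a longer failure report.
    var ex = Assert.Throws<MockException>(() =>
        _LogTest.Verify(writeMessage =>
            writeMessage.Info("This should fail"),
            "Actual differs from expected"));

    StringAssert.Contains("Actual differs from expected", ex.Message);
}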
I ran TestNG tests for my test classes and get the following three types of log output for the passing tests.
I am using org.testng.AssertJUnit.assertTrue() methods in my tests.
[testng] PASSED: testMethod1 on null(test.foo.bar.Class1)
[testng] PASSED: testMethod2 on Default test name(test.foo.bar.jar.Class2)
[testng] PASSED: testMethod3
Can anyone please tell me why some tests say "on null", some say "on Default test name ...", and some say nothing at all in the console output?
Specifically, I want to make the message consistent.
Environment: Linux
TestNG framework: 6.3.2beta
Please advise. Thanks.
We are running a project that we want to develop using test-driven development. I thought about some questions that came up when initiating the project. One question was: who should write the unit tests for a feature? Should the unit tests be written by the feature-implementing programmer? Or should they be written by another programmer, who defines what a method should do, while the feature-implementing programmer implements the method until the tests pass?
If I understand the concept of TDD correctly, the feature-implementing programmer has to write the tests himself, because TDD is a procedure of mini-iterations; it would be too complex to have the tests written by another programmer.
What would you say? Should the tests in TDD be written by the implementing programmer himself, or should another programmer write the tests that describe what a method should do?
Hi,
I am using the Google Application Engine plugin for Eclipse 3.4, and I have added unit tests in my projects.
The unit tests are in a source folder named tests, separated from the source folder src.
But the test classes end up in the generated war/classes directory.
Is there any way to keep test classes out of the generated war/classes directory?
Thanks.
UPDATE: I've changed the wording of the question. Previously it was a yes/no question about whether a base class could be changed at runtime.
I may be working on mission impossible here, but I seem to be getting close. I want to extend an ASP.NET control, and I want my code to be unit testable. Also, I'd like to be able to fake behaviors of a real Label (namely things like ID generation) that a real Label can't exhibit in an NUnit host.
Here is a working example that makes assertions on something that depends on a real base class and something that doesn't -- in a more realistic unit test, the test would depend on both, i.e. an ID existing and some custom behavior.
Anyhow the code says it better than I can:
public class LabelWrapper : Label //Runtime
//public class LabelWrapper : FakeLabel //Unit Test time
{
    private readonly LabelLogic logic = new LabelLogic();

    public override string Text
    {
        get
        {
            return logic.ProcessGetText(base.Text);
        }
        set
        {
            base.Text = logic.ProcessSetText(value);
        }
    }
}

//Ugh, now I have to test FakeLabelWrapper
public class FakeLabelWrapper : FakeLabel //Unit Test time
{
    private readonly LabelLogic logic = new LabelLogic();

    public override string Text
    {
        get
        {
            return logic.ProcessGetText(base.Text);
        }
        set
        {
            base.Text = logic.ProcessSetText(value);
        }
    }
}
[TestFixture]
public class UnitTest
{
    [Test]
    public void Test()
    {
        //Wish this was LabelWrapper label = new LabelWrapper(new FakeBase())
        LabelWrapper label = new LabelWrapper();
        //FakeLabelWrapper label = new FakeLabelWrapper();

        label.Text = "ToUpper";
        Assert.AreEqual("TOUPPER", label.Text);

        StringWriter stringWriter = new StringWriter();
        HtmlTextWriter writer = new HtmlTextWriter(stringWriter);
        label.RenderControl(writer);

        Assert.AreEqual(1, label.ID);
        Assert.AreEqual("<span>TOUPPER</span>", stringWriter.ToString());
    }
}
public class FakeLabel
{
    virtual public string Text { get; set; }

    public void RenderControl(TextWriter writer)
    {
        writer.Write("<span>" + Text + "</span>");
    }
}
//System Under Test
internal class LabelLogic
{
    internal string ProcessGetText(string value)
    {
        return value.ToUpper();
    }

    internal string ProcessSetText(string value)
    {
        return value.ToUpper();
    }
}
In the TDD (test-driven development) process, how do you deal with test data?
Assume this scenario: parsing a log file to get the needed columns. For a thorough test, how do I prepare the test data? And is it appropriate to locate such data files next to the test class files?
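A minimal sketch of one common approach: embed small, representative log lines directly in the test so the data lives next to the assertions (LogParser, ParseLine, and Order are hypothetical names for the system under test):

[TestFixture]
public class LogParserTest
{
    [Test]
    public void ParseLine_ExtractsNeededColumn()
    {
        // Inline data keeps the test self-contained; larger samples
        // could instead live in files placed next to the test class.
        const string line = "2010-06-01 12:00:00 INFO OrderService order=42";

        var parser = new LogParser();        // hypothetical parser
        var entry = parser.ParseLine(line);  // hypothetical API

        Assert.AreEqual("42", entry.Order);
    }
}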
For example:
// NUnit-like pseudo code (within a TestFixture)
Ctor()
{
    m_globalVar = getFoo();
}

[Test]
Create()
{
    a(m_globalVar)
}

[Test]
Delete()
{
    // depends on Create being run
    b(m_globalVar)
}
… or…
// NUnit-like pseudo code (within a TestFixture)
[Test]
CreateAndDelete()
{
    Foo foo = getFoo();
    a(foo);
    // depends on Create being run
    b(foo);
}
… I’m going with the latter, and assuming that the answer to my question is:
No, at least not with NUnit, because according to the NUnit manual:
The constructor should not have any side effects, since NUnit may construct the class multiple times in the course of a session.
... also, can I assume it's bad practice in general? Since tests can usually be run separately, the result of Create may never be cleaned up by Delete.
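For completeness, a sketch of the per-test alternative NUnit itself provides, which avoids both the constructor and cross-test dependencies (Foo, getFoo, a, and b are the placeholders from the pseudo code above):

[TestFixture]
public class FooTests
{
    private Foo foo;

    // Runs before each test, unlike the constructor, which NUnit
    // may invoke multiple times in the course of a session.
    [SetUp]
    public void SetUp()
    {
        foo = getFoo();
    }

    [Test]
    public void Create()
    {
        a(foo);
    }

    [Test]
    public void Delete()
    {
        // No longer depends on Create having run first.
        b(foo);
    }

    // Runs after each test, so whatever Create builds up
    // is cleaned up even when Delete never runs.
    [TearDown]
    public void TearDown()
    {
        // clean up foo here if needed
    }
}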
I recently obtained a Mac so I could test our sites on Safari and Firefox for Mac OS.
Now that Safari 5 is out, I'm not sure what I should do about upgrading. I presume that what works in Safari 4 will also work in Safari 5, and vice versa, but I can't be sure. So I don't know whether I should upgrade and test on Safari 5 or keep testing on Safari 4.
Are there any major differences between these two versions in terms of CSS (2.1) handling or JavaScript? When do you think the majority of people will have Safari 5 instead of 4?
All thoughts appreciated.