I have to register two users and simulate an interaction between them (for example, a conversation).
I can do the following: register the first user, then register the second; sign in with the first user's data, write a message to the second user, and sign out; then sign in with the second user's data, reply to the message, and sign out.
Is it possible to implement the users' conversation without signing out, given that the system requires users to have cookies enabled?
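A sketch of one option, assuming the system is an ordinary web application driven over HTTP (SignIn and SendMessage are hypothetical helpers, not part of the question): give each simulated user its own cookie container, so the two sessions coexist and neither user ever has to sign out.

using System.Net;
using System.Net.Http;

// Each HttpClient gets its own CookieContainer, so the server sees two
// independent sessions that can stay signed in at the same time.
var aliceClient = new HttpClient(new HttpClientHandler { CookieContainer = new CookieContainer() });
var bobClient = new HttpClient(new HttpClientHandler { CookieContainer = new CookieContainer() });

// SignIn and SendMessage are hypothetical helpers that POST to your app:
// await SignIn(aliceClient, "alice", "password1");
// await SignIn(bobClient, "bob", "password2");
// await SendMessage(aliceClient, "bob", "hi");
// await SendMessage(bobClient, "alice", "hello back");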
I want to use Prosody (or perhaps another XMPP server) to test my XMPP bot. I want it to accept connections only from localhost, without having to configure the firewall to block outside access. What is the easiest way to accomplish this?
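For Prosody specifically, a minimal sketch, assuming a version recent enough to support the global interfaces option (verify against your version's documentation): bind the server to the loopback interface only in prosody.cfg.lua.

-- prosody.cfg.lua (global section): listen on loopback only
interfaces = { "127.0.0.1" }

With that in place nothing outside the machine can even open a connection, so no firewall rule is needed.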
Hi everyone,
I am trying to call a method, passing it an object called parameters.
public void LoadingDataLockFunctionalityTest()
{
    DataCache_Accessor target = DataCacheTest.getNewDataCacheInstance();
    target.itemsLoading.Add("WebFx.Caching.TestDataRetrieverFactorytestsync", true);
    DataParameters parameters = new DataParameters("WebFx.Core",
                                                   "WebFx.Caching.TestDataRetrieverFactory",
                                                   "testsync");
    parameters.CachingStrategy = CachingStrategy.TimerDontWait;
    parameters.CacheDuration = 0;
    string data = (string)target.performGetForTimerDontWaitStrategy(parameters);
    TestSyncDataRetriever.SimulateLoadingForFiveSeconds = true;
    Thread t1 = new Thread(delegate()
    {
        string s = (string)target.performGetForTimerDontWaitStrategy(parameters);
        Console.WriteLine(s ?? String.Empty);
    });
    t1.Start();
    t1.Join();
    Thread.Sleep(1000);
    ReaderWriterLockSlim rw = DataCache_Accessor.GetLoadingLock(parameters);
    Assert.IsTrue(rw.IsWriteLockHeld);
    Assert.IsNotNull(data);
}
My test is failing every time, and I am not able to step through the method.
Can someone please point me in the right direction?
Thanks
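One detail that may be relevant, offered as a guess rather than a diagnosis: ReaderWriterLockSlim.IsWriteLockHeld reports whether the current thread holds the write lock, not whether any thread does. Since t1.Join() has already completed by the time the assert runs, a write lock taken on t1 (or inside the cache) will never register on the test thread. A small standalone demo of that behavior:

using System;
using System.Threading;

class LockDemo
{
    static void Main()
    {
        var rw = new ReaderWriterLockSlim();
        var worker = new Thread(() =>
        {
            rw.EnterWriteLock();
            Console.WriteLine(rw.IsWriteLockHeld); // True: this thread holds it
            Thread.Sleep(1000);
            rw.ExitWriteLock();
        });
        worker.Start();
        Thread.Sleep(200);
        Console.WriteLine(rw.IsWriteLockHeld);     // False: main thread does not
        worker.Join();
    }
}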
I need to test a class library project in VS. The project itself does not have a web.config file, but on the web server it's deployed to, the classes read one. I access connection strings like this:
ConfigurationManager.ConnectionStrings["stringname"].ConnectionString;
Can I adjust these strings while running unit tests in VS? Should I have chosen a different design to avoid this problem?
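Two common options, sketched with hypothetical names: add an app.config to the test project itself (ConfigurationManager reads the configuration of the running host, which for unit tests is generally the test project's config), or hide the lookup behind an interface so tests can substitute their own values.

using System.Configuration;

public interface IConnectionStringProvider
{
    string Get(string name);
}

// Production implementation: delegates to the host's config file.
public class ConfigConnectionStringProvider : IConnectionStringProvider
{
    public string Get(string name)
    {
        return ConfigurationManager.ConnectionStrings[name].ConnectionString;
    }
}

// In tests, a stub can return a known string without any config file.
public class FakeConnectionStringProvider : IConnectionStringProvider
{
    public string Get(string name)
    {
        return "Data Source=(local);Initial Catalog=TestDb;Integrated Security=True";
    }
}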
I need a URL just to test basic HTTP connectivity. It needs to be consistent and:
Always be up.
Never change drastically based on IP or user agent (i.e., no 301 Location redirect or huge difference in content; minor variation would be tolerable).
Have a consistent content length (i.e., it never varies by more than about 2 KB).
A few examples, though none match all three criteria:
One example of always up: www.google.com (yet it 301-redirects based on IP location).
Another good one is http://www.google.com/webhp?hl=en, but the problem there is that on a given holiday the content length can vary considerably.
I set up an NUnit test like this:
new PersistenceSpecification<MyTable>(_session)
    .CheckProperty(c => c.ActionDate, DateTime.Now);
When I run the test via NUnit I get the following error:
SomeNamespace.MapTest:
System.ApplicationException : Expected '2/23/2010 11:08:38 AM' but got
'2/23/2010 11:08:38 AM' for Property 'ActionDate'
The ActionDate field is a datetime field in a SQL Server 2008 database. I use Auto Mapping and declare ActionDate as a DateTime property in C#.
If I change the test to use DateTime.Today, the test passes.
My question is: why does the test fail with DateTime.Now? Is NHibernate losing some precision when saving the date to the database, and if so, how do I prevent the loss? Thank you.
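For context: the SQL Server datetime type only stores values to roughly 3 ms precision (increments of .000, .003, and .007 seconds), while DateTime.Now carries sub-millisecond ticks, so the value read back from the database differs in the low-order bits even though both print identically. One hedged workaround is to test with a value that survives the round-trip:

// Truncate to whole seconds so the value round-trips through SQL Server
// datetime unchanged (a sketch; the mapping names are from the question).
var now = DateTime.Now;
var stableDate = new DateTime(now.Year, now.Month, now.Day,
                              now.Hour, now.Minute, now.Second);

new PersistenceSpecification<MyTable>(_session)
    .CheckProperty(c => c.ActionDate, stableDate);

If your FluentNHibernate version has the CheckProperty overload that accepts an IEqualityComparer, a tolerance-based comparer is another route to the same end.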
Hi, does anyone know what causes this error? It happens in Visual Studio 2008 with Visual Assert.
Thanks
1>------ Build started: Project: ChessRound1, Configuration: Debug Win32 ------
1>Compiling...
1>stdafx.cpp
1>C:\Program Files\Microsoft Visual Studio 9.0\VC\include\xlocnum(135) : error C2857: '#include' statement specified with the /Ycstdafx.h command-line option was not found in the source file
1>Build log was saved at "file://c:\Users\Admin1\Documents\Visual Studio 2008\Projects\ChessRound1\ChessRound1\Debug\BuildLog.htm"
1>ChessRound1 - 1 error(s), 0 warning(s)
========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========
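For what it's worth, C2857 generally means that the file compiled with /Yc (create precompiled header) does not begin with the #include named in that option. A minimal sketch of what the compiler expects stdafx.cpp to look like, assuming default project settings:

// stdafx.cpp: with /Ycstdafx.h, this #include must come before any
// other code or includes in the file.
#include "stdafx.h"

If stdafx.cpp begins with anything else, or the include is missing entirely, the /Yc pass fails with exactly this error; the reference to xlocnum suggests a standard library header is being reached before stdafx.h.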
The very common directory structure for even a simple Python module seems to be to separate the unit tests into their own test directory:
new_project/
    antigravity/
        antigravity.py
    test/
        test_antigravity.py
    setup.py
    etc.
For example, see this Python project howto.
My question is simply: what's the usual way of actually running the tests? I suspect this is obvious to everyone except me, but you can't just run python test_antigravity.py from the test directory, as its import antigravity will fail since the module is not on the path.
I know I could modify PYTHONPATH and use other search-path tricks, but I can't believe that's the simplest way. It's fine if you're the developer, but it's not realistic to expect your users to do that just to check that the tests pass.
The other alternative is to copy the test file into the other directory, but that seems a bit dumb and misses the point of having the tests in a separate directory to start with.
So, if you had just downloaded the source to my new project, how would you run the unit tests? I'd prefer an answer that lets me say to my users: "To run the unit tests, do X."
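A sketch of one conventional answer, assuming Python 2.7+ (or the unittest2 backport) and an __init__.py in both antigravity/ and test/: lean on unittest's built-in discovery, run from the project root.

# test/test_antigravity.py (a sketch; names follow the layout above)
import unittest
from antigravity import antigravity  # resolves when run from new_project/

class TestAntigravity(unittest.TestCase):
    def test_module_loads(self):
        self.assertTrue(antigravity is not None)

if __name__ == '__main__':
    unittest.main()

Then the instruction to users is a single line, run from new_project/: python -m unittest discover. Running with -m puts the current directory on sys.path, so import antigravity works without any PYTHONPATH fiddling.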
I want to get code coverage for my tests. So I configure the build settings, build the app with .gcno files, and run it in the simulator.
I can get the coverage data successfully if there is no crash.
But if the app crashes, I get nothing.
So how can I get code coverage data when the app crashes?
My thinking is that this happens because __gcov_flush() is not called when the app crashes. I have only added "Application does not run in background" to my Info.plist, so __gcov_flush() is called only when I press the Home button.
Is there any way to call __gcov_flush() before the app crashes?
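One approach, sketched in plain C under the assumption that your toolchain's gcov runtime still exports __gcov_flush (newer toolchains renamed it __gcov_dump): install signal handlers that flush coverage data before the crash proceeds.

#include <signal.h>

extern void __gcov_flush(void);  /* provided by the gcov runtime */

static void flush_gcov_and_die(int sig) {
    __gcov_flush();           /* write the .gcda files now */
    signal(sig, SIG_DFL);     /* restore the default handler */
    raise(sig);               /* re-raise so the crash still happens */
}

/* Call this early, e.g. at the top of main() or in
   application:didFinishLaunchingWithOptions:. */
void install_gcov_crash_handlers(void) {
    signal(SIGSEGV, flush_gcov_and_die);
    signal(SIGBUS,  flush_gcov_and_die);
    signal(SIGABRT, flush_gcov_and_die);
    signal(SIGILL,  flush_gcov_and_die);
}

For crashes that begin as uncaught Objective-C exceptions, NSSetUncaughtExceptionHandler can call __gcov_flush() the same way. Note that flushing from a signal handler is technically not async-signal-safe, so treat this as a best-effort diagnostic aid.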
Is it possible to run unittest tests via a web interface, and if so, how?
EDIT:
For now I just want the results. I want the tests themselves to be automated, possibly running every time I make a change to the code. Sorry, I should have made this clearer.
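A minimal sketch of the core idea, assuming Python's standard unittest: run the suite programmatically, capture the runner's output in a string, and serve that string from whatever web framework you like.

import unittest
try:
    from io import StringIO       # Python 3
except ImportError:
    from StringIO import StringIO  # Python 2

def run_tests_as_text(module_name):
    """Load tests from a module and return the text report."""
    suite = unittest.defaultTestLoader.loadTestsFromName(module_name)
    buf = StringIO()
    unittest.TextTestRunner(stream=buf, verbosity=2).run(suite)
    return buf.getvalue()

# e.g. hand this string to a web handler ('myapp.tests' is hypothetical):
# report = run_tests_as_text('myapp.tests')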
I'm using Selenium's IWebDriver to write Unit Tests in C#.
Here is an example:
IWebDriver defaultDriver = new InternetExplorerDriver();
var ddl = defaultDriver.FindElements(By.TagName("select"));
The last line retrieves the select HTML elements, each wrapped in an IWebElement.
I need a way to simulate selecting a specific option in that select list, but I can't figure out how to do it.
Upon some research, I found examples where people use the ISelenium / DefaultSelenium class to accomplish this, but I am not using that class because I'm doing everything with IWebDriver and INavigation (from defaultDriver.Navigate()).
I also noticed that DefaultSelenium contains a ton of other methods that aren't available in the concrete implementations of IWebDriver.
So is there any way I can use IWebDriver and INavigation in conjunction with ISelenium / DefaultSelenium?
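For the option-selection part specifically, the WebDriver support library has a helper that may remove the need for DefaultSelenium altogether; a sketch, assuming the Selenium .NET support package (OpenQA.Selenium.Support.UI) is referenced:

using OpenQA.Selenium;
using OpenQA.Selenium.IE;
using OpenQA.Selenium.Support.UI;

IWebDriver defaultDriver = new InternetExplorerDriver();
IWebElement dropdown = defaultDriver.FindElement(By.TagName("select"));

// SelectElement wraps a <select> and exposes selection helpers.
var select = new SelectElement(dropdown);
select.SelectByText("Some visible option text");
// or: select.SelectByValue("optionValue");
// or: select.SelectByIndex(2);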
In one of my projects I need to collaborate with several backend systems. Some of them are somewhat lacking in documentation, and partly because of that I have some test code that interacts with test servers just to check that everything works as expected. However, accessing these servers is quite slow, and therefore I do not want to run these tests every time I run my test suite.
My question is how to deal with a situation where you want to skip certain tests. Currently I use an environment variable, BACKEND_TEST, and a conditional statement that checks whether the variable is set, in each test I would like to skip. But sometimes I would like to skip all tests in a test file without having to add an extra row at the beginning of each test.
The tests which have to interact with the test servers are not many, as I use flexmock in other situations. However, you can't mock yourself away from reality.
As you can see from this question's title, I'm using Test::Unit. Additionally, if it makes any difference, the project is a Rails project.
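A sketch of one way to do this with the test-unit gem, assuming a version with omit (2.x or later): call omit in setup, which skips every test in the case from a single place.

require 'test/unit'

class BackendTest < Test::Unit::TestCase
  def setup
    # Skips every test in this file unless the variable is set.
    omit('BACKEND_TEST not set; skipping backend tests') unless ENV['BACKEND_TEST']
  end

  def test_server_roundtrip
    # ...talks to the slow backend server...
  end
end

omit_if(condition) is an equivalent alternative, and because the check lives in setup, new tests added to the file inherit the skip automatically.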
We have a number of integration tests that fail when our staging server goes down for weekly maintenance. When the staging server is down, we send a specific response that I can detect in my integration tests. When I get this response, instead of failing the test, I'm wondering if it is possible to skip/ignore it even though it has already started running. This would keep our test reports a bit cleaner.
Does anybody have suggestions?
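The test framework isn't named, so as an illustration only: JUnit 4 has assumptions, which turn a test into a skip rather than a failure mid-run. The two helpers below are hypothetical stand-ins for your detection logic.

import static org.junit.Assume.assumeTrue;

import org.junit.Test;

public class OrdersIT {
    @Test
    public void ordersEndpointWorks() {
        String body = fetchStagingResponse();      // hypothetical helper
        // If staging is in maintenance, mark this test skipped, not failed.
        assumeTrue(!isMaintenanceResponse(body));
        // ...real assertions continue here...
    }

    private String fetchStagingResponse() { return ""; }
    private boolean isMaintenanceResponse(String body) { return false; }
}

Most JUnit 4 runners report a failed assumption as ignored/skipped, which is exactly the cleaner report you describe.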
Given an application, how can I measure:
the amount of data read and written by that application?
the time spent reading from and writing to disk?
The specific application is Java-based (JBoss), multi-threaded, and running as a service on Windows 7/2008 x64.
My overall goal is determining whether and why file access is a bottleneck in my application. Running the application in a defined and repeatable scenario is therefore a given.
File access may be local as well as on network shares.
Windows Performance Monitor appears to be too hard to use (unless someone can point me to a helpful explanation).
Any ideas?
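If any of the file access goes through code you control, one low-tech option is to wrap the streams and tally bytes and time yourself. A sketch, not a substitute for OS-level per-process counters such as Performance Monitor's "IO Read Bytes":

import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.atomic.AtomicLong;

// Wraps any InputStream and tallies bytes read and time spent reading.
public class MeteredInputStream extends FilterInputStream {
    public static final AtomicLong BYTES_READ = new AtomicLong();
    public static final AtomicLong NANOS_READING = new AtomicLong();

    public MeteredInputStream(InputStream in) {
        super(in);
    }

    @Override
    public int read() throws IOException {
        long start = System.nanoTime();
        int b = super.read();
        NANOS_READING.addAndGet(System.nanoTime() - start);
        if (b >= 0) {
            BYTES_READ.incrementAndGet();
        }
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        long start = System.nanoTime();
        int n = super.read(buf, off, len);
        NANOS_READING.addAndGet(System.nanoTime() - start);
        if (n > 0) {
            BYTES_READ.addAndGet(n);
        }
        return n;
    }
}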
Django (1.2 beta) will reset the database(s) between every test that runs, meaning each test runs on an empty DB. However, the database(s) are not flushed. One of the effects of flushing the database is the auto_increment counters are reset.
Consider a test which pulls data out of the database by primary key:
class ChangeLogTest(django.test.TestCase):
    def test_one(self):
        do_something_which_creates_two_log_entries()
        log = LogEntry.objects.get(id=1)
        assert_log_entry_correct(log)
        log = LogEntry.objects.get(id=2)
        assert_log_entry_correct(log)
This will pass because only two log entries were ever created. However, if another test is added to ChangeLogTest and it happens to run before test_one, the primary keys of the log entries are no longer 1 and 2; they might be 2 and 3. Now test_one fails.
This is actually a two-part question:
Is it possible to force ./manage.py test to flush the database between each test case?
Since Django doesn't flush the DB between each test by default, maybe there is a good reason. Does anyone know?
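Independent of the flush question, one way to make the test immune to auto_increment state, sketched with the names from the example: stop hard-coding primary keys and read back whatever was actually created.

class ChangeLogTest(django.test.TestCase):
    def test_one(self):
        do_something_which_creates_two_log_entries()
        # The test DB is empty at the start of each test, so these are
        # exactly the two entries just created, whatever their PKs are.
        first, second = LogEntry.objects.order_by('id')
        assert_log_entry_correct(first)
        assert_log_entry_correct(second)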
So I had a class that referenced a class that referenced another class that called a web service.
So I learned how to create an interface using partial classes.
I inject the web service through the constructor.
Then my unit test fails because I am newing up the actual web service in the second level of classes. So I ended up modifying all three classes to pass the web service down through their constructors... I was not happy :-( and gave up.
What should I be doing in this case?
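For what it's worth, passing the dependency down through every constructor is the standard shape of constructor injection; the part that usually goes wrong is a new SomeWebService() hiding in a middle layer. A sketch with hypothetical names:

// Hypothetical names throughout; the point is that only the top level
// (the composition root, or the test) ever news up a concrete service.
public interface IPriceService
{
    decimal GetPrice(string sku);
}

public class PriceServiceClient : IPriceService   // wraps the real web service
{
    public decimal GetPrice(string sku) { /* call the web service */ return 0m; }
}

public class MiddleLayer
{
    private readonly IPriceService _prices;
    public MiddleLayer(IPriceService prices) { _prices = prices; }  // no new here
}

public class TopLayer
{
    private readonly MiddleLayer _middle;
    public TopLayer(MiddleLayer middle) { _middle = middle; }
}

// Production: new TopLayer(new MiddleLayer(new PriceServiceClient()));
// Unit test:  new TopLayer(new MiddleLayer(new FakePriceService()));  // fake is yours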
Currently we are generating HTML reports for our automation, but those reports are not good enough to explain the number of scenarios we cover in automation. Is there anything we can use with Selenium to generate proper reports that give a complete overview and can be easily understood by anyone?
First, we could show a pie chart covering the number of test cases passed and failed.
Second, we could show which test cases are in this build.
I've created a method that calculates the harmonic mean based on a list of doubles.
But when I run the test, it keeps failing even though the output results look the same.
My harmonic mean method:
public static double GetHarmonicMean(List<double> parameters)
{
    var cumReciprocal = 0.0d;
    var countN = parameters.Count;

    foreach (var param in parameters)
    {
        cumReciprocal += 1.0d / param;
    }

    return 1.0d / (cumReciprocal / countN);
}
My test method:
[TestMethod()]
public void GetHarmonicMeanTest()
{
    var parameters = new List<double> { 1.5d, 2.3d, 2.9d, 1.9d, 5.6d };
    const double expected = 2.32432293165495;
    var actual = OwnFunctions.GetHarmonicMean(parameters);
    Assert.AreEqual(expected, actual);
}
After running the test, the following message is shown:
Assert.AreEqual failed. Expected:<2.32432293165495>. Actual:<2.32432293165495>.
To me those are both the same value.
Can somebody explain this? Or am I doing something wrong?
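A likely explanation, hedged: the two doubles differ in their lowest bits, and the message only prints about 15 significant digits, so values that are not bitwise equal display identically. MSTest's Assert.AreEqual has an overload with an explicit tolerance that sidesteps this:

// Compare doubles with a tolerance instead of exact equality.
var parameters = new List<double> { 1.5d, 2.3d, 2.9d, 1.9d, 5.6d };
const double expected = 2.32432293165495;
var actual = OwnFunctions.GetHarmonicMean(parameters);
Assert.AreEqual(expected, actual, 1e-12);  // passes if |expected - actual| <= 1e-12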
I'm unable to run any Selenium tests since I updated Firefox to 3.6. Is it happening just to me, or is it affecting everybody?
The error message I get is: Could not start Selenium session: Failed to start browser session
There are two directories in a Clojure project: src/ and test/.
There's a file my_methods.clj in the src/calc/ directory, which starts with
(ns calc.my_methods ...).
I want to create a test file for it in the test directory, test/my_methods-test.clj:
(ns test.my_methods-test
  (:require [calc.my_methods])
  (:use clojure.test))
Both the project root directory and the src/ directory are on the $CLASSPATH. But the exception is still
"Could not locate calc/my_methods__init.class or calc/my_methods.clj on classpath". What is the problem with requiring it in the test file?
echo $CLASSPATH gives this:
~/project:~/project/src
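Two hedged guesses. First, the JVM does not expand ~ in the classpath; if echo shows a literal ~/project, Java is looking for a directory literally named that, so use absolute paths (let the shell expand $HOME). Second, Clojure maps dashes in namespace names to underscores in file names, so the ns test.my_methods-test must live in test/my_methods_test.clj (underscore, not dash). A sketch of a layout and launch that should load:

;; src/calc/my_methods.clj
(ns calc.my_methods)

;; test/my_methods_test.clj   <- underscore in the file name
(ns test.my_methods-test
  (:require [calc.my_methods])
  (:use clojure.test))

;; launched with absolute classpath entries, e.g.:
;;   java -cp "$HOME/project:$HOME/project/src" clojure.main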
Is there any good framework for comparing whole objects?
Right now I do:
assertEquals("[email protected]", obj.email);
assertEquals("5", obj.shop);
If a bad email is returned, I never get to know whether it had the right shop. I would like to get a list of all the incorrect fields.
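In the absence of a library, a small reflective helper can collect every mismatch before failing; a sketch (field access mirrors the obj.email style above):

import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class ObjectDiff {
    // Returns one message per mismatching field instead of stopping at
    // the first difference.
    public static List<String> diff(Object expected, Object actual)
            throws IllegalAccessException {
        List<String> problems = new ArrayList<String>();
        for (Field f : expected.getClass().getDeclaredFields()) {
            f.setAccessible(true);
            Object e = f.get(expected);
            Object a = f.get(actual);
            boolean equal = (e == null) ? (a == null) : e.equals(a);
            if (!equal) {
                problems.add(f.getName() + ": expected <" + e + "> but was <" + a + ">");
            }
        }
        return problems;
    }
}

In a JUnit test you can then assert on the whole list at once, e.g. assertTrue(problems.toString(), problems.isEmpty()), so the failure message names every incorrect field.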
Is it better to write/record Selenium tests in HTML format and run them directly on the server with "-htmlSuite", or to write the tests in Java/C#/... and run them against the server using Selenium RC?
What is the recommended solution?
In my MSTest unit test project, I need to execute some commands before any tests run. Is there a feature, kind of like Global.asax for web projects, that will let me kick something off before any tests run?
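MSTest does have an assembly-level hook: a static method marked [AssemblyInitialize] runs once before any test in the assembly. A minimal sketch:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]                    // the attribute is required even though
public class GlobalTestSetup  // this class holds no tests itself
{
    [AssemblyInitialize]
    public static void BeforeAnyTests(TestContext context)
    {
        // execute your one-time commands here
    }

    [AssemblyCleanup]
    public static void AfterAllTests()
    {
        // optional teardown after the last test finishes
    }
}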