SQLite claims to have 679 times as much test code as production code.
http://www.sqlite.org/testing.html
Does anyone know how this is possible? Do they generate any of the test code automatically? What are the major parts of those "45678.3 KSLOC" of test code?
I want to use Selenium tests to cover my Rails project, but I have found very little documentation on Selenium testing. Could someone point me to Selenium testing resources of any type (websites, PDFs, text, etc.)? You can send them to my Gmail address, [email protected]. Thank you, and best regards!
When testing singleton classes, we need the single instance to "go away" after each test. Is there a way to configure NUnit to recreate the test AppDomain after each test, or at least after each fixture?
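For illustration, this is roughly the shape of the problem (the class and member names here are made up); the reflection-based reset in TearDown is the workaround I would like to replace with a fresh AppDomain:

using NUnit.Framework;

// Hypothetical singleton under test (the real class looks much the same).
public sealed class Configuration
{
    private static Configuration instance;

    public static Configuration Instance
    {
        get { return instance ?? (instance = new Configuration()); }
    }

    public string ConnectionString { get; set; }
}

[TestFixture]
public class ConfigurationTests
{
    [TearDown]
    public void ResetSingleton()
    {
        // Current workaround: null out the private static field via reflection
        // so the next test starts from a fresh instance. A new AppDomain per
        // test (or per fixture) would make this unnecessary.
        typeof(Configuration)
            .GetField("instance",
                      System.Reflection.BindingFlags.Static |
                      System.Reflection.BindingFlags.NonPublic)
            .SetValue(null, null);
    }

    [Test]
    public void Instance_Starts_Without_ConnectionString()
    {
        Assert.IsNull(Configuration.Instance.ConnectionString);
    }
}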
I would like to do some performance testing of a Rails app.
I want to test the app with real world data, which is 100 MB in size.
The problem is that Rails' test environment always rebuilds the database from fixtures, which overwrites the real-world data.
So how should I do the performance test?
I am working on a Netbeans Platform RCP application.
I use jmock in my unit tests and I have created a Library Wrapper Module to import the necessary libraries.
The module has a section named 'Libraries' and another section named 'Unit Test Libraries'.
I hoped that I could add the JMock Library Wrapper to 'Unit Test Libraries'; however, when I run the unit tests I get the error 'package org.jmock does not exist'.
If I import the JMock Library Wrapper into the main 'Libraries' section then it works, but this feels wrong.
Maven allows me to specify unit-test-only dependencies, and I assumed the NetBeans Platform did the same. Should this be possible? Am I doing something wrong? Should I resign myself to a run-time dependency on the unit-test libraries (ugh)?
I am making an HTML5 drag-and-drop uploader with jQuery. Below is my code so far; the problem is that I get an empty array without any data. Is this line the wrong way to store the file data: fd.append('file', $thisfile);?
$('#div').on(
    'dragover',
    function(e) {
        e.preventDefault();
        e.stopPropagation();
    }
);
$('#div').on(
    'dragenter',
    function(e) {
        e.preventDefault();
        e.stopPropagation();
    }
);
$('#div').on(
    'drop',
    function(e) {
        if (e.originalEvent.dataTransfer) {
            if (e.originalEvent.dataTransfer.files.length) {
                e.preventDefault();
                e.stopPropagation();
                // The file list.
                var fileList = e.originalEvent.dataTransfer.files;
                //console.log(fileList);
                // Loop the ajax post.
                for (var i = 0; i < fileList.length; i++) {
                    var $thisfile = fileList[i];
                    console.log($thisfile);
                    // HTML5 form data object.
                    var fd = new FormData();
                    //console.log(fd);
                    fd.append('file', $thisfile);
                    /*
                    var file = {name: fileList[i].name, type: fileList[i].type, size: fileList[i].size};
                    $.each(file, function(key, value) {
                        fd.append('file[' + key + ']', value);
                    })
                    */
                    $.ajax({
                        url: "upload.php",
                        type: "POST",
                        data: fd,
                        processData: false,
                        contentType: false,
                        success: function(response) {
                            // .. do something
                        },
                        error: function(jqXHR, textStatus, errorMessage) {
                            console.log(errorMessage); // Optional
                        }
                    });
                }
                /* UPLOAD FILES HERE */
                upload(e.originalEvent.dataTransfer.files);
            }
        }
    }
);

function upload(files) {
    console.log('Upload ' + files.length + ' File(s).');
}
Alternatively, if I use another approach and build the file data into an array inside the jQuery code,
var file = {name: fileList[i].name, type: fileList[i].type, size: fileList[i].size};
$.each(file, function(key, value) {
    fd.append('file[' + key + ']', value);
});
but where is the tmp_name data inside e.originalEvent.dataTransfer.files[i]?
PHP:
print_r($_POST);
$uploaddir = './uploads/';
$file = $uploaddir . basename($_POST['file']['name']);
if (move_uploaded_file($_POST['file']['tmp_name'], $file)) {
    echo "success";
} else {
    echo "error";
}
As you can see, tmp_name is needed to move the uploaded file via PHP...
HTML:
<div id="div">Drop here</div>
Hello,
I recently created a big portal site, and it's time to put it to the test.
How do you guys test a site rigorously?
What are the ways and tools for that?
Can we somehow mimic hundreds of virtual users visiting the site to see how it handles load?
The tests should cover both security and speed.
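To illustrate what I mean by virtual users, the crudest thing I can picture is just firing a batch of concurrent requests at the site and timing them, as in the sketch below (the URL is a placeholder); I assume dedicated load-testing tools do this far better:

using System;
using System.Diagnostics;
using System.Net;
using System.Threading.Tasks;

class CrudeLoadTest
{
    static void Main()
    {
        // Simulate 200 "virtual users" hitting the home page concurrently and time the run.
        var watch = Stopwatch.StartNew();
        Parallel.For(0, 200, i =>
        {
            using (var client = new WebClient())
            {
                client.DownloadString("http://example.com/"); // placeholder URL
            }
        });
        watch.Stop();
        Console.WriteLine("200 requests took {0} ms", watch.ElapsedMilliseconds);
    }
}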
Thanks in advance.
In a Rails application I have a Test::Unit functional test that's failing, but the output on the console isn't telling me much.
How can I view the request, the response, the flash, the session, the variables set, and so on?
Is there something like...
rake test specific_test_file --verbose
Hi all. First off, I'm new to MySQL Cluster. Here is my pain point:
I've managed to set up a MySQL Cluster with two data nodes, two SQL nodes and one management server. Everything works pretty well, except for the following: my data nodes are spread across an intranet link, which introduces latency into communication between the data nodes. Apparently, due to MySQL Cluster's internal partitioning scheme, when my PHP application pulls data from the cluster via SELECT queries, parts of the data are pulled from both data nodes. This makes the page render REALLY slowly. If I take one data node offline, the data can only be pulled from the single remaining data node, and the final result (HTML output) then appears on screen in a very timely fashion. So, my question is this: can the data nodes/cluster be told to pull data only from partitions stored on a particular data node?
I wonder in which cases it would be good to make an NSManagedObjectModel completely programmatically, with NSEntityDescription instances and all this stuff.
I'm that kind of person who prefers to code programmatically, rejecting Interface Builder. But when it comes to Core Data, I have a hard time figuring out why I should kill my time NOT using the nice Xcode Data Modeler tool.
And since data models are stuck to a given state (except when you want to do some ugly migration operations, where things will probably go wrong and users will get mad, really mad), I don't see much point in building a data model programmatically just so it can be changed all the time.
Did I miss something?
I currently have a project that uses various DB access technologies mainly for showcasing or for demos.
Currently we have:
Namespace App.Data (App.Data.dll)
    Folder NHibernate
    Folder EntityFramework
    Folder LinqToSql
The above structure is OK while we only use SQL Server as the DB, but going forward we will be including Oracle, MySQL, etc.
So what would be a better structure with this in mind?
I thought about:
Namespace App.Data.SqlServer (App.Data.SqlServer.dll)
    Folder NHibernate
    Folder EntityFramework
    Folder LinqToSql
Or would it just be better to have separate assemblies for each database and access technology?:
Namespace App.Data.SqlServer.NHibernate (App.Data.SqlServer.NHibernate.dll)
Namespace App.Data.SqlServer.EntityFramework(App.Data.SqlServer.EntityFramework.dll)
Namespace App.Data.Oracle.NHibernate (App.Data.Oracle.NHibernate.dll)
Namespace App.Data.MySql.NHibernate (App.Data.MySql.NHibernate.dll)
I have the following directory layout:
runner.py
lib/
tests/
    testsuite1/
        testsuite1.py
    testsuite2/
        testsuite2.py
    testsuite3/
        testsuite3.py
    testsuite4/
        testsuite4.py
The format of testsuite*.py modules is as follows:
import os
import pytest

class TestSomething:
    def setup_class(self):
        '''Do some setup.'''
        # Do some setup stuff here

    def teardown_class(self):
        '''Do some teardown.'''
        # Do some teardown stuff here

    def test1(self):
        # Do some test1 related stuff
        pass

    def test2(self):
        # Do some test2 related stuff
        pass

    # ... tests 3 through 39 ...

    def test40(self):
        # Do some test40 related stuff
        pass

if __name__ == '__main__':
    pytest.main(args=[os.path.abspath(__file__)])
The problem is that I would like to execute the 'testsuites' in parallel, i.e. I want testsuite1, testsuite2, testsuite3 and testsuite4 to start executing in parallel, but the individual tests within each testsuite need to be executed serially.
When I use the 'xdist' plugin from py.test and kick off the tests using 'py.test -n 4', py.test gathers all the tests and randomly load-balances them among 4 workers. This causes the 'setup_class' method to be executed once for every test within a 'testsuitex.py' module, which defeats my purpose: I want setup_class to be executed only once per class, with the tests executed serially after that.
Essentially what I want the execution to look like is:
worker1: executes all tests in testsuite1.py serially
worker2: executes all tests in testsuite2.py serially
worker3: executes all tests in testsuite3.py serially
worker4: executes all tests in testsuite4.py serially
while worker1, worker2, worker3 and worker4 are all executed in parallel.
Is there a way to achieve this with the 'pytest-xdist' framework?
The only option that I can think of is to kick off different processes to execute each test suite individually within runner.py:
import glob, multiprocessing, subprocess

def test_execute_func(testsuite_path):
    subprocess.call('py.test %s' % testsuite_path, shell=True)

if __name__ == '__main__':
    # Gather all the testsuite names and start one process per suite
    for testsuite_path in glob.glob('tests/*/testsuite*.py'):
        multiprocessing.Process(target=test_execute_func,
                                args=(testsuite_path,)).start()
I created in-app purchases in my application, but my test account still doesn't work properly. When I test my application using the test account, the sandbox environment asks me to buy the product and, straight after buying it, asks me to buy the product again. Is this a problem with using test accounts, or is there a problem in my code? This is my first application, and figuring out in-app purchases can be really hard at times. I have 4 products and this happens with only 1 or 2 of them; the rest work fine. So I am fairly sure the in-app purchase setup itself is fine, but I can't figure out what could be wrong.
Hi, I have classes which previously had massive methods, so I subdivided the work of each method into 'helper' methods. These helper methods are declared private to enforce encapsulation. However, I want to unit test the big public methods. Is it good practice to unit test the helper methods too? If one of them fails, the public method that calls it will also fail, but this way we can identify why it failed. Also, in order to test these using a mock object I would need to change their visibility from private to protected; is this desirable?
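To make the question concrete, here is a simplified C# sketch of the situation (all names are invented):

// Hypothetical shape of the refactored class; names are made up.
public class Order { /* ... */ }

public class OrderProcessor
{
    public decimal ProcessOrder(Order order)
    {
        ValidateOrder(order);           // helper extracted from the old massive method
        return CalculateTotal(order);   // helper extracted from the old massive method
    }

    // Currently private; to override or observe these from a mock/subclass in
    // the tests they would have to become protected (and, in C#, virtual).
    private void ValidateOrder(Order order) { /* ... */ }
    private decimal CalculateTotal(Order order) { return 0m; /* ... */ }
}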
I am trying to unit test a simple factory, but it keeps telling me that I am trying to use a method like a type.
My unit test:
using System;
using System.Text;
using System.Collections.Generic;
using System.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Home;
namespace HomeTest
{
    [TestClass]
    public class TestFactory
    {
        [TestMethod]
        public void DoTestFactory()
        {
            InventoryType.InventorySelect select = new InventoryType.InventorySelect();
            select.inventoryTypes.Add("cds");
            Home.Services.Factory.CreateInventory get = new Home.Services.Factory.CreateInventory();
            get.InventoryImpl();
            if (select.Validate() == true)
                Console.WriteLine("Test Passed");
            else if (select.Validate() == false)
                Console.WriteLine("Test Returned False");
            else
                Console.WriteLine("Test Failed To Run");
            Console.ReadLine();
        }
    }
}
My factory:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace Home.Services
{
    public class Factory
    {
        public InventorySvc CreateInventory()
        {
            return new InventoryImpl();
        }
    }
}
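For reference, my understanding of how the factory is meant to be consumed (based on the code above) is roughly the following, and this is what I was trying to translate into the test:

Factory factory = new Factory();
InventorySvc inventory = factory.CreateInventory(); // CreateInventory is a method call, not a type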
I wrote my own handler for server errors and defined it in the root urls.py:
handler500 = 'myhandler'
Now I want to write a unit test to check how it works. For testing, I wrote a view that raises an error and defined it in a test URL configuration. When I make a request to this view in the browser, I see my handler and receive status code 500, but when I run a test that makes a request to this view, I see a stack trace and my test fails. Do you have any ideas for testing handler500 with unit tests?
A group of us (.NET developers) are talking about unit testing. Not any one framework (we've touched on MSpec, NUnit, MSTest, RhinoMocks, TypeMock, etc.); we're just talking generally.
We see lots of syntax that forces a distinct unit test per scenario, but we don't see an avenue for reusing one unit test with various inputs or scenarios. Also, we don't see an avenue for multiple asserts in a given test without an early assert's failure threatening the testing of later asserts (in the same test).
Is there anything like that happening in .NET unit testing (state- or behavior-based) today?
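For example, the input-reuse case we have in mind looks something like NUnit's [TestCase] attribute (if I understand it correctly): one test body run once per row of arguments. Calculator here is just a made-up example class:

using NUnit.Framework;

public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

[TestFixture]
public class CalculatorTests
{
    // One test body, several scenarios; NUnit runs it once per TestCase row,
    // and one failing row does not stop the other rows from being reported.
    [TestCase(2, 3, 5)]
    [TestCase(0, 0, 0)]
    [TestCase(-1, 1, 0)]
    public void Add_Returns_Sum(int a, int b, int expected)
    {
        Assert.AreEqual(expected, new Calculator().Add(a, b));
    }
}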
I'm attempting to add a Test Settings file to my Unit Tests project in VS2010. All websites seem to simply say "Go to Add New Item > Installed Templates > Test Settings". However, I don't have Test Settings as an option in my Installed Templates (nor does searching for it online turn up any results).
Can someone point me in the right direction for what I need to do?
I'm taking my first steps in Test-Driven Development with Visual Studio. I have some questions regarding how to implement generic classes with VS 2010.
First, let's say I want to implement my own version of an ArrayList.
I start by creating the following test (in this case I'm using MSTest):
[TestMethod]
public void Add_10_Items_Remove_10_Items_Check_Size_Is_Zero() {
    var myArrayList = new MyArrayList<int>();
    for (int i = 0; i < 10; ++i) {
        myArrayList.Add(i);
    }
    for (int i = 0; i < 10; ++i) {
        myArrayList.RemoveAt(0);
    }
    int expected = 0;
    int actual = myArrayList.Size;
    Assert.AreEqual(expected, actual);
}
I'm using VS 2010's ability to hit
Ctrl + .
and have it implement classes/methods on the go.
I have been having some trouble when implementing generic classes. For example, when I call .Add(10), VS doesn't know whether I intend a generic method (as the class is generic) or an Add(int number) method. Is there any way to differentiate these?
The same can happen with return types. Let's assume I'm implementing a MyStack stack and I want to test that after I push an element and pop it, the stack is still empty. We all know pop should return something, but usually the code of this test shouldn't care about it. Visual Studio would then think that Pop is a void method, which is not what one would want. How do I deal with this? For each method, should I start by writing tests that are "very specific", such that it is obvious the method should return something, so I don't get this kind of ambiguity? Even if not using the result, should I have something like int popValue = myStack.Pop(), as in the sketch below?
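For instance, this is the kind of test I am debating; MyStack, Push, Pop and Size are names I would generate in the same style as MyArrayList above, and the assignment exists only so the generated Pop gets a non-void return type:

[TestMethod]
public void Push_Then_Pop_Leaves_Stack_Empty() {
    var myStack = new MyStack<int>();
    myStack.Push(42);
    int popValue = myStack.Pop(); // assigned only so "Generate Method" infers a return type
    Assert.AreEqual(0, myStack.Size);
}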
How should I write tests for generic classes? Should I test with only one generic type? I have been using ints, as they are easy to use, but should I also test with different kinds of objects? How do you usually approach this?
I see there is a popular tool called TestDriven.NET. With the VS 2010 release, is it still useful, or are a lot of its features now part of VS 2010, rendering it somewhat redundant?
Thanks
Hi everyone,
I want to set up a test account to test in-app purchases in the sandbox. I am logging into iTunes Connect and following the procedure prescribed in the iTunes Connect Developer Guide. I am clicking on Manage Users, but I am not able to see the window where I can select a test in-app-purchase user.
Do I need to change anything in my profile to make it visible?
Hi,
I'm working on a client-server program that has no tests at all.
When I try to do some testing with two servers, it looks like both servers are connected to the same database. I think the reason is some bad use of static fields.
So I wonder: is there a way to start two JVMs in a JUnit test?
Hello, I'm writing a JUnit test; how can I test this method? This is only part of the method:
public class MyClass {
    public void myMethod() {
        List<myObject> list = readData();
    }
}
How do I write a test for this, given that readData is a private method inside MyClass?