Search Results

Search found 23949 results on 958 pages for 'test'.

Page 30/958

  • Error message from running a test file in Ruby? What's wrong? Please help!

    - by hamdoggy
    C:\Users\rahul\Documents\University of California, Santa Barbara\scheduler\sbscheduler\gold_api>ruby access_gold_test.rb
    C:/Ruby/lib/ruby/gems/1.8/gems/test-unit-2.0.0/lib/test/unit/testresult.rb:28: uninitialized constant Test::Unit::TestResult::TestResultFailureSupport (NameError)
        from C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        from C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        from C:/Ruby/lib/ruby/gems/1.8/gems/test-unit-2.0.0/lib/test/unit/ui/testrunnermediator.rb:9
        from C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        from C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        from C:/Ruby/lib/ruby/gems/1.8/gems/test-unit-2.0.0/lib/test/unit/ui/console/testrunner.rb:9
        from C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        from C:/Ruby/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        from C:/Ruby/lib/ruby/1.8/test/unit/autorunner.rb:25
        from C:/Ruby/lib/ruby/1.8/test/unit/autorunner.rb:214:in `[]'
        from C:/Ruby/lib/ruby/1.8/test/unit/autorunner.rb:214:in `run'
        from C:/Ruby/lib/ruby/1.8/test/unit/autorunner.rb:12:in `run'
        from C:/Ruby/lib/ruby/1.8/test/unit.rb:278
        from access_gold_test.rb:34

    Read the article

  • Unit Testing with NUnit and Moles Redux

    - by João Angelo
    Almost two years ago, when Moles was still being packaged alongside Pex, I wrote a post on how to run NUnit tests supporting moled types. A lot has changed since then and Moles is now distributed independently of Pex, while maintaining support for integration with NUnit and other testing frameworks. For NUnit the support is provided by an addin class library (Microsoft.Moles.NUnit.dll) that you need to reference in your test project so that you can decorate your tests with the MoledAttribute. The addin DLL must also be placed in the addins folder inside the NUnit installation directory. There is however a downside: since Moles and NUnit follow different release cycles and the addin DLL must be built against a specific NUnit version, you may find that the release included with the latest version of Moles does not work with your version of NUnit. Fortunately, the code for building the NUnit addin is supplied in the archive (moles.samples.zip) that you can find in the Documentation folder inside the Moles installation directory. By rebuilding the addin against your specific version of NUnit you are able to support any version. Also note that in Moles 0.94.51023.0 the addin code did not support the use of TestCaseAttribute in your moled tests. However, if you need this support, you only need to make a couple of changes.

    Change the ITestDecorator.Decorate method in the MolesAddin class:

        Test ITestDecorator.Decorate(Test test, MemberInfo member)
        {
            SafeDebug.AssumeNotNull(test, "test");
            SafeDebug.AssumeNotNull(member, "member");

            bool isTestFixture = true;
            isTestFixture &= test.IsSuite;
            isTestFixture &= test.FixtureType != null;

            bool hasMoledAttribute = true;
            hasMoledAttribute &= !SafeArray.IsNullOrEmpty(
                member.GetCustomAttributes(typeof(MoledAttribute), false));

            if (!isTestFixture && hasMoledAttribute)
            {
                return new MoledTest(test);
            }

            return test;
        }

    Change the Tests property in the MoledTest class:

        public override System.Collections.IList Tests
        {
            get
            {
                if (this.test.Tests == null)
                {
                    return null;
                }

                var moled = new List<Test>(this.test.Tests.Count);

                foreach (var test in this.test.Tests)
                {
                    moled.Add(new MoledTest((Test)test));
                }

                return moled;
            }
        }

    Disclaimer: I only tested this implementation against NUnit version 2.5.10.11092. Finally, you just need to run the NUnit console runner through the Moles runner. A quick example follows:

        moles.runner.exe [Tests.dll] /r:nunit-console.exe /x86 /args:[NUnitArgument1] /args:[NUnitArgument2]
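
    For reference, a moled NUnit test typically ends up looking something like the sketch below. This is only an illustration: it assumes a moles assembly has been generated for mscorlib (so that MDateTime exists), the class and method names are placeholders, and the run goes through moles.runner.exe with the rebuilt addin installed as described above.

        using System;
        using Microsoft.Moles.Framework;
        using Microsoft.Moles.NUnit;
        using NUnit.Framework;

        // Assumption: mscorlib has been moled; request instrumentation of DateTime.
        [assembly: MoledType(typeof(DateTime))]

        [TestFixture]
        public class ClockTests
        {
            [Test, Moled] // the NUnit addin activates Moles detours for this test
            public void ReturnsDetouredTime()
            {
                // Detour DateTime.Now for the duration of this test.
                System.Moles.MDateTime.NowGet = () => new DateTime(2010, 1, 1);
                Assert.AreEqual(2010, DateTime.Now.Year);
            }
        }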

    Read the article

  • test filenames for regex patterns in bash

    - by rk
    I'm not sure exactly how the code should be written, but I want to test a file/folder name against a pattern, something like:

        if [ -d $i ] && [ regex([0-9].,$i) { do something }

    I want it to check that the file/folder is a directory and that its name is a number (i.e. 1 or 101 or 10007)...
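
    For what it's worth, one way to express that check in bash is sketched below (assumptions: the loop variable $i holds the entry name, and "a number" means digits only; bash's built-in [[ ... =~ ... ]] regex test is used, since there is no regex() function in bash).

        #!/bin/bash
        # Sketch: act on entries that are directories whose names are all digits.
        for i in *; do
            if [ -d "$i" ] && [[ "$i" =~ ^[0-9]+$ ]]; then
                echo "numeric directory: $i"   # do something
            fi
        done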

    Read the article

  • uTorrent bandwidth test

    - by Atul
    When I run the bandwidth test, I get the lookup error: No such host is known (1101). Please help me. I am unable to download anything. No other torrent software is working either!

    Read the article

  • Application to test a cluster/grid

    - by Mystic
    Is anyone aware of any application (computation + data) that I can download to test an in-house grid middleware that will run on a local cluster? I hope to compare its performance with that of other available grid systems.

    Read the article

  • Test a site with a static subdomain locally

    - by bcmcfc
    How can I locally test a site that uses one or more static domains for serving images? E.g. domain.tld with images served from static.domain.tld. The local working copy of the site runs on WAMP, checked out from SVN, so URLs will be pointing at static.domain.tld rather than static.domain.local.
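
    One common approach (a sketch only, with illustrative paths; adjust the hostname to whichever one the checked-out code actually references) is to point the static hostname at the local machine in the hosts file and give it its own vhost in WAMP's Apache configuration:

        # C:\Windows\System32\drivers\etc\hosts
        127.0.0.1    domain.local
        127.0.0.1    static.domain.local

        # extra/httpd-vhosts.conf in the WAMP Apache config
        <VirtualHost *:80>
            ServerName static.domain.local
            DocumentRoot "C:/wamp/www/site/static"
        </VirtualHost>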

    Read the article

  • How to manage two separate testing teams using different test tracking tools

    - by newuser
    I have two independent testing teams currently testing the same application. One team is using ClearQuest, and the other is using Mantis. It has been a huge effort to manage all of the duplicate bug reports. What options would improve this situation? My constraint is that the ClearQuest team will not change test reporting tools, and migrating the other team to ClearQuest would also come with a large training effort.

    Read the article

  • mac + parallels and https site test = router restarts

    - by Erik
    Ok I have an interesting and very frustrating problem happening. I'm going to explain it the best I can. I work as a graphic designer and web designer on a Mac and have a Comcast internet connection that comes through a Comcast-branded router (SMC8014), which then ties into an Airport Extreme Base Station which runs my office network. I run OS 10.5.7 and also run Parallels 4.0.3 (running Windows XP) for testing websites in Internet Explorer and so on. Ok, so that's the basic background.

    Here's my issue. I've been collaborating on an ecommerce website with another designer/developer, and when testing the site on the PC side we have started to run into some sort of network problem. The site is https, if that matters at all; I suspect it may. Basically, when I run Parallels for testing I am constantly having to restart the router in order to connect to the test site (it's hosted). Funny thing is I can access the rest of the internet fine, just not this site I'm working on until I restart the router (it's sorta like the site is timing out). This never happens when just running the Mac side of things. It only becomes an issue when Parallels is open and I am doing page refreshes while making CSS or HTML edits via something like Coda or CSSEdit (connected to the hosting server via FTP). The real problem is that once the problem starts I only get about 2 or 3 page loads before I have to restart the router again. It's absolutely crippling. I cannot get any work done when I have to restart the router every couple of minutes.

    So, if you think this problem is isolated to me, the answer is no. The designer/developer I'm collaborating with has an office a couple miles away and experiences very similar problems under a slightly different setup. He also has Comcast as his internet provider and connects his router to an Airport and primarily works on a Mac. The main difference is that rather than using a virtualizer like Parallels to test the website on the PC, he uses a real live PC that is on his network. Once he fires up the PC to do testing he runs into the same issue described above. After a couple of page refreshes in Internet Explorer or another browser on the PC, the site becomes unresponsive and the router has to get restarted.

    Any thoughts on what is going on here would be greatly appreciated. Thanks in advance.

    Read the article

  • Test Amazon EC2 Small instance

    - by user102130
    I have some questions about Amazon EC2, and especially about the Small instance. I want to host my new website (the beta version) on this kind of instance, but first I want to know roughly how many simultaneous users can be connected to one Small instance. You can see the characteristics of a Small one here: http://aws.amazon.com/ec2/instance-types/ My website is a kind of social network in PHP. Has someone already tested this type of instance?

    Read the article

  • Tool to test USB sticks?

    - by Tony_Henrich
    Some of the files I copy to a USB stick seem to be corrupted. Maybe the stick is bad, but I haven't used it much and want to know for sure. Is there a Windows utility to test the stick and mark bad areas as bad so the file system doesn't save to them? I format before I copy, but I still get corrupted files. Does chkdsk work reliably on USB sticks?
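
    For reference, the built-in checker can at least scan the stick and map out bad sectors; a minimal example (E: stands in for the stick's drive letter, run from an elevated command prompt):

        chkdsk E: /F /R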

    Read the article

  • Creating Test Sites

    - by Robert
    I have a website running off site. When we hire someone I would like to create a test site (a copy of the live site) for the new employee to tinker with. I will need to take fresh copies of the files and database (basically a snapshot) and allow them to access these copied files and database, so they can edit and upload them and see the changes they made as if it were the live site. Basically, what is the best practice for creating a copy of a website for testing? The server is running Linux, PHP and MySQL.
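
    A minimal sketch of the usual snapshot approach is below, assuming shell access to the server; the paths, database names and credentials are placeholders, and the test copy would then be served from its own vhost (e.g. test.example.com) with its config pointed at the copied database.

        #!/bin/bash
        # Snapshot the live site's files and database into a separate test copy.
        rsync -a /var/www/live/ /var/www/test/
        mysqldump -u live_user -p live_db > /tmp/live_db.sql
        mysql -u test_user -p test_db < /tmp/live_db.sql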

    Read the article

  • Test and Production Server

    - by Mike Silvis
    I am using Git for a test and a production server, and I'm trying to figure out the best way to update the production server. I have limited SSH access, and don't want to manually update my production server using FTP. I essentially would just like to be able to run a simple command and have the production server's files match my dev files. It is also important to note that users will be uploading images and other files to our production server only, which we cannot lose. Thanks, any help is appreciated.
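
    A rough sketch of one common setup follows (paths and hostnames are made up; it assumes the limited SSH access is enough to run git on the server). User uploads are kept out of the repository so a deploy can never overwrite them.

        # .gitignore (on dev and production) keeps user uploads out of version control:
        #   uploads/

        # One-time setup on the production server:
        #   git clone ssh://dev-box/path/to/repo.git /var/www/site

        # Each deploy is then a single command run from the dev machine:
        ssh user@production "cd /var/www/site && git pull origin master"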

    Read the article

  • Using JUnit as an acceptance test framework

    - by Chris Knight
    OK, so I work for a company which has openly adopted agile practices for development in recent years. Our unit tests and code quality are improving. One area we are still working on is finding what works best for us in the automated acceptance test arena. We want to take our well-formed user stories and use these to drive the code in a test-driven manner. This will also give us acceptance-level tests for each user story which we can then automate.

    To date, we've tried Fit, FitNesse and Selenium. Each has its advantages, but we've also had real issues with them. With Fit and FitNesse, we can't help but feel they overcomplicate things, and we've had many technical issues using them. The business haven't fully bought into these tools and aren't particularly keen on maintaining the scripts all the time (and aren't big fans of the table style). Selenium is really good, but slow and reliant on real-time data and resources.

    One approach we are now considering is the use of the JUnit framework to provide similar functionality. Rather than testing just a small unit of work using JUnit, why not use it to write a test (using the JUnit framework) to cover an acceptance-level swath of the application? I.e. take a new story ("As a user I would like to see basic details of my policy...") and write a test in JUnit which starts executing application code at the point of entry for the policy details link but covers all code and logic down to the stubbed data access layer and back to the point of forwarding to the next page in the application, asserting on what data the user should see on that page.

    This seems to me to have the following advantages:

    - Simplicity (no additional frameworks required)
    - Zero effort to integrate with our Continuous Integration build server (since it already handles our JUnit tests)
    - Full skillset already present in the team (it's just a JUnit test after all)

    And the downsides being:

    - Less customer involvement (though they are heavily involved in writing the user stories in the first place, from which the acceptance tests will be written)
    - Perhaps more difficult to understand (or make understood) the user story and acceptance criteria in a JUnit class versus a free-text specification à la Fit or FitNesse

    So, my question is really: have you ever tried this method? Ever considered it? What are your thoughts? What do you like and dislike about this approach? Finally, please only mention alternative frameworks if you can say why you like or dislike them more than this approach.
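
    For illustration, such an acceptance-level JUnit test might look roughly like the sketch below; every type in it is invented purely to show the shape of the idea (real entry point, real logic, stubbed data access), not taken from any actual codebase.

        import static org.junit.Assert.assertEquals;
        import org.junit.Test;

        public class ViewPolicyDetailsAcceptanceTest {

            // Invented stand-ins for the real application layers.
            interface PolicyRepository { String findHolderName(String policyNumber); }

            static class StubPolicyRepository implements PolicyRepository {
                public String findHolderName(String policyNumber) { return "John Smith"; }
            }

            static class PolicyDetailsService {
                private final PolicyRepository repository;
                PolicyDetailsService(PolicyRepository repository) { this.repository = repository; }
                String holderNameFor(String policyNumber) { return repository.findHolderName(policyNumber); }
            }

            // "As a user I would like to see basic details of my policy..."
            @Test
            public void userSeesBasicPolicyDetails() {
                PolicyDetailsService service = new PolicyDetailsService(new StubPolicyRepository());
                assertEquals("John Smith", service.holderNameFor("POL-123"));
            }
        }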

    Read the article

  • .NET unit test runner outputting FaultException.Detail

    - by Adam
    Hello, I am running some unit tests on a WCF service. The service is configured to include exception details in the fault response, with the following in my service configuration file:

        <serviceDebug includeExceptionDetailInFaults="true" />

    If a test causes an unhandled exception on the server, the fault is received by the client with a fully populated server stack trace. I can see this by calling the exception's ToString() method. The problem is that this doesn't seem to be output by any of the test runners that I have tried (xUnit, Gallio, MSTest). They appear to just output the Message and the StackTrace properties of the exception. To illustrate what I mean, the following unit test run by MSTest would output three sections: Error Message, Error Stack Trace, and Standard Console Output (which contains the information I would like, e.g. "Fault Detail is equal to An ExceptionDetail, likely created by IncludeExceptionDetailInFaults=true, whose value is: ..."):

        try
        {
            service.CallMethodWhichCausesException();
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex); // this outputs the information I would like
            throw;
        }

    Having this information will make the initial phase of testing and deployment a lot less painful. I know I can just wrap each unit test in a generic exception handler, write the exception to the console and rethrow (as above) within all my unit tests, but that seems a very long-winded way of achieving this (and would look pretty awful). Does anyone know if there's any way to get this information included for free whenever an unhandled exception occurs? Is there a setting that I am missing? Is my service configuration lacking in proper fault handling? Perhaps I could write some kind of plug-in / adapter for some unit testing framework? Perhaps there's a different unit testing framework which I should be using instead!

    My actual set-up is xUnit unit tests executed via Gallio for the development environment, but I do have a separate suite of "smoke tests" written which I would like our engineers to be able to run via the xUnit GUI test runner (or Gallio or whatever) to simplify the final deployment. Thanks. Adam
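
    Not an answer to the "for free" part, but one way to avoid repeating the try/catch in every test is to route service calls through a single shared helper that dumps the full exception (including the fault detail shown by ToString()) before rethrowing. This is only a sketch with illustrative names:

        using System;

        // Sketch: tests call ServiceCall.Invoke(...) instead of wrapping each call
        // in their own try/catch; the full exception text ends up in the runner's
        // console output and the test still fails because the exception is rethrown.
        public static class ServiceCall
        {
            public static T Invoke<T>(Func<T> call)
            {
                try
                {
                    return call();
                }
                catch (Exception ex)
                {
                    Console.WriteLine(ex); // includes the server stack trace / ExceptionDetail
                    throw;
                }
            }
        }

        // Usage in a test (service and method names are placeholders):
        //   var result = ServiceCall.Invoke(() => service.CallMethodWhichCausesException());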

    Read the article

  • Objective-C design advice for use of different data sources, swapping between test and live

    - by user200341
    I'm in the process of designing an application that is part of a larger piece of work, depending on other people to build an API that the app can make use of to retrieve data. While I was thinking about how to set up this project and design the architecture around it, something occurred to me, and I'm sure many people have been in similar situations. Since my work depends on other people completing their tasks, and on a test server, this slows work down at my end. So the question is: what's the best practice for creating test repositories and classes, implementing them, and not having to depend on altering several places in the code to swap between the test classes and the actual repositories / proper API calls?

    Contemplate the following scenario:

        GetDataFromApiCommand *getDataCommand = [[GetDataFromApiCommand alloc] init];
        getDataCommand.delegate = self;
        [getDataCommand getData];

    Once the data is available via the API, GetDataFromApiCommand could use the actual API, but until then a set of mock data could be returned upon the call of [getDataCommand getData]. There might be multiple instances of this, in various places in the code, so replacing all of them wherever they are is a slow and painful process which inevitably leads to one or two being overlooked. In strongly typed languages we could use dependency injection and just alter one place. In Objective-C a factory pattern could be implemented, but is that the best route to go for this?

        GetDataFromApiCommand *getDataCommand = [GetDataFromApiCommandFactory buildGetDataFromApiCommand];
        getDataCommand.delegate = self;
        [getDataCommand getData];

    What are the best practices to achieve this result? Since this would be most useful, even if you have the actual API available, to run tests or work offline, the ApiCommands would not necessarily have to be replaced permanently, but there would be the option to select "Do I want to use TestApiCommand or ApiCommand". It is more interesting to have the option to switch between "all commands are test" and "all commands use the live API", rather than selecting them one by one; however, that would also be useful for testing one or two actual API commands, mixing them with test data.

    EDIT: The way I have chosen to go with this is to use the factory pattern. I set up the factory as follows:

        @implementation ApiCommandFactory

        + (ApiCommand *)newApiCommand {
            // return [[ApiCommand alloc] init];
            return [[ApiCommandMock alloc] init];
        }

        @end

    And anywhere I want to use the ApiCommand class:

        GetDataFromApiCommand *getDataCommand = [ApiCommandFactory newApiCommand];

    When the actual API call is required, the comments can be removed and the mock can be commented out. Using new in the message name implies that whoever uses the factory to get an object is responsible for releasing it (since we want to avoid autorelease on the iPhone). If additional parameters are required, the factory needs to take these into consideration, i.e.:

        [ApiCommandFactory newSecondApiCommand:@"param1"];

    This will work quite well with repositories as well.

    Read the article

  • what to do with a flawed C++ skills test

    - by Mike Landis
    In the following gcc.gnu.org post, Nathan Myers says that a C++ skills test at SANS Consulting Services contained three errors in nine questions. Looking around, one of the first on-line C++ skills tests I ran across was: http://www.geekinterview.com/question_details/13090

    I looked at question 1...

        find(int x,int y) { return ((x<y)?0:(x-y)); }

        call find(a,find(a,b)) use to find
        (a) maximum of a,b
        (b) minimum of a,b
        (c) positive difference of a,b
        (d) sum of a,b

    ... immediately wondering why anyone would write anything so obtuse. Getting past the absurdity, I didn't really like any of the answers, immediately eliminating (a) and (b) because you can get back zero (which is neither a nor b) in a variety of circumstances. Sum or difference seemed more likely, except that you could also get zero regardless of the magnitudes of a and b. So... I put Matlab to work (code below) and found: when either a or b is negative you get zero; when b > a you get a; otherwise you get b. So the answer is (b), min(a,b), if a and b are positive, though strictly speaking the answer should be none of the above because there are no range restrictions on either variable. That forces test takers into a dilemma: choose the best available answer and be wrong in 3 of 4 quadrants, or don't answer, leaving the door open to the conclusion that the grader thinks you couldn't figure it out. The solution for test givers is to fix the test, but in the interim, what's the right course of action for test takers? Complain about the questions?

        function z = findfunc(x,y)
            for i=1:length(x)
                if x(i) < y(i)
                    z(i) = 0;
                else
                    z(i) = x(i) - y(i);
                end
            end
        end

        function [b,d1,z] = plotstuff()
            k = 50;
            a = [-k:1:k];
            b = (2*k+1) * rand(length(a),1) - k;
            d1 = findfunc(a,b);
            z = findfunc(a,d1);
            plot( a, b, 'r.', a, d1, 'g-', a, z, 'b-');
        end

    Read the article

  • Python, unit test - Pass command line arguments to setUp of unittest.TestCase

    - by sberry2A
    I have a script that acts as a wrapper for some unit tests written using the Python unittest module. In addition to cleaning up some files, creating an output stream and generating some code, it loads test cases into a suite using unittest.TestLoader().loadTestsFromTestCase(). I am already using optparse to pull out several command-line arguments used for determining the output location, whether to regenerate code and whether to do some clean-up. I also want to pass a configuration variable, namely an endpoint URI, for use within the test cases. I realize I can add an OptionParser to the setUp method of the TestCase, but I want to instead pass the option to setUp. Is this possible using loadTestsFromTestCase()? I can iterate over the returned TestSuite's TestCases, but can I manually call setUp on the TestCases?

    EDIT: I wanted to point out that I am able to pass the arguments to setUp if I iterate over the tests and call setUp manually, like:

        (options, args) = op.parse_args()
        suite = unittest.TestLoader().loadTestsFromTestCase(MyTests.TestSOAPFunctions)
        for test in suite:
            test.setUp(options.soap_uri)

    However, I am using xmlrunner for this and its run method takes a TestSuite as an argument. I assume it will run the setUp method itself, so I would need the parameters available within the XMLTestRunner. I hope this makes sense.
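
    One lightweight alternative, sketched below with the names from the question (op, MyTests, soap_uri) standing in for the real objects: stash the parsed option on the TestCase class before loading the suite, so setUp() keeps its normal signature and xmlrunner can run the suite unchanged.

        # Sketch: expose the option as a class attribute rather than a setUp() argument.
        (options, args) = op.parse_args()
        MyTests.TestSOAPFunctions.soap_uri = options.soap_uri
        suite = unittest.TestLoader().loadTestsFromTestCase(MyTests.TestSOAPFunctions)

        # ...and inside the test case:
        # def setUp(self):
        #     self.client = make_soap_client(self.soap_uri)   # hypothetical helper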

    Read the article

  • JavaScript to match a specific number using regular expressions

    - by ren33
    I was using JavaScript to detect specific keystrokes, and while writing the method I thought I'd try regular expressions and the test() method, and came up with:

        if (/8|9|37|38|46|47|48|49|50|51|52|53|54|55|56|57|96|97|98|99|100|101|102|103|104|105|110/.test(num)) {
            // do something if there's a match
        }

    This doesn't seem to work 100%, as some values seem to make it past the regex test, such as 83. I've since moved on, but I'm still curious as to why this didn't work.
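
    For what it's worth, the usual explanation and fix: without anchors, the alternation matches any value that merely contains one of the alternatives, so "83" passes because of the "8". Anchoring the whole group forces an exact match. A small sketch:

        // Anchored version: the entire value must equal one of the alternatives.
        var allowed = /^(8|9|37|38|46|47|48|49|50|51|52|53|54|55|56|57|96|97|98|99|100|101|102|103|104|105|110)$/;

        allowed.test("83");   // false (83 is not in the list)
        allowed.test("104");  // true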

    Read the article

  • JUnit: splitting integration tests and unit tests

    - by jeff porter
    Hello all, I've inherited a load of JUnit tests, but these tests (apart from most not working) are a mixture of actual unit tests and integration tests (requiring external systems, db etc). So I'm trying to think of a way to actually separate them out, so that I can run the unit tests nice and quickly and the integration tests after that. The options are:

    1: Split them into separate directories.
    2: Move to JUnit 4 and annotate the classes to separate them.
    3: Use a file naming convention to tell what a class is, i.e. AdapterATest and AdapterAIntegrationTest.

    Option 3 has the issue that Eclipse has the option to "Run all tests in the selected project/package or folder", so it would make it very hard to just run the integration tests. Option 2 runs the risk that developers might start writing integration tests in unit test classes and it just gets messy. Option 1 seems like the neatest solution, but my gut says there must be a better solution out there. So that is my question: how do you lot break apart integration tests and proper unit tests?
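
    If option 2 is chosen, JUnit 4.8's categories can make the split explicit rather than relying on naming. A rough sketch follows (the marker interface and class names are invented for illustration, shown as three small files):

        // IntegrationTest.java -- marker interface used only for categorisation
        public interface IntegrationTest {}

        // AdapterATest.java
        import org.junit.Test;
        import org.junit.experimental.categories.Category;

        public class AdapterATest {
            @Test
            public void worksInIsolation() { /* fast, self-contained unit test */ }

            @Category(IntegrationTest.class)
            @Test
            public void talksToTheRealDatabase() { /* needs external systems */ }
        }

        // IntegrationSuite.java -- runs only the integration-flavoured tests
        import org.junit.experimental.categories.Categories;
        import org.junit.experimental.categories.Categories.IncludeCategory;
        import org.junit.runner.RunWith;
        import org.junit.runners.Suite.SuiteClasses;

        @RunWith(Categories.class)
        @IncludeCategory(IntegrationTest.class)
        @SuiteClasses(AdapterATest.class)
        public class IntegrationSuite {}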

    Read the article

  • Recommended way to test ActiveRecord associations?

    - by Sugerman
    I have written my basic models and defined their associations, as well as the migrations to create the associated tables. I want to be able to test that:

    - The associations are configured as intended
    - The table structures support the associations properly

    I've written FG factories for all of my models in anticipation of having a complete set of test data, but I can't grasp how to write a spec to test both belongs_to and has_many associations. For example, given an Organization that has_many Users, I want to be able to test that my sample Organization has a reference to my sample User.

    Organization_Factory.rb:

        Factory.define :boardofrec, :class => 'Organization' do |o|
          o.name 'Board of Recreation'
          o.address '115 Main Street'
          o.city 'Smallville'
          o.state 'New Jersey'
          o.zip '01929'
        end

        Factory.define :boardofrec_with_users, :parent => :boardofrec do |o|
          o.after_create do |org|
            org.users = [Factory.create(:johnny, :organization => org)]
          end
        end

    User_Factory.rb:

        Factory.define :johnny, :class => 'User' do |u|
          u.name 'Johnny B. Badd'
          u.email '[email protected]'
          u.password 'password'
          u.org_admin true
          u.site_admin false
          u.association :organization, :factory => :boardofrec
        end

    Organization_spec.rb:

        ...
        it "should have the user Johnny B. Badd" do
          boardofrec_with_users = Factory.create(:boardofrec_with_users)
          boardofrec_with_users.users.should include(Factory.create(:johnny))
        end
        ...

    This example fails because the Organization.users list and the comparison User :johnny are separate instances of the same factory. I realize this doesn't follow the BDD ideas behind what these plugins (FG, rspec) seem to be geared for, but seeing as this is my first Rails application I'm uncomfortable moving forward without knowing that I've configured my associations and table structures properly.

    Read the article

  • Assigning static final int in a JUnit (4.8.1) test suite

    - by Dr. Monkey
    I have a JUnit test class in which I have several static final ints that can be redefined at the top of the tester code to allow some variation in the test values. I have logic in my @BeforeClass method to ensure that the developer has entered values that won't break my tests. I would like to improve variation further by allowing these ints to be set to (sensible) random values in the @BeforeClass method if the developer sets a boolean useRandomValues = true;. I could simply remove the final keyword to allow the random values to overwrite the initialisation values, but I have final there to ensure that these values are not inadvertently changed, as some tests rely on the consistency of these values. Can I use a constructor in a JUnit test class? Eclipse starts putting red underlines everywhere if I try to make my @BeforeClass into a constructor for the test class, and making a separate constructor doesn't seem to allow assignment to these variables (even if I leave them unassigned at their declaration). Is there another way to ensure that any attempt to change these variables after the @BeforeClass method will result in a compile-time error? Can I make something final after it has been initialised?
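
    A field can't be made final after the fact, but it can stay final and still take a (possibly random) value, as long as that value is computed once at class-initialisation time rather than inside @BeforeClass. A small sketch with invented names:

        import static org.junit.Assert.assertTrue;
        import java.util.Random;
        import org.junit.Test;

        public class WidgetLimitTest {
            private static final boolean useRandomValues = true;

            // Initialised exactly once when the class loads; reassignment anywhere
            // else is a compile-time error, yet the value can still vary per run.
            private static final int MAX_WIDGETS =
                    useRandomValues ? 10 + new Random().nextInt(90) : 50;

            @Test
            public void limitStaysInSensibleRange() {
                assertTrue(MAX_WIDGETS >= 10 && MAX_WIDGETS < 100);
            }
        }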

    Read the article
