Search Results

Search found 10178 results on 408 pages for 'testing metaprogramming'.

Page 85/408

  • Online product demo environment for Windows applications

    - by Stefanos Tses
    I'm looking for a way to allow potential customers to try my application before they buy it. The product is a Windows Forms application that requires an SQL Server database to operate. Although I have a functional demo that the customer can install on their network, I want to make it easier for them by having them "play" with it in my environment. I remember Microsoft had (has?) something similar: I was testing Visual Studio a few years ago in a virtual environment where I was connecting to a server at Microsoft. Any suggestions? Thanks.

    Read the article

  • How can I detect a debugger or other tool that might be analysing my software?

    - by Workshop Alex
    A very simple situation. I'm working on an application in Delphi 2007 which is often compiled as 'Release' but still runs under a debugger. And occasionally it will run under SilkTest too, for regression testing. While this is quite fun, I want to do something special... I want to detect if my application is running within a debugger/regression-tester and, if that's the case, I want the application to know which tool is used! (Thus, when the application crashes, I could report this information in its error report.) Any suggestions, solutions?
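
    The question is Delphi-specific and names no API, but the usual building block on Windows for the debugger half of this check is the Win32 IsDebuggerPresent call, which Delphi can import just as well. Purely as an illustration of the idea, a minimal Python/ctypes sketch (Windows-only by assumption; detecting a specific tool such as SilkTest would need a separate check, e.g. looking for the tool's agent process):

        import ctypes
        import sys

        def running_under_debugger():
            """Return True if a native debugger is attached (Windows only)."""
            if sys.platform != "win32":
                return False  # assumption: non-Windows platforms are out of scope here
            return bool(ctypes.windll.kernel32.IsDebuggerPresent())

        if __name__ == "__main__":
            print("Debugger attached:", running_under_debugger())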

    Read the article

  • Setting up functional Tests in Flex

    - by Dan Monego
    I'm setting up a functional test suite for an application that loads an external configuration file. Right now, I'm using flexunit's addAsync function to load it and then again to test if the contents point to services that exist and can be accessed. The trouble with this is that having this kind of two (or more) stage method means that I'm running all of my tests in the context of one test with dozens of asserts, which seems like a kind of degenerate way to use the framework, and makes bugs harder to find. Is there a way to have something like an asynchronous setup? Is there another testing framework that handles this better?

    Read the article

  • Clojure / HBase: How to Import HBaseTestingUtility in v0.94.6.1

    - by David Williams
    In Clojure, if I want to start a test cluster using the HBase testing utility, I have to annotate my dependencies with:

        [org.apache.hbase/hbase "0.92.2" :classifier "tests" :scope "test"]

    First of all, I have no idea what this means. According to Leiningen's sample project.clj:

        ;; Dependencies are listed as [group-id/name version]; in addition
        ;; to keywords supported by Pomegranate, you can use :native-prefix
        ;; to specify a prefix. This prefix is used to extract natives in
        ;; jars that don't adhere to the default "<os>/<arch>/" layout that
        ;; Leiningen expects.

    Question 1: What does that mean? Question 2: If I upgrade the version:

        [org.apache.hbase/hbase "0.94.6.1" :classifier "tests" :scope "test"]

    then I receive a ClassNotFoundException:

        Exception in thread "main" java.lang.ClassNotFoundException: org.apache.hadoop.hbase.HBaseConfiguration

    What's going on here and how do I fix it?

    Read the article

  • Generic test suite for ASP.NET Membership/Role/Profile/Session Providers

    - by SztupY
    Hi! I've just created custom ASP.NET Membership, Role, Profile and Session State providers, and I was wondering whether there exists a test suite or something similar to test the implementation of the providers. I've checked some of the open source providers I could find (like the NauckIt.PostgreSQL provider), but none of them contained unit tests, and all of the forum topics I've found mentioned only a few test cases (like checking whether creating a user works), but this is clearly not a complete test suite for a Membership provider. (And I couldn't find anything for the other three providers.) Are there more or less complete test suites for the above mentioned providers, or are there custom providers out there that have at least some testing available?

    Read the article

  • How do we name test methods where we are checking for more than one condition?

    - by Sandbox
    I follow the technique specified in Roy Osherove's The Art of Unit Testing book while naming test methods - MethodName_Scenario_Expectation. It suits my 'unit' tests perfectly well. But for tests that I write in a 'controller' or 'coordinator' class, there isn't necessarily a method which I want to test. For these tests, I generate multiple conditions which make up one scenario and then I verify the expectation. For example, I may set some properties on different instances, generate an event and then verify that my expectations of the controller/coordinator are being met. Now, my controller handles events using a private event handler. Here my scenario is that I set some properties, say condition1, condition2 and condition3, and the scenario also includes an event being raised. I don't have a method name, as my event handler is private. How do I name such a test method?
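
    Purely as an illustration (not from Osherove's book or the original post): when there is no public method to put in the first slot, the unit of work or the triggering event can take its place in the same pattern. A minimal sketch in Python's unittest, with hypothetical names:

        import unittest

        class RegistrationCoordinatorTests(unittest.TestCase):
            # Pattern: UnitOfWork_Scenario_Expectation, since no public method name exists.
            def test_registration_event_with_condition1_condition2_condition3_set_meets_expectation(self):
                ...  # arrange the three conditions, raise the event, verify the expectation

        if __name__ == "__main__":
            unittest.main()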

    Read the article

  • How to simulate a mouse click in Cocoa for the iPhone?

    - by eagle
    I'm trying to set up automated unit tests for an iPhone application. I'm using a UIWebBrowser and need to simulate clicks on different links. I've tried doing this with JavaScript, but it doesn't produce the same results as when I manually click on the links. The main problem is with links that have their target property set. I believe the only way for this automated unit test to work correctly is to simulate a mouse click at a specific x/y coordinate (i.e. where the link is located). Since the unit testing will only be used internally, private API calls are fine. It seems like this should be possible, since the iPhone app iSimulate seems to do something similar. Is there any way to do this in the framework?

    Read the article

  • C# / Visual Studio: production and test code placement

    - by Patrick Linskey
    Hi, In JavaLand, I'm used to creating projects that contain both production and test code. I like this practice because it simplifies testing of internal code without artificially exposing the internals in a project's published API. So far, in my experiences with C# / Visual Studio / ReSharper / NUnit, I've created separate projects (i.e., separate DLLs) for production and test code. Is this the idiom, or am I off base? If this is idiomatically correct, what's the right way to deal with exposing classes and methods for test purposes? Thanks, -Patrick

    Read the article

  • Hudson, C++ and UnitTest++

    - by Gilad Naor
    Has anyone used Hudson as a Continuous-Integration server for a C++ project using UnitTest++ as a testing library? How exactly did you set it up? I know there have been several questions on Continuous Integration before, but I hope this one has a narrower scope. EDIT: I'll clarify a bit on what I'm looking for. I already have the build set to fail when the unit tests fail. I'm looking for something like Hudson's JUnit support. UnitTest++ can create XML reports (see here). So perhaps, if someone knows how to translate these reports to be JUnit compatible, Hudson will know how to eat them up?

    Read the article

  • How do I diff two spreadsheets?

    - by neu242
    We have a lot of spreadsheets (xls) in our Subversion repository. These are usually edited with Gnumeric or OpenOffice.org, and are mostly used to populate databases for unit testing with dbUnit. There are no easy ways of doing diffs on xls files that I know of, and this makes merging extremely tedious and error prone. I've found Spreadsheet Compare, but it requires Excel 2000 or later. I've also tried converting the spreadsheets to XML and doing a regular diff, but it really feels like a last resort. Are there any tools for diffing two spreadsheets (xls or ods)? I am primarily looking for a multi-platform/open source tool.
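
    Not from the original question, but as an illustration of the brute-force approach such a tool has to take: a minimal Python sketch that compares two workbooks cell by cell, assuming .xlsx files and the openpyxl library (the .xls/.ods files mentioned above would need a different reader, e.g. xlrd or odfpy), with hypothetical file names.

        from openpyxl import load_workbook

        def diff_workbooks(path_a, path_b):
            """Print every cell whose value differs between two .xlsx files."""
            wb_a = load_workbook(path_a, data_only=True)
            wb_b = load_workbook(path_b, data_only=True)
            for name in sorted(set(wb_a.sheetnames) | set(wb_b.sheetnames)):
                if name not in wb_a.sheetnames or name not in wb_b.sheetnames:
                    print("sheet %r exists in only one workbook" % name)
                    continue
                ws_a, ws_b = wb_a[name], wb_b[name]
                rows = max(ws_a.max_row, ws_b.max_row)
                cols = max(ws_a.max_column, ws_b.max_column)
                for r in range(1, rows + 1):
                    for c in range(1, cols + 1):
                        a = ws_a.cell(row=r, column=c).value
                        b = ws_b.cell(row=r, column=c).value
                        if a != b:
                            print("%s!R%dC%d: %r -> %r" % (name, r, c, a, b))

        if __name__ == "__main__":
            diff_workbooks("old.xlsx", "new.xlsx")  # hypothetical file names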

    Read the article

  • Are there any open source SimpleTest Test Cases that test PHP SPL interfaces

    - by JW
    I have quite a few objects in my system that implement the PHP SPL Iterator interface. As I write them I also write tests. I know that writing tests is generally NOT a cut 'n paste job. But, when it comes to testing classes that implement Standard PHP Library interfaces, surely it makes sense to have a few script snippets that can be borrowed and dropped into a Test class - purely to test that particular interface. It seems sensible to have these publicly available. So, I was wondering if you knew of any?

    Read the article

  • mocking command object in grails controller results in hasErrors() return false no matter what! Plea

    - by egervari
    I have a controller that uses a command object in a controller action. When mocking this command object in a Grails controller unit test, the hasErrors() method always returns false, even when I am purposefully violating its constraints.

        def save = { RegistrationForm form ->
            if(form.hasErrors()) {
                // code block never gets executed
            } else {
                // code block always gets executed
            }
        }

    In the test itself, I do this:

        mockCommandObject(RegistrationForm)
        def form = new RegistrationForm(emailAddress: "ken.bad@gmail", password: "secret", confirmPassword: "wrong")
        controller.save(form)

    I am purposefully giving it a bad email address, and I am making sure the password and the confirmPassword properties are different. In this case, hasErrors() should return true... but it doesn't. I don't know how my testing can be anywhere reliable if such a basic thing does not work :/ Here is the RegistrationForm class, so you can see the constraints I am using:

        class RegistrationForm {
            def springSecurityService

            String emailAddress
            String password
            String confirmPassword

            String getEncryptedPassword() {
                springSecurityService.encodePassword(password)
            }

            static constraints = {
                emailAddress(blank: false, email: true)
                password(blank: false, minSize: 4, maxSize: 10)
                confirmPassword(blank: false, validator: { confirmPassword, form ->
                    confirmPassword == form.password
                })
            }
        }

    Read the article

  • How to run only the latest/a given test using Rspec?

    - by marcgg
    Let's say I have a big spec file with 20 tests because I'm testing a large model and I had no other way of doing it:

        describe Blah do
          it "should do X" do
            ...
          end

          it "should do Y" do
            ...
          end

          ...

          it "should do Z" do
            ...
          end
        end

    Running a single file is faster than running the whole test suite, but it's still pretty long. Is there a way to run only the last one (i.e. the one at the end of the file, here "should do Z")? If this is not possible, is there a way to specify which test I want to run in my file?

    Read the article

  • supply inputs to python unittests

    - by zubin71
    I'm relatively new to the concept of unit testing and have very little experience in the same. I have been looking at lots of articles on how to write unit tests; however, I still have difficulty in writing tests where conditions like the following arise:

        1. Test user input.
        2. Test input read from a file.
        3. Test input read from an environment variable.

    It'd be great if someone could show me how to approach the above mentioned scenarios; it'd still be awesome if you could point me to a few docs/articles/blog posts which I could read.
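
    Not part of the original question, but a minimal sketch of how these three cases are commonly handled with the standard library's unittest and unittest.mock (the functions under test, prompt_user, read_greeting and current_mode, are hypothetical stand-ins):

        import io
        import os
        import unittest
        from unittest import mock

        def prompt_user():
            return input("Your name: ")

        def read_greeting(path):
            with open(path) as handle:
                return handle.read().strip()

        def current_mode():
            return os.environ.get("APP_MODE", "production")

        class InputTests(unittest.TestCase):
            def test_user_input(self):
                # Patch builtins.input so no real keyboard interaction is needed.
                with mock.patch("builtins.input", return_value="Ada"):
                    self.assertEqual(prompt_user(), "Ada")

            def test_file_input(self):
                # Patch open() to serve an in-memory file instead of touching disk.
                with mock.patch("builtins.open", return_value=io.StringIO("hello\n")):
                    self.assertEqual(read_greeting("ignored.txt"), "hello")

            def test_environment_variable(self):
                # Temporarily override the environment for the duration of the test.
                with mock.patch.dict(os.environ, {"APP_MODE": "testing"}):
                    self.assertEqual(current_mode(), "testing")

        if __name__ == "__main__":
            unittest.main()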

    Read the article

  • Multiple Asserts in a Unit Test

    - by whatispunk
    I've just finished reading Roy Osherove's "The Art of Unit Testing" and I am trying to adhere to the best practices he lays out in the book. One of those best practices is to not use multiple asserts in a test method. The reason for this rule is fairly clear to me, but it makes me wonder... If I have a method like:

        public Foo MakeFoo(int x, int y, int z)
        {
            Foo f = new Foo();
            f.X = x;
            f.Y = y;
            f.Z = z;
            return f;
        }

    must I really write individual unit tests to assert each separate property of Foo is initialized with the supplied value? Is it really all that uncommon to use multiple asserts in a test method? FYI: I am using MSTest.
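
    Not from the book or the question, but one common middle ground is to treat "one logical assertion" as comparing the whole constructed object in a single assert. A language-agnostic illustration, sketched here in Python rather than C#/MSTest:

        import unittest
        from dataclasses import dataclass

        @dataclass
        class Foo:
            x: int
            y: int
            z: int

        def make_foo(x, y, z):
            return Foo(x, y, z)

        class MakeFooTests(unittest.TestCase):
            def test_make_foo_copies_all_arguments(self):
                # One logical assertion covering every field at once:
                # if it fails, the diff shows exactly which field is wrong.
                self.assertEqual(make_foo(1, 2, 3), Foo(x=1, y=2, z=3))

        if __name__ == "__main__":
            unittest.main()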

    Read the article

  • Working effectively with unit tests / Anyone tried the in-assembly approach?

    - by CodingCrapper
    I'm trying to re-introduce unit testing into my team as our current coverage is very poor. Our system is quite large: 40+ projects/assemblies. We currently use a project named [SystemName].Test.csproj where all the test code is dumped and organised into folders that mirror the namespaces. This approach is not very scalable and makes it difficult to find tests. I've been thinking about adding a Tests folder to each project; this would put the unit tests "in the developers' face" and make them easy to find. The downside is that the production release code would contain references to NUnit and NMock, as well as the test code and test data... Has anyone tried this approach? How is everyone else working with unit tests on large projects? Having a Tests project per "real" project/assembly would introduce too many new projects. Thanks in advance

    Read the article

  • Custom annotations to configure tests

    - by ace
    First of all, let me start off by saying I think custom annotations can be used for this, but I'm not totally sure. I would like to have a set of annotations that I can decorate some test classes with. The annotations would allow me to configure the test for different environments. Example:

        public class Atest extends BaseTest {
            private String env;

            @Login(environment=env)
            public void testLogin(){
                //do something
            }

            @SignUp(environment=env)
            public void testSignUp(){
                //do something
            }
        }

    The idea here would be that the Login annotation would then be used to look up the username and password to be used in the testLogin method for testing a login process for a particular environment. So my question(s): is this possible to do with annotations? If so, I have not been able to find a decent howto online to do something like this. Everything out there seems to be your basic "here's how to do your custom annotations" and a basic processor, but I haven't found anything for a situation like this. Ideas?

    Read the article

  • Can this loop be sped up in pure Python?

    - by Noctis Skytower
    I was trying out an experiment with Python, trying to find out how many times it could add one to an integer in one minute's time. Assuming two computers are the same except for the speed of the CPUs, this should give an estimate of how fast some CPU operations may take for the computer in question. The code below is an example of a test designed to fulfill the requirements given above. This version is about 20% faster than the first attempt and 150% faster than the third attempt. Can anyone make any suggestions as to how to get the most additions in a minute's time span? Higher numbers are desirable. EDIT: This experiment is being written in Python 3.1 and is 15% faster than the fourth speed-up attempt.

        def start(seconds):
            import time, _thread

            def stop(seconds, signal):
                time.sleep(seconds)
                signal.pop()

            total, signal = 0, [None]
            _thread.start_new_thread(stop, (seconds, signal))
            while signal:
                total += 1
            return total

        if __name__ == '__main__':
            print('Testing the CPU speed ...')
            print('Relative speed:', start(60))
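
    Not from the original post: one commonly suggested variation is to drop the helper thread and check the clock only once per large batch of iterations, which keeps the hot loop almost as tight. A minimal sketch (the batch size of 500000 is an arbitrary assumption, and the count can overshoot the deadline by up to one batch):

        import time

        def count_additions(seconds, batch=500000):
            """Count how many increments fit in `seconds`, checking the clock once per batch."""
            deadline = time.time() + seconds
            total = 0
            while time.time() < deadline:
                for _ in range(batch):
                    total += 1
            return total

        if __name__ == '__main__':
            print('Relative speed:', count_additions(60))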

    Read the article

  • Seeking recommendations on automated test framework for C

    - by Hissohathair
    I'm writing some code (some of which uses W3C's libwww) in C. It's been a while since I've touched ANSI C. Back in the day we rolled our own test framework. Does anybody here have any test frameworks that they recommend for C programming? Googling around, I was inclined to go with Check. It has a page on other unit testing frameworks in C, a few of which I've taken a quick look at. GNU AutoUnit seemed like it might be a good choice since I'm using the GNU build tools (autoconf, automake), but it doesn't look that alive... Another option would be to use a C++ framework and just write my tests in C++. Anyway, any experienced opinions would be appreciated. Thanks.

    Read the article

  • Facebook API - Local development, test server, live server ... How?

    - by Thijs Kaspers
    I'm working on a new website that uses the Facebook API for users to log in, and several implementations of the Graph API. My workflow usually is:

        1. Development on localhost - development using MAMP/XAMPP or similar software.
        2. Push to server - testing domain: a team of people can test the changes for a few days to see if everything works as planned.
        3. Push to server - live domain: changes are live for the public.

    Facebook uses the site URL in the app settings and, for security reasons, they will only redirect to that URL... Problem is, I have localhost and 2 different domains. How can I make this work? Of course I could edit the hosts file, but that only fixes it for localhost... Still no solution for the test domain. Please tell me this is somehow possible! I'm getting more and more depressed with the Facebook API.

    Read the article

  • Getting Assert to work in Visual C++ Unit Tests?

    - by garsh0p
    I'm using Visual Studio 2008's built-in testing framework in my Visual C++ project. I'm adding a new Test Project, then a new Unit Test. However, I can't use any of the functions provided by Assert. Assert shows up in IntelliSense, but I can't do anything with it. I've done unit tests fine in Visual C#. Am I forgetting to do anything? EDIT: There isn't much code because everything I'm doing is auto-generated by Visual Studio 2008. Here are the steps I'm doing:

        1. File - New Project - Visual C++ - General - Empty Project
        2. Right click solution in Solution Explorer - Add - New Project...
        3. Visual C++ - Test - Test Project
        4. Open UnitTest1.cpp (auto-generated)
        5. Go to TestMethod1()

    From here, when I try to use the Assert class (like Assert.AreEqual), I can't do it. If I do the same in a Visual C# project, it works fine.

    Read the article

  • Firefox - Stashing Requests for Deliberate Resubmission to Django App

    - by Koobz
    I've got an object creation form that's somewhat complicated; it contains a few dynamic formsets, etc. I'm trying to ensure that these dynamic formsets are intact if the form runs into an error and returns you to the given page. In cases like this, the refresh button actually works well at re-submitting the request, but I can't rely on it. I'm doing some ad-hoc testing in the browser that I'd like to make a bit more repeatable, and eventually move to a unit test using Django's mock client. Is there an extension, or some convenient method, to stash requests for later re-submission? The goal: I resubmit the request, tweak the code, eyeball the results, rinse and repeat. Three days later I can come back to it and try it again to make sure it's still working. The closest thing I can think of in this case is simply recording my activity with Selenium IDE and replaying it.
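
    Not from the original post, but a minimal sketch of the "eventually move to a unit test" step using Django's test client, with a hypothetical URL and hypothetical formset field names (the captured payload would come from the stashed browser request):

        from django.test import TestCase

        class ObjectCreationFormTests(TestCase):
            def test_resubmitted_request_keeps_dynamic_formsets(self):
                # Captured form payload from the browser session (field names hypothetical).
                payload = {
                    "name": "example",
                    "items-TOTAL_FORMS": "2",
                    "items-INITIAL_FORMS": "0",
                    "items-0-title": "first",
                    "items-1-title": "second",
                }
                response = self.client.post("/objects/create/", data=payload)
                # On a validation error the page should come back with the formsets intact.
                self.assertContains(response, "items-1-title")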

    Read the article

  • Expected specifier-qualifier-list before 'CGPoint'

    - by Rob
    My project compiles and runs fine, but when I try to compile my Unit Test Bundle it bombs out on the following with an "Expected specifier-qualifier-list before 'CGPoint'" error on line 5:

        #import <Foundation/Foundation.h>
        #import "Force.h"

        @interface WorldObject : NSObject {
            CGPoint coordinates;
            float altitude;
            NSMutableDictionary *forces;
        }

        @property (nonatomic) CGPoint coordinates;
        @property (nonatomic) float altitude;
        @property (nonatomic, retain) NSMutableDictionary *forces;

        - (void)setObject:(id)anObject inForcesForKey:(id)aKey;
        - (void)removeObjectFromForcesForKey:(id)aKey;
        - (id)objectFromForcesForKey:(id)aKey;
        - (void)applyForces;

        @end

    I have made sure that my Unit Test Bundle is a target of my WorldObject.m and its header is imported in my testing header:

        #define USE_APPLICATION_UNIT_TEST 1

        #import <SenTestingKit/SenTestingKit.h>
        #import <UIKit/UIKit.h>
        #import "Force.h"
        #import "WorldObject.h"

        @interface LogicTests : SenTestCase {
            Force *myForce;
            WorldObject *myWorldObject;
        }
        @end

    Read the article

  • Is my code really not unit-testable?

    - by John
    A lot of code in a current project is directly related to displaying things using a 3rd-party 3D rendering engine. As such, it's easy to say "this is a special case, you can't unit test it". But I wonder if this is a valid excuse... it's easy to think "I am special", but that's rarely actually the case. Are there types of code which are genuinely not suited for unit testing? By suitable, I mean "without figuring out how to write the test taking longer than it's worth"... dealing with a ton of 3D math/rendering, it could take a lot of work to prove the output of a function is correct, compared with just looking at the rendered graphics.

    Read the article
