Search Results

Search found 10078 results on 404 pages for 'smoke testing'.

Page 49 of 404

  • How do you run your unit tests? Compiler flags? Static libraries?

    - by Christopher Gateley
    I'm just getting started with TDD and am curious as to what approaches others take to run their tests. For reference, I am using the Google testing framework, but I believe the question is applicable to most other testing frameworks and to languages other than C/C++. My general approach so far has been to do one of three things:

    - Write the majority of the application in a static library, then create two executables. One executable is the application itself, while the other is the test runner with all of the tests. Both link to the static library.
    - Embed the testing code directly into the application itself, and enable or disable the testing code using compiler flags. This is probably the best approach I've used so far, but it clutters up the code a bit.
    - Embed the testing code directly into the application itself, and, given certain command-line switches, either run the application itself or run the tests embedded in the application.

    None of these solutions are particularly elegant... How do you do it?

    Read the article

  • Is it feasible and useful to auto-generate some unit test code?

    - by skiwi
    Earlier today I came up with an idea, based upon a particular real use case, which I would like to have checked for feasibility and usefulness. This question will feature a fair chunk of Java code, but it can be applied to all languages running inside a VM, and maybe even outside. While there is real code, it uses nothing language-specific, so please read it mostly as pseudo code.

    The idea: make unit testing less cumbersome by adding some ways to auto-generate code based on human interaction with the codebase. I understand this goes against the principle of TDD, but I don't think anyone ever proved that doing TDD is better than first creating code and then writing the tests immediately thereafter. This may even be adapted to fit into TDD, but that is not my current goal. To show how it is intended to be used, I'll copy one of my classes here, for which I need to make unit tests.

        public class PutMonsterOnFieldAction implements PlayerAction {
            private final int handCardIndex;
            private final int fieldMonsterIndex;

            public PutMonsterOnFieldAction(final int handCardIndex, final int fieldMonsterIndex) {
                this.handCardIndex = Arguments.requirePositiveOrZero(handCardIndex, "handCardIndex");
                this.fieldMonsterIndex = Arguments.requirePositiveOrZero(fieldMonsterIndex, "fieldCardIndex");
            }

            @Override
            public boolean isActionAllowed(final Player player) {
                Objects.requireNonNull(player, "player");
                Hand hand = player.getHand();
                Field field = player.getField();
                if (handCardIndex >= hand.getCapacity()) {
                    return false;
                }
                if (fieldMonsterIndex >= field.getMonsterCapacity()) {
                    return false;
                }
                if (field.hasMonster(fieldMonsterIndex)) {
                    return false;
                }
                if (!(hand.get(handCardIndex) instanceof MonsterCard)) {
                    return false;
                }
                return true;
            }

            @Override
            public void performAction(final Player player) {
                Objects.requireNonNull(player);
                if (!isActionAllowed(player)) {
                    throw new PlayerActionNotAllowedException();
                }
                Hand hand = player.getHand();
                Field field = player.getField();
                field.setMonster(fieldMonsterIndex, (MonsterCard)hand.play(handCardIndex));
            }
        }

    We can observe the need for the following tests:

    - Constructor test with valid input
    - Constructor test with invalid inputs
    - isActionAllowed test with valid input
    - isActionAllowed test with invalid inputs
    - performAction test with valid input
    - performAction test with invalid inputs

    My idea mainly focuses on the isActionAllowed test with invalid inputs. Writing these tests is not fun: you need to ensure a number of conditions and check whether the method really returns false. This can be extended to performAction, where an exception needs to be thrown in that case. The goal of my idea is to generate those tests, by indicating (through the GUI of the IDE, hopefully) that you want to generate tests based on a specific branch.

    The implementation by example: the user clicks on "Generate code for branch if (handCardIndex >= hand.getCapacity())". Now the tool needs to find a case where that holds. (I haven't added the relevant code as that may clutter the post.) To invalidate the branch, the tool needs to find a handCardIndex and hand.getCapacity() such that the condition >= holds. It needs to construct a Player with a Hand that has a capacity of at least 1. It notices that the private int capacity of Hand needs to be at least 1, and it searches for ways to set it to 1. Fortunately it finds a constructor that takes the capacity as an argument, and uses 1 for this. Some more work needs to be done to successfully construct a Player instance, involving the creation of objects whose constraints can be seen by inspecting the source code. It has found the hand with the least capacity possible and is able to construct it. Now, to invalidate the test, it will need to set handCardIndex = 1. It constructs the test and asserts it to be false (the returned value of the branch).

    What does the tool need to work? In order to function properly, it will need the ability to scan through all source code (including JDK code) to figure out all constraints. Optionally this could be done through the Javadoc, but that is not always used to indicate all constraints. It could also do some trial and error, but it pretty much stops if you cannot attach source code to compiled classes. Then it needs some basic knowledge of what the primitive types are, including arrays. And it needs to be able to construct some form of "modification trees": the tool knows that it needs to change a certain variable to a different value in order to get the correct test case, hence it will need to list all possible ways to change it, without using reflection obviously. What this tool will not replace is the need to create tailored unit tests that check all kinds of conditions when a certain method actually works. It is purely to be used to test methods when they invalidate constraints.

    My questions:

    - Is creating such a tool feasible? Would it ever work, or are there some obvious problems?
    - Would such a tool be useful? Is it even useful to automatically generate these test cases at all? Could it be extended to do even more useful things?
    - Does, by chance, such a project already exist, and would I be reinventing the wheel?

    If not proven useful, but still possible to make such a thing, I will still consider it for fun. If it's considered useful, then I might make an open source project for it depending on the time. For people searching for more background information about the Player and Hand classes used in my example, please refer to this repository. At the time of writing the PutMonsterOnFieldAction has not been uploaded to the repo yet, but this will be done once I'm done with the unit tests.
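
    To make the intended output concrete, here is a minimal hand-written sketch of the kind of JUnit test such a tool would generate for the branch "handCardIndex >= hand.getCapacity()". The Hand, Field and Player constructor signatures are assumptions for illustration only, not taken from the actual repository:

        import static org.junit.Assert.assertFalse;

        import org.junit.Test;

        public class PutMonsterOnFieldActionGeneratedTest {

            @Test
            public void isActionAllowedReturnsFalseWhenHandCardIndexExceedsCapacity() {
                // Assumed constructors: Hand(int capacity), Field(int monsterCapacity),
                // Player(Hand hand, Field field) -- guessed signatures, used only to
                // build the smallest Hand that makes the branch reachable.
                Player player = new Player(new Hand(1), new Field(5));

                // handCardIndex = 1 together with a hand capacity of 1 makes the branch
                // "handCardIndex >= hand.getCapacity()" evaluate to true.
                PutMonsterOnFieldAction action = new PutMonsterOnFieldAction(1, 0);

                assertFalse(action.isActionAllowed(player));
            }
        }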

    Read the article

  • Test Driven Development (TDD) with Rails

    - by macek
    I am looking for TDD resources that are specific to Rails. I've seen the Rails Guide: The Basics of Creating a Rails Plugin which really spurred my interest in the topic. I have the Agile Development with Rails book and I see there's some testing-related information there. However, it seems like the author takes you through the steps of building the app, then adds testing afterward. This isn't really Test Driven Development. Ideally, I'd like a book on this, but a collection of other tutorials or articles would be great if such a book doesn't exist. Things I'd like to learn:

    - Primary goal: Best Practices
    - Unit testing
    - How to utilize Fixtures
      - Possibly using existing development data in place of fixtures
      - What's the community standard here?
    - Writing tests for plugins
    - Testing with session data
      - User is logged in
      - User can access URL /foo/bar
    - Testing success of sending email

    Thanks for any help!

    Read the article

  • Sequence number when testing a Spring application with JUnit (Hibernate, Spring MVC)

    - by MBK
    I am testing a DAO in a Spring application.

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = "classpath:/applicationContext.xml")
        @TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
        @Transactional
        public class CommentDAOImplTest {
            @Autowired
            // testing methods here
        }

    The tests are running fine. I am able to add a comment, and I also have the defaultRollback property set, so the added comment will be deleted automatically. Happy! Now the problem is with the sequence number for my comment. Can I in any way roll back the sequence number? Any suggestions on that? I don't want to mess up the sequence number. The business requires the comment ID to be shown. (I still don't know why.) I know an in-memory DB is an option... but I am guessing the purpose of defaultRollback is to eliminate in-memory DB testing and mocking. (Just my opinion.)
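
    For context, here is a minimal sketch of the kind of test the setup above describes; CommentDAO, Comment and their methods are assumed names, not taken from the actual project. It also illustrates the underlying issue: in most databases a sequence is non-transactional, so rolling back the insert does not rewind the sequence.

        import static org.junit.Assert.assertNotNull;

        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.springframework.beans.factory.annotation.Autowired;
        import org.springframework.test.context.ContextConfiguration;
        import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
        import org.springframework.test.context.transaction.TransactionConfiguration;
        import org.springframework.transaction.annotation.Transactional;

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration(locations = "classpath:/applicationContext.xml")
        @TransactionConfiguration(transactionManager = "transactionManager", defaultRollback = true)
        @Transactional
        public class CommentDAOImplTest {

            @Autowired
            private CommentDAO commentDAO; // assumed DAO bean

            @Test
            public void addedCommentIsRolledBackAfterTheTest() {
                Comment comment = new Comment("test body"); // assumed entity
                commentDAO.save(comment);                   // assumed save method

                // The insert is rolled back when the test finishes, but the database
                // sequence that generated this ID has already been incremented and
                // will not be rolled back with the transaction.
                assertNotNull(comment.getId());
            }
        }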

    Read the article

  • Code excavations, wishful invocations, perimeters and domain specific unit test frameworks

    - by RoyOsherove
    One of the talks I did at QCON London was about a subject that I've come across fairly recently, when I was building SilverUnit – a "pure" unit test framework for Silverlight objects that depend on the Silverlight runtime to run. It is the concept of "cogs in the machine" – when your piece of code needs to run inside a host framework or runtime that you have little or no control over for testability-related matters. Examples of such cogs and machines can be:

    - your custom control running inside the Silverlight runtime in the browser
    - your plug-in running inside an IDE
    - your activity running inside a Windows Workflow
    - your code running inside a Java EE bean
    - your code inheriting from a COM+ (Enterprise Services) component
    - etc.

    Not all of these are necessarily testability problems. The main testability problem usually comes when your code actually inherits from something inside the system. For example, one of the biggest problems with testing objects like Silverlight controls is the way they depend on the Silverlight runtime – they don't implement some Silverlight interface, they don't just call external static methods against the framework runtime that surrounds them – they actually inherit parts of the framework: they all inherit (in this case) from the Silverlight DependencyObject.

    Wrapping it up?
    An inheritance dependency is uniquely challenging to bring under test, because "classic" methods such as wrapping the object under test with a framework wrapper will not work, and the only way to do it manually is to create parallel testable objects that get delegated all the possible actions from the dependencies. In Silverlight's case, that would mean creating your own custom logic class that would be called directly from controls that inherit from Silverlight, and would be tested independently of these controls. The pro side is that you get the benefit of understanding the "contract" and the "roles" your system plays against your logic, but unfortunately, more often than not, it can be very tedious to create, and may sometimes feel unnecessary or like code duplication.

    About perimeters
    A perimeter is that invisible line that you draw around your pieces of logic during a test, that separates the code under test from any dependencies that it uses. Most of the time, a test perimeter around an object will be the list of seams (dependencies that can be replaced, such as interfaces, virtual methods etc.) that are actually replaced for that test or for all the tests.

    Role based perimeters
    In the case of creating a wrapper around an object, one really creates a "role based" perimeter around the logic that is being tested – that wrapper takes on roles that are required by the code under test, and also communicates with the host system to implement those roles and provide any inputs to the logic under test. In the first image below we have the code we want to test represented as a star; no perimeter is drawn yet (we haven't wrapped it up in anything yet). The next image shows what happens when you wrap your logic with a role based wrapper – you get a role based perimeter anywhere your code interacts with the system. There's another way to bring that code under test – using isolation frameworks like Typemock, Rhino Mocks and Moq (but if your code inherits from the system, Typemock might be the only way to isolate the code from the system interaction).

    Ad-Hoc Isolation perimeters
    The image below shows what I call an ad-hoc perimeter, which might be vastly different between different tests. This perimeter's surface is much smaller, because for that specific test, that is all the "change" that is required to the host system behavior. The third way of isolating the code from the host system is the main "meat" of this post:

    Subterranean perimeters
    Subterranean perimeters are deep-rooted perimeters – "always on" seams that can lie very deep in the heart of the host system, where they are fully invisible even to the test itself, not just to the code under test. Because they lie deep inside a system you can't control, the only way I've found to control them is with runtime (not compile time) interception of method calls on the system. One way to get such abilities is by using aspect-oriented frameworks – for example, in SilverUnit, I've used the CThru AOP framework, based on Typemock hooks and CLR profilers, to intercept such system-level method calls and effectively turn them into seams that lie deep down at the heart of the Silverlight runtime. The image below depicts an example of what such a perimeter could look like. As you can see, the actual seams can be very far away from the actual code under test, and as you'll discover, that's actually a very good thing. Here is only a partial list of examples of such deep-rooted seams:

    - disabling the constructor of a base class five levels below the code under test (this.base.base.base.base)
    - faking static methods of a type that's being called several levels down the stack: method x() calls y() calls z() calls SomeType.StaticMethod()
    - replacing an async mechanism with a synchronous one (replacing all timers with your own timer behavior that always ticks immediately upon calls to "start()" on the same caller thread, for example)
    - replacing event mechanisms with your own event mechanism (to allow "firing" system events)
    - changing the way the system saves information with your own saving behavior (in SilverUnit, I replaced all Dependency Property sets and gets with calls to an in-memory value store instead of using the one built into Silverlight, which threw exceptions without a browser)

    Several questions could jump in:

    - How do you know what to fake? (How do you discover the perimeter?)
    - How do you fake it?
    - Wouldn't this be problematic – to fake something you don't own? It might change in the future.

    How do you discover the perimeter to fake?
    To discover a perimeter, all you have to do is start with a wishful invocation. A wishful invocation is the act of trying to invoke a method (or even just create an instance) of an object using "regular" test code. You invoke the thing that you'd like to do in a real unit test, to see what happens:

    - Can I even create an instance of this object without getting an exception?
    - Can I invoke this method on that instance without getting an exception?
    - Can I verify that some call into the system happened?

    You make the invocation, get an exception (because there is a dependency) and look at the stack trace. Choose a location in the stack trace and disable it. Then try the invocation again. If you don't get an exception, the perimeter is good for that invocation, so you can move on to trying out other methods on that object. In a future post I will show the process using CThru, and how you end up with something close to a domain-specific test framework after you're done creating the perimeter you need.
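
    The wishful-invocation idea itself is language-agnostic. As a minimal illustration only, here is a sketch in Java of what such a probing test might look like; HostBoundControl is a hypothetical class standing in for anything that inherits from a host framework you don't control:

        import org.junit.Test;

        public class WishfulInvocationTest {

            @Test
            public void probeHostBoundControl() {
                try {
                    // Can we even construct it outside the host runtime?
                    HostBoundControl control = new HostBoundControl();

                    // Can we call a method on it without the host being present?
                    control.refresh();
                } catch (Throwable t) {
                    // The stack trace points at the host-system call that needs to
                    // become a seam; pick a frame, disable or intercept it, and try
                    // the invocation again.
                    t.printStackTrace();
                }
            }
        }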

    Read the article

  • “It’s only test code…”

    - by Chris George
    “Let me hack this in, it’s only test code”, “Don’t worry about getting it reviewed, it’s only test code”, “It doesn’t have to be elegant or efficient, it’s only test code”… do these phrases sound familiar? Chances are, if you’ve been working with test automation, at one point or other you will have heard these phrases; you have probably even used them yourself! What is certain is that code written under this “it’s only test code” mantra will come back and bite you in the arse! I’ve recently encountered a case where a test was giving a false positive, and therefore hiding a real product bug, because that test code was very badly written. It was very difficult to understand what the test was actually trying to achieve, let alone how it was doing it, and this complexity masked a simple logic error. These issues are real and they do happen. Let’s take a step back from this and look at what we are trying to do. We are writing test code that tests product code, and we do this to create a suite of tests that will help protect our software against regressions. This test code is making sure that the product behaves as it should by employing some sort of expected result verification. The simple cases of these are generally not a problem. However, automation allows us to explore more complex scenarios in many more permutations. As this complexity increases, so does the complexity of the test code. It is at this point that code which has not been architected properly will cause problems.

    Keep your friends close…
    So, how do we make sure we are doing it right? The development teams I have worked on have always had Test Engineers working very closely with their Software Engineers. This is something that I have always tried to take full advantage of. They are coding experts! So run your ideas past them, ask for advice on how to structure your code, and ask them to help you design your data structures. This may require a shift in your team’s viewpoint, as, contrary to this section title and folklore, Software Engineers are not actually the mortal enemy of Test Engineers. As time progresses, and test automation becomes more and more ingrained in what we do, the two roles are converging more than ever. Over the 16 years I have spent as a Test Engineer, I have seen the grey area between the two roles grow significantly larger. This serves to strengthen the relationship and common bond between the two roles, which helps to make test code activities so much easier!

    Pair for the win
    Possibly the best thing you could do to write good test code is to pair program on the task. This will serve a few purposes:

    - you will get the benefit of the Software Engineer’s knowledge and experience
    - the Software Engineer will gain knowledge of the testing process. Sharing the love is a wonderful thing!
    - two pairs of eyes are always better than one… and so are two brains. Between the two of you, I guarantee you will derive more useful test cases than if it was just one of you.

    Code reviews
    Another policy which certainly pays dividends is the practice of code reviews. Having one of your peers review your code before you commit it serves two purposes. Firstly, it forces you to explain your code, and just the act of doing this will often pick up errors in your code. Secondly, it gets yet another pair of eyes on your code! I cannot stress enough how important code reviews are. The benefits they offer apply as much to product code as to test code. In short, Software and Test Engineers should all be doing them! It can be extended even further by getting test code reviewed by a Software Engineer and a Test Engineer, and likewise product code. This serves to keep both functions in the loop with changes going on within your code base.

    Learn from your devs
    I briefly touched on this earlier, but I’d like to go into more detail here. Pairing with your Software Engineers when writing your test code is such an amazing opportunity to improve your coding skills. As I sit here writing this article waiting to be called into court for jury service, it reminds me that it takes a lot of patience to be a Test Engineer, almost as much as it takes to be a juror! However tempting it is to go rushing in and start writing your automated tests, resist that urge. Discuss what you want to achieve, then talk through the approach you’re going to take. Then code it up together. I find it really enlightening to ask questions like ‘is there a better way to do this?’ or ‘is this how you would code it?’ The latter question, especially, is where I learn the most. I’ve found that most Software Engineers will be reluctant to show you the ‘right way’ to code something when writing tests because they perceive the ‘right way’ to be too complicated for the Test Engineer (e.g. not mentioning LINQ and instead doing something verbose). So by asking how THEY would code it, it unleashes their true dev-ness and advanced code usually ensues! I would like to point out, however, that you don’t have to accept their method as the final answer. On numerous occasions I have opted for the more simple/verbose solution because I found the code written by the Software Engineer too advanced and therefore I would find it unreadable when I return to the code in a month’s time! Always keep the target audience in mind when writing clever code, and in my case that is mostly Test Engineers.

    Read the article

  • Apache keeps resetting while testing on localhost...

    - by Scott
    Hello everyone. I'm getting errors while testing web pages on localhost. I'm running Windows 7 64-bit. I'm not using Wamp or Xampp. This is what the error.log tells me (I've highlighted the errors in question):

        [Sat Mar 06 05:10:55 2010] [notice] Apache/2.2.14 (Win32) PHP/5.2.13 configured -- resuming normal operations
        [Sat Mar 06 05:10:55 2010] [notice] Server built: Sep 28 2009 22:41:08
        [Sat Mar 06 05:10:55 2010] [notice] Parent: Created child process 6588
        httpd.exe: Could not reliably determine the server's fully qualified domain name, using 192.168.2.2 for ServerName
        httpd.exe: Could not reliably determine the server's fully qualified domain name, using 192.168.2.2 for ServerName
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Child process is running
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Acquired the start mutex.
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Starting 1000 worker threads.
        [Sat Mar 06 05:10:55 2010] [notice] Child 6588: Starting thread to listen on port 80.

    Any input would be greatly appreciated. Thanks.

    Read the article

  • testing ssl cert for smtps => "secure connection could not be established with this website"

    - by cc young
    I'm testing the SSL cert on a server using a web service. HTTPS, IMAPS and POP3S all check out, but SMTPS yields the message "we advise you not to submit any confidential or personal data to this website because a secure connection could not be established with this website." Running Postfix, the TLS logging shows:

        connect from s097.networking4all.com[213.249.64.242]
        lost connection after UNKNOWN from s097.networking4all.com[213.249.64.242]
        disconnect from s097.networking4all.com[213.249.64.242]

    These work correctly:

        telnet mydomain.net 587
        openssl s_client -starttls smtp -crlf -connect mydomain.net:587

    But I cannot get email clients to log in over SSL on either 587 or 564 - I get the same "UNKNOWN" problem. Email over SMTP without SSL works fine. The test site is http://www.networking4all.com/en/support/tools/site+check/

    Read the article

  • Methodologies for performance-testing a WAN link

    - by Chopper3
    We have a pair of new diversely-routed 1Gbps Ethernet links between locations about 200 miles apart. The 'client' is a new reasonably-powerful machine (HP DL380 G6, dual E56xx Xeons, 48GB DDR3, R1 pair of 300GB 10krpm SAS disks, W2K8R2-x64) and the 'server' is a decent enough machine too (HP BL460c G6, dual E55xx Xeons, 72GB, R1 pair of 146GB 10krpm SAS disks, dual-port Emulex 4Gbps FC HBA linked to dual Cisco MDS9509s then onto a dedicated HP EVA 8400 with 128 x 450GB 15krpm FC disks, RHEL 5.3-x64). Using SFTP from the client we're only seeing about 40Kbps of throughput with large (2GB) files. We've performed server to 'other local server' tests and see around 500Mbps through the local switches (Cat 6509s); we're going to do the same on the client side, but that's a day or so away. What other testing methods would you use to prove to the link providers that the problem is theirs?

    Read the article

  • gtk2/mate apps are choppy on debian testing [migrated]

    - by b0ti
    I have recently upgraded to a Core i7 system and now all the apps from MATE (GTK2 based) are very slow. Basically, when I switch to a workspace that has a couple of mate-terminals open, they are redrawn as if they were being sent over the network. GTK3 apps and Firefox work properly and are refreshed instantly. I'm running Debian testing. I have tried reinstalling xorg and related packages, and everything seems to work fine except for this. Here is my xorg.log. Any hints?

    Read the article

  • Install package from debian stable unavailable in testing/unstable repositories

    - by overprescribed
    I'm currently running Debian testing and would like to install a package only available in the stable repositories. (I'm surprised I haven't come across this issue before.) I could download the .deb directly and use dpkg to manually install it, but installing packages from one release into another is usually frowned upon. What's the best course of action? EDIT: Zoredache is right, I didn't realize this package has been removed from future versions of Debian as it no longer has a maintainer. It is, of course, as also pointed out by Zoredache, important to find out why a particular package has been removed before attempting to install it. I've altered the title slightly to reflect the actual issue.

    Read the article

  • Missing kernel on debian-testing-amd64-DVD-1

    - by Kyrol
    I want to install the Debian testing Linux version, but at a certain point of the installation an error occurs: "Impossible to install kernel: a compatible kernel version is missing"… or something like that. I see that the ISO image of the DVD is less than 4GB, but it is impossible that there is no compatible kernel version for the system. I have an Asus X53Sc with an Intel Core i7-2630QM and a GeForce GT 520MX.

    Read the article

  • ipv6 ssh tunnel service for testing?

    - by Geuis
    I need to do some testing on a service that I run to make sure that it can handle ipv6 addresses. Basically, I need to connect to it from an ipv6 address. I've created a tunnel via tunnelbroker.net, but I'm finding the steps required to get a tunnel configured on my machine and router to be a lot of trouble. Given that I'm not a networking specialist and that I haven't had to dig into routing configuration in years, I'd like to know if there's an existing service that I can just ssh into and use it as my ipv6 endpoint. Simply being able to curl or wget from such an endpoint to my service would be more than enough to test what I need. Thanks!

    Read the article

  • Ogre3d particle effect causing error in iPhone

    - by anu
    1) First I added the Particle folder from the OgreSDK (contains Smoke.particle).
    2) Added Smoke.material, smoke.png and smokecolors.png.
    3) After this I added Plugin=Plugin_ParticleFX to plugins.cfg. Here is my plugins.cfg:

        # Defines plugins to load

        # Define plugin folder
        PluginFolder=./

        # Define plugins
        Plugin=RenderSystem_GL
        Plugin=Plugin_ParticleFX

    4) I added the particle path to resources.cfg (adding the particle file here leads to the crash):

        # Resource locations to be added to the 'bootstrap' path
        # This also contains the minimum you need to use the Ogre example framework
        [Bootstrap]
        Zip=media/packs/SdkTrays.zip

        # Resource locations to be added to the default path
        [General]
        FileSystem=media/models
        FileSystem=media/particle
        FileSystem=media/materials/scripts
        FileSystem=media/materials/textures
        FileSystem=media/RTShaderLib
        FileSystem=media/RTShaderLib/materials
        Zip=media/packs/cubemap.zip
        Zip=media/packs/cubemapsJS.zip
        Zip=media/packs/skybox.zip

    6) Finally, with all the settings done, my code is here:

        mPivotNode = OgreFramework::getSingletonPtr()->m_pSceneMgr->getRootSceneNode()->createChildSceneNode(); // create a pivot node

        // create a child node and attach an ogre head and some smoke to it
        Ogre::SceneNode* headNode = mPivotNode->createChildSceneNode(Ogre::Vector3(100, 0, 0));
        headNode->attachObject(OgreFramework::getSingletonPtr()->m_pSceneMgr->createEntity("Head", "ogrehead.mesh"));
        headNode->attachObject(OgreFramework::getSingletonPtr()->m_pSceneMgr->createParticleSystem("Smoke", "Examples/Smoke"));

    7) When I run this, I get the error below:

        An exception has occurred: OGRE EXCEPTION(2:InvalidParametersException): Cannot find requested emitter type. in ParticleSystemManager::_createEmitter at /Users/davidrogers/Documents/Ogre/ogre-v1-7/OgreMain/src/OgreParticleSystemManager.cpp (line 353)

    8) It crashes at:

        - (void)renderOneFrame:(id)sender
        {
            if(!OgreFramework::getSingletonPtr()->isOgreToBeShutDown() &&
               Ogre::Root::getSingletonPtr() && Ogre::Root::getSingleton().isInitialised())
            {
                if(OgreFramework::getSingletonPtr()->m_pRenderWnd->isActive())
                {
                    mStartTime = OgreFramework::getSingletonPtr()->m_pTimer->getMillisecondsCPU(); // (getting crash here)

    Does anyone know what could be causing this?

    Read the article

  • What are various methods for discovering test cases

    - by NativeByte
    All, I am a developer, but I would like to know more about testing processes and methods. I believe this helps me write more solid code, as it improves the cases I can cover with my unit tests before delivering the product to the test team. I have recently started looking at Test Driven Development and the exploratory testing approach to software projects. Now it's easier for me to find test cases for the code that I have written. But I am curious to know how to discover test cases when I am not the developer of the functionality under test. Say, for example, we have a basic user registration form like the ones we see on various websites. Assuming the person testing it is not the developer of the form, how should one go about testing the input fields on the form? What would be your strategy? How would you discover test cases? I believe this kind of testing benefits from an exploratory testing approach, though I may be wrong here. I would appreciate your views on this. Thanks, Byte
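
    By way of illustration only, once cases for a single field have been discovered (a valid address, empty input, whitespace, missing '@', missing domain), they can be captured as a table-driven test. Below is a minimal JUnit sketch in Java; the EmailValidator class and its isValid method are hypothetical stand-ins for whatever validation the form actually performs:

        import static org.junit.Assert.assertEquals;

        import java.util.Arrays;
        import java.util.Collection;

        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Parameterized;
        import org.junit.runners.Parameterized.Parameters;

        @RunWith(Parameterized.class)
        public class EmailFieldTest {

            @Parameters
            public static Collection<Object[]> cases() {
                return Arrays.asList(new Object[][] {
                    // input value               expected to be accepted?
                    { "user@example.com",        true  },   // happy path
                    { "",                        false },   // empty field
                    { "   ",                     false },   // whitespace only
                    { "no-at-sign.example.com",  false },   // missing @
                    { "user@",                   false },   // missing domain
                });
            }

            private final String input;
            private final boolean expected;

            public EmailFieldTest(String input, boolean expected) {
                this.input = input;
                this.expected = expected;
            }

            @Test
            public void emailFieldAcceptsOnlyValidAddresses() {
                // EmailValidator is a hypothetical validator behind the form field.
                assertEquals(expected, EmailValidator.isValid(input));
            }
        }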

    Read the article

  • Test SSL Certificate for MQ SSL Testing

    - by user171523
    I am in the process of testing MQ calls over SSL. I would like to know where I can get some demo SSL certificates that I can use for testing. I would also like to know if there is any code sample which I can use to make this kind of SSL connection. The example I am looking for is in C#.

    Read the article

  • Testing in-app purchases on iPhone?

    - by hemant
    While testing my application I bought a product it offers on my phone through the test account... Now I have deleted the application and reinstalled it, but the application still shows that the product is already bought... When we buy through in-app purchase, does the product I bought, or its ID, get stored on the iPhone filesystem? I am just testing the application, so I don't know much about what could be wrong with it.

    Read the article

  • Stop CassiniDev after testing is finished

    - by senzacionale
    I run CassiniDev from cmd:

        C:\CruiseControl.NET-1.5.0.6237\cassinidev.3.5.0.5.src-repack\CassiniDev\bin\Debug\CassiniDev.exe /a:D:_CCNET\proj /pm:Specific /p:3811

    and then start debugging and testing. How can I stop CassiniDev from cmd after I have finished testing? I tried CassiniDev_console, but the console version is not working, so I am using CassiniDev from the command line.

    Read the article

  • Good hosted sites to manage quality testing?

    - by Chirag Patel
    Basically, I would like to manage quality testing with an issue management system focused on quality testing. I can't use typical issue management systems such as Lighthouse or FogBugz, because each test is written as a ticket and 20 to 30 tickets need to be duplicated (with no history) every time we start a quality cycle. Do any (hosted) sites exist for this? We're currently using a Google Spreadsheet so it can be collaboratively edited.

    Read the article

  • Suggestions for open source testing tool for cloud computing

    - by vikraman
    Hi, I want to know if there is any open source testing tool for cloud computing. We have built a cloud framework with Xen, Eucalyptus, Hadoop and HBase as different layers. I am not looking at testing each of these tools separately, but I want to test them from the perspective of fitting into a cloud environment (for example, the scalability of the Xen hypervisor to handle multiple VMs). It would be great if you could suggest some (open source) tool for the above.

    Read the article

  • unit testing of installer

    - by Alien01
    What is the best process where code is checked in by developers, the installer is created by a build engineer, and then released to QA to test the installer? Should the installer be released to QA without unit testing by Dev? In that case, if Dev makes some changes, they have to wait for QA to report bugs. Or should the installer first be given to Dev for unit testing, and only once they sign off should it be released to QA?

    Read the article

  • How to do Automated UI testing for Flash

    - by Ran
    I have an ActionScript 2 application that I'd like to write automated UI testing for. For example, I'd like to simulate a mouse click on a button and validate that a movie clip is displayed at the right position and in the right color... basically, UI testing. What are the best tools available, or what is the desired approach? In JavaScript there is the Selenium framework, which does a nice job. Is there any similar tool for Flash?

    Read the article

  • How to clear Windows disk read cache?

    - by Sebastiaan Megens
    For performance testing I need to clear Windows' disk read cache. I tried googling but I couldn't find anything other than rebooting or other manual stuff. Before I give in and do that, I'd like to know if anyone knows of a way to clear Windows disk read cache. I'm testing on Windows 7, but I'm also interested in Windows XP solutions.

    Read the article

  • Unit testing a controller in ASP.NET MVC 2 with RedirectToAction

    - by Rob Walker
    I have a controller that implements a simple Add operation on an entity and redirects to the Details page:

        [HttpPost]
        public ActionResult Add(Thing thing)
        {
            // ... do validation, db stuff ...
            return this.RedirectToAction(c => c.Details(thing.Id));
        }

    This works great (using the RedirectToAction from the MvcContrib assembly). When I'm unit testing this method I want to access the ViewData that is returned from the Details action (so I can get the newly inserted thing's primary key and prove it is now in the database). The test has:

        var result = controller.Add(thing);

    But result here is of type System.Web.Mvc.RedirectToRouteResult (which is a System.Web.Mvc.ActionResult). It hasn't yet executed the Details method. I've tried calling ExecuteResult on the returned object, passing in a mocked-up ControllerContext, but the framework wasn't happy with the lack of detail in the mocked object. I could try filling in the details, etc., but then my test code is way longer than the code I'm testing and I feel I need unit tests for the unit tests! Am I missing something in the testing philosophy? How do I test this action when I can't get at its returned state?

    Read the article
