Search Results

Search found 65503 results on 2621 pages for 'real application testing'.

  • How to test an ASP.NET frontend?

    - by Fabiano
    Hi. Does someone know a good way to automate GUI testing of an ASP.NET frontend, instead of always running the pages and testing the inputs and outputs of each control by hand? Are there any references or frameworks that support these tests? Thanks

  • C# unit test: writing to Settings in a unit test does not save values in user.config

    - by HorstWalter
    I am running a C# unit test (VS 2008). Within the test I write to the settings, which should result in the data being saved to user.config:

        Settings.Default.X = "History"; // X is a string
        Settings.Default.Save();

    But this simply does not create the file (I have cross-checked under "C:\Documents and Settings\HW\Local Settings\Application Data"). If I build the same code as a console application, the data is persisted without a problem (same code). Is there something special I need to consider when doing this in a unit test?

  • How to unit test Django middleware?

    - by luc
    I've implemented a Django middleware for getting pages from the database (something similar to the flatpages subframework). Unfortunately it seems that it is not possible to test it with the Django testing framework. Any suggestions? Thanks in advance. Update: there may be a mistake in my test, but I can't get hold of an object that should be returned by the middleware. I'll investigate more. Has anybody unit-tested middleware code?
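
    One pattern that tends to work is to bypass the test client and exercise the middleware class directly against a request built with Django's RequestFactory (available in recent Django versions). A minimal sketch, assuming an old-style middleware class named PageMiddleware with a process_request hook; the names are placeholders, not taken from the question:

        import unittest

        from django.test import RequestFactory

        # Placeholder import: substitute the real middleware module here.
        from myapp.middleware import PageMiddleware

        class PageMiddlewareTest(unittest.TestCase):
            def setUp(self):
                self.factory = RequestFactory()
                self.middleware = PageMiddleware()

            def test_known_url_returns_page(self):
                # Build a bare request without going through the URL resolver
                # or the rest of the middleware stack.
                request = self.factory.get('/about/')
                response = self.middleware.process_request(request)
                # If the page exists in the database, the middleware should
                # short-circuit with a response of its own.
                self.assertIsNotNone(response)
                self.assertEqual(response.status_code, 200)

    Because the middleware is instantiated by hand, the test does not depend on the order of the installed middleware stack.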

  • Python - doctest vs. unittest

    - by Sean
    I'm trying to get started with unit testing in Python, and I was wondering if someone could inform me of the advantages and disadvantages of doctest and unittest. Under what conditions would you use each?
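
    To make the comparison concrete, here is the same trivial function checked both ways; this is just an illustrative sketch, not an endorsement of either style:

        import doctest
        import unittest

        def add(a, b):
            """Return the sum of a and b.

            >>> add(2, 3)
            5
            >>> add(-1, 1)
            0
            """
            return a + b

        class AddTests(unittest.TestCase):
            # The unittest version of the same two checks.
            def test_add(self):
                self.assertEqual(add(2, 3), 5)
                self.assertEqual(add(-1, 1), 0)

        if __name__ == '__main__':
            doctest.testmod()  # runs the examples embedded in the docstring
            unittest.main()    # runs the TestCase classes, then exits

    The doctest doubles as documentation and is cheap to write; the unittest version scales better once you need fixtures, setup/teardown, or checks too noisy for a docstring.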

  • How to: mirror a staging server from a production server

    - by Zombies
    We want to mirror our current production app server (Oracle Application Server) onto our staging server. As it stands right now, various things are out of sync, and what works in testing/QA can easily fail in production because of inconsistencies in settings, patches, and so on. I was thinking the best approach would be to clone the entire disk daily and push it onto the staging server. Would this be the best method?

  • Silverlight isolated storage: what identifies an "application"?

    - by Joe White
    The docs for Silverlight's IsolatedStorageFile.GetUserStoreForApplication just say that the isolated storage is specific to the "application", and that each different application will have its own storage independent of all other "applications" (but with one quota for the entire domain). That's great, but I haven't found anything yet that explains just what "application" is supposed to mean (either in the Silverlight docs or the regular .NET Framework docs). What information does Silverlight, in particular, use to decide that "this is application A" and "this is application B"? Does it just go off the URI to the .xap file, or what?

  • Is there a way to unit test an async method?

    - by Jiho Han
    I am using xUnit and NMock on the .NET platform. I am testing a presentation model in which one method is asynchronous: the method creates an async task and executes it, so the method returns immediately and the state I need to check isn't ready yet. I can set a flag upon completion without modifying the SUT, but that would mean polling the flag in a while loop, perhaps with a timeout. What are my options?

  • What's the point of some of shoulda's macros?

    - by ryeguy
    I think shoulda is really neat, but what I don't understand is why some of its macros exist, such as:

        should_validate_uniqueness_of :title
        should_validate_presence_of :body, :message => /wtf/
        should_validate_presence_of :title
        should_validate_numericality_of :user_id

    I'm relatively new to testing, but what purpose do these serve? They're almost an exact mirror of the validations that happen in the model. For example, what exactly do you accomplish by going into your model and writing validates_uniqueness_of :title, and then writing a test that says should_validate_uniqueness_of :title?

  • How can a test script inform R CMD check that it should emit a custom message?

    - by mariotomo
    I'm writing an R package (delftfews) here at the office. We are using svUnit for unit testing. Our process for describing new functionality is: we define new unit tests, initially marked as DEACTIVATED; then, one block of tests at a time, we activate them and implement the function described by the tests. Almost all the time we have a small number of DEACTIVATED tests, relating to functions that might be dropped or have yet to be implemented.

    My problem/question is: can I alter doSvUnit.R so that R CMD check pkg emits a NOTE (i.e. a custom message "NOTE" instead of "OK") in case there are DEACTIVATED tests? As of now, we see only that the active tests don't give errors:

        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running ‘doSvUnit.R’
         OK
        * checking PDF version of manual ... OK

    which is all right if all tests succeed, but less all right if there are skipped tests and definitely wrong if there are failing tests. In those cases, I'd actually like to see a NOTE or a WARNING like the following:

        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running ‘doSvUnit.R’
         NOTE 6 test(s) were skipped.
         WARNING 1 test(s) are failing.
        * checking PDF version of manual ... OK

    As of now, we have to open doSvUnit.Rout to check the real test results. I contacted two of the maintainers at R-Forge and CRAN, and they pointed me to the sources of R, in particular the testing.R script. If I understand it correctly, to answer this question we would need to patch the tools package: scripts in the tests directory are called using a system call, output (stdout and stderr) goes to one single file, and there are only two possible outcomes: OK or not OK. So I opened a change request on R, proposing something like bit-coding the return status: bit 0 for ERROR (as it is now), bit 1 for WARNING, bit 2 for NOTE. With my modification, it would be easy to produce this output:

        * checking for unstated dependencies in tests ... OK
        * checking tests ...
          Running ‘doSvUnit.R’
         NOTE - please check doSvUnit.Rout.
         WARNING - please check doSvUnit.Rout.
        * checking PDF version of manual ... OK

    Brian Ripley replied "There are however several packages with properly written unit tests that do signal as required. Please do take this discussion elsewhere: R-bugs is not the place to ask questions." and closed the change request. Does anybody have hints?

  • How do you write your QTP Tests?

    - by Josh Harris
    I am experimenting with QTP for some webapp UI automation testing, and I was wondering how people usually write their QTP tests. Do you use the object map, descriptive programming, a combination, or some other way altogether? Any small code example would be appreciated. Thank you

  • How do you test the usability of your user interfaces

    - by Martin
    How do you test the usability of the user interfaces of your applications - be they web or desktop? Do you just throw it all together and then tweak it based on user experience once the application is live? Or do you pass it to a specific usability team for testing prior to release? We are a small software house, but I am interested in the best practices of how to measure usability. Any help appreciated.

  • Catching typos in scripting languages

    - by Geo
    If your scripting language of choice doesn't have something like Perl's strict mode, how are you catching typos? Are you unit testing everything? Every constructor, every method? Is this the only way to go about it?
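
    In Python, for instance, most name and attribute typos surface as a NameError or AttributeError the first time the line executes, so a shallow smoke test that touches each code path once flushes them out; static checkers such as pyflakes catch many of the same mistakes without running anything. A small made-up illustration:

        import unittest

        class Invoice(object):
            def __init__(self, total):
                self.total = total

            def discounted(self, rate):
                # A typo such as 'self.totl' here would only raise an
                # AttributeError once this line actually runs.
                return self.total * (1 - rate)

        class InvoiceSmokeTest(unittest.TestCase):
            def test_discounted(self):
                # One cheap assertion is enough to execute the method and
                # expose any typo inside it.
                self.assertAlmostEqual(Invoice(100.0).discounted(0.1), 90.0)

        if __name__ == '__main__':
            unittest.main()

    The goal is line coverage rather than exhaustive assertions: you don't need a deep test for every constructor and method, just something that executes each of them once.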

  • How to hide a Mono application from the OS X Dock

    - by Chris
    I have a Mono application that should not show on the Dock, but that will occasionally show a window. I want neither a menu bar nor a Dock icon to appear for this application. I have my application wrapped in an app bundle, and my Info.plist file has LSUIElement set to "1", but this does not seem to be hiding my application from the Dock. I have also tried calling osascript via Process.Start with the following:

        osascript -e 'tell application "System Events" to set visible of process "myapp" to false'

    This returns a System Events error code: -10006. Thus far, I've had no luck finding out what that means. I've also tried all the standard Hide() and Visibility = false stuff inside Mono. Has anyone found a workaround for this, or have an idea of a direction I can look in? For the most part, working in Mono has been straightforward .NET coding, but this has me stumped.

  • Alternatives to userfly.com

    - by dfa
    For my master's thesis I need to study how my users interact with my webapp. Are there alternatives to userfly.com? I just want to know how I can do some usability testing without much hassle. Requirements:

        must work under https
        cheap
        unobtrusive, if possible

  • Visual objects editor

    - by Vladimir
    I am looking for a tool that makes it possible to edit parts of Java code visually, using something like an inspector, and then place the code back. For example:

        Person p = new Person();
        p.setName("Bill Libb");
        p.setAge(25);

    This code would be generated from a visual inspector and copied into the Java IDE. This would help to quickly create sample objects for testing.

  • How to do test-first development with MVVM

    - by Thorsten79
    How do you build WPF MVVM applications and user controls test-first? I find myself writing ungodly amounts of XAML with DataTemplates before I even get to unit-testing my viewmodels. Should I develop the whole viewmodel system first before even writing XAML for it? Any help appreciated.

  • Using JUnit as an acceptance test framework

    - by Chris Knight
    OK, so I work for a company which has openly adopted agile practices for development in recent years. Our unit tests and code quality are improving. One area we are still working on is finding what works best for us in the automated acceptance test arena. We want to take our well-formed user stories and use these to drive the code in a test-driven manner. This will also give us acceptance-level tests for each user story, which we can then automate.

    To date, we've tried Fit, Fitnesse and Selenium. Each has its advantages, but we've also had real issues with them. With Fit and Fitnesse, we can't help but feel they overcomplicate things, and we've had many technical issues using them. The business hasn't fully bought into these tools, isn't particularly keen on maintaining the scripts all the time, and isn't a big fan of the table style. Selenium is really good, but slow and reliant on real-time data and resources.

    One approach we are now considering is using the JUnit framework to provide similar functionality. Rather than testing just a small unit of work with JUnit, why not use it to write a test covering an acceptance-level swath of the application? I.e. take a new story ("As a user I would like to see basic details of my policy...") and write a JUnit test which starts executing application code at the point of entry for the policy details link, but covers all code and logic down to the stubbed data access layer and back to the point of forwarding to the next page in the application, asserting on what data the user should see on that page.

    This seems to me to have the following advantages:

        Simplicity (no additional frameworks required)
        Zero effort to integrate with our continuous integration build server (since it already handles our JUnit tests)
        The full skillset is already present in the team (it's just a JUnit test, after all)

    And the downsides:

        Less customer involvement (though customers are heavily involved in writing the user stories from which the acceptance tests are written)
        Perhaps it is harder to understand (or make understood) the user story and acceptance criteria in a JUnit class than in a freetext specification a la Fit or Fitnesse

    So, my question really is: have you ever tried this method? Have you considered it? What are your thoughts? What do you like and dislike about this approach? Finally, please only mention alternative frameworks if you can say why you like or dislike them more than this approach.

  • How do I display the validation which failed in my Rails unit test?

    - by JD
    Summary: failed unit tests tell me which assert (file:line) failed, but not which validation resulted in the failure. More info: I have 11 validations in one of my models. Unit testing is great, whether I run `rake test:units --trace' or `ruby -Itest test/unit/mymodel_test.rb'. However, although it tells me exactly which assert() failed, I am not told which validation failed. I must be missing something obvious, because I can't phrase this question well enough for Google to give me an answer. Thanks :)

  • Monitor what users are doing in a .NET application and trigger application functionality changes

    - by Jamie Clayton
    I need some suggestions for how to implement a very basic mechanism that logs what multiple users are doing in an application. When another feature is running, I then need to change the application to restrict functionality.

    Use case example: users can normally edit unpaid records. If the application then runs a (long) pay-run process, I need to change parts of the application to restrict functionality for a short period of time (e.g. make existing unpaid records read-only). Any suggestions on how I can do this in a .NET application?

  • Have you been in cases where TDD increased development time?

    - by BillyONeal
    Hello everyone :) I was reading http://stackoverflow.com/questions/2512504/tdd-how-to-start-really-thinking-tdd and I noticed many of the answers indicate that tests plus application should take less time than just writing the application. In my experience, this is not true. My problem is that some 90% of the code I write makes a TON of operating system calls, and the time spent mocking these out takes much longer than writing the code in the first place; sometimes 4 or 5 times as long to write the test as to write the actual code. I'm curious whether other developers find themselves in this kind of scenario.

  • WatiN, VBScript

    - by little_devil
    I am using WatiN for my web tests. One of my web pages has a VBScript function which opens a dialog box, and I am unable to access it using WatiN. I tried using the WatiN Test Recorder; it was unsuccessful. I also tried the approach described at http://blogs.dovetailsoftware.com/blogs/kmiller/archive/2008/07/16/scenario-testing-with-watin.aspx, unsuccessful again.

  • When should automation begin?

    - by onesith
    I want to start implementing automation; I just don't know if this would be a good use of resources. The application is growing at an exponential rate, but I don't know at what point automation begins to benefit testing. Can you give me reasons why you would automate even though you have manual testers?

  • py.test import context problems (causes Django unit test failure)

    - by dhill
    I made the following test:

        # main.py
        import imported
        print imported.f.__module__

        # imported.py
        def f():
            pass

        # test_imported.py (py.test test case)
        import imported

        def test_imported():
            result = imported.f.__module__
            assert result == 'imported'

    Running python main.py gives me imported, but running py.test gives me an error: the result value is moduletest.imported (moduletest is the name of the directory I keep the test in; it doesn't contain __init__.py, and moduletest is the only directory containing *.py files in ~/tmp). How can I fix the result value?

    The long story: I'm getting strange errors while testing a Django application. A call to reverse() (from django.urlresolvers) with the function object foo as its argument crashes in tests with NoReverseMatch: Reverse for 'site.app.views.foo'. The same call works inside the application. I checked, and the name is converted to 'app.views.foo' (without the site prefix). I first suspected my customised test setup for Django, but then I made the above test.
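
    One thing worth trying (an assumption about the cause, not a confirmed diagnosis): py.test can end up importing the module under a package-qualified name when the root directory it picks differs from the directory python main.py runs in. Pinning the test directory onto sys.path from a conftest.py often makes __module__ consistent:

        # conftest.py, placed in the same directory as test_imported.py.
        # Make sure the directory itself is importable first, so 'imported'
        # always resolves to the same top-level module name.
        import os
        import sys

        sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

    Running py.test from inside the directory, rather than from ~/tmp above it, is the other cheap thing to try, since it keeps the import root identical to the one main.py uses.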
