Search Results

Search found 10681 results on 428 pages for 'usability testing'.

Page 23 of 428

  • Unit testing HTTP handlers?

    - by MockedMan.Object
    My current project, based on ASP.NET, makes considerable use of HTTP handlers to process various requests. Is there any way I can test the functionality of each handler with unit test cases? We are using NUnit and the Moq framework for unit testing.

  • Interesting links week #24 and #25

    - by erwin21
    Below is a list of interesting links that I found this week:
    Interaction: Design Usability and All About It
    Frontend: CSS Lint – CSS Cleaning Tool; 10 HTML Entity Crimes You Really Shouldn’t Commit
    Development: OWASP Top 10 for .NET developers part 7: Insecure Cryptographic Storage; C#/.NET Fundamentals: Choosing the Right Collection Class
    Mobile: Tips to Design a Website for Mobile
    Marketing: 30 (New) Google Ranking Factors You May Over- or Underestimate
    Other: 5 Little-Known Web Files That Can Enhance Your Website
    Interested in more interesting links? Follow me on Twitter: http://twitter.com/erwingriekspoor

  • Interesting links week #10

    - by erwin21
    Below is a list of interesting links that I found this week:
    Interaction: The Ultimate 20 Usability Tips for Your Website
    Frontend: Adobe Releases Flash-to-HTML5 Converter, Codenamed Wallaby
    Development: 10 Tips for Decreasing Web Page Load Times; Ten Things Every WordPress Plugin Developer Should Know; Progressive enhancement tutorial with ASP.NET MVC 3 and jQuery
    Marketing: 5 Tips for SEO & User-Friendly Copy
    Interested in more interesting links? Follow me on Twitter: http://twitter.com/erwingriekspoor

  • Cool Enhancements Everyone Can Enjoy

    - by Ruth
    With Release 17, we have a few visual and functional enhancements that make using CRM On Demand that much better for us all. I'll mention a few here, but to get the full outline of these upgrades, I recommend taking 10 minutes to view the Release 17 Usability Transfer of Information course.

    First and foremost, I find the ability to customize your theme (or skin) pretty cool, but I've said that before. Take a look at the Selecting Your Theme and the Themes - Create Your CRM Style blog articles for more information.

    My next favorite is the resizable user interface (UI). CRM On Demand will dynamically fit the device and screen resolution you're using, which includes the resizing of fields, field editors and pop-ups. If you have a wide screen like me, you should appreciate that one very much.

    To make it easier to see that resized UI, the detail pages got a little face lift. New horizontal lines and other subtle changes make those pages easier to read. Also, those things you need to know, like error messages and inline help, are highlighted with a little icon to show the message type.

    You may not think every change to the detail pages is particularly exciting, but I'm sure you'll enjoy the new Head Up Display, which saves you scrolling time by adding links to related information sections. I like that the Head Up Display travels with me as I move up and down the page...it's like a little friend that takes me where I want to go as fast as possible.

    You may also really like the fact that the copy record feature is now available for all record types from both detail pages and lists. Your company administrator can choose which fields get copied, so you can maximize your efficiency when creating new records.

    Lists also got a face lift. Alternating colors in rows make it easier to see your data. Also, the Favorite Lists icon is now on the list itself, so you can save your most useful lists with one click. If you've ever tried to create a new list with 10 columns or more, you'll be happy to hear that the maximum number of columns in a list has increased from 9 to 20. This is great news, but doesn't mean you should include the kitchen sink in your list...excess columns can slow list performance. So choose your columns wisely.

    Again, these are just a few of my favorite things. Let us know what you think about the new usability features. What are your favorite things?

  • How to handle encryption key conflicts when synchronizing data?

    - by Rafael
    Assume that there is data that gets synchronized between several devices. The data is protected with a symmetric encryption algorithm and a key. The key is stored on each device and encrypted with a password. When a user changes the password, only the key gets re-encrypted. Under normal circumstances, when there is a good network connection to other peers, the current key gets synchronized and all data on the new device gets encrypted with the same key. But how do you handle situations where a new device doesn’t have a network connection and, for example, creates its own new, but incompatible, key? How do you keep usability as high as possible under such circumstances?

    The application could detect that there is no network and refuse to start. That’s very bad usability in my opinion, because the application isn’t functional at all in this case; I don’t consider it a solution.

    The application could instead ignore the missing network connection and create a new key. But what should it do when it gains a network connection? There will be several incompatible keys, with some parts of the underlying data encrypted with one key and other parts with another. The situation gets worse if there are more keys than just two, and the application has to ask for a password every time it needs to decrypt an object that was encrypted with a different key. Trying to re-encrypt all data that was encrypted with another key under a single main key is messy and time consuming. And what should the main key even be in this case? The oldest key? The key with the most encrypted objects? What if a key got synchronized, but not all of the objects encrypted with it? How would the user know which password the application is asking for, and why re-encrypting the data may take a very long time? It’s very hard to describe encryption “issues” to users.

    So far I haven’t found an acceptable solution, nor any kind of generic strategy. Do you have hints about a concrete strategy, or books / papers that describe synchronization of symmetrically encrypted data with keys that can conflict?
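
    For reference, the "key encrypted with a password" part of this setup is usually implemented by wrapping a randomly generated data key with a password-derived key, so a password change only re-encrypts a few bytes and never touches the bulk data. A minimal Java sketch using standard JCA classes (the password, iteration count and key sizes are illustrative, and it does not address the conflict-resolution question itself):

        import javax.crypto.Cipher;
        import javax.crypto.KeyGenerator;
        import javax.crypto.SecretKey;
        import javax.crypto.SecretKeyFactory;
        import javax.crypto.spec.GCMParameterSpec;
        import javax.crypto.spec.PBEKeySpec;
        import javax.crypto.spec.SecretKeySpec;
        import java.security.SecureRandom;

        public class KeyWrapSketch {
            public static void main(String[] args) throws Exception {
                // The long-lived symmetric key that actually encrypts the synchronized data.
                KeyGenerator keyGen = KeyGenerator.getInstance("AES");
                keyGen.init(256);
                SecretKey dataKey = keyGen.generateKey();

                // Derive a key-encryption key from the user's password (PBKDF2).
                byte[] salt = new byte[16];
                new SecureRandom().nextBytes(salt);
                PBEKeySpec spec = new PBEKeySpec("user password".toCharArray(), salt, 100_000, 256);
                byte[] derived = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                        .generateSecret(spec).getEncoded();
                SecretKey passwordKey = new SecretKeySpec(derived, "AES");

                // Encrypt ("wrap") only the data key with the password-derived key.
                // A password change re-runs just this step; the encrypted data stays untouched.
                byte[] iv = new byte[12];
                new SecureRandom().nextBytes(iv);
                Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
                cipher.init(Cipher.ENCRYPT_MODE, passwordKey, new GCMParameterSpec(128, iv));
                byte[] wrappedDataKey = cipher.doFinal(dataKey.getEncoded());

                System.out.println("Wrapped data key: " + wrappedDataKey.length + " bytes");
            }
        }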

  • Best current framework for unit testing EJB3 / JPA

    - by kennygrimm
    We're starting a new project using EJB 3 / JPA, mainly stateless session beans and batch jobs. I've used JUnit in the past on standard Java webapps and it worked pretty well. With EJB 2, unit testing was a pain and required a running container such as JBoss to make calls into. Now that we're going to be working with EJB 3 / JPA, I'd like to know what companies are using to write and run these tests. Are JUnit and JMock still considered relevant, or are there newer frameworks that have come around that we should investigate?
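
    For what it's worth, plain JUnit plus a mocking library still works for EJB 3 precisely because session beans are POJOs: you can instantiate the bean directly and swap its EntityManager for a mock, no container required. A minimal sketch (OrderService and Order are made-up names; Mockito is used here, but JMock works the same way):

        import static org.junit.Assert.assertEquals;
        import static org.mockito.Mockito.mock;
        import static org.mockito.Mockito.verify;

        import javax.ejb.Stateless;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;

        import org.junit.Test;

        public class OrderServiceTest {

            // Hypothetical stateless session bean under test (normally in its own file).
            @Stateless
            public static class OrderService {
                @PersistenceContext
                private EntityManager em;

                // Package-visible setter (or constructor) so tests can inject a mock.
                void setEntityManager(EntityManager em) { this.em = em; }

                public void placeOrder(Order order) {
                    order.setStatus("PLACED");
                    em.persist(order);
                }
            }

            // Hypothetical entity, trimmed to the bits the test needs.
            public static class Order {
                private String status;
                public void setStatus(String status) { this.status = status; }
                public String getStatus() { return status; }
            }

            @Test
            public void placeOrder_persistsOrderWithPlacedStatus() {
                EntityManager em = mock(EntityManager.class);
                OrderService service = new OrderService();
                service.setEntityManager(em);

                Order order = new Order();
                service.placeOrder(order);

                verify(em).persist(order);
                assertEquals("PLACED", order.getStatus());
            }
        }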

  • Unit Testing iPhone - Linker Errors

    - by sliver
    I've followed the Unit Testing Applications guide from the iPhone development documentation. I followed all the steps, and it worked with the test case from the documentation. But as soon as I changed the test case to test real code from my project, I ended up with linker errors: all classes used in the test case are reported as missing. I've already searched the internet and found that the Bundle Loader property must be set to "$(BUILT_PRODUCTS_DIR)/MyApplication.app/Contents/MacOS/MyApplication", but this also fails because the file cannot be found. Any ideas what I have to do to tell the linker where to search for the missing files?

  • iPhone - strange error when testing on simulator

    - by lostInTransit
    Hi, I was testing my app on the simulator when it crashed on clicking a button of a UIAlertView. I stopped debugging there, made some changes to the code and built the app again. Now when I run the application, I get this error in the console:

        Couldn't register com.myApp.debug with the bootstrap server. Error: unknown error code.
        This generally means that another instance of this process was already running or is hung in the debugger.
        Program received signal: “SIGABRT”.

    I tried removing the app from the simulator and doing a clean build, but I still get this error when I try to run the app. Can someone please let me know what I should do to be able to run the app on my simulator again? Thanks.

  • Nginx A/B testing

    - by Alex
    Hey, I'm trying to do A/B testing and I'm using Nginx for this purpose. My Nginx config file looks like this:

        events {
            worker_connections 1024;
        }

        error_log /usr/local/experiments/apps/reddit_test/error.log notice;

        http {
            rewrite_log on;

            server {
                listen 8081;
                access_log /usr/local/experiments/apps/reddit_test/access.log combined;

                location / {
                    if ($remote_addr ~ "[02468]$") {
                        rewrite ^(.+)$ /experiment$1 last;
                    }
                    rewrite ^(.+)$ /main$1 last;
                }

                location /main {
                    internal;
                    proxy_pass http://www.reddit.com/r/lisp;
                }

                location /experiment {
                    internal;
                    proxy_pass http://www.reddit.com/r/haskell;
                }
            }
        }

    This is kind of working, but CSS and JS files won't load. Can anyone tell me what's wrong with this config file, or what would be the right way to do it? Thanks, Alex

  • Selecting Users For A/B (Champion/Challenger) Testing

    - by Gordon Guthrie
    We have a framework that offers A/B split testing. You have a 'champion' version of a page and you develop a 'challenger' version of it. Then you run the website, allocate some of your users the champion and some the challenger, and measure their different responses. If the challenger is better than the champion at achieving your metrics, then you dethrone the champion and develop a new challenger... My question is what mechanisms I should consider to allocate the versions. A number of options spring to mind:

    - Odd or even IP address (or sub-segments): ****.****.****.123 gets champion but .124 gets challenger
    - Cookie push: check for a champion/challenger cookie; if it doesn't exist, randomly allocate the user to one and push the cookie

    Best practice? Suggestions? Comments? Experience?
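
    Whichever identifier you pick (cookie value, user id or IP address), the allocation itself can be made deterministic by hashing it into a bucket, so the same visitor always sees the same version without any server-side state. A hypothetical Java sketch (class name and percentages are made up):

        import java.nio.charset.StandardCharsets;
        import java.security.MessageDigest;
        import java.security.NoSuchAlgorithmException;

        public class SplitTestAllocator {

            /** Percentage of traffic (0-100) that should see the challenger. */
            private final int challengerPercent;

            public SplitTestAllocator(int challengerPercent) {
                this.challengerPercent = challengerPercent;
            }

            public boolean isChallenger(String stableId) {
                return bucket(stableId) < challengerPercent;
            }

            /** Map the identifier to a stable bucket in [0, 100). */
            private int bucket(String stableId) {
                try {
                    byte[] digest = MessageDigest.getInstance("MD5")
                            .digest(stableId.getBytes(StandardCharsets.UTF_8));
                    // Use the first four bytes as a non-negative integer.
                    int value = ((digest[0] & 0x7F) << 24) | ((digest[1] & 0xFF) << 16)
                              | ((digest[2] & 0xFF) << 8) | (digest[3] & 0xFF);
                    return value % 100;
                } catch (NoSuchAlgorithmException e) {
                    throw new IllegalStateException(e);
                }
            }

            public static void main(String[] args) {
                SplitTestAllocator allocator = new SplitTestAllocator(50);
                System.out.println("user-42 sees challenger: " + allocator.isChallenger("user-42"));
            }
        }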

  • Documenting user scenarios and measuring/testing

    - by Rimian
    Please forgive me, as I don't quite remember the exact terms for what I am talking about... hence my question. Recently I worked on a large Agile team where I encountered a method of defining user scenarios (much like user stories). These scenarios were a few very basic short sentences with keywords and a structure that could be understood by humans (especially project managers) and could also be coded against using some Java framework (for verifying tests). The exact structure of this mini language used keywords like "when" and "and" or "if", which was how the framework parsed and verified the result. The purpose of this framework was to interface between management and the acceptance testing framework, so essentially management could write the tests themselves in English. The scenario went something like this:

        "When a user visits URL
         and user clicks on X
         something happens (that can be measured)"

    Can anyone help me remember exactly what I am talking about? Many thanks
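
    The description sounds like behaviour-driven acceptance testing of the given/when/then variety; frameworks such as JBehave or Cucumber bind plain-English scenarios to Java step methods. A rough sketch in Cucumber-JVM style (the feature text, class and step names are purely illustrative):

        // Feature file (e.g. src/test/resources/visit.feature), written in plain English:
        //
        //   Scenario: Clicking X does something measurable
        //     When a user visits "http://example.com"
        //     And the user clicks on "X"
        //     Then something measurable happens
        //
        // Java step definitions that the framework binds to those sentences:
        import io.cucumber.java.en.And;
        import io.cucumber.java.en.Then;
        import io.cucumber.java.en.When;

        import static org.junit.Assert.assertTrue;

        public class VisitSteps {

            private boolean somethingHappened;

            @When("a user visits {string}")
            public void aUserVisits(String url) {
                // Drive the application here, e.g. with an HTTP client or Selenium.
            }

            @And("the user clicks on {string}")
            public void theUserClicksOn(String element) {
                somethingHappened = true; // stand-in for the real measurable effect
            }

            @Then("something measurable happens")
            public void somethingMeasurableHappens() {
                assertTrue(somethingHappened);
            }
        }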

  • Unit testing .NET CF apps on Windows Mobile 6.5.3 in Visual Studio 2008

    - by Johann Gerell
    Did anyone get that to work? I mean, unit testing .NET CF apps on Windows Mobile 6.5.3 in Visual Studio 2008. It works great for a WM 6 Pro target, but not for a WM 6.5.3 target. I get this error:

        The test adapter ('Microsoft.VisualStudio.TestTools.TestTypes.Unit.UnitTestAdapter, Microsoft.VisualStudio.QualityTools.Tips.UnitTest.Adapter, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a') required to execute this test could not be loaded. Check that the test adapter is installed properly. Not enough storage is available to process this command.

    Yes, I can read the error text, but I don't understand the failed run. Any clues?

  • Testing harness for online teaching?

    - by candeira
    I have been asked to teach an online programming course, and I am looking for a test harness especially geared to education. Some students will have significant coding experience, but others will be total newbies. The course is an introduction to software development, mostly taught in C with some C++ and Java thrown in. In any case, I would like to read their source code only after a test suite has made sure that it compiles and executes properly. The students will also benefit from having a tool they can check their code against before submitting it. However, the Learning Management System my employer is using doesn't have such a system. Do you know of any LMS software that includes this feature? Which testing harness would you recommend in case I have to roll my own?
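
    If it does come down to rolling your own, the core of such a harness is small: compile the submission, run it against a known input, and diff the output against the expected result. A rough Java sketch using only the JDK (file names and the gcc invocation are placeholders; a real harness would also sandbox and time-limit untrusted student code):

        import java.io.File;
        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class SubmissionChecker {

            public static void main(String[] args) throws IOException, InterruptedException {
                // 1. Compile the student's C file.
                int compileStatus = new ProcessBuilder("gcc", "-Wall", "-o", "submission.out", "submission.c")
                        .inheritIO()
                        .start()
                        .waitFor();
                if (compileStatus != 0) {
                    System.out.println("FAIL: does not compile");
                    return;
                }

                // 2. Run the binary with a known input file and capture its output.
                ProcessBuilder runner = new ProcessBuilder("./submission.out");
                runner.redirectInput(new File("input.txt"));
                runner.redirectOutput(new File("actual.txt"));
                if (runner.start().waitFor() != 0) {
                    System.out.println("FAIL: non-zero exit status");
                    return;
                }

                // 3. Compare actual output with the expected output.
                String expected = new String(Files.readAllBytes(Paths.get("expected.txt")), StandardCharsets.UTF_8);
                String actual = new String(Files.readAllBytes(Paths.get("actual.txt")), StandardCharsets.UTF_8);
                System.out.println(expected.trim().equals(actual.trim()) ? "PASS" : "FAIL: wrong output");
            }
        }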

  • Unit Testing CodeIgniter Classes with fooStack - clashes

    - by DrPep
    I'm having 'fun' testing interactions in a CodeIgniter based web app. It seems that when running the entire test suite ("phpunit AllTests.php") it loads all of the test classes and their targets (systems under test), and creates a PHPUnit_Framework_TestSuite instance which presumably iterates over the classes which extend CIUnit_TestCase and runs them. The problem comes when you have multiple classes referencing another class (such as a library). As all the classes are loaded into the same process space, PHP reports "cannot redefine class xyz". Have I missed something here, or am I doing something heinously wrong? In my test class I'm doing something like:

        include_once dirname(__FILE__).'/../CIUnit.php';
        include_once dirname(__FILE__).'/../../libraries/ProductsService.php';

        class testProductsService extends CIUnit_TestCase
        {
            public function testGetProducts_ReturnsArrayOfProducts() {
                $service = new ProductsService();
                $products = $service->getProducts();
                $this->assertTrue(is_array($products));
            }
        }

    The problem manifests as I have a controller which does:

        $this->load->library('ProductsService');

  • Python unit-testing with nose: Making sequential tests

    - by cool-RR
    I am just learning how to do unit-testing. I'm on Python / nose / Wing IDE. (The project that I'm writing tests for is a simulations framework, and among other things it lets you run simulations both synchronously and asynchronously, and the results of the simulation should be the same in both.) The thing is, I want some of my tests to use simulation results that were created in other tests. For example, synchronous_test calculates a certain simulation in synchronous mode, but then I want to calculate it in asynchronous mode, and check that the results came out the same. How do I structure this? Do I put them all in one test function, or make a separate asynchronous_test? Do I pass these objects from one test function to another? Also, keep in mind that all these tests will run through a test generator, so I can do the tests for each of the simulation packages included with my program.

  • Recommended Model Based Testing Tools

    - by ModelTester
    Does anyone have any suggestions on which model-based testing tools to use? Is Spec Explorer/Spec# worth its weight in tester training? What I have traditionally done is create a Visio model where I call out the states and the associated variables, outputs and expected results for each state. Then, in a completely disconnected way, I data-drive my test scripts with those variables based on that model. But they are not connected. I want a way to create a model, associate the variables in a business-friendly way, and have it build the data parameters for the scripts. I can't be the first person to need this. Is there a tool out there that will do basically that, short of developing it myself?
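
    As a point of reference for the "developing it myself" option: the core of a model-based test can be quite small. Keep a simplified model of the expected state, generate operations from it, apply each operation to both the model and the real system, and compare after every step. A toy Java sketch where the "system under test" is just a stack (purely illustrative, not a replacement for a real MBT tool):

        import java.util.ArrayDeque;
        import java.util.Deque;
        import java.util.Random;

        public class StackModelTest {

            public static void main(String[] args) {
                Random random = new Random(42);          // fixed seed: failures are reproducible
                Deque<Integer> sut = new ArrayDeque<>(); // system under test
                int modelSize = 0;                       // the model: just the expected size

                for (int step = 0; step < 10_000; step++) {
                    boolean doPush = modelSize == 0 || random.nextBoolean();
                    if (doPush) {
                        sut.push(step);
                        modelSize++;
                    } else {
                        sut.pop();
                        modelSize--;
                    }
                    // Invariant derived from the model must hold in the real system.
                    if (sut.size() != modelSize) {
                        throw new AssertionError("Mismatch at step " + step
                                + ": model=" + modelSize + " sut=" + sut.size());
                    }
                }
                System.out.println("All generated sequences passed.");
            }
        }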

  • Working with a Java Mail Server for Testing

    - by Charlie
    I'm in the process of testing an application that takes mail out of a mailbox, performs some action based on the content of that mail, and then sends a response mail depending on the result of the action. I'm looking for a way to write tests for this application. Ideally, I'd like these tests to bring up their own mail server, push my test emails to a folder on this mail server, and have my application scrape the mail out of the mail server that my test started. Configuring the application to use the mail server is not difficult, but I do not know where to look for a programmatic way of starting a mail server in Java. I've looked at JAMES, but I am unable to figure out how to start the server from within my test. So the question is this: what can I use for a mail server in Java that I can configure and start entirely within Java?
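
    One option, if embedding JAMES proves awkward, is GreenMail, an embeddable SMTP/POP3/IMAP server written for exactly this kind of test. A rough JUnit sketch, assuming the GreenMail test dependency is on the classpath (addresses are made up; the ports come from the library's test setup defaults):

        // Starts an in-process mail server on test ports, lets the application (or the
        // test itself) send mail to it, and then inspects what arrived.
        import com.icegreen.greenmail.util.GreenMail;
        import com.icegreen.greenmail.util.GreenMailUtil;
        import com.icegreen.greenmail.util.ServerSetupTest;

        import javax.mail.internet.MimeMessage;

        import org.junit.After;
        import org.junit.Before;
        import org.junit.Test;

        import static org.junit.Assert.assertEquals;

        public class MailProcessingTest {

            private GreenMail greenMail;

            @Before
            public void startMailServer() {
                // SMTP_IMAP uses offset test ports (3025/3143) to avoid clashing with real servers.
                greenMail = new GreenMail(ServerSetupTest.SMTP_IMAP);
                greenMail.start();
            }

            @After
            public void stopMailServer() {
                greenMail.stop();
            }

            @Test
            public void applicationSeesMailDeliveredToTheTestServer() throws Exception {
                // Here the test pushes a message itself; the real application would be
                // pointed at localhost:3025 (SMTP) / localhost:3143 (IMAP) instead.
                GreenMailUtil.sendTextEmailTest("inbox@localhost", "sender@localhost",
                        "test subject", "test body");

                MimeMessage[] received = greenMail.getReceivedMessages();
                assertEquals(1, received.length);
                assertEquals("test subject", received[0].getSubject());
            }
        }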

  • Unit Testing, IDataContext and Stored Procedures via Linq

    - by Terry_Brown
    Hey folks, I'm currently using Stephen Walther's approach to unit testing Linq to SQL and the data context (http://stephenwalther.com/blog/archive/2008/08/17/asp-net-mvc-tip-33-unit-test-linq-to-sql.aspx), which is working a treat for all things Linq. My DAL layer takes in an IDataContext; I use DI (via Unity) to concrete that up as the correct DataContext object, and all works well. But we have a company policy here that writes go via stored procs. Has anyone come up with a solution for mocking/faking the data context that still allows stored procs to be unit tested effectively? I've got no real experience with any of the mocking frameworks (Rhino etc.), so if these are the correct means of doing this, I'd love some pointers to guides for them. Many thanks, Terry

  • Grails testing hiccups

    - by egervari
    I have two testing questions. Both are probably easily answered. The first is that I wrote this unit test in Grails:

        void testCount() {
            mockDomain(UserAccount)
            new UserAccount(firstName: "Ken").save()
            new UserAccount(firstName: "Bob").save()
            new UserAccount(firstName: "Dave").save()
            assertEquals(3, UserAccount.count())
        }

    For some reason, I get 0 returned back. Did I forget to do something?

    The second question is for those who use IDEA. What should I be running - IDEA's JUnit tests, or Grails targets? I have two options. Also, why does IDEA say that my tests pass and give a green light even though the test above actually fails? This will really drive me nuts if I have to check the test reports in HTML every time I run my tests..... Help?

  • Web service performance testing plan, Microsoft .NET WS, SQL

    - by zxed
    Trying to answer a question to come up with a testing plan. It has to do with using a website and/or web service that queries a SQL Server to get data and display it to the user.

    - The solution must be able to handle an estimated 2000 users, approximately 700 concurrent users, and 10,000+ website hits a month. Database calls should handle 100,000 queries via the website/web service a month. The system is used at multiple times during a 24-hour period; however, networking and bandwidth traffic decreases after 5 pm.
    - Two Windows 2003 servers are used, one for web, another for SQL. Both are located in the same room. User access is varied and users can be far/near (it's a centralized system); users access via www.
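
    Before (or alongside) a dedicated tool such as JMeter, a small hand-rolled harness can already sanity-check the 700-concurrent-user target against a single endpoint. A rough Java sketch (the URL, thread count and request counts are placeholders):

        // Fires a fixed number of concurrent GET requests at an endpoint and reports
        // simple latency statistics.
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.Future;

        public class SimpleLoadTest {

            public static void main(String[] args) throws Exception {
                final String target = "http://localhost/webservice/GetData";
                final int concurrentUsers = 100;
                final int requestsPerUser = 10;

                ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
                List<Future<Long>> futures = new ArrayList<>();

                for (int i = 0; i < concurrentUsers * requestsPerUser; i++) {
                    futures.add(pool.submit(() -> timedGet(target)));
                }

                List<Long> latencies = new ArrayList<>();
                for (Future<Long> f : futures) {
                    latencies.add(f.get());
                }
                pool.shutdown();

                Collections.sort(latencies);
                long total = latencies.stream().mapToLong(Long::longValue).sum();
                System.out.println("requests: " + latencies.size());
                System.out.println("avg ms:   " + total / latencies.size());
                System.out.println("p95 ms:   " + latencies.get((int) (latencies.size() * 0.95)));
            }

            private static long timedGet(String target) throws Exception {
                long start = System.nanoTime();
                HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
                conn.setRequestMethod("GET");
                conn.getResponseCode();            // forces the request to complete
                conn.disconnect();
                return (System.nanoTime() - start) / 1_000_000; // milliseconds
            }
        }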

  • Legacy Database, Fluent NHibernate, and Testing my mappings

    - by sdanna
    As the post title implies, I have a legacy database (not sure if that matters), I'm using Fluent NHibernate, and I'm attempting to test my mappings using the Fluent NHibernate PersistenceSpecification class. My question is really a process one: I want to run these tests when I build locally in Visual Studio, using the built-in unit testing framework for now. Obviously this implies (I think) that I'm going to need a database. What are some options for getting this into the build? If I use an in-memory database, does NHibernate or Fluent NHibernate have some mechanism for pulling the database schema from a target database, or can the in-memory database do this? Will I need to manually get the schema to feed to an in-memory database? Ideally I would like to get this setup to where the other developers don't really have to think about it, other than when they break the build because the tests don't pass.

  • Unit Testing Model Classes that inherit from NSManagedObject

    - by Matt Baker
    So...I'm trying to get unit tests set up in my iPhone app, but I'm having some issues. I'm trying to test my model classes, but they inherit directly from NSManagedObject. I'm sure this is a problem, but I don't know how to get around it. Everything is building and running as expected, but I get this error when calling any method on the class I'm testing:

        Unknown.m:0:0 unrecognized selector sent to instance 0xc2b120

    If I follow this structure (http://chanson.livejournal.com/115621.html) to create my object in my tests, I end up with another error entirely, but it still doesn't help me. Basically my question is this: how can I test a class that inherits from NSManagedObject?

  • Rails Unit Testing with MyISAM Tables

    - by tadman
    I've got an application that requires the use of MyISAM on a few tables, but the rest are the traditional InnoDB type. The application itself is not concerned with transactions where it applies to these records, but performance is a concern. The Rails testing environment assumes the engine used is transactional, though, so when the test database is generated from schema.rb it is imported with the same engine. Is it possible to override this behaviour in a simple manner? I've resorted to an awful hack to ensure the tables are the correct type by appending this to test_helper.rb:

        (ActiveRecord::Base.connection.select_values("SHOW TABLES") - %w[ schema_info ]).each do |table_name|
          ActiveRecord::Base.connection.execute("ALTER TABLE `#{table_name}` ENGINE=InnoDB")
        end

    Is there a better way to make a MyISAM-backed model testable?

  • Should a Unit-test replicate functionality or Test output?

    - by Daniel Beardsley
    I've run into this dilemma several times. Should my unit tests duplicate the functionality of the method they are testing to verify its integrity? Or should unit tests strive to test the method with numerous manually created instances of inputs and expected outputs? I'm mainly asking the question for situations where the method you are testing is reasonably simple and its proper operation can be verified by glancing at the code for a minute. Simplified example (in Ruby):

        def concat_strings(str1, str2)
          return str1 + " AND " + str2
        end

    Simplified functionality-replicating test for the above method:

        def test_concat_strings
          10.times do
            str1 = random_string_generator
            str2 = random_string_generator
            assert_equal (str1 + " AND " + str2), concat_strings(str1, str2)
          end
        end

    I understand that most times the method you are testing won't be simple enough to justify doing it this way. But my question remains: is this a valid methodology in some circumstances (and why or why not)?
