Search Results

Search found 11675 results on 467 pages for 'parallel testing'.

  • Behavior-Driven Development / Use case diagram

    - by Mik378
    With the growth of Behavior-Driven Development and the acceptance testing it imposes, are use case diagrams still useful, or do they lead to over-documentation? Acceptance tests are specifications by example, and use cases promote much the same content in a more generic form (cases rather than scenarios), so aren't the two too similar to justify maintaining both on a newly created project? From this link, one opinion is: "Another realization I had is that if you do UseCases and automated AcceptanceTests you are essentially doubling your work. There is duplication between the UseCases and the AcceptanceTests. I think there is a good case to be made that UserStories + AcceptanceTests are more efficient way to work when compared to UseCases + AcceptanceTests." What should one make of this?

    Read the article

  • Quality Assurance activities

    - by MasloIed
    I asked this before but deleted the question because it was misunderstood. If quality control is the actual testing, what are the most common true quality assurance activities? I have read that verification (reviews, inspections, etc.) counts as QA, but that does not make much sense to me, because it looks more like quality control, as described in the Department of Health and Human Services Enterprise Performance Life Cycle Framework practices guide: Verification - "Are we building the product right?" Verification is a quality control technique that is used to evaluate the system or its components to determine whether or not the project's products satisfy defined requirements. During verification, the project's processes are reviewed and examined by members of the IV&V team with the goal of preventing omissions, spotting problems, and ensuring the product is being developed correctly. Some verification activities may include: verification of requirements against defined specifications; verification of design against defined specifications; verification of product code against defined standards; and verification of terms, conditions, payment, etc., against contracts.

    Read the article

  • How do I write unit tests to make sure a stored procedure is deleting rows from the database?

    - by aspdotnetuser
    I'm new to unit testing and I need some help with the following. I have created a small project to help me learn how to make Unit Tests. The functionality for one of the forms in my application deletes a user from the User table (and other rows in mapping tables). Currently, the unit test I have created to test this sets up the required objects and then calls the business rules method (passing in the user id) which calls the data access method to execute the stored procedure that deletes the rows in the tables. Is this the correct method to test whether something is being deleted successfully? Should the unit test / setup method first insert some test data which the unit test then deletes?
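
    One way to approach this, sketched below in Java with JUnit and JDBC (the question itself is .NET, and the table name, procedure name, and connection URL here are hypothetical placeholders), is exactly the pattern the asker describes: the setup method inserts a known row into a disposable test database, the test invokes the delete, and an assertion verifies the row is gone. Strictly speaking this is an integration-style test rather than a unit test, since it exercises the real database.

        // Minimal sketch of the insert-then-delete pattern, assuming a throwaway test
        // database that has the DeleteUser procedure installed. All names are placeholders.
        import java.sql.*;
        import org.junit.*;
        import static org.junit.Assert.*;

        public class DeleteUserProcedureTest {
            private static final String TEST_DB_URL = "jdbc:h2:mem:testdb"; // placeholder URL
            private static final int TEST_USER_ID = 99999;                  // assumed unused id
            private Connection conn;

            @Before
            public void insertTestUser() throws SQLException {
                conn = DriverManager.getConnection(TEST_DB_URL);
                try (PreparedStatement ps =
                         conn.prepareStatement("INSERT INTO Users (Id, Name) VALUES (?, ?)")) {
                    ps.setInt(1, TEST_USER_ID);
                    ps.setString(2, "test-user");
                    ps.executeUpdate();
                }
            }

            @Test
            public void deleteUser_removesRow() throws SQLException {
                try (CallableStatement cs = conn.prepareCall("{call DeleteUser(?)}")) {
                    cs.setInt(1, TEST_USER_ID);
                    cs.execute();
                }
                try (PreparedStatement ps =
                         conn.prepareStatement("SELECT COUNT(*) FROM Users WHERE Id = ?")) {
                    ps.setInt(1, TEST_USER_ID);
                    try (ResultSet rs = ps.executeQuery()) {
                        rs.next();
                        assertEquals(0, rs.getInt(1)); // the deleted row should be gone
                    }
                }
            }

            @After
            public void tearDown() throws SQLException {
                conn.close();
            }
        }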

    Read the article

  • How to configure Google Analytics experiments manually

    - by John
    I wish to run multivariate tests on an e-commerce site, running across all product pages. I will be choosing and serving the variations myself; all I need to do is track the results in GA. I think this may be possible (although only A/B testing is available via the GA UI): https://developers.google.com/analytics/devguides/platform/features/experiments#serving-framework EXTERNAL – You will choose variations, handle experiment optimization, and only report the chosen variation to Google Analytics. For example, this should be used by 3rd-party optimization platforms that want to integrate with Google Analytics for reporting purposes. In this case, the Google Analytics statistical engine will not run. However, how do I configure this and push the data to GA from my page?

    Read the article

  • How do you unit test your JavaScript?

    - by Erin
    I have been spending a lot of time working in JavaScript of late, and I have not found an approach that works well for testing it. In the past this hasn't been a problem for me, since most of the websites I worked on had very little JavaScript in them. I now have a new website that makes extensive use of jQuery, and I would like to build unit tests for most of the system. My problems are these: most of the functions make changes to the DOM in some way, and most of them also request data from the web server and require a session on the server to get results back. I would like to run the tests from either a command line or a test runner harness rather than in a browser. Any help or articles I should be reading would be appreciated.

    Read the article

  • How can I test a parser for a bespoke XML schema?

    - by Greg B
    I'm parsing a bespoke XML format into an object graph using .NET 4.0. My parser uses the System.XML namespace internally; I then interrogate the relevant properties of XmlNodes to create my object graph. I've got a first cut of the parser working on a basic input file, and I want to put some unit tests around it before I progress to more complex input files. Is there a pattern for how to test a parser such as this? When I started looking at this, my first move was to new up an XmlDocument and XmlNamespaceManager and create an XmlElement, but it occurs to me that this is quite lengthy and prone to human error. My parser is quite recursive, as you can imagine, and this might lead to testing the full system rather than the individual units (methods) of the system. So a second question might be: what refactoring might make a recursive parser more testable?
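
    One refactoring that often helps here: give each element type its own parse method that accepts a single element, so a test can feed it a tiny literal fragment instead of building a whole document. Below is a rough Java analogue of that shape (the question is .NET; OrderParser and OrderItem are hypothetical stand-ins for the real object graph):

        // Fragment-level parser test: a small helper turns an XML literal into an Element,
        // and each test exercises one parse method. OrderParser/OrderItem are hypothetical.
        import java.io.ByteArrayInputStream;
        import java.nio.charset.StandardCharsets;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;
        import org.junit.Test;
        import static org.junit.Assert.*;

        public class OrderParserTest {

            // hypothetical value object and parser under test
            static class OrderItem {
                final String sku;
                final int quantity;
                OrderItem(String sku, int quantity) { this.sku = sku; this.quantity = quantity; }
            }
            static class OrderParser {
                OrderItem parseItem(Element e) {
                    return new OrderItem(e.getAttribute("sku"),
                                         Integer.parseInt(e.getAttribute("qty")));
                }
            }

            // helper: build an Element from a small literal instead of a full input file
            private static Element fragment(String xml) throws Exception {
                Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
                return doc.getDocumentElement();
            }

            @Test
            public void parsesSingleItemElement() throws Exception {
                Element item = fragment("<item sku=\"A-1\" qty=\"3\"/>");
                OrderItem result = new OrderParser().parseItem(item);
                assertEquals("A-1", result.sku);
                assertEquals(3, result.quantity);
            }
        }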

    Read the article

  • Examples and Best Practices for Seeding Defects?

    - by MathAttack
    Defect Seeding seems to be one of the few ways a development organization can tell how thorough an independent testing group is. I'm a fan of using metrics to help counter overconfidence biases, and drive discussions around facts. With that said, I haven't seen Seeding Defects used in practice. Are there best practices above and beyond what McConnell explained? Are there public examples where this has been done? In the absence of the above, any thoughts on why it hasn't been done more? Thanks in advance!

    Read the article

  • Adding unit tests to a legacy, plain C project

    - by Groo
    The title says it all. My company is reusing a legacy firmware project for a microcontroller device, written entirely in plain C. There are parts which are obviously wrong and need changing, and coming from a C#/TDD background I don't like the idea of randomly refactoring stuff with no tests to assure us that functionality remains unchanged. I've also seen hard-to-find bugs introduced on many occasions through the slightest changes (something I believe regression testing would catch). A lot of care needs to be taken to avoid these mistakes: it's hard to track a bunch of globals around the code. To summarize: How do you add unit tests to existing tightly coupled code before refactoring? What tools do you recommend (less important, but still nice to know)? I am not directly involved in writing this code (my responsibility is an app which will interact with the device in various ways), but it would be a shame if good programming principles were passed over when there was a chance to apply them.

    Read the article

  • How does one unit test an algorithm?

    - by Asa Baylus
    I was recently working on a JS slideshow which rotates images using a weighted-average algorithm. Thankfully, timgilbert has written a weighted list script which implements the exact algorithm I needed. However, in his documentation he has noted under todos: "unit tests!". What I'd like to know is how one goes about unit testing an algorithm. In the case of a weighted average, how would you demonstrate that the averages are accurate when there is an element of randomness? Code samples of anything similar would be very helpful to my understanding.
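
    Two common tactics, sketched below in Java/JUnit against a small hypothetical WeightedPicker (the script in question is JavaScript, but the ideas carry over): inject the random source so a stubbed value makes the outcome deterministic, and run a large number of trials with a fixed seed, asserting that the observed frequencies land within a tolerance of the weights.

        // Sketch of two ways to test weighted random selection:
        // (1) stub the Random so one pick is fully deterministic,
        // (2) fix the seed, run many trials, and check frequencies against the weights.
        import java.util.Random;
        import org.junit.Test;
        import static org.junit.Assert.*;

        public class WeightedPickerTest {

            // hypothetical implementation: picks index i with probability weights[i] / sum(weights)
            static class WeightedPicker {
                private final double[] cumulative;
                private final Random random;
                WeightedPicker(double[] weights, Random random) {
                    this.random = random;
                    cumulative = new double[weights.length];
                    double sum = 0;
                    for (int i = 0; i < weights.length; i++) { sum += weights[i]; cumulative[i] = sum; }
                }
                int pick() {
                    double r = random.nextDouble() * cumulative[cumulative.length - 1];
                    for (int i = 0; i < cumulative.length; i++) if (r < cumulative[i]) return i;
                    return cumulative.length - 1;
                }
            }

            @Test
            public void stubbedRandomMakesResultDeterministic() {
                Random fixed = new Random() {                       // stub: always returns 0.9
                    @Override public double nextDouble() { return 0.9; }
                };
                WeightedPicker picker = new WeightedPicker(new double[] {1, 1, 2}, fixed);
                assertEquals(2, picker.pick());                     // 0.9 * 4 = 3.6 falls in the last bucket
            }

            @Test
            public void frequenciesRoughlyMatchWeightsOverManyTrials() {
                WeightedPicker picker = new WeightedPicker(new double[] {1, 3}, new Random(42)); // fixed seed
                int[] counts = new int[2];
                int trials = 100_000;
                for (int i = 0; i < trials; i++) counts[picker.pick()]++;
                assertEquals(0.75, counts[1] / (double) trials, 0.01); // expect ~75%, generous tolerance
            }
        }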

    Read the article

  • Migrating a product and team from the startup race to quality development

    - by thevikas
    This is year three and the product is selling well enough. Now we need to enforce good software development practices. The goal is to monitor incoming bug reports and reduce them, keep delivering a never-ending stream of features, and get ready to scale 10x. The phrases "test-driven development" and "continuous integration" are not even understood by the team, because they were all caught up in the first two years of the product race. The tech team size is five. The questions are: how to sell/convince the team and management on TDD, unit testing, coding standards, and documentation, in economic terms; how to train the team to do more than just feature coding and to write unit tests alongside it, which looks like more work and therefore needs more time; and how to plan for creating unit tests for all the backlog production code.

    Read the article

  • Is it costly to leave the Console and Script features enabled in Firebug?

    - by parisminton
    For some time now, I've run Firebug constantly enabled to do quick DOM inspections, leaving the Console and Script panels disabled. I'm just starting to use these two features so I don't have to keep using alerts for testing and debugging. I enable them while I use them and turn them back off when I'm done. I'd like to know if these particular features can slow things down such that they shouldn't be left on round-the-clock. Like do they slow down page loads, use inordinate chunks of memory or something? I don't see anything about it in the Firebug wiki.

    Read the article

  • What open source POSIX compliance test suites are available?

    - by Richard Pennington
    I'm working on a small open source project, ELLCC, that uses clang/LLVM as a cross compiler for various target processors. For the runtime environment, I'm using the NetBSD libraries and porting them to target Linux and standalone systems. I want to run a POSIX compliance test suite on the code. I've found the Open POSIX Test Suite, which looks like a good start, but it hasn't been updated since 2005. I've done some preliminary testing (with gcc and ecc under Linux), and it looks like it needs a few updates for modern compilers. My questions are: Does the Open POSIX Test Suite live on somewhere in a more up to date form? Are there other open source alternatives?

    Read the article

  • ISO 12207: SQA as a supporting process?

    - by user970696
    I have been following ISO 12207 for my thesis on software quality. Now I need to explain quality assurance, and here comes the problem: according to this standard, QA is a supporting process, separate from but on the same level as the verification, validation, and auditing processes. According to other sources, quality assurance is basically a high-level activity making sure that standards, norms, etc. are being followed. Usually part of quality assurance is quality control (testing, reviews, inspections, also V&V), which measures quality and provides QA with that information so it can be acted upon. I somehow do not understand what QA is meant to be according to this ISO standard and what activities it should perform. Also, the standard does not mention QC except in a footnote.

    Read the article

  • Are there frameworks or libraries that allow me to run a large number of concurrent jobs on a schedule?

    - by Yoga
    Are there any high-level programming frameworks that allow me to run a large number of concurrent jobs on a schedule? For example, I have 100K URLs whose uptime needs to be checked every 5 minutes. I could certainly write a program to handle this, but then I need to handle concurrency, queuing, error handling, system throttling, job distribution, etc. Is there a framework where I focus only on the particular job (i.e. the ping task) and the system takes care of the scaling and error handling for me? I am open to any language.
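
    Job frameworks for this do exist (Quartz on the JVM and Celery in Python are two commonly cited examples), though how much of the list above any one of them covers out of the box varies. To make the scope concrete, below is a minimal single-process Java sketch of just the schedule-and-throttle core; distribution, persistence, and retry policies are exactly the parts a framework would add, and the checkUrl endpoint list and pool size are assumptions for illustration.

        // Minimal single-process sketch: a scheduler fires the batch every 5 minutes,
        // a bounded pool throttles concurrency, and failures are caught per URL.
        import java.net.HttpURLConnection;
        import java.net.URL;
        import java.util.List;
        import java.util.concurrent.*;

        public class UptimeChecker {
            private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            private final ExecutorService workers = Executors.newFixedThreadPool(200); // concurrency cap

            public void start(List<String> urls) {
                scheduler.scheduleAtFixedRate(
                    () -> urls.forEach(u -> workers.submit(() -> check(u))),
                    0, 5, TimeUnit.MINUTES);
            }

            private void check(String url) {
                try {
                    HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
                    conn.setConnectTimeout(5_000);
                    conn.setReadTimeout(5_000);
                    int status = conn.getResponseCode();
                    System.out.println(url + " -> " + status);                      // record the result
                } catch (Exception e) {
                    System.out.println(url + " -> DOWN (" + e.getMessage() + ")");  // per-URL error handling
                }
            }

            public static void main(String[] args) {
                // placeholder URLs; a real run would load the 100K-entry list from storage
                new UptimeChecker().start(List.of("https://example.com", "https://example.org"));
            }
        }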

    Read the article

  • Test coverage reality

    - by iPhoneDeveloper
    I am NOT doing test-driven development; I write my test classes after the actual code is written. In my current project I have 70% line coverage for 3,000 lines of Java code (using JUnit, Mockito, and Sonar for testing). But I feel that I am not actually covering and catching 70% of the problems that can occur. So my question is: is it possible, in theory, to have 100% line coverage that is meaningless in reality because of the low quality of the test code, such that 40% coverage with well-written tests is much better than a bad 100%? Or can we always say that line coverage more or less gives the percentage of all issues covered?

    Read the article

  • What is the best unit test framework for .NET and why?

    - by rmx
    It seems to me that everyone uses NUnit without even considering the other options. I think this is because: Everyone is familiar with it already so they won't have to learn a new API. It is already set up with their continuous integration server to work with NUnit. Am I wrong about this? I decided to use xUnit on one of my own projects recently and I love it! It makes so much more sense to me and conceptually it seems like a definite step forward from NUnit. I'd like to hear opinions on which framework is actually the best - not taking into consideration having to learn it or reconfigure your automated testing.

    Read the article

  • Must all new features go through beta testing?

    - by LTR
    Obviously, small usability fixes and bugfixes go directly into the stable product. What about small new features? Can you afford to just release them after internal testing, or do they have to be beta-tested by customers first? Situation: this is a young commercial project, produced by a one-person company. It has an existing user base and is at its second major version. Previous beta tests have produced some results; however, most feedback came from the stable product and not from the beta versions.

    Read the article

  • Are my other partitions safe from harm from an alpha/beta release?

    - by Marcappuccino
    I am quite interested in testing the latest alpha-3 of Ubuntu; however, performance in VirtualBox is slow and somewhat buggy (I know, it's an alpha) - guest additions weren't installing, mouse integration was poor, etc. I would now like to test this release on my hard drive. But my main system (12.04) is also on this very same hard drive. Is this safe? Can the alpha touch my main partition? Are there any other risks?

    Read the article

  • Asynchronously returning hierarchical data using the .NET TPL... what should my return object "look" like?

    - by makerofthings7
    I want to use the .NET TPL to asynchronously do a DIR /S, searching each subdirectory on a hard drive for a word in each file... what should my API look like? In this scenario I know that each subdirectory will have 0..10,000 files or 0..10,000 directories. I know the tree is unbalanced and want to return data (in relation to its position in the hierarchy) as soon as it's available. I am interested in getting data as quickly as possible, but also want to update that result if "better" data is found ("better" meaning closer to the root of C:). I may also be interested in finding all matches in relation to their position in the hierarchy (akin to a report). Question: How should I return data to my caller? My first guess is that I need a shared object that will maintain the current "status" of the traversal (started | notstarted | complete), and I might base it on the System.Collections.Concurrent namespace. Another idea that I'm considering is the consumer/producer pattern (which concurrent collections can handle); however, I'm not sure what the objects should "look" like. Optional logical constraint: the API doesn't have to address this, but in my "real world" design, if a directory has files, then only one file will ever contain the word I'm looking for. If someone were to literally do a DIR /S as described above, then they would need to account for more than one matching file per subdirectory. More information: I'm using Azure Tables to store a hierarchy of data using these TPL extension methods. A "node" is a table. Not only does each node in the hierarchy have a relation to any number of nodes, but it's possible for each node to have a reciprocal link back to any other node. This may have issues with recursion, but I'm addressing that with a shared object in my recursion loop. Note that each "node" also has the ability to store local data unique to that node. It is this information that I'm searching for. In other words, I'm searching for a specific fixed RowKey in a hierarchy of nodes. When I search for the fixed RowKey in the hierarchy, I'm interested in getting the results FAST (first node found) but prefer data that is "closer" to the starting point of the hierarchy. Since many nodes may have the particular RowKey I'm interested in, sometimes I may want to get a report of ALL the nodes that contain this RowKey.

    Read the article

  • Do .properties entries need to be tested one by one, for every possibility?

    - by Shengyuan Lu
    For example, there are key-value configuration entries in a .properties file, such as someFeatureEnable=true. The value must be a boolean, and it will be parsed by the framework; in my case it's a typical Java Spring configuration. Spring will handle the configuration and throw an exception when users set someFeatureEnable=123. My question is: if there are many properties in the .properties file, is it worth testing them one by one? It's quite troublesome and low priority. The .properties file is always configured by technical administration staff, and there is only a limited chance that they will mess up the configuration. Thanks!
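
    One middle ground, rather than a test per key: a single test that loads the real .properties file from the classpath and asserts the few values that must parse to a specific type, so a bad edit like someFeatureEnable=123 fails the build before deployment. A minimal JUnit sketch follows; the file name application.properties and the key checked are assumptions taken from the example above.

        // Single test that loads the real properties file and fails fast on untyped values.
        import java.io.InputStream;
        import java.util.Properties;
        import org.junit.Test;
        import static org.junit.Assert.*;

        public class ApplicationPropertiesTest {

            @Test
            public void typedKeysParseCleanly() throws Exception {
                Properties props = new Properties();
                try (InputStream in = getClass().getResourceAsStream("/application.properties")) {
                    assertNotNull("properties file missing from classpath", in);
                    props.load(in);
                }
                String value = props.getProperty("someFeatureEnable");
                // catch misconfigurations such as someFeatureEnable=123 before Spring does
                assertTrue("someFeatureEnable must be 'true' or 'false', was: " + value,
                           "true".equalsIgnoreCase(value) || "false".equalsIgnoreCase(value));
            }
        }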

    Read the article

  • Visual Studio Load Tests Virtual Users Simulation

    - by Eldar
    Hello, I'm currently writing a load testing application that takes advantage of Load Test in Visual Studio 2010. The load test will simulate 20 users on the same machine, and I need some data to be shared in memory between all simulated users. I was surprised I couldn't find documentation answering the following question: what separates each virtual user's running context from the others? Does each virtual user run the tests in its own process? Maybe in its own app domain? Or just on its own thread? I need to know because if each user is running tests in its own process, then the in-memory cache isn't shared and is created once per user instead of once for all of them, which is bad for me.

    Read the article

  • What Does It Usually Mean for a Feature to be "Supported"?

    - by joshin4colours
    I'm currently working on some testing for a particular area of an application. I had to write some automated tests for a particular feature, but due to the circumstances this was not easy to do. When I asked one of the other testers about it, he mentioned that the same feature exists in a sister application our company produces but isn't documented anywhere (end-user documentation or otherwise). He also said that the feature doesn't typically get tested at all in the sister application and isn't usually tested in the application I work on. Apparently this feature isn't heavily used, but removing it would require a fair bit of work, so the benefit-cost ratio doesn't work out. All of this has left me with a question: other than "the documentation says so" or "we told the client it is", what usually makes a feature "supported" versus unsupported?

    Read the article

  • Should I pass an object into a constructor, or instantiate in class?

    - by Prisoner
    Consider these two examples.

    Passing an object to the constructor:

        class ExampleA {
            private $config;

            public function __construct($config) {
                $this->config = $config;
            }
        }

        $config = new Config;
        $exampleA = new ExampleA($config);

    Instantiating the dependency inside the constructor:

        class ExampleB {
            private $config;

            public function __construct() {
                $this->config = new Config;
            }
        }

        $exampleB = new ExampleB();

    Which is the correct way to handle adding an object as a property? When should I use one over the other? Does unit testing affect which I should use?

    Read the article

  • What is the effect of creating unit tests during development on time to develop as well as time spent in maintenance activities?

    - by jgauffin
    I'm a consultant, and I am going to introduce unit tests to all developers at my client's site. My goal is to ensure that all new applications have unit tests for all classes created. The client has a problem with high maintenance costs from fixing bugs in their existing applications. Their applications have a life span of between 5 and 15 years, during which they continuously add new features. I'm quite confident that they will benefit greatly from starting with unit tests. I'm interested in the effect of unit tests on the time and cost of development: How much time will writing unit tests as part of the development process add? How much time will be saved in maintenance activities (testing and debugging) by having good unit tests?

    Read the article
