Search Results

Search found 26978 results on 1080 pages for 'load testing'.

Page 68 of 1080

  • Tool to automate basic connectivity testing

    - by feicipet
    After our vendors have set up a test environment, we need to go in and perform connectivity testing between PCs and servers, and also between servers. The problem is that we run a range of tests that telnet between two nodes on several ports, and this is a manual and rather tedious process. Does anyone know of a small tool or script that takes as input a range of ports and runs automated tests against those ports? All I need to do is validate whether a TCP connection can be established from the source PC or server to the target server's IP address and port. Thanks, Wong
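
    For what it's worth, this is only a few lines in any scripting language; a minimal sketch in Python, assuming plain TCP connect attempts are all that is needed (the host and port-range arguments below are hypothetical):

        # Try a TCP connection to each port in a range and report the outcome.
        # Usage: python check_ports.py <host> <first-port> <last-port>
        import socket
        import sys

        def can_connect(host, port, timeout=3.0):
            """Return True if a TCP connection to host:port can be established."""
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:          # refused, timed out, unreachable, ...
                return False

        if __name__ == "__main__":
            host = sys.argv[1]
            first, last = int(sys.argv[2]), int(sys.argv[3])
            for port in range(first, last + 1):
                state = "open" if can_connect(host, port) else "closed/filtered"
                print(f"{host}:{port} {state}")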

    Read the article

  • Automated browser testing: How to test JavaScript in web pages?

    - by Dave
    I am trying to write an application that will test a series of web pages programmatically. The web pages being tested have JavaScript embedded within them which alters the structure of the HTML when it completes execution. The goal is then to take the final HTML (post-execution of the embedded JavaScript) and compare it against a known output. Essentially, the input-to-output flow for the test application is: URL ---[retrieve HTML]--- HTML ---[execute JS, then compare]--- PASS/FAIL. Here is the challenge: I have been unable to find a solution that can take the HTML I retrieve from the URL and process the JavaScript, as a browser would, generating the final HTML a user would see in the rendered page. It would be very surprising if this sort of approach has not been attempted before, so I'm hoping someone out there knows of a fitting solution for this application/problem? If at all possible, I'm hoping for a solution that integrates with .NET (I've tried using the WebBrowser control, with no luck). However, if there is an existing 3rd-party application that can do exactly this, that would be quite acceptable. Thanks in advance for the suggestions! Dave
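
    For what it's worth, the usual approach is to let a real browser engine execute the JavaScript via an automation driver and read back the rendered DOM. A minimal sketch with Selenium in Python (bindings for .NET exist as well; the expected-output file is hypothetical, and pages that load data asynchronously may need an explicit wait before reading the DOM):

        # URL -> post-JavaScript HTML -> compare against a known-good file.
        from selenium import webdriver
        from selenium.webdriver.chrome.options import Options

        def rendered_html(url):
            opts = Options()
            opts.add_argument("--headless")          # no visible browser window
            driver = webdriver.Chrome(options=opts)
            try:
                driver.get(url)                      # the browser runs embedded JS
                return driver.page_source            # DOM serialized after JS ran
            finally:
                driver.quit()

        def test_page(url, expected_file):
            with open(expected_file, encoding="utf-8") as f:
                expected = f.read()
            return "PASS" if rendered_html(url) == expected else "FAIL"

        print(test_page("http://example.com/page-under-test", "expected.html"))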

    Read the article

  • Unit testing of static library that involves NSDocumentDirectory and other iOS App specific calls.

    - by Shiun
    Hi, I'm attempting to run unit tests for a static library that creates/writes/reads a file in the documents directory. Since this is a static library and not an iOS application, referencing NSDocumentDirectory returns a directory of the form "/Users//Library/Application Support/iPhone Simulator/Documents". This directory does not exist. When accessing the directory from an actual application, NSDocumentDirectory returns something of the form "/Users//Library/Application Support/iPhone Simulator/4.2/FEDBEF5F-1326-4383-A087-CDA1B865E61A/Documents" (note the simulator version as well as the application ID in the path). How can I overcome this shortcoming in the unit test framework for static libraries whose tests require iOS app-specific calls? Thanks in advance.

    Read the article

  • junit testing with mockobjects

    - by Codenotguru
    We are planning to bring JUnit testing into our project, and we just realised that to do JUnit testing we need to create a lot of mock objects. Can anyone suggest a good tool or framework that can create these mock objects for the classes I will be unit testing?

    Read the article

  • Acceptance tests done first...how can this be accomplished?

    - by Crazy Eddie
    The basic gist of most Agile methods is that a feature is not "done" until it's been developed, tested, and in many cases released. This is supposed to happen in quick-turnaround chunks of time, such as "Sprints" in the Scrum process. A common part of Agile is also TDD, which states that tests are written first. My team works on a GUI program that does a lot of specific drawing and such. In order to provide tests, the testing team needs to be able to work with something that at least attempts to perform the things they are trying to test. We've found no way around this problem. I can very much see where they are coming from, because if I were trying to write software that targeted some basically mysterious interface, I'd have a very hard time. Although we have behavior fairly well specified, the exact process of interacting with various UI elements, when it comes to automation, seems to be too unique to a feature to allow testers to write automated scripts to drive something that does not exist. Even if they could, a lot of things turn up later as having been missing from the specification. One thing we considered was having the testers write test "scripts" that are more like a set of steps that must be performed, described from a use-case perspective, so that they can be "automated" by a human being. These can then be performed by the developer(s) writing the feature and/or verified by someone else; when the testers later get an opportunity, they automate the "script", mainly for regression purposes. This didn't end up catching on in the team, though. The testing part of the team is actually falling behind us by quite a margin, which is one reason the apparently extra time of developing a "script" for a human being to perform just did not happen: they're under a crunch to keep up with us developers. If we waited for them, we'd get nothing done. It's not their fault, really; they're a bottleneck, but they're doing what they should be and working as fast as possible. The process itself seems to be set up against them. Very often we end up having to go back a month or more in what we've done to fix bugs that the testers have finally gotten around to checking. It's an ugly truth that I'd like to do something about. So what do other teams do to solve this fail cascade? How can we get testers ahead of us, and how can we make it so that there's actually time for them to write tests for the features we do in a sprint, without making us sit and twiddle our thumbs in the meantime? As it currently stands, getting a feature "done", using agile definitions, would mean having developers work for one week, then testers the second week, with developers hopefully able to fix all the bugs that come up in the last couple of days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas...

    Read the article

  • can't load big files to server with php [closed]

    - by yozhik
    Hi all! I can't upload big files to the server. The problem is that $_FILES["filename"]["tmp_name"] is empty if the file is a little bigger than 2 MB. I tried to change these variables in php.ini:

        upload_max_filesize = 700M
        post_max_size = 16M

    but that is not working either. I also tried adding these variables to my .htaccess file, but then a 500 error appears. The error code while uploading is 1, UPLOAD_ERR_INI_SIZE: the uploaded file exceeds the upload_max_filesize directive in php.ini. Here is my upload.php page; please tell me what I am doing wrong. Thanks!

        <?php
        if (strlen($_FILES["filename"]["name"])) {
            $folder = "uploads/";
            echo $folder;
            $error = "";
            if ($_FILES["filename"]["size"] > 1024*700*1024) {
                $error .= "<b><p class=ErrorMessage>?????? ????? ????????? 5Mb</p></b><br>";
                header("Location: upload.php?error=".$error, true, 303);
            }
            if (!file_exists($folder .= "hh/")) {
                if (!mkdir($folder, 0700))
                    $error .= "<b><p class=ErrorMessage>Folder not created</p></b><br>";
            }
            //echo "<br>".$_FILES["filename"]["tmp_name"]."<br>";
            echo $folder.$_FILES["filename"]["name"]."<br>";
            echo $_FILES["filename"]["error"]."<br>";
            if (move_uploaded_file($_FILES["filename"]["tmp_name"], $folder.$_FILES["filename"]["name"])) {
                echo("???? ??????? ???????? <br>");
                echo("?????????????? ?????: <br>");
                echo("??? ?????: ");
                echo($_FILES["filename"]["name"]);
                echo("<br>?????? ?????: ");
                echo($_FILES["filename"]["size"]);
                echo("<br>??????? ??? ????????: ");
                echo($folder .= $_FILES["filename"]["name"]);
                echo("<br>??? ?????: ");
                echo($_FILES["filename"]["type"]);
            } else {
                $error .= "<b><p class=ErrorMessage>?????? ???????? ?????</p></b><br>";
            }
        }
        ?>
        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
            <title>???????? ??? ????????</title>
        </head>
        <body>
            <?php if (isset($_REQUEST["error"])) { echo $_REQUEST["error"]; } ?>
            <h2><p><b> ????? ??? ???????? ?????? </b></p></h2>
            <form action="upload.php" method="post" enctype="multipart/form-data">
                <input type="file" name="filename" READONLY><br>
                <input name="Upload" type="submit" value="Upload"><br>
            </form>
        </body>
        </html>
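
    For what it's worth, the settings quoted above are inconsistent on their own terms: post_max_size caps the entire POST body, so with post_max_size = 16M nothing larger than about 16 MB can arrive regardless of upload_max_filesize. And since 2 MB is PHP's default upload_max_filesize, a limit that still sits at 2 MB suggests the php.ini being edited may not be the one the web server actually loads (phpinfo() reports which file is in effect). A hedged php.ini sketch, with illustrative values:

        ; post_max_size bounds the whole request body, so keep it at least as
        ; large as upload_max_filesize plus some overhead for other form fields
        upload_max_filesize = 700M
        post_max_size = 768M
        memory_limit = 1024M   ; keep above post_max_size if the script buffers data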

    Read the article

  • External drive hanging, load average through the roof

    - by Paul Tomblin
    I have an external USB drive, and I run an hourly rsync to it as a backup. This has been working fine for years. This weekend, I got two new 2Tb internal drives, and decided it was time to re-install Ubuntu from scratch to clear out all the old cruft. About once a day since the re-install, the backup script hangs hard, usually in the "rm -rf" I do before the rsync. By the time I notice the problem, my load average is in the stratosphere and climbing fast (one time, it was over 150), but anything that doesn't touch the drive seems to be running fine. One thing that I find suspicious is that something, I don't know what, is running "smartctl" and "hdparm" commands against the USB drive. I'm pretty sure smartctl isn't supposed to run on external drives. I can't figure out what's doing it, either. Here's part of ps auwwfx when it's hung:

        root  7310 0.0 0.0  4248  352 ? D 20:15 0:00 /sbin/hdparm -C /dev/sdd
        root  7808 0.0 0.0 17372 1632 ? D 20:15 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
        root  8427 0.0 0.0  4248  356 ? D 20:20 0:00 /sbin/hdparm -C /dev/sdd
        root  8925 0.0 0.0 17372 1628 ? D 20:20 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
        root  9529 0.0 0.0  4248  356 ? D 20:25 0:00 /sbin/hdparm -C /dev/sdd
        root 10026 0.0 0.0 17372 1628 ? D 20:25 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
        root 10655 0.0 0.0  4248  356 ? D 20:30 0:00 /sbin/hdparm -C /dev/sdd
        root 11151 0.0 0.0 17372 1632 ? D 20:30 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
        root 11774 0.0 0.0  4248  356 ? D 20:35 0:00 /sbin/hdparm -C /dev/sdd
        root 12271 0.0 0.0 17372 1628 ? D 20:35 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
        root 12878 0.0 0.0  4248  352 ? D 20:40 0:00 /sbin/hdparm -C /dev/sdd
        root 13374 0.0 0.0 17372 1632 ? D 20:40 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
        root 14011 0.0 0.0  4248  352 ? D 20:45 0:00 /sbin/hdparm -C /dev/sdd
        root 14507 0.0 0.0 17372 1628 ? D 20:45 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
        root 15116 0.0 0.0  4248  352 ? D 20:50 0:00 /sbin/hdparm -C /dev/sdd
        root 15612 0.0 0.0 17372 1632 ? D 20:50 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
        root 16223 0.0 0.0  4248  352 ? D 20:55 0:00 /sbin/hdparm -C /dev/sdd
        root 16734 0.0 0.0 17372 1632 ? D 20:55 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
        root 17345 0.0 0.0  4248  352 ? D 21:00 0:00 /sbin/hdparm -C /dev/sdd
        root 17842 0.0 0.0 17372 1628 ? D 21:00 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
        root 18463 0.0 0.0  4248  352 ? D 21:05 0:00 /sbin/hdparm -C /dev/sdd
        root 18960 0.0 0.0 17372 1628 ? D 21:05 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
        root 19598 0.0 0.0  4248  356 ? D 21:10 0:00 /sbin/hdparm -C /dev/sdd
        root 20096 0.0 0.0 17372 1628 ? D 21:10 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
        root 21280 0.0 0.0  4244  356 ? D 21:15 0:00 /sbin/hdparm -C /dev/sdd
        root 21784 0.0 0.0 17372 1632 ? D 21:15 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
        root 22414 0.0 0.0  4244  356 ? D 21:20 0:00 /sbin/hdparm -C /dev/sdd
        root 22912 0.0 0.0 17372 1628 ? D 21:20 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
        root 23541 0.0 0.0  4244  356 ? D 21:25 0:00 /sbin/hdparm -C /dev/sdd
        root 24038 0.0 0.0 17372 1632 ? D 21:25 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd
        root 24658 0.0 0.0  4244  356 ? D 21:30 0:00 /sbin/hdparm -C /dev/sdd
        root 25157 0.0 0.0 17372 1628 ? D 21:30 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd

    Why is this happening, and how can I stop it?

    Read the article

  • How to find out if my hosting's speed is good enough?

    - by Mert Nuhoglu
    There are lots of different online performance tests:
    - Google PageSpeed Insights
    - iWebTool Speed Test
    - AlertFox Page Load Time
    - WebPageTest
    There are also several desktop/client tools:
    - ping
    - YSlow
    - Firebug's Net console
    - Fiddler
    - HttpWatch
    I just want to decide whether my hosting provider's performance is good enough or whether I need to switch to another provider. So, which tool should I use to compare my hosting provider with other hosting providers?
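
    For what it's worth, before reaching for any of those tools, a crude baseline can be measured from your own machine; a minimal Python sketch (the URL is a placeholder, and this measures raw response time only, not rendering):

        # Crude response-time baseline: repeat a full GET and report best/average.
        # This ignores rendering, caching and geography; real tools measure more.
        import time
        import urllib.request

        def time_request(url, runs=5):
            timings = []
            for _ in range(runs):
                start = time.perf_counter()
                with urllib.request.urlopen(url) as resp:
                    resp.read()                      # drain the body fully
                timings.append(time.perf_counter() - start)
            return min(timings), sum(timings) / len(timings)

        best, avg = time_request("http://example.com/")
        print(f"best {best:.3f}s, average {avg:.3f}s")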

    Read the article

  • The 2010 JavaOne Java EE 6 Panel: Where We Are and Where We're Going

    - by janice.heiss(at)oracle.com
    An informative article, based on a 2010 JavaOne (San Francisco, California) panel session, surveys a variety of expert perspectives on Java EE 6. The panel, moderated by Oracle's Alexis Moussine-Pouchkine, consisted of:
    * Adam Bien, Consultant, Author/Speaker, adam-bien.com
    * Emmanuel Bernard, Principal Software Engineer, JBoss by Red Hat
    * David Blevins, Senior Software Engineer, co-founder of the OpenEJB project and a founder of Apache Geronimo
    * Roberto Chinnici, Consulting Member of Technical Staff, Oracle
    * Jim Knutson, Java EE Architect, IBM
    * Reza Rahman, Lead Engineer, Caucho Technology, Inc.
    * Krasimir Semerdzhiev, Development Architect, SAP Labs Bulgaria
    The panel addressed such topics as platform and API adoption, Contexts and Dependency Injection (CDI), Java EE vs. Spring, the impact of Java EE 6 on tooling and testing, and Java EE.next, along with a variety of audience questions. Read the entire article for the whole picture.

    Read the article

  • ODI 12c - Parallel Table Load

    - by David Allan
    In this post we will look at the ODI 12c capability of parallel table load from the aspect of the mapping developer and the knowledge module developer - two quite different viewpoints. This is about parallel table loading, which isn't to be confused with loading multiple targets per se. It supports the ability for ODI mappings to be executed concurrently, especially if there is an overlap of the datastores that they access, so any temporary resources created may be uniquely constructed by ODI. Temporary objects can be anything, basically - common examples are staging tables, indexes, views, directories - anything in the ETL to help the data integration flow do its job. In ODI 11g users found a few workarounds (such as changing the technology prefixes - see here) to build unique temporary names, but it was more of a challenge in error cases. ODI 12c mappings by default operate exactly as they did in ODI 11g with respect to these temporary names (this is also true for upgraded interfaces and scenarios) but can be configured to support the uniqueness capabilities. We will look at this feature from two aspects: that of a mapping developer and that of a developer (of procedures or KMs).

    1. Firstly as a Mapping Developer

    1.1 Control when uniqueness is enabled
    A new property is available to set unique name generation on/off. When unique names have been enabled for a mapping, all temporary names used by the collection and integration objects will be generated using unique names. This property is presented as a check-box in the Property Inspector for a deployment specification.

    1.2 Handle cleanup after successful execution
    Provided that all temporary objects that are created have a corresponding drop statement, all of the temporary objects should be removed during a successful execution. This should be the case with the KMs developed by Oracle.

    1.3 Handle cleanup after unsuccessful execution
    If an execution failed in ODI 11g, temporary tables would have been left around and cleaned up in the subsequent run. In ODI 12c, KM tasks can now have a cleanup-type task which is executed even after a failure in the main tasks. These cleanup tasks will be executed even on failure if the property 'Remove Temporary Objects on Error' is set. If the agent were to crash and not be able to execute this task, there is an ODI tool (OdiRemoveTemporaryObjects here) you can invoke to clean up the tables - it supports date ranges and the like. That's all there is to it from the aspect of the mapping developer; it's much, much simpler and more straightforward. You can now execute the same mapping concurrently, or execute many mappings using the same resource concurrently, without worrying about conflict.

    2. Secondly as a Procedure or KM Developer

    In the ODI Operator the executed code shows the actual name that is generated - you can also see the runtime code prior to execution (introduced in 11.1.1.7); selecting the code type 'Pre-executed Code' lets you see the code about to be processed, alongside the executed code (which is the default view). References to the collection (C$) and integration (I$) names will be automatically made unique by using the odiRef APIs - these objects will have unique names whenever concurrency has been enabled for a particular mapping deployment specification. It's also possible to use name-uniqueness functions in procedures and your own KMs.

    2.1 New uniqueness tags
    You can also make your own temporary objects have unique names by explicitly including either %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG in the name passed to calls to the odiRef APIs. Such names will always include the unique tag regardless of the concurrency setting. To illustrate, let's look at the getObjectName() method. At <% expansion time, this API will append %UNIQUE_STEP_TAG to the object name for collection and integration tables. The name parameter passed to this API may contain %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. This API always generates to the <? version of getObjectName(). At execution time this API will replace the unique tag macros with a string that is unique to the current execution scope. The returned name will conform to the name-length restriction for the target technology and its pattern for the unique tag. Any necessary truncation will be performed against the initial name for the object and any other fixed text that may have been specified. Examples:

        <?=odiRef.getObjectName("L", "%COL_PRFEMP%UNIQUE_STEP_TAG", "D")?>
        SCOTT.C$_EABH7QI1BR1EQI3M76PG9SIMBQQ

        <?=odiRef.getObjectName("L", "EMP%UNIQUE_STEP_TAG_AE", "D")?>
        SCOTT.EMPAO96Q2JEKO0FTHQP77TMSAIOSR_

    Methods with this kind of support include getFrom, getTableName, getTable, getObjectShortName and getTemporaryIndex. There are also APIs for retrieving this tag info; the getInfo API has been extended with the following properties (the UNIQUE* properties can also be used in ODI procedures):

    UNIQUE_STEP_TAG - Returns the unique value for the current step scope, e.g. 5rvmd8hOIy7OU2o1FhsF61. Note that this will be a different value for each loop iteration when the step is in a loop.
    UNIQUE_SESSION_TAG - Returns the unique value for the current session scope, e.g. 6N38vXLrgjwUwT5MseHHY9
    IS_CONCURRENT - Returns info about the current mapping; will return 0 or 1 (only in % phase)
    GUID_SRC_SET - Returns the UUID for the current source set/execution unit (only in % phase)

    The getPop API has been extended with the IS_CONCURRENT property, which returns info about a mapping; it will return 0 or 1.

    2.2 Additional APIs
    Some new APIs are provided, including getFormattedName, which allows KM developers to construct a name from fixed text or ODI symbols that can optionally be truncated to a maximum length and use a specific encoding for the unique tag. It has the syntax getFormattedName(String pName[, String pTechnologyCode]) and is available at both the % and the ? phase. The format string can contain the ODI prefixes that are available for getObjectName(), e.g. %INT_PRF, %COL_PRF, %ERR_PRF, %IDX_PRF, along with %UNIQUE_STEP_TAG or %UNIQUE_SESSION_TAG. The latter tags will be expanded into a unique string according to the specified technology. Calls to this API within the same execution context are guaranteed to return the same unique name provided that the same parameters are passed to the call. For example:

        <%=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")%>
        <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG_AE", "ORACLE")?>
        C$_MY_TAB7wDiBe80vBog1auacS1xB_AE

        <?=odiRef.getFormattedName("%COL_PRFMY_TABLE%UNIQUE_STEP_TAG.log", "FILE")?>
        C2_MY_TAB7wDiBe80vBog1auacS1xB.log

    2.3 Name length generation
    As part of name generation, the length of the generated name will be compared with the maximum length for the target technology, and truncation may need to be applied. When a unique tag is included in the generated string, it is important that uniqueness is not compromised by truncation of the unique tag. When a unique tag is NOT part of the generated name, the name will be truncated by removing characters from the end - this is the existing 11g algorithm. When a unique tag is included, the algorithm will first truncate the <postfix> and, if necessary, the <prefix>. It is recommended that users ensure there is sufficient uniqueness in the <prefix> section to guarantee uniqueness of the final resultant name.

    SUMMARY
    To summarize, ODI 12c makes it much simpler to use mappings in concurrent cases and provides APIs to help develop procedures or custom knowledge modules in such a way that they can be used in highly concurrent, parallel scenarios.

    Read the article

  • Howto install google-mock on Ubuntu 12.10

    - by user1459339
    I am having a hard time trying to install the Google C++ Mocking Framework. I have successfully run sudo apt-get install google-mock. Then I tried to compile this sample file:

        #include "gmock/gmock.h"

        int main(int argc, char** argv) {
            ::testing::InitGoogleMock(&argc, argv);
            return RUN_ALL_TESTS();
        }

    with g++ -lgmock main.cpp, and these errors appeared:

        main.cpp:(.text+0x1e): undefined reference to `testing::InitGoogleMock(int*, char**)'
        main.cpp:(.text+0x23): undefined reference to `testing::UnitTest::GetInstance()'
        main.cpp:(.text+0x2b): undefined reference to `testing::UnitTest::Run()'
        collect2: error: ld returned 1 exit status

    I guess the linker cannot find the library files. Does anybody know how to fix this?
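
    For what it's worth, two common causes fit these exact symptoms. First, with GNU ld, libraries must appear after the files that reference them, and gmock typically also depends on gtest and pthreads, so the usual form is (library names assumed from typical gmock setups):

        g++ main.cpp -lgmock -lgtest -lpthread

    Second, on some Ubuntu releases the google-mock package installs sources rather than a prebuilt library, in which case the library has to be built (e.g. with cmake) before -lgmock can resolve; checking whether libgmock.a actually exists on the system is a quick way to tell which case applies.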

    Read the article

  • Can I use a genetic algorithm for balancing character builds?

    - by Renan Malke Stigliani
    I'm starting to build an online PVP (duel-like, one-on-one) game, where there is leveling, skill points, special attacks and all the common stuff. Since I have never done anything like this, I'm still thinking about the math behind the levels/skills/specials balance. So I thought a good way of testing the best builds/combos would be to implement a Genetic Algorithm. It'd be like this:
    1. Generate a big group of random characters
    2. Make them fight, and level them up according to their victories (more XP) / losses (less XP)
    3. Mate the winners, crossing their builds, to try and make even better characters
    4. Add some more random chars, emulating new players
    5. Repeat the process for some time, or until I find some chars who can beat everyone's butt
    I could then play with the math and try to find better balances, to make sure that the top x% of chars would be a mix of various build types. So, is it a good idea, or is there some other, easier method to do the balancing?
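
    For what it's worth, the loop sketched above is only a few dozen lines; a toy Python version, where the build representation, the fitness duel and all the numbers are hypothetical stand-ins:

        import random

        BUILD_LEN = 6     # number of skills in a build
        POINTS = 20       # total skill points to distribute

        def random_build():
            build = [0] * BUILD_LEN
            for _ in range(POINTS):
                build[random.randrange(BUILD_LEN)] += 1
            return build

        def duel(a, b):
            # Stand-in fitness duel: a weighted sum plus a little luck.
            # A real game would run its actual combat simulation here.
            power = lambda c: 1.2*c[0] + c[1] + 0.8*c[2] + 0.5*sum(c[3:]) + 3*random.random()
            return a if power(a) >= power(b) else b

        def crossover(a, b):
            # Single-point crossover; the child may no longer sum to POINTS,
            # so a real version would re-normalize the skill points afterwards.
            cut = random.randrange(1, BUILD_LEN)
            return a[:cut] + b[cut:]

        def evolve(pop_size=100, generations=50):
            pop = [random_build() for _ in range(pop_size)]
            for _ in range(generations):
                winners = [duel(random.choice(pop), random.choice(pop))
                           for _ in range(pop_size // 2)]                   # fights
                children = [crossover(random.choice(winners), random.choice(winners))
                            for _ in range(pop_size // 4)]                  # mate winners
                newcomers = [random_build() for _ in range(pop_size // 4)]  # new players
                pop = winners + children + newcomers
            return pop

        survivors = evolve()
        print(survivors[:3])   # inspect a few surviving builds for balance analysis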

    Read the article

  • genetic algorithm for leveling/build test

    - by Renan Malke Stigliani
    I'm starting to build an online PVP (duel-like, one-to-one) game, where there is leveling, skill points, special attacks and all the common stuff. Since I never did anything like that, I'm still thinking about the math behind the level/skill/special balances. So I thought a good way of testing the best builds/combos would be to implement a Genetic Algorithm. It'd be like this:
    1. Generate a big pool of random characters
    2. Make them fight, and level them up according to their victories (more XP) / losses (less XP)
    3. Mate the winners, crossing their builds, to try to make even better characters
    4. Add some more random chars, emulating new players
    5. Repeat the process for some time, or until I find some chars who can beat everyone's butts
    So I could play with the math and try to find the balance where the top x% of chars would be a mix of various build types. So, is it a good idea, or are there other, easier methods to do the balancing? PS: I also like this approach because it sounds fun.

    Read the article

  • What resources will help me understand the data model for QC 10.0 in order to write my SQL queries?

    - by srihari
    I am a fresher with HP's Quality Center 10.0 software testing tool. As per my understanding, in order to generate reports from QC and to troubleshoot scenarios, we need to write SQL queries against the QC back-end database; in my case it is a SQL database. I downloaded the database reference help file, but I could not understand where to start: it just gives each table's name and its information. For a starter like me, are there any online tutorials, helpful websites, hands-on exercises, or scenarios where I can better understand how to write queries for the QC data model? I am very confident about the SQL coding itself; what I want to know is how to query the QC database tables based on the scenarios that occur in the QC tool. Please suggest. Thanks, Srihari

    Read the article

  • Need help on implementing corporate network security solution and coming up with time lines to test it

    - by abc
    I have to come up with a proposal to implement corporate network security. Once I have done that, I also have to come up with estimates of the time and money needed to test (QA) the implementation. What I need help with: What should I keep in mind while coming up with this proposal? I have already considered routers, firewalls, VPN, wireless, server systems, web apps, etc., but I know I am missing quite a lot; what else should I include? Then the part I find most challenging: how should I estimate the time needed to test these security implementations? I guess I need to understand how to test these security implementations first, right? Can you help me?

    Read the article

  • What are tangible advantages to proper Unit Tests over Functional Test called unit tests

    - by Jackie
    A project I am working on has a bunch of legacy tests that were not properly mocked out. Because of this, the only mocking dependency it has is EasyMock, which doesn't support statics, constructors with arguments, etc. The tests instead rely on database connections and such to "run" the tests. Adding PowerMock to handle these cases is being shot down as cost-prohibitive, due to the need to upgrade the existing project to support it (another discussion). My questions are: what are the real-world, tangible benefits of proper unit testing that I can use to push back? Are there any? Am I just being a stickler by saying that bad unit tests (even if they work) are bad? Is code coverage just as effective?
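
    For what it's worth, the most demonstrable benefit is speed and determinism: a properly isolated test runs in milliseconds with no environment behind it. A sketch of the contrast in Python, with unittest.mock standing in for EasyMock/PowerMock (the service and repository here are hypothetical):

        # The "legacy" style needs a live database to exercise display_name();
        # the unit style below replaces the dependency, so the test is fast,
        # deterministic, and pins down exactly which collaborator call happened.
        from unittest import TestCase, main
        from unittest.mock import Mock

        class UserService:
            def __init__(self, repo):
                self.repo = repo              # dependency is injected, hence mockable

            def display_name(self, user_id):
                user = self.repo.find(user_id)
                return f"{user['first']} {user['last']}".strip()

        class UserServiceUnitTest(TestCase):
            def test_display_name_without_a_database(self):
                repo = Mock()
                repo.find.return_value = {"first": "Ada", "last": "Lovelace"}
                self.assertEqual(UserService(repo).display_name(42), "Ada Lovelace")
                repo.find.assert_called_once_with(42)

        if __name__ == "__main__":
            main()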

    Read the article

  • rails fake data, considering switch from faker to forgery, any advantages or pitfalls?

    - by Michael Durrant
    With Ruby on Rails I've usually used Forgery for generating dummy data for testing. I've noticed recently that several clients and tutorials are using Faker. They both seem fairly similar in use and popularity: Faker has 128 forks and 418 watchers; Forgery has 59 forks and 399 watchers. They also seem similar in how current they are: Faker's most recent updates are from 6 and 9 months ago; Forgery's are from 4 and 9 months ago. The one distinguishing factor I've found so far is that Forgery seems to have better instructions. Are there any particular benefits or disadvantages to using one over the other? Have you ever needed to switch from one to the other for a particular reason?

    Read the article

  • Behavior-Driven Development / Use case diagram

    - by Mik378
    With the growth of Behavior-Driven Development and the acceptance testing it prescribes, are use case diagrams still useful, or do they lead to "over-documentation"? Acceptance tests represent specifications by example, much as use cases do, albeit in a more generic manner (cases rather than scenarios); aren't the two too similar to justify producing both on a newly created project? From this link, one opinion is: "Another realization I had is that if you do UseCases and automated AcceptanceTests you are essentially doubling your work. There is duplication between the UseCases and the AcceptanceTests. I think there is a good case to be made that UserStories + AcceptanceTests are more efficient way to work when compared to UseCases + AcceptanceTests." What to think about this?

    Read the article

  • Quality Assurance activities

    - by MasloIed
    Having asked this before, I deleted the question as it was a bit misunderstood. If Quality Control is the actual testing, what are the commonest true quality assurance activities? I have read that verification (reviews, inspections, ...) is one, but that does not make much sense to me, as it looks more like quality control, as described in the DEPARTMENT OF HEALTH AND HUMAN SERVICES ENTERPRISE PERFORMANCE LIFE CYCLE FRAMEWORK practices guide: Verification - "Are we building the product right?" Verification is a quality control technique that is used to evaluate the system or its components to determine whether or not the project's products satisfy defined requirements. During verification, the project's processes are reviewed and examined by members of the IV&V team with the goal of preventing omissions, spotting problems, and ensuring the product is being developed correctly. Some verification activities may include items such as:
    • Verification of requirements against defined specifications
    • Verification of design against defined specifications
    • Verification of product code against defined standards
    • Verification of terms, conditions, payment, etc., against contracts

    Read the article

  • How to make Unit Tests to make sure stored procedure is deleting row from the database?

    - by aspdotnetuser
    I'm new to unit testing and I need some help with the following. I have created a small project to help me learn how to write unit tests. The functionality for one of the forms in my application deletes a user from the User table (and the corresponding rows in mapping tables). Currently, the unit test I have created to test this sets up the required objects and then calls the business-rules method (passing in the user id), which calls the data-access method to execute the stored procedure that deletes the rows in the tables. Is this the correct way to test whether something is being deleted successfully? Should the unit test / setup method first insert some test data which the unit test then deletes?
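
    For what it's worth, the shape described at the end is the usual pattern: the test creates the row it needs, exercises the delete, and asserts the row is gone. A minimal sketch in Python, with sqlite3 and a plain function standing in for the real database and stored procedure (strictly speaking this is an integration test, since a database, even an in-memory one, is involved):

        import sqlite3
        import unittest

        def delete_user(conn, user_id):
            # Stand-in for the stored procedure under test.
            conn.execute("DELETE FROM users WHERE id = ?", (user_id,))

        class DeleteUserTest(unittest.TestCase):
            def setUp(self):
                self.conn = sqlite3.connect(":memory:")
                self.conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
                self.conn.execute("INSERT INTO users VALUES (1, 'test user')")  # arrange

            def test_delete_removes_row(self):
                delete_user(self.conn, 1)                                       # act
                remaining = self.conn.execute(
                    "SELECT COUNT(*) FROM users WHERE id = 1").fetchone()[0]
                self.assertEqual(remaining, 0)                                  # assert

        if __name__ == "__main__":
            unittest.main()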

    Read the article

  • How to configure Google Analytics experiments manually

    - by John
    I wish to run multivariate tests on an e-commerce site; the tests run across all product pages. I will be choosing and serving the variations myself; all I need to do is track the results in GA. I think this may be possible, although only A/B testing is available via the GA UI: https://developers.google.com/analytics/devguides/platform/features/experiments#serving-framework "EXTERNAL – You will choose variations, handle experiment optimization, and only report the chosen variation to Google Analytics. For example, this should be used by 3rd-party optimization platforms that want to integrate with Google Analytics for reporting purposes. In this case, the Google Analytics statistical engine will not run." But how do I configure this, and how do I push the data to GA from my page?

    Read the article

  • How do you unit test your javascript.

    - by Erin
    I have spent a lot of time working in JavaScript of late, and I have not found a way that seems to work well for testing it. This hasn't been a problem for me in the past, since most of the websites I worked on had very little JavaScript in them. I now have a new website that makes extensive use of jQuery, and I would like to build unit tests for most of the system. My problems are these: most of the functions make changes to the DOM in some way, and most of the functions request data from the web server and require a session on the server to get results back. I would like to run the tests from either a command line or a test-runner harness rather than in a browser. Any help or articles I should be reading would be helpful.

    Read the article

  • How can I test a parser for a bespoke XML schema?

    - by Greg B
    I'm parsing a bespoke XML format into an object graph using .NET 4.0. My parser uses the System.Xml namespace internally; I then interrogate the relevant properties of XmlNodes to create my object graph. I've got a first cut of the parser working on a basic input file, and I want to put some unit tests around it before I progress to more complex input files. Is there a pattern for testing a parser such as this? When I started looking at this, my first move was to new up an XmlDocument and XmlNamespaceManager and create an XmlElement, but it occurs to me that this is quite lengthy and prone to human error. My parser is quite recursive, as you can imagine, and this might lead to testing the full system rather than the individual units (methods) of the system. So a second question might be: what refactoring might make a recursive parser more testable?
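
    For what it's worth, the standard pattern is exactly the refactoring hinted at in the second question: give each node type its own small parse function, then feed each function the smallest XML fragment it handles. A sketch in Python, with ElementTree standing in for System.Xml purely to keep the example compact (the <item> schema here is hypothetical):

        # Test a recursive parser one unit at a time: each parse function gets
        # the smallest fragment it handles, and the test asserts on the object
        # it returns rather than on the whole graph.
        import unittest
        import xml.etree.ElementTree as ET

        def parse_item(elem):
            # Unit under test: turns one <item> element into a plain dict.
            return {"id": elem.get("id"), "name": elem.findtext("name", default="")}

        class ParseItemTest(unittest.TestCase):
            def test_single_item(self):
                elem = ET.fromstring('<item id="7"><name>widget</name></item>')
                self.assertEqual(parse_item(elem), {"id": "7", "name": "widget"})

            def test_missing_name_defaults_to_empty(self):
                elem = ET.fromstring('<item id="8"/>')
                self.assertEqual(parse_item(elem), {"id": "8", "name": ""})

        if __name__ == "__main__":
            unittest.main()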

    Read the article

  • Examples and Best Practices for Seeding Defects?

    - by MathAttack
    Defect Seeding seems to be one of the few ways a development organization can tell how thorough an independent testing group is. I'm a fan of using metrics to help counter overconfidence biases, and drive discussions around facts. With that said, I haven't seen Seeding Defects used in practice. Are there best practices above and beyond what McConnell explained? Are there public examples where this has been done? In the absence of the above, any thoughts on why it hasn't been done more? Thanks in advance!
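
    For reference, the classic seeding arithmetic (along the lines McConnell describes): if testers find 6 of 10 seeded defects and also find 30 real defects, the estimated total real-defect count is 30 / (6/10) = 50, suggesting roughly 20 real defects remain latent. The estimate is only as good as the assumption that seeded defects are as hard to find as real ones.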

    Read the article

