Search Results

Search found 22065 results on 883 pages for 'performance testing'.

Page 19/883 | < Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • Weird Results A/B Test in Google Website Optimizer

    - by Yisroel
    I set up a test in Google Website Optimizer that has 3 variations: the original (A), B, and C. In order to further validate the results of the test, I added variation C as an exact copy of the original. And that's where the results get weird. Six days into the test, the best-performing variation is C. It outperforms the original by 18.4%! How is that possible? Do I now discount the results of this test entirely?

    Read the article
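
    One way to sanity-check an A/A result like this: a double-digit relative "lift" can easily be statistical noise after only six days of traffic. A minimal sketch of a two-proportion z-test, using hypothetical visitor and conversion counts (the post gives only the percentage lift, so all numbers below are assumptions):

        // Two-proportion z-test on hypothetical counts; |z| below 1.96 means
        // the A/A "lift" is not significant at the 95% confidence level.
        public class AaTestSignificance {
            public static void main(String[] args) {
                double n1 = 500, c1 = 25;  // original: 500 visitors, 25 conversions (5.0%)
                double n2 = 500, c2 = 30;  // clone C: 500 visitors, 30 conversions (6.0%)
                double p1 = c1 / n1, p2 = c2 / n2;
                double pooled = (c1 + c2) / (n1 + n2);
                double se = Math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2));
                double z = (p2 - p1) / se;  // roughly 0.69 here: a 20% lift, yet pure noise
                System.out.printf("z = %.2f (|z| > 1.96 needed for 95%% confidence)%n", z);
            }
        }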

  • How can I test database access methods in Java?

    - by javaStudent
    I want to write a test for a method that accesses a database, such as the following. public class MyClass { public String getAddress(int id) { String query = "Select * from Address where id=" + id; //some additional statements ResultSet resultSet = statement.executeQuery(query); return resultSet.getString(ADDRESS); } } How can I test this method? I am using Java.

    Read the article
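
    A minimal sketch of one common answer, assuming JUnit 4 and the H2 in-memory database on the classpath: run the query against a throwaway in-memory schema, so the test needs no real database server. In production code the Connection would be injected into MyClass rather than created inside it; here the query logic is exercised directly.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import org.junit.Assert;
        import org.junit.Test;

        public class AddressQueryTest {

            @Test
            public void returnsAddressForKnownId() throws Exception {
                // Fresh in-memory database, created and seeded inside the test.
                Connection conn = DriverManager.getConnection("jdbc:h2:mem:addresses");
                conn.createStatement().execute(
                    "CREATE TABLE Address (id INT PRIMARY KEY, address VARCHAR(255))");
                conn.createStatement().execute(
                    "INSERT INTO Address VALUES (1, '42 Main St')");

                // A parameterized query, which also fixes the SQL-injection
                // risk in the string-concatenated original.
                PreparedStatement ps = conn.prepareStatement(
                    "SELECT address FROM Address WHERE id = ?");
                ps.setInt(1, 1);
                ResultSet rs = ps.executeQuery();

                Assert.assertTrue(rs.next());
                Assert.assertEquals("42 Main St", rs.getString("address"));
            }
        }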

  • Assignments in mock return values

    - by zerkms
    (I will show examples using PHP and PHPUnit, but this may be applied to any programming language.) The case: let's say we have a method A::foo that delegates some work to class M and returns the value as-is. Which of these solutions would you choose: $mock = $this->getMock('M'); $mock->expects($this->once()) ->method('bar') ->will($this->returnValue('baz')); $obj = new A($mock); $this->assertEquals('baz', $obj->foo()); or $mock = $this->getMock('M'); $mock->expects($this->once()) ->method('bar') ->will($this->returnValue($result = 'baz')); $obj = new A($mock); $this->assertEquals($result, $obj->foo()); or $result = 'baz'; $mock = $this->getMock('M'); $mock->expects($this->once()) ->method('bar') ->will($this->returnValue($result)); $obj = new A($mock); $this->assertEquals($result, $obj->foo()); Personally I always follow the 2nd solution, but just 10 minutes ago I had a conversation with a couple of developers who said that it is "too tricky" and chose the 3rd or the 1st. So what would you usually do? And do you have any conventions to follow in such cases?

    Read the article
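
    For comparison, the same delegation test in Java with JUnit 4 and Mockito, written in the style of the 3rd option: the expected value is named once and reused by both the stub and the assertion, so the test cannot silently drift. M and A below are minimal stand-ins for the classes in the question.

        import static org.mockito.Mockito.mock;
        import static org.mockito.Mockito.verify;
        import static org.mockito.Mockito.when;
        import org.junit.Assert;
        import org.junit.Test;

        public class ATest {

            interface M { String bar(); }

            static class A {
                private final M m;
                A(M m) { this.m = m; }
                String foo() { return m.bar(); }  // delegates and returns as-is
            }

            @Test
            public void fooReturnsWhateverBarProduces() {
                String result = "baz";            // single source of truth

                M mockM = mock(M.class);
                when(mockM.bar()).thenReturn(result);

                A obj = new A(mockM);
                Assert.assertEquals(result, obj.foo());
                verify(mockM).bar();              // delegated exactly once
            }
        }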

  • How do I check that my tests were not removed by other developers?

    - by parxier
    I've just come across an interesting collaborative-coding issue at work. I wrote some unit/functional/integration tests and implemented new functionality in an application that has ~20 developers working on it. All tests passed and I checked in the code. The next day I updated my project and noticed (by chance) that some of my test methods had been deleted by other developers (merging problems on their end). The new application code was not touched. How can I detect such a problem automatically? I mean, I write tests to automatically check that my code still works (or was not deleted); how do I do the same for the tests? We're using Java, JUnit, Selenium, SVN and Hudson CI, if it matters.

    Read the article
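
    There is no standard tool for exactly this, but one low-tech safeguard is a "meta-test" that fails the Hudson build when the test count of a suite drops below a known floor, which catches a merge that silently deletes test methods. A hedged JUnit 4 sketch; MyFeatureTest and the floor value are placeholders:

        import java.lang.reflect.Method;
        import org.junit.Assert;
        import org.junit.Test;

        public class TestCountGuard {

            // Bump this constant whenever tests are added to the guarded class.
            private static final int EXPECTED_MINIMUM = 12;

            @Test
            public void testsWereNotDeletedByAMerge() {
                // MyFeatureTest is a placeholder for the suite being guarded.
                int count = 0;
                for (Method m : MyFeatureTest.class.getDeclaredMethods()) {
                    if (m.isAnnotationPresent(Test.class)) {
                        count++;
                    }
                }
                Assert.assertTrue("Expected at least " + EXPECTED_MINIMUM
                        + " test methods, found " + count,
                        count >= EXPECTED_MINIMUM);
            }
        }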

  • Google Analytics Content Experiments for non-simultaneous tests

    - by mnort9
    I really like how Google Analytics displays the results of content experiments. However, it seems the tool only works for simultaneous tests. I'd like to use the tool without implementing the page-variation code on my site. For example, I want to test copy on an e-commerce category page. The original variation would be the current page for the past 2,500 visits; after making the copy changes, the new variation would run for the next 2,500 visits. I realize I can simply record the metrics before and after each variation, but I'd like to take advantage of Google's presentation of the experiment. Is it possible to use Content Experiments in this way?

    Read the article

  • Is there any way to simulate a slow connection between my server and an iPad (without installing anything on the server)?

    - by Clay Nichols
    Some of our web-app users have difficulty on slower connections. I'm trying to get a better idea of what that "speed barrier" is, so I'd like to be able to test a variety of connection speeds. I've found ways to do this on Windows but not on the iPad, so I'm looking for some sort of proxy service that works with any device (rather than running on the device itself). I did find an article about using Charles Proxy and providing a connection to another device, but I was hoping for something simpler (it need not be free). Constraints: we are on a shared server, so we can't install anything, and we are limited in our control over that server. I'd like to test an iPad, an Android tablet, and a Windows PC.

    Read the article
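
    If nothing off-the-shelf fits, a throttling TCP relay can run on any spare machine between the devices and the shared server, with nothing installed on either end: point the app (or a DNS/hosts override) at the relay machine instead of the real server. A rough sketch under assumed host, port, and rate values; dedicated tools such as Charles Proxy or netem simulate latency and packet loss more faithfully:

        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.ServerSocket;
        import java.net.Socket;

        public class ThrottleProxy {

            static final String TARGET_HOST = "example.com"; // the real web server
            static final int    TARGET_PORT = 80;
            static final int    LISTEN_PORT = 8000;
            static final int    BYTES_PER_SECOND = 32 * 1024; // ~256 kbit/s

            public static void main(String[] args) throws Exception {
                try (ServerSocket server = new ServerSocket(LISTEN_PORT)) {
                    while (true) {
                        Socket client = server.accept();
                        Socket upstream = new Socket(TARGET_HOST, TARGET_PORT);
                        pump(client.getInputStream(), upstream.getOutputStream());
                        pump(upstream.getInputStream(), client.getOutputStream());
                    }
                }
            }

            // Copy bytes on a background thread, sleeping to cap throughput.
            static void pump(final InputStream in, final OutputStream out) {
                new Thread(() -> {
                    byte[] buf = new byte[4096];
                    try {
                        int n;
                        while ((n = in.read(buf)) != -1) {
                            out.write(buf, 0, n);
                            out.flush();
                            Thread.sleep(1000L * n / BYTES_PER_SECOND);
                        }
                    } catch (Exception e) {
                        // connection closed; let this relay thread exit
                    }
                }).start();
            }
        }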

  • Should the test and the fix be written by different people?

    - by Nutel
    There is a common practice in TDD of writing a test before the fix, to avoid regression and simplify fixing. I just wonder: what if the test and the fix were written by different people? The total time spent would be almost the same, but since three people would now be thinking about possible failures (including the tester), we increase the probability that the fix covers all possible failure scenarios. Does this practice make sense, or does it just waste the additional time needed for one more person to become familiar with the bug?

    Read the article

  • Where can I find statistics / figures on how long testing should / could take?

    - by NoCarrier
    I'm trying to convince management that testing/QA takes considerably longer than non-developers think. Some smaller shops don't have budgets for testers, and PHBs automatically assume the developer will spend a few minutes after every build "testing" and deliver a perfectly functional system. Can someone point me to some numbers? E.g., "testing should be XX% of your total man-hour count", etc.? Or perhaps some real-world experience? My goal is to have some numbers that are grounded in real life so I can justify the time/effort allocation for "proper" testing when preparing estimates and timelines for applications. Maybe not full-blown 100% TDD, but pragmatically close to it. I apologize if I seem vague.

    Read the article

  • What set of tools make up "the rails way" of testing javascript in the browser?

    - by Jordan Feldstein
    What's the consensus for doing in-browser (either headless or remote-controlled) testing of JavaScript? Unit testing my JS is nice, but can't protect against irresponsible changes to the DOM. Unit testing the JS plus functional testing of the views, to make sure they both provide and utilize the same, correct DOM, might work, but then the link between JS and DOM is covered in two places, which seems brittle or cumbersome. Is there an acknowledged "Rails way" to implement full-stack tests, where I can run my JavaScript against the DOM rendered by the rest of the app and check the results? (Something like what PHPUnit and Selenium give us, but inside the Rails framework?)

    Read the article

  • Should I use a separate class per test?

    - by user460667
    Taking the following simple method, how would you suggest I write a unit test for it? (I am using MSTest; however, the concepts are similar in other tools.) public void MyMethod(MyObject myObj, bool validInput) { if(!validInput) { // Do nothing } else { // Update the object myObj.CurrentDateTime = DateTime.Now; myObj.Name = "Hello World"; } } If I try to follow the rule of one assert per test, my logic would be that I should have a class-initialize method which executes the method, and then individual tests which check each property on myObj. public class MyTest { MyObject myObj; [TestInitialize] public void MyTestInitialize() { this.myObj = new MyObject(); MyMethod(myObj, true); } [TestMethod] public void IsValidName() { Assert.AreEqual("Hello World", this.myObj.Name); } [TestMethod] public void IsDateNotNull() { Assert.IsNotNull(this.myObj.CurrentDateTime); } } Where I am confused is around TestInitialize. If I execute the method under TestInitialize, I would need separate classes per variation of parameter inputs. Is this correct? That would leave me with a huge number of files in my project (unless I have multiple classes per file). Thanks

    Read the article
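
    Data-driven (parameterized) tests are one way out of the class-per-variation explosion: one fixture runs once per input row, and each assertion keeps its own small test method. A sketch of the same shape in JUnit 4 (MSTest offers a comparable data-driven facility); the Java classes below are minimal mirrors of the C# code in the question:

        import java.util.Arrays;
        import java.util.Collection;
        import java.util.Date;
        import org.junit.Assert;
        import org.junit.Before;
        import org.junit.Test;
        import org.junit.runner.RunWith;
        import org.junit.runners.Parameterized;
        import org.junit.runners.Parameterized.Parameters;

        @RunWith(Parameterized.class)
        public class MyMethodTest {

            // Minimal Java mirrors of the C# code in the question.
            static class MyObject { Date currentDateTime; String name; }

            static void myMethod(MyObject myObj, boolean validInput) {
                if (validInput) {
                    myObj.currentDateTime = new Date();
                    myObj.name = "Hello World";
                }
            }

            @Parameters
            public static Collection<Object[]> variations() {
                return Arrays.asList(new Object[][] {
                    { true,  "Hello World" },  // valid input: object gets updated
                    { false, null          },  // invalid input: object is left alone
                });
            }

            private final boolean validInput;
            private final String expectedName;
            private MyObject myObj;

            public MyMethodTest(boolean validInput, String expectedName) {
                this.validInput = validInput;
                this.expectedName = expectedName;
            }

            @Before
            public void runMethodUnderTest() {
                myObj = new MyObject();
                myMethod(myObj, validInput);
            }

            @Test
            public void nameMatchesExpectation() {
                Assert.assertEquals(expectedName, myObj.name);
            }

            @Test
            public void dateIsSetOnlyForValidInput() {
                Assert.assertEquals(validInput, myObj.currentDateTime != null);
            }
        }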

  • How to populate a private container for unit test?

    - by Sardathrion
    I have a class that defines a private (well, __container to be exact, since it is Python) container. I am using the information within said container as part of the logic of what the class does, and I have the ability to add/delete elements of the container. For unit tests, I need to populate this container with some data. That data depends on the test being done, and thus putting it all in setUp() would be impractical and bloated -- plus it could add unwanted side effects. Since the data is private, I can only add things via the public interface of the object. This runs code that need not run during a unit test and in some cases is just copied and pasted from another test. Currently I am mocking the whole container, but somehow that does not feel like an elegant solution. Because of the Python mocking framework (mock), this requires the container to be public -- so I can use patch.dict(). I would rather keep that data private. What pattern can one use to populate the container without exercising the public methods, so I have data to test with? Is there a way to do this with mock's patch.dict() that I missed?

    Read the article

  • Any good tools or tips for fuzz testing Windows forms applications?

    - by Ogre Psalm33
    I'm maintaining a ~300K LOC C# legacy thick-client application with a Windows.Forms interface. The app is full of little bugs and quirks. For example, I recently discovered a bug where, if a user edits and tabs (not clicks) through cells on a DataGridView and leaves a certain cell selected, the app gets an "Object reference not set to an instance of an object" exception. I discover (or get a bug report of) something new like this about every week or two. I've had enough, and was thinking of trying some sort of fuzz testing on the application to ferret out undiscovered issues. If I roll my own fuzz testing, I assume I'd at least need to generate test harnesses that run pieces of my app (main window, FormX, FormY, FormZ, ...) independently and try to inject events into them. I've tried to find tools suited for this, but so far have come up with nothing for WinForms. (There seems to be no shortage of fuzz-testing tools for web apps, however.) Any helpful ideas?

    Read the article
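
    WinForms-specific fuzzers are indeed scarce; one low-tech fallback is OS-level "monkey testing": inject random keystrokes, tabs and clicks into the focused application and watch for exception dialogs or log entries. A rough sketch using java.awt.Robot, which synthesizes native input events and so can drive a non-Java app under test; the window region and key set here are arbitrary assumptions:

        import java.awt.Robot;
        import java.awt.event.InputEvent;
        import java.awt.event.KeyEvent;
        import java.util.Random;

        public class FormMonkey {

            public static void main(String[] args) throws Exception {
                Robot robot = new Robot();
                Random rnd = new Random();
                int[] keys = { KeyEvent.VK_TAB, KeyEvent.VK_ENTER, KeyEvent.VK_A,
                               KeyEvent.VK_1, KeyEvent.VK_DELETE, KeyEvent.VK_ESCAPE };

                Thread.sleep(5000); // time to focus the application under test

                for (int i = 0; i < 10000; i++) {
                    if (rnd.nextBoolean()) {
                        int key = keys[rnd.nextInt(keys.length)];
                        robot.keyPress(key);
                        robot.keyRelease(key);
                    } else {
                        // Click somewhere inside an assumed app-window region.
                        robot.mouseMove(100 + rnd.nextInt(800), 100 + rnd.nextInt(600));
                        robot.mousePress(InputEvent.BUTTON1_DOWN_MASK);
                        robot.mouseRelease(InputEvent.BUTTON1_DOWN_MASK);
                    }
                    robot.delay(50);
                }
            }
        }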

  • Oracle TimesTen In-Memory Database Performance on SPARC T4-2

    - by Brian
    The Oracle TimesTen In-Memory Database is optimized to run on Oracle's SPARC T4 processor platforms running Oracle Solaris 11, providing unsurpassed scalability, performance, upgradability, protection of investment and return on investment. The following demonstrate the value of combining Oracle TimesTen In-Memory Database with SPARC T4 servers and Oracle Solaris 11:

    On a Mobile Call Processing test, the 2-socket SPARC T4-2 server outperforms Oracle's SPARC Enterprise M4000 server (4 x 2.66 GHz SPARC64 VII+) by 34%, and Oracle's SPARC T3-4 (4 x 1.65 GHz SPARC T3) by 2.7x, or 5.4x per processor.

    Utilizing the TimesTen Performance Throughput Benchmark (TPTBM), the SPARC T4-2 server delivers 2.1x the overall performance of a 4-socket SPARC Enterprise M4000 server in read-only mode and 1.5x the performance in update-only testing. This is 4.2x more performance per processor than the SPARC64 VII+ 2.66 GHz based system, 10x more performance per processor than the SPARC T2+ 1.4 GHz server, and 1.6x better performance per processor than the SPARC T3 1.65 GHz based server.

    In replication testing, the two-socket SPARC T4-2 server is over 3x faster than a four-socket SPARC Enterprise T5440 server in both an asynchronous replication environment and highly available 2-Safe replication. This testing emphasizes parallel replication between systems.

    Performance Landscape

    Mobile Call Processing Test Performance
      System       Processor                Sockets/Cores/Threads  Tps
      SPARC T4-2   SPARC T4, 2.85 GHz       2 / 16 / 128           218,400
      M4000        SPARC64 VII+, 2.66 GHz   4 / 16 / 32            162,900
      SPARC T3-4   SPARC T3, 1.65 GHz       4 / 64 / 512            80,400

    TimesTen Performance Throughput Benchmark (TPTBM), Read-Only
      System       Processor                Sockets/Cores/Threads  Tps
      SPARC T3-4   SPARC T3, 1.65 GHz       4 / 64 / 512           7.9M
      SPARC T4-2   SPARC T4, 2.85 GHz       2 / 16 / 128           6.5M
      M4000        SPARC64 VII+, 2.66 GHz   4 / 16 / 32            3.1M
      T5440        SPARC T2+, 1.4 GHz       4 / 32 / 256           3.1M

    TimesTen Performance Throughput Benchmark (TPTBM), Update-Only
      System       Processor                Sockets/Cores/Threads  Tps
      SPARC T4-2   SPARC T4, 2.85 GHz       2 / 16 / 128           547,800
      M4000        SPARC64 VII+, 2.66 GHz   4 / 16 / 32            363,800
      SPARC T3-4   SPARC T3, 1.65 GHz       4 / 64 / 512           240,500

    TimesTen Replication Tests
      System       Processor                Sockets/Cores/Threads  Asynchronous  2-Safe
      SPARC T4-2   SPARC T4, 2.85 GHz       2 / 16 / 128           38,024        13,701
      SPARC T5440  SPARC T2+, 1.4 GHz       4 / 32 / 256           11,621         4,615

    Configuration Summary

    SPARC T4-2 server: 2 x SPARC T4 processors, 2.85 GHz; 256 GB memory; 1 x 8 Gb/s FC QLogic HBA; 1 x 6 Gb/s SAS HBA; 4 x 300 GB internal disks; Sun Storage F5100 Flash Array (40 x 24 GB flash modules); 1 x Sun Fire X4275 server configured as COMSTAR head.

    SPARC T3-4 server: 4 x SPARC T3 processors, 1.6 GHz; 512 GB memory; 1 x 8 Gb/s FC QLogic HBA; 8 x 146 GB internal disks; 1 x Sun Fire X4275 server configured as COMSTAR head.

    SPARC Enterprise M4000 server: 4 x SPARC64 VII+ processors, 2.66 GHz; 128 GB memory; 1 x 8 Gb/s FC QLogic HBA; 1 x 6 Gb/s SAS HBA; 2 x 146 GB internal disks; Sun Storage F5100 Flash Array (40 x 24 GB flash modules); 1 x Sun Fire X4275 server configured as COMSTAR head.

    Software configuration: Oracle Solaris 11 11/11; Oracle TimesTen 11.2.2.4.

    Benchmark Descriptions

    The TimesTen Performance Throughput Benchmark (TPTBM) is shipped with TimesTen and measures the total throughput of the system. The workload can test read-only, update-only, delete and insert operations as required.

    Mobile Call Processing is a customer-based workload for processing calls made by mobile phone subscribers. The workload has a mixture of read-only, update, and insert-only transactions. The peak throughput performance is measured from multiple concurrent processes executing the transactions until a peak performance is reached via saturation of the available resources.

    The parallel replication tests use both asynchronous and 2-Safe replication methods. For asynchronous replication, transactions are processed in batches to maximize the throughput capabilities of the replication server and network. In 2-Safe replication, also known as no-data-loss or high-availability replication, transactions are replicated between servers immediately, emphasizing low latency. For both environments, performance is measured by the number of parallel replication servers and the maximum transactions per second across all concurrent processes.

    See Also: SPARC T4-2 Server (oracle.com, OTN); Oracle TimesTen In-Memory Database (oracle.com, OTN); Oracle Solaris (oracle.com, OTN); Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN).

    Disclosure Statement: Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

    Read the article

  • Strange performance issue with Dell R7610 and LSI 2208 RAID controller

    - by GregC
    Connecting the controller to any of the three PCIe x16 slots yields choppy read performance of around 750 MB/s, while the lowly PCIe x4 slot yields a steady 1.2 GB/s read. This is with the same files, the same Windows Server 2008 R2 OS, the same RAID-6 24-disk Seagate ES.2 3TB array on an LSI 9286-8e, the same Dell R7610 Precision Workstation with A03 BIOS, the same W5000 graphics card (no other cards), the same settings, etc. I see super-low CPU utilization in both cases. SiSoft Sandra reports x8 at 5 GT/s in the x16 slot, and x4 at 5 GT/s in the x4 slot, as expected. I'd like to be able to rely on the sheer speed of the x16 slots. What gives? What can I try? Any ideas? Please assist. Cross-posted from http://en.community.dell.com/support-forums/desktop/f/3514/t/19526990.aspx Follow-up information: We did some more performance testing, reading from 8 SSDs connected directly (without an expander chip), which means both SAS cables were utilized. We saw nearly double the performance, but it varied from run to run: 2.0, 1.8, 1.6, and 1.4 GB/s were observed, then performance jumped back up to 2.0. The SSD RAID-0 tests were conducted in a x16 PCIe slot, all other variables kept the same. It seems to me that we were getting double the performance of the HDD-based RAID-6 array. Just for reference: the maximum possible read burst speed over a single channel of SAS 6 Gb/s is 570 MB/s, due to 8b/10b encoding and protocol limitations (a SAS cable provides four such channels).

    Read the article

  • Looking for application performance tracking software

    - by JavaRocky
    I have multiple Java-based applications which produce statistics on how long method calls take. Right now the information is written to a log file and I analyse performance that way. However, with multiple apps and more monitoring requirements this is becoming a bit overwhelming. I am looking for an application which will collect the stats and graph them so I can analyse performance and be aware of performance degradation. I have looked at SolarWinds Application Performance Monitoring; however, it polls periodically to gather information, whereas my applications are totally event-based, and we would like to graph and track them accordingly. I almost started hacking together some scripts to produce Google Charts, but surely there are applications which do this already. Suggestions?

    Read the article
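
    One route, sketched here under the assumption that adding a library is acceptable: the Dropwizard/Codahale Metrics library (com.codahale.metrics) records each method call as an in-process timer event, which matches an event-based app, and its reporters handle the collecting; the GraphiteReporter (not shown) pushes the same timers to Graphite for graphing. The timer name and the simulated workload below are placeholders:

        import java.util.concurrent.TimeUnit;
        import com.codahale.metrics.ConsoleReporter;
        import com.codahale.metrics.MetricRegistry;
        import com.codahale.metrics.Timer;

        public class MethodTimings {

            static final MetricRegistry registry = new MetricRegistry();
            static final Timer lookupTimer = registry.timer("service.lookup");

            public static void main(String[] args) throws Exception {
                // Console output just for demonstration; swap in GraphiteReporter
                // (or similar) to feed a real dashboard.
                ConsoleReporter reporter = ConsoleReporter.forRegistry(registry)
                        .convertDurationsTo(TimeUnit.MILLISECONDS)
                        .build();
                reporter.start(10, TimeUnit.SECONDS);

                for (int i = 0; i < 100; i++) {
                    Timer.Context ctx = lookupTimer.time();
                    try {
                        Thread.sleep(20); // stand-in for the real method call
                    } finally {
                        ctx.stop();       // records one duration event
                    }
                }
            }
        }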

  • Hyper-V performance comparisons vs physical client?

    - by rwmnau
    Are there any comparisons between Hyper-V client machines and their physical equivalents? I've looked around and can find 4000 articles about improving Hyper-V performance, but I can't find any that actually do a side-by-side comparison or give benchmarking numbers. Ideally, I'm interested in a comparison of CPU, memory, disk, and graphics performance between something like the following: a powerful workstation (with plenty of RAM) with Windows 7 installed on it directly, versus the same exact workstation with Hyper-V Server 2008 R2 (the bare server role) and a full-screen Windows 7 client machine. Virtual Server 2005 had performance that didn't compare at all with actual hardware, but with the advances in CPU and hardware-level virtualization, has performance improved significantly? How obvious would it be to a user of the two scenarios above that one of them was virtualized, and does anybody know of actual benchmarking of this type?

    Read the article

  • LTO 2 tape performance in LTO 3 drive

    - by hmallett
    I have a pile of LTO-2 tapes, and both an LTO-2 drive (HP Ultrium 460e) and an autoloader with an LTO-3 drive (Tandberg T24 autoloader, with an HP drive). Performance of the LTO-2 tapes in the LTO-2 drive is adequate and consistent. HP L&TT tells me that the tapes can be read and written at 64 MB/s, which seems in line with the performance specifications of the drive. When I perform a backup (over the network) using Symantec Backup Exec, I get about 1700 MB/min backup and verify speeds, which is slower, but still adequate. Performance of the LTO-2 tapes in the LTO-3 drive in the autoloader is a different story. HP L&TT tells me the tapes can be read at 82 MB/s but written at only 49 MB/s; the drop in write speed seems unusual, but is not the end of the world. When I perform a backup (over the network) using Symantec Backup Exec, though, I get about 331 MB/min backup and 205 MB/min verify speeds, which is not only much slower overall, but also much slower for reads than for writes. Notes: the comparison testing was done on the same server, SCSI card and SCSI cable, with the same backup data set and the same tape each time. The tape and drives are error-free (according to HP L&TT and Backup Exec). The SCSI card is a U160 card, which is not normally recommended for LTO-3, but we're not writing to LTO-3 tapes at LTO-3 speeds, and a U320 SCSI card is not available to me at the moment. As I scratch my head over the reason for the performance drop, my first question is: while LTO drives can write to previous-generation LTO tapes, does doing so normally incur a performance penalty?

    Read the article

  • Raid-5 Performance per spindle scaling

    - by Bill N.
    So I am stuck in a corner. I have a storage project that is limited to 24 spindles and requires heavy random write (the corresponding read side is purely sequential). It needs every bit of space on my drives, ~13 TB total in an n-1 RAID-5, and has to go fast, over 2 GB/s sort of fast. The obvious answer is to use a stripe/concat (RAID-0/1), or better yet RAID-10 in place of the RAID-5, but that is disallowed for reasons beyond my control. So I am here asking for help in getting a suboptimal configuration to be as good as it can be. The array is built on direct-attached SAS-2 10K RPM drives, backed by an ARECA 18xx series controller with 4 GB of cache, with 64k array stripes and a 4K-stripe-aligned XFS file system with 24 allocation groups (to avoid some of the penalty for being RAID-5). The heart of my question is this: in the same setup with 6 spindles/AGs I see near disk-limited performance on the write, ~100 MB/s per spindle; at 12 spindles I see that drop to ~80 MB/s, and at 24, ~60 MB/s. I would expect that with distributed parity and matched AGs, the performance should scale with the number of spindles, or be worse at small spindle counts, but this array is doing the opposite. What am I missing? Should RAID-5 performance scale with the number of spindles? Many thanks for your answers and any ideas, input, or guidance. --Bill Edit: Improving RAID performance is the other relevant thread I was able to find; it discusses some of the same issues in the answers, though it still leaves me without an answer on the performance scaling.

    Read the article
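
    One back-of-the-envelope number worth keeping in mind when reasoning about this (a hedged rule of thumb, not an explanation of the figures above): the classic RAID-5 small-write penalty is four disk I/Os per random write (read old data, read old parity, write new data, write new parity), so with N spindles the ideal scaling is

        random-write throughput ~= (N x per-disk throughput) / 4

    That penalty is a constant fraction per write, so it predicts lower absolute numbers than RAID-0/10 but still linear scaling with spindle count; it does not predict per-spindle throughput falling from ~100 to ~60 MB/s as spindles are added, which points at something else (controller saturation, cache pressure, or stripe-width effects).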

  • Good C++ books regarding Performance?

    - by Leon
    Besides the books everyone knows about, like Meyers's three Effective C++/STL books, are there any other really good C++ books specifically aimed at performance code? Maybe for gaming, telecommunications, finance/high-frequency trading, etc.? When I say performance, I mean things a normal C++ book wouldn't bother advising on because the gain in performance isn't worthwhile for 95% of C++ developers: suggestions like avoiding virtual dispatch, going into great depth about inlining, and so on. A book going into great depth on C++ memory allocation or multithreading performance would obviously be very useful.

    Read the article

  • Is there a Java Package for testing RESTful APIs?

    - by Zachary Spencer
    I'm getting ready to dive into testing of a RESTful service. The majority of our systems are built in Java with Eclipse, so I'm hoping to stay there. I've already found rest-client (http://code.google.com/p/rest-client/) for manual and exploratory testing, but is there a stack of Java classes that might make my life easier? I'm using TestNG as the test platform, but would love helper libraries that can save me time. I've found http4e (http://www.ywebb.com/), but I'd really like something FOSS.

    Read the article
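
    A minimal sketch of a TestNG check against a RESTful endpoint using only the JDK's HttpURLConnection, so no extra stack is strictly required; the URL and expected body are placeholders. Libraries such as REST Assured wrap this pattern in a friendlier fluent DSL.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.HttpURLConnection;
        import java.net.URL;
        import org.testng.Assert;
        import org.testng.annotations.Test;

        public class RestSmokeTest {

            @Test
            public void getUserReturns200AndJson() throws Exception {
                URL url = new URL("http://localhost:8080/api/users/1"); // placeholder
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("GET");
                conn.setRequestProperty("Accept", "application/json");

                Assert.assertEquals(conn.getResponseCode(), 200);
                Assert.assertTrue(conn.getContentType().startsWith("application/json"));

                // Read the body and make a coarse content assertion.
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(conn.getInputStream(), "UTF-8"));
                StringBuilder body = new StringBuilder();
                for (String line; (line = in.readLine()) != null; ) {
                    body.append(line);
                }
                Assert.assertTrue(body.toString().contains("\"id\":1"));
            }
        }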

  • Looking for Info on a Javascript Testing framework

    - by DaveDev
    Hi. Can somebody fill me in on JavaScript testing frameworks? I'm working on a project now, and as the JS (mostly jQuery) libraries grow, it's getting more and more difficult to introduce change or refactor, because I have no way of guaranteeing the accuracy of the code without manually testing everything. I don't really know anything about JavaScript testing frameworks or how they integrate/operate in a .NET project, so I thought I'd ask here. What would a good testing framework be for .NET? What does a JavaScript test look like? (E.g., with NUnit, I have [TestFixture] classes and [Test] methods in a ProjectTests assembly.) How do I run a JavaScript test? What are the conceptual differences between testing JS and testing C#? Is there anything else that would be worth knowing? Thanks, Dave

    Read the article

< Previous Page | 15 16 17 18 19 20 21 22 23 24 25 26  | Next Page >