Search Results

Search found 657 results on 27 pages for 'metrics'.

Page 4/27

  • which metric(s) show the difference between object-oriented and procedural code

    - by twieger
    Which metric(s) could help indicate that I have procedural code instead of object-oriented code? I would like a set of simple metrics that indicate, with high probability, that the analyzed code contains procedural transaction scripts and an anemic domain model instead of following sound object-oriented design principles. I would be happy with any set of useful metrics and tools for measuring them. Thanks, Thomas!

    Read the article

  • Tips on a tool to measure code quality?

    - by Cristi Diaconescu
    I'm looking for a tool that can provide code quality metrics. For instance, it could report very long functions (spaghetti code), very complex classes (which could contain do-it-all code), and so on. While we're on the (subjective :-) ) subject of code quality, what other code metrics would you suggest? I'm targeting C#/.NET code, but I'm sure this could extend to most programming languages.

    Read the article

  • How do I gather TeamCity code coverage reports from multiple projects into one report?

    - by Loofer
    We use the built-in coverage support in TeamCity 6 (about to upgrade to 7.1). Seeing the code coverage (or other metrics) of a particular build is fine, since we can navigate to that build, but it would be great if we could pluck out a few interesting metrics from all or some of the current projects/build configurations and display them all together. For convenience I would expect the new display to be accessible from within TeamCity itself; however, if there are approaches that require a separate tool, we could look at them. Thanks

    Read the article

  • Finding key Solr performance metrics

    - by Mike Malloy
    To improve the performance of Solr: find your slowest searches; monitor query results, cache hit rate and cache size, the document cache and the filter cache; and find problems with Solr update handlers by tracking index operations and document operations. There is a tool from New Relic which may help: http://www.newrelic.com/solr.html

    Read the article

  • Sun Directory Server 5.2 performance

    - by tmow
    Hi all, I'm using logconv.pl (provided by Sun) to measure performance on my server. These two results are worrying me a bit: Binds: 192164, Unbinds: 111569. The difference between the two is quite big; how can I determine which connections never send an unbind? As stated by Lodovic: "Many applications just close the connections without sending an Unbind request." That alone could explain the difference. But logconv.pl doesn't show details about those connections. Do you know any other tools, or can you suggest some queries or anything else that could help me find the root cause? And do you think the performance might improve if I fix the issue?
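    If logconv.pl doesn't break this down, the access log itself can. Below is a rough sketch that pairs BIND and UNBIND operations per connection and prints the connections that bound but never unbound. The assumed line format (conn=N ... BIND / UNBIND) is based on typical Sun/Oracle directory-server access logs, so the matching may need adjusting for your version; the log path is whatever you pass on the command line.

        import re
        import sys

        CONN = re.compile(r'conn=(\d+)')

        def unmatched_binds(log_lines):
            """Return connection ids that issued a BIND but no UNBIND."""
            bound, unbound = set(), set()
            for line in log_lines:
                m = CONN.search(line)
                if not m:
                    continue
                conn_id = m.group(1)
                if " BIND " in line:
                    bound.add(conn_id)
                elif " UNBIND" in line:
                    unbound.add(conn_id)
            return bound - unbound

        if __name__ == "__main__":
            with open(sys.argv[1]) as f:   # path to the directory server access log
                for conn_id in sorted(unmatched_binds(f), key=int):
                    print(f"conn={conn_id} bound without UNBIND")

    Cross-referencing the offending conn ids with the BIND DNs in the same log should point at which applications are closing connections without unbinding.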

    Read the article

  • Measuring custom statistics with sar

    - by Will Glass
    I have a server application which I think is leaking file handles. I want to track the usage of file descriptors over time on my Linux (Ubuntu) server. I've figured out that I can count the file descriptors in use by a process with: lsof -p `pgrep the-process-name` | wc -l Since I'm already using sysstat and sar to track various metrics, I thought it'd be nice to display this one with sar as well, measured every 10 minutes. Is it possible to add a custom metric to sar? Then I could easily report it out. If not, I'll write a simple cron job to collect this data and store it separately in a log file.
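    For the cron-job fallback, a minimal sketch of a collector that appends a timestamped descriptor count to a log; the process name is a placeholder, and it assumes a Linux /proc filesystem and permission to read the target process's fd directory.

        #!/usr/bin/env python3
        """Print a timestamped open-file-descriptor count for a process.

        Intended to be run from cron every 10 minutes, e.g.:
            */10 * * * * /usr/local/bin/fd_count.py >> /var/log/fd_count.log
        """
        import datetime
        import glob
        import subprocess

        PROCESS_NAME = "the-process-name"  # placeholder, replace with the real name

        def fd_count(pid: str) -> int:
            # Each entry in /proc/<pid>/fd is one open file descriptor.
            return len(glob.glob(f"/proc/{pid}/fd/*"))

        def main() -> None:
            pids = subprocess.run(["pgrep", PROCESS_NAME],
                                  capture_output=True, text=True).stdout.split()
            total = sum(fd_count(pid) for pid in pids)
            print(f"{datetime.datetime.now().isoformat()} {total}")

        if __name__ == "__main__":
            main()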

    Read the article

  • What are the industry metrics for average spend on dev hardware and software? [on hold]

    - by RationalGeek
    I'm trying to budget for my dev shop and compare our budget items to industry expectations. I'm hoping to find some information on what percentage of a dev's salary is generally spent on tooling, both hardware and software. Where can I find such information? If instead there is a source that looks at raw dollars, that is useful too; I can extrapolate what I need from that. NOTE: Your anecdotal evidence from your own job will not be very helpful. I'm looking for industry-average statistics from a credible source.

    EDIT: I'm reluctant to even keep this question going based on the passionate negative responses of commenters, but I do think this is valuable information (assuming anyone will care to answer), so let me make one attempt to clarify why I'm looking for this information, and then leave it at that. I'm not sure why understanding and validating my motives is a necessary step to providing the information, but apparently that is the case, so I will do my best.

    Firstly, let me respond to the idea that us "management types" shouldn't use these types of metrics to evaluate budgets. I agree in part. Ideally, you should spend whatever is necessary on developers in order to keep them fully happy and productive, and this is true of all employees. However, companies operate in a world of limited resources, and every dollar spent in one area means a dollar not spent in another. So it is not enough to simply say "I need to spend $10,000 per developer next year" without having some way to justify that position. One way to help justify it is to compare yourself against the industry. If it is the case that, on average, software shops spend 5% (making up that number) of their total development budget on tooling (salaries being the large portion of the other 95%, for argument's sake), and I'm only spending 3%, it helps in the justification process. So it is not my intent to use this information to limit what I spend on developers, but rather to arm myself with the necessary justification to spend what I need to spend on developers to give them the best tools I can. I have been a developer for many years and I understand the need for proper tooling.

    Next, let's examine the idea that even considering the relationship between spend on developer salaries and spend on developer tooling is ludicrous and should be banned from budgetary thinking. As Jimmy Hoffa put it in their comment, it's like saying "I'm going to spend no more than 10% of median employee salary on light bulbs and coffee from now on." Well, yes, it is like saying that, and from a budgeting perspective this is a useful way to look at things. If you know that, on average, an employee consumes X dollars of coffee a year, then you can project a coffee budget based on that. And you can compare it to an industry metric to understand where you fall: do you spend more on coffee than other companies or less? Why might this be? If you are a coffee supply manager, that seems like a useful thought process. The same seems to hold true for developers.

    Now, on to the idea that I need to compare "apples to apples" and only look at other shops that are in the same place geographically, the same business, the same application architecture, and the same development frameworks. I guess if I could find a statistic that said "a shop that is exactly identical to yours spends X on developer tooling" it would be wonderful. But there is plenty of value in an average statistic. Here's an analogy: let's say you are working on a household budget and need to decide how much to spend on groceries. Is it enough to know that the average consumer spends 15% on groceries and therefore decide that you will budget exactly 15%? No. You have to tweak your budget based on your individual needs and situation, but the generalized statistic does help in this evaluation. You can know if your budget is grossly off from what others are doing, and this can help you figure out why. So, I will concede the point that it would be better to find statistics that align with my shop, though I think any statistics I could find would be useful for what I'm doing. In that light, let's say that my shop is mostly focused on ASP.NET web applications. That doesn't map perfectly to reality, because large enterprises have very heterogeneous IT environments, but if I had to pick one technology that is our focus, that would be it. Still, if you were to point me at some statistics related to a Linux shop doing embedded Java applications, I would find them useful as a point of comparison.

    SUMMARY: Let me try to rephrase my question. I'm trying to find industry metrics on how much dev shops spend on developer tooling, both hardware and software. I don't so much care whether it is expressed as a percentage of total budget, as X dollars per dev, or as Y percent of salary; any metric would be useful. If there are metrics that are specific to ASP.NET dev shops in the Northeast US, all the better, but I would be happy to find anything.

    Read the article

  • Source of parsers for programming languages?

    - by Arkaaito
    I'm dusting off an old project of mine which calculates a number of simple metrics about large software projects. One of the metrics is the length of files/classes/methods. Currently my code "guesses" where class/method boundaries are based on a very crude algorithm (traverse the file, maintaining a "current depth" and adjusting it whenever you encounter unquoted brackets; when you return to the level a class or method began on, consider it exited). However, there are many problems with this procedure, and a "simple" way of detecting when your depth has changed is not always effective. To make this give accurate results, I need to use the canonical way (in each language) of detecting function definitions, class definitions and depth changes. This amounts to writing a simple parser to generate parse trees containing at least these elements for every language I want my project to be applicable to. Obviously parsers have been written for all these languages before, so it seems like I shouldn't have to duplicate that effort (even though writing parsers is fun). Is there some open-source project which collects ready-to-use parser libraries for a bunch of source languages? Or should I just be using ANTLR to make my own from scratch? (Note: I'd be delighted to port the project to another language to make use of a great existing resource, so if you know of one, it doesn't matter what language it's written in.)
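    As a language-specific illustration of the parse-tree approach (not the multi-language solution the question ultimately needs), a sketch using Python's built-in ast module: class and method boundaries come from the parser rather than from hand-tracked bracket depth, which is exactly what the crude algorithm above gets wrong.

        import ast

        def length_metrics(source: str, filename: str = "<src>"):
            """Report line counts for each class/function using a real parse tree."""
            tree = ast.parse(source, filename=filename)
            results = []
            for node in ast.walk(tree):
                if isinstance(node, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
                    # end_lineno is available on Python 3.8+
                    length = node.end_lineno - node.lineno + 1
                    results.append((type(node).__name__, node.name, length))
            return results

        if __name__ == "__main__":
            sample = "class Foo:\n    def bar(self):\n        x = 1\n        return x\n"
            for kind, name, lines in length_metrics(sample):
                print(f"{kind:12} {name:10} {lines} lines")

    For other source languages the same shape applies: feed an existing grammar (ANTLR or otherwise) and walk the resulting tree for class/method nodes instead of counting brackets.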

    Read the article

  • How can I get metrics such as incoming and outgoing traffic with Apache servers?

    - by hhh
    Suppose a network consisting of hubs A, B, C, D ... and X. I am looking for ways to visualize how users use the network, such as incoming traffic, outgoing traffic and other metrics. In the Apache logs I can see some errors if something did not work, but I have no realistic picture of such a system in general, i.e. how the system actually works. I am looking for some sort of flow analysis: I would like to get raw data to build a graph, then analyze the graph with some metrics, although I do not even know the right metrics yet (perhaps some dispersion metric). My goal is to create some sort of objective way to judge quality.
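    For the "raw data to build a graph" part, a small sketch that aggregates response bytes per client per day from an Apache access log. It assumes the common/combined log format and a hypothetical log path; capturing incoming bytes as well would need something like mod_logio, which is not shown here.

        import re
        from collections import defaultdict

        # Common Log Format: host ident user [date] "request" status bytes ...
        LOG_LINE = re.compile(
            r'^(\S+) \S+ \S+ \[(\d{2}/\w{3}/\d{4}):[^\]]*\] "[^"]*" \d{3} (\d+|-)')

        def traffic_by_host_and_day(lines):
            """Sum response bytes per (client IP, day); '-' means zero bytes sent."""
            totals = defaultdict(int)
            for line in lines:
                m = LOG_LINE.match(line)
                if not m:
                    continue
                host, day, size = m.group(1), m.group(2), m.group(3)
                totals[(host, day)] += 0 if size == "-" else int(size)
            return totals

        if __name__ == "__main__":
            with open("/var/log/apache2/access.log") as f:   # path is an assumption
                for (host, day), nbytes in sorted(traffic_by_host_and_day(f).items()):
                    print(f"{day} {host} {nbytes}")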

    Read the article

  • useful JMX metrics for monitoring WebSphere Application Server (and apps inside it)?

    - by Justin Grant
    When managing custom Java applications hosted inside WebSphere Application Server, what JMX metrics do you find most useful for monitoring performance, monitoring availability, and troubleshooting problems? And how do you prefer to slice and visualize those metrics (e.g. chart by top 10 hosts, graph by app, etc.). The more details I can get, the better, as I need to specify a standard set of reports which IT can offer to owners of applications hosted by IT, which those owners can customize but many won't bother. So I'll need to come up with a bunch of generally-applicable reports which most groups can use out-of-the-box. Obviously there's no one perfect answer to this question, so I'll accept the answer with the most comprehensive details and I'll be generous about upvoting any other useful answer. My question is WebSphere-specific, but I realize that most JMX metrics are equally applicable across any container, so feel free to give an answer for JBoss, Tomcat, WebLogic, etc.

    Read the article

  • Algorithm Analysis tool for java

    - by Mansoor
    I am looking for an algorithm analysis tool for Java that can calculate the Big O of a function. Ideally I would like to make it part of my build process, alongside my other code metrics tools. Even after searching on Google I am unable to find any open-source or commercial tool. Any suggestion would be welcome. Thanks

    Read the article

  • javancss includes problem

    - by senzacionale
    <javancss srcdir="C:\Projekti\KIS\Model\src" generateReport="true" includes="**/*.java" outputfile="docs/javancss_metrics.xml" format="xml" /> If I use includes="**/*.java" as above, no metrics are calculated. If I delete the includes attribute, it works. Any idea why?

    Read the article

  • how many lines of code does my class library have

    - by zachary
    For metrics reasons I need to know how many lines of code my class library has. I'm doing this for code coverage: if class library 1 has 50 lines of code and 100% coverage, and class library 2 has 500 lines of code and 0% coverage, then my overall coverage is about 9% (50 covered lines out of 550). Any idea how to do this? Is there a utility, or a way to do it with Visual Studio?
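    The aggregate figure itself is just a line-weighted average; a small sketch using the numbers from the question (the tooling question of where the line counts come from is separate):

        def overall_coverage(libraries):
            """libraries: iterable of (total_lines, coverage_fraction) per assembly."""
            covered = sum(lines * cov for lines, cov in libraries)
            total = sum(lines for lines, _ in libraries)
            return covered / total if total else 0.0

        # 50 lines at 100% coverage plus 500 lines at 0% coverage -> about 9%
        print(f"{overall_coverage([(50, 1.0), (500, 0.0)]):.1%}")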

    Read the article

  • Is there a Windows equivalent of Unix 'CPU steal time'?

    - by Steffen Opel
    In order to assess performance monitoring accuracy on virtualization platforms, CPU steal time has become an increasingly relevant metric - see "EC2 monitoring: the case of stolen CPU" for an instructive summary in the context of Amazon EC2, and IBM's paper on CPU time accounting for a more in-depth technical explanation (including illustrations) of the concept: "Steal time is the percentage of time a virtual CPU waits for a real CPU while the hypervisor is servicing another virtual processor." Accordingly, it is exposed in most related Unix/Linux monitoring tools nowadays - see e.g. the %steal or st columns in sar or top: "st -- Steal Time. The amount of CPU 'stolen' from this virtual machine by the hypervisor for other tasks (such as running another virtual machine)." I've been unable to figure out how to capture the same metric on Windows, though. Is this possible already? (Ideally for the Windows Server 2008 R2 AMIs on EC2, and via a respective Windows Performance Counter, of course.)

    Read the article

  • Calculating IOPS for a single HDD - what am I doing wrong?

    - by red888
    So I know there is no standardized way of calculating IOPS for an HDD, but from everything I have read it appears one of the most accurate formulas is the following: {seek time} + {rotational latency} + ({block size} / {data transfer rate}), which gives the time per I/O in milliseconds, or what the book I've been reading calls "disk service time". Rotational latency is calculated as half of one rotation, in milliseconds. This was taken from the EMC book "Information Storage and Management" - arguably a pretty reliable source, right/wrong? Putting this formula into practice, consider this Seagate data sheet. I am going to calculate IOPS for the ST3000DM001 model for a block size of 4 KB: Seek Average (Write) = 9.5 ms (I'll be measuring IOPS for writes), spindle speed = 7200 rpm, Average Data Rate = 156 MB/s. So my variables are: Seek Time = 9.5 ms; Rotational latency = (0.5 / (7200 rpm / 60)) = 0.004 s = 4 ms; Data Rate = 156 MB/s = (0.156 MB/ms / 0.004 MB) = 39. Then 9.5 ms + 4 ms + 39 = 52.5, and 1 / (52.5 * 0.001) = 19 IOPS. 19 IOPS for this drive clearly is not right, so what am I doing wrong?

    Read the article

  • Monitoring tools that can take high rate and high volume?

    - by Jon Watte
    We're using Cacti with RRDTool to monitor and graph about 100,000 counters spread across about 1,000 Linux-based nodes. However, our current setup generally only gives us 5-minute graphs (with some data being minute-based); we often make changes where seeing feedback in "near real time" would be of value. I'd like approximately a week of 5- or 10-second data, a year of 1-minute data, and 5 years of 10-minute data. I have SSD disks and a dual-hexa-core server to spare.

    I tried setting up a Graphite/carbon/whisper server and had about 15 nodes pipe to it, but it only offers "average" as the retention function when promoting data to older buckets. This is almost useless -- I'd like min, max, average, standard deviation, and perhaps "total sum" and "number of samples", or perhaps "95th percentile", available. The developer claims there's a new back-end "in beta" that allows you to write your own function, but this appears to still only do 1:1 retention (when saving older data, you really want the statistics calculated into many streams from a single input). Also, "in beta" seems a little risky for this installation. If I'm wrong about this assumption, I'd be happy to be shown my error!

    I've heard Zabbix recommended, but it puts data into MySQL or some other SQL database. 100,000 counters on a 5-second interval means 20,000 tps, and while I have an SSD, I don't have an 8-way RAID-6 with battery-backed cache, which I think I'd need for that to work out :-) Again, if that's actually not a problem, I'd be happy to be shown the error of my ways. Also, can Zabbix do the single-data-stream, promote-with-statistics thing?

    Finally, Munin claims to have a new 2.0 coming out "in beta" right now, and it boasts custom retention plans. However, again, it's that "in beta" part -- has anyone used that for real, and at scale? How did it perform, if so?

    I'm almost thinking about using a graphing front-end (such as Graphite) and rolling my own retention back-end with a simple layer on top of mmap() and some stats. That wouldn't be particularly hard, and would probably perform very well, letting the kernel figure out the balance between frequency of flushing to disk and process operations. Any other suggestions I should look into? Note: it has to have shown itself able to sustain the kinds of data loads I'm suggesting above; if you can point at the specific implementation you're referencing, so much the better!
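    For clarity, a minimal sketch of the "promote with statistics" idea being asked for: collapsing raw samples into coarser buckets while keeping min, max, mean, standard deviation, count and sum, rather than only an average. It is plain Python and assumes no particular storage back-end.

        import math
        from collections import defaultdict

        def promote(samples, bucket_seconds=60):
            """samples: iterable of (unix_timestamp, value).
            Returns {bucket_start: (count, min, max, mean, stddev, total)}."""
            buckets = defaultdict(list)
            for ts, value in samples:
                buckets[int(ts) - int(ts) % bucket_seconds].append(value)
            stats = {}
            for start, values in buckets.items():
                n = len(values)
                mean = sum(values) / n
                var = sum((v - mean) ** 2 for v in values) / n
                stats[start] = (n, min(values), max(values), mean, math.sqrt(var), sum(values))
            return stats

        if __name__ == "__main__":
            # twelve 5-second samples promoted into a single 1-minute bucket
            raw = [(1020 + 5 * i, float(i)) for i in range(12)]
            for start, (n, lo, hi, mean, sd, total) in sorted(promote(raw).items()):
                print(start, n, lo, hi, round(mean, 2), round(sd, 2), total)

    A real back-end would persist one stream per statistic (min, max, mean, ...) so the coarser resolutions can be queried independently, which is the part Graphite's average-only retention was missing.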

    Read the article

  • Where do vendors publish internal transfer rates of HDDs?

    - by red888
    So I've started to dig into storage fundamentals and found that in order to calculate the IOPS of an HDD you need to know the internal transfer rate of the drive (the rate at which data moves between the platters and the disk's internal cache). I went on Newegg and even a few vendor sites and could not find this info published for any HDDs. Is it sometimes called something else? Take this link to a Seagate HDD, for instance: nowhere do I see "internal transfer rate", but I do see something called "Sustained Data Rate OD" - is that the same thing? Just so you know where I'm getting this info (book: "Information Storage and Management: Storing, Managing..."): Consider an example with the following specifications provided for a disk:
    - The average seek time is 5 ms in a random I/O environment; therefore, T = 5 ms.
    - Disk rotation speed of 15,000 revolutions per minute, or 250 revolutions per second, from which rotational latency (L) can be determined; L is one-half of the time taken for a full rotation, or L = (0.5/250) s expressed in ms.
    - 40 MB/s internal data transfer rate, from which the internal transfer time (X) is derived based on the block size of the I/O — for example, for an I/O with a block size of 32 KB, X = 32 KB / 40 MB/s.
    Consequently, the time taken by the I/O controller to serve an I/O of block size 32 KB is TS = 5 ms + (0.5/250) s + 32 KB / (40 MB/s) = 7.8 ms. Therefore, the maximum number of I/Os serviced per second, or IOPS, is 1/TS = 1/(7.8 × 10^-3) = 128 IOPS.
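    Writing the book's arithmetic out as a small sketch (it simply reproduces the example quoted above; the 40 MB/s "internal data transfer rate" is the figure the question is hunting for on the spec sheets):

        def disk_service_time_ms(seek_ms, rpm, block_kb, transfer_mb_s):
            """TS = seek time + rotational latency + (block size / internal transfer rate)."""
            rotational_latency_ms = 0.5 / (rpm / 60.0) * 1000.0   # half a rotation
            transfer_ms = (block_kb / 1024.0) / transfer_mb_s * 1000.0
            return seek_ms + rotational_latency_ms + transfer_ms

        # The EMC example: 5 ms seek, 15,000 rpm, 32 KB block, 40 MB/s internal rate
        ts = disk_service_time_ms(5.0, 15000, 32, 40)
        print(f"service time = {ts:.1f} ms, IOPS = {1000.0 / ts:.0f}")  # ~7.8 ms, ~128 IOPS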

    Read the article

  • How to distribute multiple executions of an app across many machines

    - by Salec
    I've got a simulation app (64-bit Windows) that runs without any user interaction. This app gathers information and pushes it to a remote MS SQL Server. What I'd like to do is execute this simulation as many times as I can on multiple machines after our nightly build has finished and has passed the test suite. If possible, I'd love to have the ability to configure it to stop after x total runs, or if the entire batch has taken over y hours. I've tried using Visual Studio's built-in test framework, since we already have a test lab set up with multiple agents. I created a single unit test that simply runs the simulation, then created an ordered test and added that single test multiple times (from what I gather, this is the only way to execute the same unit test more than once). I found that ordered tests are only run on a single agent and not distributed, which is very limiting. We use TeamCity to perform our nightly builds and I suspect it's possible to implement this on top of that, but I'm fairly new to TeamCity. We also have Jenkins and Bamboo available, and I'm open to any other software that would get the job done, presuming it runs on a 64-bit Windows OS. Any suggestions?

    Read the article

  • As our favorite imperative languages gain functional constructs, should loops be considered a code smell?

    - by Michael Buen
    In allusion to Dare Obasanjo's impressions on Map, Reduce, Filter ("Functional Programming in C# 3.0: How Map/Reduce/Filter can Rock your World"): "With these three building blocks, you could replace the majority of the procedural for loops in your application with a single line of code. C# 3.0 doesn't just stop there." Should we increasingly use them instead of loops? And should having loops (instead of those three building blocks of data manipulation) be one of the metrics for coding horrors in code reviews? And why? [NOTE] I'm not advocating fully functional programming for code that could simply be translated into loops (e.g. tail recursions). Asking for a politer term: considering that the phrase "code smell" is not so diplomatic, I posted another question (http://stackoverflow.com/questions/432492/whats-the-politer-word-for-code-smell) about the right word for "code smell", er.. utterly bad code. Should that phrase have a place in our programming parlance?
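    For concreteness, a small illustration of the contrast being debated, written in Python rather than the C# 3.0 of the quoted article (the three building blocks are the same idea in either language):

        from functools import reduce

        orders = [("widget", 3, 2.50), ("gadget", 0, 9.99), ("gizmo", 7, 1.25)]

        # Procedural loop: total value of orders that actually have a quantity
        total = 0.0
        for name, qty, price in orders:
            if qty > 0:
                total += qty * price

        # The same computation with filter / map / reduce
        total_fp = reduce(lambda acc, x: acc + x,
                          map(lambda o: o[1] * o[2],
                              filter(lambda o: o[1] > 0, orders)),
                          0.0)

        assert total == total_fp  # 3*2.50 + 7*1.25 = 16.25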

    Read the article

  • Metric to measure object-orientedness

    - by Jono
    Is there a metric that can assist in determining the object-orientedness of a system or application? I've seen some pretty neat metrics in the .NET Reflector Add-ins codeplex project, but nothing like this yet. If such a metric doesn't exist, would it even be possible or useful? There are the 3 supposed tenets of object-oriented programming: encapsulation, inheritance, and polymorphism; a tool that ranked programs against these might be able to show areas of a C# (or similar) code base where the whole object-oriented ideal was discarded, and perhaps how many bugs are associated with that area versus the rest of the project.

    Read the article

  • Basic site analytics doesn't tally with Google data

    - by Jenkz
    After being stumped by an earlier question (SO: google-analytics-domain-data-without-filtering) I've been experimenting with a very basic analytics system of my own. MySQL table: hit_id, subsite_id, timestamp, ip, url. The subsite_id lets me drill down to a folder (as explained in the previous question). I can now get the following metrics: Page Views, grouped by subsite_id and date; Unique Page Views, grouped by subsite_id, date, url and IP (not necessarily how Google does it!); and the usual "most visited page", "likely time to visit", etc. I've now compared my data to that in Google Analytics and found that Google has lower values for each metric, i.e. my own setup is counting more hits than Google. So I've started discounting IPs from various web crawlers: Google, Yahoo and Dotbot so far. Short questions: Is it worth collating a list of all major crawlers to discount, and is any such list likely to change regularly? Are there any other obvious filters that Google will be applying to GA data? What other data would you collect that might be of use further down the line? What variables does Google use to work out entrance search keywords to a site? The data is only going to be used internally for our own "subsite ranking system", but I would like to show my users some basic data (page views, most popular pages, etc.) for their reference.

    Read the article

  • SQL IO and SAN troubles

    - by James
    We are running two servers with an identical software setup but different hardware. The first one is a VM on VMware on a normal tower server with dual-core Xeons, 16 GB RAM and a 7200 RPM drive. The second one is a VM on XenServer on a powerful brand-new rack server, with 4-core Xeons and shared storage. We are running Dynamics AX 2012 and SQL Server 2008 R2. When I insert 15,000 records into a table on the slow tower server (as a test), it does so in 13 seconds. On the fast server it takes 33 seconds. I re-ran these tests several times with the same results. I have a feeling it is some sort of IO bottleneck, so I ran SQLIO on both.

    Results for the slow tower server (sqlio v1.5.SG, system counter for latency timings at 14318180 counts per second; every run: 8 threads, 120 seconds, 8 outstanding I/Os per thread, hardware disk cache enabled but not file cache, against the 5120 MB file C:\TestFile.dat):

      sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS C:\TestFile.dat (8 KB random writes):
        IOs/sec: 226.97, MBs/sec: 1.77; latency min/avg/max (ms): 0 / 281 / 467; histogram: 99% of I/Os at 24+ ms.
      sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS C:\TestFile.dat (8 KB random reads):
        IOs/sec: 91.34, MBs/sec: 0.71; latency min/avg/max (ms): 14 / 699 / 1124; histogram: 100% of I/Os at 24+ ms.
      sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS C:\TestFile.dat (64 KB sequential writes):
        IOs/sec: 1094.50, MBs/sec: 68.40; latency min/avg/max (ms): 0 / 58 / 467; histogram: 100% of I/Os at 24+ ms.
      sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS C:\TestFile.dat (64 KB sequential reads):
        IOs/sec: 1155.31, MBs/sec: 72.20; latency min/avg/max (ms): 17 / 55 / 205; histogram: 100% of I/Os at 24+ ms.

    Results for the fast rack server (sqlio v1.5.SG, system counter for latency timings at 62500000 counts per second, same parameters). The first four runs, the same four commands against E:\TestFile.dat, all failed with: open_file: CreateFile (E:\TestFile.dat for write/read): The system cannot find the path specified. Re-running test.bat against c:\TestFile.dat:

      sqlio -kW -t8 -s120 -o8 -frandom -b8 -BH -LS c:\TestFile.dat (8 KB random writes):
        IOs/sec: 2575.77, MBs/sec: 20.12; latency min/avg/max (ms): 1 / 24 / 655; histogram (ms: %): 3: 5, 4: 8, 5: 9, 6: 9, 7: 9, 8: 8, 9: 5, 10: 3, 11-14: 1 each, 24+: 37, rest 0.
      sqlio -kR -t8 -s120 -o8 -frandom -b8 -BH -LS c:\TestFile.dat (8 KB random reads):
        IOs/sec: 1141.39, MBs/sec: 8.91; latency min/avg/max (ms): 1 / 55 / 652; histogram (ms: %): 16-23: 1 each, 24+: 91, rest 0.
      sqlio -kW -t8 -s120 -o8 -fsequential -b64 -BH -LS c:\TestFile.dat (64 KB sequential writes):
        IOs/sec: 341.37, MBs/sec: 21.33; latency min/avg/max (ms): 5 / 186 / 120037; histogram: 100% of I/Os at 24+ ms.
      sqlio -kR -t8 -s120 -o8 -fsequential -b64 -BH -LS c:\TestFile.dat (64 KB sequential reads):
        IOs/sec: 1024.07, MBs/sec: 64.00; latency min/avg/max (ms): 5 / 61 / 81632; histogram: 100% of I/Os at 24+ ms.

    Three of the four tests are, to my mind, within reasonable parameters for the rack server. However, the 64 KB sequential write test is incredibly slow on the rack server (68 MB/s on the slow tower vs 21 MB/s on the rack), and the 64 KB read speed also seems slow. Is this enough to say there is some sort of bottleneck with the shared storage? I need to know if I can take this evidence and say we need to launch an investigation into this. Any help is appreciated.

    Read the article
