Search Results

Search found 3953 results on 159 pages for 'overlapped io'.

Page 15/159

  • Need data on disk drive management by OS: getting base I/O unit size, "sync" option, Direct Memory Access

    - by Richard T
    Hello All, I want to ensure I have done all I can to configure a system's disks for serious database use. The three areas I know of (any others?) to be concerned about are: I/O size: the database engine's and the disk's native I/O sizes should either match, or the database's native I/O size should be a multiple of the disk's native I/O size. Disks that are capable of Direct Memory Access (e.g. IDE) should be configured for it. When a disk says it has written data persistently, it must be so! No keeping it in cache and lying about it. I have been looking for information on how to ensure these are so for CentOS and Ubuntu, but can't seem to find anything at all! I want to be able to check these things and change them if needed. Any and all input appreciated. (A sketch of the relevant checks follows below.)

    Read the article
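
    For the question above, a minimal sketch of the kinds of checks involved on a Linux box. The device name is an example, and flag availability varies with the util-linux and hdparm versions the distro ships:

        # Logical and physical sector size reported by the disk
        blockdev --getss --getpbsz /dev/sda
        cat /sys/block/sda/queue/physical_block_size

        # DMA mode and write-cache feature state (hdparm mostly applies to ATA/SATA disks)
        hdparm -I /dev/sda | grep -iE 'udma|write cache'

        # Query/disable the volatile write cache; 0 is the blunt way to stop the drive
        # "lying" about persistence (mount-level barrier/sync options are the other half)
        hdparm -W  /dev/sda
        hdparm -W0 /dev/sda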

  • RAID10 without write-back cache = horrible write performance?

    - by Harry Mexican
    I have just provisioned a dedicated server on SingleHop. I'm running it through some tests to know what to expect performance-wise. On the I/O side (with 4 x 1TB disks in RAID 10) I get: write-cache disabled: 200 MB/s read throughput, 30 MB/s write throughput. I thought that was really low compared to my desktop HD, which gets 150/150 or so. So I had a chat with them and they suggested enabling the write cache. New results: write-cache enabled: 280 MB/s read, 260 MB/s write. Which is great and all, but it means I'd have to add a BBU for an additional monthly cost. Is it normal for the write throughput to be 1/4 of a regular drive on RAID 10 if you don't have write cache? It almost feels like it's intentionally bad to force you to pony up for the BBU. I'd be happy with normal non-RAID performance of 150/150. (A couple of dd tests for separating cache effects are sketched below.)

    Read the article
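
    A couple of dd invocations that are commonly used to separate raw array write speed from page-cache effects; the test file path and size are examples (this writes 4 GB):

        # Bypass the page cache entirely
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=4096 oflag=direct

        # Or use the cache but force a flush before dd reports the rate
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=4096 conv=fdatasync

        rm /tmp/ddtest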

  • Trying to delete files used by IIS

    - by Cédric Boivin
    I have a service coded in C# which deletes some web site files hosted on IIS before an update. But sometimes when I delete the files, they stay there. If I try to delete them manually via Explorer, the files are not deletable because they are in the "Delete pending" state. This is how my service tries to delete a file:

        try
        {
            // Clear all attributes on the file to make sure it is not read-only
            File.SetAttributes(file, FileAttributes.Normal);
            // Delete the file
            File.Delete(file);
        }

    Is there a way to avoid this state? What can I do to force the delete from C# code? Could I release all processes holding the file from C# code? The environment is IIS 7.5, Windows 2008 R2, .NET 4.0. Thanks

    Read the article

  • Windows Disk I/O Analysis

    - by Jonathon
    It appears that we are having a problem with the disk I/O speed on our Windows 2003 Enterprise Edition server (64-bit). As we were initializing a database that created two 1G tablespaces on 3 different machines, it became obvious that the two smaller machines (each 32-bit Windows 2003 Standard Edition with less RAM) killed the larger machine when creating the files. The larger machine took 10x as long to create the tablespaces as the other machines did. Now I am left wondering how that could be. What programs or scripts would you guys recommend for tracking down the I/O problem? I think the issue may be with the controller card (all boxes are hardware RAID 10, but have different controller cards), but I would like to check the actual disk I/O speed as well, so I have some hard numbers to work with. Any help would be appreciated.

    Read the article

  • Determining Performance Limits

    - by JeffV
    I have a number of Windows processes that pass messages between them at a high rate using TCP to localhost. Aside from testing on actual hardware, how can I assess what my hardware limit will be? These applications are not doing CPU-intensive work, mostly decomposing and combining messages, scanning over them for special flags in the data, etc. The message size is typically 3 KB and the rate is typically ~10k messages per second, so ~30 MB per second between processing stages. There may be 10 or more stages, depending. For this type of application, what should I look at when assessing performance? What do I look for in a server, performance-wise? I am currently running a Xeon L5408 with 32 GB of RAM, but I am assuming cache is more important than actual RAM size, as I am barely touching the RAM.

    Read the article

  • How to increase the disk cache of Windows 7

    - by Mark Christiaens
    Under Windows 7 (64 bit), I'm reading through 9000 moderately sized files. In total, there is more than 200 MB of data. Using Java (JDK 1.6.21) I'm iterating over the files. The first 1400 or so go at full speed but then speed drops off to 4ms per file. It turns out that the main cost is incurred simply by opening the files. I'm opening the files using new FileInputStream (and of course closing them in time to avoid file leaks). After some investigating, I see that Windows' disk cache is using only 100 MB or so of RAM although I have 8 GiB available. I've tried increasing the cache size using the CacheSet tool but any values I provide are considered out of range. I've also tried enabling the LargeSystemCache registry key but (after rebooting) the CacheSet tool still indicates I'm using 100 MB of cache (and doesn't increase during the test run). Does anybody have any suggestions to "encourage" Windows 7 to cache my 9000 files?

    Read the article

  • I/O Reads and Writes per process Unix/SunOS?

    - by Alex
    Can prstat or something similar tell me how many reads/writes a process is doing, similar to how Task Manager on Windows can show I/O Reads, I/O Writes and many other I/O columns per process? I'm using SunOS 5.10, but feel free to post other Unix flavours too. (A Linux example is sketched below.)

    Read the article
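
    Since other flavours were welcomed: on Linux, the sysstat package's pidstat reports per-process read/write rates directly. This is a Linux-side sketch only; the PID and interval are examples:

        # kB read/written per second for one process, sampled every second
        pidstat -d -p 12345 1

        # CPU and disk figures side by side
        pidstat -u -d -p 12345 1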

  • Replacement drive cage for PowerEdge R710

    - by bumble_bee_tuna
    Hi, I'm performance tuning a DB server, a Dell R710, and there is a very significant I/O bottleneck. Unfortunately the server was purchased with the 6 x 3.5 inch SATA configuration, which doesn't give me the leeway I need to address the issue. Before going to DAS, does anyone know if it is possible to purchase a replacement front drive enclosure? I know the server is configurable with something like 12 or 16 2.5 inch drives and it appears to be modular. I tried contacting Dell but the offshore parts department reps are not very bright, lol. Thanks.

    Read the article

  • poor performance when deleting many files

    - by choppy
    I've got two machines. The first is an IBM blade with 24 cores, 96GB RAM, and a single local hard drive of 278GB divided into 4 partitions, formatted with GPT:
    1. c: - 40GB; 3GB free
    2. d: - 40GB; 37GB free
    3. e: - 198.32GB; 198.1GB free
    4. 100MB (EFI system partition)
    The other is a pizza-box server with 4 cores, 8GB RAM, and a single local hard drive of 273GB divided into 3 partitions, formatted with MBR:
    1. c: - 136.81GB; 20GB free
    2. d: - 88.74GB; 87.91GB free
    3. e: - 47.85GB; 46.91GB free
    I have two scripts: the first creates 20,000 files in one directory, each file 192KB in size; the second deletes the folder (recursively) and prints how much time it took to delete all the files. The problem is that on the first server (blade) it takes about 2 minutes to delete all 20,000 files, while on the second (pizza box) it takes about 4 seconds!? Both servers run a clean Windows Server 2008 R2 with no special applications running in the background. Any ideas what is going on?

    Read the article

  • Is Joerg Schilling's "sdd" a full replacement for "dd"?

    - by fishtoprecords
    I'm trying to use 'sdd' on my Debian system, and can't get one set of options to work. They do work in 'dd', so I am wondering if I am specifying them incorrectly, if sdd didn't implement them, or something else. What I want to do is:

        sdd if=/dev/hdh1 of=/bay5/imagebay1 bs=4096 conv=sync,noerror

    If I leave out the "conv=..." option, it works, or at least starts copying data:

        sdd if=/dev/hdh1 of=/bay5/imagebay1 bs=4096

    Can you shed a bit of light? (The equivalent dd invocation is shown below for comparison.)

    Read the article
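
    For comparison, a sketch of the equivalent dd invocation, useful as a fallback if sdd turns out not to accept these conv keywords (whether it does is exactly the open question above):

        dd if=/dev/hdh1 of=/bay5/imagebay1 bs=4096 conv=noerror,sync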

  • linux: accessing thousands of files in a hash of directories

    - by 130490868091234
    I would like to know the most efficient way of concurrently accessing thousands of files of a similar size in a modern Linux cluster of computers. I am carrying out an indexing operation on each of these files, so the 4 index files, about 5-10x smaller than the data file, are produced next to the file to be indexed. Right now I am using a hierarchy of directories from ./00/00/00 to ./99/99/99 and I place 1 file at the end of each directory, like ./00/00/00/file000000.ext through ./99/99/99/file999999.ext. It seems to work better than having thousands of files in the same directory, but I would like to know if there is a better way of laying out the files to improve access. (A sketch of how such paths are typically derived is below.)

    Read the article
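
    For reference, a minimal bash sketch of how such a nested path is typically derived from a running file number (names and the 6-digit width are illustrative):

        n=123456
        printf -v p '%06d' "$n"                 # zero-pad to six digits
        dir="./${p:0:2}/${p:2:2}/${p:4:2}"      # ./12/34/56
        mkdir -p "$dir"
        echo "$dir/file${p}.ext"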

  • Linux Read-Ahead Downsides

    - by JPerkSter
    Hi everyone, hope all is well. I have a question regarding read-ahead caching. Are there any downsides to raising the size of the read-ahead cache? On our farm we're currently running at 256, and upon raising that higher we are seeing significant throughput gains:

        [root@server ~]# hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   7352 MB in  2.00 seconds = 3677.62 MB/sec
         Timing buffered disk reads:  244 MB in  3.10 seconds =  78.68 MB/sec
        [root@server ~]# blockdev --setra 10240 /dev/sda
        [root@server ~]# hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   11452 MB in  2.00 seconds = 5728.52 MB/sec
         Timing buffered disk reads:  422 MB in  3.17 seconds = 133.04 MB/sec

    We are running on a 2.6 kernel. Thanks!

    Read the article

  • Hiding "Syntax OK" from apache2ctl output

    - by Oscar Barrett
    I am checking whether a particular Apache module is installed using apache2ctl -M. When listing the modules, Apache runs a syntax check on the configuration files, which prints out "Syntax OK" if everything is fine. However, this message doesn't seem to be coming from STDOUT or STDERR, as it shows even if all output is redirected to /dev/null, i.e.:

        $ sudo apache2ctl -M
        Loaded Modules:
         core_module (static)
         log_config_module (static)
         ...
        Syntax OK
        $ sudo apache2ctl -M >/dev/null
        Syntax OK

    How is this being output, and is it possible to hide it? (A redirection sketch follows below.)

    Read the article
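
    A sketch worth trying: the example above only redirects stdout, so if the message is actually arriving on stderr, redirecting file descriptor 2 would hide it. That it goes to stderr on this particular build is an assumption to verify first:

        # Show only what arrives on stderr
        sudo apache2ctl -M 2>&1 >/dev/null

        # If "Syntax OK" shows up there, this keeps the module list and hides the check
        sudo apache2ctl -M 2>/dev/null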

  • How can I set deadline as the I/O scheduler for USB Flash devices by using udev rules?

    - by ????
    I have set CFQ as the default I/O scheduler. I often get bad performance when I write data to a Flash device. This is resolved if I use deadline as the I/O scheduler for USB Flash devices. I can't always change the scheduler manually, right? I think writing udev rules is a good idea. Can someone please write rules for me? I want: when I plug in a USB device, detect the type of the device; if it is a portable USB hard disk, do nothing (I think if a device has more than one partition, it is always a portable hard disk); if it is a USB Flash device, set deadline as its scheduler. (A hedged example rule is sketched below.)

    Read the article
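
    A hedged example of such a rule. The matching keys are assumptions: /sys/block/sdX/removable is 1 for most USB sticks, but some portable USB hard disks also report 1, and udev cannot easily count partitions, so verify the attributes for your devices with "udevadm info -a -n /dev/sdX" before relying on it:

        # /etc/udev/rules.d/60-usb-flash-scheduler.rules  (file name is arbitrary)
        ACTION=="add|change", SUBSYSTEM=="block", KERNEL=="sd[a-z]", ATTR{removable}=="1", ATTR{queue/scheduler}="deadline"

    After saving the rule, "udevadm control --reload-rules" (or re-plugging the device) should apply it; the result can be checked in /sys/block/sdX/queue/scheduler.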

  • How do I know if my disks are being hit with too many I/O reads or writes or both?

    - by Mark F
    I know a bit about disk I/O and the bottlenecks related to it, especially where databases are concerned. How do I really know what the maximum I/O numbers will be for my disks? What metrics are available for working out, at least as a good approximation, how much I/O capacity I have left? I've seen it before where things are bubbling along nicely and then, all of a sudden, everything grinds to a halt and it ends up being an I/O-bound problem. Is there a better way to predict when I/O is reaching its limits? This article was interesting but did not give the answer I desire. So, is my best bet just looking at 'CPU I/O wait'? There must be a more proactive method than this. (An iostat sketch follows below.)

    Read the article
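
    One commonly used view is iostat's extended device statistics (sysstat package); a minimal sketch, sampling every 5 seconds:

        iostat -dx 5

    The columns to watch are r/s and w/s against what the disks are rated for, await (average request latency in ms, which climbs sharply as a device saturates), and %util. Note that %util can be misleading on arrays and SSDs that service many requests in parallel.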

  • How to accelerate and notice failure of potentially faulty disks

    - by rainier
    Hey, I got a bunch of 'used' servers whose disks should have been checked, but they have been shipped around the country in crates, which can't have helped. I just had one disk go bad (despite being mirrored; I'm currently trying to get more details). The server was fine for about a week before everything ground to a halt this afternoon. Is there any way to 'accelerate' the failure of faulty disks, with the goal of bringing a disk to failure before we launch production services? Would doing lots of I/O with 'dd' or 'iozone' be a good way to test these potentially faulty disks? Any other tests/tools that would help recognize failures before they happen? (A couple of candidate commands are sketched below.)

    Read the article
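
    Two candidate commands for this kind of burn-in, sketched with an example device name. The first is destructive (it overwrites the whole disk), so only run it before any data is on the drive:

        # Four-pattern write+verify pass over the entire disk -- DESTRUCTIVE
        badblocks -wsv /dev/sdb

        # SMART long self-test, then check reallocated/pending sector counts
        smartctl -t long /dev/sdb
        smartctl -a /dev/sdb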

  • I/O redirection using Cygwin and MinGW

    - by KLee1
    I have written a program in C and compiled it with MinGW. When I run the program in Cygwin it seems to behave normally (i.e. it prints correct output, etc.). However, when I try to pipe its output to another program so that I can parse it, the piping does not seem to work: nothing reaches the second program. I have confirmed this with the following commands.

    This command seems to work fine:

        ./prog

    Performing this command returns nothing:

        ./prog | cat

    This command verifies the first:

        ./prog | wc

    which returns:

        0 0 0

    I know that the script (including the piping from the program) works perfectly fine in an all-Linux environment. Does anyone have any idea why the piping isn't working in Cygwin? Thanks! (A small diagnostic is sketched below.)

    Read the article
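
    A small diagnostic sketch that narrows down where the data is lost. If redirecting to a file works but the pipe stays empty, the problem is pipe-specific (often stdio buffering or EOF/CRLF handling in the MinGW-built binary) rather than the program's output as such:

        ./prog > out.txt
        wc out.txt          # non-zero counts here, but zero through the pipe, points at the pipe
        ./prog | wc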

  • performance monitoring

    - by Sunny
    I want to monitor CPU usage and disk read/write usage for a particular process, say ./myprocess. For CPU, the top command seems to be a nice option, and for reads and writes iotop seems to be a handy one. For example, to monitor reads/writes every second I use the command iotop -tbod1 | grep "myprocess". My difficulty is that I want to store only three variables: reads/sec, writes/sec, and CPU usage/sec. Could you help me with a script that combines those three values from top and iotop and stores them in a log file? Thanks! (One possible approach is sketched below.)

    Read the article
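
    One possible approach, swapping tools rather than parsing top and iotop: pidstat from the sysstat package reports %CPU, kB_rd/s and kB_wr/s on one line per interval, which is easy to append to a log. The process name and log path are examples, and the column layout varies between sysstat versions, so check the header before trimming it further with awk:

        pidstat -h -u -d -p "$(pgrep -o myprocess)" 1 >> myprocess.log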

  • Full list of the top-like tool family for performance monitoring in Linux: iftop, iotop, htop, atop, more? [closed]

    - by Yauhen Yakimovich
    Top-like utilities are extremely handy in my work and I want to make sure I am not missing any of them. Please extend the following list of the performance monitoring (top-like) family of Linux tools:
    top - the original tool
    htop - adds support for multiple cores/CPUs
    iotop - input/output monitoring
    iftop - network monitoring
    atop - merges the previous elements into a single overview
    slabtop - displays a listing of the top caches
    ...?
    The only criteria are maturity and similarity in function/style.

    Read the article

  • Fastest security check of file tree on NFS

    - by fungs
    I am currently experiencing very bad performance using the following on an NFS network folder:

        time find . | while read f; do test -L "$f" && f=$(readlink -m $f); grp="$(stat -c %G $f)"; perm="$(stat -c %A $f)"; done

    Question 1) Within the loop, permissions are checked using the variables grp and perm. Is there a way to lower the amount of disk I/O for these kinds of checks over the network, e.g. read all metadata at once using find? (A single-pass sketch is below.) Question 2) It seems like the NFS mount isn't tuned very well; the same operation on a similar network link via SSHFS takes only one third of the time. All parameters are auto-negotiated. Any suggestions?

    Read the article
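
    For Question 1, a sketch of a single-pass version using GNU find's -printf, which produces the permission string and group in the same readdir/stat pass instead of forking two stat processes per file (-L makes find report on symlink targets, roughly matching the readlink -m step; behaviour on broken links differs):

        time find -L . -printf '%M %G %p\n'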

  • random hard disk errors

    - by AugB
    For the past 2 years or so (on a 4-year-old custom build) I've been getting random moments where everything stops responding (or takes a very long time to respond), followed by I/O and "hdd not detected" errors on restart. To fix it, all I usually need to do is unplug my SATA cables from the hdd and mobo and plug them back in again, and the problem disappears, at least for a little while (anywhere from a day to a few months). Sometimes even a startup repair does the job. I've done multiple reformats and have also run chkdsk more times than I can remember, and neither seems to help in the long run. Both drives seem to be exhibiting the same problem. Have both my hdds been "dying" for the past couple of years, even though they are fully functional besides these occasional hiccups? Does the issue lie elsewhere? All feedback is appreciated. System specs: Biostar TPower I45 mobo, 2x WD Caviar 640GB hdds, Zalman 750W PSU, Radeon 5870 GPU, 2x2GB G.Skill DDR2 RAM, Win7 64

    Read the article

  • Parallel.For System.OutOfMemoryException

    - by Martin Neal
    We have a fairly simple program that's used for creating backups. I'm attempting to parallelize it, but I am getting an OutOfMemoryException inside an AggregateException. Some of the source folders are quite large, and the program doesn't crash until about 40 minutes after it starts. I don't know where to start looking, so the code below is a near-exact dump of all the code, minus the directory structure and the exception-logging code. Any advice as to where to start looking?

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Threading.Tasks;

        namespace SelfBackup
        {
            class Program
            {
                static readonly string[] saSrc = {
                    "\\src\\dir1\\",
                    //...
                    "\\src\\dirN\\", //this folder is over 6 GB
                };
                static readonly string[] saDest = {
                    "\\dest\\dir1\\",
                    //...
                    "\\dest\\dirN\\",
                };

                static void Main(string[] args)
                {
                    Parallel.For(0, saDest.Length, i =>
                    {
                        // sDest was undefined in the original dump; presumably the destination for this iteration
                        string sDest = saDest[i];
                        try
                        {
                            if (Directory.Exists(sDest))
                            {
                                // Delete directory first so old stuff gets cleaned up
                                Directory.Delete(sDest, true);
                            }
                            // recursive function
                            clsCopyDirectory.copyDirectory(saSrc[i], sDest);
                        }
                        catch (Exception e)
                        {
                            // standard error logging
                            CL.EmailError();
                        }
                    });
                }
            }
        }

        ///////////////////////////////////////

        using System.IO;
        using System.Threading.Tasks;

        namespace SelfBackup
        {
            static class clsCopyDirectory
            {
                static public void copyDirectory(string Src, string Dst)
                {
                    Directory.CreateDirectory(Dst);

                    /* Copy all the files in the folder. If and when .NET 4.0 is installed,
                       change Directory.GetFiles to Directory.EnumerateFiles for slightly
                       better performance. */
                    Parallel.ForEach<string>(Directory.GetFiles(Src), file =>
                    {
                        /* An exception thrown here may be arbitrarily deep into this recursive
                           function; there's also a good chance that if one copy fails here, so
                           too will other files in the same directory, so we don't want to spam
                           out hundreds of error e-mails, but we don't want to abort altogether.
                           Instead, the best solution is probably to throw back up to the
                           original caller of copyDirectory and move on to the next Src/Dst pair
                           by not catching any possible exception here. */
                        File.Copy(file,                                      // src
                                  Path.Combine(Dst, Path.GetFileName(file)), // dest
                                  true);                                     // bool overwrite
                    });

                    // Call this function again for every directory in the folder.
                    Parallel.ForEach(Directory.GetDirectories(Src), dir =>
                    {
                        copyDirectory(dir, Path.Combine(Dst, Path.GetFileName(dir)));
                    });
                }
            }
        }

    Read the article

  • fileToString keeps on returning ""

    - by karikari
    I managed to get this code to compile without error, but somehow it does not return the strings that I wrote inside file1.txt and file.txt, whose paths I pass in as str1 and str2. My objective is to use this open source library to measure the similarity between the strings contained in 2 text files. Its Javadoc states that:

        public static java.lang.StringBuffer fileToString(java.io.File f)
        private call to load a file and return its content as a string.
        Parameters: f - a file for which to load its content
        Returns: a string containing the file's contents or "" if empty or not present

    Here is my modified code trying to use the FileLoader function, but it fails to return the strings inside the files. The end result keeps on returning "". I do not know where my mistake is:

        package uk.ac.shef.wit.simmetrics;

        import java.io.File;

        import uk.ac.shef.wit.simmetrics.similaritymetrics.*;
        import uk.ac.shef.wit.simmetrics.utils.*;

        public class SimpleExample {

            public static void main(final String[] args) {
                if (args.length != 2) {
                    usage();
                } else {
                    // NOTE: these assign the literal text "arg[0]" / "arg[1]",
                    // not the command-line arguments args[0] / args[1]
                    String str1 = "arg[0]";
                    String str2 = "arg[1]";
                    File objFile1 = new File(str1);
                    File objFile2 = new File(str2);
                    FileLoader obj1 = new FileLoader();
                    FileLoader obj2 = new FileLoader();
                    str1 = obj1.fileToString(objFile1).toString();
                    str2 = obj2.fileToString(objFile2).toString();
                    System.out.println(str1);
                    System.out.println(str2);
                    AbstractStringMetric metric = new MongeElkan();
                    // this single line performs the similarity test
                    float result = metric.getSimilarity(str1, str2);
                    // outputs the results
                    outputResult(result, metric, str1, str2);
                }
            }

            private static void outputResult(final float result, final AbstractStringMetric metric, final String str1, final String str2) {
                System.out.println("Using Metric " + metric.getShortDescriptionString() + " on strings \"" + str1 + "\" & \"" + str2 + "\" gives a similarity score of " + result);
            }

            private static void usage() {
                System.out.println("Performs a rudimentary string metric comparison from the arguments given.\n\tArgs:\n\t\t1) String1 to compare\n\t\t2) String2 to compare\n\n\tReturns:\n\t\tA standard output (command line of the similarity metric with the given test strings; for more details of this simple class please see the SimpleExample.java source file)");
            }
        }

    Read the article
