Search Results

Search found 12367 results on 495 pages for 'disk io'.

Page 68/495 | < Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >

  • replacement drive cage for PowerEdge R710

    - by bumble_bee_tuna
    Hi, I'm performance-tuning a DB server, a Dell R710 with a very significant I/O bottleneck. Unfortunately the server was purchased with the 6 x 3.5-inch SATA configuration, which doesn't give me the leeway I need to address the issue. Before going to DAS, does anyone know if it is possible to purchase a replacement front drive enclosure? I know the server can be configured with something like 12 or 16 2.5-inch drives, and the cage appears to be modular. I tried contacting Dell, but the offshore parts department reps weren't much help. Thanks.

    Read the article

  • perfmon reporting higher IOPS than possible?

    - by BlueToast
    We created a monitoring report for IOPS using the Disk Reads/sec and Disk Writes/sec performance counters on four servers (physical boxes, no virtualization), each with 4x 15k 146GB SAS drives in RAID10. The counters were sampled and recorded every second, and the reports ran for 24 hours before stopping. These are the results we got:

        Server1 - Maximum disk reads/sec: 4249.437, maximum disk writes/sec: 4178.946
        Server2 - Maximum disk reads/sec: 2550.140, maximum disk writes/sec: 5177.821
        Server3 - Maximum disk reads/sec: 1903.300, maximum disk writes/sec: 5299.036
        Server4 - Maximum disk reads/sec: 8453.572, maximum disk writes/sec: 11584.653

    The average disk reads and writes per second were generally low. For one server, for example, the average was about 33 writes/sec, but when monitoring in real time it would often spike into the hundreds and sometimes the thousands. Could someone explain why these numbers are significantly higher than theoretical calculations assuming each drive can do 180 IOPS? Additional details (RAID card): HP Smart Array P410i, total cache size 1GB, write cache disabled, array accelerator cache ratio 25% read / 75% write.
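
    For a rough sense of the ceiling the spindles alone could sustain, here is a minimal Python sketch (assuming the 180 IOPS-per-spindle figure from the question; RAID10 halves the write rate because each write lands on both mirror halves):

        disk_count = 4          # 4x 15k SAS in RAID10
        spindle_iops = 180      # assumed per-drive figure from the question

        read_ceiling = disk_count * spindle_iops        # all spindles can serve reads
        write_ceiling = disk_count * spindle_iops // 2  # each write hits both mirror halves

        print("read ceiling: ", read_ceiling)   # 720 IOPS
        print("write ceiling:", write_ceiling)  # 360 IOPS
        # Sustained counter values far above these ceilings are usually being
        # absorbed by a cache (controller or OS file cache) rather than by the platters.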

    Read the article

  • Encryption of external HDD -- accessible from Windows without installation

    - by Rainer
    I would like to use encryption on my external HDD, but I also need to access the encrypted data from Windows. As suggested in other questions, TrueCrypt is one option; at the moment I am using encfs, which is not available for Windows. But my question goes further: I would like to access the encrypted partition from Windows without installing anything, since I will be using it on different Windows machines where I have no administrator access. My main OS is Linux and I have full root access to that computer. Is there a full-disk or file-based encryption scheme that I can use cross-platform and that does not require installation under Windows? ADDITION: It seems that TrueCrypt provides a portable mode which partly fulfills my requirements: http://www.truecrypt.org/docs/?s=truecrypt-portable, but the TrueCrypt driver still needs to be installed by an administrator... a pity. Thanks

    Read the article

  • How to interpret IOZone results?

    - by homer5439
    Here are the results of running IOZone on an ext3 filesystem on an LVM volume residing on a SAN LUN (it was run with 5 parallel processes). Throughput report: Y-axis is type of test, X-axis is number of processes. Record size = 4 KBytes; output is in KBytes/sec.

        Initial write      81628.55
        Rewrite            83354.72
        Read              115595.02
        Re-read           119306.09
        Reverse read       47684.20
        Stride read        10011.09
        Random read        16751.27
        Mixed workload      5659.77
        Random write        1661.85
        Pwrite             36030.83

    Now this is all nice and dandy, but my question is: how do I know whether these values are as good as they could be, or whether there is something to tweak (and if so, what)? The actual use for this logical volume is as a virtual disk for a VM.

    Read the article

  • Why am I seeing excessive disk activity when installing applications?

    - by Kev
    I'm running Windows 7 Ultimate 64-bit on a Dell Vostro 1720 with 8GB of RAM, a 7200RPM disk, and a 2.53 GHz Core 2 Duo (Windows 7 64-bit is a supported option and the laptop came with the OS pre-installed). I'm noticing fairly excessive disk activity when running installers. For example, the Visual Studio 2010 RC installer constantly accessed the disk for about 10 minutes; it was so excessive that I was unable to use the machine until it ceased. Today I installed Trillian Astra 4.1 for Windows (latest build from the website), and again, when I ran the installer I was pretty much locked out of the machine until the disk activity calmed down. In both cases, when I eventually managed to launch Task Manager, the CPU was sitting at around 5% to 7% utilisation while this was going on. All other disk-related activity is fine: the machine is snappy and applications launch without delay. It's only when I run an installer that I see this odd behaviour. Why would this be?

    Read the article

  • poor performance when deleting many files

    - by choppy
    I've got two machines. The first is an IBM blade with 24 cores, 96GB RAM, and a single local hard drive of 278GB divided into 4 partitions, formatted with GPT:

        1. C: - 40GB; 3GB free
        2. D: - 40GB; 37GB free
        3. E: - 198.32GB; 198.1GB free
        4. 100MB (EFI System Partition)

    The other is a pizza-box server with 4 cores, 8GB RAM, and a single local hard drive of 273GB divided into 3 partitions, formatted with MBR:

        1. C: - 136.81GB; 20GB free
        2. D: - 88.74GB; 87.91GB free
        3. E: - 47.85GB; 46.91GB free

    I have two scripts: the first creates 20,000 files in one directory, each 192KB in size; the second deletes the folder (recursively) and prints how much time it took to delete all the files. The problem is that on the first server (the blade) it takes about 2 minutes to delete all 20,000 files, while on the second (the pizza box) it takes about 4 seconds! Both servers run a clean Windows Server 2008 R2 with no special applications running in the background. Any ideas what is going on?
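
    For reference, a minimal Python sketch of the kind of test described here (path and file names hypothetical): it creates 20,000 files of 192KB each, then times a recursive delete.

        import os, shutil, time

        TEST_DIR = r"e:\delete-test"      # hypothetical test location
        FILE_COUNT = 20000
        PAYLOAD = b"\0" * 192 * 1024      # 192KB per file

        os.makedirs(TEST_DIR, exist_ok=True)
        for i in range(FILE_COUNT):
            with open(os.path.join(TEST_DIR, "f%05d.dat" % i), "wb") as fh:
                fh.write(PAYLOAD)

        start = time.perf_counter()
        shutil.rmtree(TEST_DIR)           # recursive delete, as in the second script
        print("deleted %d files in %.1fs" % (FILE_COUNT, time.perf_counter() - start))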

    Read the article

  • Is Software Raid1 Using mdadm with a Local Hard Disk and GNBD Possible?

    - by Travis
    I have multiple webservers that use many small files to create dynamic web pages. Caching the web pages isn't an option. The webservers also perform writes, so I need a synchronous filesystem. I'm looking to maximise performance, as my understanding is that small files are the weak point (to varying degrees) of a cluster filesystem over Ethernet. Currently I'm using CentOS 5.5, 64-bit. Since it's only about 300MB of data, I'm looking at mdadm RAID-1 across the GNBD and a local hard disk, using the "--write-mostly" option so that reads are served from the local hard disk. Is this possible? If so, is there any advantage to making it a tmpfs disk instead of a local hard disk? Or will the files on the local hard disk just get cached in RAM anyway, so I won't see a performance gain from using tmpfs, assuming there's enough RAM available?

    Read the article

  • Average mail quota usage: tricks to implement unlimited email quota.

    - by Marco Demaio
    I suppose that hosts who provide unlimited mail quota are only claiming it is unlimited, and are hoping they won't run out of disk space. Correct me if I'm wrong. To pull off this trick they probably have to calculate the average real quota used by the average user. Let's say that on 100 GB of hosting space I offer mailboxes whose quotas add up to 200 GB; obviously if every user filled their mailbox my server would stop working, since that would require 200 GB. But I expect the trick to work because it will never happen (or it's extremely improbable) that all users fill up all their mailboxes. The QUESTIONS are: What is the average email usage? Can we say that a user normally fills up 1/2 or 1/3 of the quota you provide? Thanks for any answers/suggestions you might provide.
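
    The overcommit arithmetic behind this trick can be sketched in a few lines of Python. The fill ratio is exactly the unknown being asked about, so the figure below is purely illustrative:

        def max_mailboxes(disk_gb, quota_gb, avg_fill_ratio, headroom=0.8):
            # How many mailboxes of quota_gb can be advertised if users fill
            # avg_fill_ratio of their quota on average, keeping 20% disk spare.
            return int(disk_gb * headroom / (quota_gb * avg_fill_ratio))

        # Illustrative only: if users averaged 1/3 of their quota, a 100 GB disk
        # could advertise ~240 one-gigabyte mailboxes and still keep 20% free.
        print(max_mailboxes(100, 1, 1.0 / 3))   # -> 240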

    Read the article

  • Linux: accessing thousands of files in a hash of directories

    - by 130490868091234
    I would like to know the most efficient way of concurrently accessing thousands of files of similar size on a modern Linux cluster. I am carrying out an indexing operation on each of these files, so the 4 index files, about 5-10x smaller than the data file, are produced next to the file being indexed. Right now I am using a hierarchy of directories from ./00/00/00 to ./99/99/99 and placing one file at the end of each directory, from ./00/00/00/file000000.ext to ./99/99/99/file999999.ext. This seems to work better than having thousands of files in the same directory, but I would like to know if there is a better way of laying out the files to improve access.
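
    For illustration, a minimal Python sketch of the id-to-path mapping described above (two decimal digits per directory level; the file-name pattern is the one from the question):

        import os

        def shard_path(file_id, root="."):
            # Map a numeric id into the ./00/00/00 ... ./99/99/99 hierarchy.
            a = file_id // 10000 % 100
            b = file_id // 100 % 100
            c = file_id % 100
            return os.path.join(root, "%02d" % a, "%02d" % b, "%02d" % c,
                                "file%06d.ext" % file_id)

        print(shard_path(123456))   # ./12/34/56/file123456.ext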

    Read the article

  • Hiding "Syntax OK" from apache2ctl output

    - by Oscar Barrett
    I am checking whether a particular Apache module is installed using apache2ctl -M. When listing the modules, Apache runs a syntax check on the configuration files, which prints "Syntax OK" if everything is fine. However, this message doesn't seem to be coming from STDOUT or STDERR, as it shows even if all output is redirected to /dev/null, i.e.:

        $ sudo apache2ctl -M
        Loaded Modules:
         core_module (static)
         log_config_module (static)
         ...
        Syntax OK

        $ sudo apache2ctl -M >/dev/null
        Syntax OK

    How is this being output, and is it possible to hide it?

    Read the article

  • Is it normal for a SAS drive to have a few bad blocks, or should I replace my drive ASAP?

    - by Nate
    I have a drive (part of a RAID 1 mirror) that has two bad blocks. Adaptec Storage Manager e-mailed me when it detected the blocks. It shows 4 medium errors for that drive, but its state is still "optimal". This is my first time using Adaptec RAID controllers, and I don't know whether an occasional bad block is normal or whether I should immediately replace the drive. Update: The drive failed later the same day! The disk subsystem is an Adaptec 6405 with ZMM and (2) Seagate near-line SAS drives (ST31000424SS). The other drive hasn't reported any bad blocks yet. I am running a consistency check.

    Read the article

  • Phantom Local Disks appearing in my drive list

    - by Paul
    I seem to have several phantom Local Disks, mapped to different letters, that are 0 bytes in size. Strangely, they do not show up when I view my drives through Windows Explorer, but if I open an application such as ACDSee Pro or MS Word and go to open a file, I can see all these Local Disks mapped to different letters. This means that when I plug in my external hard disk it gets mapped to letter R instead of its usual G, which messes up any programs I have pointing to it by default. How did they get there, and more importantly, how do I get rid of them? I'm on a Windows 7 Home Premium 32-bit machine.

    Read the article

  • Is it dangerous to add or remove a hard drive while a Windows machine is in standby?

    - by Adal
    Can I add a SATA drive to a Windows 7 machine that is in standby mode? The hardware supports hot-plugging. Could pulling the drive out while in standby corrupt the data on it (unflushed caches, etc.)? Does Windows flush before standing by? How about swapping a drive for another of a different kind (SSD vs. mechanical) and size, also while in standby? Could the OS, when waking up, believe the old drive is still there and write to it, corrupting the new one since it has different partitions and data?

    Read the article

  • Xen: find VBD id for physical disks

    - by Joe
    I'm starting a Xen domU with xm create config.cfg. The config file lists a number of physical block devices (LVs) that are added to the guest and can be accessed fine once it boots. At some point in the future, however, I need to be able to hot-unplug one of these disks using the xm block-detach command. That command requires the VBD id of the device to be detached, and I can't find a way to discover the id for a particular disk that was 'plugged in' at start-up. Any help is much appreciated!
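
    One commonly cited mapping, assuming the guest sees the disks as /dev/xvdX: the VBD id is the Linux device number, major 202 for xvd devices with 16 minors per whole disk. A small Python sketch (treat this as an assumption and verify against xm block-list for your own domU):

        def xvd_vbd_id(letter, partition=0):
            # /dev/xvd<letter> -> VBD id = major*256 + minor
            # (major 202 for xvd, 16 minor numbers per whole disk).
            index = ord(letter) - ord("a")
            return 202 * 256 + index * 16 + partition

        print(xvd_vbd_id("a"))   # 51712 -> /dev/xvda
        print(xvd_vbd_id("b"))   # 51728 -> /dev/xvdb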

    Read the article

  • I/O redirection using Cygwin and MinGW

    - by KLee1
    I have written a program in C and compiled it with MinGW. When I run the program in Cygwin, it seems to behave normally (i.e. it prints correct output, etc.). However, when I try to pipe its output to another program so I can parse it, the pipe does not seem to work: the second program receives no input. I have confirmed this with the following commands.

        This command works fine:         ./prog
        This command returns nothing:    ./prog | cat
        This command verifies the first: ./prog | wc
        which returns:                   0 0 0

    I know the script (including the piping from the program) works perfectly fine in an all-Linux environment. Does anyone have any idea why the piping isn't working in Cygwin? Thanks!

    Read the article

  • Full list of top-like tool family for performance monitoring in Linux: iftop iotop htop atop more? [closed]

    - by Yauhen Yakimovich
    Top-like utilities are extremely handy in my work, and I want to make sure I am not missing any of them. Please extend the following list of top-like performance-monitoring tools for Linux:

        top - the original tool
        htop - adds support for multiple cores/CPUs
        iotop - input/output monitoring
        iftop - network monitoring
        atop - merges the previous elements into a single overview
        slabtop - displays a listing of the top kernel slab caches
        ?

    The only criteria: maturity and similarity in function/style.

    Read the article

  • How to clone & restore a VirtualBox hard drive

    - by user23950
    What I want to do is clone my VirtualBox HDD, which dual-boots XP and Vista. I'm using Acronis and backing it up to a flash drive, and I end up with a flash drive that has two partitions, just like the VirtualBox hard disk. What do I do to restore it? I'm running Acronis inside VirtualBox. How do I make use of the backup, actually restore what I've backed up, and be able to boot XP and Vista again inside VirtualBox? Please help.

    Read the article

  • Is it possible to put only the boot partition on a USB stick?

    - by Steve V.
    I've been looking at system encryption with Arch Linux and I think I have it pretty much figured out, but I have a question about the /boot partition. Once the system is booted, is it possible to unmount the /boot partition and allow the system to continue to run? My thought was to install /boot to a USB stick, since it can't be left encrypted, and then boot from the USB stick, which would bring up the encrypted hard disk. Then I could take the USB key out and use the system as normal. The reason I want to do this is that if an attacker got physical access to the machine, they could modify the /boot partition with a keystroke logger and steal the key; and if they already had a copy of the encrypted data, they could just sit back and wait for it. I guess I could come up with a system for verifying at each startup that /boot has been untouched. Has this been done before? Any guidance for implementing it on my own?
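
    A minimal Python sketch of the startup check described here (manifest path hypothetical; it assumes the manifest itself lives on the encrypted root, and note it can only detect tampering after the fact, not prevent it):

        import hashlib, json, os, sys

        MANIFEST = "/root/boot-manifest.json"   # hypothetical, on the encrypted root

        def digest_tree(root="/boot"):
            sums = {}
            for dirpath, _, files in os.walk(root):
                for name in files:
                    path = os.path.join(dirpath, name)
                    with open(path, "rb") as fh:
                        sums[path] = hashlib.sha256(fh.read()).hexdigest()
            return sums

        if "--init" in sys.argv:
            # Record a trusted baseline once, while /boot is known good.
            with open(MANIFEST, "w") as fh:
                json.dump(digest_tree(), fh)
        else:
            with open(MANIFEST) as fh:
                baseline = json.load(fh)
            current = digest_tree()
            changed = [p for p in set(baseline) | set(current)
                       if baseline.get(p) != current.get(p)]
            print("/boot MODIFIED:" if changed else "/boot matches baseline")
            for p in sorted(changed):
                print("  " + p)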

    Read the article

  • Will the UUID be the same if a disk is moved from one machine to another?

    - by Sunry
    In Linux every disk gets a UUID. I'm just wondering: will the UUID be the same if I move a disk from one Linux box to another? Is the UUID the same on different machines for the same disk, or does it change with the machine it is attached to? A similar question: will the UUID stay the same after the Linux distribution is reinstalled on the same machine with the same disk? For example, the machine first runs CentOS 5 and is then reinstalled with CentOS 6.
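
    The filesystem UUID is stored in the filesystem's own superblock, so it travels with the disk rather than the machine; reformatting the partition (as a reinstall usually does) generates a new one. One way to inspect the mapping on any Linux box, sketched in Python using the /dev/disk/by-uuid symlinks that udev maintains:

        import os

        BY_UUID = "/dev/disk/by-uuid"
        for uuid in sorted(os.listdir(BY_UUID)):
            device = os.path.realpath(os.path.join(BY_UUID, uuid))
            print(uuid, "->", device)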

    Read the article

  • Fastest security check of file tree on NFS

    - by fungs
    I am currently experiencing very bad performance running the following on an NFS network folder:

        time find . | while read f; do test -L "$f" && f=$(readlink -m $f); grp="$(stat -c %G $f)"; perm="$(stat -c %A $f)"; done

    Question 1) Within the loop, permissions are checked using the variables grp and perm. Is there a way to lower the amount of disk I/O for these kinds of checks over the network (e.g. read all the metadata at once using find)?

    Question 2) It seems the NFS mount isn't tuned very well; the same operation over a similar network link via SSHFS takes only a third of the time. All parameters are auto-negotiated. Any suggestions?
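
    For comparison, a rough single-pass equivalent in Python (symlink handling simplified): one lstat per entry instead of spawning two stat processes per file, since process startup rather than disk I/O often dominates loops like the one above. GNU find can also emit this in one pass with -printf (something like find . -printf '%g %M %p\n').

        import grp, os, stat

        for dirpath, dirnames, filenames in os.walk("."):
            for name in dirnames + filenames:
                st = os.lstat(os.path.join(dirpath, name))
                try:
                    group = grp.getgrgid(st.st_gid).gr_name
                except KeyError:
                    group = str(st.st_gid)          # gid with no group entry
                perms = stat.filemode(st.st_mode)   # e.g. '-rw-r--r--'
                # ... apply the security checks to group/perms here ...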

    Read the article

  • Windows always logs in to a temporary profile (thinks it is on D: while it is on C:)

    - by asdf
    I have Windows on C: (Disk 0, Partition 1). When I start the machine it works fine until the login screen. When I log in, it displays "Preparing your desktop..." and logs me into a temporary profile. I then have to run explorer.exe manually using Task Manager. If I execute %SystemRoot%, it tells me that Windows could not find D:\Windows (while Windows is actually in C:). I have no such drive as D, so why does Windows think it is on D? I've tried the steps from "Bootmanager is missing" but they did not work. Bootrec /ScanOS from Windows Setup gives me: Total identified Windows installations: 0. Also note that Windows Setup correctly reports that Windows is installed on C, but Windows itself thinks it is on D.

    Read the article

  • swapon --all --verbose : 'read swap header failed: Invalid argument'

    - by user66088
    Recently I ran through EnableHibernateWithEncryptedSwap and ran the following command: swapon --all --verbose, and received: 'read swap header failed: Invalid argument'. How do I fix this? Here's some more pertinent output. Output of sudo fdisk -l:

        Disk /dev/sda: 80.0 GB, 80026361856 bytes
        255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00006d20

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      499711      248832   83  Linux
        /dev/sda2          501758   156301311    77899777    5  Extended
        /dev/sda5          501760   156301311    77899776   8e  Linux LVM

        Disk /dev/mapper/ubuntu--t10194-root: 75.5 GB, 75539415040 bytes
        255 heads, 63 sectors/track, 9183 cylinders, total 147537920 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/mapper/ubuntu--t10194-root doesn't contain a valid partition table

        Disk /dev/mapper/ubuntu--t10194-swap_1: 4227 MB, 4227858432 bytes
        255 heads, 63 sectors/track, 514 cylinders, total 8257536 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x08040000

        Disk /dev/mapper/ubuntu--t10194-swap_1 doesn't contain a valid partition table

        Disk /dev/mapper/cryptswap1: 4225 MB, 4225761280 bytes
        255 heads, 63 sectors/track, 513 cylinders, total 8253440 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0xd2236983

        Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table

    Thanks for any and ALL help!

    Read the article

  • Parallel.For System.OutOfMemoryException

    - by Martin Neal
    We have a fairly simple program that's used for creating backups. I'm attempting to parallelize it, but I am getting an OutOfMemoryException inside an AggregateException. Some of the source folders are quite large, and the program doesn't crash until about 40 minutes after it starts. I don't know where to start looking, so the code below is a near-exact dump of all the code, sans directory structure and exception-logging details. Any advice as to where to start looking?

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Threading.Tasks;

        namespace SelfBackup
        {
            class Program
            {
                static readonly string[] saSrc = {
                    "\\src\\dir1\\",
                    //...
                    "\\src\\dirN\\", //this folder is over 6 GB
                };
                static readonly string[] saDest = {
                    "\\dest\\dir1\\",
                    //...
                    "\\dest\\dirN\\",
                };

                static void Main(string[] args)
                {
                    Parallel.For(0, saDest.Length, i =>
                    {
                        string sDest = saDest[i]; // declaration missing from the original dump; restored here
                        try
                        {
                            if (Directory.Exists(sDest))
                            {
                                // Delete directory first so old stuff gets cleaned up
                                Directory.Delete(sDest, true);
                            }
                            // recursive function
                            clsCopyDirectory.copyDirectory(saSrc[i], sDest);
                        }
                        catch (Exception e)
                        {
                            // standard error logging
                            CL.EmailError();
                        }
                    });
                }
            }
        }

        ///////////////////////////////////////

        using System.IO;
        using System.Threading.Tasks;

        namespace SelfBackup
        {
            static class clsCopyDirectory
            {
                static public void copyDirectory(string Src, string Dst)
                {
                    Directory.CreateDirectory(Dst);

                    /* Copy all the files in the folder. If and when .NET 4.0 is
                       installed, change Directory.GetFiles to Directory.EnumerateFiles
                       for slightly better performance. */
                    Parallel.ForEach<string>(Directory.GetFiles(Src), file =>
                    {
                        /* An exception thrown here may be arbitrarily deep into this
                           recursive function. There's also a good chance that if one
                           copy fails here, so too will other files in the same
                           directory, so we don't want to spam out hundreds of error
                           e-mails, but we don't want to abort altogether. Instead, the
                           best solution is probably to throw back up to the original
                           caller of copyDirectory and move on to the next Src/Dst pair
                           by not catching any possible exception here. */
                        File.Copy(file,                                      // src
                                  Path.Combine(Dst, Path.GetFileName(file)), // dest
                                  true);                                     // bool overwrite
                    });

                    // Call this function again for every directory in the folder.
                    Parallel.ForEach(Directory.GetDirectories(Src), dir =>
                    {
                        copyDirectory(dir, Path.Combine(Dst, Path.GetFileName(dir)));
                    });
                }
            }
        }

    Read the article

  • fileToString keeps on returning ""

    - by karikari
    I managed to get this code to compile without error, but somehow it does not return the strings that I wrote inside file1.txt and file.txt, whose paths I pass through str1 and str2. My objective is to use this open-source library to measure the similarity between the strings contained in the two text files. Its Javadoc states:

        public static java.lang.StringBuffer fileToString(java.io.File f)
            private call to load a file and return its content as a string.
            Parameters: f - a file for which to load its content
            Returns: a string containing the file's contents, or "" if empty or not present

    Here is my modified code trying to use the FileLoader function, but it fails to return the strings inside the files; the end result keeps returning "". I do not know where my fault is:

        package uk.ac.shef.wit.simmetrics;

        import java.io.File;
        import uk.ac.shef.wit.simmetrics.similaritymetrics.*;
        import uk.ac.shef.wit.simmetrics.utils.*;

        public class SimpleExample {

            public static void main(final String[] args) {
                if (args.length != 2) {
                    usage();
                } else {
                    String str1 = "arg[0]";
                    String str2 = "arg[1]";
                    File objFile1 = new File(str1);
                    File objFile2 = new File(str2);
                    FileLoader obj1 = new FileLoader();
                    FileLoader obj2 = new FileLoader();
                    str1 = obj1.fileToString(objFile1).toString();
                    str2 = obj2.fileToString(objFile2).toString();
                    System.out.println(str1);
                    System.out.println(str2);
                    AbstractStringMetric metric = new MongeElkan();
                    // this single line performs the similarity test
                    float result = metric.getSimilarity(str1, str2);
                    // outputs the results
                    outputResult(result, metric, str1, str2);
                }
            }

            private static void outputResult(final float result, final AbstractStringMetric metric,
                                             final String str1, final String str2) {
                System.out.println("Using Metric " + metric.getShortDescriptionString()
                        + " on strings \"" + str1 + "\" & \"" + str2
                        + "\" gives a similarity score of " + result);
            }

            private static void usage() {
                System.out.println("Performs a rudimentary string metric comparison from the arguments given.\n\tArgs:\n\t\t1) String1 to compare\n\t\t2)String2 to compare\n\n\tReturns:\n\t\tA standard output (command line of the similarity metric with the given test strings, for more details of this simple class please see the SimpleExample.java source file)");
            }
        }

    Read the article

< Previous Page | 64 65 66 67 68 69 70 71 72 73 74 75  | Next Page >