Search Results

Search found 10693 results on 428 pages for 'raw disk'.


  • How to convert a raw disk image to a copy-on-write image based on another image for use with kvm

    - by Jean-Paul Calderone
    I have a virtual Windows machine running on kvm. Presently it has a 90GB raw disk image. I would like to clone this VM without having to keep two copies of the 90GB raw disk image around. It seems like a good approach for doing this is to make two new qcow or qcow2 images based on the original. First I converted the raw image to a qcow2 image:

        qemu-img convert -O qcow2 basewindowsxp.img basewindowsxp.qcow2

    Then I tried creating a new image backed by this:

        qemu-img create -F qcow2 -f qcow2 -b `pwd`/basewindowsxp.qcow2 windowsxp-1.qcow2

    Then I used virt-manager to point the original VM at windowsxp-1.qcow2. However, when I try to start up the VM in this new configuration, virt-manager reports an error:

        Traceback (most recent call last):
          File "/usr/share/virt-manager/virtManager/engine.py", line 588, in run_domain
            vm.startup()
          File "/usr/share/virt-manager/virtManager/domain.py", line 150, in startup
            self._backend.create()
          File "/usr/lib/python2.6/dist-packages/libvirt.py", line 300, in create
            if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
        libvirtError: internal error unable to start guest: qemu: could not open disk image /var/lib/libvirt/images/windowsxp-1.qcow2

    The error suggests that the filename was misspecified or that the filesystem permissions are too restrictive, but neither of these is the case:

        $ ls -l /var/lib/libvirt/images/windowsxp-1.qcow2
        -rwxrwxrwx 1 root root 262144 2010-05-27 08:32 /var/lib/libvirt/images/windowsxp-1.qcow2

    Why won't virt-manager start this VM?
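
    A diagnostic sketch in Python (paths taken from the question; not a confirmed fix): qemu cannot open an overlay whose recorded backing-file path does not resolve, so the first thing to verify is which backing path windowsxp-1.qcow2 actually stores and whether it exists:

        import os
        import subprocess

        overlay = "/var/lib/libvirt/images/windowsxp-1.qcow2"

        # 'qemu-img info' prints the backing file exactly as recorded
        # in the overlay's header.
        info = subprocess.run(["qemu-img", "info", overlay],
                              capture_output=True, text=True, check=True).stdout
        print(info)

        for line in info.splitlines():
            if line.startswith("backing file:"):
                backing = line.split(":", 1)[1].split("(")[0].strip()
                # A relative backing path is resolved against the overlay's
                # directory, not the current working directory.
                if not os.path.isabs(backing):
                    backing = os.path.join(os.path.dirname(overlay), backing)
                print("backing resolves to:", backing,
                      "exists:", os.path.exists(backing))

    On Ubuntu it is also worth ruling out the libvirt AppArmor profile, which can deny access to a backing file even when the POSIX permissions are wide open.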


  • How do you find the disk size of a Postgres table and its indexes

    - by mmrobins
    I'm coming to Postgres from Oracle and looking for a way to find the table and index size in terms of bytes/MB/GB/etc., or even better, the size for all tables. In Oracle I had a nasty long query that looked at user_lobs and user_segments to give back an answer. I assume in Postgres there's something I can use in the information_schema tables, but I'm not seeing where. Thanks in advance.
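
    The usual Postgres answer, sketched with the psycopg2 driver (the connection string is hypothetical): pg_relation_size() reports the table alone, pg_total_relation_size() the table plus its indexes and TOAST data, and pg_size_pretty() formats bytes as kB/MB/GB:

        import psycopg2

        conn = psycopg2.connect("dbname=mydb")  # hypothetical database
        cur = conn.cursor()
        cur.execute("""
            SELECT relname,
                   pg_size_pretty(pg_total_relation_size(oid)) AS total,
                   pg_size_pretty(pg_relation_size(oid))       AS table_only
              FROM pg_class
             WHERE relkind = 'r'
             ORDER BY pg_total_relation_size(oid) DESC;
        """)
        for name, total, table_only in cur.fetchall():
            print(name, total, table_only)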


  • Abort a slow flush to disk after write?

    - by Therealstubot
    Is there a way to abort a Python write operation in such a way that the OS doesn't feel it's necessary to flush the unwritten data to the disk? I'm writing data to a USB device, typically many megabytes. I'm using 4096 bytes as my block size on the write, but it appears that Linux caches up a bunch of data early on and writes it out to the USB device slowly. If at some point during the write my user decides to cancel, I want the app to just stop writing immediately. I can see that there's a delay between when the data stops flowing from the application and when the USB activity light stops blinking: several seconds, up to about 10 seconds typically. I find that the app is hanging in the close() method, I'm assuming waiting for the OS to finish writing the buffered data. I call flush() after every write, but that doesn't appear to have any impact on the delay. I've scoured the Python docs for an answer but have found nothing.
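
    A sketch of the standard workaround, assuming Linux (it cannot discard data already buffered, only keep the buffer small): flush() empties only Python's userspace buffer, so follow it with os.fsync(), which pushes the kernel's cache to the device. The kernel then never accumulates megabytes of dirty pages, and a cancel takes effect within roughly one block of writing:

        import os

        BLOCK = 4096

        def copy_with_cancel(src_path, dst_path, cancelled):
            """cancelled is a zero-argument callable polled between blocks."""
            with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
                while not cancelled():
                    block = src.read(BLOCK)
                    if not block:
                        break
                    dst.write(block)
                    dst.flush()             # empty Python's buffer into the OS...
                    os.fsync(dst.fileno())  # ...and the OS cache onto the device

    Syncing every 4096 bytes is slow on USB media, so in practice one would fsync every few megabytes; the interval is the trade-off between throughput and how long a cancel still has to wait.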


  • Removing multiple files from a Git repo that have already been deleted from disk

    - by Codebeef
    I have a Git repo from which I have deleted four files using rm (not git rm), and my Git status looks like this:

        # deleted: file1.txt
        # deleted: file2.txt
        # deleted: file3.txt
        # deleted: file4.txt

    How do I remove these files from Git without having to manually go through and add each file like this:

        git rm file1 file2 file3 file4

    Ideally, I'm looking for something that works in the same way that git add . does, if that's possible.
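
    For reference, git can produce the list itself: git ls-files --deleted names every tracked file missing from the working tree, and plain git add -u also stages deletions. A sketch of the scripted version, assuming git is on PATH:

        import subprocess

        def stage_deletions(repo="."):
            # Ask git which tracked files are gone from the working tree.
            out = subprocess.run(
                ["git", "ls-files", "--deleted"],
                cwd=repo, capture_output=True, text=True, check=True,
            ).stdout
            deleted = [p for p in out.splitlines() if p]
            if deleted:
                # git rm happily stages removals for already-deleted files.
                subprocess.run(["git", "rm", "--"] + deleted, cwd=repo, check=True)

        stage_deletions()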


  • Combining cache methods - memcache/disk based

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we have thought of right now: using memcache on all major queries and leaving it alone to do what it does best, or using memcache for the most commonly retrieved data and combining it with a standard harddrive-stored cache for further usage. The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to increase the nodes. Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on memcache alone and upgrade the memory as the load increases with the number of users? Thanks a lot!
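
    The combined approach in miniature - a sketch only, in Python rather than the site's PHP, with a hypothetical cache directory and the pymemcache client assumed: look in memcache first, fall back to the disk tier, and promote disk hits back into memory so hot keys migrate to the fast tier on their own:

        import os
        import pickle

        from pymemcache.client.base import Client

        mc = Client(("127.0.0.1", 11211))
        DISK_DIR = "/var/cache/app"  # hypothetical path; must exist,
                                     # keys assumed filesystem-safe

        def cache_get(key):
            value = mc.get(key)
            if value is not None:
                return pickle.loads(value)
            path = os.path.join(DISK_DIR, key)
            if os.path.exists(path):
                with open(path, "rb") as f:
                    blob = f.read()
                mc.set(key, blob, expire=300)  # promote the hot key to memory
                return pickle.loads(blob)
            return None

        def cache_set(key, obj, expire=300):
            blob = pickle.dumps(obj)
            mc.set(key, blob, expire=expire)      # fast tier, bounded by RAM
            with open(os.path.join(DISK_DIR, key), "wb") as f:
                f.write(blob)                     # durable tier, bounded by disk

    The design point: memory stays a bounded, evicting cache of the hottest items, while the disk tier absorbs growth as the user count rises - exactly the compromise described above.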


  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code-monkey that's increasingly taken on SysAdmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a tier IV data center. (Literally across the street.) This meant doing much more ourselves--things like networking, storage and monitoring. As part of the big move, to replace our leased direct attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and NFS. It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also set up a Cacti monitoring system. Recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault. It's been a fun and educational experience. My boss is happy (we saved bucket loads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday.

    Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate; there was no traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over; that went well. Within 20 minutes everything was confirmed to be back up and running perfectly.

    Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. First thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, back up and running. In /var/log/syslog I found this scary looking entry:

        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors
        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
        Nov 15 06:49:45 umbilo smartd[2827]: # 1  Short offline     Completed: read failure  90%        6576             3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 2  Short offline     Completed: read failure  90%        6087             3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 3  Short offline     Completed: read failure  10%        5901             656821791
        Nov 15 06:49:45 umbilo smartd[2827]: # 4  Short offline     Completed: read failure  90%        5818             651637856

    So I went to check the Cacti graphs for the disks in the array. Here we see that, yes, disk 7 is slipping away just like syslog says it is. But we also see that disk 8's SMART Read Errors are fluctuating. There are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate to the high IO wait times! My interpretation is that disk 8 is experiencing an odd hardware fault that results in intermittent long operation times, and that somehow this fault condition on the disk is locking up the entire array. Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array.

    The Question(s): How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt? Am I being naïve to think that the RAID card should have dealt with this? How can I prevent a single misbehaving disk from impacting the entire array? Am I missing something?
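
    One preventative idea, sketched rather than prescribed: poll each array member's SMART counters directly and alert on the first nonzero Current_Pending_Sector, instead of waiting for the controller or smartd's default checks to notice. The -d 3ware,N addressing on /dev/twa0 matches the controller described above; the port count and threshold are assumptions:

        import re
        import subprocess

        PORTS = range(12)  # assumed number of drives on this controller

        def pending_sectors(port):
            out = subprocess.run(
                ["smartctl", "-A", "-d", "3ware,%d" % port, "/dev/twa0"],
                capture_output=True, text=True,
            ).stdout
            # The raw value is the last column of the attribute line.
            m = re.search(r"Current_Pending_Sector.*?(\d+)\s*$", out, re.MULTILINE)
            return int(m.group(1)) if m else None

        for port in PORTS:
            n = pending_sectors(port)
            if n:
                print("ALERT: port %d has %d pending sectors" % (port, n))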


  • Referencing file on disk from NSManagedObject

    - by Kamchatka
    Hello, what would be the best way to name a file associated with an NSManagedObject? The NSManagedObject will hold the URL to this file, but I need to create a unique filename for the file. Is there some kind of autoincrement id that I could use? Should I use mktemp (even though it's not a temporary file), or try to convert the NSManagedObjectID to a filename? I fear the latter will contain special characters which might cause problems. What would you suggest?
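
    A common answer is a UUID: it needs no coordination, no autoincrement counter, and contains only filesystem-safe characters. Sketched in Python for brevity; Cocoa exposes the same idea through CFUUID or NSProcessInfo's globallyUniqueString:

        import os
        import uuid

        def unique_filename(directory, extension):
            # uuid4 is random; its hex form is 32 filesystem-safe characters
            return os.path.join(directory, uuid.uuid4().hex + extension)

        path = unique_filename("/tmp/attachments", ".dat")  # hypothetical directory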


  • Saving a PNG image using NSData writeToFile: stores corrupted data on the iPhone disk

    - by jAmi
    I have a number of images (PNG, GIF and JPG) in my application resource bundle. I want some images to be saved in my Documents directory, so I use:

        imgPath = [documentsDirectoryPath stringByAppendingPathComponent:@"myImage.png"];
        if (![fileMgr fileExistsAtPath:imgPath]) {
            [[fileMgr contentsAtPath:[[NSBundle mainBundle] pathForResource:@"myImage" ofType:@"png"]]
                writeToFile:imgPath atomically:NO];
        }

    This saves an image file to my desired path, but the file has an extra 300 bytes (of maybe junk data) in it, which results in a corrupted image. Am I doing something wrong here? This works in the simulator, but on the real device the image has some extra 300 bytes. Also, a GIF image gets copied nicely and works, but this problem occurs for PNG images.
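
    A likely lead worth verifying rather than a confirmed diagnosis: Xcode's device builds rewrite bundled PNGs (a pngcrush-style optimization), so the file inside the installed app can genuinely differ from the one in the project, while GIFs pass through untouched. Hashing the project copy against one pulled off the device shows whether the bytes changed before the copy code ever ran; a small sketch with hypothetical paths:

        import hashlib

        def digest(path):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            return h.hexdigest()

        # hypothetical paths: the PNG in the source tree vs. one copied off the device
        print(digest("project/myImage.png"))
        print(digest("device/myImage.png"))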


  • Has anybody used the WB B-tree library?

    - by Chris B
    I stumbled across the WB on-disk B-tree library: http://people.csail.mit.edu/jaffer/WB It seems like it could be useful for my purposes (swapping data to disk during very large statistical calculations that do not fit in memory), but I was wondering how stable it is. Reading the manual, it seems worryingly 'researchy' - there are sections labelled [NOT IMPLEMENTED] etc. But maybe the manual is just out of date. So, is this library usable? Am I better off looking at Tokyo Cabinet, MemcacheDB, etc.? By the way, I am working in Java.


  • ArrayList can't compare objects after they are loaded from disk

    - by Zka
    To make it easy, let's say I have an ArrayList allBooks containing "Book" objects and an ArrayList someBooks containing some but not all of the "Book" objects. Using the contains() method worked fine when I wanted to see if a book from one ArrayList was also contained in the other. The problem is that this no longer works once I save both of the ArrayLists to a .bin file and load them back when the program restarts. Doing the same test as before, contains() returns false even if the compared objects are the same (have the same info inside). I solved it by overriding the equals method and it works fine, but I want to know: why did this happen?
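
    The reason, in one line: before serialization both lists held references to the very same objects, so the default identity-based equals() matched; deserialization creates brand-new instances, so identity fails and contains() needs value equality. The same effect reproduced in Python (the Java story is identical):

        import pickle

        class Book:
            def __init__(self, title):
                self.title = title

            # Without __eq__, `in` falls back to identity comparison --
            # deleting these two methods reproduces the bug described above.
            def __eq__(self, other):
                return isinstance(other, Book) and self.title == other.title

            def __hash__(self):
                return hash(self.title)

        all_books = [Book("Dune"), Book("Hyperion")]
        some_books = pickle.loads(pickle.dumps([Book("Dune")]))  # round-trip to "disk"
        print(some_books[0] in all_books)  # True with __eq__, False without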


  • NULL pointer dereference in swiotlb_unmap_sg_attrs() on disk IO

    - by Inductiveload
    I'm getting an error I really don't understand when reading or writing files using a PCIe block device driver. I seem to be hitting an issue in swiotlb_unmap_sg_attrs(), which appears to be doing a NULL dereference of the sg pointer, but I don't know where this is coming from, as the only scatterlist I use myself is allocated as part of the device info structure and persists as long as the driver does. There is a stack trace to go with the problem. It tends to vary a bit in exact details, but it always crashes in swiotlb_unmap_sg_attrs(). I think it's likely I have a locking issue, as I am not sure how to handle the locks around the IO functions. The lock is already held when the request function is called; I release it before the IO functions themselves are called, as they need an (MSI) IRQ to complete. The IRQ handler updates a "status" value, which the IO function is waiting for. When the IO function returns, I then take the lock back up and return to request queue handling. The crash happens in blk_fetch_request() during the following:

        if (!__blk_end_request(req, res, bytes)) {
            printk(KERN_ERR "%s next request\n", DRIVER_NAME);
            req = blk_fetch_request(q);
        } else {
            printk(KERN_ERR "%s same request\n", DRIVER_NAME);
        }

    where bytes is updated by the request handler to be the total length of IO (the summed length of each scatter-gather segment).


  • Best data recovery tools?

    - by Nonick
    So due to a recent act of stupidity and bravado, I uttered the words "backups! who needs backups?!" and what followed was the tragic loss of 260GB of data. This scenario in particular requires me to recover a repartitioned hard disk, but I was wondering what tools people here use in general to recover lost data. I'm sure everyone has been there: accidentally overwriting files, resaving an old version, a computer crash, hard disk death, a user deleting an important document, etc. So I was thinking it might be an interesting point of discussion as to what you use to recover lost data. I apologise if this is considered irrelevant, but considering there have been a few recovery questions, I think this might be interesting.


  • MySQL: LOAD DATA reclaim disk space after delete

    - by Michael
    I have a DB schema composed of MyISAM tables, and I am interested in deleting old records from time to time from some of the tables. I know that DELETE does not reclaim the disk space, but, as I found in a description of the DELETE command, inserts may reuse the deleted space: "In MyISAM tables, deleted rows are maintained in a linked list and subsequent INSERT operations reuse old row positions." I am interested in whether the LOAD DATA command also reuses the deleted space. UPDATE: I am also interested in how the index space is reclaimed.
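
    Whatever LOAD DATA does with the free list, the reliable way to hand the space back (and rebuild the indexes) after large deletes on MyISAM is OPTIMIZE TABLE. A sketch using the mysql-connector-python driver, with hypothetical credentials and table name; Data_free in SHOW TABLE STATUS reports how many bytes are sitting reclaimable in the free list:

        import mysql.connector

        conn = mysql.connector.connect(user="app", database="mydb")  # hypothetical
        cur = conn.cursor()

        cur.execute("SHOW TABLE STATUS LIKE 'events'")  # hypothetical table
        row = cur.fetchone()
        print("reclaimable bytes (Data_free):", row[9])

        cur.execute("OPTIMIZE TABLE events")  # rewrites the table, reclaims space
        print(cur.fetchall())                 # OPTIMIZE returns a status result set

        conn.close()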


  • How to partition Seagate FreeAgent GoFlex 2TB hard disk?

    - by balki
    Hi, I bought a new Seagate 2TB external hard disk. I opened the drive's application in my virtual Windows machine and did the product registration using the application present on it. I have a few questions on how best to use the drive.

    The drive by default has some files and folders - setup.exe, System Volume Information, USB 3.0 PC Card Adapter, etc. I copied all the files to my laptop. Is it safe to delete these files? It has a dashboard for Windows which allows tuning power options, testing the drive, etc. Will I be able to use the dashboard if I put back all these files and mount the drive on Windows again?

    I want to partition and format the hard disk. The data I would like to store is: around 10 to 20GB files - VirtualBox images; around 4GB files - DVD images; other movies and personal files. What is the best filesystem for storing very huge files like 10 to 20GB, so that they are written and accessed fast and the drive's capacity is best used? If I leave one of the partitions as NTFS and set the others to different filesystems, will the disk mount on Windows, and will I be able to use the device's dashboard?

    Note: I don't need any encryption for my data. Any other advice on using the hard disk is also welcome.


  • Package upgrade on Ubuntu raid server and grub setup issue

    - by RecNes
    I have a remote Ubuntu 10.10 server running on a RAID system. I did a package upgrade last night for security reasons. During the upgrade, the grub installation screen appeared and asked me which partition I wanted to install grub on. The options were sda, sdb, md1 and md2. I decided to install it on both the sda and sdb disks. I am wondering: was that the right decision? If the machine gets rebooted, can it boot up safely? You can find the fstab mount points and fdisk output below.

    Fstab:

        proc      /proc     proc    defaults         0 0
        none      /dev/pts  devpts  gid=5,mode=620   0 0
        /dev/md0  none      swap    sw               0 0
        /dev/md1  /boot     ext3    defaults         0 0
        /dev/md2  /         ext3    defaults         0 0

    Fdisk:

        Disk /dev/sda: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00029bb5

           Device Boot   Start     End      Blocks  Id  System
        /dev/sda1             1     262     2102562  fd  Linux raid autodetect
        /dev/sda2           263     295      265072+ fd  Linux raid autodetect
        /dev/sda3           296   91201   730202445  fd  Linux raid autodetect

        Disk /dev/md0: 2152 MB, 2152923136 bytes
        2 heads, 4 sectors/track, 525616 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/md0 doesn't contain a valid partition table

        Disk /dev/md1: 271 MB, 271319040 bytes
        2 heads, 4 sectors/track, 66240 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/md1 doesn't contain a valid partition table

        Disk /dev/md2: 747.7 GB, 747727224832 bytes
        2 heads, 4 sectors/track, 182550592 cylinders
        Units = cylinders of 8 * 512 = 4096 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/md2 doesn't contain a valid partition table

        Disk /dev/sdb: 750.2 GB, 750156374016 bytes
        255 heads, 63 sectors/track, 91201 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00088969

           Device Boot   Start     End      Blocks  Id  System
        /dev/sdb1             1     262     2102562  fd  Linux raid autodetect
        /dev/sdb2           263     295      265072+ fd  Linux raid autodetect
        /dev/sdb3           296   91201   730202445  fd  Linux raid autodetect


  • Should I use "Raid 5 + spare" or "Raid 6"?

    - by Trevor Boyd Smith
    What is "Raid 5 + Spare" (excerpt from User Manual, Sect 4.17.2, P.54): RAID5+Spare: RAID 5+Spare is a RAID 5 array in which one disk is used as spare to rebuild the system as soon as a disk fails (Fig. 79). At least four disks are required. If one physical disk fails, the data remains available because it is read from the parity blocks. Data from a failed disk is rebuilt onto the hot spare disk. When a failed disk is replaced, the replacement becomes the new hot spare. No data is lost in the case of a single disk failure, but if a second disk fails before the system can rebuild data to the hot spare, all data in the array will be lost. What is "Raid 6" (excerpt from User Manual, Sect 4.17.2, P.54): RAID6: In RAID 6, data is striped across all disks (minimum of four) and a two parity blocks for each data block (p and q in Fig. 80) is written on the same stripe. If one physical disk fails, the data from the failed disk can be rebuilt onto a replacement disk. This Raid mode can support up to two disk failures with no data loss. RAID 6 provides for faster rebuilding of data from a failed disk. Both "Raid 5 + spare" and "Raid 6" are SO similar ... I can't tell the difference. When would "Raid 5 + Spare" be optimal? And when would "Raid 6" be optimal"? The manual dumbs down the different raid with 5 star ratings. "Raid 5 + Spare" only gets 4 stars but "Raid 6" gets 5 stars. If I were to blindly trust the manual I would conclude that "Raid 6" is always better. Is "Raid 6" always better?


  • VMware Workstation 10 connected to a remote server (Debian host, Windows XP guest) does not allow raw disk access or shared folders

    - by Alex
    The setup: Ubuntu with a local VMware Workstation 10 (everything works locally) connects (File - Connect to Server) to a Debian server running the same VMware Workstation 10 (Windows XP guest). The Debian setup does not allow raw disk access or shared folders (most of the options simply do not exist): there is no shared folder option and no physical disk option. I use the root user for this machine, on a default install. I've tried to add a shared folder from the command line - it does not work. How do I enable shared folders or raw disk access? I have created a new Windows 8 64-bit template from scratch - I cannot use a physical HDD with it either, and there is no SharedFolder option. I think this is something about the security policy of the remote server.


  • Why does OS X insist on spinning up all external drives when loading a file from the local drive?

    - by Phillip Oldham
    Why does OS X insist on spinning up all the attached external drives (FireWire, USB) when loading a file from the local (internal) drive? It's driving me insane that I have to wait for 3 attached drives (1 backup, 2 media) to spin up -- a total of 20s -- to access a file that is located only on my local/internal drive. There is no obvious need to access the other drives; nothing is being read from them and nothing need be written. Examples: QuickTime X opening a file from the local HDD, or starting Caffeine, an app which doesn't access any other files at all. Can I tell OS X to only spin those drives up when actually accessing them?


  • CloneZilla PXE Boot Without NFS

    - by John
    I am trying to set up CloneZilla to be bootable via PXE without using NFS. I do not have NFS running on our PXE server and would like to keep it that way. However, most of the information that I have found online indicates that you need to set up NFS in order to PXE boot CloneZilla. I believe that I am pretty close to getting it to work, but am not sure where to go next. Listed below are the different PXE menu option configurations that I have used so far:

        LABEL Clonezilla Live
        MENU LABEL Clonezilla Live
        KERNEL utilities/clonezilla/vmlinuz
        APPEND initrd=utilities/clonezilla/initrd.img boot=live live-config noswap nolocales edd=on nomodeset ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no"

    I have also tried the following append lines, without success:

        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=tftp://10.130.155.23/filesystem.squashfs

        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=normal nomodeset nosplash fetch=tftp://10.130.155.23/filesystem.squashfs

    Each of them has resulted in a no-go with the following error: "Unable to find a live file system on the network". It looks like it gets to the point of trying to load the filesystem.squashfs file, hangs, and then throws the error. Any help would be greatly appreciated.


  • What is acceptable datastore latency on a VMware ESXi host?

    - by BeowulfNode42
    Looking at the performance figures for our existing VMware ESXi 4.1 host, the Datastore/Real-time performance data shows: write latency avg 14 ms, max 41 ms; read latency avg 4.5 ms, max 12 ms. People don't seem to be complaining too much about it being slow with those numbers, but how much higher could they get before people found them to be a problem? We are reviewing our head office systems due to running low on storage space, and are tossing up between buying a second VM host with DAS or buying some sort of NAS for SMB file shares in the near term and maybe running VMs from it in the longer term. Currently we have just under 40 staff at head office with 9 smaller branches spread across the country. Head office is running in an MS RDS session-based environment with Linux ERP and mail systems. In total there are 22 VMs on a single host with DAS made from a RAID 10 of 6x 15k SAS disks.


  • MD3200 - 3 to 4x less throughput than MD1220. Am I missing something here?

    - by Igor Polishchuk
    I have two R710 servers with similar configurations. The one in my office has an MD1220 attached; the one in my hosting vendor's datacenter has an MD3200. I'm getting significantly worse throughput from the MD3200 at my vendor's setup. I'm mostly interested in sequential writes, and I'm getting these results in bonnie++ and dd tests:

        Seq. writes on MD1220 in my office:   1.1 GB/s - bonnie++, 1.3 GB/s - dd
        Seq. writes on MD3200 at my vendor's: 240 MB/s - bonnie++, 310 MB/s - dd

    Unfortunately, I could not test exactly the same configurations, but the two I have should be comparable; if anything, my good-performing environment is cheaper than the badly performing one. I expect at least similar throughput from these two setups. My vendor cannot really help me. Hopefully somebody more familiar with DAS performance can look at it and tell me if I'm missing something here and my expectations are too high. To summarize, the question is: is it reasonable to expect about 100MB/s of sequential write throughput per each couple of drives in RAID 10 on the MD3200? Is there any trick to enable such performance in the MD3200 with dual controllers, as opposed to the simple MD1220 with a single H800 adapter?

    More details about the configurations. The good one in my office:

        Dell R710, 2 CPUs X5650 @ 2.67GHz, 12 cores, 96GB DDR3
        OS: RHEL 5.5, kernel 2.6.18-194.26.1.el5 x86_64
        20x 300GB 2.5" SAS 10K in a single RAID 10, 1MB chunk size,
        on MD1220 + Dell H800 I/O controller with 1GB cache in the host

    The not-so-good one at my vendor's:

        Dell R710, 2 CPUs L5520 @ 2.27GHz, 8 cores, 144GB DDR3
        OS: RHEL 5.5, kernel 2.6.18-194.11.4.el5 x86_64
        20x 146GB 2.5" SAS 15K in a single RAID 10, 512KB chunk size,
        Dell MD3200, 2 I/O controllers in the array with 1GB cache each

    Additional information: I've also run the same tests on the same vendor's host, but with different storage: two RAIDs of 14x 146GB 15K RPM drives in RAID 10, striped together at the OS level, on MD3000+MD1000. The performance was about 25% worse than on the MD3200 despite having more drives. When I ran similar tests on the internal storage of my vendor's host (2x 146GB 15K RPM drives in RAID 1, Perc 6i) I got about 128MB/s seq. writes - just two internal drives gave me about half of 20 drives' throughput on the MD3200. The random I/O performance of the MD3200 setup is OK; it gives me at least 1300 IOPS. I mostly have problems with sequential I/O throughput. Thank you for looking into it. Regards, Igor

