Search Results

Search found 1864 results on 75 pages for 'raid'.

Page 7 of 75

  • Disaster After Removing Two HDDs From LaCie RAID 0 Case

    - by John
    This is the second time this has happened. I own a LaCie IDE RAID 0 enclosure and the RAID went bad. The system gave me a warning that data could be read from the RAID but that nothing could be written, and to remove the data ASAP. I did that, then erased and reinitialized the RAID. The system reported it was fine, no issues. I wrote to the RAID again and the system reported the same issue. So I removed the drives and tested them individually, thinking one must have gone bad. Sure enough, one HDD reported all bad blocks, every single one after the Master Boot Record. I didn't think much about it because of the age of the drives, 5 years old. So I bought two new drives, plugged them in, and started up the RAID again. Exactly the same thing happened: all was fine after initializing the RAID, and then the next day, after powering on the RAID, the exact same issue. The HDD sitting in the same position as the first "bad" HDD reported all bad blocks. Obviously this is an issue with LaCie's bridge board, not with the drives. No utility I have used has been able to bring this HDD back to life. I thought I would just copy the MBR from the good drive to the new one using a sector editor, but I am hesitant. Is it possible the firmware on the HDD has been corrupted by the LaCie bridge board? What else could be the cause of such an issue? How can I fix this drive?

    Read the article

  • Debian software RAID 1: boot from both disks

    - by bsreekanth
    I newly installed Debian Squeeze with software RAID, following the steps also given in this thread. I have 2 HDDs with 500 GB each. For each of them I created 3 partitions (/boot, / and swap): I selected the hard drive and created a new partition table; I created a new partition of 1 GB, specified to use it as a physical volume for RAID, used it for /boot, and enabled the bootable flag; I created another partition of 480 GB, specified to use it as a physical volume for RAID, and used it for /; I created another partition and used it for swap. Then the RAID configuration: through the Configure RAID menu - create MD device - (2 for the number of drives, 0 for spare devices). Next, select the partitions you want to be members of /dev/md0; I selected /dev/sda1 and /dev/sdb1 (for /boot). Next, select the partitions you want to be members of /dev/md1; I selected /dev/sda6 and /dev/sdb6 (for /). No RAID for the swap partitions. 'Finish partitioning and write changes to disk', then finish the rest of the install like normal. Everything is OK now, except I am not sure how to test my RAID config. When I pull the power of one HDD, it only boots from one disk. I read in some forum that I may have to install GRUB manually on the other. In Debian Squeeze there is no grub command, and I am not sure how to make my software RAID bootable from both disks. Also, please comment on my steps above - anything unusual? I configured the /boot partitions of both disks with the bootable flag set; not sure whether that is OK. Thanks, Bsr
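
    A minimal sketch of getting the loader onto both halves of the /boot mirror, assuming the two disks really are /dev/sda and /dev/sdb as described. Squeeze uses GRUB 2 (grub-pc), which is why there is no interactive grub shell; grub-install does the job instead:

      # write GRUB to the MBR of each disk in the mirror
      grub-install /dev/sda
      grub-install /dev/sdb
      update-grub

      # optionally record both disks as install targets, so future kernel or
      # GRUB upgrades keep both MBRs in sync
      dpkg-reconfigure grub-pc

    With that in place, either disk should be able to bring the box up on its own, which is also the easiest way to test the setup (shut down, disconnect one disk, boot).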

    Read the article

  • Rebuild Apple RAID set

    - by Clinton Blackmore
    We have a Mac Pro tower with an Apple RAID card in it using third party drives. When one drive failed, we replaced it and the RAID 5 set was nearly done rebuilding when the computer was rebooted. It did not come back up. We are now booting up off of a different internal volume, and have three (third-party) drives of identical spec (including revision and firmware) in the box. One of the drives is a global spare; the other two are recognized as belonging to a RAID set but are in "Roaming" mode. The intention is to recreate the three-drive RAID set using the data on the two drives that are good. When we tell the system to create a RAID 5 using the three drives, it tells us that it'll create a RAID set but everything will be lost. There are no obvious options in Apple's RAID Utility to rebuild a RAID using the two good drives and incorporating the third drive, and we've looked through the options for the raidutil command. Fortunately, all important data is backed up, and we can rebuild from scratch, but is there any way to make the RAID set work again?

    Read the article

  • RAID administration in Debian Lenny

    - by Siim K
    I've got an old box that I don't want to scrap yet because it's got a nice working 5-disk RAID assembly. I want to create 2 arrays: RAID 1 with 2 disks and RAID 5 with the other 3 disks. The RAID card is an Intel SRCU31L. I can create the RAID 1 volume in the console that you access with Ctrl+C at startup, but it only allows the creation of one volume, so I can't do anything with the 3 remaining disks. I installed Debian Lenny on the RAID 1 volume and it worked out nicely. What utilities could I now use to create/manage the RAID volumes in Debian Linux? I installed the raidutils package but get an error when trying to fetch a list: #raidutil -L controller or #raidutil -L physical # raidutil -L controller osdOpenEngine : 11/08/110-18:16:08 Fatal error, no active controller device files found. Engine connect failed: Open What could I try to get this thing working? Can you suggest any other tools? The command lspci -vv gives me this about the controller: 00:06.1 I2O: Intel Corporation Integrated RAID (rev 02) (prog-if 01) Subsystem: Intel Corporation Device 0001 Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 64, Cache Line Size: 32 bytes Interrupt: pin A routed to IRQ 26 Region 0: Memory at f9800000 (32-bit, prefetchable) [size=8M] [virtual] Expansion ROM at 30020000 [disabled] [size=64K] Capabilities: <access denied> Kernel driver in use: PCI_I2O Kernel modules: i2o_core
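
    One thing worth checking first (an assumption on my part, not a confirmed fix): the DPT/Adaptec raidutils talk to the controller through the I2O configuration interface, so the i2o_config module and its device node have to be present before raidutil can find an "active controller":

      # load the I2O configuration interface and look for its device node
      modprobe i2o_config
      ls -l /dev/i2o*            # a control node such as /dev/i2o/ctl should appear
      raidutil -L controller     # retry the listing once the node exists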

    Read the article

  • How to get an inactive RAID device working again?

    - by Jonik
    After booting, my RAID1 device (/dev/md_d0 *) sometimes goes into some funny state and I cannot mount it. * Originally I created /dev/md0 but it has somehow changed itself into /dev/md_d0. # mount /opt mount: wrong fs type, bad option, bad superblock on /dev/md_d0, missing codepage or helper program, or other error (could this be the IDE device where you in fact use ide-scsi so that sr0 or sda or so is needed?) In some cases useful info is found in syslog - try dmesg | tail or so The RAID device appears to be inactive somehow: # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md_d0 : inactive sda4[0](S) 241095104 blocks # mdadm --detail /dev/md_d0 mdadm: md device /dev/md_d0 does not appear to be active. The question is, how do I make the device active again (using mdadm, I presume)? (Other times it's alright (active) after boot, and I can mount it manually without problems. But it still won't mount automatically even though I have it in /etc/fstab: /dev/md_d0 /opt ext4 defaults 0 0 So a bonus question: what should I do to make the RAID device automatically mount at /opt at boot time?) This is an Ubuntu 9.10 workstation. Background info about my RAID setup is in this question.
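
    A sketch of one way to kick the array back into shape, assuming the only visible member really is /dev/sda4 as /proc/mdstat shows (the --run flag is what lets a degraded mirror start):

      # stop the half-assembled device, then re-assemble and start it
      mdadm --stop /dev/md_d0
      mdadm --assemble --run /dev/md_d0 /dev/sda4
      cat /proc/mdstat                 # should now show the array as active

      # record the array so assembly (and the fstab mount) is repeatable at boot
      mdadm --examine --scan >> /etc/mdadm/mdadm.conf
      update-initramfs -u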

    Read the article

  • Software RAID 1 Configuration

    - by Corve
    I created a software RAID 1 quite a while ago and it has always seemed to work for me. However, I am not completely sure that I have configured everything right and do not have the experience to check, so I would be very grateful for some advice or just verification that all seems right so far. I am using Linux Fedora 20 (32-bit, with plans to upgrade to 64-bit). The RAID 1 should consist of two 1 TB SATA hard drives. This is the output of mdadm --detail /dev/md0: /dev/md0: Version : 1.2 Creation Time : Sun Jan 29 11:25:18 2012 Raid Level : raid1 Array Size : 976761424 (931.51 GiB 1000.20 GB) Used Dev Size : 976761424 (931.51 GiB 1000.20 GB) Raid Devices : 2 Total Devices : 1 Persistence : Superblock is persistent Update Time : Sat Jun 7 10:38:09 2014 State : clean, degraded Active Devices : 1 Working Devices : 1 Failed Devices : 0 Spare Devices : 0 Name : argo:0 (local to host argo) UUID : 1596d0a1:5806e590:c56d0b27:765e3220 Events : 996387 Number Major Minor RaidDevice State 0 0 0 0 removed 1 8 0 1 active sync /dev/sda The RAID is mounted successfully: friedrich@argo:~ ? sudo mount -l | grep md0 /dev/md0 on /mnt/raid type ext4 (rw,relatime,data=ordered) Basically my questions are: Why do I only have 1 active device? What does the "removed" state at the bottom mean? Also, I noticed some strange error messages on the console at system start and shutdown, always repeating in the background when I switch with Ctrl + Alt + F2: ... ata2: irq_stat 0x00000040 connection status changed ata2: SError: { CommWake DevExch } ata2: COMRESET failed (errno=-32) ata2: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen ata2: irq_stat 0x00000040 connection status changed ata2: SError: { CommWake DevExch } ata2: exception Emask 0x10 SAct 0x0 SErr 0x4040000 action 0xe frozen ... Are these errors related to the RAID? Something seems wrong with the SATA devices. Altogether the system works (I can read and write to the mounted RAID), but I always get these strange errors on startup/shutdown (and probably always in the background). Thanks for your help
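
    A sketch of putting the missing half back into the mirror, assuming the second 1 TB disk is /dev/sdb (that name is a guess; check lsblk/dmesg first, and note the ata2 errors suggest the second drive, its cable or its port may be the real problem and worth fixing before a resync):

      # check what the kernel and SMART currently think of the second disk
      cat /proc/mdstat
      smartctl -a /dev/sdb             # from smartmontools

      # re-add it to the mirror and watch the resync
      mdadm /dev/md0 --add /dev/sdb
      watch cat /proc/mdstat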

    Read the article

  • LSI1068E hidden drives after failed RAID volume creation

    - by silk
    We are using the LSI 1068E RAID chipset with SAS drives. We added new drives to the system and tried to create a new RAID volume with lsiutil; unfortunately, the creation failed. The problem is that now we do not have the new RAID volume, and the disks 'disappeared' and are not available as targets for RAID. lsiutil option 8 (scan for devices) does not display these disks at all. lsiutil option 16 (display attached devices) does list them as targets. lsiutil option 21+30 (create raid) does not list these disks. Just after inserting them into the enclosure, these disks appeared in the system, as expected. During the RAID creation the kernel logged: Mar 4 08:40:02 kilo kernel: [57555.687946] mptbase: ioc0: RAID STATUS CHANGE for PhysDisk 2 id=0 Mar 4 08:40:02 kilo kernel: [57555.687978] mptbase: ioc0: PhysDisk has been created Mar 4 08:40:02 kilo kernel: [57555.695438] scsi target0:0:2: mptsas: ioc0: RAID Hidding: fw_channel=0, fw_id=0, physdsk 2, sas_addr 0x5000c50008ebe5fd for both of them, again as expected. Unfortunately they did not reappear even though the volume was not created. The situation is the same in the controller's BIOS after a reboot. Taking the disks out and inserting them in different slots did not help, either. Has anyone seen a similar problem and knows how to 'get back' our disks?

    Read the article

  • Matched or unmatched drives for RAID arrays?

    - by Will
    Looking around, there is conflicting information on this, with some strongly suggesting one or the other. From my understanding, the issue with matched drives is that the wear on both drives is more or less the same, so the potential for the second drive failing with, or very soon after, the first is pretty high. People also claim matched drives give substantially higher performance. However, assuming the unmatched drives are more or less the same (e.g. two 1 TB SATA II 7200 rpm drives with 32 MB cache), would the minor differences between, say, a Seagate and a Western Digital one (say one has a 128 MB/s read rate and the other a 150 MB/s read rate, as well as I guess various other minor differences) actually cause any notable performance loss, i.e. potentially worse than two matched 128 MB/s drives? Or does RAID not really care and give you essentially an optimal solution (e.g. up to 278 MB/s total read speed for RAID 0 and 1), and similarly for other RAID levels with more "unmatched" drives (5 and 1+0 come to mind as possibilities)? Also, I couldn't find much info on how this differs between different RAID setups, e.g. RAID 0 or RAID 1, software or hardware RAID, etc. I'm assuming such things have an effect, and that it's not all the same for RAID in general?
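
    A back-of-the-envelope illustration of the numbers above (a rough model, not a benchmark): RAID 0 stripes sequential reads across both members in lockstep, so the slower drive sets the pace.

      matched pair    (128 + 128 MB/s): about 2 x 128 = 256 MB/s sequential read
      mismatched pair (128 + 150 MB/s): still paced by the slower member, so about 256 MB/s
      naive sum       (128 + 150 MB/s): 278 MB/s, generally not reachable with lockstep striping

    In other words, the mismatch mostly forfeits the extra ~22 MB/s per drive that a matched pair of the faster model would have offered, rather than performing worse than a matched pair of the slower one.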

    Read the article

  • Swapping RAID sets in and out of the same controller

    - by hazymat
    This is a really simple question, and the answer is probably encoded in various wikipedia articles, however my question is reasonably specific, and I need a bulletproof answer! I'm not sure if my question pertains to hardware RAID in general, or to the specific RAID controller I'm working on. Either way it is the Dell SAS 6/iR (this is an LSI sas1068e chipset). I simply want to: remove a set of striped (RAID 0) disks from this RAID controller in a server put in another set of disks, and create a RAID 1 array (or create a new 'virtual disk', as they call it in the SAS 6/iR manual) Do stuff with the new RAID 1 array Have the option of putting back the old set of disks (the RAID 0 striped ones) I am quite sure this is possible, but I need some form of reliable, evidence-based answer as it's for a client of mine, and I need to migrate their data safely. The question: can I actually do the above? Does the RAID configuration get stored on the disks themselves, or in the hardware controller? Is any data stored in the hardware controller? If there is any chance I cannot completely restore operation of the first set of disks I removed, then I need to know about it! The manual alludes to the answer to this question (see page 45 of this document), and talks about activating an array of disks. I just need someone to confirm I can definitely do the above. See, simple question, right? :)

    Read the article

  • RAID 0 performance gains?

    - by NickAldwin
    I'm building a new computer over the summer. I'm fairly competent in computer hardware, and am thus building the computer from scratch. I have everything planned out, but I was wondering about RAID. I asked which RAID I should use earlier, but now that it's pretty clear that RAID 1 isn't really that great, I think I'll go with cloud-backup instead of disk-redundancy. However, I still face a choice: use two 1TB drives as two 1TB drives, or combine them into a RAID 0 striped array. Is there any performance gain at all? I know that if one drive dies, everything is gone, so is the performance gain worth it? I'm building a pretty advanced computer, with SLI video cards and a fast CPU, so I'm thinking RAID 0 would give me some good hard drive performance. From your experience, is RAID 0 viable?

    Read the article

  • Create a mirror software RAID with a bad-blocks HDD. How to check data integrity?

    - by rumburak
    There is an error in the System event log like this one: "The device, \Device\Harddisk1\DR1, has a bad block." Because of the above, I created a RAID 1 with this disk and another one. I'm using Windows Server 2008 R2 software RAID volumes. The volume in Disk Manager is marked as "Failed Redundancy" and "At Risk". I can issue "Reactivate Disk" and it starts to resync, but after a while it stops and returns to the previous state. It stops resyncing at a bad block on the old disk and logs the same error in the System event log. The old disk's status is Errors, the new disk's status is Online. How can I check that there is an exact copy of the old disk on the new one? It is a server machine, so I would prefer to keep it running during this check.

    Read the article

  • Stop RAID 5 from Initializing

    - by Antz
    Hi, I am trying to follow Ictinike's guide on Recovering Intel RAID "Non-Member Disk" Error found here, Ictinike's RAID recovery Guide I have recreated my RAID array as per the instructions. However my RAID array status is then automatically set to: INITIALIZE When I boot back into my Windows XP desktop, the Intel Matrix Storage Utility begins to "Initialize" my drives. This is a long slow process that will take about 20 hours. I suspect all my data will be lost. I have gone back into my bios and disabled my RAID controller to prevent any further initialization and data loss. I have read that initialization will cause data loss. I've also read somewhere that it won't. I am not so confident in the latter. Is there anyway to stop this initialization process so I can continue to follow the steps in the recovery guide? Some system specs: ABIT IP35 Pro Motherboard ICH9R on board RAID controller

    Read the article

  • Improving mdadm RAID-6 write speed

    - by BarsMonster
    Hi! I have an mdadm RAID 6 in my home server made of 5 x 1 TB WD Green HDDs. Read speed is more than enough - 268 MB/s in dd. But write speed is just 37.1 MB/s. (Both were tested via dd on a 48 GB file; RAM size is 1 GB and the block size used in testing is 8 KB.) Could you please suggest why the write speed is so low, and are there any ways to improve it? CPU usage during writing is just 25% (i.e. half of 1 core of an Opteron 165). There is no business-critical data there and the server is UPS-backed. mdstat is: Personalities : [raid6] [raid5] [raid4] md0 : active raid6 sda1[0] sdd1[4] sde1[3] sdf1[2] sdb1[1] 2929683456 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [5/5] [UUUUU] bitmap: 0/8 pages [0KB], 65536KB chunk unused devices: <none> Any suggestions?
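
    A few knobs worth trying (a sketch of common tuning steps, not a guaranteed fix; the numbers are typical starting points rather than values tested on this box, and the test file path is a placeholder):

      # enlarge the RAID-5/6 stripe cache (RAM cost: entries x 4 KiB x number of member disks)
      echo 8192 > /sys/block/md0/md/stripe_cache_size

      # re-test with writes that cover whole stripes (1024k chunk x 3 data disks = 3 MiB)
      dd if=/dev/zero of=/path/on/md0/testfile bs=3M count=4096 oflag=direct

      # the internal write-intent bitmap also costs write speed; it can be dropped,
      # at the price of slower resyncs after an unclean shutdown
      mdadm --grow /dev/md0 --bitmap=none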

    Read the article

  • Non-Apple RAID card for Mac Pro (tower)

    - by Arthor
    I have the following: Mac Pro (model number A1186) with PCIe slots. At present I am using software RAID; however, I wish to move to hardware RAID for the following reasons: performance (4 x 300 GB SATA II in RAID 5) and redundancy (RAID 5, so one drive can fail and the system stays online). I do not wish to use the Apple RAID card (very expensive); I would like to use an aftermarket one, which is cheaper. Questions: Does anyone have a working aftermarket RAID card in their Mac Pro (tower)? (I have done some research, e.g. RocketRAID, but need confirmation.) If so, does it work from boot? Thanks

    Read the article

  • How do I configure hardware RAID in a PowerEdge 2850?

    - by Eric Fossum
    I just bought a Dell PowerEdge 2850 from Craigslist and for the most part I'm happy with its $300 price tag, but I cannot figure out where to configure the embedded hardware RAID... I've seen online that you should hit Ctrl-M, but my box never shows that prompt while booting. I have Ctrl-A (I think) for an LSI Logic config, but that seems to just program SCSI and verify drives on my SCSI-A and SCSI-B channels. Anyone have a clue where this RAID config is?

    Read the article

  • RAID setup for maximizing data retention and read speed

    - by cat pants
    My goals are simple: maximize data retention safety and maximize read speeds. My first instinct is to do a three-drive software RAID 1. I have only used fakeraid RAID 1 in the past and it was terrible (it actually would have led to data loss if it weren't for backups). Would you say software RAID 1 or a cheap actual hardware RAID card? The OS will be Linux. Could I start with a two-drive RAID 1 and add a third drive on the fly? Can I hot swap? Can I pull one of the drives, throw it into a new machine, and be able to read all the data? I do not want a situation where a RAID card fails and I have to try to find the same chipset in order to read my data (which I am assuming can happen). Please clarify any points on which it sounds like I have no idea what I am talking about, as I am admittedly inexperienced here. (My hardest lesson was fakeraid, lol.) Thanks!
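
    On the "start with two drives and add a third on the fly" point, Linux md does allow growing a mirror; a minimal sketch (device names are placeholders for whatever the actual disks are):

      # two-way mirror to begin with
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

      # later: add the third disk and widen the mirror to three copies
      mdadm /dev/md0 --add /dev/sdd1
      mdadm --grow /dev/md0 --raid-devices=3

    Because the md metadata lives on the member disks themselves, a member can also be moved to another Linux box and assembled there with mdadm, which covers the portability concern about controller failure.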

    Read the article

  • How to expand RAID 5 on ICH10 - Gigabyte EX58-DS4?

    - by NeverEatAlone
    I was wondering if there is a relatively simple way to expand my HD space. My setup is 4 x 640 GB drives. The motherboard has 4 ports on one controller and 2 ports on another controller; however, they can't be joined. I would like to somehow get more storage space in a RAID configuration. One scenario that I can see working is replacing one 640 GB drive with a 2 TB drive, waiting for the RAID to rebuild, and rinsing and repeating. However, I have no idea if I would even be able to see/access the new space. All alternatives/ideas are welcome. Thank you

    Read the article

  • Which motherboard for Intel i7 and how to get RAID working?

    - by jasondavis
    I want to build a new PC, and I have a couple of questions. 1) I want to go with the Intel Core i7 920 processor; can anyone recommend a good, reliable motherboard for this processor? Graphics card support does not matter (SLI/CrossFire). I would like to support a lot of RAM, so the more RAM slots the better. I have read so many bad reviews about certain boards not working well; I would love a recommendation from experience. 2) I want to run a couple of SSDs in RAID 0. I have never done this; will I need to purchase anything in addition to the motherboard, CPU and drives to get RAID working?

    Read the article

  • Preseeding Ubuntu partman recipe using LVM and RAID

    - by Swav
    I'm trying to preseed an Ubuntu 12.04 server installation and created a recipe that would create RAID 1 on 2 drives and then partition that using LVM. Unfortunately partman complains when creating the LVM volumes, saying there are no partitions in the recipe that could be used with LVM (on the console it complains about an unusable recipe). The layout I'm after is RAID 1 on sdb and sdc (I'm installing from a USB stick, so it takes sda) and then LVM on top to create boot, root and swap. The odd thing is that if I change the mount point of boot_lv to home, the recipe works fine (apart from mounting in the wrong place), but when mounting at /boot it fails. I know I could use a separate /boot primary partition, but can anybody tell me why it fails? The recipe and relevant options are below. ## Partitioning using RAID d-i partman-auto/disk string /dev/sdb /dev/sdc d-i partman-auto/method string raid d-i partman-lvm/device_remove_lvm boolean true d-i partman-md/device_remove_md boolean true #d-i partman-lvm/confirm boolean true d-i partman-auto-lvm/new_vg_name string main_vg d-i partman-auto/expert_recipe string \ multiraid :: \ 100 512 -1 raid \ $lvmignore{ } \ $primary{ } \ method{ raid } \ . \ 256 512 256 ext3 \ $defaultignore{ } \ $lvmok{ } \ method{ format } \ format{ } \ use_filesystem{ } \ filesystem{ ext3 } \ mountpoint{ /boot } \ lv_name{ boot_lv } \ . \ 2000 5000 -1 ext4 \ $defaultignore{ } \ $lvmok{ } \ method{ format } \ format{ } \ use_filesystem{ } \ filesystem{ ext4 } \ mountpoint{ / } \ lv_name{ root_lv } \ . \ 64 512 300% linux-swap \ $defaultignore{ } \ $lvmok{ } \ method{ swap } \ format{ } \ lv_name{ swap_lv } \ . d-i partman-auto-raid/recipe string \ 1 2 0 lvm - \ /dev/sdb1#/dev/sdc1 \ . d-i mdadm/boot_degraded boolean true #d-i partman-md/confirm boolean true #d-i partman-partitioning/confirm_write_new_label boolean true #d-i partman/choose_partition select Finish partitioning and write changes to disk #d-i partman/confirm boolean true #d-i partman-md/confirm_nooverwrite boolean true #d-i partman/confirm_nooverwrite boolean true EDIT: After a bit of googling I found the below snippet of code from partman-auto-lvm, but I still don't understand why they would prevent that setup if it's possible to do manually and booting from a /boot partition on LVM is possible. # Make sure a boot partition isn't marked as lvmok if echo "$scheme" | grep lvmok | grep -q "[[:space:]]/boot[[:space:]]"; then bail_out unusable_recipe fi
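
    For reference, a rough sketch of the separate-/boot workaround mentioned above (untested here): /boot gets its own md device outside LVM, and a second md device carries the LVM physical volume. Sizes follow the original recipe, and the sdb2#sdc2 pairing is an assumption about how the installer numbers the second RAID partition:

      # two RAID partitions per disk: a small one for /boot, a big one for the LVM PV
      d-i partman-auto/expert_recipe string                       \
            multiraid ::                                          \
                    256 512 256 raid                              \
                            $lvmignore{ } $primary{ }             \
                            method{ raid }                        \
                    .                                             \
                    2000 5000 -1 raid                             \
                            $lvmignore{ } $primary{ }             \
                            method{ raid }                        \
                    .                                             \
                    2000 5000 -1 ext4                             \
                            $defaultignore{ } $lvmok{ }           \
                            method{ format } format{ }            \
                            use_filesystem{ } filesystem{ ext4 }  \
                            mountpoint{ / } lv_name{ root_lv }    \
                    .                                             \
                    64 512 300% linux-swap                        \
                            $defaultignore{ } $lvmok{ }           \
                            method{ swap } format{ }              \
                            lv_name{ swap_lv }                    \
                    .
      # first md becomes /boot directly, second md becomes the LVM physical volume
      d-i partman-auto-raid/recipe string            \
          1 2 0 ext3 /boot /dev/sdb1#/dev/sdc1 .     \
          1 2 0 lvm  -     /dev/sdb2#/dev/sdc2 .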

    Read the article

  • Using mixed disks and OpenFiler to create RAID storage

    - by Cylindric
    I need to improve my home storage to add some resilience. I currently have four disks, as follows: D0: 500Gb (System, Boot) D1: 1Tb D2: 500Gb D3: 250Gb There's a mix of partitions on there, so it's not JBOD, but data is pretty spread out and not redundant. As this is my primary PC and I don't want to give up the entire OS to storage, my plan is to use OpenFiler in a VM to create a virtual SAN. I will also use Windows Software RAID to mirror the OS. Partitions will be created as follows: D0 P1: 100Mb: System-Reserved Boot D0 P2: 50Gb: Virtual Machine VMDKs for OS D0 P3: 350Gb: Data D1 P1: 100Mb: System-Reserved Boot D1 P2: 50Gb: Virtual Machine VMDKs for OS D1 P3: 800Gb: Data D2 P1: 450Gb: Data D3 P1: 200Gb: Data This will result in: Mirrored boot partition Mirrored Operating system Mirrored Virtual machine O/S disks Four partitions for data In the four data partitions I will create several large VMDK files, which I will "mount" into OpenFiler as block-storage devices, combined into three RAID arrays (due to the differing disk sizes) In effect, I'll end up with the following usable partitions SYSTEM 100Mb the small boot partition created by the Windows 7 installer (RAID-1) HOST 50Gb the Windows 7 partition (RAID-1) GUESTS 50Gb Virtual machine Guest VMDK's (RAID-1) VG1 900Gb Volume group consisting of a RAID-5 and two RAID-1 VG2 300Gb Volume group consisting of a single disk On VG1 I can dynamically assign storage for my media, photographs, documents, whatever, and it will be safe. On VG2 I can dynamically assign storage for my data that is not critical, and easily recoverable, as it is not safe. Are there any particular 'gotchas' when implementing a virtual OpenFiler like this? Is the recovery process for a failing disk going to be very problematic? Thanks.

    Read the article

  • Package upgrade on Ubuntu RAID server and GRUB setup issue

    - by RecNes
    I have a remote Ubuntu 10.10 server running on a RAID system. I did a package upgrade last night for security reasons. During the upgrade, the GRUB installation screen appeared and asked me which partition I wanted to install GRUB on. The options were sda, sdb, md1 and md2. I decided to install it on both the sda and sdb partitions. I'm wondering, did I make the right decision? If the machine gets rebooted, can it boot up safely? You can find the fdisk output and fstab mount points below: Fstab: proc /proc proc defaults 0 0 none /dev/pts devpts gid=5,mode=620 0 0 /dev/md0 none swap sw 0 0 /dev/md1 /boot ext3 defaults 0 0 /dev/md2 / ext3 defaults 0 0 Fdisk: Disk /dev/sda: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00029bb5 Device Boot Start End Blocks Id System /dev/sda1 1 262 2102562 fd Linux raid autodetect /dev/sda2 263 295 265072+ fd Linux raid autodetect /dev/sda3 296 91201 730202445 fd Linux raid autodetect Disk /dev/md0: 2152 MB, 2152923136 bytes 2 heads, 4 sectors/track, 525616 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table Disk /dev/md1: 271 MB, 271319040 bytes 2 heads, 4 sectors/track, 66240 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/md1 doesn't contain a valid partition table Disk /dev/md2: 747.7 GB, 747727224832 bytes 2 heads, 4 sectors/track, 182550592 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/md2 doesn't contain a valid partition table Disk /dev/sdb: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00088969 Device Boot Start End Blocks Id System /dev/sdb1 1 262 2102562 fd Linux raid autodetect /dev/sdb2 263 295 265072+ fd Linux raid autodetect /dev/sdb3 296 91201 730202445 fd Linux raid autodetect
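
    Installing to both sda and sdb is the usual choice for a two-disk md mirror: each disk then carries its own copy of GRUB in its MBR, so the box can still come up if either one dies. A quick way to double-check after the upgrade (a sketch; assumes GRUB 2 / grub-pc as shipped with Ubuntu 10.10):

      # see which devices grub-pc is configured to keep writing to on upgrades
      debconf-show grub-pc | grep install_devices

      # re-install to both MBRs by hand if in doubt
      grub-install /dev/sda
      grub-install /dev/sdb
      update-grub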

    Read the article

  • RAID-capable 3.5" SATA Drives

    - by nroam
    I recently purchased a pair of 1 TB Western Digital WD1002FBYS RE3 drives for use in an external RAID enclosure. I have found that they tend to drop out of the array after a while. Thinking it was the enclosure, I tried them in another one but found the same issue. A bit of googling led me to http://www.tomshardware.com/forum/251076-32-raid-issues-western-digital-hard-disk which suggests that: "WD's "RE" (RAID Edition) HDDs support Time-Limited Error Recovery ("TLER" ): http://www.wdc.com/en/products/productcatalog.asp?language=en As a non-TLER HDD fills up with data, the error detection firmware might take too long, and the RAID controller may drop that HDD from a RAID array." So now I wonder which SATA drives have firmware that is compatible with RAID arrays (esp. RAID 1 and 5, but not 0). I have not been able to come up with the magic set of keywords to elicit the answer from Google. However, various sites suggest that Seagate and Hitachi are in general OK. Does anyone have any generic (or even specific) guidance on how to work out whether a drive's firmware may harbour code that is potentially an issue in a RAID setting, other than stating that it must be 'enterprise' ready?
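
    On desktop drives, the error-recovery timeout can often be inspected, and on some models capped, with smartmontools' SCT Error Recovery Control commands; that is one practical way to test a specific disk before trusting it in an array (a sketch; run it against whichever device node the drive actually gets):

      # query the current SCT ERC (TLER-style) read/write timeouts
      smartctl -l scterc /dev/sda

      # if the drive accepts it, cap both timeouts at 7 seconds (the value is in tenths of a second)
      smartctl -l scterc,70,70 /dev/sda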

    Read the article

  • Hardware RAID Controller Support for SSD TRIM

    - by dss539
    Do any hardware RAID controllers available today support TRIM? If not, do any manufacturers have target dates for supporting TRIM? Should I even care about TRIM for SSDs installed in performance-sensitive workstations? Before you suggest it, yes software RAID would sidestep the issue, but my requirements do not allow software RAID. edit: The answer appears to be "no RAID controllers support TRIM" at the current date.

    Read the article
