Search Results

Search found 44026 results on 1762 pages for 'raid question'.

Page 3 of 1762

  • Computer hangs at boot screen with new RAID card

    - by shanethehat
    I am trying to build a new server around a Biostar TH61 motherboard and an Adaptec 6405E RAID controller card. The machine booted fine from USB before the RAID card and drives were installed. After installing, on the first boot the card was detected, but it then started to spit out the following message every 10 seconds: Error: Controller Kernel Stopped Running << Press any key to continue ... Following the troubleshooting guide I unplugged everything, reseated the card, and reattached all the drives. This time the machine is sitting on the boot screen with a flashing cursor and no error messages, but after 15 minutes of this nothing seems to be happening. Given that there are no error messages I'm hesitant to reboot again. Is it normal for a RAID card to sit without a status message when it first boots, maybe to initialise the drives or something? The current screen output looks a bit like this:
      Controller #00 found at PCI Slot:01, Bus:01, Dev:00, Func:00
      Controller Model: Adaptec 6405E
      Firmware Version: 5.2-0[18512]
      Memory Size: 128MB
      Serial number: 111111111111111
      SAS WWN: 50000D1104AE9180
      _
    Update: After waiting 30 minutes I rebooted and landed back at the Kernel Stopped Running error. Maybe it's time to update the RAID BIOS.

    Read the article
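    For the question above, once the machine does boot into Linux, a minimal first check is whether the card and its driver come up at all. This is a hedged sketch; the aacraid driver name is an assumption based on the card being an Adaptec 6-series:
      # confirm the controller shows up on the PCI bus
      lspci | grep -i adaptec
      # check whether the kernel driver attached and whether it logged errors
      dmesg | grep -i aacraid
    If the controller never gets that far, refreshing the firmware/RAID BIOS from Adaptec's bootable update media, as the asker suggests, is the likely next step.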

  • Recommended Win2k8 Server software to fix my RAID-0 issue

    - by Jason Kealey
    I'm running an Asus P6T V2 Deluxe. It has six SATA ports and supports onboard RAID. I am using two of those ports for a RAID 0 array of 1.5TB Seagate drives using the onboard RAID controller. One of them is giving me SMART warnings and I want to preemptively replace it. I pulled out two other 1.5TB drives from another computer and am ready to use one or both, if necessary. I can't run any SMART diagnostic software from within Windows because it only sees the hardware RAID-0 array, not each individual drive. The first thing I tried was a slow sector-by-sector copy using a free tool called EASEUS Disk Copy: I used the boot disk, copied (took about 16 hours), unplugged the defective drive and plugged the new one in its place. The motherboard didn't recognize the new drive as being part of the known setup, so it would not boot. The second thing I tried was copying the partition from within Windows. The first tool failed because I have a server operating system; I found another (I forget the names of both) that supported a server OS and did a partition copy onto the new drive. This seemed to work and the OS started to boot, but it blue-screened and entered a reboot cycle. I'm assuming the software was no good because it was trying to copy the boot disk while it was in use. I am looking for recommendations on what software to use to fix my problem without doing a re-install. Everything is backed up, but my computer works fine and I'd like to avoid re-installation when possible. However, my system would be back up by now if I had just started over on a second RAID array. :)

    Read the article
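    For the question above, one hedged workaround is to boot a Linux live environment, where the Intel onboard RAID is bypassed and each member disk is visible as a raw device. Device names and the tools (smartmontools, GNU ddrescue) are assumptions, not taken from the question:
      # read SMART health data from the suspect member directly
      sudo smartctl -a /dev/sdb
      # sector-by-sector clone onto the replacement disk, with a resumable log
      sudo ddrescue -f /dev/sdb /dev/sdc rescue.log
    Whether the Intel option ROM then accepts the clone depends on its array metadata, so the set may still need to be re-created in the option ROM afterwards.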

  • Windows 8 Disk Mirroring vs Intel Fake RAID

    - by Johnny W
    So Windows 8 is out and I have a new motherboard. I wish to create a RAID 1 pairing between two HDDs -- for storage purposes only (my OS is on an SSD) -- but I don't know which is the best route to take. My motherboard (Z77 chipset) comes with the age-old Intel fake RAID, but since I only wish to use the RAID for storage, I wondered if I might be better off using Windows 8 Disk Mirroring. Can anyone advise which is better? Or perhaps the pros and cons of each, if that's too contentious? I just can't see the benefit of FakeRAID. You can see my current setup here, if that might change things(?): Thanks!

    Read the article

  • RAID FS detection at boot time

    - by alex
    An excerpt from dmesg:
      md: Autodetecting RAID arrays.
      md: Scanned 2 and added 2 devices.
      md: autorun ...
      md: considering sdb1 ...
      md: adding sdb1 ...
      md: adding sda1 ...
      md: created md1
      md: bind<sda1>
      md: bind<sdb1>
      md: running: <sdb1><sda1>
      raid1: raid set md1 active with 2 out of 2 mirrors
      md1: detected capacity change from 0 to 1500299198464
      md: ... autorun DONE.
      md1: unknown partition table
      EXT3-fs (md1): error: couldn't mount because of unsupported optional features (240)
      EXT2-fs (md1): error: couldn't mount because of unsupported optional features (240)
      EXT4-fs (md1): mounted filesystem with ordered data mode
    Is it OK that the kernel tries to mount an ext4 RAID as ext3 and then ext2 first? Is there a way to tell it to skip those two steps? Just in case, the fstab entry is:
      /dev/md1 / ext4 noatime 0 1
    TIA.

    Read the article
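    The ext3/ext2 messages in the question above are the kernel probing filesystem types in order when no type is given at that stage, and they are harmless. A hedged way to skip the probing for the root filesystem is to name the type on the kernel command line; a sketch assuming GRUB2 on Debian/Ubuntu:
      # /etc/default/grub
      GRUB_CMDLINE_LINUX_DEFAULT="quiet rootfstype=ext4"
      # regenerate grub.cfg so the parameter reaches the kernel command line
      sudo update-grub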

  • How to (hardware) RAID 10 on Ubuntu 10.04 LTS with 4 drives and motherboard with RAID controller

    - by lollercoaster
    I have 4 500GB hard drives. I set up a RAID 10 in the BIOS, much like shown here: http://www.supermicro.com/manuals/other/RAID_SATA_ESB2.pdf Then I followed these instructions: http://www.unrest.ca/Knowledge-Base/configuring-mdadm-raid10-for-ubuntu-910 Basically I cannot get it to work. I follow the instructions up to the "partition" section of the install, creating 4 RAID 1s (2 partitions on each drive, one primary and one for swap space), then combining them to make a RAID 10. Unfortunately it still shows 2 partitions, one of 500GB and another of 36GB, for some reason. Any ideas? It would be best if anyone had found good step-by-step instructions for how to do this... I've been googling for hours and haven't found anything...

    Read the article
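    A note on the question above: the BIOS "RAID 10" and mdadm are two different RAID stacks, and the linked guide builds the array with mdadm alone. A minimal mdadm sketch, assuming the BIOS RAID is switched back to plain AHCI/SATA mode and using hypothetical partitions sda1-sdd1:
      sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
      cat /proc/mdstat            # watch the initial sync
      sudo mkfs.ext4 /dev/md0     # or hand md0 to the installer / LVM instead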

  • Horribly performing RAID

    - by Philip
    I have a small GlusterFS cluster with two storage servers providing a replicated volume. Each server has 2 SAS disks for the OS and logs and 22 SATA disks for the actual data, striped together as a RAID 10 using a MegaRAID SAS 9280-4i4e with this configuration: http://pastebin.com/2xj4401J Connected to this cluster are a few other servers with the native client, running nginx to serve files stored on it in the 3-10MB range. Right now a storage server has an outgoing bandwidth of 300Mbit/s and the busy rate of the RAID array is at 30-40%. There are also strange side effects: sometimes the I/O latency skyrockets and no access to the RAID is possible for 10 seconds. The file system used is XFS and it has been tuned to match the RAID stripe size. Does anyone have an idea what could be the reason for such a badly performing array? 22 disks in a RAID 10 should deliver way more throughput.

    Read the article
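    For the question above, a hedged starting point is to catch the latency spikes in per-device statistics while they happen; sysstat is assumed to be installed, and sda is a placeholder for the array device:
      # extended device statistics every 5 seconds (await, svctm, %util per device)
      iostat -x 5
      # check which I/O scheduler the array device is using
      cat /sys/block/sda/queue/scheduler
    Controller cache policy (write-back vs write-through, BBU state) and XFS mount options are other common suspects, but that is speculation without more data.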

  • Will Software RAID And iSCSI Work For A SAN

    - by Justin
    I am looking for a SAN solution, but can't afford even entry-level offerings. Basically, the SAN is for development and a proof-of-concept product. The performance doesn't have to be amazing, but it needs to be functional. My buddy says we should just set up software RAID and software iSCSI in Linux. Essentially I have a spare server with dual Xeon processors, 4GB of memory, and (2) 500GB 7200RPM drives. It's a bit old but working. I am sure there is a reason people don't do software RAID and iSCSI, but will performance be usable? I'm thinking of configuring the drives in RAID 0 (for performance).

    Read the article
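    A minimal sketch of the setup proposed in the question above, using mdadm plus the tgt iSCSI target; the device names, IQN, and service name are assumptions:
      # stripe the two 500GB disks together (no redundancy, as the asker intends)
      sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
      # /etc/tgt/targets.conf: export the array as an iSCSI LUN
      <target iqn.2013-01.local.devsan:md0>
          backing-store /dev/md0
      </target>
      sudo service tgt restart    # the service may be named tgtd on some distros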

  • Making GRUB see software RAID 0 under Ubuntu 10.10 LiveCD

    - by unknownthreat
    I just installed Windows 7, and I expected that it would alter GRUB, which it did. I've been following some guides and I am always stuck at GRUB not being able to detect the usual RAID content (software RAID 0). I've tried running:
      sudo grub
      > root (hd0,0)
    GRUB complains it couldn't find my hard disk. So I tried:
      find (hd0,0)
    and it complains that it couldn't find anything. So I tried:
      find /boot/grub/stage1
    It said "file not found". So what now? How can I make GRUB see RAID 0 under the Ubuntu 10.10 LiveCD?

    Read the article
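    The commands in the question above are GRUB-legacy syntax, while Ubuntu 10.10 ships GRUB2; a hedged sketch of the usual live-CD repair is to assemble the md array, chroot into it, and reinstall GRUB2 (md0 and sda are placeholders):
      sudo apt-get install mdadm             # in the live session
      sudo mdadm --assemble --scan           # bring up the software RAID 0
      sudo mount /dev/md0 /mnt
      for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
      sudo chroot /mnt grub-install /dev/sda
      sudo chroot /mnt update-grub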

  • How to identify RAID (5 or 6) controllers that allow dynamic resize of the array

    - by David Pfeffer
    I'm building a server with a RAID5 array, based on a hardware controller. I want to be able to later add additional disks and have the array rebalance across all of the disks, enlarging the usable size. I also want to be able to later upgrade to bigger disks (one at a time, of course) and then expand the array to fill the entire drive. These features are available in Linux software raid (md). I've also heard they're available in some hardware controllers. Currently, I own the Adaptec RAID 3805 card and the 3ware 9650se card. I'd prefer to use the Adaptec if possible, but I can't find if either of these cards offer this feature. If they don't, are there other affordable (read as: sub-$600) RAID cards available that can accomplish this?

    Read the article
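    For reference on the question above, the md features the asker mentions look roughly like this (hypothetical device names); hardware vendors usually list the equivalents as Online Capacity Expansion and RAID Level Migration in their datasheets:
      # grow the array by adding a member and reshaping across it
      sudo mdadm --add /dev/md0 /dev/sde1
      sudo mdadm --grow /dev/md0 --raid-devices=5
      # after every member has been replaced with a larger disk, claim the space
      sudo mdadm --grow /dev/md0 --size=max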

  • GRUB2 not detecting OS on raid partitions

    - by sleeves
    I have recently added a drive to a system and have successfully RAIDed (RAID-1) the partitions, with the exception of the boot partition. I have it ready and mirrored, but can't get GRUB2 (update-grub) to find it. System: Ubuntu 11.04. RAID metadata: 1.2. If I run update-grub, it finds the kernel images on the /dev/sda2 partition (the present root) but not the images on /dev/md127. /dev/md127 is composed of "missing" and /dev/sdb2. fdisk on /dev/sdb confirms that sdb2 is of type fd (raid autodetect) and is also flagged bootable. There are two things I want to do: 1. make the boot.cfg on /dev/sdb2 have a menu option with the root set to /dev/md127, and 2. install GRUB onto /dev/md127 so the actual boot.cfg from there is being used. Thanks!

    Read the article
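    A hedged sketch for the question above, following the usual pattern for /boot on md RAID1: record the array in mdadm.conf, rebuild the initramfs, and install GRUB2 to both physical disks so either one can boot (grub-install targets a disk's MBR, not the md device itself):
      sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
      sudo update-initramfs -u
      sudo grub-install /dev/sda
      sudo grub-install /dev/sdb
      sudo update-grub            # regenerate grub.cfg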

  • Moving software RAID to Linux

    - by terman
    I'm using a RAID 1 (mirrored pair) configuration in my media center / NAS system. Currently it's running Windows 8 (yeah, big mistake, I know) and I'm regretting it (did it for the games, not worth it). I have two software RAID 1s (3TB + 2TB) configured with Storage Spaces and unfortunately formatted with NTFS. Now I would like to switch to Fedora (or maybe Ubuntu if there are advantages) for good. Is there a way I could continue using the disks as they are, without the need to format them with ext or something? I'm glad for every tip. Oh, and the system disk is of course not in a RAID configuration.

    Read the article
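    For the question above, whether the disks are usable as-is depends on what Linux can recognise on them; Storage Spaces uses its own on-disk layout rather than exposing a plain NTFS partition per disk, so a first hedged check from a live session is simply:
      # list every block device with detected partition/filesystem signatures
      sudo lsblk -f
      sudo blkid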

  • Recovering data from a Silicon Image SiI3114 RAID

    - by Isaac Truett
    I have a set of 3 disks in RAID 5 originally created with a Silicon Image SiI3114 on-board RAID controller. The old motherboard is dead. The new motherboard (which has a different raid controller) won't boot from the array. I have no reason to believe that the drives are damaged or corrupted. I'm 99% sure that the problem is that the new controller isn't compatible or I'm not setting it up properly. Is it possible to recover data from the drives using a different controller? Would a PCI card like this one allow me to read from the array again?

    Read the article
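    Before buying another card for the case above, it may be worth trying dmraid from a Linux live CD: it understands the Silicon Image ("sil") metadata format, although its RAID 5 support varies by kernel and dmraid build, so this is a sketch rather than a guaranteed recovery path:
      sudo dmraid -r      # list disks carrying recognised fake-RAID metadata
      sudo dmraid -ay     # try to activate the set(s) under /dev/mapper
      ls /dev/mapper      # an assembled set appears here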

  • Raid Shows Up as Multiple Drives - Can't Mount

    - by manyxcxi
    I have a single hard drive that the OS is installed on and I have a Sil raid card installed with two matching 500GB hdds set up in Raid 0 and formatted- they're completely empty. For whatever reason they are showing up as /dev/sdb and /dev/sdc and not as a single hard drive. I used fdisk to format both raid drives as Linux raid auto (fd) but I cannot mount either device and dmraid doesn't seem to want to work, what step am I missing? When I installed 9.04 oh so long ago it seems like it recognized and automatically did everything that needed to be done, now I'm stuck.
    dmraid Output:
      root@tripoli:~# dmraid -r
      /dev/sdc: sil, "sil_biaebhadcfcb", stripe, ok, 976771072 sectors, data@ 0
      /dev/sdb: sil, "sil_biaebhadcfcb", stripe, ok, 976771072 sectors, data@ 0
      root@tripoli:~# dmraid -ay
      RAID set "sil_biaebhadcfcb" already active
    fdisk Output:
      root@tripoli:~# fdisk -l
      Disk /dev/sda: 500.1 GB, 500107862016 bytes
      255 heads, 63 sectors/track, 60801 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000b9b01
         Device Boot      Start         End      Blocks   Id  System
      /dev/sda1   *           1          32      248832   83  Linux
      Partition 1 does not end on cylinder boundary.
      /dev/sda2              32       60802   488134657    5  Extended
      /dev/sda5              32       60802   488134656   8e  Linux LVM

      Disk /dev/sdb: 500.1 GB, 500107862016 bytes
      255 heads, 63 sectors/track, 60801 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x6ead5c9a
         Device Boot      Start         End      Blocks   Id  System
      /dev/sdb1               1       60801   488384001   fd  Linux raid autodetect

      Disk /dev/sdc: 500.1 GB, 500107862016 bytes
      255 heads, 63 sectors/track, 60801 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0xe6e2af28
         Device Boot      Start         End      Blocks   Id  System
      /dev/sdc1               1       60801   488384001   fd  Linux raid autodetect

    Read the article
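    Since dmraid reports the striped set as active in the question above, the filesystem belongs on the mapped device rather than on sdb/sdc individually; the fd (Linux raid autodetect) partition type is for md software RAID, not for dmraid sets. A hedged sketch, with the mapper name taken from the dmraid output and the mount point assumed:
      ls /dev/mapper/                                   # e.g. sil_biaebhadcfcb
      sudo mkfs.ext4 /dev/mapper/sil_biaebhadcfcb       # filesystem on the whole set
      sudo mount /dev/mapper/sil_biaebhadcfcb /mnt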

  • raid advice with SSD and two HDD

    - by Nin
    I have a new machine with one 128GB SSD and two 1TB HDDs. The OS is on the SSD and my initial thought was to put the two HDDs in RAID 1 for user data. After some more thought I came up with two other setups and now I'm in doubt :) Can someone advise what would be the best setup?
    1: single SSD, and the HDDs in RAID 1 (original thought)
    2: Create 2 partitions on each HDD (128GB and 872GB). Put the two 872GB partitions in RAID 1 and create another RAID 1 with the SSD and one 128GB HDD partition.
    3: Create 2 partitions on each HDD (750/250), put the two 750GB partitions in RAID 1 and use the two 250GB partitions as backup, making automatic snapshots of the SSD to (one of) these partitions.
    I think the 2 main questions are: Is it advisable to create a RAID array with only part of a drive and actively use the other part of that drive, or should you always use the full disk? Is it advisable to create a RAID 1 array with an SSD and an HDD, or will that blow the whole speed advantage of the SSD?

    Read the article

  • Can't write to raid on Fedora

    - by 99miles
    I just did a fresh install of Fedora 11 and added RAID 1 following this tutorial: http://www.optimiz3.com/installing-fedora-11-and-setting-up-a-raid-0-1-5-6-or-10-array/ Now I see the filesystem when I open 'Computer' in the GUI; I open it and see 'lost+found', but I can't write to the drive. The option is simply greyed out. And when I view Properties on the drive and go to Permissions, it says 'The permissions of {driveid} could not be determined.' Any ideas?

    Read the article
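    A common cause of the behaviour in the question above is that a fresh ext filesystem is owned by root, so a regular desktop user cannot write to it. A hedged sketch, assuming the array is mounted at /mnt/raid and the user is named miles (both placeholders):
      # hand ownership of the mount point to the regular user
      sudo chown -R miles:miles /mnt/raid
      # or, alternatively, open up write access on the top-level directory only
      sudo chmod 775 /mnt/raid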

  • How to use Hardware RAID in Ubuntu Server

    - by user2071938
    I have an Adaptec RAID controller and created a RAID-1 (mirroring) successfully. Now I have installed Ubuntu Server 12.04.3. When I type fdisk -l I get this output:
      bf@fileserver:~$ sudo fdisk -l
      Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/sda doesn't contain a valid partition table

      Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/sdb doesn't contain a valid partition table

      Disk /dev/sdc: 80.0 GB, 80026361856 bytes
      255 heads, 63 sectors/track, 9729 cylinders, total 156301488 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x0004c454
         Device Boot      Start         End      Blocks   Id  System
      /dev/sdc1   *        2048      499711      248832   83  Linux
      /dev/sdc2          501758   156301311    77899777    5  Extended
      /dev/sdc5          501760   156301311    77899776   8e  Linux LVM

      Disk /dev/mapper/fileserver--vg-root: 75.6 GB, 75606523904 bytes
      255 heads, 63 sectors/track, 9191 cylinders, total 147668992 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000
      Disk /dev/mapper/fileserver--vg-root doesn't contain a valid partition table

      Disk /dev/mapper/ddf1_Data: 1000.1 GB, 1000065728512 bytes
      255 heads, 63 sectors/track, 121584 cylinders, total 1953253376 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disk identifier: 0x00000000
      Disk /dev/mapper/ddf1_Data doesn't contain a valid partition table

      Disk /dev/mapper/fileserver--vg-swap_1: 4160 MB, 4160749568 bytes
      255 heads, 63 sectors/track, 505 cylinders, total 8126464 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000
      Disk /dev/mapper/fileserver--vg-swap_1 doesn't contain a valid partition table
    The 80 GB HDD is for the system; the 1000.2 GB HDD should be for my data. But I'm a bit confused because two 1000.2 GB HDDs are listed; with hardware RAID shouldn't there be only one HDD visible to the OS? (I have two 1000.2 GB HDDs in a RAID-1 array.) dmraid gives me:
      bf@fileserver:~$ sudo dmraid -r
      /dev/sdb: ddf1, ".ddf1_disks", GROUP, ok, 1953253376 sectors, data@ 0
      /dev/sda: ddf1, ".ddf1_disks", GROUP, ok, 1953253376 sectors, data@ 0
    So it seems to be OK? But how do I partition these disks, and which one should I mount (sda or sdb)? Hope you can help. Thanks, Florian

    Read the article
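    In the output above, sda and sdb are the mirror's member disks leaking through, while /dev/mapper/ddf1_Data is the assembled array, so that is the device to put a filesystem on. A minimal sketch; the mount point is an assumption:
      sudo mkfs.ext4 /dev/mapper/ddf1_Data
      sudo mkdir -p /srv/data
      sudo mount /dev/mapper/ddf1_Data /srv/data
      # make it persistent via /etc/fstab, e.g.:
      # /dev/mapper/ddf1_Data  /srv/data  ext4  defaults  0  2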

  • How to interrupt software raid resync?

    - by Adam5
    I want to interrupt a running resync operation on an Ubuntu 10.04 software RAID. (This is the regular scheduled compare resync.) How do I stop it while it is running? Another RAID array is "resync pending"; I want a complete stop of all resyncing. [Edit: "sudo kill -9 1010" doesn't do anything; 1010 is the PID of the md2_resync process.] I would also like to know how I can control the interval between resyncs and the remaining time until the next one.

    Read the article
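    A hedged sketch for the question above using the md sysfs and proc interfaces; md2 is taken from the question, and the cron path is the Debian/Ubuntu default, which may differ elsewhere:
      # abort the running check/resync on md2
      echo idle | sudo tee /sys/block/md2/md/sync_action
      # throttle rather than abort: cap the rebuild speed (KB/s) for all arrays
      echo 1000 | sudo tee /proc/sys/dev/raid/speed_limit_max
      # the scheduled compare (checkarray) interval is driven by this cron job
      cat /etc/cron.d/mdadm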

  • Use a RAID Controller without drivers?

    - by cian1500ww
    I ordered an Adaptec 1420SA RAID card for my Debian Squeeze media server but didn't check to see if it was compatible. It turns out it's not, because it uses something called HostRAID, which requires special drivers that aren't available for Debian. Could I still use the card as an ordinary controller and just use OS software RAID? I'm not looking for speed, I just need to mirror some drives that will be used for storage. The OS will reside on a disk connected to the server's onboard controller, so the system won't be booting from any drives on the Adaptec controller.

    Read the article
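    Assuming the drives appear as plain /dev/sdX devices when the card's HostRAID mode is left unconfigured (an assumption worth verifying first), a minimal mdadm mirror sketch with placeholder device names:
      sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
      sudo mkfs.ext4 /dev/md0
      sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
      sudo mount /dev/md0 /srv/storage      # mount point is a placeholder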

  • Upon reboot, Linux software raid fails to include one device of a RAID1 array

    - by user1389890
    One of my four Linux software raid arrays drops one of its two devices when I reboot my system. The other three arrays work fine. I am running RAID1 on kernel version 2.6.32-5-amd64 (Debian Squeeze). Every time I reboot, /dev/md2 comes up with only one device. I can manually add the device by saying $ sudo mdadm /dev/md2 --add /dev/sdc1. This works fine, and mdadm confirms that the device has been re-added as follows:
      mdadm: re-added /dev/sdc1
    After adding the device and allowing the array time to resynch, this is what the output of $ cat /proc/mdstat looks like:
      Personalities : [raid1]
      md3 : active raid1 sda4[0] sdb4[1]
            244186840 blocks super 1.2 [2/2] [UU]
      md2 : active raid1 sdc1[0] sdd1[1]
            732574464 blocks [2/2] [UU]
      md1 : active raid1 sda3[0] sdb3[1]
            722804416 blocks [2/2] [UU]
      md0 : active raid1 sda1[0] sdb1[1]
            6835520 blocks [2/2] [UU]
      unused devices: <none>
    Then after I reboot, this is what the output of $ cat /proc/mdstat looks like:
      Personalities : [raid1]
      md3 : active raid1 sda4[0] sdb4[1]
            244186840 blocks super 1.2 [2/2] [UU]
      md2 : active raid1 sdd1[1]
            732574464 blocks [2/1] [_U]
      md1 : active raid1 sda3[0] sdb3[1]
            722804416 blocks [2/2] [UU]
      md0 : active raid1 sda1[0] sdb1[1]
            6835520 blocks [2/2] [UU]
      unused devices: <none>
    During reboot, here is the output of $ sudo cat /var/log/syslog | grep mdadm:
      Jun 22 19:00:08 rook mdadm[1709]: RebuildFinished event detected on md device /dev/md2
      Jun 22 19:00:08 rook mdadm[1709]: SpareActive event detected on md device /dev/md2, component device /dev/sdc1
      Jun 22 19:00:20 rook kernel: [ 7819.446412] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:00:20 rook kernel: [ 7819.446415] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:00:20 rook kernel: [ 7819.446782] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:00:20 rook kernel: [ 7819.446785] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:00:20 rook kernel: [ 7819.515844] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:00:20 rook kernel: [ 7819.515847] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:00:20 rook kernel: [ 7819.606829] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:00:20 rook kernel: [ 7819.606832] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:48 rook kernel: [ 8027.855616] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:48 rook kernel: [ 8027.855620] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:48 rook kernel: [ 8027.855950] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:48 rook kernel: [ 8027.855952] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:49 rook kernel: [ 8027.962169] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:49 rook kernel: [ 8027.962171] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:49 rook kernel: [ 8028.054365] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:03:49 rook kernel: [ 8028.054368] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [ 9.588662] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [ 9.588664] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [ 9.601990] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [ 9.601991] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [ 9.602693] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [ 9.602695] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [ 9.605981] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [ 9.605983] mdadm: sending ioctl 1261 to a partition!
      Jun 22 19:10:23 rook kernel: [ 9.606138] mdadm: sending ioctl 800c0910 to a partition!
      Jun 22 19:10:23 rook kernel: [ 9.606139] mdadm: sending ioctl 800c0910 to a partition!
      Jun 22 19:10:48 rook mdadm[1737]: DegradedArray event detected on md device /dev/md2
    Here is the result of $ cat /etc/mdadm/mdadm.conf:
      ARRAY /dev/md0 metadata=0.90 UUID=92121d42:37f46b82:926983e9:7d8aad9b
      ARRAY /dev/md1 metadata=0.90 UUID=9c1bafc3:1762d51d:c1ae3c29:66348110
      ARRAY /dev/md2 metadata=0.90 UUID=98cea6ca:25b5f305:49e8ec88:e84bc7f0
      ARRAY /dev/md3 metadata=1.2 name=rook:3 UUID=ca3fce37:95d49a09:badd0ddc:b63a4792
    Here is the output of $ sudo mdadm -E /dev/sdc1 after re-adding the device and letting it resync:
      /dev/sdc1:
                Magic : a92b4efc
              Version : 0.90.00
                 UUID : 98cea6ca:25b5f305:49e8ec88:e84bc7f0 (local to host rook)
        Creation Time : Sun Jul 13 08:05:55 2008
           Raid Level : raid1
        Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
           Array Size : 732574464 (698.64 GiB 750.16 GB)
         Raid Devices : 2
        Total Devices : 2
      Preferred Minor : 2
          Update Time : Mon Jun 24 07:42:49 2013
                State : clean
       Active Devices : 2
      Working Devices : 2
       Failed Devices : 0
        Spare Devices : 0
             Checksum : 5fd6cc13 - correct
               Events : 180998
            Number   Major   Minor   RaidDevice State
      this     0       8       33        0      active sync   /dev/sdc1
         0     0       8       33        0      active sync   /dev/sdc1
         1     1       8       49        1      active sync   /dev/sdd1
    Here is the output of $ sudo mdadm -D /dev/md2 after re-adding the device and letting it resync:
      /dev/md2:
              Version : 0.90
        Creation Time : Sun Jul 13 08:05:55 2008
           Raid Level : raid1
           Array Size : 732574464 (698.64 GiB 750.16 GB)
        Used Dev Size : 732574464 (698.64 GiB 750.16 GB)
         Raid Devices : 2
        Total Devices : 2
      Preferred Minor : 2
          Persistence : Superblock is persistent
          Update Time : Mon Jun 24 07:42:49 2013
                State : clean
       Active Devices : 2
      Working Devices : 2
       Failed Devices : 0
        Spare Devices : 0
                 UUID : 98cea6ca:25b5f305:49e8ec88:e84bc7f0 (local to host rook)
               Events : 0.180998
          Number   Major   Minor   RaidDevice State
             0       8       33        0      active sync   /dev/sdc1
             1       8       49        1      active sync   /dev/sdd1
    I also ran $ sudo smartctl -t long /dev/sdc and no hardware issues were detected. As long as I do not reboot, /dev/md2 seems to work fine. Does anyone have any suggestions?

    Read the article
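    One hedged thing to verify for the question above is that the initramfs carries the same array definitions as the running system, since a stale copy of mdadm.conf inside the initramfs can leave a member out during early boot. A sketch for Debian Squeeze:
      # compare what the kernel sees now with what is configured
      sudo mdadm --detail --scan
      cat /etc/mdadm/mdadm.conf
      # rebuild the initramfs so early boot uses the current configuration
      sudo update-initramfs -u -k all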

  • RAID 10 not being found by installer

    - by dko
    I had Ubuntu installed with RAID 0 enabled. I added 2 more disks, went into the BIOS, deleted the old setup, and created a new one using RAID 10 (a total of 4 disks now). However, during the install of Ubuntu Server it asks whether it should activate the SATA RAID disks, and I tell it yes. The next step shows up blank for available disks when determining where to mount the root, etc. Does anyone have a clue as to why this would be?

    Read the article
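    A hedged way to see whether the installer's kernel recognises the new BIOS RAID 10 set at all is to switch to a console (Alt+F2) during the install and query dmraid; its availability in the installer environment is an assumption:
      dmraid -r       # should list all four members of the new set
      dmraid -ay      # activate it; the array then appears under /dev/mapper
      ls /dev/mapper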

  • external drive enclosure -> software RAID 5?

    - by memilanuk
    Hello all, I have two older PCs on my LAN posing as 'servers'... one running FreeNAS off a USB stick, using three 500GB hdds in a ZFS RAID-Z pool serving as storage for the LAN, and one running Debian Lenny with an 80GB drive, used as a general purpose 'tinker' box that I can ssh into, etc. The problem is that the SMART report for one of those 500GB drives in the FreeNAS box is showing some pre-failure attributes, and the whole array is a little small anyways. Rather than simply replace one 500GB drive with another 500GB drive and have no backup of the file server, I'd like to upgrade all the drives to 2TB ones - but I have nowhere to store that much data in the meanwhile. As such, I started looking at getting a 4-bay external drive enclosure with an eSATA card for the Debian box, with the hope of creating a RAID5 + LVM setup using those drives and backing the data up to that external drive enclosure. After the backup is done, I'd replace the drives in the FreeNAS box, rebuild the array there and mirror the data back. Then I'd have both the primary storage (on the FreeNAS box) and a backup (which I don't have currently) using the external drive enclosure on the Debian box. My big question is... most of these external drive boxes seem to claim support for JBOD, RAID 0, 1, 10, 5, etc. - should I presume that is simply fake RAID like many commodity mobos have, and not really usable in Linux? In that case, with all the drives hanging off the one eSATA connection, will Linux (specifically Debian Squeeze, as I plan on upgrading that box shortly) see all four drives, or just the first one? Will I be able to configure them in a RAID5 array as desired? Thanks, Monte

    Read the article
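    For the Linux side of the plan above, a minimal mdadm RAID 5 + LVM sketch, with the four enclosure disks assumed to appear as /dev/sdb through /dev/sde and all names chosen as placeholders:
      sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
      sudo pvcreate /dev/md0
      sudo vgcreate backupvg /dev/md0
      sudo lvcreate -l 100%FREE -n backup backupvg
      sudo mkfs.ext4 /dev/backupvg/backup
    Treating the enclosure as JBOD and letting mdadm handle redundancy keeps the array readable with plain Linux tools, regardless of the box's own RAID claims.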

  • Four disks - RAID 10 or two mirrored pairs?

    - by ewwhite
    I have this discussion with developers quite often. The context is an application running in Linux that has a medium amount of disk I/O. The servers are HP ProLiant DL3x0 G6 with four disks of equal size @ 15k rpm, backed with a P410 controller and 512MB of battery or flash-based cache. There are two schools of thought here, and I wanted some feedback...
    1). I'm of the mind that it makes sense to create an array containing all four disks set up in a RAID 10 (1+0) and partition as necessary. This gives the greatest headroom for growth, has the benefit of leveraging the higher spindle count and better fault-tolerance without degradation.
    2). The developers think that it's better to have multiple RAID 1 pairs. One for the OS and one for the application data, citing that the spindle separation would reduce resource contention. However, this limits throughput by halving the number of drives and in this case, the OS doesn't really do much other than regular system logging. Additionally, the fact that we have the battery RAID cache and substantial RAM seems to negate the impact of disk latency...
    What are your thoughts?

    Read the article
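    One way to move the discussion above from opinion to data is to benchmark a candidate layout with fio under an I/O pattern close to the application's; fio being installed and the test file path are assumptions:
      sudo fio --name=randrw --filename=/data/fio.test --size=4G \
          --rw=randrw --bs=8k --iodepth=32 --ioengine=libaio \
          --runtime=60 --time_based --group_reporting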

  • Linux SW Raid: whole disk or per-partition?

    - by Steve Pomeroy
    I have inherited a machine which has 2 physical disks and uses Linux SW RAID(1). Both disks are partitioned, and the matching partitions are all individual arrays (/dev/md0, /dev/md6, etc.). Those arrays are then mounted (/boot, /home, etc., even /tmp). As RAID is designed to mitigate physical failures, is there any reason why one would use this technique over whole-disk arrays that are then partitioned (perhaps using LVM)? This seems prone to more potential issues, but may have some special properties that I haven't been able to glean. I'm planning on moving this setup to: disks -> SW RAID(1) -> LVM, as I'll be making multiple VMs out of the one machine, but I wanted to make sure I knew what I was doing when I got rid of the old setup.

    Read the article
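    For the planned disks -> SW RAID(1) -> LVM layout in the question above, a minimal sketch with placeholder device and volume names (a small conventional partition is still typically kept for /boot and the bootloader):
      sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
      sudo pvcreate /dev/md0
      sudo vgcreate vg0 /dev/md0
      sudo lvcreate -L 20G -n vm1 vg0       # carve per-VM or per-mount LVs as needed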
