Search Results

Search found 1864 results on 75 pages for 'raid'.

Page 27/75

  • Creating properly aligned partitions on a replacement disk

    - by Marius Gedminas
    I've a typical small office server with two hard disks configured for RAID-1 (mirroring). Each disk has several partitions: one for swap, the others paired in several /dev/mdX arrays. Every couple of years one of the disks dies and is replaced. The replacement typically goes something like this:
      # copy partition table from the remaining good disk to the empty replacement disk
      # (instead of /dev/good_disk and /dev/new_disk I use /dev/sda and /dev/sdb, as appropriate)
      sfdisk -d /dev/good_disk | sfdisk /dev/new_disk
      # install boot loader
      grub-install /dev/new_disk
      # create swap partition reusing the same UUID, so I don't need to edit /etc/fstab
      mkswap /dev/new_disk1 -U xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
      # hot-add the new partitions to my RAID arrays
      mdadm /dev/md0 -a /dev/new_disk2
      mdadm /dev/md1 -a /dev/new_disk5
      mdadm /dev/md2 -a /dev/new_disk6
      mdadm /dev/md3 -a /dev/new_disk7
      mdadm /dev/md4 -a /dev/new_disk8
    The disks were originally partitioned with cfdisk back in 2009, and so the partition table is aligned traditionally to cylinder boundaries (255 heads * 63 sectors). This is not the optimum configuration for new 4K-sector drives. My question is: how can I create a set of partitions for the new disk and ensure they're properly aligned, and have correct sizes for my RAID arrays (rounding up is acceptable, I suppose, but rounding down is definitely not)?
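    One possible approach, sketched here with placeholder device names and sizes rather than anything taken from the question, is to let parted handle the alignment by working in MiB units (it then starts partitions on 1 MiB boundaries, which suits 4K-sector drives), making each partition at least as large as the one it replaces:
      parted -s -a optimal /dev/new_disk mklabel msdos
      parted -s -a optimal /dev/new_disk mkpart primary linux-swap 1MiB 5GiB
      parted -s -a optimal /dev/new_disk mkpart primary 5GiB 100GiB
      parted -s -a optimal /dev/new_disk set 2 raid on
      # verify: start sectors should be multiples of 2048 (1 MiB on 512-byte sectors)
      parted /dev/new_disk unit s print
    mdadm accepts a member partition that is slightly larger than the array needs, so rounding each size up to the next MiB is harmless.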

    Read the article

  • Did I lose my RAID again?

    - by BarsMonster
    Hi! A little history: two years ago I was really excited to find out that mdadm is so powerful that it can even reshape arrays, so you can start with a smaller array and then grow it as you need. I bought 3x 1TB drives and made a RAID-5. It was fine for a year. Then I bought 2 more and tried to reshape to a 5-drive RAID-6, and due to some mess with superblock versions I lost all the content. I had to rebuild it from scratch, but 2TB of data were gone. Yesterday I bought 2 more drives, and this time I had everything: a properly built array and a UPS. I disabled the write-intent bitmap, added the 2 new drives as spares and ran a command to grow the array to 7 disks. It started working, but the speed was ridiculously slow, ~100 KB/sec. After processing the first 37MB at such an amazing speed, one of the old HDDs failed. I properly shut down the PC and disconnected the failed drive. After boot-up it appeared it had recreated the intent bitmap, as it was still in the mdadm config, so I removed it from the config and rebooted again. Now all I see is that all mdadm processes deadlock and don't do anything:
      PID  USER  PR  NI  VIRT   RES  SHR  S  %CPU  %MEM  TIME+    COMMAND
      1937 root  20   0  12992  608  444  D  0     0.1   0:00.00  mdadm
      2283 root  20   0  12992  852  704  D  0     0.1   0:00.01  mdadm
      2287 root  20   0      0    0    0  D  0     0.0   0:00.01  md0_reshape
      2288 root  18  -2  12992  820  676  D  0     0.1   0:00.01  mdadm
    And all I see in mdstat is:
      $ cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid6 sdb1[1] sdg1[4] sdf1[7] sde1[6] sdd1[0] sdc1[5]
            2929683456 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [7/6] [UU_UUUU]
            [>....................]  reshape =  0.0% (37888/976561152) finish=567604147.2min speed=0K/sec
    I've already tried mdadm 2.6.7, 3.1.4 and 3.2 - nothing helps. Have I lost my data again? Any suggestions on how I can make it work? The OS is Ubuntu Server 10.04.2... PS. Needless to say, the data is inaccessible - I cannot mount /dev/md0 to save the most valuable data. You can see my disappointment - the very specific feature I was excited about has failed twice, taking 5TB of my data with it.
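    A hedged first step in a situation like this is inspection rather than repair; these read-only checks (device names follow the question) show where the reshape is stuck and whether the kernel is throttling it:
      cat /proc/mdstat                          # reshape position and current speed
      mdadm --detail /dev/md0                   # array state, failed and spare members
      cat /sys/block/md0/md/sync_action         # should read "reshape" while it runs
      cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
      dmesg | tail -50                          # kernel messages from the stalled reshape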

    Read the article

  • Replace a failed drive in Linux RAID

    Tech Republic: "A few weeks ago I had the distinct displeasure of waking up to a series of emails indicating that a series of RAID arrays on a remote system had degraded. The remote system was still running, but one of the hard drives was pretty much dead."

    Read the article

  • PROUHD: RAID for the end-user

    Linuxconfig: "Therefore, there is currently no storage solution that manages heterogeneous storage devices efficiently. In this article, we propose such a solution and we call it PROUHD (Pool of RAID Over User Heterogeneous Devices)."

    Read the article

  • Linux Software RAID recovery

    - by Zoredache
    I am seeing a discrepancy between the output of mdadm --detail and mdadm --examine, and I don't understand why. This output:
      mdadm --detail /dev/md2
      /dev/md2:
      Version : 0.90
      Creation Time : Wed Mar 14 18:20:52 2012
      Raid Level : raid10
      Array Size : 3662760640 (3493.08 GiB 3750.67 GB)
      Used Dev Size : 1465104256 (1397.23 GiB 1500.27 GB)
      Raid Devices : 5
      Total Devices : 5
      Preferred Minor : 2
      Persistence : Superblock is persistent
    seems to contradict this (it is the same for every disk in the array):
      mdadm --examine /dev/sdc2
      /dev/sdc2:
      Magic : a92b4efc
      Version : 0.90.00
      UUID : 1f54d708:60227dd6:163c2a05:89fa2e07 (local to host)
      Creation Time : Wed Mar 14 18:20:52 2012
      Raid Level : raid10
      Used Dev Size : 1465104320 (1397.23 GiB 1500.27 GB)
      Array Size : 2930208640 (2794.46 GiB 3000.53 GB)
      Raid Devices : 5
      Total Devices : 5
      Preferred Minor : 2
    The array was created like this:
      mdadm -v --create /dev/md2 \
        --level=raid10 --layout=o2 --raid-devices=5 \
        --chunk=64 --metadata=0.90 \
        /dev/sdg2 /dev/sdf2 /dev/sde2 /dev/sdd2 /dev/sdc2
    Each of the 5 individual drives has partitions like this:
      Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
      255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
      Units = sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00057754
      Device Boot Start End Blocks Id System
      /dev/sdc1 2048 34815 16384 83 Linux
      /dev/sdc2 34816 2930243583 1465104384 fd Linux raid autodetect
    Backstory: the SATA controller failed in a box I provide some support for. The failure was an ugly one, and individual drives fell out of the array a little at a time. While there are backups, they are not really done as frequently as we really need. There is some data that I am trying to recover if I can. I got additional hardware and was able to access the drives again. The drives appear to be fine, and I can get the array and filesystem active and mounted (using read-only mode). I am able to access some data on the filesystem and have been copying it off, but I am seeing lots of errors when I try to copy the most recent data. When I try to access that most recent data I get errors like those below, which makes me think that the array-size discrepancy may be the problem.
      Mar 14 18:26:04 server kernel: [351588.196299] dm-7: rw=0, want=6619839616, limit=6442450944
      Mar 14 18:26:04 server kernel: [351588.196309] attempt to access beyond end of device
      Mar 14 18:26:04 server kernel: [351588.196313] dm-7: rw=0, want=6619839616, limit=6442450944
      Mar 14 18:26:04 server kernel: [351588.199260] attempt to access beyond end of device
      Mar 14 18:26:04 server kernel: [351588.199264] dm-7: rw=0, want=20647626304, limit=6442450944
      Mar 14 18:26:04 server kernel: [351588.202446] attempt to access beyond end of device
      Mar 14 18:26:04 server kernel: [351588.202450] dm-7: rw=0, want=19973212288, limit=6442450944
      Mar 14 18:26:04 server kernel: [351588.205516] attempt to access beyond end of device
      Mar 14 18:26:04 server kernel: [351588.205520] dm-7: rw=0, want=8009695096, limit=6442450944
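    A quick sanity check on those numbers (arithmetic only, nothing here touches the array) is to compare both figures against what a 2-copy RAID10 should hold; the --detail size fits 5 members while the --examine size fits only 4, which is at least consistent with the suspicion that the on-disk metadata describes a smaller array than the one that was mounted:
      echo $(( 1465104256 * 5 / 2 ))   # 3662760640 -> the Array Size reported by --detail
      echo $(( 1465104320 * 4 / 2 ))   # 2930208640 -> the Array Size recorded in the superblock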

    Read the article

  • Possible to get SSD TRIM (discard) working on ext4 + LVM + software RAID in Linux?

    - by Don MacAskill
    We use RAID1+0 with md on Linux (currently 2.6.37) to create an md device, then use LVM to provide volume management on top of the device, and then use ext4 as our filesystem on the LVM volume groups. With SSDs as the drives, we'd like to see the TRIM commands propagate through the layers (ext4 - LVM - md - SSD) to the devices. It looks like recent 2.6.3x kernels have had a lot of new SSD-related TRIM support added, including lots more coverage of Device Mapper scenarios, but we still can't seem to get it to cascade down properly. Is this possible yet? If so, how? If not, is any progress being made?
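    For reference, a hedged checklist of where discard has to be switched on in a stack like this; the sysfs knobs and md/LVM pass-through only exist on kernels newer than the 2.6.37 mentioned above, and the LV and mount-point names below are placeholders:
      cat /sys/block/md0/queue/discard_granularity   # non-zero once md advertises discard support
      lsblk -D                                       # per-layer discard granularity (newer util-linux)
      # LVM: set issue_discards = 1 in /etc/lvm/lvm.conf so lvremove/lvreduce pass discards down
      # ext4: either inline TRIM ...
      mount -o remount,discard /dev/vg0/lv_data /data
      # ... or periodic batched TRIM instead
      fstrim -v /data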

    Read the article

  • Hyper-V multiple virtual machines with >2TB volumes from one RAID

    - by wurlog
    I have a server with two RAIDs:
      RAID 0: 2x 1TB
      RAID 6: 8x 2TB
    I used the first RAID for the Hyper-V installation itself. The virtual machines should use the RAID 6, but how can I configure that? I need at least one file server with most of the disk space (maybe a second). But every VHD has a maximum of 2TB, and I can't give a VM the volume directly because other virtual machines also have to be able to access the RAID 6. What do I do?

    Read the article

  • MediaShield RAID 5 is showing up as 760GB when the actual size is 2.7TB

    - by Ilya Volodin
    I just finished setting up Windows 2003 Server on my new server and started setting up a RAID 5 for it. I have 4x 1TB hard drives. In the MediaShield RAID utility (at boot time) the RAID size is displayed as 2.7TB. Linux also shows it as 2.7TB. However, in Windows, everything (including Windows Disk Management as well as the Windows-based MediaShield utility) reports only 760GB. I already tried converting the partition table from MBR to GUID (GPT), because I read somewhere that Windows can only handle up to 2TB with MBR tables; that didn't help much. I tried searching for partitioning utilities that I could use, but couldn't find anything free. I formatted the disk as an NTFS partition from within Linux, and it stopped showing up in Windows altogether; even the MediaShield Windows utility isn't showing it anymore. Windows is installed on a separate 500GB hard drive that's set up not to be part of the RAID. Any ideas?

    Read the article

  • Recover RAID 5 data after creating a new array instead of re-using

    - by Brigadieren
    Folks, please help - I am a newb with a major headache at hand (perfect storm situation). I have 3x 1TB HDDs in my Ubuntu 11.04 box configured as software RAID 5. The data had been copied weekly onto another separate, off-the-computer hard drive until that drive completely failed and was thrown away. A few days back we had a power outage, and after rebooting my box wouldn't mount the RAID. In my infinite wisdom I entered the mdadm --create -f... command instead of mdadm --assemble, and didn't notice the travesty I had done until afterwards. It started the array degraded and proceeded with building and syncing it, which took ~10 hours. After I was back I saw that the array is successfully up and running, but the RAID is not - I mean the individual drives are partitioned (partition type f8) but the md0 device is not. Realizing in horror what I had done, I am trying to find some solutions. I just pray that --create didn't overwrite the entire content of the hard drives. Could someone PLEASE help me out with this - the data that's on the drives is very important and unique: ~10 years of photos, docs, etc. Is it possible that specifying the participating hard drives in the wrong order made mdadm overwrite them? When I do mdadm --examine --scan I get something like:
      ARRAY /dev/md/0 metadata=1.2 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b name=<hostname>:0
    Interestingly enough, the name used to be 'raid' and not the hostname with :0 appended. Here are the 'sanitized' config entries:
      DEVICE /dev/sdf1 /dev/sde1 /dev/sdd1
      CREATE owner=root group=disk mode=0660 auto=yes
      HOMEHOST <system>
      MAILADDR root
      ARRAY /dev/md0 metadata=1.2 name=tanserv:0 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b
    Here is the output from mdstat:
      cat /proc/mdstat
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid5 sdd1[0] sdf1[3] sde1[1]
            1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      unused devices: <none>
    fdisk shows the following:
      fdisk -l
      Disk /dev/sda: 80.0 GB, 80026361856 bytes
      255 heads, 63 sectors/track, 9729 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000bf62e
      Device Boot Start End Blocks Id System
      /dev/sda1 * 1 9443 75846656 83 Linux
      /dev/sda2 9443 9730 2301953 5 Extended
      /dev/sda5 9443 9730 2301952 82 Linux swap / Solaris
      Disk /dev/sdb: 750.2 GB, 750156374016 bytes
      255 heads, 63 sectors/track, 91201 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000de8dd
      Device Boot Start End Blocks Id System
      /dev/sdb1 1 91201 732572001 8e Linux LVM
      Disk /dev/sdc: 500.1 GB, 500107862016 bytes
      255 heads, 63 sectors/track, 60801 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00056a17
      Device Boot Start End Blocks Id System
      /dev/sdc1 1 60801 488384001 8e Linux LVM
      Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x000ca948
      Device Boot Start End Blocks Id System
      /dev/sdd1 1 121601 976760001 fd Linux raid autodetect
      Disk /dev/dm-0: 1250.3 GB, 1250254913536 bytes
      255 heads, 63 sectors/track, 152001 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000
      Disk /dev/dm-0 doesn't contain a valid partition table
      Disk /dev/sde: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x93a66687
      Device Boot Start End Blocks Id System
      /dev/sde1 1 121601 976760001 fd Linux raid autodetect
      Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0xe6edc059
      Device Boot Start End Blocks Id System
      /dev/sdf1 1 121601 976760001 fd Linux raid autodetect
      Disk /dev/md0: 2000.4 GB, 2000401989632 bytes
      2 heads, 4 sectors/track, 488379392 cylinders
      Units = cylinders of 8 * 512 = 4096 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 524288 bytes / 1048576 bytes
      Disk identifier: 0x00000000
      Disk /dev/md0 doesn't contain a valid partition table
    Per suggestions I did clean up the superblocks and re-created the array with the --assume-clean option, but with no luck at all. Is there any tool that will help me revive at least some of the data? Can someone tell me what mdadm --create does when it syncs that destroys the data, so I can write a tool to undo whatever was done? After re-creating the RAID I ran fsck.ext4 /dev/md0, and here is the output:
      root@tanserv:/etc/mdadm# fsck.ext4 /dev/md0
      e2fsck 1.41.14 (22-Dec-2010)
      fsck.ext4: Superblock invalid, trying backup blocks...
      fsck.ext4: Bad magic number in super-block while trying to open /dev/md0
      The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
          e2fsck -b 8193
    Per Shanes' suggestion I tried:
      root@tanserv:/home/mushegh# mkfs.ext4 -n /dev/md0
      mke2fs 1.41.14 (22-Dec-2010)
      Filesystem label=
      OS type: Linux
      Block size=4096 (log=2)
      Fragment size=4096 (log=2)
      Stride=128 blocks, Stripe width=256 blocks
      122101760 inodes, 488379392 blocks
      24418969 blocks (5.00%) reserved for the super user
      First data block=0
      Maximum filesystem blocks=0
      14905 block groups
      32768 blocks per group, 32768 fragments per group
      8192 inodes per group
      Superblock backups stored on blocks:
          32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848
    and ran fsck.ext4 with every backup block, but all of them returned the following:
      root@tanserv:/home/mushegh# fsck.ext4 -b 214990848 /dev/md0
      e2fsck 1.41.14 (22-Dec-2010)
      fsck.ext4: Invalid argument while trying to open /dev/md0
      The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock:
          e2fsck -b 8193 <device>
    Any suggestions? Regards!
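    Before experimenting further, it may be worth capturing a read-only snapshot of what the drives currently say about themselves; nothing below writes to the disks, and the device names follow the question:
      mdadm --examine /dev/sdd1 /dev/sde1 /dev/sdf1 > md-superblocks.txt   # per-member metadata after the accidental --create
      mdadm --detail /dev/md0 >> md-superblocks.txt                        # chunk size, data offset, device order
      dd if=/dev/md0 bs=4096 count=1 2>/dev/null | file -                  # does the start of md0 still look like a filesystem?
    Whether any data survives an accidental --create depends largely on whether the new array used the same chunk size, device order and data offset as the old one, so recording those values is the first step for any recovery attempt.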

    Read the article

  • Is there any way to build a RAID system without all the drives?

    - by xenoterracide
    I'm building a RAID 1 (OK, it will probably be a raid10,f2, but the difference with 2 drives... isn't much) system with 2x 1TB drives. However, one of the drives I've ordered is bad, so I'm RMA-ing it. I'm wondering if I could partition and install to the one drive and then rebuild the array when I get the second drive (after I test it, of course). My initial investigation doesn't show me a way of creating the array without specifying all devices... and the device name the second drive will take currently belongs to a disk with data that I will need to migrate (plus it's not big enough). Is it possible to create an array without specifying all devices? Or specify false ones and reconfigure to the right ones later? Or some other method I'm not thinking of?
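    For what it's worth, mdadm accepts the literal word "missing" in place of a member device, so a degraded mirror can be created now and completed later; a minimal sketch with placeholder partition names:
      # create a one-legged RAID 1 on the good drive
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 missing
      # ... partition, install and use it; then, once the replacement arrives and tests clean:
      sfdisk -d /dev/sda | sfdisk /dev/sdb    # copy the partition layout to the new drive
      mdadm /dev/md0 --add /dev/sdb1          # hot-add; the mirror resyncs in the background
      cat /proc/mdstat                        # watch the rebuild progress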

    Read the article

  • Best practice Raid groups for EqualLogic PS6510X

    - by 20th Century Boy
    We are thinking about purchasing 4x EqualLogic PS6510X SANs (the Sumo boxes). Each has 48x 600GB 10k SAS drives. They will be stacked to form a logical pool of storage (all in the same location). I understand that when you create a RAID group it's done on a per-box basis, so one box could be RAID 50, another RAID 10, etc. My question is: should I make one box a "performance" box, i.e. RAID 10, and the other boxes "standard", i.e. RAID 50? How do people configure their EQL arrays in the real world?

    Read the article

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application that is hosted on AWS. We have several web servers and we have an NFS server. On the NFS server (a Linux server) we have several EBS volumes that are mounted, and we've used mdadm to combine the mounted volumes into a single RAID volume. The web servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media is on it, this would render our site unusable for the hours while Amazon is working on it. We want to try to prevent this downtime. I was thinking that we could prevent server downtime by setting up a new server temporarily, attaching the EBS drives (RAID volume) to that server, and having our web servers point there during the maintenance. This is a very high-risk operation, since it involves several terabytes of our production data. What would be the safe way to move our logical RAID drive (md0) to a new Amazon instance? I was hoping that I could start by building the new server, mounting the EBS volumes and assembling the RAID with mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works - but that would mean having it mounted on two instances at the same time, and I don't believe that is possible with the way filesystems work. "How do I move a Linux software RAID to a new machine?" suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent system downtime with our solution being hosted in the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so it is not a practical solution. Any ideas?
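    One hedged sequence for the cut-over, assuming the array can be stopped cleanly during a short maintenance window (the volume and instance IDs below are placeholders, and the AWS CLI calls are shown only to illustrate the detach/attach step):
      # on the current NFS server: quiesce NFS, unmount, and stop the array so the superblocks are clean
      umount /export/media                 # hypothetical mount point
      mdadm --stop /dev/md0
      # detach each EBS volume and attach it to the replacement instance
      aws ec2 detach-volume --volume-id vol-xxxxxxxx
      aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-yyyyyyyy --device /dev/sdf
      # on the replacement instance: mdadm reassembles the array from the superblocks
      mdadm --assemble --scan
      mount /dev/md0 /export/media
    Attaching the same standard EBS volume to two instances at once is not possible, so the test-before-cutover idea in the question would indeed require a downtime window (or a snapshot-based copy).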

    Read the article

  • Which drive do I mount?

    - by Crash893
    I have a system HDD and then two RAID 1 hard drives. I see that sda1 is the system drive, but when I do fdisk -l I get the following results. So which of the following do I need to mount to get the "raid" drive and not the individual HDDs?
      root@Mxxxx-PDC:/etc/samba# fdisk -l
      Disk /dev/sda: 251.0 GB, 251000193024 bytes
      255 heads, 63 sectors/track, 30515 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0x000762dc
      Device Boot Start End Blocks Id System
      /dev/sda1 * 1 30328 243609628+ 83 Linux
      /dev/sda2 30329 30515 1502077+ 5 Extended
      /dev/sda5 30329 30515 1502046 82 Linux swap / Solaris
      Disk /dev/sdb: 400.0 GB, 400088457216 bytes
      255 heads, 63 sectors/track, 48641 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0x00000000
      Device Boot Start End Blocks Id System
      /dev/sdb1 1 48641 390708801 83 Linux
      Disk /dev/sdc: 250.0 GB, 250059350016 bytes
      255 heads, 63 sectors/track, 30401 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0x0009f4b2
      Device Boot Start End Blocks Id System
      /dev/sdc1 * 1 255 2048256 fd Linux raid autodetect
      /dev/sdc2 256 30401 242147745 fd Linux raid autodetect
      Disk /dev/sdd: 250.0 GB, 250059350016 bytes
      255 heads, 63 sectors/track, 30401 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0x000b7f4c
      Device Boot Start End Blocks Id System
      /dev/sdd1 * 1 255 2048256 fd Linux raid autodetect
      /dev/sdd2 256 30401 242147745 fd Linux raid autodetect
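    The thing to mount is the assembled md device rather than any member partition (the sdc/sdd partitions above are just the RAID members); a short check, assuming the array is already assembled and the mount point is a placeholder:
      cat /proc/mdstat            # shows the md devices and which partitions back them
      mdadm --detail --scan       # the same information read from the superblocks
      mount /dev/md0 /mnt/raid    # mount the array, not the individual disks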

    Read the article

  • Breaking an mdadm RAID and moving to NTFS

    - by daveyt
    I'm running Ubuntu 8-something and my data is on a mirrored pair of 1TB disks formatted as ext3; the RAID is via mdadm. I want to move to Windows 7 (yeah, yeah, I know, but Linux ain't doing it for me at the moment) and migrate the disks to NTFS. My plan is:
      1. Break the mdadm RAID (by failing one disk logically).
      2. Format the 'failed' disk as NTFS.
      3. Copy data from the RAID array to the NTFS disk (don't care about perms).
      4. Install Windows (on a new, separate, non-RAID disk), and my data disk is available.
    I've researched this and it seems the easiest way. I don't have another disk to back up to, so I think this is my only way. Can anyone see a better/easier way?
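    Steps 1 and 2 of that plan might look roughly like this; a sketch only, with placeholder device names, and note that the surviving half of the mirror runs unprotected from the first command onwards:
      mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # drop one half of the mirror
      mdadm --zero-superblock /dev/sdb1                    # so it is no longer seen as a RAID member
      mkfs.ntfs -Q /dev/sdb1                               # quick NTFS format (ntfsprogs/ntfs-3g required)
      mount -t ntfs-3g /dev/sdb1 /mnt/ntfs                 # then copy the data over, e.g. with rsync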

    Read the article

  • HP E200i Controller RAID Configuration for Win2008 Ent, SQL Server, IIS Apps etc. - need opinion

    - by mn
    Hello. I currently run a RAID 5 (4x SAS drives) hosting:
      Win 2008 Ent (1x host)
      Win 2008 Ent (3x guests)
      SQL Server 2005 Std (on a guest)
      3x ASP.NET applications (on a guest)
    I bought 3 more drives to create an additional array on the same E200i controller (I am now waiting for confirmation that it is possible to have 3 arrays on the same controller). I am planning to have 2x RAID 5 (if that is possible): the first RAID 5 with all the VHD files, systems, etc., and the second RAID 5 with all the data files and transaction logs. I am looking for opinions on how to optimize the data layer (seven drives, one controller).

    Read the article

  • Linux Software Raid runs checkarray on the First Sunday of the Month? Why?

    - by mgjk
    It looks like Debian has a default to run checkarray on the first Sunday of the month. This causes massive performance problems and heavy disk usage for 12 hours on my 2TB mirror. Doing this "just in case" is bizarre to me. Discovering data out of sync between the two disks, without a quorum, would be a failure anyway. This massive check could only tell me that I have an unrecoverable drive failure and corrupt data - which is nice, but not all that helpful. Is it necessary? Given that I have no disk errors and no reason to believe my disks have failed, why is this check necessary? Should I take it out of my cron?
      /etc/cron.d# tail -1 /etc/cron.d/mdadm
      57 0 * * 0 root [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ] && /usr/share/mdadm/checkarray --cron --all --quiet
    Thanks for any insight,
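    If the monthly check is unwanted, two hedged options (the paths are the Debian defaults quoted above; check your own release before editing):
      # 1. disable it: comment out the line in /etc/cron.d/mdadm, or on many Debian releases set
      #    AUTOCHECK=false in /etc/default/mdadm
      # 2. keep it but throttle the resync bandwidth so foreground I/O stays usable
      sysctl -w dev.raid.speed_limit_max=10000   # KB/s per device; choose a value that suits the disks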

    Read the article

  • How do you expand a RAID disk array in a Dell 2850?

    - by johnny
    Hi, I have a Dell 2850 and I want to install Windows Server 2008. The problem is that my C drive only has 16GB of space, and the requirements say I need at least 20. I have an open bay for a drive. If I put in another drive, how can I add it to the array and then make the extra space available only to the C drive? What do I do? Thank you. Edit: I don't want to remove any drives. I just want to add a new one to the existing array. Can I do that and make sure the new drive is used for the logical C drive?

    Read the article

  • Linux Raid: Can mdadm --grow a raid1 while mounted?

    - by Chris
    I have 2x 500GB drives in a RAID 1 setup that I needed to upgrade for more space. I mdadm --fail'ed each drive in turn, used dd to copy each drive to its respective larger drive (2TB each), removed the smaller drives and replaced them with the larger ones, then reassembled the array and forced a resync. So now I've got a 500GB RAID 1 sitting on 2TB drives, and I wish to grow it. The plan is to use mdadm --manage /dev/md0 --grow to grow it, then boot a rescue CD, assemble the array in that environment, and run resize2fs on it. Can I use mdadm --grow on a mounted and live filesystem? Also, do I need more options to make sure the grow operation stays RAID 1?
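    For reference, a minimal sketch of doing the whole thing online, assuming the underlying partitions or disks already expose the full 2TB and that the filesystem is ext3/ext4 (growing the size never changes the RAID level, so no extra option is needed for that):
      mdadm --grow /dev/md0 --size=max   # extend the RAID 1 to the size of its smallest member
      cat /proc/mdstat                   # wait for the resync of the newly added region to finish
      resize2fs /dev/md0                 # ext3/ext4 can be grown while mounted on kernels of this era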

    Read the article

  • Windows 7 Sharing issue on RAID 5 Array(s)

    - by K.A.I.N
    Greetings all, I'm having a very odd error with a Windows 7 Ultimate x64 system. The network setup is as follows:
      2x XP Pro 32-bit machines
      1x Vista Ultimate x64 machine
      2x Windows 7 Ultimate x64 machines
    all chained into 1x 16-port Netgear ProSafe gigabit switch; the Windows 7 and Vista machines are duplexed. There is also a router (Netgear RangeMax) chained off the switch. I am basically using one of the Windows 7 machines to host storage and stream media to the other machines. To this end I have put 2x 3TB hardware RAID 5 arrays in it, plus assorted other spare disks, and I have shared the roots of all of them. The unusual problems start when I get "Access denied, please contact administrator for permission" blah blah blah when trying to access both of the RAID 5 arrays, but not the other standalone disks. I have checked the permission settings, I have added Everyone to the read permission for the root, and I have tried moving things into subdirectories and then sharing those. I have tried various setting combinations in HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa, always with the same result. I have tried flushing caches all round, disabling and re-enabling shares, and sharing after a restart, as well as several other things, and the result is always the same... no problem on the individual drives, but access denied on both RAID arrays from the XP, Vista and Windows 7 machines. One interesting quirk that may lead to an answer is that there is no "offline status" information for the folders when you select the RAID 5s from a Windows 7 machine, yet there is on the normal drives, which say they are online. It is as if the RAID is present but turned off or spun down, but as far as I was aware Windows will spin an array back up on a network request, and on the machine itself the drives seem to be online and can be accessed. Have to admit this has me stumped. Any suggestions, anyone? Thanks in advance for any fellow geek assistance. K.A.I.N

    Read the article

  • Windows Server 2008 RAID10

    - by JT
    Hello all, I am building a storage system for myself. I have a 16-bay SATA chassis, and right now I have:
      1x 500GB SATA for booting
      8x 1.5TB for data
      a 3Ware 9500S-8 RAID card that the 8 data drives are connected to
    I am used to Linux, but not in the RAID department. I have Windows experience too. What I am looking for is something that I can just let sit, be reliable, and use for other things as well (like running test websites, Apache, MySQL, etc). This box is private on a Class C subnet. My thought is to at least consider Windows Server 2008; I especially like the potential for non-GUI mode. Can Windows Server 2008 do a software RAID 10 out of the box? Does software RAID give better performance, and is it better in case the RAID needs to be moved to another machine? I just want to SCP files, so could OpenSSH run on it? Can one install the GUI but not use it unless they get in a bind? Is Windows a good idea, or should I stick to a Linux software RAID or FreeBSD + ZFS?

    Read the article

  • Adding third disk as a single disk in a server with an existing RAID1

    - by slowhandsolo
    I've got a ProLiant DL360 G5 server (Fedora 13) with two SAS disks in a hardware RAID 1, working fine. I have now hot-plugged another SAS disk. I'd like to configure this new hard disk outside of my RAID, as a single non-RAID disk (e.g. /dev/sdb). Even after rebooting the server, I can't see the new disk with fdisk -l. It displays only my hardware RAID, but not the new disk:
      [root@myserver]# fdisk -l
      Disco /dev/cciss/c0d0: 300.0 GB, 299966445568 bytes
      Disposit. Inicio Comienzo Fin Bloques Id Sistema
      /dev/cciss/c0d0p1 * 1 126 512000 83 Linux
      /dev/cciss/c0d0p2 126 71798 292422656 8e Linux LVM
      Disco /dev/dm-0: 234.9 GB, 234881024000 bytes
      Disco /dev/dm-1: 10.5 GB, 10536091648 bytes
      Disco /dev/dm-2: 21.0 GB, 20971520000 bytes
      Disco /dev/dm-3: 31.5 GB, 31474057216 bytes
      Disco /dev/dm-4: 1577 MB, 1577058304 bytes
    However, I can see the new disk using the HP Array Configuration Utility CLI for Linux, hpacucli:
      [root@myserver]# hpacucli
      => controller slot=0 physicaldrive all show status
      physicaldrive 1I:1:1 (port 1I:box 1:bay 1, 300 GB): OK
      physicaldrive 1I:1:2 (port 1I:box 1:bay 2, 300 GB): OK
      physicaldrive 1I:1:3 (port 1I:box 1:bay 3, 300 GB): OK
      => controller slot=0 pd all show detail
      Smart Array P400i in Slot 0 (Embedded)
      array A
        physicaldrive 1I:1:1
          Port: 1I Box: 1 Bay: 1
        physicaldrive 1I:1:2
          Port: 1I Box: 1 Bay: 2
      unassigned
        physicaldrive 1I:1:3
          Port: 1I Box: 1 Bay: 3
          Status: OK
          Drive Type: Unassigned Drive
    As you can see, I've got two SAS disks in a RAID 1 and the new disk as unassigned. Is there any way to work with the new disk as another non-RAID single disk? If relevant, I want to create a new partition on the new disk, format it with mkfs and mount it, but as I can't see it with fdisk, I don't know how to do it. Thanks!
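    With a Smart Array controller the OS generally only sees logical drives, so the usual way to use an unassigned disk on its own is to wrap it in a single-disk RAID 0 logical drive; a sketch from hpacucli (worth confirming against the tool's own help before running):
      => controller slot=0 create type=ld drives=1I:1:3 raid=0
      => controller slot=0 logicaldrive all show
    The new logical drive should then appear to Linux as another /dev/cciss/c0dX device that can be partitioned and formatted normally.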

    Read the article

  • How to create a partition when growing a RAID 5 with mdadm

    - by hometoast
    I have 4 drives: 2x 640GB and 2x 1TB. My array is made up of four 640GB partitions at the beginning of each drive. I want to replace both 640GB drives with 1TB drives. I understand I need to: 1) fail a disk, 2) replace it with the new one, 3) partition it, 4) add the disk to the array. My question is: when I create the new partition on the new 1TB drive, do I create a 1TB "Linux raid autodetect" partition? Or do I create another 640GB partition and grow it later? Or perhaps the same question could be worded: after I replace the drives, how do I grow the 640GB RAID partitions to fill the rest of the 1TB drives? fdisk info:
      Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0xe3d0900f
      Device Boot Start End Blocks Id System
      /dev/sdb1 1 77825 625129281 fd Linux raid autodetect
      /dev/sdb2 77826 121601 351630720 83 Linux
      Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
      255 heads, 63 sectors/track, 121601 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0xc0b23adf
      Device Boot Start End Blocks Id System
      /dev/sdc1 1 77825 625129281 fd Linux raid autodetect
      /dev/sdc2 77826 121601 351630720 83 Linux
      Disk /dev/sdd: 640.1 GB, 640135028736 bytes
      255 heads, 63 sectors/track, 77825 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0x582c8b94
      Device Boot Start End Blocks Id System
      /dev/sdd1 1 77825 625129281 fd Linux raid autodetect
      Disk /dev/sde: 640.1 GB, 640135028736 bytes
      255 heads, 63 sectors/track, 77825 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Disk identifier: 0xbc33313a
      Device Boot Start End Blocks Id System
      /dev/sde1 1 77825 625129281 fd Linux raid autodetect
      Disk /dev/md0: 1920.4 GB, 1920396951552 bytes
      2 heads, 4 sectors/track, 468846912 cylinders
      Units = cylinders of 8 * 512 = 4096 bytes
      Disk identifier: 0x00000000
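    Either ordering can work, but partitioning each new 1TB disk with a full-size type-fd partition up front and then growing the array once every member is large is the simpler route; a sketch that follows the device names above (the array must finish rebuilding after each swap before the next disk is touched):
      # for each 640GB drive in turn
      mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
      # swap the hardware, create /dev/sdd1 spanning the whole new disk with type fd
      mdadm /dev/md0 --add /dev/sdd1
      # once all members sit on full-size partitions
      mdadm --grow /dev/md0 --size=max
      resize2fs /dev/md0    # assumes ext3/ext4 on md0; other filesystems need their own resize tool
    Note that sdb1 and sdc1 in the listing above are also only 640GB, so --size=max will only find extra space once those partitions are enlarged (or recreated) as well.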

    Read the article
