Search Results

Search found 44026 results on 1762 pages for 'raid question'.


  • Removing RAID 1 (mirroring) and leaving data on both drives

    - by ajma
    Hello, I have two drives in a RAID 1 (mirroring) array, using the hardware RAID built into an Intel motherboard (Asus P5BE). I'd like to remove one drive but keep the data on both, because I want to put one of the drives into another machine. Can I go into the RAID configuration, remove the array, and have the data remain?

    Read the article

  • Running RAID on an internal storage drive

    - by Johnny W
    I am running Windows 8 on an SSD, and it's all running swimmingly, but I want to put my documents on a "normal" HD running under RAID 1. I have four SATA 3 Gb/s ports on my motherboard (my Windows 8 SSD is on a different 6 Gb/s controller). All four are used: 1 Blu-ray optical drive, 1 spare HD, and the 2 I wish to turn into a RAID 1 volume. In my BIOS I can only change settings for the entire controller, not for individual ports. So my question is: if I switch this controller into RAID mode, will that negatively affect the non-RAID hardware plugged into it? I.e. will an HD or Blu-ray drive be slower or incompatible when plugged into a port that is no longer in AHCI mode? Thanks.

    Read the article

  • Can a RAID 0 disk be rebuilt?

    - by Rogue
    Recently one of the hard drives in my RAID 0 configuration gave an error. What do I do now? I'm hoping I can replace the faulty disk with a new hard drive and that the RAID can rebuild itself (using Intel Matrix Storage Console). Is this possible? I doubt it, but is there any way I can rebuild the RAID, or have I lost everything on it? TECH INFO: I have a software RAID on an Intel DG965WH motherboard and the operating system is Windows.

    Read the article

  • Linux RAID: Replacing a Failed Drive... Permanently

    - by user137519
    Okay, odd question here. I have a server with RAID 5. A drive failed, physically, in a really odd way: the machine boots and the drive is seen by the BIOS, but no partition can be seen on it consistently (it comes and goes). With 2 out of 3 drives working, I made a new spare disk and added it, and the RAID 5 rebuilt clean. All appears well, but when I reboot it keeps trying to use the 2nd drive, which doesn't give any partition data, so of course the RAID 5 is back to 2 out of 3 again. The status of my drives is as follows: /dev/sda2 good, /dev/sdb2 bad (the drive has a physical problem, so no partition data), /dev/sdc2 good, /dev/sdd2 good. Every time I reboot, mdadm seems to keep trying to use /dev/sdb, which has the physical failure (although it spins and is detected). /dev/sdd is the new drive I created. I add /dev/sdd to the RAID and it rebuilds, but this isn't remembered across a reboot, so it keeps listing /dev/sda and /dev/sdc and doesn't use the perfectly good /dev/sdd until I re-add it manually. I've tried removing the dead drive with the mdadm tool, but as it cannot see the /dev/sdb partitions it will not fail or remove it (it says the partition doesn't exist). The /etc/mdadm.conf was generated automatically on the original OS install and only lists:

      DEVICE partitions
      MAILADDR root
      ARRAY /dev/md2 super-minor=2
      ARRAY /dev/md0 super-minor=0
      ARRAY /dev/md1 super-minor=1

    Basically just the arrays to use on boot. I need to remove this semi-dead drive (/dev/sdb), but I'd prefer to know why this is happening before I do. Any ideas or suggestions? I suppose I could attempt to clone/replace /dev/sdb (the partitions on the drive show up, then disappear shortly after), but given that Cheshire-cat behaviour this seems risky to me, and as I have a working spare it seems unnecessary. Thanks in advance for your insight.
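
    A rough sketch of the usual mdadm sequence for this situation, assuming /dev/md2 is the affected array and /dev/sdb2 the half-dead member (device names taken from the question; run from a rescue environment if the live system refuses):

      # mark the flaky member failed and drop it from the array
      mdadm /dev/md2 --fail /dev/sdb2
      mdadm /dev/md2 --remove /dev/sdb2

      # if the stale superblock on the old disk keeps re-asserting its slot,
      # wipe it so it can never be auto-assembled again
      mdadm --zero-superblock /dev/sdb2

      # make the current membership (including the new /dev/sdd2) persistent
      mdadm --detail --scan >> /etc/mdadm.conf

    The usual reason a re-add is forgotten is exactly that combination of a stale superblock and a minimal mdadm.conf, so zeroing the old member and recording the array by UUID is what tends to make the change stick; if the array is assembled from an initrd, the initrd needs regenerating too.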

    Read the article

  • Downsides to running RAID on unmatched drives?

    - by NoCarrier
    I've got a generic RAID controller built into my motherboard and I want to build either a RAID 5 or a RAID 0+1 array. What are the disadvantages of running unmatched drives, e.g. four 7200 rpm 500 GB drives from different brands? The answer will determine whether I look around for whatever used drives I can get my hands on or pay extra for a set of matched, identical drives.

    Read the article

  • How to set up RAID partitions with parted?

    - by psycketom
    I'm going through the RAID guide at https://wiki.archlinux.org/index.php/RAID, but I'm stuck on partition tables. Since my drives are 3TB, fdisk and cfdisk won't cut it due to their 2TB (MBR) limit, but they are straightforward for managing partitions, e.g. setting da or fd as the partition type. There is no equally straightforward guide for setting up RAID partitions with parted. So, how do I create a Non-FS or RAID partition with parted?
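
    For drives over 2TB the usual recipe is a GPT label plus a partition flagged as raid. A minimal parted sketch, assuming the disk is /dev/sdb (substitute your own device and double-check before wiping anything):

      parted -s /dev/sdb mklabel gpt               # GPT instead of MBR, needed past 2TB
      parted -s /dev/sdb mkpart primary 1MiB 100%  # one partition spanning the disk, 1 MiB aligned
      parted -s /dev/sdb set 1 raid on             # roughly the GPT counterpart of MBR type fd
      parted -s /dev/sdb print                     # verify the layout

    The raid flag just sets the Linux RAID partition type GUID; mdadm identifies members by their superblocks anyway, so the flag mainly helps other tools recognise what the partition is for.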

    Read the article

  • Disable writes when the RAID is in degraded mode

    - by jolivier
    I have a RAID 5 with 5 disks on my machine and I suspect the motherboard chipset fails at some point and puts my RAID into degraded mode. Last time it happened I only noticed when the 2nd drive connected to the same chipset failed, and I lost a lot of data. I would like to prevent this; in particular, I would like mdadm to disable writes on the RAID if one of the disks fails, so that in the meantime I get notified, recover, and can use my system again. Sadly I could not find this in man mdadm, so I was wondering if it is possible via a tool or a hidden option, since to me it looks like a standard feature of a RAID system. If this is not possible, I would also be happy with a solution that stops the RAID when it is degraded.
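
    As far as I know mdadm has no built-in "stop writing when degraded" switch, but mdadm --monitor can run an arbitrary program on a Fail event, and that hook can at least remount the affected filesystem read-only. A hedged sketch of the idea (the script path and the /data mount point are made up for illustration):

      #!/bin/sh
      # /usr/local/sbin/md-degraded-hook  (hypothetical path)
      # mdadm calls this as: <program> <event> <md-device> [<component-device>]
      EVENT="$1"; ARRAY="$2"
      case "$EVENT" in
        Fail|DegradedArray)
          # best effort: block further writes by remounting the filesystem read-only
          mount -o remount,ro /data && logger "md monitor: $ARRAY degraded, /data remounted read-only"
          ;;
      esac

      # start the monitor (or point a PROGRAM line in mdadm.conf at the script)
      mdadm --monitor --scan --daemonise --program=/usr/local/sbin/md-degraded-hook

    Stopping the array outright (mdadm --stop) from the same hook is also possible, but only once everything using it has been unmounted, so the remount is usually the less disruptive first step.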

    Read the article

  • Correct stripe width for RAID-50

    - by daniel
    I've been trying to determine the correct stripe width for a RAID-50 volume, but haven't been successful with Google searches or empirical tests. The volume is built from 4 disk spans that contain 6 disks each. If I understand it correctly, each individual span is a RAID-5 volume and the 4 spans are combined using RAID-0. However, I'm not seeing any noticeable effect when I vary the stripe width from 2 to 20. Suggestions?
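
    As a back-of-the-envelope check (the 64 KiB per-disk strip size below is only an example; use whatever the controller is actually set to): each 6-disk RAID-5 span has 5 data disks and the RAID-0 layer sees 4 spans, so a full stripe is 4 x 5 strips. That figure is also what a filesystem wants for its stride/stripe-width hints, e.g.:

      STRIP_KB=64                                  # per-disk strip size (example value)
      DATA_DISKS=$((4 * (6 - 1)))                  # 4 spans x 5 data disks each = 20
      FULL_STRIPE_KB=$((DATA_DISKS * STRIP_KB))    # 1280 KiB per full RAID-50 stripe
      # ext4 expresses these hints in 4 KiB filesystem blocks:
      echo "mkfs.ext4 -E stride=$((STRIP_KB / 4)),stripe-width=$((FULL_STRIPE_KB / 4)) /dev/sdX"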

    Read the article

  • When RAID 10 is SLOWER than RAID 1, why?

    - by Paul
    We have a Dell 2950 with a PERC and 14 external 15K SAS 73GB drives. An Oracle database job takes 3 hours to run with the drives set up as hardware RAID 10 (striped across 7 mirrored pairs). The same job with the drives in RAID 1 takes only 1 hour. The OS is Windows 2008 R2, I think. Before we change the RAID level (with considerable downtime) on the production box, does anyone know why we're seeing this odd result, and whether there's a better way to fix it?

    Read the article

  • HDDs randomly falling off the RAID

    - by michael
    I really need help on this; it's the Saturday of a long weekend, so no customer service to call. I recently built a new server/light-duty desktop whose main purpose is just file sharing. RAID configuration: Adaptec 6805, 8x 3TB WD Red HDDs, Intel RES2SV240 expander, RAID 6, in an Intel DZ77GA-70K motherboard. I upgraded the firmware, but I'm having a strange problem. During the build/verify, segment 7 went missing. I reinserted the drive into its hot-swap bay and it started to rebuild the array. After the rebuild was done, segments 0 and 5 went missing during the next build/verify. I reinserted those drives and now I'm praying that the RAID rebuilds successfully from the remaining 6 drives, because I already transferred some data onto it (I know that was a bad idea). I checked S.M.A.R.T. on the missing drives; it only shows link failures, and aborted commands on one of them. No errors on the HDDs themselves. Connections and cables are good. I added 2 fans blowing on the RAID controller because it was getting too hot, so I guess overheating shouldn't be an issue. What can possibly be wrong? Thank you for the help.
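
    Not an answer, but a hedged sketch of what I would check first with the Adaptec CLI before trusting the array again (controller number 1 is an assumption; arcconf numbers controllers from 1):

      arcconf getconfig 1 ld     # logical device state: should go Degraded -> Rebuilding -> Optimal
      arcconf getconfig 1 pd     # per-disk state, negotiated link speed, enclosure/slot mapping
      arcconf getlogs 1 device   # medium errors, aborted commands and link failures per device
      arcconf getlogs 1 dead     # history of drives the controller has marked dead or missing

    Drives that drop off an expander and come back clean on reinsert tend to point at the path (cabling, backplane, expander firmware, power) rather than the disks themselves, which would fit the link-failure and aborted-command entries.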

    Read the article

  • Windows 2008 Best RAID Configuration

    - by Brandon Wilson
    I have 4 2TB hard drives and I was thinking about using RAID 10. This would give me 4TB, correct? My next question is whether it would be easy to add more hard drives to the RAID array later. For example, if I bought another hard drive, could I add it to the array without backing up any data? Basically I want to start off with 4TB and, when the space fills up, add more space as needed. If this isn't possible with RAID 10, is it possible with any RAID configuration? Any suggestions would be appreciated. Thank you.

    Read the article

  • RAID strategy - 8 1TB drives

    - by alex
    I'm setting up a backup storage device. This machine runs Windows Server 2008 on a separate boot drive, has 8x 1TB drives, and uses a hardware RAID card. My question is: which RAID configuration should I go for? Initially I was going to go with RAID 5 across all 8 drives, but members on Server Fault have advised against it; I was just wondering why. Some people have suggested two RAID 5 sets of 4 drives each, then striping them. I want to maximise the storage space, as this is a backup unit; it will store SQL backups, Acronis images, files, etc. It won't be for public access, so the I/O won't be that high, I wouldn't think.

    Read the article

  • How to display/define mirror/striping pairs with mdadm

    - by Chris
    I want to make a standard Linux software RAID 10 over 4 HDDs. The server has 4 HDDs, 2 pairs from different vendors, in order to avoid batch problems. I want each mirror to span the two different vendors, and then the stripe to run over the mirror pairs. I could do that by manually creating RAID 1 arrays and a RAID 0 on top, but mdadm supports RAID level 10 directly. I just can't figure out how that RAID 10 is then handled and how the data is distributed.

      mdadm --detail /dev/md10
      /dev/md10:
              Version : 1.2
        Creation Time : Wed May 28 11:06:23 2014
           Raid Level : raid10
           Array Size : 1953260544 (1862.77 GiB 2000.14 GB)
        Used Dev Size : 976630272 (931.39 GiB 1000.07 GB)
         Raid Devices : 4
        Total Devices : 4
          Persistence : Superblock is persistent
          Update Time : Wed May 28 11:06:23 2014
                State : clean, resyncing (PENDING)
       Active Devices : 4
      Working Devices : 4
       Failed Devices : 0
        Spare Devices : 0
               Layout : near=2
           Chunk Size : 512K
                 Name : pdwhost:10  (local to host pdwhost)
                 UUID : a3de0ad5:9e694ee1:addc6786:c4449e40
               Events : 0

          Number   Major   Minor   RaidDevice State
             0       8        1        0      active sync   /dev/sda1
             1       8       81        1      active sync   /dev/sdf1
             2       8       97        2      active sync   /dev/sdg1
             3       8      113        3      active sync   /dev/sdh1

    does not really give any information about that. How it should be: RAID 1 (mirror) over /dev/sda1 + /dev/sdf1 and over /dev/sdg1 + /dev/sdh1, then RAID 0 over the two RAID 1 pairs. Is it possible to do that with the built-in level=10, and how can I see which pairs are mirrored? Thanks a lot for your help.
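
    Two hedged ways to get the pairing you describe (device names copied from the question; the md numbers in the nested variant are just examples). With the built-in level=10 and the default near=2 layout, mdadm mirrors adjacent devices in the order you list them, so the order below gives (sda1,sdf1) and (sdg1,sdh1) as the mirrored couples; the nested variant makes the pairs explicit:

      # Option A: built-in RAID 10, near=2 layout -- adjacent devices become mirror pairs
      mdadm --create /dev/md10 --level=10 --layout=n2 --raid-devices=4 \
            /dev/sda1 /dev/sdf1 /dev/sdg1 /dev/sdh1

      # Option B: explicit nesting -- two RAID 1 pairs, striped with RAID 0 on top
      mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sda1 /dev/sdf1
      mdadm --create /dev/md12 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1
      mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md11 /dev/md12

    With option A the pairing can be read back from the RaidDevice column of mdadm --detail: in a near=2 layout, slots 0/1 form one mirror and slots 2/3 the other.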

    Read the article

  • What's the best way to recover when your RAID hardware incorrectly thinks a disk is missing?

    - by Software Monkey
    I have a Windows 7 system with an MSI motherboard (running the latest AMD BIOS) and two of my four disks (not the system boot disk) configured via the mobo as RAID 1. After a normal system restart today, the RAID BIOS reports that one of the two drives has been disconnected or has failed. It hasn't really failed; via recovery tools I can verify that if I take the BIOS out of RAID mode. But I can find no way to re-add the second hard disk to the array and rebuild via the BIOS; the only option seems to be to delete the array and recreate it, but I've done that once before and it blows away the disk. It did this once before, but on a subsequent reboot, after double-checking the drive cabling (without changing anything), it booted up fine. So I think the mobo RAID is a little bit flaky. At this point I would like to remove the RAID drivers, change to AHCI mode and switch over to using a Windows 7 dynamic mirrored disk. But the RAID drivers seem deeply bound into the Windows startup; I can't find anything like the good ol' safe mode in Windows 7. If I boot from the Win 7 install disk in AHCI mode I can use the recovery tools to log in to the Windows 7 installation, so the boot drive seems fine with AHCI mode. Additionally, I can see all my other disks, run chkdsk on them and they seem to be fine. If I try to boot from the HDD in AHCI mode, it just reboots part way through, presumably because the RAID drivers load and conflict with the BIOS being set to AHCI. So:
    1. How do I strip the RAID drivers from my Win 7 installation?
    2. If I delete the RAID logical disk, will it really delete the partitioning information, or is that just a poorly worded message when it says the data on the disk will be deleted?
    3. If I disconnect the 2 disks in the RAID array, delete the logical disk array, and then reconnect and reboot still in RAID mode, will the disks simply revert to single RAID disks like my other 2? Then maybe I can leave Windows with the RAID drivers and operate the disks as singles, with 2 of them in a Windows dynamic-disk mirrored setup.
    4. Does Windows 7 have anything like the Windows XP repair install, where it reinstalls the OS binaries from CD but leaves apps and settings alone?
    I am really hoping I don't have to do a complete reinstall of Windows 7; the last one, when I upgraded from XP, took me two days to get everything set up and installed.

    Read the article

  • Linux Software RAID 1 Rebuild Completes, but after reboot, it's degraded again

    - by zimmy6996
    I have been beating my head against an issue here, and I'm now turning to the internet for help. I have a system running Mandrake Linux with the following configuration:
    /dev/hda - an IDE drive with the partitions that boot the system and make up most of the file system
    /dev/sda - drive 1 of 2 in a software RAID, /dev/md0
    /dev/sdb - drive 2 of 2 in the software RAID, /dev/md0
    md0 gets mounted via fstab as /data-storage, so it is not critical to the system's ability to boot; we can comment it out of fstab and the system works fine either way. The problem is, we had a failed sdb drive, so I shut the box down, pulled the failed disk and installed a new one. When the system boots up, /proc/mdstat shows only sda as part of the RAID. I then run the various commands to rebuild the RAID onto /dev/sdb. Everything rebuilds correctly, and upon completion /proc/mdstat shows 2 drives, sda1(0) and sdb1(1). Everything looks great. Then you reboot the box... ugh! Once rebooted, sdb is missing from the RAID again. It is as if the rebuild never happened. I can walk through the commands to rebuild it again and it will work, but after a reboot the box seems to make sdb just vanish! The really odd thing is that if, after a reboot, I pull sda out of the box and try to get the system to load with only the rebuilt sdb drive in the system, it actually throws an error just after GRUB, says something about a drive error, and has to shut down. Thoughts? I'm starting to wonder if GRUB has something to do with this mess, i.e. that the drive isn't being set up within GRUB to be visible at boot. This RAID array isn't necessary for the system to boot, but with the replacement drive in there and without sda it won't boot the system, so it makes me believe there is something to that. On top of that, there just seems to be something wonky about the drive falling off the RAID after a reboot. I've hit the point of pounding my head on the keyboard. Any help would be greatly appreciated!
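
    A hedged checklist of what usually makes an md rebuild stick across reboots on a distro of that era (device names follow the question):

      # 1. the new member must carry an md superblock -- verify it after the rebuild
      mdadm --examine /dev/sdb1

      # 2. old in-kernel autodetection only assembles partitions of type fd
      #    (Linux raid autodetect); check the partition type and fix it if needed
      fdisk -l /dev/sdb

      # 3. record the array so assembly at boot finds both halves
      mdadm --detail --scan >> /etc/mdadm.conf

      # 4. if the array is assembled from an initrd, regenerate it (mkinitrd on
      #    Mandrake-era systems) so the new configuration is actually inside it

    The failure when booting with sda pulled may be a separate issue: removing a disk changes the BIOS and GRUB drive enumeration, so the device mapping the bootloader was installed with can end up pointing at the wrong disk.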

    Read the article

  • How many disks is too many in this RAID 5 configuration?

    - by Tom
    HP 2012i SAN, 7 disks in RAID 5 with 1 hot spare; it took several days to expand the volume from 5 to 7 300GB SAS drives. I'm looking for suggestions on when and how I would determine that having 2 volumes in the SAN, each one RAID 5, would be better. I can add 3 more drives to the controller someday; the SAN is used for ESX/vSphere VMs. Thank you.

    Read the article

  • Linux RAID-0 performance doesn't scale up over 1 GB/s

    - by wazoox
    I have trouble getting the maximum throughput out of my setup. The hardware is as follows:
    - dual quad-core AMD Opteron 2376 processors
    - 16 GB DDR2 ECC RAM
    - dual Adaptec 52245 RAID controllers
    - 48x 1 TB SATA drives set up as 2 RAID-6 arrays (256KB stripe) + spares
    Software: plain vanilla 2.6.32.25 kernel, compiled for AMD64 and optimized for NUMA; Debian Lenny userland. Benchmarks run: disktest, bonnie++, dd, etc. All give the same results; no discrepancy here. IO scheduler used: noop. Yeah, no trick here. Up until now I basically assumed that striping (RAID 0) several physical devices should increase performance roughly linearly. However, this is not the case here: each RAID array achieves about 780 MB/s sustained writes and 1 GB/s sustained reads. Writing to both RAID arrays simultaneously with two different processes gives 750 + 750 MB/s, and reading from both gives 1 + 1 GB/s. However, when I stripe both arrays together, using either mdadm or LVM, the performance is about 850 MB/s writing and 1.4 GB/s reading: at least 30% less than expected! Running two parallel writer or reader processes against the striped array doesn't improve the figures; in fact it degrades performance even further. So what's happening here? I basically ruled out bus or memory contention, because when I run dd on both arrays simultaneously, aggregate write speed actually reaches 1.5 GB/s and read speed tops 2 GB/s. So it's not the PCIe bus, and I suppose it's not the RAM. It's not the filesystem, because I get exactly the same numbers benchmarking against the raw device or using XFS. I also get exactly the same performance using either LVM striping or md striping. What's wrong? What's preventing a process from reaching the maximum possible throughput? Is Linux striping defective? What other tests could I run?
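
    A few hedged experiments that sometimes move this kind of plateau (device names are placeholders, and the dd runs below are destructive, so scratch arrays only):

      # 1. rebuild the md stripe with a larger chunk so each RAID-6 array still sees big sequential I/O
      mdadm --create /dev/md0 --level=0 --chunk=1024 --raid-devices=2 /dev/sdX /dev/sdY

      # 2. raise the readahead on the striped device (value is in 512-byte sectors)
      blockdev --setra 65536 /dev/md0

      # 3. benchmark with O_DIRECT and large blocks to take the page cache out of the picture
      dd if=/dev/zero of=/dev/md0 bs=4M count=4096 oflag=direct   # DESTRUCTIVE: overwrites the array
      dd if=/dev/md0 of=/dev/null bs=4M count=4096 iflag=direct

    Comparing one dd against several running at different offsets can also help tell a per-request latency limit apart from a raw bandwidth limit.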

    Read the article

  • HP ProLiant G7 hardware RAID configuration automation with RIBCL

    - by karthik
    I have been trying to automate the hardware RAID configuration of HP ProLiant machines before OS installation (so I cannot use hpacucli). SSH into iLO 3 doesn't offer an option for RAID configuration. I use RIBCL, but there is no command for RAID config; however, I see this under the GET_EMBEDDED_HEALTH command:

      <STORAGE>
        <CONTROLLER>
          <LABEL VALUE="Controller on System Board"/>
          <STATUS VALUE="OK"/>
          <CONTROLLER_STATUS VALUE="OK"/>
          <SERIAL_NUMBER VALUE="50014380215F0070"/>
          <MODEL VALUE="HP Smart Array P420i Controller"/>
          <FW_VERSION VALUE="3.41"/>
          <DRIVE_ENCLOSURE>
            <LABEL VALUE="Port 1I Box 1"/>
            <STATUS VALUE="OK"/>
            <DRIVE_BAY VALUE="04"/>
          </DRIVE_ENCLOSURE>
          <DRIVE_ENCLOSURE>
            <LABEL VALUE="Port 2I Box 0"/>
            <STATUS VALUE="OK"/>
            <DRIVE_BAY VALUE="01"/>
          </DRIVE_ENCLOSURE>
          <LOGICAL_DRIVE>
            <LABEL VALUE="01"/>
            <STATUS VALUE="OK"/>
            <CAPACITY VALUE="68 GB"/>
            <FAULT_TOLERANCE VALUE="RAID 0"/>
            <PHYSICAL_DRIVE>
              <LABEL VALUE="Port 1I Box 1 Bay 3"/>
              <STATUS VALUE="OK"/>
              <SERIAL_NUMBER VALUE="6TA0N3SZ0000B231CYDT"/>
              <MODEL VALUE="EH0072FAWJA"/>
              <CAPACITY VALUE="68 GB"/>
              <LOCATION VALUE="Port 1I Box 1 Bay 3"/>
              <FW_VERSION VALUE="HPDH"/>
              <DRIVE_CONFIGURATION VALUE="Configured"/>
            </PHYSICAL_DRIVE>
          </LOGICAL_DRIVE>
        </CONTROLLER>
      </STORAGE>

    My question is: is there a way to modify/create this XML piece (say, for 2 logical drives with one spare) so that it takes effect when I reboot the server? If this approach is not correct, are there other ways to automate hardware RAID configuration?
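
    As far as I know GET_EMBEDDED_HEALTH is read-only and RIBCL has no RAID verbs, so the common workaround is to do the RAID step from a small pre-install Linux (PXE image or iLO virtual media) where the Smart Array CLI can be scripted. A rough sketch, with the slot number taken from the embedded controller above and the drive addresses as placeholders in the port:box:bay form it reports:

      # clear whatever the controller currently has configured
      hpacucli ctrl slot=0 ld all delete forced

      # two RAID 1 logical drives plus a spare (drive addresses are illustrative)
      hpacucli ctrl slot=0 create type=ld drives=1I:1:1,1I:1:2 raid=1
      hpacucli ctrl slot=0 create type=ld drives=1I:1:3,1I:1:4 raid=1
      hpacucli ctrl slot=0 array A add spares=1I:1:5

      hpacucli ctrl slot=0 show config

    HP's deployment scripting toolkit ships an ACU scripting mode built on the same idea (capture an input file from a reference machine, replay it during deployment), if hand-writing the commands gets unwieldy.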

    Read the article

  • How to disable the RAID in x3400 M2

    - by BanKtsu
    Hi, I just want to disable the default RAID in my server, an IBM System x3400 M2 (7837-24X), which has 3 SAS disk drives. I want to make them a JBOD ("just a bunch of disks"), because I want to install CentOS on drive 0 and make the other two cache drives for a Squid server. I disabled the RAID in the BIOS: System Settings / Adapters and UEFI Drivers / LSI Logic Fusion MPT SAS Driver - PciRoot(0x0)/Pci(0x3,0x0)/Pci(0x0,0x0), then LSI Logic MPT Setup Utility / RAID Properties / Delete Array. Later I booted the CentOS live CD and installed the OS on drive 0, with the others mounted like this:

      LVM Volume Groups
        vg_proxyserver   139508
          lv_root         51200   /       ext4
          lv_home         84276   /home   ext4
          lv_swap          4032
      Hard Drives
        sdb (/dev/sdb)   free    140011
        sdc (/dev/sdc)   free    140011
        sdd (/dev/sdd)
          sdd1              500   /boot   ext4
          sdd2           139512   vg_proxyserver physical volume (LVM)

    But when I restart the server it gives me the error:

      Boot failed Hard Disk 0
      UEFI PXE PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/MAC(001A64B15130,0x0)
      ........ PXE-E18: Server response timeout.
      UEFI PXE PciRoot(0x0)/Pci(0x1,0x0)/Pci(0x0,0x0)/MAC(001A64B15132,0x0)
      ........ PXE-E18: Server response timeout.

    and the OS does not start. Is IBM forcing me to use RAID? Why?

    Read the article

  • Proliant RAID 1 Rebuild Questions

    - by Nicholas
    I have an HP ProLiant ML350 G5 server that experienced a power supply failure overnight. The power supply was replaced, but unfortunately the server got restarted with only 1 disk of the RAID 1 set plugged in (the RAID controller is the built-in E200i). The RAID BIOS then said on start-up that it had entered Interim Recovery Mode. I would have expected it to still start up with only the 1 drive; the BIOS, however, says that it cannot find a C: drive and enters a reboot loop polling the other boot devices. My first question is: is it normal behaviour not to start up on 1 disk? The second drive was then plugged in (all drives are OK) and the RAID BIOS started an automatic rebuild on that disk. This appears to be a background process, as there is no progress shown, but based on the flashing light it looks like it is working. My second question is: how long will this rebuild take (36GB 15K SAS drive)? I cannot see any error messages and it looks like it is rebuilding the drive OK, but the computer still will not start up; it still says during boot that the C: drive is not found. If I wait for the rebuild to finish, is it likely to fix itself and find the C: drive? Or is there some other problem here?

    Read the article

  • Dell Poweredge 2600 RAID Transfer How-to

    - by DCookie
    Help, please! Hardware: Dell PowerEdge 2600, PERC 4, SCSI drives: 1 standalone and 3 in a RAID 5 configuration. OS: Windows 2000 Server. In other words, a fairly old system. Anyway, we are in the process of taking over support for this site. The current tech wants out and is fading from view fast, so we need to solve this problem: the standalone disk (where the OS was) failed. We've replaced the disk and installed the OS, but need to know exactly how to proceed from here. I've never worked with a RAID system before, so I don't want to touch anything without knowing what I'm doing. We are not certain whether the site will want us to attempt to recover the array or wait for the old tech to become available. We have replaced the server with a temporary box and recovered most of the data from an online backup service. However, the other tech failed to back up part of the data, and the only copy of it is on this RAID array; hence our caution. We have poked around minimally in the boot-up PERC config utility, and it seems to me that that's where we'll need to be to reclaim the array. Another possibility is that there is some Dell software for the RAID controller we need to acquire. Can anyone provide clues as to how to proceed from here? Any help greatly appreciated.

    Read the article

  • Recommended motherboard with hardware RAID for Linux

    - by luison
    Hi. We want to set up an internal office server for testing jobs (LAMP), email and Samba, for only about 5-10 users. We are also considering starting to virtualize, initially with a base Ubuntu Server running Xen or the free VMware Server. Our current system runs on a Linux software RAID, which has worked great, but it has always been complicated to recover the boot sector when one of the drives fails, so I would now prefer a hardware RAID instead, ideally with some kind of software monitoring. For this reason, and considering we don't want to spend a fortune, I would appreciate any comments on the following options:
    - A motherboard with RAID and Linux support: which could you recommend?
    - A motherboard plus a hardware RAID card: Adaptec does not seem to have great Linux support; 3ware seems to have a controller we've used at a hosting company, but it's hard to find here in Spain.
    - An HP ProLiant-type basic server: which one?
    - Dell small servers: any good for Linux?
    Thanks in advance for any feedback.

    Read the article

  • How to configure SCSI hard drives and RAID for Poweredge 2850 web server

    - by Saul
    I'm trying to set up a PowerEdge 2850 as a web server, but as a server novice it's causing me some confusion. It's a virgin install, so there is no data to be lost yet, and I'd like to get the best arrangement for setting up Windows Server 2008. The box will run IIS, a mail server and an FTP server. The current physical arrangement of the hot-swap drives is:

      1 73GB    3 146GB   5 blank
      0 73GB    2 146GB   4 146GB (but flashes green, amber off)

    When I enter the PERC config screens on boot-up I've got:

      Raid Ch- 0   ID 0   ONLIN   A00-00
                      1   ONLIN   A00-01
                      2   ONLIN   A01-00
                      3   ONLIN   A01-01
                      4   HOTSP

    I think that drives 0 and 1 are set to RAID 1 and drives 2 and 3 are also set to RAID 1; certainly I can see 2 logical drives, both RAID 1, of 69880MB and 139900MB. Now, what I think I'm getting here is that the two 73GB drives mirror each other and the two 146GB drives mirror too, so by my noob thinking, if a drive fails I can pull it, insert a new one and it will reduplicate from its matching pair? I think the flashing amber probably indicates a failing drive in slot 4; should that just be binned? What confuses me, coming from a home-user XP background, is that when I load up Windows Server 2008, under My Computer I only see a C: drive of about 70GB capacity, i.e. where's the 146GB drive? Any advice appreciated.

    Read the article

  • Extend RAID 1 (HP SmartArray P410i) running Linux

    - by Oliver
    I took over a fairly simple server setup with the following RAID 1 config running Ubuntu 11.10 (kernel 3.0.0-12-server x86_64):

      => ctrl all show config

      Smart Array P410i in Slot 0 (Embedded)    (sn: removed)

        array A (SAS, Unused Space: 1335535 MB)
          logicaldrive 1 (279.4 GB, RAID 1, OK)
          physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 1 TB, OK)
          physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 1 TB, OK)

    Initially there were two 300GB disks that got replaced by 1TB disks, and I now have to extend the logical volume to use that extra space. However, when trying to do so I get the following warning:

      => ctrl slot=0 ld 1 modify size=max

      Warning: Extension may not be supported on certain operating systems.
               Performing extension on these operating systems can cause data
               to become inaccessible. See ACU documentation for details.
               Continue? (y/n)

    Is it safe to say yes, or am I at risk of corrupting the file system / losing data? Rearranging and extending the file system afterwards shouldn't be an issue, as I can take the server offline and boot from a gparted live disk. Here's the config of the RAID controller in use:

      => ctrl all show detail

      Smart Array P410i in Slot 0 (Embedded)
        Bus Interface: PCI
        Slot: 0
        Serial Number: removed
        RAID 6 (ADG) Status: Disabled
        Controller Status: OK
        Hardware Revision: Rev C
        Firmware Version: 5.12
        Rebuild Priority: Medium
        Expand Priority: Medium
        Surface Scan Delay: 15 secs
        Surface Scan Mode: Idle
        Wait for Cache Room: Disabled
        Surface Analysis Inconsistency Notification: Disabled
        Post Prompt Timeout: 0 secs
        Cache Board Present: False
        Drive Write Cache: Disabled
        SATA NCQ Supported: True

    And the partition table:

      Number  Start   End     Size    Type      File system      Flags
       1      1049kB  274GB   274GB   primary   ext4             boot
       2      274GB   300GB   25.8GB  extended
       5      274GB   300GB   25.8GB  logical   linux-swap(v1)
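
    For reference, a hedged sketch of the whole sequence as I understand it: the controller only grows the logical drive, and the partition and filesystem still have to be grown separately afterwards (offline, e.g. from the gparted live disk mentioned above):

      # 1. grow the logical drive onto the new 1 TB disks ('forced' skips the interactive warning)
      hpacucli ctrl slot=0 ld 1 modify size=max forced
      hpacucli ctrl slot=0 ld 1 show              # wait for any transformation/parity task to finish

      # 2. from the live environment: move/resize the extended (swap) partition that sits at the
      #    end of the old 300GB layout, then grow partition 1 (gparted does both graphically)

      # 3. then grow the filesystem into the enlarged partition
      e2fsck -f /dev/sda1
      resize2fs /dev/sda1

    As I read it, the warning is aimed at operating systems that cannot cope with a block device growing underneath a mounted filesystem; doing the partition and filesystem work offline sidesteps that, but a verified backup first is still the sane precaution.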

    Read the article
