Search Results

Search found 44026 results on 1762 pages for 'raid question'.


  • Installing raid controller forces reinstall of Windows Server 2008

    - by Tyler
    So, I've tried two different RAID controllers with external SATA connections on my Server 2008 machine. I can install the hardware, boot into Windows, install the drivers, and reboot again, no problems. However, as soon as I try to use eSATA-connected drives and reboot, something happens to the Windows install and I can no longer boot into Windows. I tried repairing from the command line, and the end result is that the repair console tells me I have 0 Windows installations (?). I end up having no choice but to reinstall Windows to get back on track. I must be doing something fundamentally wrong here, but I don't know what :(
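
    A hedged starting point, assuming the controller's BIOS is clobbering the boot records rather than the OS files themselves: the stock bootrec tool in the Windows Recovery Environment can rescan for installations and rebuild the boot store.

      REM from the Windows RE command prompt (a sketch, not a guaranteed fix)
      bootrec /ScanOs        REM scan all disks for Windows installations
      bootrec /RebuildBcd    REM rebuild the boot configuration data store
      bootrec /FixMbr        REM rewrite the MBR boot code without touching the partition table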

    Read the article

  • Access RAID configuration in x3550M2

    - by Mike B
    I'm trying to configure RAID on an IBM x3550 M2 server. I can't find any message at boot about a hotkey to access the configuration utility, and I couldn't find anything in the BIOS either. The card is an onboard LSI 1068e. I can't find the original CDs shipped with the server, but I downloaded ServerGuide 9.21 from the IBM web site; booting with that, I only get stuck at "Windows loading". I tried an 8.? version from an x3550 (M1), but it was even less useful. Although I do not consider myself an expert, I'm starting to feel like a newbie luser. Any help?

    Read the article

  • Windows XP 32-bit + RocketRaid 622 + 4 x 3TB = not quite a RAID setup

    - by gmoney
    I'm looking to make a 6TB RAID 10 array from my new pile of drives under Windows XP 32-bit; they are only for auxiliary storage. After adding all the drives to an array and initializing them, XP sees only a fraction of the storage: 2TB. I'm assuming this has to do with MBR vs. GPT. Is making a series of 2TB volumes and then spanning them my only solution? Most questions online deal with booting from this kind of setup, but I'm just using the drives as extra storage. Hardware: 4 x 3TB Hitachi Deskstars + RocketRAID 622 + Sans Digital TR8M TowerRAID. The array is connected via eSATA.
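
    For context on the 2TB wall: MBR addresses sectors with 32 bits, so with 512-byte sectors a single MBR disk tops out at 2 TiB, and 32-bit XP cannot use GPT disks at all, so converting the array to GPT is not an option here. A quick way to confirm what XP actually sees (total size and unallocated space) is diskpart:

      diskpart
      DISKPART> list disk        (total size and free/unallocated space per disk)
      DISKPART> select disk 1    (assumes the RAID volume shows up as disk 1)
      DISKPART> detail disk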

    Read the article

  • What are the recognized ways to increase the size of the RAID array online/offline?

    - by user149509
    Is it possible, in theory, to increase the size of a RAID array of any level just by adding new drive(s)? The variant "back up the whole data set - delete the old array - add/replace disks - create a new array - restore the data" is obvious, so what are the other options? Does it depend only on the RAID level, only on the implementation of the RAID controller, or on both? Does adding new disks to a striped array necessarily lead to rebuilding the array, with the stripes redistributed onto the new drives? What steps are needed to increase the size of a RAID array in the online and offline scenarios? RAID-5 and RAID-10 are especially interesting. I would like to see the big picture.
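
    As one concrete illustration (Linux software RAID only; every hardware controller has its own procedure), mdadm can grow a RAID5 online by adding a disk and reshaping, after which the filesystem is enlarged separately. A sketch, assuming an existing 4-disk RAID5 at /dev/md0 and a new disk /dev/sde1:

      mdadm --add /dev/md0 /dev/sde1            # add the new disk as a spare
      mdadm --grow /dev/md0 --raid-devices=5    # reshape: redistribute stripes across 5 disks
      # once /proc/mdstat shows the reshape has finished, grow the filesystem (ext3/ext4 shown):
      resize2fs /dev/md0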

    Read the article

  • NVidia raid 5 array spooling sounds and delay

    - by Chase B. Gale
    I've had a RAID 5 array set up with three 2TB WD Green drives for about 3 years now. Starting last week, when I access the array for the first time, I hear a loud drive spooling sound and experience a ~5 second delay before being able to access/save files. This happens when I haven't used the array for some time (about an hour), and once it has occurred it doesn't happen again as long as I keep accessing the array. I've run SMART scans on all the drives and they come back a-ok. What's causing this? Is my array getting close to death?
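
    A hedged guess rather than a diagnosis: this pattern (only after roughly an hour idle, then fine while in use) usually points to the drives spinning down and taking a few seconds to spin back up, not to imminent failure. WD Greens are also known for aggressive head parking, so the SMART load-cycle counter is worth watching (assumes smartmontools; repeat per drive):

      smartctl -A /dev/sda | grep -i -e load_cycle -e start_stop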

    Read the article

  • Windows shows incorrect free space on Raid 10 volume

    - by Adenverd
    I have four 1TB hard drives in a RAID 1+0 configuration, so theoretically I should have ~2TB of available space. Windows says the drive has a total size of 1.81 TB, which I'm fine with. As far as files on the volume go, I used WinDirStat to determine that I have 552.8GB of files on the volume. That means I should have somewhere around 1.3TB of free space at minimum, yet Windows shows the drive as having only 492GB free. Are there hidden files somewhere that I can't see (I have show hidden files/folders turned on)? Does Windows not recognize that old files have been deleted? Is there any way to correct this problem?
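
    One common culprit, offered as a hedged guess: Volume Shadow Copies (System Restore / Previous Versions) consume space that WinDirStat cannot see. An elevated command prompt can show and cap that usage (assumes the array is drive D:):

      vssadmin list shadowstorage
      vssadmin resize shadowstorage /for=D: /on=D: /maxsize=20GB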

    Read the article

  • LSI RAID monitor reports "Consistency Check inconsistency logging disabled"

    - by carlpett
    I have a server with an LSI MegaRAID 9261-8i controller. Recently I started getting alerts like this one:

      Controller ID: 1
      Consistency Check inconsistency logging disabled, too many inconsistencies on VD: 0
      Generated on: Sat May 12 04:06:40 2012
      SYSTEM DETAILS---
      IP Address: 192.168.1.29
      OS Name: Windows 7 x64
      OS Version: 6.01
      Driver Name: megasas.sys
      Driver Version: 4.5.1.64
      IMAGE DETAILS---
      BIOS Version: 2.120.33-1197
      Firmware Package Version: 12.12.0-0045
      Firmware Version: 3.21.00_4.11.05.00_0x05000000

    VD 0 is a RAID mirror containing the system disk. I have searched and read, but cannot find any trace of how to actually do anything about this. I tried running a scandisk, but it did not find anything (as I expected, since scandisk reads the disks as exposed by the controller, right?). MegaRAID Storage Manager does not, as far as I can see, have any options for checking or fixing physical disks; the program claims the VD is "healthy", and both disks have an error count of 0. Also a bit strange are the system details in the message: the IP address is associated with the RAS (dial-in) interface, and the OS should be Windows SBS 2011. Has anyone else experienced this before? What can be done?
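
    A hedged avenue to try (assumes the MegaCli command-line utility, which drives the same controllers as MegaRAID Storage Manager, and adapter index 0): run a consistency check on the VD directly and pull the controller's event log to see what it actually flagged.

      MegaCli64 -LDCC -Start -L0 -a0                          # start a consistency check on VD 0
      MegaCli64 -LDCC -ShowProg -L0 -a0                       # watch its progress
      MegaCli64 -AdpEventLog -GetEvents -f events.log -aALL   # dump the event log, inconsistency records included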

    Read the article

  • Problems migrating software RAID 5 to new server (linux)

    - by leleu
    I have a CentOS setup with software RAID5 that holds my data. Well, the server died, so I bought another box to migrate my drives to. Only thing is, I cannot get the RAID array rebuilt (I'm not even sure it needs rebuilding; it might just need the /dev/md0 mapping created... but I don't even know how to determine what I need!). Some details: software RAID5 built using mdadm; 4 x 250GB drives (2 are SATA, 2 are EIDE -- would this matter? It worked fine in the other box...); the latest CentOS distro. I've got a decent amount of experience with standard Linux stuff, but the hardware-level stuff runs me in circles. I've spent some time googling and elsewhere here on SF, so please be kind about my newbie questions :). My question is this: how can I diagnose the problem? For all I know, I'm using the wrong device blocks when I try to rebuild the array, but I can't find the command to display only the devices that have some physical attachment. Is there some simple way to run mdadm, have it scan over all my physical drives, and say "hey, drives 2,5,6,7 are a software array, want me to mount it?" I basically just took the drives from my old box and put them into my new one. They show up in the BIOS. What steps do I need to take in order to get the array up, running, and mounted? Thanks in advance!
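
    That scan exists almost verbatim; a sketch, to be run as root on the new box (the mount point is an assumption):

      mdadm --examine --scan      # read the RAID superblocks off every partition and print what they describe
      mdadm --assemble --scan     # assemble any arrays those superblocks describe
      cat /proc/mdstat            # confirm the array came up, e.g. as /dev/md0
      mount /dev/md0 /mnt/data    # substitute your own mount point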

    Read the article

  • Resize a RAID 1 volume on OSX Snow Leopard - how? (Note: software raid)

    - by Emmel
    I've scoured the Internet in search of an answer to this question, and as usual with OSX-related topics, I often don't find any deep-dive technical explanation sufficient to feel confident doing dangerous things. Here is my question: I have a Mac Pro running OSX 10.6.2. As my main root/boot disk I have a RAID 1 volume called "Mirror1", comprised of two 1 TB disks. Mirror1, however, is fixed at 640 GB. That's because I originally took a 640GB disk, bought a terabyte disk, mirrored it (using diskutil appleraid enable...), and when it synced I removed the 640GB disk, replaced it with a second 1 TB disk, and synced again. Voila! A single 640 GB disk replaced by two 1 TB disks in a mirror... Actually, no. There's still something missing from the equation: Mirror1 needs to be expanded from 640GB to 1 TB to match the partition sizes on each of those disks. How do I do this? Perhaps the diskutil output will help:

      -> diskutil list
      /dev/disk0
        #:   TYPE                    NAME         SIZE       IDENTIFIER
        0:   GUID_partition_scheme                *1.0 TB    disk0
        1:   EFI                                  209.7 MB   disk0s1
        2:   Apple_RAID                           999.9 GB   disk0s2
        3:   Apple_Boot              Boot OSX     134.2 MB   disk0s3
      /dev/disk1
        #:   TYPE                    NAME         SIZE       IDENTIFIER
        0:   GUID_partition_scheme                *1.0 TB    disk1
        1:   EFI                                  209.7 MB   disk1s1
        2:   Apple_RAID                           999.9 GB   disk1s2
        3:   Apple_Boot              Boot OSX     134.2 MB   disk1s3
      /dev/disk2
        #:   TYPE                    NAME         SIZE       IDENTIFIER
        0:   GUID_partition_scheme                *640.1 GB  disk2
        1:   EFI                                  209.7 MB   disk2s1
        2:   Apple_HFS               Mac Disk 2   536.7 GB   disk2s2
        3:   Microsoft Basic Data    BOOTCAMP     103.1 GB   disk2s3
      /dev/disk3
        #:   TYPE                    NAME         SIZE       IDENTIFIER
        0:   Apple_HFS               Mirror1      *639.8 GB  disk3

      -> diskutil appleraid list
      AppleRAID sets (1 found)
      ===============================================================================
      Name:         Macintosh HD
      Unique ID:    1953F864-B474-4EB6-8E69-41834EBD0247
      Type:         Mirror
      Status:       Online
      Size:         639.8 GB (639791038464 Bytes)
      Rebuild:      manual
      Device Node:  disk3
      -------------------------------------------------------------------------------
      #  Device Node  UUID                                  Status
      -------------------------------------------------------------------------------
      0  disk1s2      25109BAE-5697-40EA-B612-0217851444F7  Online
      1  disk0s2      11B83AB0-8148-4DB6-8761-DEF08C855F8D  Online
      ===============================================================================

    Thanks in advance.
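
    A hedged pointer, not a verified procedure: before anything destructive, ask diskutil what a resize of the set device would even allow. resizeVolume and its "limits" query are standard diskutil; whether Snow Leopard will apply them to an AppleRAID set device is an assumption to verify against man diskutil first.

      diskutil resizeVolume disk3 limits   # read-only query: current, minimum, and maximum sizes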

    Read the article

  • Formula to calculate probability of unrecoverable read error during RAID rebuild

    - by OlafM
    I need to compare the reliability of different RAID systems with either consumer or enterprise drives. The formula for the probability of hitting an error during a rebuild, ignoring mechanical problems, is simple: error_probability = 1 - (1 - per_bit_error_rate)^bits_read. With 3 TB drives I get:
    - a 38% probability of experiencing a URE (unrecoverable read error) for a 2+1-disk RAID5 (4.7% for enterprise drives)
    - 21% for a RAID1 (2.4% for enterprise drives)
    - a 51% probability of error during recovery for the 3+1 RAID5 often used by users of SOHO products like Synology's.
    Most people don't know about this. Calculating the error for single-disk tolerance is easy; my question concerns systems tolerant of multiple disk failures (RAID6/Z2, RAIDZ3, and RAID1 with multiple disks). If only the first disk is used for the rebuild and the second one is read again from the beginning in case of a URE, then the error probability is the one calculated above, squared (14.5% for consumer RAID5 2+1, 4.5% for consumer RAID1 1+2). However, I suppose (at least in ZFS, which has full checksums!) that the second parity/available disk is read only where needed, meaning that only a few sectors are involved: how many UREs can possibly happen on the first disk? Not many, otherwise the error probability for single-disk-tolerance systems would skyrocket even more than I calculated. If I'm correct, a second parity disk would practically lower the risk to extremely low values. Am I correct?
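
    The quoted numbers do check out against the formula (a quick sanity check, assuming a URE rate of 1e-14 per bit for consumer drives): a 2+1 RAID5 rebuild must read the two surviving 3 TB disks, i.e. 6e12 bytes = 4.8e13 bits, while a RAID1 rebuild reads one disk, 2.4e13 bits.

      awk 'BEGIN { printf "%.1f%%\n", 100 * (1 - (1 - 1e-14)^4.8e13) }'   # prints 38.1% -- the RAID5 2+1 figure
      awk 'BEGIN { printf "%.1f%%\n", 100 * (1 - (1 - 1e-14)^2.4e13) }'   # prints 21.3% -- the RAID1 figure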

    Read the article

  • How to choose the most optimal RAID settings on PE2950

    - by javano
    I have some Dell PowerEdge 2950s with 4 x 15k 150GB Cheetah SAS drives in them. They are going to be VM hosts, CentOS running ESXi with Windows Server 2k8 guests. Some guests will be hosting IIS servers, others MSSQL servers. I am trying to set the RAID virtual disk settings and can't decide which is more optimal given this situation. Read Policy: out of Read-Ahead, No-Read-Ahead, and Adaptive Read-Ahead, the default is Read-Ahead. I will be making large sequential writes initially, writing out blank images for virtual machine hard drives (let's say 30GB from /dev/zero, for example), so Read-Ahead seems good at first. But within the virtual machines, reads could be random from anywhere within their file systems, as they are IIS and MSSQL servers, so perhaps No-Read-Ahead is a better idea? Now I think Adaptive Read-Ahead would be the better compromise, but I don't know much about this option; how does it compare in performance to the others? Write Policy: write-back caching or write-through caching; the default is write-back caching. Write-through caching is safer than the default write-back caching, but at a performance expense. My thinking here is that in the event of power loss, for example, it seems more likely in my head (this is why I need some clarification!) that damage will occur to a guest VM with write-back caching enabled, so I should favour write-through? I have searched around and there is obviously no definitive answer, so I would like to find out what is best for my situation.
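
    For reference, a sketch of how these policies can be inspected and changed without rebuilding the array (assumes Dell OpenManage Server Administrator is installed and that the controller and vdisk IDs are both 0):

      omreport storage vdisk controller=0   # show the current read/write policy per virtual disk
      omconfig storage vdisk action=changepolicy controller=0 vdisk=0 readpolicy=ara   # ara = adaptive read-ahead
      omconfig storage vdisk action=changepolicy controller=0 vdisk=0 writepolicy=wt   # wt = write-through, wb = write-back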

    Read the article

  • effective back-up using Raid / Win7 back-up

    - by Job
    I have a stand-alone PC system with two 2 TB hard discs, one of which is configured as RAID1, i.e. mirroring. The operational drive is partitioned. I use an external 1 TB hard disc for back-up, using the Windows 7 back-up facility; it is swapped weekly and stored on other premises. I back up all partitions AND allow a system back-up. All application software is on the C: partition. Questions:
    1. How can I see whether RAID1 is working, i.e. doing its job? All I see now is a status message in the start-up procedure saying its status is normal.
    2. How can I see used or available space on the RAID1 drive?
    3. The Win7 backup allows only one schedule as far as I can see. I want daily back-ups of data, but due to the single schedule I am forced to do the time-consuming system back-up and C: back-up as well. Is there a way to have two schedules, allowing a frequent (daily) data back-up and a weekly system back-up including the C: drive? (A sketch follows below.) Of course it can be forced by hand, but I am likely to forget that.
    I am not the programming type of person, so I'm looking for simple and controllable solutions. Thank you - any help is appreciated.
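
    For question 3, a hedged sketch: Windows 7's wbadmin can run a backup outside the control panel's single schedule, so a second Task Scheduler job can cover the daily data-only run (assumes the data partition is D: and the external disc is E:; run from an elevated prompt):

      wbadmin start backup -backupTarget:E: -include:D: -quiet    REM test the data-only backup once by hand
      schtasks /create /tn "DailyDataBackup" /sc daily /st 22:00 /ru SYSTEM /tr "wbadmin start backup -backupTarget:E: -include:D: -quiet"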

    Read the article

  • mkfs Operation Takes Very Long on Linux Software Raid 5

    - by Elmar Weber
    I've set up a Linux software RAID level 5 consisting of 4 x 2 TB disks. The array was created with a 64k stripe size and no other configuration parameters. After the initial rebuild I tried to create a filesystem, and this step takes very long (about half an hour or more). I tried to create both xfs and ext3 filesystems; both took a long time. With mkfs.ext3 I observed the following behaviour, which might be helpful: writing the inode tables runs fast until it reaches 1053 (~1 second), then it writes about 50, waits for two seconds, and then the next 50 are written (according to the console display); and when I try to cancel the operation with Control+C, it hangs for half a minute before it is really cancelled. The performance of the disks individually is very good: I've run bonnie++ on each one separately, with write/read values of around 95/110 MB/s, and even when I run bonnie++ on every drive in parallel the values are only reduced by about 10 MB/s. So I'm excluding hardware / I/O scheduling in general as a problem source. I tried different configuration parameters for stripe_cache_size and readahead size without success, but I don't think they are that relevant for the filesystem creation operation. The server details: Linux server 2.6.35-27-generic #48-Ubuntu SMP x86_64 GNU/Linux, mdadm v2.6.7.1. Does anyone have a suggestion on how to debug this further?
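
    Two hedged things to check while debugging: whether an md resync is still competing with mkfs, and whether telling ext3 the RAID geometry helps it batch writes into full stripes. With a 64k chunk and 4k blocks, stride = 16, and with 3 data disks in a 4-disk RAID5, stripe-width = 48:

      cat /proc/mdstat                                   # an ongoing resync/reshape will slow mkfs to a crawl
      mkfs.ext3 -E stride=16,stripe-width=48 /dev/md0    # geometry hints for a 4-disk RAID5 with 64k chunks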

    Read the article

  • What does "single-bit ECC errors were detected on the RAID controller" mean?

    - by jsp
    I have a Dell T7600 with a PERC H710P RAID controller and 4 attached 3TB drives. Over the past few months the RAID controller has been intermittently reporting errors on boot: "no boot device found", "adapter at baseport is not responding", and disks frequently reported as missing or failed. I have since replaced the RAID controller, the 4 hard drives, and finally the system's motherboard. After replacing the motherboard and rebooting a few times, I got the error: Single bit ECC errors were detected on the RAID controller. Please contact technical support to resolve this issue. After rebooting about 20 more times, I haven't seen the ECC error again. The system seems otherwise OK, except that the disk fans will sometimes start blowing at full blast while the system is sitting completely idle, and they won't stop until I reboot. Are the ECC errors in memory on the RAID controller? Or does the RAID controller map into system memory, and the ECC errors are really in system memory? Or are the ECC errors in the 1GB cache that resides on the RAID controller?
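
    On the narrow question, a hedged note: on LSI-based cards such as the PERC, this message is generally understood to refer to the controller's own cache memory, and the firmware keeps error counters that can be read out (assumes the MegaCli utility, which also manages PERC controllers):

      MegaCli64 -AdpAllInfo -aALL | grep -i -e "ecc" -e "memory.*error"   # correctable/uncorrectable memory error counters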

    Read the article

  • Dell Poweredge 1950 with Perc 5i keeps losing raid config -> "Foreign Configuration Found"

    - by nosage
    The quick and dirty: the machine is a Dell PowerEdge 1950, dual quad-core Xeons, 8GB of RAM, and 2 x 2TB Seagate SATA drives (supposed to be RAID1) on a PERC 5/i RAID card. They are hot-swappable with a backplane. I can build the RAID fine, and after a little while an install of Server 08 R2 will blue-screen and restart. When it comes up, the RAID controller says "Foreign Configuration Found." When I go into the RAID configuration panel there is no RAID listed, but I can import the "foreign config" and the OS will boot up fine, until it blue-screens again after a little while. The issue is OS-independent. I have tried swapping RAID cards, swapping the RAM module on the RAID card, and swapping the RAID battery, all to no avail. It's almost as if there is a loose connection from the RAID card to the backplane: both disks get lost and the RAID card drops the config, yet it sees the disks fine when it boots back up. The RAID card uses a SAS cable to connect to the backplane, so I guess the next step is to replace that, but... then I might as well replace the backplane with a SAS-to-SATA breakout cable, but... then I need a way to power the disks. Sorry for the wall of text, but it would be great to get some thoughts from people who have worked with PERC RAID cards or PowerEdge servers with this type of issue before. Ironically, I want to get this system up and running so I can work on MCITP labs. Thank you for any/all help, and feel free to ask questions!
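
    A hedged diagnostic angle: the PERC 5/i is an LSI MegaRAID design, so the MegaCli utility can dump the controller's event log after a drop, which usually records exactly when and why the disks disappeared (the utility name and adapter index 0 are assumptions):

      MegaCli64 -AdpEventLog -GetEvents -f events.log -aALL   # full event history, disk removals included
      MegaCli64 -CfgForeign -Scan -a0                         # how many foreign configs the card is holding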

    Read the article

  • CentOS - mdadm raid1 drive won't mount to default location

    - by danny
    I'm running CentOS 5.5. The system, boot, swap, etc. are all on /dev/sda, and I have two identical single-partition drives, /dev/sdb1 and /dev/sdc1, configured in RAID1 (using mdadm). It was working fine (configured to mount to /mnt/data in the fstab file); then I recently let yum install a couple of automatic updates without paying attention to what they were, and now it doesn't work. The RAID itself is fine (dmesg shows it gets loaded correctly). mdstat shows:

      # cat /proc/mdstat
      Personalities : [raid1]
      md0 : active raid1 sdc1[1] sdb1[0]
            XXXX blocks [2/2] [UU]
      unused devices: <none>

    Additionally, I can mount it anywhere other than its default directory; i.e. the following works, and I can read data off the drives:

      # mount /dev/md0 /mnt/data2
      EXT3-fs warning: mounting fs with errors, running e2fsck is recommended

    But when I run the following I get:

      # mount -a
      mount: /dev/sdb1 already mounted or /mnt/data busy

    It says nothing is mounted when I try to umount /dev/sdb1 or umount /mnt/data, so I assume it's the second of those errors. However, lsof | grep mnt shows nothing. The weird thing is that I can save files in /mnt/data. So something is obviously mounted there, but when I try to umount it I get the error that nothing is mounted. /etc/mtab doesn't mention any of the partitions or files I am trying to work with, and fstab just has that one line I mentioned above that is supposed to mount my RAID partition. Again, it was all working fine until the update. On Google I've found a few things about dmraid interfering with mdadm after an update, but I yum remove'd dmraid and rebooted, and it didn't help. I'm really confused and need to get this working to get on with my work!
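
    Two hedged checks that often untangle the "already mounted or busy" message: ask the kernel (rather than /etc/mtab) what is really mounted, and clear the recorded filesystem errors that the EXT3-fs warning mentions (run e2fsck only while /dev/md0 is unmounted):

      grep -e md0 -e mnt /proc/mounts    # the kernel's view of what is mounted, independent of /etc/mtab
      fuser -vm /mnt/data                # whatever is holding the mount point busy
      e2fsck -f /dev/md0                 # fix the recorded ext3 errors first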

    Read the article

  • Western Digital My Book not recognized by WD software

    - by Kari
    A few years ago I bought a WD My Book Pro 2. It worked fine for a while, then one of the drives failed and I sent it back to be replaced under warranty. I never got around to setting up the new one when I got it back. I finally ran out of room on my internal drive, so I tried to use the external - no go. Both drives spin up but aren't recognized by either Disk Utility (Mac) or the WD Drive Manager. I tried on a PC as well, with fresh software. Then I pulled the drives out of the enclosure (the warranty has already expired) and plugged them straight into the PC. Both were recognized and worked 100% in RAID0. The BIOS recognizes either disk as functional; Windows only sees them when both are connected, due to the RAID, which I can't change without the WD software. The drives that were returned to me are the "Green" drives, which I've read are NOT recommended for RAID. Is it possible that this is interfering with them being read externally? Any other ideas? My main computer is a laptop, so using them internally isn't an option :(

    Read the article

  • Setup access to SAS RAID drives with NTFS partitions on CentOS Machine

    - by Quanano
    We have a Dell PowerEdge 2900 system with an Adaptec 39320A SCSI controller card and 4 SAS hard drives attached, with NTFS partitions on them. We installed CentOS on the other RAID array, on a different controller, and it is working fine. We are now trying to access the drives described above, but they are not shown in /dev as sdb, etc. sda is the drive we installed CentOS on, and it has sda1, sda2, sda3, etc. The CDROM has been picked up as well. If I scan for SCSI devices, the PERC and Adaptec controllers are both found. sg0 is the CDROM and sg2 is the CentOS install; I think sg1 is the other drive, but I cannot see any way to mount its partitions, as only the drive itself is listed in /dev. Thanks.

    EXTRA INFO (these are all from the install HDD, not the additional hard drives):

      # fdisk -l
      Disk /dev/sda: 72.7 GB, 72746008576 bytes
      255 heads, 63 sectors/track, 8844 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x11e3119f
         Device Boot      Start       End      Blocks   Id  System
      /dev/sda1   *           1          64      512000  83  Linux
      Partition 1 does not end on cylinder boundary.
      /dev/sda2              64        8845    70528000  8e  Linux LVM

      Disk /dev/mapper/vg_lal2server-lv_root: 34.4 GB, 34431041536 bytes
      255 heads, 63 sectors/track, 4186 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000
      Disk /dev/mapper/vg_lal2server-lv_root doesn't contain a valid partition table

      Disk /dev/mapper/vg_lal2server-lv_swap: 21.1 GB, 21139292160 bytes
      255 heads, 63 sectors/track, 2570 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000
      Disk /dev/mapper/vg_lal2server-lv_swap doesn't contain a valid partition table

      Disk /dev/mapper/vg_lal2server-lv_home: 16.6 GB, 16647192576 bytes
      255 heads, 63 sectors/track, 2023 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000
      Disk /dev/mapper/vg_lal2server-lv_home doesn't contain a valid partition table

      # modprobe a320raid
      FATAL: Module a320raid not found.

      # lsscsi -v
      [0:0:0:0]   cd/dvd   TSSTcorp CDRWDVD TS-H492C   DE02   /dev/sr0
        dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0]
      [4:0:10:0]  enclosu  DP BACKPLANE                1.05   -
        dir: /sys/bus/scsi/devices/4:0:10:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:0:10/4:0:10:0]
      [4:2:0:0]   disk     DELL PERC 5/i               1.03   /dev/sda
        dir: /sys/bus/scsi/devices/4:2:0:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:2:0/4:2:0:0]

      # lsmod
      Module                  Size  Used by
      fuse                   66285  0
      des_generic            16604  0
      ecb                     2209  0
      md4                     3461  0
      nls_utf8                1455  0
      cifs                  278370  0
      autofs4                26888  4
      ipt_REJECT              2383  0
      ip6t_REJECT             4628  2
      nf_conntrack_ipv6       8748  2
      nf_defrag_ipv6         12182  1 nf_conntrack_ipv6
      xt_state                1492  2
      nf_conntrack           79453  2 nf_conntrack_ipv6,xt_state
      ip6table_filter         2889  1
      ip6_tables             19458  1 ip6table_filter
      ipv6                  322029  31 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
      bnx2                   79618  0
      ses                     6859  0
      enclosure               8395  1 ses
      dcdbas                  9219  0
      serio_raw               4818  0
      sg                     30124  0
      iTCO_wdt               13662  0
      iTCO_vendor_support     3088  1 iTCO_wdt
      i5000_edac              8867  0
      edac_core              46773  3 i5000_edac
      i5k_amb                 5105  0
      shpchp                 33482  0
      ext4                  364410  3
      mbcache                 8144  1 ext4
      jbd2                   88738  1 ext4
      sd_mod                 39488  3
      crc_t10dif              1541  1 sd_mod
      sr_mod                 16228  0
      cdrom                  39771  1 sr_mod
      megaraid_sas           77090  2
      aic79xx               129492  0
      scsi_transport_spi     26151  1 aic79xx
      pata_acpi               3701  0
      ata_generic             3837  0
      ata_piix               22846  0
      radeon               1023359  1
      ttm                    70328  1 radeon
      drm_kms_helper         33236  1 radeon
      drm                   230675  3 radeon,ttm,drm_kms_helper
      i2c_algo_bit            5762  1 radeon
      i2c_core               31276  4 radeon,drm_kms_helper,drm,i2c_algo_bit
      dm_mirror              14101  0
      dm_region_hash         12170  1 dm_mirror
      dm_log                 10122  2 dm_mirror,dm_region_hash
      dm_mod                 81500  11 dm_mirror,dm_log
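
    A hedged next step, given that lsmod shows the aic79xx driver loaded but lsscsi lists nothing behind it: force a rescan of the Adaptec's SCSI hosts and watch dmesg (the host number is an assumption; list /sys/class/scsi_host to find the ones belonging to the aic79xx driver):

      ls /sys/class/scsi_host/                          # identify which hostN belong to the Adaptec card
      echo "- - -" > /sys/class/scsi_host/host1/scan    # wildcard rescan of channel/target/lun on that host
      dmesg | tail                                      # newly found disks appear here as sdb, sdc, ...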

    Read the article

  • How can I fix my corrupted RAID1 ext4 partition on a Synology DS212 NAS?

    - by Neil
    I have two identical 3 TB disks that were in a RAID1 array, where one disk crashed. I replaced the failed disk, but not before the RAID partitions got messed up. I need to figure out how to restore the RAID array and get at my ext4 partition. Here are the properties of the surviving disk:

      # fdisk -l /dev/sda
      fdisk: device has more than 2^32 sectors, can't use all of them
      Disk /dev/sda: 2199.0 GB, 2199023255040 bytes
      255 heads, 63 sectors/track, 267349 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
         Device Boot  Start     End     Blocks        Id  System
      /dev/sda1       1         267350  2147483647+   ee  EFI GPT

      # parted /dev/sda print
      Model: ATA ST3000DM001-9YN1 (scsi)
      Disk /dev/sda: 3001GB
      Sector size (logical/physical): 512B/512B
      Partition Table: gpt
      Disk Flags:
      Number  Start   End     Size    File system     Name  Flags
       1      131kB   2550MB  2550MB  ext4                  raid
       2      2550MB  4698MB  2147MB  linux-swap(v1)        raid
       5      4840MB  3001GB  2996GB                        raid

    I replaced the failed drive and cloned the surviving drive to it so I have something to work with. I cloned the drives with dd if=/dev/sdb of=/dev/sda conv=noerror bs=64M, and now /dev/sda and /dev/sdb are identical. Here is the RAID information:

      # cat /proc/mdstat
      Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
      md1 : active raid1 sdb2[1]
            2097088 blocks [2/1] [_U]
      md0 : active raid1 sdb1[1]
            2490176 blocks [2/1] [_U]
      unused devices: <none>

    It seems that md2 is missing. Here is what testdisk 6.14-WIP finds:

      Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63
      Current partition structure:
      Partition                  Start        End    Size in sectors
       1 P Linux Raid              256    4980735    4980480 [md0]
       2 P Linux Raid          4980736    9175039    4194304 [md1]
      Invalid RAID superblock
       5 P Linux Raid          9453280 5860519007 5851065728
       5 P Linux Raid          9453280 5860519007 5851065728

      # After a quick search
      Disk /dev/sda - 3000 GB / 2794 GiB - CHS 364801 255 63
      Partition                  Start        End    Size in sectors
       D MS Data                   256    4980607    4980352 [1.41.12-2197]
       D Linux Raid                256    4980735    4980480 [md0]
       D Linux Swap            4980736    9174895    4194160
       D Linux Raid            4980736    9175039    4194304 [md1]
      >P MS Data              9481056 5858437983 5848956928 [1.41.12-2228]

    And listing the files on the last partition in the list shows all of my files intact. What should I do?
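
    Before anything destructive, a hedged and cheap look at whatever md superblock survives on the big partition:

      mdadm --examine /dev/sda5                    # dump any md superblock present on the partition
      mdadm --assemble --run /dev/md2 /dev/sda5    # attempt a degraded assembly; it refuses cleanly if no superblock is found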

    Read the article

  • Allignment of ext3 partition on LVM RAID volume group

    - by John P
    I'm trying to add a partition on an LVM that resides on a RAID6 volume group, and fdisk is complaining that the partition does not reside on a physical sector boundary. My question is: how do you calculate the correct starting sector for a partition on an LVM? This partition will be formatted ext3. Would it be better to just format the LVM volume directly instead of creating a new partition?

      Disk /dev/dedvol/backup: 2199.0 GB, 2199023255552 bytes
      255 heads, 63 sectors/track, 267349 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 1048576 bytes / 8388608 bytes
      Disk identifier: 0x4e428f49
                   Device Boot  Start  End     Blocks       Id  System
      /dev/dedvol/backup1        63     267349  2146982827+  83  Linux
      Partition 1 does not start on physical sector boundary.

      # lvdisplay /dev/dedvol/backup
      --- Logical volume ---
      LV Name                /dev/dedvol/backup
      VG Name                dedvol
      LV UUID                OV2n5j-7LHb-exJL-t8dI-dU8A-2vxf-uIicCt
      LV Write Access        read/write
      LV Status              available
      # open                 0
      LV Size                2.00 TiB
      Current LE             524288
      Segments               1
      Allocation             inherit
      Read ahead sectors     auto
      - currently set to     32768
      Block device           253:1

      # vgdisplay dedvol
      --- Volume group ---
      VG Name                dedvol
      System ID
      Format                 lvm2
      Metadata Areas         1
      Metadata Sequence No   3
      VG Access              read/write
      VG Status              resizable
      MAX LV                 0
      Cur LV                 2
      Open LV                1
      Max PV                 0
      Cur PV                 1
      Act PV                 1
      VG Size                14.55 TiB
      PE Size                4.00 MiB
      Total PE               3815448
      Alloc PE / Size        3670016 / 14.00 TiB
      Free PE / Size         145432 / 568.09 GiB
      VG UUID                8fBcOk-aXGx-P3Qy-VVpJ-0zK1-fQgy-Cb691J
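
    A hedged sketch of the simpler route the question already suspects: skip the partition table entirely and mkfs the logical volume itself, which LVM already aligns. The reported I/O sizes (minimum 1048576, optimal 8388608 bytes) suggest a 1 MiB chunk across 8 data disks, which with 4 KiB blocks gives stride = 256 and stripe-width = 2048 (that interpretation is an assumption; check the controller's actual chunk size):

      mkfs.ext3 -E stride=256,stripe-width=2048 /dev/dedvol/backup
      # if a partition is truly required, start it on a multiple of the 8 MiB optimal I/O size,
      # e.g. sector 16384 (16384 * 512 bytes = 8 MiB)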

    Read the article

  • Will this RAID5 setup work (3TB Seagate Barracudas + Adaptec RAID 6405)?

    - by Slayer537
    As the title states, will this RAID combo work, and if not, what needs to be changed? Overall opinions would be most helpful. I currently run a small file server of about 5TB or so. I keep outgrowing it and need to build a RAID setup that will allow me to expand as needed. I am new to RAID setups, especially of the scale I have currently planned out, but I have been doing research for the past couple of weeks and have come up with a build. Ideally I'd have the setup completely built already, but I'd like to keep the total cost around $1k and can't afford to go above $1.5k, so unfortunately that's not an option. 2 of my current drives are 2TB WD Caviar Blacks; however, I have recently learned that due to the lack of TLER those drives are awful for any RAID setup other than 0 or 1. That being said, my third drive is a Seagate Barracuda 3TB (ST3000DM001), and I have found a RAID controller that states it supports it, so I'd like to use this same type of drive if possible. Have any of you had any experience using this drive or a similar one in a RAID5 configuration? The manufacturer states that it supports it, but knowing that it is not an enterprise drive, I am slightly concerned that it could drop out of the array. I would just go with enterprise drives, but those are about double the cost... Parts list:
    Storage rack: http://www.ebay.com/itm/SGI-3U-Media-Storage-Server-16-Hard-Drive-Bay-SATA-SAS-Expander-Omnistor-SE3016-/140735776937?pt=LH_DefaultDomain_0&hash=item20c48188a9
    3 more HDs (for now..): http://www.amazon.com/Seagate-Barracuda-3-5-Inch-Internal-ST3000DM001/dp/B005T3GRLY/ref=dp_return_2?ie=UTF8&n=172282&s=electronics
    Adaptec RAID 6405: http://www.newegg.com/Product/Product.aspx?Item=N82E16816103224 (compatibility sheet: http://download.adaptec.com/pdfs/compatibility_report/arc-sas_cr_03-27-12_series6.pdf)
    SAS expander cable: http://www.pc-pitstop.com/sas_cables_adapters/8887-2M.asp
    My plan is to install the RAID card in my computer and then route the SAS cable to the rack: set up a RAID5 on 3 drives, transfer my data over from my other drive, and then add that drive to the array. Eventually I'd like to get a 2U unit, run the file server on that, and move the RAID card over to it, but that will have to happen later on. Side note: the computer the card will be going into runs Windows 7 Pro with 24GB of DDR3-1600 and an i7-930.
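
    One hedged, checkable point, since TLER/ERC is the crux: many desktop Seagates expose their error-recovery timeout via SMART, so whether a given Barracuda can be capped to the usual 7-second RAID-friendly limit can be tested before committing to the array (assumes smartmontools):

      smartctl -l scterc /dev/sda         # read the current SCT error recovery control setting
      smartctl -l scterc,70,70 /dev/sda   # request 7.0s read/write timeouts; on desktop drives this does not persist across power cycles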

    Read the article

  • Hard drives indication with controller MegaRAID SAS 9261-8i on HP Proliant DL320e Gen8. Is it possible?

    - by ame
    Give me some advice, please. My situation: there is an HP ProLiant DL320e Gen8 server and a MegaRAID SAS 9261-8i RAID controller. I installed the controller into the server and reconnected the Mini-SAS cord from the block of hard drives to the controller, but now I don't get any hard disc indication on the server front panel. There is drive activity indication only while the server boots. The controller has a 2-pin connector (JT6B3, SAS Activity LED header), but where and how can I connect it? Thanx.

    Read the article
