Search Results

Search found 1864 results on 75 pages for 'raid 5'.

Page 15/75

  • mdadm auto grow raid

    - by johannes
    I have a RAID 0/1 on LVM logical volumes. I resized the logical volumes, and now I want to resize the RAID to use the complete logical volumes. This can be done with mdadm /dev/md? --grow -z newsize, but somehow I can't figure out how to calculate the newsize argument. Is there a way to tell mdadm to grow to the biggest possible size? If not, how do I calculate the biggest possible size of the RAID to use for the newsize argument?
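    For reference, mdadm's --grow accepts the special value max for mirrored/parity levels, which sidesteps the newsize calculation. A minimal sketch, assuming the array is /dev/md0 (substitute your md device):

      # Grow the array to the largest size the component devices
      # (here the resized LVM logical volumes) can accommodate
      mdadm --grow /dev/md0 --size=max

      # Check the resulting size
      mdadm --detail /dev/md0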

    Read the article

  • usb vs firewire for connecting two RAID 0 disks

    - by Arne
    I have a 2TB and a 4TB RAID 0 external drive (each contains two physical hard drives). Both have FW800, FW400, and USB ports. My MacBook Pro has one FW800 port and two USB ports. I want to copy data from the 4TB drive to the 2TB drive. Is it better to A - connect both directly to the laptop, one with USB and one with FW800, or B - connect the 4TB drive to the laptop with FW800 and the 2TB drive to the 4TB drive using a FW400 cable? Has anyone had problems daisy-chaining RAID 0 disks using FireWire? Thanks!

    Read the article

  • Is there any way to do 'software raid' without losing data?

    - by user1706582
    I say software RAID because that is a pretty tentative guess at what I actually want. I have two drives, both with stuff already on them, which I want to combine. If they weren't full of stuff (data, not a Windows installation) I would use software RAID to combine them into one big drive or make them into one partition. I could probably do this with some complicated referencing system, but really I just want to be able to keep saving things to X: without running out of space until both drives are full. Thanks in advance.

    Read the article

  • RAID 50 24Port Fast Writes Slow Reads - Ubuntu

    - by James
    What is going on here?! I am baffled.

      serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/dev/zero of=/Volumes/MercuryInternal/test/test.fs bs=4096k count=10000
      10000+0 records in
      10000+0 records out
      41943040000 bytes (42 GB) copied, 57.0948 s, 735 MB/s

      serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo dd if=/Volumes/MercuryInternal/test/test.fs of=/dev/null bs=4096k count=10000
      10000+0 records in
      10000+0 records out
      41943040000 bytes (42 GB) copied, 116.189 s, 361 MB/s

    Of note:
    - My RAID 50 is 3 sets of 8 disks - this might not be the best config for SPEED.
    - OS: Ubuntu 12.04.1 x64
    - Hardware RAID: RocketRAID 2782 - 24 Port Controller
    - Hard drive type: Seagate Barracuda ES.2 1TB
    - Drivers: v1.1 Open Source Linux Drivers

    So 24 x 1TB drives, partitioned using parted. The filesystem is ext4. The I/O scheduler WAS noop, but I have changed it to deadline with no apparent performance benefit/cost.

      serveradmin@FILESERVER:/Volumes/MercuryInternal/test$ sudo gdisk -l /dev/sdb
      GPT fdisk (gdisk) version 0.8.1
      Partition table scan:
        MBR: protective
        BSD: not present
        APM: not present
        GPT: present
      Found valid GPT with protective MBR; using GPT.
      Disk /dev/sdb: 41020686336 sectors, 19.1 TiB
      Logical sector size: 512 bytes
      Disk identifier (GUID): 95045EC6-6EAF-4072-9969-AC46A32E38C8
      Partition table holds up to 128 entries
      First usable sector is 34, last usable sector is 41020686302
      Partitions will be aligned on 2048-sector boundaries
      Total free space is 5062589 sectors (2.4 GiB)
      Number  Start (sector)  End (sector)  Size      Code  Name
      1       2048            41015625727   19.1 TiB  0700  primary

    To me this should be working fine. I can't think of anything that would be causing this other than fundamental driver errors. I can't seem to get much, if any, higher than 361 MB a second. Is this hitting the "SATA2" link speed, which it shouldn't, given it is a PCIe 2.0 card? Or maybe some caching quirk - I do have Write Back enabled. Does anyone have any suggestions? Tests for me to perform? If you require more information, I am happy to provide it! This is a video fileserver for editing machines, so we have a preference for FAST reads over writes. I was just expecting more from RAID 50 and 24 drives together...

    EDIT (hdparm results):

      serveradmin@FILESERVER:/Volumes/MercuryInternal$ sudo hdparm -Tt /dev/sdb
      /dev/sdb:
       Timing cached reads:   17458 MB in  2.00 seconds = 8735.50 MB/sec
       Timing buffered disk reads:  884 MB in  3.00 seconds = 294.32 MB/sec

    EDIT 2 (config details): Also, I am using a RAID block size of 256K. I was told a larger block size is better for larger files (in my case, large video files).

    EDIT 3: (Bonnie++ results - would love some guidance with this!)
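    One thing worth ruling out when benchmarking with dd is the Linux page cache, which can inflate the write figure and skew the read figure. A minimal sketch, assuming the same test path as above, that bypasses the cache with O_DIRECT and drops cached pages before the read pass:

      # Write test with the page cache bypassed
      sudo dd if=/dev/zero of=/Volumes/MercuryInternal/test/test.fs bs=4M count=10000 oflag=direct

      # Flush and drop cached pages, then read back with O_DIRECT
      sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'
      sudo dd if=/Volumes/MercuryInternal/test/test.fs of=/dev/null bs=4M count=10000 iflag=direct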

    Read the article

  • RAID P410i and P812 performance issues

    - by Alexey
    I'm having a lot of trouble with I/O performance on an HP DL360 server with two RAID controllers - a P410i and a P812 - Windows Server 2008, 36 GiB RAM and 16 x Intel Xeon X5550. The server runs a bunch of tasks producing heavy sequential I/O, and after about 20-30 minutes of intensive work the tasks look stuck, not using CPU and with plenty of free memory (so memory cannot be the bottleneck). The same tasks ran quite well on the older server (Windows Server 2003, 4 x Intel Xeon, 12 GiB RAM). RAID cache is present and the write-cache battery is installed. The cache is configured as 25% read-ahead / 75% write-back. The swap file resides on the logical disk served by the P410i; the other logical disks are on the P812. Can someone tell me what could be the cause of this? Is it a hardware problem or a misconfiguration?
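    To help rule out a cache or battery state issue, HP's Smart Array CLI can report the controller, cache and battery status. A hedged sketch, assuming the HP Array Configuration Utility CLI (hpacucli) is installed on the host; controller addressing is left generic:

      # Overall controller and battery status for all Smart Array controllers
      hpacucli ctrl all show status

      # Full details, including the "Cache Ratio" and accelerator/battery lines
      hpacucli ctrl all show config detail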

    Read the article

  • How to create Raid 10 with megacli

    - by Henno
    I have an OpenFiler storage server. Without installing Windows and MSM, I want to create a RAID 10 array from disks 2 to 21. I have already installed MegaCli successfully on OpenFiler, but I'm stuck figuring out the correct command line for creating a RAID 10 array. The documentation says the syntax for creating a RAID 10 is:

      MegaCli -CfgSpanAdd -r10 -Array0[E:S,E:S] -Array1[E:S,E:S] -aN

    My enclosure ID is 25, so:

      [root@linux-h5ut ~]# MegaCli -CfgSpanAdd -r10 -Array0[E25:S02,E25:S21] -Array1[E25:S02,E25:S21] WB Cached NoCachedBadBBU -a0
      Invalid input at or near token E

    I have googled high and low, but there doesn't seem to be any example of doing RAID 10 with MegaRAID (only the syntax). Can anyone explain what is wrong?
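    One likely reading of that error: in the documented syntax the letters E and S are placeholders for the enclosure and slot numbers, so they should not appear literally on the command line, and each -ArrayN span needs its own pair of physical disks. A hedged sketch of what the invocation might look like instead (enclosure 25 as above; slots 2-5 are placeholders only):

      # Enclosure:Slot pairs are written as plain numbers, e.g. 25:2,
      # and the two spans must reference different disks
      MegaCli -CfgSpanAdd -r10 -Array0[25:2,25:3] -Array1[25:4,25:5] WB Cached NoCachedBadBBU -a0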

    Read the article

  • What is "2LUN" mode in connection with RAID?

    - by naxa
    I've come across RAID products that also list a JBOD (just a bunch of disks) mode and a 2LUN mode. What the heck is 2LUN mode? I could not find a description; the closest thing seems to be LUN, 'logical unit number', but I don't get the 2LUN part.

    UPDATE 1: This is what Wikipedia has to say about JBOD: "JBOD (derived from 'just a bunch of disks'): an architecture involving multiple hard drives, while making them accessible either as independent hard drives, or as a combined (spanned) single logical volume with no actual RAID functionality." So JBOD can actually mean two different (albeit related) things. Guest's answer says 2LUN means no spanning. Does this suggest that 2LUN simply means the JBOD variant with no span?

    Read the article

  • High I/O latency with software RAID, LUKS encrypted and LVM partitioned KVM setup

    - by aef
    I found out that the performance problems with a Mumble server, which I described in a previous question, are caused by an I/O latency problem of unknown origin. As I have no idea what is causing this or how to debug it further, I'm asking for your ideas on the topic.

    I'm running a Hetzner EX4S root server as KVM hypervisor. The server is running Debian Wheezy Beta 4 and KVM virtualisation is utilized through LibVirt. The server has two different 3TB hard drives, as one of the hard drives was replaced after S.M.A.R.T. errors were reported. The first hard disk is a Seagate Barracuda XT ST33000651AS (512 bytes logical, 4096 bytes physical sector size), the other one a Seagate Barracuda 7200.14 (AF) ST3000DM001-9YN166 (512 bytes logical and physical sector size). There are two Linux software RAID1 devices: one for the unencrypted boot partition and one as container for the encrypted rest, using both hard drives. Inside the latter RAID device lies an AES-encrypted LUKS container. Inside the LUKS container there is an LVM physical volume. The hypervisor's VFS is split on three logical volumes on the described LVM physical volume: one for /, one for /home and one for swap.

    Here is a diagram of the block device configuration stack:

      sda (Physical HDD)
      - md0 (RAID1)
      - md1 (RAID1)
      sdb (Physical HDD)
      - md0 (RAID1)
      - md1 (RAID1)
      md0 (Boot RAID)
      - ext4 (/boot)
      md1 (Data RAID)
      - LUKS container
        - LVM Physical volume
          - LVM volume hypervisor-root
          - LVM volume hypervisor-home
          - LVM volume hypervisor-swap
          - … (Virtual machine volumes)

    The guest systems (virtual machines) are mostly running Debian Wheezy Beta 4 too. We have one additional Ubuntu Precise instance. They get their block devices from the LVM physical volume, too. The volumes are accessed through Virtio drivers in native writethrough mode. The I/O scheduler (elevator) on both the hypervisor and the guest systems is set to deadline instead of the default cfq, as that happened to be the most performant setup according to our bonnie++ test series.

    The I/O latency problem is experienced not only inside the guest systems but is also affecting services running on the hypervisor system itself. The setup seems complex, but I'm sure the basic structure is not what causes the latency problems, as my previous server ran for four years with almost the same basic setup without any of the performance problems.

    On the old setup the following things were different:

    - Debian Lenny was the OS for both hypervisor and almost all guests
    - Xen software virtualisation (therefore no Virtio, also)
    - no LibVirt management
    - Different hard drives, each 1.5TB in size (one of them was a Seagate Barracuda 7200.11 ST31500341AS, the other one I can't tell anymore)
    - We had no IPv6 connectivity
    - Neither in the hypervisor nor in the guests did we have noticeable I/O latency problems

    According to the datasheets, the current hard drives and those of the old machine have an average latency of 4.12 ms.
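    For completeness, the elevator setting mentioned above can be inspected and switched at runtime through sysfs. This is not the author's exact procedure, just the standard interface; sda is a placeholder for whichever block device is being tuned:

      # Show available schedulers; the active one is shown in brackets
      cat /sys/block/sda/queue/scheduler

      # Switch this device to deadline (not persistent across reboots)
      echo deadline | sudo tee /sys/block/sda/queue/scheduler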

    Read the article

  • "Verifying DMI Pool" hang caused by raid array..

    - by Ling
    Hi experts, I have a problem. I obtained a new server with 4 hard drives (two 500 GB, two 2 TB) and an Adaptec RAID card. I arranged them in two RAID 1 arrays (the 500 GB drives together as the primary array and the 2 TB drives for data). When both arrays are configured, the server hangs while booting at the message "Verifying DMI Pool"; however, if I remove the second array from the configuration the server boots fine. I have checked that they are both on different channels, I have disabled all other peripherals from the boot menu, and I have ensured the hard drive is #1 in the boot order. I have booted into Linux rescue mode and checked that it reads both arrays fine. What else could be causing these problems? Thanks

    Read the article

  • CentOS - Add additional hard drive raid arrays on Dell Perc 5/i card

    - by Quanano
    We have a Dell PowerEdge 2900 system with a Dell PERC 5/i card and 4 SAS hard drives attached, with NTFS partitions on them. We installed CentOS on one RAID array on this controller with a different controller, and it is working fine. We are now trying to access the drives shown above, but they are not being shown in /dev as sdb, etc. sda is the drive that we installed CentOS on and it has sda1, sda2, sda3, etc. The CD-ROM has been picked up as well. If I scan for SCSI devices, the PERC and Adaptec controllers are both found. sg0 is the CD-ROM and sg2 is the CentOS install; I think sg1 is the other drive, but I cannot see any way to mount the partitions, as only the drive is listed in /dev. Thanks.

    EXTRA INFO

    fdisk -l:

      Disk /dev/sda: 72.7 GB, 72746008576 bytes
      255 heads, 63 sectors/track, 8844 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x11e3119f

      Device Boot    Start      End      Blocks   Id  System
      /dev/sda1   *      1       64      512000   83  Linux
      Partition 1 does not end on cylinder boundary.
      /dev/sda2         64     8845    70528000   8e  Linux LVM

      Disk /dev/mapper/vg_lal2server-lv_root: 34.4 GB, 34431041536 bytes
      255 heads, 63 sectors/track, 4186 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000
      Disk /dev/mapper/vg_lal2server-lv_root doesn't contain a valid partition table

      Disk /dev/mapper/vg_lal2server-lv_swap: 21.1 GB, 21139292160 bytes
      255 heads, 63 sectors/track, 2570 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000
      Disk /dev/mapper/vg_lal2server-lv_swap doesn't contain a valid partition table

      Disk /dev/mapper/vg_lal2server-lv_home: 16.6 GB, 16647192576 bytes
      255 heads, 63 sectors/track, 2023 cylinders
      Units = cylinders of 16065 * 512 = 8225280 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disk identifier: 0x00000000
      Disk /dev/mapper/vg_lal2server-lv_home doesn't contain a valid partition table

    These are all from the install HDD, not the additional hard drives.

    modprobe a320raid:

      FATAL: Module a320raid not found.

    lsscsi -v:

      [0:0:0:0]  cd/dvd  TSSTcorp CDRWDVD TS-H492C  DE02  /dev/sr0
        dir: /sys/bus/scsi/devices/0:0:0:0 [/sys/devices/pci0000:00/0000:00:1f.1/host0/target0:0:0/0:0:0:0]
      [4:0:10:0] enclosu DP BACKPLANE               1.05  -
        dir: /sys/bus/scsi/devices/4:0:10:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:0:10/4:0:10:0]
      [4:2:0:0]  disk    DELL PERC 5/i              1.03  /dev/sda
        dir: /sys/bus/scsi/devices/4:2:0:0 [/sys/devices/pci0000:00/0000:00:05.0/0000:01:00.0/0000:02:0e.0/host4/target4:2:0/4:2:0:0]

    lsmod:

      Module                  Size  Used by
      fuse                   66285  0
      des_generic            16604  0
      ecb                     2209  0
      md4                     3461  0
      nls_utf8                1455  0
      cifs                  278370  0
      autofs4                26888  4
      ipt_REJECT              2383  0
      ip6t_REJECT             4628  2
      nf_conntrack_ipv6       8748  2
      nf_defrag_ipv6         12182  1 nf_conntrack_ipv6
      xt_state                1492  2
      nf_conntrack           79453  2 nf_conntrack_ipv6,xt_state
      ip6table_filter         2889  1
      ip6_tables             19458  1 ip6table_filter
      ipv6                  322029  31 ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6
      bnx2                   79618  0
      ses                     6859  0
      enclosure               8395  1 ses
      dcdbas                  9219  0
      serio_raw               4818  0
      sg                     30124  0
      iTCO_wdt               13662  0
      iTCO_vendor_support     3088  1 iTCO_wdt
      i5000_edac              8867  0
      edac_core              46773  3 i5000_edac
      i5k_amb                 5105  0
      shpchp                 33482  0
      ext4                  364410  3
      mbcache                 8144  1 ext4
      jbd2                   88738  1 ext4
      sd_mod                 39488  3
      crc_t10dif              1541  1 sd_mod
      sr_mod                 16228  0
      cdrom                  39771  1 sr_mod
      megaraid_sas           77090  2
      aic79xx               129492  0
      scsi_transport_spi     26151  1 aic79xx
      pata_acpi               3701  0
      ata_generic             3837  0
      ata_piix               22846  0
      radeon               1023359  1
      ttm                    70328  1 radeon
      drm_kms_helper         33236  1 radeon
      drm                   230675  3 radeon,ttm,drm_kms_helper
      i2c_algo_bit            5762  1 radeon
      i2c_core               31276  4 radeon,drm_kms_helper,drm,i2c_algo_bit
      dm_mirror              14101  0
      dm_region_hash         12170  1 dm_mirror
      dm_log                 10122  2 dm_mirror,dm_region_hash
      dm_mod                 81500  11 dm_mirror,dm_log

    blkid:

      /dev/sda1: UUID="bc4777d9-ae2c-4c58-96ea-cedb342b8338" TYPE="ext4"
      /dev/sda2: UUID="j2wRZr-Mlko-QWBR-BndC-V2uN-vdhO-iKCuYu" TYPE="LVM2_member"
      /dev/mapper/vg_lal2server-lv_root: UUID="9238208a-1daf-4c3c-aa9b-469f0387ebee" TYPE="ext4"
      /dev/mapper/vg_lal2server-lv_swap: UUID="dbefb39c-5871-4bc9-b767-1ef18f12bd3d" TYPE="swap"
      /dev/mapper/vg_lal2server-lv_home: UUID="ec698993-08b7-443e-84f0-9f9cb31c5da8" TYPE="ext4"

    dmesg shows:

      megaraid_sas: fw state:c0000000
      megasas: fwstate:c0000000, dis_OCR=0
      scsi2 : LSI SAS based MegaRAID driver
      scsi 2:0:0:0: Direct-Access  SEAGATE  ST3146855SS      S527  PQ: 0 ANSI: 5
      scsi 2:0:1:0: Direct-Access  SEAGATE  ST3146855SS      S527  PQ: 0 ANSI: 5
      scsi 2:0:2:0: Direct-Access  SEAGATE  ST3146855SS      S527  PQ: 0 ANSI: 5
      scsi 2:0:3:0: Direct-Access  SEAGATE  ST3146855SS      S527  PQ: 0 ANSI: 5
      scsi 2:0:4:0: Direct-Access  HITACHI  HUS154545VLS300  D590  PQ: 0 ANSI: 5
      scsi 2:0:5:0: Direct-Access  HITACHI  HUS154545VLS300  D590  PQ: 0 ANSI: 5
      scsi 2:0:8:0: Direct-Access  FUJITSU  MBA3073RC        D305  PQ: 0 ANSI: 5
      scsi 2:0:9:0: Direct-Access  FUJITSU  MBA3073RC        D305  PQ: 0 ANSI: 5

    i.e. the 3 RAID arrays: Seagate, Hitachi and Fujitsu hard drives respectively.

    FURTHER UPDATE: I have installed the MegaRAID Storage Manager console and connected to the server. It appears that the two CentOS installation hard drives are OK. The other 6 drives form one RAID array of 4 disks and one RAID array of 2 disks. Those drives are listed as (Foreign) Unconfigured Good.
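    Given that the remaining drives report as (Foreign) Unconfigured Good, one avenue (offered as a hedged sketch rather than a confirmed fix, and worth attempting only after reviewing the preview output) is to import the foreign configuration with MegaCli, since the PERC 5/i is driven by megaraid_sas:

      # List and preview any foreign (previously configured) RAID configuration
      MegaCli -CfgForeign -Scan -aALL
      MegaCli -CfgForeign -Preview -aALL

      # Import it so the logical drives become visible to the OS as block devices
      MegaCli -CfgForeign -Import -aALL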

    Read the article

  • HP Proliant DL360 G5 + MSA50 RAID Setup recommendations

    - by JohnRB
    I am running an HP ProLiant DL360 (2 x 3 GHz Xeon, 16 GB RAM, P400 integrated RAID card) with 6 x 73 GB SAS HDDs, running Ubuntu Server 14.04, CLI only. I recently got my hands on an MSA50 SAS enclosure (10 SAS HDD bays with SAS in/out interface) and am wondering what you would recommend as an add-on RAID controller for one of the PCIe slots. I have both slots free, full and half size. Any suggestions are greatly appreciated; I am an I.T. consultant but have not used these particular units before, so I was hoping to hear from someone who has. Thanks!

    Read the article

  • Software/FakeRAID: Windows 8 Disk Mirroring vs Intel Onboard

    - by Johnny W
    So Windows 8 is out and I have a new motherboard. I wish to create a RAID 1 pairing between two HDDs - for storage purposes only (my OS is on an SSD) - but I don't know which is the best route to take. My motherboard (Z77 chipset) comes with the age-old Intel fake RAID, but since I only wish to use the RAID for storage, I wondered if I might be better off using Windows 8 Disk Mirroring. Can anyone advise which is better? Or perhaps give the pros and cons of each, if that's too contentious? I just can't see the benefit of fake RAID. You can see my current setup here, if that might change things(?): Thanks!

    Read the article

  • How to reassign drive back into MediaShield (nvidia) RAID stripe

    - by scottwed
    So I managed to inadvertently remove the RAID configuration from one drive of a two-drive stripe. I have both drives, unmodified and undamaged. Specifically for MediaShield RAID, is there any way to reattach the 2nd drive to the stripe? Currently the stripe is displayed in Error status, and I have the other drive available but unassigned. I strongly suspect there is no solution without purging the array completely and redefining the stripe, but I figured it was worth asking before I wipe out the data.

    Read the article

  • How to perform fresh linux install while preserving software raid and user accounts

    - by slayton
    I have a system with two software RAID arrays. The OS is Ubuntu 9.04 and is no longer receiving updates. I'd like to update the system to 12.04 rather than trying to do the automatic upgrade path from 9.04 -> 9.10 -> ... -> 12.04. My main drive has 2 partitions, mounted at / and /home. Is it possible to do a fresh install of Linux to the partition where / is mounted while preserving user accounts and preferences (such as passwords, home dir locations, etc.)? Additionally, what do I need to do to keep my software RAID arrays intact following the OS re-install?
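    On the software RAID side, existing mdadm arrays are normally re-detected rather than recreated, provided the member partitions are not touched during the install. A minimal sketch, assuming standard mdadm arrays and Ubuntu's config paths, of re-assembling and recording them on the freshly installed system:

      # Scan member superblocks and assemble any existing arrays
      sudo mdadm --assemble --scan

      # Record the detected arrays so they assemble automatically at boot
      sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
      sudo update-initramfs -u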

    Read the article

  • Remote RAID Control in ESXi on a Dell PowerEdge 2950 Using OpenManage

    - by yoyomommy
    I was wondering how one can add a drive to an existing RAID array while ESXi is still running. I have read that you are able to use Dell OpenManage to do this. I have installed OMSA 7.0 on the VMware ESXi host (5.0 and fully updated) and I've installed OpenManage Essentials on a Windows Server 2008 R2 guest. The issue I'm having is that OpenManage is unable to see my RAID controller. I have seen videos and photos as part of guides on how to do this online, so I would assume that the functionality exists and I just have it set up wrong.

    Read the article

  • choosing the right RAID level for a PostgreSQL database

    - by Sergey
    Hi, I have a disk array appliance with 8 disks of 1 TB each (UltraStor RS8IP4). It will be used solely by a PostgreSQL database and I am trying to choose the best RAID level for it. The top priority is read performance, since we operate on large data sets (tables, indexes) and do lots of searches/scans. With the old disks that we have now, most slowdowns happen on SELECTs. Fault tolerance is less important; it can be 1 or 2 disks. Space is the least important factor - even 1 TB will be enough. Which RAID level would you recommend in this situation? The current options are 60, 50 and 10, but other options could be even better.

    Read the article

  • GRUB2 UEFI booting from LVM on RAID (with XEN)

    - by pavian
    I'm experimenting with booting the root fs from an LVM volume inside a RAID (mdraid superblock 1.x) via UEFI with GRUB2. I'm also using the Xen hypervisor. From the GRUB command line I can see my LVM volume (ls command), but I get a kernel panic due to "unable to mount root fs". I saw a note in this article saying it's probably impossible to boot a root fs from RAID via UEFI, but I don't understand the reason why not. Is it possible to boot Linux with this configuration without an initramfs (which I don't want to use)?
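    For anyone experimenting at the GRUB prompt, GRUB2 itself can usually see the whole stack once the right modules are loaded; whether the kernel can then mount root without an initramfs is a separate matter. A hedged sketch of the GRUB-side commands, with the volume group and logical volume names as placeholders:

      # In the GRUB2 shell (or grub.cfg): load GPT, mdraid metadata 1.x and LVM support
      insmod part_gpt
      insmod mdraid1x
      insmod lvm

      # Point the root at the logical volume, e.g. VG "vg0", LV "root"
      set root=(lvm/vg0-root)
      ls /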

    Read the article

  • Dell PowerEdge R515 - Replacing a Bad Hard Drive in a RAID

    - by LonnieBest
    I've ordered a new hard drive to replace a bad one in a Dell PowerEdge R515. The manual covers the obvious topics regarding physically replacing hard drives, but I've never done this before on a production server where RAID is involved. I've heard people talk about this topic, and I've heard that some servers have RAID controllers that are smart enough to let you just put in the new drive (hot swap), after which the server automatically knows how to rebuild that drive to be what the old one was to the system. Where do I find the proper procedure for replacing a failed hard drive on a live production Dell PowerEdge R515? Can someone with experience tell me how easy or hard this usually is?

    Read the article

  • Hardware RAID 0 without OS re-installation

    - by sterz
    I have Ubuntu & Windows 7 installed on my HDD. Can I mirror the image of the HDD to a second identical drive? Is this not recommended (i.e. would I have to re-install every OS)? If it is okay to mirror, is there anything else to do to make hardware RAID 0 work? Does RAID 0 have the same risk as a single drive? What sector size would you recommend for reading/writing/extracting video files (each mostly around 2 GB)?

    Read the article

  • growing EBS RAID volume

    - by Ryan Fernandes
    I've created a RAID 0 configuration with two 1 GB EBS volumes, mounted at /dev/md0 using mdadm and formatted with XFS. Next, I copied some files over to fill the volume to around 30% of its capacity (of 2 GB). I then created snapshots of the volumes using ec2-consistent-snapshot and created volumes from those snapshots, but specified the volume size to be 2 GB (effectively doubling the capacity of each disk). I then spun up a new instance, assembled the RAID 0 configuration on /dev/md0 from the 2 volumes mentioned above, and mounted it at /vol. df -hT showed /vol as 2 GB (as expected). Now I ran sudo xfs_growfs -d /vol. The command completed normally but reported blocks changed from 523776 to 524160 (only!), and df -hT still showed /vol as 2 GB (instead of the expected 4 GB). I rebooted, remounted and reassembled the RAID, but it still reports the old size. EDIT: trying to grow the RAID using mdadm --grow yields "mdadm: raid0 array /dev/md0 cannot be reshaped". Is there any other way I can grow a RAID 0 array?

    Read the article

  • Benefits of a RAID BBU in addition to a double UPS + PS system

    - by Wikser
    Today I roughly measured the benefits of enabling write-back on the RAID controller of a server at work. It has no RAID battery backup unit (BBU), so the write cache is currently disabled. As the server is not used to capacity (by far), the results in most tests were spectacular, e.g.:

    Database CRUD: before 35 s, after 4 s
    Saving a 1 MB Excel file: before 20 s (!), after 0.5 s

    Of course having a BBU is always recommended, but what are the main benefits of installing a BBU in a system that has redundant power supplies and is attached to UPSs? Does this depend on the type of system (database, file, terminal)? What is a realistic failure scenario that could be prevented by a BBU? Thanks in advance!

    Read the article

  • HighPoint RAID Controller can't see drives

    - by Satellite
    I've just built a system with a HighPoint 2720 RAID controller. Everything appears to be OK, except that the controller doesn't see any of my drives - the RAID BIOS displays "ERROR. No Suitable disks". I have tried a WD SATA, Seagate SATA, and OCZ Vertex 3 SATA. I am using MiniSAS to 4x SATA breakout cables. How do I get this working? I've tried to sign up for HighPoint support, but their support site doesn't appear to be functioning correctly.

    Read the article

  • Raid1+0: create stripe over two /dev/mdx on partition or not?

    - by Chris
    Given that I haven't found a way to define how a RAID 10 is created with mdadm (see: How to display/define Mirror/Striping pairs with mdadm), I went with the RAID 1+0 solution:

      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdf1
      mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1
      mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md0 /dev/md1

    My question is about the stripe. For the mirrors I create a primary partition over the full HDD and set the partition type to FD. So, should I do the same for the stripe? That is, create a partition on /dev/md0 and /dev/md1 (primary over the full 'HDD', partition type set correctly) and then build the stripe on those partitions? Is there a correct way here, or are there any advantages/disadvantages to either approach? Thank you
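    For comparison, mdadm can also build the equivalent array in a single step with --level=10, which avoids the question of partitioning the intermediate md devices. A minimal sketch, assuming the same four partitions as above:

      # Native mdadm RAID 10 over four members; with the default near=2 layout,
      # copies land on adjacent devices in the order listed
      mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda1 /dev/sdf1 /dev/sdg1 /dev/sdh1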

    Read the article

  • Most efficient RAID configuration with 6 disks?

    - by Bob King
    I have a hand-me-down server that I'm setting up at home, and it's got six 72 GB hard disks (as well as two 18 GB drives that I'm using for the OS). What is the best way to configure those 6 drives? Should I use RAID 5 or 6, or go with something simpler, like mirroring? I'm planning to use it to hold a source control repository, and possibly data for a development SQL server. The machine has a hardware RAID controller. It is an old IBM server.

    Read the article

  • RAID 1 in ubuntu 12.04

    - by Bavly Hanna
    Right now I have a small file server on which I have loaded Ubuntu 12.04 Desktop, installed on a small 160 GB hard drive. This hard drive is the primary drive from which the OS boots. I want to move all my data to the file server so it can be shared on the network (it is currently contained on 2 x 2 TB hard drives in my desktop). The 2 TB drives are in RAID 1 (hardware). I simply want to move them to the file server and set it up so that they are in software RAID 1. If at all possible, I'd like to be able to do this without losing any data on the drives. I've searched around, and the guides I find describe setting up RAID on boot drives, but these wouldn't be boot drives, just regular storage drives. If someone could tell me how to do this, or point me in the right direction, it would be much appreciated.
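    One commonly described route (offered here only as a hedged sketch; device names are placeholders and a verified backup should exist first) is to break the hardware mirror, build a degraded mdadm mirror on one freed disk, copy the data over, and then add the second disk:

      # Create a RAID 1 with one member for now; "missing" reserves the second slot
      sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
      sudo mkfs.ext4 /dev/md0

      # ...copy the data from the still-intact drive onto /dev/md0, verify it,
      # then wipe that drive and attach it as the second mirror member
      sudo mdadm --manage /dev/md0 --add /dev/sdc1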

    Read the article
