Search Results

Search found 1900 results on 76 pages for 'xserve raid'.


  • Cheap machine with Xen to run more than one personal dev server, with a RAID option?

    - by AJ
    I do not have a lot of money, so when I need personal dev servers I decided I would use Xen on one machine to run various OSes. I really like the Dell Zino (http://www.dell.com/us/en/home/desktops/inspiron-zino-hd/pd.aspx?refid=inspiron-zino-hd&cs=19&s=dhs) as an economical and compact PC that can be scaled to 8GB of RAM, but I do not know how to set up RAID on it for backup. Is there an economical way to add a RAID setup to the Dell Zino, or would I have to invest in a proper box for that? Thanks for your help in advance. Also, please let me know if this question is not meant for this forum.
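
    For what it's worth, Linux software RAID (mdadm) needs no special hardware, so a mirror over two external disks attached to a small box like the Zino would work. A minimal sketch, assuming the disks show up as /dev/sdb and /dev/sdc (hypothetical names):

        sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1   # mirror the two partitions
        sudo mkfs.ext4 /dev/md0                                                       # format the resulting array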


  • How to resize an encrypted LVM in RAID 1 in Linux?

    - by user28712
    Hi, I am running three partitions in RAID 1:

        Partition : Mountpoint : Filesystem : Encrypted : LVM
        ------------------------------------------------------
        1         : /boot      : ext2       : No        : No
        2         : /          : ext3       : Yes       : No
        3         : /home      : xfs        : Yes       : Yes

    It turns out I have given the root of the system too few gigabytes; I would like to shrink the LVM (3) and give the root (2) more space. How can I do this safely without messing up the system? In what order do I have to resize the RAID partitions, encryption, LVM, and filesystems? Any help would be greatly appreciated. Adriaan
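
    The general rule is to shrink from the top of the stack down and grow from the bottom up. A rough, hedged sketch of the shrink side, with assumed device names (md3 for the /home array, vg0/home for the LV, home_crypt for the opened LUKS device), and noting that XFS cannot be shrunk in place:

        # 0. back up /home first -- XFS cannot shrink, so its contents must be
        #    restored into a smaller filesystem afterwards
        sudo umount /home
        # 1. shrink the LV (e.g. by 20G) and recreate the XFS inside it
        sudo lvreduce -L -20G /dev/vg0/home
        sudo mkfs.xfs -f /dev/vg0/home        # mount and restore the backup after this
        # 2. shrink the PV to fit the smaller layout
        sudo pvresize --setphysicalvolumesize 400G /dev/mapper/home_crypt
        # 3. shrink the LUKS container (cryptsetup resize) and the md partition to
        #    match, then grow the / partition, its LUKS container, and finally the
        #    ext3 filesystem (resize2fs), in that order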


  • How do I recover a RAID 1 volume on Mac OS X (10.7)?

    - by Avry
    I have a Synology NAS that I've set up with RAID 1. The device is set up with two drives, both the same size (i.e. 500 GB each), formatted in ext3, as a RAID 1 volume (i.e. even though the total capacity is 1TB, I effectively only get 500 GB). In the case of a device failure where I can only access one of the drives, how can I recover my data? The solution I'm looking for is something like: 'Put the working drive in an enclosure, and use <some software> to recover your data.'
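
    As far as I know, Synology volumes are standard Linux mdadm arrays, so one hedged approach is to attach the surviving disk to a Linux machine (or VM) rather than recover on the Mac directly. A sketch, assuming the data partition shows up as /dev/sdb3 (partition numbers vary by DSM version):

        sudo mdadm --assemble --run /dev/md0 /dev/sdb3   # start the degraded mirror from its one member
        sudo mkdir -p /mnt/recovery
        sudo mount -o ro /dev/md0 /mnt/recovery          # mount read-only and copy the data off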


  • Can a RAID 4 disk setup crash if only 1 hard disk fails?

    - by Steve Rodrigue
    I am a web developer and don't have much experience with hardware, which is why I use managed servers. This morning, one of the drives in our setup failed, yet the whole site went down. I asked my web host what happened, and he replied that the hard disk failed in such a way that the RAID controller couldn't work properly. The array was set up as RAID 4. Have you guys ever seen that before? Is it possible? Thanks for any help on this. I need to know if my web host is being honest with me.


  • Can I swap a HDD from a RAID 1 for a bigger capacity drive?

    - by Gnuffo1
    I have a RAID 1 setup with two 500GB drives, so they mirror each other. I know that if I take one of them out and put in a new 500GB drive, the array will rebuild so that the new drive mirrors the old one. My question: if I take out one of the 500GB drives and add, say, a 2TB drive, will it set up the 2TB drive to mirror the 500GB one? I assume that if it does, it will only give me access to 500GB. If I then take out the remaining 500GB drive after the data has been mirrored and stick in a second 2TB drive, will I end up, once that rebuild finishes, with a 2TB RAID setup holding the same data as the original 2x 500GB setup?
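
    Usually the mirror rebuilds at the old size and the extra capacity stays unused until both members are large. If this is Linux mdadm (hardware controllers differ), a hedged sketch of the final grow step after both 2TB disks are in place:

        sudo mdadm --grow /dev/md0 --size=max   # expand the array into the new capacity
        sudo resize2fs /dev/md0                 # then grow the filesystem (assuming ext3/ext4)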


  • LVM vs RAID0 vs RAID "linear" - Combine 2 disks as one, data recovery?

    - by leto
    Hi there, given two 2TB USB external disks that have to be combined into one 4TB volume and formatted with one big filesystem (XFS), I have a small question to ask. Does LVM provide better data recovery should one disk be unplugged or damaged, i.e. can the data on the still-working disk be recovered, or is everything lost? I would appreciate a solution where only the data of one disk is lost and I can recover the content of the other with the usual filesystem/LVM/RAID tools. Is that possible with LVM or RAID "linear"? This is for storing unimportant files that can be retrieved from backup, but I want to save time :) Thank you in advance
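
    For reference, both concatenation styles are only a couple of commands; neither gives any redundancy, and with one XFS spanning the pair, losing either disk generally mangles the filesystem as a whole. A sketch assuming the disks are /dev/sdb1 and /dev/sdc1 (hypothetical names):

        # mdadm "linear" concatenation:
        sudo mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdb1 /dev/sdc1
        # or LVM concatenation:
        sudo pvcreate /dev/sdb1 /dev/sdc1
        sudo vgcreate vg_usb /dev/sdb1 /dev/sdc1
        sudo lvcreate -l 100%FREE -n data vg_usb
        sudo mkfs.xfs /dev/vg_usb/data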


  • Device or resource busy errors when setting up a RAID array

    - by JGazlyVFX
    I'm trying to create a RAID array, and these are the errors I keep getting. I have repartitioned the drives several times.

        root@BitchStewie:/dev# sudo mdadm --verbose --create /dev/md1 --chunk=64 --level=0 --raid-devices=12 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1
        mdadm: super1.x cannot open /dev/sdb1: Device or resource busy
        mdadm: ddf: Cannot use /dev/sdb1: Device or resource busy
        mdadm: Cannot use /dev/sdb1: It is busy
        mdadm: device /dev/sdb1 not suitable for any style of array
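
    "Device or resource busy" usually means something else already claims the partition: a previously auto-assembled array, stale RAID metadata, or device-mapper. A hedged checklist (the md127 name is an assumption; check /proc/mdstat for the real one):

        cat /proc/mdstat                        # is an old array already holding the members?
        sudo mdadm --stop /dev/md127            # stop any auto-assembled array first
        sudo mdadm --zero-superblock /dev/sdb1  # wipe stale RAID metadata (repeat per member)
        sudo dmsetup table                      # check for device-mapper (dmraid/LVM) claims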


  • MegaCLI always returns blank output

    - by JamesHannah
    This server is a Dell R200 running Ubuntu 8.04 LTS with an LSI SAS1068E RAID card supplied by Dell. I suspect that there might be some kind of RAID issue with the hardware RAID built into the motherboard, but I can't seem to get MegaCli to return any useful output:

        root@81 $ ./MegaCli -AdpAllInfo -aALL
        root@81 $ ./MegaCli -PDList -aALL
        root@81 $

    The disks work and AFAIK the RAID software is installed correctly. I've also seen this issue on Red Hat in the past. The RAID was initially set up through the BIOS on this server and appears to be functioning fine apart from this.
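
    One thing worth ruling out: as far as I know the SAS1068E is a Fusion-MPT part driven by mptsas, not a MegaRAID controller, and MegaCli silently prints nothing when no MegaRAID adapter is present. A hedged check:

        lspci | grep -i -e lsi -e raid      # identify the actual controller family
        lsmod | grep -e mptsas -e megaraid  # which driver is bound? mptsas parts need
                                            # LSI's MPT tools (e.g. lsiutil), not MegaCli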


  • Experience with AMCC 3ware 9650se raid cards? Ours seems dead

    - by antiduh
    We have an 8-port 3ware 9650se RAID card for our main disk array. We had to bring the server down for a pending power outage, and when we turned the machine back on, the RAID card never started. This card has been in service for a couple of years without problems and was working up until the shutdown. Now, when we turn the machine on, the BIOS option ROM that normally kicks in before the bootloader doesn't show up, none of the drives start, and when the OS tries to access the device, it just times out. The firmware on it has been upgraded in the past, so it's possible we've hit some sort of firmware bug. We're using it in a Silicon Mechanics R272 machine with Gentoo for the OS. The OS eventually boots, but alas, without the card. We've ordered a new one, but I'm worried that if we replace the card it won't recognize the existing array. Has anybody performed a card swap before? Any help would be greatly appreciated. Edit: These are the kernel errors we see:

        3ware 9000 Storage Controller device driver for Linux v2.26.02.012.
        3w-9xxx 0000:09:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
        3w-9xxx 0000:09:00.0: setting latency timer to 64
        3w-9xxx: scsi0: ERROR: (0x06:0x000D): PCI Abort: clearing.
        3w-9xxx: scsi0: ERROR: (0x06:0x001F): Microcontroller not ready during reset sequence.
        3w-9xxx: scsi0: ERROR: (0x06:0x0036): Response queue (large) empty failed during reset sequence.
        3w-9xxx 0000:09:00.0: PCI INT A disabled
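
    On the array-recognition worry: as far as I know, 3ware controllers store the array configuration on the disks themselves, so a same-family replacement card should re-detect the existing unit. A hedged sketch for verifying after the swap with 3ware's tw_cli:

        sudo tw_cli show                 # list the controllers the driver can see
        sudo tw_cli /c0 show unitstatus  # unit state (OK / REBUILDING / INOPERABLE ...)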


  • Where do I connect the HDD LED wires on my RAID adapter?

    - by Giffyguy
    I'm using a Promise FastTrak TX8660 with RAID 5. The manual (and Google) just doesn't seem to explain how exactly to connect a standard two-pin HDD LED wire to the eight available pins on the card. The manual just says to connect the LED following a diagram, and the card itself resembles that diagram, but it doesn't make any sense to me. All I have is a two-pin connector for the HDD LED on the front of my computer case. I don't need anything fancy like the fault LED or separate indicators for each drive. I just want to be able to see when my RAID 5 array is working, that's all. I don't know what the "R" and "G" stand for, but my HDD LED wires are red and white. I tried connecting the red wire to the "R" pin and the white wire to the "G" pin, but that just makes the LED on the front of my case light up indefinitely, even when the computer is idle. Which pins am I supposed to connect the HDD LED header to for basic activity indication?


  • Why is my mdadm raid-1 recovery so slow?

    - by dimmer
    The system is running Ubuntu 10.04. My RAID 1 restore started out fast but quickly became ridiculously slow (at this rate the restore will take 150 days!):

        dimmer@paimon:~$ cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid1 sdc1[2] sdb1[1]
              1953513408 blocks [2/1] [_U]
              [====>................]  recovery = 24.4% (477497344/1953513408) finish=217368.0min speed=113K/sec
        unused devices: <none>

    Even though I have set the kernel variables to reasonably quick values:

        dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_min
        1000000
        dimmer@paimon:~$ cat /proc/sys/dev/raid/speed_limit_max
        100000000

    I am using two 2.0TB Western Digital hard disks, WDC WD20EARS-00M and WDC WD20EARS-00J. I believe they have been partitioned such that their sectors are aligned.

        dimmer@paimon:/sys$ sudo parted /dev/sdb
        GNU Parted 2.2
        Using /dev/sdb
        Welcome to GNU Parted! Type 'help' to view a list of commands.
        (parted) p
        Model: ATA WDC WD20EARS-00M (scsi)
        Disk /dev/sdb: 2000GB
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt
        Number  Start   End     Size    File system  Name  Flags
         1      1049kB  2000GB  2000GB  ext4
        (parted) unit s
        (parted) p
        Number  Start  End          Size         File system  Name  Flags
         1      2048s  3907028991s  3907026944s  ext4
        (parted) q

        dimmer@paimon:/sys$ sudo parted /dev/sdc
        GNU Parted 2.2
        Using /dev/sdc
        Welcome to GNU Parted! Type 'help' to view a list of commands.
        (parted) p
        Model: ATA WDC WD20EARS-00J (scsi)
        Disk /dev/sdc: 2000GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt
        Number  Start   End     Size    File system  Name  Flags
         1      1049kB  2000GB  2000GB  ext4

    I am beginning to think that I have a hardware problem; otherwise I can't imagine why the mdadm restore should be so slow. I have benchmarked /dev/sdc using Ubuntu's Disk Utility GUI app, and the results looked normal, so I know that sdc is capable of writing faster than this. I also had the same problem on a similar WD drive that I RMAed because of bad sectors. I suppose it's possible they sent me a replacement with bad sectors too, although there are no SMART values showing them yet. Any ideas? Thanks. As requested, output of top sorted by CPU usage (notice there is ~0 CPU usage). iowait is also zero, which seems strange:

        top - 11:35:13 up 2 days, 9:40, 3 users, load average: 2.87, 2.58, 2.30
        Tasks: 142 total, 1 running, 141 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.2%sy, 0.0%ni, 99.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 3096304k total, 1482164k used, 1614140k free, 617672k buffers
        Swap: 1526132k total, 0k used, 1526132k free, 535416k cached

          PID USER  PR  NI  VIRT   RES  SHR  S %CPU %MEM  TIME+    COMMAND
           45 root  20   0     0     0    0  S    0  0.0  2:17.02  scsi_eh_0
            1 root  20   0  2808  1752 1204  S    0  0.1  0:00.46  init
            2 root  20   0     0     0    0  S    0  0.0  0:00.00  kthreadd
            3 root  RT   0     0     0    0  S    0  0.0  0:00.02  migration/0
            4 root  20   0     0     0    0  S    0  0.0  0:00.17  ksoftirqd/0
            5 root  RT   0     0     0    0  S    0  0.0  0:00.00  watchdog/0
            6 root  RT   0     0     0    0  S    0  0.0  0:00.02  migration/1
        ...
    dmesg errors, definitely looking like hardware:

        [202884.000157] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [202884.007015] ata5.00: failed command: FLUSH CACHE EXT
        [202884.013728] ata5.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
        [202884.013730]          res 40/00:00:ff:59:2e/00:00:35:00:00/e0 Emask 0x4 (timeout)
        [202884.033667] ata5.00: status: { DRDY }
        [202884.040329] ata5: hard resetting link
        [202889.400050] ata5: link is slow to respond, please be patient (ready=0)
        [202894.048087] ata5: COMRESET failed (errno=-16)
        [202894.054663] ata5: hard resetting link
        [202899.412049] ata5: link is slow to respond, please be patient (ready=0)
        [202904.060107] ata5: COMRESET failed (errno=-16)
        [202904.066646] ata5: hard resetting link
        [202905.840056] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [202905.849178] ata5.00: configured for UDMA/133
        [202905.849188] ata5: EH complete
        [203899.000292] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [203899.007096] ata5.00: failed command: IDENTIFY DEVICE
        [203899.013841] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
        [203899.013843]          res 40/00:00:ff:f9:f6/00:00:38:00:00/e0 Emask 0x4 (timeout)
        [203899.041232] ata5.00: status: { DRDY }
        [203899.048133] ata5: hard resetting link
        [203899.816134] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [203899.826062] ata5.00: configured for UDMA/133
        [203899.826079] ata5: EH complete
        [204375.000200] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [204375.007421] ata5.00: failed command: IDENTIFY DEVICE
        [204375.014799] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
        [204375.014800]          res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout)
        [204375.044374] ata5.00: status: { DRDY }
        [204375.051842] ata5: hard resetting link
        [204380.408049] ata5: link is slow to respond, please be patient (ready=0)
        [204384.440076] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [204384.449938] ata5.00: configured for UDMA/133
        [204384.449955] ata5: EH complete
        [204395.988135] ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
        [204395.988140] ata5.00: failed command: IDENTIFY DEVICE
        [204395.988147] ata5.00: cmd ec/00:01:00:00:00/00:00:00:00:00/00 tag 0 pio 512 in
        [204395.988149]          res 40/00:00:ff:0c:0f/00:00:39:00:00/e0 Emask 0x4 (timeout)
        [204395.988151] ata5.00: status: { DRDY }
        [204395.988156] ata5: hard resetting link
        [204399.320075] ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
        [204399.330487] ata5.00: configured for UDMA/133
        [204399.330503] ata5: EH complete
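
    Given the ata5 link resets, the drive, cable, or port on that channel is the prime suspect. A hedged sketch of the checks worth running (assuming /dev/sdc is the disk on ata5):

        sudo smartctl -a /dev/sdc | egrep -i 'reallocated|pending|crc'  # bad sectors vs. cabling errors
        sudo smartctl -t short /dev/sdc                                 # run a short SMART self-test
        watch cat /proc/mdstat   # see whether speed recovers after swapping the SATA cable/port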


  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code-monkey that's increasingly taken on SysAdmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a tier IV data center. (Literally across the street.) This meant doing much more ourselves: things like networking, storage and monitoring. As part of the big move, to replace our leased direct attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and … It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also set up a Cacti monitoring system. Recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault. It's been a fun and educational experience. My boss is happy (we saved bucket loads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday.

    Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate; there was no traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's filesystem. Time to fail over; that went well. Within 20 minutes everything was confirmed to be back up and running perfectly.

    Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. First thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, back up and running. In /var/log/syslog I found this scary-looking entry:

        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors
        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
        Nov 15 06:49:45 umbilo smartd[2827]: # 1  Short offline     Completed: read failure  90%        6576             3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 2  Short offline     Completed: read failure  90%        6087             3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 3  Short offline     Completed: read failure  10%        5901             656821791
        Nov 15 06:49:45 umbilo smartd[2827]: # 4  Short offline     Completed: read failure  90%        5818             651637856

    So I went to check the Cacti graphs for the disks in the array. Here we see that, yes, disk 7 is slipping away just like syslog says it is.
    But we also see that disk 8's SMART read errors are fluctuating. There are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate with the high IO wait times! My interpretation is that:

        - Disk 8 is experiencing an odd hardware fault that results in intermittent, long operation times.
        - Somehow, this fault condition on the disk is locking up the entire array.

    Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array.

    The Question(s):

        - How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?
        - Am I being naïve to think that the RAID card should have dealt with this?
        - How can I prevent a single misbehaving disk from impacting the entire array?
        - Am I missing something?
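
    For digging further into the suspect disk, smartctl can address individual drives behind a 3ware controller. A hedged sketch (port numbers follow the 3ware_disk_NN naming in the syslog above, but verify them against the controller's own listing):

        sudo smartctl -a -d 3ware,7 /dev/twa0   # the disk syslog already flagged
        sudo smartctl -a -d 3ware,8 /dev/twa0   # the quiet-but-suspect neighbour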


  • Disaster Recovery Plan – Rebuild System Disk (Dell Server 2900 with PERC RAID controller)

    - by Jim Lahman
    Goal: Since the system disk is a RAID 1 mirrored set, we can rebuild the shadow set by replacing one of the good disks with a blank disk.

    Steps:

        1. Shut down and power down the server.
        2. Remove the disk from bay 9, which is part of the system shadow set, and put it on the shelf.
        3. Label the new disk, then insert the blank/old disk into the empty bay.
        4. Power up the server. During the boot process, the message "Some configured disks have been removed from your system…" appears.
        5. Press 'C' to load the configuration utility, then 'Y' to confirm loading the foreign configuration.
        6. In this example, the system shadow set is Disk Group 2. (Before proceeding, confirm which disk group it is in your case.) Expanding the physical disks shows a disk in bay 8 and a missing disk in bay 9; this is correct, and the RAID controller reports bay 9 as empty. The newly inserted disk now has to be included in this group.
        7. There may be times when the new disk is seen as a foreign disk in bay 9. In that case, press CTRL-N (Next Page) to go to Foreign Mgt. All the disk groups will be displayed; typically, the disk group containing the foreign disk will be grey. To clear the foreign configuration: highlight Controller, press F2, select Foreign, then select Clear (do NOT import the configuration!).
        8. Now the disk can be brought into the system shadow set disk group as a dedicated hot spare: highlight Disk Group 2 (VD Management), hit F2, select 'Manage Ded. HS', select the disk in bay 9 (hit the space bar to select), tab to 'OK' and hit the return key. This brings the hot spare into the RAID 1 mirror set.
        9. The rebuild commences automatically.
        10. Restart now, or restart after the rebuild is completed.
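
    If Dell OpenManage is installed, the rebuild can also be watched from the OS instead of the controller BIOS. A hedged sketch (controller number 0 is an assumption):

        omreport storage pdisk controller=0   # per-disk state: Online, Rebuilding, Failed ...
        omreport storage vdisk controller=0   # virtual disk state and rebuild progress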


  • VMWare Setup with 2 Servers and a DAS (DELL MD3220)

    - by Kumala
    I am planning a VMware-based setup consisting of two VMware servers (2 CPUs, 256GB memory each) and a DAS (Dell MD3220 with 24x900GB disks). Half of the virtual machines will run MS SQL databases (Application, SharePoint, BI) and the other half will be file services and IIS. To expand the storage capacity, we'll be adding an MD1220 enclosure with another 24x900GB disks to the MD3220. Both DAS will have two controllers. Our current measured load is 1000 IOPS average, 7000 IOPS peak (peaks happen maybe twice per hour). We are in the planning phase now and are looking at the proper setup of the disks. The intention is to set up one of the DAS enclosures with RAID 10 only and the other with RAID 5; that will allow us to put each application on the DAS that best supports its performance needs. The question is how best to partition the two DAS enclosures to get the best possible IOPS/MBps; each DAS will have to have two hot spares.

    For the RAID 5 setup: generally speaking, would it be better to have one single disk group across all 22 disks (24 minus 2 hot spares) with both controllers assigned to the one disk group, or is it better to have two disk groups of 11 disks each, one assigned to each of the two controllers?

    Same question for the RAID 10 setup. The plan is: 2 disks for logs (RAID 1), 2 hot spares, and 20 disks for RAID 10.

        Option 1: 5 groups of 4 disks (RAID 10), with two groups assigned to one controller and three groups to the other.
        Option 2: one large RAID 10 across all the disks, with both controllers assigned to the same group.

    I would assume that there is no right or wrong, but that it all depends very much on the specific application behaviour, so I am looking for some general ideas on what the pros and cons of the different options are. If there are other meaningful options, feel free to propose them.
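
    As a rough sanity check, the classic write-penalty arithmetic is easy to run for each layout. The ~140 IOPS per 10k SAS disk figure below is an assumption, and real throughput depends heavily on the read/write mix:

        echo $(( 20 * 140 / 2 ))   # RAID 10, 20 disks, write penalty 2 -> ~1400 write IOPS
        echo $(( 22 * 140 / 4 ))   # RAID 5, 22 disks, write penalty 4  -> ~770 write IOPS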


  • Debian: Disable software Raid?

    - by DebFo
    Hey, I own a server with two HDDs, and I used a Debian RAID installation template. Now I want to reinstall my server and don't want RAID anymore, to have more space. My provider wants money to change the template to a non-RAID Debian one. Is there a way to disable software RAID on Linux after a fresh installation? Thanks
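
    One hedged approach, run from a rescue system rather than the installed OS: degrade the mirror and reclaim the second disk (device names below are assumptions; back up first):

        sudo mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # drop one member from the mirror
        sudo mdadm --zero-superblock /dev/sdb1                    # that disk is now plain free space
        # repartition /dev/sdb separately, then update /etc/fstab and the bootloader as needed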


  • Should I buy matching hard drives for a NAS RAID 1 array?

    - by Jamie Ide
    I'm planning to buy a NAS (network attached storage) box and I've picked the Synology DS209. I want to set up a RAID 1 array and I'm wondering if I should buy a matching pair of hard drives or if it would be better to buy from different manufacturers. I'm concerned that a matching pair would be more likely to fail at the same time.


  • Dell PowerEdge R720 - Corrupted RAID

    - by BT643
    Apologies in advance for the lengthy question. We have a Dell PowerEdge R720 server with:

        2 x 136GB SAS drives in RAID 1 for the OS (Ubuntu Server 12.04)
        6 x 3TB SATA drives in RAID 5 for data

    A few days ago we were getting errors when trying to access files on the large RAID 5 partition. We rebooted the server and got a message saying the RAID controller had found a foreign config. We've had this before, and just needed to use Dell's RAID configuration utility to import the foreign config on the RAID. Last time this worked, but this time it started doing a disk check and then we got this:

        FSCK has returned the following:
        "/dev/sdb1 inode 364738 has a bad extended attribute block 7
        /dev/sdb1 unexpected inconsistency run fsck manually (i.e without -a or -p options)
        MOUNTALL fsck /ourdatapartition [1019] terminated with status 4
        MOUNTALL filesystem has errors /ourdatapartition
        errors were found while checking the disk drive for /ourdatapartition
        Press F to fix errors, I to Ignore or M for Manual Recovery"

    We pressed F to try to fix the errors, but it eventually failed with:

        Inode 275841084, i_blocks is 167080, should be 0. Fix? yes
        Inode 275841141 has an invalid extend node (blk 2206761006, lblk 0) Clear? yes
        Inode 275841141, i_blocks is 227872, should be 0. Fix? yes
        Inode 275842303 has an invalid extend node (blk 2206760975, lblk 0) Clear? yes
        ....
        Error storing directory block information (inode=275906766, block=0, num=2699516178): Memory allocation failed
        /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
        e2fsck: aborted
        /dev/sdb1: ***** FILE SYSTEM WAS MODIFIED *****
        mountall: fsck /ourdatapartition [1286] terminated with status 9
        mountall: Unrecoverable fsck error: /ourdatapartition

    We noticed one of the drive lights was not lit at all and thought this drive may have failed and be the problem. We replaced the drive with a spare and pressed "F" to repair again, but we keep getting the same error as above. In the RAID configuration utility, all drives show as "online" and "optimal". We do have this data on another replicated server, so we're not worried about recovering anything; we just want to get the system back online ASAP. The server has 64 or 32GB of memory, I can't remember off the top of my head, but either way, with a 14TB RAID I think it may still not be enough. Thanks

    EDIT - I checked the memory usage while fsck was running, as suggested, and after 2 or 3 minutes it was using up nearly all of our server's memory. When it failed after 5 minutes or so with the error in my post, the memory immediately freed up again.
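
    The "Memory allocation failed" line is the key symptom: e2fsck on a filesystem this size can exhaust RAM. e2fsck can spill its working tables to disk via a [scratch_files] section in /etc/e2fsck.conf (check that your e2fsck version supports it). A hedged sketch:

        sudo mkdir -p /var/cache/e2fsck
        printf '[scratch_files]\ndirectory = /var/cache/e2fsck\n' | sudo tee /etc/e2fsck.conf
        sudo e2fsck -fy /dev/sdb1    # re-run the check; slower, but bounded in memory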


  • How to know which disk has failed on a mirrored RAID? It is marked as DR0

    - by Saariko
    Our secondary DC, which is on a Windows 2008 R2 mirrored software RAID, has lost its sync, and Disk Management displays the "Failed Redundancy" error. How do I know which of the disks has failed (besides trying to replace one and seeing if it loads and syncs)? In Device Manager, under disks, I see both disks; one of them has a "disabled" icon while the other doesn't. The event log displays event ID 7 - bad block on hard disk DR0. The thing is that, looking in Device Manager, both disks are shown in location '0', which is bizarre.


  • Is it possible to use Linux as a Fibre Channel Raid Disk Box?

    - by SvenW
    You probably all know the relatively simple RAID boxes that export a bunch of SATA disks as one big drive via FC, SAS or iSCSI, like the HP StorageWorks MSA2000, the Infortrend EonStor series, or many other models from other manufacturers. Is it possible to create such a device with Linux, a few disks, and an FC controller, using the controller in the reverse of its usual direction? This would come in handy to test some ideas and concepts in an emerging SAN environment.
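
    In principle yes: with a target-capable HBA (e.g. a QLogic card in target mode via tcm_qla2xxx), the Linux LIO stack can export a local md/LVM device as a Fibre Channel LUN. A very hedged sketch of the targetcli side only; the FC fabric setup depends on your HBA and driver, so treat this as an outline, not a recipe:

        sudo targetcli
        /> backstores/block create name=bigraid dev=/dev/md0   # expose the local array as a backstore
        /> ls                                                  # then create the qla2xxx target and map
                                                               # the LUN per the LIO/tcm_qla2xxx docs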


  • Is data=journal on a separate device on Ext4 as good as using a RAID controller with battery backed cache for file system consistency?

    - by Jeff Strunk
    It seems to me that data=journal prevents file system inconsistency in the case of power failure. Using it with a dedicated journal device mitigates the performance penalty of writing the data twice. A power outage would still lose the data that is currently being written to the journal, but the file system on disk would always be consistent. If that amount of loss is acceptable, is a RAID controller with battery backed cache really worthwhile?
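
    For reference, the external-journal setup the question assumes takes only a couple of commands. A hedged sketch with assumed device names (/dev/sdc1 as the small, fast dedicated journal device and /dev/sdb1 holding the filesystem):

        sudo mke2fs -O journal_dev /dev/sdc1            # format the dedicated journal device
        sudo mkfs.ext4 -J device=/dev/sdc1 /dev/sdb1    # create the fs pointing at the external journal
        sudo mount -o data=journal /dev/sdb1 /mnt/data  # mount with full data journaling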


  • How to have 3 operating systems on a mirrored RAID 1?

    - by Chris_45
    How does one proceed if I want to have three operating systems: Windows 7, Ubuntu, and Debian, plus a swap partition, 4 partitions in all? Let's say I have two disks of 640 GB each and make room as follows: 300 GB for Windows 7, 160 GB for Ubuntu, 160 GB for Debian, and the remaining 20 GB for swap. Where do I make these partitions? Do I first make one big RAID 1 array in the BIOS and then partition once Windows 7 is installed, or do I already make these 4 partitions in the BIOS?
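
    The usual approach is one mirror in the BIOS, partitioned afterwards: BIOS fakeraid mirrors whole disks, not partitions. A hedged sketch of the layout with parted, assuming the assembled array appears as /dev/mapper/raid1 (the real name depends on the controller):

        sudo parted /dev/mapper/raid1 mklabel msdos
        sudo parted /dev/mapper/raid1 mkpart primary ntfs 1MiB 300GiB         # Windows 7
        sudo parted /dev/mapper/raid1 mkpart primary ext4 300GiB 460GiB       # Ubuntu
        sudo parted /dev/mapper/raid1 mkpart primary ext4 460GiB 620GiB       # Debian
        sudo parted /dev/mapper/raid1 mkpart primary linux-swap 620GiB 100%   # swap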

