Search Results

Search found 44026 results on 1762 pages for 'raid question'.

Page 27/1762 | < Previous Page | 23 24 25 26 27 28 29 30 31 32 33 34  | Next Page >

  • Can enabling a RAID controller's writeback cache harm overall performance?

    - by Nathan O'Sullivan
    I have an 8-drive RAID 10 setup connected to an Adaptec 5805Z, running CentOS 5.5 and the deadline scheduler. A basic dd read test shows 400MB/sec, and a basic dd write test shows about the same. When I run the two simultaneously, I see the read speed drop to ~5MB/sec while the write speed stays at more or less the same 400MB/sec. The output of iostat -x, as you would expect, shows that very few read transactions are being executed while the disk is bombarded with writes. If I turn the controller's writeback cache off, I don't see a 50:50 split, but I do see a marked improvement, somewhere around 100MB/s reads and 300MB/s writes. I've also found that if I lower the nr_requests setting on the drive's queue (somewhere around 8 seems optimal) I can end up with 150MB/sec reads and 150MB/sec writes; i.e. a reduction in total throughput, but certainly more suitable for my workload. Is this a real phenomenon? Or is my synthetic test too simplistic? The reason this could happen seems clear enough: when the scheduler switches from reads to writes, it can run heaps of write requests because they all just land in the controller's cache, but they must be carried out at some point. I would guess the actual disk writes are occurring when the scheduler starts trying to perform reads again, resulting in very few read requests being executed. This seems a reasonable explanation, but it also seems like a massive drawback to using writeback cache on a system with non-trivial write loads. I've been searching for discussions around this all afternoon and found nothing. What am I missing?
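    For reference, a minimal sketch of the kind of synthetic test described above. The device name, test file path and sysfs path are assumptions and must be adapted; the write stream goes to a file on the mounted array rather than the raw device so no data is destroyed:

        # run a read stream and a write stream against the array at the same time
        # (/dev/sda and /mnt/array are placeholders for the RAID device and its mount point)
        dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct &
        dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=4096 oflag=direct &
        wait

        # watch per-device read/write transaction counts while the test runs
        iostat -x 1

        # lower the request queue depth mentioned above (value 8) and confirm the scheduler
        echo 8 > /sys/block/sda/queue/nr_requests
        cat /sys/block/sda/queue/scheduler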

    Read the article

  • Why does my simple RAID 1 backup storage perform really slowly sometimes?

    - by randomguy
    I bought 2x Samsung F3 EcoGreen 2TB hard disks to make a backup storage. I put them in RAID 1 (mirror) mode, made a single partition and formatted it to NTFS, running Windows 7. For some reason, accessing the drive's contents (simply by navigating folders) is sometimes really slow. Opening D:/photos/ can sometimes take several seconds before it starts showing any of the folder's contents, and the same applies to other folders. What could be causing this, and what could I do to improve the performance? I remember that there was an option somewhere inside Windows to choose fast access but less reliable persistence of read/write operations. It was a tick box inside some dialog. At the time, it felt like a good idea to remove the tick and get more reliable persistence but slower access, but now I'm regretting it. I'm unable to find this dialog; I've looked hard. I don't know if it would make any difference. Oh, and I've run scan disk and defrag on the drive. No errors, and the speed isn't improved.

    Read the article

  • SQL Server 2005 Disk Configuration: Single RAID 1+0 or multiple RAID 1+0s?

    - by mfredrickson
    Assuming that the workload for the SQL Server is just a normal OLTP database, and that there are a total of 20 disks available, which configuration would make more sense? A single RAID 1+0 containing all 20 disks: this physical volume would contain both the data files and the transaction log files, but two logical drives would be created from this RAID, one for the data files and one for the log files. Or two RAID 1+0s, each containing 10 disks: one physical volume would contain the data files, and the other would contain the log files. The reason for this question is a disagreement between me (SQL Developer) and a co-worker (DBA). For every configuration that I've done, or seen others do, the data files and transaction log files were separated at the physical level and were placed on separate RAIDs. However, my co-worker's argument is that by placing all the disks into a single RAID 1+0, any IO that is done by the server is potentially shared between all 20 disks, instead of just 10 disks in my suggested configuration. Conceptually, his argument makes sense to me. Also, I've found some information from Microsoft that seems to support his position: http://technet.microsoft.com/en-us/library/cc966414.aspx In the section titled "3. RAID10 Configuration", showing a configuration in which all 20 disks are allocated to a single RAID 1+0, it states: "In this scenario, the I/O parallelism can be used to its fullest by all partitions. Therefore, distribution of I/O workload is among 20 physical spindles instead of four at the partition level." But every other configuration I've seen suggests physically separating the data and log files onto separate RAIDs, and everything I've found here on Server Fault suggests the same. I understand that log files will be write-heavy and that data files will be a combination of reads and writes, but does this require that the files be placed onto separate RAIDs instead of a single RAID?

    Read the article

  • Non-volatile cache RAID controllers: what kind of protection is there against NVCACHE failure?

    - by astrostl
    The battery back-up (BBU) model:
      - admin enables write-back cache with BBU
      - writes are cached to the RAID controller's RAM (major performance benefit)
      - the battery saves uncommitted and cached data in the event of a power loss (reliability)
    If I lose power and come back within a day or so, my data should be both complete and uncorrupted. The downside to this is that, if the battery is dead or low, OR EVEN IF IT IS IN A RELEARN CYCLE (drain/charge loops to ensure the battery's health), the controller reverts to write-through mode and performance will suffer. What's more, the relearn cycles are usually automated on a schedule, which may or may not land in the middle of heavy traffic. So, that has to be manually disabled and manually scheduled for off-hours if it's a concern. Annoying either way. NV caches have capacitors with a sufficient charge to commit any uncommitted-to-disk data to flash. Not only is that more survivable in longer loss situations, but you don't have to concern yourself with battery death, wear-out, or relearning. All of that sounds great to me. What doesn't sound great to me is the prospect of that flash module having an issue, though. What if it's completely hosed? What if it's only partially hosed? A bit corrupted at the edges? Relearn cycles can tell when something like a simple battery is failing, but is there a similar process to verify that the flash is functional? I'm just far more trusting of a battery, warts and all. I know the card's RAM can fail, the card itself can fail - that's common territory, though. In case you didn't guess, yeah, I've experienced a shocking-to-me amount of flash/SSD/etc. failure :)

    Read the article

  • SQL SERVER – SQLServer Quiz 2011 – Do you know your execution plan – Two questions – One Answer

    - by pinaldave
    My friend Jacob Sebastian has launched SQL Server Quiz 2011. This time, when he asked me to come up with a quiz question, I wanted to come up with something new that makes participants think about it. After thinking carefully, I came up with a question that I would really like to solve myself. Here are the details: 1) Using a single table, only once, in a single SELECT statement, generate an execution plan which has a JOIN operator. Explain the reason for the same. 2) Using a single table, only once, in a single SELECT statement, generate an execution plan which has a parallelism operator. Explain the reason for the same. Bonus: Create a single query which satisfies both of the above statements. To answer this question and win exciting gifts, please visit the SQL Server Quiz website. Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Failed Software RAID0 on Linux - Attempting to recover data

    - by Gizmo_the_Great
    I have a two-disk RAID0 software RAID (not hardware RAID) that is reported to have failed during boot, and my OS won't start. Using a Live CD, I get the following output:
        sudo mdadm -E /dev/sdc1 /dev/sdd1
        /dev/sdc1:
          Magic : a92b4efc
          Version : 1.2
          Feature Map : 0x0
          Array UUID : 3710713d:fb301031:84b61247:d1d53e0f
          Name : HP-xw9300:0
          Creation Time : Sun Sep 1 15:22:26 2013
          Raid Level : -unknown-
          Raid Devices : 0
          Avail Dev Size : 1465145328 (698.64 GiB 750.15 GB)
          Data Offset : 16 sectors
          Super Offset : 8 sectors
          State : active
          Device UUID : ad427cd2:9f885f57:7f41015f:90f8f6af
          Update Time : Sun Jun 8 12:35:11 2014
          Checksum : a37407ff - correct
          Events : 1
          Device Role : spare
          Array State : ('A' == active, '.' == missing)
        /dev/sdd1:
          Magic : a92b4efc
          Version : 1.2
          Feature Map : 0x0
          Array UUID : 3710713d:fb301031:84b61247:d1d53e0f
          Name : HP-xw9300:0
          Creation Time : Sun Sep 1 15:22:26 2013
          Raid Level : -unknown-
          Raid Devices : 0
          Avail Dev Size : 976771056 (465.76 GiB 500.11 GB)
          Data Offset : 16 sectors
          Super Offset : 8 sectors
          State : active
          Device UUID : 2ea0199d:cb08d9e7:0830448a:a1e1e348
          Update Time : Sun Jun 8 13:06:19 2014
          Checksum : 8883c492 - correct
          Events : 1
          Device Role : spare
          Array State : ('A' == active, '.' == missing)
    GParted lists both disks, detects the flags as 'Raid' and lists the data usage. Can anyone please help me re-assemble the array, just so that I can copy off some of the data that I have not backed up recently? Thanks
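    One commonly suggested (and risky) recovery path for a RAID0 whose superblocks have been mangled is to image the disks first and then re-create the array in place with the original geometry, which rewrites the superblocks but leaves the data area of a RAID0 alone. This is only a sketch under assumptions: the chunk size (512K is a common mdadm default), device order, metadata version and data offset must all match what the array was originally built with, otherwise the filesystem will not be readable, and the image paths are placeholders:

        # image both members first so every attempt is reversible
        ddrescue /dev/sdc1 /backup/sdc1.img /backup/sdc1.log
        ddrescue /dev/sdd1 /backup/sdd1.img /backup/sdd1.log

        # re-create the RAID0 with what is assumed to be the original geometry
        mdadm --create /dev/md0 --level=0 --raid-devices=2 \
              --metadata=1.2 --chunk=512 /dev/sdc1 /dev/sdd1

        # verify the new superblocks show the same Data Offset (16 sectors above)
        # before trusting the result; if not, the filesystem will appear shifted
        mdadm -E /dev/sdc1 | grep -i offset

        # mount read-only and copy the data off before anything else
        mount -o ro /dev/md0 /mnt/recovery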

    Read the article

  • md/raid:md2: cannot start dirty degraded array, kernel panic

    - by nl-x
    After having made use of a remote power switch, my server did not come back online. When I went to the datacenter and rebooted the machine on the spot, I saw the server booting (the CentOS progress bar runs almost all the way to the end) and eventually giving the following messages:
        md/raid:md2: cannot start dirty degraded array.
        md/raid:md2: failed to run raid set.
        md: pers->run() failed ...
        md/raid:md2: cannot start dirty degraded array.
        md/raid:md2: failed to run raid set.
        md: pers->run() failed ...
        Kernel panic - not syncing: Attempted to kill init!
        Pid: 1, comm: init not tainted 2.6.32-279.1.1.el6.i686 #1
        Call Trace:
        [<c083bfbc>] ? panic+0x68/0x11c
        [<c045a501>] ? do_exit+0x741/0x750
        [<c045a54c>] ? do_group_exit+0x3c/0xa0
        [<c045a5c1>] ? sys_exit_group+0x11/0x20
        [<c083eba4>] ? syscall_call+0x7/0xb
        [<c083007b>] ? cmos_wake_setup+0x62/0x112
    The server runs CentOS and has software RAID, and I don't have backups of the RAID settings. The only backups I have are of /home and the database dumps. (Glad to at least have those, though.) Since the server is an old Dell PowerEdge 1750 with no CD-ROM drive, I have no way of booting the machine from a boot disk. I also remember from the past that the server wouldn't boot from a bootable USB disk either. So the only way I know how to boot the server is to go to the datacenter, pick up the server, take it to the office, screw open the case, attach a CD-ROM drive to an IDE slot on the motherboard, and then boot it. I am hoping you guys could help me avoid this. I have looked a bit through the boot options and found the following entries when CentOS is about to boot and I interrupt the boot countdown:
        CentOS (2.6.32-279.1.1.el63.i686)
        CentOS Linux (2.6.32-71.29.1.el6.i686)
        centos (2.6.32-71.el6.i686)
    I think the first configuration is the default one, because choosing it gets me to the above-mentioned kernel panic. The other ones end with something like "Sleeping forever". I can press 'e' to edit boot commands, 'a' to modify kernel arguments and 'c' for a grub command line. The command line gives a grub prompt, but I have no idea how to get the system to boot without (trying to) access the dirty partitions. What I want to do is of course:
        - boot the machine
        - check the hard drive for errors
        - mark the drive as clean
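    Not a fix for the missing CD-ROM drive, but for reference: the md driver has a switch that lets a dirty, degraded array start anyway, and mdadm can force the same thing from a rescue shell. A minimal sketch, where the member partition names are placeholders for whatever actually makes up md2:

        # Option 1: at the GRUB menu, press 'a' and append this to the kernel line:
        #   md-mod.start_dirty_degraded=1
        # (tells the md driver to start dirty degraded arrays instead of refusing)

        # Option 2: from a rescue environment, force-assemble and run the array
        mdadm --assemble --force --run /dev/md2 /dev/sda3 /dev/sdb3

        # then check the filesystem read-only before mounting it read-write
        fsck -n /dev/md2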

    Read the article

  • File Server - Storage configuration: RAID vs LVM vs ZFS vs something else...?

    - by privatehuff
    We are a small company that does video editing, among other things, and need a place to keep backup copies of large media files and make it easy to share them. I've got a box set up with Ubuntu Server and 4 x 500 GB drives. They're currently set up with Samba as four shared folders that Mac/Windows workstations can see fine, but I want a better solution. There are two major reasons for this: 500 GB is not really big enough (some projects are larger), and it is cumbersome to manage the current setup, because individual hard drives have different amounts of free space and duplicated data (for backup). It is confusing now and that will only get worse once there are multiple servers ("the project is on server2 in share4", etc.). So, I need a way to combine hard drives in such a way as to avoid complete data loss with the failure of a single drive, and so users see only a single share on each server. I've done Linux software RAID5 and had a bad experience with it, but would try it again. LVM looks OK but it seems like no one uses it. ZFS seems interesting but it is relatively "new". What is the most efficient and least risky way to combine the HDDs that is convenient for my users? Edit: The goal here is basically to create servers that contain an arbitrary number of hard drives but limit complexity from an end-user perspective (i.e. they see one "folder" per server). Backing up data is not an issue here, but how each solution responds to hardware failure is a serious concern. That is why I lump RAID, LVM, ZFS, and who-knows-what together. My prior experience with RAID5 was also on an Ubuntu Server box, and there was a tricky and unlikely set of circumstances that led to complete data loss. I could avoid that again, but was left with a feeling that I was adding an unnecessary additional point of failure to the system. I haven't used RAID10, but we are on commodity hardware and the most data drives per box is pretty much fixed at 6. We've got a lot of 500 GB drives, and 1.5 TB is pretty small. (Still an option for at least one server, however.) I have no experience with LVM and have read conflicting reports on how it handles drive failure. If a (non-striped) LVM setup could handle a single drive failing and only lose whichever files had a portion stored on that drive (and stored most files on a single drive only), we could even live with that. But as long as I have to learn something totally new, I may as well go all the way to ZFS. Unlike LVM, though, I would also have to change my operating system (?), so that increases the distance between where I am and where I want to be. I used a version of Solaris at uni and wouldn't mind it terribly, though. On the other end of the IT spectrum, I think I may also explore FreeNAS and/or Openfiler, but that doesn't really solve the how-to-combine-drives issue.
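    As a point of comparison, here is roughly what the md + LVM route looks like on Ubuntu Server, so the amount of moving parts can be judged directly. Device names, the RAID level and the volume names are placeholders, not a recommendation for this particular setup:

        # one RAID6 array across the data disks (survives two simultaneous drive failures)
        mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]

        # LVM on top, so the space is a single pool that can grow later
        pvcreate /dev/md0
        vgcreate media /dev/md0
        lvcreate -l 100%FREE -n projects media
        mkfs.ext4 /dev/media/projects

        # mount it and export a single Samba share from that one mount point
        mkdir -p /srv/projects
        mount /dev/media/projects /srv/projects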

    Read the article

  • Cluster Nodes as RAID Drives

    - by BuckWoody
    I'm unable to sleep tonight so I thought I would push this post out VERY early. When you don't sleep your mind takes interesting turns, which can be a good thing. I was watching a briefing today by a couple of friends as they were talking about various ways to arrange a Windows Server Cluster for SQL Server. I often see an "active" node of a cluster with a "passive" node backing it up. That means one node is working and accepting transactions, and the other is not doing any work but simply "standing by" waiting for the first to fail over. The configuration in the demonstration I saw was a bit different. In this example, there were three nodes that were actively working, and a fourth standing by for all three. I've put configurations like this one into place before, but as I was looking at their architecture diagram, it looked familiar - it looked like a RAID drive setup! And that's not a bad way to think about your cluster arrangements. The same concerns you might think about for a particular RAID configuration provide a good way to think about protecting your systems in general. So even if you're not staying awake all night thinking about SQL Server clusters, take this post as an opportunity for "lateral thinking" - a way of combining in your mind the concepts from one piece of knowledge to another. You might find a new way of making your technical environment a little better.

    Read the article

  • SCSI drives not showing up in Linux CentOS 4

    - by Mohammad
    So I have a PowerEdge 6650 with a PERC 3 installed. On the first channel of the RAID controller I have 2x 73GB drives configured in RAID 1. On the second channel I have two 300GB drives that are standalone. The two 300GB drives do not show up in Linux (no /dev/sdb*). Can the PERC 3 support non-RAID and RAID drives combined? Are there any settings I may be missing? Thanks in advance :)

    Read the article

  • Cannot install grub to RAID1 (md0)

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS system, and my /dev/sda HDD was replaced several days ago. I used these commands for the replacement:
        # go to superuser
        sudo bash
        # see RAID state; State should be "clean, degraded"
        mdadm -Q -D /dev/md0
        # remove broken disk from RAID
        mdadm /dev/md0 --fail /dev/sda1
        mdadm /dev/md0 --remove /dev/sda1
        # see partitions
        fdisk -l
        # shutdown computer
        shutdown now
        # physically replace old disk by new, start system again
        # see partitions
        fdisk -l
        # copy partitions from sdb to sda
        sfdisk -d /dev/sdb | sfdisk /dev/sda
        # recreate id for sda
        sfdisk --change-id /dev/sda 1 fd
        # add sda1 to RAID
        mdadm /dev/md0 --add /dev/sda1
        # see RAID state; State should be "clean, degraded, recovering"
        mdadm -Q -D /dev/md0
        # to see status you can use
        cat /proc/mdstat
    This is my mdadm output after the sync:
        /dev/md0:
          Version : 0.90
          Creation Time : Wed Feb 17 16:18:25 2010
          Raid Level : raid1
          Array Size : 470455360 (448.66 GiB 481.75 GB)
          Used Dev Size : 470455360 (448.66 GiB 481.75 GB)
          Raid Devices : 2
          Total Devices : 2
          Preferred Minor : 0
          Persistence : Superblock is persistent
          Update Time : Thu Nov 1 15:19:31 2012
          State : clean
          Active Devices : 2
          Working Devices : 2
          Failed Devices : 0
          Spare Devices : 0
          UUID : 92e6ff4e:ed3ab4bf:fee5eb6c:d9b9cb11
          Events : 0.11049560
          Number Major Minor RaidDevice State
             0     8     1    0          active sync /dev/sda1
             1     8    17    1          active sync /dev/sdb1
    After the rebuild completed, "fdisk -l" says that /dev/md0 does not have a valid partition table. This is my fdisk -l output:
        Disk /dev/sda: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00057d19
        Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *       63   940910984   470455461   fd  Linux raid autodetect
        /dev/sda2    940910985   976768064    17928540    5  Extended
        /dev/sda5    940911048   976768064    17928508+  82  Linux swap / Solaris
        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000667ca
        Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *       63   940910984   470455461   fd  Linux raid autodetect
        /dev/sdb2    940910985   976768064    17928540    5  Extended
        /dev/sdb5    940911048   976768064    17928508+  82  Linux swap / Solaris
        Disk /dev/md0: 481.7 GB, 481746288640 bytes
        2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000
        Disk /dev/md0 doesn't contain a valid partition table
    This is my grub install output:
        root@answe:~# grub-install /dev/sda
        /usr/sbin/grub-setup: warn: Attempting to install GRUB to a disk with multiple partition labels or both partition label and filesystem. This is not supported yet..
        /usr/sbin/grub-setup: error: embedding is not possible, but this is required for cross-disk install.
        root@answe:~# grub-install /dev/sdb
        Installation finished. No error reported.
    So 1) "update-grub" finds only the /dev/sda and /dev/sdb Linux installs, not /dev/md0, and 2) "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices /dev/md0". I cannot boot my system except from /dev/sdb1 and /dev/sda1, and then only in DEGRADED mode... Can anybody help resolve this issue? It is giving me a big headache.
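    For what it's worth, the usual arrangement for a RAID1 system disk is to install GRUB into the MBR of each member disk rather than into /dev/md0 (which, as shown above, carries no partition table of its own). The "multiple partition labels or both partition label and filesystem" warning often points at a stray filesystem signature sitting on the raw /dev/sda device. A sketch of how one might check and proceed, assuming that diagnosis and nothing more:

        # list any stray filesystem/RAID signatures on the raw disk (read-only check)
        wipefs /dev/sda

        # if a leftover signature is found, wipefs -o <offset> /dev/sda removes it;
        # double-check the offset first, since this is destructive

        # install the boot loader into the MBR of both RAID1 members
        grub-install --recheck /dev/sda
        grub-install --recheck /dev/sdb
        update-grub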

    Read the article

  • Deploy Xen

    - by Marek Jelen
    I would like to deploy Xen virtualization; however, I am definitely not sure which way to go. Citrix XenServer has (AFAIK) better management tools, but does not provide software RAID. Ubuntu / CentOS have Xen installations and support software RAID. Is it worth buying HW RAID, or should I just stick with SW RAID and the Xen bundled with a Linux distribution? Which way would you suggest? Are there any other things I should consider?

    Read the article

  • Optimal disk partitions for database setup (15 Drives)

    - by Jason
    We are setting up a new database system and have 15 drives to play with (+2 on-board for the OS). With a total of 15 drives, would it be better to set up all 14 as one RAID-10 block (+1 hot spare), OR split them into two RAID-10 sets, one for data (8 disks) and one for logs/backups (6 disks)? My question boils down to the following: is there a specific point where having more drives in a RAID-10 setup will outperform having the drives broken into smaller RAID-10 sets?
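    One hedged way to settle this for a specific controller and workload, rather than on theory alone, is to build both layouts temporarily and benchmark them with a mixed pattern that resembles data-file plus log traffic. A rough sketch using fio on a Linux test boot (on Windows, fio's windowsaio engine or Microsoft's DiskSpd serves the same purpose); the paths, sizes and job parameters are placeholders:

        # 8K random read/write mix, similar in spirit to OLTP data file I/O
        fio --name=oltp --filename=/mnt/data/fio.test --size=20G \
            --rw=randrw --rwmixread=70 --bs=8k --direct=1 \
            --ioengine=libaio --iodepth=32 --numjobs=4 --runtime=300 --group_reporting

        # sequential write stream, similar in spirit to transaction log traffic
        fio --name=log --filename=/mnt/log/fio.test --size=10G \
            --rw=write --bs=64k --direct=1 \
            --ioengine=libaio --iodepth=8 --runtime=300 --group_reporting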

    Read the article

  • SQLAuthority News – Monthly list of Puzzles and Solutions on SQLAuthority.com

    - by pinaldave
    This month has been a very interesting month for SQLAuthority.com: we had various puzzles in which everybody participated, and lots of interesting conversation that we have shared. Let us start with the latest puzzles and continue going down. A few answers were also posted on Facebook. SQL SERVER – Puzzle Involving NULL – Resolve – Error – Operand data type void type is invalid for sum operator: This puzzle involves NULL and throws an error, and the challenge is to resolve the error. There are multiple ways to resolve it. Readers have contributed various methods, and a few of them have even explained why this error shows up. NULLs are a very important part of the database, and if one of the columns has NULL, the result can be totally different from the one expected. SQL SERVER – T-SQL Scripts to Find Maximum between Two Numbers: I modified a script provided by a friend to find the greater of two numbers. My script had a small bug in it; however, lots of readers have suggested better scripts. Madhivanan has written a blog post on the subject over here. SQL SERVER – BI Quiz Hint – Performance Tuning Cubes – Hints: This quiz is hosted on my friend Jacob's site. I have written many hints on how one can tune cubes. One can now take part and win exciting prizes. SQL SERVER – Solution – Generating Zero Without using Any Numbers in T-SQL: Madhivanan asked a very interesting question on his blog about how to generate zero without using any numbers in T-SQL, and he demonstrated various methods of doing it. I asked the same question on my blog and got many interesting answers, which I have shared. SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once: I have to accept that this was the most difficult puzzle. In this puzzle I asked why, even though the settings are correct, the statistics of the tables are not getting updated. One is tested on various concepts: 1) indexes, 2) statistics, 3) database settings, etc. There are multiple ways of solving this puzzle. It was interesting, as many took interest but only a few got it right. SQL SERVER – Question to You – When to use Function and When to use Stored Procedure: This is a rather straightforward question and not the typical puzzle. The answers from readers are great; however, there is still room for more detailed answers. SQL SERVER – Selecting Domain from Email Address: I wrote on selecting domains from email addresses. Madhivanan made a puzzle out of a simple question and wrote a follow-up post over here; in his post he writes about various ways one can find email addresses from a list of domains. Finally, this is not a puzzle but an amazing guest post by Feodor Georgiev, who has written on the subject Job Interviewing the Right Way (and for the Right Reasons). An article which everyone should read. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • SQL SERVER – Three Puzzling Questions – Need Your Answer

    - by pinaldave
    Last week I asked three questions on my blog and got a very good response to them. I am planning to write a summary post for each of the three questions next week. Before I write the summary posts and give credit to all the valid answers, I wanted to bring the questions to your notice this week. Why SELECT * throws an error but SELECT COUNT(*) does not: This is indeed a very interesting question, as not many realize that SQL Server demonstrates this kind of behavior out of the box. Once you run the code and read the explanation, it totally makes sense why SQL Server behaves the way it does. There is also a Connect item associated with it. Also read the very first comment by Rob Farley; it shares some very interesting detail. Statistics are not Updated but are Created Once: This puzzle has multiple right answers, and I am glad to see many of the correct answers as comments to the blog post. Statistics are very important, and they really help the SQL Server engine come up with an optimal execution plan. DBAs quite often ignore statistics, thinking they do not need to be updated, as they are automatically maintained if the proper database settings are configured (auto update and auto create). Well, in this question we have a scenario where, even though auto create and auto update statistics are ON, the statistics are not updated. There are multiple solutions, but what would your solution be in this case? When to use Function and When to use Stored Procedure: This is a rather open-ended question; there is no right or wrong answer. Every developer has used functions and stored procedures. Here is the chance to justify when to use a stored procedure and when to use a function. I want to acknowledge that they can be used interchangeably, but there are a few cases where one should not do that, and a few cases where one is better than the other. Let us discuss this here. Your opinion matters. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Performance, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • Creating properly aligned partitions on a replacement disk

    - by Marius Gedminas
    I have a typical small office server with two hard disks configured for RAID-1 (mirroring). Each disk has several partitions: one for swap, the others paired in several /dev/mdX arrays. Every couple of years one of the disks dies and is replaced. The replacement typically goes something like this:
        # copy partition table from the remaining good disk to the empty replacement disk
        # (instead of /dev/good_disk and /dev/new_disk I use /dev/sda and /dev/sdb, as appropriate)
        sfdisk -d /dev/good_disk | sfdisk /dev/new_disk
        # install boot loader
        grub-install /dev/new_disk
        # create swap partition reusing the same UUID, so I don't need to edit /etc/fstab
        mkswap /dev/new_disk1 -U xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
        # hot-add the new partitions to my RAID arrays
        mdadm /dev/md0 -a /dev/new_disk2
        mdadm /dev/md1 -a /dev/new_disk5
        mdadm /dev/md2 -a /dev/new_disk6
        mdadm /dev/md3 -a /dev/new_disk7
        mdadm /dev/md4 -a /dev/new_disk8
    The disks were originally partitioned with cfdisk back in 2009, and so the partition table is aligned traditionally to cylinder boundaries (255 heads * 63 sectors). This is not the optimum configuration for new 4K-sector drives. My question is: how can I create a set of partitions for the new disk and ensure they're properly aligned, and have correct sizes for my RAID arrays (rounding up is acceptable, I suppose, but rounding down is definitely not)?
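    A sketch of one possible approach: instead of cloning the old cylinder-aligned table with sfdisk -d, create the partitions on the replacement disk explicitly with parted, starting each one on a MiB boundary and making each at least as large as the partition it replaces. The layout below is simplified (the real disk uses more partitions, including logical ones) and all start/end values are placeholders to be derived from the existing layout:

        # check the physical sector size of the replacement disk
        cat /sys/block/sdb/queue/physical_block_size

        # new, MiB-aligned layout (sizes are examples only)
        parted -s /dev/sdb mklabel msdos
        parted -s -a optimal /dev/sdb mkpart primary linux-swap 1MiB 4GiB
        parted -s -a optimal /dev/sdb mkpart primary 4GiB 54GiB
        parted -s -a optimal /dev/sdb mkpart primary 54GiB 100%
        parted -s /dev/sdb set 2 raid on
        parted -s /dev/sdb set 3 raid on

        # verify alignment: start sectors should be divisible by 2048 (1 MiB)
        parted /dev/sdb unit s print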

    Read the article

  • Did I lose my RAID again?

    - by BarsMonster
    Hi! A little history: two years ago I was really excited to find out that mdadm is so powerful that it can even reshape arrays, so you can start with a smaller array and then grow it as you need. I bought 3x 1TB drives and made a RAID-5. It was fine for a year. Then I bought two more and tried to reshape to a RAID-6 out of 5 drives, and due to some mess with superblock versions, lost all the content. I had to rebuild it from scratch, but 2TB of data were gone. Yesterday I bought 2 more drives, and this time I had everything: a properly built array, a UPS. I disabled the write-intent bitmap, added the 2 new drives as spares and ran a command to grow the array to 7 disks. It started working, but the speed was ridiculously slow, ~100KB/sec. After processing the first 37MB at such an amazing speed, one of the old HDDs failed. I properly shut down the PC and disconnected the failed drive. After bootup it appeared that it had recreated the intent bitmap, as it was still in the mdadm config, so I removed it from the config and rebooted again. Now all I see is that all mdadm processes deadlock and don't do anything:
        PID  USER  PR  NI  VIRT   RES  SHR  S  %CPU  %MEM  TIME+    COMMAND
        1937 root  20   0  12992  608  444  D     0   0.1  0:00.00  mdadm
        2283 root  20   0  12992  852  704  D     0   0.1  0:00.01  mdadm
        2287 root  20   0      0    0    0  D     0   0.0  0:00.01  md0_reshape
        2288 root  18  -2  12992  820  676  D     0   0.1  0:00.01  mdadm
    And all I see in mdstat is:
        $ cat /proc/mdstat
        Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
        md0 : active raid6 sdb1[1] sdg1[4] sdf1[7] sde1[6] sdd1[0] sdc1[5]
              2929683456 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [7/6] [UU_UUUU]
              [>....................]  reshape =  0.0% (37888/976561152) finish=567604147.2min speed=0K/sec
    I've already tried mdadm 2.6.7, 3.1.4 and 3.2 - nothing helps. Did I lose my data again? Any suggestions on how I can make it work? The OS is Ubuntu Server 10.04.2... PS. Needless to say, the data is inaccessible - I cannot mount /dev/md0 to save off the most valuable data. You can see my disappointment - the very specific thing I was excited about has failed twice, taking 5TB of my data with it.
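    Without knowing the exact state, the usual non-destructive first aid for a reshape that has stalled after a member failure is to stop the array and force-assemble it again so the reshape can resume, keeping everything read-only until the data is copied off. This is only a sketch, with placeholder device names and on the assumption that no separate --backup-file was given to the grow command (if one was, it would need to be passed to --assemble as well):

        # freeze any half-started sync/reshape activity before touching the array
        echo frozen > /sys/block/md0/md/sync_action

        # stop the array cleanly, then force-assemble it from the surviving members
        mdadm --stop /dev/md0
        mdadm --assemble --force --run /dev/md0 /dev/sd[b-g]1

        # watch whether the reshape resumes at a sane speed
        watch -n 5 cat /proc/mdstat

        # if it assembles, mount read-only and copy the important data off first
        mount -o ro /dev/md0 /mnt/rescue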

    Read the article

  • Replace a failed drive in Linux RAID

    <b>Tech Republic:</b> "A few weeks ago I had the distinct displeasure of waking up to a series of emails indicating that a series of RAID arrays on a remote system had degraded. The remote system was still running, but one of the hard drives was pretty much dead."

    Read the article

  • PROUHD: RAID for the end-user

    <b>Linuxconfig:</b> "Therefore, there is currently no storage solution that manages heterogeneous storage devices efficiently. In this article, we propose such a solution and we call it PROUHD (Pool of RAID Over User Heterogeneous Devices)."

    Read the article

  • Linux Software RAID recovery

    - by Zoredache
    I am seeing a discrepancy between the output of mdadm --detail and mdadm --examine, and I don't understand why. This output:
        mdadm --detail /dev/md2
        /dev/md2:
          Version : 0.90
          Creation Time : Wed Mar 14 18:20:52 2012
          Raid Level : raid10
          Array Size : 3662760640 (3493.08 GiB 3750.67 GB)
          Used Dev Size : 1465104256 (1397.23 GiB 1500.27 GB)
          Raid Devices : 5
          Total Devices : 5
          Preferred Minor : 2
          Persistence : Superblock is persistent
    seems to contradict this (and it is the same for every disk in the array):
        mdadm --examine /dev/sdc2
        /dev/sdc2:
          Magic : a92b4efc
          Version : 0.90.00
          UUID : 1f54d708:60227dd6:163c2a05:89fa2e07 (local to host)
          Creation Time : Wed Mar 14 18:20:52 2012
          Raid Level : raid10
          Used Dev Size : 1465104320 (1397.23 GiB 1500.27 GB)
          Array Size : 2930208640 (2794.46 GiB 3000.53 GB)
          Raid Devices : 5
          Total Devices : 5
          Preferred Minor : 2
    The array was created like this:
        mdadm -v --create /dev/md2 \
              --level=raid10 --layout=o2 --raid-devices=5 \
              --chunk=64 --metadata=0.90 \
              /dev/sdg2 /dev/sdf2 /dev/sde2 /dev/sdd2 /dev/sdc2
    Each of the 5 individual drives has partitions like this:
        Disk /dev/sdc: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00057754
        Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1         2048       34815       16384   83  Linux
        /dev/sdc2        34816  2930243583  1465104384   fd  Linux raid autodetect
    Backstory: the SATA controller failed in a box I provide some support for. The failure was an ugly one, and so individual drives fell out of the array a little at a time. While there are backups, they are not really done as frequently as we need. There is some data that I am trying to recover if I can. I got additional hardware and was able to access the drives again. The drives appear to be fine, and I can get the array and filesystem active and mounted (using read-only mode). I am able to access some data on the filesystem and have been copying that off, but I am seeing lots of errors when I try to copy the most recent data. When I try to access that most recent data I am getting errors like those below, which makes me think that the array size discrepancy may be the problem:
        Mar 14 18:26:04 server kernel: [351588.196299] dm-7: rw=0, want=6619839616, limit=6442450944
        Mar 14 18:26:04 server kernel: [351588.196309] attempt to access beyond end of device
        Mar 14 18:26:04 server kernel: [351588.196313] dm-7: rw=0, want=6619839616, limit=6442450944
        Mar 14 18:26:04 server kernel: [351588.199260] attempt to access beyond end of device
        Mar 14 18:26:04 server kernel: [351588.199264] dm-7: rw=0, want=20647626304, limit=6442450944
        Mar 14 18:26:04 server kernel: [351588.202446] attempt to access beyond end of device
        Mar 14 18:26:04 server kernel: [351588.202450] dm-7: rw=0, want=19973212288, limit=6442450944
        Mar 14 18:26:04 server kernel: [351588.205516] attempt to access beyond end of device
        Mar 14 18:26:04 server kernel: [351588.205520] dm-7: rw=0, want=8009695096, limit=6442450944
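    One way to sanity-check the two reported sizes is simple arithmetic: for a RAID10 with two copies, usable capacity is (used device size × number of devices) / 2. The figures above fit a 5-device geometry in --detail but only a 4-device geometry in --examine; this is merely an observation about the numbers, not a diagnosis of the failed controller:

        # --detail: 1465104256 KiB per device, 5 devices, 2 copies
        echo $(( 1465104256 * 5 / 2 ))   # 3662760640 KiB, matches Array Size in --detail

        # --examine: 1465104320 KiB per device, but its Array Size only works out for 4 devices
        echo $(( 1465104320 * 4 / 2 ))   # 2930208640 KiB, matches Array Size in --examine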

    Read the article

  • Possible to get SSD TRIM (discard) working on ext4 + LVM + software RAID in Linux?

    - by Don MacAskill
    We use RAID1+0 with md on Linux (currently 2.6.37) to create an md device, then use LVM to provide volume management on top of the device, and then use ext4 as our filesystem on the LVM volume groups. With SSDs as the drives, we'd like to see the TRIM commands propagate through the layers (ext4 - LVM - md - SSD) to the devices. It looks like recent 2.6.3x kernels have had a lot of new SSD-related TRIM support added, including lots more coverage of Device Mapper scenarios, but we still can't seem to get it to cascade down properly. Is this possible yet? If so, how? If not, is any progress being made?
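    Not an answer on whether md passes discards down (that depends on the kernel version in use), but a sketch of how one might check each layer's discard support and issue TRIM from userspace once the stack advertises it. The device names, volume group and mount point are placeholders:

        # does each layer report a non-zero discard granularity?
        cat /sys/block/sda/queue/discard_granularity
        cat /sys/block/md0/queue/discard_granularity
        cat /sys/block/dm-0/queue/discard_granularity

        # tell LVM to pass discards down when LVs are removed or shrunk
        #   /etc/lvm/lvm.conf:  issue_discards = 1

        # mount ext4 with online discard, or run a batched TRIM periodically instead
        mount -o discard /dev/vg0/lv_data /data
        fstrim -v /data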

    Read the article

  • Hyper-V: multiple virtual machines with >2TB volumes from one RAID

    - by wurlog
    I have a server with two RAIDs. RAID 0: 2x 1TB. RAID 6: 8x 2TB. I used the first RAID for the Hyper-V installation itself. The virtual machines should use the RAID 6, but how can I configure that? I need at least one file server with most of the disk space (maybe a second one). But every VHD has a maximum of 2TB, and I can't use the volume directly because other virtual machines also have to have access to the RAID 6. What do I do?

    Read the article

  • MediaShield RAID 5 is showing up as 760GB when the actual size is 2.7TB

    - by Ilya Volodin
    I just finished setting up Windows 2003 Server on my new server, and I started setting up a RAID 5 for it. I have 4x 1TB hard drives. In the MediaShield RAID utility (at boot time) the RAID size is displayed as 2.7TB. Linux also shows it as 2.7TB. However, in Windows, everything (including Windows Disk Management as well as the Windows-based MediaShield utility) reports only 760GB. I already tried converting the partition table from MBR to GUID (GPT), because I read somewhere that Windows can only handle up to 2TB with MBR partition tables; that didn't help much. I tried searching for partitioning utilities that I could use, but couldn't find anything free. I formatted the disk as an NTFS partition from within Linux, and it stopped showing up in Windows altogether; even the MediaShield Windows utility isn't showing it anymore. Windows is installed on a separate 500GB hard drive that's set up not to be part of the RAID. Any ideas?
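    For what it's worth, the 2TB figure mentioned above comes from 32-bit LBA addressing: an MBR partition table (and some 32-bit RAID/driver stacks) can address at most 2^32 sectors of 512 bytes, which is 2 TiB, and capacity beyond that is hidden or wrapped. A back-of-the-envelope check follows; the result only roughly matches the reported 760GB, so treat it as an illustration of the limit rather than a definite diagnosis of this controller:

        # the 32-bit sector-count ceiling
        echo $(( 2**32 * 512 ))                  # 2199023255552 bytes (2 TiB)

        # usable space of a 4 x 1TB RAID5 (about 3 TB), minus the 2 TiB ceiling
        echo $(( 3 * 10**12 - 2**32 * 512 ))     # roughly 801 GB left over past the limit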

    Read the article
