On Tue, Nov 29, 2011 at 9:10 AM, Mark Knecht <email@example.com> wrote:
> On Mon, Nov 28, 2011 at 8:10 PM, Michael Mol <firstname.lastname@example.org> wrote:
> Hi Michael,
> * Welcome to the world of whatever sort of multi-disk environment
> you choose. It's a HUGE topic and a conversation I look forward to
> having as you dig through it.
> * My main compute system here at home has six 500GB WD RE3 drives.
> Five are in use with one as a cold spare. I'm using md. It's pretty
> mature and you have good access to the main developer through the
> email list. I don't know much about dm. If this is your first time
> putting RAID on a box (it was for me) then I think md is a good
> choice. On the other hand you're more system software savvy than I am
> so go with what you think is best for you.
Last time I set up RAID was three or four years ago. Two volumes: one
RAID5 of three 1.5TB drives (Seagate econo drives, but they worked
well enough for me), and one RAID0 of three 1TB drives (WD Caviar Black).
The RAID0 was for some video munging scratch space. The RAID5, I
mounted as /home. Those volumes lasted a couple years, before I
rebuilt all of them as two LVM volume groups, using the same drive sets.
> 1) First lesson - not all hard drives make good RAID hard drives. I
> started with six 1TB WD Green drives and found they made _terrible_
> RAID units so I took them out and bought _real_ RAID drives. They were
> only half as large for the same price but they have worked perfectly
> for nearly 2 years.
What makes a good RAID unit, and what makes a terrible RAID unit?
Unless we're talking rapid failure, I'd think anything striped would
be faster than the bare drive alone.
> 2) Second lesson - prepare to build a few RAID configurations and
> TEST, TEST, TEST __BEFORE__ (BEFORE!!!) you make _ANY_ decision about
> what sort of RAID you really want. There are a LOT of parameter
> choices that affect performance, reliability, and capacity, and I think to
> some extent your ability to change RAID types later on. To name a few:
> The obvious RAID type (0,1,2,3,4,5,6,10, etc.) but also chunk size,
> metadata type, physical layout for certain RAID types, etc. I strongly
> suggest building 5-10 different configurations and testing them with
> bonnie++ to gauge speed. I didn't do enough of this before I built
> this system and I've been dealing with the effects ever since.
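To make that testing concrete, a throwaway loop over a few chunk sizes
might look something like this (the device names, mount point, and ext4
choice are all assumptions; adjust for your own hardware):

```shell
# Build, benchmark, and tear down a 5-disk RAID5 at several chunk sizes.
# /dev/sd[b-f]1 and /mnt/test are hypothetical -- substitute your spares.
for chunk in 64 128 256 512; do
    mdadm --create /dev/md0 --level=5 --raid-devices=5 \
          --chunk=${chunk} /dev/sd[b-f]1
    mkfs.ext4 /dev/md0 && mount /dev/md0 /mnt/test
    bonnie++ -d /mnt/test -u nobody > bonnie-chunk${chunk}.txt
    umount /mnt/test
    mdadm --stop /dev/md0
    mdadm --zero-superblock /dev/sd[b-f]1   # wipe so the next pass starts clean
done
```

Comparing the bonnie++ output files side by side makes the chunk-size
tradeoffs fairly obvious before any real data lands on the array.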
I'm familiar with the different RAID types and how they operate. I'm
also familiar with some of the impacts of chunk size, and what it can
mean for caching and sector overlap (for SSDs and 2TB+ drives, at least).
The purpose of this array (or set of arrays) is for volume aggregation
with a touch of redundancy. Speed is a tertiary concern, and if it
becomes a real issue, I'll adapt; I've got 730GB left free on the
system's primary disk which I can throw into the mix any which way.
(use it raw as I currently am, or stripe a logical volume into it...)
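Roughly what I have in mind, sketched with made-up device names and
sizes (assuming the big array comes up as /dev/md0):

```shell
# One md array as a single LVM physical volume, carved up as needed.
pvcreate /dev/md0
vgcreate bulk /dev/md0
lvcreate -L 200G -n scratch bulk
mkfs.ext4 /dev/bulk/scratch

# Later, the free space on the primary disk could join the same VG:
# pvcreate /dev/sda4 && vgextend bulk /dev/sda4
```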
> 3) Third lesson - think deeply about what happens when 1 drive goes
> bad and you are in the process of fixing the system. Do you have a
> spare drive ready?
Don't plan to, but I don't plan on storing vital or
operations-dependent data in the volume without backup. These are
going to be volumes of convenience.
> Is it in the box? Hot or cold? What happens if a
> second drive in the system fails while you're rebuilding the RAID?
Drop the failed drives, rebuild with the remaining drives, and restore
from a backup.
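In mdadm terms that's roughly the following (member device names
hypothetical):

```shell
# Mark the bad member failed and pull it from the array.
mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1
# ...physically swap in the replacement disk, then add it back:
mdadm --manage /dev/md0 --add /dev/sdd1
# Follow the rebuild progress:
watch cat /proc/mdstat
```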
> It's from the same manufacturing lot so it probably suffers from the
> same weaknesses. My decision for the most part was (for data or system
> drives) 3-drive RAID1 or 5-drive RAID6. For backup I went with 5-drive
> RAID5. It all makes me feel good, but it's too complicated.
> 4) Lastly - as they say all the time on the mdadm list: RAID is not a backup.
Absolutely. I've had discussions of RAID and disk storage many times
with some rather adept and experienced friends, but dmraid and btrfs are
relatively new on the block, and the gentoo-user list is a new,
mostly-untapped resource of expertise. I wanted to pick up any
additional knowledge or references I hadn't heard before.
> * Personally I like your idea of one big RAID with lvm on top but I
> haven't done it myself. I think it's what I would look at today if I
> was starting from scratch, but I'm not sure. It would take some study.
It's probably the simplest way forward. I notice there are some
network-syncing block devices in the kernel (acting as RAID1 over a
network) that I'd like to play with, but I haven't done anything with
OCFS2 (or whatever other multi-operator filesystems are in the 3.0.6
kernel).
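(The network-RAID1 block device I'm thinking of is DRBD. If I ever get
around to it, a minimal resource definition looks roughly like this;
the hostnames, devices, and addresses below are all invented:)

```
resource r0 {
  protocol  C;            # synchronous replication
  device    /dev/drbd0;
  disk      /dev/sdb1;
  meta-disk internal;
  on alpha {
    address 192.168.1.10:7789;
  }
  on beta {
    address 192.168.1.11:7789;
  }
}
```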
> Hope this helps even a little,
Certainly does. Also, your email has a permanent URL through at least
a couple mailing list archivers, so it'll be a good thing to link to
in the future.