Old 10-15-2008, 04:48 PM
Volker Armin Hemmann
 
Is an Intel motherboard RAID better or worse than software RAID?

On Wednesday, 15 October 2008, Dan Cowsill wrote:
> Hi guys,
>
> I've had some experience in the past with software (BIOS) RAID.
> Obviously there would be a big performance difference between hardware and
> BIOS RAID. Has anyone done any benchmarks comparing BIOS RAID vs.
> Linux kernel RAID?

Yes. Google for it. Linux software RAID always wins: faster, more flexible.
 
Old 10-16-2008, 07:42 PM
"Paul Hartman"
 
Is an Intel motherboard RAID better or worse than software RAID?

On Wed, Oct 15, 2008 at 8:08 AM, Wolfgang Liebich
<wolfgang.liebich@siemens.com> wrote:
> Hi,
> I'm in the process of setting up a new private computer. I've bought one
> with two drives b/c I wanted to setup a RAID system - RAID1 for
> important partitions, RAID0 for scratch files maybe.
> Additionally I would like to use LVM2 --- on my work PC I've grown to
> like the flexibility of that.
> The Intel DQ35JO motherboard now supports some kind of mobo based RAID.
> Is it better to use this HW raid, or to ignore that and use only the
> linux kernel's software RAID.
> Additionally the LVM2 utilities seem to have limited mirroring/striping
> capabilities of their own - I only want to use RAID levels 0 and 1
> anyways -- would LVM's methods be better here?

Hi,

I've got 4 regular 500GB SATA drives in a Linux software RAID5 (BIOS
fakeraid disabled), not using LVM, and with an AES dm-crypt layer on top of
it, and the performance is really good in my opinion. The encrypted
RAID has a faster read speed than a single, non-RAID, non-encrypted
SATA drive of the same model. Obviously with the encryption & parity
calculations the writes are not as fast, but it's still 25 megabytes
per second write speed, which seems pretty good to me. I have a Core 2
E6600 (overclocked to 3GHz).

The time to rebuild the RAID after a system failure for this 4x500GB
is about 90 minutes.
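
For reference, a rough sketch of how such a stack is typically put together (the device names, the array name /dev/md0 and the dm-crypt mapping name cryptraid are assumptions, not Paul's actual commands):

# four-disk Linux software RAID5, BIOS fakeraid left disabled
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# AES dm-crypt (LUKS) layered on top of the array
cryptsetup luksFormat /dev/md0
cryptsetup luksOpen /dev/md0 cryptraid
mkfs.ext3 /dev/mapper/cryptraid
# watch the initial sync or a rebuild
cat /proc/mdstat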

Good luck,
Paul
 
Old 10-17-2008, 10:40 AM
Wolfgang Liebich
 
Is an Intel motherboard RAID better or worse than software RAID?

Hi,

Alan McKinnon wrote:
> On Wednesday 15 October 2008 15:13:45 Pintér Tibor wrote:
>
>>> I'm in the process of setting up a new private computer. I've bought one
>>> with two drives b/c I wanted to setup a RAID system - RAID1 for
>>> important partitions, RAID0 for scratch files maybe.
>>> Additionally I would like to use LVM2 --- on my work PC I've grown to
>>> like the flexibility of that.
>>> The Intel DQ35JO motherboard now supports some kind of mobo based RAID.
>>> Is it better to use this HW raid, or to ignore that and use only the
>>> linux kernel's software RAID.
>>>
>> that's not hardware RAID; it never was and it never will be.
>>
>
> Rule of thumb:
>
> For any machine you buy to use at home, dump the on-board RAID and use Linux
> software raid instead.
>
> Reason: kernel raid works, that on-board crap doesn't
> Other reason: real hardware raid costs many times more than that entire
> computer you bought for home use
>
>
OK - nearly everyone here (and at work, too) told me to forget the
onboard fake raid controller. So this is what I will do :-)
The RAID HOWTO as well as the LVM HOWTO are, however, woefully out of
date. I will try to work with the linux-raid website's info.

Basically I plan to do:
- Put the boot partition on a RAID1
- Put the root partition on another RAID1 (I thought about putting the
root filesystem into my LVM setup, too -- it is REALLY annoying if the
root partition gets too small),
but it seems safer to let root be its own partition. Or are there any
different opinions here? I'm very interested in hearing experiences...
- Build a RAID1 partition for the rest of the system (will be an LVM2
container)
- Build a last RAID0 partition for scratch data (/tmp, /var/tmp,
/usr/portage).
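
A minimal command-level sketch of that layout (two disks /dev/sda and /dev/sdb partitioned identically; the partition numbers and the 20G size are assumptions):

# /boot on RAID1
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# / on its own RAID1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# RAID1 container for the rest of the system, handed to LVM2
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
pvcreate /dev/md2
vgcreate vg0 /dev/md2
lvcreate -L 20G -n usr vg0
# RAID0 for scratch data (/tmp, /var/tmp, /usr/portage)
mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/sda4 /dev/sdb4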

Any comments? Obviously insane? :-) Don't think so.
- Wolfgang
 
Old 10-17-2008, 11:31 AM
Neil Bothwick
 
Is an Intel motherboard RAID better or worse than software RAID?

On Fri, 17 Oct 2008 12:40:52 +0200, Wolfgang Liebich wrote:

> Basically I plan to do:
> - Put the boot partition on a RAID1
> - Put the root partition on another RAID1 (I thought about putting the
> root filesystem into my LVM setup, too -- it is REALLY annoying if the
> root partition gets too small),
> but it seems safer to let root be its own partition. Or are there any
> different opinions here? I'm very interested in hearing experiences...

I have a small root partition on RAID1 and everything else (except swap)
in an LVM group, also on RAID. This avoids the need for a separate /boot.


--
Neil Bothwick

Isn't 'Criminal Lawyer' rather redundant?
 
Old 10-17-2008, 11:43 AM
Volker Armin Hemmann
 
Is an Intel motherboard RAID better or worse than software RAID?

On Friday, 17 October 2008, Wolfgang Liebich wrote:
> Hi,
>
> Alan McKinnon wrote:
> > On Wednesday 15 October 2008 15:13:45 Pintér Tibor wrote:
> >>> I'm in the process of setting up a new private computer. I've bought
> >>> one with two drives b/c I wanted to setup a RAID system - RAID1 for
> >>> important partitions, RAID0 for scratch files maybe.
> >>> Additionally I would like to use LVM2 --- on my work PC I've grown to
> >>> like the flexibility of that.
> >>> The Intel DQ35JO motherboard now supports some kind of mobo based RAID.
> >>> Is it better to use this HW raid, or to ignore that and use only the
> >>> linux kernel's software RAID.
> >>
> >> that's not hardware RAID; it never was and it never will be.
> >
> > Rule of thumb:
> >
> > For any machine you buy to use at home, dump the on-board RAID and use
> > Linux software raid instead.
> >
> > Reason: kernel raid works, that on-board crap doesn't
> > Other reason: real hardware raid costs many times more than that entire
> > computer you bought for home use
>
> OK - nearly everyone here (and at work, too) told me to forget the
> onboard fake raid controller. So this is what I will do :-)
> The RAID-Howto as well as the LVM howto are however woefully out of
> date. I will try to work with the linux-raid website's info.

the howtos on gentoo-wiki worked well for me.


> - Put the root partition on another RAID1 (I thought about putting the
> root filesystem into my LVM setup, too -- it is REALLY annoying if the
> root partition gets too small),

Yeah, but if you have 20+ GB, root is always big enough. AFAIK LVM kills
write barriers, and you use RAID for better data security, so using LVM is
a bit... counterproductive.


> - Build a RAID1 partition for the rest of the system (will be a LVM2
> container)
> - Build a last RAID0 partition for scratch data (/tmp, /var/tmp,
> /usr/portage, scratch data).

I have /tmp and /var/tmp on tmpfs - /tmp is so small it is not worth wasting a
partition for it.
 
Old 10-18-2008, 05:31 AM
jormaa
 
Is an Intel motherboard RAID better or worse than software RAID?

Wolfgang Liebich wrote:
> Hi,
>
>
> OK - nearly everyone here (and at work, too) told me to forget the
> onboard fake raid controller. So this is what I will do :-)
> The RAID-Howto as well as the LVM howto are however woefully out of
> date. I will try to work with the linux-raid website's info.
>
> Basically I plan to do:
> - Put the boot partition on a RAID1
> - Put the root partition on another RAID1 (I thought about putting the
> root filesystem into my LVM setup, too -- it is REALLY annoying if the
> root partition gets too small),
> but it seems safer to let root be its own partition. Or are there any
> different opinions here? I'm very interested in hearing experiences...
> - Build a RAID1 partition for the rest of the system (will be a LVM2
> container)
> - Build a last RAID0 partition for scratch data (/tmp, /var/tmp,
> /usr/portage, scratch data).
>
> Any comments? Obviously insane? :-) Don't think so.
> - Wolfgang
>
>
>
Likewhoa has a nice write-up of RAID and LVM2 on the Gentoo forums:
http://forums.gentoo.org/viewtopic-t-702681-highlight-likewhoa+recipe.html?sid=e9df56d90808ed712323ca693936a004

Using that, it should be easy enough to adjust it to your needs.

Greets jormaa
 
Old 10-18-2008, 04:54 PM
Peter Humphrey
 
Is an Intel motherboard RAID better or worse than software RAID?

On Friday 17 October 2008 12:43:15 Volker Armin Hemmann wrote:

> I have /tmp and /var/tmp on tmpfs - /tmp is so small it is not worth
> wasting a partition for it.

Yes, and you can enlarge it by creating plenty of swap. My 4GB of real RAM
isn't enough to compile the biggest programs, but by setting /etc/fstab
thus: "tmpfs /tmp tmpfs nodev,nosuid,size=6g 0 0" I get enough /tmp
space when I need it without having to go out and spend money on more RAM.
Neat.
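
A minimal /etc/fstab sketch along those lines (the swap partition /dev/sda2 and the sizes are assumptions):

# tmpfs for /tmp and /var/tmp, allowed to grow to 6 GB and spill into swap
tmpfs      /tmp      tmpfs  nodev,nosuid,size=6g  0 0
tmpfs      /var/tmp  tmpfs  nodev,nosuid,size=6g  0 0
# plenty of swap so the tmpfs overflow has somewhere to go
/dev/sda2  none      swap   sw                    0 0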

--
Rgds
Peter
 
Old 10-20-2008, 09:13 AM
"Conway S. Smith"
 
Is an Intel motherboard RAID better or worse than software RAID?

On Mon, 20 Oct 2008 08:54:20 +0200
Wolfgang Liebich <Wolfgang.Liebich@siemens.com> wrote:
> Hi,
>
> <SNIP>
> >
> > the howtos on gentoo-wiki worked well for me.
>
> I'm working with them, too. Just one question remains: I want to use
> udev. Do I have to create the md devices or does udev do that for me?
>

udev will do it for you. But make sure your initramfs init script
unmounts /sys & /proc. On the box I'm currently setting up, the script
wasn't unmounting /sys in the initramfs, so when it switched to the
real root it thought /sys was already mounted & didn't mount /sys
under the real root, which meant that udev didn't work - that took
me a while to figure out.
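
A minimal sketch of the tail end of such an initramfs init script (busybox-style; the mount point /newroot and the array name are assumptions):

# assemble the arrays before mounting the real root
mdadm --assemble --scan
mount /dev/md1 /newroot
# unmount the pseudo-filesystems so the real root can mount them itself
umount /sys
umount /proc
# hand control to the real init
exec switch_root /newroot /sbin/init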

> >
> >
> > > - Put the root partition on another RAID1 (I thought about
> > > putting the root filesystem into my LVM setup, too -- it is
> > > REALLY annoying if the root partition gets too small),
> >
> > yeah, but if you have 20+ gb root is always big enough AFAIK
> > lvm kills barriers. You use raid for better data security. So
> > using lvm is a bit.. contra productive.
>
> Sorry, I'm neither an LVM nor a RAID expert - could you please
> elaborate on that?
> I like LVM because of the convenience it adds.
>

Write barriers are a feature to allow write caching on the hard disks
w/out endangering filesystem integrity. Write caching helps
performance significantly, but also allows the disk to re-order write
requests - the disk may actually write a write-request that was
received later before a write-request that was received earlier,
which in some situations can lead to filesystem corruption. Write
barriers are a special type of request that the disk is not allowed
to reorder around - everything the disk receives before the write
barrier must be written before anything received after the write
barrier. But in order to work, write barriers need to be supported
by every layer from the filesystem down to the actual disk; if your
filesystem is on top of LVM & LVM doesn't support write barriers,
then you won't be able to use them, and if write caching is enabled
on the actual disks, you may be risking filesystem corruption. The
Device Mapper kernel subsystem (dm-crypt, dm-raid, LVM, etc.) does
not support write barriers - but neither does MD RAID except for
RAID1, so write caching is dangerous except for filesystems directly
on disk partitions or on RAID1 (if the RAID1 is directly on disk
partitions).
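
For illustration, here are the relevant mount options and a way to turn off the drive write cache instead (device names are assumptions, and defaults can vary between kernel versions):

# ext3: barriers are off by default, enable them explicitly
mount -o remount,barrier=1 /
# XFS and reiserfs enable barriers by default; they can be disabled with
# the nobarrier (XFS) or barrier=none (reiserfs) mount options
# alternatively, disable the drives' write cache so reordering cannot bite
hdparm -W 0 /dev/sda
hdparm -W 0 /dev/sdb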

I personally decided against using LVM because from what I read it's
difficult to correctly stripe-align LVM, and incorrect alignment can
have a very big performance impact.


Good luck,
Conway S. Smith
--
The only "intuitive" interface is the nipple. After that, it's all
learned. (Bruce Ediger, bediger@teal.csn.org, in comp.os.linux.misc,
on X interfaces.)
 
Old 10-20-2008, 11:24 AM
Volker Armin Hemmann
 
Is an Intel motherboard RAID better or worse than software RAID?

On Monday, 20 October 2008, Conway S. Smith wrote:
> On Mon, 20 Oct 2008 08:54:20 +0200
>
> Wolfgang Liebich <Wolfgang.Liebich@siemens.com> wrote:
> > Hi,
> >
> > <SNIP>
> >
> > > the howtos on gentoo-wiki worked well for me.
> >
> > I'm working with them, too. Just one question remains: I want to use
> > udev. Do I have to create the md devices or does udev do that for me?
>
> udev will do it for you. But make sure your initramfs init script
> unmounts /sys & /proc.

Just don't use an initramfs/initrd.

> > Sorry, I'm neither an LVM nor a RAID expert - could you please
> > elaborate on that?
> > I like LVM because of the convenience it adds.
>
> Write barriers are a feature to allow write caching on the hard disks
> w/out endangering filesystem integrity. Write caching helps
> performance significantly, but also allows the disk to re-order write
> requests - the disk may actually write a write-request that was
> received later before a write-request that was received earlier,
> which in some situations can lead to filesystem corruption. Write
> barriers are a special type of request that the disk is not allowed
> to reorder around - everything the disk receives before the write
> barrier must be written before anything received after the write
> barrier. But in order to work, write barriers need to be supported
> by every layer from the filesystem down to the actual disk; if your
> filesystem is on top of LVM & LVM doesn't support write barriers,
> then you won't be able to use them, and if write caching is enabled
> on the actual disks, you may be risking filesystem corruption. The
> Device Mapper kernel subsystem (dm-crypt, dm-raid, LVM, etc.) does
> not support write barriers - but neither does MD RAID except for
> RAID1, so write caching is dangerous except for filesystems directly
> on disk partitions or on RAID1 (if the RAID1 is directly on disk
> partitions).

Also, reiserfs and XFS turn barriers on by default; ext3 turns them off by
default, because of 'performance reasons'.
 
Old 10-20-2008, 01:31 PM
"Conway S. Smith"
 
Is an Intel motherboard RAID better or worse than software RAID?

On Mon, 20 Oct 2008 13:24:11 +0200
Volker Armin Hemmann <volker.armin.hemmann@tu-clausthal.de> wrote:
> On Montag 20 Oktober 2008, Conway S. Smith wrote:
> > On Mon, 20 Oct 2008 08:54:20 +0200
> >
> > Wolfgang Liebich <Wolfgang.Liebich@siemens.com> wrote:
> > > Hi,
> > >
> > > <SNIP>
> > >
> > > > the howtos on gentoo-wiki worked well for me.
> > >
> > > I'm working with them, too. Just one question remains: I want
> > > to use udev. Do I have to create the md devices or does udev do
> > > that for me?
> >
> > udev will do it for you. But make sure your initramfs init script
> > unmounts /sys & /proc.
>
> just don't use an initramfs/initrd.
>

From my reading, an initramfs/initrd is the preferred way of handling a
root filesystem on MD RAID - and the only way for metadata 1.[012]
(although I'm having trouble finding where I read that only 0.90
works with in-kernel detection/assembly).

From /usr/share/doc/mdadm-2.6.7/README.initramfs.bz2: "The preferred
way to assemble md arrays at boot time is using 'mdadm' or
'mdassemble' (which is a trimmed-down mdadm). To assemble an array
which contains the root filesystem, mdadm needs to be run before that
filesystem is mounted, and so needs to be run from an initial-ram-fs."
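
A hedged sketch of the two approaches (device and partition names are assumptions): create the root array with the old 0.90 superblock so the kernel can autodetect it at boot, or use newer metadata and assemble it from the initramfs with mdadm.

# option 1: 0.90 metadata (plus partition type 0xfd, "Linux raid autodetect")
# lets the kernel assemble the root array without an initramfs
mdadm --create /dev/md1 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# option 2: 1.x metadata, assembled by mdadm/mdassemble from the initramfs
mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2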


Conway S. Smith
--
The only "intuitive" interface is the nipple. After that, it's all
learned. (Bruce Ediger, bediger@teal.csn.org, in comp.os.linux.misc,
on X interfaces.)
 
