 
10-20-2011, 04:56 PM
Diego Xirinachs

Hardware vs software raid

Again, thanks all for the input.
Michael, great information! Just to be clear, I meant RAID 1, not 0 (little typo there). I currently have software RAID 1 on a Samba server I maintain. I think I'm going to use the same type for this server (only with hardware RAID) because of the easy troubleshooting in case of failure (just replace the RAID card and that's it).

Also, I was wondering if you could provide the Dell mailing list subscribe link? Or where can I find it?


On Thursday, October 20, 2011, Aaron C. de Bruyn wrote:


I'm not sure about the whole hardware-vs-software faster/slower
issue, but I do know that when we picked hardware RAID, we had issues
with cards failing and the underlying drives being inaccessible without
the card.

You'd think that if you have two drives in a RAID 1 and the card died,
you could simply plug one of the disks directly into the motherboard
and continue on without the card, but that doesn't seem to be the case.

The partition tables are usually offset because the card reserves an
initial chunk of the drive for its own data. We would regularly have to
run the Linux 'testdisk' command to locate and re-write the partition
table in the correct spot (roughly as sketched below). Then re-RAIDing
the drives after the replacement RAID card came in would be a huge
hassle, because one drive would have issues from the re-written
partition table.
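The recovery usually went something like this (a sketch; /dev/sda is
just an example device, and testdisk itself is menu-driven, so the
comments describe the menu path we followed):

    # show what the disk currently claims its partitions are (read-only)
    testdisk /list /dev/sda

    # then run it interactively to hunt for the real partition table
    testdisk /dev/sda
    #   -> pick the disk and the partition table type (usually "Intel")
    #   -> Analyse -> Quick Search (or Deeper Search if nothing turns up)
    #   -> check the partitions it finds, then Write to save the table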



On the flip side, I've never had trouble with software RAID on Linux.
It *seems* a little slower to me, but I've never run any tests.

-A



On Thu, Oct 20, 2011 at 07:46, Dan Trevino <dantrevino@gmail.com> wrote:
> Software RAID. The biggest impacts to your performance are going to be
> outside the hardware/software RAID decision (network, DB, etc.).
>
> Also, I'm not sure if this applies in your case, but never depend on drivers
> that are only available from a single source.
>
> Dan
>
> On Oct 19, 2011 7:36 PM, "Diego Xirinachs" <dxiri343@gmail.com> wrote:
>>
>> Hi list,
>>
>> We are about to implement Openbravo in our organization and need a new
>> server.
>>
>> I decided to get a Dell R310, but I don't know if I should get the
>> hardware RAID or just configure software RAID on the server.
>>
>> I have been reading about this but am still undecided.
>>
>> What do you think?
>>
>> Cheers


--
X1R1

--
ubuntu-server mailing list
ubuntu-server@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-server
More info: https://wiki.ubuntu.com/ServerTeam
 
10-20-2011, 05:18 PM
Michael Zoet

Hardware vs software raid


On 20.10.2011 18:56, Diego Xirinachs wrote:
> Again, thanks all for the input.
>
> Michael, great information! Just to be clear, I meant RAID 1, not 0
> (little typo there).

Ah OK that's fine ;-).


> I currently have software RAID 1 on a Samba server I maintain. I
> think I'm going to use the same type for this server (only with
> hardware RAID) because of the easy troubleshooting in case of
> failure (just replace the RAID card and that's it).

You should be careful which RAID controller you choose. In general,
the cheaper the controller, the poorer the performance. I had to learn
this the hard way after buying a server for a customer...

>
> Also, I was wondering if you could provide the Dell mailing list
> subscribe link? Or where can I find it?

There are 2 (maybe more?) lists:

https://lists.us.dell.com/mailman/listinfo/linux-poweredge
https://lists.us.dell.com/mailman/listinfo/linux-poweredge-announce

General Linux starter page from Dell:
http://linux.dell.com/

For now I am really satisfied with the Dell support for Ubuntu.

Hope this helps,
Michael


 
10-20-2011, 05:48 PM
Preston Hagar

Hardware vs software raid

On Thu, Oct 20, 2011 at 11:56 AM, Diego Xirinachs <dxiri343@gmail.com> wrote:
> Again, thanks all for the input.
> Michael, great information! Just to be clear, I meant RAID 1, not 0 (little
> typo there). I currently have software RAID 1 on a Samba server I
> maintain. I think I'm going to use the same type for this server (only
> with hardware RAID) because of the easy troubleshooting in case of failure
> (just replace the RAID card and that's it).
> Also, I was wondering if you could provide the Dell mailing list subscribe
> link? Or where can I find it?
>
>

We have mainly Dell hardware and used their hardware RAID (PERC cards)
for a while, but have been trying to phase them out in favor of Linux
or FreeBSD software RAID. We never saw significant performance gains
(as another poster said, it is often other things, like the network,
that are the bottleneck), and management was much more difficult with
the PERC cards.

As a simple test, if you can before you go into production, set up your
RAID array on your hardware card, then pull a drive out while the
system is running. Then try to go through the steps to get it back
fully online. Do the same with a software RAID machine (a sketch of the
software RAID side is below). We always found software RAID to be much
easier. With most Dell stuff, Ubuntu is often unofficially supported,
but officially they typically only support Red Hat Enterprise Linux and
SUSE Linux Enterprise. It will probably work on Ubuntu, but we have
found that, even with the extra paid support contracts, unless you are
running a version of Linux they like, on their servers, with their
controller cards, and even their drives, purchased from them with their
firmware, they won't really support you and will blame the problem on
whatever they can.
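For the software RAID side, the drill is roughly this (a sketch,
assuming an md RAID 1 array /dev/md0 built from sda1 and sdb1;
substitute your own device names):

    # mark one half of the mirror as failed, then pull it from the array
    mdadm --manage /dev/md0 --fail /dev/sdb1
    mdadm --manage /dev/md0 --remove /dev/sdb1

    # watch the degraded state
    cat /proc/mdstat

    # after swapping the physical disk, copy the partition layout from
    # the surviving drive and re-add the new partition to the mirror
    sfdisk -d /dev/sda | sfdisk /dev/sdb
    mdadm --manage /dev/md0 --add /dev/sdb1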

We have had many problems with PERC-based hardware arrays not
rebuilding, complaining about drives not matching (even though they
do) or other oddities. With software RAID, you can throw just about
anything at it and it will make the disks work.

If you are working for a large organization with hundreds to thousands
of Dell servers, I could see going with their hardware RAID. If you
have a handful of servers (say, fewer than 15 or so), then I would
stick with software RAID. You will get much more support from the
community and wikis than you will from Dell when things go wrong, and
it will be much easier to deal with a drive failure when you need to
replace a disk.

just my 2 cents.

Preston

 
10-20-2011, 06:04 PM
Nick Fox

Hardware vs software raid

I can tell you from experience running an Openbravo server that you will want all the hardware performance you can get. Go hardware RAID.

On Oct 20, 2011 12:49 PM, "Preston Hagar" <prestonh@gmail.com> wrote:
> [...]
 
10-20-2011, 06:31 PM
Preston Hagar

Hardware vs software raid

On Thu, Oct 20, 2011 at 1:04 PM, Nick Fox <nickj.fox@gmail.com> wrote:
> I can tell you from experience running an Openbravo server that you will
> want all the hardware performance you can get. Go hardware RAID.
>

Sometimes software RAID can be faster. Since Openbravo uses a
PostgreSQL backend, I found a PostgreSQL benchmark:

http://wiki.postgresql.org/wiki/HP_ProLiant_DL380_G5_Tuning_Guide

If you look at the data, software RAID beat hardware RAID in many
cases. Hardware often wins on writes, while software often wins on
reads, so it really depends on whether you are doing a lot of writes or
not.

Also, if speed is that vital, instead of spending the extra several
hundred dollars on a hardware RAID card, get more drives and go RAID
10. You get the redundancy of RAID 1 with the speed improvements of
RAID 0 (see the sketch below). Again, it will really depend on the
server, but with the speed and drop in cost of RAM and CPUs (especially
multi-core CPUs), I have seen many setups where software RAID can beat
a lot of hardware RAID cards.
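Creating that kind of array with Linux software RAID is quick (a
sketch, assuming four spare partitions sdb1 through sde1; the device
names are examples only):

    # build a four-disk RAID 10 array
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

    # record the array so it assembles at boot, then add a filesystem
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
    mkfs.ext4 /dev/md0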

I guess if maximum performance is vital, then setting both up and
running benchmarks is probably the only way to be sure you are getting
the most performance you can out of your hardware.

Really though, with Openbravo, what makes it slow is Tomcat and all
that Java, which is generally RAM and CPU intensive rather than I/O
intensive. Most of the I/O will be PostgreSQL, which can be tuned to
run well even with somewhat slower drives.

I am sure everyone has their own experiences; the only thing I would
stress to the OP is to test out failure scenarios, because that will
be the time you want your system to really be good to work with. I
have had too many headaches with hardware RAID failures (with PERC
cards, Areca cards, 3ware cards, and Adaptec cards) to really go
through the stress anymore. If I have one of those cards, I will
typically just use it as pass-through/JBOD and then do software RAID,
although even that can present a problem. Again though, most of my
setups are small business, with a handful of servers, not large
setups, so I could see it being different for a large business.

 
10-20-2011, 08:28 PM
Alex Muntada

Hardware vs software raid

+ Preston Hagar <prestonh@gmail.com>:

> I am sure everyone has their own experiences, the only thing I would
> stress to the OP is to test out failure scenarios, because that will
> be the time you want your system to really be good to work with.

Fully agreed.

> I have had too many headaches with hardware RAID failures (with PERC
> cards, Areca cards, 3ware cards, and Adaptec cards) to really go
> through the stress anymore. If I have one of those cards, I will
> typically just use it as pass-through/JBOD and then do software RAID,
> although even that can present a problem.

Me too.

> Again though, most of my setups are small business, with a handful of
> servers, not large setups, so I could see it being different for a large
> business.

In that sense, the only hardware RAID we use now at work is what's
included in specialized storage appliances (i.e. large arrays of disks
with fault tolerant clustered heads). On our servers, software RAID.

--
Alex Muntada <alexm@alexm.org>
http://alexm.org/

 
12-30-2011, 10:59 PM
Mat Cantin

Hardware vs software raid

+ Preston Hagar <prestonh@gmail.com>:

> I have had too many headaches with hardware RAID failures (with PERC
> cards, Areca cards, 3ware cards, and Adaptec cards) to really go
> through the stress anymore. If I have one of those cards, I will
> typically just use it as pass-through/JBOD and then do software RAID,
> although even that can present a problem.
>
> Again though, most of my setups are small business, with a handful of
> servers, not large setups, so I could see it being different for a large
> business.




I also mostly deal with small businesses, and I almost always go for
Linux software RAID when I can. I've mostly used a hardware RAID card
in pass-through mode as you mentioned, but from your comments it seems
that you use something else when given the choice?


As a general question, when there aren't enough SATA ports on the
motherboard, what hardware do people generally use when setting up a
software RAID in a server?


matoc

 
01-03-2012, 06:27 PM
Preston Hagar

Hardware vs software raid

On Fri, Dec 30, 2011 at 5:59 PM, Mat Cantin <mat@cantinbrothers.ca> wrote:
> I also mostly deal with small businesses, and I almost always go for
> Linux software RAID when I can. I've mostly used a hardware RAID card
> in pass-through mode as you mentioned, but from your comments it seems
> that you use something else when given the choice?
>
> As a general question, when there aren't enough SATA ports on the
> motherboard, what hardware do people generally use when setting up a
> software RAID in a server?
>
> matoc
>

If there aren't enough on-board SATA ports, I tend to favor fairly
generic SATA controller cards with the SIL3124 chipset from Silicon
Image (or there is an updated chipset for PCI-E cards; I'll have to
look it up). It has been really well supported in Linux for quite a
while. As long as you can determine that a board has that chipset, even
some of the cheaper ones, like Syba for instance, work well (a quick
way to verify the chipset is shown below). If you go to Silicon Image's
website:

http://www.siliconimage.com/support/searchresults.aspx?pid=27&cat=15&os=0

you can download "base" BIOS files for the cards (the ones that start
with b) that remove the crummy fake RAID from the card and basically
just present the drives like regular drives plugged into the
motherboard. Depending on your needs, these cards (or a lot of them at
least, since it is easy to saturate the PCI bus) may not give you the
highest performance, but for most applications I find it is more than
sufficient (I have them set up on Zoneminder camera servers, Samba
servers, and NFS file servers, among other things). It just might not
be the best choice for, say, putting a database on.
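If you want to verify what chipset a card actually carries before
trusting the box label, checking from a running system is easy (a
sketch; the grep patterns are just examples):

    # list PCI devices and look for the Silicon Image controller
    lspci | grep -i "silicon image"

    # or include vendor/device IDs for a more precise match
    lspci -nn | grep -i sil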

Hope this helps.

Preston

 
01-03-2012, 11:23 PM
Mat Cantin

Hardware vs software raid

> Depending on your needs, these cards (or a lot of them at least, since
> it is easy to saturate the PCI bus) may not give you the highest
> performance, but for most applications I find it is more than
> sufficient (I have them set up on Zoneminder camera servers, Samba
> servers, and NFS file servers, among other things). It just might not
> be the best choice for, say, putting a database on.


I've looked for a reliable means to monitor whether the PCI bus is
being saturated, only to come up with what seemed like a hardware
gimmick. Is there a way to determine from within the OS whether the
PCI bus is being saturated? I suppose one could calculate the
theoretical PCI bus throughput in MB/s, but this doesn't seem very
concrete to me, since the quality of the hardware often determines
whether the top thresholds can be met.


--
matoc

 
01-04-2012, 08:35 PM
Preston Hagar

Hardware vs software raid

On Tue, Jan 3, 2012 at 6:23 PM, Mat Cantin <mat@cantinbrothers.ca> wrote:
> I've looked for a reliable means to monitor whether the PCI bus is
> being saturated, only to come up with what seemed like a hardware
> gimmick. Is there a way to determine from within the OS whether the
> PCI bus is being saturated? I suppose one could calculate the
> theoretical PCI bus throughput in MB/s, but this doesn't seem very
> concrete to me, since the quality of the hardware often determines
> whether the top thresholds can be met.
>
> --
> matoc
>

Not really, that I have found. I usually just take the claimed and/or
benchmarked drive rates (including what they should be in a RAID
setup), then look at the bus I am using and see whether it looks like
I am going to max it out. For example, a 32-bit/66 MHz PCI bus (the
older kind) has a theoretical max bandwidth of 266 MB/s. SATA 2 has a
raw max bandwidth of 375 MB/s, or about 300 MB/s of actual payload
after encoding overhead (although I haven't really seen any non-SSD
drives that come very close to that), so in theory a single drive's
interface alone could saturate the bus. A RAID 0 or RAID 10 setup
would hit the limit even harder. That said, with drives you are
usually waiting on seek time, not bus throughput, so it isn't always
an issue. (A quick way to measure what a drive actually delivers is
sketched below.)
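To plug a measured number into that math instead of the spec sheet, a
quick read test works (a sketch; /dev/sda is an example device):

    # buffered sequential reads from the raw device; run it two or
    # three times and average, since the numbers bounce around a bit
    hdparm -t /dev/sda

    # add -T to also report cached reads as a sanity baseline
    hdparm -tT /dev/sda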

If you are using PCI-E, bus bandwidth becomes less of an issue, since
each slot gets its own dedicated link instead of sharing one bus (even
a single PCI-E 1.0 lane is about 250 MB/s per direction, and wider or
newer slots scale up from there). You still have to keep an eye on the
cards, though, since a lot of them are PCI designs crammed onto a
PCI-E board, so they don't really take advantage of the faster
interface.

One other thing I have done in the past where performance is an issue
is to run bonnie++ tests to try to gauge what the speeds were (an
example run is below). If bonnie++ reports that my reads or writes are
about what the bus maximum is, then I know the bus is the bottleneck.
If I'm not close, then I know either my array should be fine or (if I
think it should be faster) I need to look at why drive performance is
bad.
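A typical run looks like this (a sketch; the mount point is an
example, and the -s size should be at least twice your RAM so the page
cache doesn't skew the results):

    # sequential write/read/seek benchmark against the array's filesystem
    bonnie++ -d /mnt/raidtest -s 8g -u nobody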

Sorry it isn't a direct answer, but hopefully it might help out some.

Preston

 
