Old 04-13-2010, 07:19 PM
Don Krause
 
Default 12-15 TB RAID storage recommendations

On Apr 13, 2010, at 11:57 AM, nate wrote:

> John R Pierce wrote:
>
>> well, IF your controller totally screams and can rebuild the drives at
>> wire speeds with full overlap, you'll be reading 7 * 2TB of data at
>> around 100MB/sec average and writing the XOR of that to the 8th drive in
>> an 8-spindle RAID 5 (14TB total). Just reading one drive at wire speed is
>> 2,000,000MB / 100MB/s == 20,000 seconds, or about 5.5 hours, so that's
>> about the shortest it could possibly be done.
>
> More likely you're looking at 24+ hours, because really no disk system
> is going to read your SATA disks at 100MB/second. If you're really lucky
> perhaps you can get 10MB/second.
>
> With the fastest RAID controllers in the industry my own storage
> array(which does heavy amounts of random I/O) averages about
> 2.2MB/second for a SATA disk, with peaks at around 4MB/second.
>
> Our previous storage array averaged about 4-6 hours to rebuild
> a RAID 5 12+1 array with 146GB 10k RPM disks, on an array that was
> in excess of 90% idle. Rebuilding a 400GB SATA-I array often
> took upwards of 48 hours.
>
> nate
>
>
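The back-of-the-envelope math quoted above can be checked in a few lines (a sketch; the 100 MB/s wire-speed and 2.2 MB/s busy-array figures are the posters' own estimates, not measurements of any particular controller):

```python
def rebuild_hours(drive_mb, mb_per_sec):
    """Hours to stream drive_mb megabytes at mb_per_sec."""
    return drive_mb / mb_per_sec / 3600.0

# Best case: reading one 2TB drive at 100 MB/s wire speed.
print(f"{rebuild_hours(2_000_000, 100):.1f} h")  # -> 5.6 h

# Busy-array case: nate's observed 2.2 MB/s average per SATA disk.
print(f"{rebuild_hours(2_000_000, 2.2):.0f} h")  # -> 253 h, over 10 days
```

The gap between the two results is the whole argument of this subthread: rebuild time is bounded by effective per-disk throughput under load, not by the drive's rated sequential speed.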


For a "real life" example, we have a 3 year old 12x 1TB SATA box using an Adaptec RAID controller, doing RAID 6 that takes about 3 days to rebuild the array each time a drive fails. Which, to date, has happened 10 times... (Fortunately, this is only a BackupPC box.)

FWIW, we've not experienced a second drive failure during the rebuild process, yet. But we have had drives fail within a few weeks of each other, so it's probably going to happen one of these days..

--
Don Krause
Head Systems Geek,
Waver of Deceased Chickens.
Optivus Proton Therapy, Inc.
www.optivus.com
"This message represents the official view of the voices in my head."






_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
 
Old 04-13-2010, 07:24 PM
Drew Weaver
 
Default 12-15 TB RAID storage recommendations

Those drives are likely fading out of the array because they aren't meant to be in arrays in the first place; Adaptec has told us that if you use consumer drives with their cards, you are operating at your own risk.

-Drew


 
Old 04-13-2010, 07:25 PM
"James A. Peltier"
 
Default 12-15 TB RAID storage recommendations

On Tue, 13 Apr 2010, Boris Epstein wrote:

> Hello listmates,
>
> I would like to build a 12-15 TB RAID 5 data server to run under
> CentOS. Any recommendations as far as hardware, configuration, etc?
>
> Thanks.
>
> Boris.

Smaller volumes are best, but really it depends on your I/O type as well.
I have 15TB volumes loaded with medical imaging data that happily run and
fsck just fine. We've had a couple of disk failures, and the MD3000
and MD1000 units handled this just fine, taking around 26 hours to sync
1TB drives. The file system here is XFS.

On the other hand, I have natural language data sets which are millions of
small files residing on a 4.5TB ext4 file system. This file system has
had a problem, and to this day I still cannot perform a file system check
to correct the errors, because the e4fsck program chews up more than 42GB
of memory and then dies. For details, check out:

https://bugzilla.redhat.com/show_bug.cgi?id=570639

What I'm trying to say is: understand your usage patterns. Large
streaming files are far less intensive on the controller than millions of
small files.

Understand your hardware and what it is capable of in each configuration.
RAID-0 vs 5 vs 6 vs 10. It is incredibly important.
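The capacity side of that RAID-level comparison can be sketched quickly (a sketch only; rebuild behavior and random-I/O performance differ between these levels far more than raw capacity does):

```python
# Usable drive count per RAID level, for n identical drives.
# RAID 10 assumes n is even (mirrored pairs, then striped).

def usable_drives(level, n):
    if level == "raid0":
        return n          # striping only, no redundancy
    if level == "raid5":
        return n - 1      # one drive's worth of parity
    if level == "raid6":
        return n - 2      # two drives' worth of parity
    if level == "raid10":
        return n // 2     # half lost to mirroring
    raise ValueError(level)

for level in ("raid0", "raid5", "raid6", "raid10"):
    print(f"{level}: {usable_drives(level, 12)} of 12 drives usable")
```

With the 1TB and 2TB drives discussed in this thread, that difference of one or two drives per array is the direct price of surviving a second failure during a multi-day rebuild.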

Understand your file system. Figure out what file system works best for
your workload, how it functions and how the underlying hardware needs to
be configured to maximize throughput.

That is all for now.


--
James A. Peltier
Systems Analyst (FASNet), VIVARIUM Technical Director
HPC Coordinator
Simon Fraser University - Burnaby Campus
Phone : 778-782-6573
Fax : 778-782-3045
E-Mail : jpeltier@sfu.ca
Website : http://www.fas.sfu.ca | http://vivarium.cs.sfu.ca
http://blogs.sfu.ca/people/jpeltier
MSN : subatomic_spam@hotmail.com

TEAMWORK
There's power in numbers. Learn to work together.
 
Old 04-13-2010, 07:29 PM
Seth Bardash
 
Default 12-15 TB RAID storage recommendations

Just finished building a new server for in-house use: 12.8TB.

Needed an NFS and Samba server which could store 10TB and be reliable.
Also wanted to replace our DHCP server and our internal DNS server,
so needed to run dnsmasq plus ntp for time serving.

This machine replaces 3 older units.

We used a Supermicro 3U case with 15 SATA shuttles and a 2N+1 760-watt PS,
found on eBay for $300.00.

Used a Supermicro H8DME-2 MB because I had one sitting around.
If I were going to buy one, it would have been a Tyan S2932G2NR-SI for
reliability (we have sold a bunch of these and they always work).

Installed dual AMD 2382's and 16 GB DDR2-800 RECC, plus 4-wire HSFs.

Went to the 3ware online store and purchased a 9550SXU-16ML with the
breakout cables for $340.00. Yes, I know it's PCI-X. It was inexpensive,
and it runs almost as fast as the 9650 series at twice the price.

Installed 15 WD1001FALS disks. Very reliable and low-cost, even though
they are desktop drives. Used these since we always have 3 on the shelf
as spares, and WD turns a bad one around in about 5 days. We have
installed over 200 of these drives in RAID arrays and have had 2 fail
in the last 6 months.

BTW, initializing took <8 hours, not days. Set it up as a RAID 5 array
with one hot spare and auto-rebuild. Have tested it with both CentOS 5.4
x86_64 and openSUSE 11.1 x86_64. Also installed an LSI SCSI card and a
Sony AIT-5 tape drive with Bacula for data backup and recovery. Again,
all worked fine, and the machine was able to do backups and keep a GigE
connection saturated.

Using an in-house piece of code called disktest, we are seeing 268 MB/sec
on writes and 347 MB/sec on reads, using 32 GB test files, 131K buffer
sizes, and single threading.
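disktest is in-house code, but a minimal sequential-write measurement in the same spirit might look like the sketch below. The file path and total size are placeholders, scaled well down from the 32 GB test files; a real run needs files much larger than RAM so the page cache doesn't inflate the number.

```python
import os
import time

# Sequential-write throughput sketch (disktest itself is not shown).
PATH = "testfile.bin"        # hypothetical path on the array under test
BUF_SIZE = 131 * 1024        # 131K buffers, as in the post
TOTAL = 64 * 1024 * 1024     # 64 MB here; real runs used 32 GB files

buf = b"\0" * BUF_SIZE
start = time.perf_counter()
with open(PATH, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += BUF_SIZE
    f.flush()
    os.fsync(f.fileno())     # push data to disk before stopping the clock
elapsed = time.perf_counter() - start
print(f"{written / elapsed / 1e6:.1f} MB/s write")
os.remove(PATH)
```

Without the fsync, a small test like this mostly measures memory bandwidth, not the array.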

If you would like to see a system that is almost identical, have a look
at Coraid.com. They use a dual Xeon and a custom BSD kernel.

Hope this config helps......

Seth Bardash

Integrated Solutions and Systems LLC
seth@integratedsolutions.org
Failure cannot survive knowledge and perseverance!
 
Old 04-13-2010, 07:32 PM
Don Krause
 
Default 12-15 TB RAID storage recommendations

They weren't supposed to be consumer drives. The box was provided by the vendor of a disk-disk-tape backup system.

They are Western Digital Enterprise RE2-GP drives.

I wouldn't purchase them again, thanks for asking...

Not that this provides any real info to the OP, other than the time it takes to rebuild the array.

=Don=

On Apr 13, 2010, at 12:24 PM, Drew Weaver wrote:

> Those drives are likely fading out of the array because they aren't meant to be in arrays in the first place, Adaptec has told us that if you use consumer drives with their cards you are operating at your own risk.
>
> -Drew







 
Old 04-13-2010, 07:38 PM
Les Mikesell
 
Default 12-15 TB RAID storage recommendations

On 4/13/2010 2:29 PM, Seth Bardash wrote:
> Just finished building a new server for inhouse use. 12.8TB
>
> Needed a nfs and samba server which could store 10TB and be reliable.
> Also wanted to replace out DHCP server and our internal DNS server.
> So needed to run dnsmasq plus ntp for time serving.
>
> This machine replace 3 older units.

I'm not sure I'd put those on the same box - at least not for most
scenarios. If the machine ever goes down and needs an fsck before
coming up, your DNS and DHCP services are going to be down for a long
time while it completes, killing the rest of your network, too.
--
Les Mikesell
lesmikesell@gmail.com
 
Old 04-13-2010, 07:49 PM
Alan Hodgson
 
Default 12-15 TB RAID storage recommendations

On Tuesday 13 April 2010, Drew Weaver <drew.weaver@thenap.com> wrote:
> Those drives are likely fading out of the array because they aren't meant
> to be in arrays in the first place, Adaptec has told us that if you use
> consumer drives with their cards you are operating at your own risk.
>

Every hard drive dies. It's just a matter of when.

--
"No animals were harmed in the recording of this episode. We tried but that
damn monkey was just too fast."
 
Old 04-13-2010, 07:50 PM
"Joseph L. Casale"
 
Default 12-15 TB RAID storage recommendations

>Unless you have a good storage system..
>
>a blog entry I wrote last year:
>http://www.techopsguys.com/2009/11/24/81000-raid-arrays/
>
>Another one where I ripped into Equallogic's claims:
>http://www.techopsguys.com/2010/03/26/enterprise-equallogic/

Lol, Nate...
The OP was looking at spending a few grand, not a few million,
you show-off.
 
Old 04-13-2010, 07:50 PM
Seth Bardash
 
Default 12-15 TB RAID storage recommendations

On 4/13/2010 1:38 PM, Les Mikesell wrote:
> On 4/13/2010 2:29 PM, Seth Bardash wrote:
>> Just finished building a new server for inhouse use. 12.8TB
>>
>> Needed a nfs and samba server which could store 10TB and be reliable.
>> Also wanted to replace out DHCP server and our internal DNS server.
>> So needed to run dnsmasq plus ntp for time serving.
>>
>> This machine replace 3 older units.
>
> I'm not sure I'd put those on the same box - at least not for most
> scenarios. If the machine ever goes down and needs an fsck before
> coming up, your DNS and DHCP services are going to be down for a long
> time while it completes, killing the rest of your network. too.
>

Point taken, and correct! I always have 2 DNS/DHCP/NTP servers up:
one running, one just a "service xxx start" away. Since we need an
in-house web server for testing before we go live, we use that machine
as the standby.

Seth Bardash

Integrated Solutions and Systems LLC
seth@integratedsolutions.org
Failure cannot survive knowledge and perseverance!
 
Old 04-13-2010, 07:59 PM
"nate"
 
Default 12-15 TB RAID storage recommendations

Joseph L. Casale wrote:
>>Unless you have a good storage system..
>>
>>a blog entry I wrote last year:
>>http://www.techopsguys.com/2009/11/24/81000-raid-arrays/
>>
>>Another one where I ripped into Equallogic's claims:
>>http://www.techopsguys.com/2010/03/26/enterprise-equallogic/
>
> Lol, Nate...
> The op was looking at spending a few grand, not a few million
> you show off

Million? Nowhere close to that; you can get a 12-15TB system (raw)
in the ~$130-150k range (15k RPM). If you want SATA instead,
say $80k.

A few million and you can get a world-record-breaking array with
more than a thousand drives (15k RPM), loaded with all the
software they have.

The capabilities of the system are the same from the low end
to the high end; the only real difference is scale.

nate




 
