On Tue, Feb 9, 2010 at 4:33 PM, Fernando Gleiser <firstname.lastname@example.org> wrote:
> ----- Original Message ----
>> From: nate <email@example.com>
>> Not sure I know what the issue is, but telling us how many disks,
>> what the RPM of the disks is, and what level of RAID would probably
>> help. It sounds like perhaps you have a bunch of 7200 RPM disks in a
>> RAID setup where the data:parity ratio may be way out of whack (e.g.
>> a high number of data disks to parity disks), which will result in
>> very poor write performance.
> yes, it's a bunch of 12 7200 RPM disks organized as 1 hot spare, 2 parity
> disks, and 9 data disks in a RAID 5 configuration. Is 9/2 a "high ratio"?
A bit. Your RAID array is laid out for a read-mostly workload.
Here is a simple rule: given a HW RAID controller with a write-back
cache, assume each write spans the whole stripe width (the controller
tries to cache full-stripe writes). In that case the write IOPS will
be equal to the IOPS of your slowest disk within the set, since the
next write can't go until the first write completes.
Of course, with RAID5/RAID6 the write performance can be much, much
worse if the write falls short of the whole stripe width, as the
controller then has to read the rest of the stripe (in order to
calculate parity) and then write the whole stripe back out. It sounds
like your data is sequential, though, so this shouldn't happen much,
perhaps only on the first or last stripes, so the simple rule above is
a good guide.
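To put rough numbers on the rule above, here is a back-of-the-envelope
sketch for the 9 data + 2 parity layout in this thread. The per-disk
IOPS figure and the read-modify-write penalty are illustrative
assumptions, not measurements from your array:

```python
# Rough sketch of the write-IOPS rule described above. The disk IOPS
# figure and the penalty are assumptions for a 9 data + 2 parity
# layout, not measurements.

DISK_IOPS = 75       # assumed random IOPS for one 7200 RPM disk
DATA_DISKS = 9
PARITY_DISKS = 2

# Full-stripe writes: every disk writes once in parallel, so the array
# completes about one stripe per slowest-disk IO interval, landing
# 9 data chunks each time.
full_stripe_chunks_per_sec = DISK_IOPS * DATA_DISKS

# Partial-stripe (read-modify-write) writes: with two parity chunks
# the classic penalty is 6 disk IOs per logical write (read old data,
# read both old parities, write new data, write both new parities).
RMW_PENALTY = 6
partial_write_iops = (DATA_DISKS + PARITY_DISKS) * DISK_IOPS / RMW_PENALTY

print(f"full-stripe writes: ~{full_stripe_chunks_per_sec} chunk writes/s")
print(f"partial writes:     ~{partial_write_iops:.0f} IOPS")
```

The gap between the two figures is the reason sequential, full-stripe
workloads fare so much better on wide parity RAID than small random
writes do.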
For software RAID5/RAID6, which doesn't have a write-back cache to
buffer a full stripe, make sure the file system knows the stripe width
and hope it does the right thing.
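As an example of telling the file system the stripe width: with a
hypothetical 64 KB chunk size, 9 data disks, and 4 KB filesystem
blocks, the geometry could be passed to mkfs like this (the device
name and chunk size here are assumptions, so adjust for the real
array):

```shell
# Assumed geometry: 64 KB chunk, 9 data disks, 4 KB blocks.
# stride = 64 KB / 4 KB = 16 filesystem blocks per chunk
# stripe-width = 16 * 9 = 144 blocks per full data stripe
mkfs.ext4 -E stride=16,stripe-width=144 /dev/md0

# Equivalent hints for XFS (su = stripe unit, sw = number of data disks):
mkfs.xfs -d su=64k,sw=9 /dev/md0
```

With those hints the allocator tries to align writes to full stripes,
which avoids the read-modify-write penalty described earlier.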
CentOS mailing list