Old 06-26-2008, 07:35 PM
Andy Smith
 
thoughts on moving to shared storage for VM hosting

Hi,

Say I've got a handful of servers hosting virtual machines. They're
typical 1U boxes with 4 SATA disks and a 3ware RAID card configured
as RAID-10. The local storage is put into LVM and logical volumes
exported to the VMs.
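
(For concreteness, provisioning a guest here is just an LV per VM.
The volume group and LV names below are made-up examples:)

    # carve a logical volume for a guest out of the local RAID-10 VG
    lvcreate -L 20G -n guest01-disk vg0
    # the VM config then points at /dev/vg0/guest01-disk as its disk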

The servers are split across two racks in different suites of a
datacentre. Each rack has two switches for redundancy, and the two
racks are cross-connected with two links from each switch.

I would like to investigate moving to shared storage and would be
very interested in hearing people's opinions of the best way to go.
The goals/requirements are as follows:

- Easier manageability; moving VMs between hosts is currently a bit
of a hassle involving creating new LVs on the target and copying
the data.

- Better provision for hardware failure; at the moment I need to
keep spare servers but if the storage were more flexible then I
could go to an N+1 arrangement of hosts and quickly bring up VMs
from a failed host on all of the other hosts.

- Lower the cost of scaling up; cheaper CPU nodes with little or no
local disk. I have some hope of reduced power consumption also,
since I am billed per Volt-Amp and this represents over 60% of my
recurring colo charges.

- Should be as cheap as possible while not being any less resilient
than the current setup. If I have to hand build it out of Linux
and NBD then that's fine.

- The current choice of SATA is due to customer demand for large
amounts of storage. It is not economical in the current setup for
me to go to SAS or SCSI even though they offer higher performance,
so it is unlikely to be economical to do so with shared storage.

- No requirement for cluster operation; each block
device/LUN/whatever will only ever be in use by one host at a
time.

Local disks in RAID-10 are probably one of the most performant
configurations, so I have no expectation of greater performance, but
obviously it needs to not totally suck in that regard.

For redundancy purposes there most likely needs to be one disk box
per rack, with servers from both racks able to use either disk box.
Power failures on a rack or suite basis do happen from time to time,
so if there were only one disk box in that scenario then dual power
would not help and the resultant outage to servers in the unaffected
suite would be unacceptable.

The immediate question then is how to do that. Take for example
this disk box:

http://www.span.com/catalog/product_info.php?products_id=4770

Two of those could be used, each in RAID-10, exported by iSCSI and
then software RAID-1 on the servers would allow for operation even
in the face of the complete death of either disk box.
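
(In concrete terms, assuming the two iSCSI LUNs show up on a host as,
say, /dev/sdb and /dev/sdc, the mirror would be something like:)

    # mirror one LUN from each disk box; an internal write-intent bitmap
    # keeps resyncs short if one box drops out and comes back
    mdadm --create /dev/md0 --level=1 --raid-devices=2 \
          --bitmap=internal /dev/sdb /dev/sdc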

The downside is that 75% of the raw capacity is gone. Does anyone
have any feel for how much of a performance penalty would be
incurred by configuring each one as say a RAID-50 (two 5-spindle
RAID-5s, striped) in each with 2 hot spares and then software RAID-1
on the servers?

Given 12x500G disks in each box, this would result in
(((12-2)/2)-1)x2x500G = 4T usable for 12T raw. The
previously-mentioned RAID-10, RAID-1 configuration would result in
(12-2)/2x500G = 2.5T usable for 12T raw. A straight up 10-disk
RAID-5 on each disk box would give (12-2-1)x500G = 4.5T usable for
12T raw, but 10 spindles seems too big for a RAID-5 to me, plus
RAID-5 write performance sucks and I understand -50 goes some way to
mitigate that.

Still 4T usable seems like a poor amount to end up with after buying
12T of storage, but I can't see how anything except RAID-1 across the
two disk boxes would allow for one of them to die. With 6T written
off to start with, perhaps getting 4T out of the remaining 6T does
not seem so bad.

A crazy idea would be to set both disk boxes up as JBOD and export
all 24 disks out, handling all the redundancy on the servers using
MD. That really does sound crazy and hard to manage though!

As for the server end, is software RAID of iSCSI exports the right
choice here? Would I be better off doing multipath?

My next concern is iSCSI. I've not yet played with that in Debian.
How usable is it in Debian Etch, assuming commodity hardware and a
dedicated 1GbE network with jumbo frames? Would I be better off
building my own Linux-based disk box and going with AoE or NBD? The
downside is needing to buy something like two of:

http://www.span.com/catalog/product_info.php?cPath=18_711_2401&products_id=15975

plus two storage servers with SAS to export out AoE or NBD.
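
(From what I can tell, the initiator side in Etch is the open-iscsi
package; something along these lines, with a made-up portal address
and target name:)

    apt-get install open-iscsi
    # discover targets on the dedicated storage network
    iscsiadm -m discovery -t sendtargets -p 192.168.10.10
    # log in; the LUN then shows up as an ordinary /dev/sdX
    iscsiadm -m node -T iqn.2008-06.org.example:box1.lun0 \
             -p 192.168.10.10 --login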

At the moment I am gathering I/O usage statistics from one of my
busiest servers and I'll respond with those later if they will help.

If anyone has any experience of any of this, or any thoughts, I'd
love to hear what you have to say.

Cheers,
Andy
 
Old 06-30-2008, 08:23 PM
Maarten Vink
 
thoughts on moving to shared storage for VM hosting

Hi Andy,



> Local disks in RAID-10 are probably one of the most performant
> configurations, so I have no expectation of greater performance, but
> obviously it needs to not totally suck in that regard.



Your best solution for this would be iSCSI, preferably over a
separate network.





> The immediate question then is how to do that. Take for example
> this disk box:
>
> http://www.span.com/catalog/product_info.php?products_id=4770
>
> Two of those could be used, each in RAID-10, exported by iSCSI and
> then software RAID-1 on the servers would allow for operation even
> in the face of the complete death of either disk box.


Sounds like a good plan, but I'd spend a lot of time testing what
happens when one of the machines goes down. Does the software RAID
detect this, and what happens to your performance when tens or even
hundreds of exported disks start resyncing?
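
(Two knobs worth knowing about when you test this, assuming the
clients mirror with md as you describe: a write-intent bitmap keeps a
resync down to the blocks that changed while a member was away, and
the kernel lets you cap resync speed so rebuilds don't starve the
VMs:)

    # add a write-intent bitmap to an existing md array
    mdadm --grow --bitmap=internal /dev/md0
    # cap resync throughput, in KB/s
    echo 20000 > /proc/sys/dev/raid/speed_limit_max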




> The downside is that 75% of the raw capacity is gone. Does anyone
> have any feel for how much of a performance penalty would be
> incurred by configuring each one as say a RAID-50 (two 5-spindle
> RAID-5s, striped) in each with 2 hot spares and then software RAID-1
> on the servers?


I'd suggest choosing a platform that supports RAID-6 instead of
RAID-5, and use that, optionally with a hot spare. You might even skip
the hot spare, since you can lose up to two active disks in a RAID-6
array.
If you choose RAID-5 with a 12-disk array, sooner or later Murphy will
catch up with you. The chance of a second drive in your RAID-5 array
failing while it's doing a rebuild to the hot spare is larger than
you might think.




> Given 12x500G disks in each box, this would result in
> (((12-2)/2)-1)x2x500G = 4T usable for 12T raw. The
> previously-mentioned RAID-10, RAID-1 configuration would result in
> (12-2)/2x500G = 2.5T usable for 12T raw. A straight up 10-disk
> RAID-5 on each disk box would give (12-2-1)x500G = 4.5T usable for
> 12T raw, but 10 spindles seems too big for a RAID-5 to me, plus
> RAID-5 write performance sucks and I understand -50 goes some way to
> mitigate that.


The disk statistics you're gathering will help a lot in evaluating
your future storage platform. /proc/diskstats will give you a lot of
info on this. If you use cacti for monitoring trends on your servers,
I can send you some scripts that will help you graph disk I/O, both
in megabytes/second and in I/O operations.
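
(A quick and dirty way to eyeball the totals by hand, assuming the
standard 2.6 /proc/diskstats layout and whole-disk names like sda:)

    awk '$3 ~ /^sd[a-z]+$/ {
        printf "%s: %d reads (%.1f MB), %d writes (%.1f MB)\n",
               $3, $4, $6*512/1048576, $8, $10*512/1048576
    }' /proc/diskstats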




> A crazy idea would be to set both disk boxes up as JBOD and export
> all 24 disks out, handling all the redundancy on the servers using
> MD. That really does sound crazy and hard to manage though!
>
> As for the server end, is software RAID of iSCSI exports the right
> choice here? Would I be better off doing multipath?


As I said before, test this well. You might also look at handling the
replication and fail-over on the storage servers instead of the
clients; you'll need to build your own storage boxes for that, and
invest a lot of time in testing the failover scenarios, but it'll be
easier to manage.
You can run RAID-1 over NBD on your storage servers themselves, or use
DRBD to handle the synchronization.
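
(If you go the DRBD route, a resource definition in /etc/drbd.conf is
not much more than the following; hostnames, backing disks and
addresses are made up. The resulting /dev/drbd0 is what you would
then export to the VM hosts:)

    resource r0 {
      protocol C;
      on store1 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.10.10:7788;
        meta-disk internal;
      }
      on store2 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   192.168.10.11:7788;
        meta-disk internal;
      }
    }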



> My next concern is iSCSI. I've not yet played with that in Debian.
> How usable is it in Debian Etch, assuming commodity hardware and a
> dedicated 1GbE network with jumbo frames? Would I be better off
> building my own Linux-based disk box and going with AoE or NBD? The
> downside is needing to buy something like two of:
>
> http://www.span.com/catalog/product_info.php?cPath=18_711_2401&products_id=15975
>
> plus two storage servers with SAS to export out AoE or NBD.



You don't have to use external storage; there are lots of
manufacturers that offer servers with 12 or 16 hot-swappable disks.
Supermicro comes to mind: http://www.supermicro.com/products/chassis/3U/836/SC836TQ-R800.cfm


There are a couple of other suppliers; it shouldn't be too hard to
find one in the UK.


Regards,

Maarten



 
Old 07-01-2008, 07:11 PM
Henrique de Moraes Holschuh
 
thoughts on moving to shared storage for VM hosting

On Mon, 30 Jun 2008, Maarten Vink wrote:
> I'd suggest choosing a platform that supports RAID-6 instead of RAID-5,
> and use that, optionally with a hot spare. You might even skip the hot
> spare, since you can lose up to two active disks in a RAID-6 array.
> If you choose RAID-5 with a 12-disk array, sooner or later Murphy will
> catch up with you. The chance of a second drive in your RAID-5 array
> failing while it's doing a rebuild to the hot spare is larger than you
> might think.

Same here.

And bit-rot is a big deal with RAID-5 (as well as two-disk RAID-1);
RAID-6 mitigates that as well. You'd still need to run a full RAID
read and repair any bad sectors every once in a while, but Murphy
will need to work harder to cause data loss.
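
(With Linux md that periodic full read is just a scrub kicked off by
hand or from cron; md0 below stands in for whatever your array is
called:)

    # background read of the whole array; unreadable sectors get
    # rewritten from the redundant copy
    echo check > /sys/block/md0/md/sync_action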

--
"One disk to rule them all, One disk to find them. One disk to bring
them all and in the darkness grind them. In the Land of Redmond
where the shadows lie." -- The Silicon Valley Tarot
Henrique Holschuh


 
Old 07-01-2008, 07:51 PM
Andrew Miehs
 
thoughts on moving to shared storage for VM hosting

I am coming very late to this discussion -

My understanding is that NetApp recommends NFS with RAID-DP (a
modified RAID-6) for VMware hosting.



Andrew


 
