03-13-2009, 02:36 PM
James Pearson

Help setting up multipathing on CentOS 4.7 to an Equallogic iSCSI target

I'm trying to test out an Equallogic PS5500 with a server running CentOS 4.7

I can create a volume and mount it fine using the standard
iscsi-initiator-utils tools.
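
For reference, getting the volume visible was just a case of pointing
the initiator at the array's group IP and restarting the service -
roughly like this (the IP here is just an example):

    # /etc/iscsi.conf
    DiscoveryAddress=192.168.10.20

    service iscsi restart
    # the volume then shows up as a normal SCSI disk, e.g. /dev/sdb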

The Equallogic box has 3 Gigabit interfaces and I would like to try to
set up things so I can read/write from/to the volume using multiple NICs
on the server i.e. get 200+ Mbyte/s access to the volume - I've had some
basic info from Equallogic (Dell) saying that I need to set up DM
multipathing - however, the info I have starts by saying that I have to:

"Make sure the host can see multiple devices representing the same
target volume"

However, I'm not sure how I get to this point i.e. how do I set up the
Equallogic and/or server so that I can see a single volume over multiple
network links?

Has anyone set up one of these Equallogic boxes in this way using
CentOS 4/RHEL 4?

Thanks

James Pearson
 
03-13-2009, 03:49 PM
nate

Help setting up multipathing on CentOS 4.7 to an Equallogic iSCSI target

James Pearson wrote:
> I'm trying to test out an Equallogic PS5500 with a server running CentOS 4.7
>
> I can create a volume and mount it fine using the standard
> iscsi-initiator-utils tools.
>
> The Equallogic box has 3 Gigabit interfaces and I would like to try to
> set up things so I can read/write from/to the volume using multiple NICs
> on the server i.e. get 200+ Mbyte/s access to the volume - I've had some
> basic info from Equallogic (Dell) saying that I need to set up DM
> multipathing - however, the info I have starts by saying that I have to:
>
> "Make sure the host can see multiple devices representing the same
> target volume"
>
> However, I'm not sure how I get to this point i.e. how do I set up the
> Equallogic and/or server so that I can see a single volume over multiple
> network links?

Dell should be able to tell you this.

If you want 2Gbit+/sec throughput, what you should probably be looking
at instead of multipathing is something like 802.3ad. If the EqualLogic
box has different IPs on each interface you will be able to get up
to 3Gbit/s of throughput and still have fault tolerance, without having
to mess with multipathing (assuming the EqualLogic box does IP takeover
when one of the interfaces goes down).
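
A minimal 802.3ad setup on the CentOS 4 side would be something like
the following (untested sketch - the interface names and addresses are
examples, and the switch ports have to be configured for LACP too):

    # /etc/modprobe.conf
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=192.168.10.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth1 (same for eth2/eth3)
    DEVICE=eth1
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none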

Now I have a hard time believing that the software iSCSI client in
Linux can handle that much throughput, but maybe it can. I'd put money
down that you'd be looking at a pretty good chunk of your CPU time
being spent on that, though.

And of course use jumbo frames, and have a dedicated network for
storage (at least dedicated VLANs), and dedicated NICs. Last I heard
EqualLogic didn't support anything other than jumbo frames, so you
should have that set up already.
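
Jumbo frames are just an MTU setting on the storage interfaces (the
switches in between need to support it end to end too), e.g.:

    # add to ifcfg-bond0 (or each storage NIC)
    MTU=9000

    # verify end-to-end with a max-size, non-fragmenting ping
    # (9000 minus 28 bytes of IP+ICMP headers)
    ping -M do -s 8972 <array-ip>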

Multipathing is more for storage systems that have multiple
controllers: detecting when one of those controllers is not accessible
and failing over to another. While you may be able to come up with a
multipathing config with device mapper that has different volumes being
presented down different paths, thus aggregating the total throughput
of the system (to the different volumes), I'm not aware of a way myself
to aggregate paths in device mapper to a single volume.

For my storage array I use device mapper with round-robin multipathing,
which alternates between however many I/O paths I have (typically there
are 4 paths per volume); however, at any given moment for any
given volume only 1 path is used. This can only be used on arrays that
are truly "active-active"; do not attempt this on an active-passive
system or you'll run into a heap of trouble.
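
For what it's worth, the relevant piece of my /etc/multipath.conf looks
roughly like this (the vendor/product strings are placeholders - use
whatever your array actually reports):

    devices {
            device {
                    vendor                  "EXAMPLE"
                    product                 "ARRAY"
                    path_grouping_policy    multibus
                    path_selector           "round-robin 0"
                    failback                immediate
            }
    }

"multipath -ll" should then show all paths to a volume in a single
active path group.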

I haven't used EqualLogic stuff before, but if they have IP takeover
for downed interfaces/controllers then your best bet is likely handling
link aggregation/failover at the network level, moving the controller
failover entirely to the EqualLogic system instead of trying to track
it at the server level (of course with 802.3ad you will be tracking
local network link status at the server level).

My storage vendor gave explicit step-by-step instructions (PDF) as to
what was needed to set up device mapper to work properly with their
systems. Given Dell is a big company too, I'd expect you'd easily be
able to get that information from them as well.

While most of the connectivity on my storage array is Fibre Channel,
there are 4x1Gbps iSCSI ports on it; however, the ports are not
aggregated, so the most any single system can pull from the array at a
time is 1Gbps (despite there being 4x1Gbps ports, two on each HBA,
there is actually only 1Gbps of throughput available on each HBA, so in
essence there are 2x1Gbps ports available). In aggregate though, with
multiple systems hitting the array simultaneously you can drive more
throughput.

That is assuming you still want to go the device mapper route.

nate



 
03-13-2009, 04:50 PM
James Pearson

Help setting up multipathing on CentOS 4.7 to an Equallogic iSCSI target

nate wrote:
>
> Dell should be able to tell you this.

That's what I thought ... I asked if I could get more than 2Gbit/sec
throughput to a single target from a single host - they said yes, but
have not (yet) provided any useful information on how to actually set
this up. I have a support call open, but I thought I'd ask here as well
- just in case anyone else had actually done this with this type of storage.

Thanks

James



 
03-13-2009, 05:42 PM
nate

Help setting up multipathing on CentOS 4.7 to an Equallogic iSCSI target

nate wrote:

> If you want 2Gbit+/sec throughput, what you should probably be looking
> at instead of multipathing is something like 802.3ad. If the EqualLogic
> box has different IPs on each interface you will be able to get up
> to 3Gbit/s of throughput and still have fault tolerance, without having
> to mess with multipathing (assuming the EqualLogic box does IP takeover
> when one of the interfaces goes down).


Now that I think about it, this won't work either for iSCSI to a
single volume. You could get more throughput only with more than one
volume, each mounted at a different IP.

Though it would still be a simpler setup than configuring device mapper
(assuming the EqualLogic box does IP takeover).

If the EqualLogic box has a single IP over multiple interfaces and
runs 802.3ad on them, you also won't get aggregate performance
improvements from a single host to it, as 802.3ad separates the
data transfer on either a per-IP or a per-IP/port basis. If you're
connecting to a single IP/port pair between two systems that each have,
say, 4x1Gbps connections, the maximum number of links you'll ever use
for that flow is 1.
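
The bonding driver's transmit hash policy makes this explicit - even
the most granular setting still pins a given TCP connection to one
slave link:

    # /etc/modprobe.conf - hash on IP+port (layer3+4) instead of MAC
    options bond0 mode=802.3ad miimon=100 xmit_hash_policy=layer3+4

A single iSCSI session is a single TCP connection, so it always hashes
onto the same slave; only multiple sessions can spread across links.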

So I'd say split up your volumes to get faster throughput, but I gotta
say that 1Gbit of performance is VERY fast: more than 100 megabytes per
second, assuming you can get line rate on the software initiator. It
depends on your workload; for mine I typically see about 3MB/second per
SATA disk before the disk is saturated (mostly random writes), so to
get to 100 megabytes per second I'd need 33 disks, just for that one
link.

In my case I have 200 SATA disks, and can manage to get roughly
750 megabytes per second aggregate (peak), though the spindle
response times are very high at those levels.

This storage array's controllers are among the fastest in the world,
but you still may be limited by the disks. The array is rated for
6.4 gigabytes per second with 4 controllers (3.2GB/s on the front end,
3.2GB/s on the back end). With 200 SATA disks they top out at about
1.3-1.4 gigabytes/second of total (front and back end) throughput;
add another 200 SATA disks and the first pair of controllers still
won't be saturated.

nate


 
03-14-2009, 06:58 PM
nate

Help setting up multipathing on CentOS 4.7 to an Equallogic iSCSI target

Ross Walker wrote:

> You could ask on the open-iscsi list too.
>
> I thought if it treated each ip as a separate target portal on the
> initiator you would be able to connect to two "different" targets at
> the same time and let dm-multipath figure out it's the same disk. No?

You can do this, I'm sure of it. But the catch is this doesn't
aggregate the links; even if you use round-robin multipathing, at
any particular instant in time you're only using one link.
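
With open-iscsi on CentOS 5 it would go roughly like this (the IQN and
IPs are made up):

    # discover the target through the array's portal
    iscsiadm -m discovery -t sendtargets -p 192.168.10.20

    # log in to the same target via each portal/IP
    iscsiadm -m node -T iqn.2001-05.com.equallogic:vol1 -p 192.168.10.20 --login
    iscsiadm -m node -T iqn.2001-05.com.equallogic:vol1 -p 192.168.10.21 --login

    # dm-multipath then coalesces the resulting sd devices into one map
    multipath -ll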

From the docs for device-mapper multipath on CentOS 5.2:

Path Group:
A grouping of paths. With DM-MP, only one path group--the
active path group--receives I/O at any time. Within a path
group, DM-MP selects which ready path should receive I/O in a
round robin fashion. Path groups can be in various states (refer to
"Path Group States").

So as far as I can see you can't aggregate paths in CentOS 5.2
multipath either for a single volume. The only way to use more
than one path is to have more than one volume. You could set it
up as active/passive and have each volume "prefer" a different
path, or use round-robin and just know that at some points
in time both volumes will be going down the same path.

I suppose you could aggregate the volumes themselves using LVM
or software RAID, to present a single file system to the OS that
uses more than one path simultaneously depending on what data is
being accessed.
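
For example, striping an LV across two multipathed volumes would look
something like this (the device names are whatever dm-multipath hands
out on your box):

    pvcreate /dev/mapper/mpath0 /dev/mapper/mpath1
    vgcreate iscsivg /dev/mapper/mpath0 /dev/mapper/mpath1

    # -i 2 stripes across both PVs with a 64KB stripe size
    lvcreate -i 2 -I 64 -L 100G -n data iscsivg
    mkfs.ext3 /dev/iscsivg/data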

I think you'll probably find the software iSCSI initiator has more
serious performance bottlenecks before you're able to max out a 1Gbps
link anyway.

nate

