Linux Archive


"Scott Moseman" 11-20-2007 01:19 PM

[dm-devel] failover vs multibus
 
Re: default_path_grouping_policy in multipath.conf

failover = 1 path per priority group
multibus = all valid paths in 1 priority group

Does this mean that if I'm using failover I'm not going to get
multiple path throughput? And, on the flip side, if I'm sending data
through multiple paths I'm not going to get failover support?
Obviously I want to have both (or all) channels sending traffic, but
still be able to fail over in the event I lose a path. Maybe this
means my multipath.conf is going to get more complex than the standard
configuration sample?
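
For reference, the stock sample boils down to this one knob in the
defaults stanza of /etc/multipath.conf. A minimal sketch (depending on
the multipath-tools version the keyword is spelled
default_path_grouping_policy or just path_grouping_policy):

    defaults {
            # failover = one path per priority group (active/standby)
            # multibus = all valid paths in one priority group
            path_grouping_policy    failover
    }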

Thanks,
Scott


Tore Anderson 11-20-2007 01:32 PM

[dm-devel] failover vs multibus
 
* Scott Moseman

> Re: default_path_grouping_policy in multipath.conf
>
> failover = 1 path per priority group
> multibus = all valid paths in 1 priority group
>
> Does this mean that if I'm using failover I'm not going to get
> multiple path throughput? And, on the flip side, if I'm sending data
> through multiple paths I'm not going to get failover support?

Correct, and incorrect. With "failover" topology only one path will be
used at a time. With "multibus" all of them will be used, but failed
paths _will not_ be. So if you have eight paths to your storage and are
using multibus topology, load will be balanced over all eight paths. If
one fails, load will be balanced over the remaining seven. And once you
fix the failed path, I/O will be balanced over all eight again.
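
As a sketch of what that looks like in /etc/multipath.conf, assuming you
want multibus as the global default (the WWID and alias in the per-map
stanza below are placeholders, included only to show how the policy can
be overridden for a single LUN):

    defaults {
            path_grouping_policy    multibus
    }

    multipaths {
            multipath {
                    # placeholder WWID and alias, for illustration only
                    wwid                    3600a0b80000f1234000012345678abcd
                    alias                   data01
                    path_grouping_policy    multibus
            }
    }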

There are also the group_by_prio and group_by_serial topologies, which
are normally used in setups with an active/passive controller pair (most
midrange gear is built this way). In that case I/O is load balanced only
over the paths to the volume's primary controller, while the remaining
paths (usually to a standby controller) are used only if all (or enough)
of the primary paths fail.
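
A minimal sketch of that kind of setup, assuming an active/passive array
(the array-specific prioritizer settings, which are what actually
separate the primary paths from the standby ones, are left out here):

    defaults {
            path_grouping_policy    group_by_prio
            # fall back to the preferred (primary) path group as soon
            # as its paths return
            failback                immediate
    }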

Regards
--
Tore Anderson


"Scott Moseman" 11-21-2007 01:54 PM

[dm-devel] failover vs multibus
 
On Nov 20, 2007 8:32 AM, Tore Anderson <tore@linpro.no> wrote:
>
> Correct, and incorrect. With "failover" topology only one path will be
> used at a time. With "multibus" all of them will be used, but failed
> paths _will not_ be. [...]

Hey Tore,

I tested a 'multibus' config and it performs just as I was hoping! :)
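
In case it helps anyone else, a quick way to confirm the policy took
effect once the maps exist:

    # with multibus every path of a LUN shows up in one path group;
    # with failover you would see one group per path
    multipath -ll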

Thanks!
Scott


