01-13-2009, 08:11 AM
dwu

A multipath performance issue on RHEL 5

A customer ran a multipath performance test on RHEL 5 and found that
throughput is only 30-40 MB/sec, while the same setup reaches 160 MB/sec
with EMC PowerPath, or with multipath on RHEL 4. The customer is using the
default configuration. I found a small difference between the EMC DGC
entries in the multipath hwtable on RHEL 4 and RHEL 5 (shown below), but I
don't think it should affect performance. I'm out of ideas now. Can anyone
help? Thanks.



/* RHEL 4: hwtable entry for EMC DGC arrays */
r += store_hwe_ext(hw,
        "DGC",                          /* vendor */
        "*",                            /* product */
        GROUP_BY_PRIO,                  /* path grouping policy */
        DEFAULT_GETUID,
        "/sbin/mpath_prio_emc /dev/%n", /* priority callout */
        "1 emc",                        /* hardware handler */
        "1 queue_if_no_path",           /* features */
        "emc_clariion",                 /* path checker */
        -FAILBACK_IMMEDIATE,
        "LUNZ",                         /* blacklisted product */
        0,  /* no_path_retry */
        0,  /* rr_weight */
        0); /* rr_min_io */

/* RHEL 5: the equivalent hwtable entry, now a struct initializer */
{
        .vendor        = "DGC",
        .product       = ".*",
        .bl_product    = "LUNZ",
        .getuid        = DEFAULT_GETUID,
        .getprio       = "/sbin/mpath_prio_emc /dev/%n",
        .features      = "1 queue_if_no_path",
        .hwhandler     = "1 emc",
        .selector      = DEFAULT_SELECTOR,
        .pgpolicy      = GROUP_BY_PRIO,
        .pgfailback    = -FAILBACK_IMMEDIATE,
        .rr_weight     = RR_WEIGHT_NONE,
        .no_path_retry = (300 / DEFAULT_CHECKINT), /* RHEL 4 passed 0 here */
        .minio         = DEFAULT_MINIO,
        .checker_name  = EMC_CLARIION,
},
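
The one visible difference is no_path_retry: the RHEL 4 entry passes 0
(unset), while RHEL 5 computes 300 / DEFAULT_CHECKINT. To rule the built-in
defaults in or out, the entry can be overridden from /etc/multipath.conf
and the maps reloaded. A minimal sketch (the rr_min_io value here is only
an illustrative knob to experiment with, and exact keyword support varies
between device-mapper-multipath releases):

devices {
        device {
                vendor                  "DGC"
                product                 ".*"
                path_grouping_policy    group_by_prio
                prio_callout            "/sbin/mpath_prio_emc /dev/%n"
                features                "1 queue_if_no_path"
                hardware_handler        "1 emc"
                path_checker            emc_clariion
                failback                immediate
                rr_min_io               100   # default is DEFAULT_MINIO
        }
}

followed by something like multipath -F && multipath -v2 (with I/O
stopped) to rebuild the maps.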




[root@clnode2 ~]# hdparm -t /dev/mapper/mpath0

/dev/mapper/mpath0:
Timing buffered disk reads: 118 MB in 3.01 seconds = 39.22 MB/sec
[root@clnode2 ~]# hdparm -t /dev/mapper/mpath5

/dev/mapper/mpath5:
Timing buffered disk reads: 132 MB in 3.04 seconds = 43.38 MB/sec
[root@clnode2 tmp]# hdparm -t /dev/sdm

/dev/sdm:
Timing buffered disk reads: 112 MB in 3.04 seconds = 36.89 MB/sec
[root@clnode2 tmp]# hdparm -t /dev/sdaa

/dev/sdaa:
Timing buffered disk reads: 108 MB in 3.02 seconds = 35.81 MB/sec
[root@clnode2 tmp]# hdparm -t /dev/sdf

/dev/sdf:
Timing buffered disk reads: read() failed: Input/output error
[root@clnode2 tmp]# hdparm -t /dev/sdt

/dev/sdt:
Timing buffered disk reads: read() failed: Input/output error
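
One thing to note before the topology below: multipath -l prints only
cached state, which is why every path shows [undef]; multipath -ll
actually runs the path checkers and priority callouts. That would also
show whether sdf and sdt are the passive CLARiiON SP paths (mpath5 lists
them in its secondary path group), which would explain the raw-device I/O
errors above:

# -l shows cached state only; -ll queries the checkers and priorities
multipath -ll mpath5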


[root@clnode2 tmp]# multipath -l
mpath2 (36006016067a21a001c72ef78bdd0dd11) dm-9 DGC,RAID 10
[size=43G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=0][active]
 \_ 1:0:0:1 sdp 8:240 [active][undef]
 \_ 2:0:0:1 sdb 8:16 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:1:1 sdi 8:128 [active][undef]
 \_ 1:0:1:1 sdw 65:96 [active][undef]
mpath1 (36006016067a21a00901b9565bdd0dd11) dm-10 DGC,RAID 10
[size=3.5G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=0][active]
 \_ 2:0:0:3 sdd 8:48 [active][undef]
 \_ 1:0:0:3 sdr 65:16 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:1:3 sdk 8:160 [active][undef]
 \_ 1:0:1:3 sdy 65:128 [active][undef]
mpath0 (36006016067a21a00029dbb1dbed0dd11) dm-0 DGC,RAID 10
[size=30G][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=0][active]
 \_ 2:0:1:0 sdh 8:112 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:0:0 sda 8:0 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 1:0:0:0 sdo 8:224 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 1:0:1:0 sdv 65:80 [active][undef]
mpath6 (36006016067a21a007c189b0cbed0dd11) dm-8 DGC,RAID 10
[size=1.0G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=0][active]
 \_ 2:0:1:6 sdn 8:208 [active][undef]
 \_ 1:0:1:6 sdab 65:176 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:0:6 sdg 8:96 [active][undef]
 \_ 1:0:0:6 sdu 65:64 [active][undef]
mpath5 (36006016067a21a00c4aece82bdd0dd11) dm-6 DGC,RAID 10
[size=43G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=0][active]
 \_ 2:0:1:5 sdm 8:192 [active][undef]
 \_ 1:0:1:5 sdaa 65:160 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:0:5 sdf 8:80 [active][undef]
 \_ 1:0:0:5 sdt 65:48 [active][undef]
mpath4 (36006016067a21a004cf7526fbdd0dd11) dm-7 DGC,RAID 10
[size=43G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=0][active]
 \_ 2:0:1:4 sdl 8:176 [active][undef]
 \_ 1:0:1:4 sdz 65:144 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:0:4 sde 8:64 [active][undef]
 \_ 1:0:0:4 sds 65:32 [active][undef]
mpath3 (36006016067a21a00300e128dbdd0dd11) dm-11 DGC,RAID 10
[size=43G][features=1 queue_if_no_path][hwhandler=1 emc]
\_ round-robin 0 [prio=0][active]
 \_ 2:0:0:2 sdc 8:32 [active][undef]
 \_ 1:0:0:2 sdq 65:0 [active][undef]
\_ round-robin 0 [prio=0][enabled]
 \_ 2:0:1:2 sdj 8:144 [active][undef]
 \_ 1:0:1:2 sdx 65:112 [active][undef]
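
It may also be worth confirming what was actually loaded into the kernel
for one of these maps, in particular the round-robin path-selector
arguments (rr_min_io among them):

# Dump the device-mapper table for one map; the selector arguments
# appear after "round-robin" in the output
dmsetup table mpath5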

--
吴德新 Mark Wu <dwu@redhat.com>
Associate Technical Support Engineer
Global Support Services

Red Hat China
Tel: +86 10 6533 9338

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 
01-13-2009, 02:23 PM
Konrad Rzeszutek

A multipath performance issue on RHEL 5

On Tue, Jan 13, 2009 at 05:11:23PM +0800, dwu wrote:
> A customer ran a multipath performance test on RHEL 5 and found that
> throughput is only 30-40 MB/sec, while the same setup reaches 160 MB/sec
> with EMC PowerPath

Does it reach that speed when you test the individual disks in the
setup with EMC PowerPath?

>
> [root@clnode2 ~]# hdparm -t /dev/mapper/mpath0
>
> /dev/mapper/mpath0:
> Timing buffered disk reads: 118 MB in 3.01 seconds = 39.22 MB/sec
> [root@clnode2 ~]# hdparm -t /dev/mapper/mpath5
>
> /dev/mapper/mpath5:
> Timing buffered disk reads: 132 MB in 3.04 seconds = 43.38 MB/sec
> [root@clnode2 tmp]# hdparm -t /dev/sdm
>
> /dev/sdm:
> Timing buffered disk reads: 112 MB in 3.04 seconds = 36.89 MB/sec
> [root@clnode2 tmp]# hdparm -t /dev/sdaa
>
> /dev/sdaa:
> Timing buffered disk reads: 108 MB in 3.02 seconds = 35.81 MB/sec
> [root@clnode2 tmp]# hdparm -t /dev/sdf
>
> /dev/sdf:
> Timing buffered disk reads: read() failed: Input/output error
> [root@clnode2 tmp]# hdparm -t /dev/sdt
>
> /dev/sdt:
> Timing buffered disk reads: read() failed: Input/output error
>

Since you ran the test directly on the underlying SCSI devices (that
was the next thing to test), which bypasses multipath entirely, the
multipath layer is eliminated as the cause. When you run the same
hdparm test on those disks under RHEL 4, are the numbers the same?

Is your RHEL 4 rig the exact same machine with the exact same fibre
connection? The RHEL 5 host could be on a 1Gb link while the RHEL 4
host is on 2Gb.
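
One quick way to verify the negotiated link speed on the RHEL 5 host is
through sysfs (a sketch; host numbering and which attributes are
populated depend on the HBA driver):

# Print negotiated speed and port state for each FC HBA
for h in /sys/class/fc_host/host*; do
    echo "$h: speed=$(cat $h/speed), state=$(cat $h/port_state)"
done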

 
