05-11-2010, 06:59 PM
Rodrigo Nascimento
DM-MP Read Performance

Hi All,

I have an Oracle Enterprise Linux box (kernel 2.6.18...) accessing LUNs on a NetApp box.
On the NetApp side, I have two Gigabit Ethernet NICs; each NIC is a member of a VLAN, and these two interfaces are members of a
On the host side, I have two Gigabit Ethernet NICs, each a member of a VLAN.
I have this configuration in the DM-MP config file (/etc/multipath.conf):

defaults {
        user_friendly_names no
        max_fds 4096
        rr_min_io 128
}

blacklist {
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}

devices {
        device {
                vendor "NetApp"
                product "LUN"
                flush_on_last_del yes
                getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
                prio_callout "/sbin/mpath_prio_netapp /dev/%n"
                features "1 queue_if_no_path"
                hardware_handler "0"
                path_grouping_policy multibus
                failback immediate
                path_checker directio
        }
}
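With `path_grouping_policy multibus`, both paths should land in a single round-robin path group. A quick sketch for checking that before benchmarking (both commands are standard device-mapper tools; the fallback just keeps the sketch runnable on a host without multipath-tools installed):

```shell
# Confirm the assembled map groups both paths (sdb, sdc) into one
# round-robin path group before measuring throughput.
if command -v multipath >/dev/null 2>&1; then
    multipath -ll     # path groups, path states, selector policy
    dmsetup table     # raw dm table: look for 'round-robin 0' here
else
    echo "multipath-tools not installed on this host"
fi
```

If `multipath -ll` shows two path groups instead of one, only one path carries I/O at a time and throughput will be limited to a single NIC.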

When I simulate write operations on a LUN, I reach 90MB/s on each NIC.
When I simulate read operations on a LUN, I reach only 40MB/s on each NIC.
That is a very poor number.
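For reference, a minimal sketch of a sequential-read test of this kind using dd. The TESTFILE path is a placeholder I am assuming for illustration; on the real host you would point it at the multipath device (e.g. /dev/mapper/&lt;wwid&gt;) instead of a scratch file:

```shell
# Placeholder path; substitute the real dm-mp device on the host.
TESTFILE=${TESTFILE:-./dmmp-read-test.img}

# Create a small scratch file when no block device was supplied.
[ -b "$TESTFILE" ] || dd if=/dev/zero of="$TESTFILE" bs=1M count=32 2>/dev/null

# Sequential read; dd prints the throughput on its last line. On a real
# LUN, add iflag=direct so the page cache does not inflate the result.
dd if="$TESTFILE" of=/dev/null bs=1M 2>&1 | tail -n 1

# Remove the scratch file (no-op when TESTFILE is a block device).
[ -b "$TESTFILE" ] || rm -f "$TESTFILE"
```

Without direct I/O a repeated read can be served entirely from RAM, so the cache-bypassing flag matters when the goal is to measure the storage path itself.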
While the read operations are running, I can see the devices /dev/sdb
and /dev/sdc at 50% busy each, and dm-1 at 100%.
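Those busy percentages can be reproduced without iostat by sampling /proc/diskstats directly. A sketch, assuming the standard Linux /proc/diskstats layout (field 3 is the device name, field 13 is io_ticks, the milliseconds spent doing I/O):

```shell
# Approximate per-device %busy over a 1-second window, the same figure
# iostat -x reports as %util. Compare the individual paths (sdb, sdc)
# against the multipath map (dm-1) while the read test runs.
A=$(cat /proc/diskstats)
sleep 1
B=$(cat /proc/diskstats)
# First pass stores io_ticks per device, second pass prints the delta;
# delta-ms over a 1000 ms window divided by 10 gives percent busy.
{ echo "$A"; echo "$B"; } | awk '
  { if ($3 in t) printf "%-10s %5.1f%% busy\n", $3, ($13 - t[$3]) / 10;
    else t[$3] = $13 }'
```

A path at 50% while the map sits at 100% suggests the paths are being used alternately rather than concurrently, which is worth confirming with a longer sample.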

Can anyone help me identify why the read throughput is so poor?


NetApp - Enjoy it!

dm-devel mailing list
