 
Old 06-29-2011, 03:55 PM
Jason Price
 
dm-multipath fails when one path is taken offline.

I seem to have multipath set up correctly:

# multipath -ll
vrp (360060e8010053b90052fb06900000190) dm-8 HITACHI,DF600F
size=70G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 6:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 5:0:0:0 sdb 8:16 active ready running


When I fail the /dev/sdb path, things work fine. I can fail and restore the path all day. When I fail the /dev/sdc path, the kernel marks the filesystem read-only.

Background:
Running v0.4.9 of device-mapper-multipath.
Hitachi AMS2500 array, using ports 0F and 1F.
Brocade 5300 switches, split into two fabrics (fabric A port 30, fabric B port 30).
Host has a QLogic 2462 card with two ports in use.
device-mapper names the devices; an LVM2 PV is created on the multipath device, placed into a VG, and an LV is built on top of that (see below).

Failures are produced by disabling the fabric port at the switch (or by physically disconnecting the fiber).


Current multipath.conf:

blacklist {
        devnode "^sda$"
}

defaults {
        checker_timeout         5
        polling_interval        5
}

multipaths {
        multipath {
                wwid            360060e8010053b90052fb06900000190
                alias           vrp
#               path_selector   "round-robin 0"
        }
}

Results of 'multipath -ll':

# multipath -ll
vrp (360060e8010053b90052fb06900000190) dm-8 HITACHI,DF600F
size=70G features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=1 status=enabled
| `- 6:0:0:0 sdc 8:32 active ready running
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 5:0:0:0 sdb 8:16 active ready running

I've also worked through several revisions of the multipath.conf file. If I remember correctly, with some device stanza revisions, multipath -ll returned this result instead:

# multipath -ll
vrp (360060e8010053b90052fb06900000190) dm-8 HITACHI,DF600F
size=70G features='0' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=1 status=active
  |- 6:0:0:0 sdc 8:32 active ready running
  `- 5:0:0:0 sdb 8:16 active ready running

However, both are affected by the same problem.
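In case it's useful for comparing the two layouts, the grouping difference can be tallied straight from the multipath -ll text. This is just a throwaway helper of my own (not part of the multipath tools), assuming the output format pasted above:

```shell
#!/bin/sh
# Summarize `multipath -ll` output read from stdin: print one line per
# path group, showing the group's status and how many paths it contains.
summarize() {
    awk '
        /policy=/ {                      # a path-group header line
            if (grp != "") print grp ": " n " path(s)"
            match($0, /status=[a-z]+/)
            grp = substr($0, RSTART + 7, RLENGTH - 7)
            n = 0
            next
        }
        / [0-9]+:[0-9]+:[0-9]+:[0-9]+ / { n++ }   # an H:C:T:L path line
        END { if (grp != "") print grp ": " n " path(s)" }
    '
}

# Usage: multipath -ll | summarize
```

On the first layout this reports two one-path groups (both "enabled"); on the second, a single two-path "active" group.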

Excerpt from /etc/fstab:

LABEL=vrp-db            /vrp-db                 ext3    defaults        0 2

Relevant line from pvscan:
  PV /dev/mpath/vrp   VG vrpdg     lvm2 [70.00 GB / 0    free]

Relevant line from vgscan:
  Found volume group "vrpdg" using metadata type lvm2
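One thing I should double-check on my end: whether LVM is filtering out the raw sd* paths. If lvm.conf scans /dev/sdb and /dev/sdc directly, LVM can activate the VG on a single raw path instead of the multipath device, which could plausibly produce this kind of asymmetric failure. A sketch of the filter I have in mind (device names are from this host; the filter syntax is from lvm.conf):

```
# /etc/lvm/lvm.conf (fragment): accept the multipath devices and the
# local system disk, reject everything else, so PVs are only ever
# found via /dev/mpath/* and never via the raw /dev/sd* paths.
filter = [ "a|^/dev/mpath/|", "a|^/dev/sda|", "r|.*|" ]
```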


While I'd prefer an active-active setup, I'd accept an active/passive setup, provided it failed over correctly, preferably with a fast failback.

I'm more than happy to provide any other information.


--Jason

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel
 