Linux Archive


"Stamper, Brian P. (ARC-D)[Logyx LLC]" 08-27-2010 06:03 PM

Problem with lvm and multipath on fedora 13
 
Last night I upgraded a box from Fedora 11 to Fedora 13, which upgraded
multipath from 4.8-10 to 4.9-14. After the upgrade, multipath is failing to
create maps for some of my volumes. The volumes are coming from a 3par
system, which is directly attached to Qlogic HBAs.

The volumes I'm having problems with contain vgs and lvms. They're also
snapshot volumes. I have a base volume (non-snapshot) that contains a vg
that is working fine. I would not expect that these volumes being snapshots
would be significant, but it's the only common thread I've found so far.

It seems almost like a timing issue, where lvm is grabbing the disks before
multipath has a chance to create the maps. What I can't figure is why it
only affects these volumes. Looking through /var/log/messages from startup,
I am seeing some "unknown partition type" messages that do seem to
correspond to the volumes that dracut is reporting as duplicate PVs, so I'll
investigate that.

Help would be much appreciated.

-Brian

Here's the error I'm seeing (snip from multipath -v4):

Aug 27 10:52:06 | sdm: ownership set to op-tst-fsdata03-rw-04Jun2010
Aug 27 10:52:06 | sdm: not found in pathvec
Aug 27 10:52:06 | sdm: mask = 0xc
Aug 27 10:52:06 | sdm: get_state
Aug 27 10:52:06 | sdm: path checker = tur (controller setting)
Aug 27 10:52:06 | sdm: checker timeout = 300000 ms (internal default)
Aug 27 10:52:06 | sdm: state = running
Aug 27 10:52:06 | sdm: state = 3
Aug 27 10:52:06 | sdm: prio = const (controller setting)
Aug 27 10:52:06 | sdm: const prio = 1
Aug 27 10:52:06 | sdaa: ownership set to op-tst-fsdata03-rw-04Jun2010
Aug 27 10:52:06 | sdaa: not found in pathvec
Aug 27 10:52:06 | sdaa: mask = 0xc
Aug 27 10:52:06 | sdaa: get_state
Aug 27 10:52:06 | sdaa: path checker = tur (controller setting)
Aug 27 10:52:06 | sdaa: checker timeout = 300000 ms (internal default)
Aug 27 10:52:06 | sdaa: state = running
Aug 27 10:52:06 | sdaa: state = 3
Aug 27 10:52:06 | sdaa: prio = const (controller setting)
Aug 27 10:52:06 | sdaa: const prio = 1
Aug 27 10:52:06 | op-tst-fsdata03-rw-04Jun2010: verified path sdm dev_t 8:192
Aug 27 10:52:06 | op-tst-fsdata03-rw-04Jun2010: verified path sdaa dev_t 65:160
Aug 27 10:52:06 | op-tst-fsdata03-rw-04Jun2010: pgfailback = 15 (controller setting)
Aug 27 10:52:06 | op-tst-fsdata03-rw-04Jun2010: pgpolicy = multibus (controller setting)
Aug 27 10:52:06 | op-tst-fsdata03-rw-04Jun2010: selector = round-robin 0 (controller setting)
Aug 27 10:52:06 | op-tst-fsdata03-rw-04Jun2010: features = 0 (controller setting)
Aug 27 10:52:06 | op-tst-fsdata03-rw-04Jun2010: hwhandler = 0 (controller setting)
Aug 27 10:52:06 | op-tst-fsdata03-rw-04Jun2010: rr_weight = 2 (controller setting)
Aug 27 10:52:06 | op-tst-fsdata03-rw-04Jun2010: minio = 1000 (controller setting)
Aug 27 10:52:06 | op-tst-fsdata03-rw-04Jun2010: no_path_retry = -2 (controller setting)
Aug 27 10:52:06 | pg_timeout = NONE (internal default)
Aug 27 10:52:06 | op-tst-fsdata03-rw-04Jun2010: set ACT_CREATE (map does not exist)
Aug 27 10:52:06 | libdevmapper: ioctl/libdm-iface.c(1772): device-mapper: reload ioctl failed: Device or resource busy
Aug 27 10:52:06 | libdevmapper: libdm-common.c(1056): semid 294912: semop failed for cookie 0xd4d3598: incorrect semaphore state
Aug 27 10:52:06 | libdevmapper: libdm-common.c(1230): Could not signal waiting process using notification semaphore identified by cookie value 223163800 (0xd4d3598)
Aug 27 10:52:06 | libdevmapper: ioctl/libdm-iface.c(1772): device-mapper: reload ioctl failed: Device or resource busy
Aug 27 10:52:06 | op-tst-fsdata03-rw-04Jun2010: domap (0) failure for create/reload map
Aug 27 10:52:06 | op-tst-fsdata03-rw-04Jun2010: remove multipath map
Aug 27 10:52:06 | sdm: orphaned
Aug 27 10:52:06 | sdaa: orphaned

---------------------

[root@testfs ~]# dmsetup table
testfsdata01: 0 4194304000 multipath 1 queue_if_no_path 0 1 1 round-robin 0 2 1 8:32 1000 65:0 1000
opfsdata01--vg-opfsdata01--lv: 0 4292853760 linear 8:161 384
opfsdata01--vg-opfsdata01--lv: 4292853760 4292853760 linear 8:177 384
opfsdata01--vg-opfsdata01--lv: 8585707520 4292853760 linear 8:193 384
opfsdata01--vg-opfsdata01--lv: 12878561280 4292853760 linear 8:209 384
testNFS-testNFS: 0 4194295808 linear 8:48 384
testNFS-testNFS: 4194295808 4180893696 linear 8:64 384
filestoreVG-filestore: 0 4187594752 linear 8:32 384
vg_testfs-LogVol02: 0 32768000 linear 8:2 551649664
vg_testfs-LogVol01: 0 32768000 linear 8:2 518881664
testsnapfslog01p1: 0 419424957 linear 253:11 63
testsnapfslog02: 0 419430400 multipath 1 queue_if_no_path 0 1 1 round-robin 0 2 1 8:224 1000 65:192 1000
testsnapfslog02p1: 0 419424957 linear 253:12 63
testfslog01: 0 419430400 multipath 1 queue_if_no_path 0 1 1 round-robin 0 2 1 8:16 1000 8:240 1000
testsnapfslog01: 0 419430400 multipath 1 queue_if_no_path 0 1 1 round-robin 0 2 1 8:144 1000 65:112 1000
vg_testfs-lv_root: 0 518881280 linear 8:2 384
testfslog01p1: 0 419424957 linear 253:7 63
testnfs02: 0 4194304000 multipath 1 queue_if_no_path 0 1 1 round-robin 0 2 1 8:64 1000 65:32 1000
testnfs01: 0 4194304000 multipath 1 queue_if_no_path 0 1 1 round-robin 0 2 1 8:48 1000 65:16 1000
spqfsdata01--vg-spqfsdata01--lv: 0 4292853760 linear 8:81 384
spqfsdata01--vg-spqfsdata01--lv: 4292853760 4292853760 linear 8:97 384
spqfsdata01--vg-spqfsdata01--lv: 8585707520 4292853760 linear 8:113 384
spqfsdata01--vg-spqfsdata01--lv: 12878561280 4292853760 linear 8:129 384

---------------------

[root@testfs ~]# cat /etc/multipath.conf
defaults {
    user_friendly_names yes
}
devnode_blacklist {
    wwid 36001e4f02bc746000f60789e05a38474
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
    devnode "^hd[a-z]"
    devnode "^cciss!c[0-9]d[0-9]*"
}
multipaths {
    multipath {
        wwid 350002ac000e505d8
        alias testfsdata01
    }
    multipath {
        wwid 350002ac000e605d8
        alias testfslog01
    }
    multipath {
        wwid 350002ac000e905d8
        alias testnfs01
    }
    multipath {
        wwid 350002ac000ea05d8
        alias testnfs02
    }
    multipath {
        wwid 350002ac0010905d8
        alias testsnapfslog01
    }
    multipath {
        wwid 350002ac001ca05d8
        alias testsnapfslog02
    }
    multipath {
        wwid 350002ac0022005d8
        alias spq-test-fsdata01-rw-27May2010
    }
    multipath {
        wwid 350002ac0022105d8
        alias spq-test-fsdata02-rw-27May2010
    }
    multipath {
        wwid 350002ac0022205d8
        alias spq-test-fsdata03-rw-27May2010
    }
    multipath {
        wwid 350002ac0022305d8
        alias spq-test-fsdata04-rw-27May2010
    }
    ##
    multipath {
        wwid 350002ac0021b05d8
        alias op-tst-fsdata01-rw-04Jun2010
    }
    multipath {
        wwid 350002ac0021c05d8
        alias op-tst-fsdata02-rw-04Jun2010
    }
    multipath {
        wwid 350002ac0021d05d8
        alias op-tst-fsdata03-rw-04Jun2010
    }
    multipath {
        wwid 350002ac0021e05d8
        alias op-tst-fsdata04-rw-04Jun2010
    }
}
devices {
    device {
        vendor "3PARdata"
        product "VV"
        path_grouping_policy multibus
        getuid_callout "/lib/udev/scsi_id -g -u -d /dev/%n"
        path_checker tur
        path_selector "round-robin 0"
        hardware_handler "0"
        failback 15
        rr_weight priorities
        no_path_retry queue
    }
}

---------------------

[root@testfs ~]# vgdisplay
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
--- Volume group ---
VG Name opfsdata01-vg
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 4
Act PV 4
VG Size 8.00 TiB
PE Size 4.00 MiB
Total PE 2096120
Alloc PE / Size 2096120 / 8.00 TiB
Free PE / Size 0 / 0
VG UUID FeSlsp-mVzr-Xo6B-RIc5-72cv-xJLY-XRzndA

--- Volume group ---
VG Name spqfsdata01-vg
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 4
Act PV 4
VG Size 8.00 TiB
PE Size 4.00 MiB
Total PE 2096120
Alloc PE / Size 2096120 / 8.00 TiB
Free PE / Size 0 / 0
VG UUID cpKgCE-wdlC-ee4V-n5gd-ysGj-bJKq-c0Uk5g

--- Volume group ---
VG Name testNFS
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 3.91 TiB
PE Size 4.00 MiB
Total PE 1023998
Alloc PE / Size 1022362 / 3.90 TiB
Free PE / Size 1636 / 6.39 GiB
VG UUID WvSi5z-IpGB-h3tE-NzSy-Bou6-6mcK-ew3RaK

--- Volume group ---
VG Name filestoreVG
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.95 TiB
PE Size 4.00 MiB
Total PE 511999
Alloc PE / Size 511181 / 1.95 TiB
Free PE / Size 818 / 3.20 GiB
VG UUID dLvVeo-yuIk-xBFN-PAOV-PhVr-LXUk-mTR0CZ

--- Volume group ---
VG Name vg_testfs
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 278.67 GiB
PE Size 4.00 MiB
Total PE 71340
Alloc PE / Size 71340 / 278.67 GiB
Free PE / Size 0 / 0
VG UUID I8OmYL-lcr6-M01j-cker-Toga-6cG5-GHT02c



--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel

Malahal Naineni 08-27-2010 06:26 PM

Problem with lvm and multipath on fedora 13
 
Stamper, Brian P. (ARC-D)[Logyx LLC] [brian.p.stamper@nasa.gov] wrote:
> Last night I upgraded a box from Fedora 11 to Fedora 13, which upgraded
> multipath from 4.8-10 to 4.9-14. After the upgrade, multipath is failing to
> create maps for some of my volumes. The volumes are coming from a 3par
> system, which is directly attached to Qlogic HBAs.
>
> The volumes I'm having problems with contain vgs and lvms. They're also
> snapshot volumes. I have a base volume (non-snapshot) that contains a vg
> that is working fine. I would not expect that these volumes being snapshots
> would be significant, but it's the only common thread I've found so far.
>
> It seems almost like a timing issue, where lvm is grabbing the disks before
> multipath has a chance to create the maps. What I can't figure is why it
> only affects these volumes. Looking through /var/log/messages from startup,
> I am seeing some "unknown partition type" messages that do seem to
> correspond to the volumes that dracut is reporting as duplicate PVs, so I'll
> investigate that.
>
> Help would be much appreciated.

If you can, deactivate the affected VG, restart multipath (it should be
able to load maps now) and then activate the VG. If this works, then
there is something that is causing LVM to claim devices before multipath
does.
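
In command form, that test would look roughly like this (a sketch; the VG name is a placeholder for one of the affected groups, and "multipath -v2" stands in for restarting multipath):

vgchange -a n <affected-vg>    # deactivate so LVM releases the underlying sd paths
multipath -v2                  # multipath should now be able to create its maps
vgchange -a y <affected-vg>    # reactivate the VG on top of the multipath devices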

Thanks, Malahal.
PS: If you don't have multipath in initrd, LVM *may* claim paths as PV's
before the multipath from active root can claim them. Don't have enough
info on Fedora13 initrd if this can happen though.
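
For reference, the usual way to keep LVM off the raw sd paths entirely is a device filter in the devices section of /etc/lvm/lvm.conf. A minimal sketch, assuming the internal disk is sda and all SAN storage should only be reached through device-mapper nodes (the exact regexes depend on the local device layout):

filter = [ "a|^/dev/sda|", "a|^/dev/mapper/|", "r|.*|" ]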

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel

"Stamper, Brian P. (ARC-D)[Logyx LLC]" 08-27-2010 06:57 PM

Problem with lvm and multipath on fedora 13
 
That works; now how do I stop it?

-Brian

[root@testfs ~]# multipath -ll
op-tst-fsdata01-rw-04Jun2010 (350002ac0021b05d8) dm-0 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:9 sdk 8:160 active ready running
`- 4:0:0:9 sdy 65:128 active ready running
testfsdata01 (350002ac000e505d8) dm-8 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:1 sdc 8:32 active ready running
`- 4:0:0:1 sdq 65:0 active ready running
op-tst-fsdata03-rw-04Jun2010 (350002ac0021d05d8) dm-16 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:11 sdm 8:192 active ready running
`- 4:0:0:11 sdaa 65:160 active ready running
testsnapfslog02 (350002ac001ca05d8) dm-12 3PARdata,VV
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:13 sdo 8:224 active ready running
`- 4:0:0:13 sdac 65:192 active ready running
testfslog01 (350002ac000e605d8) dm-7 3PARdata,VV
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:0 sdb 8:16 active ready running
`- 4:0:0:0 sdp 8:240 active ready running
testsnapfslog01 (350002ac0010905d8) dm-11 3PARdata,VV
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:8 sdj 8:144 active ready running
`- 4:0:0:8 sdx 65:112 active ready running
op-tst-fsdata02-rw-04Jun2010 (350002ac0021c05d8) dm-18 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:10 sdl 8:176 active ready running
`- 4:0:0:10 sdz 65:144 active ready running
testnfs02 (350002ac000ea05d8) dm-10 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:3 sde 8:64 active ready running
`- 4:0:0:3 sds 65:32 active ready running
testnfs01 (350002ac000e905d8) dm-9 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:2 sdd 8:48 active ready running
`- 4:0:0:2 sdr 65:16 active ready running
op-tst-fsdata04-rw-04Jun2010 (350002ac0021e05d8) dm-17 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:12 sdn 8:208 active ready running
`- 4:0:0:12 sdab 65:176 active ready running

Aug 27 11:52:26 | sdk: ownership set to op-tst-fsdata01-rw-04Jun2010
Aug 27 11:52:26 | sdk: not found in pathvec
Aug 27 11:52:26 | sdk: mask = 0xc
Aug 27 11:52:26 | sdk: get_state
Aug 27 11:52:26 | sdk: state = running
Aug 27 11:52:26 | sdk: state = 3
Aug 27 11:52:26 | sdk: const prio = 1
Aug 27 11:52:26 | sdy: ownership set to op-tst-fsdata01-rw-04Jun2010
Aug 27 11:52:26 | sdy: not found in pathvec
Aug 27 11:52:26 | sdy: mask = 0xc
Aug 27 11:52:26 | sdy: get_state
Aug 27 11:52:26 | sdy: state = running
Aug 27 11:52:26 | sdy: state = 3
Aug 27 11:52:26 | sdy: const prio = 1
Aug 27 11:52:26 | op-tst-fsdata01-rw-04Jun2010: pgfailback = 15 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata01-rw-04Jun2010: pgpolicy = multibus (controller setting)
Aug 27 11:52:26 | op-tst-fsdata01-rw-04Jun2010: selector = round-robin 0 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata01-rw-04Jun2010: features = 0 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata01-rw-04Jun2010: hwhandler = 0 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata01-rw-04Jun2010: rr_weight = 2 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata01-rw-04Jun2010: minio = 1000 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata01-rw-04Jun2010: no_path_retry = -2 (controller setting)
Aug 27 11:52:26 | pg_timeout = NONE (internal default)
Aug 27 11:52:26 | op-tst-fsdata01-rw-04Jun2010: set ACT_CREATE (map does not exist)
create: op-tst-fsdata01-rw-04Jun2010 (350002ac0021b05d8) undef 3PARdata,VV
size=2.0T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=2 status=undef
|- 3:0:0:9 sdk 8:160 undef ready running
`- 4:0:0:9 sdy 65:128 undef ready running
Aug 27 11:52:26 | sdm: ownership set to op-tst-fsdata03-rw-04Jun2010
Aug 27 11:52:26 | sdm: not found in pathvec
Aug 27 11:52:26 | sdm: mask = 0xc
Aug 27 11:52:26 | sdm: get_state
Aug 27 11:52:26 | sdm: state = running
Aug 27 11:52:26 | sdm: state = 3
Aug 27 11:52:26 | sdm: const prio = 1
Aug 27 11:52:26 | sdaa: ownership set to op-tst-fsdata03-rw-04Jun2010
Aug 27 11:52:26 | sdaa: not found in pathvec
Aug 27 11:52:26 | sdaa: mask = 0xc
Aug 27 11:52:26 | sdaa: get_state
Aug 27 11:52:26 | sdaa: state = running
Aug 27 11:52:26 | sdaa: state = 3
Aug 27 11:52:26 | sdaa: const prio = 1
Aug 27 11:52:26 | op-tst-fsdata03-rw-04Jun2010: pgfailback = 15 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata03-rw-04Jun2010: pgpolicy = multibus (controller setting)
Aug 27 11:52:26 | op-tst-fsdata03-rw-04Jun2010: selector = round-robin 0 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata03-rw-04Jun2010: features = 0 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata03-rw-04Jun2010: hwhandler = 0 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata03-rw-04Jun2010: rr_weight = 2 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata03-rw-04Jun2010: minio = 1000 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata03-rw-04Jun2010: no_path_retry = -2 (controller setting)
Aug 27 11:52:26 | pg_timeout = NONE (internal default)
Aug 27 11:52:26 | op-tst-fsdata03-rw-04Jun2010: set ACT_CREATE (map does not exist)
create: op-tst-fsdata03-rw-04Jun2010 (350002ac0021d05d8) undef 3PARdata,VV
size=2.0T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=2 status=undef
|- 3:0:0:11 sdm 8:192 undef ready running
`- 4:0:0:11 sdaa 65:160 undef ready running
Aug 27 11:52:26 | sdn: ownership set to op-tst-fsdata04-rw-04Jun2010
Aug 27 11:52:26 | sdn: not found in pathvec
Aug 27 11:52:26 | sdn: mask = 0xc
Aug 27 11:52:26 | sdn: get_state
Aug 27 11:52:26 | sdn: state = running
Aug 27 11:52:26 | sdn: state = 3
Aug 27 11:52:26 | sdn: const prio = 1
Aug 27 11:52:26 | sdab: ownership set to op-tst-fsdata04-rw-04Jun2010
Aug 27 11:52:26 | sdab: not found in pathvec
Aug 27 11:52:26 | sdab: mask = 0xc
Aug 27 11:52:26 | sdab: get_state
Aug 27 11:52:26 | sdab: state = running
Aug 27 11:52:26 | sdab: state = 3
Aug 27 11:52:26 | sdab: const prio = 1
Aug 27 11:52:26 | op-tst-fsdata04-rw-04Jun2010: pgfailback = 15 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata04-rw-04Jun2010: pgpolicy = multibus (controller setting)
Aug 27 11:52:26 | op-tst-fsdata04-rw-04Jun2010: selector = round-robin 0 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata04-rw-04Jun2010: features = 0 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata04-rw-04Jun2010: hwhandler = 0 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata04-rw-04Jun2010: rr_weight = 2 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata04-rw-04Jun2010: minio = 1000 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata04-rw-04Jun2010: no_path_retry = -2 (controller setting)
Aug 27 11:52:26 | pg_timeout = NONE (internal default)
Aug 27 11:52:26 | op-tst-fsdata04-rw-04Jun2010: set ACT_CREATE (map does not exist)
create: op-tst-fsdata04-rw-04Jun2010 (350002ac0021e05d8) undef 3PARdata,VV
size=2.0T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=2 status=undef
|- 3:0:0:12 sdn 8:208 undef ready running
`- 4:0:0:12 sdab 65:176 undef ready running
Aug 27 11:52:26 | sdl: ownership set to op-tst-fsdata02-rw-04Jun2010
Aug 27 11:52:26 | sdl: not found in pathvec
Aug 27 11:52:26 | sdl: mask = 0xc
Aug 27 11:52:26 | sdl: get_state
Aug 27 11:52:26 | sdl: state = running
Aug 27 11:52:26 | sdl: state = 3
Aug 27 11:52:26 | sdl: const prio = 1
Aug 27 11:52:26 | sdz: ownership set to op-tst-fsdata02-rw-04Jun2010
Aug 27 11:52:26 | sdz: not found in pathvec
Aug 27 11:52:26 | sdz: mask = 0xc
Aug 27 11:52:26 | sdz: get_state
Aug 27 11:52:26 | sdz: state = running
Aug 27 11:52:26 | sdz: state = 3
Aug 27 11:52:26 | sdz: const prio = 1
Aug 27 11:52:26 | op-tst-fsdata02-rw-04Jun2010: pgfailback = 15 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata02-rw-04Jun2010: pgpolicy = multibus (controller setting)
Aug 27 11:52:26 | op-tst-fsdata02-rw-04Jun2010: selector = round-robin 0 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata02-rw-04Jun2010: features = 0 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata02-rw-04Jun2010: hwhandler = 0 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata02-rw-04Jun2010: rr_weight = 2 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata02-rw-04Jun2010: minio = 1000 (controller setting)
Aug 27 11:52:26 | op-tst-fsdata02-rw-04Jun2010: no_path_retry = -2 (controller setting)
Aug 27 11:52:26 | pg_timeout = NONE (internal default)
Aug 27 11:52:26 | op-tst-fsdata02-rw-04Jun2010: set ACT_CREATE (map does not exist)
create: op-tst-fsdata02-rw-04Jun2010 (350002ac0021c05d8) undef 3PARdata,VV
size=2.0T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=2 status=undef
|- 3:0:0:10 sdl 8:176 undef ready running
`- 4:0:0:10 sdz 65:144 undef ready running

On 8/27/10 11:26 AM, "Malahal Naineni" <malahal@us.ibm.com> wrote:

Stamper, Brian P. (ARC-D)[Logyx LLC] [brian.p.stamper@nasa.gov] wrote:
> Last night I upgraded a box from Fedora 11 to Fedora 13, which upgraded
> multipath from 4.8-10 to 4.9-14. After the upgrade, multipath is failing to
> create maps for some of my volumes. The volumes are coming from a 3par
> system, which is directly attached to Qlogic HBAs.
>
> The volumes I'm having problems with contain vgs and lvms. They're also
> snapshot volumes. I have a base volume (non-snapshot) that contains a vg
> that is working fine. I would not expect that these volumes being snapshots
> would be significant, but it's the only common thread I've found so far.
>
> It seems almost like a timing issue, where lvm is grabbing the disks before
> multipath has a chance to create the maps. What I can't figure is why it
> only affects these volumes. Looking through /var/log/messages from startup,
> I am seeing some "unknown partition type" messages that do seem to
> correspond to the volumes that dracut is reporting as duplicate PVs, so I'll
> investigate that.
>
> Help would be much appreciated.

If you can, deactivate the affected VG, restart multipath (it should be
able to load maps now) and then activate the VG. If this works, then
there is something that is causing LVM to claim devices before multipath
does.

Thanks, Malahal.
PS: If you don't have multipath in initrd, LVM *may* claim paths as PV's
before the multipath from active root can claim them. Don't have enough
info on Fedora13 initrd if this can happen though.

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel

"Stamper, Brian P. (ARC-D)[Logyx LLC]" 08-27-2010 09:48 PM

Problem with lvm and multipath on fedora 13
 
Here’s what I’ve found:

If I vgchange the 2 affected VGs to inactive, multipath will pick up the devices. However, once I do that, I can’t seem to get the VGs reactivated in such a way to make the volumes mountable. I use vgchange -a y to make the VG active, then do “vgscan --mknodes”. That doesn’t seem to create the devices in /dev, only in /dev/mapper, so I’ve tried issuing /etc/init.d/udev-post reload, which does create the volumes in /dev/<vg>/<lv>. The issue is, it has no blkid. And if I try to mount the mapper device directly, mount doesn’t know what filesystem type it is. I’m not an lvm pro, so it’s possible I’m missing a step or two, but in the past when moving around snapshots of lvms, vgscan --mknodes has always been sufficient.
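
In command form, the failing sequence above is roughly the following (a sketch; the VG and LV names are taken from the typescript further down):

vgchange -a y opfsdata01-vg               # LV appears in /dev/mapper only
vgscan --mknodes                          # still no /dev/opfsdata01-vg/<lv>
/etc/init.d/udev-post reload              # creates /dev/<vg>/<lv>, but...
blkid /dev/opfsdata01-vg/opfsdata01-lv    # ...returns nothing, and mount cannot
                                          # detect a filesystem type
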
As for not having multipath in initrd, I have created an initrd image with the following:

mkinitrd /boot/initramfs-with-multipath.img 2.6.33.8-149.fc13.x86_64 --with=dm-multipath

I then created a grub entry for it, booted off it, and no change in behavior. Is that what you had in mind?

-Brian

On 8/27/10 11:26 AM, "Malahal Naineni" <malahal@us.ibm.com> wrote:

If you can, deactivate the affected VG, restart multipath (it should be
able to load maps now) and then activate the VG. If this works, then
there is something that is causing LVM to claim devices before multipath
does.

Thanks, Malahal.
PS: If you don't have multipath in initrd, LVM *may* claim paths as PV's
before the multipath from active root can claim them. Don't have enough
info on Fedora13 initrd if this can happen though.

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel

Malahal Naineni 08-28-2010 05:47 PM

Problem with lvm and multipath on fedora 13
 
Stamper, Brian P. (ARC-D)[Logyx LLC] [brian.p.stamper@nasa.gov] wrote:
> Here’s what I’ve found:
>
> If I vgchange the 2 affected VGs to inactive, multipath will pick up the
> devices. However, once I do that, I can’t seem to get the VGs reactivated
> in such a way to make the volumes mountable. I use vgchange -a y to make
> the VG active, then do “vgscan --mknodes”. That doesn’t seem to create the
> devices in /dev, only in /dev/mapper, so I’ve tried issuing
> /etc/init.d/udev-post reload, which does create the volumes in
> /dev/<vg>/<lv>. The issue is, it has no blkid. And if I try to mount the
> mapper device directly, mount doesn’t know what filesystem type it is.
> I’m not an lvm pro, so it’s possible I’m missing a step or two, but in
> the past when moving around snapshots of lvms, vgscan --mknodes has always
> been sufficient.

The entries in /dev/mapper should work. If they don't, the other
entries won't work either. Based on the above information, can I assume
that you were able to mount these logical volumes without multipath (this
is very unusual, if true)? Deactivate the affected VGs, run "multipath -F"
to delete the multipath maps, and then activate your VGs. You should now be
back where you were before; see if you can mount the logical volumes that
were failing earlier (I doubt they will work, though).
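
Spelled out with the VG names from this thread, that sequence is roughly:

vgchange -a n opfsdata01-vg spqfsdata01-vg   # release the underlying paths
multipath -F                                 # flush all unused multipath maps
vgchange -a y opfsdata01-vg spqfsdata01-vg   # reactivate directly on the sd devices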


> As for not having multipath in initrd, I have created an initrd image with
> the following:
>
> mkinitrd /boot/initramfs-with-multipath.img 2.6.33.8-149.fc13.x86_64
> --with=dm-multipath
>
> I then created a grub entry for it, booted off it, and no change in
> behavior. Is that what you had in mind?

RedHat mkinitrd scripts go to great lengths to not include multipath in
initrd unless you are currently using your 'root' on multipath! Most
likely it didn't include multipath. You can expand the initrd (it is a
compressed cpio archive, 'zcat initrd.img | cpio -icd' should extract
all the files) and see if multipath is included (look for multipath
in the extracted files). I would suggest you work on your first problem
(not able to mount stuff).
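
Since Fedora 13 builds its initramfs with dracut, a dracut rebuild may be a more reliable way to pull the userspace tools in than mkinitrd's --with= (which only adds kernel modules, matching the lone dm-multipath.ko found below). A sketch, assuming this dracut version ships its multipath module under that name:

dracut --force --add multipath /boot/initramfs-with-multipath.img 2.6.33.8-149.fc13.x86_64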

Thanks, Malahal.

--
dm-devel mailing list
dm-devel@redhat.com
https://www.redhat.com/mailman/listinfo/dm-devel

"Stamper, Brian P. (ARC-D)[Logyx LLC]" 08-28-2010 09:48 PM

Problem with lvm and multipath on fedora 13
 
________________________________________
From: dm-devel-bounces@redhat.com [dm-devel-bounces@redhat.com] On Behalf Of Malahal Naineni [malahal@us.ibm.com]
Sent: Saturday, August 28, 2010 10:47 AM
To: dm-devel@redhat.com
Subject: Re: [dm-devel] Problem with lvm and multipath on fedora 13

>The entries in /dev/mapper should work. If they don't work, other
>entries don't work either. Based on the above information, can I assume
>that you were able to mount such logical volumes without multipath (this
>is very unusual, if true)?

Yes.

>Inactivate affected VGs, run "multipath -F"
>to delete multipath maps, and then activate your VGs. You should now be
>where you were before and see if you can mount logical volumes that were
>failing before (I doubt if they work though).

Yes, this worked: if I remove the multipath devices and do a vgscan, I'm back to where I was before deactivating and can once again mount the volumes.

I've included a typescript of the session below; perhaps you can find steps I'm missing. But it sounds as though you're surprised by the behavior.

>RedHat mkinitrd scripts go to great lengths to not include multipath in
>initrd unless you are currently using your 'root' on multipath! Most
>likely it didn't include multipath. You can expand the initrd (it is a
>compressed cpio archive, 'zcat initrd.img | cpio -icd' should extract
>all the files) and see if multipath is included (look for multipath
>in the extracted files). I would suggest you work on your first problem
>(not able to mount stuff).
>
>Thanks, Malahal.

[root@testfs boot]# mkdir foo
[root@testfs boot]# cd foo
[root@testfs foo]# zcat ../initramfs-with-multipath.img | cpio -icd
26392 blocks
[root@testfs foo]# ls
bin emergency initqueue-finished mount proc tmp
cmdline etc initqueue-settled pre-pivot sbin usr
dev init lib pre-trigger sys var
dracut-005-3.fc13 initqueue lib64 pre-udev sysroot
[root@testfs foo]# find . -print | grep -i multi
./lib/modules/2.6.33.8-149.fc13.x86_64/kernel/drivers/md/dm-multipath.ko

-----------------------------------

Typescript of unmount, deactivate, multipath, activate, vgscan, try to mount, deactivate, multipath -F, vgscan, reactivate, mount:

Script started on Sat 28 Aug 2010 02:18:22 PM PDT
[root@testfs ~]# multipath -ll
testfsdata01 (350002ac000e505d8) dm-8 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:1 sdc 8:32 active ready running
`- 4:0:0:1 sdq 65:0 active ready running
testsnapfslog02 (350002ac001ca05d8) dm-12 3PARdata,VV
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:13 sdo 8:224 active ready running
`- 4:0:0:13 sdac 65:192 active ready running
testfslog01 (350002ac000e605d8) dm-7 3PARdata,VV
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:0 sdb 8:16 active ready running
`- 4:0:0:0 sdp 8:240 active ready running
testsnapfslog01 (350002ac0010905d8) dm-11 3PARdata,VV
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:8 sdj 8:144 active ready running
`- 4:0:0:8 sdx 65:112 active ready running
testnfs02 (350002ac000ea05d8) dm-10 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:3 sde 8:64 active ready running
`- 4:0:0:3 sds 65:32 active ready running
testnfs01 (350002ac000e905d8) dm-9 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:2 sdd 8:48 active ready running
`- 4:0:0:2 sdr 65:16 active ready running
[root@testfs ~]# mount
/dev/mapper/vg_testfs-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext3 (rw,nodev)
/dev/mapper/filestoreVG-filestore on /filestore type ext3 (rw,noatime,nodiratime)
/dev/mapper/testfslog01p1 on /filestore/fsdatadir/transactionLog type ext3 (rw,noatime,nodiratime)
/dev/mapper/testNFS-testNFS on /export type ext3 (rw)
/dev/mapper/opfsdata01--vg-opfsdata01--lv on /soc/san/filestore/snapshots/op-test-fsdata01-rw-04Jun2010 type ext3 (rw,noatime,nodiratime)
/dev/mapper/testsnapfslog02p1 on /soc/san/filestore/snapshots/op-test-fsdata01-rw-04Jun2010/fsdatadir/transactionLog type ext3 (rw,nodev,noatime,nodiratime)
/dev/mapper/spqfsdata01--vg-spqfsdata01--lv on /soc/san/filestore/snapshots/spq-test-fsdata01-rw-27May2010 type ext3 (rw,noatime,nodiratime)
/dev/mapper/testsnapfslog01p1 on /soc/san/filestore/snapshots/spq-test-fsdata01-rw-27May2010/fsdatadir/transactionLog type ext3 (rw,nodev,noatime,nodiratime)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
nfsd on /proc/fs/nfsd type nfsd (rw)
[root@testfs ~]# blkid
/dev/sda1: UUID="675e8280-95d6-4c21-a233-c15fad937362" SEC_TYPE="ext2" TYPE="ext3"
/dev/filestoreVG/filestore: UUID="0b820e66-ea43-4a99-a4f4-d6dbba678245" SEC_TYPE="ext2" TYPE="ext3"
/dev/mapper/filestoreVG-filestore: UUID="0b820e66-ea43-4a99-a4f4-d6dbba678245" SEC_TYPE="ext2" TYPE="ext3"
/dev/mapper/testfslog01p1: UUID="e509829d-4333-4f03-8d25-3968bb6da965" SEC_TYPE="ext2" TYPE="ext3"
/dev/testNFS/testNFS: UUID="4458c5b3-edf9-41fa-8714-84f1f6a25b3f" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdb1: UUID="e509829d-4333-4f03-8d25-3968bb6da965" TYPE="ext3"
/dev/mapper/vg_testfs-LogVol01: TYPE="swap" UUID="fa144978-2dae-4eee-88e7-72a70f70e104"
/dev/mapper/vg_testfs-lv_root: UUID="9db6bbe3-fa81-4889-802a-25daa9b36059" TYPE="ext4"
/dev/mapper/vg_testfs-LogVol02: TYPE="swap" UUID="cdb5fa92-8fa2-401d-ba48-c937342edf64"
/dev/mapper/testNFS-testNFS: UUID="4458c5b3-edf9-41fa-8714-84f1f6a25b3f" TYPE="ext3"
/dev/mapper/testsnapfslog01p1: UUID="986595d9-042c-4777-8d72-1607eebff422" TYPE="ext3" SEC_TYPE="ext2"
/dev/sdp1: UUID="e509829d-4333-4f03-8d25-3968bb6da965" TYPE="ext3" SEC_TYPE="ext2"
/dev/mapper/testsnapfslog02p1: UUID="1491e12d-78a5-4b1a-ad61-7b6b8232149e" TYPE="ext3" SEC_TYPE="ext2"
/dev/sdm1: UUID="5cOmDe-oW4S-IAPn-z2AD-C1Mj-Rx0j-d2xkeA" TYPE="LVM2_member"
/dev/sdn1: UUID="IqnCrw-AB33-4KIQ-2Wnm-gDDE-z3B6-f2VTiR" TYPE="LVM2_member"
/dev/sdo1: UUID="1491e12d-78a5-4b1a-ad61-7b6b8232149e" SEC_TYPE="ext2" TYPE="ext3"
/dev/sdq: UUID="2VqqWi-cWZo-Tu5T-cC40-z4CW-ZOti-aWhi5F" TYPE="LVM2_member"
/dev/sdr: UUID="JV8nOF-NxdU-WAZP-ZY9h-iXo8-HJWs-CjSjTj" TYPE="LVM2_member"
/dev/sdu1: UUID="Nh6BDQ-VuD8-w3hE-2Qwi-eNUE-uQl6-iooRBm" TYPE="LVM2_member"
/dev/sds: UUID="Crn275-rSvZ-6Bl1-LplP-c0iG-PvtZ-OVofd8" TYPE="LVM2_member"
/dev/sda2: UUID="SQ8Lg0-uiV2-xjUN-jWO0-xy9A-Ct6Y-PfRcGV" TYPE="LVM2_member"
/dev/sdc: UUID="2VqqWi-cWZo-Tu5T-cC40-z4CW-ZOti-aWhi5F" TYPE="LVM2_member"
/dev/sdd: UUID="JV8nOF-NxdU-WAZP-ZY9h-iXo8-HJWs-CjSjTj" TYPE="LVM2_member"
/dev/sdf1: UUID="3MgaHr-PuGs-Yc41-hT5Z-bZlF-UH8h-jdAwoa" TYPE="LVM2_member"
/dev/sde: UUID="Crn275-rSvZ-6Bl1-LplP-c0iG-PvtZ-OVofd8" TYPE="LVM2_member"
/dev/sdg1: UUID="Nh6BDQ-VuD8-w3hE-2Qwi-eNUE-uQl6-iooRBm" TYPE="LVM2_member"
/dev/sdi1: UUID="twn6Tb-5qNV-Y3WZ-xvWI-4zZe-N3ji-KLj0oX" TYPE="LVM2_member"
/dev/sdj1: UUID="986595d9-042c-4777-8d72-1607eebff422" TYPE="ext3" SEC_TYPE="ext2"
/dev/sdh1: UUID="LG96Gx-jwLw-c7U5-gqAo-7VZY-5s1m-Em51l6" TYPE="LVM2_member"
/dev/sdk1: UUID="MMRd81-t77d-zriG-iRUH-VB13-kfMN-lMZczT" TYPE="LVM2_member"
/dev/sdl1: UUID="GS2aK5-UfeT-WYzN-AJtJ-6CRd-Z6GC-5gP1Eb" TYPE="LVM2_member"
/dev/sdt1: UUID="3MgaHr-PuGs-Yc41-hT5Z-bZlF-UH8h-jdAwoa" TYPE="LVM2_member"
/dev/sdv1: UUID="LG96Gx-jwLw-c7U5-gqAo-7VZY-5s1m-Em51l6" TYPE="LVM2_member"
/dev/sdw1: UUID="twn6Tb-5qNV-Y3WZ-xvWI-4zZe-N3ji-KLj0oX" TYPE="LVM2_member"
/dev/sdx1: UUID="986595d9-042c-4777-8d72-1607eebff422" TYPE="ext3" SEC_TYPE="ext2"
/dev/sdy1: UUID="MMRd81-t77d-zriG-iRUH-VB13-kfMN-lMZczT" TYPE="LVM2_member"
/dev/sdz1: UUID="GS2aK5-UfeT-WYzN-AJtJ-6CRd-Z6GC-5gP1Eb" TYPE="LVM2_member"
/dev/sdaa1: UUID="5cOmDe-oW4S-IAPn-z2AD-C1Mj-Rx0j-d2xkeA" TYPE="LVM2_member"
/dev/sdab1: UUID="IqnCrw-AB33-4KIQ-2Wnm-gDDE-z3B6-f2VTiR" TYPE="LVM2_member"
/dev/sdac1: UUID="1491e12d-78a5-4b1a-ad61-7b6b8232149e" SEC_TYPE="ext2" TYPE="ext3"
/dev/mapper/testfsdata01: UUID="2VqqWi-cWZo-Tu5T-cC40-z4CW-ZOti-aWhi5F" TYPE="LVM2_member"
/dev/mapper/testnfs01: UUID="JV8nOF-NxdU-WAZP-ZY9h-iXo8-HJWs-CjSjTj" TYPE="LVM2_member"
/dev/mapper/testnfs02: UUID="Crn275-rSvZ-6Bl1-LplP-c0iG-PvtZ-OVofd8" TYPE="LVM2_member"
/dev/vg_testfs/LogVol01: UUID="fa144978-2dae-4eee-88e7-72a70f70e104" TYPE="swap"
/dev/block/253:4: UUID="9db6bbe3-fa81-4889-802a-25daa9b36059" TYPE="ext4"
/dev/block/8:2: UUID="SQ8Lg0-uiV2-xjUN-jWO0-xy9A-Ct6Y-PfRcGV" TYPE="LVM2_member"
/dev/mapper/opfsdata01--vg-opfsdata01--lv: UUID="4e86d36f-bbd8-4013-833f-13156cb74383" TYPE="ext3"
/dev/mapper/spqfsdata01--vg-spqfsdata01--lv: UUID="37430186-3e39-4410-a5b6-231eae1563c3" TYPE="ext3"
[root@testfs ~]# cat unmountem
umount /soc/san/filestore/snapshots/op-test-fsdata01-rw-04Jun2010/fsdatadir/transactionLog
umount /soc/san/filestore/snapshots/op-test-fsdata01-rw-04Jun2010
umount /soc/san/filestore/snapshots/spq-test-fsdata01-rw-27May2010/fsdatadir/transactionLog
umount /soc/san/filestore/snapshots/spq-test-fsdata01-rw-27May2010
[root@testfs ~]# ./unmountem
[root@testfs ~]# cat deact
vgchange -a n spqfsdata01-vg
vgchange -a n opfsdata01-vg
[root@testfs ~]# ./deact
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
0 logical volume(s) in volume group "spqfsdata01-vg" now active
/dev/dm-1: open failed: No such device or address
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
0 logical volume(s) in volume group "opfsdata01-vg" now active
[root@testfs ~]# multipath -v2
create: spq-test-fsdata01-rw-27May2010 (350002ac0022005d8) undef 3PARdata,VV
size=2.0T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=2 status=undef
|- 3:0:0:4 sdf 8:80 undef ready running
`- 4:0:0:4 sdt 65:48 undef ready running
create: spq-test-fsdata04-rw-27May2010 (350002ac0022305d8) undef 3PARdata,VV
size=2.0T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=2 status=undef
|- 3:0:0:7 sdi 8:128 undef ready running
`- 4:0:0:7 sdw 65:96 undef ready running
create: spq-test-fsdata02-rw-27May2010 (350002ac0022105d8) undef 3PARdata,VV
size=2.0T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=2 status=undef
|- 3:0:0:5 sdg 8:96 undef ready running
`- 4:0:0:5 sdu 65:64 undef ready running
create: spq-test-fsdata03-rw-27May2010 (350002ac0022205d8) undef 3PARdata,VV
size=2.0T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=2 status=undef
|- 3:0:0:6 sdh 8:112 undef ready running
`- 4:0:0:6 sdv 65:80 undef ready running
create: op-tst-fsdata01-rw-04Jun2010 (350002ac0021b05d8) undef 3PARdata,VV
size=2.0T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=2 status=undef
|- 3:0:0:9 sdk 8:160 undef ready running
`- 4:0:0:9 sdy 65:128 undef ready running
create: op-tst-fsdata04-rw-04Jun2010 (350002ac0021e05d8) undef 3PARdata,VV
size=2.0T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=2 status=undef
|- 3:0:0:12 sdn 8:208 undef ready running
`- 4:0:0:12 sdab 65:176 undef ready running
create: op-tst-fsdata02-rw-04Jun2010 (350002ac0021c05d8) undef 3PARdata,VV
size=2.0T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=2 status=undef
|- 3:0:0:10 sdl 8:176 undef ready running
`- 4:0:0:10 sdz 65:144 undef ready running
create: op-tst-fsdata03-rw-04Jun2010 (350002ac0021d05d8) undef 3PARdata,VV
size=2.0T features='0' hwhandler='0' wp=undef
`-+- policy='round-robin 0' prio=2 status=undef
|- 3:0:0:11 sdm 8:192 undef ready running
`- 4:0:0:11 sdaa 65:160 undef ready running
[root@testfs ~]# multipath -ll
op-tst-fsdata01-rw-04Jun2010 (350002ac0021b05d8) dm-18 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:9 sdk 8:160 active ready running
`- 4:0:0:9 sdy 65:128 active ready running
spq-test-fsdata04-rw-27May2010 (350002ac0022305d8) dm-1 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:7 sdi 8:128 active ready running
`- 4:0:0:7 sdw 65:96 active ready running
spq-test-fsdata01-rw-27May2010 (350002ac0022005d8) dm-0 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:4 sdf 8:80 active ready running
`- 4:0:0:4 sdt 65:48 active ready running
testfsdata01 (350002ac000e505d8) dm-8 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:1 sdc 8:32 active ready running
`- 4:0:0:1 sdq 65:0 active ready running
op-tst-fsdata03-rw-04Jun2010 (350002ac0021d05d8) dm-21 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:11 sdm 8:192 active ready running
`- 4:0:0:11 sdaa 65:160 active ready running
spq-test-fsdata03-rw-27May2010 (350002ac0022205d8) dm-17 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:6 sdh 8:112 active ready running
`- 4:0:0:6 sdv 65:80 active ready running
testsnapfslog02 (350002ac001ca05d8) dm-12 3PARdata,VV
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:13 sdo 8:224 active ready running
`- 4:0:0:13 sdac 65:192 active ready running
testfslog01 (350002ac000e605d8) dm-7 3PARdata,VV
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:0 sdb 8:16 active ready running
`- 4:0:0:0 sdp 8:240 active ready running
testsnapfslog01 (350002ac0010905d8) dm-11 3PARdata,VV
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:8 sdj 8:144 active ready running
`- 4:0:0:8 sdx 65:112 active ready running
op-tst-fsdata02-rw-04Jun2010 (350002ac0021c05d8) dm-20 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:10 sdl 8:176 active ready running
`- 4:0:0:10 sdz 65:144 active ready running
testnfs02 (350002ac000ea05d8) dm-10 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:3 sde 8:64 active ready running
`- 4:0:0:3 sds 65:32 active ready running
spq-test-fsdata02-rw-27May2010 (350002ac0022105d8) dm-16 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:5 sdg 8:96 active ready running
`- 4:0:0:5 sdu 65:64 active ready running
testnfs01 (350002ac000e905d8) dm-9 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:2 sdd 8:48 active ready running
`- 4:0:0:2 sdr 65:16 active ready running
op-tst-fsdata04-rw-04Jun2010 (350002ac0021e05d8) dm-19 3PARdata,VV
size=2.0T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:12 sdn 8:208 active ready running
`- 4:0:0:12 sdab 65:176 active ready running
[root@testfs ~]# ls /dev/op*
ls: cannot access /dev/op*: No such file or directory
[root@testfs ~]# vgscan --mknodes
Reading all physical volumes. This may take a while...
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
Found volume group "opfsdata01-vg" using metadata type lvm2
Found volume group "spqfsdata01-vg" using metadata type lvm2
Found volume group "testNFS" using metadata type lvm2
Found volume group "filestoreVG" using metadata type lvm2
Found volume group "vg_testfs" using metadata type lvm2
[root@testfs ~]# vgdisplay
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
--- Volume group ---
VG Name opfsdata01-vg
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 4
Act PV 4
VG Size 8.00 TiB
PE Size 4.00 MiB
Total PE 2096120
Alloc PE / Size 2096120 / 8.00 TiB
Free PE / Size 0 / 0
VG UUID FeSlsp-mVzr-Xo6B-RIc5-72cv-xJLY-XRzndA

--- Volume group ---
VG Name spqfsdata01-vg
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 4
Act PV 4
VG Size 8.00 TiB
PE Size 4.00 MiB
Total PE 2096120
Alloc PE / Size 2096120 / 8.00 TiB
Free PE / Size 0 / 0
VG UUID cpKgCE-wdlC-ee4V-n5gd-ysGj-bJKq-c0Uk5g

--- Volume group ---
VG Name testNFS
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 3.91 TiB
PE Size 4.00 MiB
Total PE 1023998
Alloc PE / Size 1022362 / 3.90 TiB
Free PE / Size 1636 / 6.39 GiB
VG UUID WvSi5z-IpGB-h3tE-NzSy-Bou6-6mcK-ew3RaK

--- Volume group ---
VG Name filestoreVG
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.95 TiB
PE Size 4.00 MiB
Total PE 511999
Alloc PE / Size 511181 / 1.95 TiB
Free PE / Size 818 / 3.20 GiB
VG UUID dLvVeo-yuIk-xBFN-PAOV-PhVr-LXUk-mTR0CZ

--- Volume group ---
VG Name vg_testfs
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 278.67 GiB
PE Size 4.00 MiB
Total PE 71340
Alloc PE / Size 71340 / 278.67 GiB
Free PE / Size 0 / 0
VG UUID I8OmYL-lcr6-M01j-cker-Toga-6cG5-GHT02c

[root@testfs ~]# lvdisplay
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
--- Logical volume ---
LV Name /dev/opfsdata01-vg/opfsdata01-lv
VG Name opfsdata01-vg
LV UUID n9EGjn-Jfsi-pIJP-Wd02-mLd9-rGAF-0VIZhq
LV Write Access read/write
LV Status NOT available
LV Size 8.00 TiB
Current LE 2096120
Segments 4
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Name /dev/spqfsdata01-vg/spqfsdata01-lv
VG Name spqfsdata01-vg
LV UUID n9EGjn-Jfsi-pIJP-Wd02-mLd9-rGAF-0VIZhq
LV Write Access read/write
LV Status NOT available
LV Size 8.00 TiB
Current LE 2096120
Segments 4
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Name /dev/testNFS/testNFS
VG Name testNFS
LV UUID I8MMOA-Qfin-7WTe-HwcM-7Xey-WUPj-Kw8ra1
LV Write Access read/write
LV Status available
# open 1
LV Size 3.90 TiB
Current LE 1022362
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3

--- Logical volume ---
LV Name /dev/filestoreVG/filestore
VG Name filestoreVG
LV UUID pdeKUQ-fdMQ-Pmhc-ONYW-Jrtz-knAk-JSnEH5
LV Write Access read/write
LV Status available
# open 1
LV Size 1.95 TiB
Current LE 511181
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2

--- Logical volume ---
LV Name /dev/vg_testfs/lv_root
VG Name vg_testfs
LV UUID DQoYPJ-koBr-xyxy-eXAx-8tS2-2inZ-4few5U
LV Write Access read/write
LV Status available
# open 1
LV Size 247.42 GiB
Current LE 63340
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4

--- Logical volume ---
LV Name /dev/vg_testfs/LogVol01
VG Name vg_testfs
LV UUID 71ymYW-3ozU-8JI0-P030-qMd5-aJi1-0alm8B
LV Write Access read/write
LV Status available
# open 1
LV Size 15.62 GiB
Current LE 4000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5

--- Logical volume ---
LV Name /dev/vg_testfs/LogVol02
VG Name vg_testfs
LV UUID johyzz-tzb4-TdQO-AfLy-8UfU-O3q8-eh7bUM
LV Write Access read/write
LV Status available
# open 1
LV Size 15.62 GiB
Current LE 4000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6

[root@testfs ~]# cat react
vgchange -a y spqfsdata01-vg
vgchange -a y opfsdata01-vg
[root@testfs ~]# ./react
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
device-mapper: reload ioctl failed: Invalid argument
1 logical volume(s) in volume group "spqfsdata01-vg" now active
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
device-mapper: reload ioctl failed: Invalid argument
1 logical volume(s) in volume group "opfsdata01-vg" now active
[root@testfs ~]# lvdisplay
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
--- Logical volume ---
LV Name /dev/opfsdata01-vg/opfsdata01-lv
VG Name opfsdata01-vg
LV UUID n9EGjn-Jfsi-pIJP-Wd02-mLd9-rGAF-0VIZhq
LV Write Access read/write
LV Status available
# open 0
LV Size 8.00 TiB
Current LE 2096120
Segments 4
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:23

--- Logical volume ---
LV Name /dev/spqfsdata01-vg/spqfsdata01-lv
VG Name spqfsdata01-vg
LV UUID n9EGjn-Jfsi-pIJP-Wd02-mLd9-rGAF-0VIZhq
LV Write Access read/write
LV Status available
# open 0
LV Size 8.00 TiB
Current LE 2096120
Segments 4
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:22

--- Logical volume ---
LV Name /dev/testNFS/testNFS
VG Name testNFS
LV UUID I8MMOA-Qfin-7WTe-HwcM-7Xey-WUPj-Kw8ra1
LV Write Access read/write
LV Status available
# open 1
LV Size 3.90 TiB
Current LE 1022362
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3

--- Logical volume ---
LV Name /dev/filestoreVG/filestore
VG Name filestoreVG
LV UUID pdeKUQ-fdMQ-Pmhc-ONYW-Jrtz-knAk-JSnEH5
LV Write Access read/write
LV Status available
# open 1
LV Size 1.95 TiB
Current LE 511181
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2

--- Logical volume ---
LV Name /dev/vg_testfs/lv_root
VG Name vg_testfs
LV UUID DQoYPJ-koBr-xyxy-eXAx-8tS2-2inZ-4few5U
LV Write Access read/write
LV Status available
# open 1
LV Size 247.42 GiB
Current LE 63340
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4

--- Logical volume ---
LV Name /dev/vg_testfs/LogVol01
VG Name vg_testfs
LV UUID 71ymYW-3ozU-8JI0-P030-qMd5-aJi1-0alm8B
LV Write Access read/write
LV Status available
# open 1
LV Size 15.62 GiB
Current LE 4000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5

--- Logical volume ---
LV Name /dev/vg_testfs/LogVol02
VG Name vg_testfs
LV UUID johyzz-tzb4-TdQO-AfLy-8UfU-O3q8-eh7bUM
LV Write Access read/write
LV Status available
# open 1
LV Size 15.62 GiB
Current LE 4000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6

[root@testfs ~]# ls /dev/op*
ls: cannot access /dev/op*: No such file or directory
[root@testfs ~]# ls /dev/mapper/opfsdata01--vg-opfsdata01--lv
/dev/mapper/opfsdata01--vg-opfsdata01--lv
[root@testfs ~]# ls -als /dev/mapper/opfsdata01--vg-opfsdata01--lv
0 brw-rw---- 1 root disk 253, 23 Aug 28 14:20 /dev/mapper/opfsdata01--vg-opfsdata01--lv
[root@testfs ~]# mount /dev/mapper/opfsdata01--vg-opfsdata01--lv /mnt
mount: you must specify the filesystem type
[root@testfs ~]# blkid | grep opfs
[root@testfs ~]# blkid -p /dev/mapper/opfsdata01--vg-opfsdata01--lv
[root@testfs ~]# /etc/init.d/udev-post reload
Retrigger failed udev events OK
[root@testfs ~]# ls -als /dev/op*
ls: cannot access /dev/op*: No such file or directory
[root@testfs ~]# vgscan --mknodes
Reading all physical volumes. This may take a while...
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
Found volume group "opfsdata01-vg" using metadata type lvm2
Found volume group "spqfsdata01-vg" using metadata type lvm2
Found volume group "testNFS" using metadata type lvm2
Found volume group "filestoreVG" using metadata type lvm2
Found volume group "vg_testfs" using metadata type lvm2
The link /dev/opfsdata01-vg/opfsdata01-lv should had been created by udev but it was not found. Falling back to direct link creation.
The link /dev/spqfsdata01-vg/spqfsdata01-lv should had been created by udev but it was not found. Falling back to direct link creation.
[root@testfs ~]# ls -als /dev/op*
total 0
0 drwxr-xr-x 2 root root 60 Aug 28 14:22 .
0 drwxr-xr-x 23 root root 5840 Aug 28 14:22 ..
0 lrwxrwxrwx 1 root root 41 Aug 28 14:22 opfsdata01-lv -> /dev/mapper/opfsdata01--vg-opfsdata01--lv
[root@testfs ~]# blkid /dev/opfsdata01-vg/opfsdata01-lv
[root@testfs ~]# blkid -p /dev/opfsdata01-vg/opfsdata01-lv
[root@testfs ~]# mount /dev/mapper/opfsdata01--vg-opfsdata01--lv /mnt
mount: you must specify the filesystem type
[root@testfs ~]# cat deact
vgchange -a n spqfsdata01-vg
vgchange -a n opfsdata01-vg
[root@testfs ~]# ./deact
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
Node /dev/mapper/spqfsdata01--vg-spqfsdata01--lv was not removed by udev. Falling back to direct node removal.
The link /dev/spqfsdata01-vg/spqfsdata01-lv should have been removed by udev but it is still present. Falling back to direct link removal.
0 logical volume(s) in volume group "spqfsdata01-vg" now active
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
Node /dev/mapper/opfsdata01--vg-opfsdata01--lv was not removed by udev. Falling back to direct node removal.
The link /dev/opfsdata01-vg/opfsdata01-lv should have been removed by udev but it is still present. Falling back to direct link removal.
0 logical volume(s) in volume group "opfsdata01-vg" now active
[root@testfs ~]# multipath -F
Aug 28 14:23:05 | testfslog01: map in use
[root@testfs ~]# multipath -ll
testfslog01 (350002ac000e605d8) dm-7 3PARdata,VV
size=200G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=2 status=active
|- 3:0:0:0 sdb 8:16 active ready running
`- 4:0:0:0 sdp 8:240 active ready running
[root@testfs ~]# vgscan --mknodes
Reading all physical volumes. This may take a while...
Found duplicate PV 2VqqWicWZoTu5TcC40z4CWZOtiaWhi5F: using /dev/sdc not /dev/sdq
Found duplicate PV JV8nOFNxdUWAZPZY9hiXo8HJWsCjSjTj: using /dev/sdd not /dev/sdr
Found duplicate PV Crn275rSvZ6Bl1LplPc0iGPvtZOVofd8: using /dev/sde not /dev/sds
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
Found volume group "opfsdata01-vg" using metadata type lvm2
Found volume group "spqfsdata01-vg" using metadata type lvm2
Found volume group "filestoreVG" using metadata type lvm2
Found volume group "testNFS" using metadata type lvm2
Found volume group "vg_testfs" using metadata type lvm2
[root@testfs ~]# vgdisplay
Found duplicate PV 2VqqWicWZoTu5TcC40z4CWZOtiaWhi5F: using /dev/sdc not /dev/sdq
Found duplicate PV JV8nOFNxdUWAZPZY9hiXo8HJWsCjSjTj: using /dev/sdd not /dev/sdr
Found duplicate PV Crn275rSvZ6Bl1LplPc0iGPvtZOVofd8: using /dev/sde not /dev/sds
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
--- Volume group ---
VG Name opfsdata01-vg
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 4
Act PV 4
VG Size 8.00 TiB
PE Size 4.00 MiB
Total PE 2096120
Alloc PE / Size 2096120 / 8.00 TiB
Free PE / Size 0 / 0
VG UUID FeSlsp-mVzr-Xo6B-RIc5-72cv-xJLY-XRzndA

--- Volume group ---
VG Name spqfsdata01-vg
System ID
Format lvm2
Metadata Areas 4
Metadata Sequence No 9
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 4
Act PV 4
VG Size 8.00 TiB
PE Size 4.00 MiB
Total PE 2096120
Alloc PE / Size 2096120 / 8.00 TiB
Free PE / Size 0 / 0
VG UUID cpKgCE-wdlC-ee4V-n5gd-ysGj-bJKq-c0Uk5g

--- Volume group ---
VG Name filestoreVG
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size 1.95 TiB
PE Size 4.00 MiB
Total PE 511999
Alloc PE / Size 511181 / 1.95 TiB
Free PE / Size 818 / 3.20 GiB
VG UUID dLvVeo-yuIk-xBFN-PAOV-PhVr-LXUk-mTR0CZ

--- Volume group ---
VG Name testNFS
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 2
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 2
Act PV 2
VG Size 3.91 TiB
PE Size 4.00 MiB
Total PE 1023998
Alloc PE / Size 1022362 / 3.90 TiB
Free PE / Size 1636 / 6.39 GiB
VG UUID WvSi5z-IpGB-h3tE-NzSy-Bou6-6mcK-ew3RaK

--- Volume group ---
VG Name vg_testfs
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 3
Open LV 3
Max PV 0
Cur PV 1
Act PV 1
VG Size 278.67 GiB
PE Size 4.00 MiB
Total PE 71340
Alloc PE / Size 71340 / 278.67 GiB
Free PE / Size 0 / 0
VG UUID I8OmYL-lcr6-M01j-cker-Toga-6cG5-GHT02c

[root@testfs ~]# lvscan
Found duplicate PV 2VqqWicWZoTu5TcC40z4CWZOtiaWhi5F: using /dev/sdc not /dev/sdq
Found duplicate PV JV8nOFNxdUWAZPZY9hiXo8HJWsCjSjTj: using /dev/sdd not /dev/sdr
Found duplicate PV Crn275rSvZ6Bl1LplPc0iGPvtZOVofd8: using /dev/sde not /dev/sds
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
inactive '/dev/opfsdata01-vg/opfsdata01-lv' [8.00 TiB] inherit
inactive '/dev/spqfsdata01-vg/spqfsdata01-lv' [8.00 TiB] inherit
ACTIVE '/dev/filestoreVG/filestore' [1.95 TiB] inherit
ACTIVE '/dev/testNFS/testNFS' [3.90 TiB] inherit
ACTIVE '/dev/vg_testfs/lv_root' [247.42 GiB] inherit
ACTIVE '/dev/vg_testfs/LogVol01' [15.62 GiB] inherit
ACTIVE '/dev/vg_testfs/LogVol02' [15.62 GiB] inherit
[root@testfs ~]# lvdisplay
Found duplicate PV 2VqqWicWZoTu5TcC40z4CWZOtiaWhi5F: using /dev/sdc not /dev/sdq
Found duplicate PV JV8nOFNxdUWAZPZY9hiXo8HJWsCjSjTj: using /dev/sdd not /dev/sdr
Found duplicate PV Crn275rSvZ6Bl1LplPc0iGPvtZOVofd8: using /dev/sde not /dev/sds
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
--- Logical volume ---
LV Name /dev/opfsdata01-vg/opfsdata01-lv
VG Name opfsdata01-vg
LV UUID n9EGjn-Jfsi-pIJP-Wd02-mLd9-rGAF-0VIZhq
LV Write Access read/write
LV Status NOT available
LV Size 8.00 TiB
Current LE 2096120
Segments 4
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Name /dev/spqfsdata01-vg/spqfsdata01-lv
VG Name spqfsdata01-vg
LV UUID n9EGjn-Jfsi-pIJP-Wd02-mLd9-rGAF-0VIZhq
LV Write Access read/write
LV Status NOT available
LV Size 8.00 TiB
Current LE 2096120
Segments 4
Allocation inherit
Read ahead sectors auto

--- Logical volume ---
LV Name /dev/filestoreVG/filestore
VG Name filestoreVG
LV UUID pdeKUQ-fdMQ-Pmhc-ONYW-Jrtz-knAk-JSnEH5
LV Write Access read/write
LV Status available
# open 1
LV Size 1.95 TiB
Current LE 511181
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2

--- Logical volume ---
LV Name /dev/testNFS/testNFS
VG Name testNFS
LV UUID I8MMOA-Qfin-7WTe-HwcM-7Xey-WUPj-Kw8ra1
LV Write Access read/write
LV Status available
# open 1
LV Size 3.90 TiB
Current LE 1022362
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3

--- Logical volume ---
LV Name /dev/vg_testfs/lv_root
VG Name vg_testfs
LV UUID DQoYPJ-koBr-xyxy-eXAx-8tS2-2inZ-4few5U
LV Write Access read/write
LV Status available
# open 1
LV Size 247.42 GiB
Current LE 63340
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:4

--- Logical volume ---
LV Name /dev/vg_testfs/LogVol01
VG Name vg_testfs
LV UUID 71ymYW-3ozU-8JI0-P030-qMd5-aJi1-0alm8B
LV Write Access read/write
LV Status available
# open 1
LV Size 15.62 GiB
Current LE 4000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5

--- Logical volume ---
LV Name /dev/vg_testfs/LogVol02
VG Name vg_testfs
LV UUID johyzz-tzb4-TdQO-AfLy-8UfU-O3q8-eh7bUM
LV Write Access read/write
LV Status available
# open 1
LV Size 15.62 GiB
Current LE 4000
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6

[root@testfs ~]# cat react
vgchange -a y spqfsdata01-vg
vgchange -a y opfsdata01-vg
[root@testfs ~]# ./react
Found duplicate PV 2VqqWicWZoTu5TcC40z4CWZOtiaWhi5F: using /dev/sdc not /dev/sdq
Found duplicate PV JV8nOFNxdUWAZPZY9hiXo8HJWsCjSjTj: using /dev/sdd not /dev/sdr
Found duplicate PV Crn275rSvZ6Bl1LplPc0iGPvtZOVofd8: using /dev/sde not /dev/sds
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
1 logical volume(s) in volume group "spqfsdata01-vg" now active
Found duplicate PV 2VqqWicWZoTu5TcC40z4CWZOtiaWhi5F: using /dev/sdc not /dev/sdq
Found duplicate PV JV8nOFNxdUWAZPZY9hiXo8HJWsCjSjTj: using /dev/sdd not /dev/sdr
Found duplicate PV Crn275rSvZ6Bl1LplPc0iGPvtZOVofd8: using /dev/sde not /dev/sds
Found duplicate PV 3MgaHrPuGsYc41hT5ZbZlFUH8hjdAwoa: using /dev/sdf1 not /dev/sdt1
Found duplicate PV Nh6BDQVuD8w3hE2QwieNUEuQl6iooRBm: using /dev/sdg1 not /dev/sdu1
Found duplicate PV LG96GxjwLwc7U5gqAo7VZY5s1mEm51l6: using /dev/sdh1 not /dev/sdv1
Found duplicate PV twn6Tb5qNVY3WZxvWI4zZeN3jiKLj0oX: using /dev/sdi1 not /dev/sdw1
Found duplicate PV MMRd81t77dzriGiRUHVB13kfMNlMZczT: using /dev/sdk1 not /dev/sdy1
Found duplicate PV GS2aK5UfeTWYzNAJtJ6CRdZ6GC5gP1Eb: using /dev/sdl1 not /dev/sdz1
Found duplicate PV 5cOmDeoW4SIAPnz2ADC1MjRx0jd2xkeA: using /dev/sdm1 not /dev/sdaa1
Found duplicate PV IqnCrwAB334KIQ2WnmgDDEz3B6f2VTiR: using /dev/sdn1 not /dev/sdab1
1 logical volume(s) in volume group "opfsdata01-vg" now active
[root@testfs ~]# ls /dev/opfsdata01-vg/opfsdata01-lv
/dev/opfsdata01-vg/opfsdata01-lv
[root@testfs ~]# ls -als /dev/opfsdata01-vg/opfsdata01-lv
0 lrwxrwxrwx 1 root root 7 Aug 28 14:23 /dev/opfsdata01-vg/opfsdata01-lv -> ../dm-1
[root@testfs ~]# mount /soc/san/filestore/snapshots/op-test-fsdata01-rw-04Jun2010
[root@testfs ~]# cd /soc/san/filestore/snapshots/op-test-fsdata01-rw-04Jun2010/
[root@testfs op-test-fsdata01-rw-04Jun2010]# ls -als
total 36
8 drwxr-xr-x. 4 root root 4096 Jan 29 2010 .
4 drwxr-xr-x 13 socops socops 4096 Aug 26 20:55 ..
4 drwx------ 7 socops socops 4096 May 10 15:50 fsdatadir
20 drwx------. 2 root root 16384 Feb 26 2009 lost+found
[root@testfs op-test-fsdata01-rw-04Jun2010]# cd fsdatadir/
[root@testfs fsdatadir]# ls
anc-bad blob lock stack.dump.csv transactionLog
anc-bad.tar.gz commitorder.seq mts stack.dump.old.csv ts
[root@testfs fsdatadir]#
[root@testfs fsdatadir]# exit
exit

Script done on Sat 28 Aug 2010 02:25:00 PM PDT
[root@testfs ~]#


Malahal Naineni 08-29-2010 06:39 PM

Problem with lvm and multipath on fedora 13
 
> [root@testfs boot]# mkdir foo
> [root@testfs boot]# cd foo
> [root@testfs foo]# zcat ../initramfs-with-multipath.img | cpio -icd
> 26392 blocks
> [root@testfs foo]# ls
> bin emergency initqueue-finished mount proc tmp
> cmdline etc initqueue-settled pre-pivot sbin usr
> dev init lib pre-trigger sys var
> dracut-005-3.fc13 initqueue lib64 pre-udev sysroot
> [root@testfs foo]# find . -print | grep -i multi
> ./lib/modules/2.6.33.8-149.fc13.x86_64/kernel/drivers/md/dm-multipath.ko

The dm-multipath.ko kernel module is present, but the multipath command
also needs to be called from an initrd script (/init ???).
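
You can check what actually made it into the image; a minimal check,
assuming the usual image path (yours may differ):

  # list the initrd contents without unpacking them
  zcat /boot/initramfs-$(uname -r).img | cpio -t | grep -E 'multipath|kpartx'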

> /dev/sdq: UUID="2VqqWi-cWZo-Tu5T-cC40-z4CW-ZOti-aWhi5F" TYPE="LVM2_member"
> /dev/sdr: UUID="JV8nOF-NxdU-WAZP-ZY9h-iXo8-HJWs-CjSjTj" TYPE="LVM2_member"
> /dev/sds: UUID="Crn275-rSvZ-6Bl1-LplP-c0iG-PvtZ-OVofd8" TYPE="LVM2_member"
> /dev/sdc: UUID="2VqqWi-cWZo-Tu5T-cC40-z4CW-ZOti-aWhi5F" TYPE="LVM2_member"
> /dev/sdd: UUID="JV8nOF-NxdU-WAZP-ZY9h-iXo8-HJWs-CjSjTj" TYPE="LVM2_member"
> /dev/sde: UUID="Crn275-rSvZ-6Bl1-LplP-c0iG-PvtZ-OVofd8" TYPE="LVM2_member"

The above drives are added to LVM without any partitions. Everything
else has partitions, so you need to run "kpartx" on each multipath
device to create mappings for the partitions. Did you enable the
multipathd service (I thought it, or perhaps a udev rule, should be
calling kpartx)?

I saw device-mapper reload ioctl failures when you activated the VGs.
Judging from the duplicate PV messages, the LVM scan seems to be using
the individual paths rather than the multipath devices.

So run "kpartx -a /dev/mapper/<yourmultipathdevice>" on each multipath
device and then run LVM scan.
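
If you have many maps, a rough, untested sketch that covers them all
(it assumes every relevant map shows up under the multipath target):

  # add partition mappings for every multipath map
  dmsetup ls --target multipath | awk '{print $1}' | while read map; do
      kpartx -a "/dev/mapper/$map"
  done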

If LVM still uses the raw paths and complains about duplicate PVs, you
may need to create an appropriate filter in /etc/lvm/lvm.conf so that
only the intended devices are scanned. I hope this is not necessary.
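
If it does come to that, one possible shape, for illustration only (it
assumes your local disk is /dev/sda and everything else should be
reached via /dev/mapper; adjust the patterns to your layout):

  # in the devices { } section of /etc/lvm/lvm.conf
  filter = [ "a|^/dev/mapper/|", "a|^/dev/sda|", "r|^/dev/sd|" ]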

Thanks, Malahal.


"Stamper, Brian P. (ARC-D)[Logyx LLC]" 08-29-2010 07:08 PM

Problem with lvm and multipath on fedora 13
 
>So run "kpartx -a /dev/mapper/<yourmultipathdevice>" on each multipath
>device and then run LVM scan.

>If LVM still uses the raw paths and complains about duplicate PVs, you
>may need to create an appropriate filter in /etc/lvm/lvm.conf so that
>only the intended devices are scanned. I hope this is not necessary.

This works and is a huge improvement. At least I can get my devices back into multipath manually. Thanks so much for this. I still need to get to the bottom of why these specific volumes are getting grabbed before multipath can get them, but this is a big step.

>Did you enable the multipathd service (I thought it, or perhaps a udev rule, should be calling kpartx)?

Yes, it's chkconfig'ed on, and I moved its start order forward so it starts before lvm2-monitor, thinking lvm2-monitor (which does a vgscan) might be part of the problem.

>> [root@testfs foo]# find . -print | grep -i multi
>> ./lib/modules/2.6.33.8-149.fc13.x86_64/kernel/drivers/md/dm-multipath.ko
>The dm-multipath.ko kernel module is present, but the multipath command
>also needs to be called from an initrd script (/init ???).

I'm not sure I understand what you're asking. Do you mean the init.d script I reference above, or is there some other configuration change needed to call multipath from the initrd?

I feel much better knowing that I can get back to a proper configuration manually, but any ideas about why this is happening at boot? FYI, I played around with filtering in lvm.conf on my first night of troubleshooting and tried filtering out all /dev/sd.* drives other than /dev/sda, but dracut seemed to ignore the lvm filters despite the lvm.conf change. With the filter in place a pvscan would find no duplicates (and no sd.* devices), but on reboot dracut would find them all and report the dupes.

Thanks for your help so far Malahal.

-Brian


Malahal Naineni 08-29-2010 08:27 PM

Problem with lvm and multipath on fedora 13
 
Stamper, Brian P. (ARC-D)[Logyx LLC] [brian.p.stamper@nasa.gov] wrote:
> Yes, it's chkconfig'ed on, and I bumped forward its start order to
> start up prior to lvm2-monitor, thinking lvm2-monitor (which does
> vgscan) might be part of the problem.

Don't change any default start orders. That is not a proper fix even if
it works, and you may break other things.

> >> [root@testfs foo]# find . -print | grep -i multi
> >> ./lib/modules/2.6.33.8-149.fc13.x86_64/kernel/drivers/md/dm-multipath.ko
> >The dm-multipath.ko kernel module is present, but the multipath command
> >also needs to be called from an initrd script (/init ???).
>
> I'm not sure I understand what you're asking. Do you mean the
> init.d script I reference above, or is there some other configuration
> change needed to call multipath from the initrd?
>

What I was trying to say is that dm-multipath.ko is included in the
initrd image, but other pieces need to be included as well before
multipath can work at that stage. For example, the image also needs
multipath.conf and some script that calls the 'multipath' binary.
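
One avenue worth checking, though I have not verified it on F13, is
whether your dracut ships a multipath module that pulls those pieces in:

  # see whether a multipath module is available
  ls /usr/share/dracut/modules.d/ | grep -i multipath
  # if it is, rebuild the image with the module forced in
  dracut --force --add multipath /boot/initramfs-$(uname -r).img $(uname -r)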

> I feel much better knowing that I can get back to a proper
> configuration manually, but any ideas about why this is happening on
> bootup?

My best guess is that LVM gets configured in the initrd, while
multipath is not available until the real root FS is active. Your best
bet would be to include multipath in the initrd (I have no working
instructions on how to build an initrd with multipath on recent RedHat
distros).

You may be able to get around the problem by restricting LVM in the
initrd image, like this (see the sketch below):

1. Modify the filter in /etc/lvm/lvm.conf to include only the root/swap
devices.
2. Make a new initrd image and use it to boot from now on. Since the
filter only includes the root/swap paths, LVM won't find or configure
any logical volumes other than root and swap at initrd time!
3. Change the filter back to what it should be (the distro default
should be fine).
4. Now reboot.
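
A concrete sketch of steps 1 and 2 (assuming, as it appears from your
output, that the root VG sits on /dev/sda; adjust if not):

  # temporary filter in the devices { } section of /etc/lvm/lvm.conf:
  # accept only the local disk, reject everything else
  filter = [ "a|^/dev/sda|", "r|.*|" ]

  # rebuild the initrd with that filter baked in, then reboot
  dracut --force /boot/initramfs-$(uname -r).img $(uname -r)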


> FYI, I played around with filtering in lvm.conf in my first
> night of troubleshooting and tried filtering out all /dev/sd.* drives
> other than /dev/sda, but it seems like dracut was ignoring the lvm
> filters, despite the lvm.conf. With the filter in place a pvscan
> would find no duplicates (and no sd.* devices) but on reboot dracut
> would find them all and report the dupes.

Did you make a new initrd after changing the lvm.conf file? Your
initrd carries its own copy of lvm.conf (an old one, unless you made a
new initrd image), and that copy is what gets used at boot, since you
have LVM configured in the initrd.
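
To see which lvm.conf your image actually carries, something like this
(untested; the member path inside the archive is an assumption):

  # extract only lvm.conf from the initrd and compare it to the live one
  mkdir /tmp/ird && cd /tmp/ird
  zcat /boot/initramfs-$(uname -r).img | cpio -icd '*etc/lvm/lvm.conf'
  diff etc/lvm/lvm.conf /etc/lvm/lvm.conf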

> Thanks for your help so far Malahal.

You are welcome.


"Stamper, Brian P. (ARC-D)[Logyx LLC]" 08-29-2010 09:10 PM

Problem with lvm and multipath on fedora 13
 
________________________________________
From: dm-devel-bounces@redhat.com [dm-devel-bounces@redhat.com] On Behalf Of Malahal Naineni [malahal@us.ibm.com]

>What I was trying to say is that dm-multipath.ko is included in the
>initrd image, but other pieces need to be included as well before
>multipath can work at that stage. For example, the image also needs
>multipath.conf and some script that calls the 'multipath' binary.

Alright, I'll look into this.

>My best guess is that LVM gets configured in the initrd, while
>multipath is not available until the real root FS is active. Your best
>bet would be to include multipath in the initrd (I have no working
>instructions on how to build an initrd with multipath on recent RedHat
>distros).

There's info out there about getting systems to boot on multipath volumes, so I'll try to track this down.

>Did you make a new initrd after changing the lvm.conf file? Your initrd
>carries its own copy of lvm.conf

No, this didn't occur to me. I'll do some testing with this as well.

>> Thanks for your help so far Malahal.
>
>You are welcome.

Here's what's throwing me at this point. These aren't my only multipath volumes. They're not even my only multipath volumes with lvm on them. All my multipath volumes come from the same source, our 3par SAN. What I can't figure out is: if multipath missing from the initrd is the issue, why does it only affect these two VGs and not the others? The testNFS vg works fine; it's a vg created across 2 luns with a single lv. The filestoreVG works fine as well; it's a single lun in a vg.

There are only a few differences. Whoever set up the testNFS and filestoreVG lvs created the vgs on the base volumes (/dev/mapper/testnfs0[1..2]) and not on partitions. The volumes themselves are also different on the SAN: the two I'm having an issue with are snapshot volumes and the others are base volumes, but I can't imagine that has any effect. I'll keep researching your recommendations above, but I'm finding it difficult to understand why there would be such a low-level problem with multipath when most of my multipath devices work fine.

Thanks again,
-Brian


