
How to clean up DMMPIO device tree

 
skt_skt
Honored Contributor

How to clean up DMMPIO device tree

Red Hat Enterprise Linux AS release 4 (Nahant Update 7)

I am seeing a single device (the 220 GB one) twice in my multipath output.

# multipath -ll
mpath2 (360060160f1731100b254eb42b9b4dd11)
[size=505 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:0:2 sdd 8:48 [active][ready]
\_ round-robin 0 [enabled]
\_ 3:0:1:2 sdh 8:112 [active][ready]

mpath1 (360060160f17311000c55033921b2dd11)
[size=175 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:1:1 sdg 8:96 [active][ready]
\_ round-robin 0 [enabled]
\_ 3:0:0:1 sdc 8:32 [active][ready]

mpath0 (360060160f173110012f68b1e56b3dd11)
[size=220 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:1:0 sdf 8:80 [active][ready]
\_ round-robin 0 [enabled]
\_ 3:0:0:0 sdb 8:16 [active][ready]

mpath3 (360060160f1731100c642e85121b2dd11)
[size=220 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:1:0 sdf 8:80 [active][ready]


Here is what it is supposed to show. Notice the presence of the 300 GB disk; somehow we see this 300 GB device only rarely, and it is often missing.

~]# multipath -ll
mpath2 (360060160f1731100b254eb42b9b4dd11)
[size=300 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:1:3 sdi 8:128 [active][ready]
\_ round-robin 0 [enabled]
\_ 3:0:0:3 sde 8:64 [active][ready]

mpath1 (360060160f17311000c55033921b2dd11)
[size=505 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:0:2 sdd 8:48 [active][ready]
\_ round-robin 0 [enabled]
\_ 3:0:1:2 sdh 8:112 [active][ready]

mpath0 (360060160f173110012f68b1e56b3dd11)
[size=175 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:1:1 sdg 8:96 [active][ready]
\_ round-robin 0 [enabled]
\_ 3:0:0:1 sdc 8:32 [active][ready]

mpath3 (360060160f1731100c642e85121b2dd11)
[size=220 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:1:0 sdf 8:80 [active][ready]


We have already reinstalled the multipath RPM on the server (the device-mapper RPM was not touched), which did not make any difference. I am able to access the 300 GB LUN all the time through "fdisk -l", so this is a problem only with DM-MPIO. Hence I am looking for a device tree cleanup, hoping that will help.
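
A quick cross-check from the OS side (a sketch, assuming the RHEL 4-era scsi_id syntax and the sdX names from the output above) is to query the WWID of each path directly:

# /sbin/scsi_id -g -u -s /block/sdb   # path 3:0:0:0
# /sbin/scsi_id -g -u -s /block/sdf   # path 3:0:1:0
# /sbin/scsi_id -g -u -s /block/sde   # the 300 GB LUN, if its sd device is present

If sdb and sdf return the same WWID, they are two paths to the same 220 GB LUN and one of the two maps (mpath0/mpath3) is stale.
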
skt_skt
Honored Contributor

Re: How to clean up DMMPIO device tree

# multipath -v2
remove: mpath2 (dup of mpath3)
mpath2: map in use

Also, the same PV UUID shows up for two devices:

mpath3
--- NEW Physical volume ---
PV Name /dev/dm-32
VG Name
PV Size 220.09 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID TQTgvi-gIns-32Sf-B0nN-30mY-y37A-vcUrrh

mpath0
--- NEW Physical volume ---
PV Name /dev/dm-35
VG Name
PV Size 220.09 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID TQTgvi-gIns-32Sf-B0nN-30mY-y37A-vcUrrh
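
To see why the same PV UUID appears on two maps, it helps to check which SCSI paths back each map. A sketch with dmsetup (map names taken from the output above; the dependency list shows major/minor pairs such as 8:80 for sdf):

# dmsetup deps mpath0
# dmsetup deps mpath3
# dmsetup table mpath0    # the multipath target line also lists the path devices

If both maps depend on the same path device (e.g. 8:80 = sdf), they are two maps built over the same LUN, which is exactly why LVM reports the same PV UUID twice.
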
Matti_Kurkela
Honored Contributor

Re: How to clean up DMMPIO device tree

> mpath0 (360060160f173110012f68b1e56b3dd11)
> mpath3 (360060160f1731100c642e85121b2dd11)

Looks like the storage system is either presenting two different WWIDs for the same unit of storage (which is not supposed to happen) or it's really presenting you with two distinct LUNs, both 220 GB in size. Or the dm-multipath is seriously confused.

> mpath2 (360060160f1731100b254eb42b9b4dd11)
> [size=300 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
> mpath2 (360060160f1731100b254eb42b9b4dd11)
> [size=505 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]

Hmm... The dm-multipath seems to be under the impression that the 300 GB device in the "good" situation has the same WWID as the 505 GB device in the "bad" situation.

---

# multipath -v2
remove: mpath2 (dup of mpath3)
mpath2: map in use

Hey, I've seen this before!

Your /var/lib/multipath/bindings file says a particular WWID should be mpath3, but it's already in use as mpath2, so the multipath tool cannot change the mapping.

Most likely this happened because the /var/lib/multipath/bindings file was not available at the time the SAN disks were first detected at system boot time. When the file became available, the disks were already in use and it was too late to change the mappings.

First, you should read these RHKB articles:
http://kbase.redhat.com/faq/docs/DOC-5544
http://kbase.redhat.com/faq/docs/DOC-17650

Second, I think there used to be a bug in the dm-multipath tools of RHEL 4, which allowed the /var/lib/multipath/bindings file to contain duplicate entries in some situations (possibly related to the above-mentioned RHKB articles).

When you see "mpathX: map in use" error messages in RHEL 4, check your /var/lib/multipath/bindings file: each WWID should be listed in there only once.

If you see duplicates, you can either just delete the /var/lib/multipath/bindings file and allow "multipath -v2" to re-create it (if you can live with the fact that the mpathN numbers may change), or correct the bindings file manually to match your current situation, by comparing the bindings file to "multipath -l" output and correcting the WWIDs in the file.
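
To check the bindings file for duplicates, something like this should do (a sketch; on RHEL 4 the file is /var/lib/multipath/bindings, comment lines start with '#', and the WWID is the second field):

# awk '!/^#/ {print $2}' /var/lib/multipath/bindings | sort | uniq -d

Any WWID that prints here is listed more than once. After fixing (or removing) the file, the maps can be rebuilt; flushing only works on maps that are not in use (not mounted, not active in LVM):

# multipath -f mpathN    # flush one stale, unused map ("-F" flushes all unused maps, if your version supports it)
# multipath -v2          # rebuild the maps and, if it was removed, the bindings file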

To prevent this from happening again:

- install any available dm-multipath updates from the RHN

- if your /var is a separate filesystem, implement the configuration change detailed in the RHKB articles (above)

- if you boot from SAN, use mkinitrd to re-create your initrd file after making the configuration change (see the sketch below)
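
For the initrd step, the usual RHEL 4 invocation is roughly the following (a sketch; back up the current image first and substitute your running kernel version):

# cp /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)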

MK
skt_skt
Honored Contributor

Re: How to clean up DMMPIO device tree

I had already verified that I have unique WWIDs inside the bindings file. I also tried moving the file aside and restarting multipath. The real confusion is that I am not sure which one to trust: the bindings file or the "multipath -ll" output. Going through the KBs now.
Matti_Kurkela
Honored Contributor

Re: How to clean up DMMPIO device tree

The "multipath -l" documents the active configuration ("what is now"). The bindings file documents the state in the past ("what used to be").

If you're using LVM on your multipath devices, remember that LVM is largely immune to storage device name changes. Just run "vgscan" and it will find all the correct devices again by looking at the VG UUIDs in the PV headers on the disks.
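
In practice that looks roughly like this (a sketch; vgchange is only needed if a VG ended up inactive):

# vgscan           # rescan all block devices for LVM metadata
# vgchange -ay     # activate any volume groups that are not active yet
# pvdisplay        # confirm each PV UUID now shows up on only one device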

MK