01-13-2010 10:14 PM
How to clean up DMMPIO device tree
Red Hat Enterprise Linux AS release 4 (Nahant Update 7)
I am seeing a single device (the 220 GB one) twice in my multipath output.
# multipath -ll
mpath2 (360060160f1731100b254eb42b9b4dd11)
[size=505 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:0:2 sdd 8:48 [active][ready]
\_ round-robin 0 [enabled]
\_ 3:0:1:2 sdh 8:112 [active][ready]
mpath1 (360060160f17311000c55033921b2dd11)
[size=175 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:1:1 sdg 8:96 [active][ready]
\_ round-robin 0 [enabled]
\_ 3:0:0:1 sdc 8:32 [active][ready]
mpath0 (360060160f173110012f68b1e56b3dd11)
[size=220 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:1:0 sdf 8:80 [active][ready]
\_ round-robin 0 [enabled]
\_ 3:0:0:0 sdb 8:16 [active][ready]
mpath3 (360060160f1731100c642e85121b2dd11)
[size=220 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:1:0 sdf 8:80 [active][ready]
Here is what it is supposed to show. Notice the presence of the 300 GB disk; somehow we see this 300 GB device only rarely, and it is often missing.
~]# multipath -ll
mpath2 (360060160f1731100b254eb42b9b4dd11)
[size=300 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:1:3 sdi 8:128 [active][ready]
\_ round-robin 0 [enabled]
\_ 3:0:0:3 sde 8:64 [active][ready]
mpath1 (360060160f17311000c55033921b2dd11)
[size=505 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:0:2 sdd 8:48 [active][ready]
\_ round-robin 0 [enabled]
\_ 3:0:1:2 sdh 8:112 [active][ready]
mpath0 (360060160f173110012f68b1e56b3dd11)
[size=175 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:1:1 sdg 8:96 [active][ready]
\_ round-robin 0 [enabled]
\_ 3:0:0:1 sdc 8:32 [active][ready]
mpath3 (360060160f1731100c642e85121b2dd11)
[size=220 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
\_ round-robin 0 [prio=1][active]
\_ 3:0:1:0 sdf 8:80 [active][ready]
We already reinstalled the multipath RPM on the server (the device-mapper RPM was not touched), which did not make any difference. I am able to access the 300 GB device all the time through "fdisk -l", so this is a problem only with DM-MPIO. Hence I am looking for a device tree cleanup, hoping that will help.
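Even when the multipath map is missing, you can confirm the kernel still sees the underlying SCSI paths. A quick check, using the sde/sdi path names from the "good" listing above (your device names may differ):
# cat /proc/scsi/scsi
# fdisk -l /dev/sde /dev/sdi
# dmsetup ls --target multipath    # shows which multipath maps device-mapper actually holds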
01-13-2010 10:18 PM
Re: How to clean up DMMPIO device tree
# multipath -v2
remove: mpath2 (dup of mpath3)
mpath2: map in use
The same PV UUID also shows for two devices:
mpath3
--- NEW Physical volume ---
PV Name /dev/dm-32
VG Name
PV Size 220.09 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID TQTgvi-gIns-32Sf-B0nN-30mY-y37A-vcUrrh
mpath0
--- NEW Physical volume ---
PV Name /dev/dm-35
VG Name
PV Size 220.09 GB
Allocatable NO
PE Size (KByte) 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID TQTgvi-gIns-32Sf-B0nN-30mY-y37A-vcUrrh
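A quick way to surface duplicates like this is to list every PV together with its UUID; two rows sharing a UUID mean the same LUN is being seen twice (standard LVM2 reporting options):
# pvs -o pv_name,pv_uuid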
01-14-2010 02:50 AM
Re: How to clean up DMMPIO device tree
> mpath3 (360060160f1731100c642e85121b2dd11)
Looks like the storage system is either presenting two different WWIDs for the same unit of storage (which is not supposed to happen) or it's really presenting you with two distinct LUNs, both 220 GB in size. Or the dm-multipath is seriously confused.
> mpath2 (360060160f1731100b254eb42b9b4dd11)
> [size=300 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
> mpath2 (360060160f1731100b254eb42b9b4dd11)
> [size=505 GB][features="1 queue_if_no_path"][hwhandler="1 emc"]
Hmm... The dm-multipath seems to be under the impression that the 300 GB device in the "good" situation has the same WWID as the 505 GB device in the "bad" situation.
---
# multipath -v2
remove: mpath2 (dup of mpath3)
mpath2: map in use
Hey, I've seen this before!
Your /var/lib/multipath/bindings file says a particular WWID should be mpath3, but it's already in use as mpath2, so the multipath tool cannot change the mapping.
Most likely this happened because the /var/lib/multipath/bindings file was not available at the time the SAN disks were first detected at system boot time. When the file became available, the disks were already in use and it was too late to change the mappings.
First, you should read these RHKB articles:
http://kbase.redhat.com/faq/docs/DOC-5544
http://kbase.redhat.com/faq/docs/DOC-17650
Second, I think there used to be a bug in the dm-multipath tools of RHEL 4, which allowed the /var/lib/multipath/bindings file to contain duplicate entries in some situations (possibly related to the above-mentioned RHKB articles).
When you see "mpathX: map in use" error messages in RHEL 4, check your /var/lib/multipath/bindings file: each WWID should be listed in there only once.
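The bindings file holds one "mpathN WWID" pair per line (plus comment lines starting with #), so a quick duplicate check is:
# grep -v '^#' /var/lib/multipath/bindings | awk '{print $2}' | sort | uniq -d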
If you see duplicates, you can either just delete the /var/lib/multipath/bindings file and allow "multipath -v2" to re-create it (if you can live with the fact that the mpathN numbers may change), or correct the bindings file manually to match your current situation, by comparing the bindings file to "multipath -l" output and correcting the WWIDs in the file.
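A minimal sketch of the delete-and-rebuild option, assuming the affected maps can first be taken out of use (filesystems unmounted, volume groups deactivated):
# cp -p /var/lib/multipath/bindings /var/lib/multipath/bindings.bak
# rm /var/lib/multipath/bindings
# multipath -F     # flush all unused multipath maps
# multipath -v2    # rebuild the maps and re-create the bindings file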
To prevent this from happening again:
- install any available dm-multipath updates from the RHN
- if your /var is a separate filesystem, implement the configuration change detailed in the RHKB articles (above)
- if you boot from SAN, use mkinitrd to re-create your initrd file after making the configuration change
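For the boot-from-SAN case, a minimal mkinitrd sketch in the RHEL 4 style (back up the existing image first; the kernel version here is simply taken from the running system):
# cp -p /boot/initrd-$(uname -r).img /boot/initrd-$(uname -r).img.bak
# mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)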
MK
01-14-2010 05:41 AM
Re: How to clean up DMMPIO device tree
01-14-2010 06:33 AM
Re: How to clean up DMMPIO device tree
If you're using LVM on your multipath devices, remember that LVM is pretty much totally immune to storage device name changes. Just run "vgscan" and it will find all the correct devices again, by looking at the VG UUIDs in the PV headers on disks.
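A minimal sketch of that recovery, using standard LVM2 commands:
# vgscan                            # rescan block devices for LVM2 metadata
# vgchange -ay                      # activate the volume groups that were found
# pvs -o pv_name,vg_name,pv_uuid    # verify each PV maps to the expected VG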
MK