Tonatiuh
Super Advisor

device-mapper-multipath

Red Hat Enterprise Linux 4
(Kernel 2.6.9-22ELsmp)

I have executed the "multipath -v2" command and it creates and displays the grouped multipath devices.

But in several articles I have seen that it should generate "/dev/disk/by-name/WWID" block devices. I only get "/dev/dm-X", "/dev/mapper/WWID" and "/dev/mapper/dm-XpY" block devices, and these generated devices are lost after reboot.

The "/dev/dm-X" and "/dev/mapper/WWID" block devices are created after I run the "multipath -v2" command again, and the "/dev/mapper/dm-XpY" block devices are created after I run the "kpartx -a /dev/dm-X" command.

Any idea about my case?
Matti_Kurkela
Honored Contributor

Re: device-mapper-multipath

Apparently you haven't enabled the multipathd daemon, which is needed by device-mapper-multipath.
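
If it is not enabled, something like this should do it (a sketch, assuming the stock init script the RHEL 4 package installs):

chkconfig multipathd on     # start it in the usual runlevels at boot
service multipathd start    # start it right now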

Go to the /usr/share/doc/device-mapper-multipath- directory on your RHEL4 server and read all the documentation found there.

There are specific initial setup instructions (Multipath-usage.txt in the above-mentioned directory). Follow these instructions exactly.

Note that some of the documentation in this directory is directly from the "generic" multipath-tools source package, and does *not* refer specifically to RHEL 4 unless it explicitly says so.

You should get your multipath devices as /dev/mapper/mpath*, and symlinks to them as /dev/mpath/mpath*.

The "/dev/disk/by-name/WWID" style of naming seems to be achievable when the multipath tools and udev are configured to work together. Apparently this integration was not yet done when RHEL 4 configuration was frozen for release. At that point, the device-mapper-multipath seems to have been quite recent development and not very well documented. Any documentation written later is likely to refer to the newer (much changed) versions, unless it is specifically written for RHEL 4.

RHEL 4 has multipath-tools version 0.4.5: the udev integration was done in 0.4.6. Further changes were made in 0.4.7, which is the "latest" version.

The device-mapper-multipath developers' web page has a Change Log which might be useful:
http://christophe.varoqui.free.fr/wiki/wakka.php?wiki=Home
MK
Tonatiuh
Super Advisor

Re: device-mapper-multipath

Matt, your info is VERY ambiguous to me. I am very much a newbie with these multipathing technologies.

The multipathd daemon is started at server startup time. The problem is some other configuration.
Tonatiuh
Super Advisor

Re: device-mapper-multipath

These messages are being added to my /var/log/messages every 11 seconds (constantly):

Aug 20 17:32:44 rac1 kernel: Device sdaa not ready.
Aug 20 17:32:44 rac1 kernel: Device sdb not ready.
Aug 20 17:32:44 rac1 kernel: Device sdd not ready.
Aug 20 17:32:44 rac1 kernel: Device sdg not ready.
Aug 20 17:32:44 rac1 kernel: Device sdh not ready.
Aug 20 17:32:44 rac1 kernel: Device sdj not ready.
Aug 20 17:32:44 rac1 kernel: Device sdl not ready.
Aug 20 17:32:44 rac1 kernel: Device sdm not ready.
Aug 20 17:32:44 rac1 kernel: Device sdp not ready.
Aug 20 17:32:44 rac1 kernel: Device sdr not ready.
Aug 20 17:32:44 rac1 kernel: Device sdu not ready.
Aug 20 17:32:44 rac1 kernel: Device sdv not ready.
Aug 20 17:32:44 rac1 kernel: Device sdx not ready.
Aug 20 17:32:44 rac1 kernel: Device sdz not ready.
Kodjo Agbenu
Honored Contributor

Re: device-mapper-multipath

Hi,

Make sure that the multipath daemon is automatically loaded at reboot:
grep -i autopath /etc/rc.d/rc.sysinit
find /etc/rc.d -print | grep -i autopath

In your last message, it looks like a path has changed or has been lost; all LUNs seen through that path then disappeared, and the Linux kernel is trying to reconnect to them.

Reboot the system to get a clean state, then check again that the autopath daemon has been loaded automatically.

Good luck.
Kodjo
Learn and explain...
Tonatiuh
Super Advisor

Re: device-mapper-multipath

Both commands:

grep -i autopath /etc/rc.d/rc.sysinit
find /etc/rc.d -print | grep -i autopath

return nothing.

The situation with the messages in /var/log/messages is the same after a reboot.

Tonatiuh
Super Advisor

Re: device-mapper-multipath

If I change the commands to look for "multipath" instead of "autopath", they return something:

# grep -i multipath /etc/rc.d/rc.sysinit
if [ -x /sbin/lvm.static -o -x /sbin/multipath -o -x /sbin/dmraid ]; then
    if [ -f /etc/multipath.conf -a -x /sbin/multipath ]; then
        modprobe dm-multipath >/dev/null 2>&1
        /sbin/multipath -v 0
    if [ -x /sbin/multipath ]; then
        modprobe dm-multipath >/dev/null 2>&1
        /sbin/multipath -v 0
    if [ -x /sbin/multipath ]; then
        modprobe dm-multipath >/dev/null 2>&1
        /sbin/multipath -v 0
# find /etc/rc.d -print | grep -i multipath
/etc/rc.d/rc4.d/S13multipathd
/etc/rc.d/rc3.d/S13multipathd
/etc/rc.d/rc6.d/K87multipathd
/etc/rc.d/rc0.d/K87multipathd
/etc/rc.d/init.d/multipathd
/etc/rc.d/rc2.d/S13multipathd
/etc/rc.d/rc1.d/K87multipathd
/etc/rc.d/rc5.d/S13multipathd
David Child_1
Honored Contributor

Re: device-mapper-multipath

Okay, first thing make sure multipathd is set to run on bootup;

# chkconfig --list multipathd
multipathd 0:off 1:off 2:on 3:on 4:on 5:on 6:off

Check if it's currently running;
# service multipathd status
multipathd (pid 3961) is running...

Now, with all that said, multipathd deals with path checking and restoration. It has nothing to do with setting up the device files on bootup.

I've never seen the /dev/disk/by-name/WWID setup, but it is possible by setting up multipath and udev rules correctly (and maybe a little scripting). Generally you will have /dev/mpath/ for your names.

1. Unless you really want the WWID name (e.g. SEMC_____SYMMETRIX______9903673E9000) you should set up some aliases in /etc/multipath.conf. Here is an example of one of mine;

multipath {
    wwid  SEMC_____SYMMETRIX______9903673E9000
    alias sym3E9mp
}

2. You may need to edit /etc/udev/rules.d/40-multipath.rules if you want to tweak your end-results. The defaults should work in most cases.

To help get more specific I would need to know what storage array you are using and perhaps an example of a WWN.

Thanks,
David
Tonatiuh
Super Advisor

Re: device-mapper-multipath

Hi David,

The "chkconfig --list multipathd" and "service multipathd status" returns me that the service is correctly started at server start up time.

Eventhough the devices are not created on startup time. I need to run "multipath -v2" and "kpartx -a ..." manually to generate again the devices.

About the name of devices and udev. No more questions about that.

David Child_1
Honored Contributor

Re: device-mapper-multipath

I created the attached script for the creation of device files. It handles the loading of the modules, device file creation (e.g. multipath -v2), and kpartx. I set it up to be used with the service command for ease of implementation.

Just put the file in /etc/rc.d/init.d/create_multipath_devices (or whatever you want to call it). Then run 'chkconfig create_multipath_devices on'.

You can just use it as is or roll your own.
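
For reference, here is a minimal sketch of what such a script can look like (an outline of the behavior described above, not the attached file itself; the kpartx loop in particular assumes your maps live under /dev/mapper):

#!/bin/bash
#
# create_multipath_devices - recreate multipath device files at boot.
# chkconfig: 2345 14 86
# description: Loads dm-multipath, rebuilds the maps, maps partitions.

start() {
    # Load the kernel module (quietly, in case it is already loaded).
    modprobe dm-multipath >/dev/null 2>&1
    # Build/refresh the multipath maps.
    /sbin/multipath -v2
    # Create the partition device files for every map.
    for dev in /dev/mapper/*; do
        [ -b "$dev" ] && /sbin/kpartx -a "$dev"
    done
}

case "$1" in
    start|reload)
        start
        ;;
    stop)
        # Nothing to tear down; the maps stay until shutdown.
        ;;
    *)
        echo "Usage: $0 {start|stop|reload}"
        exit 1
        ;;
esac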

David
Tonatiuh
Super Advisor

Re: device-mapper-multipath

Yes, I have seen that possibility, but I do not like it a lot, because the devices have to be created again after every reboot. But I will implement that kind of workaround if I cannot get the "official" Red Hat solution.
David Child_1
Honored Contributor

Re: device-mapper-multipath

I'm not sure what the official Red Hat setup is out of the box. You can do a lot through the udev rules. After you run the 'chkconfig create_multipath_devices on' command, this script will run during each reboot and recreate them automatically every time. You can also create them when needed by running 'service create_multipath_devices reload'.
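
That is, once the script is in place:

chkconfig create_multipath_devices on      # recreate the devices at every boot
service create_multipath_devices reload    # recreate them on demand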

David
Tonatiuh
Super Advisor

Re: device-mapper-multipath

David,

You are right. I have implemented the script (modified) and configured /etc/multipath.conf and the udev rules, and the result is good.

The only problem I still have is the "Device ... not ready" messages that (I guess) the kernel sends to /var/log/messages. These messages are sent consistently every 11 seconds after I (or the script) issue "multipath -v2":

Aug 20 17:32:44 rac1 kernel: Device sdaa not ready.
Aug 20 17:32:44 rac1 kernel: Device sdb not ready.
Aug 20 17:32:44 rac1 kernel: Device sdd not ready.
Aug 20 17:32:44 rac1 kernel: Device sdg not ready.
Aug 20 17:32:44 rac1 kernel: Device sdh not ready.
Aug 20 17:32:44 rac1 kernel: Device sdj not ready.
Aug 20 17:32:44 rac1 kernel: Device sdl not ready.
Aug 20 17:32:44 rac1 kernel: Device sdm not ready.
Aug 20 17:32:44 rac1 kernel: Device sdp not ready.
Aug 20 17:32:44 rac1 kernel: Device sdr not ready.
Aug 20 17:32:44 rac1 kernel: Device sdu not ready.
Aug 20 17:32:44 rac1 kernel: Device sdv not ready.
Aug 20 17:32:44 rac1 kernel: Device sdx not ready.
Aug 20 17:32:44 rac1 kernel: Device sdz not ready.
David Child_1
Honored Contributor

Re: device-mapper-multipath

What type of array is this connected to? There are certain settings that should be made in /etc/multipath.conf depending on array type. If you have an active-passive array and have multipath.conf set up for an active-active array you might get these types of errors as it tries to get both paths up at the same time.

Could you post a sample (one device) output of 'multipath -l' and your basic multipath.conf settings?

Thanks,
David
Tonatiuh
Super Advisor

Re: device-mapper-multipath

Hello David,

I have discovered some correlations in the data.

I attach the output of 3 commands:

multipath -v2
multipath -l
fdisk -l

The output of the "multipath -l" command shows some "devices" with a status of "failed". I have marked these devices with a "<-" symbol beside them.

All "devices" that appear with this "failed" status form the list of "devices" that the kernel reports to /var/log/messages as "Device ... not ready".

The output of the "fdisk -l" command shows as real devices only those that show a status of "active" in the "multipath -l" output. The rest of the devices, reported by the kernel as "not ready", are really nonexistent devices.

Any idea about what is happening here?
What is the meaning of this data?
David Child_1
Honored Contributor

Re: device-mapper-multipath

Okay, it looks like we're getting close. It looks like you have an active-passive array. Can you tell me what type of array you have?

If it is a Clariion, you need to have something similar to the following set up in multipath.conf;

device {
    vendor               "DGC"
    product              "*"
    path_grouping_policy group_by_prio
    prio_callout         "/sbin/mpath_prio_emc /dev/%n"
    hardware_handler     "1 emc"
    features             "1 queue_if_no_path"
    checker              "emc_clariion"
}

The key one for this issue is the "path_grouping_policy group_by_prio" setting.

Basically, with an active-passive array only two of your four paths will be active at any one time. Since multipath is set up as 'multibus', it thinks all four should be up at the same time, and that is why you see the failures.

Your 'multipath -l' output should look something like;

LUN0 (3600508b400012bc300019000002a0000)
[size=1 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [enabled]
  \_ 1:0:1:7 sdaa 65:160 [active]
  \_ 0:0:0:7 sdf  8:80   [active]
\_ round-robin 0
  \_ 0:0:1:7 sdm  8:192  [ready]
  \_ 1:0:0:7 sdt  65:48  [ready]

(I don't remember the states that show up, etc. as I use a Symmetrix, but the key thing is the two path groups).

Anyway, if you post your array information we can drill down on the correct settings.

David
Tonatiuh
Super Advisor

Re: device-mapper-multipath

My array is an HP StorageWorks EVA3000.
David Child_1
Honored Contributor

Re: device-mapper-multipath

Refer to this doc for details: http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00635587/c00635587.pdf?jumpid=reg_R1002_USEN#search=%22EVA3000%20multipath.conf%22

Basically your multipath.conf file should use the following settings;

device {
    vendor               "HP "
    product              "HSV101 \(C\)COMPAQ"   #note: only for RHEL4 U3
    path_grouping_policy group_by_prio
    getuid_callout       "/sbin/scsi_id -g -u -s /block/%n"
    path_checker         tur
    path_selector        "round-robin 0"
    prio_callout         "/sbin/mpath_prio_alua %d"
    failback             immediate
    no_path_retry        60   #note: only for RHEL4 U3
}


Note: the doc marks two items as being for RHEL4 U3 only, but they may be needed for U2 as well. You might just have to play with that.

Don't forget to do the following after updating multipath.conf;

1) /sbin/multipath -v0
2) /etc/init.d/multipathd restart

David
Tonatiuh
Super Advisor

Re: device-mapper-multipath

Hi David,

I have applied the settings from the HP document you gave me, and the result is the same.

Any other idea?
Uwe Zessin
Honored Contributor

Re: device-mapper-multipath

> product "HSV101 \(C\)COMPAQ" #note: only for RHEL4 U3

That would be an EVA3000 with Active/Active firmware - if your system has VCS version 3 loaded, it is running Active/Passive firmware, which is not supported by the device mapper - see page 10 of the document.
.
Tonatiuh
Super Advisor

Re: device-mapper-multipath

How can I check this firmware and know if my storage is HSV100 or HSV101?
Uwe Zessin
Honored Contributor

Re: device-mapper-multipath

Check the storage system properties page in Command View-EVA.
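
From the Linux host you can also see which product ID the array presents, via the 2.6 kernel's /proc interface:

grep 'Vendor: HP' /proc/scsi/scsi    # the Model column shows HSV100 vs. HSV101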
.