System Administration

QLogic driver for QMH2426 FC HBA problem

SOLVED
Jakub Stuglik
Occasional Visitor

QLogic driver for QMH2426 FC HBA problem

Hi.

I have a problem installing and bringing into use a SAN connection on Fedora Core 8 (kernel 2.6.23.1-42.fc8).
I'm a total newbie to this, so I'd like to ask for your help if you would be kind enough. I couldn't find anything useful on Google so far.
I also know that the Fedora Core distribution is not supported by QLogic, but it is quite important to me not to change my OS.

OK, here is the situation. I have an HP BL460c blade server with an HP BLc QLogic QMH2426 FC HBA. The qla2xxx driver is installed. As I mentioned, I am a newbie, so I don't know whether there is some problem or whether I am the problem (meaning I can't mount my SAN area :-) ).
I attached a snippet from /var/log/messages regarding the matter (probably with some other things mixed in).

Please take a look at it and tell me what I should do to make this work (i.e. see the storage mounted at /mnt/SAN and usable). Thank you in advance for all replies.

Best regards,
Jakub Stuglik

6 REPLIES
Jakub Stuglik
Occasional Visitor

Re: QLogic driver for QMH2426 FC HBA problem

I forgot to mention an important thing: RAID 5 should be set up on the SAN (so I guess that's why there are four disks, sda through sdd, in the attached log).

Jakub Stuglik
Jimmy Vance
HPE Pro
Solution

Re: QLogic driver for QMH2426 FC HBA problem

The RAID level of the storage doesn't make a difference; neither the HBA nor the OS will see it. From the log, you have only one port of the HBA presented to the SAN. The SAN probably has multiple paths into the fabric, which is why you see four disks. I take it you are presenting only one LUN to the server. You need to enable device-mapper MPIO to handle the multiple paths, so that you end up seeing one LUN device.

If you present the other HBA port to the storage, you will see eight paths to the same LUN instead of the four you see now without the multipath daemon running.
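A rough sketch of the setup steps on Fedora (package and service names are from the RHEL 5 / Fedora 8 era and are assumptions; check what your distribution actually ships):

```shell
# Install the device-mapper multipath tools (package name assumed)
yum install device-mapper-multipath

# Load the kernel module and start the daemon at boot
modprobe dm_multipath
chkconfig multipathd on
service multipathd start

# Verify: one mpath device should now appear, grouping the four sd* paths
multipath -ll
```

You will likely also need to edit /etc/multipath.conf first, since the shipped default blacklists all devices until you comment that section out.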




Jakub Stuglik
Occasional Visitor

Re: QLogic driver for QMH2426 FC HBA problem

Thank you for your reply!
I looked at the configuration file for the MPIO daemon and don't know what to do for now, but I guess I'll figure it out eventually.

Thanks.
Jakub Stuglik
Matti_Kurkela
Honored Contributor

Re: QLogic driver for QMH2426 FC HBA problem

In another thread, I collected a series of links to the Red Hat Knowledge Base about configuring Red Hat Enterprise Linux for FC. Most of the information should be applicable to Fedora too.

http://forums.itrc.hp.com/service/forums/questionanswer.do?threadId=1318321

MK
Jakub Stuglik
Occasional Visitor

Re: QLogic driver for QMH2426 FC HBA problem

Thank you very much for those resources, but unfortunately I still have problems configuring multipathing.
In fact there are two problems:
1. Multipathd won't configure the mpath device properly because there is an error running mpath_prio_hds_modular:

error calling out mpath_prio_hds_modular

As I found on Google, this is a known bug with a workaround: put a symlink to (or a copy of) the mpath_prio_hds_modular file into /. Unfortunately it doesn't work for me. If I run "multipath -v2" from /sbin (where mpath_prio_hds_modular resides), everything is OK. But I can't figure out what the working directory of /etc/init.d/multipathd is (I put "echo `pwd`" into the script to see its working directory; it claims it's /, but the workaround still doesn't help). I tried various locations (/etc/init.d, /etc/rc3.d, /etc/rc.d) but still nothing. Maybe you know something about it? Moreover, these errors are accompanied in /var/log/messages by something like this:

setroubleshoot: #012 SELinux is preventing multipathd (lvm_t) "execute" to (etc_runtime_t).#012 For complete SELinux messages. run sealert -l c347bb54-0580-4297-a4fc-6cbedb520035

Unfortunately, running the command at the end of that line gives me no clue about what I should do. Again, maybe you know more about it?

2. The second problem is more connected to multipath.conf. When I asked the people I rent the server from how I should configure this (and I don't want to ask them anymore, because they charge me for every piece of information), they told me something like this: "Use only one path to configure multipath, and set the rest of them (3) as backup paths. In particular, avoid a multipath configuration with round robin."
Now, I think they meant I should use the right path grouping policy, but I'm not sure which one they referred to. Do you think it should be "group_by_node_name"? Or maybe my assumption that this is about the policy is wrong?

I attach my current /etc/multipath.conf file for you.

Once again thank you for all your replies in advance.

Jakub Stuglik
Matti_Kurkela
Honored Contributor

Re: QLogic driver for QMH2426 FC HBA problem

Problem 1):
SELinux security rules are apparently preventing multipathd from starting mpath_prio_hds_modular. This is probably because multipathd is trying to run it from the wrong location.

You'll have to either adjust the SELinux rules to allow multipathd to start mpath_prio_hds_modular using the symlink, or disable SELinux. In both cases, you will still need the symlink to overcome the multipathd bug.
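A hedged sketch of how to confirm SELinux is the blocker and, if so, create a local exception (standard SELinux tooling; the module name "multipathd_local" is made up, and your audit log path may differ):

```shell
# Temporarily switch SELinux to permissive mode (reverted below)
setenforce 0
service multipathd restart

# If multipathd now works, build a local policy module from the logged denials
grep multipathd /var/log/audit/audit.log | audit2allow -M multipathd_local
semodule -i multipathd_local.pp

# Return to enforcing mode; the new module should keep multipathd working
setenforce 1
```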

Problem 2):
For a good answer, we would really need to know the model of the storage system connected to your server, so we could refer to the storage system documentation to find out its specific requirements.

But it sounds like the storage system might be active/passive type: this means only one FC interface on the storage controller is active at any time, and switching to another interface causes significant delays.

(DISCLAIMER: I have very little experience with active/passive storage arrays: I've mostly worked with arrays of active/active type.)

In multipath.conf terms this means that path_grouping_policy must not be "multibus" and path_checker must not be "readsector0". The "directio" path_checker is probably a bad choice too.

For path_grouping_policy, one of the "group_by_*" options is probably correct.

For path_checker, "tur" would be my first guess if the type of the storage system is not known. If the storage is an EMC CLARiiON, there is a specific "emc_clariion" path_checker for it. Similarly, (some?) HP StorageWorks devices have an "hp_sw" path_checker.
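As a sketch only (option names vary between device-mapper-multipath versions, and your array's documentation should take precedence), the "one active path, three backups, no round robin" advice from your provider could translate to something like this in /etc/multipath.conf:

```
defaults {
        path_grouping_policy    failover   # each path in its own group: one active, rest standby
        path_checker            tur        # TEST UNIT READY; safe-ish default, see above
        failback                manual     # don't bounce back automatically on flaky paths
}
```

With "failover", dm-multipath uses a single path at a time and only switches when the active one fails, which matches the advice you were given.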

I understand that recent versions of dm-multipath auto-detect many storage systems and use correct options by default.

If the auto-detection does not work and you use the wrong settings, you will probably notice fairly soon, because the storage system may give you rather bad performance. If other people are using the same storage system, your incorrect settings may harm their performance too, and vice versa.

Here are some links to dm-multipath configuration instructions. Unfortunately, they tend to expect that the reader is already familiar with FC storage terminology.

http://www.centos.org/docs/5/html/5.1/DM_Multipath/config_file_devices.html

http://christophe.varoqui.free.fr/

MK