MSA Storage

MSA2040 multipath config

 
Rafael-Antonio
Occasional Contributor

MSA2040 multipath config

Hi,

Could you please help us? We have the following scenario:

  • Redhat 6.9
  • BL460c Gen8 (c7000 chassis with 2 SAN switch, all well interconnected)
  • MSA2040 SAN
  • One 6 TB volume presented to this host (only)

And the following issue:

SAN_SW1
HBA_port1
CTRL_A_A1   /dev/sdb --> 11MBps    Prio 10
CTRL_B_B1   /dev/sde --> 67MBps    Prio 50 Owner

SAN_SW2
HBA_port2
CTRL_A_A2  /dev/sdc --> 11MBps    Prio 10
CTRL_B_B2  /dev/sdd --> 67MBps    Prio 50 Owner

 

As you can see, the throughput over controller A is very poor. The devices section of multipath.conf is the following:

devices {
        device {
        vendor "HP"
        product "MSA 2040 SAN"
        path_grouping_policy group_by_prio
        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        prio alua
        path_selector "round-robin 0"
        path_checker tur
        hardware_handler "0"
        failback immediate
        rr_weight uniform
        rr_min_io_rq 1
        no_path_retry 18
        }
}
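The settings that matter for the path layout here are `path_grouping_policy group_by_prio`, `prio alua`, and `failback immediate`. A quick sanity check that they are present in the stanza (a sketch: the heredoc below embeds the stanza from this post; on the host you would read `/etc/multipath.conf` instead, and run `multipathd reconfigure` after any change):

```shell
# Sketch: confirm the ALUA-related knobs appear in the device stanza.
# The heredoc embeds the stanza from the post; on the host, check
# /etc/multipath.conf and reload with `multipathd reconfigure`.
conf=$(cat <<'EOF'
devices {
        device {
        vendor "HP"
        product "MSA 2040 SAN"
        path_grouping_policy group_by_prio
        prio alua
        path_selector "round-robin 0"
        path_checker tur
        failback immediate
        no_path_retry 18
        }
}
EOF
)

check=""
for knob in 'path_grouping_policy group_by_prio' 'prio alua' 'failback immediate'; do
    if printf '%s\n' "$conf" | grep -q "$knob"; then
        check="$check ok:$knob"
    else
        check="$check MISSING:$knob"
    fi
done
printf '%s\n' "$check"
```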

The commands used to generate IO are:


fio --name=test_rand_read  --filename=/dev/sdb --ioengine=libaio --iodepth=1 --rw=read --bs=4k --direct=1 --size=512mb --numjobs=1 --runtime=240 --group_reporting

fio --name=test_rand_read  --filename=/dev/sdc --ioengine=libaio --iodepth=1 --rw=read --bs=4k --direct=1 --size=512mb --numjobs=1 --runtime=240 --group_reporting

fio --name=test_rand_read  --filename=/dev/sdd --ioengine=libaio --iodepth=1 --rw=read --bs=4k --direct=1 --size=512mb --numjobs=1 --runtime=240 --group_reporting

fio --name=test_rand_read  --filename=/dev/sde --ioengine=libaio --iodepth=1 --rw=read --bs=4k --direct=1 --size=512mb --numjobs=1 --runtime=240 --group_reporting
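Note that despite the job name test_rand_read, `--rw=read` is a sequential read, and with `--iodepth=1` and 4k blocks the result is dominated by per-path latency rather than bandwidth. It is also worth running the same job against the multipath device itself, so the I/O is steered by dm-multipath exactly as production I/O would be. A sketch that only builds and prints the command line to run on the host (`/dev/mapper/mpathb` is the map name from the `multipath -ll` output in this post):

```shell
# Sketch: the same fio job pointed at the dm-multipath device instead of a
# single sd* path. Only the command line is printed here; run it on the host.
dev=/dev/mapper/mpathb
fio_cmd="fio --name=test_seq_read --filename=$dev --ioengine=libaio \
--iodepth=1 --rw=read --bs=4k --direct=1 --size=512mb --numjobs=1 \
--runtime=240 --group_reporting"
printf '%s\n' "$fio_cmd"
```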

And the multipath output is:

[root@XXXXXX ~]# multipath -ll
mpathb (3600c0ff0002946fd82e0e45a01000000) dm-0 HP,MSA 2040 SAN
size=6.5T features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 4:0:1:0 sdd 8:48 active ready running
| `- 3:0:1:0 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:0:0 sdb 8:16 active ready running
  `- 4:0:0:0 sdc 8:32 active ready running
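Reading this output: both controller B paths (prio=50, the volume owner per the table above) sit in the active group, and both controller A paths (prio=10) sit in the standby (enabled) group, which is exactly what `group_by_prio` with `alua` should produce. Under ALUA the non-owning controller's paths are expected to be slower, and dm-multipath will not use them unless the prio=50 group fails. A small sketch that tallies paths per priority group (the heredoc embeds the output above; on a live host you would pipe `multipath -ll` into the same awk):

```shell
# Sketch: summarize paths per ALUA priority group from `multipath -ll` output.
# The heredoc embeds the output pasted in this post; on a live host, pipe
# `multipath -ll` in instead.
summary=$(awk '
/prio=/ { match($0, /prio=[0-9]+/); grp = substr($0, RSTART + 5, RLENGTH - 5) }
/[0-9]+:[0-9]+:[0-9]+:[0-9]+ sd/ { paths[grp]++ }
END     { for (g in paths) printf "prio=%s paths=%d\n", g, paths[g] }
' <<'EOF'
mpathb (3600c0ff0002946fd82e0e45a01000000) dm-0 HP,MSA 2040 SAN
size=6.5T features='1 queue_if_no_path' hwhandler='0' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 4:0:1:0 sdd 8:48 active ready running
| `- 3:0:1:0 sde 8:64 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 3:0:0:0 sdb 8:16 active ready running
  `- 4:0:0:0 sdc 8:32 active ready running
EOF
)
printf '%s\n' "$summary"
```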

 

What could be the reason for this behaviour? We detected that the whole mpathb device was degraded, and we are guessing it is due to this scenario.

Please share your comments.

Regards

Rafael

2 REPLIES
arun_r
Frequent Advisor

Re: MSA2040 multipath config

Rafael,

Your multipath.conf seems to be correct as per the HPE guide.

I have a few queries:

1. Do you have volumes created under each controller?

The preferred data path for a controller A owned volume would be the controller A host ports.

Similarly, the preferred data path for a controller B owned volume would be the controller B host ports.

2. If there are different volumes under controller A and controller B, are the hard drives used in each volume of the same type?

(e.g. 10K SAS, 15K SAS)
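From the host side, the ALUA state each path reports (and hence which controller owns the volume) can be decoded with `sg_rtpg` from the sg3_utils package. A sketch that only prints the commands to run on the host (assumes sg3_utils is installed; the sd* names are the four paths from the post):

```shell
# Sketch: decode the ALUA target port group state for each path device.
# Requires sg3_utils on the host; the loop only prints the commands here.
cmds=$(for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    printf 'sg_rtpg --decode %s\n' "$dev"
done)
printf '%s\n' "$cmds"
```

The owning controller's ports should report "active/optimized", and the partner controller's ports "active/non optimized".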

Log a case with the HPE support team if the unit is under warranty:

https://support.hpe.com/hpesc/public/home

 

I am an HPE Employee

arun_r
Frequent Advisor

Re: MSA2040 multipath config

Hello Rafael,


Can you please let us know whether the issue has been resolved? If it has, how was it resolved?


This will help everyone who is following this thread.

If you feel this was helpful please click the KUDOS! thumb below!

I am an HPE Employee