
hpux 11.31, native multipathing and CLARiiON CX3-80

 
SOLVED
Deepak Seth_1
Regular Advisor

hpux 11.31, native multipathing and CLARiiON CX3-80

Would anybody like to share how you did this and how the performance is? I went through some older posts, but they have all different opinions: some say use failover mode 2, others mode 4, and the same for HP Trespass vs. No Trespass. Any quick suggestion or pointer to the correct document is appreciated.
HP-UX 11.31 (September 2008 bundle)
CLARiiON CX3-80 - FLARE code 26
11 REPLIES
Solution

Re: hpux 11.31, native multipathing and CLARiiON CX3-80

Deepak,

use mode 4 - all the stuff about mode 2 was just work-arounds from before FLARE 26, which supports mode 4, came out. I seem to recall trespass becomes irrelevant once you are in mode 4.

It's covered in detail in the EMC connectivity guide for HP-UX, which you should be able to get from EMC. I think they also have a whitepaper on using ALUA (which is what mode 4 does).

Get all the info at powerlink.emc.com

HTH

Duncan

Deepak Seth_1
Regular Advisor

Re: hpux 11.31, native multipathing and CLARiiON CX3-80

I saw one of your earlier postings where you mentioned using the following:

Using the Naviseccli command line
To use Naviseccli, you must have:
- Naviseccli installed on a host with network access to the CLARiiON array
- Navisphere Agent running on the 11iv3 host you are connecting to the CLARiiON array
The following example uses the command line to configure the array:
naviseccli -h 172.23.180.130 -h 172.23.180.131 storagegroup -sethost -host hpint064 -failovermode 4 -arraycommpath 1 -type 10
WARNING: Changing configuration options may cause the array to stop functioning correctly. Do you wish to continue (y/n)? y
The syntax for the command is as follows:
-h 172.23.179.130 - the array IPs
-host hpint064 - the host name (provided by Navisphere Agent)
-failovermode 4 - the failover mode for ALUA support
-type 10 - the host initiator options
For ALUA, the only option (other than host) that can be modified for support is the "type" option.
There are two supported values for the initiator options:
- 10: sets the system for HP No Auto Trespass initiator options
- 3: sets the system for CLARiiON Open initiator options

Can I just use the above syntax to configure it? I have to hand over the system in the next 20 minutes and really don't want to go into the details of reading up on this. I have already set up my CLARiiON initiators as HP No Auto Trespass with failover mode 4, and I can see the LUN on my host. Should I create a VG and continue to work through the ALUA whitepaper in the background? I just want to make sure that I don't need to destroy my VG etc.
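If you end up applying the same naviseccli change to several hosts, assembling the command programmatically avoids typos. A hypothetical Python sketch - the flag names (-sethost, -failovermode, -arraycommpath, -type) are taken from the EMC syntax quoted above, and the host name and SP IPs are just the example values from this thread:

```python
# Hypothetical helper to build the naviseccli storagegroup command quoted
# in this thread. Flag names come from the EMC excerpt; values are the
# example host/IPs, not real systems.
def alua_sethost_cmd(array_ips, host, failover_mode=4, initiator_type=10):
    """Build the argv list for setting ALUA failover mode for one host."""
    cmd = ["naviseccli"]
    for ip in array_ips:  # one -h per storage-processor IP
        cmd += ["-h", ip]
    cmd += ["storagegroup", "-sethost",
            "-host", host,
            "-failovermode", str(failover_mode),  # 4 = ALUA
            "-arraycommpath", "1",
            "-type", str(initiator_type)]         # 10 = HP No Auto Trespass
    return cmd

print(" ".join(alua_sethost_cmd(["172.23.180.130", "172.23.180.131"],
                                "hpint064")))
```

Keeping the arguments in a list (rather than one string) also makes it safe to hand to a subprocess runner without shell quoting issues.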
inukoti
Frequent Advisor

Re: hpux 11.31, native multipathing and CLARiiON CX3-80

Hi Deepak,

It all depends on how you have connected the EMC to the HP-UX box: via SAN switches or direct attached. If you are direct attached, try to use trespass; if it is connected via SAN, use no trespass or CLARiiON Open.
If you have connected via SAN switches with trespass, you will have immense performance problems.

bobby
Deepak Seth_1
Regular Advisor

Re: hpux 11.31, native multipathing and CLARiiON CX3-80

I am connected through SAN (Cisco MDS) and am therefore setting HP No Trespass with failover mode 4. Is that it, or do I need to do something at the host end as well? I have created the VG, and here is my VG information.

--- Physical volumes ---
PV Name /dev/dsk/c7t0d0
PV Name /dev/dsk/c6t0d0 Alternate Link
PV Status available
Total PE 19199
Free PE 2806
Autoswitch On
Proactive Polling On


Anything else, or am I all set? How do I monitor whether the load balancing is happening (either at host or array level)?

Re: hpux 11.31, native multipathing and CLARiiON CX3-80

Deepak,

I think you're good to go... if you can get as far as creating a VG, the worst thing that's going to happen is that you get appalling performance because your load balancing isn't working correctly - I don't think you'd actually have to go back and re-create the VG.

Once you have a disk presented, post the output of:

scsimgr lun_map -D /dev/rdisk/diskN

for your LUN.

HTH

Duncan


Re: hpux 11.31, native multipathing and CLARiiON CX3-80

To see if you have load balancing working correctly, set off some hefty IO to the LUN, such as:

dd if=/dev/rdisk/disk10 of=/dev/null bs=8k

and then

sar -L 2 10

and look to see that you get IO down more than one lunpath. Remember, as the CLARiiON is an ALUA array, you'll only get IO to those LUN paths marked as active when you ran scsimgr lun_map (i.e. all the ports on the one controller that you have presented the LUN out of).

HTH

Duncan

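Eyeballing the sar -L output for multiple busy lunpaths can also be scripted. A rough sketch - plain text parsing, not an HP-UX API - that totals the blks/s column per lunpath; the column positions assume the standard sar -L header shown later in this thread:

```python
# Total blks/s per lunpath from captured `sar -L` output, to check whether
# IO is going down more than one path. Assumes the header layout pasted in
# this thread; adjust the column index if your sar formats differently.
from collections import defaultdict

def blks_per_lunpath(sar_text):
    totals = defaultdict(float)
    for line in sar_text.splitlines():
        fields = line.split()
        # Data lines may start with an HH:MM:SS timestamp; drop it.
        if fields and ":" in fields[0]:
            fields = fields[1:]
        # Skip the header row ("lunpath %busy ...") and blank lines.
        if len(fields) >= 6 and "lunpath" in fields[0] and fields[0] != "lunpath":
            totals[fields[0]] += float(fields[5])  # blks/s column
    return dict(totals)

# Sample lines taken from the sar output pasted in this thread.
sample = """\
09:21:40    lunpath  %busy  avque  r/s  w/s  blks/s  avwait  avserv
09:21:42 disk41_lunpath0 1.00 0.50 0 2 16 0.00 4.81
disk59_lunpath55 40.80 125.19 0 983 123672 51.53 3.23
disk55_lunpath51 1.00 0.50 0 2 16 0.00 4.84
09:21:44 disk41_lunpath0 1.50 0.50 10 1 136 0.00 1.29
disk59_lunpath55 68.50 136.83 0 1748 220296 53.11 3.05
"""
print(blks_per_lunpath(sample))
```

On the sample above (which matches the output Deepak posts below), virtually all of the IO lands on one lunpath, which is exactly the imbalance the sar check is meant to expose.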
Deepak Seth_1
Regular Advisor

Re: hpux 11.31, native multipathing and CLARiiON CX3-80

Here is how my sar output looks.

09:21:40 lunpath %busy avque r/s w/s blks/s avwait avserv
%age num num num num msec msec
09:21:42 disk41_lunpath0 1.00 0.50 0 2 16 0.00 4.81
disk59_lunpath55 40.80 125.19 0 983 123672 51.53 3.23
disk55_lunpath51 1.00 0.50 0 2 16 0.00 4.84
09:21:44 disk41_lunpath0 1.50 0.50 10 1 136 0.00 1.29
disk59_lunpath55 68.50 136.83 0 1748 220296 53.11 3.05
09:21:46 disk41_lunpath0 0.50 0.50 0 2 20 0.00 2.80
disk59_lunpath55 70.00 120.10 0 1883 237568 40.96 2.90
disk55_lunpath51 0.50 0.50 0 2 20 0.00 2.36
09:21:48 disk41_lunpath0 0.50 0.50 0 2 10 0.00 2.56
disk59_lunpath55 65.50 130.47 0 1775 222708 48.78 2.86
disk55_lunpath51 1.00 0.50 0 2 10 0.00 4.37
09:21:50 disk41_lunpath0 0.50 0.50 0 2 17 0.00 4.70
disk59_lunpath55 76.88 168.80 0 1824 230362 73.75 3.27
disk55_lunpath51 0.50 0.50 0 2 17 0.00 3.14

Here is the other output

(herhxp02 root):/> scsimgr lun_map -D /dev/rdisk/disk59

LUN PATH INFORMATION FOR LUN : /dev/rdisk/disk59

Total number of LUN paths = 2
World Wide Identifier(WWID) = 0x60060160f1dd1a00182eaa19f4c2dd11

LUN path : lunpath54
Class = lunpath
Instance = 54
Hardware path = 0/0/6/1/0/4/0.0x5006016239a01781.0x4000000000000000
SCSI transport protocol = fibre_channel
State = STANDBY
Last Open or Close state = STANDBY

LUN path : lunpath55
Class = lunpath
Instance = 55
Hardware path = 0/0/4/1/0/4/0.0x5006016839a01781.0x4000000000000000
SCSI transport protocol = fibre_channel
State = ACTIVE
Last Open or Close state = ACTIVE


Re: hpux 11.31, native multipathing and CLARiiON CX3-80

So it looks like you only have the LUN presented on one port on each controller. This means LUN failover to the other controller will work OK, but there is no load balancing, as there is only one active LUN path.

I don't know the CX3-80 that well - does it only have 1 port on each controller, or is it not possible to present a LUN out of multiple ports on the same controller? (That's how the EVA works in ALUA mode).

HTH

Duncan

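Counting ACTIVE vs. STANDBY paths by hand gets tedious with many LUNs. A small sketch that tallies the states from captured scsimgr lun_map output - the "State =" line format is assumed to match the listings pasted in this thread:

```python
# Tally ACTIVE vs STANDBY lunpaths from captured
# `scsimgr lun_map -D /dev/rdisk/diskN` output. With an ALUA array you
# want more than one ACTIVE path before load balancing can happen.
def path_states(lun_map_text):
    counts = {}
    for line in lun_map_text.splitlines():
        line = line.strip()
        # Matches "State = ACTIVE" but not "Last Open or Close state = ...".
        if line.startswith("State ") or line.startswith("State="):
            state = line.split("=", 1)[1].strip()
            counts[state] = counts.get(state, 0) + 1
    return counts

# Abridged from the first lun_map listing in this thread (2 paths).
sample = """\
LUN path : lunpath54
State                       = STANDBY
Last Open or Close state    = STANDBY

LUN path : lunpath55
State                       = ACTIVE
Last Open or Close state    = ACTIVE
"""
print(path_states(sample))
```

Run against the first listing above this would report one ACTIVE and one STANDBY path - failover only, no balancing - which is what Duncan diagnosed.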
Deepak Seth_1
Regular Advisor

Re: hpux 11.31, native multipathing and CLARiiON CX3-80

You caught it right. I need to make a zoning change and have my LUN seen on multiple ports on the same SP, so that it works more efficiently:
LUN 1 - SPA - port 1, port 2
        SPB - port 1, port 2

Then it will be load balancing. I think right now it is just set to do a failover in case an HBA or SP fails.

But other than that, I think I am OK to proceed. Correct?
Deepak Seth_1
Regular Advisor

Re: hpux 11.31, native multipathing and CLARiiON CX3-80

Hi Duncan,
Is this how it looks for you?

(herhxp02 root):/> scsimgr lun_map -D /dev/rdisk/disk59

LUN PATH INFORMATION FOR LUN : /dev/rdisk/disk59

Total number of LUN paths = 4
World Wide Identifier(WWID) = 0x60060160f1dd1a00182eaa19f4c2dd11

LUN path : lunpath54
Class = lunpath
Instance = 54
Hardware path = 0/0/6/1/0/4/0.0x5006016239a01781.0x4000000000000000
SCSI transport protocol = fibre_channel
State = STANDBY
Last Open or Close state = STANDBY

LUN path : lunpath59
Class = lunpath
Instance = 59
Hardware path = 0/0/6/1/0/4/0.0x5006016339a01781.0x4000000000000000
SCSI transport protocol = fibre_channel
State = STANDBY
Last Open or Close state = STANDBY

LUN path : lunpath58
Class = lunpath
Instance = 58
Hardware path = 0/0/4/1/0/4/0.0x5006016b39a01781.0x4000000000000000
SCSI transport protocol = fibre_channel
State = ACTIVE
Last Open or Close state = ACTIVE

LUN path : lunpath55
Class = lunpath
Instance = 55
Hardware path = 0/0/4/1/0/4/0.0x5006016839a01781.0x4000000000000000
SCSI transport protocol = fibre_channel
State = ACTIVE
Last Open or Close state = ACTIVE


How do I test whether the load balancing is happening? Do the two paths showing ACTIVE mean it is balancing between two ports?

Re: hpux 11.31, native multipathing and CLARiiON CX3-80

Deepak,

Yeah, that looks correct - you're right, it should just load balance across the two active paths, and if both of those fail it will fail over to the standby paths (and then load balance across them), which is correct behaviour for a CLARiiON disk array in ALUA mode.

Test by repeating the dd and sar test I outlined above - you should now see roughly equal IO volumes on both of the active lunpaths.

HTH

Duncan
