Operating System - HP-UX

Trespassing on CLARiiON LUNs in HPVM machines

 
SOLVED
Telia BackOffice
Valued Contributor

Trespassing on CLARiiON LUNs in HPVM machines

Hi

I have a problem with trespassing on CLARiiON LUNs in a couple of my HPVM machines.

I have a physical machine, crusher, with 2 HBAs, connected to a CLARiiON cabinet (switched topology). On that machine, laforge and cochrane are running as HPVM machines.

The storage guys are seeing excessive trespassing on the LUNs presented to crusher, but it is the LUNs that are mapped to laforge and cochrane that cause the problems.

We are using VxVM and DMP. The physical box has 4 paths to each LUN; the virtual machines have 1 path to each presented LUN.

Doing heavy I/O on the virtual machines updates the DMP iostat counters in the virtual machines alone, so apparently DMP on crusher, the physical machine, does not notice I/O from the virtual machines. Is this how it is supposed to be?
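This is roughly how I have been checking it on crusher (a minimal sketch; the interval and count values are arbitrary and I'm assuming the stock vxdmpadm utility):

# enable and zero the DMP per-path I/O statistics on the VM host
vxdmpadm iostat start
vxdmpadm iostat reset

# run heavy I/O inside laforge/cochrane, then watch whether the
# host-side per-path counters move at all
vxdmpadm iostat show all interval=5 count=3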

The HPVM setup wrt. disks:

/dev/rdsk/c16t0d3:CONFIG=gdev,EXIST=YES,DEVTYPE=DISK,SHARE=NO:cochrane:6006-0160-6d96-1600-e041-55f5-2c6b-db11

/dev/rdsk/c16t0d2:CONFIG=gdev,EXIST=YES,DEVTYPE=DISK,SHARE=NO:cochrane:6006-0160-6d96-1600-e0df-43c3-2c6b-db11

/dev/rdsk/c16t0d4:CONFIG=gdev,EXIST=YES,DEVTYPE=DISK,SHARE=NO:cochrane:6006-0160-6d96-1600-401e-3a9c-0a70-db11

/dev/rdsk/c16t0d5:CONFIG=gdev,EXIST=YES,DEVTYPE=DISK,SHARE=NO:cochrane:6006-0160-6d96-1600-42d9-69b4-0a70-db11

/dev/rdsk/c16t0d6:CONFIG=gdev,EXIST=YES,DEVTYPE=DISK,SHARE=NO:cochrane:6006-0160-6d96-1600-3e43-7bf6-0a70-db11

/dev/rdsk/c16t0d0:CONFIG=gdev,EXIST=YES,DEVTYPE=DISK,SHARE=NO:laforge:6006-0160-6d96-1600-38cf-820f-2b68-db11

/dev/rdsk/c16t0d1:CONFIG=gdev,EXIST=YES,DEVTYPE=DISK,SHARE=NO:laforge:6006-0160-6d96-1600-d4ad-b138-2b68-db11

/dev/rdsk/c16t1d5:CONFIG=gdev,EXIST=YES,DEVTYPE=DISK,SHARE=NO:cochrane:6006-0160-7f96-1600-dae7-4a4f-186c-dc11

/dev/rdsk/c26t0d5:CONFIG=gdev,EXIST=YES,DEVTYPE=DISK,SHARE=NO:laforge:M18E75040418E000M18E

Shouldn't /dev/rdsk/c.... be /dev/vx/rdmp/... in a DMP environment?
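In case it is useful, this is how I look at what DMP itself thinks of one of those disks on crusher (just a sketch; the DMP node name is taken from the mapping above and may be named differently on other setups):

# show the physical paths DMP has grouped under this node
vxdmpadm getsubpaths dmpnodename=c16t0d3

# list the disks VxVM/DMP knows about, with enclosure and status
vxdisk -o alldgs list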

Thomas
9 REPLIES
Steven E. Protter
Exalted Contributor

Re: Trespassing on CLARiiON LUNs in HPVM machines

Shalom Thomas,

The storage guys are seeing excessive trespassing on the LUNs presented to crusher, but it is the LUNs that are mapped to laforge and cochrane that cause the problems.

Should this not be solved by zoning on the disk array?

SEP
Steven E Protter
Owner of ISN Corporation
Torsten.
Acclaimed Contributor

Re: Trespassing on CLARiiON LUNs in HPVM machines

laforge and cochrane can't make this work?
Unlikely ;-))


Do you use the plain LUNs for the HPVMs or logical volumes from VxVM?

Without any multipathing software this cannot work. For this case you need a single special (virtual) disk device covering all the real hardware paths.

see

http://docs.hp.com/en/T2767-90105/ch07s02.html#multipath_solutions_section
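For illustration only (the guest name and device file are taken from your own listing, so don't re-run this against the existing configuration), a host-side mapping like the ones you posted is created with something like:

# on the VM host (crusher): show what is currently mapped to the guest
hpvmstatus -P cochrane

# map one backing device (here a raw disk special file) as a virtual SCSI disk
hpvmmodify -P cochrane -a disk:scsi::disk:/dev/rdsk/c16t0d3

The guest only ever sees the single virtual disk, no matter how many physical paths the host has to that LUN.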

Hope this helps!
Regards
Torsten.

__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.

Telia BackOffice
Valued Contributor

Re: Trespassing on CLARiiON LUNs in HPVM machines

Steven.

The only disks laforge and cochrane have are the ones from crusher. I must admit that I am not an HPVM expert, and I have not done this setup.

As I read the config above, cochrane and laforge have some disks from crusher mapped to them, but why does the HPVM config contain both the physical path on crusher and the unique disk LUN ID (6006.....)?

How does I/O map down through the HPVM system? I.e.:

1. laforge -> crusher, /dev/dsk/c.... -> dmp -> hba ->
2. laforge -> hba -> lun
3. laforge -> dmp -> hba -> lun
4. laforge -> crusher, /dev/dsk/c.... -> hba

As I do not see the vxdmpadm iostat counters being incremented on crusher when doing I/O on laforge, I think I can rule out 1 and 3.

But perhaps one of you specialists can help me out here.

Wrt. zoning, I get the following on laforge:

thosan@laforge|pts/3:/home/thosan$ sudo ./inq.HPUXIA64 -clar_wwn -nodots

Password:

Inquiry utility, Version V7.3-872 (Rev 0.0) (SIL Version V6.5.0.0 (Edit Level 872)

Copyright (C) by EMC Corporation, all rights reserved.

For help type inq -h.







----------------------------------------------------------------------------------------------------

CLARiiON Device Array Serial # SP IP Address LUN WWN (all 32 hex digits required)

----------------------------------------------------------------------------------------------------

thosan@laforge|pts/3:/home/thosan$ sudo ioscan -fnC fc

So laforge does not appear to have a virtual HBA, and inq does not return any CLARiiON devices. So I do not think this has anything to do with zoning, but if you can elaborate I might change my standpoint ;-)

Thomas
Torsten.
Acclaimed Contributor

Re: Trespassing on CLARiiON LUNs in HPVM machines

Please understand the idea: HPVM guests cannot do any multipathing. If you give them a single path to a device, they can only use this path. If this path fails, your guests don't have a disk any longer.

With multipathing software the host will manage the balancing between the real paths and give the guest only one single path.

BTW, all storage devices are mapped for the guests (to SCSI or AVIO), so you will never find a Fibre Channel device inside the guest.

Hope this helps!
Regards
Torsten.

Torsten.
Acclaimed Contributor

Re: Trespassing on CLARiiON LUNs in HPVM machines

correction:

...With multipathing software the host will manage the balancing between the real paths and give the guest only one single **logical** path (or a logical volume via the volume manager, or even a plain file). ...



see also

http://docs.hp.com/en/T2767-90105/ch07s02.html#io_stack

Hope this helps!
Regards
Torsten.

Telia BackOffice
Valued Contributor

Re: Trespassing on CLARiiON LUNs in HPVM machines

Torsten

I'm trying, I'm trying. Cut me some slack here ;-)

I _have_ DMP installed in the VMs at the moment. With a single path, i.e.:

ENCLR_NAME  ENCLR_TYPE  ENCLR_SNO  STATUS     ARRAY_TYPE
============================================================
Disk        Disk        DISKS      CONNECTED  Disk

thosan@laforge|pts/3:/home/thosan$ sudo vxdmpadm getdmpnode enclosure=Disk

NAME     STATE    ENCLR-TYPE  PATHS  ENBL  DSBL  ENCLR-NAME
============================================================
c0t0d0   ENABLED  Disk        1      1     0     Disk
c0t1d0   ENABLED  Disk        1      1     0     Disk

But in my mind that does not explain why the LUNs trespass.

As I read Table 7.1 at

http://docs.hp.com/en/T2767-90105/ch07s02.html#multipath_solutions_section

I am presenting whole disks/LUNs to the VMs and yet am still using the unsupported DMP software?

My hypothesis is that by doing so, I/O goes to the physical disk on whatever path the OS chooses. For a DMX (active/active) it doesn't really matter, but a CLARiiON is active/passive: each LUN is owned by one SP, and I/O arriving via the non-owning SP forces a trespass. Correct?
kirkkak
Advisor

Re: Trespassing on CLARiiON LUNs in HPVM machines

We had a similar issue a couple of years ago where HP-UX caused excessive trespasses on the CLARiiON SPs. There was a Primus solution, emc21180, on Powerlink (we used the first hint: change the PV timeout to 90). I will look around for a newer version of it.
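If it helps, the timeout is set per physical volume on HP-UX; a minimal sketch (the device file is just an example, repeat it for every PV in the volume group):

# set the LVM I/O timeout for this physical volume to 90 seconds
pvchange -t 90 /dev/dsk/c16t0d3

# verify - pvdisplay reports it as "IO Timeout (Seconds)"
pvdisplay /dev/dsk/c16t0d3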
Solution

Re: Trespassing on CLARiiON LUNs in HPVM machines

Thomas,

Let's try and straighten this out.

You cannot do any sort of storage MPIO in a VM guest - be that Veritas DMP, LVM PVlinks or PowerPath. So there's no point even having DMP enabled in a guest.

You do all your multi-pathing in your VM host - so that means several choices:

i) You present the primary path to the guest and have no MPIO at all.

ii) You install EMC PowerPath and let that handle MPIO - IIRC with PowerPath for CLARiiON you still see both paths to the storage; you just use one and PowerPath takes care of any failover.

iii) You use LVM logical volumes with PVlinks on the VM host - these do work with CLARiiON, but there are some specific settings you'd need to use on the array - see Powerlink for details. In this case you present the logical volumes (/dev/vgXX/rlvolN) to the guests. This isn't as I/O efficient as raw disks presented to the guest.

iv) You use VxVM with DMP on the VM host. I don't know if there is an array support library for CLARiiON available from Symantec, but you should check. Again, like with LVM, you *do not* present the physical disks to the VM guest, but the volumes you create using VxVM. So you present something like /dev/vx/rdsk/mydg/myvol. You *cannot* present the DMP nodes in /dev/vx/rdmp to the VM guest - this is stated explicitly in the release notes:

http://docs.hp.com/en/T2767-90076/ch08s01.html


Let me be clear about this - if you want to present a whole-disk backing store to a VM rather than a file or an LVM/VxVM volume, you must use some sort of MPIO, and that *does not* include DMP. You can only use DMP when presenting a VxVM volume (not a disk) to a guest.
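As a rough sketch of option iv (the disk group and volume names and the size are invented examples, the guest name is from your config), on the VM host it would look something like:

# initialize the disk for VxVM and put it in a disk group
/etc/vx/bin/vxdisksetup -i c16t0d3
vxdg init mydg mydg01=c16t0d3

# create a volume on top of the DMP-managed disk(s)
vxassist -g mydg make myvol 100g

# present the VxVM volume (not the DMP node) to the guest
hpvmmodify -P cochrane -a disk:scsi::disk:/dev/vx/rdsk/mydg/myvol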

In several weeks we should have Integrity VM 4.0 released, which supports 11i v3 as the VM host. The good news here is that 11i v3 has native MPIO, so this challenge goes away completely.

HTH

Duncan




I am an HPE Employee
Telia BackOffice
Valued Contributor

Re: Trespassing on CLARiiON LUNs in HPVM machines

We turned the VMs into physical machines, as we were running a non-supported setup.

After that it worked like a charm.