
How to configure a client with over 150 mount points

 
dirkdevos
Frequent Advisor

How to configure a client with over 150 mount points

Hi,

We are trying to see if we can move one of our physical servers to a client on a VSP 6.3 blade server. Looking at some best practices and the client limitations imposed by the virtualization software, we are going to have some issues defining the 150-plus mount points. The disks are all defined on our 3PAR and accessed via two fibre controllers on the server. The disks are contained in 15 volume groups.

If anybody has any suggestions on how to configure the client, or whether this is even possible to do, I would appreciate them.

Thanks.

6 REPLIES
Ajin_1
Valued Contributor

Re: How to configure a client with over 150 mount points

Hi

You could use storage replication to replicate those LUNs and present the same to the vPar.

Thanks & Regards
Ajin.S
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Dave Olker
HPE Pro

Re: How to configure a client with over 150 mount points

What limitation are you concerned about?  VMs and vPars can handle 256 AVIO disks, and if you configure the VM or vPar with NPIV virtual HBAs, those can address 2048 LUNs.
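
For reference, adding NPIV vHBAs to a guest might look roughly like the sketch below; the guest name, sizing and FC device paths are assumptions, so check hpvmcreate(1M)/hpvmmodify(1M) on your VSP:

    # Create the guest, then add one NPIV vHBA per physical FC port
    # (guest name, CPU/memory sizing and /dev/fcd* paths are examples)
    hpvmcreate -P myvm -c 4 -r 8G
    hpvmmodify -P myvm -a hba:avio_stor::npiv:/dev/fcd0
    hpvmmodify -P myvm -a hba:avio_stor::npiv:/dev/fcd1

With a GUID server configured, the virtual WWNs for each vHBA are allocated automatically.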

Dave

I work for HPE

[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
Eric SAUBIGNAC
Honored Contributor

Re: How to configure a client with over 150 mount points

Hello,

 

Assuming the box you want to move runs HP-UX 11iv3, my general guideline would be:

- First, check the HPVM 6.3 release notes to verify that your 11iv3 release is supported (March 2012 or later?). If not, you will have to update it first.

- Unless it is already done, install /opt/hpvm/guest-images/hpux/11iv3/hpvm_guest_depot.11iv3.sd from the VSP server onto the box you want to move

- In a working directory under vg00, export all VGs with the -p option [ the configuration file is generated but the export is not really done ] and with the -s option to have the VGID in the configuration file (see the sketch just after this list)

- Create a full vg00 Ignite backup [ -x inc_entire=vg00 ] to an Ignite server.

- Create the VM on the VSP server with NPIV virtual HBA.

- Zone the VM's NPIV virtual WWNs to the 3PAR

- Create the VM host in the 3PAR IMC or SSMC, with the HP-UX 11iv3 persona. You will have to do that manually.

- Present to the VM all the VVs that are currently presented to the physical box, except vg00's VV if the physical box boots from the SAN. If possible, try to use the same LUN numbers as on the source box. [ Not sure it is mandatory, but it is an old reflex from previous HP-UX releases ]

- From the 3PAR, create a dedicated VV for the new VM's vg00, at least as large as the source one, and present it to the VM.

- Stop the source box.

- If you want to avoid any risk of a race condition on the data disks between the physical box and the VM, unzone the physical box or unpresent the disks from the 3PAR [ the first choice is simpler in case you need to go back to the physical box ]

- Install the VM from the Ignite image created at the beginning
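
The export and backup steps above could look roughly like this on the physical box; the VG names, map-file directory and Ignite server name are assumptions:

    # Preview-export every data VG: -p only generates the map file
    # (no real export is done), -s records the VGID in it
    mkdir -p /root/vgmaps
    for vg in vgdata01 vgdata02 vgdata03    # ...all 15 data VGs
    do
        vgexport -p -s -v -m /root/vgmaps/$vg.map $vg
    done

    # Full vg00 Ignite backup to an Ignite server
    make_net_recovery -s igniteserver -x inc_entire=vg00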

Because the instantiation of disks is now aware of WWNs, I guess that the data disks' DSFs will be the very same between the physical incarnation of your system and the virtual one. If so, the job is almost finished.
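
One way to check this in the VM is to compare each disk's WWID with what the physical box reported; the device file below is just an example:

    # Agile view of all disks, then the WWID of one of them
    ioscan -funNC disk
    scsimgr get_attr -D /dev/rdisk/disk4 -a wwid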

If not, you will have to export all data VGs in the VM and import them back, with the -s option, from the configuration files generated at the beginning. If you can't work with NPIV [ for example, if the box you want to move is HP-UX 11iv2 ] and AVIO is the only way to configure the VM's disks, this step will be mandatory.
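
A minimal sketch of that re-import for one VG, assuming the map files were copied into the VM (VG name and minor number are examples):

    # Re-create the VG node, then let -s scan the disks for the VGID
    # recorded in the map file
    mkdir /dev/vgdata01
    mknod /dev/vgdata01/group c 64 0x010000
    vgimport -s -v -m /root/vgmaps/vgdata01.map vgdata01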

Finally, you will probably have to modify the network settings (not the same lan instance?).
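
For example, to see which lan instances the VM actually received and adjust accordingly:

    # List the lan instances seen by the VM, then fix the interface
    # names in the boot-time network configuration
    lanscan
    vi /etc/rc.config.d/netconf    # e.g. INTERFACE_NAME[0]="lan0"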

Then read, and read again, the release notes and admin guide for required patches and so on, and take any recommended actions.

In case you need to go back to the physical box, just stop the VM, zone the physical box back to the 3PAR or re-present the data disks to it, and make sure the VM can no longer access the data disks.

As said, this is just a general guideline, and you can imagine other ways to do it. For example, you could also proceed with a DRD clone, or start the VM to see its WWNs from the 3PAR management console before creating the host ... You could also secure the process a little more by taking a snapshot of all the data disks before installing the VM. And so on ...

 

Hope this helps,

Eric

[ Lumbago is good for finding enough time to post in forums ;-) ]

 

dirkdevos
Frequent Advisor

Re: How to configure a client with over 150 mount points

Dave,

Thanks for the information. It turns out that the person who gave me the limit of 30 virtual storage devices per virtual machine was looking at some old documentation. I am so sorry about that.

I have been looking at using NPIV. Following the documentation I have, I set up a GUID server to manage the WWNs used for NPIV, but I have not been able to see how to tie that to the LUNs on our 3PAR.

Thanks,

     Dirk

 

 

dirkdevos
Frequent Advisor

Re: How to configure a client with over 150 mount points

Eric,

Thank you for the information. We had already done most of the steps you mentioned. We are not planning on cloning the system, as we have to install a new DBMS on the virtual server and the bulk of the LUNs contain our database. So we installed the latest version of HP-UX 11i v3 and VSP 6.3.5 and got everything up and running. Our two big concerns were the number of LUNs and multipathing in the guest. Since, as Dave mentioned, the guest can handle 256 AVIO disks, that just leaves the multipathing issue.

As we have not used NPIV before, nor had a real need for multipathing, this part is still a little bit of a mystery. We have the GUID server configured and have created NPIV resources, but linking the WWNs and the LUNs is the issue.

Thanks again for the help so far,

    Dirk

Dave Olker
HPE Pro

Re: How to configure a client with over 150 mount points

The GUID manager manages the WWNs of the virtual NPIV HBAs, not the LUNs.  On the 3PAR you need to create a host persona containing the WWNs that the LUNs will be presented to.  The GUID manager is the one that assigns these "virtual" HBAs their "virtual" WWNs.  Once the HBAs have their assigned WWNs, you use those WWNs in the host persona so that any LUNs you present on the 3PAR to the host will be visible to the VM/vPar using the virtual HBA.
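
On the 3PAR CLI, that could look something like the following; the host name, WWNs, VV name, LUN number and persona value are assumptions, so verify them against your 3PAR OS release:

    # Create the host from the vHBAs' virtual WWNs (persona 13 = HPUX
    # on recent 3PAR OS releases), then export a VV to it and verify
    createhost -persona 13 myvm 50014380029B0010 50014380029B0012
    createvlun vv_data01 1 myvm
    showhost myvm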

The GUID manager ensures that once you assign a virtual WWN to one vHBA, that WWN will not be re-used by a different VM/vPar.  The GUID manager is not required, but HPE provides it as a free service to help ensure you do not end up with multiple virtual HBAs using the same WWN, with multiple VMs/vPars somehow thinking they own a particular LUN.

Dave

I work for HPE

[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]