
HPUX Virtual Machines

 
SOLVED
Alan Shearer_1
Frequent Advisor

HPUX Virtual Machines

Hi,

We're looking at using HPUX VM.

I'm pretty familiar with vPars, but never used HPUX VM.

I was wondering if anyone had experiences to share on the best way to set them up (e.g. VM disk setup best practices), any issues they had to work around, or ways to get the best performance.

Thanks,

Alan
6 REPLIES
Torsten.
Acclaimed Contributor

Re: HPUX Virtual Machines

Hi,

The documentation is here:

http://docs.hp.com/en/vse.html#HP%20Integrity%20Virtual%20Machines

Regarding the storage - some points to consider:

If you have multipathed storage, you need either a multipathing software solution or LVM-based backing storage.

But I recommend taking a look at the documentation first (manual, top ten tips, etc.).

Hope this helps!
Regards
Torsten.

__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.

__________________________________________________
No support by private messages. Please ask the forum!

If you feel this was helpful please click the KUDOS! thumb below!   
Hemanth Gurunath Basrur
Honored Contributor

Re: HPUX Virtual Machines

Hi Alan,

Refer to the "Using Ignite-UX with Integrity Virtual Machines" white paper available at

http://h71028.www7.hp.com/ERC/downloads/c00589728.pdf

and

"Introducing HP Integrity Virtual Machines" white paper available at

http://h71028.www7.hp.com/enterprise/downloads/Intro_VM_WP_12_Sept%2005.pdf.

Hope this helps.

Regards,
Hemanth
Alan Shearer_1
Frequent Advisor

Re: HPUX Virtual Machines

Hi,

Thank you both very much for your responses.

We will be using LVM PVlinks for the storage.

I am reviewing the docs.

I was hoping for some comments on people's real-world experiences with HPUX VMs - hopefully I can learn from the mistakes of others!

Easy points in exchange for any useful comments on planning and deploying HPUX VMs!

Thanks,

Alan
Torsten.
Acclaimed Contributor

Re: HPUX Virtual Machines

"hopefully learn from the mistakes of others "

Read the docs and you won't make so many mistakes ;-)

I have done the setup on several systems now and it is pretty easy. I would recommend creating a volume group for each virtual machine and putting all the OS/data storage into it (lvol1 for boot, lvol2 for data, ...).
If you have enough HBAs, try to balance the load across them.
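A minimal sketch of the per-guest volume group described above, using standard HP-UX LVM commands (the device names and sizes are hypothetical; adjust them to your own hardware):

```shell
# Create a dedicated volume group for one guest (hypothetical device names).
pvcreate /dev/rdsk/c5t0d0
mkdir /dev/vgvm01
mknod /dev/vgvm01/group c 64 0x010000
vgcreate /dev/vgvm01 /dev/dsk/c5t0d0

# One logical volume per virtual disk: lvol1 for the guest's boot disk,
# lvol2 for its data.  Sizes in MB are examples only.
lvcreate -L 36864 -n lvol1 /dev/vgvm01
lvcreate -L 20480 -n lvol2 /dev/vgvm01
```

Keeping each guest's backing store in its own VG makes it easy to see which LUNs belong to which VM and to move or remove a guest cleanly.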

Hope this helps!
Regards
Torsten.

Hemanth Gurunath Basrur
Honored Contributor

Re: HPUX Virtual Machines

Hi Alan,

Refer to the following links:

http://h71028.www7.hp.com/ERC/downloads/4AA0-5800ENW.pdf

http://whitepapers.techrepublic.com.com/whitepaper.aspx?&docid=294205&promo=100510

Also, search the ITRC forums for other discussions about HP-UX VM.

Regards,
Hemanth
Eric SAUBIGNAC
Honored Contributor
Solution

Re: HPUX Virtual Machines

Hi Alan,

I have some experience with building HP VM guests, but not yet enough to know whether my choices were right: my clients are not yet in production.

Well, some considerations:

- Be careful with RAM. I have seen 2 cases where we were too greedy, and in the end the host was almost frozen; it was impossible to stop a VM, etc. You must take the memory requirements from page 22 of "HP Integrity Virtual Machines A.03.00 Installation, Configuration, and Administration" and then add 2 or 4 GB. For example, if I need 20 GB for all VM guests, I will give the host 28 GB: 25 GB plus a bit more.

- Carefully consider and plan the number of virtual CPUs you give to a VM, especially if you plan to use iCAP to decrease the number of physical cores in order to increase those in another nPar. If a VM has more virtual CPUs (note: max 4 vCPUs) than the physical host, some strange behavior can occur.

- Work with entitlements in relative terms, not absolute ones. For example, on a VM host with 2 VM guests, if you want to give 60% to one VM and 25% to the second, it is better to give 12% to one and 5% to the other, which is the same ratio.

- It is not possible to add or remove a CPU on a running VM. Why?!

- You say you will use "LVM PVLinks for the storage". I assume you know that this is not supported inside the guest? By using PV Links at the host level for multipathing, you must present logical volumes to the VM guest, not raw devices like whole disks. So you will have to make a choice between performance and availability. For performance reasons I prefer working with raw devices in VM guests (have a look at page 10 of the attached document). So I do multipathing at a lower level than LVM, for example with SecurePath/AutoPath for EVA storage.

- In version 1 of HP VM it was possible to use PV Links inside a VM. Do you know why it is no longer possible? Hardware events, like a missing path, are not passed through to a VM guest. In fact, if you see a "NO_HW" path at the host level, the corresponding path will still show as "CLAIMED" at the guest level. A serious problem for handling PV Links, no?

- More? For the same reason, doing mirroring at the guest level is not recommended; not expressly unsupported, but not recommended. That means you still have to make a choice between performance and safety :-( Since LVM mirroring inside the guest is not expressly unsupported, I do LVM mirroring in the VM guest and change the PV timeout to a very low value. It seems to work this way.

- Doing some load balancing ... yes, a really nice idea, but generally it means a few more LUNs to achieve it. Not a problem on a physical host, but DON'T forget that you cannot have more than 30 virtual disks in a VM guest. Some of my clients are very angry about that ...
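The entitlement, storage, and timeout points above can be sketched with Integrity VM commands. Guest and device names are hypothetical and exact options may vary between HP VM versions, so treat this as an illustration rather than a recipe:

```shell
# Create a guest with 2 vCPUs, 4 GB of RAM and a modest relative
# entitlement (5% here, paired with 12% on a second guest to keep
# the same 60/25 ratio mentioned above).
hpvmcreate -P vm01 -c 2 -r 4G -e 5

# Back the guest's disk with a raw logical volume from the host VG,
# so PV Links multipathing stays at the host level.
hpvmmodify -P vm01 -a disk:scsi::lvol:/dev/vgvm01/rlvol1

# Inside the guest, if you mirror with LVM anyway, shorten the PV
# timeout as suggested above (seconds; the value is an example).
pvchange -t 10 /dev/dsk/c0t1d0

hpvmstart -P vm01
hpvmstatus -P vm01
```

Keeping entitlements low and proportional leaves headroom for the host itself, which matters given the RAM-pressure freezes described above.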


In my experience, Integrity VM is not yet a good product for production: too many limitations. It can be a good fit for development or test environments, that's all. Not as good as IBM micro-partitions, which is quite a pity :-(


But there are some good things:

- a better approach to disaster recovery scenarios at the host level
- a new vision of MC/ServiceGuard: you jump from the application level to the system level if you put a VM in a package.
- ease of deployment with Ignite (less than 1 hour)
- you can have a "master" system disk that you can clone rapidly to install a VM (less than half an hour)


Hope this helps, and please be kind about my English ;-)

Eric