
Integrity Virtual Machines, on HPUX 11.23 ia64

 
Johnny Damtoft
Regular Advisor

Integrity Virtual Machines, on HPUX 11.23 ia64

Hi all,

This area is very new to me, and the bad part is that we have performance issues on the servers, so I need help badly. :/

First, some background:
I have an nPar configured on a SuperDome server.
This nPar has 6 CPUs, 128GB Mem, 2x 4GB HBA, 4x 4Port GB-ETH NIC, and some other internal stuff.

This nPar is hosting 8 VMs (Virtual Machines) with the following hardware configuration:
SVR-1, 4CPU, 30GB Mem
SVR-2, 2CPU, 9GB Mem
SVR-3, 2CPU, 9GB Mem
SVR-4, 2CPU, 30GB Mem
SVR-5, 2CPU, 10GB Mem
SVR-6, 2CPU, 5GB Mem
SVR-7, 2CPU, 9GB Mem
SVR-8, 2CPU, 6GB Mem

All of the above VM servers are sharing Fibre Channel connections to XP storage arrays.

We are using the VM servers for test and development, but since HP installed the server 6 months ago, the servers are performing badly.

It seems that IO Wait might be the problem.

Now to my question:
- How do I perform a detailed analysis of the nPar and the VMs?
- How do I determine where the problem is?

If this is I/O wait, how can I prove it, and how do I tune the system?

Are there any recommendations for how the kernel parameters should be configured for an nPar that operates VMs?

--
Just to give you all a small example: I wanted to extract a tar file that contains 150MB of data.
Normally this is really fast, but on the VMs it took 4-8 seconds per file.

The average file size in the TAR archive is 3-4 MB.
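
A quick way to quantify this would be to time the extraction while watching the disks from a second session; just a sketch, and the archive name is hypothetical:

# timex tar -xf /tmp/testdata.tar     (reports real/user/sys time for the whole extraction)
# sar -d 5 12                         (per-device stats while the extraction runs; avwait well above avserv points at I/O queuing)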


Looking forward to hearing from you.



Johnny
12 REPLIES
Steven E. Protter
Exalted Contributor

Re: Integrity Virtual Machines, on HPUX 11.23 ia64

Shalom,

As far as I know, you need to perform I/O analysis on the machine level.

http://www.hpux.ws/?p=6

It is no wonder, with this many machines sharing the same Fibre Channel links, that I/O could be a problem.

You probably also need to do analysis on the disk array, and see which luns are having the issue.

A correlation of the data will hopefully lead you to a better I/O layout on the disk array, versus having to break and rebuild all of these machines.
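
For the machine-level look, a minimal sketch with the standard HP-UX tools, run on the VM Host and then inside a guest for comparison:

# sar -u 5 12     (the %wio column shows time spent waiting on I/O)
# sar -d 5 12     (per-device %busy, avque, avwait and avserv; a few devices with long queues point at the busy LUNs)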

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Tim Nelson
Honored Contributor

Re: Integrity Virtual Machines, on HPUX 11.23 ia64

The first place to review performance is at the VMHost.

Start there using Glance to review CPU, Mem and Disk.


If it is I/O, we will start asking what type of backing stores you are using: raw, lvol, or file system?

How is your VMHost configured? Enough swap? Have you reserved enough memory for the VMHost to operate?
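
For instance, a quick sketch using the usual HP-UX utilities:

# swapinfo -tam     (swap configuration and usage in MB)
# vmstat 5 5        (the pi/po columns show paging activity on the host)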

Many many possible issues but let's start with some stats from the VMHost.

Once an area is determined we can start reviewing what can be done.
Geoff Wild
Honored Contributor

Re: Integrity Virtual Machines, on HPUX 11.23 ia64

What does Capacity Advisor show you?

Using Systems Insight Manager you might be able to see the bottleneck...

Are you running anything on the VM Host other than Integrity Virtual Machines?

Also, is the OE Foundation?

Here are my kernel parms on an rx6600 11.23 FOE VM Host:

# kctune
Tunable Value Expression Changes
NSTREVENT 50 Default
NSTRPUSH 16 Default
NSTRSCHED 0 Default
STRCTLSZ 1024 Default
STRMSGSZ 0 Default
acctresume 4 Default
acctsuspend 2 Default
aio_listio_max 256 Default Immed
aio_max_ops 2048 Default Immed
aio_monitor_run_sec 30 Default Immed
aio_physmem_pct 10 Default
aio_prio_delta_max 20 Default Immed
aio_proc_thread_pct 70 Default Immed
aio_proc_threads 1024 Default Immed
aio_req_per_thread 1 Default Immed
allocate_fs_swapmap 0 Default
alwaysdump 0 Default Immed
chanq_hash_locks 256 Default
core_addshmem_read 0 Default Immed
core_addshmem_write 0 Default Immed
create_fastlinks 0 Default
dbc_max_pct 1 1 Immed
dbc_min_pct 1 1 Immed
default_disk_ir 0 Default
disksort_seconds 0 Default
dma32_pool_size 268435456 Default
dmp_rootdev_is_vol 0 Default
dmp_swapdev_is_vol 0 Default
dnlc_hash_locks 512 Default
dontdump 0 Default Immed
dst 1 Default
dump_compress_on 1 Default Immed
enable_idds 0 Default
eqmemsize 15 Default
executable_stack 0 0 Immed
fr_statemax 800000 Default Immed
fr_tcpidletimeout 86400 Default Immed
fs_async 0 Default
fs_symlinks 20 Default Immed
ftable_hash_locks 64 Default
gvid_no_claim_dev 0 Default
hp_hfs_mtra_enabled 1 Default
intr_strobe_ics_pct 100 Default Auto
io_ports_hash_locks 64 Default
ioforw_timeout 0 Default Auto
ksi_alloc_max 33600 Default Immed
ksi_send_max 32 Default
max_acct_file_size 2560000 Default Immed
max_async_ports 50 Default
max_mem_window 0 Default
max_thread_proc 3000 3000 Immed
maxdsiz 2063835136 2063835136 Immed
maxdsiz_64bit 34359738368 34359738368 Immed
maxfiles 2048 Default
maxfiles_lim 4096 Default Immed
maxrsessiz 8388608 Default
maxrsessiz_64bit 8388608 Default
maxssiz 8388608 Default Immed
maxssiz_64bit 268435456 Default Immed
maxtsiz 100663296 Default Immed
maxtsiz_64bit 1073741824 Default Immed
maxuprc 256 Default Immed
maxvgs 80 80
msgmap 1026 Default
msgmax 8192 Default Immed
msgmnb 16384 Default Immed
msgmni 512 Default
msgseg 8192 Default
msgssz 96 Default
msgtql 1024 Default
ncdnode 150 Default
nclist 8292 Default
ncsize 8976 Default
nfile 65536 Default Auto
nflocks 4096 Default Auto
ninode 4880 Default
nkthread 8416 Default Immed
nproc 4200 Default Immed
npty 60 Default
nstrpty 60 60
nstrtel 60 Default
nswapdev 10 Default
nswapfs 10 Default
nsysmap 8400 Default
nsysmap64 8400 Default
o_sync_is_o_dsync 0 Default
pa_maxssiz_32bit 83648512 Default
pa_maxssiz_64bit 536870912 Default
pagezero_daemon_enabled 1 Default Immed
pfdat_hash_locks 128 Default
physical_io_buffers 1280 Default Auto
region_hash_locks 128 Default
remote_nfs_swap 0 Default
rng_bitvals 9876543210 Default
rng_sleeptime 2 Default
rtsched_numpri 32 Default
scroll_lines 100 Default
scsi_max_qdepth 8 Default Immed
scsi_maxphys 1048576 Default
secure_sid_scripts 1 Default Immed
semaem 16384 Default
semmni 2048 Default
semmns 4096 Default
semmnu 256 Default
semmsl 2048 Default Immed
semume 100 Default
semvmx 32767 Default
sendfile_max 0 Default
shmmax 1073741824 Default Immed
shmmni 400 Default Immed
shmseg 300 Default Immed
st_ats_enabled 0 Default
st_fail_overruns 0 Default
st_large_recs 0 Default
st_san_safe 0 Default Immed
streampipes 0 Default
swapmem_on 0 0
swchunk 4096 4096
sysv_hash_locks 128 Default
tcphashsz 2048 Default
timeslice 10 Default
timezone 420 Default
unlockable_mem 0 Default
vnode_cd_hash_locks 128 Default
vnode_hash_locks 128 Default
vol_checkpt_default 10240 Default
vol_dcm_replay_size 262144 Default
vol_default_iodelay 50 Default
vol_fmr_logsz 4 Default
vol_max_bchain 32 Default
vol_max_nconfigs 20 Default
vol_max_nlogs 20 Default
vol_max_nmpool_sz 4194304 Default Immed
vol_max_prm_dgs 1024 Default
vol_max_rdback_sz 4194304 Default Immed
vol_max_vol 8388608 Default
vol_max_wrspool_sz 4194304 Default Immed
vol_maxio 256 Default
vol_maxioctl 32768 Default
vol_maxkiocount 2048 Default
vol_maxparallelio 256 Default
vol_maxspecialio 256 Default
vol_maxstablebufsize 256 Default
vol_min_lowmem_sz 532480 Default Immed
vol_mvr_maxround 256 Default
vol_nm_hb_timeout 10 Default
vol_rootdev_is_vol 0 Default
vol_rvio_maxpool_sz 4194304 Default Immed
vol_subdisk_num 4096 Default
vol_swapdev_is_vol 0 Default
vol_vvr_transport 1 Default
vol_vvr_use_nat 0 Default
volcvm_cluster_size 16 Default
volcvm_smartsync 1 Default
voldrl_max_drtregs 2048 Default
voldrl_min_regionsz 512 Default
voliomem_chunk_size 65536 Default
voliomem_maxpool_sz 4194304 Default
voliot_errbuf_dflt 16384 Default
voliot_iobuf_default 8192 Default
voliot_iobuf_limit 131072 Default
voliot_iobuf_max 65536 Default
voliot_max_open 32 Default
volpagemod_max_memsz 6144 Default Immed
volraid_rsrtransmax 1 Default
vps_ceiling 16 Default
vps_chatr_ceiling 1048576 Default
vps_pagesize 4 Default
vx_maxlink 32767 Default
vx_ninode 0 Default
vxfs_bc_bufhwm 0 Default
vxfs_ifree_timelag 0 Default Immed
vxtask_max_monitors 32 Default
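
(Note dbc_max_pct and dbc_min_pct set to 1 above; keeping the file system buffer cache small on a VM Host is the usual recommendation, since it frees memory for the guests. A sketch of how to query or change such a tunable with kctune, if you want to compare your host:)

# kctune -q dbc_max_pct
# kctune dbc_max_pct=1 dbc_min_pct=1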


Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Eric SAUBIGNAC
Honored Contributor

Re: Integrity Virtual Machines, on HPUX 11.23 ia64

Bonsoir Johnny,

Many good things have been said in previous posts :-) So only 2 remarks :

- You have configured 108 GB for your VMs. If you use the formulas given in the configuration guide, you will see that you need exactly 128 GB of physical RAM to achieve this.

You have no headroom. I guess that if you stop a VM and then try to restart it, you will have some problems. Does the host swap? Maybe you should reduce the guests' memory a little or give more physical RAM to the host.
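
(For reference, the guest memory listed above adds up to 30 + 9 + 9 + 30 + 10 + 5 + 9 + 6 = 108 GB, which leaves only 20 GB of the 128 GB of physical RAM for the VM Host itself plus the per-guest overhead described in the configuration guide.)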

- There is a new version of HPVM. Here is an extract of what you can read in HP's documentation :

"The HP Integrity Virtual Machines A.03.50 release introduces accelerated storage and networking I/O products (AVIO) to improve the overall I/O performance. The Accelerated Virtual I/O (AVIO) products provide up to 60% reduction in service demand and as much as twice the throughput over the existing virtualized storage and networking Integrity VM solutions."

HP (in a non-official way ;-) agrees that they have some difficulties with I/O. If you take a look at CPU utilization inside the guest, you will probably see high CPU utilization under %sys (sar -u 5 100). This new version is said to have a great impact on that.

So, I have not yet tested this version, but I guess you should ...

- And a third one: as Tim has suggested, the type of backing store also has a great impact. How is it configured?

Hope this will help

Regards

Eric

likid0
Honored Contributor

Re: Integrity Virtual Machines, on HPUX 11.23 ia64

Yes, have a look at whether you are on 3.50 with the new Accelerated Virtual I/O (AVIO) solution. When I installed it on a configuration similar to yours (except that I have 4 HBAs for the 8 systems; only 2 is quite a gamble)...

Windows?, no thanks
likid0
Honored Contributor

Re: Integrity Virtual Machines, on HPUX 11.23 ia64

Sorry, I cut my post short before. As I said, when I installed it, disk I/O times improved a lot.
Windows?, no thanks
Michael Steele_2
Honored Contributor

Re: Integrity Virtual Machines, on HPUX 11.23 ia64

Hi Johnny:

Please confirm that you've got two machines: a Superdome that's running fine, and the one you described with "...We are using the VM servers for test and development, but since HP installed the server 6 months ago, the servers are performing badly..."

First, there is always going to be some additional overhead on any guest, while the host will perform noticeably better. So it's important to keep your patch level up to date with the latest release, since in my opinion this is still a work in progress for HP.

Also, are you using HPVM or not? The command set will be different.

Here are some commands to help you analyze performance:

vparstatus -A -v (* specific to each guest *)
hpvmstatus
http://docs.hp.com/en/T2767-90024/ch08s01.html

HPVM
http://docs.hp.com/en/T2767-90024/index.html

Also, I liked Eric's response best. It underscores the importance of keeping up to date with the latest patches, given HP's problems with HPVM.
Support Fatherhood - Stop Family Law
Johnny Damtoft
Regular Advisor

Re: Integrity Virtual Machines, on HPUX 11.23 ia64

Hi all,

Thanks for all of your replies. I'll try to give as much information as I can at this time :)

First of all, yes, we have more than one Superdome in production; there are two.

SD1 for production, without VM
SD2 for test/devel, with VM (performance problems)

On SD2 there are 3 nPars, and one of them is handling the 8 VMs.

Currently I'm not sure which kind of "backing store" the VMs are using, but I'm pretty sure that we are NOT using file backing stores.
- How can I see the configuration of the backing stores from the nPar?

The VM software is at version A.03.00, so here is something we need to be working on.

I'll get back to you with more information later on - but please help me with the above questions.

Thanks.

// Johnny
Torsten.
Acclaimed Contributor

Re: Integrity Virtual Machines, on HPUX 11.23 ia64

The result of these commands will help to get more info:

# hpvmstatus
# hpvmstatus -r
# hpvmstatus -P
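
To see the backing stores themselves, the per-guest view should help; the flags below are from memory, so check hpvmstatus(1M) on A.03.00:

# hpvmstatus -P <guest_name>     (detail for one guest; the device section shows whether each disk is backed by a raw disk, an lvol or a file)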

Hope this helps!
Regards
Torsten.

__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.

__________________________________________________
No support by private messages. Please ask the forum!

If you feel this was helpful please click the KUDOS! thumb below!