vxfsd won't decrease process usage
03-07-2006 11:49 PM
Is this normal? It seems as though it just keeps climbing, but my disk usage is low.
I am running a large Oracle server, but it has 15 GB of buffer cache, so most of its hits should be memory-cache hits. That shouldn't matter anyhow, as it uses raw partitions and wouldn't affect vxfsd...
root@ims:/root-> sar -d 1 1
HP-UX ims B.11.23 U ia64 03/08/06
07:44:13 device %busy avque r+w/s blks/s avwait avserv
07:44:14 c0t6d0 2.97 0.50 5 48 0.00 17.97
c4t6d0 2.97 0.50 4 44 0.00 16.64
c14t0d1 2.97 0.50 3 34 0.00 9.64
c14t1d2 0.99 0.50 6 28 0.00 2.94
c14t1d5 0.99 0.50 3 40 0.00 4.90
c14t14d1 0.99 0.50 1 16 0.00 2.40
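(For reference, a quick way to watch vxfsd itself rather than the disks, a minimal sketch assuming the XPG4 ps on 11.23; UNIX95 is set only for the one command:
UNIX95= ps -e -o pcpu,pid,comm | grep vxfsd
or just keep an eye on it in top or glance.)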
Solved! Go to Solution.
03-07-2006 11:55 PM
Re: vxfsd won't decrease process usage
You can probably take a look at PHKL_34122:
http://www1.itrc.hp.com/service/patch/patchDetail.do?BC=patch.breadcrumb.main|patch.breadcrumb.search|&patchid=PHKL_34122&context=hpux:800:11:11
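To check whether the patch is already installed, something like this should work with the standard SD-UX tools:
swlist -l patch | grep -i PHKL_34122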
-Arun
03-07-2006 11:57 PM
Re: vxfsd won't decrease process usage
http://docs.hp.com/en/5580/Misconfigured_Resources.pdf
The JFS inode cache is probably why.
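A quick way to pull just the inode cache numbers, assuming 11.23's kctune and a vxfsstat that supports -i for inode cache statistics:
kctune vx_ninode          # 0 means the cache is sized dynamically
vxfsstat -i /             # inode cache counters for that filesystem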
Steve Steel
03-08-2006 12:04 AM
Re: vxfsd won't decrease process usage
Try running "swapinfo -tam" several times and observe whether the reserve figure keeps increasing. Some Oracle processes can reserve swap space, and when that happens vxfsd's CPU utilization goes up.
Read about pseudo-swap too.
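A minimal loop for that (plain POSIX shell; the 60-second interval is arbitrary):
while true
do
    date
    swapinfo -tam
    sleep 60
done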
Schimidt
03-08-2006 01:38 AM
Re: vxfsd won't decrease process usage
As for the swapinfo, I have 48 GB of RAM, of which 9-10 GB is free, so I don't swap to disk.
The JFS inode cache is a possibility, I guess.
Kernel:
vx_ninode = 0
ninode = 8192 --32% used
Here is a dump of the vxfsstat output:
09:29:22.230 Wed Mar 08 2006 -- absolute sample
Lookup, DNLC & Directory Cache Statistics
593920 maximum entries in dnlc
8923646028 total lookups 94.60% fast lookup
8940525235 total dnlc lookup 94.78% dnlc hit rate
4625696658087215104 total enter 94.78 hit per enter
0 total dircache setup 0.00 calls per setup
494364473 total directory scan 1.29% fast directory scan
inode cache statistics
209509 inodes current 594173 peak 593755 maximum
1119419498 lookups 59.59% hit rate
15915010 inodes alloced 15705501 freed
3214 sec recycle age [not limited by maximum]
1800 sec free age
vxi_alloc_emap 26153935 vxi_alloc_expand_retry 188
vxi_alloc_find_retry 28 vxi_alloc_findfail 15274275
vxi_alloc_findfix 0 vxi_alloc_mapflush 0
vxi_alloc_prev 1431837 vxi_alloc_search 11113387
vxi_alloc_smap 13 vxi_alloc_sumclean 0
vxi_alloc_sumsum 4937292 vxi_alloc_try 12545224
vxi_async_iupdat 11417533 vxi_async_realloc 755490
vxi_async_shorten 1619811 vxi_bawrite 25738962
vxi_bcache_curkbyte 926976 vxi_bcache_maxkbyte 969472
vxi_bcache_recycleage 49173 vxi_bc_chunksteal 0
vxi_bc_hits 2185360835 vxi_bc_lookups 2208473035
vxi_bc_reuse 23108011 vxi_bc_subflush 1183
vxi_bc_waits 1473351 vxi_bdwrite 268927487
vxi_bdwrite_tflush 82530 vxi_bmap 4145151159
vxi_bmap_cache 3347462101 vxi_bmap_indirect 14867
vxi_bread 23011939 vxi_brelse 1682821996
vxi_brelse_tflush 0 vxi_btwrite 245564
vxi_bufspace_delay 0 vxi_bufspace_tranflush 0
vxi_bwrite 19112200 vxi_clonemap 438788
vxi_cutwrite 34 vxi_dirblk 1210590978
vxi_dirlook 496881199 vxi_dirlook_dot 15428065
vxi_dirlook_dotdot 172241 vxi_dirlook_notfound 13742138
vxi_fast_lookup 8440797071 vxi_dnlc_hit 8254816798
vxi_dnlc_enter 462961862 vxi_dnlc_miss 466376243
vxi_dnlc_neg_hit 217927772 vxi_dnlc_neg_enter 13742139
vxi_dnlc_size 593920 vxi_dirscan 487977912
vxi_fast_dirscan 6353027 vxi_eau_cleaned 432900
vxi_eau_expand 189 vxi_eau_unexpand 22
vxi_eau_write 143399 vxi_flush_throttle 12852
vxi_getblk 1833663511 vxi_iaccess 1434083705
vxi_iflush_cut 1 vxi_icache_allocedino 15911282
vxi_icache_freedino 15701235 vxi_icache_curino 210047
vxi_icache_inuseino 2561 vxi_icache_maxino 593755
vxi_icache_peakino 594173 vxi_icache_recycleage 3797
vxi_ifree_timelag 1800 vxi_iget 1119383710
vxi_iget_found 667001931 vxi_iget_loop 1191310427
vxi_iinactive 5087755917 vxi_iinactive_front 2434656
vxi_iinactive_slow 24616644 vxi_inofail 0
vxi_inopage 397888207 vxi_ipage 54283536
vxi_iupdat 15316599 vxi_iupdat_cluster 148402931
vxi_log 126446500 vxi_log_blks 80336658
vxi_log_delayed 125998060 vxi_log_flush 26748368
vxi_log_idle 37783 vxi_log_write 27251030
vxi_lread 2204684426 vxi_lwrite 94799155
vxi_maj_fault 0 vxi_map_write 1643126
vxi_pagecluster 395378 vxi_pagestrategy 4542386
vxi_pgin 0 vxi_pgout 0
vxi_pgpgin 0 vxi_pgpgout 0
vxi_execpgin 0 vxi_execpgout 0
vxi_anonpgin 0 vxi_anonpgout 0
vxi_fspgin 0 vxi_fspgout 0
vxi_physmem_mbyte 49119 vxi_qtrunc 3959241
vxi_ra 56084595 vxi_randwrite_throttle 0
vxi_rapgpgin 0 vxi_rasectin 0
vxi_read_dio 8014999 vxi_read_rand 339444148
vxi_read_seq 164042407 vxi_sectin 0
vxi_sectout 0 vxi_setattr_nochange 540370
vxi_sumupd 101943 vxi_superwrite 1599482
vxi_sync_delxwri 0 vxi_sync_inode 0
vxi_sync_page 0 vxi_ntran -214
vxi_tflush_cut 33 vxi_tflush_inode 5153180
vxi_tflush_map_async 364742 vxi_tflush_map_clone 2883
vxi_tflush_map_sync 488146 vxi_tran_commit 126446501
vxi_tran_low 6550 vxi_tran_retry 1
vxi_tran_space 143425013 vxi_tran_subfuncs 255459429
vxi_tranidflush 23630928 vxi_tranidflush_flush 23774162
vxi_tranidflush_none 80164031 vxi_tranleft_asyncflush 4931766
vxi_tranleft_delay 16 vxi_tranleft_syncflush 96268
vxi_tranlogflush 92368019 vxi_tranlogflush_flush 2219257
vxi_trunc 6393350 vxi_unlockmap_async 350110
vxi_write_asynccnt 159733829 vxi_write_dio 460669
vxi_write_donetran 16 vxi_write_logged 15340091
vxi_write_logonly 0 vxi_write_only 2521831
vxi_write_rand 116934890 vxi_write_seq 63497333
vxi_write_synccnt 2375803 vxi_write_throttle 0
vxi_clone_create 0 vxi_clone_remove 0
vxi_clone_rename 0 vxi_clone_stat 0
vxi_clone_convnodata 0 vxi_clone_mkpfset 0
vxi_clone_cntl 0 vxi_clone_dispose 0
vxi_read_to_map 8323 vxi_virtmem_mbyte 0
vxi_clustblk_ino 30563681 vxi_dirc_setup 0
vxi_dirc_purge 0 vxi_dirc_hit 0
vxi_dirc_miss 0 vxi_dirc_spchit 0
vxi_dirc_spcmiss 0
Anything stand out as unusual?
03-08-2006 01:51 AM
Re: vxfsd won't decrease process usage
It could be that your buffer cache is all used up and it's having to swap that out, or it could be that you don't have enough disk swap configured for your system.
03-08-2006 01:56 AM
Re: vxfsd won't decrease process usage
My buffer cache is only at 82% usage, and you can see that I am using 0% disk swap:
              Kb        Kb        Kb  PCT  START/       Kb
TYPE       AVAIL      USED      FREE USED   LIMIT  RESERVE PRI NAME
dev      4194304         0   4194304   0%       0        -   1 /dev/vg00/lvol2
reserve        -   4194304  -4194304
memory  50298088  25596328  24701760  51%
03-08-2006 02:02 AM
Re: vxfsd won't decrease process usage
Your reserve is the same as your available swap, which means you've used all of it (HP-UX reserves room for every process in case it needs to swap).
Secondly, you have 48 GB of RAM and only 4 GB of device swap, which is about 92 GB less than HP recommends (generally 2x RAM worth of disk swap, excluding DBC).
I think your problem would go away if you added a good chunk of device swap.
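A sketch of adding device swap, assuming free space in vg00; the lvol name and size are illustrative only:
lvcreate -L 16384 -n lvswap2 /dev/vg00    # 16 GB logical volume (-L is in MB)
swapon -p 1 /dev/vg00/lvswap2             # enable it at priority 1
Add a matching swap entry to /etc/fstab so it comes back after a reboot (see fstab(4) for the exact format).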
03-08-2006 02:15 AM
Re: vxfsd won't decrease process usage
The figures are very high but not absurd.
If you are using Oracle, you are using Oracle's own buffer caching, so you can tune the HP standard buffer cache down. See the explanation in appendix A of
ftp://eh:spear9@hprc.external.hp.com/memory.htm
Also consider making the inode cache static. By default you get a dynamic cache which grows and shrinks over time, so at regular intervals one of the vxfsd daemons scans the inode free list and frees inactive inodes:
15915010 inodes alloced 15705501 freed
The kernel tunable vx_ifree_timelag controls how long an inode stays inactive before it is freed; the default is 1800 seconds (30 minutes). Setting vx_ifree_timelag to 0 makes the inode cache static.
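On 11.23 that should be settable with kctune (it was kmtune on older releases); check the man page for whether the change takes effect immediately on your release:
kctune vx_ifree_timelag               # query the current value
kctune vx_ifree_timelag=0             # make the inode cache static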
Steve Steel
03-08-2006 02:16 AM
Re: vxfsd won't decrease process usage
We too have large servers (32-64 GB of memory, 8-16 CPUs) with DirectIO/raw storage, and on average vxfsd uses anywhere from 20 to 50 percent of a single CPU.
Here's one on a comparable 11.11 server (PA-RISC):
9 ? 62 root 152 20 16000K 16000K run 4792:04 31.99 31.93 vxfsd
03-08-2006 03:52 AM
Re: vxfsd won't decrease process usage
Is your swapinfo output correct?
              Kb        Kb        Kb  PCT  START/       Kb
TYPE       AVAIL      USED      FREE USED   LIMIT  RESERVE PRI NAME
dev      4194304         0   4194304   0%       0        -   1 /dev/vg00/lvol2
reserve        -   4194304  -4194304
memory  50298088  25596328  24701760  51%
Do you have only 4 GB of swap space and 48 GB of RAM?
03-08-2006 11:13 PM
Re: vxfsd won't decrease process usage
Adding more swap did nothing to change vxfsd taking 30% of one CPU (a reboot resets it, I think... or maybe the patch does?).
It will, however, cost me a reboot this weekend to bring swap up past 33 GB, as my swap chunk tunable is not large enough (see the note below).
I'll just have to agree with Nelson that this is just normal.
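For anyone hitting the same wall: if memory serves, the total-swap ceiling comes from the swchunk tunable, and the roughly 32 GB cap you get with the default value matches the 33 GB figure above. A sketch, assuming 11.23's kctune:
kctune swchunk                        # default 2048 (1 KB units)
kctune swchunk=8192                   # raises the cap about 4x; swchunk is static, so it takes a reboot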
Thanks all!
03-09-2006 12:47 AM
Solution
I have over a dozen fairly large servers in the same league as yours (raw, DirectIO, Oracle, the same DBC buffer ranges), and vxfsd behaves exactly the same across all of them.
Again, how heavily vxfsd is being used is judged not by individual sar disk stats but by filesystem stats (vxfsstat) or "glance -i".
HTH.