<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: vxfsd won't decrease process usage in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965488#M416685</link>
    <description>Sorry .. that should read "You've used all of your available SWAP"&lt;BR /&gt;</description>
    <pubDate>Wed, 08 Mar 2006 10:15:41 GMT</pubDate>
    <dc:creator>Kent Ostby</dc:creator>
    <dc:date>2006-03-08T10:15:41Z</dc:date>
    <item>
      <title>vxfsd won't decrease process usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965479#M416676</link>
      <description>6   ?       57 root     152 20 33192K 29504K run   4281:47 30.62 30.56 vxfsd&lt;BR /&gt;&lt;BR /&gt;Is this normal? It seems as though it just keeps climbing.&lt;BR /&gt;But my disk usage is low...&lt;BR /&gt;I am running a large Oracle server, but it has 15 GB of buffer cache, so most of its hits should come from memory cache. But that wouldn't matter anyhow, as it uses raw partitions and wouldn't affect vxfsd...&lt;BR /&gt;&lt;BR /&gt;root@ims:/root-&amp;gt; sar -d 1 1&lt;BR /&gt;&lt;BR /&gt;HP-UX ims B.11.23 U ia64    03/08/06&lt;BR /&gt;&lt;BR /&gt;07:44:13   device   %busy   avque   r+w/s  blks/s  avwait  avserv&lt;BR /&gt;07:44:14   c0t6d0    2.97    0.50       5      48    0.00   17.97&lt;BR /&gt;           c4t6d0    2.97    0.50       4      44    0.00   16.64&lt;BR /&gt;          c14t0d1    2.97    0.50       3      34    0.00    9.64&lt;BR /&gt;          c14t1d2    0.99    0.50       6      28    0.00    2.94&lt;BR /&gt;          c14t1d5    0.99    0.50       3      40    0.00    4.90&lt;BR /&gt;         c14t14d1    0.99    0.50       1      16    0.00    2.40&lt;BR /&gt;</description>
      <pubDate>Wed, 08 Mar 2006 07:49:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965479#M416676</guid>
      <dc:creator>Mark Huff II</dc:creator>
      <dc:date>2006-03-08T07:49:14Z</dc:date>
    </item>
    <item>
      <title>Re: vxfsd won't decrease process usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965480#M416677</link>
      <description>Hello, &lt;BR /&gt;&lt;BR /&gt;You could probably take a look at PHKL_34122:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www1.itrc.hp.com/service/patch/patchDetail.do?BC=patch.breadcrumb.main" target="_blank"&gt;http://www1.itrc.hp.com/service/patch/patchDetail.do?BC=patch.breadcrumb.main&lt;/A&gt;|patch.breadcrumb.search|&amp;amp;patchid=PHKL_34122&amp;amp;context=hpux:800:11:11&lt;BR /&gt;&lt;BR /&gt;-Arun</description>
      <pubDate>Wed, 08 Mar 2006 07:55:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965480#M416677</guid>
      <dc:creator>Arunvijai_4</dc:creator>
      <dc:date>2006-03-08T07:55:03Z</dc:date>
    </item>
    <item>
      <title>Re: vxfsd won't decrease process usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965481#M416678</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/5580/Misconfigured_Resources.pdf" target="_blank"&gt;http://docs.hp.com/en/5580/Misconfigured_Resources.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The JFS inode cache is probably the reason.&lt;BR /&gt;&lt;BR /&gt;             Steve Steel</description>
      <pubDate>Wed, 08 Mar 2006 07:57:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965481#M416678</guid>
      <dc:creator>Steve Steel</dc:creator>
      <dc:date>2006-03-08T07:57:10Z</dc:date>
    </item>
    <item>
      <title>Re: vxfsd won't decrease process usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965482#M416679</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Try running "swapinfo -tam" several times and observe whether the reserve is increasing.&lt;BR /&gt;&lt;BR /&gt;Some Oracle processes can reserve swap space, and when this happens, vxfsd increases its CPU utilization.&lt;BR /&gt;&lt;BR /&gt;Read about pseudo-swap too.&lt;BR /&gt;&lt;BR /&gt;Schimidt</description>
      <pubDate>Wed, 08 Mar 2006 08:04:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965482#M416679</guid>
      <dc:creator>Carlos Roberto Schimidt</dc:creator>
      <dc:date>2006-03-08T08:04:47Z</dc:date>
    </item>
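    <!--
      A minimal shell sketch of the check suggested in the post above; the one-minute
      interval is arbitrary, and the loop is meant to run in a spare terminal while the
      Oracle workload is active.

        # Print swap usage once a minute and watch whether the "reserve" line keeps growing
        while true
        do
            date
            swapinfo -tam
            sleep 60
        done
    -->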
    <item>
      <title>Re: vxfsd won't decrease process usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965483#M416680</link>
      <description>The patch you specified is for HPUX 11.11, but my post says I am running 11.23 Itanium.&lt;BR /&gt;&lt;BR /&gt;As for the swapinfo, I have 48Gb of RAM of which 9-10GB is free. So I dont swap to disk.&lt;BR /&gt;&lt;BR /&gt;The JFS is possible I guess..&lt;BR /&gt;&lt;BR /&gt;Kernel: &lt;BR /&gt;vx_ninode = 0&lt;BR /&gt;ninode = 8192 --32% used&lt;BR /&gt;&lt;BR /&gt;here is a dump of the vxfsstat:&lt;BR /&gt;&lt;BR /&gt;09:29:22.230 Wed Mar 08 2006 -- absolute sample&lt;BR /&gt;&lt;BR /&gt;Lookup, DNLC &amp;amp; Directory Cache Statistics&lt;BR /&gt;   593920 maximum entries in dnlc&lt;BR /&gt;8923646028 total lookups           94.60% fast lookup&lt;BR /&gt;8940525235 total dnlc lookup       94.78% dnlc hit rate&lt;BR /&gt;4625696658087215104 total enter            94.78  hit per enter&lt;BR /&gt;        0 total dircache setup      0.00  calls per setup&lt;BR /&gt;494364473 total directory scan      1.29% fast directory scan&lt;BR /&gt;&lt;BR /&gt;inode cache statistics&lt;BR /&gt;   209509 inodes current    594173 peak               593755 maximum&lt;BR /&gt;1119419498 lookups            59.59% hit rate&lt;BR /&gt; 15915010 inodes alloced   15705501 freed&lt;BR /&gt;     3214 sec recycle age [not limited by maximum]&lt;BR /&gt;     1800 sec free age&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;vxi_alloc_emap             26153935    vxi_alloc_expand_retry          188&lt;BR /&gt;vxi_alloc_find_retry             28    vxi_alloc_findfail         15274275&lt;BR /&gt;vxi_alloc_findfix                 0    vxi_alloc_mapflush                0&lt;BR /&gt;vxi_alloc_prev              1431837    vxi_alloc_search           11113387&lt;BR /&gt;vxi_alloc_smap                   13    vxi_alloc_sumclean                0&lt;BR /&gt;vxi_alloc_sumsum            4937292    vxi_alloc_try              12545224&lt;BR /&gt;vxi_async_iupdat           11417533    vxi_async_realloc            755490&lt;BR /&gt;vxi_async_shorten           1619811    vxi_bawrite                25738962&lt;BR /&gt;vxi_bcache_curkbyte          926976    vxi_bcache_maxkbyte          969472&lt;BR /&gt;vxi_bcache_recycleage         49173    vxi_bc_chunksteal                 0&lt;BR /&gt;vxi_bc_hits                2185360835    vxi_bc_lookups             2208473035&lt;BR /&gt;vxi_bc_reuse               23108011    vxi_bc_subflush                1183&lt;BR /&gt;vxi_bc_waits                1473351    vxi_bdwrite                268927487&lt;BR /&gt;vxi_bdwrite_tflush            82530    vxi_bmap                   4145151159&lt;BR /&gt;vxi_bmap_cache             3347462101    vxi_bmap_indirect             14867&lt;BR /&gt;vxi_bread                  23011939    vxi_brelse                 1682821996&lt;BR /&gt;vxi_brelse_tflush                 0    vxi_btwrite                  245564&lt;BR /&gt;vxi_bufspace_delay                0    vxi_bufspace_tranflush            0&lt;BR /&gt;vxi_bwrite                 19112200    vxi_clonemap                 438788&lt;BR /&gt;vxi_cutwrite                     34    vxi_dirblk                 1210590978&lt;BR /&gt;vxi_dirlook                496881199    vxi_dirlook_dot            15428065&lt;BR /&gt;vxi_dirlook_dotdot           172241    vxi_dirlook_notfound       13742138&lt;BR /&gt;vxi_fast_lookup            8440797071    vxi_dnlc_hit               8254816798&lt;BR /&gt;vxi_dnlc_enter             462961862    vxi_dnlc_miss              466376243&lt;BR /&gt;vxi_dnlc_neg_hit           217927772    vxi_dnlc_neg_enter         13742139&lt;BR /&gt;vxi_dnlc_size                593920    vxi_dirscan         
       487977912&lt;BR /&gt;vxi_fast_dirscan            6353027    vxi_eau_cleaned              432900&lt;BR /&gt;vxi_eau_expand                  189    vxi_eau_unexpand                 22&lt;BR /&gt;vxi_eau_write                143399    vxi_flush_throttle            12852&lt;BR /&gt;vxi_getblk                 1833663511    vxi_iaccess                1434083705&lt;BR /&gt;vxi_iflush_cut                    1    vxi_icache_allocedino      15911282&lt;BR /&gt;vxi_icache_freedino        15701235    vxi_icache_curino            210047&lt;BR /&gt;vxi_icache_inuseino            2561    vxi_icache_maxino            593755&lt;BR /&gt;vxi_icache_peakino           594173    vxi_icache_recycleage          3797&lt;BR /&gt;vxi_ifree_timelag              1800    vxi_iget                   1119383710&lt;BR /&gt;vxi_iget_found             667001931    vxi_iget_loop              1191310427&lt;BR /&gt;vxi_iinactive              5087755917    vxi_iinactive_front         2434656&lt;BR /&gt;vxi_iinactive_slow         24616644    vxi_inofail                       0&lt;BR /&gt;vxi_inopage                397888207    vxi_ipage                  54283536&lt;BR /&gt;vxi_iupdat                 15316599    vxi_iupdat_cluster         148402931&lt;BR /&gt;vxi_log                    126446500    vxi_log_blks               80336658&lt;BR /&gt;vxi_log_delayed            125998060    vxi_log_flush              26748368&lt;BR /&gt;vxi_log_idle                  37783    vxi_log_write              27251030&lt;BR /&gt;vxi_lread                  2204684426    vxi_lwrite                 94799155&lt;BR /&gt;vxi_maj_fault                     0    vxi_map_write               1643126&lt;BR /&gt;vxi_pagecluster              395378    vxi_pagestrategy            4542386&lt;BR /&gt;vxi_pgin                          0    vxi_pgout                         0&lt;BR /&gt;vxi_pgpgin                        0    vxi_pgpgout                       0&lt;BR /&gt;vxi_execpgin                      0    vxi_execpgout                     0&lt;BR /&gt;vxi_anonpgin                      0    vxi_anonpgout                     0&lt;BR /&gt;vxi_fspgin                        0    vxi_fspgout                       0&lt;BR /&gt;vxi_physmem_mbyte             49119    vxi_qtrunc                  3959241&lt;BR /&gt;vxi_ra                     56084595    vxi_randwrite_throttle            0&lt;BR /&gt;vxi_rapgpgin                      0    vxi_rasectin                      0&lt;BR /&gt;vxi_read_dio                8014999    vxi_read_rand              339444148&lt;BR /&gt;vxi_read_seq               164042407    vxi_sectin                        0&lt;BR /&gt;vxi_sectout                       0    vxi_setattr_nochange         540370&lt;BR /&gt;vxi_sumupd                   101943    vxi_superwrite              1599482&lt;BR /&gt;vxi_sync_delxwri                  0    vxi_sync_inode                    0&lt;BR /&gt;vxi_sync_page                     0    vxi_ntran                      -214&lt;BR /&gt;vxi_tflush_cut                   33    vxi_tflush_inode            5153180&lt;BR /&gt;vxi_tflush_map_async         364742    vxi_tflush_map_clone           2883&lt;BR /&gt;vxi_tflush_map_sync          488146    vxi_tran_commit            126446501&lt;BR /&gt;vxi_tran_low                   6550    vxi_tran_retry                    1&lt;BR /&gt;vxi_tran_space             143425013    vxi_tran_subfuncs          255459429&lt;BR /&gt;vxi_tranidflush            23630928    vxi_tranidflush_flush      23774162&lt;BR /&gt;vxi_tranidflush_none       80164031    vxi_tranleft_asyncflush   
  4931766&lt;BR /&gt;vxi_tranleft_delay               16    vxi_tranleft_syncflush        96268&lt;BR /&gt;vxi_tranlogflush           92368019    vxi_tranlogflush_flush      2219257&lt;BR /&gt;vxi_trunc                   6393350    vxi_unlockmap_async          350110&lt;BR /&gt;vxi_write_asynccnt         159733829    vxi_write_dio                460669&lt;BR /&gt;vxi_write_donetran               16    vxi_write_logged           15340091&lt;BR /&gt;vxi_write_logonly                 0    vxi_write_only              2521831&lt;BR /&gt;vxi_write_rand             116934890    vxi_write_seq              63497333&lt;BR /&gt;vxi_write_synccnt           2375803    vxi_write_throttle                0&lt;BR /&gt;vxi_clone_create                  0    vxi_clone_remove                  0&lt;BR /&gt;vxi_clone_rename                  0    vxi_clone_stat                    0&lt;BR /&gt;vxi_clone_convnodata              0    vxi_clone_mkpfset                 0&lt;BR /&gt;vxi_clone_cntl                    0    vxi_clone_dispose                 0&lt;BR /&gt;vxi_read_to_map                8323    vxi_virtmem_mbyte                 0&lt;BR /&gt;vxi_clustblk_ino           30563681    vxi_dirc_setup                    0&lt;BR /&gt;vxi_dirc_purge                    0    vxi_dirc_hit                      0&lt;BR /&gt;vxi_dirc_miss                     0    vxi_dirc_spchit                   0&lt;BR /&gt;vxi_dirc_spcmiss                  0&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Anything stand out as unusual?</description>
      <pubDate>Wed, 08 Mar 2006 09:38:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965483#M416680</guid>
      <dc:creator>Mark Huff II</dc:creator>
      <dc:date>2006-03-08T09:38:59Z</dc:date>
    </item>
    <item>
      <title>Re: vxfsd won't decrease process usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965484#M416681</link>
      <description>Can you post your swapinfo output?&lt;BR /&gt;&lt;BR /&gt;It could be that your buffer cache is all used up and it's having to swap that out, or it could be that you don't have enough disk swap configured for your system.&lt;BR /&gt;</description>
      <pubDate>Wed, 08 Mar 2006 09:51:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965484#M416681</guid>
      <dc:creator>Kent Ostby</dc:creator>
      <dc:date>2006-03-08T09:51:59Z</dc:date>
    </item>
    <item>
      <title>Re: vxfsd won't decrease process usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965485#M416682</link>
      <description>dbc_max_pct is 35,&lt;BR /&gt;and the buffer cache is only at 82% usage.&lt;BR /&gt;&lt;BR /&gt;And you can see that I am using 0% disk swap.&lt;BR /&gt;&lt;BR /&gt;             Kb      Kb      Kb   PCT  START/      Kb&lt;BR /&gt;TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME&lt;BR /&gt;dev     4194304       0 4194304    0%       0       -    1  /dev/vg00/lvol2&lt;BR /&gt;reserve       - 4194304 -4194304&lt;BR /&gt;memory  50298088 25596328 24701760   51%&lt;BR /&gt;</description>
      <pubDate>Wed, 08 Mar 2006 09:56:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965485#M416682</guid>
      <dc:creator>Mark Huff II</dc:creator>
      <dc:date>2006-03-08T09:56:39Z</dc:date>
    </item>
    <item>
      <title>Re: vxfsd won't decrease process usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965486#M416683</link>
      <description>Actually, that's not what I see at all.&lt;BR /&gt;&lt;BR /&gt;Your reserve is the same as your available, so that means you've used all of your available memory (HP-UX reserves room for all processes in case it needs to swap).&lt;BR /&gt;&lt;BR /&gt;Secondly, you have 48 GB of RAM and only 4 GB of disk swap, which is about 92 GB less than HP recommends (generally 2x RAM worth of disk swap, excluding DBC).&lt;BR /&gt;&lt;BR /&gt;I think your problem would go away if you added a good chunk of device swap.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 08 Mar 2006 10:02:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965486#M416683</guid>
      <dc:creator>Kent Ostby</dc:creator>
      <dc:date>2006-03-08T10:02:54Z</dc:date>
    </item>
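    <!--
      A hedged sketch of adding device swap along the lines recommended above. The volume
      group, logical volume name, and size are hypothetical; free space in vg00 and root
      privileges are assumed. Verify against lvcreate(1M) and swapon(1M) before use.

        # Confirm the current layout (AVAIL/USED/RESERVE in MB)
        swapinfo -tam

        # Create a new logical volume for swap; the 8 GB size is illustrative only
        lvcreate -L 8192 -n lvswap2 /dev/vg00

        # Enable it as device swap immediately
        swapon /dev/vg00/lvswap2

        # Also add a matching swap entry to /etc/fstab so it survives a reboot
        # (see fstab(4) for the exact syntax on this release)

        # Re-check: the dev lines should now cover the reserve figure
        swapinfo -tam
    -->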
    <item>
      <title>Re: vxfsd won't decrease process usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965487#M416684</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;The figures are very high but not absurd.&lt;BR /&gt;&lt;BR /&gt;If you are using Oracle, you are using Oracle's buffer caching, and you can tune the HP standard buffer cache down.&lt;BR /&gt;&lt;BR /&gt;See the explanation in appendix A of&lt;BR /&gt;ftp://eh:spear9@hprc.external.hp.com/memory.htm&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Consider making the inode cache static.&lt;BR /&gt;By default you get a dynamic cache which grows and shrinks over time. Thus, at regular intervals, one of the vxfsd daemons scans the inode free list and frees inactive inodes.&lt;BR /&gt;&lt;BR /&gt;15915010 inodes alloced 15705501 freed&lt;BR /&gt;&lt;BR /&gt;The tunable kernel parameter vx_ifree_timelag can be changed to control how long an inode is inactive&lt;BR /&gt;before it is freed. The default is 1800 seconds, or 30 minutes. By setting&lt;BR /&gt;vx_ifree_timelag to 0 you make the inode cache static.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;                  Steve Steel&lt;BR /&gt;</description>
      <pubDate>Wed, 08 Mar 2006 10:15:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965487#M416684</guid>
      <dc:creator>Steve Steel</dc:creator>
      <dc:date>2006-03-08T10:15:40Z</dc:date>
    </item>
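    <!--
      A hedged sketch of the static inode cache change described above, assuming
      vx_ifree_timelag is exposed as a kernel tunable on this 11.23 system; check the
      exact tunable name and whether it is dynamic with kctune before changing anything.

        # Show the current JFS inode cache tunables
        kctune vx_ninode vx_ifree_timelag

        # Setting vx_ifree_timelag to 0 keeps inactive inodes cached (a static cache),
        # so vxfsd no longer spends time scanning and freeing them at regular intervals
        kctune vx_ifree_timelag=0
    -->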
    <item>
      <title>Re: vxfsd won't decrease process usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965488#M416685</link>
      <description>Sorry .. that should read "You've used all of your available SWAP"&lt;BR /&gt;</description>
      <pubDate>Wed, 08 Mar 2006 10:15:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965488#M416685</guid>
      <dc:creator>Kent Ostby</dc:creator>
      <dc:date>2006-03-08T10:15:41Z</dc:date>
    </item>
    <item>
      <title>Re: vxfsd won't decrease process usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965489#M416686</link>
      <description>I would say it is normal. The vxfsd daemon's activity is not necessarily proportional to how busy your disks are. If you have Glance, look at how busy your filesystems are (glance -i). And remember, your OS disks are VxFS filesystems too.&lt;BR /&gt;&lt;BR /&gt;We too have large servers (32-64 GB of memory and 8-16 CPUs) with direct I/O / raw storage, and on average vxfsd uses anywhere between 20 and 50 percent of a single CPU.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Here's one on a comparable 11.11 (PA-RISC) server:&lt;BR /&gt;&lt;BR /&gt; 9   ?      62 root     152 20 16000K 16000K run   4792:04 31.99 31.93 vxfsd&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 08 Mar 2006 10:16:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965489#M416686</guid>
      <dc:creator>Zinky</dc:creator>
      <dc:date>2006-03-08T10:16:24Z</dc:date>
    </item>
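    <!--
      A small sketch for watching vxfsd at the filesystem level rather than via sar disk
      stats, as suggested above; Glance is assumed to be installed, and the ps options
      rely on the UNIX95 (XPG4) behaviour of HP-UX ps.

        # Per-process CPU for the vxfsd daemon
        UNIX95= ps -e -o pcpu,vsz,pid,comm | grep vxfsd

        # Filesystem-level activity (interactive)
        glance -i

        # VxFS cache and lookup counters for a given mount point, e.g. the root filesystem
        vxfsstat /
    -->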
    <item>
      <title>Re: vxfsd won't decrease process usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965490#M416687</link>
      <description>Hi Mark,&lt;BR /&gt;&lt;BR /&gt;Is your swapinfo output correct?&lt;BR /&gt;&lt;BR /&gt;             Kb      Kb      Kb   PCT  START/      Kb&lt;BR /&gt;TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME&lt;BR /&gt;dev     4194304       0 4194304    0%       0       -    1  /dev/vg00/lvol2&lt;BR /&gt;reserve       - 4194304 -4194304&lt;BR /&gt;memory  50298088 25596328 24701760   51%&lt;BR /&gt;&lt;BR /&gt;Do you have only 4 GB of swap space and 48 GB of RAM?</description>
      <pubDate>Wed, 08 Mar 2006 11:52:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965490#M416687</guid>
      <dc:creator>Carlos Roberto Schimidt</dc:creator>
      <dc:date>2006-03-08T11:52:04Z</dc:date>
    </item>
    <item>
      <title>Re: vxfsd won't decrease process usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965491#M416688</link>
      <description>Yes, that is correct. When you purchase a large server from HP with a lot of RAM, HP by default only configures 4 GB for swap?? What the heck! Well, my bad for not catching it when I migrated from my old server to this one.&lt;BR /&gt;&lt;BR /&gt;Adding more swap did nothing to change vxfsd taking 30% of one CPU (maybe a reboot would, or a patch?).&lt;BR /&gt;It does, however, cost me a reboot this weekend to bring swap up past 33 GB, as my swap chunk size is not large enough.&lt;BR /&gt;&lt;BR /&gt;I'll just have to agree with Nelson that this is just normal.&lt;BR /&gt;&lt;BR /&gt;Thanks all!</description>
      <pubDate>Thu, 09 Mar 2006 07:13:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965491#M416688</guid>
      <dc:creator>Mark Huff II</dc:creator>
      <dc:date>2006-03-09T07:13:29Z</dc:date>
    </item>
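    <!--
      A hedged sketch of checking the swap chunk limit mentioned above; swchunk is a
      static tunable, so a larger value only takes effect after a reboot, and the value
      shown is purely illustrative (size it for the swap total you actually need).

        # Query the current swap chunk size
        kctune -v swchunk

        # Example: double it so total configurable swap can exceed the old ceiling
        kctune swchunk=4096      # requires a reboot to take effect
    -->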
    <item>
      <title>Re: vxfsd won't decrease process usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965492#M416689</link>
      <description>Normal indeed.&lt;BR /&gt;&lt;BR /&gt;I have over a dozen fairly large servers in the same league as yours (raw, direct I/O, Oracle, the same dbc buffer ranges), and vxfsd behaves exactly the same across all of them.&lt;BR /&gt;&lt;BR /&gt;Again, how heavily vxfsd is being used is judged not via individual sar disk stats but rather via filesystem stats (vxfsstat) or "glance -i".&lt;BR /&gt;&lt;BR /&gt;HTH.&lt;BR /&gt;</description>
      <pubDate>Thu, 09 Mar 2006 08:47:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965492#M416689</guid>
      <dc:creator>Zinky</dc:creator>
      <dc:date>2006-03-09T08:47:58Z</dc:date>
    </item>
    <item>
      <title>Re: vxfsd won't decrease process usage</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965493#M416690</link>
      <description>Thanks all!</description>
      <pubDate>Thu, 09 Mar 2006 09:25:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vxfsd-won-t-decrease-process-usage/m-p/4965493#M416690</guid>
      <dc:creator>Mark Huff II</dc:creator>
      <dc:date>2006-03-09T09:25:14Z</dc:date>
    </item>
  </channel>
</rss>

