<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: vmunix in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210462#M793301</link>
    <description>&lt;BR /&gt;If you're talking about a message printed in syslog,&lt;BR /&gt;it is indeed printed when the global file descriptor&lt;BR /&gt;table is found to be full.&lt;BR /&gt;&lt;BR /&gt;You can probably improve the situation by increasing&lt;BR /&gt;a kernel tunable like nfile.&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Fri, 05 Mar 2004 07:55:53 GMT</pubDate>
    <dc:creator>Sriram Narayanaswamy</dc:creator>
    <dc:date>2004-03-05T07:55:53Z</dc:date>
    <item>
      <title>vmunix</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210456#M793295</link>
      <description>I am looking for advice on the source and resolution of the following error that is occurring occasionally on our HP9000 L2000 running HP-UX 11.11:&lt;BR /&gt;&lt;BR /&gt;vmunix: file: table is full&lt;BR /&gt;&lt;BR /&gt;I think a kernel parameter (hopefully a configurable one) is being exceeded when a large number of users are connected to the server, each with a large number of open files.&lt;BR /&gt;&lt;BR /&gt;Any advice would be greatly appreciated.&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;&lt;BR /&gt;Keith</description>
      <pubDate>Fri, 05 Mar 2004 07:03:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210456#M793295</guid>
      <dc:creator>Keith Bevan_1</dc:creator>
      <dc:date>2004-03-05T07:03:03Z</dc:date>
    </item>
    <item>
      <title>Re: vmunix</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210457#M793296</link>
      <description>Hi Keith,&lt;BR /&gt;&lt;BR /&gt;Yep it's either:&lt;BR /&gt;&lt;BR /&gt;1) NFILE - most likely&lt;BR /&gt;2) maxfiles - soft limit&lt;BR /&gt;3) maxfiles_lim - hard limit&lt;BR /&gt;&lt;BR /&gt;Probably #1 above.&lt;BR /&gt;&lt;BR /&gt;Rgds,&lt;BR /&gt;Jeff</description>
      <pubDate>Fri, 05 Mar 2004 07:08:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210457#M793296</guid>
      <dc:creator>Jeff Schussele</dc:creator>
      <dc:date>2004-03-05T07:08:05Z</dc:date>
    </item>
    <item>
      <title>Re: vmunix</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210458#M793297</link>
      <description>Check these parameters:&lt;BR /&gt;&lt;BR /&gt;nfile, ninode, nproc.&lt;BR /&gt;&lt;BR /&gt;Also run&lt;BR /&gt;&lt;BR /&gt;# sar -v 5 5&lt;BR /&gt;&lt;BR /&gt;to get some info.&lt;BR /&gt;&lt;BR /&gt;sks&lt;BR /&gt;</description>
      <pubDate>Fri, 05 Mar 2004 07:10:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210458#M793297</guid>
      <dc:creator>Sanjay Kumar Suri</dc:creator>
      <dc:date>2004-03-05T07:10:41Z</dc:date>
    </item>
    <item>
      <title>Re: vmunix</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210459#M793298</link>
      <description>The file table is sized by the kernel parameter "nfile".&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;&lt;BR /&gt;Kent M. Ostby&lt;BR /&gt;</description>
      <pubDate>Fri, 05 Mar 2004 07:17:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210459#M793298</guid>
      <dc:creator>Kent Ostby</dc:creator>
      <dc:date>2004-03-05T07:17:56Z</dc:date>
    </item>
    <item>
      <title>Re: vmunix</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210460#M793299</link>
      <description>Hi Keith,&lt;BR /&gt;&lt;BR /&gt;Take a look at this doc,&lt;BR /&gt;&lt;BR /&gt;Document description: vmunix: file: table is full&lt;BR /&gt;Document id: HPUXKBRC00008909&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www5.itrc.hp.com/service/cki/docDisplay.do?docLocale=en_US&amp;amp;docId=200000064128770" target="_blank"&gt;http://www5.itrc.hp.com/service/cki/docDisplay.do?docLocale=en_US&amp;amp;docId=200000064128770&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Robert-Jan</description>
      <pubDate>Fri, 05 Mar 2004 07:26:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210460#M793299</guid>
      <dc:creator>Robert-Jan Goossens</dc:creator>
      <dc:date>2004-03-05T07:26:02Z</dc:date>
    </item>
    <item>
      <title>Re: vmunix</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210461#M793300</link>
      <description>Keith,&lt;BR /&gt;&lt;BR /&gt;It sounds like you need to up the value of nfile in the kernel.&lt;BR /&gt;&lt;BR /&gt;You can run&lt;BR /&gt;&lt;BR /&gt;sar -v 5 5&lt;BR /&gt;&lt;BR /&gt;to look at the file table now, but the problem may have gone away. Under the adm user's cron we run 2 jobs to collect stats&lt;BR /&gt;&lt;BR /&gt;0,5,10,15,20,25,30,35,40,45,50,55 * * * 0-6 /usr/lib/sa/sa1&lt;BR /&gt;5 23 * * 1-5 /usr/lib/sa/sa2 -s 1:00 -e 23:40 -i 300 -A&lt;BR /&gt;&lt;BR /&gt;(See man sa1)&lt;BR /&gt;&lt;BR /&gt;Then you can run&lt;BR /&gt;&lt;BR /&gt;sar -v&lt;BR /&gt;&lt;BR /&gt;to get historical data and see exactly when the problem occurred and what filled up.&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Dave.&lt;BR /&gt;</description>
      <pubDate>Fri, 05 Mar 2004 07:54:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210461#M793300</guid>
      <dc:creator>David Burgess</dc:creator>
      <dc:date>2004-03-05T07:54:41Z</dc:date>
    </item>
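The sar -v monitoring described above reports each kernel table as a used/size pair (for example, a file-sz of 420/800 appears later in this thread). A minimal sketch of turning one such field into a utilization percentage; the function name and 90% threshold are illustrative, not part of sar:

```python
def table_utilization(sz_field):
    """Convert a sar -v 'used/size' field such as '420/800' to a percentage."""
    used, size = (int(part) for part in sz_field.split("/"))
    return 100.0 * used / size

# file-sz sample from this thread: 420 of 800 nfile entries in use
util = table_utilization("420/800")
print(util)  # 52.5
if util > 90:
    print("file table nearly full - consider raising nfile")
```

Running this against the historical sa1/sa2 data Dave describes would show how close the file table gets to nfile during peak load.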
    <item>
      <title>Re: vmunix</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210462#M793301</link>
      <description>&lt;BR /&gt;If you're talking about a message printed in syslog,&lt;BR /&gt;it is indeed printed when the global file descriptor&lt;BR /&gt;table is found to be full.&lt;BR /&gt;&lt;BR /&gt;You can probably improve the situation by increasing&lt;BR /&gt;a kernel tunable like nfile.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 05 Mar 2004 07:55:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210462#M793301</guid>
      <dc:creator>Sriram Narayanaswamy</dc:creator>
      <dc:date>2004-03-05T07:55:53Z</dc:date>
    </item>
    <item>
      <title>Re: vmunix</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210463#M793302</link>
      <description>Yes - as others have said - increase nfile.&lt;BR /&gt;&lt;BR /&gt;Here's some info on Kernel Problems:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;KERNEL PROBLEMS&lt;BR /&gt;&lt;BR /&gt;Common Kernel Parameters which need to be modified &amp;amp; associated errors&lt;BR /&gt;&lt;BR /&gt;nfile, ninode, nproc, maxusers:&lt;BR /&gt;&lt;BR /&gt;when these parameters need to be increased you will see errors in &lt;BR /&gt;/var/adm/syslog/syslog.log in the format:&lt;BR /&gt;"vmunix: file table is full" (or "proc table is full", or "inode table is full")&lt;BR /&gt;users may see errors such as "file table overflow"&lt;BR /&gt;&lt;BR /&gt;nproc:  maximum number of system wide processes&lt;BR /&gt;nfile:  maximum number of files that can be open simultaneously at any given&lt;BR /&gt;        time.  3 nfile entries will be used for each process &lt;BR /&gt;        (stdin, stdout, stderr), 2 entries for each pipe (stdin, stdout)&lt;BR /&gt;ninode: maximum number of inodes kept in main memory&lt;BR /&gt;&lt;BR /&gt;these parameters can be monitored with:&lt;BR /&gt;# sar -v 5 5      (sample 5 times at 5 second interval)&lt;BR /&gt;which will produce output in the format:&lt;BR /&gt;&lt;BR /&gt;16:45:12 text-sz  ov  proc-sz  ov  inod-sz  ov  file-sz  ov&lt;BR /&gt;16:45:17   N/A   N/A  131/276   0  476/476   0  420/800   0&lt;BR /&gt;&lt;BR /&gt;In this example we are using 131 out of 276 entries in the nproc table,&lt;BR /&gt;420 out of 800 entries in the nfile table.  The inode table is used for&lt;BR /&gt;DNLC (directory name lookup cache).  Since the inode table is used for&lt;BR /&gt;cache, sar and glance will typically show usage at 90 - 100 %.  If the&lt;BR /&gt;customer is not seeing errors: "inode table is full" they probably do not &lt;BR /&gt;need to increase ninode.
There is a utility dnlcount which will show&lt;BR /&gt;actual ninode usage, see GLP211-2 for more information.&lt;BR /&gt;&lt;BR /&gt;As shipped, nfile, ninode, and nproc are formulas which depend on a variable&lt;BR /&gt;called maxusers.  Increasing maxusers will increase the values of nfile,&lt;BR /&gt;ninode, nproc, npty, and nstrpty.  To eliminate errors:&lt;BR /&gt;a. double maxusers (shotgun approach, not recommended)&lt;BR /&gt;b. double the appropriate parameter &lt;BR /&gt;&lt;BR /&gt;kernel cost (ie, RAM used per entry):&lt;BR /&gt;nfile: 30 bytes     nproc: 180 bytes      ninode:  286 bytes&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;maxdsiz, maxtsiz, maxssiz&lt;BR /&gt;&lt;BR /&gt;maxdsiz: maximum size of a data segment (all data which a program accesses)&lt;BR /&gt;(maximum size 944 MB)&lt;BR /&gt;common errors indicating maxdsiz may be underconfigured:&lt;BR /&gt;         out of memory; ENOMEM from malloc&lt;BR /&gt;maxtsiz: maximum size of a text segment (the compiled program)&lt;BR /&gt;maxssiz: maximum size of the instruction stack (grows dynamically as a program&lt;BR /&gt;executes, maximum size 80 MB)&lt;BR /&gt;&lt;BR /&gt;note: These values are entered in bytes.  SAM will display a hexadecimal value,&lt;BR /&gt;however you can enter values in decimal format.&lt;BR /&gt;&lt;BR /&gt;semmns, semmni&lt;BR /&gt;&lt;BR /&gt;semmns: maximum number of user-accessible semaphores&lt;BR /&gt;- errors indicating underconfigured: semget ENOSPC errors in syslog, or&lt;BR /&gt;messages such as not enough semaphores or can't allocate semaphores from app&lt;BR /&gt;semmni: maximum number of semaphore identifiers&lt;BR /&gt;&lt;BR /&gt;to view current usage:&lt;BR /&gt;# ipcs -as   you will see a column labeled "ID", this corresponds to semmni&lt;BR /&gt;(each number counts as one).  
To the right the column labeled NSEMS corresponds&lt;BR /&gt;to semmns (each number counts as its value).&lt;BR /&gt;symptom: customer can run 1 or 2 instances of a database, fails when&lt;BR /&gt;   attempting to open nth instance - check semmns&lt;BR /&gt;&lt;BR /&gt;npty, nstrpty&lt;BR /&gt;npty: Specifies the maximum number of pseudo-tty data structures&lt;BR /&gt;      available on the system. (used by telnet at 10.x, at 11.0 telnet&lt;BR /&gt;      uses nstrpty) &lt;BR /&gt;nstrtel:  Specifies the number of telnet device files that the kernel can &lt;BR /&gt;          support for incoming telnet sessions.&lt;BR /&gt;nstrpty: maximum number of streams based pseudo-tty data structures available&lt;BR /&gt;         on the system (used by rlogind)&lt;BR /&gt;         &lt;BR /&gt;         to create more device files (500 total in this example):&lt;BR /&gt;         # cd /dev&lt;BR /&gt;         # insf -n 500 -C pseudo&lt;BR /&gt;                   - or -&lt;BR /&gt;         # insf -d ptys -n 500&lt;BR /&gt;         # insf -d ptym -n 500&lt;BR /&gt;         # insf -d pts -s 500 -e -v   (11.0 only)&lt;BR /&gt;         &lt;BR /&gt;# ls /dev/pty | wc -w&lt;BR /&gt;# ls /dev/ptym | wc -w&lt;BR /&gt;         &lt;BR /&gt;common errors: "connection refused" from telnet&lt;BR /&gt;"maximum number of users already logged in" from telnet&lt;BR /&gt;"unable to allocate pty" from remsh&lt;BR /&gt;&lt;BR /&gt;dbc_max_pct, dbc_min_pct, nbuf, bufpages&lt;BR /&gt;&lt;BR /&gt;dbc_min_pct: minimum RAM used for buffer cache (default 5%)&lt;BR /&gt;dbc_max_pct: maximum RAM used for buffer cache (default 50%)&lt;BR /&gt;   on systems with &amp;gt; 2 GB RAM it is recommended dbc_max_pct be&lt;BR /&gt;   set to 10 - 20 % &lt;BR /&gt;nbuf: number of static buffer headers, provided for backward&lt;BR /&gt;      compatibility, should be set to 0&lt;BR /&gt;bufpages: pages of static buffer cache, provided for backward&lt;BR /&gt;      compatibility, should be set to 0&lt;BR /&gt;      &lt;BR 
/&gt;      &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;maxfiles, maxfiles_lim&lt;BR /&gt;maxfiles: soft limit for number of files a process can have open simultaneously&lt;BR /&gt;maxfiles_lim: hard limit for number of files a process can have open &lt;BR /&gt;              simultaneously&lt;BR /&gt;&lt;BR /&gt;maxuprc: maximum number of processes for an individual user&lt;BR /&gt;symptom: user receives error: no more processes, or cannot fork&lt;BR /&gt;no error in syslog&lt;BR /&gt;&lt;BR /&gt;maxswapchunks, swchunk&lt;BR /&gt;maxswapchunks: maximum number of swap chunks&lt;BR /&gt;swchunk: swap chunk size default = 2048  (this should not be modified)&lt;BR /&gt;maximum swap = maxswapchunks * swchunk * DEV_BSIZE&lt;BR /&gt;DEV_BSIZE = 1024 bytes, so on a system with swchunk at the default of 2048&lt;BR /&gt;and maxswapchunks = 256:&lt;BR /&gt;&lt;BR /&gt;maximum swap = 256 * 2048 * 1024 = 536,870,912 bytes = 512 MB&lt;BR /&gt;&lt;BR /&gt;to increase the amount of swap that can be configured increase maxswapchunks&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;maxvgs: the maximum number of volume groups which can be configured&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;shmmax: the maximum shared memory segment size (system wide)&lt;BR /&gt;limits are as follows:&lt;BR /&gt;10.01, 10.10:  1.75 GB  (quadrants 3 &amp;amp; 4)&lt;BR /&gt;10.20:         2.75 GB  (quadrants 3 &amp;amp; 4, quadrant 2 w/phkl_16751)&lt;BR /&gt;application must be relinked as type EXEC_MAGIC then chatr'd to type &lt;BR /&gt;SHMEM_MAGIC.  An individual segment cannot exceed 1 GB, however the&lt;BR /&gt;application can use several contiguous segments which are treated as one.&lt;BR /&gt;11.0:  32 bit: 1 GB per individual segment, 2.75 GB total&lt;BR /&gt;11.0:  64 bit: 1 TB per individual segment, 4 TB total&lt;BR /&gt;&lt;BR /&gt;errors: shmget: not enough space, application may hang, or give errors&lt;BR /&gt;such as not enough memory, or insufficient table space&lt;BR /&gt;</description>
      <pubDate>Fri, 05 Mar 2004 08:54:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210463#M793302</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2004-03-05T08:54:13Z</dc:date>
    </item>
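Geoff's figures above (the per-entry kernel RAM costs and the maximum-swap formula) can be checked with a little arithmetic. A hedged sketch, with the constants taken from the post and illustrative function names:

```python
DEV_BSIZE = 1024  # bytes per swap block, per the post

def max_swap_bytes(maxswapchunks, swchunk=2048):
    """Maximum configurable swap = maxswapchunks * swchunk * DEV_BSIZE."""
    return maxswapchunks * swchunk * DEV_BSIZE

# approximate kernel RAM cost per table entry, from the post
ENTRY_BYTES = {"nfile": 30, "nproc": 180, "ninode": 286}

def table_ram_cost(param, entries):
    """Approximate RAM consumed by a kernel table of the given size."""
    return ENTRY_BYTES[param] * entries

print(max_swap_bytes(256))            # 536870912 bytes = 512 MB
print(table_ram_cost("nfile", 2048))  # 61440 bytes - doubling nfile is cheap
```

The low per-entry cost of nfile (30 bytes) is why raising it, rather than the shotgun maxusers approach, is the recommended fix for "file: table is full".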
    <item>
      <title>Re: vmunix</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210464#M793303</link>
      <description>Thanks for all the recommendations made.&lt;BR /&gt;&lt;BR /&gt;Just waiting for some down time to make the change to 'nfile' and rebuild the kernel, following the current review of the sar stats.&lt;BR /&gt;&lt;BR /&gt;** No more posts thank-you **&lt;BR /&gt;&lt;BR /&gt;Keith</description>
      <pubDate>Fri, 05 Mar 2004 08:54:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/vmunix/m-p/3210464#M793303</guid>
      <dc:creator>Keith Bevan_1</dc:creator>
      <dc:date>2004-03-05T08:54:16Z</dc:date>
    </item>
  </channel>
</rss>

