<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: rpc.statd looks like a runaway consuming 100% CPU in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061527#M436863</link>
    <description>&lt;P&gt;Yes, it is mandatory to bounce both. You should never stop/restart lockd or statd without the other, because they talk to each other, and if one of them starts up on a different port the other may have the old port information cached, leading to unexpected results.&lt;BR /&gt;&lt;BR /&gt;I don't believe removing the entries alone will resolve the problem: rpc.statd builds a cache of the entries at initialization time, so it wouldn't notice the entries are gone until it is stopped and restarted.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; ideally it should not affect anything.&lt;BR /&gt;&lt;BR /&gt;Are there any entries in /var/statmon/sm? That would tell you if this system is doing any NFS file locking as a client or server. If there are no entries in /var/statmon/sm, then stopping and restarting these daemons should have no effect.&lt;BR /&gt;&lt;BR /&gt;If you're concerned about how long the services will be unavailable, you can write a small Korn shell script that terminates the running daemons, deletes the /var/statmon/sm.bak entries, and restarts the daemons. The whole thing should take less than a second to run. Also, if there are no entries in /var/statmon/sm, you can start the rpc.lockd daemon with the "-g 0" option, which tells the daemon not to use a grace period. This means it will accept new lock requests immediately. Again, virtually no downtime for the locking service.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Dave&lt;/P&gt;</description>
    <pubDate>Wed, 29 Jun 2022 12:02:36 GMT</pubDate>
    <dc:creator>Dave Olker</dc:creator>
    <dc:date>2022-06-29T12:02:36Z</dc:date>
    <item>
      <title>rpc.statd looks like a runaway consuming 100% CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061518#M436854</link>
      <description>From GPM, the process's system call rate is very high, and the two main syscalls are getrlimit and poll.&lt;BR /&gt;&lt;BR /&gt;CPU TTY     PID USERNAME PRI NI   SIZE    RES STATE    TIME %WCPU  %CPU COMMAND&lt;BR /&gt; 3   ?     2379 root     152 20  4436K   656K run   32915:13 100.09 99.92 rpc.statd&lt;BR /&gt;&lt;BR /&gt;Here is what I see in "tusc -p pid" (the same four calls repeat continuously, over and over):&lt;BR /&gt;&lt;BR /&gt;[2379] getrlimit(RLIMIT_NOFILE, 0x6d3f4608) ... = 0&lt;BR /&gt;[2379] sigsetmask(NULL) ....................... = 8192&lt;BR /&gt;[2379] poll(0x6d3f0590, 4, -1) ................ = 1&lt;BR /&gt;[2379] sigblock(0x2000) ....................... = 0&lt;BR /&gt;[2379] getrlimit(RLIMIT_NOFILE, 0x6d3f4608) ... = 0&lt;BR /&gt;[2379] sigsetmask(NULL) ....................... = 8192&lt;BR /&gt;[2379] poll(0x6d3f0590, 4, -1) ................ = 1&lt;BR /&gt;[2379] sigblock(0x2000) ....................... = 0&lt;BR /&gt;&lt;BR /&gt;Planning to restart nfs.client, as it will restart rpc.statd, rpc.lockd, rpc.mountd, and nfsd.&lt;BR /&gt;&lt;BR /&gt;Any suggestions would be welcome.</description>
      <pubDate>Fri, 03 Aug 2007 23:47:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061518#M436854</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2007-08-03T23:47:39Z</dc:date>
    </item>
    <item>
      <title>Re: rpc.statd looks like a runaway consuming 100% CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061519#M436855</link>
      <description>&amp;gt; Planning to restart nfs.client, as it will restart rpc.statd, rpc.lockd, rpc.mountd and nfsd.&lt;BR /&gt;&lt;BR /&gt;That seems reasonable, unless it keeps happening.</description>
      <pubDate>Sat, 04 Aug 2007 01:24:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061519#M436855</guid>
      <dc:creator>Dennis Handly</dc:creator>
      <dc:date>2007-08-04T01:24:28Z</dc:date>
    </item>
    <item>
      <title>Re: rpc.statd looks like a runaway consuming 100% CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061520#M436856</link>
      <description>I forgot to mention that this node is part of a CRS cluster, and there are NFS file systems mounted here (CRSVIPip:/mount_point) that will be unmounted during the nfs.client restart. That mount point is used by the running CRS instance, and I am not sure how CRS will react if it is temporarily unavailable.</description>
      <pubDate>Sat, 04 Aug 2007 23:13:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061520#M436856</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2007-08-04T23:13:14Z</dc:date>
    </item>
    <item>
      <title>Re: rpc.statd looks like a runaway consuming 100% CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061521#M436857</link>
      <description>&lt;P&gt;My first recommendation would be as follows:&lt;BR /&gt;&lt;BR /&gt;1) Get a listing of the nodes in /var/statmon/sm:&lt;BR /&gt;&lt;BR /&gt;# ll /var/statmon/sm&lt;BR /&gt;&lt;BR /&gt;2) Get a listing of the nodes in /var/statmon/sm.bak:&lt;BR /&gt;&lt;BR /&gt;# ll /var/statmon/sm.bak&lt;BR /&gt;&lt;BR /&gt;3) Collect a debug logfile from rpc.statd and rpc.lockd:&lt;BR /&gt;&lt;BR /&gt;# ps -ef | grep rpc&lt;BR /&gt;# kill -17 &amp;lt;statd PID&amp;gt;&lt;BR /&gt;# kill -17 &amp;lt;lockd PID&amp;gt;&lt;BR /&gt;&lt;BR /&gt;wait 30 seconds&lt;BR /&gt;&lt;BR /&gt;# kill -17 &amp;lt;statd PID&amp;gt;&lt;BR /&gt;# kill -17 &amp;lt;lockd PID&amp;gt;&lt;BR /&gt;&lt;BR /&gt;I'd want to examine the debug logfile from rpc.statd to see what it's doing before terminating and restarting it, since just terminating/restarting it might not be enough to clear the race condition.&lt;BR /&gt;&lt;BR /&gt;Let me know if you need any help interpreting the data.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Dave&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jun 2022 09:51:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061521#M436857</guid>
      <dc:creator>Dave Olker</dc:creator>
      <dc:date>2022-06-30T09:51:01Z</dc:date>
    </item>
    <item>
      <title>Re: rpc.statd looks like a runaway consuming 100% CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061522#M436858</link>
      <description>I have two node names listed under /var/statmon/sm.bak, but neither is currently active. Is statd looking for these nodes?&lt;BR /&gt;&lt;BR /&gt;I will run the debug later.</description>
      <pubDate>Mon, 06 Aug 2007 14:03:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061522#M436858</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2007-08-06T14:03:43Z</dc:date>
    </item>
    <item>
      <title>Re: rpc.statd looks like a runaway consuming 100% CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061523#M436859</link>
      <description>When you say both are not currently active, do you mean they are temporarily out of service or permanently out of service?  In other words, is there any reason why the local system will ever need to talk to those systems again?&lt;BR /&gt;&lt;BR /&gt;Also, can you cat each of the files in /var/statmon/sm.bak and ensure the contents of the files match the names of the files?&lt;BR /&gt;&lt;BR /&gt;Dave</description>
      <pubDate>Mon, 06 Aug 2007 14:43:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061523#M436859</guid>
      <dc:creator>Dave Olker</dc:creator>
      <dc:date>2007-08-06T14:43:43Z</dc:date>
    </item>
    <item>
      <title>Re: rpc.statd looks like a runaway consuming 100% CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061524#M436860</link>
      <description>Those servers are permanently removed, and the filenames and contents do match.&lt;BR /&gt;&lt;BR /&gt;This server is one of two nPartitions on a physical box, and the names I see under sm.bak are their old console names: the partitions currently named X1 and X2 (the node in question) were earlier y1 and y2, and what sm.bak lists is y1 and y2.&lt;BR /&gt;&lt;BR /&gt;Also, on node X1 I see only y2 listed under sm.bak.&lt;BR /&gt;&lt;BR /&gt;I am not sure where we are going with this...</description>
      <pubDate>Mon, 06 Aug 2007 14:55:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061524#M436860</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2007-08-06T14:55:45Z</dc:date>
    </item>
    <item>
      <title>Re: rpc.statd looks like a runaway consuming 100% CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061525#M436861</link>
      <description>If the servers in /var/statmon/sm.bak are permanently removed then here is what you should do:&lt;BR /&gt;&lt;BR /&gt;1) Kill rpc.statd and rpc.lockd&lt;BR /&gt;&lt;BR /&gt;# kill $(ps -e | egrep 'rpc.statd|rpc.lockd' | awk '{print $1}')&lt;BR /&gt;&lt;BR /&gt;2) Remove any entries from /var/statmon/sm.bak for systems that are permanently gone from your environment&lt;BR /&gt;&lt;BR /&gt;3) Restart rpc.statd and rpc.lockd&lt;BR /&gt;&lt;BR /&gt;# /usr/sbin/rpc.statd&lt;BR /&gt;# /usr/sbin/rpc.lockd&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Let me know if the problem persists after doing this.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Dave</description>
      <pubDate>Mon, 06 Aug 2007 15:09:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061525#M436861</guid>
      <dc:creator>Dave Olker</dc:creator>
      <dc:date>2007-08-06T15:09:13Z</dc:date>
    </item>
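The stop / clean / restart procedure above can be sketched as a small script. The fragment below exercises only the sm.bak cleanup step, and it runs against a scratch directory (/tmp/statmon_demo) instead of the real /var/statmon so it is safe to try anywhere; the host names y1 and y2 stand in for the decommissioned systems discussed in this thread.

```shell
#!/bin/sh
# Step 2 of the procedure, sketched against a scratch directory.
# /var/statmon is replaced by /tmp/statmon_demo so nothing real is touched.
STATMON=/tmp/statmon_demo
GONE_HOSTS="y1 y2"          # decommissioned hosts whose entries are stale

# Simulate the monitor directory: statd keeps one file per monitored host.
mkdir -p "$STATMON/sm.bak"
touch "$STATMON/sm.bak/y1" "$STATMON/sm.bak/y2" "$STATMON/sm.bak/livehost"

# Remove only the entries for hosts that are permanently gone.
for h in $GONE_HOSTS; do
    if [ -f "$STATMON/sm.bak/$h" ]; then
        echo "removing stale entry: $h"
        rm -f "$STATMON/sm.bak/$h"
    fi
done

ls "$STATMON/sm.bak"        # only livehost should remain
```

On a real system the same loop would run between killing and restarting rpc.statd/rpc.lockd, with STATMON pointed at /var/statmon.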
    <item>
      <title>Re: rpc.statd looks like a runaway consuming 100% CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061526#M436862</link>
      <description>The debug mode shows that rpc.statd is looking for those old hosts, but lockd is not reporting anything:&lt;BR /&gt;&lt;BR /&gt;aded141p:root [/var/adm] cat rpc.lockd.log&lt;BR /&gt;08.06 21:06:39  aded141p  pid=2385   rpc.lockd&lt;BR /&gt;     *********** Toggle Trace on *************&lt;BR /&gt;08.06 21:06:39  X2hostname  pid=2385   rpc.lockd&lt;BR /&gt;     LOCKD QUEUES:&lt;BR /&gt;***** granted reclocks *****&lt;BR /&gt;*****no entry in msg queue *****&lt;BR /&gt;***** no blocked reclocks ****&lt;BR /&gt;used_le=0, used_fe=0, used_me=0&lt;BR /&gt;08.06 21:07:46  X2hostname  pid=2385   rpc.lockd&lt;BR /&gt;     *********** Toggle Trace off *************&lt;BR /&gt;&lt;BR /&gt;Is it mandatory to bounce both services? Is it OK to just remove these host entries from /var/statmon/sm.bak?&lt;BR /&gt;&lt;BR /&gt;Ideally, restarting rpc.statd should not affect anything else? Unfortunately I don't have a DEV CRS instance to test with, so I'm not sure how CRS will respond to bouncing rpc.statd.</description>
      <pubDate>Mon, 06 Aug 2007 21:49:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061526#M436862</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2007-08-06T21:49:27Z</dc:date>
    </item>
    <item>
      <title>Re: rpc.statd looks like a runaway consuming 100% CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061527#M436863</link>
      <description>&lt;P&gt;Yes, it is mandatory to bounce both. You should never stop/restart lockd or statd without the other, because they talk to each other, and if one of them starts up on a different port the other may have the old port information cached, leading to unexpected results.&lt;BR /&gt;&lt;BR /&gt;I don't believe removing the entries alone will resolve the problem: rpc.statd builds a cache of the entries at initialization time, so it wouldn't notice the entries are gone until it is stopped and restarted.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; ideally it should not affect anything.&lt;BR /&gt;&lt;BR /&gt;Are there any entries in /var/statmon/sm? That would tell you if this system is doing any NFS file locking as a client or server. If there are no entries in /var/statmon/sm, then stopping and restarting these daemons should have no effect.&lt;BR /&gt;&lt;BR /&gt;If you're concerned about how long the services will be unavailable, you can write a small Korn shell script that terminates the running daemons, deletes the /var/statmon/sm.bak entries, and restarts the daemons. The whole thing should take less than a second to run. Also, if there are no entries in /var/statmon/sm, you can start the rpc.lockd daemon with the "-g 0" option, which tells the daemon not to use a grace period. This means it will accept new lock requests immediately. Again, virtually no downtime for the locking service.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Dave&lt;/P&gt;</description>
      <pubDate>Wed, 29 Jun 2022 12:02:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061527#M436863</guid>
      <dc:creator>Dave Olker</dc:creator>
      <dc:date>2022-06-29T12:02:36Z</dc:date>
    </item>
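Putting the answer above together, the quick bounce Dave describes might look like the sketch below. This is a hypothetical illustration, not an HP-supplied script: the daemon paths and the "-g 0" option follow the thread, y1/y2 are the example stale hosts, and it defaults to a dry run that only prints (and records in /tmp/bounce_demo.log) what it would do. Set DRY_RUN=0 only on a system where /var/statmon/sm is empty.

```shell
#!/bin/sh
# Sketch of the quick statd/lockd bounce described above (hypothetical).
# DRY_RUN=1 (the default) prints commands instead of executing them.
DRY_RUN=${DRY_RUN:-1}
STATMON=${STATMON:-/var/statmon}
LOG=/tmp/bounce_demo.log
: > "$LOG"

run() {
    echo "$*" >> "$LOG"
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# 1) Stop both daemons together -- never bounce one without the other.
pids=$(ps -e | egrep 'rpc.statd|rpc.lockd' | awk '{print $1}')
if [ -n "$pids" ]; then
    run kill $pids
fi

# 2) Purge stale sm.bak entries for permanently removed hosts.
run rm -f "$STATMON/sm.bak/y1" "$STATMON/sm.bak/y2"

# 3) Restart; "-g 0" skips lockd's grace period, which is safe only
#    when /var/statmon/sm has no entries (no active lock clients).
run /usr/sbin/rpc.statd
run /usr/sbin/rpc.lockd -g 0
```

The whole sequence runs in well under a second, which is what keeps the lock-service outage negligible.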
    <item>
      <title>Re: rpc.statd looks like a runaway consuming 100% CPU</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061528#M436864</link>
      <description>That did work as expected.</description>
      <pubDate>Tue, 28 Aug 2007 15:40:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/rpc-statd-looke-like-runaway-consuming-100-cpu/m-p/5061528#M436864</guid>
      <dc:creator>skt_skt</dc:creator>
      <dc:date>2007-08-28T15:40:52Z</dc:date>
    </item>
  </channel>
</rss>

