<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Sockets .. in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/sockets/m-p/4017389#M541471</link>
    <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;maxuprc&lt;BR /&gt;&lt;BR /&gt;Maximum number of processes per user. Default is 75; that's easy to hit.&lt;BR /&gt;&lt;BR /&gt;nfile&lt;BR /&gt;nproc&lt;BR /&gt;maxfiles_lim&lt;BR /&gt;&lt;BR /&gt;SEP</description>
    <pubDate>Mon, 11 Jun 2007 09:57:00 GMT</pubDate>
    <dc:creator>Steven E. Protter</dc:creator>
    <dc:date>2007-06-11T09:57:00Z</dc:date>
    <item>
      <title>Sockets ..</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sockets/m-p/4017387#M541469</link>
      <description>My app team wrote:&lt;BR /&gt;&lt;BR /&gt;The behavior we see is a steady increase in open files (lsof -p &lt;PID&gt; | wc -l) until it reaches the current max of 2048. At that point, the process must be bounced. The "open files" look like sockets (from looking at lsof -p &lt;PID&gt;).&lt;BR /&gt;&lt;BR /&gt;There are two servers, servera and serverb, with the same config. So we thought. There were some kernel changes made to sync the systems up, but that did not change the issue.&lt;BR /&gt;&lt;BR /&gt;I checked ndd settings and cannot find anything related to sockets.&lt;BR /&gt;&lt;BR /&gt;Am I missing something?</description>
      <pubDate>Mon, 11 Jun 2007 09:37:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sockets/m-p/4017387#M541469</guid>
      <dc:creator>rleon</dc:creator>
      <dc:date>2007-06-11T09:37:52Z</dc:date>
    </item>
    <item>
      <title>Re: Sockets ..</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sockets/m-p/4017388#M541470</link>
      <description>It would really help if there were a question here, but I think the answer is that you are hitting the kernel tunable maxfiles, which also applies to sockets. See socket(2) for details. The answer may be to increase maxfiles (which may also require an increase in maxfiles_lim), but it may also be to fix your code. It sounds as though you are calling socket() many times without making the corresponding shutdown() or close() calls.</description>
      <pubDate>Mon, 11 Jun 2007 09:53:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sockets/m-p/4017388#M541470</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2007-06-11T09:53:08Z</dc:date>
    </item>
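    <!--
    A minimal POSIX sketch of the distinction raised above (illustrative, not part of the original thread): shutdown() only ends the connection, while close() is what returns the descriptor to the process's file table. The AF_INET/SOCK_STREAM choice here is just for demonstration.

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        assert(fd >= 0);

        /* shutdown() ends the conversation on a connected socket; here
           (unconnected) it fails with ENOTCONN, and in either case it
           leaves the descriptor allocated. */
        shutdown(fd, SHUT_RDWR);

        /* Only close() releases the descriptor slot. */
        assert(close(fd) == 0);

        /* The slot is free again: POSIX hands out the lowest available
           descriptor, so the next socket() reuses the same number. */
        int fd2 = socket(AF_INET, SOCK_STREAM, 0);
        assert(fd2 == fd);
        close(fd2);

        printf("ok\n");
        return 0;
    }
    ```
    -->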
    <item>
      <title>Re: Sockets ..</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sockets/m-p/4017389#M541471</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;maxuprc&lt;BR /&gt;&lt;BR /&gt;Maximum number of processes per user. Default is 75; that's easy to hit.&lt;BR /&gt;&lt;BR /&gt;nfile&lt;BR /&gt;nproc&lt;BR /&gt;maxfiles_lim&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Mon, 11 Jun 2007 09:57:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sockets/m-p/4017389#M541471</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2007-06-11T09:57:00Z</dc:date>
    </item>
    <item>
      <title>Re: Sockets ..</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sockets/m-p/4017390#M541472</link>
      <description>The application should issue a shutdown(2) system call to close the current connection, along with closing the socket descriptor, before it opens a new one. If that is not the case, then the app team needs to modify their code accordingly.</description>
      <pubDate>Mon, 11 Jun 2007 09:59:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sockets/m-p/4017390#M541472</guid>
      <dc:creator>Sandman!</dc:creator>
      <dc:date>2007-06-11T09:59:19Z</dc:date>
    </item>
    <item>
      <title>Re: Sockets ..</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sockets/m-p/4017391#M541473</link>
      <description>maxfiles_lim is set to 2048.&lt;BR /&gt;But here is the kicker: it is set to 2048 on both servers, yet one still has the issue with the sockets and one doesn't.&lt;BR /&gt;&lt;BR /&gt;Same hardware, same build image, same kernel parameters.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 11 Jun 2007 10:18:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sockets/m-p/4017391#M541473</guid>
      <dc:creator>rleon</dc:creator>
      <dc:date>2007-06-11T10:18:37Z</dc:date>
    </item>
    <item>
      <title>Re: Sockets ..</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sockets/m-p/4017392#M541474</link>
      <description>There are two tunables for the same value that act as the soft and hard limits respectively: maxfiles and maxfiles_lim. maxfiles_lim should be &amp;gt; maxfiles. The application actually hits maxfiles, which causes socket() to fail with errno set to EMFILE. Well-written applications will then attempt to raise the limit (up to a maximum of maxfiles_lim). In any event, you can't simply compare two servers unless their loading is almost identical.</description>
      <pubDate>Mon, 11 Jun 2007 10:27:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sockets/m-p/4017392#M541474</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2007-06-11T10:27:29Z</dc:date>
    </item>
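    <!--
    A hedged sketch of the retry pattern described above (illustrative, not part of the original thread): the per-process analogue of the maxfiles / maxfiles_lim pair is the RLIMIT_NOFILE soft / hard limit, so on EMFILE a process may raise its soft limit toward the hard limit and try again. This uses only POSIX getrlimit/setrlimit, not the HP-UX kernel tunables themselves.

    ```c
    #include <assert.h>
    #include <errno.h>
    #include <stdio.h>
    #include <sys/resource.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Try socket(); on EMFILE, raise the soft descriptor limit to the
       hard limit and retry once. */
    static int socket_with_retry(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0 && errno == EMFILE) {
            struct rlimit rl;
            if (getrlimit(RLIMIT_NOFILE, &rl) == 0 && rl.rlim_cur < rl.rlim_max) {
                rl.rlim_cur = rl.rlim_max;  /* soft limit up to hard limit */
                if (setrlimit(RLIMIT_NOFILE, &rl) == 0)
                    fd = socket(AF_INET, SOCK_STREAM, 0);
            }
        }
        return fd;
    }

    int main(void)
    {
        int fd = socket_with_retry();
        assert(fd >= 0);
        close(fd);
        printf("ok\n");
        return 0;
    }
    ```
    -->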
    <item>
      <title>Re: Sockets ..</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/sockets/m-p/4017393#M541475</link>
      <description>One possible translation:&lt;BR /&gt;&lt;BR /&gt;Our application is leaking file descriptors.&lt;BR /&gt;&lt;BR /&gt;(Sockets are file descriptors, just as squares are rectangles.)&lt;BR /&gt;&lt;BR /&gt;The suggestion to call shutdown() is probably on the right track, but insufficient: shutdown() does not actually free the file descriptor. You have to call close() for that.&lt;BR /&gt;&lt;BR /&gt;So, some tracing of the server application with tusc, to see what system calls it is making and whether it properly responds to, say, a read() return of zero (indicating a client close), would seem to be in order.&lt;BR /&gt;&lt;BR /&gt;If the clients aren't closing of their own accord, then perhaps the server application needs a means to cull older connections on its own.</description>
      <pubDate>Tue, 12 Jun 2007 12:26:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/sockets/m-p/4017393#M541475</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2007-06-12T12:26:20Z</dc:date>
    </item>
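    <!--
    A self-contained sketch of the pattern in the last post (illustrative, not part of the original thread): when read() on a connected socket returns 0, the peer has closed, and the server must close() its end or the descriptor leaks. A socketpair stands in for a real client/server connection.

    ```c
    #include <assert.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int sv[2];
        assert(socketpair(AF_UNIX, SOCK_STREAM, 0, sv) == 0);

        assert(write(sv[0], "bye", 3) == 3);
        close(sv[0]);                /* "client" closes its end */

        char buf[16];
        ssize_t n;
        while ((n = read(sv[1], buf, sizeof buf)) > 0)
            ;                        /* drain any pending data */

        assert(n == 0);              /* 0 means EOF: the peer has closed */
        close(sv[1]);                /* release the descriptor, avoiding the leak */

        printf("ok\n");
        return 0;
    }
    ```
    -->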
  </channel>
</rss>

