<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: FIN_WAIT_2 in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481409#M562152</link>
    <description>We have had a similar problem occasionally. I wrote a script that lets us change the timeout to anything between 0 and 20 seconds (a temporary change). The idea is that it's executed with, say, a 20-second timeout; we watch netstat for the FIN_WAIT_2s to go away, then reset it back to 0. We were advised that we really shouldn't change it permanently.</description>
    <pubDate>Tue, 08 Feb 2005 15:23:37 GMT</pubDate>
    <dc:creator>Gary L. Paveza, Jr.</dc:creator>
    <dc:date>2005-02-08T15:23:37Z</dc:date>
    <item>
      <title>FIN_WAIT_2</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481407#M562150</link>
      <description>Hey all,&lt;BR /&gt;&lt;BR /&gt;   I see that this area has been discussed before, but I would like a newer response, one geared to my company's environment. Here it goes. We are running an N4000 with HP-UX 11 and a legacy system called Universe (aka PICK). We are using the Universe ODBC (UNIObjects) clients on a couple of Windows servers running .Net applications. When I randomly run 'netstat -na | grep 31438' I see TCP connections from one specific Windows server where the status remains at FIN_WAIT_2. I have read about using a script to clean up these hung FIN_WAIT_2 connections, and also about setting the timeout from 0 to 60 minutes. What is the best approach? Is it something in the .Net application that is not closing correctly? Should I set the timeout to 60 minutes? Or use the script to hard-kill the hung connections? One last thing: we have another server that runs .Net applications making a similar connection, and it never leaves these behind.&lt;BR /&gt;&lt;BR /&gt;Matt</description>
      <pubDate>Tue, 08 Feb 2005 15:07:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481407#M562150</guid>
      <dc:creator>Matt Mumford</dc:creator>
      <dc:date>2005-02-08T15:07:22Z</dc:date>
    </item>
    <item>
      <title>Re: FIN_WAIT_2</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481408#M562151</link>
      <description>&lt;BR /&gt;see this thread: &lt;A href="http://forums1.itrc.hp.com/service/forums/parseCurl.do?CURL=%2Fcm%2FQuestionAnswer%2F1%2C%2C0xe0d97680e012d71190050090279cd0f9%2C00.html&amp;amp;admit=716493758+1107893897004+28353475" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/parseCurl.do?CURL=%2Fcm%2FQuestionAnswer%2F1%2C%2C0xe0d97680e012d71190050090279cd0f9%2C00.html&amp;amp;admit=716493758+1107893897004+28353475&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;btw, I programmed in pick (Universe) for 10 years.&lt;BR /&gt;&lt;BR /&gt;and get "lsof", as netstat SUCKS: &lt;A href="http://hpux.cs.utah.edu/hppd/hpux/Sysadmin/lsof-4.74/" target="_blank"&gt;http://hpux.cs.utah.edu/hppd/hpux/Sysadmin/lsof-4.74/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;live free or die&lt;BR /&gt;harry d brown jr</description>
      <pubDate>Tue, 08 Feb 2005 15:20:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481408#M562151</guid>
      <dc:creator>harry d brown jr</dc:creator>
      <dc:date>2005-02-08T15:20:08Z</dc:date>
    </item>
    <item>
      <title>Re: FIN_WAIT_2</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481409#M562152</link>
      <description>We have had a similar problem occasionally. I wrote a script that lets us change the timeout to anything between 0 and 20 seconds (a temporary change). The idea is that it's executed with, say, a 20-second timeout; we watch netstat for the FIN_WAIT_2s to go away, then reset it back to 0. We were advised that we really shouldn't change it permanently.</description>
      <pubDate>Tue, 08 Feb 2005 15:23:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481409#M562152</guid>
      <dc:creator>Gary L. Paveza, Jr.</dc:creator>
      <dc:date>2005-02-08T15:23:37Z</dc:date>
    </item>
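A minimal sketch of the temporary-timeout approach Gary describes. The tunable name tcp_fin_wait_2_timeout, its millisecond units, and the default of 0 (disabled) are assumptions about HP-UX 11 ndd, not confirmed by the thread; verify with ndd -h on your system before using anything like this:

```shell
# Sketch of the temporary FIN_WAIT_2 timeout change Gary describes.
# ASSUMED: the ndd tunable is tcp_fin_wait_2_timeout, it takes
# milliseconds, and 0 means "disabled" -- check ndd -h first.

# Convert a 0-20 second argument to the milliseconds ndd expects;
# returns 1 (printing nothing) for anything out of range.
to_ms() {
    case "$1" in *[!0-9]*|"") return 1 ;; esac
    if [ "$1" -gt 20 ]; then return 1; fi
    echo $(($1 * 1000))
}

# Set the timeout, watch netstat until the given port is clean,
# then restore the default.  Usage: drain_fin_wait_2 20 31438
drain_fin_wait_2() {
    ms=$(to_ms "$1") || return 1
    ndd -set /dev/tcp tcp_fin_wait_2_timeout "$ms"
    while netstat -an | grep "$2" | grep -q FIN_WAIT_2; do
        sleep 5
    done
    ndd -set /dev/tcp tcp_fin_wait_2_timeout 0
}
```

The range check mirrors the 0-20 second bound Gary mentions, since the advice was that the change should never be permanent.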
    <item>
      <title>Re: FIN_WAIT_2</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481410#M562153</link>
      <description>Matt,&lt;BR /&gt;&lt;BR /&gt;The suggestions already given should work well to keep those FIN_WAIT_2s cleaned up. I would recommend trying to identify why the one server is leaving those hanging while the other works fine.&lt;BR /&gt;&lt;BR /&gt;Are both servers running the same application(s)? Are the same users on both servers, or is there a different set of users on each server? Perhaps the users on one server are not using it correctly. What about the application revision? Is it the same on both servers?&lt;BR /&gt;&lt;BR /&gt;Just some ideas,&lt;BR /&gt;David</description>
      <pubDate>Tue, 08 Feb 2005 15:43:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481410#M562153</guid>
      <dc:creator>David Child_1</dc:creator>
      <dc:date>2005-02-08T15:43:01Z</dc:date>
    </item>
    <item>
      <title>Re: FIN_WAIT_2</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481411#M562154</link>
      <description>Hey all,&lt;BR /&gt;&lt;BR /&gt;   I installed LSOF and when I ran:&lt;BR /&gt;&lt;BR /&gt;   lsof -i tcp:31438&lt;BR /&gt;&lt;BR /&gt;   I got:&lt;BR /&gt;&lt;BR /&gt;   root@tzg # lsof -i tcp:31438&lt;BR /&gt;   Memory fault(coredump)&lt;BR /&gt;&lt;BR /&gt;   Any thoughts? Help.&lt;BR /&gt;&lt;BR /&gt;Matt</description>
      <pubDate>Wed, 09 Feb 2005 11:49:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481411#M562154</guid>
      <dc:creator>Matt Mumford</dc:creator>
      <dc:date>2005-02-09T11:49:21Z</dc:date>
    </item>
    <item>
      <title>Re: FIN_WAIT_2</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481412#M562155</link>
      <description>I do not think that anything has changed wrt this issue over the years. Likely as not those Windows clients are doing abortive closes and the RSTs are lost, or they are ignoring the FIN when the server closes.&lt;BR /&gt;&lt;BR /&gt;I would suggest you first make sure that the FIN_WAIT_2s are hanging around for longer than tcp_keepalive_detached_interval + tcp_ip_abort_interval before you start altering other timer settings. If the server application is calling close(), the connection becomes "detached" (i.e. has no associated socket) and after tcp_keepalive_detached_interval, keepalive probes will be sent. Those will likely generate RSTs from the Windows system if it did an abortive close. That will clear the FIN_WAIT_2.&lt;BR /&gt;&lt;BR /&gt;If there is no response, it will keep sending probes for tcp_ip_abort_interval time units.&lt;BR /&gt;&lt;BR /&gt;If the Windows system has not called close, the probes will elicit normal ACKs, and that indicates you have buggy Windows clients that need to be fixed.</description>
      <pubDate>Wed, 09 Feb 2005 12:38:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481412#M562155</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2005-02-09T12:38:28Z</dc:date>
    </item>
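Rick's rule of thumb can be sanity-checked with a little arithmetic. A sketch, assuming both tunables are reported in milliseconds by ndd -get; the 2-minute and 10-minute values below are purely hypothetical placeholders, not HP-UX defaults:

```shell
# Worst-case lifetime of a detached FIN_WAIT_2 endpoint, per rick's
# rule: tcp_keepalive_detached_interval + tcp_ip_abort_interval.
# ASSUMED: both values are in milliseconds, as ndd reports them.

fin2_lifetime_min() {
    echo $(( ($1 + $2) / 60000 ))
}

# Hypothetical values only (2 min detached interval, 10 min abort
# interval) -- substitute the real output of, e.g.:
#   ndd -get /dev/tcp tcp_keepalive_detached_interval
fin2_lifetime_min 120000 600000   # prints 12
```

A FIN_WAIT_2 that outlives this sum is the signal rick describes: only then is it worth looking at the other timer settings.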
    <item>
      <title>Re: FIN_WAIT_2</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481413#M562156</link>
      <description>Hey all,&lt;BR /&gt;&lt;BR /&gt;     I had to do a fresh compile of LSOF and it appears to be working. I have attached the view of two examples of the problem port. The first is a 'netstat -na | grep 31438', the second is 'lsof -i tcp:31438'.&lt;BR /&gt;&lt;BR /&gt;    The ones I am interested in are coming from IP 172.28.8.232 in the 'netstat' view; however, I am not seeing that information in the 'lsof' view. What am I missing? Help&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Feb 2005 14:24:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481413#M562156</guid>
      <dc:creator>Matt Mumford</dc:creator>
      <dc:date>2005-02-09T14:24:05Z</dc:date>
    </item>
    <item>
      <title>Re: FIN_WAIT_2</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481414#M562157</link>
      <description>Matt - my guess is that where netstat shows all TCP endpoints, lsof may only be showing those with an associated socket. If that is the case, it implies that those in netstat but not in lsof output are "detached" - i.e. the application has called close() on the socket.&lt;BR /&gt;&lt;BR /&gt;As such, the tcp_keepalive_detached_interval stuff should kick in.&lt;BR /&gt;&lt;BR /&gt;Unless we are talking about hundreds or thousands of FIN_WAIT_2 connections it really should not be a big deal for the transport. It has good hashes for connection lookup.&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Feb 2005 14:27:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481414#M562157</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2005-02-09T14:27:54Z</dc:date>
    </item>
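The netstat-side comparison Matt is doing by hand can be scripted. A sketch, assuming the 'netstat -an' column layout puts the remote endpoint in field 5 as ip.port and the state in the last field (check against your own output first):

```shell
# Count FIN_WAIT_2 connections per remote host, to spot which
# client (e.g. 172.28.8.232) is leaving them behind.
# ASSUMED layout: remote address in field 5 as ip.port, state last.
count_fin_wait_2() {
    awk '$NF == "FIN_WAIT_2" {
             sub(/\.[0-9]+$/, "", $5)   # drop the port, keep the host
             n[$5]++
         }
         END { for (h in n) print n[h], h }'
}

# Usage: netstat -an | count_fin_wait_2
```

Any host that shows up here but has no matching lines in lsof output is a candidate for the "detached endpoint" case rick describes.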
    <item>
      <title>Re: FIN_WAIT_2</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481415#M562158</link>
      <description>Hey all,&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;    The problem is that those FIN_WAIT_2 have been out there since my last reboot last weekend. &lt;BR /&gt;&lt;BR /&gt;Matt&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Feb 2005 15:25:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481415#M562158</guid>
      <dc:creator>Matt Mumford</dc:creator>
      <dc:date>2005-02-09T15:25:21Z</dc:date>
    </item>
    <item>
      <title>Re: FIN_WAIT_2</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481416#M562159</link>
      <description>If they have been there since last weekend, it suggests that the remote endpoints are still alive and responding to the keepalive pings.&lt;BR /&gt;&lt;BR /&gt;(Might want to check that tcp_keepalives_kill is still set to 1)&lt;BR /&gt;&lt;BR /&gt;Again, unless there are thousands of them, it really isn't a big deal - particularly if your server application code is written correctly, setting SO_REUSEADDR before trying to bind() on a restart of the application.</description>
      <pubDate>Wed, 09 Feb 2005 15:33:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481416#M562159</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2005-02-09T15:33:44Z</dc:date>
    </item>
    <item>
      <title>Re: FIN_WAIT_2</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481417#M562160</link>
      <description>Go look on the Windows box and see if it has a bunch of connections stuck in LAST_ACK. NT has a bug in tcpip.sys that used to do this all the time. It creates a bunch of FIN_WAIT_2s on the HP-UX side at the same time. Compare the version of tcpip.sys on the bad box with that of the good box.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://support.microsoft.com/default.aspx?scid=kb;en-us;254930" target="_blank"&gt;http://support.microsoft.com/default.aspx?scid=kb;en-us;254930&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;I have the fix for NT if you need it.&lt;BR /&gt;&lt;BR /&gt;Ron</description>
      <pubDate>Wed, 09 Feb 2005 18:24:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/fin-wait-2/m-p/3481417#M562160</guid>
      <dc:creator>Ron Kinner</dc:creator>
      <dc:date>2005-02-09T18:24:48Z</dc:date>
    </item>
  </channel>
</rss>

