<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: connect requests dropped due to full queue in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064053#M542166</link>
    <description>The so-called "listen queue" - the thing overflowing when you see connect requests dropped due to full queue - will be the _minimum_ of tcp_conn_request_max and the backlog the application passes in to the listen() socket call.  So, once tcp_conn_request_max is above what the application is setting, any further increase in tcp_conn_request_max will have no effect.&lt;BR /&gt;&lt;BR /&gt;The thing you really need to do is figure out why the agent is freezing and make that go away.  Tweaking listen queues only treats a symptom.&lt;BR /&gt;&lt;BR /&gt;Now, with regard to your specific netstat output: the numbers are the same after your tuning because the statistics do not reset except at boot.  So, either there were no additional drops after the tuning, or you happened to coincidentally get the same stats after a boot, or you cut-and-pasted the wrong set of stats.&lt;BR /&gt;&lt;BR /&gt;If you want stats over an interval, save two snapshots of the netstat stats to files and run them through beforeafter:&lt;BR /&gt;&lt;BR /&gt;netstat -s -p tcp &amp;gt; before&lt;BR /&gt;sleep 60&lt;BR /&gt;netstat -s -p tcp &amp;gt; after&lt;BR /&gt;beforeafter before after &amp;gt; delta&lt;BR /&gt;more delta&lt;BR /&gt;&lt;BR /&gt;You can get beforeafter from:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="ftp://ftp.cup.hp.com/dist/networking/tools/" target="_blank"&gt;ftp://ftp.cup.hp.com/dist/networking/tools/&lt;/A&gt;</description>
    <pubDate>Tue, 04 Sep 2007 12:35:10 GMT</pubDate>
    <dc:creator>rick jones</dc:creator>
    <dc:date>2007-09-04T12:35:10Z</dc:date>
    <item>
      <title>connect requests dropped due to full queue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064049#M542162</link>
      <description>A third party application (BMC Patrol) running on one of my servers has a problem with the agent freezing and not responding to requests.&lt;BR /&gt;When I run the "netstat -p tcp" command I get a lot of output.  I am copying the last few lines here:&lt;BR /&gt;4680 connection requests&lt;BR /&gt;        16044 connection accepts&lt;BR /&gt;        20724 connections established (including accepts)&lt;BR /&gt;        23711 connections closed (including 3295 drops)&lt;BR /&gt;        1413 embryonic connections dropped&lt;BR /&gt;        3653768 segments updated rtt (of 3653768 attempts)&lt;BR /&gt;        3101 retransmit timeouts&lt;BR /&gt;                473 connections dropped by rexmit timeout&lt;BR /&gt;        205 persist timeouts&lt;BR /&gt;        9 keepalive timeouts&lt;BR /&gt;                4 keepalive probes sent&lt;BR /&gt;                1 connection dropped by keepalive&lt;BR /&gt;        55562 connect requests dropped due to full queue&lt;BR /&gt;        1398 connect requests dropped due to no listener&lt;BR /&gt;&lt;BR /&gt;As suggested on this forum, I increased the values of two tunables.  I'll paste them below:&lt;BR /&gt;&lt;BR /&gt;TRANSPORT_NAME[0]=tcp&lt;BR /&gt;NDD_NAME[0]=tcp_conn_request_max&lt;BR /&gt;NDD_VALUE[0]=8192&lt;BR /&gt;#&lt;BR /&gt;# Changed the default value of tcp_syn_rcvd_max to 2000 (was 500)&lt;BR /&gt;TRANSPORT_NAME[1]=tcp&lt;BR /&gt;NDD_NAME[1]=tcp_syn_rcvd_max&lt;BR /&gt;NDD_VALUE[1]=2000&lt;BR /&gt;&lt;BR /&gt;I still have:&lt;BR /&gt;55562 connect requests dropped due to full queue&lt;BR /&gt;and in addition:&lt;BR /&gt;1398 connect requests dropped due to no listener&lt;BR /&gt;&lt;BR /&gt;What are your thoughts and suggestions?&lt;BR /&gt;Thanks in advance.</description>
      <pubDate>Sat, 01 Sep 2007 12:00:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064049#M542162</guid>
      <dc:creator>George_231</dc:creator>
      <dc:date>2007-09-01T12:00:29Z</dc:date>
    </item>
    <item>
      <title>Re: connect requests dropped due to full queue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064050#M542163</link>
      <description>netstat -s&lt;BR /&gt;shows some stats.  Run it again in 5 minutes to perform a delta check or comparison.&lt;BR /&gt;&lt;BR /&gt;Are you patched up to date?&lt;BR /&gt;&lt;BR /&gt;Get on your locally connected switch (e.g. a Cisco switch) and have networking verify how many packets are being dropped, and determine the percentage of dropped packets in the switch.&lt;BR /&gt;&lt;BR /&gt;Are you using a GigE network card on the server?  Run lanadmin, look at the stats on the network card, and verify your settings: speed negotiation, etc.  If you have a GigE card, the flow-control stats will show whether the switch is backing up or dropping packets - might be something to look at in your investigation.&lt;BR /&gt;&lt;BR /&gt;Check the system log for possible errors.&lt;BR /&gt;&lt;BR /&gt;Good luck,&lt;BR /&gt;T</description>
      <pubDate>Sat, 01 Sep 2007 20:45:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064050#M542163</guid>
      <dc:creator>D Block 2</dc:creator>
      <dc:date>2007-09-01T20:45:08Z</dc:date>
    </item>
    <item>
      <title>Re: connect requests dropped due to full queue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064051#M542164</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;On the face of the message, it would appear the card is getting more traffic than it can handle.&lt;BR /&gt;&lt;BR /&gt;I'd expect to see something elsewhere though, in dmesg or /var/adm/syslog/syslog.log.&lt;BR /&gt;&lt;BR /&gt;Perhaps fire up cstm, mstm, or xstm and run some hardware diagnostics.&lt;BR /&gt;&lt;BR /&gt;If the switch is set to the same speed as the NIC, your network team might find some problems too.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Sun, 02 Sep 2007 13:49:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064051#M542164</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2007-09-02T13:49:23Z</dc:date>
    </item>
    <item>
      <title>Re: connect requests dropped due to full queue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064052#M542165</link>
      <description>George,&lt;BR /&gt;&lt;BR /&gt;I'm more concerned with your current ARPA Transport and STREAMS patch levels, because those patches fixed a lot of TCP issues:&lt;BR /&gt;&lt;BR /&gt;PHNE_35351 11.11 cumulative ARPA Transport patch&lt;BR /&gt;PHNE_34131 11.11 cumulative STREAMS patch&lt;BR /&gt;or&lt;BR /&gt;PHNE_35766 11.23 cumulative ARPA Transport patch&lt;BR /&gt;PHNE_34788 11.23 cumulative STREAMS patch&lt;BR /&gt;&lt;BR /&gt;Please get the patches updated.&lt;BR /&gt;&lt;BR /&gt;WK&lt;BR /&gt;please assign points</description>
      <pubDate>Mon, 03 Sep 2007 00:30:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064052#M542165</guid>
      <dc:creator>whiteknight</dc:creator>
      <dc:date>2007-09-03T00:30:18Z</dc:date>
    </item>
    <item>
      <title>Re: connect requests dropped due to full queue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064053#M542166</link>
      <description>The so-called "listen queue" - the thing overflowing when you see connect requests dropped due to full queue - will be the _minimum_ of tcp_conn_request_max and the backlog the application passes in to the listen() socket call.  So, once tcp_conn_request_max is above what the application is setting, any further increase in tcp_conn_request_max will have no effect.&lt;BR /&gt;&lt;BR /&gt;The thing you really need to do is figure out why the agent is freezing and make that go away.  Tweaking listen queues only treats a symptom.&lt;BR /&gt;&lt;BR /&gt;Now, with regard to your specific netstat output: the numbers are the same after your tuning because the statistics do not reset except at boot.  So, either there were no additional drops after the tuning, or you happened to coincidentally get the same stats after a boot, or you cut-and-pasted the wrong set of stats.&lt;BR /&gt;&lt;BR /&gt;If you want stats over an interval, save two snapshots of the netstat stats to files and run them through beforeafter:&lt;BR /&gt;&lt;BR /&gt;netstat -s -p tcp &amp;gt; before&lt;BR /&gt;sleep 60&lt;BR /&gt;netstat -s -p tcp &amp;gt; after&lt;BR /&gt;beforeafter before after &amp;gt; delta&lt;BR /&gt;more delta&lt;BR /&gt;&lt;BR /&gt;You can get beforeafter from:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="ftp://ftp.cup.hp.com/dist/networking/tools/" target="_blank"&gt;ftp://ftp.cup.hp.com/dist/networking/tools/&lt;/A&gt;</description>
      <pubDate>Tue, 04 Sep 2007 12:35:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064053#M542166</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2007-09-04T12:35:10Z</dc:date>
    </item>
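The beforeafter workflow above just subtracts each counter in one netstat snapshot from the matching counter in a later one. If the beforeafter tool from that FTP site is unavailable, the same delta can be sketched in a few lines of Python (a minimal sketch; the sample counters below are taken from the netstat output earlier in the thread):

```python
import re

def parse_netstat(text):
    """Map each counter description to its value, e.g.
    '55562 connect requests dropped due to full queue'
    -> {'connect requests dropped due to full queue': 55562}."""
    stats = {}
    for line in text.splitlines():
        m = re.match(r"\s*(\d+)\s+(.*\S)", line)
        if m:
            stats[m.group(2)] = int(m.group(1))
    return stats

def delta(before, after):
    """Per-counter difference between two snapshots."""
    b, a = parse_netstat(before), parse_netstat(after)
    return {k: a[k] - b.get(k, 0) for k in a}

# Two snapshots taken an interval apart (illustrative numbers).
before = ("55562 connect requests dropped due to full queue\n"
          "3101 retransmit timeouts")
after = ("55600 connect requests dropped due to full queue\n"
         "3105 retransmit timeouts")
print(delta(before, after))
# -> {'connect requests dropped due to full queue': 38, 'retransmit timeouts': 4}
```

A growing "full queue" delta over a short interval confirms the drops are ongoing rather than stale boot-to-date totals.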
    <item>
      <title>Re: connect requests dropped due to full queue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064054#M542167</link>
      <description>I did reboot the server twice while applying different sets of patches, the latter being the STREAMS patch recommended in this thread.&lt;BR /&gt;In short:&lt;BR /&gt;1. I have applied the recommended patches&lt;BR /&gt;2. Increased the value of tcp_conn_request_max to 8192&lt;BR /&gt;3. Increased the value of tcp_syn_rcvd_max to 2000&lt;BR /&gt;I still have the "connect requests dropped due to full queue" counter, with a rapidly growing count.&lt;BR /&gt;&lt;BR /&gt;I found out from netstat -an that I have packet loss of 10/14/18% on a certain network monitored by Patrol.  I'll involve the network staff to troubleshoot.</description>
      <pubDate>Wed, 05 Sep 2007 18:58:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064054#M542167</guid>
      <dc:creator>George_231</dc:creator>
      <dc:date>2007-09-05T18:58:34Z</dc:date>
    </item>
    <item>
      <title>Re: connect requests dropped due to full queue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064055#M542168</link>
      <description>Have you actually tweaked any of the _application_ settings?  Again, setting tcp_conn_request_max alone is not enough if the application is calling listen() with a smaller value for the backlog parameter.</description>
      <pubDate>Wed, 05 Sep 2007 19:01:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064055#M542168</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2007-09-05T19:01:47Z</dc:date>
    </item>
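The backlog point above can be demonstrated with a small socket program. This is only a sketch, not BMC Patrol's actual code (Patrol's real backlog value is vendor-defined); it shows where the application-side half of the min(backlog, tcp_conn_request_max) limit comes from:

```python
import socket

def make_listener(backlog):
    """Open a listening TCP socket with an explicit backlog.

    The effective listen queue is min(backlog, kernel cap); the cap is
    tcp_conn_request_max on HP-UX (net.core.somaxconn on Linux), so
    raising the kernel tunable beyond this backlog changes nothing.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("127.0.0.1", 0))   # ephemeral port for the demo
    s.listen(backlog)          # application-side half of the minimum
    return s

# Hypothetical backlog; the real value lives in the app's code or config.
srv = make_listener(128)
print("listening on port", srv.getsockname()[1])
srv.close()
```

If the application hard-codes a small backlog and exposes no config knob for it, only the vendor can raise the application side of the minimum.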
    <item>
      <title>Re: connect requests dropped due to full queue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064056#M542169</link>
      <description>If your clients disconnect due to timeouts caused by a full listen queue, tuning tcp_conn_request_max higher won't do anything for the app.  When your app starts, it is more than likely going to set the backlog to some other defined value.  Check your app's config files or the vendor's support site for anything re: listen backlog or listenq.&lt;BR /&gt;&lt;BR /&gt;-denver</description>
      <pubDate>Wed, 05 Sep 2007 20:57:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/connect-requests-dropped-due-to-full-queue/m-p/4064056#M542169</guid>
      <dc:creator>Denver Osborn</dc:creator>
      <dc:date>2007-09-05T20:57:47Z</dc:date>
    </item>
  </channel>
</rss>

