<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Socket queue/buffer introspection in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163159#M570022</link>
    <description>Rick,&lt;BR /&gt; &lt;BR /&gt;what's wrong with my invocation of tcp_status?&lt;BR /&gt;See ndd's responses below:&lt;BR /&gt; &lt;BR /&gt;# uname -srv&lt;BR /&gt;HP-UX B.11.00 U&lt;BR /&gt;&lt;BR /&gt;# ndd -h|grep status          &lt;BR /&gt;    ip_ill_status         -  Displays a report of all physical interfaces&lt;BR /&gt;    ip_ipif_status        -  Displays a report of all logical interfaces&lt;BR /&gt;    ip_ire_status         -  Displays all routing table entries&lt;BR /&gt;    ip_udp_status         -  Reports IP level UDP fanout table&lt;BR /&gt;    tcp_status               -  Get netstat-like TCP instances information&lt;BR /&gt;    udp_status               -  Get UDP instances information.&lt;BR /&gt;    ip_ill_config_status  -  Internal configuration option&lt;BR /&gt;    ip_mc_filter_status   -  Internal Multicast filter option&lt;BR /&gt; &lt;BR /&gt;# ndd -get /dev/tcp tcp_status&lt;BR /&gt;operation failed, Invalid argument&lt;BR /&gt;  &lt;BR /&gt;# ndd -?&lt;BR /&gt;usage: ndd [-get] network-device parameter&lt;BR /&gt;           [-set] network-device parameter value&lt;BR /&gt;           [ -h ] [ supported | unsupported ] &lt;BR /&gt;           [ -c ]&lt;BR /&gt;  &lt;BR /&gt;# ndd -get /dev/tcp \?|grep status&lt;BR /&gt;tcp_status                    (read only)&lt;BR /&gt; &lt;BR /&gt;</description>
    <pubDate>Fri, 16 Jan 2004 11:46:16 GMT</pubDate>
    <dc:creator>Ralph Grothe</dc:creator>
    <dc:date>2004-01-16T11:46:16Z</dc:date>
    <item>
      <title>Socket queue/buffer introspection</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163155#M570018</link>
      <description>Hello,&lt;BR /&gt; &lt;BR /&gt;I need to know how to get a glance at the contents of socket queues.&lt;BR /&gt; &lt;BR /&gt;The reason are inexplicable tcp server latencies (e.g. telnets), and I thus wish to monitor the queues' states.&lt;BR /&gt; &lt;BR /&gt;Regards&lt;BR /&gt;Ralph&lt;BR /&gt;</description>
      <pubDate>Tue, 13 Jan 2004 11:24:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163155#M570018</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2004-01-13T11:24:02Z</dc:date>
    </item>
    <item>
      <title>Re: Socket queue/buffer introspection</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163156#M570019</link>
      <description>Ralph,&lt;BR /&gt;&lt;BR /&gt;Did you consider looking at "Recv-Q" and "Send-Q" in netstat -an output? You can get some information out of them, though they may not be true representations of bottlenecks.&lt;BR /&gt;&lt;BR /&gt;-Sri</description>
      <pubDate>Tue, 13 Jan 2004 12:43:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163156#M570019</guid>
      <dc:creator>Sridhar Bhaskarla</dc:creator>
      <dc:date>2004-01-13T12:43:29Z</dc:date>
    </item>
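    <!--
Editor's note: the Recv-Q/Send-Q suggestion above is easy to script for periodic monitoring. A minimal sketch in Python, assuming a BSD-style netstat layout where Recv-Q and Send-Q are the second and third columns; the sample text, addresses, and threshold here are invented for illustration, and real output would be captured from "netstat -an -f inet":

```python
# Sketch: flag TCP sockets whose Recv-Q or Send-Q exceeds a threshold.
# The sample text below stands in for `netstat -an -f inet` output;
# in practice you would capture it with subprocess.check_output.

SAMPLE = """\
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp        0      0  10.0.0.5.23            10.0.0.9.51234         ESTABLISHED
tcp    16384      0  10.0.0.5.23            10.0.0.7.40112         ESTABLISHED
tcp        0   8192  10.0.0.5.8080          10.0.0.3.33920         ESTABLISHED
"""

def queued_sockets(text, threshold=4096):
    """Return (local, foreign, recv_q, send_q) for lines over threshold."""
    hits = []
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 6 and fields[0].startswith("tcp"):
            recv_q, send_q = int(fields[1]), int(fields[2])
            if recv_q > threshold or send_q > threshold:
                hits.append((fields[3], fields[4], recv_q, send_q))
    return hits

for local, foreign, rq, sq in queued_sockets(SAMPLE):
    print(local, foreign, rq, sq)
```

Running something like this in a loop while the telnet latencies occur would show whether either queue actually grows.
    -->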
    <item>
      <title>Re: Socket queue/buffer introspection</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163157#M570020</link>
      <description>Sridhar,&lt;BR /&gt; &lt;BR /&gt;yes, the two columns of send and receive queues from a "netstat -an -f inet" was also the first thing that came to my mind.&lt;BR /&gt; &lt;BR /&gt;But I hoped for some tool that would give a more thorough introspection (e.g. contents, or state changes in queues on different layers of the tcp/ip stack, as well as maybe pointers to possible "memory leaks").</description>
      <pubDate>Wed, 14 Jan 2004 10:05:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163157#M570020</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2004-01-14T10:05:17Z</dc:date>
    </item>
    <item>
      <title>Re: Socket queue/buffer introspection</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163158#M570021</link>
      <description>If one is within the context of a process with access to the socket, there is always MSG_PEEK.  However, from outside the process, it would basically mean finding the socket structure in the kernel and then following the pointers to the mblks chained off of it and such.  I'm not aware of such a tool.&lt;BR /&gt;&lt;BR /&gt;Lsof may give slightly more information, but perhaps not all of what you might want.&lt;BR /&gt;&lt;BR /&gt;As for state changes in the stack, the only place that really has "state" would be TCP, and perhaps the socket/streamhead.  TCP's state (connection anyhow) is displayed in that netstat -an output.&lt;BR /&gt;&lt;BR /&gt;There is also the output of "ndd /dev/tcp tcp_status" which will have a little more information about the TCP connection.&lt;BR /&gt;&lt;BR /&gt;It might help to know when in the lifecycle of these telnet connections you have these inexplicable latencies - there may be other things to check.</description>
      <pubDate>Thu, 15 Jan 2004 12:53:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163158#M570021</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2004-01-15T12:53:59Z</dc:date>
    </item>
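    <!--
Editor's note: the MSG_PEEK point above can be demonstrated with a small sketch. From inside the process that owns the socket, recv() with MSG_PEEK copies queued bytes without dequeuing them, so a later normal recv() sees the same data. A minimal Python illustration over a local socket pair; the pair and the payload are invented for the demo, while MSG_PEEK itself is standard BSD sockets semantics:

```python
import socket

# Create a connected pair so the example needs no external server.
a, b = socket.socketpair()
a.sendall(b"stalled payload")

# MSG_PEEK copies data out of the receive queue without consuming it.
peeked = b.recv(64, socket.MSG_PEEK)
again = b.recv(64)       # a normal recv still sees the same bytes

print(peeked == again)
a.close()
b.close()
```

The same flag works on AF_INET sockets; from outside the owning process, as noted above, there is no comparable shortcut.
    -->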
    <item>
      <title>Re: Socket queue/buffer introspection</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163159#M570022</link>
      <description>Rick,&lt;BR /&gt; &lt;BR /&gt;what's wrong with my invocation of tcp_status?&lt;BR /&gt;See ndd's responses below:&lt;BR /&gt; &lt;BR /&gt;# uname -srv&lt;BR /&gt;HP-UX B.11.00 U&lt;BR /&gt;&lt;BR /&gt;# ndd -h|grep status          &lt;BR /&gt;    ip_ill_status         -  Displays a report of all physical interfaces&lt;BR /&gt;    ip_ipif_status        -  Displays a report of all logical interfaces&lt;BR /&gt;    ip_ire_status         -  Displays all routing table entries&lt;BR /&gt;    ip_udp_status         -  Reports IP level UDP fanout table&lt;BR /&gt;    tcp_status               -  Get netstat-like TCP instances information&lt;BR /&gt;    udp_status               -  Get UDP instances information.&lt;BR /&gt;    ip_ill_config_status  -  Internal configuration option&lt;BR /&gt;    ip_mc_filter_status   -  Internal Multicast filter option&lt;BR /&gt; &lt;BR /&gt;# ndd -get /dev/tcp tcp_status&lt;BR /&gt;operation failed, Invalid argument&lt;BR /&gt;  &lt;BR /&gt;# ndd -?&lt;BR /&gt;usage: ndd [-get] network-device parameter&lt;BR /&gt;           [-set] network-device parameter value&lt;BR /&gt;           [ -h ] [ supported | unsupported ] &lt;BR /&gt;           [ -c ]&lt;BR /&gt;  &lt;BR /&gt;# ndd -get /dev/tcp \?|grep status&lt;BR /&gt;tcp_status                    (read only)&lt;BR /&gt; &lt;BR /&gt;</description>
      <pubDate>Fri, 16 Jan 2004 11:46:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163159#M570022</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2004-01-16T11:46:16Z</dc:date>
    </item>
    <item>
      <title>Re: Socket queue/buffer introspection</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163160#M570023</link>
      <description>I seem to recall there being some issues with STRMSGSIZE and the quantity of data some ndd calls into the kernel wish to return.  IIRC, the fix is to be on the latest ndd patch and perhaps set STRMSGSIZE to 0 and regen and reboot.&lt;BR /&gt;&lt;BR /&gt;The usual song and dance about being on the latest ARPA transport patch and its dependencies would likely apply as well.</description>
      <pubDate>Fri, 16 Jan 2004 13:09:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163160#M570023</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2004-01-16T13:09:02Z</dc:date>
    </item>
    <item>
      <title>Re: Socket queue/buffer introspection</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163161#M570024</link>
      <description>Rick,&lt;BR /&gt; &lt;BR /&gt;there is a cumulative ARPA patch on this box&lt;BR /&gt; &lt;BR /&gt;# swlist |grep -i arpa                                   &lt;BR /&gt;  PHNE_20436                            1.0            cumulative ARPA Transport patch &lt;BR /&gt; &lt;BR /&gt;However, as you mentioned, it may not be that recent.&lt;BR /&gt; &lt;BR /&gt;And it doesn't seem to address any streams segment size&lt;BR /&gt; &lt;BR /&gt;# swlist -l fileset -a readme  PHNE_20436|grep -i sgsize&lt;BR /&gt;        Blocking sendmsg() returns EMSGSIZE when recieve side is&lt;BR /&gt;        overflow of revieve side resulting in the EMSGSIZE.&lt;BR /&gt; &lt;BR /&gt; &lt;BR /&gt;Nor is in the accompanying readme a match against tcp_status.&lt;BR /&gt; &lt;BR /&gt;However, I've got an 11i box that shows a tcp_status dump, where the query succeeds.&lt;BR /&gt; &lt;BR /&gt;But then I fear I'm not register-literate enough to interpret this correctly.&lt;BR /&gt; &lt;BR /&gt;# uname -srv&lt;BR /&gt;HP-UX B.11.11 U&lt;BR /&gt;# ndd -get /dev/tcp tcp_status|wc -c&lt;BR /&gt;16389&lt;BR /&gt;# ndd -get /dev/tcp tcp_status|head &lt;BR /&gt;TCP              dst                                          snxt     suna     swnd     cwnd     rnxt &lt;BR /&gt;    rack     rwnd     rto   mss   [lport,fport] state&lt;BR /&gt;00000000431b1be8 000.000.000.000                               cc9be10e cc9be10d 00000000 00000000 0000&lt;BR /&gt;0000 00000000 00000000 01500 00536 [c001,0] TCP_LISTEN &lt;BR /&gt;00000000431b1828 000.000.000.000                               cc97ae96 cc97ae95 00000000 00000000 0000&lt;BR /&gt;0000 00000000 00000000 01500 00536 [0,0] TCP_IDLE &lt;BR /&gt;00000000431b10a8 000.000.000.000                               cc950084 cc950083 00000000 00000000 0000&lt;BR /&gt;0000 00000000 00000000 01500 00536 [16,0] TCP_LISTEN &lt;BR /&gt;00000000431b1468 000.000.000.000                               cc96cd0a cc96cd09 00000000 00000000 0000&lt;BR /&gt;0000 
00000000 00000000 01500 00536 [6f,0] TCP_LISTEN &lt;BR /&gt;00000000482818e8 000.000.000.000                               1eb99c0b 1eb99c0a 00000000 00000000 0000&lt;BR /&gt;0000 00000000 00000000 01500 00536 [d7e2,0] TCP_LISTEN &lt;BR /&gt;0000000048281528 000.000.000.000                               c76a13c1 c76a13c0 00000000 00000000 0000&lt;BR /&gt;0000 00000000 00000000 01500 00536 [0,0] TCP_IDLE &lt;BR /&gt;0000000061d9b468 000.000.000.000                               ce79cc52 ce79cc51 00000000 00000000 0000&lt;BR /&gt;0000 00000000 00000000 01500 00536 [c354,0] TCP_LISTEN &lt;BR /&gt;000000004c149468 000.000.000.000                               d1048fac d1048fab 00000000 00000000 0000&lt;BR /&gt;0000 00000000 00000000 01500 00536 [c07d,0] TCP_LISTEN &lt;BR /&gt;000000006139b8e8 127.000.000.001                               11ee677a 11ee677a 00008000 00002073 11ed&lt;BR /&gt;7d3f 11ed7d3f 00008000 00500 04096 [13bb,d11f] TCP_ESTABLISHED &lt;BR /&gt;</description>
      <pubDate>Mon, 19 Jan 2004 10:10:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163161#M570024</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2004-01-19T10:10:26Z</dc:date>
    </item>
    <item>
      <title>Re: Socket queue/buffer introspection</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163162#M570025</link>
      <description>Ralph,&lt;BR /&gt;&lt;BR /&gt;Though it is not going to give you an in-depth analysis of your socket queues, you might want to install ethereal and find out if the problem is in fact with your tcp servers. While it is running, reproduce your problem and follow the tcp stream. You can see each sequence and the time it took to send|receive the acknowledgements. You can also see details for each packet.&lt;BR /&gt;&lt;BR /&gt;You could do this with nettl and tcpdump too. But I find it much easier with ethereal.&lt;BR /&gt;&lt;BR /&gt;-Sri</description>
      <pubDate>Mon, 19 Jan 2004 10:44:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163162#M570025</guid>
      <dc:creator>Sridhar Bhaskarla</dc:creator>
      <dc:date>2004-01-19T10:44:01Z</dc:date>
    </item>
    <item>
      <title>Re: Socket queue/buffer introspection</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163163#M570026</link>
      <description>Indeed, that patch has been superseded. &lt;BR /&gt;&lt;BR /&gt;You also want to look for a "standalone" ndd patch PHNE_26125.  Plugging some of that into some magical mystery tools tells me that the set of patches you want to be "up to date" wrt the Transport would be:&lt;BR /&gt;&lt;BR /&gt;PHCO_23651  fsck_vxfs(1M) cumulative patch&lt;BR /&gt;PHCO_24437  LVM commands cumulative patch&lt;BR /&gt;PHCO_27375  cumulative SAM/ObAM patch&lt;BR /&gt;PHCO_29380  user/group(add/mod/del)(1M) cumulative patch&lt;BR /&gt;PHKL_18543  PM/VM/UFS/async/scsi/io/DMAPI/JFS/perf patch&lt;BR /&gt;PHKL_20016  2nd CPU not recognized in G70/H70/I70&lt;BR /&gt;PHKL_23409  NFS, Large Data Space, kernel memory leak&lt;BR /&gt;PHKL_28150  LVM Cumulative Patch w/Performance Upgrades&lt;BR /&gt;PHKL_28593  VxFS 3.1 cumulative patch: CR_EIEM&lt;BR /&gt;PHKL_29385  IDS/9000; syscalls; eventports; dup2() race&lt;BR /&gt;PHKL_29434  POSIX AIO;getdirentries;MVFS;rcp;mmap/IDS;&lt;BR /&gt;PHKL_29648  Probe,IDDS,PM,VM,PA-8700,AIO,T600,FS,PDC,CLK&lt;BR /&gt;PHNE_26125  ndd(1M) cumulative patch&lt;BR /&gt;PHNE_26771  cumulative ARPA Transport patch&lt;BR /&gt;PHNE_27902  Cumulative STREAMS Patch&lt;BR /&gt;&lt;BR /&gt;Also, STRMSGSIZE is a kernel tunable, not an error return value, so you would tweak that via SAM.&lt;BR /&gt;&lt;BR /&gt;As for the output of ndd -get /dev/tcp tcp_status, assuming I don't fumble finger anything, the "TCP" column is a reference to the internal TCP state location.  "dst" is the destination IP address.  "snxt" is the next sequence number that will be sent when data is provided by the application. "suna" is the lowest sequence number of data for which we've sent, but not received an ACKnowledgement. "swnd" I _think_ is how much more data TCP will accept from the local application before it exerts local flow control.  "cwnd" is the current size of the "congestion window."  "rnxt" is the sequence number we next expect to receive. I think "rack" is the last sequence number we have acked. "rto" is the current value of the retransmission timeout.  "mss" is the maximum segment size - the limit to the quantity of data we will send in any one TCP segment. "lport" is the local port number.  "fport" is the "foreign" or "remote" port number.  And finally, "state" is the TCP connection state.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 19 Jan 2004 13:35:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/socket-queue-buffer-introspection/m-p/3163163#M570026</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2004-01-19T13:35:07Z</dc:date>
    </item>
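    <!--
Editor's note: the field walkthrough above maps directly onto the dump posted earlier in the thread. A small Python sketch that splits one tcp_status line into named fields; the column order follows the 11.11 output shown above, the hex/decimal choices are inferred from that dump (port pairs like [c001,0] only make sense as hex), and the unacked arithmetic is ordinary 32-bit TCP sequence-space subtraction. Treat the parser as illustrative, not authoritative; the sample line is the TCP_ESTABLISHED entry from the dump with its whitespace compressed.

```python
# One line from the tcp_status dump posted earlier in the thread.
LINE = ("000000006139b8e8 127.000.000.001 11ee677a 11ee677a "
        "00008000 00002073 11ed7d3f 11ed7d3f 00008000 00500 04096 "
        "[13bb,d11f] TCP_ESTABLISHED")

FIELDS = ["tcp", "dst", "snxt", "suna", "swnd", "cwnd",
          "rnxt", "rack", "rwnd", "rto", "mss"]

def parse_tcp_status_line(line):
    parts = line.split()
    rec = dict(zip(FIELDS, parts))
    # snxt/suna/swnd/cwnd/rnxt/rack/rwnd are hex; rto and mss look decimal
    # in the dump (00536 matches the default MSS of 536).
    for key in ("snxt", "suna", "swnd", "cwnd", "rnxt", "rack", "rwnd"):
        rec[key] = int(rec[key], 16)
    rec["rto"], rec["mss"] = int(rec["rto"]), int(rec["mss"])
    # [lport,fport] are hex port numbers, per the [c001,0] style entries.
    lport, fport = parts[11].strip("[]").split(",")
    rec["lport"], rec["fport"] = int(lport, 16), int(fport, 16)
    rec["state"] = parts[12]
    # Bytes sent but not yet acknowledged, wrapped to 32-bit sequence space.
    rec["unacked"] = (rec["snxt"] - rec["suna"]) % (2 ** 32)
    return rec

rec = parse_tcp_status_line(LINE)
print(rec["state"], rec["lport"], rec["fport"], rec["unacked"])
```

Feeding every line of the dump through the parser and sorting by the unacked count would point at connections whose send queues are not draining.
    -->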
  </channel>
</rss>