<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Network Bottleneck in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041264#M574728</link>
    <description>The June 2002 stuff may be OK.  Put the CD in &amp;amp; do&lt;BR /&gt;&lt;BR /&gt;# swlist -s /cdrom&lt;BR /&gt;&lt;BR /&gt;This will list all the products regardless of whether they are locked, so you can check the version from there.  C.03.70 is not THAT recent (about Dec 2002) so you may get away with C.03.45 or so.&lt;BR /&gt;&lt;BR /&gt;I've upgraded a few systems &amp;amp; have never needed to save off the /var/opt/perf/datafiles/ stuff.  HOWEVER, as a belt &amp;amp; braces thing, I have done the following beforehand, just in case:&lt;BR /&gt;&lt;BR /&gt;/opt/perf/bin/extract -xt -gancd -f &lt;FILE.MWA&gt;&lt;BR /&gt;&lt;BR /&gt;The "gancd" stuff is global, application, network, config &amp;amp; disk, what I consider the most important classes, so if there are others you think are important, then put them on the end of the list.  The binary output &lt;FILE.MWA&gt; can then simply be used as a single binary source file, so to extract info do&lt;BR /&gt;&lt;BR /&gt;/opt/perf/bin/extract -xp -&lt;CLASSES&gt; -r&lt;REP-FILE&gt; -l &lt;FILE.MWA&gt; ....&lt;BR /&gt;&lt;BR /&gt;The version of extract can be any as long as it is the same as or newer than the one the original &lt;FILE.MWA&gt; was created with.&lt;BR /&gt;&lt;BR /&gt;The only other bit of advice I would give is: before installing the newer version of MWA, stop it &amp;amp; the ttd!&lt;BR /&gt;/opt/perf/bin/mwa stop&lt;BR /&gt;/opt/perf/bin/ttd -k&lt;BR /&gt;swinstall ....&lt;BR /&gt;/opt/perf/bin/mwa start&lt;BR /&gt;&lt;BR /&gt;Good luck&lt;BR /&gt;&lt;BR /&gt;Tim</description>
    <pubDate>Tue, 05 Aug 2003 10:01:17 GMT</pubDate>
    <dc:creator>Tim D Fulford</dc:creator>
    <dc:date>2003-08-05T10:01:17Z</dc:date>
    <item>
      <title>Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041252#M574716</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;I've got a cluster that is causing me some grief since the customer's DBAs/application developers changed their whole design.&lt;BR /&gt;&lt;BR /&gt;The daft design flaw, in my view, is that although the cluster consists of 3 nodes, each of them running packages/db instances, all client requests obviously have to make TCP connections to the "central" node/instance.&lt;BR /&gt;This is causing quite some network traffic on this node, with the effect that most of the TCP db connections are being blocked.&lt;BR /&gt;&lt;BR /&gt;The central server is an N class which by now has reached its final stage of HW upgradability.&lt;BR /&gt;Please see my attachment for details, where I collected some data.&lt;BR /&gt;&lt;BR /&gt;I'm using the MWA (aka OpenView Performance Agent) toolkit for monitoring services.&lt;BR /&gt;I mainly left the preset configuration (as in /opt/perf/newconfig/alarmdef and /opt/perf/newconfig/parm) unchanged, and only made minor modifications to the respective files in /var/opt/perf (see attachment).&lt;BR /&gt;&lt;BR /&gt;With these alarmdefs I get several network bottleneck alerts during the day (see utility sample output from yesterday in attachment).&lt;BR /&gt;The bottlenecks reported during the night (from 20:00 onwards) may be neglected here, as they are owed to backup traffic that wouldn't directly affect users, unlike the bottlenecks during working hours.&lt;BR /&gt;&lt;BR /&gt;My problem now is how to verify that the maximum bandwidth of the NIC/LAN really has been reached.&lt;BR /&gt;To this end I would rather go for the BYNETIF_{IN|OUT}_BYTE_RATE metrics than the BYNETIF_{IN|OUT}_PACKET_RATE, because I believe these would make it more conspicuous that the bandwidth limit has been reached, simply because of the Bytes/sec unit.&lt;BR /&gt;&lt;BR /&gt;But I couldn't extract, nor see in the PerfView tool, a way to get the BYTE_RATEs charted.&lt;BR /&gt;&lt;BR /&gt;(Btw. I feel that I don't need to monitor the ERROR_RATE or COLLISION_RATE, since the cumulative MIB stats of the NIC suggest that in this respect everything is in order (see attachment).)&lt;BR /&gt;&lt;BR /&gt;Because of my poor Ethernet/ARP/IP knowledge I have to ask you, the network experts, how to translate from PACKET_RATEs (in Hz) to BYTE_RATEs while I don't know how large the average packet is.&lt;BR /&gt;&lt;BR /&gt;So my rather naive (worst case) assumption would be the following product:&lt;BR /&gt;&lt;BR /&gt;BIT_RATE = PACKET_RATE * MTU * 8&lt;BR /&gt;&lt;BR /&gt;Taking the experienced peak PACKET_RATEs (which are abt. 7000 Hz) and the default MTU the NIC is set to (see attachment), the above naive formula would already result in some 84 MBit/s.&lt;BR /&gt;The NICs are 10/100 MBit/s quad Base-TX cards which per autonegotiation with the switch link partner operate at 100 MBit/s full duplex (see attachment).&lt;BR /&gt;&lt;BR /&gt;Thus a network bottleneck would sound reasonable.&lt;BR /&gt;&lt;BR /&gt;But on the other hand the packets could have an average size as small as 64 octets, so that the theoretical maximum bandwidth will never be reached, while the sheer packet handling through the stack layers may have already brought the btlan driver to its knees.&lt;BR /&gt;&lt;BR /&gt;Is the only safe way to find out the average packet size by packet sniffing and extracting the packets' frame sizes?&lt;BR /&gt;&lt;BR /&gt;How could I use nettl and netfmt to this end (I have never used these HP-UX tools, but instead open source sniffers that use libpcap's API)?&lt;BR /&gt;&lt;BR /&gt;If I really confirm a LAN bottleneck here, what would my options be to overcome it?&lt;BR /&gt;&lt;BR /&gt;Mind you, I have no influence on the application.&lt;BR /&gt;My first remedy of course would be to reduce network connections altogether by a change of the application's logic.&lt;BR /&gt;&lt;BR /&gt;Would it make sense to upgrade to GBit LAN?&lt;BR /&gt;I guess this would imply an upgrade of other components such as switches, routers etc., and thus be quite costly.&lt;BR /&gt;(It would also involve the willingness of the network admins.)&lt;BR /&gt;&lt;BR /&gt;Since the servers have quad NICs, most of whose ports are unused (of course one is standby for HA failover), are there ways to sort of distribute the load on several NICs?&lt;BR /&gt;&lt;BR /&gt;I think in this context I've heard the buzzword Auto Port Aggregation.&lt;BR /&gt;How costly a solution would this be with regard to an SG cluster?&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Ralph&lt;BR /&gt;</description>
      <pubDate>Tue, 05 Aug 2003 07:15:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041252#M574716</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2003-08-05T07:15:54Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041253#M574717</link>
      <description>Trying to monitor how busy the network connections are isn't easy. We use glance and perfview and still don't find it easy. The best way I've found to gauge network usage is lanadmin. It has display fields in octets, which are simply bytes - so write a script to monitor that to see how many MB/s is going through a particular interface.&lt;BR /&gt;&lt;BR /&gt;e.g. here is a script to do it:&lt;BR /&gt;&lt;BR /&gt;let y=$(lanadmin -g mibstats 0|grep -i oct|grep Inbound|awk '{print $4}')&lt;BR /&gt;let y2=$(lanadmin -g mibstats 0|grep -i oct|grep Outbound|awk '{print $4}')&lt;BR /&gt;while true&lt;BR /&gt;do&lt;BR /&gt;        sleep 1&lt;BR /&gt;        x=$(lanadmin -g mibstats 0|grep -i oct|grep Inbound|awk '{print $4}')&lt;BR /&gt;        x2=$(lanadmin -g mibstats 0|grep -i oct|grep Outbound|awk '{print $4}')&lt;BR /&gt;        let t=$x-$y&lt;BR /&gt;        let t2=$x2-$y2&lt;BR /&gt;        let y=$x&lt;BR /&gt;        let y2=$x2&lt;BR /&gt;        let t=$t/1000&lt;BR /&gt;        let t2=$t2/1000&lt;BR /&gt;        echo "${t} KB/s inbound, ${t2} KB/s outbound"&lt;BR /&gt;done&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;This reports like so for the default nmid (you may need to run lanadmin and change/set the default nmid if you want to monitor a different LAN card):&lt;BR /&gt;&lt;BR /&gt;5 KB/s inbound into pluto2, 35 KB/s outbound from pluto2&lt;BR /&gt;4 KB/s inbound into pluto2, 3 KB/s outbound from pluto2&lt;BR /&gt;8 KB/s inbound into pluto2, 9 KB/s outbound from pluto2&lt;BR /&gt;3 KB/s inbound into pluto2, 1 KB/s outbound from pluto2&lt;BR /&gt;18 KB/s inbound into pluto2, 15 KB/s outbound from pluto2&lt;BR /&gt;7 KB/s inbound into pluto2, 60 KB/s outbound from pluto2&lt;BR /&gt;&lt;BR /&gt;If this shows max throughput, then to fix it you only have two choices: Gigabit or Auto Port Aggregation. Seeing as everyone is using Gigabit nowadays I would go for that, but you may well find it easier and cheaper to install APA and add in another 100Mbit LAN card.&lt;BR /&gt;</description>
      <pubDate>Tue, 05 Aug 2003 07:26:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041253#M574717</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2003-08-05T07:26:36Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041254#M574718</link>
      <description>Stefan,&lt;BR /&gt;&lt;BR /&gt;thanks for your suggestion.&lt;BR /&gt;&lt;BR /&gt;I've also already thought about simply parsing the output of lanadmin's mibstats in a loop, subtracting the byte counts and dividing by the interval.&lt;BR /&gt;&lt;BR /&gt;But I guess there is a more direct way to get to those MIB stats through SNMP get requests on the correct OID.&lt;BR /&gt;I would think this is more efficient than doing an exec of lanadmin each interval.&lt;BR /&gt;I would like to use the Net::SNMP Perl module from CPAN to this end.&lt;BR /&gt;But I don't know the OIDs of the sought quantities.&lt;BR /&gt;(Yes, I should look them up in HP-UX's MIBs.)&lt;BR /&gt;&lt;BR /&gt;Does any SNMP guru know, or has anyone pulled them through Net::SNMP?</description>
      <pubDate>Tue, 05 Aug 2003 07:48:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041254#M574718</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2003-08-05T07:48:46Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041255#M574719</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;IP packet size differs depending on the application you are using.&lt;BR /&gt;&lt;BR /&gt;To analyse the bandwidth usage of your NICs I would suggest you use the MRTG traffic grapher tool, which is freely available on the Internet.&lt;BR /&gt;&lt;BR /&gt;You can also use a good MIB browser with the HP-UX MIB to analyse the interface statistics instantly.&lt;BR /&gt;&lt;BR /&gt;If you have a lot of clients accessing the same HP-UX server, then the total available bandwidth for each client will be:&lt;BR /&gt;&lt;BR /&gt;bandwidth available for client = total available bandwidth for server / number of clients&lt;BR /&gt;&lt;BR /&gt;Since it is 100 Mbit/s full duplex, we can say&lt;BR /&gt;&lt;BR /&gt;available bandwidth = 200 Mbit/s / number of clients&lt;BR /&gt;&lt;BR /&gt;If the result is too small to meet your application's requirement, consider upgrading the HP-UX server's Fast Ethernet NIC to Gigabit Ethernet.&lt;BR /&gt;&lt;BR /&gt;regards,&lt;BR /&gt;&lt;BR /&gt;U.SivaKumar</description>
      <pubDate>Tue, 05 Aug 2003 08:14:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041255#M574719</guid>
      <dc:creator>U.SivaKumar_2</dc:creator>
      <dc:date>2003-08-05T08:14:42Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041256#M574720</link>
      <description>Ralph&lt;BR /&gt;&lt;BR /&gt;0 - netperf will give you the PERFORMANCE data you need.  &lt;A href="http://hpux.connect.org.uk/" target="_blank"&gt;http://hpux.connect.org.uk/&lt;/A&gt; &amp;amp; look for netperf.&lt;BR /&gt;&lt;BR /&gt;1 - Average packet size is probably much less than the MTU.  You can show it with the attached bit of Perl or use lanadmin repeatedly.  Alternatively, MeasureWare (OVPA) version C.03.70 has both packet rate &amp;amp; kB/s, PER LAN INTERFACE (BYNETIF in the /var/opt/perf/reptall file).&lt;BR /&gt;&lt;BR /&gt;2 - For OLTP-type data transfer you will probably find that the packet size will be small (~100 bytes), which IMPLIES (it may not be the case) that the network card will be very inefficient; e.g. you may be getting say 10% of the bandwidth available, i.e. a 100Mbit/s card will only be able to do 10Mbit/s.  The problem in this case is that it is tempting to say "hey, let's use gigabit cards"; this will not help (much), as the cards will probably have similar throughput (IO/s) limits because the CPU is the limiting factor.  So buying faster CPUs would help!  Remember this ONLY applies if the system is throughput limited (IO/s) as opposed to bandwidth limited (kB/s, or a reasonable fraction of network card speed).&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Tue, 05 Aug 2003 08:29:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041256#M574720</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2003-08-05T08:29:38Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041257#M574721</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;Looking at some notes (&amp;amp; please don't sue if you take the info on board &amp;amp; it turns out not to be applicable).  The results are REALLY DEPENDENT ON YOUR SYSTEM &amp;amp; LAN CARDS...&lt;BR /&gt;&lt;BR /&gt;An N4000 with 8x550MHz CPUs will do about 20,000 IO/s (or 20 kHz) using netperf, crossover cable on 100Base-T (full duplex).  The point where the system goes from throughput limited to bandwidth limited is about 100-150 bytes/packet.&lt;BR /&gt;&lt;BR /&gt;I got the above using netperf; I suggest you do the same using the relevant network equipment, as this will have some impact on the results.&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Tue, 05 Aug 2003 08:37:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041257#M574721</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2003-08-05T08:37:20Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041258#M574722</link>
      <description>Tim,&lt;BR /&gt;&lt;BR /&gt;your reply has been most helpful.&lt;BR /&gt;&lt;BR /&gt;I was already wondering why on earth I could not find the BYNETIF_*_BYTE_RATEs.&lt;BR /&gt;Looks like my documentation is newer than the Glance Pak bundle I use:&lt;BR /&gt;&lt;BR /&gt;# swlist B3701AA|grep -v ^#&lt;BR /&gt;&lt;BR /&gt;  B3701AA.MeasurementInt        C.02.40.000    HP-UX Measurement Interface for 11.0 &lt;BR /&gt;  B3701AA.OVOPC-AGT             A.04.17        IT/Operations Agent &lt;BR /&gt;  B3701AA.OVOPC-SE-DOC          A.04.17        IT/Operations Special Edition Documentation &lt;BR /&gt;  B3701AA.OVOPC-SE-GUI          A.04.17        IT/Operations Special Edition Java UI &lt;BR /&gt;  B3701AA.OVOPC-SE              A.04.17        IT/Operations Special Edition Product &lt;BR /&gt;  B3701AA.Glance                C.02.40.000    HP GlancePlus/UX &lt;BR /&gt;  B3701AA.MeasureWare           C.02.40.000    MeasureWare Software/UX &lt;BR /&gt;&lt;BR /&gt;Since I have C.02.40, no wonder I only get PACKET_RATEs displayed.&lt;BR /&gt;&lt;BR /&gt;Also thanks for your script.&lt;BR /&gt;I was about to do exactly the same in Perl.&lt;BR /&gt;This will save me the hacking.&lt;BR /&gt;But I would want to try using the Net::SNMP module from CPAN and query the MIB more directly.&lt;BR /&gt;Unfortunately my knowledge of MIBs, OIDs, BER, and ASN.1 is almost nonexistent.&lt;BR /&gt;I hope I can weed through the MIBs to find the proper OIDs.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The netperf utility sounds promising, though I haven't looked up the URL yet.&lt;BR /&gt;I will give it a try on a test system first.&lt;BR /&gt;</description>
      <pubDate>Tue, 05 Aug 2003 08:56:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041258#M574722</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2003-08-05T08:56:49Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041259#M574723</link>
      <description>As you've got MeasureWare C.02.40 on your system, why not upgrade to the latest? I believe you can under the licensing agreement; you just need to get in touch with HP for the codewords etc. ... or install the trial one (I'd only do this out of desperation).&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Tue, 05 Aug 2003 09:16:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041259#M574723</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2003-08-05T09:16:11Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041260#M574724</link>
      <description>U.SivaKumar,&lt;BR /&gt;&lt;BR /&gt;thanks for giving the per-client bandwidth hint.&lt;BR /&gt;Since most of the connections are TCP to the DB instances, I can look up the process table and filter for LOCAL=NO args of the cmd line to get the connects per instance.&lt;BR /&gt;Unfortunately I don't know the view or table of the data dictionary to query the instances via SQL (OK, this would add to the TCP connections ;-).&lt;BR /&gt;Do you happen to know where to look it up from within Oracle?&lt;BR /&gt;I probably could also parse "netstat -an -f inet" output by the local port numbers on the DB server.&lt;BR /&gt;&lt;BR /&gt;I do know MRTG, which I think is also a collection of Perl scripts/modules.&lt;BR /&gt;Thus I think from looking at its sources I could find out how they query the MIBs.&lt;BR /&gt;I bet they use another Perl module like Net::SNMP to this end.&lt;BR /&gt;</description>
      <pubDate>Tue, 05 Aug 2003 09:17:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041260#M574724</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2003-08-05T09:17:50Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041261#M574725</link>
      <description>Tim,&lt;BR /&gt;&lt;BR /&gt;you are right, I should upgrade MWA since we have a valid license anyway.&lt;BR /&gt;I found the codeword generation for the application CDs, which we get on a regular basis, really a pain in the a*.&lt;BR /&gt;I never ever succeeded in receiving a codeword from the HP webserver, but always got timed out or webserver not responding.&lt;BR /&gt;That's why we kind of stuck to the by now dated version of the MWA.&lt;BR /&gt;I will have a look at the latest application CD I received by mail (which came with no codewords).&lt;BR /&gt;</description>
      <pubDate>Tue, 05 Aug 2003 09:26:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041261#M574725</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2003-08-05T09:26:35Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041262#M574726</link>
      <description>Tim,&lt;BR /&gt;&lt;BR /&gt;it looks as if lately we've been supplied only with recent HP-UX 11i application CDs.&lt;BR /&gt;&lt;BR /&gt;The latest one for 11.00 I could find is from June 2002.&lt;BR /&gt;&lt;BR /&gt;If the Glance Pak from there isn't more recent, could I also install the depot from the 11i CD?&lt;BR /&gt;&lt;BR /&gt;Will I have to roll or save the /var/opt/perf/datafiles/* before doing the swremove of the old pack?&lt;BR /&gt;&lt;BR /&gt;Usually swremove leaves config files untouched.</description>
      <pubDate>Tue, 05 Aug 2003 09:37:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041262#M574726</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2003-08-05T09:37:21Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041263#M574727</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I am interested in knowing the packet round trip time (ping) at peak load time and during light load time.&lt;BR /&gt;&lt;BR /&gt;You can list the number of actively data-transferring sessions with this command:&lt;BR /&gt;&lt;BR /&gt;# netstat -an | grep ESTABLISHED | wc -l&lt;BR /&gt;&lt;BR /&gt;I hope you will be interested in this tool for benchmarking the network performance of your server at light and peak loads.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://dast.nlanr.net/Projects/Iperf/" target="_blank"&gt;http://dast.nlanr.net/Projects/Iperf/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;regards,&lt;BR /&gt;&lt;BR /&gt;U.SivaKumar&lt;BR /&gt;</description>
      <pubDate>Tue, 05 Aug 2003 09:41:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041263#M574727</guid>
      <dc:creator>U.SivaKumar_2</dc:creator>
      <dc:date>2003-08-05T09:41:44Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041264#M574728</link>
      <description>The June 2002 stuff may be OK.  Put the CD in &amp;amp; do&lt;BR /&gt;&lt;BR /&gt;# swlist -s /cdrom&lt;BR /&gt;&lt;BR /&gt;This will list all the products regardless of whether they are locked, so you can check the version from there.  C.03.70 is not THAT recent (about Dec 2002) so you may get away with C.03.45 or so.&lt;BR /&gt;&lt;BR /&gt;I've upgraded a few systems &amp;amp; have never needed to save off the /var/opt/perf/datafiles/ stuff.  HOWEVER, as a belt &amp;amp; braces thing, I have done the following beforehand, just in case:&lt;BR /&gt;&lt;BR /&gt;/opt/perf/bin/extract -xt -gancd -f &lt;FILE.MWA&gt;&lt;BR /&gt;&lt;BR /&gt;The "gancd" stuff is global, application, network, config &amp;amp; disk, what I consider the most important classes, so if there are others you think are important, then put them on the end of the list.  The binary output &lt;FILE.MWA&gt; can then simply be used as a single binary source file, so to extract info do&lt;BR /&gt;&lt;BR /&gt;/opt/perf/bin/extract -xp -&lt;CLASSES&gt; -r&lt;REP-FILE&gt; -l &lt;FILE.MWA&gt; ....&lt;BR /&gt;&lt;BR /&gt;The version of extract can be any as long as it is the same as or newer than the one the original &lt;FILE.MWA&gt; was created with.&lt;BR /&gt;&lt;BR /&gt;The only other bit of advice I would give is: before installing the newer version of MWA, stop it &amp;amp; the ttd!&lt;BR /&gt;/opt/perf/bin/mwa stop&lt;BR /&gt;/opt/perf/bin/ttd -k&lt;BR /&gt;swinstall ....&lt;BR /&gt;/opt/perf/bin/mwa start&lt;BR /&gt;&lt;BR /&gt;Good luck&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Tue, 05 Aug 2003 10:01:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041264#M574728</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2003-08-05T10:01:17Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041265#M574729</link>
      <description>Siva,&lt;BR /&gt;&lt;BR /&gt;thanks for the URL&lt;BR /&gt;&lt;BR /&gt;Tim,&lt;BR /&gt;&lt;BR /&gt;thanks for outlining the MWA upgrade procedure.&lt;BR /&gt;&lt;BR /&gt;Cheers&lt;BR /&gt;&lt;BR /&gt;Ralph</description>
      <pubDate>Tue, 05 Aug 2003 12:16:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041265#M574729</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2003-08-05T12:16:55Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041266#M574730</link>
      <description>Historically, the alarms in glance et al have been a bit low.&lt;BR /&gt;&lt;BR /&gt;Checking the byte rates is the way to go.&lt;BR /&gt;&lt;BR /&gt;There are some other things:&lt;BR /&gt;&lt;BR /&gt;1) If the NIC is consistently loaded, the outbound queue length stat in lanadmin (also tracked by glance et al these days IIRC) will be non-zero and stay there.&lt;BR /&gt;&lt;BR /&gt;2) If the NIC is really overloaded, there will be outbound discards in the lanadmin et al stats, there will also be TCP retransmissions recorded in netstat -p tcp output, and you may also see inbound out-of-order packets.&lt;BR /&gt;&lt;BR /&gt;3) I'm a trifle surprised that an 8x550 N maxed out on netperf TCP_RR at 20,000 packets per second (presumably 10,000 transactions per second reported by netperf). Did one specific CPU peg at 100% during that test?  The request/response size was 1 byte, yes?&lt;BR /&gt;&lt;BR /&gt;4) APA is indeed the way to go active/active on the unused ports on the quad cards.  Going to different switches in the same trunk means active/standby though.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;5) Other possibly useful things:&lt;BR /&gt;&lt;BR /&gt;ftp://dist/networking/briefs/ &lt;BR /&gt;   annotated_netstat.txt&lt;BR /&gt;   sane_glance.txt (this is a bit dated)&lt;BR /&gt;&lt;BR /&gt;ftp://dist/networking/tools/&lt;BR /&gt;    connhist&lt;BR /&gt;    beforeafter&lt;BR /&gt;&lt;BR /&gt;6) Netperf can be used to measure the limits of a system/network/NIC, but will not tell you if a given system/network/NIC is overloaded in its day-to-day stuff - it is a benchmark, not a monitor.</description>
      <pubDate>Wed, 06 Aug 2003 19:55:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041266#M574730</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2003-08-06T19:55:57Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041267#M574731</link>
      <description>Rick&lt;BR /&gt;&lt;BR /&gt;Regarding the figure of 20,000 packets: I did not do a TCP_RR. What I did (&amp;amp; it is fuzzy in my mind, some time ago) was to send packets of varying length &amp;amp; measure the bandwidth (as per the tcp_stream_script with netperf). You can plot a graph of pkt_size vs pkt/s &amp;amp; kbit/s.&lt;BR /&gt;&lt;BR /&gt;When pkt/s peaks, we used this as the MAX throughput figure.&lt;BR /&gt;&lt;BR /&gt;Now my notes are super fuzzy (OK, they are files in the bin) on this. I forget how exactly it was done, but I think I used the following:&lt;BR /&gt;&lt;BR /&gt;use ndd to set tcp_naglim_def=1 (get rid of Nagle)&lt;BR /&gt;netperf -H &lt;REM_HOST&gt; -s X -S X -m X -M X -l 60&lt;BR /&gt;&lt;BR /&gt;So this sets the in/out sockets to X bytes &amp;amp; we send &amp;amp; receive a message of X bytes &amp;amp; run for 60 seconds.  From what you said I infer I should have done&lt;BR /&gt;&lt;BR /&gt;netperf -H &lt;REM_HOST&gt; -s X -S 1 -m X -M 1 -l 60&lt;BR /&gt;&lt;BR /&gt;which may mean my results are lower than they should be.&lt;BR /&gt;&lt;BR /&gt;The reason I did not use TCP_RR is I did not understand it (enough to be able to take the results &amp;amp; do calculations, draw up conclusions etc). Again from what you said, I should be using&lt;BR /&gt;&lt;BR /&gt;netperf -H &lt;REM_HOST&gt; -t TCP_RR&lt;BR /&gt;&lt;BR /&gt;Anyway, that is the whole truth and nothing but the truth as I remember it, guv...&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Thu, 07 Aug 2003 09:47:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041267#M574731</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2003-08-07T09:47:09Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041268#M574732</link>
      <description>I guess now that people are starting to use it I will have to add tcp_naglim_def to my annotated_ndd.txt file... that was clever, and perhaps one of the only valid uses for it :)  I probably would have just set -D as a test-specific option and had netperf set TCP_NODELAY :)&lt;BR /&gt;&lt;BR /&gt;I didn't mean setting the socket buffer to one byte.  A "single-byte netperf TCP_RR test" means something like:&lt;BR /&gt;&lt;BR /&gt;$ netperf -t TCP_RR -H &lt;HOST&gt; -- -r 1&lt;BR /&gt;&lt;BR /&gt;When you talk about packets per second, are you calculating that from the reported throughput? I just recently discovered (running an undocumented variant of the TCP_RR test) a behaviour of the HP-UX 11 TCP stack that likely applies to what you are doing - the HP-UX 11 TCP stack will issue an immediate ACK whenever it receives a second sub-MSS segment in a row.  This means that the "normal" ACK avoidance algorithms in HP-UX 11 TCP are effectively shut off, and any attempt to calculate packets per second from the netperf output alone will be off by something like 50%.</description>
      <pubDate>Thu, 07 Aug 2003 16:20:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041268#M574732</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2003-08-07T16:20:50Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041269#M574733</link>
      <description>Hi Rick,&lt;BR /&gt;&lt;BR /&gt;in case you have received and found time to read my email, would you please be so kind as to continue this thread?&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Ralph</description>
      <pubDate>Fri, 08 Aug 2003 07:52:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041269#M574733</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2003-08-08T07:52:53Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041270#M574734</link>
      <description>Ralph, sorry, Rick &amp;amp; I are having a nice little discussion in the middle of your thread!!  I hope some of it is useful.&lt;BR /&gt;&lt;BR /&gt;Anyhow, Rick, I got the packets per second from a pre &amp;amp; post lanadmin; is this effect in there?&lt;BR /&gt;&lt;BR /&gt;Back to Ralph's original question: how many packets per second would you guesstimate an 8x550MHz N4000 using 100Base-T to do?  I think that is what Ralph is using!  And none of the "well, it depends", "how long is a piece of string", "what is the weather like on Mars" stuff.  I think we can take all the caveats as read &amp;amp; hopefully no lawsuits will follow!!&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;&lt;BR /&gt;Tim</description>
      <pubDate>Fri, 08 Aug 2003 08:51:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041270#M574734</guid>
      <dc:creator>Tim D Fulford</dc:creator>
      <dc:date>2003-08-08T08:51:09Z</dc:date>
    </item>
    <item>
      <title>Re: Network Bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041271#M574735</link>
      <description>Tim,&lt;BR /&gt;&lt;BR /&gt;I meanwhile contacted Rick by mail, attaching some of the dumps from the netperf tests I ran after having compiled and installed the tool.&lt;BR /&gt;I've been so insolent because I believed Rick to be the programmer of netperf (or one of the team, if it was a joint effort).&lt;BR /&gt;&lt;BR /&gt;Because of my network illiteracy I need some help with the assessment of these results.&lt;BR /&gt;For instance, I only used the standard INET socket buffer sizes, and I got warnings from netperf that my chosen confidence interval could not be reached within the specified number of iterations.&lt;BR /&gt;Btw. the same warning appeared when I ran the tcp_stream script that iterates over several buffer sizes.&lt;BR /&gt;</description>
      <pubDate>Fri, 08 Aug 2003 09:22:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/network-bottleneck/m-p/3041271#M574735</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2003-08-08T09:22:12Z</dc:date>
    </item>
  </channel>
</rss>

