<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Network Adapter &quot;NIC&quot; Bottleneck?? in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755294#M51845</link>
    <description>Jorge,&lt;BR /&gt;&lt;BR /&gt;yes, your EWA0 is down and has never been up after boot. Is there a cable connected at all?&lt;BR /&gt;&lt;BR /&gt;T4 is probably the best tool to start with for collecting data. It can collect data on your LAN interfaces, which - as far as I remember - Polycenter can't. It's also very easy to set up, and it's free!&lt;BR /&gt;&lt;BR /&gt;Start collecting data NOW, then have a look at the data and compare 'good' days and 'bad' days.&lt;BR /&gt;&lt;BR /&gt;Note that performance analysis can be a complex job and may need lots of data, questions and answers exchanged, which is not always possible in a forum like this.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
    <pubDate>Tue, 21 Mar 2006 03:47:13 GMT</pubDate>
    <dc:creator>Volker Halle</dc:creator>
    <dc:date>2006-03-21T03:47:13Z</dc:date>
    <item>
      <title>Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755289#M51840</link>
      <description>Is there a way to find out if your network adapter is your bottleneck?  I am running OpenVMS 7.3-2, TCPIP ECO 2 - connected through a Gigabit HP 2800-series switch.  I can see that the server detected the adapter as 1000Mbps.  On some days our system just flies through all the updates, and on other days it runs a few hours longer.  Btw, these are queries running against an Oracle 10g DB.&lt;BR /&gt;&lt;BR /&gt;I also have Polycenter running, from which I can pull some stats as well.&lt;BR /&gt;&lt;BR /&gt;But please, direct me to where and what I can do to measure the suspected or potential network bottleneck.&lt;BR /&gt;&lt;BR /&gt;Thank you in advance.&lt;BR /&gt;&lt;BR /&gt;J</description>
      <pubDate>Mon, 20 Mar 2006 22:19:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755289#M51840</guid>
      <dc:creator>Jorge Cocomess</dc:creator>
      <dc:date>2006-03-20T22:19:40Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755290#M51841</link>
      <description>Hi Jorge,&lt;BR /&gt;&lt;BR /&gt;I do not think that it is related to a bottleneck at the NIC.&lt;BR /&gt;&lt;BR /&gt;Have you installed the patch VMS732_LAN-V0300? If yes, you can use LANCP&amp;gt; show device/int to check if there has been any duplex mismatch.&lt;BR /&gt;&lt;BR /&gt;Thanks and regards.&lt;BR /&gt;&lt;BR /&gt;Michael&lt;BR /&gt;</description>
      <pubDate>Mon, 20 Mar 2006 23:04:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755290#M51841</guid>
      <dc:creator>Michael Yu_3</dc:creator>
      <dc:date>2006-03-20T23:04:03Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755291#M51842</link>
      <description>Agreed - unlikely to be a network bottleneck @ 1 Gb/s.&lt;BR /&gt;&lt;BR /&gt;Maybe you can share measured MB/sec and pkts/sec in busy windows, to confirm?&lt;BR /&gt;&lt;BR /&gt;I would recommend T4 (google: +t4 +site:hp.com) to measure and graph system usage details in slow vs normal time windows.&lt;BR /&gt;It can give nice timelines mapping network as well as CPU and other data. And it can correlate those numbers.&lt;BR /&gt;&lt;BR /&gt;Are you coming to the bootcamp? (again, google) T4 and many other performance topics will be discussed.&lt;BR /&gt;&lt;BR /&gt;If the main load is Oracle, then I would recommend also adding statspack reports.&lt;BR /&gt;Those will tell you how much 'sql*net' activity there is, how much waiting for client data and more data, or whether the execution itself was slow.&lt;BR /&gt;&lt;BR /&gt;Hope this helps,&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;[Email me if you think I can help beyond the scope of this Forum.]&lt;BR /&gt;</description>
      <pubDate>Mon, 20 Mar 2006 23:27:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755291#M51842</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2006-03-20T23:27:35Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755292#M51843</link>
      <description>Gentlemen,&lt;BR /&gt;&lt;BR /&gt;Per Michael's request, here is the status of these adapters. Looks like my EWA0 is down - btw, EWA0 is a 10/100 copper NIC. &lt;BR /&gt;&lt;BR /&gt;Device Internal Counters EWA0:&lt;BR /&gt;                  Value  Counter&lt;BR /&gt;                  -----  -------&lt;BR /&gt;                         --- Internal Driver Counters ---&lt;BR /&gt;                    111  Driver version (X-n)&lt;BR /&gt;               00000001  Driver flags &amp;lt;MAP_ALL&amp;gt;&lt;BR /&gt;               00001100  Device type &amp;lt;DC21143&amp;gt;&lt;BR /&gt;               LinkDown  Link status&lt;BR /&gt;                      2  Tulip reset count&lt;BR /&gt;                      5  Setup buffers issued&lt;BR /&gt;                      4  CSR6 changes&lt;BR /&gt;                     39  Transmit multiple addresses&lt;BR /&gt;                 309504  Device interrupts&lt;BR /&gt;                      1  Inits (not using map registers)&lt;BR /&gt;                   1458  Transmits issued (using map registers)&lt;BR /&gt;                    Off  Auto-negotiation state&lt;BR /&gt;                    Off  Autosense state&lt;BR /&gt;             0:00:03.00  Transmit time limit&lt;BR /&gt;             0:00:01.00  Timer routine interval&lt;BR /&gt;               F0660004  Most recent CSR5 contents &amp;lt;31, 30, 29, TS2, TS1, RS1,&lt;BR /&gt;                         RS0, TU&amp;gt;&lt;BR /&gt;               02ACE002  Most recent CSR6 contents &amp;lt;MBO,&lt;BR /&gt;                         TR0, ST, SR&amp;gt;&lt;BR /&gt;               000000C6  Most recent CSR12 error contents &amp;lt;7, 6, LS10, LS100&amp;gt;&lt;BR /&gt;               00050008  Most recent CSR15 contents&lt;BR /&gt;              230998751  Current time (EXE$GL_ABSTIM_TICS)&lt;BR /&gt;                         --- Driver Messages ---&lt;BR /&gt;22-FEB-2006 05:29:55.92  FastFD mode set by console&lt;BR /&gt; &lt;BR /&gt;Device Internal Counters EWB0:&lt;BR /&gt;                  Value  
Counter&lt;BR /&gt;                  -----  -------&lt;BR /&gt;                         --- Internal Driver Counters ---&lt;BR /&gt;             "DEGXA-TB"  Device name&lt;BR /&gt; "Feb  9 2005 13:25:57"  Driver timestamp&lt;BR /&gt;                     39  Driver version (X-n)&lt;BR /&gt;               11000000  Device revision (Broadcom 5701,5703,5704 chip)&lt;BR /&gt;             -740748270  Device interrupts&lt;BR /&gt;                      9  Link transitions&lt;BR /&gt;                     15  Link transitions avoided&lt;BR /&gt;                    214  Status block link state changes&lt;BR /&gt;                2308303  Link checks&lt;BR /&gt;                      1  Device resets&lt;BR /&gt;                      1  Device initializations&lt;BR /&gt;                     10  User start/change/stop requests&lt;BR /&gt;                     58  Transmits queued&lt;BR /&gt;              732355745  Receives issued (using map registers)&lt;BR /&gt;                     18  Rescheduled forks (too long in fork)&lt;BR /&gt;                    320  Standard receive buffers&lt;BR /&gt;                      8  Jumbo receive buffers (current)&lt;BR /&gt;                      8  Jumbo receive buffers (minimum)&lt;BR /&gt;                      8  Jumbo receive buffer allocations&lt;BR /&gt;                   2158  Standard buffer size (bytes)&lt;BR /&gt;                   1518  Standard packet size (bytes)(device standard ring)&lt;BR /&gt;                   9658  Jumbo buffer size (bytes)&lt;BR /&gt;                   9018  Jumbo packet size (bytes)(device jumbo ring)&lt;BR /&gt;               000002A4  Requested link state &amp;lt;FLOWCONTROL,&lt;BR /&gt;                         Auto-negotiation&amp;gt;&lt;BR /&gt;               000000A5  Current link state &amp;lt;FDX,&lt;BR /&gt;                         Link up&amp;gt;&lt;BR /&gt;               00008090  Driver flags &amp;lt;DEVICE_LINK_HANDLING&amp;gt;&lt;BR /&gt;               00000008  Driver state &amp;lt;RUNUP&amp;gt;&lt;BR /&gt;                     64  DMA width (bits)&lt;BR /&gt;                     66  BUS speed (mhz)&lt;BR /&gt;                    PCI  BUS type&lt;BR /&gt;               00000086  MSI control (Alloc&amp;lt;6:4&amp;gt;,Req&amp;lt;3:1&amp;gt;,Enable&amp;lt;0&amp;gt;)&lt;BR /&gt;                     16  Transmit coalesce value&lt;BR /&gt;                     16  Receive coalesce value&lt;BR /&gt;                     50  Transmit interrupt delay (usec)&lt;BR /&gt;                     10  Receive interrupt delay (usec)&lt;BR /&gt;                   5888  Map registers allocated&lt;BR /&gt;             0:00:03.00  Transmit time limit&lt;BR /&gt;             0:00:01.00  Timer routine interval&lt;BR /&gt;                         --- Registers (wrote/read) ---&lt;BR /&gt;      110002B8 110000BA  Misc Host Control&lt;BR /&gt;      00E04F08 00E04F08  MAC Mode&lt;BR /&gt;      0F40041C 00000003  MAC Status&lt;BR /&gt;      00000001 00000001  MI Status&lt;BR /&gt;      00001002 00001002  RX Mode&lt;BR /&gt;      FFFFFFFF 00000008  TX Status&lt;BR /&gt;                         --- Time Stamps ---&lt;BR /&gt;           641:39:47.52  Current uptime&lt;BR /&gt;             0:00:46.97  Last reset&lt;BR /&gt;           420:33:30.15  Last link up&lt;BR /&gt;           420:31:30.45  Last link down&lt;BR /&gt;           641:36:44.40  Total link uptime&lt;BR /&gt;             0:03:03.12  Total link downtime&lt;BR /&gt;                         --- Driver Auto-Negotiation Context (fiber) ---&lt;BR /&gt;            Not_Autoneg  Current state&lt;BR /&gt;                         --- Status Block ---&lt;BR /&gt;               00000017  Status tag value&lt;BR /&gt;               00000001  Status &amp;lt;UPDATED&amp;gt;&lt;BR /&gt;                     93  Receive Standard Consumer index&lt;BR /&gt;                    605  Receive Ring 0 Producer index&lt;BR /&gt;                    130  Send Ring 0 Consumer index&lt;BR /&gt;                         --- Statistics Block ---&lt;BR /&gt;                    
     ----- Statistics - Receive MAC ---&lt;BR /&gt;          3570002538716  Bytes received&lt;BR /&gt;             9282758603  Unicast packets received&lt;BR /&gt;                 401052  Multicast packets received&lt;BR /&gt;               16626677  Broadcast packets received&lt;BR /&gt;              103115611  Packets (64 bytes)&lt;BR /&gt;              199371071  Packets (65-127 bytes)&lt;BR /&gt;             2538687096  Packets (128-255 bytes)&lt;BR /&gt;             4202376048  Packets (256-511 bytes)&lt;BR /&gt;             2025774969  Packets (512-1023 bytes)&lt;BR /&gt;              230461540  Packets (1024-1522 bytes)&lt;BR /&gt;                         ----- Statistics - Transmit MAC ---&lt;BR /&gt;          1065773633315  Bytes sent&lt;BR /&gt;             9104928700  Unicast packets sent&lt;BR /&gt;                 376088  Multicast packets sent&lt;BR /&gt;                   7849  Broadcast packets sent&lt;BR /&gt;                         ----- Statistics - Receive List Placement State Machine ---&lt;BR /&gt;             9299785655  Frames received onto return ring 1&lt;BR /&gt;                   9339  DMA write queue full&lt;BR /&gt;                    841  Inbound discards&lt;BR /&gt;                   1288  Receive threshold hit&lt;BR /&gt;                         ----- Statistics - Send Data Initiator State Machine ---&lt;BR /&gt;             9105312637  Frames sent from send ring 1&lt;BR /&gt;                 160657  DMA Read Queue full&lt;BR /&gt;                         ----- Statistics - Host Coalescing State Machine ---&lt;BR /&gt;             9181025330  Send producer index updates&lt;BR /&gt;            16439070607  Ring status updates&lt;BR /&gt;            16438747125  Interrupts generated&lt;BR /&gt;                 323482  Interrupts avoided&lt;BR /&gt;                1892456  Send threshold hit&lt;BR /&gt;                         --- Driver Messages ---&lt;BR /&gt;11-MAR-2006 18:04:17.66  Link up: 1000 mbit, full 
duplex, flow control disabled&lt;BR /&gt;11-MAR-2006 18:02:21.46  Link down&lt;BR /&gt; 6-MAR-2006 21:35:33.36  Link up: 1000 mbit, full duplex, flow control disabled&lt;BR /&gt; 6-MAR-2006 21:35:32.32  Link down&lt;BR /&gt; 6-MAR-2006 21:29:48.74  Link up: 1000 mbit, full duplex, flow control disabled&lt;BR /&gt; 6-MAR-2006 21:29:46.02  Link down&lt;BR /&gt;22-FEB-2006 05:30:02.93  Link up: 1000 mbit, full duplex, flow control disabled&lt;BR /&gt;22-FEB-2006 05:29:59.78  Device type is BCM5703C (UTP) Rev A0 (11000000)&lt;BR /&gt;22-FEB-2006 05:29:59.77  DEGXA-TB located in 64-bit, 66-mhz PCI slot&lt;BR /&gt;22-FEB-2006 05:29:59.77  Auto-negotiation mode set by console (EGA0_MODE)&lt;BR /&gt;LANCP&amp;gt;&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;J&lt;BR /&gt;</description>
      <pubDate>Tue, 21 Mar 2006 00:16:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755292#M51843</guid>
      <dc:creator>Jorge Cocomess</dc:creator>
      <dc:date>2006-03-21T00:16:23Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755293#M51844</link>
      <description>Hi Jorge,&lt;BR /&gt;&lt;BR /&gt;Can we also see the counters?&lt;BR /&gt;&lt;BR /&gt;LANCP&amp;gt; show dev/count&lt;BR /&gt;&lt;BR /&gt;Thanks and regards.&lt;BR /&gt;&lt;BR /&gt;Michael&lt;BR /&gt;</description>
      <pubDate>Tue, 21 Mar 2006 03:14:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755293#M51844</guid>
      <dc:creator>Michael Yu_3</dc:creator>
      <dc:date>2006-03-21T03:14:14Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755294#M51845</link>
      <description>Jorge,&lt;BR /&gt;&lt;BR /&gt;yes, your EWA0 is down and has never been up after boot. Is there a cable connected at all?&lt;BR /&gt;&lt;BR /&gt;T4 is probably the best tool to start with for collecting data. It can collect data on your LAN interfaces, which - as far as I remember - Polycenter can't. It's also very easy to set up, and it's free!&lt;BR /&gt;&lt;BR /&gt;Start collecting data NOW, then have a look at the data and compare 'good' days and 'bad' days.&lt;BR /&gt;&lt;BR /&gt;Note that performance analysis can be a complex job and may need lots of data, questions and answers exchanged, which is not always possible in a forum like this.&lt;BR /&gt;&lt;BR /&gt;Volker.</description>
      <pubDate>Tue, 21 Mar 2006 03:47:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755294#M51845</guid>
      <dc:creator>Volker Halle</dc:creator>
      <dc:date>2006-03-21T03:47:13Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755295#M51846</link>
      <description>T4 is in&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/openvms/products/t4/" target="_blank"&gt;http://h71000.www7.hp.com/openvms/products/t4/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;or in SYS$ETC on newer VMS.</description>
      <pubDate>Tue, 21 Mar 2006 04:47:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755295#M51846</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2006-03-21T04:47:14Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755296#M51847</link>
      <description>Good day, gentlemen!&lt;BR /&gt;&lt;BR /&gt;Here's the stats after I ran the command sho dev/count on my Alpha server;&lt;BR /&gt;&lt;BR /&gt;LANCP&amp;gt; sho dev/count&lt;BR /&gt; &lt;BR /&gt;Device Counters EWA0:&lt;BR /&gt;                  Value  Counter&lt;BR /&gt;                  -----  -------&lt;BR /&gt;                2365353 Seconds since last zeroed&lt;BR /&gt;                      0 Bytes received&lt;BR /&gt;                      0 Bytes sent&lt;BR /&gt;                      0 Packets received&lt;BR /&gt;                      0 Packets sent&lt;BR /&gt;                      0 Multicast bytes received&lt;BR /&gt;                      0 Multicast bytes sent&lt;BR /&gt;                      0 Multicast packets received&lt;BR /&gt;                      0 Multicast packets sent&lt;BR /&gt;                      0 Unrecognized unicast destination packets&lt;BR /&gt;                      0 Unrecognized multicast destination packets&lt;BR /&gt;                      0 Unavailable station buffers&lt;BR /&gt;                      0 Unavailable user buffers&lt;BR /&gt;                      0 Alignment errors&lt;BR /&gt;                      0 Frame check errors&lt;BR /&gt;                      0 Frame size errors&lt;BR /&gt;                      0 Frame status errors&lt;BR /&gt;                      0 Frame length errors&lt;BR /&gt;                      0 Frame too long errors&lt;BR /&gt;                      0 Data overruns&lt;BR /&gt;                      0 Send data length errors&lt;BR /&gt;                      0 Receive data length errors&lt;BR /&gt;                      0 Transmit underrun errors&lt;BR /&gt;                      0 Transmit failures&lt;BR /&gt;                 321770 Carrier check failures&lt;BR /&gt;                      0 Station failures&lt;BR /&gt;                      0 Initially deferred packets sent&lt;BR /&gt;                      0 Single collision packets sent&lt;BR /&gt;                      0 Multiple collision 
packets sent&lt;BR /&gt;                      0 Excessive collisions&lt;BR /&gt;                      0 Late collisions&lt;BR /&gt;                      0 Collision detect check failures&lt;BR /&gt;Characteristic #0x00C9, Value = 0000&lt;BR /&gt;                      0  Seconds since last zeroed&lt;BR /&gt;                      0  Seconds since last zeroed&lt;BR /&gt;                      0  Seconds since last zeroed&lt;BR /&gt;                      0  Seconds since last zeroed&lt;BR /&gt;                      0  Seconds since last zeroed&lt;BR /&gt;Characteristic #0x0BF6, Value = 00A53081&lt;BR /&gt;Characteristic #0x00CF, Value = 00000000&lt;BR /&gt;                      0  Seconds since last zeroed&lt;BR /&gt;                      0  Seconds since last zeroed&lt;BR /&gt;                      0  Seconds since last zeroed&lt;BR /&gt;                      0  Seconds since last zeroed&lt;BR /&gt;                      0  Seconds since last zeroed&lt;BR /&gt;Characteristic #0x0081, Value = 00000000&lt;BR /&gt;                      0  Seconds since last zeroed&lt;BR /&gt;                      0  Seconds since last zeroed&lt;BR /&gt; &lt;BR /&gt;Device Counters EWB0:&lt;BR /&gt;                  Value  Counter&lt;BR /&gt;                  -----  -------&lt;BR /&gt;                2365353 Seconds since last zeroed&lt;BR /&gt;          3497553439267 Bytes received&lt;BR /&gt;          1058834861029 Bytes sent&lt;BR /&gt;             9544407751 Packets received&lt;BR /&gt;             9344816014 Packets sent&lt;BR /&gt;             1319983733 Multicast bytes received&lt;BR /&gt;               28923907 Multicast bytes sent&lt;BR /&gt;               17525066 Multicast packets received&lt;BR /&gt;                 393784 Multicast packets sent&lt;BR /&gt;                      1 Unrecognized unicast destination packets&lt;BR /&gt;                1885300 Unrecognized multicast destination packets&lt;BR /&gt;                      0 Unavailable station buffers&lt;BR /&gt;          
            0 Unavailable user buffers&lt;BR /&gt;                      0 Alignment errors&lt;BR /&gt;                      0 Frame check errors&lt;BR /&gt;                      0 Frame size errors&lt;BR /&gt;                      0 Frame status errors&lt;BR /&gt;                      0 Frame length errors&lt;BR /&gt;                      0 Frame too long errors&lt;BR /&gt;                      0 Data overruns&lt;BR /&gt;                      0 Send data length errors&lt;BR /&gt;                      0 Receive data length errors&lt;BR /&gt;                      0 Transmit underrun errors&lt;BR /&gt;                      0 Transmit failures&lt;BR /&gt;                     58 Carrier check failures&lt;BR /&gt;                      0 Station failures&lt;BR /&gt;                      0 Initially deferred packets sent&lt;BR /&gt;                      0 Single collision packets sent&lt;BR /&gt;                      0 Multiple collision packets sent&lt;BR /&gt;                      0 Excessive collisions&lt;BR /&gt;                      0 Late collisions&lt;BR /&gt;                      0 Collision detect check failures&lt;BR /&gt;Characteristic #0x00C9, Value = 0005&lt;BR /&gt;                      5  Seconds since last zeroed&lt;BR /&gt;                      5  Seconds since last zeroed&lt;BR /&gt;Characteristic #0x0D03, Value = 28BE&lt;BR /&gt;    2935930974950842400 Abort delimiters sent&lt;BR /&gt;                  57549 Abort delimiters sent&lt;BR /&gt;                  10430  Seconds since last zeroed&lt;BR /&gt;                  57551 Abort delimiters sent&lt;BR /&gt;                  10430  Seconds since last zeroed&lt;BR /&gt;                  10430  Seconds since last zeroed&lt;BR /&gt;                  10430  Seconds since last zeroed&lt;BR /&gt;                  10430  Seconds since last zeroed&lt;BR /&gt;                  10430  Seconds since last zeroed&lt;BR /&gt;                  57557 Abort delimiters sent&lt;BR /&gt;                  10430  Seconds 
since last zeroed&lt;BR /&gt;                  10430  Seconds since last zeroed&lt;BR /&gt;LANCP&amp;gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I will try to install T4 sometime later this week to try it out.&lt;BR /&gt;&lt;BR /&gt;Thanks so much!&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;J</description>
      <pubDate>Tue, 21 Mar 2006 16:11:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755296#M51847</guid>
      <dc:creator>Jorge Cocomess</dc:creator>
      <dc:date>2006-03-21T16:11:59Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755297#M51848</link>
      <description>You should try to determine why EWB0 has&lt;BR /&gt;&lt;BR /&gt;&amp;gt; 58 Carrier check failures&lt;BR /&gt;&lt;BR /&gt;This indicates a total loss of electrical connectivity. Either hardware is failing or the cable is being unplugged at one end or the other. That is not normal. I don't know the significance of "Abort delimiters sent".&lt;BR /&gt;&lt;BR /&gt;(The carrier check failures that you see on EWA0 are likely a result of there not being a physical connection to the network while your system enables protocols on the NIC - you might consider modifying your network configuration to disable whichever protocols are attempting to use the interface.)</description>
      <pubDate>Tue, 21 Mar 2006 16:26:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755297#M51848</guid>
      <dc:creator>Jim_McKinney</dc:creator>
      <dc:date>2006-03-21T16:26:32Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755298#M51849</link>
      <description>Hi Jorge,&lt;BR /&gt;&lt;BR /&gt;The carrier check failures should be worth investigating. In fact, the link downs should be caused by these.&lt;BR /&gt;&lt;BR /&gt;--- Driver Messages ---&lt;BR /&gt;11-MAR-2006 18:04:17.66 Link up: 1000 mbit, full duplex, flow control disabled&lt;BR /&gt;11-MAR-2006 18:02:21.46 Link down&lt;BR /&gt;6-MAR-2006 21:35:33.36 Link up: 1000 mbit, full duplex, flow control disabled&lt;BR /&gt;6-MAR-2006 21:35:32.32 Link down&lt;BR /&gt;6-MAR-2006 21:29:48.74 Link up: 1000 mbit, full duplex, flow control disabled&lt;BR /&gt;6-MAR-2006 21:29:46.02 Link down&lt;BR /&gt;22-FEB-2006 05:30:02.93 Link up: 1000 mbit, full duplex, flow control disabled&lt;BR /&gt;&lt;BR /&gt;Also, it might be better if flow control were enabled. Currently it is disabled. You might need to set it up on the switch. The following inbound discards resulted:&lt;BR /&gt;&lt;BR /&gt;841 Inbound discards&lt;BR /&gt;1288 Receive threshold hit&lt;BR /&gt;&lt;BR /&gt;These inbound discards probably happened during peak hours. They would cause TCP retransmissions and would slow things down.&lt;BR /&gt;&lt;BR /&gt;Thanks and regards.&lt;BR /&gt;&lt;BR /&gt;Michael&lt;BR /&gt;</description>
      <pubDate>Tue, 21 Mar 2006 20:24:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755298#M51849</guid>
      <dc:creator>Michael Yu_3</dc:creator>
      <dc:date>2006-03-21T20:24:07Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755299#M51850</link>
      <description>Gentlemen,&lt;BR /&gt;&lt;BR /&gt;I do remember now... We had problems with one of our HP ProCurve switches about 2-3 weeks ago with a data storm, which shut down one of the ports on the switch.  I had to reset the switch, and since then I have updated to the latest and greatest patches.  Well, so far so good.  The link up and link down you're seeing is from when I was updating the switch firmware a couple of weeks ago.  I don't think these errors are registering any longer.  I can look at it again tomorrow to see if I'm still getting these errors.&lt;BR /&gt;&lt;BR /&gt;Is there a way for me to reset the stats on the NIC without rebooting the server? &lt;BR /&gt;&lt;BR /&gt;Thanks much!!&lt;BR /&gt;&lt;BR /&gt;J.</description>
      <pubDate>Tue, 21 Mar 2006 22:04:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755299#M51850</guid>
      <dc:creator>Jorge Cocomess</dc:creator>
      <dc:date>2006-03-21T22:04:01Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755300#M51851</link>
      <description>Michael - How do I enable flow control?  Also, where or how did you come up with the 841 inbound discards and 1288 receive threshold hits?&lt;BR /&gt;&lt;BR /&gt;With the receive threshold hit 1288 times, shouldn't that be a major concern?  Where do I increase or push back the threshold?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Thanks for your time.&lt;BR /&gt;&lt;BR /&gt;J&lt;BR /&gt;</description>
      <pubDate>Tue, 21 Mar 2006 22:12:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755300#M51851</guid>
      <dc:creator>Jorge Cocomess</dc:creator>
      <dc:date>2006-03-21T22:12:32Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755301#M51852</link>
      <description>Hi Jorge,&lt;BR /&gt;&lt;BR /&gt;The requested link state of the controller was set to &amp;lt;FLOWCONTROL&amp;gt;.&lt;BR /&gt;&lt;BR /&gt;The current link state is &amp;lt;FDX&amp;gt;.&lt;BR /&gt;&lt;BR /&gt;So flow control has been disabled. Since we requested flow control and the negotiated result was no flow control, it should be the switch port which was set to no flow control. Hence, to enable flow control, you have to configure the switch port to have flow control.&lt;BR /&gt;&lt;BR /&gt;The 841 inbound discards and 1288 receive threshold hits came from the output of LANCP&amp;gt; show device/int.&lt;BR /&gt;&lt;BR /&gt;I am not too concerned about the receive threshold hits, but the inbound discards have to be dealt with.&lt;BR /&gt;&lt;BR /&gt;If flow control is enabled and the receive threshold is hit, a pause frame will be sent to the switch port. But flow control has been disabled.&lt;BR /&gt;&lt;BR /&gt;Thanks and regards.&lt;BR /&gt;&lt;BR /&gt;Michael&lt;BR /&gt;</description>
      <pubDate>Tue, 21 Mar 2006 23:03:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755301#M51852</guid>
      <dc:creator>Michael Yu_3</dc:creator>
      <dc:date>2006-03-21T23:03:37Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755302#M51853</link>
      <description>&amp;gt; Is there away for me to reset the stats&lt;BR /&gt;&amp;gt; on the NIC without rebooting the server?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;LANCP&amp;gt; SET DEVICE/DEVICE_SPECIFIC=FUNCTION="CCOU" devname</description>
      <pubDate>Wed, 22 Mar 2006 07:58:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755302#M51853</guid>
      <dc:creator>Jim_McKinney</dc:creator>
      <dc:date>2006-03-22T07:58:48Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755303#M51854</link>
      <description>I would first investigate what the system is doing: who is eating the CPU / IO, and is the consumption normal? Check what the top images are.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 22 Mar 2006 09:39:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755303#M51854</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2006-03-22T09:39:49Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755304#M51855</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I thought turning on flow control at the switch was bad. Am I wrong, or did I just misread something?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;J</description>
      <pubDate>Wed, 22 Mar 2006 18:22:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755304#M51855</guid>
      <dc:creator>Jorge Cocomess</dc:creator>
      <dc:date>2006-03-22T18:22:03Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755305#M51856</link>
      <description>Jorge,&lt;BR /&gt;&lt;BR /&gt;do you know the manual SYS$HELP:LAN_COUNTERS_AND_FUNCTIONS.TXT?&lt;BR /&gt;&lt;BR /&gt;It explains the device-specific internal counters and commands such as CCOU.&lt;BR /&gt;&lt;BR /&gt;regards Kalle</description>
      <pubDate>Thu, 23 Mar 2006 05:42:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755305#M51856</guid>
      <dc:creator>Karl Rohwedder</dc:creator>
      <dc:date>2006-03-23T05:42:19Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755306#M51857</link>
      <description>"I thought turn on control flow at the switch was bad. I am wrong or I just missed read or something?"&lt;BR /&gt;&lt;BR /&gt;There is certainly a common view that nesting another flow control algorithm inside TCP is generally a bad idea.&lt;BR /&gt;</description>
      <pubDate>Thu, 23 Mar 2006 06:03:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755306#M51857</guid>
      <dc:creator>Richard Brodie_1</dc:creator>
      <dc:date>2006-03-23T06:03:01Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755307#M51858</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;The flow control between the switch port and the controller is at the data-link level, while TCP flow control is at the transport layer.&lt;BR /&gt;&lt;BR /&gt;Flow control can be enabled on a per-port basis at the switch.&lt;BR /&gt;&lt;BR /&gt;Pause is a mechanism for full-duplex flow control, discussed in IEEE 802.3 Annex 31B.&lt;BR /&gt;&lt;BR /&gt;Thanks and regards.&lt;BR /&gt;&lt;BR /&gt;Michael&lt;BR /&gt;</description>
      <pubDate>Thu, 23 Mar 2006 07:37:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755307#M51858</guid>
      <dc:creator>Michael Yu_3</dc:creator>
      <dc:date>2006-03-23T07:37:39Z</dc:date>
    </item>
    <item>
      <title>Re: Network Adapter "NIC" Bottleneck??</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755308#M51859</link>
      <description>Michael,&lt;BR /&gt;&lt;BR /&gt;I am aware of what port-based flow control is. I said it was a common view that it is a bad idea. This blog article is a well-argued example from a proponent of that view:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://virtualthreads.blogspot.com/2006/02/beware-ethernet-flow-control.html" target="_blank"&gt;http://virtualthreads.blogspot.com/2006/02/beware-ethernet-flow-control.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;I am not saying you were wrong to suggest that it should be enabled, merely noting that (in the general case) it is a contentious area and a topic of active research.&lt;BR /&gt;&lt;BR /&gt;I'm thus not surprised that Jorge heard it was "bad". Now he's heard another view.</description>
      <pubDate>Thu, 23 Mar 2006 09:09:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/network-adapter-quot-nic-quot-bottleneck/m-p/3755308#M51859</guid>
      <dc:creator>Richard Brodie_1</dc:creator>
      <dc:date>2006-03-23T09:09:18Z</dc:date>
    </item>
  </channel>
</rss>

