Operating System - HP-UX
02-19-2004 03:12 AM
BYNETIF metrics' ludicrous arithmetic
Hello,
I'm crafting some DSI specs to improve performance analysis.
I especially need to come up with something that gives me a better evaluation of network performance, since symptoms suggest a network bottleneck.
To actually tally the bytes that travel over the NIC I upgraded our OV performance agents, since the newer releases now expose metrics such as BYNETIF_{IN,OUT}_BYTE_RATE[_CUM].
Now I'm stuck on a silly arithmetic twister.
I tried this adviser syntax (I restrict sampling to lan2 because that's the only active NIC):
# cat netmet.adv
netif loop
if bynetif_name == "lan2" then
print bynetif_in_byte_rate, bynetif_in_byte_rate/128,
bynetif_out_byte_rate, bynetif_out_byte_rate/128,
bynetif_in_byte_rate_cum, bynetif_in_byte_rate_cum/128,
bynetif_out_byte_rate_cum, bynetif_out_byte_rate_cum/128
Now what is a bit annoying is that NIC MIB stats usually count octets (a.k.a. bytes),
whereas NIC vendors quote throughput caps in Mbps (or rather Mbit/s).
I suspect this is a marketing trick (it makes the numbers look bigger).
The NIC has been running at 100 Mbps in full-duplex mode, which I interpret to mean it can actually handle 200 Mbps (viz. bidirectionally).
# lanadmin -x 2
Current Speed = 100 Full-Duplex Auto-Negotiation-ON
Thus, I conclude that in order to convert from KB/s to Mb/s I would have to multiply by 8 and divide by 1024, or simply divide by 128.
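To double-check that arithmetic, here is a minimal Python sketch (assuming the BYNETIF_*_BYTE_RATE metrics are reported in KB/s, which is what the division by 128 presumes; the sample values are of the same magnitude as the glance output below):

# Sketch: convert a byte rate in KB/s to Mbit/s, both the "decimal" way
# (1 Mbit = 10^6 bits) and via the /128 shortcut (1 Mbit = 2^20 bits).
def kb_per_s_to_mbit(kb_per_s):
    return kb_per_s * 1024 * 8 / 1e6      # KB/s -> bits/s -> Mbit/s

def kb_per_s_to_mbit_shortcut(kb_per_s):
    return kb_per_s / 128                 # same conversion with a 2^20-bit "Mbit"

for sample in (353.8, 480.4, 511.6):      # KB/s, roughly the values seen below
    print(sample, round(kb_per_s_to_mbit(sample), 1),
          round(kb_per_s_to_mbit_shortcut(sample), 1))
# Either way the result is about 3-4 Mbit/s, well below the 100 Mbit/s line rate.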
When I run the above adviser syntax for, say, some 10 intervals I get this dump:
# glance -iterations 10 -j 10 -adviser_only -syntax netmet.adv 2>/dev/null
353.8 3 377.6 3 353.8 3 377.6 3
480.4 4 610.3 5 465.7 4 583.5 5
502.6 4 601.3 5 480.5 4 588.9 5
511.6 4 604.5 5 490.8 4 594.1 5
503.8 4 607.1 5 492.7 4 595.8 5
462.2 4 580.1 5 485.9 4 591.6 5
399.3 3 503.9 4 471.7 4 577.2 5
470.3 4 533.0 4 470.9 4 570.3 4
437.3 3 524.0 4 466.7 4 564.7 4
434.6 3 527.7 4 462.6 4 559.9 4
The columns that should represent the Mbps throughput above are ridiculously small,
far from the nominal physical capacity of the NIC.
The BYNETIF_*_BYTE_RATE legend from methp.txt explains that only bytes from packets that carry data are counted.
Does this mean that packets with no payload, but which on the other hand do have a header, are swept under the carpet?
(I don't know if such packets exist at all.)
If the new metrics don't give me plausible values I will go for octet counting instead, either via "lanadmin -g mibstats" or SNMP GETs.
I think I can pipe those into dsilog just as well, and then at least I know exactly what they are based on.
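A minimal sketch of that octet-counting approach (only the rate computation; how the counters are polled, e.g. via "lanadmin -g mibstats 2" or an SNMP GET of ifInOctets/ifOutOctets, is left out, and the sample counter values below are made up):

# Sketch: bytes/s from two octet-counter samples; handles a single 32-bit wrap.
def byte_rate(prev_octets, curr_octets, interval_sec, counter_bits=32):
    delta = curr_octets - prev_octets
    if delta < 0:                          # counter wrapped around 2**counter_bits
        delta += 1 << counter_bits
    return delta / interval_sec

prev, curr, interval = 123456789, 128345678, 10    # hypothetical samples, 10 s apart
bps = byte_rate(prev, curr, interval)
print("%.1f KB/s = %.2f Mbit/s" % (bps / 1024, bps * 8 / 1e6))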
Rgds.
Ralph
Madness, thy name is system administration
3 REPLIES
02-19-2004 04:02 AM
Re: BYNETIF metrics' ludicrous arithmetic
When computing typical throughput, you will rarely get more than half of the rated bit speed, so a 100 Mbit link will max out around 40-50 Mbit. If you are seeing less than 5 Mbit, then the link is likely experiencing a very high error rate, and for 100 Mbit this is almost always a duplex mismatch between the switch and the NIC. Use lanadmin to look at FCS and framing errors. And of course, a full-duplex connection will never have any collisions, so if there are collisions counted in lanadmin, that is the problem.
Failure to autonegotiate is a common problem with cables that are about 40 meters in length. The fix is to turn off autonegotiation at both the NIC and the switch.
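A minimal sketch of that check (an illustration only, not from the original post): scan "lanadmin -g mibstats 2" output for non-zero error, discard, or collision counters, e.g. lanadmin -g mibstats 2 | python check_counters.py (the script name is hypothetical).

# Sketch: read "lanadmin -g mibstats <nmid>" output on stdin and flag any
# error, discard, or collision counter that is not zero.
import re
import sys

SUSPECT = re.compile(r'error|discard|collision', re.IGNORECASE)

for line in sys.stdin:
    name, sep, value = line.partition('=')
    if sep and SUSPECT.search(name):
        value = value.strip()
        if value.isdigit() and int(value) > 0:
            print('non-zero counter: ' + line.strip())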
Bill Hassell, sysadmin
02-19-2004 04:17 AM
Re: BYNETIF metrics' ludicrous arithmetic
# lanadmin -g mibstats 2|grep -i -e errors -e discards -e collisions
Inbound Discards = 0
Inbound Errors = 0
Outbound Discards = 0
Outbound Errors = 0
Alignment Errors = 0
FCS Errors = 0
Late Collisions = 0
Excessive Collisions = 0
Internal MAC Transmit Errors = 0
Carrier Sense Errors = 0
Internal MAC Receive Errors = 0
# lanadmin -sa 2
Station Address = 0x001083182b76
Speed = 100000000
I know that one generally should avoid autoneg.
But I don't believe there is a duplex-mode mismatch between the link partners.
Unfortunately neither the brand, nor the IP, nor the community string of the switch is known to me.
Otherwise I would have snmpwalk'ed its tree.
Madness, thy name is system administration
02-19-2004 10:15 AM
Re: BYNETIF metrics' ludicrous arithmetic
Hi Ralph,
How did you verify that the traffic going through this interface was more than 3 Mbps? (It's actually 1/10th of that, since it is per 10 seconds.)
I would install Ethereal and verify the figures. It's slightly difficult to install Ethereal, but once you get it, it is very easy to play with. It gives you a snapshot of the bandwidth used during the data collection interval.
-Sri
You may be disappointed if you fail, but you are doomed if you don't try