HPE GreenLake Administration
Community Home > Servers and Operating Systems > Operating Systems > Operating System - HP-UX > Re: Source quench in hpux 11.00
Operating System - HP-UX
09-06-2004 02:57 PM
Dear all,
I have several HP A-class servers running HP-UX 11.00. They are on the same routed network and on the same LAN media type: FDDI.
I find that traffic initiated from a Cisco Catalyst switch to the HP-UX servers suffers a failure rate that depends on packet size; larger packets suffer more. Minimum-size packets of around 100 bytes are fine, but for a typical 1000-byte packet almost 30 to 40 percent of packets are lost.
So I tried setting:
#ndd -set /dev/tcp tcp_mss_max 1460
#ndd -c
After that, when I ping from the HP-UX servers I receive source quench messages from the pinged system. Setting the ndd parameter ip_send_source_quench to 0 is said to be an effective way to deal with these messages.
I understand that source quench is the pinged system's way of telling the pinging system that it is sending ICMP traffic faster than it can handle, i.e. a "please slow down" signal. Source quench is sent back by a host when it cannot deliver packets to the destination socket quickly enough, and there is reportedly a bug in HP-UX 11.0 that causes it to return excessive SQ messages as well.
So, is there a patch that fixes this problem in 11.0?
Can anyone give me some advice? Thanks.
Alex
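For reference, the `ip_send_source_quench` tunable mentioned above can be changed at runtime with `ndd -set` and made persistent via `/etc/rc.config.d/nddconf`, which `ndd -c` re-applies at boot. A minimal sketch, assuming the standard HP-UX nddconf entry format and that index `[0]` is free in your file (use the next free index otherwise):

```shell
# Runtime: stop the IP stack from emitting ICMP source quench messages.
ndd -set /dev/ip ip_send_source_quench 0

# Persistent: add an entry to /etc/rc.config.d/nddconf so that
# "ndd -c" (run at boot) re-applies the setting. The [0] index is an
# assumption -- bump it if the file already has entries.
cat >> /etc/rc.config.d/nddconf <<'EOF'
TRANSPORT_NAME[0]=ip
NDD_NAME[0]=ip_send_source_quench
NDD_VALUE[0]=0
EOF

# Verify the current value.
ndd -get /dev/ip ip_send_source_quench
```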
3 REPLIES
09-06-2004 03:14 PM
09-06-2004 09:23 PM
Re: Source quench in hpux 11.00
Hi,
You can monitor source quench with:
# netstat -p icmp
Regards,
You need to know a lot to actually know how little you know
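To watch those counters in practice, the relevant lines can be filtered out directly. A small sketch, assuming the classic BSD-style "source quench" labels that HP-UX's netstat uses in its per-protocol ICMP statistics:

```shell
# Per-protocol ICMP statistics, including counts of source quench
# messages sent and received.
netstat -p icmp

# Narrow the output to the source quench counters; run this before and
# after a test ping to see whether the quench counts are climbing.
netstat -p icmp | grep -i quench
```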
09-07-2004 12:08 PM
Re: Source quench in hpux 11.00
The 11.00 IP code was entirely too eager to send source quench messages. You might as well disable them via ndd.
As for the packet losses, I have a hard time seeing how altering tcp_mss_max would have helped - unless perhaps the FDDI interface(s) in your HP systems were losing races in the FIFOs between the network and the host memory. I would check the lanadmin statistics for the NICs and see if there were any FIFO settings for how soon the NIC might start to DMA packets into the host.
BTW, what sort of system and FDDI NIC is this? If this is the old HP-PB FDDI NIC, in something like a K class, the best you could hope for in that would be something along the lines of 65 Mbit/s.
there is no rest for the wicked yet the virtuous have no pillows
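The lanadmin check suggested above might look like the following. This is a sketch assuming HP-UX 11.x tooling: `lanscan` to find the interface's PPA and `lanadmin` for per-interface MIB statistics (the exact `lanadmin` invocation varies by release, and its interactive menu works as well):

```shell
# List LAN interfaces and their PPA (physical point of attachment) numbers.
lanscan

# Dump MIB statistics for the FDDI interface. PPA 0 here is an
# assumption -- substitute the PPA reported by lanscan. Check the
# inbound discard and error counters for signs of FIFO overruns
# between the network and host memory.
lanadmin -g mibstats 0
```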