- Tuning TCP parameters in HP-UX
Operating System - HP-UX
06-01-2006 06:29 PM
Hi,
1. What is the best practice for permanent TCP parameter tuning? Should I put ndd commands in a startup script?
2. Is it possible to control FIN_WAIT_2 and CLOSE_WAIT timeouts?
Thanks and points in advance!
KISS - Keep It Simple Stupid
06-01-2006 06:54 PM
Solution
Hello Mihails,
1) I don't know what the best practice is, but here is a response from Rick Jones (HP) about a CLOSE_WAIT issue; perhaps the beginning of a best practice:
" The CLOSE_WAIT state is the state a TCP connection enters when it has received and ACKnowledged a FIN from the remote and is now waiting for the local application to call close() or shutdown().
99 times out of 10, a connection "stuck" in CLOSE_WAIT means the application at that end has a bug - it is either ignoring, or has forgotten, that it was told the remote has initiated a shutdown of the connection.
The other, much more rare case, is that this is an application that is using TCP to transfer data in one direction only. CLOSE_WAIT is a perfectly valid "send only" state for a TCP connection, assuming that the remote side, in FIN_WAIT_2, got there by doing a shutdown(SHUT_WR) and not a SHUT_RD or SHUT_RDWR or close().
This is why I am not terribly fond of the arbitrary tcp_fin_wait_2_timeout for dealing with FIN_WAIT_2. I much prefer to let the "normal" tcp_keepalive_detached_interval handle it, since that also deals with a different sort of client bug - when the client uses an abortive close (RST instead of FIN) of the TCP connection. That is a doubleplusungood thing to do; the RST is not retransmitted, and it can leave the server stuck in FIN_WAIT_2. The tcp_keepalive_detached_interval will deal with FIN_WAIT_2 on the server when the server calls close() - at that point it will send keepalives, and if the keepalives get no response, or elicit a RST from the remote, it will terminate the FIN_WAIT_2 connection.
Do not mess with the tcp_time_wait_interval. It should stay at 60 seconds or more. It is only for connections in TIME_WAIT, and TIME_WAIT is an integral part of TCP's correctness heuristics. I also would not suggest a tcp_fin_wait_2_timeout of 10 seconds - if you must use it, keep it at least as long as tcp_time_wait_interval.
I'd also probably not make tcp_keepalive_detached_interval a mere 10 seconds. The two minute default should suffice - again 99 times out of 10 :) and if you do need to make it shorter, again I'd not make it any shorter than tcp_time_wait_interval. "
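On the observation side, a quick way to see whether a box is accumulating connections in these states is to tally the state column of netstat output. A minimal sketch - the netstat output below is a made-up sample so the snippet is self-contained; on a live system you would pipe `netstat -an` instead:

```shell
# Made-up sample of `netstat -an` output, inlined for illustration;
# replace with the real command on a live system.
netstat_output='tcp        0      0  10.0.0.5.8080   10.0.0.9.51512  CLOSE_WAIT
tcp        0      0  10.0.0.5.8080   10.0.0.9.51513  ESTABLISHED
tcp        0      0  10.0.0.5.8080   10.0.0.9.51514  FIN_WAIT_2'

# Tally connections per TCP state; a steadily growing CLOSE_WAIT count
# usually points at an application that never calls close() after EOF.
state_counts=$(printf '%s\n' "$netstat_output" | awk '{print $NF}' | sort | uniq -c)
echo "$state_counts"
```

If CLOSE_WAIT dominates and keeps growing, that is the application-bug case from the quote above, and no kernel tunable will really fix it.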
For tuning TCP you can put your modifications in /etc/rc.config.d/nddconf.
You can see the details of all the tunable parameters with ndd -h, and put what you want to tune in this file like this:
TRANSPORT_NAME[0]=ip
NDD_NAME[0]=ip_forward_directed_broadcasts
NDD_VALUE[0]=0
#
TRANSPORT_NAME[1]=ip
NDD_NAME[1]=ip_forward_src_routed
NDD_VALUE[1]=0
#
TRANSPORT_NAME[2]=ip
NDD_NAME[2]=ip_forwarding
NDD_VALUE[2]=0
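Because nddconf entries are indexed arrays, anything that adds an entry needs the next free index. A small sketch of that bookkeeping against a scratch copy of the file - the path, tunable name, and value are illustrative only, and HP-UX TCP intervals are, as far as I know, expressed in milliseconds:

```shell
# Work on a scratch copy; on a real box this would be /etc/rc.config.d/nddconf.
nddconf=/tmp/nddconf.example
cat > "$nddconf" <<'EOF'
TRANSPORT_NAME[0]=ip
NDD_NAME[0]=ip_forwarding
NDD_VALUE[0]=0
EOF

# Next free index = highest existing TRANSPORT_NAME index + 1 (0 if none).
next=$(sed -n 's/^TRANSPORT_NAME\[\([0-9][0-9]*\)\].*/\1/p' "$nddconf" | sort -n | tail -1)
next=${next:--1}
next=$((next + 1))

# Append a new tcp tunable entry; 120000 ms (the 2-minute default
# mentioned above) is shown only as an example value.
{
  printf 'TRANSPORT_NAME[%d]=tcp\n' "$next"
  printf 'NDD_NAME[%d]=tcp_keepalive_detached_interval\n' "$next"
  printf 'NDD_VALUE[%d]=120000\n' "$next"
} >> "$nddconf"
```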
2) Take a look at the help:
ndd -h tcp_fin_wait_2_timeout
You can get the current value on your box with:
ndd -get /dev/tcp tcp_fin_wait_2_timeout
Use ndd -get /dev/tcp ? to get the list of tunables,
and use -set instead of -get to change a value.
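Putting that together, a hedged sketch of the one-off commands. A -set change takes effect immediately but is lost at reboot unless a matching nddconf entry is added; the 600000 ms value is purely illustrative, and the commands are guarded so the sketch is a no-op on systems without ndd:

```shell
# ndd exists on HP-UX (and a few other UNIXes); guard so this sketch
# degrades gracefully elsewhere.
if command -v ndd >/dev/null 2>&1; then
    have_ndd=yes
    ndd -h tcp_fin_wait_2_timeout                      # per-tunable help text
    ndd -get /dev/tcp tcp_fin_wait_2_timeout           # current value, in ms
    # ndd -set /dev/tcp tcp_fin_wait_2_timeout 600000  # 10 min; example only
else
    have_ndd=no
    echo "ndd not found; this sketch targets HP-UX"
fi
```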
Hope this helps
Kenavo
Pat
Good judgement comes with experience. Unfortunately, the experience usually comes from bad judgement.