Operating System - HP-UX
10-14-2003 12:48 AM
Can nettune TCP stack tuning intercept FW disruptions?
Hello,
I raised this topic here some time ago; I hope you don't mind me bringing it up again.
The issue is a web server, serving the WWW, that is fenced off by a firewall (FW).
This FW is configured (to keep its connection tables from overflowing) to sever stale connections that haven't shown any network traffic for one hour.
Especially for a web server this seems quite sensible to me.
Another disruptive criterion, as the FW admin told me, is that the usual 3-way TCP handshake that establishes new connections must complete within a certain period of time (I guess one minute).
This, I think, guards against certain DoS attacks such as SYN floods.
Unfortunately, the web server also needs to establish connections to a database server to service certain query requests from its clients.
The drawback with these additional connections is that the application programmers evidently did not implement the common BSD socket client/server model, where a parent socket forks off children from an accept() loop to serve isolated client sessions, and where those children cleanly close their sockets as soon as a request has been serviced.
Rather, I think they use some sort of steady terminal sessions that are opened right at the server's start.
So while no data is exchanged over these sockets, the FW detects that there is no traffic and disrupts the connection, with the familiar broken-pipe effect and many sockets lingering in FIN_WAIT_2 state because they never receive a close acknowledgement from the other end.
These are, of course, my crude assumptions, based on very little networking knowledge and no insight into the application's implementation.
The only excuse I can offer for the developers' poor communication model is that they either feared the performance overhead of constantly re-establishing many connections, or that they simply did it this way out of laziness.
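The FIN_WAIT_2 build-up described above is easy to tally from netstat output. The awk one-liner below counts sockets per TCP state; since netstat's column layout varies by platform, it is demonstrated here against a few canned sample lines (the live invocation is left as a comment):

```shell
# Count TCP sockets per state; the state is the last field of each line.
# Live usage would be:
#   netstat -an | awk '/tcp/ { n[$NF]++ } END { for (s in n) print s, n[s] }'
# Canned sample lines stand in for live netstat output here:
cat <<'EOF' | awk '/tcp/ { n[$NF]++ } END { for (s in n) print s, n[s] }'
tcp 0 0 10.0.0.1.80 192.0.2.5.1034 ESTABLISHED
tcp 0 0 10.0.0.1.80 192.0.2.6.1071 FIN_WAIT_2
tcp 0 0 10.0.0.1.80 192.0.2.7.1099 FIN_WAIT_2
EOF
```

For the sample above this prints, in no particular order, `ESTABLISHED 1` and `FIN_WAIT_2 2`.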
As this box runs 10.20, there aren't as many tunables as the ndd command offers under 11.X.
However, with regard to the 3600-second FW cutoff, this value is probably too high, so keepalive packets will never get a chance to be sent:
# nettune tcp_keepstart
7200
# nettune -h tcp_keepstart
tcp_keepstart:
The number of seconds that a TCP connection can be idle
(that is, no packets received) before keep-alive packets
will be sent attempting to solicit a response. When a packet
is received, keep-alive packets are no longer sent unless
the connection is idle again for this period of time.
If I set this below 3600 seconds, would that help with my problem?
Are there perhaps other tunables worth taking into consideration?
These are currently set:
for p in $(nettune -h|grep -E '.*:'|tr -d ':');do echo "\n$p:";nettune $p;done
arp_killcomplete:
1200
arp_killincomplete:
600
arp_unicast:
300
arp_rebroadcast:
60
icmp_mask_agent:
0
ip_defaultttl:
255
ip_forwarding:
1
ip_intrqmax:
200
pmtu_defaulttime:
20
tcp_localsubnets:
1
tcp_receive:
32768
tcp_send:
32768
tcp_defaultttl:
64
tcp_keepstart:
7200
tcp_keepfreq:
75
tcp_keepstop:
600
tcp_maxretrans:
12
tcp_urgent_data_ptr:
0
udp_cksum:
1
udp_defaultttl:
64
udp_newbcastenable:
1
udp_pmtu:
0
tcp_pmtu:
1
tcp_random_seq:
0
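If lowering the threshold is the way to go, the change itself is one command (the `nettune -s` syntax below is my assumption from the 10.20 manuals; verify against `man nettune` before use). Note also that keepalives are only ever sent on sockets where the application has set SO_KEEPALIVE, so tuning alone may not help if the database client code doesn't request them. A rough worst-case teardown estimate, on my reading that tcp_keepstop bounds how long unanswered probes continue:

```shell
# HP-UX 10.20 only - lower the keepalive idle threshold (1800 is illustrative):
#   nettune -s tcp_keepstart 1800
# With keepalives active, an unresponsive peer would then be torn down after
# roughly tcp_keepstart + tcp_keepstop seconds (keepstop is 600 in the dump above):
keepstart=1800
keepstop=600
echo "worst-case teardown after $((keepstart + keepstop)) seconds"
```

At 1800 + 600 = 2400 seconds, the stack would give up on a dead peer well before the FW's 3600-second cutoff.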
Madness, thy name is system administration
2 REPLIES
10-14-2003 03:12 AM
Re: Can nettune TCP stack tuning intercept FW disruptions?
HP's Rick Jones said on
http://httpd.apache.org/docs/misc/perf-hp.html
"If folks are concerned about the number of FIN_WAIT_2 connections, they can use nettune to shrink the value of tcp_keepstart. However, they should be careful there - certainly do not make it less than oh two to four minutes. If tcp_hash_size has been set well, it is probably OK to let the FIN_WAIT_2's take longer to timeout (perhaps even the default two hours) - they will not on average have a big impact on performance."
In the same article he also recommends that you install the latest cumulative ARPA Transport patch, which would be PHNE_22507:
http://www2.itrc.hp.com/service/patch/patchDetail.do?patchid=PHNE_22507&context=hpux:800:10:20
He also explains how to tune tcp_hash_size.
Ron
10-14-2003 04:11 AM
Re: Can nettune TCP stack tuning intercept FW disruptions?
Ron,
thank you for the link to Rick's HP-UX contribs at apache.org.
We do have a cumulative ARPA patch installed; however, its release number is lower than the one you mentioned:
# swlist PHNE_\*|grep -i arpa
# PHNE_16210 B.10.00.00.AA cumulative ARPA Transport patch
PHNE_16210.PHNE_16210 B.10.00.00.AA cumulative ARPA Transport patch
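Since the patch IDs are plain numbers after the PHNE_ prefix, a quick numeric comparison confirms the installed patch is the older one (both IDs are from this thread; the check itself is just POSIX shell):

```shell
# Compare two cumulative ARPA Transport patch IDs numerically.
installed=PHNE_16210   # from the swlist output above
latest=PHNE_22507      # the one Ron mentioned
if [ "${installed#PHNE_}" -lt "${latest#PHNE_}" ]; then
    echo "$installed is older than $latest - an update looks advisable"
fi
```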
Madness, thy name is system administration