05-21-2008 03:27 PM
Slow ORACLE dblinks after maxdsiz change - Any Clues with Netstat?
Greetings!
We recently changed maxdsiz and maxdsiz_64 on several massive mixed-load servers (Oracle + custom COBOL/C apps) from 1 GB to 2 GB, as some of our custom apps were running out of data memory. However, after the change, dblink-dependent processes have become slow, and Oracle stats show "SQL*Net more data to client" as the leading wait event. Below is a snippet of 5-minute deltas of netstat statistics, as reported by the "beforeafter" utility from HP CUP.
Does anyone have any clue whether the maxdsiz change may be the cause? Any comments on the netstat deltas below? Any possible tweaks to the TCP tunables or Oracle's SDU/sqlnet.ora parameters?
Any info/help will be appreciated. We're also awaiting a TAR from Oracle.
tcp:
1527631 packets sent
1458285 data packets (1498698138 bytes)
164 data packets (135930 bytes) retransmitted
70709 ack-only packets (2917 delayed)
0 URG only packets
11 window probe packets
1382 window update packets
1876 control packets
1123704 packets received
1003745 acks (for 1498754616 bytes)
573 duplicate acks
0 acks for unsent data
618724 packets (158523464 bytes) received in-sequence
0 completely duplicate packets (0 bytes)
1 packets with some dup, data (1460 bytes duped)
32965 out of order packets (36098598 bytes)
0 packets (0 bytes) of data after window
0 window probes
3061 window update packets
19 packets received after close
7259 segments discarded for bad checksum
0 bad TCP segments dropped due to state change
182 connection requests
721 connection accepts
903 connections established (including accepts)
947 connections closed (including 40 drops)
30 embryonic connections dropped
1002740 segments updated rtt (of 1002740 attempts)
89 retransmit timeouts
17 connections dropped by rexmit timeout
11 persist timeouts
16 keepalive timeouts
16 keepalive probes sent
0 connections dropped by keepalive
0 connect requests dropped due to full queue
0 connect requests dropped due to no listener
0 suspect connect requests dropped due to aging
0 suspect connect requests dropped due to rate
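On the Oracle side of the question, the knob most often mentioned for "SQL*Net more data to client" waits is the session data unit (SDU). A sketch only, with hypothetical TNS names and an illustrative size (SDU must be set on both ends of the dblink to be negotiated up; the default on 9i-era stacks is 2048):

```
# tnsnames.ora on the originating side of the dblink (REMOTEDB, remotehost,
# and the 8192 size are illustrative, not tested values)
REMOTEDB =
  (DESCRIPTION =
    (SDU = 8192)
    (ADDRESS = (PROTOCOL = TCP)(HOST = remotehost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = remotedb))
  )

# listener.ora on the remote side must advertise a matching SDU
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SDU = 8192)
      (SID_NAME = remotedb)
      (ORACLE_HOME = /u01/app/oracle/product/9.2.0)
    )
  )
```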
Hakuna Matata.
3 REPLIES
05-22-2008 01:07 PM
Re: Slow ORACLE dblinks after maxdsiz change - Any Clues with Netstat?
Anyone experiencing the same issue?
Our environments are 11.11 PA-RISC, latest patches.
Hakuna Matata.
05-22-2008 02:20 PM
Re: Slow ORACLE dblinks after maxdsiz change - Any Clues with Netstat?
Shalom Nelson,
I have not seen this kind of behavior myself.
The network statistics are interesting by themselves. They seem to show a problem with the network.
I kind of doubt the kernel parameter change had anything to do with it.
If possible, back the change out and re-run the network statistics.
If they are similar, have the network hardware and media and switch settings checked out.
SEP
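Re-running the statistics after backing the change out does not require the beforeafter utility; a minimal portable stand-in can be sketched in shell (netdelta is a made-up name, and it assumes both snapshots have the same line layout, i.e. same kernel and same netstat options):

```shell
# netdelta: print per-line counter deltas between two `netstat -s` snapshots.
# A rough stand-in for HP's "beforeafter" utility, not its real code.
netdelta() {
    awk 'NR==FNR { base[FNR] = $1 + 0; next }      # pass 1: remember counters
         /^[ \t]*[0-9]/ { printf "%10d  %s\n", $1 - base[FNR], $0; next }
         { print }                                  # headings like "tcp:" pass through
    ' "$1" "$2"
}

# Typical use on the server:
#   netstat -s > before.txt; sleep 300; netstat -s > after.txt
#   netdelta before.txt after.txt
```

Counters that keep climbing in the deltas (here, the bad-checksum and out-of-order lines) are the ones worth chasing with the network team.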
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
05-22-2008 06:07 PM
Re: Slow ORACLE dblinks after maxdsiz change - Any Clues with Netstat?
It appears that, apart from the kernel parameter change, a set of patches from a Custom Patch Assessment was also applied. After an exhaustive check of every PHNE patch, I found PHNE_35351 (ARPA Streams Cumulative Patch), which seems to be the culprit. One of the many warnings associated with this patch states:
Warning: 08/01/24 - This Non-Critical Warning has been issued by HP.
- PHNE_35351 introduced behavior that may cause
applications to suffer some performance degradation that
is caused by the implementation of the TCP Congestion
Window Validation described in RFC2861. Although the TCP
Congestion Window Validation implemented by patch
PHNE_35351 works as expected for public networks, it may
limit throughput for fast private networks where the
possibility of network congestion is very low. The
behavior manifests itself when after some idle time, data
is sent. As the TCP Congestion Window after the idle
time is being reduced to 1 MSS (Maximum Segment Size),
rather than to 4 MSS without TCP Congestion Window
Validation, the throughput capacity for TCP connections
that are "application limited" will be limited.
("application limited" TCP connections are connections
where the application sends less data than allowed by the
TCP Congestion Window.) Else, the normal slow start and
Congestion Window algorithms are used.
- Additional information on this behavior may be found in
Service Request QXCR1000592888.
- The same behavior is experienced with superseding patch
PHNE_36125.
- This behavior will be addressed in ARPA Transport patch,
PHNE_37671, which is expected to be released by mid June
2008.
Is anyone else experiencing this issue with this patch on board? We are rolling back the patch to see if it is really the culprit.
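For anyone checking their own servers, the presence of the suspect patch (or its superseding PHNE_36125) can be confirmed by grepping swlist output. A small sketch; has_patch is a made-up helper that reads a patch listing on stdin:

```shell
# has_patch: report whether a given patch ID appears in a patch listing
# read on stdin. On HP-UX you would feed it live data, e.g.:
#   swlist -l patch | has_patch PHNE_35351
#   swlist -l patch | has_patch PHNE_36125
has_patch() {
    if grep -q "$1"; then
        echo "$1 installed"
    else
        echo "$1 not found"
    fi
}
```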
Hakuna Matata.