
Application waiting on a SOCKT (HP-UX i64)

 
Igor Kersic
New Member


Hi,

Firstly, I apologize if I've chosen the wrong subsection of the forum; this is basically about performance, and maybe somebody has a hint about the situation I'm having:
I'm running a 64-bit application on B.11.31 U ia64 (an HP-UX Superdome). We have ~380 GB of RAM on the machine and enough disk space mounted on several partitions. The application runs on one node and our Oracle DB server runs on another node, with a 10 Gbit network between them.
Basically, what I noticed using Caliper / Glance is that, overall, the application in question spends most of its time in the SOCKT wait state (around 50% or more). CFS was running on the machine (it is currently back on NFS for a short period, but normally it is CFS).
->We have increased our TCP send/receive buffers: tcp_recv_hiwater_def=262144 and lowater_def=8129; the same values were set for tcp_xmit_hiwater_def and its lowater_def.
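For reference, this is roughly how we check and set those values with ndd (only the two hiwater parameters are shown; persistent settings would go into /etc/rc.config.d/nddconf):

# check the current TCP buffer defaults
ndd -get /dev/tcp tcp_recv_hiwater_def
ndd -get /dev/tcp tcp_xmit_hiwater_def
# set them on the running system (not persistent across reboots)
ndd -set /dev/tcp tcp_recv_hiwater_def 262144
ndd -set /dev/tcp tcp_xmit_hiwater_def 262144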
->Caliper basically shows something like this:
0 53.0 0.0 53.0 libc.so.1::_read_sys { stream_sock_fd@7 } [18]
libc.so.1::read [20]
libclntsh.so.11.1::nttfprd [21]
libclntsh.so.11.1::nsbasic_brc [19]
libclntsh.so.11.1::nsbrecv [17]
libclntsh.so.11.1::nioqrc [9]
libclntsh.so.11.1::ttcdrv [12]
libclntsh.so.11.1::nioqwa [8]
libclntsh.so.11.1::upirtrc [11]
libclntsh.so.11.1::kpuexes [27]
libclntsh.so.11.1::upiex0 [25]
libclntsh.so.11.1::upiexn [26]
libclntsh.so.11.1::sqlall [15]
libclntsh.so.11.1::sqlatm [24]
libclntsh.so.11.1::sqlnst [14]
libclntsh.so.11.1::sqlcmex [16]
libclntsh.so.11.1::sqlcxt [13]
Most of the time the application is hanging in the Oracle client driver, reading from the socket connected to the Oracle database.
Now, on the Oracle server side we have great performance; the AWR report on the Oracle server says Oracle is idle most of the time: it sends the query responses over the network and then waits for the client (in our case the application in question). So the Oracle server/database is definitely not the bottleneck.
The network is 10 Gbit, as said, and the throughput/speed of the network is really satisfactory.
->This is what tcpdump shows most of the time on the file descriptor communicating over this socket (the hanging / hot-path one):
...
Flags [P.], seq 160518:168673, ack 2107, win 32768, length 8155
Flags [.], ack 168673, win 32768, length 0
Flags [P.], seq 168673:172899, ack 2107, win 32768, length 4226
Flags [.], ack 172899, win 32768, length 0
Flags [P.], seq 2107:2269, ack 172899, win 32768, length 162
Flags [P.], seq 172899:181054, ack 2269, win 32768, length 8155
Flags [.], ack 181054, win 32768, length 0
Flags [P.], seq 181054:185247, ack 2269, win 32768, length 4193
Flags [.], ack 185247, win 32768, length 0
Flags [P.], seq 2269:2431, ack 185247, win 32768, length 162
Flags [P.], seq 185247:193402, ack 2431, win 32768, length 8155
Flags [.], ack 193402, win 32768, length 0
Flags [P.], seq 193402:197643, ack 2431, win 32768, length 4241
Flags [.], ack 197643, win 32768, length 0
Flags [P.], seq 2431:2593, ack 197643, win 32768, length 162
Flags [P.], seq 197643:205798, ack 2593, win 32768, length 8155
Flags [.], ack 205798, win 32768, length 0
Flags [P.], seq 205798:209974, ack 2593, win 32768, length 4176
Flags [.], ack 209974, win 32768, length 0
Flags [P.], seq 2593:2755, ack 209974, win 32768, length 162
Flags [P.], seq 209974:218129, ack 2755, win 32768, length 8155
Flags [.], ack 218129, win 32768, length 0
Flags [P.], seq 218129:222373, ack 2755, win 32768, length 4244
Flags [.], ack 222373, win 32768, length 0
...
Basically this shows that the window is 32 KB in this case; I really don't know how the window is chosen.
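(One guess, which I have not verified: the advertised window should simply reflect the receive buffer of the socket in the Oracle client process, and the ~8 KB pushes from the server look like the Oracle Net default SDU of 8 KB, so maybe the client library sets its own buffer sizes. If so, something like the following in the client-side sqlnet.ora might change the picture; the values below are only examples, not something we have tested:)

# client-side sqlnet.ora - example values only
RECV_BUF_SIZE=262144
SEND_BUF_SIZE=262144
DEFAULT_SDU_SIZE=32767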

->On the other hand, netperf shows different throughput averages with 32k, 64k, 128k and 256k buffers:
netperf -fM -H -tTCP_STREAM -- -s32768 -S32768
32k (against the first DB server, throughput in MB/sec ~161; against the other DB server ~171) -> we have two DB servers
netperf -fM -H -tTCP_STREAM -- -s65536 -S65536
64k (first ~237, second ~260)
netperf -fM -H -tTCP_STREAM -- -s131072 -S131072
128k (first ~230, second ~276)
netperf -fM -H -tTCP_STREAM -- -s262144 -S262144
256k (first ~248, second ~282)
This shows that TCP_STREAM throughput varies with the buffer size, but the main point is that increasing to 64k boosts throughput by roughly 50%.
#Note: when netperf is run without an explicit buffer size and in debug mode, it reports get_send/rcv_buffer_size as 32k by default. Where it takes this default 32k from I don't know, but it would mean that something in our kernel parameters defaults to (is limited to) 32k.

->Also, tusc output shows that most of the time the application is in the read system call on this socket (not write), and Glance's Wait System Calls view likewise shows that the application is waiting on the read system call most of the time.
->Please note that there are no DB locks or significant waits, and the application uses the database heavily... but still, as said, the DB is idle most of the time, and most of the application's time is spent in read on the DB connection socket.
Please let me know if you need any other ndd, kctune or other parameter values.
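(If it helps, I can also dump them all; something like this should list everything we have:)

# list all TCP-level ndd parameters, then all kernel tunables
ndd -get /dev/tcp '?'
kctune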

So, in the end, the puzzling thing for me is: why is the application hanging on a socket, spending most of its time reading from the TCP connection to the database (we have Oracle 11.2)? Is there some bottleneck in our kernel or network parameters that we are missing? It is puzzling because the DB is so fast, well tuned and idling most of the time; SQL data throughput on the database servers is fast and all the queries and tables are optimized. The bottleneck therefore seems to be the fetching from the socket on the application node, where the application spends most of its time. Maybe the NFS or CFS buffers are limited (the only thing I can think of is the NFSv3 buffer sizes, which are currently 32k; they will be increased soon after the netperf testing shown above -> see the quick check right below). What limits and/or slows our reads from the socket on the application node? Or, bottom line, maybe this is normal behaviour and we cannot do better because of a limitation in our application code...
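(Quick check for the NFS part, just to confirm the current rsize/wsize before we change anything; we expect this to show 32768 today:)

# show NFS mount options (rsize/wsize) on the application node
nfsstat -m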
Please advise if you have any idea about possible reasons for this SOCKT wait state / hot path situation at the kernel/network/system level.
If this looks like a normal situation for an application under heavy database usage, as far as the HP-UX system level is concerned (based on the tool reports presented above), then we'll revise our application code / transaction fetch size, etc.
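(To illustrate what I mean by fetch size: the call stack above goes through SQLLIB routines (sqlcxt/sqlall), so I assume the code is Pro*C; if that is right, one knob would be the precompiler PREFETCH option, i.e. rebuilding modules with a larger row prefetch count. The module name below is only a placeholder:)

# hypothetical rebuild of one Pro*C module with a larger prefetch count
proc prefetch=100 iname=our_module.pc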

Thank you in advance for any hint,
Igor