
ndd parameters

 
KPS
Super Advisor

ndd parameters

Hi,

I have a question or two about TCP transport parameters. For our application, the app vendor and HP told us that the following ndd parameters should be changed from their default values to specific values better suited to our large environment. Below are the TCP transport parameters and their current settings in /etc/rc.config.d/nddconf:

TRANSPORT_NAME[0]=tcp
NDD_NAME[0]=tcp_recv_hiwater_def
NDD_VALUE[0]=262144
TRANSPORT_NAME[1]=tcp
NDD_NAME[1]=tcp_xmit_hiwater_def
NDD_VALUE[1]=262144
TRANSPORT_NAME[2]=tcp
NDD_NAME[2]=tcp_recv_hiwater_lfp
NDD_VALUE[2]=262144
TRANSPORT_NAME[3]=tcp
NDD_NAME[3]=tcp_xmit_hiwater_lfp
NDD_VALUE[3]=262144
TRANSPORT_NAME[4]=sockets
NDD_NAME[4]=socket_udp_rcvbuf_default
NDD_VALUE[4]=262144
TRANSPORT_NAME[5]=sockets
NDD_NAME[5]=socket_udp_sndbuf_default
NDD_VALUE[5]=262144
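As an aside, the indexed TRANSPORT_NAME/NDD_NAME/NDD_VALUE triplets can be flattened into one line per tunable for easier reading. This is just a sketch using standard awk; the helper name `nddconf_list` is my own, not an HP-UX tool:

```shell
# Flatten nddconf's indexed variables into "transport parameter value" lines.
# Assumes the TRANSPORT_NAME[n]=/NDD_NAME[n]=/NDD_VALUE[n]= layout shown above.
nddconf_list() {
    awk -F= '
        /^TRANSPORT_NAME\[/ { t = $2 }          # remember transport for this index
        /^NDD_NAME\[/       { n = $2 }          # remember the parameter name
        /^NDD_VALUE\[/      { print t, n, $2 }  # value completes the triplet
    ' "$1"
}
```

Usage: `nddconf_list /etc/rc.config.d/nddconf`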

What I'm interested in right now is the default value for each of the parameters above. Does anyone know how to obtain the default settings, or happen to know what they are?

Here is some more info of system type and OS:

OS= HP-UX 11.23 (ia64)
System Type= BL870

Thanks in advance,

KPS

8 REPLIES
Patrick Wallek
Honored Contributor
Solution

Re: ndd parameters

To get the default value, run 'ndd -h <parameter>' for each parameter.

For example:

$ ndd -h tcp_recv_hiwater_def

tcp_recv_hiwater_def:

The maximum size for the receive window. [4096,-]
Default: 32768 bytes
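That per-parameter check can be looped over all six tunables in the nddconf above. A sketch, assuming HP-UX's ndd -h output includes a "Default: N bytes" line as shown; the function name `show_ndd_defaults` is my own:

```shell
# Print "parameter default" for each tunable from the nddconf in question.
# HP-UX only: relies on ndd(1M) -h emitting a "Default: <n> bytes" line.
show_ndd_defaults() {
    for p in tcp_recv_hiwater_def tcp_xmit_hiwater_def \
             tcp_recv_hiwater_lfp tcp_xmit_hiwater_lfp \
             socket_udp_rcvbuf_default socket_udp_sndbuf_default; do
        # keep only the parameter name and the numeric default
        ndd -h "$p" | awk -v p="$p" '/^Default:/ { print p, $2 }'
    done
}
```

Usage: `show_ndd_defaults` (no arguments), run on the HP-UX host itself.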
Mel Burslan
Honored Contributor

Re: ndd parameters

Here are my values from a recently installed 11.23 system (ndd settings never touched); look at the lines starting with -->:

NDD_NAME[0]=tcp_recv_hiwater_def
NDD_VALUE[0]=262144
-->32768

NDD_NAME[1]=tcp_xmit_hiwater_def
NDD_VALUE[1]=262144
-->32768

NDD_NAME[2]=tcp_recv_hiwater_lfp
NDD_VALUE[2]=262144
-->65536

NDD_NAME[3]=tcp_xmit_hiwater_lfp
NDD_VALUE[3]=262144
-->65536

NDD_NAME[4]=socket_udp_rcvbuf_default
NDD_VALUE[4]=262144
-->65535

NDD_NAME[5]=socket_udp_sndbuf_default
NDD_VALUE[5]=262144
-->65535


Hope this helps
________________________________
UNIX because I majored in cryptology...
Steven E. Protter
Exalted Contributor

Re: ndd parameters

Shalom,

I would do an install of 11.23 on a system and take a look at /etc/rc.config.d/nddconf

That is how I'd find out the defaults.

The man pages don't seem to say anything.

SEP

Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Patrick Wallek
Honored Contributor

Re: ndd parameters

>>I would do an install of 11.23 on a system
>>and take a look at /etc/rc.config.d/nddconf
>>That is how I'd find out the defaults.

Then you wouldn't find anything. The nddconf file is empty by default.


>>The man pages don't seem to say anything.

Really? I thought it actually provided useful information. I guess you just missed the information about the '-h' option:

ndd -h [parameter]
When a parameter is specified, a detailed description of the parameter, along with its minimum, maximum, and default values, is displayed. If no parameter is specified, all supported and unsupported tunable parameters are listed.
KPS
Super Advisor

Re: ndd parameters

ndd -h worked perfectly. Thanks, everyone, for your input on this.


/KPS
KPS
Super Advisor

Re: ndd parameters

pts assigned.
rick jones
Honored Contributor

Re: ndd parameters

Not that I'm 100% fond of the current HP-UX defaults (32768 has been the default TCP socket buffer size since 10.20, if I recall correctly), but did the app vendor/HP give specific reasons for the values they requested? And what exactly do you mean by "large environment"? Do you mean you have large bandwidth×delay paths in your network, or something else?
there is no rest for the wicked yet the virtuous have no pillows
KPS
Super Advisor

Re: ndd parameters

I'm in the process of going back to the vendor to find out exactly why they suggested these values. I believe it's because of the bandwidth and the type of TCP traffic they expect between our application-tier and database-tier servers using this app.