Operating System - HP-UX

NFS: Configured kernel resource exhausted, NFS mount limit

 
SOLVED
Scott Stonefield
New Member

NFS: Configured kernel resource exhausted, NFS mount limit

I am having a problem with the NFS client on HP-UX 11.31. We have a number of NetApp filers that we are trying to mount. When I run a script to mount all of these volumes, I hit a point where I get this error message:

nfs mount: : Configured kernel resource exhausted

If I look in /var/adm/syslog/syslog.log, I see this error message:
vmunix: WARNING: Maximum number of threads reached. Please increase 'nkthread' value. NFS mount of failed.

I have looked at nkthread and I don't seem to be coming close to using all that is allocated:

Tunable Usage / Setting
=============================================
nkthread 538 / 8416

I thought that maybe my script that goes through and mounts all these volumes was the problem, but when I looked more closely it seems that I can only mount 99 volumes and then I can't mount any more.

Does anyone know if there is a kernel tunable that will allow us to mount more than 99 NFS volumes? I have two machines on which I have reproduced this problem.

Thanks for any help on this.

-Scott
8 REPLIES
Dave Olker
HPE Pro

Re: NFS: Configured kernel resource exhausted, NFS mount limit

Hi Scott,

No, there isn't any artificial limit of 99 mounts on the NFS client. There must be a different kernel limit. If nkthread looks good then my guess is you've hit the "max_thread_proc" limit.

Can you please post the output of this command:

# kctune

I'd like to see how you have certain kernel tunables set.
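
If posting everything is inconvenient, a quick filter over the kctune output will pull out the tunables that matter here (just a sketch; the grep pattern is only illustrative):

# kctune | grep -E 'max_thread_proc|nkthread|nfs[234]_max_threads|nproc|maxuprc'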

Thanks,

Dave
I work for HPE

[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
Steven E. Protter
Exalted Contributor

Re: NFS: Configured kernel resource exhausted, NFS mount limit

Shalom,

Use kctune -l and look for kernel parameters set to 99 or in that range.

max_thread_proc might be too low; nproc and maxuprc might also be too low.
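
As a rough illustration (a sketch, not an official procedure), the numeric Value column of the kctune output can be filtered with awk to spot anything set near 99:

# kctune | awk 'NR > 1 && $2+0 >= 90 && $2+0 <= 110 { print $1, $2 }'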

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Scott Stonefield
New Member

Re: NFS: Configured kernel resource exhausted, NFS mount limit

Here is the kctune output:

I looked for values around 100. There are only a few, and none of them look to me like something that would cause this behavior.

bb04wrk41 #> kctune
Tunable Value Expression Changes
NSTREVENT 50 Default
NSTRPUSH 16 Default
NSTRSCHED 0 Default
STRCTLSZ 1024 Default
STRMSGSZ 0 Default
acctresume 4 Default
acctsuspend 2 Default
aio_iosize_max 0 Default Immed
aio_listio_max 256 Default Immed
aio_max_ops 2048 Default Immed
aio_monitor_run_sec 30 Default Immed
aio_physmem_pct 10 Default Immed
aio_prio_delta_max 20 Default Immed
aio_proc_max 0 Default Immed
aio_proc_thread_pct 70 Default Immed
aio_proc_threads 1024 Default Immed
aio_req_per_thread 1 Default Immed
allocate_fs_swapmap 0 Default
alwaysdump 0 Default Immed
audit_memory_usage 5 Default Immed
audit_track_paths 0 Default Auto
copy_on_write 1 Default Immed
core_addshmem_read 0 Default Immed
core_addshmem_write 0 Default Immed
create_fastlinks 0 Default
default_disk_ir 0 Default
diskaudit_flush_interval 5 Default Immed
dma32_pool_size 268435456 Default
dmp_rootdev_is_vol 0 Default
dmp_swapdev_is_vol 0 Default
dnlc_hash_locks 512 Default
dontdump 0 Default Immed
dst 1 Default
dump_compress_on 1 Default Immed
dump_concurrent_on 1 Default Immed
executable_stack 0 Default Immed
expanded_node_host_names 0 Default Immed
fcache_fb_policy 0 Default Immed
fcache_seqlimit_file 100 Default Immed
fcache_seqlimit_system 100 Default Immed
filecache_max 16325685248 Default Auto
filecache_min 1632567296 Default Auto
fr_statemax 800000 Default
fr_tcpidletimeout 86400 Default
fs_async 0 Default
fs_symlinks 20 Default Immed
ftable_hash_locks 64 Default
gvid_no_claim_dev 0 Default
hires_timeout_enable 0 Default Immed
hp_hfs_mtra_enabled 1 Default
intr_strobe_ics_pct 80 Default Immed
io_ports_hash_locks 64 Default
ipl_buffer_sz 8192 Default
ipl_logall 0 Default
ipl_suppress 1 Default
ipmi_watchdog_action 0 Default Immed
ksi_alloc_max 33600 Default Immed
ksi_send_max 32 Default
lcpu_attr 0 Default Auto
max_acct_file_size 2560000 Default Immed
max_async_ports 4096 Default Immed
max_mem_window 0 Default Immed
max_thread_proc 1100 1100 Immed
maxdsiz 1073741824 Default Immed
maxdsiz_64bit 4294967296 Default Immed
maxfiles 2048 Default
maxfiles_lim 4096 Default Immed
maxrsessiz 8388608 Default
maxrsessiz_64bit 8388608 Default
maxssiz 8388608 Default Immed
maxssiz_64bit 268435456 Default Immed
maxtsiz 100663296 Default Immed
maxtsiz_64bit 1073741824 Default Immed
maxuprc 256 Default Immed
mca_recovery_on 1 Default Auto
msgmbs 8 Default Immed
msgmnb 16384 Default Immed
msgmni 512 Default Immed
msgtql 1024 Default Immed
ncdnode 150 Default
nclist 8292 Default
ncsize 8976 Default
nflocks 4096 Default Auto
nfs2_max_threads 16 16 Immed
nfs2_nra 4 Default Immed
nfs3_bsize 32768 Default Immed
nfs3_do_readdirplus 1 Default Immed
nfs3_jukebox_delay 1000 Default Immed
nfs3_max_threads 16 16 Immed
nfs3_max_transfer_size 1048576 Default Immed
nfs3_max_transfer_size_cots 1048576 Default Immed
nfs3_nra 4 Default Immed
nfs4_bsize 32768 Default Immed
nfs4_max_threads 100 100 Immed
nfs4_max_transfer_size 1048576 Default Immed
nfs4_max_transfer_size_cots 1048576 Default Immed
nfs4_nra 4 Default Immed
nfs_portmon 0 Default Immed
ninode 8192 Default
nkthread 8416 Default Immed
nproc 4200 Default Immed
npty 60 Default
nstrpty 60 Default
nstrtel 60 Default
nswapdev 32 Default
nswapfs 32 Default
numa_policy 0 Default Immed
pa_maxssiz_32bit 83648512 Default
pa_maxssiz_64bit 536870912 Default
pagezero_daemon_enabled 1 Default Immed
patch_active_text 1 Default Immed
pci_eh_enable 1 Default
pci_error_tolerance_time 1440 Default Immed
process_id_max 30000 Default Auto
process_id_min 0 Default Auto
remote_nfs_swap 0 Default
rng_bitvals 9876543210 Default
rng_sleeptime 2 Default
rtsched_numpri 32 Default
sched_thread_affinity 6 Default Immed
scroll_lines 100 Default
secure_sid_scripts 1 Default Immed
semaem 16384 Default
semmni 2048 Default
semmns 4096 Default
semmnu 256 Default
semmsl 2048 Default Immed
semume 100 Default
semvmx 32767 Default
shmmax 1073741824 Default Immed
shmmni 400 Default Immed
shmseg 300 Default Immed
streampipes 0 Default
swchunk 2048 Default
sysv_hash_locks 128 Default
tcphashsz 0 Default
timeslice 10 Default
timezone 420 Default
uname_eoverflow 1 Default Immed
vnode_cd_hash_locks 128 Default
vnode_hash_locks 128 Default
vol_checkpt_default 10240 Default
vol_dcm_replay_size 262144 Default
vol_default_iodelay 50 Default
vol_fmr_logsz 4 Default
vol_max_bchain 32 Default
vol_max_nconfigs 20 Default
vol_max_nlogs 20 Default
vol_max_nmpool_sz 4194304 Default Immed
vol_max_prm_dgs 1024 Default
vol_max_rdback_sz 4194304 Default Immed
vol_max_vol 8388608 Default
vol_max_wrspool_sz 4194304 Default Immed
vol_maxio 256 Default
vol_maxioctl 32768 Default
vol_maxkiocount 2048 Default
vol_maxparallelio 256 Default
vol_maxspecialio 256 Default
vol_maxstablebufsize 256 Default
vol_min_lowmem_sz 532480 Default Immed
vol_mvr_maxround 256 Default
vol_nm_hb_timeout 10 Default
vol_rootdev_is_vol 0 Default
vol_rvio_maxpool_sz 4194304 Default Immed
vol_subdisk_num 4096 Default
vol_swapdev_is_vol 0 Default
vol_vvr_transport 1 Default
vol_vvr_use_nat 0 Default
volcvm_cluster_size 16 Default
volcvm_smartsync 1 Default
voldrl_max_drtregs 2048 Default
voldrl_min_regionsz 512 Default
voliomem_chunk_size 65536 Default
voliomem_maxpool_sz 4194304 Default
voliot_errbuf_dflt 16384 Default
voliot_iobuf_default 8192 Default
voliot_iobuf_limit 131072 Default
voliot_iobuf_max 65536 Default
voliot_max_open 32 Default
volpagemod_max_memsz 6144 Default Immed
volraid_rsrtransmax 1 Default
vps_ceiling 16 Default Immed
vps_chatr_ceiling 1048576 Default Immed
vps_pagesize 16 Default Immed
vx_maxlink 32767 Default
vx_ninode 0 Default Immed
vxfs_bc_bufhwm 0 Default Immed
vxfs_ifree_timelag 0 Default Immed
vxtask_max_monitors 32 Default
Dave Olker
HPE Pro
Solution

Re: NFS: Configured kernel resource exhausted, NFS mount limit

Hi Scott,

Thanks for posting that output. Here's what I'm interested in:

max_thread_proc 1100 1100 Immed
nfs3_max_threads 16 16 Immed


Each NFS mount point gets a service thread pool of 16 threads. 99 mounts * 16 threads = 1584 threads total. Now, not every pool will have all 16 threads running at any given time. However, your max_thread_proc value is set at 1100, so as soon as the NFS client tries to create a new thread pool and hits the max_thread_proc limit, the mount fails.

You have a few options here.

o decrease nfs3_max_threads
o increase max_thread_proc
o both

Once you allow more NFS mounts to occur you will likely need to increase the nkthread value to make sure you don't hit that limit next.
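
For reference, a minimal sketch of what those adjustments could look like with kctune (the numbers here are only illustrative, not recommendations; size them for your own mount count and check the current values first):

# kctune nfs3_max_threads=8
# kctune max_thread_proc=3000
# kctune nkthread=16000

With 8 threads per pool, roughly 100 mounts would need at most about 800 NFS I/O threads, which fits comfortably under those limits.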

Regards,

Dave
I work for HPE

[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
Ravi kumar raju
Advisor

Re: NFS: Configured kernel resource exhausted, NFS mount limit

Hi Dave,

I am also facing the same issue. What value do I have to give to nkthread?

mumbo# kctune
Tunable Value Expression Changes
NSTREVENT 50 Default
NSTRPUSH 16 Default
NSTRSCHED 0 Default
STRCTLSZ 1024 Default
STRMSGSZ 0 Default
acctresume 4 Default
acctsuspend 2 Default
aio_iosize_max 0 Default Immed
aio_listio_max 256 Default Immed
aio_max_ops 2048 Default Immed
aio_monitor_run_sec 30 Default Immed
aio_physmem_pct 10 Default Immed
aio_prio_delta_max 20 Default Immed
aio_proc_max 0 Default Immed
aio_proc_thread_pct 70 Default Immed
aio_proc_threads 1024 Default Immed
aio_req_per_thread 1 Default Immed
allocate_fs_swapmap 0 Default
alwaysdump 0 Default Immed
audit_memory_usage 5 Default Immed
audit_track_paths 0 Default Auto
core_addshmem_read 0 Default Immed
core_addshmem_write 0 Default Immed
create_fastlinks 0 Default
default_disk_ir 0 Default
diskaudit_flush_interval 5 Default Immed
dmp_rootdev_is_vol 0 Default
dmp_swapdev_is_vol 0 Default
dnlc_hash_locks 512 Default
dontdump 0 Default Immed
dst 1 Default
dump_compress_on 1 Default Immed
dump_concurrent_on 0 Default Immed
eqmem_limit 65536 Default
executable_stack 0 Default Immed
expanded_node_host_names 1 1 Immed
fcache_fb_policy 0 Default Immed
fcache_seqlimit_file 100 Default Immed
fcache_seqlimit_system 100 Default Immed
filecache_max 4071505920 Default Auto
filecache_min 407150592 Default Auto
fr_statemax 800000 Default Immed
fr_tcpidletimeout 86400 Default Immed
fs_async 0 Default
fs_symlinks 20 Default Immed
ftable_hash_locks 64 Default
gvid_no_claim_dev 0 Default
hires_timeout_enable 0 Default Immed
hp_hfs_mtra_enabled 1 Default
intr_strobe_ics_pct 80 Default Immed
io_ports_hash_locks 64 Default
ipl_buffer_sz 8192 Default Immed
ipl_logall 0 Default Immed
ipl_suppress 1 Default Immed
ipmi_watchdog_action 0 Default Immed
ksi_alloc_max 33600 Default Immed
ksi_send_max 32 Default
lcpu_attr 0 Default Auto
max_acct_file_size 2560000 Default Immed
max_async_ports 4096 Default Immed
max_mem_window 0 Default Immed
max_thread_proc 256 256 Immed
maxdsiz 0xff000000 0xff000000 Immed
maxdsiz_64bit 0x1000000000 0x1000000000 Immed
maxfiles 1024 1024
maxfiles_lim 4096 Default Immed
maxssiz 0x6400000 0x6400000 Immed
maxssiz_64bit 0x40000000 0x40000000 Immed
maxtsiz 0x40000000 0x40000000 Immed
maxtsiz_64bit 0x1000000000 0x1000000000 Immed
maxuprc 1024 1024 Immed
msgmbs 8 Default Immed
msgmnb 32768 32768 Immed
msgmni 100 100 Immed
msgtql 256 256 Immed
ncdnode 300 300
nclist 8292 Default
ncsize 8976 Default
nflocks 10000 10000 Imm (auto disabled)
nfs2_max_threads 16 Default Immed
nfs2_nra 4 Default Immed
nfs2_srv_read_copyavoid 0 Default Immed
nfs3_bsize 32768 Default Immed
nfs3_do_readdirplus 1 Default Immed
nfs3_jukebox_delay 1000 Default Immed
nfs3_max_threads 16 Default Immed
nfs3_max_transfer_size 1048576 Default Immed
nfs3_max_transfer_size_cots 1048576 Default Immed
nfs3_nra 4 Default Immed
nfs3_srv_read_copyavoid 0 Default Immed
nfs4_bsize 32768 Default Immed
nfs4_max_threads 16 Default Immed
nfs4_max_transfer_size 1048576 Default Immed
nfs4_max_transfer_size_cots 1048576 Default Immed
nfs4_nra 4 Default Immed
nfs_portmon 0 Default Immed
ninode 8192 Default
nkthread 8416 Default Immed
nproc 4200 Default Immed
npty 256 256
nstrpty 256 256
nstrtel 60 Default
nswapdev 32 Default
nswapfs 32 Default
pagezero_daemon_enabled 1 Default Immed
pci_eh_enable 1 Default
pci_error_tolerance_time 1440 Default Immed
process_id_max 30000 Default Auto
process_id_min 0 Default Auto
remote_nfs_swap 0 Default
rng_bitvals 9876543210 Default
rng_sleeptime 2 Default
rtsched_numpri 32 Default
sched_thread_affinity 6 Default Immed
scroll_lines 100 Default
secure_sid_scripts 1 Default Immed
semaem 16384 Default
semmni 128 128
semmns 256 256
semmnu 256 Default
semmsl 2048 Default Immed
semume 100 Default
semvmx 32767 Default
shmmax 251658240 0XF000000 Immed
shmmni 400 Default Immed
shmseg 300 Default Immed
streampipes 0 Default
swchunk 2048 Default
sysv_hash_locks 128 Default
tcphashsz 2048 Default
timeslice 10 Default
timezone 420 Default
uname_eoverflow 1 Default Immed
vnode_cd_hash_locks 128 Default
vnode_hash_locks 128 Default
vol_checkpt_default 10240 Default
vol_dcm_replay_size 262144 Default
vol_default_iodelay 50 Default
vol_fmr_logsz 4 Default
vol_max_bchain 32 Default
vol_max_nconfigs 20 Default
vol_max_nlogs 20 Default
vol_max_nmpool_sz 4194304 Default Immed
vol_max_prm_dgs 1024 Default
vol_max_rdback_sz 4194304 Default Immed
vol_max_vol 8388608 Default
vol_max_wrspool_sz 4194304 Default Immed
vol_maxio 256 Default
vol_maxioctl 32768 Default
vol_maxkiocount 2048 Default
vol_maxparallelio 256 Default
vol_maxspecialio 256 Default
vol_maxstablebufsize 256 Default
vol_min_lowmem_sz 532480 Default Immed
vol_mvr_maxround 256 Default
vol_nm_hb_timeout 10 Default
vol_rootdev_is_vol 0 Default
vol_rvio_maxpool_sz 4194304 Default Immed
vol_subdisk_num 4096 Default
vol_swapdev_is_vol 0 Default
vol_vvr_transport 1 Default
vol_vvr_use_nat 0 Default
volcvm_cluster_size 16 Default
volcvm_smartsync 1 Default
voldrl_max_drtregs 2048 Default
voldrl_min_regionsz 512 Default
voliomem_chunk_size 65536 Default
voliomem_maxpool_sz 4194304 Default
voliot_errbuf_dflt 16384 Default
voliot_iobuf_default 8192 Default
voliot_iobuf_limit 131072 Default
voliot_iobuf_max 65536 Default
voliot_max_open 32 Default
volpagemod_max_memsz 6144 Default Immed
volraid_rsrtransmax 1 Default
vps_ceiling 16 Default Immed
vps_chatr_ceiling 1048576 Default Immed
vps_pagesize 16 Default Immed
vx_maxlink 32767 Default
vx_ninode 0 Default Immed
vxfs_bc_bufhwm 0 Default Immed
vxfs_ifree_timelag 0 Default Immed
vxtask_max_monitors 32 Default
mumbo#
Dave Olker
HPE Pro

Re: NFS: Configured kernel resource exhausted, NFS mount limit

Hi Ravi,

When you say you have the same problem, you mean you're not able to mount NFS filesystems because of an exhausted kernel resource?

What OS is the NFS client system running? How many NFS mounts are you planning on using? What is the exact error message you see when you attempt to mount a filesystem and it fails? Are there any messages in either dmesg or /var/adm/syslog/syslog.log when this happens?

Regards,

Dave
I work for HPE

[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
sujit kumar singh
Honored Contributor

Re: NFS: Configured kernel resource exhausted, NFS mount limit

Hi

Also, can the output of cat /etc/defaults/nfs be posted from the client, so we know which version of NFS is actually being affected? Is it NFSv3 or NFSv4?

In that case this kernel parameter can be checked:
nfs4_max_threads 100 100 Immed
regards
sujit
Dave Olker
HPE Pro

Re: NFS: Configured kernel resource exhausted, NFS mount limit

Hi Sujit,

> Also can the O/P of cat /etc/defaults/nfs
> be posted from the client so as to know
> which version of NFS is actually being
> affected? Is that NFS3 or NFS4.

The /etc/defaults/nfs file only exists on 11i v3, so if you have that file it answers my "which OS are you running" question. :)

Also, just because NFS v4 is enabled on this system, that doesn't mean NFS v4 is being used. V4 is only used if *both* systems (client and server) support v4 and negotiate it at mount time. Even if both systems support NFS v4 you can always override that using the "vers=3" option.
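
For example (illustrative server and mount point names, assuming an 11i v3 client), forcing NFSv3 at mount time would look something like:

# mount -F nfs -o vers=3 server:/export /mnt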

To find out for certain which version of NFS is being used on a per-mount basis you should use the "nfsstat -m" command and check the output to see what version the mount points are using.
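
For instance (a sketch; the exact output format can vary by release), the Flags line that nfsstat -m prints for each mount typically includes a vers= field:

# nfsstat -m | grep -i vers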

> In that case this kernel parameter can be
> checked;
> nfs4_max_threads 100 100 Immed

Be very careful tuning the nfs#_max_threads to something like 100. This parameter tells the NFS client how many threads it can use *for each individual mount point* - not how many threads to use system-wide for all NFS I/O on the client. In other words, if you have 100 NFS v4 mount points and you configure nfs4_max_threads to 100 you could conceivably cause 10,000 threads to perform NFS v4 operations. I've personally never seen a situation where that makes sense, but there may be some out there.

The default for all the nfs#_max_threads tunables is 8, meaning each NFS filesystem will have a dedicated pool of 8 worker threads for that specific filesystem. That typically works well for most workloads. On occasion I've seen situations where 16 or 32 gives better performance, but those are typically cases where the NFS client is a very large system and it uses few NFS mount points to do all its NFS activity. In those cases having a higher limit of worker threads may make sense.

Do you really need 100 dedicated worker threads for each filesystem (i.e. you've done comparisons with 8/16/32/64/100 threads and found 100 threads gives you the best performance and behavior)?

Obviously, the higher you configure the limits for these worker thread pools, the higher you'll need to set nkthread and max_thread_proc. Again, these tunables only affect 11i v3 systems. Prior to 11i v3 we used a system-wide pool of biod daemons, so you would size the biod pool instead.
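
As a rough sanity check (a sketch only; the mount count and the awk field positions are assumptions to verify on your own system), you can compare the worst-case thread demand against the current limits:

#!/usr/bin/sh
# Approximate worst-case NFS client thread demand (illustrative only).
mounts=$(mount -v | grep -c ' type nfs ')                        # rough count of NFS mounts
per_mount=$(kctune | awk '$1 == "nfs3_max_threads" { print $2 }')
echo "worst case: $((mounts * per_mount)) threads across $mounts NFS mounts"
echo "compare this against max_thread_proc and nkthread in kctune"

If that worst-case number is anywhere near max_thread_proc, mounts will start failing exactly as described above.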

Regards,

Dave
I work for HPE

[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]