
ninode kernel parameter

 
OldSchool
Honored Contributor

ninode kernel parameter

I have a piece of software installed that occasionally (approx. once every 7-10 days) reports that a file isn't open after a call was made to open it. It uses its own error message numbers, not standard UNIX error codes.

Looking at the usual suspects in the system tables (nproc, nfile, flock, maxfiles, maxfiles_lim), all are less than 40% used. However, ninode maxes out (currently 7120).
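
For reference, this is roughly how I've been watching the tables; on this box sar -v reports the current/maximum sizes of the process, inode, and file tables (column names may differ a bit by release):

    # sample the system tables every 5 seconds, 3 times;
    # proc-sz, inod-sz and file-sz show current/maximum entries
    sar -v 5 3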

Looking at the man pages for fopen and open, I don't see anything that specifically states that hitting the max on ninode will cause issues. The OS is HP-UX 11.0, btw.

Can anybody confirm (or deny) this?

Thx...scott
Dave Olker
Neighborhood Moderator
Solution

Re: ninode kernel parameter

Hi Scott,

ninode, as you may already know, sizes (among other things) the HFS inode table. Are you using HFS filesystems on your 11.0 system? Does your application open lots of files in the HFS filesystem? Do you see any messages in the dmesg or syslog.log output indicating "file table full"?
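
For example, something along these lines should turn up any table-overflow complaints (this assumes the default syslog location; adjust the path if yours differs):

    # check the kernel message buffer and syslog for "table full" style messages
    dmesg | grep -i full
    grep -i full /var/adm/syslog/syslog.log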

Regards,

Dave


OldSchool
Honored Contributor

Re: ninode kernel parameter

Hi Dave,

The only HFS filesystem is /stand; everything else is VxFS...

I wasn't aware of the relationship you noted between HFS and ninode. I do see it described as an "inode cache" and associated with the number of open inodes.

We know when the last one of these events occurred (yesterday afternoon), and there is nothing of interest reported by dmesg or showing in syslog.

"maxfiles" is set kind of low, but I didn't think a "soft" limit would cause issues... you would be OK until you hit "maxfiles_lim". Am I mistaken?
Dave Olker
Neighborhood Moderator

Re: ninode kernel parameter

Hi Scott,

So you're not really using HFS, which means ninode is likely not the culprit. If you're using VxFS filesystems, that inode table is sized by different tunables. The defaults set by the kernel are based on the amount of physical memory in your system, and they tend to be sized on the high side, so I doubt you're running out of VxFS inodes.

The maxfiles limit could be responsible for the application issue, though your application vendor should be able to tell you definitively what would cause the problem you're seeing.

How do you have maxfiles and maxfiles_lim set currently?
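
If it helps, something like this shows both the kernel settings and what a shell actually inherits (kmtune is the query tool on 11.0; the ulimit flags below are the POSIX shell's soft/hard variants):

    # kernel tunables for per-process file descriptors
    kmtune -q maxfiles
    kmtune -q maxfiles_lim

    # soft and hard "nofiles" limits of the current shell
    ulimit -Sn
    ulimit -Hn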

Dave


F Verschuren
Esteemed Contributor

Re: ninode kernel parameter

If ninode (the max number of open inodes) is maxing out, the program probably is claiming inodes and not closing them again. I read in the other replies that ninode is only for HFS; I didn't know that, but if that is the case, please look at bdf -i /stand.
Maybe the program is creating a lot of files (7120?) in /stand.
Normally iused is not much bigger than 50.
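
Something like this will show whether /stand is anywhere near its inode limit (the -i option adds iused / ifree / %iuse columns to the usual bdf output):

    # inode usage on the HFS boot filesystem
    bdf -i /stand
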
James R. Ferguson
Acclaimed Contributor

Re: ninode kernel parameter

Hi Scott:

To add to Dave's comments regarding HFS filesystems and 'ninode', see this excellent whitepaper:

http://docs.hp.com/en/7779/commonMisconfig.pdf

Regards!

...JRF...
OldSchool
Honored Contributor

Re: ninode kernel parameter

maxfiles and maxfiles_lim are set to 90 and 1024, respectively.

The app vendor wasn't real helpful... they suggested "run ulimit -a" and "change the 'nofiles' entry; that's usually the problem."

Of course, he also kept referring to "file handles" and didn't seem to grasp the concept of "transactions" either.

I'm going to bump both of the above up as soon as I can get scheduled downtime.

I just wasn't sure as to the impact of ninode. From your replies, and the doc pointed out by JRF, this doesn't appear to be the issue.
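
For the record, the plan for the downtime window is roughly the usual 11.0 rebuild sequence below (the numbers are placeholders, not recommendations, and SAM can do the same thing through its kernel configuration area):

    # stage the new values (placeholder numbers only)
    kmtune -s maxfiles=256
    kmtune -s maxfiles_lim=2048

    # build a new kernel, schedule it for the next boot, and reboot
    mk_kernel -o /stand/build/vmunix_test
    kmupdate /stand/build/vmunix_test
    shutdown -r -y 0
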
Dave Olker
Neighborhood Moderator

Re: ninode kernel parameter

What value is the vendor recommending you increase nofiles to with "ulimit -n"? Will you be using that value for the new maxfiles setting when you rebuild your kernel?

Dave


OldSchool
Honored Contributor

Re: ninode kernel parameter

That's just it... they have no recommendation other than "that's probably the problem" and "make it bigger". They didn't even bother to ask what the values are currently set to.

I'd like to go into this with at least an idea of what's what.

I've been watching nfile and the rest with glance, and their high-water marks (so far) are less than 40%. To me, the system-wide stuff doesn't seem to be an issue.

I've yet to find a method to follow the "per user" limits... and as the app doesn't report anything other than its own error codes, I've nothing to base a fix on.

I should have mentioned that the file that fails to open is basically a read-only file that already exists, so it's not a disk space issue...

It would be really nice if the app returned something like [EMFILE] or [ENFILE]. I guess something that provides a useful clue is a bit much to ask for, however.
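
The closest I've found for watching a single process is something like this; it assumes lsof is installed, which isn't a given on 11.0 (glance's per-process reports can show open files interactively as well, if I remember right):

    # rough count of open file descriptors for one AppB instance
    # (replace 12345 with the pid of a running AppB process)
    lsof -p 12345 | wc -l
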
Dave Olker
Neighborhood Moderator

Re: ninode kernel parameter

Well... if you really want to investigate why the problem is happening and collect some concrete evidence before you rebuild the kernel with a higher maxfiles setting, you could use tusc to attach to the application process and trace its system calls. When the problem occurs, you would look at the tusc output and see whether the application failed because a call to open(), fopen(), or some other system call ran into the maxfiles limit.

Unfortunately it sounds like this problem occurs very intermittently so you'd end up with tusc attached to your application for quite a while, which would generate a pretty sizable log file while you wait for the problem to happen.

Now, if you had a way to force the application to duplicate the behavior then collecting a tusc trace would be ideal. Even if the application won't return a standard error code, tusc will show you which system call is failing and why.
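
Something along these lines is what I have in mind; tusc options vary a bit between versions, so check the usage on your copy, and the output path and pid below are only placeholders:

    # attach to a running AppB process, follow any children it forks,
    # and log every system call with its return value / errno
    tusc -f -o /var/tmp/appB.tusc 12345

    # afterwards, look for failing calls around open()/fopen() activity;
    # tusc flags failures with the errno name (e.g. EMFILE vs ENFILE)
    grep -i err /var/tmp/appB.tusc | grep -i open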

Just a thought...

Dave


OldSchool
Honored Contributor

Re: ninode kernel parameter

It's worse than you think... each user runs an instance of AppA. AppA calls AppB. AppB is the one failing. Hundreds of users a day... each of whom may (or may not) experience the issue.

I've not been able to rule out things like nfile either... I just haven't seen that one exceed 37% at its high-water mark (yet).

Of course, there is always the possibility that AppB has some other problem and is just reporting that it didn't open the file.

Thanks for all the input.

scott
OldSchool
Honored Contributor

Re: ninode kernel parameter

OK, the original question was about the ninode parameter.

The responses indicate that it is not the issue here, so I'm going to close this.

Thanks to all for the help.