sar -v and ninode results
09-18-2003 04:23 AM
I ask because, of our 16 HP servers, the others display values like 788/9284, 3400/2100910, and 3245/6188, which implies that the number of inodes currently in use never reaches the maximum.
Our SUN servers, on the other hand, display values (also for vxfs file systems) like 2187/2187, 1998/1998, 2002/2002, and so on.
That is the reporting of the dynamic value I'm used to (ninode is dynamic in vxfs), and I'm trying to verify that the HP-UX versions of 'sar -v' are reporting a misleading ninode value that can be ignored.
Comments? Concerns?
In the long run I'm trying to track down the reason for resource waits at the disk level, TCP disconnects, and web server hangs.
Thanks in advance.
Alien.
09-18-2003 04:36 AM
Re: sar -v and ninode results
ninode on HP-UX is just the number of entries in the kernel's HFS inode cache. So if it's maxed out, that means the maximum number of HFS inodes have been cached. This can happen very quickly on a server that has any HFS filesystems. For the servers that aren't maxed out, I would guess they are either all VxFS or their HFS filesystems are not used much.
Bottom line: there is nothing to worry about with those numbers.
HTH.
BTW, vx_ninode (on 11i + patch) is what controls the number of cache entries for VxFS filesystems. But sar does not report on this, AFAIK.
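For a quick check on a given box, a hedged sketch (kmtune is the 11.x tunable query tool; the vx_ninode tunable assumes a patched 11i kernel):

  # HFS inode cache size -- the ceiling sar -v reports in inod-sz
  kmtune -q ninode
  # VxFS inode cache size, if the tunable is present (0 = sized dynamically)
  kmtune -q vx_ninode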
09-18-2003 04:53 AM
Re: sar -v and ninode results
The inode values shown in sar -v output are not hard limits like the file-sz and proc-sz values, where reaching the limit means a new file or process cannot be handled on the server.
The inode value reads like this: for example, if it shows 9284/9284, the number to the left of the "/" is the number of open inodes in the inode table cache, and the number on the right is the total number of inodes that can be open in the inode cache table.
This is configured using the kernel parameter ninode.
It is recommended that this value be at the maximum so that there is good performance.
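To make the reading concrete, here is an illustrative (not captured) sar -v line from an 11.x box; the inod-sz column is the pair being discussed:

  $ sar -v 5 1
  12:00:00 text-sz  ov  proc-sz   ov  inod-sz    ov  file-sz   ov
  12:00:05   N/A   N/A  121/1024  0   9284/9284  0   834/2048  0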
Thanks
09-18-2003 05:05 AM
Re: sar -v and ninode results
You said, "...it's recommended that the open ninode value be at max for good performance...".
If ninode for vxfs is dynamic rather than static, then won't good performance always be the case? I could only see poor performance as an issue if static numbers like 600/201455 were being reported, i.e. only 3% of the ninode table in use. But vxfs is not a fixed table; it's dynamic.
So good performance is always present. No?
09-18-2003 05:15 AM
Re: sar -v and ninode results
Also, as I said before, sar does not report on VxFS (vx_ninode), only HFS (ninode). So those numbers are only useful if your systems are mostly HFS.
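A quick way to see how much HFS a box actually has (hedged sketch; bdf's -t option filters by filesystem type on HP-UX):

  # list only HFS filesystems; on most 11.x boxes this is just /stand
  bdf -t hfs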
HTH.
09-18-2003 05:31 AM
Re: sar -v and ninode results
Like most, I've only got one HFS file system, and I can't believe /stand is using 9284 out of 9284 inodes. So just how reliable is this sar -v number? Are some vxfs values being added in?
If you are saying that sar -v is only accurate for servers with HFS file systems and no VxFS file systems, then I would agree. But this is 11.00, not 9.0 or earlier.
Ultimately I'm looking for a relationship between our problems and a maxed-out ninode report, and it appears that there is none.
09-18-2003 11:06 AM
Re: sar -v and ninode results
09-18-2003 11:17 AM
Re: sar -v and ninode results
You probably want to collect more data. I'm attaching a script that does that and lets you set the collection period. It runs in the background.
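The attachment isn't reproduced here; as a rough stand-in, a minimal background sar collector might look like this (interval, count, and log path are illustrative):

  #!/usr/bin/sh
  # sample sar -v every 60 seconds for 24 hours, logging in the background
  INTERVAL=60
  COUNT=1440
  LOG=/var/tmp/sar_v.`hostname`.`date +%Y%m%d`
  nohup sar -v $INTERVAL $COUNT > $LOG 2>&1 &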
You should know that leaving the kernel parameter vx_ninode at 0 (zero) can vastly distort your system. This lets the system decide what level to set it to and change it whenever it wants. This is a vast waste of resources and can cause nasty results.
I'm pasting in a link that helps with basic performance tuning. It covers this and other topics. It's a good read, and I know the author; he's a wiz.
http://www2.itrc.hp.com/service/cki/search.do?category=c0&docType=Security&docType=Patch&docType=EngineerNotes&docType=BugReports&docType=Hardware&docType=ReferenceMaterials&docType=ThirdParty&searchString=UPERFKBAN00000726&search.y=8&search.x=28&mode=id&admit=-1335382922+1063912588647+28353475&searchCrit=allwords
Doc ID: UPERFKBAN00000726
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
09-18-2003 12:18 PM
Re: sar -v and ninode results
So, to test this I took our TestPRD box (which was having some similar issues), changed our vx_ninode setting to 90% of our ninode value, and then set vx_noifree to 1. It has been running about a week with these settings, and my memory utilization has stabilized. Before I made these changes, memory utilization was higher and it did not grow and shrink; it only grew. It acted like a memory leak. It would eventually get so bad that the box would start swapping, and then of course vhand kicked in and ate up CPU. So I think that is what is happening on our app servers, but they get rebooted every weekend and don't have a chance to get that bad.
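For reference, a hedged sketch of making those two changes with kmtune on 11i (the value is a placeholder; static tunables need a kernel rebuild and a reboot to take effect):

  kmtune -s vx_ninode=8356   # example only: ~90% of a ninode of 9284
  kmtune -s vx_noifree=1     # don't release VxFS inode cache memory
  # then rebuild the kernel (mk_kernel) and reboot as required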
Anyway, I just thought I would drop this in your ear. Maybe you will see some similarities to your situation.
Hope this helps!
-Bryan
09-18-2003 07:04 PM
Re: sar -v and ninode results
As pointed out before, ninode is only for the HFS filesystem, and there is absolutely no need to have it above 500-1000 if you have no HFS filesystem other than /stand. You can simply delete the formula and hardcode the number there. The formula is no longer valid since the introduction of vxfs's vx_ninode.
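For example (a hedged illustration; the exact stanza depends on your release), hardcoding it means replacing the ninode formula in /stand/system with a literal and regenerating the kernel:

  ninode  1024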
You are right that sar is reporting a wrong value, as it cannot keep track of closed files whose inodes would be usable again.
On the other hand, vx_ninode is dynamic if set to 0 for vxfs filesystems, and its value is not reflected in sar -v output. You can get it by using:
echo vxfs_ninode/D | adb -k /stand/vmunix /dev/mem
-Sri