10-30-2001 06:39 AM
F/S Buffer cache & CPU & WIO
My question concerns an N4000/44 (4 CPU, 4 GB) system where dbc_max_pct was reduced from 50% to 15% because of > 95% memory usage, pageouts, deactivations/reactivations, and reclaiming of memory pages from the f/s cache.
The result of this change was a vast improvement: memory usage dropped below 95%, f/s caching levels were still in the high 90%'s, and CPU %system fell, but there was also an additional 15% CPU %WIO (from 14% to 31% average).
Can anyone explain why this should happen, and should I be concerned about this?
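For reference, the effect of the dbc_max_pct change on the maximum cache size is simple arithmetic (a sketch only; the 4 GB and percentage figures are from the post above):

```python
# Upper bound on the dynamic buffer cache for a given dbc_max_pct.
# Illustrative arithmetic only; values taken from the post above.
TOTAL_RAM_MB = 4 * 1024  # 4 GB N4000

def buffer_cache_mb(dbc_max_pct, ram_mb=TOTAL_RAM_MB):
    """Maximum dynamic buffer cache size in MB."""
    return ram_mb * dbc_max_pct / 100

print(buffer_cache_mb(50))  # old setting -> 2048.0 MB
print(buffer_cache_mb(15))  # new setting -> 614.4 MB (~600 MB)
```

So the change released roughly 1.4 GB of memory back to processes, which matches the drop in pageouts and deactivations described above.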
10-30-2001 06:54 AM
Re: F/S Buffer cache & CPU & WIO
If you keep less in the file buffer cache, then I'd expect some "penalty" for having to wait to do an I/O. I think the important factor is overall performance. If it's better then you have moved in the right direction.
Regards!
...JRF...
10-30-2001 06:54 AM
Re: F/S Buffer cache & CPU & WIO
Your description should be more precise, because the memory statistics depend mostly on what software you run on the N4000 and what kind of file systems you create. For example:
The buffer cache is not used for raw devices (e.g. an Oracle database on raw volumes).
VxFS does not use the buffer cache the same way HFS file systems do.
And 90% buffer cache utilization can be perfectly normal behavior: it means the pages read from the file system are loaded into the buffer cache and served from it afterwards, without direct access to the disk. A hit rate near 100% is better than one near 5%, which would mean the system has to go directly to the disk/FS without benefiting from memory.
PJA
10-30-2001 07:12 AM
Re: F/S Buffer cache & CPU & WIO
1. You do need to watch %wio, but this factor alone doesn't necessarily mean you need to increase your buffer cache size. At 15% of 4 GB, which is about 600 MB, you have the buffer cache correctly configured. If you are still seeing a lot of %wio with this buffer cache, it means your I/O subsystem is not adequate and response times may suffer. So it's time to concentrate on improving the disk subsystem: arrange the logical volumes better, find the hot disks, consider striping, etc.
2. What about your application? Has it improved after reducing the buffer cache size?
3. When your %wio is more than 15%, try running sar -d and check for disks that are more than 50% busy and have high avserv times. You need to move the data off those disks onto the least-used disks.
-Sri
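The sar -d check described in point 3 can be sketched as a small filter. The column layout (device, %busy, avwait, avserv) and the avserv threshold are assumptions for illustration, not exact sar output:

```python
# Hypothetical helper that scans `sar -d`-style lines and flags disks that
# are more than 50% busy with high average service times, per the advice
# above. Column order and the avserv threshold are assumptions.
BUSY_THRESHOLD = 50.0    # %busy, from the post above
AVSERV_THRESHOLD = 30.0  # ms, illustrative choice

def hot_disks(sar_lines):
    """Return device names that are both busy and slow to service requests."""
    hot = []
    for line in sar_lines:
        parts = line.split()
        if len(parts) < 4:
            continue  # skip headers or blank lines
        device, busy, avserv = parts[0], float(parts[1]), float(parts[3])
        if busy > BUSY_THRESHOLD and avserv > AVSERV_THRESHOLD:
            hot.append(device)
    return hot

sample = [
    "c0t6d0  72.0  5.1  41.3",  # busy and slow: candidate for relocation
    "c4t0d0  12.0  0.4   6.2",  # lightly used
]
print(hot_disks(sample))  # ['c0t6d0']
```

Disks flagged this way are the ones whose data should move to the least-used spindles.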
10-30-2001 07:58 AM
Re: F/S Buffer cache & CPU & WIO
I agree with all your comments, but in the absence of any evidence from glance, gpm, sar, or the other tools to suggest an I/O bottleneck on the disks, why the sudden increase in CPU %WIO?
The only other option is that the application needs tuning to run more efficiently.
10-30-2001 08:08 AM
Re: F/S Buffer cache & CPU & WIO
>> The subsequent result of this change was a vast improvement in memory usage < 95%,
A good sign.
>> f/s caching levels were still in the high 90%'s,
Which is good, since it means the cache is serving the data rather than the disk. The higher the %, the better.
>> and a reduction in cpu % system, but there was also an additional 15% cpu WIO (From 14% - 31% avg).
It shows that your system is running I/O-intensive applications that do a lot of I/O. If the performance is not acceptable to the users, you can look at the disk configuration: how the LVs are set up, which disks are busy (sar -d helps), the type of disks being used, the connection to the disks (Fibre? SCSI?), and the application itself.
I think the buffer cache configuration is fine. With 600 MB configured, you can leave it as it is and look at the I/O piece.
HTH
raj
10-30-2001 08:18 AM
Re: F/S Buffer cache & CPU & WIO
To answer your specific question: you are comparing apples to oranges. Yes, %WIO did increase, but because you removed a significantly bigger bottleneck (memory), you have increased the I/O subsystem's ability to become a bottleneck. Its role as a bottleneck has grown, BUT overall system throughput has gone up.
In essence, you have removed a small pipe, and now the next smallest pipe plays a bigger role in impeding the flow of water to the faucet.
Clay
10-30-2001 08:23 AM
Re: F/S Buffer cache & CPU & WIO
The default concern is always the application; it needs tuning.
The reason you are seeing the %wio is that the CPU is waiting to get rid of the I/O part of the processes: it has completed everything for a process except the I/O portion and is waiting for it to get into the buffer. If the traffic between the buffer cache and the disk subsystem is fast enough, you don't see this. With a very large buffer cache, the CPU can simply dump the I/O into the cache and get rid of the process, but that consumes memory, and the kernel will spend more time flushing I/O from the buffer to the disks.
The most important thing to look at is the avserv time in sar -d. This is the time in ms it took that particular LUN/disk to process a request. A high value indicates a problem, though not necessarily a bottleneck, depending on your application. But if %busy is more than 70 and avserv is high, it is a bottleneck.
-Sri
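The rule of thumb above distinguishes a merely slow disk from a true bottleneck. A minimal sketch of that decision (the 70% busy figure is from the post; the avserv cutoff is an assumed illustrative value):

```python
# Encodes the rule of thumb above: high avserv alone is a problem but not
# necessarily a bottleneck; high avserv combined with >70 %busy is.
# The 30 ms avserv cutoff is an assumption for illustration.
def classify(pct_busy, avserv_ms, high_avserv_ms=30.0):
    """Classify a disk from its sar -d %busy and avserv figures."""
    if avserv_ms <= high_avserv_ms:
        return "ok"
    if pct_busy > 70:
        return "bottleneck"
    return "slow, but not necessarily a bottleneck"

print(classify(85, 45))  # bottleneck
print(classify(40, 45))  # slow, but not necessarily a bottleneck
print(classify(30, 8))   # ok
```

Whether the middle case matters depends on the application, as the post notes.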
02-11-2002 07:45 AM
Re: F/S Buffer cache & CPU & WIO
I use Informix, and when I see %wio high it is generally because there are not enough I/O threads. I can increase these threads in the application, and I can also analyse their throughput (in a gross sense, not like MeasureWare). I do not know your setup, but you might like to look at increasing the I/O throughput from the app side.
Tim