06-02-2005 08:11 AM
A performance cookbook I found, written by a few HP engineers, covers these points. It asks whether access to the filesystem is primarily random or sequential, and says this about using the buffer cache:
When access is primarily random, any read-ahead I/O performed by the buffer cache routines is "wasted": logical read requests will invoke routines that will look through buffer cache and not get hits. Then, performance degradation results because a physical read to disk will be performed for nearly every logical read request. When mincache=direct is used, it causes the routines to bypass buffer cache: I/O goes directly from disk to the process's own buffer space, eliminating the middle steps of searching the buffer cache and moving data from the disk to the buffer cache, and from there into the process memory. If mincache=direct is used when read patterns are very sequential, you will get hammered in the performance arena, because very sequential reading will take big advantage of read ahead in the buffer cache, making logical I/O wait less often for physical reads. You want much more logical than physical reading for performance (when patterns are sequential).
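Before deciding, it may help to measure how well the buffer cache is actually working. On HP-UX, `sar -b` reports logical vs. physical read rates and the cache hit ratio; this is a sketch of how I would check (interval and count are arbitrary):

```shell
# Sample buffer cache activity: 5-second intervals, 5 samples.
# lread/s = logical reads, bread/s = physical reads, %rcache = read hit rate.
# A consistently high %rcache suggests read-ahead is paying off (sequential
# access); a low %rcache suggests random access that mincache=direct may help.
sar -b 5 5
```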
Here is an example of the exception to the rule: We have seen special cases such as a large, 32-bit Oracle application in which the amount of shared memory limited the size of the SGA, thus limiting the amount of memory allocated to the buffer pool space; and (more important) Oracle was found to be reading sequentially 68 percent of the time! When the mincache=direct option was removed (and the buffer cache enlarged), the number of physical I/Os was greatly reduced, which increased performance substantially. Remember, this was a specific, unique, pathological case; often experimentation and/or research is required to know if your system/application will behave this way.
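For reference, the option discussed above is set at mount time on VxFS filesystems (it requires the OnlineJFS license). A hedged sketch, with a hypothetical device and mount point:

```shell
# Hypothetical example: mount a VxFS filesystem for direct I/O, bypassing
# the buffer cache for a random-access workload. The logical volume name
# and mount point are placeholders -- substitute your own.
mount -F vxfs -o delaylog,mincache=direct,convosync=direct \
    /dev/vg01/lvol1 /oradata
```

Omitting `mincache=direct` (the default, `mincache=bcache`) keeps reads going through the buffer cache, which is what the sequential-read exception case above wanted.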
In your experience, would these systems benefit more from using the buffer cache, or from bypassing it?
Also, each partition will have 16 GB of RAM and is currently configured like this:
# swapinfo -tam
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev        4096       0    4096    0%       0       -    1  /dev/vg00/lvol2
reserve       -     435    -435
memory    16367    2106   14261   13%
total     20463    2541   17922   12%       -       0    -
There is enough disk space to increase swap to whatever. The cookbook mentions this:
swapmem_on
This trick to enable pseudo swap is used to increase the amount of reservable virtual memory. It's only useful when you can't configure as much swap as you need. For example, say you have more physical memory installed than you have disk available to use as swap: in this case if pseudo swap is not turned on, you'll never be able to use all the memory you have installed. The problem is, managing pseudo swap takes up some memory itself, and can slow performance! We recommend you set this to 0 unless you have a boatload of memory and not enough disk available for allocating to swap.
So, is this telling me that I should increase the device swap to 32 GB and disable swapmem_on? Or should I increase device swap and leave swapmem_on=1?
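If more device swap does turn out to be the answer, this is roughly how it would be added on HP-UX with LVM. A sketch only: the volume group, logical volume name, and size are assumptions, and free physical extents should be verified with `vgdisplay` first.

```shell
# Hypothetical: carve a second 4 GB swap area out of vg00.
# -C y  = contiguous allocation (required for swap/dump LVs)
# -r n  = no bad-block relocation (also required for swap)
lvcreate -C y -r n -L 4096 -n lvswap2 vg00

# Enable it now, then confirm it shows up.
swapon /dev/vg00/lvswap2
swapinfo -tam
```

To make it permanent, the new device would also need a `swap` entry in /etc/fstab so it is enabled at boot.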
06-03-2005 07:49 AM
Solution
You should really think of pseudo swap as nothing more than kernel bookkeeping. Up to 75% of the box's physical memory can be counted as swap space (although it is never actually used for that purpose). As long as you have 0.25 x RAM configured as device swap, your machine can utilize all of its memory with swapmem_on=1. Most of my boxes with large amounts of memory are configured this way, and they never swap; that's why I bought all that memory in the first place. The old rules of thumb about configuring 2-3x RAM (4-6x if mirrored) as swap space are literally decades old.
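The arithmetic above can be checked against the poster's own swapinfo numbers. A minimal sketch, using the 16367 MB memory line and 4096 MB device swap from the output earlier in the thread:

```shell
# With swapmem_on=1, up to 75% of physical RAM is counted as pseudo swap.
ram_mb=16367        # "memory" line from swapinfo -tam
dev_swap_mb=4096    # existing /dev/vg00/lvol2 device swap

pseudo_mb=$(( ram_mb * 75 / 100 ))        # reservable pseudo swap
total_mb=$(( dev_swap_mb + pseudo_mb ))   # total reservable virtual memory

echo "pseudo swap: ${pseudo_mb} MB"
echo "total reservable: ${total_mb} MB"

# If total reservable >= RAM, all installed memory is usable as-is,
# with no need to grow device swap to 2x RAM.
[ "$total_mb" -ge "$ram_mb" ] && echo "all RAM usable"
```

With these numbers, pseudo swap plus the existing 4 GB device swap already exceeds physical memory, which is the point the answer is making: the 0.25 x RAM of device swap is enough.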
06-03-2005 08:30 AM
Re: performance questions
Thanks again.