wio % high
08-13-2001 02:51 AM
The output from sar shows that the wait I/O (%wio) is high.
samples
                %usr    %sys    %wio   %idle
Average           10       9      55      25
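As a quick way to keep an eye on this between full sar reviews, a small script can flag the intervals where %wio crosses a threshold. This is a sketch only: the column order (%usr %sys %wio %idle after a timestamp) is assumed from typical HP-UX `sar -u` output.

```python
# Sketch: flag sar -u samples whose %wio exceeds a threshold.
# Assumption: lines look like "HH:MM:SS %usr %sys %wio %idle".

def high_wio_samples(sar_lines, threshold=40):
    """Return (timestamp, wio) pairs for samples above the threshold."""
    flagged = []
    for line in sar_lines:
        parts = line.split()
        # Skip banner, header, and Average lines: a data row has 5 fields,
        # starts with a digit timestamp, and has numeric percentages.
        if len(parts) != 5 or not parts[0][:1].isdigit() or not parts[1].isdigit():
            continue
        ts, usr, sys_, wio, idle = parts
        if int(wio) > threshold:
            flagged.append((ts, int(wio)))
    return flagged

sample = [
    "HP-UX host B.11.00 9000/800    08/13/01",
    "10:00:00    %usr    %sys    %wio   %idle",
    "10:05:00      10       9      55      25",
    "10:10:00      12       8      30      50",
]
print(high_wio_samples(sample))  # [('10:05:00', 55)]
```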
swapinfo -mta
             Mb      Mb      Mb   PCT  START/      Mb
TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev         512      73     439   14%       0       -    1  /dev/vg00/lvol2
dev         512      68     444   13%       0       -    1  /dev/vg00/lvol9
dev         512      72     440   14%       0       -    1  /dev/vg00/lvol10
reserve       -     571    -571
memory      725     599     126   83%
total      2261    1383     878   61%       -       0    -
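The PCT USED column in swapinfo is just USED divided by AVAIL; recomputing it is a simple sanity check on a report like the one above:

```python
# Sketch: recompute swapinfo's PCT USED column (USED / AVAIL, rounded)
# using the figures from the swapinfo -mta report above.

def pct_used(avail_mb, used_mb):
    return round(100 * used_mb / avail_mb)

print(pct_used(512, 73))     # 14 -> matches the 14% shown for lvol2
print(pct_used(725, 599))    # 83 -> matches the 83% shown for memory
print(pct_used(2261, 1383))  # 61 -> matches the 61% total
```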
Is there a kernel parameter that needs changing?
Thank you
Jane
Solved! Go to Solution.
08-13-2001 03:18 AM
08-13-2001 04:59 AM
Re: wio % high
I have attached a copy of the sar report.
thanks
Jane
08-13-2001 06:45 AM
Re: wio % high
What I can say from the sar report is that for the disks
c4t1d0
c5t1d0
c4t2d0
c5t2d0
c2t2d0
the %busy is continuously above 60%, which means these disks have become a bottleneck.
You can try to move one or more files or filesystems from these disks to a less-used disk. This can help decrease the load on those disks.
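To spot those bottleneck disks systematically rather than by eye, the `sar -d` output can be filtered for devices over a %busy cutoff. A sketch, assuming the usual `sar -d` column layout (device, %busy, avque, r+w/s, blks/s, avwait, avserv) with data rows continued under a leading timestamp line:

```python
# Sketch: pick out disks whose %busy exceeds a cutoff from sar -d style
# lines. The column layout is an assumption about the report format.

def busy_disks(sar_d_lines, cutoff=60.0):
    hot = set()
    for line in sar_d_lines:
        parts = line.split()
        # Data rows start with a cXtYdZ device name; skip headers/timestamps.
        if len(parts) < 2 or not parts[0].startswith("c"):
            continue
        device, pct_busy = parts[0], parts[1]
        try:
            if float(pct_busy) > cutoff:
                hot.add(device)
        except ValueError:
            continue
    return sorted(hot)

sample = [
    "10:05:00   device   %busy   avque   r+w/s   blks/s   avwait   avserv",
    "           c4t1d0   78.2    1.4     120     960      5.1      8.3",
    "           c2t2d0   65.0    0.9     80      640      4.0      7.5",
    "           c0t6d0   12.1    0.5     10      80       1.0      5.0",
]
print(busy_disks(sample))  # ['c2t2d0', 'c4t1d0']
```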
...BPK...
08-13-2001 07:28 AM
Re: wio % high
Thanks again for your help.
08-13-2001 07:36 AM
Re: wio % high
About the only way to find busy lvols and files is with Glance. Glance will show disk I/O by hardware channel, mounted filesystem, or lvol. It can also show you the program(s) performing the highest I/O.
Once you identify a really busy program, use Glance's ability to list all the open files for a single program and browse through the list, looking for files whose offset jumps every time you refresh Glance. These are the likely candidates for the busy files. See what it takes to move the file(s) to other locations, then repeat for other busy programs. Glance is available as a free trial version on your Application CDs (usually CD #2).
Bill Hassell, sysadmin
08-13-2001 10:28 PM
Re: wio % high
08-14-2001 05:15 AM
Re: wio % high
Yes, these disks do have a database on them; it is a single filesystem, striped, with a three-way mirror.
I have found the problem. The users are running 'end of year' reports during the day instead of using the overnight batch job facility.
Thanks to all who helped.
JMS
08-14-2001 05:36 AM
Re: wio % high
I believe I have the database and system tuned as best I can, given the requirements placed on the kernel (the database wants some odd parameter values, even though it doesn't come close to utilizing them).
Anyhow... with a %wio of over 40%, and a belief that the only way to fix it is a physical upgrade, which direction would you go? Additional memory, or additional hard drives for the AutoRAID?
Current stats: 512 MB RAM
5 x 9 GB, 10K rpm drives in an HP AutoRAID
Database size: 3+ GB
Progress database, if that matters (if anyone recognizes it...!)
TIA
08-14-2001 07:09 AM
Re: wio % high
I know you say your app/database is tuned as well as it can be, but is the AutoRAID set up correctly? This can cause a lot of problems, especially high %wio.
There is a lot of contention over the best way to set up an AutoRAID. One effective, performant approach: if you have 2 controllers attached with 2 HBAs and require 2 VGs (vg00 and vgapp), create 4 LUNs of half the total size you require (2 for each VG). Each LUN will have 2 paths. Create each VG using alternate paths for its 2 LUNs, e.g. c3t6d0 and c2t5d1.
Then vgextend the VG with the alternate paths to the LUNs, e.g. c2t5d0 and c3t6d1.
Then create your logical volumes by striping across the LUNs, e.g. lvcreate -i 2 -I 64 -L ?? -n lvol1 /dev/vg00, and set up your filesystems. Perform the same procedure for your second VG.
This method ensures that you utilise both channels for your data, giving you a maximum throughput of 20 MB/s per path.
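Putting the steps above together, the whole sequence might look like the sketch below. Everything here is illustrative: the device files, VG name, minor number, and the 4 GB LV size are assumptions, not values from a real configuration, and would need adjusting to the actual hardware.

```shell
# Hypothetical sketch of the alternate-path + striping setup described
# above (HP-UX LVM). Device names and sizes are assumptions.

# Initialize the primary path of each LUN as a physical volume.
pvcreate /dev/rdsk/c3t6d0
pvcreate /dev/rdsk/c2t5d1

# Create the volume group on the primary paths.
mkdir /dev/vgapp
mknod /dev/vgapp/group c 64 0x010000
vgcreate /dev/vgapp /dev/dsk/c3t6d0 /dev/dsk/c2t5d1

# Add the alternate paths so I/O can use (or fail over to) the second HBA.
vgextend /dev/vgapp /dev/dsk/c2t5d0 /dev/dsk/c3t6d1

# Stripe a logical volume across the two LUNs: 2 stripes, 64 KB stripe size.
lvcreate -i 2 -I 64 -L 4096 -n lvol1 /dev/vgapp

# Build and mount a filesystem on it.
newfs -F vxfs /dev/vgapp/rlvol1
mount /dev/vgapp/lvol1 /app
```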
The AutoRAID uses a "working set", which basically ensures the most regularly used data sits in the fastest-access storage, be it cache or RAID 0/1. As you utilise more of the space, data is migrated to RAID 5 to provide increased capacity; the default ensures a minimum of 10% remains in RAID 0/1. Unallocated free space within your LUNs is used for RAID 0/1. However, once you use the space in the LUNs it will not be used for RAID 0/1 again, even if you delete the data, due to how the data is stored.
One thing that does concern me is that you only have 5 disks; the AutoRAID performs at its optimum with all 12 disks. I would definitely recommend you add more disks. As for memory, that is a separate issue from %wio: check how your buffer cache is working and whether any processes are waiting on memory, but I don't think more memory will help with this issue.
Hope this helps, Paul