I/O bottleneck
Operating System - HP-UX
03-26-2003 01:13 AM
Using glance I've found a possible I/O bottleneck on a disk.
In effect, some Oracle processes (ora_dbwr) spend a lot of time waiting for I/O, and my SCSI channel throughput is heavily stressed.
Is there any way to fine-tune this situation? Which kernel parameter could I modify?
Thank you
Ubi maior, minor cessat!
3 REPLIES
03-26-2003 01:39 AM
Re: I/O bottleneck
dbwr is the database writer process; when these processes spend a lot of time writing to one of the disks, there could be an I/O bottleneck.
There is no need to change any kernel settings for this.
Just distribute the Oracle datafiles across different disks so that the I/O is spread out.
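For illustration, here is a minimal sketch of how one datafile might be moved onto a less busy disk; the tablespace name (USERS), the datafile paths, and the OS-authenticated sqlplus login are placeholders and assumptions, not details from this thread:

# Sketch only: tablespace name and datafile paths are placeholders.
# 1. Take the tablespace offline so its datafile can be moved safely.
sqlplus /nolog <<EOF
connect / as sysdba
alter tablespace users offline normal;
EOF

# 2. Move the datafile to a filesystem that sits on a different disk/controller.
mv /u01/oradata/users01.dbf /u02/oradata/users01.dbf

# 3. Tell Oracle about the new location and bring the tablespace back online.
sqlplus /nolog <<EOF
connect / as sysdba
alter database rename file '/u01/oradata/users01.dbf' to '/u02/oradata/users01.dbf';
alter tablespace users online;
EOF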
Revert
03-26-2003 02:25 AM
Solution
No, there is no kernel parameter you can change to help with I/O bottlenecks.
You need to look further into which disk(s) are being used heavily; sar -d 1 10 will show this.
To prove you are I/O bound, check with the command sar 1 10 and look at the %wio figure. Anything above 20 means you are indeed I/O bound. If you can't add more SCSI controllers or faster disks, the best thing to do is use LVM striping (lvcreate -i -I). This balances the load across controllers and disks and gives the maximum I/O throughput.
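To make the striping suggestion concrete, here is a rough sketch of creating a striped logical volume with HP-UX LVM; the volume group name (vgora), the disk devices, the minor number, the stripe size, and the LV size are illustrative assumptions only:

# Sketch only: volume group name, disk devices, minor number, sizes and stripe width are illustrative.
# Create the volume group device file and initialise two disks on different SCSI controllers.
mkdir /dev/vgora
mknod /dev/vgora/group c 64 0x010000
pvcreate /dev/rdsk/c1t2d0
pvcreate /dev/rdsk/c2t2d0
vgcreate /dev/vgora /dev/dsk/c1t2d0 /dev/dsk/c2t2d0

# Create a 4 GB logical volume striped across the 2 disks with a 64 KB stripe size.
lvcreate -i 2 -I 64 -L 4096 -n lvoradata /dev/vgora

Note that striping only relieves the channel if the disks really sit behind different controllers; striping two disks on the same saturated SCSI bus gains little.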
I'm from Palmerston North, New Zealand, but somehow ended up in London...
03-26-2003 04:15 AM
Re: I/O bottleneck
I/O bottlenecks are often corrected by adding more memory. The evaluation is really about processes hitting the disks and the file systems associated with those disks, so it's the per-process transfer rate that you're measuring: one dedicated process, like Oracle, on a file system versus many processes all competing for write access. Oracle has its own block size, which tends to be larger, because in a dedicated-process environment a large block size increases performance; with many processes competing, a smaller file system block size increases performance. So with ora_dbwr you probably do want to use Oracle's recommended large block size. Other options include adding more disks or more physical memory.
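As a quick way to check whether memory is part of the problem, paging activity can be watched alongside the disk figures; this is only a sketch of standard HP-UX commands, and the idea that sustained page-outs inflate disk I/O is a rule of thumb rather than anything measured in this thread:

# Sketch only: sustained non-zero page-outs turn memory pressure into extra disk I/O.
vmstat 5 5        # watch the "po" (page-out) column
swapinfo -tam     # overall memory/swap usage in MB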
The basic rule with sar for detecting disk and I/O bottlenecks involves checking the %busy, avwait and avserv metrics:
sar -d 5 5
Note: %busy > 50% (for most disks) indicates a bottleneck.
Note: %busy > 20% (for a small minority of disks) indicates a bottleneck.
For disks where a bottleneck is found, also check avwait and avserv:
avwait > avserv ?
If true, then it is an I/O bottleneck.
Also,
avserv > 20 ms ?
This also indicates a bottleneck.
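The rules above can be rough-scripted as a convenience; this is only a sketch, and it assumes the usual HP-UX sar -d column order (device, %busy, avque, r+w/s, blks/s, avwait, avserv) and cXtYdZ device names, which should be verified against your own sar output before trusting the flags:

# Sketch only: flag suspect disks from "sar -d 5 5" output.
# Fields are counted from the right so the optional leading timestamp does not matter:
# ... device %busy avque r+w/s blks/s avwait avserv
sar -d 5 5 | awk '
    NF >= 7 && $(NF-6) ~ /^c[0-9]+t[0-9]+d[0-9]+/ {
        dev = $(NF-6); busy = $(NF-5) + 0; avwait = $(NF-1) + 0; avserv = $NF + 0
        if (busy > 50)       print dev " %busy " busy " (> 50%)"
        if (avwait > avserv) print dev " avwait " avwait " > avserv " avserv
        if (avserv > 20)     print dev " avserv " avserv " ms (> 20 ms)"
    }'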
Support Fatherhood - Stop Family Law