Possible Disk performance issues
11-07-2004 10:57 AM
sar -u shows: (All stats for today)
%usr %sys %wio %idle
Average 15 5 29 51
Idle time looks good; however, wait-for-I/O time looks suspiciously high. User is low, so there's no CPU constraint here.
sar -d (this is the disk I am interested in)
10:49:00 device %busy avque r+w/s blks/s avwait avserv
Average c10t0d3 79.35 4.81 761 31086 6.85 4.45
This by itself looks OK; however, during peak times %busy is constantly at 100%, queue length can be as high as 28, and avwait can be double avserv.
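As a back-of-the-envelope check, the sar -d averages above imply an average transfer size (HP-UX sar counts in 512-byte blocks; the figures are copied straight from the output):

```shell
# Average I/O size implied by the sar -d averages above:
# blks/s divided by r+w/s, with sar's 512-byte blocks
awk 'BEGIN { rw = 761; blks = 31086;
             printf "%.1f blocks/IO (%.1f KB)\n", blks/rw, blks/rw*512/1024 }'
```

That works out to roughly 40 blocks (about 20 KB) per I/O, so these are fairly large transfers.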
The disk in question is on an EMC array, 7 x 73GB mirrored to 7 x 73GB. EMC disk stats look OK.
There are only 2 logical volumes on this LUN.
During peak times we experience a large performance hit, and as we have recently upgraded to an rp7420 from an N4000, the only thing that has stayed the same is the disk (the config has gone from 5 mirrored disks to 7).
vgdisplay -v shows
VG Name /dev/vg10
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 2
Open LV 2
Max PV 16
Cur PV 1
Act PV 1
Max PE per PV 51200
VGDA 2
PE Size (Mbytes) 256
Total PE 1863
Alloc PE 1525
Free PE 338
Total PVG 1
Total Spare PVs 0
Total Spare PVs in use 0
Will the PE size affect the disk performance?
Caching on the array has been changed somewhat to cope with peak loads (mostly read)
Any help here greatly appreciated.
Solved!
11-07-2004 11:35 AM
Re: Possible Disk performance issues
Looks like this is one huge ~475GB LUN. Personally, I am not quite comfortable with disks of this size in my VG configurations; I can tolerate up to 128GB but not beyond. There are two reasons why you can experience bottlenecks.
1. SCSI queue depth on the system: this doesn't change with the size of the disk. It's the same 8 requests for a 36GB LUN or a 500GB LUN. If you can split this disk into, say, 8 x 64GB or at least 4 x 128GB disks, then you will see improvement. If that is not possible, then try changing the 'queue_depth'. 'man scsictl' for more information.
2. Load balance: if you are not using PowerPath, then all the activity to the disk will go through only one path. See if you can install PowerPath, as it can load balance between the primary and alternate links to the same disk. If you can split this into multiple disks and you don't have PowerPath, then put half of the disks on one path and half on the other.
PE size will not affect the performance; it matters only for disk space. At the most you would be wasting 255 MB of space, which is nothing for a LUN of this size.
-Sri
11-07-2004 12:29 PM
Re: Possible Disk performance issues
>> 7 x 73GB * 2
>>> 10:49:00 r+w/s = 761 blks/s = 31086
I would call that busy, to the point of being a bottleneck, from the HP-UX numbers alone.
We would need the EMC data points, like cache hit % (and thus the real read I/O rate) and the write-to-read ratio (and thus the real write rate).
IMHO the potential 100 IO/sec per disk is a significant load, and possibly a bottleneck if the access is random enough. If it is 100% read, then the caches are likely to help some, and you have 14 disks to play with for perhaps 30 physical disk IO/sec: readily sustainable. But if it is 100% write, then your mirrors hinder and you have fewer than 7 disks to spread the load over.
The I/Os are relatively big: 40 blocks/IO? But my main concern would be IO/sec, not MB/sec.
Do check the maximum queue depth. That might explain the significant avwait vs avserv time.
You may want to describe your application a little for better advice: DB or plain files? Oracle/Sybase/DB2? OLTP-ish, business intelligence? Web messages, lots of little files? Serialization points (central files/records)?
hth,
Hein.
11-07-2004 02:56 PM
Re: Possible Disk performance issues
#scsictl -m queue_depth /dev/rdsk/c10t0d3
queue_depth = 8
Pardon my ignorance, but should I be making this higher? Is doubling it too far? Can this be done on the fly?
I won't be able to split to smaller LUNs in the immediate to medium term, so I need to look at other alternatives first. PowerPath is also not on the horizon.
The particular file systems in question house PROGRESS database files, and a small tmp/sort area. Not lots of small files.
For the disk in question:
Read Throughput: 1175
Write Throughput: 71
Read Bandwidth: 18065.76
Write Bandwidth: 645.06
Utilisation: 20.45%
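For what it's worth, the read and write throughput figures quoted above reduce to a read-to-write ratio with a quick one-liner (numbers copied from the stats; this is just arithmetic, not an EMC tool):

```shell
# Read-to-write ratio from the array throughput figures above
awk 'BEGIN { printf "%.1f:1 reads to writes\n", 1175/71 }'
```

So the workload is overwhelmingly read, roughly 16 reads per write, which matches the "mostly read" caching comment in the first post.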
11-07-2004 03:03 PM
Re: Possible Disk performance issues
It should be set in accordance with the disk array. Check with EMC support on it. Setting it too high may result in timeouts.
I believe you should be ok with 16. See if it helps.
-Sri
11-07-2004 03:41 PM
Re: Possible Disk performance issues
Ah! That queue depth is probably not enough.
I'd go 'all in'... OK, half in: try 128.
I seem to recall we set this on a system-wide basis, not per device, but I don't recall just now how. Just go per device for now.
My rule of thumb is: queue depth is 4 per physical disk behind the controller-presented disk. So that would be 30+ for you.
You still have those 7 disks, even though presented as 1. It is more than reasonable to be able to have an I/O outstanding to each of those, and to have another few in the queue for each of them.
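The rule of thumb above is simple enough to sketch in shell (the disk count of 7 is taken from the thread; the 4-per-disk factor is the heuristic, not a hard limit):

```shell
# Rule of thumb: ~4 outstanding I/Os per physical disk behind the presented LUN
DISKS=7       # physical disks behind the LUN (from the thread)
PER_DISK=4    # heuristic factor
echo "suggested queue_depth: $((DISKS * PER_DISK))"
```

Any value in the high 20s or above would satisfy the heuristic here, which is why 8 is so far off.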
fwiw,
Hein.
11-07-2004 04:33 PM
Re: Possible Disk performance issues
Thanks for the fast responses. Points will be assigned in due course.
11-07-2004 09:57 PM
Re: Possible Disk performance issues
What I always do on a 64GB metadevice (striped EMC 8x8) is to set it to 40.
Then I set the LUN:
scsictl -a -m queue_depth=40 -m queue_depth /dev/rdsk/cxtydz
Try it, it might help you out.
But again, more disks is better because of the I/O on ONE FC channel.
11-09-2004 03:01 PM
Re: Possible Disk performance issues
Thanks for the responses, it's always good to know there are gurus who can answer just about anything!
12-01-2004 10:19 AM
Re: Possible Disk performance issues
Disk is still very busy; however, avserv > avwait (generally), and disk queue lengths now sit at 0.50.
EMC were of little help here, as were HP, actually...
Thanks guys for your help!