Operating System - HP-UX

Disk I/O performance using gpm

 
SOLVED
Fidel Ramirez_1
Frequent Advisor

Disk I/O performance using gpm

I'm trying to determine whether disk I/O is a bottleneck on an HP-UX 11.23 system connected to a SAN through two 2 Gb Fibre Channel interface cards.
I'm using the GlancePlus gpm utility and looking at the Global Disk Summary and Disk Throughput charts.
On the first chart, is the Peak Disk Utilization graph useful? It shows peak values only.
If so, what are the best practices here? How do I know whether a value is OK?
The second chart provides per-disk throughput graphs. What are the best practices for interpreting them clearly?
Also, I've read some people in this forum saying that GlancePlus is old and won't report disk performance correctly on a SAN. If that's the case, what are the alternatives and how do I use them?
Thank you,

Fidel
10 REPLIES
Michael Steele_2
Honored Contributor
Solution

Re: Disk I/O performance using gpm

Hi

I don't use gpm unless I'm instructing. I prefer sar -d to search for a disk bottleneck. It's very easy: when 'avwait' > 'avserv', you have a disk bottleneck.

%wio is also useful. It should be 70 to 90%. If it's too low, like 30%, then the DBAs need to reindex their database.
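The avwait > avserv check is easy to script once you have the sar -d output. A minimal sketch (the sample rows below are fabricated for illustration; on a real system you'd feed it something like `sar -d 5 5`):

```shell
# Sample lines in the column order of "sar -d" output on HP-UX:
#   device  %busy  avque  r+w/s  blks/s  avwait  avserv
sample='c0t0d0 45 1.2 80 640 4.1 7.9
c4t0d1 98 8.6 210 1680 25.3 9.8
c4t0d2 60 0.5 120 960 2.0 6.5'

# Flag any device where avwait (col 6) exceeds avserv (col 7):
# requests are waiting in the queue longer than the device takes to
# service them, i.e. a likely disk bottleneck.
flagged=$(echo "$sample" | awk '$6 + 0 > $7 + 0 { print $1 }')
echo "possible bottleneck on: $flagged"
```

Here only c4t0d1 is flagged, because its avwait (25.3 ms) exceeds its avserv (9.8 ms).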
Support Fatherhood - Stop Family Law

Re: Disk I/O performance using gpm

>> GlancePlus is old and it won't report correctly disk performance on a SAN.

No, that's not the case - you just have to know what the stats are telling you...

In glance, "disk utilization" (and %busy in sar -d) simply tells you what percentage of time during a measurement interval a particular disk or LUN was actually servicing some sort of IO. It doesn't take into account what that disk is actually capable of doing in terms of IO, simply how much of the time it was busy doing something.

Now obviously you could see 2 disks on a system, one of which is an old 10K RPM 36GB physical disk, and the other a LUN presented from a disk array, which might actually be made up of dozens of 15K RPM disks fronted by a load of cache. To sar or glance, these devices look the same - they are both devices of type sdisk - but if they both show close to 100% utilization/%busy, what does that mean? It might mean a lot for the physical disk, but it probably isn't very important for the SAN LUN.

Of course there are other stats in both glance and sar which are much more relevant, but the sar values Michael has pointed you at are a good starting point.
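To put some back-of-the-envelope numbers on that (the IOPS figures below are purely illustrative assumptions, not measurements): the same near-100% busy reading leaves almost no headroom on a lone spindle, but plenty on an array LUN.

```shell
# Two devices both showing ~100% busy in glance/sar, but with very
# different capability behind them. All numbers are illustrative.
spindle_iops=150         # observed rate on a lone 10K RPM physical disk
spindle_capacity=180     # roughly what a single spindle can sustain

lun_iops=150             # same observed rate on a cache-fronted array LUN
lun_capacity=5000        # what dozens of striped spindles + cache can do

echo "physical disk headroom: $(( spindle_capacity - spindle_iops )) IOPS"
echo "SAN LUN headroom:       $(( lun_capacity - lun_iops )) IOPS"
```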

HTH

Duncan

I am an HPE Employee
Fidel Ramirez_1
Frequent Advisor

Re: Disk I/O performance using gpm

Thank you both, Michael and Duncan, for your responses. Duncan, I would like to know what those other relevant stats are. My idea is to get both a general and a detailed picture of disk performance using gpm. What are the best practices for that, or simple rules of thumb, to get a clear picture?
Please let me know if you can see the points assigned for these responses, because I don't see them on my end.
Thank you again.

Re: Disk I/O performance using gpm

Fidel,

Don't see any points assigned...

I don't use gpm, but I do use glance, and for disks I generally look at the IO by Disk screen (press 'u' in glance, or just launch glance with "glance -u").

In there, the _really_ interesting metric is the last column (Serv Time) - this is the same as "avserv" from sar -d. You want to see it below 10ms on a well-performing system, and anything over 20ms is usually a cause for concern.

In addition, the Phys IO column is pretty interesting, as this tells us how many IOs per second the LUN is handling - a physical disk spindle is typically capable of handling over 100 IOs per second, and most struggle beyond 200 IOs per second, but that's in _very_ broad hand-waving terms, as it can depend on many factors such as data location, disk geometry, IO sizes etc.
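Those thresholds are easy to apply in bulk. A minimal sketch, assuming glance-style rows of device name, Phys IO rate, and Serv Time in ms (sample data fabricated for illustration):

```shell
# Columns: device  phys_io_per_sec  serv_time_ms  (fabricated sample)
rows='c0t0d0 85 6.2
c4t0d1 240 31.5
c4t0d2 110 14.8'

# Classify each device by service time: under 10ms is fine,
# 10-20ms is worth watching, over 20ms is usually a concern.
report=$(echo "$rows" | awk '{
    if      ($3 + 0 < 10)  s = "ok"
    else if ($3 + 0 <= 20) s = "watch"
    else                   s = "concern"
    print $1, s
}')
echo "$report"
```

With these sample rows, c0t0d0 comes out "ok", c4t0d2 "watch", and c4t0d1 "concern".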

More than that is difficult to say in general terms. As with all performance measurement, the thing to do is to have a picture of what the system looks like when performance is good, and then compare that to when you have a performance issue.

HTH

Duncan

I am an HPE Employee
Fidel Ramirez_1
Frequent Advisor

Re: Disk I/O performance using gpm

Thank you very much, Duncan, for your precise comments on disk I/O.
I assigned 10 points to each of you. In case they don't show up, please let me know so I can correct it.
Thank you very much again.
Steven E. Protter
Exalted Contributor

Re: Disk I/O performance using gpm

Shalom,

Glance will let you identify the disks with high I/O.

Then you need to take that information, along with ioscan output, to identify the LUN to the SAN administrator, so that the SAN administrator can look for trouble on the SAN.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Fidel Ramirez_1
Frequent Advisor

Re: Disk I/O performance using gpm

Thank you for your response Steven.

Fidel
Emil Velez
Honored Contributor

Re: Disk I/O performance using gpm

You want disk utilization to be less than 50%, and you want the queue to be short.
Fidel Ramirez_1
Frequent Advisor

Re: Disk I/O performance using gpm

Hi Emil,

I've seen that guideline before, but how would you compare a fast disk on a SAN with an old disk drive?
Are they both weighted by the same rule?
Thank you. Fidel