Community Home > Servers and Operating Systems > Operating Systems > Operating System - HP-UX
09-23-2002 09:52 AM
Extremely HIGH Qlen in Glance.
Having disk performance problems on HP-UX 11.0, but the disk performance issues appear to me as if they are not disk related. For example, EMC Workload Analyzer tells me pretty much everything is running without any bottlenecks on the Symmetrix, and EMC has verified this with their SymTop tool that they use when dialing in.
Yet disk access is extremely slow.
I also have 4 Fibre Channel cards load balanced to the EMC using PowerPath. None of the cards ever reaches more than 50% utilization.
The only issues I see anywhere are these:
A) Glance reports disks at 100% utilization most of the day, yet the EMC is not even close to 100% utilization.
And the kicker:
B) Extremely high QLENs inside Glance. I have never seen QLENs this high in my life.
Can someone explain QLEN: what it is and how it works? Any way to get QLEN lower? Someone told me they thought a QLEN of 1000 was fairly high. I have LUNs that have 50k+ QLENs most of the day!
Help me please, I am in QLen hell :) lol
Idx Device Util Qlen KB/Sec Logl IO Phys IO
--------------------------------------------------------------------------------
43 6/1/0.1.16.0.0.0.3 33/ 43 52386.1 1160.7/ 1896.0 0.0/ 0.0 53.2/ 72.0
44 5/0/0.1.16.0.0.0.3 30/ 43 52187.0 1334.3/ 1907.2 0.0/ 0.0 55.6/ 72.5
45 3/0/0.1.17.0.0.1.3 49/ 30 59658.0 286.7/ 442.2 na/ na 35.8/ 42.6
46 4/1/0.1.17.0.0.1.3 48/ 30 59782.0 295.8/ 433.6 0.0/ 0.0 33.7/ 42.0
47 6/1/0.1.16.0.0.1.3 48/ 30 59695.0 323.0/ 436.6 0.0/ 0.0 36.6/ 42.5
48 5/0/0.1.16.0.0.1.3 46/ 30 59663.0 285.2/ 433.7 0.0/ 0.0 31.6/ 42.4
49 3/0/0.1.17.0.0.0.4 18/ 28 60507.7 211.3/ 411.1 na/ na 25.4/ 41.6
50 4/1/0.1.17.0.0.0.4 21/ 28 60353.4 258.1/ 406.8 0.0/ 0.0 31.5/ 41.6
51 6/1/0.1.16.0.0.0.4 23/ 29 60194.7 249.0/ 405.0 0.0/ 0.0 29.4/ 41.5
52 5/0/0.1.16.0.0.0.4 22/ 29 60379.6 232.4/ 404.3 0.0/ 0.0 28.6/ 41.4
53 3/0/0.1.17.0.0.1.5 41/ 40 57685.0 246.0/ 428.4 na/ na 29.6/ 43.6
54 4/1/0.1.17.0.0.1.5 44/ 40 57999.0 270.1/ 424.7 0.0/ 0.0 32.2/ 43.3
55 6/1/0.1.16.0.0.1.5 39/ 40 58031.0 219.6/ 427.5 0.0/ 0.0 25.6/ 43.4
56 5/0/0.1.16.0.0.1.5 44/ 39 57585.0 224.9/ 423.4 0.0/ 0.0 26.7/ 43.1
57 3/0/0.1.17.0.0.1.0 26/ 38 58081.0 327.5/ 638.6 na/ na 28.1/ 51.7
58 4/1/0.1.17.0.0.1.0 26/ 38 58188.0 353.2/ 628.9 0.0/ 0.0 31.1/ 50.8
59 6/1/0.1.16.0.0.1.0 22/ 37 57921.0 348.6/ 638.5 0.0/ 0.0 27.7/ 51.1
60 5/0/0.1.16.0.0.1.0 25/ 38 57950.0 333.5/ 637.0 0.0/ 0.0 26.7/ 51.7
61 3/0/0.1.17.0.0.1.1 86/ 41 50789.6 400.0/ 560.9 na/ na 38.8/ 48.4
62 4/1/0.1.17.0.0.1.1 75/ 40 50792.2 422.6/ 564.7 0.0/ 0.0 36.2/ 48.2
63 6/1/0.1.16.0.0.1.1 67/ 40 50485.3 460.3/ 573.2 0.0/ 0.0 39.4/ 48.5
64 5/0/0.1.16.0.0.1.1 67/ 41 50485.4 442.2/ 561.4 0.0/ 0.0 38.8/ 48.0
65 3/0/0.1.17.0.0.1.2 27/ 38 49446.0 294.3/ 521.1 na/ na 30.9/ 47.2
66 4/1/0.1.17.0.0.1.2 25/ 38 49488.0 321.5/ 525.6 0.0/ 0.0 33.0/ 47.2
67 6/1/0.1.16.0.0.1.2 28/ 38 49278.0 297.3/ 519.7 0.0/ 0.0 31.8/ 46.5
68 5/0/0.1.16.0.0.1.2 28/ 38 49270.0 288.3/ 530.2 0.0/ 0.0 27.9/ 47.0
09-23-2002 10:09 AM
Re: Extremely HIGH Qlen in Glance.
HTH
Marty
09-23-2002 10:09 AM
Re: Extremely HIGH Qlen in Glance.
Can you tell us what filesystem(s) are on those disks, and what are the mount options? What kind of data is on those disks? Are there lots of small files or just a few big files (like an Oracle database)?
The disk utilization % in Glance is just for the busiest disk on the system, and not for all the disks as a group.
JP
09-23-2002 10:12 AM
Re: Extremely HIGH Qlen in Glance.
QLEN refers to the number of I/O requests queued to a disk. You may be able to reduce the number of physical connections to resolve this issue.
There is also firmware available to address this; you can ask HP for a free in-warranty firmware upgrade.
I believe the latest release is HP16. In order to get there, you'll need to upgrade Command View SDM to version 1.04. I believe the firmware you have for the Brocade is the latest. There are some significant performance enhancements included in HP14; however, that was a factory-only install. HP15 also included some additional enhancements. I'm not sure what's included in HP16, but I believe it should be available. Contact your local HP office to schedule an upgrade.
Regards,
Anil
09-23-2002 10:16 AM
Re: Extremely HIGH Qlen in Glance.
JP
09-23-2002 10:23 AM
Re: Extremely HIGH Qlen in Glance.
*) I am running Glance 3.35.
*) The filesystems on the disks are VxFS 3.1. There are 2 filesystems, about 200 GB in total: mostly small files, but large files as well. Approximately 200,000 files per filesystem.
*) Anil, what firmware are you speaking of? The V-Class firmware, which you load through the B180 console? Or some sort of Brocade switch firmware? Do you have a link you can send me?
*) John, here are my global waits from Glance. Does this help?
Procs/ Procs/
Event % Time Threads Blocked On % Time Threads
--------------------------------------------------------------------------------
IPC 0.0 0.00 0.0 Cache 0.1 18.78 2.8
Job Control 0.0 0.00 0.0 CDROM IO 0.0 0.00 0.0
Message 0.0 6.75 1.0 Disk IO 0.0 0.00 0.0
Pipe 0.5 72.95 11.0 Graphics 0.0 0.00 0.0
RPC 0.0 0.00 0.0 Inode 0.0 0.00 0.0
Semaphore 0.0 6.62 1.0 IO 0.3 37.99 5.7
Sleep 8.7 1251.99 189.4 LAN 0.0 0.00 0.0
Socket 0.2 26.47 4.0 NFS 0.0 0.00 0.0
Stream 84.6 12213.00 1847.7 Priority 0.0 3.18 0.5
Terminal 0.0 6.62 1.0 System 3.5 511.87 77.4
Other 1.5 218.21 33.0 Virtual Mem 0.0 0.72 0.1
09-23-2002 10:25 AM
Re: Extremely HIGH Qlen in Glance.
What kind of server is this?
It looks like an L-class?
If it is an L-class, are these IO cards in the "Shared PCI" slots 3-6??
live free or die
harry
09-23-2002 10:26 AM
Re: Extremely HIGH Qlen in Glance.
Qlen is the average number of I/Os in the queue waiting to be processed by the physical disk. As you heard, this should be a low value.
The load balancing may not be doing you much good. Take a measurement of Qlen and the response times with PowerPath disabled.
Also, please post your sar -d stats.
-Sri
09-23-2002 10:30 AM
Re: Extremely HIGH Qlen in Glance.
JP
09-23-2002 10:32 AM
Re: Extremely HIGH Qlen in Glance.
Use sar -d to verify an I/O problem.
HTH
Marty
09-23-2002 10:40 AM
Re: Extremely HIGH Qlen in Glance.
Check out this sar -d... It's very entertaining!! LOL!
14:39:45 device %busy avque r+w/s blks/s avwait avserv
14:39:51 c3t6d0 56.09 0.50 56 562 5.22 16.29
c6t6d0 35.93 0.50 40 378 5.37 12.98
c25t0d2 36.53 52634.64 73 1450 8.47 11.75
c31t0d3 31.94 51402.71 50 1459 6.10 16.98
c19t0d2 33.53 52523.71 61 1287 10.32 13.80
c19t0d4 27.54 59244.19 45 1351 13101239296.00 0.00
c19t1d4 0.20 52412.50 0 3 0.00 0.00
c19t0d3 30.14 51167.66 50 1466 6.26 15.13
c19t1d2 36.53 48157.84 62 1699 6.06 14.07
c7t1d0 25.15 57678.84 46 1009 10.87 15.08
c7t1d1 38.52 49737.83 62 1517 13241108480.00 0.00
c7t1d2 28.54 48337.57 51 1463 5.44 12.31
c19t1d6 0.40 51179.50 0 3 7.55 14.82
c19t1d3 22.95 59280.55 26 699 6.22 19.50
c25t0d4 27.74 59268.39 42 1456 4099361536.00 0.00
c25t1d5 31.94 57703.77 39 1118 9.41 16.99
c25t1d0 21.96 57517.89 42 945 11.32 15.82
c25t0d3 32.93 51413.54 51 1627 5.10 14.30
c25t1d2 30.34 48104.50 51 1514 4.96 12.94
c25t1d1 35.13 49204.01 59 1463 6468949504.00 0.00
c25t2d6 1.20 65497.50 7 156 4.83 1.93
c25t0d1 32.73 52702.69 96 1618 7.44 8.45
c25t1d3 23.55 59331.64 27 703 7.55 20.66
c31t1d6 0.40 51065.50 0 3 6.31 18.90
c31t1d0 25.35 57775.03 45 917 13.16 15.51
c31t1d1 33.53 50013.25 52 1230 16.28 18.70
c31t1d4 0.80 52508.50 0 6 8.60 17.95
c31t1d5 31.34 57656.67 41 1124 8.84 15.96
c31t2d6 0.40 65505.50 5 141 3.78 1.89
c31t1d2 29.74 48363.68 55 1421 5.73 12.34
c31t1d3 23.35 59445.81 27 754 10.15 23.19
c31t0d1 33.53 52761.53 84 1482 5.63 8.25
c31t0d2 31.14 52358.66 59 1143 7.32 12.48
c31t0d4 27.54 59655.99 41 1327 15188329472.00 0.00
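[Editor's note] As a sanity check on those avque figures, Little's Law says the average queue length should be roughly the arrival rate times the average time a request spends waiting plus being served. A rough sketch (Python used for illustration; figures taken from the c25t0d2 row above):

```python
# Sanity check on sar's avque column using Little's Law:
#   mean queue length = arrival rate * mean time in system.
def expected_qlen(iops, avwait_ms, avserv_ms):
    """Expected average queue length for a device doing `iops`
    requests/sec with the given wait and service times (in ms)."""
    return iops * (avwait_ms + avserv_ms) / 1000.0

# c25t0d2 above: 73 r+w/s, 8.47 ms avwait, 11.75 ms avserv.
q = expected_qlen(73, 8.47, 11.75)
print(round(q, 2))  # about 1.48, nowhere near the 52634.64 avque sar reports
```

If the throughput and latency columns are anywhere near correct, a real queue of 50,000+ would imply multi-minute response times, so the huge avque values (and the nonsense avwait readings like 13101239296.00) look more like broken counters than a real backlog. That is an inference, not something the thread confirms.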
09-23-2002 10:45 AM
Re: Extremely HIGH Qlen in Glance.
Somehow my instinct tells me there is a problem with PowerPath. Did you try disabling it to see how the response times change? You may have to run without load balancing for a while, but you would eliminate a major factor from the scene.
sar -d tells me there is no problem with the Glance report; the two agree.
-Sri
09-23-2002 10:46 AM
Re: Extremely HIGH Qlen in Glance.
I am sorry; the firmware I posted about last time was for a different setup: an array connected to a Brocade switch.
You may refer to the following document for a detailed explanation of QLEN:
http://www2.itrc.hp.com/service/cki/docDisplay.do?docLocale=en_US&docId=200000062919617
Regards,
Anil
09-23-2002 11:05 AM
Re: Extremely HIGH Qlen in Glance.
I'm not so sure that the problem is with PowerPath. You can see how PowerPath is running with this command:
powermt display
You can watch PowerPath continuously with this:
powermt watch
I don't think you can disable PowerPath without rebooting. You aren't in any danger of losing connectivity to the disks, or EMC would be calling you. I still think this might be an issue with the mount options, and if these are VxFS filesystems you can change the options on the fly without unmounting or bringing down anything. What options do you have those filesystems mounted with?
JP
09-23-2002 11:12 AM
Re: Extremely HIGH Qlen in Glance.
I am using the default VxFS mount options, except the only option I add during mount time is -o largefiles.
09-23-2002 11:21 AM
Re: Extremely HIGH Qlen in Glance.
One of your posts showed the wait times. Your I/O waits were very low, which is good, but your streams waits were up around 84%. Does this application store lots and lots of small files that come and go via the network?
JP
09-23-2002 12:04 PM
Re: Extremely HIGH Qlen in Glance.
This is the fsadm output for the largest, most heavily hit filesystem:
The application does a lot of small-file work, but the files are not transmitted via the network; they are just worked on locally.
The application is telnet character-based. Another thing: there are close to 300,000 files in this filesystem, if that may be an issue.
v2500:/premdor/MP5/&SAVEDLISTS& # fsadm -F vxfs -E /premdor
Extent Fragmentation Report
Total Average Average Total
Files File Blks # Extents Free Blks
827793 20 1 2776275
blocks used for indirects: 366
% Free blocks in extents smaller than 64 blks: 28.48
% Free blocks in extents smaller than 8 blks: 3.97
% blks allocated to extents 64 blks or larger: 94.32
Free Extents By Size
1: 9121 2: 7297 4: 8308 8: 6894
16: 6441 32: 6027 64: 4390 128: 2299
256: 374 512: 269 1024: 211 2048: 0
4096: 0 8192: 0 16384: 0 32768: 19
09-23-2002 12:21 PM
Re: Extremely HIGH Qlen in Glance.
If I'm reading your last post correctly, you have over 827,000 files in this one filesystem. Yikes!! There are some performance issues for VxFS filesystems with large numbers of files. I have seen threads discussing that issue here and they generally say that around 100,000 to 150,000 files is the upper limit for good performance on VxFS filesystems.
I would try running the directory and extents reorganization on that filesystem ('fsadm -F vxfs -d -D FILESYSTEM' and 'fsadm -F vxfs -e -E FILESYSTEM' - see 'man fsadm_vxfs'). Since you have so many files I would suggest running it at an off peak time if you can. Try that and see if that helps your situation. If it doesn't, you might need to think about breaking up that large filesystem into several smaller filesystems to help your performance.
Since you have monitored your EMC (and EMC has looked at it also), and there aren't any issues there, I don't think there is much else you can do from the system side to make things better. There aren't any kernel parameters that will help your situation [we are all still searching for that magical, undocumented parameter that makes the system run 10 times faster ;) ]. It won't be an easy fix, but I'd sure like to see what happens.
JP
09-23-2002 12:26 PM
Re: Extremely HIGH Qlen in Glance.
Based on your first message and the recent one, I think you may want to make better use of your buffer cache.
Your figures show that there is no logical I/O at all (if I am reading the columns correctly). At the same time, the physical I/Os are not at alarming numbers. Raw I/O can be painful if it is not used in the right circumstances. If the I/Os are very small, then the buffer cache may help you a bit.
What is your "mount" output? You do seem to have the OnlineJFS options mincache=direct and convosync=direct enabled.
-Sri
09-23-2002 12:30 PM
Re: Extremely HIGH Qlen in Glance.
So many small files explains a lot. You may want to spread some of the files across different filesystems to see if it helps.
-Sri
09-23-2002 12:44 PM
Re: Extremely HIGH Qlen in Glance.
I am also moving about 400,000 of the files out of the 800,000-file filesystem tonight to see if I can get any sort of performance improvement out of it.
09-23-2002 02:51 PM
Re: Extremely HIGH Qlen in Glance.
I would suggest making one change at a time.
For example, your buffer cache is way too large and may cause large queues to disk when its data are flushed. The first thing I would do is reduce dbc_max_pct so that the buffer cache is no larger than 300 MB (I'm assuming you are using dynamic buffer cache). Then re-evaluate your disk I/O performance, consider defragmentation as your next step, re-evaluate again, and so on.
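[Editor's note] dbc_max_pct is a percentage of physical memory, so the value that caps the dynamic buffer cache near 300 MB depends on RAM size. A sketch (Python used for illustration; the 4 GB RAM figure is an assumption, not stated in the thread):

```python
import math

def dbc_max_pct_for(target_mb, ram_mb):
    """Largest integer dbc_max_pct that keeps the buffer cache
    at or below target_mb on a machine with ram_mb of RAM."""
    return max(1, math.floor(target_mb / ram_mb * 100))

# On a hypothetical 4 GB system, capping the cache near 300 MB:
print(dbc_max_pct_for(300, 4096))  # -> 7 (7% of 4 GB is about 287 MB)
```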
I would be curious to know which processes (or applications) are creating the large disk queues.
Mladen