<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Extremely HIGH Qlen in Glance. in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811581#M828076</link>
    <description>Interesting.  Now, what options are those two filesystems mounted with?  I'm curious to see if you are bypassing the buffer cache.  What kind of application are you running on those filesystems?&lt;BR /&gt;&lt;BR /&gt;JP&lt;BR /&gt;</description>
    <pubDate>Mon, 23 Sep 2002 17:30:12 GMT</pubDate>
    <dc:creator>John Poff</dc:creator>
    <dc:date>2002-09-23T17:30:12Z</dc:date>
    <item>
      <title>Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811572#M828067</link>
      <description>I am going to make this as brief as possible. I have been dealing with this for months and could type up several pages :)&lt;BR /&gt;&lt;BR /&gt;I am having disk performance problems on HP-UX 11.0, but the issues do not appear to me to be disk related. For example, the EMC WorkLoad Analyzer tells me pretty much everything is running without any bottlenecks on the Symmetrix, and EMC has verified this with their SymTop tool that they use when dialing in.&lt;BR /&gt;&lt;BR /&gt;Yet disk access is extremely slow.&lt;BR /&gt;&lt;BR /&gt;I also have 4 Fibre Channel cards load balanced to the EMC using PowerPath. None of the cards ever reaches more than 50% utilization.&lt;BR /&gt;&lt;BR /&gt;The only issues I see anywhere are these:&lt;BR /&gt;&lt;BR /&gt;A)  Glance reports disks at 100% utilization most of the day, yet the EMC is not even close to 100% utilization.&lt;BR /&gt;&lt;BR /&gt;And the kicker:&lt;BR /&gt;&lt;BR /&gt;B)  Extremely high Qlens inside of Glance. I have never seen Qlens this high in my life.&lt;BR /&gt;&lt;BR /&gt;Can someone explain Qlen: what it is and how it works? Is there any way to get Qlen lower? Someone told me they thought a Qlen of 1000 was fairly high.. 
I have LUNS that have like 50k+ QLens most of the day!&lt;BR /&gt;&lt;BR /&gt;Help me please, I am in QLen hell :) lol&lt;BR /&gt;&lt;BR /&gt;Idx   Device               Util    Qlen       KB/Sec       Logl IO     Phys IO&lt;BR /&gt;--------------------------------------------------------------------------------&lt;BR /&gt;  43 6/1/0.1.16.0.0.0.3  33/ 43 52386.1  1160.7/ 1896.0   0.0/  0.0  53.2/ 72.0&lt;BR /&gt;  44 5/0/0.1.16.0.0.0.3  30/ 43 52187.0  1334.3/ 1907.2   0.0/  0.0  55.6/ 72.5&lt;BR /&gt;  45 3/0/0.1.17.0.0.1.3  49/ 30 59658.0   286.7/  442.2    na/   na  35.8/ 42.6&lt;BR /&gt;  46 4/1/0.1.17.0.0.1.3  48/ 30 59782.0   295.8/  433.6   0.0/  0.0  33.7/ 42.0&lt;BR /&gt;  47 6/1/0.1.16.0.0.1.3  48/ 30 59695.0   323.0/  436.6   0.0/  0.0  36.6/ 42.5&lt;BR /&gt;  48 5/0/0.1.16.0.0.1.3  46/ 30 59663.0   285.2/  433.7   0.0/  0.0  31.6/ 42.4&lt;BR /&gt;  49 3/0/0.1.17.0.0.0.4  18/ 28 60507.7   211.3/  411.1    na/   na  25.4/ 41.6&lt;BR /&gt;  50 4/1/0.1.17.0.0.0.4  21/ 28 60353.4   258.1/  406.8   0.0/  0.0  31.5/ 41.6&lt;BR /&gt;  51 6/1/0.1.16.0.0.0.4  23/ 29 60194.7   249.0/  405.0   0.0/  0.0  29.4/ 41.5&lt;BR /&gt;  52 5/0/0.1.16.0.0.0.4  22/ 29 60379.6   232.4/  404.3   0.0/  0.0  28.6/ 41.4&lt;BR /&gt;  53 3/0/0.1.17.0.0.1.5  41/ 40 57685.0   246.0/  428.4    na/   na  29.6/ 43.6&lt;BR /&gt;  54 4/1/0.1.17.0.0.1.5  44/ 40 57999.0   270.1/  424.7   0.0/  0.0  32.2/ 43.3&lt;BR /&gt;  55 6/1/0.1.16.0.0.1.5  39/ 40 58031.0   219.6/  427.5   0.0/  0.0  25.6/ 43.4&lt;BR /&gt;  56 5/0/0.1.16.0.0.1.5  44/ 39 57585.0   224.9/  423.4   0.0/  0.0  26.7/ 43.1&lt;BR /&gt;  57 3/0/0.1.17.0.0.1.0  26/ 38 58081.0   327.5/  638.6    na/   na  28.1/ 51.7&lt;BR /&gt;  58 4/1/0.1.17.0.0.1.0  26/ 38 58188.0   353.2/  628.9   0.0/  0.0  31.1/ 50.8&lt;BR /&gt;  59 6/1/0.1.16.0.0.1.0  22/ 37 57921.0   348.6/  638.5   0.0/  0.0  27.7/ 51.1&lt;BR /&gt;  60 5/0/0.1.16.0.0.1.0  25/ 38 57950.0   333.5/  637.0   0.0/  0.0  26.7/ 51.7&lt;BR /&gt;  61 3/0/0.1.17.0.0.1.1  86/ 41 
50789.6   400.0/  560.9    na/   na  38.8/ 48.4&lt;BR /&gt;  62 4/1/0.1.17.0.0.1.1  75/ 40 50792.2   422.6/  564.7   0.0/  0.0  36.2/ 48.2&lt;BR /&gt;  63 6/1/0.1.16.0.0.1.1  67/ 40 50485.3   460.3/  573.2   0.0/  0.0  39.4/ 48.5&lt;BR /&gt;  64 5/0/0.1.16.0.0.1.1  67/ 41 50485.4   442.2/  561.4   0.0/  0.0  38.8/ 48.0&lt;BR /&gt;  65 3/0/0.1.17.0.0.1.2  27/ 38 49446.0   294.3/  521.1    na/   na  30.9/ 47.2&lt;BR /&gt;  66 4/1/0.1.17.0.0.1.2  25/ 38 49488.0   321.5/  525.6   0.0/  0.0  33.0/ 47.2&lt;BR /&gt;  67 6/1/0.1.16.0.0.1.2  28/ 38 49278.0   297.3/  519.7   0.0/  0.0  31.8/ 46.5&lt;BR /&gt;  68 5/0/0.1.16.0.0.1.2  28/ 38 49270.0   288.3/  530.2   0.0/  0.0  27.9/ 47.0&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 23 Sep 2002 16:52:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811572#M828067</guid>
      <dc:creator>Aharon Chernin</dc:creator>
      <dc:date>2002-09-23T16:52:26Z</dc:date>
    </item>
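    <!-- A listing like the one above can be screened mechanically for runaway queues by filtering on the Qlen column. This is a minimal sketch under stated assumptions: the field positions are inferred from the listing in this post (the Util column prints as two tokens such as "33/ 43", so Qlen lands in awk field 5), the threshold of 1000 comes from the "fairly high" figure mentioned in the post, and the two sample rows are copied from it.

```shell
# Flag devices whose Glance Qlen exceeds a threshold (sketch; field
# positions inferred from the listing above: Util prints as "NN/ NN",
# so Qlen is the 5th whitespace-separated field).
hot=$(awk -v t=1000 'NF >= 5 && $5 + 0 > t { printf "%s Qlen=%s\n", $2, $5 }' <<'EOF'
  43 6/1/0.1.16.0.0.0.3  33/ 43 52386.1  1160.7/ 1896.0   0.0/  0.0  53.2/ 72.0
  44 5/0/0.1.16.0.0.0.3  30/ 43 52187.0  1334.3/ 1907.2   0.0/  0.0  55.6/ 72.5
EOF
)
echo "$hot"
```

Anything this prints is a candidate for a closer look with sar -d, as suggested elsewhere in the thread.
-->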
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811573#M828068</link>
      <description>You do not say what version of glance you are running. Make sure you have upgraded to the latest version. Earlier versions have problems with disk stats.&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;Marty</description>
      <pubDate>Mon, 23 Sep 2002 17:09:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811573#M828068</guid>
      <dc:creator>Martin Johnson</dc:creator>
      <dc:date>2002-09-23T17:09:17Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811574#M828069</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Can you tell us what filesystem(s) are on those disks, and what are the mount options?  What kind of data is on those disks?  Are there lots of small files or just a few big files (like an Oracle database)?&lt;BR /&gt;&lt;BR /&gt;The disk utilization % in Glance is just for the busiest disk on the system, and not for all the disks as a group.&lt;BR /&gt;&lt;BR /&gt;JP&lt;BR /&gt;</description>
      <pubDate>Mon, 23 Sep 2002 17:09:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811574#M828069</guid>
      <dc:creator>John Poff</dc:creator>
      <dc:date>2002-09-23T17:09:35Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811575#M828070</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Qlen reflects the number of I/O requests queued to a disk. You may reduce the number of physical connections to resolve this issue.&lt;BR /&gt;&lt;BR /&gt;There is also firmware available to resolve this issue; you may ask HP for a free in-warranty firmware upgrade.&lt;BR /&gt;&lt;BR /&gt;I believe the latest release is HP16. In order to get there, you'll need to upgrade Command View SDM to version 1.04. I believe the firmware you have for the Brocade is the latest. There are some significant performance enhancements included in HP14; however, that was a factory-only install. HP15 also included some additional enhancements. I'm not sure what's included in HP16, but I believe it should be available. Contact your local HP office to schedule an upgrade.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Anil</description>
      <pubDate>Mon, 23 Sep 2002 17:12:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811575#M828070</guid>
      <dc:creator>Anil C. Sedha</dc:creator>
      <dc:date>2002-09-23T17:12:10Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811577#M828072</link>
      <description>Also, are you seeing processes blocked on I/O waits?&lt;BR /&gt;&lt;BR /&gt;JP</description>
      <pubDate>Mon, 23 Sep 2002 17:16:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811577#M828072</guid>
      <dc:creator>John Poff</dc:creator>
      <dc:date>2002-09-23T17:16:13Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811578#M828073</link>
      <description>OK, my replies :)&lt;BR /&gt;&lt;BR /&gt;*)  I am running Glance 3.35.&lt;BR /&gt;&lt;BR /&gt;*)  The filesystems on the disks are VxFS 3.1. There are 2 filesystems, about 200 GB in total: mostly small files, but large files as well, approximately 200,000 files per filesystem.&lt;BR /&gt;&lt;BR /&gt;*)  Anil, what firmware are you speaking of?  The V-Class firmware, which you load through the b180 console?  Or some sort of Brocade switch firmware?  Do you have a link you can send me?&lt;BR /&gt;&lt;BR /&gt;*)  John, here are my global waits from Glance.  Does this help?&lt;BR /&gt;&lt;BR /&gt;                               Procs/                                  Procs/&lt;BR /&gt;Event           %       Time  Threads Blocked On      %       Time    Threads&lt;BR /&gt;--------------------------------------------------------------------------------&lt;BR /&gt;IPC           0.0       0.00      0.0 Cache         0.1      18.78      2.8&lt;BR /&gt;Job Control   0.0       0.00      0.0 CDROM IO      0.0       0.00      0.0&lt;BR /&gt;Message       0.0       6.75      1.0 Disk IO       0.0       0.00      0.0&lt;BR /&gt;Pipe          0.5      72.95     11.0 Graphics      0.0       0.00      0.0&lt;BR /&gt;RPC           0.0       0.00      0.0 Inode         0.0       0.00      0.0&lt;BR /&gt;Semaphore     0.0       6.62      1.0 IO            0.3      37.99      5.7&lt;BR /&gt;Sleep         8.7    1251.99    189.4 LAN           0.0       0.00      0.0&lt;BR /&gt;Socket        0.2      26.47      4.0 NFS           0.0       0.00      0.0&lt;BR /&gt;Stream       84.6   12213.00   1847.7 Priority      0.0       3.18      0.5&lt;BR /&gt;Terminal      0.0       6.62      1.0 System        3.5     511.87     77.4&lt;BR /&gt;Other         1.5     218.21     33.0 Virtual Mem   0.0       0.72      0.1&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 23 Sep 2002 17:23:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811578#M828073</guid>
      <dc:creator>Aharon Chernin</dc:creator>
      <dc:date>2002-09-23T17:23:01Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811579#M828074</link>
      <description>&lt;BR /&gt;What kind of server is this?&lt;BR /&gt;&lt;BR /&gt;It looks like an L-class?&lt;BR /&gt;&lt;BR /&gt;If it is an L-class, are these IO cards in the "Shared PCI" slots 3-6??&lt;BR /&gt;&lt;BR /&gt;live free or die&lt;BR /&gt;harry</description>
      <pubDate>Mon, 23 Sep 2002 17:25:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811579#M828074</guid>
      <dc:creator>harry d brown jr</dc:creator>
      <dc:date>2002-09-23T17:25:37Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811580#M828075</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Qlen is the average number of I/Os in the queue waiting to be processed by the physical disk. As you heard, this should be a low value.&lt;BR /&gt;&lt;BR /&gt;The load balancing may not be working very well. Measure Qlen and the response time again with PowerPath disabled.&lt;BR /&gt;&lt;BR /&gt;Also, report your sar -d stats.&lt;BR /&gt;&lt;BR /&gt;-Sri</description>
      <pubDate>Mon, 23 Sep 2002 17:26:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811580#M828075</guid>
      <dc:creator>Sridhar Bhaskarla</dc:creator>
      <dc:date>2002-09-23T17:26:20Z</dc:date>
    </item>
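    <!-- Sridhar's definition can be sanity-checked with Little's law: the average queue length is roughly throughput multiplied by the time each request spends at the device. The figures below (about 50 r+w/s and about 15 ms average service time) are round numbers taken from the sar output posted in this thread, used purely for illustration:

```shell
# Little's law: Qlen ~= arrival rate (IO/s) x residence time (s).
# 50 IO/s and 15 ms are illustrative figures from the sar -d output
# posted in this thread.
expected=$(awk 'BEGIN { printf "%.2f", 50 * 0.015 }')
echo "$expected"
```

An expected queue near 0.75 requests makes reported Qlens in the tens of thousands look more like a measurement artifact than a real backlog.
-->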
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811581#M828076</link>
      <description>Interesting.  Now, what options are those two filesystems mounted with?  I'm curious to see if you are bypassing the buffer cache.  What kind of application are you running on those filesystems?&lt;BR /&gt;&lt;BR /&gt;JP&lt;BR /&gt;</description>
      <pubDate>Mon, 23 Sep 2002 17:30:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811581#M828076</guid>
      <dc:creator>John Poff</dc:creator>
      <dc:date>2002-09-23T17:30:12Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811582#M828077</link>
      <description>The latest Glance for HP-UX is 3.58.&lt;BR /&gt;&lt;BR /&gt;Use sar -d to verify an I/O problem.&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;Marty</description>
      <pubDate>Mon, 23 Sep 2002 17:32:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811582#M828077</guid>
      <dc:creator>Martin Johnson</dc:creator>
      <dc:date>2002-09-23T17:32:18Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811583#M828078</link>
      <description>The only mount option I am using is -o largefiles.&lt;BR /&gt;&lt;BR /&gt;Check out this sar -d... It's very entertaining!! LOL!&lt;BR /&gt;&lt;BR /&gt;14:39:45   device   %busy   avque   r+w/s  blks/s  avwait  avserv&lt;BR /&gt;14:39:51   c3t6d0   56.09    0.50      56     562    5.22   16.29&lt;BR /&gt;           c6t6d0   35.93    0.50      40     378    5.37   12.98&lt;BR /&gt;          c25t0d2   36.53 52634.64      73    1450    8.47   11.75&lt;BR /&gt;          c31t0d3   31.94 51402.71      50    1459    6.10   16.98&lt;BR /&gt;          c19t0d2   33.53 52523.71      61    1287   10.32   13.80&lt;BR /&gt;          c19t0d4   27.54 59244.19      45    1351 13101239296.00    0.00&lt;BR /&gt;          c19t1d4    0.20 52412.50       0       3    0.00    0.00&lt;BR /&gt;          c19t0d3   30.14 51167.66      50    1466    6.26   15.13&lt;BR /&gt;          c19t1d2   36.53 48157.84      62    1699    6.06   14.07&lt;BR /&gt;           c7t1d0   25.15 57678.84      46    1009   10.87   15.08&lt;BR /&gt;           c7t1d1   38.52 49737.83      62    1517 13241108480.00    0.00&lt;BR /&gt;           c7t1d2   28.54 48337.57      51    1463    5.44   12.31&lt;BR /&gt;          c19t1d6    0.40 51179.50       0       3    7.55   14.82&lt;BR /&gt;          c19t1d3   22.95 59280.55      26     699    6.22   19.50&lt;BR /&gt;          c25t0d4   27.74 59268.39      42    1456 4099361536.00    0.00&lt;BR /&gt;          c25t1d5   31.94 57703.77      39    1118    9.41   16.99&lt;BR /&gt;          c25t1d0   21.96 57517.89      42     945   11.32   15.82&lt;BR /&gt;          c25t0d3   32.93 51413.54      51    1627    5.10   14.30&lt;BR /&gt;          c25t1d2   30.34 48104.50      51    1514    4.96   12.94&lt;BR /&gt;          c25t1d1   35.13 49204.01      59    1463 6468949504.00    0.00&lt;BR /&gt;          c25t2d6    1.20 65497.50       7     156    4.83    1.93&lt;BR /&gt;          c25t0d1   32.73 52702.69      96    1618    7.44    8.45&lt;BR /&gt;          
c25t1d3   23.55 59331.64      27     703    7.55   20.66&lt;BR /&gt;          c31t1d6    0.40 51065.50       0       3    6.31   18.90&lt;BR /&gt;          c31t1d0   25.35 57775.03      45     917   13.16   15.51&lt;BR /&gt;          c31t1d1   33.53 50013.25      52    1230   16.28   18.70&lt;BR /&gt;          c31t1d4    0.80 52508.50       0       6    8.60   17.95&lt;BR /&gt;          c31t1d5   31.34 57656.67      41    1124    8.84   15.96&lt;BR /&gt;          c31t2d6    0.40 65505.50       5     141    3.78    1.89&lt;BR /&gt;          c31t1d2   29.74 48363.68      55    1421    5.73   12.34&lt;BR /&gt;          c31t1d3   23.35 59445.81      27     754   10.15   23.19&lt;BR /&gt;          c31t0d1   33.53 52761.53      84    1482    5.63    8.25&lt;BR /&gt;          c31t0d2   31.14 52358.66      59    1143    7.32   12.48&lt;BR /&gt;          c31t0d4   27.54 59655.99      41    1327 15188329472.00    0.00&lt;BR /&gt;</description>
      <pubDate>Mon, 23 Sep 2002 17:40:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811583#M828078</guid>
      <dc:creator>Aharon Chernin</dc:creator>
      <dc:date>2002-09-23T17:40:44Z</dc:date>
    </item>
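    <!-- The avwait values in the billions above (e.g. 13101239296.00), sitting next to moderate %busy and ordinary avserv times, suggest the queue counters themselves may be corrupt rather than the disks being slow. A small sketch that separates plausible sar -d lines from suspect ones; the sample rows are copied from the output above with timestamps dropped, and the 1000 ms cutoff is an arbitrary assumption:

```shell
# Separate sar -d lines with plausible avwait from suspect ones.
# Columns: device %busy avque r+w/s blks/s avwait avserv
suspect=$(awk '$6 + 0 > 1000 { print $1 }' <<'EOF'
c3t6d0   56.09    0.50      56     562    5.22   16.29
c25t0d2  36.53 52634.64     73    1450    8.47   11.75
c19t0d4  27.54 59244.19     45    1351 13101239296.00    0.00
EOF
)
echo "$suspect"
```

Devices that show up here have wait figures no physical disk could produce, which supports the suggestion in this thread that the reported queue statistics are not trustworthy.
-->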
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811584#M828079</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Somehow my instinct tells me that there is a problem with PowerPath. Did you try disabling it to see how the response times were? You would run without load balancing for a while, but you could eliminate a major factor from the scene.&lt;BR /&gt;&lt;BR /&gt;The sar -d output tells me there is no problem with the Glance report.&lt;BR /&gt;&lt;BR /&gt;-Sri</description>
      <pubDate>Mon, 23 Sep 2002 17:45:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811584#M828079</guid>
      <dc:creator>Sridhar Bhaskarla</dc:creator>
      <dc:date>2002-09-23T17:45:37Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811585#M828080</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I am sorry; when I posted about firmware last time, it was for an array connected to a Brocade switch.&lt;BR /&gt;&lt;BR /&gt;You may refer to the following document for an explanation of Qlen in detail.&lt;BR /&gt;&lt;A href="http://www2.itrc.hp.com/service/cki/docDisplay.do?docLocale=en_US&amp;amp;docId=200000062919617" target="_blank"&gt;http://www2.itrc.hp.com/service/cki/docDisplay.do?docLocale=en_US&amp;amp;docId=200000062919617&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Anil</description>
      <pubDate>Mon, 23 Sep 2002 17:46:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811585#M828080</guid>
      <dc:creator>Anil C. Sedha</dc:creator>
      <dc:date>2002-09-23T17:46:12Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811586#M828081</link>
      <description>Does anyone know how to disable PowerPath?  It's worth a shot, I guess.  Is this safe to do on a live system?  I don't care if the machine gets any slower; I am just concerned about losing connectivity to the disks.</description>
      <pubDate>Mon, 23 Sep 2002 17:49:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811586#M828081</guid>
      <dc:creator>Aharon Chernin</dc:creator>
      <dc:date>2002-09-23T17:49:37Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811587#M828082</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;I'm not so sure that the problem is with PowerPath.  You can see how PowerPath is running with this command:&lt;BR /&gt;&lt;BR /&gt;powermt display&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;You can watch PowerPath continuously with this:&lt;BR /&gt;&lt;BR /&gt;powermt watch&lt;BR /&gt;&lt;BR /&gt;I don't think you can disable PowerPath without rebooting.  You aren't in any danger of losing connectivity to the disks, or EMC would be calling you.  I still think this might be an issue with the mount options, and if these are VxFS filesystems you can change the options on the fly without unmounting or bringing down anything.  What options do you have those filesystems mounted with?&lt;BR /&gt;&lt;BR /&gt;JP&lt;BR /&gt;</description>
      <pubDate>Mon, 23 Sep 2002 18:05:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811587#M828082</guid>
      <dc:creator>John Poff</dc:creator>
      <dc:date>2002-09-23T18:05:22Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811588#M828083</link>
      <description>powermt display dev=all tells me pretty much everything is working OK with PowerPath.&lt;BR /&gt;&lt;BR /&gt;I am using the default VxFS mount options; the only option I add at mount time is -o largefiles.</description>
      <pubDate>Mon, 23 Sep 2002 18:12:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811588#M828083</guid>
      <dc:creator>Aharon Chernin</dc:creator>
      <dc:date>2002-09-23T18:12:26Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811589#M828084</link>
      <description>OK.  Have you checked for filesystem directory and extent fragmentation?  I'd be curious to see what 'fsadm -F vxfs -D' and '-E' report.&lt;BR /&gt;&lt;BR /&gt;One of your posts showed the wait times.  Your I/O waits were very low, which is good, but your Streams waits were up around 84%.  Does this application store lots and lots of small files that come and go via the network?&lt;BR /&gt;&lt;BR /&gt;JP&lt;BR /&gt;</description>
      <pubDate>Mon, 23 Sep 2002 18:21:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811589#M828084</guid>
      <dc:creator>John Poff</dc:creator>
      <dc:date>2002-09-23T18:21:30Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811590#M828085</link>
      <description>John,&lt;BR /&gt;&lt;BR /&gt;This is the fsadm report for the largest, most heavily hit filesystem.&lt;BR /&gt;&lt;BR /&gt;The application does a lot of small-file work, but the files are not transmitted via the network; they are just worked on locally.&lt;BR /&gt;&lt;BR /&gt;The application is telnet character based. Another thing: there are close to 300,000 files in this filesystem, if that may be an issue.&lt;BR /&gt;&lt;BR /&gt;v2500:/premdor/MP5/&amp;amp;SAVEDLISTS&amp;amp; # fsadm -F vxfs -E /premdor&lt;BR /&gt;  Extent Fragmentation Report&lt;BR /&gt;        Total    Average      Average     Total&lt;BR /&gt;        Files    File Blks    # Extents   Free Blks&lt;BR /&gt;       827793          20           1     2776275&lt;BR /&gt;    blocks used for indirects: 366&lt;BR /&gt;    % Free blocks in extents smaller than 64 blks: 28.48&lt;BR /&gt;    % Free blocks in extents smaller than  8 blks: 3.97&lt;BR /&gt;    % blks allocated to extents 64 blks or larger: 94.32&lt;BR /&gt;    Free Extents By Size&lt;BR /&gt;        1:       9121      2:       7297      4:       8308      8:       6894&lt;BR /&gt;       16:       6441     32:       6027     64:       4390    128:       2299&lt;BR /&gt;      256:        374    512:        269   1024:        211   2048:          0&lt;BR /&gt;     4096:          0   8192:          0  16384:          0  32768:         19&lt;BR /&gt;</description>
      <pubDate>Mon, 23 Sep 2002 19:04:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811590#M828085</guid>
      <dc:creator>Aharon Chernin</dc:creator>
      <dc:date>2002-09-23T19:04:43Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely HIGH Qlen in Glance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811591#M828086</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;If I'm reading your last post correctly, you have over 827,000 files in this one filesystem.  Yikes!!  There are some performance issues for VxFS filesystems with large numbers of files.  I have seen threads discussing that issue here and they generally say that around 100,000 to 150,000 files is the upper limit for good performance on VxFS filesystems.&lt;BR /&gt;&lt;BR /&gt;I would try running the directory and extents reorganization on that filesystem ('fsadm -F vxfs -d -D FILESYSTEM' and 'fsadm -F vxfs -e -E FILESYSTEM' - see 'man fsadm_vxfs').  Since you have so many files I would suggest running it at an off peak time if you can.  Try that and see if that helps your situation.  If it doesn't, you might need to think about breaking up that large filesystem into several smaller filesystems to help your performance.&lt;BR /&gt;&lt;BR /&gt;Since you have monitored your EMC (and EMC has looked at it also), and there aren't any issues there, I don't think there is much else you can do from the system side to make things better.  There aren't any kernel parameters that will help your situation [we are all still searching for that magical, undocumented parameter that makes the system run 10 times faster ;) ].  It won't be an easy fix, but I'd sure like to see what happens.&lt;BR /&gt;&lt;BR /&gt;JP&lt;BR /&gt;</description>
      <pubDate>Mon, 23 Sep 2002 19:21:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/extremly-high-qlen-in-glance/m-p/2811591#M828086</guid>
      <dc:creator>John Poff</dc:creator>
      <dc:date>2002-09-23T19:21:51Z</dc:date>
    </item>
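    <!-- The reorganization advice above can be wrapped in a small off-peak script. This is a sketch, not a tested procedure: the filesystem list is a placeholder (only /premdor appears in this thread), and with DRYRUN=1 it only prints the fsadm commands from 'man fsadm_vxfs' so they can be reviewed before being run for real.

```shell
#!/bin/sh
# Print (DRYRUN=1) or run directory and extent reorganization for
# each listed VxFS filesystem, per 'man fsadm_vxfs'.
DRYRUN=1
FILESYSTEMS="/premdor"   # placeholder list; substitute your mount points
plan=$(for fs in $FILESYSTEMS; do
    echo "fsadm -F vxfs -d -D $fs"   # directory reorg plus report
    echo "fsadm -F vxfs -e -E $fs"   # extent reorg plus report
done)
echo "$plan"
if [ "$DRYRUN" -eq 0 ]; then
    echo "$plan" | sh
fi
```

With 800,000+ files this can run for a long time, so scheduling it off peak, as suggested above, seems prudent.
-->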
  </channel>
</rss>

