<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Monitor disk/item=que in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431916#M65761</link>
    <description>A little off topic I think, but it may come in handy once. &lt;BR /&gt;&lt;BR /&gt;In case the disks are in an HSG80, keep in mind that the controller can be overloaded if you send it too many IOs. That way you can hang one of the four ports. The only way to get that port back is to reboot that controller (top or bottom). A possible cause can be too high a DIOLM on an account, for example the backup account. You can check whether the HSG is getting too many IOs with ana /sys: &lt;BR /&gt;&lt;BR /&gt;$ ana /sys&lt;BR /&gt;sda&amp;gt; fc stdt /all&lt;BR /&gt;&lt;BR /&gt;then check the QFseen. Anything higher than 0 means that the HSG had too many IOs at some point... &lt;BR /&gt;&lt;BR /&gt;Good luck with the performance.</description>
    <pubDate>Tue, 30 Nov 2004 03:52:53 GMT</pubDate>
    <dc:creator>DICTU OpenVMS</dc:creator>
    <dc:date>2004-11-30T03:52:53Z</dc:date>
    <item>
      <title>Monitor disk/item=que</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431907#M65752</link>
      <description>During prime time there are several disks which showed an average of almost 2. &lt;BR /&gt;&lt;BR /&gt;Is this indicative of any problem?&lt;BR /&gt;&lt;BR /&gt;The users are complaining about a performance slowdown during prime time, and we are wondering if this is any indication.&lt;BR /&gt;&lt;BR /&gt;The main application is DSM.&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Chaim</description>
      <pubDate>Mon, 29 Nov 2004 04:57:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431907#M65752</guid>
      <dc:creator>Chaim Budnick</dc:creator>
      <dc:date>2004-11-29T04:57:35Z</dc:date>
    </item>
    <item>
      <title>Re: Monitor disk/item=que</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431908#M65753</link>
      <description>Chaim,&lt;BR /&gt;&lt;BR /&gt;most probably: YES.&lt;BR /&gt;&lt;BR /&gt;It means that an IO request is placed in a queue before it gets its turn to be satisfied.&lt;BR /&gt;And especially if this is "only" an IO to get relation info about where to find the record with the data requested (or even where to find info about where to find the data!), then the response times tend to deteriorate exponentially.&lt;BR /&gt;&lt;BR /&gt;fwiw,&lt;BR /&gt;&lt;BR /&gt;Cheers.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;Jan</description>
      <pubDate>Mon, 29 Nov 2004 05:09:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431908#M65753</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-11-29T05:09:06Z</dc:date>
    </item>
    <item>
      <title>Re: Monitor disk/item=que</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431909#M65754</link>
      <description>It's an indication that there is always I/O queued for those disks. Whether or not that's a problem depends. Are there key files on those disks? What you need to know is the response time on those disks and whether it could be contributing to the perceived problem. For a performance problem you need to look at the whole picture - all of the resources involved in performing the operation that the users perceive as slow. When did the complaints start, and what's changed? Look for errors also.</description>
      <pubDate>Mon, 29 Nov 2004 05:09:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431909#M65754</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2004-11-29T05:09:37Z</dc:date>
    </item>
    <item>
      <title>Re: Monitor disk/item=que</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431910#M65755</link>
      <description>Is this a hardware or software problem?&lt;BR /&gt;&lt;BR /&gt;What can I do to try and diagnose this a little more sharply?&lt;BR /&gt;&lt;BR /&gt;Chaim</description>
      <pubDate>Mon, 29 Nov 2004 05:11:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431910#M65755</guid>
      <dc:creator>Chaim Budnick</dc:creator>
      <dc:date>2004-11-29T05:11:24Z</dc:date>
    </item>
    <item>
      <title>Re: Monitor disk/item=que</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431911#M65756</link>
      <description>Are there key files on those disks?&lt;BR /&gt;&lt;BR /&gt;Do you have PSDC or another tool to look at which files are particularly busy?&lt;BR /&gt;&lt;BR /&gt;If not, the information from &lt;BR /&gt;show mem/cach=(topq,vol=diskname) may be&lt;BR /&gt;helpful.</description>
      <pubDate>Mon, 29 Nov 2004 05:30:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431911#M65756</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2004-11-29T05:30:23Z</dc:date>
    </item>
    <item>
      <title>Re: Monitor disk/item=que</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431912#M65757</link>
      <description>DSM flushes its cache every 30 seconds (by default). During this interval you might have a queue. Unless you have a queue length of 2 all the time. In that case you should investigate (which process is doing the IO, and when).&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 29 Nov 2004 05:31:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431912#M65757</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-11-29T05:31:47Z</dc:date>
    </item>
    <item>
      <title>Re: Monitor disk/item=que</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431913#M65758</link>
      <description>In what tool did you see this average, and over what period was the average? It may be the effect of the flush every 30 seconds if the period for the average is longer than 30 seconds. The key thing is whether this I/O queue affects the system performance, hence my question on key files and caching statistics.</description>
      <pubDate>Mon, 29 Nov 2004 05:39:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431913#M65758</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2004-11-29T05:39:48Z</dc:date>
    </item>
    <item>
      <title>Re: Monitor disk/item=que</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431914#M65759</link>
      <description>I have a cluster running DSM. I checked and found that during end-of-day processing the queue length is about 2 - 2.5. We flush every second. But we don't have performance problems because of that.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 29 Nov 2004 05:43:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431914#M65759</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-11-29T05:43:47Z</dc:date>
    </item>
    <item>
      <title>Re: Monitor disk/item=que</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431915#M65760</link>
      <description>&lt;BR /&gt;Like Ian replied, be sure to look at the full picture. For example, if the system is near 100% CPU during prime time, then the IO queue is no significant problem (assuming no application component 'spins' waiting for IO :-).&lt;BR /&gt;&lt;BR /&gt;What you call disks, are they simple disks (there's your problem :-), or virtual units on a multi-disk stripe/raid/mirror? Very simplistically stated, it would be ok to have a queue length of up to 1 per physical disk per unit. For a 5-member raid-5, it should be reasonable to have an average queue of 3 or so.&lt;BR /&gt;&lt;BR /&gt;As always, much depends on specific application usage.&lt;BR /&gt;For example, the performance of a single application which does read, a little processing, read, a little more processing, in a tight loop will be 99% defined by the disk, but will never have more than 1 IO in the queue for just that task. Add another activity on that disk and it will seem really slow.&lt;BR /&gt;On the other end of the spectrum, take a task like VMS backup, or perhaps this DSM flush. What if it spends some processing to determine several IOs to be done, issues all of those IOs, and then waits for all of them to complete? There you would always see a queue, no matter how fast the disk, but it would not at all be a real problem. (Note: I am making up the DSM part; I have no understanding of its IO engine.)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;hope this helps some,&lt;BR /&gt;&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Nov 2004 09:51:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431915#M65760</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2004-11-29T09:51:12Z</dc:date>
    </item>
    <item>
      <title>Re: Monitor disk/item=que</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431916#M65761</link>
      <description>A little off topic I think, but it may come in handy once. &lt;BR /&gt;&lt;BR /&gt;In case the disks are in an HSG80, keep in mind that the controller can be overloaded if you send it too many IOs. That way you can hang one of the four ports. The only way to get that port back is to reboot that controller (top or bottom). A possible cause can be too high a DIOLM on an account, for example the backup account. You can check whether the HSG is getting too many IOs with ana /sys: &lt;BR /&gt;&lt;BR /&gt;$ ana /sys&lt;BR /&gt;sda&amp;gt; fc stdt /all&lt;BR /&gt;&lt;BR /&gt;then check the QFseen. Anything higher than 0 means that the HSG had too many IOs at some point... &lt;BR /&gt;&lt;BR /&gt;Good luck with the performance.</description>
      <pubDate>Tue, 30 Nov 2004 03:52:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431916#M65761</guid>
      <dc:creator>DICTU OpenVMS</dc:creator>
      <dc:date>2004-11-30T03:52:53Z</dc:date>
    </item>
    <item>
      <title>Re: Monitor disk/item=que</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431917#M65762</link>
      <description>But /all is only available on 7.3-1 and later.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 30 Nov 2004 04:00:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431917#M65762</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-11-30T04:00:05Z</dc:date>
    </item>
    <item>
      <title>Re: Monitor disk/item=que</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431918#M65763</link>
      <description>The disks in question are 5-member RAID5.&lt;BR /&gt;&lt;BR /&gt;Chaim</description>
      <pubDate>Tue, 30 Nov 2004 05:17:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/monitor-disk-item-que/m-p/3431918#M65763</guid>
      <dc:creator>Chaim Budnick</dc:creator>
      <dc:date>2004-11-30T05:17:55Z</dc:date>
    </item>
  </channel>
</rss>

