<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: improving I/O performance and max queue depth for disks with slow users in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992388#M490563</link>
    <description>&lt;P&gt;Hi Bill,&lt;/P&gt;&lt;P&gt;Yes, I posted only one part of the output.&lt;/P&gt;&lt;P&gt;At the moment there is no big load and no heavy work running on the machine,&lt;/P&gt;&lt;P&gt;but as soon as work starts there are big peaks in I/O and RAM usage;&lt;/P&gt;&lt;P&gt;you have already seen the occasional 100% disk usage.&lt;/P&gt;&lt;P&gt;I'll post more shortly.&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Best, Max&lt;/P&gt;</description>
    <pubDate>Fri, 29 Dec 2017 07:44:16 GMT</pubDate>
    <dc:creator>Max5</dc:creator>
    <dc:date>2017-12-29T07:44:16Z</dc:date>
    <item>
      <title>improving I/O performance and max queue depth for disks with slow users</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992072#M490552</link>
      <description>&lt;P&gt;Hi all,&lt;/P&gt;&lt;P&gt;To improve disk I/O performance I am considering modifying the maximum queue depth, which is currently at its default of 8.&lt;/P&gt;&lt;P&gt;I would like to tune it to 32.&lt;/P&gt;&lt;P&gt;There are 8 physical disks for application data, one disk per VG, 136 GB each.&lt;/P&gt;&lt;P&gt;It is an old HP-UX OS, roughly 15 years old;&lt;/P&gt;&lt;P&gt;vendor Compaq, product SAS BF14684970.&lt;/P&gt;&lt;P&gt;The problem is very slow SAS performance for the users.&lt;/P&gt;&lt;P&gt;Would setting the maximum queue depth to 32 be a good way to improve performance?&lt;/P&gt;&lt;P&gt;What is your opinion?&lt;/P&gt;&lt;P&gt;Thanks in advance, best regards, Max&lt;/P&gt;</description>
      <pubDate>Fri, 22 Dec 2017 15:30:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992072#M490552</guid>
      <dc:creator>Max5</dc:creator>
      <dc:date>2017-12-22T15:30:25Z</dc:date>
    </item>
    <item>
      <title>Re: improving I/O performance and max queue depth for disks with slow users</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992244#M490553</link>
      <description>&lt;P&gt;I don't think you'll see anything better with a deeper queue depth. The queue depth allows more requests to pile up, but there is no performance change since HP-UX is already running as fast as it can. Your layout (8 disks, one disk per VG) has several performance issues. The most important is that it sounds as if the disks are not mirrored. That means complete loss of the data on any disk that fails, plus production downtime while the repair is made and data is restored from backups.&lt;/P&gt;&lt;P&gt;There is very little that can be done to improve a single-disk VG's performance. The recommendation is to replace all of the disks with a modern disk array where several disks are used in striped mode to reduce overall access time and improve the data transfer speed.&lt;/P&gt;&lt;P&gt;Note: The system hardware and version of HP-UX must be considered when looking at adding modern disk arrays. Older PA-RISC systems running HP-UX 11.11 or earlier cannot connect to more recent arrays such as the MSA2040. An alternative would be to replace the entire system with an rx2800, where all your storage would be internal with an array controller maximizing performance. This can be done with HP-UX Containers.&lt;/P&gt;</description>
      <pubDate>Wed, 27 Dec 2017 03:00:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992244#M490553</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2017-12-27T03:00:51Z</dc:date>
    </item>
    <item>
      <title>Re: improving I/O performance and max queue depth for disks with slow users</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992276#M490554</link>
      <description>&lt;P&gt;Hi Bill,&lt;/P&gt;&lt;P&gt;Thanks for your feedback.&lt;/P&gt;&lt;P&gt;Even though what you write is useful, I still think extending the queue depth from 8 to 16 is a worthwhile test;&lt;/P&gt;&lt;P&gt;8 is a very low default.&lt;/P&gt;&lt;P&gt;The only way to try it is to organize the change and then test the performance.&lt;/P&gt;&lt;P&gt;Thanks for the information.&lt;/P&gt;&lt;P&gt;Best regards, Max&lt;/P&gt;</description>
      <pubDate>Wed, 27 Dec 2017 11:22:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992276#M490554</guid>
      <dc:creator>Max5</dc:creator>
      <dc:date>2017-12-27T11:22:20Z</dc:date>
    </item>
    <item>
      <title>Re: improving I/O performance and max queue depth for disks with slow users</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992282#M490555</link>
      <description>&lt;P&gt;May I ask: with &lt;SPAN&gt;kmtune -u -s scsi_max_qdepth=16&lt;/SPAN&gt;, can I apply the new value to all disk devices dynamically?&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Only this command and no other, as far as I know,&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;and no reboot is necessary.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thanks in advance.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 27 Dec 2017 13:06:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992282#M490555</guid>
      <dc:creator>Max5</dc:creator>
      <dc:date>2017-12-27T13:06:07Z</dc:date>
    </item>
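A sketch of how that change is typically applied on HP-UX 11.31 (the release shown in the sar output later in this thread). kmtune is the older interface; on 11i v2 and later the tool is kctune, and scsi_max_qdepth is a dynamic tunable, so no reboot is needed. scsictl can override the depth for one device. The device file below is the busy disk from this thread; treat the exact commands as an assumption to verify against your own system, not a recipe.

```shell
# Sketch only -- HP-UX 11.31 commands; verify on your system before running.

# Show the current system-wide value (default 8):
kctune scsi_max_qdepth

# Raise it dynamically for all disks; scsi_max_qdepth is a dynamic
# tunable on 11.31, so the change takes effect without a reboot:
kctune scsi_max_qdepth=16

# Or override the queue depth for a single disk only, then display it:
scsictl -m queue_depth=16 -a /dev/rdsk/c3t2d0
```

A per-device override keeps the system-wide default untouched, which limits the blast radius of the experiment.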
    <item>
      <title>Re: improving I/O performance and max queue depth for disks with slow users</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992293#M490556</link>
      <description>&lt;P&gt;Or, in addition, I could look at these filesystem tuning values; would you suggest some settings? They are currently at their defaults:&lt;/P&gt;&lt;PRE&gt;default_indir_size&lt;BR /&gt;discovered_direct_iosz&lt;BR /&gt;hsm_write_prealloc&lt;/PRE&gt;&lt;P&gt;Thanks, best regards!&lt;/P&gt;</description>
      <pubDate>Wed, 27 Dec 2017 17:02:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992293#M490556</guid>
      <dc:creator>Max5</dc:creator>
      <dc:date>2017-12-27T17:02:56Z</dc:date>
    </item>
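For inspecting those VxFS tunables before touching anything, a hedged sketch using vxtunefs (the mount point /app01 is hypothetical; substitute one of the application filesystems). With no -o option the command only prints current values, which is safe:

```shell
# Sketch only -- VxFS tuning on HP-UX; /app01 is a hypothetical mount point.

# Print the current values of all VxFS tunables for one filesystem
# (read-only operation):
vxtunefs /app01

# Setting a value looks like the line below; it is left commented out
# because it should only follow careful measurement under a
# repeatable workload:
# vxtunefs -o discovered_direct_iosz=262144 /app01
```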
    <item>
      <title>Re: improving I/O performance and max queue depth for disks with slow users</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992295#M490557</link>
      <description>&lt;P&gt;I would not change the VxFS parameters without repeatable workloads and careful analysis.&lt;BR /&gt;I think you'll find that the default settings are the best for single disks.&lt;BR /&gt;If you have Glance installed, use it to provide details on the system's current workload.&lt;BR /&gt;You need to determine whether the system is actually disk-bound or has a large CPU load.&lt;BR /&gt;Without Glance, you have to use sar to monitor current OS status.&lt;/P&gt;&lt;P&gt;For CPU load, use &lt;STRONG&gt;top&lt;/STRONG&gt;.&lt;BR /&gt;For disks, use &lt;STRONG&gt;sar -d 2 20&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;There is no point in changing disk or filesystem settings if the CPU is 100% loaded. For any meaningful recommendations, you need to provide:&lt;BR /&gt;&lt;BR /&gt;System model, HP-UX version, RAM, processor count&lt;/P&gt;</description>
      <pubDate>Wed, 27 Dec 2017 18:03:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992295#M490557</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2017-12-27T18:03:18Z</dc:date>
    </item>
    <item>
      <title>Re: improving I/O performance and max queue depth for disks with slow users</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992339#M490561</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;The CPU is not at 100% or saturated, but disk I/O has some strong peaks.&lt;/P&gt;&lt;P&gt;Thanks in advance.&lt;/P&gt;&lt;PRE&gt;# sar -d 5 5&lt;BR /&gt;HP-UX B.11.31 U ia64&lt;BR /&gt;&lt;BR /&gt;11:36:03 device %busy avque r+w/s blks/s avwait avserv&lt;BR /&gt;11:36:08 c1t3d0 0.20 0.50 0 0 0.00 10.55&lt;BR /&gt;c3t2d0 100.00 54.92 918 109799 60.71 8.70&lt;BR /&gt;c3t4d0 0.20 0.50 0 0 0.00 7.75&lt;BR /&gt;disk1 0.60 0.50 1 10 0.00 5.08&lt;BR /&gt;disk19 0.20 0.50 0 0 0.00 10.55&lt;BR /&gt;disk24 100.00 54.92 918 109799 60.71 8.70&lt;/PRE&gt;</description>
      <pubDate>Thu, 28 Dec 2017 12:02:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992339#M490561</guid>
      <dc:creator>Max5</dc:creator>
      <dc:date>2017-12-28T12:02:19Z</dc:date>
    </item>
    <item>
      <title>Re: improving I/O performance and max queue depth for disks with slow users</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992367#M490562</link>
      <description>&lt;P&gt;Is there a reason that you did not paste all of the lines from sar?&lt;BR /&gt;This is just a single picture of a few of your disks.&lt;BR /&gt;There should have been 5 groups of 5-second snapshots plus a summary.&lt;BR /&gt;What little it shows seems to point out something unusual in your disk assignments:&lt;BR /&gt;&lt;SPAN&gt;c3t2d0 and disk24 appear to be the same physical disk.&lt;BR /&gt;&lt;/SPAN&gt;It is also the only disk that is busy.&lt;BR /&gt;Based on this sparse information, your system is not busy at all.&lt;BR /&gt;Changing the queue depth will likely have no measurable effect.&lt;/P&gt;&lt;P&gt;How about the requested information about your server?&lt;BR /&gt;And what program or database is running slow on this server?&lt;/P&gt;</description>
      <pubDate>Thu, 28 Dec 2017 20:43:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992367#M490562</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2017-12-28T20:43:01Z</dc:date>
    </item>
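Bill's reading of the sar output can be checked mechanically: a disk looks queue-bound when %busy is near 100 and avwait exceeds avserv. A small awk sketch over the rows posted in this thread (not an HP-UX tool, just a filter for the column layout shown above):

```shell
# Flag queue-bound devices in `sar -d` rows: ~100% busy and avwait > avserv.
# Column layout as posted in this thread; some rows carry a leading timestamp.
sar_rows='11:36:08 c1t3d0 0.20 0.50 0 0 0.00 10.55
c3t2d0 100.00 54.92 918 109799 60.71 8.70
c3t4d0 0.20 0.50 0 0 0.00 7.75
disk1 0.60 0.50 1 10 0.00 5.08
disk19 0.20 0.50 0 0 0.00 10.55
disk24 100.00 54.92 918 109799 60.71 8.70'

printf '%s\n' "$sar_rows" | awk '
  $1 ~ /:/ { for (i = 1; i < NF; i++) $i = $(i + 1); NF-- }  # drop timestamp
  NF == 7 && $2 >= 99 && $6 > $7 { print $1 }'
# prints c3t2d0 and disk24 -- the same physical disk seen twice
```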
    <item>
      <title>Re: improving I/O performance and max queue depth for disks with slow users</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992388#M490563</link>
      <description>&lt;P&gt;Hi Bill,&lt;/P&gt;&lt;P&gt;Yes, I posted only one part of the output.&lt;/P&gt;&lt;P&gt;At the moment there is no big load and no heavy work running on the machine,&lt;/P&gt;&lt;P&gt;but as soon as work starts there are big peaks in I/O and RAM usage;&lt;/P&gt;&lt;P&gt;you have already seen the occasional 100% disk usage.&lt;/P&gt;&lt;P&gt;I'll post more shortly.&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Best, Max&lt;/P&gt;</description>
      <pubDate>Fri, 29 Dec 2017 07:44:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992388#M490563</guid>
      <dc:creator>Max5</dc:creator>
      <dc:date>2017-12-29T07:44:16Z</dc:date>
    </item>
    <item>
      <title>Re: improving I/O performance and max queue depth for disks with slow users</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992422#M490566</link>
      <description>&lt;PRE&gt;# swapinfo -m&lt;BR /&gt;             Mb      Mb      Mb   PCT  START/      Mb&lt;BR /&gt;TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME&lt;BR /&gt;dev        8192       0    8192    0%       0       -    1  /dev/vg00/lvol4&lt;BR /&gt;reserve       -    4226   -4226&lt;BR /&gt;memory    71887   12851   59048   18%&lt;/PRE&gt;</description>
      <pubDate>Fri, 29 Dec 2017 16:58:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992422#M490566</guid>
      <dc:creator>Max5</dc:creator>
      <dc:date>2017-12-29T16:58:43Z</dc:date>
    </item>
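As a sanity check on that output, swapinfo's PCT USED column is simply USED divided by AVAIL, rounded to a whole percent; the memory row (12851 of 71887 MB) comes out at 18%. A tiny sketch with the figures from this post:

```shell
# Recompute swapinfo's PCT USED column (USED / AVAIL, rounded) from the
# figures posted above.
pct_used() {
    awk -v used="$1" -v avail="$2" 'BEGIN { printf "%d\n", used / avail * 100 + 0.5 }'
}

pct_used 0 8192        # dev row    -> prints 0
pct_used 12851 71887   # memory row -> prints 18
```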
    <item>
      <title>Re: improving I/O performance and max queue depth for disks with slow users</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992423#M490567</link>
      <description>&lt;P&gt;This is the bottleneck; for example, only two rows are shown: some devices at 100% busy (not all),&lt;/P&gt;&lt;P&gt;with avwait values of 59 or 54, greater than the avserv values of 8 or 7:&lt;/P&gt;&lt;PRE&gt;disk07 99.80 55.19 936 111218 59.88 8.52&lt;BR /&gt;c3t2d0 100.00 53.28 1016 121446 53.22 7.85&lt;/PRE&gt;&lt;P&gt;Sorry for the few examples.&lt;/P&gt;</description>
      <pubDate>Fri, 29 Dec 2017 17:14:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992423#M490567</guid>
      <dc:creator>Max5</dc:creator>
      <dc:date>2017-12-29T17:14:06Z</dc:date>
    </item>
    <item>
      <title>Re: improving I/O performance and max queue depth for disks with slow users</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992430#M490568</link>
      <description>&lt;P&gt;&lt;STRONG&gt;&lt;EM&gt;...sorry for the few examples....&lt;/EM&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Analyzing and resolving performance concerns requires a lot of measurements and information about what is using the system resources. There is no magic switch or parameter to make your system run faster. You can make any disk 100% busy with the &lt;STRONG&gt;dd&lt;/STRONG&gt; command, or make all the CPUs busy with a simple script. The application (a database, perhaps) is causing the load. Stop the application and performance will be excellent.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Is the application something that can be changed, perhaps rewritten to be more efficient?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;The latest measurement shows two different disks that are very busy. The queue depth for both disks is over 50, so changing the SCSI driver depth to 32 will produce no change for that specific moment in time.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Changes to various parameters are meaningless if you do not have a repeatable load. For this simple example, you would have to replace all your external disks with solid-state disks. In that case, you would see immediate performance gains.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 29 Dec 2017 20:27:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/improving-i-o-performance-and-max-depth-queue-diskes-with-slow/m-p/6992430#M490568</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2017-12-29T20:27:37Z</dc:date>
    </item>
  </channel>
</rss>

