<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Performance problem in disk IO in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957153#M756476</link>
    <description>Hi Chetan,&lt;BR /&gt;&lt;BR /&gt;if you are running Oracle, try to run a statspack report and see if you have any waits related to IO.&lt;BR /&gt;&lt;BR /&gt;e.g.&lt;BR /&gt;Wait Events for DB: MYDB  Instance: mydb  Snaps: 17 -18&lt;BR /&gt;-&amp;gt; s  - second&lt;BR /&gt;-&amp;gt; cs - centisecond -     100th of a second&lt;BR /&gt;-&amp;gt; ms - millisecond -    1000th of a second&lt;BR /&gt;-&amp;gt; us - microsecond - 1000000th of a second&lt;BR /&gt;-&amp;gt; ordered by wait time desc, waits desc (idle events last)&lt;BR /&gt;&lt;BR /&gt;                                                                   Avg&lt;BR /&gt;                                                     Total Wait   wait    Waits&lt;BR /&gt;Event                               Waits   Timeouts   Time (s)   (ms)     /txn&lt;BR /&gt;---------------------------- ------------ ---------- ---------- ------ --------&lt;BR /&gt;db file parallel write                764        382         37     48    254.7&lt;BR /&gt;log file parallel write               394        392         32     81    131.3&lt;BR /&gt;control file parallel write           391          0         29     73    130.3&lt;BR /&gt;db file scattered read                 89          0          2     19     29.7&lt;BR /&gt;log file switch completion              4          0          1    209      1.3&lt;BR /&gt;async disk IO                          13          0          1     42      4.3&lt;BR /&gt;log file sync                           1          0          0    211      0.3&lt;BR /&gt;process startup                         3          0          0     49      1.0&lt;BR /&gt;db file sequential read                21          0          0      1      7.0&lt;BR /&gt;db file single write                    1          0          0     23      0.3&lt;BR /&gt;log file single write                   2          0          0     11      0.7&lt;BR /&gt;control file sequential read          238          0       
   0      0     79.3&lt;BR /&gt;latch free                              1          0          0      1      0.3&lt;BR /&gt;LGWR wait for redo copy                 1          0          0      0      0.3&lt;BR /&gt;log file sequential read                2          0          0      0      0.7&lt;BR /&gt;virtual circuit status                 41         41      1,202  29311     13.7&lt;BR /&gt;jobq slave wait                        66         63        197   2978     22.0&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;also take a look at:&lt;BR /&gt;&lt;A href="http://technet.oracle.com/deploy/availability/pdf/oow2000_sane.pdf" target="_blank"&gt;http://technet.oracle.com/deploy/availability/pdf/oow2000_sane.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;hope this helps!&lt;BR /&gt;&lt;BR /&gt;kind regards&lt;BR /&gt;yogeeraj</description>
    <pubDate>Wed, 07 Mar 2007 06:17:57 GMT</pubDate>
    <dc:creator>Yogeeraj_1</dc:creator>
    <dc:date>2007-03-07T06:17:57Z</dc:date>
    <item>
      <title>Performance problem in disk IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957150#M756473</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;I have a problem in scaling disk I/O. The machine is an 11i v2 Itanium application server with two HP EVAs (75 GB, 15k rpm disks each).&lt;BR /&gt;&lt;BR /&gt;Average values of the sar -d output are given below (columns: device, %busy, avque, r+w/s, blks/s, avwait, avserv):&lt;BR /&gt;&lt;BR /&gt;c0t6d0    8.67    0.67      11     113    0.16   16.62&lt;BR /&gt;c127t0d7   78.73    0.52     716   24672    0.06    3.44&lt;BR /&gt;c80t0d3   77.73    0.51     700   24256    0.03    3.44&lt;BR /&gt;c57t1d2   77.03    0.51     722   24917    0.01    3.12&lt;BR /&gt;c32t0d6   78.71    0.51     737   25241    0.02    3.40&lt;BR /&gt;c95t0d2   77.69    0.51     721   24975    0.02    3.30&lt;BR /&gt;c102t1d1   79.77    0.51     973   39190    0.01    2.45&lt;BR /&gt;c53t0d5   78.73    0.51     726   24764    0.02    3.45&lt;BR /&gt;c16t0d1   77.59    0.51     718   24686    0.02    3.35&lt;BR /&gt;c102t1d3   70.10    0.50    3385   54153    0.00    0.21&lt;BR /&gt;c65t1d0    0.02    0.50       0       0    0.00    0.24&lt;BR /&gt;&lt;BR /&gt;Average CPU usage using sar:&lt;BR /&gt;&lt;BR /&gt;HP-UX rx8640f B.11.23 U ia64    03/05/07&lt;BR /&gt;&lt;BR /&gt;13:02:38    %usr    %sys    %wio   %idle&lt;BR /&gt;Average       30      13      32      25&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I tried tweaking scsi_max_qdepth, but it was of no use. My CPU consumption is only around 40%, but I am not able to scale up my I/O capacity.&lt;BR /&gt;&lt;BR /&gt;How can I determine the problem with the IO?&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Chetan</description>
      <pubDate>Wed, 07 Mar 2007 03:46:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957150#M756473</guid>
      <dc:creator>chetan a</dc:creator>
      <dc:date>2007-03-07T03:46:57Z</dc:date>
    </item>
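The sar -d figures in the post above can be cross-checked with a quick awk aggregate. This is a sketch under two assumptions not stated in the post: that the columns are sar -d's device, %busy, avque, r+w/s, blks/s, avwait, avserv, and that HP-UX sar reports blks/s in 512-byte blocks.

```shell
# Sum r+w/s (column 4) and blks/s (column 5) across all devices from the post.
# Assumption: HP-UX sar -d counts blks/s in 512-byte blocks, so MB/sec = blks/s * 512 / 2^20.
awk '{ ios += $4; blks += $5 }
     END { printf "total: %d IO/sec, %.0f MB/sec\n", ios, blks * 512 / 1048576 }' <<'EOF'
c0t6d0    8.67    0.67      11     113    0.16   16.62
c127t0d7 78.73    0.52     716   24672    0.06    3.44
c80t0d3  77.73    0.51     700   24256    0.03    3.44
c57t1d2  77.03    0.51     722   24917    0.01    3.12
c32t0d6  78.71    0.51     737   25241    0.02    3.40
c95t0d2  77.69    0.51     721   24975    0.02    3.30
c102t1d1 79.77    0.51     973   39190    0.01    2.45
c53t0d5  78.73    0.51     726   24764    0.02    3.45
c16t0d1  77.59    0.51     718   24686    0.02    3.35
c102t1d3 70.10    0.50    3385   54153    0.00    0.21
c65t1d0   0.02    0.50       0       0    0.00    0.24
EOF
# → total: 9409 IO/sec, 130 MB/sec
```

These totals (~9,400 IO/sec, ~130 MB/sec) match the figures Hein van den Heuvel quotes later in the thread.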
    <item>
      <title>Re: Performance problem in disk IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957151#M756474</link>
      <description>You have discovered the problem with scsi_max_qdepth. It simply pushes the queue from the OS into the arrays, and does not always improve performance.&lt;BR /&gt;You have many options.&lt;BR /&gt;Option 1:&lt;BR /&gt;Tune your system and/or database buffer cache to avoid the need to go to I/O in the first place. This may need more memory in the server.&lt;BR /&gt;Option 2:&lt;BR /&gt;Tune your application to not require so much I/O, or tune your database indexes, or your database statistics.&lt;BR /&gt;Option 3:&lt;BR /&gt;Spread the load over even more storage spindles, and consider your RAID policy (is it RAID5/6/AutoRAID? If so, try RAID 1/0).&lt;BR /&gt;Option 4:&lt;BR /&gt;A bigger disk array.&lt;BR /&gt;Option 5:&lt;BR /&gt;Consider evening out the load on controllers by striping the data.</description>
      <pubDate>Wed, 07 Mar 2007 05:28:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957151#M756474</guid>
      <dc:creator>Steve Lewis</dc:creator>
      <dc:date>2007-03-07T05:28:33Z</dc:date>
    </item>
    <item>
      <title>Re: Performance problem in disk IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957152#M756475</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Try&lt;BR /&gt;&lt;BR /&gt;iostat and Glance.&lt;BR /&gt;&lt;BR /&gt;Check your kernel parameters w.r.t. your database, e.g. Oracle / Sybase.&lt;BR /&gt;&lt;BR /&gt;What is the RAID that you have implemented?&lt;BR /&gt;&lt;BR /&gt;Chan&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 07 Mar 2007 06:13:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957152#M756475</guid>
      <dc:creator>Chan 007</dc:creator>
      <dc:date>2007-03-07T06:13:50Z</dc:date>
    </item>
    <item>
      <title>Re: Performance problem in disk IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957153#M756476</link>
      <description>Hi Chetan,&lt;BR /&gt;&lt;BR /&gt;if you are running Oracle, try to run a statspack report and see if you have any waits related to IO.&lt;BR /&gt;&lt;BR /&gt;e.g.&lt;BR /&gt;Wait Events for DB: MYDB  Instance: mydb  Snaps: 17 -18&lt;BR /&gt;-&amp;gt; s  - second&lt;BR /&gt;-&amp;gt; cs - centisecond -     100th of a second&lt;BR /&gt;-&amp;gt; ms - millisecond -    1000th of a second&lt;BR /&gt;-&amp;gt; us - microsecond - 1000000th of a second&lt;BR /&gt;-&amp;gt; ordered by wait time desc, waits desc (idle events last)&lt;BR /&gt;&lt;BR /&gt;                                                                   Avg&lt;BR /&gt;                                                     Total Wait   wait    Waits&lt;BR /&gt;Event                               Waits   Timeouts   Time (s)   (ms)     /txn&lt;BR /&gt;---------------------------- ------------ ---------- ---------- ------ --------&lt;BR /&gt;db file parallel write                764        382         37     48    254.7&lt;BR /&gt;log file parallel write               394        392         32     81    131.3&lt;BR /&gt;control file parallel write           391          0         29     73    130.3&lt;BR /&gt;db file scattered read                 89          0          2     19     29.7&lt;BR /&gt;log file switch completion              4          0          1    209      1.3&lt;BR /&gt;async disk IO                          13          0          1     42      4.3&lt;BR /&gt;log file sync                           1          0          0    211      0.3&lt;BR /&gt;process startup                         3          0          0     49      1.0&lt;BR /&gt;db file sequential read                21          0          0      1      7.0&lt;BR /&gt;db file single write                    1          0          0     23      0.3&lt;BR /&gt;log file single write                   2          0          0     11      0.7&lt;BR /&gt;control file sequential read          238          0     
     0      0     79.3&lt;BR /&gt;latch free                              1          0          0      1      0.3&lt;BR /&gt;LGWR wait for redo copy                 1          0          0      0      0.3&lt;BR /&gt;log file sequential read                2          0          0      0      0.7&lt;BR /&gt;virtual circuit status                 41         41      1,202  29311     13.7&lt;BR /&gt;jobq slave wait                        66         63        197   2978     22.0&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;also take a look at:&lt;BR /&gt;&lt;A href="http://technet.oracle.com/deploy/availability/pdf/oow2000_sane.pdf" target="_blank"&gt;http://technet.oracle.com/deploy/availability/pdf/oow2000_sane.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;hope this helps!&lt;BR /&gt;&lt;BR /&gt;kind regards&lt;BR /&gt;yogeeraj</description>
      <pubDate>Wed, 07 Mar 2007 06:17:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957153#M756476</guid>
      <dc:creator>Yogeeraj_1</dc:creator>
      <dc:date>2007-03-07T06:17:57Z</dc:date>
    </item>
    <item>
      <title>Re: Performance problem in disk IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957154#M756477</link>
      <description>Chetan,&lt;BR /&gt;&lt;BR /&gt;You are running at 9,400 IO/sec, 130 MB/sec.&lt;BR /&gt;This may well be all there is, in which case you need to focus on reducing the IOs needed, even more so than normal. Can you get more filesystem or database caching going?&lt;BR /&gt;&lt;BR /&gt;You need to provide more context, and some explanation for the 'odd' numbers.&lt;BR /&gt;&lt;BR /&gt;What is the application doing? Oracle? NFS? Read-write ratio?&lt;BR /&gt;&lt;BR /&gt;What is happening on c102t1d3? It is reporting 4x more IO/sec at 1/2 the IO size and with sub-millisecond response. That means they are NOT real IOs, but cache activity (backed up by real IO).&lt;BR /&gt;&lt;BR /&gt;For example, the 75 GB, 15k rpm is per disk, right? But how many drives? If we remove c102t1d3 from the equation, then sar shows 6,000 IO/sec, and at a generous 150 IO/sec per spindle that suggests you need at least 40 disks, or your threshold is the number of spindles. If we add the 3,000 IO/sec for c102t1d3 to the mix, then you need 60+ spindles. How many do you have?&lt;BR /&gt;&lt;BR /&gt;How many disks, groups, disks per group on the EVA? RAID-5 or RAID-0+1? Read-write ratio? How many fibres? Switches?&lt;BR /&gt;&lt;BR /&gt;Are some IOs waiting for others? Again, that c102t1d3 activity, with its 8 KB IOs, may well be the absolute max to a single logical unit over a single fibre, through a single HBA.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Hein van den Heuvel&lt;BR /&gt;HvdH Performance Consulting&lt;BR /&gt;</description>
      <pubDate>Wed, 07 Mar 2007 07:40:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957154#M756477</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2007-03-07T07:40:50Z</dc:date>
    </item>
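Hein's spindle arithmetic in the post above can be written out explicitly. A sketch, assuming his rule of thumb of ~150 IO/sec per 15k rpm spindle and the per-device r+w/s totals from the sar output earlier in the thread (9,409 IO/sec overall, of which c102t1d3 contributes 3,385); the variable names are illustrative.

```shell
# Rough spindle count needed to sustain the observed IO rate,
# at a rule-of-thumb ~150 IO/sec per 15k rpm spindle (integer division).
total=9409        # total r+w/s from sar -d
hot=3385          # r+w/s on c102t1d3 (likely cache-backed, per the post)
per_spindle=150

echo "without c102t1d3: $(( (total - hot) / per_spindle )) spindles"
echo "with c102t1d3:    $(( total / per_spindle )) spindles"
# → without c102t1d3: 40 spindles
# → with c102t1d3:    62 spindles
```

This reproduces his "at least 40 disks" and "60+ spindles" estimates.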
    <item>
      <title>Re: Performance problem in disk IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957155#M756478</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Thanks for all your answers. I have more information this time.&lt;BR /&gt;&lt;BR /&gt;1.) Hardware configuration:&lt;BR /&gt;&lt;BR /&gt;32-core 11.23 IPF server with 4 Fibre Channel links connected to 3 EVAs (HP EVA8000: 28 disks, 75 GB each) using two switches. Two EVAs have 3 LUNs and the remaining one has 5 LUNs. RAID 0 is implemented on all the EVAs.&lt;BR /&gt;&lt;BR /&gt;2.) Software configuration:&lt;BR /&gt;&lt;BR /&gt;The server has an application which queries (60% read, 40% write) an Oracle 10g database running on the same server. Oracle has not been configured with ASM or SAME; it uses 3 raw LUNs for 3 log files, and the remaining LUNs have VxFS file systems on them.&lt;BR /&gt;&lt;BR /&gt;The following list shows the file system present on each device:&lt;BR /&gt; c0t6d0    VxFS:BootVolume&lt;BR /&gt; c32t0d6  VxFS&lt;BR /&gt; c16t0d1  VxFS&lt;BR /&gt; c57t1d2  VxFS&lt;BR /&gt; c53t0d5  VxFS&lt;BR /&gt; c65t1d0  VxFS&lt;BR /&gt; c102t1d3 VxFS&lt;BR /&gt; c80t0d3   VxFS&lt;BR /&gt; c102t1d1 RAW&lt;BR /&gt; c95t0d2   VxFS&lt;BR /&gt; c127t0d7 VxFS&lt;BR /&gt;&lt;BR /&gt;I have also attached the wait events, which I got from statspack.&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Chetan</description>
      <pubDate>Fri, 09 Mar 2007 08:37:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957155#M756478</guid>
      <dc:creator>chetan a</dc:creator>
      <dc:date>2007-03-09T08:37:21Z</dc:date>
    </item>
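Given the configuration Chetan posts above (3 EVA8000s with 28 disks each, RAID 0), a back-of-envelope backend ceiling follows from the same assumed ~150 IO/sec per spindle rule of thumb used earlier in the thread. The numbers are illustrative, not a measured limit.

```shell
# Back-of-envelope backend IO ceiling for the posted configuration.
# RAID 0 has no parity write penalty, so frontend IO is roughly backend IO.
evas=3
disks_per_eva=28
per_spindle=150   # assumed rule of thumb for a 15k rpm drive

spindles=$(( evas * disks_per_eva ))
echo "spindles: $spindles"
echo "rough ceiling: $(( spindles * per_spindle )) IO/sec"
# → spindles: 84
# → rough ceiling: 12600 IO/sec
```

A nominal ~12,600 IO/sec against the observed ~9,400 suggests the spindles are not far from saturation, and is consistent with Hein's point that a single hot LUN over a single fibre/HBA path can bottleneck before the aggregate spindle count does.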
    <item>
      <title>Re: Performance problem in disk IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957156#M756479</link>
      <description>I'm sorry about the rabbit :(, my problem is still not solved.</description>
      <pubDate>Fri, 09 Mar 2007 08:45:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957156#M756479</guid>
      <dc:creator>chetan a</dc:creator>
      <dc:date>2007-03-09T08:45:34Z</dc:date>
    </item>
    <item>
      <title>Re: Performance problem in disk IO</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957157#M756480</link>
      <description>That helps some more. Looks like plenty of CPU cycles to spare. A little high on the system CPU usage, but not too surprising given the IO/sec load.&lt;BR /&gt;&lt;BR /&gt;Good to see you have statspack data.&lt;BR /&gt;It's trimmed down a bit much though.&lt;BR /&gt;Sure looks like you could use some more read IO power, or more effective (SGA) caching.&lt;BR /&gt;&lt;BR /&gt;It also looks like you want to increase your SQL*Net buffers, as you have 3/4 of your response messages needing a second packet. Of course this could also be a distorted average with little room for improvement.&lt;BR /&gt;&lt;BR /&gt;And the library cache may need tweaking.&lt;BR /&gt;&lt;BR /&gt;RAID-0, huh? Should be fast, but scary!&lt;BR /&gt;&lt;BR /&gt;Perhaps you want to email me a full statspack and I can help some more?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Hein van den Heuvel&lt;BR /&gt;HvdH Performance Consulting&lt;BR /&gt;</description>
      <pubDate>Fri, 09 Mar 2007 10:51:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-problem-in-disk-io/m-p/3957157#M756480</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2007-03-09T10:51:19Z</dc:date>
    </item>
  </channel>
</rss>

