<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: High wio% in XP512, everything else normal in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/high-wio-in-xp512-everithing-else-normal/m-p/3456831#M14840</link>
    <description>To make the XP512 (aka HDS 9960) fly, our recipe is to always stripe across LUNs (LDEVs) from different array groups on different ACPs and CHIP/HBA ports. 4- or 8-way stripes with a 64-128 KB stripe size should be sufficient for general DB usage.&lt;BR /&gt;</description>
    <pubDate>Wed, 12 Jan 2005 11:46:11 GMT</pubDate>
    <dc:creator>Alzhy</dc:creator>
    <dc:date>2005-01-12T11:46:11Z</dc:date>
    <item>
      <title>High wio% in XP512, everything else normal</title>
      <link>https://community.hpe.com/t5/disk-enclosures/high-wio-in-xp512-everithing-else-normal/m-p/3456827#M14836</link>
      <description>Hi, guys:&lt;BR /&gt;&lt;BR /&gt;   I have two N4000 boxes in an Oracle database cluster (v8.1.7) sharing LUNs on an XP512 disk array in an active-active configuration. When running sar -u, wio% is consistently high (around 35%, sometimes more). avque is always 0.5 for all LUNs (fine), avwait averages around 7 ms (fine), and avserv is around 10 ms (fine, I think, according to the XP512 docs).&lt;BR /&gt;&lt;BR /&gt;   Is there any starting point for diagnosing this behavior? Unfortunately I don't have Glance on my boxes for in-depth analysis. Any help you can give me is truly appreciated.&lt;BR /&gt;&lt;BR /&gt;Jose Enrique</description>
      <pubDate>Thu, 06 Jan 2005 08:05:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/high-wio-in-xp512-everithing-else-normal/m-p/3456827#M14836</guid>
      <dc:creator>José Enrique González</dc:creator>
      <dc:date>2005-01-06T08:05:04Z</dc:date>
    </item>
    <item>
      <title>Re: High wio% in XP512, everything else normal</title>
      <link>https://community.hpe.com/t5/disk-enclosures/high-wio-in-xp512-everithing-else-normal/m-p/3456828#M14837</link>
      <description>Jose,&lt;BR /&gt;&lt;BR /&gt;Is this something new, or has the system been like that for a while?&lt;BR /&gt;&lt;BR /&gt;A couple of things to check...&lt;BR /&gt;&lt;BR /&gt;Look at the service times and IO rates of the internal disks (if any).  If the system is paging or otherwise writing to an internal drive, it'll likely skew numbers like wio%.  Don't forget that wio% is a system-wide guess, not a truly measured number.&lt;BR /&gt;&lt;BR /&gt;Depending on your IO rate, those service times are a little high.  You don't mention how the XP is configured - # of array groups, drive type, LUN emulation, LUSE use, LVM configuration, and port congestion can all cause higher latencies.&lt;BR /&gt;&lt;BR /&gt;Some things to consider, if you're sure it's not the host system:&lt;BR /&gt;- can you add more array groups? (more drives = more performance)&lt;BR /&gt;- can you move the DB to 15k drives?&lt;BR /&gt;- avoid LUSE&lt;BR /&gt;- review your LVM configuration - if striping, look closely, and compare with the XP LUN config - you may be striping within a single array group, thrashing the heads.&lt;BR /&gt;- check the port performance numbers; you may benefit from spreading the LUNs across more ports.&lt;BR /&gt;&lt;BR /&gt;I hope this helps!&lt;BR /&gt;&lt;BR /&gt;Vince&lt;BR /&gt;</description>
      <pubDate>Thu, 06 Jan 2005 13:53:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/high-wio-in-xp512-everithing-else-normal/m-p/3456828#M14837</guid>
      <dc:creator>Vincent Fleming</dc:creator>
      <dc:date>2005-01-06T13:53:23Z</dc:date>
    </item>
    <item>
      <title>Re: High wio% in XP512, everything else normal</title>
      <link>https://community.hpe.com/t5/disk-enclosures/high-wio-in-xp512-everithing-else-normal/m-p/3456829#M14838</link>
      <description>I don't think it's a disk I/O problem you're having. Be aware that what sar reports in %wio is not solely time waiting on disk I/O. Have you checked the Oracle statistics to see whether it is indeed starved for disk I/O?&lt;BR /&gt;</description>
      <pubDate>Fri, 07 Jan 2005 09:17:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/high-wio-in-xp512-everithing-else-normal/m-p/3456829#M14838</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2005-01-07T09:17:09Z</dc:date>
    </item>
    <item>
      <title>Re: High wio% in XP512, everything else normal</title>
      <link>https://community.hpe.com/t5/disk-enclosures/high-wio-in-xp512-everithing-else-normal/m-p/3456830#M14839</link>
      <description>Thank you, guys, for your time. Since the database seems to be running well, I will try to conduct a closer inspection of the LUN configuration with the help of experts, and I will also look at internal disk activity as you suggested.</description>
      <pubDate>Wed, 12 Jan 2005 11:41:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/high-wio-in-xp512-everithing-else-normal/m-p/3456830#M14839</guid>
      <dc:creator>José Enrique González</dc:creator>
      <dc:date>2005-01-12T11:41:31Z</dc:date>
    </item>
    <item>
      <title>Re: High wio% in XP512, everything else normal</title>
      <link>https://community.hpe.com/t5/disk-enclosures/high-wio-in-xp512-everithing-else-normal/m-p/3456831#M14840</link>
      <description>To make the XP512 (aka HDS 9960) fly, our recipe is to always stripe across LUNs (LDEVs) from different array groups on different ACPs and CHIP/HBA ports. 4- or 8-way stripes with a 64-128 KB stripe size should be sufficient for general DB usage.&lt;BR /&gt;</description>
      <pubDate>Wed, 12 Jan 2005 11:46:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/high-wio-in-xp512-everithing-else-normal/m-p/3456831#M14840</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2005-01-12T11:46:11Z</dc:date>
    </item>
    <item>
      <title>Re: High wio% in XP512, everything else normal</title>
      <link>https://community.hpe.com/t5/disk-enclosures/high-wio-in-xp512-everithing-else-normal/m-p/3456832#M14841</link>
      <description>Excellent, Nelson! Are your DB partitions raw or filesystems?</description>
      <pubDate>Tue, 18 Jan 2005 16:01:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/high-wio-in-xp512-everithing-else-normal/m-p/3456832#M14841</guid>
      <dc:creator>José Enrique González</dc:creator>
      <dc:date>2005-01-18T16:01:07Z</dc:date>
    </item>
    <item>
      <title>Re: High wio% in XP512, everything else normal</title>
      <link>https://community.hpe.com/t5/disk-enclosures/high-wio-in-xp512-everithing-else-normal/m-p/3456833#M14842</link>
      <description>We are using both cooked (VxFS with Direct I/O) and raw. On some we use the Veritas QuickIO product with the XP512, which allows us to get raw-I/O-like performance on filesystems (VxFS).&lt;BR /&gt;&lt;BR /&gt;The key here is SAME - stripe and mirror everything. The "mirror" part you do not have to worry about, as the array already takes care of it. It is the "stripe" that you need to plan and implement.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 21 Jan 2005 09:47:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/high-wio-in-xp512-everithing-else-normal/m-p/3456833#M14842</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2005-01-21T09:47:17Z</dc:date>
    </item>
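    <!-- Editor's note: the diagnostics and striping recipe discussed in this thread
         can be sketched with HP-UX commands as below. This is a hedged sketch, not
         the posters' exact procedure: the sar intervals and the volume group, size,
         and logical volume names are hypothetical, and the commands assume an
         HP-UX 11.x host with LVM.

    ```shell
    # 1. CPU breakdown -- watch the %wio column (a system-wide estimate,
    #    not a directly measured figure, as Vince notes): 12 samples, 5 s apart.
    sar -u 5 12

    # 2. Per-device stats -- avque, avwait, avserv per LUN, including internal
    #    disks (paging to an internal drive can inflate %wio).
    sar -d 5 12

    # 3. Alzhy's striping recipe: a 4-way LVM stripe with a 64 KB stripe size,
    #    built over LUNs drawn from different array groups / ACPs / CHIP ports.
    #    vg name, LV name, and 8 GB size are illustrative only.
    lvcreate -i 4 -I 64 -L 8192 -n lv_oradata /dev/vg_oradata
    ```
    -->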
  </channel>
</rss>