<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: iostat and high wait state in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/iostat-and-high-wait-state/m-p/3704248#M21214</link>
    <description>How are you doing your testing? OCFS performance cannot be tested with operating system commands.&lt;BR /&gt;&lt;BR /&gt;You can test the vdisk performance with hdparm -Tt.&lt;BR /&gt;&lt;BR /&gt;High iowait could be a problem; on one installation, iowait was reduced and performance increased by using raw devices instead of the OCFS filesystem.</description>
    <pubDate>Fri, 06 Jan 2006 13:33:03 GMT</pubDate>
    <dc:creator>Ivan Ferreira</dc:creator>
    <dc:date>2006-01-06T13:33:03Z</dc:date>
    <item>
      <title>iostat and high wait state</title>
      <link>https://community.hpe.com/t5/operating-system-linux/iostat-and-high-wait-state/m-p/3704246#M21212</link>
      <description>Folks&lt;BR /&gt;I admin a site with a selection of ProLiant servers and an EVA SAN.&lt;BR /&gt;&lt;BR /&gt;While doing some performance testing recently I noticed that the I/O wait state was up to 50%. The files on disk are OCFS. I am getting contradictory advice: an Oracle guy says that this is totally unacceptable, while an HP systems engineer says that it is not a problem. Does anyone have any views on it?&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;&lt;BR /&gt;Colm</description>
      <pubDate>Fri, 06 Jan 2006 10:28:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/iostat-and-high-wait-state/m-p/3704246#M21212</guid>
      <dc:creator>Colm O Cinnseala</dc:creator>
      <dc:date>2006-01-06T10:28:56Z</dc:date>
    </item>
    <item>
      <title>Re: iostat and high wait state</title>
      <link>https://community.hpe.com/t5/operating-system-linux/iostat-and-high-wait-state/m-p/3704247#M21213</link>
      <description>Hello&lt;BR /&gt;&lt;BR /&gt;In general, a high iowait percentage indicates that the system has a memory shortage or an inefficient I/O subsystem configuration. Understanding the I/O bottleneck and improving the efficiency of the I/O subsystem require more data than iostat can provide.&lt;BR /&gt;&lt;BR /&gt;50% can be normal on a DB server.&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;</description>
      <pubDate>Fri, 06 Jan 2006 10:34:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/iostat-and-high-wait-state/m-p/3704247#M21213</guid>
      <dc:creator>Vipulinux</dc:creator>
      <dc:date>2006-01-06T10:34:23Z</dc:date>
    </item>
    <item>
      <title>Re: iostat and high wait state</title>
      <link>https://community.hpe.com/t5/operating-system-linux/iostat-and-high-wait-state/m-p/3704248#M21214</link>
      <description>How are you doing your testing? OCFS performance cannot be tested with operating system commands.&lt;BR /&gt;&lt;BR /&gt;You can test the vdisk performance with hdparm -Tt.&lt;BR /&gt;&lt;BR /&gt;High iowait could be a problem; on one installation, iowait was reduced and performance increased by using raw devices instead of the OCFS filesystem.</description>
      <pubDate>Fri, 06 Jan 2006 13:33:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/iostat-and-high-wait-state/m-p/3704248#M21214</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2006-01-06T13:33:03Z</dc:date>
    </item>
  </channel>
</rss>