<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: yet another disk io question in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026060#M48191</link>
    <description>The thing is, we're stuck with this kernel by compatibility constraints. There must be some way to verify that the system is operating correctly... even if the answer is that it's overloaded, that would be fine, because I could certify that what we need is new hardware to get better I/O... Is there any other tool I could run besides vmstat and iostat that could give you more info?&lt;BR /&gt;</description>
    <pubDate>Fri, 02 Feb 2007 05:53:23 GMT</pubDate>
    <dc:creator>Jose Molina</dc:creator>
    <dc:date>2007-02-02T05:53:23Z</dc:date>
    <item>
      <title>yet another disk io question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026056#M48187</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;I've been browsing the forums and gathering some data about how to "trace" an I/O problem with the disks.&lt;BR /&gt;&lt;BR /&gt;There are a lot of questions about this, and after reading the responses I'm still a bit lost about how to trace my problem.&lt;BR /&gt;&lt;BR /&gt;We have one DL380 with disks in RAID1, and RH 7.3 (old, I know). There's a custom application running on it, and the developers are always "seeing" a delay of about 4 ms between every write they do in this app.&lt;BR /&gt;&lt;BR /&gt;I'm trying to find out whether this is an O.S. problem or an app problem, and I've started looking at I/O data by checking your posts about it.&lt;BR /&gt;Running iostat -x 300 10, I get these results:&lt;BR /&gt;Device:    rrqm/s wrqm/s   r/s   w/s  rsec/s  wsec/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util&lt;BR /&gt;/dev/cciss/c0d0&lt;BR /&gt;             0.00   1.34  0.00  2.00    0.00   26.85     0.00    13.43    13.43     0.62  311.00  62.83   1.26&lt;BR /&gt;&lt;BR /&gt;/dev/cciss/c0d0p5&lt;BR /&gt;             0.00   1.34  0.00  2.00    0.00   26.85     0.00    13.43    13.43     0.62  311.00  62.83   1.26&lt;BR /&gt;/dev/cciss/c0d1&lt;BR /&gt;             0.01   0.07  0.01  0.52    0.19    4.77     0.09     2.39     9.36     0.31  584.91 340.88   1.81&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;If I read this correctly, the mean await of about 300 ms seems too big?&lt;BR /&gt;CPU is 99% idle, mem and net are OK. CCISS version is 2.4.50 (from running strings on cciss.o) on kernel 2.4.18-3smp.</description>
      <pubDate>Thu, 01 Feb 2007 11:10:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026056#M48187</guid>
      <dc:creator>Jose Molina</dc:creator>
      <dc:date>2007-02-01T11:10:14Z</dc:date>
    </item>
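A quick way to spot-check output like the iostat listing above is to filter on the await column (milliseconds a request spends queued plus being serviced). This is a minimal sketch, not from the thread; the threshold of 100 ms and the one-line sample row are assumptions, and real `iostat -x` output on this box splits the device name onto its own line, so the row here has been rejoined for parsing:

```shell
# Flag devices whose await exceeds a threshold (ms) in `iostat -x`-style
# output. Column 12 is await in the extended format shown in the post.
iostat_sample='/dev/cciss/c0d1 0.01 0.07 0.01 0.52 0.19 4.77 0.09 2.39 9.36 0.31 584.91 340.88 1.81'
echo "$iostat_sample" | awk -v limit=100 \
    '$12 > limit {printf "%s: await %.0f ms exceeds %d ms\n", $1, $12, limit}'
```

With the sample row this prints a warning for /dev/cciss/c0d1, whose 584.91 ms await is well past the threshold.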
    <item>
      <title>Re: yet another disk io question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026057#M48188</link>
      <description>The first thing I would like to see is the cpu iowait value. You can obtain this value from the output of vmstat. If you have a high iowait value, then there is something strange with your disk subsystem. What is the value of the "wa" column from vmstat?</description>
      <pubDate>Thu, 01 Feb 2007 14:54:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026057#M48188</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2007-02-01T14:54:09Z</dc:date>
    </item>
    <item>
      <title>Re: yet another disk io question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026058#M48189</link>
      <description>Hmm, maybe RH 7.3 has an old version of vmstat, but I don't see any "wa" column. With vmstat 300 10, I get these results:&lt;BR /&gt;   procs                      memory    swap          io     system         cpu&lt;BR /&gt; r  b  w   swpd   free   buff  cache  si  so    bi    bo   in    cs  us  sy  id&lt;BR /&gt; 0  0  0      0   4864   8320 206276   0   0     1    15  116    25   0   0 100&lt;BR /&gt; 0  0  0      0   3944   8624 206692   0   0     2    16  119    58   0   0 100&lt;BR /&gt;</description>
      <pubDate>Fri, 02 Feb 2007 03:45:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026058#M48189</guid>
      <dc:creator>Jose Molina</dc:creator>
      <dc:date>2007-02-02T03:45:05Z</dc:date>
    </item>
    <item>
      <title>Re: yet another disk io question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026059#M48190</link>
      <description>The 'wa' column was introduced in kernel 2.5.41; before that it has always been zero. I'm not sure whether it's been backported to the 2.4 series kernels.&lt;BR /&gt;&lt;BR /&gt;You could test the 2.4.20 kernels from fedoralegacy.org&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://ftp.funet.fi/pub/mirrors/download.fedoralegacy.org/redhat/7.3/updates/i386/kernel-smp-2.4.20-46.7.legacy.athlon.rpm" target="_blank"&gt;http://ftp.funet.fi/pub/mirrors/download.fedoralegacy.org/redhat/7.3/updates/i386/kernel-smp-2.4.20-46.7.legacy.athlon.rpm&lt;/A&gt;&lt;BR /&gt;&lt;A href="http://ftp.funet.fi/pub/mirrors/download.fedoralegacy.org/redhat/7.3/updates/i386/kernel-smp-2.4.20-46.7.legacy.i586.rpm" target="_blank"&gt;http://ftp.funet.fi/pub/mirrors/download.fedoralegacy.org/redhat/7.3/updates/i386/kernel-smp-2.4.20-46.7.legacy.i586.rpm&lt;/A&gt;&lt;BR /&gt;&lt;A href="http://ftp.funet.fi/pub/mirrors/download.fedoralegacy.org/redhat/7.3/updates/i386/kernel-smp-2.4.20-46.7.legacy.i686.rpm" target="_blank"&gt;http://ftp.funet.fi/pub/mirrors/download.fedoralegacy.org/redhat/7.3/updates/i386/kernel-smp-2.4.20-46.7.legacy.i686.rpm&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;You may find other important updates for the RedHat 7.3 box on that FTP site.</description>
      <pubDate>Fri, 02 Feb 2007 05:28:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026059#M48190</guid>
      <dc:creator>Hmmm...</dc:creator>
      <dc:date>2007-02-02T05:28:21Z</dc:date>
    </item>
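Since the "wa" column only appeared with the iowait counter in 2.5.41, one way to check whether a given kernel reports it at all is to count the per-cpu fields in /proc/stat: kernels without iowait expose only user/nice/system/idle. A minimal sketch of that check (the field layout is the standard /proc/stat one; on a 2.4.18 box the fifth field is expected to be absent):

```shell
# On kernels with the iowait counter, the "cpu" line in /proc/stat has a
# 5th value (iowait, in USER_HZ ticks) after user/nice/system/idle.
fields=$(awk '/^cpu /{print NF - 1}' /proc/stat)
if [ "$fields" -ge 5 ]; then
    awk '/^cpu /{print "iowait ticks:", $6}' /proc/stat
else
    echo "no iowait field on this kernel (expected on plain 2.4)"
fi
```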
    <item>
      <title>Re: yet another disk io question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026060#M48191</link>
      <description>The thing is, we're stuck with this kernel by compatibility constraints. There must be some way to verify that the system is operating correctly... even if the answer is that it's overloaded, that would be fine, because I could certify that what we need is new hardware to get better I/O... Is there any other tool I could run besides vmstat and iostat that could give you more info?&lt;BR /&gt;</description>
      <pubDate>Fri, 02 Feb 2007 05:53:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026060#M48191</guid>
      <dc:creator>Jose Molina</dc:creator>
      <dc:date>2007-02-02T05:53:23Z</dc:date>
    </item>
    <item>
      <title>Re: yet another disk io question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026061#M48192</link>
      <description>What I can see from the first iostat output is that you are using an 8k block size for I/O and your disks show low %util. But this is not a guarantee that you don't have any problems.&lt;BR /&gt;&lt;BR /&gt;What I would do is test the performance of the raw device using the dd command or a tool like Iometer, and try to stress your disks to identify the maximum I/O capability provided by the subsystem. If you can obtain high I/O rates with performance measuring tools, then the problem should be traced from the application.</description>
      <pubDate>Fri, 02 Feb 2007 08:47:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026061#M48192</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2007-02-02T08:47:02Z</dc:date>
    </item>
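The dd approach suggested here can be wrapped in a small timed run. This is a sketch under assumptions: the scratch path and sizes are examples, not from the thread, and it writes an ordinary file rather than a raw partition (safer, but it goes through the page cache; `oflag=direct` would bypass the cache, but the dd shipped with RH 7.3 predates that flag, so the run syncs at the end instead):

```shell
# Timed sequential-write test against a scratch file (hypothetical path).
SCRATCH=/tmp/dd-test.bin
START=$(date +%s)
dd if=/dev/zero of="$SCRATCH" bs=8k count=8192 2>/dev/null   # 64 MiB total
sync                                 # flush cached writes before timing ends
END=$(date +%s)
SECS=$((END - START)); [ "$SECS" -gt 0 ] || SECS=1
echo "wrote 64 MiB in ${SECS}s: $((64 / SECS)) MB/s"
rm -f "$SCRATCH"
```

Running it a few times and taking the sustained figure is what the later posts in the thread effectively do by hand with `time dd`.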
    <item>
      <title>Re: yet another disk io question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026062#M48193</link>
      <description>Thanks for the replies. I'm going to check whether I can get high I/O rates, and if I do, I'll consider the platform "healthy". One last question: there's no tweaking to be done to cciss, nor a tool to do it, is there? What I mean is, if I have a problem, we're talking about the Smart Array controller or whatever, but not a bad cciss configuration. I've never had to do any modification to cciss, and since I'm being pressed by the developers I'm trying to make sure this is the usual way to work with it (i.e. install and "forget").&lt;BR /&gt;&lt;BR /&gt;Again, thanks a lot for your time; it's hard to get support for this kind of thing.&lt;BR /&gt;</description>
      <pubDate>Fri, 02 Feb 2007 10:13:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026062#M48193</guid>
      <dc:creator>Jose Molina</dc:creator>
      <dc:date>2007-02-02T10:13:52Z</dc:date>
    </item>
    <item>
      <title>Re: yet another disk io question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026063#M48194</link>
      <description>The kernel documentation does not provide any tuning options for the driver, see:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.mjmwired.net/kernel/Documentation/cciss.txt" target="_blank"&gt;http://www.mjmwired.net/kernel/Documentation/cciss.txt&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;You may try tuning the file system. I don't know if in that kernel version you can tune the I/O elevators, but see the article here:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.redhat.com/magazine/008jun05/features/schedulers/" target="_blank"&gt;http://www.redhat.com/magazine/008jun05/features/schedulers/&lt;/A&gt;</description>
      <pubDate>Fri, 02 Feb 2007 10:27:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026063#M48194</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2007-02-02T10:27:02Z</dc:date>
    </item>
    <item>
      <title>Re: yet another disk io question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026064#M48195</link>
      <description>Well, I did some tests with dd, with different block sizes and all. Running this dd several times, I got these sustained results:&lt;BR /&gt;time dd if=/dev/zero of=./rawfile bs=8k count=204800&lt;BR /&gt;Time:&lt;BR /&gt;real    1m0.838s&lt;BR /&gt;user    0m0.220s&lt;BR /&gt;sys     0m12.900s&lt;BR /&gt;1.6G    rawfile&lt;BR /&gt;&lt;BR /&gt;While vmstat 10 showed this:&lt;BR /&gt;    io     system         cpu&lt;BR /&gt;bi    bo   in    cs  us  sy  id&lt;BR /&gt;0   11  116    32   0   1  99&lt;BR /&gt;0   13140  211    93   1  17  83&lt;BR /&gt;1   24780  237    64   0  22  77&lt;BR /&gt;1   25468  237    56   0  23  76&lt;BR /&gt;10  25449  244    79   1  23  76&lt;BR /&gt;4   26967  245    85   1  23  77&lt;BR /&gt;1   25205  257    81   0  24  76&lt;BR /&gt;5   23331  257    75   0  17  83&lt;BR /&gt;6   29  127    47   0   1  98&lt;BR /&gt;&lt;BR /&gt;On a new and faster server I got around 80000 bo. So, it seems a bit slow... I don't know, we're talking about 4-year-old disks versus a new platform. I don't have many numbers to compare against...</description>
      <pubDate>Fri, 02 Feb 2007 13:03:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026064#M48195</guid>
      <dc:creator>Jose Molina</dc:creator>
      <dc:date>2007-02-02T13:03:13Z</dc:date>
    </item>
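The throughput figure quoted in the reply that follows comes straight from the numbers above: 204800 blocks of 8 KiB is 1600 MiB, written in roughly 61 s of wall time. As a quick arithmetic check:

```shell
# 204800 blocks x 8 KiB in ~61 s of wall time (from the dd run above).
BLOCKS=204800
BS_KB=8
SECS=61
MB=$((BLOCKS * BS_KB / 1024))        # total MiB written
echo "total: ${MB} MiB, throughput: $((MB / SECS)) MB/s"
# prints: total: 1600 MiB, throughput: 26 MB/s
```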
    <item>
      <title>Re: yet another disk io question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026065#M48196</link>
      <description>The results show about 26 MB/s, but for a single-threaded application. I think that is not too bad for a two-disk configuration (RAID1). The problem is that you are testing with a single-threaded application, running just one instance, and also you are going through the file system, so much of the data can go to the buffers/cache.&lt;BR /&gt;&lt;BR /&gt;For a better test you should use a new partition and bypass the file system, with commands like these:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Write performance:&lt;BR /&gt;&lt;BR /&gt;for I in `seq 5`; do&lt;BR /&gt;dd if=/dev/zero of=/dev/cciss/c0d0p6 bs=8k count=131072 &amp;amp;&lt;BR /&gt;done&lt;BR /&gt;&lt;BR /&gt;Read performance:&lt;BR /&gt;&lt;BR /&gt;for I in `seq 5`; do&lt;BR /&gt;dd if=/dev/cciss/c0d0p6 of=/dev/null bs=8k count=131072 &amp;amp;&lt;BR /&gt;done</description>
      <pubDate>Fri, 02 Feb 2007 13:15:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026065#M48196</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2007-02-02T13:15:27Z</dc:date>
    </item>
    <item>
      <title>Re: yet another disk io question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026066#M48197</link>
      <description>Thanks, with all this info I think I have enough. I knew I also had to do a parallel test, but I don't have any spare partition to write to directly. I'm closing this thread; thanks a lot for the replies.&lt;BR /&gt;</description>
      <pubDate>Tue, 06 Feb 2007 04:24:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026066#M48197</guid>
      <dc:creator>Jose Molina</dc:creator>
      <dc:date>2007-02-06T04:24:50Z</dc:date>
    </item>
    <item>
      <title>Re: yet another disk io question</title>
      <link>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026067#M48198</link>
      <description>Got enough info to be able to verify the system</description>
      <pubDate>Tue, 06 Feb 2007 05:38:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/yet-another-disk-io-question/m-p/5026067#M48198</guid>
      <dc:creator>Jose Molina</dc:creator>
      <dc:date>2007-02-06T05:38:32Z</dc:date>
    </item>
  </channel>
</rss>

