<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic iowait 100% linx redhat ES3 2.4.21-9.ELsmp in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585576#M18759</link>
    <description>I have RH ES3, kernel 2.4.21-9.ELsmp, installed on a DL380 G3. It has 6 HDDs (72 GB) in RAID 5. During any read or write activity I'm getting 100% iowait. Could you advise what to do? The latest cciss driver is installed: cpq_cciss-2.4.54-14.rhel3.i686&lt;BR /&gt;</description>
    <pubDate>Tue, 19 Jul 2005 05:18:17 GMT</pubDate>
    <dc:creator>Yaroslav_4</dc:creator>
    <dc:date>2005-07-19T05:18:17Z</dc:date>
    <item>
      <title>iowait 100% linx redhat ES3 2.4.21-9.ELsmp</title>
      <link>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585576#M18759</link>
      <description>I have RH ES3, kernel 2.4.21-9.ELsmp, installed on a DL380 G3. It has 6 HDDs (72 GB) in RAID 5. During any read or write activity I'm getting 100% iowait. Could you advise what to do? The latest cciss driver is installed: cpq_cciss-2.4.54-14.rhel3.i686&lt;BR /&gt;</description>
      <pubDate>Tue, 19 Jul 2005 05:18:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585576#M18759</guid>
      <dc:creator>Yaroslav_4</dc:creator>
      <dc:date>2005-07-19T05:18:17Z</dc:date>
    </item>
    <item>
      <title>Re: iowait 100% linx redhat ES3 2.4.21-9.ELsmp</title>
      <link>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585577#M18760</link>
      <description>What are you doing on the server?&lt;BR /&gt;&lt;BR /&gt;Some information: run 'vmstat -ba 1 5' and show us the details.  Also show us the contents of '/proc/meminfo'.&lt;BR /&gt;&lt;BR /&gt;Basically, what I'm looking for here is the swap activity and how much memory you've got.&lt;BR /&gt;&lt;BR /&gt;The first thing I'd get you to do, however, is update to the latest errata kernel (32.EL) and associated packages.  There are a number of IO and memory handling updates which might be beneficial.&lt;BR /&gt;&lt;BR /&gt;Also, what RAID controller are you using?  Are you using the onboard Smart Array 5i, or another controller?</description>
      <pubDate>Tue, 19 Jul 2005 06:14:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585577#M18760</guid>
      <dc:creator>Stuart Browne</dc:creator>
      <dc:date>2005-07-19T06:14:49Z</dc:date>
    </item>
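    <!-- A minimal sketch of how the diagnostics requested above could be collected on a RHEL 3 box; the commands are standard, but the exact output format varies with the procps version, and nothing here is specific to this thread's system.
         uname -r                  # confirm the running kernel (e.g. whether the 32.EL errata kernel is installed)
         cat /proc/meminfo         # memory, buffer, cache and swap totals
         vmstat 1 5                # watch si/so (swap in/out) and the us/sy/id/wa CPU columns
         top -b -n 1 | head -20    # batch-mode snapshot of the busiest processes
    -->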
    <item>
      <title>Re: iowait 100% linx redhat ES3 2.4.21-9.ELsmp</title>
      <link>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585578#M18761</link>
      <description>&lt;BR /&gt;I believe it could be related to a memory/swap issue. Please post /proc/meminfo and, if possible, the 'top' command output.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 19 Jul 2005 06:20:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585578#M18761</guid>
      <dc:creator>Gopi Sekar</dc:creator>
      <dc:date>2005-07-19T06:20:26Z</dc:date>
    </item>
    <item>
      <title>Re: iowait 100% linx redhat ES3 2.4.21-9.ELsmp</title>
      <link>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585579#M18762</link>
      <description>I'm using the onboard Smart Array 5i.&lt;BR /&gt;Please look at the output of the related commands:&lt;BR /&gt;CPU states:  cpu    user    nice  system    irq  softirq  iowait    idle&lt;BR /&gt;           total    0.0%    0.0%    0.0%   0.0%     0.0%  100.0%    0.0%&lt;BR /&gt;           cpu00    0.0%    0.0%    0.0%   0.0%     0.0%  100.0%    0.0%&lt;BR /&gt;           cpu01    0.0%    0.0%    0.0%   0.0%     0.0%  100.0%    0.0%&lt;BR /&gt;           cpu02    0.0%    0.0%    0.0%   0.0%     0.0%  100.0%    0.0%&lt;BR /&gt;           cpu03    0.0%    0.0%    0.0%   0.0%     0.0%  100.0%    0.0%&lt;BR /&gt;Mem:  4898932k av, 3027948k used, 1870984k free,       0k shrd,   35852k buff&lt;BR /&gt;       860296k active,            2029808k inactive&lt;BR /&gt;Swap: 10241428k av,       0k used, 10241428k free                 2813784k cached&lt;BR /&gt;&lt;BR /&gt;[root@Zeus storage]# cat /proc/meminfo&lt;BR /&gt;        total:    used:    free:  shared: buffers:  cached:&lt;BR /&gt;Mem:  5016506368 4998258688 18247680        0 39489536 4750659584&lt;BR /&gt;Swap: 10487222272        0 10487222272&lt;BR /&gt;MemTotal:      4898932 kB&lt;BR /&gt;MemFree:         17820 kB&lt;BR /&gt;MemShared:           0 kB&lt;BR /&gt;Buffers:         38564 kB&lt;BR /&gt;Cached:        4639316 kB&lt;BR /&gt;SwapCached:          0 kB&lt;BR /&gt;Active:         862832 kB&lt;BR /&gt;ActiveAnon:     553452 kB&lt;BR /&gt;ActiveCache:    309380 kB&lt;BR /&gt;Inact_dirty:   3696872 kB&lt;BR /&gt;Inact_laundry:  105020 kB&lt;BR /&gt;Inact_clean:     79600 kB&lt;BR /&gt;Inact_target:   948864 kB&lt;BR /&gt;HighTotal:     4063204 kB&lt;BR /&gt;HighFree:         1088 kB&lt;BR /&gt;LowTotal:       835728 kB&lt;BR /&gt;LowFree:         16732 kB&lt;BR /&gt;SwapTotal:    10241428 kB&lt;BR /&gt;SwapFree:     10241428 kB&lt;BR /&gt;HugePages_Total:     0&lt;BR /&gt;HugePages_Free:      0&lt;BR /&gt;Hugepagesize:     2048 kB&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;[root@Zeus storage]# vmstat -a 1 5&lt;BR /&gt;procs                      memory      swap          io     system         cpu&lt;BR /&gt; r  b   swpd   free  inact active   si   so    bi    bo   in    cs us sy id wa&lt;BR /&gt; 1  3      0  18164 3704904 935904    0    0  1240   947  361   492  1  1 79 19&lt;BR /&gt; 0  5      0  18200 3704156 936568    0    0  4692  8548 1376   665  1  1  0 98&lt;BR /&gt; 1  1      0  18020 3700672 940196    0    0 42208   184 10256 10674  1 10 13 75&lt;BR /&gt; 1  2      0  17876 3698336 942860    0    0 46004    72 11217 11616  2  8 14 76&lt;BR /&gt; 0  3      0  17908 3697520 944100    0    0 44840   208 10990 15024  1  9  4 86&lt;BR /&gt;</description>
      <pubDate>Tue, 19 Jul 2005 06:45:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585579#M18762</guid>
      <dc:creator>Yaroslav_4</dc:creator>
      <dc:date>2005-07-19T06:45:01Z</dc:date>
    </item>
    <item>
      <title>Re: iowait 100% linx redhat ES3 2.4.21-9.ELsmp</title>
      <link>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585580#M18763</link>
      <description>&lt;BR /&gt;Your system looks to be clean of any memory/swap issue, and the CPU is relaxing.&lt;BR /&gt;&lt;BR /&gt;Is there any activity happening on the hard disks (LEDs continuously glowing)? This might happen if one of the hard disks is bad and the RAID is trying to rebuild it.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Gopi</description>
      <pubDate>Tue, 19 Jul 2005 06:53:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585580#M18763</guid>
      <dc:creator>Gopi Sekar</dc:creator>
      <dc:date>2005-07-19T06:53:07Z</dc:date>
    </item>
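    <!-- A hedged way to check the point above about a failed disk or a rebuild in progress: the 2.4 cciss driver exposes controller and logical drive status under /proc. The exact fields shown depend on the driver and firmware revision.
         cat /proc/driver/cciss/cciss0     # controller firmware, logical drive count and status
         dmesg | grep -i cciss             # kernel messages from the Smart Array driver
    -->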
    <item>
      <title>Re: iowait 100% linx redhat ES3 2.4.21-9.ELsmp</title>
      <link>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585581#M18764</link>
      <description>It seems to be an erratic application, because the high system time is not related to memory problems, as seen in the output.&lt;BR /&gt;&lt;BR /&gt;What is running on your server? Is the RAID a software or hardware RAID?</description>
      <pubDate>Tue, 19 Jul 2005 06:59:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585581#M18764</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2005-07-19T06:59:39Z</dc:date>
    </item>
    <item>
      <title>Re: iowait 100% linx redhat ES3 2.4.21-9.ELsmp</title>
      <link>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585582#M18765</link>
      <description>*nod*nod* Yup, no swap activity, but nasty disk IO.&lt;BR /&gt;&lt;BR /&gt;What is this machine doing?!  The name is 'storage'.  Is this perhaps a database storage server?  And if so, how big are the queries it's getting hammered with?!</description>
      <pubDate>Tue, 19 Jul 2005 07:06:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585582#M18765</guid>
      <dc:creator>Stuart Browne</dc:creator>
      <dc:date>2005-07-19T07:06:45Z</dc:date>
    </item>
    <item>
      <title>Re: iowait 100% linx redhat ES3 2.4.21-9.ELsmp</title>
      <link>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585583#M18766</link>
      <description>On this machine I'm running an Oracle database instance; all datafiles are 2 GB. I tried shutting the database down and just copying a file: the same thing. I even tried this test with Oracle down:&lt;BR /&gt;cat some_file &amp;gt; /dev/null&lt;BR /&gt;and I got the same result. Before that I changed all the disks and reinstalled Linux; nothing changed.</description>
      <pubDate>Tue, 19 Jul 2005 07:23:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585583#M18766</guid>
      <dc:creator>Yaroslav_4</dc:creator>
      <dc:date>2005-07-19T07:23:31Z</dc:date>
    </item>
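    <!-- A simple way to repeat the read test described above in a measurable form, assuming any large file on the array (the file path below is only an example). Run vmstat in a second terminal and watch the 'wa' column while the read runs.
         time dd if=/storage/some_file of=/dev/null bs=1024k
         vmstat 1 10
    -->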
    <item>
      <title>Re: iowait 100% linx redhat ES3 2.4.21-9.ELsmp</title>
      <link>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585584#M18767</link>
      <description>When I'm copying some files the LEDs begin flashing, but when I'm not, they look normal.</description>
      <pubDate>Tue, 19 Jul 2005 07:32:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585584#M18767</guid>
      <dc:creator>Yaroslav_4</dc:creator>
      <dc:date>2005-07-19T07:32:24Z</dc:date>
    </item>
    <item>
      <title>Re: iowait 100% linx redhat ES3 2.4.21-9.ELsmp</title>
      <link>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585585#M18768</link>
      <description>Can you give information about your partitions, filesystem type (ext3?), block size, RAID stripe size and cache options, and journal mount options?&lt;BR /&gt;&lt;BR /&gt;Are you really using RAID 5, or ADG (RAID 5 with 2 parity disks)?&lt;BR /&gt;&lt;BR /&gt;Do you have the chance to add a new disk without RAID (JBOD), mount it, and run the tests? Ideally on another SCSI controller.&lt;BR /&gt;&lt;BR /&gt;This way you can know whether the problem is related to the RAID itself.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Recommendations:&lt;BR /&gt;&lt;BR /&gt;When creating the filesystem, use:&lt;BR /&gt;&lt;BR /&gt;-O dir_index &lt;BR /&gt;-R stride=N&lt;BR /&gt;-O sparse_super&lt;BR /&gt;&lt;BR /&gt;where N*4k equals the RAID stripe size.&lt;BR /&gt;&lt;BR /&gt;Use writeback journaling when mounting the filesystem (if it is ext3).</description>
      <pubDate>Tue, 19 Jul 2005 13:52:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585585#M18768</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2005-07-19T13:52:22Z</dc:date>
    </item>
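    <!-- A minimal sketch of the filesystem recommendations above. The stripe size, device name and mount point are assumptions for illustration only: with a 64 KB per-disk stripe and 4 KB blocks, N = 64 KB / 4 KB = 16.
         mke2fs -j -b 4096 -R stride=16 -O dir_index,sparse_super /dev/cciss/c0d0p3
         mount -o data=writeback /dev/cciss/c0d0p3 /storage     # ext3 writeback (metadata-only) journaling
    -->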
    <item>
      <title>Re: iowait 100% linx redhat ES3 2.4.21-9.ELsmp</title>
      <link>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585586#M18769</link>
      <description>There is no chance to run the test with one more disk. It's RAID 5, which is the standard on the DL380 in the configuration menu.</description>
      <pubDate>Wed, 20 Jul 2005 01:49:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585586#M18769</guid>
      <dc:creator>Yaroslav_4</dc:creator>
      <dc:date>2005-07-20T01:49:39Z</dc:date>
    </item>
    <item>
      <title>Re: iowait 100% linx redhat ES3 2.4.21-9.ELsmp</title>
      <link>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585587#M18770</link>
      <description>Ivan,&lt;BR /&gt;&lt;BR /&gt;How do you determine the stride size?&lt;BR /&gt;&lt;BR /&gt;What happens if this is my configuration:&lt;BR /&gt;&lt;BR /&gt;1) RAID 5 from a P400, 4 disks in each logical volume with a 256 KB stripe size, for a total of 3 logical volumes.&lt;BR /&gt;2) Linux LVM.&lt;BR /&gt;   Not sure how to choose my PV metadata size (do I need to worry about this)?&lt;BR /&gt;   Not sure if I should worry about the LV stripe size; I would like to stripe it for better performance. &lt;BR /&gt;3) Ext3 filesystem stride option. How does this interact with LVM's LV stripe size? How does it work with the hardware RAID's stripe size? &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Any thoughts?&lt;BR /&gt;&lt;BR /&gt;TIA&lt;BR /&gt;</description>
      <pubDate>Wed, 18 Jun 2008 09:30:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/iowait-100-linx-redhat-es3-2-4-21-9-elsmp/m-p/3585587#M18770</guid>
      <dc:creator>Megan Moore</dc:creator>
      <dc:date>2008-06-18T09:30:10Z</dc:date>
    </item>
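    <!-- A hedged worked example for the question above, using the general rule that stride is the per-disk chunk size divided by the filesystem block size. With a 256 KB chunk and 4 KB ext3 blocks: stride = 256 / 4 = 64; with 4 disks in RAID 5 (3 data + 1 parity) a full stripe is 3 * 256 KB = 768 KB. The logical volume name below is hypothetical.
         mke2fs -j -b 4096 -E stride=64 /dev/vg00/lvdata
    -->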
  </channel>
</rss>

