<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Performance I/O disk problem in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o-disk-problem/m-p/3049462#M813124</link>
    <description>Mail is completely I/O bound, which means you really need to set up your disks to be striped at the lvol level across as many disks, and as many disk controllers, as possible.&lt;BR /&gt;&lt;BR /&gt;Are the HP disks as fast, and do they have as much cache, as the ones on your Alpha?&lt;BR /&gt;&lt;BR /&gt;What is your disk layout like?&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Fri, 15 Aug 2003 06:54:09 GMT</pubDate>
    <dc:creator>Stefan Farrelly</dc:creator>
    <dc:date>2003-08-15T06:54:09Z</dc:date>
    <item>
      <title>Performance I/O disk problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o-disk-problem/m-p/3049460#M813122</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;I have an HP rp5430 server (HP 9000 series) running HP-UX 11i, with 1 GB RAM, 2 CPUs and a RAID4Si disk controller.&lt;BR /&gt;&lt;BR /&gt;This server runs an e-mail service for 50,000 accounts (the software is qmail + vpopmail + MySQL + Maildir).&lt;BR /&gt;&lt;BR /&gt;At the moment we are experiencing slow disk reads and writes,&lt;BR /&gt;&lt;BR /&gt;apparently because of the volume of incoming mail (it is an e-mail service). The writes create a bottleneck on the disk, and for that reason local message delivery is slow.&lt;BR /&gt;&lt;BR /&gt;When I run glance, I see disk utilization at 100% and memory at 99%.&lt;BR /&gt;&lt;BR /&gt;I need to improve performance, whether through kernel tuning, disk adjustments, or some other option.&lt;BR /&gt;&lt;BR /&gt;I had no such problem with my previous machine, an AlphaServer 1200 with 739 MB RAM.&lt;BR /&gt;&lt;BR /&gt;I don't know whether I should change some kernel parameters, for example: fs_async, nfile, maxusers, nproc, dbc_max_pct, bufpages, etc.&lt;BR /&gt;&lt;BR /&gt;I am attaching some additional information that I hope will help with the diagnosis.&lt;BR /&gt;I look forward to your recommendations.&lt;BR /&gt;&lt;BR /&gt;Many thanks&lt;BR /&gt;</description>
      <pubDate>Thu, 14 Aug 2003 23:46:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o-disk-problem/m-p/3049460#M813122</guid>
      <dc:creator>Juan_78</dc:creator>
      <dc:date>2003-08-14T23:46:57Z</dc:date>
    </item>
    <item>
      <title>Re: Performance I/O disk problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o-disk-problem/m-p/3049461#M813123</link>
      <description>The first thing you need to do is reduce the size of your buffer cache to around 300 MB (dbc_max_pct). This will have an immediate effect on memory. You could start with dbc_max_pct set to 30 and dbc_min_pct to, say, 15. Three out of five disks are getting thrashed, which indicates a balancing problem. Perhaps you could post your volume group information. I don't think changing the mount options for the filesystems is going to make a great deal of difference. It could be the controller getting thrashed, or you may need additional disks to spread the load. What application is being used?&lt;BR /&gt;</description>
      <pubDate>Fri, 15 Aug 2003 00:01:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o-disk-problem/m-p/3049461#M813123</guid>
      <dc:creator>Michael Tully</dc:creator>
      <dc:date>2003-08-15T00:01:18Z</dc:date>
    </item>
    <item>
      <title>Re: Performance I/O disk problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o-disk-problem/m-p/3049462#M813124</link>
      <description>Mail is completely I/O bound, which means you really need to set up your disks to be striped at the lvol level across as many disks, and as many disk controllers, as possible.&lt;BR /&gt;&lt;BR /&gt;Are the HP disks as fast, and do they have as much cache, as the ones on your Alpha?&lt;BR /&gt;&lt;BR /&gt;What is your disk layout like?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 15 Aug 2003 06:54:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o-disk-problem/m-p/3049462#M813124</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2003-08-15T06:54:09Z</dc:date>
    </item>
    <item>
      <title>Re: Performance I/O disk problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o-disk-problem/m-p/3049463#M813125</link>
      <description>Thanks for your responses,&lt;BR /&gt;Michael and Stefan.&lt;BR /&gt;&lt;BR /&gt;My system has this configuration:&lt;BR /&gt;one RAID4Si disk controller with four 36 GB disks in RAID 5.&lt;BR /&gt;&lt;BR /&gt;Michael: The application is qmail. It runs hundreds of processes, and the problem appears when local e-mail arrives. At those moments local delivery is slow, and both disk utilization and %busy are at 100%.&lt;BR /&gt; &lt;BR /&gt;Stefan: I am attaching my disk layout (output of vgdisplay -v vg01)</description>
      <pubDate>Fri, 15 Aug 2003 15:40:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o-disk-problem/m-p/3049463#M813125</guid>
      <dc:creator>Juan_78</dc:creator>
      <dc:date>2003-08-15T15:40:10Z</dc:date>
    </item>
    <item>
      <title>Re: Performance I/O disk problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o-disk-problem/m-p/3049464#M813126</link>
      <description>You need to upgrade the memory on the box. 1 GB is very little for what is running on it.&lt;BR /&gt;Also adjust swap accordingly.&lt;BR /&gt;&lt;BR /&gt;To reduce disk I/O, use striped mirroring or RAID 5.</description>
      <pubDate>Fri, 15 Aug 2003 16:20:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o-disk-problem/m-p/3049464#M813126</guid>
      <dc:creator>GK_5</dc:creator>
      <dc:date>2003-08-15T16:20:05Z</dc:date>
    </item>
    <item>
      <title>Re: Performance I/O disk problem</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o-disk-problem/m-p/3049465#M813127</link>
      <description>You are I/O bound on disk c4t0d0.&lt;BR /&gt;&lt;BR /&gt;1. Rebuild the kernel with dbc_max_pct set to 10%, as previously mentioned. This is NOT the prime culprit, but it should be done.&lt;BR /&gt;&lt;BR /&gt;2. The "sar -d" output shows that your system is beating up on disk c4t0d0, while c1t0d0 is barely used. SPREAD YOUR DISK I/O ACROSS THE OTHER DISKS.&lt;BR /&gt;&lt;BR /&gt;If possible, spread your disk I/O across another controller. Multiple paths. Stripe across multiple paths if you can.&lt;BR /&gt;&lt;BR /&gt;The sar command only shows what's happening right now. Get an "over time" look at your I/O and try to spot the hot disks....</description>
      <pubDate>Fri, 15 Aug 2003 16:23:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-i-o-disk-problem/m-p/3049465#M813127</guid>
      <dc:creator>Stuart Abramson_2</dc:creator>
      <dc:date>2003-08-15T16:23:50Z</dc:date>
    </item>
  </channel>
</rss>

