<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: adding memory to alieve disk bottleneck in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478706#M847154</link>
    <description>Of course I MEANT...&lt;BR /&gt;&lt;BR /&gt;will NOT.&lt;BR /&gt;&lt;BR /&gt;Ooooops,&lt;BR /&gt;Jeff</description>
    <pubDate>Fri, 04 Feb 2005 11:26:50 GMT</pubDate>
    <dc:creator>Jeff Schussele</dc:creator>
    <dc:date>2005-02-04T11:26:50Z</dc:date>
    <item>
      <title>adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478696#M847144</link>
      <description>I have 9 servers that are (EMC) disk I/O bound&lt;BR /&gt;High CPU waiting on I/O&lt;BR /&gt;Not unusual for our application on UNIVERSE at end of month.&lt;BR /&gt;&lt;BR /&gt;Management wants to throw memory at the problem. How do I take advantage of the extra memory? Add it all to buffer cache?&lt;BR /&gt;&lt;BR /&gt;I have 4GB now, going to 16GB&lt;BR /&gt;&lt;BR /&gt;Total VM : 972.9mb   Sys Mem  : 856.7mb   User Mem: 768.8mb   Phys Mem:  4.00gb&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Feb 2005 10:11:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478696#M847144</guid>
      <dc:creator>Larry Basford</dc:creator>
      <dc:date>2005-02-04T10:11:11Z</dc:date>
    </item>
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478697#M847145</link>
      <description>Hi,&lt;BR /&gt;I would add some memory, yes, but I wonder whether reducing your buffer cache right now might already help, since you say "at end of month", meaning big batches maybe? Bring it down to a more reasonable size such as 500 MB to start with and give it a try; then you have JFS tuning options, and then why not stripe?&lt;BR /&gt;&lt;BR /&gt;All the best&lt;BR /&gt;Victor</description>
      <pubDate>Fri, 04 Feb 2005 10:19:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478697#M847145</guid>
      <dc:creator>Victor BERRIDGE</dc:creator>
      <dc:date>2005-02-04T10:19:13Z</dc:date>
    </item>
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478698#M847146</link>
      <description>Larry,&lt;BR /&gt;&lt;BR /&gt;Before throwing money at a very expensive solution, I would want to be very sure that it would help.  Do you have Glance available?  If so, check the buffer cache statistics in the Reports &amp;gt; System Info &amp;gt; System Tables report.&lt;BR /&gt;&lt;BR /&gt;If you do go for more memory, you might want to increase buffer cache gradually and monitor it the same way to see how well it is being utilized.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Pete</description>
      <pubDate>Fri, 04 Feb 2005 10:22:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478698#M847146</guid>
      <dc:creator>Pete Randall</dc:creator>
      <dc:date>2005-02-04T10:22:49Z</dc:date>
    </item>
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478699#M847147</link>
      <description>You are not going to be using dynamic buffer caching if you have bufpages = 204800, thus the values of dbc_min/max_pct's are not used.&lt;BR /&gt;&lt;BR /&gt;I'd change bufpages = 0 (and nbuf=0) UNLESS the vendor has specified the ABSOLUTE use of bufpages=&lt;SOMENUMBER&gt; and to basically turn off dynamic buffer caching.&lt;BR /&gt;&lt;BR /&gt;What kind of EMC disk array do you have and what kind of EMC monitoring/managing tools do you have available?&lt;BR /&gt;&lt;BR /&gt;live free or die&lt;BR /&gt;harry d brown jr</description>
      <pubDate>Fri, 04 Feb 2005 10:34:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478699#M847147</guid>
      <dc:creator>harry d brown jr</dc:creator>
      <dc:date>2005-02-04T10:34:53Z</dc:date>
    </item>
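    <!-- Harry's point about a hard bufpages setting pinning the cache can be checked with quick arithmetic: HP-UX buffer pages are 4 KB, so a fixed bufpages value translates directly into a fixed cache size. A sketch, using the bufpages=204800 figure quoted in his post:

```shell
# Fixed buffer cache implied by a hard bufpages setting (4 KB per buffer page).
# bufpages=204800 is the value quoted in the post; substitute your kernel's value.
bufpages=204800
page_kb=4
cache_mb=$(( bufpages * page_kb / 1024 ))
echo "fixed buffer cache: ${cache_mb} MB"   # 204800 * 4 KB = 800 MB
```

This matches the "buffercache 800MB fixed" figure Larry reports later in the thread. -->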
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478700#M847148</link>
      <description>This is a *typical* management response - Let's throw money at the problem and *hope* it goes away.&lt;BR /&gt;You REALLY need to use glance/gpm to look at the problem because high wio% is *frequently* due to crappy coding.&lt;BR /&gt;First you need to look at what the *rest* of the CPU usage is - system OR user.&lt;BR /&gt;If it's user - THEN you may well just need more horsepower - NOT memory. IF it's system then the detective work needs to be done &amp;amp; this *will* take some work &amp;amp; time.&lt;BR /&gt;You need to look at the following MeasureWare (OVPA) metrics:&lt;BR /&gt; &lt;BR /&gt;GBL_PRI_QUEUE&lt;BR /&gt;GBL_RUN_QUEUE&lt;BR /&gt;GBL_CPU_INTERRUPT_UTIL&lt;BR /&gt;GBL_CPU_CSWITCH_UTIL&lt;BR /&gt;GBL_CPU_SYSCALL_UTIL&lt;BR /&gt;PROC_CPU_CSWITCH_UTIL&lt;BR /&gt;PROC_CPU_SYSCALL_UTIL&lt;BR /&gt;PROC_CPU_SYS_MODE_UTIL&lt;BR /&gt;&lt;BR /&gt;These - and others - can clue you in to whether more RAM is going to help.&lt;BR /&gt;And I *seriously* think that unless you're paging out NOW - it will.&lt;BR /&gt;&lt;BR /&gt;My 2 cents,&lt;BR /&gt;Jeff&lt;BR /&gt;&lt;BR /&gt;P.S. NON-IT management should *never* make expensive technical decisions BY THEMSELVES.&lt;BR /&gt;Sheesh - that's WHY they hire the techs in the FIRST place!</description>
      <pubDate>Fri, 04 Feb 2005 10:38:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478700#M847148</guid>
      <dc:creator>Jeff Schussele</dc:creator>
      <dc:date>2005-02-04T10:38:04Z</dc:date>
    </item>
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478701#M847149</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;You mean adding memory in the EMC disk array&lt;BR /&gt;or in the server(s)?&lt;BR /&gt;&lt;BR /&gt;In the server you can allocate more memory to cache and buffers for the database, for example.&lt;BR /&gt;&lt;BR /&gt;In the EMC you can change the ratio of the read and write cache sizes already now with this 4GB.&lt;BR /&gt;&lt;BR /&gt;But a little analyzing before you add the memory can't hurt.&lt;BR /&gt;&lt;BR /&gt;It would be good to have statistics of the 4Gb situation, and then you can run the statistics again with 16Gb and see if it really helps...&lt;BR /&gt;&lt;BR /&gt;br,&lt;BR /&gt; B</description>
      <pubDate>Fri, 04 Feb 2005 10:44:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478701#M847149</guid>
      <dc:creator>B. Hulst</dc:creator>
      <dc:date>2005-02-04T10:44:02Z</dc:date>
    </item>
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478702#M847150</link>
      <description>What kind of database is it?&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Feb 2005 10:52:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478702#M847150</guid>
      <dc:creator>TwoProc</dc:creator>
      <dc:date>2005-02-04T10:52:15Z</dc:date>
    </item>
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478703#M847151</link>
      <description>More memory never hurts.&lt;BR /&gt;&lt;BR /&gt;Your database can allocate a larger cache and have more reads come out of memory instead of disk.&lt;BR /&gt;&lt;BR /&gt;It's also important to see how the I/O is spread across the disks.&lt;BR /&gt;&lt;BR /&gt;If you have a particular disk with lots of I/O on it, look at what sits on it. If there is a way to re-arrange things to balance the I/O, that's a good idea.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 04 Feb 2005 11:01:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478703#M847151</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2005-02-04T11:01:34Z</dc:date>
    </item>
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478704#M847152</link>
      <description>MORE INFO&lt;BR /&gt;EMC 8530  96 drives 6,551.19GB&lt;BR /&gt;4GB cache&lt;BR /&gt;&lt;BR /&gt;UNIVERSE database&lt;BR /&gt;&lt;BR /&gt;N4000 servers 4x440 4GB mem&lt;BR /&gt;&lt;BR /&gt;buffercache 800MB fixed&lt;BR /&gt;%rcache  above 90%&lt;BR /&gt;&lt;BR /&gt;Many database selects for reports are the cause of the high I/O&lt;BR /&gt;along with some EMC disk contention.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Feb 2005 11:16:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478704#M847152</guid>
      <dc:creator>Larry Basford</dc:creator>
      <dc:date>2005-02-04T11:16:25Z</dc:date>
    </item>
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478705#M847153</link>
      <description>The EMC disks cannot go faster.&lt;BR /&gt;They are 80GB striped metas.&lt;BR /&gt;10 spindles in each filesystem.</description>
      <pubDate>Fri, 04 Feb 2005 11:18:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478705#M847153</guid>
      <dc:creator>Larry Basford</dc:creator>
      <dc:date>2005-02-04T11:18:03Z</dc:date>
    </item>
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478706#M847154</link>
      <description>Of course I MEANT...&lt;BR /&gt;&lt;BR /&gt;will NOT.&lt;BR /&gt;&lt;BR /&gt;Ooooops,&lt;BR /&gt;Jeff</description>
      <pubDate>Fri, 04 Feb 2005 11:26:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478706#M847154</guid>
      <dc:creator>Jeff Schussele</dc:creator>
      <dc:date>2005-02-04T11:26:50Z</dc:date>
    </item>
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478707#M847155</link>
      <description>Everyone here is correct in that you need to determine what the bottleneck is before deciding on a course of action to correct the problem.&lt;BR /&gt;&lt;BR /&gt;Your high wio% can be caused by many things...&lt;BR /&gt;&lt;BR /&gt;Here are some tips:&lt;BR /&gt;&lt;BR /&gt;Look at your FC or SCSI (however you're connecting to the EMC) bus utilization.  If utilization is high, add some more buses.&lt;BR /&gt;&lt;BR /&gt;Note that HP-UX likes a lot of LUNs.  You have 80GB Metas, but how many?  1 per FS?  If so, you would be better off not using MetaVolumes, and presenting several LUNs to HP-UX instead, letting LVM do the striping.  The reason for this is that HP-UX does a better job managing the I/O queues when it has more queues to manage.&lt;BR /&gt;&lt;BR /&gt;Take a close look at the LVM layout and MetaVolume layout of the EMC.  If you place more than one FS on the same set of disks, they could be causing a lot of contention and thrashing the heads on those drives.  Separate your most highly used FS's onto different sets of disks.&lt;BR /&gt;&lt;BR /&gt;Avoid mixing logs (sequential access) and tablespaces (random I/O) on the same set of disks.  If you need to, put only tablespaces that aren't used heavily with the logs.&lt;BR /&gt;&lt;BR /&gt;Adding that memory to the database's SGA will likely (not guaranteed) lower the amount of disk I/O the database needs to do.  This should at least help a little.  Note that it typically doesn't solve problems like this, but it can help.  Some database I/O needs to be synchronous (such as log writes), so it bypasses any SGA caching anyway...&lt;BR /&gt;&lt;BR /&gt;If you have some performance monitoring software for the EMC, go and use it to make sure you're not overloading an internal bus or CPU, or causing a hot-spot on some of the drives.&lt;BR /&gt;&lt;BR /&gt;I hope this helps,&lt;BR /&gt;&lt;BR /&gt;Good Luck,&lt;BR /&gt;&lt;BR /&gt;Vince&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Feb 2005 11:53:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478707#M847155</guid>
      <dc:creator>Vincent Fleming</dc:creator>
      <dc:date>2005-02-04T11:53:14Z</dc:date>
    </item>
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478708#M847156</link>
      <description>Unfortunately, Universe is a VERY old database design and it does not have options to make use of more memory in each instance of the program. HOWEVER, you can make massive improvements in performance by increasing the Universe config parameter called MBUFS. If it is 100, make it 300 or 500, even 1000. But before you restart Universe apps, you must compute the increase needed for nfiles and maxfiles in the kernel.&lt;BR /&gt; &lt;BR /&gt;Universe uses hundreds to thousands of files, depending on the database design. Since it was designed during the days when 128 megs of RAM = really big, the MBUFS value would control the number of file handles that could be opened at the same time inside the program. Then, as other files were needed, less-used files would be closed and new files opened. You probably notice that the system overhead is fairly high, 20%-40%. That's because the files are being opened and closed hundreds of times per second. You can verify this with:&lt;BR /&gt; &lt;BR /&gt;sar -a 2 5&lt;BR /&gt; &lt;BR /&gt;This will produce a report of directory operations (lookup filenames, directory blocks read, etc). Normal numbers might be single and double digits, while a busy Universe system might have 4-digit numbers and higher (thousands). To reduce these numbers, you need to increase MBUFS dramatically.&lt;BR /&gt; &lt;BR /&gt;So to compute the needed changes to the kernel, make maxfiles (number of simultaneously open files per process) at least MBUFS + 50. So if MBUFS is 500, then maxfiles should be 550 or higher. maxfiles is just a runaway program protection, so you can set it to 1000 and forget it if you want--no extra memory is used.&lt;BR /&gt; &lt;BR /&gt;Then you must increase nfiles using the formula:&lt;BR /&gt; &lt;BR /&gt;NUMPROC=max_number_of_Universe_processes&lt;BR /&gt;nfiles = MBUFS * NUMPROC + NUMPROC * 50&lt;BR /&gt; &lt;BR /&gt;In other words, the maximum number of Universe processes (NUMPROC) times the maximum number of files opened at the same time in each process, plus about 50 files that are always opened in each process.&lt;BR /&gt; &lt;BR /&gt;Don't be alarmed at the size of nfiles. If you have a Universe license for 500 users (really, instances of Universe at the same time) and you make MBUFS=500, then nfiles must be at least 500 * 500 + 500 * 50 or 275000 (that's 275 thousand). Don't worry, HP-UX can scale up to several million files opened at the same time. You may need to add just another 4Gb (8Gb total).&lt;BR /&gt; &lt;BR /&gt;NOTE: The buffer cache has an asymptotic curve of performance. 200 megs is way better than 100 megs, 500 megs improves a bit more, 1000 megs shows little improvement, and beyond 1000 megs little improvement will be seen. Set the maximum DBC % to 500-700 megs and you'll be at the top of the performance improvement curve.</description>
      <pubDate>Fri, 04 Feb 2005 15:48:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478708#M847156</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2005-02-04T15:48:02Z</dc:date>
    </item>
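    <!-- Bill's sizing formula can be sanity-checked in the shell. The MBUFS and NUMPROC values below are the examples from his post, not measured values from Larry's systems:

```shell
# nfiles = MBUFS * NUMPROC + NUMPROC * 50, per the formula in the post above.
MBUFS=500      # example Universe config value from the post
NUMPROC=500    # example: a 500-user Universe license
nfiles=$(( MBUFS * NUMPROC + NUMPROC * 50 ))
maxfiles=$(( MBUFS + 50 ))
echo "nfiles   >= ${nfiles}"     # 250000 + 25000 = 275000
echo "maxfiles >= ${maxfiles}"   # 550
```
-->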
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478709#M847157</link>
      <description>No one mentioned the possibility of increasing the queue depth.  Striping across multiple HBAs will help this, but you can often improve further by using an ioctl command.  I don't mean to dilute the message that you should measure first and understand the bottleneck before experimenting with possible cures, but there are many potential cures, because there are many potential problems.  You can pay HP or others to do a performance analysis, if you don't want to do it yourself.  Also, if you solve the I/O bottleneck, it sounds like the problem will shift to CPU.  What's this with less than a GB of total VM when physical memory is 4GB?  Are you even using all you have now?&lt;BR /&gt;What OS?  What are your kernel parameters?&lt;BR /&gt;swapinfo -tam?</description>
      <pubDate>Mon, 07 Feb 2005 12:42:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478709#M847157</guid>
      <dc:creator>Ted Buis</dc:creator>
      <dc:date>2005-02-07T12:42:04Z</dc:date>
    </item>
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478710#M847158</link>
      <description>We installed an extra 12GB of mem.&lt;BR /&gt;The kmtune output was attached to the first message.&lt;BR /&gt;We have an EMC 8530, dual path with PowerPath, and sar data occasionally shows up to 50,000 I/Os per sec (they are maxed).&lt;BR /&gt;There is no fixing that except with a new EMC D2000P which we will be getting next month.&lt;BR /&gt;Is there any way to use this extra 12GB of memory?   More processors? &lt;BR /&gt;Tunable parameters?&lt;BR /&gt;&lt;BR /&gt;I never found the MBUFS tunable in UNIVERSE&lt;BR /&gt;but &lt;BR /&gt;nfile                  149984  -  (320*(NPROC+16+MAXUSERS)/10+32+2*(NPTY+NSTRPTY+NSTRTEL))&lt;BR /&gt;&lt;BR /&gt;swapinfo -tam&lt;BR /&gt;             Mb      Mb      Mb   PCT  START/      Mb&lt;BR /&gt;TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME&lt;BR /&gt;dev        4096       0    4096    0%       0       -    1  /dev/vg00/lvol2&lt;BR /&gt;reserve       -    1506   -1506&lt;BR /&gt;memory    12758     488   12270    4%&lt;BR /&gt;total     16854    1994   14860   12%       -       0    -&lt;BR /&gt;</description>
      <pubDate>Mon, 07 Feb 2005 15:31:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478710#M847158</guid>
      <dc:creator>Larry Basford</dc:creator>
      <dc:date>2005-02-07T15:31:42Z</dc:date>
    </item>
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478711#M847159</link>
      <description>You'll definitely need to find this variable. Universe won't use any of the extra memory and you won't see any performance gains. It might also be called MBUF, but the parameter is in the Universe configuration file and controls the maximum number of open files per instance of the program. The Universe docs will point out the value name. nfile in the kernel is large because you have a formula based on maxusers, and maxusers is probably set to several hundred. Again, none of the kernel parameters will provide any significant improvement in speed. You need to reduce the directory and file open/close activity.</description>
      <pubDate>Mon, 07 Feb 2005 16:10:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478711#M847159</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2005-02-07T16:10:58Z</dc:date>
    </item>
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478712#M847160</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;If you want to use the 12GB extra memory then set the nbuf value, regardless of the application.  ;-)&lt;BR /&gt;&lt;BR /&gt;(from the man pages)&lt;BR /&gt;nbuf:&lt;BR /&gt;The number of file-system buffer cache buffer headers. If both nbuf and bufpages are set to 0, the kernel allocates ten percent of available memory to buffer space. If only nbuf is 0, it will be computed from bufpages, assuming 4096 bytes per buffer. If both variables are non-zero, the kernel attempts to adhere to both requests, but if necessary, nbuf is changed to correspond to bufpages.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt; Bob</description>
      <pubDate>Mon, 07 Feb 2005 16:26:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478712#M847160</guid>
      <dc:creator>B. Hulst</dc:creator>
      <dc:date>2005-02-07T16:26:13Z</dc:date>
    </item>
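    <!-- If Bob's dynamic-buffer-cache route is taken (nbuf=0 and bufpages=0), dbc_max_pct is what caps the cache. A rough conversion of the 500-700 MB target mentioned earlier in the thread into a percentage of the upgraded 16 GB box looks like this (a sketch with assumed round figures; verify the actual tunables with kmtune on the system):

```shell
# dbc_max_pct is an integer percentage of physical memory.
phys_mb=16384    # 16 GB after the upgrade
target_mb=700    # upper end of the 500-700 MB cache target from the thread
pct=$(( target_mb * 100 / phys_mb ))
echo "dbc_max_pct ~ ${pct}"                        # 70000 / 16384 = 4
echo "resulting cap: $(( phys_mb * pct / 100 )) MB"  # 4% of 16384 MB = 655 MB
```
-->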
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478713#M847161</link>
      <description>Someone might check me on this, but it appears to me from your swapinfo output that you have 4GB of swap space.  However, with 16GB of RAM you need to enable pseudo-swap, which appears to be off; swapmem_on=0 now, and it should be 1 for pseudo-swap to be enabled.  I don't think the system can use the RAM unless you have pseudo-swap enabled or create more physical device swap.</description>
      <pubDate>Mon, 07 Feb 2005 18:36:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478713#M847161</guid>
      <dc:creator>Ted Buis</dc:creator>
      <dc:date>2005-02-07T18:36:07Z</dc:date>
    </item>
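    <!-- Ted's pseudo-swap point can be cross-checked against the swapinfo -tam output Larry posted: with swapmem_on=1, the reservable total is device swap plus the "memory" (pseudo-swap) line. The figures below are taken from that output:

```shell
# Figures from the swapinfo -tam output earlier in the thread.
dev_mb=4096      # device swap (/dev/vg00/lvol2)
mem_mb=12758     # pseudo-swap contribution reported on the "memory" line
echo "total reservable: $(( dev_mb + mem_mb )) MB"   # 16854, matching the "total" line
```
-->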
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478714#M847162</link>
      <description>Thanks Ted,&lt;BR /&gt;Yes, I did set swapmem_on to 1 after adding the RAM (16GB total).&lt;BR /&gt;&lt;BR /&gt;Glad to know you are checking out my config that closely.&lt;BR /&gt;&lt;BR /&gt;The only thing I think that might help is that the system is an N4000 and we added 2 carriers to the system. This increases the memory bandwidth.&lt;BR /&gt;</description>
      <pubDate>Mon, 07 Feb 2005 18:58:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478714#M847162</guid>
      <dc:creator>Larry Basford</dc:creator>
      <dc:date>2005-02-07T18:58:16Z</dc:date>
    </item>
    <item>
      <title>Re: adding memory to alieve disk bottleneck</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478715#M847163</link>
      <description>You need 4 carriers total for maximum bandwidth to memory, most important if you have more than 4 CPUs.  Ideally with memory spread out symmetrically across the carriers.  Some would argue that buffer cache is the best approach, others might suggest a RAM disk for tmp space.  You might want to review this web page.  I have not tried any of them.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.unixguide.net/hp/faq/5.3.2.shtml" target="_blank"&gt;http://www.unixguide.net/hp/faq/5.3.2.shtml&lt;/A&gt;</description>
      <pubDate>Mon, 07 Feb 2005 19:06:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/adding-memory-to-alieve-disk-bottleneck/m-p/3478715#M847163</guid>
      <dc:creator>Ted Buis</dc:creator>
      <dc:date>2005-02-07T19:06:12Z</dc:date>
    </item>
  </channel>
</rss>

