<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Tuning a Monster in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938288#M929409</link>
    <description>Hi Ross:&lt;BR /&gt;&lt;BR /&gt;With a 'dbc_max_pct' value of 50, and 16GB of memory, the Unix buffer cache could theoretically approach 8GB.  I doubt you want that.  The poor 'syncer' daemon is probably running like mad every 30 seconds!  I'd suggest a much more conservative value like &amp;lt;2&amp;gt; for 'dbc_min_pct' and &amp;lt;5&amp;gt; for 'dbc_max_pct', assuming the RDBMS is buffering and assuming VxFS mount options that bypass the Unix buffer cache.&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt;&lt;BR /&gt;...JRF...</description>
    <pubDate>Fri, 28 Mar 2003 17:14:18 GMT</pubDate>
    <dc:creator>James R. Ferguson</dc:creator>
    <dc:date>2003-03-28T17:14:18Z</dc:date>
    <item>
      <title>Tuning a Monster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938285#M929406</link>
      <description>Hi All!&lt;BR /&gt;&lt;BR /&gt;I have attached a kmtune output from a soon-to-be-production RP7400, 8x750 MHz, 16GB memory.  The I/O is dual-channeled through a McData 1GB switch running PowerPath.  The attached disk is right at 1TB, hardware-striped, 1MB stripe depth.  The RDBMS is 9iAS.  OS block size 8K, 16K database block size.&lt;BR /&gt;&lt;BR /&gt;Any input would be appreciated.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;My concern is dbc_max_pct.  Can you give me any other hints?&lt;BR /&gt;&lt;BR /&gt;Thanks in advance.&lt;BR /&gt;&lt;BR /&gt;RZ&lt;BR /&gt;</description>
      <pubDate>Fri, 28 Mar 2003 17:04:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938285#M929406</guid>
      <dc:creator>Ross Zubritski</dc:creator>
      <dc:date>2003-03-28T17:04:05Z</dc:date>
    </item>
    <item>
      <title>Re: Tuning a Monster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938286#M929407</link>
      <description>Not that I have anything to say about the values themselves, but you might like to try the attached script to get a completely different view of those settings. It has helped me a lot in the past, especially when clients have replaced formulas with fixed values.&lt;BR /&gt;&lt;BR /&gt;Enjoy, have FUN! H.Merijn</description>
      <pubDate>Fri, 28 Mar 2003 17:09:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938286#M929407</guid>
      <dc:creator>H.Merijn Brand (procura</dc:creator>
      <dc:date>2003-03-28T17:09:56Z</dc:date>
    </item>
    <item>
      <title>Re: Tuning a Monster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938287#M929408</link>
      <description>Hi RZ,&lt;BR /&gt;&lt;BR /&gt;Absolutely - you don't want dbc_max_pct at 50%. Don't think you'll EVER need 8GB of disk buffer. Depending on how the Oracle SGA is configured &amp;amp; what the mount options will be for the Oracle extents, you may be able to get away with&lt;BR /&gt;dbc_min_pct =&amp;gt; 2&lt;BR /&gt;dbc_max_pct =&amp;gt; 5&lt;BR /&gt;That still gives you between 320 &amp;amp; 800 MB for buffering.&lt;BR /&gt;&lt;BR /&gt;Did they give you JUST one LUN for ALL of Oracle? I'd definitely not like that.&lt;BR /&gt;&lt;BR /&gt;My $0.02,&lt;BR /&gt;Jeff</description>
      <pubDate>Fri, 28 Mar 2003 17:12:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938287#M929408</guid>
      <dc:creator>Jeff Schussele</dc:creator>
      <dc:date>2003-03-28T17:12:40Z</dc:date>
    </item>
    <item>
      <title>Re: Tuning a Monster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938288#M929409</link>
      <description>Hi Ross:&lt;BR /&gt;&lt;BR /&gt;With a 'dbc_max_pct' value of 50, and 16GB of memory, the Unix buffer cache could theoretically approach 8GB.  I doubt you want that.  The poor 'syncer' daemon is probably running like mad every 30 seconds!  I'd suggest a much more conservative value like &amp;lt;2&amp;gt; for 'dbc_min_pct' and &amp;lt;5&amp;gt; for 'dbc_max_pct', assuming the RDBMS is buffering and assuming VxFS mount options that bypass the Unix buffer cache.&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt;&lt;BR /&gt;...JRF...</description>
      <pubDate>Fri, 28 Mar 2003 17:14:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938288#M929409</guid>
      <dc:creator>James R. Ferguson</dc:creator>
      <dc:date>2003-03-28T17:14:18Z</dc:date>
    </item>
    <item>
      <title>Re: Tuning a Monster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938289#M929410</link>
      <description>Hi RZ,&lt;BR /&gt;&lt;BR /&gt;dbc_max_pct. No doubt about it. Since you have both nbuf and bufpages set to 0, dbc will play a role here.&lt;BR /&gt;&lt;BR /&gt;Since you have 16GB, I would say start with dbc_max_pct=2 and dbc_min_pct=2.&lt;BR /&gt;&lt;BR /&gt;-Sri</description>
      <pubDate>Fri, 28 Mar 2003 17:15:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938289#M929410</guid>
      <dc:creator>Sridhar Bhaskarla</dc:creator>
      <dc:date>2003-03-28T17:15:20Z</dc:date>
    </item>
    <item>
      <title>Re: Tuning a Monster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938290#M929411</link>
      <description>Another tidbit:&lt;BR /&gt;&lt;BR /&gt;SQL*Plus: Release 9.2.0.2.0 - Production on Fri Mar 28 11:15:20 2003&lt;BR /&gt;&lt;BR /&gt;Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.&lt;BR /&gt;&lt;BR /&gt;SQL&amp;gt; connect /as sysdba&lt;BR /&gt;Connected.&lt;BR /&gt;SQL&amp;gt; show sga&lt;BR /&gt;&lt;BR /&gt;Total System Global Area 1009561320 bytes&lt;BR /&gt;Fixed Size                   737000 bytes&lt;BR /&gt;Variable Size             167772160 bytes&lt;BR /&gt;Database Buffers          838860800 bytes&lt;BR /&gt;Redo Buffers                2191360 bytes&lt;BR /&gt;SQL&amp;gt; &lt;BR /&gt;</description>
      <pubDate>Fri, 28 Mar 2003 17:18:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938290#M929411</guid>
      <dc:creator>Ross Zubritski</dc:creator>
      <dc:date>2003-03-28T17:18:25Z</dc:date>
    </item>
    <item>
      <title>Re: Tuning a Monster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938291#M929412</link>
      <description>The first thing that I would do is greatly reduce dbc_max_pct to no more than about 10%. Actually, I would set bufpages to 262144 (1GB - a good value for an 11.11 box with lots of memory). By fixing the buffer cache, you make it easier to tune other things because of much less interaction; you also eliminate some kernel overhead.&lt;BR /&gt;&lt;BR /&gt;I would also reduce maxssiz_64bit; any code that needs that much stack space requires very serious programmer adjustment (with a ball-peen hammer - 128MB should be a gracious plenty).&lt;BR /&gt;&lt;BR /&gt;I would also reduce ninode to no more than 1000 or so - this parameter only applies to HFS filesystems and you almost certainly have only one - /stand.&lt;BR /&gt;&lt;BR /&gt;Your shared memory could probably be bumped up, as Oracle likes very large SGAs.&lt;BR /&gt;Some of your msgxxx and semxxx values look small; check the Oracle-suggested values.&lt;BR /&gt;&lt;BR /&gt;Let nobody persuade you that setting timeslice to a small value (e.g. 1) is a good idea. Leave it at 10.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 28 Mar 2003 17:19:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938291#M929412</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2003-03-28T17:19:25Z</dc:date>
    </item>
    <item>
      <title>Re: Tuning a Monster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938292#M929413</link>
      <description>Jeff,&lt;BR /&gt;&lt;BR /&gt;There are 4 metas involved.&lt;BR /&gt;&lt;BR /&gt;JRF,  &lt;BR /&gt;&lt;BR /&gt;Could you please expound a bit on mount options?&lt;BR /&gt;&lt;BR /&gt;I have attached current fstab.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;RZ&lt;BR /&gt;</description>
      <pubDate>Fri, 28 Mar 2003 17:49:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938292#M929413</guid>
      <dc:creator>Ross Zubritski</dc:creator>
      <dc:date>2003-03-28T17:49:40Z</dc:date>
    </item>
    <item>
      <title>Re: Tuning a Monster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938293#M929414</link>
      <description>hi rz,&lt;BR /&gt;&lt;BR /&gt;what about a statspack report?&lt;BR /&gt;&lt;BR /&gt;let's see what a 15 minutes interval, during high load, gives for this monster.&lt;BR /&gt;&lt;BR /&gt;regards&lt;BR /&gt;Yogeeraj</description>
      <pubDate>Fri, 28 Mar 2003 17:57:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938293#M929414</guid>
      <dc:creator>Yogeeraj_1</dc:creator>
      <dc:date>2003-03-28T17:57:42Z</dc:date>
    </item>
    <item>
      <title>Re: Tuning a Monster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938294#M929415</link>
      <description>Not familiar with Statspack.&lt;BR /&gt;&lt;BR /&gt;Thanks&lt;BR /&gt;</description>
      <pubDate>Fri, 28 Mar 2003 18:00:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938294#M929415</guid>
      <dc:creator>Ross Zubritski</dc:creator>
      <dc:date>2003-03-28T18:00:26Z</dc:date>
    </item>
    <item>
      <title>Re: Tuning a Monster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938295#M929416</link>
      <description>hi again,&lt;BR /&gt;&lt;BR /&gt;STATSPACK&lt;BR /&gt;=========&lt;BR /&gt;You should be using Statspack on a constant basis.&lt;BR /&gt;&lt;BR /&gt;Every morning you should take a snapshot, every afternoon another, every evening yet another.&lt;BR /&gt;&lt;BR /&gt;Now you have a history.  You can compare a Statspack report from today (bad performance) with last week's at the same time (good performance) and look for major differences.&lt;BR /&gt;&lt;BR /&gt;Also, people must "quantify" things.  E.g.: Screen 1 typically takes less than 1 second; today it is taking 60 seconds.  -- Ah ha, maybe we lost an index on some of the tables surrounding screen 1; let's look at that.  Are there specific components "going slow", or is the entire thing going slow?&lt;BR /&gt;&lt;BR /&gt;Statspack will help you identify the top SQL, the big wait events, contention points, and bad performance metrics (e.g. the soft-parse ratio is my personal favorite).&lt;BR /&gt;&lt;BR /&gt;Also, attack this from two points - get the SAs looking at the machine, network, disks, etc.&lt;BR /&gt;&lt;BR /&gt;As it is now, if you don't have a history of what "good" looks like, it is REALLY REALLY hard to figure out "badness". You need to gather more information, isolate the issue if possible, and go from there.&lt;BR /&gt;&lt;BR /&gt;If you need any further help, let us know.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Yogeeraj</description>
      <pubDate>Sat, 29 Mar 2003 05:01:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938295#M929416</guid>
      <dc:creator>Yogeeraj_1</dc:creator>
      <dc:date>2003-03-29T05:01:32Z</dc:date>
    </item>
    <item>
      <title>Re: Tuning a Monster</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938296#M929417</link>
      <description>Let me chime in and say dbc_max_pct is likely a big problem.&lt;BR /&gt;&lt;BR /&gt;I'm attaching a data-collection script for you to measure performance over time.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Sun, 30 Mar 2003 03:07:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/tuning-a-monster/m-p/2938296#M929417</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-03-30T03:07:14Z</dc:date>
    </item>
  </channel>
</rss>