<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: high disk   IO  kernel params in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874101#M99211</link>
    <description>mike,&lt;BR /&gt;&lt;BR /&gt;give some of these nice people some points; they are trying to help.&lt;BR /&gt;&lt;BR /&gt;I've read the other suggestions, and I'm still thinking patches or disk layout.&lt;BR /&gt;&lt;BR /&gt;Steve</description>
    <pubDate>Sun, 05 Jan 2003 01:39:39 GMT</pubDate>
    <dc:creator>Steven E. Protter</dc:creator>
    <dc:date>2003-01-05T01:39:39Z</dc:date>
    <item>
      <title>high disk   IO  kernel params</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874089#M99199</link>
      <description>Any suggestions that would reduce Disk IO from 100%??&lt;BR /&gt;&lt;BR /&gt;HPUX 11 /N9000 server/Oracle8.1.7&lt;BR /&gt;&lt;BR /&gt;kmtune  query&lt;BR /&gt;&lt;BR /&gt;NSTRBLKSCHED         2&lt;BR /&gt;NSTREVENT            50&lt;BR /&gt;NSTRPUSH             16&lt;BR /&gt;NSTRSCHED            0&lt;BR /&gt;STRCTLSZ             1024&lt;BR /&gt;STRMSGSZ             65535&lt;BR /&gt;acctresume           4&lt;BR /&gt;acctsuspend          2&lt;BR /&gt;aio_listio_max       256&lt;BR /&gt;aio_max_ops          2048&lt;BR /&gt;aio_physmem_pct      10&lt;BR /&gt;aio_prio_delta_max   20&lt;BR /&gt;allocate_fs_swapmap  0&lt;BR /&gt;alwaysdump           0&lt;BR /&gt;bootspinlocks        256&lt;BR /&gt;bufcache_hash_locks  128&lt;BR /&gt;bufpages             (NBUF*2)&lt;BR /&gt;chanq_hash_locks     256&lt;BR /&gt;create_fastlinks     0&lt;BR /&gt;dbc_max_pct          20&lt;BR /&gt;dbc_min_pct          5&lt;BR /&gt;default_disk_ir      0&lt;BR /&gt;desfree              0&lt;BR /&gt;dnlc_hash_locks      64&lt;BR /&gt;dontdump             0&lt;BR /&gt;dskless_node         0&lt;BR /&gt;dst                  1&lt;BR /&gt;eisa_io_estimate     0x300&lt;BR /&gt;eqmemsize            15&lt;BR /&gt;fcp_large_config     0&lt;BR /&gt;file_pad             10&lt;BR /&gt;fs_async             0&lt;BR /&gt;ftable_hash_locks    64&lt;BR /&gt;hdlpreg_hash_locks   128&lt;BR /&gt;hfs_max_ra_blocks    8&lt;BR /&gt;hfs_ra_per_disk      64&lt;BR /&gt;hpux_aes_override    0&lt;BR /&gt;initmodmax           50&lt;BR /&gt;io_ports_hash_locks  64&lt;BR /&gt;iomemsize            40000&lt;BR /&gt;km_disable           0&lt;BR /&gt;ksi_alloc_max        (NPROC*8)&lt;BR /&gt;ksi_send_max         32&lt;BR /&gt;lotsfree             0&lt;BR /&gt;max_async_ports      50&lt;BR /&gt;max_fcp_reqs         512&lt;BR /&gt;max_mem_window       0&lt;BR /&gt;max_thread_proc      256&lt;BR /&gt;maxdsiz              0X40000000&lt;BR /&gt;maxdsiz_64bit        0X0000000050000000&lt;BR /&gt;maxfiles             
2048&lt;BR /&gt;maxfiles_lim         2048&lt;BR /&gt;maxqueuetime         0&lt;BR /&gt;maxssiz              0X01500000&lt;BR /&gt;maxssiz_64bit        0X00900000&lt;BR /&gt;maxswapchunks        1024&lt;BR /&gt;maxtsiz              0X09000000&lt;BR /&gt;maxtsiz_64bit        0X0000000050000000&lt;BR /&gt;maxuprc              2000&lt;BR /&gt;maxusers             400&lt;BR /&gt;maxvgs               10&lt;BR /&gt;mesg                 1&lt;BR /&gt;minfree              0&lt;BR /&gt;modstrmax            500&lt;BR /&gt;msgmap               (2+MSGTQL)&lt;BR /&gt;msgmax               8192&lt;BR /&gt;msgmnb               16384&lt;BR /&gt;msgmni               180&lt;BR /&gt;msgseg               2048&lt;BR /&gt;msgssz               8&lt;BR /&gt;msgtql               40&lt;BR /&gt;nbuf                 0&lt;BR /&gt;ncallout             (16+NPROC)&lt;BR /&gt;ncdnode              150&lt;BR /&gt;nclist               (100+16*MAXUSERS)&lt;BR /&gt;ncsize               (NINODE+VX_NCSIZE)&lt;BR /&gt;ndilbuffers          30&lt;BR /&gt;netisr_priority      -1&lt;BR /&gt;netmemmax            0&lt;BR /&gt;nfile                (32*(NPROC+16+MAXUSERS)/10+32+2*(NPTY+NSTRPTY+NSTRTEL))&lt;BR /&gt;nflocks              1000&lt;BR /&gt;nhtbl_scale          0&lt;BR /&gt;ninode               ((NPROC+16+MAXUSERS)+32+(2*NPTY))&lt;BR /&gt;nkthread             (((NPROC*7)/4)+16)&lt;BR /&gt;nni                  2&lt;BR /&gt;no_lvm_disks         0&lt;BR /&gt;nproc                (20+16*MAXUSERS)&lt;BR /&gt;npty                 400&lt;BR /&gt;nstrpty              400&lt;BR /&gt;nstrtel              400&lt;BR /&gt;nswapdev             10&lt;BR /&gt;nswapfs              10&lt;BR /&gt;nsysmap              ((NPROC)&amp;gt;800?2*(NPROC):800)&lt;BR /&gt;nsysmap64            ((NPROC)&amp;gt;800?2*(NPROC):800)&lt;BR /&gt;num_tachyon_adapters 5&lt;BR /&gt;o_sync_is_o_dsync    0&lt;BR /&gt;page_text_to_local   0&lt;BR /&gt;pfdat_hash_locks     128&lt;BR /&gt;public_shlibs        1&lt;BR /&gt;region_hash_locks    
128&lt;BR /&gt;remote_nfs_swap      0&lt;BR /&gt;rtsched_numpri       32&lt;BR /&gt;scroll_lines         100&lt;BR /&gt;scsi_maxphys         1048576&lt;BR /&gt;sema                 1&lt;BR /&gt;semaem               16384&lt;BR /&gt;semmap               (SEMMNI+2)&lt;BR /&gt;semmni               1800&lt;BR /&gt;semmns               4096&lt;BR /&gt;semmnu               600&lt;BR /&gt;semume               10&lt;BR /&gt;semvmx               32767&lt;BR /&gt;sendfile_max         0&lt;BR /&gt;shmem                1&lt;BR /&gt;shmmax               0X70000000&lt;BR /&gt;shmmni               300&lt;BR /&gt;shmseg               120&lt;BR /&gt;st_ats_enabled       1&lt;BR /&gt;st_fail_overruns     0&lt;BR /&gt;st_large_recs        0&lt;BR /&gt;streampipes          0&lt;BR /&gt;swapmem_on           1&lt;BR /&gt;swchunk              2048&lt;BR /&gt;sysv_hash_locks      128&lt;BR /&gt;tcphashsz            0&lt;BR /&gt;timeslice            (100/10)&lt;BR /&gt;timezone             420&lt;BR /&gt;unlockable_mem       0&lt;BR /&gt;vnode_cd_hash_locks  128&lt;BR /&gt;vnode_hash_locks     128&lt;BR /&gt;vps_ceiling          16&lt;BR /&gt;vps_chatr_ceiling    65536&lt;BR /&gt;vps_pagesize         4&lt;BR /&gt;vx_ncsize            1024&lt;BR /&gt;vx_ninode            0&lt;BR /&gt;vx_noifree           0&lt;BR /&gt;vxfs_max_ra_kbytes   1024&lt;BR /&gt;vxfs_ra_per_disk     1024&lt;BR /&gt;&lt;BR /&gt;thanks &lt;BR /&gt;mike&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 03 Jan 2003 16:45:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874089#M99199</guid>
      <dc:creator>michael_210</dc:creator>
      <dc:date>2003-01-03T16:45:37Z</dc:date>
    </item>
    <item>
      <title>Re: high disk   IO  kernel params</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874090#M99200</link>
      <description>You really need to investigate why your disk IO is at 100% rather than looking into kernel parameters.  Even if you could find a parameter which would throttle the IO back, you would just be creating a system bottleneck which is probably even worse than anything you are already experiencing!&lt;BR /&gt;&lt;BR /&gt;Kind regards,&lt;BR /&gt;&lt;BR /&gt;Robert Thorneycroft</description>
      <pubDate>Fri, 03 Jan 2003 16:50:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874090#M99200</guid>
      <dc:creator>Robert Thorneycroft</dc:creator>
      <dc:date>2003-01-03T16:50:28Z</dc:date>
    </item>
    <item>
      <title>Re: high disk   IO  kernel params</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874091#M99201</link>
      <description>The usage is due to multiple Oracle&lt;BR /&gt;processes running concurrently. I am working with the devs to improve the process scripts.&lt;BR /&gt;&lt;BR /&gt;I was just curious whether making any adjustments to the params would provide any benefit.&lt;BR /&gt;thanks&lt;BR /&gt;</description>
      <pubDate>Fri, 03 Jan 2003 17:17:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874091#M99201</guid>
      <dc:creator>michael_210</dc:creator>
      <dc:date>2003-01-03T17:17:52Z</dc:date>
    </item>
    <item>
      <title>Re: high disk   IO  kernel params</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874092#M99202</link>
      <description>I don't think this is a kernel problem.&lt;BR /&gt;&lt;BR /&gt;Are you using asynchronous access?&lt;BR /&gt;&lt;BR /&gt;Is /etc/privgroup set properly (dba MLOCK)?&lt;BR /&gt;&lt;BR /&gt;Also, you might want to look into your disk layout.  It makes a difference.&lt;BR /&gt;&lt;BR /&gt;Steve</description>
      <pubDate>Fri, 03 Jan 2003 17:24:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874092#M99202</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-01-03T17:24:20Z</dc:date>
    </item>
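    <!-- A minimal sketch of the checks Steve raises above, assuming the standard
         HP-UX setprivgrp/getprivgrp tooling. The async driver major/minor numbers
         are illustrative and system-specific; verify them before use.

    ```shell
    # Grant the dba group the MLOCK privilege Oracle needs for async I/O,
    # and persist it across reboots via /etc/privgroup.
    echo "dba MLOCK" >> /etc/privgroup
    setprivgrp dba MLOCK

    # Verify the privilege took effect.
    getprivgrp dba

    # Confirm the async disk driver device exists; create it if missing
    # (major 101, minor 0x0 are common defaults on HP-UX 11).
    ls -l /dev/async || mknod /dev/async c 101 0x0
    ```
    -->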
    <item>
      <title>Re: high disk   IO  kernel params</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874093#M99203</link>
      <description>More likely you need to look at how your LVOLs are laid out across the disks.  Are you using striping?  Do you have datafiles and redo logs on the same disks?  Questions like that need to be answered and that information taken into account when laying out the VGs and LVs that will contain your Oracle databases.</description>
      <pubDate>Fri, 03 Jan 2003 17:24:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874093#M99203</guid>
      <dc:creator>Patrick Wallek</dc:creator>
      <dc:date>2003-01-03T17:24:43Z</dc:date>
    </item>
    <item>
      <title>Re: high disk   IO  kernel params</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874094#M99204</link>
      <description>If the disk utilization is coming from your Oracle processes, there is not really a lot you can do about it other than:&lt;BR /&gt;1) Tune your Oracle SQL queries so that they use better access methods to gather the data they are after&lt;BR /&gt;2) Increase the size of your SGA so that more data resides in memory and repeated disk accesses are not necessary.&lt;BR /&gt;3) Go buy a nice fast disk array from EMC or HP.&lt;BR /&gt;&lt;BR /&gt;There is also a known problem with JFS 3.3 and IO throttling, but this does not sound like the issue you are experiencing, as it tends to have the opposite effect.&lt;BR /&gt;&lt;BR /&gt;Sorry I could not be of any further help.&lt;BR /&gt;&lt;BR /&gt;Kind regards,&lt;BR /&gt;&lt;BR /&gt;Robert Thorneycroft.</description>
      <pubDate>Fri, 03 Jan 2003 17:27:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874094#M99204</guid>
      <dc:creator>Robert Thorneycroft</dc:creator>
      <dc:date>2003-01-03T17:27:46Z</dc:date>
    </item>
    <item>
      <title>Re: high disk   IO  kernel params</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874095#M99205</link>
      <description>The data live on an array in different lvols, connected via fibre.&lt;BR /&gt;The Oracle binaries are on the server itself.&lt;BR /&gt;&lt;BR /&gt;I tried increasing the SGA but couldn't get higher than 980 MB.&lt;BR /&gt;If I increase shmmax, this should allow the Oracle SGA to grow.&lt;BR /&gt;&lt;BR /&gt;What I am curious about is if I increase&lt;BR /&gt;maxswapchunks and max_thread_proc&lt;BR /&gt;and/or change dbc_max_pct,&lt;BR /&gt;will there be any improvement?&lt;BR /&gt;regards&lt;BR /&gt;mike&lt;BR /&gt;&lt;BR /&gt;/dev/vg00/lvol3     143360  106698   34399   76% /&lt;BR /&gt;/dev/vg00/lvol1      83733   43814   31545   58% /stand&lt;BR /&gt;/dev/vg00/lvol8    3670016 3194233  446389   88% /var&lt;BR /&gt;/dev/vg00/lvol7     770048  590443  168410   78% /usr&lt;BR /&gt;/dev/vg00/lvol4     536576  224977  292681   43% /tmp&lt;BR /&gt;/dev/vg01/lvol2    2048000  744358 1222238   38% /rman&lt;BR /&gt;/dev/vg01/lvol1    4096000 2727318 1283333   68% /pslogs&lt;BR /&gt;/dev/vg00/lvol6    1024000  966631   53969   95% /opt&lt;BR /&gt;/dev/vg00/lvol9    8192000 5525051 2500298   69% /hr/hroracle8i&lt;BR /&gt;/dev/vg01/lvol4    4096000    2104 3838035    0% /hr/hrarch&lt;BR /&gt;/dev/vg00/lvol5     102400   30906   67030   32% /home&lt;BR /&gt;/dev/vg00/lvol10    102400    1133   94945    1% /fin/finoracle8i&lt;BR /&gt;/dev/u06/lvol8     34865152 1342256 33261008    4% /fs/fsorasys&lt;BR /&gt;/dev/u06/lvol63    8912896   10930 8623788    0% /fs/fstools&lt;BR /&gt;/dev/u06/lvol9     34865152 4611488 30017376   13% /fs/fsredos&lt;BR /&gt;/dev/u06/lvol62    20709376 6294932 14189228   31% /fs/fstemp&lt;BR /&gt;/dev/u06/lvol5     177209344 22592096 153409392   13% /fs/fsarch&lt;BR /&gt;/dev/u06/lvol61    41156608 40684568  468360   99% /fs/fsrbs&lt;BR /&gt;/dev/u06/lvol1     141557760 26592432 114067216   19% /fs/fsdata&lt;BR /&gt;/dev/u06/lvol4     70778880 22204808 48194600   32% /fs/fsindex&lt;BR /&gt;/dev/u06/lvol2     70778880 14902680 55439736   21% /fs/fsexports&lt;BR /&gt;/dev/u06/lvol3     34865152 12586280 22104832   36% /fs/fsdata2&lt;BR /&gt;/dev/u06/lvol10    34865152 31460688 3377880   90% /fs/fsdata3&lt;BR /&gt;/dev/u06/lvol11    34865152 29363552 5458632   84% /fs/fsindex2</description>
      <pubDate>Fri, 03 Jan 2003 17:37:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874095#M99205</guid>
      <dc:creator>michael_210</dc:creator>
      <dc:date>2003-01-03T17:37:55Z</dc:date>
    </item>
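    <!-- Mike asks above whether raising shmmax will let the SGA grow past
         ~980 MB. A sketch using kmtune, the same tool that produced his
         parameter listing; the target value of 0x80000000 (2 GB) is
         illustrative, not a recommendation.

    ```shell
    # Query the current shared-memory segment ceiling
    # (mike's listing shows shmmax = 0X70000000).
    kmtune -q shmmax

    # Raise the ceiling; the value here is illustrative.
    kmtune -s shmmax=0x80000000

    # shmmax is a static tunable on HP-UX 11.0: rebuild the kernel,
    # install the new one, and reboot for the change to take effect.
    mk_kernel
    ```
    -->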
    <item>
      <title>Re: high disk   IO  kernel params</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874096#M99206</link>
      <description>I concur with the others that say that there isn't much you can do here from a kernel tuning standpoint.  There is, however, a lot you can do from an Oracle perspective to fix this.  &lt;BR /&gt;Usually, the application dumps all of its data into a single tablespace.  This is easy and convenient for the programmers, but is tough on the machines.  Without redesigning the application, about all a sysadmin can do is stripe the filesystem on which the tablespace resides across several disks.  HP, EMC and others sell systems that do this very nicely.  This can provide a substantial increase in disk speed--especially with sequential reads that can come from cache.  Random read slowness is usually impervious to hardware fixes.  &lt;BR /&gt;The best deal is to redesign the application(s) to use more than one tablespace.  It is far, far better to have several (even dozens) of tablespaces spread across numerous filesystems than a single (or very few) tablespaces on a single (or few) filesystems.  The more filesystems in use, the more the system can access multiple data streams at the same time.  Ultimately, a disk drive is its own bottleneck.  Spread the load as much as possible amongst as many as possible.  &lt;BR /&gt;Lastly, check to see that the application(s) are not doing full table scans, and that they're indexed as fully as possible.  All by itself, using indexes--and otherwise properly designing the Oracle database--can yield a tenfold increase in I/O speed.&lt;BR /&gt;Use iostat to find out which filesystems are getting hammered.  Usually, you'll find that only a few are so busy.  Whichever tablespace, index or other file is so busy is the first one to move to faster and better I/O systems, or be subject to a redesign.  &lt;BR /&gt;This is not a fast/easy fix.  As a rule, Oracle's suggestions for kernel tuning parameters should be followed, and modified only very carefully, and only after exhausting other possibilities.  
&lt;BR /&gt;Good Luck&lt;BR /&gt;Chris</description>
      <pubDate>Fri, 03 Jan 2003 17:58:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874096#M99206</guid>
      <dc:creator>Chris Vail</dc:creator>
      <dc:date>2003-01-03T17:58:43Z</dc:date>
    </item>
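    <!-- A sketch of the measurement Chris suggests above, using the stock
         HP-UX sar and iostat commands; interval and count are arbitrary.

    ```shell
    # Sample disk activity every 5 seconds, 6 samples; sar's %busy column
    # shows which devices are pinned near 100%.
    sar -d 5 6

    # Per-device throughput and seek activity over the same interval,
    # to correlate busy devices with the filesystems on them.
    iostat 5 6
    ```
    -->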
    <item>
      <title>Re: high disk   IO  kernel params</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874097#M99207</link>
      <description>What tool says your disk I/O is 100%?  A lot of vendor tools do not give correct results, especially if the I/O is being done to a file on a RAID device.&lt;BR /&gt;&lt;BR /&gt;How long was the measurement interval where the disk was this &lt;BR /&gt;busy?  &lt;BR /&gt;&lt;BR /&gt;Mott Given</description>
      <pubDate>Fri, 03 Jan 2003 18:13:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874097#M99207</guid>
      <dc:creator>Mott Given</dc:creator>
      <dc:date>2003-01-03T18:13:49Z</dc:date>
    </item>
    <item>
      <title>Re: high disk   IO  kernel params</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874098#M99208</link>
      <description>Mott&lt;BR /&gt;&lt;BR /&gt;I am using Glance.&lt;BR /&gt;&lt;BR /&gt;regards&lt;BR /&gt;mike</description>
      <pubDate>Fri, 03 Jan 2003 18:21:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874098#M99208</guid>
      <dc:creator>michael_210</dc:creator>
      <dc:date>2003-01-03T18:21:22Z</dc:date>
    </item>
    <item>
      <title>Re: high disk   IO  kernel params</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874099#M99209</link>
      <description>Hi Mike:&lt;BR /&gt;&lt;BR /&gt;As already noted, the most performance gains are probably going to be realized with I/O configuration (striping (extent-based or true stripes)); improved SQL queries; and mount options.&lt;BR /&gt;&lt;BR /&gt;For instance, if you are using VxFS filesystems and if you have OnlineJFS, then you can mount the filesystems which contain datafiles and indices with 'delaylog, nodatainlog, convosync=direct, mincache=direct'.  For the archive and redo logs use 'delaylog, nodatainlog'.  This allows the datafile I/O to bypass the Unix buffer cache.&lt;BR /&gt;&lt;BR /&gt;In fact, you might decrease your 'dbc_min_pct' and 'dbc_max_pct' to something like &amp;lt;2-5&amp;gt; and &amp;lt;5-8&amp;gt; respectively.  This will return some of the buffer cache memory.&lt;BR /&gt;&lt;BR /&gt;You asked about 'max_thread_proc' and 'maxswapchunks'.  Neither will improve performance per se.  'max_thread_proc' is a fence for the number of threads a process may have.  If you are running into this with an errno value of EAGAIN, then you might need to increase it.&lt;BR /&gt;&lt;BR /&gt;'maxswapchunks' governs how much swap space can be defined.  If you are unable to define sufficient swap, then you increase 'maxswapchunks'.&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt;&lt;BR /&gt;...JRF...&lt;BR /&gt;</description>
      <pubDate>Fri, 03 Jan 2003 18:48:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874099#M99209</guid>
      <dc:creator>James R. Ferguson</dc:creator>
      <dc:date>2003-01-03T18:48:03Z</dc:date>
    </item>
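    <!-- The mount options JRF describes above, expressed as /etc/fstab
         fragments. The devices and mount points are taken from mike's df
         listing purely for illustration; OnlineJFS is assumed.

    ```shell
    # Datafiles and indices: direct I/O bypasses the Unix buffer cache.
    /dev/u06/lvol1  /fs/fsdata   vxfs  delaylog,nodatainlog,convosync=direct,mincache=direct  0 2
    /dev/u06/lvol4  /fs/fsindex  vxfs  delaylog,nodatainlog,convosync=direct,mincache=direct  0 2
    # Redo and archive logs: keep buffered, sequential writes.
    /dev/u06/lvol9  /fs/fsredos  vxfs  delaylog,nodatainlog  0 2
    /dev/u06/lvol5  /fs/fsarch   vxfs  delaylog,nodatainlog  0 2
    ```
    -->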
    <item>
      <title>Re: high disk   IO  kernel params</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874100#M99210</link>
      <description>Glance does not give accurate info on things like disk utilization if the disks are on a RAID device.&lt;BR /&gt;&lt;BR /&gt;Mott Given</description>
      <pubDate>Fri, 03 Jan 2003 19:10:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874100#M99210</guid>
      <dc:creator>Mott Given</dc:creator>
      <dc:date>2003-01-03T19:10:03Z</dc:date>
    </item>
    <item>
      <title>Re: high disk   IO  kernel params</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874101#M99211</link>
      <description>mike,&lt;BR /&gt;&lt;BR /&gt;give some of these nice people some points; they are trying to help.&lt;BR /&gt;&lt;BR /&gt;I've read the other suggestions, and I'm still thinking patches or disk layout.&lt;BR /&gt;&lt;BR /&gt;Steve</description>
      <pubDate>Sun, 05 Jan 2003 01:39:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874101#M99211</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-01-05T01:39:39Z</dc:date>
    </item>
    <item>
      <title>Re: high disk   IO  kernel params</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874102#M99212</link>
      <description>Sounds like you'll need to work with the DBAs to optimize their SQL lookups and look at how the database is actually laid out on the disks in your array. &lt;BR /&gt;&lt;BR /&gt;When dealing with an active database, the more physical spindles you can spread the I/O across, the better, especially when it comes to I/O-intensive operations like redo and archive logs.&lt;BR /&gt;&lt;BR /&gt;And don't make your SGA too big...if you give too much memory to the database, you could end up causing other problems (excessive swapping, print spooling delays, etc.)</description>
      <pubDate>Mon, 06 Jan 2003 19:54:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/high-disk-io-kernel-params/m-p/2874102#M99212</guid>
      <dc:creator>Brian Watkins</dc:creator>
      <dc:date>2003-01-06T19:54:21Z</dc:date>
    </item>
  </channel>
</rss>

