<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Increasing LVM performance. in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364736#M625772</link>
    <description>HP-UX forum thread: progressively slow I/O on a VxFS filesystem holding 130,000+ files (a JDE OneWorld PrintQueue) on an HP N4000-55 attached to an EMC CLARiiON FC4700; repeated fsadm defragmentation and even recreating the filesystem gave only temporary relief.</description>
    <pubDate>Wed, 25 Aug 2004 09:41:04 GMT</pubDate>
    <dc:creator>Tom Bies</dc:creator>
    <dc:date>2004-08-25T09:41:04Z</dc:date>
    <item>
      <title>Increasing LVM performance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364736#M625772</link>
      <description>Hello,&lt;BR /&gt;&lt;BR /&gt;I have an HP N4000-55 attached to an EMC CLARiiON FC4700. I am experiencing progressively slow I/O performance on a filesystem with 130,000+ files in it.  We have tried defragmenting the filesystem with little success or improvement in performance.  We have also tried removing the filesystem, re-creating it and restoring the data.  This did help I/O performance, but only for a few weeks before we were crawling again.  I ran the following defragment commands several times to see if they made a difference:&lt;BR /&gt;&lt;BR /&gt;fsadm -F vxfs -d -D -e -E /var/opt/oneworld/PrintQueue&lt;BR /&gt;&lt;BR /&gt;fsadm -F vxfs -a 1 -d /var/opt/oneworld/PrintQueue&lt;BR /&gt;&lt;BR /&gt;fsadm -F vxfs -l 2048 -e /var/opt/oneworld/PrintQueue&lt;BR /&gt;&lt;BR /&gt;I did these at least 10-20 times over a 2-hour period, but this didn't seem to do much.  The filesystem in question is configured as PVG-strict/distributed.&lt;BR /&gt;&lt;BR /&gt;Here are some more details.&lt;BR /&gt;&lt;BR /&gt;HP N4000-55&lt;BR /&gt;Processors: 8&lt;BR /&gt;Clock Frequency: 550 MHz&lt;BR /&gt;Kernel Width Support: 64&lt;BR /&gt;Physical Memory: 16398.7 MB&lt;BR /&gt;OS Identification: B.11.11 U&lt;BR /&gt;&lt;BR /&gt;uteudc11[/root]# swapinfo -tam&lt;BR /&gt;             Mb      Mb      Mb   PCT  START/      Mb&lt;BR /&gt;TYPE      AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME&lt;BR /&gt;dev        1024       0    1024    0%       0       -    1  /dev/vg00/lvol2&lt;BR /&gt;dev        1024       0    1024    0%       0       -    1  /dev/vg00/lvol3&lt;BR /&gt;dev        1024       0    1024    0%       0       -    1  /dev/vg00/lvol4&lt;BR /&gt;dev        1024       0    1024    0%       0       -    1  /dev/vg00/lvol5&lt;BR /&gt;dev         944       0     944    0%       0       -    1  /dev/vg00/lvol20&lt;BR /&gt;dev        1024       0    1024    0%       0       -    1  /dev/vg00/lvol21&lt;BR /&gt;dev        1024       0    1024    0%       0       -    1  /dev/vg00/lvol22&lt;BR /&gt;dev        1024       0    1024    0%       0       -    1  /dev/vg00/lvol23&lt;BR /&gt;dev        4096       0    4096    0%       0       -    1  /dev/vg00/lvol24&lt;BR /&gt;dev        4096       0    4096    0%       0       -    1  /dev/vg00/lvol25&lt;BR /&gt;reserve       -    8606   -8606&lt;BR /&gt;memory    12600    3218    9382   26%&lt;BR /&gt;total     28904   11824   17080   41%       -       0    -&lt;BR /&gt;&lt;BR /&gt;Filesystem in question:&lt;BR /&gt;&lt;BR /&gt;uteudc11[/opt/jde/oneworld/app/PrintQueue]# bdf .&lt;BR /&gt;Filesystem          kbytes    used   avail %used Mounted on&lt;BR /&gt;/dev/vgmcjde/lvol4 32768000 23586956 8607247   73% /opt/jde/oneworld/app/PrintQueue&lt;BR /&gt;&lt;BR /&gt;lvdisplay -v of /dev/vgmcjde/lvol4:&lt;BR /&gt;&lt;BR /&gt;--- Logical volumes ---&lt;BR /&gt;LV Name                     /dev/vgmcjde/lvol4&lt;BR /&gt;VG Name                     /dev/vgmcjde&lt;BR /&gt;LV Permission               read/write&lt;BR /&gt;LV Status                   available/syncd&lt;BR /&gt;Mirror copies               0&lt;BR /&gt;Consistency Recovery        MWC&lt;BR /&gt;Schedule                    parallel&lt;BR /&gt;LV Size (Mbytes)            32000&lt;BR /&gt;Current LE                  4000&lt;BR /&gt;Allocated PE                4000&lt;BR /&gt;Stripes                     0&lt;BR /&gt;Stripe Size (Kbytes)        0&lt;BR /&gt;Bad block                   NONE&lt;BR /&gt;Allocation                  PVG-strict/distributed&lt;BR /&gt;IO Timeout (Seconds)        180&lt;BR /&gt;&lt;BR /&gt;   --- Distribution of logical volume ---&lt;BR /&gt;   PV Name            LE on PV  PE on PV&lt;BR /&gt;   /dev/dsk/c12t0d6   1863      1863&lt;BR /&gt;   /dev/dsk/c12t0d7   1867      1867&lt;BR /&gt;   /dev/dsk/c7t1d4    68        68&lt;BR /&gt;   /dev/dsk/c12t1d5   68        68&lt;BR /&gt;   /dev/dsk/c7t1d6    67        67&lt;BR /&gt;   /dev/dsk/c12t1d7   67        67&lt;BR /&gt;&lt;BR /&gt;   --- Logical extents ---&lt;BR /&gt;   LE    PV1                PE1   Status 1&lt;BR /&gt;   00000 /dev/dsk/c12t0d6   02201 current&lt;BR /&gt;   00001 /dev/dsk/c12t0d7   02199 current&lt;BR /&gt;   00002 /dev/dsk/c12t0d6   02202 current&lt;BR /&gt;   00003 /dev/dsk/c12t0d7   02200 current&lt;BR /&gt;   00004 /dev/dsk/c12t0d6   02203 current&lt;BR /&gt;   00005 /dev/dsk/c12t0d7   02201 current&lt;BR /&gt;   00006 /dev/dsk/c12t0d6   02204 current&lt;BR /&gt;   00007 /dev/dsk/c12t0d7   02202 current&lt;BR /&gt;   00008 /dev/dsk/c12t0d6   02205 current&lt;BR /&gt;   00009 /dev/dsk/c12t0d7   02203 current&lt;BR /&gt;   00010 /dev/dsk/c12t0d6   02206 current&lt;BR /&gt;etc, etc...&lt;BR /&gt;&lt;BR /&gt;Volume group info:&lt;BR /&gt;&lt;BR /&gt;--- Volume groups ---&lt;BR /&gt;VG Name                     /dev/vgmcjde&lt;BR /&gt;VG Write Access             read/write&lt;BR /&gt;VG Status                   available, exclusive&lt;BR /&gt;Max LV                      255&lt;BR /&gt;Cur LV                      5&lt;BR /&gt;Open LV                     5&lt;BR /&gt;Max PV                      96&lt;BR /&gt;Cur PV                      6&lt;BR /&gt;Act PV                      6&lt;BR /&gt;Max PE per PV               10240&lt;BR /&gt;VGDA                        12&lt;BR /&gt;PE Size (Mbytes)            8&lt;BR /&gt;Total PE                    11444&lt;BR /&gt;Alloc PE                    8650&lt;BR /&gt;Free PE                     2794&lt;BR /&gt;Total PVG                   1&lt;BR /&gt;Total Spare PVs             0&lt;BR /&gt;Total Spare PVs in use      0&lt;BR /&gt;&lt;BR /&gt;The bloated filesystem:&lt;BR /&gt;&lt;BR /&gt;uteudc11[/opt/jde/oneworld/app/PrintQueue]# ls -l |wc&lt;BR /&gt;136891 1232012 11898612&lt;BR /&gt;&lt;BR /&gt;Any suggestions or ideas would be appreciated.</description>
      <pubDate>Wed, 25 Aug 2004 09:41:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364736#M625772</guid>
      <dc:creator>Tom Bies</dc:creator>
      <dc:date>2004-08-25T09:41:04Z</dc:date>
    </item>
    <item>
      <title>Re: Increasing LVM performance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364737#M625773</link>
      <description>What was listed in the fsadm output? For example:&lt;BR /&gt;&lt;BR /&gt;  Directory Fragmentation Report&lt;BR /&gt;             Dirs        Total      Immed    Immeds   Dirs to   Blocks to&lt;BR /&gt;             Searched    Blocks     Dirs     to Add   Reduce    Reduce&lt;BR /&gt;  total            79      5087        39         0         5        3103&lt;BR /&gt;&lt;BR /&gt;Were there any blocks to reduce?&lt;BR /&gt;&lt;BR /&gt;If yes, then keep on defragging until you are down to (or close to) zero.
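&lt;BR /&gt;&lt;BR /&gt;To re-check the report without defragmenting anything, something like this should do it (the uppercase -D only reports; see the fsadm manpage for your JFS version):&lt;BR /&gt;&lt;BR /&gt;fsadm -F vxfs -D /opt/jde/oneworld/app/PrintQueue&lt;BR /&gt;&lt;BR /&gt;Rgds...Geoff</description>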
      <pubDate>Wed, 25 Aug 2004 09:47:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364737#M625773</guid>
      <dc:creator>Geoff Wild</dc:creator>
      <dc:date>2004-08-25T09:47:24Z</dc:date>
    </item>
    <item>
      <title>Re: Increasing LVM performance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364738#M625774</link>
      <description>130,000 is a lot of files for one directory!  Given that this directory appears to be for a print queue of some form, presumably they're only needed there during printing/spooling.  Can they not be moved to an archive area afterwards, perhaps on a separate filesystem?&lt;BR /&gt;&lt;BR /&gt;find /opt/jde/oneworld/app/PrintQueue -atime +7 -exec mv {} /archivearea \;&lt;BR /&gt;&lt;BR /&gt;This would move files not accessed in the last 7 days to /archivearea.  A cron job to do this would allow you to keep the files (if they are indeed needed), but at least maintain performance on the main filesystem.
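&lt;BR /&gt;&lt;BR /&gt;For example, a root crontab entry along these lines (the 2am schedule is just a suggestion, and adding -type f keeps the directories themselves out of the move):&lt;BR /&gt;&lt;BR /&gt;0 2 * * * find /opt/jde/oneworld/app/PrintQueue -type f -atime +7 -exec mv {} /archivearea \;</description>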
      <pubDate>Wed, 25 Aug 2004 09:47:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364738#M625774</guid>
      <dc:creator>Simon Hargrave</dc:creator>
      <dc:date>2004-08-25T09:47:34Z</dc:date>
    </item>
    <item>
      <title>Re: Increasing LVM performance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364739#M625775</link>
      <description>The number of files in that directory is the problem.&lt;BR /&gt;&lt;BR /&gt;Do you use alternate paths? If so, set them up properly. This won't help much, but do it.&lt;BR /&gt;&lt;BR /&gt;From the information that you have given, it seems that these files are print files. If yes, are they required all the time? If not, move the ones which are not required.&lt;BR /&gt;&lt;BR /&gt;Another thing would be setting up a striped LV. You will have to remove the LV and recreate it striped across the disks in that volume group, something like the sketch below.
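&lt;BR /&gt;&lt;BR /&gt;An untested sketch (-i 6 since your vgdisplay shows 6 PVs; the 64 KB stripe size is only a starting point, and you would back up the data first, then newfs and restore afterwards):&lt;BR /&gt;&lt;BR /&gt;lvremove /dev/vgmcjde/lvol4&lt;BR /&gt;lvcreate -i 6 -I 64 -L 32000 -n lvol4 /dev/vgmcjde&lt;BR /&gt;newfs -F vxfs -o largefiles /dev/vgmcjde/rlvol4&lt;BR /&gt;&lt;BR /&gt;Anil</description>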
      <pubDate>Wed, 25 Aug 2004 10:09:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364739#M625775</guid>
      <dc:creator>RAC_1</dc:creator>
      <dc:date>2004-08-25T10:09:40Z</dc:date>
    </item>
    <item>
      <title>Re: Increasing LVM performance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364740#M625776</link>
      <description>You post an lvdisplay of /dev/vgmcjde, but you seem to be pointing to the problem of your printqueue on /var...&lt;BR /&gt;&lt;BR /&gt;So let's address your printqueue, since you say you removed &amp;amp; replaced it and saw some improvement before.&lt;BR /&gt;/var is a very busy mountpoint...it does stuff and it is writing all the time!  It is a critical filesystem (fill up /var and you stop..).  Remember /var/opt/* has logs writing to it; /var/adm/syslog has more logs writing. Are you running MWA? Then even more in /var/opt/perf/datafiles.  See my point.&lt;BR /&gt;&lt;BR /&gt;If you want to try and improve performance for known 'busy' directories, then set up separate disks and create an lvol just for that mountpoint, on disks that are just for it.  Like:&lt;BR /&gt;/var&lt;BR /&gt;/var/opt/spool&lt;BR /&gt;/var/opt/oneworld&lt;BR /&gt;/var/opt/perf&lt;BR /&gt;...and maybe you can think of others that could get their own disk, so they won't be struggling with each other for I/O attention.
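&lt;BR /&gt;&lt;BR /&gt;For one of them it would go roughly like this (c9t0d0, vgoneworld and the 0x090000 minor number are only placeholders to adjust to your layout, and you would copy the existing data across before mounting over it):&lt;BR /&gt;&lt;BR /&gt;pvcreate /dev/rdsk/c9t0d0&lt;BR /&gt;mkdir /dev/vgoneworld&lt;BR /&gt;mknod /dev/vgoneworld/group c 64 0x090000&lt;BR /&gt;vgcreate /dev/vgoneworld /dev/dsk/c9t0d0&lt;BR /&gt;lvcreate -L 4096 -n lvol1 /dev/vgoneworld&lt;BR /&gt;newfs -F vxfs /dev/vgoneworld/rlvol1&lt;BR /&gt;mount /dev/vgoneworld/lvol1 /var/opt/oneworld&lt;BR /&gt;&lt;BR /&gt;Just a thought, HTH&lt;BR /&gt;Rita&lt;BR /&gt;</description>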
      <pubDate>Wed, 25 Aug 2004 10:11:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364740#M625776</guid>
      <dc:creator>Rita C Workman</dc:creator>
      <dc:date>2004-08-25T10:11:38Z</dc:date>
    </item>
    <item>
      <title>Re: Increasing LVM performance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364741#M625777</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;As mentioned already, 130,000+ files is quite a lot for the same directory.&lt;BR /&gt;Judging by the filesystem name under /var, these could be temporary files.&lt;BR /&gt;&lt;BR /&gt;You give info for:&lt;BR /&gt;/opt/jde/oneworld/app/PrintQueue&lt;BR /&gt;&lt;BR /&gt;and you ran the fsadm commands on:&lt;BR /&gt;/var/opt/oneworld/PrintQueue&lt;BR /&gt;Are they the same fs, or did I miss something?&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Jean-Luc&lt;BR /&gt;</description>
      <pubDate>Wed, 25 Aug 2004 10:22:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364741#M625777</guid>
      <dc:creator>Jean-Luc Oudart</dc:creator>
      <dc:date>2004-08-25T10:22:51Z</dc:date>
    </item>
    <item>
      <title>Re: Increasing LVM performance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364742#M625778</link>
      <description>Also,&lt;BR /&gt;you can check these threads and attached doc :&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=99401" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=99401&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.hpworld.com/pubcontent/enterprise/may00/08sysadx.html" target="_blank"&gt;http://www.hpworld.com/pubcontent/enterprise/may00/08sysadx.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Jean-Luc&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 25 Aug 2004 10:25:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364742#M625778</guid>
      <dc:creator>Jean-Luc Oudart</dc:creator>
      <dc:date>2004-08-25T10:25:04Z</dc:date>
    </item>
    <item>
      <title>Re: Increasing LVM performance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364743#M625779</link>
      <description>I should have said I defragmented the following - /opt/jde/oneworld/app/PrintQueue, NOT /var/opt/oneworld/PrintQueue.  Sorry, I didn't have my coffee in me yet.&lt;BR /&gt;&lt;BR /&gt;This filesystem resides on its own EMC Fibre Channel disks.  We are set up for dual paths.&lt;BR /&gt;&lt;BR /&gt;/dev/vgmcjde/lvol4 32768000 23638232 8559220   73% /opt/jde/oneworld/app/PrintQueue&lt;BR /&gt;&lt;BR /&gt;I will check with the application folks to make sure that moving older files to an alternate filesystem via a cron job would not break the application.  To my understanding, these are print jobs that may (or may not) need to be recalled at any given time.</description>
      <pubDate>Wed, 25 Aug 2004 11:28:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364743#M625779</guid>
      <dc:creator>Tom Bies</dc:creator>
      <dc:date>2004-08-25T11:28:44Z</dc:date>
    </item>
    <item>
      <title>Re: Increasing LVM performance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364744#M625780</link>
      <description>/opt/jde/oneworld/app/PrintQueue is definitely NOT the place for temporary files like print jobs. /opt is the directory structure for an application, but temp files should be configured in /var/opt/jde/oneworld/app/PrintQueue to follow the V.4 layout recommendations. I realize that some apps are not configurable, so you may not have that choice. But in either case, /opt/jde/oneworld/app/PrintQueue should be turned into a mountpoint. Using /opt in this way is killing other applications. This also allows for a much larger space for the PrintQueue without affecting /opt or /var. You can also assign this mountpoint to a specific LUN or set of LUNs.&lt;BR /&gt;&lt;BR /&gt;As for performance, 130,000 files in a single directory is GUARANTEED to cause major performance issues whenever the directory is searched. Now that does NOT mean opening and reading a specific file, which will run at full speed. It means creating a new file or listing files (like ls, especially when using pattern matching such as ls a* to find all files that start with "a"). You will probably see massive numbers in sar -a 5 20 when performance is bad. This is inevitable when asking the system to search the directory structure. The only fix is an application rewrite: either keep a local index in the program of all files by name, so it doesn't kill the system searching for filenames, or eliminate the 130,000 files and use a real database with just a few files.</description>
      <pubDate>Wed, 25 Aug 2004 14:16:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364744#M625780</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2004-08-25T14:16:40Z</dc:date>
    </item>
    <item>
      <title>Re: Increasing LVM performance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364745#M625781</link>
      <description>Tune the inode cache of the filesystem containing those zillion or so little files.&lt;BR /&gt;&lt;BR /&gt;man vxtunefs&lt;BR /&gt;&lt;BR /&gt;If you are using JFS 3.5, there are a couple more tunables relating to very large numbers of inodes (file entries).&lt;BR /&gt;</description>
      <pubDate>Mon, 30 Aug 2004 08:09:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364745#M625781</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2004-08-30T08:09:35Z</dc:date>
    </item>
    <item>
      <title>Re: Increasing LVM performance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364746#M625782</link>
      <description>Here's what's currently set on this filesystem per vxtunefs.&lt;BR /&gt;&lt;BR /&gt;uteudc11[/etc/vx]# vxtunefs /opt/jde/oneworld/app/PrintQueue&lt;BR /&gt;Filesystem i/o parameters for /opt/jde/oneworld/app/PrintQueue&lt;BR /&gt;read_pref_io = 65536&lt;BR /&gt;read_nstream = 1&lt;BR /&gt;read_unit_io = 65536&lt;BR /&gt;write_pref_io = 65536&lt;BR /&gt;write_nstream = 1&lt;BR /&gt;write_unit_io = 65536&lt;BR /&gt;pref_strength = 10&lt;BR /&gt;buf_breakup_size = 131072&lt;BR /&gt;discovered_direct_iosz = 262144&lt;BR /&gt;max_direct_iosz = 1048576&lt;BR /&gt;default_indir_size = 8192&lt;BR /&gt;qio_cache_enable = 0&lt;BR /&gt;max_diskq = 1048576&lt;BR /&gt;initial_extent_size = 8&lt;BR /&gt;max_seqio_extent_size = 2048&lt;BR /&gt;max_buf_data_size = 8192&lt;BR /&gt;</description>
      <pubDate>Mon, 30 Aug 2004 14:27:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364746#M625782</guid>
      <dc:creator>Tom Bies</dc:creator>
      <dc:date>2004-08-30T14:27:07Z</dc:date>
    </item>
    <item>
      <title>Re: Increasing LVM performance.</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364747#M625783</link>
      <description>Tom,&lt;BR /&gt;&lt;BR /&gt;My mistake... not vxtunefs (although there probably are some values in there that you could tune), but look at your kernel parameters relating to VxFS specifically:&lt;BR /&gt;&lt;BR /&gt;vx_fancyra_enable&lt;BR /&gt;vx_ncsize (default is just 1024...!)&lt;BR /&gt;vx_ninode&lt;BR /&gt;vxfs_max_ra_kbytes&lt;BR /&gt;ncsize&lt;BR /&gt;&lt;BR /&gt;Watch your memory ...
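&lt;BR /&gt;&lt;BR /&gt;On 11.11 you can query the current values with kmtune and stage a change with something like the following (8192 is only an example value, and a change needs a kernel rebuild and reboot):&lt;BR /&gt;&lt;BR /&gt;kmtune -q ncsize&lt;BR /&gt;kmtune -q vx_ninode&lt;BR /&gt;kmtune -s ncsize=8192&lt;BR /&gt;&lt;BR /&gt;HTH.&lt;BR /&gt;</description>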
      <pubDate>Mon, 30 Aug 2004 14:45:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/increasing-lvm-performance/m-p/3364747#M625783</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2004-08-30T14:45:05Z</dc:date>
    </item>
  </channel>
</rss>