<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Topic: Disk Performance in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048388#M303993</link>
    <description>Forum thread: two HP-UX servers with MSA30 arrays show very different dd write times after a failed disk was replaced and the striped scratch volume was rebuilt. Full post and replies below.</description>
    <pubDate>Thu, 02 Aug 2007 06:18:31 GMT</pubDate>
    <dc:creator>Bob Wallner</dc:creator>
    <dc:date>2007-08-02T06:18:31Z</dc:date>
    <item>
      <title>Disk Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048388#M303993</link>
      <description>I have 2 servers; 03 and 05 are their names.&lt;BR /&gt;&lt;BR /&gt;They both have an MSA30 attached to them with a terabyte of disk space.&lt;BR /&gt;&lt;BR /&gt;05 is a newer, faster machine. Recently, one of the hard drives failed in the array. I replaced the drive in the array and rebuilt the array. We are seeing substantial differences in disk IO now.&lt;BR /&gt;&lt;BR /&gt;If I run this:&lt;BR /&gt;&lt;BR /&gt;time dd if=/dev/zero of=./8gbfile bs=8192k count=204&lt;BR /&gt;&lt;BR /&gt;on 03 I get results back in 2 seconds; on 05 it takes 12 seconds.&lt;BR /&gt;&lt;BR /&gt;Nothing else has changed on the server except the disks.&lt;BR /&gt;&lt;BR /&gt;Any ideas on what I can do to improve performance? Why would 03 be faster than 05?&lt;BR /&gt;&lt;BR /&gt;Here is how I built the new array:&lt;BR /&gt; In SAM:&lt;BR /&gt;&lt;BR /&gt; Create Volume Group - scratchvg&lt;BR /&gt; Selected the disks&lt;BR /&gt; Maximum Physical Extents 17366&lt;BR /&gt; Maximum Logical Volumes 255&lt;BR /&gt; Maximum Physical Volumes 16&lt;BR /&gt; Physical Extent size (Mbytes) 64&lt;BR /&gt;&lt;BR /&gt; lvcreate -i 14 -I 64 -n scratchlv scratchvg&lt;BR /&gt;&lt;BR /&gt; Logical volume "/dev/scratchvg/scratchlv" has been successfully&lt;BR /&gt; created with&lt;BR /&gt; character device "/dev/scratchvg/rscratchlv".&lt;BR /&gt; Volume Group configuration for /dev/scratchvg has been saved in&lt;BR /&gt; /etc/lvmconf/scratchvg.conf&lt;BR /&gt;&lt;BR /&gt; export STRIPE='/dev/dsk/c4t0d0 /dev/dsk/c5t0d0 /dev/dsk/c4t1d0&lt;BR /&gt; /dev/dsk/c5t1d0 /dev/dsk/c4t2d0 /dev/dsk/c5t2d0 /dev/dsk/c4t3d0&lt;BR /&gt; /dev/dsk/c5t3d0 /dev/dsk/c4t4d0 /dev/dsk/c5t4d0 /dev/dsk/c4t5d0&lt;BR /&gt; /dev/dsk/c5t5d0 /dev/dsk/c4t8d0 /dev/dsk/c5t8d0'&lt;BR /&gt; root@msc05-&amp;gt; dev/dsk/c5t4d0 /dev/dsk/c4t5d0 /dev/dsk/c5t5d0&lt;BR /&gt; /dev/dsk/c4t8d0 /dev/dsk/c5t8d0'&lt;BR /&gt;&lt;BR /&gt; echo $STRIPE&lt;BR /&gt; /dev/dsk/c4t0d0 /dev/dsk/c5t0d0 /dev/dsk/c4t1d0 /dev/dsk/c5t1d0&lt;BR /&gt; /dev/dsk/c4t2d0 /dev/dsk/c5t2d0 /dev/dsk/c4t3d0 /dev/dsk/c5t3d0&lt;BR /&gt; /dev/dsk/c4t4d0 /dev/dsk/c5t4d0 /dev/dsk/c4t5d0 /dev/dsk/c5t5d0&lt;BR /&gt; /dev/dsk/c4t8d0 /dev/dsk/c5t8d0&lt;BR /&gt;&lt;BR /&gt; export STRIPE='/dev/dsk/c1t0d0&lt;BR /&gt; /dev/dsk/c2t0d0 /dev/dsk/c1t1d0 /dev/dsk/c2t1d0 /dev/dsk/c1t2d0&lt;BR /&gt; /dev/dsk/c2t2d0 dev/dsk/c1t3d0 /dev/dsk/c2t3d0 /dev/dsk/c1t4d0&lt;BR /&gt; /dev/dsk/c2t4d0 /dev/dsk/c1t5d0 /dev/dsk/c2t5d0 /dev/dsk/c1t8d0&lt;BR /&gt; /dev/dsk/c2t8d0'&lt;BR /&gt;&lt;BR /&gt; lvextend -L 485632 /dev/scratchvg/scratchlv $STRIPE&lt;BR /&gt;&lt;BR /&gt; Logical volume "/dev/scratchvg/scratchlv" has been successfully&lt;BR /&gt; extended.&lt;BR /&gt; Volume Group configuration for /dev/scratchvg has been saved in&lt;BR /&gt; /etc/lvmconf/scratchvg.conf&lt;BR /&gt;&lt;BR /&gt; Mounted /scratch in SAM:&lt;BR /&gt; /dev/scratchvg/scratchlv /scratch vxfs&lt;BR /&gt; rw,suid,largefiles,tmplog,mincache=tmpcache,nodatainlog 0 2&lt;BR /&gt;&lt;BR /&gt; /dev/scratchvg/scratchlv /scratch vxfs&lt;BR /&gt; rw,suid,largefiles,delaylog,datainlog 0 2&lt;BR /&gt;&lt;BR /&gt;I also tried changing fs_async from 0 to 1 in the kernel.&lt;BR /&gt;&lt;BR /&gt;The system uses these drives strictly as scratch space.</description>
      <pubDate>Thu, 02 Aug 2007 06:18:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048388#M303993</guid>
      <dc:creator>Bob Wallner</dc:creator>
      <dc:date>2007-08-02T06:18:31Z</dc:date>
    </item>
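    <!-- Editor's note: a minimal sketch of the write test above, hedged. Note
         that bs=8192k count=204 writes about 1.6 GB (204 x 8 MB), not 8 GB as
         the filename suggests, and that 2 seconds for 1.6 GB (~800 MB/s)
         points at the buffer cache rather than the disks. Including a sync in
         the timed command gets closer to real disk throughput:

           # time the write plus the flush to disk, on the poster's /scratch
           time sh -c 'dd if=/dev/zero of=/scratch/8gbfile bs=8192k count=204 && sync'
    -->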
    <item>
      <title>Re: Disk Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048389#M303994</link>
      <description>Hello Bob,&lt;BR /&gt;&lt;BR /&gt;You don't mention anything about the hardware. Are the old and the new disks identical?&lt;BR /&gt;What about RPM, cache, ...? Are there differences?&lt;BR /&gt;&lt;BR /&gt;What about the time it takes reading from the disks? Does it differ, too?&lt;BR /&gt;&lt;BR /&gt;Your configuration seems OK to me.&lt;BR /&gt;&lt;BR /&gt;Can you write and read, several times, small files that you know do not use any space on the exchanged disk?&lt;BR /&gt;&lt;BR /&gt;Bye&lt;BR /&gt;Ralf&lt;BR /&gt;</description>
      <pubDate>Thu, 02 Aug 2007 07:03:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048389#M303994</guid>
      <dc:creator>Ralf Seefeldt</dc:creator>
      <dc:date>2007-08-02T07:03:33Z</dc:date>
    </item>
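    <!-- Editor's note: a hedged sketch of the read timing Ralf asks about,
         using the character device that lvcreate reported in the original
         post; reading the raw LV avoids the buffer cache, while re-reading a
         cooked file mostly measures RAM:

           # sequential raw read of the first ~1.6 GB of the striped LV
           time dd if=/dev/scratchvg/rscratchlv of=/dev/null bs=8192k count=204
    -->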
    <item>
      <title>Re: Disk Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048390#M303995</link>
      <description>The disk that was replaced was an identical disk.&lt;BR /&gt;&lt;BR /&gt;"Can you write and read, several times, small files that you know do not use any space on the exchanged disk?"&lt;BR /&gt;&lt;BR /&gt;As the volume is striped across all drives, I can't write to just one drive, that I know of.</description>
      <pubDate>Thu, 02 Aug 2007 07:30:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048390#M303995</guid>
      <dc:creator>Bob Wallner</dc:creator>
      <dc:date>2007-08-02T07:30:59Z</dc:date>
    </item>
    <item>
      <title>Re: Disk Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048391#M303996</link>
      <description>What I really don't understand is this from your initial post:&lt;BR /&gt;&lt;BR /&gt;echo $STRIPE&lt;BR /&gt;/dev/dsk/c4t0d0 /dev/dsk/c5t0d0 /dev/dsk/c4t1d0 /dev/dsk/c5t1d0&lt;BR /&gt;/dev/dsk/c4t2d0 /dev/dsk/c5t2d0 /dev/dsk/c4t3d0 /dev/dsk/c5t3d0&lt;BR /&gt;/dev/dsk/c4t4d0 /dev/dsk/c5t4d0 /dev/dsk/c4t5d0 /dev/dsk/c5t5d0&lt;BR /&gt;/dev/dsk/c4t8d0 /dev/dsk/c5t8d0&lt;BR /&gt;&lt;BR /&gt;export STRIPE='/dev/dsk/c1t0d0&lt;BR /&gt;/dev/dsk/c2t0d0 /dev/dsk/c1t1d0 /dev/dsk/c2t1d0 /dev/dsk/c1t2d0&lt;BR /&gt;/dev/dsk/c2t2d0 dev/dsk/c1t3d0 /dev/dsk/c2t3d0 /dev/dsk/c1t4d0&lt;BR /&gt;/dev/dsk/c2t4d0 /dev/dsk/c1t5d0 /dev/dsk/c2t5d0 /dev/dsk/c1t8d0&lt;BR /&gt;/dev/dsk/c2t8d0'&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;So many different device files?&lt;BR /&gt;On which devices did you create the VG?&lt;BR /&gt;(see vgdisplay -v)&lt;BR /&gt;What type of IO module is inside the MSA30?&lt;BR /&gt;(DB/MI)</description>
      <pubDate>Thu, 02 Aug 2007 07:37:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048391#M303996</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2007-08-02T07:37:33Z</dc:date>
    </item>
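    <!-- Editor's note: for reference, the checks Torsten is asking for could
         look like this on HP-UX (paths are the poster's; ioscan usage is an
         editor suggestion, not from the thread):

           vgdisplay -v /dev/scratchvg   # lists the LVs and the PVs actually in the VG
           ioscan -fnC disk              # maps disk device files to hardware paths
    -->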
    <item>
      <title>Re: Disk Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048392#M303997</link>
      <description>So many different device files? - 14 disks in the array, 14 device files.&lt;BR /&gt;&lt;BR /&gt;On which devices did you create the VG? - It is striped across all 14.&lt;BR /&gt;&lt;BR /&gt;What type of IO module is inside the MSA30? - Not sure. They are the same on both servers, and this hasn't changed since the failed hard drive.</description>
      <pubDate>Thu, 02 Aug 2007 07:48:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048392#M303997</guid>
      <dc:creator>Bob Wallner</dc:creator>
      <dc:date>2007-08-02T07:48:30Z</dc:date>
    </item>
    <item>
      <title>Re: Disk Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048393#M303998</link>
      <description>Why are there 2x 14 device files?&lt;BR /&gt;(c1/c2 and c4/c5)&lt;BR /&gt;&lt;BR /&gt;Have a look - the MI module has 4 connectors, the DB has only 2.</description>
      <pubDate>Thu, 02 Aug 2007 07:52:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048393#M303998</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2007-08-02T07:52:58Z</dc:date>
    </item>
    <item>
      <title>Re: Disk Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048394#M303999</link>
      <description>"Why are there 2x 14 device files?&lt;BR /&gt;(c1/c2 and c4/c5)"&lt;BR /&gt;&lt;BR /&gt;I must have copied and pasted this wrong.&lt;BR /&gt;&lt;BR /&gt;These are not in the system:&lt;BR /&gt;STRIPE='/dev/dsk/c1t0d0&lt;BR /&gt;/dev/dsk/c2t0d0 /dev/dsk/c1t1d0 /dev/dsk/c2t1d0 /dev/dsk/c1t2d0&lt;BR /&gt;/dev/dsk/c2t2d0 dev/dsk/c1t3d0 /dev/dsk/c2t3d0 /dev/dsk/c1t4d0&lt;BR /&gt;/dev/dsk/c2t4d0 /dev/dsk/c1t5d0 /dev/dsk/c2t5d0 /dev/dsk/c1t8d0&lt;BR /&gt;/dev/dsk/c2t8d0'&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The connector module is a DB, as it has only 2.</description>
      <pubDate>Thu, 02 Aug 2007 08:41:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048394#M303999</guid>
      <dc:creator>Bob Wallner</dc:creator>
      <dc:date>2007-08-02T08:41:49Z</dc:date>
    </item>
    <item>
      <title>Re: Disk Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048395#M304000</link>
      <description>--- Volume groups ---&lt;BR /&gt;VG Name                     /dev/vg00&lt;BR /&gt;VG Write Access             read/write     &lt;BR /&gt;VG Status                   available                 &lt;BR /&gt;Max LV                      255    &lt;BR /&gt;Cur LV                      11     &lt;BR /&gt;Open LV                     11     &lt;BR /&gt;Max PV                      16     &lt;BR /&gt;Cur PV                      2      &lt;BR /&gt;Act PV                      2      &lt;BR /&gt;Max PE per PV               4328         &lt;BR /&gt;VGDA                        4   &lt;BR /&gt;PE Size (Mbytes)            16              &lt;BR /&gt;Total PE                    8636    &lt;BR /&gt;Alloc PE                    5290    &lt;BR /&gt;Free PE                     3346    &lt;BR /&gt;Total PVG                   0        &lt;BR /&gt;Total Spare PVs             0              &lt;BR /&gt;Total Spare PVs in use      0                     &lt;BR /&gt;&lt;BR /&gt;   --- Logical volumes ---&lt;BR /&gt;   LV Name                     /dev/vg00/lvol1&lt;BR /&gt;   LV Status                   available/syncd           &lt;BR /&gt;   LV Size (Mbytes)            304             &lt;BR /&gt;   Current LE                  19        &lt;BR /&gt;   Allocated PE                38          &lt;BR /&gt;   Used PV                     2       &lt;BR /&gt;&lt;BR /&gt;   LV Name                     /dev/vg00/lvol2&lt;BR /&gt;   LV Status                   available/syncd           &lt;BR /&gt;   LV Size (Mbytes)            4096            &lt;BR /&gt;   Current LE                  256       &lt;BR /&gt;   Allocated PE                512         &lt;BR /&gt;   Used PV                     2       &lt;BR /&gt;&lt;BR /&gt;   LV Name                     /dev/vg00/lvol3&lt;BR /&gt;   LV Status                   available/syncd           &lt;BR /&gt;   LV Size (Mbytes)            528             &lt;BR /&gt;   Current LE                  33        &lt;BR /&gt;   Allocated PE                66          &lt;BR /&gt;   Used PV                     2       &lt;BR /&gt;&lt;BR /&gt;   LV Name                     /dev/vg00/lvol4&lt;BR /&gt;   LV Status                   available/syncd           &lt;BR /&gt;   LV Size (Mbytes)            208             &lt;BR /&gt;   Current LE                  13        &lt;BR /&gt;   Allocated PE                26          &lt;BR /&gt;   Used PV                     2       &lt;BR /&gt;&lt;BR /&gt;   LV Name                     /dev/vg00/lvol5&lt;BR /&gt;   LV Status                   available/syncd           &lt;BR /&gt;   LV Size (Mbytes)            32              &lt;BR /&gt;   Current LE                  2         &lt;BR /&gt;   Allocated PE                4           &lt;BR /&gt;   Used PV                     2       &lt;BR /&gt;&lt;BR /&gt;   LV Name                     /dev/vg00/lvol6&lt;BR /&gt;   LV Status                   available/syncd           &lt;BR /&gt;   LV Size (Mbytes)            8000            &lt;BR /&gt;   Current LE                  500       &lt;BR /&gt;   Allocated PE                1000        &lt;BR /&gt;   Used PV                     2       &lt;BR /&gt;&lt;BR /&gt;   LV Name                     /dev/vg00/lvol7&lt;BR /&gt;   LV Status                   available/syncd           &lt;BR /&gt;   LV Size (Mbytes)            6336            &lt;BR /&gt;   Current LE                  396       &lt;BR /&gt;   Allocated PE                792         &lt;BR /&gt;   Used PV                     2       &lt;BR /&gt;&lt;BR /&gt;   LV Name                     
/dev/vg00/lvol8&lt;BR /&gt;   LV Status                   available/syncd           &lt;BR /&gt;   LV Size (Mbytes)            7808            &lt;BR /&gt;   Current LE                  488       &lt;BR /&gt;   Allocated PE                976         &lt;BR /&gt;   Used PV                     2       &lt;BR /&gt;&lt;BR /&gt;   LV Name                     /dev/vg00/apps&lt;BR /&gt;   LV Status                   available/syncd           &lt;BR /&gt;   LV Size (Mbytes)            9008            &lt;BR /&gt;   Current LE                  563       &lt;BR /&gt;   Allocated PE                563         &lt;BR /&gt;   Used PV                     1       &lt;BR /&gt;&lt;BR /&gt;   LV Name                     /dev/vg00/sysdata&lt;BR /&gt;   LV Status                   available/syncd           &lt;BR /&gt;   LV Size (Mbytes)            5008            &lt;BR /&gt;   Current LE                  313       &lt;BR /&gt;   Allocated PE                313         &lt;BR /&gt;   Used PV                     1       &lt;BR /&gt;&lt;BR /&gt;   LV Name                     /dev/vg00/dev_swap&lt;BR /&gt;   LV Status                   available/syncd           &lt;BR /&gt;   LV Size (Mbytes)            16000           &lt;BR /&gt;   Current LE                  1000      &lt;BR /&gt;   Allocated PE                1000        &lt;BR /&gt;   Used PV                     1       &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;   --- Physical volumes ---&lt;BR /&gt;   PV Name                     /dev/dsk/c2t1d0s2&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    4318    &lt;BR /&gt;   Free PE                     735     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c3t0d0s2&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    4318    &lt;BR /&gt;   Free PE                     2611    &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;VG Name                     /dev/scratchvg&lt;BR /&gt;VG Write Access             read/write     &lt;BR /&gt;VG Status                   available                 &lt;BR /&gt;Max LV                      255    &lt;BR /&gt;Cur LV                      1      &lt;BR /&gt;Open LV                     1      &lt;BR /&gt;Max PV                      16     &lt;BR /&gt;Cur PV                      14     &lt;BR /&gt;Act PV                      14     &lt;BR /&gt;Max PE per PV               17366        &lt;BR /&gt;VGDA                        28  &lt;BR /&gt;PE Size (Mbytes)            64              &lt;BR /&gt;Total PE                    15190   &lt;BR /&gt;Alloc PE                    7588    &lt;BR /&gt;Free PE                     7602    &lt;BR /&gt;Total PVG                   0        &lt;BR /&gt;Total Spare PVs             0              &lt;BR /&gt;Total Spare PVs in use      0                     &lt;BR /&gt;&lt;BR /&gt;   --- Logical volumes ---&lt;BR /&gt;   LV Name                     /dev/scratchvg/scratchlv&lt;BR /&gt;   LV Status                   available/syncd           &lt;BR /&gt;   LV Size (Mbytes)            485632          &lt;BR /&gt;   Current LE                  7588      &lt;BR /&gt;   Allocated PE                7588        &lt;BR /&gt;   Used PV                     14      &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;   --- Physical volumes ---&lt;BR /&gt;   PV Name                     /dev/dsk/c4t0d0&lt;BR /&gt;   PV Status                   available    
            &lt;BR /&gt;   Total PE                    1085    &lt;BR /&gt;   Free PE                     543     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t1d0&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    1085    &lt;BR /&gt;   Free PE                     543     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t2d0&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    1085    &lt;BR /&gt;   Free PE                     543     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t3d0&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    1085    &lt;BR /&gt;   Free PE                     543     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t4d0&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    1085    &lt;BR /&gt;   Free PE                     543     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t5d0&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    1085    &lt;BR /&gt;   Free PE                     543     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c4t8d0&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    1085    &lt;BR /&gt;   Free PE                     543     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t0d0&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    1085    &lt;BR /&gt;   Free PE                     543     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t1d0&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    1085    &lt;BR /&gt;   Free PE                     543     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t2d0&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    1085    &lt;BR /&gt;   Free PE                     543     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t3d0&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    1085    &lt;BR /&gt;   Free PE                     543     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t4d0&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    1085    &lt;BR /&gt;   Free PE                     543     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t5d0&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE    
                1085    &lt;BR /&gt;   Free PE                     543     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/dsk/c5t8d0&lt;BR /&gt;   PV Status                   available                &lt;BR /&gt;   Total PE                    1085    &lt;BR /&gt;   Free PE                     543     &lt;BR /&gt;   Autoswitch                  On        &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 02 Aug 2007 08:44:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048395#M304000</guid>
      <dc:creator>Bob Wallner</dc:creator>
      <dc:date>2007-08-02T08:44:28Z</dc:date>
    </item>
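    <!-- Editor's note: a quick consistency check on the vgdisplay output
         above: 14 PVs x 1085 PE each = 15190 Total PE, matching the VG; the
         LV's 7588 allocated PE x 64 MB/PE = 485,632 MB, matching both the LV
         Size shown and the lvextend -L 485632 in the original post. Each of
         the 14 disks carries the same load (543 Free PE apiece), so the LVM
         stripe layout itself looks sound. -->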
    <item>
      <title>Re: Disk Performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048396#M304001</link>
      <description>&amp;gt;&amp;gt; time(x) dd if=/dev/zero of=./8gbfile bs=8192k count=204&lt;BR /&gt;&lt;BR /&gt;The problem with measuring i/o like this and calling it disk i/o is that disk i/o is but one component. Trying to do this with cooked files may tell you more about buffer cache and/or mount options than anything else. The first thing that I would do is replace your cooked output file with a raw (character) device. An LVM raw device will be close enough for our purposes in that the LVM abstraction layer is all but zero. You should replace your 8gbfile with something like /dev/vg20/rlvol1. If you then see significant i/o performance differences between the two arrays (and make several runs on each to get a meaningful mean value), you now have far greater confidence that the differences reside in the array, although you do need to make sure that the SCSI queue_depth is set the same for your disk devices (e.g. scsictl -a /dev/rdsk/c1t6d0). Your test is using sequential i/o, so the queue_depth could have profound effects if different. See man scsictl for details.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 02 Aug 2007 10:35:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/disk-performance/m-p/4048396#M304001</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2007-08-02T10:35:13Z</dc:date>
    </item>
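    <!-- Editor's note: a minimal sketch of the raw-device comparison Clay
         describes, hedged. WARNING: writing to the raw LV destroys the
         filesystem on it; only do this on scratch space that can be remade.

           # sequential raw write, bypassing the buffer cache and VxFS
           time dd if=/dev/zero of=/dev/scratchvg/rscratchlv bs=8192k count=204

           # compare queue_depth across the array disks on both servers
           for d in /dev/rdsk/c4t*d0 /dev/rdsk/c5t*d0
           do
               scsictl -a $d
           done
    -->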
  </channel>
</rss>

