<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: migration of data from non-striped LVs to striped LVs in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/migration-of-data-from-non-stripped-lvs-to-stripped-lvs/m-p/5205631#M464121</link>
    <description>Hi&lt;BR /&gt;&lt;BR /&gt;I don't like striping myself.  It's a pain when you run out of space and have to extend the VG, only to find out you can't because the VG is striped.&lt;BR /&gt;&lt;BR /&gt;I personally would spread the file system across more PVs in order to get more spindles involved.&lt;BR /&gt;&lt;BR /&gt;But before you do: %wio is not the definitive metric for identifying a disk bottleneck.  Let's see what avwait and avserv are in sar -d.&lt;BR /&gt;&lt;BR /&gt;%wio is more of a metric for measuring structured and unstructured data.  A flat file is unstructured.  A database is structured.  It's more appropriate to rehash a database or defragment a file system to get a lower %wio.&lt;BR /&gt;&lt;BR /&gt;Read the man page on sar for its definition of avwait and how a disk bottleneck appears when avwait is higher than avserv.  Note: rarely do the big disk arrays like the EMC DMX ever exhibit a disk bottleneck of any kind.  In fact, I haven't seen a disk bottleneck in the last three years, and that was on an EMC Symmetrix.&lt;BR /&gt;&lt;BR /&gt;From sar -d, isolate the PV.  Then use pvdisplay to isolate the file system.  Run fuser on the file system and count the processes.</description>
    <pubDate>Fri, 23 Oct 2009 06:15:07 GMT</pubDate>
    <dc:creator>Michael Steele_2</dc:creator>
    <dc:date>2009-10-23T06:15:07Z</dc:date>
    <item>
      <title>migration of data from non-striped LVs to striped LVs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/migration-of-data-from-non-stripped-lvs-to-stripped-lvs/m-p/5205630#M464120</link>
      <description>We are facing high %wio on one of our production servers.&lt;BR /&gt;The main reason for this is that none of the LVs are striped.&lt;BR /&gt;I want to convert all of the LVs to striped ones with as little downtime as possible; please suggest an approach.&lt;BR /&gt;&lt;BR /&gt;# sar 2 2&lt;BR /&gt;HP-UX   B.11.11 U 9000/800    10/23/09&lt;BR /&gt;&lt;BR /&gt;12:25:44    %usr    %sys    %wio   %idle&lt;BR /&gt;12:25:46      15       6      65      14&lt;BR /&gt;12:25:48       8       4      69      18&lt;BR /&gt;&lt;BR /&gt;Average       12       5      67      16&lt;BR /&gt;# vmstat 2 2&lt;BR /&gt;         procs           memory                   page                              faults       cpu&lt;BR /&gt;    r     b     w      avm    free   re   at    pi   po    fr   de    sr     in     sy    cs  us sy id&lt;BR /&gt;    2    10     0   880331  3562734 1904  195     1    0     0    0     4   9014  57392  6096  12  7 81&lt;BR /&gt;    2    10     0   880331  3564218 1505  112     0    0     0    0     0   5174  27238  2488  12  4 84&lt;BR /&gt;#&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Prasanth V Aravindakshan</description>
      <pubDate>Fri, 23 Oct 2009 06:03:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/migration-of-data-from-non-stripped-lvs-to-stripped-lvs/m-p/5205630#M464120</guid>
      <dc:creator>Prasanth V Aravind</dc:creator>
      <dc:date>2009-10-23T06:03:07Z</dc:date>
    </item>
    <item>
      <title>Re: migration of data from non-striped LVs to striped LVs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/migration-of-data-from-non-stripped-lvs-to-stripped-lvs/m-p/5205631#M464121</link>
      <description>Hi&lt;BR /&gt;&lt;BR /&gt;I don't like striping myself.  It's a pain when you run out of space and have to extend the VG, only to find out you can't because the VG is striped.&lt;BR /&gt;&lt;BR /&gt;I personally would spread the file system across more PVs in order to get more spindles involved.&lt;BR /&gt;&lt;BR /&gt;But before you do: %wio is not the definitive metric for identifying a disk bottleneck.  Let's see what avwait and avserv are in sar -d.&lt;BR /&gt;&lt;BR /&gt;%wio is more of a metric for measuring structured and unstructured data.  A flat file is unstructured.  A database is structured.  It's more appropriate to rehash a database or defragment a file system to get a lower %wio.&lt;BR /&gt;&lt;BR /&gt;Read the man page on sar for its definition of avwait and how a disk bottleneck appears when avwait is higher than avserv.  Note: rarely do the big disk arrays like the EMC DMX ever exhibit a disk bottleneck of any kind.  In fact, I haven't seen a disk bottleneck in the last three years, and that was on an EMC Symmetrix.&lt;BR /&gt;&lt;BR /&gt;From sar -d, isolate the PV.  Then use pvdisplay to isolate the file system.  Run fuser on the file system and count the processes.
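&lt;BR /&gt;&lt;BR /&gt;For illustration, a minimal sketch of that workflow (the device file and mount point below are hypothetical):&lt;BR /&gt;&lt;BR /&gt;# sar -d 5 12                           (per-device stats; queuing shows as avwait higher than avserv)&lt;BR /&gt;# pvdisplay -v /dev/dsk/c4t0d0 | more   (map the busy PV to its logical volumes)&lt;BR /&gt;# bdf                                   (match the LV to its mounted file system)&lt;BR /&gt;# fuser -cu /data01                     (list the processes, and their owners, using that file system)</description>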
      <pubDate>Fri, 23 Oct 2009 06:15:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/migration-of-data-from-non-stripped-lvs-to-stripped-lvs/m-p/5205631#M464121</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2009-10-23T06:15:07Z</dc:date>
    </item>
    <item>
      <title>Re: migration of data from non-striped LVs to striped LVs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/migration-of-data-from-non-stripped-lvs-to-stripped-lvs/m-p/5205632#M464122</link>
      <description>The server has 30 data filesystems &amp;amp; the application is not accessing the data on all of them simultaneously.&lt;BR /&gt;&lt;BR /&gt;Maybe today the application is processing files on /data01 &amp;amp; after 3 days it will process files on /data05.&lt;BR /&gt;&lt;BR /&gt;So on the first day I see the bottleneck on the /data01 PVs &amp;amp; after that the wait will move to another PV.&lt;BR /&gt;&lt;BR /&gt;Moreover, my real pain is the /work filesystem, which has a very high number of very small files. Doing ls -l in its directories takes 5 to 10 seconds to produce output.
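&lt;BR /&gt;&lt;BR /&gt;One way to check whether directory fragmentation explains the slow ls -l, assuming /work is VxFS (the reorganize step also needs an OnlineJFS license):&lt;BR /&gt;&lt;BR /&gt;# fsadm -F vxfs -D /work      (report directory fragmentation)&lt;BR /&gt;# fsadm -F vxfs -E /work      (report extent fragmentation)&lt;BR /&gt;# fsadm -F vxfs -d -e /work   (reorganize directories and extents online)</description>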
      <pubDate>Fri, 23 Oct 2009 06:30:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/migration-of-data-from-non-stripped-lvs-to-stripped-lvs/m-p/5205632#M464122</guid>
      <dc:creator>Prasanth V Aravind</dc:creator>
      <dc:date>2009-10-23T06:30:05Z</dc:date>
    </item>
    <item>
      <title>Re: migration of data from non-striped LVs to striped LVs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/migration-of-data-from-non-stripped-lvs-to-stripped-lvs/m-p/5205633#M464123</link>
      <description>Oh, you insist you have a disk bottleneck.  OK.</description>
      <pubDate>Fri, 23 Oct 2009 08:35:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/migration-of-data-from-non-stripped-lvs-to-stripped-lvs/m-p/5205633#M464123</guid>
      <dc:creator>Michael Steele_2</dc:creator>
      <dc:date>2009-10-23T08:35:42Z</dc:date>
    </item>
    <item>
      <title>Re: migration of data from non-striped LVs to striped LVs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/migration-of-data-from-non-stripped-lvs-to-stripped-lvs/m-p/5205634#M464124</link>
      <description>Are your logical volumes mirrored? You can't have mirroring AND striping in 11.11! You need to upgrade to 11.31 to be able to do this!&lt;BR /&gt;For the /work directory it may be helpful to increase the buffer cache. Check the kernel parameters dbc_max_pct and dbc_min_pct. Check with glance (get the trial if necessary) for busy disks and decide further after that!&lt;BR /&gt;Striping is not a cure for every disk problem! Most of the time it just adds complexity for next to no effect. Any RAID configuration is done on the storage array these days.
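&lt;BR /&gt;&lt;BR /&gt;A minimal sketch of checking those parameters on 11.11 (the value below is only an example, not a recommendation):&lt;BR /&gt;&lt;BR /&gt;# kmtune -q dbc_max_pct&lt;BR /&gt;# kmtune -q dbc_min_pct&lt;BR /&gt;# kmtune -s dbc_max_pct=10   (stages the value; it takes effect after a kernel rebuild and reboot)&lt;BR /&gt;&lt;BR /&gt;My 2 cents,&lt;BR /&gt;Armin&lt;BR /&gt;</description>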
      <pubDate>Mon, 26 Oct 2009 11:37:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/migration-of-data-from-non-stripped-lvs-to-stripped-lvs/m-p/5205634#M464124</guid>
      <dc:creator>Armin Kunaschik</dc:creator>
      <dc:date>2009-10-26T11:37:30Z</dc:date>
    </item>
    <item>
      <title>Re: migration of data from non-striped LVs to striped LVs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/migration-of-data-from-non-stripped-lvs-to-stripped-lvs/m-p/5205635#M464125</link>
      <description>You have not provided any storage (PV) information on this. You must look into how the PVs are set up: how many spindles, the RAID type, the logical disks (LVM PVs), I/O sharing, etc. What kind of LVM striping are you looking for? LVM extent-based striping does not help; you can search this forum for it. And if the PVs come from the same RAID group, there isn't much you can do.
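&lt;BR /&gt;&lt;BR /&gt;For reference, a minimal sketch of inspecting the current layout and of creating a truly striped LV (the VG name, device file, and sizes are hypothetical):&lt;BR /&gt;&lt;BR /&gt;# vgdisplay -v /dev/vgdata | more                         (LVs and PVs in the volume group)&lt;BR /&gt;# pvdisplay -v /dev/dsk/c4t0d0 | more                     (extent layout on one PV)&lt;BR /&gt;# lvcreate -i 3 -I 64 -L 4096 -n lvdata_str /dev/vgdata   (3-way stripe, 64 KB stripe size, 4 GB)&lt;BR /&gt;&lt;BR /&gt;The data still has to be copied from the old LV into a file system on the new one, so some downtime is unavoidable.</description>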
      <pubDate>Mon, 26 Oct 2009 12:02:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/migration-of-data-from-non-stripped-lvs-to-stripped-lvs/m-p/5205635#M464125</guid>
      <dc:creator>TTr</dc:creator>
      <dc:date>2009-10-26T12:02:25Z</dc:date>
    </item>
    <item>
      <title>Re: migration of data from non-striped LVs to striped LVs</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/migration-of-data-from-non-stripped-lvs-to-stripped-lvs/m-p/5205636#M464126</link>
      <description>Thanks for the support.</description>
      <pubDate>Mon, 17 May 2010 07:23:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/migration-of-data-from-non-stripped-lvs-to-stripped-lvs/m-p/5205636#M464126</guid>
      <dc:creator>Prasanth V Aravind</dc:creator>
      <dc:date>2010-05-17T07:23:41Z</dc:date>
    </item>
  </channel>
</rss>

