<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: performance issue in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/2975393#M632097</link>
    <description>fbackup and frecover generally read from and write to tape.&lt;BR /&gt;&lt;BR /&gt;pvmove, or using lvextend -m 1 to mirror and then breaking the mirror, is going to be faster.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
    <pubDate>Fri, 16 May 2003 16:50:08 GMT</pubDate>
    <dc:creator>Steven E. Protter</dc:creator>
    <dc:date>2003-05-16T16:50:08Z</dc:date>
    <item>
      <title>performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/2975392#M632096</link>
      <description>Currently, I'm using bonnie as the I/O driver for EMC storage.  I want to compare the I/O rates between the EMC and the Storage, and also use fbackup and frecover to time the process.  Any suggestions on this topic?&lt;BR /&gt;Plan:&lt;BR /&gt;Create 2 VGs: one with EMC and one with Storage.&lt;BR /&gt;Create 1 VG with both EMC and Storage: use pvmove to move the physical extents, or use fbackup and frecover...</description>
      <pubDate>Fri, 16 May 2003 16:12:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/2975392#M632096</guid>
      <dc:creator>hi_5</dc:creator>
      <dc:date>2003-05-16T16:12:49Z</dc:date>
    </item>
    <item>
      <title>Re: performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/2975393#M632097</link>
      <description>fbackup and frecover generally read from and write to tape.&lt;BR /&gt;&lt;BR /&gt;pvmove, or using lvextend -m 1 to mirror and then breaking the mirror, is going to be faster.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 16 May 2003 16:50:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/2975393#M632097</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2003-05-16T16:50:08Z</dc:date>
    </item>
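The mirror-and-split approach SEP describes can be sketched as below. This is a dry run that only prints the HP-UX LVM command sequence (mirroring requires the MirrorDisk/UX product); the volume and device paths are hypothetical stand-ins, not taken from the thread.

```shell
# Dry-run sketch: copy a logical volume onto EMC disk by adding a mirror
# copy there, then dropping the original copy. All paths are hypothetical.
LV=/dev/vgtest/lvol1        # assumed logical volume to move
EMC_PV=/dev/dsk/c10t0d0     # assumed EMC physical volume, already in the VG
OLD_PV=/dev/dsk/c2t0d0      # assumed original physical volume

run() { echo "+ $*"; }      # dry run: print only; change echo to "$@" to execute

run lvextend -m 1 "$LV" "$EMC_PV"   # add a mirror copy on the EMC disk
run lvreduce -m 0 "$LV" "$OLD_PV"   # drop the original copy, leaving EMC only
```

Timing the `lvextend` step gives a raw array-to-array copy rate without fbackup's per-file overhead.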
    <item>
      <title>Re: performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/2975394#M632098</link>
      <description>If you are looking at disk I/O performance, the tool I've used several times is PerfView with MeasureWare.  This is a licensed product through HP - I'm not entirely sure of the costs.&lt;BR /&gt;&lt;BR /&gt;You can use this tool to create graphs or text readouts of a TON of performance metrics - anywhere from CPU and memory to disk performance.&lt;BR /&gt;&lt;BR /&gt;When looking at disk performance you can drill down to an individual disk to observe its behavior.  I highly recommend this tool.</description>
      <pubDate>Fri, 16 May 2003 16:52:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/2975394#M632098</guid>
      <dc:creator>John Meissner</dc:creator>
      <dc:date>2003-05-16T16:52:48Z</dc:date>
    </item>
    <item>
      <title>Re: performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/2975395#M632099</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Perhaps you should look at the Postmark filesystem benchmark.&lt;BR /&gt;&lt;A href="http://www.netapp.com/tech_library/3022.html" target="_blank"&gt;http://www.netapp.com/tech_library/3022.html&lt;/A&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 16 May 2003 17:02:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/2975395#M632099</guid>
      <dc:creator>Leif Halvarsson_2</dc:creator>
      <dc:date>2003-05-16T17:02:21Z</dc:date>
    </item>
    <item>
      <title>Re: performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/2975396#M632100</link>
      <description>1.  What is "bonnie"?&lt;BR /&gt;&lt;BR /&gt;2.  Why don't you just do a big "cp"?  Why use "fbackup"/"frecover"?  They introduce backup overhead, so your measure isn't just disk I/O, it's program performance.&lt;BR /&gt;&lt;BR /&gt;3.  BTW, you can get BIG performance improvements from fbackup by messing with the "-c configfile" parameter: making big block sizes, reducing the retry count (which won't matter on EMC disk), and other things - so why bother, it's not standard.  We used to have a config file standard, but I have lost it.&lt;BR /&gt;&lt;BR /&gt;Be careful measuring to tape.  Tape is often the limiting factor in a disk-to-tape transfer.</description>
      <pubDate>Fri, 16 May 2003 17:22:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/2975396#M632100</guid>
      <dc:creator>Stuart Abramson_2</dc:creator>
      <dc:date>2003-05-16T17:22:44Z</dc:date>
    </item>
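The "-c configfile" tuning mentioned above looks roughly like this. A hypothetical sample only: the keyword names follow fbackup(1M), but the values are purely illustrative, not the lost standard the post refers to.

```
blocksperrecord 256
records 32
checkpointfreq 256
readerprocesses 6
maxretries 1
retrylimit 5000000
maxvoluses 100
filesperfsm 200
```

Here blocksperrecord (in 1 KB blocks) is the "big block sizes" knob and maxretries the retry count; the file would be passed with fbackup -c.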
    <item>
      <title>Re: performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/2975397#M632101</link>
      <description>Some thoughts:&lt;BR /&gt;&lt;BR /&gt;1&amp;gt; If you are looking for the 'real world' performance from your EMC, through your server(s), to your tape storage, you'll see it by doing things as you suggest.  Performance will be affected by the setup of many things, which you can tweak and retry.&lt;BR /&gt;&lt;BR /&gt;2&amp;gt; The suggestions above are a good start on improving performance, but if you tweak fbackup, say, as noted above, you have to figure out how to implement the same change in your real backup app before it will do you any 'real world' good.&lt;BR /&gt;&lt;BR /&gt;3&amp;gt; If you are looking to test individual throughput for the EMC, or your server, network, or library, you really don't want to test them all together.  If one is a bottleneck, you'll just see the overall speed, not the causative element.&lt;BR /&gt;&lt;BR /&gt;4&amp;gt; To test individual components, you can try things like raw reads and writes, using /dev/null as the destination or /dev/zero as the source.  These are "infinitely fast" (server-speed, anyway) devices, so if you read from the EMC and write to /dev/null, you really get to see what the EMC (and your server) can do.  Likewise, if you read from /dev/zero and write to a tape drive, you can test throughput there, without the I/O and wait times associated with disk seeks and filesystem overhead.  Note that random data won't compress at all, while the infinitely repeating zeros from /dev/zero should compress enormously, so watch for inflated tape numbers.&lt;BR /&gt;&lt;BR /&gt;One other interesting trick, once you have a read or write baseline for a disk device, is to read from and then write to that same device.  The performance you get out of this mix is much closer to "real" performance, since most apps have a R/W mix (not 50/50, generally, but some common pattern).&lt;BR /&gt;&lt;BR /&gt;Regards - let us know what you find out, please.&lt;BR /&gt;&lt;BR /&gt;--bmr</description>
      <pubDate>Sat, 17 May 2003 04:39:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/performance-issue/m-p/2975397#M632101</guid>
      <dc:creator>Brian M Rawlings</dc:creator>
      <dc:date>2003-05-17T04:39:37Z</dc:date>
    </item>
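The one-sided tests in point 4 can be sketched with dd. The paths and sizes here are illustrative - a scratch file stands in for the device under test; on HP-UX you would point dd at raw device files such as /dev/rdsk/... or /dev/rmt/... instead.

```shell
# One-sided throughput tests: make the "other end" of the copy /dev/zero or
# /dev/null, so only the device under test sets the pace.
SIZE_MB=64

# Write test: /dev/zero is an infinitely fast source, so this times the target.
time dd if=/dev/zero of=testfile bs=1048576 count=$SIZE_MB 2>/dev/null

# Read test: /dev/null is an infinitely fast sink, so this times the source.
time dd if=testfile of=/dev/null bs=1048576 2>/dev/null

rm -f testfile
```

Comparing the two timings against a plain file-to-file copy on the same device then shows how much seek and filesystem overhead costs.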
  </channel>
</rss>

