<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: BACKUP/IO_LOAD - What Value in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184248#M95373</link>
    <description>Well, I tried all sorts of combinations. I changed IO_LOAD to 12, then 20, but without any difference.&lt;BR /&gt;&lt;BR /&gt;I then tried increasing various UAF parameters, but again, it had no effect.&lt;BR /&gt;&lt;BR /&gt;Think I'll just live with what I always knew - VMS is very poor at feeding fast tape drives!&lt;BR /&gt;&lt;BR /&gt;Thanks for your responses. Maybe one day BACKUP will be less of a black art.&lt;BR /&gt;&lt;BR /&gt;Rob.</description>
    <pubDate>Tue, 30 Jun 2009 13:13:36 GMT</pubDate>
    <dc:creator>Robert Atkinson</dc:creator>
    <dc:date>2009-06-30T13:13:36Z</dc:date>
    <item>
      <title>BACKUP/IO_LOAD - What Value</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184242#M95367</link>
      <description>I'm contemplating giving this new(ish) qualifier a go, but I'm not sure what value to start at (default is 8).&lt;BR /&gt;&lt;BR /&gt;I'm running a fast EVA8000 on a quiet ES45. Any recommendations?&lt;BR /&gt;&lt;BR /&gt;Rob.&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Jun 2009 13:06:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184242#M95367</guid>
      <dc:creator>Robert Atkinson</dc:creator>
      <dc:date>2009-06-29T13:06:22Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP/IO_LOAD - What Value</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184243#M95368</link>
      <description>Read this thread through:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1143614" target="_blank"&gt;http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1143614&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;More is not necessarily better as far as a storage controller is concerned, and the general recommendation of the fellow that implemented this knob (Guy Peleg) has classically been eight; the default.  But you can try a few other values and see.  &lt;BR /&gt;&lt;BR /&gt;Do recognize that various newer storage controllers might not appreciate the blizzard of I/O that classic I/O on OpenVMS can generate; some newer widgets react, um, poorly to I/O overloads.  I'm comparatively cautious around pushing most any FC SAN storage controller from most any vendor harder.&lt;BR /&gt;&lt;BR /&gt;I prefer to treat /IO_LOAD as a governor, and not as an accelerator.&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Jun 2009 14:18:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184243#M95368</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-06-29T14:18:09Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP/IO_LOAD - What Value</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184244#M95369</link>
      <description>Run your backup script and watch disk I/O queue lengths with MONITOR DISK/QUEUE. An average queue length of 2-3 per spindle (*) is about right.&lt;BR /&gt;&lt;BR /&gt;If less use a larger IO_LOAD value and if too large use a smaller value.&lt;BR /&gt;&lt;BR /&gt;(*) If the disk presented is a RAID set then use 2-3 times number of disks in this RAID set.&lt;BR /&gt;&lt;BR /&gt;No need to shove more down the throat of a disk drive than it can swallow at a time.&lt;BR /&gt;&lt;BR /&gt;/Guenter</description>
      <pubDate>Mon, 29 Jun 2009 19:50:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184244#M95369</guid>
      <dc:creator>GuentherF</dc:creator>
      <dc:date>2009-06-29T19:50:26Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP/IO_LOAD - What Value</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184245#M95370</link>
      <description>The EVA controller queue and particularly the queue-full that can arise over there is not what is visible with MONITOR and its view.  This per my recollection and confirmed by Rob Brooks over in the cited thread.   OpenVMS is fairly oblivious to what's going on downstairs in the EVA, which is both a benefit and a problem.</description>
      <pubDate>Mon, 29 Jun 2009 20:38:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184245#M95370</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-06-29T20:38:32Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP/IO_LOAD - What Value</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184246#M95371</link>
      <description>I'm cautious these days about specifying any figures for pushing I/O harder without knowing the complete environment. This is after putting bigger systems in at a client and then having the lock manager pull things down because I'd gone from a single node to a cluster.&lt;BR /&gt;&lt;BR /&gt;If the default's 8, I'd probably be looking at that and then determining what else goes on and what other things are affected by me changing the value of the qualifier (i.e. don't just look at the backup, look at the effect it has on everything around it).&lt;BR /&gt;&lt;BR /&gt;Steve</description>
      <pubDate>Tue, 30 Jun 2009 09:54:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184246#M95371</guid>
      <dc:creator>Steve Reece_3</dc:creator>
      <dc:date>2009-06-30T09:54:41Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP/IO_LOAD - What Value</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184247#M95372</link>
      <description>The reason I'm looking at this is because I've bought LTO4 drives, but they're running about the same speed as LTO3.&lt;BR /&gt;&lt;BR /&gt;My EVA is practically idle most of the time, so if there's a way of pushing it harder and getting the transfer speeds up, then I'd like to pursue it.&lt;BR /&gt;&lt;BR /&gt;Rob.&lt;BR /&gt;&lt;BR /&gt;BTW - How are things down there?</description>
      <pubDate>Tue, 30 Jun 2009 10:05:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184247#M95372</guid>
      <dc:creator>Robert Atkinson</dc:creator>
      <dc:date>2009-06-30T10:05:36Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP/IO_LOAD - What Value</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184248#M95373</link>
      <description>Well, I tried all sorts of combinations. I changed IO_LOAD to 12, then 20, but without any difference.&lt;BR /&gt;&lt;BR /&gt;I then tried increasing various UAF parameters, but again, it had no effect.&lt;BR /&gt;&lt;BR /&gt;Think I'll just live with what I always knew - VMS is very poor at feeding fast tape drives!&lt;BR /&gt;&lt;BR /&gt;Thanks for your responses. Maybe one day BACKUP will be less of a black art.&lt;BR /&gt;&lt;BR /&gt;Rob.</description>
      <pubDate>Tue, 30 Jun 2009 13:13:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184248#M95373</guid>
      <dc:creator>Robert Atkinson</dc:creator>
      <dc:date>2009-06-30T13:13:36Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP/IO_LOAD - What Value</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184249#M95374</link>
      <description>I'd look to run the archive on the storage controller here, LBN disk to tape. &lt;BR /&gt;&lt;BR /&gt;What's the LTO4 hooked to?&lt;BR /&gt;&lt;BR /&gt;If hauling the blocks off disk and up into the host and back out to the controller is in order (and this and file fragmentation and such is particularly important when looking to keep a fast tape busy), then BACKUP is sensitive to the proportions among the process quotas.  See:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://labs.hoffmanlabs.com/node/49" target="_blank"&gt;http://labs.hoffmanlabs.com/node/49&lt;/A&gt;</description>
      <pubDate>Tue, 30 Jun 2009 16:23:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184249#M95374</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2009-06-30T16:23:59Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP/IO_LOAD - What Value</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184250#M95375</link>
      <description>Is your CPU maxed out?  If you are using BACKUP with the defaults then it is calculating CRC and group XORs.  &lt;BR /&gt;&lt;BR /&gt;You can set /GROUP=0. /CRC is a big hog of CPU.  With today's tape and Fibre Channel technology I don't know what benefit using /CRC still provides.  From my faulty memory, the undetected error rate is about 10E-34 and CRC just increases that to 10E-51.  When you crunch the numbers, you would have to be backing up for a very, very long time to get an undetected error without using CRC.</description>
      <pubDate>Tue, 30 Jun 2009 17:40:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184250#M95375</guid>
      <dc:creator>Cass Witkowski</dc:creator>
      <dc:date>2009-06-30T17:40:02Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP/IO_LOAD - What Value</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184251#M95376</link>
      <description>The bug in EVA firmware that Hoff mentioned earlier has been fixed in recent firmware versions.&lt;BR /&gt;&lt;BR /&gt;With BACKUP performance one basic test - ALWAYS - is doing the same backup to the null device: $ BACK disk: NLA0:dummy.&lt;BR /&gt;&lt;BR /&gt;This gives a good idea about how fast files can be copied from the input disk.&lt;BR /&gt;&lt;BR /&gt;One parameter which I found sensitive in my testing was WSQUOTA. Smaller values - YEAH - give better performance (less than 50,000 pagelets).&lt;BR /&gt;&lt;BR /&gt;Ah, and just for fun try a BACKUP/PHYSICAL of that disk device to tape. That takes the file system overhead out of the loop.&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
      <pubDate>Tue, 30 Jun 2009 17:55:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184251#M95376</guid>
      <dc:creator>GuentherF</dc:creator>
      <dc:date>2009-06-30T17:55:39Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP/IO_LOAD - What Value</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184252#M95377</link>
      <description>Backup to NLA0 results in an 'invalid backup device' error.&lt;BR /&gt;&lt;BR /&gt;Using '/NOCRC/GROUP=0' increases the backup speed from 00:01:54 to 00:01:52 - yep, 2 seconds. Was hoping for something more.&lt;BR /&gt;&lt;BR /&gt;Rob.&lt;BR /&gt;</description>
      <pubDate>Wed, 01 Jul 2009 09:49:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184252#M95377</guid>
      <dc:creator>Robert Atkinson</dc:creator>
      <dc:date>2009-07-01T09:49:03Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP/IO_LOAD - What Value</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184253#M95378</link>
      <description>&amp;gt; Backup to NLA0 results in a 'invalid backup device' error.&lt;BR /&gt;&lt;BR /&gt;Could it be that you specified only the NL device and not a save_set on that device? Backup will want a save_set as the target. The general form of the command that you need to use is similar to the following {...you supply the qualifiers, filespecs, etc...}.&lt;BR /&gt;&lt;BR /&gt;$ backup[/...} ddcu:{...} nl:t.t/sav{/...}</description>
      <pubDate>Wed, 01 Jul 2009 10:53:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184253#M95378</guid>
      <dc:creator>Jim_McKinney</dc:creator>
      <dc:date>2009-07-01T10:53:18Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP/IO_LOAD - What Value</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184254#M95379</link>
      <description>Your backup is only taking 114 seconds?</description>
      <pubDate>Wed, 01 Jul 2009 14:50:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-io-load-what-value/m-p/5184254#M95379</guid>
      <dc:creator>Cass Witkowski</dc:creator>
      <dc:date>2009-07-01T14:50:53Z</dc:date>
    </item>
  </channel>
</rss>