<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Backup block size in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609725#M71057</link>
    <description>But not working on 7.3 (fc stdt/all).&lt;BR /&gt;&lt;BR /&gt;Wim</description>
    <pubDate>Tue, 13 Sep 2005 08:18:46 GMT</pubDate>
    <dc:creator>Wim Van den Wyngaert</dc:creator>
    <dc:date>2005-09-13T08:18:46Z</dc:date>
    <item>
      <title>Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609712#M71044</link>
      <description>Are there any disadvantages of specifying a block size of 64K when doing backups to DLT* ?&lt;BR /&gt;&lt;BR /&gt;And for remote backups (backup to a T2T passing the save set to a convert command) ?&lt;BR /&gt;&lt;BR /&gt;And why is the default not increased from 8K to 64K ?&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 23 Aug 2005 09:31:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609712#M71044</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-08-23T09:31:25Z</dc:date>
    </item>
    <item>
      <title>Re: Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609713#M71045</link>
      <description>For me, the main disadvantage is that I cannot $COPY such a saveset from tape to disk and treat it as a container file. When I tried it the last time, the record size was limited to 32767 bytes&lt;BR /&gt;(did you know OpenVMS was a 16-bit OS ? ;-). So I use 32256.</description>
      <pubDate>Tue, 23 Aug 2005 09:36:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609713#M71045</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-08-23T09:36:06Z</dc:date>
    </item>
    <item>
      <title>Re: Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609714#M71046</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;as far as I know, when specifying block sizes greater than 32K it is no longer possible to copy the saveset from tape to disk.&lt;BR /&gt;&lt;BR /&gt;regards Kalle</description>
      <pubDate>Tue, 23 Aug 2005 09:37:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609714#M71046</guid>
      <dc:creator>Karl Rohwedder</dc:creator>
      <dc:date>2005-08-23T09:37:46Z</dc:date>
    </item>
    <item>
      <title>Re: Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609715#M71047</link>
      <description>I think the default block size is going to be increased in a future version of VMS.&lt;BR /&gt;&lt;BR /&gt;For remote backups you are limited to 32K due to RMS limitations.</description>
      <pubDate>Tue, 23 Aug 2005 09:38:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609715#M71047</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-08-23T09:38:54Z</dc:date>
    </item>
    <item>
      <title>Re: Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609716#M71048</link>
      <description>(aside) Do modern tape technologies actually still have interrecord gaps and tape marks or do they coalesce writes into large aggregate structures from which they emulate having tape-like records?</description>
      <pubDate>Tue, 23 Aug 2005 16:59:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609716#M71048</guid>
      <dc:creator>David Jones_21</dc:creator>
      <dc:date>2005-08-23T16:59:18Z</dc:date>
    </item>
    <item>
      <title>Re: Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609717#M71049</link>
      <description>Wim,&lt;BR /&gt;&lt;BR /&gt;For years I've been specifying /BLOCK=65535/GROUP=20 with great results on DLT.  The 65535 was suggested by DEC back in VMS 4.n.  And the group size puts 20 blocks together between IRGs (and yes, IRGs still exist - at least on DLT.)&lt;BR /&gt;&lt;BR /&gt;There are other changes that you can make, if you are interested, to make backups really fly.  Increase working set, BIOLM, DIOLM, etc. on the BACKUP account and you can really get some I/Os going.  Let me know if you want to try it and I'll give you the values that I use.&lt;BR /&gt;&lt;BR /&gt;Mike</description>
      <pubDate>Thu, 08 Sep 2005 14:56:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609717#M71049</guid>
      <dc:creator>Mike McKinney</dc:creator>
      <dc:date>2005-09-08T14:56:22Z</dc:date>
    </item>
    <item>
      <title>Re: Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609718#M71050</link>
      <description>No, /GROUP_SIZE specifies how many blocks are combined in a redundancy group. It writes that many blocks, then creates a recovery block (similar to RAID5) and writes it to tape, too.</description>
      <pubDate>Fri, 09 Sep 2005 00:01:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609718#M71050</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-09-09T00:01:11Z</dc:date>
    </item>
    <item>
      <title>Re: Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609719#M71051</link>
      <description>"(did you know OpenVMS was a 16-bit OS ? ;-). " Not really, but parts of the file system were inherited from one (PDP11 RSX).&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 09 Sep 2005 03:38:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609719#M71051</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-09-09T03:38:03Z</dc:date>
    </item>
    <item>
      <title>Re: Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609720#M71052</link>
      <description>Guess why there was a ";-)" attached.&lt;BR /&gt;&lt;BR /&gt;I know about the RSX history and why you can only use a 15-bit record length on disk.</description>
      <pubDate>Fri, 09 Sep 2005 03:42:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609720#M71052</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-09-09T03:42:36Z</dc:date>
    </item>
    <item>
      <title>Re: Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609721#M71053</link>
      <description>For the youngsters that don't know what the old guys are talking about :&lt;BR /&gt;&lt;A href="http://www.village.org/pdp11/faq.html" target="_blank"&gt;http://www.village.org/pdp11/faq.html&lt;/A&gt;</description>
      <pubDate>Fri, 09 Sep 2005 03:46:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609721#M71053</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-09-09T03:46:06Z</dc:date>
    </item>
    <item>
      <title>Re: Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609722#M71054</link>
      <description>I notice there were some references to /GROUP.&lt;BR /&gt;&lt;BR /&gt;The latest reference URL on backup performance suggests setting /GROUP=0.&lt;BR /&gt;&lt;BR /&gt;The recommendations for backup performance have changed remarkably, especially the reduction in DIOLM.&lt;BR /&gt;&lt;BR /&gt;This is especially true with thrashing that occurs in SAN cache with a high DIOLM.&lt;BR /&gt;&lt;BR /&gt;To see the new documented suggestions see&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h71000.www7.hp.com/doc/82FINAL/aa-pv5mj-tk/00/01/117-con.html" target="_blank"&gt;http://h71000.www7.hp.com/doc/82FINAL/aa-pv5mj-tk/00/01/117-con.html&lt;/A&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 09 Sep 2005 08:51:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609722#M71054</guid>
      <dc:creator>comarow</dc:creator>
      <dc:date>2005-09-09T08:51:22Z</dc:date>
    </item>
    <item>
      <title>Re: Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609723#M71055</link>
      <description>&lt;BR /&gt;&amp;gt; The entire recommendations for backup&lt;BR /&gt;&amp;gt; performance have changed remarkably.&lt;BR /&gt;&amp;gt; Especially the reduction diolm.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;This is especially true with thrashing that&lt;BR /&gt;&amp;gt;occurs in SAN cache with a high diolm.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;To see the new docummented suggestions see&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&lt;A href="http://h71000.www7.hp.com/doc/82FINAL/aa-pv5mj-tk/00/01/117-con.html" target="_blank"&gt;http://h71000.www7.hp.com/doc/82FINAL/aa-pv5mj-tk/00/01/117-con.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Keith Parris comments at length in c.o.v&lt;BR /&gt;about this:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://tinyurl.com/ddct3" target="_blank"&gt;http://tinyurl.com/ddct3&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;8.2 (and 7.3-2 with FIBRE_SCSI V0400 and above) does back off much more&lt;BR /&gt;aggressively than 7.3-1 (and vanilla 7.3-2) in the face of Queue Full&lt;BR /&gt;events. Brian Allison's Bootcamp presentation i222 covered this:&lt;BR /&gt;&lt;BR /&gt;Prior to 7.3-1, VMS had a variable maximum I/O queue depth per LUN, in&lt;BR /&gt;the range of 3 to 16, based on recent I/O sizes. This caused severe&lt;BR /&gt;performance problems, particularly on RAID LUNs (which can have lots of&lt;BR /&gt;independent disk-head actuators, and we really want to keep all of them&lt;BR /&gt;busy for best throughput) and for large I/O sizes (which could reduce&lt;BR /&gt;the allowable queue depth down to 3), because of the very-small maximum&lt;BR /&gt;queue depths allowed. (Aside: On MSCP controllers, I've observed queue&lt;BR /&gt;depths in the 100s, even 1000s, without problems. But the marketplace&lt;BR /&gt;said Proprietary is Bad, Industry Standard is Good. 
And we're doing our&lt;BR /&gt;best to provide all the advantages of the old Proprietary solutions on&lt;BR /&gt;the less-expensive Industry Standard foundation, by applying our&lt;BR /&gt;engineering skills on the software side.)&lt;BR /&gt;&lt;BR /&gt;In V7.3-1 VMS moved to a per-storage-port scheme where the host didn't&lt;BR /&gt;limit I/O queue depths until the storage sub-system asked it to back off&lt;BR /&gt;(via a "queue full" response). Upon receiving the "queue full" response&lt;BR /&gt;VMS issued no more I/Os to that port until half of the outstanding I/Os&lt;BR /&gt;to that port had completed, and then VMS again allowed the queue size to&lt;BR /&gt;build up until it got another "queue full".&lt;BR /&gt;&lt;BR /&gt;Unfortunately, due to the large number of commands that can be in-flight&lt;BR /&gt;in a SAN, the V7.3-1 / V7.3-2 algorithm was too aggressive:&lt;BR /&gt;o Many mount verification messages can result when the same I/O gets a&lt;BR /&gt;"queue full" response several times in a row&lt;BR /&gt;o Performance suffers badly on the HSG when it has to return "queue&lt;BR /&gt;full" responses&lt;BR /&gt;o In extreme cases, the HSG can crash if it receives more I/O after&lt;BR /&gt;signaling "queue full"&lt;BR /&gt;&lt;BR /&gt;In V8.2 (and 7.3-2 with FIBRE_SCSI V0400 and above), VMS moved to an&lt;BR /&gt;algorithm that drains 1/2 of the existing I/O requests and then allows&lt;BR /&gt;the queue depth to increase by 1 entry every 5 seconds. It was hoped&lt;BR /&gt;that this, combined with HSG ACS 8.8, would solve the problems.&lt;BR /&gt;&lt;BR /&gt;Unfortunately this modified algorithm seems to have been a little too&lt;BR /&gt;aggressive in backing off I/O after a "queue full" condition. 
Currently,&lt;BR /&gt;once a "queue full" occurs we throttle traffic to that I/O port forever.&lt;BR /&gt;Traffic rates are allowed to gradually increase, but if the I/O load&lt;BR /&gt;ever has to throttle back, the re-ramp time is slow and impacts&lt;BR /&gt;performance. So FIBRE_SCSI kits are now in the works to pick a better&lt;BR /&gt;I/O ramp scheme.&lt;BR /&gt;&lt;BR /&gt;This might explain your symptoms of slower I/O for a period of time.&lt;BR /&gt;&lt;BR /&gt;How can you avoid "queue full" events? One way is to spread the I/Os&lt;BR /&gt;across as many controller ports as possible. If I/Os are predominantly&lt;BR /&gt;reads, going to a 2-member or 3-member shadowset across multiple&lt;BR /&gt;controllers could reduce the I/O load by as much as 1/2 or 2/3 on a&lt;BR /&gt;given controller port. Using host-based RAID software to divide the I/Os&lt;BR /&gt;across disks in different controllers can help for both reads and writes&lt;BR /&gt;equally (forming RAID-0 arrays, or RAID 0+1 arrays in conjunction with&lt;BR /&gt;Shadowing). If "queue fulls" occur mostly during Backups, reducing&lt;BR /&gt;process quotas for the process running Backup could help. And of course&lt;BR /&gt;doing as much caching as possible in the host (by using XFC, RMS Global&lt;BR /&gt;Buffers, database caches, etc.) 
can help by avoiding I/Os to the&lt;BR /&gt;controller as much as possible.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;---&lt;BR /&gt;&lt;BR /&gt;I believe the fast 7.3-1 algorithm&lt;BR /&gt;was/is a storage platform centric&lt;BR /&gt;problem (a certain&lt;BR /&gt;large third-party storage provider doesn't seem to have this issue).&lt;BR /&gt;&lt;BR /&gt;I speculate the monkeying with the algorithm&lt;BR /&gt;was an outgrowth of problems similar to&lt;BR /&gt;this (not discounting Keith's excellent&lt;BR /&gt;analysis - just adding to it):&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://tinyurl.com/auapr" target="_blank"&gt;http://tinyurl.com/auapr&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;And the root cause analysis of the above&lt;BR /&gt;problem:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://tinyurl.com/bxw4n" target="_blank"&gt;http://tinyurl.com/bxw4n&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Anywho, the root problem that we are seeing is that our mighty ES40&lt;BR /&gt;running OpenVMS 7.3-1 is simply OVERWHELMING the ESA12000 with I/O&lt;BR /&gt;requests. The VMS Engineering person told me that some of the&lt;BR /&gt;performance tweaks in 7.3-1 really make VMS fly when it comes to I/O.&lt;BR /&gt;Now our ES40 is demanding data so fast from the HSG80s that eventually&lt;BR /&gt;the HSG80s tip over.&lt;BR /&gt;&lt;BR /&gt;This makes perfect sense. The three times that we have been bitten by&lt;BR /&gt;this problem, we had *extremely* heavy I/O on the ES40. The first&lt;BR /&gt;time, we were running three concurrent backup streams from snapshots&lt;BR /&gt;AND running our month-end batch processing. 
The poor HSG80s could not&lt;BR /&gt;handle the load and gave up (which corrupted our Cache data files and&lt;BR /&gt;made us restore from tape).&lt;BR /&gt;&lt;BR /&gt;The VMS Engineering person told me that the HSG80s have a total queue&lt;BR /&gt;depth (if that is the correct term) of *240* outstanding I/O requests.&lt;BR /&gt;After that, the controllers try to tell the host system to slow down&lt;BR /&gt;a little. But the ES40 and VMS 7.3-1 are hungry for more data and&lt;BR /&gt;finally the HSG80 faints.&lt;BR /&gt;&lt;BR /&gt;First Fix: The guru from VMS Engineering asked me to check the DIOLM&lt;BR /&gt;setting on the account that we use to run our backup jobs. Knowing&lt;BR /&gt;that the HSG80s have a maximum queue depth of 240, we don't want to&lt;BR /&gt;bury the HSG80s any more. In Authorize, I found that DIOLM for our&lt;BR /&gt;backup account was set to 32767. Three backup jobs running at the&lt;BR /&gt;same time under that account were issuing TONS and TONS of I/O&lt;BR /&gt;requests and burying the HSG80s.&lt;BR /&gt;&lt;BR /&gt;So, per his advice, I set the DIOLM for our backup account to "32".&lt;BR /&gt;This will give very good backup performance and still not bury the&lt;BR /&gt;HSG80s.&lt;BR /&gt;&lt;BR /&gt;Second fix: The VMS Engineering guru told me to install&lt;BR /&gt;DEC-AXPVMS-VMS731_MSA1000-V0100 as soon as I can. It fixes a timeout&lt;BR /&gt;value for fibre channel read/writes. The value got set to "4 seconds"&lt;BR /&gt;in VMS 7.3-1 and this patch changes the timeout value back to "24&lt;BR /&gt;seconds". 
This will help the OS be more tolerant when the HSG80s are&lt;BR /&gt;being pokey with returning requested data.&lt;BR /&gt;&lt;BR /&gt;---&lt;BR /&gt;&lt;BR /&gt;Summarizing:&lt;BR /&gt;&lt;BR /&gt;Be at a recently patched up rev of VMS and&lt;BR /&gt;you will have a throttling algorithm&lt;BR /&gt;in place to prevent IO nastiness.&lt;BR /&gt;&lt;BR /&gt;Keith comments on controller firmware also:&lt;BR /&gt;&lt;BR /&gt;[patched 7.3-2+]&lt;BR /&gt;"combined with HSG ACS 8.8, would solve the problems."&lt;BR /&gt;&lt;BR /&gt;I'd check with engineering about 7.3-1&lt;BR /&gt;and IO issues and whether you would&lt;BR /&gt;be at risk if patched (I don't have a definitive answer).&lt;BR /&gt;&lt;BR /&gt;If early or minimally patched 7.3-1 watch&lt;BR /&gt;overwhelming certain types of storage&lt;BR /&gt;back-ends by limiting DIOLM (a kludge).&lt;BR /&gt;&lt;BR /&gt;Rob</description>
      <pubDate>Fri, 09 Sep 2005 22:29:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609723#M71055</guid>
      <dc:creator>Rob Young_4</dc:creator>
      <dc:date>2005-09-09T22:29:39Z</dc:date>
    </item>
    <item>
      <title>Re: Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609724#M71056</link>
      <description>See the 26th August entry here&lt;BR /&gt;&lt;A href="http://www.eight-cubed.com/blog/" target="_blank"&gt;http://www.eight-cubed.com/blog/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;for DCL to check for QFUL. The VMS V8.2 documentation about backup quota recommendations is much improved over previous versions and also can be used on previous versions.</description>
      <pubDate>Sat, 10 Sep 2005 09:57:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609724#M71056</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-09-10T09:57:26Z</dc:date>
    </item>
    <item>
      <title>Re: Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609725#M71057</link>
      <description>But not working on 7.3 (fc stdt/all).&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 13 Sep 2005 08:18:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609725#M71057</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-09-13T08:18:46Z</dc:date>
    </item>
    <item>
      <title>Re: Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609726#M71058</link>
      <description>I don't know if the FC SDA extension existed for VMS Alpha V7.3&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 13 Sep 2005 09:45:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609726#M71058</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2005-09-13T09:45:25Z</dc:date>
    </item>
    <item>
      <title>Re: Backup block size</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609727#M71059</link>
      <description>FYI : I found out that Unicenter TNG performance solution 2.1 can produce a graph with tape throughput (custom graph per user).&lt;BR /&gt;&lt;BR /&gt;I increased the working set for backup from 8K pages to 32K pages but performance didn't improve (7.3 on a 4100 with TZ88).&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Fri, 16 Sep 2005 02:44:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-block-size/m-p/3609727#M71059</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2005-09-16T02:44:25Z</dc:date>
    </item>
  </channel>
</rss>

