<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Need to speed up expansion of very large savesets in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990531#M35998</link>
    <description>&amp;gt;&amp;gt;&amp;gt; My point is that by using LD container files, you will not need to recreate millions of files. That is a very slow operation on VMS. By using LDDRIVER you can move the whole disk filesystem as a single file.&amp;lt;&amp;lt;&amp;lt;&lt;BR /&gt;&lt;BR /&gt;I don't understand this.   You still have to create the files inside the LD device, right?  And LD still has to write through to the underlying disk, right?  Yes, LD has a single file system, but if you're restoring a saveset into individual files you're still working with a file system inside the LD device.  (And LD isn't a RAM-based disk; it does write back to disk for the blocks written into the LD device.)&lt;BR /&gt;</description>
    <pubDate>Sat, 28 Apr 2007 17:29:33 GMT</pubDate>
    <dc:creator>Hoff</dc:creator>
    <dc:date>2007-04-28T17:29:33Z</dc:date>
    <item>
      <title>Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990515#M35982</link>
      <description>I have sets of savesets that need to be expanded simultaneously.  8 savesets total 130 GB.  It took 5 full 24-hour days for these savesets to fully expand.  Is there a way to speed up this process?  Someone said to use the "mount" command with the "/blocksize" and "/cache" qualifiers.  Is this the best solution?&lt;BR /&gt;&lt;BR /&gt;I am using SCSI drives for my source and destination.  The source drive contains the savesets, which range from 14 to 50 GB each.  I am using OpenVMS 7.2.&lt;BR /&gt;&lt;BR /&gt;Does anyone have any ideas?</description>
      <pubDate>Sat, 28 Apr 2007 11:17:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990515#M35982</guid>
      <dc:creator>Kenneth Toler</dc:creator>
      <dc:date>2007-04-28T11:17:13Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990516#M35983</link>
      <description>Ken,&lt;BR /&gt;&lt;BR /&gt;By expand, do you mean restore?&lt;BR /&gt;&lt;BR /&gt;Have you measured what the bottleneck really is?&lt;BR /&gt;&lt;BR /&gt;If you want to talk about this privately, please contact me via email.&lt;BR /&gt;&lt;BR /&gt;The BLOCKSIZE parameter on disk mount is unlikely to have an effect.&lt;BR /&gt;&lt;BR /&gt;Caching will also likely not help, as the data is not being reused.&lt;BR /&gt;&lt;BR /&gt;I would consider increasing the RMS parameters, and possibly using a local DECnet connection; HOWEVER, I would not recommend random experimentation. It is far easier to see what the bottleneck actually is and address the problem directly.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Sat, 28 Apr 2007 11:43:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990516#M35983</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2007-04-28T11:43:46Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990517#M35984</link>
      <description>Yes, I mean restoring savesets.  One problem is the tri-link SCSI connector, but there is nothing we can do about that.  What I need is a bona fide way to harness the true power of OpenVMS to accelerate the restoration process.  What parameters can be set for RMS to fix this?  I have backups of my system disk, so experimentation is not a problem.</description>
      <pubDate>Sat, 28 Apr 2007 11:59:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990517#M35984</guid>
      <dc:creator>Kenneth Toler</dc:creator>
      <dc:date>2007-04-28T11:59:50Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990518#M35985</link>
      <description>C.2.1 Sequential File Access&lt;BR /&gt;To make the best use of DECdfs's quick file access, most applications benefit from default RMS multibuffer and multiblock values of 3 and 16, respectively, when accessing sequential files.&lt;BR /&gt;Set the number of buffers to 3 for the most efficient multibuffering of file operations. Use the following DCL command:&lt;BR /&gt;&lt;BR /&gt;$ SET RMS_DEFAULT/BUFFER_COUNT=3 /DISK&lt;BR /&gt;Next, set the size of each buffer to sixteen 512-byte blocks:&lt;BR /&gt;&lt;BR /&gt;$ SET RMS_DEFAULT/BLOCK_COUNT=16&lt;BR /&gt;To set these values for just your user process, you can include the commands in your LOGIN.COM file. To set them on a systemwide basis, you can add the /SYSTEM qualifier and include the commands in the DFS$SYSTARTUP file.&lt;BR /&gt;RMS multibuffer and multiblock values that are larger than the default values can slow performance by allowing the application to exceed the DECnet pipeline quota. However, these values are recommendations that may not be optimal for every application. If your application opens many files or if it has a small working set size, you may find these values are too large.&lt;BR /&gt;Note&lt;BR /&gt;________________________________________&lt;BR /&gt;If you prefer, you can set the RMS default multibuffer value by using the SYSGEN parameter RMS_DFMBF. You can set the RMS default multiblock value by using the SYSGEN parameter RMS_DFMBC.&lt;BR /&gt;&lt;BR /&gt;Is this what you were talking about?</description>
      <pubDate>Sat, 28 Apr 2007 12:10:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990518#M35985</guid>
      <dc:creator>Kenneth Toler</dc:creator>
      <dc:date>2007-04-28T12:10:51Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990519#M35986</link>
      <description>Kenneth,&lt;BR /&gt;&lt;BR /&gt;Actually, I suspect that the SCSI cabling is not the problem.&lt;BR /&gt;&lt;BR /&gt;What does the MONITOR show on the disks involved in the operation?&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Sat, 28 Apr 2007 12:11:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990519#M35986</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2007-04-28T12:11:20Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990520#M35987</link>
      <description>Kenneth,&lt;BR /&gt;&lt;BR /&gt;The BUFFER and BLOCK factors do help, if BACKUP actually honors them in your context. When I do backups, various versions of BACKUP do not honor the RMS defaults.&lt;BR /&gt;&lt;BR /&gt;To force the use of the RMS defaults, I use local DECnet to access the file via transparent file access. The File Access Listener (FAL) DOES honor the RMS settings, although getting them set for the FAL process requires that they be set in the LOGIN.COM file (I generally condition them on F$MODE() .eqs. "NETWORK").&lt;BR /&gt;&lt;BR /&gt;The RMS parameters are not only for the use of DECdfs (which I presume you are not using).&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Sat, 28 Apr 2007 12:16:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990520#M35987</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2007-04-28T12:16:19Z</dc:date>
    </item>
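Bob's approach above, setting the RMS defaults only for network-mode (FAL) processes from LOGIN.COM, might be sketched in DCL as follows; the buffer and block counts shown are illustrative values, not measured optima for any particular system:

```dcl
$! In LOGIN.COM: apply larger RMS defaults only when this process is a
$! network (FAL) process, per Bob Gezelter's suggestion.  The counts
$! here are illustrative, not tuned values.
$ IF F$MODE() .EQS. "NETWORK" THEN -
     SET RMS_DEFAULT /BUFFER_COUNT=4 /BLOCK_COUNT=64 /DISK
```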
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990521#M35988</link>
      <description>Kenneth,&lt;BR /&gt;&lt;BR /&gt;A small note: I will be leaving my terminal in a few minutes. While I will not post the number for my mobile in the forum, I will be happy to give it to you if you send me an email. &lt;BR /&gt;&lt;BR /&gt;Please remember to remove the "REMOVETHIS" from the email address on my contact page.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Sat, 28 Apr 2007 12:18:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990521#M35988</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2007-04-28T12:18:22Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990522#M35989</link>
      <description>&amp;gt;&amp;gt; $ SET RMS_DEFAULT/BUFFER_COUNT=3 /DISK&lt;BR /&gt;&amp;gt;&amp;gt; $ SET RMS_DEFAULT/BLOCK_COUNT=16&lt;BR /&gt;&lt;BR /&gt;Not good enough. 16 blocks had been the default for decades. The new default is actually 32, so 16 would be a step backward.&lt;BR /&gt;I would suggest 64 as a general default ( /SYSTEM ) and 120 for the process doing the restore.&lt;BR /&gt;&lt;BR /&gt;A multibuffer count of 2 is normally enough to make sequential-file write-behind work, but for this I'd err on the safe side and set it to 4.&lt;BR /&gt;&lt;BR /&gt;And while you are there... why not SET RMS/SYSTEM/EXTEN=1000 (or more)? Better safe than sorry for this one. An overallocated sequential file will be truncated back to the minimum clusters needed on close.&lt;BR /&gt;&lt;BR /&gt;Hope this helps some,&lt;BR /&gt;Hein van den Heuvel (at gmail dot com)&lt;BR /&gt;HvdH Performance Consulting&lt;BR /&gt;</description>
      <pubDate>Sat, 28 Apr 2007 12:56:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990522#M35989</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2007-04-28T12:56:41Z</dc:date>
    </item>
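Hein's recommendations above amount to the following DCL sketch; the counts are his suggestions for this restore workload, not universal optima:

```dcl
$! System-wide defaults, per Hein: 64 blocks as a general default, plus
$! a generous extend quantity so sequential output files grow in large
$! steps (excess allocation is truncated on close).
$ SET RMS_DEFAULT /SYSTEM /BLOCK_COUNT=64
$ SET RMS_DEFAULT /SYSTEM /EXTEND_QUANTITY=1000
$! In the process doing the restore: larger blocks, more write-behind buffers.
$ SET RMS_DEFAULT /BLOCK_COUNT=120 /BUFFER_COUNT=4 /DISK
```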
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990523#M35990</link>
      <description>Kenneth Toler,&lt;BR /&gt;&lt;BR /&gt;In summary, I would recommend using LDDRIVER and transferring the container files; details follow.&lt;BR /&gt;&lt;BR /&gt;Can we assume this is related to your previous questions?&lt;BR /&gt;&lt;BR /&gt;PKZIP for VMS vs backup/log  &lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1114625" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1114625&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;"The savesets are now in excess of 500,000 files with more than 1,000,000 close at hand."&lt;BR /&gt;&lt;BR /&gt;If these 500,000 files total 130 GB, then the average file size is around 250 KB, or about 500 blocks each.&lt;BR /&gt;&lt;BR /&gt;Total Number of Files and Blocks inside savesets &lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1121642" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1121642&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;If so, then it seems you are restoring a large number of files, possibly to a directory that already holds many thousands of files.  Both are operations that VMS is not optimized for.&lt;BR /&gt;&lt;BR /&gt;You don't specify what the target disk(s) are (other than SCSI); is the target on an HSZ controller (inferring from the reference to a tri-link adapter)?  If it is, then writeback caching will help, but not as much as the solution that follows: the use of LDDRIVER and container files.&lt;BR /&gt;&lt;BR /&gt;First, if you are restoring multiple savesets to a single "spindle", then I would expect you to get better performance by single-threading the restores.  This seems counterintuitive, but by increasing the number of files you are restoring concurrently, you will cause the heads to do additional seeking, and that will kill your performance.  If all your files are 1 block in size, it probably won't make much difference; you are going to get poor performance if you try restoring on a file-by-file basis no matter what you do.&lt;BR /&gt;&lt;BR /&gt;Here is an option.  Use physical backups to sequential savesets on the portable drive and physical restores on the destination machine.  If you don't have identical drives on both sides, use LDDRIVER (on the Freeware, or directly from Jur van der Burg's site; see thread &lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1100345" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1100345&lt;/A&gt;).  LDDRIVER allows you to make a single file appear like a disk, which can be mounted.  This would allow you to create identically sized virtual disk LDAxx units on both systems, a requirement for backup/physical in older versions of VMS.&lt;BR /&gt;&lt;BR /&gt;If both the customer and you are willing to load LDDRIVER, then you have another, even easier option.&lt;BR /&gt;&lt;BR /&gt;Use an LD device to create your files on.  When it is time to provide a new "snapshot", dismount the LD device and backup/ignore=(nobackup) or copy the file to the SCSI drive to be taken to the customer site.  If you ever upgrade, you will need to make sure you disable caching on the LD container file, as mentioned in the release notes.  Take the SCSI drive to the customer site and copy the container files off the SCSI drive.  THIS WILL BE MUCH FASTER than file-by-file restores.&lt;BR /&gt;&lt;BR /&gt;When the container files have been copied to the customer's disk, use LD CONNECT to create new LD devices, specifying the new container files.&lt;BR /&gt;&lt;BR /&gt;Mount the LD devices.  You are done.&lt;BR /&gt;&lt;BR /&gt;LDDRIVER is a very thin (low-overhead) intercept driver, so you can use it for production use.&lt;BR /&gt;</description>
      <pubDate>Sat, 28 Apr 2007 13:19:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990523#M35990</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2007-04-28T13:19:54Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990524#M35991</link>
      <description>Please identify the VAX or Alpha host here, the SCSI controller(s), and the details of the I/O path out to the target device(s).&lt;BR /&gt;&lt;BR /&gt;The OpenVMS BACKUP performance (as was measured within OpenVMS Engineering, and discussed in various presentations), when using the recommended process quota settings and baseline tuning, has been found to be within 90% of the underlying hardware bandwidth; within 90% of the slowest component from source to target.&lt;BR /&gt;&lt;BR /&gt;The OpenVMS releases after V7.2 can and do have a number of performance optimizations (including various changes relevant to BACKUP and to I/O), but I'd (also) be looking carefully at the performance of the hardware involved: the host, the SCSI controller, bus contention, and the device.&lt;BR /&gt;&lt;BR /&gt;Extent settings and process quotas can and certainly do help with performance, but I'd also be looking really, really, really hard at things like I/O bandwidth all the way out to the target, and disk queue depths.  And at the host VAX or Alpha processor here.&lt;BR /&gt;&lt;BR /&gt;Operator process quotas are listed in the tables available at: &lt;A href="http://64.223.189.234/node/49" target="_blank"&gt;http://64.223.189.234/node/49&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;I'd also be looking at the specific processing requirements here, to see if there are process changes that might be relevant.&lt;BR /&gt;&lt;BR /&gt;And a reversal of interpretation on why you really want to look at hardware bandwidth: if you're achieving within approximately 90% of the bandwidth of the slowest I/O giblet, I'd be looking at hardware upgrades, and not at performance tuning.&lt;BR /&gt;&lt;BR /&gt;Stephen Hoffman&lt;BR /&gt;HoffmanLabs LLC&lt;BR /&gt;</description>
      <pubDate>Sat, 28 Apr 2007 14:01:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990524#M35991</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2007-04-28T14:01:10Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990525#M35992</link>
      <description>Let's look at the numbers provided:&lt;BR /&gt;&lt;BR /&gt;130 GB restored in 5 days.&lt;BR /&gt;&lt;BR /&gt;Assuming the larger interpretation of 130 GB:&lt;BR /&gt;&lt;BR /&gt;130 * 2^30 = 130 * 1073741824 = 139586437120 bytes&lt;BR /&gt;&lt;BR /&gt;5 days = 5 * 24 * 60 * 60 = 432000 seconds&lt;BR /&gt;&lt;BR /&gt;So average throughput is 139586437120 bytes / 432000 seconds, or just under 325000 bytes/second.&lt;BR /&gt;&lt;BR /&gt;So you do need to find out what the bottleneck is.&lt;BR /&gt;&lt;BR /&gt;Since you already have some large files on the SCSI disk, can you measure the time it takes to copy the 14 GB saveset to a disk?&lt;BR /&gt;&lt;BR /&gt;You may be able to increase the performance some by adjusting the process RMS block and buffer counts, but both of those utilities are smart enough to allocate the space in one go and then copy the blocks, so setting the RMS extend size probably will not affect copy performance.  It definitely helps when creating a file that is going to be large.&lt;BR /&gt;&lt;BR /&gt;Once you determine the speed at which you can copy a large file, you will have a baseline for what you could expect if using the LDDRIVER method.&lt;BR /&gt;&lt;BR /&gt;Note that by using LDDRIVER the performance will be independent of the size or number of files contained in the container file.  The smaller the files are, the more benefit you will get from using LDDRIVER to treat a single file as a disk.</description>
      <pubDate>Sat, 28 Apr 2007 15:12:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990525#M35992</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2007-04-28T15:12:07Z</dc:date>
    </item>
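Jon's throughput arithmetic above checks out; a quick sketch in Python, using the figures from his post:

```python
# 130 GB (the larger, 2**30 interpretation) restored in 5 full days.
bytes_restored = 130 * 2**30            # 139586437120 bytes
seconds = 5 * 24 * 60 * 60              # 432000 seconds
throughput = bytes_restored / seconds   # average bytes per second
print(round(throughput))                # just under 325000, as stated
```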
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990526#M35993</link>
      <description>Actually the OpenVMS version is 7.3, not 7.2.  We are about to go to 8.3, but I am not sure when.&lt;BR /&gt;&lt;BR /&gt;The system is Alpha-based now, not VAX.&lt;BR /&gt;&lt;BR /&gt;There are 50,000 to over 500,000 files per saveset; the 500,000 is the 50 GB saveset.&lt;BR /&gt;&lt;BR /&gt;Any other ideas?</description>
      <pubDate>Sat, 28 Apr 2007 15:22:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990526#M35993</guid>
      <dc:creator>Kenneth Toler</dc:creator>
      <dc:date>2007-04-28T15:22:46Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990527#M35994</link>
      <description>Maybe I wasn't specific enough.  By "copy the saveset" I mean copying the single file, not using BACKUP to restore the individual files.&lt;BR /&gt;&lt;BR /&gt;For example:&lt;BR /&gt;&lt;BR /&gt;$ copy dkb5:[savesets]disk1.bck dka3:[somedir]&lt;BR /&gt;&lt;BR /&gt;or&lt;BR /&gt;&lt;BR /&gt;$ backup dkb5:[savesets]disk1.bck dka3:[somedir]/interchange/own=par ! this copies the file disk1.bck; it does not expand it to individual files</description>
      <pubDate>Sat, 28 Apr 2007 15:23:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990527#M35994</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2007-04-28T15:23:50Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990528#M35995</link>
      <description>I do not necessarily need to copy the saveset.  I need to greatly improve the performance of expanding large savesets with hundreds of thousands of files each.</description>
      <pubDate>Sat, 28 Apr 2007 15:29:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990528#M35995</guid>
      <dc:creator>Kenneth Toler</dc:creator>
      <dc:date>2007-04-28T15:29:36Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990529#M35996</link>
      <description>Kenneth,&lt;BR /&gt;&lt;BR /&gt;As I noted earlier, it is very far from clear, without performance data, where the bottleneck actually is.&lt;BR /&gt;&lt;BR /&gt;Looking at the followup postings, it would not be surprising if the issue is not access to the savesets themselves, but the creation of the output files.  Information about the actual performance of the system is needed to understand what the bottleneck is, and then how the problem can be addressed.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Sat, 28 Apr 2007 16:40:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990529#M35996</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2007-04-28T16:40:40Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990530#M35997</link>
      <description>My point is that by using LD container files, you will not need to recreate millions of files.  That is a very slow operation on VMS.  By using LDDRIVER you can move the whole disk filesystem as a single file.&lt;BR /&gt;&lt;BR /&gt;If a single spindle is sufficient, then they could just access the files directly on the disk you provide.  If they have HBVS licensed, you could even use that to make the files available while they were being copied (via the shadowing software), although while it is copying, the performance is going to be poor for other users, and the amount of I/O to get the files moved will be higher (I think at least double).  That would be good if you needed availability instead of raw performance.&lt;BR /&gt;&lt;BR /&gt;The purpose of copying the saveset is just to measure the "best case" speed.  Copying a single large file is going to be faster than copying a whole bunch of small ones.  It doesn't have to be a saveset, just some large file.  If the time it takes to copy the data as a single file is too long, then you need to look at other options, like upgrading the system.&lt;BR /&gt;&lt;BR /&gt;You haven't provided any metrics for what performance they are expecting once the files are loaded on the customer's system, or even what you consider an acceptable time to restore.  If you are expecting to be able to restore 130 GB in 1 hour, that's nearly 40 MB/sec, and I doubt you are going to be able to achieve that coming from a single spindle (even if you were restoring to a RAM disk).  Hein can probably provide some good guidelines for what hardware is needed to support sustained raw transfer rates.&lt;BR /&gt;&lt;BR /&gt;Without more information from you, we really can't provide an optimal solution.  I don't think you are going to find a magic bullet that will be best for any arbitrary case.&lt;BR /&gt;&lt;BR /&gt;If you are interested in considering the LD driver approach, I can give a few more details; otherwise, I'll let others give you their favorite tuning tweaks.&lt;BR /&gt;&lt;BR /&gt;If you think you may be interested, please visit &lt;A href="http://www.digiater.nl/lddriver.html" target="_blank"&gt;http://www.digiater.nl/lddriver.html&lt;/A&gt; for a good description of what it is.&lt;BR /&gt;</description>
      <pubDate>Sat, 28 Apr 2007 16:44:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990530#M35997</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2007-04-28T16:44:06Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990531#M35998</link>
      <description>&amp;gt;&amp;gt;&amp;gt; My point is that by using LD container files, you will not need to recreate millions of files. That is a very slow operation on VMS. By using LDDRIVER you can move the whole disk filesystem as a single file.&amp;lt;&amp;lt;&amp;lt;&lt;BR /&gt;&lt;BR /&gt;I don't understand this.   You still have to create the files inside the LD device, right?  And LD still has to write through to the underlying disk, right?  Yes, LD has a single file system, but if you're restoring a saveset into individual files you're still working with a file system inside the LD device.  (And LD isn't a RAM-based disk; it does write back to disk for the blocks written into the LD device.)&lt;BR /&gt;</description>
      <pubDate>Sat, 28 Apr 2007 17:29:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990531#M35998</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2007-04-28T17:29:33Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990532#M35999</link>
      <description>What is LDDRIVER?&lt;BR /&gt;&lt;BR /&gt;If it is not part of the as-delivered OpenVMS 7.3, I can't use it.  If it is part of OpenVMS 7.3, please tell me how to access and use it.</description>
      <pubDate>Sat, 28 Apr 2007 17:36:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990532#M35999</guid>
      <dc:creator>Kenneth Toler</dc:creator>
      <dc:date>2007-04-28T17:36:09Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990533#M36000</link>
      <description>Re: Hoff's question.&lt;BR /&gt;&lt;BR /&gt;Reading between the lines, this appears to be an ongoing requirement.&lt;BR /&gt;&lt;BR /&gt;If he used an LD device on his system, the files would be created in the container file.  Thus it would be easy to transport the "disk" to the other system as a single file.&lt;BR /&gt;&lt;BR /&gt;So Hoff is correct; it doesn't solve the problem of getting from a saveset to the disk.  However, it avoids the need in the future.&lt;BR /&gt;&lt;BR /&gt;Tim: How can I make my car go faster?&lt;BR /&gt;Tom: Why does it need to go faster?&lt;BR /&gt;Tim: I need to drive 2000 miles, and it takes too long.&lt;BR /&gt;Tom: Did you consider taking a plane?</description>
      <pubDate>Sat, 28 Apr 2007 17:47:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990533#M36000</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2007-04-28T17:47:29Z</dc:date>
    </item>
    <item>
      <title>Re: Need to speed up expansion of very large savesets</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990534#M36001</link>
      <description>LDDRIVER is part of OpenVMS Alpha V7.3-1 and later (with some wrinkles), and fully part of V8.2 and later.  LDDRIVER is also available via the Freeware, and (among other uses) central to recording optical media on OpenVMS.&lt;BR /&gt;&lt;BR /&gt;Information on LD is available in the V8.2 and later OpenVMS manuals, and details on configuring and using LDDRIVER are also embedded in the recording details at: &lt;A href="http://64.223.189.234/node/28" target="_blank"&gt;http://64.223.189.234/node/28&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;LD would work as a way to pass over a block of data -- for this volume of data, do ensure you have the current LD ECO, as older drivers had a bugcheck with volumes in this range.  As for approaches similar to LD, I'd be tempted to perform the BACKUP or gzip locally, then transfer the saveset or the archive as a unit.  Or work on compressing or reducing the quantity of data involved.   Or both.&lt;BR /&gt;&lt;BR /&gt;Though pending the information on the system and I/O hardware and on the particular limitation or bottleneck here between the source and the target, I'm only really left to speculate.&lt;BR /&gt;</description>
      <pubDate>Sat, 28 Apr 2007 17:57:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/need-to-speed-up-expansion-of-very-large-savesets/m-p/3990534#M36001</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2007-04-28T17:57:56Z</dc:date>
    </item>
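On the receiving side, the LD container workflow the thread describes might look like the following DCL sketch. Device and file names here are hypothetical, and the `LD` command syntax is that of the Freeware LDDRIVER kit:

```dcl
$! Hypothetical names throughout.  After copying the container file
$! from the portable SCSI drive to the customer's disk:
$ LD CONNECT DKA3:[CONTAINERS]SNAPSHOT.DSK LDA1:   ! bind container to LDA1:
$ MOUNT/SYSTEM LDA1: SNAPVOL                       ! mount the virtual disk
$! ... use LDA1: like any other disk ...
$ DISMOUNT LDA1:
$ LD DISCONNECT LDA1:
```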
  </channel>
</rss>

