<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: BACKUP over DECnet, file extensions, performance in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308858#M15775</link>
    <description>re reports on experiments&lt;BR /&gt;"I'm somewhat surprised"&lt;BR /&gt;&lt;BR /&gt;I'm not surprised. The file system has no memory of what it previously allocated nor any prediction of future intentions. Indeed it has no idea what a particular request will be used for, or even which process it's for, let alone where related extents might be. The allocation is made according to the state of free space and caches at the time of the request. There is no attempt to keep files contiguous, and given the place of the allocator in the software stack, it wouldn't be feasible to do so for the general case.&lt;BR /&gt;&lt;BR /&gt;On a freshly installed disk, by default, the file system structures (INDEXF.SYS and friends) will be in the middle of the disk. That gives you two large extents, one on either side. As you make requests, I believe the allocation algorithms will tend to try to keep the extents even, so the allocations may tend to flip flop between them. This may look like "all over the disk". On the other hand, there are all kinds of other factors which influence where the blocks come from.&lt;BR /&gt;&lt;BR /&gt;Consider: the allocator will do exactly the same thing for a particular sequence of requests, regardless of whether the sequence is from one process for one file, or multiple processes writing multiple files.&lt;BR /&gt;&lt;BR /&gt;By definition, you can't "optimize" everything for everyone. So the bottom line is, if you want your files to be contiguous (even though these days it's not entirely clear that it will be any significant benefit), you need to say so up front. Where possible, use large initial allocations and large extend sizes to TELL the file system your intentions, rather than trying to second-guess epiphenomena and emergent behaviour of file system primitives.&lt;BR /&gt;</description>
    <pubDate>Mon, 01 Dec 2008 21:14:03 GMT</pubDate>
    <dc:creator>John Gillings</dc:creator>
    <dc:date>2008-12-01T21:14:03Z</dc:date>
    <item>
      <title>BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308832#M15749</link>
      <description>I inherited a script that basically did a BACKUP over DECnet (i.e. BACKUP dev:[dir]*.* rnode::rdev:[rdir]saveset.bck/save) (notice the use of a DECnet proxy).  The files being backed up are mostly huge RMS indexed files.  There are 6 directories that this script backs up (to different savesets) and it did so one at a time.  Total elapsed time about 3 hours.&lt;BR /&gt;&lt;BR /&gt;I thought I could decrease that elapsed time so I had this brilliant idea to just fire off six separate procedures simultaneously since the network guy said we had great bandwidth.  Total elapsed time: over 24 hours!  After much digging I determined it was file fragmentation (all savesets were being written to the same remote disk, which had lots of free space and a free space fragmentation of 0.)  The savesets were constantly being extended and competing with each other.  DFU gave me a file fragmentation index of 643395904.000  (poor) whereas it was close to 0 previously.&lt;BR /&gt;&lt;BR /&gt;I'm not ready to give up on running multiple backups, so (other than writing to separate remote disks) does anyone have any better suggestions?  I saw a previous note... &lt;A href="http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1248535" target="_blank"&gt;http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1248535&lt;/A&gt; where Bob Gezelter recommends setting rms/extend on the remote node.  I'm going to test that later on, but I'm not really thrilled about having to add that set rms/extend command in the login.com (for network jobs) because there could be other network jobs for the account that could be doing other stuff.&lt;BR /&gt;&lt;BR /&gt;Any helpful comments/ideas are most welcome.</description>
      <pubDate>Tue, 18 Nov 2008 21:27:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308832#M15749</guid>
      <dc:creator>EdgarZamora_1</dc:creator>
      <dc:date>2008-11-18T21:27:32Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308833#M15750</link>
      <description>I don't see a way to pre-allocate a big save&lt;BR /&gt;set file, but perhaps it would make some&lt;BR /&gt;sense to create a big LD device on the&lt;BR /&gt;destination system for each job, and write&lt;BR /&gt;the output save set to that.  At least the&lt;BR /&gt;jobs wouldn't be fighting over allocations on&lt;BR /&gt;the same file system.  Also, MOUNT /EXTENSION&lt;BR /&gt;(or SET VOLUME /EXTENSION) might let you set&lt;BR /&gt;a bigger default extension value on an LD&lt;BR /&gt;volume without bothering LOGIN.COM anywhere.&lt;BR /&gt;&lt;BR /&gt;You would need to guess a good LD device&lt;BR /&gt;size ahead of time.  Too small, and the job&lt;BR /&gt;fails.  Too large, and you've tied up a bunch&lt;BR /&gt;of extra space, and you might need to copy&lt;BR /&gt;the save set off the LD to somewhere else to&lt;BR /&gt;be able to recover it.&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Any helpful comments/ideas are most&lt;BR /&gt;&amp;gt; welcome.&lt;BR /&gt;&lt;BR /&gt;If you insist on "helpful", it gets harder.</description>
      <pubDate>Tue, 18 Nov 2008 22:20:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308833#M15750</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2008-11-18T22:20:34Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308834#M15751</link>
      <description>&amp;gt;&amp;gt; but I'm not really thrilled about having to add that set rms/extend command in the login.com &lt;BR /&gt;&lt;BR /&gt;Don't be afraid to SET FILE/EXTEN to the max (65K), or at least set it to several thousand, which is likely to be 100 times better than it is.&lt;BR /&gt;&lt;BR /&gt;Most/many uses of the extent for newly created files will truncate when done.&lt;BR /&gt;&lt;BR /&gt;But admittedly, if you later have hundreds of little files come in by FTP then you want to select a middle-of-the-road value (like 1000?).&lt;BR /&gt;&lt;BR /&gt;Maybe you can use the time of day for a clue as to how to set the extent, or the originating node for the connection?&lt;BR /&gt;&lt;BR /&gt;You probably also want to jack up SET RMS/NETWORK_BLOCK_COUNT&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; The files being backed up are mostly huge RMS indexed files. &lt;BR /&gt;&lt;BR /&gt;You may want to question the value-add of using BACKUP for a few huge files.&lt;BR /&gt;You may be better off just using COPY, or even CONVERTing to a remote sequential file, or PULLING the file instead of pushing, to control the output better (pre-allocate).&lt;BR /&gt;&lt;BR /&gt;Finally, going from 1 job to 2 concurrent ones might just give you ample improvement over what you had whilst avoiding the worst of the contention.&lt;BR /&gt;&lt;BR /&gt;Hope this helps some&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Tue, 18 Nov 2008 22:22:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308834#M15751</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2008-11-18T22:22:17Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308835#M15752</link>
      <description>&amp;gt; &amp;gt; but I'm not really thrilled about having&lt;BR /&gt;&amp;gt; &amp;gt; to add that set rms/extend command in the&lt;BR /&gt;&amp;gt; &amp;gt; login.com&lt;BR /&gt;&lt;BR /&gt;&amp;gt; Don't be afraid [...]&lt;BR /&gt;&lt;BR /&gt;Or set up a new account with its own&lt;BR /&gt;LOGIN.COM, and the appropriate proxies to let&lt;BR /&gt;you use it.  Then go wild.</description>
      <pubDate>Tue, 18 Nov 2008 22:41:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308835#M15752</guid>
      <dc:creator>Steven Schweda</dc:creator>
      <dc:date>2008-11-18T22:41:31Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308836#M15753</link>
      <description>Tweak the receiver DCL, or the FDL.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://64.223.189.234/node/598" target="_blank"&gt;http://64.223.189.234/node/598&lt;/A&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 18 Nov 2008 23:07:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308836#M15753</guid>
      <dc:creator>Hoff</dc:creator>
      <dc:date>2008-11-18T23:07:28Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308837#M15754</link>
      <description>Edgar,&lt;BR /&gt;&lt;BR /&gt;I would recommend three things, using a SET RMS command conditioned on a NETWORK login (using F$MODE() to check):&lt;BR /&gt;&lt;BR /&gt;- EXTEND=65535&lt;BR /&gt;- BUFFER=255 (Hein will disagree with me)&lt;BR /&gt;- BLOCK=127 (Hein will disagree with me)&lt;BR /&gt;&lt;BR /&gt;These are the maximum numbers; you can experiment with lowering them based on observed performance. I presume that memory is not a problem. Paging and working sets may also need to be increased.&lt;BR /&gt;&lt;BR /&gt;As to the issue of other jobs, there are a variety of ways I could see conditionalizing things more precisely (one of which is cloning the account and using a special account for the BACKUP operations). One could also possibly do some other tricks; I would have to check into the details.&lt;BR /&gt;&lt;BR /&gt;And yes, I have seen impressive speedups by varying these settings, even when using DECnet remote file access within a node.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Wed, 19 Nov 2008 03:36:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308837#M15754</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2008-11-19T03:36:27Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308838#M15755</link>
      <description>Edgar,&lt;BR /&gt;&lt;BR /&gt;What bottleneck are you trying to avoid by having multiple streams?&lt;BR /&gt;&lt;BR /&gt;If all the input directories are on the same device (not clear if they are or not), and there is a single output device that will hold all the save sets, you may be increasing the time due to increased head movement on the source and destination disks.&lt;BR /&gt;&lt;BR /&gt;Large file extensions and network buffer tuning will help whether you have multiple streams or not.&lt;BR /&gt;&lt;BR /&gt;I am not convinced that fragmentation is necessarily the cause; I would guess it is the frequent extensions, the multiple streams reducing the effectiveness of the extent caches, and the contention for the single output disk.  By creating the LD devices, you can eliminate the fragmentation, but the disk with the container files will still thrash as the heads seek from one container file to another.&lt;BR /&gt;&lt;BR /&gt;Since you are backing up "mostly huge" files, if the backup job is the only thing accessing the drive, I would expect single stream to be near optimal as far as getting the data off the disk.  By multi-streaming, you may be able to get more than your fair share of network bandwidth, but unless you are competing with other network activity, the only advantage multi-streaming can provide is more buffering.  And if you can increase the buffers available to the single stream, you may be able to get higher utilization of the network with a single stream.&lt;BR /&gt;&lt;BR /&gt;Summary: multi-streaming is not always better than single streaming, especially when the streams are contending for a common resource.&lt;BR /&gt;&lt;BR /&gt;Jon&lt;BR /&gt;</description>
      <pubDate>Wed, 19 Nov 2008 04:47:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308838#M15755</guid>
      <dc:creator>Jon Pinkley</dc:creator>
      <dc:date>2008-11-19T04:47:39Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308839#M15756</link>
      <description>&amp;gt;&amp;gt; - BUFFER=255 (Hein will disagree with me)&lt;BR /&gt;&lt;BR /&gt;Yes I do, but I should re-run my tests some day soon.&lt;BR /&gt;&lt;BR /&gt;Still, as a mental exercise, how could anything more than a handful of buffers possibly help with sequential access?&lt;BR /&gt;&lt;BR /&gt;IMHO all that does is, firstly, set false expectations, secondly, eat some memory, and finally, increase risks a tiny bit. &lt;BR /&gt;&lt;BR /&gt;Just imagine an infinitely fast network as well as source. Those buffers at full size will allow the receiver to fill up 16MB of memory. Then what? They still need to be written to the disk and the connection should stay until that is done. So now you potentially launch 255 IO's. Is that going to help anything? Do you want to blow out of DIRIO? Did you want to change from a simple sequential write pattern to effectively random? Do you like spiking your controller cache and hurting other users instead of throttling? &lt;BR /&gt;&lt;BR /&gt;Now nothing is infinitely fast.&lt;BR /&gt;In reality you will end up using 2, maybe 3 buffers. If that's the case, then why confuse the world by suggesting that hundreds of buffers will help!&lt;BR /&gt;Due to the transient nature of tests, anything will be hard to prove. &lt;BR /&gt;Maybe $SET PROC/SSLOG ?&lt;BR /&gt;&lt;BR /&gt;If 255 buffers were to be used for real then RMS would have to walk its buffer descriptors (BDB's) to find the right one to use. Those are linked in VBN order, so that would get harder and harder. In my prior tests walking 255 BDBs actually took measurable time, when done for each record added. It became slower than doing the IO!&lt;BR /&gt;Fortunately, your typical network IO is done in sequential-only (SQO) mode, and RMS will forget about the buffer right after the IO as the application promised not to look back.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; - BLOCK=127 (Hein will disagree with me)&lt;BR /&gt;&lt;BR /&gt;I'm fine with that, although I tend to pick 124 'just in case'.&lt;BR /&gt;&lt;BR /&gt;Some tests suggest that writing in multiples of 16 blocks will reduce XFC overhead a little and thus increase throughput, and it does. Blocks=112 is the best you can do for that. But it is hard to argue with doing 10% fewer IOs.&lt;BR /&gt;&lt;BR /&gt;Other tests suggest trying to keep IOs starting at 4 (16?) block LBN boundaries will help the IO controllers (notably EVA). &lt;BR /&gt;But to accomplish that the cluster size and buffer size must both be 4 (16?) block multiples. My 124 choice helps with improving those odds, while not increasing the number of IOs too much.&lt;BR /&gt;&lt;BR /&gt;fwiw...&lt;BR /&gt;I recently started experimenting with changing VCC_MAX_IOSIZE to 126. &lt;BR /&gt;This allows one to choose the simple SET FILE/RMS/BLO=127 as a tool to bypass the XFC for those places where you do not come back to the data soon.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 19 Nov 2008 05:53:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308839#M15756</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2008-11-19T05:53:56Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308840#M15757</link>
      <description>You can use T2T.&lt;BR /&gt;&lt;BR /&gt;backup files node::"task=x"/sav/block=32256/group=0&lt;BR /&gt;&lt;BR /&gt;and x.com on the other side&lt;BR /&gt;&lt;BR /&gt;convert/fdl=sys$input disk_file.sav&lt;BR /&gt;xxx&lt;BR /&gt;&lt;BR /&gt;where xxx is the FDL contents of your backup file with a big allocation and extend size.&lt;BR /&gt;&lt;BR /&gt;Wim (not tested; I use something like this for remote backup to tape)</description>
      <pubDate>Wed, 19 Nov 2008 07:47:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308840#M15757</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2008-11-19T07:47:42Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308841#M15758</link>
      <description>Hein,&lt;BR /&gt;&lt;BR /&gt;With all due respect, I will not disagree with you that 255 buffers almost never get used. The key word in the preceding is ALMOST.&lt;BR /&gt;&lt;BR /&gt;While I have not run them in a while, I did run some interesting timing tests in an environment where head movement was a given (e.g., a single user disk environment where the application was doing one-for-one processing of a sizable sequential file). The number of buffers used, even for large block sizes, was impressive.&lt;BR /&gt;&lt;BR /&gt;The preceding caused that relatively slow CPU, with comparable disks, to range from 10% utilization to saturation, with the only variable being the number of buffers and their size.&lt;BR /&gt;&lt;BR /&gt;Admittedly, that environment did stabilize short of the maximum, but it was very sensitive to the performance of the different elements of the system. In that case, the speed match was between the disk and itself, allowing for fragmentation, window turns, and file extends.&lt;BR /&gt;&lt;BR /&gt;Edgar has shared little with us on the precise configuration of the systems and network involved in this. Even with large extend sizes, BACKUP is impressive at generating a data stream. In my (admittedly limited) experiments using DECnet within a node (effectively an infinite speed network), I have certainly maxed out at more than a handful of buffers, although I do max out the CPU running BACKUP.&lt;BR /&gt;&lt;BR /&gt;As is said with vehicle mileage stickers, "Your mileage may vary". If there is one thing I have learned in all of my years of performance tuning, for systems and applications, it is that one must be prepared to be surprised by the unexpected.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Wed, 19 Nov 2008 10:09:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308841#M15758</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2008-11-19T10:09:46Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308842#M15759</link>
      <description>Enclosed is a working example using T2T.&lt;BR /&gt;&lt;BR /&gt;It backs up a hardcoded set. If you start it with P1=remote node and P2=100000 it will be 4 times as fast as the default backup (AS500, 7.3).&lt;BR /&gt;&lt;BR /&gt;If you give P2=100 it will take 6 times as long. My default rms extend is 32.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 19 Nov 2008 12:44:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308842#M15759</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2008-11-19T12:44:26Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308843#M15760</link>
      <description>Good morning... I didn't expect to see so many responses, but thank you!&lt;BR /&gt;&lt;BR /&gt;Steven, thanks for the LD suggestion.  That did cross my mind as a possible solution.  I will look into that.  And yes, I'm also considering using a dedicated account for doing this (sigh... another account I'll have to justify to the SOX auditors as to why it has privs).&lt;BR /&gt;&lt;BR /&gt;Hoff, thanks for the link.  Very interesting article.  Wim, thanks for your example t2t.  I will definitely look into using t2t.&lt;BR /&gt;&lt;BR /&gt;Hein and Bob, thanks for your responses.  In combination with Steven's suggestion of using a dedicated account, the SET RMS commands in login.com are what I'll look into first (since it's really the easiest for me to set up and test right now).  Bob, I didn't share much detail on the environment because I figured it was a pretty generic situation, but here's more info on the environment:&lt;BR /&gt;&lt;BR /&gt;The RMS indexed files are production data being backed up to the development system (so the programmers can test against more recent data).  When the savesets are done being copied over to the development system, another procedure restores them for use by the developers.  This "refresh" exercise is done maybe once a month or so.  The system environment is Alpha OpenVMS 8.3 latest patches; EMC Symmetrix storage for production, MSA1000 for development; standalone systems (no clustering).  DECnet is being used to do this refresh because the TCP/IP networks between prod and dev are separated.  The two systems involved are actually sitting on the same rack.&lt;BR /&gt;&lt;BR /&gt;Jon, the whole situation arose because, like I said initially, I had this "brilliant" idea that I could decrease the elapsed time (about 3 hours for single stream) of the whole refresh process.  Maybe I should stop trying to improve things?  Yes, the input directories are on the same disk and yes there is a single output device (currently).  So you may be right in that the single stream is the optimal way to go (and yes, I was trying to steal more of the network pie).  I agree with you that fragmentation is not necessarily the cause of the slowness, but I can tell you definitely that the previous savesets, when run one at a time, were contiguous, and these savesets I created this time had between 100K and 200K fragments each.  Interesting tidbit I saw on the destination system while the slowness was going on... the read ops rate (on the destination disk) was a thousand times (or more) higher than the write IO rate (no other activity going on on that disk except for the FAL process writing the savesets).  I don't have the screen history anymore so I don't have the exact numbers.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 19 Nov 2008 14:44:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308843#M15760</guid>
      <dc:creator>EdgarZamora_1</dc:creator>
      <dc:date>2008-11-19T14:44:41Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308844#M15761</link>
      <description>I hope Hein can add some explanation to my next test.&lt;BR /&gt;&lt;BR /&gt;I added deferred write in the FDL of the convert and it was accepted by convert. But performance didn't improve.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Wed, 19 Nov 2008 15:36:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308844#M15761</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2008-11-19T15:36:28Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308845#M15762</link>
      <description>My guess is it's the input device slowing down the multiple-stream approach. I would do a simple test: run the backups to the NLA0 device - once as serial single streams and then as the multi-stream. It may show that the reading part is slowing things down in the multi-stream approach.&lt;BR /&gt;&lt;BR /&gt;Having good file fragmentation on the output side is a performance plus. Also, small extend sizes (+/- 1,000) would help to keep the disk head close to the same location for all streams. Imagine all save set files pre-allocated and then filling each bottom-up. Imagine the disk head strokes necessary to do that.&lt;BR /&gt;&lt;BR /&gt;Also using /BUFFER=255. This builds quite a disk I/O queue, most likely a whole set from stream A, then from stream B, etc. That means e.g. stream B might have to wait for all of stream A's I/Os queued ahead to finish first. &lt;BR /&gt;&lt;BR /&gt;Also, BACKUP does not use RMS but uses $QIO to read from the disk. The number of outstanding I/Os is controlled by the process' DIOLM (among others) - replaced by /IO_LOAD since V8.3. Small numbers in the range of 5-10 typically yield better performance.&lt;BR /&gt;&lt;BR /&gt;Keep in mind: bigger is not always better!&lt;BR /&gt;&lt;BR /&gt;/Guenther</description>
      <pubDate>Wed, 19 Nov 2008 20:27:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308845#M15762</guid>
      <dc:creator>GuentherF</dc:creator>
      <dc:date>2008-11-19T20:27:03Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308846#M15763</link>
      <description>Side discussion(s)...&lt;BR /&gt;&lt;BR /&gt;Wim&amp;gt; I added deferred write in the FDL of the convert and it was accepted by convert. But performance didn't improve.&lt;BR /&gt;&lt;BR /&gt;Correct, as to be expected.&lt;BR /&gt;&lt;BR /&gt;First, Deferred Write is ONLY applicable for SHARED access, where the RMS default is to write through any record changes. Setting DFW tells RMS not to update the disk for each record added, but only when another accessor jiggles the lock.&lt;BR /&gt;For the purpose demonstrated, there is no sharing and RMS will always defer until the buffer is full, at which point WRITE BEHIND (WBH) may or may not get activated. &lt;BR /&gt;&lt;BR /&gt;Even if deferred write was active, RMS would just sit on dirty buffers until one more buffer is requested than available, and only at that point would it start the write. &lt;BR /&gt;So that would only postpone and create a spike at file close, not provide improvements.&lt;BR /&gt;&lt;BR /&gt;The write-behind option is the strongest performance feature RMS has, and the strongest performance boost it can offer comes from going from 1 to 2 buffers; anything more than 2 buffers suggests that you are overloading the output device and will only marginally help by keeping stuff in the pipeline.&lt;BR /&gt;Last time I looked, the improvements in elapsed time trying 1,2,4,8,16 buffers were like 10, 6, 5, 4.5, 4.4&lt;BR /&gt;&lt;BR /&gt;Your mileage can and will vary based on CPU speed, source speed and storage deployed.&lt;BR /&gt;&lt;BR /&gt;btw.. Last time I tried 255 buffers for real, the application indeed went from low CPU usage to high CPU usage, but the amount of work done did not increase but decreased, as it only added CPU time and did nothing to speed up matters. How could it? The basic work of writing data to the disk still needs to be done. Larger buffers help that, but 'more than enough' buffers do not help more.&lt;BR /&gt;&lt;BR /&gt;Applications using random access may indeed very well benefit from more buffers, but a simple file copy is not one of those.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Hein.&lt;BR /&gt;</description>
      <pubDate>Wed, 19 Nov 2008 21:14:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308846#M15763</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2008-11-19T21:14:29Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308847#M15764</link>
      <description>Edgar,&lt;BR /&gt;&lt;BR /&gt;  Running jobs in parallel is only going to be an improvement if there are "gaps" in processing that parallelism can fill. For single stream I/O bound jobs, running multiple copies can convert a long stream of single I/Os into multiple streams and may give you a wallclock time win.&lt;BR /&gt;&lt;BR /&gt;  However, in the case of BACKUP it's doing asynch I/Os anyway, so at best it won't be an improvement, and at worst the streams will get in each other's way (as you have observed). Look at your T4 data for the job running. If you don't see any significant wait states, running multiple streams isn't the way to go (unless you have multiple CPUs and the I/O streams aren't competing at either end).&lt;BR /&gt;&lt;BR /&gt;  Timesharing is a must for interactive processing, but for batch jobs, even ignoring the context switching overhead, it's a losing proposition.&lt;BR /&gt;&lt;BR /&gt;  Consider, you have 2 jobs which each take 1 hour to process. If they're competing for resources, running them together will take 2 hours, so the mean run time is 2 hours.&lt;BR /&gt;&lt;BR /&gt;  Running them in sequence will still take 2 hours, but the first will complete after 1 hour, so the mean run time is 1.5 hours. Again, multiple CPUs may change this.</description>
      <pubDate>Wed, 19 Nov 2008 22:20:54 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308847#M15764</guid>
      <dc:creator>John Gillings</dc:creator>
      <dc:date>2008-11-19T22:20:54Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308848#M15765</link>
      <description>If your backup job is the only one writing to the disk, then a single-stream backup job will produce non-fragmented files.  When you have multiple backup jobs, each one gets an extent of space, and then the next job gets an extent.  The default was something like 1 block, so you can really produce fragmented files.  Can you use COPY/ALLOCATE instead of BACKUP, or back up locally and use COPY/ALLOCATE to move the save set over to the other node?  &lt;BR /&gt;&lt;BR /&gt;The SET RMS/EXTENT command on the remote side will help.&lt;BR /&gt;&lt;BR /&gt;SET RMS/EXTENT is also very helpful when creating save sets on disks.  On an Itanium server we saw a backup of 72 GB 15K disks on an MSA1000 slow down to 1.5 MB/s because the save set on disk was being extended one extent at a time.  We were performing over 4,000 FCP calls a second extending the save set.  The Ambassador I brought this up to didn't think engineering should be bothered with this problem.  I disagree.  One would hope BACKUP would be smart enough to ask for a reasonable extent based on how much it thinks it has to back up.</description>
      <pubDate>Mon, 24 Nov 2008 07:49:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308848#M15765</guid>
      <dc:creator>Cass Witkowski</dc:creator>
      <dc:date>2008-11-24T07:49:15Z</dc:date>
    </item>
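    <!--
    Editor's note: a minimal DCL sketch of the suggestions above, assuming the
    post's "SET RMS/EXTENT" refers to SET RMS_DEFAULT /EXTEND_QUANTITY. The
    device, directory and file names and the allocation size are illustrative
    assumptions, and the commands are an untested sketch, not a verified recipe.

    $! Grow new files in large chunks instead of the tiny default extent:
    $ SET RMS_DEFAULT /EXTEND_QUANTITY=65535   ! 65535 blocks is the maximum
    $ SHOW RMS_DEFAULT                         ! confirm the new setting
    $
    $! Or back up locally first, then move the finished save set with an
    $! explicit preallocation so the copy is laid down in one piece:
    $ BACKUP /IMAGE DKA100: DKA200:[BCK]FULL.BCK/SAVE_SET
    $ COPY /ALLOCATION=150000 DKA200:[BCK]FULL.BCK REMOTE::DKB0:[BCK]*
    -->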
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308849#M15766</link>
      <description>"If your backup job is the only one writing to the disk, then a single-stream backup job will produce non-fragmented files"&lt;BR /&gt;&lt;BR /&gt;I tried it with a DCL write script on an empty disk, and monitored it with defrag/int=decw, using the report's volume fragmentation graph.&lt;BR /&gt;&lt;BR /&gt;The extents are being allocated all over the disk, so the files are fragmented (actually on the first half of the disk to start with, but leaving big spaces between the fragments).&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 25 Nov 2008 07:47:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308849#M15766</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2008-11-25T07:47:34Z</dc:date>
    </item>
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308850#M15767</link>
      <description>&amp;gt; I tried it with a DCL write script on an empty disk.&lt;BR /&gt;&lt;BR /&gt;If the disk were empty due to deletion of files, the blocks occupied by those files would be held in the extent cache (if caching is active) and would then be the first used during creation of the new file. Within the cache, no attempt would have been made to consolidate adjacent blocks into one contiguous bundle. This might explain what you've observed.&lt;BR /&gt;&lt;BR /&gt;I'd expect that if your disk were mounted without an extent cache, all blocks would be allocated from the beginning of the BITMAP (low LBN) to the end (high LBN) in a contiguous manner (except for the placement of INDEXF.SYS, possibly in the middle of the block range). (I also realize that parameters such as no-cache were not previously part of this discussion - and if eliminating caching does not produce this behavior, I would be interested in learning that.)</description>
      <pubDate>Tue, 25 Nov 2008 13:03:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308850#M15767</guid>
      <dc:creator>Jim_McKinney</dc:creator>
      <dc:date>2008-11-25T13:03:08Z</dc:date>
    </item>
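    <!--
    Editor's note: Jim's no-cache experiment could be tried with something like
    the following untested sketch; the device name and volume label are made up.
    MOUNT /NOCACHE disables the ACP caching for the mount, including the extent
    cache the post suspects.

    $ DISMOUNT DKA100:
    $ MOUNT /NOCACHE DKA100: SCRATCH
    $! ... rerun the DCL write script, then inspect fragmentation again
    -->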
    <item>
      <title>Re: BACKUP over DECnet, file extensions, performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308851#M15768</link>
      <description>Bad luck: I tested on a newly initialized disk.&lt;BR /&gt;&lt;A href="http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1141655" target="_blank"&gt;http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1141655&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;I did all kinds of backups of 60 MB between an AS500 and an AS1000 without routers in between (yes, I know, very old hardware), with a 10 Mbit network card and VMS 7.3. All backups have source AS500 and destination AS1000.&lt;BR /&gt;&lt;BR /&gt;1. Backup with extend size 64: 40 min&lt;BR /&gt;2. Backup with extend size 50000: 35 min&lt;BR /&gt;3. Backup to an NFS disk (thus IP): 10 min&lt;BR /&gt;4. My posted backup script, but to the local node, destination an NFS disk (thus IP): 2 min&lt;BR /&gt;5. My posted backup script directly to the AS1000: 1 min&lt;BR /&gt;&lt;BR /&gt;I redid 1, 2 and 5 later with about the same performance. Of course, network load is not always identical.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Tue, 25 Nov 2008 14:02:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/backup-over-decnet-file-extensions-performance/m-p/4308851#M15768</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2008-11-25T14:02:38Z</dc:date>
    </item>
  </channel>
</rss>

