<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: VMS Poor SDLT performance in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351454#M3266</link>
    <description>Tim,&lt;BR /&gt;&lt;BR /&gt;Just shooting at everything that moves, hoping for a lucky hit:&lt;BR /&gt;&lt;BR /&gt;The backup user's account quotas were already mentioned, so this may be a duplicate, but double-check that the account's WSQUOTA &amp;amp; WSEXTENT are the same!&lt;BR /&gt;Uwe already mentioned file size and fragmentation, but really, 140 I/Os is not high for modern disks unless the transfers are unusually large (large file fragments), and THAT would get you very HIGH data rates, which you obviously do not have. A disk-I/O queue length of only 13 suggests you MAY have SYSGEN CHANNELCNT or UAF DIOLM throttling your performance. Double-check, please!&lt;BR /&gt;Like you said, 100% CPU for Backup is weird.&lt;BR /&gt;What does MONITOR say about that (pages, IO, modes)?&lt;BR /&gt;Your disks are SAN. Does SHOW DEV/FUL of your disk during backup indeed show a DGA path as current, or have you somehow fallen back to MSCP? (That happened to us. Slowdown factor of about 8.)&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Fri, 06 Aug 2004 14:49:48 GMT</pubDate>
    <dc:creator>Jan van den Ende</dc:creator>
    <dc:date>2004-08-06T14:49:48Z</dc:date>
    <item>
      <title>VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351445#M3257</link>
      <description>I have seen a number of posts but little that matches our environment.&lt;BR /&gt;I am experiencing very poor performance writing to an SDLT connected to an MDR on a SAN.&lt;BR /&gt;Using the VMS 7.3-2 backup command with the following parameters:&lt;BR /&gt;init /media_format=compaction $2$mga0: label&lt;BR /&gt;&lt;BR /&gt;mount /foreign/media_format=compaction $2$mga0: label&lt;BR /&gt;&lt;BR /&gt;backup/noalias/noassist/image/list=admin:[util.bku.log]'vol_name'.log 'devname' 'tape':'vol_name'.bck -&lt;BR /&gt; /save/media_format=compaction /block=32256 /IGNORE=(INTERLOCK,LABEL,NOBACKUP)&lt;BR /&gt;&lt;BR /&gt;I did not include specific tape model info as I hope it is not currently relevant.&lt;BR /&gt;&lt;BR /&gt;Current testing is showing only 16GB/hour.&lt;BR /&gt;I personally believe this is a terrible rate.&lt;BR /&gt;&lt;BR /&gt;Any tips or experiences that anyone wishes to share?&lt;BR /&gt;</description>
      <pubDate>Fri, 06 Aug 2004 11:13:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351445#M3257</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2004-08-06T11:13:12Z</dc:date>
    </item>
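A quick way to put the reported 16GB/hour in perspective is to convert it to MB/s. The sketch below is plain arithmetic; the 11 MB/s figure in the comment is an assumed native streaming rate for an SDLT 220-class drive, not a number from the thread.

```python
# Convert a GB/hour backup rate to MB/s (taking 1 GB = 1024 MB).
# The 11 MB/s figure below is an ASSUMED native streaming rate for an
# SDLT 220-class drive, not a number given in the thread.
def gb_per_hour_to_mb_per_s(gb_per_hour):
    return gb_per_hour * 1024 / 3600

print(f"{gb_per_hour_to_mb_per_s(16):.2f} MB/s")  # 16 GB/hour is about 4.55 MB/s
print(f"{gb_per_hour_to_mb_per_s(32):.2f} MB/s")  # 32 GB/hour is about 9.10 MB/s
```

At roughly 4.5 MB/s the drive would be well below an assumed 11 MB/s streaming rate, which fits the start/stop symptoms discussed in the replies.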
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351446#M3258</link>
      <description>Hi Tim,&lt;BR /&gt;I read a few threads in this forum about SDLT tapes; the most common solution is to increase the buffer size with the qualifier /BLOC=65536.&lt;BR /&gt;I don't have this type of tape so I can't help more, but you can search for SDLT in this forum.&lt;BR /&gt; &lt;BR /&gt;Antonio Vigliotti&lt;BR /&gt;</description>
      <pubDate>Fri, 06 Aug 2004 11:33:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351446#M3258</guid>
      <dc:creator>Antoniov.</dc:creator>
      <dc:date>2004-08-06T11:33:59Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351447#M3259</link>
      <description>Thanks. I must have missed that one.&lt;BR /&gt;&lt;BR /&gt;I will try it out.&lt;BR /&gt;&lt;BR /&gt;The data rate does seem to increase with more data:&lt;BR /&gt;&lt;BR /&gt;a device with 4GB used takes about 15 minutes;&lt;BR /&gt;a device with 12GB used takes about 30 minutes.&lt;BR /&gt;&lt;BR /&gt;This makes sense, as more data typically provides a better stream.</description>
      <pubDate>Fri, 06 Aug 2004 11:40:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351447#M3259</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2004-08-06T11:40:22Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351448#M3260</link>
      <description>Tim,&lt;BR /&gt;here are some links about SDLT troubles.&lt;BR /&gt;&lt;A href="http://tinyurl.com/474pc" target="_blank"&gt;http://tinyurl.com/474pc&lt;/A&gt;&lt;BR /&gt; &lt;BR /&gt;H.T.H.&lt;BR /&gt;Antonio Vigliotti&lt;BR /&gt;</description>
      <pubDate>Fri, 06 Aug 2004 11:49:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351448#M3260</guid>
      <dc:creator>Antoniov.</dc:creator>
      <dc:date>2004-08-06T11:49:05Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351449#M3261</link>
      <description>I do not think replacing /block=32256 with /block=65535 will dramatically improve the rate.&lt;BR /&gt;&lt;BR /&gt;If you do not have a /block qualifier, you use the default of 8192, which gives the worst results. 16,384 is enough to get good results, and it improves only marginally with higher values.&lt;BR /&gt;&lt;BR /&gt;Check that the account you use for backup has correct quotas according to the doc:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://pi-net.dyndns.org/docs/openvms0731/731final/6017/6017pro_046.html" target="_blank"&gt;http://pi-net.dyndns.org/docs/openvms0731/731final/6017/6017pro_046.html&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;11.7 Setting Process Quotas for Efficient Backups&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;To get good backup performance, you must "feed" the tape fast enough.</description>
      <pubDate>Fri, 06 Aug 2004 12:14:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351449#M3261</guid>
      <dc:creator>labadie_1</dc:creator>
      <dc:date>2004-08-06T12:14:05Z</dc:date>
    </item>
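labadie's point about /block can be illustrated with simple arithmetic: the block size fixes how many tape records a saveset needs, and fewer, larger records make it easier to keep the drive streaming. A minimal sketch (pure arithmetic, no VMS-specific behaviour assumed):

```python
import math

def tape_records(total_bytes, block_size):
    # Number of fixed-size tape records needed to write total_bytes.
    return math.ceil(total_bytes / block_size)

four_gb = 4 * 1024**3  # the 4GB test disk mentioned in the thread
for block in (8192, 32256, 65535):
    print(block, tape_records(four_gb, block))
```

Going from the 8192-byte default to a 65535-byte block cuts the record count by roughly a factor of eight, which is the mechanism behind the advice, even if the wall-clock gain is marginal above 16,384.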
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351450#M3262</link>
      <description>Update:&lt;BR /&gt;&lt;BR /&gt;The /block increase did not help much.&lt;BR /&gt;&lt;BR /&gt;Updating all the quotas recommended by the doc did not help much either.&lt;BR /&gt;&lt;BR /&gt;4GB is still taking 15 minutes (16GB/hour).&lt;BR /&gt;&lt;BR /&gt;All ideas are greatly appreciated.&lt;BR /&gt;&lt;BR /&gt;Thanks!!!!&lt;BR /&gt;</description>
      <pubDate>Fri, 06 Aug 2004 12:59:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351450#M3262</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2004-08-06T12:59:46Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351451#M3263</link>
      <description>Hello Tim,&lt;BR /&gt;what type is the disk that you are saving and what type of files do you have? Can you try a:&lt;BR /&gt;$ monitor DISK&lt;BR /&gt;&lt;BR /&gt;and check how many I/Os you get? It is quite possible that you are attempting to save a disk with lots of small files - in that case BACKUP needs to jump between INDEXF.SYS for the file headers and the file's data. While BACKUP tries to limit the seek distances by using an 'elevator' pattern it is possible that the disk is not fast enough.&lt;BR /&gt;&lt;BR /&gt;Make sure the /LIST output does not go back to the same disk - else you will cause additional I/Os that collide with BACKUP's.&lt;BR /&gt;&lt;BR /&gt;/IGNORE=NOBACKUP will save the contents of files that are marked NOBACKUP like PAGEFILE.SYS or SYSDUMP.DMP, but I guess you are using it to get all files for your test, right?</description>
      <pubDate>Fri, 06 Aug 2004 13:26:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351451#M3263</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-08-06T13:26:46Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351452#M3264</link>
      <description>The disk that is currently being backed up is doing about 140 IO/s.&lt;BR /&gt;Queue length is 13.&lt;BR /&gt;The /list file is not going to the same disk.&lt;BR /&gt;Some of these disks have a large number of smaller files and some others have larger files. I will take this into consideration and do some comparative testing.&lt;BR /&gt;CPU is at 100%, which is weird, and the backup process is the main user.&lt;BR /&gt;I will keep testing.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 06 Aug 2004 14:09:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351452#M3264</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2004-08-06T14:09:03Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351453#M3265</link>
      <description>OK, what type of disk (model number) is it? 140 IOs/second does not sound too unrealistic.&lt;BR /&gt;&lt;BR /&gt;You can try to limit the CPU load by using /NOCRC/GROUP=0, but that is only good for testing and not a serious backup, because it will turn off end-to-end checking and remove redundancies from the save-set.</description>
      <pubDate>Fri, 06 Aug 2004 14:47:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351453#M3265</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-08-06T14:47:03Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351454#M3266</link>
      <description>Tim,&lt;BR /&gt;&lt;BR /&gt;Just shooting at everything that moves, hoping for a lucky hit:&lt;BR /&gt;&lt;BR /&gt;The backup user's account quotas were already mentioned, so this may be a duplicate, but double-check that the account's WSQUOTA &amp;amp; WSEXTENT are the same!&lt;BR /&gt;Uwe already mentioned file size and fragmentation, but really, 140 I/Os is not high for modern disks unless the transfers are unusually large (large file fragments), and THAT would get you very HIGH data rates, which you obviously do not have. A disk-I/O queue length of only 13 suggests you MAY have SYSGEN CHANNELCNT or UAF DIOLM throttling your performance. Double-check, please!&lt;BR /&gt;Like you said, 100% CPU for Backup is weird.&lt;BR /&gt;What does MONITOR say about that (pages, IO, modes)?&lt;BR /&gt;Your disks are SAN. Does SHOW DEV/FUL of your disk during backup indeed show a DGA path as current, or have you somehow fallen back to MSCP? (That happened to us. Slowdown factor of about 8.)&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 06 Aug 2004 14:49:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351454#M3266</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-08-06T14:49:48Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351455#M3267</link>
      <description>Uwe,&lt;BR /&gt;The disks used are EMC 3930 BCV meta volumes via a separate FC HBA.&lt;BR /&gt;&lt;BR /&gt;I am getting some better rates now after upping the UAF quotas, i.e. DIOlm, BIOlm, and ASTlm.&lt;BR /&gt;&lt;BR /&gt;Averaging 32GB/hour. This fluctuates between filesystems with a lot of small files and others with large files. The range is from 17GB/h on small files to 43GB/h on fewer but larger files.&lt;BR /&gt;</description>
      <pubDate>Fri, 06 Aug 2004 15:46:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351455#M3267</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2004-08-06T15:46:08Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351456#M3268</link>
      <description>Jan,&lt;BR /&gt;Here are the newest quota settings.&lt;BR /&gt;Maxjobs:         0  Fillm:       128  Bytlm:        65536&lt;BR /&gt;Maxacctjobs:     0  Shrfillm:      0  Pbytlm:           0&lt;BR /&gt;Maxdetach:       0  BIOlm:       150  JTquota:       4096&lt;BR /&gt;Prclm:          10  DIOlm:      4096  WSdef:         2000&lt;BR /&gt;Prio:            4  ASTlm:      4096  WSquo:        16384&lt;BR /&gt;Queprio:         0  TQElm:        20  WSextent:     16384&lt;BR /&gt;CPU:        (none)  Enqlm:      2000  Pgflquo:      50000&lt;BR /&gt;&lt;BR /&gt;CHANNELCNT                    256        256         31      65535 Channels     &lt;BR /&gt;&lt;BR /&gt;Page info:  everything is at zero (except available memory and those that should not be at zero).&lt;BR /&gt;Modes info:  user mode at 93%, everything else negligible.&lt;BR /&gt;Direct IO Rate: around 400&lt;BR /&gt;Buffered IO Rate: around .66&lt;BR /&gt;Everything else is negligible.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 06 Aug 2004 15:54:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351456#M3268</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2004-08-06T15:54:35Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351457#M3269</link>
      <description>Jan,&lt;BR /&gt;&lt;BR /&gt;Forgot one answer.&lt;BR /&gt;&lt;BR /&gt;Not in a cluster config. No MSCP.</description>
      <pubDate>Fri, 06 Aug 2004 17:24:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351457#M3269</guid>
      <dc:creator>Tim Nelson</dc:creator>
      <dc:date>2004-08-06T17:24:00Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351458#M3270</link>
      <description>Tim,&lt;BR /&gt;SDLT tape writes quickly but slows down when the mechanism stops, so I guess that to run backup fast you have to avoid that event.&lt;BR /&gt;I think the best solution is to keep the disk unfragmented.&lt;BR /&gt;Because you do backup/image, I guess you are pretty much alone on the system when you execute it.&lt;BR /&gt;You could set the backup process to /PRIO=15/NOSWAP, but I'm not convinced this can help you a lot.&lt;BR /&gt;You can also run backup with the /FAST qualifier; if you have many files it can help.&lt;BR /&gt; &lt;BR /&gt;Antonio Vigliotti&lt;BR /&gt;</description>
      <pubDate>Sat, 07 Aug 2004 03:31:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351458#M3270</guid>
      <dc:creator>Antoniov.</dc:creator>
      <dc:date>2004-08-07T03:31:16Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351459#M3271</link>
      <description>Tim,&lt;BR /&gt;I reread your thread "VMS and EMC BCVs" where you changed BIOLM from 100 to 150.&lt;BR /&gt;This means your application requires some resources and can be a limit for backup.&lt;BR /&gt;So I guess you could set for the backup user&lt;BR /&gt;/FILLM=200/BIOLM=200/BYTLM=80000&lt;BR /&gt;I suggest you run AUTOGEN too.&lt;BR /&gt; &lt;BR /&gt;Antonio Vigliotti&lt;BR /&gt;</description>
      <pubDate>Sat, 07 Aug 2004 03:45:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351459#M3271</guid>
      <dc:creator>Antoniov.</dc:creator>
      <dc:date>2004-08-07T03:45:11Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351460#M3272</link>
      <description>Tim,&lt;BR /&gt;&lt;BR /&gt;The quotas you specify in the SYSUAF may be overruled by SYSGEN PQL parameters. Thus it is better to check what you actually received, using lexicals, SHOW PROC, or ANA/SYS.&lt;BR /&gt;&lt;BR /&gt;Even better: check continuously whether you reach your limits for the working set, DIO, FILLM, etc. I do that on all my systems and found more than one account with limits set too low.&lt;BR /&gt;&lt;BR /&gt;Also: the tape keeps the density with which it was initialized the first time until you specify a new one. If this is not a new tape, it may be using a density that is too low (slower). So add /DENS=xxx, with xxx being the highest level for your drive.&lt;BR /&gt;&lt;BR /&gt;And last: 32GB/hour is almost 10 MB/sec. What is the theoretical maximum without compression? The speed with compression is only valid if it is really compressing, which depends on what you back up. E.g. a backup of zipped files will gain nothing because there is nothing left to compress. So, this could be faster without compression.&lt;BR /&gt;&lt;BR /&gt;Wim&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Sat, 07 Aug 2004 03:51:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351460#M3272</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2004-08-07T03:51:34Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351461#M3273</link>
      <description>Tim,&lt;BR /&gt;&lt;BR /&gt;to explain WHY I give the upcoming advice, first some theory about the algorithms of BACKUP. It is implemented this way to aggressively optimise for speed, but that has some consequences.&lt;BR /&gt;&lt;BR /&gt;BACKUP essentially works in 4 phases: an initial one, and then a cycle of 3.&lt;BR /&gt;&lt;BR /&gt;First, it creates the list of header info of things-to-do. In the case of /IMAGE, simply by processing INDEXF.SYS.&lt;BR /&gt;All working-set space still available is then allocated for the transfer buffer.&lt;BR /&gt;&lt;BR /&gt;Now, phase 2: the things-to-do list is used to _ONLY_ map the files-to-be-processed (more exactly: the file EXTENTS) into the transfer buffer, and to build a list of I/O info for these segments.&lt;BR /&gt;This phase continues until the first of:&lt;BR /&gt;- the transfer buffer is exhausted =&amp;gt; that is why a WSQUOTA as large as possible is beneficial;&lt;BR /&gt;- the number of files (minus process-permanent files, minus image files, minus shared images) reaches FILLM =&amp;gt; this is why, if you are processing many small files, FILLM should be high enough;&lt;BR /&gt;- the number of file segments (minus ditto for the above) reaches CHANNELCNT =&amp;gt; which is why CHANNELCNT should be high enough, especially when processing small and/or fragmented files.&lt;BR /&gt;&lt;BR /&gt;Phase 3: Try to issue the I/O requests for ALL file extents at once. The number of requests is limited by DIOLM. This forces the disk drive into "heavy-load" mode, which means that I/Os are no longer processed first-come-first-served, but help-as-many-as-possible-as-quickly-as-possible. I.e., sweep the disk from one end to the other, and process all requests for every track the heads pass by. This minimises seek time, the largest delay in getting data from disk. Each extent goes into the location in the transfer buffer that yields one contiguous chunk when all extents are in. =&amp;gt; This shows why DIOLM must be high enough: if the number of I/O requests needed to fill this buffer is larger than DIOLM, then we need more than one disk sweep.&lt;BR /&gt;&lt;BR /&gt;Phase 4: (Do the necessary calculations for CRC, conversion to Backup format, etc.) and: issue ONE I/O of the TOTAL transfer buffer to tape. (Although at a lower level the hardware may still split up your transfer, but then you are really using your config at hardware capacity!)&lt;BR /&gt;And especially with tape units that need a relatively long time to start and stop, the size of this chunk can have an important influence on elapsed time.&lt;BR /&gt;&lt;BR /&gt;Then back to phase 2 until finished.&lt;BR /&gt;&lt;BR /&gt;Of course there are some caveats: if WSEXTENT is bigger than WSQUOTA, the working set may expand and shrink, and then the transfer buffer can no longer be held logically contiguous, and paging will interfere (very detrimental!) with the above scheme. =&amp;gt; WSEXTENT and WSQUOTA should be equal. (And watch out for (SYSGEN) WSMAX: that could push part of the working set into paging if it is lower than WSQUOTA.)&lt;BR /&gt;And of course physical memory should not be so restricted that part of the working set ends up in the PAGEFILE!!!&lt;BR /&gt;&lt;BR /&gt;So, now back to YOUR params.&lt;BR /&gt;&lt;BR /&gt;FILLM 128. If many small files are involved, I would be thinking more in terms of 1024 or 4096.&lt;BR /&gt;CHANNELCNT: double the FILLM of your backup account; if there are many heavily fragmented files, quadruple it.&lt;BR /&gt;I would also increase BYTLM, to the order of 1M or 2M. It is not in the above discussion because I don't know exactly how it influences BACKUP, but I have often found strange behaviour when it was too low (can't remember specifically for Backup, though).&lt;BR /&gt;&lt;BR /&gt;A word of warning IF you (or any other reader) apply this to HSG80-connected devices: those have a rather limited maximum I/O queue length (out of my head: 240, IIRC), and there IS an issue in some firmware versions where, if it gets saturated, it just goes mad and forgets to present the disks to the systems. VERY painful.&lt;BR /&gt;&lt;BR /&gt;Antonio:&lt;BR /&gt;since this is the only active process, I don't think PRIO=15 can give you any advantage, although I cannot right now think of how it might hurt either.&lt;BR /&gt;SET PROCESS /NOSWAP will have very little effect, since an application as aggressive as Backup is not really a swap-out candidate (unless it is waiting for a device that is not ready, but then there are other problems first).&lt;BR /&gt;&lt;BR /&gt;Well Tim,&lt;BR /&gt;I guess that will have to do for now.&lt;BR /&gt;&lt;BR /&gt;Success!&lt;BR /&gt;&lt;BR /&gt;Jan&lt;BR /&gt;</description>
      <pubDate>Sat, 07 Aug 2004 05:26:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351461#M3273</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2004-08-07T05:26:56Z</dc:date>
    </item>
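Jan's "sweep the disk from one end to the other" description is the classic elevator idea: servicing queued requests in block order instead of arrival order cuts total head movement. A toy illustration with hypothetical LBNs (a real elevator alternates direction; this simplification just sorts one batch):

```python
def total_seek(start, requests):
    # Sum of head movements when requests are serviced in the given order.
    pos, moved = start, 0
    for lbn in requests:
        moved += abs(lbn - pos)
        pos = lbn
    return moved

pending = [95, 10, 70, 25, 80, 5]        # hypothetical queued LBNs
fifo  = total_seek(50, pending)          # first-come-first-served: 365
sweep = total_seek(50, sorted(pending))  # one ordered sweep: 135
print(fifo, sweep)
```

The gap between the two totals is exactly why a deep queue (high DIOLM, enough CHANNELCNT) matters: the drive can only optimise requests it can actually see.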
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351462#M3274</link>
      <description>Playing with WSEXTENT does not make sense, because it is usually overruled by PQL_MWSEXTENT and this is set by AUTOGEN to WSMAX anyway (since VMS V6.0 or 6.1, I think).&lt;BR /&gt;&lt;BR /&gt;I would rather come back to the URL that Wim has given, because this page describes inter-dependencies of parameters. If they are not satisfied you can create corrupted savesets.</description>
      <pubDate>Sat, 07 Aug 2004 23:43:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351462#M3274</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-08-07T23:43:14Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351463#M3275</link>
      <description>Uwe,&lt;BR /&gt;sorry, but I don't agree with you.&lt;BR /&gt;&lt;QUOTE&gt;&lt;BR /&gt;Playing with WSEXTENT does not make sense, because it is usually overruled by PQL_MWSEXTENT [...]&lt;BR /&gt;&lt;/QUOTE&gt;&lt;BR /&gt;VMS assigns the greater of the WSEXTENT in SYSUAF or PQL_MWSEXTENT; so if you assign to a user a WSEXTENT greater than PQL_MWSEXTENT and smaller than or equal to WSMAX, that user can use a larger working set.&lt;BR /&gt;See the help in AUTHORIZE and SYSGEN (or SYSMAN).&lt;BR /&gt; &lt;BR /&gt;Antonio Vigliotti&lt;BR /&gt;</description>
      <pubDate>Sun, 08 Aug 2004 02:51:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351463#M3275</guid>
      <dc:creator>Antoniov.</dc:creator>
      <dc:date>2004-08-08T02:51:31Z</dc:date>
    </item>
    <item>
      <title>Re: VMS Poor SDLT performance</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351464#M3276</link>
      <description>Antonio,&lt;BR /&gt;I know how VMS assigns the process quota - that wasn't my point. I have checked a few systems and on all of them I see:&lt;BR /&gt;PQL_MWSEXTENT = WSMAX&lt;BR /&gt;&lt;BR /&gt;What do you see on your own systems for PQL_MWSEXTENT and WSMAX?</description>
      <pubDate>Sun, 08 Aug 2004 03:00:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/vms-poor-sdlt-performance/m-p/3351464#M3276</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2004-08-08T03:00:46Z</dc:date>
    </item>
  </channel>
</rss>

