VMS Poor SDLT performance
08-06-2004 04:13 AM
I am experiencing very poor performance writing to an SDLT connected to an MDR on a SAN.
I am using the VMS 7.3-2 BACKUP command with the following parameters:
init /media_format=compaction $2$mga0: label
mount /foreign/media_format=compaction $2$mga0: label
backup/noalias/noassist/image/list=admin:[util.bku.log]'vol_name'.log 'devname' 'tape':'vol_name'.bck -
/save/media_format=compaction /block=32256 /IGNORE=(INTERLOCK,LABEL,NOBACKUP)
I did not include specific tape model info as I hope it is not currently relevant.
Current testing is showing only 16GB/hour.
I personally believe this is a terrible rate.
Any tips or experiences that anyone wishes to share?
08-06-2004 04:33 AM
I read a few threads in this forum about SDLT tapes; the most common solution is to increase the buffer size with the qualifier /BLOCK=65536.
I don't have this type of tape so I can't help more, but you can search for SDLT in this forum.
Antonio Vigliotti
08-06-2004 04:40 AM
Re: VMS Poor SDLT performance
I will try it out.
The data rate does seem to increase with more data:
a device with 4GB used takes about 15 minutes;
a device with 12GB used takes about 30 minutes.
This makes sense, as more data typically keeps the tape streaming better.
08-06-2004 04:49 AM
Re: VMS Poor SDLT performance
Here are some links about SDLT trouble:
http://tinyurl.com/474pc
H.T.H.
Antonio Vigliotti
08-06-2004 05:14 AM
Re: VMS Poor SDLT performance
If you do not have a /BLOCK qualifier, you use the default of 8192, which gives the worst results. 16384 is enough to get good results, and it improves only marginally with higher values.
Check that the account you use for backup has the correct quotas according to the doc:
http://pi-net.dyndns.org/docs/openvms0731/731final/6017/6017pro_046.html
11.7 Setting Process Quotas for Efficient Backups
To get good backup performance, you must "feed" the tape fast enough.
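For readers wanting a concrete starting point, the quota change could look like the sketch below; the username and all values are illustrative only, not taken from the thread:

```
$ ! Sketch: raise the backup account's quotas (BACKUP_USER is a placeholder)
$ RUN SYS$SYSTEM:AUTHORIZE
UAF> MODIFY BACKUP_USER /WSQUOTA=16384 /WSEXTENT=16384 -
_UAF> /FILLM=1024 /DIOLM=256 /BIOLM=256 /BYTLM=200000
UAF> EXIT
$ ! The new quotas take effect at the account's next login
```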
08-06-2004 05:59 AM
Re: VMS Poor SDLT performance
The /BLOCK increase did not help much.
Updating all quotas recommended by the doc did not help much either.
4GB is still taking 15 minutes (16GB/hour).
All ideas are greatly appreciated.
Thanks!!!!
08-06-2004 06:26 AM
Re: VMS Poor SDLT performance
What type is the disk that you are saving from, and what type of files do you have? Can you try a:
$ MONITOR DISK
and check how many I/Os you get? It is quite possible that you are attempting to save a disk with lots of small files - in that case BACKUP needs to jump between INDEXF.SYS for the file headers and the files' data. While BACKUP tries to limit the seek distances by using an 'elevator' pattern, it is possible that the disk is simply not fast enough.
Make sure the /LIST output does not go back to the same disk - else you will cause additional I/Os that collide with BACKUP's.
/IGNORE=NOBACKUP will save the contents of files that are marked NOBACKUP like PAGEFILE.SYS or SYSDUMP.DMP, but I guess you are using it to get all files for your test, right?
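Watching the source disk while the backup runs could be done roughly as below; the device name is a placeholder:

```
$ ! Sample the I/O rate and queue length every few seconds during the backup
$ MONITOR DISK/ITEM=QUEUE_LENGTH/INTERVAL=5
$ ! Or restrict the display to the disk being saved:
$ MONITOR DISK/ITEM=QUEUE_LENGTH $1$DGA100:
```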
08-06-2004 07:09 AM
Re: VMS Poor SDLT performance
Queue length 13.
The /LIST file is not going to the same disk.
Some of these disks do have a large number of smaller files and others have larger files. I will take this into consideration and do some comparative testing.
CPU is at 100%, which is weird, and the backup process is the main user.
I will keep testing.
08-06-2004 07:47 AM
Re: VMS Poor SDLT performance
You can try to limit the CPU load by using /NOCRC/GROUP=0, but that is only good for testing and not a serious backup, because it will turn off end-to-end checking and remove redundancies from the save-set.
08-06-2004 07:49 AM
Re: VMS Poor SDLT performance
Just shooting at everything that moves, hoping for a lucky hit:
The backup user's account quotas were already mentioned, so this might be a duplicate, but double-check that the account's WSQUOTA & WSEXTENT are the same!
Uwe already mentioned file size and fragmentation, but really, 140 I/Os is not high for modern disks unless the transfers are unusually large (large file fragments), and THAT would get you very HIGH data rates, which you obviously do not have. A disk-I/O queue length of only 13 suggests you MAY have SYSGEN CHANNELCNT or UAF DIOLM throttling your performance. Double-check please!
Like you said, 100% CPU for Backup is weird.
What does MONITOR say about that (pages, IO, modes)?
Your disks are SAN. Does SHOW DEV/FUL of your disk during backup indeed show a DGA path as current, or have you somehow fallen back to MSCP serving? (Happened to us - slow-down factor about 8.)
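A quick way to check the current path is sketched below; the device name is a placeholder:

```
$ ! Inspect the path/host information in the full device display; a native
$ ! fibre path shows on the $1$DGAnnnn: device, an MSCP served path does not
$ SHOW DEVICE/FULL $1$DGA100:
```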
08-06-2004 08:46 AM
Re: VMS Poor SDLT performance
The disks used are EMC 3930 BCV meta volumes via a separate FC HBA.
I am getting some better rates now after upping the UAF quotas, i.e. DIOlm, BIOlm, and ASTlm.
Averaging 32GB/hour. This fluctuates between filesystems with a lot of small files and ones with large files: the range is from 17GB/h on small files to 43GB/h on fewer but larger files.
08-06-2004 08:54 AM
Re: VMS Poor SDLT performance
Here are the newest quota settings.
Maxjobs: 0 Fillm: 128 Bytlm: 65536
Maxacctjobs: 0 Shrfillm: 0 Pbytlm: 0
Maxdetach: 0 BIOlm: 150 JTquota: 4096
Prclm: 10 DIOlm: 4096 WSdef: 2000
Prio: 4 ASTlm: 4096 WSquo: 16384
Queprio: 0 TQElm: 20 WSextent: 16384
CPU: (none) Enqlm: 2000 Pgflquo: 50000
CHANNELCNT: current 256, default 256, min 31, max 65535 (channels)
Page info: everything is zero (except available memory and those that should not be zero).
Modes info: user mode at 93%, everything else negligible.
Direct I/O rate: around 400.
Buffered I/O rate: around 0.66.
Everything else is negligible.
08-06-2004 10:24 AM
Re: VMS Poor SDLT performance
Forgot one answer.
Not in a cluster config. No MSCP.
08-06-2004 08:31 PM
Re: VMS Poor SDLT performance
SDLT tape is quick to write but slow when the mechanism stops and restarts, so I guess to run the backup fast you have to avoid this event (keep the drive streaming).
I think the best solution is to keep the disk unfragmented.
Because you do a BACKUP/IMAGE, I guess almost nobody else is active when you execute it.
You could set the backup process /PRIO=15/NOSWAP, but I'm not convinced this can help you a lot.
You can also run the backup with the /FAST qualifier; if you have many files it can help.
Antonio Vigliotti
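As a rough sketch, those suggestions translate to commands like these; the PID and device names are placeholders:

```
$ ! Raise the priority of the running backup process and keep it resident
$ ! (raising priority above the base requires the ALTPRI privilege)
$ SET PROCESS/PRIORITY=15/NOSWAP/IDENTIFICATION=2020004A
$ ! Or start the backup with /FAST, which uses a fast scan of the index file
$ BACKUP/IMAGE/FAST DKA100: $2$MGA0:IMAGE.BCK/SAVE_SET
```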
08-06-2004 08:45 PM
Re: VMS Poor SDLT performance
I reread your thread "VMS and EMC BCVs" where you changed BIOLM from 100 to 150.
This suggests your application requires resources and can limit what is left for BACKUP.
So I guess you could set for the backup user:
/FILLM=200/BIOLM=200/BYTLM=80000
I suggest you run AUTOGEN too.
Antonio Vigliotti
08-06-2004 08:51 PM
Re: VMS Poor SDLT performance
The quotas you specify in the SYSUAF may be overruled by SYSGEN PQL parameters. Thus it is better to check what the process actually received, using lexicals, SHOW PROCESS or ANALYZE/SYSTEM.
And even better: check continuously whether you reach your limits for the working set, DIO, FILLM etc. I do that on all my systems and found more than one account with limits set too low.
Also: the tape keeps the density with which it was initialized the first time, until you specify a new one. If this is not a new tape, it may be using a density that is too low (slower). So add /DENSITY=xxx, with xxx being the highest level for your drive.
And last: 32GB/hour is almost 10 MB/sec. What is the theoretical maximum without compression? The speed with compression is only valid if it is really compressing, which depends on what you back up. E.g. a backup of zipped files gains nothing, because there is nothing left to compress. So this could even be faster without compression.
Wim
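Checking what the process actually received can be done with the F$GETJPI lexical, for example:

```
$ ! Show the effective limits of the current (backup) process
$ WRITE SYS$OUTPUT "FILLM:   ", F$GETJPI("","FILLM")
$ WRITE SYS$OUTPUT "DIOLM:   ", F$GETJPI("","DIOLM")
$ WRITE SYS$OUTPUT "BYTLM:   ", F$GETJPI("","BYTLM")
$ WRITE SYS$OUTPUT "WSQUOTA: ", F$GETJPI("","WSQUOTA")
```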
08-06-2004 10:26 PM
Re: VMS Poor SDLT performance
To explain WHY I give the upcoming advice, first some theory about the algorithms of BACKUP. It is implemented this way to aggressively optimise for speed, but that has some consequences.
BACKUP essentially works in 4 phases: an initial one, and then a cycle of 3.
First, it creates the list of header info of things-to-do. In the case of /IMAGE, simply by processing INDEXF.SYS.
All working-set space still available is then allocated for the transfer buffer.
Now, phase 2: the things-to-do list is used to _ONLY_ map the files-to-be-processed (more exactly: the file EXTENTS) into the transfer buffer, and build a list of I/O info for these segments.
This phase continues until the first of:
- transfer buffer exhausted => that is why an as-large-as-possible WSQUOTA is beneficial.
- the number of files (minus process-permanent files, minus image files, minus shared images) reaches FILLM => this is why, if you are processing many small files, FILLM should be high enough.
- the number of file segments (minus ditto for the above) reaches CHANNELCNT => which is why CHANNELCNT should be high enough, especially when processing small and/or fragmented files.
Phase 3: try to issue the I/O requests for ALL file extents at once, the number of outstanding requests limited by DIOLM. This forces the disk drive into "heavy-load" mode, which means I/Os are no longer processed first-come-first-served, but help-as-many-as-possible-as-quickly-as-possible. I.e., sweep the disk from one end to the other, and process all requests for every track the heads pass. This minimises seek time, the largest delay in getting data from disk. Each extent goes into the location in the transfer buffer that yields a contiguous chunk once all extents are in. => This shows why DIOLM must be high enough: if the number of I/O requests needed to fill this buffer is larger than DIOLM, then we need more than one disk sweep.
Phase 4: do the necessary calculations (CRC, conversion to BACKUP format etc.) and issue ONE I/O of the TOTAL transfer buffer to tape (although at a lower level the hardware may still split up your transfer - but then you are really using your config at the hardware's capacity!).
And especially with tape units that need a relatively long time to start and stop, the size of this chunk can have an important influence on elapsed time.
Then back to phase 2 until finished.
Of course there are some caveats: if WSEXTENT is bigger than WSQUOTA, the working set may expand and shrink, and then the transfer buffer can no longer be held logically contiguous, and paging will interfere (very detrimental!) with the above scheme. => WSEXTENT and WSQUOTA should be equal. (And watch out for the SYSGEN parameter WSMAX: if it is lower than WSQUOTA, it can push part of the working set into paging.)
And of course physical memory should not be so restricted that part of the working set ends up in the PAGEFILE!
So, now back to YOUR params.
FILLM 128: if many small files are involved, I would be thinking more in terms of 1024 or 4096.
CHANNELCNT: double the FILLM of your backup account; if there are many heavily fragmented files, quadruple it.
I would also increase BYTLM, to the order of 1M or 2M. It is not in the above discussion because I don't know exactly how it influences BACKUP, but I have often found strange behaviour when it was too low (can't remember specifically for BACKUP, though).
A word of warning IF you (or any other reader) apply this to HSG80-connected devices: those have a rather limited maximum I/O queue length (out of my head: 240, IIRC), and there IS an issue in some firmware versions where, if it gets saturated, it just goes mad and forgets to present the disks to the systems. VERY painful.
Antonio:
since this is the only active process, I don't think /PRIO=15 can give any advantage, although I cannot right now think of how it might hurt either.
SET PROCESS /NOSWAP will have very little effect, since an application as aggressive as BACKUP is not really a swap-out candidate (unless it is waiting for a device that is not ready, but then there are other problems first).
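Checking the current CHANNELCNT value is straightforward; the suggested value below is illustrative, and a permanent change belongs in MODPARAMS.DAT:

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SHOW CHANNELCNT
SYSGEN> EXIT
$ ! For a permanent change: add CHANNELCNT = 2048 to
$ ! SYS$SYSTEM:MODPARAMS.DAT and run AUTOGEN
$ ! (CHANNELCNT is not a dynamic parameter, so a reboot is needed)
```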
Well Tim,
I guess that will have to do for now.
Success!
Jan
08-07-2004 04:43 PM
Re: VMS Poor SDLT performance
I would rather come back to the URL that Wim has given, because this page describes inter-dependencies of parameters. If they are not satisfied you can create corrupted savesets.
08-07-2004 07:51 PM
Re: VMS Poor SDLT performance
Sorry, but I don't agree with you.
"Playing with WSEXTENT does not make sense, because it is usually overruled by PQL_MWSEXTENT [...]"
VMS assigns the greater of WSEXTENT in SYSUAF or PQL_MWSEXTENT; so if you assign to any user a WSEXTENT greater than PQL_MWSEXTENT and smaller than or equal to WSMAX, that user can use a larger working set.
See the help in AUTHORIZE and SYSGEN (or SYSMAN).
Antonio Vigliotti
08-07-2004 08:00 PM
Re: VMS Poor SDLT performance
I know how VMS assigns the process quota - that wasn't my point. I have checked a few systems and on all of them I see:
PQL_MWSEXTENT = WSMAX
What do you see on your own systems for PQL_MWSEXTENT and WSMAX?
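The comparison is easy to make with SYSGEN:

```
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SHOW WSMAX
SYSGEN> SHOW PQL_MWSEXTENT
SYSGEN> EXIT
```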
08-07-2004 08:25 PM
Re: VMS Poor SDLT performance
On my own system it's "strange" :-?
WSMAX = PQL_MWSEXTENT
Now I'm investigating that.
Cheers
Antonio Vigliotti
08-07-2004 08:49 PM
Re: VMS Poor SDLT performance
It might look 'strange' to you, but it has been that way for many years and OpenVMS releases, like I already said.
Here is another link for proof:
http://h71000.www7.hp.com/wizard/wiz_4523.html
""... Since OpenVMS V6.0, AUTOGEN will, by default, set PQL_MWSEXTENT to WSMAX. ...""
08-07-2004 10:40 PM
Re: VMS Poor SDLT performance
Uwe, it appears you are right on this one, although I am as surprised as Antonio seems to be!
I will have to dig up again the rather elaborate article on Backup performance, which I retold above largely from memory.
Since I remember my assignment when reading (and implementing) it, it MUST have been in the VMS 6.x timeframe, and that seems to collide with your info.
Looks like in those days not every VMS developer was aware of all related changes either.
The WSQUOTA -WSEXTENT story always seemed quite consistent to me.
Strange.
Jan
08-07-2004 11:01 PM
Re: VMS Poor SDLT performance
I think much of the writing about BACKUP performance is based on the initial release. The rewrite of BACKUP was released with VAX/VMS V5.2. That is even earlier than the changes to the memory system that came with VAX/VMS V5.4-3 (ticker/troller to proactively purge working sets and then swap out processes).
Some years ago I attended a DECUS symposium here in Germany. Somebody from VMS mentioned that one should not use those high values of DIOLM (4096) because the subsystems cannot meet them anyway. Even the EVA supports 'only' 2048 outstanding I/Os on a single controller port.
It looks like another case where the documentation has not kept up:
http://h71000.www7.hp.com/doc/732FINAL/aa-pv5mh-tk/00/01/119-con.html
08-08-2004 07:04 PM
Re: VMS Poor SDLT performance
Fillm: 2968 Bytlm: 5000000
BIOlm: 2968 JTquota: 16384
DIOlm: 200 WSdef: 20000
ASTlm: 9004 WSquo: 837222
TQElm: 100 WSextent: 1196032
Enqlm: 5120 Pgflquo: 2000000
$backup/nover/log/record/fast/block=65534/ignore=interlock/norew/med=com/lab=XXXXXX 'from' $2$MGA21:from.bck