Tim Nelson
Honored Contributor

VMS Poor SDLT performance

I have seen a number of posts, but little that matches our environment.
I am experiencing very poor performance writing to an SDLT connected to an MDR on a SAN.
We are using the VMS 7.3-2 BACKUP command with the following parameters:
init /media_format=compaction $2$mga0: label

mount /foreign/media_format=compaction $2$mga0: label

backup/noalias/noassist/image/list=admin:[util.bku.log]'vol_name'.log 'devname' 'tape':'vol_name'.bck -
/save/media_format=compaction /block=32256 /IGNORE=(INTERLOCK,LABEL,NOBACKUP)

I did not include specific tape model info as I hope it is not currently relevant.

Current testing is showing only 16GB/hour.
I personally believe this is a terrible rate.

Any tips or experiences that anyone wishes to share ?
145 REPLIES
Antoniov.
Honored Contributor
Solution

Re: VMS Poor SDLT performance

Hi Tim,
I have read a few threads in this forum about SDLT tapes; the most common solution is to increase the buffer size with the qualifier /BLOCK=65536.
I don't have this type of tape so I can't help more, but you can search for SDLT in this forum.
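For reference, a minimal sketch of that change applied to Tim's command (note: BACKUP's documented maximum is /BLOCK_SIZE=65535, and an even value such as 65534 is commonly used for tape):

$ BACKUP/IMAGE 'devname' 'tape':'vol_name'.BCK/SAVE_SET/BLOCK_SIZE=65534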

Antonio Vigliotti
Tim Nelson
Honored Contributor

Re: VMS Poor SDLT performance

Thanks. I must have missed that one.

I will try it out.

The data rate does seem to increase with more data.

A device with 4GB used takes about 15 minutes.
A device with 12GB used takes about 30 minutes.

This makes sense as more data typically provides a better stream.
Antoniov.
Honored Contributor

Re: VMS Poor SDLT performance

Tim,
here are some links about SDLT trouble:
http://tinyurl.com/474pc

H.T.H.
Antonio Vigliotti
labadie_1
Honored Contributor

Re: VMS Poor SDLT performance

I do not think replacing /block=32256 with /block=65535 will dramatically improve the rate.

If you do not have a /block qualifier, you get the default of 8192, which gives the worst results. 16384 is enough to get good results, and it improves only marginally with higher values.

Check that the account you use for backup has the correct quotas according to the doc:

http://pi-net.dyndns.org/docs/openvms0731/731final/6017/6017pro_046.html

11.7 Setting Process Quotas for Efficient Backups


To get good backup performance, you must "feed" the tape fast enough.
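A quick way to inspect those quotas (a sketch; BACKUP_USER stands for whatever account runs the backups):

$ RUN SYS$SYSTEM:AUTHORIZE
UAF> SHOW BACKUP_USER
UAF> EXIT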
Tim Nelson
Honored Contributor

Re: VMS Poor SDLT performance

Update:

The /block increase did not help much.

Updating all the quotas recommended by the doc did not help much either.

4GB is still taking 15 minutes (16GB/hour).

All ideas are greatly appreciated.

Thanks !!!!
Uwe Zessin
Honored Contributor

Re: VMS Poor SDLT performance

Hello Tim,
what type of disk are you saving, and what types of files do you have? Can you try a:
$ monitor DISK

and check how many I/Os you get? It is quite possible that you are attempting to save a disk with lots of small files - in that case BACKUP needs to jump between INDEXF.SYS for the file headers and the files' data. While BACKUP tries to limit the seek distances by using an 'elevator' pattern, it is possible that the disk is simply not fast enough.
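If you also want to see queuing, not just the raw rate, this standard MONITOR variant may help:

$ MONITOR DISK/ITEM=QUEUE_LENGTH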

Make sure the /LIST output does not go back to the same disk - else you will cause additional I/Os that collide with BACKUP's.

/IGNORE=NOBACKUP will save the contents of files that are marked NOBACKUP like PAGEFILE.SYS or SYSDUMP.DMP, but I guess you are using it to get all files for your test, right?
.
Tim Nelson
Honored Contributor

Re: VMS Poor SDLT performance

The disk that is currently being backed up is doing about 140 I/Os per second.
Queue length is 13.
The /list file is not going to the same disk.
Some of these disks have a large number of smaller files and others hold larger files. I will take this into consideration and do some comparative testing.
CPU is at 100%, which is weird, and the backup process is the main consumer.
I will keep testing.

Uwe Zessin
Honored Contributor

Re: VMS Poor SDLT performance

OK, what type of disk (model number) is it? 140 I/Os per second does not sound too unrealistic.

You can try to limit the CPU load by using /NOCRC/GROUP=0, but that is only good for testing and not for a serious backup, because it turns off end-to-end checking and removes redundancy from the save-set.
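A sketch of such a throughput-only test run (TEST.BCK is a hypothetical saveset name; again, not for production):

$ BACKUP/IMAGE/NOCRC/GROUP_SIZE=0 'devname' 'tape':TEST.BCK/SAVE_SET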
.
Jan van den Ende
Honored Contributor

Re: VMS Poor SDLT performance

Tim,

Just shooting at everything that moves, hoping for a lucky hit:

The backup account's quotas were already mentioned, so this might be a duplicate, but double-check that the account's WSQUOTA and WSEXTENT are the same!
Uwe already mentioned file size and fragmentation, but 140 I/Os is really not high for modern disks, unless the transfers are unusually large (large file fragments), and THAT would give you very HIGH data rates, which you obviously do not have. A disk-I/O queue length of only 13 suggests you MAY have SYSGEN CHANNELCNT or UAF DIOLM throttling your performance. Double-check, please! (See the sketch below.)
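One quick way to check both limits (CHANNELCNT via SYSGEN, the process quotas from DCL):

$ MCR SYSGEN
SYSGEN> SHOW CHANNELCNT
SYSGEN> EXIT
$ SHOW PROCESS/QUOTAS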
Like you said, 100% CPU for Backup is weird.
What does MONITOR say about that (pages, I/O, modes)?
Your disks are on a SAN. Does SHOW DEV/FUL of your disk during the backup indeed show a DGA path as current, or have you somehow fallen back to MSCP? (That happened to us; slow-down factor of about 8.)
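For example ($1$DGA100 is a hypothetical device name; check the path information in the output):

$ SHOW DEVICE/FULL $1$DGA100: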

Don't rust yours pelled jacker to fine doll missed aches.
Tim Nelson
Honored Contributor

Re: VMS Poor SDLT performance

Uwe,
The disks used are EMC 3930 BCV meta volumes via a separate FC HBA.

I am getting some better rates now after upping the UAF quotas, i.e. DIOLM, BIOLM, and ASTLM.

Averaging 32GB/hour. This fluctuates between filesystems with a lot of small files and those with large files: the range is from 17GB/h on many small files to 43GB/h on fewer, larger ones.
Tim Nelson
Honored Contributor

Re: VMS Poor SDLT performance

Jan,
Here are the newest quota settings.
Maxjobs: 0 Fillm: 128 Bytlm: 65536
Maxacctjobs: 0 Shrfillm: 0 Pbytlm: 0
Maxdetach: 0 BIOlm: 150 JTquota: 4096
Prclm: 10 DIOlm: 4096 WSdef: 2000
Prio: 4 ASTlm: 4096 WSquo: 16384
Queprio: 0 TQElm: 20 WSextent: 16384
CPU: (none) Enqlm: 2000 Pgflquo: 50000

CHANNELCNT 256 256 31 65535 Channels

Page info: everything is zero (except available memory and those that should not be at zero).
Modes info: user mode at 93%, everything else negligible.
Direct I/O rate: around 400.
Buffered I/O rate: around .66.
Everything else is negligible.

Tim Nelson
Honored Contributor

Re: VMS Poor SDLT performance

Jan,

Forgot one answer.

Not in a cluster config. No MSCP.
Antoniov.
Honored Contributor

Re: VMS Poor SDLT performance

Tim,
SDLT tape is quick while writing but slow when the mechanism stops and restarts, so I guess that to run backup fast you have to avoid that event (keep the drive streaming).
I think the best solution is to keep the disk unfragmented.
Because you do a backup/image, I guess you are more or less alone on the system when you execute it.
You could set the backup process to /PRIO=15/NOSWAP, but I'm not convinced this can help you a lot (see the sketch below).
You can also run backup with the /FAST qualifier; if you have many files, it can help.
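A sketch of those process settings in DCL (raising priority above the base needs the ALTPRI privilege; disabling swap needs PSWAPM):

$ SET PROCESS/PRIORITY=15/NOSWAPPING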

Antonio Vigliotti
Antoniov.
Honored Contributor

Re: VMS Poor SDLT performance

Tim,
I reread your thread "VMS and EMC BCVs", where you changed BIOLM from 100 to 150.
This suggests your application requires resources that can limit backup.
So I guess you could set, for the backup user:
/FILLM=200/BIOLM=200/BYTLM=80000
I suggest you run AUTOGEN too.

Antonio Vigliotti
Wim Van den Wyngaert
Honored Contributor

Re: VMS Poor SDLT performance

Tim,

The quotas you specify in the SYSUAF may be overruled by the SYSGEN PQL parameters. Thus it is better to check what you actually received, using lexical functions, SHOW PROCESS, or ANALYZE/SYSTEM.

And even better: check continuously whether you are reaching your limits for working set, DIO, FILLM, etc. I do that on all my systems and have found more than one account with limits that were too low.
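For example, from DCL (F$GETJPI item codes; an empty PID string means the current process, and the *CNT items report the remaining quota):

$ WRITE SYS$OUTPUT "Open files remaining: ", F$GETJPI("","FILCNT"), "/", F$GETJPI("","FILLM")
$ WRITE SYS$OUTPUT "Direct I/O remaining: ", F$GETJPI("","DIOCNT"), "/", F$GETJPI("","DIOLM")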

Also: the tape keeps the density with which it was initialized the first time, until you specify a new one. If this is not a new tape, it may be using a density that is too low (and therefore slower). So add /DENS=xxx, with xxx being the highest level for your drive.
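If in doubt, re-initializing the tape (as in Tim's original INIT command) resets it before the run:

$ INIT/MEDIA_FORMAT=COMPACTION $2$MGA0: LABEL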

And last: 32GB/hour is almost 10 MB/sec. What is the theoretical maximum without compression? The compressed speed is only achieved if the data really compresses, which depends on what you back up. E.g. a backup of zipped files will gain nothing, because there is nothing left to compress. So this could even be faster without compression.
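For scale (assuming an SDLT 320-class drive, whose commonly quoted native rate is about 16 MB/s; SDLT 220 is about 11 MB/s):

32 GB/hour = 32768 MB / 3600 s ≈ 9.1 MB/s

i.e. barely above half the native rate of the faster drive, so the tape is probably still not streaming continuously.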

Wim
Jan van den Ende
Honored Contributor

Re: VMS Poor SDLT performance

Tim,

to explain WHY I give the upcoming advice, first some theory about the algorithms of BACKUP. It is implemented this way to aggressively optimize for speed, but that has some consequences.

BACKUP essentially works in 4 phases: an initial one, and then a cycle of 3.

First, it creates the list of header info for the things-to-do. In the case of /IMAGE, simply by processing INDEXF.SYS.
All working-set space still available is then allocated for the transfer buffer.

Now, phase 2: the things-to-do list is used to _ONLY_ map the files-to-be-processed (more exactly: the file EXTENTS) into the transfer buffer, and to build a list of I/O info for these segments.
This phase continues until the first of:
- the transfer buffer is exhausted => that is why a large-as-possible WSQUOTA is beneficial.
- the number of files (minus process-permanent files, minus image files, minus shared images) reaches FILLM => this is why, if you are processing many small files, FILLM should be high enough.
- the number of file segments (minus ditto as above) reaches CHANNELCNT => which is why CHANNELCNT should be high enough, especially when processing small and/or fragmented files.

Phase 3: Try to issue the I/O requests for ALL file extents at once. The number of requests is limited by DIOLM. This forces the disk drive into "heavy-load" mode, which means that I/Os are no longer processed first-come-first-served, but help-as-many-as-possible-as-quickly-as-possible. I.e., sweep the disk from one end to the other and process all requests for every track the heads pass. This minimizes seek time, the largest delay in getting data from disk. Each extent goes into the location in the transfer buffer that yields one contiguous chunk once all extents are in. => This shows why DIOLM must be high enough: if the number of I/O requests needed to fill this buffer is larger than DIOLM, we need more than one disk sweep.

Phase 4: (Do the necessary calculations for CRC, conversion to Backup format, etc.) and: issue ONE I/O of the TOTAL transfer buffer to tape. (At a lower level the hardware may still split up your transfer, but then you are really using your config at hardware capacity!)
And especially with tape units that need a relatively long time to start and stop, the size of this chunk can have an important influence on elapsed time.

Then back to phase 2 until finished.

Of course there are some caveats: if WSEXTENT is bigger than WSQUOTA, the working set may expand and shrink, and then the transfer buffer can no longer be held logically contiguous, and paging will interfere (very detrimental!) with the above scheme. => WSEXTENT and WSQUOTA should be equal. (And watch out for the (SYSGEN) WSMAX:
that could force part of the working set into paging if it is lower than WSQUOTA.)
And of course physical memory should not be so restricted that part of the working set ends up in the PAGEFILE!!!

So, now back to YOUR params.

FILLM 128: if many small files are involved, I would be thinking more in terms of 1024 or 4096.
CHANNELCNT: double the FILLM of your backup account; with many heavily fragmented files, quadruple it.
I would also increase BYTLM, to the order of 1M or 2M. It is not in the above discussion because I don't know exactly how it influences BACKUP, but I have often found strange behaviour when it was too low (can't remember specifically for Backup, though). A sketch of applying these follows below.
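Something like this (BACKUP_USER is a hypothetical account name; the values follow the suggestions above; CHANNELCNT itself is a SYSGEN parameter and changing it needs a reboot):

$ RUN SYS$SYSTEM:AUTHORIZE
UAF> MODIFY BACKUP_USER/FILLM=4096/BYTLM=2000000/WSQUOTA=16384/WSEXTENT=16384
UAF> EXIT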

A word of warning IF you (or any other reader) apply this to HSG80-connected devices: those have a rather limited maximum I/O queue length (out of my head: 240, IIRC), and there IS an issue in some firmware versions where, if it gets saturated, it just goes mad and forgets to present the disks to the systems. VERY painful.

Antonio:
since this is the only active process, I don't think PRIO=15 can give any advantage, although I cannot right now think of how it might hurt either.
SET PROCESS/NOSWAP will have very little effect, since an application as aggressive as Backup is not really a swap-out candidate (unless it is waiting for a device that is not ready, but then there are other problems first).

Well Tim,
I guess that will have to do for now.

Success!

Jan
Don't rust yours pelled jacker to fine doll missed aches.
Uwe Zessin
Honored Contributor

Re: VMS Poor SDLT performance

Playing with WSEXTENT does not make sense, because it is usually overruled by PQL_MWSEXTENT and this is set by AUTOGEN to WSMAX anyway (since VMS V6.0 or 6.1, I think).

I would rather come back to the quota URL given earlier in this thread, because that page describes the inter-dependencies of the parameters. If they are not satisfied, you can create corrupted savesets.
.
Antoniov.
Honored Contributor

Re: VMS Poor SDLT performance

Uwe,
sorry, but I don't agree with you:

Playing with WSEXTENT does not make sense, because it is usually overruled by PQL_MWSEXTENT [...]

VMS assigns the greater of the SYSUAF WSEXTENT and PQL_MWSEXTENT; so if you give a user a WSEXTENT greater than PQL_MWSEXTENT and smaller than or equal to WSMAX, that user gets a larger working set.
See the help in AUTHORIZE and SYSGEN (or SYSMAN).

Antonio Vigliotti
Uwe Zessin
Honored Contributor

Re: VMS Poor SDLT performance

Antonio,
I know how VMS assigns the process quota - that wasn't my point. I have checked a few systems and on all of them I see:
PQL_MWSEXTENT = WSMAX
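This is easy to verify with SYSGEN's SHOW commands:

$ MCR SYSGEN
SYSGEN> SHOW PQL_MWSEXTENT
SYSGEN> SHOW WSMAX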

What do you see on your own systems for PQL_MWSEXTENT and WSMAX?
.
Antoniov.
Honored Contributor

Re: VMS Poor SDLT performance

Uwe,
on my own system it's "strange" :-?
WSMAX = PQL_MWSEXTENT
Now I'm investigating that.

Cheers
Antonio Vigliotti
Uwe Zessin
Honored Contributor

Re: VMS Poor SDLT performance

Hello Antonio,
it might look 'strange' to you, but it has been that way for many years and many OpenVMS releases, as I already said.

Here is another link for proof:
http://h71000.www7.hp.com/wizard/wiz_4523.html
""... Since OpenVMS V6.0, AUTOGEN will, by default, set PQL_MWSEXTENT to WSMAX. ...""
.
Jan van den Ende
Honored Contributor

Re: VMS Poor SDLT performance

Well...

Uwe, it appears you are right on this one, although I am as surprised as Antonio seems to be!
I will have to dig up that rather elaborate article on Backup performance again, which I retold above largely from memory.
Since I remember my assignment when reading (and implementing) it, it MUST have been in the VMS 6.x timeframe, and that seems to collide with your info.
It looks like in those days not every VMS developer was aware of all the related changes either.
The WSQUOTA-WSEXTENT story always seemed quite consistent to me.
Strange.

Jan
Don't rust yours pelled jacker to fine doll missed aches.
Uwe Zessin
Honored Contributor

Re: VMS Poor SDLT performance

Hello Jan,
I think much of the writing about BACKUP performance is based on the initial release. The rewrite of BACKUP was released with VAX/VMS V5.2. That is even earlier than the changes to the memory system that came with VAX/VMS V5.4-3 (ticker/troller to proactively purge working sets and then swap out processes).

Some years ago I attended a DECUS symposium here in Germany. Somebody from VMS mentioned that one should not use such high values of DIOLM (4096), because the subsystems cannot meet them anyway. Even the EVA supports 'only' 2048 outstanding I/Os on a single controller port.

It looks like another case where the documentation has not kept up:
http://h71000.www7.hp.com/doc/732FINAL/aa-pv5mh-tk/00/01/119-con.html
.
Jiri_5
Frequent Advisor

Re: VMS Poor SDLT performance

Our experience with backup to SDLT on MDR and SAN is 50-60 GB per hour. The account has these quotas:
Fillm:  2968       Bytlm:    5000000
BIOlm:  2968       JTquota:  16384
DIOlm:  200        WSdef:    20000
ASTlm:  9004       WSquo:    837222
TQElm:  100        WSextent: 1196032
Enqlm:  5120       Pgflquo:  2000000

$ backup/nover/log/record/fast/block=65534/ignore=interlock/norew/med=com/lab=XXXXXX 'from' $2$MGA21:from.bck