VMS Poor SDLT performance
08-08-2004 07:17 PM
Re: VMS Poor SDLT performance
Yes, I don't know of any subsystem that could handle 4096 concurrent I/Os. The setting just gives you the maximum, and that is not so bad (unless part of the subsystem can be trashed by it, like the HSG80 with some firmware versions; that constitutes a BUG, but it can hurt).
Your pointer DID show me that working purely from memory is definitely not perfect (at least in my case :-( ).
I totally forgot about ASTLM and ENQLM.
Insert into the above:
Each I/O that is issued has to be kept track of, and that is done by declaring an AST that will trigger when the I/O completes. So you need at least one AST for each I/O. If the number of generated I/Os would exceed ASTLM, then phase 2 above also transfers control to phase 3.
I don't know where exactly ENQLM comes in. It is the number of entries you can have in timer queues, and it is not clear to me how they would be used in BACKUP.
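As a sketch of how these process quotas could be raised (the account name and values below are hypothetical; pick values to suit your workload), the limits live in the UAF and are changed with AUTHORIZE:

```
$ SET DEFAULT SYS$SYSTEM
$ RUN AUTHORIZE
UAF> MODIFY BACKUP_USER /ASTLM=500 /DIOLM=300 /ENQLM=2000
UAF> EXIT
$ ! The new quotas take effect the next time that account logs in.
```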
08-08-2004 07:38 PM
Then you can use exactly what you want, and you don't depend on PQL parameters etc.
Wim
08-08-2004 08:06 PM
This is on a loaded system (running Dayend) copying from HSG snap disks, but I wouldn't expect this to rise to much more than 32 GB p/h on a standalone system.
We've removed CRC checking, but not /GROUP and have had no problems (so far).
Rob.
08-09-2004 01:55 AM
> I don't know where exactly ENQLM comes in. It is the number of entries you can have in timer queues...
Jan,
Did you mean TQELM or ENQLM? I believe TQELM is the one that governs entries in timer queues.
Galen
08-09-2004 02:18 AM
Must have still been sleepy.
Of course I should not have used the description for TQELM to refer to ENQLM.
ENQLM is the number of lock requests a process can have outstanding. Realising that, I can also explain why it is bad not to have it high enough: accessing and de-accessing files requires lock operations, and BACKUP does a lot of that.
Sorry for any confusion I generated ;-[
Jan
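To see where a process currently stands against these limits while a backup is running (a read-only check, safe to run at any time), DCL provides:

```
$ SHOW PROCESS/QUOTA
$ ! Look at "AST limit", "Direct I/O limit" and "Enqueue quota"
$ ! in the output; quotas shown are what remains available.
```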
08-09-2004 03:13 AM
I am still doing some testing by upping the quotas.
Ran into another problem, which I will put in a different thread (Run/uic= does not start the process as another user).
08-09-2004 04:01 AM
An HSV100 or HSV110 controller has two ports and can have up to 2048 outstanding I/Os per port. But as far as I can tell, OpenVMS will only use one path, and the SCSI device driver will not create such a deep I/O queue anyway.
08-10-2004 04:15 AM
Unless I'm mistaken, removing /CRC will remove BACKUP's ability to determine that a block is in error, thus greatly limiting the usefulness of redundancy groups specified with /GROUP.
As for the original poster's performance problem: if your CPU is close to 100% and BACKUP is the main user, you're not going to improve performance significantly no matter how you fiddle with quotas.
Specifying /NOCRC WILL greatly reduce BACKUP's CPU consumption, and you should test with it just to verify that CPU is your bottleneck (your throughput should go way up with /NOCRC if this is the case). However, you should not trust your production backups to /NOCRC; instead, consider why BACKUP is using so much CPU.
Most VMS systems which support a SAN should be able to drive a single SDLT at full speed without running out of CPU. Please tell us the other details of your configuration. Thanks.
08-10-2004 04:31 AM
Let me repeat what I have already written above:
"You can try to limit the CPU load by using /NOCRC/GROUP=0, but that is only good for testing and not a serious backup, because it will turn off end-to-end checking and remove redundancies from the save-set."
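A minimal timing run along those lines might look like this (the device and save-set names are made up; this is for measurement only, not a production backup):

```
$ MOUNT/FOREIGN/MEDIA_FORMAT=COMPACTION MKA500:
$ SHOW TIME
$ BACKUP/NOCRC/GROUP=0 DKA100:[000000...]*.*;* MKA500:TEST.BCK/SAVE_SET
$ SHOW TIME
```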
08-10-2004 05:54 AM
In the old days I would have blindly agreed 100%.
But now, in the days of SCSI....
Way back when, at a previous customer, there was the need to restore a backup tape 10 years old.
After locating a tape-unit still able to read 800 bpi reels (!), I read the backup tape.
Surprisingly, it was CPU-bound, and took quite long.
The final message explained why: 33000-some recoverable errors, and one unhappy operator who had to clean most of the tape's magnetite out of the tape unit.
THAT is what BACKUP with full recovery functionality is intended to do for you ... if you use DSA tape systems.
Now on SCSI: ONE SINGLE parity error, and SCSI forbids reading on.
Antonio will know exactly what I mean when I state that THAT is the reason SCSI is pronounced "scusi": that is the reply in Italian if you want any recovery.
In the days of the TK50 and TK70 there DID also exist DSSI devices (slow, but DSA compliant), so IF you had a tape with a parity error, your local DEC office would read the tape for you (usually just ONE recoverable error!) and write it to another one.
I am not aware of the same being possible with any DLT II, III, or IV system.
So Tom, Uwe, anybody else: if I am too pessimistic out of ignorance, please educate me. WHAT is the use of /CRC when backing up to tape? It is GREAT, but you cannot use it.
Jan
08-10-2004 06:56 AM
Recoverability has nothing to do with a tape drive being DSA compliant (you talk about DSSI and CI-attached drives, right?). I have also seen BACKUP recovering errors on a Massbus-attached tape drive (TU77 or TU78, I don't recall). How does BACKUP do it? It creates redundant data similar to RAID-5!
I know that many people say that today's tape drives do lots of self-correction to cover tape errors and do not let you get past an unrecoverable spot. I still suggest using /CRC, because it is an end-to-end check on the whole data path from the CPU to the bits on the tape. What happens if BACKUP creates a corrupted save-set? I have seen BACKUP do that - the CRC detected it.
I am open about using /GROUP=0, but I will continue to use /CRC.
08-10-2004 07:44 AM
I concur.
I never really encountered situations in which the CRC overhead was a reason NOT to use it, and, maybe just out of old habit, I keep using it.
But my previous statement stands: I NEVER was able to read past (even a single) parity error on any SCSI tape drive. All the others (Q-bus, CI, etc.; no Massbus experience though) simply transferred it to the OS, kinda "DUNNO, can you make anything of that?", and then, of course, BACKUP could.
But if you don't get any bits out of the drive past the parity error??
About DSSI TF8x: include them into my list of "non-SCSI drives for the tape".
DO such drives also exist for the current tape generations? If yes, Hallelujah!! Tell me about it, and I will somehow get them past the budget-guards.
(It would make me happy NOT to have to do without /CRC.)
Jan
08-10-2004 09:44 AM
If you use /crc/group=0, what does that get you? If /crc detects a bad block, you have no XOR block with which to recover the bad block.
08-10-2004 05:47 PM
when I wrote "I am open about using /GROUP=0", I meant somebody has to present some arguments _for_ using it. Thank you for the counter-argument ;-)
08-10-2004 07:12 PM
Anyway, DAT tapes don't feel safe to me.
For my own use I prefer the default /CRC/GROUP.
Antonio Vigliotti
P.S.
SCSI is read in Italian as "scusi", which sounds like "sorry"!
08-11-2004 08:19 AM
Shared disk, shared cache, and other shared stuff may lead to shared performance.
Try monitoring 3930 for a time to find out when it is idle, and then retry the backup or at least a large enough portion to have a benchmark.
:) jck
08-19-2004 03:38 AM
Wim
09-12-2004 09:39 PM
http://www.quantum.com/NR/rdonlyres/A3096946-7A8A-4A97-AC19-C21624335802/0/CH2e5.pdf
It seems that modern DLT drives do ECC themselves. So is /GROUP no longer necessary? And /CRC neither?
It corresponds to /GROUP=4, which is better than the VMS default of 10.
Wim
09-12-2004 09:47 PM
/CRC is an end-to-end check: from the system's memory/CPU down to the tape media. ECC in the tape drive will not help against corruptions on the SCSI bus or BACKUP corrupting a save-set.
09-12-2004 09:52 PM
Is such a corruption possible, and where is it documented? Doesn't SCSI have protection of its own?
What if data is corrupted while being transferred from disk to the VMS system? Are we protected?
Wim
09-12-2004 10:14 PM
SCSI has a parity check, and SCSI-3 has a full CRC. So on my GS160 I could use /GROUP=0 and /NOCRC. On my old nodes I should be careful.
Wim
09-14-2004 08:30 AM
$ SHOW TIME
$ BACKUP/all_your_qualifiers disk: NLA0:dummy/SAVE
$ SHOW TIME
$ BACKUP/all_your_qualifiers disk: NLA0:dummy/SAVE/LIST=test.lis
and
$ SHOW TIME
$ BACKUP/PHYSICAL/BLOCK=65024 disk: NLA0:dummy/SAVE
$ SHOW TIME
$ SHOW DEVICE/FULL disk:
From the first test, get the total block count from the bottom of test.lis and divide it by (2 * elapsed seconds) to get KB/sec (blocks are 512 bytes, so blocks/2 = KB).
For the /PHYSICAL test, divide the device's total block count by (2 * elapsed seconds) to get KB/sec.
The two numbers give you an idea how much the current volume layout (file size, location, fragmentation) is impacting your backup speed.
/Guenther
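The arithmetic itself can be sketched in DCL (the block count and elapsed time below are made-up placeholders; substitute your own figures from test.lis and the SHOW TIME output):

```
$ ! Disk blocks are 512 bytes, so blocks/2 = KB.
$ blocks = 71132000     ! total blocks from test.lis (example value)
$ elapsed = 2700        ! elapsed seconds (example: 45 minutes)
$ rate = (blocks / 2) / elapsed
$ WRITE SYS$OUTPUT "Throughput: ''rate' KB/sec"
```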
09-14-2004 09:11 AM
Here are a couple of additional items we have noticed:
Giving the process additional system resources, i.e. increasing quotas, has actually made backup times increase. What we noticed was that the disk queue initially increased dramatically, making me believe that the Symm just could not provide enough I/O bandwidth. On further investigation it was even weirder: 100% of the data was coming from the Symm cache, hence no physical disk contention problem.
Later investigation has put us in the current state.
There seems to be some thrashing amongst the FC HBAs. One HBA to the disk. The other HBA to the MDR with the tape drives.
The tape drives will cycle between idle and writing. When writing the disk queues will increase. When idle the disk IO will increase. Hence the thrash.
We are not sure yet if there is some limitation on this AS1200 and its ability to transfer data over the bus between two FC HBAs ( they are separated over hose0 and hose1) or if some other limitation exists.
If the SDLT can never stream because the system is fighting over the IO then we will never reach any good backup rates.
If anyone has thoughts or ideas, please add to this lengthy string.
Thanks again !! Points to all !!
09-14-2004 09:15 AM
Everybody has tweaked the BACKUP command so far; how about the MOUNT?
We use
MOUNT/NOASSIST/MEDIA_FORMAT=COMPACTION/FOREIGN-
/CACHE=TAPE_DATA
for our SDLTs.
Greetings, Martin
09-14-2004 06:18 PM
Welcome back to VMS :-)
Antonio Vigliotti