Operating System - OpenVMS forum
ABSV3.2(273) on OpenVMS V7.2-2 performance problem
01-03-2008 03:00 AM
I have 3.5 years of ABS archive data in 44 .DAT files, 234023938 blocks in total.
I want to list all the old archived files.
So for January 2006 I ran:
$ abs lookup DISK$ARC_0106:[*...]*.*;* /catalog=MACH_ARCHIVE_2006 /BRIEF
3865985 objects(s) found in 1 catalog(s).
Elapsed time: 5 12:29:32.87
0106.BRIEF;1 639628
At this rate it will take about half a year to list all of the data!
Does anyone have any hints for extracting the data faster?
Regards,
Per Hvid phv@mach.com
01-03-2008 04:33 AM
Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.
That will fix internal and external fragmentation.
Why don't you post a reply with sample ANALYZE/RMS/FDL/OUT as a text attachment, and/or the output from my tunecheck tool?
We can possibly give you a generic, tuned FDL to convert with. You might tweak the allocations a little, based on the individual archives.
Optimal tuning will be a little more work.
A day of (for fee) consulting should do it.
Specifically, I think you need to know whether typically selected records are stored more or less in primary key order (even if selected through an alternate key), in which case one should lean towards a larger primary data area bucket size. If random access is expected, then a smaller, but not too small, RMS bucket size is needed.
You may also want to try a SET RMS/IND/BUF=50 before making a listing.
Hope this helps some,
Hein van den Heuvel ( at gmail dot com )
HvdH Performance Consulting
01-07-2008 09:34 AM
Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.
If you specify more of the filespec you are looking for, such as its top directory, the search will most likely go a lot faster.
Btw, how big are the ABS$CATALOG:MACH_ARCHIVE_2006.* files?
/Guenther
01-07-2008 10:23 AM
Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.
Indexed file alternate key lookup huh?
Did ABS V3.2 do anything in the area of explicit RMS indexed-file buffer count selection, or did it just accept the default? If it accepted the default, then I would certainly retry with SET RMS/IND/BUF=50.
Guenther, what are the odds that a 'next' record by alternate key is also the next record by primary key? High correlation or none?
If there is no correlation, then effectively every name lookup will trigger a data bucket read, possibly with bucket-split re-vector IO. If so, the elapsed time starts to add up:
5 12:29:32.87 (5 days 12:29:32.87) ==> ~476973 seconds.
3865985 lookups / 476973 seconds ==> only ~8 lookups/sec.
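Note that the OpenVMS elapsed-time format is days followed by hh:mm:ss.cc, so "5 12:29:32.87" means 5 days and change. As a cross-check, here is that arithmetic as a small DCL sketch (integer arithmetic, so the rate is truncated):

```dcl
$! Convert the reported elapsed time (5 days 12:29:32.87) to seconds
$ secs = 5*86400 + 12*3600 + 29*60 + 33
$! Lookups per second, using the reported object count (integer division)
$ rate = 3865985 / secs
$ WRITE SYS$OUTPUT "''secs' seconds elapsed, about ''rate' lookups/sec"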
Per,
What was the IO count and CPU reported?
Have those archives ever been converted / re-organized? You may just want to try a convert with a large data bucket size (32 - 63) to clean up any bucket splits and to increase caching effectiveness, IF there is an alternate-to-primary sequence correlation.
If there is no correlation, then a large bucket will just be a waste of bandwidth.
Can you post an ANAL/RMS/FDL output in an attachment? Or if you have a recent version of my rms_tune_check, with the -a output, could you post that?
Cheers,
Hein van den Heuvel
HvdH Performance Consulting
01-07-2008 10:40 AM
Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.
Regarding the question of file size:
dir mach_arch*
Directory ABS$ROOT:[CATALOG]
MACH_ARCHIVE_2004_MCA090_VAOEI.DAT;1 1680384
MACH_ARCHIVE_2004_MCA091_VAOEI.DAT;1 1606656
MACH_ARCHIVE_2004_MCA094_VAOEI.DAT;1 1675008
MACH_ARCHIVE_2004_MCA100_VAOEI.DAT;1 1749504
MACH_ARCHIVE_2004_MCA103_VAOEI.DAT;1 1633280
MACH_ARCHIVE_2004_MCA110_VAOEI.DAT;1 824576
MACH_ARCHIVE_2004_MCA113_VAOEI.DAT;1 648960
MACH_ARCHIVE_2004_MCA114_VAOEI.DAT;1 1608192
MACH_ARCHIVE_2004_MCA118_VAOEI.DAT;1 1672960
MACH_ARCHIVE_2004_MCA123_VAOEI.DAT;1 1448448
MACH_ARCHIVE_2004_MCA128_VAOEI.DAT;1 1245952
MACH_ARCHIVE_2004_MCA132_VAOEI.DAT;1 1740288
MACH_ARCHIVE_2004_MCA136_VAOEI.DAT;1 1734656
MACH_ARCHIVE_2004_VAOE.DAT;1 72350720
MACH_ARCHIVE_2004_VTLE.DAT;1 768
MACH_ARCHIVE_2005_1_1.COM;1 2
MACH_ARCHIVE_2005_MCA139_VAOEI.DAT;1 1805568
MACH_ARCHIVE_2005_MCA143_VAOEI.DAT;1 1559552
MACH_ARCHIVE_2005_MCA148_VAOEI.DAT;1 1683968
MACH_ARCHIVE_2005_MCA149_VAOEI.DAT;1 1681920
MACH_ARCHIVE_2005_MCA152_VAOEI.DAT;1 4352
MACH_ARCHIVE_2005_MCA156_VAOEI.DAT;1 1635072
MACH_ARCHIVE_2005_MCA160_VAOEI.DAT;1 1299712
MACH_ARCHIVE_2005_MCA164_VAOEI.DAT;1 1570304
MACH_ARCHIVE_2005_MCA168_VAOEI.DAT;1 1145600
MACH_ARCHIVE_2005_MCA169_VAOEI.DAT;1 1425152
MACH_ARCHIVE_2005_MCA173_VAOEI.DAT;1 1034496
MACH_ARCHIVE_2005_MCA175_VAOEI.DAT;1 964352
MACH_ARCHIVE_2005_MCA187_VAOEI.DAT;1 1700352
MACH_ARCHIVE_2005_VAOE.DAT;1 67031808
MACH_ARCHIVE_2005_VTLE.DAT;1 512
MACH_ARCHIVE_2006_MCA176_VAOEI.DAT;1 1063168
MACH_ARCHIVE_2006_MCA178_VAOEI.DAT;1 777984
MACH_ARCHIVE_2006_MCA181_VAOEI.DAT;1 795904
MACH_ARCHIVE_2006_MCA183_VAOEI.DAT;1 795904
MACH_ARCHIVE_2006_MCA185_VAOEI.DAT;1 888320
MACH_ARCHIVE_2006_MCA195_VAOEI.DAT;1 2470400
MACH_ARCHIVE_2006_MCA199_VAOEI.DAT;1 842752
MACH_ARCHIVE_2006_MCA205_VAOEI.DAT;1 2595840
MACH_ARCHIVE_2006_VAOE.DAT;1 40110080
MACH_ARCHIVE_2006_VTLE.DAT;1 512
MACH_ARCHIVE_2007_MCA203_VAOEI.DAT;1 1521920
MACH_ARCHIVE_2007_VAOE.DAT;1 5997568
MACH_ARCHIVE_2007_VTLE.DAT;1 512
Total of 44 files, 234023938 blocks.
/Per
01-07-2008 12:34 PM
Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.
So there are 4 and not 40 files to scan.
/Guenther
01-08-2008 05:29 AM
Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.
It shows an astounding 72 Direct IOs per record reported, but also 24 Buffered IOs per record ?!
I am at a loss to explain any but a few buffered IOs. Mailbox involved? File opens? Log file extending every record?
Guenther are 'child' files being opened and closed around a lookup?
That would make for almost 10 direct IOs per record (prologue, area, root, 2 more index levels, data) for the child file, instead of the 1 - 2 normally seen for an indexed file lookup with a good bit of the index in cache.
Maybe the master file, if I may call it that (MACH_ARCHIVE_2006_VAOE.DAT?) generates the bulk of the IOs?
Maybe check with $MONITOR FCP, looking for the "File Open Rate" while running.
Maybe check with Volker's SDA$PROCIO where the direct IOs go. And/or SET FILE/STAT on the driver and child files, then analyze with my RMS_STATS, or MONI RMS, or ANALYZE/SYS... SHOW PROC /RMS=FSB.
Guenther, Per just wants a list of all files.
Is there not a more expedient way to do this? Could a simple tool report directly on the MACH_ARCHIVE_2006_MCA176_VAOEI.DAT file?
Cheers,
Hein.
01-08-2008 08:37 AM
Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.
So, the lookup scans through the VAOE, and for each entry it tries to find a matching VAOEI record. This is where the buffered I/Os come from (open/close to avoid running out of FILLM/CHANNELCNT). That was the unfinished work in ABS V3.*. Just recently this has been improved in the latest ABS version to do this a bit smarter/faster.
And so, a program or DCL procedure which reads sequentially through a VAOE file, printing the filename (the first 255 bytes), would go a lot faster. But if anything else is needed (which save set the file is in, on which tape), that needs to come from the VAOEI and/or VTLE files.
/Guenther
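Guenther's sequential approach could be sketched as a plain DCL procedure. This is only an illustration built from his description, not a tested tool: the catalog name is one of the files listed above, the 255-byte filename field is taken on his word, and DCL's READ will fail on records longer than its symbol length limit.

```dcl
$! Sketch: sequentially read one VAOE catalog and print the filename
$! field (assumed to be the first 255 bytes of each record).
$! DCL OPEN/READ on an indexed file delivers records in primary key order.
$ OPEN/READ vaoe ABS$CATALOG:MACH_ARCHIVE_2006_VAOE.DAT
$LOOP:
$ READ/END_OF_FILE=DONE vaoe rec
$ WRITE SYS$OUTPUT F$EXTRACT(0, 255, rec)
$ GOTO LOOP
$DONE:
$ CLOSE vaoe
```

For 3865985 records the WRITE should probably go to an output file opened with OPEN/WRITE rather than SYS$OUTPUT, but the structure is the same.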
01-08-2008 08:44 AM
Solution
/Guenther
01-08-2008 11:30 AM
Re: ABSV3.2(273) on OpenVMS V7.2-2 performance problem.
Basically for every master record, each slave file is opened, a lookup done, and closed.
That explains the high Buffered IO Count as well as the high direct IO count.
During development an attempt was made to keep the files open, but for some sites that caused the process to run out of channels, and this was unfortunately (imho) dropped. With the benefit of hindsight, and knowing how much quota folks grab these days anyway, I would have preferred to just keep the files open and outright fail for the odd few (forcing a higher quota) rather than punish every user all the time. Oh well, too late now.
Considering the access pattern, even a modified LIFO file cache would have helped some, as opposed to FIFO, which would just chase its own tail.
Considering those files are opened and closed all the time, the RMS local cache does nothing but eat memory and locks.
One ATTEMPT I would make is to use RMS global buffers to keep the index buckets in memory. The attached macro sources can do just that. Run it in a little command file, for example:
$set rms/ind/buf=2
$run global_buffers
MACH_ARCHIVE_2006_MCA176_VAOEI.DAT;1 1063168
MACH_ARCHIVE_2006_MCA178_VAOEI.DAT;1 777984
MACH_ARCHIVE_2006_MCA181_VAOEI.DAT;1 795904
MACH_ARCHIVE_2006_MCA183_VAOEI.DAT;1 795904
MACH_ARCHIVE_2006_MCA185_VAOEI.DAT;1 888320
MACH_ARCHIVE_2006_MCA195_VAOEI.DAT;1 2470400
MACH_ARCHIVE_2006_MCA199_VAOEI.DAT;1 842752
MACH_ARCHIVE_2006_MCA205_VAOEI.DAT;1 2595840
MACH_ARCHIVE_2006_VAOE.DAT;1 40110080
MACH_ARCHIVE_2006_VTLE.DAT;1 512
$exit
I only write 'attempt' because the XFC is likely to be caching everything already, avoiding the actual physical IOs.
The global buffers should avoid most direct IO (although RMS will still read the prologue and area descriptor into a local buffer). It may or may not help.
Finally, it is worth your while to convert (stable) ABS AOEI repositories. Due to bucket splits you can expect to save some 30% of the space, and can likely drop an index level (or 2 in extreme cases). After what I learned today, I would convert with the index (no compression) in area 0 and a larger index bucket size: for example 16 for files under 2 million blocks, and 24 or 32 for files over that 1 GB mark. The data buckets can be in area 1 with a smaller bucket size (8, or even 4).
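An illustrative FDL fragment along those lines — the bucket sizes follow Hein's numbers, but the key layout is a placeholder and everything omitted (allocations, key positions and sizes) would come from the real ANALYZE/RMS/FDL output of the catalog being converted:

```
FILE
        ORGANIZATION            indexed

AREA 0                          ! index area: larger buckets, no compression
        BUCKET_SIZE             24

AREA 1                          ! data area: smaller buckets
        BUCKET_SIZE             8

KEY 0
        INDEX_AREA              0
        LEVEL1_INDEX_AREA       0
        DATA_AREA               1
        INDEX_COMPRESSION       no
```

applied with something like $ CONVERT/FDL=TUNED.FDL MACH_ARCHIVE_2006_VAOE.DAT MACH_ARCHIVE_2006_VAOE.DAT, which creates a new file version.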
Good luck!
Hein van den Heuvel (at gmail dot com)
HvdH Performance Consulting