05-12-2008 05:31 AM
Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?
So, I'm wanting to work around the issue for the time being.
The problem stems from the fact that a rather large database has a number of its tables dumped (using RMU UNLOAD) at the same time, and to the same disk.
One of the output files is ~11M blocks alone, and these files are subsequently converted to indexed files (making them >= twice the size) using CONVERT.
On occasions, the indexing takes substantially longer (by a magnitude of hours), and I believe that this is (partly) being caused by the RMU output files being badly fragmented.
It's all very well clearing the disk so that it's empty, but multiple dumps running at the same time will still cause contention for the new disk clusters being added to each file.
i.e. say two tables are being dumped, generating files #1 & #2, where file #1 (currently) requires ~10M blocks, and file 2 requires ~5M blocks.
RMU UNLOAD will (I would guess) use the same initial ALQ and DEQ values for any table that is being dumped.
Let's just say the default ALQ is 10000, and the DEQ is 5000, then the "logical" allocation of disk blocks would be:
0 - 9999 File #1
10000 - 19999 File #2
20000 - 24999 File #1
25000 - 29999 File #2
and so on...
What I'm wondering is if RMU itself can be told to request that a file have an initial ALlocation Quantity of X, and Default Extend Quantity of Y, and that the file should be contiguous (ideally, not best-try, and if there's insufficient contiguous disk space, then I could put in error handling to deal with it).
Or alternatively, fudged in some way to /OVERLAY an existing version of the output file which is contiguous (I had thought about creating the output file as ;32767, and then setting it with a version limit of 1, but I suspect RMU UNLOAD always attempts to create a new file, and doesn't re-use existing files).
...I've looked at SET RMS_DEFAULT, and at FDL files with ALLOCATION, BEST_TRY_CONTIGUOUS, EXTENSION and CONTIGUOUS, but RMS_DEFAULT doesn't allow contiguity or allocation quantities to be specified, and an FDL is only of any use if RMU can be persuaded to apply it when creating the output file.
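For illustration, this is roughly the FDL I would want RMU to honour on the create (the sizes are made up to suit the ~11M-block file, and CONTIGUOUS could be relaxed to BEST_TRY_CONTIGUOUS):

FILE
        ORGANIZATION            sequential
        ALLOCATION              11000000
        EXTENSION               65000
        CONTIGUOUS              yes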
Does anyone have any thoughts on the matter?
We are on RDB v7.1; Oracle's web site suggests v7.2.2 is the latest, and the 3rd-edition RDB Comprehensive Guide I've recently purchased suggests v8 (though that might be RDB on NT).
Mark
05-12-2008 05:59 AM
Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?
There is an RDB-specific email list - see
http://www.jcc.com/jccs_oracle_list_servers.htm
with previous postings at
http://www.jcc.com/searchengine/
I wonder if there is a magic RDM logical name that specifies the extend quantity
Purely Personal Opinion
05-12-2008 06:04 AM
Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?
It appears you can do
RMU/UNLOAD ... /ALLOCATION=65000/EXTEND=65000
but I don't see a /contig qualifier
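For a concrete invocation (database root, table, and output file names are placeholders), that would presumably look something like:

$ RMU/UNLOAD/ALLOCATION=11000000/EXTEND=65000 MYDB.RDB MYTABLE MYTABLE.UNL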
Purely Personal Opinion
05-12-2008 07:07 AM
Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?
Ian wrote
>>>
It appears you can do
RMU/UNLOAD ... /ALLOCATION=65000/EXTEND=65000
<<<
Well, although NOT on RDB, this sounds VERY familiar.
It means that the file to be generated will have an initial allocation, and after that extents as needed, each being 65000 blocks, contiguous-best-try.
If you start on a "clean" disk (making chunks of 65000 possible), that will mean that you will have very little fragmentation.
For us (disclaimer: NO guarantee here!!) that worked quite well. Worth a try at least, I would say.
Proost.
Have one on me.
jpe
05-12-2008 05:57 PM
Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?
Forcing contiguity just creates another potential error condition (and in this case, an unnecessary one). Specifying generous initial allocations and extents should be adequate.
If you can't specify an ALQ, compensate with an even larger DEQ - perhaps even larger than you expect the file to grow.
Remember that as long as the file fits in a single header, you're not likely to feel much performance impact from fragmentation. Use DUMP/HEADER to see the nature of your fragmentation.
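For example (file name hypothetical; /BLOCK=COUNT=0 suppresses the data so only the header is dumped):

$ DUMP/HEADER/BLOCK=COUNT=0 DKA100:[UNLOAD]MYTABLE.UNL

The map area at the end of the display holds one retrieval pointer per extent, so counting them shows how fragmented the file is and whether it has spilled into extension headers.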
05-13-2008 03:09 AM
Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?
Wim
05-13-2008 03:22 AM
Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?
Wim
05-19-2008 12:58 AM
Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?
Unnecessary in that it can be avoided by NOT forcing contiguousness and using BEST-TRY, or unnecessary because a completely contiguous file is not of benefit?
>Remember that as long as the file fits in a single header, you're not likely to feel much performance impact from fragmentation.
>Use DUMP/HEADER to see the nature of your fragmentation.
When I initially looked at the file, there were (I think) some 1400+ extents, and indexing of the file (with CONVERT) took "forever".
A disk cleanup resulted in significantly more contiguous free space on the volume, and with ~850 extents the indexing seemed to revert to the usual, commonly observed amount of time.
Whilst looking on the system in question just now, to see if I still had an /OUTPUT file of a DUMP/HEADER from weeks ago to confirm the number of extents (alas, no), I did find that the indexing is still ongoing at this time of the morning, probably because 772 of the file's 1036 extents are 2048 blocks in size (the default extend size).
How does fragmentation of the file header dramatically alter the performance of file access, compared to significant fragmentation of the file itself?
[I thought the file header was cached, no? In any case, header fragmentation is only likely to span a few blocks, whereas file fragmentation is likely to have the disk heads thrashing backwards & forwards.]
05-19-2008 01:04 AM
Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?
The parameters in the FDL file aren't the best, but they're significantly better than the defaults that RDB is using (in this particular case, with a large table).
It's not clear (from my last update) whether the performance benefit was gained from the source of the CONVERT being more contiguous, or from the output file being more contiguous (I would hazard a guess that it is more to do with the contiguousness of the input file and the sort work files (of which there are 4) than with the CONVERT output file).
Thanks for the thought anyway!
05-19-2008 02:26 AM
Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?
Maybe the convert normally goes fast due to caching, but when you have two converts running at the same time the cache is too small, and that is the cause of the slow convert.
Wim
05-19-2008 04:13 AM
Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?
Note that sequential files can take up a lot more space than indexed files, as data compression is not used.
You could also optimise your FDL based on the indexed files from previous runs.
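For instance, something along these lines (file names are placeholders):

$ ANALYZE/RMS_FILE/FDL/OUTPUT=MYTABLE.FDL MYTABLE_PREV.IDX
$ EDIT/FDL/NOINTERACTIVE MYTABLE.FDL

ANALYZE/RMS_FILE/FDL embeds usage statistics from the previous run's indexed file into the FDL, which the EDIT/FDL optimize script can then use to pick allocations and bucket sizes.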
Phil
05-27-2008 01:37 AM
Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?
Fwiw
Wim
05-27-2008 12:06 PM
Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?
Yes, when two files are concurrently growing, their allocations will typically intermingle/alternate.
>> What I'm wondering is if RMU itself can be told to request that a file have an initial ALlocation Quantity of X, and Default Extend Quantity of Y, and that the file should be contiguous
RMU/Unload 7.1 has the /alloc and /exten switches. Typically that is good enough.
I would not worry about contiguous or not.
Allocation would be the critical one, the one to get roughly right. The default extend of 2048 is not bad, but for a million+ block file you may want to use 65K.
I understand your concern though.
It is kind of a shame to have this fragmented intermediate file.
Have you considered an LD drive or two to hold the temp files?
( http://www.digiater.nl/lddriver.html )
You could allocate an LD on top of a good-sized contiguous file. You may want to switch off high-water-marking during the create, and not have high-water-marking on the LD volume.
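A sketch, going from memory of the LD utility's syntax (names and sizes are illustrative, and CONTIG.FDL is assumed to be an FDL specifying a large contiguous allocation, as discussed above):

$ CREATE/FDL=CONTIG.FDL UNLOAD_CONTAINER.DSK
$ LD CONNECT UNLOAD_CONTAINER.DSK LDA1:
$ INITIALIZE/NOHIGHWATER_MARKING LDA1: UNLOADTMP
$ MOUNT/SYSTEM LDA1: UNLOADTMP

Files created on LDA1: can then fragment no worse than the container file itself.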
>> One of the output files is ~11M blocks alone, and these files are subsequently converted to indexed files (making them >= twice the size)using CONVERT.
Is the FDL file properly tuned?
You may want to review its allocations and bucketsizes.
Just take the FDL you have, tweak the record count to the value reported by the convert, and run EDIT/FDL/NOINTERACTIVE to get something to look at.
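Concretely, that flow might look like this (file name is a placeholder; the record count lives in the FDL's analysis section):

$ EDIT MYTABLE.FDL                      ! set the analysis record count to the value CONVERT reported
$ EDIT/FDL/NOINTERACTIVE MYTABLE.FDL    ! runs the optimize script and rewrites the FDL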
Phil> replacing the convert with a sort on primary key to a sequential output file
I don't think that will help.
The RDB data may well be sorted by primary key already. In that case, be sure to tell CONVERT through the /NOSORT option.
With several alternate keys, you may want to check out /SECONDARY.
And recent flavors of convert will issue large IOs (up to 127 blocks) holding multiple output buckets.
You _should_ direct CONVWORK and SORTWORK to devices other than those holding the input and output files.
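For example (scratch device and directory names are placeholders; /USER_MODE definitions evaporate once the next image exits):

$ DEFINE/USER_MODE CONVWORK  SCRATCH1:[TMP]
$ DEFINE/USER_MODE SORTWORK0 SCRATCH2:[TMP]
$ DEFINE/USER_MODE SORTWORK1 SCRATCH3:[TMP]
$ CONVERT/FDL=MYTABLE.FDL/STATISTICS MYTABLE.UNL MYTABLE.IDX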
Mark>> or unnecessary because a completely contiguous file is not of benefit?
Completely contiguous does not help much.
Just avoiding multiple headers should help some.
Mark> I thought the file header was cached, no?
Within limits. Check out MCR SYSGEN SHOW/ACP ... ACP_HDRCACHE. The default is just 36, but any AUTOGEN would have cranked that up. You may also try MOUNT/PROC=UNIQUE.
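That is (read-only, nothing is changed here):

$ MCR SYSGEN
SYSGEN> SHOW/ACP

which lists all the ACP_* parameters, ACP_HDRCACHE among them.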
Hope this helps some,
Hein van den Heuvel
HvdH Performance Consulting