Operating System - OpenVMS

Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

 
Mark Corcoran
Frequent Advisor


I've been looking at some performance problems with a production system, and whilst they will arguably be solved by a hardware upgrade, there's a moratorium on that currently, until half of the cluster is moved to another site.

So, I'm wanting to work around the issue for the time being.

The problem stems from the fact that a rather large database has a number of its tables dumped (using RMU UNLOAD) at the same time, and to the same disk.

One of the output files alone is ~11M blocks, and these files are subsequently converted to indexed files (making them >= twice the size) using CONVERT.

On occasion, the indexing takes substantially longer (by a matter of hours), and I believe that this is (partly) caused by the RMU output files being badly fragmented.

It's all very well clearing the disk so that it is empty, but multiple dumps running at the same time will still cause contention for the new disk clusters being added to each file.

i.e. say two tables are being dumped, generating files #1 & #2, where file #1 (currently) requires ~10M blocks, and file #2 requires ~5M blocks.

RMU UNLOAD will (I would guess) use the same initial ALQ and DEQ values for any table that is being dumped.

Let's just say the default ALQ is 10000, and the DEQ is 5000, then the "logical" allocation of disk blocks would be:

0 - 9999 File #1
10000 - 19999 File #2
20000 - 24999 File #1
25000 - 29999 File #2

and so on...

What I'm wondering is if RMU itself can be told to request that a file have an initial ALlocation Quantity of X and a Default Extend Quantity of Y, and that the file should be contiguous (ideally not best-try; if there's insufficient contiguous disk space, I could put in error handling to deal with it).

Or alternatively, whether it can be fudged in some way to /OVERLAY an existing contiguous version of the output file (I had thought about creating the output file as version ;32767 and then setting a version limit of 1, but I suspect RMU UNLOAD always attempts to create a new file, and doesn't re-use existing files).
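
For reference, the fudge I had in mind would be something like this, using an FDL along the lines sketched further down (file names are just for illustration):

$ CREATE /FDL=BIG_CONTIG.FDL MYTABLE.UNL;32767   ! pre-create a contiguous file at the maximum version number
$ SET FILE /VERSION_LIMIT=1 MYTABLE.UNL;32767    ! so that no newer version can be created
$ RMU/UNLOAD MYDB MYTABLE MYTABLE.UNL

...though, as I say, I suspect RMU UNLOAD will simply error out rather than overlay the existing file.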

...I've looked at SET RMS_DEFAULT and FDL files with ALLOCATION, BEST_TRY_CONTIGUOUS, EXTENSION and CONTIGUOUS, but RMS_DEFAULT doesn't allow contiguousness or allocation quantities to be specified, and an FDL is only any use if RMU can be persuaded to use it on the create of the output file.
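
For what it's worth, the kind of FDL I'd want RMU to honour on the create would be along these lines (sized for the ~11M-block table; all values illustrative):

FILE
        ORGANIZATION            sequential
        ALLOCATION              11000000
        CONTIGUOUS              yes
        EXTENSION               65000

RECORD
        FORMAT                  variable

$ CREATE /FDL=BIG_CONTIG.FDL MYTABLE.UNL

A plain CREATE honours all of this; the problem is getting RMU UNLOAD to do the same.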

Does anyone have any thoughts on the matter?

We are on RDB v7.1; Oracle's web site suggests v7.2.2 is the latest, and the 3rd-edition RDB Comprehensive Guide book I've recently purchased suggests it is v8 (though this might be RDB on NT).

Mark
12 REPLIES
Ian Miller.
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

RDB V8 was RDB on NT and is not around any more.

There is a RDB specific email list - see
http://www.jcc.com/jccs_oracle_list_servers.htm
with previous postings at
http://www.jcc.com/searchengine/

I wonder if there is a magic RDM logical name that specifies the extend quantity.
____________________
Purely Personal Opinion
Ian Miller.
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

It appears you can do
RMU/UNLOAD ... /ALLOCATION=65000/EXTEND=65000

but I don't see a /contig qualifier
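
e.g. for your ~11M block table, something like (database and table names invented):

$ RMU/UNLOAD /ALLOCATION=11000000 /EXTEND=65000 MYDB.RDB MYTABLE MYTABLE.UNL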
____________________
Purely Personal Opinion
Jan van den Ende
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

Mark,

Ian wrote


>>>
It appears you can do
RMU/UNLOAD ... /ALLOCATION=65000/EXTEND=65000
<<<

Well, although NOT on RDB, this sounds VERY familiar.
It means that the file-to-be-generated will get its initial allocation, and after that extents as needed, each being 65000 blocks, contiguous-best-try.
If you start on a "clean" disk (making chunks of 65000 possible), you will end up with very little fragmentation.
For us (disclaimer: NO guarantee here!!) that worked quite well. Worth a try at least, I would say.


Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
John Gillings
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

Mark,

Forcing contiguity just creates another potential error condition (and in this case, an unnecessary one). Specifying generous initial allocations and extents should be adequate.

If you can't specify an ALQ, compensate with an even larger DEQ - perhaps even larger than you expect the file to grow.

Remember that as long as the file fits in a single header, you're not likely to feel much performance impact from fragmentation. Use DUMP/HEADER to see the nature of your fragmentation.
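
For example (the /BLOCK=COUNT:0 trick stops DUMP after the header, so you don't dump the data as well; file name invented):

$ DUMP /HEADER /BLOCK=COUNT:0 DISK$DATA:[DUMPS]MYTABLE.UNL

Count the retrieval pointers in the map area - that is your fragment count, and it also shows whether the file has spilled into extension headers.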

A crucible of informative mistakes
Wim Van den Wyngaert
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

Note that you can take back whatever you "gave away too much" by doing SET FILE/TRUNCATE after the dump has finished.
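
i.e. (file name invented):

$ SET FILE /TRUNCATE MYTABLE.UNL

which releases the allocated-but-unused blocks beyond end-of-file back to the disk.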

Wim
Wim Van den Wyngaert
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

Also: did you check the allocation and other parameters in the FDL used to convert it to an indexed file? Also try increasing /WORK_FILES to speed things up.
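
Something like (names invented; /WORK_FILES takes up to 10):

$ CONVERT /SORT /WORK_FILES=10 /FDL=MYTABLE.FDL MYTABLE.UNL MYTABLE.IDX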

Wim
Mark Corcoran
Frequent Advisor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

>Forcing contiguity just creates another potential error condition (and in this case, an unnecessary one).

Unnecessary in that it can be avoided by NOT forcing contiguousness and using BEST-TRY, or unnecessary because a completely contiguous file is not of benefit?


>Remember that as long as the file fits in a single header, you're not likely to feel much performance impact from fragmentation.
>Use DUMP/HEADER to see the nature of your fragmentation.

When I initially looked at the file, there were (I think) some 1400+ extents, and indexing of the file (with CONVERT) took "forever".

A disk cleanup resulted in significantly more contiguous free space on the volume, and with ~850 extents, the indexing seemed to revert to the "usual"/commonly-observed amount of time.

Whilst looking at the system in question just now, to see if I still had an /OUTPUT file of a DUMP/HEADER from weeks ago to confirm the number of extents (alas, no), I did find that the indexing is still ongoing at this time of the morning, probably because 772 of the file's 1036 extents are 2048 blocks in size (the default extend size).

How does fragmentation of the file header dramatically alter the performance of file access, compared to significant fragmentation of the file itself?

[I thought the file header was cached, no? In any case, header fragmentation is only likely to span a few blocks, whereas file fragmentation is likely to have the disk heads thrashing backwards & forwards]
Mark Corcoran
Frequent Advisor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

>Also: did you check the allocation and other parameters in the FDL used to convert it to an indexed file? Also try increasing /WORK_FILES to speed things up.

The parameters in the FDL file aren't the best, but they're significantly better than the defaults that RDB is using (in this particular case, with a large table).

It's not clear (from my last update) whether the performance benefit was gained from the source of the CONVERT being more contiguous, or from the output file being more contiguous (I would hazard a guess that it is more to do with the contiguousness of the input file and the sort work files (of which there are 4), rather than the CONVERT output file).

Thanks for the thought anyway!
Wim Van den Wyngaert
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

The indexing (or convert) only reads the dump file once. I can't believe this would be delayed for hours due to a fragmented file. I still think the indexed file is more likely to be the cause. Could you post the FDL?

Maybe the convert normally goes fast due to caching, but when you have 2 converts running at the same time the cache is too small, and that is the cause of the slow convert.

Wim
Phil.Howell
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

If the indexed files that are being created have more than one key, then you may get a reduction in run time by replacing the convert with a sort on the primary key to a sequential output file, followed by a CONVERT/NOSORT using your FDL (sketched below).
Note that sequential files can take up a lot more space than indexed files, as data compression is not used.
You could also optimise your FDL based on the indexed files from previous runs.
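
e.g. (key position and size invented - use your primary key's):

$ SORT /KEY=(POSITION:1, SIZE:8) MYTABLE.UNL MYTABLE.SRT
$ CONVERT /NOSORT /FDL=MYTABLE.FDL MYTABLE.SRT MYTABLE.IDX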
Phil
Wim Van den Wyngaert
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

If it's still a problem: sort uses memory based on the WSEXTENT of the process. Increase WSEXTENT and the sort will speed up. But make sure not to get into conflict with the maximum PGFLQUOTA.
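
e.g. check the current values with SHOW WORKING_SET, then raise the authorized ones (values illustrative; takes effect at next login):

$ MCR AUTHORIZE
UAF> MODIFY username /WSEXTENT=131072 /PGFLQUOTA=262144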

Fwiw

Wim
Hein van den Heuvel
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

>> Let's just say the default ALQ is 10000, and the DEQ is 5000, then the "logical" allocation of disk blocks would be:

Yes, when two files are concurrently growing then typically their allocations will intermingle / alternate.


>> What I'm wondering is if RMU itself can be told to request that a file have an initial ALlocation Quantity of X, and Default Extend Quantity of Y, and that the file should be contiguous

RMU/UNLOAD 7.1 has the /ALLOCATION and /EXTEND switches. Typically that is good enough.
I would not worry about contiguous or not.
Allocation would be the critical one - the one to get roughly right. The default extend of 2048 is not bad, but for a million+ block file you may want to use 65K.

I understand your concern though.
It is kind of a shame to have this fragmented intermediate file.
Have you considered an LD drive or two to hold the temp files?
( http://www.digiater.nl/lddriver.html )

You could allocate an LD on top of a good-sized contiguous file. You may want to switch off highwater marking during the create, and not have highwater marking on the LD volume either.
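
Roughly like this - from memory of the LD kit, so check its documentation for the exact syntax (sizes and names invented):

$ LD CREATE DKA100:[TEMP]UNLOAD.DSK /SIZE=12000000   ! container file, ideally contiguous
$ LD CONNECT DKA100:[TEMP]UNLOAD.DSK LDA1:
$ INITIALIZE /NOHIGHWATER LDA1: UNLOAD
$ MOUNT /SYSTEM LDA1: UNLOAD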

>> One of the output files is ~11M blocks alone, and these files are subsequently converted to indexed files (making them >= twice the size)using CONVERT.

Is the FDL file properly tuned?
You may want to review its allocations and bucketsizes.
Just take the FDL you have, tweak the record count to the value reported by the convert, and run it through EDIT/FDL/NOINTERACTIVE to get something to look at.
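
i.e. something like (file names invented):

$ EDIT /FDL /NOINTERACTIVE /ANALYSIS=MYTABLE_STATS.FDL MYTABLE.FDL

where the analysis file is either the output of ANALYZE/RMS_FILE/FDL on a previous run, or just your FDL with the record count tweaked.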


Phil> replacing the convert with a sort on primary key to as sequential output file

I don't think that will help.

The RDB data may well be sorted by primary key already. In that case, be sure to tell CONVERT through the /NOSORT option.
With several alternate keys, you may want to check out /SECONDARY.
And recent flavors of convert will issue large IOs (up to 127 blocks) holding multiple output buckets.

You _should_ direct CONVWORK and SORTWORK to devices other than the input file and output file.
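
e.g. (device and directory names invented; SORTWORK0 through SORTWORK9 are recognized):

$ DEFINE /PROCESS CONVWORK  SCRATCH1:[WORK]
$ DEFINE /PROCESS SORTWORK0 SCRATCH1:[WORK]
$ DEFINE /PROCESS SORTWORK1 SCRATCH2:[WORK]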

Mark>> or unnecessary because a completely contiguous file is not of benefit?

Completely contiguous does not help much.
Just avoiding multiple headers should help some.

Mark> I thought the file header was cached, no?

Within limits. Check out MCR SYSGEN SHOW/ACP ... ACP_HDRCACHE. Default is just 36, but any AUTOGEN would have cranked that up. You may also try MOUNT/PROC=UNIQUE
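
e.g.

$ MCR SYSGEN
SYSGEN> SHOW ACP_HDRCACHE

and for a permanent change, put ACP_HDRCACHE = nnn in SYS$SYSTEM:MODPARAMS.DAT and run AUTOGEN, rather than poking the parameter directly.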

Hope this helps some,

Hein van den Heuvel
HvdH Performance Consulting