Operating System - OpenVMS

Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

 
Frequent Advisor

Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

I've been looking at some performance problems with a production system, and whilst they will arguably be solved by a hardware upgrade, there's a moratorium on that currently, until half of the cluster is moved to another site.

So, I'm wanting to work around the issue for the time being.

The problem stems from the fact that a rather large database has a number of its tables dumped (using RMU UNLOAD) at the same time, and to the same disk.

One of the output files alone is ~11M blocks, and these files are subsequently converted to indexed files (making them >= twice the size) using CONVERT.

On occasion, the indexing takes substantially longer (by hours), and I believe this is (partly) caused by the RMU output files being badly fragmented.

It's all very well clearing the disk so that it is empty, but the multiple dumps running at the same time will cause contention for the new disk clusters being added to each file.

i.e. say two tables are being dumped, generating files #1 and #2, where file #1 (currently) requires ~10M blocks and file #2 requires ~5M blocks.

RMU UNLOAD will (I would guess) use the same initial ALQ and DEQ values for any table that is being dumped.

Let's just say the default ALQ is 10000, and the DEQ is 5000, then the "logical" allocation of disk blocks would be:

0 - 9999 File #1
10000 - 19999 File #2
20000 - 24999 File #1
25000 - 29999 File #2

and so on...

What I'm wondering is if RMU itself can be told to request that a file have an initial ALlocation Quantity of X, and Default Extend Quantity of Y, and that the file should be contiguous (ideally, not best-try, and if there's insufficient contiguous disk space, then I could put in error handling to deal with it).

Or alternatively, fudge it in some way to /OVERLAY an existing, contiguous version of the output file (I had thought about creating the output file as ;32767 and then setting a version limit of 1 on it, but I suspect RMU UNLOAD always attempts to create a new file and doesn't re-use existing ones).

...I've looked at SET RMS_DEFAULT and at FDL files with ALLOCATION, BEST_TRY_CONTIGUOUS, EXTENSION and CONTIGUOUS, but SET RMS_DEFAULT doesn't allow contiguity or allocation quantities to be specified, and an FDL is only of any use if RMU can be persuaded to apply it when creating the output file.
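For reference, if RMU could be made to re-use a pre-created file, the FDL and CREATE for it might look something like this (a sketch only - the allocation figure, file names and device are made up, EXTENSION tops out at 65535, and as said above it only helps if RMU will actually re-use the file):

FILE
        ALLOCATION              11000000
        BEST_TRY_CONTIGUOUS     yes
        EXTENSION               65535

$ CREATE /FDL=BIGDUMP.FDL DKA100:[DUMPS]TABLE1_UNL.DAT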

Does anyone have any thoughts on the matter?

We are on RDB v7.1; Oracle's web site suggests v7.2.2 is the latest, and the 3rd edition RDB Comprehensive Guide book I've recently purchased suggests it is v8 (though this might be RDB on NT).

Mark
12 REPLIES
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

RDB V8 was RDB on NT and is not around any more.

There is a RDB specific email list - see
http://www.jcc.com/jccs_oracle_list_servers.htm
with previous postings at
http://www.jcc.com/searchengine/

I wonder if there is a magic RDM logical name that specifies the extend quantity
____________________
Purely Personal Opinion
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

It appears you can do
RMU/UNLOAD ... /ALLOCATION=65000/EXTEND=65000

but I don't see a /CONTIGUOUS qualifier
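So for the ~11M-block file, something along these lines might be worth trying (database root, table and output names here are placeholders, and the values are in blocks - no guarantee the qualifiers behave exactly this way on V7.1):

$ RMU/UNLOAD /ALLOCATION=65000 /EXTEND=65000 -
        PRODDB_ROOT:PRODDB.RDB TABLE1 DKA100:[DUMPS]TABLE1_UNL.DAT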
____________________
Purely Personal Opinion
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

Mark,

Ian wrote


>>>
It appears you can do
RMU/UNLOAD ... /ALLOCATION=65000/EXTEND=65000
<<<

Well, although NOT on RDB, this sounds VERY familiar.
It means that the file-to-be-generated will get that initial allocation, and after that extents as needed, each 65000 blocks, contiguous-best-try.
If you start on a "clean" disk (making chunks of 65000 blocks possible), that will mean you have very little fragmentation.
For us (disclaimer: NO guarantee here!!) that worked quite well. Worth a try at least, I would say.


Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

Mark,

Forcing contiguity just creates another potential error condition (and in this case, an unnecessary one). Specifying generous initial allocations and extents should be adequate.

If you can't specify an ALQ, compensate with an even larger DEQ - perhaps even larger than you expect the file to grow.

Remember that as long as the file fits in a single header, you're not likely to feel much performance impact from fragmentation. Use DUMP/HEADER to see the nature of your fragmentation.
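For example (file name illustrative; /BLOCK=COUNT:0 suppresses the data blocks so you get just the header, and the map area at the end lists the retrieval pointers - one per extent):

$ DUMP /HEADER /BLOCK=COUNT:0 DKA100:[DUMPS]TABLE1_UNL.DAT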

A crucible of informative mistakes
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

Note that you can take back what you "gave away too much" by doing SET FILE/TRUNCATE after the dump has finished.
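i.e. something like this (file name illustrative; the file must be closed at the time, and truncation releases the allocated-but-unused blocks beyond end-of-file):

$ SET FILE /TRUNCATE DKA100:[DUMPS]TABLE1_UNL.DAT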

Wim
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

Also: did you check the allocation and other parameters in the FDL used to convert it to an indexed file? Also try increasing /WORK_FILES to speed things up.
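E.g. (file and device names illustrative; CONVERT accepts up to 10 work files):

$ CONVERT /FDL=TABLE1_IDX.FDL /WORK_FILES=4 -
        DKA100:[DUMPS]TABLE1_UNL.DAT DKA200:[IDX]TABLE1.IDX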

Wim
Frequent Advisor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

>Forcing contiguity just creates another potential error condition (and in this case, an unnecessary one).

Unnecessary in that it can be avoided by NOT forcing contiguousness and using BEST-TRY, or unnecessary because a completely contiguous file is not of benefit?


>Remember that as long as the file fits in a single header, you're not likely to feel much performance impact from fragmentation.
>Use DUMP/HEADER to see the nature of your fragmentation.

When I initially looked at the file, there were (I think) some 1400+ extents, and indexing of the file (with CONVERT) took "forever".

A disk cleanup resulted in significantly more contiguous disk space free on the volume, and with ~850 extents, the indexing seemed to have reverted back to the "usual"/commonly-observed amount of time.

Whilst looking on the system in question just now, to see if I still had an /OUTPUT file of a DUMP/HEADER from weeks ago to confirm the number of extents (alas, no), I found that the indexing is still ongoing at this time of the morning - probably because 772 of the 1036 extents are 2048 blocks in size (the default extend size).

How does fragmentation of the file header dramatically alter the performance of file access, compared to significant fragmentation of the file itself?

[I thought the file header was cached, no? In any case, header fragmentation is only likely to span a few blocks, whereas file fragmentation is likely to have the disk heads thrashing backwards and forwards]
Frequent Advisor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

>Also : did you check the allocation and other parameters in the fdl used to convert it to an indexed file ? Also try increasing /work to speed things up.

The parameters in the FDL file aren't the best, but they're significantly better than the defaults that RDB is using (in this particular case, with a large table).

It's not clear (from my last update) whether the performance benefit came from the source of the CONVERT being more contiguous, or from the output file being more contiguous (I would hazard a guess that it is more to do with the contiguousness of the input file and the sort work files (of which there are 4) than with the CONVERT output file).
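If it is the sort work files, one thing that might help is spreading them across separate spindles via the SORTWORKn logicals before running the CONVERT (device and directory names here are placeholders for whatever disks are free):

$ DEFINE SORTWORK0 DKA300:[SORTWORK]
$ DEFINE SORTWORK1 DKA400:[SORTWORK]
$ DEFINE SORTWORK2 DKA500:[SORTWORK]
$ DEFINE SORTWORK3 DKA600:[SORTWORK]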

Thanks for the thought anyway!
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

The indexing (or CONVERT) only reads the dump file once; I can't believe that would be delayed for hours by a fragmented file. I still think the indexed output file is more likely to be the cause. Could you post the FDL?

Maybe the convert normally goes fast due to caching, but when you have two converts running at the same time the cache is too small, and that is the cause of the slow convert.

Wim