
Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

 
Phil.Howell
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

If the indexed files being created have more than one key, you may get a reduction in run time by replacing the CONVERT with a SORT on the primary key to a sequential output file, followed by a CONVERT/NOSORT using your FDL (see the sketch below).
Note that sequential files can take up a lot more space than indexed files, as data compression is not used.
You could also optimise your FDL based on indexed files from previous runs.
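
A minimal DCL sketch of that two-step approach; the file names are placeholders, and the key position/size assume a 10-byte primary key at the start of the record:

$ SORT /KEY=(POSITION:1, SIZE:10) UNLOADED.DAT SORTED.SEQ
$ CONVERT /NOSORT /FDL=TARGET.FDL /STATISTICS SORTED.SEQ TARGET.IDX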
Phil
Wim Van den Wyngaert
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

If it is still a problem: SORT uses memory based on the WSEXTENT of the process. Increase WSEXTENT and the sort will speed up, but make sure you do not run into the maximum PGFLQUOTA.
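
For example (the extent value is illustrative only; pick one that fits within your PGFLQUOTA):

$ SHOW WORKING_SET                  ! current limit/quota/extent
$ SET WORKING_SET /EXTENT=200000    ! raise the extent for this process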

Fwiw

Wim
Hein van den Heuvel
Honored Contributor

Re: Oracle RDB v7.1 - Possible to fudge RMU UNLOAD output file to be contiguous and large ALQ size?

>> Let's just say the default ALQ is 10000, and the DEQ is 5000, then the "logical" allocation of disk blocks would be:

Yes, when two files are concurrently growing then typically their allocations will intermingle / alternate.


>> What I'm wondering is if RMU itself can be told to request that a file have an initial ALlocation Quantity of X, and Default Extend Quantity of Y, and that the file should be contiguous

RMU/Unload 7.1 has the /ALLOC and /EXTEN switches. Typically that is good enough.
I would not worry about contiguous or not.
Allocation is the critical one, the one to get roughly right. The default extend of 2048 is not bad, but for a million+ block file you may want to use 65K.
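
A hedged sketch of how that might look; the full qualifier spellings (/ALLOCATION, /EXTENSION) are assumed from the abbreviations above, and the root/table/file names and sizes are placeholders:

$ RMU/UNLOAD /ALLOCATION=11000000 /EXTENSION=65000 -
      MYDB_ROOT MY_TABLE UNLOADED.DAT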

I understand your concern though.
It is kind of a shame to have this fragmented intermediate file.
Have you considered an LD drive or two to hold the temp files?
( http://www.digiater.nl/lddriver.html )

You could allocate an LD on top of a good-sized contiguous file. You may want to switch off highwater marking during the create, and not have highwater marking on the LD volume.
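
A sketch of that setup, assuming the LDdriver kit from the URL above is installed; the container size, unit number, and labels are placeholders, and the exact LD command syntax should be checked against the kit's documentation:

$ LD CREATE BIGTMP.DSK /SIZE=12000000 /CONTIGUOUS  ! contiguous container file
$ LD CONNECT BIGTMP.DSK LDA1:                      ! present it as a disk
$ INITIALIZE /NOHIGHWATER_MARKING LDA1: TMPVOL
$ MOUNT /NOASSIST LDA1: TMPVOL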

>> One of the output files is ~11M blocks alone, and these files are subsequently converted to indexed files (making them >= twice the size) using CONVERT.

Is the FDL file properly tuned?
You may want to review its allocations and bucketsizes.
Just take the FDL you have, tweak the record count to the value reported by CONVERT, and run EDIT/FDL/NOINTERACTIVE to get something to look at.
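
Something along these lines (file names are placeholders; whether /OUTPUT is the right way to capture the result should be verified against HELP EDIT/FDL):

$ COPY TARGET.FDL WORK.FDL
$ ! edit WORK.FDL: set the record count to the total CONVERT reported
$ EDIT/FDL /NOINTERACTIVE /OUTPUT=TUNED.FDL WORK.FDL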


Phil> replacing the convert with a sort on primary key to as sequential output file

I don't think that will help.

The RDB data may well be sorted by primary key already. In that case, be sure to tell CONVERT through the /NOSORT option.
With several alternate keys, you may want to check out /SECONDARIES.
And recent flavors of CONVERT will issue large I/Os (up to 127 blocks) holding multiple output buckets.

You _should_ direct CONVWORK and SORTWORK to devices other than the ones holding the input and output files.
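
For example (device and directory names are placeholders):

$ DEFINE CONVWORK  DISK2:[TMP]    ! CONVERT's work file
$ DEFINE SORTWORK0 DISK3:[TMP]    ! SORT work files can be spread
$ DEFINE SORTWORK1 DISK4:[TMP]    !   across several devices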

Mark>> or unnecessary because a completely contiguous file is not of benefit?

Completely contiguous does not help much.
Just avoiding multiple headers should help some.

Mark> I thought the file header was cached, no?

Within limits. Check out MCR SYSGEN SHOW/ACP ... ACP_HDRCACHE. The default is just 36, but any AUTOGEN run would have cranked that up. You may also try MOUNT/PROC=UNIQUE.
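
To check the current setting and mount with a per-volume ACP (device and label are placeholders):

$ MCR SYSGEN SHOW /ACP                      ! look for ACP_HDRCACHE
$ MOUNT /PROCESSOR=UNIQUE DKA100: DATAVOL   ! dedicated ACP for this volume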

Hope this helps some,

Hein van den Heuvel
HvdH Performance Consulting