Operating System - OpenVMS

SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

 
Mark Corcoran
Frequent Advisor

SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

I've been looking at a problem to do with the post-processing of table files unloaded from RDB using RMU.

The table content is in the order in which data was added to the table, not sorted in order.

For some tables, the amount of time taken to "index" them after they are unloaded can be significantly longer than "normal", regardless of whether there are significantly more or fewer rows in the table.

I had been convinced that the issue was the fact that:

a) all of the tables were being unloaded at the same time
b) ...and to the same "disk" (it's actually a shadowset "disk" in a SAN farm, which I'm not responsible for, so who knows how exactly it is configured)
c) after the unload is finished, where a table needed to be indexed, a CONV/SORT would be performed on it, with the output file (and sometimes the sort work and convert work files) being output to the same disk as the input file.

Having examined the input file to CONV /SORT for one table (on a few occasions where the CONV had been running for 12+ hours), I was convinced that the problem was that the input file was badly fragmented (~2000 fragments, for ~45m records across 11.5m blocks).

I've since manually copied the fragmented file to another empty disk, making the file contiguous.

Attempting the CONVERT /SORT still causes it to take hours, or at least, longer than I'm prepared to wait, so I abort it.

Before aborting, I set the protection on the sort and convert work files so that CONVERT wouldn't tidy them up on rundown; analysing them shows they have ~200 fragments each.

SORTing the input file first of all (the sorted output file is also contiguous) takes about 13 minutes, and then CONVERT /NOSORT takes about a further 45 minutes.
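(For clarity, the two approaches being compared look roughly like this - the file, FDL and key details below are placeholders, not the real names:)

$! Approach 1: let CONVERT do the sorting (the slow case)
$ CONVERT /SORT /FDL=TABLE.FDL /STATISTICS=FULL TABLE.UNL TABLE.IDX
$!
$! Approach 2: pre-sort on the primary key, then load without sorting
$ SORT /KEY=(POSITION:1,SIZE:12) TABLE.UNL TABLE_SORTED.UNL
$ CONVERT /NOSORT /FDL=TABLE.FDL /STATISTICS=FULL TABLE_SORTED.UNL TABLE.IDX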

I aborted both the SORT and the CONVERT /NOSORT operations (on separate runs), after changing the protection on the work files, and could see that they had a similar number of fragments when just a CONVERT /SORT was attempted.

I've tried CONVERT /SORT with differing numbers of work files, and using a different disk for SYS$SCRATCH, but it doesn't seem to make any difference.

All the evidence appears to point to CONVERT/SORT being manifestly less efficient than SORT + CONVERT/NOSORT.

I can't find anything in the documentation that would explain why this should be the case (in both the CONVERT /SORT and /NOSORT cases, the same FDL file is used) - I'm convinced that it shouldn't be the case.

Does anyone have any suggestions as to why I might be observing the problems I'm seeing?


Mark
Jan van den Ende
Honored Contributor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

Mark,

I am not really sure that it will be THE issue, but the large number of fragments for the work files DOES play a part.
Can you use a drive with EXTEND size in the order of the file size for workfiles? That would lead to workfiles with at most 3 fragments, making a significant ( but how much ? ) difference.
And, just to make sure, have a second look at highwatermarking being turned OFF!!

hth

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Hoff
Honored Contributor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

In no particular order...

If you're going to look to run with the older OpenVMS tools and settings, then look to create the output file as one huge hunk. And look to allocate the file in (larger) hunks.

45 million records of about 150 characters isn't all that much these days. See if you can sort it in memory. (This depends on which box you're dealing with; Alpha is slow, as are the older Itanium processors. And many of these boxes are under-configured for physical memory and for working set.)

I'd look at several things here. Working set as mentioned; both what this process is authorized and what the system is configured to allow. I'd look at parallelism; Knuth et al. Split these 45 million records across however many processors you have. The standard VMS libraries aren't parallelized, AFAIK. (But just for giggles, see if Hypersort helps.)
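(For anyone trying the Hypersort suggestion: as far as I know it is normally enabled by pointing the SORTSHR logical at the Hypersort image, roughly as below - verify against your own system's documentation first.)

$ DEFINE SORTSHR SYS$LIBRARY:HYPERSORT.EXE    ! use Hypersort in place of the default sort image
$ SORT /STATISTICS BIGFILE.UNL BIGFILE_SORTED.UNL
$ DEASSIGN SORTSHR                            ! back to the standard sort afterwards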

OpenVMS I/O isn't fast, and older hardware can be a bottleneck. Fragmented disks (which you appear to have here) and dinky allocation sizes make this worse, and the typical SAN stuff is fairly glacial. The typical 2 Gb HBAs and EVAs are fairly slow these days.

Make sure you don't have highwater marking (HWM) lit.
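(A quick way to check, and to disable it where appropriate - sketched from memory, and the device name is made up:)

$ SHOW DEVICE/FULL DSA101:                  ! volume status should list "file high-water marking" if it is on
$ SET VOLUME /NOHIGHWATER_MARKING DSA101: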

I'd guess that the core limit here is I/O or paging, but I'd profile the process activity and see where it's spending its time.

If your key strings are close or if your FDL design doesn't allow for the numbers of records here, you can end up with some pretty nasty RMS file structures, too. FDL dates back to when 45 million was very big, so its defaults might need explicit coaching to get you decent performance here.

Tokyo Cabinet or other such key-value storage may well be faster than messing around with the classic stuff. Depending on what you're up to, some of the newer tools can segment this stuff across hosts nicely, too; you can scale up smaller boxes. (I haven't ported it, nor have I seen a Tokyo Cabinet port for OpenVMS, though a port is reportedly in progress. It does run on the other boxes I deal with, and it's fast.)

http://00f.net/2009/an-overview-of-modern-sql-free-databases
http://1978th.net/tokyocabinet/
http://www.johndapps.com/
Mark Corcoran
Frequent Advisor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

Gak! I had attempted to reply earlier, and after clicking on Submit, I focused on doing some other work; I came back to the session to find the browser blank but with no indication that the Forums had accepted my update (I have a vague recollection that it doesn't like you taking a long time between Reply and Submit); I then did some more browsing, so can't find it in my history.

Oh well, here's having a go at remembering what I put:


Thanks Jan (and now Stephen).

I don't think in this case that fragmentation is really the issue - from a clean disk with essentially nothing else on it
(apart from BITMAP.SYS, 000000.DIR yada yada), I have performed SORT & CONVERT /NOSORT, got the stats, deleted the files, then performed a CONVERT /SORT, got the stats and compared them.

The volume characteristics (which would determine the extent size of files created on it) are alas not within my team's purview, but that wouldn't stop me from doing a SET RMS_DEFAULT/EXTEND= prior to the CONVERT command...
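(Something along these lines, for example - the extend quantity here is just an illustrative value:)

$ SET RMS_DEFAULT /EXTEND_QUANTITY=65000    ! per-process default extend, in blocks
$ SHOW RMS_DEFAULT                          ! confirm the new setting
$ CONVERT /SORT /FDL=TABLE.FDL TABLE.UNL TABLE.IDX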

With the input & output and work files all on the same disk, and the input file being contiguous, both methods resulted in output files that were contiguous too (in the SORT scenario, there's obviously 3 files - input, sorted, sorted & converted - these were all contiguous).

The fragmentation of the sort work files appears to be similar between methods (~200 fragments each).

However, SORT & CONVERT/NOSORT appears to be consistently quicker/more efficient than CONVERT /SORT

I've looked at the 7.3 RMS Record Management Services Reference Manual, and I have to say I'm confused by the wording used when it describes CONVERT, which probably doesn't help me work out how it's meant to function...

"1. The primary keys are sorted...

...If your input records are already ordered by the primary key or if the primary key of the input and output files is the same, you should specify /NOSORT.
This qualifier ensures that the primary keys are not sorted again."

The input file in my case is not indexed/keyed.

However, the manual talks about the primary key of the input/output files being the same (in which case, use /NOSORT), but I'd infer that an input file can only have a primary key that matches the primary key of an output file if the input file is actually indexed/keyed.

It also says that the /NOSORT qualifier ensures that the primary keys "are not sorted again", but it's not really clear in which step (of the 6 in the manual) that the primary keys might be sorted a second or subsequent time.

It also goes on to talk about "secondary key" and "the first alternate key" -
Is "the first alternate key" the "secondary key" (and the second alternate key is the "tertiary key")?
Or is "the first alternate key" the first key after the secondary key (i.e. the tertiary key, or numerically speaking, key=2)?

I've pasted the FDL below (with key &etc. names changed, for security reasons).

I can't see anything obviously wrong about it, except for the fact that it has not been regularly updated as data has grown over the years (the Area 0 allocation should be ~11500000 instead of 5000000).

Perhaps the contents of the FDL would make it obvious as to why CONVERT /SORT appears to be slower - I don't find the description in the RMS manual particularly helpful in visualising how CONVERT does what it does (an example would have been very useful).

TITLE "X.unl A B indexing"

IDENT "22-JAN-2003 12:40:51 OpenVMS FDL Editor"

SYSTEM
SOURCE "OpenVMS"

FILE
NAME "X.unl"
ORGANIZATION indexed

RECORD
CARRIAGE_CONTROL carriage_return
FORMAT variable
SIZE 200

AREA 0
ALLOCATION 5000000
BEST_TRY_CONTIGUOUS yes
BUCKET_SIZE 18
EXTENSION 65520

AREA 1
ALLOCATION 9828
BEST_TRY_CONTIGUOUS yes
BUCKET_SIZE 18
EXTENSION 2466

KEY 0
CHANGES no
DATA_AREA 0
DATA_FILL 100
DATA_KEY_COMPRESSION yes
DATA_RECORD_COMPRESSION yes
DUPLICATES no
INDEX_AREA 1
INDEX_COMPRESSION no
INDEX_FILL 100
LEVEL1_INDEX_AREA 1
NAME "A"
PROLOG 3
SEG0_LENGTH 12
SEG0_POSITION 0
TYPE string

KEY 1
CHANGES no
DATA_AREA 0
DATA_FILL 100
DATA_KEY_COMPRESSION yes
DATA_RECORD_COMPRESSION yes
DUPLICATES no
INDEX_AREA 1
INDEX_COMPRESSION no
INDEX_FILL 100
LEVEL1_INDEX_AREA 1
NAME "B"
PROLOG 3
SEG0_LENGTH 20
SEG0_POSITION 42
TYPE string

KEY 2
CHANGES no
DATA_AREA 0
DATA_FILL 100
DATA_KEY_COMPRESSION yes
DATA_RECORD_COMPRESSION yes
DUPLICATES yes
INDEX_AREA 1
INDEX_COMPRESSION no
INDEX_FILL 100
LEVEL1_INDEX_AREA 1
NAME "C"
PROLOG 3
SEG0_LENGTH 11
SEG0_POSITION 68
TYPE string
Mark Corcoran
Frequent Advisor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

Incidentally, in reply to some of Stephen's points:

>If you're going to look to run with the older OpenVMS tools and settings, then look to create the output file as one huge hunk. And look to allocate the file in (larger) hunks.
I had actually been working on an update to the command files used to unload and index the files, so that they would specify suitable (per-table) values for the initial allocation and extension sizes in RMU UNLOAD, and then a similar thing with the CONVERT (modifying the FDL file as the initial unload file gets bigger).

Some colleagues in the meantime had been messing around with SORT & CONVERT/NOSORT, and found that it was quicker - I couldn't believe that this could be the case, because CONVERT uses the same old SORT shareable image as SORT does.

I was convinced that they weren't comparing apples & pears, and that fragmentation was still an issue, until I managed to reproduce it myself.


>But just for giggles, see if Hypersort helps
I seem to remember reading that HYPERSORT can't be used with indexed files (does this apply to the input file or the output or both)?


>Fragmented disks (which you appear to have here) and dinky allocation sizes make this worse, and the typical SAN stuff is fairly glacial.
Hmm. Not that you knew, but I was working on changing this for all the table unloads, so that the initial allocation size would be commensurate with the current table unload size, and the extent size either 10% of the unload size, or 65000 whichever was the smaller.

It's interesting to note that the sort work files appear to be similarly fragmented irrespective of which method is chosen, and the output file(s) remain contiguous (on this "test" disk I'm using), yet CONVERT/SORT is always the slower of the two.

Someone has suggested that "these problems always [seem to] go away after a reboot, and that the memory pools are fragmented, causing issues for sort"; I'm a hardened sceptic however, and I think it's just a case of coincidence resulting in people adding 2 + 2 to make 5.


>Tokyo Cabinet or other key-value storage such may well be faster than messing around with the classic stuff
Thanks for the info on this and the URLs; unfortunately, it's the normal case of "money & is it approved" vs "Yes it might well fix the problem".
Hoff
Honored Contributor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

Some tests...

Burn some disk space and run three one-key files on three disks.

Run a tuning pass on a fully-populated file and see what that turns up for an index depth suggestion.

Depending on what you're up to, try turning off key compression.

Check non-paged and related.

Given the lack of mention of details and the mention of price sensitivity, I'd infer this is Alpha (read: slow) hardware, too. It'd be entertaining trying a prototype of this on one of the mid-vintage Itanium boxes that show up fairly regularly for under US$1000 on the used market. Between DAS SAS I/O and the faster processors and the extra memory, these used Itanium boxes can blow away most Alpha boxes. (Though a ProLiant x86-64 box will likely beat Itanium on price-performance, if not on single-stream performance.)
Hein van den Heuvel
Honored Contributor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

I don't have enough time right now but want to make a few quick observations to help a little perhaps.

- OpenVMS version is important.
-- CONVERT changed from file mode sort to record mode
-- CONVERT added stats.
I think this was all back in 7.2, but check.

>> longer than I'm prepared to wait, so I abort it.

It might have been 'almost done'. :-). I typically use PROCIO to gauge where I am, or DIR/SIZE=ALL on the work files, or ANAL/SYST... SHOW PROC/RMS=(FAB,RAB) to actually understand how far it is, how far to go.
Of course with a current CONVERT and interactive session, you can just use ^T (Control-T) to have Convert tell you where it thinks it is.

Use CONVERT/STAT=FULL to see how much TIME the SORT within CONVERT took.

Try with a smaller cut of the input file?

Use CONVERT/SECO=3 for this file to make CONVERT use just one, bigger, CONVWORK.

Maybe more later...

Hein
Hein van den Heuvel
Honored Contributor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

A few more minutes, so I'll tackle Mark's reply.

>> Gak! I had attempted to reply earlier, and after clicking on Submit, I focused on doing some other work; come back to the session to find the browser blank

Happened a good few times to me. I try to remember to ^A ^C before hitting submit. If I forget, and it failed, then ALT-back to go back to the prior pages has often brought back my work. Too late for you now....

>> I have performed SORT & CONVERT /NOSORT, got the stats, deleted the files, then performed a CONVERT

CONVERT/STATS=FULL gives nice details.

>> "1. The primary keys are sorted...

...If your input records are already ordered by the primary key

or

if the primary key of the input and output files is the same, you should specify"

>> The input file in my case is not indexed/keyed.

Fine. Often convert is used from old-indexed to new-indexed.

The 'or' in the manual sentence is critical.
The first part has an implied expectation of a sequential input file.

>> I'd infer that an input file can only have a primary key that matches the primary key of an output file if the input file is actually indexed/keyed.

Right, a common situation, but not your case.


>> primary keys might be sorted a second or subsequent time.

Primary keys are sorted once, or not at all if you so request.

>> Is "the first alternate key" the "secondary key" (and the second alternate key is the "tertiary key")?

Alternate and secondary are interchangeable for convert. I guess they started with secondary as different from primary. But "3rd secondary" sounds too odd, so they called it alternate from then onwards?

> the Area 0 allocation should be ~11500000 instead of 5000000

Well, that's a pretty big difference. That's a lot of potential, and avoidable, extents right there. It also suggests that the BUCKET size may be stuck at some past value.


>> Perhaps the contents of the FDL would make it obvious as to why CONVERT /SORT appears to be slower

Not really.

>> - I don't find the description in the RMS manual particularly helpful in visualising how CONVERT does what it does (an example would have been very useful).

Hmm, there may have been a release note when /SECON was implemented. $ HELP CONV FILE/SECON is a good read.


Hein
John Gillings
Honored Contributor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

Mark,

Forum timeouts and lost input are more common than successful posts from this side of the planet (great advertising for HP hardware!)

Standard practice is to select all text and copy before hitting "Submit". Sometimes hitting BACK will recover your text, otherwise you should still have it in the paste buffer.

A really dumb question about this issue... what does SORT of an indexed file actually achieve? I thought the whole point of an indexed file is that it's already sorted (at least on the primary key).

What keys are you sorting on? Maybe if you post the complete commands you're executing I'll understand.

One thing to sanity check is the LRL of the file you're sorting. If the LRL value of the input file is significantly higher than the actual length of the longest record, this can result in severe performance degradation of SORT or CONVERT/SORT (this was/is an issue with STREAM_LF files produced by C I/O). The output file from a CONVERT should have an accurate LRL, so if it's different from the input LRL, investigate the source of the file - why is the LRL inaccurate?
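(A quick DCL sanity check of the LRL, with a made-up file name:)

$ lrl = F$FILE_ATTRIBUTES("TABLE.UNL","LRL")   ! longest record length recorded in the file header
$ WRITE SYS$OUTPUT "LRL = ''lrl'"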
A crucible of informative mistakes
Phil.Howell
Honored Contributor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

Your converted file has two secondary keys; depending on the key data, this can impact performance.
You should create an intermediate fdl removing the B and C keys and convert to that first, then re-convert using your 3 key fdl.
You could also run this with the pre-sort as well for comparison.

Phil

Hein van den Heuvel
Honored Contributor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

Yes, pre-sorting can be faster as you noticed.

There are multiple reasons for this.

- CONVERT uses RMS record I/O to read the input file. Since you don't specify RMS settings nor the OpenVMS version, I have to guess; this means a 16-block (old VMS) or 32-block buffer? Try $ SET RMS /BLOC=127/BUF=8

- Because CONVERT uses RECORD IO, it will be in and out of EXEC mode 'all the time' and burn a good bit more CPU than anything else (SORT, C-RTL, COBRTL) reading this file.

- CONVERT typically uses an RFA sort for the primary KEY. So after the SORT it re-reads all input, randomly, using the potentially small and few buffers.

- CONVERT will switch to a record sort, if there are multiple input files. For grins, add ",NL:" to your input file spec! Of course, with that the CONVWORK file will become 200/6 times bigger! Depending on where the work file lives, that may be faster or slower.

- SORT and HYPERSORT use 127-block IOs, independent of RMS, and have their own async/multi-buffer handling.


I checked with LD TRACE. You may want to do that also, with a chunk of the input file.

As input I used an expanded SYS$LIBRARY:VMS$PASSWORD_DICTIONARY.DATA/FDL="RECORD; SIZE 200; FORM FIX"/PAD

RMS/BLOC=32 direct-io load = 589, sort = 556

RMS/BLOC=127 direct-io load = 196, sort = 162

NL:, BLOCK=127 direct-io load = 383, sort = 582

NOSORT, BLOCK=127 direct-io load = 153

f$file("lda6:[000000]test.dat","EOF")/127 = 132

Sort, direct IO = 332 (irrespective of RMS settings)
HYPERSORT direct IO = 288


HYPERSORT cannot READ indexed files. It can be used by CONVERT to do its work just fine.


Memory-pool and reboot? No.


John, Yeah, improper LRLs can block up RECORD sorts. Convert's RFA sort would not be affected.

Phil, No. Think about it.

Mark, use /STAT=FULL, use ^T.

Cheers,
Hein.
Mark Corcoran
Frequent Advisor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

JOHN/HEIN: yep, I normally do select all of the text before attempting to submit, because of this issue of updates failing.

Unfortunately, it's been a while since I last used Forums, and forgot that this happened. Maybe I should get a label stuck on my monitor to remind me ;-)


STEPHEN/HEIN: The hardware on which the unload & indexing takes place will depend on which queue Scheduler happens to have submitted the job to, but whilst doing manual testing, I was using the faster of the nodes in the cluster...

It's a 24-CPU Alphaserver GS1280 7/1300, running OVMS v7.3-2 (yes, yes, I know).


HEIN: I guess using the PROCIO and DIR/SIZE=ALL might be useful in determining how far you are through processing, if you know how much I/O or disk space the files normally end up taking :-D

However, all I know is that when it works (using CONVERT/SORT) it takes <=7hrs, and when it doesn't work (using CONVERT/SORT) it can take >7hrs; SORT + CONV/NOSORT takes ~1hr.

After I'd finished trying to make the CONVERT/SORT as equivalent to the other method as possible (reducing the variables), it was still running after several hours.

I was curious as to how you derived the /SECONDARY=3 value - as I understand it, the /SECONDARY relates to "alternate keys" and by definition, would preclude the primary key.

This particular set of indexing uses 3 keys in total, so I would have assumed the value should be 2?

It's interesting to note that you mention this would make CONVERT use "just one, bigger CONVWORK" - I'm pretty sure I read a similar statement in the manual, but I've not seen CONVERT /SORT use more than one CONVWORK file - what I have seen however, is it using two versions of each SORTWORK file...


JOHN: In answer to your question "what does SORT of an indexed file (in this case) actually achieve?"...

The default for CONVERT is to do /SORT unless you specify otherwise.

This particular example table has three keys because three different reports used the file to convert one "identifier" to another, and then use that "identifier" to look up another unloaded & indexed table.

e.g. let's say you have indices on columns A B and C in this unloaded/indexed table, but you have a report that needs to take identifier "A" and determine what its "Z" identifier value is.

However, column "Z" doesn't exist in this table; in order to get the "Z" value, you have to use say the "B" value to look up another unloaded & indexed table, which gives you a "G" value that you use to
index a third unloaded & indexed table to finally get your "Z" column/identifier value.

As it transpires (from discussing the history of this particular table), 2 of the reports no longer use the unloaded/indexed tables, but have been rewritten to access the live database.

The third report is written in C; I've not yet looked at the code to see what it does or how it does it, but apparently, if the indexed file is NOT sorted, it takes about 5 days to run instead of ~7 hours (I'm told it doesn't just simply fopen() it as a sequential file, and that it does use RABs and FABs, but I don't know any more than that).

Also, if the input file does not have records in sorted order, CONVERT /NOSORT will report lots of informationals - "%CONVERT-I-SEQ, record not in order". I don't think this makes it problematic (because you'll still have it indexed), but it does mean either the job log file gets very big, or you have to turn off messages for the duration of the CONVERT, which may mean losing diagnostic info if the CONVERT fails. I've also had unconfirmed reports that for each record that results in an informational message, the record doesn't in fact appear in the output file (not tried it myself, so I can't confirm).

You also mentioned the LRL - the LRL is set to 122, which is correct (RMU UNLOAD generates files "properly" through RMS, rather than fputs() calls in C :-D)


ALL: I am in the process of doing a number of trials on different sizes of input files, so I'm not being ignorant and not responding - I'm just trying to gather some suitable stats to report back here.

I have already noticed however, that the elapsed time for the sort of key 0 seems to be directly proportional to the number of records - 10m records takes ~20 times as much as 500k records.

However, the key 0 load time looks like it might be on some kind of exponential "curve", climbing steeply the more records are in the input file.

It doesn't appear to make any (significant) time difference as to whether or not I reduce the FDL file just to the primary key, or (when having all three keys) use /SECONDARY=3.

Not sure how the "sort tree size" is derived, but again, this doesn't appear to change regardless of the number of records in the input file.

Here is the stats output from indexing on the primary key only (input file reduced to 10m records):

$ convert x.dat x.tmp /FDL=single_key_index.fdl /stat=full

Finished Load of Key 0 at 14:52:31.95

SORT Statistics for Key 0
Records Sorted: 10000000 Initial Runs: 255
Sort Tree Size: 19674 Work File Allocation: 422210
Merge Order: 20 Merge Passes: 14
Direct I/O Count: 79774 Buffered I/O Count: 682
Elapsed Time: 0 00:00:56.58 CPU Time: 0 00:00:45.93
Page Faults: 108

LOAD Statistics for Key 0
Direct I/O Count: 10102898 Buffered I/O Count: 64
Elapsed Time: 0 00:07:49.28 CPU Time: 0 00:07:20.80
Page Faults: 311

CONVERT Statistics
Number of Files Processed: 1
Total Records Processed: 10000000 Buffered I/O Count: 775
Total Exception Records: 0 Direct I/O Count: 10182698
Total Valid Records: 10000000 Page Faults: 435
Elapsed Time: 0 00:08:45.96 CPU Time: 0 00:08:06.74
Hein van den Heuvel
Honored Contributor
Solution

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

[just happened to me. Blank screen after submit. Typed too long? back + submit fixed it this time]

>> It's a 24-CPU Alphaserver GS1280 7/1300, running OVMS v7.3-2 (yes, yes, I know).

Not a bad box.


>> if you know how much I/O or disk space the files normally end up taking :-D

Back of the envelope calculations, early on, tend to give a rather accurate indication of the anticipated end time. blocks/io-sizes/times/io-counts.

>>(using CONVERT/SORT) it can take >7hrs; SORT + CONV/NOSORT takes ~1hr.

And that's critical information which we (me) don't dispute. Just want to explain, make understood. (rime?).

My money is on the default RMS buffers being too small.
What does SHOW RMS tell you? 16 blocks? 32 blocks?
So that's 4 or 8x more IOs than sort.
Plus convert reads twice. Another 2x.
Plus convert would typically cause more random-ish IOs, to which the storage subsystem may react poorly.


>> I was curious as to how you derived the /SECONDARY=3 value

No magic. It's larger than 1 which is all that matters, and it is often but not always a reasonable break-even point between too much data in convwork, versus too many scans.

>> so I would have assumed the value should be 2?

Sure. Or larger.

>> I've not seen CONVERT /SORT use more than one CONVWORK file -

It would only use one at a time.
Don't recall whether it truncates or deletes + creates. The listings could tell.

Or... use SET WATCH FILE/CLASS=MAJOR


> is it using two versions of each SORTWORK file...

Right. And for critical/frequent converts you may well learn how big they all will be and when they are written/read/deleted.

> The default for CONVERT is to do /SORT unless you specify otherwise.

Right... or when it detects that the keys are defined the same.


>>> if the indexed file is NOT sorted, it takes about 5 days to run instead of ~7 hours (I'm told it doesn't just simply fopen() it as a sequential file, and that it does use RABs and FABs, but I don't know any more than that).

That can probably be readily improved upon. It is rare that a well-designed job takes hours and hours to process a (large) file.

>> Also, if the input file does not have records in sorted order, CONVERT /NOSORT will report lots of informationals - "%CONVERT-I-SEQ, record not in order";

You don't want to do that. Specify /EXCEPTION if you must. And check the stats for in, out, exception.
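(For example, something along these lines - qualifier spelled out in full, file names made up:)

$ CONVERT /NOSORT /FDL=TABLE.FDL /STATISTICS=FULL -
          /EXCEPTIONS_FILE=TABLE_OUT_OF_ORDER.EXC -
          TABLE.UNL TABLE.IDX
$! Records rejected as out of order (or as duplicates) land in the .EXC file
$! rather than being reported one informational at a time.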

>> for each record that results in an informational message, the record doesn't in fact appear in the output file

Correct. Check stats or exception file.

>> I'm just trying to gather some suitable stats to report back here.

Excellent.

>> - 10m records takes ~20 times as much as 500k records.

Right. So just try with 1 million or so records.

>> However, the key 0 load time looks like it might be on some kind of exponential "curve", climbing steeply the more records are in the input file.

Randomness versus sequential could explain some. More likely the smaller files had all data in the XFC cache, so randomness was 'forgiven'. That's why I do my experiments on an LD drive mounted /NOCACHE or on a RMS drive.


>> It doesn't appear to make any (significant) time difference as to whether or not I reduce the FDL file just to the primary key, or (when having all three keys) use /SECONDARY=3.

Not surprised... if the RFA sort for the primary is the cause.

SORT Statistics for Key 0
Records Sorted: 10000000 Direct I/O Count: 79774

LOAD Statistics for Key 0
Direct I/O Count: 10102898 Elapsed Time: 0 00:07:49.28
CPU Time: 0 00:07:20.80


hmmm, that's a factor of 127 more IO on the load. I expected 8x - 16x. Very small rms default buffer size (SHOW RMS) ?

Also, the working set seems too small to allow SORT to be optimal.
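(A minimal check of the working set for the process running the CONVERT - the values below are only placeholders, and any increase has to stay within the account's authorized limits:)

$ SHOW WORKING_SET                                  ! current limit / quota / extent
$ SET WORKING_SET /QUOTA=131072 /EXTENT=524288      ! raise, in pagelets, if authorization allows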

Good luck,

Hein
Hein van den Heuvel
Honored Contributor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

"rms drive" --> RAM drive = MDxx:

Hein
Mark Corcoran
Frequent Advisor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

******************************************************************************************
CONCLUSION/SUMMARY
******************************************************************************************

I'm confident that in our case, the reason for the slow performance between (SORT & CONVERT /NOSORT) and CONVERT/SORT is down to the fact that

1) Unlike SORT (which defaults to /PROCESS=RECORD), CONVERT/SORT effectively sorts the input file using /PROCESS=TAG
2) This results in an initial pass through the input file to build a list (in SORTWORK files) of primary keys and pointers to locations in the input file, where those records can be found.
3) A second pass is then performed through the input file, where - because the file content is not in primary key order - blocks are not read in sequential order.
4) It's clear from the stats that irrespective of what RMS buffer size or quantity you specify, or whether or not the input or SORTWORK files are fragmented when CONVERT /SORT is performed, the number of I/Os required to perform this second pass is "the same".
5) However, with a "large" input file, the amount of time taken for I/O requests on the second pass
to complete is longer.
6) This is consistent with read requests not being satisfied from the XFC cache, and instead having to be serviced by reading "in a haphazard manner" from the input file.
7) This likely contributes to delays whilst disk heads move to the required location (along with delays introduced by using SAN disks rather than directly-attached ones).


Since there is still a need for the dumped/RMU-unload RDB table to be accessed as an index file*, the RMU table dump/unload file (a sequential file) still needs to be CONVERTed into an indexed file.

However, we know that attempting to CONVERT /NOSORT a file where the primary keys are not in sequence will result in (some) records not being written to the output file.

The only means of surmounting this is either to pre-sort the input file, or to get CONVERT to sort the input file during processing.

I've already established that in my environment, attempting to CONVERT /SORT a file of this size (partly because it is more than twice the size of the XFC cache) is a non-starter...

Once CONVERT has built a sorted list of primary keys, the time taken to complete the I/Os to then retrieve each record from the input file (in order to write it to the output file) would be unacceptable.

[And that is aside from adding more load to the system and to the SAN link & farm itself]

Therefore the only acceptable means of converting this sequential file containing an RDB table unloaded by RMU, is to pre-sort it (using SORT), and then use that as the input to CONVERT /NOSORT.


*(and with the records in sorted primary key order, for reasons yet to be investigated)




******************************************************************************************
DIAGNOSIS/EVIDENCE OF MY CONCLUSION
******************************************************************************************
After much wailing and gnashing of teeth, I'm satisfied that I now know the reason for the difference between SORT & CONVERT/NOSORT vs CONVERT/SORT...

Mainly as a result of Hein's input, I considered a number of possible factors.

A) RMS buffers - number and size thereof.

Examining the existing settings showed this:

MULTI- | MULTIBUFFER COUNTS | NETWORK
BLOCK | Indexed Relative Sequential | BLOCK
COUNT | Disk Magtape Unit Record | COUNT
Process 0 | 0 0 0 0 0 | 0
System 32 | 0 0 0 0 0 | 16

Prolog Extend Quantity QUERY_LOCK
Process 0 0 System
System 0 0 Enabled

CONTENTION_POLICY
Process System
System Never


Common factors across testing:

1) Input file is contiguous, sequential, and has 10m records
2) Output file is located on, and SYS$SCRATCH points to the same disk as the input file
3) Convert command used is $ CONVERT INPUT_FILE.DAT OUTPUT_FILE.DAT /FDL=3KEYS.FDL /STAT=FULL


a) The above ("default") RMS settings are in use.
Finished Load of Key 0 at 09:09:30.00

SORT Statistics for Key 0
Records Sorted: 10000000 Initial Runs: 255
Sort Tree Size: 19674 Work File Allocation: 422210
Merge Order: 20 Merge Passes: 14
Direct I/O Count: 79777 Buffered I/O Count: 682
Elapsed Time: 0 00:00:47.98 CPU Time: 0 00:00:43.90
Page Faults: 100

LOAD Statistics for Key 0
Direct I/O Count: 10114452 Buffered I/O Count: 1057
Elapsed Time: 0 00:09:10.21 CPU Time: 0 00:08:49.45
Page Faults: 414

Finished Load of Key 1 at 09:10:04.90

SORT Statistics for Key 1
Records Sorted: 10000000 Initial Runs: 312
Sort Tree Size: 14050 Work File Allocation: 546626
Merge Order: 0 Merge Passes: 0
Direct I/O Count: 2 Buffered I/O Count: 3
Elapsed Time: 0 00:00:00.01 CPU Time: 0 00:00:00.00
Page Faults: 0

LOAD Statistics for Key 1
Direct I/O Count: 15676 Buffered I/O Count: 111
Elapsed Time: 0 00:00:34.87 CPU Time: 0 00:00:29.26
Page Faults: 0

Finished Load of Key 2 at 09:13:14.17

SORT Statistics for Key 2
Records Sorted: 10000000 Initial Runs: 0
Sort Tree Size: 10002448 Work File Allocation: 0
Merge Order: 0 Merge Passes: 0
Direct I/O Count: 2769 Buffered I/O Count: 7
Elapsed Time: 0 00:00:53.09 CPU Time: 0 00:00:52.21
Page Faults: 269178

LOAD Statistics for Key 2
Direct I/O Count: 1671 Buffered I/O Count: 3
Elapsed Time: 0 00:02:16.17 CPU Time: 0 00:02:02.46
Page Faults: 58

CONVERT Statistics
Number of Files Processed: 1
Total Records Processed: 10000000 Buffered I/O Count: 1916
Total Exception Records: 0 Direct I/O Count: 10214378
Total Valid Records: 10000000 Page Faults: 269797
Elapsed Time: 0 00:13:42.49 CPU Time: 0 00:12:57.35


b) RMS settings changed with the following command:
$ SET RMS_DEFAULT /BLOCK_COUNT=127 /EXTEND_QUANTITY=65376 /BUFFER_COUNT=255 /INDEXED /SEQUENTIAL

[65376 for the extend quantity rather than 65535, because it is the largest multiple of 288 (the
cluster size of the disk on which the output file is generated) that does not exceed 65535]
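(The arithmetic in DCL, for anyone checking - integer division truncates:)

$ cluster_size = 288
$ extend = (65535 / cluster_size) * cluster_size
$ SHOW SYMBOL extend      ! EXTEND = 65376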

The above changes are perhaps at the extreme end (max buffers * max size) but it actually caused
the key 0 load time to double.


Finished Load of Key 0 at 08:52:58.72

SORT Statistics for Key 0
Records Sorted: 10000000 Initial Runs: 255
Sort Tree Size: 19674 Work File Allocation: 422210
Merge Order: 20 Merge Passes: 14
Direct I/O Count: 23388 Buffered I/O Count: 683
Elapsed Time: 0 00:01:07.16 CPU Time: 0 00:00:51.12
Page Faults: 2007

LOAD Statistics for Key 0
Direct I/O Count: 9918781 Buffered I/O Count: 1069
Elapsed Time: 0 00:21:48.72 CPU Time: 0 00:20:57.03
Page Faults: 414

Finished Load of Key 1 at 08:53:35.06

SORT Statistics for Key 1
Records Sorted: 10000000 Initial Runs: 312
Sort Tree Size: 14050 Work File Allocation: 546626
Merge Order: 0 Merge Passes: 0
Direct I/O Count: 2 Buffered I/O Count: 3
Elapsed Time: 0 00:00:00.01 CPU Time: 0 00:00:00.00
Page Faults: 0

LOAD Statistics for Key 1
Direct I/O Count: 15675 Buffered I/O Count: 111
Elapsed Time: 0 00:00:36.32 CPU Time: 0 00:00:27.69
Page Faults: 0

Finished Load of Key 2 at 08:55:57.14

SORT Statistics for Key 2
Records Sorted: 10000000 Initial Runs: 0
Sort Tree Size: 10002448 Work File Allocation: 0
Merge Order: 0 Merge Passes: 0
Direct I/O Count: 2769 Buffered I/O Count: 7
Elapsed Time: 0 00:00:19.58 CPU Time: 0 00:00:18.42
Page Faults: 39138

LOAD Statistics for Key 2
Direct I/O Count: 1673 Buffered I/O Count: 3
Elapsed Time: 0 00:02:02.49 CPU Time: 0 00:01:58.23
Page Faults: 6

CONVERT Statistics
Number of Files Processed: 1
Total Records Processed: 10000000 Buffered I/O Count: 1929
Total Exception Records: 0 Direct I/O Count: 9962320
Total Valid Records: 10000000 Page Faults: 41872
Elapsed Time: 0 00:25:54.41 CPU Time: 0 00:24:32.54

c) RMS settings changed with the following command:
$ SET RMS_DEFAULT /BLOCK_COUNT=127 /EXTEND_QUANTITY=65376 /BUFFER_COUNT=0 /INDEXED /SEQUENTIAL

Finished Load of Key 0 at 09:49:27.22

SORT Statistics for Key 0
Records Sorted: 10000000 Initial Runs: 255
Sort Tree Size: 19674 Work File Allocation: 422210
Merge Order: 20 Merge Passes: 14
Direct I/O Count: 23392 Buffered I/O Count: 682
Elapsed Time: 0 00:00:51.03 CPU Time: 0 00:00:44.10
Page Faults: 108

LOAD Statistics for Key 0
Direct I/O Count: 10058459 Buffered I/O Count: 1057
Elapsed Time: 0 00:21:02.08 CPU Time: 0 00:20:08.74
Page Faults: 407

Finished Load of Key 1 at 09:49:54.15

SORT Statistics for Key 1
Records Sorted: 10000000 Initial Runs: 312
Sort Tree Size: 14050 Work File Allocation: 546626
Merge Order: 0 Merge Passes: 0
Direct I/O Count: 3 Buffered I/O Count: 3
Elapsed Time: 0 00:00:00.00 CPU Time: 0 00:00:00.00
Page Faults: 0

LOAD Statistics for Key 1
Direct I/O Count: 15679 Buffered I/O Count: 111
Elapsed Time: 0 00:00:26.92 CPU Time: 0 00:00:22.82
Page Faults: 0

Finished Load of Key 2 at 09:53:17.63

SORT Statistics for Key 2
Records Sorted: 10000000 Initial Runs: 0
Sort Tree Size: 10002448 Work File Allocation: 0
Merge Order: 0 Merge Passes: 0
Direct I/O Count: 2769 Buffered I/O Count: 7
Elapsed Time: 0 00:01:21.65 CPU Time: 0 00:01:19.35
Page Faults: 489689

LOAD Statistics for Key 2
Direct I/O Count: 1671 Buffered I/O Count: 3
Elapsed Time: 0 00:02:01.82 CPU Time: 0 00:01:58.98
Page Faults: 238

CONVERT Statistics
Number of Files Processed: 1
Total Records Processed: 10000000 Buffered I/O Count: 1916
Total Exception Records: 0 Direct I/O Count: 10102004
Total Valid Records: 10000000 Page Faults: 490488
Elapsed Time: 0 00:25:43.59 CPU Time: 0 00:24:34.01

d) RMS settings changed with the following command:
$ SET RMS_DEFAULT /BLOCK_COUNT=0 /EXTEND_QUANTITY=65376 /BUFFER_COUNT=255 /INDEXED /SEQUENTIAL

Finished Load of Key 0 at 10:09:51.26

SORT Statistics for Key 0
Records Sorted: 10000000 Initial Runs: 255
Sort Tree Size: 19674 Work File Allocation: 422210
Merge Order: 20 Merge Passes: 14
Direct I/O Count: 79830 Buffered I/O Count: 682
Elapsed Time: 0 00:00:56.67 CPU Time: 0 00:00:46.34
Page Faults: 333

LOAD Statistics for Key 0
Direct I/O Count: 10078515 Buffered I/O Count: 1057
Elapsed Time: 0 00:12:10.60 CPU Time: 0 00:10:06.04
Page Faults: 418

Finished Load of Key 1 at 10:10:19.35

SORT Statistics for Key 1
Records Sorted: 10000000 Initial Runs: 312
Sort Tree Size: 14050 Work File Allocation: 546626
Merge Order: 0 Merge Passes: 0
Direct I/O Count: 2 Buffered I/O Count: 3
Elapsed Time: 0 00:00:00.00 CPU Time: 0 00:00:00.00
Page Faults: 0

LOAD Statistics for Key 1
Direct I/O Count: 15678 Buffered I/O Count: 111
Elapsed Time: 0 00:00:28.08 CPU Time: 0 00:00:23.57
Page Faults: 0

Finished Load of Key 2 at 10:16:10.34

SORT Statistics for Key 2
Records Sorted: 10000000 Initial Runs: 0
Sort Tree Size: 10002448 Work File Allocation: 0
Merge Order: 0 Merge Passes: 0
Direct I/O Count: 2769 Buffered I/O Count: 7
Elapsed Time: 0 00:03:33.43 CPU Time: 0 00:03:25.65
Page Faults: 1803297

LOAD Statistics for Key 2
Direct I/O Count: 1671 Buffered I/O Count: 3
Elapsed Time: 0 00:02:17.55 CPU Time: 0 00:02:00.35
Page Faults: 61

CONVERT Statistics
Number of Files Processed: 1
Total Records Processed: 10000000 Buffered I/O Count: 1916
Total Exception Records: 0 Direct I/O Count: 10178497
Total Valid Records: 10000000 Page Faults: 1804442
Elapsed Time: 0 00:19:26.45 CPU Time: 0 00:16:41.98




B) XFC Cache
Despite what I've indicated above, my testing wasn't always as standard as above; Hein had
suggested that "More likely the smaller files had all data in the XFC cache, so randomness was
'forgiven'".

I have to confess that I don't know a great deal about cacheing in VMS - I just trust it to do
stuff by itself without any interference from me, so haven't had the need to read up on it before.

Consequently, I didn't know what the XFC cache was (or even at the time, what XFC stood for), what
precisely it caches, or how it works (e.g. if you have large (numbers of) RMS buffers, does this
cause file reads to read more from sequential files than you request, on the basis that you'll
want the next few blocks etc.)

I decided to have a look into cacheing, and I compared the volume shadowset I was using for my
testing with the one normally used by the process which has been having problems.

Test volume:
Extent cache size 64 Maximum blocks in extent cache 60941376
File ID cache size 64 Blocks in extent cache 3788928
Quota cache size 0 Maximum buffers in FCP cache 57995

Normal volume:
Extent cache size 64 Maximum blocks in extent cache 28335832
File ID cache size 64 Blocks in extent cache 0
Quota cache size 0 Maximum buffers in FCP cache 57995

I had been using the test volume for input & output files, and the normal volume for the
sort work files (to improve throughput, and to help reduce fragmentation of the sort work
files (depending on when CONVERT requests to create these in relation to the output file)).

I wasn't sure what the significance was of 0 blocks being in the extent cache (whatever that is),
but I suspected it wasn't good, so decided to move the sort work files to the same device as
the input/output files, but this didn't make any difference to timings.


I then had another thought about Hein's mention of "smaller files had all data in the XFC cache, so randomness was forgiven".

I thought that perhaps the randomness was a causal factor...

As rows are deleted from the database and deleted rows are subsequently reused by new rows being added,
the rows would be less likely to be in sequential order.

Given that I knew that attempts to CONVERT /SORT the first 10m records "was okay", whereas the first 20m (and even the first 15m) records "took forever", I did consider that for industry-specific reasons, the data added to the table in the earlier years was
probably more likely to still be in sequential order, and that later data was non-sequential.

Extracting the second block of 10m records, and SORTing this (rather than CONVERT /SORTing it), didn't prove the point - timings for sorting the first 10m and the second 10m records were approximately the same.

I'd hoped that like the description for how CONVERT/SORT works (in the Record Management Utilities Reference Manual), there might be something similar for SORT - but no such luck, so I tried the help library on the offchance that there might be a description.

There wasn't, but it did lead me to finding the default behaviour - SORT /PROCESS=RECORD

What CONVERT /SORT does is essentially SORT /PROCESS=TAG (probably because unlike SORT, CONVERT allows for multiple keys, and therefore, it effectively "needs" to do SORT /PROCESS=TAG).

In my particular case, I've established that we no longer need the second and third keys, but even with a single key, CONVERT /SORT will still essentially do a SORT /PROCESS=TAG.
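(For illustration only, a rough DCL rendering of the two sorting styles on this file's primary key - the key position and size are taken from KEY 0 in the FDL above, remembering that SORT key positions are 1-based; file names are made up:)

$ SORT /PROCESS=RECORD /KEY=(POSITION:1,SIZE:12) X.UNL X_SORTED.UNL   ! sorts whole records
$ SORT /PROCESS=TAG    /KEY=(POSITION:1,SIZE:12) X.UNL X_SORTED.UNL   ! sorts keys + pointers, then re-reads the input randomly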


I then considered that (as Hein had mentioned) CONVERT /SORT is performing at least 2 passes of the input file, and that perhaps the issue was whether or not the file was in the cache in its entirety.

Attempting to SORT 10m records /PROCESS=TAG was taking "forever" too.

I thought that perhaps because of the randomness of the location of the records in the input file, when SORT came to generate the output file (after first having generated sort work files with primary keys and pointers to the location of the records in the input files), the problem was the "virtual" (SAN farm) disk thrashing about all over the input file to get the records in order, rather than the file not being in the cache.

[I also thought that perhaps when CONVERT/SORT happened to work "normally" in the past, it wasn't so much because the input file wasn't fragmented, but *because* it was fragmented...

i.e. the "random" location of the fragments meant that the non-sequential contents of the file, now happened to be laid out on disk in a sequential manner (a bit like a "double negative" making a positive, IYSWIM)]


However, whilst having searched c.o.v postings related to XFC, I came across a command that I'd not seen/used before:

$ SHOW MEMORY /CACHE=FILE=

I used this whilst performing a CONVERT/SORT, and after CONVERT creates its sort work files with the primary keys and pointers to record locations in the input file, it uses them to build the output file.

As the number of reads to the input file increases, the cache hit rate went down (from 98% to 58% and counting^w decrementing)

Obviously, the bigger the file, the less likely it is that it would be cacheable in its entirety (depending on the size of the XFC, and what demands are made on it by other files).

On our system, the full file (~5.5GB) can't be entirely cached by the XFC (which is sized at 2.06GB).

Even if the file could be entirely cached, by the time the file is fully loaded into the cache, due to the random nature in which CONVERT /SORT will be retrieving records,
it will have had a lot of cache hit failures, and many of the blocks read into the cache would only be referenced once or a handful of times, "taking up space" that would be better used by as-yet-unread blocks.

[I'm not sure, but I'm assuming that when SHOW MEM /CACHE reports page counts, it's talking about pages of size F$GETSYI("PAGE_SIZE"), rather than defaulting to VAX page sizes?
Do (all) facilities reporting pages really mean pages of size F$GETSYI("PAGE_SIZE"), or do they sometimes (for backwards compatibility purposes) mean VAX page sizes?]

My tests with 500k records resulted in an input file of ~120608 blocks, which would comfortably fit in the varying numbers of allocated cache pages I saw (between 136000 and 189000), if the page size being reported on is 'F$GETSYI("PAGE_SIZE")'.

Similarly, with 10m records, the input file is ~2.4m blocks; 189000 pages would equate (on our cluster) to 189000 * 8192 bytes, equivalent to 3024000 512-byte blocks, i.e. enough to cache the 2.4m blocks (same caveat on page size applies).
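(The same arithmetic expressed in DCL - the 189000 figure is just the largest page count observed above:)

$ page_size = F$GETSYI("PAGE_SIZE")            ! 8192 bytes on this Alpha
$ cache_pages = 189000
$ cache_blocks = cache_pages * (page_size / 512)
$ SHOW SYMBOL cache_blocks                     ! 3024000 blocks, comfortably more than the ~2.4m-block file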
Hein van den Heuvel
Honored Contributor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

Mark,

Thank you very much for the extensive summary reply. It validates the efforts others have put in to try to help some.

- Random versus sequential IO is a HUGE factor.
- Larger buffers help a lot in sequential access.
- Many buffers rarely help, they just burn CPU to find/maintain them,
-- For sequential you only need a few more than one to get read-ahead and write-behind.
-- For random access you need enough to significantly improve the 'odds' of having the right buffers. That's often unlikely.
- Many big buffers is just silly: each IO reads lots of blocks, only to use one or a few.
A few years back I saw 50 buffers as already wasting more CPU time to wade through versus re-reading from XFC, because RMS uses a (silly!) sequential search through the buffer descriptors. That 50 is not hard, but 255 is probably always a CPU loss (unless reading from floppies or similar :-)

- When the XFC can help it will... big time. But when you exceed its size, the cliff is steep.
-- That's why I often run tests against (virtual) disks mounted NOCACHE or against an MD device, for which the XFC does not attempt to cache.

>> "attempting to CONVERT a file where the primary keys are not in sequence will result in (some) records not being written to the output file."

- I get the impression there might only be thousands, not millions, out of order. If the out-of-orderness is controlled (going back), then CONVERT/EXCEPT can be used to capture the odd ones:
For example, the 12 in 10,11,15,16,12,17,18 will show up in the exception file.
However, if the records were 10,11,20,12,15,16,17,18
then the exception file would have 12,15,16,17,18.

You and I might recognize the 20 as the odd one out, but convert/sort will not.

Some program/perl script might be able to split the file into an ordered main stream and the odd-ones-out. You could then convert/nosort the bulk file, and convert/merge in the odd ones later.
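(A rough sketch of that split in plain DCL, using the 12-byte primary key at position 0 from the FDL above - untested, and DCL record-at-a-time I/O would be painfully slow for 45m records, so a compiled program or perl would be the practical choice:)

$ OPEN /READ  in_file   X.UNL
$ OPEN /WRITE bulk_file X_BULK.UNL          ! records still in ascending key order
$ OPEN /WRITE odd_file  X_ODD.UNL           ! out-of-order records, for CONVERT/MERGE later
$ prev_key = ""
$ LOOP:
$   READ /END_OF_FILE=DONE in_file rec
$   key = F$EXTRACT(0,12,rec)               ! KEY 0: SEG0_POSITION 0, SEG0_LENGTH 12
$   IF key .GTS. prev_key                   ! strictly ascending, since KEY 0 has DUPLICATES no
$   THEN
$       WRITE bulk_file rec
$       prev_key = key
$   ELSE
$       WRITE odd_file rec
$   ENDIF
$   GOTO LOOP
$ DONE:
$ CLOSE in_file
$ CLOSE bulk_file
$ CLOSE odd_file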

>> XFC cache was ... how it works .. read more from sequential files than you request,

Yes, there is a read ahead.
ANALYZE/SYSTEM has a nice XFC extension.

>> decided to have a look into cacheing
"Extent cache size "....

Those are just the file system meta-data caches.
Not the data caches.
The file system caches typically only have a marginal effect.
Yes you ought to have them reasonable, but they will not make a 10x difference like the XFC may give.


>> What CONVERT /SORT does is essentially SORT /PROCESS=TAG

Yes, it uses an RFA sort, and the main reason is SPACE. A record sort doubles the space requirements; Convert, unlike you or me, cannot decide to delete the input once sorted.
The RFA sort needs 6 bytes per record, which is typically a factor of 10 - 1000 less space than the real records.
The price is speed, but remember this was coded up 25 years ago when space was the premium.

>> SHOW MEMORY /CACHE=FILE=

Yes, that's the real DATA cache for a file, and its stats. That's the good stuff!

Yes, XFC pages are the system page size: typically 8192 bytes.

You can also use SHOW MEMO/UNIT=BYTES to help confirm.

>> Similarly, with 10m records, the input file is ~2.4m blocks

Yes, that 1+ GB is readily cacheable on a low memory pressure modern system.
You see that in the CPU time approaching elapsed time.


Mark Corcoran
Frequent Advisor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

Grrr!

I've lost my edit session again here, so need to remember what I typed.

[I think, in part, it was either because I was in VMS mode when I was typing, and hit CTRL-H (which in IE will not take you to the head of the line in an edit box, but actually take you back a page), or because my laptop had not sensed/received the break keycode for the depress of the CTRL key.

So, I'm now going to edit this in MS Word then cut & paste from there (because at least I can undo/redo in Word)]


>Thank you very much for the extensive summary reply. It validates the efforts others have put in to try to help some.

No problem; thanks everyone for their input - it was much appreciated.

I know that not everybody likes a tome-sized update (especially round here, where bullet-pointed 2-word summaries are preferred, even though it then makes them meaningless), but it's a bit like being back in school - getting credit points for showing how you arrived at your answer (to show you understand the process), rather than just giving the answer (using a calculator).


>Many big buffers is just silly: Each IO reads lots of block, only to use one or a few
I'd guessed as much, but I felt the need to take the extremes at each end, with suitable intervals in between.

Unfortunately, the amount of time taken to perform tests (and a lot of the tests I did were not shown in my update) limited the number of intermediate RMS (and other) settings that I could use.

[I'd already felt guilty about having spent a lot of time recoding ~35 command files into a single one, to prevent input/output file fragmentation (and to standardise on naming conventions, and to avoid having to make ~35 sets of changes each time something changed).

I was at the stage of testing it when a colleague found that SORT & CONVERT/NOSORT was quicker than CONVERT/SORT; everybody jumped on the bandwagon and blamed CONVERT.

I was a little sceptical - I wasn't happy not knowing why the difference occurred, and was concerned that either the use of SORT worked but might stop working in X amount of time, or that coincidentally something else changed around the same time, and in fact, the SORT didn't fix the problem.

Some IT people/techies are happy to let self-resolving issues go unexplained, but not me; I realise that you can get to a point of "diminishing returns", where how much time you spend investigating it vs how likely you are to get to the bottom of it, becomes more time & less likely; I just hope I'm sensible enough to realise when I've reached that point :-D]


>I get the impression there might only be thousands, not millions, out of order
I was actually wondering as to how many there were, and had considered writing a script to determine this, but I tried your suggestion of using the exceptions file.

The problem is that after ~8mins of running, the number of records processed was ~1/10th the total, and the exceptions file was 1/10th the size of the input file, so I aborted it.

Looking at the exceptions file and the input file, it was clear why this was the case, and is something that I find sometimes happens with DIFFERENCES.EXE when you compare two similar (but not the same) files.

Suppose the first 4 records' primary keys were this:

000000000001
900000000002
000000000003
000000000004

As humans, we can clearly see that the 900000000002 record is the odd one out here; CONVERT however, doesn't have the artificial intelligence to be able to do this - it simply sees the 900000000002 is next in sequence after 000000000001, and that anything less than or equal to 900000000001 must be out of sequence.

[i.e. your example was spot on]

On reflection, I'm not sure how one would programmatically generate some percentage value for out-of-orderness, given the above...


>Yes, it uses and RFA sort and the main reason is SPACE.
It had struck me that space would be the likely reason for the way that CONVERT sorts records, but I think perhaps when a user specifies /FDL and the FDL file only has a single key in it, that this type of sort is no longer suitable (in speed terms - obviously, it depends on the disk space available to CONVERT on SYS$SCRATCH).


>You can also use SHOW MEMO/UNIT=BYTES to help confirm.
Alas, it seems that SHOW MEM /CACHE always uses bytes (or GB/MB anyway) as the unit for display, but always uses pages for SHOW MEM /CACHE=FILE.

I tried doing SET PROCESS /UNIT=BYTES, in case for some reason the /UNIT qualifier on the SHOW command was being ignored or overridden, but it made no difference :-(

Thanks again for your help. Now I've got some recoding to do...

Mark
Mark Corcoran
Frequent Advisor

Re: SORT plus CONVERT /NOSORT is quicker than CONVERT /SORT ???

See my last update to the thread.