Operating System - OpenVMS

Indexed file as a FIFO buffer

 
Michael Moroney
Frequent Advisor

Indexed file as a FIFO buffer

I am dealing with some VMS software that generates messages to be sent to another system. In order not to lose the messages if the other system is down, it doesn't send the messages directly. It writes the messages to a temporary indexed file, which is just used as a FIFO buffer. Another process loops reading records from the file, sends them to the remote system, and if successful, deletes the record. If there is no record, it waits and tries again.

The indexed file has one key, the date/time. It's just there to keep the records in order.

The problem is, the file grows and grows, and access gets slower and slower, despite the fact it's usually empty and rarely has more than a couple of records in it. They have a hack that does the equivalent of a $CONVERT/RECLAIM every so many records, but there must be a better way. Is there some FDL magic to tune a file where it is expected that every record is deleted? Or is there a better way to implement a file-based FIFO on VMS? I say file-based since any data (or the fact there is no data) must survive reboots.
Hein van den Heuvel
Honored Contributor

Re: Indexed file as a FIFO buffer

There are better ways.

Specifically, they should probably learn to keep a last-key-read value, possibly in a lock value block, and use that for a KGE lookup 99 times out of 100.
Then once every 100 elements, or 1,000 seconds (whichever comes first), scan the file from the beginning to catch out-of-order arrivals.
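As a rough illustration of that saved-key idea, a minimal C sketch (untested; the routine names, the 512-byte record buffer, and the 20-byte string key at position 0 are assumptions, not the real application's code):

/* Minimal sketch of the saved-key idea: remember the last key
 * processed and issue each KGE lookup from there instead of from
 * key zero. */
#include <rms.h>
#include <starlet.h>
#include <string.h>

#define KEY_SIZE 20                  /* date/time key, string, position 0 */

static struct FAB fab;
static struct RAB rab;
static char last_key[KEY_SIZE + 1] = "00000000000000000000";
static char record[512];

int fifo_open(char *name)
{
    int sts;
    fab = cc$rms_fab;
    fab.fab$l_fna = name;
    fab.fab$b_fns = (unsigned char) strlen(name);
    fab.fab$b_fac = FAB$M_GET | FAB$M_DEL;          /* read + delete      */
    fab.fab$b_shr = FAB$M_SHRPUT | FAB$M_SHRGET |
                    FAB$M_SHRDEL | FAB$M_SHRUPD;    /* full write sharing */
    sts = sys$open(&fab);
    if (!(sts & 1)) return sts;
    rab = cc$rms_rab;
    rab.rab$l_fab = &fab;
    return sys$connect(&rab);
}

/* Fetch the oldest unprocessed record, resuming from the last key seen.
 * Returns the RMS status; RMS$_RNF means the buffer is empty right now. */
int fifo_get(void)
{
    int sts;
    rab.rab$b_rac = RAB$C_KEY;       /* keyed access                 */
    rab.rab$b_krf = 0;               /* primary key                  */
    rab.rab$l_kbf = last_key;        /* start from the saved key...  */
    rab.rab$b_ksz = KEY_SIZE;
    rab.rab$l_rop = RAB$M_KGE;       /* ...with a KGE match          */
    rab.rab$l_ubf = record;
    rab.rab$w_usz = sizeof record;
    sts = sys$get(&rab);
    if (sts & 1)
        memcpy(last_key, record, KEY_SIZE);  /* remember for next time */
    return sts;
    /* after a successful send, sys$delete(&rab) removes the record */
}

Resetting last_key to all zeros every 100 reads (or every 1,000 seconds) gives you the catch-up scan from the top.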

Also, review the bucket size and duplicate-key requirements.
Switching duplicates off and selecting a large bucket size may keep this file under control.
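For illustration, an FDL fragment along those lines (the attribute names are standard FDL; the bucket size is only an example value, and DUPLICATES no assumes the timestamp keys really are unique):

KEY 0
        CHANGES                 no
        DUPLICATES              no
        TYPE                    string
        SEG0_LENGTH             20
        SEG0_POSITION           0

AREA 0
        ALLOCATION              6528
        BUCKET_SIZE             24
        EXTENSION               540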

I suspect that this will already help enough. If not, a day of consulting should do it. Several folks in this forum would be eager to help with that (for a fee). Or maybe acquire a special program I happen to have for these cases. Send me Email!

This topic has similarities to another question today: 1189881. Check that out. Post an ANALYZE/RMS_FILE/FDL output example?!

Hope this helps some,

Hein van den Heuvel ( at gmail dot com )
HvdH Performance Consulting
Hoff
Honored Contributor

Re: Indexed file as a FIFO buffer

So you're writing some RTR-like middleware. OK. (Writing custom local middleware wouldn't be my first choice when I could purchase same and let somebody else deal with support and upkeep, but there can be cases when rolling your own is necessary.)

RMS indexed is not the best RMS structure choice for this case, since you care less about the sorted access provided by indexed files than about a sequence of messages. Your file (as you've discovered) grows.

First-pass design: two queues, with pending and free lists. Probably using a relative file, a block-structured file, or a fixed-length-record file for the static storage, or using in-memory queues with a file-backed section and periodic flushes -- and the flushes could be coded to increase in frequency as the remote end drops off-line. This is standard AST-driven server stuff.

End-points would want to (or have to) maintain sliding windows for processing potentially duplicate data. These sliding windows can be based on what your indexed files are presently using for keys. (Indexed files don't deal with sliding windows all that well.)

Biggest wrinkle would be what to do when the far-end receiver is slower than your storage is big. You can only enbiggen for so long before you have to decide to stall (e.g. back-pressure) or to drop.

Also cromulent would be the use of 2PC and transactions, whether XA or otherwise with DECdtm.

Middleware such as RTR would be the easiest. This stuff is already tied in with DECdtm.

A different design might parallel how Erlang and its Mnesia database avoid writing to disk entirely, by always keeping and always replicating the information in memory across multiple hosts and assuming that some subset of the hosts will survive. Microsoft Research has something logically similar with its Niobe replication platform.

I have pointers to some of this stuff posted at the web site, if you can't find details elsewhere. (And assuming you're not going straight after the queues mentioned earlier, or after the approach that Hein suggests, or after RTR or other middleware.)

Stephen Hoffman
HoffmanLabs LLC
David Jones_21
Trusted Contributor

Re: Indexed file as a FIFO buffer

Doesn't RMS have a rule about not re-using RFAs (record file addresses) in a file, so even if you delete a record it still consumes a little bit of space? You end up with buckets full of deleted-record markers. Does turning off 'allow duplicates' affect this?

I wonder if it would help if you used 2 keys, a primary key whose values are recycled (records re-written) and the timestamp in a secondary key. You never delete records, just overwrite them. Robustly tracking the free list of primary key values for the next write/rewrite would be the biggest issue.
I'm looking for marbles all day long.
Hein van den Heuvel
Honored Contributor

Re: Indexed file as a FIFO buffer

Hoff is absolutely right in suggesting the many alternatives. A 'circular' buffer in a sequential or relative file may well be the better design. Optionally add to that a (clusterwide) lock value block to hold the next-free and last-finished numbers, updated to disk every so often.
And several products address this: RTR, MQSeries, ...

My recommendations were geared to making the most of the current setup with minimal changes. All too often that's a requirement to get anything done at all!

David>> Doesn't RMS have a rule about not re-using RFAs (record file addresses) in a file so even if you delete a record it still consumes a little bit of space?

It does have that rule, but it is implemented through a simple 'next free ID' concept. This is a fixed word in the bucket header, always there. It does force a bucket split every 65K records targeted for the same data bucket!

Deleted records are nicely expunged EXCEPT:
a) The last (first!? :-) in a bucket
b) In the case duplicates are NOT allowed.
c) An odd key-compression case where deleting the record would cause the NEXT record key to become too big to fit.
In all the above cases RMS leaves the record header plus (compressed) primary KEY behind.

>> You end up with buckets full of deleted-record markers. Does turning off 'allow duplicates' affect this?

You can get those (single bytes!) for ALTERNATE KEY (SIDR) entries, not for primary keys.

>> I wonder if it would help if you used 2 keys, a primary key whose values are recycled (records re-written) and the timestamp in a secondary key. You never delete records, just overwrite them.

Then why not create a circular buffer?
Those 'alternate keys' you suggest would become a header record. Record 0. That's all.
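To make that concrete, an illustrative sketch of such a circular buffer with a head/tail header record (plain stdio stands in for an RMS relative file here; all names and sizes are hypothetical):

/* File-based ring buffer: a header record -- "record 0" -- holds the
 * head/tail counters, and fixed-length slots hold the messages. */
#include <stdio.h>
#include <string.h>

#define SLOTS  4096          /* fixed number of message slots */
#define RECLEN  132          /* fixed record length           */

struct header { unsigned long head, tail; };  /* next read / next write */

static long slot_offset(unsigned long slot)
{
    return (long) sizeof(struct header) + (long)(slot % SLOTS) * RECLEN;
}

int fifo_put(FILE *f, const char *msg)
{
    struct header h;
    char rec[RECLEN];

    fseek(f, 0, SEEK_SET);
    if (fread(&h, sizeof h, 1, f) != 1) return -1;
    if (h.tail - h.head >= SLOTS) return -1;  /* full: stall or drop */

    memset(rec, 0, RECLEN);
    strncpy(rec, msg, RECLEN - 1);
    fseek(f, slot_offset(h.tail), SEEK_SET);
    fwrite(rec, RECLEN, 1, f);

    h.tail++;                                 /* publish the new tail  */
    fseek(f, 0, SEEK_SET);
    fwrite(&h, sizeof h, 1, f);
    return fflush(f);                         /* survive crash/reboot  */
}

The reader side is symmetric: read the slot at head, send it, and only then bump head and rewrite the header, so a crash before the send re-delivers a message rather than losing it.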

>> Robustly tracking the free list of primary key values for the next write/rewrite would be the biggest issue.

Ayup!

Hein.
Michael Moroney
Frequent Advisor

Re: Indexed file as a FIFO buffer

I'm not writing anything new, just dealing with an existing application. We're not about to do a full redesign or rewrite now, just looking for tuning tricks or ideas for minor changes. Right now they have to close the file, call CONV$RECLAIM on it and reopen the file every 5000 writes.

Right now the records are read with a KGE lookup with a key of 0 to get the oldest record, if any. Would just saving this key and doing the next KGE read with it set to the last key rather than zero help?

The remote end is a PC with an Oracle database.

Attached is an FDL of an "empty" version of the file. It has grown to 3231 blocks here.
Hein van den Heuvel
Honored Contributor

Re: Indexed file as a FIFO buffer

>> I'm not writing anything new, just dealing with an existing application.

I expected as much.

>> Right now they have to close the file, call CONV$RECLAIM on it and reopen the file every 5000 writes.

That's just crazy!

>> Would just saving this key and doing the next KGE read with it set to the last key rather than zero help?

Oh yeah. You will not recognize it.
The size will not change, and the need to convert will just about go away, or be reduced to daily or so.

>> Attached is an FDL of an "empty" version of the file.

Can you snarf a copy (back/ignore=interlock?) while there are still records?
Or take an empty file and plunk in 1 dummy record, just to trigger ANAL/RMS to output more data (DCL write)?
Or use my tune_check!
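For the dummy-record route, something like this (untested; the file names are placeholders, and the key is assumed to be a 20-character string at position 0):

$! Add one record to a scratch copy, then analyze it.
$ copy buffer.idx scratch.idx
$ open/read/write file scratch.idx
$ write file "20990101000000000000 dummy record"
$ close file
$ analyze/rms_file/fdl scratch.idx    ! writes SCRATCH.FDL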

Better still... run the attached program against (a copy of) the empty file and share the results in an attached .txt file, or in an email to me.

Cheers,
Hein.
Robert Gezelter
Honored Contributor

Re: Indexed file as a FIFO buffer

Michael,

I can think of several ways of dealing with indexed records that would not give rise to the need to reorganize the file on an ongoing basis.

The core of the problem as you describe it is the use of the date/time as the key. There are several alternative ways of organizing this to prevent a constantly changing primary key, which is effectively the source of your problem.

Depending on how the sources are implemented, the correction could be straightforward, or it could be more involved. As my colleagues would certainly agree, speculating on the ease of modifying sources without a thorough review of those sources is not a sound way to proceed.

I would agree with Hein's original comment, re: retaining a suitably experienced consultant to review the sources and suggest changes [Disclosure: such services are within our scope of practice].

Some of the approaches that I can imagine would virtually eliminate the need to reorganize the file, period.

- Bob Gezelter, http://www.rlgsc.com
Jon Pinkley
Honored Contributor

Re: Indexed file as a FIFO buffer

If messages must be delivered FIFO, how do you guarantee FIFO order when the time is changed in a backward direction in the fall (assuming you are under daylight saving/summer time rules)?

Jon
it depends
Michael Moroney
Frequent Advisor

Re: Indexed file as a FIFO buffer

Hein,

I took a copy of the "empty" file and added a bunch of legitimate data and generated the FDL again.

ANALYSIS_OF_KEY 0
DATA_FILL 94
DATA_KEY_COMPRESSION 83
DATA_RECORD_COMPRESSION -4
DATA_RECORD_COUNT 849
DATA_SPACE_OCCUPIED 144
DEPTH 1
INDEX_COMPRESSION 54
INDEX_FILL 2
INDEX_SPACE_OCCUPIED 12
LEVEL1_RECORD_COUNT 12
MEAN_DATA_LENGTH 84
MEAN_INDEX_LENGTH 22
LONGEST_RECORD_LENGTH 132

Your little program produces:

$ xxx -v cadsegment.sfl2

* 3-JAN-2008 16:24:15.45 ALQ=6528 BKS=12 GBC=0 cadsegment.sfl2

Bucket VBN Count Key
------- ---------- ----- ----------------------------------
1 3 76 20080103161046130000
2 15 77 20080103161046150015
3 27 76 20080103161046170015
4 39 75 20080103161046190017
5 51 71 20080103161046210014
6 63 75 20080103161046230004
7 75 79 20080103161046240040
8 87 77 20080103161046260038
9 99 73 20080103161046280034
10 111 77 20080103161046300028
11 123 75 20080103161046320031
12 135 30 20080103161046340032
Hein van den Heuvel
Honored Contributor

Re: Indexed file as a FIFO buffer

Thanks Mike, but I must not have been clear enough. I needed the start of the experiments to be a live data file, just before a convert/reclaim. I believe that means just under a multiple of 5000 records had lived in the file.
Next...

>> plunk in 1 dummy record

What part of "1" did you not understand :-) :-) :-).

The 1 would just trigger the right ANAL/RMS display.

What you showed looks to me as if you started with a completely clean file and added some records; it is not a snapshot of how a file deteriorates in production.

So maybe you want to try that again on a live copy, not on a contrived load.
Or maybe I'm reading/interpreting this all wrong!

Anyways, I'm afraid I am out of processing quota on this topic. Back to real work!

Good luck,
Hein.
Michael Moroney
Frequent Advisor

Re: Indexed file as a FIFO buffer

That is from a copy of the live file, which was empty when I grabbed it. I have to go through contortions to access it; it was copied through CONVERT/SHARE and then ZIP "-V". Definitely not a clean file!

Your program produced:

* 3-JAN-2008 18:25:15.98 ALQ=6528 BKS=12 GBC=0 cadsegment.sfl
- INIT! Primary index has not been initialized?

on the empty file, so I don't know if internal RMS stuff survived both conversions; it may have looked clean.
Hein van den Heuvel
Honored Contributor

Re: Indexed file as a FIFO buffer

Convert/share will create a perfectly clean file.
It does NOT copy blocks; it just creates a clean, empty file, then grabs any and all records and inserts those.

What part of "back/ignore=interlock" did you not understand ?

:-) :-) :-)


Cheers,
Hein.
Michael Moroney
Frequent Advisor

Re: Indexed file as a FIFO buffer

Oh, this looks more interesting.

$ xxx -v test.sfl

* 3-JAN-2008 20:20:47.28 ALQ=3264 BKS=12 GBC=0 test.sfl

Bucket VBN Count Key
------- ---------- ----- ----------------------------------
1 375 0 20070710033049870000
2 303 0 20070710134632020000
3 447 0 20070714080525170000
4 363 0 20070715102357510000
5 315 0 20070718012305160000
6 483 0 20070718090455470000
7 351 0 20070721142250520000
8 423 0 20070722204555300000
9 195 0 20070723130144630000
10 531 0 20070723191920290000
11 555 0 20070723210454290000
12 3 0 20070726114637160000
13 339 0 20070727073923170000
14 207 0 20070727082101340000
15 279 0 20070727135630700000
16 519 0 20070727152412180000
17 75 0 20070727195434830000
18 51 0 20070728220627250000
19 27 0 20070729122805830000
20 255 0 20070729215745500000
21 39 0 20070730114345450000
22 87 0 20070801123750730000
23 111 0 20070802151803490000
24 1143 0 20070806170401670000
25 63 0 20070807150010810000
26 219 0 20070810090147360000
27 327 0 20070810091551390000
28 159 0 20070811102111500000
29 891 0 20070814004246690000
30 639 0 20070817113221980000
31 171 0 20070819081143060000
32 723 0 20070821063223960000
33 135 0 20070822220930300000
34 1887 0 20070823143547820000
35 1119 0 20070831191351690000
36 1203 0 20070831191748080000
37 1911 0 20070902223428530000
38 615 0 20070903051929580000
39 711 0 20070905161151090000
40 867 0 20070905200538460000
41 567 0 20070906184151420000
42 1815 0 20070908095952620000
43 507 0 20070908190802700000
44 267 0 20070909103551000000
45 1863 0 20070909150727260000
46 99 0 20070909160621870000
47 1131 0 20070909190244680000
48 1107 0 20070910011119960000
49 1095 0 20070910132824220000
50 15 0 20070910162315710000
51 1311 0 20070911011841630000
52 231 0 20070911133316790000
53 915 0 20070911230722950000
54 591 0 20070912003745390000
55 2067 0 20070912142258950000
56 735 0 20070912195521810000
57 243 0 20070912213229460000
58 603 0 20070913223018910000
59 1827 0 20070915035315350000
60 699 0 20070915185632850000
61 843 0 20070915225413250000
62 1635 0 20070917073026390000
63 2031 0 20070917172333430000
64 2019 0 20070917181651890000
65 2043 0 20070918064515540000
66 831 0 20070918173915300000
67 1239 0 20070918213901040000
68 939 0 20070920194216150000
69 1851 0 20070921184258380000
70 1647 0 20070921232413670000
71 2007 0 20070922052833090000
72 783 0 20070922164054130000
73 651 0 20070923004157750000
74 1971 0 20070923015927480000
75 1227 0 20070923025509980000
76 627 0 20070923173834950000
77 1071 0 20070924072745590000
78 1659 0 20070924194313980000
79 1839 0 20070925153238980000
80 1995 0 20070926141941980000
81 147 0 20070926163157980000
82 1983 0 20070926183352360000
83 1959 0 20070926213325850000
84 1251 0 20070927135621800000
85 1479 0 20070928093934190000
86 2055 0 20070928143346640000
87 1935 0 20070928180048260000
88 1083 0 20070929100533130000
89 1923 0 20070929194846670000
90 1287 0 20070930035228140000
91 1875 0 20070930102907580000
92 1299 0 20070930132322940000
93 579 0 20070930201620780000
94 1215 0 20071001105120020000
95 1191 0 20071002091946560000
96 927 0 20071003205900870000
97 759 0 20071004003748120000
98 1455 0 20071005103325710000
99 1671 0 20071005120903910000
100 1059 0 20071005151800750000
101 399 0 20071011021430270000
102 495 0 20071013001031500000
103 387 0 20071013133621940000
104 471 0 20071014213756800000
105 1947 0 20071015205815970000
106 543 0 20071018104952070000
107 951 0 20071018201516680000
108 1803 0 20071019144134520000
109 1023 0 20071023134600950000
110 855 0 20071023143856560000
111 1791 0 20071023215207790000
112 1263 0 20071024144826850000
113 183 0 20071025075008230000
114 291 0 20071026010238000000
115 1599 0 20071026161210330000
116 1167 0 20071026205738090000
117 1707 0 20071028122647370000
118 1035 0 20071029042119900000
119 1275 0 20071029043916410000
120 411 0 20071029195631180000
121 963 0 20071030111521240000
122 1503 0 20071030213452290000
123 435 0 20071031003616610000
124 1743 0 20071101051809080000
125 123 0 20071101103844460000
126 1539 0 20071101133740090000
127 1683 0 20071102144251740000
128 687 0 20071102173059360000
129 1755 0 20071102213307350000
130 2079 0 20071103111305220000
131 1719 0 20071103131507990000
132 1575 0 20071104183754670000
133 1767 0 20071105065849740000
134 459 0 20071105073658790000
135 1047 0 20071105111120010000
136 1527 0 20071105171210160000
137 795 0 20071106003626660000
138 1179 0 20071106221113370000
139 1899 0 20071108211724920000
140 1587 0 20071109112314050000
141 2512 0 20071122173225260000
142 2476 0 20071123171112090000
143 2115 0 20071126232319230000
144 2091 0 20071127130012220000
145 2632 0 20071128222711800000
146 663 0 20071128230753970000
147 999 0 20071129134747670000
148 2356 0 20071203163815650000
149 2103 0 20071204180349300000
150 2320 0 20071204192519040000
151 2332 0 20071204223834300000
152 2452 0 20071205042844240000
153 2368 0 20071205225108750000
154 2488 0 20071206123623880000
155 2164 0 20071206123844020000
156 1011 0 20071206124021830000
157 1491 0 20071206185443640000
158 675 0 20071207025151370000
159 2440 0 20071207163511680000
160 2644 0 20071209115720830000
161 2584 0 20071209180916340000
162 2296 0 20071209203023170000
163 2152 0 20071210051337330000
164 2344 0 20071212175141390000
165 2500 0 20071213174901080000
166 2572 0 20071213181025200000
167 2560 0 20071213204818540000
168 2536 0 20071214105835490000
169 2428 0 20071214204440550000
170 2416 0 20071214215101860000
171 2404 0 20071215053124990000
172 1383 0 20071217165612300000
173 1371 0 20071218083158410000
174 903 0 20071218094406550000
175 747 0 20071218100331680000
176 975 0 20071218140648150000
177 987 0 20071218213139840000
178 2596 0 20071219112629870000
179 1695 0 20071219160052270000
180 2392 0 20071219161612300000
181 2908 0 20071228090507090000
182 771 0 20080101155859950000
183 2932 38081 20080103195921850000
184 3160 42578 20080103195957580000
185 2884 57017 20080103200028240000
186 3148 49434 20080103200103090000
187 2836 29924 20080103200123510000
188 2752 38642 20080103200124170000
189 2764 18891 20080103200204290000
Hein van den Heuvel
Honored Contributor
Solution

Re: Indexed file as a FIFO buffer

Right. That's more like it.

I'm going to try a joke here, so bear with me:

>> >> Right now they have to close the file, call CONV$RECLAIM on it and reopen the file every 5000 writes.
> That's just crazy!

What part of "That's just crazy!" did you not understand.
That was your cue to the solution all along.

They did not really understand what they were dealing with, and they actually made matters worse: trying to hide the symptom instead of thinking through the real issue.

They should have done a 'full' convert every day, and you would never have gotten into this predicament.

Just start that now. Tonight, even.

I'm quoting 'full' as there will be just a few records, so the convert will take less than a second.

Those 182 buckets with a count of 0 have exhausted their 65K unique record IDs and can no longer be reclaimed. The other 7 or 8 buckets can, and will, be reclaimed a few more times, but eventually they will become locked in time, until a real convert is finally done!

Drop the crazy convert/reclaim code!
Replace it with a 'full' convert every 100,000 - 1,000,000 records (10 * 65,000).
You may or may not want to bump the bucket size a little, and you'll see no more growth, ever.
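In DCL terms, something like this as a nightly batch step (file and FDL names are placeholders; /STATISTICS is optional):

$ convert/fdl=buffer.fdl/statistics buffer.idx buffer.idx
$ purge/keep=2 buffer.idx

The application still has to close and reopen the file around the convert, just as it does today around CONV$RECLAIM, but once a day instead of every 5000 writes.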

Oh, and just in case it is not blatantly obvious: the slowness comes from reading (through) 189 empty buckets before finding a record to work on (or not). With that come, of course, also 189 ENQs and 189 DEQs for the bucket locks, which may or may not have to go to another cluster member.
Furthermore, they probably failed to set global buffers on this file, so if it is truly actively write-shared in a cluster, then the XFC cannot cache the file, and most of those 180 read I/Os would be real I/Os.

ok?

Hein.

Robert Atkinson
Respected Contributor

Re: Indexed file as a FIFO buffer

Michael, my two-penny worth.

Although it's not a particularly elegant solution, using a queue to deliver the information would eliminate the problems you're encountering, as the Queue Manager effectively handles the key recycling for you.

If there is a vast amount of information to be sent, write it to a plain text file and submit that to a stopped queue.

If there's a small amount of info, then use /PARAM to store the info in the entry itself.

Your 'sender' routine can then use F$GETQUI to run through the queue entries and delete them as each piece of data is sent.
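A rough sketch of that sender loop (untested; the queue name MSG_FIFO and the one-message-per-PARAMETER_1 usage are assumptions):

$! Drain entries from a stopped queue called MSG_FIFO (hypothetical).
$ tmp = f$getqui("CANCEL_OPERATION")                    ! reset context
$ tmp = f$getqui("DISPLAY_QUEUE", "QUEUE_NAME", "MSG_FIFO", "WILDCARD")
$loop:
$ entry = f$getqui("DISPLAY_JOB", "ENTRY_NUMBER", , "ALL_JOBS")
$ if entry .eqs. "" then goto done
$ data = f$getqui("DISPLAY_JOB", "PARAMETER_1", , "FREEZE_CONTEXT,ALL_JOBS")
$! ... send 'data' to the remote system; only on success: ...
$ delete/entry='entry'
$ goto loop
$done:

(Production code would probably collect the entry numbers first and delete afterwards, since deleting mid-wildcard can disturb the GETQUI context.)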

Rob.
Michael Moroney
Frequent Advisor

Re: Indexed file as a FIFO buffer

>> >> Right now they have to close the file, call CONV$RECLAIM on it and reopen the file every 5000 writes.
>> That's just crazy!

>What part of "That's just crazy!" did you not understand?
>That was your cue to the solution all along.

This SW is so strange that I'm rather immune to people calling it crazy. Also, I thought you were referring to the need to do it so often, not to doing it at all.

>They should have done a 'full' convert every day, and you would never have gotten into this predicament.

>Just start that now. Tonight, even.

I have a better idea. Now that I see what is going on, why not simply recreate an empty file once in a while, as long as I know the original is empty?

>Those 182 buckets with a count of 0 have exhausted their 65K unique record IDs and can no longer be reclaimed. The other 7 or 8 buckets can, and will, be reclaimed a few more times, but eventually they will become locked in time, until a real convert is finally done!

I'm kind of disappointed to hear that RMS files can be clogged with unusable, unreclaimable buckets like that. After all, if you do lots of file creates/deletes on a disk drive, the space occupied by the files doesn't become permanently allocated and unusable after 65K creates/deletes. But what do I know? I don't understand the magic behind making indexed files work.

Since I recognize the time stamps, it is clear why it's slow if it searches through them all every time. The reads do a KGE access with a key of 0 every time.

As to using a stopped queue, very interesting idea. But will this clog the VMS queue database files with the same problem? (I assume they're RMS indexed files of some sort)
Robert Gezelter
Honored Contributor

Re: Indexed file as a FIFO buffer

Michael,

Commenting without having had an opportunity to see the sources is always a hazard, but the problem is generally not with RMS per se, but is more often a question of how RMS is used.

RMS Indexed files are useful tools, but by no means magic. Changes to the key fields (those that are indexed) will clutter up the index structures.

Reading your description of the programs and their processing does lead me to suspect that there are far better ways of managing the queue file that would not impact performance and would, in all likelihood, never require the file to be reorganized.

- Bob Gezelter, http://www.rlgsc.com
Robert Atkinson
Respected Contributor

Re: Indexed file as a FIFO buffer

> As to using a stopped queue, very
> interesting idea. But will this clog the
> VMS queue database files with the same
> problem? (I assume they're RMS indexed
> files of some sort)

NO, because the queue manager will reuse the entry numbers, which is like reusing the empty buckets in your case.

As you will only have a few entries at any point in time, i.e. not thousands, you won't see any performance impact.

Rob.
Willem Grooters
Honored Contributor

Re: Indexed file as a FIFO buffer

In my experience, the way RMS indexed files are organized, and how RMS handles deletions (and updates), is hardly known to most programmers, who were educated on either flat files or (relational) databases. Some system managers seem to forget that indexed sequential files need maintenance; the more updates (including deletes) are applied, the more often: CONVERT, after recalculation of bucket sizes and buffers...
Looking at the dump you gave at Hein's request, I have the impression the file is HIGHLY fragmented - internally. If you have to walk the index buckets, it would mean re-reading data, since your index buckets seem to be scattered over the disk - smashing performance to bits. I dealt with such problems in the not-too-distant past.

Try this:

$ pipe dump/header/blocks=(count:0) (your file) | search/match=and sys$pipe "Count:", "LBN:"

and count the number of lines. There should be as few as possible.

If there are many, see if there is a pattern that matches bucket sizes in this file's FDL.
Willem Grooters
OpenVMS Developer & System Manager
Hein van den Heuvel
Honored Contributor

Re: Indexed file as a FIFO buffer

Folks, I think Michael is well on his way to improving the situation.

Please let us not forget that even with the established inefficiencies, the current solution is still usable. It is in active use, although it gives cause for concern.
So if we make this 5x better, it becomes a non-issue. If we make it 25x better, which appears to be possible, then the queue-in-an-indexed-file becomes a perfect solution again! Why mess with success?!

Looking at the deleted-record trace, we see that in almost half a year there were some 12,000,000 transactions requested (189 buckets times ~65K record IDs each), or about 66,000 per day, probably 4/sec at a busy time. So that would add a bucket per day when done right, as confirmed by the number of data buckets (circular logic, I admit :-).
Big whoopee! Very manageable... once you know it is happening.

Michael>> I have a better idea. Now that I see what is going on, why not simply recreate an empty file once in a while, as long as I know the original is empty?

The best (only) way to know whether the file is empty is to read it. Well, in that case you can just have convert read it in the process of creating a new file.
The best solution is probably a mix (a DCL approximation follows the list):
- Just call FDL$CREATE to create a new, empty file.
- Close & Open to activate the new file.
- Rename the old file to .old
- Convert/merge or just a read-write loop to merge in any records left behind in the old file.
- Purge/keep=5 on the old file.
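Something like this (untested; names are placeholders, and the callable FDL$CREATE would replace the CREATE/FDL step from inside the program):

$ rename buffer.idx buffer.old
$ create/fdl=buffer.fdl buffer.idx
$! the program closes and reopens the file here
$ convert/merge buffer.old buffer.idx
$ purge/keep=5 buffer.old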

Michael>> I'm kind of disappointed in hearing that RMS files can be clogged with unusable, unreclaimable buckets like that.

It's a robustness choice. RMS promises foolproof RFA access for the life of a file. You may disagree with that choice, but this is the promise made. The only way to honor that promise is to retire a VBN when all 65K record IDs have been used in that VBN.

What could be done, and I'll submit it to a wish list, is to give CONVERT/RECLAIM a "REUSE_RFA" option. Don't hold your breath!

Bob>> Changes to the key fields (those that are indexed) will clutter up the index structures.

It's a single-key file. No issue here.

Michael>> The reads do a KGE access with a key of 0 every time

So that's the other, additional way to tackle this. Make the code remember the last date/time used, and just do a KGE on the time truncated to the hour or minute.
(Better still, subtract a minute or an hour and KGE on that.)
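A hypothetical helper for that back-off, assuming the 20-digit YYYYMMDDHHMMSS-plus-fraction key format shown in the bucket dumps above:

/* Back the saved key off by one minute before the KGE lookup. */
#include <stdio.h>
#include <string.h>
#include <time.h>

void make_search_key(const char *last_key, char search_key[21])
{
    struct tm tm;
    time_t t;

    memset(&tm, 0, sizeof tm);
    sscanf(last_key, "%4d%2d%2d%2d%2d%2d",
           &tm.tm_year, &tm.tm_mon, &tm.tm_mday,
           &tm.tm_hour, &tm.tm_min, &tm.tm_sec);
    tm.tm_year -= 1900;
    tm.tm_mon  -= 1;
    tm.tm_isdst = -1;
    t = mktime(&tm) - 60;                     /* back off one minute    */
    strftime(search_key, 15, "%Y%m%d%H%M%S", localtime(&t));
    strcat(search_key, "000000");             /* zero fractional digits */
}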

Robert>> using a queue to deliver the information would eliminate the problems you're encountering

I beg to differ. The queue manager is likely to encounter maintenance needs at this load, and it is guaranteed to be 5x less efficient than simple, dedicated, shared file access.

Willem>> Some system managers seem to forget that indexed sequential files need maintenance; the more updates (including deletes) are applied, the more often: CONVERT, after recalculation of bucket sizes and buffers...

Absolutely.

Willem>> have the impression the file is HIGHLY fragmented - internally

This is a 3000-block file, largely pre-created with an extend quantity of 540, and it has only 180 data buckets.
How can that be 'highly fragmented'?
Still, you are right in that the VBNs are out of order, so don't expect read-ahead help from the disk, controller, or XFC.

At any rate, when a daily or weekly full convert or re-create is put in place, then this problem will be nicely solved.

Cheers all,
Hein.



Michael Moroney
Frequent Advisor

Re: Indexed file as a FIFO buffer

Thanks, all.

I can see that two simple changes will greatly improve things. Changing the key for the reads from 0 to either the last key read or at least something "recent", and creating a new, empty file once in a while (when the old is known to be empty).

This will be redesigned in the not-too-distant future; more effort will be put in at that time.