Operating System - OpenVMS
01-07-2008 11:31 AM
Re: Indexed file as a FIFO buffer
Folks, I think Michael is well on his way to improving the situation.
Let us not forget that even with the established inefficiencies the current solution is still usable. It is in active use, although it gives cause for concern.
So if we make this 5x better, it becomes a non-issue. If we make it 25x better, which appears to be possible, then the queue-in-indexed file becomes a perfect solution again! Why mess with success?!
Looking at the deleted record trace, we see that in almost half a year there were some 12,000,000 transactions requested, or about 66,000 per day, probably 4/sec at busy times. Done right, that would add about one bucket per day, as confirmed by the number of data buckets. (Circular logic, I admit :-)
Big whoppee! Very manageable... once you know it is happening.
Michael>> I have a better idea. Now that I see what is going on, why not simply recreate an empty file once in a while, as long as I know the original is empty?
The best (only) way to know whether the file is empty is to read it. Well, in that case you can just have CONVERT read it in the process of creating a new file.
The best solution is probably a mix:
- Just call FDL$CREATE to create a new, empty file.
- Close & Open to activate the new file.
- Rename the old file to .OLD.
- Convert/merge or just a read-write loop to merge in any records left behind in the old file.
- PURGE/KEEP=5 on the old file.
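Those steps can be sketched as a small DCL procedure. The file names (QUEUE.IDX, QUEUE.FDL) are placeholders, error handling is omitted, and the exact hand-over to running applications depends on when they close and re-open the file:

```
$! Recreate the queue file from its FDL description, then fold in leftovers.
$ CREATE/FDL=QUEUE.FDL QUEUE_NEW.IDX
$ RENAME QUEUE.IDX QUEUE.OLD            ! move the old file out of the way
$ RENAME QUEUE_NEW.IDX QUEUE.IDX        ! applications re-open and pick this up
$ CONVERT/MERGE QUEUE.OLD QUEUE.IDX     ! merge in any records left behind
$ PURGE/KEEP=5 QUEUE.OLD                ! keep the last few old copies around
```

CONVERT/MERGE adds the input records to an existing indexed output file, which is exactly the "read-write loop" behaviour wanted here.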
Michael>> I'm kind of disappointed in hearing that RMS files can be clogged with unusable, unreclaimable buckets like that.
It's a robustness choice. RMS promises foolproof RFA access for the life of a file. You may disagree with that choice, but this is the promise made. The only way to honor that promise is to retire a VBN once all 65K record IDs in that VBN have been used.
What could be done, and I'll submit it to a wish list, is to give CONVERT/RECLAIM a "REUSE_RFA" option. Don't hold your breath!
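For reference, the reclaim step that exists today looks like this; the /REUSE_RFA qualifier is the hypothetical wish-list item, not a real option:

```
$ CONVERT/RECLAIM QUEUE.IDX     ! frees emptied buckets while keeping RFAs valid;
$                               ! buckets whose 65K record IDs are used up stay retired
$! Hypothetical, wish-list only -- does NOT exist:
$! CONVERT/RECLAIM/REUSE_RFA QUEUE.IDX
```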
Bob>> Changes to the key fields (those that are indexed) will clutter up the index structures.
It's a single-key file. No issue here.
Michael>> The reads do a KGE access with a key of 0 every time
So that's the other, additional way to tackle this. Make the code remember the last date/time used, and just do a KGE on the time truncated to the hour or minute.
(Better still, subtract a minute or an hour and KGE on that.)
Robert>> using a queue to deliver the information would eliminate the problems you're encountering
I beg to differ. The queue manager is likely to need maintenance of its own at this load, and is guaranteed to be 5x less efficient than simple, dedicated, shared-file access.
Willem>> Some system managers seem to forget that indexed sequential files need maintenance; the more updates (including deletes) are applied, the more often: CONVERT, after recalculation of bucket sizes and buffers...
Absolutely.
Willem>> have the impression the file is HIGHLY fragmented - internally
This is a 3000-block file, largely pre-created with an extend quantity of 540, and only 180 data buckets.
How can that be 'highly fragmented'?
Still, you are right in that the VBNs are out of order, so don't expect read-ahead help from the disk, controller, or XFC.
At any rate, once a daily or weekly full CONVERT or re-create is put in place, this problem will be nicely solved.
Cheers all,
Hein.
01-08-2008 08:23 PM
Re: Indexed file as a FIFO buffer
Thanks, all.
I can see that two simple changes will greatly improve things: changing the key for the reads from 0 to either the last key read or at least something "recent", and creating a new, empty file once in a while (when the old one is known to be empty).
This will be redesigned in the not-too-distant future; more effort will be put in at that time.