Operating System - OpenVMS

Converting an RMS indexed file

 
SOLVED
roose
Regular Advisor

Converting an RMS indexed file

Hi Guys,

I still haven't gained enough knowledge on RMS file management/tuning, so I hope you can help me with this question.

We are running Alpha OVMS 7.3-2 and we have some files that are indexed and we would like to do some tuning to these files. I have backed up these files to a test server and I am seeing some discrepancy here:

Initially, when I do a dir/size=all on the file:
SYSTEM_VMST01::> dir/size=all chrt.ism;

Directory DISK$TEST_DB:[DB]

CHRT.ISM;1 1698357/3673728

Total of 1 file, 1698357/3673728 blocks.

The used and allocated sizes are different. However, when I do a convert/nosort/fdl= chrt.ism new_chrt.ism, the new file has the same used and allocated size:

SYSTEM_VMST01::> dir/size=all new_chrt.ism

Directory DISK$TEST_DB:[DB]

NEW_CHRT.ISM;1 4194304/4194304

Total of 1 file, 4194304/4194304 blocks.

How can this be? I was hoping that since we did an "optimization" here, the used size would be much lower than that of the original file, and the allocated size might be the same or bigger.
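For scale, OpenVMS DIR sizes are counted in 512-byte disk blocks. A quick sketch of what those used/allocated figures mean in bytes (Python used just for the arithmetic; the helper name is mine):

```python
BLOCK_SIZE = 512  # an OpenVMS disk block is 512 bytes

def blocks_to_bytes(blocks):
    """Convert a VMS block count to bytes."""
    return blocks * BLOCK_SIZE

# Figures from the DIR/SIZE=ALL output above
print(blocks_to_bytes(1_698_357))  # used:      869,558,784 bytes (~0.81 GiB)
print(blocks_to_bytes(3_673_728))  # allocated: 1,880,948,736 bytes (~1.75 GiB)
print(blocks_to_bytes(4_194_304))  # new file:  2,147,483,648 bytes (exactly 2 GiB)
```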

I have attached the results of an anal/rms/stat on the file before and after the convert, as well as a dump/header/record=(count:0) of the before and after files, and the FDL file I used to do the convert.

Thanks for your help in advance.
9 REPLIES
Hoff
Honored Contributor

Re: Converting an RMS indexed file

What are you optimizing for here?

Most folks aren't looking (foremost) at the file size so much as they're looking for (for instance) application speed.

Over in...

http://forums13.itrc.hp.com/service/forums/questionanswer.do?threadId=1265132

Hein asked "But what problem are you trying to solve? I suspect you want to know when to re-convert the file for maintenance correct? The used vs allocated size is only a minor indicator for that. You could just stash away the original allocation (it is in the FDL) and watch for growth."

There are ways to reclaim deleted storage from within an RMS indexed file (eg: CONVERT /RECLAIM), and there are ways to tune a file, but it would be best to know what particular factor(s) you're optimizing for here.

As for general comments, some of the most common "low-hanging fruit" tends to be getting off of slow storage hardware, getting rid of RAID-5 in favor of a better (faster) RAID level (if RAID is in use), up-rating the storage buses and controllers, adding physical memory (which can do wonders for I/O caching efficacy), addressing disk fragmentation, reviewing and adjusting process quotas, etc. (You may already know of and have reviewed all of this, of course.)

As for alternatives and depending on the particular OpenVMS Alpha box, it's feasible to bring some or all of that 2,147,483,648 byte wad of data into 64-bit virtual memory. (RMS buffering or RMS Global buffering can potentially help here, if there's not an API into this file that allows you to make larger changes to the implementation.)

(The other aspect of this stuff that gets ugly is archival and recovery; getting a consistent and reliable copy of the file. That's what can push folks into RMS Journaling or into a database. But I digress.)
Hein van den Heuvel
Honored Contributor

Re: Converting an RMS indexed file

Concur with Hoff:

1) Forget about the EOF/ALQ filesize. It's irrelevant. An artifact of the copy taken.

2) What are you optimizing for here?

YES!

3) Hein asked "But what problem are you trying to solve? I suspect you want to

[ :-) ] YES!

My points:


4) Please just use ANAL/RMS/FDL.
It has all the stats, in a more useful format, ready to be swallowed by EDIT/FDL/NOINTERACTIVE

5) Just switch DATA_RECORD compression ON.
It has an explicit comment in the FDL.
What is/was the reasoning here?

You currently have 1 record per bucket, and with a maximum bucket size of 63, that's all you'll ever get without compression.
Might as well save 5% space and make the bucket size 51 for an exact fit!?
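To illustrate the bucket arithmetic (a rough sketch; the overhead constants and the ~26 KB record size are my assumptions for illustration, not figures from the ANALYZE output -- check ANAL/RMS for the real values):

```python
import math

BLOCK = 512  # bytes per VMS block

def min_bucket_blocks(record_bytes, bucket_overhead=15, record_overhead=10):
    """Smallest bucket size (in blocks) that holds one record.
    Overhead values are illustrative, not exact RMS numbers."""
    return math.ceil((record_bytes + bucket_overhead + record_overhead) / BLOCK)

def records_per_bucket(bucket_blocks, record_bytes,
                       bucket_overhead=15, record_overhead=10):
    """How many records fit in one bucket of the given size."""
    return (bucket_blocks * BLOCK - bucket_overhead) // (record_bytes + record_overhead)

# With a hypothetical ~26,000-byte record, even the maximum bucket of
# 63 blocks holds only one record, and 51 blocks already suffice:
rec = 26_000
print(records_per_bucket(63, rec))  # 1
print(min_bucket_blocks(rec))       # 51
```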

6) To make a SMALLER file, which may perform better or worse, change the FDL to:


ALLOCATION 2000000
BEST_TRY_CONTIGUOUS yes
BUCKET_SIZE 63
EXTENSION 63000
:
KEY 0
:
DATA_RECORD_COMPRESSION no
:

7) For compression effectiveness testing, and for better guesstimating the required ALLOCATION, please test with 100 - 1000 records first!
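That guesstimate can be a simple linear extrapolation from the test run (a sketch; the function name, the sample numbers, and the 10% safety margin are mine):

```python
def estimate_allocation(total_records, sample_records, sample_used_blocks,
                        safety_pct=110):
    """Scale the used size of a small test CONVERT up to the full
    record count, with a safety margin on top (integer ceiling)."""
    num = total_records * sample_used_blocks * safety_pct
    den = sample_records * 100
    return -(-num // den)  # ceiling division

# E.g. if a 1000-record test file ends up using 40,000 blocks
# (hypothetical numbers), a 41,115-record file needs roughly:
print(estimate_allocation(41_115, 1_000, 40_000))  # 1809060
```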

And uh.. contact me offline if you want to get serious about tuning the application beyond the single file.

Good luck,

Hein van den Heuvel ( at gmail dot com )
HvdH Performance Consulting
Hein van den Heuvel
Honored Contributor

Re: Converting an RMS indexed file

Ooops... Cut & Paste error.
I meant DATA_KEY_COMPRESSION YES

Hein.

roose
Regular Advisor

Re: Converting an RMS indexed file

Thanks Hein and Hoff for the quick replies!

Actually, what we are really trying to optimize or reorg (if that is the correct term) is the disk space. We have a lot of these ISM files (120+, I think) and some files were originally allocated with a lot of disk space (1 file even had an allocation of 100GB after the last admin did the reorg). Also, we are using SAN disks for this system (EMC Symmetrix). Unfortunately for us, we very seldom have downtime that would allow us a regular reorg of these files, so we just try to allocate a bigger file size during reorg (last reorg done in 2007) whenever downtime is available.

So, for us, even though it's a minor indicator, we really use used/allocated in monitoring the usage of these files.

Therefore, if I were to stick for now with bringing back the used/allocated indicator, would convert/reclaim do this? Meaning that (1) I'll do the convert/fdl command, and then (2) execute another convert/reclaim?

I would love to engage you guys to help us with this on a commercial basis, but sadly, our budget right now won't allow it :(
roose
Regular Advisor

Re: Converting an RMS indexed file

By the way, the reason why we try to allocate a bigger file size during reorg is to prevent, as much as possible, early fragmentation of the files as, again, we don't have regular downtime for this system.
Hoff
Honored Contributor

Re: Converting an RMS indexed file

If I've done the math right here (and that's always an open question), my laptop has room for all of these files. And my laptop disk has an I/O path faster than the HBA used by most FC SAN storage controllers.

As for the RMS files here, I usually set the default extent size on the files or on the process or such (to some site-appropriate value of "large"), rather than CONVERT. That defers fragmentation. If you know your growth, then pre-allocate.

You're always going to have various nagging issues with RMS indexed files and such. Best to plan for the interval between your downtime windows, and to plan for replacing RMS with a different approach if and as that is deemed appropriate, or supplanting it with RMS journaling or such.
Hein van den Heuvel
Honored Contributor
Solution

Re: Converting an RMS indexed file

Why did you not respond to the COMPRESSION question? It is the single biggest thing you can do to this file!

>> So, for us, even though its a minor indicator, we really use the used/allocated in monitoring the usage of these files.

Well, it is an utterly bogus indicator. But if it makes you happy!

Instead, you may want to try the attached RMS_SHOW_AREAS tool to get insights into a live file for 'empty space'.

CONVERT/RECLAIM requires standalone access. You might as well convert the file at that time.
It makes buckets with just DELETED records available for re-use. Does your application have deleted records? Probably, but not many. How can you tell?

STATISTICS FOR KEY #0
:
Count of Data Blocks: 2290572 --> 42418 buckets.
:
Count of Level 1 Records: 42418 --> bingo
:
Count of Data Records: 41115 --> fewer.

So there are about 1400 buckets that can POTENTIALLY be recovered. You tell me whether that's useful.
Note, some may hold RRV records and NOT be recoverable, as they remember the original location of a record.
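Spelling out the arithmetic behind that estimate (a sketch using the counts quoted above; with one record per bucket, the level-1 index records count the data buckets, and the difference from the live record count -- roughly 1300 here, in the same ballpark as the ~1400 above -- bounds what a reclaim could recover):

```python
# Counts from the ANALYZE/RMS statistics for key 0, quoted above
data_blocks    = 2_290_572
level1_records = 42_418   # one level-1 index record per data bucket
data_records   = 41_115

blocks_per_bucket = data_blocks // level1_records
# Buckets minus live records is an upper bound on what
# CONVERT/RECLAIM could recover; RRV-holding buckets may not free.
reclaimable_buckets = level1_records - data_records
print(blocks_per_bucket, reclaimable_buckets)  # 54 1303
```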

Just try CONV/RECLAI/STAT/KEY=0 ???

>> thus we just try to allocate bigger file size during reorg (last reorg done on 2007) whenever downtime are available.

That's a good thing. Excellent.

>> So that it would mean that 1, I'll do the convert/fdl command, then after that, 2, will execute another convert/reclaim?

You will have wasted 50,000 slow read IOs.
After a convert there are no buckets to be reclaimed. There may be excess space, but it cannot be reclaimed.

>> engage you guys on helping us on this on a commercial manner, but sadly, our budget right now won't allow

If the system is not important enough to maintain properly then so be it. Just get a nice 'told you so' story ready for management.

Your 'first 15 minutes are free' are up after this reply!

>> why we try to allocate bigger file size during reorg is to really prevent as much possible early defragmentation on the files as again

The 65K (max) EXTENT that you use will go a long way to address that.
I have my real customers use a simple tool I wrote to EXTEND a file by a 32-bit quantity, as needed, when needed. Contiguous or best-try as desired.

Cheers,
Hein.
roose
Regular Advisor

Re: Converting an RMS indexed file

Thanks again for the feedback! Will definitely input this in our planning.
Hein van den Heuvel
Honored Contributor

Re: Converting an RMS indexed file

I looked at the pre-convert analysis file some more (boring plane ride), and noticed relatively odd behaviour on the alternate key.

It takes 20,000+ buckets for 500+ MB, where it only needs 21 buckets and 1/2 MB to hold the data. So most buckets are empty, probably all except the last dozen or so.
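A quick sanity check on that claim, just taking the ratio of the quoted figures (sketch; variable names are mine):

```python
buckets_in_use = 20_000  # "20,000+ buckets" on the alternate key
buckets_needed = 21      # buckets actually required to hold the data

occupancy = buckets_needed / buckets_in_use
print(f"roughly {occupancy:.2%} of the buckets carry live data")
```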

I suspect that $ CONV/RECLAI/STAT/KEY=1 will do wonders, and relatively quickly so.

Are these records relatively constantly/frequently updated with a fresh time-stamp, perhaps to the tune of 1000+ times since creation? Perhaps twice a day?
That would explain it (to me :-), as the application would fill buckets for certain date/time ranges, remove records on the updates, and never come back to reuse those older buckets.

What does the timestamp key look like?
8 bytes is short for a simple text time stamp like a yymmddhh string? It is a little short for julian seconds which needs 10 positions (now = 1255227297 :-)
8 bytes matches a binary OpenVMS timestamp, but by telling RMS it is a simple string, not a bin8, it would have the wrong byte order.
(and my explanation would not hold :-)

Cheers,
Hein.