Operating System - OpenVMS

Re: 0% read hit rate on XFC cache for RMS indexed file being read sequentially using C RTL fgets()?

 
SOLVED
Mark Corcoran
Frequent Advisor

0% read hit rate on XFC cache for RMS indexed file being read sequentially using C RTL fgets()?

Following on from my previous thread about CONVERT /SORT versus SORT + CONVERT /NOSORT, another
problem has arrived in my lap...

A job which post-processes RDB dumped tables (RMS indexed files) to generate a file with records
formed from related parts of these tables has started to slow down.

[There is one main table file which is sorted in order; the field/columnar values on each row/record
determine whether or not the C program has to check the other dumped table files]

Unfortunately, there's no evidence to back this up, just people's vague recollection of how quick
they think it used to be.

Looking at the job, the first thing I found was that the output file it generates was very
fragmented - between 5000 and 7000 fragments of 200 to 900 blocks each.

To see if the fragmentation was the main issue, I worked around this by doing the following:

$ SET RMS_DEFAULT /EXTEND_SIZE=65376
$ COPY NLA0: dev:[dir]output_filename.ext /ALLOCATION=11000000 /CONTIGUOUS

The device on which the file is created has a cluster size of 288 blocks, and 65376 was the highest
multiple of 288 possible that was <= 65535.
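(288 * 227 = 65376; the next multiple up, 288 * 228 = 65664, would exceed the 65535 limit.)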

The COPY pre-allocates a contiguous file for the C program, which was updated to open the file in
append mode.
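
For illustration, the open is essentially the following (a minimal sketch rather than the real code;
the file name is made up, and the extra string arguments are the OpenVMS C RTL's optional RMS
attribute parameters, which I'm assuming apply here):

/* Minimal sketch (not the actual program): open the pre-allocated
   output file in append mode.  The extra string argument is one of
   the OpenVMS C RTL's optional RMS attribute parameters; "deq=65376"
   keeps the large extend quantity in case the pre-allocation is ever
   exhausted.  The file name is illustrative only. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *out = fopen("dev:[dir]output_filename.ext", "a", "deq=65376");

    if (out == NULL)
    {
        perror("fopen");
        return EXIT_FAILURE;
    }

    fputs("example output record\n", out);

    fclose(out);
    return EXIT_SUCCESS;
}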

After running the new job, it was obvious from the following:

$ SHOW MEMORY /CACHE=FILE=dev:[dir]main_table.DAT

that whilst the main input table file was being cached, virtually no reads were being serviced by
the XFC from read aheads, and virtually all were read throughs.

I momentarily forgot that whilst the main input table file is an RMS indexed sequential file, it is
being read sequentially by the C program using simple fgets() calls.
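
In essence, the read side of the program is just the following (a minimal sketch, not the actual
code; the file name is illustrative, and as I understand it each fgets() maps to a sequential RMS
$GET, returning records in primary key order):

/* Minimal sketch of the sequential read loop described above.  The
   file name is illustrative; MAX_REC allows for the 200-byte maximum
   record plus line terminator. */
#include <stdio.h>
#include <stdlib.h>

#define MAX_REC 256

int main(void)
{
    FILE *in;
    char rec[MAX_REC];
    unsigned long count = 0;

    in = fopen("dev:[dir]main_table.dat", "r");
    if (in == NULL)
    {
        perror("fopen");
        return EXIT_FAILURE;
    }

    while (fgets(rec, sizeof rec, in) != NULL)
    {
        /* ... inspect the fields and decide whether the other dumped
           table files need to be consulted ... */
        count++;
    }

    printf("%lu records read\n", count);
    fclose(in);
    return EXIT_SUCCESS;
}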

Thinking that the file was perhaps not sorted in order after all, I TYPE/PAGEd it (given that it
has ~49m records in it, I wanted some control over when my ^C would get picked up), then held the
RETURN key down for a good minute or so.

The records appeared to be in order, but what I did notice was that after this, the SHOW MEMORY
/CACHE indicated that every single read was being serviced as a read ahead from the XFC.

After about 90mins, I killed the job, and found that the XFC cache hit rate was at ~90% (obviously,
it would never get to 100%, because of the initial ~14,000 which were treated as read throughs).

I then ran the job again, but without using TYPE /PAGE on the main table file.

It has now been running for almost as long as the first run, but the cache hit rate is 54%, and
although the read ahead counter value displayed by SHOW MEMORY /CACHE is increasing, so is the
read through counter - approximately 1 in 4 reads end up as READ AHEAD.

Now, I know that this is only 2 individual runs, and hardly what you'd call exhaustive evidence...

However, I'm going to go out on a limb here, and say that without my TYPE/PAGE, either:

a) the C program is largely running ahead of the XFC cache in reading the file contents, so most
reads won't cause sequential read ahead of the file to occur (unless from outside interference,
such as me doing TYPE /PAGE)

or

b) however XFC determines that something is performing sequential reads doesn't work (in this
particular scenario).


For what it's worth, the file attributes are as follows:

Size: 13956192/13956192 Owner: [SYSTEM,*]
Created: 5-APR-2010 11:01:01.32
Revised: 6-APR-2010 18:24:21.06 (4)
Expires:
Backup: 7-APR-2010 02:34:38.75
Effective:
Recording:
Accessed:
Attributes:
Modified:
Linkcount: 1
File organization: Indexed, Prolog: 3, Using 3 keys
In 2 areas
Shelved state: Online
Caching attribute: Writethrough
File attributes: Allocation: 13956192, Extend: 65520, Maximum bucket size: 18
Global buffer count: 0, No version limit
Contiguous best try
Record format: Variable length, maximum 200 bytes, longest 0 bytes
Record attributes: Carriage return carriage control
RMS attributes: None
Journaling enabled: None
File protection: System:RWED, Owner:RWED, Group:RE, World:
Access Cntrl List: None
Client attributes: None

Total of 1 file, 13956192/13956192 blocks.



The last SHOW MEMORY /CACHE command gave the following results:
Extended File Cache File Statistics:

_dev:[dir]table.DAT;1 (open)
Caching is enabled, active caching mode is Write Through
Allocated pages 5122 Total QIOs 144399
Read hits 79682 Virtual reads 144399
Virtual writes 0 Hit rate 55 %
Read aheads 22443 Read throughs 144399
Write throughs 0 Read arounds 0
Write arounds 0

Total of 1 file for this volume

Write Bitmap (WBM) Memory Summary
Local bitmap count: 93 Local bitmap memory usage (MB) 8.40
Master bitmap count: 96 Master bitmap memory usage (MB) 8.27



Is the fact that the Global Buffer Count is set to 0, and/or the fact that the file is an RMS indexed
file being read using the C RTL fgets(), partly to blame here, or is something else going on?

Clearly, I'm reluctant to have a second concurrent job run at the same time as this main job,
simply to TYPE the table file, then be killed after a minute, to ensure that a sufficient
quantity of the file is cached to permit the XFC to service read requests.

If anybody has any thoughts/suggestions, I'd be most grateful.


Mark

[Grrr, hit some sequence on the keyboard, causing IE to go back a page, and lose 90% of this
post, so had to go back and do it from scratch again in notepad...]
Hein van den Heuvel
Honored Contributor

Re: 0% read hit rate on XFC cache for RMS indexed file being read sequentially using C RTL fgets()?

Hello Mark,

That sure is a long description, and I could not always follow it the way I would have liked to, but at least we have some pertinent data. Good!
I'll take a first reply to clear some crud, and then try to get to the real problem.


>> [There is one main table file which is sorted in order; the field/columnar values on each row/record determine whether or not the C program has to check the other dumped table files]

>> people's vague recollection of how quick
they think it used to be.

Too late now, but sprinkle your programs liberally with LIB$SHOW_TIMER!

>> the output file it generates was very fragmented

That can certainly cause unpredictable run times. Pre-allocate, perhaps based on input file size, and use a max extend (64000, 65535, whatever).


>> $ SET RMS_DEFAULT /EXTEND_SIZE=65376

Fine for a process. But too much if done system wide. Slows down tasks like unzipping many little files.

>> $ COPY NLA0: dev:[dir]output_filename.ext /ALLOCATION=11000000 /CONTIGUOUS

Excellent. If contiguous then extend size is irrelevant.
I used to use COPY NL: all the time myself for that purpose.
Since 8.3 I use inline FDL strings:

$cre/fdl="file; contiguous yes; allo 12345678"/log x.x

>>highest multiple of 288 possible that was <= 65535.

Nice thought/touch, but largely irrelevant. OpenVMS has no choice but to round up.

>> The COPY pre-allocates a contiguous file for the C program which was updated to open the file in
append mode.

Excellent

>> input table file is an RMS indexed sequential file, it is being read sequentially by the C program using simple fgets() calls.

No matter. Those map to RMS SYS$GET calls.

Next step is probably to SET FILE /STATISTICS on the existing files (input and output), and use ANAL/SYS... SHOW PROC/RMS=FSB or my RMS_STATS tool to display all counters.

>> Thinking that the file was perhaps not sorted in order after all

An indexed file is sorted by primary key. No ifs or buts about that.

>>, I TYPE/PAGEd it (given that it
has ~49m records in it, I wanted some control over when my ^C would get picked up), then held the
RETURN key down for a good minute or so.

How crude.
$ perl -pe "last if $. > 10000" dev:[dir]main_table.dat > nl:

>> The records appeared to be in order, but what I did notice was that after this, the SHOW MEMORY
/CACHE indicated that every single read was being serviced as a read ahead from the XFC.

As pre-loaded by the program.

>> it would never get to 100%, because of the initial ~14,000 which were treated as read throughs).

Read-throughs are just reads that went through the cache, not necessarily through to the disk.
Read to the disk = reads-hits + ahead.

See HELP SHOW MEMORY... deep down:
Read throughs: Number of Virtual Reads that are capable of being satisfied by the extended file cache.



>> Size: 13956192/13956192 Owner:

Is that the table/driver file?


>> Is the fact that the Global Buffer Count set to 0 and/or the fact that the file is an RMS indexed
file being read using the C RTL fgets() partly to blame here, or is something else going on?

Nah.

>> Clearly, I'm reluctant to have a second concurrent job run at the same time as this main job, simply to TYPE the table file, then be killed after a minute, to ensure that a sufficient
quantity of the file is cached to permit the XFC to service read requests.

That's not so clear to me.
Clearly TYPE is a silly tool for this, but you know more than the XFC can guess.
So launching something to pre-read is not that crazy an idea for predictable jobs with critical run-time requirements.

I once created a 'read-ahead-and-keep-ahead' tool, just for that reason.
It would pre-read N buckets' worth of data. It then used an RMS-compatible bucket lock with a blocking AST on the first bucket to detect 'interest in a bucket'. When the AST triggered on bucket M, it would grab a lock on the next bucket (M+1), release M, and read bucket M + N + 1.
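
As a much-simplified sketch of the pre-read idea (the bucket-lock/blocking-AST tracking of the consumer is omitted here and replaced by a crude fixed pause; the file name, batch size and pause are just assumptions):

/* Simplified "pre-reader" sketch: touch the file sequentially in
   batches so its buckets are pulled through the XFC just ahead of
   the real job.  The genuine keep-ahead tool described above tracks
   the consumer with an RMS-compatible bucket lock and a blocking
   AST; here a fixed pause stands in for that coordination. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BATCH 10000     /* records to pre-read per burst      */
#define PAUSE 1         /* seconds to pause between bursts    */

int main(void)
{
    FILE *in;
    char rec[256];
    unsigned long n = 0;

    in = fopen("dev:[dir]main_table.dat", "r");   /* illustrative name */
    if (in == NULL)
    {
        perror("fopen");
        return EXIT_FAILURE;
    }

    while (fgets(rec, sizeof rec, in) != NULL)
    {
        if (++n % BATCH == 0)
            sleep(PAUSE);   /* crude pacing instead of lock-based tracking */
    }

    fclose(in);
    return EXIT_SUCCESS;
}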

>> If anybody has any thoughts/suggestions, I'd be most grateful.

- RMS stats.
- Be sure to watch activity on those other files.
- Engage a professional in this space if it is really critical.

Cheers,
Hein van den Heuvel ( at gmail dot com )
HvdH Performance Consulting

Hein van den Heuvel
Honored Contributor

Re: 0% read hit rate on XFC cache for RMS indexed file being read sequentially using C RTL fgets()?

Meant to open with

0% hit rate is perfectly normal for
- files that have not been read/written in a while.
- files that well exceed the cache capacity and are read sequentially
- when no-caching is in effect
- when IOs are done larger than the max-cache IO size.
- when concurrent updates are happening on other nodes in the cluster.

Hein
Ian Miller.
Honored Contributor

Re: 0% read hit rate on XFC cache for RMS indexed file being read sequentially using C RTL fgets()?

At present
- files that well exceed the cache capacity and are read sequentially

looks likely, but how big is the cache on this system, and what aged version of VMS is being used?
____________________
Purely Personal Opinion
Mark Corcoran
Frequent Advisor

Re: 0% read hit rate on XFC cache for RMS indexed file being read sequentially using C RTL fgets()?

Hein:
>Too late now, but sprinkle your programs liberally with LIB$SHOW_TIMER!
I know how much the bean-counters like to have stats, so I always try to make sure I get timing info for various stages of programs (can also be useful for myself too).

Alas, this is someone else's code, developed some time ago, and the concern was more with getting it working than making it perfect ;-)



>>> $ SET RMS_DEFAULT /EXTEND_SIZE=65376
>Fine for a process. But too much if done system wide

Don't worry, it was only for this one job, as a test :-)



>Excellent. If contiguous then extend size is irrelevant.

I'd wondered about this - assuming that the largest contiguous free space on the disk was 130752 blocks, and the RMS extend size had been set to 65376, I'm guessing that if exactly 130752 blocks were required, then:
a) they'd be allocated in two separate logical operations
b) as far as BITMAP.SYS is concerned, the fact that there are two groups of 65376 blocks is irrelevant, because they are "next to each other", so would appear as a single fragment...



>>>highest multiple of 288 possible that was <= 65535.
>Nice thought/touch, but largely irrelevant. OpenVMS has no choice but to round up.

So, if I set the extend size to 65535, and the cluster size was 288 blocks, presumably extending the file should theoretically mean 65664 blocks allocated?

I had guessed that the 65535 limit was as a result of a word being used to store the value, so I couldn't see how 65664 (17 bits) would fit...



>Next step is probably to SET FILE/STAT on the existing files, in an output, and use ANAL/SYS.. SHOW PROC/RMS=FSB or my RMS_STATS tool to display all counters.

I tried the SET FILE/STAT and a MONITOR RMS /FILE=, but to be honest, it didn't reveal very much - the only non-zero counters were the CUR, AVE and MAX $GET Call Rate (Seq).

I knocked up a quick .EXE of my own to effectively do the same as the real one, and this was the MONITOR RMS /FILE output (as a snapshot):

Active Streams: 1 CUR AVE MIN MAX

$GET Call Rate (Seq) 19375.33 4123.63 0.00 21861.00
(Key) 0.00 0.00 0.00 0.00
(RFA) 0.00 0.00 0.00 0.00
$FIND Call Rate (Seq) 0.00 0.00 0.00 0.00
(Key) 0.00 0.00 0.00 0.00
(RFA) 0.00 0.00 0.00 0.00
$PUT Call Rate (Seq) 0.00 0.00 0.00 0.00
(Key) 0.00 0.00 0.00 0.00
$READ Call Rate 0.00 0.00 0.00 0.00
$WRITE Call Rate 0.00 0.00 0.00 0.00
$UPDATE Call Rate 0.00 0.00 0.00 0.00
$DELETE Call Rate 0.00 0.00 0.00 0.00
$TRUNCATE Call Rate 0.00 0.00 0.00 0.00
$EXTEND Call Rate 0.00 0.00 0.00 0.00
$FLUSH Call Rate 0.00 0.00 0.00 0.00


As for the ANA /SYS and SHOW PROC /FSB, that didn't reveal much either:

FSB Address: 00064000
-----------
OPEN: 1. CLOSE: 0.
CONNECT: 1. DISCONN: 0.
REWIND: 0. FLUSH: 0.
EXTEND: 0. blocks: 0.
TRUNCATE: 0. blocks: 0.

FIND seq: 0. key: 0. rfa: 0.
GET seq: 159199. key: 0. rfa: 0. bytes: 18296029.
PUT seq: 0. key: 0. bytes: 0.
UPDATE: 0. bytes: 0.
DELETE: 0.

READ: 0. bytes: 0.
WRITE: 0. bytes: 0.

LOCAL CACHE attempts: 161187. hits: 159198. read: 1989. write: 0.
GLOBAL CACHE attempts: 0. hits: 0. read: 0. write: 0.
GLOBAL BUFFER INTERLOCKING:
GBHSH Intlck Collisions: 0 GBH Intlck Collisions: 0
GBHSH Held at Rundown: 0 GBH Held at Rundown: 0

LOCKS: Enqueue Dequeue Convert Block-ast
Shared file: 0. 0. 0. 0.
Local buffer: 0. 0. 0. 0.
Global buffer: 0. 0. 0. 0.
Shared append: 0. 0. 0. 0.
Global section: 0. 0. 0. 0.
Data record: 0. 0. 0.

XQP QIO: 1.

BUCKET SPLIT (1) : 0. SPLIT (N) : 0. OUTBUFQUO: 0.

DEV1 .. DEV5: 00000000 00000000 00000000 00000000 00000000




>An indexed file is sorted by primary key. No ifs or buts about that.
Ah sorry, I *think* what I meant was that the file is indexed in order, and the records are also stored in order (rather than having a nice sequential index still pointing to "random" disk blocks).



>>> Size: 13956192/13956192 Owner:
>Is that the table/driver file?

Yes, this is the primary input file, just under 14m blocks in size.



>That's not so clear to me.
>Clearly TYPE is a silly tool for this, but you know more than the XFC can guess.
I looked up XFC in the system management manual, and its discussion of XFC detecting sequential reads of same-size I/O requests led me to the VCC_READAHEAD SYSGEN parameter - thinking that perhaps it wasn't set, but alas it was.

On the face of it, it appears that the executable is simply reading from the primary input file sequentially quicker than XFC can detect that that is what is happening, so although XFC is caching the file, it's always behind the executable (unless it gets a head start from something else, whereby the reads from the executable allow XFC to keep on topping up the file into the cache).




>Engage a professional in this space if it is really critical.
Perhaps this is not the place to discuss it, but I never heard the story about how you and the Hoff came to part ways with HP - jumped, or pushed? How has the private sector been treating you since?



>when concurrent updates are happening on other nodes in the cluster.
Not the case here - other jobs may happen to read the same primary input file, but certainly during my testing, there was just the one process accessing the file, and it was doing the sequential read.



Ian:
>looks likely but how big is the cache on this system
XFC currently allocated at 2.75GB.

>and what aged version of VMS is being used?
You know me and many other HP customers only too well ;-) 7.3-2 on this cluster.
Hein van den Heuvel
Honored Contributor

Re: 0% read hit rate on XFC cache for RMS indexed file being read sequentially using C RTL fgets()?



>> I'd wondered about this - assuming that the largest contiguous free space on the disk was 130752 blocks, and the RMS extend size had been set to 65376, I'm guessing that if exactly 130752 blocks were required, then:
a) they'd be allocated in two separate logical operations

Yes.

>> b) as far as BITMAP.SYS is concerned, the fact that there are two groups of 65376 blocks is irrelevant, because they are "next to each other", so would appear as a single fragment...

They would appear as a single fragment in the MAP area for the file using them ($ DUMP/HEAD/BLOCK=COUNT=0 ). In the bitmap they would be 2 * 227 adjacent bits.

>> So, if I set the extend size to 65535, and the cluster size was 288 blocks, presumably extending the file should theoretically mean 65664 blocks allocated?

Yes indeed. Because VMS has to give you 227 + 1 = 228 clusters (65664 blocks) to satisfy the extend request.

>> the 65535 limit was as a result of a word being used to store the value

Correct

>>> I tried the SET FILE/STAT and a MONITOR RMS /FILE=, but to be honest, it didn't reveal very much

IMHO the way MONI RMS presents that data is next to useless.

>>> As for the ANA /SYS and SHOW PROC /FSB, that didn't reveal much either:

FSB Address: 00064000
:
GET seq: 159199. key: 0. rfa: 0. bytes: 18296029.
:
LOCAL CACHE attempts: 161187. hits: 159198. read: 1989. write: 0.

IMHO that indicated a lot. You needed an IO about once every 80 records. So there must have been 80 records to a bucket. Those 1989 IOs would have gone through to the XFC to be resolved there, from a prior read(-ahead) or from a real IO.
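
(From the FSB above: 159199 sequential GETs over 1989 local-cache reads is about 80 records per read, and at an average of 18296029 / 159199, roughly 115 bytes per record, an 18-block (9216-byte) bucket does indeed hold about 80 records.)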

>> Ah sorry, I *think* what I meant was that the file is indexed in order, and the records are also stored in order (rather than having a nice sequential index still pointing to "random" disk blocks).

Got it. Yes, for records arriving in primary key order, both CONVERT FAST-LOAD and plain old RMS will allocate in ever-increasing adjacent buckets. A minor exception is that if the file needs to grow while doing so, then the new bucket is started in the fresh extend, potentially leaving the tail end of the current extend unused for up to bucket size minus 1 blocks. In this case the bucket size divides evenly into the cluster size, so that's not an issue.


>>> On the face of it, it appears that the executable is simply reading from the primary input file sequentially quicker than XFC can detect that that is what is happening

I never really studied the read-ahead for XFC. RMS only does read-ahead for sequential files, not indexed, and for sequential files it 'bursts' reading a bunch, but does not keep ahead. I actually tried to implement that while in RMS engineering, but there were gotchas and I had to abandon it at the time.

>> I never heard the story about how you and the Hoff come to part ways with HP - jumped, or pushed?

I can only speak for myself. I received an early retirement opportunity which seemed too nice to refuse. It was a voluntary choice, creating optimal (financial) conditions to try working independently for a while. That was October 2005. So far so good!

Regards,
Hein
John McL
Trusted Contributor

Re: 0% read hit rate on XFC cache for RMS indexed file being read sequentially using C RTL fgets()?

Hein, I'm watching this thread with some interest so a question - two actually - for you...

In the second last paragraph of your response immediately above this one you seem to be implying that there's no read-ahead on indexed files but there is for sequential files. Is this correct?

If so, is that set by the file characteristics or by the parameters in the open statement?
Hein van den Heuvel
Honored Contributor

Re: 0% read hit rate on XFC cache for RMS indexed file being read sequentially using C RTL fgets()?

Hello John

John >>In the second last paragraph of your response immediately above this one you seem to be implying that there's no read-ahead on indexed files but there is for sequential files. Is this correct?

Only from an RMS perspective: it is RMS that is not reading ahead into its buffers.
The XFC is blissfully ignorant as to whether RMS is doing an IO from a sequential file or an indexed file, so the XFC can, independently of RMS, trigger a read-ahead into its buffers for RMS to find the data later.
And behind the XFC the controller knows even less and it can do read-aheads, and behind that the physical disk can be doing read-ahead. So the odds that you'd be waiting for a disk seek/rotation are low!

>> If so, is that set by the file characteristics or by the parameters in the open statement?

For a sequential file you have to request RAB$V_RAH in the connect, which is part of the OPEN from an HLL perspective. It is the default for many languages. The number of buffers defines how deep the read-ahead goes.

The RMS read ahead (on sequential files) can probably disrupt the XFC read ahead recognition. I never experimented with that though.
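
From C, that request can be made with the C RTL's optional fopen() attribute arguments; a minimal sketch for a sequential file is shown below (the file name is illustrative, and as noted RMS will simply ignore RAH for an indexed file):

/* Minimal sketch: request RMS read-ahead on a *sequential* file via
   the DEC C RTL's optional fopen() attribute arguments.  "rop=rah"
   sets RAB$V_RAH at connect time; "mbc" and "mbf" size the buffers
   the read-ahead fills.  The file name is illustrative. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *in;
    char rec[256];

    in = fopen("dev:[dir]sequential_file.dat", "r",
               "rop=rah",   /* request read-ahead                 */
               "mbc=127",   /* 127 blocks per buffer              */
               "mbf=4");    /* four buffers for the look-ahead    */
    if (in == NULL)
    {
        perror("fopen");
        return EXIT_FAILURE;
    }

    while (fgets(rec, sizeof rec, in) != NULL)
        ;   /* process records */

    fclose(in);
    return EXIT_SUCCESS;
}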

RMS read-ahead on indexed files would not seem too hard to implement, but it was never done nor requested. Regrettably. Again, the XFC may well decide to do the read-ahead for indexed files.

I haven't looked at the code, but it would not surprise me if the XFC found it easier to do read-ahead for IOs which nicely line up with its 16-block cache lines. But for that to happen for an indexed file, many stars need to line up! (Bucket size 2, 4, 8, 16, or 32. Cluster size a power of 2. RMS primary key data NOT in area 0, or not pre-allocated.)

Hein
Hoff
Honored Contributor

Re: 0% read hit rate on XFC cache for RMS indexed file being read sequentially using C RTL fgets()?

Records in an indexed file aren't necessarily adjacent, so there's no direct way to warm up a generic block cache given the current design of RMS. RMS would need to do that, or to provide hints to XFC. Neither of which, AFAIK, exists at present.

Whether Hein's suggested leading-traversal approach might be worth the implementation effort is interesting; I'd want to measure that cache pre-populate scheme.

It would be equally interesting to toss an upgrade or a RAM disk or an SSD at the problem, and measure throughput with that. 66 megabytes isn't all that much data; that'd be close to fitting entirely into the RAM in my cellphone, and would be dwarfed by what I've got stored in the flash. Best case, this application should be limited by the spiral transfer rate of the disk. Or by your RAM disk or SSD bandwidth. Arguably, RMS could just be getting in the way here if you can run from analogous in-memory data structures. (RMS doesn't have the concept of hauling an entire file into memory as one big wad, performing the required operations, and then rolling it all out as a big wad.)

It'd be interesting to compare RMS indexed files to an application built on Apache Cassandra, too. But that's fodder for discussion on another day. And no, I'm not aware of a VMS port of Cassandra.

And after that wall of text...

When I go after RMS files from C, I use this code:

http://labs.hoffmanlabs.com/node/595

And generally not with the file I/O portions of the C RTL.

The C I/O has its share of considerations here; that you can even get at indexed files through a mostly-generic C API is somewhat of a remarkable implementation achievement. But by that same token, don't expect it to be the go-fast implementation. I might well look to haul it all into memory with a few and large I/Os.
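
As a rough sketch of the 'few and large I/Os' idea (using RMS block I/O rather than record-at-a-time fgets(); the file name is illustrative, and a real version would retain or parse the blocks instead of just streaming them):

/* Rough sketch: stream a file with RMS block I/O, 127 blocks per
   $READ, instead of record-at-a-time access.  A real version would
   keep or parse the data; this just shows the large-transfer loop. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <rms.h>
#include <rmsdef.h>
#include <starlet.h>

int main(void)
{
    struct FAB fab = cc$rms_fab;
    struct RAB rab = cc$rms_rab;
    static char buf[127 * 512];        /* one large transfer: 127 blocks */
    unsigned int sts;

    fab.fab$l_fna = "dev:[dir]main_table.dat";   /* illustrative name */
    fab.fab$b_fns = strlen(fab.fab$l_fna);
    fab.fab$b_fac = FAB$M_GET | FAB$M_BIO;       /* block I/O access  */
    fab.fab$b_shr = FAB$M_SHRGET;

    sts = sys$open(&fab);
    if (!(sts & 1)) return sts;

    rab.rab$l_fab = &fab;
    sts = sys$connect(&rab);
    if (!(sts & 1)) return sts;

    rab.rab$l_ubf = buf;
    rab.rab$w_usz = sizeof buf;

    /* With RAB$L_BKT left at zero, each $READ continues at the next
       virtual block; RAB$W_RSZ returns the bytes actually read. */
    do
        sts = sys$read(&rab);
    while (sts & 1);

    sys$close(&fab);
    return (sts == RMS$_EOF) ? EXIT_SUCCESS : sts;
}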

P Muralidhar Kini
Honored Contributor
Solution

Re: 0% read hit rate on XFC cache for RMS indexed file being read sequentially using C RTL fgets()?

XFC will not cache IOs to a particular file in case -

* IOs done to the file are of a size greater than VCC_MAX_IO_SIZE blocks.

* The file is present on a local RAMDISK.

* The file is accessed cluster-wide and there is at least one node in the cluster that is doing write IO to the file.

* The file will be temporarily not cached if logical IOs are done to the file or to the volume on which the file resides.


XFC ReadAhead -

* XFC does read ahead for a file if the SYSGEN parameter VCC_READAHEAD is set to 1.

* XFC has a read-ahead factor of 3, which would mean that when read ahead is being performed on a file, 1 among 4 IOs to the file will be a read ahead.

XFC ReadHits -

* Whether the IO is a read-through or a read-ahead, it is still a Read IO operation that XFC has to perform, and it is counted in the statistics as an IO.

* The hit rate for the file is calculated as follows:

HitRate = ReadHits / TotalIO

Here,
ReadHits - Number of times a Read operation was satisfied from the cache
TotalIO - Number of Read operations

Both "ReadHits" and "TotalIO" include read-throughs as well as read-aheads.


From the information you have provided,
>> SHOW MEMORY /CACHE
>> Allocated pages 5122
>> Total QIOs 144399
>> Read hits 79682
>> Virtual reads 144399
>> Virtual writes 0
>> Hit rate 55 %

IOs to the file are going through the XFC cache, and a number of those IOs are getting satisfied from the cache, hence the hit rate of 55% that we are seeing.
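
(Using the figures quoted above: 79682 read hits out of 144399 virtual reads is 79682 / 144399, roughly 0.55, which matches the 55% shown.)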

The question is why the hit rate is so low. My suspicion is that there is some other operation on that file (or on the volume on which the file resides) that is causing the contents of the file to get deposed (i.e. cleared) from the cache once in a while. This would cause subsequent IOs to the file to be read from the disk (read miss). A couple of obvious reasons for the file being deposed would be either logical IOs to the file/volume or cluster-wide write operations on the file.


Please provide the following information about the file -

1) XFC statistics from SDA:
ANAL/SYS
SDA> XFC SHOW FILE/ID=FID_IN_HEX/STATS
SDA> XFC SHOW MEM

NOTE: FID_IN_HEX is the FID of the file (dev:[dir]table.DAT;1) in hex.

2) How big are the IOs that the application issues to the file - are they 50 blocks, 100 blocks, ...?

3) Is the file accessed cluster-wide? If yes, what type of IO (read/write) is performed on that file cluster-wide, and how frequently?

This information could provide further clues as to why the hit rate is so low for the file.

Regards,
Murali
Let There Be Rock - AC/DC