John Gillings
Honored Contributor

Re: About IO_PERFORM

Roberto,

A simple thought experiment... work out the relative time costs of CPU, memory access and disk access. If we scale up a CPU register to register ADD operation and call it 1 second, how long will the other operations take? VERY rough numbers (worked out a few years ago, so potentially changed)

ADD operation 1 second
Chip Cache memory access 10-50 seconds
Main memory access 5-20 minutes
Disk access 8-12 MONTHS

So the cost of an I/O operation is WAY, WAY dominated by the physical access to the disk. Yes, if used correctly, FASTIO can shave off CPU overheads, but if all it's doing is trimming a few minutes out of a timeframe of months, it's not really worthwhile.

If you can spend CPU time eliminating the need to go to the physical disk, your overall throughput will be higher. That's where things like caching and write-behind come in. RMS, Rdb and the OpenVMS file system can automatically provide you with caching, which you'll lose if you replace RMS with your own low-level code.

To improve write performance, you need to minimise the number of I/O operations that go all the way to disk. This sometimes has to be traded off against data security: delaying a write and keeping data in memory extends the window of vulnerability to crashes and power failures.

As others have said, analyze your data flow, determine the bottlenecks, and use your knowledge of the data to move the bottlenecks to more acceptable places.

Before diving into the depths of low-level programming, make sure you've tried all the easy things, like choosing the most appropriate data structures, enabling global buffers and choosing optimal bucket sizes (a sketch of where those knobs live follows below).
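
For illustration, a minimal C sketch (file name and counts are placeholders, not a recommendation) of where those easy knobs live when you create and open the file yourself. Global buffers can also be enabled per file from DCL with SET FILE/GLOBAL_BUFFER=n.

#include <rms.h>
#include <starlet.h>

/* Sketch only: sets bucket size and a global buffer count at create
   time, and a local multibuffer count on the record stream at
   connect time. All values are placeholders. */
int open_tuned(struct FAB *fab, struct RAB *rab)
{
    int sts;

    *fab = cc$rms_fab;
    *rab = cc$rms_rab;

    fab->fab$l_fna = "TARGET.DAT";            /* hypothetical file     */
    fab->fab$b_fns = sizeof "TARGET.DAT" - 1;
    fab->fab$b_org = FAB$C_REL;               /* relative organization */
    fab->fab$b_rfm = FAB$C_FIX;
    fab->fab$w_mrs = 302;                     /* record size           */
    fab->fab$b_bks = 10;                      /* bucket size in blocks */
    fab->fab$w_gbc = 100;                     /* global buffer count   */
    fab->fab$b_fac = FAB$M_PUT;

    sts = sys$create(fab);
    if (!(sts & 1)) return sts;

    rab->rab$l_fab = fab;
    rab->rab$b_mbf = 4;                       /* local buffer count    */
    return sys$connect(rab);
}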
A crucible of informative mistakes
Bob CI
Advisor

Re: About IO_PERFORM

Again, thanks for your advice....

I'm reading all the books I can find about performance management........

I'm using MONITOR RMS, MONITOR IO, MONITOR DISK and SET FILE/STAT

I'm starting to investigate bottlenecks, and with MONITOR RMS for my target file I got these results:

"Loc Buf Cache Hit Percent " --> 93
"Loc Buf Cache Attempt Rate "--> 60

The disk responds very well, because

"I/O Request Queue Length "--> < 1 always...

and

"I/O Operation Rate "for target disk is
> 2000...

If I'm only inserting records in my relative file (record size 302, bucket size 10, and no deferred write because I need maximum data consistency), I suppose I would always get local cache faults, because every insert is new and I don't have that record in the cache.

Could it be that when I write a record to disk, RMS reads several blocks (my whole bucket), and these blocks in local memory are valid for future inserts?

Is this the reason for my local cache hits?

All inserts are with RRN = RRN + 1.

Always one after the other....
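
In outline, that insert loop looks roughly like this C sketch (file name, record count and error handling are made up for illustration):

#include <rms.h>
#include <starlet.h>
#include <stdlib.h>

/* Sketch: sequential inserts into an existing relative file by
   ascending RRN, where the 4-byte RRN is the key of each $PUT. */
int main(void)
{
    struct FAB fab = cc$rms_fab;
    struct RAB rab = cc$rms_rab;
    char record[302];                /* 302-byte record, as above */
    unsigned int rrn;
    int sts;

    fab.fab$l_fna = "TARGET.DAT";    /* hypothetical file name */
    fab.fab$b_fns = sizeof "TARGET.DAT" - 1;
    fab.fab$b_fac = FAB$M_PUT;
    fab.fab$b_shr = FAB$M_SHRPUT | FAB$M_SHRGET;
    sts = sys$open(&fab);
    if (!(sts & 1)) exit(sts);

    rab.rab$l_fab = &fab;
    rab.rab$b_rac = RAB$C_KEY;       /* random access: key is the RRN */
    rab.rab$l_kbf = (char *)&rrn;
    rab.rab$b_ksz = sizeof rrn;
    rab.rab$l_rbf = record;
    rab.rab$w_rsz = sizeof record;
    sts = sys$connect(&rab);
    if (!(sts & 1)) exit(sts);

    for (rrn = 1; rrn <= 1000; rrn++) {   /* RRN = RRN + 1 each time */
        /* ... fill record ... */
        sts = sys$put(&rab);         /* one bucket write per $PUT when */
        if (!(sts & 1)) exit(sts);   /* deferred write is off          */
    }
    sys$close(&fab);
    return 1;                        /* SS$_NORMAL */
}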


Thanks to everybody in advance....
Hein van den Heuvel
Honored Contributor

Re: About IO_PERFORM

>> Again, thanks for your advice....

Good. Btw... say thanks with 'points' every now and then to give an indication as to which reply helped and which was 'thanks for playing'. :-)

>> I'm reading all the books I can find about performance management........

Excellent!

>> I├В┬┤m using MONITOR RMS, MONITOR IO, MONITOR DISK and SET FILE/STAT

Excellent.
Next step: T4, and also my RMS_STATS, or ANALYZE/SYSTEM... SHOW PROC/RMS=(FSB,BDBSUM)
http://h71000.www7.hp.com/freeware/freeware60/rms_tools/rms_stats.exe
Send email for the most recent sources.
(Hmmm, rms_stats expects a file opened shared, so that might not work.)

>> "Loc Buf Cache Hit Percent " --> 93

ho hum.


>> "Loc Buf Cache Attempt Rate "--> 60

Low.

>> The disk responds very well, because
>> "I/O Request Queue Length "--> < 1 always...

Good news and bad news.
Good: Yeah it keeps up.
Bad: There probably is no concurrency at all.

Are the writer tasks using RAB$V_ASY = 1 + SYS$WAIT just before the $PUT to make sure the last write is done?
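
A minimal sketch of that pattern (assuming the RAB is already connected; error paths simplified):

#include <rms.h>
#include <starlet.h>

/* Overlap record preparation with the previous write: each $PUT is
   issued asynchronously (RAB$M_ASY), and $WAIT is called just before
   the next $PUT to make sure the prior one has completed. If no
   operation is outstanding, $WAIT returns immediately. */
int put_overlapped(struct RAB *rab, char *rec, unsigned short len)
{
    int sts;

    sts = sys$wait(rab);            /* completion status of last $PUT */
    if (!(sts & 1)) return sts;

    rab->rab$l_rop |= RAB$M_ASY;    /* asynchronous record operation  */
    rab->rab$l_rbf = rec;
    rab->rab$w_rsz = len;
    return sys$put(rab);            /* may return before the I/O ends */
}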

"I/O Operation Rate "for target disk is
> 2000...

That does not jibe with the RMS stats above unless there are 2000/~60 =~ 30 active files.

>> If I'm only inserting records in my relative file (record size 302, bucket size 10, and no deferred write because I need maximum data consistency)

So you can fit 16 records in a bucket. Every 16 records you'll see 16 writes, 1 read and 17 cache attempts. That's 94% hit rate as observed.

However... please realize you are writing 10 blocks of data every time!

Going to 1 block buckets is no solution as this will cause a 1 read + 1 write per $PUT.

Going to $WRITE with 2-block faked buckets will probably be optimal, if the (excessively!?) stringent one-write-I/O-per-$PUT requirement is maintained. A real speedup would come from postponing the writes to 1 per N records or 1 per M milliseconds, whichever comes first; a sketch of that follows below.
And a real-real speedup could come from a 'group commit': grouping writes from all streams for N milliseconds, then committing all streams in 1 I/O and giving an ACK back to each stream in the group.
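
A minimal sketch of the "1 write per N records" idea (it assumes the file was opened with deferred write, fab$l_fop |= FAB$M_DFW, so a $PUT normally just dirties an RMS buffer; FLUSH_EVERY is a made-up policy value, and the M-millisecond timer is left out):

#include <rms.h>
#include <starlet.h>

#define FLUSH_EVERY 16              /* N: records per commit (policy) */

/* With deferred write on, $PUT usually costs no disk I/O; $FLUSH
   forces the dirty buckets out, committing the whole batch at once. */
int put_batched(struct RAB *rab, char *rec, unsigned short len)
{
    static int since_flush = 0;
    int sts;

    rab->rab$l_rbf = rec;
    rab->rab$w_rsz = len;
    sts = sys$put(rab);
    if (!(sts & 1)) return sts;

    if (++since_flush >= FLUSH_EVERY) {
        since_flush = 0;
        sts = sys$flush(rab);       /* one I/O for the last N records */
    }
    return sts;
}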

>> I suppose I would always get local cache faults, because every insert is new and I don't have that record in the cache.

Depends a little on pre-allocation.
RMS will read the bucket and page-fault at that time.

>> Could it be that when I write a record to disk, RMS reads several blocks (my whole bucket), and these blocks in local memory are valid for future inserts?

Yes, but as indicated, RMS writes the whole bucket, including previously written records.

>> Is this the reason for my hit in local cache ?

See above.

>> All inserts are with RRN = RRN + 1.

Are you using 1 file per connection?
Are you using SHARING? APPEND (RAB$V_EOF=1)?
Where do you get the first RRN?

Maybe you want a shared memory section + $updsec to bring it to the disk?
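
Very roughly, that last suggestion looks like this (a sketch only: a "user file open" yields a channel in fab$l_stv, $CRMPSC maps file blocks into memory, and $UPDSEC later writes modified pages back; names, sizes and error handling are illustrative):

#include <rms.h>
#include <secdef.h>
#include <starlet.h>

/* Map 'blocks' blocks of a file into P0 space for direct access. */
static char *map_file(char *name, int namelen, int blocks)
{
    struct FAB fab = cc$rms_fab;
    void *inadr[2] = {0, 0};        /* P0 region; $CRMPSC picks address */
    void *retadr[2];
    int sts;

    fab.fab$l_fna = name;
    fab.fab$b_fns = namelen;
    fab.fab$l_fop = FAB$M_UFO;      /* "user file open": channel only, */
    sts = sys$open(&fab);           /* no RMS record processing        */
    if (!(sts & 1)) return 0;

    sts = sys$crmpsc((void *)inadr, (void *)retadr, 0,
                     SEC$M_WRT | SEC$M_EXPREG,
                     0, 0, 0,
                     fab.fab$l_stv, /* channel from the UFO open */
                     blocks, 0, 0, 0);
    return (sts & 1) ? (char *)retadr[0] : 0;
}

/* To commit: $UPDSEC queues writes of the modified pages; completion
   is signaled via the event flag / IOSB, along the lines of
       sys$updsec(range, 0, 0, 0, 0, &iosb, 0, 0);
   where 'range' is the mapped address range returned in retadr.      */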

Good luck!
Hein.
Bob CI
Advisor

Re: About IO_PERFORM

Hello, guys..... thanks for all your advice!!!

I was on holiday, and I returned to my job last week. I had several meetings with other people in my organization. Finally it was possible for the application people to change the application, so now we can use deferred write, because we have a more robust recovery.

Speed has increased with this RMS option.

We do an RMS flush every block of messages, not every message as in the previous version.

As well, we update the lock (we use a lock, and other processes use blocking ASTs to notice the insertion of a new record in the file) every block of messages; a sketch of this pattern follows below.
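
For anyone following along, the notification part is the classic lock "doorbell". A rough sketch of the writer side (the resource name and value-block layout are invented; readers are assumed to hold the lock in PR mode with a blocking AST that drops them to NL and re-requests PR with LCK$M_VALBLK to pick up the new value):

#include <descrip.h>
#include <lckdef.h>
#include <starlet.h>

typedef struct {                    /* lock status block + value block */
    unsigned short status;
    unsigned short reserved;
    unsigned int   lkid;
    unsigned int   value[4];        /* 16-byte lock value block */
} LKSB;

static LKSB lksb;
static $DESCRIPTOR(resnam, "MYAPP_NEWREC_DOORBELL");  /* made-up name */

int doorbell_init(void)             /* create the lock at NL mode */
{
    return sys$enqw(0, LCK$K_NLMODE, &lksb, 0, &resnam,
                    0, 0, 0, 0, 0, 0);
}

int announce_batch(unsigned int latest_rrn)
{
    int sts;

    /* Convert up to EX: readers holding PR get their blocking ASTs
       fired and are expected to drop to NL so this is granted. */
    sts = sys$enqw(0, LCK$K_EXMODE, &lksb, LCK$M_CONVERT,
                   &resnam, 0, 0, 0, 0, 0, 0);
    if (!(sts & 1)) return sts;

    lksb.value[0] = latest_rrn;     /* publish the latest record number */

    /* Convert back down to NL, writing the value block so readers
       see latest_rrn when they re-acquire PR with LCK$M_VALBLK. */
    return sys$enqw(0, LCK$K_NLMODE, &lksb,
                    LCK$M_CONVERT | LCK$M_VALBLK,
                    &resnam, 0, 0, 0, 0, 0, 0);
}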

We have changed the bucket size to a higher value....

We are investigating further ways to improve the application...

Thanks to everybody for all....