Operating System - OpenVMS

About performance with RMS relative files...

 
Bob CI
Advisor

About performance with RMS relative files...

Hello to everybody on the forum!

We have a process in our installation that performs many writes, in ascending record order, to an RMS relative file. The record size is 500 bytes, less than one block. The bucket size is currently 12 blocks, but we only care about the write throughput of this process, not the performance of the reader processes. We want to speed up the writes... we want more writes per second...

I know that from the point of view of a writer, and with deferred write not suitable for our application (integrity is critical), smaller bucket sizes are reasonable, so we want to reduce the bucket size of the file, which has been 12 for years...
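For reference, the relationship between bucket size and records per bucket can be sketched with a few lines of arithmetic (illustrative Python, not VMS code; the one-byte-per-record cell overhead is an assumption you should verify against your own file with ANALYZE/RMS_FILE):

```python
# Illustrative arithmetic only: how many 500-byte records fit in a
# relative-file bucket of a given size, assuming 512-byte blocks and
# roughly 1 byte of RMS cell overhead per fixed-length record (assumption).
BLOCK_BYTES = 512
CELL_OVERHEAD = 1  # assumption; check with ANALYZE/RMS_FILE

def records_per_bucket(bucket_blocks, record_bytes):
    cell = record_bytes + CELL_OVERHEAD
    return (bucket_blocks * BLOCK_BYTES) // cell

for bkt in (12, 4, 2, 1):
    print(bkt, records_per_bucket(bkt, 500))
# → 12 12 / 4 4 / 2 2 / 1 1: with ~500-byte records every candidate
#   bucket size packs one record per block, so no space is wasted by shrinking.
```

This suggests that for this record size the choice of bucket size is purely an I/O-size trade-off, not a packing trade-off.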

In parallel, our system manager comments that with the EVA in our GS80 he is interested in reducing the I/O sizes, because the EVA has less stress with small I/Os.

We can run many tests with different bucket sizes, starting from the current 12 and making it smaller each time.

But with only the tools the operating system provides (because the system manager doesn't let us use anything beyond what the operating system includes), like MONITOR RMS, MONITOR IO, MONITOR DISK and others,

I can measure the write I/O of the process by taking timings... but

how can I detect the optimal bucket size with system tools, when we run simulations under stress?

Thanks!!!

Wim Van den Wyngaert
Honored Contributor

Re: About performance with RMS relative files...


I know nothing but would try:
1) minimize the cluster size on disk
2) make the cluster size of the file the same
3) disable buffering for the process as much as possible (set rms)
4) disable EFC (set file/cache=no)
5) make sure that the extension quantity of the file is high (to avoid regular enlargements)

Of course only when reading is not the issue.

Maybe you could post the code that writes to the file.

Wim
Hein van den Heuvel
Honored Contributor

Re: About performance with RMS relative files...

>> We have a process in our installation that makes a lot of writes in ascending order in a relative rms file.

You may want to review whether the application actually uses any of the relative file semantics. Does it 'delete' records, and/or test whether a record it reads existed or not? You may find that a simple fixed-length record file, or a block-mode access file (SYS$READ, SYS$WRITE), would do just fine with less overhead and more flexibility, allowing readers to use large chunks.

>> The size of the record is 500, less than 1 block.

That's not too important, but does suggest you could go down all the way to 1 block buckets.

Is there CLUSTER level sharing of the file?

>> The bucket size, actually, is 12 blocks,

That's not excessive it seems.

>> We want to speed up writings.....We want more writing per second...

Smaller buckets will help a little, but very little. What rates are we currently talking about?

>> I know that from the point of view of a writer, and with deferred write not suitable for our application (integrity is critical),

Yeah yeah, that's what they all say.
But you may find that you can defer writes to some degree. For example, think about what Oracle does: on the first commit it triggers a write IO. All commits coming in while that IO is active are grouped together, and when the first IO completes, all accumulated commits are executed with a single further IO.
You can do something very similar very easily.
1) Try using RMS Async IO (RAB$V_ASY)
2) Use deferred write. Set a 10-millisecond timer on the first IO when no timer is active. If a timer is already armed, just buffer the put. Commit when the timer expires.
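The grouping idea can be simulated in a few lines, independent of RMS. This is a toy Python sketch of the "group commit" pattern described above, not VMS code; the class and method names are invented for illustration, and the second flush is simplified (a real implementation would treat it as another in-flight IO):

```python
# Toy simulation of group commit: the first record triggers an IO; records
# arriving while that IO is in flight are batched and flushed together in
# ONE further IO when it completes.
class GroupCommitLog:
    def __init__(self):
        self.io_in_flight = False
        self.pending = []      # records waiting behind the in-flight IO
        self.io_count = 0      # physical IOs issued
        self.written = []      # what reached "disk"

    def put(self, record):
        if not self.io_in_flight:
            # First commit: issue an IO for this record alone.
            self.io_in_flight = True
            self.io_count += 1
            self.written.append(record)
        else:
            # An IO is outstanding: just accumulate.
            self.pending.append(record)

    def io_complete(self):
        # Outstanding IO finished: flush everything accumulated in one IO.
        self.io_in_flight = False
        if self.pending:
            self.io_count += 1
            self.written.extend(self.pending)
            self.pending = []

log = GroupCommitLog()
for r in range(5):
    log.put(r)        # record 0 starts an IO; 1..4 accumulate behind it
log.io_complete()     # records 1..4 go out together
print(log.io_count)   # → 2 (two IOs carried five records)
```

The point is the ratio: under load, many logical commits ride on each physical IO, without giving up the guarantee that a completed IO is on disk.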

Depending on your needs, at some point something gotta give. Pick:
- speed,
- wallet
-- fastest controllers possible in use?
-- buy a solid state disk?
- integrity.

>> smaller bucket sizes are reasonable, and we want to reduce the bucket size of the file, which has been 12 for years...

So try it.

>> In parallel, the system manager comments that with the EVA in our GS80, he is interested in reducing the I/O sizes, because the EVA has less stress with small I/Os.

Sure.


>> We can run many tests with different bucket sizes, starting from the current 12 and making it smaller each time.

Yes. You could have written a test in less time than it took me to reply to this.

There is nothing like just trying on your system, with your exact cpu, memory, cables,...

>> But with only the tools the operating system provides (because the system manager doesn't let us use anything beyond what the operating system includes),

Fire his sorry ass. That person is a risk to your environment. IMHO system managers are SERVANTS, not rulers.

>> like MONITOR RMS, MONITOR IO, MONITOR DISK and others,

SDA XFC TRACE.
LD IO TRACE

>> I can measure the write I/O of the process by taking timings... but

Yes, and that is the only real way.

>> How can I detect the optimal bucket size with system tools, when we run simulations under stress?

The shortest time to write N (1000?) ascending blocks to a pre-allocated, contiguous file repeated M (1000?) times. That's all.

I suspect you will find 1 is the best, but with very little difference, possibly not enough difference to hurt the readers with.

Keep an eye on CPU time. With OpenVMS 8.2 and better you can instrument your test program with GETJPI for USER, EXEC and KERNEL mode CPU time usage. (I wrote simple 'my-timer' functions for that.)
Those CPU stats would be very interesting to see alongside of the elapsed times!
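The shape of that test can be sketched in ordinary Python (on VMS you would use RMS $PUT and $GETJPI instead; here `os.times()` stands in for the CPU-mode counters, and the one-bucket-written-per-record behaviour is an assumption that models deferred write being off):

```python
# Rough shape of the benchmark: write N records into a pre-allocated file,
# where every "$PUT" rewrites the whole current bucket (deferred write OFF),
# so IO size = bucket size and IO count = record count. Illustrative only.
import os
import time
import tempfile

def bench(bucket_blocks, n_records=1000, record_bytes=500):
    bucket_bytes = bucket_blocks * 512
    recs_per_bucket = max(1, bucket_bytes // record_bytes)
    buf = bytearray(bucket_bytes)          # one bucket's worth of data
    fd, path = tempfile.mkstemp()
    try:
        # "Pre-allocate, contiguous": size the file up front.
        os.ftruncate(fd, ((n_records // recs_per_bucket) + 1) * bucket_bytes)
        cpu0 = sum(os.times()[:2])         # user + system CPU so far
        t0 = time.perf_counter()
        for i in range(n_records):
            bucket_no = i // recs_per_bucket
            os.lseek(fd, bucket_no * bucket_bytes, os.SEEK_SET)
            os.write(fd, buf)              # rewrite the whole bucket
        os.fsync(fd)                       # force to media, no deferral
        return (time.perf_counter() - t0, sum(os.times()[:2]) - cpu0)
    finally:
        os.close(fd)
        os.unlink(path)

for bkt in (12, 4, 1):
    elapsed, cpu = bench(bkt)
    print(f"bucket={bkt:2d}  elapsed={elapsed:.4f}s  cpu={cpu:.4f}s")
```

Reporting elapsed and CPU time side by side, as suggested, shows whether smaller buckets save you transfer time or just shift the cost into per-IO overhead.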

What rates are being obtained now?
Where do you need to be?

Hope this helps some,
Hein van den Heuvel (at gmail dot com)
HvdH Performance Consulting
Wim Van den Wyngaert
Honored Contributor

Re: About performance with RMS relative files...

I sure know nothing. Replace 1 and 2 by

1) minimize the bucket size of the file

Wim

Hein van den Heuvel
Honored Contributor

Re: About performance with RMS relative files...

Ah yes Wim, thanks for reminding us.

If there is any concern about write speed, then that output file had better be totally pre-allocated and contiguous as well.

Bob did not mention this, but I suspect that is in place. It had better be, as it is a free first step.

With full contiguous pre-allocation in place, as it should be, the cluster size and extend quantities become totally irrelevant.

Cheers,
Hein.
Hein van den Heuvel
Honored Contributor

Re: About performance with RMS relative files...

Ah, cute timing....

Wim wrote>> I sure know nothing.

And Hein replied>> Ah yes Wim, thanks for reminding us.

But that was in reference to the earlier extend/clustersize reply!

:-)

:-)

Hein.




Wim Van den Wyngaert
Honored Contributor

Re: About performance with RMS relative files...

Hein will correct me if I'm wrong, but you could use variable-length records if the average record length is substantially lower than 500.

Also, if the file is very fragmented and you have concurrent readers, you could get window turns (or whatever it's called nowadays) that delay the file writes. A fragmented file could also cause many disk head moves.

Wim (rms-ing again after 20 years, rust never sleeps)

Hoff
Honored Contributor

Re: About performance with RMS relative files...

Writes per second are limited by the available bandwidth from the host out to the media. If you need "faster", you need more aggressive caching, fewer (and larger) I/O operations, simpler I/O, and/or faster hardware. Or you shard the application processing.

As for the hardware, the AlphaServer GS80 is ancient gear, and a mid-range Integrity will very likely provide better performance. I might well look as low as an Integrity rx2660 or Integrity rx3600 as a replacement box, or one of them blade-thingies HP is so fond of.

You indicate this I/O is lowercase-i integrity-critical. How many file updates can you drop? What are the application failure scenarios? What sort of journaling and archival processes are you using?

Here, I'd look at whether it is feasible to implement the mirroring of your data within the application. One more simplistic path (trading off simplicity for speed) that's reliable on failure (for your transaction log), and a second path that's cached and fast and shared (your production file).

And you mention "installation". That's ambiguous. Is this a one-time file I/O sequence that happens when your application is loaded, or a reference to an ongoing matter within your environment?

I might well prototype direct virtual I/O, too. But RMS does pretty well here, as a rule. (I'm generally loath to rewrite and replicate the features of a file system or a database, as that tends to turn into a maintenance problem.)

And disk fragmentation and other system and disk tuning matters are on the table here, too. But I'll assume that investigation is either underway, or has not been particularly fruitful. (And once the low-hanging fruit has been picked, faster hardware is usually a better choice when cost is a factor.)

As for sharding, that's analogous to an application-level form of disk I/O striping, where the application load is split across multiple servers and/or multiple storage controllers. This requires you to have some sort of cleft or mechanism where you can split and route the I/O. With disk striping, the numeric block address determines which physical spindle the block ends up on within a stripe-set. If you're working with tag-value pairs in your relative file, this could be where you route the first gazillion tags to one server, the second gazillion to another, and so on.
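The routing mechanism can be as simple as a key-range lookup. A hypothetical sketch (server names and range boundaries are invented for illustration):

```python
# Hypothetical key-range sharding: route each record number to one of
# several writer "servers" so the write load is split. The shard names
# and range boundaries below are invented for illustration.
SHARDS = [
    ("server_a", 0, 1_000_000),          # records 0 .. 999_999
    ("server_b", 1_000_000, 2_000_000),  # records 1_000_000 .. 1_999_999
    ("server_c", 2_000_000, 3_000_000),  # records 2_000_000 .. 2_999_999
]

def shard_for(record_number):
    """Return the shard that owns this ascending record number."""
    for name, lo, hi in SHARDS:
        if lo <= record_number < hi:
            return name
    raise ValueError(f"record {record_number} outside all shard ranges")

print(shard_for(42))          # → server_a
print(shard_for(1_500_000))   # → server_b
```

Because the writes arrive in ascending order, note that a pure range split sends all current traffic to one shard at a time; a modulo or hash split would spread simultaneous load more evenly, at the cost of scattering the ascending order.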

Though one interpretation of the comments from the system manager here is as a roadblock, the other is that you're trying this application testing on a production server. Now whether production monitoring is in place (and it is often best to have that available), that's another discussion. And moving toward more and smaller I/Os produces more stress on an I/O system, in my experience. Not less. And regardless, there do appear to be some staffing-level stress points here that need to be addressed. (It's distinctly possible that this ITRC thread will eventually come to the attention of the system manager, for instance.)

Bob CI
Advisor

Re: About performance with RMS relative files...

OK! I understand all your suggestions...

We don't use journaling, to speed up the writes, and the application takes care of recovering the situation after a crash, because these writes are the result of an executed transaction that builds these messages (along with other messages toward mailboxes, databases, etc.). These writes are generated again if necessary.

Every morning we begin with an empty file, and the file receives several million records per day. We use fixed record format.

Every day the file gets enough space allocated for the whole day, via an optimized FDL with a large allocation, and it rarely needs to extend...

Really, the only thing we want to know is whether, if we decrease the bucket size of this relative file from 12 to 4 for example, our write rate to the EVA will suffer.

We expect that the write rate will not suffer, and perhaps will even increase, but we would like to see and confirm this with the tools the operating system offers, if possible, apart from the statistical reports from our own measurements.

Thanks in advance!