Operating System - OpenVMS

OpenVMS Shadowing and Disk Cache




We are starting to implement OpenVMS Shadowing here.

The targeted storage is two EVA8100s on two different sites, one on each site.

We currently use one EVA5000 on one site and an EVA3000 on the other site, using Continuous Access to replicate the data from the EVA5000 to the EVA3000.

We are in the data-migration phase, so we have created shadow sets that mount physical disks from the EVA5000 and the two EVA8100s.

After the migration we will use only the two EVA8100s, with Shadowing for data replication.

But here is the problem:

1) The elapsed times of batch jobs are longer than before the implementation of Shadowing.

2) Also, the database disks are mounted without cache, because the databases have their own cache!...

Questions:

1) With OpenVMS Shadowing, is it better to mount all the disks with cache?

2) Does OpenVMS Shadowing take advantage of the disk cache?

Thank you for your attention.

Yves H.
Honored Contributor

Re: OpenVMS Shadowing and Disk Cache

The usual recommendation with these is to try some combinations of caching, and (if you're going to spend some time investigating and tailoring this) to look specifically at the application-level code and at the process-level operations.

As with databases and their targeted caching algorithms (and the classic "faster hardware" solution), it is generally best to target your tuning.

1: That is expected, particularly with writes; shadowing is not intended as a performance feature, it is a reliability feature. Writes across multiple disks and/or across remote links will be slower than writes that aren't. Reads are served from the first available source.

2: Tuned, tailored, intelligent I/O caches tend to be better at caching than a generic block-level cache can be. There are cases where a generic cache is fast, but if the patterns appear "random" from the viewpoint of (and with the information available to) the host controller (not truly random, just occurring in a fashion a controller-level cache algorithm might not predict), then higher-level caching can help.

1: Things are usually better with caching enabled; memory is inherently faster than controller cache, which is faster than outboard cache, which is faster than disk cache. Storage is a hierarchy of relative costs and relative speeds, of course.

2: OpenVMS Host-based Volume Shadowing operates "below" the OpenVMS RMS and XFC caches, and above the level of the host controller and outboard controller and disk-level caches.
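That hierarchy can be sketched as a toy model: a read is satisfied at the first level that holds the block, so the hit rates at the fast levels dominate the effective latency. All latency figures below are illustrative assumptions, not measured XFC or EVA numbers.

```python
# Toy model of the storage hierarchy described above: a read is served
# by the first level that holds the block, so effective latency is the
# hit-rate-weighted average across levels.  Latencies are illustrative
# assumptions (microseconds), not measurements.

# (level name, latency in microseconds), fastest to slowest
HIERARCHY = [
    ("host memory (RMS/XFC)", 5),
    ("controller cache", 300),
    ("physical disk", 8000),
]

def read_latency_us(hit_level):
    """Latency of a read satisfied at the given level index."""
    return HIERARCHY[hit_level][1]

def effective_latency_us(hit_rates):
    """Average read latency given per-level hit rates summing to 1.0."""
    return sum(rate * level[1] for rate, level in zip(hit_rates, HIERARCHY))

if __name__ == "__main__":
    # 80% XFC hits, 15% controller-cache hits, 5% of reads go to disk
    print(effective_latency_us([0.80, 0.15, 0.05]))
```

Even a modest shift of hits from disk to host memory moves the average sharply, which is why host-level caching decisions matter so much here.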

As for generic recommendations: load up T4, EVAMON and/or other such tools, and run some collection passes and subsequent investigations.

You might find some local weirdnesses here, for instance, or specific I/O patterns that might push you into a specific tuning direction.

Trusted Contributor

Re: OpenVMS Shadowing and Disk Cache

I am not surprised that this runs slower with Host-Based Shadowing (HBS). A write to an HBS set does not complete until all member I/Os have completed. A write handled by controller-based replication typically returns when the data is in the controller's cache. I am not familiar with the Continuous Access feature in the EVA, so it may vary a bit from what I said.

Using MOUNT/CACHE - the default - turns on file caching (and file-system metadata caching, but that's irrelevant here). With file caching, data written to files is kept in host memory. If a file data block is read again, it can be obtained quickly from host memory. Very efficient! However, file caches (XFC is the one) have certain restrictions on what is actually cached. There is a good chance that you will get better performance using MOUNT/CACHE, especially when your batch jobs do things like creating/deleting lots of files (the file-system metadata cache kicks in).

HBS does physical reads/writes to the presented disk drives. If the controller uses caches for these drives, then they are used; this is transparent to HBS. But in this case the data still has to cross the fibre. Any cache closer to the data producer/consumer performs better, so a cache within the application or database is best.

Which database product are we talking about?

Uwe Zessin
Honored Contributor

Re: OpenVMS Shadowing and Disk Cache

Hello Guenther

> I am not familiar with the Coninous Access feature in the EVA
> so it may vary a bit from what I said.

It works very similarly when using synchronous replication in Continuous Access:

- host sends data to the first EVA
- 1st EVA stores the data in its write-back cache
- 1st EVA sends a copy of the data to the second EVA
- 2nd EVA stores the data in its write-back cache and
-- sends a confirmation back to the 1st EVA
- 1st EVA now confirms the write to the host

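The sequence above amounts to a simple latency model: the host's write completes only after the local cache write, a full inter-site round trip, and the remote cache write. The timings below are illustrative assumptions in milliseconds, not EVA measurements.

```python
# Latency sketch of the synchronous Continuous Access write sequence
# listed above.  All timings are illustrative assumptions (ms).

def sync_ca_write_ms(local_cache_ms, link_one_way_ms, remote_cache_ms):
    """Host-visible latency of one synchronous replicated write."""
    return (
        local_cache_ms      # 1st EVA stores data in write-back cache
        + link_one_way_ms   # copy of the data sent to the 2nd EVA
        + remote_cache_ms   # 2nd EVA stores data in write-back cache
        + link_one_way_ms   # confirmation back to the 1st EVA
    )                       # only now is the host's write confirmed

if __name__ == "__main__":
    # assume 0.2 ms per cache write and 0.05 ms one-way over short fibre
    print(sync_ca_write_ms(0.2, 0.05, 0.2))
```

Note that the inter-site link is traversed twice per write, which is why synchronous replication latency grows with distance even when both caches are fast.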
Jon Pinkley
Honored Contributor

Re: OpenVMS Shadowing and Disk Cache


You didn't specify how far apart your sites are, or whether you are operating CA in synchronous or asynchronous update mode. With asynchronous mode you are in essence using the active (hopefully local) EVA as a cache, which is good for performance but risks loss of data if the site with the active EVA is destroyed. I am not sure whether CA supports "semi-synchronous" operation, where it operates in write-back cache mode with a very limited number of buffer credits between the two EVAs to limit the exposure. Uwe will know.

Take a look at Keith Parris's resources page http://www2.openvms.org/kparris/ There is a lot of good info there about VMS Clusters and Disaster Tolerance.

This covers your specific question: Continuous Access or Host-Based Volume Shadowing: Which Should I Choose for my OpenVMS Data Replication, and When?
http://www2.openvms.org/kparris/HPTF2006_HBVS_cf_CA.ppt [PPT]
http://www2.openvms.org/kparris/HPTF2006_HBVS_cf_CA.pdf [PDF]

In general, using something like CA provides better performance when the sites are separated by a long distance, and the asynch update mode is used, but is inferior from a failover perspective.

Latencies due to speed of light can be a performance killer when long distances are involved. Any locking that involves nodes at another site will suffer, and shadowing I/O can require 2 round trips, for example when merges occur.
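As a rough sketch of that speed-of-light cost: light in fibre travels at roughly two-thirds of c, about 200 km per millisecond. The distances below are illustrative, not taken from the thread.

```python
# Back-of-the-envelope inter-site propagation delay.  Light in optical
# fibre covers roughly 200 km per millisecond; distances here are
# illustrative examples.  A shadowing merge can need 2 round trips.

FIBRE_KM_PER_MS = 200.0  # approximate speed of light in fibre

def round_trip_ms(distance_km, round_trips=1):
    """Propagation delay only, ignoring switch and controller time."""
    return 2 * distance_km / FIBRE_KM_PER_MS * round_trips

if __name__ == "__main__":
    print(round_trip_ms(10))      # closely spaced sites, single round trip
    print(round_trip_ms(800, 2))  # long-haul sites with a 2-round-trip merge
```

At 10 km the propagation cost is negligible next to controller latency; at hundreds of kilometres, and doubled for merge traffic, it dominates.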

Here's a usenet thread discussing some of the issues: "VMSclusters and data replication Options"


it depends

Re: OpenVMS Shadowing and Disk Cache

Hello the World!

The 2 sites are within 10 km of each other, and we use four 4 Gbit/s links, 2 links on each of 2 fabrics.

Before using HBVS we were using Continuous Access between the 2 sites. The EVA5000 did the replication to the EVA3000, in synchronous mode.

Thus, an I/O had to be written (into cache) on both the EVA5000 and the EVA3000 before completing. HBVS writes the data in parallel to the 2 EVA8100s, whereas Continuous Access did it serially!
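That serial-versus-parallel difference can be sketched numerically. The member-write timings below are illustrative assumptions in milliseconds, not EVA measurements.

```python
# Sketch of the completion-time difference between serial replication
# (Continuous Access style) and parallel member writes (HBVS style).
# Timings are illustrative assumptions in milliseconds.

def serial_write_ms(local_ms, remote_ms):
    """CA-style: write locally, then replicate to the remote array."""
    return local_ms + remote_ms

def parallel_write_ms(local_ms, remote_ms):
    """HBVS-style: both member writes issued at once; wait for slowest."""
    return max(local_ms, remote_ms)

if __name__ == "__main__":
    print(serial_write_ms(0.3, 0.4))    # serial: roughly the sum
    print(parallel_write_ms(0.3, 0.4))  # parallel: the slower member
```

So with comparable members, parallel shadowed writes should cost about the slower member's latency rather than the sum, which supports the expectation that the new configuration need not be slower than synchronous CA was.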

Also, each EVA8100 has 96 disks (450 GB/15K rpm) in one group, versus the EVA5000's 80 disks (72 GB/15K rpm) in 2 groups of 40 and the EVA3000's 40 disks in one group.

The 2 EVA8100s should outperform the EVA5000 and the EVA3000.

We are using Ingres (80%) and Oracle (20%). The slowdown is in batch jobs using Ingres.

We think it will be helpful to mount the database disks with cache (MOUNT/CACHE)!...

About the documentation, we found on this site the "Cookbook of Performance Slowdown, VAX and Alpha, V6.0 and above".

Before implementing this solution we consulted HP but, as I can see, every site is different and has its own way of doing the things it has to do...

It is helpful reading your answers; it helps us get new ideas or see the problem in another way.

Thanks to all (merci à tous).

Yves H.

Re: OpenVMS Shadowing and Disk Cache


After several tests in the laboratory, and after consulting "gurus", we implemented OpenVMS Shadowing successfully with HBMM enabled:

1) All disks are mounted with cache;
2) 2 specific parameters have a big impact on performance when HBMM is enabled;
2.1) WBM_MSG_UPPER, whose default value with OpenVMS V8.3 is 20, we raised above 300;
2.2) WBM_MSG_LOWER, whose default value with OpenVMS V8.3 is 10, we raised above 30;
2.3) Those parameters control the way the HBMM messages are sent, one by one or in batches;
2.4) The performance gain is around 50%;
2.5) The performance also depends on the server and the LAN used for SCS communication.
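The switching these two thresholds imply can be modelled with simple hysteresis: stay in low-latency single-message mode while the per-interval message rate is at or below the upper threshold, drop into buffered mode when it exceeds it, and return to single-message mode only when the rate falls below the lower threshold. This is a simplified sketch of the behaviour described in the thread, not the actual OpenVMS algorithm.

```python
# Hedged sketch of the WBM_MSG_UPPER / WBM_MSG_LOWER hysteresis: a
# simplified model of the mode switching discussed in the thread, not
# the real OpenVMS implementation.

def wbm_modes(rates, upper, lower):
    """Return the message mode chosen for each interval's message rate."""
    mode, history = "single", []
    for rate in rates:
        if mode == "single" and rate > upper:
            mode = "buffered"      # too busy: start batching messages
        elif mode == "buffered" and rate < lower:
            mode = "single"        # quiet again: back to low latency
        history.append(mode)
    return history

if __name__ == "__main__":
    burst = [5, 25, 15, 5]  # messages per interval, with one burst
    # V8.3 defaults (upper=20, lower=10): the burst flips to buffered mode
    print(wbm_modes(burst, upper=20, lower=10))
    # raised thresholds (upper=300, lower=30): stays in single-message mode
    print(wbm_modes(burst, upper=300, lower=30))
```

Under this model, raising WBM_MSG_UPPER keeps the system in single-message mode through bursts that would have triggered batching at the defaults, consistent with the observed gain.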

Yves H.
Jon Pinkley
Honored Contributor

Re: OpenVMS Shadowing and Disk Cache


Thanks for the update.

Can you please tell us what the performance criterion was? I am assuming it was elapsed time to do a specific job.

The WBM_MSG* parameters affect the mode in which write bitmap update messages are sent from the master to the remote nodes. Increasing the upper limit will tend to keep the messages operating in single-message mode more frequently. That mode has lower latency but higher overhead. On a fast processor with a high-speed SCS interconnect, that may be the preferred mode to operate in. The defaults are meant to accommodate any system, from a lowly AS200 with a 10 Mb Ethernet SCS connection on up.

Another issue is that WBM_MSG_INT is in millisecond units, but (at least on 7.3-2) the minimum value it can be set to is 10, even though the hwclk interval is less. When operating in buffered-message mode, the messages are collected for a specified interval and then sent in one SCS message. This is "good" from an overhead standpoint, but can add significant latency. I think engineering should consider allowing 1 ms as the lowest value for WBM_MSG_INT, as 10ms is too long for current processors.
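The latency cost of buffered-message mode can be illustrated with simple arithmetic: if updates arrive uniformly across the collection interval, each waits half the interval on average before its SCS message is sent, so the 10 ms floor on WBM_MSG_INT can add far more delay than a fast interconnect's own round trip. The figures are assumptions for illustration.

```python
# Illustrative arithmetic for the WBM_MSG_INT point above: in buffered
# mode an update waits, on average, half the collection interval before
# its SCS message goes out.  Values are assumptions, not measurements.

def avg_buffered_delay_ms(wbm_msg_int_ms):
    """Mean queueing delay if arrivals are uniform across the interval."""
    return wbm_msg_int_ms / 2.0

if __name__ == "__main__":
    print(avg_buffered_delay_ms(10))  # current 10 ms minimum interval
    print(avg_buffered_delay_ms(1))   # proposed 1 ms minimum interval
```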

Perhaps AUTOGEN should consider processor and interconnect speed and set these parameters accordingly as well.

it depends

Re: OpenVMS Shadowing and Disk Cache

Yes Jon,
the performance criterion was elapsed time to do a specific job.

Trusted Contributor

Re: OpenVMS Shadowing and Disk Cache

You can get the best use of the I/O cache by mounting the database disks with MOUNT/NOCACHE, since the database maintains its own cache.

Shadowing significantly speeds up reads.

Unfortunately, /NOCACHE also turns off other types of caching. You don't want to cache already-cached data!

Bob Comarow