Operating System - OpenVMS
Mounting a remote Volume shadow set member

 
SOLVED
SANJAY MUNDHRA
Valued Contributor

Mounting a remote Volume shadow set member

Hi Folks,

I have a 5-node VMS cluster, each node booting from its local disks, which are volume shadowed. One of the nodes will be used as a backup server. What would be the best way to mount the "local" shadow member disks of all the cluster systems, which are currently visible via MSCP, on the backup server without impacting the data?

thanks in advance.

Cheers,
Sanjay
8 REPLIES
Hoff
Honored Contributor
Solution

Re: Mounting a remote Volume shadow set member

You would mount the shadowset member volumes on the local host, creating an instantiation of the shadowset virtual unit on your backup server host.

You will need to ensure each of the shadowset disk volumes has a unique volume label. This is probably already the case here, though.
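For one node's system shadowset, the MOUNT on the backup node might look roughly like this (a sketch with hypothetical names: DSA101: as the virtual unit, $1$DGA101: and $1$DGA201: as the MSCP-served members, NODEA_SYS as the label):

$ ! Create a local instance of node A's shadowset virtual unit,
$ ! using the MSCP-served member volumes
$ MOUNT/SYSTEM DSA101: /SHADOW=($1$DGA101:,$1$DGA201:) NODEA_SYS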

Without additional details or a rationale for the current cluster configuration, this configuration does look a little odd. Too many system disks. Or too little shared storage. Or both. Maintaining multiple and non-skewed system disks can be an effort, as I'm sure you're finding.

I'd probably drop down to two system disks (for each architecture present), intending this to allow for rolling upgrades.

If you do not want to have *any* potential exposure, extract the node from the cluster. Having a backup node configured in the production cluster, and having it used for tasks such as testing or development, can be hazardous. Rogue application operations or operator errors can potentially affect the production environment when clustered with same.

Stephen Hoffman
HoffmanLabs LLC
SANJAY MUNDHRA
Valued Contributor

Re: Mounting a remote Volume shadow set member

Hi Hoff,

Thanks very much for your quick response.
I remember meeting you briefly in Colorado in Jan '07 (VMS user conference on a Friday); I had come there for a VMS mentorship program with GSC. And I also met you last year during your trip to Singapore along with Sue.

For this case, I have attached more details in the Excel file. Is it possible to share the local volume shadow device (i.e. DSAxxx) across the cluster?

Thanks and Regds,
Sanjay Mundhra
Singapore
Robert Brooks_1
Honored Contributor

Re: Mounting a remote Volume shadow set member

As Steve said, Host-based Volume Shadowing virtual units are not shared across a cluster.

You need to create the virtual device local to each system; only the member units are MSCP-served. Virtual units cannot be served.

The Host-based Volume Shadowing manual describes the above in good detail.

While you are learning about shadowing, it would be a good idea to understand how the use of host-based minimerge (HBMM) with shadowing can make your life easier.
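As a hedged sketch (check the HBVS manual for the exact qualifier syntax on your OpenVMS version; the node and device names are hypothetical), enabling an HBMM policy on a shadowset might look like:

$ ! Keep write bitmaps on up to two of the listed nodes, so a crash
$ ! triggers a minimerge instead of a full merge
$ SET SHADOW DSA101: /POLICY=HBMM=((MASTER_LIST=(NODEA,NODEB),COUNT=2))
$ SHOW SHADOW DSA101:   ! verify the policy and current merge state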

-- Rob
Shardha
Valued Contributor

Re: Mounting a remote Volume shadow set member

Dear Sanjay,

Just a note on volume shadowing: from 7.3 onwards, HP introduced the minimerge, which takes away a lot of overhead that could not be avoided in prior versions of OVMS, where a merge copy after a crash could take ages to complete.

Volume shadowing is a very good facility from OVMS.
Shardha
SANJAY MUNDHRA
Valued Contributor

Re: Mounting a remote Volume shadow set member

Hi everybody,

Thanks for the quick responses. I am thinking of doing the following (as per Keith Parris' recommendation):

1) Share the local disks on each node using MSCP, and assign the MSCP traffic to the backup LAN Gigabit controller.

2) From the backup server node I should now be able to do a backup of each node's local shadowset this way:

$ MOUNT/SYSTEM DSA40: label
$ BACKUP/IMAGE/IGNORE=INTERLOCK DSA40: tape-device:saveset.bck/SAVE
$ DISMOUNT DSA40:
And do the same for each of the other shadowsets.
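Step 1 would presumably be driven by the MSCP SYSGEN parameters on each node, roughly as below in MODPARAMS.DAT (a sketch; verify the MSCP_SERVE_ALL values against the documentation for your version):

MSCP_LOAD = 1        ! load the MSCP server at boot
MSCP_SERVE_ALL = 2   ! serve the locally attached disks to the cluster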

Regds,
Sanjay
Hoff
Honored Contributor

Re: Mounting a remote Volume shadow set member

What you'll want to do here is split out a member volume from the HBVS shadowset -- on all nodes -- and then use something like MOUNT /NOWRITE /OVER=SHAD to haul the disk on-line, and use BACKUP /IMAGE to replicate the contents of the member volume out to archival storage.
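That sequence might look roughly like this for one shadowset (hypothetical names: DSA101: as the virtual unit, $1$DGA201: as the member being split out, NODEA_SYS as the label):

$ ! Split one member out of the shadowset
$ DISMOUNT $1$DGA201:
$ ! Mount the split-out member read-only, overriding the shadow
$ ! membership check so it mounts standalone
$ MOUNT/SYSTEM/NOWRITE/OVERRIDE=SHADOW_MEMBERSHIP $1$DGA201: NODEA_SYS
$ BACKUP/IMAGE $1$DGA201: tape:nodea_sys.bck/SAVE_SET
$ DISMOUNT $1$DGA201:
$ ! Return the member to the shadowset (a copy operation follows)
$ MOUNT/SYSTEM DSA101: /SHADOW=($1$DGA201:) NODEA_SYS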

If your data is sufficiently valuable and you can have a pool of disks available, you can yank a member volume for archival purposes, spin in another spare volume and run a full copy to it, and rotate through a set of disks containing near-line copies. You can periodically then copy one of these "spare'd" volumes to longer-term archival storage as appropriate.

Or present a physical disk to HBVS from the controller, and use a controller-level operation to pull out a copy "underneath" the physical unit visible to HBVS. (And I'd tend to avoid RAID-5 here given the price of disks these days, as '5 has some very nasty failure and recovery processing. Details available upon request.)

Another approach is to see if you can snapshot the contents at the controller level, if you have that available. There are various controller-level replication options around.

If there are databases open and active on the target shadowset virtual units, most of (all of?) the databases really don't like having the disks yanked like this. From direct personal experience patching the pieces of Humpty Dumpty back together again, I know Rdb gets miffed; severely and massively cranky. RMU is your BFF here.

The SCACP tool can help you prioritize your path use. It'll help you keep all your paths available while sending SCS traffic where you want it, until/unless there's a failure.
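A hedged SCACP sketch (EWB0 is a hypothetical name for the backup-LAN Gigabit device; check the SCACP documentation for your version):

$ RUN SYS$SYSTEM:SCACP
SCACP> SET LAN_DEVICE EWB0 /PRIORITY=2   ! prefer this path for SCS traffic
SCACP> SHOW LAN_DEVICE                   ! verify per-device priorities
SCACP> EXIT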

[January 2007? Colorado? HP User Conference? I think you met my Doppelganger there; if you meet him again, could you have him call me? :-) November 2005 was my final round of roadtrips from back when I was working for HP.]
Anton van Ruitenbeek
Trusted Contributor

Re: Mounting a remote Volume shadow set member

Sanjay,

If you have the opportunity, the best option is to create an extra member of the shadowset on your backup system and remove that member during the backup, with minicopy enabled.
Do not use /OVERRIDE=SHADOW when mounting the shadow disk locally, because that forces a full copy when the disk is placed back into the original shadowset (and that is not what you want over the WAN).
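In DCL, the minicopy-based split and rejoin described above might look like this (hypothetical device names; /POLICY=MINICOPY is what lets the member rejoin without a full copy):

$ ! Drop one member for backup, recording a minicopy write bitmap
$ DISMOUNT/POLICY=MINICOPY $1$DGA301:
$ ! ... back up the split-out member ...
$ ! Rejoin: only blocks written since the split are copied back
$ MOUNT/SYSTEM DSA101: /SHADOW=($1$DGA301:) /POLICY=MINICOPY NODEA_SYS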
It's advisable to handle all the disks the same way. System disks are (from a system manager's point of view) the same as data disks (but make sure that you, as system manager, do keep them separate). So always mount all the disks cluster-wide.
Have you ever thought about creating one system disk for the whole cluster? It's much easier to manage a cluster from a single system disk, and if you're using HBVS it's no more or less secure.
It's also very advisable NOT to use controller-based mirroring or RAID solutions. Keep track of the devices from OpenVMS rather than through some obscure controller language.
I'm not quite aware of the hardware configuration you are using now. All SCSI, or SAN, or a mix?
If you're using SCSI, maybe a SCSI cluster (per two nodes) is also an option.
We have a multi-site OpenVMS cluster with multiple nodes per site. All the cluster members boot as satellites and NOT from the directly attached SAN device.
The backup is done by dismounting one member of each disk (we have a three-member shadowset: one member on each site, plus one extra on the backup site) and backing up that member exclusively.
We regularly tested whether the backups (of all disks) were restorable and usable. The test systems (where we did the restores) came up fine, so the method works.
Now we have a cluster uptime of 10 years, and the oldest screw is about 4 years old. The software is up to date to within a year.
This makes it a proven concept.

AvR
NL: Meten is weten, maar je moet weten hoe te meten! - UK: Measurement is knowledge, but you need to know how to measure!
SANJAY MUNDHRA
Valued Contributor

Re: Mounting a remote Volume shadow set member

Thanks very much for the responses. With that, I will close this thread.