Operating System - OpenVMS > Automatic mounting of HBVS disks in systartup_vms.com
11-24-2007 01:15 PM
What is the consensus, if any, on automatically mounting DSAnnn: devices in systartup_vms.com? I am specifically talking about data disks, that is, disks holding no information important to the system, used purely and simply to store user data.
While playing around with LD the other day, I managed to make a freshly initialised logical drive the master and the drive holding the data the copy target. Not good. (This was while creating a NEW shadow set.)
I have read extensively on the issue of synchronising the disks, but there are so many scenarios that can occur that I wonder whether an automatic mount of data disks is a "foolproof" way to get the system running after a reboot.
A scenario:
One server, the primary server, nodeA, has an HBVS volume called DSA1:, made up of $3$dka100 and $4$dka100. 3 is the allocation class of nodeA, 4 is that of nodeB.
Suppose the device $3$dka100 drops out of the volume. I am supposing some mount verification happens anyway, and then writing of data resumes to DSA1:, which now has only $4$dka100 as a member, residing on the other node, nodeB.
If nodeB then goes down and $3$dka100 comes back online, is the data corrupted?
I realise this is all covered in chapter 6 of "Volume Shadowing for OpenVMS", but I have learnt the hard way that manuals are often vague, only partially correct until you read several other manuals, or just plain wrong.
So, in summary, can I trust implicitly that the shadow set generation number will protect my data from being wiped by an older member starting to copy over a newer member?
I hope I am not too vague in my description.
Regards,
Mark
11-24-2007 01:19 PM
Re: Automatic mounting of HBVS disks in systartup_vms.com
And how do I prevent the scenario of $3$dka100 becoming the only member of DSA1: after leaving the volume, and then having new data written onto its older contents (the newer data being on $4$dka100)?
11-24-2007 04:39 PM
Re: Automatic mounting of HBVS disks in systartup_vms.com
In general, yes, the generation number is the way that the shadowing driver determines in which direction a copy goes.
In your setup, it appears as though you've got two (or more) nodes, with the storage local to a single node and MSCP-served to the other nodes. If that's not correct, then I think we'll need a bit more detail w.r.t. your configuration.
Still, MSCP or not, the shadowing driver will not clobber data. I'm not going to state, however, that it absolutely, positively cannot happen. Bugs do happen, and the shadowing driver is likely the biggest and most complex of all the VMS device drivers.
End-user error, however, can lead to "bad things" happening. I cannot guess what went wrong with your LD-created device experiment.
-- Rob
11-24-2007 06:40 PM
Re: Automatic mounting of HBVS disks in systartup_vms.com
If you have the members of a shadow set connected to two different systems, not to a shared I/O bus (I call this a poor man's shadowing configuration), and you are relying on MSCP-serving the members to the other node, it may be a good idea to:
- use MOUNT/NOCOPY to prevent the shadow set from being mounted if shadowing thinks a shadow copy would be required,
or
- explicitly check that both members exist AND that their hosts are available, using F$GETDVI("disk","HOST_AVAIL"), before issuing the MOUNT command.
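A minimal DCL sketch of that pre-mount check, using the device names from this thread (the label DATA1 is just a placeholder; adapt to your own procedures):
$ MEMBER1 = "$3$DKA100:"
$ MEMBER2 = "$4$DKA100:"
$ IF F$GETDVI(MEMBER1,"EXISTS") .AND. F$GETDVI(MEMBER2,"EXISTS")
$ THEN
$   IF F$GETDVI(MEMBER1,"HOST_AVAIL") .AND. F$GETDVI(MEMBER2,"HOST_AVAIL")
$   THEN
$     MOUNT/SYSTEM DSA1: /SHADOW=('MEMBER1','MEMBER2') DATA1
$   ELSE
$     WRITE SYS$OUTPUT "Serving host(s) not available - not mounting DSA1:"
$   ENDIF
$ ELSE
$   WRITE SYS$OUTPUT "Not all members visible - not mounting DSA1:"
$ ENDIF
If either test fails, the procedure simply skips the mount, and you can mount manually (with /CONFIRM) once both members and their hosts are back.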
While experimenting with LD, did you INIT the new volume with another label? Or did you do:
$ MOUNT/SHAD=(LDA1:,LDA2:) DSA1: label
$ DISM DSA1:
$ INIT LDA1: label
$ MOUNT/SHAD=(LDA1:,LDA2:) DSA1: label
It never happened to me that a shadow copy went in the wrong direction. When mounting shadow members manually, I always use /CONFIRM and look at the output from MOUNT before I answer "Y" ...
Volker.
11-24-2007 09:43 PM
Re: Automatic mounting of HBVS disks in systartup_vms.com
This "poor man's shadowing configuration" is exactly what we've got here.
So, using mount/nocopy would, as expected, not allow the copy to proceed. I thought of this. I could then use analyze/system to determine the master, and then what?
What if it thinks it's the master when in fact the other disk is? Or can this not happen, thanks to the generation number?
(PS: is there another way of seeing which is the master, i.e. a lexical function?)
With the LD, I did, I think (I did so many runs to test various things), init/erase lda2: data99
Then I did mount dsa99: /shad=(lda1:,lda2:) data99
lda1: already existed, with data on it.
Then I watched the contents of lda1: being nulled out.
I too use /confirm when manually mounting shadow sets. That is good advice for all.
Regards
Mark
11-24-2007 09:48 PM
Re: Automatic mounting of HBVS disks in systartup_vms.com
Two nodes, local storage; each contributes one disk to make up a two-device shadow set.
There are 3 shadow sets. Interconnect is NI.
The data is being backed up, so if there is corruption we can rebuild, but that is not my concern. Bugs also can't be helped. What I was trying to discern is the likelihood of corrupting data if one node begins a copy when the other is more up to date.
11-24-2007 11:18 PM
Solution
MOUNT/NOCOPY will not mount the shadow set DSAn: at all. This may not be what you want to do during startup, but it is a protection against starting automatic shadow copies during boot.
Determining 'who is shadow master' of a not-yet-mounted shadow set can only be done via tools like DISKBLOCK (from the OpenVMS Freeware disk), which can read and display/format the SCB (Storage Control Block) of a disk (when mounted foreign). And you would manually have to evaluate the same data that MOUNT evaluates during the MOUNT, provided it can access the SCBs of both potential members - that IS the critical point!
So you should re-write your mount procedures during startup so that an automatic mount only occurs if both members are visible and their hosts are available (i.e. cluster members and MSCP servers). Then let shadowing decide who the master is...
With the LD, I did, I think (I did so many runs to test various things), init/erase lda2: data99
Then I did mount dsa99: /shad=(lda1:,lda2:) data99
lda1: already existed, with data on it.
This is what I had expected. With INIT LDA2:, you wrote a new shadow generation date onto that disk. Shadowing will honor that date and - of course - copy from the NEW disk to the OLD one! If you had used another label for INIT LDA2:, this would not have happened.
Volker.
11-24-2007 11:22 PM
Re: Automatic mounting of HBVS disks in systartup_vms.com
To paraphrase Rob slightly, it is not clear precisely what happened during your experiment. To clarify the conversation, I would propose that you add the following intermediate step to your experiment and repeat it.
After initializing the dummy volume, and BEFORE mounting either volume into the shadow set, dump the shadowing control blocks on each volume in hexadecimal using DUMP (after, of course, MOUNTing each volume /FOREIGN/NOWRITE).
Using the DUMP of the shadowing structures, follow through the process described in the documentation.
Often, behavior that seems counterintuitive becomes far clearer, or at least comprehensible, when examined this way, rather than through simple experiments that do not yield the anticipated result.
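That intermediate step might look roughly like this in DCL (a sketch, not a recipe: on an ODS-2 volume the SCB is virtual block 1 of [000000]BITMAP.SYS, whose LBN must be located via the home block; with the volume mounted foreign you can only dump logical blocks, so here we dump the first few, which include the home block):
$ MOUNT/FOREIGN/NOWRITE LDA1:
$ DUMP LDA1: /BLOCKS=(START:0,COUNT:4)
$ DISMOUNT LDA1:
Repeat for LDA2:, and compare the shadowing-related fields of the two volumes against the structure descriptions in the documentation before mounting the shadow set.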
- Bob Gezelter, http://www.rlgsc.com
11-24-2007 11:34 PM
Re: Automatic mounting of HBVS disks in systartup_vms.com
It may be a wise idea (especially in your configuration) to INIT new disks that are to be brought into existing shadow sets with a label like TEST, and NOT with the same label as the existing shadow set. Then nothing bad can happen, even if your system crashes at the wrong time...
Volker.
11-25-2007 12:18 AM
Re: Automatic mounting of HBVS disks in systartup_vms.com
Another thing to keep in mind: with this 'poor man's shadowing configuration' you are effectively running something like a multi-site cluster. Strict rules need to be defined and followed in such a configuration to prevent data loss after (multiple) site failures. Your case is the easier one, as the cluster and I/O interconnects are the same, i.e. the LAN. Things get even more interesting when using Fibre Channel SANs.
If you're running a current version of OpenVMS (V7.3-2 or higher), you can also take advantage of the mini-copy feature to prevent full copies after a shutdown of one node - a shutdown which will break the existing shadow sets if it exceeds SHADOW_MBR_TMO seconds. It can be used like this:
When shutting down nodeB, use SYSMAN to issue the following commands on nodeA:
SYSMAN> SET ENV/NODE=nodeA
SYSMAN> DO DISM/POLICY=MINICOPY $4$DKA100
SYSMAN> EXIT
This creates a WBM (write bitmap) for $4$DKA100: on nodeA, which keeps track of further writes to the DSA1: shadow set from nodeA. Once you boot nodeB again and remount the DSA1: shadow set, $4$DKA100: can be brought back into the shadow set without a FULL copy. This saves a lot of time and resources.
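Re-adding the returned member can then be sketched like this (run where DSA1: is still mounted; DATA1 is a placeholder label, and /POLICY=MINICOPY tells MOUNT to use the write bitmap created at dismount time rather than a full copy):
$ MOUNT/SYSTEM DSA1: /SHADOW=$4$DKA100: DATA1 /POLICY=MINICOPY
If no valid bitmap exists (e.g. nodeA also rebooted in the meantime), a full copy is still required, so treat minicopy as an optimisation, not a guarantee.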
Volker.