New clustered shadowset
04-16-2007 07:22 AM
I'm about to create a new shadowset, but for the first time (for me), it will be across members in a cluster. I'm trying to get my brain around some of the concepts, think I've got it, but want to run a sanity check back against the collective intelligence of this forum.
The scenario is that I'll have a 3-node Integrity-based VMScluster running 8.2-1, with one drive per host dedicated to the shadowset; this 3-disk shadowset will hold the common cluster files, user directories, and other things.
To create the shadowset, I'll have all 3 nodes booted into the cluster, then will issue "Init /Shadow=($1$DKB200,$2$DKB200,$3$DKB200) Common /Erase" (note 3 different AlloClass values, plus the Erase switch). No big surprise there, I don't think.
Now, to mount the drives: it's easy when it's all on a single node... but what about on a cluster, where you don't know in what sequence a particular system and its disk will be added?
I think the correct command to put in each node's SyLogicals.com is: "Mount /System DSA10: /Shadow=($1$DKB200) Common /Include /Policy=MiniCopy=Optional" on the node with AlloClass = 1 ($2$ on node 2, etc.)
Now, make sure I understand how this is working, if you please: Since the full list of disks is included in the shadowset's metadata, I don't have to spec all drives, just the drive I'm adding on the currently booting node. This should take care of the case even when the entire cluster is booting, while the first host is waiting for a second to create quorum, if I understand things correctly. Also, I shouldn't spec /Cluster, because that would cause the shadowset to be remounted on the remote nodes -- or would it cause the mount to fail?
Now, in the SyShutdwn.com file, I add a "Dismount $1$DKB200/Policy=MiniCopy=Optional" on node 1, $2$ on 2, etc., in order to gracefully remove each host's local drive from the shadowset, right? I do NOT dismount the DSA drive, because that would dismount it on the other 2 cluster nodes, right? Or would it just break the shadowset by removing the local disk from its definition?
Any other tricks I should know about this new shadowset? What do I do to recover efficiently/effectively from an unexpected system crash? Where can I find documentation for HBMM on 8.2-1? It's included in the OS, but the Shadowing manual hasn't yet been updated to include it. (I've never used HBMM before.)
Thanks for any insight you might be able to offer!
Aaron
04-16-2007 08:26 AM
That will work out just fine,
BUT,
in view of potential future developments:
1) Stay away from $1$ and $2$, as those are now the defined "alloclasses" of SAN disks and tapes, respectively.
2) _DO_ take care to specify /LIMIT (up to 1 TByte) upon INIT! (or, if you use /CLUSTER < 8, then /LIMIT = 1/8 * clustersize * 1 TB)
This way, you allow for future online shadow set expansion.
And please review any and all INIT qualifiers in the manual or in HELP. Several of those merit consideration. I already mentioned /CLUSTER, but others may also prove beneficial.
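For concreteness, a sketch of the original INIT command with this advice applied (the alloclass values 3/4/5 and the bare /LIMIT are illustrative only; check HELP INITIALIZE /LIMIT for the value appropriate to your configuration):

```
$ ! Sketch: same INIT as in the question, with the suggested changes.
$ ! Alloclasses 3, 4, 5 avoid the reserved $1$/$2$ values;
$ ! /LIMIT reserves headroom for future online expansion.
$ INIT /SHADOW=($3$DKB200,$4$DKB200,$5$DKB200) COMMON /ERASE /LIMIT
```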
Success!
... and, have fun!
Proost.
Have one on me.
jpe
04-16-2007 08:28 AM
Re: New clustered shadowset
Upon rereading, I also noted your question on HBMM.
If you do not have any other info, download the V7.3-2 HBMM patch. It has quite extensive release notes.
Proost.
Have one on me,
jpe
04-16-2007 10:20 AM
Re: New clustered shadowset
Using the /CLUSTER qualifier on the mount won't cause a "remount"; there's no such thing. If the shadowset is already mounted, the operation essentially becomes a no-op.
On the other hand, you don't need to specify /CLUSTER if you are simply adding a member to a shadow set that is already mounted somewhere on the cluster. Simply adding the new member on any node will force the shadow set to reevaluate the membership and the other two nodes will automatically add the new member(s). In general, I avoid using /CLUSTER; MOUNT can tend to get "confused" sometimes, although some effort has been made to bring sanity to the /CLUSTER qualifier; I think it first showed up in V8.3, although it may have been V8.2.
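A sketch of that member add, using the device names from the original post (run it on any node where DSA10: is already mounted):

```
$ ! DSA10: is already mounted /SYSTEM somewhere in the cluster;
$ ! just add the new member from this node.
$ MOUNT /SYSTEM DSA10: /SHADOW=($1$DKB200) COMMON
$ ! The other nodes with DSA10: mounted reevaluate the membership
$ ! and pick up the new member automatically -- no /CLUSTER needed.
```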
For HBMM, the online help (with extensive examples!) is pretty good, although the stuff we included in the V7.3-2 HBMM kit will explain things in good detail.
The HBMM policy mechanism may look at first glance to be quite complicated, but for a one-site cluster with only a few nodes, your HBMM policy definitions should be quite simple.
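As a hedged illustration of how simple a one-site policy can be (syntax sketched from the SET SHADOW online help; verify against HELP SET SHADOW /POLICY before relying on it):

```
$ ! Sketch: one policy for the whole shadow set; MASTER_LIST=*
$ ! allows any cluster member to master the write bitmaps.
$ SET SHADOW DSA10: /POLICY=HBMM=((MASTER_LIST=*))
$ SHOW SHADOW DSA10:   ! confirm the policy is in effect
```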
-- Rob (part of the HBMM team)
04-16-2007 11:03 AM
Re: New clustered shadowset
How are the local disks presented to the other cluster members? Via MSCP?
04-16-2007 12:52 PM
Re: New clustered shadowset
>But on a cluster, where you don't know
>what sequence a particular system and its
>disk will be added?
Exactly! Shadowing in a cluster works best when storage is shared, like a SAN.
The big danger in mounting a three-member shadowset in this type of configuration is mounting the wrong (i.e., older) drive first and losing data.
Depending on your exact requirements, my preference is to use /POLICY=REQUIRE_MEMBERS - this means the shadowset will not mount unless all members are available. Also use /CLUSTER. This means the shadowset will be mounted by the LAST node up, and you will need to qualify your application startups to check that storage is available.
It also means you need to do something to force mounting in a disaster recovery scenario where a node has been lost, so that you can manually ensure the shadow members are mounted in the correct sequence - maybe use one of the USER SYSGEN parameters?
One step better is to abstract your disks away from physical devices. All application code references devices by logical name, you then build a procedure that knows the mapping from logical name to physical name and how to mount the devices, and any special logic involving host availability. There may be multiple logical devices mapped to a single physical device. Your application code then asks "get my storage", and there's a single place that can check if it's already mounted, knows how many members to mount, how long to wait and what policies to enforce. The same module can be responsible for reducing shadowsets for backup, handle recovery etc...
This approach makes it much easier to manage and maintain your storage, and, most importantly, protect yourself from mistakes.
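A minimal sketch of that abstraction layer, assuming the procedure name DISK_MOUNT.COM and the logical name APP_DATA are both hypothetical placeholders:

```
$ ! DISK_MOUNT.COM (hypothetical fragment): applications invoke
$ !   @DISK_MOUNT APP_DATA
$ ! and only this procedure knows the physical mapping and policy.
$ IF p1 .EQS. "APP_DATA"
$ THEN
$   ! Mount only if not already mounted somewhere on the cluster.
$   IF .NOT. f$getdvi("DSA10:","MNT") THEN -
      MOUNT /SYSTEM /CLUSTER DSA10: -
      /SHADOW=($1$DKB200,$2$DKB200,$3$DKB200) COMMON -
      /POLICY=REQUIRE_MEMBERS
$   DEFINE /SYSTEM APP_DATA DSA10:
$ ENDIF
```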
04-16-2007 01:13 PM
Re: New clustered shadowset
Dismounting the DSA volume would only dismount it on that node, unless you specify /CLUSTER.
Dismounting the member DKBnnn will remove this member from the shadowset cluster wide. The DSA volume would still be available to the node shutting down, via MSCP served from the remaining nodes (assuming you have MSCP enabled).
Be careful with what you are doing dismounting disks in SYSHUTDWN. Consider the cluster shutdown scenario, and whether your system still needs access to this disk during the shutdown process.
What are you planning on putting on this disk? SYSUAF? QMAN$MASTER.DAT, etc.?
Shadowsets with local disks presented via MSCP between cluster members can be a bit tricky to manage. In this configuration I tend to manage the mounting/dismounting manually.
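The two command forms, with their effects as described above (device names from the original post):

```
$ ! Removes the member from the shadow set -- effect is cluster-wide:
$ DISMOUNT $1$DKB200:
$ ! Dismounts the DSA volume on this node only (without /CLUSTER):
$ DISMOUNT DSA10:
```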
04-16-2007 09:18 PM
Re: New clustered shadowset
>>>
Now, in the SyShutdwn.com file, I add a "Dismount $1$DKB200/Policy=MiniCopy=Optional" on node 1, $2$ on 2, etc., in order to gracefully remove each host's local drive from the shadowset, right? I do NOT dismount the DSA drive, because that would dismount it on the other 2 cluster nodes, right? Or would it just break the shadowset by removing the local disk from its definition?
<<<
Any _MEMBER_ dismount reduces the shadow set by that member, and therefore operates inherently on ALL nodes that have it mounted.
And, you CANNOT dismount the last remaining member of a set!!
OTOH, if you want to dismount on just one node, then, on that node, you DISMOUNT the shadow SET.
In working with shadow sets, I find it helps a lot to start thinking in terms of DRIVES ( = members ) and VOLUMES ( = the data sets available to programs etc. ).
If you look back through any documentation (and the relevant commands), you will find that that distinction has "always" been consistently made, albeit silently.
hth
Proost.
Have one on me.
jpe
04-17-2007 02:08 AM
Re: New clustered shadowset
Here's a not-so-simple procedure for mounting/dismounting disks that I use on a couple of clusters running Oracle. Each system has its own disks, with 2 of them having some shared SCSI drives. All 3 system disks are separate, so this procedure is copied to each sys$manager directory.
Before full backups are done, Oracle gets shut down, this procedure gets called to remove the members of shadowsets that belong to the node running the backup job, and Oracle gets restarted.
There's a subprocedure referenced -- here's a copy of disk_sites.com
I tried to comment this fairly thoroughly -- if you have any questions let me know.
Robert
$!
$ set noon
$ call find_site 1
$ call find_site 2
$
$FIND_SITE: subroutine
$ set noon
$ site = p1
$ dev_context = 2*site + 1 + 2048       ! seed for the device scan
$
$DEVICE_LOOP:
$ next_dev = f$device("*:","DISK",,dev_context)
$ if next_dev .nes. ""
$ then
$   if f$verify() then $ show symbol next_dev
$   ! Match devices whose alloclass is this site's, e.g. "_$1$..."
$   if f$extract(0,4,next_dev) .eqs. "_$''site'$"
$   then
$     next_dev = next_dev - "_"
$     ! Tag shadow set members with their site value.
$     if f$getdvi(next_dev,"SHDW_MEMBER") then -
        set shadow/site='site' 'next_dev'
$   endif
$   goto DEVICE_LOOP
$ endif
$
$EXIT:
$ exit
$ENDSUBROUTINE
04-17-2007 02:29 AM
Re: New clustered shadowset
:-)
Robert