Community Home > Servers and Operating Systems > Operating Systems > Operating System - OpenVMS > Re: Cluster migration (system disk) on other stora...
05-03-2010 03:25 AM
Our VMS cluster has one system disk for all 4 nodes: Alpha servers (GS1280 and GS80), VMS version 8.2. Using storage tools we can clone the system disk without shutting down any node in the cluster. Is it possible to restart the nodes one by one from the new system disk on the other storage? For example: shut down 1 node of the 4 (the other 3 keep working on the old system disk), change BOOTDEF_DEV on that node, boot it from the new system disk, and so on. For a short time (~1.5-2 hours) the cluster would be running on different system disks.
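For reference, the per-node boot device change described above is done at the Alpha SRM console; a minimal sketch, assuming the cloned system disk shows up as DGA200 (the device name is illustrative):

```
>>> SHOW BOOTDEF_DEV          ! check the current default boot device
>>> SET BOOTDEF_DEV dga200    ! point this node at the cloned system disk
>>> BOOT
```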
Solved! Go to Solution.
05-03-2010 03:39 AM
Re: Cluster migration (system disk) on other storage
Why do you need this? To migrate to a new system disk?
05-03-2010 03:43 AM
Re: Cluster migration (system disk) on other storage
05-03-2010 03:53 AM
Solution: Whether this operation is possible in your cluster, and how easy it is, mainly depends on where you've kept all your cluster-common files (SYSUAF, the queue manager database, etc.).
You could boot individual systems from the new system disk and still access the 'common' files on the old system disk. This needs a couple of changes in SYLOGICALS.COM on the new disk.
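The kind of SYLOGICALS.COM change meant here can be sketched like this, assuming the common files live on a disk mounted with the (illustrative) logical name COMMONDISK:

```
$! SYLOGICALS.COM fragment: point cluster-common files at a shared disk
$! instead of the local system disk (device/directory names illustrative)
$ DEFINE/SYSTEM/EXEC SYSUAF       COMMONDISK:[VMS$COMMON]SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST   COMMONDISK:[VMS$COMMON]RIGHTSLIST.DAT
$ DEFINE/SYSTEM/EXEC QMAN$MASTER  COMMONDISK:[VMS$COMMON]
```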
WHY do you want or need to do this? Maybe there are better ways to achieve what you want to do.
Volker.
05-03-2010 03:58 AM
Re: Cluster migration (system disk) on other storage
RB
05-03-2010 03:58 AM
Re: Cluster migration (system disk) on other storage
PS. All data disks have already been migrated to the new storage. Only the boot system disk is still connected on the old storage system.
05-03-2010 04:07 AM
Re: Cluster migration (system disk) on other storage
and you don't have a shadowed system disk...
Where are your 'common' files ? On the old system disk ? Then it might be impossible to move them to the new disk, while any of the nodes still have those files open on the old disk.
The most problematic file would be the queue manager database, as you can't change/move that with the queue-manager (and queues) running.
Volker.
05-03-2010 04:20 AM
Re: Cluster migration (system disk) on other storage
05-03-2010 04:21 AM
Re: Cluster migration (system disk) on other storage
05-03-2010 04:58 AM
Re: Cluster migration (system disk) on other storage
If you intend to do a full shutdown, then NOW is the time to prepare for a potential future similar action - or perhaps there might come a time to move to IA64 in a gradual way?
So _NOW_ (on the old config) look in SYLOGICALS.COM.
By default, many files are specified there, but commented out.
Define a new location for them NOW, and make sure that location is NOT on the system disk.
The files that are currently open should be copied with
$ CONVERT/SHARE
Now reboot your cluster; and from then on you CAN do rolling upgrades, and rolling hardware replacements.
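The CONVERT/SHARE step for one of the open files might look like this (SYSUAF as the example; the target device and directory are illustrative):

```
$! Copy the in-use authorization file to its new off-system-disk home.
$! /SHARE lets CONVERT read the file even while the system has it open.
$ CONVERT/SHARE SYS$SYSTEM:SYSUAF.DAT COMMONDISK:[VMS$COMMON]SYSUAF.DAT
```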
We did this many times (the main reason we could get to > 10 years cluster uptime!)
Good luck.
Proost.
Have one on me.
jpe
05-03-2010 06:04 AM
Re: Cluster migration (system disk) on other storage
I have done this transition for clients in the past. It is very feasible, but it does require care.
I will describe the sequence from memory, but I would recommend careful checks before using this posting as a guideline for a production system.
The element that requires care is the cluster common files (generally defined in SYLOGICALS.COM).
Though somewhat simplistic, the outline of the process is:
- create a system image on the new disk
- mount the new disk cluster-wide
- shutdown the queue manager temporarily
- refresh the common files to the new system disk
- restart the queue manager
- redefine all the logical names to point to the new location
- do a rolling reboot
Generally, I would recommend moving the cluster common files off of the system disk at this point to a separate, small logical volume.
Care is necessary, but this can be done with success, virtually eliminating downtime.
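The queue-manager portion of the outline above might be sketched in DCL as follows (paths are illustrative; verify against your own configuration before use):

```
$! Stop the clusterwide queue manager so its database files are closed
$ STOP/QUEUE/MANAGER/CLUSTER
$! Copy the queue database files (QMAN$MASTER.DAT and the
$! SYS$QUEUE_MANAGER.* files) to the new common-file location
$ COPY SYS$COMMON:[SYSEXE]QMAN$MASTER.DAT NEWCOMMON:[VMS$COMMON]*
$! Redefine the logical and restart from the new location
$ DEFINE/SYSTEM/EXEC QMAN$MASTER NEWCOMMON:[VMS$COMMON]
$ START/QUEUE/MANAGER NEWCOMMON:[VMS$COMMON]
```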
- Bob Gezelter, http://www.rlgsc.com
05-03-2010 06:10 AM
Re: Cluster migration (system disk) on other storage
>PS: I don't have a shadowed system disk...
So go license and configure shadowing. Even if you're using and depending on controller-based RAID here, having host-based volume shadowing (HBVS, software RAID-1) available, and having DSA shadow devices for all your critical disks, is useful for maintaining uptime. Even if you have just one disk or one controller-presented virtual disk available as a shadow member volume for most of the time, the DSA devices are useful.
In particular, HBVS (once configured) also allows you to use dynamic volume expansion (DVE); you can transparently roll your way to larger disks. Without going off-line.
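A rough sketch of the HBVS-plus-DVE approach (device names and the label are illustrative, and the volume must have been initialized with an expansion limit, e.g. INITIALIZE/LIMIT, for DVE to apply):

```
$! Mount a one-member shadow set; more members can be added later
$ MOUNT/SYSTEM DSA100: /SHADOW=($1$DGA100:) DATADISK
$! Add the larger replacement disk and let the shadow copy complete
$ MOUNT/SYSTEM DSA100: /SHADOW=($1$DGA200:) DATADISK
$! After dropping the smaller member, grow into the larger device
$ SET VOLUME/SIZE DSA100:
```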
You'll likely also want to start deploying the DECnet and IP cluster alias mechanisms, if that's not already on-line.
Oh, and setting up SYLOGICALS.COM and shadowing and the other tools means that you'll also be able to set up rolling upgrades, and to move from older OpenVMS releases to newer releases without rebooting the cluster. It also means that you can start rolling in and migrating to the smaller and faster replacements for this Alpha gear; to the Integrity servers.
I don't trust the on-line storage-level replication tools, for all the same reasons that I don't trust BACKUP/IGNORE=INTERLOCK (also known as /ALLOW=DATA_CORRUPTIONS /NOWARNINGS); the lower-level and block-level storage stuff has no idea what's going on "upstairs" in the operating system and the applications. You need to have your host activity quiesced and all system and application caches flushed for that block replication to work. (And VMS doesn't have that knob.) The storage controller folks have been re-learning this particular lesson for every product generation since 1982 or so, and quite possibly longer.
Get somebody in to have a look at this stuff, and to work through a configuration that best meets your requirements. (You might not, for instance, be able to migrate to Integrity servers, but having access to rolling upgrades and multiple system disks can still be useful...) That is, if you're not in a position to read through the wall of documentation that's available.
09-12-2010 11:50 PM