Cluster Migration Procedures
01-17-2007 08:55 AM
The current cluster nodes have independent system disks (shadow sets) on their local buses.
The quorum disk and the data disks reside in the SAN (EVA5000).
What is the best method to carry this out safely and quickly, without having to reinstall the applications?
We have planned to carry out the following:
1- Shut down one node
2- Image-backup its system disk to the SAN (EVA5000)
3- Boot one ES47 from this copy in the SAN (with only the system and quorum disks)
4- Resolve any related issues:
Different system disk device name
Licensing
Hardware differences (network devices)
DECnet and TCP/IP reconfiguration
AUTOGEN
5- Test the clustering mechanism
6- Mount the data disks and test the applications
7- Run the applications for a week or so
If everything is OK, then do the same procedure with the other ES47.
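For reference, the image backup in steps 2-3 might look roughly like this in DCL, run from a surviving node after the first node is shut down (the device names $1$DKA100: and $1$DGA100: are hypothetical):

```
$ ! Mount the SAN target disk foreign, then image-copy the old system disk.
$ MOUNT/FOREIGN $1$DGA100:                    ! hypothetical EVA5000 SAN disk
$ MOUNT/FOREIGN $1$DKA100:                    ! old node's (now idle) system disk
$ BACKUP/IMAGE/VERIFY $1$DKA100: $1$DGA100:   ! disk-to-disk image copy
$ DISMOUNT $1$DKA100:
$ DISMOUNT $1$DGA100:
```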
Is this method completely safe?
Are there other complications that we are not seeing?
Do we lose features, improvements, or performance by migrating this way, without a "clean installation"?
Or, given the risks and complexities involved, would it be better to carry out clean installations (including the applications) and add the new nodes to the cluster?
Thanks
Solved! Go to Solution.
01-17-2007 11:05 AM
Re: Cluster Migration Procedures
Personally, I prefer, if at all possible, to create new cluster members by:
- leaving all of the original nodes up
- imaging one of the system disks
- mounting the copy privately
- making the configuration changes (SCSNODE, etc.)
- booting the cloned system disk on one of the new systems
- testing the new node
- including the new node in the production mix
Repeating the last two steps until all of the new systems are in production. The old hardware can then be idled and disconnected.
While it is a little bit more effort, this scheme can be implemented without ANY INTERRUPTION in system availability. For that matter, the cluster will never actually be down. Some members will be added, and other members will be removed.
This is the way that OpenVMS clusters have recorded uptimes in excess of a decade, through generations of software, CPUs, and mass storage.
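A minimal DCL sketch of the "mount the copy privately, make the configuration changes" steps (the device name, node name, and system ID below are hypothetical):

```
$ ! Mount the cloned system disk privately, ignoring its duplicate label.
$ MOUNT/OVERRIDE=IDENTIFICATION $1$DGA100:
$ ! Give the clone its own SCSNODE/SCSSYSTEMID pair before it ever boots,
$ ! by editing its current parameter file with SYSGEN.
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> USE $1$DGA100:[SYS0.SYSEXE]ALPHAVMSSYS.PAR
SYSGEN> SET SCSNODE "NEWES1"
SYSGEN> SET SCSSYSTEMID 65534
SYSGEN> WRITE $1$DGA100:[SYS0.SYSEXE]ALPHAVMSSYS.PAR
SYSGEN> EXIT
$ ! Mirror the changes in the clone's MODPARAMS.DAT so AUTOGEN keeps them.
```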
- Bob Gezelter, http://www.rlgsc.com
01-17-2007 11:47 AM
Re: Cluster Migration Procedures
I would consider moving the system disks to the SAN with BACKUP/IMAGE, and configuring the two 8400s to use the SAN for their boot disks.
Run CLUSTER_CONFIG to create new roots for the ES47s on each system disk. This gives you two system disks, with two roots on each.
Bring the new ES47s into the cluster, test the application, adjust MODPARAMS, etc. You'll have a 4-node cluster during the testing period.
Shut down the 8400s, removing the nodes and adjusting quorum.
I would also run console diagnostics for 48-72 hours prior to booting the new systems into the cluster.
With proper planning, the cluster will remain available.
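The root-creation and quorum steps can be sketched as follows (menu choices abbreviated; this is the standard cluster-configuration procedure, not anything site-specific):

```
$ ! On a current member, add a root for each ES47 to the SAN system disk.
$ @SYS$MANAGER:CLUSTER_CONFIG_LAN.COM    ! choose "ADD a node", answer prompts
$ ! Later, after an 8400 is shut down and removed, rebalance quorum
$ ! from any remaining member:
$ SET CLUSTER/EXPECTED_VOTES
```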
Andy
"There are nine and sixty ways . . . "
01-17-2007 03:16 PM
Re: Cluster Migration Procedures
Personally, I would lean towards doing a clean install rather than copying an existing system disk, unless there is a good reason why you can't or don't want to. It's hard to say which option is better; it depends on your environment and which option you are more comfortable with. There are complexities either way...
If you do decide to use a backup copy (or shadow copy) and then rename this build, I would suggest that first you find a test box, preferably with similar products installed, and rename it. What you really want to avoid is a situation where you boot the new node into the cluster and you cause a problem with the existing cluster members because you overlooked something.
As for performance, make sure that if you choose the clean install option that you take a good look at your existing SYSGEN settings. You will probably need to duplicate some settings for the new build. AUTOGEN won't be much use until the new servers have seen a period of usage.
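One way to review the existing SYSGEN settings against a fresh build is to capture and compare them (the listing file names below are hypothetical):

```
$ ! On the old node: capture the active parameters to a listing file.
$ RUN SYS$SYSTEM:SYSGEN
SYSGEN> SET/OUTPUT=OLD_PARAMS.LIS
SYSGEN> SHOW/ALL
SYSGEN> EXIT
$ ! Repeat on the new build (NEW_PARAMS.LIS), then compare:
$ DIFFERENCES OLD_PARAMS.LIS NEW_PARAMS.LIS
```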
01-17-2007 11:22 PM
Re: Cluster Migration Procedures
From experience, I can say that I definitely agree with Robert & Andy, and I tend to disagree with Martin.
Migrate "on the fly": first add a node and, if it is functioning satisfactorily, remove an old one (repeat as necessary).
THE big issue with a fresh install is that you need to re-apply ALL layered and 3rd-party products. In my experience, you ALWAYS forget something!
The organisation will not even notice if things go as expected; but IF you go the fresh route and you DO hit an issue, THAT will be when you have some explaining to do!
hth
Proost.
Have one on me.
jpe
01-18-2007 01:43 AM
Re: Cluster Migration Procedures
Does the image of the "old" system disk have everything required (images, DCL scripts, etc.) for the new ES47?
If so, how much could this affect things?
01-18-2007 02:16 AM
Solution
From: http://h18002.www1.hp.com/alphaserver/download/alphaserver_es47_ds_0704.pdf
On page 3, it says the ES series is supported by V7.3-1 with a TIMA kit, i.e. I would think V7.3-2 should be good.
And from:
http://h71000.www7.hp.com/doc/732FINAL/6668/6668pro_012.html#alpahes
it doesn't sound like there are any hardware dependencies to "worry" about.
And just to clarify, you have two _different_ system disks, or one system disk shadowed on both nodes?
My 2 cents' worth... perhaps you could go your way: from a backup, bring it up independently on the new ES47 on the separate SAN space, and work out the changes that will be required. Then, when you're ready, take the suggestions given and integrate them into your existing cluster.
You're always bound to miss something the first time around, and it's always best to discover it in test!
Good luck,
Art
01-18-2007 02:16 AM
Re: Cluster Migration Procedures
If you are going from 7.3-2 to 7.3-2, then everything is there. I would take a look at the patches to see if there are any patches specific to the hardware configuration on the new systems that are applicable (and may have been ignored because they did not apply to the 8400 systems).
- Bob Gezelter, http://www.rlgsc.com
01-18-2007 05:17 AM
Re: Cluster Migration Procedures
>>>
And what about losing facilities, improvements or performance by not having made a "clean installation" ?
<<<
Any improvements a "clean" installation would bring are also present on upgraded systems.
(Btw, the LEAST you need to do on a clean installation is apply the 7.3-2 patches again!)
Also consider that your current system disk IS already configured and tuned for YOUR environment (at least, I certainly hope and trust so).
Any (re-)configuring left will be for your new NICs, plus adaptations for the undoubtedly increased memory and CPU speed.
Of course, moving to a SAN requires its own configuring whichever way you choose.
However you decide: thinking ahead is much better than correcting afterwards!
Success.
Proost.
Have one on me.
jpe
01-23-2007 06:00 AM
Re: Cluster Migration Procedures
The idea is to shut down one 8400 node, start up a new ES47 using the modified copy of the system disk, and join it to the cluster together with the other 8400. Then validate cluster operation for some days and, if OK, repeat the same procedure with the other node. Is it safe to do it this way? Will we have problems at the moment the new ES47 JOINs the active cluster?
01-23-2007 06:26 AM
Re: Cluster Migration Procedures
There are some issues you should take care of.
1. Do you have everything in SYS$COMMON?
2. Do you have all the major and most important things off the system disk? I mean the queue manager, SYSUAF, IP configuration files, DECnet configuration files, etc.
If not, you first have to think about how to copy these without a cluster shutdown. We have experience with going from SAS to SAN; because we had the whole cluster environment on a different disk, we had fewer problems.
I also think (and hope) you have your system disk shadowed!
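For point 2, the usual technique is to point the cluster-common files off the system disk with system logical names in SYS$MANAGER:SYLOGICALS.COM on every node; a sketch (the CLU$COMMON disk and directory are hypothetical):

```
$ ! In SYS$MANAGER:SYLOGICALS.COM:
$ DEFINE/SYSTEM/EXEC SYSUAF      CLU$COMMON:[SYSEXE]SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST  CLU$COMMON:[SYSEXE]RIGHTSLIST.DAT
$ DEFINE/SYSTEM/EXEC NETPROXY    CLU$COMMON:[SYSEXE]NETPROXY.DAT
$ DEFINE/SYSTEM/EXEC QMAN$MASTER CLU$COMMON:[SYSEXE]
```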
I would go for a scenario of:
a. use CLUSTER_CONFIG to add a new node.
b. dismount one member of the system disk's shadow set.
c. BACKUP/IMAGE this member to a SAN disk.
d. mount the member back into the shadow set.
e. mount the "new" system disk privately and give it another name.
f. try to get rid of the quorum disk.
g. modify the root tree of the new node for the new ES47.
h. make sure there is a queue manager available before the ES47 boots.
i. boot the ES47 and check whether everything is fine. If not, keep modifying and rebooting until it is.
j. shut down one 8400 and bring up the new ES47 in its place (using the LAN).
k. check whether everything is all right and working. If not, modify and reboot!
l. do the same for the next ES47 and AS8400.
I would also consider a clean system disk, as discussed before.
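Steps b through e above might look like this in DCL (shadow set DSA0:, the device names, and the labels are hypothetical):

```
$ DISMOUNT $1$DKA100:                           ! b: drop one member from DSA0:
$ MOUNT/FOREIGN $1$DKA100:
$ MOUNT/FOREIGN $1$DGA200:                      ! SAN target disk
$ BACKUP/IMAGE $1$DKA100: $1$DGA200:            ! c: image the member to the SAN
$ DISMOUNT $1$DKA100:
$ DISMOUNT $1$DGA200:
$ MOUNT/SYSTEM DSA0:/SHADOW=$1$DKA100: SYSDISK  ! d: re-add the member
$ MOUNT/OVERRIDE=IDENTIFICATION $1$DGA200:      ! e: mount the copy privately
$ SET VOLUME/LABEL=NEWSYS $1$DGA200:            !    and give it another name
```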
AvR
01-23-2007 06:33 AM
Re: Cluster Migration Procedures
There are no issues with changing the hardware behind an IP or DECnet address.
Andy
There are nine and sixty ways . . .
01-23-2007 06:36 AM
Re: Cluster Migration Procedures
As for the clustering... Assuming that VOTES and EXPECTED_VOTES are set correctly, and assuming that the original source node for the system disk was hard-halted and won't be visiting the cluster during your migration, and assuming you made full-disk copies with BACKUP/IMAGE without resorting to /IGNORE=INTERLOCK or on-line BACKUP or such...
What you'll encounter with your copies is a need to reconfigure the network software stack and any other physical device references that might exist, as it's unlikely that your AlphaServer 8400 and your AlphaServer ES47 will have the same device configuration. You will need to track the FC SAN addresses, for instance.
But you'll certainly be able to get it booted far enough to fix these references, once you have a path to the cloned system disk. A conversational bootstrap from the console may be required here, particularly if the system startups aren't coded to deal with startup-level errors. (If the startups lack a SET NOON command at the top, for instance.)
Then an AUTOGEN to re-set various parameters.
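The conversational bootstrap and the AUTOGEN pass might look roughly like this (the boot device and root are hypothetical; NOFEEDBACK because the new box has no feedback data yet):

```
>>> BOOT -FLAGS 0,1 DGA100        ! Alpha console: root 0, conversational boot
SYSBOOT> SET STARTUP_P2 "YES"     ! verbose startup, to see where errors occur
SYSBOOT> CONTINUE
  ... fix device/network references, then, once the system is up:
$ @SYS$UPDATE:AUTOGEN GETDATA REBOOT NOFEEDBACK
```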
Off you go.
If you're paranoid or in a hurry or feeling nervous, again, get some help in here. I'd tend to at least make disk BACKUP copies of everything critical, and would consider keeping the business-critical stuff off-line until you're ready for it.
If you're really paranoid or your exposure to business-critical failures is high, take a copy of the disks, configure the node to NOT join the cluster, test, and prepare for deployment. If you are testing within the production cluster with production applications, you can obviously potentially stomp on the production data and the production environment.
As for some of the core questions I can infer here, OpenVMS doesn't care what hardware SCSNODE and SCSSYSTEMID pair are associated with. Clustering does get cranky if it sees the pairing of these two values broken. And DECnet and IP don't care what host address and host name are associated with what hardware. That sort of stuff is not where I'd be focused here; that stuff is easy, easily fixed, and comparatively low-risk.
Stephen Hoffman
HoffmanLabs