Operating System - OpenVMS
08-08-2004 06:11 AM
Re: Change the HW from 4100 with local Discs to DS25 Cluster with EVA3000 SAN
Hi Manfred.
I migrated an Alpha 4100 + 2100 cluster to a 2 x ES40 cluster a couple of years back. The disks were being moved from DSSI to Fibre Channel via HSG60 controllers.
Once the disks were configured by our reseller, I used our standard Disaster Recovery procedures to build the new system.
We also run Adabas v4.1.1 and this is usually quite happy. You only need to make changes if the disk names where the container files are located have changed - quite possible if you're moving to an EVA, but dependent upon your old config and whether you've used logical names.
I try to configure the system from the start so that any changes are limited to SYSTARTUP_VMS.COM, if possible, although our latest software acquisition, Attunity Connect, does store physical device names in several places, not just in its startup files.
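To illustrate the logical-names approach, here is a minimal sketch of the kind of definitions you might keep in SYS$MANAGER:SYSTARTUP_VMS.COM (the device and logical names are invented for illustration - substitute your own):

$ ! Concealed device logicals defined once at startup; after a hardware
$ ! move, only these lines need to change, not the applications
$ DEFINE/SYSTEM/EXEC/TRANSLATION=(CONCEALED,TERMINAL) ADABAS_ROOT $1$DGA101:[ADABAS.]
$ DEFINE/SYSTEM/EXEC APPL_DATA DSA1:

Applications then reference ADABAS_ROOT:[DB]FILE.DAT and never see the physical device name.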
If you have no DR procedures, I'd suggest using this exercise to develop a set of instructions. If time allows, you could use the opportunity to work out a full set of procedures.
As already mentioned, it is rarely easier to reinstall everything from scratch - too much happens over the years that you or previous managers haven't noted down. It is far easier to restore your existing system and change any parameters/device names as necessary.
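For the restore itself, the usual tool is an image BACKUP of the old system disk onto the new one. A hypothetical sketch (save-set and device names invented for illustration):

$ ! Restore an image save set from tape onto a new SAN disk
$ MOUNT/FOREIGN MKA500:
$ BACKUP/IMAGE/LOG MKA500:SYSDISK.BCK/SAVE_SET $1$DGA100:
$ DISMOUNT MKA500:

After the restore you can adjust device-dependent parameters before booting the new hardware from it.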
I wish you success!
08-09-2004 01:57 PM
Re: Change the HW from 4100 with local Discs to DS25 Cluster with EVA3000 SAN
Manfred,
Since you're at V7.3-2, why not migrate the system with (almost) no downtime? Start with a minimal installation on a temporary system disk for the DS25. Use clustering to add the new node, and make the SAN disks available to the old nodes, then use dissimilar device shadowing to move the data without interrupting processing. Once that's done, shut everything down and boot the DS25 from the migrated system disk, AUTOGEN and reboot!
Obviously you'll need to do some detailed planning, and you will need to modify your MOUNT commands, but everything is now present in OpenVMS 7.3-2 to perform transparent physical migrations of entire data centres. With your system disk on Fibre Channel, you don't even need the reboot, as your new node can boot from a new root on the existing system disk.
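The shadow-copy step looks roughly like this (a hypothetical sketch - virtual unit, member devices and volume label are invented for illustration):

$ ! Add the new EVA disk as a member of the existing shadow set;
$ ! volume shadowing copies the data while the volume stays in use
$ MOUNT/SYSTEM DSA1: /SHADOW=($1$DGA101:) DATA_VOL
$ SHOW DEVICE DSA1:          ! wait until the copy operation completes
$ DISMOUNT $1$DKA100:        ! then drop the old local-disk member

Once the old member is dismounted, the data lives entirely on the SAN and the old hardware can be retired.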
Design your clusters carefully and you'll be able to upgrade nodes, storage and sites with zero downtime!
A crucible of informative mistakes