Re: "Standby" Node Management Advice
10-20-2010 07:53 PM
Re: "Standby" Node Management Advice
bob
10-21-2010 04:43 AM
Re: "Standby" Node Management Advice
We had a similar situation when we were testing new replacement servers with SAN-attached storage.
On the storage array (EVA and EMC arrays), we would present the storage only to the server we wanted to boot.
When we wanted to boot the new server for testing, a script would be run on the storage arrays to unpresent the disks from one server and present them to the other server.
Of course, this doesn't stop someone from executing the script to move the disks between the servers when it isn't intended; guarding against that would be an operational process that would need to be implemented. We were in a test environment, though. We also made sure we had valid backups available.
It's a quick-and-dirty way if clustering isn't an option. Just an extra step of performing storage masking on the array while both servers are down.
HTH,
Bob
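For what it's worth, here's a toy sketch of the invariant Bob's swap script has to maintain: a disk is presented to at most one server at a time, and the move is always unpresent-then-present. The class and names (`ArrayMasking`, `NODEA`, `DATA1`) are made up for illustration; a real EVA or EMC array has its own CLI for this.

```python
class ArrayMasking:
    """Toy model of LUN presentation: each LUN maps to at most one host."""

    def __init__(self):
        self._presented = {}  # lun -> host currently allowed to see it

    def present(self, lun, host):
        owner = self._presented.get(lun)
        if owner is not None and owner != host:
            # Refuse to present a disk that another server still owns.
            raise RuntimeError(f"{lun} still presented to {owner}; unpresent first")
        self._presented[lun] = host

    def unpresent(self, lun):
        self._presented.pop(lun, None)

    def swap(self, luns, old_host, new_host):
        """The 'move the disks' script: unpresent from one server,
        then present to the other."""
        for lun in luns:
            if self._presented.get(lun) == old_host:
                self.unpresent(lun)
        for lun in luns:
            self.present(lun, new_host)


masking = ArrayMasking()
masking.present("DATA1", "NODEA")
masking.swap(["DATA1"], "NODEA", "NODEB")
print(masking._presented["DATA1"])  # NODEB
```

The point of the `RuntimeError` is Bob's caveat: nothing on the host side stops the move, so the only safety is the array refusing (or an operator checking) while the old server still has the disks.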
10-21-2010 07:23 AM
Re: "Standby" Node Management Advice
I was thinking more along the lines of the way RAxx disk drives worked. They had two ports but could only be connected on one port at a time. Once system A connected to the drive on Port A, system B's attempts to connect on Port B would not succeed. This worked well for us when we had an active system and a warm standby, with the data disks connected to whichever system was active.
But I don't know if a modern SAN can be made to work this way.
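To make the RAxx behavior concrete, here's a small first-connect-wins simulation. It's only a sketch of the semantics described above (the class name and port labels are invented); it isn't a model of any real SAN feature.

```python
class DualPortDrive:
    """Toy model of an RAxx-style dual-ported drive: two ports,
    but only one connection may be held at a time."""

    def __init__(self):
        self._owner = None  # which port ("A" or "B") holds the drive

    def connect(self, port):
        """First connection wins; the other port is locked out until release."""
        if self._owner is None:
            self._owner = port
            return True
        return self._owner == port

    def release(self):
        self._owner = None


drive = DualPortDrive()
assert drive.connect("A")       # active system grabs the drive
assert not drive.connect("B")   # warm standby is locked out
drive.release()                 # fail-over: active side lets go
assert drive.connect("B")       # standby can now take over
```

The hardware interlock is what made the scheme safe: the standby physically could not mount the data disks while the active system held them.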
10-21-2010 07:35 AM
Re: "Standby" Node Management Advice
We attached a serial T-switch to both nodes' serial ports, and put a serial loop-back connector (free from DEC with every computer!) on the common connector. Both nodes were on all the time ("warm" instead of "cold"). When the nodes booted, they transmitted a string to the T-switch; the one that got the string echoed back was the "primary". The nodes also connected to one another task-to-task to ensure we didn't have two "primaries". The other node was the "secondary". The boot process then defined a logical name that contained the node's status, and that logical was used to determine which site-specific start-up files were run, and also to alert users if they happened to log into the wrong machine.
This approach allowed us to automatically back up application status to the secondary, and also to fail over without rebooting if we wished. But it was simple enough for a non-technical person to perform a fail-over if necessary.
Now, at the time we did this, using clusters would have meant spending a lot more on hardware as well as software. But this solution worked well enough that we're still using it today on the same application, albeit on slightly newer VAXen.
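The election logic above (echo test plus task-to-task cross-check) can be sketched roughly like this. This is a hedged simulation in Python, not the actual boot procedure, which on VMS would be DCL defining a system logical name; `boot_role` and the role strings are invented names.

```python
def boot_role(probe_echoed, peer_role=None):
    """Decide this node's role at boot.

    probe_echoed: True if the string sent to the T-switch came back,
        i.e. the loop-back connector is on this node's side.
    peer_role: the role the other node reports over the task-to-task
        link, or None if it is unreachable.
    """
    role = "PRIMARY" if probe_echoed else "SECONDARY"
    if role == "PRIMARY" and peer_role == "PRIMARY":
        # Both nodes think they are primary: refuse to start split-brained.
        raise RuntimeError("two primaries detected; check the T-switch")
    return role


# The boot procedure would then store the result in a logical name
# (something like DEFINE/SYSTEM NODE_ROLE) and pick site-specific
# startup files based on it.
print(boot_role(True, "SECONDARY"))   # PRIMARY
print(boot_role(False, "PRIMARY"))    # SECONDARY
```

The cross-check is the interesting part: the echo alone says who *should* be primary, but only the node-to-node conversation catches a miswired switch before both sides start the application.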
10-21-2010 10:09 AM
Re: "Standby" Node Management Advice
Dan
10-25-2010 08:35 AM
Re: "Standby" Node Management Advice
Some quick answers:
- we will be running a third-party app, so we can't change the code
- the vendor does not support clusters
- the cost of a cluster license is not an issue; we're just looking for a quick way to recover from the failure of a critical system
That said:
You've all convinced me of the error of my design. Since we do have monthly reboots scheduled (how rare these days!), we will include a quarterly switch between nodes to test the standby node.