splitting a cluster
08-28-2007 02:23 AM
We have had serious issues with the MQSeries messaging system used extensively by the application. To overcome these issues, we would like to split the cluster into non-prod and production. We use two system disks, one production and one non-production. Both have disks from XP arrays, EVAs, and HSGs presented through a fibre switch. They share a SYSUAF residing on a common disk.
What are the things to be considered if we decide to go ahead with the cluster split? I am just looking at the implications, and I realize that it would take some downtime.
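For context, the shared SYSUAF is set up the usual way, via system logical names in SYS$MANAGER:SYLOGICALS.COM pointing at the common disk (the device and directory names below are examples only):

$! Typical shared-authorization setup in SYS$MANAGER:SYLOGICALS.COM
$ DEFINE/SYSTEM/EXEC SYSUAF     $1$DGA42:[COMMON.SYSEXE]SYSUAF.DAT
$ DEFINE/SYSTEM/EXEC RIGHTSLIST $1$DGA42:[COMMON.SYSEXE]RIGHTSLIST.DAT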
08-28-2007 03:01 AM
Admittedly obvious, but I would proceed with care. I would also be interested first in fully understanding why there is a performance problem with MQ.
My concern is that the entire evolution of splitting the cluster could be accomplished, and the performance problem could remain, or re-appear at a later date.
Removing the development nodes from the cluster will likely proceed without incident.
Changing the cluster group number on the non-production cluster will create a second cluster. There is more to achieving the most benefit than merely creating a second cluster.
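The group number and password live in SYS$COMMON:[SYSEXE]CLUSTER_AUTHORIZE.DAT and are changed through the standard configuration procedure; the menu wording varies by OpenVMS version, but it is roughly:

$ @SYS$MANAGER:CLUSTER_CONFIG.COM
$!  choose the CHANGE option, then the item for changing the
$!  cluster group number and password; every node of the new
$!  cluster must then be rebooted for the new
$!  CLUSTER_AUTHORIZE.DAT to take effect.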
The more complex process is ensuring that the new development cluster has all of the files that it needs, and that no device accidentally ends up being used by both clusters. Good use of the SAN is potentially useful here, but one should be careful. It may be [read that as IS] highly desirable to retain the option of quickly moving development hardware to the production cluster to replace failed units.
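As a sanity check once the zoning is done, audit what each side can still see and what it actually mounts; for example, on each cluster:

$ SHOW DEVICE/MOUNTED        ! what this cluster is actually using
$ SHOW DEVICE $1$DGA         ! what the SAN still presents to these nodes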
Cloning the common disk, and then deleting elements from both sides, is one possible technique. This raises an additional question of whether one wants to maintain both sets of accounts on both clusters.
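A sketch of that cloning step, assuming a spare disk as the target and a quiescent source (device and account names are examples):

$ MOUNT/FOREIGN $1$DGA200:                ! spare disk, becomes the dev copy
$ BACKUP/IMAGE $1$DGA100: $1$DGA200:      ! clone the common disk
$!  (add /IGNORE=INTERLOCK only if the source must stay live,
$!   with the usual consistency caveats)
$ MC AUTHORIZE                            ! then prune each side, e.g.:
UAF> REMOVE PROD_BATCH                    ! hypothetical production-only account
UAF> EXIT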
File access between the now-separated clusters can be eased by the use of proxies and concealed logical names, but this too requires care.
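For example, granting a development-cluster user access to a production account might look like this (node and user names are placeholders):

$ MC AUTHORIZE
UAF> ADD/PROXY DEVCLU::JAMES JAMES/DEFAULT   ! remote user -> local account
UAF> EXIT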
This evolution can be accomplished in a fairly non-disruptive fashion, but it does require care. It may be a sound idea to retain outside expertise with experience in this type of situation [Disclosure: My firm does provide services in this area, as do several of the other regular contributors to this forum].
- Bob Gezelter, CDP, CSA, CSE, http://www.rlgsc.com
08-28-2007 03:23 AM
Re: splitting a cluster
Not a performance issue, but somehow MQ in non-prod tried to communicate with prod, resulting in a hung MQ in production and many hours of downtime. Without naming anyone, it is being pointed out as a possible cluster lock manager issue. No proof has been provided yet.
>My concern is that the entire evolution of splitting the cluster could be accomplished, and the performance problem could remain, or re-appear at a later date.
By splitting the cluster, we hope that non-prod MQ will not mess up production MQ.
>no device accidentally ends up being used by both clusters.
Good point, we are aware of that.
>Good use of the SAN is potentially useful here, but one should be careful.
We have, to the maximum extent possible, kept separate disk groups in the SAN, but there are some disks shared by both.
>It may be [read that as IS] highly desirable to retain the option of quickly moving development hardware to the production cluster to replace failed units.
Yes, our non-prod nodes are actually a single GS1280 partitioned three ways, which can be unpartitioned in a hurry to be used as a prod node.
Thanks for your suggestions; I appreciate them.
James
08-28-2007 03:29 AM
Re: splitting a cluster
You are welcome.
The situation described in your follow-on posting clarifies things. Certainly, separating production from non-production will eliminate the "blame throwing" hazard. I would consider several other configuration items, such as separate dedicated LANs for each cluster's SCS traffic.
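Once split, each cluster's SCS LAN usage can be checked with the SCACP utility (available on recent OpenVMS versions; check SCACP's HELP for the exact syntax on yours):

$ MC SCACP
SCACP> SHOW LAN_DEVICE     ! LAN devices PEDRIVER is using locally
SCACP> SHOW CHANNEL        ! per-node channels; confirm each cluster stays on its own LAN
SCACP> EXIT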
If I can be of assistance, please let me know.
- Bob Gezelter, http://www.rlgsc.com
08-28-2007 06:23 AM
Re: splitting a cluster
I largely agree with Bob (not surprisingly).
If the setup had this in mind beforehand, it is even simplicity itself.
We have a config in which one node is configured to be a cluster member if booted from the cluster system disk, but it can also boot from another disk, and then it is a totally standalone (actually single-member cluster) system.
Now in the existing cluster, the hard part will be the data (in the broadest sense, program files are also just data).
How easy or difficult will it be to separate that?
In a 9-node cluster I would guess you use volume shadowing. The following assumes you are.
You need the help of a GOOD SAN manager.
You should purchase some extra drives (nowadays rather affordable, certainly compared with the effort you are facing).
Split one member of each shadow set, and have the SAN manager configure those members into the new zone of the test/dev cluster (sketched below).
Rebuild the PRD shadow set using the new drives.
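A sketch of that per-shadow-set dance (device names and the volume label are examples):

$ DISMOUNT $1$DGA101:                          ! drop one member out of DSA1; data is frozen on it
$!  ... SAN manager re-zones $1$DGA101: into the test/dev fabric ...
$ MOUNT/SYSTEM DSA1:/SHADOW=$1$DGA150: DATA1   ! new drive joins; a shadow copy restores redundancy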
Boot one of the designated test nodes conversationally, set VAXCLUSTER to zero, and run CLUSTER_CONFIG to form a new cluster.
This generates a new cluster authorization file, and in effect there is a new cluster. Other nodes may join.
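On Alpha (e.g., the GS1280 partitions) that first boot goes roughly like this (the system disk name is an example):

>>> BOOT -FLAGS 0,1 DKA100         ! conversational boot from the test system disk
SYSBOOT> SET VAXCLUSTER 0
SYSBOOT> CONTINUE
$ @SYS$MANAGER:CLUSTER_CONFIG.COM  ! then form the new cluster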
Best case: all data is referenced by logical names that point to concealed devices. Except for the concealed devices, NOTHING references disks directly. You are almost done: just define the concealed devices in line with the new hardware config.
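Each such root then needs just a one-line redefinition per cluster (the logical and device names are placeholders):

$ DEFINE/SYSTEM/EXEC/TRANSLATION_ATTRIBUTES=CONCEALED APP$ROOT $1$DGA310:[APP.]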
Worst case: the data is really intertwined. Then comes the job of selectively removing from each cluster the data that does not belong.
And somehow keep track of the expenses & efforts. In the likely case that there is _NO_ difference in the MQ issues, you have a bill to present!
Success, have fun.
Proost.
Have one on me.
jpe
08-28-2007 11:32 PM
Re: splitting a cluster
http://64.223.189.234/node/169
Most of this is going to be splitting the FC SAN, but the core of splitting the cluster is splitting the personality files.
Given this is a split into production and testing clusters, ensuring that production and testing track together is going to be another issue.
Stephen Hoffman
HoffmanLabs LLC