ProLiant Servers - Netservers
Migration of MSCS Cluster to new blades, keeping MA8000

 
New Member


Currently we have a 2-node WS2K3EE MSCS cluster implemented with 2 PL8500Rs and an MA8000 unit. We've purchased two new BL45ps.
We'd like to migrate our current cluster to the BL45ps - continuing to use the MA8000 - either by extending the node count to 4 and then deleting the 8500R nodes, OR by deleting one 8500R node at a time and adding a new BL45p node in its place. (Since the MA8000 and its data are staying put, it's more desirable and easier to migrate by extension/replacement than to rebuild the cluster from scratch.) However, we can't find HP documentation regarding the MSCS configuration. (For instance, how does one *safely* set up BL45p-based redundant pathing/SecurePath (i.e. NSPF) for pre-MSCS-"add node" connectivity to the MA8000? Does WS2K3 MSCS automatically assign the correct drive letters? Etc., etc.)
Anyone done this? Have you found a "cookbook"? Advice and pointers (URLs and otherwise) are needed. Thanks.
4 REPLIES
Honored Contributor

Re: Migration of MSCS Cluster to new blades, keeping MA8000

Hi,

Not sure about HP documentation, and I don't know whether blade servers are supported on the MA8000.
http://h18006.www1.hp.com/products/storageworks/bladesystemmatrix/bl40-45.html

What HBAs are you installing? Try and keep to the same OEM-type as your PL8500s (my guess is they are Emulex-based KGPSA Adapters).
http://h18004.www1.hp.com/products/blades/components/Mezzanine/emulex/index.html

You should install driver version 5.4.82a16 for the HBAs in the new servers, not anything newer, as it would not be supported on MA8000.
http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?lang=en&cc=us&prodTypeId=12169&prodSeriesId=315733&prodNameId=315735&swEnvOID=1005&swLang=8&mode=2&taskId=135&swItem=co-17519-2

Check beforehand whether the existing LUNs have selective storage presentation enabled (SHOW Dxx), and see what ACCESS is set to. If it's ALL, it might be a bit of a problem. You should restrict LUN access to the 2 existing cluster nodes until you are ready to present the LUNs to the blades.
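From memory, the check and restriction on the HSG80 CLI would look something like this - a sketch only; the unit and connection names below are hypothetical placeholders, not from this thread:

```
HSG80> SHOW D101                         ! inspect one unit's ACCESS list
HSG80> SHOW UNITS FULL                   ! or list all units with full details
! If ACCESS shows ALL, restrict the unit to the existing nodes' connections:
HSG80> SET D101 DISABLE_ACCESS_PATH=ALL
HSG80> SET D101 ENABLE_ACCESS_PATH=(NODE1A,NODE1B,NODE2A,NODE2B)
```

With SecurePath, each node has two connections (one per HBA/fabric), so both of a node's connection names go in the ENABLE_ACCESS_PATH list.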

You will also need SecurePath; it is the only multipathing option available for the MA8000. Add the blades' HBAs to the cluster zone (if you have zoning) and set up the new connections (!NEWCONxx) on the MA8000.
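Naming the new connections would go roughly like this on the HSG80 CLI - again a sketch; the BLADE* names are made up for illustration:

```
HSG80> SHOW CONNECTIONS                      ! new HBAs show up as !NEWCONxx
HSG80> RENAME !NEWCON01 BLADE1A              ! give them meaningful names
HSG80> SET BLADE1A OPERATING_SYSTEM=WINNT    ! set the correct host OS type
HSG80> RENAME !NEWCON02 BLADE1B
HSG80> SET BLADE1B OPERATING_SYSTEM=WINNT
```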

To get the drive letters right, you should probably shut down the existing nodes, then present the disks to the blade servers so that they can see them. Then set the drive letters, but do nothing else with the disks. If the cluster is not shut down, you would still see the disks in Disk Management, but they would show as "offline", as the cluster locks the disks. Then shut down the blades and restart the 8500s. Start the blades again and add them as nodes to the cluster. When you're satisfied that the cluster works on the blades, you can evict the 8500s from the cluster and take them out of service.
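On the WS2K3 side, once the blades can see the disks, the letters could be set with a diskpart script (run with `diskpart /s letters.txt`) - the volume numbers and letters here are placeholders; match them to what the cluster already uses:

```
rem letters.txt - assign the cluster's drive letters on the new nodes
rem (volume numbers and letters are examples only - check "list volume" first)
select volume 2
assign letter=Q
select volume 3
assign letter=R
```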

Here's a link to some whitepapers on ProLiant Clustering:
http://h18004.www1.hp.com/solutions/enterprise/highavailability/whitepapers/

But, as I mentioned above - the question of support remains unanswered.

Regards,
Stephen
New Member

Re: Migration of MSCS Cluster to new blades, keeping MA8000

Stephen.
Thanks for the rapid reply.
The BL45ps differ from the BL40ps in that there are no blade-internal (i.e. dedicated) HBAs. In the BL45p you have to use the enclosure module's SAN ports.
I've been looking for documentation for some time and haven't found the BL45p mentioned in much of it, so I wasn't sure if there is a support issue or just a documentation backlog. (BTW, VMware does not support an MA8000-based SAN; it supports the EVA, which we have in house, but not the MA8000. So we were looking.)
Our HP sales agent sent word yesterday that *he's* not sure the MA8000 is supported either, and he's looking into the issue. Rather late, I'd say, since we've mentioned our desire to use the MA8000 with blades for our SQL MSCS cluster to just about everybody.
If you find out anything more will you please e-mail me (mccollam@arizona.edu)?
And, again, thanks for your response.
Honored Contributor

Re: Migration of MSCS Cluster to new blades, keeping MA8000

Hi,

I'm not up-to-date on Blade Systems, I rarely get to see them. But I can see from the QuickSpecs that you have 2 options on FC connectivity for blades - either Emulex (394588-B21) or Qlogic-based (381881-B21) HBAs. I was recommending you use the Emulex one.

Since the MA8000 has been EOL for a while now, support for newer server products and operating systems is tricky. I think VMware is unsupported because it uses its own multipathing drivers and does not support the older storage.
I think you would be able to get your cluster working, but I can't see HP supporting this config. Actually, I had a situation where a customer wanted to migrate Windows servers from an MA8000 to a newer EVA6000, but there was no HBA driver that supported simultaneous access to both, and HP said they would "never qualify this solution". The older HBA driver does not support the new EVAs, and the newer HBA drivers don't support HSG80-based storage. We got it working with the older HBA drivers anyway. Even multipathing went OK: we used MPIO for the EVAs and SecurePath for the HSG80, and each took care of its own LUNs and ignored the other's.

I think your HP rep may "persuade" you to migrate the cluster data to an EVA. :-)

Regards,
Stephen
New Member

Re: Migration of MSCS Cluster to new blades, keeping MA8000

Stephen.
Ironically, we have an EVA5K on site - which I ordered and shepherded through purchasing over a year ago to replace our MA8000 - only to have a "rival" department walk off with it. So now we're in the odd position of having to ask them to put our production DB - in a non-best-practice configuration, no doubt - on our "own" (formerly, at least) EVA5K. I'll be willing to bet there'll be NO cloning or snapshot options available to us either.
Tsk.
Again, thanks much for your help.
Don