MSA Storage

Velocity
New Member

Adding new MSA1000 to existing RA4100 San

Hi,
I need to add a new MSA shelf to our existing SAN, off which a couple of SQL clusters run. The plan I have for doing this is as follows:
1. Connect the MSA up to both SAN switches.
2. Power the MSA on.
3. Identify SQL node 1 to the storage through SSP.
4. Scan for new hardware on SQL node 1 and install the MSA.
5. Create RAID sets and partitions on the storage from node 1.
6. Create a new cluster disk group with node 1 as the preferred owner (see the sketch after this list).
7. Identify SQL node 2 to the MSA via SSP.
8. Scan for new hardware and install the MSA on node 2.
9. If not automatically done, create redundant paths via Secure Path Manager on node 1.
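For step 6, I'm assuming the group and preferred-owner part can be done from the command line with cluster.exe rather than Cluster Administrator; the group and node names below are just placeholders for ours:

cluster group "MSA1000 Disks" /create
cluster group "MSA1000 Disks" /setowners:SQLNODE1,SQLNODE2
cluster group "MSA1000 Disks" /moveto:SQLNODE1

(The physical disk resources would get added to that group later, once both nodes can see the disks.)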

Just wondering whether this is the correct order to do things, whether anything is missing, and whether I'd actually have to power off SQL node 2 before adding the storage.
Any help is much appreciated!
Thanks, Dave.
Uwe Zessin
Honored Contributor
Solution

Re: Adding new MSA1000 to existing RA4100 San

Hello Dave,
the SAN Design Guide says:

""Servers accessing RA4100 or RA4000 storage systems must not have access to EVA5000/EVA3000, HP XP or VA, EMA/ESA12000, EMA16000, MA/RA8000, MA6000, or MSA1000 storage systems. Zoning is required to prevent access from servers to multiple storage system types when configuring these storage systems in the same physical SAN.""
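If your switches are Brocade-based (as most Compaq/HP SAN switches of that generation are), you can check from a telnet session whether any zoning is currently in effect before deciding how to add the new MSA, e.g.:

cfgshow
zoneshow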
Velocity
New Member

Re: Adding new MSA1000 to existing RA4100 San

Hi Uwe,

We've already got another MSA1000 running on the same SAN fabric, so although it may not be the right thing to do according to HP, you can have an RA4100 and an MSA1000 co-existing on the same SAN without zoning in place!

Any idea about the steps I outlined above to add the additional MSA1000?

Thanks,
Dave.
Steven Clementi
Honored Contributor

Re: Adding new MSA1000 to existing RA4100 San

At the very least, if you're going to run an unsupported configuration, I would zone the storage separately to try to minimize any issues that might arise.

First, I would make sure that both MSAs are running the same firmware.

After that, since your servers already have Secure Path on them, I would not leave step 9 to the end; I would confirm that the redundant paths are there from the get-go (right after you present the storage to each node separately).

If you're going to zone, make sure only node 1 is zoned to see the new MSA. Create your storage and configure SSP. Once SSP is configured, you can change your zoning to allow host 2 to see the MSA so you can scan in the hardware and such. Don't add it to the SSP yet, though.
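Assuming Brocade-based switches, that zoning change for node 1 would look something like this from a telnet session (the aliases and WWPNs below are made-up examples; substitute your HBA and MSA controller port WWPNs, and repeat on the second switch if your two switches are separate fabrics):

alicreate "SQL1_HBA1", "50:06:0b:00:00:00:00:01"
alicreate "NEWMSA_CTRL1", "50:08:05:f3:00:00:00:01"
zonecreate "SQL1_NEWMSA", "SQL1_HBA1; NEWMSA_CTRL1"
cfgadd "PROD_CFG", "SQL1_NEWMSA"
cfgsave
cfgenable "PROD_CFG"

Later on, you would create a matching zone for node 2's HBAs and cfgadd/cfgenable it the same way.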

Moving along, I usually like to make sure both servers see the new disks as the same disk number and same drive letter before adding them as a cluster resource. Create/present your storage to node 1, write the signature, format, and assign the drive letter.
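On Windows Server 2003 the partition and drive-letter part can be done from the command line with diskpart (on Windows 2000 you would use Disk Management instead); the disk number, letter, and label here are just examples, and a brand-new disk may first need its signature written from Disk Management:

diskpart
  list disk
  select disk 2
  create partition primary
  assign letter=S
  exit
format S: /fs:ntfs /v:SQLDATA /q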

Afterwards, change the SSP to only allow host 2 to see the drives... confirm disk numbers and drive letters. Add the disks as resources into the cluster on node 2. Afterwards, change the SSP to allow both hosts access. Sometimes I change the SSP beforehand so both nodes have access to the drives; since we're not writing to them at this point, there is usually no chance of corruption, or very little chance.
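If you prefer the command line to Cluster Administrator, adding the disk resource can be scripted with cluster.exe; the resource, group, and signature values below are placeholders, and the Signature private property has to match what is actually on the disk (Cluster Administrator fills it in for you):

cluster res "Disk S:" /create /group:"MSA1000 Disks" /type:"Physical Disk"
cluster res "Disk S:" /priv Signature=0x12AB34CD
cluster res "Disk S:" /online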

Wow... you're done.


I do not condone this configuration, but am open-minded to it. ;o) Usually, there are specific reasons why HP would recommend NOT doing something. Sometimes you can get away with it and never have any issues, sometimes not.


Steven
Steven Clementi
HP Master ASE, Storage, Servers, and Clustering
MCSE (NT 4.0, W2K, W2K3)
VCP (ESX2, Vi3, vSphere4, vSphere5, vSphere 6.x)
RHCE
NPP3 (Nutanix Platform Professional)
Velocity
New Member

Re: Adding new MSA1000 to existing RA4100 San

Thanks for your help Steven - after discussing it for a while we decided the sensible option would be to migrate everything onto the new MSA - your advice will definitely come in handy when we do the migration though!
Cheers, Dave.