dataerror
Occasional Contributor

Connecting 2 redundant MSA 1000's together for ESX 3.5

Hello,

I've been searching/reading about this for a while without finding concrete answers, so I figured I'd come to the place where people have more experience with this stuff.

My environment currently has 2x DL580s connected to a redundant MSA 1000 in active/passive, running ESX 3.0.1. It works fine.

Recently, I managed to obtain another MSA 1000 (with redundant controllers), 2xPE6850's and some extra HBA's. I wish to extend our VMware capacity with this new hardware.

The idea is to upgrade the firmware on all MSAs to active/active and connect the two redundant MSA 1000s together via 2/8 fibre switches, allowing all 4 ESX servers to see all of the LUNs contained within the MSAs. There will be 2 LUNs per MSA, one for each server, so if I understand this correctly, each controller will be dedicated to a particular LUN/server unless a failure occurs or vMotion kicks in.

This is all a bit of a mouthful, so please take a look at the attached image of what is currently there and what it is that I'm trying to accomplish.

While playing around, I tried connecting the second MSA 1000 to the first one; however, ESX started throwing warnings. After a little research, it turns out that ESX thought the LUNs on each MSA were shadow copies of each other.

Being somewhat new to SANs, I have a few questions that I'm hoping some of the experts can address:

- Is my approach sound, and is it achievable with MSA 1000s?
- What did I possibly misconfigure on the second MSA so that the ESX server would think the LUNs are shadow copies? Do I need to name the units on each MSA differently? Do they both need to have the same firmware (the new one is currently active/active)?
- Any other tips/suggestions/warnings?

Thank you for your time.
5 REPLIES
Patrick Terlisten
Honored Contributor

Re: Connecting 2 redundant MSA 1000's together for ESX 3.5

Hello,

after you update the MSAs to A/A, you will need to resignature the datastores, because the SCSI inquiry string on the MSA changes with the firmware update. A/A is supported with ESX, but you should update to 3.5U4. When you connect them, you need to balance the paths yourself; neither the MSA nor VMware will do that for you.

The "snapshot warnings" occur if VMware ESX detects changes in the signature or the LVM header. This can happen due to LUN number changes, SCSI inquiry string changes, etc.
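
From the ESX 3.x service console, the resignaturing and the manual path balancing look roughly like this (just a sketch; vmhba1:0:1 is a placeholder for one of your LUN paths, so check esxcfg-mpath -l first and verify the exact option syntax with --help on your build):

# allow ESX to resignature the volumes it flagged as snapshots
esxcfg-advcfg -s 1 /LVM/EnableResignature
# rescan the HBA so the datastores come back with new signatures
esxcfg-rescan vmhba1
# turn resignaturing off again once the datastores are visible
esxcfg-advcfg -s 0 /LVM/EnableResignature

# manual balancing: fixed policy plus a preferred path per LUN
esxcfg-mpath --policy=fixed --lun=vmhba1:0:1
esxcfg-mpath --preferred --path=vmhba1:0:1 --lun=vmhba1:0:1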

Best regards,
Patrick
marsh_1
Honored Contributor

Re: Connecting 2 redundant MSA 1000's together for ESX 3.5

hi,

you also have a single point of failure with only one FC I/O module in the MSA1000s

hth

Patrick Terlisten
Honored Contributor

Re: Connecting 2 redundant MSA 1000's together for ESX 3.5

Hi,

the update to A/A firmware needs dual controllers and two installed FC I/O modules. Generally, the MSA needs two FC I/O modules if two controllers are installed. Mixed configurations, with an FC hub or switch on one side and an FC I/O module on the other, are not supported.

Best regards,
Patrick
dataerror
Occasional Contributor

Re: Connecting 2 redundant MSA 1000's together for ESX 3.5

Thanks for the replies. I realize that the current setup has a SPOF, but I am trying to minimize that with the future configuration.

I do have (in total) 4x controllers, 2x 2/8 switches and 2x I/O modules (see the image of the future config attached to the original post).

When I resignature the datastores, what is it really doing? Is there potential for data loss or loss of connectivity to the datastores?

Thanks.
Uwe Zessin
Honored Contributor

Re: Connecting 2 redundant MSA 1000's together for ESX 3.5

Resignaturing a VMFS datastore means that it gets a new file system identity. It is used to _regain_ access to a datastore if the original became inaccessible, e.g. due to an identity change of the SCSI LUN (Patrick has mentioned the SCSI inquiry string example).

Another use is to gain access to a clone of the original (which can be a block copy or a snapshot). By default, VMware ESX blocks access because it cannot be 100% sure whether this is an independent copy or just another path to the same data. If it blindly treated it as an alternate path, data corruption could happen.

Now, after the resignature, you have access to the datastore, but you do not have access to the registered VMs, because the path has changed, e.g. from:
/vmfs/volumes/44b5efac-3ad9fe64-eb4b-000e7fadabfa
to:
/vmfs/volumes/44bdd7c8-2f389908-2426-000e7fadabfa

The solution is to re-register the VMs on the new paths. Not pretty, but better safe than sorry.
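
From the service console, this can be done with vmware-cmd, along these lines (a sketch; "myvm" is a made-up name, substitute your own .vmx paths):

# drop the stale registration that still points at the old UUID
vmware-cmd -s unregister /vmfs/volumes/44b5efac-3ad9fe64-eb4b-000e7fadabfa/myvm/myvm.vmx
# register the same .vmx under the resignatured volume's new UUID
vmware-cmd -s register /vmfs/volumes/44bdd7c8-2f389908-2426-000e7fadabfa/myvm/myvm.vmx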

Please do NOT attempt to permanently work around this by setting the DisallowSnapshotLUN parameter. This parameter is meant for:
1. some arrays which have a LUN presentation defect
2. some special situations in environments that use controller-based replication
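
If you want to make sure it is still at its default, you can read the current value from the service console (1 means suspected snapshot LUNs stay blocked, which is what you want here):

esxcfg-advcfg -g /LVM/DisallowSnapshotLUN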