HPE EVA Storage

EVA Continuous Access and vSphere4

 
Steven Clementi
Honored Contributor

EVA Continuous Access and vSphere4

Please don't rip me a new one... I've searched the forums, the Enterprise Library, other websites... to no avail.  If I missed something, please just post a link to it.

 

Here is my scenario:

 

I have a campus SAN: three buildings, all directly connected with fibre.

 

Site 1: EVA6400 - Primary Data Access

Site 2: EVA4400 - Primary Data Backup (CA-Fibre Disk)

Site 3: EVA4400 - Secondary Data Backup (CA-FATA Disk)

 

All three sites have c7000 Blade Enclosures with multiple vSphere 4 Cluster nodes.

 

I am looking for the appropriate settings to allow any given ESX 4 or ESXi 4.1 server to automatically see/use the vdisks in a failed-over DR Group.

 

My initial testing shows that the ESX servers "see" the new paths as part of the original disk config, but the transition seemed to take longer than the timeout period for the test VM, forcing the VM to power off.
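(Side note for anyone hitting the same timeout: one general mitigation, and this is my own assumption rather than anything from the EVA docs, is to raise the in-guest SCSI disk timeout so the guest tolerates the path transition. The device name and the 180-second value below are just examples.)

# Linux guest: raise the disk timeout for /dev/sda to 180 seconds
echo 180 > /sys/block/sda/device/timeout

# Windows guest: set the disk timeout (seconds) in the registry, then reboot
reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 180 /f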

 

The failback process did not go smoothly either, as I had to re-add the datastore.

 

The best practices guide (http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA1-2185ENW.pdf) doesn't have much in it regarding EVA CA, just some general guidance on I/O balancing.

 

The VI3 guide for CA doesn't help either, since those settings are not available in ESX 4.x; they moved to a command-line structure.
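For anyone searching later, the ESX 4.x equivalents live under esxcli nmp in the service console (vCLI for ESXi). A rough sketch of the relevant commands; the naa ID is a placeholder, and I'm assuming the EVA LUNs are claimed by VMW_SATP_ALUA (check esxcli nmp satp listrules to confirm):

# List devices and the path selection policy currently applied to each
esxcli nmp device list

# Show every path for one device
esxcli nmp path list --device naa.600508b4000b0a5d0000f00002340000

# Example: make Round Robin the default PSP for the ALUA SATP
esxcli nmp satp setdefaultpsp --satp VMW_SATP_ALUA --psp VMW_PSP_RR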

 


I am simply looking for anyone who has successfully failed over a DR Group without any issues with the VMs (i.e., the VMs stayed online with perhaps only a minimal interruption in I/O), or who has a link to another guide with the config settings and/or procedure for doing this.

 

Also, I know this works just great with SRM.  Please don't suggest I use it.  I've been down that road already.   

 

Steven Clementi
HP Master ASE, Storage, Servers, and Clustering
MCSE (NT 4.0, W2K, W2K3)
VCP (ESX2, Vi3, vSphere4, vSphere5, vSphere 6.x)
RHCE
NPP3 (Nutanix Platform Professional)
1 REPLY
Steven Clementi
Honored Contributor

Re: EVA Continuous Access and vSphere4

OK, so after some additional testing... I've found that if I pre-populate the disk paths to my replicated disks using the "Inquiry Only" setting on the DR Group, the failover is 99% better/faster.
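For anyone reproducing this, the check from the service console would be something like the following (the vmhba numbers are whatever your hosts actually use):

# Rescan each fabric-facing HBA
esxcfg-rescan vmhba1
esxcfg-rescan vmhba2

# Brief path listing; the replicated vdisks should now show paths to the DR array
esxcfg-mpath -b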

 

I was able to fail over a single VM on a single vdisk without any issues.

 

I was able to fail back the same VM/vdisk with only very minor issues. The VM paused for a moment on failback, but only for a second or three.

 


The question then would be: in a vSphere 4 environment, would it be safe to keep the "Inquiry Only" setting toggled on? If not, I need to better understand the full functionality of the setting, and I have not been able to find additional information on it.

 

In a previous post, Uwe mentions:

" INQ_O is used in some special VMware failover environments to pre-populate SCSI targets/LUNS. "

 

In the Online Help:

" Inquiry only: The virtual disk can be presented to hosts, but hosts can only make SCSI inquiries. No host I/O is allowed. This mode is typically used with host clusters. "

 

Has anyone used this "feature" before who knows more about it?

 

The idea here is to be able to keep things running when maintenance needs to happen (firmware updates, etc.). The environment is a hospital, and many of the ER/doctor apps run in the virtual environment. Obviously, in a real site-failure situation the primary storage may already be offline, and so will all of the VMs that depend on it.

 

With that in mind, would it be better to keep the destination access set to "None" and change it to "Inquiry Only" only when the group needs to fail over?

 

Example Procedure:

1. Plan for maintenance.

2. Set the groups to synchronous mode, if they are currently in async.

3. Set the DR groups' destination access to "Inquiry Only" for the disks in that cluster.

4. Rescan the HBAs in that cluster.

5. Confirm the ESX servers can "see" the DR paths (see the command sketch after this list).

6. Fail over the DR group(s) for that cluster.

7. Turn off "Inquiry Only".

8. Confirm the VMs are running; take appropriate action if not.

9. If more clusters remain, jump back to step 2 for the next one; otherwise continue.

10. Perform maintenance, then follow a similar procedure to fail back once complete.
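As promised in step 5, a rough command sketch for steps 4, 5, and 8 on classic ESX (service console; ESXi 4.1 would need the vCLI/vMA equivalents, and the vmhba names are examples):

# Step 4: rescan the HBAs on each host in the cluster
esxcfg-rescan vmhba1
esxcfg-rescan vmhba2

# Step 5: confirm the DR paths are visible
esxcfg-mpath -b

# Step 8: confirm registered VMs are still powered on
vmware-cmd -l | while read vmx; do vmware-cmd "$vmx" getstate; done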

 

Thoughts?

 

