Del_3
Trusted Contributor

Ownership changes on A/A EVA Controllers

What event, other than a SAN component failure (controller, switch, cable, etc.), would cause an I/O (read or write) request to be sent to the slave controller? I have seen this happen in the controller logs, but I can't figure out which component caused the change. Is this an event triggered by host path management software?

Thanks

IBaltay
Honored Contributor

Re: Ownership changes on A/A EVA Controllers

Hi,
in A/A EVAs, all writes always go to both controllers (via the mirror ports). Reads go only to the managing controller if MPIO ALUA is set up correctly; otherwise they can also go to the proxy controller (the one that does not access the back-end disks directly, but only via the managing controller).
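
To make the two paths concrete, here is a minimal Python sketch of the behaviour described above, assuming a two-controller array with a mirrored write cache. All names are illustrative, not EVA firmware logic:

class Controller:
    def __init__(self, name):
        self.name = name
        self.cache = {}          # stands in for the controller's cache

class Vdisk:
    def __init__(self, name, managing, proxy):
        self.name = name
        self.managing = managing # the managing (owning) controller
        self.proxy = proxy       # the non-managing (proxy) controller

    def write(self, received_on, block, data):
        # Writes are mirrored into both caches via the mirror ports,
        # so the receiving controller barely matters for writes.
        self.managing.cache[block] = data
        self.proxy.cache[block] = data

    def read(self, received_on, block):
        if received_on is self.managing:
            return self.managing.cache[block]   # optimized read
        # Proxy read: the request is handed over the mirror port to the
        # managing controller, which services it on the proxy's behalf.
        print(f"proxy read for {self.name} via controller {self.managing.name}")
        return self.managing.cache[block]

a, b = Controller("A"), Controller("B")
vd = Vdisk("vd01", managing=a, proxy=b)
vd.write(b, block=0, data=b"payload")  # write arriving on B: mirrored anyway
vd.read(b, block=0)                    # read arriving on B: proxy read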
the pain is one part of the reality
Patrick Terlisten
Honored Contributor

Re: Ownership changes on A/A EVA Controllers

Hello Del,

this is normal behavior. I/O can be received by both controllers. Because of the mirrored cache, a write I/O goes to both controllers, but a read I/O can only be handled by the owning controller. If an I/O is received by the "wrong" controller, it is transferred over the mirror port and serviced by the owning controller. If too many I/Os are received by the "wrong" controller, the LUN ownership is changed. This threshold can only be set through SSSU.
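
As an illustration of that threshold mechanism, here is a small Python sketch. The window size and threshold values below are assumptions made up for the example; the real values live in the firmware and, per the above, are tunable only through SSSU:

class Lun:
    WINDOW = 100          # assumed sample size
    THRESHOLD = 0.5       # assumed: majority of reads are proxy reads

    def __init__(self, owner):
        self.owner = owner                  # "A" or "B"
        self.reads = {"A": 0, "B": 0}

    def read(self, received_on):
        self.reads[received_on] += 1
        total = self.reads["A"] + self.reads["B"]
        if total < self.WINDOW:
            return
        proxy = total - self.reads[self.owner]
        if proxy / total > self.THRESHOLD:
            # Most reads were proxied: transition the LUN to the
            # controller that actually receives them.
            self.owner = "B" if self.owner == "A" else "A"
        self.reads = {"A": 0, "B": 0}       # start a new window

lun = Lun(owner="A")
for _ in range(100):
    lun.read("B")          # host keeps hitting controller B
print(lun.owner)           # -> "B": ownership followed the read traffic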
Best regards,
Patrick
B.K
Advisor

Re: Ownership changes on A/A EVA Controllers

Hi Del,

This is normal behaviour. The controllers try to balance the I/O load, so if you set the preference to "No Preference" when creating the vdisk, ownership can change to the other controller at any time. It is generally recommended to choose the "No preference" option and let the controllers decide how to handle the vdisks.

Please assign points if you are satisfied with the answer.
IBaltay
Honored Contributor

Re: Ownership changes on A/A EVA Controllers

Hi,
to choose between
a) set preferred path to controller A/B failover/failback, or
b) none,

it differs among operating systems:

1. HP-UX pre-11.31 (no native MPIO) - always set preferred path to controller A/B failover/failback

2. HP-UX 11.31 (native MPIO) - none

(HP-UX 11.31 native MPIO has a lot of switches; for example, it knows how to fail back to the original managing controller after a failure.)

3. Tru64 - set preferred path to controller A/B failover/failback

4. Linux - set preferred path to controller A/B failover/failback

5. Windows - set preferred path to controller A/B failover/failback

6. AIX/Solaris - set preferred path to controller A/B failover/failback
the pain is one part of the reality
Del_3
Trusted Contributor

Re: Ownership changes on A/A EVA Controllers

I appreciate the responses, but y'all are not answering my question exactly.

Let's talk about read I/O only. Why would a host send a read request to the slave controller? And especially, why would so many be sent to the slave that it would cause a transfer of ownership to that controller?

I hope this clarifies my question.

Thanks again.
Uwe Zessin
Honored Contributor
Solution

Re: Ownership changes on A/A EVA Controllers

> The controllers try to balance the I/O load

No, they do not. I/O requests come from the hosts and the controllers respond to them.

The round-robin ownership assignment of vdisks during EVA boot is only a minor part of the whole picture.
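
For illustration only, that boot-time round-robin amounts to something like this (a hypothetical sketch, not firmware code):

vdisks = ["vd1", "vd2", "vd3", "vd4", "vd5"]
# Alternate ownership between the two controllers at array startup.
ownership = {vd: ("A" if i % 2 == 0 else "B") for i, vd in enumerate(vdisks)}
print(ownership)  # {'vd1': 'A', 'vd2': 'B', 'vd3': 'A', 'vd4': 'B', 'vd5': 'A'}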


> Why would a host send a read request to the slave controller?

slave = non-owning controller?


1.) Because the host OS does not understand ALUA correctly - e.g. VMware V3:

c01736756 - HP StorageWorks Enterprise Virtual Array (EVA) - All vdisks Presented to VMware ESX 3.x Server's Ownership on "Controller A" and Causing Performance Issues

http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?lang=en&cc=us&objectID=c01736756


2.) The MPIO is (by default) not configured for handling ALUA - e.g. the Windows MPIO DSM.


3.) There are bugs in the MPIO - e.g. some versions of the Windows MPIO DSM do not behave correctly unless there are EVA targets visible on all initiator ports.
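
A rough Python sketch of the difference between an ALUA-unaware multipath policy (which round-robins over every path and so generates proxy reads) and an ALUA-aware one (which sticks to active/optimized paths). The path table and both policies are simplified illustrations, not any vendor's DSM logic:

from itertools import cycle

paths = [
    {"port": "A1", "state": "active/optimized"},      # managing controller
    {"port": "A2", "state": "active/optimized"},
    {"port": "B1", "state": "active/non-optimized"},  # proxy controller
    {"port": "B2", "state": "active/non-optimized"},
]

def alua_unaware(paths):
    # Plain round-robin: half of all reads land on the non-managing
    # controller and turn into proxy reads.
    return cycle(paths)

def alua_aware(paths):
    # Prefer active/optimized paths; fall back to the rest only if the
    # optimized set is empty (e.g. after a controller failure).
    optimized = [p for p in paths if p["state"] == "active/optimized"]
    return cycle(optimized or paths)

picker = alua_unaware(paths)
print([next(picker)["port"] for _ in range(4)])  # ['A1', 'A2', 'B1', 'B2']
picker = alua_aware(paths)
print([next(picker)["port"] for _ in range(4)])  # ['A1', 'A2', 'A1', 'A2']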
IBaltay
Honored Contributor

Re: Ownership changes on A/A EVA Controllers

Let's go through a few examples / possible root causes:

Example 1

If NONE: after controller A fails (having originally been negotiated as the managing controller for roughly half of the EVA's vdisks), some MPIO implementations (e.g. Windows MPIO) have no mechanism to fail back to the original controller, and thus all vdisks end up managed by one controller only.

If SET PREFERRED PATH TO CONTROLLER A/B FAILOVER/FAILBACK: the EVA controller forcibly fails ownership back to the original managing controller, but if the MPIO does not recognize this, it keeps reading/writing through the proxy, and after about an hour the proxy controller becomes the managing one again.
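
A rough Python sketch of Example 1's two policies, assuming controller A fails and later returns; names and structure are made up for illustration, not EVA or MPIO code:

def fail_controller(owners, failed):
    # Every vdisk owned by the failed controller moves to the survivor.
    survivor = "A" if failed == "B" else "B"
    return {vd: survivor if c == failed else c for vd, c in owners.items()}

def controller_returns(owners, returned, preferred, policy):
    if policy == "NONE":
        # No failback mechanism: the survivor keeps everything.
        return owners
    # FAILOVER/FAILBACK: vdisks preferring the returned controller go home.
    return {vd: returned if preferred[vd] == returned else c
            for vd, c in owners.items()}

owners = {"vd1": "A", "vd2": "B"}        # negotiated at boot
preferred = {"vd1": "A", "vd2": "B"}

owners = fail_controller(owners, "A")
print(owners)  # {'vd1': 'B', 'vd2': 'B'}: everything on controller B

print(controller_returns(owners, "A", preferred, "NONE"))
# -> still all on B: nothing moves ownership back

print(controller_returns(owners, "A", preferred, "FAILBACK"))
# -> {'vd1': 'A', 'vd2': 'B'}: vd1 returns to its preferred controller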


Example 2

When the managing controller has battery problems, reads/writes are committed via the proxy controller, which takes over the managing role. In that case the MPIO/multipathing software may not correctly recognize the change, or it recognizes it but does not return to the original managing controller once the battery is charged up and fine again, so there are long periods (around an hour) of non-optimal reads/writes via the proxy controller.
the pain is one part of the reality
Uwe Zessin
Honored Contributor

Re: Ownership changes on A/A EVA Controllers

I can understand an MPIO not checking for changes in the optimized path every couple of seconds, but taking more than a few minutes, or never checking at all, sounds like a design/implementation failure.
Peter Mattei
Honored Contributor

Re: Ownership changes on A/A EVA Controllers

Del, please be a bit more precise.

- What OS are you talking of?

- Have you implemented an ALUA-capable MPIO solution and activated it (like the Windows EVA DSM with ALB enabled)?

In general:
There is no slave controller on the EVA; both controllers are always active.
If you look at it from a LUN perspective, there is a LUN-owning controller, as described in the posts above.

LUN transition between controllers happens on failures of SAN components or when the majority of reads get proxied - this is called "Implicit LUN transition".

From the EVA User Guide (current link): http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01681309/c01681309.pdf

Implicit LUN transition automatically transfers management of a virtual disk to the array controller that receives the most read requests for that virtual disk. This improves performance by reducing the overhead incurred when servicing read I/Os on the non-managing controller. Implicit LUN transition is enabled in VCS 4.x and all versions of XCS.

When creating a virtual disk, one controller is selected to manage the virtual disk. Only this managing controller can issue I/Os to a virtual disk in response to a host read or write request. If a read I/O request arrives on the non-managing controller, the read request must be transferred to the managing controller for servicing. The managing controller issues the I/O request, caches the read data, and mirrors that data to the cache on the non-managing controller, which then transfers the read data to the host. Because this type of transaction, called a proxy read, requires additional overhead, it provides less than optimal performance. (There is little impact on a write request because all writes are mirrored in both controllers' caches for fault protection.)

With implicit LUN transition, when the array detects that a majority of read requests for a virtual disk are proxy reads, the array transitions management of the virtual disk to the non-managing controller. This improves performance because the controller receiving most of the read requests becomes the managing controller, reducing proxy read overhead for subsequent I/Os.

Implicit LUN transition is disabled for all members of an HP Continuous Access EVA DR group. Because HP Continuous Access EVA requires that all members of a DR group be managed by the same controller, it would be necessary to move all members of the DR group if excessive proxy reads were detected on any virtual disk in the group. This would impact performance and create a proxy read situation for the other virtual disks in the DR group. Not implementing implicit LUN transition on a DR group may cause a virtual disk in the DR group to have excessive proxy reads.

Cheers
Pete
I love storage