Tom O'Toole
Respected Contributor

EVA/AIX/MPIO - Is anybody using this?


Hi all,

I'm interested in whether anybody else is using this combination and would like to share configuration info, versions of the various components, experiences, etc.

At my site, we're doing LVM mirroring of data between two EVA arrays, and booting from SAN, so these areas are of high interest to me.

Thanks!!
Can you imagine if we used PCs to manage our enterprise systems? ... oops.
3 REPLIES
Bret Graham
Valued Contributor

Re: EVA/AIX/MPIO - Is anybody using this?

Hi Tom,

I have talked with some who use MPIO in different configurations. What specifically are you looking for?

I think MPIO on AIX has not been used as much because of a limitation in previous versions of the XCS firmware on the EVA4/6/8000s. On XCS V5.x the queue depth of any EVA hdisk on AIX is limited to one. As of XCS 6.000 and the new MPIO 1.0.1.0 you can use higher queue depths on EVA hdisks.
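
For reference, a minimal sketch of checking and changing the queue depth on an EVA hdisk (hdisk2 and the value 8 are only examples; the disk has to be closed, or you can stage the change with -P and reboot):

  # show the current queue depth of the hdisk
  lsattr -El hdisk2 -a queue_depth

  # raise it (device must not be in use, or add -P and reboot)
  chdev -l hdisk2 -a queue_depth=8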

I find using MPIO very straightforward with the EVAs.

Do you have specific questions?

Regards,
Bret
Bernd Reize
Trusted Contributor

Re: EVA/AIX/MPIO - Is anybody using this?

Hi Tom,

By MPIO, do you mean AIX's native MPIO?

We have connected several AIX hosts to our EVA too, but we use AntemetA for path failover, so we have never used native MPIO directly.
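
If it helps to compare setups, a quick way to see which disks are under native MPIO and what their paths look like (hdisk2 is just an example; AntemetA-managed devices will present differently):

  # MPIO-managed disks are listed with an MPIO-style description
  lsdev -Cc disk

  # paths and their state for one disk
  lspath -l hdisk2

  # the path-selection algorithm is an ODM attribute of the hdisk
  lsattr -El hdisk2 -a algorithm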

regards,
bernd
Tom O'Toole
Respected Contributor

Re: EVA/AIX/MPIO - Is anybody using this?


Thanks Bret,

I don't know how well known the queue-depth-of-one issue is. We've been using MPIO for quite a while, and at first the queue depth limitation was only mentioned in an internal document. It has recently, finally, been put into the connectivity documents (I find the documentation on this limited, poorly organized, and difficult to find, thus my interest in starting a discussion). We are using MPIO 1.0.0.3 and getting OK performance most of the time, even with the queue depth problem. Moving to XCS 6 and the required EVA MPIO 1.0.1.0 is not something we can implement without a testing period and downtime.
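
For comparing notes, a quick way to record the versions in play (the grep pattern is only a guess at the fileset naming; adjust to whatever the EVA MPIO/PCM fileset is actually called on your system):

  # AIX maintenance level
  oslevel -r

  # installed MPIO-related filesets and their versions
  lslpp -l | grep -i mpio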

We either missed the requirement for fast_fail, or it was not in the documentation at the time we first implemented MPIO, but we found out firsthand just how bad path failover was with the IBM default of delayed_fail when a switch failed. After that we did extensive failure testing with fast_fail and found the path-switch behavior pretty good.
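
For anyone else setting this up, the attribute lives on the fscsi devices; something along these lines (fscsi0 is just an example instance, and -P stages the change until the device is reconfigured or the system reboots):

  # current FC error-recovery policy
  lsattr -El fscsi0 -a fc_err_recov

  # switch to fast_fail
  chdev -l fscsi0 -a fc_err_recov=fast_fail -P

  # dynamic tracking is often enabled alongside it (check your support matrix first)
  chdev -l fscsi0 -a dyntrk=yes -P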

We have also implemented LVM mirroring of LVs to two EVAs, to protect against the loss of an entire EVA array and to allow for things like mass disk firmware updates, etc. In testing, which included powering off both controllers of a pair, LVM handled failover correctly: the hdisk on the powered-off array was marked missing after a short interval, and processing continued. After booting the array, the hdisk is recovered with varyonvg.
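
For reference, the recovery side of that kind of test looks roughly like this (datavg and the hdisk names are placeholders):

  # check mirror copies and which PV is marked missing
  lsvg -l datavg
  lsvg -p datavg

  # after the failed array is back, reintegrate the missing PV and resync stale copies
  varyonvg datavg
  syncvg -v datavg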

We then had a real-world EVA failure when an array stopped processing host I/O. Whatever state the array was in, it was not covered by our testing :-( In this case, LVM mirroring did not work and the system hung. We are now investigating what parameters and versions of components could be used to force a timely resumption of processing on the surviving mirror in cases like these. The IBM LVDD (logical volume device driver) does not appear to have a timeout mechanism to kick out an unresponsive mirror; it depends on the underlying disk/adapter subsystem to return an error.
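
The obvious knobs to investigate are the per-hdisk ODM attributes below; whether they actually bound a hang in this failure mode is an open question, and the attribute set exposed by the EVA PCM may differ from the default AIX PCM (hdisk2 and the values are placeholders):

  # health-check and I/O timeout attributes on an MPIO hdisk
  lsattr -El hdisk2 -a hcheck_interval -a hcheck_mode -a rw_timeout

  # example change (disk must be closed, or add -P and reboot)
  chdev -l hdisk2 -a hcheck_interval=60 -a rw_timeout=30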



Can you imagine if we used PCs to manage our enterprise systems? ... oops.