
Connecting 2nd EVA to the fabric, All Linux crashed, Windows Hosts lost paths ...

Hi guys,

We had a very strange issue yesterday that I cannot explain; I'd appreciate any ideas...

We have an EVA6100 (XCS 6.110) that has been running for a few weeks on some Brocade switches (FW 5.3.0d).
We have some Linux (RHEL 3 & 4) and Windows 2003 servers, all patched to comply with the EVA streams (from the SPOCK web site). The LUNs on this EVA6100 were created by CA replication from an old EVA5000. For the migration, the CA replications were deleted and the new EVA6100 was moved to new SAN switches (M-series to B-series). This migration went very well.

Yesterday the second part of the project started: upgrading the old EVA5000 to an EVA6100 by replacing the controllers, loop switches, EMU and I/O modules.

After the hardware work, the rebuilt EVA6100 (the old EVA5000) was started on the old M-series fabric. It rediscovered the disk groups, LUNs and so on. The array was then uninitialized and reinitialized, but the WWN of the old EVA5000 was kept.
Once all this was done, the rebuilt EVA6100 was moved from the M-series fabric to the B-series fabric without being shut down. The zoning was already in place so that all servers could see it, and so that the other EVA could see it too (in order to run CA replications in the other direction).

When we connected that EVA, all the Linux servers went into unpredictable reboots.
On the Windows hosts we noticed that we lost paths to the production EVA6100.
Convinced that the problem came from connecting the rebuilt EVA to the SAN while it was online, we decided to reboot the controllers of the rebuilt EVA. Same symptoms on the Linux hosts, which rebooted again. We of course decided to shut down the rebuilt EVA and to investigate.

Once the rebuilt EVA was shut down, we used SANsurfer to investigate the situation from the HBA point of view. We noticed that the rebuilt EVA was still visible in SANsurfer, but also that some paths to the production EVA were still inaccessible. A reboot of the servers fixed all of this from the server point of view, so the problem should not come from an instability of the fabrics.

Of course the operation will be retried with the systems offline, but we are wondering what happened.

Can connecting an EVA to the fabric that was previously known as an EVA5000 and is now an EVA6100 (with the same WWN) cause this?
Why was there an impact on the paths to the production EVA?
Or can the servers simply not handle the online addition of an EVA to the fabrics?

Should we change the WWN of the rebuilt EVA from the original EVA5000 one to the one delivered with the new HSV200-A controller pair?

If anyone has ideas…

Best regards,

Louis-Marie.
2 REPLIES
Rob Leadbeater
Honored Contributor

Re: Connecting 2nd EVA to the fabric, All Linux crashed, Windows Hosts lost paths ...

Hi,

I'll take a guess that you've somehow managed to introduce some duplicate disk UUIDs onto the fabric. That is likely to have sufficiently confused all the servers to make them reboot...

Can you confirm what state the rebuilt EVA6100 was in when it was reconnected to the B-series fabric? Did it have any disk groups defined, etc.?
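As a side note, the duplicate-ID theory can be checked from the host side: collect each LUN's SCSI identifier as the hosts see it (on Linux, typically from `scsi_id` output) and flag any identifier presented by more than one array. A minimal sketch in Python; the array names, device nodes and WWIDs below are made-up placeholders, not values from this SAN:

```python
from collections import defaultdict

def find_duplicate_wwids(luns):
    """luns: iterable of (array_name, device, wwid) tuples, e.g. parsed
    from per-host `scsi_id` output. Returns {wwid: set(arrays)} for any
    WWID presented by more than one array -- the duplicate-ID condition
    suspected in this thread."""
    seen = defaultdict(set)
    for array, _device, wwid in luns:
        seen[wwid].add(array)
    return {w: arrays for w, arrays in seen.items() if len(arrays) > 1}

# Hypothetical inventory: the rebuilt EVA kept the old EVA5000 WWN,
# so one identifier appears behind two different arrays.
inventory = [
    ("EVA6100-prod",    "/dev/sda", "3600508b4000139e50000500000a70000"),
    ("EVA6100-rebuilt", "/dev/sdb", "3600508b4000139e50000500000a70000"),
    ("EVA6100-prod",    "/dev/sdc", "3600508b4000139e50000500000b90000"),
]
# Prints the WWID that is visible behind both arrays.
print(find_duplicate_wwids(inventory))
```

An empty result from such a scan would point the investigation back at the fabric rather than at duplicate device identifiers.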

Cheers,

Rob

Re: Connecting 2nd EVA to the fabric, All Linux crashed, Windows Hosts lost paths ...

Hi Rob,

Thanks for your answer.

The rebuilt EVA had been freshly re-initialised when it was reconnected to the Brocade switches, with disk groups defined under manually entered names.
Hosts and folders were recreated with SSSU as they are on the production EVA, but there were no vdisks on the rebuilt EVA6100.
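For reference, the SSSU recreation of hosts and folders looks roughly like the following. This is a sketch from memory of the SSSU command set; the manager name, system name, folder path, host name and WWN are all placeholders, so check the exact syntax against the SSSU reference for your Command View EVA version:

```
SELECT MANAGER cveva-host USERNAME=admin PASSWORD=secret
SELECT SYSTEM "EVA6100-rebuilt"
ADD FOLDER "\Hosts\Linux"
ADD HOST "\Hosts\Linux\server1" WORLD_WIDE_NAME=2100-0000-c92e-1234
```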

Best regards,

Louis-Marie