Impact Towards EMC Configuration After Adding PCIX I/O Chassis on Our Existing Superdome


Hi Unix Gurus,

We are planning to add a new PCI-X I/O chassis to our existing Superdome, to provide backup in case our existing PCI I/O chassis fails due to an I/O failure. We want to split the two EMC Fibre Channel links that currently connect to our existing PCI I/O chassis in two directions: one connecting to the PCI I/O chassis and the other to the PCI-X I/O chassis. If this scenario is implemented on our existing Superdome, which uses EMC storage, what should we prepare on the EMC side, and what is the impact on our existing EMC configuration? Please advise us. Thanks in advance.

Duncan Edmonstone
Honored Contributor

Re: Impact Towards EMC Configuration After Adding PCIX I/O Chassis on Our Existing Superdome

So if I understand you correctly, you currently have a 'dome partition with a single 12-slot PCI I/O cage attached, yes?

You want to add a second PCI-X I/O cage and move some cards into it, to separate out I/O, reduce SPOFs, etc.

OK first things first - can you do this...

1. I assume your partition contains more than one cell board, as there is a 1-to-1 relationship between cell boards and PCI cages (a PCI I/O cage needs its own cell board to connect to, although a cell board doesn't have to have a PCI I/O cage attached). So if your nPar contains just one cell board, you can't connect two card cages.
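A quick way to check this is with the nPartition commands - a sketch, assuming you run it from a partition with the nPar tools installed (the partition number 0 below is just an example):

```shell
# Show all cells in the complex and which partition / I/O chassis
# each one is connected to
parstatus -C

# Show verbose detail for one partition (example: partition 0),
# including how many cell boards it contains
parstatus -p 0 -V
```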

2. You don't say what sort of 'dome this is... if it contains PA-8600 or PA-8700 CPUs, then a PCI-X I/O card cage isn't supported - you'd need to use a plain PCI cage.
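To confirm what CPUs you have before ordering hardware, something like the following should tell you (a sketch - output format varies by HP-UX release):

```shell
# Print the system model string
model

# List the processors the kernel sees, with their descriptions
ioscan -fkC processor
```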

Assuming you can do it, you can of course expect all your hardware paths for the devices that get moved to the other chassis to change, so:

For any LAN NICs, I'd expect to have to change the entries in /etc/rc.config.d/netconf after the change. If you have Serviceguard, expect to have to halt the cluster, make changes to the /etc/cmcluster/cmclconfig.ascii file, and re-apply it.
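The Serviceguard side of that would look something like this - a sketch, using the ascii file name from above (your cluster may use a different file name):

```shell
# Halt the whole cluster before the interface instance numbers change
cmhaltcl -f

# Edit the cluster ascii file to reference the new lan instance numbers
vi /etc/cmcluster/cmclconfig.ascii

# Verify, then re-apply the configuration
cmcheckconf -C /etc/cmcluster/cmclconfig.ascii
cmapplyconf -C /etc/cmcluster/cmclconfig.ascii

# Restart the cluster
cmruncl
```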

For any FC disk devices, expect the device files to change. Make a note of your current config for the cards that will be moved using 'ioscan -fnH <hw_path>'. Note that only the start of the HW path will change - everything beyond the card itself will remain the same - nevertheless, this will change all the device files behind that card.
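Capturing a before/after picture for each card that will move might look like this (a sketch - 0/0/8/0/0 is a made-up example hardware path; substitute your own from ioscan):

```shell
# Before the move: save the current device map for the card
ioscan -fnH 0/0/8/0/0 > /tmp/card_before.txt

# After the move: rescan the I/O tree and create device files
# for the newly discovered paths
ioscan -fn
insf -e

# Save the new map for the card at its new path and compare
ioscan -fnH <new_hw_path> > /tmp/card_after.txt
diff /tmp/card_before.txt /tmp/card_after.txt
```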

I'm assuming, as you have more than one FC card already, that you have alternate links defined to all your disks, or that you are using PowerPath. Simply identify the disks that will change and vgreduce them out of the volume group - then, when the cards are moved, use the notes you took previously to identify the new disk devices, and vgextend them back in.
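The LVM side of that might look like the following sketch - vg01 and the device files are made-up examples; map your real ones from the saved ioscan output:

```shell
# Before the move: drop the alternate-link path that goes through
# the card being relocated
vgreduce /dev/vg01 /dev/dsk/c10t0d1

# After the move: add the same LUN back via its new device file
vgextend /dev/vg01 /dev/dsk/c14t0d1

# Confirm both primary and alternate links are in place again
vgdisplay -v /dev/vg01
```

Doing it this way means the volume group stays online on the surviving path throughout, so no downtime is needed for the disks themselves.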