Re: Impact Towards EMC Configuration After Adding ...
07-19-2005 01:57 PM
						Hi Unix Gurus,
We have planned to add a new PCI-X I/O chassis to our existing Superdome to provide backup in case our existing PCI I/O chassis fails because of an I/O failure. We want to split the two EMC Fibre Channel connections that are currently attached to our existing PCI I/O chassis in two directions: one connecting to the PCI I/O chassis and the other to the PCI-X I/O chassis. If this scenario is implemented on our existing Superdome, which uses EMC storage, what should we prepare on the EMC side, and what is the impact on our existing EMC configuration? Please advise us. Thanks in advance.
Solved! Go to Solution.
1 REPLY
07-19-2005 05:02 PM
Solution
So if I understand you correctly, you currently have a 'dome partition with a single 12-slot PCI I/O cage attached, yes?
You want to add a second PCI-X I/O cage and move some cards into it to separate out I/O and reduce single points of failure (SPOFs).
OK, first things first - can you do this at all?
1. I assume your partition contains more than one cell board, as there is a 1-to-1 relationship between cell boards and PCI cages (a PCI I/O cage needs its own cell board to connect to, but a cell board doesn't have to have a PCI I/O cage attached). So if your nPar contains just one cell board, you can't connect two card cages (a quick way to check is sketched below).
2. You don't say what sort of 'dome this is... if it contains PA-8600 or PA-8700 CPUs then a PCI-X I/O card cage isn't supported - you'd need to use just a PCI cage.
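Neither point is hard to confirm before ordering anything. A sketch, assuming the nPartition commands are installed on your system (the partition number is just an example):

    # List all cells, their active state, attached I/O chassis and partition assignment
    parstatus -C
    # Verbose details for one partition (replace 0 with your partition number)
    parstatus -V -p 0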
Assuming you can do it, you can of course expect all your hardware paths for the devices that get moved to the other chassis to change, so:
For any LAN NICs, I'd expect to have to change the entries in /etc/rc.config.d/netconf after the upgrade. If you have Serviceguard, expect to have to halt the cluster, make changes to the /etc/cmcluster/cmclconfig.ascii file, and re-apply it.
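For example (the interface index, address, and which entries need editing are illustrative here, not taken from your system):

    # /etc/rc.config.d/netconf - point the entry at the NIC's new instance name
    INTERFACE_NAME[0]="lan4"            # was lan0 before the card moved
    IP_ADDRESS[0]="192.168.10.21"
    SUBNET_MASK[0]="255.255.255.0"
    INTERFACE_STATE[0]="up"

    # Serviceguard: halt, edit the ASCII config, verify and re-apply
    cmhaltcl -f
    vi /etc/cmcluster/cmclconfig.ascii   # update NETWORK_INTERFACE / STATIONARY_IP entries
    cmcheckconf -C /etc/cmcluster/cmclconfig.ascii
    cmapplyconf -C /etc/cmcluster/cmclconfig.ascii
    cmruncl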
For any FC disk devices, expect the device files to change. Make a note of your current config for the cards that will be moved using 'ioscan -fnH <hw_path>'. Note that only the start of the HW path will change - everything beyond the card itself will remain the same - nevertheless this will change all the device files behind that card.
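A minimal before/after capture might look like this (the hardware path and file names are placeholders):

    # Before the move: record a full listing with device files
    ioscan -fnC disk > /tmp/ioscan_disk_before.out
    # Or limit it to one card's hardware path (example path only)
    ioscan -fnH 0/0/8/1/0 > /tmp/ioscan_card_before.out

    # After the move: rescan, recreate the special files, and compare
    ioscan -fn
    insf -e
    ioscan -fnC disk > /tmp/ioscan_disk_after.out
    diff /tmp/ioscan_disk_before.out /tmp/ioscan_disk_after.out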
I'm assuming that, as you already have more than one FC card, you have alternate links defined to all your disks, or you are using PowerPath. Simply identify the disks that will change and vgreduce them out of the volume group - when the cards are moved, use the notes you took previously to identify the new disk devices, and vgextend them back in.
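Roughly, assuming vg01 and the device files below are placeholders for your own:

    # Before the card moves: drop the paths that go through the card being relocated
    vgreduce /dev/vg01 /dev/dsk/c5t0d1
    vgreduce /dev/vg01 /dev/dsk/c5t0d2

    # After the move: find the new device files (from the ioscan notes above),
    # then add them back as paths to the same volume group
    vgextend /dev/vg01 /dev/dsk/c9t0d1
    vgextend /dev/vg01 /dev/dsk/c9t0d2
    vgdisplay -v /dev/vg01               # confirm PV status and alternate links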
HTH
Duncan 
I am an HPE Employee