02-14-2011 09:20 AM
BladeSystem c3000 Enclosure 2nd Interconnect bay not accessible on iSCSI network
Hello everyone!
I have inherited a BladeSystem c3000 Enclosure with 3 x ProLiant BL460c G1 blades and 2 x GbE2c Layer 2/3 Ethernet Blade Switches. We are currently migrating from an EMC Clariion CX3 to an EMC Celerra NX4. On both Ethernet blade switches, ports 23 & 24 were dual-corded to the EMC Clariion's two storage processors for redundancy. Within the two switches, both sets of ports are on a separate VLAN dedicated to iSCSI.

As I was migrating over to the NX4, I realized that the Ethernet blade in the 2nd interconnect bay was not showing up on our iSCSI network and was possibly never configured for it. I looked over the web interfaces of both the chassis and the two Ethernet blades and saw no difference between them that might cause this. The only thing that differed was in Port Mapping under the chassis' Onboard Administrator: the Ethernet blade switch that is accessible on the iSCSI network shows two Device IDs per interconnect bay port, while the one that isn't shows only one device per port. I assume there must be a way to set this up, but I can't find where. Any help or suggestions would be appreciated!
02-18-2011 12:01 PM
Re: BladeSystem c3000 Enclosure 2nd Interconnect bay not accessible on iSCSI network
So I've got the 2nd interconnect bay working with the info I've gathered! Here is what happened. When I took over the IT position at my company, the c3000 and EMC SAN were already in place and configured, and I had no documentation or explanation of the install; to be honest, I had no clue how the setup worked anyway. In moving to a new SAN (and asking a lot of questions) I found out the following:
1. The cross-connect ports on both interconnect bays (17 and 18) were in separate VLANs (17 was in the default VLAN, and 18 was in the iSCSI VLAN). They also were not in a trunk group. So I moved them to the default VLAN and added them to the 1st trunk group. I then turned on VLAN tagging and made them both members of the default VLAN and the iSCSI VLAN so that they could carry traffic for both VLANs between the two bays. This now allows my blades to see any iSCSI connections on the 2nd interconnect bay.
2. Ports are hard-wired to the individual bays and cannot be changed. The onboard NICs on each blade have a separate iSCSI offload engine that shows up with its own MAC address, different from the actual NIC's (for a total of 4 MAC addresses per blade). The mezzanine cards I have do not have an iSCSI offload engine and therefore only show up as 2 NICs.
3. For true redundancy I would have to put one of the mezzanine slot's NICs on the iSCSI network. This would allow the 1st interconnect bay to fail while iSCSI traffic still flows through the 2nd interconnect bay.
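For anyone hitting the same thing, the change in step 1 can be sketched roughly in the GbE2c's menu-style CLI. Everything here is an assumption for illustration: the VLAN ID (2), the cross-connect port numbers (17-18), the trunk group number (1), and the exact command paths all need to be checked against your switch firmware's command reference before applying.

```
/cfg/l2/vlan 2        (assumed iSCSI VLAN ID; pick your real one)
	ena
	add 17              (make both cross-connect ports members)
	add 18
/cfg/port 17
	tag ena             (enable 802.1Q tagging so the port can carry both VLANs)
/cfg/port 18
	tag ena
/cfg/l2/trunk 1
	ena
	add 17              (bundle the cross-connect links into one trunk group)
	add 18
apply
save
```

The key idea is just that the inter-bay links must be tagged members of every VLAN whose traffic needs to cross between the two interconnect bays.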
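The "4 MAC addresses" behavior in step 2 is easy to confirm from the OS. A minimal sketch for a Linux host (interface names will differ on your blades) that lists each interface with its MAC:

```shell
# List each network interface with its MAC address (Linux sketch).
# On a blade with multifunction NICs, each onboard port can enumerate
# twice: once as the plain NIC and once as its iSCSI offload function,
# each with its own MAC -- which is why four MACs show up per blade.
for dev in /sys/class/net/*; do
    printf '%-12s %s\n' "$(basename "$dev")" "$(cat "$dev/address")"
done
```

Matching these MACs against the Device IDs in Onboard Administrator's Port Mapping page is a quick way to tell which OS-level interface lands on which interconnect bay.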
Now, my (hopefully final) question: looking over the PDF specs, it looks like there is some sort of iSCSI optimization on my mezzanine cards (NC373m Dual Port Multifunction 1Gb NIC for c-Class BladeSystem). Does this mean I can run both iSCSI and network traffic across the NIC, separated by VLANs of some sort, or would I need to dedicate the NIC to either iSCSI or network traffic?
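On the host side, sharing one NIC between iSCSI and regular LAN traffic usually comes down to a tagged VLAN sub-interface. A minimal sketch for a Linux host, where eth2 (the mezzanine port), VLAN ID 2, and the 192.168.2.0/24 iSCSI subnet are all hypothetical placeholders:

```
# Hypothetical: eth2 = mezzanine NIC port, VLAN 2 = iSCSI VLAN,
# 192.168.2.10/24 = this host's address on the iSCSI subnet.
modprobe 8021q                                    # 802.1Q VLAN support
ip link add link eth2 name eth2.2 type vlan id 2  # tagged sub-interface
ip addr add 192.168.2.10/24 dev eth2.2
ip link set dev eth2.2 up
# Untagged LAN traffic continues to flow on eth2 itself.
```

This only covers the tagging side; whether the NC373m's iSCSI offload function can ride a tagged VLAN is a separate question for the card's documentation.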