BladeSystem - General
02-17-2011 01:15 AM
Re: c3000 Enclosure's 2nd Interconnect bay blade switch not showing up on iSCSI network
Hi,
As Dave already explained, the port mapping from the server blade's built-in ports (and from any added mezzanine cards) is hard-wired; that is by design of the chassis. So no, the mapping you asked about cannot be changed.
If you want redundancy, you need to make sure the mezzanine card you add can also carry iSCSI traffic, and then team/bond the iSCSI ports together at the OS level. That will provide redundancy if one Ethernet interconnect module or uplink fails.
This applies to the c3000 chassis; in a c7000 the mapping is different and offers the redundancy you are looking for by default, since the built-in LAN ports on the server blade's motherboard are physically split across interconnect bays 1 and 2. A c3000 is, so to speak, a c7000 split in half.
HTH,
Kris
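The OS-level teaming Kris describes could look something like this on a Linux blade. This is a minimal sketch assuming a Debian-style `/etc/network/interfaces` with the `ifenslave` package; the interface names (`eth0` on bay 1, `eth2` on bay 2) and the subnet are placeholders, not taken from the thread:

```
# /etc/network/interfaces -- hypothetical bonding sketch (Debian/Ubuntu,
# ifenslave installed). eth0/eth2 and 192.168.50.0/24 are placeholders.
auto bond0
iface bond0 inet static
    address 192.168.50.10
    netmask 255.255.255.0
    bond-slaves eth0 eth2        # one port per interconnect bay
    bond-mode active-backup      # failover only; no switch-side LACP needed
    bond-miimon 100              # check link state every 100 ms
    bond-primary eth0
```

active-backup keeps exactly one port carrying traffic and fails over when its link drops, which fits this layout: the two ports land on separate interconnect modules, so aggregation modes like 802.3ad (which need both ports on the same or stacked switch) would not apply.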
02-17-2011 08:11 AM
Re: c3000 Enclosure's 2nd Interconnect bay blade switch not showing up on iSCSI network
Thanks, everyone, for the input! From further research it looks like my blades have iSCSI offload integrated into the first two onboard NICs, which makes sense of why the offload functions are wired to the same ports as those NICs. With that in mind, does anyone know whether the "NC373m Dual Port Multifunction 1Gb NIC for c-Class BladeSystem" mezzanine card has the same iSCSI offload engine? Since it doesn't show up as four MAC addresses the way the onboard NICs do, I'm going to assume it doesn't. Does that mean iSCSI traffic on the mezzanine card would be slower and/or more CPU-intensive than on the onboard NICs, since it would rely on the CPU instead of an offload engine?
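The MAC-counting heuristic used here (a port with a separate iSCSI offload function presents an extra MAC address) can be checked from sysfs on a Linux blade. This is a generic sketch, not an HPE tool; interface names are whatever the OS assigned:

```python
#!/usr/bin/env python3
"""List network interfaces and their MAC addresses from sysfs (Linux)."""
import os

SYSFS_NET = "/sys/class/net"

def interface_macs():
    """Return {interface_name: mac_address} for all visible interfaces."""
    macs = {}
    for iface in sorted(os.listdir(SYSFS_NET)):
        addr_file = os.path.join(SYSFS_NET, iface, "address")
        try:
            with open(addr_file) as f:
                macs[iface] = f.read().strip()
        except OSError:
            continue  # interface vanished or exposes no address attribute
    return macs

if __name__ == "__main__":
    for name, mac in interface_macs().items():
        print(f"{name}: {mac}")
```

If a physical port shows up twice with different MACs (once as a NIC, once as an iSCSI function), the offload engine is present; a plain NIC shows a single MAC per port.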
02-17-2011 01:11 PM
Re: c3000 Enclosure's 2nd Interconnect bay blade switch not showing up on iSCSI network
02-18-2011 10:23 AM
Re: c3000 Enclosure's 2nd Interconnect bay blade switch not showing up on iSCSI network
Thanks, Kris. I've got the 2nd interconnect bay working with the information everyone has given me! Here is what happened. When I took over the IT position at my company, the c3000 and the EMC SAN were already in place and configured; I had no documentation or explanation of the install, and to be honest I had no clue how the setup worked. In moving to a new SAN (and asking a lot of questions), I found out the following:
1. The X-Connect ports on the two interconnect bays (17 and 18) were in separate VLANs (17 in the default VLAN, 18 in the iSCSI VLAN), and they were not in a trunk group. I moved them both to the default VLAN and added them to the first trunk group. I then turned on VLAN tagging and made both ports members of the default VLAN and the iSCSI VLAN, so they can carry traffic between the two bays on both VLANs. My blades can now see iSCSI connections on the 2nd interconnect bay.
2. Ports are hard-wired to their individual bays and cannot be remapped. The onboard NICs on each blade have a separate iSCSI offload engine that shows up with its own MAC address, distinct from the NIC's (for a total of four MAC addresses). The mezzanine cards I have do not have an iSCSI offload engine and therefore show up as only two NICs.
3. For true redundancy I would have to put one of the mezzanine NICs on the iSCSI network. That way, if the 1st interconnect bay fails, iSCSI traffic can still flow through the 2nd interconnect bay.
Now, my (hopefully final) question: looking over the PDF Kris posted, there appears to be some sort of iSCSI optimization on my mezzanine cards. Does this mean I can run both iSCSI and network traffic across the NIC, separated by VLANs, or would I need to dedicate the NIC to either iSCSI or network traffic?
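On sharing one mezzanine NIC between iSCSI and regular traffic: if the switch port is tagging both VLANs, the OS side is just one tagged subinterface per VLAN on the same physical port. A minimal Debian-style sketch, assuming the `vlan` package; the VLAN ID (20) and all addresses are placeholders:

```
# /etc/network/interfaces -- hypothetical 802.1Q sketch ("vlan" package
# installed). eth2, VLAN 20, and the addresses are placeholders.

# Untagged (native VLAN) traffic: ordinary LAN
auto eth2
iface eth2 inet static
    address 10.0.0.10
    netmask 255.255.255.0

# Tagged iSCSI VLAN on the same physical port
auto eth2.20
iface eth2.20 inet static
    address 192.168.20.10
    netmask 255.255.255.0
    vlan-raw-device eth2
```

Note that VLANs separate the traffic logically but not the bandwidth: iSCSI and LAN traffic still share the same 1Gb link, so dedicating a port to iSCSI remains the safer choice when storage throughput matters.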
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP