07-24-2013 05:44 AM
BL460 G7 and Virtual Connect 1/10 module question....
Kelly was helping a customer:
****************
Hopefully a simple question for someone….
I was at a customer site today to try to shed some light on an iSCSI performance issue, and saw something I was not expecting. We have two BL460 G7 blades running ESXi, connecting to 1/10Gb-F VC-Enet modules in bays 1 & 2. A consultant set up iSCSI to LeftHand a couple of years ago, but now the customer is having problems. While reviewing the setup (which connections were being used for iSCSI), we were looking at the OA for the NICs/ports to collect the MACs.
First, I was surprised to see “LOM:1-a” and “LOM:1-b” on a blade connected to a 1/10 VC module. I thought you needed Flex-10 functionality in both the NIC and the VC module to get that subdivision. I know the G7 blade can do it, but I thought it would dumb down to match the 1/10 module.
Second, I suspect we have a hardware initiator configured for iSCSI: we have hardware iSCSI vmhba storage adapters in ESX, but nothing for iSCSI in the VC profile. Would that mean it is done via the BIOS (set up during POST?), or with OneCommand Manager or something? I tried to enter iSCSI setup during POST on my lab G7 server (Ctrl-S), but was never offered the option to configure it. In my lab I have a Flex-10 VC module; I don't know if VC somehow influences the BIOS offering.
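One way to tell whether those vmhbas are the software or hardware initiator is the driver behind each adapter: the vSphere software initiator registers as `iscsi_vmk`, while the Emulex hardware initiator on an NC553i registers as `be2iscsi`. As a rough sketch, the snippet below classifies captured `esxcli iscsi adapter list` output; the sample text is hypothetical, and the column layout may vary by ESXi release:

```python
# Classify iSCSI vmhbas as software or hardware by driver name.
# The sample output below is hypothetical; capture the real thing
# from the ESXi Shell with:  esxcli iscsi adapter list

SOFTWARE_DRIVERS = {"iscsi_vmk"}   # vSphere software iSCSI initiator
HARDWARE_DRIVERS = {"be2iscsi"}    # Emulex (NC553i) hardware initiator

def classify(adapter_listing):
    """Return {vmhba name: 'software'|'hardware'|'unknown'} from pasted output."""
    result = {}
    for line in adapter_listing.strip().splitlines():
        fields = line.split()
        if len(fields) < 2 or not fields[0].startswith("vmhba"):
            continue  # skip header and separator lines
        name, driver = fields[0], fields[1]
        if driver in SOFTWARE_DRIVERS:
            result[name] = "software"
        elif driver in HARDWARE_DRIVERS:
            result[name] = "hardware"
        else:
            result[name] = "unknown"
    return result

sample = """
Adapter  Driver     State   UID            Description
-------  ------     ------  ----           -----------
vmhba0   be2iscsi   online  iscsi.vmhba0   Emulex OneConnect iSCSI
vmhba1   be2iscsi   online  iscsi.vmhba1   Emulex OneConnect iSCSI
"""

print(classify(sample))  # both vmhbas classified as "hardware"
```

If the adapters show up as `iscsi_vmk`, that confirms Hoa's suspicion below that this is software iSCSI regardless of what the Emulex BIOS shows.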
Last, it appears the built-in LOM (NC553i) is doing both hardware iSCSI initiator AND 1Gb NIC duty (vmnic0/vmnic1 and vmhba0/vmhba1). I am thinking this may be part of their performance problem. Essentially, all network traffic and storage traffic is using the two 1Gb LOM connections.
A non-related item: the customer also has a mezzanine card, an NC542m, cabled and connected via a pair of 1Gb pass-thru modules (bays 3 & 4), and that too looked funky. The NIC ports looked unused in vCenter and showed up as “vmnic6.p1” and “vmnic6.p2”. I think that is a driver issue (mlx4) mentioned in a Customer Advisory. It appears the consultant planned on using the MEZZ/NC542m for iSCSI (that is what the documentation he left behind for the customer showed), but intentionally or accidentally collapsed it onto the LOMs (NC553i). At least, that is what it looks like when comparing MACs in the OA to the vCenter info.
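The MAC comparison between the OA and vCenter can be tedious to do by eye. A minimal sketch of a cross-reference helper, assuming you have pasted the two lists into "name MAC" text form (e.g. from the OA port-mapping page and from `esxcli network nic list`); the sample MACs below are made up, and the parsing will need adapting to your exact output:

```python
# Cross-reference MAC addresses reported by the OA against ESXi vmnics.
# Both inputs are hypothetical pasted text, one "name MAC" pair per line;
# the OA tends to use dashes in MACs, ESXi uses colons, so we normalize.

def parse_macs(text):
    """Return {normalized MAC: name} from lines of 'name MAC'."""
    result = {}
    for line in text.strip().splitlines():
        name, mac = line.split()
        result[mac.lower().replace("-", ":")] = name
    return result

def match_ports(oa_text, esx_text):
    """Map OA port names to the vmnic names that share a MAC."""
    oa = parse_macs(oa_text)
    esx = parse_macs(esx_text)
    return {oa[mac]: esx[mac] for mac in oa if mac in esx}

oa_ports = """
LOM:1-a 00-17-A4-77-00-10
LOM:1-b 00-17-A4-77-00-12
"""
esx_nics = """
vmnic0 00:17:a4:77:00:10
vmnic6 00:17:a4:77:00:20
"""

print(match_ports(oa_ports, esx_nics))  # only LOM:1-a matches, to vmnic0
```

Any OA port that never matches a vmnic (like the NC542m ports here) is a candidate for the driver issue or an unused path.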
Any additional insight is welcomed….
**************
From Hoa:
First, I think this is software-based iSCSI. Look for the vSphere software iSCSI adapter under the host's Storage Adapters list.
Second, I think the VC 1/10 does all the heavy lifting for both Ethernet and iSCSI traffic in one pipe, not as a Flexed iSCSI segment; that is your bottleneck. At the very least, use the NC542m for just iSCSI or just Ethernet, or better yet use a 10Gb pass-thru module if you can, with 10Gb uplinks to the LeftHand if configurable. Or buy two Flex-10/10D modules and turn this into true hardware-based iSCSI; that also gives the G7 a longer shelf life and breaks through the glass ceiling of VC 3.60.
Also input from Lionel:
This is correct: only 10Gb-based VC modules (e.g. Flex-10, FlexFabric, Flex-10/10D) can enable hardware iSCSI with Emulex CNAs.
So I guess what you are seeing is just the iSCSI function that is enabled in the Emulex BIOS, but iSCSI acceleration is certainly not enabled, and that is why you don't see anything for iSCSI in the VC profile. You are using an unsupported configuration, and today you have no choice but to use software iSCSI (not accelerated), selecting vmnic0 to transmit your iSCSI traffic, unless you decide to swap the VC 1Gb modules for 10Gb ones.
You see two physical functions (LOM:1-a and LOM:1-b) because, like all CNAs used without Flex-10-capable modules, the adapter presents one Ethernet port and one iSCSI (or FCoE) port.
****************
Other comments or suggestions?