BladeSystem - General
03-01-2011 04:58 AM
HP Virtual Connect FlexFabric 10GB/24 - Port Module
Hi all VC experts,
Could you help me with this?
If I have 8 full-height BL620c G7 servers in a chassis, that gives me 4 FlexFabric LOM ports in each server, i.e. 32 ports across the enclosure, and I can have each 10Gb port divided into 4 FlexNICs, right? That means I can have 128 downlinks? If I were to build that setup, how many HP Virtual Connect FlexFabric 10Gb/24-Port Modules would I need?
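(For reference, that count works out as: 8 blades × 4 FlexFabric ports per blade × 4 FlexNICs per port = 128 FlexNICs.)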
Or would a better way be to use trunk ports and split off all the traffic at the hypervisor level?
Any help would be greatly appreciated.
1 REPLY
03-01-2011 05:17 AM
Re: HP Virtual Connect FlexFabric 10GB/24 - Port Module
Hi,
You need to look at the Port Mapping diagram for Full-height blades.
Remember, the downlinks between the interconnect bays and the server LOMs are HARDWIRED; this does not change just because you are using Flex-10 and FlexNICs.
So LOM 1 and LOM 3 still connect to Bay 1, and LOM 2 and LOM 4 still connect to Bay 2. Even though each LOM can be divided into 4 "virtual" NICs, the LOMs are still physical ports, so with "Flex" technology there are up to 4 virtual NICs riding on each physical (hard) downlink.
Returning to your arithmetic: you are correct that 128 NICs are available, but these are virtual. Physically you still only have 32 hard downlinks (128 / 4 = 32), with 16 of them going to each of the two interconnect bays.
Since each FlexFabric module provides 16 downlinks, the number of modules you need doesn't change.
You could get by with one module (in Bay 1), but this is not recommended: you would have no redundancy, and you would only be able to use half of your LOMs.
The correct configuration is two modules (Bays 1 & 2), for redundancy and to get full use of your LOMs.
To look at it from a different perspective, the basic rule is that the onboard NICs (LOMs) are all pathed to IC Bays 1 & 2. The remaining IC bays can only be reached via a mezzanine card. So when considering virtual NICs from the LOMs, you can ONLY use IC Bays 1 & 2; additional modules would not affect this and would be of no use to you (except in conjunction with mezzanine cards).
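To make the module-count arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The figures (4 FlexNICs per 10Gb port, 16 downlinks per FlexFabric 10Gb/24-port module, full-height LOM-to-bay mapping) come from the discussion above; the variable names and the script itself are just an illustrative check, not an HPE tool.

```python
import math

# Figures from the thread: 8 full-height BL620c G7 blades, 4 FlexFabric
# LOM ports per blade, up to 4 FlexNICs per 10Gb port, and a FlexFabric
# 10Gb/24-port module with 16 downlinks (plus 8 uplinks).
BLADES = 8
PORTS_PER_BLADE = 4
FLEXNICS_PER_PORT = 4
DOWNLINKS_PER_MODULE = 16

physical_ports = BLADES * PORTS_PER_BLADE          # 32 hard downlinks in use
flexnics = physical_ports * FLEXNICS_PER_PORT      # 128 "virtual" NICs seen by the OS

# Full-height port mapping: LOM 1/3 -> IC Bay 1, LOM 2/4 -> IC Bay 2,
# so the physical downlinks are split evenly across the two bays.
downlinks_per_bay = physical_ports // 2             # 16 per interconnect bay

modules_per_bay = math.ceil(downlinks_per_bay / DOWNLINKS_PER_MODULE)  # 1
modules_total = modules_per_bay * 2                  # one per bay, two for redundancy

print(f"FlexNICs presented to the servers : {flexnics}")
print(f"Physical downlinks in use         : {physical_ports}")
print(f"Downlinks landing on each IC bay  : {downlinks_per_bay}")
print(f"FlexFabric modules needed         : {modules_total} (Bays 1 & 2)")
```

The point the numbers make is that carving each port into FlexNICs only multiplies the virtual side; the physical downlink count, and therefore the module count, stays at two.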
HTH
Dave.