03-09-2010 04:48 AM
BL460c G6 -> Network -> Lom1 Lom2 Redundancy
The blades have 2 LOMs (LAN on Motherboard), each split into 4 virtual NICs, for 8 virtual NICs total.
The NICs appear as LOM1a, 1b, 1c, 1d and LOM2a, 2b, 2c, 2d.
We have Virtual Connect Ethernet modules with shared uplink networks, already redundant at the module level.
My questions are:
Since there is already redundancy at the Virtual Connect module level,
(a) Is it then pointless to set up HP NIC teaming at the blade level for redundancy?
(b) Within a blade, every connection passes through a single physical 'port' to the enclosure chassis. If LOM1 loses connectivity because of a fault within the blade, does that mean LOM2 will also lose connectivity from the same fault?
I don't know if I've made myself clear enough; I can provide a diagram if needed.
I'd be glad to get some feedback.
Thank you.
03-09-2010 05:04 AM
Re: BL460c G6 -> Network -> Lom1 Lom2 Redundancy
(b) They are different physical NICs, connecting to physically different VC modules. So if "fault within the blade" means a NIC failure, you will not lose connectivity.
There is no point, though, in having two NICs connected to the same network when that network uplinks through only one VC module.
The two NICs must either connect to two different networks that uplink through two different VC modules (Active/Active), or to a single network that uses uplinks in more than one VC module (Active/Passive).
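As a rough sketch of the Active/Active layout in the Virtual Connect CLI (the network names and port IDs here are hypothetical, and the exact syntax may vary with VC firmware):

    # Two networks, each uplinked through a different VC module
    add network Prod-A
    add uplinkport enc0:1:X1 Network=Prod-A
    add network Prod-B
    add uplinkport enc0:2:X1 Network=Prod-B

Each blade then gets one NIC in Prod-A and one in Prod-B, and the OS-level team spans the two.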
03-09-2010 05:07 AM
Re: BL460c G6 -> Network -> Lom1 Lom2 Redundancy
I'll try to answer your questions.
(a) The teaming software provides redundancy at the IP level as well. If you team one NIC from each bay and either side of the Virtual Connect goes down, you still have one NIC up holding the IP address.
(b) You may have one big connector, but within it there are many different paths.
If LOM1 loses its connection, that doesn't automatically mean LOM2 fails as well; obviously it depends on the fault in the blade. As I read your description, the LOMs are no different from the LOMs of any other rack or tower server.
03-09-2010 05:49 AM
Re: BL460c G6 -> Network -> Lom1 Lom2 Redundancy
VCM software failover between modules 1 <--> 2 can take up to 25-30 seconds. If your NICs are not teamed across modules 1 and 2, you will almost certainly have issues.
With teamed NICs, failover is almost instantaneous.
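A simple way to see the gap is a continuous ping from the blade toward its default gateway while you fail a module or uplink (the gateway address below is hypothetical):

    REM Watch for the outage window during the failover
    ping -t 10.0.0.1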
Dave.
03-09-2010 08:51 AM
Re: BL460c G6 -> Network -> Lom1 Lom2 Redundancy
My issue is as follows.
The blades are running Windows 2008 R2 Core with Hyper-V and Cluster Shared Volumes.
We first tried a setup with teaming at the blade level plus redundancy at the Virtual Connect level.
Everything is fine UNTIL the virtual network at the Hyper-V level needs to be configured and assigned to a network team created with the HP utility.
At that stage, the network team is lost, and Hyper-V can't bind to the team to create its virtual network.
HOWEVER, the same settings work fine on a Windows 2008 R2 full installation with the GUI.
BUT, per the design, we have to stay on Windows 2008 Core only.
That's why we wanted to bypass NIC teaming at the blade level.
Any comments regarding this?
Cheers
03-09-2010 09:27 AM
Re: BL460c G6 -> Network -> Lom1 Lom2 Redundancy
The other posters are absolutely right: do not leave your blade NICs unteamed, or you will sacrifice the ability to fail over quickly.
Are you saying that you successfully built a team, but the team was dissolved when you tried to configure Hyper-V to use it? Make sure the team is configured properly. On a 2008 Core server, use the CQNICCMD utility documented here:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00582404/c00582404.pdf
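As a rough sketch (the file path is hypothetical; the document above has the exact switches), you capture a known-good team configuration to XML and re-apply it on the Core box:

    REM Save the current NIC/team configuration to an XML file
    cqniccmd /s C:\team.xml

    REM Recreate the team from that XML on the target server
    cqniccmd /c C:\team.xml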
Good luck!
03-09-2010 09:37 AM
Re: BL460c G6 -> Network -> Lom1 Lom2 Redundancy
The utility creates the team, and connectivity through it is fine.
But once in Hyper-V, when we bind the virtual network to that same team, what would usually be a 15-second process takes forever, more than 20-30 minutes, and finally ends in an error.
Afterwards, there is no more connectivity to the team's IP address.
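If it helps the diagnosis, on Core you can at least check what state the team adapter is left in after the error (interface names will vary):

    netsh interface ipv4 show interfaces
    ipconfig /all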
03-10-2010 10:25 AM
Re: BL460c G6 -> Network -> Lom1 Lom2 Redundancy
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01663264/c01663264.pdf?jumpid=reg_R1002_USEN
The short answer from this document is to uninstall both Hyper-V AND the teaming software, then reinstall them in that order (Hyper-V first, then the teaming software).
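On a 2008 R2 Core installation the sequence would look roughly like this; the HP NCU uninstall/reinstall steps depend on how the package was deployed, so they are left as comments:

    REM Remove the Hyper-V role, uninstall the HP teaming software (NCU), reboot
    dism /online /disable-feature /featurename:Microsoft-Hyper-V

    REM Re-add the Hyper-V role first...
    dism /online /enable-feature /featurename:Microsoft-Hyper-V

    REM ...then reinstall the HP teaming software and recreate the team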