Switches, Hubs, and Modems
11-02-2010 11:33 AM
Re: Trunking on 5412zl
No, we have not. For now we have switched vSphere back to the default teaming policy, which is basically ALB.
We tried to get LACP working on vSphere with dt-lacp on the HP switches, but we kept having intermittent network issues.
We also just had a major network outage: one of our ProCurve switches rebooted, and the 10 GbE distributed trunk between the two flaked out on reboot. We ended up removing and re-inserting the 10 Gb blade to get the link back up, but by then we had switched back to a slew of 1 Gb links for our trunk. We aren't sure whether the distributed trunking caused the switch to reboot or whether it was the 10 Gb module.
We are still on different firmware versions, so that might be our issue... We are working to get all systems redundant on both switches so we can upgrade firmware more easily.
One downside to iSCSI and vSphere: 95% of our environment is solely dependent on our ProCurve switches.
Based on our experience so far, unless we see some tested, known-working configurations, we may wait for HP/VMware to mature a bit more before we try dt-lacp again.
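For anyone retrying this, the distributed-trunking setup on the K-series CLI looks roughly like the sketch below. This is from memory and every port number, trunk ID, and VLAN here is an illustrative assumption, not a tested config; verify the exact syntax against the configuration guide for your firmware release (and run the same firmware on both peers) before relying on it.

```
; On each of the two DT peer switches (ports are examples only)
switch-interconnect a24     ; ISC link between the two DT peers
trunk a1 trk10 dt-lacp      ; server-facing port joins distributed trunk trk10
vlan 10 tagged trk10        ; carry the relevant VLAN(s) over the trunk
```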
11-02-2010 12:58 PM
Re: Trunking on 5412zl
Okay, too bad, I was hoping to get the golden path from you :-)
We also have 2 x HP ProCurve 6600 switches, but there we are only using LACP to the NetApp storage.
Our counters show that we are not even close to saturating 10 GbE, so we will try to build a smaller environment with just 1 Gb connections.
But when using NFS, it's not possible to trunk ports and also get redundant switch connections unless we use dt-lacp. And we want to trunk because we want more than a 1 Gb connection: we would like 2 x 2 Gb (4 x 1 Gb), so we have 2 NFS vmkernel ports = 2 datastores, each of which can push 1.5-2 Gb.
We can't achieve that with VMware's port-based load balancing, because one vmk = one port, and therefore it will never exceed 1 Gb. :(
/Mads
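To illustrate the "one vmk = one port" ceiling: both static IP-hash teaming and vSphere's default port-ID policy pin each flow (or each virtual port) to exactly one physical uplink. The toy hash below is an illustrative stand-in, not VMware's or HP's actual algorithm, but it shows why a single vmkernel-to-NFS-server IP pair can never spread across links and stays capped at one NIC's line rate, while two vmks on different IPs can land on different uplinks and aggregate.

```python
def pick_uplink(src_ip: str, dst_ip: str, uplinks: list) -> str:
    """Toy IP-hash: XOR the last octets, modulo the number of uplinks.
    Deterministic per src/dst pair, like real per-flow hashing."""
    s = int(src_ip.rsplit(".", 1)[1])
    d = int(dst_ip.rsplit(".", 1)[1])
    return uplinks[(s ^ d) % len(uplinks)]

uplinks = ["vmnic0", "vmnic1", "vmnic2", "vmnic3"]

# One vmkernel IP talking to one NFS server IP: every packet hashes
# to the same NIC, so the flow can never exceed that NIC's 1 Gb rate.
flows = {pick_uplink("10.0.0.10", "10.0.0.50", uplinks) for _ in range(100)}
print(flows)  # → {'vmnic0'} — a single uplink, no matter how many packets

# Two vmks with different IPs can hash to different NICs and aggregate.
print(pick_uplink("10.0.0.10", "10.0.0.50", uplinks))  # → vmnic0
print(pick_uplink("10.0.0.11", "10.0.0.50", uplinks))  # → vmnic1
```

This is why adding more uplinks helps aggregate traffic across many vmks or datastores but never speeds up a single vmk's flow.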
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP