Trunking on 5412zl
01-26-2010 03:17 PM
I have 3 servers with 8 NICs each, and I want to trunk them but split them up across different modules in the 5412. Is it possible to have one trunk group span different modules, so that I have some redundancy for my ESX hosts?
I have a rough drawing of what I want to accomplish while maintaining all connectivity to the NICs on each ESX host. The diagram only shows 2 NICs, but each ESX host actually has 8.
Is this possible on this switch?
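For reference, a trunk on the 5412zl is defined over a port list, and ports are named by module letter plus port number, so members can come from different modules. A minimal sketch, assuming standard ProCurve trunk syntax (ports A1 and B1 are hypothetical examples on two different modules):
ProCurve Switch(config)# trunk A1,B1 trk2 lacp
ProCurve Switch(config)# show trunks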
01-26-2010 03:22 PM
Re: Trunking on 5412zl
01-26-2010 09:48 PM
01-27-2010 05:44 AM
Re: Trunking on 5412zl
Next is the second part. In our DR site we are going to have two 24-port switches, which I have not purchased yet. Is there some way with ProCurve switches to trunk across two switches, like Nortel's split multi-link trunking?
I would rather not go with MSTP, since in that scenario I only get half my bandwidth and failover gives me the other half. I would like full use of all ports unless one of the switches tanks.
Thanks for the help.
Adam
01-27-2010 06:18 AM
Re: Trunking on 5412zl
Is there no other way than that, or using a chassis-based switch, in my DR site?
Thanks,
Adam
01-27-2010 06:38 AM
Re: Trunking on 5412zl
I guess what you want is dt-trunking. I think it works with the 3500, 5400, 8200 and 6600.
You also need software K.14.xx.
The disadvantage is that you can only do dt-trunking or routing, not both.
HTH,
Alex
01-27-2010 06:16 PM
Re: Trunking on 5412zl
Trev.
03-24-2010 07:22 AM
Re: Trunking on 5412zl
We have been trying to get distributed trunking working with vSphere and our HP switches.
We have the dt-lacp ports set up, and the vSwitches are set to IP Hash.
On the HP switches, the ports don't show an LACP member. The HP switches report the same MAC on both ports (different switches), which we think is correct.
Also, when we tried to bring a vSphere box online with this configuration, we received an error that HA could not be enabled.
ProCurve Switch 1(config)# trunk 23-24 trk1 lacp
ProCurve Switch 1(config)# switch-interconnect trk1
ProCurve Switch 1(config)# trunk 1 trk5 dt-lacp
ProCurve Switch 2(config)# trunk 23-24 trk1 lacp
ProCurve Switch 2(config)# switch-interconnect trk1
ProCurve Switch 2(config)# trunk 1 trk5 dt-lacp
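The trunk and LACP state on each switch can be checked with the standard ProCurve show commands (a suggestion only; output varies by firmware version):
ProCurve Switch 1(config)# show trunks
ProCurve Switch 1(config)# show lacp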
We are trying the same thing as here:
http://www.vnephos.com/index.php/2009/09/hp-procurve-cross-stack-etherchannel/
I am not sure if we are doing something wrong, or if it simply does not work the way we expect...
Thank you,
Kevin
11-02-2010 10:42 AM
Re: Trunking on 5412zl
Did you solve your problem?
We are about to try the same with dt-lacp and vSphere, so it would be nice to know if there is anything special we should be aware of.
Thanks
/Mads
11-02-2010 11:33 AM
Re: Trunking on 5412zl
We tried to get LACP working on vSphere with dt-lacp on the HP switches, but we kept having intermittent network issues.
We also just had a major network outage: one of our ProCurve switches rebooted, and the 10 GbE dt trunk between the two flaked out on reboot. We ended up removing and re-inserting the 10 Gb blade to get the link back up, but by then we had switched back to a slew of 1 Gb links for our trunk. We aren't sure if the distributed trunking caused the switch to reboot or if it was the 10 Gb module.
We are still on different firmware versions, so that might be our issue... We are working to get all systems redundant on both switches so we can upgrade firmware more easily.
One downside to iSCSI and vSphere: 95% of our environment is solely dependent on our ProCurve switches.
Based on our experiences so far, unless we see some tested, known-working configurations, we may wait for HP/VMware to evolve more before we try dt-lacp again.
11-02-2010 12:58 PM
Re: Trunking on 5412zl
We also have 2 x HP ProCurve 6600, but here we are only using LACP to the NetApp storage.
But our counters show that we are not even close to saturating 10 GbE, so we will try to build a smaller environment with just 1 Gb connections.
But when using NFS, it's not possible to trunk ports and also get redundant switch connections unless we use dt-lacp. And we want to trunk because we want more than a 1 Gb connection: we would like 2x2 Gb (4x1 Gb), so that we have 2 NFS vmks = 2 datastores, each of which can do 1.5-2 Gb.
We can't get that with VMware's port-based load balancing, because one vmk = one port, so it will never exceed 1 Gb. :(
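The single-link ceiling can be illustrated with a small sketch of IP-hash teaming (an assumption about how vSphere's "Route based on IP hash" policy picks an uplink: XOR of source and destination IPv4 addresses, modulo the number of active uplinks; the addresses below are made up):

```python
# Sketch of IP-hash NIC teaming as we understand it (assumption:
# XOR of src/dst IPv4 addresses, modulo the number of active uplinks).
import ipaddress

def uplink_for(src_ip: str, dst_ip: str, n_uplinks: int) -> int:
    s = int(ipaddress.ip_address(src_ip))
    d = int(ipaddress.ip_address(dst_ip))
    return (s ^ d) % n_uplinks

# One NFS vmkernel IP talking to one filer IP always hashes to the
# same uplink, so a single datastore never uses more than one 1 Gb link.
print(uplink_for("10.0.0.11", "10.0.0.50", 4))
print(uplink_for("10.0.0.11", "10.0.0.50", 4))  # same uplink every time
```

Under a policy like this, a second vmk (or a second filer IP) can hash to a different uplink, which is why two NFS vmks/datastores can spread across links while a single one cannot.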
/Mads