Operating System - VMware
02-22-2011 05:48 PM
v3000 enclosure with FLEX10 unable to ping some IP's on vSwitch1
Hello,
I have a very, very strange problem (and have spent many hours on the phone with VMware support).
We have a c3000 enclosure with BL465 blades. Enclosure bays 1 and 2 have 1G switches; bays 3 and 4 have Flex-10 switches. All blades have Mezz 1 as 1G and Mezz 2 as 10G. Placing the Flex-10 interconnects in bays 3 and 4 (instead of the default 1 and 2) is forced by the size of the Mezz card.
We have just added the 10G equipment and some 10G iSCSI storage.
I have vSwitch0 in the management network and vSwitch1 in the iSCSI network. There are two HP 4500 iSCSI targets at 10.x.x.11 and .12, a gateway at .1, and the new targets at .14 and .15.
From the blades I can reach the .11 and .12 targets without issues, but when I tried adding .14 and .15 I was unable to reach them. While digging into this I also realized I can't ping .1 (the gateway; not strictly needed as such, but still).
From the switch I can ping the vmkernel interface (using the same VLAN as the source address).
After lots of testing I found that if I put the management vmkernel port in the iSCSI network, all is well and everything pings as it should.
I can create a second vSwitch in my normal management LAN, add another VMkernel port, and enable management traffic on it, and all is still well. I can then uncheck management traffic on the vSwitch0 vmkernel port and still manage the host on its normal management IP.
But the console GUI still shows vSwitch0 as the management IP, and I'm sure that at some point somebody will think "that's wrong, let's fix it" and then I'm in trouble.
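The second-vSwitch experiment above can also be set up from the ESXi console with the classic esxcfg commands. A rough sketch, assuming vmnic2 is a free uplink and 10.y.y.50 is an unused address on the management LAN (both are placeholders, not values from this setup):

```shell
# Create a second vSwitch and attach a spare uplink NIC to it
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic2 vSwitch2

# Add a port group and put a VMkernel interface on it
esxcfg-vswitch -A MgmtTest vSwitch2
esxcfg-vmknic -a -i 10.y.y.50 -n 255.255.255.0 MgmtTest
```

Enabling management traffic on the new VMkernel port is then done in the vSphere Client, as described above.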
So in short, with a vmkernel port on vSwitch1:
- I CAN ping 10.x.x.11 and 10.x.x.12
- I CANNOT ping 10.x.x.1, 10.x.x.14 and 10.x.x.15
- I CAN ping the vmkernel IP from the switch (the .1)
- I CAN see the ARP entry for .1 on ESXi (and it's correct)
- If I use the exact same NICs and IPs on vSwitch0, it all works without issues
Unfortunately, low-level troubleshooting is not easy because I can't put a sniffer on the Flex-10 interconnect (no hardware available to do that). ESXTOP suggests that traffic does go out via the correct interface.
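For anyone wanting to reproduce the checks above from the ESXi console, something along these lines shows the port-group/uplink mapping, the VMkernel interfaces, and the neighbor table (addresses are examples, and the exact esxcli namespace varies between ESXi versions):

```shell
# Show vSwitches, port groups, and which vmnic uplinks they use
esxcfg-vswitch -l

# Show VMkernel interfaces and their IP configuration
esxcfg-vmknic -l

# Ping a target through the VMkernel TCP/IP stack
vmkping 10.x.x.1

# Inspect the ARP/neighbor table seen by the VMkernel
esxcli network ip neighbor list
```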
Any ideas where this might be coming from?
Bas
1 REPLY
02-26-2011 09:38 PM
Re: v3000 enclosure with FLEX10 unable to ping some IP's on vSwitch1
How are your network profiles configured in your Virtual Connect (VC) Manager? And your server profiles?
Which NICs are your vSwitches using?
How is the new 10G equipment connected, and to what?
Do you have a network connection diagram you can share?
Steven
Steven Clementi
HP Master ASE, Storage, Servers, and Clustering
MCSE (NT 4.0, W2K, W2K3)
VCP (ESX2, Vi3, vSphere4, vSphere5, vSphere 6.x)
RHCE
NPP3 (Nutanix Platform Professional)