10-01-2010 08:55 PM
Flex-10 with c7000, BL490c, vSphere 4.0 - slow speeds
I'm new here, but I hope I'm in the right place; based on other posts it seems to be the place to ask.
I'm trying to track down the cause of slow speeds in an NFS, 10Gb, VMware environment. The problem manifests as slow sequential throughput on NFS (served by a fast Sun storage system), but after a lot of investigation it appears that non-NFS traffic is also slow.
I am new to the HP Virtual Connect stuff but not new to networking. I've been reading up and have a strong idea of what the problem is in this setup, but some verification of how this is supposed to work would be great! Let me know if more model or firmware details are required; basically all firmware has already been upgraded by previous engineers trying to crack this problem (ongoing for six months).
From what I can see, the VC is configured as follows:
- 4 x 10Gb uplinks to the core 10Gb HP switch as an LACP trunk
- 6 VLANs are defined and mapped to each blade
- each blade has 2 x 10Gb ports, which appear as 8 vNICs in vSphere
- only vnic0 (from the 1st flex) and vnic1 (from the 2nd flex) light up in VMware, each at 10 Gbps. vnic1 is in standby mode; vnic2-7 are all disconnected.
- at face value, everything should run at 10 Gbps and be very speedy.
But with NFS and netperf, we are only getting 100-150 MB/s.
The 6 VLANs are all in use: vMotion, management, infrastructure, NFS storage, Web/FTP, DMZ, etc.
In vSphere's vSwitches, each VM is given a vNIC for each VLAN it needs access to. So NIC1 = Web/FTP, NIC2 = Oracle, etc.
The bandwidth allocations in Virtual Connect Manager are all at default/auto settings. Nowhere in VMware do I see any bandwidth rate limiting.
I don't believe this is a VMware-specific problem. Two of the blades have Red Hat + Oracle installed on bare metal (no VMware) and are also seeing slow NFS speeds.
My big question: if I understand the Flex-10 documentation correctly, using speed=auto for all of the Ethernet segments' bandwidth settings causes the bandwidth to be split equally. So roughly 10 Gbps / 6 = 1.66 Gbps per VLAN; since bandwidth is allocated in 100 Mbps increments, it might be slightly higher or lower.
Can somebody kindly confirm if that is correct?
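To sanity-check that arithmetic, here is a quick sketch. This is only my reading of the docs (the direction of the 100 Mbps rounding is an assumption, not confirmed Virtual Connect behaviour):

```shell
# Rough arithmetic for the speed=auto split described above: one 10 Gbps
# physical port shared equally across 6 VLAN-mapped FlexNICs, with
# allocations made in 100 Mbps increments (rounding direction assumed).
total_mbps=10000   # one 10 Gbps physical port
flexnics=6         # six FlexNICs sharing it

per_nic=$(( total_mbps / flexnics ))   # 1666 Mbps before rounding
rounded=$(( per_nic / 100 * 100 ))     # 1600 Mbps at 100 Mbps granularity

echo "per-FlexNIC share: ${per_nic} Mbps, rounded: ${rounded} Mbps"
# prints: per-FlexNIC share: 1666 Mbps, rounded: 1600 Mbps
```

If that is right, it would line up suspiciously well with the 1-2 Gbps we actually see.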
Thanks,
Wim
10-02-2010 09:57 AM
Re: Flex-10 with c7000, BL490c, vSphere 4.0 - slow speeds
Good part: between VMs (e.g. test1 and test2) on the same vSphere blade, I get essentially 10 Gbps (as reported by netperf).
Bad part: when I migrate the test2 VM to another blade, I get around 1-2 Gbps with the same netperf command. It should be 10 Gbps, since the Flex-10 is all 10 Gbps and so is the core ProCurve switch (multiple 10 Gbps LACP trunks back to the Flex module).
What gives? Is this bad hardware or is the Flex-10 stuff simply misconfigured?
10-04-2010 05:04 AM
Re: Flex-10 with c7000, BL490c, vSphere 4.0 - slow speeds
LOM:1-a, LOM:1-b 1Gb CONSOLE
LOM:2-a, LOM:2-b 2Gb vMotion
LOM:3-a, LOM:3-b 2Gb FT
LOM:4-a, LOM:4-b 5Gb VM Traffic
Have you upgraded to 4.1? Check the network card firmware, VC firmware, etc. Do some reading up on DCC.
10-04-2010 10:18 AM
Re: Flex-10 with c7000, BL490c, vSphere 4.0 - slow speeds
So, when you are doing the netperf testing, say on the bare-iron configuration, how many concurrent streams are you running?
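For reference, a single netperf stream often cannot fill a 10 Gb link on its own. Something along these lines (the target hostname is a placeholder, and this assumes netperf/netserver are installed on both ends) runs several streams in parallel so the per-stream numbers can be summed:

```shell
# Run several concurrent TCP_STREAM tests against the remote netserver.
# "blade2" is a placeholder hostname; adjust the stream count and duration.
TARGET=blade2
STREAMS=4

for i in $(seq 1 "$STREAMS"); do
  netperf -H "$TARGET" -t TCP_STREAM -l 30 &   # 30-second run per stream
done
wait   # let all streams finish, then add up the reported throughputs
```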
10-04-2010 10:49 AM
Re: Flex-10 with c7000, BL490c, vSphere 4.0 - slow speeds
You can find me on Twitter: @JKytsi
10-05-2010 11:20 PM
Re: Flex-10 with c7000, BL490c, vSphere 4.0 - slow speeds
4.0, 4.0 U1 or 4.0 U2?
10-11-2011 10:00 PM
Re: Flex-10 with c7000, BL490c, vSphere 4.0 - slow speeds
Just wondering if you ever found a solution to this issue? We seem to be running into something similar.
Thanks,
J