02-23-2010 12:51 PM
Flex 10 transfer speeds
We have a brand new BladeSystem setup and I think I am having some problems with Virtual Connect.
Our Setup:
5 x BL490C G6
2 x Virtual Connect Flex 10
2 x ProCurve zl CX4 Modules
2 x Equallogic PS6000VX iSCSI San
We have 5 blades set up with ESX 4, and they connect to the Virtual Connect modules with the two onboard 10Gb NICs. I have one CX4 cable running from the Virtual Connect module to the CX4 module in our core 5400 switch.
In ESX all of the nics are showing the correct speeds for the Flex Nics.
I have a NIC set up for iSCSI that I have given 3Gb of bandwidth. When I test the disk speed of a VM I can only get 100MB/s, and if I test 2 VMs from the same host each only gets about half that, around 50MB/s.
What have I done wrong?
Thanks,
Jesse
02-24-2010 06:00 AM
Re: Flex 10 transfer speeds
02-24-2010 06:49 AM
Re: Flex 10 transfer speeds
This sounds like there is a 1Gb link in the path between the 3Gb FlexNIC and the Equallogic SAN.
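To see why the observed number points at a 1Gb hop, a quick back-of-the-envelope check helps: 1Gb/s works out to roughly 110-115MB/s of usable throughput, very close to the 100MB/s being measured, while a 3Gb FlexNIC allocation should allow over 300MB/s. A minimal sketch of that arithmetic (the 90% efficiency figure is an assumption for protocol overhead, not a measurement from this setup):

```python
# Sanity-check: does the observed throughput match a 1Gb bottleneck
# rather than the 3Gb FlexNIC allocation? Figures are illustrative.

def line_rate_mb_per_s(gbit_per_s, efficiency=0.9):
    """Approximate usable MB/s for a link, assuming ~90% protocol efficiency."""
    return gbit_per_s * 1000 / 8 * efficiency

one_gig = line_rate_mb_per_s(1)    # ~112 MB/s usable
three_gig = line_rate_mb_per_s(3)  # ~337 MB/s usable

# The observed 100 MB/s sits right at a single 1Gb link's ceiling,
# which is consistent with a 1Gb hop somewhere in the path.
print(f"1Gb ceiling ~{one_gig:.0f} MB/s, 3Gb ceiling ~{three_gig:.0f} MB/s")
```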
02-24-2010 07:18 AM
Re: Flex 10 transfer speeds
02-24-2010 07:57 AM
Re: Flex 10 transfer speeds
02-24-2010 09:35 AM
Re: Flex 10 transfer speeds
02-24-2010 09:38 AM
Re: Flex 10 transfer speeds
02-24-2010 09:47 AM
Re: Flex 10 transfer speeds
Almost no trunking/bonding/aggregation solution will spread the segments of a single TCP connection across multiple links. Linux bonding does support such "round robin" packet scheduling, but it introduces packet reordering, and that can become a two-steps-forward, three-steps-back situation: out-of-order traffic causes immediate ACKs from TCP, and if enough segments arrive out of order it will trigger a spurious fast retransmit, which consumes link bandwidth and suppresses the TCP congestion window.
Regarding those 8 x 1Gb links from the Equallogic to the ProCurve switch: exactly which model of ProCurve switch, and which firmware revision? Were any particular trunking settings used?
Also, in your previous testing, was it all from one "client" IP address to the "server" IP address of the Equallogic or were multiple IP address pairs involved?
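The single-connection limitation above comes from how trunks typically distribute traffic: a hash of the flow's addresses and ports selects one member link, so every segment of one connection takes the same path. A toy sketch of that idea (the hash below is illustrative, not any vendor's actual algorithm):

```python
# Sketch: per-flow hashing pins each TCP connection to one physical link,
# so a single iSCSI session can never exceed one member link's bandwidth.
# This hash is purely illustrative, not any switch vendor's real algorithm.
import hashlib

def pick_link(src_ip, dst_ip, src_port, dst_port, n_links):
    """Hash the flow tuple onto one of n_links member links."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % n_links

# The same connection always lands on the same link...
same = {pick_link("10.0.0.5", "10.0.0.50", 49152, 3260, 8) for _ in range(100)}

# ...while many distinct source ports spread across the trunk members.
many = {pick_link("10.0.0.5", "10.0.0.50", p, 3260, 8) for p in range(49152, 49252)}

print(f"one flow used {len(same)} link; 100 distinct flows used {len(many)} links")
```

This is why testing with multiple IP address pairs matters: only distinct flows can exercise more than one member link of the trunk.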
02-24-2010 10:00 AM
Re: Flex 10 transfer speeds
We are using an HP ProCurve 5412 switch with firmware K.14.47. As per the Equallogic documentation, no special trunking was done: each 1Gb interface on the array gets its own IP, and you then define a group IP that you point your clients at, so a client only knows the group IP.
So the basic testing so far has been:
1.) Physical Windows server outside the blade environment, with two 1Gb NICs and 2 IPs: I was able to get about 200MB/s from the array.
2.) Physical Windows server on a blade, with two FlexNICs at 3Gb of bandwidth each and 2 IPs: I was only able to get 100MB/s from the array.
3.) ESX server with two FlexNICs at 3Gb of bandwidth each and 2 IPs (set up per the MPIO for VMware document): I was only able to get 100MB/s with 1 VM on 1 host, and about 50MB/s each with 2 VMs on 1 host.
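The pattern in those three tests fits simple MPIO arithmetic: the aggregate rate is the sum of the per-path ceilings, clamped by the narrowest hop the paths share. A hedged sketch of that reasoning, with illustrative ceilings of ~112MB/s per 1Gb path (the numbers are assumptions for the sake of the example, not measurements of this setup):

```python
# Sketch of the MPIO arithmetic behind the three tests: aggregate throughput
# is capped by the narrowest shared hop, not by the sum of the paths alone.
# All figures are illustrative assumptions, not measurements of this setup.

def expected_mb_per_s(path_ceilings, shared_bottleneck):
    """Aggregate MPIO throughput: sum of paths, clamped by any shared hop."""
    return min(sum(path_ceilings), shared_bottleneck)

# Two independent ~1Gb paths with no shared constraint -> ~224 MB/s,
# in the ballpark of test 1's ~200 MB/s.
print(expected_mb_per_s([112, 112], shared_bottleneck=10_000))

# The same two paths squeezed through one ~1Gb-equivalent hop -> ~112 MB/s,
# matching the ~100 MB/s ceiling seen in tests 2 and 3.
print(expected_mb_per_s([112, 112], shared_bottleneck=112))
```

Under that model, both blade tests behaving like a single 1Gb link suggests the two paths are sharing one constrained hop somewhere between the FlexNICs and the array.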
02-24-2010 11:21 AM