02-17-2010 08:25 AM
Virtual Connect switches throughput issues
Hi,
We're having some major NFS read throughput issues with the VC 1/10Gb-F switches. Reads from our NFS server to a blade top out at about 3.6 MB/s, whereas a typical NFS read on this network reaches about 90 MB/s.
Has anyone else seen this problem? Thanks.
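In case anyone wants to reproduce the measurement, here is a minimal sequential-read check; it's a sketch assuming the export is mounted at /mnt/nfs and a large test file already exists there (both names are placeholders, not from the post above):

```python
# Rough sequential-read throughput check for an NFS mount.
# PATH and the test file are hypothetical; use a file large enough
# (several hundred MB) that the page cache doesn't dominate, and
# remount or drop caches between runs so you measure the wire.
import time

PATH = "/mnt/nfs/testfile.bin"  # hypothetical test file on the export
CHUNK = 1024 * 1024             # read in 1 MiB chunks

start = time.monotonic()
total = 0
with open(PATH, "rb", buffering=0) as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)
elapsed = time.monotonic() - start

print(f"read {total / 2**20:.1f} MiB in {elapsed:.1f} s "
      f"= {total / 2**20 / elapsed:.1f} MiB/s")
```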
1 REPLY
02-17-2010 12:53 PM
Re: Virtual Connect switches throughput issues
We are having a similar problem. About two months ago our storage vMotions took roughly 20 minutes for 10 GB of data; now they take about 6 hours, although no one is complaining about day-to-day performance. We have two c7000 enclosures with 27 BL460c G1 servers showing the same problem. We are using NFS storage on two NetApp FAS3170 clusters with 10Gb links, and the links are not saturated anywhere. vMotion works fine on a new enclosure with Flex-10 VC modules and vSphere, so the storage itself doesn't appear to be the problem. We also built an empty enclosure with the same VC switches and updated all firmware. The farm is on ESX 3.5 Update 3; the new server in the empty enclosure is on ESX 3.5 Update 5. Will post if we find something.
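For scale, those numbers imply roughly the following effective rates (a back-of-the-envelope calculation, assuming the 10 GB figure really is gigabytes):

```python
# Effective storage-vMotion throughput implied by the numbers above
# (assumption: "10 GB of data" means 10 * 1024 MB).
size_mb = 10 * 1024

before = size_mb / (20 * 60)    # 20 minutes, in seconds
after = size_mb / (6 * 3600)    # 6 hours, in seconds

print(f"before: {before:.1f} MB/s")   # ~8.5 MB/s
print(f"after:  {after:.2f} MB/s")    # ~0.47 MB/s
```

That's roughly an 18x slowdown, in the same ballpark as the 3.6 MB/s vs 90 MB/s ratio reported in the original post.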
RJ