Quad port MPIO performance with vSphere
03-12-2014 05:51 PM
Hi all,
I couldn't seem to find much around this so hopefully someone out there can help me.
I am trying to find out whether I will get higher aggregated throughput between my ESX hosts and Nimble storage if I use four 1GbE NIC ports on each and configure all four ports in ESX for MPIO (instead of two).
I understand that within vSphere a single iSCSI connection can't use more than one path at a time, but with round-robin path selection, has anyone seen higher overall performance in this configuration?
It doesn't seem to be a very common configuration, as a lot of people just move to 2 x 10GbE, but for this particular use case I would struggle to justify the extra cost.
Any help would be much appreciated!
Cheers,
Ben
Solved!
03-12-2014 06:36 PM
Solution

Hi Ben,
You will see higher aggregated throughput between your ESX hosts and Nimble if you dedicate additional NICs to iSCSI, provided you are currently throughput-bound on your existing 2 x 1GbE connections and the Nimble array has at least 4 x 1Gbps iSCSI ports. You can check in vCenter whether the VMNICs used for iSCSI are saturated.
Throughput = IOPS x block size. E.g. 10,000 IOPS x 8KB block = ~80MB/s.
Unless the VMs on the host drive either high IOPS or large sequential workloads, you may not be saturating the 2 x 1GbE links, in which case adding more host NIC ports for iSCSI will provide no additional performance.
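The arithmetic above can be sketched as a quick back-of-the-envelope check (the workload figures below are hypothetical illustrations, not measurements from this setup, and the usable-bandwidth factor is a rough assumption):

```python
# Rough sanity check: is a 2 x 1GbE iSCSI setup throughput-bound?
# Assumes ~90% of line rate is usable after protocol overhead (rough estimate).
GBE_LINK_MB_S = 1000 / 8 * 0.9  # ~112 MB/s usable per 1GbE link

def throughput_mb_s(iops, block_kb):
    """Throughput = IOPS x block size, expressed in MB/s (1 MB = 1024 KB)."""
    return iops * block_kb / 1024

# Eddie's example: 10,000 IOPS at 8KB blocks -> ~78 MB/s,
# comfortably inside a single 1GbE link.
print(f"{throughput_mb_s(10_000, 8):.1f} MB/s")

# Hypothetical large sequential backup stream: 4,000 IOPS at 64KB blocks.
backup = throughput_mb_s(4_000, 64)
links = backup / GBE_LINK_MB_S
print(f"{backup:.0f} MB/s needs ~{links:.1f} x 1GbE links")
```

If the second figure comes out above ~2 links, that is the case where going from two to four 1GbE iSCSI ports could actually help.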
Hope this helps.
-Eddie
03-12-2014 08:06 PM
Re: Quad port MPIO performance with vSphere
Thanks Eddie,
That does help. I just wanted to check that from an iSCSI perspective within vSphere I can leverage those additional NICs. It's mostly to do with backup and restore throughput, which would be largely sequential.
Thanks again!
Cheers,
Ben