10-18-2009 11:36 PM
Performance (load balancing etc.)
Let's assume I build a cluster comprising either two or four nodes, with dedicated hardware for the VSA and a dedicated iSCSI network.
I would have a single virtual IP, and each VSA can only have a single gigabit vNIC for iSCSI?
That means I have potentially 4 Gbps worth of connectivity in and out of the cluster, correct?
So, if my production ESX boxes or physical file servers each have a pair of dedicated gigabit NICs connected to the iSCSI network, how is the balancing of connections handled?
For example, if a file server has four LUNs assigned to it, does the law of averages suggest that each LUN target gets its own 1 Gbps connection via a different gateway VSA, or could a single physical file server end up accessing all of its targets via the same gateway VSA?
Is there any way to get more than a single gigabit connection to a target?
So far, I like the clustering and failover, I'm just trying to understand the performance side of things a little more once you move away from any disk bottleneck at the physical storage.
Thanks.
10-20-2009 05:21 AM
Re: Performance (load balancing etc.)
That requires either vSphere in the VMware world, or HP/LeftHand's Windows MPIO provider.
Without MPIO, each iSCSI target maintains a single persistent path across your network interfaces, so it only uses one NIC.
I hope that helps,
Kevin
10-22-2009 01:23 AM
Re: Performance (load balancing etc.)
If my ESX host running the VSA had several physical 1 Gbps NICs into the iSCSI switch, what would the VSA be able to use?
Or if it had a single 10 Gbps NIC, what would it be able to use?
Basically, is the VSA limited to 1 Gbps, or just to one vNIC that runs at the speed of the underlying physical NIC?
10-22-2009 05:15 AM
Re: Performance (load balancing etc.)
Theoretically, if you had multiple physical NICs on the vSwitch the VSA is connected to, you should be able to exceed 1 Gbps to the VSA, although I've never tested this to see how much throughput you can actually get that way.
The previous post accurately describes the per-target limitation unless you use MPIO.
10-23-2009 09:44 AM
Re: Performance (load balancing etc.)
This is an important part to understand too.
Using native ESX 3.5 iSCSI with only a single LUN/volume, you are limited to 1 Gbps because there is no MPIO in effect and there is only one LUN.
If you have more than one LUN, each LUN can use 1 Gbps off the SAN, and each is redirected to a different node, giving good load balancing across the SAN nodes. So in your four-node example, four LUNs could achieve 4x 1 Gbps because they get load balanced across the SAN. That still does not require MPIO.
From the ESX server side, in 3.5, you are basically limited to 1 Gbps because there is no NIC load balancing for iSCSI in ESX 3.5.
In ESX 4.0 you can get NIC load balancing for a single LUN by enabling the native round-robin MPIO across more than one NIC.
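As a rough sketch, enabling the native round-robin policy on an ESX 4 host looks something like this from the command line (the `naa.xxxx` device ID is a placeholder; list your own devices first and substitute the real one):

```shell
# Show iSCSI devices and their current path selection policy (PSP)
esxcli nmp device list

# Switch one device to the native round-robin PSP
# (naa.xxxx is a placeholder -- use your volume's actual device ID)
esxcli nmp device setpolicy --device naa.xxxx --psp VMW_PSP_RR
```

You can also set this per LUN in the vSphere Client under Manage Paths, if you prefer the GUI.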
So the ideal on the SAN side is simply to make sure "load balancing" is enabled (it's the default anyway), and if the nodes are physical, to bond their network interfaces.
Ideal on the ESX side is to run ESX 4 and enable multi-NIC MPIO.
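For multi-NIC MPIO with the ESX 4 software iSCSI initiator, the usual approach is to create one VMkernel port per physical NIC on the iSCSI vSwitch and bind each port to the software iSCSI adapter. A hedged sketch (`vmk1`, `vmk2`, and `vmhba33` are placeholders for your own port and adapter names):

```shell
# Bind two VMkernel ports (one per physical NIC) to the software
# iSCSI adapter; adjust vmk1/vmk2/vmhba33 to match your host
esxcli swiscsi nic add -n vmk1 -d vmhba33
esxcli swiscsi nic add -n vmk2 -d vmhba33

# Verify the bindings
esxcli swiscsi nic list -d vmhba33
```

With a round-robin path policy on top of that, a single LUN can then drive traffic across both NICs.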
Honestly, this rather long post is a major simplification. To understand ESX better, I'd suggest you read these two posts.
For ESX 3.5, read this:
http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-customers-using-vmware.html
For ESX 4, read this:
http://virtualgeek.typepad.com/virtual_geek/2009/09/a-multivendor-post-on-using-iscsi-with-vmware-vsphere.html