HPE StoreVirtual Storage / LeftHand

P4500 bandwidth utilization

SOLVED
Pigi
Occasional Visitor

P4500 bandwidth utilization

I'm evaluating implementing a P4500 (2-node) cluster in Network RAID 10. I assume (but haven't found any evidence) that all 4 NICs (1 Gbps each) will be used, i.e. my VMware farm will talk to both storage nodes at the same time, so I would get a total bandwidth of 4 Gbps. My doubt is that the VMware farm might always talk to only one P4500 node while that node synchronizes with the other one. That would mean my VMware farm talks to the P4000 cluster at a maximum bandwidth of 2 Gbps.
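The two scenarios above can be put into a quick back-of-the-envelope calculation. This is only a sketch of the theoretical link ceilings, assuming 2 GbE NICs per node and ignoring iSCSI/TCP overhead and replication traffic:

```python
# Hypothetical comparison of the two scenarios: front-end traffic spread
# across both nodes vs. pinned to a single node.
NIC_GBPS = 1.0        # per-NIC link speed
NICS_PER_NODE = 2     # assumption: each P4500 node has 2 GbE NICs
NODES = 2

# VMware hosts talk to both nodes at the same time:
both_nodes = NIC_GBPS * NICS_PER_NODE * NODES   # theoretical ceiling

# All front-end traffic goes through one node only:
one_node = NIC_GBPS * NICS_PER_NODE             # theoretical ceiling

print(both_nodes, one_node)  # 4.0 2.0
```

In practice the usable throughput will be lower than either ceiling, as the reply below this post also notes.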

Is there any evidence that my initial guess is correct?
Thanks!
2 REPLIES
Bryan McMullan
Trusted Contributor
Solution

Re: P4500 bandwidth utilization

I don't think you'll ever see 100% utilization of all links as it stands. But to get VMware to talk to all nodes at the same time, you need to ensure that VMware is using multipathing. It takes a little work to configure correctly.

This is a good walkthrough of the setup:

http://virtualy-anything.blogspot.com/2009/12/how-to-configure-vsphere-mpio-for-iscsi.html

If you set up the round-robin version, VMware uses all paths, sending a set amount of data to each node and cycling through them all: never really using them all at once, but still making use of each of them. There may be a time when HP makes a DSM for VMware that could change that method, but as of now... nothing.
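For reference, switching a volume to round robin on an ESX/ESXi 4.x host looks roughly like the following. This is a sketch based on the vSphere 4 CLI syntax; the device ID `naa.xxx` is a placeholder for your actual P4000 volume:

```shell
# List devices and their current path selection policy (PSP):
esxcli nmp device list

# Set one P4000 volume to the round-robin PSP
# (naa.xxx is a placeholder, substitute your volume's identifier):
esxcli nmp device setpolicy --device naa.xxx --psp VMW_PSP_RR

# Optionally make round robin the default for the active/active SATP,
# so newly discovered volumes pick it up automatically:
esxcli nmp satp setdefaultpsp --satp VMW_SATP_DEFAULT_AA --psp VMW_PSP_RR
```

The walkthrough linked above covers the rest of the setup (VMkernel ports and iSCSI port binding), which is required before multiple paths exist at all.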
Bart_Heungens
Honored Contributor

Re: P4500 bandwidth utilization

I have implemented this procedure several times already and I can confirm that this solution really works well... The load gets spread across all available paths...

However, I am now struggling with the fact that I have moved to distributed switches inside ESX.

What is then the ideal configuration of the NICs in the ESX host? I suppose the two separate VMkernels are no longer necessary if I select load balancing based on physical NIC load (new in ESX 4.1)?
In that case, one VMkernel should be enough, and ESXi will spread the load across the two NICs anyway?
I cannot find any documentation on this one...
--------------------------------------------------------------------------------
If my post was useful, click on my KUDOS! "White Star"!
My blog: http://blog.bitcon.be