
StoreVirtual 4330 Best Practice



Hello, we have four StoreVirtual 4330 nodes and want to know the best clustering configuration. We are weighing the following options:


1. We do not know whether two separate clusters of two 4330s each is better than a single cluster containing all four 4330s, the idea being to avoid a bottleneck from all access going through a single cluster.


2. We do not know whether two clusters of paired 4330s in Network RAID-1 would be faster, or whether spreading the writes (and therefore the IOPS) across a single four-node cluster in Network RAID-5 would be better.


3. We use ESXi 5.5. Regarding link aggregation, we need to know the best distribution protocol. For example, ESXi 5.5 supports LACP only with IP hash distribution; if the system is configured with a virtual IP address for the 4330 cluster, will the distribution across member links be suboptimal because the hash always selects the same link (the same IP hash for every connection)? Is it more correct to use the virtual IP, multipathing, or some other link-aggregation load-balancing algorithm?


All connections go through an IRF stack made of four A5120 EI switches with rear 10 Gbit modules in a ring topology.


We hope someone can point us to the best solution and, if possible, to documents that clarify performance by configuration type, so we can properly determine how flows are distributed across the member links of the link aggregation.


Thank you so much

Honored Contributor

Re: StoreVirtual 4330 Best Practice

keeping it short... use one cluster of 4 nodes. Keep it Network RAID-10. Avoid Network RAID-5 for everything except static archive data!


MPIO isn't as good with VMware as the HP DSM is with Microsoft. The solution is to use LACP if you can on the nodes and create more LUNs. Each LUN connects through the VIP, which hands it off to a different node, so you get some load balancing that way.
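A toy sketch of why more LUNs help: each iSCSI session enters through the VIP and is handed to some node as its gateway. The node names and the round-robin policy below are invented for illustration; SAN/iQ's real gateway selection is internal to the cluster and not necessarily round-robin.

```python
# Illustrative only: the real gateway choice is made inside the SAN/iQ
# cluster behind the VIP. Node names and the round-robin policy are
# assumptions for demonstration, not HP's actual algorithm.
from itertools import cycle

def assign_gateways(luns, nodes):
    """Hand each LUN's iSCSI session to the next node in turn."""
    node_cycle = cycle(nodes)
    return {lun: next(node_cycle) for lun in luns}

nodes = ["4330-node1", "4330-node2", "4330-node3", "4330-node4"]

# One big LUN: every session lands on a single node.
print(assign_gateways(["LUN0"], nodes))

# Several smaller LUNs: sessions spread across all four nodes.
print(assign_gateways([f"LUN{i}" for i in range(8)], nodes))
```

With a single LUN only one node carries the session; with eight LUNs all four nodes end up serving traffic, which is the load balancing the post describes.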

Honored Contributor

Re: StoreVirtual 4330 Best Practice

Indeed, more volumes are better, since more nodes will be used as the initial gateway for the VIP address.


Be sure also to follow the best-practice guidelines mentioned in this document:


every physical NIC gets its own IP address and is then bound to the software iSCSI initiator...

in VMware 4.x this was a CLI-only task; in VMware 5.x it can also be done through the GUI. Check the doc...


Re: StoreVirtual 4330 Best Practice

Thanks for the suggestion of a Network RAID-10 cluster of four 4330 nodes with multiple volumes. In the past we did run tests with several volumes, having read this tip on the internet, but without seeing any appreciable performance increase.


I have read the document you indicated for LeftHand configuration with VMware, very useful, but one thing is unclear: so far we have pointed the ESXi iSCSI configuration at the LeftHand cluster's virtual IP address. If we instead set up connections between ESXi and the individual LeftHand node IPs using multipathing, as the document indicates, how does the cluster behave? Will LeftHand still replicate data between the cluster nodes even though the ESXi host sends data directly to a single node IP instead of to the 4330 virtual IP address?


Can you tell us where to read how the ESXi -> A5120 "IP hash" algorithm chooses the member link, and confirm that the A5120 -> LeftHand "MAC hash" algorithm is a standard MAC hash without proprietary behavior?


Thank you

Honored Contributor

Re: StoreVirtual 4330 Best Practice

I'm not an ESX guy so I'll leave that to others, but you are correct in targeting the VIP of the cluster. That address is shared among ALL the nodes, so when you connect to it, the cluster behind it decides which node actually takes the connection. Each connection can land on a different node, so more connections means better-balanced loads... not as good as the HP DSM for Microsoft, which talks to all the nodes, but that's not in the cards for ESX as far as I know.

Regular Advisor

Re: StoreVirtual 4330 Best Practice

oikjn is totally correct on all points and knows his stuff.

With the P4000 you are best to keep it as simple as possible: put all the nodes together and run them in ALB.

Latency is the kingmaker with iSCSI so try to keep it as simple and flat as possible.

Avoid cross-switch LACP and the like; it generally makes things slower.

TEST jumbo frames on 10 GbE and see if they improve performance with a real workload before implementing them - lots of switches slow down under heavy load with jumbo frames because their buffers cannot hold enough frames to be useful.

Generally I have found jumbo frames to increase performance under light loads, but kill it under heavy load in these scenarios.
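For what it's worth, the raw header-overhead saving from jumbo frames is small; the bigger effect is fewer frames (and interrupts) per byte transferred, at the cost of a larger per-frame buffer footprint. A quick back-of-the-envelope calculation, assuming plain TCP/IPv4 with no options and no VLAN tag:

```python
# Per-frame protocol overhead at standard vs jumbo MTU.
# Header sizes are the usual minimums (no IP/TCP options, no VLAN tag).
ETH_HDR = 14 + 4   # Ethernet header + FCS
IP_HDR = 20
TCP_HDR = 20

def overhead_pct(mtu):
    """Percentage of on-wire bytes spent on headers rather than payload."""
    payload = mtu - IP_HDR - TCP_HDR
    wire = mtu + ETH_HDR
    return 100 * (wire - payload) / wire

print(f"MTU 1500: {overhead_pct(1500):.1f}% overhead")
print(f"MTU 9000: {overhead_pct(9000):.1f}% overhead")
```

The fixed headers cost roughly 3.8% of wire bandwidth at MTU 1500 versus about 0.6% at MTU 9000, so on bandwidth alone the gain is modest; the test-before-deploying advice above stands.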


Kind Regards.


David Tocker




Frequent Advisor

Re: StoreVirtual 4330 Best Practice

Remember also flow control on the switch and the NICs; it is much more important than jumbo frames.


Re: StoreVirtual 4330 Best Practice

We have a 1 Gbps network, so we avoided jumbo frames: too difficult to configure correctly on every device, and with so little benefit at Gbps speeds. Flow control was configured on each end point and on the switches.


We still have doubts about the configuration of the link aggregation between ESXi and the A5120 IRF, and between the 4330 cluster and the A5120 IRF.


We do not understand precisely how flow distribution over aggregated links works.


Between the 4330 cluster nodes and the A5120 IRF we will try simple links with the ALB algorithm, as suggested.

The switching core consists of an IRF stack of four A5120s working as a single switch (a single MAC domain), so there should be no problem, even though we believed LACP was safer, being a multi-vendor standard.


For this connection, however, LACP should also work fine, since I believe the MAC hash distribution functions ensure an even distribution of flows across the four links.


For ESXi, the doubt is that, pointing at the virtual address and using the ESXi "IP hash" policy, distribution may always take place on the same link, unless the mechanism uses both source and destination IP. In that case, since each aggregated ESXi network card has a different IP address, we might choose different source IP addresses on the ESXi hosts in order to get a good distribution.
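To make the doubt concrete, here is a sketch of an IP-hash uplink choice along the lines VMware documents (an XOR of source and destination address taken modulo the number of active uplinks). Treat the exact formula and the addresses below as assumptions to verify against the vSphere docs for your version.

```python
# Sketch of IP-hash uplink selection, modeled on VMware's documented
# "source XOR destination IP, modulo uplink count" scheme. The formula
# and all addresses here are assumptions for illustration.
import ipaddress

def ip_hash_uplink(src, dst, n_uplinks):
    """Pick an uplink index from the source/destination IP pair."""
    s = int(ipaddress.ip_address(src))
    d = int(ipaddress.ip_address(dst))
    return (s ^ d) % n_uplinks

VIP = "10.0.0.100"  # hypothetical cluster virtual IP

# A single source talking to the fixed VIP always hashes to the same uplink...
print(ip_hash_uplink("10.0.0.11", VIP, 2))
print(ip_hash_uplink("10.0.0.11", VIP, 2))

# ...but different source IPs can land on different uplinks.
for src in ("10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14"):
    print(src, "->", ip_hash_uplink(src, VIP, 2))
```

With a fixed destination (the VIP), the chosen uplink varies only if the source IPs vary, which is exactly the concern raised above.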


If anyone has suggestions or links to hints for similar configurations...


Thank you


Re: StoreVirtual 4330 Best Practice

OK. After reading a bit of documentation, we arrived at these conclusions, at least until someone says otherwise...


We made a cluster of four HP StoreVirtual 4330 nodes configured in Network RAID-10 as suggested. Each 4330 has four network cards, all in an iSCSI VLAN and bonded together to maximize inter-node communication and storage access performance. This is questionable, but for now we decided to try it this way.


The network configuration is described below.


IMPORTANT: for ESXi, at least up to version 5.5, IP hash applies only to virtual machine, management, vMotion, ... connections. For iSCSI connections, if you follow the LeftHand/ESXi best practices, even with 802.3ad link aggregation configured, the VMkernel port groups will not use the IP hash distribution algorithm but rather a multipathing mechanism with round-robin distribution over the links (RR was the recommendation in the LeftHand/VMware best practices).
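A toy model of that round-robin behavior, loosely modeled on VMW_PSP_RR, which by default switches to the next path after a number of I/Os. The path names and the switching limit below are illustrative assumptions, not values read from a real host.

```python
# Toy model of round-robin path selection (akin to VMW_PSP_RR with its
# default iops=1000 switching limit). Path names are made up.
class RoundRobinPSP:
    def __init__(self, paths, iops_limit=1000):
        self.paths = paths
        self.iops_limit = iops_limit
        self.idx = 0
        self.sent = 0

    def next_path(self):
        """Return the path for the next I/O, rotating after iops_limit I/Os."""
        if self.sent >= self.iops_limit:
            self.idx = (self.idx + 1) % len(self.paths)
            self.sent = 0
        self.sent += 1
        return self.paths[self.idx]

psp = RoundRobinPSP(["vmhba33:C0:T0:L0", "vmhba33:C1:T0:L0"], iops_limit=1000)
used = {psp.next_path() for _ in range(4000)}
print(used)  # over enough I/Os, every bound path carries traffic
```

Because the path selection policy rotates over every bound path regardless of any link-aggregation hash, all links see traffic even with a single target IP. Some best-practice guides suggest lowering the I/O switching limit; verify against the document for your version before changing it.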


To connect LeftHand - IRF, we used 802.3ad dynamic link aggregation (LACP). We will see whether performance needs optimization or a configuration like ALB, although with the IRF being a real stack, ALB could create problems with MAC address management.


To connect ESXi - IRF, since we have vSS (standard vSwitch, not distributed - we have an Essentials Plus license...), we configured static link aggregation on the IRF. On ESXi we created the VMkernel port groups as indicated in the documentation:


- http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.storage.doc_50%2FGUID-0D31125F-DC9D-475B-BC3D-A3E131251642.html

- http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.storage.doc_50%2FGUID-37F97D1C-4E4F-460B-ACF9-04D1347959CC.html

- http://pubs.vmware.com/vsphere-50/index.jsp?topic=%2Fcom.vmware.vsphere.storage.doc_50%2FGUID-C1358DCD-E4F2-49A5-842D-EDADA1784E9A.html

- http://h20195.www2.hp.com/V2/GetPDF.aspx%2F4AA3-6918ENW.pdf



In this way, for the VMkernel port groups used for iSCSI connections, ESXi does not use the IP hash distribution algorithm but multipathing with MRU, RR, or Fixed path selection. This removes the complexity of assigning specific IPs and controlling the IP hash in order to obtain traffic distribution across the links.


We activated VIP load balancing (VIP-LB) on the LUNs in the LeftHand cluster (the default choice). So... in theory... we have achieved a correct setup with fair use of the links.


Any comments will be appreciated.

Thank you all.