StoreVirtual Storage

P4500 best practice


I have a short question about the LeftHand P4500.
Our customer has a LeftHand P4500 solution with four nodes, two at each of two locations. Each LeftHand node has two NICs. The question is whether it is possible to build two separate network fabrics (as with Fibre Channel) and connect each NIC to one of the fabrics, or whether all switches should be interconnected and all NICs placed in the same subnet.

Thank you,
Andrew Manhein
Occasional Advisor

Re: P4500 best practice

Will the customer use these nodes in a Multi-Site SAN configuration inside one managed cluster, or will they create two clusters to separate the network and data traffic between the sites? In either case, two subnets with one VIP per subnet are recommended for best performance.

If they go with Multi-Site, data will be read from and written to all nodes, taking advantage of the drive spindles in all four nodes for increased performance. The downside is that you should have at least 1 Gbps of bandwidth between the sites for decent performance (assuming the node NICs are also 1 Gbps). If the nodes are placed into two different clusters, the WAN bandwidth is not as critical, since each site will only read from and write to its local cluster nodes (in this case, two nodes at each site).

The performance benefit of utilizing all four nodes in a single cluster is somewhat negated if you use Network RAID 10 (a.k.a. 2-way replication) to ensure that you do not experience downtime from a failed node, but it is highly recommended. You should also bond the NICs using ALB for failover and increased bandwidth.

If you are not using VMware to connect to the SAN, you should create two separate subnets - one for each site - and create a VIP for each subnet. The servers that attach to the LUNs should connect to the VIP associated with their own site. This configuration keeps most of the I/O traffic on the local subnet.

You should also install the LeftHand/StorageWorks MPIO DSM on Windows servers that have at least two bonded NICs for increased performance. I believe you can install it on servers with only one NIC and still automatically attach to several storage nodes via the local cluster VIP, but performance will max out at the speed of that single server NIC. There are differing opinions on whether you should create a stretched VLAN across multiple sites.
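To make the per-site VIP idea concrete, here is a minimal sketch of how a Linux server at one site might attach to its local VIP using the standard open-iscsi tools. The subnets, VIP addresses, and target IQN are placeholder assumptions for illustration, not values from this thread:

```shell
# Assumed layout (hypothetical): Site A subnet 10.1.1.0/24 with VIP 10.1.1.100,
# Site B subnet 10.2.2.0/24 with VIP 10.2.2.100.
# A server at Site A discovers targets through its LOCAL VIP only,
# so most iSCSI traffic stays on the local subnet.
iscsiadm -m discovery -t sendtargets -p 10.1.1.100:3260

# Log in to a discovered target (the IQN below is a placeholder).
iscsiadm -m node \
         -T iqn.2003-10.com.lefthandnetworks:mgmt-group:1:vol01 \
         -p 10.1.1.100:3260 --login
```

A server at Site B would do the same against 10.2.2.100. The point is simply that each server discovers and logs in through the VIP of its own subnet rather than a single SAN-wide address.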
VMware vSphere ESX Server does not currently allow connections to multiple VIPs, but there are workarounds that achieve roughly the same goal. Let me know if you have further questions.