
Lefthand P4300

 
James2132
New Member

Lefthand P4300

I am wondering if anyone on these forums has a similar setup or experience working with the LeftHand P4300 system.

We are looking at purchasing two P4300 SANs for use at two different sites, each in a secured datacentre, within the same geographic region.

We are looking to achieve high availability in the event of a site failure, and also to use clustering for performance gains on our virtualised servers, using the P4300 and shared storage.

One of my greatest concerns is with MS SQL, as this is likely to be the most sensitive application we have.

I have gone through the demo videos, but I can't work out the failover time or how the devices respond to a partial link failure.

We will be running redundant dual links between the sites, with one LeftHand P4300 plus an HP hardware server at each site, and H/A on MS SQL.

Our major concern is what happens if the links between the two sites fail (an outage, broadcast storm, etc.) and the sites lose visibility of each other, while the LeftHand boxes, hardware servers, and virtual servers stay online and can still access the shared storage at their own site, just not across sites.

What happens then, given that each SQL instance in the high-availability pair can still access the storage local to its site and make updates and modifications there?

When the link is restored and both sites can see each other again, if the data updates/modifications are now out of sync, how is this reconciled across the whole solution?

Our major concern, and the reason for looking at the P4300, is obviously to mitigate against hardware failure by ensuring dual components, redundant links, etc., and to make sure we can survive a partial loss of those links.

Any advice would be greatly appreciated.



teledata
Respected Contributor

Re: Lefthand P4300

You need to create a true Multi-Site SAN to avoid failure due to a site-link failure. That means two Virtual IPs, each in its own subnet. Be aware your two sites must have Gigabit connectivity with an average latency no greater than 2-3 ms.
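
If you want a quick sanity check on that latency figure before committing, here is a rough Python sketch (not an HP tool; the node IP is a placeholder for one of your own nodes). It times TCP connects to a remote node's iSCSI port and compares the average to the 2-3 ms guideline; connect time is only an approximation of one round trip:

```python
# Rough inter-site latency check (an illustrative sketch, not an HP
# tool). It times TCP connects to a remote node's iSCSI port (3260)
# and compares the average to the ~2-3 ms guideline above. The node
# IP below is a placeholder for one of your own P4300 nodes.
import socket
import time

REMOTE_NODE = "10.1.2.10"  # hypothetical node IP at the far site
ISCSI_PORT = 3260          # standard iSCSI target port
SAMPLES = 20

rtts = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    # Raises on timeout/refusal, which is itself a useful signal.
    with socket.create_connection((REMOTE_NODE, ISCSI_PORT), timeout=2):
        pass  # the completed connect is our round trip
    rtts.append((time.perf_counter() - start) * 1000.0)  # in ms
    time.sleep(0.1)

avg = sum(rtts) / len(rtts)
print(f"average connect RTT: {avg:.2f} ms over {SAMPLES} samples")
print("within guideline" if avg <= 3.0 else "too slow for multi-site")
```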

Be aware, both sites WILL NOT have access to shared storage in the event of a link failure. Quorum is established by having connectivity to a majority of the managers within a cluster. In a cluster with an even number of nodes (like yours, with 2) you would run a Failover Manager (a virtual server that runs a SAN/iQ manager). Only the site that has connectivity to the Failover Manager will maintain quorum (and thus access to storage).
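
To make the quorum arithmetic concrete, here's a toy Python model of a site-link partition (my own illustration, not SAN/iQ code; the manager names are made up). With one manager per node plus the FOM, only the site that can still see the FOM holds a majority:

```python
# Toy model of SAN/iQ quorum during a site-link partition (my own
# illustration, not HP code; manager names are made up). One manager
# runs on each storage node, plus the Failover Manager (FOM).
ALL_MANAGERS = {"node-site-A", "node-site-B", "fom"}

def has_quorum(reachable):
    # Quorum requires connectivity to a strict majority of managers.
    return len(set(reachable) & ALL_MANAGERS) > len(ALL_MANAGERS) / 2

# Site link goes down with the FOM hosted at (or reachable from) site A:
site_a_view = {"node-site-A", "fom"}  # site A sees itself plus the FOM
site_b_view = {"node-site-B"}         # site B is isolated

print("Site A keeps storage online:", has_quorum(site_a_view))  # True
print("Site B keeps storage online:", has_quorum(site_b_view))  # False
```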

If you want to maintain BOTH sites through a site failure, you would need at least two multi-site SAN clusters (four nodes minimum), where each site hosts a Failover Manager for the cluster considered its "primary". I set this up for a two-site hospital: two multi-site clusters, so each site could maintain connectivity.
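
The same toy majority rule shows why both sites can stay up in that four-node, two-cluster design: during a link failure, each cluster still has a strict majority of its own managers reachable from its primary site (again, the names below are illustrative only):

```python
# The same toy majority rule applied to the 4-node, 2-cluster design
# (illustrative names only). Each cluster spans both sites, but its
# FOM lives at the site considered that cluster's "primary".
def has_quorum(reachable, cluster_managers):
    return len(set(reachable) & cluster_managers) > len(cluster_managers) / 2

cluster1 = {"c1-node-A", "c1-node-B", "c1-fom-at-A"}  # FOM at site A
cluster2 = {"c2-node-A", "c2-node-B", "c2-fom-at-B"}  # FOM at site B

# Site link fails: each site can reach only its locally hosted managers.
site_a_sees = {"c1-node-A", "c2-node-A", "c1-fom-at-A"}
site_b_sees = {"c1-node-B", "c2-node-B", "c2-fom-at-B"}

print("Cluster 1 stays up at site A:", has_quorum(site_a_sees, cluster1))  # True
print("Cluster 2 stays up at site B:", has_quorum(site_b_sees, cluster2))  # True
```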

With this configuration, their VMware, Exchange, and SQL volumes stayed online through our various failure tests (single node, site link, etc.).
http://www.tdonline.com
James2132
New Member

Re: Lefthand P4300

Thanks for taking the time to get back to me.

While we are looking for resilience if we lose both redundant links between the datacentres, we only need to make sure the solution stays up and the data stays current on the storage media. Currently, the budget we have for this project will only cover the cost of two nodes, with future incremental upgrades as necessary.

My apologies if I am asking something simple, but on the two-node setup, does that mean the FOM becomes the single point of failure? I.e., what happens if both sites lose connectivity to that service?

Also, do the HP blades have a direct iSCSI session open to the FOM, or directly to the shared storage?
teledata
Respected Contributor

Re: Lefthand P4300

The FoM is not really a single point of failure; it provides quorum.

Consider this:

You have access to the following:

Node1 + Node2 + FoM = Access to Storage!

Node1 + FoM = Access to Storage!

Node1 + Node2 = Access to Storage!

Node2 + FoM = Access to Storage!

As you see, you simply need access to 2 out of 3 managers to maintain quorum (and access to storage).
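
Spelling out that arithmetic mechanically (an illustrative snippet, nothing more): any 2 of the 3 managers form a strict majority, and any single manager on its own does not:

```python
# Any 2 of the 3 managers form a strict majority; any single manager
# does not (illustrative snippet only).
from itertools import combinations

managers = ["Node1", "Node2", "FoM"]
for r in (1, 2, 3):
    for combo in combinations(managers, r):
        quorum = len(combo) > len(managers) / 2
        print(" + ".join(combo), "=",
              "Access to Storage!" if quorum else "no quorum")
```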

So with 2 nodes, you can keep ONE site up, but ONLY the site that maintains access to the FoM. You can keep the FoM at either site, but you should keep it at the site where you want to maintain storage connectivity in the event of a site-link failure.
http://www.tdonline.com