Operating System - HP-UX

Campus cluster setup: How about LUNs visibility/pv links?

 
SOLVED
Romaric Guilloud
Regular Advisor

Campus cluster setup: How about LUNs visibility/pv links?

I'm about to set up two rp5405 HP-UX 11.11 servers in an MC/ServiceGuard 11.14 campus cluster.

Each of them will be delivered with two 2Gb PCI FC HBAs.
Any docs/guidelines/advice on the proper mirror configuration and PV link setup within such a campus cluster?

Indeed, since each server has only two HBAs, can I set up both mirroring and PV links? If so, the zoning should be configured accordingly at the FC SAN director level; but then, is there anything else to consider at the purely LVM level on these servers?

Thanks in advance.
Sincerely,

Romaric GUILLOUD.
"And remember: There are no stupid questions; there are only stupid people." (To Homer Simpson, in "The Simpsons".)
Hai Nguyen_1
Honored Contributor

Re: Campus cluster setup: How about LUNs visibility/pv links?

Romaric,

The book "Managing MC/ServiceGuard" should tell you what you asked. Just a note: PV links are independent of MC/ServiceGuard. They are controlled by LVM, not by MC/ServiceGuard.

In case you do not have a hardcopy of the book, you can get a softcopy at docs.hp.com.
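For instance, a minimal sketch (the device file names are hypothetical, and this assumes the volume group's group file has already been created): adding the second path to a PV that is already in the VG is all it takes; LVM matches the PVID and records the new path as an alternate link:

```shell
# Create the VG using the primary path to the LUN
vgcreate /dev/vg01 /dev/dsk/c4t0d0

# Add the second path to the SAME LUN (seen through the other HBA);
# LVM notices the matching PVID and records it as an alternate PV link
vgextend /dev/vg01 /dev/dsk/c6t0d0

# The alternate link shows up in the PV status
vgdisplay -v /dev/vg01
```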

Hai
Stuart Abramson_2
Honored Contributor

Re: Campus cluster setup: How about LUNs visibility/pv links?

Romaric:

What kind of disks are they? If they are EMC, or some other high-availability storage, which most people have today, you don't need to mirror: the disks are already mirrored internally.

As stated previously, PV links don't have anything to do with MC/SG.

Do you have EMC PowerPath? If so, you don't need PV links.

So, what kind of disks are they?
Michael Steele_2
Honored Contributor

Re: Campus cluster setup: How about LUNs visibility/pv links?

This is SAN data security in an any-to-any environment (fabric topology). Sets of hosts and devices inside a zone are not visible to entities outside the zone. In fabric zoning, name-server information is cross-referenced to direct traffic via the WWN (World Wide Name). Within fabric zoning there is hard zoning, which is firmware-controlled and the most reliable, and soft zoning, which is the name-server implementation and not as reliable as the firmware approach.

There is also host-based LUN masking, which is handled by LVM on the server, and storage-based LUN masking, which is handled by products on the disk array like 'Secure Manager' atop 'Command View'.

HBAs within your servers are 'read-only' Fibre Channel devices that rely upon the topology determined by either the SAN switch or the end device. If you are using a SAN switch, like Brocade, the HBA cannot see the other side of the switch, only up to it. This becomes an issue for statistics: only the switch can see the entire SAN, so only it can accumulate stats that way.
Support Fatherhood - Stop Family Law
Michael Steele_2
Honored Contributor

Re: Campus cluster setup: How about LUNs visibility/pv links?

Your disk array should be the driving component in this arrangement, for some work best with LVM striping and others do not. Do you have a VA, XP, EMC, or what?
Romaric Guilloud
Regular Advisor

Re: Campus cluster setup: How about LUNs visibility/pv links?

Thanks Michael for the hint.
I have two HDS 9980V arrays, one in each data center, hooked up by Brocade 12000 core directors.
I was therefore thinking about the following SAN topology to address my need for a campus cluster:
WWN1 being my source LUN within HDS #1,
WWN2 being my mirrored LUN within HDS #2:
Zoning WWN1 with both HBAs from node 1, and zoning WWN2 with both HBAs from node 2.

Using the ISL feature on the 12000 directors:
Zoning WWN1 with both HBAs from node 2, and zoning WWN2 with both HBAs from node 1.

This way I can make use of PV links on top of my LVM mirror, as the disks in the arrays are only RAID 5 protected.
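To make the intent concrete, here is a rough sketch of what that combination looks like at the LVM level (all device file names and sizes are hypothetical): each LUN is given a primary and an alternate path (the PV links), and the MirrorDisk/UX mirror is then built across the two arrays:

```shell
# LUN on HDS #1: primary path via HBA 1, alternate path via HBA 2
pvcreate /dev/rdsk/c4t0d0
vgcreate /dev/vg01 /dev/dsk/c4t0d0      # primary link
vgextend /dev/vg01 /dev/dsk/c6t0d0      # alternate link, same LUN

# LUN on HDS #2: again two paths
pvcreate /dev/rdsk/c5t0d0
vgextend /dev/vg01 /dev/dsk/c5t0d0      # primary link
vgextend /dev/vg01 /dev/dsk/c7t0d0      # alternate link, same LUN

# Create the lvol, then add a mirror copy on the HDS #2 LUN
lvcreate -L 1024 -n lvdata /dev/vg01
lvextend -m 1 /dev/vg01/lvdata /dev/dsk/c5t0d0
```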

Does it make any sense to you?
Thanks in advance for your feedback.
I appreciate.

Sincerely,

Romaric.
Michael Steele_2
Honored Contributor

Re: Campus cluster setup: How about LUNs visibility/pv links?

So your topology is FC-AL and you probably want to seriously consider LVM striping for best performance.

XPs are rebadged Hitachis, and most XP users use LVM striping for best performance. If you choose to go this route, many use PVGs in 4-disk groups; otherwise, extending lvols becomes a big problem. (* Like, if you make an lvol with 24 disks, you max out two years later and discover that you can only extend by adding another 24 disks. PVGs overcome this problem. *)
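For what it's worth, PVGs are just named groups of PVs declared in /etc/lvmpvg; a sketch with hypothetical VG and device names, one 4-disk group per side:

```shell
# /etc/lvmpvg -- physical volume groups for vg01
VG  /dev/vg01
PVG pvg_side_a
/dev/dsk/c4t0d0
/dev/dsk/c4t0d1
/dev/dsk/c4t0d2
/dev/dsk/c4t0d3
PVG pvg_side_b
/dev/dsk/c5t0d0
/dev/dsk/c5t0d1
/dev/dsk/c5t0d2
/dev/dsk/c5t0d3
```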

MC/SG has no special HW considerations at the campus cluster level, so it's a matter of getting the latest software releases. For example, what happens to your failovers if the disk array fails? (* This one catches a lot of SAs. *) You'll need EMS for MC/SG for this.

There's also a problem within the Hitachi disk array that you'll have to talk to them about. Earlier models had two disk controllers, and they'd thrash if one got a disk transaction belonging to the other: all the even-numbered disks went with one controller, while all the odd-numbered disks went with the other. You have to factor this in whether or not you stripe.

(* PS - Don't forget to assign points for everyone. :-) *)
Romaric Guilloud
Regular Advisor

Re: Campus cluster setup: How about LUNs visibility/pv links?

Striping doesn't make much sense here, as a LUN is already spread onto 6 HVE within the HDS cabinet.
So, you don't want to logically stripe on top of the physical striping already ensured by the cabinet itself...
Thanks anyway for your time.
Regards,
Romaric.
Michael Steele_2
Honored Contributor

Re: Campus cluster setup: How about LUNs visibility/pv links?

Well, monitor your I/O with 'fcmsutil', and when all of your transactions go out the primary PV link while the alternate remains mostly idle, then it will make sense.
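Something along these lines (the td instance numbers are assumptions; check ioscan for your actual HBA device files):

```shell
# List the FC HBAs and their device files
ioscan -fnC fc

# Per-HBA statistics -- compare the two counters over time to see
# whether the alternate link is actually carrying any I/O
fcmsutil /dev/td0 stat
fcmsutil /dev/td1 stat
```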
Solution

Re: Campus cluster setup: How about LUNs visibility/pv links?

Romaric,

I don't think you should necessarily discard striping altogether; it can still gain you performance by striping across LUNs in different array groups. Have a look at the document from Oracle and HP about the XP512 (an older version of the HDS box, but architecturally similar); this is probably true for other databases and applications too.

Of course you won't be able to do normal striping as you are doing LVM mirroring, but you can still consider extent-based striping. That means a stripe size of 4MB, which may mean you get no performance benefit anyway, but if you have the time I'd give it a try...
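i.e. something like this sketch (the PVG names are assumed to already exist in /etc/lvmpvg): distributed allocation round-robins the 4MB extents across the PVs of one group while PVG-strict allocation keeps the mirror copy in the other group:

```shell
# -D y : distributed allocation -- spread extents across the PVs of the PVG
# -s g : PVG-strict -- the mirror copy must land in a different PVG
# -m 1 : one mirror copy (MirrorDisk/UX)
lvcreate -D y -s g -m 1 -L 4096 -n lvdata /dev/vg01
```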

With regards to your SAN topology: you will be using fabric mode (I can see no reason why you would use FC-AL/QuickLoop when everything in your environment is capable of fabric login). But with just two switches you need to be careful about how you configure this...

The attached diagram 1 shows how I implemented this a few years ago with EMC disk arrays and Brocade 2800s; note that I used four switches to provide two separate SANs for redundancy.

Now you only have two switches, so you have to implement it like diagram 2, and this makes the 12000s a single point of failure in terms of keeping your disks mirrored. (Yes, I know they're supposed to be director class, but they still have SPOFs.) Luckily the 12000s are really two switches 'strapped together', so you can mitigate your risk to some extent by making sure the ISLs come from different sides of the cabinet, as shown in diagram 3.

With this config you should be using two cluster locks, one in each array. But you need to be really sure that ALL your cable runs between the two sites are diversely routed: if there's any way a single 'incident' could take out all connectivity between the two sites (all the SAN connections and all the LAN connections), then you run the risk of split-brain syndrome. If you have a good HA network, a better solution might be a quorum server at a third location.
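For reference, both variants are just parameters in the cluster ASCII file (all names and device files here are hypothetical, and the two approaches are alternatives, not combined):

```shell
# Dual cluster lock: one lock VG/PV in each array
FIRST_CLUSTER_LOCK_VG  /dev/vglock1
SECOND_CLUSTER_LOCK_VG /dev/vglock2
NODE_NAME node1
  FIRST_CLUSTER_LOCK_PV  /dev/dsk/c4t0d0   # lock disk in HDS #1
  SECOND_CLUSTER_LOCK_PV /dev/dsk/c5t0d0   # lock disk in HDS #2

# ...or, instead of lock disks, a quorum server at a third site:
QS_HOST qs-site3
QS_POLLING_INTERVAL 300000000
```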

Hope this is useful, enjoy building this config, this kind of thing is FUN! - I wish I had the opportunity to do it myself more often!

HTH

Duncan

I am an HPE Employee

Re: Campus cluster setup: How about LUNs visibility/pv links?

oops, forgot to attach the diagrams...

Cheers

Duncan


Re: Campus cluster setup: How about LUNs visibility/pv links?

Also, I didn't put in the link to the Oracle doc!
http://otn.oracle.com/deploy/availability/pdf/SAME_HP_WP_112002.pdf
Obviously I have my mind on other things this morning.

Cheers

duncan

Romaric Guilloud
Regular Advisor

Re: Campus cluster setup: How about LUNs visibility/pv links?

Thanks Duncan, have a good day there.
Rgds,

Romaric.