Campus cluster setup: How about LUNs visibility/pv links?
06-19-2003 04:38 AM
Each of them will be delivered with two PCI 2Gb FC HBAs.
Any docs/guidelines/advice on a proper mirror configuration plus PV links setup within such a campus cluster?
Indeed, since each server has only two HBAs, can I set up both mirroring and PV links? If so, the zoning should be made accordingly at the FC SAN director level; but beyond that, is there anything else to consider at the purely LVM level on these servers?
Thanks in advance.
Sincerely,
Romaric GUILLOUD.
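For reference, combining an LVM mirror with PV links on HP-UX generally looks like the sketch below. All device files, the volume group name, and sizes are illustrative placeholders, not details from this thread; the pattern is that `vgextend` with a second device path to a LUN already in the VG registers that path as an alternate (PV) link.

```shell
# Illustrative cXtYdZ paths: one LUN from each HDS array,
# each LUN visible through both HBAs.

pvcreate /dev/rdsk/c4t0d1                 # LUN on HDS #1 (primary path, HBA 1)
pvcreate /dev/rdsk/c6t0d1                 # LUN on HDS #2 (primary path, HBA 1)

mkdir /dev/vgcampus
mknod /dev/vgcampus/group c 64 0x020000   # minor number must be unique on the host

vgcreate /dev/vgcampus /dev/dsk/c4t0d1    # primary path to HDS #1 LUN
vgextend /dev/vgcampus /dev/dsk/c5t0d1    # same LUN via HBA 2 -> becomes a PV link
vgextend /dev/vgcampus /dev/dsk/c6t0d1    # primary path to HDS #2 LUN
vgextend /dev/vgcampus /dev/dsk/c7t0d1    # same LUN via HBA 2 -> PV link

# One mirror copy, strict allocation so each copy lands on a different PV
lvcreate -m 1 -s y -n lvdata -L 2048 /dev/vgcampus
```

MirrorDisk/UX must be installed for `-m 1` to work; strictness (`-s y`) keeps the two copies on different physical volumes, i.e. one per array.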
Solved! Go to Solution.
06-19-2003 05:46 AM
Re: Campus cluster setup: How about LUNs visibility/pv links?
The book "Managing MC/ServiceGuard" should tell you what you asked. Just a note for you: PV links are independent of MC/ServiceGuard. They are controlled by LVM, not by MC/ServiceGuard.
In case you do not have a hardcopy of the book, you can get a softcopy at docs.hp.com.
Hai
06-19-2003 06:03 AM
Re: Campus cluster setup: How about LUNs visibility/pv links?
What kind of disks are they? If they are EMC, or some other high-availability storage, which most people have today, you don't need to mirror: the disks are already mirrored internally.
As stated previously, PV links don't have anything to do with MC/SG.
Do you have EMC PowerPath? If so, you don't need PV links.
So, what kind of disks are they?
06-19-2003 06:10 AM
Re: Campus cluster setup: How about LUNs visibility/pv links?
Also, there is host-based LUN masking, which is handled by LVM on the server, and storage-based LUN masking, which is handled by products on the disk array such as 'Secure Manager' on top of 'Command View'.
The HBAs in your servers are 'read-only' Fibre Channel devices that rely on the topology determined by either the SAN switch or the end device. If you are using a SAN switch, such as a Brocade, the HBA cannot see the other side of the switch, only up to it. This matters for statistics: only the switch can see the entire SAN and accumulate stats across it.
06-19-2003 06:14 AM
Re: Campus cluster setup: How about LUNs visibility/pv links?
06-19-2003 06:52 AM
Re: Campus cluster setup: How about LUNs visibility/pv links?
I have two HDS 9980V arrays in both DataCenters, hooked up by Brocade 12000 core directors.
I was therefore thinking about the following SAN topology to address my need for Campus cluster:
WWN1 being my source LUN within HDS #1.
WWN2 being my mirror LUN within HDS #2:
Zoning WWN1 with both HBAs from node 1 and zoning WWN2 with both HBAs from node 2.
Using ISL feature on the 12000 directors:
Zoning WWN1 with both HBAs from node 2 and zoning WWN2 with both HBAs from node 1.
This way I can make use of PV links on top of my LVM mirror, as the disks in the arrays are only RAID 5 protected.
Does it make any sense to you?
Thanks in advance for your feedback; I appreciate it.
Sincerely,
Romaric.
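The zoning described above could be expressed in Brocade Fabric OS roughly as follows. All alias names, zone names, and WWPNs are placeholders for illustration only; the second pair of zones reaches the remote array's port across the ISL.

```shell
# Aliases for node 1's HBAs and the array ports presenting the two LUNs
alicreate "node1_hba1", "10:00:00:00:c9:00:00:01"
alicreate "node1_hba2", "10:00:00:00:c9:00:00:02"
alicreate "hds1_port",  "50:06:0e:80:00:00:00:01"   # port presenting WWN1
alicreate "hds2_port",  "50:06:0e:80:00:00:00:02"   # port presenting WWN2

# Node 1 sees its local LUN and, via the ISL, the mirror LUN
zonecreate "z_n1_hds1", "node1_hba1; node1_hba2; hds1_port"
zonecreate "z_n1_hds2", "node1_hba1; node1_hba2; hds2_port"   # crosses the ISL

cfgcreate "campus_cfg", "z_n1_hds1; z_n1_hds2"
cfgenable "campus_cfg"
cfgsave
```

Node 2 would get the mirror-image pair of zones, so each node has both HBAs zoned to both arrays.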
06-19-2003 12:21 PM
Re: Campus cluster setup: How about LUNs visibility/pv links?
XPs are rebadged Hitachis, and most XP users use LVM striping for best performance. If you choose to go this route, many use PVGs in four-disk groups; otherwise, extending lvols becomes a big problem. (For example, if you make an lvol with 24 disks, you max it out two years later and discover you can only extend it by adding another 24 disks. PVGs overcome this problem.)
MC/SG has no special HW considerations at the campus cluster level, so it's a matter of getting the latest software releases. For example, what happens to your failovers if the disk array fails? (This one catches a lot of SAs.) You'll need EMS for MC/SG for this.
There's also a problem within the Hitachi disk array that you'll have to talk to them about. Earlier models had two disk controllers, and they'd thrash if one got a disk transaction belonging to the other; all the even disks went with one while all the odd disks went with the other. You have to factor this in whether or not you stripe.
(* PS - Don't forget to assign points for everyone. :-) *)
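The four-disk PVG approach described above can be sketched like this. The VG name, PVG names, and device paths are illustrative only: PVGs are declared in `/etc/lvmpvg`, and a distributed, PVG-strict lvol then round-robins extents across the members of each group.

```shell
# /etc/lvmpvg -- physical volume groups of four disks each (paths illustrative)
# VG /dev/vgdata
# PVG pvg0
# /dev/dsk/c4t0d0
# /dev/dsk/c4t0d1
# /dev/dsk/c4t0d2
# /dev/dsk/c4t0d3
# PVG pvg1
# /dev/dsk/c5t0d0
# /dev/dsk/c5t0d1
# /dev/dsk/c5t0d2
# /dev/dsk/c5t0d3

# Distributed allocation (-D y) with PVG-strict placement (-s g):
lvcreate -D y -s g -n lvora -L 8192 /dev/vgdata
```

Extending later in four-disk increments (one PVG at a time) keeps the extent distribution balanced, which is the point made above about 24-disk lvols.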
06-19-2003 01:05 PM
Re: Campus cluster setup: How about LUNs visibility/pv links?
So, you don't want to logically stripe on top of the physical striping already ensured by the cabinet itself...
Thanks anyway for your time.
Regards,
Romaric.
06-19-2003 04:50 PM
Re: Campus cluster setup: How about LUNs visibility/pv links?
06-20-2003 12:47 AM
Solution
I don't think you should necessarily discard striping altogether; it can still win you performance by striping across LUNs in different array groups. Have a look at this document from Oracle and HP about the XP512 (an older version of the HDS box, but architecturally similar) - this is probably true for other databases and applications.
Of course you won't be able to do normal striping since you are doing LVM mirroring, but you can still consider extent-based striping. Of course this means a stripe size of 4MB, which may mean you get no performance benefit anyway - if you have the time, I'd give it a try...
With regards to your SAN topology - you will be using fabric mode (I can see no reason why you would use FCAL/Quickloop when everything in your environment is capable of fabric logon). But with just two switches you need to be careful about how you configure this...
The attached diagram 1 shows how I implemented this a few years ago with EMC disk arrays and Brocade 2800s - note that I used 4 switches to provide two separate SANs for redundancy.
Now you only have two switches, so you have to implement like diagram 2, and this makes the 12000s a single point of failure in terms of keeping your disks mirrored. (Yes, I know they're supposed to be director class, but they still have SPOFs.) Luckily the 12000s are really two switches 'strapped together', so you can mitigate your risk to some extent by making sure the ISLs come from different sides of the cabinet, as shown in diagram 3.
With this config you should be using two cluster locks, one in each array... but you need to be really sure that ALL your cable runs between the two sites are diversely routed. If there's any way a single 'incident' could take out all connectivity between the two sites (all the SAN connections and all the LAN connections), then you run the risk of split-brain syndrome. If you have a good HA network, a better solution might be a quorum server at a third location.
Hope this is useful - enjoy building this config, this kind of thing is FUN! I wish I had the opportunity to do it more often myself!
HTH
Duncan
I am an HPE Employee
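A rough sketch of the dual-cluster-lock arrangement described above, as it would appear in the ServiceGuard cluster ASCII configuration file. The cluster name, VG names, and device paths are placeholders; the point is that each node declares a first and second lock PV, one in each array.

```shell
# Fragment of a ServiceGuard cluster ASCII file (names and paths illustrative)
# CLUSTER_NAME             campus_cluster
# FIRST_CLUSTER_LOCK_VG    /dev/vglock1
# SECOND_CLUSTER_LOCK_VG   /dev/vglock2
#
# NODE_NAME                node1
#   NETWORK_INTERFACE        lan0
#   FIRST_CLUSTER_LOCK_PV    /dev/dsk/c4t0d0   # lock disk in HDS #1
#   SECOND_CLUSTER_LOCK_PV   /dev/dsk/c6t0d0   # lock disk in HDS #2

# Check and apply the configuration:
cmcheckconf -C cluster.ascii
cmapplyconf -C cluster.ascii
```

The quorum-server alternative mentioned above replaces the lock disks entirely, which avoids the split-brain exposure when both sites lose all inter-site links at once.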

06-20-2003 12:48 AM
Re: Campus cluster setup: How about LUNs visibility/pv links?
Cheers
Duncan
I am an HPE Employee

06-20-2003 01:55 AM
Re: Campus cluster setup: How about LUNs visibility/pv links?
http://otn.oracle.com/deploy/availability/pdf/SAME_HP_WP_112002.pdf
Obviously I have my mind on other things this morning.
Cheers
duncan
I am an HPE Employee

06-20-2003 04:00 AM
Re: Campus cluster setup: How about LUNs visibility/pv links?
Rgds,
Romaric.