Operating System - HP-UX
11-21-2006 12:58 AM
connecting MCSG to SAN
Hi everyone,
I was wondering if you kind people could help me get my head around MCSG.
We haven't got it in place yet, but we will be moving to a two-node cluster early next year, so I'm starting to read up on it.
At present we have two Clariion CX500 arrays (using RAID 5) located in different buildings on the same campus. We will be connecting to these via fibre cards.
I'm happy enough for now with the package and monitoring concepts, but I could do with some help on the Clariion integration.
The questions I have so far are:
- On the Clariions, can you present one LUN to more than one server?
- How will the mirroring be done?
- What software will I require in addition to MCSG on the HP-UX servers, e.g. MirrorView, a volume manager, or PowerPath?
I'm trying to get a picture in my head of how it will all be connected, but unfortunately I haven't quite got there yet!
Any help is much appreciated.
3 REPLIES
11-21-2006 01:16 AM
Re: connecting MCSG to SAN
Shalom,
1) Yes, the Clariion can present a LUN to more than one server.
2) It isn't done on the hosts. The array handles redundancy, and that's RAID 5: you can survive the loss of some disks, but only a limited number.
3) There is a patch set required for ServiceGuard, so you will want the systems patched to a recent bi-annual patch bundle. You will also need drivers for your fibre cards; if they are HP fibre cards, the drivers are on the Application DC/DVD. A quick way to check is sketched below.
4) If you are connected directly, you may not need any add-on software such as Secure Path.
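For point 3, a minimal sketch of how to check the HBAs and installed software on the HP-UX side (the device file /dev/fcd0 is just an example; your instance numbers and driver, fcd vs. td, depend on the card):
# list the fibre channel HBAs the kernel can see, with device files
ioscan -fnC fc
# show driver and firmware details for one HBA
fcmsutil /dev/fcd0
# check which ServiceGuard and patch bundles are installed
swlist -l bundle | grep -i serviceguard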
SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
11-21-2006 01:56 AM
Re: connecting MCSG to SAN
Thank you for getting back to me, Steven.
OK, so after setting up the shared VGs on the first server, I don't have to use MirrorDisk/UX?
Also, we'll be using dual fibres to connect to the switch; how will the servers know which one to use?
11-22-2006 09:05 AM
Re: connecting MCSG to SAN
LVM can track multiple paths to a physical disk or LUN.
When creating a VG, include all paths to the disk/LUN. /etc/lvmtab will then contain every path, which enables LVM to fail over to an alternate path if the primary one fails.
If you forget to include the alternate paths, use vgextend to add them later, as in the sketch below.
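A minimal sketch of how that looks, assuming the same LUN appears as /dev/dsk/c4t0d1 through one HBA and /dev/dsk/c6t0d1 through the other (your controller and target numbers will differ):
# initialise the LUN once, via its primary path
pvcreate /dev/rdsk/c4t0d1
# create the VG with both paths; the second becomes the alternate PV link
mkdir /dev/vgshared
mknod /dev/vgshared/group c 64 0x020000
vgcreate /dev/vgshared /dev/dsk/c4t0d1 /dev/dsk/c6t0d1
# if the alternate path was forgotten at creation time, add it afterwards
vgextend /dev/vgshared /dev/dsk/c6t0d1
# vgdisplay -v lists the alternate link against the same PV
vgdisplay -v /dev/vgshared
On a ServiceGuard cluster you would then typically mark the VG cluster-aware with vgchange -c y before exporting the map to the second node.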