Linux cluster using HP MC/ServiceGuard -- EVA 3000, md driver and lvm configuration
09-19-2004 08:53 PM
Hi,
I have run into some issues in a Linux cluster setup:
I am trying to use the 'md' driver in Linux for multipath access to shared storage on an Enterprise Virtual Array (EVA) 3000.
The OS is SuSE Linux Enterprise Server 8.0, which runs on both cluster nodes (HP ProLiant ML330 machines). I use QLogic HBAs (of different versions), both of which support failover.
Do I still need to use the driver patch for EVA controller failover?
I can add virtual disks from the EVA console, and they are detected without any additional Secure Path installation.
The EVA offers RAID levels 0, 1, and 5, and I guess I should use software RAID on Linux (implemented through the md driver) to provide multipath support. Is there any need to implement software RAID on top of an EVA that already has hardware RAID functionality (for example, if I want mirroring instead of multipath)?
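For clarity, the distinction as I understand it (device names below are only examples):

  # RAID 1 (mirroring): two separate LUNs, every write goes to both
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  # multipath: the SAME LUN seen twice, once through each HBA/path
  mdadm --create /dev/md1 --level=multipath --raid-devices=2 /dev/sdc1 /dev/sdd1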
The MC/SG documentation says that the 'md' driver should not be active on both cluster nodes at the same time, so I am not using the 'fd' partition type, which starts the arrays at boot time; I just use the default type 83.
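On the active node I then start and stop the array by hand (raidtools shown here; the entries come from /etc/raidtab):

  # start the array only on the node that owns the package
  raidstart /dev/md0
  # stop it again before the package can move to the other node
  raidstop /dev/md0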
Is there any specific need to update to a 2.6 kernel in order to support LVM 2?
The kernel version on the cluster nodes is 2.4.19, which ships LVM 1.0.5-51. It does not support 'pvremove', and the 'mkraid' tool probably has some bugs; I don't know whether there are fixes for these. Is it better to use 'mdadm'?
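For comparison, the same array created with mdadm instead of mkraid (no /etc/raidtab needed; the partitions are examples):

  mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --detail /dev/md0      # runtime state of the array
  mdadm --examine /dev/sda1    # on-disk md superblock of one member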
I have followed the sequence of steps listed in "Managing ServiceGuard for Linux" to create the logical volume infrastructure, but the first disk partition grouped using md does not become part of the array after rebooting the first node. After the reboot, the second node should get the configuration replicated from the primary node, but this does not happen for the first partition; only the second partition shows as active.
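This is what I check after the reboot (device names are examples again):

  cat /proc/mdstat                                # which arrays and members are active
  mdadm --examine /dev/sda1                       # is the superblock still on the first partition?
  mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1   # try to re-assemble it by hand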
09-20-2004 03:52 AM
Re: Linux cluster using HP MC/ServiceGuard -- EVA 3000, md driver and lvm configuration
Fedora Core, which uses the 2.6 kernel, only lists support for LVM v1.
It is likely that you will need to update the kernel to support LVM v2.
I have used the RAID tools in Red Hat and Fedora extensively, even to mirror LVM setups.
I think you can get away with using the built-in RAID tools of the OS in conjunction with LVM 1 to do the RAID.
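Roughly this pattern, with placeholder device and volume names:

  # mirror with md first, then put LVM1 on top of the md device
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  pvcreate /dev/md0
  vgcreate vg01 /dev/md0
  lvcreate -L 1G -n lvol1 vg01
  mke2fs /dev/vg01/lvol1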
I did a four-hour hands-on session at HP World using Linux ServiceGuard, Red Hat, and LVM. We were able to bring up a mirrored setup and test failover successfully using shared storage.
SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
09-20-2004 11:43 AM
Solution
The md driver is not supported with the EVA. You must use either Secure Path or the multipath function of the QLogic driver; for Serviceguard, the latter is recommended.
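With the QLogic driver, failover is usually switched on through a module option; the exact parameter name differs between driver releases, so take this line only as an illustration and check the README that ships with your driver:

  # /etc/modules.conf on a 2.4 kernel (parameter name may differ per release)
  options qla2300 ql2xfailover=1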
Serviceguard does not yet support SUSE SLES 9 (with the 2.6 kernel). Serviceguard supports only the LVM delivered with the distribution, for Red Hat 3 and SUSE SLES 8.