
Linux cluster using HP MC/ServiceGuard -- EVA 3000, md driver and lvm configuration

Biju A
Occasional Visitor



In a Linux cluster setup, I have a few issues:

I am trying to use the 'md' driver in Linux to provide multipath access to shared storage on an Enterprise Virtual Array 3000.

The OS is SuSE Linux ES 8.0, which runs on both cluster nodes (HP ProLiant ML330 machines). I use QLogic HBAs (of different versions), both of which support failover.

Do I still need to use the Driver Patch for EVA Controller Failover?

I can add Virtual Disks from the EVA console, and they are detected without an additional Secure Path installation.

The EVA offers RAID levels 0, 1, and 5 in hardware, and I assume I would use Linux software RAID (implemented through the md driver) only to provide multipath support. Is there any need to implement software RAID on top of an EVA that already has hardware RAID functionality (for example, if I want mirroring instead of multipath)?
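For reference, using md purely for multipath (not RAID) would look roughly like the sketch below. The device names are assumptions and depend on how the two HBA paths to the LUN show up on the system:

```shell
# Sketch: md multipath personality over two paths to one EVA virtual disk.
# /dev/sda1 and /dev/sdb1 are assumed to be two paths to the SAME LUN.
mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sda1 /dev/sdb1

# Verify that both paths are listed and active.
mdadm --detail /dev/md0
```

Note this is distinct from `--level=1` (mirroring): with multipath, md writes each block once and merely fails over between paths.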

The MC/SG documentation says that the 'md' driver should not be active on both cluster nodes at the same time. So I am not using the 'fd' partition type, which starts the arrays at boot time; I just use the default type 83.
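Keeping the partitions at type 83 can be checked with sfdisk; a sketch (the disk device name is an assumption):

```shell
# Sketch: confirm the partition Id is 83 (plain Linux), not fd
# (Linux raid autodetect, which would auto-start the md array at boot).
sfdisk -l /dev/sda

# Older sfdisk can change a partition's Id non-interactively,
# e.g. set partition 1 on /dev/sda back to type 83:
sfdisk --change-id /dev/sda 1 83
```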

Is there any specific need to update to 2.6 kernel in order to support LVM 2?

The kernel version on the cluster nodes is 2.4.19, which ships lvm 1.0.5-51. It does not support 'pvremove', and the 'mkraid' tool appears to have some bugs. I have no idea whether fixes exist for these. Would it be better to use 'mdadm'?
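mdadm does not need an /etc/raidtab and is generally considered the safer replacement for the raidtools (mkraid) on 2.4 kernels. A rough equivalent of a raidtab-based setup, with device names as assumptions:

```shell
# Sketch: create and explicitly manage an array with mdadm instead of
# mkraid. Device names are assumptions.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Record the array so it can be assembled explicitly (not at boot time):
mdadm --detail --scan >> /etc/mdadm.conf

# On the node that should own the array:
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1
# ...and stop it before the other node takes over, so md is never
# active on both nodes at once:
mdadm --stop /dev/md0
```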

I have tried to follow the sequence of steps listed in "Managing ServiceGuard for Linux" to create the logical volume infrastructure. But the first disk partition grouped with md does not become part of the array after rebooting the first node. After the reboot, the second node should have the configuration replicated from the primary node, but this does not happen for the first partition; only the second partition shows as active.
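For comparison, the LVM-over-md layering from the manual boils down to roughly the following sequence; the volume group and logical volume names here are placeholders, not the ones from the book:

```shell
# Sketch: build LVM on top of the md device (names are assumptions).
pvcreate /dev/md0                    # initialize the md device for LVM
vgcreate vg_shared /dev/md0          # create a volume group on it
lvcreate -L 1G -n lv_data vg_shared  # carve out a logical volume

# On failover, the adoptive node rescans metadata and activates:
vgscan
vgchange -a y vg_shared
# ...and deactivates when releasing the package:
vgchange -a n vg_shared
```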

Steven E. Protter
Exalted Contributor

Re: Linux cluster using HP MC/ServiceGuard -- EVA 3000, md driver and lvm configuration

Fedora Core, which uses the 2.6 kernel, only lists support for LVM v1.

It is likely that you will need to update the kernel to support lvm v2.

I have extensively used the raid tools in Red Hat and Fedora even to mirror lvm setups.

I think you can get away with using the built-in RAID tools of the OS in conjunction with LVM1 to do the RAID.

I did a 4-hour hands-on session at HP World using Linux ServiceGuard, Red Hat, and LVM. We were able to handle a mirrored setup and test failover successfully using shared storage.

Steven E Protter
Owner of ISN Corporation
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Serviceguard for Linux
Honored Contributor

Re: Linux cluster using HP MC/ServiceGuard -- EVA 3000, md driver and lvm configuration

The MD driver is not supported with the EVA. You must use either Secure Path or the multipath function of the QLogic driver; for Serviceguard, the latter is recommended.

Serviceguard does not yet support SUSE SLES 9 (with the 2.6 kernel). Serviceguard supports only the LVM delivered with the distribution, for Red Hat 3 and SUSE SLES 8.