Operating System - HP-UX

Re: Volume groups sharing without MC/SG

 
SOLVED
Enrico Venturi
Super Advisor

Volume groups sharing without MC/SG

Hello colleagues,
I'm currently using HP MC/SG to implement a cluster; the key MC/SG feature I rely on is the ability to activate Volume Groups in exclusive mode.
Now I'd like to replace MC/SG with an application of my own, while still sharing the volume groups...
Are there any commands that prevent a disk (or a Logical Unit, in the case of Virtual Arrays) from being mounted by two different nodes at once?
Can anyone suggest an algorithm to implement this "exclusive" mount in user space?

regards
Enrico
3 REPLIES
Robert-Jan Goossens
Honored Contributor

Re: Volume groups sharing without MC/SG

Hi Enrico,

Check this question, posted yesterday.

http://forums1.itrc.hp.com/service/forums/questionanswer.do?admit=716493758+1084964300549+28353475&threadId=593820

Hope this helps,
Robert-Jan
Stephen Doud
Honored Contributor
Solution

Re: Volume groups sharing without MC/SG

Document ID: UXSGKBRC00003841

TITLE: Can I concurrently mount a file system to two servers?
PROBLEM
I'd like to mount a file system to more than one server at a time. Is this
possible? Is it safe?

RESOLUTION
With the advent of SANs (storage area networks) and Serviceguard, system
administrators have sought to avoid the disadvantages of NFS in favor of
direct-mounting a file system to more than one server at a time.
Unfortunately, concurrently mounted file systems are safe only when all
servers mount the file system in Read-Only mode.
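For illustration, a read-only concurrent mount would look like this on each
server (the VG name /dev/vg_shared, logical volume lvol1, and mount point are
assumptions):

   # activate the volume group in read-only mode (HP-UX LVM)
   vgchange -a r /dev/vg_shared
   # mount the file system read-only; every node must do the same
   mount -r /dev/vg_shared/lvol1 /shared_data

This is safe only while nothing, on any node or path, writes to the
underlying disks.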

Details
I/O introduces the largest delays in data communication in a computer system.
To reduce delays encountered during I/O requests, HP-UX loads (caches) a
portion of the file system structure into memory when it mounts a file
system. To be efficient, the kernel may also load a far larger chunk of disk
content than is immediately needed.

When changes are made to a file or the file system, the kernel delays
inefficient disk writes by saving such changes to buffer cache (dedicated
RAM). Hence, disk content is current only after a buffer cache flush.

Furthermore, HP-UX has no facility to communicate file and file system
changes directly to any other server's kernel.

These two facts introduce an immediate potential for a second server that
has mounted a file system (even in Read-Only mode) to hold a non-current
memory-based representation of the data and file system structures on the
disk. This inaccurate representation is another way of saying data
corruption. Upon detecting such corruption, the HP-UX kernel will panic
(save kernel memory to a "dump" device and reboot the system).

Only when the content on the disks remains static will multiple-mounted file
systems succeed.
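One practical consequence: when a writable file system is moved between
nodes by hand (outside any cluster software), the releasing node must flush
and deactivate before the other side activates. A rough sketch, with assumed
VG and mount-point names:

   # on the releasing node: umount flushes dirty buffers to disk
   umount /shared_fs
   vgchange -a n /dev/vg_shared
   # only after that, on the receiving node:
   vgchange -a y /dev/vg_shared
   mount /dev/vg_shared/lvol1 /shared_fs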

What about Serviceguard?
Serviceguard does not give the kernel a facility to update other kernels
when data changes are queued in buffer cache. Instead, it prevents more than
one server from activating an LVM volume group set to "exclusive activation
mode". Hence, only one server can mount the VG's logical volumes.

What about Serviceguard-OPS or Serviceguard with extensions for RAC?
Serviceguard-OPS and Serviceguard with extensions for Real Application
Clusters (SGeRAC, for Oracle) provide a "shared activation mode", but that
facility is intended only for raw logical volumes used in tandem with Oracle
Parallel Server's disk-block locking mechanisms. Though it makes it possible
to multiple-mount a file system, the same proclivity for panic due to
non-current kernel representations of disk content still exists.
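For comparison, shared activation under SLVM/SGeRAC takes roughly this form
(names assumed; it presumes an OPS/RAC-style cluster is already up):

   # mark the volume group cluster-aware and sharable
   vgchange -c y -S y /dev/vg_ops
   # activate in shared mode on each participating node
   vgchange -a s /dev/vg_ops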

Summary
Until Advanced File System (AdvFS) features become available on HP-UX, NFS,
though slower than directly accessed file systems, provides the protection
of currency (current views) to all NFS clients. For this reason, the slower
but safer NFS subsystem is the best choice for concurrently mounted file
systems.
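A minimal NFS setup on HP-UX 11.x might look like this (server name, client
names, and paths are assumptions):

   # on the server: export the directory to the client nodes
   echo "/shared_data -access=nodeb:nodec" >> /etc/exports
   exportfs -a
   # on each client: mount it over NFS
   mount -F nfs nodea:/shared_data /shared_data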

#### DOCUMENT END ####
JW_8
Occasional Advisor

Re: Volume groups sharing without MC/SG

Hi Enrico,
If SG is not used, there are no other HP-UX commands that can prevent the same disk from being mounted on multiple nodes.
It is possible to implement something like SG to enforce "exclusive activation" among the applications you control. But you must ask every application or admin to go through your application to coordinate the activation, because your application is not integrated with the LVM vgchange(1M) command. So it is possible that some admin who does not know about your application could activate the volume you'd like to control on another node.
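A minimal sketch of such a wrapper, assuming every admin agrees to use it and that all nodes share a small lock directory (here /locks, an assumed NFS mount visible to every node):

   #!/usr/bin/sh
   # activate_vg.sh - hypothetical wrapper around vgchange(1M)
   VG=$1
   LOCKDIR=/locks/$VG            # assumed NFS-shared lock directory
   # mkdir is atomic, even over NFS, so it can serve as the lock
   if mkdir $LOCKDIR 2>/dev/null; then
       vgchange -a y /dev/$VG &&
       mount /dev/$VG/lvol1 /mnt/$VG    # mount point assumed to exist
   else
       echo "$VG is already held by another node" >&2
       exit 1
   fi

As noted above, nothing stops an admin from running vgchange(1M) directly, so this coordinates only the users who cooperate.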
JW