Volume groups sharing without MC/SG
Operating System - HP-UX
05-18-2004 10:51 PM
Hello colleagues,
I'm currently using HP MC/SG to implement a cluster; the key MC/SG feature I rely on is the ability to activate volume groups in exclusive mode.
Now I'd like to replace MC/SG with my own application while still sharing the volume groups.
Are there any commands that prevent a disk (or a logical unit, in the case of virtual arrays) from being mounted by two different nodes?
Can anyone suggest an algorithm to implement this exclusive mount in user space?
Regards,
Enrico
3 REPLIES
05-18-2004 10:59 PM
Re: Volume groups sharing without MC/SG
Hi Enrico,
Check this question, posted yesterday.
http://forums1.itrc.hp.com/service/forums/questionanswer.do?admit=716493758+1084964300549+28353475&threadId=593820
Hope this helps,
Robert-Jan
05-21-2004 03:37 AM
Solution
Document ID: UXSGKBRC00003841
TITLE: Can I concurrently mount a file system to two servers?
PROBLEM
I'd like to mount a file system to more than one server at a time. Is this
possible? Is it safe?
RESOLUTION
With the advent of SANs (storage area networks) and Serviceguard, system
administrators have sought to avoid the disadvantages of NFS in favor of
direct-mounting a file system to more than one server at a time.
Unfortunately, concurrent-mounted file systems only work safely when
all servers mount the file system in Read-Only mode.
Details
I/O introduces the largest delays in data communication in a computer system.
To reduce delays encountered during I/O requests, HP-UX loads (caches) a
portion of the file system structure into memory when it mounts a file
system. To be efficient, the kernel may also load a far larger chunk of disk
content than is immediately needed.
When changes are made to a file or the file system, the kernel delays
inefficient disk writes by saving such changes to buffer cache (dedicated
RAM). Hence, disk content is current only after a buffer cache flush.
Furthermore, HP-UX has no facility to communicate file and file
system changes directly to any other server's kernel.
These two facts introduce an immediate potential for a second server that
has mounted the file system (even in Read-Only mode) to hold a stale
memory-based representation of the data and file system structures on disk.
Such an inaccurate representation is, in effect, data corruption. Upon
detecting it, the HP-UX kernel will panic (save kernel memory to a "dump"
device and reboot the system).
Only when the content on the disks remains static will multiple-mounted file
systems succeed.
What about Serviceguard?
ServiceGuard does not give the kernel the facility to update other kernels
when data changes are queued in buffer cache. It actually prevents more than
one server from activating an LVM volume group set to "exclusive activation
mode". Hence, only one server can mount the VG's logical volumes.
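As an illustration, exclusive activation under Serviceguard is driven through ordinary LVM commands. This is a command sketch only: it assumes a running cluster, and the volume group and mount point names (vg01, lvol1, /data) are made up.

```shell
# On any one cluster node; a second node attempting "vgchange -a e"
# on the same VG is refused while the first node holds it.
vgchange -c y /dev/vg01      # mark the volume group cluster-aware
vgchange -a e /dev/vg01      # activate in exclusive mode
mount /dev/vg01/lvol1 /data  # safe to mount on this node only

# Failover hand-off: deactivate here so another node can take over.
umount /data
vgchange -a n /dev/vg01
```

The point of the question is that the `-a e` refusal is enforced by the cluster software, not by LVM alone, so it is lost when MC/SG is removed.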
What about Serviceguard-OPS or Serviceguard with extensions for RAC?
Serviceguard-OPS and Serviceguard with Extensions for Real Application
Clusters (SGeRAC, for Oracle) provide a "shared activation mode", but that
facility is intended only for raw logical volumes used in tandem with the
Oracle Parallel Server disk-block locking mechanisms. Although it makes it
possible to multiple-mount a file system, the same proclivity for panics due
to non-current kernel representations of disk content still exists.
Summary
Until Advanced File System (AdvFS) features become available on HP-UX, NFS,
though slower than directly accessed file systems, provides currency
(a current view of the data) to all NFS clients. For these reasons, the
slower but safer NFS subsystem is the best choice for concurrently mounted
file systems.
#### DOCUMENT END ####
05-22-2004 02:40 PM
Re: Volume groups sharing without MC/SG
Hi Enrico,
If SG is not used, there is no other HP-UX command that can prevent the same disk from being mounted on multiple nodes.
It is possible to implement something like SG's "exclusive activation" among the applications you control, but you must ask every application and administrator to go through your application to coordinate activation. Because your application is not integrated with the LVM vgchange(1M) command, an administrator who does not know about it can still activate the volume group you want to control on another node.
JW
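A minimal sketch of the coordination JW describes, with all the caveats he gives: the function names (`acquire_vg`, `release_vg`) and the `LOCKROOT` path are hypothetical, the lock directory would have to live on storage visible to all nodes (e.g. a small NFS mount), and nothing stops an administrator from bypassing it with a plain vgchange. The demo substitutes echo output for real activation.

```shell
#!/bin/sh
# Hypothetical cooperative "exclusive activation" sketch (not from the
# thread). It serializes VG activation with an atomic mkdir-based lock:
# mkdir either creates the directory or fails, so exactly one caller wins.
LOCKROOT=${LOCKROOT:-/var/tmp/vglocks}

acquire_vg() {
    lock="$LOCKROOT/$1.lock"
    mkdir -p "$LOCKROOT"
    if mkdir "$lock" 2>/dev/null; then
        hostname > "$lock/owner"
        # vgchange -a y "/dev/$1"   # real activation would go here
        return 0
    fi
    return 1                        # some other node already holds the VG
}

release_vg() {
    # vgchange -a n "/dev/$1"       # deactivate before releasing the lock
    rm -rf "$LOCKROOT/$1.lock"
}

# Demo: the second acquire (simulating another node) is refused.
acquire_vg demo_vg && echo "node A: acquired demo_vg"
acquire_vg demo_vg || echo "node B: demo_vg busy"
release_vg demo_vg
```

Note the scheme's central weakness, which is exactly why MC/SG exists: if the node holding the lock crashes, the lock directory is left behind and nothing releases it, whereas Serviceguard ties exclusive activation to live cluster membership.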
The opinions expressed above are the personal opinions of the authors, not of Hewlett Packard Enterprise. By using this site, you accept the Terms of Use and Rules of Participation.
© Copyright 2024 Hewlett Packard Enterprise Development LP