Operating System - HP-UX

Shared filesystem for multinodes

 
Abid Iqbal
Regular Advisor

Shared filesystem for multinodes

There are six servers running HP-UX 11.31 DCOE.
Four mount points (file systems) currently reside on one server.
How can these four mount points be shared between all six servers so that they are accessible to the application on all six nodes for read/write operations equally?
Oracle is not being used, so SGeRAC is not an option.
Please share any reference document.
14 REPLIES
Shibin_2
Honored Contributor

Re: Shared filesystem for multinodes

Use an NFS share.

Add the entries to /etc/dfs/dfstab,

then run the shareall command.
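
A minimal sketch, assuming the exports are /data1 to /data4 and the clients are node2 through node6 (server name, paths and options below are placeholders for your environment):

# /etc/dfs/dfstab on the server that owns the filesystems
share -F nfs -o rw=node2:node3:node4:node5:node6 /data1
share -F nfs -o rw=node2:node3:node4:node5:node6 /data2

# export everything listed in dfstab
shareall

# on each client, mount the exports (add them to /etc/fstab to make them permanent)
mount -F nfs nfsserver:/data1 /data1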
Regards
Shibin
Abid Iqbal
Regular Advisor

Re: Shared filesystem for multinodes

Thanks for the reply.
NFS is not an option. Any Serviceguard solutions?
R.O.
Esteemed Contributor

Re: Shared filesystem for multinodes

You can create an NFS Serviceguard package...
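
For example, with the Serviceguard NFS toolkit installed, the rough workflow looks like this (the toolkit module name and the package/file names below are assumptions; check the documentation shipped with your toolkit version):

# generate a modular package configuration that pulls in the NFS toolkit module
cmmakepkg -m tkit/nfs/nfs nfspkg.conf

# edit nfspkg.conf: package name, node list, volume groups,
# filesystems to export and the relocatable package IP

# verify, apply and start the package
cmcheckconf -P nfspkg.conf
cmapplyconf -P nfspkg.conf
cmrunpkg nfspkg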
"When you look into an abyss, the abyss also looks into you"
Emil Velez
Honored Contributor

Re: Shared filesystem for multinodes

HP Cluster File System (CFS) is a viable option for this. It is a purchased product. Yes, it is used for Oracle RAC, but it can be used without it too. It is easy to set up using Cluster VxVM (CVM) and allows all nodes to read and write the shared disks from any node.

There is no separate locking API, so the last one that saves data wins the write; you need to make sure the systems do not step on each other (Oracle takes care of this when using RAC), but it is a good option.

Why not NFS? NFS is a great protocol and can deal with file locking, since one node is the NFS server and all requests go through that node. If that node fails, the NFS package starts on another node. The cross-mount capability of the NFS package allows the filesystems to be mounted locally and on all of the nodes at the same mount point.
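
To give a sense of the setup, once the CFS/CVM bundle is installed and the cluster is up, a shared mount is added roughly like this (disk group, volume and mount point names are illustrative, and the exact options may vary by version):

# register the CVM disk group with the cluster, shared-write on all nodes
cfsdgadm add datadg all=sw

# add a cluster-wide mount point for a volume in that disk group
cfsmntadm add datadg datavol /data1 all=rw

# mount it on every node
cfsmount /data1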
Abid Iqbal
Regular Advisor

Re: Shared filesystem for multinodes

Thanks, Emil.
Without locking management it would be difficult to use CFS, as there is a high risk of data loss. The servers are running a critical customer application.
NFS is ruled out because the operations require faster performance.
Any other suggestions?
Dennis Handly
Acclaimed Contributor

Re: Shared filesystem for multinodes

>Without locking management it would be difficult to use CFS, as there is a high risk of data loss.

What locking API is the application currently using?

DeafFrog
Valued Contributor

Re: Shared filesystem for multinodes

Hi,

As Emil already conveyed, the application should be intelligent enough to handle and understand that the underlying filesystem is a cluster filesystem and not a usual one. Why don't you discuss this first with the application support team? Just a thought.

FrogIsDeaf
Abid Iqbal
Regular Advisor

Re: Shared filesystem for multinodes

>What locking API is the application currently using?
No locking API is available in the application currently.
Dennis Handly
Acclaimed Contributor

Re: Shared filesystem for multinodes

>No locking API is available in the application currently.

I'm confused. What do you mean by this? The FS doesn't support locking, so the application doesn't bother?

If so, how does the application work on a local filesystem, single threaded/process?
Abid Iqbal
Regular Advisor

Re: Shared filesystem for multinodes

I mean there is no locking API for CFS.
It is working on a local filesystem on a single server.
Dennis Handly
Acclaimed Contributor

Re: Shared filesystem for multinodes

>I mean there is no locking API for CFS.

I suppose this more accurately means that calls to lockf(2) and fcntl(2) are ignored?
Abid Iqbal
Regular Advisor

Re: Shared filesystem for multinodes

Yes, right.
So the final conclusion is that CFS is not the solution for this?
Is that confirmed?
Dennis Handly
Acclaimed Contributor

Re: Shared filesystem for multinodes

>CFS is not the solution for this?

It doesn't seem like it. It seems all CFS does is synchronize filesystem metadata, not user data.
Abid Iqbal
Regular Advisor

Re: Shared filesystem for multinodes

Thank you all for the support and help.