Operating System - Linux

Is this a shared storage conflict or a hardware error?

 
SOLVED
Andriy Galetski
New Member

Is this a shared storage conflict or a hardware error?

I have a standard DL380 G3 packaged cluster equipped with MSA500 storage.
In my case, two Linux hosts are connected to one shared storage array via a SCSI bus. Both Linux nodes can communicate with the storage on their own, but I want simultaneous read/write access to different file systems from each node.
For example, nodeA mounts /dev/cciss/c0d2p1 as its own ext3 file system, and nodeB likewise mounts /dev/cciss/c0d2p2 as its own. When read/write load occurs, the shared storage hangs. Is this a shared SCSI conflict or a hardware error? Do I need a file system with a distributed lock manager to get parallel R/W from both nodes, even to different mounted partitions?
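
For reference, the setup is roughly like this (the mount points /mnt/a and /mnt/b are just examples):

# on nodeA
mkfs.ext3 /dev/cciss/c0d2p1
mount -t ext3 /dev/cciss/c0d2p1 /mnt/a

# on nodeB
mkfs.ext3 /dev/cciss/c0d2p2
mount -t ext3 /dev/cciss/c0d2p2 /mnt/b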

Thanks for any advice.
9 REPLIES
TwoProc
Honored Contributor

Re: Is this a shared storage conflict or a hardware error?

You'll need a clustered file system. There aren't many of those around. Veritas can do this, and not many others. If you have both systems mount the same ext3 file system and read/write to it, you'll probably end up with corrupt file systems in pretty short order.

I'd suggest NFS as a better way to do this, though it's slower.
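
For example, nodeA could export one of the filesystems and nodeB could mount it over the network instead of over the shared bus. A minimal sketch only; the export path, client network, and mount point are made up, and the NFS server service must already be running on nodeA:

# on nodeA: export /mnt/a to the cluster network
echo "/mnt/a 192.168.0.0/24(rw,sync)" >> /etc/exports
exportfs -a

# on nodeB: mount it over NFS
mount -t nfs nodeA:/mnt/a /mnt/a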
We are the people our parents warned us about --Jimmy Buffett
Viktor Balogh
Honored Contributor

Re: Is this a shared storage conflict or a hardware error?

As TwoProc said, use a clustered/shared filesystem; it is the only way to get the same filesystem mounted on both nodes in read/write mode.

Examples of clustered filesystems:

GPFS (IBM) - General Parallel File System; supports replication between attached block storage. Available for AIX and Linux.

OCFS, OCFS2 (Oracle Cluster File System) - used mainly for clustered Oracle databases

GFS, GFS2 (Red Hat)

QFS (Sun) - Quick File System; mainly used with the SAM (Storage and Archive Manager) product, a Hierarchical Storage Manager solution from Sun

VxCFS (Veritas Cluster File System)
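
For instance, with GFS2 the same filesystem is created once and can then be mounted read/write on both nodes. This is a rough sketch only, assuming a working Red Hat cluster (cman, fencing, DLM) is already configured; the cluster name "mycluster" and the mount point are made up:

# run once, from either node: 2 journals for 2 nodes, DLM locking
mkfs.gfs2 -p lock_dlm -t mycluster:shared1 -j 2 /dev/cciss/c0d2p1

# then on BOTH nodes
mount -t gfs2 /dev/cciss/c0d2p1 /mnt/shared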
****
Unix operates with beer.
Andriy Galetski
New Member

Re: Is this a shared storage conflict or a hardware error?

Thanks for the suggestions. I have a little experience using GFS. But the question is: can I safely mount different partitions read/write from different nodes connected to the shared SCSI storage? In my case I am not going to use the same shared file system from both nodes.
In other words, the block devices /dev/cciss/c0d2p1 and /dev/cciss/c0d2p2 differ; only the volume /dev/cciss/c0d2 is the same.
Viktor Balogh
Honored Contributor
Solution

Re: Is this a shared storage conflict or a hardware error?

> Can I safely mount different partitions read/write from different nodes connected to the shared SCSI storage?

The answer is yes. The concurrent access will be handled by the filesystem, so it won't get corrupted as long as you use a cluster-aware, shared-mode filesystem.
****
Unix operates with beer.
TwoProc
Honored Contributor

Re: Is this a shared storage conflict or a hardware error?

Yep, you can, as long as you don't mount a file system r/w on two servers at once. But different LVs from the same VG are no problem. You'll have to put the VG in shared mode.
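
On Linux that roughly corresponds to clustered LVM (clvmd). A sketch only, assuming clvmd is running on both nodes; the VG/LV names, sizes, and mount point are made up:

# run once, from one node
pvcreate /dev/cciss/c0d2
vgcreate -c y vg_shared /dev/cciss/c0d2
lvcreate -L 50G -n lv_nodea vg_shared
lvcreate -L 50G -n lv_nodeb vg_shared

# on nodeA: activate only its own LV, exclusively, then mount it
lvchange -a ey /dev/vg_shared/lv_nodea
mkfs.ext3 /dev/vg_shared/lv_nodea
mount /dev/vg_shared/lv_nodea /mnt/a

# nodeB does the same with lv_nodeb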
We are the people our parents warned us about --Jimmy Buffett
Viktor Balogh
Honored Contributor

Re: Is this a shared storage conflict or a hardware error?

> Yep, you can, as long as you don't mount a file system r/w on two servers at once

Nope, you can easily mount the filesystem in read-write mode on more than one machine at once. That's why it's called _distributed_. Write operations are handled with the help of the fencing mechanism, a special kind of inter-cluster file locking. For further reading, see this article on the GFS filesystem, for example:

http://www.redhat.com/magazine/009jul05/features/gfs_practices/
****
Unix operates with beer.
chris huys_4
Honored Contributor

Re: Is this a shared storage conflict or a hardware error?

> Nope, you can easily mount the filesystem in
> read-write mode on more than one machine at
> once. That's why it's called _distributed_.
> Write operations are handled with the help of
> the fencing mechanism, a special kind of
> inter-cluster file locking. For further reading,
> see this article on the GFS filesystem, for example:

Viktor, Andriy doesn't want to share filesystems between the 2 nodes. Andriy wants to share a LUN with 2 partitions between the 2 nodes, where each partition is used exclusively by one node.

Personally, I don't think it's possible. If node A wants to write to partition A of LUN A, it needs to get a "SCSI lock" on LUN A, and if node B holds the "SCSI lock" at that moment because it wanted to write to partition B of LUN A, then node A's initiator will issue a SCSI reset to get the lock, which can result in hanging SCSI buses, data corruption, etc.

Greetz,
Chris
chris huys_4
Honored Contributor

Re: Is this a shared storage conflict or a hardware error?

Hi,

> Viktor, Andriy doesn't want to share
> filesystems between the 2 nodes. Andriy wants
> to share a LUN with 2 partitions between the
> 2 nodes, where each partition is used
> exclusively by one node.
Now I have to correct myself. I also had the impression that Andriy didn't have GFS in mind to obtain what he wanted.

Of course, if he used GFS, he could do what he "set forth" to do, as GFS would then take care of the "SCSI protocol things" between the 2 nodes.

Greetz,
Chris
Andriy Galetski
New Member

Re: Is this a shared storage conflict or a hardware error?

Hi.

>I also had the impression
>that Andriy didn't have
>GFS in mind to obtain
>what he wanted.

That is right, I did not want to use GFS because of its complexity.
It is clear that to avoid sharing conflicts, a clustered system should use a distributed locking mechanism like DLM (Distributed Lock Manager).
I assumed the system could work without DLM because the data in the logical partitions does not overlap, so sharing conflicts should not occur. But that is wrong, because the SCSI bus and the volume device ID are still shared, so access to them must still be arbitrated.