
shared LVM

 
Rick Tweedy_1
Occasional Contributor

shared LVM

Hi all,

I was surprised to see similar messages relating to the exact thing I am working on. We too do not use the lock manager and/or ServiceGuard. Why? Well, it is not a failover system. We want our T600 and our N4000 to share disk on a huge array via fibre to allow for fast file movement between the systems.

I did the exact list of steps listed in one of the responses here (i.e. vgexport map, ftp, vgimport, etc.). The bizarre thing is that on system B (with read-only access) the FS does not seem to get updated unless I unmount and re-mount. I have tried the various sync commands with no luck.
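For comparison, the sequence I followed was roughly along these lines -- the volume group name, minor number and mount point below are only placeholders, not our real ones:

  # on system A (the box that created the VG): preview-export a map file
  vgexport -p -v -s -m /tmp/vgshare.map /dev/vgshare
  # ftp /tmp/vgshare.map over to system B

  # on system B: create the group file and import the VG
  mkdir /dev/vgshare
  mknod /dev/vgshare/group c 64 0x010000     # pick an unused minor number
  vgimport -v -s -m /tmp/vgshare.map /dev/vgshare
  vgchange -a r /dev/vgshare                 # activate the VG read-only
  mount -r /dev/vgshare/lvol1 /shared        # read-only mount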

What really confuses me is that sharing disk space via fibre between two machines does not seem like that rare a request, yet HP does not seem to be able to give me any clear direction as to the most reliable (i.e. right) way of doing it. We basically need the space as a temp dumping ground for HUGE flat files for the other system to pick up. It sounds like we are in for a lot of management and very light treading in this type of uncontrolled environment... possibly a sysadmin's nightmare? I would love to hear of any experiences others may have had with this type of situation.

Thanks
Rick Tweedy
John Palmer
Honored Contributor

Re: shared LVM

You are looking for trouble if you want to read a filesystem on node B that is being updated by system A. This is because the writing system keeps much of the updates in memory (the buffer cache).

Can you utilise MirrorDisk and split off a copy, or can you use a particular filesystem that is written by node A, unmounted, then mounted and read by node B?

Regards,

John
Rick Tweedy_1
Occasional Contributor

Re: shared LVM

We do not have licenses or an implementation window for MirrorDisk. I have a feeling that what we will have to do is exactly that: write a series of scripts to mount/umount the FS as needed. Maybe I can check for a flag file in the mount point directory on the other system via an NFS mount, and if the flag is there the FS is not mounted, and vice versa. We were hoping we would be able to have both connected at the same time, one RW and one R-only, and simply use the sync command to update disk and flush cache. It still bothers me that although system A has written 65% of the FS, system B still sees it as empty. The writes happened yesterday and system B still shows it as empty. If I umount/mount, it gets updated.
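Something like this rough, untested sketch is what I have in mind for the box B side -- the flag file name, NFS path, device and mount point are made up just to show the idea:

  #!/usr/bin/sh
  # If box A leaves an IN_USE flag (visible over an NFS mount) while it has
  # the filesystem mounted read-write, box B refuses to mount it.
  FLAG=/nfs/boxa/stage/IN_USE
  LV=/dev/vgshare/lvol1
  MNT=/shared

  if [ -f "$FLAG" ]; then
      echo "Box A still owns the filesystem -- not mounting."
      exit 1
  fi

  mount -r "$LV" "$MNT"      # safe to mount read-only and pick up the files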
Tim Malnati
Honored Contributor

Re: shared LVM

What you are trying to do is at least unusual. I doubt that MirrorDisk is going to be much of a help, though. MirrorDisk operations are controlled within the same volume group, and vgexport does not support exporting only part of a volume group (I assume that this is what you may be trying to accomplish with MirrorDisk).

How about contacting Veritas for a possible solution? They may have something that allows moving data through the fiber channel. See http://www.veritas.com/us/
Alan Riggs
Honored Contributor

Re: shared LVM

When you say you have tried various sync commands, do you mean that you issued them on the server with write access? Issuing multiple syncs should flush the cache to disk, allowing your read-only system to view the updated data.

I have never tested concurrent access without ServiceGuard, though. An interesting problem. Are you using an intelligent storage array? They often have their own caching, which might account for what you are seeing.
John Palmer
Honored Contributor

Re: shared LVM

You may get away with syncing the filesystem on the box that's writing it and then unmounting/mounting on the box that's reading (remember that it will have out-of-date information cached).
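Something along these lines, with made-up device and mount point names (I haven't tried this outside of ServiceGuard myself):

  # on the writing box (A): flush its buffer cache out to the array
  sync; sync

  # on the reading box (B): throw away the stale cached view and re-read
  umount /shared
  mount -r /dev/vgshare/lvol1 /shared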

Don't try reading a filesystem on one box while you're writing to it on another, though, as you are almost bound to get inconsistencies in the structure.

Regards,

John
Rick Tweedy_1
Occasional Contributor

Re: shared LVM

Yes, I have issued the sync command multiple times on the writing box. I even went so far as to try vgsync and lvsync. Nothing. Even 24 hours later, the box with read-only access shows the file systems as empty!

The problem is being sent to the HP Expert team, so we shall see what they have to say on it.

I was not here when the planning occurred, but for some reason it was never thought of to simply put a fibre backbone between the two boxes and use NFS. I think at this point that will be our solution, but as usual the show must go on. In the meantime I think what I will do is use scripts to make sure only one box has the filesystem mounted at a time. The problem with that is that mount can only be done by a superuser, and there is no guarantee that when the switch is needed there will not be someone sitting idle in a mounted directory, preventing the umount from happening.
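The hand-off script on the box giving up the mount would probably have to look roughly like this (names are placeholders, and whether we can actually chase users out with fuser -ck is another question):

  #!/usr/bin/sh
  # Sketch of the hand-off on the box that is giving up the filesystem.
  MNT=/shared

  sync                                # flush any pending writes first
  if umount "$MNT"; then
      echo "Unmounted -- OK for the other box to mount now."
  else
      echo "umount failed; these processes are holding $MNT open:"
      fuser -cu "$MNT"                # -c: mount point, -u: show login names
      exit 1
  fi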

The concept is this: a data warehouse/mart environment. HUGE (150 GB) flat files are pulled from the mainframe, loaded into the warehouse on box A, processed, and a subset of the data is extracted to flat files for import on box B. From what I know, this process will be happening on an irregular basis, maybe 20 times a day, maybe once a week. Yuck!

There is no way we can bog the network down with this kind of movement, which is why the FC60/fibre came into play. As well, it was decided that we could use the same sort of mounts to provide the ability to back up data to the read-only box's silo.

The updates on the read-only box still confuse me. Why does this system not know that the dirs have been updated? Is it because it is a read-only mount, so the system assumes the contents will not change?
Dave Wherry
Esteemed Contributor

Re: shared LVM

I can't say for sure, but I suspect this is related to the cache on your disk array. Here's my theory.
You say you are using a huge disk array, an EMC or XP256 or the like, I assume.
You say you issue the sync command on your server. That would flush the data from the buffer cache on the server. The data would then be resident in the cache on the disk array. As far as your first server is concerned the data is "committed", written to disk, the I/O is complete. Your array knows that that data in the cache is used by the first server. I doubt that the array is smart enough to know that the second server also wants that same data.
So, until the data gets destaged from cache to the disk, the second server probably will not see it.
The second server's read request will look in the buffer cache on that server. If the data is not there, the read request goes to the disk subsystem. The array handles the new read request from the second server and first looks in its cache, but it does not know it is looking for data associated with the first server, so it has to go to disk. And the data may still be in the array cache from server one's write, not yet physically written to disk.
As far as I know there is no sync command for the array. Unless there is cache pressure on the array, forcing it to destage, that data may sit in cache indefinitely.
Run that by the HP folks.
Alan Riggs
Honored Contributor

Re: shared LVM

Dave's thinking mirrors my own regarding the caching on the disk array. However, I don't think the array will hold writes in cache indefinitely. Check with the vendor; there may be a timed sync which will commit writes to disk.

The other potential culprit, of course, is the read cache on the 2nd server. If a read from the disk (for the purpose of an ls, for example) has occurred, that data will also be stored in cache. It will remain in cache until purged by other data requests. The sync command flushes cached writes to disk, but it does not affect the read cache. To the system, that data is unchanged on disk since the initial mount/read request. I do not know of any way to force the system to forgo the cache and read from disk. Perhaps someone else has a technique for accomplishing this.
Michael Lampi
Trusted Contributor

Re: shared LVM

The only way to force the read cache to flush is the same as the way to force the write cache to flush: unmount and remount the file system.

So, to make sure that the read-only system gets a current view of the array, perform the steps in this order (a rough script version follows the list):
1. Make sure all systems have the array unmounted.
2. Mount the array from system A.
3. Write the data to the array from system A.
4. Unmount the array from system A.
5. Mount the array from system B.
6. Read the data using system B.
7. Unmount the array from system B.
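
In script form (untested, with placeholder device and mount point names), the two sides might look something like this:

  # On system A (the writer) -- assumes step 1, nothing else has it mounted:
  mount /dev/vgshare/lvol1 /shared           # step 2
  cp /stage/extract.dat /shared/             # step 3: whatever the real load is
  umount /shared                             # step 4

  # On system B (the reader):
  mount -r /dev/vgshare/lvol1 /shared        # step 5
  cp /shared/extract.dat /import/            # step 6
  umount /shared                             # step 7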

There are some software packages appearing on the market that will enable simultaneous access to data on an array from multiple hosts, but as far as I know only Veritas has this available for HP-UX at this time.
A journey of 1000 steps ends in a mile.