Operating System - OpenVMS

Volume Shadowing

 
SOLVED
Warren G Landrum
Frequent Advisor

Volume Shadowing

Guys,

I've got an ES80 cluster running VMS 7.3-2. It's currently hooked up to an IBM Shark array for its storage, including the system disk. Here's the deal.

We are looking to migrate the storage from the Shark to an EMC DMX4 array. To me the simplest way to do this is to mount all of my current drives as single-member shadow sets, and then, once the EMC storage is configured and presented to the 2 nodes in the cluster, simply add the EMC volumes as second shadow set members. Then, when we are ready to turn off the Shark, I would simply remove the Shark members from each shadow set.
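As a sketch of that sequence in DCL (the DSA virtual unit, DGA device names, and label here are hypothetical; substitute your own):

```
$ ! Mount an existing Shark disk as a single-member shadow set
$ MOUNT/SYSTEM DSA1: /SHADOW=($1$DGA101:) DATA1
$ ! Once the EMC volume is presented, add it as a second member;
$ ! a full shadow copy onto the new member starts automatically
$ MOUNT/SYSTEM DSA1: /SHADOW=($1$DGA201:) DATA1
$ SHOW DEVICE DSA1:/FULL          ! watch the copy progress
$ ! When ready to retire the Shark, drop its member
$ DISMOUNT $1$DGA101:
```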

Here are my questions/comments.

1. I don't believe there would be any problems mixing the Shark and the EMC volumes in the same shadow set, because once they are each presented to VMS, it shouldn't matter. Does everyone agree?

2. But let's say the source Shark drives are 73GB and the new EMC ones are 146GB. As long as the new drives are LARGER than the source, there should be no problem, right? But then, if we take the 73GB drives out of the shadow sets, would the EMC drives only have 73GB usable? - the assumption being that they take the size of the source volume.

Anybody see any more potential problems or know about any gotchas I should watch out for in this scenario?

THANKS,

Warren
11 REPLIES
Hoff
Honored Contributor
Solution

Re: Volume Shadowing

0: Ensure you have a path back out of the data transfer, first and foremost. Ensure a reliable, restorable, consistent, and complete BACKUP of the disks exists.

1: Theoretically, yes. In practice, you'll soon find out. See 0 above.

2: Theoretically, DVE and DDS work just fine, and once DVE is enabled and the limit is sized at or beyond the size of the target volume, you can then transition to larger disks using this or a similar sequence. Once all of the members of the shadow set are resident on larger disks, you can then use SET VOLUME /SIZE to extend to the capacity of the virtual volume. In practice, see 0 above.
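A minimal sketch of that final expansion step, assuming DVE is already enabled on the members (DSA1: is a hypothetical shadow set virtual unit):

```
$ ! After all remaining members are on the larger disks:
$ SET VOLUME/SIZE DSA1:       ! omitting a value grows to the available capacity
$ SHOW DEVICE DSA1:/FULL      ! verify the new logical volume size
```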

There is a known wrinkle around enabling DVE on V7.3-2 on the system disk using the distro CD. You'll want to boot and use a V8.2 or later distro for that task, or use another bootable device to enable DVE on the system disk, and the stock version of BACKUP has some DVE issues when restoring disk images. Thus -- and in any event -- do ensure your patches are current.

Stephen Hoffman
HoffmanLabs LLC
EdgarZamora_1
Respected Contributor

Re: Volume Shadowing


1. IMO, yes.

2. You need to use DVE (your volumes initialized with /LIMIT). Once you have moved to the new volumes and taken out the old shadow set members, you should be able to do a SET VOL/SIZ to increase the space. I haven't tested this with DSA devices, but I've done this with non-shadowed disks.
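For volumes you are initializing fresh, the /LIMIT qualifier enables DVE from the start; a hedged example (device name and label are hypothetical):

```
$ ! /LIMIT enables Dynamic Volume Expansion; the default limit is 1 TB
$ INITIALIZE/LIMIT $1$DGA201: DATA1
```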
Jon Pinkley
Honored Contributor

Re: Volume Shadowing

Warren,

As long as the EMC DMX4 can be used with VMS shadowing, then your plan will work.

We did something similar to migrate from HSZ SCSI based storage to EVA, and didn't have any issues.

We didn't use shadowing to replicate from the HSZ to the EVA because we wanted to change our cluster sizes to powers of 2, but after our backup/image/noinit we then used shadowing to bring the HSZ disks back into the shadow sets. If you do this with 7.3-2, there are some issues you will need to be aware of, as BACKUP on 7.3-2 does not know how to deal with volume /SIZE and /LIMIT. For a workaround see my note dated Aug 1, 2007 06:04:47 GMT in this thread: http://forums12.itrc.hp.com/service/forums/questionanswer.do?threadId=1149604

You should not have any problem as long as the first member of the shadow set is the smaller device. When you add the second, its logical volume size will be set to the logical volume size of the shadow set (no larger than the size of the smallest member).

Before you can expand the volume, you will have to dismount each logical volume, mount it privately, and use the command SET VOLUME/LIMIT. This expands the BITMAP.SYS file to allow expansion. As long as the cluster size is 8 or greater, you can set the limit to 1 TB; if the cluster size is less than 8, you can still set the limit to a large value, just not as large.
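A sketch of that sequence, with hypothetical device and label names:

```
$ ! SET VOLUME/LIMIT requires the volume to be mounted privately
$ DISMOUNT/CLUSTER DSA1:              ! take the volume out of service
$ MOUNT/NOASSIST $1$DGA101: DATA1     ! private (non-/SYSTEM) mount
$ SET VOLUME/LIMIT $1$DGA101:         ! default expansion limit is 1 TB
$ DISMOUNT $1$DGA101:
$ ! now remount the shadow set for normal use
```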

Once you have validated the EMC and you never plan to fall back to the Shark, you can remove the Shark's members, and then use the SET VOLUME/SIZE command to expand the logical volume size to 146GB.

I see there have been several replies posted while I entered this; as Hoff says, apply latest shadowing patches. That will also get you HBMM.

Good luck,

Jon
it depends
Robert Gezelter
Honored Contributor

Re: Volume Shadowing

Warren,

Actually, in addition to Hoff's comment, I would suggest extensive testing of the new storage controller and drives BEFORE doing the migration.

Once things have been tested (and you have a good backup), I would recommend gradually including the new storage in the production environment. Gradualism is your friend. If difficulties occur, they are far easier to deal with.

For actually migrating to the new volumes, I recommend taking a look at the procedure that I presented at the HP Technology Forum, "Migrating OpenVMS Storage Environments without Interruption or Disruption" (notes available at http://www.rlgsc.com/hptechnologyforum/2007/1512.html )

- Bob Gezelter, http://www.rlgsc.com
Jan van den Ende
Honored Contributor

Re: Volume Shadowing

Warren,

we did something very much like this, albeit NOT with Sharks, nor EMCs.
But as long as both support VMS file systems & shadowing, I see no problems.

As Bob said: gradualism is your friend.

As we have our whole system set up using concealed devices, we also took the opportunity to reshuffle several of those, by BACKUPing one directory tree (representing one concealed device) at a time for those we wanted to rearrange during the HSH => EVA move.
Careful planning allowed for only minor unavailability of the related application, at a convenient time (that is, convenient for the users, usually NOT for system management :-) )

Success.

hth

Proost.

Have one on me.

jpe
Don't rust yours pelled jacker to fine doll missed aches.
Warren G Landrum
Frequent Advisor

Re: Volume Shadowing

My esteemed colleagues:

Thanks for all the quick, valuable, detailed responses. Looks like if I had just done this on my own without you guys' input, I would have experienced quite a few problems.

But now I have a pretty good idea of what needs to be done and some of the gotchas to watch out for.

I haven't personally worked with DVE, though I was aware of it, so as Bob suggests, we will do a lot of TESTING before trying to implement.

Bob, I got your presentation on migrating w/o disruption. Thanks again.
Robert Gezelter
Honored Contributor

Re: Volume Shadowing

Warren,

You are welcome.

The most important points are testing and gradualism. "Cold turkey" cutovers are both unnecessary and high-risk.

If you have any questions concerning the presentation, please feel free to contact me.

- Bob Gezelter, http://www.rlgsc.com
Martin Hughes
Regular Advisor

Re: Volume Shadowing

The fact that your system disk is included in the migration makes the process more complicated than it would be otherwise. My assumption is that you will require an outage to run wwidmgr and setup your boot & dump devices, and modify bootdef_dev.
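For the console side, a hedged sketch of what that outage might involve on an AlphaServer (the UDID and resulting device string are hypothetical; take the actual string from wwidmgr's output):

```
P00>>> set mode diag
P00>>> wwidmgr -quickset -udid 10
P00>>> init
P00>>> set bootdef_dev dga10.1001.0.1.0
P00>>> boot
```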
For the fashion of Minas Tirith was such that it was built on seven levels, each delved into a hill, and about each was set a wall, and in each wall was a gate. (J.R.R. Tolkien). Quote stolen from VAX/VMS IDSM 5.2
Jan van den Ende
Honored Contributor

Re: Volume Shadowing

Re Martin:

>>>
The fact that your system disk is included in the migration makes the process more complicated than it would be otherwise. My assumption is that you will require an outage to run wwidmgr and setup your boot & dump devices, and modify bootdef_dev.
<<<

Combine with:

>>>
I've got an ES80 Cluster running VMS
<<<

So, what happened to rolling upgrades?
That mechanism may have been designed for software upgrades, and it allows you to run two different versions from two system disks.
But clustering also allows you to run the same version from two system disks.
So:
- BACKUP (a dismounted member of) the system disk to the new SAN environment
- do not forget to change the volume label (as in any rolling upgrade)
- shut down ONE node, reconfigure the console settings to boot from the new SAN device, and boot
- shut down the next node, reconfigure & boot
- repeat for all nodes
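A sketch of the copy-and-relabel part of those steps, with hypothetical device names and label (run it from a node that is not booted from the member being copied):

```
$ MOUNT/NOASSIST/NOWRITE $1$DGA1: ALPSYS   ! dismounted member as input
$ MOUNT/FOREIGN $1$DGA201:                 ! new SAN target
$ BACKUP/IMAGE $1$DGA1: $1$DGA201:
$ DISMOUNT $1$DGA1:
$ DISMOUNT $1$DGA201:
$ ! give the copy a new volume label, as in any rolling upgrade
$ MOUNT/OVERRIDE=IDENTIFICATION $1$DGA201:
$ SET VOLUME/LABEL=ALPSYS2 $1$DGA201:
$ DISMOUNT $1$DGA201:
```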

Voila.

hth

Proost.

Have one on me.

jpe


Don't rust yours pelled jacker to fine doll missed aches.