
Mirror configuration change

 
LEJARRE Patrick
Advisor

Mirror configuration change

We would like to replace two mirrored 4 GB disks with 9 GB disks in a Jamaica rack (A3311A), ONLINE. The rack is full (2 x 4 mirrored disks).
OnlineJFS and MIRROR/UX are installed on the system.
These disks hold data only.
Has anyone already performed such a change?
A procedure would help us a lot.
6 REPLIES
Andreas Voss
Honored Contributor

Re: Mirror configuration change

Hi,

it depends on the VG the disks belong to. If you created the VG with default values, max PE was set to the maximum for a 4GB disk.
In that case you have to back up the data, delete the VG (vgexport is not sufficient), replace the disks and create a new VG.

Assume your two 4GB disks are in one VG (e.g. vg01) mounted on, say, /data:
Back up /data (tar, cpio, fbackup or whatever)
Record the lvol sizes and device files:
vgdisplay -v vg01
List the VG group file:
ls -l /dev/vg01/group
and note the minor number
umount /data
vgchange -a n vg01
vgexport -v -m /tmp/vg01.map vg01
shutdown -h 0
Replace the disks
Start the machine
pvcreate /dev/rdsk/c#t#d0 (for both disks)
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x##0000 (re-use the minor number noted above)
vgcreate vg01 /dev/dsk/c#t#d0 /dev/dsk/c#t#d0 (both disks)
Re-create the lvols listed in /tmp/vg01.map
Make new filesystems on the lvols
Mount the new filesystems
Restore the data
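The steps above can be sketched as a script. This is a non-destructive dry-run only: the `run` wrapper echoes each command instead of executing it, and the device names (c1t5d0, c1t6d0), the minor number 0x010000, the tape device and the mount point are all assumptions you must replace with your own values from vgdisplay and `ls -l /dev/vg01/group`:

```shell
#!/bin/sh
# Dry-run of the offline rebuild procedure. Swap 'echo' out of run()
# only once you have verified every command against your own system.
run() { echo "+ $*"; }

run fbackup -f /dev/rmt/0m -i /data            # back up the data first
run vgdisplay -v vg01                          # record lvol sizes/devices
run umount /data
run vgchange -a n vg01                         # deactivate the VG
run vgexport -v -m /tmp/vg01.map vg01          # map file keeps lvol names
# ... shutdown, replace both disks, reboot ...
run pvcreate /dev/rdsk/c1t5d0
run pvcreate /dev/rdsk/c1t6d0
run mkdir /dev/vg01
run mknod /dev/vg01/group c 64 0x010000        # re-use the old minor number
run vgcreate vg01 /dev/dsk/c1t5d0 /dev/dsk/c1t6d0
# ... re-create lvols, newfs, mount, restore ...
```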

For making a quick mirror:
Create the lvol first with one extent, e.g.:
lvcreate -l 1 vg01
Then mirror the lvol:
lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c#t#d0
Now give it the full size, e.g. 4GB:
lvextend -L 4000 /dev/vg01/lvol1
This is much faster than allocating the full size first and mirroring afterwards.
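As a dry-run sketch (device name and sizes are illustrative assumptions): only the single initial extent has to be synced at mirror time, and the final lvextend grows both copies together, which is why the ordering matters. With the default 4MB PE size, 4000MB works out to 1000 extents:

```shell
#!/bin/sh
# Dry-run of the quick-mirror trick; run() echoes instead of executing.
run() { echo "+ $*"; }

run lvcreate -l 1 vg01                               # 1 extent only
run lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c1t6d0    # mirror it (cheap sync)
run lvextend -L 4000 /dev/vg01/lvol1                 # now grow to ~4GB

PE_MB=4                      # default PE size
SIZE_MB=4000
EXTENTS=$((SIZE_MB / PE_MB))
echo "final size = $EXTENTS extents per copy"
```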

Regards

Andrew
LEJARRE Patrick
Advisor

Re: Mirror configuration change

Hi Andreas,
Thanks for your reply, but it unfortunately doesn't answer the question.

The point is to do all the changes online!
Remember, we do have OnlineJFS and MIRROR/UX installed.

We cannot afford to have our apps become unavailable to our users. No shutdown is allowed.

Regards.
Andreas Voss
Honored Contributor

Re: Mirror configuration change

Hi,

in theory you can turn off the mirror, remove one disk from the VG, replace the disk,
add the new disk to the VG and set up the mirror onto it. Then do the same with the other disk.
BUT: if your VG has a too-small max PE you will NOT be able to include the 9GB disk in your VG!
Second:
It can be tricky to replace a disk at the same address with one of a different capacity; the system might not recognize the new disk correctly.
If your max PE is sufficient:
(Example: vg01 , /dev/vg01/lvol1)
lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c#t#d0
vgreduce vg01 /dev/dsk/c#t#d0
Replace the disk.
pvcreate /dev/rdsk/c#t#d0
vgextend vg01 /dev/dsk/c#t#d0
lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c#t#d0
Then repeat the same steps with the other disk.
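The per-disk steps above, repeated over both halves of the mirror, can be sketched as a loop. Dry-run only (`run` echoes rather than executes), and the device names are assumptions; in real life do NOT break the second mirror until the first replacement disk has fully resynced:

```shell
#!/bin/sh
# Dry-run of the one-disk-at-a-time online swap.
run() { echo "+ $*"; }

for disk in c1t5d0 c1t6d0; do
    run lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/$disk   # drop this mirror copy
    run vgreduce vg01 /dev/dsk/$disk                   # take disk out of the VG
    echo "  (hot-swap disk $disk now)"
    run pvcreate /dev/rdsk/$disk
    run vgextend vg01 /dev/dsk/$disk
    run lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/$disk   # re-mirror; wait for sync
done
```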

No warranty

Dave Wherry
Esteemed Contributor

Re: Mirror configuration change

You can do it all on-line. Break the mirrors off one disk, vgreduce the disk out of the volume group, then remove and replace it. The Jamaica should be hot-swappable. I would look for a time when the system is least busy.
Then you can vgextend and add the disk. Andreas was mostly correct in his comment about using the 9GB drive. If max PE is set to the size of the 4GB drives, that is all you will be able to use on those 9GB drives. You will be able to use the drives, just less than half of their capacity. Since you would only be able to use 4GB, you gain nothing by replacing your current drives.
To really fix this, as Andreas said, you need to backup the data and rebuild the volume groups.
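When rebuilding, it is worth sizing max PE for the 9GB disks explicitly so the same limit does not bite again (vgcreate's -e sets max physical extents per PV, -s the PE size in MB). A small sketch of the arithmetic; the 8683 MB usable capacity is an assumed figure, so check your actual disk with diskinfo first:

```shell
#!/bin/sh
# How many physical extents a ~9GB disk needs at the default 4MB PE size.
DISK_MB=8683        # assumed usable capacity; verify with diskinfo
PE_MB=4             # default PE size
MAX_PE=$(( (DISK_MB + PE_MB - 1) / PE_MB ))   # round up
echo "create the VG with max PE >= $MAX_PE"

# e.g. (placeholder device files):
# vgcreate -s $PE_MB -e $MAX_PE vg01 /dev/dsk/c1t5d0 /dev/dsk/c1t6d0
```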
Tim Malnati
Honored Contributor

Re: Mirror configuration change

These guys are absolutely correct in their discussion of the max PE problem, and OnlineJFS cannot do anything to change it either. MirrorDisk/UX does not have the capability of mirroring across two different volume groups either. The bottom line is that you will require an outage. But the duration of the outage can be drastically reduced by using rdist as the method of copying data from the old volume group to the new one.
To briefly describe it:
(1) remove the mirrors from the existing drives and pull the 4 gig drive that becomes unused,
(2) install the 9 gig drive and create the new volume group, logical volume(s) and file system(s),
(3) mount the new file system(s) on new mount point(s),
(4) copy all the data from the old file system(s) to the new file system(s) using rdist directed at the localhost target,
(5) outage time: eliminate end-user access to both the old and new volume groups,
(6) rdist again to copy the files that changed during and after the original rdist,
(7) unmount both old and new file system(s) and remount the new logical volume(s) on the old mount point(s),
(8) outage done: restore access,
(9) update fstab with the new alignment,
(10) break down the old volume group and remove the other 4 gig drive,
(11) install the 9 gig drive that will be used as mirror, and vgextend and lvextend the new volume group and file system(s).

The outage time that this evolution takes is totally dependent on the amount of data that is changing during and after the first rdist. Another rdist period between the first and last can be added to the procedure to potentially reduce the time further. I've actually done this thing and the outage time was about ten minutes in my case. The key to being fast and successful is having the entire evolution well planned with every step fully documented in a procedure before the fact. Having some scripts prepared to perform the actual changes is important too.
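The rdist passes at the heart of this (steps 4 and 6) can be sketched as below. Dry-run only, and the mount points /data and /newdata plus the vgnew name are assumptions; `rdist -c pathname host:dest` copies a tree to a target, and pointing it at localhost keeps the copy on the same machine:

```shell
#!/bin/sh
# Dry-run of the two-pass rdist copy; run() echoes instead of executing.
run() { echo "+ $*"; }

run rdist -c /data localhost:/newdata    # pass 1: bulk copy, users still on
# ... outage starts: stop all user access to /data ...
run rdist -c /data localhost:/newdata    # pass 2: only the deltas, fast
run umount /data
run umount /newdata
run mount /dev/vgnew/lvol1 /data         # new lvol on the old mount point
# ... restore access, then fix /etc/fstab ...
```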
Dave Wherry
Esteemed Contributor

Re: Mirror configuration change

I just went back and reviewed this thread. I had not seen Tim's answer before. What a great option for fixing the max PE problem. Way to go Tim!
I'd like to rework a couple of my volume groups and am going to do some testing with this. It will depend on the speed of rdist because I have 14GB volume groups to work with. Each logical volume contains Oracle database chunks, each about 2GB. So once a record is added/changed/deleted I'll have to recopy 2GB. If rdist can work faster than backup and restore this will be great.
I might add a couple more steps for my situation. My volume groups are numbered in assigned ranges: 10-19 for production, 20-29 for QA, and so on. I'd like to keep the same volume group numbers. To do this I will have to vgexport both the old and new volume groups, then vgimport the new volume group and, I'll have to look up the command, do a vgchangeid? to set the VG number back to the original.
Until HP gets around to giving us the ability to modify volume groups Tim's process has great potential.
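One hedged alternative, in case that half-remembered command turns out not to exist: the VG number is just the minor number of the group file, which you pick yourself at mknod time (as in the rebuild procedure earlier in this thread). So an export/re-import under the old number should do it. Dry-run sketch; the names vgnew/vg10, minor 0x0a0000 and the device file are all assumptions:

```shell
#!/bin/sh
# Dry-run: re-import the new VG under the old VG number (minor 0x0a0000).
run() { echo "+ $*"; }

run vgchange -a n vgnew
run vgexport -v -m /tmp/vgnew.map vgnew
run mkdir /dev/vg10
run mknod /dev/vg10/group c 64 0x0a0000    # the old production minor number
run vgimport -v -m /tmp/vgnew.map vg10 /dev/dsk/c2t0d0
run vgchange -a y vg10
```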