LVM and VxVM

Re: Mirror data with Mirrordisk/UX between two LUNs

 
SOLVED
Prokopets
Respected Contributor

Mirror data with Mirrordisk/UX between two LUNs

Hi, all!

  I have data on some LDEVs presented from a disk array to HP-UX 11.23. I want to mirror this data to LDEVs presented from another array using MirrorDisk/UX, but I ran into a problem: MirrorDisk seems to let me mirror only onto the same physical LUN. I think I'm doing something wrong. Can you please advise me on the correct way to do this?

 

Philipp.

4 REPLIES
Matti_Kurkela
Honored Contributor
Solution

Re: Mirror data with Mirrordisk/UX between two LUNs

I think you've mistaken the nature of MirrorDisk's restriction: when you mirror the local system disk to another local disk, the two disks are separate physical units and also, obviously, separate LUNs. Mirroring only within the same LUN would not be very useful in the case of disk failures.

 

(However, you can mirror within the same LUN if you override the default "strict" extent allocation policy of the LVM... but it's usually a bad idea.)
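For reference, relaxing the policy looks roughly like this (a sketch, not a recommendation; vg01 and lvol1 are placeholder names, and this assumes the standard HP-UX `lvchange -s` option):

```shell
# CAUTION: with non-strict allocation, mirror copies may be placed on the
# same physical volume, so one disk failure can destroy both copies.
lvchange -s n /dev/vg01/lvol1    # 'n' = non-strict extent allocation
lvextend -m 1 /dev/vg01/lvol1    # the mirror may now land on the same LUN
```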

 

The actual restriction is that both the source and destination LUNs must belong to the same VG. A VG can contain more than one LUN.

 

You did not specify whether you're using LVM or VxVM. Assuming that you use LVM, the basic procedure is:

  • present the new LUN to the system, ioscan & insf to make it visible
  • use pvcreate on the new LUN
  • use vgextend to join the new LUN to the VG you wish to mirror
  • use lvextend -m 1 to mirror the LV(s) of the VG to the new LUN
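Assuming LVM, the steps above might look like the following (a sketch: c9t0d1 is a hypothetical device path for the new LUN, and vg01/lvol1 are placeholder names):

```shell
ioscan -fnC disk                     # rescan for the newly presented LUN
insf -e                              # create its device special files
pvcreate /dev/rdsk/c9t0d1            # initialize the LUN as an LVM physical volume
vgextend /dev/vg01 /dev/dsk/c9t0d1   # join it to the VG holding the data
lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c9t0d1  # mirror the LV onto the new LUN
```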
MK
Prokopets
Respected Contributor

Re: Mirror data with Mirrordisk/UX between two LUNs

Hi!
   You were absolutely right; I was just confused by SMH. Via the CLI everything works fine. Thanks for the explanation! I have now tested it (on 11.31, but it should also work on 11.23):

 

# pvcreate /dev/rdsk/c14t0d5
Physical volume "/dev/rdsk/c14t0d5" has been successfully created.
# mkdir -p -m u=rwx,g=rx,o=rx /dev/vg04
# mknod /dev/vg04/group c 64 0x010000
# chmod u=rw,g=r,o= /dev/vg04/group
# vgcreate ... /dev/dsk/c10t0d5 /dev/dsk/c12t0d5 /dev/dsk/c20t0d5 /dev/dsk/c6t0d5 /dev/dsk/c8t0d5 /dev/dsk/c16t0d5 /dev/dsk/c18t0d5
Increased the number of physical extents per physical volume to 1535.
Volume group "/dev/vg04" has been successfully created.
Volume Group configuration for /dev/vg04 has been saved in /etc/lvmconf/vg04.conf
# lvcreate -A y -n lvol01 -L 2048 -p w -s y vg04
Logical volume "/dev/vg04/lvol01" has been successfully created with
character device "/dev/vg04/rlvol01".
Logical volume "/dev/vg04/lvol01" has been successfully extended.
Volume Group configuration for /dev/vg04 has been saved in /etc/lvmconf/vg04.conf
# lvchange -a y -t 0 /dev/vg04/lvol01
Logical volume "/dev/vg04/lvol01" has been successfully changed.
Volume Group configuration for /dev/vg04 has been saved in /etc/lvmconf/vg04.conf
# mkdir -p -m u=rwx,g=rx,o=rx /mnt/mount_vg04_1
# mkfs -F vxfs /dev/vg04/rlvol01
    version 7 layout
    2097152 sectors, 2097152 blocks of size 1024, log size 16384 blocks
    largefiles supported
# mount -F vxfs -e /dev/vg04/lvol01 /mnt/mount_vg04_1
mount: mounted /dev/vg04/lvol01 on /mnt/mount_vg04_1
# pvcreate /dev/rdsk/c14t0d6
Physical volume "/dev/rdsk/c14t0d6" has been successfully created.
# vgextend ... /dev/dsk/c20t0d6 /dev/dsk/c6t0d6 /dev/dsk/c8t0d6 /dev/dsk/c12t0d6 /dev/dsk/c16t0d6 /dev/dsk/c18t0d6
Current path "/dev/dsk/c10t0d5" is an alternate link, skip.
Current path "/dev/dsk/c12t0d5" is an alternate link, skip.
Current path "/dev/dsk/c20t0d5" is an alternate link, skip.
Current path "/dev/dsk/c6t0d5" is an alternate link, skip.
Current path "/dev/dsk/c8t0d5" is an alternate link, skip.
Current path "/dev/dsk/c16t0d5" is an alternate link, skip.
Current path "/dev/dsk/c18t0d5" is an alternate link, skip.
Volume group "/dev/vg04" has been successfully extended.
Volume Group configuration for /dev/vg04 has been saved in /etc/lvmconf/vg04.conf
# lvextend -A y -m 1 /dev/vg04/lvol01 /dev/dsk/c14t0d6
The newly allocated mirrors are now being synchronized. This operation will
take some time. Please wait ....
Logical volume "/dev/vg04/lvol01" has been successfully extended.
Volume Group configuration for /dev/vg04 has been saved in /etc/lvmconf/vg04.conf
# lvdisplay /dev/vg04/lvol01
--- Logical volumes ---
LV Name                     /dev/vg04/lvol01
VG Name                     /dev/vg04
LV Permission               read/write
LV Status                   available/syncd
Mirror copies               1
Consistency Recovery        MWC
Schedule                    parallel
LV Size (Mbytes)            2048
Current LE                  512
Allocated PE                1024
Stripes                     0
Stripe Size (Kbytes)        0
Bad block                   on
Allocation                  strict
IO Timeout (Seconds)        default

and it shows one mirror copy. Thanks again. (Should add one more tag: never believe SMH :) )

 

Philipp.

Prokopets
Respected Contributor

Re: Mirror data with Mirrordisk/UX between two LUNs

I ran into one more strange issue. Now I'm trying to reproduce the same steps on 11.31 with VG version 2.0:

 

# pvcreate /dev/rdisk/disk53
Physical volume "/dev/rdisk/disk53" has been successfully created.
# vgcreate -V 2.0 -A y -x y -S 5g -s 4 vg03 /dev/disk/disk53
vgcreate: The size of the volume group has been set to 10228m in order to include
the capacity of all the physical volumes specified.
Volume group "/dev/vg03" has been successfully created.
Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf
# lvcreate -L 5120 vg03
Logical volume "/dev/vg03/lvol1" has been successfully created with
character device "/dev/vg03/rlvol1".
Logical volume "/dev/vg03/lvol1" has been successfully extended.
Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf
# lvchange -a y /dev/vg03/lvol1
Logical volume "/dev/vg03/lvol1" has been successfully changed.
Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf
# mkfs -F vxfs /dev/vg03/lvol1
    version 6 layout
    5242880 sectors, 5242880 blocks of size 1024, log size 16384 blocks
    largefiles supported
# mount /dev/vg03/lvol1 /mnt
# pvcreate /dev/rdisk/disk49
Physical volume "/dev/rdisk/disk49" has been successfully created.
# vgextend -x y /dev/vg03/lvol1 /dev/disk/disk49
Volume group "/dev/vg03/lvol1" does not exist in the "/etc/lvmtab" file.
Volume group "/dev/vg03/lvol1" does not exist in the "/etc/lvmtab_p" file.
# vgextend -x y /dev/vg03 /dev/disk/disk49
vgextend: Couldn't install the physical volume "/dev/disk/disk49".
Error: The volume group exceeds the configured size.
# vgdisplay vg03
--- Volume groups ---
VG Name                     /dev/vg03
VG Write Access             read/write
VG Status                   available
Max LV                      511
Cur LV                      1
Open LV                     1
Max PV                      511
Cur PV                      1
Act PV                      1
Max PE per PV               2557
VGDA                        2
PE Size (Mbytes)            4
Total PE                    2557
Alloc PE                    1280
Free PE                     1277
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  2.0
VG Max Size                 10228m
VG Max Extents              2557
#

 

What could be wrong here?

Matti_Kurkela
Honored Contributor

Re: Mirror data with Mirrordisk/UX between two LUNs

When creating the VG, you set the VG Max Size parameter to much too small a value:

# vgcreate -V 2.0 -A y -x y -S 5g -s 4 vg03 /dev/disk/disk53
vgcreate: The size of the volume group has been set to 10228m in order to include
the capacity of all the physical volumes specified.
Volume group "/dev/vg03" has been successfully created.

Effectively, you told LVM that the maximum total size of this VG should be 5 gigabytes (-S 5g), even though your PV /dev/disk/disk53 is more than 10 gigabytes in size. To avoid wasting more than half of the PV's capacity, vgcreate overrides your -S value, but it sets the value to exactly the size of your PV, effectively making it impossible to extend this VG for any purpose, including mirroring.
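You can see the override in the numbers vgdisplay reported: 2557 extents of 4 MB each is exactly the 10228 MB that vgcreate chose, so there is no headroom left for a second PV.

```shell
# 2557 PE x 4 MB/PE = 10228 MB, matching "VG Max Size 10228m" in vgdisplay
echo $((2557 * 4))   # prints 10228
```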

 

The VG Max Size value should be chosen not from the current size of the VG, but from your largest estimate of its future size, including the capacity requirements for mirroring, preferably with a 2x or 3x safety factor if practical. In practice, you should probably never specify anything less than 1t here.

 

In your case, you could have used vgcreate like this:

# vgcreate -V 2.0 -A y -x y -S 32t -s 4 vg03 /dev/disk/disk53

 This would have made LVM ensure that you could keep adding PVs to this VG until the total size of the VG reaches 32 terabytes, or about 16 terabytes of filesystem capacity if you use two-way mirroring (minus the filesystem metadata overhead, of course).

 

 

MK