Operating System - HP-UX

Jeff Gyurko
Frequent Advisor

LVM size limits & stale mirror extents

Hi community... I've a weird issue I'm going to explain, and I'm looking for some honest feedback on possible remedies...

HP-UX 11.11 64-bit on an rp7410.

We're migrating data from one EMC storage array to another. We're doing this by establishing mirrors on the new array, then splitting off the primary (the left column of lvdisplay -v) and keeping the data on the mirror. This has worked flawlessly so far on a dozen or so servers. I have one more server to go, and it contains a 2 TB logical volume at 98% of capacity (Oracle). Here are the particulars of the VG:
# vgdisplay -v /dev/vg08
--- Volume groups ---
VG Name /dev/vg08
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 1
Open LV 1
Max PV 64
Cur PV 6
Act PV 6
Max PE per PV 2138
VGDA 12
PE Size (Mbytes) 256
Total PE 8192
Alloc PE 8192
Free PE 0
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
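
For reference, the per-LV sequence we've been running on the other servers is roughly the following (the PV paths here are placeholders for the new- and old-array disks, not necessarily the exact devices in this VG):

# lvextend -m 1 /dev/vg08/lvol1 <new_array_pv> ...    <- add a mirror copy on the new-array PVs
# lvreduce -m 0 /dev/vg08/lvol1 <old_array_pv> ...    <- once synced, drop the copy on the old-array PVs, keeping the data on the mirror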

When establishing the mirrors, the very last extent fails to complete the mirroring. The command completes, but lvdisplay -v shows the very last extent as stale on the mirror, like this:

08191 /dev/dsk/c20t2d0 00798 current /dev/dsk/c26t6d0 00798 stale

No matter which device I use, the last extent fails to mirror completely. If I use c26t6d0 as the first device (there are 6 devices that make up the mirror), all extents mirror completely, so there is no bad spot on c26t6d0 itself. Whichever device I choose to be last always ends up with its last extent stale. The lvsync/vgsync commands return I/O errors. I know there is a 2 TB limit on an LV, but the LV created fine and the data on it is OK, so I don't think that's it.
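
For what it's worth, this is roughly how I've been checking and retrying the sync on that LV:

# lvdisplay -v /dev/vg08/lvol1 | grep -i stale    <- shows only extent 08191 as stale
# lvsync /dev/vg08/lvol1                          <- fails with an I/O error on that extent
# vgsync /dev/vg08                                <- same result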

I've thought of using fsadm to reduce the size of the filesystem by 256 MB (1 extent), but when it relocates data, does it free the space from the end of the filesystem, i.e. the last extent? If I run fsadm -Ee to reorganize extents, does that free up the extents at the end of the LV? If it does, can you then safely lvreduce by 1 extent from the back?
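
If that's safe, the sequence I have in mind would be something along these lines (the mount point is just a placeholder, the new size is 8191 extents x 256 MB = 2,096,896 MB, and the units for fsadm -b should be double-checked against the fsadm_vxfs man page before running anything):

# fsadm -F vxfs -Ee /oracle_mount              <- report and reorganize extent fragmentation first
# fsadm -F vxfs -b 2147221504 /oracle_mount    <- shrink the filesystem to 8191 extents (value assumes -b takes 1 KB sectors)
# lvreduce -l 8191 /dev/vg08/lvol1             <- then drop the last 256 MB extent from the LV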

Any knowledge/suggestions appreciated.
4 REPLIES
Ismail Azad
Esteemed Contributor

Re: LVM size limits & stale mirror extents

Hi,

Extent defragmentation at the LV level is not the same as running fsadm; fsadm reorganizes blocks at the filesystem level. Also, you are using the maximum possible size of an LV, 2 TB. When planning the size of an LV, about 10% minfree plus 5% filesystem overhead should be kept aside for optimum performance and for cases like yours. However, I don't understand where you say you are trying to reduce the size of an extent to 256 MB: I think that's the maximum supported extent size, and decreasing the extent size only increases your overhead because the extent map gets bigger. Hope the info helps.

Regards
Ismail Azad
Read, read and read... Then read again until you read "between the lines".....
Jayakrishnan G Naik
Trusted Contributor

Re: LVM size limits & stale mirror extents

Hi Jeff,

Can I have the complete vgdisplay -v /dev/vg08 output?

Thanks & Regards
Jayakrishnan G Naik
Benoy Daniel
Trusted Contributor

Re: LVM size limits & stale mirror extents

Try a defragment and then a pvmove instead of mirroring. Or else create a new VG on the new LUNs and do a data copy; that needs some amount of downtime.
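
Something like this per old disk, assuming a matching new LUN has already been presented (device paths are placeholders only):

# vgextend /dev/vg08 /dev/dsk/<new_pv>          <- add the new LUN to the VG
# pvmove /dev/dsk/<old_pv> /dev/dsk/<new_pv>    <- move all extents off the old disk onto the new one
# vgreduce /dev/vg08 /dev/dsk/<old_pv>          <- remove the emptied old disk from the VG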
Jeff Gyurko
Frequent Advisor

Re: LVM size limits & stale mirror extents

- I've thought of using pvmove and that will more than likely be my next option.

- Since it was the last extent that failed to mirror, I wanted to reduce the LV by 1 extent (256 MB), not change the extent size.

- I'm aware of the free space you'd like to preserve, but you work with what you have to work with.

I've already reduced the volumes I was using for the mirror, so the vgdisplay won't show anything other than the current devices, but here it is:
--- Volume groups ---
VG Name /dev/vg08
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 1
Open LV 1
Max PV 64
Cur PV 6
Act PV 6
Max PE per PV 2138
VGDA 12
PE Size (Mbytes) 256
Total PE 8192
Alloc PE 8192
Free PE 0
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0

--- Logical volumes ---
LV Name /dev/vg08/lvol1
LV Status available/syncd
LV Size (Mbytes) 2097152
Current LE 8192
Allocated PE 8192
Used PV 6


--- Physical volumes ---
PV Name /dev/dsk/c12t0d0
PV Name /dev/dsk/c20t0d0 Alternate Link
PV Status available
Total PE 1999
Free PE 0
Autoswitch On
Proactive Polling Off

PV Name /dev/dsk/c22t0d0
PV Name /dev/dsk/c18t0d0 Alternate Link
PV Status available
Total PE 719
Free PE 0
Autoswitch On
Proactive Polling Off

PV Name /dev/dsk/c22t0d1
PV Name /dev/dsk/c18t0d1 Alternate Link
PV Status available
Total PE 399
Free PE 0
Autoswitch On
Proactive Polling Off

PV Name /dev/dsk/c22t2d4
PV Name /dev/dsk/c18t2d4 Alternate Link
PV Status available
Total PE 2138
Free PE 0
Autoswitch On
Proactive Polling Off

PV Name /dev/dsk/c22t2d5
PV Name /dev/dsk/c18t2d5 Alternate Link
PV Status available
Total PE 2138
Free PE 0
Autoswitch On
Proactive Polling Off

PV Name /dev/dsk/c20t1d0
PV Name /dev/dsk/c12t1d0 Alternate Link
PV Status available
Total PE 799
Free PE 0
Autoswitch On
Proactive Polling Off

Thanks so far...