12-02-2010 12:08 PM
LVM size limits & stale mirror extents
HP-UX 11.11 64-bit on an rp7410.
We're migrating data from one EMC storage array to another. We're doing this by establishing mirrors on the new array, then splitting off the primary (the left column of lvdisplay -v) and keeping the data on the mirror. This has worked flawlessly so far on a dozen or so servers. I have one more server to go, and it contains a 2 TB logical volume at 98% capacity (Oracle). Here are the particulars of the VG:
# vgdisplay -v /dev/vg08
--- Volume groups ---
VG Name /dev/vg08
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 1
Open LV 1
Max PV 64
Cur PV 6
Act PV 6
Max PE per PV 2138
VGDA 12
PE Size (Mbytes) 256
Total PE 8192
Alloc PE 8192
Free PE 0
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
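For reference, the mirror-and-split sequence we've been using on the other servers looks roughly like this (a sketch, not our exact commands; the device names are examples from this thread and MirrorDisk/UX is required):
# add a mirror copy of the LV on a new-array PV
lvextend -m 1 /dev/vg08/lvol1 /dev/dsk/c26t6d0
# once synced, split off the primary by dropping the copy on the old-array PV
lvreduce -m 0 /dev/vg08/lvol1 /dev/dsk/c20t2d0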
When establishing the mirrors, the very last extent fails to complete the mirroring process. The command completes, but lvdisplay -v shows that last extent as stale on the mirror, like this:
08191 /dev/dsk/c20t2d0 00798 current /dev/dsk/c26t6d0 00798 stale
No matter which device I use, the last extent fails to mirror completely. If I use c26t6d0 as the first device (six devices make up the mirror), all extents mirror cleanly, so there is no bad spot on c26t6d0 itself. Whichever device I choose to be last always ends up with its final extent stale. The lvsync/vgsync commands return I/O errors. I know there is a 2 TB limit on an LV (and at 8192 extents × 256 MB = 2,097,152 MB, this LV is exactly 2 TB), but the LV created OK and the data on it is fine, so I don't think that's it.
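To rule out a positional media problem on whichever device ends up last, one thing I may try is a raw read of that PV's tail (a sketch; the skip value is illustrative and would be derived from the diskinfo size):
# size of the suspect PV
diskinfo /dev/rdsk/c26t6d0
# resync attempt on just this LV, watching for the reported I/O errors
lvsync /dev/vg08/lvol1
# raw read of the last part of the PV (skip is in 1 MB blocks; adjust to the diskinfo size)
dd if=/dev/rdsk/c26t6d0 of=/dev/null bs=1024k skip=545000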
I've thought of using fsadm to shrink the filesystem by 256 MB (one extent), but does that take the space from the end of the LV when it relocates data? If I run fsadm -Ee to reorganize extents, does that free up the extents at the end of the LV? If it does, can I then safely lvreduce by one extent from the back?
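For concreteness, the sequence I have in mind would be something like the following (a sketch only; the mount point is a placeholder, OnlineJFS is required for an online shrink, and fsadm -b sizes are in 1 KB sectors). My understanding is that lvreduce drops the highest-numbered logical extents, i.e. from the back:
# shrink the file system by one extent: 2097152 MB - 256 MB = 2096896 MB = 2147221504 KB
fsadm -F vxfs -b 2147221504 /oracle
# then drop the last logical extent from the LV (new LE count = 8191)
lvreduce -l 8191 /dev/vg08/lvol1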
Any knowledge/suggestions appreciated.
12-02-2010 07:29 PM
Re: LVM size limits & stale mirror extents
Extent defragmentation at the LV level is not the same as running fsadm, because fsadm reorganizes blocks at the file system level. You are using the maximum possible LV size of 2 TB; when planning an LV's size, keeping 10% minfree plus about 5% file system overhead space is advisable, both for performance and for situations like yours. However, I don't understand why you would reduce the extent size to 256 MB: I think that is already the maximum supported extent size, and decreasing the extent size only increases overhead, since the extent map has to track more extents. Hope the info helps.
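To illustrate the extent-size point (a sketch; the device and parameters are placeholders based on the vgdisplay above, and the /dev/vg08/group node must already exist):
# PE size (-s, in MB) is fixed at VG creation and cannot be changed later;
# 256 MB is, I believe, the maximum supported value
vgcreate -s 256 -e 2138 -p 64 /dev/vg08 /dev/dsk/c12t0d0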
Regards
Ismail Azad
12-02-2010 07:47 PM
Re: LVM size limits & stale mirror extents
Could you post the complete output of vgdisplay -v /dev/vg08?
Thanks & Regards
Jayakrishnan G Naik
12-02-2010 08:21 PM
Re: LVM size limits & stale mirror extents
12-03-2010 06:22 AM
Re: LVM size limits & stale mirror extents
- Since it was the last extent that failed to mirror, I wanted to reduce the LV by one extent (256 MB), not change the extent size.
- I'm aware of the free space you'd like to preserve, but you work with what you have.
I've already reduced away the mirror copies I was using, so the vgdisplay won't show anything other than the current devices, but here it is:
--- Volume groups ---
VG Name /dev/vg08
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 1
Open LV 1
Max PV 64
Cur PV 6
Act PV 6
Max PE per PV 2138
VGDA 12
PE Size (Mbytes) 256
Total PE 8192
Alloc PE 8192
Free PE 0
Total PVG 0
Total Spare PVs 0
Total Spare PVs in use 0
--- Logical volumes ---
LV Name /dev/vg08/lvol1
LV Status available/syncd
LV Size (Mbytes) 2097152
Current LE 8192
Allocated PE 8192
Used PV 6
--- Physical volumes ---
PV Name /dev/dsk/c12t0d0
PV Name /dev/dsk/c20t0d0 Alternate Link
PV Status available
Total PE 1999
Free PE 0
Autoswitch On
Proactive Polling Off
PV Name /dev/dsk/c22t0d0
PV Name /dev/dsk/c18t0d0 Alternate Link
PV Status available
Total PE 719
Free PE 0
Autoswitch On
Proactive Polling Off
PV Name /dev/dsk/c22t0d1
PV Name /dev/dsk/c18t0d1 Alternate Link
PV Status available
Total PE 399
Free PE 0
Autoswitch On
Proactive Polling Off
PV Name /dev/dsk/c22t2d4
PV Name /dev/dsk/c18t2d4 Alternate Link
PV Status available
Total PE 2138
Free PE 0
Autoswitch On
Proactive Polling Off
PV Name /dev/dsk/c22t2d5
PV Name /dev/dsk/c18t2d5 Alternate Link
PV Status available
Total PE 2138
Free PE 0
Autoswitch On
Proactive Polling Off
PV Name /dev/dsk/c20t1d0
PV Name /dev/dsk/c12t1d0 Alternate Link
PV Status available
Total PE 799
Free PE 0
Autoswitch On
Proactive Polling Off
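As a cross-check on the output above: the per-PV Total PE values sum to 1999 + 719 + 399 + 2138 + 2138 + 799 = 8192, matching the VG's Alloc PE, so every extent is allocated. A quick way to compute that sum (a sketch; it counts only the per-PV lines after the Physical volumes header):
vgdisplay -v /dev/vg08 | sed -n '/Physical volumes/,$p' | awk '/Total PE/ {s += $3} END {print s}'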
Thanks so far...