02-23-2017 09:05 AM
Checking extent distribution of Logical Volumes against Physical Volumes
Hello,
I am desperately searching for a way to see how the extents of a Logical Volume are distributed across the disks that make up the Volume Group, especially when the Logical Volume is mirrored between 2 different storage arrays.
The idea behind this check is:
- to detect whether a logical volume is mirrored on two volumes from the same array
- to verify which extents are not synchronized
I had a case where one of my clients configured a volume group with several disks from one array and several from a second array, and then created a logical volume mirrored within the same array ... :-(
Under HP-UX we can simply check the location of the extents of a logical volume with lvdisplay and its -v option. This doesn't work under Linux.
Example with vg01, which contains 2 disks from one array (disk12 and disk23) and 2 disks from another array (disk114 and disk142):
root@kskckca:/# lvdisplay -v /dev/vg01/lvol1
--- Logical volumes ---
LV Name /dev/vg01/lvol1
VG Name /dev/vg01
LV Permission read/write
LV Status available/syncd
Mirror copies 1
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 69988
Current LE 17497
Allocated PE 34994
Stripes 0
Stripe Size (Kbytes) 0
Bad block NONE
Allocation strict
IO Timeout (Seconds) default
--- Distribution of logical volume ---
PV Name LE on PV PE on PV
/dev/disk/disk12 17497 17497
/dev/disk/disk114 17494 17494
/dev/disk/disk23 3 3
--- Logical extents ---
LE PV1 PE1 Status 1 PV2 PE2 Status 2
00000000 /dev/disk/disk12 00000000 current /dev/disk/disk114 00003821 current
00000001 /dev/disk/disk12 00000001 current /dev/disk/disk114 00003822 current
00000002 /dev/disk/disk12 00000002 current /dev/disk/disk114 00003823 current
.../...
00017494 /dev/disk/disk12 00017494 current /dev/disk/disk23 00006241 current
00017495 /dev/disk/disk12 00017495 current /dev/disk/disk23 00006242 current
00017496 /dev/disk/disk12 00017496 current /dev/disk/disk23 00006243 current
Here I can see that almost all extents of this LV are mirrored between disk12 (on my first array) and disk114 (a volume from the second array). OK.
BUT the last 3 extents of the LV are mirrored between disk12 and disk23 ... which reside on the same storage array. Not good :-(
I can also see that all extents are synchronized: status = current.
Is there any way to get this useful information under Linux? I am currently working with Red Hat 7.3.
Many thanks in advance
Eric
02-28-2017 01:12 AM
Re: Checking extent distribution of Logical Volumes against Physical Volumes
There is a "-m" switch available with "lvdisplay" and "pvdisplay" which shows the mapping of logical extents to physical volumes and physical extents. This may help you.
Example:
[root@ansible-host ~]# lvdisplay -m /dev/datavg/datalv
--- Logical volume ---
LV Path /dev/datavg/datalv
LV Name datalv
VG Name datavg
LV UUID eMOIpm-0iHa-gWHH-itPq-6lYA-fmVg-uhmYLH
LV Write Access read/write
LV Creation host, time ansible-host.example.com, 2016-12-28 04:53:13 -0500
LV Status available
# open 1
LV Size 296.00 MiB
Current LE 74
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
--- Segments ---
Logical extents 0 to 73:
Type linear
Physical volume /dev/sdb1
Physical extents 0 to 73
[root@ansible-host ~]# pvdisplay -m /dev/sdb1
--- Physical volume ---
PV Name /dev/sdb1
VG Name datavg
PV Size 300.00 MiB / not usable 4.00 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 74
Free PE 0
Allocated PE 74
PV UUID cjcgSN-tzcq-DMKn-fJ0R-NE9o-qhig-VZDjKY
--- Physical Segments ---
Physical extent 0 to 73:
Logical volume /dev/datavg/datalv
Logical extents 0 to 73
SimplyLinuxFAQ
02-28-2017 03:20 AM - edited 02-28-2017 03:25 AM
Re: Checking extent distribution of Logical Volumes against Physical Volumes
Thank you for the information. I totally missed the -m / '--maps' option. I gave a kudo for this.
Unfortunately, I am afraid the information given by "-m" is very limited for a mirrored raid1 logical volume: it gives some details about the "internal" Logical Volumes, but I still don't know how a given logical extent is mapped to its physical extent(s).
To clarify, here is a small test I did:
- created a VG with 2 x 10 GB SAN virtual volumes from one storage array, and 2 more 10 GB volumes from another array
- tagged the 2 virtual volumes from the first array with BDX, and the 2 others with LAC:
# pvs -o name,vg_name,tags
PV VG PV Tags
/dev/mapper/YRO_BDX_POCSGLXPK10_01 vg_sglxpk10 BDX
/dev/mapper/YRO_BDX_POCSGLXPK10_02 vg_sglxpk10 BDX
/dev/mapper/YRO_LAC_POCSGLXPK10_01 vg_sglxpk10 LAC
/dev/mapper/YRO_LAC_POCSGLXPK10_02 vg_sglxpk10 LAC
- then, I tried to create an 11 GB mirrored volume. Because that size is greater than a single SAN volume, the logical volume will span 2 SAN volumes and the mirror the other 2. The challenge was to have all extents from one site (for example BDX) mirrored on the other site (LAC). So I tried this:
lvcreate -n lvol1 -L 11G -m 1 --type raid1 vg_sglxpk10 @bdx @LAC
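As an aside, LVM's allocation policy can be made tag-aware, which targets exactly this goal: with `--alloc cling` and the `allocation/cling_tag_list` setting, extents added to a mirror leg prefer PVs carrying the same tag as the PVs already backing that leg. This is documented in lvm.conf(5) for keeping mirror halves at separate sites; a sketch of the fragment, using the tag names from this thread (behavior should be verified on the RHEL 7.3 LVM version in question):

```
# /etc/lvm/lvm.conf fragment (sketch): with allocation policy "cling",
# extents allocated for a mirror leg stick to PVs whose tag matches the
# PVs already backing that leg, keeping one leg on @BDX and one on @LAC.
allocation {
    cling_tag_list = [ "@BDX", "@LAC" ]
}
```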
If I try to analyze how the extents are distributed across physical volumes with the -m option:
# lvdisplay -m /dev/vg_sglxpk10/lvol1
.../...
--- Segments ---
Logical extents 0 to 2815:
Type raid1
Monitoring monitored
Raid Data LV 0
Logical volume lvol1_rimage_0
Logical extents 0 to 2815
Raid Data LV 1
Logical volume lvol1_rimage_1
Logical extents 0 to 2815
Raid Metadata LV 0 lvol1_rmeta_0
Raid Metadata LV 1 lvol1_rmeta_1
I can see the distribution against the "internal" Logical Volumes lvol1_rimage_0 and lvol1_rimage_1, but not directly against Physical Volumes. So I am unable to determine whether the mirroring is done between 2 volumes from the same array [ bad :-( ] or from different arrays [ nice :-) ]
And in the case of this test there is a real problem:
- First, _rimage_0 lies on 1 volume from one array and 1 volume from the other array, and the same is true for the internal LV _rimage_1. This means that a logical extent in this configuration can be mirrored between two physical extents that belong to the same array.
It can be checked like this:
# lvs -a -o name,vg_name,devices vg_sglxpk10
LV VG Devices
lvol1 vg_sglxpk10 lvol1_rimage_0(0),lvol1_rimage_1(0)
[lvol1_rimage_0] vg_sglxpk10 /dev/mapper/YRO_BDX_POCSGLXPK10_01(1)
[lvol1_rimage_0] vg_sglxpk10 /dev/mapper/YRO_LAC_POCSGLXPK10_02(0)
[lvol1_rimage_1] vg_sglxpk10 /dev/mapper/YRO_BDX_POCSGLXPK10_02(1)
[lvol1_rimage_1] vg_sglxpk10 /dev/mapper/YRO_LAC_POCSGLXPK10_01(0)
[lvol1_rmeta_0] vg_sglxpk10 /dev/mapper/YRO_BDX_POCSGLXPK10_01(0)
[lvol1_rmeta_1] vg_sglxpk10 /dev/mapper/YRO_BDX_POCSGLXPK10_02(0)
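A hedged sketch of how this check could be automated: the array behind each backing PV is inferred from the `_BDX_`/`_LAC_` token in the multipath device names above (a naming convention assumed from this thread), and any rimage leg backed by PVs from both arrays is flagged. The here-document replays the captured output; on a live system the `lvs` command itself would be piped in instead.

```shell
# Flag hidden raid1 legs (rimage LVs) whose backing PVs span both
# arrays; such a leg means some extents can end up mirrored inside a
# single array. Input format: "lvs -a -o name,vg_name,devices" output.
report=$(awk '
  $1 ~ /rimage/ {
    # Remember which array tokens back this leg ($3 is the device column).
    legs[$1] = legs[$1] (index($3, "_BDX_") ? "BDX " : "LAC ")
  }
  END {
    for (leg in legs)
      if (legs[leg] ~ /BDX/ && legs[leg] ~ /LAC/)
        print leg " is backed by PVs from both arrays"
  }
' <<'EOF'
  lvol1            vg_sglxpk10 lvol1_rimage_0(0),lvol1_rimage_1(0)
  [lvol1_rimage_0] vg_sglxpk10 /dev/mapper/YRO_BDX_POCSGLXPK10_01(1)
  [lvol1_rimage_0] vg_sglxpk10 /dev/mapper/YRO_LAC_POCSGLXPK10_02(0)
  [lvol1_rimage_1] vg_sglxpk10 /dev/mapper/YRO_BDX_POCSGLXPK10_02(1)
  [lvol1_rimage_1] vg_sglxpk10 /dev/mapper/YRO_LAC_POCSGLXPK10_01(0)
EOF
)
printf '%s\n' "$report"
```

Both legs of the sample data get flagged, which matches the problem described in this post.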
- Secondly, vgdisplay shows me the global allocation of physical extents, and it clearly shows that both volumes from the "BDX" site are full. So the mirroring is probably done between those 2 volumes for the first 2556 extents, and between the two volumes from the array located at "LAC" for the last 260 extents. Extract:
# vgdisplay -v vg_sglxpk10
--- Physical volumes ---
PV Name /dev/mapper/YRO_BDX_POCSGLXPK10_01
Total PE / Free PE 2556 / 0
PV Name /dev/mapper/YRO_BDX_POCSGLXPK10_02
Total PE / Free PE 2556 / 0
PV Name /dev/mapper/YRO_LAC_POCSGLXPK10_02
Total PE / Free PE 2556 / 2295
PV Name /dev/mapper/YRO_LAC_POCSGLXPK10_01
Total PE / Free PE 2556 / 2295
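That hypothesis could be confirmed with the standard `seg_pe_ranges` report field (`lvs -a -o lv_name,seg_pe_ranges`), which prints one `PV:start-end` entry per segment. A sketch that sums those ranges per leg and PV, reproducing the HP-UX-style "LE on PV" distribution view; note the here-document is hypothetical output shaped after the sizes in this thread, not a real capture:

```shell
# Sum segment sizes per (leg, PV) from entries of the form PV:start-end,
# as reported by "lvs -a -o lv_name,seg_pe_ranges".
dist=$(awk '
  $1 ~ /rimage/ {
    split($2, r, ":")          # r[1] = PV path, r[2] = "start-end"
    split(r[2], se, "-")
    pe[$1 " " r[1]] += se[2] - se[1] + 1
  }
  END { for (k in pe) print k, pe[k] }
' <<'EOF'
  [lvol1_rimage_0] /dev/mapper/YRO_BDX_POCSGLXPK10_01:1-2555
  [lvol1_rimage_0] /dev/mapper/YRO_LAC_POCSGLXPK10_02:0-260
  [lvol1_rimage_1] /dev/mapper/YRO_BDX_POCSGLXPK10_02:1-2555
  [lvol1_rimage_1] /dev/mapper/YRO_LAC_POCSGLXPK10_01:0-260
EOF
)
printf '%s\n' "$dist"
```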
In summary, I can say there is a problem, but I can't analyze it in further detail ... that's my problem. And the case presented here is rather simple; it could be more complex in "real-world" IT.
Eric
02-28-2017 05:02 AM
Re: Checking extent distribution of Logical Volumes against Physical Volumes
Since it is known where the data has to go, can't we specify the LUNs/PVs directly while creating the LV?
Hint:
[root@ansible-host ~]# lvcreate -v --extents 100 -n testlv testvg -m 1 --type raid1 /dev/sdf1 /dev/sdh1
Using volume group(s) on command line.
Archiving volume group "testvg" metadata (seqno 13).
Creating logical volume testlv
Creating logical volume testlv_rimage_0
Creating logical volume testlv_rmeta_0
Creating logical volume testlv_rimage_1
Creating logical volume testlv_rmeta_1
activation/volume_list configuration setting not defined: Checking only host tags for testvg/testlv_rmeta_0.
Creating testvg-testlv_rmeta_0
Loading testvg-testlv_rmeta_0 table (253:7)
Resuming testvg-testlv_rmeta_0 (253:7)
Clearing metadata area of testvg/testlv_rmeta_0
Initializing 512 B of logical volume "testvg/testlv_rmeta_0" with value 0.
Removing testvg-testlv_rmeta_0 (253:7)
activation/volume_list configuration setting not defined: Checking only host tags for testvg/testlv_rmeta_1.
Creating testvg-testlv_rmeta_1
Loading testvg-testlv_rmeta_1 table (253:7)
Resuming testvg-testlv_rmeta_1 (253:7)
Clearing metadata area of testvg/testlv_rmeta_1
Initializing 512 B of logical volume "testvg/testlv_rmeta_1" with value 0.
Removing testvg-testlv_rmeta_1 (253:7)
Creating volume group backup "/etc/lvm/backup/testvg" (seqno 15).
Activating logical volume "testlv" exclusively.
activation/volume_list configuration setting not defined: Checking only host tags for testvg/testlv.
Creating testvg-testlv_rmeta_0
Loading testvg-testlv_rmeta_0 table (253:7)
Resuming testvg-testlv_rmeta_0 (253:7)
Creating testvg-testlv_rimage_0
Loading testvg-testlv_rimage_0 table (253:8)
Resuming testvg-testlv_rimage_0 (253:8)
Creating testvg-testlv_rmeta_1
Loading testvg-testlv_rmeta_1 table (253:9)
Resuming testvg-testlv_rmeta_1 (253:9)
Creating testvg-testlv_rimage_1
Loading testvg-testlv_rimage_1 table (253:10)
Resuming testvg-testlv_rimage_1 (253:10)
Creating testvg-testlv
Loading testvg-testlv table (253:11)
Resuming testvg-testlv (253:11)
Monitoring testvg/testlv
Wiping known signatures on logical volume "testvg/testlv"
Initializing 4.00 KiB of logical volume "testvg/testlv" with value 0.
Logical volume "testlv" created.
[root@ansible-host ~]# pvdisplay -m /dev/sd{f,g,h,i}1
--- Physical volume ---
PV Name /dev/sdf1
VG Name testvg
PV Size 1023.00 MiB / not usable 3.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 255
Free PE 154
Allocated PE 101
PV UUID 4vjqDE-I8d8-Uo86-Kxiq-YoFh-Djys-gyfQuF
--- Physical Segments ---
Physical extent 0 to 0:
Logical volume /dev/testvg/testlv_rmeta_0
Logical extents 0 to 0
Physical extent 1 to 100:
Logical volume /dev/testvg/testlv_rimage_0
Logical extents 0 to 99
Physical extent 101 to 254:
FREE
--- Physical volume ---
PV Name /dev/sdg1
VG Name testvg
PV Size 1023.00 MiB / not usable 3.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 255
Free PE 255
Allocated PE 0
PV UUID sVkMc4-E8KT-GfYr-q0Nb-9BAd-cvUD-a7u7TW
--- Physical Segments ---
Physical extent 0 to 254:
FREE
--- Physical volume ---
PV Name /dev/sdh1
VG Name testvg
PV Size 1023.00 MiB / not usable 3.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 255
Free PE 154
Allocated PE 101
PV UUID 4K9iwh-hrfW-75Ce-x6nD-hEa2-FA1F-RcwDHQ
--- Physical Segments ---
Physical extent 0 to 0:
Logical volume /dev/testvg/testlv_rmeta_1
Logical extents 0 to 0
Physical extent 1 to 100:
Logical volume /dev/testvg/testlv_rimage_1
Logical extents 0 to 99
Physical extent 101 to 254:
FREE
--- Physical volume ---
PV Name /dev/sdi1
VG Name testvg
PV Size 1023.00 MiB / not usable 3.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 255
Free PE 255
Allocated PE 0
PV UUID AGTOs1-Mlba-favG-2xV9-bOzl-jVOm-3lmJXY
--- Physical Segments ---
Physical extent 0 to 254:
FREE
SimplyLinuxFAQ
03-19-2017 05:01 AM
Re: Checking extent distribution of Logical Volumes against Physical Volumes
Hi,
Sorry for the late answer.
I agree with you: if the job is done correctly at creation time, well ... the job is done.
But my problem mostly arises later on: what if the initial job was not done thoughtfully? I am not sure the existing tools are enough to analyze and correct such problems.
Eric
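On the correction side, one candidate tool is `pvmove`, which relocates allocated extents between PVs online; with `-n <lv>` the move is restricted to one LV's extents. A sketch only, built as a printed string because it can only run against the live VG, and whether it can untangle a given raid1 layout (for example when the preferred target PVs are full, as above) has to be checked case by case:

```shell
# Hypothetical repair step: move the extents of lvol1 that sit on a
# wrongly-chosen PV onto a PV of the other array. Device names are the
# ones from this thread; the command is only printed here, not run.
src_pv=/dev/mapper/YRO_BDX_POCSGLXPK10_02
dst_pv=/dev/mapper/YRO_LAC_POCSGLXPK10_01
cmd="pvmove -n lvol1 $src_pv $dst_pv"
printf '%s\n' "$cmd"
```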