
Expand logical volume

 
Yurub
Frequent Visitor

Expand logical volume

Hi,

We have an old system:

ia64 hp Integrity BL870c i4

HP-UX DB3 B.11.31 U ia64 ---- unlimited-user license

The storage structure was as follows: a RAID 1+0 array of 10 HDDs of 600 GB each, plus a spare HDD and one extra HDD left unassigned for later use.

The saconfig output was:

****-DB3#saconfig /dev/ciss1

******************** SmartArray RAID Controller /dev/ciss1 ********************

Auto-Fail Missing Disks at Boot     = enabled
Cache Configuration Status          = cache enabled
Cache Ratio                         = 25% Read / 75% Write

---------- PHYSICAL DRIVES ----------

Location  Ct Enc Bay       WWID           Size       Status

External  41   1   1  0x5000c5005832dc89  600.1 GB   OK
External  41   1   2  0x5000c5005832ea66  600.1 GB   OK
External  41   1   3  0x5000c50058329abd  600.1 GB   OK
External  41   1   4  0x5000c5005832a25e  600.1 GB   OK
External  41   1   5  0x5000c5005832dba9  600.1 GB   OK
External  41   1   6  0x5000c5005832f986  600.1 GB   OK
External  41   1   7  0x5000c5005832eea1  600.1 GB   OK
External  41   1   8  0x5000c50058328056  600.1 GB   OK
External  41   1   9  0x5000c5005832ed49  600.1 GB   OK
External  41   1  10  0x5000c5005832db4e  600.1 GB   OK
External  41   1  11  0x5000c5005832af7d  600.1 GB   SPARE
External  41   1  12  0x5000c50058328416  600.1 GB   UNASSIGNED

---------- LOGICAL DRIVE 0 ----------

Device File          = /dev/dsk/c4t0d0
RAID Level           = 1+0
Size                 = 2560000 MB
Stripe Size          = 128 KB
Status               = OK

Participating Physical Drive(s):

Ct  Enc  Bay         WWID
41    1    1  0x5000c5005832dc89
41    1    2  0x5000c5005832ea66
41    1    3  0x5000c50058329abd
41    1    4  0x5000c5005832a25e
41    1    5  0x5000c5005832dba9
41    1    6  0x5000c5005832f986
41    1    7  0x5000c5005832eea1
41    1    8  0x5000c50058328056
41    1    9  0x5000c5005832ed49
41    1   10  0x5000c5005832db4e

Participating Spare Drive(s):

Ct  Enc  Bay         WWID
41    1   11  0x5000c5005832af7d

We also had a volume group, vgora, built on this logical drive:

VG Name                     /dev/vgora
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      3
Open LV                     3
Max PV                      12
Cur PV                      1
Act PV                      1
Max PE per PV               33276
VGDA                        2
PE Size (Mbytes)            64
Total PE                    32767
Alloc PE                    32738
Free PE                     29
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 24957g
VG Max Extents              399312

   --- Logical volumes ---
   LV Name                     /dev/vgora/lvol1
   LV Status                   available/syncd
   LV Size (Mbytes)            60032
   Current LE                  938
   Allocated PE                938
   Used PV                     1

   LV Name                     /dev/vgora/lvol2
   LV Status                   available/syncd
   LV Size (Mbytes)            1766400
   Current LE                  27600
   Allocated PE                27600
   Used PV                     1

   LV Name                     /dev/vgora/lvol3
   LV Status                   available/syncd
   LV Size (Mbytes)            268800
   Current LE                  4200
   Allocated PE                4200
   Used PV                     1


   --- Physical volumes ---
   PV Name                     /dev/disk/disk4
   PV Status                   available
   Total PE                    32767
   Free PE                     29
   Autoswitch                  On
   Proactive Polling           On

I ignored the output related to vg00, because we want to expand lvol2, which is used for /database.

We added the spare and the UNASSIGNED drive to the array:

****-DB3#saconfig /dev/ciss1

******************** SmartArray RAID Controller /dev/ciss1 ********************

Auto-Fail Missing Disks at Boot     = enabled
Cache Configuration Status          = cache enabled
Cache Ratio                         = 25% Read / 75% Write

---------- PHYSICAL DRIVES ----------

Location  Ct Enc Bay       WWID           Size       Status

External  41   1   1  0x5000c5005832dc89  600.1 GB   OK
External  41   1   2  0x5000c5005832ea66  600.1 GB   OK
External  41   1   3  0x5000c50058329abd  600.1 GB   OK
External  41   1   4  0x5000c5005832a25e  600.1 GB   OK
External  41   1   5  0x5000c5005832dba9  600.1 GB   OK
External  41   1   6  0x5000c5005832f986  600.1 GB   OK
External  41   1   7  0x5000c5005832eea1  600.1 GB   OK
External  41   1   8  0x5000c50058328056  600.1 GB   OK
External  41   1   9  0x5000c5005832ed49  600.1 GB   OK
External  41   1  10  0x5000c5005832db4e  600.1 GB   OK
External  41   1  11  0x5000c5005832af7d  600.1 GB   OK
External  41   1  12  0x5000c50058328416  600.1 GB   OK

---------- LOGICAL DRIVE 0 ----------

Device File          = /dev/dsk/c4t0d0
RAID Level           = 1+0
Size                 = 3072000 MB
Stripe Size          = 128 KB
Status               = OK

Participating Physical Drive(s):

Ct  Enc  Bay         WWID
41    1    1  0x5000c5005832dc89
41    1    2  0x5000c5005832ea66
41    1    3  0x5000c50058329abd
41    1    4  0x5000c5005832a25e
41    1    5  0x5000c5005832dba9
41    1    6  0x5000c5005832f986
41    1    7  0x5000c5005832eea1
41    1    8  0x5000c50058328056
41    1    9  0x5000c5005832ed49
41    1   10  0x5000c5005832db4e
41    1   11  0x5000c5005832af7d
41    1   12  0x5000c50058328416

Participating Spare Drive(s):

None

But note that we had already used the free PEs before the expansion of the RAID drive; the physical volume at that time showed:

   --- Physical volumes ---
   PV Name                     /dev/disk/disk4
   PV Status                   available
   Total PE                    32767
   Free PE                     2681
   Autoswitch                  On
   Proactive Polling           On

Now the problem:

We see that we have 3 TB on disk4. Note that I cannot run diskinfo on /dev/disk/disk4 (it says "Character device required"), so I used /dev/rdisk/disk4:

****-DB3#diskinfo /dev/rdisk/disk4
SCSI describe of /dev/rdisk/disk4:
             vendor: HP
         product id: LOGICAL VOLUME
               type: direct access
               size: 3145728000 Kbytes
   bytes per sector: 512

But vgdisplay says:

****-DB3#vgdisplay vgora
--- Volume groups ---
VG Name                     /dev/vgora
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      3
Open LV                     3
Max PV                      6
Cur PV                      1
Act PV                      1
Max PE per PV               64000
VGDA                        2
PE Size (Mbytes)            64
Total PE                    32767
Alloc PE                    32740
Free PE                     27
Total PVG                   1
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 24000g
VG Max Extents              384000

That means we have a total of 32767 PEs of 64 MB each, i.e. 2097088 MB, which is about 2 TB, while diskinfo reports 3 TB for /dev/rdisk/disk4.

I tried many commands such as ioscan, insf and vgreduce, but I am stuck here.

Sometimes I get errors saying that one device file is a character device and the other is a block device, and I don't know what to do.

In short: we cannot make the volume group aware of the expanded RAID drive.

Please help.
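
For reference, my understanding of which device file each command expects (the disk4 paths below are the ones from this system; please correct me if this summary is off):

# diskinfo /dev/rdisk/disk4      <== diskinfo wants the raw (character) device file
# pvdisplay /dev/disk/disk4      <== LVM commands such as pvdisplay want the block device file
# vgdisplay -v /dev/vgora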

 

georgek_1
HPE Pro
Solution

Re: Expand logical volume

Hello Yurub,

You need to use the vgmodify command to make use of the space that was added to the disk by adding new physical disks to the logical drive.
When a LUN is grown dynamically (DLE, Dynamic LUN Expansion, done at the storage end - here at the RAID controller), vgmodify must be run so that LVM can access the new space.
Once the volume group has been adjusted by vgmodify, the new space can be allocated using the normal LVM methods, lvextend or lvcreate.

You may first run # vgmodify -v -r -a -E vg_name (review mode) and make sure it shows the new disk size in its output.
Then run the same command without -r to make the actual changes.
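
For your volume group, that would look roughly like this (a sketch only; vgora is the VG from your output, and the review output should be checked before applying anything):

# vgmodify -v -r -a -E /dev/vgora     <== review mode, no changes made
# vgmodify -v -a -E /dev/vgora        <== apply, once the review shows the new disk size
# vgdisplay -v /dev/vgora             <== confirm Total PE / Free PE have increased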

For more information, you may refer to the white paper "Using the vgmodify command to perform LVM Volume Group Dynamic LUN Expansion (DLE) and Contraction (DLC)", section "Dynamic LUN expansion (DLE) and dynamic LUN contraction (DLC)", page 6 onwards:

https://community.hpe.com/hpeb/attachments/hpeb/itrc-156/377793/1/356848.pdf

Please make sure you have a good data backup before making any changes.

I work for HPE/ I am an HPE Employee (HPE Community)




Yurub
Frequent Visitor

Re: Query: Expand logical volume

Unfortunately, it did not work:

****-DB3#vgmodify -v -r -a -E vgora
Volume Group configuration for /dev/vgora has been saved in /etc/lvmconf/vgora.conf
An update to the Volume Group is NOT required
Review complete. Volume group not modified

I also cannot verify the size via /dev/disk/disk4:

****-DB3#diskinfo /dev/disk/disk4
diskinfo: Character device required

But I can see it via /dev/rdisk/disk4:

****-DB3#diskinfo /dev/rdisk/disk4
SCSI describe of /dev/rdisk/disk4:
             vendor: HP
         product id: LOGICAL VOLUME
               type: direct access
               size: 3145728000 Kbytes
   bytes per sector: 512

 

georgek_1
HPE Pro

Re: Query: Expand logical volume

Hello Yurub,

 

I had a look at the configuration of the VG, and it seems it is hitting the "Max PV Size (Tbytes)" limit of 2 TB for a version 1.0 volume group.
You can check the limits of the different VG versions using # lvmadm -t. Below are the limits for VG version 1.0, which is what you have.

# lvmadm -t
--- LVM Limits ---
VG Version                  1.0
Max VG Size (Tbytes)        510
Max LV Size (Tbytes)        16
Max PV Size (Tbytes)        2    =====>
Max VGs                     256
Max LVs                     255
Max PVs                     255
Max Mirrors                 2
Max Stripes                 255
Max Stripe Size (Kbytes)    32768
Max LXs per LV              65535
Max PXs per PV              65535
Max Extent Size (Mbytes)    256

As per the table, the maximum size a physical volume can have is 2 TB.
The initial size of the logical drive (before adding the new disks) was 2560000 MB = 2500 GB, which was already above the limit.
After adding those 2 disks, the new size is 3072000 MB, about 3000 GB, which cannot be used in a VG with version 1.0.

****-DB3#vgdisplay vgora
--- Volume groups ---
VG Name                     /dev/vgora
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      3
Open LV                     3
Max PV                      6
Cur PV                      1
Act PV                      1
...
Total PVG                   1
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0  ==>
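
For comparison, the 2.x limits can be listed with the same command; if I remember the syntax correctly, -V restricts the output to a single version (please verify with # man lvmadm):

# lvmadm -t -V 2.2     <== shows the limits for version 2.2 only (Max PV Size goes up to 16 TB for 2.x)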

 

You have 2 options available:

1) Migrate the VG version from 1.0 to 2.x, which accommodates disks of up to 16 TB, and then use the new size with the help of vgmodify as shared earlier (a review-mode sketch follows below), or
2) Take a data backup and recreate the array so that there are multiple logical drives, each smaller than 2 TB, so that all of these disks together can be used in a version 1.0 VG and all of the space is usable.

You may refer to # man vgversion for more information. Also make sure you have a good data backup before making any changes.
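
A quick way to check option 1 up front is vgversion review mode; this is only a sketch and assumes your 11.31 release supports version 2.2 (review mode should not change anything, but please verify with # man vgversion):

# vgversion -r -v -V 2.2 /dev/vgora     <== review only, reports whether the migration is possible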

 

I work for HPE/ I am an HPE Employee (HPE Community) 



Yurub
Frequent Visitor

Re: Query: Expand logical volume

How can I migrate to version 2.x? I cannot find the vgconvert command.

While searching on Google I found that it should have been installed with patches earlier than 2008. I have this bundle, but I don't have vgconvert!

 FEATURE11i                            B.11.31.1403.401a Feature Enablement Patches for HP-UX 11i v3, March 2014

Please help

georgek_1
HPE Pro

Re: Query: Expand logical volume

Hello Yurub,

 

The command is vgversion, which will help you migrate a VG from version 1.0 to 2.x. There are multiple versions available, such as 2.0, 2.1 and 2.2.

For example, to change the version of a volume group named /dev/vg01 to version 2.2 and be verbose:

           vgversion -v -V 2.2 /dev/vg01

 

Please refer to # man vgversion for more details and steps. Also refer to # lvmadm -t for the limits of each version.
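
Once you decide to migrate, a rough outline of the overall sequence might look like the one below. This is a sketch only: my recollection is that the actual migration requires the VG to be deactivated (review mode does not), /database and lvol2 come from your first post, the sizes are placeholders, and the online fsadm resize assumes OnlineJFS is installed (otherwise grow the unmounted filesystem with extendfs). Please verify each step against the man pages and take a backup first.

# umount /database                         <== unmount the filesystems in vgora
# vgchange -a n /dev/vgora                 <== deactivate the VG for the migration
# vgversion -r -v -V 2.2 /dev/vgora        <== review first
# vgversion -v -V 2.2 /dev/vgora           <== actual migration
# vgchange -a y /dev/vgora                 <== reactivate
# vgmodify -v -r -a -E /dev/vgora          <== review, then repeat without -r so LVM picks up the new space
# lvextend -L <new_size_MB> /dev/vgora/lvol2
# mount /database
# fsadm -F vxfs -b <new_size_KB> /database <== online resize (OnlineJFS)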

 

I work for HPE/ I am an HPE Employee (HPE Community)

 



Yurub
Frequent Visitor

Re: Query: Expand logical volume

Many Thanks

****-DB3#vgdisplay vgora
--- Volume groups ---
VG Name                     /dev/vgora
VG Write Access             read/write
VG Status                   available
Max LV                      2047
Cur LV                      3
Open LV                     3
Cur Snapshot LV             0
Max PV                      2048
Cur PV                      1
Act PV                      1
Max PE per PV               262144
VGDA                        2
PE Size (Mbytes)            64
Unshare unit size (Kbytes)  1024
Total PE                    47999
Alloc PE                    40138
Current pre-allocated PE    0
Free PE                     7861
Total PVG                   1
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  2.2
VG Max Size                 24000g
VG Max Extents              384000
Cur Snapshot Capacity       0p
Max Snapshot Capacity       24000g

I have expanded some volumes and still have about 500 GB left for future use.

thank you

Yurub
Frequent Visitor

Re: Query: Expand logical volume

Thank you again.

But I wonder: if I have 12 disks of 600 GB in RAID 1+0, why do I have 3 TB and not 3.6 TB?

 

georgek_1
HPE Pro

Re: Query: Expand logical volume

Hello Yurub,

It seems like about 50 GB per disk is not exposed by the controller and is thus not available for user data (note that RAID 1+0 uses only mirroring, not parity).
The size reported when the RAID had 10 disks was 2560000 MB.

Ideally, the size should be 600 GB (disk size) * 10 (number of disks) * 1024 (to MB) / 2 (RAID 1+0 mirroring) = 3072000 MB.

But the size available was 2560000 MB, as shown below:

---------- LOGICAL DRIVE 0 ----------
Device File          = /dev/dsk/c4t0d0
RAID Level           = 1+0
Size                 = 2560000 MB  ==>
Stripe Size          = 128 KB
Status               = OK

Thus almost 512000 MB (3072000 - 2560000), about 500 GB, was missing when the RAID had 10 disks configured.
So per disk that is about 50 GB (500 GB / 10 disks) that is not usable for data.

When you added the 2 new disks, the ideal size would be 600 * 12 * 1024 / 2 = 3686400 MB, about 3600 GB.

But the current size is 3072000 MB:
---------- LOGICAL DRIVE 0 ----------

Device File          = /dev/dsk/c4t0d0
RAID Level           = 1+0
Size                 = 3072000 MB  ==> 
Stripe Size          = 128 KB
Status               = OK

So the missing size is again 3686400 - 3072000 = 614400 MB, about 600 GB.
That is simply the previous missing 500 GB plus another 100 GB: roughly 50 GB per disk, so 100 GB for the 2 new disks.

Hope this explains it.

 

I work for HPE/ I am an HPE Employee (HPE Community)


