<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Query: Expand logical volume in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234025#M948992</link>
    <description>&lt;P style="margin: 0;"&gt;Hello,&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;Let us know if you were able to resolve the issue.&lt;BR /&gt;&lt;BR /&gt;If you are satisfied with the answers then kindly click the "Accept As Solution" button for the most helpful response so that it is beneficial to all community members.&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;Please click on "Thumbs Up/Kudo" icon to give a "Kudo".&lt;/P&gt;</description>
    <pubDate>Wed, 29 Jan 2025 11:36:51 GMT</pubDate>
    <dc:creator>support_s</dc:creator>
    <dc:date>2025-01-29T11:36:51Z</dc:date>
    <item>
      <title>Expand logical volume</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7233639#M948988</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;We have an old system:&lt;/P&gt;&lt;P&gt;ia64 hp Integrity BL870c i4&lt;/P&gt;&lt;P&gt;HP-UX DB3 B.11.31 U ia64 ---- unlimited-user license&lt;/P&gt;&lt;P&gt;The storage structure was as follows:&lt;/P&gt;&lt;P&gt;RAID 1+0 with 10 HDDs of 600 GB each, plus a spare HDD and one additional HDD for later use.&lt;/P&gt;&lt;P&gt;The saconfig output was:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;****-DB3#saconfig /dev/ciss1

******************** SmartArray RAID Controller /dev/ciss1 ********************

Auto-Fail Missing Disks at Boot     = enabled
Cache Configuration Status          = cache enabled
Cache Ratio                         = 25% Read / 75% Write

---------- PHYSICAL DRIVES ----------

Location  Ct Enc Bay       WWID           Size       Status

External  41   1   1  0x5000c5005832dc89  600.1 GB   OK
External  41   1   2  0x5000c5005832ea66  600.1 GB   OK
External  41   1   3  0x5000c50058329abd  600.1 GB   OK
External  41   1   4  0x5000c5005832a25e  600.1 GB   OK
External  41   1   5  0x5000c5005832dba9  600.1 GB   OK
External  41   1   6  0x5000c5005832f986  600.1 GB   OK
External  41   1   7  0x5000c5005832eea1  600.1 GB   OK
External  41   1   8  0x5000c50058328056  600.1 GB   OK
External  41   1   9  0x5000c5005832ed49  600.1 GB   OK
External  41   1  10  0x5000c5005832db4e  600.1 GB   OK
External  41   1  11  0x5000c5005832af7d  600.1 GB   SPARE
External  41   1  12  0x5000c50058328416  600.1 GB   UNASSIGNED

---------- LOGICAL DRIVE 0 ----------

Device File          = /dev/dsk/c4t0d0
RAID Level           = 1+0
Size                 = 2560000 MB
Stripe Size          = 128 KB
Status               = OK

Participating Physical Drive(s):

Ct  Enc  Bay         WWID
41    1    1  0x5000c5005832dc89
41    1    2  0x5000c5005832ea66
41    1    3  0x5000c50058329abd
41    1    4  0x5000c5005832a25e
41    1    5  0x5000c5005832dba9
41    1    6  0x5000c5005832f986
41    1    7  0x5000c5005832eea1
41    1    8  0x5000c50058328056
41    1    9  0x5000c5005832ed49
41    1   10  0x5000c5005832db4e

Participating Spare Drive(s):

Ct  Enc  Bay         WWID
41    1   11  0x5000c5005832af7d&lt;/LI-CODE&gt;&lt;P&gt;We also had a volume group vgora built on this disk:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;VG Name                     /dev/vgora
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      3
Open LV                     3
Max PV                      12
Cur PV                      1
Act PV                      1
Max PE per PV               33276
VGDA                        2
PE Size (Mbytes)            64
Total PE                    32767
Alloc PE                    32738
Free PE                     29
Total PVG                   0
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 24957g
VG Max Extents              399312

   --- Logical volumes ---
   LV Name                     /dev/vgora/lvol1
   LV Status                   available/syncd
   LV Size (Mbytes)            60032
   Current LE                  938
   Allocated PE                938
   Used PV                     1

   LV Name                     /dev/vgora/lvol2
   LV Status                   available/syncd
   LV Size (Mbytes)            1766400
   Current LE                  27600
   Allocated PE                27600
   Used PV                     1

   LV Name                     /dev/vgora/lvol3
   LV Status                   available/syncd
   LV Size (Mbytes)            268800
   Current LE                  4200
   Allocated PE                4200
   Used PV                     1


   --- Physical volumes ---
   PV Name                     /dev/disk/disk4
   PV Status                   available
   Total PE                    32767
   Free PE                     29
   Autoswitch                  On
   Proactive Polling           On&lt;/LI-CODE&gt;&lt;P&gt;I ignored the output related to vg00 because we want to expand lvol2, which holds /database.&lt;/P&gt;&lt;P&gt;We added the spare and the UNASSIGNED drive to the array:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;****-DB3#saconfig /dev/ciss1

******************** SmartArray RAID Controller /dev/ciss1 ********************

Auto-Fail Missing Disks at Boot     = enabled
Cache Configuration Status          = cache enabled
Cache Ratio                         = 25% Read / 75% Write

---------- PHYSICAL DRIVES ----------

Location  Ct Enc Bay       WWID           Size       Status

External  41   1   1  0x5000c5005832dc89  600.1 GB   OK
External  41   1   2  0x5000c5005832ea66  600.1 GB   OK
External  41   1   3  0x5000c50058329abd  600.1 GB   OK
External  41   1   4  0x5000c5005832a25e  600.1 GB   OK
External  41   1   5  0x5000c5005832dba9  600.1 GB   OK
External  41   1   6  0x5000c5005832f986  600.1 GB   OK
External  41   1   7  0x5000c5005832eea1  600.1 GB   OK
External  41   1   8  0x5000c50058328056  600.1 GB   OK
External  41   1   9  0x5000c5005832ed49  600.1 GB   OK
External  41   1  10  0x5000c5005832db4e  600.1 GB   OK
External  41   1  11  0x5000c5005832af7d  600.1 GB   OK
External  41   1  12  0x5000c50058328416  600.1 GB   OK

---------- LOGICAL DRIVE 0 ----------

Device File          = /dev/dsk/c4t0d0
RAID Level           = 1+0
Size                 = 3072000 MB
Stripe Size          = 128 KB
Status               = OK

Participating Physical Drive(s):

Ct  Enc  Bay         WWID
41    1    1  0x5000c5005832dc89
41    1    2  0x5000c5005832ea66
41    1    3  0x5000c50058329abd
41    1    4  0x5000c5005832a25e
41    1    5  0x5000c5005832dba9
41    1    6  0x5000c5005832f986
41    1    7  0x5000c5005832eea1
41    1    8  0x5000c50058328056
41    1    9  0x5000c5005832ed49
41    1   10  0x5000c5005832db4e
41    1   11  0x5000c5005832af7d
41    1   12  0x5000c50058328416

Participating Spare Drive(s):

None&lt;/LI-CODE&gt;&lt;P&gt;Note that we had used the free PEs before the expansion of the RAID drive, which were:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;   --- Physical volumes ---
   PV Name                     /dev/disk/disk4
   PV Status                   available
   Total PE                    32767
   Free PE                     2681
   Autoswitch                  On
   Proactive Polling           On&lt;/LI-CODE&gt;&lt;P&gt;Now the problem: we see that we have 3 TB on disk4. Note that I cannot run diskinfo on /dev/disk/disk4 (it says "Character device required"), so I used /dev/rdisk/disk4:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;****-DB3#diskinfo /dev/rdisk/disk4
SCSI describe of /dev/rdisk/disk4:
             vendor: HP
         product id: LOGICAL VOLUME
               type: direct access
               size: 3145728000 Kbytes
   bytes per sector: 512&lt;/LI-CODE&gt;&lt;P&gt;But vgdisplay says:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;****-DB3#vgdisplay vgora
--- Volume groups ---
VG Name                     /dev/vgora
VG Write Access             read/write
VG Status                   available
Max LV                      255
Cur LV                      3
Open LV                     3
Max PV                      6
Cur PV                      1
Act PV                      1
Max PE per PV               64000
VGDA                        2
PE Size (Mbytes)            64
Total PE                    32767
Alloc PE                    32740
Free PE                     27
Total PVG                   1
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  1.0
VG Max Size                 24000g
VG Max Extents              384000&lt;/LI-CODE&gt;&lt;P&gt;That means we have a total of 32767 PEs of 64 MB each, i.e. 2097088 MB, which is about 2 TB, while diskinfo reports 3 TB for /dev/rdisk/disk4.&lt;/P&gt;&lt;P&gt;I tried many commands such as ioscan, insf, and vgreduce, but I am stuck here.&lt;/P&gt;&lt;P&gt;Sometimes I am told that one device is a character device and another is a block device, and I do not know what to do.&lt;/P&gt;&lt;P&gt;In short: we cannot make the volume group aware of the RAID drive's expansion.&lt;/P&gt;&lt;P&gt;Please help.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 07 Feb 2025 03:23:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7233639#M948988</guid>
      <dc:creator>Yurub</dc:creator>
      <dc:date>2025-02-07T03:23:21Z</dc:date>
    </item>
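The mismatch the poster describes (LVM seeing about 2 TB while diskinfo reports 3 TB) can be reproduced with a little POSIX-shell arithmetic over the numbers quoted in the post; this is only a back-of-envelope sketch, not an HP-UX command.

```shell
# Back-of-envelope check of the figures quoted in the post:
# what LVM sees (Total PE x PE size) vs. what diskinfo reports.
pe_total=32767                    # Total PE from vgdisplay
pe_size_mb=64                     # PE Size (Mbytes) from vgdisplay
vg_mb=$((pe_total * pe_size_mb))  # capacity visible to the VG
disk_kb=3145728000                # size from diskinfo /dev/rdisk/disk4
disk_mb=$((disk_kb / 1024))
echo "LVM sees ${vg_mb} MB; diskinfo reports ${disk_mb} MB"
# prints: LVM sees 2097088 MB; diskinfo reports 3072000 MB
```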
    <item>
      <title>Re: Expand logical volume</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7233917#M948991</link>
      <description>&lt;P dir="auto" style="margin: 0;"&gt;Hello Yurub,&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;You need to use the command "vgmodify" to use the added to the disk, by adding new disks to the logical drive .&lt;BR /&gt;When a LUN is dynamically grown (DLE - Dynamic Lun Expansion - at the storage end , here at RAID) , vgmodify should be used to allow LVM to access this new space.&amp;nbsp;&lt;BR /&gt;Once the volume group has been adjusted by vgmodify the new space can be allocated using the normal LVM method by lvextend or lvcreate.&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;You may need to use the command # vgmodify -v -r -a -E vg_name --&amp;gt; review mode , make sure that it is showing new disk size in output .&lt;BR /&gt;Now run without -r to make actual changes .&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;You may refer the White paper "Using the vgmodify command to perform LVM Volume Group Dynamic LUN Expansion (DLE) and Contraction (DLC)" from URL given below for more informaiton .&lt;BR /&gt;Section "Dynamic LUN expansion (DLE) and dynamic LUN contraction (DLC)" , page #6 onwards .&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;&lt;A href="https://community.hpe.com/hpeb/attachments/hpeb/itrc-156/377793/1/356848.pdf" target="_blank"&gt;https://community.hpe.com/hpeb/attachments/hpeb/itrc-156/377793/1/356848.pdf&lt;/A&gt;&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;Please make sure you have good data backup before making any changes .&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;I work for HPE/ I am an HPE Employee (HPE Community)&lt;/P&gt;</description>
      <pubDate>Tue, 28 Jan 2025 04:39:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7233917#M948991</guid>
      <dc:creator>georgek_1</dc:creator>
      <dc:date>2025-01-28T04:39:29Z</dc:date>
    </item>
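The review-then-commit flow the reply describes can be sketched as below. This is a hedged outline only: vgora is the poster's VG name, the flags are the ones quoted in the reply, and on systems without HP-UX LVM the script merely reports that vgmodify is absent.

```shell
# Sketch of the DLE flow from the reply (assumes HP-UX LVM and VG "vgora").
if command -v vgmodify >/dev/null 2>&1; then
    vgmodify -v -r -a -E vgora   # review mode: confirm the new size appears
    vgmodify -v -a -E vgora      # commit; afterwards lvextend can use the space
    msg="vgmodify executed"
else
    msg="vgmodify not available (HP-UX only)"
fi
echo "$msg"
```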
    <item>
      <title>Query: Expand logical volume</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234025#M948992</link>
      <description>&lt;P style="margin: 0;"&gt;Hello,&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;Let us know if you were able to resolve the issue.&lt;BR /&gt;&lt;BR /&gt;If you are satisfied with the answers then kindly click the "Accept As Solution" button for the most helpful response so that it is beneficial to all community members.&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;Please click on "Thumbs Up/Kudo" icon to give a "Kudo".&lt;/P&gt;</description>
      <pubDate>Wed, 29 Jan 2025 11:36:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234025#M948992</guid>
      <dc:creator>support_s</dc:creator>
      <dc:date>2025-01-29T11:36:51Z</dc:date>
    </item>
    <item>
      <title>Re: Query: Expand logical volume</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234273#M948994</link>
      <description>&lt;P&gt;Unfortunately, it did not work:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;****-DB3#vgmodify -v -r -a -E vgora
Volume Group configuration for /dev/vgora has been saved in /etc/lvmconf/vgora.conf
An update to the Volume Group is NOT required
Review complete. Volume group not modified&lt;/LI-CODE&gt;&lt;P&gt;I also cannot verify the size of /dev/disk/disk4:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;****-DB3#diskinfo /dev/disk/disk4
diskinfo: Character device required&lt;/LI-CODE&gt;&lt;P&gt;but I can see /dev/&lt;STRONG&gt;&lt;FONT color="#FF0000"&gt;r&lt;/FONT&gt;&lt;/STRONG&gt;disk/disk4:&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;****-DB3#diskinfo /dev/rdisk/disk4
SCSI describe of /dev/rdisk/disk4:
             vendor: HP
         product id: LOGICAL VOLUME
               type: direct access
               size: 3145728000 Kbytes
   bytes per sector: 512&lt;/LI-CODE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 03 Feb 2025 09:23:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234273#M948994</guid>
      <dc:creator>Yurub</dc:creator>
      <dc:date>2025-02-03T09:23:42Z</dc:date>
    </item>
    <item>
      <title>Re: Query: Expand logical volume</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234312#M948995</link>
      <description>&lt;P dir="auto" style="margin: 0;"&gt;Hello Yurub,&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;I had a look at the configuration of the vg and it seems it is hitting a limit "Max PV Size (Tbytes)" of 2TB for vg with version 1 .&lt;BR /&gt;You could check the limits of different vg versions using # lvmadm -t . Below are the limits for vg version 1.0 which you have .&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;# lvmadm -t&lt;BR /&gt;--- LVM Limits ---&lt;BR /&gt;VG Version &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;1.0&lt;BR /&gt;Max VG Size (Tbytes) &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;510&lt;BR /&gt;Max LV Size (Tbytes) &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;16&lt;BR /&gt;Max PV Size (Tbytes) &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;2 &amp;nbsp; &amp;nbsp;=====&amp;gt;&lt;BR /&gt;Max VGs &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 256&lt;BR /&gt;Max LVs &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 255&lt;BR /&gt;Max PVs &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 255&lt;BR /&gt;Max Mirrors &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 2&lt;BR /&gt;Max Stripes &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 255&lt;BR /&gt;Max Stripe Size (Kbytes) &amp;nbsp; &amp;nbsp;32768&lt;BR /&gt;Max LXs per LV &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;65535&lt;BR /&gt;Max PXs per PV &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;65535&lt;BR /&gt;Max Extent Size (Mbytes) &amp;nbsp; &amp;nbsp;256&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;As per the table , the max size a disk can have is 2 TB .&lt;BR /&gt;The initial size of the disk (before adding new disks) was 2560000MB = 2500GB , which itself was above the limit .&lt;BR /&gt;After adding those 2 disks , the new size is 3072000 &amp;nbsp;, ~3000GB which cannot be used in vg with version 1.0 .&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;****-DB3#vgdisplay vgora&lt;BR /&gt;--- Volume groups ---&lt;BR /&gt;VG Name &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; /dev/vgora&lt;BR /&gt;VG Write Access &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; read/write&lt;BR /&gt;VG Status &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; available&lt;BR /&gt;Max LV &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;255&lt;BR /&gt;Cur LV &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;3&lt;BR /&gt;Open LV &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 3&lt;BR /&gt;Max PV &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;6&lt;BR /&gt;Cur PV &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;1&lt;BR /&gt;Act PV &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;1&lt;BR /&gt;...&lt;BR /&gt;Total PVG &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 1&lt;BR /&gt;Total Spare PVs &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; 0&lt;BR /&gt;Total Spare PVs in use &amp;nbsp; &amp;nbsp; &amp;nbsp;0&lt;BR /&gt;VG Version &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;1.0 &amp;nbsp;==&amp;gt;&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;You have 2 options available ,&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;1)migrate the vg version from 1.0 to 2.x which will accommodate disks with higher size up to 16 TB , use the new size with help of vgmodify as shared earlier or&amp;nbsp; 2) take data backup , recreate the array in such a way that there would be multiple logical volumes with size less than 2 tb so that all of these disks&amp;nbsp; together can use be used make vg with version &amp;nbsp;1.0 and use all of the space .&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;You may refer #man vgversion for more information&amp;nbsp; also make sure you have a good data backup before making any changes.&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;I work for HPE/ I am an HPE Employee (HPE Community)&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 03 Feb 2025 18:43:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234312#M948995</guid>
      <dc:creator>georgek_1</dc:creator>
      <dc:date>2025-02-03T18:43:48Z</dc:date>
    </item>
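The 2 TB ceiling quoted from the lvmadm table can be checked against both logical-drive sizes seen in the thread; a small sketch, assuming the limit means 2 TiB (2*1024*1024 MB).

```shell
# Compare both LUN sizes from the thread with the v1.0 per-PV ceiling.
limit_mb=$((2 * 1024 * 1024))    # 2 TB limit, assumed to mean 2 TiB
old_mb=2560000                   # logical drive with 10 data disks
new_mb=3072000                   # logical drive after adding 2 disks
for size_mb in $old_mb $new_mb; do
    [ "$size_mb" -gt "$limit_mb" ] && echo "${size_mb} MB exceeds ${limit_mb} MB"
done
```

Both sizes exceed the ceiling, which is why vgmodify reported nothing to do: even the original 2500 GB LUN was already clamped by the version 1.0 limit.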
    <item>
      <title>Re: Query: Expand logical volume</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234345#M948996</link>
      <description>&lt;P&gt;How can I migrate to version 2.0? I cannot find the command vgconvert.&lt;/P&gt;&lt;P&gt;While searching on Google I found that it should have been installed with patches earlier than 2008. I have this bundle, but I do not have &lt;FONT color="#FF6600"&gt;vgconvert&lt;/FONT&gt;!&lt;/P&gt;&lt;LI-CODE lang="markup"&gt; FEATURE11i                            B.11.31.1403.401a Feature Enablement Patches for HP-UX 11i v3, March 2014&lt;/LI-CODE&gt;&lt;P&gt;Please help.&lt;/P&gt;</description>
      <pubDate>Tue, 04 Feb 2025 07:28:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234345#M948996</guid>
      <dc:creator>Yurub</dc:creator>
      <dc:date>2025-02-04T07:28:13Z</dc:date>
    </item>
    <item>
      <title>Re: Query: Expand logical volume</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234365#M948997</link>
      <description>&lt;P dir="auto" style="margin: 0;"&gt;Hello Yurub,&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;The command is vgversion , which will help you migrate vg with version 1.0 to 2.x . There are multiple versions available such as 2.0 / 2.1 and 2.2 .&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;For example ,&amp;nbsp;&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; Change the version of a volume group named /dev/vg01 to volume group&lt;BR /&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; version 2.2 and be verbose:&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;&amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;vgversion -v -V 2.2 /dev/vg01&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;Please refer # man vgversion for more details / steps . also refer #lanadm -t for limits of each version .&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;I work for HPE/ I am an HPE Employee (HPE Community)&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 04 Feb 2025 09:40:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234365#M948997</guid>
      <dc:creator>georgek_1</dc:creator>
      <dc:date>2025-02-04T09:40:09Z</dc:date>
    </item>
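The migration step the reply describes can be sketched as below; a hedged outline: the -r pass mirrors vgversion's review mode, /dev/vgora is the poster's VG, and on systems without vgversion the script only reports its absence. As the replies stress, take a data backup first.

```shell
# Sketch of the vgversion migration from the reply (HP-UX only).
if command -v vgversion >/dev/null 2>&1; then
    vgversion -v -r -V 2.2 /dev/vgora   # review mode: check feasibility first
    vgversion -v -V 2.2 /dev/vgora      # perform the 1.0 -> 2.2 migration
    msg="vgversion executed"
else
    msg="vgversion not available (HP-UX only)"
fi
echo "$msg"
```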
    <item>
      <title>Re: Query: Expand logical volume</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234407#M948998</link>
      <description>&lt;P&gt;Many Thanks&lt;/P&gt;&lt;LI-CODE lang="markup"&gt;****-DB3#vgdisplay vgora
--- Volume groups ---
VG Name                     /dev/vgora
VG Write Access             read/write
VG Status                   available
Max LV                      2047
Cur LV                      3
Open LV                     3
Cur Snapshot LV             0
Max PV                      2048
Cur PV                      1
Act PV                      1
Max PE per PV               262144
VGDA                        2
PE Size (Mbytes)            64
Unshare unit size (Kbytes)  1024
Total PE                    47999
Alloc PE                    40138
Current pre-allocated PE    0
Free PE                     7861
Total PVG                   1
Total Spare PVs             0
Total Spare PVs in use      0
VG Version                  2.2
VG Max Size                 24000g
VG Max Extents              384000
Cur Snapshot Capacity       0p
Max Snapshot Capacity       24000g&lt;/LI-CODE&gt;&lt;P&gt;I have expanded some volumes and still have about 500 GB for future use.&lt;/P&gt;&lt;P&gt;Thank you.&lt;/P&gt;
      <pubDate>Tue, 04 Feb 2025 22:02:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234407#M948998</guid>
      <dc:creator>Yurub</dc:creator>
      <dc:date>2025-02-04T22:02:47Z</dc:date>
    </item>
    <item>
      <title>Re: Query: Expand logical volume</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234412#M948999</link>
      <description>&lt;P&gt;Thank you again.&lt;/P&gt;&lt;P&gt;But I wonder: if I have 12 disks of 600 GB in RAID 10, why do I have 3 TB and not 3.6 TB?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 04 Feb 2025 22:57:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234412#M948999</guid>
      <dc:creator>Yurub</dc:creator>
      <dc:date>2025-02-04T22:57:44Z</dc:date>
    </item>
    <item>
      <title>Re: Query: Expand logical volume</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234473#M949000</link>
      <description>&lt;P dir="auto" style="margin: 0;"&gt;Hello Yurub,&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;It seems like 50 gb is taken from each disk for raid - could be for parity writing , thus not available for user .&lt;BR /&gt;The size mentioned when the RAID has 10 disks was 2560000 MB .&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;Ideally, the size should be 600 (disk size) * 10 (no of disks) &amp;nbsp;* 1024 (in MB) &amp;nbsp;/ 2 (since it is RAID10 - mirroring) = 3072000.&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;But the size available is 2560000 as shown below .&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;---------- LOGICAL DRIVE 0 ----------&lt;BR /&gt;Device File &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;= /dev/dsk/c4t0d0&lt;BR /&gt;RAID Level &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; = 1+0&lt;BR /&gt;Size &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; = 2560000 MB &amp;nbsp;==&amp;gt;&lt;BR /&gt;Stripe Size &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;= 128 KB&lt;BR /&gt;Status &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; = OK&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;Thus we have almost 512000 MB (3072000 - 2560000)which is ~ 500GB &amp;nbsp;missing when you have RAID with 10 disks configured .&lt;BR /&gt;So for a single disk it is 50gb (500gb/10 disks) &amp;nbsp;non-usable for data writing .&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;When you added 2 new disks , the ideal size should be 600 * 12 * 1024 /2 = 3686400 MB ~ 3600GB .&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;But the current size is 3072000 MB .&lt;BR /&gt;---------- LOGICAL DRIVE 0 ----------&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;Device File &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;= /dev/dsk/c4t0d0&lt;BR /&gt;RAID Level &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; = 1+0&lt;BR /&gt;Size &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; = 3072000 MB &amp;nbsp;==&amp;gt;&amp;nbsp;&lt;BR /&gt;Stripe Size &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp;= 128 KB&lt;BR /&gt;Status &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; &amp;nbsp; = OK&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;So the missing size is again 3686400 - 3072000 = 614400 MB , ~ 600gb .&lt;BR /&gt;Which is nothing but addition of 100gb tp previous missing size (500gb) , 50 gb from each disk , thus 100gb for 2 new disks .&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;Hope this explains .&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P dir="auto" style="margin: 0;"&gt;I work for HPE/ I am an HPE Employee (HPE Community)&lt;/P&gt;</description>
      <pubDate>Wed, 05 Feb 2025 18:06:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/expand-logical-volume/m-p/7234473#M949000</guid>
      <dc:creator>georgek_1</dc:creator>
      <dc:date>2025-02-05T18:06:17Z</dc:date>
    </item>
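The RAID 1+0 arithmetic in the reply above can be reproduced directly; a sketch using the thread's nominal 600 GB per disk.

```shell
# Reproduce the RAID 1+0 capacity arithmetic from the reply (12-disk array).
disk_gb=600
disks=12
ideal_mb=$((disk_gb * disks * 1024 / 2))   # mirroring halves the raw capacity
actual_mb=3072000                          # size reported by saconfig
missing_mb=$((ideal_mb - actual_mb))
echo "ideal ${ideal_mb} MB, actual ${actual_mb} MB, missing ${missing_mb} MB"
# prints: ideal 3686400 MB, actual 3072000 MB, missing 614400 MB
```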
  </channel>
</rss>

