<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Extend LVM - Issue after pvresize in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363057#M61145</link>
    <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;fdisk -l&lt;BR /&gt;&lt;BR /&gt;will show the devices for any newly added disks/LUNs.&lt;BR /&gt;&lt;BR /&gt;I fear the premise of what you wish to do will fail.&lt;BR /&gt;&lt;BR /&gt;But I will check back after taking the Sabbath off and try to provide further assistance.&lt;BR /&gt;&lt;BR /&gt;Please let the community know how it turns out.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
    <pubDate>Fri, 20 Feb 2009 14:35:30 GMT</pubDate>
    <dc:creator>Steven E. Protter</dc:creator>
    <dc:date>2009-02-20T14:35:30Z</dc:date>
    <item>
      <title>Extend LVM - Issue after pvresize</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363054#M61142</link>
      <description>I have an HP DL385 G5 server with 4 HDDs configured as RAID5. I need to add 2 more disks to the same.&lt;BR /&gt;To do this I added the 2 new disks to the RAID array and was able to expand and extend it.&lt;BR /&gt;&lt;BR /&gt;Expand&lt;BR /&gt;/usr/sbin/hpacucli ctrl slot=1 ld 1 add drives=allunassigned&lt;BR /&gt;&lt;BR /&gt;Extend&lt;BR /&gt;/usr/sbin/hpacucli ctrl slot=1 ld 1 modify size=max&lt;BR /&gt;&lt;BR /&gt;Post this, I needed to resize the LVM.&lt;BR /&gt;The new space was not getting recognized by the LVM.&lt;BR /&gt;So, the steps I found were:&lt;BR /&gt; - pvresize&lt;BR /&gt; - lvresize/lvextend&lt;BR /&gt; - resize2fs&lt;BR /&gt;&lt;BR /&gt;After doing pvresize, the pvs command still shows the old DevSize and is not recognizing the new size.&lt;BR /&gt;The output is as below:&lt;BR /&gt;% pvs -o +dev_size&lt;BR /&gt;  PV                VG           Fmt  Attr PSize   PFree   DevSize&lt;BR /&gt;  /dev/cciss/c0d0p2 VolGroupHPHA lvm2 a-   339.97G 135.84G 204.90G&lt;BR /&gt;&lt;BR /&gt;% vgdisplay&lt;BR /&gt;  --- Volume group ---&lt;BR /&gt;  VG Name               VolGroupHPHA&lt;BR /&gt;  System ID&lt;BR /&gt;  Format                lvm2&lt;BR /&gt;  Metadata Areas        1&lt;BR /&gt;  Metadata Sequence No  11&lt;BR /&gt;  VG Access             read/write&lt;BR /&gt;  VG Status             resizable&lt;BR /&gt;  MAX LV                0&lt;BR /&gt;  Cur LV                7&lt;BR /&gt;  Open LV               7&lt;BR /&gt;  Max PV                0&lt;BR /&gt;  Cur PV                1&lt;BR /&gt;  Act PV                1&lt;BR /&gt;  VG Size               339.97 GB&lt;BR /&gt;  PE Size               32.00 MB&lt;BR /&gt;  Total PE              10879&lt;BR /&gt;  Alloc PE / Size       6532 / 204.12 GB&lt;BR /&gt;  Free  PE / Size       4347 / 135.84 GB&lt;BR /&gt;  VG UUID               wiqLs0-kD3s-0a0E-jEu0-FOD2-xBDZ-K0LR7N&lt;BR /&gt;&lt;BR /&gt;% pvdisplay&lt;BR /&gt;  --- Physical volume ---&lt;BR /&gt;  PV Name               
/dev/cciss/c0d0p2&lt;BR /&gt;  VG Name               VolGroupHPHA&lt;BR /&gt;  PV Size               340.00 GB / not usable 31.81 MB&lt;BR /&gt;  Allocatable           yes&lt;BR /&gt;  PE Size (KByte)       32768&lt;BR /&gt;  Total PE              10879&lt;BR /&gt;  Free PE               4347&lt;BR /&gt;  Allocated PE          6532&lt;BR /&gt;  PV UUID               M0zFCv-gwU0-n1y6-S3BB-DM1a-IuWj-13JXP2&lt;BR /&gt;&lt;BR /&gt;And when I execute the lvresize command, I get the following error:&lt;BR /&gt;% /usr/sbin/lvresize -L+100G /dev/VolGroupHPHA/LogVol07&lt;BR /&gt;  Extending logical volume LogVol07 to 284.03 GB&lt;BR /&gt;  device-mapper: reload ioctl failed: Invalid argument&lt;BR /&gt;  Failed to suspend LogVol07&lt;BR /&gt;&lt;BR /&gt;Not sure why the increased size is not getting recognized.&lt;BR /&gt;</description>
      <pubDate>Fri, 20 Feb 2009 12:19:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363054#M61142</guid>
      <dc:creator>Gaurav G</dc:creator>
      <dc:date>2009-02-20T12:19:35Z</dc:date>
    </item>
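    <!-- Editor's note: a minimal sketch of the resize sequence the post above describes, assembled from the commands and names given in the thread (VolGroupHPHA, LogVol07, /dev/cciss/c0d0p2); it is not a tested procedure from the original posts. As later replies explain, pvresize can only grow a PV up to the end of its backing partition, so on a partitioned PV like c0d0p2 the partition itself must be enlarged first.

    ```shell
    # Step 1: grow the RAID logical drive on the Smart Array controller.
    /usr/sbin/hpacucli ctrl slot=1 ld 1 add drives=allunassigned
    /usr/sbin/hpacucli ctrl slot=1 ld 1 modify size=max

    # Step 2: grow the LVM physical volume. Caveat: this only helps if the
    # backing partition (c0d0p2) already covers the new space; pvresize
    # does not move the partition boundary.
    pvresize /dev/cciss/c0d0p2

    # Step 3: grow the logical volume, then the ext filesystem on it.
    lvextend -L +100G /dev/VolGroupHPHA/LogVol07
    resize2fs /dev/VolGroupHPHA/LogVol07
    ```
    -->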
    <item>
      <title>Re: Extend LVM - Issue after pvresize</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363055#M61143</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;To make LV larger.&lt;BR /&gt;&lt;BR /&gt;lvextend specifying new physical volume to extend to.&lt;BR /&gt;&lt;BR /&gt;Then resize2fs&lt;BR /&gt;&lt;BR /&gt;If you take a LUN on storage for example, and expand it on the storage controller, that change is very unlikely to be recognized at all, even after a reboot of the Linux system.&lt;BR /&gt;&lt;BR /&gt;The way to go is to create a new larger lun on the storage, use storage utilities to clone the data and set up a new logical volume and then file system.&lt;BR /&gt;&lt;BR /&gt;pvresize is not going to recognize a change made on the controller on a running system.&lt;BR /&gt;&lt;BR /&gt;I may have this wrong, but need more details from you to provide further assistance.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 20 Feb 2009 13:00:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363055#M61143</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2009-02-20T13:00:49Z</dc:date>
    </item>
    <item>
      <title>Re: Extend LVM - Issue after pvresize</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363056#M61144</link>
      <description>Hi Steve,&lt;BR /&gt;&lt;BR /&gt;Thanks for the response.&lt;BR /&gt;I understand that.&lt;BR /&gt;I was actually trying to do a pvcreate instead of pvresize to create a new PV, but that requires a device name, and after adding the 2 new HDDs I could not figure out their device names.&lt;BR /&gt;My fdisk output looks as follows:&lt;BR /&gt;% fdisk -l&lt;BR /&gt;&lt;BR /&gt;Disk /dev/cciss/c0d0: 366.8 GB, 366870733824 bytes&lt;BR /&gt;255 heads, 63 sectors/track, 44602 cylinders&lt;BR /&gt;Units = cylinders of 16065 * 512 = 8225280 bytes&lt;BR /&gt;&lt;BR /&gt;           Device Boot      Start         End      Blocks   Id  System&lt;BR /&gt;/dev/cciss/c0d0p1   *           1          13      104391   83  Linux&lt;BR /&gt;/dev/cciss/c0d0p2              14       26761   214853310   8e  Linux LVM&lt;BR /&gt;&lt;BR /&gt; - Even here I can see the disk size has been increased to 366G.&lt;BR /&gt;&lt;BR /&gt; - How can I figure out the device names for the new HDDs I added?&lt;BR /&gt;&lt;BR /&gt; - Do let me know what other information would be useful for taking this forward.&lt;BR /&gt;</description>
      <pubDate>Fri, 20 Feb 2009 13:21:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363056#M61144</guid>
      <dc:creator>Gaurav G</dc:creator>
      <dc:date>2009-02-20T13:21:52Z</dc:date>
    </item>
    <item>
      <title>Re: Extend LVM - Issue after pvresize</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363057#M61145</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;fdisk -l&lt;BR /&gt;&lt;BR /&gt;will show the devices for any newly added disks/LUNs.&lt;BR /&gt;&lt;BR /&gt;I fear the premise of what you wish to do will fail.&lt;BR /&gt;&lt;BR /&gt;But I will check back after taking the Sabbath off and try to provide further assistance.&lt;BR /&gt;&lt;BR /&gt;Please let the community know how it turns out.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 20 Feb 2009 14:35:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363057#M61145</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2009-02-20T14:35:30Z</dc:date>
    </item>
    <item>
      <title>Re: Extend LVM - Issue after pvresize</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363058#M61146</link>
      <description>You extended the existing LUN on the array, so you don't get any new devices. I assume that 366GB is about the right size for your whole array, if not then you have another problem. You can probably create a new partition of type "Linux LVM" in the empty space at the end of /dev/cciss/c0d0 and add it to your volume group, then extend your logical volumes.</description>
      <pubDate>Fri, 20 Feb 2009 16:48:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363058#M61146</guid>
      <dc:creator>Heironimus</dc:creator>
      <dc:date>2009-02-20T16:48:05Z</dc:date>
    </item>
    <item>
      <title>Re: Extend LVM - Issue after pvresize</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363059#M61147</link>
      <description>You didn't specify the disk and logical unit sizes...&lt;BR /&gt;&lt;BR /&gt;What's the output of this command?&lt;BR /&gt;* hpacucli controller slot=1 logicaldrive all show&lt;BR /&gt;&lt;BR /&gt;As you are using partitions, you have two options:&lt;BR /&gt;&lt;BR /&gt;1- Delete the second partition (c0d0p2) and recreate it bigger (this would be done offline, with a rescue CD). Then resize your PV, and then you can resize your LVs.&lt;BR /&gt;2- Create a third partition (c0d0p3), initialize it as a PV and add it to your VG; then you can resize your LVs.&lt;BR /&gt;</description>
      <pubDate>Sun, 22 Feb 2009 00:15:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363059#M61147</guid>
      <dc:creator>Ciro  Iriarte</dc:creator>
      <dc:date>2009-02-22T00:15:59Z</dc:date>
    </item>
    <item>
      <title>Re: Extend LVM - Issue after pvresize</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363060#M61148</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;During my trial and error the machine went bad, and today I had to set it up again.&lt;BR /&gt;Once I had an initial setup again, I started with the following steps:&lt;BR /&gt;1. Expand&lt;BR /&gt;/usr/sbin/hpacucli ctrl slot=1 ld 1 add drives=allunassigned&lt;BR /&gt;&lt;BR /&gt;2. Extend&lt;BR /&gt;/usr/sbin/hpacucli ctrl slot=1 ld 1 modify size=max&lt;BR /&gt;&lt;BR /&gt;3. pvresize /dev/cciss/c0d0p2 &lt;BR /&gt;(This was different from the command I had executed last time. Last time I had executed: pvresize --setphysicalvolumesize 340G /dev/cciss/c0d0p2)&lt;BR /&gt;&lt;BR /&gt;% pvresize /dev/cciss/c0d0p2&lt;BR /&gt;  Physical volume "/dev/cciss/c0d0p2" changed&lt;BR /&gt;  1 physical volume(s) resized / 0 physical volume(s) not resized&lt;BR /&gt;&lt;BR /&gt;But this time I did not see the sizes change in any of the commands' output.&lt;BR /&gt;&lt;BR /&gt;% vgs -v&lt;BR /&gt;    Finding all volume groups&lt;BR /&gt;    Finding volume group "VolGroupHPHA"&lt;BR /&gt;  VG           Attr   Ext    #PV #LV #SN VSize   VFree  VG UUID&lt;BR /&gt;  VolGroupHPHA wz--n- 32.00M   1   7   0 204.88G 18.75G d8hKjg-fYg8-WTrR-WMyw-i03r-DIsn-Ji2bZ6&lt;BR /&gt;&lt;BR /&gt;% pvs -v&lt;BR /&gt;    Scanning for physical volume names&lt;BR /&gt;  PV                VG           Fmt  Attr PSize   PFree  DevSize PV UUID&lt;BR /&gt;  /dev/cciss/c0d0p2 VolGroupHPHA lvm2 a-   204.88G 18.75G 204.90G mh3ZjX-Q0iK-8AZs-Ef0H-pxjF-zjS8-ohE409&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;The only command showing the new size is hpacucli, which I think is expected because of the first 2 commands that I executed.&lt;BR /&gt;% hpacucli ctrl all show config&lt;BR /&gt;&lt;BR /&gt;Smart Array P400 in Slot 1    (sn: P61620H9SVY0J5)&lt;BR /&gt;&lt;BR /&gt;   array A (SAS, Unused Space: 0 MB)&lt;BR /&gt;&lt;BR /&gt;      logicaldrive 1 (341.7 GB, RAID 5, OK)&lt;BR /&gt;&lt;BR /&gt;      physicaldrive 1I:1:5 (port 1I:box 1:bay 5, SAS, 72 GB, OK)&lt;BR /&gt;      
physicaldrive 1I:1:6 (port 1I:box 1:bay 6, SAS, 72 GB, OK)&lt;BR /&gt;      physicaldrive 2I:1:1 (port 2I:box 1:bay 1, SAS, 72 GB, OK)&lt;BR /&gt;      physicaldrive 2I:1:2 (port 2I:box 1:bay 2, SAS, 72 GB, OK)&lt;BR /&gt;      physicaldrive 2I:1:3 (port 2I:box 1:bay 3, SAS, 72 GB, OK)&lt;BR /&gt;      physicaldrive 2I:1:4 (port 2I:box 1:bay 4, SAS, 72 GB, OK)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;This time around, pvresize (without the setphysicalvolumesize parameter) did not detect the correct size and did not resize the PV. I had expected it to consume all the available space and end up around 340G or so.&lt;BR /&gt;My thought is that I am missing something basic, but I haven't been able to figure that out yet.&lt;BR /&gt;&lt;BR /&gt;I have also tried to create a new PV using pvcreate; however, on my system I just cannot get the device names for the newly added HDDs, so I cannot take that approach either.&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Gaurav G.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 23 Feb 2009 10:59:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363060#M61148</guid>
      <dc:creator>Gaurav G</dc:creator>
      <dc:date>2009-02-23T10:59:29Z</dc:date>
    </item>
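    <!-- Editor's note: one way to see why pvresize reported "changed" yet no sizes moved. This is a sketch added for illustration, not from the original posts; it only reads sizes and changes nothing.

    ```shell
    # fdisk -l prints both the whole-disk size and each partition's end.
    # If /dev/cciss/c0d0 shows ~366 GB but c0d0p2 still ends at the old
    # boundary, pvresize has no room to grow the PV: it resizes the PV
    # only within its backing partition.
    fdisk -l /dev/cciss/c0d0

    # The same comparison via sysfs, in 512-byte sectors (on kernels of
    # that era, cciss devices appear under /sys/block with '!' replacing
    # the '/' in the device name):
    cat '/sys/block/cciss!c0d0/size'
    cat '/sys/block/cciss!c0d0/cciss!c0d0p2/size'
    ```
    -->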
    <item>
      <title>Re: Extend LVM - Issue after pvresize</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363061#M61149</link>
      <description>Please review my last post. You MUST either create a third partition or recreate the second one to use all available space (pvresize doesn't expand partitions)...</description>
      <pubDate>Mon, 23 Feb 2009 22:36:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extend-lvm-issue-after-pvresize/m-p/4363061#M61149</guid>
      <dc:creator>Ciro  Iriarte</dc:creator>
      <dc:date>2009-02-23T22:36:08Z</dc:date>
    </item>
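    <!-- Editor's note: a sketch of the second option described above (a new c0d0p3 in the free space at the end of the disk), using the device and volume names from the thread. It is hedged, not a tested procedure from the posts: the exact fdisk dialogue and whether a reboot or partition rescan is needed depend on the distribution and kernel.

    ```shell
    # Create a third partition covering the free space at the end of the
    # disk (interactive fdisk: n = new partition, t = set type 8e
    # "Linux LVM", w = write). A reboot or partition-table rescan may be
    # required before the kernel sees c0d0p3.
    fdisk /dev/cciss/c0d0

    # Initialize the new partition as a PV and add it to the volume group.
    pvcreate /dev/cciss/c0d0p3
    vgextend VolGroupHPHA /dev/cciss/c0d0p3

    # Now the LV and its filesystem can be grown as originally planned.
    lvextend -L +100G /dev/VolGroupHPHA/LogVol07
    resize2fs /dev/VolGroupHPHA/LogVol07
    ```
    -->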
  </channel>
</rss>

