<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Creating a multi-volume LVM VG in Kickstart? in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/creating-a-multi-volume-lvm-vg-in-kickstart/m-p/4443174#M37119</link>
    <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I'd like to sanity-check this with you all before I go ahead. I'm preparing to install a DL580 fully populated with 16 drives, split across 2 x P400 controllers. For maximum availability and performance I want to mirror the drives across both controllers, add the mirrors into an LVM volume group, and then create striped logical volumes across those mirrors for data.&lt;BR /&gt;&lt;BR /&gt;I've prepared the following Kickstart file but have struggled to find any examples online of creating a multi-volume VG at install time using Kickstart. Red Hat's Kickstart docs only give an example using one PV. Is this likely to work? If not, I can fall back to using one PV and add a %post script to extend the volume group with the rest later on.&lt;BR /&gt;&lt;BR /&gt;Note: the display width (on preview) looks like it's causing the "physical volume (LVM)" lines to wrap, but they are each one line.&lt;BR /&gt;&lt;BR /&gt;John&lt;BR /&gt;&lt;BR /&gt;bootloader --location=mbr --driveorder=cciss/c0d0,cciss/c1d0&lt;BR /&gt;clearpart --linux&lt;BR /&gt;&lt;BR /&gt;part raid.1 --size 300 --ondisk cciss/c0d0 --asprimary&lt;BR /&gt;part raid.2 --size 300 --ondisk cciss/c1d0 --asprimary&lt;BR /&gt;raid /boot --fstype ext3 --level=RAID1 --device=md0 raid.1 raid.2&lt;BR /&gt;&lt;BR /&gt;part raid.3 --size 1 --ondisk cciss/c0d0 --grow&lt;BR /&gt;part raid.4 --size 1 --ondisk cciss/c1d0 --grow&lt;BR /&gt;raid pv.1 --fstype "physical volume (LVM)" --level=RAID1 --device=md1 raid.3 raid.4&lt;BR /&gt;&lt;BR /&gt;part raid.5 --size 1 --ondisk cciss/c0d1 --grow&lt;BR /&gt;part raid.6 --size 1 --ondisk cciss/c1d1 --grow&lt;BR /&gt;raid pv.2 --fstype "physical volume (LVM)" --level=RAID1 --device=md2 raid.5 raid.6&lt;BR /&gt;&lt;BR /&gt;part raid.7 --size 1 --ondisk cciss/c0d2 --grow&lt;BR /&gt;part raid.8 --size 1 --ondisk cciss/c1d2 --grow&lt;BR /&gt;raid pv.3 --fstype "physical volume (LVM)" --level=RAID1 
--device=md3 raid.7 raid.8&lt;BR /&gt;&lt;BR /&gt;part raid.9 --size 1 --ondisk cciss/c0d3 --grow&lt;BR /&gt;part raid.10 --size 1 --ondisk cciss/c1d3 --grow&lt;BR /&gt;raid pv.4 --fstype "physical volume (LVM)" --level=RAID1 --device=md4 raid.9 raid.10&lt;BR /&gt;&lt;BR /&gt;part raid.11 --size 1 --ondisk cciss/c0d4 --grow&lt;BR /&gt;part raid.12 --size 1 --ondisk cciss/c1d4 --grow&lt;BR /&gt;raid pv.5 --fstype "physical volume (LVM)" --level=RAID1 --device=md5 raid.11 raid.12&lt;BR /&gt;&lt;BR /&gt;part raid.13 --size 1 --ondisk cciss/c0d5 --grow&lt;BR /&gt;part raid.14 --size 1 --ondisk cciss/c1d5 --grow&lt;BR /&gt;raid pv.6 --fstype "physical volume (LVM)" --level=RAID1 --device=md6 raid.13 raid.14&lt;BR /&gt;&lt;BR /&gt;part raid.15 --size 1 --ondisk cciss/c0d6 --grow&lt;BR /&gt;part raid.16 --size 1 --ondisk cciss/c1d6 --grow&lt;BR /&gt;raid pv.7 --fstype "physical volume (LVM)" --level=RAID1 --device=md7 raid.15 raid.16&lt;BR /&gt;&lt;BR /&gt;part raid.17 --size 1 --ondisk cciss/c0d7 --grow&lt;BR /&gt;part raid.18 --size 1 --ondisk cciss/c1d7 --grow&lt;BR /&gt;raid pv.8 --fstype "physical volume (LVM)" --level=RAID1 --device=md8 raid.17 raid.18&lt;BR /&gt;&lt;BR /&gt;volgroup system --pesize=32768 pv.1 pv.2 pv.3 pv.4 pv.5 pv.6 pv.7 pv.8&lt;BR /&gt;logvol / --fstype ext3 --name=root --vgname=system --size=51200&lt;BR /&gt;logvol swap --fstype swap --name=swap --vgname=system --size=16384</description>
    <pubDate>Fri, 19 Jun 2009 09:45:21 GMT</pubDate>
    <dc:creator>John McNulty_2</dc:creator>
    <dc:date>2009-06-19T09:45:21Z</dc:date>
    <item>
      <title>Creating a multi-volume LVM VG in Kickstart?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/creating-a-multi-volume-lvm-vg-in-kickstart/m-p/4443174#M37119</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I'd like to sanity-check this with you all before I go ahead. I'm preparing to install a DL580 fully populated with 16 drives, split across 2 x P400 controllers. For maximum availability and performance I want to mirror the drives across both controllers, add the mirrors into an LVM volume group, and then create striped logical volumes across those mirrors for data.&lt;BR /&gt;&lt;BR /&gt;I've prepared the following Kickstart file but have struggled to find any examples online of creating a multi-volume VG at install time using Kickstart. Red Hat's Kickstart docs only give an example using one PV. Is this likely to work? If not, I can fall back to using one PV and add a %post script to extend the volume group with the rest later on.&lt;BR /&gt;&lt;BR /&gt;Note: the display width (on preview) looks like it's causing the "physical volume (LVM)" lines to wrap, but they are each one line.&lt;BR /&gt;&lt;BR /&gt;John&lt;BR /&gt;&lt;BR /&gt;bootloader --location=mbr --driveorder=cciss/c0d0,cciss/c1d0&lt;BR /&gt;clearpart --linux&lt;BR /&gt;&lt;BR /&gt;part raid.1 --size 300 --ondisk cciss/c0d0 --asprimary&lt;BR /&gt;part raid.2 --size 300 --ondisk cciss/c1d0 --asprimary&lt;BR /&gt;raid /boot --fstype ext3 --level=RAID1 --device=md0 raid.1 raid.2&lt;BR /&gt;&lt;BR /&gt;part raid.3 --size 1 --ondisk cciss/c0d0 --grow&lt;BR /&gt;part raid.4 --size 1 --ondisk cciss/c1d0 --grow&lt;BR /&gt;raid pv.1 --fstype "physical volume (LVM)" --level=RAID1 --device=md1 raid.3 raid.4&lt;BR /&gt;&lt;BR /&gt;part raid.5 --size 1 --ondisk cciss/c0d1 --grow&lt;BR /&gt;part raid.6 --size 1 --ondisk cciss/c1d1 --grow&lt;BR /&gt;raid pv.2 --fstype "physical volume (LVM)" --level=RAID1 --device=md2 raid.5 raid.6&lt;BR /&gt;&lt;BR /&gt;part raid.7 --size 1 --ondisk cciss/c0d2 --grow&lt;BR /&gt;part raid.8 --size 1 --ondisk cciss/c1d2 --grow&lt;BR /&gt;raid pv.3 --fstype "physical volume (LVM)" --level=RAID1 
--device=md3 raid.7 raid.8&lt;BR /&gt;&lt;BR /&gt;part raid.9 --size 1 --ondisk cciss/c0d3 --grow&lt;BR /&gt;part raid.10 --size 1 --ondisk cciss/c1d3 --grow&lt;BR /&gt;raid pv.4 --fstype "physical volume (LVM)" --level=RAID1 --device=md4 raid.9 raid.10&lt;BR /&gt;&lt;BR /&gt;part raid.11 --size 1 --ondisk cciss/c0d4 --grow&lt;BR /&gt;part raid.12 --size 1 --ondisk cciss/c1d4 --grow&lt;BR /&gt;raid pv.5 --fstype "physical volume (LVM)" --level=RAID1 --device=md5 raid.11 raid.12&lt;BR /&gt;&lt;BR /&gt;part raid.13 --size 1 --ondisk cciss/c0d5 --grow&lt;BR /&gt;part raid.14 --size 1 --ondisk cciss/c1d5 --grow&lt;BR /&gt;raid pv.6 --fstype "physical volume (LVM)" --level=RAID1 --device=md6 raid.13 raid.14&lt;BR /&gt;&lt;BR /&gt;part raid.15 --size 1 --ondisk cciss/c0d6 --grow&lt;BR /&gt;part raid.16 --size 1 --ondisk cciss/c1d6 --grow&lt;BR /&gt;raid pv.7 --fstype "physical volume (LVM)" --level=RAID1 --device=md7 raid.15 raid.16&lt;BR /&gt;&lt;BR /&gt;part raid.17 --size 1 --ondisk cciss/c0d7 --grow&lt;BR /&gt;part raid.18 --size 1 --ondisk cciss/c1d7 --grow&lt;BR /&gt;raid pv.8 --fstype "physical volume (LVM)" --level=RAID1 --device=md8 raid.17 raid.18&lt;BR /&gt;&lt;BR /&gt;volgroup system --pesize=32768 pv.1 pv.2 pv.3 pv.4 pv.5 pv.6 pv.7 pv.8&lt;BR /&gt;logvol / --fstype ext3 --name=root --vgname=system --size=51200&lt;BR /&gt;logvol swap --fstype swap --name=swap --vgname=system --size=16384</description>
      <pubDate>Fri, 19 Jun 2009 09:45:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/creating-a-multi-volume-lvm-vg-in-kickstart/m-p/4443174#M37119</guid>
      <dc:creator>John McNulty_2</dc:creator>
      <dc:date>2009-06-19T09:45:21Z</dc:date>
    </item>
    <item>
      <title>Re: Creating a multi-volume LVM VG in Kickstart?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/creating-a-multi-volume-lvm-vg-in-kickstart/m-p/4443175#M37120</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;The best thing to do is a normal install on a system like this, and use that as a template for a Kickstart file.&lt;BR /&gt;&lt;BR /&gt;If that does not work, or is not feasible, you may be faced with a little trial and error.&lt;BR /&gt;&lt;BR /&gt;I see nothing inherently wrong with what you are trying.&lt;BR /&gt;&lt;BR /&gt;Note also, if you still have time to specify the server, that built-in hardware RAID gives better performance than software RAID. RAID controllers have their own CPU to handle I/O, which takes that workload off the main CPU.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Fri, 19 Jun 2009 11:27:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/creating-a-multi-volume-lvm-vg-in-kickstart/m-p/4443175#M37120</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2009-06-19T11:27:28Z</dc:date>
    </item>
    <item>
      <title>Re: Creating a multi-volume LVM VG in Kickstart?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/creating-a-multi-volume-lvm-vg-in-kickstart/m-p/4443176#M37121</link>
      <description>Hi SEP,&lt;BR /&gt;&lt;BR /&gt;Yes, doing a manual install first is in the back of my mind as a possible option if this doesn't pan out the way I expect.&lt;BR /&gt;&lt;BR /&gt;I don't agree with always using hardware RAID just because it's there, though, as most people seem to do. Hardware RAID doesn't allow me to mirror across controllers, so it makes the RAID card a single point of failure. I also don't like RAID5 very much: the read-read-write-write performance penalty for modifying data blocks sucks, and performance goes right down the toilet when rebuilding a RAID5 volume after a disk failure.&lt;BR /&gt;&lt;BR /&gt;I've built a config spanning two controllers like this before on an rx6600 using LVM on HP-UX, and performance was very fast.</description>
      <pubDate>Fri, 19 Jun 2009 11:52:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/creating-a-multi-volume-lvm-vg-in-kickstart/m-p/4443176#M37121</guid>
      <dc:creator>John McNulty_2</dc:creator>
      <dc:date>2009-06-19T11:52:47Z</dc:date>
    </item>
    <item>
      <title>Re: Creating a multi-volume LVM VG in Kickstart?</title>
      <link>https://community.hpe.com/t5/operating-system-linux/creating-a-multi-volume-lvm-vg-in-kickstart/m-p/4443177#M37122</link>
      <description>Well, that actually worked first time. But in hindsight (and for the rest of the builds) I think it's better to stick with just one disk and build/add the remaining mirrors later. When creating the mirrors for the first time, the installer waits until both halves of each mirror sync up. It looks like the install has hung, until you Alt-F2 over to the install shell, watch (in top) all the md resync processes beavering away, and monitor their progress in /proc/mdstat.&lt;BR /&gt;&lt;BR /&gt;With this many disks it stalls the install for a couple of hours until the last of the mirrors is complete before carrying on.&lt;BR /&gt;&lt;BR /&gt;So, an interesting experiment, but not worth doing up front.</description>
      <pubDate>Wed, 24 Jun 2009 14:35:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/creating-a-multi-volume-lvm-vg-in-kickstart/m-p/4443177#M37122</guid>
      <dc:creator>John McNulty_2</dc:creator>
      <dc:date>2009-06-24T14:35:38Z</dc:date>
    </item>
  </channel>
</rss>

