<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: RHEL4 Linux boot disk mirroring. in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931282#M26774</link>
    <description>Hi Andrea Rossi, I'm sorry about your bad experience, but I have had several servers with software RAID working without problems for years.&lt;BR /&gt;&lt;BR /&gt;I can say that Linux software RAID is reliable.</description>
    <pubDate>Fri, 26 Jan 2007 07:48:07 GMT</pubDate>
    <dc:creator>Ivan Ferreira</dc:creator>
    <dc:date>2007-01-26T07:48:07Z</dc:date>
    <item>
      <title>RHEL4 Linux boot disk mirroring.</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931276#M26768</link>
      <description>On RHEL4 I would like to (software) mirror the root disk/partition to an additional identical SCSI hard disk installed in the server. (We don't have any hardware RAID card installed in this server.)&lt;BR /&gt;&lt;BR /&gt;On HP-UX I used to do LVM mirroring using MirrorDisk/UX, but I am not sure the same can be done on Red Hat Linux 4.&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Gulam.&lt;BR /&gt;</description>
      <pubDate>Tue, 23 Jan 2007 10:06:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931276#M26768</guid>
      <dc:creator>Gulam Mohiuddin</dc:creator>
      <dc:date>2007-01-23T10:06:28Z</dc:date>
    </item>
    <item>
      <title>Re: RHEL4 Linux boot disk mirroring.</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931277#M26769</link>
      <description>Here is how I did it once:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=904635" target="_blank"&gt;http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=904635&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;The missing step is to set the partition type to software RAID for the vfat partition.</description>
      <pubDate>Tue, 23 Jan 2007 10:51:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931277#M26769</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2007-01-23T10:51:37Z</dc:date>
    </item>
    <item>
      <title>Re: RHEL4 Linux boot disk mirroring.</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931278#M26770</link>
      <description>You will probably need to replace raidtools with mdadm as well, but that's the idea.</description>
      <pubDate>Tue, 23 Jan 2007 10:52:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931278#M26770</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2007-01-23T10:52:32Z</dc:date>
    </item>
    <item>
      <title>Re: RHEL4 Linux boot disk mirroring.</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931280#M26772</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I found this with a Google search; maybe it will help you along!&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.tek-tips.com/viewthread.cfm?qid=1318503&amp;amp;page=1" target="_blank"&gt;http://www.tek-tips.com/viewthread.cfm?qid=1318503&amp;amp;page=1&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;But beware: I have never tried any of this, and I would not try it unless I had a good, valid, tested backup of my original disk!&lt;BR /&gt;&lt;BR /&gt;enjoy, life.&lt;BR /&gt;&lt;BR /&gt;Jean-Pierre Huc&lt;BR /&gt;</description>
      <pubDate>Tue, 23 Jan 2007 11:14:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931280#M26772</guid>
      <dc:creator>Huc_1</dc:creator>
      <dc:date>2007-01-23T11:14:03Z</dc:date>
    </item>
    <item>
      <title>Re: RHEL4 Linux boot disk mirroring.</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931281#M26773</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;in my personal experience, software RAID on the x86 platform is not as reliable as on HP-UX, and Linux is no exception. Besides, hardware RAID is really quite affordable these days.</description>
      <pubDate>Fri, 26 Jan 2007 06:47:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931281#M26773</guid>
      <dc:creator>Andrea Rossi</dc:creator>
      <dc:date>2007-01-26T06:47:51Z</dc:date>
    </item>
    <item>
      <title>Re: RHEL4 Linux boot disk mirroring.</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931282#M26774</link>
      <description>Hi Andrea Rossi, I'm sorry about your bad experience, but I have had several servers with software RAID working without problems for years.&lt;BR /&gt;&lt;BR /&gt;I can say that Linux software RAID is reliable.</description>
      <pubDate>Fri, 26 Jan 2007 07:48:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931282#M26774</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2007-01-26T07:48:07Z</dc:date>
    </item>
    <item>
      <title>Re: RHEL4 Linux boot disk mirroring.</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931283#M26775</link>
      <description>Hi,&lt;BR /&gt; &lt;BR /&gt;we set up all our Linux servers with Linux MD software RAID, and we have never experienced any problems with it; absolutely satisfied.&lt;BR /&gt;In fact, from my point of view I would consider Linux MD RAID even more reliable than some queer hardware RAID controller whose state is mostly difficult to monitor if you don't have a suitable MIB (some vendors seem not to care).&lt;BR /&gt;I say this because with Linux MD RAID you can easily enable the mdadm --monitor mode,&lt;BR /&gt;and integrating traps to send passive checks to a Nagios server is absolutely painless.&lt;BR /&gt;I heard that newer versions of Linux LVM have recently started incorporating LVM mirroring, as known from HP-UX LVM,&lt;BR /&gt;but so far this is still considered experimental, whereas LVM volumes on top of MD RAID devices are well-proven and tested technology.&lt;BR /&gt;You can set up MD RAID mirroring already during the installation (which is probably easier if you are new to the mdadm command).&lt;BR /&gt;But you can also set up MD devices at any time from a running Linux that was installed into plain partitions.&lt;BR /&gt;Here's how I would convert from a partitioned Linux.&lt;BR /&gt;First, your kernel must support MD_RAID.&lt;BR /&gt;The RHEL vanilla kernels have been doing this for quite a while, but one never knows.&lt;BR /&gt;You could check your kernel's config file:&lt;BR /&gt; &lt;BR /&gt;# grep MD_RAID /boot/config-$(uname -r)&lt;BR /&gt;CONFIG_MD_RAID0=m&lt;BR /&gt;CONFIG_MD_RAID1=m&lt;BR /&gt;CONFIG_MD_RAID10=m&lt;BR /&gt;CONFIG_MD_RAID5=m&lt;BR /&gt;CONFIG_MD_RAID6=m&lt;BR /&gt;&lt;BR /&gt;If you're unlucky, you have to build a new kernel from the sources.&lt;BR /&gt;There you can shorten things a lot by first running "make oldconfig",&lt;BR /&gt;and only afterwards searching the deeply nested menuconfig for MD support&lt;BR /&gt;(it really can be difficult to find because the Linux kernels bloat with configurable options, and I cannot remember by heart anymore what the menu path was).&lt;BR /&gt;You can select the different RAID levels in the kernel menuconfig either as "y", which would link them statically, or as "m" (as in my output), which would require you to make an initial RAM disk later (which is easy thanks to mkinitrd), because you need to have RAID1 available at an early boot stage.&lt;BR /&gt;Then run "make bzImage", "make modules" and "make modules_install", move the kernel into your /boot, adapt your boot loader, and boot the new kernel.&lt;BR /&gt;Once booted, check that you can load the raid1 module:&lt;BR /&gt;&lt;BR /&gt;# modprobe raid1&lt;BR /&gt;&lt;BR /&gt;# lsmod|grep raid&lt;BR /&gt;raid1                  19521  5 &lt;BR /&gt;&lt;BR /&gt;This is done automatically later by the init script from the initial RAM disk.&lt;BR /&gt;&lt;BR /&gt;Ah, I forgot: your kernel also requires LVM support.&lt;BR /&gt;So check like this:&lt;BR /&gt;&lt;BR /&gt;# grep BLK_DEV_DM /boot/config-$(uname -r)&lt;BR /&gt;CONFIG_BLK_DEV_DM=m&lt;BR /&gt;&lt;BR /&gt;Let's assume you have two identical SCSI disks: /dev/sda, where your running Linux is installed, and /dev/sdb, which is free.&lt;BR /&gt;&lt;BR /&gt;You need to create a partition scheme on /dev/sdb which should contain at least 3 partitions of type Linux raid autodetect (0xfd).&lt;BR /&gt;You can use fdisk for manual creation.&lt;BR /&gt;It could look similar to mine:&lt;BR /&gt; &lt;BR /&gt;# fdisk -l /dev/sdb&lt;BR /&gt;&lt;BR /&gt;Disk /dev/sdb: 73.4 GB, 73407820800 bytes&lt;BR /&gt;255 heads, 63 sectors/track, 8924 cylinders&lt;BR /&gt;Units = cylinders of 16065 * 512 = 8225280 bytes&lt;BR /&gt;&lt;BR /&gt;   Device Boot      Start         End      Blocks   Id  System&lt;BR /&gt;/dev/sdb1   *           1          17      136521   fd  Linux raid autodetect&lt;BR /&gt;/dev/sdb2              18        1263    10008495   fd  Linux raid autodetect&lt;BR /&gt;/dev/sdb3            1264        1762     4008217+  fd  Linux raid autodetect&lt;BR /&gt;/dev/sdb4            1763        8924    57528765    5  Extended&lt;BR /&gt;/dev/sdb5            1763        6743    40009851   fd  Linux raid autodetect&lt;BR /&gt;/dev/sdb6            6744        6868     1004031   82  Linux swap&lt;BR /&gt;/dev/sdb7            6869        8924    16514788+  fd  Linux raid autodetect&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;/dev/sdb1 will carry your /boot filesystem&lt;BR /&gt;and thus only needs to be small, some 100 MB.&lt;BR /&gt;I would strongly advise you to later have /boot mounted from /dev/md0 rather than from some LVM volume (which, although it should theoretically work, would make your life in recovery mode far too complicated).&lt;BR /&gt;/dev/sdb2 becomes the PV for our root VG.&lt;BR /&gt;So abt. 5-10 GB is recommended.&lt;BR /&gt;Finally we need a swap device,&lt;BR /&gt;which also should not reside in an LVM volume, for performance reasons (this is different from HP-UX LVM).&lt;BR /&gt;Apart from that you may partition more RAID slices for devices beyond /dev/md2,&lt;BR /&gt;but they aren't required.&lt;BR /&gt;&lt;BR /&gt;Now you can start building the RAID devices.&lt;BR /&gt;At this stage they must be created as degraded arrays, because the later mirror slices on /dev/sda are still used by our running Linux.&lt;BR /&gt;e.g.&lt;BR /&gt;&lt;BR /&gt;# mdadm -C -n2 -l1 /dev/md0 /dev/sdb1 missing&lt;BR /&gt;&lt;BR /&gt;-C is create mode, -n2 the number of components (slices), -l1 the RAID1 level, and "missing" because our /dev/sda1 is still in use.&lt;BR /&gt;&lt;BR /&gt;You repeat this for all the devices you require,&lt;BR /&gt;e.g.&lt;BR /&gt;&lt;BR /&gt;# mdadm -C -n2 -l1 /dev/md1 /dev/sdb2 missing&lt;BR /&gt;&lt;BR /&gt;# mdadm -C -n2 -l1 /dev/md2 /dev/sdb3 missing&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Now you can already create an /etc/mdadm.conf for the degraded arrays.&lt;BR /&gt;&lt;BR /&gt;# cat &amp;lt;&amp;lt;EOF &amp;gt;/etc/mdadm.conf&lt;BR /&gt;DEVICE /dev/sdb[123]&lt;BR /&gt;#&lt;BR /&gt;MAILADDR root&lt;BR /&gt;#&lt;BR /&gt;EOF&lt;BR /&gt;&lt;BR /&gt;# mdadm -QDs &amp;gt;&amp;gt;/etc/mdadm.conf&lt;BR /&gt;&lt;BR /&gt;Now create an ext3 filesystem to later host /boot:&lt;BR /&gt;&lt;BR /&gt;# mkfs.ext3 /dev/md0&lt;BR /&gt;&lt;BR /&gt;and disable mount counts, since we have a journalling FS:&lt;BR /&gt;&lt;BR /&gt;# tune2fs -c 0 -i 0 /dev/md0&lt;BR /&gt;&lt;BR /&gt;Mount it and copy the contents of /boot into it:&lt;BR /&gt;&lt;BR /&gt;# mkdir -p /mnt/tmp{1,2,3,4,5}&lt;BR /&gt;&lt;BR /&gt;# mount /dev/md0 /mnt/tmp1&lt;BR /&gt;&lt;BR /&gt;# cd /boot &amp;amp;&amp;amp; tar cf - .|(cd /mnt/tmp1 &amp;amp;&amp;amp; tar xf -)&lt;BR /&gt;&lt;BR /&gt;# df -i /boot /mnt/tmp1&lt;BR /&gt;&lt;BR /&gt;In case your current initial RAM disk has neither LVM nor MD support, better create a new one like&lt;BR /&gt;&lt;BR /&gt;# mkinitrd /mnt/tmp1/initrd-raid-$(uname -r).img $(uname -r)&lt;BR /&gt;&lt;BR /&gt;Depending on your boot loader you then need to edit either /etc/lilo.conf or /mnt/tmp1/grub/grub.conf.&lt;BR /&gt;&lt;BR /&gt;Until not so long ago, LILO was the preferred loader for MD RAID boots.&lt;BR /&gt;But now grub can also cope with it, and I prefer the latter because of its wider feature set.&lt;BR /&gt;&lt;BR /&gt;You need to make the "kernel" directive point to your RAID-enabled kernel (probably the same one as before).&lt;BR /&gt;And, importantly, root= must point to the LV that shall carry /, e.g. root=/dev/vgroot/lv_root&lt;BR /&gt;The other options can remain the same.&lt;BR /&gt;Finally, "initrd" must point to the file we just created with mkinitrd.&lt;BR /&gt;Best copy a whole stanza from the menu and only edit the changes.&lt;BR /&gt;&lt;BR /&gt;If all's done:&lt;BR /&gt;&lt;BR /&gt;# umount /mnt/tmp1&lt;BR /&gt;&lt;BR /&gt;Then you can create LVM volumes on top of /dev/md1 to carry your OS.&lt;BR /&gt;&lt;BR /&gt;# pvcreate /dev/md1&lt;BR /&gt;&lt;BR /&gt;# vgcreate vgroot /dev/md1&lt;BR /&gt;&lt;BR /&gt;# lvcreate -L 1024m -n lv_root vgroot&lt;BR /&gt;# lvcreate -L 512m -n lv_var vgroot&lt;BR /&gt;# lvcreate -L 512m -n lv_tmp vgroot&lt;BR /&gt;# lvcreate -L 4096m -n lv_usr vgroot&lt;BR /&gt;# lvcreate -L 256m -n lv_home vgroot&lt;BR /&gt;# lvcreate -L 356m -n lv_opt vgroot&lt;BR /&gt;&lt;BR /&gt;etc.&lt;BR /&gt;&lt;BR /&gt;The volumes and their sizes depend heavily on your needs.&lt;BR /&gt;If you later need more space, no problem:&lt;BR /&gt;current RHEL has the ext2online command, which lets you extend even a mounted filesystem.&lt;BR /&gt;&lt;BR /&gt;Put ext3 filesystems on them:&lt;BR /&gt;&lt;BR /&gt;# vgdisplay -v vgroot 2&amp;gt;/dev/null&lt;BR /&gt;&lt;BR /&gt;# lvs --noheadings --separator=,|cut -d, -f1|xargs -n1 -i mkfs.ext3 /dev/vgroot/{}&lt;BR /&gt;&lt;BR /&gt;# lvs --noheadings --separator=,|cut -d, -f1|xargs -n1 -i tune2fs -i 0 -c 0  /dev/vgroot/{}&lt;BR /&gt;&lt;BR /&gt;After that you need to copy all data from your current mounts into the new LVs.&lt;BR /&gt;You could do this either in single-user mode,&lt;BR /&gt;or produce snapshots and copy from them&lt;BR /&gt;if your current Linux already uses LVM.&lt;BR /&gt;I only demonstrate this for the / filesystem.&lt;BR /&gt;You must repeat it for every filesystem.&lt;BR /&gt;&lt;BR /&gt;I would use dump/restore for copying,&lt;BR /&gt;e.g.&lt;BR /&gt;&lt;BR /&gt;# mount /dev/vgroot/lv_root /mnt/tmp1&lt;BR /&gt;&lt;BR /&gt;# dump -lf - /|(cd /mnt/tmp1 &amp;amp;&amp;amp; restore -rf -)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Then you need to edit fstab in your new root filesystem to reflect the new mount points,&lt;BR /&gt;e.g.&lt;BR /&gt;&lt;BR /&gt;/dev/md0             /boot ext3 defaults 1 2&lt;BR /&gt;/dev/vgroot/lv_root  /   ext3  defaults 1 1&lt;BR /&gt;/dev/md2            swap swap   defaults 0 0&lt;BR /&gt;...&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Finally, you need to write grub into the MBR of /dev/sdb:&lt;BR /&gt;&lt;BR /&gt;# grub&lt;BR /&gt;&lt;BR /&gt;grub&amp;gt; root (hd1,0)&lt;BR /&gt;grub&amp;gt; setup (hd1)&lt;BR /&gt;grub&amp;gt; quit&lt;BR /&gt;&lt;BR /&gt;You can then try to boot your new LVM on top of RAID.&lt;BR /&gt;&lt;BR /&gt;If, hopefully, all worked well,&lt;BR /&gt;you can destroy your old Linux on /dev/sda&lt;BR /&gt;and complete the mirror,&lt;BR /&gt;e.g.&lt;BR /&gt;&lt;BR /&gt;# sfdisk -d /dev/sdb | sfdisk /dev/sda&lt;BR /&gt;&lt;BR /&gt;# mdadm /dev/md0 -a /dev/sda1&lt;BR /&gt;&lt;BR /&gt;You can check the syncing with&lt;BR /&gt;&lt;BR /&gt;# mdadm -QD /dev/md0&lt;BR /&gt;&lt;BR /&gt;If all's clean, continue with /dev/md1 and /dev/md2.&lt;BR /&gt;&lt;BR /&gt;Finally, you must update your mdadm.conf.&lt;BR /&gt;&lt;BR /&gt;Edit it, throw away only the lines starting with ARRAY, and execute&lt;BR /&gt;&lt;BR /&gt;# mdadm -QDs &amp;gt;&amp;gt;/etc/mdadm.conf&lt;BR /&gt;&lt;BR /&gt;Add your new partitions to the DEVICE line so that it reads&lt;BR /&gt; &lt;BR /&gt;DEVICE /dev/sd[ab][123]&lt;BR /&gt;&lt;BR /&gt;Finally, update the MBR on /dev/sda:&lt;BR /&gt;&lt;BR /&gt;# grub&lt;BR /&gt;grub&amp;gt; root (hd0,0)&lt;BR /&gt;grub&amp;gt; setup (hd0)&lt;BR /&gt;grub&amp;gt; quit&lt;BR /&gt;&lt;BR /&gt;Now, if I haven't forgotten or overlooked something decisive, you should have a Linux installation on SW MD RAID.&lt;BR /&gt;&lt;BR /&gt;Finally, you can set up MD monitoring by writing a short script that does some sort of notification (e.g. submitting a passive check to a Nagios server), putting its path in an extra line after PROGRAM in mdadm.conf,&lt;BR /&gt;and running&lt;BR /&gt;&lt;BR /&gt;# service mdmonitor start&lt;BR /&gt;</description>
      <pubDate>Mon, 29 Jan 2007 12:01:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931283#M26775</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2007-01-29T12:01:15Z</dc:date>
    </item>
    <item>
      <title>Re: RHEL4 Linux boot disk mirroring.</title>
      <link>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931284#M26776</link>
      <description>Hi Gulam,&lt;BR /&gt; &lt;BR /&gt;I noticed a crucial typo I made yesterday while writing.&lt;BR /&gt;But I'm sure you have already noticed it.&lt;BR /&gt;The dump command should take a level 0 dump (i.e. a full backup).&lt;BR /&gt;Therefore, it should read "dump -0f -" before the pipe.&lt;BR /&gt;But you could of course also use any other backup mechanism you are familiar with, like piping the output of find /fs -xdev into a cpio in passthrough mode,&lt;BR /&gt;e.g.&lt;BR /&gt;&lt;BR /&gt;# lvcreate -L 256m -n lvsnap -s /dev/vgrootX/lv_root&lt;BR /&gt;&lt;BR /&gt;# mount -r /dev/vgrootX/lvsnap /mnt/tmp1&lt;BR /&gt;&lt;BR /&gt;# df -ik / /mnt/tmp1&lt;BR /&gt;Filesystem            Inodes   IUsed   IFree IUse% Mounted on&lt;BR /&gt;/dev/mapper/vgrootX-lv_root&lt;BR /&gt;                      131072    6915  124157    6% /&lt;BR /&gt;/dev/mapper/vgrootX-lvsnap&lt;BR /&gt;                      131072    6915  124157    6% /mnt/tmp1&lt;BR /&gt;&lt;BR /&gt;# lvdisplay /dev/vgrootX/lv_root | awk '/Current LE/{print$NF}'&lt;BR /&gt;256&lt;BR /&gt;# lvcreate -l 256 -n lv_root_copy vgrootX&lt;BR /&gt;  Logical volume "lv_root_copy" created&lt;BR /&gt; &lt;BR /&gt;# mkfs.ext3 -q /dev/vgrootX/lv_root_copy &lt;BR /&gt;max_blocks 268435456, rsv_groups = 8192, rsv_gdb = 63&lt;BR /&gt;inode.i_blocks = 2528, i_size = 4243456&lt;BR /&gt;&lt;BR /&gt;# tune2fs -i 0 -c 0 /dev/vgrootX/lv_root_copy &lt;BR /&gt;tune2fs 1.35 (28-Feb-2004)&lt;BR /&gt;Setting maximal mount count to -1&lt;BR /&gt;Setting interval between check 0 seconds&lt;BR /&gt;&lt;BR /&gt;# mount /dev/vgrootX/lv_root_copy /mnt/tmp2&lt;BR /&gt;&lt;BR /&gt;# cd /mnt/tmp1 &amp;amp;&amp;amp; find . -xdev |cpio -pmuda /mnt/tmp2&lt;BR /&gt;234814 blocks&lt;BR /&gt;&lt;BR /&gt;# df -i /mnt/tmp[12]&lt;BR /&gt;Filesystem            Inodes   IUsed   IFree IUse% Mounted on&lt;BR /&gt;/dev/mapper/vgrootX-lvsnap&lt;BR /&gt;                      131072    6915  124157    6% /mnt/tmp1&lt;BR /&gt;/dev/mapper/vgrootX-lv_root_copy&lt;BR /&gt;                      131072    6915  124157    6% /mnt/tmp2&lt;BR /&gt;&lt;BR /&gt;# df -k /mnt/tmp[12]&lt;BR /&gt;Filesystem           1K-blocks      Used Available Use% Mounted on&lt;BR /&gt;/dev/mapper/vgrootX-lvsnap&lt;BR /&gt;                       1032088    170388    809272  18% /mnt/tmp1&lt;BR /&gt;/dev/mapper/vgrootX-lv_root_copy&lt;BR /&gt;                       1032088    170408    809252  18% /mnt/tmp2&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Of course, you could also use pax,&lt;BR /&gt;which is more flexible in on-the-fly path manipulation etc.&lt;BR /&gt;&lt;BR /&gt;I just wanted to demonstrate what a great feature snapshots are: they can be applied to living systems (e.g. an updating database)&lt;BR /&gt;for very fast snapshot copies for test environments etc.&lt;BR /&gt;After use you just unmount the snapshot and destroy it.&lt;BR /&gt;&lt;BR /&gt;Btw, an lvdisplay on the snapshot shows you how much of it is currently used up to store the differences of the snapshot volume since the snapshot was taken.&lt;BR /&gt;If you have snapshotted a very active filesystem and you run into danger of filling the snapshot up (at which point it is rendered useless), you may always lvextend the snapshot to accommodate more space.&lt;BR /&gt;As you can see, my snapshot has been far too large at 256 MB.&lt;BR /&gt;During the time I took the dump I used up less than 5% of it.&lt;BR /&gt;&lt;BR /&gt;# lvdisplay /dev/vgrootX/lvsnap |grep -i allocated&lt;BR /&gt;  Allocated to snapshot  3.47% &lt;BR /&gt;&lt;BR /&gt;# pwd&lt;BR /&gt;/mnt/tmp1&lt;BR /&gt;# cd&lt;BR /&gt;# umount /mnt/tmp1&lt;BR /&gt;# lvremove -f /dev/vgrootX/lvsnap &lt;BR /&gt;  Logical volume "lvsnap" successfully removed&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 30 Jan 2007 08:19:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/rhel4-linux-boot-disk-mirroring/m-p/3931284#M26776</guid>
      <dc:creator>Ralph Grothe</dc:creator>
      <dc:date>2007-01-30T08:19:05Z</dc:date>
    </item>
  </channel>
</rss>

