<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Dynamic Root Disk Failure in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/dynamic-root-disk-failure/m-p/5027015#M429918</link>
    <description>&lt;BR /&gt;You are seeing a bug in the clone unmount that occurs when you have a file system mounted under another mount point.  This defect will be fixed in the next release.&lt;BR /&gt;&lt;BR /&gt;As a workaround until the defect is fixed, you can issue the "drd umount" command whenever you get the unmount failure.  Since /var/adm/crash and /var/tmp will have been unmounted by the first attempt, the next unmount will succeed.  You will need to do this after unmount failures in the "drd clone", "drd umount", and "drd runcmd" commands.&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Tue, 06 Feb 2007 19:44:11 GMT</pubDate>
    <dc:creator>Judy Wathen</dc:creator>
    <dc:date>2007-02-06T19:44:11Z</dc:date>
    <item>
      <title>Dynamic Root Disk Failure</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/dynamic-root-disk-failure/m-p/5027013#M429916</link>
      <description>Hello all, &lt;BR /&gt;I downloaded and installed DRD on my brand new 11.23 Itanium server. The server is fully patched and has nothing running on it yet.&lt;BR /&gt;When I run the command everything seems to work up to the point it begins to unmount the DRD lvols:&lt;BR /&gt;&lt;BR /&gt;root@dnux048: /roots # drd clone -t /dev/dsk/c3t0d0  &lt;BR /&gt;&lt;BR /&gt;=======  02/06/07 12:51:35 MST  BEGIN Clone System Image (user=root)&lt;BR /&gt;         (jobid=dnux048)&lt;BR /&gt;&lt;BR /&gt;       * Reading Current System Information&lt;BR /&gt;       * Selecting System Image To Clone&lt;BR /&gt;       * Selecting Target Disk&lt;BR /&gt;       * Selecting Volume Manager For New System Image&lt;BR /&gt;       * Analyzing For System Image Cloning&lt;BR /&gt;       * Creating New File Systems&lt;BR /&gt;       * Copying File Systems To New System Image&lt;BR /&gt;       * Making New System Image Bootable&lt;BR /&gt;       * Unmounting New System Image Clone&lt;BR /&gt;ERROR:   Unmounting the file system fails.&lt;BR /&gt;         - Unmounting the clone image fails.&lt;BR /&gt;         - The command "/usr/bin/sh -c&lt;BR /&gt;           /var/opt/drd/tmp/drdXTH5dGVL/.drdvgchangecmd " fails with the return&lt;BR /&gt;           code 16. 
The error message from the command is "vgchange: Couldn't&lt;BR /&gt;           deactivate volume group "drd00":&lt;BR /&gt;           Device busy&lt;BR /&gt;           "&lt;BR /&gt;       * Unmounting New System Image Clone failed with 1 error.&lt;BR /&gt;&lt;BR /&gt;=======  02/06/07 13:27:23 MST  END Clone System Image failed with 1 error.&lt;BR /&gt;         (user=root)  (jobid=dnux048)&lt;BR /&gt;&lt;BR /&gt;A bdf before the run:&lt;BR /&gt;root@dnux004: /roots # bdf&lt;BR /&gt;Filesystem          kbytes    used   avail %used Mounted on&lt;BR /&gt;/dev/vg00/lvol3     581632  316088  263520   55% /&lt;BR /&gt;/dev/vg00/lvol1     360448  144440  214416   40% /stand&lt;BR /&gt;/dev/vg00/lvol12   4710400 2019032 2670416   43% /var&lt;BR /&gt;/dev/vg00/lvol10   1024000   20495  943277    2% /var/tmp&lt;BR /&gt;/dev/vg00/lvol9    4710400   17625 4399484    0% /var/adm/crash&lt;BR /&gt;/dev/vg00/lvol11   6348800 2640848 3679024   42% /usr&lt;BR /&gt;/dev/vg00/lvol8     204800    8800  194528    4% /tmp&lt;BR /&gt;/dev/vg00/lvol7     262144   25899  221513   10% /roots&lt;BR /&gt;/dev/vg00/lvol6    1024000   37023  925319    4% /patches&lt;BR /&gt;/dev/vg00/lvol5    6660096 3751112 2886304   57% /opt&lt;BR /&gt;/dev/vg00/lvol4     262144    9296  250880    4% /home&lt;BR /&gt;&lt;BR /&gt;A BDF after the run:&lt;BR /&gt;root@dnux004: /roots # bdf&lt;BR /&gt;Filesystem          kbytes    used   avail %used Mounted on&lt;BR /&gt;/dev/vg00/lvol3     581632  316088  263520   55% /&lt;BR /&gt;/dev/vg00/lvol1     360448  144440  214416   40% /stand&lt;BR /&gt;/dev/vg00/lvol12   4710400 2019032 2670416   43% /var&lt;BR /&gt;/dev/vg00/lvol10   1024000   20495  943277    2% /var/tmp&lt;BR /&gt;/dev/vg00/lvol9    4710400   17625 4399484    0% /var/adm/crash&lt;BR /&gt;/dev/vg00/lvol11   6348800 2640848 3679024   42% /usr&lt;BR /&gt;/dev/vg00/lvol8     204800    8800  194528    4% /tmp&lt;BR /&gt;/dev/vg00/lvol7     262144   25899  221513   10% /roots&lt;BR /&gt;/dev/vg00/lvol6   
 1024000   37023  925319    4% /patches&lt;BR /&gt;/dev/vg00/lvol5    6660096 3751112 2886304   57% /opt&lt;BR /&gt;/dev/vg00/lvol4     262144    9296  250880    4% /home&lt;BR /&gt;/dev/drd00/lvol3   4710400 2019032 2670416   43% /var/opt/drd/mnts/sysimage_001&lt;BR /&gt;/dev/drd00/lvol12   4710400 2019032 2670416   43% /var/opt/drd/mnts/sysimage_001/var&lt;BR /&gt;&lt;BR /&gt;Now I can manually unmount the DRD mount points, and I can use the newly created DRD disk to boot from, yet every time I run the command (I have tried it three times on two different Itanium servers) it hangs in basically the same place: always trying to unmount the /var file systems.&lt;BR /&gt;&lt;BR /&gt;Anyone else experience this? &lt;BR /&gt;Any thoughts on what may be causing this?&lt;BR /&gt;&lt;BR /&gt;Thanks!&lt;BR /&gt;Steve&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 06 Feb 2007 19:06:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/dynamic-root-disk-failure/m-p/5027013#M429916</guid>
      <dc:creator>UNIX Engr</dc:creator>
      <dc:date>2007-02-06T19:06:12Z</dc:date>
    </item>
    <item>
      <title>Re: Dynamic Root Disk Failure</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/dynamic-root-disk-failure/m-p/5027014#M429917</link>
      <description>Hi Steve:&lt;BR /&gt;&lt;BR /&gt;To begin, I have no experience with this product.&lt;BR /&gt;&lt;BR /&gt;It would be interesting to see what files might be open on the DRD mountpoints and by what processes.  I'd use 'lsof' to probe this.&lt;BR /&gt;&lt;BR /&gt;Given that you say you can manually unmount the filesystems, I suspect that you are not going to find any open files or processes using these filesystems, though.&lt;BR /&gt;&lt;BR /&gt;Regards!&lt;BR /&gt;&lt;BR /&gt;...JRF...</description>
      <pubDate>Tue, 06 Feb 2007 19:42:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/dynamic-root-disk-failure/m-p/5027014#M429917</guid>
      <dc:creator>James R. Ferguson</dc:creator>
      <dc:date>2007-02-06T19:42:32Z</dc:date>
    </item>
    <item>
      <title>Re: Dynamic Root Disk Failure</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/dynamic-root-disk-failure/m-p/5027015#M429918</link>
      <description>&lt;BR /&gt;You are seeing a bug in the clone unmount that occurs when you have a file system mounted under another mount point.  This defect will be fixed in the next release.&lt;BR /&gt;&lt;BR /&gt;As a workaround until the defect is fixed, you can issue the "drd umount" command whenever you get the unmount failure.  Since /var/adm/crash and /var/tmp will have been unmounted by the first attempt, the next unmount will succeed.  You will need to do this after unmount failures in the "drd clone", "drd umount", and "drd runcmd" commands.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 06 Feb 2007 19:44:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/dynamic-root-disk-failure/m-p/5027015#M429918</guid>
      <dc:creator>Judy Wathen</dc:creator>
      <dc:date>2007-02-06T19:44:11Z</dc:date>
    </item>
    <item>
      <title>Re: Dynamic Root Disk Failure</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/dynamic-root-disk-failure/m-p/5027016#M429919</link>
      <description>Thanks for the info!&lt;BR /&gt;I have scripted a workaround by executing a "drd umount" command after the "drd clone" command.&lt;BR /&gt;&lt;BR /&gt;Cheers!&lt;BR /&gt;Steve</description>
      <pubDate>Wed, 07 Feb 2007 12:03:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/dynamic-root-disk-failure/m-p/5027016#M429919</guid>
      <dc:creator>UNIX Engr</dc:creator>
      <dc:date>2007-02-07T12:03:03Z</dc:date>
    </item>
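<!-- The scripted workaround described in the post above can be sketched as a small
     POSIX shell wrapper. This is illustrative only: the "drd clone"/"drd umount"
     commands and the disk path come from the thread, while run_with_retry is a
     generic run-then-fallback helper, not part of DRD itself.

```shell
#!/bin/sh
# Sketch of the workaround from this thread: if "drd clone" fails with
# the known unmount defect, issue "drd umount" once to finish unmounting
# the clone image.  run_with_retry is a generic helper, not a DRD command.
run_with_retry() {
    main=$1
    fallback=$2
    if ! $main; then   # main command failed...
        $fallback      # ...so run the fallback once
    fi
}

# Real HP-UX usage (target disk path taken from the original post):
# run_with_retry "drd clone -t /dev/dsk/c3t0d0" "drd umount"
```

Running the fallback only on failure means a clean clone is left untouched, while a clone hit by the unmount defect gets the follow-up "drd umount" automatically. -->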
    <item>
      <title>Re: Dynamic Root Disk Failure</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/dynamic-root-disk-failure/m-p/5027017#M429920</link>
      <description>Steve - &lt;BR /&gt;&lt;BR /&gt;Just wanted to let you know that a new release of DRD that contains a fix to this defect is now available from HP's software depot.  You can download Release A.1.0.18.245, issued in February 2007, by going to the Downloads and Patches section of the DRD documentation site:&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/DRD/" target="_blank"&gt;http://docs.hp.com/en/DRD/&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Judy</description>
      <pubDate>Fri, 02 Mar 2007 19:11:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/dynamic-root-disk-failure/m-p/5027017#M429920</guid>
      <dc:creator>Judy Wathen</dc:creator>
      <dc:date>2007-03-02T19:11:49Z</dc:date>
    </item>
  </channel>
</rss>

