<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: MC/SG Mirroring in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/mc-sg-mirroring/m-p/3675353#M699561</link>
    <description>1. One of the mirrored disks failed and a new disk has been installed. Can this be done without shutting down the cluster nodes?&lt;BR /&gt;&lt;BR /&gt;If patches PHKL_32095 and PHCO_31709 (or later versions) are installed, a new option (-a n/y) is added to pvchange to detach and re-attach a PV in the volume group. This option can be used to replace a mirror disk without interrupting the operation of a package.&lt;BR /&gt;Whitepaper discussing it:&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/7161/LVM_OLR_whitepaper.pdf" target="_blank"&gt;http://docs.hp.com/en/7161/LVM_OLR_whitepaper.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;2. Adding an additional new disk and setting up mirroring.&lt;BR /&gt;Assuming you are using LVM, this document describes how to add a disk to a Serviceguard volume group: UXSGLVKBAN00000002&lt;BR /&gt;TITLE: Adding A Disk To A Volume Group In A ServiceGuard Package&lt;BR /&gt;&lt;BR /&gt;3. Can I deactivate the VG and mount it on one server only without starting the cluster package?&lt;BR /&gt;&lt;BR /&gt;When a cluster is down, a VG can be de-clustered and activated normally:&lt;BR /&gt;# vgchange -c n vg??&lt;BR /&gt;# vgchange -a y vg??&lt;BR /&gt;&lt;BR /&gt;To get the VG re-clustered, use cmapplyconf on the cluster configuration file.&lt;BR /&gt;&lt;BR /&gt;If the cluster is functional but you need to work on the LVM application data, consider editing the package control script, commenting out the customer_defined_run_cmds and customer_defined_halt_cmds, and running the package control script manually: &lt;PKG.CNTL&gt; start (use "stop" to halt it)&lt;BR /&gt;&lt;BR /&gt;4. Adding an additional volume group to the cluster node:&lt;BR /&gt;- Create the VG and its logical volumes on one node&lt;BR /&gt;- vgimport the VG on the second node (see the document in item 2 above)&lt;BR /&gt;- Edit the cluster configuration file, adding a VOLUME_GROUP reference for the new VG&lt;BR /&gt;- Either cmapplyconf the file, or manually cluster the VG if the node is running cmcld:&lt;BR /&gt;# vgchange -c y vg??&lt;BR /&gt;- Edit the package control script, adding the new VG and lvol references (remember to increment the VG[?] and LV[?] indices)&lt;BR /&gt;- Copy the script change to the adoptive nodes&lt;BR /&gt;- Test the updated script.</description>
    <pubDate>Tue, 22 Nov 2005 08:53:03 GMT</pubDate>
    <dc:creator>Stephen Doud</dc:creator>
    <dc:date>2005-11-22T08:53:03Z</dc:date>
    <item>
      <title>MC/SG Mirroring</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mc-sg-mirroring/m-p/3675351#M699559</link>
      <description>I have two servers running HP-UX 11.23 on the IA platform. Both servers are connected to an MSA 30. The shared volume currently uses mirroring to provide redundancy. My only MC/SG experience is with a VA setup.&lt;BR /&gt;&lt;BR /&gt;I am wondering what the steps are for the following scenarios.&lt;BR /&gt;&lt;BR /&gt;1. One of the mirrored disks fails and a new disk is installed. Can I do this without shutting down the cluster nodes?&lt;BR /&gt;&lt;BR /&gt;2. Adding an additional new disk and setting up mirroring.&lt;BR /&gt;&lt;BR /&gt;3. In case the cluster needs some troubleshooting, can I deactivate the VG and mount it on one server only without starting the cluster package?&lt;BR /&gt;&lt;BR /&gt;4. Adding an additional volume group to the cluster node</description>
      <pubDate>Sun, 20 Nov 2005 20:43:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mc-sg-mirroring/m-p/3675351#M699559</guid>
      <dc:creator>kholikt</dc:creator>
      <dc:date>2005-11-20T20:43:20Z</dc:date>
    </item>
    <item>
      <title>Re: MC/SG Mirroring</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mc-sg-mirroring/m-p/3675352#M699560</link>
      <description>1) Yes, the procedure is the same. Of course, the disk replacement needs to be done on the currently active node.&lt;BR /&gt;&lt;BR /&gt;2) If you are adding new disks (PVs) to an existing VG, then you do the add on the currently active node. You will then need to vgexport and vgimport the revised VG on each adoptive node.&lt;BR /&gt;&lt;BR /&gt;3) Yes. Note that the VG is now cluster-aware, so the cluster itself should remain up, although the particular package can be down.&lt;BR /&gt;&lt;BR /&gt;4) If this is a shared VG, then you must import the VG on all nodes; if not, then no additional work is required.&lt;BR /&gt;&lt;BR /&gt;All of this is covered in the MC/SG manuals.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Sun, 20 Nov 2005 21:23:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mc-sg-mirroring/m-p/3675352#M699560</guid>
      <dc:creator>A. Clay Stephenson</dc:creator>
      <dc:date>2005-11-20T21:23:10Z</dc:date>
    </item>
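The vgexport/vgimport step described in point 2 of the reply above can be sketched as follows. This is a minimal sketch of the classic HP-UX map-file procedure; the VG name (/dev/vg01), device paths, node name, and minor number are illustrative placeholders, not values from the thread.

```shell
# On the currently active node: write a map file describing the revised VG
# (-p preview only, -s record the VGID so devices are matched by ID, -m map file)
vgexport -p -s -m /tmp/vg01.map /dev/vg01

# Copy the map file to each adoptive node
rcp /tmp/vg01.map adoptive-node:/tmp/vg01.map

# On each adoptive node: remove the stale VG definition and re-import it
vgexport /dev/vg01
mkdir -p /dev/vg01
mknod /dev/vg01/group c 64 0x010000   # minor number must be unique per VG on the node
vgimport -s -m /tmp/vg01.map /dev/vg01
```

With -s, vgimport scans for the PVs by VGID rather than relying on identical device file names on every node, which is why the map file approach is preferred on multi-node Serviceguard configurations.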
    <item>
      <title>Re: MC/SG Mirroring</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mc-sg-mirroring/m-p/3675353#M699561</link>
      <description>1. One of the mirrored disks failed and a new disk has been installed. Can this be done without shutting down the cluster nodes?&lt;BR /&gt;&lt;BR /&gt;If patches PHKL_32095 and PHCO_31709 (or later versions) are installed, a new option (-a n/y) is added to pvchange to detach and re-attach a PV in the volume group. This option can be used to replace a mirror disk without interrupting the operation of a package.&lt;BR /&gt;Whitepaper discussing it:&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/7161/LVM_OLR_whitepaper.pdf" target="_blank"&gt;http://docs.hp.com/en/7161/LVM_OLR_whitepaper.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;2. Adding an additional new disk and setting up mirroring.&lt;BR /&gt;Assuming you are using LVM, this document describes how to add a disk to a Serviceguard volume group: UXSGLVKBAN00000002&lt;BR /&gt;TITLE: Adding A Disk To A Volume Group In A ServiceGuard Package&lt;BR /&gt;&lt;BR /&gt;3. Can I deactivate the VG and mount it on one server only without starting the cluster package?&lt;BR /&gt;&lt;BR /&gt;When a cluster is down, a VG can be de-clustered and activated normally:&lt;BR /&gt;# vgchange -c n vg??&lt;BR /&gt;# vgchange -a y vg??&lt;BR /&gt;&lt;BR /&gt;To get the VG re-clustered, use cmapplyconf on the cluster configuration file.&lt;BR /&gt;&lt;BR /&gt;If the cluster is functional but you need to work on the LVM application data, consider editing the package control script, commenting out the customer_defined_run_cmds and customer_defined_halt_cmds, and running the package control script manually: &lt;PKG.CNTL&gt; start (use "stop" to halt it)&lt;BR /&gt;&lt;BR /&gt;4. Adding an additional volume group to the cluster node:&lt;BR /&gt;- Create the VG and its logical volumes on one node&lt;BR /&gt;- vgimport the VG on the second node (see the document in item 2 above)&lt;BR /&gt;- Edit the cluster configuration file, adding a VOLUME_GROUP reference for the new VG&lt;BR /&gt;- Either cmapplyconf the file, or manually cluster the VG if the node is running cmcld:&lt;BR /&gt;# vgchange -c y vg??&lt;BR /&gt;- Edit the package control script, adding the new VG and lvol references (remember to increment the VG[?] and LV[?] indices)&lt;BR /&gt;- Copy the script change to the adoptive nodes&lt;BR /&gt;- Test the updated script.</description>
      <pubDate>Tue, 22 Nov 2005 08:53:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mc-sg-mirroring/m-p/3675353#M699561</guid>
      <dc:creator>Stephen Doud</dc:creator>
      <dc:date>2005-11-22T08:53:03Z</dc:date>
    </item>
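The online mirror-disk replacement described in point 1 of the reply above (pvchange -a, available with PHKL_32095/PHCO_31709) can be sketched end to end as follows. Device and VG paths are illustrative placeholders, and the exact procedure should be checked against the linked LVM OLR whitepaper.

```shell
# Detach the failed PV so LVM stops sending I/O to it
# (requires the PHKL_32095/PHCO_31709 patch level mentioned above)
pvchange -a n /dev/dsk/c2t1d0

# ...physically replace the disk, then restore the LVM configuration
# (headers) onto the new disk from the vgcfgbackup data...
vgcfgrestore -n /dev/vg01 /dev/rdsk/c2t1d0

# Re-attach the PV; LVM resynchronizes the stale mirror extents
pvchange -a y /dev/dsk/c2t1d0

# Verify that the mirrors are current again
vgdisplay -v /dev/vg01
```

Because the package keeps running throughout, this avoids the halt/restart cycle that older replacement procedures required.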
  </channel>
</rss>

