<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic MIRROR DISKS in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255898#M552009</link>
    <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Can someone point me to a document that can help me out?&lt;BR /&gt;&lt;BR /&gt;I have a 2-node cluster and 2 EVAs.&lt;BR /&gt;&lt;BR /&gt;I have presented the disks on both EVAs to the nodes.&lt;BR /&gt;&lt;BR /&gt;I now want to mirror the disks between EVA 1 and EVA 2 on node 1, and then do the same on node 2, so I have a mirrored disk with the data residing on both EVA 1 and EVA 2.&lt;BR /&gt;&lt;BR /&gt;If there is a failure on EVA 1, the cluster will still see the data on EVA 2.&lt;BR /&gt;&lt;BR /&gt;If there is then a failure on, say, node 1, the cluster will fail over to node 2 and see the data on EVA 2.&lt;BR /&gt;</description>
    <pubDate>Mon, 20 Sep 2010 11:55:53 GMT</pubDate>
    <dc:creator>wayne_104</dc:creator>
    <dc:date>2010-09-20T11:55:53Z</dc:date>
    <item>
      <title>MIRROR DISKS</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255898#M552009</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Can someone point me to a document that can help me out?&lt;BR /&gt;&lt;BR /&gt;I have a 2-node cluster and 2 EVAs.&lt;BR /&gt;&lt;BR /&gt;I have presented the disks on both EVAs to the nodes.&lt;BR /&gt;&lt;BR /&gt;I now want to mirror the disks between EVA 1 and EVA 2 on node 1, and then do the same on node 2, so I have a mirrored disk with the data residing on both EVA 1 and EVA 2.&lt;BR /&gt;&lt;BR /&gt;If there is a failure on EVA 1, the cluster will still see the data on EVA 2.&lt;BR /&gt;&lt;BR /&gt;If there is then a failure on, say, node 1, the cluster will fail over to node 2 and see the data on EVA 2.&lt;BR /&gt;</description>
      <pubDate>Mon, 20 Sep 2010 11:55:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255898#M552009</guid>
      <dc:creator>wayne_104</dc:creator>
      <dc:date>2010-09-20T11:55:53Z</dc:date>
    </item>
    <item>
      <title>Re: MIRROR DISKS</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255899#M552010</link>
      <description>I would simply create equal LUNs on both arrays and use LVM to mirror the LVOLs.&lt;BR /&gt;&lt;BR /&gt;In case of an EVA failure, the server will continue with the other EVA; in case of a cluster switch, the other node will activate the VG and run the application.</description>
      <pubDate>Mon, 20 Sep 2010 12:07:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255899#M552010</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2010-09-20T12:07:09Z</dc:date>
    </item>
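[Editor's note] Torsten's suggestion can be sketched as the command sequence below. This is only an illustration under assumptions: the VG name (vgdata), the LVOL size, the minor number, and the device files diskA/diskB (one LUN per EVA) are hypothetical placeholders, not taken from the thread; MirrorDisk/UX must be installed for the -m option to work.

```shell
# Sketch: mirror an LVOL across two arrays with HP-UX LVM (11.31 agile DSFs).
# /dev/disk/diskA (EVA 1 LUN) and /dev/disk/diskB (EVA 2 LUN) are placeholders.

pvcreate /dev/rdisk/diskA               # initialize the LUN from EVA 1
pvcreate /dev/rdisk/diskB               # initialize the LUN from EVA 2

mkdir /dev/vgdata
mknod /dev/vgdata/group c 64 0x030000   # minor number must be unique on this node
vgcreate /dev/vgdata /dev/disk/diskA /dev/disk/diskB

lvcreate -L 1024 -n lvol1 /dev/vgdata             # 1 GB LVOL, first copy
lvextend -m 1 /dev/vgdata/lvol1 /dev/disk/diskB   # add the mirror copy on the EVA 2 LUN
```

To guarantee that the two mirror copies always land on different arrays, physical volume groups with a PVG-strict allocation policy (lvcreate -s g) are commonly used instead of naming the PV on lvextend by hand.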
    <item>
      <title>Re: MIRROR DISKS</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255900#M552011</link>
      <description>As I said, on the EVAs I have presented equal partitions to both nodes.&lt;BR /&gt;&lt;BR /&gt;On node 1 I have created the mirror and extended the VG with no problems. But when I switch to the second node, it does not like the VG.&lt;BR /&gt;</description>
      <pubDate>Mon, 20 Sep 2010 12:15:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255900#M552011</guid>
      <dc:creator>wayne_104</dc:creator>
      <dc:date>2010-09-20T12:15:49Z</dc:date>
    </item>
    <item>
      <title>Re: MIRROR DISKS</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255901#M552012</link>
      <description>&amp;gt;&amp;gt;  second node does not like the vg&lt;BR /&gt;&lt;BR /&gt;Details? Messages?&lt;BR /&gt;&lt;BR /&gt;Did you vgimport the modified VG into the second node?</description>
      <pubDate>Mon, 20 Sep 2010 12:40:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255901#M552012</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2010-09-20T12:40:10Z</dc:date>
    </item>
    <item>
      <title>Re: MIRROR DISKS</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255902#M552013</link>
      <description>What you are describing is what HP calls an Extended Cluster (formerly a Campus Cluster). Check the HP docs site; there is a lot of info on how to get it working.</description>
      <pubDate>Mon, 20 Sep 2010 12:46:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255902#M552013</guid>
      <dc:creator>likid0</dc:creator>
      <dc:date>2010-09-20T12:46:04Z</dc:date>
    </item>
    <item>
      <title>Re: MIRROR DISKS</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255903#M552014</link>
      <description>You did not say what sort of problem the other node had when it attempted to access the VG. That would be helpful to know.&lt;BR /&gt;&lt;BR /&gt;As others have described, create a volume group that spans both EVAs and mirror the LVOLs on one EVA to the other EVA. Having done this, you may need to vgexport and re-import the VG on the other node, since the LVM commands that created the VG, LVOLs and mirrors were only performed on one node.&lt;BR /&gt;&lt;BR /&gt;DETAIL:&lt;BR /&gt;The /etc/lvmtab file is the map that LVM uses to activate a volume group. Some of the content of this binary file can be viewed using 'strings /etc/lvmtab'. If you find a discrepancy in the number of disk special files (DSFs) per VG across nodes, you need to re-import the VG on the node that has fewer DSFs. The procedure is described in other threads.&lt;BR /&gt;&lt;BR /&gt;Once /etc/lvmtab lists all EVA LUNs with each VG, a single EVA failure should be transparent, as LVM switches to the alternate links.&lt;BR /&gt;</description>
      <pubDate>Mon, 20 Sep 2010 16:09:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255903#M552014</guid>
      <dc:creator>Stephen Doud</dc:creator>
      <dc:date>2010-09-20T16:09:56Z</dc:date>
    </item>
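[Editor's note] The lvmtab comparison Stephen describes could be done along these lines; the remote hostname node2 and the temp file names are placeholders:

```shell
# /etc/lvmtab is binary; strings(1) extracts the VG names and the disk
# special files (DSFs) recorded for each VG. Capture it on both nodes
# and compare: a difference in DSF count per VG means one node needs
# a vgexport/vgimport of the modified VG.
strings /etc/lvmtab > /tmp/lvmtab.local
ssh node2 'strings /etc/lvmtab' > /tmp/lvmtab.remote
diff /tmp/lvmtab.local /tmp/lvmtab.remote
```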
    <item>
      <title>Re: MIRROR DISKS</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255904#M552015</link>
      <description>Got it working.&lt;BR /&gt;&lt;BR /&gt;HALT THE CLUSTER&lt;BR /&gt;# cmhaltcl -f&lt;BR /&gt;# vi /etc/rc.config.d/cmcluster&lt;BR /&gt;AUTOSTART_CMCLD=0&lt;BR /&gt;&lt;BR /&gt;# pvcreate -f /dev/rdisk/disk38&lt;BR /&gt;OUTPUT&lt;BR /&gt;Physical volume "/dev/rdisk/disk38" has been successfully created.&lt;BR /&gt;&lt;BR /&gt;# vgchange -c n vg02&lt;BR /&gt;OUTPUT&lt;BR /&gt;Performed Configuration change.&lt;BR /&gt;Volume group "vg02" has been successfully changed.&lt;BR /&gt;&lt;BR /&gt;# vgchange -a y vg02&lt;BR /&gt;OUTPUT&lt;BR /&gt;Activated volume group.&lt;BR /&gt;Volume group "vg02" has been successfully changed.&lt;BR /&gt;&lt;BR /&gt;# vgextend /dev/vg02 /dev/disk/disk23&lt;BR /&gt;OUTPUT&lt;BR /&gt;Volume group "/dev/vg02" has been successfully extended.&lt;BR /&gt;Volume Group configuration for /dev/vg02 has been saved in /etc/lvmconf/vg&lt;BR /&gt;&lt;BR /&gt;# lvextend -m 1 /dev/vg02/lvol1 /dev/disk/disk23&lt;BR /&gt;OUTPUT&lt;BR /&gt;The newly allocated mirrors are now being synchronized. This operation will take some time. Please wait ...&lt;BR /&gt;&lt;BR /&gt;DEACTIVATE THE VG&lt;BR /&gt;# vgchange -a n /dev/vgnn&lt;BR /&gt;&lt;BR /&gt;# vgexport -v -p -s -m vg02.map /dev/vg02&lt;BR /&gt;&lt;BR /&gt;# scp vg02.map node2:/&lt;BR /&gt;OUTPUT&lt;BR /&gt;The authenticity of host 'alsmt1 (10.192.100.19)' can't be established.&lt;BR /&gt;RSA key fingerprint is 60:5b:0d:95:fc:aa:60:f9:42:6a:1c:a1:75:a2:fe:f7.&lt;BR /&gt;Are you sure you want to continue connecting (yes/no)? yes&lt;BR /&gt;Warning: Permanently added 'alsmt1,10.192.100.19' (RSA) to the list of known hosts.&lt;BR /&gt;Password:&lt;BR /&gt;vg02.map  100%  22  0.0KB/s  0&lt;BR /&gt;&lt;BR /&gt;ON THE SECOND NODE&lt;BR /&gt;&lt;BR /&gt;# mkdir /dev/vg02&lt;BR /&gt;# mknod /dev/vg02/group c 64 0x020000 (note the minor number)&lt;BR /&gt;# vgimport -v -s -m /vg02.map vg02&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; Agile Multipathing of HP-UX 11.31 is not used by default after import (HP-UX 11.31 Bug ?!). The volume group uses alternate LVM Paths.&lt;BR /&gt;Solution:&lt;BR /&gt;# vgchange -a n vg02&lt;BR /&gt;# vgchange -a y vg02&lt;BR /&gt;&lt;BR /&gt;// Add the agile path&lt;BR /&gt;# vgextend /dev/vg02 /dev/disk/disk39&lt;BR /&gt;NOTE: this disk number may be different on the second node.&lt;BR /&gt;&lt;BR /&gt;// Remove the alternate paths&lt;BR /&gt;# vgreduce vg02 /dev/dsk/c16t0d1 /dev/dsk/c10t0d1&lt;BR /&gt;OUTPUT&lt;BR /&gt;Device file path "/dev/dsk/c16t0d1" is a primary link.&lt;BR /&gt;Removing primary link and switching to an alternate link.&lt;BR /&gt;Device file path "/dev/dsk/c10t0d1" is an alternate path.&lt;BR /&gt;Volume group "vg02" has been successfully reduced.&lt;BR /&gt;Volume Group configuration for /dev/vg02 has been saved in /etc/lvmconf/vg02.conf&lt;BR /&gt;&lt;BR /&gt;BACK UP THE VG&lt;BR /&gt;# vgchange -a r vg02&lt;BR /&gt;# vgcfgbackup /dev/vg02&lt;BR /&gt;Volume Group configuration for /dev/vg02 has been saved in /etc/lvmconf/vg02.conf&lt;BR /&gt;&lt;BR /&gt;DEACTIVATE AND RE-ENABLE CLUSTER MODE&lt;BR /&gt;# vgchange -a n /dev/vgnn (for all VGs)&lt;BR /&gt;# vgchange -c y /dev/vgnn (for all VGs)&lt;BR /&gt;&lt;BR /&gt;Change in /etc/cmcluster/IALCH_oraDB/IALCH_oraDB.conf:&lt;BR /&gt;# vi /etc/cmcluster/IALCH_oraDB/IALCH_oraDB.conf&lt;BR /&gt;vgchange_cmd "vgchange -a e -q n"&lt;BR /&gt;&lt;BR /&gt;APPLY THE PACKAGE&lt;BR /&gt;# cmcheckconf -P $SGCONF/IALCH_oraDB/IALCH_oraDB.conf&lt;BR /&gt;If there are no errors, apply the package:&lt;BR /&gt;# cmapplyconf -P $SGCONF/IALCH_oraDB/IALCH_oraDB.conf&lt;BR /&gt;OUTPUT&lt;BR /&gt;Validation for package IALCH_oraDB succeeded via /etc/cmcluster/scripts/mscripts/master_control_script.sh.&lt;BR /&gt;Modify the package configuration ([y]/n)? y&lt;BR /&gt;Completed the cluster update&lt;BR /&gt;# vi /etc/rc.config.d/cmcluster&lt;BR /&gt;AUTOSTART_CMCLD=1&lt;BR /&gt;Reboot the systems.&lt;BR /&gt;</description>
      <pubDate>Tue, 21 Sep 2010 11:18:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255904#M552015</guid>
      <dc:creator>wayne_104</dc:creator>
      <dc:date>2010-09-21T11:18:49Z</dc:date>
    </item>
    <item>
      <title>Re: MIRROR DISKS</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255905#M552016</link>
      <description># vgimport -v -s -m /vg02.map vg02&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; Agile Multipathing of HP-UX 11.31 is not used by default after import (HP-UX 11.31 Bug ?!). The volume group uses alternate LVM Paths.&lt;BR /&gt;&lt;BR /&gt;Not a bug. It's just that 11.31 uses the legacy devices unless you explicitly request to use the new ones. I guess this is HP's way to maximize compatibility with legacy scripts and software.&lt;BR /&gt;&lt;BR /&gt;You should have used the -N option:&lt;BR /&gt;&lt;BR /&gt;# vgimport -N -v -s -m /vg02.map vg02&lt;BR /&gt;&lt;BR /&gt;MK</description>
      <pubDate>Wed, 22 Sep 2010 06:03:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255905#M552016</guid>
      <dc:creator>Matti_Kurkela</dc:creator>
      <dc:date>2010-09-22T06:03:48Z</dc:date>
    </item>
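[Editor's note] Putting Matti's correction together with the earlier procedure, the import on the second node would look roughly like this; the minor number and map file path follow the values used earlier in the thread, and must match the local system:

```shell
# On the second node: create the group file, then import with -N so that
# /etc/lvmtab is populated with agile (persistent) DSFs such as
# /dev/disk/diskNN instead of legacy /dev/dsk/cXtYdZ paths. With -N there
# is no need to vgextend/vgreduce the paths by hand afterwards.
mkdir /dev/vg02
mknod /dev/vg02/group c 64 0x020000   # minor number must not clash with other VGs on this node
vgimport -N -v -s -m /vg02.map vg02
```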
    <item>
      <title>Re: MIRROR DISKS</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255906#M552017</link>
      <description>see above</description>
      <pubDate>Wed, 22 Sep 2010 06:09:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255906#M552017</guid>
      <dc:creator>wayne_104</dc:creator>
      <dc:date>2010-09-22T06:09:03Z</dc:date>
    </item>
    <item>
      <title>Re: MIRROR DISKS</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255907#M552018</link>
      <description>Thanks Matti,&lt;BR /&gt;&lt;BR /&gt;gave you 10 points for that.</description>
      <pubDate>Wed, 22 Sep 2010 06:09:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/mirror-disks/m-p/5255907#M552018</guid>
      <dc:creator>wayne_104</dc:creator>
      <dc:date>2010-09-22T06:09:47Z</dc:date>
    </item>
  </channel>
</rss>

