<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: PV unavailable in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390939#M535880</link>
    <description>&lt;P&gt;Thanks a lot Torsten!!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Vijay.&lt;/P&gt;</description>
    <pubDate>Tue, 15 Nov 2011 13:08:29 GMT</pubDate>
    <dc:creator>vijay alur alur</dc:creator>
    <dc:date>2011-11-15T13:08:29Z</dc:date>
    <item>
      <title>PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390631#M535867</link>
      <description>&lt;P&gt;Hi All,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have a VG with two PVs, c6t12d0 and c7t12d0. As far as I know, these two device files are actually one physical disk accessed over two paths, since the t12 and d0 parts are the same for both PVs. This VG also has two PV groups, each disk making up one PVG. The problem is that the disk c7t12d0 is unavailable in the vgdisplay and pvdisplay output. When I run pvdisplay -v c7t12d0, it shows some of the PEs as stale while the rest are current. I suspect that one of the two FC links to the disk has failed. How can I recover from this situation?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please refer to the attached pvdisplay and vgdisplay output.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please help.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Vijay&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2011 08:29:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390631#M535867</guid>
      <dc:creator>vijay alur alur</dc:creator>
      <dc:date>2011-11-15T08:29:34Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390671#M535868</link>
      <description>&lt;P&gt;&amp;gt;&amp;gt; As per my knowledge these 2 disks are actually 1 physical disk and accessed by 2 paths since the t12 and d0 are same for both the PV.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;No. They are physically separate disks/LUNs. If they were alternate links, vgdisplay on the volume group would indicate that, and it doesn't.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;They are separate disks, and the logical volume appears to be mirrored between them (you can see this because the LV has twice as many physical extents as logical extents, and pvdisplay shows exactly half of the physical extents on the failing disk).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Are you sure these disks are attached to FC controllers? You should identify the HW path of the failing disk using lssf:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;lssf /dev/dsk/c7t12d0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;and then check whether that path goes through one of your FC controllers:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;ioscan -funC fc&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Then use fcmsutil to check whether the FC controller is up and active:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;fcmsutil /dev/td0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;for example. You should also, of course, consult the "When Good Disks Go Bad" technical paper:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c01911837/c01911837.pdf" target="_blank" rel="noopener"&gt;http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c01911837/c01911837.pdf&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 18 Jun 2021 10:56:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390671#M535868</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2021-06-18T10:56:36Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390731#M535869</link>
      <description>&lt;P&gt;Hi Duncan,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks very much for your reply.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Yes, these disks are attached to an FC controller. Below is the output.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;# lssf /dev/dsk/c7t12d0&lt;BR /&gt;sdisk card instance 7 SCSI target 12 SCSI LUN 0 section 0 at address 0/3/1/0/4/0.8.0.255.0.12.0 /dev/dsk/c7t12d0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;# ioscan -fnCfc&lt;BR /&gt;Class I H/W Path Driver S/W State H/W Type Description&lt;BR /&gt;========================================================================&lt;BR /&gt;fc 0 0/2/1/0/4/0 fcd CLAIMED INTERFACE HP 2Gb PCI/PCI-X Fibre Channel FC/GigE Dual Port Combo Adapter&lt;BR /&gt;/dev/fcd0&lt;BR /&gt;fc 1 0/3/1/0/4/0 fcd CLAIMED INTERFACE HP 2Gb PCI/PCI-X Fibre Channel FC/GigE Dual Port Combo Adapter&lt;BR /&gt;/dev/fcd1&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;# /opt/fcms/bin/fcmsutil /dev/fcd1&lt;/P&gt;&lt;P&gt;Vendor ID is = 0x001077&lt;BR /&gt;Device ID is = 0x002312&lt;BR /&gt;PCI Sub-system Vendor ID is = 0x00103c&lt;BR /&gt;PCI Sub-system ID is = 0x0012c7&lt;BR /&gt;PCI Mode = PCI-X 133 MHz&lt;BR /&gt;ISP Code version = 3.2.171&lt;BR /&gt;ISP Chip version = 3&lt;BR /&gt;Topology = PRIVATE_LOOP&lt;BR /&gt;Link Speed = 2Gb&lt;BR /&gt;Local N_Port_id is = 0x000001&lt;BR /&gt;Previous N_Port_id is = 0x000001&lt;BR /&gt;Local Loop_id is = 125&lt;BR /&gt;N_Port Node World Wide Name = 0x50060b0000324d2f&lt;BR /&gt;N_Port Port World Wide Name = 0x50060b0000324d2e&lt;BR /&gt;Switch Port World Wide Name = N/A&lt;BR /&gt;Switch Node World Wide Name = N/A&lt;BR /&gt;Driver state = ONLINE&lt;BR /&gt;Hardware Path is = 0/3/1/0/4/0&lt;BR /&gt;Maximum Frame Size = 2048&lt;BR /&gt;Driver-Firmware Dump Available = NO&lt;BR /&gt;Driver-Firmware Dump Timestamp = N/A&lt;BR /&gt;Driver Version = @(#) libfcd.a HP Fibre Channel ISP 23xx Driver B.11.23.02 
/ux/core/isu/FCD/kern/src/common/wsio/fcd_init.c:Aug 31 2004,13:48:17&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The FC link is online. I suspect that the link through /dev/fcd1 fluctuated and, after connectivity was regained, this particular disk (c7t12d0) was not reinitialized; it may need a reboot. What do you think?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Is there any way I can probe and check connectivity between /dev/fcd1 and c7t12d0? I guess there is an fcmsutil command for that.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Vijay&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2011 09:31:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390731#M535869</guid>
      <dc:creator>vijay alur alur</dc:creator>
      <dc:date>2011-11-15T09:31:45Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390741#M535870</link>
      <description>&lt;P&gt;vijay,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;So the FC card is online, but is the disk?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Will it respond to a diskinfo command?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;diskinfo /dev/rdsk/c7t12d0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you read from it?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;dd if=/dev/rdsk/c7t12d0 of=/dev/null bs=8k count=1024&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you read the whole thing (this will take some time)?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;dd if=/dev/rdsk/c7t12d0 of=/dev/null bs=8k&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you get errors from that, are there other disks on this same controller, in the same disk cab as c7t12d0, which are still working? We are trying to determine whether you have a failing disk or a failing controller on the disk cab (I'm guessing these are physical disks and not LUNs, given you are operating in a private loop rather than on a point-to-point fabric).&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2011 09:40:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390741#M535870</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2011-11-15T09:40:33Z</dc:date>
    </item>
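The diskinfo/dd read test above can be wrapped in a small helper so the result is an explicit pass/fail rather than dd's exit status alone. This is a hedged sketch, not from the thread: the `read_check` function and the temp-file demo are illustrative only; on HP-UX the argument would be a raw device file such as /dev/rdsk/c7t12d0.

```shell
# Hypothetical helper (assumption: not part of the thread): wrap the dd
# read test so it reports success or failure explicitly. On HP-UX the
# target would be a raw device like /dev/rdsk/c7t12d0; demonstrated here
# on an ordinary temp file.
read_check() {
    dev="$1"
    count="${2:-1024}"
    # Read the first N 8k blocks and discard them; any I/O error fails dd.
    if dd if="$dev" of=/dev/null bs=8k count="$count" 2>/dev/null; then
        echo "READABLE: first $count x 8k blocks of $dev"
    else
        echo "FAILED: cannot read $dev"
    fi
}

# Demo on a regular file (assumption: no HP-UX device available here).
tmpf=$(mktemp)
dd if=/dev/zero of="$tmpf" bs=8k count=4 2>/dev/null
read_check "$tmpf" 4
rm -f "$tmpf"
```

A full-surface scan is the same call without `count`, exactly as Duncan's second dd line does; reads are non-destructive, which is why dd with of=/dev/null is safe even on a cluster disk.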
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390797#M535871</link>
      <description>&lt;P&gt;Hi Duncan,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks again,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I am really enjoying this exchange of knowledge with you!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;diskinfo does not give the desired output:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;# diskinfo /dev/rdsk/c7t12d0&lt;BR /&gt;diskinfo: can't open /dev/rdsk/c7t12d0: No such device or address&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Won't doing a dd on it cause any issue? This disk is part of a cluster... please let me know whether it is safe to do that.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Yes, the other disks connected to the same controller are working fine.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I want you to have a look at this:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;disk 28 0/3/1/0/4/0.8.0.255.0.12.0 sdisk NO_HW DEVICE HP 36.4GST336607FC&lt;BR /&gt;/dev/dsk/c7t12d0 /dev/rdsk/c7t12d0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;disk 27 0/2/1/0/4/0.8.0.255.0.12.0 sdisk CLAIMED DEVICE HP 36.4GST336607FC&lt;BR /&gt;/dev/dsk/c6t12d0 /dev/rdsk/c6t12d0&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Did you notice the hardware paths above? It looks like the same disk, but reached through different FC paths...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;What are your thoughts?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Vijay&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2011 10:12:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390797#M535871</guid>
      <dc:creator>vijay alur alur</dc:creator>
      <dc:date>2011-11-15T10:12:56Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390863#M535872</link>
      <description>You probably have a JBOD like a DS2405 connected. To confirm, please post the output of&lt;BR /&gt;&lt;BR /&gt;# ioscan -fn&lt;BR /&gt;&lt;BR /&gt;# echo "map" | cstm</description>
      <pubDate>Tue, 15 Nov 2011 11:26:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390863#M535872</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2011-11-15T11:26:46Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390887#M535873</link>
      <description>&lt;P&gt;Hi Torsten,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks for replying.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;It is a disk array with an FC card. I don't know the complete details, as the server is located at a remote site.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I have attached a file with the requested details.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Vijay.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2011 12:02:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390887#M535873</guid>
      <dc:creator>vijay alur alur</dc:creator>
      <dc:date>2011-11-15T12:02:34Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390893#M535874</link>
      <description>&lt;P&gt;Could you now please do&lt;/P&gt;&lt;P&gt;&amp;nbsp;# echo "sel dev 37;info;wait;il" | cstm&lt;/P&gt;&lt;P&gt;&amp;nbsp;# echo "sel dev 58;info;wait;il" | cstm&lt;/P&gt;&lt;P&gt;to get more information about the enclosure?&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2011 12:14:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390893#M535874</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2011-11-15T12:14:47Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390907#M535875</link>
      <description>&lt;P&gt;Hi Torsten,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I just wanted to confirm that these commands won't cause any disruption to the running services?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Vijay&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2011 12:40:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390907#M535875</guid>
      <dc:creator>vijay alur alur</dc:creator>
      <dc:date>2011-11-15T12:40:37Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390911#M535876</link>
      <description>No, this runs the information tool of the online diagnostics on the JBOD controllers.</description>
      <pubDate>Tue, 15 Nov 2011 12:43:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390911#M535876</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2011-11-15T12:43:56Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390917#M535877</link>
      <description>&lt;P&gt;OK Torsten,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please find attached the requested outputs.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Vijay&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2011 13:05:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390917#M535877</guid>
      <dc:creator>vijay alur alur</dc:creator>
      <dc:date>2011-11-15T13:05:42Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390925#M535878</link>
      <description>&lt;P&gt;You have 2 DS2405 disk enclosures, each with 15 disks. In the first enclosure there is a problem with the disk in slot 2 (NO_HW), and in the second with the disk in slot 12 (FAILED). Check both and replace them if needed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2011 12:58:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390925#M535878</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2011-11-15T12:58:00Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390939#M535880</link>
      <description>&lt;P&gt;Thanks a lot Torsten!!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Vijay.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2011 13:08:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390939#M535880</guid>
      <dc:creator>vijay alur alur</dc:creator>
      <dc:date>2011-11-15T13:08:29Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390945#M535883</link>
      <description>&lt;P&gt;Note: if you replace the disks online, they will remain in NO_HW status until you run fcmsutil with the replace_dsk option. The syslog will tell you the details after an ioscan.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Example: fcmsutil /dev/fcd0 replace_dsk 0x0a&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2011 13:11:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5390945#M535883</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2011-11-15T13:11:34Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5391221#M535886</link>
      <description>&lt;P&gt;Hi Torsten,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I was just going through the cstm logs. In the first disk enclosure's logs, i.e. the one with device id 37, I cannot see any fault, whereas in the second disk enclosure, with device id 58, I can see a fault for the disk in slot 12. But as you said, both disk enclosures have faulty disks.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Can you please help me with that?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;By the way, we just did a reboot of both nodes in the cluster, and now the disk is not seen in the ioscan output but is still visible in vgdisplay and /etc/lvmtab. It seems the metadata got updated after the reboot.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also, I have one doubt about the device naming: I was under the assumption that the device files c6t12d0 and c7t12d0 are physically the same disk accessed through two different controllers, since t12d0 is the same for both, i.e. one disk with two device files for two different paths. Can you help me with this as well?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks and Regards,&lt;/P&gt;&lt;P&gt;Vijay&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2011 16:30:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5391221#M535886</guid>
      <dc:creator>vijay alur alur</dc:creator>
      <dc:date>2011-11-15T16:30:40Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5391481#M535887</link>
      <description>&lt;P&gt;Look at your ioscan:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;target      5  0/2/1/0/4/0.8.0.255.0.2     tgt        NO_HW       DEVICE&lt;/PRE&gt;&lt;PRE&gt;target     32  0/3/1/0/4/0.8.0.255.0.12    tgt        NO_HW       DEVICE&lt;/PRE&gt;&lt;P&gt;You will also notice this:&lt;/P&gt;&lt;PRE&gt;disk       18  0/3/1/0/4/0.8.0.255.0.7.0   sdisk      CLAIMED     DEVICE       HPQ     BD03659532&lt;/PRE&gt;&lt;PRE&gt;disk       17  0/2/1/0/4/0.8.0.255.0.7.0   sdisk      CLAIMED     DEVICE       HP 36.4GST336607FC&lt;/PRE&gt;&lt;P&gt;Same slot, different disks. This means you have 2 enclosures. Is it a cluster?&amp;nbsp; "cmviewcl" will tell. For this reason you have an lvmpvg file! I assume you have 2 nodes: the first connected to controller A of the first JBOD, the second to controller B, and vice versa for the second JBOD.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;After a reboot, a "NO_HW" device will disappear; this is not nice, but it is normal.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Run "diskinfo" on the two present disks with the same slot number and you will see different serial numbers.&lt;/P&gt;</description>
      <pubDate>Tue, 15 Nov 2011 20:57:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5391481#M535887</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2011-11-15T20:57:40Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5392355#M535888</link>
      <description>&lt;P&gt;Hi Torsten,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If I have a failed disk and I am replacing it with a new disk, how can I make sure the new disk gets the same device file name as the old one? The OS version is HP-UX 11.23.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please suggest.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Vijay.&lt;/P&gt;</description>
      <pubDate>Wed, 16 Nov 2011 13:56:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5392355#M535888</guid>
      <dc:creator>vijay alur alur</dc:creator>
      <dc:date>2011-11-16T13:56:25Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5392631#M535889</link>
      <description>&lt;P&gt;The device special file name is bound to the slot, so it will be the same after replacement.&lt;/P&gt;</description>
      <pubDate>Wed, 16 Nov 2011 18:07:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5392631#M535889</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2011-11-16T18:07:50Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5392643#M535890</link>
      <description>&lt;P&gt;OK, thanks Torsten.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Then, after replacing the disk, I do not need to make any changes in the cluster config file, since the failed disk is also the second cluster lock disk along with being a data disk...&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We have set up a plan for the disk replacement and will be scheduling the replacement activity.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Your suggestions were very helpful and informative for me. Thanks again!&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Vijay&lt;/P&gt;</description>
      <pubDate>Wed, 16 Nov 2011 18:29:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5392643#M535890</guid>
      <dc:creator>vijay alur alur</dc:creator>
      <dc:date>2011-11-16T18:29:19Z</dc:date>
    </item>
    <item>
      <title>Re: PV unavailable</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5392669#M535891</link>
      <description>Follow this guide to replace the disk:&lt;BR /&gt;&lt;BR /&gt;When_Good_Disks_Go_Bad_WP&lt;BR /&gt;&lt;A target="_blank" href="http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c01911837/c01911837.pdf"&gt;http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c01911837/c01911837.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;When it comes to "vgchange -a y ...", replace it with "vgchange -a e ..." because of the cluster, and run the command from the node that owns the VG. You did not mention that this is a cluster, but I had assumed it already.</description>
      <pubDate>Wed, 16 Nov 2011 18:46:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/pv-unavailable/m-p/5392669#M535891</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2011-11-16T18:46:32Z</dc:date>
    </item>
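The replacement steps scattered across the last few posts (replace_dsk after an online swap, then the "When Good Disks Go Bad" LVM recovery with exclusive activation) can be collected into one hedged sketch. The VG name, the PV path, the loop id passed to replace_dsk, and the exact step order are all placeholders/assumptions to be verified against the paper; with the default DRY_RUN=1 the commands are only printed, never executed.

```shell
# Hedged sketch only (assumptions throughout): collects the replacement
# steps discussed in the thread. VG name, device files, and the loop id
# are placeholders; verify the sequence against the "When Good Disks Go
# Bad" paper before running on a real HP-UX system.
DRY_RUN=${DRY_RUN:-1}
step() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "WOULD RUN: $*"     # print the command instead of executing it
    else
        "$@"
    fi
}

VG=/dev/vgdata                   # placeholder VG name
PV=/dev/dsk/c7t12d0              # failed PV from the thread

step pvchange -a n "$PV"                       # detach the failed PV path
# ...physically swap the disk, then tell the fcd driver about it
# (Torsten's example was: fcmsutil /dev/fcd0 replace_dsk 0x0a):
step fcmsutil /dev/fcd1 replace_dsk 0x0c       # loop id is a placeholder
step vgcfgrestore -n "$VG" /dev/rdsk/c7t12d0   # restore LVM headers to the new disk
step vgchange -a e "$VG"                       # exclusive activation (cluster)
step pvchange -a y "$PV"                       # reattach the PV path
step vgsync "$VG"                              # resynchronise stale mirror extents
```

The dry-run wrapper is simply a safety device for reviewing the plan; setting DRY_RUN=0 on an HP-UX node would execute the commands for real, which should only be done during the scheduled replacement window.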
  </channel>
</rss>

