<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Hard disk failure in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090278#M144657</link>
    <description>Robert, I'm not able to access the link.&lt;BR /&gt;&lt;BR /&gt;Patrick, it says that I need to reboot the server. Is that not necessary? The failed disk does not belong to the root VG in my case, and it had only one filesystem/LV on it.&lt;BR /&gt;&lt;BR /&gt;Do I need to do pvcreate, or will vgcfgrestore take care of it?&lt;BR /&gt;&lt;BR /&gt;My plan after the disk replacement is:&lt;BR /&gt;pvcreate on the disk&lt;BR /&gt;vgcfgrestore on the VG&lt;BR /&gt;lvcreate&lt;BR /&gt;newfs&lt;BR /&gt;mount the fs and restore&lt;BR /&gt;&lt;BR /&gt;Does that look logical, or do I need to do something else?&lt;BR /&gt;&lt;BR /&gt;Thanks a lot for your help.&lt;BR /&gt;&lt;BR /&gt;-Vikas</description>
    <pubDate>Fri, 10 Oct 2003 03:46:41 GMT</pubDate>
    <dc:creator>Vikas_2</dc:creator>
    <dc:date>2003-10-10T03:46:41Z</dc:date>
    <item>
      <title>Hard disk failure</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090275#M144654</link>
      <description>One of the hard disks of my 10.20 system has failed. Ioscan shows the disk as not claimed. The disk is in an external storage enclosure and is hot-swappable. It is not mirrored, and the VG has three disks in it.&lt;BR /&gt;&lt;BR /&gt;Can you please advise me on the steps to carry out before and after the disk change?&lt;BR /&gt;&lt;BR /&gt;Thanks in advance&lt;BR /&gt;-Vikas</description>
      <pubDate>Fri, 10 Oct 2003 03:03:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090275#M144654</guid>
      <dc:creator>Vikas_2</dc:creator>
      <dc:date>2003-10-10T03:03:35Z</dc:date>
    </item>
    <item>
      <title>Re: Hard disk failure</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090276#M144655</link>
      <description>First, you need to identify which lvols are on the failed drive, because you've lost the data on them. Try lvdisplay -v on each lvol and see which ones report errors. Once you replace the failed drive, do a vgcfgrestore onto it; this will restore the VG/LV info. You will then need to newfs the lvols that were on it, as they're either corrupted or completely lost, and then recover the data on them from backup.&lt;BR /&gt;</description>
      <pubDate>Fri, 10 Oct 2003 03:10:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090276#M144655</guid>
      <dc:creator>Stefan Farrelly</dc:creator>
      <dc:date>2003-10-10T03:10:57Z</dc:date>
    </item>
    <item>
      <title>Re: Hard disk failure</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090277#M144656</link>
      <description>Hi Vikas,&lt;BR /&gt; &lt;BR /&gt;&lt;A href="http://www.unixadm.net/howto/bad_disk.html" target="_blank"&gt;http://www.unixadm.net/howto/bad_disk.html&lt;/A&gt;&lt;BR /&gt; &lt;BR /&gt;Start from point 1.2 (you have a hot-swappable disk).&lt;BR /&gt; &lt;BR /&gt;Regards,&lt;BR /&gt; &lt;BR /&gt;Robert-Jan</description>
      <pubDate>Fri, 10 Oct 2003 03:26:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090277#M144656</guid>
      <dc:creator>Robert-Jan Goossens</dc:creator>
      <dc:date>2003-10-10T03:26:45Z</dc:date>
    </item>
    <item>
      <title>Re: Hard disk failure</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090278#M144657</link>
      <description>Robert, I'm not able to access the link.&lt;BR /&gt;&lt;BR /&gt;Patrick, it says that I need to reboot the server. Is that not necessary? The failed disk does not belong to the root VG in my case, and it had only one filesystem/LV on it.&lt;BR /&gt;&lt;BR /&gt;Do I need to do pvcreate, or will vgcfgrestore take care of it?&lt;BR /&gt;&lt;BR /&gt;My plan after the disk replacement is:&lt;BR /&gt;pvcreate on the disk&lt;BR /&gt;vgcfgrestore on the VG&lt;BR /&gt;lvcreate&lt;BR /&gt;newfs&lt;BR /&gt;mount the fs and restore&lt;BR /&gt;&lt;BR /&gt;Does that look logical, or do I need to do something else?&lt;BR /&gt;&lt;BR /&gt;Thanks a lot for your help.&lt;BR /&gt;&lt;BR /&gt;-Vikas</description>
      <pubDate>Fri, 10 Oct 2003 03:46:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090278#M144657</guid>
      <dc:creator>Vikas_2</dc:creator>
      <dc:date>2003-10-10T03:46:41Z</dc:date>
    </item>
    <item>
      <title>Re: Hard disk failure</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090279#M144658</link>
      <description>Have the engineer replace the faulty disk; there is no need to reboot, since you have hot-swappable disks.&lt;BR /&gt;&lt;BR /&gt;[Step 1.2]&lt;BR /&gt;&lt;BR /&gt;Restore the LVM configuration/headers onto the new disk from your backup of the LVM configuration:&lt;BR /&gt;&lt;BR /&gt;# vgcfgrestore -n [volume group name] /dev/rdsk/cXtYdZ&lt;BR /&gt;&lt;BR /&gt;Where X is the 'card instance number' of the SCSI card the disk is attached to, Y is the 'SCSI ID' of the disk (or of the array controller, in the case of an array), and Z is the 'LUN number' (typically 0 for a non-array disk). Note that if the HP Customer Engineer replaces the disk at the same address, the device file name will not change; it will be what it was prior to the replacement. For our example:&lt;BR /&gt;&lt;BR /&gt;# vgcfgrestore -n /dev/vg00 /dev/rdsk/c0t4d0&lt;BR /&gt;&lt;BR /&gt;[Step 1.3]&lt;BR /&gt;&lt;BR /&gt;Reactivate the volume group (VG) so that the new disk can be attached, since it wasn't configured in at boot time:&lt;BR /&gt;&lt;BR /&gt;# vgchange -a y [volume group name]&lt;BR /&gt;&lt;BR /&gt;For our example, the volume group vg00 will already be activated, but it will not know of the replaced disk; therefore, this step is still required so that LVM will know that the disk is again available:&lt;BR /&gt;&lt;BR /&gt;# vgchange -a y /dev/vg00&lt;BR /&gt;&lt;BR /&gt;The vgchange command will activate each specified volume group and all associated physical and logical volumes for read-write access. In the case of vg00, it would initially have been activated with c0t4d0 in an unknown state; vgchange tells vg00 to look again at c0t4d0, which is now in a known state. It is important to remember that even though lvol5 and lvol6 are now active, they are void of data.&lt;BR /&gt;&lt;BR /&gt;[Step 1.4]&lt;BR /&gt;&lt;BR /&gt;Determine which logical volumes spanned onto that disk. You only need to recreate and restore data for the volumes that actually touched that disk; other LVs in the volume group are still OK.&lt;BR /&gt;&lt;BR /&gt;# pvdisplay -v /dev/dsk/c0tXd0&lt;BR /&gt;&lt;BR /&gt;will show a listing of all the extents on the disk at SCSI ID X, and to which logical volume they belong. This listing is fairly long, so you might want to pipe it to more or send it to a file. For our example:&lt;BR /&gt;&lt;BR /&gt;# pvdisplay -v /dev/dsk/c0t4d0 | more&lt;BR /&gt;.....&lt;BR /&gt;--- Distribution of physical volume ---&lt;BR /&gt;LV Name          LE of LV   PE for LV&lt;BR /&gt;/dev/vg00/lvol5  50         50&lt;BR /&gt;/dev/vg00/lvol6  245        245&lt;BR /&gt;.....&lt;BR /&gt;&lt;BR /&gt;From this we can see that logical volumes /dev/vg00/lvol5 and /dev/vg00/lvol6 have physical extents on this disk, but /dev/vg00/lvol1 through /dev/vg00/lvol4 don't, so we will need to recreate and restore lvol5 and lvol6 only.&lt;BR /&gt;&lt;BR /&gt;Note: Even though lvol5 was also in part on another disk drive, it must be treated as if the entire lvol was lost, not just the part on c0t4d0.&lt;BR /&gt;&lt;BR /&gt;[Step 1.5]&lt;BR /&gt;&lt;BR /&gt;Restore the data from your backup onto the replacement disk for the logical volumes identified in step 1.4. For raw volumes, you can simply restore the full raw volume using the utility that was used to create your backup. For file systems, you will need to recreate the file systems first. For our example:&lt;BR /&gt;&lt;BR /&gt;For HFS:&lt;BR /&gt;&lt;BR /&gt;# newfs -F hfs /dev/vg00/rlvol5&lt;BR /&gt;# newfs -F hfs /dev/vg00/rlvol6&lt;BR /&gt;&lt;BR /&gt;For JFS:&lt;BR /&gt;&lt;BR /&gt;# newfs -F vxfs /dev/vg00/rlvol5&lt;BR /&gt;# newfs -F vxfs /dev/vg00/rlvol6&lt;BR /&gt;&lt;BR /&gt;Note that we use the raw logical volume device file for the newfs command. For file systems that had non-default configurations, please consult the newfs man page for the correct options.&lt;BR /&gt;&lt;BR /&gt;After a file system has been created on the logical volume, mount it under the mount point that it previously occupied. Take whatever steps are necessary to prevent your applications or users from accessing the filesystem until the data has been recovered. Now that the filesystem has been created, simply restore the data for that file system from backups.&lt;BR /&gt;&lt;BR /&gt;Note: You will need to have recorded how your file systems were originally created in order to perform this step. The only critical requirement is that the file system be at least as large as before the disk failure. You can change other file system parameters, such as those used to tune the file system's performance.&lt;BR /&gt;&lt;BR /&gt;For the file system case, there is no need to worry about data on the disk (c0t4d0) that was newer than the data on the tape; the newfs wiped out all data on lvol5. For raw volume access, you may have to specify your restore utility's overwrite option to guarantee bringing the volume back to a known state.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 10 Oct 2003 03:50:09 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090279#M144658</guid>
      <dc:creator>Robert-Jan Goossens</dc:creator>
      <dc:date>2003-10-10T03:50:09Z</dc:date>
    </item>
    <item>
      <title>Re: Hard disk failure</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090280#M144659</link>
      <description>Hi again,&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;This disk is in an external storage and is hot swappable. The disk is not mirrored and the VG has three disks in it.&amp;lt;&amp;lt;&lt;BR /&gt;&lt;BR /&gt;Yes, of course you don't need to reboot your system if the failed disk is hot-swappable.&lt;BR /&gt;&lt;BR /&gt;I will summarize the procedure roughly:&lt;BR /&gt;&lt;BR /&gt;1. Pull out the failed disk and insert the new disk into the system.&lt;BR /&gt;2. pvcreate&lt;BR /&gt;3. vgcfgrestore&lt;BR /&gt;4. Restore the stale LVM data.&lt;BR /&gt;&lt;BR /&gt;I think that your idea is good!&lt;BR /&gt;Look at the more specific information in the file.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 10 Oct 2003 03:57:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090280#M144659</guid>
      <dc:creator>KCS_1</dc:creator>
      <dc:date>2003-10-10T03:57:27Z</dc:date>
    </item>
    <item>
      <title>Re: Hard disk failure</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090281#M144660</link>
      <description>The first thing I would do is identify the disk and reseat it: pull it out, wait 20 seconds, and push it back in securely. Then, after about 30 seconds, run 'ioscan -fn' to see if the disk has recovered. If not, replace the disk; if it recovers, back up ASAP and then investigate what happened.&lt;BR /&gt;Eugeny</description>
      <pubDate>Fri, 10 Oct 2003 04:19:52 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090281#M144660</guid>
      <dc:creator>Eugeny Brychkov</dc:creator>
      <dc:date>2003-10-10T04:19:52Z</dc:date>
    </item>
    <item>
      <title>Re: Hard disk failure</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090282#M144661</link>
      <description>Thanks everyone... I'll wait for the disk to be replaced and then proceed.&lt;BR /&gt;&lt;BR /&gt;My complete system backup is with fbackup. After the disk replacement, I'll have to recover only one file system. Do I need to take any special care for this restore?&lt;BR /&gt;&lt;BR /&gt;-Vikas</description>
      <pubDate>Fri, 10 Oct 2003 04:52:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090282#M144661</guid>
      <dc:creator>Vikas_2</dc:creator>
      <dc:date>2003-10-10T04:52:28Z</dc:date>
    </item>
    <item>
      <title>Re: Hard disk failure</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090283#M144662</link>
      <description>Hi Vikas,&lt;BR /&gt; &lt;BR /&gt;No, nothing special; just chase your users off the system during the restore and keep your application offline.&lt;BR /&gt; &lt;BR /&gt;Good luck.&lt;BR /&gt; &lt;BR /&gt;Robert-Jan</description>
      <pubDate>Fri, 10 Oct 2003 04:58:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hard-disk-failure/m-p/3090283#M144662</guid>
      <dc:creator>Robert-Jan Goossens</dc:creator>
      <dc:date>2003-10-10T04:58:30Z</dc:date>
    </item>
  </channel>
</rss>

