<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Volume set repair in Operating System - OpenVMS</title>
    <link>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704319#M73634</link>
    <description>Thanks for the inputs. Could be a case of RTFM for volume sets. At the moment I have a backup to restore, but would like to save the data on the remaining RVNs at this point if possible. I have tried mounting individual shadow sets (RVNs) but have the same problem, as RVN 6 is not available. Surely it must be possible to mount and back up single RVNs, no? (I've tried the /vol=n switch.)&lt;BR /&gt;&lt;BR /&gt;If not, I have to init the volume set and hope I can restore the original files to the smaller volume set...</description>
    <pubDate>Mon, 09 Jan 2006 09:20:13 GMT</pubDate>
    <dc:creator>Michael Purdy</dc:creator>
    <dc:date>2006-01-09T09:20:13Z</dc:date>
    <item>
      <title>Volume set repair</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704314#M73629</link>
      <description>We have a 2-node VMS 7.2-1 cluster with 2 volume sets. 2 disks were removed from one volume set and used elsewhere. Is it possible to repair the volume set so it can continue without the 2 disks? The MOUNT command fails with "failed to lock volume" and "device is not mounted". I do not think the disks were removed cleanly from the volume set; any ideas?</description>
      <pubDate>Fri, 06 Jan 2006 11:53:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704314#M73629</guid>
      <dc:creator>Michael Purdy</dc:creator>
      <dc:date>2006-01-06T11:53:37Z</dc:date>
    </item>
    <item>
      <title>Re: Volume set repair</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704315#M73630</link>
      <description>I have never used volume sets, but to my knowledge you cannot repair one.&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Fri, 06 Jan 2006 11:58:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704315#M73630</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2006-01-06T11:58:04Z</dc:date>
    </item>
    <item>
      <title>Re: Volume set repair</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704316#M73631</link>
      <description>I think it's time to test your recovery strategy - do you have a backup?&lt;BR /&gt;&lt;BR /&gt;How many disks in each set?&lt;BR /&gt;&lt;BR /&gt;The problem is that some or all of each file would have been on the disks that were removed.&lt;BR /&gt;</description>
      <pubDate>Fri, 06 Jan 2006 12:02:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704316#M73631</guid>
      <dc:creator>Ian Miller.</dc:creator>
      <dc:date>2006-01-06T12:02:20Z</dc:date>
    </item>
    <item>
      <title>Re: Volume set repair</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704317#M73632</link>
      <description>Michael,&lt;BR /&gt;&lt;BR /&gt;Sorry, but NO way! Part of your data (about 2/3) is (was) on the disks that are now used elsewhere.&lt;BR /&gt;And even if you find a single drive big enough to hold it all, a restore to that drive is not straightforward: an /IMAGE restore WILL want to restore each file (or fragment) to its original Relative Volume Number.&lt;BR /&gt;_IF_ you had _NO_ file aliases on that set, _THEN_ you can restore the whole save set to a mounted Files-11 drive. Make sure not to forget /OWN=ORIGINAL!&lt;BR /&gt;&lt;BR /&gt;I wish you good luck and success. I think you will need both.&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Fri, 06 Jan 2006 12:17:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704317#M73632</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2006-01-06T12:17:17Z</dc:date>
    </item>
    <item>
      <title>Re: Volume set repair</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704318#M73633</link>
      <description>Michael,&lt;BR /&gt;&lt;BR /&gt;Obviously, the data on the two volumes that were removed is lost (presuming that they were overwritten).&lt;BR /&gt;&lt;BR /&gt;It may be possible to recover some of the contents of the remaining volumes (in fact, I told a war story about just such an incident at the 1990 European DECUS Symposium).&lt;BR /&gt;&lt;BR /&gt;I am in the middle of something; I will try to comment more extensively later.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Fri, 06 Jan 2006 17:44:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704318#M73633</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2006-01-06T17:44:33Z</dc:date>
    </item>
    <item>
      <title>Re: Volume set repair</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704319#M73634</link>
      <description>Thanks for the inputs. Could be a case of RTFM for volume sets. At the moment I have a backup to restore, but would like to save the data on the remaining RVNs at this point if possible. I have tried mounting individual shadow sets (RVNs) but have the same problem, as RVN 6 is not available. Surely it must be possible to mount and back up single RVNs, no? (I've tried the /vol=n switch.)&lt;BR /&gt;&lt;BR /&gt;If not, I have to init the volume set and hope I can restore the original files to the smaller volume set...</description>
      <pubDate>Mon, 09 Jan 2006 09:20:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704319#M73634</guid>
      <dc:creator>Michael Purdy</dc:creator>
      <dc:date>2006-01-09T09:20:13Z</dc:date>
    </item>
    <item>
      <title>Re: Volume set repair</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704320#M73635</link>
      <description>By reading the documentation on BACKUP and volume sets, you will find the instructions for restoring a single member of a volume set from your backup. I believe the HELP for BACKUP has some information on this as well.&lt;BR /&gt;&lt;BR /&gt;I haven't done this in years, but it is possible to restore just the missing volumes from your last backup.&lt;BR /&gt;&lt;BR /&gt;You will have to provide drives, of course.&lt;BR /&gt;&lt;BR /&gt;If you want to back up the contents of the untouched members of the volume set before proceeding, you can always do a BACKUP/PHYSICAL of them for safekeeping.&lt;BR /&gt;&lt;BR /&gt;After making a physical backup of them, you could try restoring just the 2 missing drives and mounting the volume set. If it will mount at all, then you can do a file-level backup of the 2 good volumes by using the /VOLUME qualifier to specify which one(s) you want to select.&lt;BR /&gt;&lt;BR /&gt;After backing up the files, you can then attempt an ANALYZE/DISK [/REPAIR] on the volume set and see just how bad things look.&lt;BR /&gt;&lt;BR /&gt;Robert</description>
      <pubDate>Mon, 09 Jan 2006 09:29:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704320#M73635</guid>
      <dc:creator>Robert_Boyd</dc:creator>
      <dc:date>2006-01-09T09:29:55Z</dc:date>
    </item>
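    The sequence Robert outlines could be sketched in DCL roughly as below; the device names, volume number, and save-set names are illustrative only, not taken from the thread:

    ```dcl
    $! 1. Safeguard an untouched member first: mount it foreign and take a
    $!    physical (block-by-block) copy.
    $ MOUNT/FOREIGN $10$DKA0:
    $ BACKUP/PHYSICAL $10$DKA0: MKA500:DKA0_PHYS.BCK/SAVE_SET
    $ DISMOUNT $10$DKA0:
    $! 2. Restore only the missing relative volume from the last image save
    $!    set. /VOLUME=n is valid with /IMAGE and selects one member of a
    $!    disk volume set.
    $ BACKUP/IMAGE/VOLUME=6 MKA500:FULL.BCK/SAVE_SET $10$DKB600:
    $! 3. If the set now mounts, take a file-level backup of a good volume.
    $ BACKUP/IMAGE/VOLUME=1 DSA100: MKA500:RVN1_FILES.BCK/SAVE_SET
    $! 4. Only then assess the structural damage.
    $ ANALYZE/DISK_STRUCTURE/REPAIR DSA100:
    ```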
    <item>
      <title>Re: Volume set repair</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704321#M73636</link>
      <description>You can also copy and then delete files that have already been restored (to another disk, that is).&lt;BR /&gt;&lt;BR /&gt;Wim</description>
      <pubDate>Mon, 09 Jan 2006 09:43:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704321#M73636</guid>
      <dc:creator>Wim Van den Wyngaert</dc:creator>
      <dc:date>2006-01-09T09:43:20Z</dc:date>
    </item>
    <item>
      <title>Re: Volume set repair</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704322#M73637</link>
      <description>Michael,&lt;BR /&gt;&lt;BR /&gt;Before doing anything else, take the surviving members of the volume set offline.&lt;BR /&gt;&lt;BR /&gt;As I mentioned previously, it is possible to recover the data on the surviving members of the volume set.&lt;BR /&gt;&lt;BR /&gt;Having done this very procedure (as a result of a head crash), it is doable and, ironically, fully within the Files-11 specification. However, it is a delicate procedure, and involves faking out MOUNT and carefully using the results from ANALYZE.&lt;BR /&gt;&lt;BR /&gt;If the data is valuable, it is worth doing. This is particularly true for very large volume sets (such as the one that you describe).&lt;BR /&gt;&lt;BR /&gt;It is difficult to describe the detailed recovery steps in a posting of reasonable length, but it is possible to recover the files on the surviving volumes.&lt;BR /&gt;&lt;BR /&gt;If I can be of assistance, please drop me a note by private email.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Mon, 09 Jan 2006 10:09:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704322#M73637</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2006-01-09T10:09:00Z</dc:date>
    </item>
    <item>
      <title>Re: Volume set repair</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704323#M73638</link>
      <description>OK, so the volume set was made up of 6 x 2-disk shadow sets, and 1 shadow set was removed. All the data is backed up. All the remaining disks have been dismounted. I have started a BACKUP/PHYSICAL of $10$DKA0:, a member of the first shadow set, and will take a backup of the other four disks too. Then plan A is to init the volume set with the 5 shadow sets and restore the whole volume to this...</description>
      <pubDate>Mon, 09 Jan 2006 10:33:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704323#M73638</guid>
      <dc:creator>Michael Purdy</dc:creator>
      <dc:date>2006-01-09T10:33:49Z</dc:date>
    </item>
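    "Plan A" as described here might look like the following in DCL; all device names, labels, and save-set names below are made-up examples:

    ```dcl
    $! Physical safeguard of each surviving member before touching anything.
    $ MOUNT/FOREIGN $10$DKA0:
    $ BACKUP/PHYSICAL $10$DKA0: MKA500:DKA0_PHYS.BCK/SAVE_SET
    $ DISMOUNT $10$DKA0:
    $! Re-initialise the five shadow-set virtual units and bind them into a
    $! new, smaller volume set.
    $ INITIALIZE DSA101: DATA1
    $ INITIALIZE DSA102: DATA2   ! ...and likewise for DSA103: to DSA105:
    $ MOUNT/SYSTEM/BIND=DATA_SET -
      DSA101:,DSA102:,DSA103:,DSA104:,DSA105: -
      DATA1,DATA2,DATA3,DATA4,DATA5
    $! A file-level (non-image) restore lets BACKUP place files on any
    $! member, avoiding the /IMAGE behaviour of putting each fragment back
    $! on its original relative volume number.
    $ BACKUP MKA500:FULL.BCK/SAVE_SET DSA101:[*...]/BY_OWNER=ORIGINAL
    ```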
    <item>
      <title>Re: Volume set repair</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704324#M73639</link>
      <description>Michael,&lt;BR /&gt;&lt;BR /&gt;so you HAVE 10 disks?&lt;BR /&gt;&lt;BR /&gt;I think you stand a much better chance if you re-create a 6-volume set from 6 shadow sets, with 2 of those shadow sets being single-disk (AFTER you finish the physical BACKUPs, of course).&lt;BR /&gt;&lt;BR /&gt;That would mean that at the logical level you are re-creating the same configuration.&lt;BR /&gt;&lt;BR /&gt;You sacrifice some redundancy (and perhaps some read performance), but it _IS_ the same logical structure as the one you saved!&lt;BR /&gt;&lt;BR /&gt;And after you get that back online, very quickly start planning how to get to full redundancy, and how to prevent this from happening again!&lt;BR /&gt;&lt;BR /&gt;Success.&lt;BR /&gt;&lt;BR /&gt;Proost.&lt;BR /&gt;&lt;BR /&gt;Have one on me.&lt;BR /&gt;&lt;BR /&gt;jpe</description>
      <pubDate>Mon, 09 Jan 2006 10:46:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704324#M73639</guid>
      <dc:creator>Jan van den Ende</dc:creator>
      <dc:date>2006-01-09T10:46:59Z</dc:date>
    </item>
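    Jan's idea of keeping the same six-volume logical structure, with single-member shadow sets standing in where a pair is no longer available, could be set up along these lines (device names and labels are hypothetical):

    ```dcl
    $! A surviving two-member shadow set mounts as usual.
    $ MOUNT/SYSTEM DSA101:/SHADOW=($10$DKA0:,$10$DKA100:) DATA1
    $! A single-member shadow set preserves the six-volume layout at the
    $! cost of redundancy; a second member can be added back later.
    $ MOUNT/SYSTEM DSA106:/SHADOW=($10$DKB600:) DATA6
    ```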
    <item>
      <title>Re: Volume set repair</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704325#M73640</link>
      <description>Michael,&lt;BR /&gt;&lt;BR /&gt;My preference here would be to disconnect one member from each of the shadow sets (and re-clone afterwards). I would then put the preserved members aside (physically).&lt;BR /&gt;&lt;BR /&gt;You can then re-create the missing volume ONLY. The MOUNT will then work, albeit with any elements missing that were on the lost pack.&lt;BR /&gt;&lt;BR /&gt;Recovering beyond this is a manual operation, involving the ANALYZE/DISK utility. As a starting point, you should not have to restore the entire volume set UNTIL AFTER there is an understanding of the degree of the damage.&lt;BR /&gt;&lt;BR /&gt;Generally speaking, newly created files DO NOT straddle members of the volume set, unless no space exists on the current volume.&lt;BR /&gt;&lt;BR /&gt;- Bob Gezelter, &lt;A href="http://www.rlgsc.com" target="_blank"&gt;http://www.rlgsc.com&lt;/A&gt;</description>
      <pubDate>Mon, 09 Jan 2006 12:28:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704325#M73640</guid>
      <dc:creator>Robert Gezelter</dc:creator>
      <dc:date>2006-01-09T12:28:49Z</dc:date>
    </item>
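    Bob's approach of re-creating only the missing volume, then surveying the damage before committing to a full restore, might look roughly like this (the volume number and all device names are examples, not from the thread):

    ```dcl
    $! Restore only relative volume 6 from the image save set onto a spare.
    $ BACKUP/IMAGE/VOLUME=6 MKA500:FULL.BCK/SAVE_SET $10$DKB600:
    $! Try to mount the whole set again with the restored member in place.
    $ MOUNT/SYSTEM -
      DSA101:,DSA102:,DSA103:,DSA104:,DSA105:,$10$DKB600: -
      DATA1,DATA2,DATA3,DATA4,DATA5,DATA6
    $! Survey first, without /REPAIR, to see how bad things look.
    $ ANALYZE/DISK_STRUCTURE DSA101:
    ```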
    <item>
      <title>Re: Volume set repair</title>
      <link>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704326#M73641</link>
      <description>The volume set was reinitialised and the data restored from backup...&lt;BR /&gt;&lt;BR /&gt;Thanks, dank je wel, merci vielmal, for all replies.&lt;BR /&gt;&lt;BR /&gt;-Mike&lt;BR /&gt;</description>
      <pubDate>Wed, 11 Jan 2006 05:27:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-openvms/volume-set-repair/m-p/3704326#M73641</guid>
      <dc:creator>Michael Purdy</dc:creator>
      <dc:date>2006-01-11T05:27:00Z</dc:date>
    </item>
  </channel>
</rss>

