<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Problem with fs mounting in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898098#M622263</link>
    <description>Hi,&lt;BR /&gt;&lt;BR /&gt;It appears the LUN is accessible from the EMC box, but not the old contents of the LUN. What exactly happened to the EMC box? Were there any changes made, e.g. to the zoning configuration?&lt;BR /&gt;&lt;BR /&gt;Check the /etc/lvmpvg file; all the PVGs should be listed in it.&lt;BR /&gt;&lt;BR /&gt;HTH,&lt;BR /&gt;Devender</description>
    <pubDate>Mon, 25 Apr 2005 06:20:04 GMT</pubDate>
    <dc:creator>Devender Khatana</dc:creator>
    <dc:date>2005-04-25T06:20:04Z</dc:date>
    <item>
      <title>Problem with fs mounting</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898091#M622256</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I have a problem on a server connected to an EMC Symmetrix.&lt;BR /&gt;&lt;BR /&gt;After a reboot, it is unable to mount the filesystems in a vg named vg106BEBarch.&lt;BR /&gt;&lt;BR /&gt;If I run vgdisplay vg106BEBarch, I get this:&lt;BR /&gt;&lt;BR /&gt;vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c3t14d4":&lt;BR /&gt;The specified path does not correspond to physical volume attached to&lt;BR /&gt;this volume group&lt;BR /&gt;vgdisplay: Warning: couldn't query physical volume "/dev/dsk/c4t14d4":&lt;BR /&gt;The specified path does not correspond to physical volume attached to&lt;BR /&gt;this volume group&lt;BR /&gt;vgdisplay: Warning: couldn't query all of the physical volumes.&lt;BR /&gt;--- Volume groups ---&lt;BR /&gt;VG Name                     /dev/vg106BEBarch&lt;BR /&gt;VG Write Access             read/write&lt;BR /&gt;VG Status                   available&lt;BR /&gt;Max LV                      255&lt;BR /&gt;Cur LV                      5&lt;BR /&gt;Open LV                     5&lt;BR /&gt;Max PV                      64&lt;BR /&gt;Cur PV                      2&lt;BR /&gt;Act PV                      2&lt;BR /&gt;Max PE per PV               10240&lt;BR /&gt;VGDA                        4&lt;BR /&gt;PE Size (Mbytes)            8&lt;BR /&gt;Total PE                    2156&lt;BR /&gt;Alloc PE                    1150&lt;BR /&gt;Free PE                     1006&lt;BR /&gt;Total PVG                   2&lt;BR /&gt;Total Spare PVs             0&lt;BR /&gt;Total Spare PVs in use      0&lt;BR /&gt;&lt;BR /&gt;vgdisplay -v shows the following disks:&lt;BR /&gt;   --- Physical volume groups ---&lt;BR /&gt;   PVG Name                    106ARCH&lt;BR /&gt;   PV Name                     /dev/dsk/c3t6d4&lt;BR /&gt;   PV Name                     /dev/dsk/c3t6d5&lt;BR /&gt;   PV Name                     /dev/dsk/c4t6d4&lt;BR /&gt;   PV Name                     /dev/dsk/c4t6d5&lt;BR /&gt;&lt;BR /&gt;   PVG Name                    ARCH2&lt;BR /&gt;   PV Name                     /dev/dsk/c3t14d4&lt;BR /&gt;   PV Name                     /dev/dsk/c4t14d4&lt;BR /&gt;&lt;BR /&gt;It appears that this vg was moved from another server and was still active on the previous server. I deactivated it on the other server, but I can't find how to solve this.&lt;BR /&gt;&lt;BR /&gt;It seems, though I am not yet sure (I am checking with the storage team), that the last two disks shouldn't belong to this vg.&lt;BR /&gt;&lt;BR /&gt;ioscan -fnC disk shows all the disks as claimed.&lt;BR /&gt;&lt;BR /&gt;I daren't move /etc/lvmtab and run vgscan -av; I am not familiar with this and I fear that the cure may be worse than the problem.&lt;BR /&gt;&lt;BR /&gt;I ran vgscan -apv and it doesn't see the vg106BEBarch vg.&lt;BR /&gt;&lt;BR /&gt;In addition, these disks are all configured with multipath, two by two, so when it says there are only 2 active PVs, that means 4 disks are active with multipath (c3t6d4 multipathed with c4t6d4, and so on).&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 25 Apr 2005 05:03:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898091#M622256</guid>
      <dc:creator>Global Unix Team</dc:creator>
      <dc:date>2005-04-25T05:03:45Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with fs mounting</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898092#M622257</link>
      <description>Try:&lt;BR /&gt;&lt;BR /&gt;diskinfo /dev/rdsk/c3t14d4&lt;BR /&gt;and&lt;BR /&gt;pvdisplay /dev/dsk/c3t14d4&lt;BR /&gt;&lt;BR /&gt;And paste the output here.</description>
      <pubDate>Mon, 25 Apr 2005 05:11:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898092#M622257</guid>
      <dc:creator>Alex Lavrov.</dc:creator>
      <dc:date>2005-04-25T05:11:18Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with fs mounting</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898093#M622258</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;The physical disks in your ARCH2 physical volume group are not accessible. As stated earlier, check whether the device files exist.&lt;BR /&gt;&lt;BR /&gt;The VG can still be activated by bypassing the quorum check with "vgchange -a y -q n /dev/vg106BEBarch".&lt;BR /&gt;&lt;BR /&gt;But if you do this, any logical volumes allocated fully or partially to these physical volumes will not be available.&lt;BR /&gt;&lt;BR /&gt;HTH,&lt;BR /&gt;Devender</description>
      <pubDate>Mon, 25 Apr 2005 05:28:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898093#M622258</guid>
      <dc:creator>Devender Khatana</dc:creator>
      <dc:date>2005-04-25T05:28:08Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with fs mounting</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898094#M622259</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Here are the results of diskinfo and pvdisplay.&lt;BR /&gt;&lt;BR /&gt;# diskinfo /dev/rdsk/c3t14d4&lt;BR /&gt;SCSI describe of /dev/rdsk/c3t14d4:&lt;BR /&gt;             vendor: EMC&lt;BR /&gt;         product id: SYMMETRIX&lt;BR /&gt;               type: direct access&lt;BR /&gt;               size: 8838720 Kbytes&lt;BR /&gt;   bytes per sector: 512&lt;BR /&gt;&lt;BR /&gt;root@mou043  [/tmp]&lt;BR /&gt;# pvdisplay /dev/dsk/c3t14d4&lt;BR /&gt;pvdisplay: Warning: couldn't query physical volume "/dev/dsk/c3t14d4":&lt;BR /&gt;The specified path does not correspond to physical volume attached to&lt;BR /&gt;this volume group&lt;BR /&gt;pvdisplay: Warning: couldn't query physical volume "/dev/dsk/c4t14d4":&lt;BR /&gt;The specified path does not correspond to physical volume attached to&lt;BR /&gt;this volume group&lt;BR /&gt;pvdisplay: Warning: couldn't query all of the physical volumes.&lt;BR /&gt;pvdisplay: Couldn't retrieve the names of the physical volumes&lt;BR /&gt;belonging to volume group "/dev/vg106BEBarch".&lt;BR /&gt;pvdisplay: Cannot display physical volume "/dev/dsk/c3t14d4".&lt;BR /&gt;&lt;BR /&gt;I have no problem activating the vg, but if I try to mount the fs, I get errors such as:&lt;BR /&gt;vxfs mount: /dev/vg106BEBarch/lvol1 is corrupted. needs checking&lt;BR /&gt;&lt;BR /&gt;If I run fsck, I get:&lt;BR /&gt;&lt;BR /&gt;# fsck /dev/vg106BEBarch/lvol1&lt;BR /&gt;file system is larger than device&lt;BR /&gt;vxfs fsck: cannot initialize aggregate&lt;BR /&gt;file system check failure, aborting ...&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 25 Apr 2005 06:08:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898094#M622259</guid>
      <dc:creator>Global Unix Team</dc:creator>
      <dc:date>2005-04-25T06:08:47Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with fs mounting</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898095#M622260</link>
      <description>If you execute the command "insf", do you get any output?</description>
      <pubDate>Mon, 25 Apr 2005 06:12:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898095#M622260</guid>
      <dc:creator>Alex Lavrov.</dc:creator>
      <dc:date>2005-04-25T06:12:24Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with fs mounting</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898096#M622261</link>
      <description>No, insf does not give any output.</description>
      <pubDate>Mon, 25 Apr 2005 06:15:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898096#M622261</guid>
      <dc:creator>Global Unix Team</dc:creator>
      <dc:date>2005-04-25T06:15:06Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with fs mounting</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898097#M622262</link>
      <description>And another question: how did you move the VG from the other server? vgexport/vgimport?&lt;BR /&gt;&lt;BR /&gt;If you run vgscan, do you get any messages?&lt;BR /&gt;After vgscan:&lt;BR /&gt;&lt;BR /&gt;strings /etc/lvmtab | more&lt;BR /&gt;&lt;BR /&gt;Can you see this vg/disk there?</description>
      <pubDate>Mon, 25 Apr 2005 06:15:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898097#M622262</guid>
      <dc:creator>Alex Lavrov.</dc:creator>
      <dc:date>2005-04-25T06:15:12Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with fs mounting</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898098#M622263</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;It appears the LUN is accessible from the EMC box, but not the old contents of the LUN. What exactly happened to the EMC box? Were there any changes made, e.g. to the zoning configuration?&lt;BR /&gt;&lt;BR /&gt;Check the /etc/lvmpvg file; all the PVGs should be listed in it.&lt;BR /&gt;&lt;BR /&gt;HTH,&lt;BR /&gt;Devender</description>
      <pubDate>Mon, 25 Apr 2005 06:20:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898098#M622263</guid>
      <dc:creator>Devender Khatana</dc:creator>
      <dc:date>2005-04-25T06:20:04Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with fs mounting</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898099#M622264</link>
      <description>I don't know how they were moved; I didn't do it. I assume only a vgchange -a n was done on the original server, which may have caused the problem, as the vg was active on both servers (though with no filesystem mounted on the old server, at least!).&lt;BR /&gt;vgscan -apv gives (only part of it):&lt;BR /&gt;Following Physical Volumes belong to one Volume Group.&lt;BR /&gt;Unable to match these Physical Volumes to a Volume Group.&lt;BR /&gt;Use the vgimport command to complete the process.&lt;BR /&gt;/dev/dsk/c3t14d4&lt;BR /&gt;/dev/dsk/c4t14d4&lt;BR /&gt;&lt;BR /&gt;but nothing about c3t6d4-5 or c4t6d4-5, which should be listed; they are seen by pvdisplay!</description>
      <pubDate>Mon, 25 Apr 2005 06:21:41 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898099#M622264</guid>
      <dc:creator>Global Unix Team</dc:creator>
      <dc:date>2005-04-25T06:21:41Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with fs mounting</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898100#M622265</link>
      <description>Here is what appears in /etc/lvmpvg:&lt;BR /&gt;&lt;BR /&gt;VG      /dev/vg106BEBarch&lt;BR /&gt;PVG     106ARCH&lt;BR /&gt;/dev/dsk/c3t6d4&lt;BR /&gt;/dev/dsk/c3t6d5&lt;BR /&gt;/dev/dsk/c4t6d4&lt;BR /&gt;/dev/dsk/c4t6d5&lt;BR /&gt;PVG     ARCH2&lt;BR /&gt;/dev/dsk/c3t14d4&lt;BR /&gt;/dev/dsk/c4t14d4&lt;BR /&gt;&lt;BR /&gt;It seems consistent with vgdisplay, but I don't know whether vgdisplay reads this file, in which case it would be normal for them to give the same result :-)</description>
      <pubDate>Mon, 25 Apr 2005 06:25:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898100#M622265</guid>
      <dc:creator>Global Unix Team</dc:creator>
      <dc:date>2005-04-25T06:25:35Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with fs mounting</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898101#M622266</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;You might have to run vgexport/vgimport now. First note down the VG minor number with "ls -l /dev/vg106BEBarch/group". Then do a vgexport, followed by a vgimport specifying the physical paths of all the physical disks.&lt;BR /&gt;&lt;BR /&gt;Do you have some old bdf/lvdisplay -v output from which you can tell how many file systems were in this VG and what the allocation policy was?&lt;BR /&gt;&lt;BR /&gt;HTH,&lt;BR /&gt;Devender</description>
      <pubDate>Mon, 25 Apr 2005 06:30:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898101#M622266</guid>
      <dc:creator>Devender Khatana</dc:creator>
      <dc:date>2005-04-25T06:30:21Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with fs mounting</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898102#M622267</link>
      <description>There were 5 LVs distributed across the first 4 disks:&lt;BR /&gt;/dev/dsk/c3t6d4&lt;BR /&gt;/dev/dsk/c3t6d5&lt;BR /&gt;/dev/dsk/c4t6d4&lt;BR /&gt;/dev/dsk/c4t6d5&lt;BR /&gt;(in fact 2 disks with multipath)&lt;BR /&gt;&lt;BR /&gt;But I don't know what the following disks are:&lt;BR /&gt;/dev/dsk/c3t14d4&lt;BR /&gt;/dev/dsk/c4t14d4&lt;BR /&gt;&lt;BR /&gt;I can see several possibilities:&lt;BR /&gt;&lt;BR /&gt;vgexport/vgimport: should I import only the first 4 disks, or all of them? Isn't there a risk of losing all the data on the disks?&lt;BR /&gt;&lt;BR /&gt;mv /etc/lvmtab /etc/lvmtab.old; vgscan -av:&lt;BR /&gt;I did that only once, based on a thread in this forum describing exactly my problem. I don't know whether this command is dangerous, and besides, I am not at all sure it can solve the problem. I will probably do it, but I'd like to be sure first.&lt;BR /&gt;&lt;BR /&gt;vgreduce the 2 unused disks: I will use this only as a last resort if nothing else works; I really doubt it is the solution. Furthermore: if the two last disks,&lt;BR /&gt;/dev/dsk/c3t14d4&lt;BR /&gt;/dev/dsk/c4t14d4,&lt;BR /&gt;don't belong to this volume group, which one do they belong to? I don't want to solve my problem by erasing someone else's data :-(&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 25 Apr 2005 06:53:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898102#M622267</guid>
      <dc:creator>Global Unix Team</dc:creator>
      <dc:date>2005-04-25T06:53:56Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with fs mounting</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898103#M622268</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;You can edit the /etc/lvmpvg file and make the necessary changes so that it reflects only the first four disks (see man lvmpvg).&lt;BR /&gt;&lt;BR /&gt;And if you are sure that this vg was allocated on only four disks, then you can do a vgimport with only the four devices after changing /etc/lvmpvg. It does not destroy the existing contents of the disks.&lt;BR /&gt;&lt;BR /&gt;HTH,&lt;BR /&gt;Devender</description>
      <pubDate>Mon, 25 Apr 2005 07:05:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898103#M622268</guid>
      <dc:creator>Devender Khatana</dc:creator>
      <dc:date>2005-04-25T07:05:19Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with fs mounting</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898104#M622269</link>
      <description>To continue the previous post:&lt;BR /&gt;&lt;BR /&gt;vgchange -a n vg106BEBarch&lt;BR /&gt;vgexport -m /tmp/my.map vg106BEBarch&lt;BR /&gt;&lt;BR /&gt;mkdir /dev/vg106BEBarch&lt;BR /&gt;mknod /dev/vg106BEBarch/group c 64 0x&lt;NUM&gt;&lt;BR /&gt;&lt;BR /&gt;To get the &lt;NUM&gt;, run "ll /dev/vg106BEBarch/group" beforehand and see what it was.&lt;BR /&gt;&lt;BR /&gt;vgimport -m /tmp/my.map vg106BEBarch /dev/dsk/c3t6d4 /dev/dsk/c3t6d5 /dev/dsk/c4t6d4 /dev/dsk/c4t6d5&lt;BR /&gt;&lt;BR /&gt;Check if it works without those two disks. If you wish to bring them back, do the same, but add them to the vgimport command.</description>
      <pubDate>Mon, 25 Apr 2005 08:10:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898104#M622269</guid>
      <dc:creator>Alex Lavrov.</dc:creator>
      <dc:date>2005-04-25T08:10:26Z</dc:date>
    </item>
    <item>
      <title>Re: Problem with fs mounting</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898105#M622270</link>
      <description>Thanks to all of you for your help. Unfortunately, I won't be able to tell you who was right: we found a workaround using another vg, and this one will be removed, which will very likely solve the problem :-)</description>
      <pubDate>Mon, 25 Apr 2005 10:01:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/problem-with-fs-mounting/m-p/4898105#M622270</guid>
      <dc:creator>Global Unix Team</dc:creator>
      <dc:date>2005-04-25T10:01:21Z</dc:date>
    </item>
  </channel>
</rss>

