<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: nfs package failover success but client get statfs error message in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/nfs-package-failover-success-but-client-get-statfs-error-message/m-p/4301402#M58016</link>
    <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Sure, 10.10.12.8 is the floating IP address. I found the following known issue in the NFS toolkit V3 release notes:&lt;BR /&gt;&lt;BR /&gt;JAGaf57739: HA NFS and "Stale NFS Handle"&lt;BR /&gt;&lt;BR /&gt;The workarounds are as follows:&lt;BR /&gt;&lt;BR /&gt;1. Create the logical volumes with persistent minor numbers.&lt;BR /&gt;2. Export the file system with an assigned file system identification.&lt;BR /&gt;&lt;BR /&gt;I don't know if this will resolve my problem; I'll try tomorrow.</description>
    <pubDate>Thu, 06 Nov 2008 17:25:28 GMT</pubDate>
    <dc:creator>public</dc:creator>
    <dc:date>2008-11-06T17:25:28Z</dc:date>
    <item>
      <title>nfs package failover success but client get statfs error message</title>
      <link>https://community.hpe.com/t5/operating-system-linux/nfs-package-failover-success-but-client-get-statfs-error-message/m-p/4301400#M58014</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I have a problem with the Linux MC/ServiceGuard (mcsg) NFS toolkit package. I created an NFS package successfully and exported two file systems:&lt;BR /&gt;&lt;BR /&gt;[root@filesrv2 ~]# showmount -e&lt;BR /&gt;Export list for filesrv2:&lt;BR /&gt;/tank *&lt;BR /&gt;/user *&lt;BR /&gt;&lt;BR /&gt;On the client, I mounted those two NFS file systems as follows:&lt;BR /&gt;&lt;BR /&gt;[root@tapesrv1 /]# df&lt;BR /&gt;Filesystem           1K-blocks      Used Available Use% Mounted on&lt;BR /&gt;/dev/mapper/VolGroup00-LogVol00&lt;BR /&gt;                      56443212   7698544  45877468  15% /&lt;BR /&gt;/dev/cciss/c0d0p1        98747     22323     71325  24% /boot&lt;BR /&gt;none                  16455476         0  16455476   0% /dev/shm&lt;BR /&gt;/dev/mapper/VolGroup00-LogVol03&lt;BR /&gt;                      30254032    238512  28478704   1% /tmp&lt;BR /&gt;/dev/mapper/VolGroup00-LogVol04&lt;BR /&gt;                      20642428     77800  19516052   1% /scratch&lt;BR /&gt;10.10.12.8:/user     2580277440    109152 2449097728   1% /user&lt;BR /&gt;10.10.12.8:/tank     8256977920  11836320 8169308576   1% /tank&lt;BR /&gt;&lt;BR /&gt;The mount command reports:&lt;BR /&gt;&lt;BR /&gt;10.10.12.8:/user on /user type nfs (rw,hard,addr=10.10.12.8)&lt;BR /&gt;10.10.12.8:/tank on /tank type nfs (rw,hard,addr=10.10.12.8)&lt;BR /&gt;&lt;BR /&gt;But when I fail over the NFS package to the other node, the NFS client's dmesg shows an "nfs_statfs: statfs error = 116" message, and the df output becomes:&lt;BR /&gt;&lt;BR /&gt;[root@tapesrv1 /]# df&lt;BR /&gt;Filesystem           1K-blocks      Used Available Use% Mounted on&lt;BR /&gt;/dev/mapper/VolGroup00-LogVol00&lt;BR /&gt;                      56443212   7697796  45878216  15% /&lt;BR /&gt;/dev/cciss/c0d0p1        98747     22323     71325  24% /boot&lt;BR /&gt;none                  16455476         0  16455476   0% /dev/shm&lt;BR /&gt;/dev/mapper/VolGroup00-LogVol03&lt;BR /&gt;                      30254032    238512  28478704   1% /tmp&lt;BR /&gt;/dev/mapper/VolGroup00-LogVol04&lt;BR /&gt;                      20642428     77800  19516052   1% /scratch&lt;BR /&gt;10.10.12.8:/user     8256977920  11836320 8169308576   1% /user&lt;BR /&gt;10.10.12.8:/tank             -         -         -   -  /tank&lt;BR /&gt;&lt;BR /&gt;Do you see? /user is still there, but its content has become the /tank directory's content, and /tank fails to reconnect. Has anyone run into this problem before?&lt;BR /&gt;</description>
      <pubDate>Thu, 06 Nov 2008 05:17:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/nfs-package-failover-success-but-client-get-statfs-error-message/m-p/4301400#M58014</guid>
      <dc:creator>public</dc:creator>
      <dc:date>2008-11-06T05:17:32Z</dc:date>
    </item>
    <item>
      <title>Re: nfs package failover success but client get statfs error message</title>
      <link>https://community.hpe.com/t5/operating-system-linux/nfs-package-failover-success-but-client-get-statfs-error-message/m-p/4301401#M58015</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;The NFS stats data is going to be skewed because it's collected on two servers.&lt;BR /&gt;&lt;BR /&gt;Is the mount going to the floating IP address?&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Thu, 06 Nov 2008 16:08:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/nfs-package-failover-success-but-client-get-statfs-error-message/m-p/4301401#M58015</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2008-11-06T16:08:36Z</dc:date>
    </item>
    <item>
      <title>Re: nfs package failover success but client get statfs error message</title>
      <link>https://community.hpe.com/t5/operating-system-linux/nfs-package-failover-success-but-client-get-statfs-error-message/m-p/4301402#M58016</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Sure, 10.10.12.8 is the floating IP address. I found the following known issue in the NFS toolkit V3 release notes:&lt;BR /&gt;&lt;BR /&gt;JAGaf57739: HA NFS and "Stale NFS Handle"&lt;BR /&gt;&lt;BR /&gt;The workarounds are as follows:&lt;BR /&gt;&lt;BR /&gt;1. Create the logical volumes with persistent minor numbers.&lt;BR /&gt;2. Export the file system with an assigned file system identification.&lt;BR /&gt;&lt;BR /&gt;I don't know if this will resolve my problem; I'll try tomorrow.</description>
      <pubDate>Thu, 06 Nov 2008 17:25:28 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/nfs-package-failover-success-but-client-get-statfs-error-message/m-p/4301402#M58016</guid>
      <dc:creator>public</dc:creator>
      <dc:date>2008-11-06T17:25:28Z</dc:date>
    </item>
  </channel>
</rss>

