06-15-2007 02:50 PM
NFS Slow
Servers: rx2620
HP-UX 11.23
MC/ServiceGuard 11.17
I would like to mount the same file system as an NFS mount point on both servers,
but my problem is that when I move the package to the other host in the cluster, mounting the NFS share is VERY SLOW.
My package.sh is attached.
pec1-/# cmviewcl
CLUSTER        STATUS
cluster1       up

  NODE         STATUS        STATE
  pec1         up            running

    PACKAGE    STATUS        STATE      AUTO_RUN    NODE
    sappec     up            running    enabled     pec1

  NODE         STATUS        STATE
  pec2         up            running
pec1-/#
pec1:
pec1-/# bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 573440 420536 151720 73% /
/dev/vg00/lvol1 311296 140280 169776 45% /stand
/dev/vg00/lvol8 4718592 2350448 2350528 50% /var
/dev/vg00/lvol7 6356992 2547792 3779488 40% /usr
/dev/vg00/lvol9 10240000 5778675 4182526 58% /usr/sap
/dev/vg00/lvol4 2048000 1425328 618520 70% /tmp
/dev/vg00/lvol6 6275072 3960128 2296872 63% /opt
/dev/vg00/lvol5 32768 9976 22640 31% /home
/dev/sapmnt/lvol1 6275072 4805718 1377549 78% /export/sapmnt/PEC
/dev/vg01/lvol2 507904 3813 472645 1% /oracle
/dev/vg01/lvol3 4096000 133055 3715266 3% /oracle/client
/dev/vg01/lvol4 10240000 4858260 5045388 49% /oracle/stage/102_64
/dev/vg01/lvol5 6291456 585851 5354574 10% /oracle/PEC
/dev/vg01/lvol6 6144000 4234421 1790791 70% /oracle/PEC/102_64
/dev/vg01/lvol7 81920000 28684812 49907990 36% /oracle/PEC/sapdata1
/dev/vg01/lvol8 81920000 27471524 51045449 35% /oracle/PEC/sapdata2
/dev/vg01/lvol9 81920000 35630948 43395989 45% /oracle/PEC/sapdata3
/dev/vg01/lvol10 81920000 32251780 46563958 41% /oracle/PEC/sapdata4
/dev/vg02/lvol1 2048000 119376 1808092 6% /oracle/PEC/mirrlogA
/dev/vg02/lvol2 2048000 119376 1808092 6% /oracle/PEC/mirrlogB
/dev/vg02/lvol3 40960000 7550620 31321513 19% /oracle/PEC/oraarch
/dev/vg02/lvol4 2048000 136417 1792116 7% /oracle/PEC/origlogA
/dev/vg02/lvol5 2048000 136417 1792116 7% /oracle/PEC/origlogB
/dev/vgsappec/lvol1
10469376 3215482 6800718 32% /usr/sap/PEC
/dev/vg02/lvol6 5210112 1308198 3658062 26% /oracle/PEC/saptemp1
/dev/vg02/lvol7 15351808 20236 14373356 0% /oracle/PEC/sapreorg
pec0:/export/sapmnt/PEC
6275072 4805720 1377544 78% /sapmnt/PEC
pec1-/#
pec2:
pec2-/# bdf
Filesystem kbytes used avail %used Mounted on
/dev/vg00/lvol3 573440 283136 288048 50% /
/dev/vg00/lvol1 1835008 122760 1698976 7% /stand
/dev/vg00/lvol8 5120000 2877160 2225384 56% /var
/dev/vg00/lvol7 7274496 2554280 4683448 35% /usr
/dev/vg00/lvol10 12288000 21448 11499896 0% /usr/sap
/dev/vg00/lvol4 524288 19792 501544 4% /tmp
/dev/vg00/lvol6 7274496 4066480 3182968 56% /opt
/dev/vg00/lvol5 114688 10024 103872 9% /home
pec0:/export/sapmnt/PEC
6275072 4805720 1377544 78% /sapmnt/PEC
pec2-/#
06-16-2007 12:23 AM
Re: NFS Slow
06-16-2007 01:48 AM
Re: NFS Slow
Change the hosts line in /etc/nsswitch.conf to:
hosts: files [NOTFOUND=continue UNAVAIL=continue] dns
then put all of your production machines' addresses into /etc/hosts. Production systems won't change very often, and you avoid delays and problems caused by another group maintaining DNS.
Bill Hassell, sysadmin
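As a sketch of that setup using the hostnames from this thread (only pec0's address appears in the thread; the pec1/pec2 addresses below are placeholders):

```shell
# /etc/nsswitch.conf -- resolve hosts from local files first, fall back to DNS
hosts: files [NOTFOUND=continue UNAVAIL=continue] dns

# /etc/hosts -- pec0's address is from this thread; the node
# addresses are placeholders, substitute your real ones
192.210.1.216   pec0    # package (relocatable) hostname
192.210.1.x     pec1    # placeholder
192.210.1.x     pec2    # placeholder
```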
06-16-2007 01:56 AM
Re: NFS Slow
The problem is:
The NFS export (/export/sapmnt/PEC) is part of the package.
The package IP:
192.210.1.216 pec0 # package hostname
Message in the log:
/sapmnt/PEC: stat: Stale NFS file handle
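A quick check, as a sketch using the hostnames from this thread, that the package hostname resolves instantly from /etc/hosts rather than through a slow or failing DNS lookup (which would explain the long NFS mount times on failover):

```shell
# On each node, confirm pec0 is in /etc/hosts
grep pec0 /etc/hosts
# nsquery (/usr/contrib/bin on HP-UX) resolves the name using the
# source order configured in /etc/nsswitch.conf
nsquery hosts pec0
```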
06-17-2007 11:33 PM
Re: NFS Slow
Your script shows #Revision: 1.10, but no HP copyright is present.
HP's scripts contain a copyright, so your script is home-grown.
What version of HP-UX is running?
Make certain the volume group minor number matches between servers.
A stale handle indicates that the NFS client's linkage to the real file system on the NFS server has changed or otherwise become invalid. This can happen, for example, when an NFS package moves from one server to another and the LVM volume group minor number differs between the servers.
As a diagnostic, add 'nfsstat -m' to the applic_halt_cmds() function in your script to identify any NFS file systems that may have stale handles.
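The minor-number check can be sketched like this on each node (vgsappec is the volume group from the bdf output above; the minor number shown is only an example):

```shell
# The group file's minor number must be identical on both nodes,
# e.g. 64 0x010000 -- if they differ, vgexport/vgimport the VG on
# the second node so the minor numbers match
ll /dev/vgsappec/group

# After a failed or slow failover, list NFS mounts and their servers
nfsstat -m
```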