04-30-2009 06:21 PM
vmunix error
We are getting the following errors in the syslog of our HP-UX 11.11 machine:
vmunix: xdr_opaque: decode FAILED
vmunix: NOTICE: nfs_server: bad getargs for 2/16
I checked and found that a few NFS file systems are mounted on the server, but they all look fine.
Please let me know what these messages mean.
Thanks in advance.
Roy
04-30-2009 06:37 PM
Re: vmunix error
Is this server part of your NIS system?
Please post the dmesg output, and also check the thread below. Thanks.
http://forums13.itrc.hp.com/service/forums/questionanswer.do?admit=109447627+1241145367735+28353475&threadId=1023669
04-30-2009 06:45 PM
Re: vmunix error
Here is the dmesg output:
0/4/2/0.58.0.1.0.5.0 dlmfdrv
0/4/2/0.58.0.1.0.5.1 dlmfdrv
0/4/2/0.58.0.1.0.5.2 dlmfdrv
0/4/2/0.58.0.1.0.5.3 dlmfdrv
0/4/2/0.58.0.1.0.5.4 dlmfdrv
0/4/2/0.58.0.1.0.5.5 dlmfdrv
0/4/2/0.58.0.1.0.5.6 dlmfdrv
0/4/2/0.58.0.1.0.5.7 dlmfdrv
0/4/2/0.58.0.1.0.6 tgt
0/4/2/0.58.0.1.0.6.0 dlmfdrv
0/4/2/0.58.0.1.0.6.1 dlmfdrv
0/4/2/0.58.0.1.0.6.2 dlmfdrv
0/4/2/0.58.0.255.0 fcd_vbus
0/4/2/0.58.0.255.0.1 tgt
0/4/2/0.58.0.255.0.1.0 sctl
0/5 lba
fcd: Claimed HP AB378-60101 4Gb Fibre Channel port at hardware path 0/5/1/0 (FC Port 1 on HBA)
0/5/1/0 fcd
0/5/1/0.61 fcd_fcp
0/5/1/0.61.0.9.0 fcd_vbus
0/5/1/0.61.0.9.0.1 tgt
0/5/1/0.61.0.9.0.1.2 dlmfdrv
0/5/1/0.61.0.9.0.1.3 dlmfdrv
0/5/1/0.61.0.9.0.3 tgt
0/5/1/0.61.0.9.0.3.0 dlmfdrv
0/5/1/0.61.0.9.0.3.1 dlmfdrv
0/5/1/0.61.0.9.0.3.2 dlmfdrv
0/5/1/0.61.0.9.0.3.3 dlmfdrv
0/5/1/0.61.0.9.0.3.4 dlmfdrv
0/5/1/0.61.0.9.0.3.5 dlmfdrv
0/5/1/0.61.0.9.0.3.6 dlmfdrv
0/5/1/0.61.0.9.0.3.7 dlmfdrv
0/5/1/0.61.0.9.0.4 tgt
0/5/1/0.61.0.9.0.4.0 dlmfdrv
0/5/1/0.61.0.9.0.4.1 dlmfdrv
0/5/1/0.61.0.9.0.4.2 dlmfdrv
0/5/1/0.61.0.9.0.4.3 dlmfdrv
0/5/1/0.61.0.9.0.4.4 dlmfdrv
0/5/1/0.61.0.9.0.4.5 dlmfdrv
0/5/1/0.61.0.9.0.4.6 dlmfdrv
0/5/1/0.61.0.9.0.4.7 dlmfdrv
0/5/1/0.61.0.9.0.5 tgt
0/5/1/0.61.0.9.0.5.0 dlmfdrv
0/5/1/0.61.0.9.0.5.1 dlmfdrv
0/5/1/0.61.0.9.0.5.2 dlmfdrv
0/5/1/0.61.0.9.0.5.3 dlmfdrv
0/5/1/0.61.0.9.0.5.4 dlmfdrv
0/5/1/0.61.0.9.0.5.5 dlmfdrv
0/5/1/0.61.0.9.0.5.6 dlmfdrv
0/5/1/0.61.0.9.0.5.7 dlmfdrv
0/5/1/0.61.0.9.0.6 tgt
0/5/1/0.61.0.9.0.6.0 dlmfdrv
0/5/1/0.61.0.9.0.6.1 dlmfdrv
0/5/1/0.61.0.9.0.6.2 dlmfdrv
0/5/1/0.61.0.255.0 fcd_vbus
0/5/1/0.61.0.255.0.9 tgt
0/5/1/0.61.0.255.0.9.0 sctl
fcd: Claimed HP AB378-60101 4Gb Fibre Channel port at hardware path 0/5/2/0 (FC Port 1 on HBA)
0/5/2/0 fcd
0/5/2/0.61 fcd_fcp
0/5/2/0.61.0.1.0 fcd_vbus
0/5/2/0.61.0.1.0.1 tgt
0/5/2/0.61.0.1.0.1.2 sdisk
0/5/2/0.61.0.1.0.1.3 sdisk
0/5/2/0.61.0.1.0.3 tgt
0/5/2/0.61.0.1.0.3.0 sdisk
0/5/2/0.61.0.1.0.3.1 sdisk
0/5/2/0.61.0.1.0.3.2 sdisk
0/5/2/0.61.0.1.0.3.3 sdisk
0/5/2/0.61.0.1.0.3.4 sdisk
0/5/2/0.61.0.1.0.3.5 sdisk
0/5/2/0.61.0.1.0.3.6 sdisk
0/5/2/0.61.0.1.0.3.7 sdisk
0/5/2/0.61.0.1.0.4 tgt
0/5/2/0.61.0.1.0.4.0 sdisk
0/5/2/0.61.0.1.0.4.1 sdisk
0/5/2/0.61.0.1.0.4.2 sdisk
0/5/2/0.61.0.1.0.4.3 sdisk
0/5/2/0.61.0.1.0.4.4 sdisk
0/5/2/0.61.0.1.0.4.5 sdisk
0/5/2/0.61.0.1.0.4.6 sdisk
0/5/2/0.61.0.1.0.4.7 sdisk
0/5/2/0.61.0.1.0.5 tgt
0/5/2/0.61.0.1.0.5.0 sdisk
0/5/2/0.61.0.1.0.5.1 sdisk
0/5/2/0.61.0.1.0.5.2 sdisk
0/5/2/0.61.0.1.0.5.3 sdisk
0/5/2/0.61.0.1.0.5.4 sdisk
0/5/2/0.61.0.1.0.5.5 sdisk
0/5/2/0.61.0.1.0.5.6 sdisk
0/5/2/0.61.0.1.0.5.7 sdisk
0/5/2/0.61.0.1.0.6 tgt
0/5/2/0.61.0.1.0.6.0 sdisk
0/5/2/0.61.0.1.0.6.1 sdisk
0/5/2/0.61.0.1.0.6.2 sdisk
0/5/2/0.61.0.255.0 fcd_vbus
0/5/2/0.61.0.255.0.1 tgt
0/5/2/0.61.0.255.0.1.0 sctl
0/6 lba
0/6/1/0 igelan
8 memory
16 ipmi
128 processor
129 processor
136 processor
137 processor
144 processor
145 processor
System Console is on the Built-In Serial Interface
igelan0: INITIALIZING HP A6825-60101 PCI 1000Base-T Adapter at hardware path 0/1/2/0
igelan1: INITIALIZING HP A6825-60101 PCI 1000Base-T Adapter at hardware path 0/6/1/0
Logical volume 64, 0x3 configured as ROOT
Logical volume 64, 0x2 configured as SWAP
Logical volume 64, 0x2 configured as DUMP
Swap device table: (start & size given in 512-byte blocks)
entry 0 - major is 64, minor is 0x2; start = 0, size = 28672000
Dump device table: (start & size given in 1-Kbyte blocks)
entry 0000000000000000 - major is 31, minor is 0x20000; start = 314208, size = 14336000
Starting the STREAMS daemons-phase 1
Create STCP device files
Starting the STREAMS daemons-phase 2
$Revision: vmunix: vw: -proj selectors: CUPI80_BL2000_1108 -c 'Vw for CUPI80_BL2000_1108 build' -- cupi80_bl2000_1108 'CUPI80_BL2000_1108' Wed Nov 8 19:24:56 PST 2000 $
Memory Information:
physical page size = 4096 bytes, logical page size = 4096 bytes
Physical: 25163776 Kbytes, lockable: 23029512 Kbytes, available: 22966320 Kbytes
xdr_opaque: decode FAILED
NOTICE: nfs_server: bad getargs for 2/16
04-30-2009 10:11 PM
Re: vmunix error
Look here
http://www11.itrc.hp.com/service/cki/docDisplay.do?docLocale=en&docId=emr_na-c00924085-3
Hope it helps
05-01-2009 12:24 AM
Re: vmunix error
05-01-2009 03:04 AM
Re: vmunix error
NFS server: "xdr_opaque: decode FAILED" and "NOTICE: nfs_server: bad getargs"
PROBLEM
The following two recurrent messages occur in syslog on an
NFS Server (ONC/NFS 1.2):
xdr_opaque: decode FAILED
NOTICE: nfs_server: bad getargs for 2/16
What do these messages mean, and what are the consequences for
the NFS Server?
CONFIGURATION
Operating System - HP-UX
Version - 11.x
Subsystem - NFS Server - ONC/NFS 1.2
RESOLUTION
The two messages in the NFS server syslog indicate that some data
coming from an NFS client could not be XDR-decoded correctly on
the HP-UX NFS Server.
With respect to "bad getargs for 2/16" in the second syslog
message:
o "2" represents "NFS version 2"
o "16" represents the procedure number 16, which is
the "READDIR" procedure. The READDIR (read directory)
procedure has 3 arguments:
1. the filehandle for the directory to be read,
2. a cookie, and
3. the maximum size of the results in bytes.
At least one of these arguments could not be XDR-decoded correctly,
which causes the two messages to be logged.
Typically, this behavior comes from data corruption on the network
or, more probably, from the NFS client itself, since the same
messages with the same values recur (xdr_opaque, 2, 16).
These two messages, as such, have no impact on the NFS Server;
they are informative only.
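For illustration only, here is a minimal Python sketch (not the HP-UX kernel code) of how those three READDIR arguments are laid out in XDR and where decoding can fail. The field sizes follow RFC 1094; the helper names (xdr_opaque_decode, decode_readdirargs) are hypothetical and exist only for this example.

import struct

FHSIZE = 32       # NFSv2 file handle size in bytes (RFC 1094)
COOKIESIZE = 4    # NFSv2 readdir cookie size in bytes

def xdr_opaque_decode(buf, offset, size):
    # Decode a fixed-length XDR opaque field, padded to a 4-byte boundary.
    # Raises ValueError when the buffer is too short -- roughly what the
    # kernel reports as "xdr_opaque: decode FAILED".
    padded = (size + 3) & ~3
    if offset + padded > len(buf):
        raise ValueError("xdr_opaque: decode FAILED")
    return buf[offset:offset + size], offset + padded

def decode_readdirargs(buf):
    # NFSv2 READDIR (procedure 16) arguments: file handle, cookie, count.
    fh, off = xdr_opaque_decode(buf, 0, FHSIZE)            # directory file handle
    cookie, off = xdr_opaque_decode(buf, off, COOKIESIZE)  # resume cookie
    if off + 4 > len(buf):
        raise ValueError("readdirargs: short buffer for count")
    (count,) = struct.unpack_from(">I", buf, off)          # max reply size in bytes
    return fh, cookie, count

# A well-formed request decodes cleanly ...
good = bytes(FHSIZE) + bytes(COOKIESIZE) + struct.pack(">I", 8192)
print(decode_readdirargs(good))

# ... but a truncated or corrupted request from a client fails while the
# file handle (an XDR opaque) is being decoded, mirroring the syslog message.
try:
    decode_readdirargs(good[:20])
except ValueError as err:
    print(err)    # xdr_opaque: decode FAILED

The point is simply that a short or mangled argument buffer makes the opaque decode fail before the server ever sees a usable request, which is why the server just logs the message and carries on.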
ALT KEYWORDS
"bad getargs for 2/16"
"bad getargs"
"xdr_opaque: decode failed"
"xdr-decoded"
getargs
nfs
nfs_server
xdr
xdr_opaque