Operating System - HP-UX

Ing. Marco Tricase
Occasional Contributor

Massive write on NFS filesystem

I tried to export my Oracle DB (about 100 GB) to an NFS filesystem.
On the server that owns the filesystem I set the largefiles option, and the mount uses NFS version 3. But the export fails with errors EXP-00030 and EXP-00002.
I checked, and there are problems creating files over 2 GB.
Any suggestions?
Thanks in advance
Sys Adm
David_246
Trusted Contributor

Re: Massive write on NFS filesystem

Hi,

What type of FS is it, what NFS version does the client side of the mount show, and what mount options does the client use?

NFS-exporting an Oracle FS of 100 GB ???



Regs David
@yourservice
Stefan Farrelly
Honored Contributor

Re: Massive write on NFS filesystem

It's not a problem with NFS v3. The maximum file size on NFS v3 is 1 TB for 11.0 and 2 TB for 11i. Do you have the latest NFS megapatch installed?

We're using NFS for live Oracle databases with file sizes over 2 GB and have no problems.

All I can suggest is to first try using prealloc on your NFS filesystem to create a 3 GB file and see if it works: prealloc t 3000000000

If it does, then you don't have a largefiles problem on your NFS filesystem; look into your NFS mount options (very important) and then into your Oracle error numbers for a more detailed explanation.
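If prealloc isn't at hand, the same check can be approximated with a sparse write just past the 2 GiB boundary (a portable sketch; the target path here is a placeholder for a file on the NFS mount):

```shell
#!/bin/sh
# Probe large-file support: write one byte at offset 2^31, producing a
# sparse file of 2147483649 bytes. Without largefiles support the write
# fails with "File too large" (EFBIG).
TARGET=/tmp/largefile.probe        # replace with a path on the NFS mount
OFFSET=$((2 * 1024 * 1024 * 1024)) # 2 GiB

if dd if=/dev/zero of="$TARGET" bs=1 count=1 seek="$OFFSET" 2>/dev/null; then
    SIZE=$(ls -l "$TARGET" | awk '{print $5}')
    echo "largefiles OK: file is $SIZE bytes"
else
    SIZE=0
    echo "largefiles NOT supported on this filesystem"
fi
rm -f "$TARGET"
```

On a filesystem mounted without largefiles, the dd call fails at the 2 GB line, which mirrors the EXP-00002 symptom during the export.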

From an HP perspective I recommend the book "Optimizing NFS Performance" by Dave Olker.

In fstab we use these NFS mount options:
nfs rw,bg,hard,intr,rsize=32768,wsize=32768,proto=udp,vers=3,suid 0 0

and performance is fast and very reliable.
I'm from Palmerston North, New Zealand, but somehow ended up in London...
Bill Hassell
Honored Contributor

Re: Massive write on NFS filesystem

100 GB over NFS is pretty scary, especially if this data is important. I would strongly recommend ftp, which is significantly faster and more reliable: ftp handles overlapping acknowledgements and adjusts the transfer to accommodate the speed of the network, while NFS is stateless and treats every block of data as a new transaction.


Bill Hassell, sysadmin
Michael Steele_2
Honored Contributor

Re: Massive write on NFS filesystem

NFS only supports largefiles with version 3. Verify this on the NFS client with:

nfsstat -m
vers=3

To convert vxfs file systems to largefiles on the server and remount:

fsadm -F vxfs -o largefiles /dev/vg##/rlvol#

/etc/exports should not be affected, nor /etc/fstab.

To mount largefiles:

mount -F vxfs -o largefiles /dev/vg_name/lv_name /mount_point

/etc/fstab:

server:/nfs_mnt /nfs_mnt nfs rw,suid 0 0

http://docs.hp.com/hpux/onlinedocs/os/lgfiles4.pdf
Support Fatherhood - Stop Family Law
Yogeeraj_1
Honored Contributor

Re: Massive write on NFS filesystem

Hello,

I would consider a compression option; a script would be as follows:

------------------------------
#!/bin/csh -vx

setenv UID /
setenv FN exp.`date +%j_%Y`.dmp
setenv PIPE /tmp/exp_tmp_ora8i.dmp

setenv MAXSIZE 500m
setenv EXPORT_WHAT "full=y COMPRESS=n"

echo $FN

cd /nfs/atc-netapp1/expbkup_ora8i
ls -l

rm -f expbkup.log export.test exp.*.dmp* $PIPE
mknod $PIPE p

date > expbkup.log
# gzip compresses the export stream from the pipe; split caps each piece at $MAXSIZE
( gzip < $PIPE ) | split -b $MAXSIZE - $FN. &

# uncomment this to just SPLIT the file, not compress and split
#split -b $MAXSIZE $PIPE $FN. &

exp userid=$UID buffer=20000000 file=$PIPE $EXPORT_WHAT >>& expbkup.log
date >> expbkup.log


date > export.test
# verify the dump: reassemble the pieces in order and decompress back into the pipe
cat `echo $FN.* | sort` | gunzip > $PIPE &

# uncomment this to just SPLIT the file, not compress and split
#cat `echo $FN.* | sort` > $PIPE &

imp userid=sys/o8isgr8 file=$PIPE show=y full=y >>& export.test
date >> export.test

tail expbkup.log
tail export.test

ls -l
rm -f $PIPE
--------------------------------------------------
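The pipe-and-split pattern used above can be exercised in miniature with plain POSIX sh, so you can verify the mechanics before running a 100 GB export (all paths and sizes here are illustrative, and seq stands in for the real exp writer):

```shell
#!/bin/sh
# A writer streams into a named pipe; gzip compresses the stream and split
# chops it into fixed-size pieces, so no single output file ever exceeds
# the chunk size -- the same trick the export script uses with $MAXSIZE.
WORK=$(mktemp -d)
PIPE="$WORK/exp.pipe"
mknod "$PIPE" p

# reader side: compress and split into 1 KB chunks (stands in for $MAXSIZE)
( gzip < "$PIPE" | split -b 1024 - "$WORK/dump.gz." ) &

# writer side: stands in for "exp ... file=$PIPE"
seq 1 10000 > "$WORK/original"
cat "$WORK/original" > "$PIPE"
wait

# reassemble in suffix order, decompress, and verify the round trip
cat "$WORK"/dump.gz.* | gunzip > "$WORK/restored"
if cmp -s "$WORK/original" "$WORK/restored"; then ROUNDTRIP=ok; else ROUNDTRIP=fail; fi
echo "round trip: $ROUNDTRIP"
rm -rf "$WORK"
```

Because the writer never creates a regular file on the target filesystem, the 2 GB limit only has to hold for the individual split pieces, not for the full dump.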

Test it and post the results.

Hope this helps!
Best Regards
Yogeeraj
No person was ever honoured for what he received. Honour has been the reward for what he gave (Calvin Coolidge)
Ing. Marco Tricase
Occasional Contributor

Re: Massive write on NFS filesystem

Hi everybody,
thanks for the interest.

The machine that mounts the FS (NFS client) is an N4000 running HP-UX 11.00.
NFS patches installed are:
PHCO_14194 1.0 quota(1) patch for NFS-quotas
PHKL_20202 1.0 Fix pthread error return, nfs/tcp panic
PHNE_17586 1.0 NFS Kernel General Release/Performance Patch
PHNE_21376 1.0 ONC/NFS General Release/Performance Patch
PHNE_23833 1.0 ONC/NFS General Release/Performance Patch

The machine that exports the FS is an L1000 running HP-UX 11.00.
NFS patches are:
PHCO_14194 1.0 quota(1) patch for NFS-quotas
PHNE_17586 1.0 NFS Kernel General Release/Performance Patch
PHNE_23833 1.0 ONC/NFS General Release/Performance Patch


On the NFS server side, the filesystem type is:
root@sapdev:SV1:/save_export># fstyp -v /dev/vg01/lvol2
vxfs
version: 3
f_bsize: 8192
f_frsize: 8192
f_blocks: 11038720
f_bfree: 9641735
f_bavail: 9566409
f_files: 2410464
f_ffree: 2410432
f_favail: 2410432
f_fsid: 1074003970
f_basetype: vxfs
f_namemax: 254
f_magic: a501fcf5
f_featurebits: 0
f_flag: 16
f_fsindex: 6
f_size: 11038720

On the NFS client side, nfsstat -m shows:
/save_export from sapdev:/save_export (Addr 192.168.64.1)
Flags: vers=3,proto=udp,auth=unix,hard,intr,link,symlink,devs,rsize=32768,wsize=32768,retrans=5
Lookups: srtt=133 (332ms), dev=114 (570ms), cur= 73 (1460ms)
Writes: srtt= 15 ( 37ms), dev= 3 ( 15ms), cur= 3 ( 60ms)
All: srtt=133 (332ms), dev=113 (565ms), cur= 73 (1460ms)

I followed Michael Steele's checklist and modified /etc/fstab as Stefan Farrelly suggested, but if I launch a prealloc of a 3 GB file on the mounted filesystem, the message is:
root@sapdb:PR1:/save_export># prealloc t2 3000000000
prealloc: File too large

Instead, if I launch the same command on the NFS server side, it works properly.

TIA,
Marco
Sys Adm