
nfs speed issues on large files (70+GB)

 
Sergey Akifiev_1
Frequent Advisor

nfs speed issues on large files (70+GB)

I have an Oracle `exp' job that writes a full export of a very large database to an NFS-attached volume.
At first things went very well, but when the size of the dump reached about 50GB I noticed that network utilization dropped to 3MB/s from the former 9MB/s.
When copying a 900MB file in parallel onto the same NFS volume, the transfer speed was about 9MB/s.

Oracle states that it is waiting on 'SQL*Net message from client' for exp's session, so the trouble is either in the exp utility or in the NFS client/server.

nfsstat on both server and client shows no errors or timeouts. There are mostly 'write' operations.

Any suggestions?
Peter Godron
Honored Contributor

Re: nfs speed issues on large files (70+GB)

Hi,
if it is the export executable:

The IMP/EXP programs run in two-task mode to protect the SGA from potential corruption by user programs. If you re-link these programs in single-task mode you can gain a significant improvement in speed (up to 30%). Although Oracle won't support this, they supposedly use this method themselves.
Although running in single-task mode is faster, it requires more memory, since the Oracle executable's text is no longer shared between the front-end and background processes. Thus, if you need to transfer large amounts of data between databases, re-linking the executables can be worth the trade-off.

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk singletask
make -f ins_rdbms.mk expst
make -f ins_rdbms.mk impst
make -f ins_rdbms.mk sqlldrst
mv expst $ORACLE_HOME/bin/
mv impst $ORACLE_HOME/bin/
mv sqlldrst $ORACLE_HOME/bin/

Now use expst and impst instead of exp and imp. This used to work on Oracle 8.

Have you tried generating the export onto local disk and then transferring?
Or how about multiple partial exports (see the sketch below)?
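
A minimal sketch of what partial/split exports could look like, assuming your exp version supports the FILESIZE parameter (8i and later); the connect string, schema names and paths below are only placeholders:

# per-schema partial exports (schema names are hypothetical)
exp system/manager owner=SCHEMA1 file=/nfs/dump_schema1.dmp log=exp_schema1.log
exp system/manager owner=SCHEMA2 file=/nfs/dump_schema2.dmp log=exp_schema2.log

# or one full export split into 2GB pieces, if FILESIZE is available
exp system/manager full=y filesize=2GB \
  file=/nfs/full01.dmp,/nfs/full02.dmp,/nfs/full03.dmp log=exp_full.log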
Sergey Akifiev_1
Frequent Advisor

Re: nfs speed issues on large files (70+GB)

Thanks for the fast reply.
I never knew about this Oracle feature. I'll try it sometime (hoping it won't crash the Oracle instance :-) ).

I haven't tried exporting to local disk (there simply isn't that much free space - the dump file's estimated size is about 400GB) or doing multiple partial exports (IMHO, that's more headache than solution).

So, you've never seen performance degradation with large files on NFS?
Peter Godron
Honored Contributor

Re: nfs speed issues on large files (70+GB)

Sergey,
I have had slow NFS mounts, but I thought you would have found these threads and tested a few things (card speed/network load etc.):
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=609597
and somebody from the linux camp:
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=760141

But somebody is actually running a DB across NFS:
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1036284
rick jones
Honored Contributor

Re: nfs speed issues on large files (70+GB)

How much memory is in the db system from which you are doing the export?

How about the NFS server?

You say mostly writes - what are the other operations, and does the mix start to change with the size of the dump? Are there any retrans at all?
there is no rest for the wicked yet the virtuous have no pillows
Sergey Akifiev_1
Frequent Advisor

Re: nfs speed issues on large files (70+GB)

The DB server has 4GB RAM; 963592K is free (according to top).
The NFS server has 2GB RAM.
On the server side I have:
> nfsstat -s

Server Info:
Getattr Setattr Lookup Readlink Read Write Create Remove
51803 26 10318 10 1010225 79270456 36 6
Rename Link Symlink Mkdir Rmdir Readdir RdirPlus Access
0 0 0 4 0 5 5885 14301
Mknod Fsstat Fsinfo PathConf Commit
0 117 8 2 1722705
Server Ret-Failed
530
Server Faults
0
Server Cache Stats:
Inprog Idem Non-idem Misses
6 4 9 81715977
Server Write Gathering:
WriteOps WriteRPC Opsaved
79270456 79270456 0

On the client:
sergey@client:~$ nfsstat -c

Client rpc:
Connection oriented:
calls badcalls badxids
0 0 0
timeouts newcreds badverfs
0 0 0
timers cantconn nomem
0 0 0
interrupts
0
Connectionless oriented:
calls badcalls retrans
88009876 1 3420
badxids timeouts waits
3 3420 0
newcreds badverfs timers
0 0 2687
toobig nomem cantsend
0 0 0
bufulocks
0

Client nfs:
calls badcalls clgets
88009880 1 88009880
cltoomany
0
Version 2: (0 calls)
null getattr setattr
0 0% 0 0% 0 0%
root lookup readlink
0 0% 0 0% 0 0%
read wrcache write
0 0% 0 0% 0 0%
create remove rename
0 0% 0 0% 0 0%
link symlink mkdir
0 0% 0 0% 0 0%
rmdir readdir statfs
0 0% 0 0% 0 0%
Version 3: (88009880 calls)
null getattr setattr
0 0% 8787 0% 28 0%
lookup access readlink
3519 0% 5152 0% 0 0%
read write create
1879362 2% 84293039 95% 29 0%
mkdir symlink mknod
2 0% 0 0% 0 0%
remove rmdir rename
4 0% 0 0% 0 0%
link readdir readdir+
0 0% 1 0% 2650 0%
fsstat fsinfo pathconf
98 0% 9 0% 6 0%
commit
1817194 2%

PS: I gave the `expst' utility a try, but it fails with 'ORA-00932: inconsistent datatypes'.
PPS: after mounting the NFS volume with tcp and rsize=16384,wsize=16384, copying a 900MB file went at about 10-11MB/s.
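
For reference, the remount was done with options along these lines (a sketch; the exact server path and mount point are placeholders, but the options match what nfsstat -m reports):

# remount the NFS volume over TCP with 16KB read/write sizes
umount /bootes
mount -F nfs -o vers=3,proto=tcp,rsize=16384,wsize=16384,hard,intr bootes:/opt /bootes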
rick jones
Honored Contributor

Re: nfs speed issues on large files (70+GB)

It would be good to see stats over an interval while the transfer is taking place. You might be able to run them through beforeafter:

ftp://ftp.cup.hp.com/dist/networking/tools/

to do the math for you. In particular, those "retrans" in connectionless mode - it would be worth seeing if they are increasing during your slow transfers.
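
Something along these lines would do it (the interval is arbitrary; paths are just examples):

# snapshot client RPC/NFS stats, let the export run a while, snapshot again
nfsstat -c > /tmp/nfs.before
sleep 300
nfsstat -c > /tmp/nfs.after
# beforeafter prints the after-minus-before deltas
beforeafter /tmp/nfs.before /tmp/nfs.after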
there is no rest for the wicked yet the virtuous have no pillows
Sergey Akifiev_1
Frequent Advisor

Re: nfs speed issues on large files (70+GB)

Do you think it's worth investigating the reason for ~3k retransmissions out of 80+M writes?

Yesterday I managed to get around 8MB/s throughput by putting `recordlength=65535' on exp's command line.
But Oracle still reports the "SQL*Net message from client" wait. :-) IMHO, I'm hitting an exp limitation.
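
The invocation was along these lines (connect string and paths are placeholders, not the real ones):

# recordlength controls the size of exp's file writes; 65535 is the documented maximum
exp system/manager full=y recordlength=65535 file=/bootes/full.dmp log=/tmp/exp_full.log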
rick jones
Honored Contributor

Re: nfs speed issues on large files (70+GB)

When the stats are since boot and I'm not otherwise sure _when_ those retrans happened, yes. Given the rather long, always timeout-based retransmission in NFS (500 or 700 milliseconds, as I recall), it doesn't take all that many of them to start affecting the average response time. Maybe not at your specific percentages, but the count doesn't have to be all that high.

And besides, I'm a networking guy - I tweak on retransmissions :)
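
For rough scale: if all ~3,400 connectionless retrans landed during the slow phase and each cost a ~0.7 second timeout, that's about 3,400 x 0.7 s ≈ 2,400 seconds, or roughly 40 minutes of added wait spread across the transfer - assuming, of course, that they did cluster there.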
there is no rest for the wicked yet the virtuous have no pillows
Dave Olker
Neighborhood Moderator

Re: nfs speed issues on large files (70+GB)

Can we get some basic information about the configuration?

What type of system is the NFS client?
What type of system is the NFS server?
What OS is the NFS client running?
What OS is the NFS server running?
What NIC interface are the client and server using?
nfsstat -m output from the client?
nfsstat -c output from the client?

Thanks,

Dave


Sergey Akifiev_1
Frequent Advisor

Re: nfs speed issues on large files (70+GB)

The client is HP-UX 11.00 on an L-class server:
sergey@buster:~$ uname -a
HP-UX buster B.11.00 U 9000/800 135901597 unlimited-user license
sergey@buster:~$ model
9000/800/L2000-44
It has 4GB RAM and FC-attached storage.

The server is FreeBSD 6.1-RELEASE on a dual Xeon with 2GB RAM and a 9TB local RAID5 on SATA disks.
sergey@bootes:~> uname -a
FreeBSD bootes.e-business.alfabank.ru 6.1-RELEASE FreeBSD 6.1-RELEASE #2: Fri Dec 29 13:47:52 MSK 2006 root@:/opt/src/sys/i386/compile/SMP i386
sergey@bootes:~> dmesg | grep memory
real memory = 2146828288 (2047 MB)
avail memory = 2095808512 (1998 MB)

NICs on the client:
sergey@buster:~$ su root -c "ioscan -fnC lan" | less
Password:
Class I H/W Path Driver S/W State H/W Type Description
===================================================================
lan 0 0/0/0/0 btlan3 CLAIMED INTERFACE HP PCI 10/100Base-TX Core
/dev/diag/lan0 /dev/ether0
lan 1 0/3/0/0 btlan6 CLAIMED INTERFACE HP A3738A PCI 10/100Base-TX Ultimate Combo
/dev/diag/lan1 /dev/ether1 /dev/lan1
lan 2 0/6/0/0 btlan6 CLAIMED INTERFACE HP A3738A PCI 10/100Base-TX Ultimate Combo
/dev/diag/lan2 /dev/ether2 /dev/lan2

On the server there are two Intel PRO/1000 EB cards:
em0@pci4:0:0: class=0x020000 card=0x000015d9 chip=0x10968086 rev=0x01 hdr=0x00
vendor = 'Intel Corporation'
device = 'PRO/1000 EB Network Connection'
class = network
subclass = ethernet
em1@pci4:0:1: class=0x020000 card=0x000015d9 chip=0x10968086 rev=0x01 hdr=0x00
vendor = 'Intel Corporation'
device = 'PRO/1000 EB Network Connection'
class = network
subclass = ethernet

nfsstat -m:
sergey@buster:~$ nfsstat -m
/bootes from bootes:/opt (Addr 10.1.2.70)
Flags: vers=3,proto=tcp,auth=unix,hard,intr,link,symlink,devs,rsize=16384,wsize=16384,retrans=5
All: srtt= 0 ( 0ms), dev= 0 ( 0ms), cur= 0 ( 0ms)

nfsstat -c:
sergey@buster:~$ nfsstat -c

Client rpc:
Connection oriented:
calls badcalls badxids
37222538 3 3
timeouts newcreds badverfs
0 0 0
timers cantconn nomem
0 0 0
interrupts
3
Connectionless oriented:
calls badcalls retrans
88048183 3 3421
badxids timeouts waits
5 3421 0
newcreds badverfs timers
0 0 2687
toobig nomem cantsend
0 0 0
bufulocks
0

Client nfs:
calls badcalls clgets
125275992 6 125275992
cltoomany
0
Version 2: (0 calls)
null getattr setattr
0 0% 0 0% 0 0%
root lookup readlink
0 0% 0 0% 0 0%
read wrcache write
0 0% 0 0% 0 0%
create remove rename
0 0% 0 0% 0 0%
link symlink mkdir
0 0% 0 0% 0 0%
rmdir readdir statfs
0 0% 0 0% 0 0%
Version 3: (125275992 calls)
null getattr setattr
0 0% 10930 0% 2529 0%
lookup access readlink
5899 0% 5459 0% 0 0%
read write create
9778181 7% 112508877 89% 501 0%
mkdir symlink mknod
50 0% 3 0% 0 0%
remove rmdir rename
367 0% 13 0% 5 0%
link readdir readdir+
0 0% 14 0% 2719 0%
fsstat fsinfo pathconf
403 0% 11 0% 15 0%
commit
2960016 2%
Dave Olker
Neighborhood Moderator

Re: nfs speed issues on large files (70+GB)

Hi Sergey,

Your client is using 100BT cards and this is the performance you're seeing:

> PPS after mounting nfs volume with tcp
> and rsize=16384,wsize=16384 copying of
> 900MB file went at about 10-11MB/s speed.

I don't think you're going to get any better speed than this. 10-11 MB/second is practically wire speed for a 100BT card. If you're getting that speed doing NFS writes then you're getting the most out of your hardware.
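
To put numbers on it: 100BT is 100 Mbit/sec, i.e. 100/8 = 12.5 MB/sec of raw bandwidth, and once Ethernet/IP/TCP/RPC framing overhead is subtracted the practical ceiling sits right around the 10-11 MB/sec you're measuring.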

If you want to go faster than 10-11MB/sec you'd need a faster interface card on the client. Your server has GigE cards so if you were to replace/add a GigE interface on your client you might (assuming they are not saturated by other clients/traffic) be able to get faster write speeds using the GigE card.

Keep in mind certain GigE interfaces, like iether cards, are not supported on HP-UX 11.0, so you might also consider upgrading the OS on your client - especially considering 11.0 is no longer supported.

Regards,

Dave


Sergey Akifiev_1
Frequent Advisor

Re: nfs speed issues on large files (70+GB)

We're in the process of upgrading the client to 11.11 right now.
verrrrryyyyy slooooowwwwwww. :-)
Dave Olker
Neighborhood Moderator

Re: nfs speed issues on large files (70+GB)

Once the upgrade is complete, you should be able to put the iether GigE cards in. If memory serves, those are the best performing GigE cards we have (i.e. most efficient, best driver, etc.).

Regards,

Dave


Sergey Akifiev_1
Frequent Advisor

Re: nfs speed issues on large files (70+GB)

as requested :-)

The upgrade is complete. BUT we're not planning to install gigabit network cards - it would be useless in this case.
If the `exp' utility cannot fill 100Mbit bandwidth, then installing 1Gbit is a waste of time and money.

If you read my posts carefully, you'll note that copying a big file gave me 10MB/s, while exp gave only 6-8MB/s.