
Oracle 11g over NFS, directNFS, noac/actimeo=0, Polyserve, huge DB/tablespace creation delay

 
Adam Garsha
Valued Contributor

Oracle 11g over NFS, directNFS, noac/actimeo=0, Polyserve, huge DB/tablespace creation delay


Our DBAs have run some performance/benchmark tests creating Oracle databases and tablespaces (both 11.2.0.1 and 11.1.0.7) against an NFS-exported PolyServe dboptimized filesystem.

We’ve found that when using Oracle directNFS against a PolyServe dboptimized NFS export, completion time is ten times slower.

So creation tasks that would normally run for just a handful of minutes without the Oracle directNFS driver (and also without the “noac” or “actimeo=0” mount options) run for over an hour.

I am not completely sold that this is a huge deal, since ideally you’d optimize for direct random I/O rather than creation/extension tasks, but the DBAs believe it to be a big deal/show-stopper and I’ll defer to their judgment.

I suspect it has something to do with attribute caching being disabled (i.e. noac) when Oracle directNFS is in play, but I do not know for sure.

The DBAs are opening a case with Oracle. We had already opened a case to ask whether we could remove the "noac"/"actimeo=0" NFS mount options for single-instance Oracle, and we got an affirmative response.
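
For anyone who wants to double-check what the client has actually negotiated, the effective attribute-cache settings are visible on the Linux client (a quick sketch; substitute your own mount point):

# options the kernel is actually using (look for noac/acregmin/acregmax)
grep TESTDB01 /proc/mounts
nfsstat -m

And the candidate /etc/fstab entry with "noac" dropped would just be the 10g-recommended line from test (c.) below, minus that one option (sketch only, pending Oracle's blessing):

vnfs_goes_here:/hpcfs/ora01/data01/TESTDB01 /oracle/data/TESTDB01 nfs rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp 0 0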

I thought it would be a good idea to ask whether you have heard of anything like this; namely, such a performance degradation when using Oracle directNFS against a dboptimized PolyServe filesystem.

This issue has reached director level, and it will be hard for me to argue away a ten-fold increase in database/tablespace creation time (they will arguably see delays with data loads, perhaps with extends, and certainly with creation tasks). So I could see us either getting a fix or not using directNFS (and hence slowing down our DB activity).
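
In case we do end up backing directNFS out, my understanding (worth verifying against the install docs for your exact release, since I'm going from memory) is that on 11.2 it is toggled by relinking the ODM library:

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_off   # fall back to the kernel NFS client
make -f ins_rdbms.mk dnfs_on    # re-enable directNFS later

On 11.1 I believe you swap the libodm11.so link between the stub and libnfsodm11.so by hand instead.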

Some information: I’ve included some of my own simple tests below.

(a.) The FTP test copies from a client memory filesystem to the PolyServe NAS-head memory filesystem and is meant only to show that the network is not a bottleneck of any kind.

FTP test “looks good”; no network bottleneck:

ftp test with "9Gb" nic:
ftp> get big_file.zero
local: big_file.zero remote: big_file.zero
227 Entering Passive Mode (134,48,29,33,23,23)
150 Opening BINARY mode data connection for big_file.zero (1073741824 bytes).
226 File send OK.
1073741824 bytes received in 1.88 secs (5.6e+05 Kbytes/sec) => 4.27Gbps
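
(The Gbps figure is just a unit conversion of ftp’s Kbytes/sec number; a quick sanity check with bc, assuming 1 Kbyte = 1024 bytes and binary gigabits:

$ echo "5.6 * 10^5 * 1024 * 8 / 2^30" | bc -l
4.27246093750000000000

so we were pushing roughly 4.3Gb of the “9Gb” pipe.)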

(b.) dd write test on the NAS head itself against a dboptimized filesystem:

On the NAS head, on the same filesystem, which is "dboptimized" (i.e. no fs buffer cache):
[root@its-poly3 data01]# dd if=/dev/zero of=bigfile.zero bs=1048576 count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 7.18444 seconds, 149 MB/s

(c.) dd write test on the NFS client against the same dboptimized export, using Oracle’s 10g-recommended mount options (vnfs_goes_here:/hpcfs/ora01/data01/TESTDB01 /oracle/data/TESTDB01 nfs rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,tcp,noac):

On the filesystem with the "noac" mount option:
[root@tst-oradb2 TESTDB01]# dd bs=1048576 if=/dev/zero of=big_file.zero count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 2862.35 seconds, 375 kB/s
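
My working theory on why this is so slow: with noac, every write is accompanied by synchronous attribute traffic back to the server. One way to see it, sketched from memory (counter names vary a bit by kernel), is to diff the client-side NFSv3 op counters around a small write:

nfsstat -c -3 > /tmp/nfs.before
dd bs=1048576 if=/dev/zero of=small_file.zero count=64
nfsstat -c -3 > /tmp/nfs.after
diff /tmp/nfs.before /tmp/nfs.after   # watch getattr/write/commit grow per write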

(d.) dd write test on the same NFS client and the same dboptimized export, but without the noac mount option:

With "noac" turned off, but against dboptimized, not great, but much much much better than with “noac” present:
[root@tst-oradb2 TESTDB01]# dd bs=1024k if=/dev/zero of=big_file.zero count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 22.7846 seconds, 47.1 MB/s

(e.) dd read/write tests without file extends and using direct I/O against the same NFS filesystem on the same client:

direct I/O reads and writes, this time to a file that isn’t being grown:
[root@tst-oradb2 TESTDB01]# dd if=system01.dbf of=junk.copy bs=4096k conv=notrunc oflag=direct
170+1 records in
170+1 records out
713039872 bytes (713 MB) copied, 4.10213 seconds, 174 MB/s
3 REPLIES
Adam Garsha
Valued Contributor

Re: Oracle 11g over NFS, directNFS, noac/actimeo=0, Polyserve, huge DB/tablespace creation delay

Oh yes, pertinent information:

Red Hat 5.3 and PolyServe 3.7 SP2 (special patch kit for Nehalem and 5.3 support)
Emil Velez
Honored Contributor

Re: Oracle 11g over NFS, directNFS, noac/actimeo=0, Polyserve, huge DB/tablespace creation delay

Make sure you have the PolyServe hotfixes for 3.7.
Adam Garsha
Valued Contributor

Re: Oracle 11g over NFS, directNFS, noac/actimeo=0, Polyserve, huge DB/tablespace creation delay

We are at SP2. A big issue was that we were using a non-default block size with PolyServe; when doing so, the performance was terrible. Switching to the default (which I think is a 4k-block FS) solved our performance problem.
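
For anyone who hits this later: you can at least confirm the block size a given filesystem presents with stat (the PolyServe admin tools should show the PSFS block size chosen at mkfs time; the path below is our mount, substitute your own):

stat -f /hpcfs/ora01/data01 | grep "Block size"
# expect "Block size: 4096" on a default-block filesystem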