<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Oracle DB over NFS on Netapp only 35MB/sec in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116357#M541014</link>
    <description>&lt;P&gt;Hi Christian,&lt;BR /&gt;&lt;BR /&gt;Yeah, that Dave Olker guy writes a lot of technical papers. He usually posts them on docs.hp.com once he finishes them so check there periodically for future ones. &lt;LI-EMOJI id="lia_winking-face" title=":winking_face:"&gt;&lt;/LI-EMOJI&gt;&lt;BR /&gt;&lt;BR /&gt;Do you happen to have any 11i v3 systems in your test ring? We replaced the entire NFS client and server code in 11i v3 so I'd be willing to bet it would behave differently than the 11i v2 client. However, since my 11i v2 client is able to push my HP-UX server to full speed this seems like an issue specific to HP-&amp;gt;NetApp.&lt;BR /&gt;&lt;BR /&gt;I've heard in the past NetApp has recommended HP customers tune down the number of biods to 1 or 2 and that would give them better performance with filers. I always assumed that was because of the thundering herd and filesystem semaphore issues we had in our code that we recently resolved with the nfs_wakeup_one=2 and nfs_fine_grain_fs_lock=2 tunables. But there may be other quirks between our client and the filers that don't show up with other servers.&lt;BR /&gt;&lt;BR /&gt;In any case, now that you've found the "Designing a High Performance NFS Server" paper, I'd suggest using all the tips/tricks outlined there (CKO, TSO, TCP windows, etc.). I'd also suggest trying the 11i v2 client with 1 or 2 biod daemons to see if that affects throughput at all. Finally, I'd really like to see how an 11i v3 client behaves in your environment.&lt;BR /&gt;&lt;BR /&gt;I have 11i v2 and v3 systems in my test ring but my NetApp filer is too slow to see any performance differences in my tests.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Dave&lt;/P&gt;</description>
    <pubDate>Fri, 18 Jun 2021 11:00:13 GMT</pubDate>
    <dc:creator>Dave Olker</dc:creator>
    <dc:date>2021-06-18T11:00:13Z</dc:date>
    <item>
      <title>Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116343#M541000</link>
      <description>We are experiencing low performance across NFS on our NetApp filer.&lt;BR /&gt;Throughput is limited to ~35MB/sec for table scans and also for a dd on the filesystem.&lt;BR /&gt;&lt;BR /&gt;Our hardware consists of an rp7410 with 2x 1GBit NICs and a NetApp FAS3050C with 4x 1GB (one aggregate).&lt;BR /&gt;Network equipment is Cisco 65xx switches.&lt;BR /&gt;&lt;BR /&gt;I opened calls with HP and with NetApp about this. The specialists concluded that the behaviour of the HP-UX NFS client in combination with the NetApp filer causes this bad transfer rate.&lt;BR /&gt;We crosschecked this by using an HP ProLiant server with Red Hat Linux on the same network and filer equipment. There we got a bandwidth of 100MB/sec.&lt;BR /&gt;&lt;BR /&gt;A network trace made on the filer shows the behaviour of both:&lt;BR /&gt;&lt;BR /&gt;HP-UX Client:&lt;BR /&gt; 15   0.000789 192.168.158.222 -&amp;gt; 192.168.158.55 NFS V3 WRITE Call, FH:0x4f3aacce Offset:0 Len:32768 UNSTABLE[Unreassembled Packet]&lt;BR /&gt; 16   0.005762 192.168.158.55 -&amp;gt; 192.168.158.222 NFS [TCP ACKed lost segment] V3 WRITE Reply (Call In 15) Len:32768 FILE_SYNC&lt;BR /&gt; 17   0.000659 192.168.158.222 -&amp;gt; 192.168.158.55 NFS V3 WRITE Call, FH:0x4f3aacce Offset:32768 Len:32768 UNSTABLE[Unreassembled Packet]&lt;BR /&gt; 18   0.005389 192.168.158.55 -&amp;gt; 192.168.158.222 NFS [TCP ACKed lost segment] V3 WRITE Reply (Call In 17) Len:32768 FILE_SYNC&lt;BR /&gt;&lt;BR /&gt;Linux Client:&lt;BR /&gt; 10   0.112781 172.27.224.190 -&amp;gt; 172.27.240.55 NFS V3 WRITE Call, FH:0x36672f4a Offset:0 Len:32768 UNSTABLE[Unreassembled Packet]&lt;BR /&gt; 43   0.114106 172.27.224.190 -&amp;gt; 172.27.240.55 NFS V3 WRITE Call, FH:0x36672f4a Offset:32768 Len:32768 UNSTABLE[Unreassembled Packet]&lt;BR /&gt; 77   0.114556 172.27.224.190 -&amp;gt; 172.27.240.55 NFS V3 WRITE Call, FH:0x36672f4a Offset:65536 Len:32768 UNSTABLE[Unreassembled Packet]&lt;BR /&gt;112   0.114941 172.27.224.190 -&amp;gt; 172.27.240.55 NFS V3 WRITE Call, FH:0x36672f4a Offset:98304 Len:32768 UNSTABLE[Unreassembled Packet]&lt;BR /&gt;179   0.115494 172.27.224.190 -&amp;gt; 172.27.240.55 NFS V3 WRITE Call, FH:0x36672f4a Offset:163840 Len:32768 UNSTABLE[Unreassembled Packet]&lt;BR /&gt;214   0.115778 172.27.224.190 -&amp;gt; 172.27.240.55 NFS V3 WRITE Call, FH:0x36672f4a Offset:196608 Len:32768 UNSTABLE[Unreassembled Packet]&lt;BR /&gt;236   0.115979 172.27.240.55 -&amp;gt; 172.27.224.190 NFS V3 WRITE Reply (Call In 10) Len:32768 FILE_SYNC&lt;BR /&gt;237   0.115987 172.27.240.55 -&amp;gt; 172.27.224.190 NFS V3 WRITE Reply (Call In 43) Len:32768 FILE_SYNC&lt;BR /&gt;250   0.116062 172.27.224.190 -&amp;gt; 172.27.240.55 NFS V3 WRITE Call, FH:0x36672f4a Offset:229376 Len:32768 UNSTABLE[Unreassembled Packet]&lt;BR /&gt;283   0.116333 172.27.224.190 -&amp;gt; 172.27.240.55 NFS V3 WRITE Call[Unreassembled Packet]&lt;BR /&gt;318   0.116626 172.27.224.190 -&amp;gt; 172.27.240.55 NFS V3 WRITE Call, FH:0x36672f4a Offset:294912 Len:32768 UNSTABLE[Unreassembled Packet]&lt;BR /&gt;351   0.116897 172.27.224.190 -&amp;gt; 172.27.240.55 NFS V3 WRITE Call, FH:0x36672f4a Offset:327680 Len:32768 UNSTABLE[Unreassembled Packet]&lt;BR /&gt;371   0.117063 172.27.240.55 -&amp;gt; 172.27.224.190 NFS V3 WRITE Reply (Call In 77) Len:32768 FILE_SYNC&lt;BR /&gt;&lt;BR /&gt;The explanation from support was: while Linux sends multiple NFS write requests without waiting for the ack, HP-UX sends a single NFS call and waits until the filer acknowledges it. Each write takes 6ms - that is the bandwidth limitation!&lt;BR /&gt;&lt;BR /&gt;I also tried Oracle 11g with the DirectNFS client on HP-UX - this gives us 100MB/sec on the rp7410 as well.&lt;BR /&gt;&lt;BR /&gt;So the explanation from support seems to be right: the HP-UX OS NFS client behaves badly here.&lt;BR /&gt;&lt;BR /&gt;Does anybody have an idea whether this behaviour of the OS NFS client can be changed - by patching, kernel parameters or anything else?&lt;BR /&gt;&lt;BR /&gt;I cannot believe that Linux outperforms HP-UX in this discipline!&lt;BR /&gt;&lt;BR /&gt;I'm looking forward to your answers!&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;Christian</description>
      <pubDate>Thu, 13 Dec 2007 15:51:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116343#M541000</guid>
      <dc:creator>Christian Birkmeier</dc:creator>
      <dc:date>2007-12-13T15:51:38Z</dc:date>
    </item>
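The serialized-write behaviour described above puts a hard ceiling on throughput: one 32 KiB WRITE per ~6 ms call/reply round trip. A back-of-the-envelope check, using illustrative numbers taken from the trace (the awk one-liner is just a sketch):

```shell
# One outstanding 32 KiB NFS WRITE per ~6 ms round trip caps each
# serialized stream far below GigE wire speed.
awk 'BEGIN {
  bytes = 32768            # wsize per WRITE call (from the trace)
  rtt   = 0.006            # seconds between call and reply (from the trace)
  printf "per-stream ceiling: %.1f MB/sec\n", bytes / rtt / 1e6
}'
```

With a handful of such streams in flight that lines up roughly with the ~35 MB/sec observed, while the Linux client's pipelined writes are limited by the wire instead.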
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116344#M541001</link>
      <description>It has been a very long time since I played with NFS, but back then (ca. UX 9.X and 10.X) I was seeing multiple outstanding writes on a file.  &lt;BR /&gt;&lt;BR /&gt;I wonder whether biod's are enabled, and whether the writes by the app are O_SYNC or something?&lt;BR /&gt;&lt;BR /&gt;Down in the noise level and not related (iirc) to the one-at-a-time business, Linux (2.6 kernels anyway) will probably use TSO on the GbE interface.  HP-UX by default will not, as HP-UX defaults CKO (ChecKsum Offload) to off for the 1 Gigabit NICs.  It does default CKO to on for the 10Gbit NICs.</description>
      <pubDate>Fri, 14 Dec 2007 01:33:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116344#M541001</guid>
      <dc:creator>rick jones</dc:creator>
      <dc:date>2007-12-14T01:33:42Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116345#M541002</link>
      <description>Christian,&lt;BR /&gt;&lt;BR /&gt;Not sure if this is going to help; however, HP-UX 11i v3 supports NFSv4 and earlier versions of HP-UX didn't, which may be part of the explanation. I'm just not up to date with NFS; it has been a while.&lt;BR /&gt;&lt;BR /&gt;I'm assuming you have HP-UX 11i v1?&lt;BR /&gt;&lt;BR /&gt;If possible, let us know what OS and NFS version you are running. &lt;BR /&gt;&lt;BR /&gt;A quick test with 11i v3 may prove fruitful. I too have seen very poor performance using NetApp with HP-UX, but that was a while ago.</description>
      <pubDate>Fri, 14 Dec 2007 02:34:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116345#M541002</guid>
      <dc:creator>Trevor Roddam_1</dc:creator>
      <dc:date>2007-12-14T02:34:12Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116346#M541003</link>
      <description>Hi all,&lt;BR /&gt;&lt;BR /&gt;thank you for your quick responses.&lt;BR /&gt;Here are some more details about our environment.&lt;BR /&gt;The rp7410 runs 11.11 with NFSv3, using the mount options NetApp and Oracle recommend.&lt;BR /&gt;Besides the rp7410 we also have newer machines in use - e.g. an rx3600 running 11.23 with multiple GbE interfaces.&lt;BR /&gt;For testing purposes I installed 11i v3 on another rx3600 and mounted the Oracle database with NFSv4. But the throughput was also not good.&lt;BR /&gt;Only when I use the Oracle DirectNFS client - whether on the rp7410 with 11i v1 or on the rx3600 with 11i v2/11i v3 - do I get very good performance. But 11g is very new and is not supported by our applications.&lt;BR /&gt;As far as I understand the experts from HP and NetApp support, the whole story is about the behaviour of the HP-UX NFS client: no outstanding writes/reads.&lt;BR /&gt;By the way - I configured 16 biod's on each machine and set these kernel parameters:&lt;BR /&gt;&lt;BR /&gt;JAGad15675&lt;BR /&gt;nfs_new_lock_code = 1 &lt;BR /&gt;&lt;BR /&gt;JAGad72416&lt;BR /&gt;async_read_avoidance_enabled = 1&lt;BR /&gt;&lt;BR /&gt;do local paging for binaries over NFS&lt;BR /&gt;page_text_to_local = 1&lt;BR /&gt;&lt;BR /&gt;Are there any other kernel parameters which change the behaviour of the NFS client so that it also issues outstanding writes and reads? I think that's what the Linux NFS client does and also what the Oracle NFS client does.&lt;BR /&gt;&lt;BR /&gt;Do you need further info about our environment? Any suggestions?&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;Christian</description>
      <pubDate>Fri, 14 Dec 2007 08:11:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116346#M541003</guid>
      <dc:creator>Christian Birkmeier</dc:creator>
      <dc:date>2007-12-14T08:11:31Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116347#M541004</link>
      <description>Hi Christian,&lt;BR /&gt;&lt;BR /&gt;A couple of things - we have a lot more kernel tunables, and therefore a much more flexible NFS client, on 11.23 and 11.31, so if you can use 11.23 on your client I'd suggest at least running that.  &lt;BR /&gt;&lt;BR /&gt;The fact that the HP client is sending write requests with UNSTABLE but the NetApp server is replying with FILE_SYNC is surprising to me.  Why is the filer using FILE_SYNC semantics?&lt;BR /&gt;&lt;BR /&gt;In any case, I'd be happy to work with you to improve the NFS performance of your client.  My first suggestion would be to test the throughput with a tool like iozone rather than via an application.&lt;BR /&gt;&lt;BR /&gt;I've never had a problem driving my HP-UX clients to 100 MB/sec with Gigabit Ethernet.  If you're able to run 11.23 on your NFS client, here are the tunable settings I recommend you use:&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;_______________________________________&lt;BR /&gt;&lt;BR /&gt; &lt;BR /&gt;o  nfs_async_read_avoidance_enabled&lt;BR /&gt;This tells the NFS client to issue READ calls even if all the biods are busy servicing WRITE calls &lt;BR /&gt;&lt;BR /&gt;Default Setting: 0 (Disabled) &lt;BR /&gt;Recommended Setting: 1 (Enabled)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;o  nfs_fine_grain_fs_lock &lt;BR /&gt;By default (0), the NFS client code uses a global system-wide semaphore to control access to many routines and data structures.  This use of a global semaphore leads to a lack of parallel activity through many of the main NFS client code paths.  When set to 2, the client avoids all use of this global filesystem semaphore and uses finer grained locks to protect critical code paths and data structures.  The result is a much higher performing NFS client. 
&lt;BR /&gt;&lt;BR /&gt;Default Setting: 0 (Use FS Semaphore in all code paths) &lt;BR /&gt;Recommended Setting: 2 (Avoid FS Semaphore in all code paths)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;o  nfs_new_lock_code &lt;BR /&gt;By default (0) when an NFS client places a lock on a file we turn off the biods and buffer cache for this file, effectively making all access to the file synchronous.  When enabled (1) the client will enable the biods and buffer cache on locked files if the entire file is locked. &lt;BR /&gt;&lt;BR /&gt;Default Setting: 0 (Disabled) &lt;BR /&gt;Recommended Setting: 1 (Use biods and buffer cache on locked files)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;o  nfs_new_rnode_lock_code &lt;BR /&gt;This instructs the NFS client to allow processes waiting to lock an rnode (NFS version of an inode) on the NFS client to be interrupted by ^C.  By default these processes sleep in the kernel at a non-interruptible state. &lt;BR /&gt;&lt;BR /&gt;Default Setting: 0 (threads are not interruptible while waiting to lock an rnode) &lt;BR /&gt;Recommended Setting: 1 (threads are interruptible while waiting to lock an rnode)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;o  nfs_wakeup_one &lt;BR /&gt;There are a couple nasty thundering herd conditions in the NFS client code.  By setting this tunable to 2 both of the thundering herd conditions are avoided and the CPU contention of the system is dramatically reduced as well as throughput increased. &lt;BR /&gt;&lt;BR /&gt;Default Setting: 0 (both thundering herd conditions exist) &lt;BR /&gt;Recommended Setting: 2 (bypass both thundering herd conditions)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;o  nfs3_new_acache &lt;BR /&gt;By default (0) the NFS client uses a linear search when walking the list of credential structures associated with a given file or directory (i.e. all the users who want to look at a given NFS file or directory).  
When enabled (1), the NFS client uses a hashed algorithm which can greatly increase performance and reduce CPU overhead when many users attempt to access the same shared file or directory. &lt;BR /&gt;&lt;BR /&gt;Default Setting: 0 (linear credential search) &lt;BR /&gt;Recommended Setting: 1 (hashed credential search)&lt;BR /&gt; &lt;BR /&gt;_______________________________________&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;If you set the above tunables to the recommended values I'd be curious if the performance improves.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Dave</description>
      <pubDate>Fri, 14 Dec 2007 20:40:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116347#M541004</guid>
      <dc:creator>Dave Olker</dc:creator>
      <dc:date>2007-12-14T20:40:10Z</dc:date>
    </item>
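The six tunables Dave lists above can be applied on an 11.23 client with kctune; a minimal sketch (run as root, and reboot afterwards, since the follow-up post notes several of these only take effect after a restart):

```shell
# Set the NFS client tunables recommended in the post above (HP-UX 11.23).
# Sketch only -- check current values first, e.g.: kctune | grep nfs
kctune nfs_async_read_avoidance_enabled=1   # allow READs while biods service WRITEs
kctune nfs_fine_grain_fs_lock=2             # avoid the global filesystem semaphore
kctune nfs_new_lock_code=1                  # keep biods/buffer cache on whole-file locks
kctune nfs_new_rnode_lock_code=1            # make rnode-lock waits interruptible
kctune nfs_wakeup_one=2                     # bypass both thundering-herd conditions
kctune nfs3_new_acache=1                    # hashed instead of linear credential search
```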
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116348#M541005</link>
      <description>Hi Dave,&lt;BR /&gt;&lt;BR /&gt;I'm very happy to hear from you. I have already read your book about HP-UX NFS and followed all the recommendations in it. The book is based on 11.11, so all the new things in 11.23 and 11.31 are not included.&lt;BR /&gt;&lt;BR /&gt;I opened a call for the performance problem 1 1/2 years ago. The conclusion of the discussion between the NetApp and the HP expert (Mr. Holger Zessel - call# 1209123232-121) was that the combination of HP-UX client and NetApp NFS server does not seem to work very well. The NetApp expert explained that HP-UX sends unstable writes but acts as if it sends stable writes. Therefore the filer waits 5ms (timeout) after each request before sending the response to the "unstable write". Only after the response arrives on the HP-UX box is the next request sent! This limits the bandwidth on GbE to 15-25MB/sec. Oracle's direct NFS client seems to work differently - with the same Oracle DB on the same HP-UX server (11.23), the data files on the same filer and the same mount points, I get about 100MB/sec, while the OS NFS client gives me only 20-25MB/sec.&lt;BR /&gt;&lt;BR /&gt;My first task this morning was to try out your kernel parameter recommendations on a test server (rx3600, 11.23). But the throughput with the dd command and also with Oracle did not grow. I'm not familiar with iozone - which command line parameters should I use? In what output are you interested?&lt;BR /&gt;&lt;BR /&gt;In what further details are you interested?&lt;BR /&gt;&lt;BR /&gt;I'm really looking forward to your answer!&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;Christian</description>
      <pubDate>Mon, 17 Dec 2007 13:09:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116348#M541005</guid>
      <dc:creator>Christian Birkmeier</dc:creator>
      <dc:date>2007-12-17T13:09:40Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116349#M541006</link>
      <description>Hi Christian,&lt;BR /&gt;&lt;BR /&gt;&amp;gt; My first task this morning was to try out &lt;BR /&gt;&amp;gt; your kernel parameter recommendations on a &lt;BR /&gt;&amp;gt; test server (rx3600 11.23). But the &lt;BR /&gt;&amp;gt; throughput with dd command and also with &lt;BR /&gt;&amp;gt; Oracle did not grow.&lt;BR /&gt;&lt;BR /&gt;Did you reboot the system after making these changes?  Many of them only take effect after a reboot, especially if you have NFS filesystems mounted when the kctune command is used.&lt;BR /&gt;&lt;BR /&gt;I tried reproducing your problem on my systems.  I have an older NetApp filer and I used iozone to write a 1GB file to the filer from the following clients:&lt;BR /&gt;&lt;BR /&gt;HP-UX 11.23&lt;BR /&gt;SuSE 9.2&lt;BR /&gt;Solaris 10&lt;BR /&gt;&lt;BR /&gt;All of these clients are similar in hardware configuration.  I used the same mount options on all 3 clients and here are the throughput results I saw:&lt;BR /&gt;&lt;BR /&gt;HP-UX 11.23: 34901 KB/sec&lt;BR /&gt;SuSE 9.2: 32031 KB/sec&lt;BR /&gt;Solaris 10: 34337 KB/sec&lt;BR /&gt;&lt;BR /&gt;So none of my systems push the GigE interface on the filer to wire speed, possibly because the filer is pretty old, as is the version of Data ONTAP running on it.  But my point is the 11.23 client is able to get comparable performance to the Solaris 10 and SuSE clients.&lt;BR /&gt;&lt;BR /&gt;Also, I looked at a Wireshark trace of the data transfers and my trace shows the NetApp filer replying to the write calls with the UNSTABLE flag rather than FILE_SYNC.  I thought that was interesting.&lt;BR /&gt;&lt;BR /&gt;If you want to play with iozone, you can download the source from iozone.org.  It's pretty easy to compile and the syntax I was using to test with is:&lt;BR /&gt;&lt;BR /&gt;# iozone -c -e -s 1g -r 32k -i 0 -+n -f /filer-1/testfile&lt;BR /&gt;&lt;BR /&gt;Hope this helps,&lt;BR /&gt;&lt;BR /&gt;Dave</description>
      <pubDate>Tue, 18 Dec 2007 00:40:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116349#M541006</guid>
      <dc:creator>Dave Olker</dc:creator>
      <dc:date>2007-12-18T00:40:44Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116350#M541007</link>
      <description>Hi Dave,&lt;BR /&gt;&lt;BR /&gt;Yesterday I did not reboot the system after the kernel parameter changes, but unmounted and remounted the file systems.&lt;BR /&gt;Today I rebooted the system as you recommend. It seems that the dd write gives me slightly better throughput than yesterday -&amp;gt; ~30MB/sec versus ~25MB/sec yesterday.&lt;BR /&gt;&lt;BR /&gt;Today I did a nettl trace of an Oracle select. The first trace is with the Direct NFS client active and the second with the HP-UX NFS client.&lt;BR /&gt;The main difference I found is that Oracle sends two read calls in one packet. The answer from the filer also seems to come faster.&lt;BR /&gt;I will attach the trace files: one with Oracle Direct NFS, another with Oracle and the HP-UX NFS client, and the last captured during iozone -c -e -s 1m -r 32k -i 0 -+n -f /mnt/testfile&lt;BR /&gt;&lt;BR /&gt;I hope this information helps.&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;Christian</description>
      <pubDate>Tue, 18 Dec 2007 17:02:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116350#M541007</guid>
      <dc:creator>Christian Birkmeier</dc:creator>
      <dc:date>2007-12-18T17:02:12Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116351#M541008</link>
      <description>Hi Christian,&lt;BR /&gt;&lt;BR /&gt;Since you have the test environment all set up, I'd like you to try testing with UDP instead of TCP just to see what kind of difference you see.  If you could mount the filesystem with "-o proto=udp" and try the test again I'd be curious if you see different performance or behavior.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Dave</description>
      <pubDate>Tue, 18 Dec 2007 17:06:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116351#M541008</guid>
      <dc:creator>Dave Olker</dc:creator>
      <dc:date>2007-12-18T17:06:23Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116352#M541009</link>
      <description>One other data point, I just installed MemFS on my 11.23 NFS server and configured a 2GB memory file system.  I exported this file system to my 11.23 client and here's what I see when I use iozone:&lt;BR /&gt;&lt;BR /&gt;        Run began: Tue Dec 18 10:17:09 2007&lt;BR /&gt;&lt;BR /&gt;        Include fsync in write timing&lt;BR /&gt;        Include close in write timing&lt;BR /&gt;        File size set to 1048576 KB&lt;BR /&gt;        Record Size 32 KB&lt;BR /&gt;        No retest option selected&lt;BR /&gt;        Command line used: iozone -e -c -s 1g -r 32k -i 0 -+n -t 1 -F /hp-2/iozone&lt;BR /&gt;        Output is in Kbytes/sec&lt;BR /&gt;        Time Resolution = 0.000001 seconds.&lt;BR /&gt;        Processor cache size set to 1024 Kbytes.&lt;BR /&gt;        Processor cache line size set to 32 bytes.&lt;BR /&gt;        File stride size set to 17 * record size.&lt;BR /&gt;        Throughput test with 1 process&lt;BR /&gt;        Each process writes a 1048576 Kbyte file in 32 Kbyte records&lt;BR /&gt;&lt;BR /&gt;        Children see throughput for  1 initial writers  =  111903.17 KB/sec&lt;BR /&gt;        Parent sees throughput for  1 initial writers   =  111882.39 KB/sec&lt;BR /&gt;        Min throughput per process                      =  111903.17 KB/sec &lt;BR /&gt;        Max throughput per process                      =  111903.17 KB/sec&lt;BR /&gt;        Avg throughput per process                      =  111903.17 KB/sec&lt;BR /&gt;        Min xfer                                        = 1048576.00 KB&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;On my test system, which is an rx2600, I can push my GigE NIC to wire speed with NFS/TCP and iozone writing a 1GB file - provided the NFS server's filesystem is fast enough.  Using a memory-based filesystem on the server helped me remove this as a bottleneck.&lt;BR /&gt;&lt;BR /&gt;Just thought I'd pass that along.&lt;BR /&gt;&lt;BR /&gt;Dave</description>
      <pubDate>Tue, 18 Dec 2007 18:21:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116352#M541009</guid>
      <dc:creator>Dave Olker</dc:creator>
      <dc:date>2007-12-18T18:21:50Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116353#M541010</link>
      <description>One other data point about how my NFS client and server are configured for TCP, I have the following entries in my /etc/rc.config.d/nddconf file:&lt;BR /&gt;&lt;BR /&gt;TRANSPORT_NAME[0]=tcp&lt;BR /&gt;NDD_NAME[0]=tcp_recv_hiwater_def&lt;BR /&gt;NDD_VALUE[0]=1048576&lt;BR /&gt;&lt;BR /&gt;TRANSPORT_NAME[1]=tcp&lt;BR /&gt;NDD_NAME[1]=tcp_xmit_hiwater_def&lt;BR /&gt;NDD_VALUE[1]=1048576&lt;BR /&gt;&lt;BR /&gt;This sets the TCP send and receive window sizes to 1MB, which helps TCP advertise a larger window between the client and server.  Don't know if you're using the default window sizes or not, but I thought this was worth mentioning as well.&lt;BR /&gt;&lt;BR /&gt;If you make changes like this you'd need to issue the command "ndd -c" to re-read the configuration file and activate the changes.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Dave</description>
      <pubDate>Tue, 18 Dec 2007 18:25:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116353#M541010</guid>
      <dc:creator>Dave Olker</dc:creator>
      <dc:date>2007-12-18T18:25:34Z</dc:date>
    </item>
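The 1 MB windows in the nddconf entries above matter because a TCP sender can have at most one window of unacknowledged data in flight per round trip, so throughput is bounded by window size divided by RTT. A rough illustration (the ~1 ms LAN round-trip time is an assumed value, not a measurement from this thread):

```shell
# Bandwidth-delay product: max TCP throughput = window / RTT.
# Window sizes from the nddconf entries above; RTT is an assumption.
awk 'BEGIN {
  rtt = 0.001                                    # assumed LAN round trip (s)
  printf "32 KB window: %7.1f MB/sec\n", 32768   / rtt / 1e6
  printf "1 MB window:  %7.1f MB/sec\n", 1048576 / rtt / 1e6
}'
```

With a 32 KB default window the window, not the wire, can become the bottleneck as latency grows; the 1 MB setting moves that ceiling well past GigE speed.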
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116354#M541011</link>
      <description>Hi Dave,&lt;BR /&gt;&lt;BR /&gt;I set the ndd settings for send and receive window size to 1MB as you recommend. No other changes were made in the nddconf file. Before this change the default was set, I think 32K.&lt;BR /&gt;&lt;BR /&gt;Here are my results from the iozone write tests with tcp and udp mount points:&lt;BR /&gt;&lt;BR /&gt;Include close in write timing&lt;BR /&gt;Include fsync in write timing&lt;BR /&gt;File size set to 1048576 KB&lt;BR /&gt;Record Size 32 KB&lt;BR /&gt;No retest option selected&lt;BR /&gt;Command line used: /iozone -c -e -s 1g -r 32k -i 0 -+n -f /mnt/test_iozone&lt;BR /&gt;Output is in Kbytes/sec&lt;BR /&gt;Time Resolution = 0.000003 seconds.&lt;BR /&gt;Processor cache size set to 1024 Kbytes.&lt;BR /&gt;Processor cache line size set to 32 bytes.&lt;BR /&gt;File stride size set to 17 * record size.&lt;BR /&gt;&lt;BR /&gt;(Oracle recommended mount options - except udp)&lt;BR /&gt;mount -o rw,bg,hard,nointr,suid,timeo=600,rsize=32768,wsize=32768,proto=&lt;XXX&gt;,vers=3 flr-rbg04:/vol/ora11g/dbf /mnt&lt;BR /&gt;&lt;BR /&gt;&lt;XXX&gt;=udp: 69587KB/sec&lt;BR /&gt;&lt;XXX&gt;=tcp: 49298KB/sec&lt;BR /&gt;&lt;BR /&gt;I crosschecked iozone with the default window size of 32k:&lt;BR /&gt;&lt;XXX&gt;=udp: 61113KB/sec&lt;BR /&gt;&lt;XXX&gt;=tcp: 44875KB/sec&lt;BR /&gt;&lt;BR /&gt;It looks like we are on a good path to improve the performance and come closer to the Oracle NFS client.&lt;BR /&gt;But - by the way - udp is not supported by Oracle 10g.&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;Christian</description>
      <pubDate>Wed, 19 Dec 2007 12:00:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116354#M541011</guid>
      <dc:creator>Christian Birkmeier</dc:creator>
      <dc:date>2007-12-19T12:00:49Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116355#M541012</link>
      <description>Hi Christian,&lt;BR /&gt;&lt;BR /&gt;One of the enhancements we made to our NFS client specifically for Oracle was the "forcedirectio" mount option.  It doesn't look like you're using this option currently.  I don't know if it will help iozone performance, but it can help Oracle throughput in certain environments because it bypasses the buffer cache and lets Oracle do the caching on its own behalf.&lt;BR /&gt;&lt;BR /&gt;Please try mounting the filesystem with the "forcedirectio" option and let me know how it affects both iozone and Oracle throughput.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Dave</description>
      <pubDate>Wed, 19 Dec 2007 16:41:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116355#M541012</guid>
      <dc:creator>Dave Olker</dc:creator>
      <dc:date>2007-12-19T16:41:07Z</dc:date>
    </item>
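For reference, forcedirectio is just one more option on the mount line already used in this thread; a hypothetical sketch (server name, export path, and mount point are placeholders, not values from the thread):

```shell
# Remount the Oracle data filesystem with forcedirectio so I/O bypasses
# the client buffer cache and Oracle does its own caching.
# Placeholder names: filer:/vol/oradata and /mnt are illustrative only.
umount /mnt
mount -F nfs \
  -o rw,bg,hard,nointr,timeo=600,rsize=32768,wsize=32768,proto=tcp,vers=3,forcedirectio \
  filer:/vol/oradata /mnt
```

As the follow-up post shows, whether this helps depends on the workload: it tends to benefit databases that cache for themselves and can hurt tools like dd or iozone that rely on the buffer cache.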
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116356#M541013</link>
      <description>Hi Dave,&lt;BR /&gt;&lt;BR /&gt;I tried the forcedirectio option with iozone and Oracle. It seems that in our environment forcedirectio degrades the throughput.&lt;BR /&gt;I also append the sysstat output from the filer during the tests. In that log - especially during the first test - I noticed that the throughput is very unstable. It varies between 0 and 72MB/sec!?!&lt;BR /&gt;By the way - yesterday while googling for Dave Olker, I found another article, "Designing a High Performance NFS Server". In it I found that "TCP Segmentation Offload" and "Checksum Offload" are disabled by default. I enabled both as you recommend in your guide. Actually I was wondering why these features are disabled by default.&lt;BR /&gt;Attached you will find the logs I recorded during the tests.&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;Christian</description>
      <pubDate>Thu, 20 Dec 2007 08:20:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116356#M541013</guid>
      <dc:creator>Christian Birkmeier</dc:creator>
      <dc:date>2007-12-20T08:20:42Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116357#M541014</link>
      <description>&lt;P&gt;Hi Christian,&lt;BR /&gt;&lt;BR /&gt;Yeah, that Dave Olker guy writes a lot of technical papers. He usually posts them on docs.hp.com once he finishes them so check there periodically for future ones. &lt;LI-EMOJI id="lia_winking-face" title=":winking_face:"&gt;&lt;/LI-EMOJI&gt;&lt;BR /&gt;&lt;BR /&gt;Do you happen to have any 11i v3 systems in your test ring? We replaced the entire NFS client and server code in 11i v3 so I'd be willing to bet it would behave differently than the 11i v2 client. However, since my 11i v2 client is able to push my HP-UX server to full speed this seems like an issue specific to HP-&amp;gt;NetApp.&lt;BR /&gt;&lt;BR /&gt;I've heard in the past NetApp has recommended HP customers tune down the number of biods to 1 or 2 and that would give them better performance with filers. I always assumed that was because of the thundering herd and filesystem semaphore issues we had in our code that we recently resolved with the nfs_wakeup_one=2 and nfs_fine_grain_fs_lock=2 tunables. But there may be other quirks between our client and the filers that don't show up with other servers.&lt;BR /&gt;&lt;BR /&gt;In any case, now that you've found the "Designing a High Performance NFS Server" paper, I'd suggest using all the tips/tricks outlined there (CKO, TSO, TCP windows, etc.). I'd also suggest trying the 11i v2 client with 1 or 2 biod daemons to see if that affects throughput at all. Finally, I'd really like to see how an 11i v3 client behaves in your environment.&lt;BR /&gt;&lt;BR /&gt;I have 11i v2 and v3 systems in my test ring but my NetApp filer is too slow to see any performance differences in my tests.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Dave&lt;/P&gt;</description>
      <pubDate>Fri, 18 Jun 2021 11:00:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116357#M541014</guid>
      <dc:creator>Dave Olker</dc:creator>
      <dc:date>2021-06-18T11:00:13Z</dc:date>
    </item>
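Dave's "try 1 or 2 biods" experiment can be sketched as below. On HP-UX the biod count is controlled by `NUM_NFSIOD` in `/etc/rc.config.d/nfsconf` and takes effect when the NFS client subsystem is restarted; the specific value and the restart sequence here are illustrative assumptions, not commands from the thread.

```shell
# Sketch: reduce the number of biod daemons on an 11i v2 client to 2
# and restart the NFS client so the new count takes effect.
# The value 2 is the experiment Dave suggests, not a general recommendation.
grep NUM_NFSIOD /etc/rc.config.d/nfsconf   # check the current setting
# Edit /etc/rc.config.d/nfsconf so the line reads: NUM_NFSIOD=2
/sbin/init.d/nfs.client stop
/sbin/init.d/nfs.client start
ps -ef | grep biod                          # confirm only 2 biods are running
```

Rerunning the same iozone command before and after the change isolates the effect of the daemon count on throughput.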
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116358#M541015</link>
      <description>Hi Dave,&lt;BR /&gt;&lt;BR /&gt;Actually we don't have an 11i v3 server, so I can't test with the new client.&lt;BR /&gt;I tried the tests with 2 and 1 biods -&amp;gt; but no better performance. I think it is as you suggest: the communication between HP-UX and NetApp does not perform well.&lt;BR /&gt;I hope I will have the time to install 11i v3 on the test server to try this combination.&lt;BR /&gt;Today is my last day in the office. I will be back on January 7th. I hope we can continue the discussion then.&lt;BR /&gt;Wishing you a merry Christmas and a happy new year. Hope to hear from you next year!&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;Christian&lt;BR /&gt;</description>
      <pubDate>Fri, 21 Dec 2007 13:12:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116358#M541015</guid>
      <dc:creator>Christian Birkmeier</dc:creator>
      <dc:date>2007-12-21T13:12:12Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116359#M541016</link>
      <description>Hi Christian,&lt;BR /&gt;&lt;BR /&gt;Ok, thanks for confirming the "small number of biods" thing for me.  I've heard that recommendation so many times over the years and it just never made any sense to me, especially after we removed the bottlenecks in our client code.&lt;BR /&gt;&lt;BR /&gt;If you're able to get an 11i v3 client in there to test with that would be great.  I only wish my filer was fast enough to really see a difference between my 11i v2 client and my Solaris, Linux, 11i v3 clients.  Maybe someday I'll get one, but seeing as you have one already set up I'd love to get some kind of confirmation that 11i v3 clients perform as expected in your environment so we know we need to concentrate on the 11i v2 client.&lt;BR /&gt;&lt;BR /&gt;I hope you have a very happy and safe holiday season,&lt;BR /&gt;&lt;BR /&gt;Dave</description>
      <pubDate>Fri, 21 Dec 2007 15:34:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116359#M541016</guid>
      <dc:creator>Dave Olker</dc:creator>
      <dc:date>2007-12-21T15:34:11Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116360#M541017</link>
      <description>Hi Dave,&lt;BR /&gt;&lt;BR /&gt;Back from holiday - and emptying my inbox over the last two days. I had a good time during the holidays - I hope you did too!&lt;BR /&gt;&lt;BR /&gt;I installed my test server rx3600 with 11i v3 (09/07) without any patches. This is my first time with 11i v3.&lt;BR /&gt;By the way, I can't see any biods. Do you recommend any kernel parameters for this environment?&lt;BR /&gt;I changed the default values for nddconf and the network interface (checksum offload, ...) as on the 11i v2 installation.&lt;BR /&gt;&lt;BR /&gt;I repeated the test with iozone.&lt;BR /&gt;&lt;BR /&gt;iozone -c -e -s 1g -r 32k -i 0 -+n -f /mnt/testfile&lt;BR /&gt;&lt;BR /&gt;        Iozone: Performance Test of File I/O&lt;BR /&gt;                Version $Revision: 3.283 $&lt;BR /&gt;                Compiled for 64 bit mode.&lt;BR /&gt;                Build: hpuxs-11.0w&lt;BR /&gt;&lt;BR /&gt;        Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins&lt;BR /&gt;                     Al Slater, Scott Rhine, Mike Wisner, Ken Goss&lt;BR /&gt;                     Steve Landherr, Brad Smith, Mark Kelly, Dr. 
Alain CYR,&lt;BR /&gt;                     Randy Dunlap, Mark Montague, Dan Million,&lt;BR /&gt;                     Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,&lt;BR /&gt;                     Erik Habbinga, Kris Strecker, Walter Wong.&lt;BR /&gt;&lt;BR /&gt;        Run began: Wed Jan  9 13:57:28 2008&lt;BR /&gt;&lt;BR /&gt;        Include close in write timing&lt;BR /&gt;        Include fsync in write timing&lt;BR /&gt;        File size set to 1048576 KB&lt;BR /&gt;        Record Size 32 KB&lt;BR /&gt;        No retest option selected&lt;BR /&gt;        Command line used: /iozone -c -e -s 1g -r 32k -i 0 -+n -f /mnt/testfile&lt;BR /&gt;        Output is in Kbytes/sec&lt;BR /&gt;        Time Resolution = 0.000002 seconds.&lt;BR /&gt;        Processor cache size set to 1024 Kbytes.&lt;BR /&gt;        Processor cache line size set to 32 bytes.&lt;BR /&gt;        File stride size set to 17 * record size.&lt;BR /&gt;                                                            random  random    bkwd  record  stride&lt;BR /&gt;&lt;BR /&gt;              KB  reclen   write rewrite    read    reread    read   write    read rewrite    read   fwrite fre&lt;BR /&gt;write   fread  freread&lt;BR /&gt;         1048576      32   37498       0&lt;BR /&gt;&lt;BR /&gt;the output of sysstat 1 on the filer during the iozone test (it looks like the write performance varied between ~5 and ~66MB per sec.):&lt;BR /&gt;&lt;BR /&gt; CPU    NFS   CIFS   HTTP      Net kB/s     Disk kB/s      Tape kB/s    Cache&lt;BR /&gt;                               in   out     read  write    read write     age&lt;BR /&gt; 38%      9      0      0       2     2    35700      0       0 29295       3&lt;BR /&gt; 22%     82      0      0    2857    67    91548      0       0 88801       2&lt;BR /&gt; 37%   1758      0      0   60742  1414    35652   9796       0 28770       2&lt;BR /&gt; 57%   1294      0      0   43838  1030    59884  74704       0 57868       2&lt;BR /&gt; 51%   1951      0      0   66875  
1564    64525   9201       0 59478       2&lt;BR /&gt; 57%    170      0      0    5794   136    65106 114475       0 62985       2&lt;BR /&gt; 57%   1888      0      0   64692  1514    89545  14000       0 85942       2&lt;BR /&gt; 53%    647      0      0   22181   519    64939  78332       0 64427       2&lt;BR /&gt; 50%   1976      0      0   67705  1583    63628  17223       0 58596       2&lt;BR /&gt; 56%    718      0      0   24702   578    67984  76218       0 65081       2&lt;BR /&gt; 59%   1837      0      0   62877  1471    87709  27107       0 85465       2&lt;BR /&gt; 46%    343      0      0   11819   277    73807  58543       0 70166       2&lt;BR /&gt; 52%   1955      0      0   66961  1567    50421  34118       0 47175       2&lt;BR /&gt; 50%    690      0      0   23653   553    58146  66050       0 56174       2&lt;BR /&gt; 55%   1913      0      0   65517  1534    64556  34976       0 60228       2&lt;BR /&gt; 40%    464      0      0   16416   380    37040  75440       0 34144       2&lt;BR /&gt; 28%    861      0      0   28972   680    39828  17664       0 36700       2&lt;BR /&gt; 60%   1540      0      0   52791  1236    48636   4990       0 48142       2&lt;BR /&gt; 50%   1809      0      0   62490  1460    56860   9016       0 50725       2&lt;BR /&gt; 46%      0      0      0       0     0    37828 132824       0 34341       2&lt;BR /&gt; 43%   1966      0      0   66855  1566    37730  21483       0 34765       2&lt;BR /&gt; 46%   1110      0      0   38551   908    35738  57417       8 33093       2&lt;BR /&gt; 46%   1344      0      0   45537  1068    37624  63232       0 33817       2&lt;BR /&gt; 44%    953      0      0   33172   772    45084  54076       0 40829       2&lt;BR /&gt; 48%   1842      0      0   62603  1467    37388  45160       0 33948       2&lt;BR /&gt; 45%    719      0      0   24695   578    45557  69462       0 42693       2&lt;BR /&gt; 52%   1680      0      0   58084  1355    67968  24402       0 65740       
2&lt;BR /&gt; 53%   1090      0      0   36839   865    40776  65580       0 37814       2&lt;BR /&gt; 77%     96      0      0    3148    76    46094   9339       0 39741       2&lt;BR /&gt; 25%      0      0      0       0     0    83600      0       0 82248       2&lt;BR /&gt; 12%      0      0      0       0     1    49176      0       0 48693       2&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Best regards,&lt;BR /&gt;Christian&lt;BR /&gt;&lt;BR /&gt;PS: I can test 11iv3 only until end of next week - then the machine will be reinstalled with 11iv2 and go for production.</description>
      <pubDate>Wed, 09 Jan 2008 15:04:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116360#M541017</guid>
      <dc:creator>Christian Birkmeier</dc:creator>
      <dc:date>2008-01-09T15:04:37Z</dc:date>
    </item>
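Christian is eyeballing the variance in the `sysstat 1` log above. A small helper (a hypothetical convenience, not part of iozone, Data ONTAP, or anything mentioned in the thread) can summarize the "Disk kB/s write" column, which is field 8 of each data row, to quantify how unstable the writes are:

```shell
# Sketch: summarize the per-second disk write throughput from a saved
# "sysstat 1" log. Data rows start with a CPU percentage (e.g. "38%");
# field 8 is the "Disk kB/s write" column. This helper is illustrative.
sysstat_write_stats() {
  awk '$1 ~ /%$/ {
    w = $8
    if (min == "" || w < min) min = w   # track the smallest write rate seen
    if (w > max) max = w                # track the largest write rate seen
    sum += w; n++
  }
  END { printf "min=%d max=%d avg=%d KB/s\n", min, max, sum / n }'
}
```

Run it as `sysstat_write_stats < sysstat.log`; a large min/max spread with a middling average is the "bursty writes" pattern the log above shows.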
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116361#M541018</link>
      <description>Hi Christian,&lt;BR /&gt;&lt;BR /&gt;we are running Oracle 10.2 (SAP) on HP-UX 11.23 Integrity servers.&lt;BR /&gt;The DB is located on NetApp / NFS.&lt;BR /&gt;&lt;BR /&gt;Running the same test on a 3020 NearStore; the aggregate is built with 38 SATA disks / 500GB.&lt;BR /&gt;&lt;BR /&gt;iozone -c -e -s 1g -r 32k -i 0 -+n -f /mnt/testfile&lt;BR /&gt;&lt;BR /&gt;              KB  reclen   write&lt;BR /&gt;         1048576      32   77178&lt;BR /&gt;&lt;BR /&gt;I think this is not bad.&lt;BR /&gt;&lt;BR /&gt;The mount options are the same as yours:&lt;BR /&gt;nfs rw,bg,hard,nointr,rsize=32768,wsize=32768,proto=tcp,vers=3,suid,timeo=600 0 0&lt;BR /&gt;&lt;BR /&gt;NetApp vol options:&lt;BR /&gt;&lt;BR /&gt;nosnap=on, nosnapdir=on, minra=on, no_atime_update=on, nvfail=on,&lt;BR /&gt;ignore_inconsistent=off, snapmirrored=off, create_ucode=off,&lt;BR /&gt;convert_ucode=off, maxdirsize=20971, schedsnapname=ordinal,&lt;BR /&gt;fs_size_fixed=off, guarantee=file, svo_enable=off, svo_checksum=off,&lt;BR /&gt;svo_allow_rman=off, svo_reject_errors=off, no_i2p=off,&lt;BR /&gt;fractional_reserve=100, extent=off, try_first=volume_grow&lt;BR /&gt;&lt;BR /&gt;Nothing special tuned.&lt;BR /&gt;&lt;BR /&gt;Best regards&lt;BR /&gt;Joerg</description>
      <pubDate>Thu, 10 Jan 2008 00:21:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116361#M541018</guid>
      <dc:creator>Jörg Brandenburger</dc:creator>
      <dc:date>2008-01-10T00:21:31Z</dc:date>
    </item>
    <item>
      <title>Re: Oracle DB over NFS on Netapp only 35MB/sec</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116362#M541019</link>
      <description>Hi Joerg,&lt;BR /&gt;&lt;BR /&gt;Thanks for sanity checking this.  The only NetApp filer I have is capable of about 40MB/sec regardless of which NFS client I use, and I get the same throughput from all clients.&lt;BR /&gt;&lt;BR /&gt;I wonder if you'd get even better throughput from your 11.23 client if you tried the various tuning suggestions I listed earlier in this thread...&lt;BR /&gt;&lt;BR /&gt;Also, can you tell me exactly what command you used to display the detailed options for the NetApp filesystem?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;&lt;BR /&gt;Dave</description>
      <pubDate>Thu, 10 Jan 2008 01:10:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/oracle-db-over-nfs-on-netapp-only-35mb-sec/m-p/4116362#M541019</guid>
      <dc:creator>Dave Olker</dc:creator>
      <dc:date>2008-01-10T01:10:31Z</dc:date>
    </item>
  </channel>
</rss>

