<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic nic performance in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163426#M535409</link>
    <description>Hi!!&lt;BR /&gt;&lt;BR /&gt;I have an rp7410 with 2 partitions.  1st partition running 11.23, and the 2nd running 11.31.  We have had an ongoing problem with the NIC performance on the 11.23 side.  lanadmin -x 0 shows 1000, autoneg on.  We can't seem to get more than 10MB/s out of it.  The cable and the switch are both ok, as we have tested both on the 11.31 partition.&lt;BR /&gt;&lt;BR /&gt;This has been an ongoing problem.  We just commissioned this server after running into the same problem on an N4000 with a GB NIC installed.  Couldn't get more than 10MB/s on that one either.  I was thinking maybe something in the setup or config is off.&lt;BR /&gt;&lt;BR /&gt;If you need any config files, or more info, please let me know.&lt;BR /&gt;&lt;BR /&gt;thx</description>
    <pubDate>Sat, 14 Mar 2009 11:56:15 GMT</pubDate>
    <dc:creator>Ron Irving</dc:creator>
    <dc:date>2009-03-14T11:56:15Z</dc:date>
    <item>
      <title>nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163426#M535409</link>
      <description>Hi!!&lt;BR /&gt;&lt;BR /&gt;I have an rp7410 with 2 partitions.  1st partition running 11.23, and the 2nd running 11.31.  We have had an ongoing problem with the NIC performance on the 11.23 side.  lanadmin -x 0 shows 1000, autoneg on.  We can't seem to get more than 10MB/s out of it.  The cable and the switch are both ok, as we have tested both on the 11.31 partition.&lt;BR /&gt;&lt;BR /&gt;This has been an ongoing problem.  We just commissioned this server after running into the same problem on an N4000 with a GB NIC installed.  Couldn't get more than 10MB/s on that one either.  I was thinking maybe something in the setup or config is off.&lt;BR /&gt;&lt;BR /&gt;If you need any config files, or more info, please let me know.&lt;BR /&gt;&lt;BR /&gt;thx</description>
      <pubDate>Sat, 14 Mar 2009 11:56:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163426#M535409</guid>
      <dc:creator>Ron Irving</dc:creator>
      <dc:date>2009-03-14T11:56:15Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163427#M535410</link>
      <description>&lt;BR /&gt;What did you use to verify the speed and path?&lt;BR /&gt;&lt;BR /&gt;- traceroute&lt;BR /&gt;- MTU size / Jumbo frames : netstat -in&lt;BR /&gt;&lt;BR /&gt;ttcp - ftp.arl.mil/ftp/pubttcp&lt;BR /&gt;netperf - &lt;A href="http://www.netperf.org" target="_blank"&gt;www.netperf.org&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;FTP? get a large incoming file to /dev/null? &lt;BR /&gt;I assume that you used the same test on both partitions and thus ensured the source and sink are up to the task?&lt;BR /&gt;&lt;BR /&gt;NFS? It has its own set of parameters... are they ok? And NFS got better with 11.31.&lt;BR /&gt;&lt;BR /&gt;Find a recent version of David Olker's NFS tuning documents? (recent HPTF proceedings; the 2002 flavor is at&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/1435/NFSPerformanceTuninginHP-UX11.0and11iSystems.pdf" target="_blank"&gt;http://docs.hp.com/en/1435/NFSPerformanceTuninginHP-UX11.0and11iSystems.pdf&lt;/A&gt; )&lt;BR /&gt;&lt;BR /&gt;Good luck!&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Sat, 14 Mar 2009 12:28:33 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163427#M535410</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2009-03-14T12:28:33Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163428#M535411</link>
      <description>Hi and thanks!!&lt;BR /&gt;&lt;BR /&gt;We are using netbackup to restore a DB to an attached SAN. The backup itself will take about 4 1/2 hours on average.  The restore takes 20+ hours.  Like I said, through the 11.31 partition, just using scp as a test, we're getting 40MB/sec, but using the same file to transfer via scp on the 11.23 side, 10MB/sec tops.&lt;BR /&gt;</description>
      <pubDate>Sat, 14 Mar 2009 12:35:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163428#M535411</guid>
      <dc:creator>Ron Irving</dc:creator>
      <dc:date>2009-03-14T12:35:14Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163429#M535412</link>
      <description>MTU size is 1500</description>
      <pubDate>Sat, 14 Mar 2009 12:36:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163429#M535412</guid>
      <dc:creator>Ron Irving</dc:creator>
      <dc:date>2009-03-14T12:36:51Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163430#M535413</link>
      <description>Hi Ron,&lt;BR /&gt;&lt;BR /&gt;Could you post the output of ioscan -fnClan?&lt;BR /&gt;&lt;BR /&gt;I don't know if you have a spare nic port on both partitions, but you could try to eliminate the switch. Just use a cross-over cable to connect both nics and set up a private network between them.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;One other thought is the GigEther-01 (igelan) software.&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=GigEther-01" target="_blank"&gt;http://h20392.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=GigEther-01&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;Defects fixed in B.11.23.0809 release:&lt;BR /&gt;&lt;BR /&gt;    * QXCR1000591671: Support for 1000FD speed on IGELAN&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Robert-Jan&lt;BR /&gt;</description>
      <pubDate>Sat, 14 Mar 2009 12:39:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163430#M535413</guid>
      <dc:creator>Robert-Jan Goossens_1</dc:creator>
      <dc:date>2009-03-14T12:39:32Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163431#M535414</link>
      <description>Ok...here it is.</description>
      <pubDate>Sat, 14 Mar 2009 12:42:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163431#M535414</guid>
      <dc:creator>Ron Irving</dc:creator>
      <dc:date>2009-03-14T12:42:42Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163432#M535415</link>
      <description>Are you using cat5E or cat6 cables?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Sat, 14 Mar 2009 12:54:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163432#M535415</guid>
      <dc:creator>Robert-Jan Goossens_1</dc:creator>
      <dc:date>2009-03-14T12:54:55Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163433#M535416</link>
      <description>Cat5 and everything checks out.</description>
      <pubDate>Sat, 14 Mar 2009 12:57:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163433#M535416</guid>
      <dc:creator>Ron Irving</dc:creator>
      <dc:date>2009-03-14T12:57:13Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163434#M535417</link>
      <description>&amp;gt;&amp;gt; MTU size is 1500&lt;BR /&gt;&lt;BR /&gt;On both systems, right?&lt;BR /&gt;Please consider Jumbo frames at 9000 bytes.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt; We are using netbackup to restore a DB to an attached SAN. The backup itself will take about 4 1/2 hours on average. The restore takes 20+ hours. Like I said, through the 11.31 partition, just using scp as a test, we're getting 40MB/sec, but using the same file to transfer via scp on the 11.23 side, 10MB/sec tops.&lt;BR /&gt;&lt;BR /&gt;Sorry to be a pest, but you did not explicitly indicate whether the performance difference is also visible during netbackup?&lt;BR /&gt;Or is the backup from 11.31 and the restore to 11.23? Still, for performance, backup != restore.&lt;BR /&gt;I'm sure it does, and that is probably what prompted you to investigate, but still...&lt;BR /&gt;&lt;BR /&gt;Maybe try one more tool other than scp, or establish the core performance with a foundation tool like ttcp? &lt;BR /&gt;&lt;BR /&gt;There are a lot of tunables to check, depending on the protocols used. Settings like tcp_sack_enable, and whether the driver supports 'trains' of packets:&lt;BR /&gt;# ndd -get /dev/ip ip_ill_status&lt;BR /&gt;look for 'train'&lt;BR /&gt;&lt;BR /&gt;A most excellent paper describing it all, but maybe overwhelming, is: &lt;A href="http://docs.hp.com/en/11890/perf-whitepaper-tcpip-v10.pdf" target="_blank"&gt;http://docs.hp.com/en/11890/perf-whitepaper-tcpip-v10.pdf&lt;/A&gt;&lt;BR /&gt;Dig in!&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Sat, 14 Mar 2009 13:23:27 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163434#M535417</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2009-03-14T13:23:27Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163435#M535418</link>
      <description>I'll try to be a little clearer.  We back up our production server (rp8420) every night, approximately 400GB.  It takes between 4 and 6 hours to complete this.  I am restoring to the dev system (rp7410) using that image created by netbackup.  Again, it's taking 20+ hours to complete the restore.  Our backup admin cannot find any problems on his side.  Everything looks good on my end too.&lt;BR /&gt;&lt;BR /&gt;Should I go ahead and change the MTU value to something else?  Is this a dynamic change?  I know it's a lanadmin setting...can you give me the command?&lt;BR /&gt;&lt;BR /&gt;thx</description>
      <pubDate>Sat, 14 Mar 2009 14:58:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163435#M535418</guid>
      <dc:creator>Ron Irving</dc:creator>
      <dc:date>2009-03-14T14:58:59Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163436#M535419</link>
      <description>How do you *know* this is a network problem?&lt;BR /&gt;&lt;BR /&gt;Because all the tests I can see here will have taken disk access into account as well...&lt;BR /&gt;&lt;BR /&gt;You need to isolate the problem so you can eliminate other variables. Start by testing the speed of the network only.&lt;BR /&gt;&lt;BR /&gt;Get a copy of netperf installed on the 11.23 partition and wherever the Netbackup data is coming from, and run a simple TCP_STREAM test between the two. Netperf is easy to use and only takes 15-20 minutes to install and set up. Downloads and manual available here:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.netperf.org" target="_blank"&gt;http://www.netperf.org&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;You will need a compiler to build it though...&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan</description>
      <pubDate>Sat, 14 Mar 2009 15:21:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163436#M535419</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2009-03-14T15:21:11Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163437#M535420</link>
      <description>Right Duncan. That's the gist of my replies, and that's why I mentioned using /dev/null&lt;BR /&gt;or /dev/zero to sink or source lots of data for free.&lt;BR /&gt;&lt;BR /&gt;The scp tests are a step, but just a step.&lt;BR /&gt;&lt;BR /&gt;Of course in the end tests are just that, tests. And only the real McCoy counts (backup/restore).&lt;BR /&gt;&lt;BR /&gt;To that end I would suggest Ron reverse the operation. Try a backup from dev, even though you might not 'need' that. See whether that takes closer to the 4 hours or to those 20 hours. And keep an eye on the system as to what it is doing during that time. Idle? Waiting for disk or network? What kind of disk IO rates are generated, and do you believe those to be sustainable in the configuration?&lt;BR /&gt;&lt;BR /&gt;And uh... any good reason NOT to go forward to 11.31 on the other partition? In the plans? Accelerate those plans?&lt;BR /&gt;&lt;BR /&gt;Hein.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Sat, 14 Mar 2009 15:33:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163437#M535420</guid>
      <dc:creator>Hein van den Heuvel</dc:creator>
      <dc:date>2009-03-14T15:33:49Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163438#M535421</link>
      <description>Ok...I was looking around, (very carefully...trying to do a restore now,) and here's what I found: &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;GE-DRV                B.11.23.0512   HP PCI Gigabit Ethernet Driver&lt;BR /&gt;  IGELAN-DRV            B.11.23.0712   HP PCI Gigabit Ethernet Driver&lt;BR /&gt;&lt;BR /&gt;Could these 2 drivers be conflicting?  What would be the result of me removing the GE-DRV?&lt;BR /&gt;&lt;BR /&gt;It's a start, right?&lt;BR /&gt;</description>
      <pubDate>Sat, 14 Mar 2009 18:10:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163438#M535421</guid>
      <dc:creator>Ron Irving</dc:creator>
      <dc:date>2009-03-14T18:10:18Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163439#M535422</link>
      <description>This isn't Windows - all the drivers are from HP and all work fine together - as it happens the drivers are for different cards anyway... if you're interested you can see which driver goes with which cards in the matrix here:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://docs.hp.com/en/supportmatrixEthernetver2.pdf/supportmatrixEthernetver2.pdf" target="_blank"&gt;http://docs.hp.com/en/supportmatrixEthernetver2.pdf/supportmatrixEthernetver2.pdf&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;The important thing in diagnosing these sorts of issues is to approach it in a systematic fashion. You think you have a network problem - you need to prove that by removing the other items from the equation and just testing the performance of the network. Try that with netperf or, as Hein suggests, by sending data to /dev/null.&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan</description>
      <pubDate>Sat, 14 Mar 2009 18:22:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163439#M535422</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2009-03-14T18:22:48Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163440#M535423</link>
      <description>Hey Duncan, et al.  Thanks for all of your responses.  We did that yesterday.  We dd'd from /dev/zero to create a 500MB 'file' to transfer.  Like I said, we rarely got above 10MB/sec.  That was outside of netbackup, which seems to move even slower.  &lt;BR /&gt;&lt;BR /&gt;I, again, was looking around, and noticed that the hpigelanconf file has none of the variables set.&lt;BR /&gt;&lt;BR /&gt;Perplexed yet?  I am</description>
      <pubDate>Sat, 14 Mar 2009 18:28:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163440#M535423</guid>
      <dc:creator>Ron Irving</dc:creator>
      <dc:date>2009-03-14T18:28:35Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163441#M535424</link>
      <description>nope...that's not it</description>
      <pubDate>Sat, 14 Mar 2009 18:34:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163441#M535424</guid>
      <dc:creator>Ron Irving</dc:creator>
      <dc:date>2009-03-14T18:34:39Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163442#M535425</link>
      <description>&amp;gt;&amp;gt; created a 500MB 'file' to transfer&lt;BR /&gt;&lt;BR /&gt;so you created a file to transfer&lt;BR /&gt;&lt;BR /&gt;so it wasn't just based on network performance - the file had to be read off disk.&lt;BR /&gt;&lt;BR /&gt;Honestly, spend the time to get netperf installed - it doesn't use files at all, so you get "true" network performance only from tests with it...&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan</description>
      <pubDate>Sat, 14 Mar 2009 18:36:22 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163442#M535425</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2009-03-14T18:36:22Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163443#M535426</link>
      <description>Are you doing any testing with direct cables? Network performance depends on a lot of components in the middle. I would start by testing with a crossover cable between the two partitions -- that gets rid of switch config errors as well as routers/gateways, etc. You should see 50-75 Mbytes/sec between the two Gbit cards. As mentioned, simple disks may be too slow to meet the top speed of the two cards, but a high performance fibre array should keep the data flowing at full speed. Use ftp for the test to minimize the overhead associated with scp.&lt;BR /&gt; &lt;BR /&gt;Then connect the cables to the switch and perform the same test. It should be virtually identical to the crossover cable test. If not, check the switch's port settings. Then work down the chain to the other system. In a network, the maximum performance of the Gbit card will be limited by the slowest link along the way. Also note that Jumbo frames (mtu=9000) must be supported through every hop to be effective.</description>
      <pubDate>Sat, 14 Mar 2009 22:14:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163443#M535426</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2009-03-14T22:14:49Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163444#M535427</link>
      <description>Instead of creating a file, just pipe the output of dd through the network like this:&lt;BR /&gt;&lt;BR /&gt;dd if=/dev/zero | ssh frances "dd of=/dev/null"&lt;BR /&gt;&lt;BR /&gt;This will eliminate the disk speeds in the process. But as was suggested before, best to use netperf. ;)&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Sun, 15 Mar 2009 10:29:38 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163444#M535427</guid>
      <dc:creator>Viktor Balogh</dc:creator>
      <dc:date>2009-03-15T10:29:38Z</dc:date>
    </item>
    <item>
      <title>Re: nic performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163445#M535428</link>
      <description>I should have mentioned that testing the LAN with ssh/scp is not best practice: ssh does a lot of encryption, which is a known performance limiter on HP-UX machines. So again, just go ahead with netperf! ;)&lt;BR /&gt;</description>
      <pubDate>Sun, 15 Mar 2009 10:43:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/nic-performance/m-p/5163445#M535428</guid>
      <dc:creator>Viktor Balogh</dc:creator>
      <dc:date>2009-03-15T10:43:57Z</dc:date>
    </item>
  </channel>
</rss>

