<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Strange network performance issue in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/strange-network-performance-issue/m-p/3602267#M557183</link>
    <description>I have an IA64 system running HP-UX 11.23 that I plan to run an NFS server on for a few Linux cluster nodes.&lt;BR /&gt;&lt;BR /&gt;The server has two gigabit NICs channeled together into a 2G pipe to a Cisco switch.&lt;BR /&gt;&lt;BR /&gt;When a few (5 or fewer) nodes are reading data from the server, the performance is almost acceptable, but the total bandwidth used is just over 1G.&lt;BR /&gt;&lt;BR /&gt;When more nodes access the server at the same time, the performance drops.  I see a linear drop in performance as the number of nodes increases.&lt;BR /&gt;&lt;BR /&gt;The server's total output never gets close to the 2 Gbit range.&lt;BR /&gt;&lt;BR /&gt;At first I thought this was some NFS tuning issue and read up on NFS tuning, both at the kernel level and the filesystem level (including the number of NFS daemons running, the number of kernel threads for TCP NFSv3, and so on), but I was unable to change the pattern or get more performance out of the machine.&lt;BR /&gt;&lt;BR /&gt;I then made simple tests with ftp and I see the same problem there.&lt;BR /&gt;&lt;BR /&gt;The server's CPUs are mostly idle all the time during testing.&lt;BR /&gt;&lt;BR /&gt;On the same switch I have a Linux server with the same network setup (2G channel) and it can utilize its bandwidth fully.&lt;BR /&gt;&lt;BR /&gt;The HP-UX box and switch both agree on speed and duplex settings, and there are no errors, collisions or other issues on the server or the switch port.&lt;BR /&gt;&lt;BR /&gt;If anyone has any ideas on how I should proceed to locate the problem, their help will be greatly appreciated.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance.&lt;BR /&gt;Richard.</description>
    <pubDate>Thu, 11 Aug 2005 21:51:45 GMT</pubDate>
    <dc:creator>Richard Allen</dc:creator>
    <dc:date>2005-08-11T21:51:45Z</dc:date>
    <item>
      <title>Strange network performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/strange-network-performance-issue/m-p/3602267#M557183</link>
      <description>I have an IA64 system running HP-UX 11.23 that I plan to run an NFS server on for a few Linux cluster nodes.&lt;BR /&gt;&lt;BR /&gt;The server has two gigabit NICs channeled together into a 2G pipe to a Cisco switch.&lt;BR /&gt;&lt;BR /&gt;When a few (5 or fewer) nodes are reading data from the server, the performance is almost acceptable, but the total bandwidth used is just over 1G.&lt;BR /&gt;&lt;BR /&gt;When more nodes access the server at the same time, the performance drops.  I see a linear drop in performance as the number of nodes increases.&lt;BR /&gt;&lt;BR /&gt;The server's total output never gets close to the 2 Gbit range.&lt;BR /&gt;&lt;BR /&gt;At first I thought this was some NFS tuning issue and read up on NFS tuning, both at the kernel level and the filesystem level (including the number of NFS daemons running, the number of kernel threads for TCP NFSv3, and so on), but I was unable to change the pattern or get more performance out of the machine.&lt;BR /&gt;&lt;BR /&gt;I then made simple tests with ftp and I see the same problem there.&lt;BR /&gt;&lt;BR /&gt;The server's CPUs are mostly idle all the time during testing.&lt;BR /&gt;&lt;BR /&gt;On the same switch I have a Linux server with the same network setup (2G channel) and it can utilize its bandwidth fully.&lt;BR /&gt;&lt;BR /&gt;The HP-UX box and switch both agree on speed and duplex settings, and there are no errors, collisions or other issues on the server or the switch port.&lt;BR /&gt;&lt;BR /&gt;If anyone has any ideas on how I should proceed to locate the problem, their help will be greatly appreciated.&lt;BR /&gt;&lt;BR /&gt;Thanks in advance.&lt;BR /&gt;Richard.</description>
      <pubDate>Thu, 11 Aug 2005 21:51:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/strange-network-performance-issue/m-p/3602267#M557183</guid>
      <dc:creator>Richard Allen</dc:creator>
      <dc:date>2005-08-11T21:51:45Z</dc:date>
    </item>
    <item>
      <title>Re: Strange network performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/strange-network-performance-issue/m-p/3602268#M557184</link>
      <description>How do you measure the performance of this server while you are running these tests? If you have it, run glance and watch the memory and disk utilization as well as the network load. Also check whether the system table utilizations are getting anywhere near their maximums. Let us know your findings, for better-educated guesses.</description>
      <pubDate>Thu, 11 Aug 2005 22:54:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/strange-network-performance-issue/m-p/3602268#M557184</guid>
      <dc:creator>Mel Burslan</dc:creator>
      <dc:date>2005-08-11T22:54:51Z</dc:date>
    </item>
    <item>
      <title>Re: Strange network performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/strange-network-performance-issue/m-p/3602269#M557185</link>
      <description>It sounds like you are not utilising both interfaces after all.  You need to have the HP APA software installed and configured to group them together, but it is not easy to configure.  &lt;BR /&gt;&lt;BR /&gt;Please post your lanscan output and the status of the virtual interface, e.g. /usr/sbin/ifconfig lan100.&lt;BR /&gt;&lt;BR /&gt;It will also help to post your settings in /etc/rc.config.d/hp_apaconf&lt;BR /&gt;plus&lt;BR /&gt;/etc/rc.config.d/hp_apaportconf&lt;BR /&gt;and&lt;BR /&gt;/etc/rc.config.d/hpgelanconf&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Fri, 12 Aug 2005 05:13:36 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/strange-network-performance-issue/m-p/3602269#M557185</guid>
      <dc:creator>Steve Lewis</dc:creator>
      <dc:date>2005-08-12T05:13:36Z</dc:date>
    </item>
    <item>
      <title>Re: Strange network performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/strange-network-performance-issue/m-p/3602270#M557186</link>
      <description>I just re-read your posting and saw that you were getting just over 1 Gbit, so your APA must be OK.  &lt;BR /&gt;&lt;BR /&gt;What we found in the past is that while NFS is good for transferring large files at high speed, it is not good for transferring lots of small files.&lt;BR /&gt;&lt;BR /&gt;We found that SAMBA/CIFS is better for transferring many small files.  You can use that as a replacement for NFS.&lt;BR /&gt;&lt;BR /&gt;FTP is even quicker, but harder to set up.&lt;BR /&gt;&lt;BR /&gt;rcp/scp is slowest.</description>
      <pubDate>Fri, 12 Aug 2005 05:36:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/strange-network-performance-issue/m-p/3602270#M557186</guid>
      <dc:creator>Steve Lewis</dc:creator>
      <dc:date>2005-08-12T05:36:40Z</dc:date>
    </item>
    <item>
      <title>Re: Strange network performance issue</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/strange-network-performance-issue/m-p/3602271#M557187</link>
      <description>The files the Linux cluster nodes are reading are very large: at least a gigabyte each.&lt;BR /&gt;I start the nodes reading a file and time them. Then I can calculate, based on the time it takes to transfer the file, the bandwidth used.&lt;BR /&gt;&lt;BR /&gt;There is almost no disk load on the server during these experiments of mine because I tend to make all the nodes transfer the same file, and it looks like the buffer cache on the server is taking care of business.&lt;BR /&gt;&lt;BR /&gt;No system tables are close to their maximum values.&lt;BR /&gt;&lt;BR /&gt;watson# lanscan&lt;BR /&gt;Hardware Station        Crd Hdw   Net-Interface  NM  MAC       HP-DLPI DLPI&lt;BR /&gt;Path     Address        In# State NamePPA        ID  Type      Support Mjr#&lt;BR /&gt;LinkAgg0 0x0012799E2207 900 UP    lan900 snap900 6   ETHER     Yes     119&lt;BR /&gt;LinkAgg1 0x000000000000 901 DOWN  lan901 snap901 7   ETHER     Yes     119&lt;BR /&gt;LinkAgg2 0x000000000000 902 DOWN  lan902 snap902 8   ETHER     Yes     119&lt;BR /&gt;LinkAgg3 0x000000000000 903 DOWN  lan903 snap903 9   ETHER     Yes     119&lt;BR /&gt;LinkAgg4 0x000000000000 904 DOWN  lan904 snap904 10  ETHER     Yes     119&lt;BR /&gt;&lt;BR /&gt;watson# ifconfig lan900&lt;BR /&gt;lan900: flags=1843&lt;UP&gt;&lt;BR /&gt;        inet 172.17.150.14 netmask ffffff00 broadcast 172.17.150.255&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;watson# grep -v ^# /etc/rc.config.d/hp_apaportconf | grep -v ^$&lt;BR /&gt;HP_APAPORT_INTERFACE_NAME[0]=lan0&lt;BR /&gt;HP_APAPORT_GROUP_CAPABILITY[0]=3&lt;BR /&gt;HP_APAPORT_CONFIG_MODE[0]=FEC_AUTO&lt;BR /&gt;HP_APAPORT_INTERFACE_NAME[1]=lan1&lt;BR /&gt;HP_APAPORT_GROUP_CAPABILITY[1]=3&lt;BR /&gt;HP_APAPORT_CONFIG_MODE[1]=FEC_AUTO&lt;BR /&gt;&lt;BR /&gt;watson# grep -v ^# /etc/rc.config.d/hp_apaconf | grep -v ^$    &lt;BR /&gt;HP_APA_START_LA_PPA=900&lt;BR /&gt;HP_APA_DEFAULT_PORT_MODE=MANUAL&lt;BR /&gt;HP_APA_INTERFACE_NAME[0]=lan900&lt;BR 
/&gt;HP_APA_LOAD_BALANCE_MODE[0]=LB_MAC&lt;BR /&gt;HP_APA_GROUP_CAPABILITY[0]=3&lt;BR /&gt;HP_APA_HOT_STANDBY[0]=off&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;watson# grep -v ^# /etc/rc.config.d/hpgelanconf | grep -v ^$&lt;BR /&gt;HP_GELAN_INIT_ARGS="HP_GELAN_STATION_ADDRESS HP_GELAN_SPEED HP_GELAN_MTU HP_GELAN_FLOW_CONTROL HP_GELAN_AUTONEG HP_GELAN_SEND_COAL_TICKS HP_GELAN_RECV_COAL_TICKS HP_GELAN_SEND_MAX_BUFS HP_GELAN_RECV_MAX_BUFS"&lt;BR /&gt;HP_GELAN_INTERFACE_NAME[0]=&lt;BR /&gt;HP_GELAN_STATION_ADDRESS[0]=&lt;BR /&gt;HP_GELAN_SPEED[0]=&lt;BR /&gt;HP_GELAN_MTU[0]=&lt;BR /&gt;HP_GELAN_FLOW_CONTROL[0]=&lt;BR /&gt;HP_GELAN_AUTONEG[0]=&lt;BR /&gt;HP_GELAN_SEND_COAL_TICKS[0]=&lt;BR /&gt;HP_GELAN_RECV_COAL_TICKS[0]=&lt;BR /&gt;HP_GELAN_SEND_MAX_BUFS[0]=&lt;BR /&gt;HP_GELAN_RECV_MAX_BUFS[0]=&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Kernel parameters that I have changed:&lt;BR /&gt;watson# kctune -S&lt;BR /&gt;Tunable              Value  Expression  Changes&lt;BR /&gt;dbc_max_pct             70  70          Immed&lt;BR /&gt;dbc_min_pct             10  10          Immed&lt;BR /&gt;default_disk_ir          1  1           &lt;BR /&gt;dnlc_hash_locks       4096  4096        &lt;BR /&gt;dst                      0  0           &lt;BR /&gt;fs_async                 1  1           &lt;BR /&gt;ftable_hash_locks     4096  4096        &lt;BR /&gt;max_thread_proc       1024  1024        Immed&lt;BR /&gt;maxvgs                  32  32          &lt;BR /&gt;ncsize               32768  32768       &lt;BR /&gt;nflocks               8192  8192        Imm (auto disabled)&lt;BR /&gt;nstrpty                 60  60          &lt;BR /&gt;timezone                 0  0           &lt;BR /&gt;vnode_cd_hash_locks   4096  4096        &lt;BR /&gt;vnode_hash_locks      4096  4096        &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I've tested running 32, 64 and 128 nfsds, but again, I doubt that this is an NFS problem because I cannot get the server to pump out data like I know it should be able to do.&lt;BR 
/&gt;</description>
      <pubDate>Fri, 12 Aug 2005 15:54:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/strange-network-performance-issue/m-p/3602271#M557187</guid>
      <dc:creator>Richard Allen</dc:creator>
      <dc:date>2005-08-12T15:54:01Z</dc:date>
    </item>
  </channel>
</rss>

