<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Storage performance in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123582#M448262</link>
    <description>SAN problems are always hard to find - first I would check what is different in system configuration (2GB vs. 4 GB speed settings, driver etc., load balancing policy, ...). I would also suspect connection problems - check the port error statistics on the switch.</description>
    <pubDate>Thu, 07 Aug 2008 05:26:10 GMT</pubDate>
    <dc:creator>Torsten.</dc:creator>
    <dc:date>2008-08-07T05:26:10Z</dc:date>
    <item>
      <title>Storage performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123580#M448260</link>
      <description>Hi all.&lt;BR /&gt;&lt;BR /&gt;I'm having a problem with storage performance. HP-UX:&lt;BR /&gt;&lt;BR /&gt;uname -a&lt;BR /&gt;HP-UX &amp;lt;NODENAME&amp;gt; B.11.31 U ia64 0070246111 unlimited-user license&lt;BR /&gt;&lt;BR /&gt;This is a new two-node Oracle cluster. I have just started the configuration of the disk devices for the OCR and voting disk.&lt;BR /&gt;&lt;BR /&gt;Both nodes can access the same disks. The disks are located on HP EVA4100 storage arrays. Both nodes have two HBAs.&lt;BR /&gt;&lt;BR /&gt;Testing performance, one node has no problems, but the second node has terrible performance. For example:&lt;BR /&gt;&lt;BR /&gt;Node 1:&lt;BR /&gt;&lt;BR /&gt;time dd if=/dev/rdisk/ocr of=/dev/null bs=8k count=131072&lt;BR /&gt;131072+0 records in&lt;BR /&gt;131072+0 records out&lt;BR /&gt;&lt;BR /&gt;real    1m13.79s&lt;BR /&gt;user    0m0.09s&lt;BR /&gt;sys     0m1.07s&lt;BR /&gt;&lt;BR /&gt;sar output:&lt;BR /&gt;&lt;BR /&gt;           device   %busy   avque   r+w/s  blks/s  avwait  avserv&lt;BR /&gt;&lt;BR /&gt;            disk8   98.80    0.50    1697   27149    0.00    0.58&lt;BR /&gt;            disk8   98.40    0.50    1787   28596    0.00    0.55&lt;BR /&gt;            disk8   91.20    0.50    1296   20742    0.00    0.70&lt;BR /&gt;            disk8  100.00    0.50     485    7754    0.00    2.19&lt;BR /&gt;&lt;BR /&gt;This node is "normal", so to speak.&lt;BR /&gt;&lt;BR /&gt;Node 2:&lt;BR /&gt;&lt;BR /&gt;time dd if=/dev/rdisk/ocr of=/dev/null bs=8k count=131072&lt;BR /&gt;&lt;BR /&gt;&amp;lt;this command could take forever&amp;gt;&lt;BR /&gt;&lt;BR /&gt;Sar output:&lt;BR /&gt;&lt;BR /&gt;11:22:42   device   %busy   avque   r+w/s  blks/s  avwait  avserv&lt;BR /&gt;            disk8   80.00    0.50       2      32    0.00  400.96&lt;BR /&gt;            disk8  100.00    0.50       2      32    0.00  500.50&lt;BR /&gt;            disk8   99.80    0.50       2      32    0.00  500.00&lt;BR /&gt;&lt;BR /&gt;As you can see, the %busy 
is 100% and the service time is extremely high.&lt;BR /&gt;&lt;BR /&gt;I don't know what else to check. Node 1 works correctly, so it should not be a storage problem. I have tried with only one HBA enabled on node 2, with the same results. I have tried a new non-shared LUN, same results. The HBA seems to be on the correct bus at the correct speed.&lt;BR /&gt;&lt;BR /&gt;Any help would be appreciated.&lt;BR /&gt;&lt;BR /&gt;Hardware information:&lt;BR /&gt;&lt;BR /&gt;Node 1:&lt;BR /&gt;&lt;BR /&gt;    Model:              ia64 hp server rx6600&lt;BR /&gt;    Main Memory:        16352 MB&lt;BR /&gt;    Processors:         8&lt;BR /&gt;    OS mode:            64 bit&lt;BR /&gt;&lt;BR /&gt;/opt/fcms/bin/fcmsutil /dev/fcd0&lt;BR /&gt;&lt;BR /&gt;                           Vendor ID is = 0x1077&lt;BR /&gt;                           Device ID is = 0x2422&lt;BR /&gt;            PCI Sub-system Vendor ID is = 0x103C&lt;BR /&gt;                   PCI Sub-system ID is = 0x12D6&lt;BR /&gt;                               PCI Mode = PCI-X 266 MHz&lt;BR /&gt;                       ISP Code version = 4.0.90&lt;BR /&gt;                       ISP Chip version = 3&lt;BR /&gt;                               Topology = PTTOPT_FABRIC&lt;BR /&gt;                             Link Speed = 2Gb&lt;BR /&gt;                     Local N_Port_id is = 0x010300&lt;BR /&gt;                  Previous N_Port_id is = None&lt;BR /&gt;            N_Port Node World Wide Name = 0x5001438001724859&lt;BR /&gt;            N_Port Port World Wide Name = 0x5001438001724858&lt;BR /&gt;            Switch Port World Wide Name = 0x200300051e35a5de&lt;BR /&gt;            Switch Node World Wide Name = 0x100000051e35a5de&lt;BR /&gt;                           Driver state = ONLINE&lt;BR /&gt;                       Hardware Path is = 0/3/1/0&lt;BR /&gt;                     Maximum Frame Size = 2048&lt;BR /&gt;         Driver-Firmware Dump Available = NO&lt;BR /&gt;         Driver-Firmware Dump Timestamp = N/A&lt;BR /&gt;                
         Driver Version = @(#) fcd B.11.31.0709 Jun 11 2007&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Node 2:&lt;BR /&gt;&lt;BR /&gt;System Hardware&lt;BR /&gt;&lt;BR /&gt;    Model:              ia64 hp server rx2660&lt;BR /&gt;    Main Memory:        16363 MB&lt;BR /&gt;    Processors:         4&lt;BR /&gt;    OS mode:            64 bit&lt;BR /&gt;&lt;BR /&gt; /opt/fcms/bin/fcmsutil /dev/fcd0&lt;BR /&gt;&lt;BR /&gt;                           Vendor ID is = 0x1077&lt;BR /&gt;                           Device ID is = 0x2422&lt;BR /&gt;            PCI Sub-system Vendor ID is = 0x103C&lt;BR /&gt;                   PCI Sub-system ID is = 0x12D6&lt;BR /&gt;                               PCI Mode = PCI-X 266 MHz&lt;BR /&gt;                       ISP Code version = 4.0.90&lt;BR /&gt;                       ISP Chip version = 3&lt;BR /&gt;                               Topology = PTTOPT_FABRIC&lt;BR /&gt;                             Link Speed = 2Gb&lt;BR /&gt;                     Local N_Port_id is = 0x010400&lt;BR /&gt;                  Previous N_Port_id is = None&lt;BR /&gt;            N_Port Node World Wide Name = 0x5001438001724791&lt;BR /&gt;            N_Port Port World Wide Name = 0x5001438001724790&lt;BR /&gt;            Switch Port World Wide Name = 0x200400051e35a5de&lt;BR /&gt;            Switch Node World Wide Name = 0x100000051e35a5de&lt;BR /&gt;                           Driver state = ONLINE&lt;BR /&gt;                       Hardware Path is = 0/2/1/0&lt;BR /&gt;                     Maximum Frame Size = 2048&lt;BR /&gt;         Driver-Firmware Dump Available = NO&lt;BR /&gt;         Driver-Firmware Dump Timestamp = N/A&lt;BR /&gt;                         Driver Version = @(#) fcd B.11.31.0709 Jun 11 2007</description>
      <pubDate>Wed, 06 Aug 2008 22:43:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123580#M448260</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2008-08-06T22:43:05Z</dc:date>
    </item>
    <item>
      <title>Re: Storage performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123581#M448261</link>
      <description>Shalom,&lt;BR /&gt;&lt;BR /&gt;Ivan, can you give details on the Oracle Cluster: major version and patch level?&lt;BR /&gt;&lt;BR /&gt;You may be looking at storage; however, it could be caused by the lack of an Oracle patch or the need for a newly minted OS patch that Oracle now requires.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Thu, 07 Aug 2008 05:04:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123581#M448261</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2008-08-07T05:04:18Z</dc:date>
    </item>
    <item>
      <title>Re: Storage performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123582#M448262</link>
      <description>SAN problems are always hard to find - first I would check what is different in system configuration (2GB vs. 4 GB speed settings, driver etc., load balancing policy, ...). I would also suspect connection problems - check the port error statistics on the switch.</description>
      <pubDate>Thu, 07 Aug 2008 05:26:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123582#M448262</guid>
      <dc:creator>Torsten.</dc:creator>
      <dc:date>2008-08-07T05:26:10Z</dc:date>
    </item>
    <item>
      <title>Re: Storage performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123583#M448263</link>
      <description>Shalom again,&lt;BR /&gt;&lt;BR /&gt;I agree with Torsten.&lt;BR /&gt;&lt;BR /&gt;Every silly little detail, plus the SAN storage utilities, should be checked.&lt;BR /&gt;&lt;BR /&gt;What kind of SAN is it? EVA? EMC?&lt;BR /&gt;&lt;BR /&gt;I have experience with those two brands.&lt;BR /&gt;&lt;BR /&gt;SEP</description>
      <pubDate>Thu, 07 Aug 2008 06:53:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123583#M448263</guid>
      <dc:creator>Steven E. Protter</dc:creator>
      <dc:date>2008-08-07T06:53:21Z</dc:date>
    </item>
    <item>
      <title>Re: Storage performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123584#M448264</link>
      <description>Ivan,&lt;BR /&gt;&lt;BR /&gt;Some things to look at:&lt;BR /&gt;&lt;BR /&gt;1. If you don't already have a copy of evainfo, get one and compare output between nodes for the LUN:&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?swItem=co-53627-1" target="_blank"&gt;http://h20000.www2.hp.com/bizsupport/TechSupport/SoftwareDescription.jsp?swItem=co-53627-1&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;2. As /dev/disk/ocr is a non-standard disk name, I'm assuming you've created it either as a symbolic link to a real disk or used mknod to create a new device file with the same major/minor details as a real disk. Either way, have you repeated the test on the real disk? Have you checked that they were created/linked the same?&lt;BR /&gt;&lt;BR /&gt;3. Looking at the "real" device special file, do you get the same output on both nodes for:&lt;BR /&gt;&lt;BR /&gt;scsimgr get_info -D /dev/rdisk/diskX&lt;BR /&gt;&lt;BR /&gt;4. Try clearing the stats for the disk and then repeating your test:&lt;BR /&gt;&lt;BR /&gt;scsimgr clear_stat -D /dev/rdisk/diskX&lt;BR /&gt;time dd if=/dev/rdisk/diskX of=/dev/null bs=8k count=131072&lt;BR /&gt;scsimgr get_stat -D /dev/rdisk/diskX&lt;BR /&gt;&lt;BR /&gt;Any significant differences?&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan</description>
      <pubDate>Thu, 07 Aug 2008 08:06:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123584#M448264</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2008-08-07T08:06:06Z</dc:date>
    </item>
    <item>
      <title>Re: Storage performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123585#M448265</link>
      <description>Thank you all for your time.&lt;BR /&gt;&lt;BR /&gt;Steven&lt;BR /&gt;======&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt; Can you give details on the Oracle Cluster, major version and patch&lt;BR /&gt;&lt;BR /&gt;At this point, Oracle is not even installed, as the tests were done with dd.&lt;BR /&gt;&lt;BR /&gt;&amp;gt;&amp;gt;&amp;gt; What kind of SAN is it? EVA? EMC?&lt;BR /&gt;&lt;BR /&gt;As you can see above, it is an EVA4100 storage array.&lt;BR /&gt;&lt;BR /&gt;Torsten&lt;BR /&gt;=======&lt;BR /&gt;&amp;gt;&amp;gt; &amp;gt;first I would check what is different in system configuration (2GB vs. 4 GB speed settings, driver etc., load balancing policy, ...). I would also suspect connection problems - check the port error statistics on the switch.&lt;BR /&gt;&lt;BR /&gt;As you can see in the fcmsutil output, 2 Gb is the speed of both the node 1 and node 2 HBAs, and the service time is too high even for a 1 Gb HBA.&lt;BR /&gt;&lt;BR /&gt;The ports at the switch report no problems, and I have already tested with different load balancing policies and with only one HBA.&lt;BR /&gt;&lt;BR /&gt;Duncan&lt;BR /&gt;======&lt;BR /&gt;&lt;BR /&gt;The disk name was created with mknod because Oracle had problems identifying the device names for the OCR and voting disk when the device name has more than 4 characters. The performance tests were done on the "original" devices, with the same results.&lt;BR /&gt;&lt;BR /&gt;I do get the same output for scsimgr get_info.&lt;BR /&gt;&lt;BR /&gt;Not sure if clearing the statistics will help, but I will give it a try when I go to the customer site again.&lt;BR /&gt;&lt;BR /&gt;I did not know about evainfo; I will give it a try.&lt;BR /&gt;&lt;BR /&gt;Thanks to all!</description>
      <pubDate>Thu, 07 Aug 2008 13:43:16 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123585#M448265</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2008-08-07T13:43:16Z</dc:date>
    </item>
    <item>
      <title>Re: Storage performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123586#M448266</link>
      <description>Ivan,&lt;BR /&gt;&lt;BR /&gt;I didn't mean that clearing the device statistics would fix the issue, but that it would give you a clean starting point for comparing the data from a "scsimgr get_stat" command between the 2 nodes after repeating your test (e.g. to see if one system has more IO retries or maybe has LUN path offlines).&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan</description>
      <pubDate>Thu, 07 Aug 2008 14:15:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123586#M448266</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2008-08-07T14:15:25Z</dc:date>
    </item>
    <item>
      <title>Re: Storage performance</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123587#M448267</link>
      <description>There was a hardware problem with one of the HBAs. Once replaced, the performance problem was solved.&lt;BR /&gt;&lt;BR /&gt;Thanks to all.</description>
      <pubDate>Wed, 13 Aug 2008 13:43:45 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/storage-performance/m-p/5123587#M448267</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2008-08-13T13:43:45Z</dc:date>
    </item>
  </channel>
</rss>

