<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: HBA iops limit in HPE EVA Storage</title>
    <link>https://community.hpe.com/t5/hpe-eva-storage/hba-iops-limit/m-p/5092235#M55522</link>
    <description>Guess I should close this thread.&lt;BR /&gt;&lt;BR /&gt;Duncan answered the initial question, about the IOPS limit, and also gave some good information about IO performance troubleshooting.  The initial bottleneck was the scsi_queue_depth, and increasing that appears to have improved things quite a bit.</description>
    <pubDate>Fri, 29 Feb 2008 19:14:14 GMT</pubDate>
    <dc:creator>Ben Dehner</dc:creator>
    <dc:date>2008-02-29T19:14:14Z</dc:date>
    <item>
      <title>HBA iops limit</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/hba-iops-limit/m-p/5092229#M55516</link>
      <description>My question is fairly straightforward: what is the iops processing limit on an AB465A HBA?&lt;BR /&gt;&lt;BR /&gt;I'm running an RX6600 using two AB465A combo cards with two ports connected.  Under peak IO loads -- when exceeding about 3000 iops -- I am seeing performance degradation; specifically, there is significant IO request queuing on my host, as shown by sar and glance.  (This is an Oracle DB server, so the IO size will typically be 8k blocks.)&lt;BR /&gt;&lt;BR /&gt;What I think is happening is that we are exceeding the maximum number of iops that the HBAs are capable of.  Unfortunately, I can't find anything like a published spec on what to expect.  The closest I found is a whitepaper at &lt;A href="http://docs.hp.com/en/6848/AB465Awhitepaperbook.pdf" target="_blank"&gt;http://docs.hp.com/en/6848/AB465Awhitepaperbook.pdf&lt;/A&gt; that shows the maximum throughput of that card at about 190 MB/sec using 128k blocks, which translates to about 1500 iops.  This is consistent with the observation that performance problems happen when exceeding 3000 (1500 x 2 cards) iops.&lt;BR /&gt;&lt;BR /&gt;Obviously, in a SAN environment, there are other components (switches, array) that can contribute to IO bottlenecks.  For the moment, I want to focus on the HBAs and see if they are the problem.&lt;BR /&gt;&lt;BR /&gt;Thanks</description>
      <pubDate>Tue, 12 Feb 2008 15:39:42 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/hba-iops-limit/m-p/5092229#M55516</guid>
      <dc:creator>Ben Dehner</dc:creator>
      <dc:date>2008-02-12T15:39:42Z</dc:date>
    </item>
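The whitepaper conversion in the post above (190 MB/sec at 128k blocks, taken per card and doubled for two cards) can be sketched as a quick arithmetic check; the figures below are the ones quoted in the thread, and the variable names are just for illustration:

```shell
# Back-of-envelope check of the whitepaper figure: 190 MB/sec at 128k blocks.
# 190 MB/sec in KB/sec, divided by the KB moved per IO, gives iops per card.
per_card=$((190 * 1024 / 128))
echo "$per_card iops per card"        # about 1500 iops per card
echo "$((per_card * 2)) iops total"   # two cards: about 3000, matching the observed knee
```

This is only an upper-bound estimate for large sequential IO; as the replies below note, it says little about what the ASIC can sustain with small random IOs.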
    <item>
      <title>Re: HBA iops limit</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/hba-iops-limit/m-p/5092230#M55517</link>
      <description>Ben,&lt;BR /&gt;&lt;BR /&gt;The ASIC in the AB465A card is a Qlogic ISP2312, and a quick search on Google will throw up a few indications that it can sustain considerably more than 3000 iops.&lt;BR /&gt;&lt;BR /&gt;Remember that just because the blocksize of the filesystem might be 8K (it is 8k, isn't it?), and the database block size is set to 8K, doesn't mean that actual physical IOs need to be 8k. Assuming we are looking at HP-UX here (you don't explicitly state that), can you fill us in on some of the other configuration info? &lt;BR /&gt;&lt;BR /&gt;What makes you think the bottleneck is the HBAs? &lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan</description>
      <pubDate>Wed, 13 Feb 2008 15:14:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/hba-iops-limit/m-p/5092230#M55517</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2008-02-13T15:14:06Z</dc:date>
    </item>
    <item>
      <title>Re: HBA iops limit</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/hba-iops-limit/m-p/5092231#M55518</link>
      <description>One other thing,&lt;BR /&gt;&lt;BR /&gt;again assuming this is an HP-UX system, run fcmsutil against the FC HBA looking at the stats. If the card is really having to queue up IO I'd expect to see some interesting figures in the stats.&lt;BR /&gt;&lt;BR /&gt;Identify the device files for your HBA ports using:&lt;BR /&gt;&lt;BR /&gt;ioscan -funCfc&lt;BR /&gt;&lt;BR /&gt;Then run fcmsutil on them, e.g.&lt;BR /&gt;&lt;BR /&gt;fcmsutil /dev/fcd0 stat&lt;BR /&gt;&lt;BR /&gt;Post any interesting non-zero stats here...&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan</description>
      <pubDate>Wed, 13 Feb 2008 15:34:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/hba-iops-limit/m-p/5092231#M55518</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2008-02-13T15:34:34Z</dc:date>
    </item>
    <item>
      <title>Re: HBA iops limit</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/hba-iops-limit/m-p/5092232#M55519</link>
      <description>Why do I think the HBA might be the bottleneck?  Mostly because I don't know what I'm doing and the HBA was a convenient straw to clutch at.&lt;BR /&gt;&lt;BR /&gt;The system is running HP-UX 11.23, and I looked at fcmsutil; I didn't see anything that looked wrong.  What I am seeing, in both sar -d and Glance, is a lot of disk queuing.  I was wondering if the HBAs simply could not handle the iops.  On further investigation (and education), what I think is happening is that the SCSI queue_depth is set too low on my LUNs. &lt;BR /&gt;&lt;BR /&gt;The back-end disk array is an IBM N5200, which has about 50 x 10k FC spindles presenting 2.5TB of storage in 4 LUNs.  The back-end can handle plenty of iops, lots more than we're throwing at it.  I've checked the array performance stats, and it is fine.  However, since all of the IO is being funneled through 4 LUNs -- and 4 SCSI command queues -- I'm guessing that the default setting for the SCSI queue depth is causing the host to throttle back the rate at which it is sending IO to the array.  I'm going to try increasing the queue_depth and see what that does.</description>
      <pubDate>Wed, 13 Feb 2008 19:42:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/hba-iops-limit/m-p/5092232#M55519</guid>
      <dc:creator>Ben Dehner</dc:creator>
      <dc:date>2008-02-13T19:42:53Z</dc:date>
    </item>
    <item>
      <title>Re: HBA iops limit</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/hba-iops-limit/m-p/5092233#M55520</link>
      <description>Ben,&lt;BR /&gt;&lt;BR /&gt;Don't worry - all perf tuning starts with eliminating possible bottlenecks - now you can be reasonably confident that the HBAs aren't it in this case.&lt;BR /&gt;&lt;BR /&gt;Looking at SCSI queues seems a reasonable place to go next... out of interest I have a few more questions:&lt;BR /&gt;&lt;BR /&gt;1. What sort of IO profile are we talking about here? Random or sequential, or a mix?&lt;BR /&gt;&lt;BR /&gt;2. What average wait times and service times are you seeing in 'sar -d' for these LUNs?&lt;BR /&gt;&lt;BR /&gt;3. Are you seeing high CPU usage as well? Particularly system CPU? (check with sar -u)&lt;BR /&gt;&lt;BR /&gt;4. 3000 iops of size 8k is a miserable 24MB/s - I can get close to that out of a single rusty old SCSI disk in my C3000 workstation, although this wouldn't be a surprise for IOs with such a small blocksize. Have you tried some tests on the raw logical volumes (are you using LVM?) to see what peak IO you can reach?&lt;BR /&gt;&lt;BR /&gt;e.g.&lt;BR /&gt;&lt;BR /&gt;timex dd if=/dev/vg01/rlvol1 of=/dev/null bs=8k count=1310720&lt;BR /&gt;&lt;BR /&gt;That will sequentially read 10GB of data off an lvol and send it to /dev/null in 8k blocks.&lt;BR /&gt;&lt;BR /&gt;You could repeat the test with higher values to see what you might get from larger block sizes, and also run some in parallel to logical volumes on the other LUNs.&lt;BR /&gt;&lt;BR /&gt;HTH&lt;BR /&gt;&lt;BR /&gt;Duncan</description>
      <pubDate>Thu, 14 Feb 2008 09:21:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/hba-iops-limit/m-p/5092233#M55520</guid>
      <dc:creator>Duncan Edmonstone</dc:creator>
      <dc:date>2008-02-14T09:21:58Z</dc:date>
    </item>
    <item>
      <title>Re: HBA iops limit</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/hba-iops-limit/m-p/5092234#M55521</link>
      <description>Okay, I've changed the queue depth from 8 to 16.  That seems to have alleviated many of the symptoms I was seeing.&lt;BR /&gt;&lt;BR /&gt;1) The IOs would be mostly small and random.  Almost all Oracle database, with an FS blocksize of 8k and an Oracle blocksize of 8k.&lt;BR /&gt;&lt;BR /&gt;2) Before changing the queue_depth, average wait/service of 1.0/8 ms; after the change, wait/service of 0.1/8 ms.  In addition, the avque dropped from 2.5 or so to &amp;lt; 1.0.&lt;BR /&gt;&lt;BR /&gt;-- I know these averages don't seem like a problem, but during peak IO loads of &amp;gt; 3000 iops, it would become a problem.&lt;BR /&gt;&lt;BR /&gt;3) CPU usage is moderate.  SYS CPU time is &amp;lt; 5%.&lt;BR /&gt;&lt;BR /&gt;4) Not sure how much this test would tell.  We are already running a sustained IO load of 30-40 MB/sec.  As I was typing this, something hit the system and it peaked at 4700 iops with 80+ MB/sec of transfer.  However, I saw very little queuing, which has been the problem in the past.&lt;BR /&gt;&lt;BR /&gt;And I am using LVM, with LVM striping and a stripe size of 16k.</description>
      <pubDate>Thu, 14 Feb 2008 16:04:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/hba-iops-limit/m-p/5092234#M55521</guid>
      <dc:creator>Ben Dehner</dc:creator>
      <dc:date>2008-02-14T16:04:25Z</dc:date>
    </item>
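For reference, the per-LUN queue_depth change Ben describes is normally made with scsictl on HP-UX 11.23, or system-wide via the scsi_max_qdepth kernel tunable (whose default of 8 matches the "from 8 to 16" change above). A sketch of the commands involved; the device file name is a placeholder for one of the 4 LUNs, not taken from the thread:

```shell
# HP-UX 11.23: inspect the current queue depth on one LUN (placeholder device file)
scsictl -m queue_depth /dev/rdsk/c4t0d1

# Raise it from the default of 8 to 16 for that LUN (not persistent across reboot)
scsictl -m queue_depth=16 /dev/rdsk/c4t0d1

# Or raise the system-wide default for all SCSI devices
kctune scsi_max_qdepth=16
```

These are HP-UX-specific administration commands, so they are shown here only as a configuration fragment.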
    <item>
      <title>Re: HBA iops limit</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/hba-iops-limit/m-p/5092235#M55522</link>
      <description>Guess I should close this thread.&lt;BR /&gt;&lt;BR /&gt;Duncan answered the initial question, about the IOPS limit, and also gave some good information about IO performance troubleshooting.  The initial bottleneck was the scsi_queue_depth, and increasing that appears to have improved things quite a bit.</description>
      <pubDate>Fri, 29 Feb 2008 19:14:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/hba-iops-limit/m-p/5092235#M55522</guid>
      <dc:creator>Ben Dehner</dc:creator>
      <dc:date>2008-02-29T19:14:14Z</dc:date>
    </item>
  </channel>
</rss>

