<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: EVA4000 throughput in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676411#M18508</link>
    <description>HP has pretty realistic published performance numbers.  Many of our competitors will tell you that their array with four 2Gb FC ports will do 800MB/s.&lt;BR /&gt;&lt;BR /&gt;Bullcookies.&lt;BR /&gt;&lt;BR /&gt;Sustained throughput is different from bus speeds.&lt;BR /&gt;&lt;BR /&gt;You can test the performance with some performance testing software and a large configuration.  We typically test with as many drives as the array can handle, and usually in RAID1.  Of course, you'll need several fast servers.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Vince</description>
    <pubDate>Tue, 22 Nov 2005 07:51:04 GMT</pubDate>
    <dc:creator>Vincent Fleming</dc:creator>
    <dc:date>2005-11-22T07:51:04Z</dc:date>
    <item>
      <title>EVA4000 throughput</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676410#M18507</link>
      <description>Hi Everybody,&lt;BR /&gt;&lt;BR /&gt;As you know, the EVA 4000 has 4 host ports. Each of them can connect with a 2 Gb/s connection to a SAN switch. So why does HP say the EVA 4000 can handle 350 MB/s throughput? How can I test it?&lt;BR /&gt;&lt;BR /&gt;Alireza</description>
      <pubDate>Tue, 22 Nov 2005 06:48:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676410#M18507</guid>
      <dc:creator>Delrish</dc:creator>
      <dc:date>2005-11-22T06:48:01Z</dc:date>
    </item>
    <item>
      <title>Re: EVA4000 throughput</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676411#M18508</link>
      <description>HP has pretty realistic published performance numbers.  Many of our competitors will tell you that their array with four 2Gb FC ports will do 800MB/s.&lt;BR /&gt;&lt;BR /&gt;Bullcookies.&lt;BR /&gt;&lt;BR /&gt;Sustained throughput is different from bus speeds.&lt;BR /&gt;&lt;BR /&gt;You can test the performance with some performance testing software and a large configuration.  We typically test with as many drives as the array can handle, and usually in RAID1.  Of course, you'll need several fast servers.&lt;BR /&gt;&lt;BR /&gt;Regards,&lt;BR /&gt;&lt;BR /&gt;Vince</description>
      <pubDate>Tue, 22 Nov 2005 07:51:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676411#M18508</guid>
      <dc:creator>Vincent Fleming</dc:creator>
      <dc:date>2005-11-22T07:51:04Z</dc:date>
    </item>
    <item>
      <title>Re: EVA4000 throughput</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676412#M18509</link>
      <description>Thank you,&lt;BR /&gt;It seems I did not explain clearly. I mean that as long as we have a 2 Gb/s connection, we cannot get 350 MB/s. How did HP test it? &lt;BR /&gt;&lt;BR /&gt;Alireza</description>
      <pubDate>Tue, 22 Nov 2005 11:21:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676412#M18509</guid>
      <dc:creator>Delrish</dc:creator>
      <dc:date>2005-11-22T11:21:13Z</dc:date>
    </item>
    <item>
      <title>Re: EVA4000 throughput</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676413#M18510</link>
      <description>The EVA4000 has 4 front-end ports, each 200 MegaBytes/second bandwidth. And it has two back-end ports, each 200 MegaBytes/second bandwidth, too.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;350/4 = 87.5 MegaBytes/second throughput per port.</description>
      <pubDate>Tue, 22 Nov 2005 11:40:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676413#M18510</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-11-22T11:40:00Z</dc:date>
    </item>
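    Uwe's per-port arithmetic above can be checked with a short sketch (plain Python; the four front-end ports, roughly 200 MB/s per 2 Gb/s FC port, and the 350 MB/s array-level figure are all taken from the thread, not measured here):

    ```python
    # Figures quoted in the thread (assumptions, not measurements):
    # a 2 Gb/s Fibre Channel port carries roughly 200 MB/s of payload,
    # and HP quotes about 350 MB/s sustained for the EVA4000.
    front_end_ports = 4
    port_bandwidth_mb_s = 200          # per 2 Gb/s FC port
    array_throughput_mb_s = 350        # quoted sustained figure

    # Aggregate link speed is not the same thing as sustained throughput.
    aggregate_link_mb_s = front_end_ports * port_bandwidth_mb_s   # raw link total
    per_port_share_mb_s = array_throughput_mb_s / front_end_ports # share per port

    print(aggregate_link_mb_s)   # 800 -- the "marketing" number
    print(per_port_share_mb_s)   # 87.5 -- matches the 350/4 figure above
    ```

    The gap between 800 and 350 is exactly the point made earlier in the thread: bus speed is an upper bound, not a sustained rate.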
    <item>
      <title>Re: EVA4000 throughput</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676414#M18511</link>
      <description>Let's talk about it some more. I create a LUN on the EVA4000. It has 4 ports, so my server will find it through 4 different paths. For example, in HP-UX the OS detects it as 4 different disks (for example c6t0d0, c7t0d0, c8t0d0, c9t0d0). All of the other servers will detect this LUN as 4 disks with the same names, too. If there are 10 servers connected to this EVA and all of them have to access this shared LUN (such as Oracle RAC), how should I balance the load between the 4 EVA ports? If I use just the c6t0d0 disk, all the traffic will go through one port, the other ports will be idle, and I/O will be my bottleneck. What is the solution for this case?</description>
      <pubDate>Tue, 22 Nov 2005 12:30:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676414#M18511</guid>
      <dc:creator>Delrish</dc:creator>
      <dc:date>2005-11-22T12:30:37Z</dc:date>
    </item>
    <item>
      <title>Re: EVA4000 throughput</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676415#M18512</link>
      <description>Let's get the terminology right...&lt;BR /&gt;You don't create a LUN, you create a virtual disk. The virtual disk is then mapped to the SCSI LUN address space of each defined host when you 'present' it. You normally have 4 paths, so there are 4 different SCSI LUNs to access a single virtual disk.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;It is important to understand that on every EVA (3000/4000/5000/6000/8000) a single virtual disk is managed by one of the controllers at a time - a different virtual disk, of course, can be managed by the other controller.&lt;BR /&gt;&lt;BR /&gt;On the new EVAs (4000/6000/8000) you can do I/O through the non-managing controller as well, but there is a performance loss, because the data needs to be re-routed over the mirror ports to the managing controller. It is not a big deal for write I/Os, because the data is usually sent over anyway to go into the mirror cache, but read I/Os will create additional traffic.&lt;BR /&gt;&lt;BR /&gt;The paths through the managing controller are called the performance paths and I recommend that you only use them. That will give you two paths with 200 MegaBytes/second - which should be OK, because the EVA4000 has two back-end loops with 200 MegaBytes/second anyway.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;In most cases you are dealing with multiple virtual disks. You should divide them over both controllers so that you have some kind of load sharing and can make efficient use of all 4 paths.&lt;BR /&gt;</description>
      <pubDate>Tue, 22 Nov 2005 12:52:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676415#M18512</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-11-22T12:52:08Z</dc:date>
    </item>
    <item>
      <title>Re: EVA4000 throughput</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676416#M18513</link>
      <description>Thank you very much for the explanation,&lt;BR /&gt;I want to set up an enormous Oracle RAC system, and I need as much I/O bandwidth as possible. I would rather use all of the 350MB/s capacity of the EVA4000. According to the Oracle 10g documentation, I need just one disk (a LUN, or whatever the equivalent is in the EVA environment) for storing my data (we will use the new Oracle ASM technology instead of old raw partitions as shared storage). I/O capacity is very important for our DBA, and because of that we want to buy an EVA. We will use the EVA 2C1D configuration. What is your solution for this case?&lt;BR /&gt;Any kind of experience and help is highly appreciated&lt;BR /&gt;</description>
      <pubDate>Tue, 22 Nov 2005 14:53:24 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676416#M18513</guid>
      <dc:creator>Delrish</dc:creator>
      <dc:date>2005-11-22T14:53:24Z</dc:date>
    </item>
    <item>
      <title>Re: EVA4000 throughput</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676417#M18514</link>
      <description>Honestly,&lt;BR /&gt;I am a bit skeptical that you will be able to get 350MB/s with an EVA4000 2C1D, because it has only 14 disk drives. Each drive would have to be able to run at almost 200 IOPS to deliver that much data.&lt;BR /&gt;&lt;BR /&gt;The chunk size is 128KB, so:&lt;BR /&gt; 350,000,000 / 14 / 128,000 = 195.3125</description>
      <pubDate>Tue, 22 Nov 2005 15:05:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676417#M18514</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-11-22T15:05:37Z</dc:date>
    </item>
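    Uwe's back-of-the-envelope above can be reproduced with a short sketch (plain Python; the 14-drive count, the 128 KB chunk size, and the 350 MB/s target are the figures from the thread, and the calculation assumes every I/O moves exactly one chunk):

    ```python
    # Can 14 drives sustain 350 MB/s if every I/O moves one 128 KB chunk?
    target_throughput_b_s = 350_000_000   # 350 MB/s target from the thread
    drives = 14                           # disks in a 2C1D EVA4000
    chunk_size_b = 128_000                # EVA chunk size quoted above

    # IOPS each drive must deliver to hit the target throughput.
    iops_per_drive = target_throughput_b_s / drives / chunk_size_b
    print(iops_per_drive)   # 195.3125
    ```

    At roughly 195 IOPS per spindle, every drive would be running near the limit of a 15k rpm disk under sequential load, which is why more enclosures (and drives) are suggested next.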
    <item>
      <title>Re: EVA4000 throughput</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676418#M18515</link>
      <description>Ok, so we must use more enclosures in the EVA. However, my question about load balancing between the 4 host ports still stands. If I have as many enclosures and HDDs as needed, is there a solution for the case mentioned in my previous post?&lt;BR /&gt;</description>
      <pubDate>Tue, 22 Nov 2005 15:49:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676418#M18515</guid>
      <dc:creator>Delrish</dc:creator>
      <dc:date>2005-11-22T15:49:12Z</dc:date>
    </item>
    <item>
      <title>Re: EVA4000 throughput</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676419#M18516</link>
      <description>I think you must create at least 2 LUNs and divide the Oracle data into 2 parts to store on these LUNs. As far as I know, Oracle supports such an approach.&lt;BR /&gt;Then bind the first LUN to one EVA controller, and the second to the other.</description>
      <pubDate>Tue, 22 Nov 2005 15:58:21 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676419#M18516</guid>
      <dc:creator>Basil Vizgin</dc:creator>
      <dc:date>2005-11-22T15:58:21Z</dc:date>
    </item>
    <item>
      <title>Re: EVA4000 throughput</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676420#M18517</link>
      <description>A couple of things...&lt;BR /&gt;&lt;BR /&gt;First off, if you use 15k drives, you can get 200 IOPS per disk, BUT, that's pushing it in a database (ie: random small-block I/O) environment.  You're more likely to see something like 100 IOPS per disk in random environments.&lt;BR /&gt;&lt;BR /&gt;It all depends on how random your I/O is - the more you make the heads seek, the fewer I/Os the drive will be able to do.  (it takes time to seek the heads)&lt;BR /&gt;&lt;BR /&gt;So, distribute the load over as many drives as you can afford.&lt;BR /&gt;&lt;BR /&gt;Second, use at least 2 LUNs.  That way, you will use both controllers.  Here's my suggestion - create one LUN for the dataspaces, and one LUN for the logs.  Both should be in their own disk group (ie: 2 disk groups).  The logs should be at least 3 drives, maybe 4.  The rest can go to the dataspaces.&lt;BR /&gt;&lt;BR /&gt;Use Vraid-1 - it's much faster.&lt;BR /&gt;&lt;BR /&gt;Watch which path is your primary path to avoid doing all your I/O through the wrong controller, as Uwe mentioned above.&lt;BR /&gt;&lt;BR /&gt;Good luck,&lt;BR /&gt;&lt;BR /&gt;Vince</description>
      <pubDate>Tue, 22 Nov 2005 15:58:56 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676420#M18517</guid>
      <dc:creator>Vincent Fleming</dc:creator>
      <dc:date>2005-11-22T15:58:56Z</dc:date>
    </item>
    <item>
      <title>Re: EVA4000 throughput</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676421#M18518</link>
      <description>Load balancing...&lt;BR /&gt;well, what operating system are we talking about?&lt;BR /&gt;&lt;BR /&gt;Remember that you need a multipath filter between the EVA and the file system handler.&lt;BR /&gt;&lt;BR /&gt;According to your profile, you seem to deal a lot with HP-UX. Last time I checked, PVLinks cannot do any multipath load balancing. The traditional way is to create multiple virtual disks, access them via different primary paths and do the balancing (implicitly) via striping.&lt;BR /&gt;&lt;BR /&gt;Another way would be Secure Path V3.0F - the AutoPath component supports some kind of "dynamic load balancing".</description>
      <pubDate>Tue, 22 Nov 2005 16:06:15 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676421#M18518</guid>
      <dc:creator>Uwe Zessin</dc:creator>
      <dc:date>2005-11-22T16:06:15Z</dc:date>
    </item>
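    The "balance implicitly via striping" idea above can be sketched as follows (plain Python, purely illustrative: the path names mimic the HP-UX device files mentioned earlier in the thread, and the round-robin assignment scheme is an assumption, not an HP tool):

    ```python
    # Illustrative sketch: spread several virtual disks across the four
    # host-port paths so that each primary path carries a share of the load.
    # Path names mimic the HP-UX device files mentioned in the thread.
    paths = ["c6t0d0", "c7t0d0", "c8t0d0", "c9t0d0"]
    virtual_disks = ["vdisk1", "vdisk2", "vdisk3", "vdisk4", "vdisk5", "vdisk6"]

    # Round-robin each virtual disk onto a primary path; a host-level volume
    # manager would then stripe one logical volume across all of the disks,
    # so I/O is spread over every path without any multipath load balancer.
    assignment = {vd: paths[i % len(paths)] for i, vd in enumerate(virtual_disks)}
    for vd, path in assignment.items():
        print(vd, "->", path)
    ```

    Note the caveat that follows in the thread: in practice you would round-robin only among ports of the controller that manages each virtual disk, to avoid the proxy-path I/O forwarding penalty.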
    <item>
      <title>Re: EVA4000 throughput</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676422#M18519</link>
      <description>Yeah, but you want to be careful with the load balancing.  You don't want to balance across controllers, because of the I/O forwarding behavior.&lt;BR /&gt;&lt;BR /&gt;It's good to balance over multiple ports on the same controller, though.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 22 Nov 2005 16:53:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-throughput/m-p/3676422#M18519</guid>
      <dc:creator>Vincent Fleming</dc:creator>
      <dc:date>2005-11-22T16:53:55Z</dc:date>
    </item>
  </channel>
</rss>

