<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: EVA4000: Disk Sub System in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/eva4000-disk-sub-system/m-p/4988625#M40118</link>
    <description>Q1: Almost entirely dependent on the nature and latency of your WAN link between the sites.&lt;BR /&gt;&lt;BR /&gt;Q2: There isn't a direct mechanism for prioritizing one server's access to an EVA over another's. So one might consider creating a separate disk group for "special" applications, but with only 12 spindles you don't have a choice. The minimum size for a disk group is 8 spindles so you don't have enough for two. Even if you had the necessary 16 spindles, creating two disk groups of 8 would really be shooting yourself in the foot for reasons of both performance and capacity. More spindles per disk group / fewer disk groups is nearly always the best answer when setting up an EVA. &lt;BR /&gt;&lt;BR /&gt;Q3: Yes - a disk group can contain disks from multiple enclosures. For an EVA4000 the maximum is 4 enclosures with a total of 56 spindles in a single disk group.</description>
    <pubDate>Mon, 03 Jul 2006 09:22:25 GMT</pubDate>
    <dc:creator>Mark Poeschl_2</dc:creator>
    <dc:date>2006-07-03T09:22:25Z</dc:date>
    <item>
      <title>EVA4000: Disk Sub System</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-disk-sub-system/m-p/4988623#M40116</link>
      <description>We have two EVA sites, each with two HSV200 controllers and one disk enclosure (12 x 146 GB).&lt;BR /&gt;&lt;BR /&gt;Q1: Is there any estimate of the performance difference between synchronous and asynchronous write modes for the DR group?&lt;BR /&gt;&lt;BR /&gt;Q2: Is there any way to prioritize disk traffic for the different applications connected to the EVA? For example, we have a core banking system DB (Oracle), a file server for a Terminal Server farm, and MS Exchange mailstore/log files. Let's say we connect them all to the EVA. We don't want an Exchange peak load to cause lag for the core banking system. How can we avoid this? Or is this done by splitting the disk array into different disk groups, giving the mission-critical application its own disk group?&lt;BR /&gt;&lt;BR /&gt;Q3: Can one disk group contain disks from different disk enclosures? What is the limit on the number of disks/enclosures for one disk group?&lt;BR /&gt;</description>
      <pubDate>Mon, 03 Jul 2006 06:14:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-disk-sub-system/m-p/4988623#M40116</guid>
      <dc:creator>Arman Obosyan</dc:creator>
      <dc:date>2006-07-03T06:14:08Z</dc:date>
    </item>
    <item>
      <title>Re: EVA4000: Disk Sub System</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-disk-sub-system/m-p/4988624#M40117</link>
      <description>Hello Arman:&lt;BR /&gt;&lt;BR /&gt;Q1: Performance will vary depending on the I/O rate, the distance, the link type, and the bandwidth. For very fast links over short distances, performance will not suffer. There is a document that may help you: "hp StorageWorks continuous access EVA replication performance estimator V1.1". See the EVA documentation site.&lt;BR /&gt;&lt;BR /&gt;Q2: Normally, EVAs are capable of handling large I/O loads because of the virtualization technology. You cannot specify a priority for the disks. Disk group separation could help if you separate random from sequential I/O patterns into different disk groups, but normally, the more disks in a disk group, the better the performance. You should run performance tests in your own environment.&lt;BR /&gt;&lt;BR /&gt;Q3: A disk group can contain all the disks from all the disk enclosures supported by the EVA, with a minimum of 8 disks.</description>
      <pubDate>Mon, 03 Jul 2006 09:21:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-disk-sub-system/m-p/4988624#M40117</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2006-07-03T09:21:26Z</dc:date>
    </item>
    <item>
      <title>Re: EVA4000: Disk Sub System</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-disk-sub-system/m-p/4988625#M40118</link>
      <description>Q1: Almost entirely dependent on the nature and latency of your WAN link between the sites.&lt;BR /&gt;&lt;BR /&gt;Q2: There isn't a direct mechanism for prioritizing one server's access to an EVA over another's. So one might consider creating a separate disk group for "special" applications, but with only 12 spindles you don't have a choice. The minimum size for a disk group is 8 spindles so you don't have enough for two. Even if you had the necessary 16 spindles, creating two disk groups of 8 would really be shooting yourself in the foot for reasons of both performance and capacity. More spindles per disk group / fewer disk groups is nearly always the best answer when setting up an EVA. &lt;BR /&gt;&lt;BR /&gt;Q3: Yes - a disk group can contain disks from multiple enclosures. For an EVA4000 the maximum is 4 enclosures with a total of 56 spindles in a single disk group.</description>
      <pubDate>Mon, 03 Jul 2006 09:22:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-disk-sub-system/m-p/4988625#M40118</guid>
      <dc:creator>Mark Poeschl_2</dc:creator>
      <dc:date>2006-07-03T09:22:25Z</dc:date>
    </item>
    <item>
      <title>Re: EVA4000: Disk Sub System</title>
      <link>https://community.hpe.com/t5/disk-enclosures/eva4000-disk-sub-system/m-p/4988626#M40119</link>
      <description>Thanks.</description>
      <pubDate>Wed, 05 Jul 2006 00:47:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/eva4000-disk-sub-system/m-p/4988626#M40119</guid>
      <dc:creator>Arman Obosyan</dc:creator>
      <dc:date>2006-07-05T00:47:08Z</dc:date>
    </item>
  </channel>
</rss>