<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: EMC Symmetrix LUN sizes in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/emc-symmetrix-lun-sizes/m-p/2980610#M8540</link>
    <description>Yes, this is possible. Just ask your EMC engineer to do it.&lt;BR /&gt;The only thing to watch out for is that you may waste some space per spindle if the sum of the hypers does not fit exactly on the physical disks.</description>
    <pubDate>Fri, 23 May 2003 09:15:40 GMT</pubDate>
    <dc:creator>Bernd Reize</dc:creator>
    <dc:date>2003-05-23T09:15:40Z</dc:date>
    <item>
      <title>EMC Symmetrix lun sizes</title>
      <link>https://community.hpe.com/t5/disk-enclosures/emc-symmetrix-lun-sizes/m-p/2980609#M8539</link>
      <description>Howdy,&lt;BR /&gt;Is it possible to mix two different sizes (8 GB &amp;amp; 18 GB) of LUNs on the same spindle in a Symmetrix? We are combining two systems onto the same frame and will have to have some overlap on the disks. One system has 18 GB LUNs and the other has 8 GB. It would be preferable to keep the existing sizes, as the LUNs are mapped 1:1 with the 8 GB and 18 GB filesystems.&lt;BR /&gt;Thanks&lt;BR /&gt;Ian</description>
      <pubDate>Fri, 23 May 2003 08:45:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/emc-symmetrix-lun-sizes/m-p/2980609#M8539</guid>
      <dc:creator>Ian Vaughan</dc:creator>
      <dc:date>2003-05-23T08:45:14Z</dc:date>
    </item>
    <item>
      <title>Re: EMC Symmetrix lun sizes</title>
      <link>https://community.hpe.com/t5/disk-enclosures/emc-symmetrix-lun-sizes/m-p/2980610#M8540</link>
      <description>Yes, this is possible. Just ask your EMC engineer to do it.&lt;BR /&gt;The only thing to watch out for is that you may waste some space per spindle if the sum of the hypers does not fit exactly on the physical disks.</description>
      <pubDate>Fri, 23 May 2003 09:15:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/emc-symmetrix-lun-sizes/m-p/2980610#M8540</guid>
      <dc:creator>Bernd Reize</dc:creator>
      <dc:date>2003-05-23T09:15:40Z</dc:date>
    </item>
    <item>
      <title>Re: EMC Symmetrix lun sizes</title>
      <link>https://community.hpe.com/t5/disk-enclosures/emc-symmetrix-lun-sizes/m-p/2980611#M8541</link>
      <description>In fact, we found that there was a bit of confusion doing it that way, added to the fact that BCVs need to be matched size for size.&lt;BR /&gt;&lt;BR /&gt;Next comes the question: is it a meta of two 8.43 GB hypers, or an 18 GB hyper? I prefer to keep the spindles and hypers consistent, and I like to use metas for hypers larger than 8.43 GB, but I often use LVM to create larger filesystems out of 8.43 GB hypers.&lt;BR /&gt;&lt;BR /&gt;If you have symopt and ECC, and you lay out your optimizer rules, I have seen 8.43 GB hypers in an LVM volume group deliver nearly the same throughput as 64 GB striped metas.&lt;BR /&gt;&lt;BR /&gt;The real question about EMC disk for us has been: how much money can we put into cache, and how much into disk? Once the frame is bought, the limit seems to be how much CASH goes into CACHE. With 24 GB of cache we see very small differences between a 16 GB meta and two 8.43 GB hypers in an LVM; WLA numbers were within .025% on transaction throughput on a SYM8830.&lt;BR /&gt;&lt;BR /&gt;Hope that clarifies some issues for you.&lt;BR /&gt;&lt;BR /&gt;We use HFS for our filesystems and trust the Symm for data integrity.&lt;BR /&gt;&lt;BR /&gt;Tim Sanko&lt;BR /&gt;</description>
      <pubDate>Fri, 23 May 2003 14:03:11 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/emc-symmetrix-lun-sizes/m-p/2980611#M8541</guid>
      <dc:creator>Tim Sanko</dc:creator>
      <dc:date>2003-05-23T14:03:11Z</dc:date>
    </item>
  </channel>
</rss>

