<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: understand XP arrays virtualization in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903884#M22334</link>
    <description>You misunderstood me!&lt;BR /&gt;HP in fact does support 7+1 for 300GB disks. &lt;BR /&gt;&lt;BR /&gt;Because of the reasons I mentioned, there is a higher possibility of being exposed due to the higher failure rates and the higher recovery times of the 300GB disks. &lt;BR /&gt;&lt;BR /&gt;So it is up to you!&lt;BR /&gt;&lt;BR /&gt;I can tell you that we do in fact have several customers running RAID5 7+1 with 300GB disks, and none has ever faced a problem!&lt;BR /&gt;&lt;BR /&gt;Cheers&lt;BR /&gt;Peter</description>
    <pubDate>Mon, 04 Dec 2006 10:46:19 GMT</pubDate>
    <dc:creator>Peter Mattei</dc:creator>
    <dc:date>2006-12-04T10:46:19Z</dc:date>
    <item>
      <title>understand XP arrays virtualization</title>
      <link>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903878#M22328</link>
      <description>Hello,&lt;BR /&gt;We had an XP128 which was configured by HP for us. The presented disks are OPEN-E with 13GB each. Recently we bought a new XP10000, but this time they used OPEN-V volumes, which are bigger (200GB).&lt;BR /&gt;So I'm wondering, what is the difference between the two, and why were we using such limited LUNs, which were causing us a lot of trouble to manage (a 1TB volume needs 80 LUNs)?&lt;BR /&gt;&lt;BR /&gt;Many thanks,&lt;BR /&gt;Farid.&lt;BR /&gt;</description>
      <pubDate>Sun, 26 Nov 2006 08:33:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903878#M22328</guid>
      <dc:creator>farid S</dc:creator>
      <dc:date>2006-11-26T08:33:26Z</dc:date>
    </item>
    <item>
      <title>Re: understand XP arrays virtualization</title>
      <link>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903879#M22329</link>
      <description>You don't mention when you first installed the XP128, but at the time, that may have been the largest LDEV size available on the XP arrays. It was also possible to 'LUSE' those LDEVs together to create larger LUNs, but for performance reasons, you probably used a host LVM and striped the LUNs across different RAID groups.</description>
      <pubDate>Sun, 26 Nov 2006 20:25:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903879#M22329</guid>
      <dc:creator>chris callihan</dc:creator>
      <dc:date>2006-11-26T20:25:48Z</dc:date>
    </item>
    <item>
      <title>Re: understand XP arrays virtualization</title>
      <link>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903880#M22330</link>
      <description>First of all you need to understand the architecture of the XP arrays. &lt;BR /&gt;The roots come from the mainframe world, where OS390/zOS define fixed device types with fixed sizes. A 3390-3, for example, has a size of 2.84GB; the corresponding Open-3 has 2.461GB. &lt;BR /&gt;&lt;BR /&gt;The design principles of the XP have always been:&lt;BR /&gt;1. Highest availability&lt;BR /&gt;2. Highest performance&lt;BR /&gt;&lt;BR /&gt;To achieve the highest availability there is no single point of failure, and every disk in a RAID group is in a different power partition, is on a different FC-loop, and has a separate ACP processor. &lt;BR /&gt;See page 14 in the following whitepaper. It shows how the RAID groups are set up. &lt;BR /&gt;&lt;A href="http://h71028.www7.hp.com/ERC/downloads/4AA0-7923ENW.pdf" target="_blank"&gt;http://h71028.www7.hp.com/ERC/downloads/4AA0-7923ENW.pdf&lt;/A&gt; &lt;BR /&gt;&lt;BR /&gt;Now with the introduction of the XP10000 and XP12000, truly flexible open-systems emulation has been introduced: OPEN-V. &lt;BR /&gt;It allows each and every individual LDEV and LUN to have a variable size from 46MB to 2TB. &lt;BR /&gt;&lt;BR /&gt;So you could create a LUN of 2TB and present it to your OS. But this very LUN would sit on "only" 4, 8 or, with the new 28+4 (4x 7+1 striped), 32 physical disks. &lt;BR /&gt;And here comes the next thing: performance!&lt;BR /&gt;You get high performance by distributing your load over as many disks and resources within the XP as possible!&lt;BR /&gt;You achieve that by using a Volume Manager. &lt;BR /&gt;This is the reason for your 80 LUNs. &lt;BR /&gt;They are dispersed over all Disk Groups and give an accumulated IO queue for performance. &lt;BR /&gt;Also see the whitepaper. &lt;BR /&gt;&lt;BR /&gt;With the XP12000 you have the most highly available and finest array in the world: congratulations!!&lt;BR /&gt;&lt;BR /&gt;Cheers&lt;BR /&gt;XP-Pete</description>
      <pubDate>Mon, 27 Nov 2006 02:32:03 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903880#M22330</guid>
      <dc:creator>Peter Mattei</dc:creator>
      <dc:date>2006-11-27T02:32:03Z</dc:date>
    </item>
    <item>
      <title>Re: understand XP arrays virtualization</title>
      <link>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903881#M22331</link>
      <description>28+4 (4x 7+1P stripes)?&lt;BR /&gt;&lt;BR /&gt;Peter, is this configuration already available, and can it be done on all kinds of disk capacities?&lt;BR /&gt;&lt;BR /&gt;On our last 300GB canisters, we were told 7+1P is not supported per Hitachi (the source of HP's XP arrays). All of our 300GB array groups are 6+2 double-parity RAID groups.&lt;BR /&gt;&lt;BR /&gt;Farid, yes, with smaller and traditional LUNs you will be able to build faster-performing storage units (LVOLs or VOLs) on your hosts. With an XP10000/12000, though, please plan carefully how "small" your LUNs can get, even if the number of LUNs on these arrays is way more than the 4096 limit of the older Hitachi arrays. The XP12K, for instance, I think can have up to 64K LUNs -- so you can still technically employ host-based volume managers to build high-performing volumes (which are mostly stripes). &lt;BR /&gt;&lt;BR /&gt;I further suggest you use VxVM with your XP10000, as VxVM can intelligently build your stripes and automatically distribute the components based on their location and distribution amongst ACPs and RAID groups inside the XP array.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Sat, 02 Dec 2006 21:50:43 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903881#M22331</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2006-12-02T21:50:43Z</dc:date>
    </item>
    <item>
      <title>Re: understand XP arrays virtualization</title>
      <link>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903882#M22332</link>
      <description>Hi Nelson&lt;BR /&gt;&lt;BR /&gt;Yes, 28+4, which is actually striping over 4x RAID5 7+1, has been available since FW 50-06-14.&lt;BR /&gt;14+2 (striping over 2x RAID5 7+1) has been available since 50-05-06.&lt;BR /&gt;And yes, you can do it with all available disks!&lt;BR /&gt;&lt;BR /&gt;We have successfully implemented both, mainly on Windows and Linux installations. &lt;BR /&gt;&lt;BR /&gt;Note that all LDEVs will be striped over the 32 disks, you cannot mix 28+4 with 14+2 or 7+1 on the very same disks, and it takes 32 disks (8 four-disk Array Groups) to configure it!&lt;BR /&gt;&lt;BR /&gt;HP and Hitachi do support RAID5 on all available disks, including 300GB disks!&lt;BR /&gt;HDS, the other supplier of the RAID500 array, has decided to support 300GB disks with RAID6 only. &lt;BR /&gt;HP recommends using RAID6 for business-critical data; other data can reside on RAID1 or 5!&lt;BR /&gt;&lt;BR /&gt;We do have several installations with 300GB disks, most with RAID5, some with RAID6.&lt;BR /&gt;Note that with RAID6 you cannot stripe over more than 8 disks!&lt;BR /&gt;&lt;BR /&gt;The reason behind recommending RAID6: 300GB disks tend to have higher failure rates, and a correction copy in case of a sudden death takes at least 4.3 hours to complete, which is twice as long as for 146GB disks.&lt;BR /&gt;&lt;BR /&gt;However, we have not had a single double disk failure so far!&lt;BR /&gt;&lt;BR /&gt;Cheers&lt;BR /&gt;XP-Pete</description>
      <pubDate>Sun, 03 Dec 2006 09:28:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903882#M22332</guid>
      <dc:creator>Peter Mattei</dc:creator>
      <dc:date>2006-12-03T09:28:17Z</dc:date>
    </item>
    <item>
      <title>Re: understand XP arrays virtualization</title>
      <link>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903883#M22333</link>
      <description>Peter,&lt;BR /&gt;&lt;BR /&gt;I still cannot understand. You mentioned 7+1P is not allowed for 300GB drives, BUT a stripe (4-way, I suppose) of 7+1P groups is allowed for any disk size? Or is the 28+4 (4x 7+1P) scheme you mentioned actually a 2x2 mirror stripe?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 04 Dec 2006 10:40:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903883#M22333</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2006-12-04T10:40:53Z</dc:date>
    </item>
    <item>
      <title>Re: understand XP arrays virtualization</title>
      <link>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903884#M22334</link>
      <description>You misunderstood me!&lt;BR /&gt;HP in fact does support 7+1 for 300GB disks. &lt;BR /&gt;&lt;BR /&gt;Because of the reasons I mentioned, there is a higher possibility of being exposed due to the higher failure rates and the higher recovery times of the 300GB disks. &lt;BR /&gt;&lt;BR /&gt;So it is up to you!&lt;BR /&gt;&lt;BR /&gt;I can tell you that we do in fact have several customers running RAID5 7+1 with 300GB disks, and none has ever faced a problem!&lt;BR /&gt;&lt;BR /&gt;Cheers&lt;BR /&gt;Peter</description>
      <pubDate>Mon, 04 Dec 2006 10:46:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903884#M22334</guid>
      <dc:creator>Peter Mattei</dc:creator>
      <dc:date>2006-12-04T10:46:19Z</dc:date>
    </item>
    <item>
      <title>Re: understand XP arrays virtualization</title>
      <link>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903885#M22335</link>
      <description>Peter, thanks for the info. We were told Hitachi and the XP do not allow a 7+1P configuration for 300GB disks. This was mid last year (2005). Was there a specific rev of the XP12K firmware that started allowing it?&lt;BR /&gt;&lt;BR /&gt;We're at Rev 5001.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 04 Dec 2006 10:56:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903885#M22335</guid>
      <dc:creator>Alzhy</dc:creator>
      <dc:date>2006-12-04T10:56:07Z</dc:date>
    </item>
    <item>
      <title>Re: understand XP arrays virtualization</title>
      <link>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903886#M22336</link>
      <description>We initially did not support it either, but I am not sure whether it was disabled in the XP microcode or not. &lt;BR /&gt;I also do not know whether the HDS USP microcode would allow configuring it!&lt;BR /&gt;&lt;BR /&gt;Cheers&lt;BR /&gt;Peter</description>
      <pubDate>Mon, 04 Dec 2006 11:21:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/understand-xp-arrays-virtualization/m-p/3903886#M22336</guid>
      <dc:creator>Peter Mattei</dc:creator>
      <dc:date>2006-12-04T11:21:25Z</dc:date>
    </item>
  </channel>
</rss>

