<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: VA7100 RAID Allocations in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904487#M7516</link>
    <description>After you have created the additional LUN, please 'leave the VA alone' (that is, do not access its LUNs) to allow it to optimize its contents. Any IO issued to the VA takes priority over optimization. BTW, do you see 'array is optimizing' when you issue 'armdsp -a'? Try it a few times when the host is idle and the VA is able to optimize its contents&lt;BR /&gt;Eugeny</description>
    <pubDate>Mon, 17 Feb 2003 17:34:32 GMT</pubDate>
    <dc:creator>Eugeny Brychkov</dc:creator>
    <dc:date>2003-02-17T17:34:32Z</dc:date>
    <item>
      <title>VA7100 RAID Allocations</title>
      <link>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904479#M7508</link>
      <description>I have a VA7100 array that had 6 disks. I have one 94 GB LUN and it is currently 50% used. Looking at armperf, the 50 GB of data is all being stored in RAID 5DP mode. Several weeks ago I added 2 additional disks to the array, for a total of 8; I would have expected some of the data in the LUN to move back to RAID 0+1. &lt;BR /&gt;&lt;BR /&gt;Command View shows the following allocations:&lt;BR /&gt;&lt;BR /&gt;Logical Drives 94 GB&lt;BR /&gt;Unallocated 59 GB&lt;BR /&gt;Redundancy 79 GB&lt;BR /&gt;Active Spare 33 GB&lt;BR /&gt;&lt;BR /&gt;Do I have to add more disks to get a more even balance between RAID 5 and RAID 0+1? Or is there a command I need to run?</description>
      <pubDate>Thu, 13 Feb 2003 22:40:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904479#M7508</guid>
      <dc:creator>James Raffeld</dc:creator>
      <dc:date>2003-02-13T22:40:59Z</dc:date>
    </item>
    <item>
      <title>Re: VA7100 RAID Allocations</title>
      <link>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904480#M7509</link>
      <description>I had always understood that these arrays would try to store everything as mirrored (RAID 0/1), and only rotate blocks to RAID 5DP when forced to (unallocated space dropping below some threshold).  Your array does not appear to be doing that, so either I was misinformed, or you have an issue with your array (firmware, failed drive, something like that).&lt;BR /&gt;&lt;BR /&gt;It might be useful if you provided more info on your array and environment (firmware rev, bdf, complete stats from the array, etc.).&lt;BR /&gt;&lt;BR /&gt;Regards, --bmr</description>
      <pubDate>Thu, 13 Feb 2003 22:48:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904480#M7509</guid>
      <dc:creator>Brian M Rawlings</dc:creator>
      <dc:date>2003-02-13T22:48:46Z</dc:date>
    </item>
    <item>
      <title>Re: VA7100 RAID Allocations</title>
      <link>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904481#M7510</link>
      <description>James,&lt;BR /&gt;try creating one more LUN, for example 30 GB in size, and see if the VA's behavior changes. Attach the armdsp -a output to your next reply&lt;BR /&gt;Eugeny</description>
      <pubDate>Thu, 13 Feb 2003 22:49:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904481#M7510</guid>
      <dc:creator>Eugeny Brychkov</dc:creator>
      <dc:date>2003-02-13T22:49:01Z</dc:date>
    </item>
    <item>
      <title>Re: VA7100 RAID Allocations</title>
      <link>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904482#M7511</link>
      <description>Here is the armdsp output. I will make an additional LUN in the morning... thanks for the quick response!</description>
      <pubDate>Thu, 13 Feb 2003 22:56:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904482#M7511</guid>
      <dc:creator>James Raffeld</dc:creator>
      <dc:date>2003-02-13T22:56:53Z</dc:date>
    </item>
    <item>
      <title>Re: VA7100 RAID Allocations</title>
      <link>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904483#M7512</link>
      <description>The armdsp output looks good. Some notes: HP18 firmware is available; you can call HP for the upgrade. The array is connected as a private loop: if you use FC switches (not hubs or direct connect), the VA would be better off in fabric mode.&lt;BR /&gt;Please let us know how your testing goes after the LUN creation&lt;BR /&gt;Eugeny</description>
      <pubDate>Thu, 13 Feb 2003 23:01:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904483#M7512</guid>
      <dc:creator>Eugeny Brychkov</dc:creator>
      <dc:date>2003-02-13T23:01:32Z</dc:date>
    </item>
    <item>
      <title>Re: VA7100 RAID Allocations</title>
      <link>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904484#M7513</link>
      <description>I made the second LUN and filled it 10% with some random data... from what I can tell in the early stages of investigating this, most of the data from this new LUN is being stored in 5DP mode. I also checked the pre-existing LUN 1, and all 50 GB is still stored in 5DP. This is a production box, so I have to be careful in looking at this. Thanks for the note about HP18. The upgrade is planned for some scheduled downtime in April... I will also change the FC port to fabric at that time, as this array is currently direct attached. Thanks for the help, and as the testing continues I will post the results.</description>
      <pubDate>Fri, 14 Feb 2003 14:27:13 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904484#M7513</guid>
      <dc:creator>James Raffeld</dc:creator>
      <dc:date>2003-02-14T14:27:13Z</dc:date>
    </item>
    <item>
      <title>Re: VA7100 RAID Allocations</title>
      <link>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904485#M7514</link>
      <description>If the array is directly attached, you can't switch to 'fabric' mode.  Direct attached FC is done in 'loop' mode, which is half-duplex.  &lt;BR /&gt;&lt;BR /&gt;It takes a switch (the "fabric device") to allow you to operate in 'fabric login' mode.  This allows your FC HBA and storage device to run in full-duplex mode, which can double your bandwidth (or not, depending on your typical access pattern).&lt;BR /&gt;&lt;BR /&gt;If you have apps that read and write more or less equally, most of the time, and they are all fairly busy, full-duplex will help a lot.  If you have one app, and it mostly reads, you won't see any perceptible difference in overall performance.&lt;BR /&gt;&lt;BR /&gt;Also, to add a fabric while still maintaining no single point of failure in your storage scheme, you need to add a pair of switches, not just one.  So, going "fabric" just for performance adds some appreciable cost, and needs to be approached intelligently to be sure you'll get the benefit you expect.&lt;BR /&gt;&lt;BR /&gt;Fortunately, if no other servers are involved (i.e., no need for security or zoning), HP sells the 8-port Brocade 2Gb FC switches in an "Entry" configuration, meaning no zoning functionality.  In this limited application, the switches are under $5K each.&lt;BR /&gt;&lt;BR /&gt;So, there are some considerations to going "fabric".  Hope the background helps... &lt;BR /&gt;&lt;BR /&gt;--bmr</description>
      <pubDate>Fri, 14 Feb 2003 17:23:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904485#M7514</guid>
      <dc:creator>Brian M Rawlings</dc:creator>
      <dc:date>2003-02-14T17:23:44Z</dc:date>
    </item>
    <item>
      <title>Re: VA7100 RAID Allocations</title>
      <link>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904486#M7515</link>
      <description>We have the switches in place; we just need to set up a zone and place the VA in a zone with the L1000 so that none of the Microsoft boxes can see it.&lt;BR /&gt;&lt;BR /&gt;I still cannot understand why the VA7100 wants to allocate all the space as 5DP; even the new LUN I made is 99% stored in 5DP. &lt;BR /&gt;&lt;BR /&gt;I think at my next cycle of upgrades in a few months I will redo the LUNs and the array and just force the VA7100 to use 1+0 all the time.</description>
      <pubDate>Mon, 17 Feb 2003 17:09:59 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904486#M7515</guid>
      <dc:creator>James Raffeld</dc:creator>
      <dc:date>2003-02-17T17:09:59Z</dc:date>
    </item>
    <item>
      <title>Re: VA7100 RAID Allocations</title>
      <link>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904487#M7516</link>
      <description>After you have created the additional LUN, please 'leave the VA alone' (that is, do not access its LUNs) to allow it to optimize its contents. Any IO issued to the VA takes priority over optimization. BTW, do you see 'array is optimizing' when you issue 'armdsp -a'? Try it a few times when the host is idle and the VA is able to optimize its contents&lt;BR /&gt;Eugeny</description>
      <pubDate>Mon, 17 Feb 2003 17:34:32 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904487#M7516</guid>
      <dc:creator>Eugeny Brychkov</dc:creator>
      <dc:date>2003-02-17T17:34:32Z</dc:date>
    </item>
    <item>
      <title>Re: VA7100 RAID Allocations</title>
      <link>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904488#M7517</link>
      <description>The general rule I've heard is that, within one redundancy group (RG), the 'Allocated to Regular LUNs' value has to be more than 50% of the 'Total Physical Size' value&lt;BR /&gt;Eugeny</description>
      <pubDate>Mon, 17 Feb 2003 17:37:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904488#M7517</guid>
      <dc:creator>Eugeny Brychkov</dc:creator>
      <dc:date>2003-02-17T17:37:31Z</dc:date>
    </item>
    <item>
      <title>Re: VA7100 RAID Allocations</title>
      <link>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904489#M7518</link>
      <description>Thanks for the additional info about letting the array idle. Sunday, after my normal cold backup of Oracle, I will let the system sit with the database down and see if it starts optimizing.</description>
      <pubDate>Mon, 17 Feb 2003 17:46:26 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904489#M7518</guid>
      <dc:creator>James Raffeld</dc:creator>
      <dc:date>2003-02-17T17:46:26Z</dc:date>
    </item>
    <item>
      <title>Re: VA7100 RAID Allocations</title>
      <link>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904490#M7519</link>
      <description>Upgrade to HP18, keep the current Resilience setting (Normal Mode), and enable Pre-fetch.  These are key to getting the best performance.&lt;BR /&gt;&lt;BR /&gt;AutoRAID keeps the active write working set in RAID 1+0.  These are the small (&amp;lt;256K from cache to the disks) IOs.  Data that is not frequently written, or is written with large blocks, is kept in RAID 5DP.&lt;BR /&gt;&lt;BR /&gt;The old Model 12H maximized RAID 1 space.  But that proved to be a poor performance choice.  As new data is written to the array, it must first create free space by converting some RAID 1+0 capacity to RAID 5; this takes time, time that the new write must wait on.  Also, without free space, the array could not do the highly efficient log-structured RAID 5 writes.  The VA will optimize for free space, but is also able to keep large amounts of data in RAID 1+0 for short periods.&lt;BR /&gt;&lt;BR /&gt;If you created a new LUN and wrote randomly to the array and it created RAID 5DP capacity, then cache was able to concatenate the IOs into 256K records to the back end.  These large-block writes are more efficient to the log-structured RAID 5DP than to RAID 1+0.&lt;BR /&gt;</description>
      <pubDate>Wed, 19 Feb 2003 06:45:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/va7100-raid-allocations/m-p/2904490#M7519</guid>
      <dc:creator>Roger_22</dc:creator>
      <dc:date>2003-02-19T06:45:40Z</dc:date>
    </item>
  </channel>
</rss>