<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic MSA1000 questions in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/msa1000-questions/m-p/3879398#M21940</link>
    <description>We have an MSA1000 running FW 4.32 build 300, used as storage for an MS Exchange server running with MSCS. (I am not a Windows person, so apologies for the lack of clarity about the application config.) We use Secure Path for multipath access, and a product called Double Take to replicate selected files to a backup site (which currently resides two feet away). As far as I understand, this replication uses a dedicated LAN and not the SAN, unlike CA with the EVA.&lt;BR /&gt;&lt;BR /&gt;We have been having some performance problems, and I have been looking into the SAN side of things. Starting from the bottom up, I used the CLI to find out how the MSA1000 was configured. It has a single shelf with 14 disks, 2 in the spare set.&lt;BR /&gt;&lt;BR /&gt;My questions (which may have nothing to do with our performance problem, but I am trying to understand how this was put together) are as follows:&lt;BR /&gt;&lt;BR /&gt;1) show profile indicates that we are running with&lt;BR /&gt;"Mode 12 = Ignore Force Unit Access on Write"&lt;BR /&gt;whereas all other OS profiles use&lt;BR /&gt;"Mode 12 = Enforce Force Unit Access on Write".&lt;BR /&gt;There are two Windows profiles:&lt;BR /&gt;&lt;BR /&gt;Profile name = Windows: with ignore FUA on write set&lt;BR /&gt;and&lt;BR /&gt;Profile name = Windows_SP2_and_below: with enforce FUA on write set.&lt;BR /&gt;&lt;BR /&gt;All connections use the Default profile, and the Default profile is Windows, i.e. ignore FUA on write.&lt;BR /&gt;&lt;BR /&gt;It seems to me that FUA may have something to do with an OS requesting a synchronous write, e.g. for metadata.&lt;BR /&gt;&lt;BR /&gt;1.1 How is FUA on write interpreted by the MSA1000 - does it use the cache as a write-through cache?&lt;BR /&gt;1.2 How come Windows can get away with ignoring this, whereas OpenVMS, Tru64, Linux, Solaris, NetWare and HP-UX enforce it?&lt;BR /&gt;&lt;BR /&gt;1.3 We are also running Win2003 SP2. Does this mean we have the wrong setting?&lt;BR /&gt;&lt;BR /&gt;1.4 Are we in danger of metadata corruption?&lt;BR /&gt;&lt;BR /&gt;2) There are 12 RAID 1 units, each with a stripe size of 128K and each spread across all 12 available physical disks. We also have 9 RAID 5 units, each with a stripe size of 16K and each spread across all 12 available physical disks.&lt;BR /&gt;&lt;BR /&gt;2.1 My gut feeling is that this is not an ideal configuration: is 16K not rather a small I/O size for RAID 5?&lt;BR /&gt;&lt;BR /&gt;2.2 Is having 16K stripes and 128K stripes coexisting on the same physical disks not a recipe for fragmentation and poor I/O?&lt;BR /&gt;&lt;BR /&gt;3) When connecting to the MSA1000 for the first time, a show this_controller showed the active controller battery as off. When I looked a little later, it was on.&lt;BR /&gt;&lt;BR /&gt;If the battery is off, does this not mean the cache becomes write-through and performance declines?&lt;BR /&gt;&lt;BR /&gt;Any comments appreciated. Full show tech_support attached.&lt;BR /&gt;</description>
    <pubDate>Thu, 12 Oct 2006 15:17:02 GMT</pubDate>
    <dc:creator>Tom Swigg_1</dc:creator>
    <dc:date>2006-10-12T15:17:02Z</dc:date>
    <item>
      <title>MSA1000 questions</title>
      <link>https://community.hpe.com/t5/disk-enclosures/msa1000-questions/m-p/3879398#M21940</link>
      <description>We have an MSA1000 running FW 4.32 build 300, used as storage for an MS Exchange server running with MSCS. (I am not a Windows person, so apologies for the lack of clarity about the application config.) We use Secure Path for multipath access, and a product called Double Take to replicate selected files to a backup site (which currently resides two feet away). As far as I understand, this replication uses a dedicated LAN and not the SAN, unlike CA with the EVA.&lt;BR /&gt;&lt;BR /&gt;We have been having some performance problems, and I have been looking into the SAN side of things. Starting from the bottom up, I used the CLI to find out how the MSA1000 was configured. It has a single shelf with 14 disks, 2 in the spare set.&lt;BR /&gt;&lt;BR /&gt;My questions (which may have nothing to do with our performance problem, but I am trying to understand how this was put together) are as follows:&lt;BR /&gt;&lt;BR /&gt;1) show profile indicates that we are running with&lt;BR /&gt;"Mode 12 = Ignore Force Unit Access on Write"&lt;BR /&gt;whereas all other OS profiles use&lt;BR /&gt;"Mode 12 = Enforce Force Unit Access on Write".&lt;BR /&gt;There are two Windows profiles:&lt;BR /&gt;&lt;BR /&gt;Profile name = Windows: with ignore FUA on write set&lt;BR /&gt;and&lt;BR /&gt;Profile name = Windows_SP2_and_below: with enforce FUA on write set.&lt;BR /&gt;&lt;BR /&gt;All connections use the Default profile, and the Default profile is Windows, i.e. ignore FUA on write.&lt;BR /&gt;&lt;BR /&gt;It seems to me that FUA may have something to do with an OS requesting a synchronous write, e.g. for metadata.&lt;BR /&gt;&lt;BR /&gt;1.1 How is FUA on write interpreted by the MSA1000 - does it use the cache as a write-through cache?&lt;BR /&gt;1.2 How come Windows can get away with ignoring this, whereas OpenVMS, Tru64, Linux, Solaris, NetWare and HP-UX enforce it?&lt;BR /&gt;&lt;BR /&gt;1.3 We are also running Win2003 SP2. Does this mean we have the wrong setting?&lt;BR /&gt;&lt;BR /&gt;1.4 Are we in danger of metadata corruption?&lt;BR /&gt;&lt;BR /&gt;2) There are 12 RAID 1 units, each with a stripe size of 128K and each spread across all 12 available physical disks. We also have 9 RAID 5 units, each with a stripe size of 16K and each spread across all 12 available physical disks.&lt;BR /&gt;&lt;BR /&gt;2.1 My gut feeling is that this is not an ideal configuration: is 16K not rather a small I/O size for RAID 5?&lt;BR /&gt;&lt;BR /&gt;2.2 Is having 16K stripes and 128K stripes coexisting on the same physical disks not a recipe for fragmentation and poor I/O?&lt;BR /&gt;&lt;BR /&gt;3) When connecting to the MSA1000 for the first time, a show this_controller showed the active controller battery as off. When I looked a little later, it was on.&lt;BR /&gt;&lt;BR /&gt;If the battery is off, does this not mean the cache becomes write-through and performance declines?&lt;BR /&gt;&lt;BR /&gt;Any comments appreciated. Full show tech_support attached.&lt;BR /&gt;</description>
      <pubDate>Thu, 12 Oct 2006 15:17:02 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/msa1000-questions/m-p/3879398#M21940</guid>
      <dc:creator>Tom Swigg_1</dc:creator>
      <dc:date>2006-10-12T15:17:02Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 questions</title>
      <link>https://community.hpe.com/t5/disk-enclosures/msa1000-questions/m-p/3879399#M21941</link>
      <description>Tom,&lt;BR /&gt;&lt;BR /&gt;Normally I would recommend upgrading to 4.48, but we will be introducing new MSA1000 FW very soon, so you could wait until then.&lt;BR /&gt;&lt;BR /&gt;You're right, your configuration isn't ideal. How many mailbox users are you supporting?&lt;BR /&gt;&lt;BR /&gt;To your questions:&lt;BR /&gt;1.1, 1.2, 1.3, 1.4. Windows is notorious for setting FUA. FUA is Force Unit Access; it basically forces data to be written through the cache to the drives. The problem was that when the MSA cache was full, Windows would send commands with the FUA bit set and wouldn't stop. If there were many commands with FUA set, we couldn't flush the controller's cache. So now we ignore the FUA. The MSA cache is BBWC (battery-backed write cache); all data is written into BBWC, so there is no concern about corruption. The other OSes use FUA only sparsely.&lt;BR /&gt;&lt;BR /&gt;You really should set SSP. Windows servers have a habit of claiming all LUNs, and you wouldn't want to install a new server and inadvertently delete a LUN. Set the host profile to Windows.&lt;BR /&gt;&lt;BR /&gt;2, 2.1, 2.2. Not an ideal configuration, but that really depends on your I/O pattern. The problem with sharing so many spindles is drive contention, meaning requests waiting for access to the drives.&lt;BR /&gt;&lt;BR /&gt;3) If you make configuration changes, we momentarily disable/enable the cache.&lt;BR /&gt;</description>
      <pubDate>Thu, 12 Oct 2006 16:41:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/msa1000-questions/m-p/3879399#M21941</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2006-10-12T16:41:31Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 questions</title>
      <link>https://community.hpe.com/t5/disk-enclosures/msa1000-questions/m-p/3879400#M21942</link>
      <description>Thanks John,&lt;BR /&gt;&lt;BR /&gt;Our active user base is about 20,000, with around another 40,000 mailboxes remaining from previous academic years. We get about 100,000 emails a day. The mailbox data stores are living on the RAID 5 units. I would have thought that 16K was a small stripe size for RAID 5.&lt;BR /&gt;&lt;BR /&gt;Are there any known problems with FW 4.32, as it is a bit old?&lt;BR /&gt;&lt;BR /&gt;Also, there are only 4 connections seen on the active controller. Should this not be 8 if we have multipath access to the MSA1000 from 2 nodes?&lt;BR /&gt;</description>
      <pubDate>Fri, 13 Oct 2006 02:54:47 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/msa1000-questions/m-p/3879400#M21942</guid>
      <dc:creator>Tom Swigg_1</dc:creator>
      <dc:date>2006-10-13T02:54:47Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 questions</title>
      <link>https://community.hpe.com/t5/disk-enclosures/msa1000-questions/m-p/3879401#M21943</link>
      <description>Whenever an equipment manufacturer releases new firmware or drivers, there are reasons - usually a specific situation or condition.&lt;BR /&gt;&lt;BR /&gt;Depending on the conditions, 4.48 could potentially be faster. I have never seen the problem associated with Notes/Exchange; those applications are IOPS transactions.&lt;BR /&gt;&lt;BR /&gt;Based on the above information, your mail server's storage is not properly sized for the number of users. If your users are experiencing slow response, you really should consider adding one, maybe two, extra MSA30 shelves with 15K drives.&lt;BR /&gt;&lt;BR /&gt;You can attach the MSA CLI cable to monitor some other performance counters on the MSA:&lt;BR /&gt;&amp;gt;show cacheinfo - displays a snapshot of your cache usage; cycle through the command a few times&lt;BR /&gt;&amp;gt;show taskstats - displays a snapshot of the commands the MSA controller is working on&lt;BR /&gt;&lt;BR /&gt;&amp;gt;start perf - let this run for a while, to average out the counters below&lt;BR /&gt;&lt;BR /&gt;&amp;gt;show perf&lt;BR /&gt;&amp;gt;show perf physical&lt;BR /&gt;&amp;gt;show perf logical&lt;BR /&gt;&lt;BR /&gt;&amp;gt;stop perf&lt;BR /&gt;&amp;gt;clear perf&lt;BR /&gt;&lt;BR /&gt;If you are running Windows, you can also use some of the Perfmon counters: under PhysicalDisk, watch Current Disk Queue Length for your busiest LUN.&lt;BR /&gt;&lt;BR /&gt;I only see 4 HBAs. What is your SAN setup?&lt;BR /&gt;</description>
      <pubDate>Fri, 13 Oct 2006 07:34:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/msa1000-questions/m-p/3879401#M21943</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2006-10-13T07:34:08Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 questions</title>
      <link>https://community.hpe.com/t5/disk-enclosures/msa1000-questions/m-p/3879402#M21944</link>
      <description>John, is the new FW the long-awaited active/active (A/A) support for the MSA1000?</description>
      <pubDate>Sat, 14 Oct 2006 13:44:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/msa1000-questions/m-p/3879402#M21944</guid>
      <dc:creator>Basil Vizgin</dc:creator>
      <dc:date>2006-10-14T13:44:00Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 questions</title>
      <link>https://community.hpe.com/t5/disk-enclosures/msa1000-questions/m-p/3879403#M21945</link>
      <description>Maybe ;)</description>
      <pubDate>Sat, 14 Oct 2006 18:28:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/msa1000-questions/m-p/3879403#M21945</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2006-10-14T18:28:58Z</dc:date>
    </item>
    <item>
      <title>Re: MSA1000 questions</title>
      <link>https://community.hpe.com/t5/disk-enclosures/msa1000-questions/m-p/3879404#M21946</link>
      <description>John, when, when, when? :)&lt;BR /&gt;</description>
      <pubDate>Tue, 05 Dec 2006 16:54:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/msa1000-questions/m-p/3879404#M21946</guid>
      <dc:creator>Basil Vizgin</dc:creator>
      <dc:date>2006-12-05T16:54:30Z</dc:date>
    </item>
  </channel>
</rss>