<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: MSA 1000 Performance Problem in MSA Storage</title>
    <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479053#M11032</link>
    <description>Denys,&lt;BR /&gt;What is your RAID level on that 14-disk array? What is the stripe size? You mentioned Oracle; what is your block size?&lt;BR /&gt;</description>
    <pubDate>Tue, 08 Feb 2005 15:59:31 GMT</pubDate>
    <dc:creator>John Kufrovich</dc:creator>
    <dc:date>2005-02-08T15:59:31Z</dc:date>
    <item>
      <title>MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479046#M11025</link>
      <description>Hi All,&lt;BR /&gt;&lt;BR /&gt;I wonder if you could shed some light on a write performance problem we are having with a new MSA 1000 unit. HP support consultants/engineers have been looking at this for the last week and I am beginning to lose patience with the lack of progress.&lt;BR /&gt;&lt;BR /&gt;The configuration is as follows:&lt;BR /&gt;&lt;BR /&gt;3 x DL380 G4 servers running Windows 2000 SP4 with 2GB RAM and mirrored 72GB 10k disks on the internal array.&lt;BR /&gt;&lt;BR /&gt;Each server contains an HP FC2214 2Gb fibre card with the latest firmware and drivers (as per the support site).&lt;BR /&gt;&lt;BR /&gt;These three cards connect to a Brocade switch plugged internally into the MSA 1000.&lt;BR /&gt;&lt;BR /&gt;The MSA 1000 is configured with 256MB cache in a 50/50 split for read/write. The firmware is version 4.32.&lt;BR /&gt;&lt;BR /&gt;There are 10 x 146GB 10k U320 disks in the MSA, divided into two RAID5 sets (4 disks each) and one RAID1 set (the other two disks).&lt;BR /&gt;&lt;BR /&gt;When using Explorer to copy a file from the internal RAID or a SAN disk to a SAN disk we are getting approximately 10MBytes/sec throughput. Using IOmeter to write we are also getting approximately 10MB/s. Using IOmeter to read the disks we are getting throughput of about 130MB/s.&lt;BR /&gt;&lt;BR /&gt;To troubleshoot the problem the following things have been tried:&lt;BR /&gt;&lt;BR /&gt;1. The Brocade switch has been replaced&lt;BR /&gt;2. The MSA controller has been replaced&lt;BR /&gt;3. The QLogic cards were replaced with an Emulex card&lt;BR /&gt;4. The cache split has been altered (0% read / 100% write)&lt;BR /&gt;5. A pre-release of the MSA controller firmware (version 4.4) has been tested&lt;BR /&gt;6. All volumes have been defragged&lt;BR /&gt;7. The internal array controllers (on the DL380s) have been upgraded&lt;BR /&gt;8. The Brocade switch has been removed and a direct fibre link to the MSA has been tested (still only 10MB/s)&lt;BR /&gt;9. A Dell server has been attached (still only 10MB/s)&lt;BR /&gt;10. Cache has been enabled and disabled (no difference) using dskcache.exe&lt;BR /&gt;&lt;BR /&gt;Now for the real spanner in the works. If we start a write from one server we get 10MB/s throughput as monitored on the Brocade switch. If I start another write job on each of the other two servers, my total throughput on the switch is three times the maximum input from the servers (i.e. 10MB/s from each server and I can write at 30MB/s!). This is verified by the port throughput performance graph on the switch.&lt;BR /&gt;&lt;BR /&gt;So it looks as if the MSA is happy to write at 30MB/s but the servers seem to be limited to outputting 10MB/s. Again, I can read at 130MB/s!&lt;BR /&gt;&lt;BR /&gt;Can anybody shed some light on this? Or do I just return the whole kit as defective?&lt;BR /&gt;&lt;BR /&gt;Thanks for any help,&lt;BR /&gt;Jason.&lt;BR /&gt;</description>
      <pubDate>Fri, 04 Feb 2005 20:42:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479046#M11025</guid>
      <dc:creator>Jason Keane</dc:creator>
      <dc:date>2005-02-04T20:42:30Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479047#M11026</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Have you maybe implemented zoning to separate the servers from each other at the hardware level? I would suggest checking some queues using Performance Monitor to establish exactly where the bottleneck is. Check the I/O queues; if the queue is full then there could be something in the HBA driver/settings. You'll probably find there is something within the server (OS) stopping your bits and bytes from flying. You say you also tried with Emulex and the results were the same? Is Secure Path installed? If you still have the Emulex cards, try increasing the queue depth and setting the tprlo parameters.&lt;BR /&gt;&lt;BR /&gt;If the system is not yet in production, you could try creating one array with two RAID 1 volumes inside, just to test the performance.&lt;BR /&gt;&lt;BR /&gt;I hope I have given you some untried options. I must agree with you, this is strange.&lt;BR /&gt;&lt;BR /&gt;rgds&lt;BR /&gt;&lt;BR /&gt;Bostjan</description>
      <pubDate>Mon, 07 Feb 2005 04:19:30 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479047#M11026</guid>
      <dc:creator>Bostjan Kosi</dc:creator>
      <dc:date>2005-02-07T04:19:30Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479048#M11027</link>
      <description>Jason,&lt;BR /&gt;Have you checked to make sure that there is no pending activity on the arrays?  You can see this by going into ACU and looking for messages.  If the array hasn't finished initialization then you will see a performance hit.&lt;BR /&gt;&lt;BR /&gt;Glenn</description>
      <pubDate>Mon, 07 Feb 2005 08:13:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479048#M11027</guid>
      <dc:creator>Glenn N Wuenstel</dc:creator>
      <dc:date>2005-02-07T08:13:23Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479049#M11028</link>
      <description>Do all LUNs you expect the same performance from use the same number of physical disks?&lt;BR /&gt;&lt;BR /&gt;If you can post the "More Info" text from all the LUNs, this might help.&lt;BR /&gt;&lt;BR /&gt;KurtG</description>
      <pubDate>Mon, 07 Feb 2005 09:37:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479049#M11028</guid>
      <dc:creator>KurtG</dc:creator>
      <dc:date>2005-02-07T09:37:19Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479050#M11029</link>
      <description>Hi Folks,&lt;BR /&gt;&lt;BR /&gt;Thanks for the offers of help so far. Here's some info for you:&lt;BR /&gt;&lt;BR /&gt;1. We've checked the queues, etc. with Performance Monitor and all seems within normal limits.&lt;BR /&gt;&lt;BR /&gt;2. The controller is not busy doing anything, as we are testing on a RAID 0 (single disk).&lt;BR /&gt;&lt;BR /&gt;But now for the spanner... we upgraded one of the servers to Windows 2003. After the upgrade it was still slow (about 12MB/s). We then turned on the write-caching option for the MSA in Device Manager (Disk Drives -&amp;gt; Policies) and we could write at about 70-80MB/s! Happy days. However, we need to turn on this option for Windows 2000. We are running SP4 and the hotfix described here is installed (by default with SP4):&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://support.microsoft.com/default.aspx?scid=kb;en-us;811392" target="_blank"&gt;http://support.microsoft.com/default.aspx?scid=kb;en-us;811392&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;When we use the dskcache.exe program to turn on the cache it says it's enabled, but there's no performance increase. It really looks now like the write cache is not enabled. We have opened a call with MS to see if there's something here.&lt;BR /&gt;&lt;BR /&gt;Surely I am not the only one to see this? Are people just running with a slow SAN and not noticing?&lt;BR /&gt;&lt;BR /&gt;Ta,&lt;BR /&gt;Jason.&lt;BR /&gt;</description>
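The before/after comparison above (12MB/s vs 70-80MB/s after the policy change) can be reproduced without IOmeter using a plain sequential-write timing. A minimal sketch, assuming nothing from the thread beyond "time a big write to the SAN volume"; the path, transfer size and chunk size are arbitrary choices:

```python
import os
import time

def seq_write_mb_per_s(path, total_mb=64, chunk_mb=1):
    """Time an unbuffered sequential write, including a final fsync,
    and return the throughput in MB/s."""
    buf = b"\x00" * (chunk_mb * 1024 * 1024)
    t0 = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for _ in range(total_mb // chunk_mb):
            f.write(buf)
        os.fsync(f.fileno())  # count the flush-to-disk in the timing
    return total_mb / (time.perf_counter() - t0)
```

Running this against a file on the SAN volume before and after toggling the cache policy should show the same jump the posters saw if the controller cache is actually being used.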
      <pubDate>Mon, 07 Feb 2005 10:52:48 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479050#M11029</guid>
      <dc:creator>Jason Keane</dc:creator>
      <dc:date>2005-02-07T10:52:48Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479051#M11030</link>
      <description>Hi Jason,&lt;BR /&gt;&lt;BR /&gt;We're having a similar issue with our MSA 1000 SAN attached to HP ProLiant DL380s.&lt;BR /&gt;- MSA firmware 4.32; 512 MB cache (2 controllers)&lt;BR /&gt;- 14 x 73GB 15K rpm HDDs&lt;BR /&gt;- QLogic QLA23xx FCA&lt;BR /&gt;&lt;BR /&gt;I've noticed little to no performance improvement when running an Oracle database striped over the 14 disks compared to one on just a local disk. So far, I've tried different RAID scenarios and cache settings. I'll check out the results with IOmeter.&lt;BR /&gt;&lt;BR /&gt;Denys</description>
      <pubDate>Tue, 08 Feb 2005 03:20:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479051#M11030</guid>
      <dc:creator>Denys van Kempen</dc:creator>
      <dc:date>2005-02-08T03:20:00Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479052#M11031</link>
      <description>update</description>
      <pubDate>Tue, 08 Feb 2005 03:20:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479052#M11031</guid>
      <dc:creator>Denys van Kempen</dc:creator>
      <dc:date>2005-02-08T03:20:31Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479053#M11032</link>
      <description>Denys,&lt;BR /&gt;What is your RAID level on that 14-disk array? What is the stripe size? You mentioned Oracle; what is your block size?&lt;BR /&gt;</description>
      <pubDate>Tue, 08 Feb 2005 15:59:31 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479053#M11032</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2005-02-08T15:59:31Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479054#M11033</link>
      <description>John,&lt;BR /&gt;&lt;BR /&gt;I'm running a batch job that generates about 30GB of redo in 2 hours. The I/O profile when I use dedicated disks for undo, redo, tables, index, temp, etc. is 100% busy on redo and undo. Total I/O is more than 60% write I/O.&lt;BR /&gt;&lt;BR /&gt;In the Statspack report, log file sync, log buffer and db file waits account for 99% of the wait time with a cpu/wait ratio of 30/70. On waits it's about 50/50 (redo write waits vs db file read waits).&lt;BR /&gt;&lt;BR /&gt;One of the nuttiest configurations I tried was to stripe RAID0 over 14 disks for redo only, to get the log file sync waits down. This did not improve things much. The application commit rate is very high but this cannot be altered.&lt;BR /&gt;&lt;BR /&gt;I have not yet tried different block sizes to reduce the db file waits. It's at 8K.&lt;BR /&gt;&lt;BR /&gt;Remarkably enough, the execution time of the batch is about the same for the MSA1000 SAN (14 disks in different RAID1+0 configs) as when using 2 local disks on the DL380 (Windows 2003 SE).&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Feb 2005 07:12:46 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479054#M11033</guid>
      <dc:creator>Denys van Kempen</dc:creator>
      <dc:date>2005-02-09T07:12:46Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479055#M11034</link>
      <description>Are you sure the LUN(s) span all 14 disks? I have seen situations where an array previously expanded with more disks did not have its LUNs reconfigured to span all the new physical disks.&lt;BR /&gt;&lt;BR /&gt;KurtG</description>
      <pubDate>Wed, 09 Feb 2005 08:59:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479055#M11034</guid>
      <dc:creator>KurtG</dc:creator>
      <dc:date>2005-02-09T08:59:04Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479056#M11035</link>
      <description>Denys,&lt;BR /&gt;Can you provide the HBA registry parameters?&lt;BR /&gt;&lt;BR /&gt;HKLM -&amp;gt; SYSTEM -&amp;gt; CurrentControlSet -&amp;gt; Services&lt;BR /&gt;&lt;BR /&gt;Locate either hp2300 or ql2300.&lt;BR /&gt;I need everything under Parameters -&amp;gt; Device.&lt;BR /&gt;</description>
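The key path John describes can be dumped programmatically rather than by hand. A hedged sketch only: the service names (ql2300/hp2300) come from the post, the standard-library winreg module exists only on the Windows host being examined, so the function simply returns None elsewhere:

```python
import sys

def dump_hba_params(service="ql2300"):
    """Return a dict of the HBA driver values under
    Services\\<service>\\Parameters\\Device, or None off Windows."""
    if sys.platform != "win32":
        return None  # the Windows registry is unavailable here
    import winreg
    path = rf"SYSTEM\CurrentControlSet\Services\{service}\Parameters\Device"
    values = {}
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
        i = 0
        while True:
            try:
                name, data, _ = winreg.EnumValue(key, i)
            except OSError:  # no more values under the key
                break
            values[name] = data
            i += 1
    return values
```

Posting the resulting dict (or a plain `reg export` of the same key) gives the driver settings John is asking for.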
      <pubDate>Wed, 09 Feb 2005 10:11:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479056#M11035</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2005-02-09T10:11:12Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479057#M11036</link>
      <description>Hi Folks,&lt;BR /&gt;&lt;BR /&gt;Just another update for you. I believe I have isolated the problem exactly, and I don't think it's related to the RAID set, spindle count, etc. FYI, we have done all tests on RAID 0, 1 and 5 with the same problem.&lt;BR /&gt;&lt;BR /&gt;To explain in detail: it appears that prior to Windows 2000 SP3 (i.e. RTM, SP1 and SP2) the MS code had a bug that allowed file data to be held in a controller write cache (with or without battery-backed write cache, or BBWC). This became a problem when power failed and you didn't have battery-backed controller cache, as you lost data. To resolve the data loss issue in SP3, MS fixed the bug by updating the disk.sys driver to send the SCSI command WRITE + FLUSH for every file write command. This has the effect of forcing the controller to write its cache immediately to disk, thus bypassing any write caching that might go on - even if you have BBWC. As a result you get really poor write performance, as you are writing directly to the disks, bypassing the write cache. The read cache is always enabled, as no data can be lost in a disk read.&lt;BR /&gt;&lt;BR /&gt;The net effect of this is that a read from the MSA will have high throughput (as it's from the cache) whereas a write will have poor throughput. Testing with IOmeter will prove the read performance, as you can do a pure read, and IOmeter can be forced to filter the FLUSH command from the SCSI sequence to demonstrate high write throughput. The problem with IOmeter in this instance, though, is that it does not function the way Windows I/O happens (e.g. using Explorer, Exchange, SQL, etc). Therefore it can give inaccurate results as to the overall true system performance.&lt;BR /&gt;&lt;BR /&gt;Microsoft posted an SP3 patch, included in SP4, to resolve this issue and enable write caching using dskcache.exe. The problem is that either the MS or HP driver ignores the WRITE command and automatically adds a FLUSH command. By putting in a FLUSH command filter (using a small kernel-mode driver) the high write throughput can be obtained. However, this configuration is not supported by either MS or HP.&lt;BR /&gt;&lt;BR /&gt;Upon escalation of the issue to both vendors (MS + HP), each is adamant that it's the other vendor's issue. To be fair to MS, at least they are still troubleshooting it; HP are of the opinion it's purely an MS issue - and maybe they are right.&lt;BR /&gt;&lt;BR /&gt;This problem was resolved using Windows 2003 and the "Enable Advanced Performance" option, which effectively filters out the FLUSH command but in an MS-approved code base (i.e. you have support).&lt;BR /&gt;&lt;BR /&gt;I don't believe checking LUNs, RAID sizes, types, etc. will resolve the issue. It's simply a software issue with Windows 2000 SP3 and SP4.&lt;BR /&gt;&lt;BR /&gt;I hope this clarifies things for everyone.&lt;BR /&gt;&lt;BR /&gt;Cheers,&lt;BR /&gt;Jason.&lt;BR /&gt;</description>
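The flush-per-write effect Jason describes can be demonstrated on any POSIX box: opening with O_SYNC forces every write() to hit stable storage, roughly modelling the WRITE + FLUSH (FUA-style) pattern, while a normal open lets the OS and controller caches absorb the writes. A hedged sketch for illustration only - flags and sizes are my choices, and this models the behaviour rather than reproducing the Windows disk.sys code path:

```python
import os
import time

def write_rate(path, extra_flags=0, mb=8):
    """Write `mb` megabytes 1 MB at a time and return MB/s.
    Pass os.O_SYNC as extra_flags to force a flush per write,
    mimicking the flush-per-write behaviour described above."""
    buf = b"x" * (1024 * 1024)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC | extra_flags)
    t0 = time.perf_counter()
    for _ in range(mb):
        os.write(fd, buf)
    os.close(fd)
    return mb / (time.perf_counter() - t0)

# cached = write_rate("/tmp/cached.bin")
# synced = write_rate("/tmp/synced.bin", os.O_SYNC)
# On spinning disks, expect the cached rate to dwarf the synced one,
# just as MSA reads from cache dwarf flush-per-write throughput here.
```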
      <pubDate>Wed, 09 Feb 2005 11:12:01 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479057#M11036</guid>
      <dc:creator>Jason Keane</dc:creator>
      <dc:date>2005-02-09T11:12:01Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479058#M11037</link>
      <description>Jason,&lt;BR /&gt;&lt;BR /&gt;There was an issue with older MSA FW and MS setting the FUA bit.  The 4.xx MSA FW ignores the FUA bit, because we have BBWC.  &lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Feb 2005 11:56:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479058#M11037</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2005-02-09T11:56:55Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479059#M11038</link>
      <description>Hi John,&lt;BR /&gt;&lt;BR /&gt;We are running firmware 4.32 on the controller. However, if it ignores the FUA bit, how come on Windows 2003 there is no performance increase until "Enable Advanced Performance" is selected? My understanding of this option is that it turns off the flush command - is this right?&lt;BR /&gt;&lt;BR /&gt;Thanks,&lt;BR /&gt;Jason.&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Feb 2005 12:01:39 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479059#M11038</guid>
      <dc:creator>Jason Keane</dc:creator>
      <dc:date>2005-02-09T12:01:39Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479060#M11039</link>
      <description>I wish I had insight into the MS SCSI driver. Perhaps this is the reason they changed the SCSI driver for 2003.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Feb 2005 12:22:18 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479060#M11039</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2005-02-09T12:22:18Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479061#M11040</link>
      <description>If you have access to the CLI, look at your Windows profile. You will see we ignore the FUA (Force Unit Access) bit. If you want to test something out, change your host profile to the degraded Windows profile, where we accept the FUA.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Feb 2005 12:29:44 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479061#M11040</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2005-02-09T12:29:44Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479062#M11041</link>
      <description>Hi John,&lt;BR /&gt;&lt;BR /&gt;I can connect a cable to the MSA and test from there. When you mention the degraded Windows profile, I assume this is a profile setting within the CLI of the controller?&lt;BR /&gt;&lt;BR /&gt;Ta,&lt;BR /&gt;Jason.&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Feb 2005 12:53:08 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479062#M11041</guid>
      <dc:creator>Jason Keane</dc:creator>
      <dc:date>2005-02-09T12:53:08Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479063#M11042</link>
      <description>Right.&lt;BR /&gt;You can also set it via ACU: under SSP, select the host profile.&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 09 Feb 2005 15:59:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479063#M11042</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2005-02-09T15:59:57Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479064#M11043</link>
      <description>John,&lt;BR /&gt;&lt;BR /&gt;Thanks for your input. Attached is the registry export for HP2300 and QL2300.&lt;BR /&gt;</description>
      <pubDate>Thu, 10 Feb 2005 07:09:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479064#M11043</guid>
      <dc:creator>Denys van Kempen</dc:creator>
      <dc:date>2005-02-10T07:09:19Z</dc:date>
    </item>
    <item>
      <title>Re: MSA 1000 Performance Problem</title>
      <link>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479065#M11044</link>
      <description>Jason,&lt;BR /&gt;In your first post you mention you had pre-release FW. Where did you get it?&lt;BR /&gt;&lt;BR /&gt;Since you are running Windows, what does your perfmon look like?&lt;BR /&gt;In perfmon, under physical disk, select the LUN you are writing to and do as you stated above. Is your graph a flat line? Describe what you see or provide a snap.&lt;BR /&gt;</description>
      <pubDate>Thu, 10 Feb 2005 09:53:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/msa-storage/msa-1000-performance-problem/m-p/3479065#M11044</guid>
      <dc:creator>John Kufrovich</dc:creator>
      <dc:date>2005-02-10T09:53:07Z</dc:date>
    </item>
  </channel>
</rss>

