<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Extremely slow io on cciss raid6 in Operating System - Linux</title>
    <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214896#M32833</link>
    <description>I've tried both raw disk and ext3, and the problem is not related to fs issues, as I've seen in other suggestions for the same problem.&lt;BR /&gt;&lt;BR /&gt;I know I cannot expect lightning speed with raid5/6, but more than 8 MB/s is not expecting too much. The speed is actually not the biggest issue. The frustrating problem is that the server is totally locked while writing to disk. The server is going to be a slave database server, but that is simply not possible with the current performance.&lt;BR /&gt;&lt;BR /&gt;To test the performance I run:&lt;BR /&gt;&lt;BR /&gt;read:&lt;BR /&gt;&lt;BR /&gt;time dd of=/dev/zero if=/dev/mapper/VolGroup00-test bs=1M count=3000&lt;BR /&gt;3000+0 records in&lt;BR /&gt;3000+0 records out&lt;BR /&gt;3145728000 bytes (3.1 GB) copied, 15.6588 seconds, 201 MB/s&lt;BR /&gt;&lt;BR /&gt;real 0m15.713s&lt;BR /&gt;user 0m0.005s&lt;BR /&gt;sys 0m4.264s&lt;BR /&gt;&lt;BR /&gt;write:&lt;BR /&gt;&lt;BR /&gt;time dd if=/dev/zero of=/dev/mapper/VolGroup00-test bs=1M count=3000&lt;BR /&gt;3000+0 records in&lt;BR /&gt;3000+0 records out&lt;BR /&gt;3145728000 bytes (3.1 GB) copied, 426.12 seconds, 7.4 MB/s&lt;BR /&gt;&lt;BR /&gt;real 7m6.139s&lt;BR /&gt;user 0m0.003s&lt;BR /&gt;sys 0m4.418s&lt;BR /&gt;&lt;BR /&gt;</description>
    <pubDate>Thu, 12 Jun 2008 05:11:58 GMT</pubDate>
    <dc:creator>Ulrik Holmén</dc:creator>
    <dc:date>2008-06-12T05:11:58Z</dc:date>
    <item>
      <title>Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214888#M32825</link>
      <description>I've installed RHEL 5.1 on a DL320S server with a Smart Array P400 controller with 6 SATA disks in a RAID6 (ADG) setup. The write speed is terrible. I normally get about 8 MB/s write speed, which is not what I expect from such hardware. &lt;BR /&gt;&lt;BR /&gt;I've tried different kernels and parameters to increase the speed, and it has helped with the reading speed, which is now at about 200 MB/s sustained as long as no writes occur during the read. As soon as a write occurs, the read speed decreases radically. &lt;BR /&gt;&lt;BR /&gt;I've noticed a lot of people seem to have the same problem, but so far I haven't seen any good solutions apart from replacing the array controller. The iowait is above 90% while writing to the disk, and this is making the whole system incredibly slow. Just listing files in a directory can take 20s due to the iowait.&lt;BR /&gt;&lt;BR /&gt;I'm running the latest of everything now, firmware, kernel etc., but the problem is still there. I've tried the cciss.sf.net driver and the vanilla kernel driver. 
All the same.&lt;BR /&gt;&lt;BR /&gt;System information:&lt;BR /&gt;&lt;BR /&gt;uname:&lt;BR /&gt;Linux someserver 2.6.25 #1 SMP Wed Jun 11 21:21:21 CEST 2008 i686 i686 i386 GNU/Linux&lt;BR /&gt;&lt;BR /&gt;from dmesg:&lt;BR /&gt;HP CISS Driver (v 3.6.14)&lt;BR /&gt;ACPI: PCI Interrupt 0000:0a:00.0[A] -&amp;gt; GSI 16 (level, low) -&amp;gt; IRQ 16&lt;BR /&gt;cciss0: &amp;lt;0x3230&amp;gt; at PCI 0000:0a:00.0 IRQ 217 using DAC&lt;BR /&gt;      blocks= 4294967296 block_size= 512&lt;BR /&gt;      blocks= 5860333808 block_size= 512&lt;BR /&gt;      heads=255, sectors=32, cylinders=718179&lt;BR /&gt;&lt;BR /&gt;      blocks= 5860333808 block_size= 512&lt;BR /&gt;      heads=255, sectors=32, cylinders=718179&lt;BR /&gt;&lt;BR /&gt; cciss/c0d0: p1 p2&lt;BR /&gt;&lt;BR /&gt;/proc/interrupts:&lt;BR /&gt;           CPU0       CPU1       &lt;BR /&gt;  0:        255          0   IO-APIC-edge      timer&lt;BR /&gt;  1:          8          0   IO-APIC-edge      i8042&lt;BR /&gt;  3:          1          0   IO-APIC-edge    &lt;BR /&gt;  4:          2          0   IO-APIC-edge    &lt;BR /&gt;  8:          3          0   IO-APIC-edge      rtc&lt;BR /&gt;  9:          0          0   IO-APIC-fasteoi   acpi&lt;BR /&gt; 12:        131          0   IO-APIC-edge      i8042&lt;BR /&gt; 21:     990599          0   IO-APIC-fasteoi   uhci_hcd:usb1, uhci_hcd:usb2, uhci_hcd:usb3, uhci_hcd:usb4, ehci_hcd:usb6&lt;BR /&gt; 22:      22324          0   IO-APIC-fasteoi   ipmi_si&lt;BR /&gt; 23:        166          0   IO-APIC-fasteoi   uhci_hcd:usb5&lt;BR /&gt;215:       5802       2005   PCI-MSI-edge      eth0&lt;BR /&gt;217:     512723          0   PCI-MSI-edge      cciss0&lt;BR /&gt;NMI:          0          0   Non-maskable interrupts&lt;BR /&gt;LOC:    3144939    3144944   Local timer interrupts&lt;BR /&gt;RES:       1045      34959   Rescheduling interrupts&lt;BR /&gt;CAL:        209        653   function call interrupts&lt;BR /&gt;TLB:        445        478   TLB shootdowns&lt;BR /&gt;TRM:     
     0          0   Thermal event interrupts&lt;BR /&gt;SPU:          0          0   Spurious interrupts&lt;BR /&gt;ERR:          0&lt;BR /&gt;MIS:          0&lt;BR /&gt;&lt;BR /&gt;/proc/driver/cciss/cciss0:&lt;BR /&gt;cciss0: HP Smart Array P400 Controller&lt;BR /&gt;Board ID: 0x3234103c&lt;BR /&gt;Firmware Version: 4.12&lt;BR /&gt;IRQ: 217&lt;BR /&gt;Logical drives: 1&lt;BR /&gt;Current Q depth: 0&lt;BR /&gt;Current # commands on controller: 16&lt;BR /&gt;Max Q depth since init: 19&lt;BR /&gt;Max # commands on controller since init: 24&lt;BR /&gt;Max SG entries since init: 31&lt;BR /&gt;Sequential access devices: 0&lt;BR /&gt;&lt;BR /&gt;cciss/c0d0: 3000.49GB RAID ADG&lt;BR /&gt;&lt;BR /&gt;/sys/block/cciss\!c0d0/queue/read_ahead_kb:&lt;BR /&gt;128&lt;BR /&gt;&lt;BR /&gt;/sys/block/cciss\!c0d0/queue/max_sectors_kb: &lt;BR /&gt;512&lt;BR /&gt;&lt;BR /&gt;vmstat -a 1 5&lt;BR /&gt;procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------&lt;BR /&gt; r  b   swpd   free  inact active   si   so    bi    bo   in   cs us sy id wa st&lt;BR /&gt; 0  9    128  13176 931408  57280    0    0  5938  2580  245 1682  3  2 5 90  0&lt;BR /&gt; 1  9    128  13124 931260  57268    0    0  3848 20088  455 3236  2  3  0 95  0&lt;BR /&gt; 0  9    128  13124 931260  57268    0    0     0  1516   65 2131  0  0  0 100  0&lt;BR /&gt; 0  6    128  13044 931548  57268    0    0  8712  3564  667 4246  5  3  0 93  0&lt;BR /&gt; 1  4    128  13208 931564  57268    0    0     0  6216   66 2084  0  1  0 99  0&lt;BR /&gt;&lt;BR /&gt;Any ideas apart from changing the array adapter?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 11 Jun 2008 14:33:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214888#M32825</guid>
      <dc:creator>Ulrik Holmén</dc:creator>
      <dc:date>2008-06-11T14:33:12Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214889#M32826</link>
      <description>hi Ulrik,&lt;BR /&gt;&lt;BR /&gt;we have the same problem!&lt;BR /&gt;have you found a solution yet?&lt;BR /&gt;we have the issue on different HP servers with different P400 controllers, every time the same...&lt;BR /&gt;&lt;BR /&gt;my questions:&lt;BR /&gt;- what hardware revision is your controller? (lspci output)&lt;BR /&gt;- did you try putting the controller in another PCI-X slot?&lt;BR /&gt;- what is the output of "lshw" for the PCI slot the controller is in?&lt;BR /&gt;&lt;BR /&gt;My post: &lt;A href="http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1240003" target="_blank"&gt;http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1240003&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;thank you!&lt;BR /&gt;&lt;BR /&gt;i hope we can solve this problem :(&lt;BR /&gt;&lt;BR /&gt;greets</description>
      <pubDate>Wed, 11 Jun 2008 14:46:05 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214889#M32826</guid>
      <dc:creator>fschicker</dc:creator>
      <dc:date>2008-06-11T14:46:05Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214890#M32827</link>
      <description>I would not expect high performance from a RAID 6 configuration on small controllers. ¿Do you have write-back cache? ¿How many disks do you have? ¿Have you considered using RAID 5 + spare instead of RAID 6? ¿What performance testing tool do you use? ¿What block size is used?&lt;BR /&gt;&lt;BR /&gt;¿Can you create a RAID 0, first with 1 disk and then with all disks, for performance testing purposes? With this you could identify the performance of each disk, then of all disks in a stripe configuration, and then compare with the RAID 6 performance.&lt;BR /&gt;&lt;BR /&gt;Use iostat -x to identify the "service time" (svctm) in each situation. Post your results.</description>
      <pubDate>Wed, 11 Jun 2008 15:03:25 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214890#M32827</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2008-06-11T15:03:25Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214891#M32828</link>
      <description>hi Ivan,&lt;BR /&gt;&lt;BR /&gt;it's not an issue with the RAID level.&lt;BR /&gt;we tried it on about 6 servers, with RAID 1 / 5 and 6, every time the same.&lt;BR /&gt;&lt;BR /&gt;please read the post I linked; I think it shows the issue a little better.</description>
      <pubDate>Wed, 11 Jun 2008 15:17:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214891#M32828</guid>
      <dc:creator>fschicker</dc:creator>
      <dc:date>2008-06-11T15:17:00Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214892#M32829</link>
      <description>I see in your test that you ran over a file system. You should run your tests over the raw device. ¿Was this FS ext3? ¿Journaling enabled? For filesystem tests, use Iozone or Bonnie. &lt;BR /&gt;&lt;BR /&gt;¿What would be the performance over a single disk?&lt;BR /&gt;&lt;BR /&gt;A large block size won't always be better.</description>
      <pubDate>Wed, 11 Jun 2008 15:41:04 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214892#M32829</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2008-06-11T15:41:04Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214893#M32830</link>
      <description>hi Ivan,&lt;BR /&gt;&lt;BR /&gt;thanks for your answer.&lt;BR /&gt;&lt;BR /&gt;i know the possibilities of tweaking block sizes and filesystems, but i think 8 MB/s of writing has its reasons somewhere else :)&lt;BR /&gt;&lt;BR /&gt;i can't start bonnie because the server gets too much load and the services on it go offline if i start writing too much to the disk.</description>
      <pubDate>Wed, 11 Jun 2008 16:04:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214893#M32830</guid>
      <dc:creator>fschicker</dc:creator>
      <dc:date>2008-06-11T16:04:07Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214894#M32831</link>
      <description>Then, let's wait for Ulrik Holmén's results.</description>
      <pubDate>Wed, 11 Jun 2008 17:30:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214894#M32831</guid>
      <dc:creator>Ivan Ferreira</dc:creator>
      <dc:date>2008-06-11T17:30:14Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214895#M32832</link>
      <description>Hi Ivan,&lt;BR /&gt;&lt;BR /&gt;Now I could run bonnie and some other tests. &lt;BR /&gt;&lt;BR /&gt;Here are my results:&lt;BR /&gt;&lt;BR /&gt;- directly to the disk, without ext3:&lt;BR /&gt;&lt;BR /&gt;sync; time sh -c "dd if=/dev/zero of=/dev/cciss/c0d0p3 bs=1024k count=1000; sync"&lt;BR /&gt;1000+0 records in&lt;BR /&gt;1000+0 records out&lt;BR /&gt;1048576000 bytes (1.0 GB) copied, 677.96 seconds, 1.5 MB/s&lt;BR /&gt;&lt;BR /&gt;real    11m18.088s&lt;BR /&gt;user    0m0.000s&lt;BR /&gt;sys     0m2.584s&lt;BR /&gt;&lt;BR /&gt;- bonnie:&lt;BR /&gt;&lt;BR /&gt;bonnie -b -s 1100 -d /tmp/ -u root&lt;BR /&gt;Using uid:0, gid:0.&lt;BR /&gt;Writing with putc()...done&lt;BR /&gt;Writing intelligently...done&lt;BR /&gt;Rewriting...done&lt;BR /&gt;Reading with getc()...done&lt;BR /&gt;Reading intelligently...done&lt;BR /&gt;start 'em...done...done...done...&lt;BR /&gt;Create files in sequential order...done.&lt;BR /&gt;Stat files in sequential order...done.&lt;BR /&gt;Delete files in sequential order...done.&lt;BR /&gt;Create files in random order...done.&lt;BR /&gt;Stat files in random order...done.&lt;BR /&gt;Delete files in random order...done.&lt;BR /&gt;Version  1.03       ------Sequential Output------ --Sequential Input- --Random-&lt;BR /&gt;                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--&lt;BR /&gt;Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP&lt;BR /&gt;our.server.na 1100M  7366  15  4572   0  3456   0 26078  53 76173   5 135.3   0&lt;BR /&gt;                    ------Sequential Create------ --------Random Create--------&lt;BR /&gt;                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--&lt;BR /&gt;              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP&lt;BR /&gt;                 16   396   0 +++++ +++   263   0   227   0 +++++ +++  1564   2&lt;BR /&gt;our.server.name,1100M,7366,15,4572,0,3456,0,26078,53,76173,5,135.3,0,16,396,0,+++++,+++,263,0,227,0,+++++,+++,1564,2&lt;BR /&gt;&lt;BR /&gt;I don't know bonnie very well, but it doesn't look fine. The server had 0.0 load before.&lt;BR /&gt;&lt;BR /&gt;greets,&lt;BR /&gt;Florian</description>
      <pubDate>Thu, 12 Jun 2008 01:03:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214895#M32832</guid>
      <dc:creator>fschicker</dc:creator>
      <dc:date>2008-06-12T01:03:57Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214896#M32833</link>
      <description>I've tried both raw disk and ext3, and the problem is not related to fs issues, as I've seen in other suggestions for the same problem.&lt;BR /&gt;&lt;BR /&gt;I know I cannot expect lightning speed with raid5/6, but more than 8 MB/s is not expecting too much. The speed is actually not the biggest issue. The frustrating problem is that the server is totally locked while writing to disk. The server is going to be a slave database server, but that is simply not possible with the current performance.&lt;BR /&gt;&lt;BR /&gt;To test the performance I run:&lt;BR /&gt;&lt;BR /&gt;read:&lt;BR /&gt;&lt;BR /&gt;time dd of=/dev/zero if=/dev/mapper/VolGroup00-test bs=1M count=3000&lt;BR /&gt;3000+0 records in&lt;BR /&gt;3000+0 records out&lt;BR /&gt;3145728000 bytes (3.1 GB) copied, 15.6588 seconds, 201 MB/s&lt;BR /&gt;&lt;BR /&gt;real 0m15.713s&lt;BR /&gt;user 0m0.005s&lt;BR /&gt;sys 0m4.264s&lt;BR /&gt;&lt;BR /&gt;write:&lt;BR /&gt;&lt;BR /&gt;time dd if=/dev/zero of=/dev/mapper/VolGroup00-test bs=1M count=3000&lt;BR /&gt;3000+0 records in&lt;BR /&gt;3000+0 records out&lt;BR /&gt;3145728000 bytes (3.1 GB) copied, 426.12 seconds, 7.4 MB/s&lt;BR /&gt;&lt;BR /&gt;real 7m6.139s&lt;BR /&gt;user 0m0.003s&lt;BR /&gt;sys 0m4.418s&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 12 Jun 2008 05:11:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214896#M32833</guid>
      <dc:creator>Ulrik Holmén</dc:creator>
      <dc:date>2008-06-12T05:11:58Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214897#M32834</link>
      <description>While writing (The LVM is located on cciss/c0d0p2):&lt;BR /&gt;&lt;BR /&gt;Device:         rrqm/s   wrqm/s   r/s   w/s    rMB/s    wMB/s avgrq-sz avgqu-sz   await  svctm  %util&lt;BR /&gt;cciss/c0d0        0.00  1440.00  0.00 51.50     0.00     6.24   248.00   144.74 2835.11  19.43 100.05&lt;BR /&gt;cciss/c0d0p1 &lt;BR /&gt;               0.00     0.00  0.00  0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00&lt;BR /&gt;cciss/c0d0p2 &lt;BR /&gt;               0.00  1440.00  0.00 51.50     0.00     6.24   248.00   144.74 2835.11  19.43 100.05&lt;BR /&gt;</description>
      <pubDate>Thu, 12 Jun 2008 05:28:55 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214897#M32834</guid>
      <dc:creator>Ulrik Holmén</dc:creator>
      <dc:date>2008-06-12T05:28:55Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214898#M32835</link>
      <description>I solved the problem by forcibly enabling the write cache. It seems I forgot to order the BBWC, so the write cache was disabled by default. I have to order a BBWC to make sure I don't break anything in case of a power failure.&lt;BR /&gt;&lt;BR /&gt;ctrl slot=2 modify drivewritecache=enable&lt;BR /&gt;&lt;BR /&gt;The difference was quite astonishing:&lt;BR /&gt;&lt;BR /&gt;sync; time dd if=/dev/zero of=/dev/mapper/VolGroup00-test bs=1M count=3000; sync&lt;BR /&gt;3000+0 records in&lt;BR /&gt;3000+0 records out&lt;BR /&gt;3145728000 bytes (3.1 GB) copied, 35.2907 seconds, 89.1 MB/s&lt;BR /&gt;&lt;BR /&gt;real 0m35.292s&lt;BR /&gt;user 0m0.008s&lt;BR /&gt;sys 0m4.891s&lt;BR /&gt;</description>
      <pubDate>Thu, 12 Jun 2008 06:03:37 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214898#M32835</guid>
      <dc:creator>Ulrik Holmén</dc:creator>
      <dc:date>2008-06-12T06:03:37Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214899#M32836</link>
      <description>&lt;BR /&gt;Check out this Forum question&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1225006" target="_blank"&gt;http://forums11.itrc.hp.com/service/forums/questionanswer.do?threadId=1225006&lt;/A&gt;&lt;BR /&gt;</description>
      <pubDate>Thu, 12 Jun 2008 06:22:35 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214899#M32836</guid>
      <dc:creator>Jon Gomersall</dc:creator>
      <dc:date>2008-06-12T06:22:35Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214900#M32837</link>
      <description>dear Ulrik,&lt;BR /&gt;&lt;BR /&gt;i would NOT recommend enabling DWC, because you can lose data on power loss. the BBWC doesn't help here, because DWC is the write cache directly on the disks.&lt;BR /&gt;it's interesting that this helps; we have the disk write cache disabled on all our servers (where we don't use the P400) and still get the performance you now have (100-120 MB/s).&lt;BR /&gt;i think enabling DWC helps, but it ISN'T the solution.&lt;BR /&gt;&lt;BR /&gt;see what HP says under: &lt;A href="http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c01149818" target="_blank"&gt;http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c01149818&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;greets</description>
      <pubDate>Thu, 12 Jun 2008 10:37:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214900#M32837</guid>
      <dc:creator>fschicker</dc:creator>
      <dc:date>2008-06-12T10:37:19Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214901#M32838</link>
      <description>No, I think you're right. It wasn't the real solution, but it mitigates the problem with iowait. I have now ordered a battery backup for the controller, so soon I'll feel safe with the solution as well. &lt;BR /&gt;&lt;BR /&gt;I've read a lot of reports regarding what I think is the same problem, with solutions ranging from BBWC to different filesystems. &lt;BR /&gt;</description>
      <pubDate>Thu, 12 Jun 2008 12:01:53 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214901#M32838</guid>
      <dc:creator>Ulrik Holmén</dc:creator>
      <dc:date>2008-06-12T12:01:53Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214902#M32839</link>
      <description>dear Ulrik,&lt;BR /&gt;&lt;BR /&gt;i think it's not a problem with the BBWC.&lt;BR /&gt;we have been searching for the error for about 1 week; we have our DL320s with BBWC and the problem occurs anyway.&lt;BR /&gt;&lt;BR /&gt;i guess it is one of the following:&lt;BR /&gt;&lt;BR /&gt;- a problem with DMA handling; i looked at the cciss driver source, and there is a case with the P600 where problems occur&lt;BR /&gt;- the hardware revision of the P400. we have a customer who also has the same DL320s but a newer P400 HW REV, and he does NOT have the problems we have&lt;BR /&gt;- a high memory IRQ conflict or a shared PCI-X slot&lt;BR /&gt;&lt;BR /&gt;can you send me some information about your system:&lt;BR /&gt;&lt;BR /&gt;- the hpaducli -f &amp;lt;FILENAME&amp;gt; result&lt;BR /&gt;- the hpacucli output from "controller slot=2 show config detail"&lt;BR /&gt;- lspci -v and lshw output&lt;BR /&gt;&lt;BR /&gt;thank you!&lt;BR /&gt;&lt;BR /&gt;Florian</description>
      <pubDate>Thu, 12 Jun 2008 12:11:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214902#M32839</guid>
      <dc:creator>fschicker</dc:creator>
      <dc:date>2008-06-12T12:11:57Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214903#M32840</link>
      <description>Output from HP Array Configuration Utility CLI 8.0-14.0&lt;BR /&gt;</description>
      <pubDate>Fri, 13 Jun 2008 08:11:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214903#M32840</guid>
      <dc:creator>Ulrik Holmén</dc:creator>
      <dc:date>2008-06-13T08:11:06Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214904#M32841</link>
      <description>Output from hpaducli</description>
      <pubDate>Fri, 13 Jun 2008 08:16:23 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214904#M32841</guid>
      <dc:creator>Ulrik Holmén</dc:creator>
      <dc:date>2008-06-13T08:16:23Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214905#M32842</link>
      <description>Output from lspci</description>
      <pubDate>Fri, 13 Jun 2008 08:16:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214905#M32842</guid>
      <dc:creator>Ulrik Holmén</dc:creator>
      <dc:date>2008-06-13T08:16:57Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214906#M32843</link>
      <description>Output from lshw.&lt;BR /&gt;&lt;BR /&gt;Here you go.&lt;BR /&gt;&lt;BR /&gt;I have also been through the driver a couple of times while troubleshooting the issue, and I saw the P600 failure in DMA prefetch, but I'm convinced that it is not the same issue as we have with the P400, as the result of DMA prefetching from memory locations outside of memory would be more than just slow write access :).</description>
      <pubDate>Fri, 13 Jun 2008 08:22:14 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214906#M32843</guid>
      <dc:creator>Ulrik Holmén</dc:creator>
      <dc:date>2008-06-13T08:22:14Z</dc:date>
    </item>
    <item>
      <title>Re: Extremely slow io on cciss raid6</title>
      <link>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214907#M32844</link>
      <description>thanks for your files; it all looks normal, i think.&lt;BR /&gt;&lt;BR /&gt;my questions:&lt;BR /&gt;&lt;BR /&gt;- are you using original HP disks?&lt;BR /&gt;- did you try upgrading to the latest firmware?&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Mon, 16 Jun 2008 08:37:20 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-linux/extremely-slow-io-on-cciss-raid6/m-p/4214907#M32844</guid>
      <dc:creator>fschicker</dc:creator>
      <dc:date>2008-06-16T08:37:20Z</dc:date>
    </item>
  </channel>
</rss>

