<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Hpsa problems in ProLiant Servers (ML,DL,SL)</title>
    <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/hpsa-problems/m-p/6979700#M159538</link>
    <description>&lt;P&gt;Sorry for necroposting, but we've got this problem too, with a retired P812. Digging into the hpsa source showed that the strange &lt;SPAN class="st"&gt;C1:B1:T0:L3&lt;/SPAN&gt; notation is simply the SCSI device notation without the letters - 1:1:0:3&lt;/P&gt;&lt;P&gt;&amp;gt; lsscsi&lt;/P&gt;&lt;P&gt;[0:0:0:0]&amp;nbsp;&amp;nbsp;&amp;nbsp; storage HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; P410i&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; -&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&lt;BR /&gt;[0:1:0:0]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sda&lt;BR /&gt;[0:1:0:1]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdb&lt;BR /&gt;[0:1:0:2]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdc&lt;BR /&gt;[0:1:0:3]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdd&lt;BR /&gt;[0:1:0:4]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sde&lt;BR /&gt;[0:1:0:5]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdf&lt;BR /&gt;[1:0:0:0]&amp;nbsp;&amp;nbsp;&amp;nbsp; storage HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; P812&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; -&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&lt;BR /&gt;[1:1:0:0]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdg&lt;BR /&gt;[1:1:0:1]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdh&lt;BR /&gt;[1:1:0:2]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdi&lt;BR /&gt;[1:1:0:3]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdj&lt;/P&gt;&lt;P&gt;or&lt;/P&gt;&lt;P&gt;ls /sys/class/scsi_device/1:1:0:3/device/block/&lt;/P&gt;&lt;P&gt;sdj&lt;/P&gt;&lt;P&gt;The physical device behind the RAID can be easily identified by the "Disk Name:" value in the hpacucli or hpssacli output.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Wed, 11 Oct 2017 10:59:06 GMT</pubDate>
    <dc:creator>meteozond</dc:creator>
    <dc:date>2017-10-11T10:59:06Z</dc:date>
    <item>
      <title>Hpsa problems</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/hpsa-problems/m-p/6477574#M143005</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;My system worked perfectly for a few months, but for the past 1-2 weeks I have had a big issue. Sometimes I get messages like this in /var/log/messages:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;hpsa 0000:04:00.0: Abort request on C0:B0:T0:L0&lt;BR /&gt;hpsa 0000:04:00.0: Abort request on C0:B0:T0:L0&lt;BR /&gt;hpsa 0000:04:00.0: Abort request on C0:B0:T0:L0&lt;BR /&gt;hpsa 0000:04:00.0: Abort request on C0:B0:T0:L0&lt;BR /&gt;hpsa 0000:04:00.0: Abort request on C0:B0:T0:L0&lt;BR /&gt;hpsa 0000:04:00.0: Abort request on C0:B0:T0:L0&lt;BR /&gt;hpsa 0000:04:00.0: cp ffff880075976000 is reported invalid (probably means target device no longer present)&lt;BR /&gt;hpsa 0000:04:00.0: cp ffff880075976000 is reported invalid (probably means target device no longer present)&lt;BR /&gt;hpsa 0000:04:00.0: FAILED abort on device C0:B0:T0:L0&lt;BR /&gt;hpsa 0000:04:00.0: resetting device 0:0:0:0&lt;BR /&gt;hpsa 0000:04:00.0: device is ready.&lt;BR /&gt;hpsa 0000:04:00.0: Abort request on C0:B0:T0:L0&lt;BR /&gt;hpsa 0000:04:00.0: Abort request on C0:B0:T0:L0&lt;BR /&gt;hpsa 0000:04:00.0: Abort request on C0:B0:T0:L0&lt;BR /&gt;hpsa 0000:04:00.0: cp ffff880057314000 is reported invalid (probably means target device no longer present)&lt;BR /&gt;hpsa 0000:04:00.0: cp ffff880057314000 is reported invalid (probably means target device no longer present)&lt;BR /&gt;hpsa 0000:04:00.0: FAILED abort on device C0:B0:T0:L0&lt;BR /&gt;hpsa 0000:04:00.0: resetting device 0:0:0:0&lt;BR /&gt;hpsa 0000:04:00.0: device is ready.&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This causes a high load average (80+) when the problem appears.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I run Debian Wheezy stable with the 3.13-0.bpo.1-amd64 kernel.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any ideas?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Thu, 15 May 2014 18:19:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/hpsa-problems/m-p/6477574#M143005</guid>
      <dc:creator>skyice</dc:creator>
      <dc:date>2014-05-15T18:19:40Z</dc:date>
    </item>
    <item>
      <title>Re: Hpsa problems</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/hpsa-problems/m-p/6824889#M152438</link>
      <description>&lt;P&gt;Hello.&lt;/P&gt;&lt;P&gt;I'm having the same problem on one of our HP ProLiant DL385p G8 servers.&lt;/P&gt;&lt;P&gt;Has anyone replied to this post or found out what is causing this particular message?&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Mon, 18 Jan 2016 16:33:51 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/hpsa-problems/m-p/6824889#M152438</guid>
      <dc:creator>JosePa</dc:creator>
      <dc:date>2016-01-18T16:33:51Z</dc:date>
    </item>
    <item>
      <title>Re: Hpsa problems</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/hpsa-problems/m-p/6858085#M153512</link>
      <description>&lt;P&gt;I had exactly these symptoms on one box two months ago. The first time, it went away after power cycling and having somebody reseat all the drives, but it reoccurred after a week. The second time, after half a day of these issues, the Smart Array controller finally recognized a drive as faulty. I had that drive replaced, and everything has been fine since then.&lt;/P&gt;&lt;P&gt;Unfortunately these kernel messages do not give any hint as to which drive exactly has the problems, and no other monitoring (smartctl, iLO) shows any hint either...&lt;/P&gt;&lt;P&gt;Half an hour ago, a different server started making exactly the same noises... twice... with Linux (qemu processes here) getting "stuck" in CPU "wait" for some time (minutes). I/O rates were abysmal then. After an hour it finally recognized one of the drives as "Predictive Failure". But that didn't fail the drive completely - I/O rates stayed bad until I had the drive replaced.&lt;BR /&gt;Context: the servers are all DL380 Gen9 with the usual P440ar controller and 300 GB 10k SAS drives in RAID 5. The kernel is a self-built vanilla 3.14.67 at the moment; it was a slightly earlier 3.14.x two months ago when the issue first hit.&lt;BR /&gt;&lt;BR /&gt;BTW: does anybody know how I can use hpssacli to mark a disk in an array as failed, without having a hot spare? Without a hot spare, hpssacli remove only tells me something about needing a license key...&lt;/P&gt;</description>
      <pubDate>Wed, 11 May 2016 13:23:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/hpsa-problems/m-p/6858085#M153512</guid>
      <dc:creator>patrick_schaaf</dc:creator>
      <dc:date>2016-05-11T13:23:57Z</dc:date>
    </item>
    <item>
      <title>Re: Hpsa problems</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/hpsa-problems/m-p/6858362#M153518</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;Maybe the same problem here with an HPE DL380 Gen9, HP P440ar.&lt;/P&gt;&lt;P&gt;vSphere 6 Update 1, latest HPE ISO&lt;/P&gt;&lt;P&gt;firmware: 3.56&lt;/P&gt;&lt;P&gt;drivers:&lt;/P&gt;&lt;P&gt;The server was deployed 2 weeks ago. There was no problem when we installed ESXi and deployed the VMs.&lt;/P&gt;&lt;P&gt;But since this week we have had backup problems and users say it is slow. The VMs respond slowly, and when I view disk performance in the vSphere client I see about 40 ms of read latency. The latency is only on the SAS disks (4 x 600 GB SAS 6Gb 10k, RAID 5); there is no latency on the SSD drives (RAID 1).&lt;/P&gt;&lt;P&gt;No hardware problem is reported by iLO.&lt;/P&gt;&lt;P&gt;CrystalDiskMark performance results are attached.&lt;/P&gt;&lt;P&gt;Maybe a new HP firmware bug!&lt;/P&gt;</description>
      <pubDate>Wed, 11 May 2016 09:29:17 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/hpsa-problems/m-p/6858362#M153518</guid>
      <dc:creator>abelliot</dc:creator>
      <dc:date>2016-05-11T09:29:17Z</dc:date>
    </item>
    <item>
      <title>Re: Hpsa problems</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/hpsa-problems/m-p/6928919#M156035</link>
      <description>&lt;P&gt;The same problem with a DL380 G8 server:&lt;/P&gt;&lt;P&gt;[26485294.814356] hpsa 0000:24:00.0: Abort request on C3:B0:T0:L1&lt;BR /&gt;[26485294.814514] hpsa 0000:24:00.0: invalid command: LUN:0100004000000000 CDB:00000000600100000000000000000000&lt;BR /&gt;[26485294.814518] hpsa 0000:24:00.0: probably means device no longer present&lt;BR /&gt;[26485294.814599] hpsa 0000:24:00.0: invalid command: LUN:0100004000000000 CDB:00000000000001600000000000000000&lt;BR /&gt;[26485294.814603] hpsa 0000:24:00.0: probably means device no longer present&lt;BR /&gt;[26485294.814606] hpsa 0000:24:00.0: FAILED abort on device C3:B0:T0:L1&lt;BR /&gt;[26485294.814664] hpsa 0000:24:00.0: resetting device 3:0:0:1&lt;BR /&gt;[26485310.470969] hpsa 0000:24:00.0: device is ready.&lt;BR /&gt;[26485385.993390] hpsa 0000:24:00.0: Abort request on C3:B0:T0:L1&lt;BR /&gt;[26485385.993580] hpsa 0000:24:00.0: invalid command: LUN:0100004000000000 CDB:00000000d02a00000000000000000000&lt;BR /&gt;[26485385.993584] hpsa 0000:24:00.0: probably means device no longer present&lt;BR /&gt;[26485385.993666] hpsa 0000:24:00.0: invalid command: LUN:0100004000000000 CDB:0000000000002ad00000000000000000&lt;BR /&gt;[26485385.993669] hpsa 0000:24:00.0: probably means device no longer present&lt;BR /&gt;[26485385.993672] hpsa 0000:24:00.0: FAILED abort on device C3:B0:T0:L1&lt;BR /&gt;[26485385.993733] hpsa 0000:24:00.0: resetting device 3:0:0:1&lt;BR /&gt;[26485398.801924] hpsa 0000:24:00.0: device is ready.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;When the problem occurred, no disk operations were possible.&lt;/P&gt;&lt;P&gt;The server was totally stuck.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Is there any workaround, or any way to identify which disk is broken?&lt;/P&gt;</description>
      <pubDate>Tue, 03 Jan 2017 01:05:40 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/hpsa-problems/m-p/6928919#M156035</guid>
      <dc:creator>Rom1kz</dc:creator>
      <dc:date>2017-01-03T01:05:40Z</dc:date>
    </item>
    <item>
      <title>Re: Hpsa problems</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/hpsa-problems/m-p/6933711#M156198</link>
      <description>&lt;P&gt;I've resolved the problem.&lt;/P&gt;&lt;P&gt;Using smartctl, I found the disk which had a lot of errors.&lt;/P&gt;&lt;P&gt;Then I just replaced it and the problem was gone.&lt;/P&gt;</description>
      <pubDate>Fri, 20 Jan 2017 11:36:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/hpsa-problems/m-p/6933711#M156198</guid>
      <dc:creator>Rom1kz</dc:creator>
      <dc:date>2017-01-20T11:36:58Z</dc:date>
    </item>
    <item>
      <title>Re: Hpsa problems</title>
      <link>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/hpsa-problems/m-p/6979700#M159538</link>
      <description>&lt;P&gt;Sorry for necroposting, but we've got this problem too, with a retired P812. Digging into the hpsa source showed that the strange &lt;SPAN class="st"&gt;C1:B1:T0:L3&lt;/SPAN&gt; notation is simply the SCSI device notation without the letters - 1:1:0:3&lt;/P&gt;&lt;P&gt;&amp;gt; lsscsi&lt;/P&gt;&lt;P&gt;[0:0:0:0]&amp;nbsp;&amp;nbsp;&amp;nbsp; storage HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; P410i&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; -&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&lt;BR /&gt;[0:1:0:0]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sda&lt;BR /&gt;[0:1:0:1]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdb&lt;BR /&gt;[0:1:0:2]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdc&lt;BR /&gt;[0:1:0:3]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdd&lt;BR /&gt;[0:1:0:4]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sde&lt;BR /&gt;[0:1:0:5]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdf&lt;BR /&gt;[1:0:0:0]&amp;nbsp;&amp;nbsp;&amp;nbsp; storage HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; P812&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; -&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;nbsp;&lt;BR /&gt;[1:1:0:0]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdg&lt;BR /&gt;[1:1:0:1]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdh&lt;BR /&gt;[1:1:0:2]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdi&lt;BR /&gt;[1:1:0:3]&amp;nbsp;&amp;nbsp;&amp;nbsp; disk&amp;nbsp;&amp;nbsp;&amp;nbsp; HP&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; LOGICAL VOLUME&amp;nbsp;&amp;nbsp; 6.64&amp;nbsp; /dev/sdj&lt;/P&gt;&lt;P&gt;or&lt;/P&gt;&lt;P&gt;ls /sys/class/scsi_device/1:1:0:3/device/block/&lt;/P&gt;&lt;P&gt;sdj&lt;/P&gt;&lt;P&gt;The physical device behind the RAID can be easily identified by the "Disk Name:" value in the hpacucli or hpssacli output.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 11 Oct 2017 10:59:06 GMT</pubDate>
      <guid>https://community.hpe.com/t5/proliant-servers-ml-dl-sl/hpsa-problems/m-p/6979700#M159538</guid>
      <dc:creator>meteozond</dc:creator>
      <dc:date>2017-10-11T10:59:06Z</dc:date>
    </item>
  </channel>
</rss>

