<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Disk timeout error in HPE EVA Storage</title>
    <link>https://community.hpe.com/t5/hpe-eva-storage/disk-timeout-error/m-p/4789076#M49200</link>
    <description>Can you please advise me on this error?</description>
    <pubDate>Wed, 18 May 2011 11:58:29 GMT</pubDate>
    <dc:creator>caj</dc:creator>
    <dc:date>2011-05-18T11:58:29Z</dc:date>
    <item>
      <title>Disk timeout error</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/disk-timeout-error/m-p/4789075#M49199</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;I have 5 BL460c G7 servers with an EVA6400, a VC fabric, and two 8/40 Brocade switches.&lt;BR /&gt;&lt;BR /&gt;Each VC has two 8 Gb links to each switch, and all four ports on each controller are connected as per the standard.&lt;BR /&gt;&lt;BR /&gt;OS: RHEL 5u6&lt;BR /&gt;&lt;BR /&gt;I hit the error below whenever there is heavy I/O; sometimes (one try out of 10) even a simple multipath -ll or pvs hangs for a while and then throws the error below on the console and in the messages file. Does anyone have any idea what has gone wrong?&lt;BR /&gt;&lt;BR /&gt;Jan 19 03:15:54 testkernel: INFO: task mpath_prio_alua:23084 blocked for more than 120 seconds.&lt;BR /&gt;Jan 19 03:15:54 testkernel: "echo 0 &amp;gt; /proc/sys/kernel/hung_task_timeout_secs" disables this message.&lt;BR /&gt;Jan 19 03:15:54 testkernel: mpath_prio_al D ffffffff80153806     0 23084   8051                     (NOTLB)&lt;BR /&gt;Jan 19 03:15:54 testkernel:  ffff810f6b2f9a28 0000000000000086 ffff81080a6a2080 ffff81080bdbe4f8&lt;BR /&gt;Jan 19 03:15:54 testkernel:  ffff81080bdbe000 0000000000000001 ffff810828250820 ffff81080b82e7a0&lt;BR /&gt;Jan 19 03:15:54 testkernel:  00005afd5ca08298 0000000000002cd3 ffff810828250a08 0000000f0a0b52c0&lt;BR /&gt;Jan 19 03:15:54 testkernel: Call Trace:&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF8006315F&gt;] wait_for_completion+0x79/0xa2&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF8008E40A&gt;] default_wake_function+0x0/0xe&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF80148093&gt;] blk_execute_rq_nowait+0x7e/0x92&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF8014813F&gt;] blk_execute_rq+0x98/0xc0&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF8014BA95&gt;] sg_io+0x258/0x356&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF8014C01A&gt;] scsi_cmd_ioctl+0x1d2/0x3b5&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF8012D28B&gt;] avc_has_perm+0x46/0x58&lt;BR /&gt;Jan 19 03:15:54 
testkernel:  [&lt;FFFFFFFF8000CF88&gt;] do_lookup+0x65/0x1e6&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF880A90BD&gt;] :sd_mod:sd_ioctl+0x93/0xc2&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF80149B33&gt;] blkdev_driver_ioctl+0x5d/0x72&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF8014A184&gt;] blkdev_ioctl+0x63c/0x697&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF8012D28B&gt;] avc_has_perm+0x46/0x58&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF8012DB5F&gt;] inode_has_perm+0x56/0x63&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF800E7C1E&gt;] blkdev_open+0x0/0x4f&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF800E7C41&gt;] blkdev_open+0x23/0x4f&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF8001EABC&gt;] __dentry_open+0x101/0x1dc&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF800E6F9E&gt;] block_ioctl+0x1b/0x1f&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF8004226A&gt;] do_ioctl+0x21/0x6b&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF8003026E&gt;] vfs_ioctl+0x457/0x4b9&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF8004C73B&gt;] sys_ioctl+0x59/0x78&lt;BR /&gt;Jan 19 03:15:54 testkernel:  [&lt;FFFFFFFF8005D28D&gt;] tracesys+0xd5/0xe0&lt;BR /&gt;</description>
      <pubDate>Wed, 18 May 2011 03:40:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/disk-timeout-error/m-p/4789075#M49199</guid>
      <dc:creator>caj</dc:creator>
      <dc:date>2011-05-18T03:40:00Z</dc:date>
    </item>
    <item>
      <title>Re: Disk timeout error</title>
      <link>https://community.hpe.com/t5/hpe-eva-storage/disk-timeout-error/m-p/4789076#M49200</link>
      <description>Can you please advise me on this error?</description>
      <pubDate>Wed, 18 May 2011 11:58:29 GMT</pubDate>
      <guid>https://community.hpe.com/t5/hpe-eva-storage/disk-timeout-error/m-p/4789076#M49200</guid>
      <dc:creator>caj</dc:creator>
      <dc:date>2011-05-18T11:58:29Z</dc:date>
    </item>
  </channel>
</rss>

