<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Disk Array Issue for Storageworks 2405 in Disk Enclosures</title>
    <link>https://community.hpe.com/t5/disk-enclosures/disk-array-issue-for-storageworks-2405/m-p/7190547#M49229</link>
    <description>&lt;P&gt;OS = HP-UX 11.11&lt;/P&gt;&lt;P&gt;System = RP7420&lt;/P&gt;&lt;P&gt;Disk Array = Storageworks 2405 with SCSI C1010 Ultra160 Wide drives&lt;/P&gt;&lt;P&gt;Having an issue where one drive indicates it is bad, and upon replacement we get zero disk space on the drive.&lt;/P&gt;&lt;P&gt;Now starting to get other disk drive errors.&lt;/P&gt;&lt;P&gt;Wondering if this might be a midplane issue on the disk array.&lt;/P&gt;&lt;P&gt;The 2 bad disks are not part of a volume group on the system. One of them was a mirror disk in a volume group, but we lvreduced the mirrors and vgreduced the disk out of the volume group. Before that, vgdisplay showed some stale extents on the mirrored lvols, but that no longer shows up since the disk was reduced out.&lt;/P&gt;&lt;P&gt;The diskinfo output for the 2 bad disks, showing 0 Kbytes, is below. They show up fine in ioscan as CLAIMED with no errors. Very puzzling. They also show up as FAILED in cstm; that output is below.&lt;/P&gt;&lt;P&gt;teossh1:root:/root&amp;gt;diskinfo /dev/rdsk/c10t9d0&lt;/P&gt;&lt;P&gt;SCSI describe of /dev/rdsk/c10t9d0:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; vendor: HP 73.4G&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; product id: ST373453FC&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; type: direct access&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; size: 0 Kbytes&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; bytes per sector: 0&lt;/P&gt;&lt;P&gt;teossh1:root:/root&amp;gt;diskinfo /dev/rdsk/c10t1d0&lt;/P&gt;&lt;P&gt;SCSI describe of 
/dev/rdsk/c10t1d0:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; vendor: HP 73.4G&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; product id: ST373453FC&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; type: direct access&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; size: 0 Kbytes&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; bytes per sector: 0&lt;/P&gt;&lt;P&gt;The cstm output for those 2 disks:&lt;/P&gt;&lt;P&gt;78&amp;nbsp; 1/0/4/1/0.8.0.255.0. SCSI Disk (HP73.4GST37345 Information FAILED&lt;/P&gt;&lt;P&gt;86&amp;nbsp; 1/0/4/1/0.8.0.255.0. SCSI Disk (HP73.4GST37345 Information FAILED&lt;/P&gt;&lt;P&gt;No longer seeing ESM alerts in syslog; for some reason those went away.&lt;/P&gt;</description>
    <pubDate>Thu, 22 Jun 2023 18:45:10 GMT</pubDate>
    <dc:creator>Polidori</dc:creator>
    <dc:date>2023-06-22T18:45:10Z</dc:date>
    <item>
      <title>Disk Array Issue for Storageworks 2405</title>
      <link>https://community.hpe.com/t5/disk-enclosures/disk-array-issue-for-storageworks-2405/m-p/7190547#M49229</link>
      <description>&lt;P&gt;OS = HP-UX 11.11&lt;/P&gt;&lt;P&gt;System = RP7420&lt;/P&gt;&lt;P&gt;Disk Array = Storageworks 2405 with SCSI C1010 Ultra160 Wide drives&lt;/P&gt;&lt;P&gt;Having an issue where one drive indicates it is bad, and upon replacement we get zero disk space on the drive.&lt;/P&gt;&lt;P&gt;Now starting to get other disk drive errors.&lt;/P&gt;&lt;P&gt;Wondering if this might be a midplane issue on the disk array.&lt;/P&gt;&lt;P&gt;The 2 bad disks are not part of a volume group on the system. One of them was a mirror disk in a volume group, but we lvreduced the mirrors and vgreduced the disk out of the volume group. Before that, vgdisplay showed some stale extents on the mirrored lvols, but that no longer shows up since the disk was reduced out.&lt;/P&gt;&lt;P&gt;The diskinfo output for the 2 bad disks, showing 0 Kbytes, is below. They show up fine in ioscan as CLAIMED with no errors. Very puzzling. They also show up as FAILED in cstm; that output is below.&lt;/P&gt;&lt;P&gt;teossh1:root:/root&amp;gt;diskinfo /dev/rdsk/c10t9d0&lt;/P&gt;&lt;P&gt;SCSI describe of /dev/rdsk/c10t9d0:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; vendor: HP 73.4G&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; product id: ST373453FC&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; type: direct access&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; size: 0 Kbytes&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; bytes per sector: 0&lt;/P&gt;&lt;P&gt;teossh1:root:/root&amp;gt;diskinfo /dev/rdsk/c10t1d0&lt;/P&gt;&lt;P&gt;SCSI describe of 
/dev/rdsk/c10t1d0:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; vendor: HP 73.4G&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; product id: ST373453FC&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; type: direct access&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; size: 0 Kbytes&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; bytes per sector: 0&lt;/P&gt;&lt;P&gt;The cstm output for those 2 disks:&lt;/P&gt;&lt;P&gt;78&amp;nbsp; 1/0/4/1/0.8.0.255.0. SCSI Disk (HP73.4GST37345 Information FAILED&lt;/P&gt;&lt;P&gt;86&amp;nbsp; 1/0/4/1/0.8.0.255.0. SCSI Disk (HP73.4GST37345 Information FAILED&lt;/P&gt;&lt;P&gt;No longer seeing ESM alerts in syslog; for some reason those went away.&lt;/P&gt;</description>
      <pubDate>Thu, 22 Jun 2023 18:45:10 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/disk-array-issue-for-storageworks-2405/m-p/7190547#M49229</guid>
      <dc:creator>Polidori</dc:creator>
      <dc:date>2023-06-22T18:45:10Z</dc:date>
    </item>
    <item>
      <title>Query: Disk Array Issue for Storageworks 2405</title>
      <link>https://community.hpe.com/t5/disk-enclosures/disk-array-issue-for-storageworks-2405/m-p/7190553#M49231</link>
      <description>&lt;P style="margin: 0;"&gt;&lt;STRONG&gt;System-recommended content:&lt;/STRONG&gt;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;1. &lt;A href="https://hpe.to/6608OCrHr" target="_blank" rel="noopener"&gt;HP StorageWorks XP12000 Disk Arrays - µcode Upgrades Using The "non-stop SCSI" Method and OS Issued Reserves, RAID500 Alert # 29&lt;/A&gt;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;Please click on "Thumbs Up/Kudo" icon to give a "Kudo".&lt;/P&gt;
&lt;P style="margin: 0;"&gt;&amp;nbsp;&lt;/P&gt;
&lt;P style="margin: 0;"&gt;Thank you for being a HPE valuable community member.&lt;/P&gt;</description>
      <pubDate>Thu, 22 Jun 2023 19:46:00 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/disk-array-issue-for-storageworks-2405/m-p/7190553#M49231</guid>
      <dc:creator>support_s</dc:creator>
      <dc:date>2023-06-22T19:46:00Z</dc:date>
    </item>
    <item>
      <title>Re: Disk Array Issue for Storageworks 2405</title>
      <link>https://community.hpe.com/t5/disk-enclosures/disk-array-issue-for-storageworks-2405/m-p/7190699#M49234</link>
      <description>&lt;P&gt;&lt;a href="https://community.hpe.com/t5/user/viewprofilepage/user-id/2163706"&gt;@Polidori&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Refer to the below link and let me know how it goes:&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.hpe.com/t5/operating-system-hp-ux/disk-array-issue-for-storageworks-2405/m-p/7190528/highlight/false#M948479" target="_blank"&gt;https://community.hpe.com/t5/operating-system-hp-ux/disk-array-issue-for-storageworks-2405/m-p/7190528/highlight/false#M948479&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 26 Jun 2023 08:55:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/disk-enclosures/disk-array-issue-for-storageworks-2405/m-p/7190699#M49234</guid>
      <dc:creator>Vinky_99</dc:creator>
      <dc:date>2023-06-26T08:55:07Z</dc:date>
    </item>
  </channel>
</rss>

