Operating System - HP-UX
01-10-2011 01:52 PM
Replace failed SAS drive
Greetings,
I have finally run into a situation where a SAS drive failed, and the hot spare drive took over, as expected. The failed drive was replaced, and now I find myself in a situation where the replacement drive is marked as the new hot spare.
Is there a way to migrate the data back to the old drive, so that I can keep my system documentation the same? Also, the sasmgr phy=all listing still shows a reference to the missing/failed drive. I assume this can be cleaned up as well.
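To compare the before/after listings, one option is to parse the saved `sasmgr get_info ... -q raid` captures and pull out the global spare addresses programmatically. This is a hypothetical helper, not a sasmgr feature; the function name `spare_addresses` and the text-parsing approach are assumptions, and the sample strings are abbreviated from the listings in this post.

```python
# Hypothetical helper (not part of sasmgr): parse the text of a saved
# "sasmgr get_info -D /dev/sasd1 -q raid" capture and report which
# SAS addresses appear in the GLOBAL SPARE DRIVES section, so that
# before/after captures can be diffed against system documentation.
def spare_addresses(listing: str) -> list[str]:
    spares = []
    in_spares = False
    for line in listing.splitlines():
        if "GLOBAL SPARE DRIVES" in line:
            in_spares = True
            continue
        # Drive rows in the spare section start with a SAS address
        if in_spares and line.strip().startswith("0x"):
            spares.append(line.split()[0])
    return spares

# Abbreviated samples from the captures in this post
before = """\
---------- GLOBAL SPARE DRIVES ----------
SAS Address Enc Bay Size(MB) Pool State
0x500000e112735fc2 1 2 140014 0 ACTIVE
"""
after = """\
---------- GLOBAL SPARE DRIVES ----------
SAS Address Enc Bay Size(MB) Pool State
0x5000c5002c73c959 1 5 140014 0 ACTIVE
"""
print(spare_addresses(before))  # ['0x500000e112735fc2']
print(spare_addresses(after))   # ['0x5000c5002c73c959']
```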
Here is what things looked like before the failure:
---------- PHYSICAL DRIVES ----------
LUN dsf SAS Address Enclosure Bay Size(MB)
/dev/rdsk/c0t1d0 0x500000e112616f52 1 3 140014
/dev/rdsk/c0t2d0 0x500000e1129bf332 1 4 140014
---------- LOGICAL DRIVE 6 ----------
Raid Level : RAID 1
Volume sas address : 0x9f962b139063d89
Device Special File : /dev/rdsk/c0t6d0
Raid State : OPTIMAL
Raid Status Flag : ENABLED
Raid Size : 139136
Rebuild Rate : 20.00 %
Rebuild Progress : 100.00 %
Participating Physical Drive(s) :
SAS Address Enc Bay Size(MB) Type State
0x500000e1128f0402 1 5 140014 SECONDARY ONLINE
0x500000e1128c1e62 1 6 140014 PRIMARY ONLINE
---------- LOGICAL DRIVE 7 ----------
Raid Level : RAID 1
Volume sas address : 0x9ede979990d7441
Device Special File : /dev/rdsk/c0t5d0
Raid State : OPTIMAL
Raid Status Flag : ENABLED
Raid Size : 139236
Rebuild Rate : 0.00 %
Rebuild Progress : 100.00 %
Participating Physical Drive(s) :
SAS Address Enc Bay Size(MB) Type State
0x500000e1126e08e2 1 8 140014 PRIMARY ONLINE
0x500000e1126103d2 1 7 140014 SECONDARY ONLINE
---------- GLOBAL SPARE DRIVES ----------
SAS Address Enc Bay Size(MB) Pool State
0x500000e112735fc2 1 2 140014 0 ACTIVE
Here is what they look like now:
# sasmgr get_info -D /dev/sasd1 -q raid
Mon Jan 10 16:49:56 2011
---------- PHYSICAL DRIVES ----------
LUN dsf SAS Address Enclosure Bay Size(MB)
/dev/rdsk/c0t1d0 0x500000e112616f52 1 3 140014
/dev/rdsk/c0t2d0 0x500000e1129bf332 1 4 140014
---------- LOGICAL DRIVE 6 ----------
Raid Level : RAID 1
Volume sas address : 0x9f962b139063d89
Device Special File : /dev/rdsk/c0t6d0
Raid State : OPTIMAL
Raid Status Flag : ENABLED
Raid Size : 139136
Rebuild Rate : 20.00 %
Rebuild Progress : 100.00 %
Participating Physical Drive(s) :
SAS Address Enc Bay Size(MB) Type State
0x500000e112735fc2 1 2 140014 SECONDARY ONLINE
0x500000e1128c1e62 1 6 140014 PRIMARY ONLINE
---------- LOGICAL DRIVE 7 ----------
Raid Level : RAID 1
Volume sas address : 0x9ede979990d7441
Device Special File : /dev/rdsk/c0t5d0
Raid State : OPTIMAL
Raid Status Flag : ENABLED
Raid Size : 139236
Rebuild Rate : 0.00 %
Rebuild Progress : 100.00 %
Participating Physical Drive(s) :
SAS Address Enc Bay Size(MB) Type State
0x500000e1126e08e2 1 8 140014 PRIMARY ONLINE
0x500000e1126103d2 1 7 140014 SECONDARY ONLINE
---------- GLOBAL SPARE DRIVES ----------
SAS Address Enc Bay Size(MB) Pool State
0x5000c5002c73c959 1 5 140014 0 ACTIVE
Here is the physical dev listing:
# sasmgr get_info -D /dev/sasd1 -q phy=all
Mon Jan 10 16:50:52 2011
Info for PHY ID : 0
PHY Health : UP
Port SAS Address : 0x500605b0018abb10
Attached SAS Address : 0x500000e1129bf332
Current Link Rate : 3 Gbps
Max Link Rate : 3 Gbps
Info for PHY ID : 1
PHY Health : UP
Port SAS Address : 0x500605b0018abb11
Attached SAS Address : 0x500000e112616f52
Current Link Rate : 3 Gbps
Max Link Rate : 3 Gbps
Info for PHY ID : 2
PHY Health : UP
Port SAS Address : 0x500605b0018abb12
Attached SAS Address : 0x500000e112735fc2
Current Link Rate : 3 Gbps
Max Link Rate : 3 Gbps
Info for PHY ID : 3
PHY Health : DOWN
Port SAS Address : 0x0
Attached SAS Address : 0x0
Current Link Rate : 1.5 Gbps
Max Link Rate : 1.5 Gbps
Info for PHY ID : 4
PHY Health : UP
Port SAS Address : 0x500605b0018abb14
Attached SAS Address : 0x500000e1126e08e2
Current Link Rate : 3 Gbps
Max Link Rate : 3 Gbps
Info for PHY ID : 5
PHY Health : UP
Port SAS Address : 0x500605b0018abb15
Attached SAS Address : 0x500000e1126103d2
Current Link Rate : 3 Gbps
Max Link Rate : 3 Gbps
Info for PHY ID : 6
PHY Health : UP
Port SAS Address : 0x500605b0018abb16
Attached SAS Address : 0x500000e1128c1e62
Current Link Rate : 3 Gbps
Max Link Rate : 3 Gbps
Info for PHY ID : 7
PHY Health : UP
Port SAS Address : 0x500605b0018abb17
Attached SAS Address : 0x5000c5002c73c959
Current Link Rate : 3 Gbps
Max Link Rate : 3 Gbps
As you can see, there still appears to be a reference to the failed drive (PHY ID 3 is DOWN). How do I clean this up?
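The stale slot can be spotted mechanically by scanning a saved `phy=all` capture for phys whose health is DOWN. This is a sketch of an assumed helper (`down_phys` is not a sasmgr command), working on an abbreviated sample of the listing above:

```python
# Sketch: scan a saved capture of
# "sasmgr get_info -D /dev/sasd1 -q phy=all" and list the PHY IDs
# whose health is reported DOWN. In the listing above, the stale
# reference to the failed drive shows up this way at PHY ID 3.
def down_phys(listing: str) -> list[str]:
    down, phy = [], None
    for line in listing.splitlines():
        if "PHY ID" in line:
            # e.g. "Info for PHY ID : 3" -> "3"
            phy = line.split(":")[1].strip()
        elif "PHY Health" in line and "DOWN" in line:
            down.append(phy)
    return down

# Abbreviated sample from the capture above
sample = """\
Info for PHY ID : 2
PHY Health : UP
Info for PHY ID : 3
PHY Health : DOWN
Info for PHY ID : 4
PHY Health : UP
"""
print(down_phys(sample))  # ['3']
```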
Thanks in advance,
-tjh
I learn something new everyday. (usually because I break something new everyday)