HP 3PAR Expansion with all magazine degraded and disks with "vacated, servicing" state
10-19-2016 08:08 AM
Hello,
I am facing an issue with an expansion on my 3PAR system.
After updating the OS to 3.2.1 (MU2) P13, the expansion is shown as "degraded".
In fact, all the magazines are degraded and all the newly added disks are "failed" instead of "normal".
Thank you in advance for your help.
10-19-2016 09:46 AM
Re: HP 3PAR Expansion with all magazine degraded and disks with "vacated, servicing" state
Hope this helps!
Regards
Torsten.
10-19-2016 10:27 AM
Re: HP 3PAR Expansion with all magazine degraded and disks with "vacated, servicing" state
Log in to the 3PAR via SSH and run admithw.
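As a rough sketch of that session (3paradm is the usual 3PAR administrative account, so treat the login details as an assumption; the array name is taken from the CLI prompts below):

$ ssh 3paradm@stkbdk3par01
stkbdk3par01 cli% admithw

admithw re-runs the hardware checks and admits any new disks it can.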
10-19-2016 03:15 PM
Re: HP 3PAR Expansion with all magazine degraded and disks with "vacated, servicing" state
Hello,
Thanks for your answers. Here are the results:
stkbdk3par01 cli% admithw
Checking for drive table upgrade packages
Checking nodes...
Checking volumes...
Checking system LDs...
Checking ports...
Checking state of disks...
Volume creation may fail due to the following issues:
Component ----------------Description---------------- Qty
PD        Too few PDs of type/speed/size behind Nodes   1
Component -Identifier- -----------------------------Description------------------------------
PD        Nodes:0&1    Only 6 NL/7K/2000GB PDs are attached to these nodes; the minimum is 12
Enter c to continue despite this issue or q to quit and fix the issue manually: c
Checking cabling...
Checking cage firmware...
Checking if this is an upgrade that added new types of drives...
Checking for disks to admit...
0 disks admitted
Checking admin volume...
Admin volume exists.
Checking if logging LDs need to be created...
System has less cages than nodes; using -ha mag for logging LDs.
No new logging LDs need to be created
Checking if preserved data LDs need to be created...
System has less cages than nodes; using -ha mag for preserved data LDs.
No new preserved data LDs need to be created
Checking if system scheduled tasks need to be created...
Checking if the rights assigned to extended roles need to be updated...
No need to update extended roles rights.
There are currently servicemag operations in progress. This will prevent accurate spare allocation, so spare chunklets cannot be added or redistributed at this time.
Enter c to continue despite this issue or q to quit and fix the issue manually: c
System Reporter data volume exists.
Not enough CPGs to create default AO CFG.
Checking system health...
Checking alert
Checking cabling
Checking cage
Checking cert
Checking dar
Checking date
Checking fs
Checking host
Checking ld
Checking license
Checking network
Checking node
Checking pd
Checking port
Checking rc
Checking snmp
Checking task
Checking vlun
Checking vv
Component ----------------Description---------------- Qty
Alert     New alerts                                    49
Network   Too few working admin network connections      1
PD        Too few PDs of type/speed/size behind Nodes    1
PD        Magazines with failed servicemag operations   17
PD        Magazines that are being serviced              5
vlun      Hosts not connected to a port                  3
admithw has completed.
stkbdk3par01 cli% showversion
Release version 3.2.1 (MU2)
Patches:  P07,P11,P13

Component Name       Version
CLI Server           3.2.1 (P11)
CLI Client           3.2.1
System Manager       3.2.1 (P13)
Kernel               3.2.1 (MU2)
TPD Kernel Code      3.2.1 (P11)
TPD Kernel Patch     3.2.1 (P13)
stkbdk3par01 cli% showsys
                                                       ---------------(MB)----------------
   ID ----Name---- ----Model---- -Serial- Nodes Master TotalCap AllocCap FreeCap FailedCap
86957 stkbdk3par01 HP_3PAR 7200c  1686957     2      0 34035712  6294528 7664640  20076544
stkbdk3par01 cli% showpd
                           ----Size(MB)---- ----Ports----
Id CagePos Type RPM State     Total    Free      A      B Capacity(GB)
 0 0:0:0   SSD  150 normal   872448  478208 1:0:1* 0:0:1           920
 1 0:1:0   SSD  150 normal   872448  477184 1:0:1  0:0:1*          920
 2 0:2:0   SSD  150 normal   872448  478208 1:0:1* 0:0:1           920
 3 0:3:0   SSD  150 normal   872448  477184 1:0:1  0:0:1*          920
 4 0:4:0   SSD  150 normal   872448  480256 1:0:1* 0:0:1           920
 5 0:5:0   SSD  150 normal   872448  479232 1:0:1  0:0:1*          920
 6 0:6:0   SSD  150 normal   872448  478208 1:0:1* 0:0:1           920
 7 0:7:0   SSD  150 normal   872448  477184 1:0:1  0:0:1*          920
 8 0:12:0  SSD  150 normal   872448  479232 1:0:1* 0:0:1           920
 9 0:13:0  SSD  150 normal   872448  480256 1:0:1  0:0:1*          920
10 0:14:0  SSD  150 normal   872448  481280 1:0:1* 0:0:1           920
11 0:15:0  SSD  150 normal   872448  480256 1:0:1  0:0:1*          920
12 0:16:0  SSD  150 normal   872448  479232 1:0:1* 0:0:1           920
13 0:17:0  SSD  150 normal   872448  480256 1:0:1  0:0:1*          920
14 0:18:0  SSD  150 normal   872448  479232 1:0:1* 0:0:1           920
15 0:19:0  SSD  150 normal   872448  479232 1:0:1  0:0:1*          920
16 1:6:0   FC    10 failed   838656       0 1:0:2* 0:0:2           900
17 1:7:0   FC    10 failed   838656       0 1:0:2  0:0:2*          900
18 1:8:0   FC    10 failed   838656       0 1:0:2* 0:0:2           900
19 1:9:0   FC    10 failed   838656       0 1:0:2  0:0:2*          900
20 1:10:0  FC    10 failed   838656       0 1:0:2* 0:0:2           900
21 1:11:0  FC    10 failed   838656       0 1:0:2  0:0:2*          900
22 1:12:0  FC    10 failed   838656       0 1:0:2* 0:0:2           900
23 1:13:0  FC    10 failed   838656       0 1:0:2  0:0:2*          900
24 1:14:0  FC    15 failed   284672       0 1:0:2* 0:0:2           300
25 1:15:0  FC    15 failed   284672       0 1:0:2  0:0:2*          300
26 1:16:0  FC    15 failed   284672       0 1:0:2* 0:0:2           300
27 1:17:0  FC    15 failed   284672       0 1:0:2  0:0:2*          300
28 1:18:0  FC    15 failed   284672       0 1:0:2* 0:0:2           300
29 1:19:0  FC    15 failed   284672       0 1:0:2  0:0:2*          300
30 1:20:0  FC    15 failed   284672       0 1:0:2* 0:0:2           300
31 1:21:0  FC    15 failed   284672       0 1:0:2  0:0:2*          300
32 1:0:0   NL     7 failed  1848320       0 1:0:2* 0:0:2          2000
33 1:1:0   NL     7 failed  1848320       0 1:0:2  0:0:2*         2000
34 1:2:0   NL     7 failed  1848320       0 1:0:2* 0:0:2          2000
35 1:3:0   NL     7 failed  1848320       0 1:0:2  0:0:2*         2000
36 1:4:0   NL     7 failed  1848320       0 1:0:2* 0:0:2          2000
37 1:5:0   NL     7 failed  1848320       0 1:0:2  0:0:2*         2000
----------------------------------------------------------------------
38 total                   34035712 7664640
The 22 failed disks are all new and are located in cage 1, which is a new enclosure.
The previous technician who cabled the controller and the enclosure made a mistake with loop A.
So I corrected the cabling, and in my opinion I now need to "reset" the enclosure, because I think the controller still has the old configuration (with the loop mistakes) in memory.
PS: After correcting the cabling, I restarted the enclosure... but nothing changed!
Thank you
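As a hedged aside, the standard way to check how the array now sees the cage and its loops after recabling is showcage; both forms below are stock 3PAR CLI commands, though the exact columns vary by release:

stkbdk3par01 cli% showcage
stkbdk3par01 cli% showcage -d cage1

The summary view lists each cage with its LoopA/LoopB connections; the -d form adds per-cage detail such as drive magazine states and firmware levels.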
10-19-2016 03:22 PM
Re: HP 3PAR Expansion with all magazine degraded and disks with "vacated, servicing" state
More results from servicemag:
stkbdk3par01 cli% servicemag status -d
Cage 1, magazine 0:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:06 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 16 are not normal or currently being serviced.
servicemag start -wait -pdid 32 -- Failed

Cage 1, magazine 1:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:06 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 16 are not normal or currently being serviced.
servicemag start -wait -pdid 33 -- Failed

Cage 1, magazine 2:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:06 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 16 are not normal or currently being serviced.
servicemag start -wait -pdid 34 -- Failed

Cage 1, magazine 3:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:06 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 16 are not normal or currently being serviced.
servicemag start -wait -pdid 35 -- Failed

Cage 1, magazine 4:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:06 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 16 are not normal or currently being serviced.
servicemag start -wait -pdid 36 -- Failed

Cage 1, magazine 5:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:06 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 16 are not normal or currently being serviced.
servicemag start -wait -pdid 37 -- Failed

Cage 1, magazine 6:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:06 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 16 are not normal or currently being serviced.
servicemag start -wait -pdid 16 -- Failed

Cage 1, magazine 7:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:06 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 16 are not normal or currently being serviced.
servicemag start -wait -pdid 17 -- Failed

Cage 1, magazine 8:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:06 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 15 are not normal or currently being serviced.
servicemag start -wait -pdid 18 -- Failed

Cage 1, magazine 9:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:05 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 15 are not normal or currently being serviced.
servicemag start -wait -pdid 19 -- Failed

Cage 1, magazine 10:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:05 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 15 are not normal or currently being serviced.
servicemag start -wait -pdid 20 -- Failed

Cage 1, magazine 11:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:05 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 15 are not normal or currently being serviced.
servicemag start -wait -pdid 21 -- Failed

Cage 1, magazine 12:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:05 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 15 are not normal or currently being serviced.
servicemag start -wait -pdid 22 -- Failed

Cage 1, magazine 13:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:05 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 13 are not normal or currently being serviced.
servicemag start -wait -pdid 23 -- Failed

Cage 1, magazine 14:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:05 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 14 are not normal or currently being serviced.
servicemag start -wait -pdid 24 -- Failed

Cage 1, magazine 15:
The magazine was successfully brought offline by a servicemag start command.
The command completed at Wed Sep 14 08:03:15 2016.
The output of the servicemag start was:
servicemag start -wait -pdid 25
... servicing disks in mag: 1 15
... normal disks:
... not normal disks: WWN [5000CCA05B3F09D7] Id [25]
... relocating chunklets to spare space...
... bypassed mag 1 15
Failed -- failed to turn drive's LED amber
servicemag start -wait -pdid 25 -- Succeeded

Cage 1, magazine 16:
The magazine was successfully brought offline by a servicemag start command.
The command completed at Wed Sep 14 08:03:14 2016.
The output of the servicemag start was:
servicemag start -wait -pdid 26
... servicing disks in mag: 1 16
... normal disks:
... not normal disks: WWN [5000CCA05B3F0913] Id [26]
... relocating chunklets to spare space...
... bypassed mag 1 16
Failed -- failed to turn drive's LED amber
servicemag start -wait -pdid 26 -- Succeeded

Cage 1, magazine 17:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:06 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 13 are not normal or currently being serviced.
servicemag start -wait -pdid 27 -- Failed

Cage 1, magazine 18:
The magazine was successfully brought offline by a servicemag start command.
The command completed at Wed Sep 14 08:03:14 2016.
The output of the servicemag start was:
servicemag start -wait -pdid 28
... servicing disks in mag: 1 18
... normal disks:
... not normal disks: WWN [5000CCA05B3F11CB] Id [28]
... relocating chunklets to spare space...
... bypassed mag 1 18
Failed -- failed to turn drive's LED amber
servicemag start -wait -pdid 28 -- Succeeded

Cage 1, magazine 19:
The magazine was successfully brought offline by a servicemag start command.
The command completed at Wed Sep 14 08:03:14 2016.
The output of the servicemag start was:
servicemag start -wait -pdid 29
... servicing disks in mag: 1 19
... normal disks:
... not normal disks: WWN [5000CCA05B3F243B] Id [29]
... relocating chunklets to spare space...
... bypassed mag 1 19
Failed -- failed to turn drive's LED amber
servicemag start -wait -pdid 29 -- Succeeded

Cage 1, magazine 20:
The magazine was successfully brought offline by a servicemag start command.
The command completed at Wed Sep 14 08:03:14 2016.
The output of the servicemag start was:
servicemag start -wait -pdid 30
... servicing disks in mag: 1 20
... normal disks:
... not normal disks: WWN [5000CCA05B3DE1FB] Id [30]
... relocating chunklets to spare space...
... bypassed mag 1 20
Failed -- failed to turn drive's LED amber
servicemag start -wait -pdid 30 -- Succeeded

Cage 1, magazine 21:
A servicemag start command failed on this magazine.
The command completed at Wed Sep 14 08:03:06 2016.
The output of the servicemag start was:
Failed -- Servicemag failed as spinning down more disks could cause loss of TOC quorum. The system has 32 TOC disks and 16 are not normal or currently being serviced.
servicemag start -wait -pdid 31 -- Failed
10-19-2016 11:21 PM
Re: HP 3PAR Expansion with all magazine degraded and disks with "vacated, servicing" state
Please post the output of showpd -s and showpd -i.
10-20-2016 07:52 AM
Re: HP 3PAR Expansion with all magazine degraded and disks with "vacated, servicing" state
stkbdk3par01 cli% showpd -s
Id CagePos Type -State- -Detailed_State-- --SedState--
 0 0:0:0   SSD  normal  normal            not_capable
 1 0:1:0   SSD  normal  normal            not_capable
 2 0:2:0   SSD  normal  normal            not_capable
 3 0:3:0   SSD  normal  normal            not_capable
 4 0:4:0   SSD  normal  normal            not_capable
 5 0:5:0   SSD  normal  normal            not_capable
 6 0:6:0   SSD  normal  normal            not_capable
 7 0:7:0   SSD  normal  normal            not_capable
 8 0:12:0  SSD  normal  normal            not_capable
 9 0:13:0  SSD  normal  normal            not_capable
10 0:14:0  SSD  normal  normal            not_capable
11 0:15:0  SSD  normal  normal            not_capable
12 0:16:0  SSD  normal  normal            not_capable
13 0:17:0  SSD  normal  normal            not_capable
14 0:18:0  SSD  normal  normal            not_capable
15 0:19:0  SSD  normal  normal            not_capable
16 1:6:0   FC   failed  vacated,servicing fips_capable
17 1:7:0   FC   failed  vacated,servicing fips_capable
18 1:8:0   FC   failed  vacated,servicing fips_capable
19 1:9:0   FC   failed  vacated,servicing fips_capable
20 1:10:0  FC   failed  vacated,servicing fips_capable
21 1:11:0  FC   failed  vacated,servicing fips_capable
22 1:12:0  FC   failed  vacated,servicing fips_capable
23 1:13:0  FC   failed  vacated,servicing fips_capable
24 1:14:0  FC   failed  vacated,servicing not_capable
25 1:15:0  FC   failed  vacated,servicing not_capable
26 1:16:0  FC   failed  vacated,servicing not_capable
27 1:17:0  FC   failed  vacated,servicing not_capable
28 1:18:0  FC   failed  vacated,servicing not_capable
29 1:19:0  FC   failed  vacated,servicing not_capable
30 1:20:0  FC   failed  vacated,servicing not_capable
31 1:21:0  FC   failed  vacated,servicing not_capable
32 1:0:0   NL   failed  vacated,servicing not_capable
33 1:1:0   NL   failed  vacated,servicing not_capable
34 1:2:0   NL   failed  vacated,servicing not_capable
35 1:3:0   NL   failed  vacated,servicing not_capable
36 1:4:0   NL   failed  vacated,servicing not_capable
37 1:5:0   NL   failed  vacated,servicing not_capable
------------------------------------------------------
38 total
stkbdk3par01 cli% showpd -i
Id CagePos State  ----Node_WWN---- --MFR-- -----Model------ -Serial- -FW_Rev- Protocol MediaType -----AdmissionTime-----
 0 0:0:0   normal 5000CCA02B396047 HGST    HSSC0920S5xnNMRI 2MW0K65A 3P02     SAS      MLC       2015-06-06 17:04:21 GMT
 1 0:1:0   normal 5000CCA02B3957C3 HGST    HSSC0920S5xnNMRI 2MW0JMLA 3P02     SAS      MLC       2015-06-06 17:04:21 GMT
 2 0:2:0   normal 5000CCA02B39255F HGST    HSSC0920S5xnNMRI 2MW0E8KA 3P02     SAS      MLC       2015-06-06 17:04:21 GMT
 3 0:3:0   normal 5000CCA02B395FF7 HGST    HSSC0920S5xnNMRI 2MW0K5JA 3P02     SAS      MLC       2015-06-06 17:04:21 GMT
 4 0:4:0   normal 5000CCA02B39609B HGST    HSSC0920S5xnNMRI 2MW0K6VA 3P02     SAS      MLC       2015-06-06 17:04:21 GMT
 5 0:5:0   normal 5000CCA02B39566F HGST    HSSC0920S5xnNMRI 2MW0JJVA 3P02     SAS      MLC       2015-06-06 17:04:21 GMT
 6 0:6:0   normal 5000CCA02B39699B HGST    HSSC0920S5xnNMRI 2MW0KUEA 3P02     SAS      MLC       2015-06-06 17:04:21 GMT
 7 0:7:0   normal 5000CCA02B3957A7 HGST    HSSC0920S5xnNMRI 2MW0JMBA 3P02     SAS      MLC       2015-06-06 17:04:21 GMT
 8 0:12:0  normal 5000CCA04F2561D7 HGST    HSCP0920S5xnNMRI 0RVNKA2A 3P00     SAS      MLC       2016-04-08 21:59:00 GMT
 9 0:13:0  normal 5000CCA04F255F63 HGST    HSCP0920S5xnNMRI 0RVNK50A 3P00     SAS      MLC       2016-04-08 21:59:00 GMT
10 0:14:0  normal 5000CCA04F24CF03 HGST    HSCP0920S5xnNMRI 0RVN7JYA 3P00     SAS      MLC       2016-04-08 21:59:00 GMT
11 0:15:0  normal 5000CCA04F23F057 HGST    HSCP0920S5xnNMRI 0RVMSR7A 3P00     SAS      MLC       2016-04-08 21:59:00 GMT
12 0:16:0  normal 5000CCA04F23FFA7 HGST    HSCP0920S5xnNMRI 0RVMTRVA 3P00     SAS      MLC       2016-04-08 21:59:00 GMT
13 0:17:0  normal 5000CCA04F23E98B HGST    HSCP0920S5xnNMRI 0RVMS86A 3P00     SAS      MLC       2016-04-08 21:59:00 GMT
14 0:18:0  normal 5000CCA04F255FF3 HGST    HSCP0920S5xnNMRI 0RVNK65A 3P00     SAS      MLC       2016-04-08 21:59:00 GMT
15 0:19:0  normal 5000CCA04F256037 HGST    HSCP0920S5xnNMRI 0RVNK6RA 3P00     SAS      MLC       2016-04-08 21:59:00 GMT
16 1:6:0   failed 5000C5008F9AC224 SEAGATE SLTN0900S5xnF010 S0N4Y91S 3P00     SAS      Magnetic  2016-07-21 17:29:52 GMT
17 1:7:0   failed 5000C5008F7EF9EC SEAGATE SLTN0900S5xnF010 S0N4X25L 3P00     SAS      Magnetic  2016-07-21 17:29:52 GMT
18 1:8:0   failed 5000C5008F6B7F3C SEAGATE SLTN0900S5xnF010 S0N4TPMC 3P00     SAS      Magnetic  2016-07-21 17:29:52 GMT
19 1:9:0   failed 5000C500959DB480 SEAGATE SLTN0900S5xnF010 S0N515HS 3P00     SAS      Magnetic  2016-07-21 17:29:52 GMT
20 1:10:0  failed 5000C5008F7EA56C SEAGATE SLTN0900S5xnF010 S0N4X2VY 3P00     SAS      Magnetic  2016-07-21 17:29:52 GMT
21 1:11:0  failed 5000C5008F7F06BC SEAGATE SLTN0900S5xnF010 S0N4WBLV 3P00     SAS      Magnetic  2016-07-21 17:29:52 GMT
22 1:12:0  failed 5000C500959DDAE4 SEAGATE SLTN0900S5xnF010 S0N5156K 3P00     SAS      Magnetic  2016-07-21 17:29:52 GMT
23 1:13:0  failed 5000C5008F9ACDF4 SEAGATE SLTN0900S5xnF010 S0N4Y8VN 3P00     SAS      Magnetic  2016-07-21 17:29:52 GMT
24 1:14:0  failed 5000CCA05B3F0D23 HGST    HKCF0300S5xeN015 0TH3NYMP 3P00     SAS      Magnetic  2016-07-21 17:29:52 GMT
25 1:15:0  failed 5000CCA05B3F09D7 HGST    HKCF0300S5xeN015 0TH3NRUP 3P00     SAS      Magnetic  2016-07-21 17:29:52 GMT
26 1:16:0  failed 5000CCA05B3F0913 HGST    HKCF0300S5xeN015 0TH3NP7P 3P00     SAS      Magnetic  2016-07-21 17:29:52 GMT
27 1:17:0  failed 5000CCA05B3EFD27 HGST    HKCF0300S5xeN015 0TH3MWMP 3P00     SAS      Magnetic  2016-07-21 17:29:52 GMT
28 1:18:0  failed 5000CCA05B3F11CB HGST    HKCF0300S5xeN015 0TH3P87P 3P00     SAS      Magnetic  2016-07-21 17:29:52 GMT
29 1:19:0  failed 5000CCA05B3F243B HGST    HKCF0300S5xeN015 0TH3RH9P 3P00     SAS      Magnetic  2016-07-21 17:29:53 GMT
30 1:20:0  failed 5000CCA05B3DE1FB HGST    HKCF0300S5xeN015 0TH3110P 3P00     SAS      Magnetic  2016-07-21 17:29:53 GMT
31 1:21:0  failed 5000CCA05B3F11F3 HGST    HKCF0300S5xeN015 0TH3P8KP 3P00     SAS      Magnetic  2016-07-21 17:29:53 GMT
32 1:0:0   failed 5000C50096CDE258 SEAGATE SAVN2000S5xeN7.2 S460EWAQ 3P01     SAS      Magnetic  2016-08-03 16:49:45 GMT
33 1:1:0   failed 5000C50096CDB860 SEAGATE SAVN2000S5xeN7.2 S460J583 3P01     SAS      Magnetic  2016-08-03 16:49:45 GMT
34 1:2:0   failed 5000C500967AA930 SEAGATE SAVN2000S5xeN7.2 S460HNVJ 3P01     SAS      Magnetic  2016-08-03 16:49:45 GMT
35 1:3:0   failed 5000C50096CDA250 SEAGATE SAVN2000S5xeN7.2 S460J3FS 3P01     SAS      Magnetic  2016-08-03 16:49:45 GMT
36 1:4:0   failed 5000C50096CE2454 SEAGATE SAVN2000S5xeN7.2 S460G4SE 3P01     SAS      Magnetic  2016-08-03 16:49:45 GMT
37 1:5:0   failed 5000C50096CE298C SEAGATE SAVN2000S5xeN7.2 S460J2K7 3P01     SAS      Magnetic  2016-08-03 16:49:45 GMT
------------------------------------------------------------------------------------------------------------------------
38 total
10-20-2016 08:21 AM
Re: HP 3PAR Expansion with all magazine degraded and disks with "vacated, servicing" state
Before rebooting the controllers (nodes), how can I check whether they are configured as a cluster?
I've seen that the first node is marked Master "Yes" and the second "No". Does that mean the two nodes are in a cluster and I can safely reboot them one by one?
10-20-2016 10:19 AM
Re: HP 3PAR Expansion with all magazine degraded and disks with "vacated, servicing" state
Hope this helps!
Regards
Torsten.
10-23-2016 03:31 PM
Re: HPE 3PAR Expansion with all magazine degraded and disks with "vacated, servicing" state
Perhaps it's time to talk to HPE Support about your problem?
The PDs were admitted two to three months ago; what happened in September?
I see nothing in your output that explains why the automatic servicemag operations ran, or why there were so many.
And "vacated, servicing" doesn't say much about the primary cause. Unless it was the bad cage cabling?
For all of the PDs that have "A servicemag start command failed on this magazine", you might try (your first one is cage 1, magazine 0):
servicemag unmark 1 0
And if that doesn't work, try:
servicemag clearstatus 1 0
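A minimal sketch of that pass over every affected magazine (the magazine numbers come from the servicemag status output earlier in this thread; clearstatus is the fallback if unmark leaves the state unchanged):

stkbdk3par01 cli% servicemag unmark 1 0
stkbdk3par01 cli% servicemag unmark 1 1
(repeat for the remaining affected cage 1 magazines, through 1 21)
stkbdk3par01 cli% servicemag status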
Then what does "showpd -s" show?
What does showcage show?
>I've seen the first node is marked Master "Yes" and the second "No". Does it mean that the two nodes are in a cluster and I can safely reboot them one by one?
Yes; shownode will show whether each node is in the cluster.
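For illustration, a hedged sketch of what shownode reports (the column layout is recalled from the 3.2.x CLI and the node names are assumptions based on the system serial, so verify against your own output; both nodes should show InCluster = Yes):

stkbdk3par01 cli% shownode
Node --Name--- -State- Master InCluster ...
   0 1686957-0 OK      Yes    Yes
   1 1686957-1 OK      No     Yes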