06-24-2009 01:16 AM
HSZ Disks
Hello Gurus,
We are running a 2-node TruCluster on version 5.1A with an HSZ controller. Currently everything is working fine, but to be safe we would like to be prepared for a hard-disk failure.
Please find the attached doc for the relevant details.
As can be seen, there are already 2 spare disks which should take the place of a failed disk in case of a problem. That will be automatic, right? And what next? How do we go ahead and replace the failed disk with a new one, and what about the data?
Please suggest.
06-24-2009 03:56 PM
Re: HSZ Disks
If the HSZ is configured properly and has all of the updated HSOF patches, it *should* be a seamless process. ;-)
The failed disk will be placed in the FAILEDSET, and if a spare drive meets the size requirement of the failed drive, it will be moved into the storageset the failed drive came out of. The hardware array will then rebuild without intervention.
The way your controller is configured now, however, the FAILEDSET shows NOAUTOSPARE, meaning the failed drive will have to be deleted manually before you physically remove it and replace it with a new spare.
See commands:
ADD/DELETE/SHOW SPARESET
SET/DELETE/SHOW FAILEDSET
In here:
HSZ70 CLI reference:
ftp://15.192.45.22/ftp1/pub/openstorage/v70_hsz70_cli_reference_a01.pdf
Basically:
The drive fails and is placed in the FAILEDSET.
If a drive of the same or larger size is available in the SPARESET, it is moved into the degraded RAIDset and the rebuild begins.
A) Issue DELETE FAILEDSET DISKxxxxx
B) Physically remove DISKxxxxx from slotX
C) Insert good spare disk in SlotX
D) Issue ADD SPARESET DISKxxxxx
E) Issue SHOW SPARESET to make sure the new drive was added to the SPARESET. If it doesn't show up, you may have to RUN CONFIG and repeat steps D and E.
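As a rough sketch, the whole swap at the HSZ prompt would look like this (DISK10100 is a hypothetical drive name; substitute the one your controller reports, and the # lines are annotations, not CLI input):

```
HSZ> SHOW FAILEDSET              # confirm which drive failed
HSZ> DELETE FAILEDSET DISK10100
# ...physically swap the drive in its slot now...
HSZ> ADD SPARESET DISK10100
HSZ> SHOW SPARESET               # verify the new drive is listed
# if it does not show up: RUN CONFIG, then repeat the ADD/SHOW
```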
Your mileage may vary...
hth,
06-25-2009 01:04 AM
Re: HSZ Disks
I have a few questions though:
1. How do I correlate the "scu show edt" output with the SHOW DISKS output? E.g., is the disk shown in the scu output as "Bus 0 Target 1 Lun 0" the same as "Port 0 Targ 1 Lun 0" in SHOW DISKS?
2. Are the steps you gave the same if the failed disk is from the MIRROR or RAIDFS set? What is the difference between RAIDFS, MIRROR and D106?
3. What are the implications of setting the FAILEDSET to AUTOSPARE, and how is it done?
Meanwhile I will go read the docs to find the answers to the above questions (just in case no one replies :) )
Regards,
admin
06-25-2009 01:46 AM
Re: HSZ Disks
When the RAIDsets are created you give them a name, e.g. your MIRROR0 and RAIDFS2; these are presented to the OS as one unit. To get more detail on one, issue SHOW RAIDFS2 (for example) at the HSZ prompt. The D106 etc. entries are just JBODs (single, unraided disks).
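For instance, a sketch of the inspection commands (container and unit names taken from the poster's config; the # lines are annotations, not CLI input):

```
HSZ> SHOW MIRROR0    # members and state of the mirrorset
HSZ> SHOW RAIDFS2    # members and state of the RAIDset
HSZ> SHOW D106       # the unit: shows which container it maps to (here a single JBOD disk)
```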
hth
06-25-2009 04:34 AM
Re: HSZ Disks
Unix itself does not use the unit number, but you can use it to map OS volumes to HSZ volumes:
86: /dev/disk/dsk6c DEC HSG80 bus-0-targ-0-lun-15
87: /dev/disk/dsk7c DEC HSG80 bus-0-targ-0-lun-10
88: /dev/disk/dsk8c DEC HSG80 bus-0-targ-5-lun-20
89: /dev/disk/dsk9c DEC HSG80 bus-0-targ-0-lun-19
The other option is to use the WWID of the disks to identify them.
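If you have many units, building that OS-to-HSZ map by eye gets tedious. A minimal sketch in Python (assuming the device listing has the "NN: /dev/disk/dskNc DEC HSG80 bus-B-targ-T-lun-L" shape shown above; the sample text is copied from this thread):

```python
import re

# One line per device, in the format shown in the listing above.
LISTING = """\
86: /dev/disk/dsk6c DEC HSG80 bus-0-targ-0-lun-15
87: /dev/disk/dsk7c DEC HSG80 bus-0-targ-0-lun-10
88: /dev/disk/dsk8c DEC HSG80 bus-0-targ-5-lun-20
89: /dev/disk/dsk9c DEC HSG80 bus-0-targ-0-lun-19
"""

LINE_RE = re.compile(r"/dev/disk/(dsk\d+)\w*\s+.*bus-(\d+)-targ-(\d+)-lun-(\d+)")

def map_disks(listing):
    """Return {dsk name: (bus, target, lun)} parsed from the listing text."""
    mapping = {}
    for line in listing.splitlines():
        m = LINE_RE.search(line)
        if m:
            mapping[m.group(1)] = tuple(int(x) for x in m.group(2, 3, 4))
    return mapping

print(map_disks(LISTING))
# {'dsk6': (0, 0, 15), 'dsk7': (0, 0, 10), 'dsk8': (0, 5, 20), 'dsk9': (0, 0, 19)}
```

The (bus, target, lun) tuples can then be matched against the unit numbers configured on the HSZ side.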
06-25-2009 04:40 AM
Re: HSZ Disks
        SCSI                DEVICE  DEVICE  DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICEID  HOSTNAME  TYPE    SUBTYPE OWNER   PATH  FILE    VALID PATH
-------------------------------------------------------------------------
   86:  14        eagle     disk    none    2       4     dsk6    [0/3/15]
WWID:01000010:6000-1fe1-000b-5a10-0009-1050-4750-0099
BUS TARGET LUN PATH STATE
---------------------------------
0 3 15 valid
0 2 15 valid
0 1 15 valid
0 0 15 valid
root@eagle #