
HSZ Disks

 
admin1979
Super Advisor

HSZ Disks


Hello Gurus,


We are running a two-node TruCluster on version 5.1A. It has an HSZ. Currently everything is working OK, but to be safe we would like to be prepared for a hard disk failure.

Please find the attached doc for the relevant details.

As can be seen, there are already 2 spare disks which should take the place of a failed disk in case of a problem. That will be automatic... right? And what next? How do we then go about replacing the failed disk with a new one and getting the data back?
Please suggest.
cnb
Honored Contributor

Re: HSZ Disks

Hi,

If the HSZ is configured properly and has all of the updated HSOF patches, it *should* be a seamless process. ;-)

The failed disk will be placed in the FAILEDSET, and if a spare drive meets the size requirement of the failed drive, it will be added into the storageset that the failed drive came out of.

The hardware array will rebuild without intervention.
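
You can keep an eye on the rebuild from the CLI, e.g. (a sketch; the exact output fields vary with your HSOF version, and <storageset-name> is whatever name your set has):

HSZ> SHOW <storageset-name>

While the rebuild is running, the storageset state should show it reconstructing rather than NORMAL.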


The way your controller is configured now, the FAILEDSET shows NOAUTOSPARE, meaning the failed drive will have to be deleted manually before you physically remove it and replace it with a new spare.

See commands:

ADD/DELETE/SHOW SPARESET
SET/DELETE/SHOW FAILEDSET
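
For instance, since your FAILEDSET currently shows NOAUTOSPARE, the switch is flipped with SET (a sketch; verify against the CLI reference below before changing a live controller):

HSZ> SET FAILEDSET AUTOSPARE
HSZ> SHOW FAILEDSET

With AUTOSPARE set, a drive swapped into a failed drive's slot gets moved into the SPARESET automatically instead of you having to DELETE FAILEDSET and ADD SPARESET by hand.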

In here:

HSZ70 CLI reference:
ftp://15.192.45.22/ftp1/pub/openstorage/v70_hsz70_cli_reference_a01.pdf

Basically:

A drive fails and is placed in the FAILEDSET.
If a SPARESET drive of the same or larger size is available, it is moved into the degraded RAID set and the rebuild begins.

A) Issue DELETE FAILEDSET DISKxxxxx
B) Physically remove DISKxxxxx from slot X
C) Insert a good spare disk in slot X
D) Issue ADD SPARESET DISKxxxxx
E) Issue SHOW SPARESET to make sure the new drive was added into the SPARESET. If it doesn't show up, you may have to RUN CONFIG and repeat steps D & E.
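
Put together, the session might look like this (a sketch; the DISK name here is made up for illustration, use your own from SHOW DISKS):

HSZ> SHOW FAILEDSET
HSZ> DELETE FAILEDSET DISK20100
(physically swap the drive in its slot)
HSZ> RUN CONFIG
HSZ> ADD SPARESET DISK20100
HSZ> SHOW SPARESET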

Your mileage may vary...

hth,

admin1979
Super Advisor

Re: HSZ Disks

Thanks for the detailed reply.
I have a few questions though:

1. How do I correlate 'scu show edt' output with 'show disks' output? E.g. is the disk shown in the scu output as "Bus 0 Target 1 Lun 0" equivalent to "Port 0 Targ 1 Lun 0" in 'show disks'?

2. Are the steps you have given the same if the failed disk is from the MIRROR or RAIDFS set? What is the difference between RAIDFS, MIRROR and D106?
3. What are the implications of setting the FAILEDSET to AUTOSPARE, and how is it done?

Meanwhile I will go and read the docs to find the answers to the above questions (just in case no one replies :) )


Regards,
admin
marsh_1
Honored Contributor

Re: HSZ Disks

hi,

when the raid sets are created you give them a name, e.g. your mirror0 and raidfs2; these will be presented to the os as one unit. to get more detail on them, do 'show raidfs2' (for example) in the hsz. the d106 etc. entries are just jbods (single disks, unraided).
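
for example (a sketch, using the names from your config - the output shape will vary by hsof version):

HSZ> SHOW STORAGESETS
HSZ> SHOW UNITS
HSZ> SHOW RAIDFS2
HSZ> SHOW MIRROR0

show storagesets / show units give you the one-line overview; show <name> gives the per-set detail including the member disks.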

hth

Pieter 't Hart
Honored Contributor

Re: HSZ Disks

The simplest way is to give it a unit number when presenting it to the host.
Unix itself does not use the unit number, but you can use it to map OS volumes to HSZ volumes.

86: /dev/disk/dsk6c DEC HSG80 bus-0-targ-0-lun-15
87: /dev/disk/dsk7c DEC HSG80 bus-0-targ-0-lun-10
88: /dev/disk/dsk8c DEC HSG80 bus-0-targ-5-lun-20
89: /dev/disk/dsk9c DEC HSG80 bus-0-targ-0-lun-19
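
(A hedged illustration, not taken from your config: on the HSZ the unit number encodes the host-visible target and LUN, so a unit D106 would surface in a listing like the one above as targ-1-lun-6. Matching those digits against SHOW UNITS on the controller is the quickest correlation.)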

The other option is to use the WWID of the disks to identify them.
Pieter 't Hart
Honored Contributor

Re: HSZ Disks

root@eagle # hwmgr -show scsi -full -id 86

        SCSI                DEVICE   DEVICE   DRIVER  NUM   DEVICE  FIRST
 HWID:  DEVICEID  HOSTNAME  TYPE     SUBTYPE  OWNER   PATH  FILE    VALID PATH
-------------------------------------------------------------------------
   86:  14        eagle     disk     none     2       4     dsk6    [0/3/15]

WWID:01000010:6000-1fe1-000b-5a10-0009-1050-4750-0099


    BUS   TARGET  LUN   PATH STATE
    ---------------------------------
    0     3       15    valid
    0     2       15    valid
    0     1       15    valid
    0     0       15    valid
root@eagle #
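
To close the loop on the controller side, show the unit and compare its identifier against the WWID above (a sketch; the exact field name varies by HSOF/ACS version, but it carries the same 6000-1fe1-... string):

HSZ> SHOW <unit-name>
...
LUN ID: 6000-1FE1-000B-5A10-0009-1050-4750-0099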