Integrity Servers

sas disk problem

 
JoeBob_2
Occasional Advisor

sas disk problem

I am new to SAS controllers and disks. One of my teammates recently removed a mirrored disk from a volume group; it was in slot 8 of an rx2660. There were no RAID devices configured, just a total of four disks in two mirrored volume groups. (No sasmgr commands were used; the lvol was un-mirrored, the vg was reduced, and the disk was removed.) I received a replacement disk, and when I went to re-mirror the volume, I ran ioscan on the system and saw:
ora51:/root #-> ioscan -fnC disk
Class I H/W Path Driver S/W State H/W Type Description
====================================================================================
disk 0 0/0/2/1.0.16.0.0 sdisk CLAIMED DEVICE TEAC DV-28E-N
/dev/dsk/c0t0d0 /dev/rdsk/c0t0d0
disk 1 0/1/1/0.0.0.0.0 sdisk CLAIMED DEVICE HP DG072A9BB7
/dev/dsk/c1t0d0 /dev/dsk/c1t0d0s3 /dev/rdsk/c1t0d0s2
/dev/dsk/c1t0d0s1 /dev/rdsk/c1t0d0 /dev/rdsk/c1t0d0s3
/dev/dsk/c1t0d0s2 /dev/rdsk/c1t0d0s1
disk 2 0/1/1/0.0.0.1.0 sdisk CLAIMED DEVICE HP DG072A9BB7
/dev/dsk/c1t1d0 /dev/dsk/c1t1d0s3 /dev/rdsk/c1t1d0s2
/dev/dsk/c1t1d0s1 /dev/rdsk/c1t1d0 /dev/rdsk/c1t1d0s3
/dev/dsk/c1t1d0s2 /dev/rdsk/c1t1d0s1
disk 35 0/1/1/0.0.0.2.0 sdisk CLAIMED DEVICE HP DG072A9BB7
/dev/dsk/c1t2d0 /dev/rdsk/c1t2d0
disk 3 0/1/1/0.0.0.4.0 sdisk CLAIMED DEVICE HP IR Volume
/dev/dsk/c1t4d0 /dev/rdsk/c1t4d0

The disk had originally been at c1t3d0; I removed the old hardware path with rmsf, but the new disk presents itself as a RAID disk.
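For the record, the removal sequence my teammate used would have looked roughly like this (a sketch only; the vg/lvol names are made up, since I wasn't present, and the hardware path for c1t3d0 is inferred from the ioscan numbering pattern):

```shell
# Sketch of the un-mirror / removal steps (example names: vg02, lvol1)
lvreduce -m 0 /dev/vg02/lvol1 /dev/dsk/c1t3d0   # drop the mirror copy on the outgoing disk
vgreduce /dev/vg02 /dev/dsk/c1t3d0              # remove the PV from the volume group
rmsf -H 0/1/1/0.0.0.3.0                         # remove stale special files for the old path
```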

Is it possible to change the RAID status of disk c1t4d0 online, or do I need to boot to the EFI shell and use drvcfg? I used the following commands and received the following responses from the system:
ora51:/root #-> sasmgr set_attr -D /dev/sasd0 -q lun=/dev/dsk/c1t4d0 -q locate_led=on
ERROR: Inquiry failed: Not a typewriter
ERROR: More than one PHY in the iport: Invalid argument
Locate LED set to ON.
ora51:/root #-> sasmgr delete -D /dev/sasd0 -q raid -q raid_vol=0

WARNING: This is a DESTRUCTIVE operation.
This might result in failure of current I/O requests.
Do you want to continue ?(y/n) [n]...(I entered 'y')
ERROR: Unable to delete a RAID volume: Bad address
Torsten.
Acclaimed Contributor

Re: sas disk problem

Can you please post the output of

# sasmgr get_info -D /dev/sasd0 -q raid

so we can see what the current status is?

Hope this helps!
Regards
Torsten.

__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.

__________________________________________________
No support by private messages. Please ask the forum!

If you feel this was helpful please click the KUDOS! thumb below!   
JoeBob_2
Occasional Advisor

Re: sas disk problem

ora51:/root #-> sasmgr get_info -D /dev/sasd0 -q raid

Mon Nov 17 12:17:48 2008

---------- PHYSICAL DRIVES ----------
LUN dsf SAS Address Enclosure Bay Size(MB)

/dev/rdsk/c1t0d0 0x500000e01401b492 1 5 70007
/dev/rdsk/c1t1d0 0x500000e012b453c2 1 6 70007
/dev/rdsk/c1t2d0 0x500000e01401b172 1 7 70007

---------- LOGICAL DRIVE 8 ----------

Raid Level : RAID 1
Volume sas address : 0xbd16fadc0cccf09
Device Special File : /dev/rdsk/c1t4d0
Raid State : DEGRADED
Raid Status Flag : ENABLED
Raid Size : 69878
Rebuild Rate : 0.00 %
Rebuild Progress : 100.00 %

Participating Physical Drive(s) :

SAS Address Enc Bay Size(MB) Type State

0x5000c50001dd67c5 1 8 70007 PRIMARY ONLINE
0x0 0 0 70007 SECONDARY MISSING
Torsten.
Acclaimed Contributor

Re: sas disk problem

According to this output you have one disk that is part of a hardware RAID (!!) in slot 8 (!); its partner is configured to be in slot 7, but that disk is "missing".

IMHO you should still be able to access the device /dev/rdsk/c1t4d0 .

You have disks in slots 5, 6, 7, and 8.


I assume the disk was pulled from slot 7 - was the server powered off during the disk swap?
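A quick, non-destructive way to confirm the device still answers would be something like:

```shell
diskinfo /dev/rdsk/c1t4d0                             # vendor, product, size of the IR volume
dd if=/dev/rdsk/c1t4d0 of=/dev/null bs=1024k count=10 # small read test, writes nothing
```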

Hope this helps!
Regards
Torsten.

JoeBob_2
Occasional Advisor

Re: sas disk problem

Torsten, thanks for your responses. The disk was pulled from slot 8, which had been the mirror of the disk in slot 7. The system was up when the disk was pulled, but I wasn't present. So far as I know, the admin un-mirrored the lvol, removed the slot 8 disk from the volume group, and then pulled the disk. When I inserted a replacement in slot 8, it presented itself as a RAID disk.
Torsten.
Acclaimed Contributor

Re: sas disk problem

So maybe this disk was part of a RAID in another system before?

Hope this helps!
Regards
Torsten.

JoeBob_2
Occasional Advisor

Re: sas disk problem

I wondered about that myself. My supervisor supplied the replacement part in a sealed anti-static bag, but there was no information on the outside of the bag. It seems very possible the replacement disk in slot 8 came from a system where it had been part of a RAID.
Torsten.
Acclaimed Contributor

Re: sas disk problem

Note: if you want to modify this RAID in any way, it is LOGICAL DRIVE 8, so use raid_vol=8, not raid_vol=0.

Hope this helps!
Regards
Torsten.

JoeBob_2
Occasional Advisor

Re: sas disk problem

ora51:/root #-> sasmgr delete -D /dev/sasd0 -q raid -q raid_vol=8

WARNING: This is a DESTRUCTIVE operation.
This might result in failure of current I/O requests.
Do you want to continue ?(y/n) [n]...
RAID Volume 8 deleted successfully.

ora51:/root #-> sasmgr get_info -D /dev/sasd0 -q raid

Tue Nov 18 06:56:59 2008

---------- PHYSICAL DRIVES ----------
LUN dsf SAS Address Enclosure Bay Size(MB)

/dev/rdsk/c1t0d0 0x500000e01401b492 1 5 70007
/dev/rdsk/c1t1d0 0x500000e012b453c2 1 6 70007
/dev/rdsk/c1t2d0 0x500000e01401b172 1 7 70007
/dev/rdsk/c1t5d0 0x5000c50001dd67c5 1 8 70007

That did the trick! Thanks!
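For anyone hitting this thread later: with the RAID volume deleted, the disk shows up as a plain physical drive at c1t5d0, and the remaining work is the usual LVM re-mirror. Roughly (the vg/lvol names here are examples, not the real ones from this system):

```shell
pvcreate -f /dev/rdsk/c1t5d0                    # initialize the replacement disk for LVM
vgextend /dev/vg02 /dev/dsk/c1t5d0              # add it back into the volume group
lvextend -m 1 /dev/vg02/lvol1 /dev/dsk/c1t5d0   # re-establish the mirror (repeat per lvol)
```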