
Cascade Failure - 4SI RAID

 
rmueller58
Valued Contributor


Second day in a row I've had a problem with the RAID.

Yesterday it appeared to begin on physical drives 1:0 and 1:1; today it appears to have started on 1:9.

The configuration is an SC10 enclosure with an 8-disk set, attached to a RAID 4SI card.

From discussions here, the cause could be another disk in the set, the SC10 chassis backplane, or the RAID card itself. Any thoughts as to what to look at first?

HP's FE has been dispatched and is on his way over to the shop. I wanted to see if anyone else has experienced this kind of multiple cascade failure, and what the best avenue is to correct it.
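For anyone hitting something similar, this is roughly how I'm pulling the relevant events out of syslog to see the order in which things failed (a minimal sketch only; the path assumes the default HP-UX syslog location):

# List the RAID adapter drive state changes and the LVM quorum/PV failures
# in time order, so you can see which physical drive dropped first.
egrep 'State Change|Lost quorum|PVLink' /var/adm/syslog/syslog.log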




Jan 9 01:13:48 esuunix1 syslog: IRMD[Info]: Adapter 1/4/0/1: Battery is fully charged. It is safe to set the cache policy to WRBACK if desired. In order to do that, please run irm. Select the RAID adapter at /dev/iop0 and change the cache policy of the desired logical drives to WRBACK.
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2451]: Setting STREAMS-HEAD high water value to 131072.
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2457]: nfsd do_one mpctl succeeded: ncpus = 4.
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2457]: nfsd do_one pmap 2
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2457]: nfsd do_one pmap 3
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2458]: nfsd do_one bind 0
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2459]: nfsd do_one bind 1
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2457]: nfsd do_one bind 3
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2460]: nfsd do_one bind 2
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2457]: Return from t_optmgmt(XTI_DISTRIBUTE) 0
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2463]: nfsd 3 2 sock 4
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2457]: nfsd 3 3 sock 4
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2459]: Return from t_optmgmt(XTI_DISTRIBUTE) 0
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2467]: nfsd 1 0 sock 4
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2458]: Return from t_optmgmt(XTI_DISTRIBUTE) 0
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2469]: nfsd 1 1 sock 4
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2471]: nfsd 1 2 sock 4
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2459]: nfsd 1 3 sock 4
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2470]: nfsd 0 0 sock 4
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2472]: nfsd 0 1 sock 4
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2458]: nfsd 0 3 sock 4
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2473]: nfsd 0 2 sock 4
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2462]: nfsd 3 1 sock 4
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2461]: nfsd 3 0 sock 4
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2460]: Return from t_optmgmt(XTI_DISTRIBUTE) 0
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2480]: nfsd 2 0 sock 4
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2481]: nfsd 2 1 sock 4
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2460]: nfsd 2 3 sock 4
Jan 9 01:14:55 esuunix1 /usr/sbin/nfsd[2482]: nfsd 2 2 sock 4
Jan 9 01:15:03 esuunix1 LVM[2526]: Volume Group configuration for /dev/vg00 has been saved in /etc/lvmconf/vg00.conf
Jan 9 01:15:03 esuunix1 LVM[2526]: vgcfgbackup /dev/vg00
Jan 9 01:15:03 esuunix1 LVM[2556]: Volume Group configuration for /dev/vg01 has been saved in /etc/lvmconf/vg01.conf
Jan 9 01:15:03 esuunix1 LVM[2556]: vgcfgbackup /dev/vg01
Jan 9 01:15:03 esuunix1 LVM[2561]: Volume Group configuration for /dev/vg02 has been saved in /etc/lvmconf/vg02.conf
Jan 9 01:15:03 esuunix1 LVM[2561]: vgcfgbackup /dev/vg02
Jan 9 01:15:04 esuunix1 LVM[2562]: Volume Group configuration for /dev/vg03 has been saved in /etc/lvmconf/vg03.conf
Jan 9 01:15:04 esuunix1 LVM[2562]: vgcfgbackup /dev/vg03
Jan 9 01:15:04 esuunix1 LVM[2563]: Volume Group configuration for /dev/vg04 has been saved in /etc/lvmconf/vg04.conf
Jan 9 01:15:04 esuunix1 LVM[2563]: vgcfgbackup /dev/vg04
Jan 9 01:15:08 esuunix1 prngd[2643]: prngd 0.9.26 (12 Jul 2002) started up for user root
Jan 9 01:15:08 esuunix1 prngd[2643]: have 6 out of 512 filedescriptors open
Jan 9 02:15:08 esuunix1 krsd[2646]: Delay time is 300 seconds
Jan 9 01:17:15 esuunix1 su: + tty?? root-informix
Jan 9 01:29:46 esuunix1 su: + tty?? root-informix
Jan 9 01:30:44 esuunix1 above message repeats 70 times
Jan 9 01:36:36 esuunix1 su: + tty?? root-informix
Jan 9 01:45:50 esuunix1 syslog: IRMD[Warning]: Adapter 1/4/0/1 PDrv 1:9 State Change from ONLINE to FAILED
Jan 9 01:45:50 esuunix1 syslog: IRMD[Warning]: Adapter 1/4/0/1 PDrv 1:10 State Change from ONLINE to FAILED
Jan 9 01:45:51 esuunix1 syslog: IRMD[Warning]: Adapter 1/4/0/1 PDrv 1:11 State Change from ONLINE to FAILED
Jan 9 01:38:39 esuunix1 su: + tty?? root-informix
Jan 9 01:45:51 esuunix1 above message repeats 19 times
Jan 9 01:45:51 esuunix1 syslog: IRMD[Severe]: Adapter 1/4/0/1 LDrv 3 State Change from OPTIMAL to OFFLINE
Jan 9 01:45:52 esuunix1 vmunix: LVM: VG 64 0x010000: Lost quorum.
Jan 9 01:45:52 esuunix1 vmunix: This may block configuration changes and I/Os. In order to reestablish quorum at least 1 of the following PVs (represented by current link) must become available:
Jan 9 01:45:52 esuunix1 vmunix: <31 0x040200>
Jan 9 01:45:52 esuunix1 vmunix: LVM: VG 64 0x010000: PVLink 31 0x040200 Failed! The PV is not accessible.
Jan 9 01:46:13 esuunix1 su: + tty?? root-informix
Jan 9 01:46:15 esuunix1 syslog: IRMD[Warning]: Adapter 1/4/0/1 PDrv 1:2 State Change from ONLINE to FAILED
Jan 9 01:46:16 esuunix1 syslog: IRMD[Warning]: Adapter 1/4/0/1 PDrv 1:3 State Change from ONLINE to FAILED
Jan 9 01:46:14 esuunix1 su: + tty?? root-informix
Jan 9 01:46:16 esuunix1 above message repeats 9 times
Jan 9 01:46:16 esuunix1 syslog: IRMD[Warning]: Adapter 1/4/0/1 PDrv 1:8 State Change from ONLINE to FAILED
Jan 9 01:46:16 esuunix1 syslog: IRMD[Severe]: Adapter 1/4/0/1 LDrv 2 State Change from OPTIMAL to OFFLINE
Jan 9 01:47:03 esuunix1 su: + tty?? root-informix
Jan 9 01:47:46 esuunix1 vmunix: LVM: VG 64 0x040000: Lost quorum.
Jan 9 01:47:46 esuunix1 vmunix: This may block configuration changes and I/Os. In order to reestablish quorum at least 1 of the following PVs (represented by current link) must become available:
Jan 9 01:47:46 esuunix1 vmunix: <31 0x040300>
Jan 9 01:47:28 esuunix1 su: + tty?? root-informix
Jan 9 01:47:46 esuunix1 above message repeats 39 times
Jan 9 01:47:46 esuunix1 vmunix: LVM: VG 64 0x040000: PVLink 31 0x040300 Failed! The PV is not accessible.
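Once the failed hardware is sorted out, the recovery on the LVM side would presumably look something like this (a rough sketch only; vgNN is a placeholder for whichever volume group lost quorum, so substitute the actual VG names):

# Re-activate the volume group once its PV links are reachable again,
# then verify every PV shows as available before bringing Informix back up.
vgchange -a y /dev/vgNN
vgdisplay -v /dev/vgNN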

1 REPLY
rmueller58
Valued Contributor

Re: Cascade Failure - 4SI RAID

The problem was related to the backplane on the SC10. The FE replaced the backplane and the cascade failures ceased.