HPE EVA Storage
Adrian Graham_1
Regular Advisor

CV-EVA reporting EVA4400 size incorrectly

Folks,

Yet another issue has arisen with one of our EVAs running XCS 09501100. It has a VTL disk group of twelve 1 TB FATA drives (HP05 firmware). Last week one of these failed, so I replaced it with a new one (HP06), and since then the size of the whole array has been misreported. I've enclosed a snapshot of the Command View summary.
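In case it's useful for comparison, this is roughly how I've been cross-checking what Command View reports against the array itself, using SSSU (the Storage System Scripting Utility that ships with Command View). A sketch only; the manager host, credentials and system name below are placeholders for our own:

    SELECT MANAGER cvhost USERNAME=admin PASSWORD=secret
    SELECT SYSTEM EVA4400_01
    LS DISK_GROUP FULL

LS DISK_GROUP FULL prints each disk group with its capacity and occupancy figures, so you can see straight away whether they line up with what the summary page shows.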

We'll be upgrading the other drives to HP06 tonight in case it's a firmware issue, but otherwise, has anyone seen this before?

Cheers
4 REPLIES
Víctor Cespón
Honored Contributor

Re: CV-EVA reporting EVA4400 size incorrectly

This is seen sometimes after replacing a disk: the controllers do not calculate the space in the disk group correctly and report absurd numbers.

This is usually solved by performing a resync of the controllers (which has to be done by an HP CE).

You can also try rebooting both controllers, but we usually opt for the resync, as it's less disruptive.
Adrian Graham_1
Regular Advisor

Re: CV-EVA reporting EVA4400 size incorrectly

That's outstandingly bad! Given the failure rate of the 1 TB drives, does this mean we're facing regular resyncs or reboots?

Thanks!
Víctor Cespón
Honored Contributor

Re: CV-EVA reporting EVA4400 size incorrectly

I said "this is seen sometimes", not that it will happen every time a disk is replaced.
The usual recommendation of updating all software and firmware also applies: XCS to 09522000, the 1 TB drives to HP06, and Command View to 9.1.

And before replacing a disk you should have the EVA logs checked by an experienced engineer; bugs in the I/O module firmware often lead to timeouts and check conditions on the disks, and to disks being marked as failed when it's not their fault.
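If you want to confirm which firmware each drive is actually running before (and after) an upgrade, SSSU will list it. Again only a sketch, with placeholder manager and system names, and the exact property names in the output can vary between Command View versions:

    SELECT MANAGER cvhost USERNAME=admin PASSWORD=secret
    SELECT SYSTEM EVA4400_01
    LS DISK FULL

LS DISK FULL dumps every physical disk with its model and firmware revision, so any drive still on HP05 stands out.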
Adrian Graham_1
Regular Advisor

Re: CV-EVA reporting EVA4400 size incorrectly

Thanks for that. As I say, we're doing the drives tonight and will hopefully do XCS at the next maintenance weekend, assuming that this version isn't pulled like the previous ones :)