Disk Enclosures

Tom O'Toole
Respected Contributor

understanding EVA leveling...

The leveling status of the array is confusing. A 146 GB disk drive was replaced in an array here (running XCS 6.000, though I don't know how relevant the version is). About an hour later, the disk group was marked leveling 'inactive', but the occupancy of this drive was only 2.5 GB and rising. The occupancy of the group is about 75%. It seems like data is being migrated onto this drive even though no leveling was occurring. Is migrating data to a just-grouped drive handled outside the normal leveling?

Many HP people are recommending waiting for leveling to finish before performing another process, like an ungrouping. Originally I did not wait for leveling, but I would prefer to be as safe as possible. Anybody have comments on this?
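As a rough sanity check (the 146 GB capacity and ~75% group occupancy are taken from the post above; the arithmetic is a simplification that ignores sparing and protection overhead), the replaced drive's occupancy should eventually approach the group average, which shows how far the 2.5 GB figure was from done:

```python
# Back-of-the-envelope estimate, not anything EVA-specific.
# Assumed inputs: 146 GB drive, group running at ~75% occupancy.
DRIVE_CAPACITY_GB = 146
GROUP_OCCUPANCY = 0.75

# Once leveling equalizes the group, each drive should sit near the average.
expected_per_drive_gb = DRIVE_CAPACITY_GB * GROUP_OCCUPANCY
print(f"expected steady-state occupancy: ~{expected_per_drive_gb:.1f} GB")

# The replaced drive only showed 2.5 GB, so data movement had barely begun
# even though the group was already flagged 'inactive' for leveling.
observed_gb = 2.5
print(f"fraction moved so far: {observed_gb / expected_per_drive_gb:.1%}")
```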
Can you imagine if we used PCs to manage our enterprise systems? ... oops.
6 REPLIES
gabbar
Trusted Contributor

Re: understanding EVA leveling...

How did you remove the hard drive? Did you ungroup it before removing it?
Tom O'Toole
Respected Contributor

Re: understanding EVA leveling...


I think it failed out, so we had to pull it.
Can you imagine if we used PCs to manage our enterprise systems? ... oops.
tkc
Esteemed Contributor

Re: understanding EVA leveling...

Hi Tom,

The best option would be to collect the logs from the SMA and send them to HP support for investigation. To collect the logs:

- Start Command View EVA
- Type /fieldservice in the address field after the end of the current address
- On this screen, select your EVA from the dropdown list and press Select System
- Select the Capture System Information button and WAIT for the save popup
- Finally, save the file as *.zip to your workstation and send it to HP for investigation

The logs can explain why the leveling became inactive.
Mark...
Honored Contributor

Re: understanding EVA leveling...

Hi,
This latest update from HP might be of interest to you:
http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c01035871&dimid=1256978157&dicid=alr_jul07&jumpid=em_alerts/us/jul07/all/xbu/emailsubid/mrm/mcc/loc/rbu_category-Advisory(Revised)/alerts

If you scan down, the advisory recommends that you update from V6.000 to V6.100 or V6.110 "immediately". There is no need to go to V6.110, as that release is just for the new 41/61/8100 systems and the functionality is the same for both.

Mark...
if you have nothing useful to say, say nothing...
Mark Poeschl_2
Honored Contributor

Re: understanding EVA leveling...

"Leveling" and "migration" are actaully two separate operations, and both may occur during a drive replacement.

If I read your scenario right:
- Your drive failed hard so you never had a chance to ungroup it.
- You replaced it and added it to the same disk group
- The EVA now has to reconstruct the data that used to reside on that drive using RAID information - "migration"
- There may or may not be a short leveling operation after the migration completes, depending on how much the entire disk group's occupancy changed during the "migration"
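The distinction above can be sketched as a toy model (plain Python, not anything EVA-specific; the drive sizes, step size, and tolerance are invented for illustration): reconstruction fills the replacement drive from redundancy in one bulk operation, and leveling then makes small moves until per-drive occupancy is roughly equal.

```python
# Toy model contrasting the two operations: "migration"/reconstruction
# rebuilds lost data onto the replacement drive, while "leveling" then
# nudges per-drive occupancy toward the group mean.

def reconstruct(drives, new_drive, rebuilt_gb):
    """Rebuild lost data onto the replacement drive from RAID redundancy."""
    drives[new_drive] += rebuilt_gb
    return drives

def level(drives, step_gb=1.0, tolerance_gb=2.0):
    """Move data from the fullest to the emptiest drive until the spread
    between them is within tolerance."""
    while max(drives.values()) - min(drives.values()) > tolerance_gb:
        src = max(drives, key=drives.get)
        dst = min(drives, key=drives.get)
        moved = min(step_gb, drives[src] - drives[dst])
        drives[src] -= moved
        drives[dst] += moved
    return drives

# Three surviving drives at ~110 GB each; the replacement starts empty.
group = {"d1": 110.0, "d2": 110.0, "d3": 110.0, "new": 0.0}
reconstruct(group, "new", rebuilt_gb=100.0)  # the bulk of the work
level(group)                                 # short clean-up pass afterward
print(group)
```

Note how, in this sketch, most of the data lands on the new drive during reconstruction; leveling only accounts for the small residual imbalance, which matches the observation that the leveling phase after a replacement can be brief.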

Also, bear in mind that with XCS 6.000 there is a known bug wherein the "new" capacity of replaced or added drives isn't shown as free capacity in the disk group. A controller re-sync is required to correct this. Later versions of XCS fix this bug.

Tom O'Toole
Respected Contributor

Re: understanding EVA leveling...


Definitely aware of the XCS 6.000 issues, and we're in the process of moving systems off this array so we can upgrade the drive firmware and move to XCS 6.100. Because of 'difficulties' with AIX handling of array 'events', the plan is still to do this maintenance offline (or at the very least, without AIX production systems). We should be done with the migration by the end of the week
(we routinely move AIX and VMS systems between EVA arrays using host-based mirroring/shadowing).

Mark P. - I guess it's a complicated process. The EVA would normally start reconstruction immediately to the RSS members still in the array, before a new drive was added. Usually when a single drive fails and is replaced, the EVA seems to try to keep the old RSS id and index, so the new drive generally goes back into that RSS. I guess at that point it has to re-evaluate its reconstruction based on this restored member of that RSS. It does make sense that processes other than leveling might be involved.

I should have given you 10 points :-)
Can you imagine if we used PCs to manage our enterprise systems? ... oops.