Disk Enclosures

Urgent: FC60 - Unable to install new 72 GB disks

Karl-Heinz Topfer
Occasional Visitor


We are trying to configure an FC60 array with new 72 GB (10k RPM) disks.
These disks will replace the existing 36 GB disks in the disk enclosure.
The new disks are at firmware level HP03.

When we try to create a RAID 1 LUN, the system reports the following error message.

Error in command execution, "RMT_AM60ERRORSTATUS_MSG"

COMMAND STATE : A SCSI error occurred

Sense Key = 0x05: "ILLEGAL REQUEST"
Additional Sense Code = 0x26
Additional Sense Code Qual = 0x00
FRU Code = 0x54

Decoded SCSI Sense:
Invalid Field in Parameter List

amcfg: Error in command execution
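For reference, the sense bytes above decode per the SCSI standard: sense key 0x05 is ILLEGAL REQUEST, and ASC/ASCQ 0x26/0x00 is "Invalid Field in Parameter List" - i.e. the controller rejected something in the bind parameters rather than reporting a media fault. A minimal decoding sketch (the function name is illustrative and only the codes seen in this error are tabulated):

```shell
#!/bin/sh
# Sketch: decode the sense bytes from the error above.
# Values per the SCSI SPC standard; only the codes seen here are handled.
decode_sense() {
    # $1 = sense key, $2 = ASC, $3 = ASCQ
    case "$1/$2/$3" in
        0x05/0x26/0x00) echo "ILLEGAL REQUEST: Invalid Field in Parameter List" ;;
        *)              echo "unknown sense key=$1 ASC=$2 ASCQ=$3" ;;
    esac
}

decode_sense 0x05 0x26 0x00
```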
I have seen a similar error in the forum, but it did not have any fix/solution.
We have installed the latest HP Array Manager patch, PHCO_28166.

Any assistance on this is much appreciated, as we are in the middle of an outage.

Honored Contributor

Re: Urgent: FC60 - Unable to install new 72 GB disks

Most probably an error in the HP firmware on the drive. Try replacing the drive with one carrying the manufacturer's firmware. The problem with HP drive products is that they monkey around with the manufacturer's firmware, and who knows what they break.
Who knows the drive better, the manufacturer or HP?
Valued Contributor

Re: Urgent: FC60 - Unable to install new 72 GB disks

Hi Karthik,

How many 72 GB disks? And what command are you using to add the new RAID 1?

Have you initialised the disks before creating the RAID 1 set?

Karl-Heinz Topfer
Occasional Visitor

Re: Urgent: FC60 - Unable to install new 72 GB disks

Karl-Heinz Topfer replying on behalf of Karthik, with whom I was working on this.
Thanks for the two replies so far. I'm not sure firmware was the issue, as we eventually succeeded. The disks were HP-branded (Seagate) disks, supplied by a third party. Orrin, to answer your question, I'll give details on what we planned to do and what actually happened.

The system is an N4000 running HP-UX 11.0. The FC60 has 3 SC10 enclosures (0, 1, 2), fully populated with 18 GB and 36 GB disks; the ten 36 GB disks are in enclosure 2.
Stage 1 of the upgrade was to replace two 18 GB hot spares (one in enclosure 0, one in enclosure 1) with 72 GB hot spares, then replace all ten 36 GB disks in enclosure 2 with 72 GB disks. All five RAID 1 LUNs in enclosure 2 were assigned to file system /u06, the only file system in volume group vg06.

The 2 hot spare replacements were successful. Our steps after that were to be:
o amcfg -D 9 gfbfinfc # Remove/Unbind 36GB luns
o amcfg -D 10 gfbfinfc
o amcfg -D 11 gfbfinfc
o amcfg -D 12 gfbfinfc
o amcfg -D 13 gfbfinfc
o amdsp -a gfbfinfc # Check array config/status
o Replace ten 36 GB disks with 72 GB disks.
o amdsp -a gfbfinfc # Check array, confirm new disks
o Bind/configure the /u06 72GB LUNs:
o amcfg -L A:9 -d 5:0,6:0 -r 1 -s 64 gfbfinfc
o amcfg -L B:10 -d 5:1,6:1 -r 1 -s 64 gfbfinfc
o amcfg -L A:11 -d 5:2,6:2 -r 1 -s 64 gfbfinfc
o amcfg -L B:12 -d 5:3,6:3 -r 1 -s 64 gfbfinfc
o amcfg -L A:13 -d 5:4,6:4 -r 1 -s 64 gfbfinfc
.... followed by ioscan, insf, pvcreate, vgcreate, etc.
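The bind sequence above is regular enough to script. A hedged dry-run sketch that only echoes the amcfg commands (the array name gfbfinfc, the 5:n,6:n disk positions, and the A/B controller alternation are taken from the plan above; the function name is my own, and you would drop the echo to run it for real):

```shell
#!/bin/sh
# Dry-run sketch of the five RAID 1 binds planned above.
# Prints the amcfg commands instead of executing them.
gen_binds() {
    lun=9
    for slot in 0 1 2 3 4; do
        # Odd LUNs go to controller A, even LUNs to B, as in the plan
        [ $((lun % 2)) -eq 1 ] && ctrl=A || ctrl=B
        echo "amcfg -L $ctrl:$lun -d 5:$slot,6:$slot -r 1 -s 64 gfbfinfc"
        lun=$((lun + 1))
    done
}

gen_binds
```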

The first BIND, for controller A LUN 9, failed with the reported error message. We tried with SAM - same result. We tried again after a full system and FC60 power-down - same result.
We then issued the BIND for controller B LUN 10, and that ran successfully, but the BINDs for the remaining four LUNs failed. We tried the BIND commands using controller B in all cases - same failures. We powered off again and reseated the 8 disks. No change.
We experimented with hot spares, removing the ones added earlier and replacing them with each of the 8 disks in enclosure 2, to check whether the disks were okay. We got mixed and illogical results - e.g. one disk could be made a hot spare both in enclosure 1 and in a slot in enclosure 2, while doing the same with another disk would succeed in the enclosure 1 slot but not in the same slot in enclosure 2. However, using another slot in enclosure 2 might work. We had different results with all the disks.

We tried putting back two of the original 36GB disks and BINDing them - got the same error message.

By trial and error we found that we could get another BIND working by making a disk a hot spare in enclosure 0 or 1, not issuing the 'ammgr -d' for it, but simply removing it and inserting it in a slot in enclosure 2. It might then be seen as a hot spare in enclosure 2; issuing 'ammgr -d' would then clear its hot-spare status. Doing this with another disk and trying to BIND the pair would sometimes work, sometimes not. By continuing with this trial-and-error method we eventually got 4 LUNs bound. Then the last two disk slots disappeared, not displaying at all in the amdsp listing, nor showing up in cstm. We rebooted with a full system and FC60 power-down, after which the two slots were recognised as UNASSIGNED. The BIND failed again, so we repeated the make/unmake hot-spare trick, which needed two attempts before the final BIND got underway. The rest of the exercise went okay after that.

We are now faced with the next stage: replacing 12 x 18GB disks with 12 x 36GB disks, using the 10 disks removed in Stage 1 plus 2 new disks. Obviously we are hesitant to try until we have some explanation of this strange behaviour. We have placed a call with HP, and an engineer has captured logs for analysis.

Other observations, just to cover everything: the FC60 battery reported CRITICAL after the first time we powered down; it was replaced at the end of our change.
Our amdsp output before the change showed all LUNs except one as being owned by controller B, although a display from long ago showed that the LUNs had originally been spread evenly across both the A and B controllers.
Karl-Heinz Topfer