Operating System - OpenVMS

IO AUTOCONFIG cannot find a new disk

 
Bob Niepostyn
Occasional Contributor

IO AUTOCONFIG cannot find a new disk

G’day All,

A new fibre disk has been created and presented to the VMS server (Alpha, OpenVMS 8.3), but “mcr sysman io autoconfig” cannot find it.

Background:
Autoconfig worked the first time but not the second. The sequence of events was as follows (a rough DCL sketch of these steps appears after the list):
1. Disk $1$DGA2021 has been mounted as a shadow member of DSA202
2. Our storage team created LUN 2022
3. Autoconfig found $1$DGA2022
4. Disk $1$DGA2022: has been added to shadow DSA202
5. After shadow copy completed $1$DGA2021 has been removed from DSA202
6. Storage team reclaimed $1$DGA2021 (i.e. it has been destroyed).
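
For reference, the replacement in steps 1-5 maps roughly onto DCL along these lines (a sketch only; the volume label DATA202 and the /SYSTEM qualifier are assumptions, and DSA202 is assumed to be already mounted):

$ mcr sysman io autoconfigure /log                    ! step 3: configure the new LUN as $1$DGA2022
$ mount /system DSA202: /shadow=$1$DGA2022: DATA202   ! step 4: add the new member to the shadow set
$ show shadow DSA202                                  ! watch the shadow copy until it completes
$ dismount $1$DGA2021:                                ! step 5: drop the old member from DSA202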

“Show device $1$DGA2021” gives status “Online”.

All of the above worked without any problems, and I wanted to do the same trick and replace the existing $1$DGA2031 with a new $1$DGA2032 on DSA203. However, after our storage team created LUN 2032, autoconfig cannot find it (and it reports no errors).

Additional info:
Device $1$DGA2021 shows errors and
“SDA> clue memory/stat” shows “Dead Page Table Scans 112”.
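That figure came from SDA run against the live system, roughly as follows:

$ analyze/system               ! invoke SDA on the running system
SDA> clue memory/statistics    ! memory-management statistics, including dead page table scans
SDA> exit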

Hypothesis:
The old, destroyed disk $1$DGA2021 causes some condition that prevents autoconfig from completing.
I know that you cannot remove a disk from VMS, but can you somehow “set it offline”?

PS. I had a similar error condition some time ago, and it was resolved after a reboot.

I would greatly appreciate any help.

Regards, Bob Niepostyn.
Lester Dreckow
Advisor

Re: IO AUTOCONFIG cannot find a new disk

Bob,

I've had no trouble reusing an os_unit_id presented by an EVA 4000 since I learnt about scsi_path_verify ...

http://forums13.itrc.hp.com/service/forums/questionanswer.do?threadId=981292

So I always do

$ mc sysman io scsi_path_verify
$ mc sysman io autoconfigure /log

Regards,
Lester
Hoff
Honored Contributor

Re: IO AUTOCONFIG cannot find a new disk

Best guess: the storage team has (had?) a problem with the creation or with the zoning, or you're going to want to follow the FC SAN configuration sequence here:

http://labs.hoffmanlabs.com/node/786 <-console
http://labs.hoffmanlabs.com/node/1222 <-SYSMAN

Whether it's the former or the latter remains to be determined.

This "$1$DGA2021 shows errors" tells us nothing about the errors. Sometimes errors are normal.

That you're even looking at the CLUE memory statistics here implies you know more than you might be including here; is this Alpha box severely under-configured? (I would not normally expect to get from Fibre Channel SAN system discussions over to the page tables; barring rogue buffer writes or such, memory and I/O are two rather distinct and separate subsystems within OpenVMS.)

The scans would imply that the working sets are constrained, or the system is having problems fitting applications into the working sets, or the memory is constrained; have you (also) been seeing RWMPB states or such on this box? (And RWMPB usually isn't related directly to the SAN, it's usually a memory-related or paging-related or working set constraint.)

Load and run T4 and monitor your usage or (if you already know the box is constrained, as could be inferred) look at increasing the working sets and adding more physical memory and/or at off-loading the box.
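
If you want a quick first look before setting up T4, standard DCL along these lines will show whether memory or working sets are the real constraint (nothing here is specific to your box):

$ show memory       ! physical memory, pagefile, and pool usage
$ monitor states    ! counts of processes per scheduling state (MWAIT covers resource waits)
$ monitor page      ! page fault rates
$ show system       ! per-process states; RWMPB shows up here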
Bob Niepostyn
Occasional Contributor

Re: IO AUTOCONFIG cannot find a new disk

Lester,
Thank you very much.
Command
mcr sysman io scsi_path_verify
is exactly what I needed and it fixed the problem.
The old disk $1$DGA2021 status went from "Online" to "Offline", autoconfig found the new disk $1$DGA2032.
It is interesting, though, that autoconfig failed without showing any warnings.
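
For anyone who hits this later, the whole fix was roughly (from a suitably privileged account):

$ mcr sysman io scsi_path_verify      ! re-checks existing paths; the dead $1$DGA2021 went Offline
$ mcr sysman io autoconfigure /log    ! then found and configured the new $1$DGA2032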

Hoff,
I could not get access to your links, even after creating an account, but thanks anyway; they will be useful for the future.

Regards, Bob.
Robert Atkinson
Respected Contributor

Re: IO AUTOCONFIG cannot find a new disk

This has been a problem that's troubled us over the years. Although I suspected it was something to do with the reuse of a LUN, I could never quite get to the bottom of it.

It's great to know there's a workaround for this.

Cheers, Rob.