Operating System - Tru64 Unix

2 node cluster DR test

SOLVED
Timothy Weinand
Occasional Visitor

2 node cluster DR test

I am attempting to simulate a DR scenario with a 2-node cluster. I removed the hosts from my working storage group, created a new storage group with a couple of disks, and added the hosts to it. I have a local disk, so I was able to boot into Tru64. I could see the new disks, so I labelled and partitioned them, then restored each node's root, the cluster root, /usr, and /var partitions from tape. However, I am unable to boot from the console. When I try to boot, I get the error "failed to open dga100.1001.0.11.0". I no longer see that drive at the console, and my bootdef_dev variable is blank.
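(For reference, the restore step described above is typically done with vrestore against a vdump save-set; the device and mount-point names below are illustrative assumptions, not taken from the post.)

```
# Boot from the local Tru64 disk, label/partition the SAN disks with
# disklabel(8), then restore each file system from tape, for example:
mount /dev/disk/dsk10a /mnt              # mount the newly created root partition
vrestore -x -f /dev/tape/tape0 -D /mnt   # extract the vdump image into it
```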

Anybody know what's going on?
5 REPLIES
Steven Schweda
Honored Contributor

Re: 2 node cluster DR test

> [...] I have a local disk [...]

So there are some non-local disks involved?

> [...] I was able to boot to Tru64.

On what? What are these systems?

> I saw the disks, [...]

How, exactly? What, exactly, did you see?

As usual, showing actual commands with their
actual output can be more helpful than vague
descriptions and interpretations.

> When I try to boot, [...]

How, exactly?

> I get the error "failed to open
> dga100.1001.0.11.0".

>>>show device

> [...] I no longer see that drive and my
> bootdef_dev variable is blank.

Did you set bootdef_dev before?

Did you ever see this disk from the console,
or only from Tru64? It's nice that Tru64
can find this device, but booting from it
requires that the console be able to find it,
too, and the console may be less clever than
Tru64.
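A minimal check from the SRM console would look something like this (the device name is taken from the error message above; setting bootdef_dev only makes sense once the device actually appears):

```
P00>>> show device                          # does dga100.1001.0.11.0 appear at all?
P00>>> show bootdef_dev                     # reportedly blank at the moment
P00>>> set bootdef_dev dga100.1001.0.11.0   # only after the device is visible
```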

Did you do any WWIDMGR stuff from the
console? (My equipment is too stupid to know
what WWIDMGR is, so I know nothing, but my
casual reading suggests that it has some
relevance to non-local disks.)
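For what it's worth, on Fibre Channel attached Alphas the usual console-side sequence is roughly the following (a sketch; the UDID value 100 is an assumption based on the dga100 name):

```
P00>>> wwidmgr -show wwid               # list the WWIDs the console can reach
P00>>> wwidmgr -quickset -udid 100      # register unit 100 (creates dga100.*)
P00>>> init                             # re-init so show device picks it up
```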
Timothy Weinand
Occasional Visitor

Re: 2 node cluster DR test

Thanks for the reply.

For more information, there is a local disk that has a Tru64 installation. This was the base disk I used to originally create the cluster. The non-local disks are on an EMC SAN. When I boot with the local disk, I can see the non-local disks in hwmgr and I can label them and partition them.

I can see the devices at the console after executing wwidmgr -quickset. The issue is that I don't see a dga100 device at the console and I don't know why it's looking for it.

In doing more research, I've found that I will need to do more than just restore the data, as there are cnx partitions on the cluster member boot disks that are not restored by vrestore.
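(If memory serves, TruCluster provides clu_bdmgr(8) for recreating the CNX partition on a restored member boot disk; a hedged sketch, where the disk name and member ID are examples only:)

```
# Recreate the CNX (h) partition data on a restored member boot disk.
# dsk10 and member ID 1 are placeholders for your actual values.
clu_bdmgr -c dsk10 1
```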
Rob Leadbeater
Honored Contributor
Solution

Re: 2 node cluster DR test

Hi,

As Steven requested, can you post the actual screen output that you're getting?

If you've been removing and adding SAN storage devices, it's possible that the WWID stuff has got confused.

Probably best to clear everything out before trying another quickset.

P00>>> wwidmgr -clear all

Another thing to check is that you've got boot_reset set to ON, which is mandatory for clustered systems.
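Putting those suggestions together, the console sequence would look roughly like this (standard SRM/wwidmgr usage; the UDID value is an assumption):

```
P00>>> wwidmgr -clear all           # forget stale WWID/port mappings
P00>>> init                         # re-initialize the console
P00>>> wwidmgr -quickset -udid 100  # re-register the boot unit
P00>>> init
P00>>> set boot_reset on            # mandatory for clustered systems
P00>>> show device                  # confirm dga100.* is now visible
```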

Cheers,

Rob
Timothy Weinand
Occasional Visitor

Re: 2 node cluster DR test

Thanks for the replies, everyone. We determined that it's best to rebuild the cluster from the local disk, and that's the plan I intend to use when this is deployed for real.
Timothy Weinand
Occasional Visitor

Re: 2 node cluster DR test

Rebuilding cluster from scratch