
Switch device path in Tru64

 
SOLVED
Christof Schoeman
Frequent Advisor

Switch device path in Tru64

Hi

I know I've seen this question posted in the forums before, but for the life of me I can't find the thread. Pardon me for asking it again.

Is there something for Tru64 similar to the OpenVMS command $ SET DEVICE /SWITCH /PATH=xxx ?

In other words, is it possible to force a single device (or subset of devices) to fail over to an alternate FC path?

We're running V5.1B PK4, connected to EMC storage.
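
In case it's useful, this is roughly how I'm looking at the paths at the moment; the -did value below is just an example ID, not one from our actual system:

    # list the SCSI devices and how many valid paths each one has
    hwmgr -show scsi

    # show the individual paths behind one device, using its
    # hardware ID (DID) from the listing above
    hwmgr -show scsi -did 42 -full

What I can't see is any way to push the I/O for a device onto a specific one of those paths.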

Hope you can help.
3 REPLIES
Han Pilmeyer
Esteemed Contributor

Re: Switch device path in Tru64

No, there is no such command. Multi-pathing is completely transparent on Tru64 UNIX.

There are ways to fail over a LUN on a storage array, but since you have EMC storage that doesn't apply to you.

Could you elaborate on why you would want to do this?
Christof Schoeman
Frequent Advisor

Re: Switch device path in Tru64

Thanks for your reply.

This is all to do with load balancing.

Tru64 apparently does automatic load balancing, but only within a RAD. To quote Han Pilmeyer (hope he doesn't mind) - "On NUMA based systems we will only load balance between adapters within the same NUMA boundaries (RAD). Only if there is no path in the RAD will we go out to other RAD's in the same system..."

If I then have, say, a Marvel, with two HBAs connected to different CPUs (thus different RADs), am I correct in assuming that Tru64 will not load balance between the two?

If that is the case, wouldn't it help to force some disks to use another path?
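
For what it's worth, this is how I've been checking which RAD/PCI box each HBA actually sits under; the output layout varies by platform, so take it as a rough sketch:

    # show the hardware hierarchy, i.e. which PCI buses (and HBAs)
    # hang off which CPU/memory building block
    hwmgr -view hierarchy

    # map hardware IDs to device special files
    hwmgr -view devices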

Han Pilmeyer
Esteemed Contributor
Solution

Re: Switch device path in Tru64

Looks like something I wrote yesterday in a private e-mail. ;-)

Typically we would still load balance, as not all of your I/O would be initiated from one CPU. However, if you do have a single-stream application that does all its I/O on a single CPU, then you could benefit by placing both HBAs in the same PCI box. But you also have to consider the number of LUNs and the number of ports that you're using on the EMC array. Just throwing more HBAs at it may not help.

There are also certain kernel parameters that could play a role (e.g. sched_distance should be set to 2 or 3, depending on the type of Marvel you have).
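
As a rough sketch of how you would check and adjust such a parameter (the subsystem that owns sched_distance isn't named above, so the subsystem name below is a placeholder you would need to confirm on your own system):

    # find which loaded subsystem carries the attribute
    for s in `/sbin/sysconfig -s | awk -F: '{print $1}'`
    do
        /sbin/sysconfig -q $s 2>/dev/null | grep -q sched_distance && echo $s
    done

    # query the current value (replace "subsystem" with the name found above)
    /sbin/sysconfig -q subsystem sched_distance

    # change it at run time, if the attribute allows runtime reconfiguration
    /sbin/sysconfig -r subsystem sched_distance=2

    # to make it persistent, put a stanza like this in a file ...
    #   subsystem:
    #       sched_distance = 2
    # ... and merge it into /etc/sysconfigtab:
    sysconfigdb -a -f /tmp/sched.stanza subsystem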