SOME DOUBTS ABOUT HSV CONTROLLER FAILOVER AND OPENVMS
05-09-2007 12:54 AM
Until now, I thought that the HSV's active/passive or active/active behaviour had nothing to do with the number of HBAs in the host accessing the storage because, if controller A failed, the virtual disk would be managed by controller B, changing, on the OpenVMS host, from the path 'HBAport-hostportControllerA' to the path 'HBAport-hostportControllerB' without disruption. This behaviour would cover storage availability from the controller-failure point of view (of course, if the only HBA in the node fails there is no failover, but that is another case).
So, I was surprised when reading the 'SAN design reference guide'; rule number 7 of the EVA storage system rules says:
"EVA4000/6000/8000 supports active/active failover. EVA3000/5000 supports active/active (VCS 4.x) or active/passive (VCS 3.x) failover.
Active/active and active/passive failover requires a minimum of two Fibre Channel HBAs and native operating system or layered multipathing driver functionality. See the whitepaper listed below for exceptions."
Is this correct, or am I missing some information that clarifies this sentence? Does the term 'multipath failover' refer only to the multiple paths you get with more than one HBA?
My OpenVMS node with only one HBA sees the disk through two paths, and I think it should be considered a multipath disk, as OpenVMS is able to change from one path to the other.
By the way, does anybody have a pointer to the whitepaper referenced above, whose title is:
"Connecting Single HBA Servers to the Enterprise Virtual Array without multipathing software"?
I have looked for it but haven't found it. Perhaps reading it would clarify my doubt.
On the other hand, I have two more questions:
* I don't understand very well the behaviour of the value 'None' when defining the 'Virtual disk preferred path mode' in Command View EVA. According to the help, it means that presentation of the virtual disk alternates between the two controllers. Does this mean that, when the two controllers start, the owning controller of the virtual disks would alternate between one controller and the other? In that case, when using the 'None' option, what is the failover behaviour if the controller owning the virtual disk fails? I think it is the same as the 'failover/failback - Controller B' option, except that when controller B comes back up the virtual disk doesn't move back to it but stays with controller A; but I am not sure.
* From the OpenVMS point of view, if we define in Command View EVA a preferred path for a virtual disk, say controller A, and OpenVMS, at boot time, assigns an available path to that device through controller B, would it be advisable to execute a SET DEVICE/PATH command to specify the suitable path? Has this behaviour changed in the latest OpenVMS version?
Thank you very much in advance.
Ana
05-09-2007 02:24 AM
Re: SOME DOUBTS ABOUT HSV CONTROLLER FAILOVER AND OPENVMS
I think you can get the answer to your first question by issuing a $ SHOW DEVICE/MULTIPATH/FULL $1$DGA. The output will tell you which paths to the devices are available.
As to your second question, I don't recall ever seeing a recommendation from VMS engineering to set a preferred path on a unit or LUN on a StorageWorks array. I think the recommendation has always been to let VMS manage the path to the LUN. That recommendation does not preclude using $SET DEVICE/PATH=
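For example (the device name below is illustrative; use one of your own $1$DGA units), something like this should show every path and flag the current one; if I remember the qualifier correctly, /MULTIPATH_SET gives a one-line summary per multipath set:
$ SHOW DEVICE/FULL $1$DGA100:      ! lists all I/O paths and marks the current path
$ SHOW DEVICE/MULTIPATH_SET        ! summary of the multipath sets on the node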
Bill
05-09-2007 05:48 AM
If you have multiple virtual disks with preferred path = "none", the EVA will assign controller ownership[*] round-robin.
[*] even on active/active firmware, a single virtual disk is managed by a single controller.
I have not tried it myself, but it looks like the latest OpenVMS version can now find the 'optimized' (also called 'performance') paths to the owning controller, so you can set a path preference based on your knowledge of the I/O load and OpenVMS will honour it.
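In case it helps, switching a device to a specific path by hand looks roughly like this (device name and path string are illustrative; copy the real path from the SHOW DEVICE/FULL output):
$ ! move $1$DGA100 to the named path (local FC port . controller port WWID)
$ SET DEVICE $1$DGA100: /SWITCH /PATH=PGA0.5000-1FE1-0015-8E8D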
HP seems to have removed the 'single HBA' manual from their web pages, but I don't think it is interesting for OpenVMS users anyway. The multipath filter is part of the operating system, and it is obvious that a single adapter is a single point of failure.
05-09-2007 07:29 PM
Re: SOME DOUBTS ABOUT HSV CONTROLLER FAILOVER AND OPENVMS
I specifically asked what would happen at controller boot time if the Preferred Path Mode was set to 'None', and was told that the primary controller (the first one to boot) would assume control of all the LUNs. I'd be interested if anyone has categorical evidence to the contrary.
Because the XL range of EVA controllers (4/6/8K) is active/active, VMS can happily talk down the path to the non-owning controller. I/O requests will be passed internally over to the owning controller. Obviously this has some performance impact.
However, if the primary controller sees more than 60% of requests to a particular LUN coming down the non-owning controller, it will move the LUN over to the other controller.
Coming from an HSG80 background, I asked specific questions about load balancing, and the advice was this:
Balance the LUNs over the controllers based on known application hit rates (i.e. if all of your hot files are on 4 drives, balance these over the controllers equally), using the Preferred Path setting.
VMS will switch the paths around, so you can use the SET DEVICE/PATH command to rebalance periodically, if you wish.
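A minimal sketch of what such a periodic rebalance could look like (device names and path strings are purely illustrative; take the real ones from SHOW DEVICE/FULL first):
$ ! REBALANCE.COM - illustrative sketch: spread two hot disks over the two controllers
$ SET DEVICE $1$DGA101: /SWITCH /PATH=PGA0.5000-1FE1-0015-8E88   ! a controller A port
$ SET DEVICE $1$DGA102: /SWITCH /PATH=PGB0.5000-1FE1-0015-8E8C   ! a controller B port
$ SUBMIT/AFTER="TOMORROW+1:00" REBALANCE.COM                     ! requeue for tomorrow
$ EXIT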
There is a 'best practices' document around somewhere, which explains how to set up your EVA. To be honest, the way you physically build the cabinet is as important as how you configure the controllers.
You can check you've got it right using EVA Performance Monitor.
I'd also heard about Multipath Load Balancing being introduced. In fact, I thought it was going to be in version 8.3, so if anyone knows anything more on this, I'd also be very interested.
Rob.
05-09-2007 09:12 PM
Re: SOME DOUBTS ABOUT HSV CONTROLLER FAILOVER AND OPENVMS
The only thing still not clear is the EVA rule sentence: rule number 7 of the EVA storage system rules says: "Active/active and active/passive failover REQUIRES a minimum of two Fibre Channel HBAs...". Do you mean that it is only there to emphasize that, if you have only one HBA, you have a single point of failure from the adapter point of view, but that the failover mechanisms work well with either one or two HBAs? In other words, can we upgrade the VCS firmware to 4.x and have nodes with only one HBA?
On the other hand, with respect to Rob's comment about virtual disk ownership being assumed by the primary controller, we have several OpenVMS nodes with different versions and it seems (although I should do more tests) that it is correct: most of the disks are served by the path connecting to host port 1 of controller A. Specifically, we have set the virtual disks' preferred path to None at the EVA, and at OpenVMS level we have:
* NODE A: OpenVMS 7.3-2 with one HBA:
7 DGA disks, all through the same path (the path to port 1 of controller A).
* NODE B: OpenVMS 7.3-2 with two HBAs:
22 DGA disks:
4 disks through the PGB0 adapter (equally balanced across the two paths)
18 disks through the PGA0 adapter (equally balanced across the two paths; 9 of them through the path to port 1 of controller A).
* NODE C: OpenVMS 7.3-2 with two HBAs:
6 disks, only through PGA0 (3 disks through port 1 of controller A and 3 through port 2 of controller B).
The PGB0 adapter is not used (?).
* NODE D: OpenVMS 8.2 with one HBA:
5 disks, all through port 1 of controller A
* NODE E: OpenVMS 8.3 with one HBA:
1 disk, through port 2 of controller B.
Although we'd need more information, especially more disks on node E, two things seem apparent:
- The load is more equally balanced (within some limits) when the node has two HBAs than when it has one. Even so, the path to port 1 of controller A (which I assume is the primary controller) seems to have higher priority than the rest (see node B). It is significant that on node C, although there are two adapters, only one is used.
- Although we'd need more disks to test on node E, it seems that OpenVMS 8.3 has a better algorithm for choosing a path to a disk. At least, judging by what is seen on the other nodes, an older version would have chosen the path to port 1 of controller A instead of the port 2 of controller B currently in use.
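(In case anybody wants to reproduce this survey, here is a rough, untested DCL sketch that dumps the full path information for every DGA disk on a node; the wildcard is illustrative:)
$ ! show the path information for every $1$DGA* disk on this node
$ LOOP:
$     DEV = F$DEVICE("$1$DGA*","DISK")
$     IF DEV .EQS. "" THEN GOTO DONE
$     SHOW DEVICE/FULL 'DEV'
$     GOTO LOOP
$ DONE:
$ EXIT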
To finish, just two comments:
- The SHOW DEVICE output in OpenVMS 8.3 has two useful fields about switch times ('last switched to time' and 'last switched from time') which, I think, are filled in when either the user or the operating system itself changes path. In our case the value is 'none', so I cannot tell whether that is simply because there has been no need to change yet. I'll check closely as soon as we add another adapter to the node.
- I have read somewhere that versions prior to OpenVMS 8.3 chose the path according to the controllers' I/O load at that moment, although the I/O load could later evolve so that another choice would have been better; but I am not sure whether this is correct.
Please, I'd like you to answer my first question, and any comments or suggestions will be much appreciated.
Thanks in advance.
Ana
05-09-2007 09:40 PM
Re: SOME DOUBTS ABOUT HSV CONTROLLER FAILOVER AND OPENVMS
I've just had a read through the Course notes, and I can't see anything to qualify this statement either.
I wonder if there is something specific to one of the other supported operating systems that does not come into effect with OpenVMS?
05-10-2007 01:54 AM
Re: SOME DOUBTS ABOUT HSV CONTROLLER FAILOVER AND OPENVMS
No, there are no plans to do multipath load balancing. There *is* a notion of static path balancing at boot time (introduced in V7.3-1), but that simply spreads the current paths for the various devices across all available local paths.
We had hoped to do "concurrent" multipath (that is, using multiple paths at a time for I/O), but there aren't enough engineering resources to do that now.
-- Rob (ex-Multipath engineer)
05-14-2007 12:19 AM
Re: SOME DOUBTS ABOUT HSV CONTROLLER FAILOVER AND OPENVMS
I think that, in general, all my questions have been answered.
Thank you very much again.
Regards.
Ana