10-17-2003 12:44 AM
9iRAC HMP and HyperFabric failover options in a SG Cluster
The issue / question(s):
We want to use HMP rather than UDP/IP for the RAC instances, since it has performance advantages and we want to take full advantage of Cache Fusion. I've heard (maybe a rumor) that when running HMP IPC over HyperFabric hardware, local network interface failover is not available. Our desired state is to have both A6386A cards active on each node, so that if one were to fail, its traffic would simply be routed to the second A6386A in the same node - using HMP rather than UDP/IP.
In the HyperFabric manual ("Installing and Administering HyperFabric", p. 114), the section "How HyperFabric Handles Adapter Failures" indicates that if a node has two adapters, both are active, and that if one adapter fails, its network traffic is switched to the other active HyperFabric adapter in the node. I'm wondering whether a MAJOR detail was left out - does this ONLY work in UDP/IP mode?
Am I to blindly assume that when 9iRAC uses HMP, the Oracle instance on a node can withstand the failure of ONE HyperFabric card - i.e. that the instance should not 'hang'? Obviously I can test by pulling the fibre cable, which provides some level of confidence. And if both HyperFabric cards become unavailable, ServiceGuard would then indicate a package failure, since we would have configured the HyperFabric as a hardware SG package dependency.
If anyone has detailed setup and configuration information, and/or technical documents that talk specifically about HMP/9iRAC configuration settings, options for CLIC local failover without impacting the existing Oracle instance, availability, and performance, it would be greatly appreciated.
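For context, on the ServiceGuard side we are thinking of a package service that watches the HyperFabric cards, so that losing BOTH adapters fails the package over. Roughly the fragment below (hf_monitor.sh is a hypothetical local script of ours, not an HP tool):

# Fragment of the legacy package control script (pkg.cntl) - sketch only.
# hf_monitor.sh would exit non-zero once no HyperFabric adapter on this
# node is usable, causing the package to halt and fail over.
SERVICE_NAME[0]="hf_monitor"
SERVICE_CMD[0]="/usr/local/bin/hf_monitor.sh"
SERVICE_RESTART[0]=""        # no restarts - a failed service halts the package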
10-18-2003 04:07 AM
Re: 9iRAC HMP and HyperFabric failover options in a SG Cluster
I'm afraid I don't know HyperFabric well enough to answer your questions properly - I would, however, offer the following advice:
1) The current general-release version of HyperFabric does NOT allow local card failover when using the HMP protocol.
2) That said, I understand that HP has a patch in beta to fix this - you should talk to your local HP rep and have them contact the 'HP/Oracle Cooperative Technology Center' (this is their website, though you may not be able to get in: http://www.hporaclectc.com ). They may be able to give you a release date for this patch.
HTH
Duncan
I am an HPE Employee
03-16-2004 12:19 AM
Re: 9iRAC HMP and HyperFabric failover options in a SG Cluster
This release, B.11.11.03, will ONLY be available via the web at http://www.software.hp.com.
The software will be under 'internet ready and networking' with the title 'HyperFabric for HP-UX 11i v1.0 with Local Failover for HMP'. The user must configure local failover using the steps given in the "HyperFabric Administrator's Guide" (title includes: "HP-UX 11i v1, includes Transparent Local Failover for HMP").
See this link:
http://docs.hp.com/hpux/netcom/index.html#HyperFabric
Customers ordering the new Hyperfabric2 hardware must obtain the new software off the web to use the HMP transparent local failover features.
Features include:
1. Link failure detection and automatic failover.
2. Route failure detection and automatic failover.
3. Support for Large User Count, max 20K per node (max 4 cards per node).
4. SAM support for HA enhancements. (excluding OLAR)
5. Support for platforms: rp54xx, rp74xx, rp84xx, SuperDome.
The HA enhancements enable HMP to be used in a production Oracle RAC environment, and the increased Large User Count enables larger Oracle RAC installations. With the new HyperFabric software, customers using Oracle RAC 9i with HMP on Hyperfabric2 hardware (A6386A, A6384A, A6388A) will be able to use the new local failover feature: a HyperFabric resource (adapter, cable, switch, or switch port) can fail in a cluster, and HMP will now transparently fail traffic over to the other available resources.
REF: http://techcom.cup.hp.com/dir_hyperfabric/hfint_home.htm
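As a sanity check, you can see which HyperFabric software a node currently has installed with swlist before and after loading B.11.11.03 (the grep is deliberately loose, since the exact product/bundle names vary by depot):

# List installed SD-UX products and pick out the HyperFabric / CLIC pieces;
# after installing the new depot the revision should show B.11.11.03
swlist -l product | grep -i -e fabric -e clic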
04-21-2004 12:32 AM
Re: 9iRAC HMP and HyperFabric failover options in a SG Cluster
To enable HMP:
make -f ins_rdbms.mk rac_on ipc_hms ioracle
To enable UDP:
make -f ins_rdbms.mk rac_on ipc_udp ioracle
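For anyone following along, the relink is run from $ORACLE_HOME/rdbms/lib with the instance on that node shut down; something like the sketch below (the ORACLE_HOME path is only an example):

# Relink the 9i RAC binaries to use HMP for IPC - run as the oracle user,
# with the instance on this node shut down first
export ORACLE_HOME=/opt/oracle/product/9.2.0    # example path - adjust for your install
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_on ipc_hms ioracle

# To switch back to UDP/IP:
# make -f ins_rdbms.mk rac_on ipc_udp ioracle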
The file /opt/clic/lib/skgxp/skclic.conf contains HMP configuration parameters that are relevant to Oracle:
CLIC_ATTR_APPL_MAX_PROCS = maximum number of Oracle processes.
CLIC_ATTR_APPL_MAX_NQS = being obsoleted; set to the same value as CLIC_ATTR_APPL_MAX_PROCS.
CLIC_ATTR_APPL_MAX_MEM_EPTS = maximum number of buffer descriptors (5000).
CLIC_ATTR_APPL_MAX_RECV_EPTS = maximum number of Oracle ports; set equal to CLIC_ATTR_APPL_MAX_PROCS.
CLIC_ATTR_APPL_DEFLT_PROC_SENDS = maximum number of outstanding sends (1024).
CLIC_ATTR_APPL_DEFLT_NQ_RECVS = maximum number of outstanding receives on a port (1024).
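Purely for illustration, here is a hypothetical set of values for a modest instance that keeps the relationships above (MAX_NQS and MAX_RECV_EPTS tracking MAX_PROCS). The assignment and comment syntax here is assumed, so check it against the copy shipped in /opt/clic/lib/skgxp/skclic.conf:

# Hypothetical skclic.conf values - a sketch, not shipped defaults
CLIC_ATTR_APPL_MAX_PROCS = 512          # max Oracle processes on the node
CLIC_ATTR_APPL_MAX_NQS = 512            # being obsoleted; keep equal to MAX_PROCS
CLIC_ATTR_APPL_MAX_MEM_EPTS = 5000      # max buffer descriptors
CLIC_ATTR_APPL_MAX_RECV_EPTS = 512      # max Oracle ports = MAX_PROCS
CLIC_ATTR_APPL_DEFLT_PROC_SENDS = 1024  # max outstanding sends per process
CLIC_ATTR_APPL_DEFLT_NQ_RECVS = 1024    # max outstanding receives on a port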