11-22-2011 09:07 AM
Oracle 10g in a HP-UX SRP system type container
Hello,
I set up a Serviceguard cluster with SRP containers as cluster resources. We want to consolidate a set of database servers from multiple systems into these containers. The containers and the cluster are running fine; we have 6 SRP containers per cluster node (it's a two-node cluster, so 12 Oracle DBs in total). I aligned PRM to the needs of the current systems on the physical nodes and set the kernel and ndd parameters.
Our problem is that, according to the DBA, we can't open enough connections to the listener, which runs on TCP port 1521. In this document:
http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA0-2630ENW.pdf
it states that
"If another Oracle listener already exists on the host (e.g. in another SRP compartment), make sure to change the default entry of the port of Oracle listener during the parameter input phase; it cannot be equal to any other port used on the system."
System: HP-UX 11i v3 with the latest patch bundle (Sept 2011)
SRP version: 3.01
1.) Has anybody ever met this limitation? What's the reason for it? Each container has a separate IP address, so they have separate TCP/UDP port spaces. The IP addresses are bound like a "simple" virtual IP to the lan0 interface (lan0:1, lan0:2, and so on); see the check sketched after these questions.
2.) If you have any good practices for such an environment, don't hesitate to share them with me. E.g., what kernel parameters should I set on the physical hosts to have enough resources for each container?
3.) How can I dynamically change /etc/prmconfig to give a fairly good resource share to every container in case of a package failover?
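For reference, this is roughly how I check the aliases and the listener bindings (a sketch; the interface name and port are from our setup and the output details may differ elsewhere):

    # show the IP aliases configured on lan0 (one per container)
    netstat -in | grep lan0
    # confirm each listener is bound to its own container IP on port 1521
    # rather than to the wildcard address (*.1521)
    netstat -an | grep 1521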
Regards,
Viktor
Unix operates with beer.
Tags: Oracle
11-30-2011 06:06 PM
Re: Oracle 10g in a HP-UX SRP system type container
1.) Has anybody ever met this limitation? What's the reason for it? Each container has a separate IP address, so they have separate TCP/UDP port spaces. The IP addresses are bound like a "simple" virtual IP to the lan0 interface (lan0:1, lan0:2, and so on).
This limitation does not exist with Containers (SRP) v3. Multiple containers with their own IP addresses can listen independently on the same port, as you have noted. You might be hitting another resource limit.
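A few places worth checking when connections run out (a sketch; these are generic starting points rather than values from your system):

    # system-wide listen queue depth for pending TCP connections
    ndd -get /dev/tcp tcp_conn_request_max
    # kernel limits commonly exhausted by many Oracle connections
    kctune nproc
    kctune maxuprc
    kctune maxfiles
    kctune nkthread
    # also check the Oracle side: the PROCESSES and SESSIONS init
    # parameters of each instance, and QUEUESIZE in listener.ora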
2.) If you have any good practices for such an environment, don't hesitate to share them with me. E.g., what kernel parameters should I set on the physical hosts to have enough resources for each container?
The Containers administration guide has best-practice information that is worth reviewing. Containers itself does not consume resources of any significance beyond the actual workload you are running, so you just need to set the various kernel limits (processes, etc.) high enough for your aggregate workload, the same as if you were not using containers. No special kernel tuning is required for containers. There are a couple of kernel behavior settings, but these are set for you and do not need adjustment.
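For example, sizing the limits for the aggregate of all containers on a node might look like this (the tunable values are placeholders to illustrate the idea, not recommendations; some tunables only take effect after a reboot):

    # cover all 12 instances a node could host after a failover,
    # not just the 6 that normally run there
    kctune nproc=8192
    kctune maxuprc=4096
    kctune nkthread=16384
    kctune maxfiles_lim=4096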
3.) How can I dynamically change /etc/prmconfig to give a fairly good resource share to every container in case of a package failover?
Most likely you will not have to change the PRM configuration settings if you use FSS rather than dedicated cores (PSET).
If you are using share-based allocation (FSS) and use the defaults provided, all workloads will have access to the full CPU capacity. If at some point the system runs out of capacity, all running workloads will have equal access to the CPU. This is sufficient for most deployments, as all workloads get an equal minimum of CPU and running past full system capacity is generally a rare occurrence. You can also set caps on specific containers, and/or give a container a proportionately higher share of capacity at peak (2x share size = 2x CPU allocated, if needed).
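As a sketch, FSS records in the PRM configuration might look like the following (the group names and share counts are made up, and SRP creates the PRM groups for its containers for you; check the PRM administration guide for the exact record syntax):

    # group : PRMID : CPU shares
    OTHERS:1:10::
    srp01:2:10::
    srp02:3:10::
    # a container that should get twice the CPU under contention:
    srp03:4:20::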
When packages fail over, they get their proportional share. Say you have 12 containers, each with 10 CPU shares. At peak capacity with 6 running, each is guaranteed 1/6 of the CPU. If Serviceguard starts the other 6 as a result of a failover, each container on the system gets 1/12 (10 shares out of 120 total).
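Worked out with 10 shares per container:

    6 containers up : each guaranteed 10 / (6 * 10)  = 1/6  of total CPU
    12 containers up: each guaranteed 10 / (12 * 10) = 1/12 of total CPU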
If you want to favor the original or the failover workloads, you just adjust the share ratios accordingly. For example, if the original containers get 10 shares each and the failover containers get 20, the failover containers are each guaranteed a minimum CPU allocation twice that of any original container.
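If you do want different ratios after a failover, one approach (a sketch, not a tested recipe) is to regenerate the PRM configuration for the new container mix and re-apply it from the Serviceguard package control script once the package starts; prmconfig -i (re)initializes PRM from the configuration file:

    # in the package run script, after the container workload starts:
    # 1. rewrite the PRM configuration for the new container mix
    # 2. re-apply it (the prmconfig path may differ on your install)
    /opt/prm/bin/prmconfig -i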