06-28-2007 07:26 PM
Availability of MC/SG CLI for Unix/Linux monitoring host not part of cluster
I monitor our Unix servers with Nagios.
Because the DB/application admins of one of our customers want to be notified by SMS of any unscheduled cluster event,
I wrote a small Perl plug-in that does little more than parse the output of "cmviewconf" to populate a hash with all the cluster packages as keys and their respective primary nodes as values.
A second parse of "cmviewcl -l package" then populates another hash that collects each package's current state, status and node as hashref values.
The actual check is then a simple comparative map loop over the keys (i.e. the packages) of both hashes.
If something deviates, a Nagios notification is sent to our SMS gateway for every contact that has to be notified.
This all works very well.
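For the record, the core of the plug-in is no more than something like the sketch below (simplified; the parse patterns and field positions are only illustrative placeholders, not the exact cmviewconf / cmviewcl output formats):

#!/usr/bin/perl
# Simplified sketch of the check logic; the two parse loops use placeholder
# patterns, the real plug-in matches the actual command output formats.
use strict;
use warnings;

my (%primary, %current);

# 1) cluster packages and their configured primary nodes (from cmviewconf)
my $pkg;
for (qx(cmviewconf)) {
    $pkg = $1 if /^\s*package name:\s*(\S+)/i;        # placeholder pattern
    $primary{$pkg} = $1
        if defined $pkg and not exists $primary{$pkg}
           and /^\s*node name:\s*(\S+)/i;             # first listed node = primary
}

# 2) current status, state and node of every package
for (qx(cmviewcl -l package)) {
    my @f = split;                        # e.g. PACKAGE STATUS STATE ... NODE
    next unless @f >= 4 and exists $primary{$f[0]};
    $current{$f[0]} = { status => $f[1], state => $f[2], node => $f[-1] };
}

# 3) compare: anything that is not "up" on its primary node gets reported
my @problems = map {
    my $c = $current{$_};
    (!$c or $c->{status} ne 'up' or $c->{node} ne $primary{$_})
        ? ($c ? "$_ is $c->{status} on $c->{node}" : "$_ not reported")
        : ();
} sort keys %primary;

if (@problems) {
    print 'SG CRITICAL: ', join('; ', @problems), "\n";
    exit 2;   # Nagios then notifies the contacts via our SMS gateway
}
print "SG OK: all packages up on their primary nodes\n";
exit 0;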
A small drawback, however, is that the cm* commands need to be executed on one of the cluster nodes.
For now I therefore resort to having my plug-in defined as an NRPE command and executed by the inetd-spawned nrpe daemon.
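In case it is of any use, the NRPE wiring is nothing special (command name and paths below are just placeholders from my setup):

# nrpe.cfg on the cluster node
command[check_sg_packages]=/usr/local/nagios/libexec/check_sg_packages.pl

# and on the Nagios server, the service's check command boils down to
check_nrpe -H clusternode1 -c check_sg_packages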
However, I would much prefer to run the monitoring MC/SG cm* commands directly from my Nagios server, without the detour via NRPE.
I know that HP once offered a free (cost-wise) MC/SG manager for Linux.
What I didn't like about it was that, if my memory serves me correctly, it only seemed to offer a GUI.
What I would like instead is a CLI that lets me run the cm* commands from a Unix/Linux shell on any host that is not a node of any SG cluster.
Has anyone come across something like that?
Or does SG offer a usable API for tinkering up one's own CLI?
Regards
Ralph
06-29-2007 12:58 AM
Re: Availability of MC/SG CLI for Unix/Linux monitoring host not part of cluster
Serviceguard's authentication system prevents unauthorized systems from using the hacl-cfg/udp port to access SG commands.
Yes, Serviceguard Manager exists, but as you state, it doesn't offer a CLI that would enable mail/notification of undesirable states.
Serviceguard 11.16 is the first version to support role-based access.
The cluster configuration file shows it this way:
# Access Control Policy Parameters.
#
# Three entries set the access control policy for the cluster:
# First line must be USER_NAME, second USER_HOST, and third USER_ROLE.
# Enter a value after each.
#
# 1. USER_NAME can either be ANY_USER, or a maximum of
# 8 login names from the /etc/passwd file on user host.
# The following special characters are NOT supported for USER_NAME
# ' ', '/', '\', '*'
# 2. USER_HOST is where the user can issue Serviceguard commands.
# If using Serviceguard Manager, it is the COM server.
# Choose one of these three values: ANY_SERVICEGUARD_NODE, or
# (any) CLUSTER_MEMBER_NODE, or a specific node. For node,
# use the official hostname from domain name server, and not
# an IP address or a fully qualified name.
# 3. USER_ROLE must be one of these three values:
# * MONITOR: read-only capabilities for the cluster and packages
# * PACKAGE_ADMIN: MONITOR, plus administrative commands for packages
# in the cluster
# * FULL_ADMIN: MONITOR and PACKAGE_ADMIN plus the administrative
# commands for the cluster.
#
# Access control policy does not set a role for configuration
# capability. To configure, a user must log on to one of the
# cluster's nodes as root (UID=0). Access control
# policy cannot limit root users' access.
#
# MONITOR and FULL_ADMIN can only be set in the cluster configuration file,
# and they apply to the entire cluster. PACKAGE_ADMIN can be set in the
# cluster or a package configuration file. If set in the cluster
# configuration file, PACKAGE_ADMIN applies to all configured packages.
# If set in a package configuration file, PACKAGE_ADMIN applies to that
# package only.
#
# Conflicting or redundant policies will cause an error while applying
# the configuration, and stop the process. The maximum number of access
# policies that can be configured in the cluster is 200.
#
# Example: to configure a role for user john from node noir to
# administer a cluster and all its packages, enter:
# USER_NAME john
# USER_HOST noir
# USER_ROLE FULL_ADMIN
#USER_NAME john
#USER_HOST CLUSTER_MEMBER_NODE
#USER_ROLE MONITOR
That said, any remote system may be granted monitor rights, provided the remote system has Serviceguard installed (ANY_SERVICEGUARD_NODE) and has a NIC on the same subnet as the hostnames of the monitored cluster.
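So, for example, granting a dedicated monitoring account read-only access from any Serviceguard node would look like this in the cluster configuration file (the user name is of course just an example):

USER_NAME nagios
USER_HOST ANY_SERVICEGUARD_NODE
USER_ROLE MONITOR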
Lastly, there is always the option of running the command over ssh:
ssh <cluster-node> cmviewcl -f line
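If the Nagios server can ssh to a cluster node as such a monitoring user, the plug-in could collect its data that way instead of via NRPE; a rough sketch (node name, user and the exact key=value layout of the -f line output are assumptions on my part):

use strict;
use warnings;

# run cmviewcl on a cluster node over ssh; node and user are placeholders
my @lines = qx(ssh nagios\@clusternode1 /usr/sbin/cmviewcl -f line -l package);
die "ssh/cmviewcl failed\n" if $? != 0;

# -f line prints machine-readable key=value records, roughly of the form
# "package:pkg1|status=up"; collect them per package
my %pkg;
for (@lines) {
    $pkg{$1}{$2} = $3 if /^package:([^|]+)\|([^=]+)=(.*)/;
}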
06-29-2007 01:44 AM
Re: Availability of MC/SG CLI for Unix/Linux monitoring host not part of cluster
I have already made use of role-based access for monitoring:
$ cmviewconf|grep -E 'user (name|host|role):'|head -3
user name: nagios
user host: CLUSTER_MEMBER_NODE
user role: monitor
However, my Nagios server is neither part of any SG cluster, nor is it situated in the same LAN segment.
Well, OK, I can live with the current NRPE solution, which works fine.
I am also considering setting up another distributed Nagios server on one of this cluster's nodes and having it send its check results via send_nsca to the main Nagios server, which would treat them as passive checks.
I have already set up another distributed Nagios server to monitor an entire firewalled LAN this way, and that also works well.
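The glue for that is just send_nsca's tab-separated input format; roughly (host, service and paths below are placeholders):

# on the distributed Nagios instance / cluster node
# fields: <host_name> TAB <svc_description> TAB <return_code> TAB <plugin_output>
printf 'clusternode1\tSG_packages\t2\tSG CRITICAL: pkg1 down\n' | \
    send_nsca -H central-nagios -c /usr/local/nagios/etc/send_nsca.cfg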
06-29-2007 05:22 AM
Re: Availability of MC/SG CLI for Unix/Linux monitoring host not part of cluster
I have not used it on HP-UX or Linux, but you might look into WBEM. There is a Serviceguard WBEM provider that could be used to get the information you want. I wish I could offer more help on it, but I have not had the time to read up on it. I have used WMI (Microsoft's WBEM implementation) in the past and found it really useful for querying machines on the network. I am not sure how much time you have either, but here is a starting point:
http://h71028.www7.hp.com/enterprise/cache/9928-0-0-225-121.html
06-30-2007 05:11 AM
Re: Availability of MC/SG CLI for Unix/Linux monitoring host not part of cluster
If you look at EMS (the Event Monitoring Service), you will see that it can monitor packages, nodes and other resources of the cluster. An EMS request can execute an OS command, send an SNMP trap, or write a message to a logfile. So if your SMS alarms are triggered via email, use EMS to send an email when certain cluster events happen.
Good luck
07-01-2007 08:01 PM
Re: Availability of MC/SG CLI for Unix/Linux monitoring host not part of cluster
WBEM really sounds interesting.
On the other hand, I have just skimmed the preface of the developer's guide and read that C++ knowledge and a compiler are required
in order to make use of the HP-supplied WBEM SDK.
Since I lack both, I think I had better stick with my current simple Perl solution.
Also, the required effort and complexity seem way beyond those of NRPE.
Maybe someday someone will come up with a Perl WBEM API, which would make it more tangible for non-developers like me?
As there already is quite a good Perl SOAP toolkit in SOAP::Lite, I can't really see why providing a WBEM API should be that far-fetched.
Or are there licensing issues,
and WBEM isn't an open standard?
Besides, I would also like to know whether one can use the HP implementation of WBEM without purchasing a license.
I still have to read the WBEM docs to learn more about it.
07-01-2007 08:19 PM
Re: Availability of MC/SG CLI for Unix/Linux monitoring host not part of cluster
Yes, I think I could set up my Nagios server to catch EMS-generated SNMP traps.
I think these would also be handled as passive checks.
At least there should be a section on that in the docs.
So far we haven't configured this cluster to make use of EMS monitoring services.
Of course one cannot compare the EMS monitors with my primitive Nagios plugins.
But at least for now I already have the syslog parsed by check_log2.pl (a Perl plugin that ships in the contrib branch) on all my NRPE nodes.
In the HP-UX syslog files I simply watch for messages of facility kern, which show up tagged as vmunix, and use that as the pattern to generate a warning, because such messages are almost always an indication of some problem ahead.
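The corresponding NRPE command is essentially just the following (paths are from my HP-UX nodes; the check_log2.pl options are the ones I use from the contrib plugin, so treat them as an example):

# nrpe.cfg on the HP-UX node
command[check_syslog]=/usr/local/nagios/libexec/check_log2.pl -l /var/adm/syslog/syslog.log -s /var/tmp/check_syslog.seed -p vmunix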
I know that one can configure EMS to send mails on certain events.
It's just that they don't comply with the (admittedly pretty low) requirements on what constitutes valid Nagios plugin output.
I must check whether there are configuration options to format the EMS mails according to Nagios' requirements...
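Whatever ends up processing those EMS mails would ultimately have to write a passive check result into the Nagios command file, along these lines (host, service and the command file path are examples from my setup):

# format: [timestamp] PROCESS_SERVICE_CHECK_RESULT;<host>;<service>;<return_code>;<plugin_output>
now=$(perl -e 'print time')
echo "[$now] PROCESS_SERVICE_CHECK_RESULT;clusternode1;SG_packages;2;SG CRITICAL: pkg1 down" \
    >> /usr/local/nagios/var/rw/nagios.cmd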