Operating System - HP-UX
08-07-2006 08:08 PM
GP adviser syntax
Hello,
Since I haven't discovered any other way to get at certain kernel stats (short of doing some HP-UX systems programming myself), I now have to rely on MWA (OpenView, or whatever the current naming convention is).
I found the easiest access method to be running glance in adviser-only mode and providing it with an appropriate adviser syntax file.
I started with something as simple as fetching the gbl_mem_*_util stats and rrdgraphing them, which works satisfactorily.
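For reference, the adviser file I started with is essentially just a PRINT statement; a minimal sketch (the file name is made up, and the flags are my reading of glance(1), so double-check them on your system):

```
PRINT GBL_MEM_UTIL, " ", GBL_MEM_SYS_UTIL, " ", GBL_MEM_USER_UTIL
```

invoked along the lines of `glance -adviser_only -syntax memutil.adv -j 60 -iterations 2`, with the output piped into the rrdtool update script.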
However, what I am really interested in, since this box seems to suffer from frequent disk I/O hogs (due to poor database storage allocation design?), are metrics such as lv_read_rate and lv_write_rate, or, since the database implementers preferred cooked over raw I/O, fs_phys_read_rate, fs_phys_write_rate, and fs_phys_io_rate.
Now I wonder how to write my adviser syntax for these.
From what I have read in glance's short online help, it looks as though one is required to FS LOOP or LV LOOP over all filesystems or LVs and fetch the above-mentioned rates inside an IF block that tests for the wanted FS_DIRNAME or LV_DIRNAME.
Is this the only way to access the data, or is there a more direct method, given that I know lv_dirname beforehand?
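To make concrete what I have in mind (a sketch only; /u01/oradata is a made-up mount point, and the exact loop/IF syntax should be checked against the adviser syntax section of the glance documentation):

```
FS LOOP
{
   IF FS_DIRNAME == "/u01/oradata" THEN
      PRINT FS_DIRNAME, " ", FS_PHYS_READ_RATE, " ",
            FS_PHYS_WRITE_RATE, " ", FS_PHYS_IO_RATE
}
```

Looping over every filesystem just to pick out one known dirname feels wasteful, hence my question about a more direct lookup.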
I am also not sure which metrics would better reflect the true load.
The metrics.txt explanations state that the lv_* metrics account for physical I/O (as opposed to logical I/O, which goes through the buffer cache).
Currently the problem is that the dbf filesystems are VxFS on cluster shared volumes, and no special VxFS mount options were defined in their package control scripts (sadly), whereas I think one should rather have used options that prevent use of the buffer cache.
So for now I guess we are paying performance penalties owing to unnecessary buffer cache usage.
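What I would have expected in the package control scripts is something along these lines (the device and mount point are placeholders; mincache/convosync are the options I believe bypass the buffer cache for regular file I/O, though that needs checking against the VxFS docs, and I believe they require the OnlineJFS license):

```
mount -F vxfs -o mincache=direct,convosync=direct /dev/vgdb/lvol1 /u01/oradata
```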
I probably also ought to rrdgraph gbl_mem_cache_hit_pct to get a clearer picture of buffer cache efficiency.
Finally, I am also uncertain about suitable interval selection.
Do I get somewhat meaningful results if I only fetch those metrics after 2 glance -adviser_only iterations, restarted every 5 minutes (the query interval of the Munin server)?
Or would it be better to run glance in -bootup mode?
Hm, but then I guess this would require parsing some output file, which isn't quite as elegant.
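One idea I had for the two-iteration approach: the adviser syntax apparently allows simple user variables, so the first, possibly unrepresentative sample could be suppressed inside the syntax file itself, something like this (untested):

```
samples = samples + 1
IF samples > 1 THEN
   PRINT GBL_MEM_UTIL, " ", GBL_MEM_CACHE_HIT_PCT
```

That way only the second interval's values would ever reach Munin.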
Hopefully that's not too many questions.
Ralph
Madness, thy name is system administration