02-12-2007 01:14 AM
Best Practice for Large Memory Systems - Crash Dumps
Question: Is there really a need for a full dump to analyse the most common HP-UX problems?
Our swap/dump config is as follows:
vg00 - small 4GB swap
vg01 - large swap 1 - 36GB
vg02 - large swap 2 - 74GB
/dev/vg02/swap / dump defaults 0 0
Are there any "best practices" for dump configuration that HP suggests for effectively capturing crash info?
Thanks.
02-12-2007 01:24 AM
Re: Best Practice for Large Memory Systems - Crash Dumps
docs.hp.com/en/5991-2881/5991-2881.pdf
02-12-2007 01:30 AM
Re: Best Practice for Large Memory Systems - Crash Dumps
You may want to create a separate dump lvol on a large cache-disk LUN and turn off the savecrash steps in /etc/rc.config.d so the reboot proceeds at normal speed. Then transfer the crash dump manually at a later time.
Bill Hassell, sysadmin
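A sketch of the rc change Bill describes, assuming the stock 11i /etc/rc.config.d/savecrash variable names (verify the exact variables in the file on your own system):

```shell
# /etc/rc.config.d/savecrash -- skip the boot-time dump copy so the
# system comes back up at normal speed; run savecrash by hand later.
SAVECRASH=0                    # 0 = do not run savecrash at boot;
                               # the dump stays in the dump lvol
SAVECRASH_DIR=/var/adm/crash   # where a later manual savecrash writes

# Later, at a quiet time, copy the dump out manually:
#   /sbin/savecrash -v /var/adm/crash
```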
02-12-2007 01:38 AM
Re: Best Practice for Large Memory Systems - Crash Dumps
For the most common problems, a selective dump only needs:
+ Kernel text/static data
+ Kernel dynamic data in use
+ User-space kernel thread stacks (UAREA)
Kernel dynamic memory that is free-and-cached (the Super Page Pool) is only needed when there's a problem in the SPP itself [which is pretty rare] or memory corruption happens to hit the SPP [rare, unpredictable in general... and you're quite likely to hit higher-level caches anyway]. User data is very rarely needed (and most users don't want HP support reading their applications' private data anyway, for security reasons [it could be classified, customer-sensitive, etc.]).
This should be the default configuration for crashconf already:
# crashconf
Crash dump configuration has been changed since boot.
CLASS PAGES INCLUDED IN DUMP DESCRIPTION
-------- ---------- ---------------- -------------------------------------
UNUSED 847833 no, by default unused pages
USERPG 2456409 no, by default user process pages
BCACHE 356147 no, by default buffer cache pages
KCODE 11214 no, by default kernel code pages
USTACK 1537 yes, by default user process stacks
FSDATA 132 yes, by default file system metadata
KDDATA 495684 yes, by default kernel dynamic data
KSDATA 7170 yes, by default kernel static data
SUPERPG 9920 no, by default unused kernel super pages
Total pages on system: 4186046
Total pages included in dump: 504523
Dump compressed: ON
Dump Parallel: ON
DEVICE OFFSET(kB) SIZE (kB) LOGICAL VOL. NAME
------------ ---------- ---------- ------------ -------------------------
1:0x00000e 2612064 33554432 64:0x000002 /dev/vg00/lvol2
----------
33554432
[That's an 11.31 system; you don't say what version you have -- I believe 11.23 is reasonably equivalent, while 11.11 is not as powerful in its dump options.]
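To see why the selective classes matter on a large-memory box, plug the page counts from the crashconf output above into a quick size estimate (assuming the HP-UX base page size of 4 KiB; the on-disk dump is smaller still with compression on):

```python
# Rough dump-size estimate from the crashconf output above,
# assuming the HP-UX base page size of 4 KiB.
PAGE_KIB = 4

total_pages = 4186046   # "Total pages on system"
dump_pages = 504523     # "Total pages included in dump"

ram_gib = total_pages * PAGE_KIB / 1024**2
dump_gib = dump_pages * PAGE_KIB / 1024**2

print(f"RAM: {ram_gib:.1f} GiB, selective dump: {dump_gib:.1f} GiB "
      f"({100 * dump_pages / total_pages:.0f}% of memory), before compression")
```

So on this ~16 GiB system the selective dump is under 2 GiB, roughly 12% of memory, which is why a full dump is rarely worth the dump and savecrash time.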
02-12-2007 01:57 AM
Re: Best Practice for Large Memory Systems - Crash Dumps
If your systems are 11iv1, then installing the compressed dump option can speed up dump time in some scenarios:
http://h20293.www2.hp.com/portal/swdepot/displayProductInfo.do?productNumber=CDUMP11i
IIRC this is a standard feature in 11iv2
HTH
Duncan
I am an HPE Employee

02-12-2007 03:19 AM
Re: Best Practice for Large Memory Systems - Crash Dumps
Another thing we do is create a separate /var/adm/crash file system (so /var doesn't fill up).
I usually set that to 16GB.
Rgds...Geoff
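A hedged sketch of setting that up, assuming HP-UX LVM with a VxFS filesystem and a volume group vg01 with 16 GB free (the vg and lvol names are illustrative, not from the thread):

```shell
# Carve a 16 GB logical volume for crash dumps (vg01 and the
# lvol name "crashdump" are examples -- adjust to your layout).
lvcreate -L 16384 -n crashdump vg01

# Put a VxFS filesystem on the raw device and mount it where
# savecrash writes by default.
newfs -F vxfs /dev/vg01/rcrashdump
mkdir -p /var/adm/crash
mount /dev/vg01/crashdump /var/adm/crash

# /etc/fstab entry so it persists across reboots:
# /dev/vg01/crashdump  /var/adm/crash  vxfs  delaylog  0  2
```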
02-12-2007 03:45 AM
Re: Best Practice for Large Memory Systems - Crash Dumps
The only issue is that it needs 5 working processors to run, and a minimum of 2GB of RAM.