Speed of writing dump file and DUMPSTYLE settings
09-06-2006 10:01 AM
Now that disks are larger, what is the consensus on writing a selective dump versus a full dump? I believe selective dumps may not write the contents of reserved memory. Is this important?
What is the typical speed (MB/s) for writing to a dump file? Let's assume an MSA1000 with 15K rpm disks.
I was looking at the sizing algorithm for the SYSDUMP.DMP file, and the size recommended for a compressed dump appears to be 2/3 of the full dump size. On a DS10, AS4100, or GS140, is a dump slower when compressing the data versus writing more data but using less CPU power?
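The 2/3 sizing ratio mentioned above can be sketched as follows. This is a rough illustration only; the exact sizing algorithm involves additional terms (such as error-log buffer space), and the memory figures here are made up:

```python
# Rough dump-file sizing sketch. Assumption: the "compressed dump needs
# roughly 2/3 of the full dump size" rule quoted in the question.
# OpenVMS dump files are measured in 512-byte blocks.

def full_dump_blocks(mem_mb: int) -> int:
    """A full (physical) dump needs roughly all of memory, in 512-byte blocks."""
    return mem_mb * 1024 * 1024 // 512

def compressed_dump_blocks(mem_mb: int) -> int:
    """The sizing rule discussed here recommends about 2/3 of the full size."""
    return full_dump_blocks(mem_mb) * 2 // 3

for mem in (1024, 4096, 32768):  # 1 GB, 4 GB, 32 GB of memory, illustrative
    print(mem, full_dump_blocks(mem), compressed_dump_blocks(mem))
```

Note that, as the accepted reply points out, a compressed *selective* dump can come in well under even this 2/3 figure.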
Thanks
Cass
Solved! Go to Solution.
09-06-2006 10:57 AM
Re: Speed of writing dump file and DUMPSTYLE settings
Presuming that this is still true, even with 4Gb FC, you have to judge whether your priority is minimal outage time in the event of a crash, or completeness of the dump file.
You CAN have a compressed selective dump MUCH smaller than 2/3 the size of memory. Remember what goes into a selective dump, and in what order: the system page table, the global page table, the important parts of system space, then global pages, and finally complete processes.
Things that do NOT get saved include XQP cache lines and some other objects that are rarely needed for analysis.
As each logical memory block is written, the bugcheck code compares the size of the next logical memory block with the remaining space in the file, and saves it if there is enough room. Each process is a complete logical memory block. If there is not enough space remaining, that process does not get saved; this continues until there are no processes left that are small enough to fit into the last free chunk of the dump file.
If you have a small file, this could mean a lot of processes not saved, a much shorter dumpfile write time, and an earlier start to the reboot. On the other hand, an uncompressed 32Gb dumpfile could take a long time to write...
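The fill strategy described above amounts to a greedy in-order fit. A minimal sketch, with hypothetical block sizes (the real bugcheck code works on logical memory blocks, each process being one such block):

```python
# Sketch of the selective-dump fill strategy described above.
# Block sizes are illustrative, in arbitrary units.

def select_blocks(block_sizes, capacity):
    """Save each block in order if it still fits in the remaining space;
    skip blocks that don't fit, and keep going until nothing fits."""
    saved, remaining = [], capacity
    for size in block_sizes:
        if size <= remaining:
            saved.append(size)
            remaining -= size
    return saved, remaining

# System-space blocks come first, then per-process blocks; a large process
# near the end may be skipped while smaller ones still make it in:
saved, left = select_blocks([500, 300, 120, 400, 80], capacity=1000)
print(saved, left)  # the 400-unit process doesn't fit, but the 80-unit one does
```

This is why a too-small dump file loses whole processes rather than truncating them mid-block.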
You choose what is best for your systems.
I do not have any figures on dumpfile write speed. Is this hypothetical dump being written by a single standalone node with exclusive access to the target disk? Or is another node in the cluster trying to do a shadow set merge while the dumpfile is being written to the same disk...?
In practice, in a SAN it is a good idea to do dump-off-system-disk (DOSD) to a disk physically local to each node. A few years ago I reported a dumpfile write problem to engineering.
If you have a SAN with dual HBAs and dual controllers for each LUN, you have 4 paths to each logical disk. If your dumpfile is on a three-member shadow set, you have a total of 12 logical paths to the various members.
The problem is that the path to use is chosen by the console, which can only know about a maximum of 4 paths. It was quite common to find that the console tried all 4 paths it knew about without finding a single valid path, so the attempt to write a dump was aborted...
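The path arithmetic behind that failure mode is simple (numbers taken from the scenario above):

```python
# Dual HBAs x dual controllers give 4 paths per LUN; a three-member
# shadow set multiplies that by 3.
hbas, controllers, members = 2, 2, 3
paths_per_lun = hbas * controllers      # 4 paths to each logical disk
total_paths = paths_per_lun * members   # 12 logical paths to shadow members

# The console, however, knows about at most 4 paths, so in the worst case
# all 4 paths it tries can be invalid ones, and the dump write is aborted.
print(paths_per_lun, total_paths)
```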
Hence the advice to use a local SCSI disk for a DOSD dumpfile. The disk does not need to be mounted by VMS.
In other words, how long is your particular piece of string... :-)
09-06-2006 02:22 PM
Solution
Anything that can reduce the amount of data physically written to the disk will certainly speed up the dumping process; the CPU overhead is considered to be negligible (this, again, comes directly from the guy who spends his life optimizing crash dump creation.)
-- Rob (VMS Engineering)
09-06-2006 05:36 PM
Re: Speed of writing dump file and DUMPSTYLE settings
> Hence the advice to use a local SCSI disk for a DOSD dumpfile. The disk does not need to be mounted by VMS.
During startup, the DOSD disk should be mounted (in SYCONFIG or SYLOGICALS) with a logical name CLUE$DOSD_DEVICE pointing to it. Otherwise, you will not get the CLUE file and the entry in CLUE$HISTORY.
There was a discussion about dumpfile write speed in ITRC quite some time ago. I would encourage anybody who gets a dump and can time the dump write operation to document the necessary data for reference.
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=861610
Volker.
09-07-2006 03:55 AM
Re: Speed of writing dump file and DUMPSTYLE settings
> pertinent information was NOT written, we'd like to hear about it.
I HAVE seen a crash where pertinent information was NOT saved, and the event had to be written off as unresolvable.
It was a few years ago, while I was working for the UK support centre, so it may have been fixed since then. Certainly there will not be any evidence preserved for investigation now.
IIRC, the crash revolved around invalid data found in an XQP cache line. At least, that's what the listing said, but since it was a selective dump the cache lines were not saved, so we could never identify what was wrong with that cache line.
I would expect that this situation has not changed; the XQP cache can consume quite a lot of memory, and implicitly a fair chunk of dumpfile space.
Has there been any consideration given to enhancing the compression algorithm? Modern CPUs could easily do more compressing in the time taken to write the compressed data. Provided that a suitable lossless algorithm with a sufficiently higher compression ratio can be found, it would only require a means to identify to the relevant tools which algorithm to use when accessing the file.
09-25-2006 10:50 AM
Re: Speed of writing dump file and DUMPSTYLE settings
Regards,
Cass