
shmem allocation question

 
Jan Slezak
Occasional Advisor

shmem allocation question

Hello all,

One of our DBAs wants to trace the memory dump of a crashed Informix database. The crash occurred on a production system, and he wants to examine the dump on a test system. The dump file is about 25G and can be loaded into memory by an Informix utility called xtrace, but the load fails, for what I think are obvious reasons (tusc output):

read(3, "e0b7\001\01 \006ffffffff\0\0\0\0".., 656) ............... = 656
shmget(0, 0xfffc0000, IPC_CREAT|IPC_EXCL|0600) ................... = 7831558
shmat(7831558, 0xc000000003418000, 0) ............................ ERR#22 EINVAL
shmat(7831558, NULL, 0) .......................................... = 0xc000000002bac000

xtrace is trying to create a 4G shmem region (0xfffc0000) and then to attach it at the address the region is expected to start at (0xc000000003418000). According to the man page for shmat, however, this cannot succeed here: the shmem region did not exist before, so the OS first has to assign a starting address itself (the 2nd shmat call, with a NULL addr field, received 0xc000000002bac000). Since the wanted and the assigned starting addresses do not match, the memory dump cannot be examined; the utility apparently cannot handle the offset.
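For illustration, a minimal C sketch (not the actual xtrace code, just the same syscall pattern as in the trace above; the size and address values are the ones quoted there) showing a fixed-address shmat() failing with EINVAL while a NULL-address shmat() lets the kernel choose:

#include <stdio.h>
#include <stdint.h>
#include <errno.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    size_t size = 0xfffc0000;                 /* ~4G, as in the trace */
    void  *want = (void *)(uintptr_t)0xc000000003418000ULL;

    /* key 0 is IPC_PRIVATE, matching shmget(0, ...) in the trace */
    int id = shmget(IPC_PRIVATE, size, IPC_CREAT | IPC_EXCL | 0600);
    if (id == -1) {
        perror("shmget");
        return 1;
    }

    /* 1st attempt: attach at the fixed address the dump expects.
     * If that address is unavailable (or unsuitable), this fails
     * with EINVAL, exactly as in the tusc output. */
    void *addr = shmat(id, want, 0);
    if (addr == (void *)-1)
        fprintf(stderr, "shmat(%p): %s\n", want, strerror(errno));

    /* 2nd attempt: NULL lets the kernel pick the attach address. */
    addr = shmat(id, NULL, 0);
    if (addr != (void *)-1)
        printf("kernel placed the segment at %p\n", addr);
    return 0;
}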

Since there is about an 800M gap between the wanted (0xc000000003418000) and the assigned (0xc000000002bac000) starting addresses, my immediate idea was to create a fake 800M shared memory region before running xtrace, so that the desired and assigned addresses would eventually match.
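A minimal sketch of that padding idea in C (it assumes the kernel hands out shared attach addresses sequentially; the gap is computed from the two addresses quoted above):

#include <stdio.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    /* Burn up the gap between the assigned and the wanted attach
     * addresses with a dummy segment, so the kernel's next chosen
     * address lands where xtrace will ask for its segment. */
    size_t gap = (size_t)(0xc000000003418000ULL - 0xc000000002bac000ULL);

    int pad = shmget(IPC_PRIVATE, gap, IPC_CREAT | 0600);
    if (pad == -1 || shmat(pad, NULL, 0) == (void *)-1) {
        perror("padding segment");
        return 1;
    }

    /* Keep the attachment alive while xtrace runs; a real helper
     * would exec xtrace here instead of just pausing. */
    pause();
    return 0;
}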

This worked for 2 of the 7 segments to be allocated in total; the others get mapped to higher addresses (there is about an 8G offset between the 2nd and 3rd 4G segments: the 3rd should start at 0xc000000203398000 but actually starts at 0xc0000002e353c00). Is there any way to tell the OS to allocate shmem contiguously? Many thanks for any ideas.

Regards, Jan
4 REPLIES
Don Morris_1
Honored Contributor

Re: shmem allocation question

Is this PA or IPF? On PA, I can't think of any way to reliably get what you want; there's no option to place discrete SysV segments contiguously in the shared address space (and no guarantee that sufficient contiguous address space is available anyway).

On IPF, however, do chatr +as mpas xtrace and let it put the segments where it wants within its own address space. [MPAS is allowed to specify a target virtual address on shmat().] This is about the only thing I can think of that would give reliable results.
Jan Slezak
Occasional Advisor

Re: shmem allocation question

It's IA64, of course; sorry for not mentioning that. I already tried chatr +as mpas, but it resulted in the shmem segments being mapped into a different virtual memory area than desired. Anyway, thanks for mentioning it.
Dennis Handly
Acclaimed Contributor

Re: shmem allocation question

Any reason why the tool xtrace relies on the address being mapped?

gdb just reads the core file and applies a fixed mapping to every access.
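A hypothetical sketch of that fixed-mapping idea in C (the base address is the one from the trace above; dump_offset() is an illustrative helper, not anything gdb or xtrace actually exposes):

#include <stdio.h>
#include <stdint.h>

/* If the dump covers the original virtual range [base, base+len),
 * any address in that range can be served straight from the file
 * at a computed offset; no need to re-attach at the old address. */
static long dump_offset(uint64_t vaddr, uint64_t base, long file_start)
{
    return file_start + (long)(vaddr - base);
}

int main(void)
{
    uint64_t base = 0xc000000003418000ULL; /* original attach address */
    uint64_t want = base + 0x1000;         /* some address to look up */

    printf("read the dump at file offset %ld\n",
           dump_offset(want, base, 0));
    return 0;
}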

BTW, is this a core file? Or a system panic dump?
Jan Slezak
Occasional Advisor

Re: shmem allocation question

I personally think it does not; however, the DBA insisted on replicating the issue on the test box, including the exact locations of the shmem segments in virtual memory. I finally succeeded in creating fake shmem regions that allowed xtrace to place its segments at the desired virtual addresses, but still no luck; the dump itself may be corrupt. The funny thing is that xtrace now returns 0 (at least it allocates all the segments at the desired virtual addresses) even though it does not succeed in tracing the dump.

The 'file' command doesn't identify it as a core file, and it is not a system panic dump either. According to the information I have from the DBA, the dump was generated by the crashing Informix instance, not by the system the way a core file would be. I think this is rather a question for IBM support, as the memory mapping doesn't seem to be the root cause of all this.