Community Home > Servers and Operating Systems > Operating Systems > Operating System - HP-UX > Core dump created at operator new[]
05-13-2013 05:58 AM - edited 06-03-2015 05:08 AM
Core dump created at operator new[]
Hi All,
We have a C++ application that is core dumping at an operator new[] call. Below is a sample stack trace from the core:
#3  0xc000000000599530:0 in std::terminate+0x50 () from /usr/lib/hpux64/libCsup.so.1
    No symbol table info available.
#4  0xc0000000005a8da0:0 in __cxa_throw+0x450 () from /usr/lib/hpux64/libCsup.so.1
    No symbol table info available.
#5  0xc0000000005cb7c0:0 in operator new[]+0x1a0 () from /usr/lib/hpux64/libCsup.so.1
    No symbol table info available.
#6  0x40000000001a4b20:0 in FmsBufferIO::WriteEventSeqToBuffFiles+0x1360 ()
    No symbol table info available.
#7  0x40000000001a35f0:0 in FmsBufferIO::Insert+0xb0 ()
    No symbol table info available.
#8  0x400000000019bcf0:0 in FmsEventBufferDb::Insert+0xd0 (
We were able to reproduce this issue in the lab when the system had insufficient memory. In the production environment, however, TOP and GLANCE show that the core dump happens even when there is plenty of free memory (more than 40 GB). See the snapshots below from the time of the core dumps:
-rw-------  1 fms  users  1081694120 May 10 04:34 core_201305100434
-rw-------  1 fms  users  1081628584 May 10 05:01 core_201305100501
-rw-------  1 fms  users  1081694120 May 10 05:40 core_201305100540

Available memory in the system during this time:

DATE AND TIME               GBL_MEM_UTIL  GBL_MEM_FREE
05/10/2013 - 04:34:00  MEM  63.2          47.0gb
05/10/2013 - 04:34:02  MEM  63.2          47.0gb
05/10/2013 - 04:34:04  MEM  63.2          47.0gb
05/10/2013 - 04:34:06  MEM  63.2          47.0gb
05/10/2013 - 04:34:08  MEM  63.2          47.0gb
05/10/2013 - 05:01:00  MEM  62.5          47.8gb
05/10/2013 - 05:01:02  MEM  62.5          47.8gb
05/10/2013 - 05:01:04  MEM  62.5          47.8gb
05/10/2013 - 05:01:06  MEM  62.5          47.8gb
05/10/2013 - 05:01:08  MEM  62.5          47.8gb
05/10/2013 - 05:01:10  MEM  62.5          47.8gb
05/10/2013 - 05:40:00  MEM  62.4          48.0gb
05/10/2013 - 05:40:02  MEM  62.4          48.0gb
05/10/2013 - 05:40:04  MEM  62.4          48.0gb
05/10/2013 - 05:40:06  MEM  62.4          48.0gb
05/10/2013 - 05:40:08  MEM  62.4          48.0gb
05/10/2013 - 05:40:10  MEM  62.4          48.0gb
We suspect that some HP-UX kernel parameter is preventing memory from being allocated to user processes on the production system. The question is: which parameter?
OS Information:
sysname = HP-UX
nodename = ptxha205
release = B.11.23
version = U (unlimited-user license)
machine = ia64
idnumber = 3366364911
vmunix _release_version:
@(#) $Revision: vmunix: B11.23_LR FLAVOR=perf Fri Aug 29 22:35:38 PDT 2003 $
P.S. This thread has been moved from HP-UX > General to HP-UX > Languages. - HP Forums moderator
Shelendra Agarwal
Business Solutions/CMS
RTBSS-CentralView
HP-India
+91-9945056319
shelendra.agarwal@hp.com
- Tags:
- kernel parms
05-13-2013 09:35 AM - edited 05-13-2013 10:58 PM
Re: operator new[] throws std::bad_alloc
You are either out of swap or you are exceeding maxdsiz or maxdsiz_64bit.
You have a 64 bit application, so it is the latter.
05-13-2013 07:48 PM
Re: operator new[] throws std::bad_alloc
Thanks for the answer.
Swap utilization is 44%.
Memory utilization is 63%.
maxdsiz is 1 GB and maxdsiz_64bit is 4 GB.
There is a chance the process needs more than 1 GB of memory because of the large amount of data kept in memory. Let me try increasing maxdsiz to 2 GB and see whether the process still core dumps. Will keep you posted.
Shelendra Agarwal
Business Solutions/CMS
RTBSS-CentralView
HP-India
+91-9945056319
shelendra.agarwal@hp.com
05-13-2013 07:48 PM
Re: operator new[] throws std::bad_alloc
Hi,
Below is the list of kernel parameters; could you please suggest whether any changes are needed?
Output of: /usr/sbin/kctune
Tunable Value Expression Changes
NSTREVENT 50 Default
NSTRPUSH 16 Default
NSTRSCHED 0 Default
STRCTLSZ 1024 Default
STRMSGSZ 0 Default
acctresume 4 Default
acctsuspend 2 Default
aio_listio_max 256 Default Immed
aio_max_ops 4096 4096 Immed
aio_monitor_run_sec 30 Default Immed
aio_physmem_pct 10 Default
aio_prio_delta_max 20 Default Immed
aio_proc_thread_pct 70 Default Immed
aio_proc_threads 1024 Default Immed
aio_req_per_thread 1 Default Immed
allocate_fs_swapmap 0 Default
alwaysdump 0 Default Immed
chanq_hash_locks 256 Default
core_addshmem_read 0 Default Immed
core_addshmem_write 0 Default Immed
create_fastlinks 0 Default
dbc_max_pct 5 5 Immed
dbc_min_pct 1 1 Immed
default_disk_ir 0 Default
desfree_pct 0 Default Auto
disksort_seconds 0 Default
dma32_pool_size 268435456 Default
dmp_pathswitch_blks_shift 10 Default
dmp_rootdev_is_vol 0 Default
dmp_swapdev_is_vol 0 Default
dnlc_hash_locks 512 Default
dontdump 0 Default Immed
dst 1 Default
dump_compress_on 1 Default Immed
enable_idds 0 Default Immed
eqmemsize 15 Default
executable_stack 0 0 Immed
fr_statemax 800000 Default Immed
fr_tcpidletimeout 86400 Default Immed
fs_async 0 Default
fs_symlinks 20 Default Immed
ftable_hash_locks 64 Default
gvid_no_claim_dev 0 Default Immed
hires_timeout_enable 0 Default Immed
hp_hfs_mtra_enabled 1 Default
intr_strobe_ics_pct 100 Default Auto
io_ports_hash_locks 64 Default
ioforw_timeout 0 Default Auto
ipf_icmp6_passthru 0 Default Immed
is_vxtrace_enabled 1 Default
ksi_alloc_max 33600 Default Immed
ksi_send_max 32 Default
lotsfree_pct 0 Default Auto
max_acct_file_size 2560000 Default Immed
max_async_ports 4096 4096
max_mem_window 0 Default
max_thread_proc 1200 1200 Immed
maxdsiz 1073741824 Default Immed
maxdsiz_64bit 4294967296 Default Immed
maxfiles 2048 Default
maxfiles_lim 4096 Default Immed
maxrsessiz 8388608 Default
maxrsessiz_64bit 8388608 Default
maxssiz 134217728 134217728 Immed
maxssiz_64bit 1073741824 1073741824 Immed
maxtsiz 100663296 Default Immed
maxtsiz_64bit 1073741824 Default Immed
maxuprc 4096 4096 Immed
maxvgs 256 256
msgmap 4098 4098
msgmax 8192 Default Immed
msgmnb 16384 Default Immed
msgmni 4096 4096
msgseg 32767 32767
msgssz 96 Default
msgtql 4096 4096
ncdnode 150 Default Immed
nclist 8292 Default
ncsize 34816 34816
nfile 65536 Default Auto
nflocks 4096 Default Auto
ninode 34816 34816
nkthread 9232 9232 Immed
nproc 4200 Default Immed
npty 60 Default
nstrpty 60 60
nstrtel 60 Default
nswapdev 10 Default
nswapfs 10 Default
nsysmap 8400 Default
nsysmap64 8400 Default
o_sync_is_o_dsync 0 Default
pa_maxssiz_32bit 83648512 Default
pa_maxssiz_64bit 536870912 Default
pagezero_daemon_enabled 1 Default Immed
pfdat_hash_locks 128 Default
physical_io_buffers 4352 Default Auto
pthread_condvar_prio_boost 0 Default Immed
region_hash_locks 128 Default
remote_nfs_swap 0 Default
rng_bitvals 9876543210 Default
rng_sleeptime 2 Default
rtsched_numpri 32 Default
sched_thread_affinity 6 Default Immed
scroll_lines 100 Default Immed
scsi_max_qdepth 8 Default Immed
scsi_maxphys 1048576 Default
secure_sid_scripts 1 Default Immed
semaem 16384 Default
semmni 4096 4096
semmns 8192 8192
semmnu 4196 4196
semmsl 2048 Default Immed
semume 100 Default
semvmx 32767 Default
sendfile_max 0 Default
shmmax 137438953472 137438953472 Immed
shmmni 512 512 Immed
shmseg 300 Default Immed
st_ats_enabled 0 Default
st_fail_overruns 0 Default
st_large_recs 0 Default
st_san_safe 0 Default Immed
streampipes 0 Default
swapmem_on 1 Default
swchunk 65536 65536
sysv_hash_locks 128 Default
tcphashsz 2048 Default
timeslice 10 Default
timezone 420 Default
unlockable_mem 0 Default
vnode_cd_hash_locks 128 Default
vnode_hash_locks 128 Default
vol_checkpt_default 10240 Default
vol_dcm_replay_size 262144 Default
vol_default_iodelay 50 Default
vol_fmr_logsz 4 Default
vol_max_bchain 32 Default
vol_max_nconfigs 20 Default
vol_max_nlogs 20 Default
vol_max_nmpool_sz 4194304 Default Immed
vol_max_prm_dgs 1024 Default
vol_max_rdback_sz 4194304 Default Immed
vol_max_vol 8388608 Default
vol_max_wrspool_sz 4194304 Default Immed
vol_maxio 1024 Default
vol_maxioctl 32768 Default
vol_maxkiocount 2048 Default
vol_maxparallelio 256 Default
vol_maxspecialio 1024 Default
vol_maxstablebufsize 256 Default
vol_min_lowmem_sz 532480 Default Immed
vol_mvr_maxround 256 Default
vol_nm_hb_timeout 10 Default
vol_rootdev_is_vol 0 Default
vol_rvio_maxpool_sz 4194304 Default Immed
vol_subdisk_num 4096 Default
vol_swapdev_is_vol 0 Default
vol_vvr_transport 1 Default
vol_vvr_use_nat 0 Default
volcvm_cluster_size 16 Default
volcvm_smartsync 1 Default
voldrl_max_drtregs 2048 Default
voldrl_min_regionsz 512 Default
voldrl_volumemax_drtregs 256 Default
voliomem_chunk_size 65536 Default
voliomem_maxpool_sz 4194304 Default
voliot_errbuf_dflt 16384 Default
voliot_iobuf_default 8192 Default
voliot_iobuf_limit 131072 Default
voliot_iobuf_max 65536 Default
voliot_max_open 32 Default
volpagemod_max_memsz 6144 Default Immed
volraid_rsrtransmax 1 Default
vps_ceiling 64 64
vps_chatr_ceiling 1048576 Default
vps_pagesize 4 Default
vx_era_nthreads 5 Default
vx_maxlink 32767 Default
vx_ninode 0 Default Immed
vxfs_bc_bufhwm 0 Default Immed
vxfs_ifree_timelag 0 Default Immed
vxtask_max_monitors 32 Default
Regards,
Sanjib.
05-13-2013 11:01 PM
Re: operator new[] throws std::bad_alloc
>there are chances that the process may need more than 1 GB memory allocation due to large amount of data
Instead you need to increase maxdsiz_64bit. Try 8 GB.
>the list of kernel parameters, could you please suggest if we need any changes.
Right now only maxdsiz_64bit.
05-15-2013 12:00 AM
Re: operator new[] throws std::bad_alloc
Hi,
I changed both of the parameters below:
maxdsiz 2147483648 2147483648 Immed
maxdsiz_64bit 8589934592 8589934592 Immed
but the program is not able to allocate more than 919 MB of memory and crashes at that point.
0 pts/3 3567 group2 128 20 919M 285M sleep 0:00 10.36 1.44 memoryleak.exe
0 pts/3 3567 group2 128 20 919M 636M sleep 0:00 8.77 2.89 memoryleak.exe
Is there any parameter restricting the program from allocating more than 919 MB? Do we need to change any other parameter so that it can allocate more memory?
Regards,
Sanjib.
05-15-2013 04:49 AM
Re: operator new[] throws std::bad_alloc
>Is there any parameter which is restricting the program to allocate more than 919 MB
Well, there is the ulimit command that will restrict the amount of space. What does "ulimit -a" show?
But typically that only affects 32 bit processes.
I suppose if there is a 64 bit process that calls setrlimit(2), then it or any children will not be able to allocate more than that.
05-15-2013 09:34 AM
Re: operator new[] throws std::bad_alloc
Hi,
Below is the output of ulimit.
ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 2097152
stack(kbytes) 131072
memory(kbytes) unlimited
coredump(blocks) 4194303
nofiles(descriptors) 2048
My kernel parameter settings are now as follows:
maxdsiz 2147483648 2147483648 Immed
maxdsiz_64bit 8589934592 8589934592 Immed
I built my program in two separate bit formats and executed each; I observed the following:
file test-32.exe
test-32.exe: ELF-32 executable object file - IA64
When I executed this program, it crashed at 912 MB.
4 pts/0 19030 group2 128 20 912M 292M sleep 0:01 16.86 4.37 test-32.exe
4 pts/0 19030 group2 128 20 912M 496M sleep 0:01 10.81 4.57 test-32.exe
file test-64.exe
test-64.exe: ELF-64 executable object file - IA64
When I executed this program, it crashed at around an 8 GB memory size.
8 pts/0 24541 group2 128 20 8398M 1076M run 0:02 61.23 8.53 test-64.exe
8 pts/0 24541 group2 128 20 8398M 2125M run 0:04 75.08 16.61 test-64.exe
8 pts/0 24541 group2 128 20 8398M 2136M sleep 0:04 52.34 17.25 test-64.exe
If I understand correctly, the parameter maxdsiz is meant for 32-bit programs and maxdsiz_64bit for 64-bit programs, but maxdsiz does not seem to be working as expected.
Please let me know if you have any further recommendations.
Regards,
Sanjib.
05-15-2013 10:11 AM - edited 05-15-2013 10:11 AM
Re: operator new[] throws std::bad_alloc
>I build my program in two separate bit format
(You need to mention this up front. Your initial example was 64 bit and some of your replies now indicate you switched.)
>file test-32.exe: When I executed this program, it crashed at 912M.
>looks like the parameter maxdsiz is not working as expected.
Everything is working fine. Unless you link your 32 bit app with the right set of options, the max data area you can get is 1 GB. 1 GB is used for text, the other 2 GB is used for shared memory/shlibs.
Since you have a 64 bit executable, you should only use that one.
05-21-2013 11:40 PM
Re: operator new[] throws std::bad_alloc
Hi,
My executable is 64 bit.
memory-test-64.exe: ELF-64 executable object file - IA64
I have set the kernel parameters as follows, i.e. maxdsiz to 1 GB and maxdsiz_64bit to 4 GB:
fms@ptxha119:/erm/detection/sys> /usr/sbin/kctune | grep maxd
maxdsiz 1073741824 Default Immed
maxdsiz_64bit 4294967296 Default Immed
But when I run the program, it crashes at around the 1 GB memory limit.
fms@ptxha119:/erm/detection/marcoP> more top_output1.txt|grep memory
0 pts/2 9720 fms 138 20 1055M 582M sleep 0:00 7.26 2.14 memory-test-64.exe
0 pts/2 9720 fms 128 20 1055M 1014M sleep 0:01 5.80 3.56 memory-test-64.exe
Do I need to change any other parameter? Please advise.
Thanks in advance.
Regards,
Sanjib.