06-29-2004 02:47 PM
JVM in Tru64 5.1B
I have a WebLogic Server running on Tru64 5.1B, but it dumps every 2-3 days and stops fulfilling requests until I restart the server.
I checked the log file and found this exception:
could not allocate code space: No such file or directory,
file /vobs/JavaGroup/Products/Java/J2SDK/fastvm/srcjava/
sys/alpha/md.c, line 154
I think it may be a bug in the JVM, but I am not certain.
Could anyone give me some advice? Thanks
- Tags:
- Java
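A failure to "allocate code space" usually means the VM could not reserve more memory for compiled code within the process's limits. As a rough illustration (not a diagnosis from the original post), the 4 GB vmemory(kbytes) cap quoted later in this thread can be eaten by the committed heap plus the code-space reservation; the heap and code-space figures below are hypothetical:

```shell
# Illustrative headroom check. limit_kb comes from the ulimit -a output
# quoted later in this thread; heap_kb and code_kb are hypothetical.
limit_kb=4194304            # vmemory(kbytes) from ulimit -a
heap_kb=$((2048 * 1024))    # e.g. a -Xmx2048m heap
code_kb=$((256 * 1024))     # e.g. a JIT code-space reservation
used_kb=$((heap_kb + code_kb))
echo "headroom: $((limit_kb - used_kb)) kB of ${limit_kb} kB"
```

If the headroom reaches zero over days of JIT compilation and heap growth, allocation failures like the one above are the expected symptom.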
06-29-2004 06:24 PM
Re: JVM in Tru64 5.1B
Can you make sure that you have sufficient free space (disk and swap) on the machine?
06-29-2004 08:46 PM
Re: JVM in Tru64 5.1B
By the way, a "could not allocate code space" error is not a bug. If your system is running out of the box, without any tuning for the application, it is an administration problem ;-)
06-30-2004 01:06 PM
Re: JVM in Tru64 5.1B
My system configuration is listed below:
Processor has been on-line since 05/24/2004 14:00:14
The alpha EV6.7 (21264A) processor operates at 500 MHz,
has a cache size of 4194304 bytes,
and has an alpha internal floating point processor.
Status of processor 1 as of: 07/01/04 08:32:09
Processor has been on-line since 05/24/2004 14:00:14
The alpha EV6.7 (21264A) processor operates at 500 MHz,
has a cache size of 4194304 bytes,
and has an alpha internal floating point processor.
Status of processor 2 as of: 07/01/04 08:32:09
Processor has been on-line since 05/24/2004 14:00:14
The alpha EV6.7 (21264A) processor operates at 500 MHz,
has a cache size of 4194304 bytes,
and has an alpha internal floating point processor.
Status of processor 3 as of: 07/01/04 08:32:09
Processor has been on-line since 05/24/2004 14:00:14
The alpha EV6.7 (21264A) processor operates at 500 MHz,
has a cache size of 4194304 bytes,
and has an alpha internal floating point processor.
# vmstat -P
Total Physical Memory = 8192.00 M
= 1048576 pages
Physical Memory Clusters:
start_pfn end_pfn type size_pages / size_bytes
0 256 pal 256 / 2.00M
256 130730 os 130474 / 1019.33M
130730 131072 pal 342 / 2.67M
131072 1048562 os 917490 / 7167.89M
1048562 1048576 pal 14 / 112.00k
Physical Memory Use:
start_pfn end_pfn type size_pages / size_bytes
256 289 scavenge 33 / 264.00k
289 1104 text 815 / 6.37M
1104 1256 data 152 / 1.19M
1256 1491 bss 235 / 1.84M
1491 1695 kdebug 204 / 1.59M
1695 1702 cfgmgmt 7 / 56.00k
1702 1704 locks 2 / 16.00k
1704 1718 pmap 14 / 112.00k
1718 4244 unixtable 2526 / 19.73M
4244 4436 logs 192 / 1.50M
4436 7838 vmtables 3402 / 26.58M
7838 131072 managed 123234 / 962.77M
131072 152084 vmtables 21012 / 164.16M
152084 1048562 managed 896478 / 7003.73M
============================
Total Physical Memory Use: 1048306 / 8189.89M
Managed Pages Break Down:
free pages = 619616
active pages = 326484
inactive pages = 0
wired pages = 45819
ubc pages = 27484
==================
Total = 1019403
WIRED Pages Break Down:
vm wired pages = 5280
ubc wired pages = 0
meta data pages = 31405
malloc pages = 5619
contig pages = 1442
user ptepages = 1017
kernel ptepages = 1047
free ptepages = 9
==================
Total = 45819
# ulimit -a
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) 2929688
stack(kbytes) 131072
memory(kbytes) 8154960
coredump(blocks) unlimited
nofiles(descriptors) 4096
vmemory(kbytes) 4194304
#
#
# sysconfig -q proc
proc:
max_proc_per_user = 2048
max_threads_per_user = 2048
per_proc_stack_size = 134217728
max_per_proc_stack_size = 134217748
per_proc_data_size = 3000000000
max_per_proc_data_size = 3000000000
max_per_proc_address_space = 4294967296
per_proc_address_space = 4294967296
executable_stack = 0
autonice = 0
autonice_time = 600
autonice_penalty = 4
open_max_soft = 4096
open_max_hard = 4096
ncallout_alloc_size = 8192
round_robin_switch_rate = 0
sched_min_idle = 0
give_boost = 1
maxusers = 2048
num_wait_queues = 2048
num_timeout_hash_queues = 2048
enhanced_core_name = 0
enhanced_core_max_versions = 16
exec_disable_arg_limit = 0
dump_cores = 1
dump_setugid_cores = 0
# sysconfig -q vm
vm:
ubc_minpercent = 10
ubc_maxpercent = 100
ubc_borrowpercent = 20
vm_max_wrpgio_kluster = 32768
vm_max_rdpgio_kluster = 16384
vm_cowfaults = 4
vm_segmentation = 1
vm_ubcpagesteal = 24
vm_ubcfilemaxdirtypages = 4294967295
vm_ubcdirtypercent = 40
ubc_maxdirtywrites = 5
ubc_maxdirtymetadata_pcnt = 70
ubc_kluster_cnt = 32
vm_ubcseqstartpercent = 50
vm_ubcseqpercent = 10
vm_csubmapsize = 1048576
vm_ubcbuffers = 256
vm_syncswapbuffers = 128
vm_asyncswapbuffers = 4
vm_clustermap = 1048576
vm_clustersize = 65536
vm_syswiredpercent = 80
vm_troll_percent = 4
vm_inswappedmin = 1
vm_page_free_target = 1024
vm_page_free_swap = 522
vm_page_free_hardswap = 16384
vm_page_free_min = 20
vm_page_free_reserved = 10
vm_page_free_optimal = 522
vm_swap_eager = 1
swapdevice = /dev/disk/dsk0b
vm_page_prewrite_target = 2048
vm_ffl = 1
ubc_ffl = 1
vm_rss_maxpercent = 100
anon_rss_enforce = 0
vm_rss_block_target = 522
vm_rss_wakeup_target = 522
kernel_stack_pages = 2
vm_min_kernel_address = 18446741891866165248
malloc_percpu_cache = 1
vm_aggressive_swap = 0
new_wire_method = 1
vm_segment_cache_max = 50
gh_chunks = 0
rad_gh_regions[0] = 0
rad_gh_regions[1] = 0
rad_gh_regions[2] = 0
rad_gh_regions[3] = 0
rad_gh_regions[4] = 0
rad_gh_regions[5] = 0
rad_gh_regions[6] = 0
rad_gh_regions[7] = 0
rad_gh_regions[8] = 0
rad_gh_regions[9] = 0
rad_gh_regions[10] = 0
rad_gh_regions[11] = 0
rad_gh_regions[12] = 0
rad_gh_regions[13] = 0
rad_gh_regions[14] = 0
rad_gh_regions[15] = 0
rad_gh_regions[16] = 0
rad_gh_regions[17] = 0
rad_gh_regions[18] = 0
rad_gh_regions[19] = 0
rad_gh_regions[20] = 0
rad_gh_regions[21] = 0
rad_gh_regions[22] = 0
rad_gh_regions[23] = 0
rad_gh_regions[24] = 0
rad_gh_regions[25] = 0
rad_gh_regions[26] = 0
rad_gh_regions[27] = 0
rad_gh_regions[28] = 0
rad_gh_regions[29] = 0
rad_gh_regions[30] = 0
rad_gh_regions[31] = 0
rad_gh_regions[32] = 0
rad_gh_regions[33] = 0
rad_gh_regions[34] = 0
rad_gh_regions[35] = 0
rad_gh_regions[36] = 0
rad_gh_regions[37] = 0
rad_gh_regions[38] = 0
rad_gh_regions[39] = 0
rad_gh_regions[40] = 0
rad_gh_regions[41] = 0
rad_gh_regions[42] = 0
rad_gh_regions[43] = 0
rad_gh_regions[44] = 0
rad_gh_regions[45] = 0
rad_gh_regions[46] = 0
rad_gh_regions[47] = 0
rad_gh_regions[48] = 0
rad_gh_regions[49] = 0
rad_gh_regions[50] = 0
rad_gh_regions[51] = 0
rad_gh_regions[52] = 0
rad_gh_regions[53] = 0
rad_gh_regions[54] = 0
rad_gh_regions[55] = 0
rad_gh_regions[56] = 0
rad_gh_regions[57] = 0
rad_gh_regions[58] = 0
rad_gh_regions[59] = 0
rad_gh_regions[60] = 0
rad_gh_regions[61] = 0
rad_gh_regions[62] = 0
rad_gh_regions[63] = 0
gh_min_seg_size = 8388608
gh_fail_if_no_mem = 1
vm_bigpg_enabled = 0
vm_bigpg_anon = 64
vm_bigpg_seg = 64
vm_bigpg_shm = 64
vm_bigpg_ssm = 64
vm_bigpg_stack = 64
vm_bigpg_thresh = 6
private_cache_percent = 0
gh_keep_sorted = 0
gh_front_alloc = 1
replicate_user_text = 1
enable_yellow_zone = 0
boost_pager_priority = 0
gsm_enabled = 1
kstack_free_target = 5
I had configured the WebLogic server according to its manual, and I had asked their technical support; he advised me to ask HP about it, as it could be a problem with the JVM.
So, what can I do? :(
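One observation on the quoted limits: per_proc_data_size = 3000000000 bytes is the same ~2.8 GB cap that appears as data(kbytes) 2929688 in the ulimit -a output. A small sketch converting the byte values from the sysconfig -q proc output above into megabytes, so they are easier to compare against the JVM's footprint (the values are copied from this thread; the loop is only illustrative):

```shell
# Convert the per-process limits quoted in this thread from bytes to MB.
for pair in \
    "per_proc_data_size=3000000000" \
    "per_proc_stack_size=134217728" \
    "per_proc_address_space=4294967296"
do
    name=${pair%%=*}    # attribute name before the '='
    bytes=${pair##*=}   # value after the '='
    echo "$name = $((bytes / 1024 / 1024)) MB"
done
```

So the data segment is capped at roughly 2861 MB and the whole address space at 4096 MB; a long-running JVM that grows toward either cap would fail further allocations.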
06-30-2004 09:29 PM
Re: JVM in Tru64 5.1B
In the case of Java, make sure you are using the right swapping mode, because the virtual machine preallocates its memory statically rather than dynamically. But BEA must "support" this method; support here means they approve it and will stand behind their application in case of trouble.
So a good starting point is to ask BEA about reference machines at customer sites, and what kind of tuning is necessary when running WebLogic with Java.
We cannot make any suggestions for Java without knowing what the application vendor supports or suggests for their application.
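The swapping-mode point can be made concrete. With eager allocation (vm_swap_eager = 1, as in the sysconfig -q vm output quoted above), Tru64 reserves backing store for every anonymous page when it is allocated, so swap must be sized to cover the full committed footprint, not just what actually pages out. A rough sizing sketch; the JVM count, heap, and overhead figures are hypothetical, not taken from the thread:

```shell
# Eager swap mode reserves swap for all committed anonymous memory up
# front, so size swap for the total commitment. Hypothetical workload:
jvms=2            # number of JVM instances on the box
heap_mb=2048      # committed heap per JVM
overhead_mb=512   # stacks, code space, native allocations per JVM
required_mb=$((jvms * (heap_mb + overhead_mb)))
echo "swap required with eager mode: ${required_mb} MB"
```

If swap is smaller than that, allocations can fail even while plenty of physical memory is free, which is consistent with the allocation error in this thread.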
11-29-2004 09:36 PM
Re: JVM in Tru64 5.1B
12-02-2004 12:47 AM
Re: JVM in Tru64 5.1B
------------------------------
could not allocate code space: no such file or directory ,file
JDEV:[fastvm.srcjava.sys.alpha]md.c;1
, line 154
-------------------------------
Good Luck to you.
02-10-2005 09:04 AM
Re: JVM in Tru64 5.1B
We are also seeing the exact same issue. We are running WL 8.1 SP1 on Tru64 and it dumps the exact same log. Have you found a resolution for this one? If so, can you share it with us?
Thanks,
-sheshi
02-20-2005 02:28 PM
Re: JVM in Tru64 5.1B
03-09-2005 03:01 AM
Re: JVM in Tru64 5.1B
We are experiencing the same type of problem in another custom application (not BEA WebLogic).
What do you mean by "upgrade to jvm 1.4.1-2.bp2"?
The latest version available is JVM 1.4.2-4, and we are using JVM 1.4.2-3, so maybe we are experiencing something different.
Thank you in advance.