05-17-2008 01:24 PM
open vms buffered I/O and network performance on DS25 vs ES45
The applications also read "transactions" over several multicast channels, and the problem is that the system pegs at 100% CPU when the transaction rates go high. Profiling shows 80% of the time is spent in kernel mode. We use AST-driven processing with $QIOs to read and process the multicast streams. Is there any known bottleneck with this $QIO approach?
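For reference, a minimal sketch of the AST-driven $QIO read pattern being described, assuming the channel has already been assigned to the BG: (socket) device and the multicast group already joined; process_packet() and the buffer size are placeholders, not the actual application code:

#include <starlet.h>    /* sys$qio() */
#include <iodef.h>      /* IO$_READVBLK */
#include <iosbdef.h>    /* IOSB */
#include <stdio.h>

#define BUFSIZE 2048

static unsigned short chan;      /* channel already assigned to the BG: device */
static char buf[BUFSIZE];
static IOSB riosb;

void process_packet(char *p, unsigned int len);   /* application code, not shown */

/* Completion AST: runs at AST level when a datagram arrives, hands the
   payload to the application, then immediately re-queues the next read
   on the same channel with itself as the completion AST. */
void read_ast(unsigned long long astprm)
{
    unsigned int st;

    if (riosb.iosb$w_status & 1)
        process_packet(buf, riosb.iosb$w_bcnt);

    st = sys$qio(0, chan, IO$_READVBLK, &riosb,
                 (void (*)())read_ast, 0,
                 buf, BUFSIZE, 0, 0, 0, 0);
    if (!(st & 1))
        fprintf(stderr, "re-queue failed, status %u\n", st);
}

Each read re-queued from the AST keeps one outstanding buffered read per stream; at very high packet rates that means one AST delivery plus one $QIO per datagram, which is itself a plausible source of kernel-mode time.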
The account that we run the applications under has BYPASS privilege, so they should not be running into any "account limitations" issue.
The account has the following limits/privileges.
Maxjobs: 0 Fillm: 4096 Bytlm: 8192000
Maxacctjobs: 0 Shrfillm: 0 Pbytlm: 0
Maxdetach: 0 BIOlm: 150 JTquota: 65535
Prclm: 50 DIOlm: 150 WSdef: 90000
Prio: 4 ASTlm: 250 WSquo: 90000
Queprio: 0 TQElm: 150 WSextent: 90000
CPU: (none) Enqlm: 2000 Pgflquo: 1200000
Authorized Privileges:
ACNT ALLSPOOL ALTPRI AUDIT BUGCHK BYPASS
CMEXEC CMKRNL DIAGNOSE DOWNGRADE EXQUOTA GROUP
GRPNAM GRPPRV IMPERSONATE IMPORT LOG_IO MOUNT
NETMBX OPER PFNMAP PHY_IO PRMCEB PRMGBL
PRMMBX PSWAPM READALL SECURITY SETPRV SHARE
SHMEM SYSGBL SYSLCK SYSNAM SYSPRV TMPMBX
UPGRADE VOLPRO WORLD
System: PROD1, AlphaServer ES45 Model 2
CPU ownership sets:
Active 0-3
Configure 0-3
CPU state sets:
Potential 0-3
Autostart 0-3
Powered Down None
Not Present None
Failover None
LAN Configuration:
Parent or
Device PrefCPU Medium/User Version Link Speed Duplex Auto BufSize MAC Address Type Description
------ ------- ----------- ------- ---- ----- ------ ---- ------- ---------------- ------------ -----------
EWA0 2 Ethernet X-117 Down - - No 1500 00-06-2B-02-4B-D1 UTP DE500
EWB0 1 Ethernet X-117 Down - - No 1500 00-06-2B-02-4B-D2 UTP DE500
EWC0 0 Ethernet X-117 Up 100 Full No 1500 00-06-2B-02-4B-D3 UTP DE500
EWD0 3 Ethernet X-117 Up 100 Full No 1500 00-06-2B-02-4B-D4 UTP DE500
EWE0 2 Ethernet X-117 Up 100 Full No 1500 00-06-2B-03-F4-45 UTP DE500
EWF0 1 Ethernet X-117 Up 100 Full No 1500 00-06-2B-03-F4-46 UTP DE500
EWG0 0 Ethernet X-117 Up 100 Full No 1500 AA-00-04-00-54-04 UTP DE500
00-06-2B-03-F4-47 (default)
EWH0 3 Ethernet X-117 Up 100 Full No 1500 00-06-2B-03-F4-48 UTP DE500
TCP/IP stack configuration.
$sysconfig -q inet
inet:
icmp_redirecttimeout = 0
icmp_rejectcodemask = 0
icmp_tcpseqcheck = 1
inifaddr_hsize = 32
ip_max_frag_index = 64
ipdefttl = 64
ipdirected_broadcast = 0
ipforwarding = 0
ipfragttl = 60
ipgateway = 0
ipport_userreserved = 65535
ipport_userreserved_min = 49152
ipqmaxlen = 2048
ipqs = 1
ipsendredirects = 1
ipsrcroute = 1
pmtu_decrease_intvl = 1200
pmtu_enabled = 1
pmtu_increase_intvl = 240
pmtu_rt_check_intvl = 20
subnetsarelocal = 1
tcbhashnum = 1
tcbhashsize = 512
tcbquicklisten = 1
tcp_compat_42 = 1
tcp_cwnd_segments = 2
tcp_dont_winscale = 0
tcp_keepalive_default = 0
tcp_keepcnt = 8
tcp_keepidle = 14400
tcp_keepinit = 150
tcp_keepintvl = 150
tcp_msl = 60
tcp_mssdflt = 536
tcpnodelack = 0
tcp_recvspace = 61440
tcp_rexmit_interval_min = 2
tcp_rexmtmax = 128
tcprexmtthresh = 3
tcp_rst_win = -1
tcp_rttdflt = 3
tcp_sendspace = 61440
tcp_syn_win = -1
tcp_ttl = 128
tcptwreorder = 0
tcp_urgent_42 = 1
udpcksum = 1
udp_recvspace = 1048976
udp_sendspace = 9216
udp_ttl = 128
ovms_nobroadcastcheck = 0
ovms_printf_to_opcom = 1
Adapter Configuration:
----------------------
TR Adapter ADP Hose Bus BusArrayEntry Node CSR Vec/IRQ Port Slot Device Name / HW-Id
-- ----------- ----------------- ---- ----------------------- ---- ---------------------- ---- ---- ---------------------------
1 KA2608 FFFFFFFF.81C6E300 0 BUSLESS_SYSTEM
2 PCI FFFFFFFF.81C6E800 0 PCI
FFFFFFFF.81C6EE60 38 FFFFFFFF.9D453800 40 7 ACER 1543 PCI-ISA Bridge
FFFFFFFF.81C6F0B8 60 FFFFFFFF.9D468000 6C 12 HOT_PLUG
FFFFFFFF.81C6F298 80 FFFFFFFF.9D46A000 38 16 00000000.00000ACE (N.)
3 ISA FFFFFFFF.81C6FAC0 0 ISA
FFFFFFFF.81C6FDD8 0 FFFFFFFF.9D456000 0 0 EISA_SYSTEM_BOARD
4 XBUS FFFFFFFF.81C70580 0 XBUS
FFFFFFFF.81C70898 0 FFFFFFFF.9D456000 C 0 MOUS
FFFFFFFF.81C70910 1 FFFFFFFF.9D456000 1 1 KBD
FFFFFFFF.81C70988 2 FFFFFFFF.9D456000 4 SRA: 2 Console Serial Line Driver
FFFFFFFF.81C70A00 3 FFFFFFFF.9D456000 3 TTA: 3 Serial Port
FFFFFFFF.81C70A78 4 FFFFFFFF.9D456000 7 LRA: 4 Line Printer (parallel port)
FFFFFFFF.81C70AF0 5 FFFFFFFF.9D456000 6 DVA: 5 Floppy
5 PCI FFFFFFFF.81C71480 0 PCI
FFFFFFFF.81C56958 80 FFFFFFFF.9D46A000 38 DQA: 16 ACER 5229 IDE Controller
FFFFFFFF.81C569D0 81 FFFFFFFF.9D46A000 3C DQB: 16 ACER 5229 IDE Controller
6 PCI FFFFFFFF.81C71780 1 PCI
FFFFFFFF.81C71B10 8 FFFFFFFF.9D46E800 40 1 00000000.B1548086 (..T1)
FFFFFFFF.81C71B88 10 FFFFFFFF.9D481000 40 2 00000000.B1548086 (..T1)
FFFFFFFF.81C71D68 30 FFFFFFFF.9D493000 68 6 HOT_PLUG
7 PCI FFFFFFFF.81C72A40 1 PCI
FFFFFFFF.81C72F38 220 FFFFFFFF.9D472000 B0 EWA: 4 DE504-BA (quad Fast Ethernet)
FFFFFFFF.81C72FB0 228 FFFFFFFF.9D474800 B4 EWB: 5 DE504-BA (quad Fast Ethernet)
FFFFFFFF.81C73028 230 FFFFFFFF.9D479000 B8 EWC: 6 DE504-BA (quad Fast Ethernet)
FFFFFFFF.81C730A0 238 FFFFFFFF.9D47D800 BC EWD: 7 DE504-BA (quad Fast Ethernet)
8 PCI FFFFFFFF.81C73DC0 1 PCI
FFFFFFFF.81C742B8 320 FFFFFFFF.9D484000 C0 EWE: 4 DE504-BA (quad Fast Ethernet)
FFFFFFFF.81C74330 328 FFFFFFFF.9D486800 C4 EWF: 5 DE504-BA (quad Fast Ethernet)
FFFFFFFF.81C743A8 330 FFFFFFFF.9D48B000 C8 EWG: 6 DE504-BA (quad Fast Ethernet)
FFFFFFFF.81C74420 338 FFFFFFFF.9D48F800 CC EWH: 7 DE504-BA (quad Fast Ethernet)
9 PCI FFFFFFFF.81C75180 2 PCI
10 PCI FFFFFFFF.81C76400 3 PCI
FFFFFFFF.81C76808 10 FFFFFFFF.9D49F000 E0 2 MFPCI
FFFFFFFF.81C769E8 30 FFFFFFFF.9D4AB000 64 6 HOT_PLUG
11 PCI FFFFFFFF.81C776C0 3 PCI
FFFFFFFF.81C779D8 10 FFFFFFFF.9D4A3000 E0 PKA: 2 Adaptec AIC-7899
FFFFFFFF.81C77A50 11 FFFFFFFF.9D4A7100 E4 PKB: 2 Adaptec AIC-7899
Any help will be greatly appreciated.
05-17-2008 02:54 PM
Re: open vms buffered I/O and network performance on DS25 vs ES45
Many things could be causing this. Without a detailed review of the code, and how it is using the system services, it is pure speculation as to what is happening.
Also, what version of OpenVMS are you running?
- Bob Gezelter, http://www.rlgsc.com
05-17-2008 05:22 PM
Re: open vms buffered I/O and network performance on DS25 vs ES45
Profile the code. DECset PCA might help.
Profile the system. Various tools are available.
Run some raw numbers, too. This could be contention on the memory interlocks, available memory bandwidth, PCI bandwidth, or pretty much anything else that might be spinning away.
For some application loads, processor dedicated activities -- processor affinity -- can help, particularly if you're thrashing the scheduler or the memory caches.
I'm not sure why you posted IP information or the SRM device lists. These multi-cast streams are IP, apparently.
Do look to split I/O processing across the processors; use fast path for the NICs, assuming your OpenVMS Alpha version permits it.
BYPASS overrides object access protection, not quota settings or "account limitations".
06-02-2008 05:34 PM
Re: open vms buffered I/O and network performance on DS25 vs ES45
We are running OpenVMS 8.3 on an AlphaServer ES45 Model 2.
Each system has 4 CPUs (each is EV68, 1000 MHz).
http://h18002.www1.hp.com/products/quickspecs/11042_div/11042_div.HTML
Profiling looked flat.
Most of the CPU time seemed to be spent in interrupt mode (per MONITOR MODES, for example). Does this suggest a bottleneck in the I/O stack at higher transaction rates?
I am not really familiar with tuning the IP stack for better IP multicast performance. Is there any good reference for tuning the network/IP/multicast path on OpenVMS? We have 4 CPUs on each system and have "fast path I/O" enabled on the NICs.
example:
V1110> sh dev ewa0: /full
Device EWA0:, device type DE500, is online, network device, error logging is
enabled, device is a template only.
Error count 4 Operations completed 0
Owner process "" Owner UIC [SYSTEM]
Owner process ID 00000000 Dev Prot S:RWPL,O:RWPL,G,W
Reference count 0 Default buffer size 512
Current preferred CPU Id 1 Fastpath 1
Current Interrupt CPU Id 1
Operating characteristics: Link up, Full duplex, Autonegotiation.
Speed (Mbits/sec) 100
Def. MAC addr xx-xx-xx-xx-xx-xx Current MAC addr xx-xx-xx-xx-xx-xx
I'm not too familiar with "fast path I/O" processing either. I remember seeing an example of using it for file I/O. It would be of great help if there were some sample code for doing fast path I/O on a TCP or UDP socket.
Thank you very much once again for your comments.
-Ganga
06-02-2008 06:45 PM
Re: open vms buffered I/O and network performance on DS25 vs ES45
Upgrade to faster NICs, as a start. DE504 quad NICs operate at 100 Mb, which isn't all that fast, and a NIC upgrade is an easy potential fix. The other potential approach is to move to a faster Alpha box.
Fast Path allows you to spread part of the I/O load to specific processors. By default, I/O interrupts go to the primary processor, meaning an SMP box runs at the I/O speed of the primary. Fast Path allows you to have secondaries take up some of the load.
It might pay off to get somebody in to profile the code. This could be I/O, or it could be code hammering on the interlocks, or... And have a look at what part of the kernel is busy, and at where and how far any backlogs build up. And 55,000 or so times the packet size works out to be some amount; possibly significant.
What's the outboard network configuration and network traffic look like here?
Your original topic mentions DS25, and adding processors does not necessarily provide a speedup. You still need to get stuff across the system buses. ES47 has faster buses and can have faster processors here, though I'd prototype before moving to the box.
06-02-2008 10:42 PM
Re: open vms buffered I/O and network performance on DS25 vs ES45
Wim
06-02-2008 10:50 PM
Re: open vms buffered I/O and network performance on DS25 vs ES45
Wim
06-04-2008 05:55 PM
Re: open vms buffered I/O and network performance on DS25 vs ES45
I do really appreciate the help on troubleshooting this rather interesting problem.
The typical configuration of an Alpha server is as follows.
P00>>>show config
hp AlphaServer ES45 Model 2
Firmware
SRM Console: V7.3-2
PALcode: OpenVMS PALcode V1.98-43, Tru64 UNIX PALcode V1.92-33
Serial ROM: V2.22-G
RMC ROM: V1.0
RMC Flash ROM: V2.4
Processors
CPU 0 Alpha EV68CB pass 2.4 1000 MHz 8MB Bcache
CPU 1 Alpha EV68CB pass 2.4 1000 MHz 8MB Bcache
CPU 2 Alpha EV68CB pass 2.4 1000 MHz 8MB Bcache
CPU 3 Alpha EV68CB pass 2.4 1000 MHz 8MB Bcache
Core Logic
Cchip Rev 17
Dchip Rev 17
PPchip 0 Rev 17
PPchip 1 Rev 17
TIG Rev 2.6
Memory
Array Size Base Address Intlv Mode
--------- ---------- ---------------- ----------
0 2048Mb 0000000000000000 1-Way
1 2048Mb 0000000080000000 1-Way
4096 MB of System Memory
Slot Option Hose 0, Bus 0, PCI
7 Acer Labs M1543C Bridge to Bus 1, ISA
12 Yukon PCI Hot-Plug C
16 Acer Labs M1543C IDE dqa.0.0.16.0
dqb.0.1.16.0
dqa0.0.0.16.0 Compaq CRD-8402B
Option Hose 0, Bus 1, ISA
Floppy dva0.0.0.1000.0
Slot Option Hose 1, Bus 0, PCI
1 Intel 21154-*E Bridge to Bus 2, PCI
2 Intel 21154-*E Bridge to Bus 3, PCI
6 Yukon PCI Hot-Plug C
Slot Option Hose 1, Bus 2, PCI
4 DE500-BA Network Con ewa0.0.0.2004.1 00-xx-xx-xx-xx-D1
5 DE500-BA Network Con ewb0.0.0.2005.1 00-xx-xx-xx-xx-D2
6 DE500-BA Network Con ewc0.0.0.2006.1 00-xx-xx-xx-xx-D3
7 DE500-BA Network Con ewd0.0.0.2007.1 00-xx-xx-xx-xx-D4
Slot Option Hose 1, Bus 3, PCI
4 DE500-BA Network Con ewe0.0.0.3004.1 00-xx-xx-xx-xx-45
5 DE500-BA Network Con ewf0.0.0.3005.1 00-xx-xx-xx-xx-46
6 DE500-BA Network Con ewg0.0.0.3006.1 00-xx-xx-xx-xx-47
7 DE500-BA Network Con ewh0.0.0.3007.1 00-xx-xx-xx-xx-48
Slot Option Hose 3, Bus 0, PCI
2/0 Adaptec AIC-7899 pka0.7.0.2.3 SCSI Bus ID 7
dka0.0.0.2.3 COMPAQ BF07285A36
dka100.1.0.2.3 COMPAQ BF3008B26C
dka200.2.0.2.3 COMPAQ BF3008B26C
dka300.3.0.2.3 COMPAQ BF3008B26C
dka400.4.0.2.3 COMPAQ BD03686223
2/1 Adaptec AIC-7899 pkb0.7.0.102.3 SCSI Bus ID 7
6 Yukon PCI Hot-Plug C
P00>>>show memory
Array Size Base Address Intlv Mode
--------- ---------- ---------------- ----------
0 2048Mb 0000000000000000 1-Way
1 2048Mb 0000000080000000 1-Way
4096 MB of System Memory
P00>>>show cpu
Primary CPU: 00
Active CPUs: 00 01 02 03
Configured CPUs: 00 01 02 03
P00>>>show fru
FRUname E Part# Serial# Model/Other Alias/Misc
SMB0 00 54-30292-02.B03 AY12809269
SMB0.CPU0 80 54-30466-04.B1 AY21401331
SMB0.CPU1 80 54-30466-04.A1 AY12505585 233
SMB0.CPU2 80 54-30466-04.B1 AY21504065
SMB0.CPU3 80 54-30466-04.B1 AY21401305
SMB0.MMB0 00 54-30348-02.A03 AY13500962
SMB0.MMB0.J5 00 20-01EBA-09 4136JSPA106 233 ce
SMB0.MMB0.J9 00 20-01EBA-09 4136JSPA106 233 ce
SMB0.MMB1 00 54-30348-02.A03 AY13408701
SMB0.MMB1.J5 00 20-01EBA-09 4136JSPA106 235 ce
SMB0.MMB1.J9 00 20-01EBA-09 4136JSPA106 235 ce
SMB0.MMB2 00 54-30348-02.A03 AY13501087
SMB0.MMB2.J5 00 20-01EBA-09 4136JSPA106 233 ce
SMB0.MMB2.J9 00 20-01EBA-09 4136JSPA106 235 ce
SMB0.MMB3 00 54-30348-02.A03 AY13312624
SMB0.MMB3.J5 00 20-01EBA-09 4136JSPA106 233 ce
SMB0.MMB3.J9 00 20-01EBA-09 4136JSPA106 233 ce
SMB0.CPB0 00 54-30418-01.A05 AY13111135
JIO0 00 54-25575-01 - Junk I/O
SMB0.CPB0.PCI8 00 Intel 2115
SMB0.CPB0.PCI7 00 Intel 2115
SMB0.CPB0.PCI4 00 Adaptec AI
SMB0.CPB0.PCI4 00 Adaptec AI
OCP0 00 70-33894-0x - OCP
PWR0 00 30-49448-01. C07 2P12760460 API-7650 7f
PWR1 00 30-49448-01. C06 2P01739043 API-7650 7f
PWR2 00 30-49448-01. C06 2P02744358 API-7650 7f
FAN1 00 70-40073-01 - Fan
FAN2 00 70-40073-01 - Fan
FAN3 00 70-40072-01 - Fan
FAN4 00 70-40071-01 - Fan
FAN5 00 70-40073-02 - Fan
FAN6 00 70-40074-01 - Fan
P00>>>
When the multicast traffic rates go high, the reader on the Alpha drops messages. The rates cannot go beyond 20 Mbps, though, because we have a 20-megabit pipe, and other Linux systems on the same network confirm no dropped messages. At the same time, the transaction rate can go really high, i.e., too many small messages arriving very fast. It does seem to me that OpenVMS has a problem handling that many small packets at once, i.e., in a really bursty feed. Or I am totally missing some fine-tuning that could make OpenVMS behave in such a situation. What I am seeing is that the CPU usage goes way up: interrupt mode goes to 30% and kernel mode goes to 40% on each CPU. This is really weird.
The udp specific parameters from sysconfig -q inet are:
udpcksum = 1
udp_recvspace = 1048976
udp_sendspace = 9216
udp_ttl = 128
FUTPH1110> tcpip sh proto udp /para
UDP
Unpriv. broadcast: disabled
Receive Send
Checksum: disabled enabled
Quota: 1048976 9216
We don't deal with much TCP/IP... it is UDP multicast everywhere.
FUTPH1110> tcpip sh proto tcp /para
TCP
Delay ACK: enabled
Window scale: enabled
Drop count: 8
Probe timer: 7200
Receive Send
Push: disabled disabled
Quota: 61440 61440
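For what it's worth, a rough sanity check on the burst rate: 20 Mb/s of, say, 200-byte datagrams is on the order of 12,500 packets (and, without interrupt coalescing, interrupts) per second. On the socket side, the default receive quota set by udp_recvspace above can typically also be raised per socket with SO_RCVBUF; a minimal sketch (the size is purely illustrative, and the stack may clamp what it actually grants):

#include <stdio.h>
#include <sys/socket.h>    /* BSD socket API as provided by TCP/IP Services */

/* Hypothetical helper: request a larger receive buffer on an existing UDP
   socket 's' so bursts of small datagrams are less likely to be dropped
   before the reader gets to them. */
static int grow_rcvbuf(int s, int bytes)
{
    if (setsockopt(s, SOL_SOCKET, SO_RCVBUF, (char *)&bytes, sizeof bytes) < 0) {
        perror("setsockopt(SO_RCVBUF)");
        return -1;
    }
    return 0;
}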
Again, I really appreciate your help in troubleshooting this rather interesting problem...
Thanks
-Ganga
06-04-2008 06:58 PM
Re: open vms buffered I/O and network performance on DS25 vs ES45
I can't tell, for instance, whether four gigabytes is more than enough memory or not enough. I'd tend to guess it's too little memory, but without a more detailed look at the system performance, and particularly at the performance under load, I can't be certain of that.
Do profile the code, and particularly the protocol implementation. The DECset Performance and Coverage Analyzer (PCA) tool might help here, as PCA can often point to the hot spots in the application code.
Profile the system. Various tools are available. This includes monitoring memory, as well as process and processor and processor mode activity.
Run some raw numbers, too, based on various I/O streams you know are active, as compared with the theoretical bandwidth of the particular component. You can't get 110 Mb through a 100 Mb NIC. You can't get more than a PCI worth of I/O through a PCI.
The underlying limit here could be contention on the memory interlocks, available memory bandwidth, PCI bandwidth, or pretty much anything else that might be spinning away.
For some application loads, processor dedicated activities -- processor affinity -- can help, particularly if you're thrashing the scheduler or the memory caches.
You're already using Fast Path, which implies somebody has taken a look at aggregate performance. It might well be the case that you've simply reached saturation levels with this box.
I'm not sure why you posted what you've posted around FRUs -- field replaceable units -- basically what widgets and spares are in the box -- and other such details. I can only infer, from what you're posting, that you're not familiar with these sorts of application and system problems. Which also implies that more formal on-line, or potentially on-site, help could be the most expeditious approach: calling in someone who specializes in system and application performance tuning.
06-05-2008 09:01 AM
Re: open vms buffered I/O and network performance on DS25 vs ES45
You will likely need to bring in someone or someones for a detailed analysis of your application and its use of resources. Two people who do that work for a living have attempted to offer suggestions here.
Additionally, VMS Engineering has a customer lab in Nashua (soon to move to Marlboro) where a customer-specific environment can be recreated with the intent of solving performance problems, similar to yours.
Good luck.
-- Rob
06-05-2008 04:52 PM
Re: open vms buffered I/O and network performance on DS25 vs ES45
Best Regards,
-Ganga
06-05-2008 07:16 PM
Re: open vms buffered I/O and network performance on DS25 vs ES45
Perhaps this has nothing to do with your issue, but since you specifically called out multicast traffic as being correlated to high processor utilization, I will ask.
Are all of the interfaces on the same network? If so, and each thread is binding to the same multicast group, then each interface will receive a copy of the same message. How does your application then handle duplicate received packets? If the same received packet is processed by multiple listeners, that could be causing contention for the shared memory.
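If duplicate delivery across the NICs turns out to be part of the problem, the usual fix is to join the group on one specific local interface rather than on every NIC; a minimal sketch using the BSD socket API (the group, port, and interface addresses are placeholders):

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in sin;
    struct ip_mreq mreq;

    memset(&sin, 0, sizeof sin);
    sin.sin_family      = AF_INET;
    sin.sin_port        = htons(5000);              /* placeholder port */
    sin.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(s, (struct sockaddr *)&sin, sizeof sin);

    /* Join the multicast group on one named local interface only,
       instead of letting the stack choose or joining on every NIC. */
    mreq.imr_multiaddr.s_addr = inet_addr("239.1.1.1");   /* placeholder group */
    mreq.imr_interface.s_addr = inet_addr("10.1.1.10");   /* one local NIC     */
    if (setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP,
                   (char *)&mreq, sizeof mreq) < 0)
        perror("setsockopt(IP_ADD_MEMBERSHIP)");

    /* ... recvfrom() loop, or $QIOs on the underlying BG: device ... */
    return 0;
}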
Are the multiple interfaces intended for redundancy or for performance? The title mentions a DS25, but I don't see anything in the text mentioning it. Did you see better performance on the DS25? Were you expecting the application to scale linearly with additional processors?
Sorry that this doesn't answer any of your questions, but as others have stated, this is a reasonably complex problem, and there doesn't seem to be enough information here to diagnose it.
You stated your profiling showed that the time was on the interrupt stack. You may want to try to determine where the CPU time is being spent with something like the SDA PCS or PRF extension. If the time is being spent in the LAN driver's interrupt service routine, NICs that support interrupt coalescing (like the 3X-DEGXA-T* GBit NIC) could possibly help in bursty traffic situations.
Jon
06-07-2008 02:59 AM
Re: open vms buffered I/O and network performance on DS25 vs ES45
I can't offer much assistance in terms of the VMS side of things, however I'd also like to suggest that the NICs are likely to be the limiting factor here.
If you look at the supported options for the ES45 here:
http://h18002.www1.hp.com/alphaserver/options/ases45/ases45_options.html
you won't find the DE504. That's not to say it doesn't work, but you might hit issues...
The Tulip chips on the DE500s probably predate the ES45 by a good 10 years or so, so upgrading to something like the 3X-DEGXA-TR would seem a logical step.
Your original post mentions a DS25. That has a Gigabit interface on board. Does that have the same issue?
A bit more information on the upstream switch arrangements and the other Linux systems might also be useful. Do the Linux boxes have Gb interfaces?
Hope this helps,
Regards,
Rob
P.S. Do remember to assign points...