Operating System - Linux

DL380 extremely slow performance with postfix+amavisd

 
Jouni Harjunmaa
New Member

DL380 extremely slow performance with postfix+amavisd

Hello everyone,

We have received and configured new ProLiant DL380 servers that will soon be shipped to customers. Unfortunately, sending e-mail is so slow that the system is nearly unusable. CPU usage is very low, and we cannot find any clear bottleneck. I suspect there is an I/O problem somewhere. Here is an example from amavis.log:

May 28 13:45:05 server amavis[16685]: (16685-03-4) TIMING [total 25269 ms] -
  lookup_sql: 3274 (13%)13, SMTP pre-DATA-flush: 1960 (8%)21, SMTP DATA: 284 (1%)22,
  check_init: 816 (3%)25, digest_hdr: 0 (0%)25, digest_body: 270 (1%)26,
  gen_mail_id: 247 (1%)27, mime_decode: 1392 (6%)33, get-file-type1: 870 (3%)36,
  decompose_part: 1601 (6%)42, get-file-type1: 738 (3%)45, parts_decode: 91 (0%)46,
  check_header: 924 (4%)49, spam-wb-list: 4267 (17%)66, update_cache: 180 (1%)67,
  decide_mail_destiny: 1720 (7%)74, fwd-connect: 2487 (10%)84, fwd-mail-pip: 1407 (6%)89,
  fwd-rcpt-pip: 114 (0%)90, fwd-data-chkpnt: 144 (1%)90, write-header: 121 (0%)91,
  fwd-data-contents: 0 (0%)91, fwd-end-chkpnt: 935 (4%)94, prepare-dsn: 696 (3%)97,
  main_log_entry: 204 (1%)98, update_snmp: 135 (1%)98, SMTP pre-response: 79 (0%)99,
  SMTP response: 196 (1%)100, unlink-2-files: 118 (0%)100, rundown: 0 (0%)100

As you can see, every stage of the process is slow. Using dmesg, I found something interesting that may point toward a solution:

Buffer I/O error on device hda, logical block 4
Buffer I/O error on device hda, logical block 5
...
hda: tray open
end_request: I/O error, dev hda, sector 256

We are using RHEL5 with the latest amavisd and postfix. The hardware is a ProLiant DL380 G5 with a RAID 1+0 setup. The CCISS driver seems to be loaded and working. This problem occurs on both of our DL380 machines, but other models (and other manufacturers) with the same setup run without problems.

Does anyone have good ideas on how I should continue tracking down this problem?
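
In case it is useful, here is a quick way to pull the per-message totals out of the log and see whether every message is equally slow (the log path is just how our setup is configured):

grep 'TIMING \[total' /var/log/amavis.log | sed 's/.*total \([0-9]*\) ms.*/\1/' | sort -n | tail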
Steven E. Protter
Exalted Contributor

Re: DL380 extremely slow performance with postfix+amavisd

Shalom,

The most common cause of this is slow host name resolution.

An example story: I changed my firewalls, and the system's /etc/resolv.conf still pointed at the firewall as its name server. Because of that change, host name resolution slowed to a crawl, and there were lots of messages in the mail logs about failed host name lookups.

I changed /etc/resolv.conf to point at the outside ISP's name servers and everything perked up.


This message is critical, though not to performance:

end_request: I/O error, dev hda, sector 256

That's a bad sector and needs to be resolved: hda has a bad area, and you should replace the disk. If RAID performance is degraded because of this error, it could be contributing to the problem.
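
If the HP Array Configuration Utility CLI is installed (hpacucli, which I am assuming here), you can check whether the array is degraded, and the cciss driver also exposes controller status under /proc:

hpacucli ctrl all show config
cat /proc/driver/cciss/cciss0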

Check the speed and accuracy of outside host name resolution, and if it's slow, see what can be done to speed it up.
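
A quick way to check (using dig, with an example domain and a documentation address in place of your real ones) is to time a forward and a reverse lookup against the servers in /etc/resolv.conf:

time dig mx example.com
time dig -x 192.0.2.25

If either takes more than a second or two, or times out, name resolution is your problem.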

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Jouni Harjunmaa
New Member

Re: DL380 extremely slow performance with postfix+amavisd

Great news: I have finally tracked down the problem! I just don't know how to solve it yet. Thank you for your insight about reverse DNS problems; it gave me some ideas, although that doesn't seem to be the direct cause (actual DNS queries are fast).

The culprit was actually syslog. Once I told amavisd not to use syslog and to write to its logfile directly, the speed went up wonderfully. The two machines run in a cluster and cross-log syslog events to each other, so that may be the bottleneck. I have triple-checked that syslog does not do reverse DNS lookups, though. I have never seen this behavior before, and I verified that these DL380 machines have syslog configured just like other machines where this problem does not appear.
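
For reference, the change boiled down to something like this in amavisd.conf (the log path is just what we use):

$do_syslog = 0;                    # stop logging via syslog
$LOGFILE = "/var/log/amavis.log";  # write straight to a file instead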

I will try to find a workaround or track down the root cause. This certainly was an interesting problem; what really stumped me was that it only happened on these machines.
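
One workaround I may try is to make syslogd write mail messages asynchronously by prefixing the file name with '-' in /etc/syslog.conf, so it does not sync after every line. This is only a sketch, and the facility and file name are assumptions about our setup:

mail.*    -/var/log/maillog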

Once again, thanks for your help. Things are looking much better now.