Integrity Servers

Re: rx5670 Disk I/O performance problems :(

 
SOLVED
Joe Bozen
Advisor

rx5670 Disk I/O performance problems :(

I'm hoping to get some ideas on how to fix a problem I think I have. I have an rx5670 server with two 1000MHz CPUs, and I'm having some disk I/O performance issues. On the machine I have a 73GB disk formatted with VxFS, and when I copy a 7.5GB file from one directory to another (on the same drive), it takes close to 40 minutes. I would expect a copy rate of about 1GB per minute. What do I need to look at to see why my system is so-o-o-o slow?

thanks again to everyone who can help.

joe...
10 REPLIES
Uwe Zessin
Honored Contributor
Solution

Re: rx5670 Disk I/O performance problems :(

Joe,
Depending on the locality of the data and the buffering capability of your copy program, there will still be a lot of time spent moving the heads between different areas of the disk.

Can you find out about how many reads and writes per second are going to this disk?
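For example, a rough way to sample this with tools in base HP-UX (the interval and count are just placeholders, adjust for your system and run it while a copy is going):

# sample all disks every 5 seconds, 12 times; the r+w/s and blks/s
# columns show the request rate and throughput per device
sar -d 5 12

# or watch the disks continuously during a copy
iostat 5

If you have Glance/MeasureWare installed, it will show the same information with less effort.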
Ted Buis
Honored Contributor

Re: rx5670 Disk I/O performance problems :(

Set the kernel parameter default_disk_ir=1. It should speed things up. With it at zero, the drive does not report a write as complete until the data is on the magnetic media. With it set to one, it replies that the I/O is complete as soon as the data is in the disk drive's buffer. I have seen write-intensive cases where this makes a 4X difference. Let us know if this works for you.
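For example, on 11i v2 the tunable can be checked and set with kctune (kmtune on older releases); whether it takes effect without a reboot depends on your release:

# show the current setting
kctune default_disk_ir

# enable immediate reporting
kctune default_disk_ir=1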
Joe Bozen
Advisor

Re: rx5670 Disk I/O performance problems :(

Hi all,

I tried setting the system param to 1 and I didn't notice any gain in performance.


FYI--Here are my copy times:

RAID to DISK: 11 MB/sec
DISK to DISK: 3 MB/sec
DISK to RAID: 3 MB/sec
RAID to RAID: 3 MB/sec

Unfortunately, most of the time I'm doing RAID-to-RAID copies.

Also, here is my FSTAB file:

/dev/vg00/lvol3 / vxfs delaylog 0 1
/dev/vg00/lvol1 /stand vxfs tranflush 0 1
/dev/vg00/lvol4 /tmp vxfs delaylog 0 2
/dev/vg00/lvol5 /home vxfs delaylog 0 2
/dev/vg00/lvol6 /opt vxfs delaylog 0 2
/dev/vg00/lvol7 /usr vxfs delaylog 0 2
/dev/vg00/lvol8 /var vxfs delaylog 0 2
/dev/vg01/lvol9 /usr/local/banner vxfs delaylog 0 2
/dev/vg02/lvol10 /usr/local/banner/oradata vxfs rw,largefiles,delaylog 0 2

lvol10 = external RAID (AutoRAID 12H)
lvol9 = internal disk

joe...
Ted Buis
Honored Contributor

Re: rx5670 Disk I/O performance problems :(

I will attach the performance cookbook so that you can review some of the mount options. Does your 12H have dual controllers? If not, I don't think it will pay attention to the default_disk_ir setting. If it does, then I think it will do immediate reporting regardless, since the cache is mirrored and the data is protected from loss even if one of the controllers failed before the write completed. The 12H disk array used FWD (HVD) SCSI, which was limited to 20MB/sec bus speed. Internally, it had four 5MB/sec SCSI buses to the disks.
So, once the small cache (96MB) was filled, you would be limited to the performance of the back end. If you get close to filling the array, data will be moved to RAID 5 and you will incur the RAID 5 write penalty.
Reading from the array and writing to your internal disk is the best case because reads come in on one bus and writes go out on another, and there is no thrashing of the disk heads from seeking back and forth between the read and write areas. You can test performance using dd from /dev/zero to a file, or from a file to /dev/null. Use a large block size so that you aren't seeing the effects of file system overhead as much. If you create some lvols with different mount options, you can get an idea of the file system overhead with those options. I hope this helps, but copying from one device to the same device is always going to be slow unless you use a large block size and don't journal the file system. You can use SAM to set some of the mount options. Some are only available with OnlineJFS, which I would recommend.
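Something along these lines would do as a test (the path is only an illustration, use a scratch directory on whichever lvol you want to measure; timex and the 1MB block size are just one reasonable choice):

# sequential write through the file system (1024 x 1MB = 1GB file)
timex dd if=/dev/zero of=/usr/local/banner/oradata/ddtest bs=1024k count=1024

# sequential read of the same file
timex dd if=/usr/local/banner/oradata/ddtest of=/dev/null bs=1024k

# clean up
rm /usr/local/banner/oradata/ddtest

Dividing 1024 by the elapsed seconds that timex reports gives you MB/sec for each direction.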
Joe Bozen
Advisor

Re: rx5670 Disk I/O performance problems :(

I had ordered two systems, and both seem to be having the same issues. I spoke with my HW vendor, and he told me that the UNIX install they performed on this machine followed the same process HP uses on their installs. That being the case, I would expect more people than just me to be having this problem, correct? I'm guessing there is probably some 'default' value(s) that needs to be adjusted. I spoke with HP and their fees are $$$$$$, more than what a small school like ours can afford. I'd hate to go down this route :(

joe...
Ted Buis
Honored Contributor

Re: rx5670 Disk I/O performance problems :(

Could you test a disk-to-disk transfer from lvol9 (vg01) to some lvol in vg00?
How much RAM is in your system? What are the values of bufpages, dbc_max_pct, and dbc_min_pct?
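(On 11i v2 you can read these back with kctune, e.g.:

kctune dbc_max_pct
kctune dbc_min_pct
kctune bufpages

On 11.0/11i v1 the equivalent is kmtune -q <parameter>.)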
Joe Bozen
Advisor

Re: rx5670 Disk I/O performance problems :(

Here are my kctune results:

dbc_max_pct=8
dbc_min_pct=5
bufpages=0

As for the transfer rates:

from vg01 -> vg00: 11 MB/sec
from vg00 -> vg01: 10 MB/sec

From what I understand, I should be getting a maximum transfer rate of 20 MB/sec.

thanks again and I hope this information helps!

joe...
Joe Bozen
Advisor

Re: rx5670 Disk I/O performance problems :(

Also, I have 4GB of RAM.

joe...
Ted Buis
Honored Contributor

Re: rx5670 Disk I/O performance problems :(

So let me confirm that you are using an old AutoRAID 12H connected to your new rx5670 server. How many drives are in it? How full is it? If it is the 12H, then the maximum transfer rate on the SCSI bus is 20MB/sec. However, this is the maximum burst transfer rate and doesn't guarantee that you can ever reach that rate, even in bursts, much less sustained. There are many factors that can affect performance: the file system block size, the buffering, the queue depth, the disk cache, rotational delays, track switching, controller overhead, intent logging, and other mount options. Most HP-UX defaults are likely tuned toward optimizing random I/O rather than sequential I/O. If you go into SAM, you can easily change the file system mount options to get more performance at the trade-off of less file system integrity protection. Since VxFS is by default a journaled file system, there is overhead in that journaling. OnlineJFS can give you more performance options for some cases. I have included a paper on HP-UX performance. So we could work to optimize the copy transfer, but it might not be the best thing for your normal workload, and there are no guarantees of great success, since you may be limited by the 12H. How many physical disks are in vg00? How many physical disks are in vg02?
Do you have performance issues in vg02? If so, what type of drives are they? Are most of your disk I/Os really sequential, or are they random? Please note that when you are reading and writing to the same physical disk, you are likely limited by the track-to-track switching time. Your expectation of 1GB per minute (~16MB/sec) when copying from and to the same drive is unrealistic even on the latest disk drives on the fastest buses today. Too much time is spent moving the heads to and from the read and write areas. Naturally, there are different ways to make the copy algorithm more efficient in a specific case, but you can test one-way performance by copying or dd'ing a file to /dev/null, or dd'ing from /dev/zero to a file.
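As a sketch of the kind of fstab change I mean (an illustration only; check the mount_vxfs man page for what your release and OnlineJFS license actually support, and remember these options trade integrity protection for speed):

/dev/vg02/lvol10 /usr/local/banner/oradata vxfs rw,largefiles,delaylog,nodatainlog,mincache=tmpcache 0 2

nodatainlog keeps synchronous write data out of the intent log, and mincache=tmpcache (an OnlineJFS caching advisory) relaxes the flush-on-close behavior. You can also try options on a mounted file system with mount -o remount before committing them to /etc/fstab.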