Operating System - HP-UX

Performance Disk I/O: slow tar copy from one disk to another

 
SOLVED
chisle
Advisor

Performance Disk I/O: slow tar copy from one disk to another

server: rx2660
OS: HP-UX 11.31 fully patched
disk in vg00 is a local 146 GB 15K RPM SAS drive
disk in vg01 is a CLARiiON RAID LUN

command is:
tar -cf - . | ( cd /new_dir ; tar -xvf - )
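(For reference, a minimal sketch of that pipeline run from the source directory; /old_dir is only a placeholder for the actual source, and timex is added here just to capture the elapsed time:)

cd /old_dir
timex tar -cf - . | ( cd /new_dir ; tar -xf - )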

Stats:

19:46:30  device    %busy  avque  r+w/s  blks/s  avwait  avserv
19:46:40  c4t0d0     0.30   0.50      5      51    0.00    0.70
          c7t0d0     0.30   0.50      5      38    0.00    0.71
          c7t0d2     0.10   0.50      0       0    0.00    0.15
          disk3      4.10   0.60     21     249    0.20    4.86
          disk23     0.70   0.50     11      90    0.00    0.71
19:46:50  c4t0d0     0.40   0.50      6      65    0.00    0.65
          c7t0d0     0.40   0.50      6      21    0.00    0.63
          disk3      3.00   0.83     18     102    1.80    5.22
          disk23     0.70   0.50     12      86    0.00    0.64

Average   c4t0d0     0.35   0.50      6      58    0.00    0.68
Average   c7t0d0     0.35   0.50      5      30    0.00    0.67
Average   c7t0d2     0.05   0.50      0       0    0.00    0.15
Average   disk3      3.55   0.70     20     176    0.93    5.02
Average   disk23     0.70   0.50     11      88    0.00    0.67

The source is disk3, the local SAS drive.
The destination is c4t0d0, a CLARiiON disk.

There is nothing else running on this host at all.

top shows the tar processes at the top, and the load average is 0.05.

For all intents and purposes, this looks like copying from a CD-ROM!





5 REPLIES
DeafFrog
Valued Contributor

Re: Performance Disk I/O: slow tar copy from one disk to another

Hi Erik,
Local SAS is always slower than FC SAN, since in this case the I/O goes from the SAS disk through the Smart Array controller, over fibre, through the SAN switch and SAN controller, and then finally to the destination SAN disk. How many paths do you see for the SAN disk if you do ioscan -m dsf /dev/dsk/c4t0d0? Also check the firmware version of the SAS controller on the rx2660.
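For example (the device file and the SAS controller instance below are placeholders; substitute whatever ioscan reports on your box):

# count the lunpaths behind the CLARiiON LUN
ioscan -m dsf /dev/dsk/c4t0d0

# SAS HBA firmware version (the sasmgr instance /dev/sasd0 is an assumption)
sasmgr get_info -D /dev/sasd0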

Regards,
FrogIsDeaf
Solution

Re: Performance Disk I/O: slow tar copy from one disk to another

This is obviously 11iv3... are you sure you have the CLARiiON set up correctly? There are some specific guidelines for CLARiiON with 11iv3:

- Is the CLARiiON on at least FLARE 26?

- Are all the LUNs presented to the HP-UX 11.31 system configured in failover mode 4 (ALUA)?

If not, then the default load balancing algorithms in 11.31 will cause constant trespassing, meaning that for every I/O the CLARiiON needs to move the LUN from one controller to the other. This doesn't show up as any error message on the host side, but it does result in painfully slow I/O.
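A quick way to sanity-check this from the host side (the disk instance below is only an example; use the rdisk DSF that backs vg01, and the exact field names can vary by patch level):

# native multipathing load balancing policy for the LUN
scsimgr get_attr -D /dev/rdisk/disk23 -a load_bal_policy

# LUN details, including whether asymmetric (ALUA) access is reported
scsimgr get_info -D /dev/rdisk/disk23 | grep -i asym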

HTH

Duncan

I am an HPE Employee
chisle
Advisor

Re: Performance Disk I/O: slow tar copy from one disk to another

SAS to SAS is very fast, so that's not the bottleneck.

We're looking into the CLARiiON settings, the firmware (it seems to be at the correct level) and any other settings. It could be that we eventually pull PowerPath out altogether and go with native multipathing, but EMC cringes at that.
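For comparison, both stacks' view of the paths can be checked side by side (powermt is PowerPath's own CLI; the device file below is a placeholder):

# PowerPath's view of the CLARiiON paths and their state
powermt display dev=all

# native 11.31 multipathing view of the same LUN
ioscan -m dsf /dev/dsk/c4t0d0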

Stand by....
TwoProc
Honored Contributor

Re: Performance Disk I/O: slow tar copy from one disk to another

This is exactly what we saw on our CLARiiON system years back. For speed, we would hit another NFS drive to speed things up. Can you believe it? NFS was actually faster than the local disks.

At first we thought it was maybe because it was RAID 5, so we made some RAID 0/1 areas. Nope. We thought maybe it was because of the clustered file system (not HP-UX, obviously), so we took that out and tested it. Nope, not that either. We thought it was the disk layout in the CLARiiON, so we redid the layout with the most "optimal" layouts we could find in EMC papers. Nope. Still junk.

Really, throughput was slower than going through a 10Gb hub using NFS. Well, those were some really old CLARiiONs (some of the first ones, to be honest), but still hardly worth much of anything. A single standalone old SCSI drive, direct attached, with 100ms seek times was faster than those CLARiiONs. Glad to see them go when they went.
We are the people our parents warned us about --Jimmy Buffett
chris huys_4
Honored Contributor

Re: Performance Disk I/O: slow tar copy from one disk to another

Hi,

Removing PowerPath on an HP-UX 11.31 system is always a good first step. ;)

Then post the output of ioscan -m dsf, ioscan -fn, ioscan -fnN, scsimgr get_info -D and vgdisplay -v /dev/vg01.
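For example (the rdisk instance on the scsimgr line is a placeholder for one of the vg01 LUNs):

ioscan -m dsf
ioscan -fn
ioscan -fnN
scsimgr get_info -D /dev/rdisk/disk23
vgdisplay -v /dev/vg01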

And also, what sort of CLARiiON disk array is it?

Greetz,
Chris