Operating System - HP-UX
08-13-2007 01:30 PM
Poor Disk performance for cp,
Hi,
I am having some problems with cp (copy) on our new system. We have an rx3660, 8 GB RAM, 1 CPU, and an MSA30 DB attached to U320 (both buses). I have a filesystem /data, 30 GB (distributed, extent-striped over 3 drives (pvg1), mirrored (pvg2)); the primary sits on one channel, the mirror copy on the second channel.
A second filesystem /eom, 30 GB, is striped across the mirror disk group (pvg2) on the second channel as well. I cp /data to /eod.
The speed is very low, around 6-7 MB/s; it takes 30-40 minutes on an idle system to copy 16 GB of data. In comparison, my old system, an RP5440 with U160, is 50% faster. I can't put my finger on why that is. I have included all of my system.perf.sh output from 5 minutes into the copy. What bugs me the most is the sar -d 1 600 info. Here are the tails from both the old (same disk setup) and new systems.
New system:
          device   %busy   avque   r+w/s  blks/s  avwait  avserv
Average   c5t0d0    1.77    0.60       4      68    0.39    8.61
Average   c5t1d0    1.24    0.62       3      52    0.56    8.23
Average   c1t0d0   15.26    0.50     287    4534    0.00    0.64
Average   c1t1d0   32.17    0.50     285    4504    0.01    1.24
Average   c1t2d0   15.13    0.50     290    4548    0.00    0.62
Average   c2t0d0   26.94  393.47     385    6041   17.61    2.94
Average   c2t1d0   25.26  394.05     380    5941   18.22    2.88
Average   c2t2d0   24.18  397.63     371    5768   17.60    2.92
Old system:
          device   %busy   avque   r+w/s  blks/s  avwait  avserv
Average   c1t2d0    2.12    0.50       5      50    4.60    6.21
Average   c2t2d0    1.19    0.50       2      24    4.75    7.30
Average   c4t8d0   26.18   22.99     579    9190   11.62    5.99
Average   c4t10d0  19.82    0.61     438    6914    5.14    1.31
Average   c4t12d0  15.25    0.51     365    5780    5.06    1.17
Average   c5t8d0   37.48  139.12     592    9314  158.76   10.08
Average   c5t10d0  43.05  111.74     558    8771  126.57   10.34
Average   c5t12d0  43.25  123.32     563    8855  131.06   10.26
Why is avque 3-4 times larger?
This one is killing me; theoretically, the new disk system should be 2x faster, not slower.
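For reference, a minimal sketch of how a PVG-strict, distributed (extent-striped), mirrored layout like this is typically created with HP-UX LVM; the volume group and logical volume names below are placeholders, not necessarily the exact ones on this box:
# /etc/lvmpvg defines the two physical volume groups, e.g.:
#   VG   /dev/vgdata
#   PVG  pvg1
#   /dev/dsk/c1t0d0
#   /dev/dsk/c1t1d0
#   /dev/dsk/c1t2d0
#   PVG  pvg2
#   /dev/dsk/c2t0d0
#   /dev/dsk/c2t1d0
#   /dev/dsk/c2t2d0
lvcreate -n lvdata -L 30720 -s g -D y /dev/vgdata   # PVG-strict (-s g), distributed allocation (-D y)
lvextend -m 1 /dev/vgdata/lvdata                    # add one mirror copy, which lands on the other PVG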
Thanks,
Danilo
3 REPLIES
08-13-2007 02:10 PM
Re: Poor Disk performance for cp,
The problem is that you really don't know what you are measuring, because there are too many variables. Copying cooked files throws too many things into the mix. Is it filesystem performance? Is it buffer cache? Is it LVM or VxVM throughput? Is it disk throughput?
You need to drop down to a lower level so that you are measuring no more than absolutely necessary at one time. Moreover, when you read from a cooked file and write to a cooked file, what are you actually measuring? You are going through two stacks of the layers listed above, one for the read and one for the write.
I would approach it like this:
1) timex dd if=/dev/rdsk/c2t1d0 bs=256k \
count=2000 of=/dev/null # this will measure raw disk performance
2) timex dd if=/dev/dsk/c2t1d0 bs=256k \
count=2000 of=/dev/null # this will measure "luke-warm" disk performance
3) timex dd if=/dev/vg05/rlvol1 bs=256k \
count=2000 of=/dev/null # raw LVM
4) timex dd if=/dev/vg05/lvol1 bs=256k \
count=2000 of=/dev/null # "luke-warm" LVM
or for 3-4 do if=/dev/vx/rdsk/dgxxx/volnn
and if=/dev/vx/dsk/dgxxx/volnn for VxVM
5) create a 500MiB cooked file and test it.
timex dd if=/dev/zero bs=256k \
count=2000 of=/aaa/bbb/myfile # cooked write test
6) timex dd if=/aaa/bbb/myfile bs=256k of=/dev/null # cooked read test
You should repeat the tests several times and average the results, then compare them to the old box; that should give you some idea of where the bottleneck(s) lie.
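If you want to automate the repetition, a small script along these lines will do; the device path is only an example, so substitute whichever raw device, LVM device, or file you are testing:
#!/usr/bin/sh
# run the same dd test 3 times so the results can be averaged
DEV=/dev/rdsk/c2t1d0        # example device, change as needed
i=1
while [ $i -le 3 ]
do
    echo "run $i:"
    timex dd if=$DEV of=/dev/null bs=256k count=2000
    i=`expr $i + 1`
done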
If it ain't broke, I can fix that.
08-14-2007 06:38 AM
Re: Poor Disk performance for cp,
Clay, I attached the results from the tests you suggested; each test was run 3 times on the new box, and these look good. I ran these to measure performance when we first got the system. The problem appears when I try to use cp to copy about 17 GB of data. On the old system the same operation completes in 45% less time, on a bus with half the speed.
dd times for a cooked 500 MB file:
              New system   Old system
read (avg)        8 s          5 s
write (avg)      25 s         22 s
But once I start the cp, the speed at the end on the new system averages 6-7 MB/s, while the old system is around 11-12 MB/s.
One thing that is fast on the new system is Ignite tapes; they are made at 17-20 MB/s on an LTO3 drive.
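A quick way to reproduce that kind of throughput number is simply to time the whole copy and divide; for example (the paths here are placeholders):
timex cp -pr /data/somedir /eod/somedir   # wall-clock time of the copy
# MB/s = MB copied / real seconds, e.g. 16384 MB / ~2400 s is roughly 6.8 MB/s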
08-14-2007 07:23 AM
Re: Poor Disk performance for cp,
Running those tests I mentioned only makes sense when you compare each step along the way on the old and new boxes. I did notice that your buffer cache is working well: note the dramatic reduction in time between the first read or write of a file and subsequent operations. Anyway, you should do each of those steps on both boxes and look for significant differences. The other things I would look at are buffer cache tuning differences between the two boxes, and the scsi_max_qdepth setting on each disk; compare the old box to the new.
e.g. scsictl -a /dev/rdsk/c1t5d0
See man scsictl for details. If not overridden, the value defaults to the global value set by the kernel tunable scsi_max_qdepth.
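To compare the settings themselves, something like this works; the device path and queue depth are just examples, and on 11i v1 kmtune takes the place of kctune:
scsictl -m queue_depth=16 /dev/rdsk/c1t5d0   # example: override the queue depth for one disk
kctune scsi_max_qdepth                       # global default (kmtune -q scsi_max_qdepth on 11i v1)
kctune dbc_max_pct                           # buffer cache upper limit, if the dbc_* tunables are in use
kctune dbc_min_pct                           # buffer cache lower limit; compare these between the boxes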
If it ain't broke, I can fix that.