
Re: Tuning dd blocksize

 
SOLVED
Pete Randall
Outstanding Contributor

Tuning dd blocksize

 

Pete
18 REPLIES
Pete Randall
Outstanding Contributor

Re: Tuning dd blocksize

Sorry, hit the enter key before I was ready.

I'm copying lvols from one VG to another on a FC60 array. The lvols are 2 GB in size. I tried this with a 1024K block size and it took a lot longer than I would have hoped. Does anybody have any suggestions/stats/benchmarks that might guide me to an optimum copy?

Thanks,
Pete

Pete
James R. Ferguson
Acclaimed Contributor
Solution

Re: Tuning dd blocksize

Hi Pete:

One thing is use the raw device (/dev/rlvol/xxx).

Regards!

...JRF...
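To illustrate JRF's point, a minimal sketch of the raw-device form of the copy; the volume group and lvol names here are hypothetical placeholders, and the scratch-file portion just demonstrates the same dd invocation in a runnable way:

```shell
# Character (raw) device form -- reads and writes bypass the buffer cache,
# so dd moves data in whole bs-sized transfers (names are hypothetical):
#
#   dd if=/dev/vg01/rlvol5 of=/dev/vg02/rlvol5 bs=1024k
#
# Runnable stand-in against scratch files:
SRC=$(mktemp); DST=$(mktemp)
dd if=/dev/zero of="$SRC" bs=64k count=16 2>/dev/null   # 1 MB of test data
dd if="$SRC" of="$DST" bs=1024k 2>/dev/null             # same invocation shape
cmp "$SRC" "$DST"; COPY_OK=$?                           # verify the copy byte-for-byte
rm -f "$SRC" "$DST"
```

Note the `r` prefix in `/dev/vg01/rlvol5`: that is what selects the character device rather than the cached block device.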
Tim D Fulford
Honored Contributor

Re: Tuning dd blocksize

Hi

The text here is blank, so I assume you just want to tune dd blocksize?

I usually only use 64kB. However, I did some work on this a while ago and found that you get very little performance gain above about 512kB-1MB when copying from and to a single disk. It really depends on what you want to do.

If you have RAID 0, then I would use the whole stripe width (say 12 disks with a 64kB segment width ==> 768kB).

If you have a single disk with a VxFS file system on it, 64kB is fine, though theoretically 8kB (the file system block size) should be optimum.

If you have RAID 5, again I would use the whole stripe width, but counting only the data segments (12 disks with 64kB, one segment per stripe holding parity ==> 11 x 64kB = 704kB).

If you are doing a raw-to-raw copy of volume groups I would stick to 64kB, or, if they are RAIDed, the whole stripe.

Tim
-
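Tim's stripe-width arithmetic can be sketched in a few lines of shell; the disk count and segment size below are just his example numbers, to be replaced with your array's actual values:

```shell
DISKS=12; SEG_KB=64                      # example values from the post above
BS_RAID0=$((DISKS * SEG_KB))             # RAID 0: every segment in the stripe is data
BS_RAID5=$(((DISKS - 1) * SEG_KB))       # RAID 5: one segment per stripe holds parity
echo "RAID 0 stripe: ${BS_RAID0}kB   RAID 5 stripe: ${BS_RAID5}kB"
# Then feed the result to dd (hypothetical device names):
#   dd if=/dev/vg01/rlvol1 of=/dev/vg02/rlvol1 bs=${BS_RAID0}k
```

Matching bs to the full stripe means each dd transfer touches every spindle exactly once, instead of splitting one stripe across several smaller I/Os.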
Juan González
Trusted Contributor

Re: Tuning dd blocksize

Hi Pete,

Just three suggestions:

1) Specify raw devices in the if= and of= parameters.

2) Try a bigger bs. I usually use bs=4194304 (the size of my PV's PE).

3) Do not specify ibs and obs separately: that forces dd to re-block every buffer, and performance is poor.

Best regards
Juan
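As a sketch of Juan's settings (the volume names are hypothetical, and 4194304 bytes corresponds to a 4 MB physical extent):

```shell
PE_BYTES=4194304                  # 4 MB, matching the PV's physical extent size
# Raw devices on both sides, one symmetric block size:
#   dd if=/dev/vg01/rlvol5 of=/dev/vg02/rlvol5 bs=$PE_BYTES
# A single bs= sets the read and write sizes together; distinct ibs=/obs=
# values make dd re-block each buffer, which is the slow path Juan warns about.
echo "bs = $((PE_BYTES / 1024))k"        # prints: bs = 4096k
```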
Pete Randall
Outstanding Contributor

Re: Tuning dd blocksize

JRF,

Thanks, I should have checked that. I'm actually copying Informix "chunks" which are symbolic links to the raw logical volume, except when some sleepy, hungover SA comes in on a Sunday and rushes the creation of same. In effect, I'm copying from the raw lvol to the cooked lvol. The destination is also set up as RAID 5 and has a failed drive in it, so that's probably not helping matters, either.

Pete

Pete
Pete Randall
Outstanding Contributor

Re: Tuning dd blocksize

Juan/Tim,

Stripe size/PE size - I'll try that - thanks.

Pete

Pete
Tim D Fulford
Honored Contributor

Re: Tuning dd blocksize

Pete

1 - what is your RAID level?
2 - what is the segment size of the LUNs/disks? (amdsp -l)
3 - use /dev/vgxx/rlvol for dd
4 - as I said before, I use 64kB if I can't be bothered to think about it.
5 - you have caching on the FC60; this will probably interfere with the copy, as the cache will be continually flushed. I do not know if you can turn this off, though ("ammgr -T A:100" and "ammgr -L A:100" might work!!!?)

Tim
-
Pete Randall
Outstanding Contributor

Re: Tuning dd blocksize

Tim,

RAID level on the source is 0/1. Destination is 5. I'll be doing further testing to see if I can improve this. We're coming up on 24 hours to copy 140GB - not what I was hoping for. :^(

Points forthcoming,
Pete

Pete
Pete Randall
Outstanding Contributor

Re: Tuning dd blocksize

Tim,

Segment size is 4 on the source and 16 on the destination.

Pete

Pete
Stefan Farrelly
Honored Contributor

Re: Tuning dd blocksize


We've done lots of dd copies over the years, and with all the stuff we've done I've found 64k to be the best blocksize. But to get the best copy times, always kick off multiple dd's at the same time (background them) - this also improves the overall copy time.
Im from Palmerston North, New Zealand, but somehow ended up in London...
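A runnable sketch of Stefan's batching idea, using scratch files so it can be demonstrated anywhere; against real volumes each backgrounded command would be the raw-device dd shown in the comment:

```shell
# With real volumes, each backgrounded copy would be something like:
#   dd if=/dev/vg01/r$LV of=/dev/vg02/r$LV bs=64k &
TMPDIR=$(mktemp -d)
for N in 1 2 3 4; do                                  # create four small sources
    dd if=/dev/zero of="$TMPDIR/src$N" bs=64k count=4 2>/dev/null
done
for N in 1 2 3 4; do
    dd if="$TMPDIR/src$N" of="$TMPDIR/dst$N" bs=64k 2>/dev/null &  # background each copy
done
wait                                                  # block until the whole batch finishes
COPIED=$(ls "$TMPDIR"/dst* | wc -l | tr -d ' ')
rm -rf "$TMPDIR"
echo "$COPIED copies done"
```

The `wait` with no arguments is what makes this a batch: the script pauses until every backgrounded dd has exited before moving on.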
Pete Randall
Outstanding Contributor

Re: Tuning dd blocksize

Stefan,

We're doing 10 at a time, then waiting for those before the next batch. Too many? Too few?

Pete

Pete
Eric Buckner
Regular Advisor

Re: Tuning dd blocksize

Pete,
Not sure this will be of much help for you, but we are constantly creating new copies of databases for various things. Just going cooked to cooked, I can copy about 300GB in about 2 hours. I'm not using dd, just plain old cp's with a wrapper script. I have it set to spawn 12 cp's in the background and sleep until one completes, at which point it spawns another. Since we have so many datafiles to copy, this works out quite well for us. If you are interested I can forward you the scripts that I use to do this.

Eric
Time is not a test of the truth.
Stefan Farrelly
Honored Contributor

Re: Tuning dd blocksize

Hi Pete,

10 at a time is fine - unless your server can cope with more. We used to kick off 20-30 at a time, but that was on a server with about 8 I/O controllers and it took that many to max it out. Beyond 20-30 we wouldn't get any speed improvement.
Im from Palmerston North, New Zealand, but somehow ended up in London...
Pete Randall
Outstanding Contributor

Re: Tuning dd blocksize

Eric,

I'm not sure your approach would apply, either, but I'd love to take a look at it. You can attach it here (so everyone can share) or you can email me at prandall@holstein.com. Thanks.

Stefan,

Thanks for the comparison. I think 10 is probably about as high as we can go. We've got two fibre channels compounded by the Brocade switch to give us four paths but I don't think they all get used. I'll be doing some more testing once I get the failed drive replaced - I think that's really holding things up.

Pete

Pete
Eric Buckner
Regular Advisor

Re: Tuning dd blocksize

Pete,
I attached the script. It uses a control file that lists the file name, the fully qualified path to the source, and the fully qualified path to the destination. With a bit of modification to that control file and some changes to the script, I am thinking you could use this for your dd copies.

Below is a piece of the control file, just to give you an idea of what it looks like.

fact1_ts01.dbf|/u/AIMTEST/dta1/fact1_ts01.dbf|/u/AIMBNCH/dta1/fact1_ts01.dbf
fact1_ts02.dbf|/u/AIMTEST/dta1/fact1_ts02.dbf|/u/AIMBNCH/dta1/fact1_ts02.dbf
fact1_ts03.dbf|/u/AIMTEST/dta1/fact1_ts03.dbf|/u/AIMBNCH/dta1/fact1_ts03.dbf
fact1_ts04.dbf|/u/AIMTEST/dta1/fact1_ts04.dbf|/u/AIMBNCH/dta1/fact1_ts04.dbf
fact1_ts05.dbf|/u/AIMTEST/dta1/fact1_ts05.dbf|/u/AIMBNCH/dta1/fact1_ts05.dbf
fact1_ts06.dbf|/u/AIMTEST/dta1/fact1_ts06.dbf|/u/AIMBNCH/dta1/fact1_ts06.dbf
fact1_ts07.dbf|/u/AIMTEST/dta1/fact1_ts07.dbf|/u/AIMBNCH/dta1/fact1_ts07.dbf
fact1_ts08.dbf|/u/AIMTEST/dta1/fact1_ts08.dbf|/u/AIMBNCH/dta1/fact1_ts08.dbf
fact1_ts09.dbf|/u/AIMTEST/dta1/fact1_ts09.dbf|/u/AIMBNCH/dta1/fact1_ts09.dbf
fact1_ts10.dbf|/u/AIMTEST/dta1/fact1_ts10.dbf|/u/AIMBNCH/dta1/fact1_ts10.dbf
fact1_ts11.dbf|/u/AIMTEST/dta1/fact1_ts11.dbf|/u/AIMBNCH/dta1/fact1_ts11.dbf
fact1_ts12.dbf|/u/AIMTEST/dta1/fact1_ts12.dbf|/u/AIMBNCH/dta1/fact1_ts12.dbf

Hope it is at least somewhat useful.

Eric
Time is not a test of the truth.
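Eric's attachment isn't preserved here, but a minimal sketch of a wrapper with the behavior he describes (pipe-delimited control file, a capped pool of backgrounded cp's, spawn another as each finishes) might look like this; all file names below are invented for the demo, and `wait -n` assumes bash 4.3 or later:

```shell
MAX=3                                    # Eric runs 12 at once; 3 keeps the demo small
WORK=$(mktemp -d)
CTL="$WORK/copy.ctl"
for N in 1 2 3 4 5; do                   # build a tiny demo control file
    echo "demo data $N" > "$WORK/src$N"
    echo "f$N.dbf|$WORK/src$N|$WORK/dst$N" >> "$CTL"
done
RUNNING=0
while IFS='|' read -r NAME SRC DST; do   # name|source|destination, as in the excerpt
    cp "$SRC" "$DST" &                   # background each cp
    RUNNING=$((RUNNING + 1))
    if [ "$RUNNING" -ge "$MAX" ]; then
        wait -n                          # sleep until one copy completes (bash 4.3+)
        RUNNING=$((RUNNING - 1))
    fi
done < "$CTL"
wait                                     # drain the final batch
DONE=$(ls "$WORK"/dst* | wc -l | tr -d ' ')
rm -rf "$WORK"
echo "$DONE of 5 files copied"
```

Swapping the `cp` line for a raw-device `dd` (and listing lvols instead of datafiles in the control file) would adapt the same pool structure to Pete's copies.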
Pete Randall
Outstanding Contributor

Re: Tuning dd blocksize

Thanks, Eric.

Pete

Pete
Pete Randall
Outstanding Contributor

Re: Tuning dd blocksize

Using the raw device was the *obvious* answer (excuse me while I kick myself again). After I got the bad drive replaced, I tried one 2GB lvol and it finished in 5 minutes compared to 3 hours for 10 simultaneous copies. Further testing tonight.

Thanks to all,
Pete


Pete
Pete Randall
Outstanding Contributor

Re: Tuning dd blocksize

Well, we managed to shave a measly 22 hours off the copies by switching to where we should have been - raw lvols. I'm going to continue playing to see if I can get better than "real 3:07:06.4", but I can live with that, if need be. If I come up with any revelations, I'll post them here, otherwise: CASE CLOSED!

Thanks again, all.

Pete

Pete