Operating System - HP-UX

Performance question copying files between servers on EMC disks.

 
Stuart Abramson_2
Honored Contributor


Here's an interesting thing I noticed today while timing filesystem copies on HP-UX servers (a2, a3, a5, w6, etc. are UNIX servers):

Fm   NIC    FA       To   NIC    FA       FS      SZ (GB)   Time   Speed
--   ----   ------   --   ----   ------   -----   -------   ----   --------
w6   GigE   4a/13a   w8   GigE   4a/13a   /u413   7.6       24m    19 GB/hr
w7   ---    4a/13a   w7   ---    4a/13a   /u501   3.4       15m    13 GB/hr
a3   100    3a/14a   w7   1000   4a/13a   /u78    3.2        8m    24 GB/hr
a2   100    4a/13a   w7   1000   4a/13a   /u31    3.9       15m    16 GB/hr
a5   100    3a/14a   w7   1000   4a/13a   /u31    3.9        8m    29 GB/hr
a5   100    3a/14a   w7   1000   4a/13a   /u31    7.6       12m    38 GB/hr *

(NIC speeds are in Mb/s; FA = Symmetrix front-end adapter ports; SZ = amount copied, in GB.)

The fastest copy I get is across the slowest LAN, where the FA port connections are different.

The slowest copy I get is on the VERY SAME SERVER. No LAN involvement at all. But I'm copying from the same FAs to the same FAs (and the same HBAs to the same HBAs). I was kind of hoping that PowerPath would take care of this.

It appears that the HBAs and FAs are the limiting factor here, not the LAN or the disks.

What do you think? Has anyone done a study of transfer rates on filesystem copies?
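
For anyone who wants to reproduce this kind of measurement: each row boils down to a recursive copy and a clock. Something like the following would do it (timex and rcp are just one way to run it; the paths here are made-up examples, not my real ones):

timex cp -r /u501/data /u501/copy       # same-server copy (the w7 -> w7 row)
timex rcp -r /u31/data w7:/u31/data     # server-to-server copy over the LAN

GB/hr is then just the amount copied divided by the elapsed (real) time.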
7 REPLIES
Ashwani Kashyap
Honored Contributor

Re: Performance question copying files between servers on EMC disks.

The FAs on the EMC end have a very high fan-out ratio, so I don't think the FAs are the limiting factor.

It appears the HBAs are the limiting factor, since reads and writes are going through the same HBAs.

I haven't seen PowerPath do what it is supposed to do when required, even when patched properly.

I rely on PV links in this situation.
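
For reference, a PV link is just a second device file for the same LUN added to the volume group. A minimal sketch, with made-up device names:

vgextend /dev/vg01 /dev/dsk/c7t0d1   # c7t0d1 is an alternate path to a LUN already in vg01 via c5t0d1
vgdisplay -v /dev/vg01               # the second path shows up as an "Alternate Link"

LVM sends all I/O down the first path and fails over to the alternate only if the primary dies.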
Todd McDaniel_1
Honored Contributor

Re: Performance question copying files between servers on EMC disks.

Ashwani,

Basically, only one path will be used no matter the config if only PVlinks is utilized.

PVlinks is dumb and will only send data down one path; the alternate path(s) are only used, one at a time, if the primary path fails. Even if there are two or more paths, data will only go down one path. Sure, you can alternate which path the data goes down for each disk in the lvol, but data will never go down more than one path at a time.

However, PowerPath or SecurePath will use all paths to a particular LUN, all the time. PVlinks will never be faster than one of these packages utilizing every path available.

If you only have two paths to disk, as I do, I guarantee you that PowerPath utilizing BOTH paths will always be faster than PVlinks, which only uses ONE of the two. If with PowerPath there are eight paths to disk, then eight paths are used.
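
You can see this for yourself with the PowerPath CLI (assuming PowerPath is installed and licensed on the host; device names will vary):

powermt display dev=all     # every path to every LUN, with per-path I/O stats
powermt display paths       # summary of path counts per HBA

Under PVlinks, vgdisplay -v on the volume group shows the extra paths only as "Alternate Link" entries, and they carry no I/O until the primary fails.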



Unix, the other white meat.
Todd McDaniel_1
Honored Contributor

Re: Performance question copying files between servers on EMC disks.

Stuart,

A few questions first.

1) Do you have mincache=direct,convosync=direct? Utilizing these will dramatically speed up your reads/writes.
2) Did you perform these cp commands concurrently or individually? And was the overall load on the box similar in all cp attempts?
3) On a5, there appears to be a discrepancy even when copying to/from the same FA/HBA, as in your last two examples. There seems to be a 9 GB/hr difference there.

4) Also, the way your frame is laid out may have a small bit to do with it. If the paths are on the same physicals, you will see slower reads/writes. If they are on different physicals on different directors/FAs, it should be a bit faster. (One way to check is the syminq sketch after this list.)

5) Do you have any disks configured as mirrored or RAID-S for striping? That is, are some of your database filesystems mirrored and some striped?
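
If you have EMC Solutions Enabler (SYMCLI) on the host, syminq maps each HP-UX device file to its Symmetrix device number, so you can see which filesystems land on the same devices. A sketch; your device paths will differ:

syminq /dev/rdsk/c5t0d1    # Symm ID and device number behind this one path
syminq                     # or list every device the host can see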

post back and I will reply.
Unix, the other white meat.
Stuart Abramson_2
Honored Contributor

Re: Performance question copying files between servers on EMC disks.

1) Do you have mincache=direct,convosync=direct? Utilizing these will dramatically speed up your reads/writes.

No. I did it once, years ago, and it slowed everything down. I'll never do it again.

2) Did you perform these cp commands concurrently or individually? And was the overall load on the box similar in all cp attempts?

Right. I ran them all several times during the middle of the day, and they are "duplicatable".

3) On a5, there appears to be a discrepancy even when copying to/from the same FA/HBA, as in your last two examples. There seems to be a 9 GB/hr difference there.

Right. The difference is two concurrent copies (the row marked with *, which I didn't explain) vs. one at a time. Concurrent is faster.

4) Also, the way your frame is laid out may have a small bit to do with it. If the paths are on the same physicals, you will see slower reads/writes. If they are on different physicals on different directors/FAs, it should be a bit faster.

Different physicals, as much as possible. I didn't check the DAs. The FAs are listed in the table, and they're the same on both ends in the a2 and wX cases.

5) Do you have any disks configured as mirrored or RAID-S for striping? That is, are some of your database filesystems mirrored and some striped?

All disks are mirrored. No RAID-S. No striping. The "a5" disks are actually BCV copies of the "a2" disks.
Todd McDaniel_1
Honored Contributor

Re: Performance question copying files between servers on EMC disks.

I'm not sure I can offer a good suggestion, since your data is subject to the UNIX buffer cache. I would consider that a limiting factor in itself.

Since it has been a while since you tried it, I would set the direct I/O options again and rerun your tests. My DBAs swear by this method, and our corporate enterprise solution is to use these settings wherever possible.

It may not work well with DBs other than Oracle, which is all I run.

Nevertheless, I have always found the HBA to be the limiting factor, as well as copying to/from the SAME LUN or same physical, which is why we move the caching to the frame and don't use the UNIX buffer cache if at all possible.
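
A quick way to check whether a filesystem is already mounted with direct I/O (the mount point here is just an example):

mount -v | grep /u31    # look for mincache=direct,convosync=direct in the option list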
Unix, the other white meat.
Stuart Abramson_2
Honored Contributor

Re: Performance question copying files between servers on EMC disks.

Okay. I'll try it again.

What is the exact syntax of the line?

Would that be on both ends, or just on the source or the target end?
Todd McDaniel_1
Honored Contributor

Re: Performance question copying files between servers on EMC disks.

Stuart, I would put it on every box involved that has a DB on an EMC frame or other storage.

Here is what I use on my EMC DBs...

delaylog,largefiles,nodatainlog,mincache=direct,convosync=direct

It is important to use delaylog and nodatainlog as well.

You can use this command to implement the settings (substitute your own mount point), then edit your /etc/fstab:

mount -F vxfs -o remount,delaylog,nodatainlog,mincache=direct,convosync=direct /mount_point

This can be done online, but you may want to wait for a maintenance window.
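
For the permanent change, the /etc/fstab entry would look something like this (the device file and mount point are examples, not yours):

/dev/vg01/lvol1 /u31 vxfs delaylog,nodatainlog,mincache=direct,convosync=direct 0 2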
Unix, the other white meat.