11-19-2003 03:22 AM
Performance question copying files between servers on EMC disks.
Fm   NIC    FA       To   NIC    FA       FS      SZ(GB)  Time  Speed
--   ----   ------   --   ----   ------   ------  ------  ----  ---------
w6   GigE   4a/13a   w8   GigE   4a/13a   /u413   7.6     24m   19 GB/hr
w7   ---    4a/13a   w7   ---    4a/13a   /u501   3.4     15m   13 GB/hr
a3   100    3a/14a   w7   1000   4a/13a   /u78    3.2      8m   24 GB/hr
a2   100    4a/13a   w7   1000   4a/13a   /u31    3.9     15m   16 GB/hr
a5   100    3a/14a   w7   1000   4a/13a   /u31    3.9      8m   29 GB/hr
a5   100    3a/14a   w7   1000   4a/13a   /u31    7.6     12m   38 GB/hr*
The fastest copy I get is across the slowest LAN where the FA port connections are different.
The slowest copy I get is on the VERY SAME SERVER. No LAN involvement at all. But I'm copying from the same FAs to the same FAs (and the same HBAs to the same HBAs). I was kind of hoping that PowerPath would take care of this.
It appears that the HBAs and FAs are the limiting factor here, not the LAN, or the disks.
What do you think? Has anyone done a study of transfer rates on filesystem copies?
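For anyone who wants to try the same comparison, something like this would separate the disk leg from the LAN leg (the device file, hostnames, and file names are placeholders, and rcp is just one possible copy method):

# Disk leg only: sequential read from the raw LUN, no LAN involved
timex dd if=/dev/rdsk/c4t0d1 of=/dev/null bs=1024k count=4096

# LAN leg only: push the same 4 GB across the wire, no disk on either end
timex sh -c 'dd if=/dev/zero bs=1024k count=4096 | remsh w8 "cat > /dev/null"'

# The actual filesystem copy, for comparison
timex rcp /u413/bigfile w8:/u413/bigfile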
11-19-2003 03:28 AM
Re: Performance question copying files between servers on EMC disks.
It appears the HBAs are the limiting factor, since reads and writes are going through the same HBAs.
I haven't seen PowerPath do what it is supposed to do when required, even when properly patched.
I rely on PV-links in this situation.
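For reference, a minimal PV-links setup is just an alternate device file for the same LUN added to the volume group (device files and VG name here are placeholders):

# One-time setup of the volume group device file
mkdir /dev/vg01
mknod /dev/vg01/group c 64 0x010000

# Primary path first, then the alternate path to the same LUN;
# LVM treats the second device file as a PV-link, used only on failover
vgcreate /dev/vg01 /dev/dsk/c4t0d1
vgextend /dev/vg01 /dev/dsk/c6t0d1

# vgdisplay -v lists the alternate link under the PV
vgdisplay -v /dev/vg01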
11-19-2003 06:31 AM
Re: Performance question copying files between servers on EMC disks.
Basically, only one path will be used no matter the config if only PV-links is in play.
PV-links is dumb and will only send data down one path; the alternate path(s) are only used, one at a time, if the primary path fails. Even if there are two or more paths, data will only go down one. Sure, you can alternate which primary path each disk in the LVOL uses, but data will never go down more than one path per disk...
However, PowerPath or SecurePath will use all paths to a particular LUN, all the time. PV-links will never be faster than one of these packages utilizing every path available.
If you only have two paths to disk, as I do, I guarantee you that PowerPath utilizing BOTH paths will always be faster than PV-links, which only uses ONE of the two... If with PowerPath there are eight paths to disk, then eight paths are used.
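If you want to confirm what PowerPath is actually doing with your paths, something along these lines should show it (assuming a standard powermt install):

# Show every path PowerPath sees per Symmetrix device, with its policy
powermt display dev=all

# Symmetrix-optimized policy spreads I/O across all live paths
powermt set policy=so dev=all

# Refresh the display every 2 seconds during a copy; with load balancing
# working, queued I/Os should spread across paths instead of stacking on one
powermt display dev=all every=2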
11-19-2003 08:17 AM
Re: Performance question copying files between servers on EMC disks.
A few questions first.
1) Do you have mincache=direct,convosync=direct? Utilizing these will dramatically speed up your reads/writes... (see the quick check after this list)
2) Did you perform these cp commands concurrently or individually? And was the overall load on the box similar in all cp attempts?
3) On a5, there appears to be a discrepancy even when copying to/from the same FA/HBA in your last two examples... There seems to be a 9 GB/hr difference there.
4) Also, the way your frame is laid out may have a small bit to do with it. If the paths are on the same physicals, you will see a slower read/write. If they are on different physicals on different directors/FAs, it should be a bit faster.
5) Do you have any disks configured as mirrored or RAID-S for striping? That is, are some of your database filesystems mirrored and some striped?
Post back and I will reply.
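As a quick check before re-running anything, you can see what each filesystem is actually mounted with (mount point below is a placeholder):

# Which options is each VxFS filesystem currently mounted with?
mount -v | grep vxfs

# Current VxFS tunables for a given mount point; discovered_direct_iosz
# is the size above which VxFS switches to direct I/O on its own
vxtunefs /u31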
11-19-2003 08:24 AM
Re: Performance question copying files between servers on EMC disks.
1) Do you have mincache=direct,convosync=direct? Utilizing these will dramatically speed up your reads/writes...
No. I did it once, years ago, and it slowed everything down. I'll never do it again.
2) Did you perform these cp commands concurrently or individually? And was the overall load on the box similar in all cp attempts?
Right. I ran them all several times during the middle of the day, and they are "duplicatable".
3) On a5, there appears to be a discrepancy even when copying to/from the same FA/HBA in your last two examples... There seems to be a 9 GB/hr difference there.
Right. The difference is two concurrent copies (marked with *, which I didn't explain) vs. one at a time. Concurrent is faster; see the sketch after these answers.
4) Also, the way your frame is laid out may have a small bit to do with it. If the paths are on the same physicals, you will see a slower read/write. If they are on different physicals on different directors/FAs, it should be a bit faster.
Different physicals as much as possible. I didn't check the DAs. The FAs are listed in the table, and they are duplicated in the a2/wX cases.
5) Do you have any disks configured as mirrored or RAID-S for striping? That is, are some of your database filesystems mirrored and some striped?
All disks are mirrored. No RAID-S. No striping. The "a5" disks are actually BCV copies of the "a2" disks.
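For the concurrent case, this is the shape of the test (paths are placeholders), timed as a pair:

# Two copies in flight at once; this is the "2 x concurrent" case marked * above
timex sh -c 'cp /u31/file1 /u31copy/file1 &
             cp /u31/file2 /u31copy/file2 &
             wait'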
11-19-2003 08:54 AM
Re: Performance question copying files between servers on EMC disks.
Since it has been a while for you, I would look at redirecting the cache again and rerun your tests. My DBAs swear by this method, and our corporate enterprise solution is to use these settings wherever possible.
It may not work well with DBs other than Oracle, which all of mine are.
Nevertheless, I have always found the HBA to be the limiting factor, as well as copying to/from the SAME LUN or the same physical, which is why we move the caching to the frame and don't use the UNIX cache if at all possible.
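A rough before/after test of that, using a large file and a test mount point (both placeholders):

# Read through the UNIX buffer cache first
timex dd if=/u31/bigfile of=/dev/null bs=1024k

# Remount with direct I/O, then read again and compare
mount -F vxfs -o remount,mincache=direct,convosync=direct /u31
timex dd if=/u31/bigfile of=/dev/null bs=1024k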
11-19-2003 09:14 AM
Re: Performance question copying files between servers on EMC disks.
What is the exact syntax of the line?
Would that be on both ends, or just on the target or the source end?
11-20-2003 02:24 AM
Re: Performance question copying files between servers on EMC disks.
Here is what I use on my EMC DBs...
delaylog,largefiles,nodatainlog,mincache=direct,convosync=direct
It is important to use delaylog and nodatainlog as well.
You can use this command to implement the settings online (the mount point is a placeholder), then edit your /etc/fstab so they persist:
mount -F vxfs -o remount,delaylog,nodatainlog,mincache=direct,convosync=direct /mount_point
This can be done online, but you may want to wait till a maintenance window.
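For the /etc/fstab side, an entry would look something like this (device file and mount point are placeholders):

/dev/vg01/lvol1 /u31 vxfs delaylog,nodatainlog,mincache=direct,convosync=direct 0 2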