08-30-2005 03:11 PM
LVM Mirroring Performance
When we mirror one volume group, the mirroring proceeds at about 20 extents/minute. On a second volume group with many more PVs, we are only getting 10 extents/minute.
Although syncing up a mirrored LV does not appear to have a major impact on server performance (you can see it on GPM, but you have to look for it), the DBAs and end-users report a 50% drop in performance while the extents are being synced up. This observation has also been made during past migrations.
Currently, we are facing 72 hours to fully sync up the larger volume group. If we have to break up the process (so that mirror syncing is done during off-peak periods), it may take over a week.
What we would like to be able to do is control the rate at which the LVs are synced up - making it sync faster during off-peak periods and slower during peak times. Is there a way to do this?
08-30-2005 03:43 PM
Re: LVM Mirroring Performance
Once you issue the 'lvextend -m 1' command, everything is out of your control.
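There is no documented knob to throttle the resync rate itself, but you can control *when* each resync happens by mirroring one LV at a time instead of the whole VG at once, since each `lvextend -m 1` performs that LV's resync as it runs. A minimal dry-run sketch - the VG, LV, and PV names are placeholders, not from this thread:

```shell
# Build a per-LV mirroring plan without running anything; each
# 'lvextend -m 1' line can then be scheduled in an off-peak window.
# All device names below are hypothetical placeholders.
VG=/dev/vg01
NEW_PV=/dev/dsk/c9t0d0
PLAN=""
for lv in lvol1 lvol2 lvol3; do
    PLAN="$PLAN
lvextend -m 1 $VG/$lv $NEW_PV"
done
echo "$PLAN"
```

Running the printed lines one at a time (or a few per night) spreads the resync load across as many off-peak windows as you need.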
08-30-2005 05:13 PM
Re: LVM Mirroring Performance
Something to consider as the long weekend is on the horizon.
UNIX because I majored in cryptology...
08-30-2005 08:33 PM
Re: LVM Mirroring Performance
I had a similar migration from a Symmetrix 8530 to a DMX-800.
We used a mixed solution for the migration because you can't use mirroring in all scenarios:
- You can't use it if the VG is in Shared Mode.
- You can't use it if the lvol is LVM striped.
- You can't use it if MAX_PV in your VG is near the limit and you're not able to add the disks from the new storage.
On the servers where I used mirroring, performance was not a problem. Be careful, when adding the new PVs to the VG, to use a different HBA - set it up so that one HBA is used for READs and the other for WRITEs.
On servers where mirroring was not possible, we used SRDF (from EMC) to replicate volumes from one storage array to the other. This has no performance impact at all. You'll need some downtime to enable the replicated volumes on your servers, but carefully planned, it only takes a few minutes.
If you have Service Guard, don't forget the Lock Disks issue!
Enjoy :)
Pedro
08-30-2005 11:32 PM
Re: LVM Mirroring Performance
e.g.
dd if=/dev/vga/rlvol1 of=/dev/vgb/rlvol1 bs=64k
You could perform this on multiple LVs in parallel depending how your PVs are laid out on the array.
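The parallel variant is easy to script. A dry-run sketch that only prints the commands (the LV names are placeholders - substitute your own, grouped so parallel copies don't land on the same physical spindles):

```shell
# Emit one raw-device copy per LV, backgrounded, followed by 'wait';
# pipe the output to sh to actually run them in parallel.
# LV names are hypothetical placeholders.
CMDS=$(for lv in rlvol1 rlvol2 rlvol3; do
    echo "dd if=/dev/vga/$lv of=/dev/vgb/$lv bs=64k &"
done
echo "wait")
echo "$CMDS"
```

Note this only works with the LVs unmounted (or the database down), since dd copies the raw device with no notion of in-flight filesystem changes.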
08-31-2005 01:26 AM
Re: LVM Mirroring Performance
I'm pretty sure I know you from years and years back, so - good to hear that you're about.
For speed, I think you're going to have to give up on mirroring. I'm thinking that if you use vxdump you will have the best luck from a speed standpoint. But that will have to be during down periods, as your databases will have to be cold for the duration of the copy - which I'm guessing is what you're trying to avoid.
Another possibility (if you're using Oracle) is to have your DBAs clone your database from a weekend cold backup off your tape storage system and, without "open"ing it, apply archive logs against it from your live environment. When you're ready for your cutover, just switch out the last archive logs from your database to force out the last changes, and then shut down the original db. Then apply these last changes (as they come out of the production DB) to your still-unopened database from the archive logs, then "open" it. For your DBAs this would be much like a recovery of their live system. I've used this technique before to minimize downtime when cutting over to new servers; it should work equally well for cutting over to new storage.
The beauty of it is that you can build and prepare it all week long after the weekend cold backups run, without affecting production, and the total cut-over time for your go-live transition is only about 15 minutes or so (not counting whatever your applications need in order to connect to the new database - that could require a lot of planning, or it could be as easy as changing one tnsnames.ora file, depending).
08-31-2005 01:28 AM
Re: LVM Mirroring Performance
lvextend -m 1
is one of the few commands that warns you it is going to take some time. That is because it does.
You might want to measure transfer rates on your SAN to make sure there is no bottleneck making things worse.
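A crude way to spot such a bottleneck is to time a large sequential read with dd. On HP-UX you would point the input at a raw PV on the SAN (a /dev/rdsk path); the self-contained demo below uses a scratch file instead, so the paths are placeholders:

```shell
# Time a sequential read and report an approximate duration.
# To test the SAN path, replace IN with a raw PV such as
# /dev/rdsk/cXtYdZ (hypothetical placeholder) and skip the setup dd.
IN=/tmp/ddtest.$$
dd if=/dev/zero of="$IN" bs=1024k count=8 2>/dev/null   # setup: 8 MB scratch file
START=$(date +%s)
dd if="$IN" of=/dev/null bs=1024k 2>/dev/null           # the timed read
END=$(date +%s)
ELAPSED=$((END - START))
echo "read 8 MB in ${ELAPSED}s"
rm -f "$IN"
```

Comparing the resulting MB/s figure against what the array and HBAs are rated for tells you whether the resync is fabric-bound or simply LVM pacing itself.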
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
08-31-2005 02:10 AM
Re: LVM Mirroring Performance
You'll have to decide between a slow replication with impact on applications, or a complete database shutdown and disk copies (cpio -p, fbackup|frecover, dd, etc.) in a much shorter time.
Bill Hassell, sysadmin
08-31-2005 02:27 AM
Re: LVM Mirroring Performance
08-31-2005 04:43 AM
Re: LVM Mirroring Performance
EMC wants to stripe the extents as they are being moved over to the new frame, so we will end up with the data striped across metavolumes that are themselves striped within the DMX frame (so we will end up with double-striped filesystems). EMC wants to do the striping in pairs of metavolumes, so we set up the lvmpvg file so that the extents are arranged in pairs on the destination frame.
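Physical volume groups are declared in /etc/lvmpvg. A hypothetical layout pairing the destination disks into two PVGs - the VG name and device paths below are placeholders, not the actual configuration:

```
VG  /dev/vg01
PVG PVG0
/dev/dsk/c5t0d0
/dev/dsk/c5t1d0
PVG PVG1
/dev/dsk/c6t0d0
/dev/dsk/c6t1d0
```

With PVG-strict allocation, LVM then places extents alternately across the named groups, giving the paired arrangement described above.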