10-18-2006 11:06 PM
LVM/Mirroring - uneven I/O load on Arrays
When using LVM to mirror disks across two separate storage arrays, is there any way to set which disk should be the primary read source? We have a situation where one of our storage arrays (an EMC Symmetrix DMX-3) carries about double the read I/O load of the other. Write I/O is fairly evenly balanced, since LVM has to write data to both disks in the mirror. At our current rate of I/O growth, one array will soon hit a performance limit, which will slow down Oracle response times dramatically.
How does LVM choose which disk it reads from?
Thanks in advance.
Stephen
(P.S. I'm not a Unix admin, so I can't provide much technical detail about the host.)
10-18-2006 11:30 PM
Re: LVM/Mirroring - uneven I/O load on Arrays
In reality they are both read sources. As far as I remember, the MirrorDisk/UX product chooses which disk is read on an availability basis. I do not believe it is user configurable.
Generally this setup improves performance.
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
10-18-2006 11:34 PM
Re: LVM/Mirroring - uneven I/O load on Arrays
Let gurus confirm this.
10-18-2006 11:50 PM
Re: LVM/Mirroring - uneven I/O load on Arrays
You have EMC storage... do you have EMC PowerPath? You could easily set this up, and it works great for such problems.
The cost of PowerPath has come down significantly from what it was when we introduced it here. Call your EMC salesperson and see if they will let you 'test drive' the software. That might help management see the benefit and be willing to spend the funds on what really is a great product.
Kindest Regards,
Rita
10-19-2006 12:01 AM
Re: LVM/Mirroring - uneven I/O load on Arrays
Thanks for your replies so far.
PowerPath won't help here, as it would only load-balance between alternate paths to each mirror member. It won't load-balance between mirror copies on two different Symmetrix arrays.
(Incidentally, we have a site license for PowerPath, but our Unix admins say "no go". So we only use it on our Windows boxes against CLARiiON storage.)
Regards,
Stephen
10-19-2006 12:04 AM
Re: LVM/Mirroring - uneven I/O load on Arrays
LVM always reads first from the top PV/disk/LUN listed in the volume group (as shown by vgdisplay -v). If that disk is busy, it then reads from the mirror.
You can try to mitigate the uneven load by un-mirroring some (not all) of the heavily used volumes to take load off the primary path, removing their PVs from the volume group, then re-adding the PVs at the end of the volume group and re-mirroring - see the sketch below.
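A minimal sketch of that reorder, using hypothetical names (vg01, lvol3 and c10t0d0 are placeholders for your own VG, LV and the PV being demoted; note that every LV holding extents on that PV must be un-mirrored first, or vgreduce will refuse):
vgdisplay -v /dev/vg01 | grep "PV Name"          # check current PV order; the first PV is the preferred read source
lvreduce -m 0 /dev/vg01/lvol3 /dev/dsk/c10t0d0   # drop the mirror copy held on the PV to be demoted
vgreduce /dev/vg01 /dev/dsk/c10t0d0              # remove the now-empty PV from the volume group
vgextend /dev/vg01 /dev/dsk/c10t0d0              # re-add it; it now sits at the end of the PV list
lvextend -m 1 /dev/vg01/lvol3 /dev/dsk/c10t0d0   # re-create the mirror copy on that PV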
Actually, back in the days of mirrored disks, I was a fan of doing it the unbalanced LVM way, because of an untested/wild/silly hypothesis that one disk failing at a time was more likely than both failing at once. Not that that matters any more.
10-19-2006 12:29 AM
Re: LVM/Mirroring - uneven I/O load on Arrays
I posted this thread about issues with an Oracle to Oracle migration on a DMX3:
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1067723
As for setting which disk in a mirror is the source - LVM mirroring doesn't work that way.
Rgds...Geoff
10-19-2006 03:07 AM
Re: LVM/Mirroring - uneven I/O load on Arrays
LVM does not balance the read I/O until the disk gets busy enough to justify it, so you can't do exactly what you want...
Say you have 4 (or more) physical volumes, 2 mirrored onto 2: A[td0] B[td0] A'[td1] B'[td1], where A' is the mirror of A and td0 and td1 are the two FC ports the disks are presented down. Under this scenario td0 will receive most reads and all writes, while td1 will receive a few reads and all writes. In the lvmtab it probably looks like:
/dev/dsk/c10t0d0 [A]
/dev/dsk/c10t0d1 [B]
/dev/dsk/c12t0d2 [A']
/dev/dsk/c12t0d3 [B']
You can load-balance(ish) the fibre paths td0 and td1 by simply reordering your disks in LVM (vgreduce & vgextend) to get A[td0] B[td1] A'[td1] B'[td0] (you can also use pvchange, but that does not survive reboots), so in lvmtab terms:
/dev/dsk/c10t0d0 [A]
/dev/dsk/c12t0d1 [B]
/dev/dsk/c12t0d2 [A']
/dev/dsk/c10t0d3 [B']
In essence you will NOT be able to load-balance the disks themselves, but you can make the fibre channel paths more balanced, as sketched below.
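A rough sketch of that swap for disk B, assuming each LUN is presented down both FC ports as alternate links (PVLinks) and vg01 is a placeholder VG name:
vgreduce /dev/vg01 /dev/dsk/c10t0d1   # drop B's current primary link; the alternate c12t0d1 takes over as primary
vgextend /dev/vg01 /dev/dsk/c10t0d1   # re-add the old path; it comes back as the alternate link
The same pair of commands against /dev/dsk/c12t0d3 would then move B' onto td0.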
Going back to PowerPath: are you positive you can't balance the I/O over the FC? The point of PowerPath (SecurePath etc.) is to load-balance the requests over the SAN channels (as I've sort of done here).
Regards,
Tim
10-19-2006 03:18 AM
Re: LVM/Mirroring - uneven I/O load on Arrays
Regards
Tim
10-20-2006 03:20 AM
Re: LVM/Mirroring - uneven I/O load on Arrays
Mirroring at the OS level just creates more I/O operations on the server.
10-20-2006 04:00 AM
Re: LVM/Mirroring - uneven I/O load on Arrays
With multiple lvols, flip-flop your primaries and your mirror devices.
Even though PowerPath doesn't benefit this particular situation, it certainly has huge benefits for multiple paths, with load balancing and HA. I sure wonder why the UX admin says no go. They are missing the boat!! Paying a pound of money for the product and not using it to its fullest?
10-20-2006 07:19 AM
Re: LVM/Mirroring - uneven I/O load on Arrays
And yes, I appreciate that you have two separate arrays. You don't mention whether they are at separate sites being mirrored/replicated via SRDF... so I'll assume they are at one site.
IMHO the only way you can get any relief is via PowerPath. Why your UNIX admin won't use it on UNIX is beyond my understanding...
I have it on every platform and storage array that will run it. When we put it on UNIX we saw noticeable I/O improvements. We're a solid Oracle shop, and our users saw immediate relief when we did this. We run about a 95% read / 5% write ratio on our biggest production boxes.
Rgrds,
Rita
10-20-2006 08:26 AM
Re: LVM/Mirroring - uneven I/O load on Arrays
"There are many processes that issue fairly small I/Os to these volumes and they do pretty well. But the processes that are doing large I/Os are not.
There are many processes that issue reads of 1MB or close to it. These reads result in multiple physical IOs both due to striping and to alignment within LVM. This in itself is not bad since the I/Os are issued in parallel and so theoretically the multiple smaller I/Os would complete faster than one large one.
However, there are two issues that come into play that cause complications.
First is the pathing. It looks like there is some pathing software that is being used -- perhaps EMC power path. This causes the multiple IOs to go down different paths and may confuse the read ahead algorithms on the array so that prefetching is not as efficient. Thus resulting in slower service times.
Second, the read system call time is only as fast as the slowest component. If a 1MB read is broken into 10 different pieces the final read time is only as fast as the slowest one. It is not usual for an occasional IO to take longer than the rest. If 9 physical reads took 2ms and the tenth one took 20ms, the average is 3.8 ms, which is not bad. If each physical I/O is one logical request your logical IO average is 3.8 ms. However, if it takes 10 physical IOs to complete one logical IO and 1 out of ten takes 20ms, then your logical request average is 20ms even though the average physical IO time is 3.8 ms.
I think larger stripes on these volumes would help this workload. If their normal workload includes large I/Os to these volumes they may want to consider changing the striping. If the migration is the only time they will be issuing large I/Os then perhaps they can continue to use the 128k stripes. "
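To make the latency arithmetic above concrete (same illustrative numbers as in the quote, checked with bc):
echo "(9*2 + 20) / 10" | bc -l   # average physical I/O time: 3.8 ms
# ...but a logical read split into those 10 physical I/Os completes only when the
# slowest piece does, so the logical read still takes 20 ms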
Yes, we are running PowerPath - and yes, we will be testing the difference in heavy reads with and without PP running.
Rgds...Geoff
10-22-2006 07:31 PM
Re: LVM/Mirroring - uneven I/O load on Arrays
Thanks again for all your responses.
Dave: the Symms are in two different locations (not very far apart). We mirror across both in case we "lose" one location. We don't have SRDF ($$$).
Tim (both of you :-) ): apparently we are already alternating between primary and mirror devices - when the volume groups were created, the second Symmetrix was specified as the first device for half of the volumes. And you're preaching to the converted on PowerPath! But the decision to implement it is not in my hands.
Rita: our setup is quite similar - Oracle with a very heavy read percentage. If you say PowerPath brought immediate performance benefits, that's an argument we could use.
Geoff: I think our I/Os during normal workload are quite small, so I'm not sure changing the stripe size would make a significant difference.
Thanks,
Stephen