Operating System - HP-UX

LVM/Mirroring - uneven I/O load on Arrays

 
Stephen Kebbell
Honored Contributor

LVM/Mirroring - uneven I/O load on Arrays

Hi all,

When using LVM to mirror disks across two separate storage arrays, is there any way to set which disk should be the primary read source? We have a situation where one of our storage arrays (EMC Symmetrix DMX-3) carries about double the read I/O load of the other. Write I/O is fairly evenly balanced, as LVM has to write data to both disks in the mirror. At our current I/O growth rate, we will soon have a performance impact on one array, which will slow down the Oracle response times dramatically.

How does LVM choose which disk it reads from?

Thanks in advance.

Stephen
(P.S. I'm not a Unix admin, so I can't provide much technical info on the host.)
13 REPLIES
Steven E. Protter
Exalted Contributor

Re: LVM/Mirroring - uneven I/O load on Arrays

Shalom,

In reality they are both the read source. The MirrorDisk/UX product decides which disk is read on an availability basis, as far as I remember. I do not believe it is user-configurable.

Generally this setup improves performance.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
RAC_1
Honored Contributor

Re: LVM/Mirroring - uneven I/O load on Arrays

I think it depends on the sequence in which you did the mirroring. If disk A was mirrored to B, then reads are from A. If you want reads from B, then reduce mirror A and mirror back using B.

Let gurus confirm this.
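
If it does work that way, a minimal sketch of the flip might look like this (volume group, lvol, and device names are invented for illustration, and the data is unmirrored until the re-mirror finishes syncing):

lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c10t0d0   # drop the mirror copy on disk A
lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c10t0d0   # re-mirror back onto A, so B's copy is the original
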
There is no substitute to HARDWORK
Rita C Workman
Honored Contributor

Re: LVM/Mirroring - uneven I/O load on Arrays

I'm going to suggest something different.
You have EMC storage... do you have EMC PowerPath? You could so easily just set this up, and it works great for such problems.

The cost of PowerPath has come down significantly from what it was when we introduced it here. Call your EMC salesperson and see if they will let you 'test drive' the software. That might help management see the benefit and be willing to spend the funds on what really is a great product.

Kindest Regards,
Rita

Stephen Kebbell
Honored Contributor

Re: LVM/Mirroring - uneven I/O load on Arrays

Hi,

Thanks for your replies so far.
PowerPath won't help here, as it would only load-balance between the alternate paths to each mirror member. It won't load-balance between mirror copies on two different Symmetrixes.

(Incidentally, we have a site license for PowerPath, but our Unix admins say "no-go". So we only use it on our Windows boxes with CLARiiON storage.)

Regards,
Stephen
Steve Lewis
Honored Contributor

Re: LVM/Mirroring - uneven I/O load on Arrays

Stephen

It always reads first from the top PV/disk/LUN listed in the volume group (as per vgdisplay -v). If that disk is busy, it then reads from the mirror.

You can try to mitigate the uneven load by un-mirroring some (not all) of the heavily used volumes off the primary path, removing their PVs from the volume group, then re-adding the PVs at the end of the volume group and re-mirroring, as sketched below.
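
A rough sketch of that sequence, with made-up names (untested; vgreduce only accepts a PV once no logical volume has extents left on it, so every mirrored lvol using that disk has to be un-mirrored off it first):

lvreduce -m 0 /dev/vg01/lvol1 /dev/dsk/c10t0d0   # un-mirror off the busy PV
vgreduce /dev/vg01 /dev/dsk/c10t0d0              # remove the now-empty PV
vgextend /dev/vg01 /dev/dsk/c10t0d0              # re-add it at the end of the VG
lvextend -m 1 /dev/vg01/lvol1 /dev/dsk/c10t0d0   # re-mirror onto it
vgdisplay -v /dev/vg01                           # confirm the new PV order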

Actually, back in the days of individual mirrored disks, I was a fan of doing it the unbalanced LVM way, because I had an untested/wild/silly hypothesis that you were more likely to get one disk failing at a time than both disks failing at once. Not that that matters any more.

Geoff Wild
Honored Contributor

Re: LVM/Mirroring - uneven I/O load on Arrays

This is very interesting.

I posted this thread about issues with an Oracle to Oracle migration on a DMX3:

http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=1067723

As far as setting which disk in a mirror is the source - LVM mirroring doesn't work that way.

Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Tim D Fulford
Honored Contributor

Re: LVM/Mirroring - uneven I/O load on Arrays

Hi.

LVM does not balance the read I/O until the disk gets busy enough to justify it, so you can't do exactly what you want...

IF you have 4 (or more) physical volumes (say 2 mirrored onto 2), say A[td0] B[td0] A'[td1] B'[td1], where A' is the mirror of A and td0 & td1 are the two FC ports the disks are presented down. Under this scenario td0 will receive most reads and all writes, and td1 will receive a few reads and all writes. In the lvmtab it probably looks like:
/dev/dsk/c10t0d0 [A]
/dev/dsk/c10t0d1 [B]
/dev/dsk/c12t0d2 [A']
/dev/dsk/c12t0d3 [B']

You can load-balance(ish) the fibre paths td0 & td1 by simply reordering your disks in LVM (vgreduce & vgextend) to get A[td0] B[td1] A'[td1] B'[td0]. (You can also use pvchange, but that does not survive reboots.)
So in lvmtab terms:
/dev/dsk/c10t0d0 [A]
/dev/dsk/c12t0d1 [B]
/dev/dsk/c12t0d2 [A']
/dev/dsk/c10t0d3 [B']

In essence you will NOT be able to load-balance the disks, but you can make the fibre channel paths more balanced.
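
If you try the reorder, one way to check that the paths actually evened out (standard HP-UX commands; the interval and count are arbitrary):

vgdisplay -v /dev/vg01 | grep "PV Name"   # confirm the new disk order
sar -d 5 12                               # per-device %busy, queue length and service times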

Going back to PowerPath: are you positive you can't balance the I/O over the FC? The point of PowerPath (SecurePath etc.) is to load-balance the requests over the SAN channels (as I've sort of done).

regards

tim
-
Tim D Fulford
Honored Contributor

Re: LVM/Mirroring - uneven I/O load on Arrays

Oops, just re-read your reply about the two different DMX-3s... so you are right, PowerPath will not help here at all. The only way (that I can think of) is to interleave the primary/mirror disks across the arrays.

Regards

Tim
-
Dave Wherry
Esteemed Contributor

Re: LVM/Mirroring - uneven I/O load on Arrays

I have a question: why are you mirroring at the operating system level? The Symm is already doing some level of RAID protection on the array.
Mirroring at the OS is just creating more I/O operations on the server.
Tim Nelson
Honored Contributor

Re: LVM/Mirroring - uneven I/O load on Arrays

From my experience and recollection, LVM mirroring will not start to read from the mirror disk until there is a queue on the primary.

With multiple lvols, flip flop your primaries and your mirror devices.
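
A quick way to see which disk currently holds the first copy of a given lvol before flip-flopping (names hypothetical) is the extent map, which lists the PVs in copy order:

lvdisplay -v /dev/vg01/lvol1 | more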

Even though PowerPath won't benefit this situation, it certainly has mega benefits for multi-path load balancing and HA. I sure wonder why the UX admins say no-go. They are missing the boat!! Paying a pound of money for the product and not using it to its fullest?
Rita C Workman
Honored Contributor

Re: LVM/Mirroring - uneven I/O load on Arrays

Well... as folks have pointed out, LVM will not do anything on a read until it hits a certain level.
And yes, I appreciate you have two separate arrays. You don't mention whether they are at separate sites being mirrored/replicated via SRDF... so I'll assume they are at one site.

IMHO the only way you can get any relief is via PowerPath. Why your UNIX admins won't use it on UNIX is beyond my understanding...
I have it on every platform and storage that will run it. And when we put it on UNIX we saw noticeable improvements in I/O. We're a solid Oracle shop, and our users saw immediate relief when we did this. We run about a 95% read / 5% write ratio on our biggest production boxes.

Rgrds,
Rita
Geoff Wild
Honored Contributor

Re: LVM/Mirroring - uneven I/O load on Arrays

Just want to mention some notes from a HP CE working on a call with me:

"There are many processes that issue fairly small I/Os to these volumes and they do pretty well. But the processes that are doing large I/Os are not.

There are many processes that issue reads of 1MB or close to it. These reads result in multiple physical IOs both due to striping and to alignment within LVM. This in itself is not bad since the I/Os are issued in parallel and so theoretically the multiple smaller I/Os would complete faster than one large one.

However, there are two issues that come into play that cause complications.

First is the pathing. It looks like there is some pathing software in use -- perhaps EMC PowerPath. This causes the multiple I/Os to go down different paths and may confuse the read-ahead algorithms on the array, so that prefetching is not as efficient, resulting in slower service times.

Second, the read system call time is only as fast as the slowest component. If a 1MB read is broken into 10 different pieces, the final read time is only as fast as the slowest one. It is not unusual for an occasional I/O to take longer than the rest. If 9 physical reads took 2ms and the tenth one took 20ms, the average is 3.8ms, which is not bad. If each physical I/O is one logical request, your logical I/O average is 3.8ms. However, if it takes 10 physical I/Os to complete one logical I/O and 1 out of ten takes 20ms, then your logical request average is 20ms even though the average physical I/O time is 3.8ms.

I think larger stripes on these volumes would help this workload. If their normal workload includes large I/Os to these volumes, they may want to consider changing the striping. If the migration is the only time they will be issuing large I/Os, then perhaps they can continue to use the 128k stripes."
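
For what it's worth, if that striping is LVM-level rather than array-side on the Symmetrix, checking and changing it would look roughly like this. Names and sizes are invented, recreating an lvol means dumping and restoring its data, and the allowed stripe sizes depend on the HP-UX release:

lvdisplay /dev/vg01/lvol1                          # shows Stripes and Stripe Size (Kbytes)
lvcreate -i 4 -I 1024 -L 4096 -n lvol1 /dev/vg01   # 4-way stripe, 1MB stripe size, 4GB lvol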

Yes, we are running PowerPath - and yes, we will be testing the difference in heavy reads with and without PP running.

Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Stephen Kebbell
Honored Contributor

Re: LVM/Mirroring - uneven I/O load on Arrays

Hi,

Thanks again for all your responses.
Dave: the Symms are in two different locations (not very far apart). We mirror across both in case we "lose" one location. We don't have SRDF ($$$).
Tim (both of you :-) ): apparently we are already alternating between primary and mirror devices - when the volume groups were created, the second Symmetrix was specified as the first device for half the volumes. And you're preaching to the converted on PowerPath! But the decision to implement it is not in my hands.
Rita: our setup is somewhat similar - Oracle with a very heavy read percentage. And if you say PowerPath brought immediate performance benefits, that's an argument we can use.
Geoff: I think our I/Os during normal workload are quite small, so I'm not sure changing the stripe size would make a significant difference.

Thanks,
Stephen