LVM and VxVM

Does having all lvols in one VG affect performance?

 
Geoff Wild
Honored Contributor

Does having all lvols in one VG affect performance?

We just finished setting up a 2.5 TB DB here, as per the filesystem layout the DBAs wanted:

Filesystem              kbytes      used     avail %used Mounted on
/dev/vg20/lvora       52396032   7402568  44642024   14% /v00
/dev/vg21/lvdata01   524189696 491474688  32459488   94% /v01
/dev/vg21/lvdata02   524189696 356096632 166779880   68% /v02
/dev/vg21/lvdata03   524189696 328187680 194470800   63% /v03
/dev/vg21/lvdata04   524189696 477657976  46168240   91% /v04
/dev/vg21/lvdata05   524189696 395572128 127613008   76% /v05

Now they are trying to tell me that each mount point should be a separate VG?

The LUNs are 524189696 KB (~500 GB) each, so there are really only 5 "disks".

The LUNs are RAID 5 (I wanted mirrored and striped).

Will it make a performance difference having each LUN in a separate VG?
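
For reference, the two layouts boil down to something like this (the device paths, group-file minor numbers, and sizes below are placeholders, not our actual LUNs):

# Current layout: one VG holding all five data LUNs
mkdir /dev/vg21
mknod /dev/vg21/group c 64 0x150000
pvcreate /dev/rdsk/c4t0d1          # repeat for each LUN
vgcreate -s 32 -e 65535 /dev/vg21 /dev/dsk/c4t0d1 /dev/dsk/c4t0d2 \
    /dev/dsk/c4t0d3 /dev/dsk/c4t0d4 /dev/dsk/c4t0d5
lvcreate -L 511904 -n lvdata01 /dev/vg21   # one lvol per mount point; exact size depends on PE rounding

# What the DBAs are asking for: one VG per LUN
mkdir /dev/vg22
mknod /dev/vg22/group c 64 0x160000
vgcreate -s 32 -e 65535 /dev/vg22 /dev/dsk/c4t0d2
lvcreate -L 511904 -n lvdata02 /dev/vg22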

Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
A. Clay Stephenson
Acclaimed Contributor

Re: Does having all lvols in one VG affect performance?

Rule Number 1: Never listen to DBAs about disk layout and performance.

Rule Number 2: See Rule Number 1.

I know many DBAs who still worry about platters and spindles.

In any event, my best guess is "big woo": I don't think there will be any difference in performance whether the LUNs are in 1 VG or 5. You have already spread the physical I/O, and what you are talking about is really a kernel indirection lookup. I can't see much difference in that overhead regardless of the configuration.

Of course, the real answer is to measure for yourself. If anything, I would question the RAID 5 vs. RAID 1/0 layout, but even there, with modern cache-centric arrays, the differences may be very modest. An extremely telling statistic would be whether dividing into 5 LUNs vs. 2 makes any real difference. Depending upon the array, I have seen cases where even many LUNs vs. very few made no significant difference, although Glance was much happier with more LUNs.
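
If you do want to measure, something as simple as sar under the real workload will show whether the VG carve-up changes anything; run it against both configurations and compare the per-device numbers:

# Sample disk I/O every 5 seconds, 60 samples, while the load runs;
# watch avwait and avserv per device
sar -d 5 60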
If it ain't broke, I can fix that.
Steven E. Protter
Exalted Contributor

Re: Does having all lvols in one VG affect performance?

Shalom,

I've actually run some comparisons and found no performance difference.

The layout of the disk is of much more importance to Oracle.

Rollback segments, tables, and indexes that need heavy write access should be on RAID 1 or RAID 10 if possible.
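
A hypothetical sketch of that on HP-UX LVM; the names and sizes are made up, and host-based mirroring assumes MirrorDisk/UX is installed:

# Mirrored (RAID 1) lvol for redo logs: 4 GB, one mirror copy
lvcreate -L 4096 -m 1 -n lvredo /dev/vg20
newfs -F vxfs -o largefiles /dev/vg20/rlvredo
mkdir /redo01
mount -F vxfs /dev/vg20/lvredo /redo01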

A. Clay's rules are good. I've worked with some really awesome DBAs, and only a minority of them had a clue about I/O issues.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Tim Nelson
Honored Contributor

Re: Does having all lvols in one VG affect performance?

Some of this just boils down to organization.

Manage a 2TB filesystem or a number of smaller ones.

Manage a volume group with 1 device or many.

Simpler is typically better. Think about what happens if there is a failure: how easily is it recoverable, from the devices up through the filesystems?

I would expect the fsck and recovery of a single 2 TB filesystem to take longer, and to have a higher likelihood of failure (not to mention the inode management overhead), than a number of smaller ones. With smaller filesystems there is also less likelihood of all of them failing at once; that's the all-in-one-bucket scenario. Thousands of filesystem mounts, of course, are the other extreme.

Find a happy medium.
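
For a sense of that recovery cost: VxFS normally just replays its intent log at mount time, but after real damage you are waiting on a full structural check, and that scales with filesystem size. For example, against the raw device while unmounted:

# Full structural check, skipping the log replay
fsck -F vxfs -o full,nolog /dev/vg21/rlvdata01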


BTW, I agree with Rule #1 also.
A. Clay Stephenson
Acclaimed Contributor
Solution

Re: Does having all lvols in one VG affect performance?

Corollary to Rule Number 1: It's OK to pretend to listen; it's OK to even let them think that you care.
If it ain't broke, I can fix that.
TwoProc
Honored Contributor

Re: Does having all lvols in one VG affect performance?

Geoff, I just can't see any reason why using a single VG (or not) is going to make a difference performance-wise.

What you need to remind the DBAs of is that the whole VG idea is really just a space-management and allocation tool that's only relevant during setup and allocation; after that, the SCSI I/O software interfaces, controllers, RAID hardware, disk drives, etc., do all the I/O work of your database.

Whether you're in one VG or twenty, you still have the same amount of control over how the data is striped (PVGs), laid out, RAIDed, etc., so there's no penalty there. What's left is your ability to change things after setup: if you're using separate VGs for everything, you have less ability to move segments of disk allocation around between mount points, RAID sets, etc. But if that limitation doesn't matter, or you've already planned for those eventualities, then that doesn't matter either.
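
For instance, within one VG a later rebalance is a couple of commands; the sizes here are hypothetical, and growing a mounted VxFS assumes the OnlineJFS product:

lvextend -L 614400 /dev/vg21/lvdata02    # grow the lvol to 600 GB
fsadm -F vxfs -b 629145600 /v02          # grow the filesystem (size in 1 KB sectors)
pvmove /dev/dsk/c4t0d1 /dev/dsk/c4t0d5   # shift extents off a busy LUN

# Across separate VGs, the same change means vgextend gymnastics or a
# dump/restore, since extents cannot move between VGs.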

Addendum to A. Clay's rule: If you find yourself having to listen to a DBA, then become one yourself; that way you can design the disk structures based on the known advantages of layout and structure from BOTH professional disciplines, not just one.
We are the people our parents warned us about --Jimmy Buffett
TwoProc
Honored Contributor

Re: Does having all lvols in one VG affect performance?

I just noticed in your initial proposal that you said you wanted mirrored and striped, so I'm guessing it was the DBAs who wanted RAID 5. Wow; in almost all cases it's the DBA who wants mirrored and striped no matter what the cost.

BTW, you're right about that, and they should have followed your advice and gone mirrored and striped.

Later, when they point to statistics showing slow sequential writes, slow non-sequential writes, and long redo log switch and archive log write times... you can point to that decision right ... UP THERE.
We are the people our parents warned us about --Jimmy Buffett
Bill Hassell
Honored Contributor

Re: Does having all lvols in one VG affect performance?

And just in case you haven't seen bdfmegs yet, I have attached it; it's ideal for multi-GB to TB filesystems. The default display is in MB; add -g to show sizes in GB, -v for version and largefile support, and -l to drop NFS mounts from the list. Like bdf, you can even specify a single file to find the mountpoint:

bdfmegs -v /usr/contrib/bin/q4
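
For anyone without the attachment: a rough stand-in (NOT Bill's actual script) could simply wrap bdf and rescale the KB columns to MB, something like:

#!/usr/bin/sh
# bdf with the kbytes/used/avail columns converted to MB
bdf "$@" | awk '
NR == 1 { sub("kbytes", "Mbytes"); print; next }                     # header
NF == 6 { $2 = int($2/1024); $3 = int($3/1024); $4 = int($4/1024) }  # normal line
NF == 5 { $1 = int($1/1024); $2 = int($2/1024); $3 = int($3/1024) }  # wrapped continuation
{ print }'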


Bill Hassell, sysadmin