
One VG vs Multiple VGs

 
Alzhy
Honored Contributor

One VG vs Multiple VGs

My Disk Manager insists that I create one VG for each LUN he presents to the server, or that I pool "related" LUNs into a particular VG. His reasoning is that this is the only way to guarantee separation of I/O, or control over where data resides on a LUN. I contend that even if the physical disks are housed in just one VG, I can still control which PV stores which data, with the added advantage that we optimise storage utilisation by actually being able to use what would otherwise be "residual extents" stranded in separate VGs. Besides, with today's virtualized "disks" it really does not matter anymore where your data resides, since hot disks can be handled by the SAN/array infrastructure itself.
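A minimal sketch of the placement control I mean (HP-UX LVM; the device files, names, and sizes here are just examples):

  # One VG built from two LUNs:
  mkdir /dev/vgdata
  mknod /dev/vgdata/group c 64 0x010000
  pvcreate /dev/rdsk/c4t0d1
  pvcreate /dev/rdsk/c6t0d2
  vgcreate /dev/vgdata /dev/dsk/c4t0d1 /dev/dsk/c6t0d2

  # Pin an LV to one PV: create it zero-length, then extend it
  # naming the PV, so its extents land only on that LUN:
  lvcreate -n lvol_db /dev/vgdata
  lvextend -L 4096 /dev/vgdata/lvol_db /dev/dsk/c4t0d1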

Hakuna Matata.
7 REPLIES
A. Clay Stephenson
Acclaimed Contributor

Re: One VG vs Multiple VGs

This is one of those "it depends" answers. I typically use at least two LUNs per VG (even on an array) to allow more flexible I/O paths from the host's perspective. Your Disk Manager may be concerned about the limited number of LUNs his array supports, or the limited number of LUNs that some hosts can address.
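A rough sketch of that layout, assuming two LUNs presented through different host controllers (device files are illustrative):

  # Two LUNs, each reached via a different controller, pooled in one VG:
  mkdir /dev/vgora
  mknod /dev/vgora/group c 64 0x020000
  pvcreate /dev/rdsk/c4t0d1
  pvcreate /dev/rdsk/c6t0d1
  vgcreate /dev/vgora /dev/dsk/c4t0d1 /dev/dsk/c6t0d1
  vgdisplay -v /dev/vgora    # confirms both PVs are in the VG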

I would certainly not let 'residual extents' concern me these days when storage is so cheap; of much greater import is the ease with which your storage allocation scheme can be managed.
If it ain't broke, I can fix that.
Alzhy
Honored Contributor

Re: One VG vs Multiple VGs

"..I would certainly not let 'residual extents' concern me these days when storage is so cheap; of much greater import is the ease with which your storage allocation scheme can be managed..."

True, but in this economy, where we're asked to account for even the last kilobyte?
Hakuna Matata.
Alberto Tagliaferri_1
Occasional Advisor

Re: One VG vs Multiple VGs

I agree with your arguments against 1 LUN = 1 VG: there is no need at all to separate LUNs in order to control data distribution among them, and it wastes space. I'd group LUNs by usage (e.g. all DB data in one VG; application and database binaries and config files in another) just to keep things orderly and to isolate problems in case of trouble: nothing to do with performance. See the sketch after this post.
As for "with today's virtualized 'disks' it really does not matter anymore where your data is", I strongly disagree. If you want to get the best out of your hardware, you had better understand the access patterns of your DB and application, give them to your Disk Manager to help decide stripe size, number of spindles in a RAID, etc., and use them yourself to plan where to do lvextends. Hardware and cache are not a replacement for planning and understanding your application.
A. Clay Stephenson
Acclaimed Contributor

Re: One VG vs Multiple VGs

I stand by my comments. Disk is cheap; time of skilled admins is expensive in any economy.
If it ain't broke, I can fix that.
Chris Wong
Trusted Contributor

Re: One VG vs Multiple VGs

The cost of disk really depends on the system you are using. Adding storage to an XP or EMC is NOT cheap. It almost sounds like your Disk Manager doesn't understand how you can manage the LVOLs within the one VG. I agree with the poster who suggested the multiple VGs based on usage.

- Chris
Jakes Louw_1
Frequent Advisor

Re: One VG vs Multiple VGs

I agree. I have some VGs of up to 1.2 TB striped on EMC, and detailed analysis with EMC's tools shows no bottlenecks, and I get to size my VGs perfectly.
Further, we break our other VGs up by usage: database indexes separated from data VGs, user filesystems separate from databases, etc., ad nauseam.
Your Disk Manager thinks you are a turkey ;->
Brian M Rawlings
Honored Contributor

Re: One VG vs Multiple VGs

One thing on I/O: LVM establishes its I/O queues based on how many PVs it sees in a VG. If you make a VG from one large LUN on any array, LVM gives itself a bottleneck by allocating only one queue for all of that VG's data. This is one reason why "more is better", up to a point.

I would suggest that two LUNs per VG is the absolute minimum, and I could make a good case for four, since that allows much better LVM striping across "PVs" (LUNs): a stripe of four generally performs better than a stripe of two. And four I/O queues for your VG are better than two (and both are better than the one queue your Disk Manager is insisting on).
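A minimal striping sketch along those lines (HP-UX LVM; the VG name, size, and stripe width are illustrative):

  # Stripe a logical volume across four LUNs/PVs in one VG:
  # -i 4 = number of stripes (PVs), -I 64 = stripe size in KB, -L 8192 = size in MB
  lvcreate -i 4 -I 64 -L 8192 -n lvol_stripe /dev/vgdata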

Regards, --bmr

We must indeed all hang together, or, most assuredly, we shall all hang separately. (Benjamin Franklin)