
increasing PE size

 
Todd McDaniel_1
Honored Contributor

increasing PE size

Let me give a bit of background. We have a few boxes with quite a bit of disk attached. I normally get eight 9GB LUNs on EMC that I stripe and create ~70GB filesystems for my DB folks. On one box I have 89 70GB filesystems... another has 37 and another has 66... So I want to be sure of what I am doing now.

NOW, we have decided to help ourselves a bit by using fewer virtual controllers and fewer LUNs presented, to make things easier to manage, and go to these larger metavolumes where the eight 9GB volumes are presented to me as one large volume instead of being sliced up. That saves 14 devices with dual pathing...

Since I am putting 4 of these larger metavolumes in one VG to lessen the number of VGs, I decided to change the default PE size from 4MB to 8MB, so I have some more flexibility later on if I decide to add more disks to these VGs. Probably not, but who knows.

I had started with 4 of these in 1 VG, which at 4MB per PE puts the VG over the ~65535 max PE value (actually around 69k PEs), which is too high and gives an error. So I increased the PE size to 8MB and knocked the count down to ~35k PEs for the VG. Now it works fine.
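For reference, the sequence was something like this (device paths and the group minor number are made up for illustration; -e just needs to cover the extents per LUN, with some headroom):

    # initialize each ~70GB metavolume LUN
    pvcreate /dev/rdsk/c10t0d0
    pvcreate /dev/rdsk/c10t1d0
    pvcreate /dev/rdsk/c10t2d0
    pvcreate /dev/rdsk/c10t3d0

    # group file for the new VG; the minor number must be unique per VG
    mkdir /dev/vgdb01
    mknod /dev/vgdb01/group c 64 0x010000

    # 8MB extents instead of the 4MB default; ~70GB / 8MB is about
    # 8750 extents per LUN, so -e 17500 leaves room for larger LUNs later
    vgcreate -s 8 -e 17500 /dev/vgdb01 /dev/dsk/c10t0d0 \
        /dev/dsk/c10t1d0 /dev/dsk/c10t2d0 /dev/dsk/c10t3d0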



Whew!


==========================================
Okay, now my question....

What negative effect, if any, would this have on a striped filesystem? Any? None?



Unix, the other white meat.
Patrick Wallek
Honored Contributor

Re: increasing PE size

Are you talking LVM striped, LVM distributed striped, or EMC striped?

LVM striped - PE size has absolutely no effect. The stripe size is set independently when you create the LV.

LVM distributed striped - the stripe size equals the PE size, so you get larger stripes.

EMC striped - Not really sure, but I don't think it should have much, if any, effect.
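To make the distinction concrete, here is roughly how the two host-side variants are requested on HP-UX (sizes and names are only examples; distributed allocation also needs PVGs defined in /etc/lvmpvg):

    # LVM striped: stripe size is set here with -I (in KB),
    # independent of the VG's PE size
    lvcreate -i 4 -I 64 -L 71680 -n lvol1 /dev/vgdb01

    # LVM distributed: extents are laid out round-robin across the PVGs,
    # so the effective stripe width is one PE
    lvcreate -D y -s g -L 71680 -n lvol2 /dev/vgdb01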

By the way, on a 70GB LUN, I would probably go to even higher PE sizes like 16 or 32 MB just to give me some more breathing room.
Todd McDaniel_1
Honored Contributor

Re: increasing PE size

All striping is on the EMC side. However, I'm not sure what the stripe size is or how that actually works.

BTW, what is your reasoning behind increasing the PE size even more? Does that offer more of a sweet spot for performance on a larger disk?

ALSO, my DB data files are around 2GB+, so would a larger PE size offer better performance?



Should I have done that on my 9GB LUNs, which still make up 70GB filesystems? Or is that fairly irrelevant since each LUN is only 9GB instead of 70GB?
Unix, the other white meat.
Patrick Wallek
Honored Contributor

Re: increasing PE size

My reasoning for even larger PE sizes is that it gives WAY MORE flexibility down the line if you want to use larger disks/LUNs in a VG.

If I wanted to set up new 146GB or 250GB disks/LUNs in the future, and just mirror or pvmove my data from the old disks to the new, I have a much better chance of succeeding with the larger PE sizes.
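Something like this, when the day comes (paths are hypothetical):

    # bring the new, larger LUN into the existing VG
    pvcreate /dev/rdsk/c12t0d0
    vgextend /dev/vgdb01 /dev/dsk/c12t0d0

    # move the extents off an old LUN onto the new one, then retire it
    pvmove /dev/dsk/c10t0d0 /dev/dsk/c12t0d0
    vgreduce /dev/vgdb01 /dev/dsk/c10t0d0

This only works if the VG's max-PE-per-PV setting (-e at vgcreate time) is large enough to address the whole new disk, which is the point of choosing generous values up front.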

Alzhy
Honored Contributor

Re: increasing PE size

I've always been a believer in wide, thin stripes, but that's changing, and I'll reserve that for a future post or response.

Since you are not using VxVM to stripe your EMC LUNs, there should be no impact from having large PE sizes that I can think of. I have done some experiments in the past with varying PE sizes (4-64MB) and stripe sizes (128KB to 1MB) with virtually no change in performance. That was on other arrays (EVA and XP), but I believe there should be no difference. BTW, I use iozone for my disk benchmarks...

With VxVM... you'll need to adjust certain kernel parameters relating to VxVM accordingly.


Since we have multi-TB storage, we've standardised on very large VGs with 16MB PE sizes, with VG sizes up to 3TB...
Hakuna Matata.
Bill Hassell
Honored Contributor
Solution

Re: increasing PE size

PE sizes in volume groups are just for bookkeeping and have no effect on disk I/O. If you're going to continue to grow and will need even larger volume groups, it's best to use 16MB or even 32MB and, at the same time, allow for 20,000-40,000 extents. That way, the VG can be in the terabyte range if needed. PEs are tracked in a table at the front of the disks, and once the VG is sized (PE size and maximum extents), you can't add any more LUNs (disks) to the VG when all the PEs are used. You have to back up everything, destroy the VG definition, and re-create it with a large enough extent table and PE size.
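As a rough sketch (names are illustrative), the numbers work out like this:

    # 16MB extents, up to 40000 extents per PV, up to 16 PVs:
    # 40000 x 16MB = ~625GB addressable per PV, ~10TB for the whole VG
    vgcreate -s 16 -e 40000 -p 16 /dev/vgbig /dev/dsk/c12t0d0

    # verify the limits you ended up with
    vgdisplay /dev/vgbig

Note that the extent table (VGRA) at the front of each disk grows with -e and -p, so don't size them absurdly far beyond what you'll ever use.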

If you were to do extent-based striping, the PE size might have some effect, but with a specialized (and very pricey) controller dedicated to mirroring and striping, I would just allocate the extents sequentially. Disk I/O sizes (aka block sizes) are all handled at the kernel level, and PEs are just used as a method to map filesystem blocks to a place on the LUNs/disks.


Bill Hassell, sysadmin
Tim D Fulford
Honored Contributor

Re: increasing PE size

Hi

I'd agree with most of the above... but with the caveat that you are NOT using LVM mirroring. If you use LVM mirroring, the PEs will need to synchronise from time to time, which means bigger LVM syncs per PE if the PE is bigger. If you make an update to one PE (say 2KB) and one LVM write is not returned as successful, then LVM will resync that whole PE, so a relatively small update can leave you syncing a much larger PE.

Granted, you are using EMC and this situation may be unlikely, but for those of us using things like the MSA1000, it does have an effect.

Obviously if you are mirroring within the EMC, the above is total wibble... but no one had mentioned it...
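For what it's worth, you can watch this happening on HP-UX with something like the following (VG and LV names are examples):

    # count stale (out-of-sync) extents on a mirrored LV
    lvdisplay -v /dev/vgdb01/lvol1 | grep -c stale

    # resynchronise every stale extent in the VG
    vgsync /dev/vgdb01

Each stale entry is a whole PE that has to be recopied, so bigger PEs mean bigger resyncs.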

Regards

Tim


-
Todd McDaniel_1
Honored Contributor

Re: increasing PE size

Actually, I just woke up and remembered these are NOT striped whatsoever; rather, they are mirrored on the EMC side.

All my previously created VGs/lvols are striped on the host side but use RAID-S LUNs.

On these new ones, there is no host side LVM manipulation by me at all. Everything is on the Frame.
Unix, the other white meat.
Alzhy
Honored Contributor

Re: increasing PE size

Todd,

If you are not striping these already-RAIDed LUNs (either RAID-S or RAID-10 or whatever your EMC uses) at the host level, then I suspect you're not getting the performance your EMC should give you. Are these older Symmetrix or the newer DMX series? Either way, EMC metas on UNIX hosts whose volume managers support striping should be striped - 4- or 8-way should be sufficient. If it is just an EMC CLARiiON, don't even bother.

But if your apps are happy with just simple concats of your large LUNs, then I suppose you can stay with that...
Hakuna Matata.
Sridhar Bhaskarla
Honored Contributor

Re: increasing PE size

Todd,

If you are combining multiple smaller LUNs into a few bigger LUNs, most likely your system will bottleneck itself, unable to drive enough parallel I/Os to so few disks. It doesn't matter how the EMC handles it on the backend; the system by default is only configured to queue up to 8 requests per PV. So, if you see any degradation in performance, you may want to consider increasing the queue depth a bit, maybe to 24-32, instead of sticking with the default. 'man scsictl' for more information.
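A quick sketch of the commands (the device path is illustrative; per-device settings made with scsictl don't survive a reboot, so script them if you keep them):

    # check the current settings on one of the big LUNs
    scsictl -a /dev/rdsk/c10t0d0

    # raise the queue depth for that device
    scsictl -m queue_depth=24 /dev/rdsk/c10t0d0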

-Sri
You may be disappointed if you fail, but you are doomed if you don't try