increasing PE size
12-03-2004 08:00 AM
Now, we have decided to help ourselves a bit by using fewer virtual controllers and fewer LUNs presented, to make things easier to manage, and to go to these larger metavolumes where the 8-9GB volumes are presented to me as one large volume instead of being sliced up. That saves 14 devices with dual pathing...
Since I am putting 4 of these larger metavolumes in one VG to reduce the number of VGs, I decided to change the default PE size from 4MB to 8MB, so I have some more flexibility later on if I decide to add more disks to these VGs. Probably not, but who knows.
I had started with 4 of these in 1 VG, which at 4MB per PE goes over the ~65535 max PE value - actually around ~69k, which is too high and gives an error. So I increased the PE size to 8MB and knocked the PE count down to ~35k for the VG. Now it works fine.
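The arithmetic above can be sketched as follows (the ~68 GB metavolume size is an assumption back-computed from the ~69k figure, not stated explicitly in the post):

```shell
# PE counts for a VG of four ~68 GB metavolumes (sizes in MB).
LUN_MB=$((68 * 1024))        # one metavolume, assumed ~68 GB
VG_MB=$((4 * LUN_MB))        # four metavolumes in the VG

echo "4MB PEs: $((VG_MB / 4))"   # over the 65535 ceiling
echo "8MB PEs: $((VG_MB / 8))"   # fits comfortably
```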
Whew!
==========================================
Okay now my question....
What negative effect, if any, would this have on a striped filesystem? Any? none?
12-03-2004 08:10 AM
Re: increasing PE size
LVM Striped - PE size has absolutely no effect.
LVM Distributed Striped - the stripe size equals the PE size, so you get larger stripes.
EMC striped - Not real sure, but I don't think it should have much, if any, effect.
By the way, on a 70GB LUN, I would probably go to even higher PE sizes like 16 or 32 MB just to give me some more breathing room.
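For reference, a quick sketch of the headroom each PE size buys against the 65535-extents-per-PV ceiling (GB values rounded down):

```shell
# Largest PV that fits under 65535 extents, per PE size.
for pe in 4 8 16 32; do
  echo "PE=${pe}MB  max PV size: $((65535 * pe / 1024)) GB"
done
```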
12-03-2004 08:18 AM
Re: increasing PE size
BTW, what is your reasoning behind increasing the PE size even more? Does that offer more of a sweet spot for performance on a larger disk?
Also, my DB data files are around 2GB+, so would a larger PE size offer better performance?
Should I have done that on my 9GB LUNs, which are still 70GB? Or is that fairly irrelevant since the LUN size is only 9GB instead of 70GB?
12-03-2004 08:23 AM
Re: increasing PE size
If I wanted to set up new 146 GB or 250GB disks/LUNs in the future, and just mirror or pvmove my data from the old disks to the new, then I have a much better chance of succeeding with the larger PE sizes.
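A quick check of the PE counts those future LUN sizes would need. Note that a VG's per-PV extent ceiling is fixed by vgcreate -e at creation time and is often lower than 65535, so that ceiling, rather than the absolute limit, is usually what blocks adding a bigger PV later:

```shell
# PE counts for the 146 GB and 250 GB LUN sizes mentioned above.
for gb in 146 250; do
  echo "${gb}GB LUN: $((gb * 1024 / 4)) PEs at 4MB, $((gb * 1024 / 16)) PEs at 16MB"
done
```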
12-03-2004 08:36 AM
Re: increasing PE size
Since you are not using VxVM to stripe your EMC LUNs, there should be no impact from having large PE sizes that I can think of. I have done some experiments in the past with varying PE sizes (4-64 MB) and stripe sizes (128KB to 1MB) with virtually no change in performance - but that was on other arrays (EVA + XP). Still, I believe there should be no difference. BTW, I use iozone for my disk benchmarks...
With VxVM, you'll need to adjust certain kernel parameters relating to VxVM accordingly.
Since we have multi-TB storage, we've standardised on very large VGs with 16MB PE sizes and maximum VG sizes up to 3 TB...
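As a sketch, one way such a VG could be laid out on HP-UX (the VG name, device paths, and the -e/-p counts are illustrative assumptions, not from the thread):

```shell
# Hypothetical ~3 TB VG with 16 MB extents:
#   vgcreate -s 16 -e 12288 -p 16 /dev/vgdata /dev/dsk/c10t0d0 ...
# Maximum VG capacity = max_PVs * max_PEs_per_PV * PE_size:
echo "max VG: $((16 * 12288 * 16 / 1024)) GB"
```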
12-03-2004 09:18 AM
Solution
If you were to do extent-based striping, then the PE size may have some effect, but with a specialized (and very pricey) controller dedicated to mirroring and striping, I would just allocate the extents sequentially. Disk I/O sizes (aka block sizes) are all handled at the kernel level, and PEs are just used as a method to map filesystem blocks to a place on the LUNs/disks.
Bill Hassell, sysadmin
12-03-2004 09:29 AM
Re: increasing PE size
I'd agree with most of the above... but with the caveat that you are NOT using LVM mirroring. If you use LVM mirroring, the PEs will need to synchronise from time to time, which means bigger LVM syncs per PE if the PE is bigger... If you make an update to one PE (say 2kB) and one LVM write was not returned as successful, then LVM will resync that whole PE; thus with a relatively small update you will be syncing a larger PE.
Granted, you are using EMC and this situation may be unlikely, but for those of us using things like the MSA1000, it does have an effect.
Obviously if you are mirroring within the EMC, the above is total wibble.... but no one mentioned it....
Regards
Tim
12-03-2004 09:34 AM
Re: increasing PE size
All my previously created VGs/LVOLs are striped on the host side but use RAID-S LUNs.
On these new ones, there is no host-side LVM manipulation by me at all. Everything is on the frame.
12-03-2004 02:53 PM
Re: increasing PE size
If you are not striping these already-RAIDed LUNs (either S or 10 or whatever your EMC uses) at the host level, then I suspect you're not getting the performance that your EMC should give you. Are these older Symmetrix or the newer DMX series? Either way, EMC metas on UNIX hosts whose volume managers feature striping should use striping - either 4- or 8-way should be sufficient. If it is just an EMC CLARiiON, don't even bother.
But if your apps are happy with just simple concats of your large LUNs, then I suppose you can stay with that...
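For what it's worth, host-side striping across an HP-UX LVM VG built from several metas would look something like this (the LV name, size, VG name, and stripe size are illustrative assumptions, not from the thread):

```shell
# Hypothetical 4-way stripe across the PVs of a VG built from 4 metas:
#   lvcreate -i 4 -I 64 -L 20480 -n lvdata /dev/vgdata
# -i = number of stripes (one per LUN), -I = stripe size in KB,
# -L = LV size in MB.
```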
12-03-2004 03:08 PM
Re: increasing PE size
If you are combining multiple smaller LUNs into a few bigger LUNs, your system will most likely bottleneck itself, unable to drive multiple IOs across only a few disks. It doesn't matter how the EMC handles it on the backend; by default the system is only configured to queue up to 8 requests per PV. So, if you see any degradation in performance, you may want to consider increasing your scsi_queue_depth value a bit, maybe to 24-32, instead of sticking with the default. See 'man scsictl' for more information.
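A minimal sketch of checking and raising the per-device queue depth with scsictl on HP-UX (the device path and the value 24 are illustrative assumptions):

```shell
# Show the current attribute settings for one PV (path is hypothetical):
#   scsictl -a /dev/rdsk/c4t0d0
# Raise the queue depth from the default of 8 to 24 for that device:
#   scsictl -m queue_depth=24 /dev/rdsk/c4t0d0
```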
-Sri