Tuning Faculties on LVM Abstraction Layer
10-26-2005 10:29 PM
I was asked by our Informix DBA if there were (layout) parameters during the creation of volumes that could have a performance impact on later disk I/O.
I replied that my assumption is that the tuning possibilities are probably pretty limited, unlike filesystem creation and mount options (which don't play a significant role here anyway, since Informix performs raw disk I/O).
I think it boils down to two options during a vgcreate: the number of PEs and the PE size (i.e. -e and -s).
The question was whether fewer but larger PEs are preferable to more, smaller ones,
especially with regard to the restricted size of the volume header for this metadata,
which sometimes leaves you no choice but to select a bigger PE size.
But even if the intended maximum PV size of the VG would allow for smaller PEs,
my feeling is that bigger PEs would be more advantageous.
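For concreteness, the -e/-s trade-off can be sketched with back-of-the-envelope shell arithmetic. The max-PE figure of 1016 below is an assumed stock HP-UX vgcreate default, not something stated in this thread:

```shell
# Rough sketch: the largest PV a VG can address is roughly
#   max_PEs_per_PV (vgcreate -e)  *  PE_size_in_MB (vgcreate -s)
# 1016 is an assumed default for -e; substitute your VG's value.
max_pe=1016
for pe_mb in 4 16 32; do
  echo "PE size ${pe_mb} MB -> max PV size $(( max_pe * pe_mb )) MB"
done
```

So with the assumed defaults, a PV beyond roughly 4 GB already forces either a bigger -e (more metadata in the volume header) or a bigger PE size, which matches the "no choice but to select a bigger PE size" observation.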
Another issue that came to my mind was how (if at all) one could hit the exact spindle boundaries of the SAN disk subsystem,
because as far as I know our SAN admin only provides me with what I would call virtual disks,
viz. some sort of disk chunk that appears to the OS as a PV in an ioscan.
The value of LVM striping also falls under this heading.
My suspicion is that as long as I cannot guarantee that every LVM stripe really maps to a different spindle, I am better off without any LVM striping.
I would be interested to hear your views
(ouch, one of my English-improvement asides: can one really say "hear your views", or is that some kind of oxymoron or paradox?)
Regards
Ralph
10-26-2005 10:37 PM
Re: Tuning Faculties on LVM Abstraction Layer
When the volume size is very big, you will have to go for a bigger PE size, and vice versa.
I personally have never gone beyond 32 MB.
10-26-2005 10:39 PM
Re: Tuning Faculties on LVM Abstraction Layer
OK, here is an "opinion" for you to hear.
It would seem to me that a larger PE size would function like a read-ahead buffer, making more data available in memory, assuming the application can take advantage of it. Mostly sequential applications would benefit, while truly random applications would probably not. On the other hand, truly random applications would probably benefit from not having to fetch so much data if the PE size was smaller. So, in this theory, it depends on the application.
Pete
10-27-2005 12:31 AM
Re: Tuning Faculties on LVM Abstraction Layer
Although the mount options and file system creation options do not apply here, since as you mentioned this volume is used only for raw devices, striping can still play a big role. The reason is that most LUNs allocated from storage arrays are built from 3 or 4 physical disks, depending on the configuration (although newer arrays also support a 7D+1P configuration, where data is spread across 8 disks). Additional spreading of sequential I/Os can also be achieved by doing OS-level striping across multiple LUNs.
But for this, care must be taken that the two or more LUNs you are striping across come from different RAID groups, so that the data is actually spread across multiple disks.
PE size and number of PEs depend on the situation, as stated by others. The situation includes not only the type of application but also the size of the LUNs, since with a small PE size the MAX_PE_PER_PV limit cannot accommodate a very large PV.
HTH,
Devender
10-27-2005 12:56 AM
Solution
Here are my views:
Don't create chunks greater than 4 GB unless you are forced to, because the Informix cleaners can struggle.
Don't use Informix mirroring for anything.
Keep the physical and logical logs apart from the database, especially the first chunk, and especially if you use unbuffered logging.
Striping a logical volume/chunk over 2-4 LUNs is better than distributed allocation, and better than a 1:1 chunk:LUN mapping.
You still need to map out your storage in advance.
KAIO is good, but set IFMX_HPKAIO_NUM_REQ >= 2000 and maybe up to 4000.
Disk arrays these days have variable read-ahead, which can make tuning the Informix RA_ parameters difficult. Keep your firmware up to date in this respect (XP12000 at 50-04-31).
Last but by no means least... it depends on the application.
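For what it's worth, IFMX_HPKAIO_NUM_REQ is an environment variable picked up by the Informix engine; a minimal sketch using the floor value suggested above (the surrounding startup detail is an assumption, not from the post):

```shell
# Deepen the HP-UX KAIO request queue for Informix before starting
# the engine; 2000 is the suggested minimum, maybe up to 4000.
IFMX_HPKAIO_NUM_REQ=2000
export IFMX_HPKAIO_NUM_REQ
echo "IFMX_HPKAIO_NUM_REQ=$IFMX_HPKAIO_NUM_REQ"
# oninit would normally be started from this environment.
```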
10-27-2005 12:57 AM
Re: Tuning Faculties on LVM Abstraction Layer
On the LVM striping query: we are primarily an EMC shop, so I will base my response on EMC examples. I'm sure other high-end arrays are similar.
On EMC Symmetrix frames you can verify that each LUN in your stripe set is on a different spindle. A quick way to be fairly sure is to use sequential LUN numbers (for meta-devices, see below). If the frame was set up correctly, then each hypervolume (aka slice) should have been created in order on different physical drives. Example:
/dev/rdsk/c4t0d1 EMC SYMMETRIX 5670 3854E000 8838720
/dev/rdsk/c4t0d2 EMC SYMMETRIX 5670 3854F000 8838720
/dev/rdsk/c4t0d3 EMC SYMMETRIX 5670 38550000 8838720
Symm devices 54E, 54F, and 550 should all be on different spindles. Of course it will eventually wrap around and use the first spindle again, etc. If you wish to double-check, you can run 'symdev show 54F' and look for the back-end disk information. In my example above I found that hyper 54F uses the physical drives at "16D, C, 0" and "02C, D, 0" (a mirrored set). Hyper 550 uses the physical drives at "15D, C, 1" and "01C, D, 1". So I can be sure those are not on the same spindles.
Now, if striped meta-devices are used on the frame, you have to ask yourself whether striping on both the frame and the host is the way to go. For a long time (and probably still) Oracle recommended striping on both host and frame; in recent years EMC found this often hurt performance. True striping in LVM is also a pain, because you cannot mirror logical volumes that are striped, which can be a problem if you want to use certain migration methods. I have used EMC's striped meta-devices, and their performance compared to host-based striping over single hypervolumes seems to be very good. Note: when using meta-devices, the Symm device numbers will not be completely sequential. If they are 4-way metas then the device numbers will step in increments of 4 (540, 544, 548, 54C, etc.).
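The 4-way meta numbering is plain hex arithmetic, stepping by 4 from the head device; a quick sanity check (device numbers are from the example above, not real hardware):

```shell
# Head devices of consecutive 4-way metas advance by 4 in hex:
head=0x540
for i in 0 1 2 3; do
  printf '%X ' $(( head + 4 * i ))
done
echo
# prints: 540 544 548 54C
```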
Lastly, picking spindle location: the storage guys should be able to provide some information to help locate the sweet spots, but personally I don't think it's worth the effort. Don't forget that there is a large amount of cache on the front end of these arrays; there is no direct write/read to/from the spindles.
Other arrays may or may not have these features, etc. so you will need to look at what you have.
David
10-27-2005 01:19 AM
Re: Tuning Faculties on LVM Abstraction Layer
Many thanks for your valuable suggestions and opinions.
I will share your views with our Informix and SAN admins.
I am confident we will derive a sound solution from them.
Kind regards
Ralph
10-27-2005 02:20 AM
Re: Tuning Faculties on LVM Abstraction Layer
You are correct in assuming that the SAN has eliminated any possibility of tuning your data layout to match physical disks. That's why you pay so much for the array controller--you want a very fast, special purpose box to optimize the disks. And when you take into account a large disk cache in the controller, any striping done at the HP-UX level is meaningless since the data may be partially (or fully) in controller memory.
So once you get a SAN, you treat the disk as a commodity and let the SAN do all the work to optimize things. Think about striping and physical disk layouts with JBODs only.
Bill Hassell, sysadmin
10-28-2005 03:14 AM
Re: Tuning Faculties on LVM Abstraction Layer
For LVM: PE size does not matter, period.
For VxVM: no such parameter exists.
As regards how you "assemble" these components (disks, virtual or real):
For JBODs:
You've no choice: for protection's sake you have to stripe and mirror (or mirror and stripe). You can even use RAID-5 if your application's I/O needs are not that great.
For Cache-Centric Arrays:
Always stripe, period. Stripe wide and thin for OLTP. Stripe wide and mid-thin for DSS/warehouses. How wide? No less than 4 - no more than 8 "components" - with each component sourced from different areas (controllers/domains) inside your array. How "thin" (stripe-width)? - It depends on your application. I find 64K to be the most neutral for mixed use - OLTP/DSS.
For Controller-Centric Arrays:
Forget about any striping. Striping is done on the back end for you. Except of course if you have several controller-centric arrays (e.g., 8 EVAs, each on its own pair of Fibre Channel links) -- then I would stripe across those 8 arrays.
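To make "stripe width" concrete: with round-robin striping, each consecutive width-sized slice of the volume lands on the next component. A hedged sketch using the 64 KB width and 4 components suggested above (the offsets are illustrative, not from the post):

```shell
# Map LV offsets (KB) to stripe components: 4 components, 64 KB width.
ncomp=4
width_kb=64
for offset_kb in 0 64 128 192 256; do
  echo "offset ${offset_kb} KB -> component $(( (offset_kb / width_kb) % ncomp ))"
done
# components cycle 0 1 2 3 0
```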