To "LVM" or not to "LVM"
11-11-2002 08:13 AM
I am running HP-UX 11.11 and have two HBAs connected to the storage. I have redundancy groups 1 and 2 (RAID 0+1). I am in a quandary as to whether to use LVM or not. If I do, I can build LVs that utilize disks across the multiple redundancy groups and I/O channels; however, this adds overhead to the OS for the LVM routines. It certainly gives me some flexibility going forward, but is it worth it? If I use straight mounts with JFS and balance my I/O within Oracle, I save the overhead of LVM and let Oracle and the hardware dictate the performance. Does anyone have an experienced opinion?
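For concreteness, the LVM route described above might look like the sketch below on 11.11. All device paths, the volume-group name, and sizes are hypothetical examples (not from this thread), and the commands are echoed as a dry run rather than executed:

```shell
# Dry-run sketch of striping an LV across both redundancy groups.
# Device paths and names are hypothetical; substitute your own from ioscan -fnC disk.
RUN=echo                        # set RUN="" to actually execute on an HP-UX box
PV1=/dev/dsk/c4t0d1             # LUN carved from redundancy group 1 (HBA path 1)
PV2=/dev/dsk/c6t0d1             # LUN carved from redundancy group 2 (HBA path 2)

$RUN pvcreate /dev/rdsk/c4t0d1                  # initialize the raw devices
$RUN pvcreate /dev/rdsk/c6t0d1
# (on a real system you would first mkdir /dev/vgora and mknod the group file)
$RUN vgcreate /dev/vgora "$PV1" "$PV2"          # one VG spanning both RGs
# -i 2: stripe across both PVs; -I 64: 64 KB stripe size; -L 4096: 4 GB LV
$RUN lvcreate -i 2 -I 64 -L 4096 -n lvoradata /dev/vgora
$RUN newfs -F vxfs /dev/vgora/rlvoradata        # JFS (VxFS) filesystem on top
```

With the two PVs on different redundancy groups and FC paths, each stripe chunk alternates between controllers, which is the flexibility-plus-bandwidth argument made in the replies below.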
11-11-2002 08:26 AM
Solution: The ease of increasing LVOL size and the ability to stripe across multiple paths more than offset the tiny additional I/O overhead.
By the way, on 11.11 Oracle actually seems to perform better with 'cooked' I/O (i.e., using the buffer cache) rather than bypassing it with convosync=direct,mincache=direct, which usually performed better on 10.20 and 11.0 Oracle.
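To make the "cooked" versus direct-I/O distinction concrete, here are illustrative VxFS fstab entries; the LV name and mount point are made up for this example:

```
# Buffered ("cooked") I/O -- the form reported faster with Oracle on 11.11:
/dev/vgora/lvoradata  /oradata  vxfs  delaylog  0  2
# Direct I/O, bypassing the buffer cache -- often faster on 10.20/11.0 Oracle:
# /dev/vgora/lvoradata  /oradata  vxfs  delaylog,convosync=direct,mincache=direct  0  2
```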
11-11-2002 08:46 AM
Re: To "LVM" or not to "LVM"
VA2405? Did you mean VA74x0 with one or more DS2405 shelves added? The 74x0 is a Virtual Array, and it can have one or more 2405 disk shelves added to it. By itself, a DS2405 is simply dual-pathed FC JBOD.
Nothing I can add to what Clay said other than, "Yeah, what he said." However, I can mention a bit on segregation since you have the luxury of two redundancy groups with the VA74x0.
Examine current database usage, and analyze where the bulk of your I/O resides with respect to data, index, RBS, redo logs, archive logs, etc. You will want to split that I/O between the redundancy groups. BTW, you should probably set each redundancy group to use a unique FC path. That way, when you hammer data and index (for example), they will be firing on different sets of spindles and FC paths.
Cheers,
Jim
11-11-2002 09:01 AM
Re: To "LVM" or not to "LVM"
Jim... You are correct about the VA model...
A side question for you folks, since I'm a VA newbie: I am assuming that when I grab XXX GB from a redundancy group, it grabs an equal amount from each drive in the redundancy group, thereby spreading out the I/O as far and wide as possible. Correct?
11-11-2002 12:20 PM - last edited on 06-18-2021 04:30 AM by Ramya_Heera
Re: To "LVM" or not to "LVM"
Correct. (Dual) parity is striped (spiralled, actually) across all disks in the RG. One RG uses the even-numbered drives, and the other RG uses the odd-numbered disks, so don't scratch your head when you see that pattern during operation.
How the data and parity are striped across the disks, and (in the case of the "AutoRAID" setting) which data is kept as RAID 0+1 versus RAID 5, is determined by the controller firmware and is out of your reach.
In case you're wondering, all frequently written or read data is kept at RAID 0+1. Seldom-used data is shelved to RAID 5, thus maximizing the available space. On some installations, I segregate data (mountpoints) onto different RGs and FC paths as I mentioned before. In other instances, I've created one big LUN in each RG and striped the LVs across both.
And talk about bandwidth! These little hummers fly. I set up a test with dd to perform some very vile acts upon a VA w/one DS. This went beyond even some of the unspeakable things that Oracle tries to do. I was able to sustain a measured throughput of 90MB/s per FC path with no disk queue at all -- zero. Every I/O request was being satisfied in real time. I think I had about 30 - 35 read and write streams all cranking at the same time using various block sizes. I also made sure I was touching far more data than the controller cache could hold. I started to build a consistent disk queue when I cranked the number of full-blast, concurrent I/O streams up to around 50. Needless to say, I'm very pleased with the VA performance.
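A scaled-down, portable sketch of that kind of concurrent-stream dd test is below. It uses /dev/zero and /dev/null as stand-ins so it touches no real storage; the stream count and block sizes are illustrative, not the values from the original test. Point if=/of= at raw device files (e.g. /dev/rdsk/...) to stress an actual array:

```shell
# Launch several concurrent dd streams with varying block sizes, then wait.
# Stand-in devices; the original test used ~30-50 streams against raw LUNs.
STREAMS=4
BLOCKS=256                          # I/O operations per stream
for i in $(seq 1 "$STREAMS"); do
    bs=$(( 64 * i ))k               # vary the block size per stream: 64k, 128k, ...
    dd if=/dev/zero of=/dev/null bs="$bs" count="$BLOCKS" 2>/dev/null &
done
wait                                # block until every stream completes
echo "all $STREAMS streams finished"
```

On a real array you would watch `sar -d` or the disk queue while this runs, which is how the zero-queue observation above was made.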
I've attached a technical paper that discusses the mathematics behind RAID 5 dual parity in some detail.
Cheers,
Jim