07-02-2009 07:25 AM
File System Performance vs. # of VGs or # of LVs
Before we made the changes our consultant recommended, I had the following layout (just examples):
/dev/vg05/lvol1 mounted as /oradata
The above file system contained all data files, indexes and redo log groups.
After the changes we wound up with this:
/dev/vg05/lvol1 mounted as /oradata (Now with only data and index files)
/dev/vg06/lvol1 mounted as /ora_redo1
/dev/vg07/lvol1 mounted as /ora_redo2
The above being for the two redo log groups.
/dev/vg08/lvol1 mounted as /ora_archlogs
The last file system being for new archive logs since we enabled archive logging.
As I said, I was assuming that since the file systems were all housed on the same group of disks sliced out of the SAN and going over the same fiber paths, switches, etc... that there would be no change. But the changes did reduce some of the disk I/O load and there was a bit of a performance increase. Enough for us to live until we moved to new servers and the new SAN.
Later on I asked someone from HP how this could have made a difference, and it was explained to me that it would have reduced some LVM overhead in kernel space. That answer cleared things up a bit. But now I'm certain I've gone about this all wrong, since I created a completely new volume group for each file system instead of just creating logical volumes. Doing this (on HP-UX 11.11 and 11.23) I run into the maxvgs limit, which I had to raise on one system to 25 and may wind up exceeding.
So would I still keep the LVM kernel overhead down if I move to a layout like the following?
/dev/ora_vg/oracle10g mounted as /oracle10g
/dev/ora_vg/oradata mounted as /oradata
/dev/ora_vg/redo1 mounted as /ora_redo1
/dev/ora_vg/redo2 mounted as /ora_redo2
/dev/ora_vg/archlogs mounted as /ora_archlogs
That way I'd have one VG with multiple LVs. It would also make my management of the LUNs on the SAN a lot easier. (I'm using LVM to stripe across two LUNs with multiple alternate paths for manual load balancing)
Or should I stick with my current multiple VGs for better performance?
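For reference, building that single striped VG would look roughly like the sketch below (the device files, group minor number, and sizes are made-up examples, not our real ones):

# mkdir /dev/ora_vg
# mknod /dev/ora_vg/group c 64 0x080000
# pvcreate /dev/rdsk/c4t0d1
# pvcreate /dev/rdsk/c4t0d2
# vgcreate /dev/ora_vg /dev/dsk/c4t0d1 /dev/dsk/c4t0d2
# vgextend /dev/ora_vg /dev/dsk/c6t0d1 /dev/dsk/c6t0d2    <- same LUNs via the alternate paths (PVLinks)
# lvcreate -i 2 -I 64 -L 20480 -n oradata /dev/ora_vg     <- stripe the LV across both LUNs
# newfs -F vxfs /dev/ora_vg/roradata
# mount /dev/ora_vg/oradata /oradata

(Then repeat the lvcreate/newfs/mount for redo1, redo2, archlogs, and so on.)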
07-02-2009 07:52 AM
Re: File System Performance vs. # of VGs or # of LVs
It's always better to have data, index, and redo on different LUNs, and if you run in archive log mode, the archive logs on a different LUN as well.
We can have a maximum of 256 VGs in a system; the default (maxvgs) is 10.
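To raise the limit, a rough sketch (maxvgs is a static tunable, so it only takes effect after a kernel rebuild/reboot; the value 25 is just an example):

On 11.11 (kmtune, then rebuild the kernel and reboot onto it):
# kmtune -s maxvgs=25
# mk_kernel -o /stand/vmunix
On 11.23 (kctune replaced kmtune; the change still requires a reboot):
# kctune maxvgs=25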
Regards
Sanjeev
07-02-2009 08:31 AM
Re: File System Performance vs. # of VGs or # of LVs
This seems like a reasonable suggestion.
Years ago, Oracle told me: index, data, and redo need to be on RAID 1 or RAID 10 disk, because they are write intensive.
How writes take place to index and data is very different from how redo works, so it might make sense to have redo on its own disks.
Since a volume group can handle 255 physical disks or LUNs, you can still keep it all in one volume group, yet do a little specialization.
Why RAID 1 or RAID 10? To do fewer actual writes. RAID 5 stripes your data, plus parity, over a disk array. Read performance increases with the number of disks; however, every logical write on RAID 5 turns into several physical I/Os, because the old data and old parity must be read and the new data and new parity written.
That takes time, and the disk heads need to do a lot of moving around to make it happen.
RAID 1 or 10 reduces the actual number of back-end writes needed.
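As a rough worked example (assuming small random writes that the array cache does not absorb): 1,000 logical writes/s on RAID 5 mean roughly 4,000 back-end I/Os/s (four per write), versus roughly 2,000 on RAID 1/10 (one write to each mirror half).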
This is worth a try. You can have a single volume group for the database, which you need anyway if you go with a high-availability Serviceguard solution, because your database package startup script will also activate your volume group.
How you build the volume group is up to you.
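To illustrate (a sketch of a legacy Serviceguard package control script; every name below is made up): the package declares the VG plus its LVs and mount points together, which is why one VG for the whole database keeps the package simple:

VG[0]="ora_vg"
LV[0]="/dev/ora_vg/oradata";  FS[0]="/oradata";      FS_MOUNT_OPT[0]="-o rw"
LV[1]="/dev/ora_vg/redo1";    FS[1]="/ora_redo1";    FS_MOUNT_OPT[1]="-o rw"
LV[2]="/dev/ora_vg/archlogs"; FS[2]="/ora_archlogs"; FS_MOUNT_OPT[2]="-o rw"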
I'd like to see you try this and report results. Do a series of performance tests before the change and after so you have a good benchmark.
>>
So would I still keep the LVM kernel overhead down if I move to a layout like the following?
/dev/ora_vg/oracle10g mounted as /oracle10g
/dev/ora_vg/oradata mounted as /oradata
/dev/ora_vg/redo1 mounted as /ora_redo1
/dev/ora_vg/redo2 mounted as /ora_redo2
/dev/ora_vg/archlogs mounted as /ora_archlogs
<<
I don't think this is an LVM kernel overhead problem. It is more likely write contention between redo, index, and data, or poor SQL statements hammering the database. So you may do all this work and still have to go back to the application developers and have them improve their SQL.
Your DBA should be a gatekeeper who keeps bad SQL statements from making your system, and therefore you, unhappy.
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
07-02-2009 09:45 AM
Re: File System Performance vs. # of VGs or # of LVs
While the best solution is simply to increase the queue length, you have to be careful because not all devices can benefit from this. I think, for example, that internal SCSI disks should not have their queue length increased.
11.31 comes with a feature named DDR that lets you assign low-level parameters such as the queue length using a mask; the value is then applied automatically depending on the exact SAN array model the OS detects, which makes it much easier to manage. I simply set the DDR to put the queue length to 32 automatically for EVA arrays, and leave the default for everything else.
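A sketch of what that can look like with scsimgr (the device file and the vendor/product scope string below are only examples; the vendor field is space-padded, so check your own system for the real strings before copying anything):

# scsimgr get_attr -D /dev/rdisk/disk4 -a max_q_depth       <- check one device's current value
# scsimgr ddr_add -N "/escsi/esdisk/0x0/HP      /HSV210" -a max_q_depth=32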
Good luck
07-02-2009 11:23 AM
Re: File System Performance vs. # of VGs or # of LVs
/package1/ors Raid5 (ORACLE_HOME)
/package1/ora_data1 Raid5
/package1/ora_data2 Raid5
/package1/ora_redo1 Raid1
/package1/ora_redo2 Raid1
/package1/ora_exarch Raid1 (export and Archivelogs)
We have the LUNs laid out in a similar way, as you can see above.
On HP-UX 11.23 we increased the queue_depth for all disks (LUNs) with the following commands:
scsictl -m queue_depth=16 /dev/rdsk/c4t0d1
scsictl -m queue_depth=16 /dev/rdsk/c4t0d2
scsictl -m queue_depth=16 /dev/rdsk/c4t0d3
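One thing to note (as far as I know, scsictl settings do not survive a reboot, so we re-apply them from a startup script): you can display the current mode parameters, including queue_depth, per device with

scsictl -a /dev/rdsk/c4t0d1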
Greetings,
Butti
07-02-2009 04:29 PM
Re: File System Performance vs. # of VGs or # of LVs
07-03-2009 08:34 AM
Re: File System Performance vs. # of VGs or # of LVs
Before we make a lot of assumptions, you may want to extract the LVM and disk metrics from MeasureWare and see what the request queue and read/write rates actually look like:
extract -xp -d -f /tmp/disks.txt
look at bydsk_request_queue and the other metrics
extract -xp -z -f /tmp/lv.txt
look at LV_READ_RATE and LV_WRITE_RATE
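A sketch of the surrounding steps, assuming the MeasureWare agent is installed and collecting (the dates are examples; adjust -b/-e to the interval you care about):

# mwa status                                               <- confirm scopeux is running
# extract -xp -d -b "07/01/2009" -e "07/03/2009" -f /tmp/disks.txt
# extract -xp -z -b "07/01/2009" -e "07/03/2009" -f /tmp/lv.txt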
I make no assumptions till I look at the data.
Good luck.
07-03-2009 08:55 AM
Re: File System Performance vs. # of VGs or # of LVs
Can you post the output of
# armdsp -a
to check your current config?
Hope this helps!
Regards
Torsten.
__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.
__________________________________________________
No support by private messages. Please ask the forum!
If you feel this was helpful please click the KUDOS! thumb below!

07-07-2009 06:17 AM
Re: File System Performance vs. # of VGs or # of LVs
I am trying to get GlancePlus installed (we supposedly have a license that we bought but that's another topic). Is Measureware part of that or is it a separate product? I'd like to get some metrics on our real disk I/O performance so that I can determine if we'd benefit from increasing the SCSI queue lengths for our Oracle LUNs.
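In the meantime, the bundled sar can give a first look at queueing without any Glance/MeasureWare license (my assumption: on 11.x the disk report shows %busy, avque, r+w/s, blks/s, avwait, avserv):

# sar -d 5 12

Consistently high avque/avwait on the Oracle LUNs would suggest the queue depth change is worth testing.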
As much as I'm not enjoying managing ten LUNs for Oracle (originally it was only two LUNs when everything was on one VG), if it is the best route to maintaining the best disk I/O, I'll live with it. If changing the SCSI queue lengths will help squeeze out any more performance, I'll take that too.
Regarding the SQL itself, we don't have a DBA. I'm about as close as we get since I can work at the sqlplus prompt. The application that makes the SQL statements (from a third-party vendor) is not under our control. We can't alter the SQL they make and expect to be supported. We did call in a SQL consultant who looked over the various statements that are frequently used and he seemed to be of the opinion that things are being done "right". But we have no ability to tune the SQL. So the best I can do is provide the best disk I/O possible via a combination of alternate paths, LVM striping (2 stripes per file system) and now possibly tuning the SCSI queue lengths.
If anyone else has any further suggestions regarding LVM and disk I/O I'd love to hear them. If not, I'll close this thread in a few days. Thanks to everyone who responded! I've been on ITRC for a few years but using my supervisor's account. I finally decided to reactivate my own, but as usual ITRC forums do not disappoint!
07-07-2009 08:18 PM
Re: File System Performance vs. # of VGs or # of LVs
For example, it could be that:
- data are converted to RAID5 (not enough free space to keep this mirrored)
- no caching because of bad batteries
- you access LUNs via the alternate controller instead of primary
...
that's why I asked you for the armdsp output.
Hope this helps!
Regards
Torsten.
__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.
__________________________________________________
No support by private messages. Please ask the forum!
If you feel this was helpful please click the KUDOS! thumb below!

07-08-2009 06:02 AM
Re: File System Performance vs. # of VGs or # of LVs
07-08-2009 06:04 AM
Re: File System Performance vs. # of VGs or # of LVs
07-08-2009 06:09 AM
Re: File System Performance vs. # of VGs or # of LVs
Hope this helps!
Regards
Torsten.
__________________________________________________
There are only 10 types of people in the world -
those who understand binary, and those who don't.
__________________________________________________
No support by private messages. Please ask the forum!
If you feel this was helpful please click the KUDOS! thumb below!
