Relationship between VA and LVM performance
02-25-2003 08:26 PM
I have a single rp5470 connected to a VA-7100 on a single FC interface with a 256MB cache.
My VA has one redundancy group and 3 LUNs made up of a total of nine 36GB 10k drives. When I run armperf I get read latencies mostly clustered in the 10-20ms range and write latencies in the 5-81ms range. The same armperf data also shows a total I/O throughput of 314 per second as a 6-day average, with a peak of 571 and only a handful of other total I/O measurements above 500. My feeling is that the RAID is performing well.
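As a rough sanity check on that feeling (the per-spindle figure below is an assumption, not something taken from the armperf data): a 10k RPM drive is usually good for very roughly 100-120 random IOPS.

# 9 spindles x ~100 IOPS           = ~900 back-end IOPS available
# observed: 314 avg, 571 peak I/Os per second -> well under capacity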
On the server, I have vg10 configured with a single lvol made up of three PVs (c6t0d0, c6t0d1, and c6t0d2), and sar shows the following for each PV:
Device    %busy   avwait
c6t0d0      78      73
c6t0d1      49      20
c6t0d2      31      40
Glance also shows disk utilization hitting 100% much of the time. My feeling is that either there is a problem in how this is measured or it truly indicates a problem.
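For context, per-device figures like these come from sar's disk report; a typical invocation looks like this (the interval and count are arbitrary):

# one 60-second sample per minute for an hour; reports %busy, avque,
# r+w/s, blks/s, avwait and avserv for each busy device file
sar -d 60 60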
I think it is fair to say this system is pretty busy with I/O, and there are a huge number of files on the system (50M+). I would say most of the I/Os on this system are small and spread across many files, although there are a few large files (1GB+) that get hit frequently. The files are mostly flat indexed database files, with a handful of PostgreSQL clusters.
A few questions, if I may, as I am trying to better understand the relationship between VA performance and LVM performance.
Just how busy can an LVM device file get?
If I were to add 3 more LUNs and were able to equalize the data across them all, would that increase RAID performance only to put more pressure on LVM?
Is increasing controller cache the key, if there is one?
Is there a problem I should be worried about?
How much more I/O can the system take?
I am preparing to create a SAN and will be adding another controller to the VA and another FC HBA to the 5470, as well as another rp5470 with dual FC HBAs. Will the additional controller and HBAs only add to the LVM pressure with the additional bandwidth?
Is there a tool that will help me analyze this accurately, end to end?
Thanks for any information you can provide.
Tim
2 REPLIES
Re: Relationship between VA and LVM performance
02-25-2003 09:34 PM
First, install the December 2002 patch bundle. It contains LVM performance patches and other patches that can address these issues.
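To see what is already on the box (the egrep pattern is just the usual HP-UX naming convention: PHKL_* for kernel patches and PHCO_* for command patches, which is where LVM fixes land):

# list installed bundles; the December 2002 quality pack should show up here
swlist -l bundle
# list installed products and pick out the kernel and command patches
swlist -l product | egrep 'PH(KL|CO)'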
Second, search the ITRC for more LVM patches.
Third, try the little performance script attached to this post. It will collect data in a file, and you can modify it to change the data collection period. It runs in the background. It will help you.
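The original attachment is not preserved in this archive; a minimal sketch of a collector along those lines, assuming sar as the data source and with placeholder path, interval, and count:

#!/usr/bin/sh
# sample per-device disk statistics in the background
INTERVAL=60                                # seconds between samples
COUNT=1440                                 # 1440 x 60s = 24 hours of data
OUT=/var/tmp/sar_disk.$(date +%Y%m%d).log  # output file
sar -d $INTERVAL $COUNT >> $OUT &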
In the long run, consider a second Fibre Channel card and PVLinks to get a little load balancing and failover across the FC interfaces.
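For illustration, a PVLink is just an alternate device file for a PV that is already in the volume group; the device name below is hypothetical (confirm the real second path with ioscan -fnC disk):

# add the path seen through the second HBA as an alternate link
vgextend /dev/vg10 /dev/dsk/c8t0d0
# vgdisplay -v /dev/vg10 will then list that path as an alternate link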
I think, though, that you can deal with this issue entirely by measuring performance and installing the appropriate patches.
SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Re: Relationship between VA and LVM performance
02-26-2003 06:39 PM
So I assume you believe I have a problem and that the problem is with LVM? That isn't a hard sell for me, as that is what I have been eyeing. I am just trying to collect as much information as I can.
Thanks,
Tim