Operating System - HP-UX > Disk Performance issue
10-25-2006 12:56 AM
I am experiencing disk performance issues on our CRM system. Disk utilization often runs above 90% according to Glance and sar. Performance Manager narrows it down to 3 disks (LUNs) in one volume group and one logical volume.
The logical volume is a 3-way stripe on 3 x 70G LUNs. The 70G LUNs are striped 8-way on a DMX 3000, so it is stripe on stripe.
Should I increase the LV to a 4-way stripe? Or 6-way, or 8-way?
Can LVM relayout an LV the way VxVM can?
Any other pearls of wisdom?
Further Information:
* Both paths are experiencing heavy utilization. The HBAs, FC cables and switch ports are all 2G.
* The HBA ports in the FC switches are only 5% used. The FA ports in the FC switches are 35% used.
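For reference, the per-LUN utilization figures above can be confirmed from the command line; a minimal sketch (the volume group and LV names are placeholders, not taken from the thread):

```shell
# Sample disk activity: 5 samples at 5-second intervals (HP-UX sar)
sar -d 5 5

# Map the busy device files back to the striped LV
# (vgcrm/lvol1 are hypothetical names)
lvdisplay -v /dev/vgcrm/lvol1 | grep /dev/dsk
```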
regards,
Shane
Solved!
10-25-2006 01:14 AM
Disk utilization alone is not a good measure of whether to take action.
Application response time is the better measure. If the application responds properly and everyone is happy, find something else to do.
In general, there is one common problem: RAID 5 and other parity-based striping methods get used to host write-intensive databases. RAID 5 inherently writes more slowly because each write also requires the parity to be recalculated and rewritten. Moving write-intensive applications to RAID 1 or RAID 1/0 storage can dramatically improve write performance.
If you must remain in the striped world, spreading across the most disks is the accepted norm for good performance. Any of the changes recommended above are usually handled on the disk array by LUN reconfiguration.
SEP
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
10-25-2006 01:38 AM
Re: Disk Performance issue
I am not using RAID5. It is just a 3-way stripe. The 70G LUNs are mirrored in the DMX.
The 70G LUNs are striped 8-way in the DMX, so ideally I'd like to relayout at the OS layer.
I could create a new 4-way LV on 4 new 70G LUNs and move the data to it - what do you think?
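A migration like the one proposed here might look as follows with HP-UX LVM; a rough sketch only, assuming hypothetical volume names, a VxFS filesystem, and an outage window for the copy:

```shell
# Create a new 4-way striped LV across the four new LUNs
# (-i = number of stripes, -I = stripe size in KB; all names are placeholders)
lvcreate -i 4 -I 64 -L 204800 -n lvol_crm_new /dev/vgcrm

# Build a filesystem and copy the data across
newfs -F vxfs /dev/vgcrm/rlvol_crm_new
mount /dev/vgcrm/lvol_crm_new /mnt/new
cd /crm && find . -depth -print | cpio -pdmu /mnt/new
```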
regards,
Shane
10-25-2006 01:51 AM
Re: Disk Performance issue
Unfortunately, with HP-UX LVM you cannot do an online relayout, or better yet a "plex replacement", the way you can with VxVM.
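For comparison, the VxVM capability referred to here is an online relayout; a sketch of what that looks like (the disk group and volume names are made up):

```shell
# VxVM can restripe a volume online, e.g. to 4 columns
vxassist -g crmdg relayout crmvol layout=stripe ncol=4
```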
As far as performance, here are my suggestions based on our experiences:
1. Use 4- or 8-way stripes (for CRM, use a stripe width of 64). With Hitachi-based frames and EMC, we never noticed any difference between RAID 5 and RAID 1/0 LUN components on these stripes.
2. Make sure your filesystems are mounted with Direct I/O enabled (fstab ... convosync=direct...).
3. Make sure your buffer cache is no more than 800 to 1600 MB (if your server purely does DB and app serving).
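Points 2 and 3 above map to concrete settings; a sketch, assuming VxFS (OnlineJFS) and an HP-UX 11i release that uses kmtune, with made-up device names:

```shell
# /etc/fstab entry with Direct I/O-style VxFS options
# /dev/vgcrm/lvol1 /crm vxfs delaylog,mincache=direct,convosync=direct 0 2

# Cap the buffer cache via kernel tunables (percent of RAM);
# e.g. on a 16 GB server, 5-10% keeps it roughly in the 800-1600 MB range
kmtune -s dbc_max_pct=10
kmtune -s dbc_min_pct=5
```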
Favourite Toy:
AMD Athlon II X6 1090T 6-core, 16GB RAM, 12TB ZFS RAIDZ-2 Storage. Linux Centos 5.6 running KVM Hypervisor. Virtual Machines: Ubuntu, Mint, Solaris 10, Windows 7 Professional, Windows XP Pro, Windows Server 2008R2, DOS 6.22, OpenFiler
11-27-2006 02:41 AM