Operating System - HP-UX
Xiotech Magnitude as 1 device
07-19-2002 01:01 PM
Hello,
I am seeing 100% busy on a Xiotech device with almost any activity on the database server. CPU utilization is very low; we are waiting almost exclusively on disk I/O.
The device is one huge 300 GB drive served up by a Xiotech Magnitude. I have HP-UX 11.0 partition the single device into 150 2 GB raw chunks via LVM, which the Informix database server uses.
My question is: does HP-UX allocate buffers, etc., per device? In other words, does HP-UX see this as just a single disk, rather than as the 150 disks I use it as?
This may well just be a disk I/O issue, but I'm curious how HP-UX treats devices versus LVM partitions.
Any help appreciated,
Thanks!
3 REPLIES
07-19-2002 01:17 PM
Re: Xiotech Magnitude as 1 device
You are correct.
If you look at vgdisplay -v, you will see your 150 logical volumes and one physical volume at the very bottom.
A more interesting display is pvdisplay -v /dev/dsk/c_t_d_, which shows how each LV's logical extents map to the matching physical extents.
I'm pretty sure that all of the buffering is done at the physical device level.
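To illustrate the commands above, a minimal sketch of what you would run to confirm the one-PV/many-LV layout (the volume group name vg01 and the device path are placeholders for your environment; pvdisplay -v can produce thousands of lines of extent mappings on a 300 GB PV, hence the pipe through more):

```shell
# List the volume group: expect 150 LVs but only 1 PV at the bottom.
vgdisplay -v /dev/vg01

# Show how each LV's logical extents map onto physical extents
# of the single underlying Xiotech device (path is a placeholder).
pvdisplay -v /dev/dsk/c4t0d0 | more
```

If vgdisplay reports "Cur PV 1", the kernel really does treat the whole Magnitude as one disk, regardless of how many logical volumes are carved from it.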
07-19-2002 01:24 PM
Solution
Many times this is only an apparent problem. Your HP box simply sees this as one device and has no idea that it is many physical devices. All Glance (or sar) knows is that a tremendous amount of I/O is going through this device. This 'problem' would exist on any kind of array. I've sometimes seen an array divided into several LUNs sharing the same SCSI bus, so that it appears the system is no longer I/O bound on one device, but the actual I/O remains the same or is sometimes a little worse.
Your better answer is to allocate your array as a few large LUNs over different paths. This may mean that you need more I/O cards in your array and in your host computer. You can then combine these LUNs into a VG and stripe all the LVOLs across these LUNs in 64 KB chunks. This will give you more paths to the data, and the total I/O rate will increase quite a bit.
By the way, it is perfectly normal for your box in this case to be waiting on I/O. What else would it be doing?
If memory serves, Magnitudes intentionally have no cache, and since you are using raw I/O there is no buffer cache either. This implies that you need a very large database cache.
You might try running conventional file I/O; you just might be surprised. If you were running 11.11, I am all but sure that your performance would actually improve over raw I/O; I have actually measured this in Oracle.
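A hedged sketch of the striping setup described above, assuming four LUNs presented over separate paths (all device names, the group name vgdata, and the LV name are placeholders; lvcreate's -i sets the number of stripes and -I the stripe size in KB):

```shell
# Initialize each LUN as an LVM physical volume (paths are placeholders).
pvcreate /dev/rdsk/c4t0d0
pvcreate /dev/rdsk/c5t0d0
pvcreate /dev/rdsk/c6t0d0
pvcreate /dev/rdsk/c7t0d0

# Combine the four LUNs into one volume group.
vgcreate /dev/vgdata /dev/dsk/c4t0d0 /dev/dsk/c5t0d0 \
    /dev/dsk/c6t0d0 /dev/dsk/c7t0d0

# Create a 2 GB LV striped across all 4 LUNs in 64 KB chunks,
# suitable for use as one Informix raw chunk via /dev/vgdata/rchunk01.
lvcreate -i 4 -I 64 -L 2048 -n chunk01 /dev/vgdata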
If it ain't broke, I can fix that.
07-20-2002 08:54 AM
Re: Xiotech Magnitude as 1 device
As far as I/O buffering goes, a raw device is 100% the responsibility of the application program. Unlike filesystems, which are automatically buffered through the HP-UX buffer cache, Informix must do what it can to reduce I/O. This is similar to Oracle and Sybase, where choices made by the DBA account for about 95% of performance issues. Poorly written SQL or too many serial reads (not enough indexes) all contribute to the disk I/O.
There is virtually nothing you can do to improve raw disk I/O except, as Clay mentioned, adding more I/O channels (assuming your disk array supports that). You might get some benefit by switching to filesystem rather than raw I/O, and you would gain the advantage of being able to back up the data with conventional filesystem backup tools like fbackup.
Bill Hassell, sysadmin
Bill Hassell, sysadmin
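To make the fbackup suggestion concrete, a minimal sketch (the tape device, data path, and index file are placeholders, and this assumes the database chunks live on a filesystem and the database is quiesced first so the backup is consistent):

```shell
# Back up the Informix filesystem data to tape, writing an
# index of the files saved (paths/devices are placeholders).
fbackup -f /dev/rmt/0m -i /informix/data -I /var/adm/fbackupfiles/index
```

This only works once the chunks are ordinary files on a filesystem; raw LVM chunks are invisible to file-level tools like fbackup.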