Operating System - HP-UX
High I/O rates to disk array; ideas on improving performance?
03-02-2006 03:10 AM
On the PVs for the primary Informix DB space we are, for long periods approaching 8 hours, running close to 100% disk utilization on both HBAs. Below is a 10-hour sar -d output for the two mirrored PVs:
device            %busy   avque   r+w/s   blks/s   avwait   avserv
Average c27t2d4   54.50    0.68     288     2263     5.06     3.81
Average c26t2d4   85.12    0.70     427     2941     5.05     3.76
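(For reference, output like this comes from sar's interval/count form; a sketch -- the interval and count below are illustrative, not what was actually run:
# Sample disk activity hourly for 10 hours; Average lines print at the end:
sar -d 3600 10
# Or report from a previously collected daily file:
sar -d -f /var/adm/sa/sa02
)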
What options should we consider for improving the throughput in a SAN and array environment?
Stuart
03-02-2006 03:17 AM
Re: High I/O rates to disk array; ideas on improving performance?
Grab lsof from http://hpux.cs.utah.edu/hppd/hpux/Sysadmin/lsof-4.76/ and see what processes are writing to files on those disks. If there are multiple files involved, you could possibly spread them out over many disks. If there are only a few (or even one large one), then it's a utilization problem: either too many processes going after the same data, or a couple of very poorly written queries.
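A minimal sketch of that check (the paths are illustrative; a raw lvol only shows up while a process holds it open):
# Processes with files open on a mounted filesystem:
lsof /u1
# Processes holding a raw database lvol open:
lsof /dev/vg01/rlvol_ifmx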
mark
03-02-2006 03:21 AM
Re: High I/O rates to disk array; ideas on improving performance?
Will lsof help with read/write to a database?
Stuart
03-02-2006 03:23 AM
Re: High I/O rates to disk array; ideas on improving performance?
Let me state my question with more clarity:
Will lsof help with identifying read/write to a database lvol?
Stuart
03-02-2006 03:44 AM
Re: High I/O rates to disk array; ideas on improving performance?
One thing you should be aware of is that host-based tools like sar and Glance are not very good at analyzing disk arrays. All they know (or can know) is that a whole heaping helping of I/O is going through what they see as one physical disk -- never mind that the LUN may actually be striped across 10 disks. One thing you can do to make things APPEAR better to sar is to divide your big LUNs into equivalent smaller LUNs. The total I/O per disk will be reduced and things will APPEAR better; the actual throughput may be (and likely will be) unchanged.
03-02-2006 03:51 AM
Re: High I/O rates to disk array; ideas on improving performance?
I figured that breaking the PV into smaller PVs would reduce the alarm numbers but wouldn't help the actual throughput. And if the HBA is maxed out, is there anything else to try before I go to 2Gb HBAs or additional interfaces? Another option is to move the mirroring to the arrays and then split the PV configuration. But both of these options involve capital that I don't have this year.
To follow up: what other system performance measurements would be good for evaluating database performance?
Thankfully we aren't receiving regular complaints from users about slow performance, but I'd like to stay ahead of that problem.
Stuart
03-02-2006 04:09 AM
Re: High I/O rates to disk array; ideas on improving performance?
Now, if you can rethink your mirroring methodology to use CA (Continuous Access), so that the mirroring occurs behind the scenes, then it may be possible to actually improve I/O.
The idea is that rather than using 1 large LUN per VG, you divide it into as many LUNs as you have dedicated I/O channels between the host and the array. For example, suppose that you need a 300GB LUN and that you have 2 Fibre cards in the host. Create 2 150GB LUNs. LUN0's primary path should be SCSI 0, alternate 1; LUN1's primary path should be SCSI 1, alternate 0. You then stripe each LVOL in the VG across both LUNs in 64-128KB stripes. This will efficiently distribute your I/O across the available I/O paths. What we are really trying to do is throw the data at the array as fast as we can and let it decide what to do with it -- after all, that's what those expensive arrays are good at.
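A minimal sketch of that layout in HP-UX LVM, assuming a volume group vg01 and illustrative device paths (c26/c27 being the two HBAs; the /dev/vg01/group node is assumed to already exist):
# LUN0 primary via c26, LUN1 primary via c27:
pvcreate /dev/rdsk/c26t0d0
pvcreate /dev/rdsk/c27t0d1
vgcreate /dev/vg01 /dev/dsk/c26t0d0
vgextend /dev/vg01 /dev/dsk/c27t0d0    # alternate path (PVLink) for LUN0
vgextend /dev/vg01 /dev/dsk/c27t0d1    # LUN1, primary on the other HBA
vgextend /dev/vg01 /dev/dsk/c26t0d1    # alternate path (PVLink) for LUN1
# Stripe the lvol across both LUNs with a 64KB stripe size:
lvcreate -i 2 -I 64 -L 10240 -n lvol_data /dev/vg01
With mirroring pushed down to the array via CA, the lvol needs no host-side mirror copies, so the striping plus PVLink alternates carry the whole load-balancing job.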
03-02-2006 05:39 AM
Re: High I/O rates to disk array; ideas on improving performance?
Would you say that your third statement really only applies once I move mirroring away from the OS?
Stuart
03-02-2006 05:49 AM
Re: High I/O rates to disk array; ideas on improving performance?
Have you run fcmsutil to see if the HBA is producing errors?
fcmsutil [dev] stat
where [dev] is the fully qualified path to the device file for the HBA. On my system, it's /dev/td0.
mark
03-02-2006 05:57 AM
Re: High I/O rates to disk array; ideas on improving performance?
Excellent idea; however, that's not the problem.
Here's the top of the output for both HBAs:
$ sudo fcmsutil /dev/td1 stat
Thu Mar 2 13:53:54 2006
Channel Statistics
Statistics From Link Status Registers ...
Loss of signal 0 Bad Rx Char 255
Loss of Sync 2 Link Fail 0
Received EOFa 0 Discarded Frame 0
Bad CRC 0 Protocol Error 0
$ sudo fcmsutil /dev/td0 stat
Thu Mar 2 13:53:32 2006
Channel Statistics
Statistics From Link Status Registers ...
Loss of signal 0 Bad Rx Char 255
Loss of Sync 2 Link Fail 0
Received EOFa 0 Discarded Frame 0
Bad CRC 0 Protocol Error 0
I talked to our DBA and he believes there is some month-end accounting activity going on right now. I'm going to look back a few days to see if performance was this high then. I may have caught the disk I/O when it was at its most intense.
Stuart
03-02-2006 06:51 AM
Re: High I/O rates to disk array; ideas on improving performance? (Solution)
Example reasons for this are:
Too many Informix cleaner threads simultaneously cleaning LRUs. This is a key LUN saturator. Reduce your LRU writes if at all possible; you may have to directly manage your checkpoints at certain periods just to prevent cleaning. Remember that during a checkpoint, LRUs are allowed to clean down to LRU_MIN_DIRTY before the chunk writes occur, which can extend the checkpoint time if your LUNs are hot (see the ONCONFIG sketch after this list).
Multiple chunks on a single LUN. This is a big no-no with Informix, as writes to chunks happen simultaneously during checkpoints, which lengthens the write queue. I have seen it go up to 1000 on really bad systems. To reduce this horror, follow ACS's advice on LUN striping and spread out your chunks more. I find that 4 LUNs per LVOL/chunk works well with Informix; extent-based striping is not quite as good. The default queue depth in HP-UX LVM is 8 per device -- the more devices, the shorter the queue, unless the array itself is saturated.
Poor table and index placement in the database. If the database is joining tables in the same dbspace/LUN, the heads will be flapping all over the place, and striping can only do so much.
Slow or insufficient fibre. Go to multiple strands and stripe all LUNs over all strands, all of the time.
Poor RAID choice. Use RAID 0/1 always (http://www.baarf.com).
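A minimal sketch of the LRU-related ONCONFIG knobs from the first item (the values are illustrative, not recommendations -- tune them against your buffer pool and checkpoint behaviour):
LRUS            8       # number of LRU queue pairs
CLEANERS        8       # page-cleaner threads
LRU_MAX_DIRTY   60      # begin LRU writes when a queue is 60% dirty
LRU_MIN_DIRTY   50      # stop cleaning at 50% dirty
CKPTINTVL       300     # checkpoint interval, in seconds
# Watch the effect: onstat -F breaks writes out into foreground,
# LRU and chunk writes; onstat -R shows per-queue dirty levels.
onstat -F
onstat -R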
03-02-2006 06:54 AM
Re: High I/O rates to disk array; ideas on improving performance?
03-02-2006 07:43 AM
Re: High I/O rates to disk array; ideas on improving performance?
How "remote" is your other EVA4000? Will the usage of just a 1GB FC-HBA mean its really far out --- as in you're using dark fibre accross campuses, accross cities?
You SAR stats appear healthy to me. In fact it appears normal. And since you're using EVAs, no amount of host based RAIDing (striping) will help you.
If you're seeing 100% disk utilization - that is probably because you do have valid load from your Informix DB. If you've issues with your storage infra .. you'll be seeing queueing (which you do not have) and bigger response and service times on inidividual LUNs.
I think you're okay and those 1GB FC channels to your EVAs AND your EVAs are working fine. What you're having is just plain and simple load that your server can crunch.
Hope this helps.
03-02-2006 11:39 PM
Re: High I/O rates to disk array; ideas on improving performance?
I've got our DBA looking at some DB performance issues. Our reads and writes to memory are in the 90% range, which is good, and the flush-to-disk wait times are very low, also a good sign. The 100% number I'm getting is from Glance. After further investigation we've determined that such a number doesn't mean all HBAs are saturated; as a matter of fact, we have seen transmission rates over 1Gb/sec on a 1Gb fibre. We are going to work with the Informix people to make sure we are monitoring the correct parameters for throughput.
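(For the cache-hit numbers above, Informix reports them directly; a quick check, assuming the instance environment such as INFORMIXSERVER is set:
onstat -p    # read/write %cached appear near the top of the profile
)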
Clay,
I understand your comments about a CA solution for mirroring. Another option we will be considering, if we end up in a disk-limited situation, is array-based mirroring.
Nelson,
The arrays are physically about 600' apart. I'm hoping we can find information to support your hunch. I've looked at historical system data back to June of '05 and have seen the same levels of disk I/O then as we are seeing now. Our users are normally quick to complain if system performance drops below expected levels, and no one has complained. So either we have a bottleneck and they are used to it, or we really don't have a bottleneck and I need to adjust my expectations of the new daily reports I'm getting from OV Performance Manager.
Stuart
03-03-2006 01:22 AM
Re: High I/O rates to disk array; ideas on improving performance?
As far as mirroring goes, I've always entrusted mirroring of my SAN disks (for whole-array redundancy, that is) to VxVM rather than to in-array solutions like BCVs, SRDF or CA. Bandwidth, CPU cycles and busses are significantly faster these days, and I've convinced most of my clients to take this approach. One glaring benefit of VxVM host-based mirroring is that you're not beholden to your array vendor: you can kick out or introduce any array from any vendor to the mix without disruption.
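A minimal sketch of that approach (the disk group and disk names are illustrative; put one plex on each array by naming a disk from each):
# Mirror a volume across two arrays:
vxassist -g datadg make datavol 100g layout=mirror nmirror=2 eva1_d01 eva2_d01
# Verify both plexes are ENABLED/ACTIVE:
vxprint -g datadg -ht datavol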