Operating System - HP-UX

SOLVED
Nick Miles_1
New Member

Xiotech Magnitude as 1 device

Hello,

I am getting 100% busy on a Xiotech device with almost any activity on the database server. CPU utilization is very low; we're waiting almost exclusively on disk I/O.

The device is one huge 300 GB drive served up by a Xiotech Magnitude. I have HP-UX 11.0 partition the single device, via LVM, into 150 raw 2 GB chunks that the Informix database server can use.

My question is: does HP-UX allocate buffers, etc., per device? In other words, does HP-UX think this is just a single disk instead of the 150 disks that I use it as?

This may very well just be a disk I/O issue, but I'm curious how HP-UX treats physical devices versus LVM logical volumes.

Any help appreciated,
Thanks!
3 REPLIES
Rich Wright
Trusted Contributor

Re: Xiotech Magnitude as 1 device

You are correct.
If you look at vgdisplay -v, you will see your 150 logical volumes and 1 physical volume at the very bottom.

A more interesting display is pvdisplay -v /dev/dsk/c_t_d_, which shows how each LV's logical extents map to the matching physical extents.
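
For example (the VG and device names below are placeholders; use your own):

  vgdisplay -v /dev/vg01          # lists all 150 LVs, then the single PV at the bottom
  pvdisplay -v /dev/dsk/c1t2d0    # shows each LV's logical-to-physical extent mapping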

I'm pretty sure that all of the buffering is done at the physical device level.
A. Clay Stephenson
Acclaimed Contributor
Solution

Re: Xiotech Magnitude as 1 device

Many times this is only an apparent problem. Your HP box simply sees this as one device and has no idea that it is many physical devices. All Glance (or sar) knows is that a tremendous amount of I/O is going through this device. This 'problem' would exist on any kind of array. I've sometimes seen an array divided into several LUNs sharing the same SCSI bus, so that it appears the system is not I/O bound on one device, while the actual I/O remains the same or gets a little worse.

Your better answer is to allocate your array as a few large LUNs over different paths. This may mean that you need more I/O cards in your array and in your host computer. You can then combine these LUNs into a VG and stripe all the LVOLs across these LUNs in 64k chunks, as sketched below. This will give you more paths to the data, and your total I/O rate will increase quite a bit.
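
A minimal sketch of that layout, assuming four LUNs presented as the hypothetical device files c4t0d0 through c7t0d0 (substitute your own paths and group-file minor number):

  # Create a volume group spanning all four LUNs.
  mkdir /dev/vgdb
  mknod /dev/vgdb/group c 64 0x010000
  vgcreate /dev/vgdb /dev/dsk/c4t0d0 /dev/dsk/c5t0d0 /dev/dsk/c6t0d0 /dev/dsk/c7t0d0

  # Carve out a 2 GB LVOL striped across all 4 LUNs in 64k stripes;
  # repeat for each chunk. Informix then opens the raw side, /dev/vgdb/rlvol01.
  lvcreate -i 4 -I 64 -L 2048 -n lvol01 /dev/vgdb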

By the way, it is perfectly normal for your box in this case to be waiting on I/O. What else would it be doing?

If memory serves me, Magnitudes intentionally have no cache, and since you are using raw I/O there is no buffer cache either. This implies that you need a very large database cache.

You might try running conventional file I/O; you just might be surprised. If you were running 11.11, I am all but sure that your performance would actually improve over raw I/O; I have actually measured this with Oracle.
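
Converting one chunk to file I/O for a test is straightforward (a sketch; the LVOL and mount-point names are placeholders):

  newfs -F vxfs /dev/vgdb/rlvol01                    # build a VxFS filesystem on the raw LVOL
  mkdir -p /informix/chunk01
  mount -F vxfs /dev/vgdb/lvol01 /informix/chunk01

Then point the Informix chunk at a file under /informix/chunk01 instead of the raw device.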

If it ain't broke, I can fix that.
Bill Hassell
Honored Contributor

Re: Xiotech Magnitude as 1 device

As far as I/O buffers go, a raw device is 100% the responsibility of the application program. Unlike filesystem I/O, which is automatically buffered through the HP-UX buffer cache, raw I/O leaves Informix to do what it can to reduce I/O on its own. This is similar to Oracle and Sybase, where choices made by the DBA account for about 95% of the performance issues. Poorly written SQL or too many serial reads (not enough indexes) all contribute to the disk I/O.

There is virtually nothing you can do to improve raw disk I/O except what Clay mentioned (more I/O channels, assuming your disk array supports that). You might get some benefit by switching to filesystem rather than raw I/O, and you gain the advantage of being able to back up the data with conventional filesystem backup tools like fbackup, as in the example below.
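
For instance, once the chunks live in a filesystem, a conventional backup is a one-liner (the mount point and tape device below are placeholders for your own):

  # Quiesce or shut down the database first so the chunk files are consistent,
  # then back up everything under /informix to the default tape drive.
  fbackup -f /dev/rmt/0m -i /informix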


Bill Hassell, sysadmin