
Disk I/O Bottleneck

We have one N-class server running Informix databases. It has two FC60 controller cards: one controller card holds one LUN and the other holds two LUNs. Each LUN contains 6 disks. Every volume group on those LUNs has an alternate link through the other controller card.

We have 6 CPUs. According to top, idle time is 75 to 80% most of the time, and swapinfo looks fine, but disk I/O is at 100% most of the time. I have checked with GlancePlus as well; it shows that raw I/O is high. We use a lot of raw disks for Informix.

I don't have any experience with this FC60 array. Can you tell me how we can improve our disk I/O? If you know of any link where I can learn about the FC60, that would be even better.

Knowledge is the only valuable thing that cannot be bought.
Esteemed Contributor

Re: Disk I/O Bottleneck

Link to the FC60 manual:

The basic idea for disk performance is to balance disk access across the LUNs. For example, indexes and data files should be on separate LUNs, if possible.
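As a rough illustration of that balance check, here is a minimal sketch that flags a LUN whose utilization is far above the rest. The sample data, device names, and the 80% threshold are all hypothetical, not taken from this thread:

```shell
# Hypothetical per-LUN utilization figures (e.g. gathered from sar -d or Glance).
cat > /tmp/lun_busy.txt <<'EOF'
c4t0d0 97.5
c4t0d1 15.2
c5t0d0 12.8
EOF
# Flag any LUN whose %busy exceeds an arbitrary 80% threshold.
awk '$2 > 80 { print $1 " looks saturated at " $2 "% busy" }' /tmp/lun_busy.txt
```

If one LUN stands out like this while the others sit idle, that is the imbalance to fix by moving data or indexes onto the quieter LUNs.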
enjoy any little thing in my life
Honored Contributor

Re: Disk I/O Bottleneck

You need to verify that the access routes to the array are nicely balanced.
This means that your volume group configuration, shown by strings /etc/lvmtab and vgdisplay -v vgfc60, should be distributed evenly across the LUNs, ideally all of the same RAID level.
When creating LUNs on an FC60 (use amdsp -a), you should ensure that each disk in the LUN is on a separate FC60 bus. For example, in 30-disk split-bus mode, a LUN can span up to 6 buses. If you created your LUN left to right within a single SC10 inside the FC60, you will run into problems, especially in full-bus mode, where all the disks in each SC10 are on the same bus.
Therefore, spread each LUN across SC10s as much as possible.

Your VG should be spread across the FC60 controllers by careful creation of the VG.
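Spreading the VG across both controllers can be sketched like this. The device files below are hypothetical (assume c4... paths go through controller 1 and c5... paths through controller 2); the point is to alternate which controller is the primary path for each LUN:

```shell
# Hypothetical device paths; substitute the ones from your own ioscan output.
pvcreate /dev/rdsk/c4t0d0                              # LUN 0, primary via controller 1
pvcreate /dev/rdsk/c5t0d1                              # LUN 1, primary via controller 2
vgcreate /dev/vgfc60 /dev/dsk/c4t0d0 /dev/dsk/c5t0d1
# Adding the second path to each LUN registers it as an alternate (failover) link:
vgextend /dev/vgfc60 /dev/dsk/c5t0d0 /dev/dsk/c4t0d1
```

With the primaries alternated like this, normal I/O is shared between the two controller cards instead of all flowing through one, while the alternate links still give you path failover.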

The logical volumes that your filesystems are on should ideally be striped across disks, which will be the case with RAID 1/0 and RAID 5.
The fastest RAID level is RAID 1/0; large RAID 5 sets (e.g. a 16-disk RAID 5) will be slow.

The information you need:
ioscan -fnk  - verify how the FC60 shares FC connectivity with other devices
vgdisplay    - check how the LUNs are accessed
amdsp        - find the disk-per-LUN configuration
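Those three checks can be collected in one pass for posting here; the output file name is arbitrary, and amdsp may need your array ID as an argument:

```shell
# Collect the configuration data into one file (run as root on HP-UX).
OUT=/tmp/fc60_config.txt
ioscan -fnk         > $OUT    # FC connectivity shared with other devices
vgdisplay -v       >> $OUT    # how the LUNs are accessed / PV links
amdsp -a           >> $OUT    # FC60 disk-per-LUN layout (add your array ID if required)
```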

It works for me (tm)

Re: Disk I/O Bottleneck

Hi Bill,

I am attaching the ioscan & amdsp output. Please look at it, and if you find any discrepancy, please let me know.

How do I find the stripe size of the disk array?

Knowledge is the only valuable thing that cannot be bought.

Re: Disk I/O Bottleneck

It looks like the stripe size is 4 KB. You have 3 RAID 5 LUNs, each spanning 6 disks. You probably have a high contention rate on one or more of the LUNs. You can find out through Glance by looking for logical volumes on the same LUN with high I/O.

You should identify which devices carry the most I/O and which kind of I/O (read or write), and move them to separate, dedicated disks. You could perhaps reconfigure one of the LUNs into 3 RAID 1 LUNs and move the log I/O, which is normally write-intensive, onto one of them.

In short: keep logs, data, and indexes on different LUNs, moving write-intensive devices onto RAID 1 LUNs. Finally, look at the Informix configuration to make sure the memory settings are suitable for your database.
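On the Informix side, the memory and log-buffer settings live in the onconfig file. The parameters below are the usual ones to review on IDS 7.x-era engines; the values shown are placeholders to illustrate the format, not recommendations for this system:

```
# Illustrative onconfig entries; tune the values for your own workload.
BUFFERS      20000    # shared-memory buffer pool, in pages
SHMVIRTSIZE  32000    # initial virtual shared-memory segment, in KB
LOGBUFF      32       # logical-log buffer size, in KB
PHYSBUFF     32       # physical-log buffer size, in KB
```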

Hope this helps, Alessandro Bocchino
We work in the dark, we do what we can, we give what we have, our doubt is our passion, and our passion is our task - the rest, is the madness of art - Henry James