
Informix Version 10 Performance Considerations

 
Greg Laws
Advisor

I am getting ready to install an Informix 10 instance on an HP-UX 11.11 box attached to an XP128 array. We have approximately 200 GB of space presented to the server from one RAID 5 RAID group (3D+1P) over redundant 2 Gbit Fibre Channel connections. That gives me fifteen 13.56 GB LUNs to use to build the raw LVOLs for the Informix data.

I was hoping to get some advice on best practices for laying out the LVOLs for best performance. Now that IDS 10 supports chunk sizes greater than 1 GB, I am drawn toward creating a smaller number of larger LVOLs to present to Informix. Is this a good idea, or would it be better to go with a larger number of smaller LVOLs? Also, what are people's feelings on striping the LVOLs on the OS side? It seems that if I'm careful to alternate the primary disk paths when I create the VG it may be beneficial, but I don't have any proof of that.
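Something like this is roughly what I have in mind, with made-up device files (the PV order in vgcreate sets the primary path, so I would alternate between the two FC paths and then add the second paths as alternate PV links):

    # (after pvcreate on each LUN and creating /dev/vg01/group with mknod)
    # create the VG, alternating the primary path between the two HBAs
    vgcreate /dev/vg01 /dev/dsk/c4t0d0 /dev/dsk/c6t0d1 /dev/dsk/c4t0d2 /dev/dsk/c6t0d3
    # add the second path to each LUN as an alternate PV link
    vgextend /dev/vg01 /dev/dsk/c6t0d0 /dev/dsk/c4t0d1 /dev/dsk/c6t0d2 /dev/dsk/c4t0d3
    # one candidate LVOL layout: 4 GB, striped over 4 LUNs, 64 KB stripe size
    lvcreate -i 4 -I 64 -L 4096 -n lvdata01 /dev/vg01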

Thanks for your input!
-Greg
Jean-Luc Oudart
Honored Contributor

Re: Informix Version 10 Performance Considerations

Hi Greg

I'm not an Informix specialist, but for any kind of database I would recommend RAID 1+0, which the XP128 supports.

Regards
Jean-Luc
fiat lux
Steve Lewis
Honored Contributor

Re: Informix Version 10 Performance Considerations

On the question of the new large (64-bit) chunk sizes, these are the results of our testing:

Making the chunk size 16-20 GB reduces the amount of initial admin and set-up; however, if you do a lot of DB updates (e.g. dbimport, batch runs) the page cleaners cannot keep up:

- Multiple page cleaners per chunk produce hot-disk syndrome with large I/O waits.

- A single page cleaner per chunk takes ages to clean, and checkpoints take forever. Page cleaners aren't very efficient even though they are multi-threaded: you can still end up with head-flapping even on striped LVOLs, and with RAID 5 you will get even worse performance. (A tuning sketch follows this list.)
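For reference, the knobs involved live in the ONCONFIG file. A rough sketch with illustrative values only, not recommendations (and note that in IDS 10 the LRU count and dirty thresholds have, if I remember the syntax right, moved from LRUS/LRU_MAX_DIRTY/LRU_MIN_DIRTY into the BUFFERPOOL parameter):

    # ONCONFIG excerpt - values are illustrative only
    CLEANERS   8    # page-cleaner threads; often sized around one per active LUN
    # IDS 10 style - LRU queues and dirty thresholds per buffer pool
    BUFFERPOOL default,buffers=50000,lrus=8,lru_min_dirty=50,lru_max_dirty=60

Whatever the values, more chunks on more LVOLs gives the cleaners more independent queues to write down.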

For example, a single 8 GB data load into a single dbspace that has just one massive chunk sends all the I/O down to a single LVOL, which, when mapped to a single disk device (or LUN), causes high disk queues.

Upping the kernel parameter scsi_max_qdepth merely pushes the bottleneck from the OS into the array. Fundamentally the disk array LUN cannot keep up with the work, so you must also LVM-stripe your chunks over multiple LUNs on multiple controllers.
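To see where the bottleneck sits, this is the sort of thing I'd run (a sketch; 16 is just a placeholder value, and as far as I recall scsi_max_qdepth is dynamically tunable on 11.11):

    # current SCSI queue depth
    kmtune -q scsi_max_qdepth
    # raise it on the fly (placeholder value)
    kmtune -s scsi_max_qdepth=16
    # then watch per-device queue lengths and service times during a load
    sar -d 5 12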

My advice is not to make your large chunks any bigger than 4 GB.
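In practice that means building each dbspace from several 4 GB chunks spread over different LVOLs, along these lines (device names made up; onspaces sizes are in KB, so 4 GB = 4194304):

    # create the dbspace with a 4 GB initial chunk
    onspaces -c -d datadbs1 -p /dev/vg01/rlvdata01 -o 0 -s 4194304
    # add further 4 GB chunks on other LVOLs
    onspaces -a datadbs1 -p /dev/vg01/rlvdata02 -o 0 -s 4194304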

I don't know which version you are coming from, but v10 includes a variety of other changes, such as dropping support for DEFAULT_ATTACH, that will require testing, particularly stress/load tests and tests of your DBA tasks. I recommend you wait for at least FC3 before diving in.