
Large data file

 
SOLVED
Larry Basford
Regular Advisor

Large data file

I have a database with a UNIVERSE data file over 2 GB. The system has 3 processors and 3 GB of memory. This file is read over and over again all day. Would I be able to improve the performance of the system with more memory? My reporting server, with 4 processors and 4 GB of memory, also chokes on the file.
Disaster recovery? Right!
4 REPLIES
Geoff Wild
Honored Contributor
Solution

Re: Large data file

Check the output of vmstat

Say, vmstat 5 5

Are you getting a lot of pi's/po's (page-ins/page-outs)?

Is the sr (scan rate) high? (above 0)

If it is, then you are swapping/paging - and you probably need more memory... CPU usage will also go up if the scan rate is high...
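For reference, a minimal run looks like the sketch below (HP-UX vmstat assumed; pi, po and sr sit in the "page" group of columns, and the first sample is an average since boot, so judge from the later samples):

# 5-second samples, 5 iterations; watch pi, po and sr in the "page" columns
vmstat 5 5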

Also, what is output of swapinfo?
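A sketch of the usual invocation (flags assumed present on your HP-UX release):

# -t adds a totals line, -m reports sizes in MB
swapinfo -tm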

What is dbc_max_pct set to? If it's 50%, then you may want to tune your kernel... set it so that it works out to between 400 and 800 MB (try 20%) - that will give you more memory for the application...
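A rough sketch of checking and staging that change (assuming HP-UX 11.x with kmtune; on 11i v2 and later kctune does the same job, and a static tunable like this generally needs a kernel rebuild and reboot to take effect):

# show the current buffer cache ceiling
kmtune -q dbc_max_pct
# stage a lower ceiling, e.g. 20% of RAM
kmtune -s dbc_max_pct=20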

How the Buffer Cache Grows:

As the kernel reads in files from the file system, it tries to store the data in the buffer cache. If memory is available and the buffer cache has not reached its maximum size, the kernel grows the buffer cache to make room for the new data. It will keep growing the cache until it hits that maximum (dbc_max_pct, 50% of memory by default).

For performance reasons, you want the buffer cache hit ratio on reads to be higher than 90%.

Run sar -b and watch %rcache.
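For example (same interval/count style as vmstat):

# 5-second samples, 5 iterations; %rcache is the read hit ratio, %wcache the write hit ratio
sar -b 5 5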

Maybe more memory will help - maybe not...

Rgds...Geoff
Proverbs 3:5,6 Trust in the Lord with all your heart and lean not on your own understanding; in all your ways acknowledge him, and he will make all your paths straight.
Julio Yamawaki
Esteemed Contributor

Re: Large data file

Hi,

When you have large tables in the database, you need to look at the database configuration as well as the UNIX configuration:
1. Size of tables
2. Number of accesses per table
3. Number of disk devices
4. Possibility of spreading access across more than one disk device
5. Possibility of table partitioning
6. Verify whether you have contention in database memory
7. Verify whether you have CPU contention

After all of this is verified, you can decide whether you really have to upgrade your server - the sar sketch below gives a quick check of disk and CPU contention.
In many cases, modifying the database configuration is enough.
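For the disk and CPU checks, a quick first pass on the UNIX side might be (sar assumed available; a device that stays near 100% busy or carries a long queue in -d, or sustained high %usr/%sys with little idle in -u, points at contention):

# per-device utilization, queue length and service times
sar -d 5 5
# CPU time split into user/system/wio/idle
sar -u 5 5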
Bill Hassell
Honored Contributor

Re: Large data file

UNIVERSE uses hundreds of files, so many that it must close some so it can open others. You will find that opens/closes are massive, especially if your UNIVERSE config file specifies a maximum number of simultaneously open files of less than 100. Change this value to 200 to 300 and you'll see a huge increase in performance. HOWEVER, the kernel parameter nfile must be dramatically increased too. If you are running 100 copies of the UNIVERSE application, then nfile must be increased accordingly: if you add 100 to 200 files per process, you'll need to add 10,000 to 20,000 to nfile.
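A rough way to confirm the file table really is the bottleneck and to check the limits (a sketch; on a typical install the UniVerse open-file setting lives in the uvconfig file under the UV account and uvregen has to be run after changing it - treat those names as assumptions and check your own install):

# file-sz column shows current vs. maximum kernel file table entries (plus overflows)
sar -v 5 5
# current kernel limit; kmtune -s nfile=<new value> stages a larger one
kmtune -q nfile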


Bill Hassell, sysadmin
Larry Basford
Regular Advisor

Re: Large data file

I see occasional po values of 40 - 50,
but pi is never more than 1.

The sr is 0

Universe open MFILE = 300
and MAX_OPEN_FILES = 320
This seems to be correct.

nfile 149984 (by formula)
dbc_max_pct 33

For the 4 GB system, dbc_max_pct is 25.



The data is on an EMC 3830 over 2-path SCSI, on a striped filesystem to get the most throughput.
No PowerPath or OnlineJFS.

Thanks Bill, it's nice to know someone else knows what Universe is.

It seems we have the best we can get. We are going to an EMC 8530 with Fibre Channel and PowerPath.
Hopefully we can get some of the old data purged out of the system to speed things up.
Disaster recovery? Right!