- Increasing LVM performance.
08-25-2004 02:41 AM
I have an HP N4000-55 attached to an EMC CLARiiON FC4700. I am experiencing progressively slower I/O performance on a filesystem containing 130,000+ files. We have tried defragmenting the filesystem with little improvement in performance. We have also tried removing the filesystem, re-creating it, and restoring the data. That did help I/O performance, but only for a few weeks before we were crawling again. I ran the following defragment commands several times to see if they made a difference:
fsadm -F vxfs -d -D -e -E /var/opt/oneworld/PrintQueue
fsadm -F vxfs -a 1 -d /var/opt/oneworld/PrintQueue
fsadm -F vxfs -l 2048 -e /var/opt/oneworld/PrintQueue
I ran these at least 10-20 times over a 2-hour period, but it didn't seem to do much. The filesystem in question is configured as PVG-strict/distributed.
Here are some more details.
HP N4000-55
Processors: 8
Clock Frequency: 550 MHz
Kernel Width Support: 64
Physical Memory: 16398.7 MB
OS Identification: B.11.11 U
uteudc11[/root]# swapinfo -tam
                 Mb      Mb      Mb   PCT  START/      Mb
TYPE          AVAIL    USED    FREE  USED   LIMIT RESERVE  PRI  NAME
dev            1024       0    1024    0%       0       -    1  /dev/vg00/lvol2
dev            1024       0    1024    0%       0       -    1  /dev/vg00/lvol3
dev            1024       0    1024    0%       0       -    1  /dev/vg00/lvol4
dev            1024       0    1024    0%       0       -    1  /dev/vg00/lvol5
dev             944       0     944    0%       0       -    1  /dev/vg00/lvol20
dev            1024       0    1024    0%       0       -    1  /dev/vg00/lvol21
dev            1024       0    1024    0%       0       -    1  /dev/vg00/lvol22
dev            1024       0    1024    0%       0       -    1  /dev/vg00/lvol23
dev            4096       0    4096    0%       0       -    1  /dev/vg00/lvol24
dev            4096       0    4096    0%       0       -    1  /dev/vg00/lvol25
reserve           -    8606   -8606
memory        12600    3218    9382   26%
total         28904   11824   17080   41%       -       0    -
Filesystem in question:
uteudc11[/opt/jde/oneworld/app/PrintQueue]# bdf .
Filesystem kbytes used avail %used Mounted on
/dev/vgmcjde/lvol4 32768000 23586956 8607247 73% /opt/jde/oneworld/app/PrintQueue
lvdisplay -v of /dev/vgmcjde/lvol4:
--- Logical volumes ---
LV Name /dev/vgmcjde/lvol4
VG Name /dev/vgmcjde
LV Permission read/write
LV Status available/syncd
Mirror copies 0
Consistency Recovery MWC
Schedule parallel
LV Size (Mbytes) 32000
Current LE 4000
Allocated PE 4000
Stripes 0
Stripe Size (Kbytes) 0
Bad block NONE
Allocation PVG-strict/distributed
IO Timeout (Seconds) 180
--- Distribution of logical volume ---
PV Name LE on PV PE on PV
/dev/dsk/c12t0d6 1863 1863
/dev/dsk/c12t0d7 1867 1867
/dev/dsk/c7t1d4 68 68
/dev/dsk/c12t1d5 68 68
/dev/dsk/c7t1d6 67 67
/dev/dsk/c12t1d7 67 67
--- Logical extents ---
LE PV1 PE1 Status 1
00000 /dev/dsk/c12t0d6 02201 current
00001 /dev/dsk/c12t0d7 02199 current
00002 /dev/dsk/c12t0d6 02202 current
00003 /dev/dsk/c12t0d7 02200 current
00004 /dev/dsk/c12t0d6 02203 current
00005 /dev/dsk/c12t0d7 02201 current
00006 /dev/dsk/c12t0d6 02204 current
00007 /dev/dsk/c12t0d7 02202 current
00008 /dev/dsk/c12t0d6 02205 current
00009 /dev/dsk/c12t0d7 02203 current
00010 /dev/dsk/c12t0d6 02206 current
etc, etc...
Volume group info:
--- Volume groups ---
VG Name /dev/vgmcjde
VG Write Access read/write
VG Status available, exclusive
Max LV 255
Cur LV 5
Open LV 5
Max PV 96
Cur PV 6
Act PV 6
Max PE per PV 10240
VGDA 12
PE Size (Mbytes) 8
Total PE 11444
Alloc PE 8650
Free PE 2794
Total PVG 1
Total Spare PVs 0
Total Spare PVs in use 0
The bloated filesystem:
uteudc11[/opt/jde/oneworld/app/PrintQueue]# ls -l |wc
136891 1232012 11898612
uteudc11[/opt/jde/oneworld/app/PrintQueue]#
Any suggestions or ideas would be appreciated.
Solved!
08-25-2004 02:47 AM
Re: Increasing LVM performance.
          Directory Fragmentation Report
        Dirs      Total    Immed   Immeds   Dirs to   Blocks to
      Searched    Blocks   Dirs    to Add   Reduce    Reduce
total       79      5087      39        0         5       3103
Were there any blocks to reduce?
If yes, then keep on defragging until down to (or close to) zero...
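In script form, that advice might look like the sketch below. The report parsing is an assumption based on the fsadm output shown above, and the fsadm invocation only makes sense on HP-UX, so it is guarded:

```shell
# Parse "Blocks to Reduce" (last column of the "total" line) from the
# fsadm -F vxfs -D directory fragmentation report.
blocks_to_reduce() {
  awk '/^ *total/ {print $NF}'
}

# Hypothetical defrag loop: repeat until (close to) zero blocks to reduce.
# Guarded so it is a no-op where the VxFS fsadm is not available.
if command -v fsadm >/dev/null 2>&1; then
  FS=/var/opt/oneworld/PrintQueue
  while left=$(fsadm -F vxfs -D "$FS" 2>/dev/null | blocks_to_reduce); \
        [ "${left:-0}" -gt 0 ] 2>/dev/null; do
    fsadm -F vxfs -d -D "$FS"
  done
fi
```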
Rgds...Geoff
08-25-2004 02:47 AM
Re: Increasing LVM performance.
find /opt/jde/oneworld/app/PrintQueue -type f -atime +7 -exec mv {} /archivearea \;
This would move files not accessed in the last 7 days to /archivearea. A cron job doing this would let you keep the files (if they are indeed needed) while maintaining performance on the main filesystem.
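A minimal sketch of such a sweep, wrapped in a function so cron can call it from a small script. The paths and the 7-day window come from this thread; the function name is an assumption. Note that find's -exec requires a terminating `\;`:

```shell
# archive_old_prints SRC DEST DAYS
# Move regular files in SRC not accessed for more than DAYS days into DEST.
archive_old_prints() {
  src=$1 dest=$2 days=${3:-7}
  mkdir -p "$dest"
  # -type f: don't move directories; \; terminates the -exec clause
  find "$src" -type f -atime +"$days" -exec mv {} "$dest"/ \;
}

# Example cron entry, assuming the function lives in a script:
# 0 2 * * * /usr/local/bin/archive_prints.sh
```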
08-25-2004 03:09 AM
Re: Increasing LVM performance.
Do you use alternate paths? If so, set them up properly. This won't help much, but do it.
From the information you have given, it seems these files are print files. If so, are they all required all the time? If not, move the ones that are not required.
Another thing would be setting up a striped LV. You would have to remove the LV and re-create it striped across the disks in that volume group.
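A dry-run sketch of that rebuild: it only prints the HP-UX LVM commands it would run, since lvremove/lvcreate are destructive and the data would need to be backed up and restored. The stripe count (6, one per PV) and 64 KB stripe size are assumptions, not recommendations from this thread:

```shell
# Print (not execute) the commands to rebuild an LV striped across its PVs.
# Usage: striped_lv_plan VG LV STRIPES STRIPE_KB SIZE_MB
striped_lv_plan() {
  vg=$1 lv=$2 stripes=$3 stripe_kb=$4 size_mb=$5
  echo "lvremove /dev/$vg/$lv"
  echo "lvcreate -i $stripes -I $stripe_kb -L $size_mb -n $lv /dev/$vg"
  echo "newfs -F vxfs /dev/$vg/r$lv"
}

# Plan for the thread's volume: 6 stripes, 64 KB stripe size, 32000 MB
striped_lv_plan vgmcjde lvol4 6 64 32000
```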
Anil
08-25-2004 03:11 AM
Re: Increasing LVM performance.
So let's address your printqueue, since you say you removed & replaced it and saw some improvement before.
/var is a very busy mountpoint: it is being written to all the time, and it is a critical filesystem (fill up /var and you stop). Remember that /var/opt/* has logs writing to it, and /var/adm/syslog means more logs writing. Are you running MWA? Even more writes, to /var/opt/perf/datafiles. See my point?
If you want to improve performance for known 'busy' directories, set up separate disks and create an lvol just for each such mountpoint, with disks dedicated to it. Like:
/var
/var/opt/spool
/var/opt/oneworld
/var/opt/perf
...and maybe you can think of others that could get their own disks, so they won't be struggling with each other for I/O attention.
Just a thought, HTH
Rita
08-25-2004 03:22 AM
Re: Increasing LVM performance.
As mentioned already, 130,000+ files is quite a lot for the same directory.
Judging by the filesystem name under /var, these could be temporary files.
You give info for :
/opt/jde/oneworld/app/PrintQueue
and you ran the fsadm commands on:
/var/opt/oneworld/PrintQueue
Are they the same fs or did I miss something ?
Regards,
Jean-Luc
08-25-2004 03:25 AM
Solution
You can check these threads and the attached doc:
http://forums1.itrc.hp.com/service/forums/questionanswer.do?threadId=99401
http://www.hpworld.com/pubcontent/enterprise/may00/08sysadx.html
Regards,
Jean-Luc
08-25-2004 04:28 AM
Re: Increasing LVM performance.
This filesystem resides on its own EMC Fibre Channel disks. We are set up for dual paths.
/dev/vgmcjde/lvol4 32768000 23638232 8559220 73% /opt/jde/oneworld/app/PrintQueue
I will check with the application folks to see whether moving older files to an alternate filesystem via a cron job would break the application. To my understanding, these are print jobs that may (or may not) need to be recalled at any given time.
08-25-2004 07:16 AM
Re: Increasing LVM performance.
As for performance, 130,000 files in a single directory is GUARANTEED to cause major performance issues whenever the directory is searched. That does NOT mean opening and reading a specific file, which will run at full speed. It means creating a new file, or listing files (like ls, especially with pattern matching such as ls a* to find all files that start with "a"). You will probably see massive numbers in sar -a 5 20 when performance is bad. This is inevitable when asking the system to search the directory structure. The only fix is an application rewrite: either keep a local index of all filenames in the program so it doesn't kill the system searching for them, or eliminate the 130,000 files and use a real database with just a few files.
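One common workaround short of a full rewrite (not suggested in this thread, just a sketch under that assumption) is to fan the files out into one-character buckets so no single directory holds all 130,000 entries:

```shell
# bucket_for NAME -> lower-cased first character of NAME, used as a subdir
bucket_for() {
  printf '%s' "$1" | cut -c1 | tr '[:upper:]' '[:lower:]'
}

# fan_out DIR: move each regular file in DIR into DIR/<bucket>/
fan_out() {
  dir=$1
  for f in "$dir"/*; do
    [ -f "$f" ] || continue           # skip the bucket subdirectories
    b=$dir/$(bucket_for "$(basename "$f")")
    mkdir -p "$b"
    mv "$f" "$b/"
  done
}
```

The application would of course have to compute the same bucket when it writes or recalls a job, which is why this amounts to an application change, as noted above.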
Bill Hassell, sysadmin
08-30-2004 01:09 AM
Re: Increasing LVM performance.
man vxtunefs
If you are using JFS 3.5, there are a couple more tunables relating to very large inode counts (file entries).
08-30-2004 07:27 AM
Re: Increasing LVM performance.
uteudc11[/etc/vx]# vxtunefs /opt/jde/oneworld/app/PrintQueue
Filesystem i/o parameters for /opt/jde/oneworld/app/PrintQueue
read_pref_io = 65536
read_nstream = 1
read_unit_io = 65536
write_pref_io = 65536
write_nstream = 1
write_unit_io = 65536
pref_strength = 10
buf_breakup_size = 131072
discovered_direct_iosz = 262144
max_direct_iosz = 1048576
default_indir_size = 8192
qio_cache_enable = 0
max_diskq = 1048576
initial_extent_size = 8
max_seqio_extent_size = 2048
max_buf_data_size = 8192
08-30-2004 07:45 AM
Re: Increasing LVM performance.
My mistake... not vxtunefs (although there will probably be some parameters in there you could tune); look at your kernel parameters relating to VxFS specifically:
vx_fancyra_enable
vx_ncsize (default is just 1024...!)
vx_ninode
vxfs_max_ra_kbytes
ncsize
Watch your memory ...
HTH.
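For reference, one way to check those parameters is with kmtune, the HP-UX 11.x kernel tuner. The sketch below just prints the queries to run rather than executing them, since kmtune exists only on HP-UX:

```shell
# Print the kmtune queries for the VxFS-related kernel parameters listed above.
vxfs_tunable_queries() {
  for p in vx_fancyra_enable vx_ncsize vx_ninode vxfs_max_ra_kbytes ncsize; do
    echo "kmtune -q $p"
  done
}

vxfs_tunable_queries
```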