
2K Blocks filesystem

 
SOLVED
Larry Basford
Regular Advisor

2K Blocks filesystem

After doing some testing with different block sizes, our finding is that our Universe database runs best with a 2K block size.
My concern is the error message I get when I create one: HP recommends against it.
---
vxfs mkfs: It is advisable to have block size of 8192 or larger
vxfs mkfs: for file systems greater than 51GB
---
I will be creating 100MB filesystems on EMC striped metas. Should I be OK with this? And are there any limitations I may run into?


**** This is how it was created. ****
_________________________________________
mkfs -F vxfs -o ninode=unlimited,bsize=2048,version=4,inosize=256,logsize=256,largefiles /dev/vgtest7/lvol1 53501952
vxfs mkfs: It is advisable to have block size of 8192 or larger
vxfs mkfs: for file systems greater than 51GB
version 4 layout
53501952 sectors, 26750976 blocks of size 2048, log size 256 blocks
unlimited inodes, largefiles supported
26750976 data blocks, 26747208 free data blocks
817 allocation units of 32768 blocks, 32768 data blocks
last allocation unit has 12288 data blocks
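For reference, the warning fires because this filesystem sits just over the 51 GB threshold. A quick sanity check (a sketch, assuming mkfs reports sector counts in the usual 1 KB DEV_BSIZE units on HP-UX):

```python
# Sketch: check why "vxfs mkfs" warns about the 51 GB threshold.
# Assumption: mkfs sector counts are in 1 KB (DEV_BSIZE) units.

SECTOR_BYTES = 1024          # assumed sector size used by mkfs
sectors = 53501952           # from the mkfs command above
block_size = 2048            # the bsize=2048 option

size_bytes = sectors * SECTOR_BYTES
size_gib = size_bytes / 1024**3
blocks = size_bytes // block_size

print(f"{size_gib:.2f} GiB")   # ~51.02 GiB, just past the 51 GB warning line
print(blocks)                  # 26750976, matching the mkfs output above
```

So the filesystem is barely over the limit mkfs checks against, which is why the advisory message appears even though the size is nowhere near a hard cap.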
Disaster recovery? Right!
Bharat Katkar
Honored Contributor
Solution

Re: 2K Blocks filesystem

To the best of my knowledge, the block size of a filesystem affects performance (reading and writing files, i.e. blocks) and space usage of the filesystem. Since you said you have already tested your database and it works fine with 2K, there should not be any problem.

And this is not an error but a warning.
You need to know a lot to actually know how little you know
Bharat Katkar
Honored Contributor

Re: 2K Blocks filesystem

One more thing:
Since you are creating a 100MB filesystem, at the OS level there may not be much filesystem overhead compared to >50GB filesystems.
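One way to picture that overhead trade-off is internal fragmentation (a rough sketch, not a VxFS-specific calculation: it ignores inodes, the intent log, and extent allocation, and assumes each file wastes half a block on average):

```python
# Rough model of internal fragmentation: smaller blocks waste less
# slack space per file, at the cost of more allocation housekeeping.
# Simplified illustration only, not an exact VxFS figure.

def wasted_space(num_files, block_size):
    """Average slack space in bytes, assuming half a block wasted per file."""
    return num_files * block_size // 2

for bsize in (1024, 2048, 8192):
    mb = wasted_space(100_000, bsize) / 1024**2
    print(f"bsize={bsize}: roughly {mb:.0f} MB slack for 100k files")
```

On a small filesystem the absolute waste is tiny either way, which is why the block-size choice matters far less at 100MB than at 50GB+.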
You need to know a lot to actually know how little you know
Bill Hassell
Honored Contributor

Re: 2K Blocks filesystem

The block size for a filesystem has very little connection with the apps that use it. The reason is that the block size for the filesystem is used for housekeeping, that is, allocation of space, and does not affect what the application wants to do. In fact, for sequential disk reads or writes, the disk queueing code will 'hook' multiple requests together to form one long I/O which is much longer than the filesystem blocksize. To make this even less important, the disk array is also buffering the read/write requests, making the block size on the filesystem invisible.

Since your filesystems are so small (100 MB), the better performance with 2K blocks may be a unique artifact of the interaction between Universe, the VxFS filesystem, and the EMC.

BTW: My experience with Universe has shown that by far the greatest improvement in performance is found by increasing MFILE. Universe has hundreds to thousands of files and internally restricts the total number of files that are opened at the same time with the MFILE parameter. Increasing MFILE from 90 to 300 produced a 200% performance improvement in one environment. Naturally, you must increase the kernel parameter NFILE to handle the large increase. If you increase MFILE by 100, then increase NFILE in the kernel by 100*Universe-instances. It's not unusual for a high performance Universe environment to have NFILE=200000.
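That sizing rule can be written out explicitly (a sketch; the instance count and starting values below are illustrative, not recommendations):

```python
# Sketch of the NFILE sizing rule above: if MFILE grows by some delta,
# grow the kernel's NFILE by delta * (number of Universe instances).
# The numbers here are illustrative examples only.

def new_nfile(current_nfile, mfile_increase, universe_instances):
    """Return the kernel NFILE needed after raising MFILE."""
    return current_nfile + mfile_increase * universe_instances

# e.g. raising MFILE from 90 to 300 (a delta of 210) on a host
# running 4 Universe instances, starting from NFILE = 149984:
print(new_nfile(149984, 300 - 90, 4))   # 150824
```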


Bill Hassell, sysadmin
Larry Basford
Regular Advisor

Re: 2K Blocks filesystem

nfile 149984 - (320*(NPROC+16+MAXUSERS)/10+32+2*(NPTY+NSTRPTY+NSTRTEL))


We have maxfiles = 320

Thanks for the tip. I think we are OK here.

The 2K was a significant factor in performance of Universe for us.
Disaster recovery? Right!