Operating System - HP-UX

Re: File system block sizes

 
BOB BARBA
Contributor

File system block sizes

I have two separate HP-UX 11 systems, each with a vxfs file system built on it. These file systems are meant to be identical; however, when I copy files from one system to the other, the destination file system fills up!

My suspicion is that the rogue file system was built using a different blocking factor from the original.

1. Is there a command that allows me to view a file system's attributes?
2. If it is a blocking factor problem can this be resolved or would it entail a rebuild?
3. Am I completely on the wrong track?

Many thanks in advance ...... BobB
8 REPLIES
Andreas Voss
Honored Contributor

Re: File system block sizes

Hi,

to your first question:

fstyp -v /dev/vgXX/lvolX

will display filesystem parameters.

Regards
Vincenzo Restuccia
Honored Contributor

Re: File system block sizes

Find the type of the file system on a disk, /dev/dsk/c1t6d0:

fstyp /dev/dsk/c1t6d0


Find the type of the file system on a logical volume, /dev/vg00/lvol6:

fstyp /dev/vg00/lvol6

Find the file system type for a particular device file and also information about its super block:

fstyp -v /dev/dsk/c1t6d0

Steven Sim Kok Leong
Honored Contributor

Re: File system block sizes

Hi,

You have to pay attention to block sizes, especially when your filesystem is populated with small files. This is prevalent when you are migrating from HFS to JFS, because HFS has a default block size of 8 KB while JFS has a default block size of only 1 KB.

You cannot change your filesystem's block size without a full backup and restore of the filesystem. I am afraid you will need to redo the newfs of your logical volume with the correct block size.
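For reference, the block size can only be set at filesystem creation time. A minimal sketch of the rebuild step, assuming the data has already been backed up; the volume name /dev/vg01/rlvol1 and the 8 KB block size are placeholders, not values from this thread:

```shell
# HP-UX only -- illustrative sketch. Substitute your own raw device
# and the block size you actually want.
newfs -F vxfs -o bsize=8192 /dev/vg01/rlvol1

# Afterwards, confirm the block size from the superblock:
fstyp -v /dev/vg01/lvol1
```

Restore the data once the new filesystem is mounted.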

Hope this helps. Regards.

Steven Sim Kok Leong
Brainbench MVP for Unix Admin
http://www.brainbench.com
Printaporn_1
Esteemed Contributor

Re: File system block sizes

Hi,

I don't think your case is a block size problem. If your LVM file systems have similar capacity, just copying files over should not make much difference in space usage.
Check with bdf or lvdisplay whether they really are similar.
enjoy any little thing in my life
Wieslaw Krajewski
Honored Contributor

Re: File system block sizes

Hi,
It does not look like a problem with the file system block size.
Both file systems are almost identical, as you mention, and probably use the same default block size.
Other possibilities are:
1. use getext command to see if some extents have been reserved.
2. Maybe some old files on the second filesystem exist, or maybe you need to do defragmentation, if you have On-line JFS.
As a last resort try the fsdb command, but remember its output and interface are a little cryptic.
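The fragmentation check and defragmentation mentioned above can be sketched as follows, assuming OnLineJFS is installed; the mount point /data is a placeholder:

```shell
# HP-UX OnLineJFS only -- illustrative sketch.
fsadm -F vxfs -E /data   # report extent fragmentation on the filesystem
fsadm -F vxfs -e /data   # reorganize (defragment) extents online
```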
Permanent training makes master
John Palmer
Honored Contributor

Re: File system block sizes

One possibility is that the 'rogue' filesystem was created with a specific (small) number of i-nodes.

These are 256 bytes apiece in a VxFS filesystem, and the default number is very generous (especially for a very large filesystem that will only contain a few large files). The default will account for about 3% of the whole filesystem (approx 320 MB for a 10 GB filesystem).

Run bdf -i on each system and compare the 'iused' and 'ifree' values.

Regards,
John
Mike Whitton
Occasional Advisor

Re: File system block sizes


Hi Bob, did you find a resolution to your problem? I am having the exact same issue.

thanks.

mike whitton
whitton@dwsd.org
Bill Hassell
Honored Contributor

Re: File system block sizes

This is a very common scenario with database systems, and it is caused by sparse files. Unix has the ability to write records to a new file at any record number. Consider this:

create file
write rec#1
write rec#1000000
close file

So the resultant file has only 2 records, separated by 999999 empty ones. If a program tries to read rec#2, it will receive a buffer of zeros; same with rec#999998 or any other undefined record.

So when you copy a sparse file to a new location, the filesystem code returns a stream of zeros for the missing records. But those zeros are written to the copied file for real, so the copy occupies a great deal more space. Application programs will not see any difference, since undefined records are still a stream of zeros, but the copy will always be larger.
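This is easy to demonstrate on any Unix system. A minimal sketch (file names are arbitrary) using dd to leave a hole and du to show the blocks actually allocated:

```shell
# Write 1 KB at offset 0, then 1 KB at roughly the 10 MB mark,
# leaving a hole in between; blocks in the hole are never allocated.
dd if=/dev/zero of=sparse.dat bs=1k count=1 2>/dev/null
dd if=/dev/zero of=sparse.dat bs=1k count=1 seek=10000 conv=notrunc 2>/dev/null

ls -l sparse.dat   # logical size: about 10 MB
du -k sparse.dat   # allocated space: only a few KB

# A naive copy reads the hole back as zeros and writes them out for real:
cat sparse.dat > copy.dat
du -k copy.dat     # now about 10 MB of real disk blocks
```

Some copy tools can preserve holes, but a plain byte-for-byte copy like the one above always materializes them.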


Bill Hassell, sysadmin