
Wrong bdf report

 
CHRIS_ANORUO
Honored Contributor

Wrong bdf report

I am running HP-UX 10.20 on two servers. I increased /var, /opt and /tmp using lvextend -L *** /dev/vg00/lvol* followed by extendfs -F vxfs (or hfs, as applicable) /dev/vg00/rlvol*.
After a reboot, SAM and vgdisplay -v on both machines show the correct LV sizes, but bdf on the server with hfs filesystems for /var, /opt and /tmp shows lower values for those filesystems.
Can somebody please shed some light on this issue?
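For reference, the extend sequence described above looks roughly like this (the volume name and size are invented examples; on 10.20, extendfs works on an unmounted filesystem via the raw device):

```shell
# Example only: grow lvol8 (/opt here) to 800 MB and extend the hfs filesystem into it
lvextend -L 800 /dev/vg00/lvol8     # grow the logical volume to 800 MB
umount /opt                          # extendfs needs the filesystem unmounted
extendfs -F hfs /dev/vg00/rlvol8     # grow the filesystem to fill the LV (raw device)
mount /opt
```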
When We Seek To Discover The Best In Others, We Somehow Bring Out The Best In Ourselves.
11 REPLIES
Gadi
Advisor

Re: Wrong bdf report

Hi,

This is normal on an hfs filesystem. The default "lost" (reserved) capacity is 10%.

Try man newfs_hfs and read about the -m option.

If you want to change this and reclaim some space, you have to back up the filesystems and then try, for example:

#newfs -F hfs -m 1 /dev/vgoX/rlvolXX

Before you run newfs on an existing filesystem, you can examine it by typing:

#tunefs -v /dev/vgoX/rlvolXX

Read the minfree percentage. If it is 10%, as I suspect, only the root user can use that reserved space.

I hope it's clear now.
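To put numbers on that 10% (sizes here are invented for illustration): on a 750 MB hfs filesystem, the default reserve hides about 75 MB from non-root users, which shows up as missing "avail" space in bdf:

```shell
# Illustrative only: how a 10% minfree reserve shrinks the space bdf shows as available
fs_kb=768000                             # a 750 MB filesystem, in KB
minfree_pct=10                           # the hfs default reserve
reserved_kb=$(( fs_kb * minfree_pct / 100 ))
avail_kb=$(( fs_kb - reserved_kb ))
echo "reserved=${reserved_kb}KB available=${avail_kb}KB"
```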





CHRIS_ANORUO
Honored Contributor

Re: Wrong bdf report

Gadi,

I don't recommend doing newfs on a mounted filesystem. The strange thing is that /var (hfs) shows 699 MB and /opt (hfs) 724 MB through bdf, while /var and /opt (vxfs) on the other machine are 741 MB; they are all configured the same size (SAM and vgdisplay report that). I might have to convert everything to vxfs when the server is free.
I still need to know why.
When We Seek To Discover The Best In Others, We Somehow Bring Out The Best In Ourselves.
Andy Monks
Honored Contributor

Re: Wrong bdf report

Chris,

Gadi is probably right with the difference being minfree.

If you do 'tunefs -v /dev/vg00/lvol?' you'll see what minfree is set to. You can then change it with 'tunefs -m 1 /dev/vg00/lvol?' to drop it from 10% to 1%.

It would be interesting to see the output of tunefs and fstyp -v for the filesystems concerned.
CHRIS_ANORUO
Honored Contributor

Re: Wrong bdf report

Yes Andy, I know all about the 10% minfree, but not about running newfs on a mounted filesystem.
When We Seek To Discover The Best In Others, We Somehow Bring Out The Best In Ourselves.
Gadi
Advisor

Re: Wrong bdf report

Hi Chris,

It was not my idea to run newfs on a mounted filesystem.

1) Back up all your files in /var and /opt.

2) Put a '#' in front of the /var, /opt and /tmp entries in /etc/fstab.

3) shutdown -yr 0.

4) The system will boot with /tmp, /var and /opt unmounted.

5) Run newfs first on the logical volume that belongs to the /tmp mount point. See what happens (with no risk of losing data).

*Note: Andy is right, you can do tunefs -m 1, but if the logical volume is not empty, you'll not get what you wanted.

Good luck.
Dave Wherry
Esteemed Contributor

Re: Wrong bdf report

Chris,
Slow down and read the reply again. Gadi said to back up the data and then do the newfs. The part he left out was restoring the data after the newfs:
backup
unmount
newfs
mount
restore
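A sketch of that cycle with hypothetical device names, using fbackup/frecover as one possible backup tool (any backup method works):

```shell
# Example only: recreate /opt's hfs filesystem with minfree lowered to 1%
fbackup -f /dev/rmt/0m -i /opt         # 1. backup /opt to tape
umount /opt                            # 2. unmount
newfs -F hfs -m 1 /dev/vg00/rlvol8     # 3. newfs on the raw device, 1% minfree
mount /dev/vg00/lvol8 /opt             # 4. mount
frecover -f /dev/rmt/0m -r             # 5. restore
```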
Andy Monks
Honored Contributor

Re: Wrong bdf report

Chris,

You don't have to newfs to change minfree. Just unmount it and run tunefs!
CHRIS_ANORUO
Honored Contributor

Re: Wrong bdf report

Thank you all for finding time to assist me. I have done all these steps in single-user mode, even fsck -f, without a change. When the system came up, it was still the same.
My point is: could it be that I need a patch to fix bdf? And why are /var and /opt different in size when SAM and vgdisplay show them to be equal?
When We Seek To Discover The Best In Others, We Somehow Bring Out The Best In Ourselves.
Andy Monks
Honored Contributor

Re: Wrong bdf report

Chris,

Email me the output from :-

lvdisplay, bdf, tunefs and fstyp for all the lvols/filesystems involved.

andy_monks@hp.com
Andy Monks
Honored Contributor

Re: Wrong bdf report

Chris emailed me the info I asked for. Here's the summary :-

Hmmm. Well /opt (lvol8) looks pretty normal.

However, from the superblock info /opt and /var have the same number of blocks. So, we know that you correctly extended the filesystem.

So, what it comes down to is inodes!

The hfs filesystem uses cylinder groups, and each 'cg' looks after a number of inodes. When the filesystem is created (OK, these were created by the OS install), you can override the default number of cgs and the number of inodes per cg. That is what has happened here.

For /opt :-

ncg = 138
ipg = 832

There are also cpg and bpg, but they aren't that important here.

For /var :-

ncg = 160
ipg = 1984

So, here's the maths!

Each inode is 128bytes, so we have :-

/opt = 128 * 138 * 832 = 14,696,448

/var = 128 * 160 * 1,984 = 40,632,320

That is about a 26 MB difference, which by no magical coincidence is exactly the difference reported by bdf.
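Andy's arithmetic (inode size times ncg times ipg, with the values read from the superblocks above) can be checked directly:

```shell
# Space consumed by the inode tables: 128-byte inodes, ncg cylinder groups, ipg inodes each
opt_bytes=$((128 * 138 * 832))         # /opt: 14,696,448 bytes
var_bytes=$((128 * 160 * 1984))        # /var: 40,632,320 bytes
diff_bytes=$((var_bytes - opt_bytes))  # 25,935,872 bytes, roughly 26 MB
echo "$opt_bytes $var_bytes $diff_bytes"
```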
CHRIS_ANORUO
Honored Contributor

Re: Wrong bdf report

Thank you very much Andy,

I have converted the filesystems from hfs to vxfs, and everything is working fine. I used the following steps:

1. Back up all the data from the filesystems involved.
2. Copy /etc/fstab to /etc/fstab.old, then edit /etc/fstab to replace hfs with vxfs for each /dev/vg00/lvoln entry.
3. Reboot the system to single-user mode (thereby unmounting the filesystems).
4. newfs -F vxfs /dev/vg00/rlvoln
5. fsck -F vxfs /dev/vg00/lvoln
6. mount /dev/vg00/lvoln
7. fstyp /dev/vg00/lvoln (to confirm that the filesystem is now vxfs)
8. Restore from backup.
9. Reboot the system.
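The fstab edit in step 2 amounts to swapping the filesystem-type field; as a sketch (the device and fields below are made-up examples, and editing the file by hand works just as well):

```shell
# Hypothetical /etc/fstab line; swap the hfs type field for vxfs
line="/dev/vg00/lvol8 /var hfs defaults 0 2"
echo "$line" | sed 's/ hfs / vxfs /'
```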

The filesystems are okay. I hope this will help somebody out there.

Best Regards!
When We Seek To Discover The Best In Others, We Somehow Bring Out The Best In Ourselves.