06-29-2000 01:59 AM
Wrong bdf report
After a reboot, SAM and vgdisplay -v on both machines show the correct LV sizes, but bdf on the server that uses hfs filesystems for /var, /opt and /tmp reports lower values for those LVs.
Please, can somebody shed some light on this issue?
06-29-2000 03:05 AM
Re: Wrong bdf report
This is normal on an hfs file system. By default, 10% of the capacity is "lost" (reserved).
Try man newfs_hfs and read about the -m option.
If you want to change this situation and save some space, you have to back up the file systems and then try, for example:
#newfs -F hfs -m 1 /dev/vgoX/rlvolXX
Before you run newfs on an existing file system you can examine it by typing:
#tunefs -v /dev/vgoX/rlvolXX
and reading the minfree percentage. If it is 10%, as I suspect, only the root user can use that reserved space.
I hope it's clear now.
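A rough sketch of that arithmetic in POSIX shell. The 741 MB figure is the LV size mentioned later in this thread; note that minfree limits what non-root users may fill rather than shrinking the filesystem itself:

```shell
#!/bin/sh
# Illustrative only: what a 10% minfree reserve costs on a 741 MB LV.
LV_MB=741                                    # LV size quoted in this thread
MINFREE_PCT=10                               # the hfs default minfree
RESERVED_MB=$((LV_MB * MINFREE_PCT / 100))   # space only root may use
USABLE_MB=$((LV_MB - RESERVED_MB))           # space left for everyone else
echo "reserved: ${RESERVED_MB} MB, usable by non-root users: ${USABLE_MB} MB"
```

With the default 10% minfree, roughly 74 MB of a 741 MB volume is held back for root; tunefs -m 1 would shrink that reserve to about 7 MB.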
06-29-2000 03:38 AM
Re: Wrong bdf report
I don't recommend doing newfs on a mounted filesystem. The strange thing is that /var (hfs) shows 699 MB and /opt (hfs) 724 MB through bdf, while /var and /opt (vxfs) on the other machine are both 741 MB; they are all configured the same size (SAM and vgdisplay report that). I might have to convert everything to vxfs when the server is free.
I still need to know why.
06-29-2000 03:57 AM
Re: Wrong bdf report
Gadi is probably right that the difference is minfree.
If you do 'tunefs -v /dev/vg00/lvol?' you'll see what minfree is set to. You can then change it using 'tunefs -m 1 /dev/vg00/lvol?' to lower it from 10% to 1%.
It would be interesting to see the output from tunefs and fstyp -v for the filesystems concerned.
06-29-2000 04:56 AM
Re: Wrong bdf report
06-29-2000 06:00 AM
Re: Wrong bdf report
It was not my idea to run newfs on a mounted file system.
1) Back up all your files in /var and /opt.
2) Put '#' in front of the entries for /var, /opt and /tmp in /etc/fstab.
3) shutdown -yr 0.
4) The system will boot with /tmp, /var and /opt unmounted.
5) Try the newfs on the logical volume that belongs to the mount point of /tmp first. See what happens (with no risk of losing data).
*Note: Andy was right, you can do tunefs -m 1, but if the logical volume is not empty you'll not get what you wanted.
Good luck.
06-29-2000 06:02 AM
Re: Wrong bdf report
Slow down and read the reply again. Gadi said to back up the data and then do the newfs. The part he left out was restoring the data after the newfs:
backup
unmount
newfs
mount
restore
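That cycle can be sketched as a dry-run script. The device, mount point and backup media below are illustrative examples, not taken from this thread; clear DRY to actually execute the commands on an HP-UX box:

```shell
#!/bin/sh
# Dry-run sketch of the backup/unmount/newfs/mount/restore cycle.
# All device and path names are hypothetical; set DRY= to really run it.
DRY=echo

$DRY fbackup -f /dev/rmt/0m -i /opt         # 1. back up the filesystem
$DRY umount /opt                            # 2. unmount it
$DRY newfs -F hfs -m 1 /dev/vg00/rlvol5     # 3. rebuild with minfree = 1%
$DRY mount /dev/vg00/lvol5 /opt             # 4. remount
$DRY frecover -x -f /dev/rmt/0m             # 5. restore the backup
```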
06-29-2000 06:05 AM
Re: Wrong bdf report
You don't have to newfs to change minfree. Just unmount the filesystem and run tunefs!
06-29-2000 06:50 AM
Re: Wrong bdf report
My point is, could it be that I need a patch to update bdf? And why are /var and /opt different in size when SAM and vgdisplay show them to be equal?
06-29-2000 06:58 AM
Re: Wrong bdf report
Email me the output from lvdisplay, bdf, tunefs and fstyp for all the lvols/filesystems involved: andy_monks@hp.com
06-29-2000 11:38 PM
Re: Wrong bdf report
Hmmm. Well, /opt (lvol8) looks pretty normal.
However, from the superblock info /opt and /var have the same number of blocks, so we know that you correctly extended the filesystem.
So, what it comes down to is inodes!
The hfs filesystem uses cylinder groups, and each 'cg' looks after a number of inodes. When the filesystem is created (OK, these were created by the O/S), you can override the default number of cgs and the number of inodes per cg. That is what has happened here.
For /opt:
ncg = 138
ipg = 832
(There are also cpg and bpg, but they aren't that important here.)
For /var:
ncg = 160
ipg = 1984
So, here's the maths. Each inode is 128 bytes, so we have:
/opt = 128 * 138 * 832 = 14,696,448
/var = 128 * 160 * 1,984 = 40,632,320
That is about a 26 MB difference, which by no coincidence is exactly the difference bdf reports.
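The same inode-table arithmetic, checked in POSIX shell using the ncg/ipg figures quoted above:

```shell
#!/bin/sh
# Inode-table size = inode size (128 bytes) * cylinder groups * inodes per group.
OPT_BYTES=$((128 * 138 * 832))    # /opt: ncg=138, ipg=832
VAR_BYTES=$((128 * 160 * 1984))   # /var: ncg=160, ipg=1984
DIFF_MB=$(((VAR_BYTES - OPT_BYTES + 500000) / 1000000))   # rounded to whole MB
echo "/opt inode tables: ${OPT_BYTES} bytes"
echo "/var inode tables: ${VAR_BYTES} bytes"
echo "difference: ~${DIFF_MB} MB"
```

The roughly 26 MB of extra inode metadata on /var accounts for the gap between the two bdf totals even though the LVs hold the same number of blocks.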
06-30-2000 03:26 AM
Re: Wrong bdf report
I have converted the filesystems from hfs to vxfs and everything is working fine. I used the following steps:
1. Back up all the data from the filesystems involved.
2. Copy /etc/fstab to /etc/fstab.old, then edit /etc/fstab to replace hfs with vxfs in each /dev/vg00/lvoln entry.
3. Reboot the system to single-user mode (thereby unmounting the filesystems).
4. newfs -F vxfs /dev/vg00/rlvoln
5. fsck -F vxfs /dev/vg00/lvoln
6. mount /dev/vg00/lvoln
7. fstyp /dev/vg00/lvoln (to confirm that the filesystem is now vxfs)
8. Restore from backup.
9. Reboot the system.
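Those steps can be sketched as a dry-run script. The lvol number and backup device are hypothetical stand-ins for the thread's /dev/vg00/lvoln; clear DRY to execute the commands for real:

```shell
#!/bin/sh
# Dry-run sketch of the hfs -> vxfs conversion above.
# Device names are illustrative; set DRY= to actually run the commands.
DRY=echo

$DRY fbackup -f /dev/rmt/0m -i /var         # 1. back up the data
$DRY cp /etc/fstab /etc/fstab.old           # 2. save fstab (then edit hfs -> vxfs)
$DRY shutdown -r 0                          # 3. reboot to single-user mode
$DRY newfs -F vxfs /dev/vg00/rlvol8         # 4. create the vxfs filesystem
$DRY fsck -F vxfs /dev/vg00/lvol8           # 5. check it
$DRY mount /dev/vg00/lvol8 /var             # 6. mount it
$DRY fstyp /dev/vg00/lvol8                  # 7. confirm it is now vxfs
$DRY frecover -x -f /dev/rmt/0m             # 8. restore from backup
```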
The filesystems are okay. I hope this will help somebody out there.
Best Regards!