Operating System - HP-UX

Re: hpux shows bdf wrongly

 
Tuan Nguyen_2
Frequent Advisor

hpux shows bdf wrongly

HP-UX shows the size incorrectly. Why? (Itanium, 11.31 with the 0909 update)


I did an lvextend of 1 GB to make sure the FS comes out at the correct size.
lvextend -L 601000m /dev/vgdoc09/dm_data05 (OK)
fsadm -F vxfs -b 601024m /data/dm_data05 (OK)

NB: There are fewer than 1000 files on FS dm_data05, running with a 1 KB block size, FS version 1 (lvmadm -l).
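(For what it's worth, a hedged way to double-check the VxFS block size and layout version is fstyp -v on the volume; the raw device name below is an assumption, and the exact output fields vary by OnlineJFS version.)

fstyp -v /dev/vgdoc09/rdm_data05     (the output includes the layout version and block-size fields)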



I then check:
[test:tng]/data/dm_data05 # bdf .
Filesystem kbytes used avail %used Mounted on
/dev/vgdoc09/dm_data05
615448576 24165277 554328103 4% /data/dm_data05 (one big 600 GB LUN)

Why only 554328103 available? I expect almost 600 GB.


in Gbytes:
[test:tng]/data/dm_data05 # bdfgigs /data/dm_data05
File-System Gbytes Used Avail %Used Mounted on
/dev/vgdoc09/dm_data05 587 23 529 4% /data/dm_data05

Why only 587 GB?


With the df command:
[test:tng]/data/dm_data05 # df -k .
/data/dm_data05 (/dev/vgdoc09/dm_data05) : 578493594 total allocated Kb
554324814 free allocated Kb
24168780 used allocated Kb
5 % allocation used

The LUN is 600 GB, so why does df show only 578493594 KB total? That is about 22 GB short!


-- Reference: another FS on the same server, which looks OK. Block size is 1 KB with default mount options, FS version 1.

[test:tng]/data/dm_data05 # bdf /data/dm_data04
Filesystem kbytes used avail %used Mounted on
/dev/vgdoc07/dm_data04
1048281088 837389112 197711245 81% /data/dm_data04 (5 LUNs of 200 GB each)

[test:tng]/data # bdfgigs /data/dm_data04
File-System Gbytes Used Avail %Used Mounted on
/dev/vgdoc07/dm_data04 1000 809 179 82% /data/dm_data04



Why the different sizes? Note that bdf shows the size correctly for another FS on the same server.

What did I do wrong?


Regards
Tuan


[test:tng]/data # vgdisplay -v vgdoc09
--- Volume groups ---
VG Name /dev/vgdoc09
VG Write Access read/write
VG Status available
Max LV 255
Cur LV 1
Open LV 1
Max PV 128
Cur PV 2
Act PV 2
Max PE per PV 30000
VGDA 4
PE Size (Mbytes) 32
Total PE 38398
Alloc PE 37564
Free PE 834
Total PVG 2
Total Spare PVs 0
Total Spare PVs in use 0
VG Version 1.0
VG Max Size 120000g
VG Max Extents 3840000

--- Logical volumes ---
LV Name /dev/vgdoc09/dm_data05
LV Status available/syncd
LV Size (Mbytes) 601024
Current LE 18782
Allocated PE 37564
Used PV 2


--- Physical volumes ---
PV Name /dev/disk/disk697
PV Status available
Total PE 19199
Free PE 417
Autoswitch On
Proactive Polling On

PV Name /dev/disk/disk703
PV Status available
Total PE 19199
Free PE 417
Autoswitch On
Proactive Polling On


--- Physical volume groups ---
PVG Name CL-IX-doc09
PV Name /dev/disk/disk697

PVG Name CL-9M-doc09
PV Name /dev/disk/disk703


[test:tng]/data #
9 REPLIES
g3jza
Esteemed Contributor

Re: hpux shows bdf wrongly

Hi.


You extended the LV to a final size of 586.91 GB, and then you also extended the FS to the same size.
The bdf output shows these columns, from the left: kbytes (total allocated), used (how much of the total is in use), and avail (free).

And that matches exactly the LV/FS size you extended to:

615448576 / 1024 / 1024 = 587 GB
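(Just as a sanity check of that arithmetic, not output from your session, bc gives the same figure:)

echo "scale=2; 615448576 / 1024 / 1024" | bc     (prints 586.93, i.e. roughly 587 GB)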

So what's the problem?
Tuan Nguyen_2
Frequent Advisor

Re: hpux shows bdf wrongly

hi,

You are right about the 587 GB; sorry, I didn't state my problem clearly enough. The problem appears when I add up the numbers.

/dev/vgdoc09/dm_data05 587 23 529 4% /data/dm_data05

23 + 529 = 552 GB (it should be 587 GB)

There are fewer than 1000 files in that FS.
g3jza
Esteemed Contributor

Re: hpux shows bdf wrongly

Tuan Nguyen_2
Frequent Advisor

Re: hpux shows bdf wrongly

hi

Thanks for the link

[test:tng]/ # bdfgigs /data/dm_data05
File-System Gbytes Used Avail %Used Mounted on
/dev/vgdoc09/dm_data05 587 29 523 5% /data/dm_data05

[test:tng]/ # lsof |grep dm_data05 (nothing)

SMH shows the value correctly. One explanation could be that someone removed a HUGE file on dm_data05 while a process still had it open (holding that inode). The space will not show as available until that process is stopped.

Regards Tuan
Patrick Wallek
Honored Contributor

Re: hpux shows bdf wrongly

To show unlinked files with lsof, do:

lsof +L1
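
(The +L1 option restricts the listing to files whose link count is below one, i.e. deleted files that some process still holds open. A hedged refinement, to narrow the output to this filesystem, is to filter on the mountpoint or volume group; the exact contents of the NAME column depend on the lsof version:)

lsof +L1 | grep -e /data/dm_data05 -e vgdoc09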

Tuan Nguyen_2
Frequent Advisor

Re: hpux shows bdf wrongly

Thanks for the tip, but unfortunately I can't find anything belonging to dm_data05/vgdoc09 (see attachment).

http://www.akadia.com/services/lsof_quickstart.txt

Tuan Nguyen_2
Frequent Advisor

Re: hpux shows bdf wrongly


The solution is:

We can see how VxFS handles this by looking at the figures below.
/dev/vgdoc04/data21: 1000 - 789 = 211 (14 GB "missing"); 1,042,210 files
/dev/vgdoc09/dm_data05: 1000 - 757 = 143 (49 GB "missing"); 36,909 files


"The available size will be change dynamically depending how much you fill your FS". Another way to say ├в the huge reservation will be decreased when you are
beginning to use the mountpoint"


Below you can see the last column getting bigger as the mountpoint fills with data. The last column equals "Used + Avail".

[nubia:root]/tnng # bdfgigs /tnng
File-System Gbytes Used Avail %Used Mounted on
/dev/vgtnng/tnng 300 0 281 0% /tnng 281 (missing 19GB)
/dev/vgtnng/tnng 300 3 278 1% /tnng 281
/dev/vgtnng/tnng 300 8 274 3% /tnng 282
/dev/vgtnng/tnng 300 14 268 5% /tnng 282
/dev/vgtnng/tnng 300 25 257 9% /tnng 282
/dev/vgtnng/tnng 300 35 249 12% /tnng 284
/dev/vgtnng/tnng 300 44 240 15% /tnng 284
/dev/vgtnng/tnng 300 98 189 34% /tnng 287
/dev/vgtnng/tnng 300 136 153 47% /tnng 289
/dev/vgtnng/tnng 300 159 132 55% /tnng 291
/dev/vgtnng/tnng 300 183 109 63% /tnng 292
/dev/vgtnng/tnng 300 200 94 68% /tnng 294
/dev/vgtnng/tnng 300 220 75 75% /tnng 295 (getting close to the total size of 300 GB)

The last row shows VxFS using only 5 GB instead of 19 GB for overhead/metadata.
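
(A rough way to watch this over time, assuming bdfgigs prints the device plus the Gbytes/Used/Avail columns on one line as shown above, is to let awk do the adding:)

bdfgigs /tnng | awk 'NR > 1 { printf "%s accounted=%d reserved=%d\n", $1, $3 + $4, $2 - ($3 + $4) }'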

So don't worry about what you see with bdf/df on a new mountpoint (100 GB+); after a while you will get the numbers you are looking for. This has nothing to do with a hanging process (lsof) or bdf misbehaving.


Best Regards
Tuan


Tuan Nguyen_2
Frequent Advisor

Re: hpux shows bdf wrongly

Bill Hassell
Honored Contributor

Re: hpux shows bdf wrongly

Also note:

bdfmegs and bdfgigs (scripts I wrote) will try to use gdf --no-sync or bdf -s to speed things up. This skips the sync step, which can take quite a bit of time on a mountpoint that is very busy adding/extending/deleting space. If you run bdf -s and then bdf (no option), you'll see the difference. You are correct that it will eventually sync after a while.

The no-sync feature is useful for very large systems (terabytes with dozens of lvols and busy mountpoints). Just be aware that updating the stats can take some time.
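
(One hedged way to see the difference is simply to time both forms on a busy mountpoint; timex is assumed to be installed, and the actual numbers will vary:)

timex bdf /data/dm_data05      (syncs first, so it can take a while when the FS is busy)
timex bdf -s /data/dm_data05   (skips the sync and returns quickly; the stats may lag slightly)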


Bill Hassell, sysadmin