<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: hpux shows bdf wrongly in Operating System - HP-UX</title>
    <link>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795735#M534596</link>
    <description>please read the Symantec doc, page 64 (or search for "reservation")&lt;BR /&gt;&lt;BR /&gt;&lt;A href="https://sort.symantec.com/public/documents/sfha/5.1sp1/hp-ux/productguides/pdf/vxfs_admin_51sp1_hpux.pdf" target="_blank"&gt;https://sort.symantec.com/public/documents/sfha/5.1sp1/hp-ux/productguides/pdf/vxfs_admin_51sp1_hpux.pdf&lt;/A&gt;&lt;BR /&gt;</description>
    <pubDate>Wed, 15 Jun 2011 10:13:12 GMT</pubDate>
    <dc:creator>Tuan Nguyen_2</dc:creator>
    <dc:date>2011-06-15T10:13:12Z</dc:date>
    <item>
      <title>hpux shows bdf wrongly</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795727#M534588</link>
      <description>HP-UX shows the size incorrectly, why?  (IA, 11.31 with update from 0909)&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I did an lvextend of 1GB to make sure the FS is handled correctly.&lt;BR /&gt;lvextend -L 601000m /dev/vgdoc09/dm_data05 (OK)&lt;BR /&gt;fsadm -F vxfs -b 601024m /data/dm_data05 (OK)&lt;BR /&gt;&lt;BR /&gt;NB: There are fewer than 1000 files on FS dm_data05, running with 1KB block size, FS version 1 (lvmadm -l).&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;I then checked:&lt;BR /&gt;[test:tng]/data/dm_data05 # bdf .&lt;BR /&gt;Filesystem          kbytes    used   avail %used Mounted on&lt;BR /&gt;/dev/vgdoc09/dm_data05&lt;BR /&gt;                   615448576 24165277 554328103    4% /data/dm_data05  (1 big LUN of 600GB)&lt;BR /&gt;&lt;BR /&gt;Why only 554328103? I expected almost 600GB.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;In Gbytes:&lt;BR /&gt;[test:tng]/data/dm_data05 # bdfgigs /data/dm_data05&lt;BR /&gt;File-System             Gbytes    Used   Avail %Used Mounted on&lt;BR /&gt;/dev/vgdoc09/dm_data05     587      23     529    4% /data/dm_data05&lt;BR /&gt;&lt;BR /&gt;Why only 587GB?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;With the df command:&lt;BR /&gt;[test:tng]/data/dm_data05 # df -k .&lt;BR /&gt;/data/dm_data05        (/dev/vgdoc09/dm_data05) : 578493594 total allocated Kb&lt;BR /&gt;                                                  554324814 free allocated Kb&lt;BR /&gt;                                                  24168780 used allocated Kb&lt;BR /&gt;                                                         5 % allocation used&lt;BR /&gt;&lt;BR /&gt;The LUN is 600GB; why does it show only 578493594 Kb? That is almost 22GB less!&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;--Reference: another FS on the same server, which shows OK. Block size is 1K, with default mount options. 
FS version 1&lt;BR /&gt;&lt;BR /&gt;[test:tng]/data/dm_data05 # bdf /data/dm_data04&lt;BR /&gt;Filesystem          kbytes    used   avail %used Mounted on&lt;BR /&gt;/dev/vgdoc07/dm_data04&lt;BR /&gt;                   1048281088 837389112 197711245   81% /data/dm_data04   (5 LUNs of 200GB each)&lt;BR /&gt;&lt;BR /&gt;[test:tng]/data # bdfgigs /data/dm_data04&lt;BR /&gt;File-System             Gbytes    Used   Avail %Used Mounted on&lt;BR /&gt;/dev/vgdoc07/dm_data04    1000     809     179   82% /data/dm_data04&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Why the different sizes? Notice that bdf shows the size correctly for another FS on the same server.&lt;BR /&gt;&lt;BR /&gt;What did I do wrong?&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Regards&lt;BR /&gt;Tuan&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;[test:tng]/data # vgdisplay -v vgdoc09&lt;BR /&gt;--- Volume groups ---&lt;BR /&gt;VG Name                     /dev/vgdoc09&lt;BR /&gt;VG Write Access             read/write&lt;BR /&gt;VG Status                   available&lt;BR /&gt;Max LV                      255&lt;BR /&gt;Cur LV                      1&lt;BR /&gt;Open LV                     1&lt;BR /&gt;Max PV                      128&lt;BR /&gt;Cur PV                      2&lt;BR /&gt;Act PV                      2&lt;BR /&gt;Max PE per PV               30000&lt;BR /&gt;VGDA                        4&lt;BR /&gt;PE Size (Mbytes)            32&lt;BR /&gt;Total PE                    38398&lt;BR /&gt;Alloc PE                    37564&lt;BR /&gt;Free PE                     834&lt;BR /&gt;Total PVG                   2&lt;BR /&gt;Total Spare PVs             0&lt;BR /&gt;Total Spare PVs in use      0&lt;BR /&gt;VG Version                  1.0&lt;BR /&gt;VG Max Size                 120000g&lt;BR /&gt;VG Max Extents              3840000&lt;BR /&gt;&lt;BR /&gt;   --- Logical volumes ---&lt;BR /&gt;   LV Name                     /dev/vgdoc09/dm_data05&lt;BR /&gt;   LV Status                   available/syncd&lt;BR /&gt;   LV Size (Mbytes)            
601024&lt;BR /&gt;   Current LE                  18782&lt;BR /&gt;   Allocated PE                37564&lt;BR /&gt;   Used PV                     2&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;   --- Physical volumes ---&lt;BR /&gt;   PV Name                     /dev/disk/disk697&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    19199&lt;BR /&gt;   Free PE                     417&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;   PV Name                     /dev/disk/disk703&lt;BR /&gt;   PV Status                   available&lt;BR /&gt;   Total PE                    19199&lt;BR /&gt;   Free PE                     417&lt;BR /&gt;   Autoswitch                  On&lt;BR /&gt;   Proactive Polling           On&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;   --- Physical volume groups ---&lt;BR /&gt;   PVG Name                    CL-IX-doc09&lt;BR /&gt;   PV Name                     /dev/disk/disk697&lt;BR /&gt;&lt;BR /&gt;   PVG Name                    CL-9M-doc09&lt;BR /&gt;   PV Name                     /dev/disk/disk703&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;[test:tng]/data #&lt;BR /&gt;</description>
      <pubDate>Mon, 06 Jun 2011 07:26:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795727#M534588</guid>
      <dc:creator>Tuan Nguyen_2</dc:creator>
      <dc:date>2011-06-06T07:26:12Z</dc:date>
    </item>
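[Editor's note] The arithmetic behind the question can be checked with plain POSIX shell, using the numbers copied from the bdf output above (nothing here is HP-UX-specific):

```shell
#!/bin/sh
# Figures taken from the bdf output for /data/dm_data05 above (all in KB).
total_kb=615448576
used_kb=24165277
avail_kb=554328103

# Integer GB conversions (1 GB = 1024 * 1024 KB, truncated).
echo "total:      $(( total_kb / 1024 / 1024 )) GB"
echo "used+avail: $(( (used_kb + avail_kb) / 1024 / 1024 )) GB"
echo "gap:        $(( (total_kb - used_kb - avail_kb) / 1024 / 1024 )) GB"
```

The output shows the discrepancy the thread is about: the kbytes column converts to roughly 586-587GB, but used plus avail only reach about 551-552GB, leaving a gap of roughly 35GB that the rest of the thread attributes to VxFS.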
    <item>
      <title>Re: hpux shows bdf wrongly</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795728#M534589</link>
      <description>Hi.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;You extended the LV to a final size of 586.91GB, and then extended the FS to the same size.&lt;BR /&gt;The bdf output shows these columns, left to right: kbytes (total allocated), used (how much of the total is in use), and avail (free).&lt;BR /&gt;&lt;BR /&gt;And it is exactly the same size as the LV/FS you extended:&lt;BR /&gt;&lt;BR /&gt;615448576 / 1024 / 1024 = 587GB&lt;BR /&gt;&lt;BR /&gt;So what's the problem?</description>
      <pubDate>Mon, 06 Jun 2011 07:43:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795728#M534589</guid>
      <dc:creator>g3jza</dc:creator>
      <dc:date>2011-06-06T07:43:49Z</dc:date>
    </item>
    <item>
      <title>Re: hpux shows bdf wrongly</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795729#M534590</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;You are right about the 587GB; sorry, I didn't state my problem clearly enough. The problem is when I add the numbers:&lt;BR /&gt;&lt;BR /&gt;/dev/vgdoc09/dm_data05 587 23 529 4% /data/dm_data05&lt;BR /&gt;&lt;BR /&gt;23 + 529 = 552GB (it should be 587GB)&lt;BR /&gt;&lt;BR /&gt;There are fewer than 1000 files in that FS.</description>
      <pubDate>Mon, 06 Jun 2011 09:04:34 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795729#M534590</guid>
      <dc:creator>Tuan Nguyen_2</dc:creator>
      <dc:date>2011-06-06T09:04:34Z</dc:date>
    </item>
    <item>
      <title>Re: hpux shows bdf wrongly</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795730#M534591</link>
      <description>&lt;P&gt;Oh I see...&lt;BR /&gt;&lt;BR /&gt;Check this thread please:&lt;BR /&gt;&lt;A href="http://h30499.www3.hp.com/t5/LVM-and-VxVM/File-system-used-and-available-space-descrepancy/m-p/4792901#M39215" target="_blank"&gt;http://h30499.www3.hp.com/t5/LVM-and-VxVM/File-system-used-and-available-space-descrepancy/m-p/4792901#M39215&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 06 Jul 2011 17:39:19 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795730#M534591</guid>
      <dc:creator>g3jza</dc:creator>
      <dc:date>2011-07-06T17:39:19Z</dc:date>
    </item>
    <item>
      <title>Re: hpux shows bdf wrongly</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795731#M534592</link>
      <description>Hi,&lt;BR /&gt;&lt;BR /&gt;Thanks for the link.&lt;BR /&gt;&lt;BR /&gt;[test:tng]/ # bdfgigs /data/dm_data05&lt;BR /&gt;File-System             Gbytes    Used   Avail %Used Mounted on&lt;BR /&gt;/dev/vgdoc09/dm_data05     587      29     523    5% /data/dm_data05&lt;BR /&gt;&lt;BR /&gt;[test:tng]/ # lsof |grep dm_data05 (nothing)&lt;BR /&gt;&lt;BR /&gt;SMH shows the value correctly. One explanation could be that someone removed a HUGE file on dm_data05 which a process still holds open (that inode). The space will not show as available until that process is taken down.&lt;BR /&gt;&lt;BR /&gt;Regards, Tuan</description>
      <pubDate>Tue, 07 Jun 2011 14:22:58 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795731#M534592</guid>
      <dc:creator>Tuan Nguyen_2</dc:creator>
      <dc:date>2011-06-07T14:22:58Z</dc:date>
    </item>
    <item>
      <title>Re: hpux shows bdf wrongly</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795732#M534593</link>
      <description>To show unlinked files with lsof, do:&lt;BR /&gt;&lt;BR /&gt;lsof +L1&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Tue, 07 Jun 2011 15:05:07 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795732#M534593</guid>
      <dc:creator>Patrick Wallek</dc:creator>
      <dc:date>2011-06-07T15:05:07Z</dc:date>
    </item>
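[Editor's note] The `lsof +L1` tip above targets exactly the situation suspected earlier in the thread: a file that has been unlinked while a process still holds it open. A minimal, portable POSIX shell sketch of that situation (on HP-UX you would run `lsof +L1` at the marked point to see the holder):

```shell
#!/bin/sh
# Create a file and hold an open descriptor on it.
tmp=$(mktemp)
exec 3>"$tmp"            # fd 3 now refers to the file

rm -f "$tmp"             # unlink it: the name is gone, the blocks are not

if [ ! -e "$tmp" ]; then
  echo "name removed, descriptor still open"
fi
# Here 'lsof +L1' (where available) would list this shell, because the
# file's link count is 0 while it is still open on fd 3. Until the
# descriptor is closed, df/bdf keep counting the space as used.

exec 3>/dev/null         # repoint fd 3, releasing the file and its blocks
```

This is why Tuan's `lsof | grep dm_data05` finding nothing (next post) effectively rules out the deleted-but-open-file explanation for this mount point.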
    <item>
      <title>Re: hpux shows bdf wrongly</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795733#M534594</link>
      <description>Thanks for the tip, but unfortunately I can't find anything belonging to dm_data05/vgdoc09 (see attachment).&lt;BR /&gt;&lt;BR /&gt;&lt;A href="http://www.akadia.com/services/lsof_quickstart.txt" target="_blank"&gt;http://www.akadia.com/services/lsof_quickstart.txt&lt;/A&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 08 Jun 2011 14:27:49 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795733#M534594</guid>
      <dc:creator>Tuan Nguyen_2</dc:creator>
      <dc:date>2011-06-08T14:27:49Z</dc:date>
    </item>
    <item>
      <title>Re: hpux shows bdf wrongly</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795734#M534595</link>
      <description>&lt;BR /&gt;The solution is:&lt;BR /&gt;&lt;BR /&gt;You can see how VxFS handles this from the figures below.&lt;BR /&gt;/dev/vgdoc04/data21: 1000-789=211 (14G "missing")              ;  1,042,210 files&lt;BR /&gt;/dev/vgdoc09/dm_data05: 1000-757=143  (49G "missing")    ; 36,909 files&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;"The available size will change dynamically depending on how much you fill your FS." Another way to say it: the huge reservation decreases as you begin to &lt;BR /&gt;use the mount point.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Below, the last column grows as the mount point fills with data. The last column equals "Used + Avail".&lt;BR /&gt;&lt;BR /&gt;[nubia:root]/tnng # bdfgigs /tnng&lt;BR /&gt;File-System       Gbytes    Used   Avail %Used Mounted on&lt;BR /&gt;/dev/vgtnng/tnng     300       0     281    0% /tnng        281  (missing 19GB)&lt;BR /&gt;/dev/vgtnng/tnng     300       3     278    1% /tnng        281&lt;BR /&gt;/dev/vgtnng/tnng     300       8     274    3% /tnng        282&lt;BR /&gt;/dev/vgtnng/tnng     300      14     268    5% /tnng        282&lt;BR /&gt;/dev/vgtnng/tnng     300      25     257    9% /tnng        282&lt;BR /&gt;/dev/vgtnng/tnng     300      35     249   12% /tnng        284&lt;BR /&gt;/dev/vgtnng/tnng     300      44     240   15% /tnng        284&lt;BR /&gt;/dev/vgtnng/tnng     300      98     189   34% /tnng        287&lt;BR /&gt;/dev/vgtnng/tnng     300     136     153   47% /tnng        289&lt;BR /&gt;/dev/vgtnng/tnng     300     159     132   55% /tnng        291&lt;BR /&gt;/dev/vgtnng/tnng     300     183     109   63% /tnng        292&lt;BR /&gt;/dev/vgtnng/tnng     300     200      94   68% /tnng        294&lt;BR /&gt;/dev/vgtnng/tnng     300     220      75   75% /tnng        295 (close to the total size of 300GB)&lt;BR /&gt;&lt;BR /&gt;The last row shows VxFS now uses only 5GB instead of 19GB for overhead/metadata.&lt;BR /&gt;&lt;BR /&gt;So don't worry about what you see with bdf/df on a new mount point (100GB+); after a while you will get what you are looking for. This has nothing to do with &lt;BR /&gt;hanging processes (lsof) or bdf "behaving badly".&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;Best Regards&lt;BR /&gt;Tuan&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 15 Jun 2011 09:18:50 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795734#M534595</guid>
      <dc:creator>Tuan Nguyen_2</dc:creator>
      <dc:date>2011-06-15T09:18:50Z</dc:date>
    </item>
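[Editor's note] The shrinking-overhead claim in the table above can be recomputed directly. A small awk sketch over a few of the posted rows (used:avail pairs copied from the table; total is the 300GB lvol):

```shell
#!/bin/sh
# Recompute "Used + Avail" (reachable space) and the implied VxFS
# overhead (total minus reachable) for a sample of the posted rows.
awk 'BEGIN {
  n = split("0:281 3:278 14:268 44:240 136:153 220:75", rows, " ")
  for (i = 1; i != n + 1; i++) {
    split(rows[i], f, ":")
    printf "used=%3dG avail=%3dG reachable=%3dG overhead=%3dG\n",
           f[1], f[2], f[1] + f[2], 300 - f[1] - f[2]
  }
}'
```

The overhead column falls from 19GB on the empty filesystem to 5GB at 75% full, matching the post's observation that the reservation shrinks as the mount point fills.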
    <item>
      <title>Re: hpux shows bdf wrongly</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795735#M534596</link>
      <description>please read the Symantec doc, page 64 (or search for "reservation")&lt;BR /&gt;&lt;BR /&gt;&lt;A href="https://sort.symantec.com/public/documents/sfha/5.1sp1/hp-ux/productguides/pdf/vxfs_admin_51sp1_hpux.pdf" target="_blank"&gt;https://sort.symantec.com/public/documents/sfha/5.1sp1/hp-ux/productguides/pdf/vxfs_admin_51sp1_hpux.pdf&lt;/A&gt;&lt;BR /&gt;</description>
      <pubDate>Wed, 15 Jun 2011 10:13:12 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795735#M534596</guid>
      <dc:creator>Tuan Nguyen_2</dc:creator>
      <dc:date>2011-06-15T10:13:12Z</dc:date>
    </item>
    <item>
      <title>Re: hpux shows bdf wrongly</title>
      <link>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795736#M534597</link>
      <description>Also note:&lt;BR /&gt; &lt;BR /&gt;bdfmegs and bdfgigs (scripts I wrote) will try to use gdf --no-sync or bdf -s to speed things up. This skips the sync step, which can take quite a bit of time for a given mountpoint that is very busy adding/extending/deleting space. If you run bdf -s and then bdf (no option), you'll see the difference. You are correct that it will eventually sync after a while.&lt;BR /&gt; &lt;BR /&gt;The no-sync feature is useful for very large systems (terabytes, with dozens of lvols and busy mountpoints). Just be aware that updating the stats can take some time.</description>
      <pubDate>Wed, 15 Jun 2011 13:41:57 GMT</pubDate>
      <guid>https://community.hpe.com/t5/operating-system-hp-ux/hpux-shows-bdf-wrongly/m-p/4795736#M534597</guid>
      <dc:creator>Bill Hassell</dc:creator>
      <dc:date>2011-06-15T13:41:57Z</dc:date>
    </item>
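[Editor's note] bdfmegs and bdfgigs are Bill Hassell's own scripts and are not reproduced in this thread. Purely as an illustration of the KB-to-GB conversion such a wrapper performs, here is a minimal awk sketch fed a canned, single-line version of the bdf output from the first post (the one-line field layout is an assumption; real bdf wraps long device names onto a separate line, as seen above):

```shell
#!/bin/sh
# Hypothetical one-line rendering of the bdf row for /data/dm_data05.
sample="/dev/vgdoc09/dm_data05 615448576 24165277 554328103 4% /data/dm_data05"

# Convert the kbytes, used, and avail columns to (truncated) GB.
echo "$sample" | awk '{ printf "%-24s %5dG %5dG %5dG %5s %s\n",
                        $1, $2/1048576, $3/1048576, $4/1048576, $5, $6 }'
```

With truncation rather than rounding this prints 586G/23G/528G for the sample row, which is why bdfgigs (587/23/529) and a hand conversion can disagree by 1GB in each column.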
  </channel>
</rss>

