Operating System - Tru64 Unix

tru64 5.1b SP3 rmvol failure

Graham Charley_1
Occasional Visitor

tru64 5.1b SP3 rmvol failure

I am trying to remove a volume from a file domain and I get the following error:
# rmvol: Can't remove the volume
rmvol: Error = E_BMT_NOT_EMPTY (-1175)
rmvol: Can't remove volume '/dev/vol/datavol5a' from domain 'sapmain'

Can anyone suggest how I can complete the volume removal cleanly without losing any data?
Michael Schulte zur Sur
Honored Contributor

Re: tru64 5.1b SP3 rmvol failure


Please have a look at this page. It seems to address your problem.


Ross Minkov
Esteemed Contributor

Re: tru64 5.1b SP3 rmvol failure


The E_BMT_NOT_EMPTY error means that an attempt to remove a volume failed because the contents of the BMT (the bitfile metadata table, AdvFS's analogue of an inode table) could not be moved off of the volume.

Can you post the output from "showfdmn -k domain_name" here?

Also, have you tried rebooting and trying again?

You can also try rebooting to single-user mode, running the "bcheckrc" command, and then trying rmvol again.

Just some ideas...
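The single-user retry above can be sketched as a short script. This is only a sketch: the AdvFS commands exist only on Tru64, so the guard makes it a no-op elsewhere, and the domain and volume names are the ones from this thread.

```shell
# Sketch of the single-user-mode retry (Tru64-only commands; guarded so
# this is a no-op on systems without the AdvFS tools).
if command -v bcheckrc >/dev/null 2>&1; then
    bcheckrc                            # mount local filesystems after booting to single-user mode
    showfdmn -k sapmain                 # inspect per-volume usage before retrying
    rmvol /dev/vol/datavol5a sapmain    # retry the volume removal
fi
done_msg="rmvol retry sketch finished"
echo "$done_msg"
```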


Graham Charley_1
Occasional Visitor

Re: tru64 5.1b SP3 rmvol failure

Thanks for your reply.
Here is the output from showfdmn:

               Id              Date Created  LogPgs  Version  Domain Name
40a1e833.000239f3  Wed May 12 10:02:43 2004     512        4  sapmain

  Vol    1K-Blks       Free  % Used  Cmode  Rblks  Wblks  Vol Name
   1L    4192256          0    100%     on    512    512  /dev/vol/sapexec
    2    8885584    8839032      1%     on    512    512  /dev/vol/datavol5a
    3    4429824    2356376     47%     on    512    512  /dev/vol/sapexec2
      ----------  ---------  ------
        17507664   11195408     36%

I'll try your other suggestions when I can get some downtime on the system, as it's a production DB.
Ross Minkov
Esteemed Contributor

Re: tru64 5.1b SP3 rmvol failure

You might want to try balance on that AdvFS domain while you wait for a downtime window. The volume that you are trying to remove is 100% full, while the other two are pretty much empty. You should get into the habit of balancing your AdvFS domains on a regular basis.

Also, I noticed that the volume you want to remove is actually your log volume. That shouldn't matter in regards to removing it -- the system should be able to switch automatically to using another volume for logging. What I'm thinking is that because the log volume is 100% full, there might not be enough space on it to do the actual logging, so the system does not allow some AdvFS operations (like rmvol).

If balance gives you the same error, try deleting some files from the filesystems that sit on top of this AdvFS domain, and check with showfdmn again to see if that happened to free some space on that particular volume.
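The rebalancing step can be sketched as follows, assuming the standard AdvFS balance(8) utility; it is safe to run while the domain is mounted, and the guard makes the sketch a no-op on systems without the Tru64 tools.

```shell
# Sketch of the online rebalancing suggested above (Tru64-only).
if command -v balance >/dev/null 2>&1; then
    balance sapmain       # spread file data evenly across the domain's volumes
    showfdmn -k sapmain   # confirm the per-volume usage has evened out
fi
balance_msg="balance sketch finished"
echo "$balance_msg"
```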

Michael Schulte zur Sur
Honored Contributor

Re: tru64 5.1b SP3 rmvol failure


What about my patch suggestion?


Graham Charley_1
Occasional Visitor

Re: tru64 5.1b SP3 rmvol failure

Thanks for your patch suggestion, but that patch is in Service Pack 2, and the system has Service Pack 3 applied, which already incorporates it.
Johan Brusche
Honored Contributor

Re: tru64 5.1b SP3 rmvol failure

After a failed rmvol and before you retry, you must use "chvol -A":

chvol -A /dev/vol/datavol5a sapmain

Now redo the rmvol with "-v". It might fail again, but this time you will know the filename of the data that was being moved.

You can then use "showfile -x" to check whether the file really still has some pages on the volume to be removed. If yes, you can try to "migrate" that file individually.
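The chvol / rmvol / showfile / migrate sequence above can be sketched as one script. The commands are Tru64-only and guarded accordingly, and "/bigfile" is a hypothetical pathname standing in for whatever file a verbose rmvol reports.

```shell
# Sketch of the retry sequence (Tru64-only; "/bigfile" is hypothetical).
if command -v chvol >/dev/null 2>&1; then
    chvol -A /dev/vol/datavol5a sapmain   # reactivate the volume after the failed rmvol
    rmvol -v /dev/vol/datavol5a sapmain   # verbose retry: names the file being moved
    showfile -x /bigfile                  # does that file still have pages on this volume?
    migrate -s 2 /bigfile                 # try to move the file off volume index 2 by hand
fi
seq_msg="rmvol retry sequence sketched"
echo "$seq_msg"
```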

Further troubleshooting can be done with the help of the utilities in /sbin/advfs.
E.g., knowing that a fully free page in the BMT has 28 free mcells, you can search for the BMT pages that have fewer than 28 free mcells as follows (the example implies that the volume to be removed is the 2nd):

/sbin/advfs/nvbmtpg -rv sapmain 2 -f | grep -v "28 free"

Below is output showing that several BMT pages are not completely free:

DOMAIN "scratch_ciney" VDI 1 (/dev/rdisk/dsk31c) lbn 48 BMT page 0
There are 258 pages in the BMT on this volume.
The BMT uses 3 extents (out of 33) in 2 mcells.
first free pg 190
BMT pg 190 has 26 free mcells. Next free pg 189
BMT pg 131 has 27 free mcells. Next free pg 130
BMT pg 128 has 27 free mcells. Next free pg 127
BMT pg 110 has 27 free mcells. Next free pg 109
BMT pg 0 has 17 free mcells. Next free pg 133
BMT pg 104 has 27 free mcells. Next free pg 96
There are 258 pages on the free list with a total of 7207 free mcells.

In the above example, page 190 of the BMT on volume 1 has mcells that are not free.
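The filtering step can be reproduced on canned sample lines like the output above, so it runs anywhere, not just on a Tru64 host: fully free pages (28 free mcells) are dropped, and only pages with in-use mcells survive the grep. The "pg 55" line is a made-up fully-free page added for contrast.

```shell
# Reproduce the "grep -v" filter from the nvbmtpg pipeline on sample data.
sample='BMT pg 190 has 26 free mcells. Next free pg 189
BMT pg 131 has 27 free mcells. Next free pg 130
BMT pg 55 has 28 free mcells. Next free pg 54'
filtered=$(printf '%s\n' "$sample" | grep -v "28 free")
printf '%s\n' "$filtered"   # prints only the pg 190 and pg 131 lines
```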

I can now examine that page with the command:

/sbin/advfs/nvbmtpg -rv scratch_ciney 1 190

In the output you might see things like the record below. Of interest is the number next to "tag" -- 13935 in this example.

CELL 26 linkSegment 0 bfSetTag 1 (1.8001) tag 13935 (366f.8003)
next mcell volume page cell 0 0 0

RECORD 0 bCnt 92 version 0 BSR_ATTR (2)
type BSRA_VALID (3)
bfPgSz 16 transitionId 2
cloneId 0 cloneCnt 0 maxClonePgs 0
deleteWithClone 0 outOfSyncClone 0
cl.dataSafety BFD_NIL (0)
cl reqServices 1 optServices 0 extendSize 0 rsvd1 0
rsvd2 0 acl 0 rsvd_sec1 0 rsvd_sec2 0 rsvd_sec3 0

RECORD 1 bCnt 80 version 0 BSR_XTNTS (1)
chain mcell volume page cell 1 190 27
blksPerPage 16 segmentSize 1595287540
delLink next page,cell 0,0 prev page,cell 0,0
delRst volume,page,cell 0,0,0 xtntIndex 0 offset 0 blocks 0
firstXtnt mcellCnt 2 xCnt 2
bsXA[ 0] bsPage 0 vdBlk 304 (0x130)
bsXA[ 1] bsPage 936 vdBlk -1

RECORD 2 bCnt 92 version 0 BMTR_FS_STAT (255)

Now you can use the "tag2name" utility to find the name of the file that still seems to have some data in this BMT page.

/sbin/advfs/tag2name -r scratch_ciney 1 13935

In my example the output is "dummy", because that is the file I intentionally left on this volume.
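Pulling the tag number out of the CELL line can also be scripted. The awk extraction below runs anywhere (the sample line is copied from the output above); the tag2name call itself is Tru64-only and guarded, with the domain name taken from Johan's example.

```shell
# Extract the decimal tag from an nvbmtpg CELL line, then (on a Tru64
# host only) pass it to tag2name to get the file's pathname.
cell_line='CELL 26 linkSegment 0 bfSetTag 1 (1.8001) tag 13935 (366f.8003)'
tag=$(printf '%s\n' "$cell_line" | awk '{for (i = 1; i < NF; i++) if ($i == "tag") print $(i + 1)}')
echo "$tag"   # 13935
if [ -x /sbin/advfs/tag2name ]; then
    /sbin/advfs/tag2name -r scratch_ciney 1 "$tag"
fi
```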

If you want to log a case with HP support services for escalation to AdvFS engineering, please run /sbin/advfs/savemeta before you do anything drastic like recreating the domain.