
blocks count

 
SOLVED
Mauro Gatti
Valued Contributor

blocks count

Hi all, let me show a strange thing:

virgo# ls -lsgo SysInfo204.shar
392 -rw-r--r-- 1 199907 Jan 28 09:55 SysInfo204.shar
virgo# du SysInfo204.shar
392 SysInfo204.shar
virgo# sum SysInfo204.shar
45707 391 SysInfo204.shar

Why does sum report a different blocks number?
If a block is 512 bytes long...
199907/512 = 390.44, so it should be 391 blocks.

Could you explain how the block count works?
Ubi maior, minor cessat!
9 REPLIES
Pete Randall
Outstanding Contributor

Re: blocks count

Mauro,

That's interesting! Reading the man page for sum, it says it's obsolescent and cksum should be used instead. What does cksum report? Maybe that's why sum is obsolescent?


Pete
Pete Randall
Outstanding Contributor

Re: blocks count

I should have checked first - cksum just gives you the byte count, nothing about blocks.


Pete
Mauro Gatti
Valued Contributor

Re: blocks count

It seems sum reports the right size and the others count one block more... doesn't it?
Ubi maior, minor cessat!
Pete Randall
Outstanding Contributor

Re: blocks count

That's certainly the way it seems. I have to wonder if du isn't correct, though, with its 392 blocks. That would allow one extra block for header info or whatever. It's a guess, but it makes sense to me.


Pete
Mauro Gatti
Valued Contributor

Re: blocks count

I should explain what I'd like to do...
I'd like to write a script using awk (for example) that identifies "sparse" files by comparing the blocks with the bytes occupied.
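That idea could be sketched roughly as below. This is only an illustration, not an HP-UX-specific tool: the function name is my own, and block units differ between systems (HP-UX du counts 512-byte blocks, GNU du kilobytes), so the comparison here is done in kilobytes via du -k and wc -c.

```shell
#!/bin/sh
# sparse_check (hypothetical helper): flag a file whose allocated space
# falls short of what its logical byte size would require.
sparse_check() {
    f=$1
    bytes=`wc -c < "$f"`
    kb=`du -k "$f" | awk '{print $1}'`          # kilobytes actually allocated
    needed=`expr \( $bytes + 1023 \) / 1024`    # ceiling(bytes / 1024)
    if [ "$kb" -lt "$needed" ]; then
        echo "$f: possibly sparse ($kb KB allocated, $needed KB logical)"
    fi
}

# demo: a plain file vs. one written with a hole (dd seek leaves a gap)
dd if=/dev/zero of=/tmp/dense.$$ bs=1024 count=64 2>/dev/null
dd if=/dev/zero of=/tmp/holey.$$ bs=1 count=1 seek=1048575 2>/dev/null
sparse_check /tmp/dense.$$
sparse_check /tmp/holey.$$       # on most filesystems only this one is reported
rm -f /tmp/dense.$$ /tmp/holey.$$
```

Filesystems may allocate a little extra for indirect blocks, so in practice the threshold probably needs some slack, as discussed below.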
Ubi maior, minor cessat!
Pete Randall
Outstanding Contributor

Re: blocks count

The man page for du warns that "Block counts are incorrect for files that contain holes". You would need to do some testing to see if your calculated block size is consistently one less than what du reports. If that is true, then it should be valid to calculate a block size, add one to it, and then compare that to the value reported by du. If different, the file must be sparse.


Pete
Mauro Gatti
Valued Contributor

Re: blocks count

For some files this works fine:
blocks reported by ls -s = bytes reported by ls -l / 512 + 1
But for others there is a difference of one block (ls -ls reports one block more than ls -l / 512 + 1).
For well-known sparse files this difference is greater.

I don't think HP-UX also counts the inode block in the block count.
Ubi maior, minor cessat!
Kent Ostby
Honored Contributor

Re: blocks count

Is this a rounding issue?

If you don't come out exactly at a block marker, I would think you would need a partial extra block.
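That rounding is just ceiling division: any final partial block still occupies a whole block. For the 199907-byte file from the original post:

```shell
# ceiling(bytes / 512): add blocksize-1 before the integer division
bytes=199907
blocks=`expr \( $bytes + 511 \) / 512`
echo $blocks        # prints 391 -- the count sum reports
```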

"Well, actually, she is a rocket scientist" -- Steve Martin in "Roxanne"
A. Clay Stephenson
Acclaimed Contributor
Solution

Re: blocks count

Now that I know what you are trying to do, I think the correct answer is to forget about it. The whole point of sparse files is that they are completely invisible to the application: "missing" characters are simply filled in with ASCII NULs by the read() system call.

Consider an actual file with "X" written at offset 1 and at offset 1000000, with no data in between. This would be a classic sparse file. Now consider exactly the same file except that NULs were physically written to fill in the holes. No command could distinguish these two files, even though internally they are quite different. Even commands like fbackup -- which support restoration of sparse files -- don't have a clue; the read() system call fills in the holes with NULs just as it would for any application.

So how are sparse files able to be restored? The restore program (frecover, in this example) looks at the input stream, and if it detects long sequences of ASCII NULs, it can issue an lseek() to resume writing data at the next non-NUL offset.
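The invisibility Clay describes can be demonstrated with a small sketch (GNU-style tools and a filesystem with hole support assumed; paths are illustrative). A file with a hole and a plain copy of it compare equal byte for byte, because read() supplies NULs for the hole, yet their allocated block counts differ:

```shell
# write one byte at offset 1000000, leaving a ~1 MB hole before it
dd if=/dev/zero of=/tmp/holey bs=1 count=1 seek=1000000 2>/dev/null
# a plain dd copy reads through the hole and physically writes the NULs
dd if=/tmp/holey of=/tmp/filled bs=4096 2>/dev/null
cmp /tmp/holey /tmp/filled && echo "contents identical"
du -k /tmp/holey /tmp/filled        # allocated space differs
rm -f /tmp/holey /tmp/filled
```

Tools that do recreate holes, such as frecover, have to guess from NUL runs exactly as the post explains, which is why a byte-level comparison can never tell the two apart.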


If it ain't broke, I can fix that.