Vxfs filesystem overhead?
02-17-2004 06:26 AM
I had a 4GB hfs filesystem that was 88% full. I had read several postings saying that vxfs was faster, so I decided to back up my old filesystem and create a new vxfs filesystem. I figured I might as well make it a little bigger at the same time, so I created a 4.5GB logical volume and a new vxfs filesystem on it. All of that worked without incident. Next, I started tar to restore the files from backup, and that was working fine until the filesystem ran out of space. I checked to make sure that the filesystem was 4.5GB; bdf reports the correct size of 4718592 kbytes.
Can the overhead of a vxfs filesystem be that much greater than an hfs?
I am running HP-UX 11.0 on an N-class with OnlineJFS.
TIA,
Derek Card
02-17-2004 06:31 AM
Solution
I suspect that you are the victim of one or two things (and possibly both):
1) Sparse files -- tar cannot restore sparse files as sparse files.
2) Symbolic links -- did you use the -h tar option to follow symbolic links?
I'm betting that it's 1).
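One quick way to test the sparse-file theory (a sketch, not from the original post) is to compare the logical size stat() reports with the space actually allocated. On HP-UX, as on most UNIX systems, st_blocks counts 512-byte units, but check stat(2) locally.

#include <sys/stat.h>

/* Sketch: returns 1 if 'path' occupies fewer disk blocks than its
   logical size implies (i.e. it is sparse), 0 if not, -1 on error.
   Assumes st_blocks is in 512-byte units. */
int is_sparse(const char *path)
{
    struct stat sb;

    if (stat(path, &sb) != 0)
        return -1;
    return ((long long) sb.st_blocks * 512 < (long long) sb.st_size);
}

Any file this flags is stored sparsely; tar will expand such a file to its full logical size on restore.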
02-17-2004 06:46 AM
Re: Vxfs filesystem overhead?
Here's a snippet of C:
#include <fcntl.h>     /* open, O_RDWR, O_CREAT */
#include <unistd.h>    /* write, lseek, close, SEEK_SET */

int fdes;
char buf[512];

fdes = open("myfile", O_RDWR | O_CREAT, 0660);
(void) write(fdes, (void *) buf, sizeof(buf));      /* first 512 bytes */
(void) lseek(fdes, 1000000L, SEEK_SET);             /* jump far past the end */
(void) write(fdes, (void *) buf, sizeof(buf));      /* another 512 bytes */
(void) close(fdes);
The idea is that we open (O_CREAT) a new file, write 512 bytes, then seek to offset 1,000,000 and write another 512 bytes -- leaving a big hole in the middle.
ls -l myfile would report the size as 1,000,512 bytes, but bdf would show only 2 blocks in use. When the file is read (as in tar), the "holes" are filled in with ASCII NULs. Tar and cpio cannot recreate the holes, but frecover or OmniBack can.
02-17-2004 06:51 AM
Re: Vxfs filesystem overhead?
3) You had a filesystem mounted within that filesystem. If your original was /fs, you could have had another filesystem mounted at /fs/anotherfs. When you backed up the original, you would get everything in /fs, including /fs/anotherfs.
If you didn't remount /fs/anotherfs after you recreated /fs, then you could very well see this problem: the restore would write anotherfs's contents into an ordinary directory on /fs itself, consuming its space.
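If you would rather check that case from a program than by eyeballing mount or bdf output, here is a small sketch (the path names are only examples): a directory that is itself a mount point lives on a different device than its parent, so their st_dev fields differ.

#include <sys/stat.h>

/* Sketch: returns 1 if 'dir' appears to be a separately mounted
   filesystem (different device than 'parent'), 0 if not, -1 on error. */
int is_mount_point(const char *parent, const char *dir)
{
    struct stat p, d;

    if (stat(parent, &p) != 0 || stat(dir, &d) != 0)
        return -1;
    return (p.st_dev != d.st_dev);
}

If is_mount_point("/fs", "/fs/anotherfs") returns 0 after the rebuild, /fs/anotherfs was never remounted and the restore wrote its contents straight into /fs.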
02-17-2004 07:11 AM
Re: Vxfs filesystem overhead?
If this really bugs you, there is a way to get the files back to sparse form, but it would require a bit of C or Perl. Believe it or not, neither fbackup nor OB2/DP nor the UNIX read() system call actually knows ANYTHING about sparse files. When the read determines that a given block address points to nothing, it simply fills in the blanks with enough NULs to satisfy the read.
The 'trick' is done on the restore. When a run of consecutive NULs is seen on the input stream, a seek is done and data is written at the next non-NUL file offset. Using that trick, you could read a file and, whenever enough consecutive NULs (I don't know the exact number) are encountered, seek rather than write before putting the next real data into a new file. When finished, you mv the temp file over the original.
For this method to work, you would first have to restore the files with tar and then "shrink" them. You could then back up with fbackup, recreate your LVOL and filesystem, and restore with frecover. If you are really gung-ho, you could read the tar image and do the file extraction and sparse-file restoration yourself, without ever needing the larger filesystem -- but that exercise is left to the reader.
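For what it's worth, here is a minimal sketch of that seek-over-NULs copy (not from the original thread; it assumes a fixed 8 KB block and trims error handling). It reads a tar-restored file and rewrites it with a hole wherever an entire block is NUL:

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

#define BLK 8192

/* Sketch: copy 'src' to 'dst', turning every block that is entirely
   NUL back into a hole by seeking past it instead of writing it. */
void copy_sparse(const char *src, const char *dst)
{
    char buf[BLK], zero[BLK];
    ssize_t n;
    int in  = open(src, O_RDONLY);
    int out = open(dst, O_WRONLY | O_CREAT | O_TRUNC, 0660);

    memset(zero, 0, sizeof(zero));
    while ((n = read(in, buf, sizeof(buf))) > 0) {
        if (n == BLK && memcmp(buf, zero, BLK) == 0)
            (void) lseek(out, (off_t) n, SEEK_CUR);   /* leave a hole */
        else
            (void) write(out, buf, (size_t) n);
    }
    /* if the file ends in a hole, pin down its final size */
    (void) ftruncate(out, lseek(out, 0L, SEEK_CUR));
    (void) close(in);
    (void) close(out);
}

Copy each suspect file to a temporary name with something like this, then mv the temporary over the original.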
Next time, look before you leap.
Regards, Clay
02-17-2004 07:30 AM
Re: Vxfs filesystem overhead?
Thanks,
Derek Card