Operating System - HP-UX

Problems with large files on rx7620s

kaushikbramesh
Advisor

Hi,

We are having a strange problem with some applications on rx7620s running HP-UX B.11.23.
We found the problem while trying to back up the filesystems on an rx7620 to an NFS mount using vxdump.
vxdump appeared to complete the backup of the filesystems, but toward the end it stalled with CPU utilization at 100%. We tried killing the vxdump processes, but they could not be removed from the process table and CPU utilization stayed at 100%.
Backing up the filesystems with fbackup hit the same problem.
We also tried copying a large file; the cp command hung with the CPU maxed out as well.

In the past we had problems with gzip and compress, which turned out to be a 32-bit/64-bit application issue. The 64-bit versions of gzip and compress worked without any problems.

We tried the same exercise (vxdump, fbackup, and cp) on an rx2620-based system, and they all worked.

We also compared the 32-bit/64-bit applications on both the rx2620 and the rx7620; they appear to be the same.

I'm not sure where to start debugging the problem.

Thanks in advance for all your suggestions.

Regards
Kaushik
3 REPLIES
Robert-Jan Goossens
Honored Contributor

Re: Problems with large files on rx7620s

Hi Kaushik,

Could it be that your NFS-mounted filesystem has not been set up with largefiles?

On the NFS server.

# fsadm /mount_point

or if you do not have OnlineJFS

# fsadm /dev/vgXX/rlvolX
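For reference, a typical check-and-enable sequence on a VxFS filesystem looks like the sketch below. The mount point and volume names are placeholders, and enabling largefiles on a mounted filesystem requires OnlineJFS:

```shell
# Check whether the filesystem supports large files (>2 GB);
# this prints "largefiles" or "nolargefiles".
fsadm -F vxfs /backup_fs

# With OnlineJFS, enable largefiles on the mounted filesystem:
fsadm -F vxfs -o largefiles /backup_fs

# Without OnlineJFS, unmount first and run fsadm on the raw device
# (volume group and volume names are placeholders):
umount /backup_fs
fsadm -F vxfs -o largefiles /dev/vg01/rlvol_backup
mount /backup_fs
```

Remember that the NFS server's filesystem is what matters here; a largefiles-capable client cannot write a file bigger than the server's filesystem allows.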

Regards,
Robert-Jan
Steven E. Protter
Exalted Contributor

Re: Problems with large files on rx7620s

Shalom Kaushik,

Files may still stop growing at 2 GB in the following scenarios:

1) Writing to a filesystem that was not created with largefiles enabled.
2) Writing to an NFS mount that uses NFS version 2.
3) Using an old version of tar or a similar tool that is limited to 2 GB files.
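The 2 GB ceiling in all three cases comes from the same place: a signed 32-bit file offset, whose largest value is 2^31 - 1 bytes, just under 2 GB. A quick sanity check of that number:

```shell
# Largest file offset representable in a signed 32-bit off_t:
# 2^31 - 1 bytes, i.e. just under 2 GB.
echo $(( (1 << 31) - 1 ))
# prints 2147483647
```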

Interestingly, fbackup has never had a problem with large files since I have been using it on 11.00, but it generally goes to tape.

Other ideas:

The file being backed up may be held open by a process (for example, a database file), and the system is having trouble getting a consistent copy.

Note that fbackup cannot back up open database files. The database should be shut down to get a clean copy.

There is probably more I can do once you get a chance to look around and elaborate.

SEP
Steven E Protter
Owner of ISN Corporation
http://isnamerica.com
http://hpuxconsulting.com
Sponsor: http://hpux.ws
Twitter: http://twitter.com/hpuxlinux
Founder http://newdatacloud.com
Bill Hassell
Honored Contributor

Re: Problems with large files on rx7620s

This definitely sounds like NFS. Only recently has NFS been able to handle large files, and of course both sides (server and client) must allow large files. fbackup has handled largefiles ever since they were available, but I'm guessing that you did not use fbackup on the local machine to a local tape. Most likely, the machine with the large files does not have a tape drive and everything must be backed up over the network, so fbackup was used on a remote machine. NFS is definitely not transparent as a filesystem--you must always be aware of the performance issues, compatibility, and difficulties with unreliable network connections.
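One quick way to rule the NFS protocol version in or out is to check the mount options on the client. As a sketch:

```shell
# Show mounted NFS filesystems and their options on the client;
# look for vers=2 (2 GB file limit) versus vers=3 in the output.
nfsstat -m

# The verbose mount table shows the NFS mount options as well:
mount -v | grep nfs
```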

fbackup has the ability to send data over the network without using NFS. Just set up the remote machine to allow root access using remsh, and you can specify fbackup's tape drive as: -f remote_system:/dev/rmt/0m
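As a sketch of that remsh-based setup (hostnames, device file, and backup path below are placeholders, not values from this thread):

```shell
# On the remote machine with the tape drive, allow root remsh access
# from the client being backed up (hostname is a placeholder):
#   echo "client_host root" >> /.rhosts

# On the machine being backed up, write directly to the remote
# tape drive, bypassing NFS entirely:
fbackup -f remote_system:/dev/rmt/0m -i /filesystem_to_backup
```

Since the data stream goes over remsh rather than an NFS mount, no filesystem on either side needs largefiles support for the backup image itself.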

Note also that machines without a local tape drive must have a network Ignite server so you can restore the system disk in case of failure.


Bill Hassell, sysadmin