
NFS file transfer limitation

 
SOLVED
siva3492
Advisor

NFS file transfer limitation

Hello friends,

I have a problem with an NFS file transfer.

I mounted a directory exported from the Linux machine onto the HP-UX machine, in order to use the free space on the Linux side.

The command I used:

mount <Ipaddress>:/dir1 /dir2


I have already done everything needed on the Linux side for exporting.

Then I tried to copy a tar file of about 18 GB, but the transfer stops at exactly 16 GB.

I tried more than six times, and I also tried on some other machines, but the problem still exists.

Please guide me on how I can overcome this problem.

 

 

P.S. This thread has been moved from Networking to Linux > sysadmin. - HP Forum Moderator

13 REPLIES
Ken Grabowski
Respected Contributor

Re: NFS file transfer limitation

Need much more information to get an idea of what the problem might be.

 

What brand and release of Linux?

What version of HP-UX?

Your mount command shows no -F file system type or -o mount options. Did you use any?

What were the options you used to mount the file system on HP-UX?

What type of Linux file system did you create to receive the 18GB file?

Do you have any file size limits set in Linux?

     ulimit -a to view "file size" settings

    /etc/security/limits.conf for fsize limits.

What are your Linux export settings?

What is the HP-UX "bdf /dir1" result and the Linux "df /dir2" result?
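If it helps, something along these lines should collect most of that in one pass. This is just a sketch; the paths shown are placeholders for your actual export and NFS mount point:

# On the Linux server (the NFS server)
uname -r                     # kernel release
cat /etc/redhat-release      # distribution and version, if Red Hat based
exportfs -v                  # exports currently in effect and their options
ulimit -a                    # per-shell limits, including "file size"
df /path/to/export           # space on the exported file system

# On the HP-UX client
uname -r                     # HP-UX release
nfsstat -m                   # NFS mounts and the options actually in effect
bdf /path/to/nfs/mount       # space as seen through the NFS mount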

 

I've copied much larger files across cross-mounted file systems (RHEL 6.5 ext4 to HP-UX 11.31) with these settings:

HP-UX /etc/fstab: (Linux IP) ###.###.###.###:/transfer /transfer nfs rw,bg,hard,intr,rsize=32768,timeo=600,noac,forcedirectio 0 0

RHEL /etc/exports: /transfer (HP-UX IP) ###.###.###.###(rw,all_squash,sync)

 

 

 

 

 

siva3492
Advisor

Re: NFS file transfer limitation

I am using Linux kernel 2.6.18-348.el5 and HP-UX 11.11.

I didn't use any options for the mount (no -F or -o options).

I tried to transfer a single tar file of approximately 18 GB.

And I don't have any file size limitation.

 

[root@OWP-15 ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) 102400000
pending signals                 (-i) 14288
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14288
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

And the output of the limits.conf file is:

#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4

*               hard    fsize           102400000

 

 

And the export settings are:

 

[root@OWP-15 ~]# vi /etc/exports
/home/back/ *(rw,insecure,sync,no_wdelay,insecure_locks,no_root_squash)

 

Please help me.

 

siva3492
Advisor

Re: NFS file transfer limitation

Actually, I am trying to take a tar backup of one directory on the HP-UX server, and I want to save the created tar file onto the exported file system of the Linux machine.

 

The directory I want to back up is 23.75 GB, and the exported file system has about 150 GB free.

 

I am still facing the problem.

 

I tried different backup tools, such as dd and the fbackup commands,

but I am still facing the same problem.

 

Ken Grabowski
Respected Contributor

Re: NFS file transfer limitation

You do have a file size limit defined in limits.conf of 102400000, which works out to about 97.6 GB, so that still isn't the cause of the problem.
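For reference, ulimit -f and the fsize entry in limits.conf are normally counted in 1 KB blocks (at least with bash and pam_limits), so the arithmetic is roughly:

echo "scale=1; 102400000 / 1024 / 1024" | bc     # about 97.6 GB, far above an 18 GB file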

 

HP-UX 11.11 is pretty old. Not sure if all the suggested NFS mount options are supported in that version, but you definitely need to be using mount options.  

 

First, change your Linux /etc/exports options from (rw,insecure,sync,no_wdelay,insecure_locks,no_root_squash) to (rw,all_squash,sync) and re-export it.
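A sketch of what that would look like (the address is a placeholder for your HP-UX client's IP):

# /etc/exports on the Linux server
/home/back  <hp-ux-ip>(rw,all_squash,sync)

# then re-export and verify
exportfs -ra
exportfs -v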

 

Next, add an /etc/fstab line in the HP-UX host: 

OWP-15:/home/back /transfer nfs rw,bg,hard,intr,rsize=32768,timeo=600,noac,forcedirectio 0 0

 

If needed, change the /transfer mount point to wherever you're mounting the NFS file system on HP-UX.

 

Once you've done that, on HP-UX use the command "mount /transfer" to mount the file system. Now try your backup. 
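A quick way to confirm the mount and the options that actually took effect (just a sketch; use whatever mount point you chose above):

# On HP-UX
mount /transfer          # uses the options from the /etc/fstab entry
nfsstat -m               # shows the NFS mounts and their effective options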

Dave Olker
Neighborhood Moderator

Re: NFS file transfer limitation

Hi Ken,

 

I'm curious about your comment "you definitely need to be using mount options".  Which of the mount options you listed affects file sizes?  Also, some of the options you recommended will potentially negatively affect the performance of the filesystem, especially "noac" and "forcedirectio".  Is there a specific reason to bypass the attribute cache and the buffer cache in this test?  Also, I don't think "forcedirectio" was a valid option at 11.11.   I believe we added this feature in 11.23.

 

 

siva3492 - 

 

Before I tried anything else, I would take NFS out of the picture and first verify that you are able to create a 20+ GB file locally in the Linux filesystem, using whatever tool you prefer, such as dd.  If that works, I would then try transferring the TAR archive from the HP-UX box to the Linux box using something like FTP or SCP.  If both of those methods work without failure, I would go back to NFS and try again.  If NFS still fails, I would collect a network trace and a tusc output at the end of the file transfer to see what kind of errors are being returned at both the NFS layer and the OS layer when the file creation fails.
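A rough sketch of those first two steps (the test file name and the target IP are placeholders; dd writes real blocks, so make sure there is room for the test file and remove it afterwards):

# 1) On the Linux box: prove the local file system can hold a 20+ GB file
dd if=/dev/zero of=/home/back/bigtest.bin bs=1M count=20480
ls -l /home/back/bigtest.bin
rm /home/back/bigtest.bin

# 2) From the HP-UX box: push the archive without NFS, e.g. with scp
scp /path/to/romam_new.tar.gz root@<linux-ip>:/home/back/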

 

Regards,

 

Dave



I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
Ken Grabowski
Respected Contributor

Re: NFS file transfer limitation

To the best of my knowledge, they center around timeouts during transfers and Linux file-creation issues.  While those options can have a negative effect on transfer speed, if the transfer doesn't work without them, then a little extra speed is not important.

 

As I said, not sure all the options are available in 11.11.  If not, I would see if there are any other options for turning off caching and buffering.

 

I'm passing on the settings I discovered while having a similar problem moving large database exports across an NFS mount.  Pretty sure I found the information at Oracle's knowledge base, or one of the Oracle community websites.  It's not an unknown problem, or solution.  Since making these changes to my HP-UX to Linux mount, the very large file transfers have had no further problem.

siva3492
Advisor

Re: NFS file transfer limitation

Hi friends,

 

I made all the changes as you said, but I am still having the problem:

Tar: end of tape
Tar: to continue, enter device/file name when ready or null string to quit.

User entered a null name for next device file.

 

I checked the created file's size:

-rw-r--r--   1 root       sys        17247252480 Dec 30 13:34 romam_new.tar.gz

 

It stopped at exactly 17.24 GB. I tried more than three times, and it happened again and again.

 

Please help me.

Ken Grabowski
Respected Contributor

Re: NFS file transfer limitation

That is actually 16.06 GB.  Did you do what Dave suggested and create a test file of the size you expect to create?  When you ran the mount command, did you get any error messages?  Did you see any messages in /var/adm/syslog.log or /var/log/messages?
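For reference, the conversion from that byte count is just:

echo "scale=2; 17247252480 / 1024 / 1024 / 1024" | bc     # 16.06 GB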

 

Please show your changed fstab and exports information.

siva3492
Advisor
Solution

Re: NFS file transfer limitation

Hi Dave,
Thanks for your guidance.
I tried the tar backup locally on the Linux machine.
As you said, there is a file size limitation on the Linux server; that is why the tar backup aborted at that level.

/home/back/pallava_backup/fbackup_backup/c3t2d0DDSnb
/home/back/pallava_backup/fbackup_backup/stape_config
/home/back/romam_new.tar.gz
tar: /home/backup/back.tar.gz: Cannot write: No space left on device
tar: Error is not recoverable: exiting now
[root@OWP-15 back]# df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                     276535241 155700449 106783874  60% /
/dev/mapper/VolGroup00-LogVol01
                     199146641 122482178  66545705  65% /home
/dev/sda1               118523     11323    101178  11% /boot
tmpfs                   916268         0    916268   0% /dev/shm
[root@OWP-15 back]# cd /home/backup
[root@OWP-15 backup]# ls
back.tar.gz
[root@OWP-15 backup]# ll
total 16909071
-rw-r--r-- 1 root root 17247252480 Dec 31 06:04 back.tar.gz
[root@OWP-15 backup]#

 

 

 

siva3492
Advisor

Re: NFS file transfer limitation

Hi friends,
Can anyone please guide me on what I am supposed to do to overcome this issue?
I checked the file size limitation by executing the ulimit command.

 

ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) 102400000
pending signals                 (-i) 14288
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 14288
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

 It showed that the allowable file size is approximately 100 GB.

Dennis Handly
Acclaimed Contributor

Re: NFS file transfer limitation

> What am I supposed to do to overcome this issue?

 

Are you talking about the 66 GB of free space here, after it aborts as if full?

    199146641 122482178 66545705 65% /home

 

This is a Linux sysadmin question, not HP-UX.  Perhaps you have quotas?

Or thin provisioning on a 3PAR?  :-)

 

> I checked the file size limitation

 

That would give a different error: a SIGXFSZ signal.

Dave Olker
Neighborhood Moderator

Re: NFS file transfer limitation

As Dennis mentioned, this is a Linux issue so this thread will likely be moved to a different forum by one of the moderators.

 

That said, now that you know you are looking at a Linux issue, where a filesystem that appears to have plenty of space is reporting no space, you may want to do some Google searches for that condition.  The first couple of searches I tried turned up things like checking whether you are running out of inodes (df -i), or the possibility that some large files were deleted from the filesystem but a process is still holding a reference to them, so the underlying OS has not given back the disk blocks yet.
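For the inode check, something like this on the Linux box (assuming the affected filesystem is /home, as in the df output above):

df -i /home          # an IUse% near 100 means the inodes are exhausted even if blocks are free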

 

If you have another filesystem on the same system with sufficient space, I would suggest trying the operation in that filesystem.  You could also try rebooting the Linux system to eliminate the possibility of a rogue process holding filesystem space hostage for files that have already been deleted.

 

Dave



I work at HPE
[Any personal opinions expressed are mine, and not official statements on behalf of Hewlett Packard Enterprise]
Dennis Handly
Acclaimed Contributor

Re: NFS file transfer limitation

> some large files were deleted from the filesystem but a process is still holding a reference

 

Except df(1) thinks there is space.  I would think that even if Linux is "better" than UNIX, it wouldn't try improving on the concept of what's free.

Of course you need to run df(1) at the same time as the error.
 

>You could also try rebooting the Linux system to eliminate the possibility of a rogue process

 

lsof could be used to find those processes and save you a reboot.
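A sketch of that check (again assuming the affected filesystem is /home):

# On the Linux server: list open files that have been deleted but still hold space
lsof +L1 /home               # files on /home with a link count of zero (deleted but still open)
# or, more broadly:
lsof | grep -i deleted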