Old 11-19-2010, 08:16 PM
"Michael D. Berger"
 
Fail Transfer of Large Files

On my intranet, I sometimes transfer large files, about 4G,
to an old CentOS box that I use for a web server. I transfer
with ftp or sftp. Usually, before the file is complete, the
transfer "stalls". At that point, ping from the destination box
to the router fails. I then deactivate the net interface on the
destination box and then activate it. Ping is then successful,
and the transfer is completed. The transferred file is correct,
as verified with sha1sum.

All connections are via cat6 wire.

So what do you think? Should I try changing the net card?
Any tests to run? Any other suggestions?

Thanks for your help.

Mike.



_______________________________________________
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
 
Old 11-20-2010, 07:01 AM
Ben McGinnes
 
Fail Transfer of Large Files

On 20/11/10 8:16 AM, Michael D. Berger wrote:
> On my intranet, I sometimes transfer large files, about 4G,
> to an old CentOS box that I use for a web server. I transfer
> with ftp or sftp.

Have you tried scp or rsync?


Regards,
Ben

 
Old 11-20-2010, 01:24 PM
Kwan Lowe
 
Fail Transfer of Large Files

On Fri, Nov 19, 2010 at 4:16 PM, Michael D. Berger
<m_d_berger_1900@yahoo.com> wrote:
> On my intranet, I sometimes transfer large files, about 4G,
> to an old CentOS box that I use for a web server. I transfer
> with ftp or sftp. Usually, before the file is complete, the
> transfer "stalls". At that point, ping from the destination box
> to the router fails. I then deactivate the net interface on the
> destination box and then activate it. Ping is then successful,
> and the transfer is completed. The transferred file is correct,
> as verified with sha1sum.
>
> All connections are via cat6 wire.
>
> So what do you think? Should I try changing the net card?
> Any tests to run? Any other suggestions?
>
It could be buffering the transfer and then writing it out. I notice
this on a small Xen image I use as a file server.
 
Old 11-20-2010, 04:35 PM
Les Mikesell
 
Fail Transfer of Large Files

On 11/19/10 3:16 PM, Michael D. Berger wrote:
> On my intranet, I sometimes transfer large files, about 4G,
> to an old CentOS box that I use for a web server. I transfer
> with ftp or sftp. Usually, before the file is complete, the
> transfer "stalls". At that point, ping from the destination box
> to the router fails. I then deactivate the net interface on the
> destination box and then activate it. Ping is then successful,
> and the transfer is completed. The transferred file is correct,
> as verified with sha1sum.
>
> All connections are via cat6 wire.
>
> So what do you think? Should I try changing the net card?
> Any tests to run? Any other suggestions?

I haven't seen anything like that, at least in many years so it probably is
hardware related - but make sure your software is up to date. As a workaround,
you might try using rsync with the --bwlimit option to limit the speed of the
transfer - and the -P option so you can restart a failed transfer from the point
it stalled on the last attempt.
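The rsync workaround might look like the line below; the host name and paths here are placeholders, not names from the thread, and --bwlimit takes a rate in KB/s:

```shell
# Sketch of the rsync workaround: throttle the transfer and keep
# partial files so an interrupted run can resume where it stopped.
# "webserver" and the paths are placeholders for your own setup.
rsync --bwlimit=5000 -P bigfile.iso user@webserver:/var/www/files/
# -P is shorthand for --partial --progress; re-running the same
# command after a stall picks up from the partial file.
```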

--
Les Mikesell
lesmikesell@gmail.com


 
Old 11-20-2010, 04:38 PM
Timo Schoeler
 
Fail Transfer of Large Files

On 11/20/2010 06:35 PM, Les Mikesell wrote:
> On 11/19/10 3:16 PM, Michael D. Berger wrote:
>> On my intranet, I sometimes transfer large files, about 4G,
>> to an old CentOS box that I use for a web server. I transfer
>> with ftp or sftp. Usually, before the file is complete, the
>> transfer "stalls". At that point, ping from the destination box
>> to the router fails. I then deactivate the net interface on the
>> destination box and then activate it. Ping is then successful,
>> and the transfer is completed. The transferred file is correct,
>> as verified with sha1sum.
>>
>> All connections are via cat6 wire.
>>
>> So what do you think? Should I try changing the net card?
>> Any tests to run? Any other suggestions?
>
> I haven't seen anything like that, at least in many years so it probably is
> hardware related - but make sure your software is up to date. As a workaround,
> you might try using rsync with the --bwlimit option to limit the speed of the
> transfer - and the -P option so you can restart a failed transfer from the point
> it stalled on the last attempt.

If you have a managed switch, check its counters for errors (CRC,
giants, runts, etc) and check whether speed and duplex settings are
appropriate for all machines connected.

You should also check whether all devices involved are able to handle
the MTU you use. I had a similar issue recently with Cisco gear that
wouldn't play with the MTUs I had set on some of my machines.
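One way to check the MTU point is a "do not fragment" ping just under and over the payload limit. The router address below is a placeholder, and the sketch assumes a standard 1500-byte MTU (1500 minus 20 bytes of IP header minus 8 bytes of ICMP header leaves 1472 bytes of payload):

```shell
# Probe the path MTU with non-fragmentable pings (Linux iputils).
# 192.168.1.1 is a placeholder for your own router.
ping -c 3 -M do -s 1472 192.168.1.1   # should succeed on a 1500-MTU path
ping -c 3 -M do -s 1473 192.168.1.1   # should fail if the MTU is 1500
```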

Cheers,

Timo
 
Old 11-20-2010, 11:17 PM
Jay Leafey
 
Fail Transfer of Large Files

Les Mikesell wrote:
> On 11/19/10 3:16 PM, Michael D. Berger wrote:
>> On my intranet, I sometimes transfer large files, about 4G,
>> to an old CentOS box that I use for a web server. I transfer
>> with ftp or sftp. Usually, before the file is complete, the
>> transfer "stalls". At that point, ping from the destination box
>> to the router fails. I then deactivate the net interface on the
>> destination box and then activate it. Ping is then successful,
>> and the transfer is completed. The transferred file is correct,
>> as verified with sha1sum.
>>
>> All connections are via cat6 wire.
>>
>> So what do you think? Should I try changing the net card?
>> Any tests to run? Any other suggestions?
>
> I haven't seen anything like that, at least in many years so it probably is
> hardware related - but make sure your software is up to date. As a workaround,
> you might try using rsync with the --bwlimit option to limit the speed of the
> transfer - and the -P option so you can restart a failed transfer from the point
> it stalled on the last attempt.

This does ring a bell, but the circumstances were a bit different. In
our case we were transferring large files between "home" and a remote
site. SFTP/SCP transfers were stalling part-way through in an
unpredictable manner. It turned out to be a bug in the selective
acknowledgment functionality in the TCP stack. Short story, adding the
following line to /etc/sysctl.conf fixed the issue:

net.ipv4.tcp_sack = 0

Of course, you can set it on-the-fly using the sysctl command:

sysctl -w net.ipv4.tcp_sack=0

It helped in our case, no way of telling if it will help you. As usual,
your mileage may vary.

--
Jay Leafey - jay.leafey@mindless.com
Memphis, TN
 
Old 11-21-2010, 02:28 AM
"Michael D. Berger"
 
Fail Transfer of Large Files

On Sat, 20 Nov 2010 18:17:23 -0600, Jay Leafey wrote:

> Les Mikesell wrote:
[...]
> This does ring a bell, but the circumstances were a bit different. In
> our case we were transferring large files between "home" and a remote
> site. SFTP/SCP transfers were stalling part-way through in an
> unpredictable manner. It turned out to be a bug in the selective
> acknowledgment functionality in the TCP stack. Short story, adding the
> following line to /etc/sysctl.conf fixed the issue:
>
>> net.ipv4.tcp_sack = 0
>
> Of course, you can set it on-the-fly using the sysctl command:
>
>> sysctl -w net.ipv4.tcp_sack=0
>
> It helped in our case, no way of telling if it will help you. As usual,
> your mileage may vary.

Googling around, I get the impression that disabling SACK might
lead to other problems. Any thoughts on this?

Thanks,
Mike.

 
Old 11-21-2010, 10:47 AM
Nico Kadel-Garcia
 
Fail Transfer of Large Files

On Sat, Nov 20, 2010 at 10:28 PM, Michael D. Berger
<m_d_berger_1900@yahoo.com> wrote:
> On Sat, 20 Nov 2010 18:17:23 -0600, Jay Leafey wrote:
>
>> Les Mikesell wrote:
> [...]
>> This does ring a bell, but the circumstances were a bit different. In
>> our case we were transferring large files between "home" and a remote
>> site. SFTP/SCP transfers were stalling part-way through in an
>> unpredictable manner. It turned out to be a bug in the selective
>> acknowledgment functionality in the TCP stack. Short story, adding the
>> following line to /etc/sysctl.conf fixed the issue:
>>
>>> net.ipv4.tcp_sack = 0
>>
>> Of course, you can set it on-the-fly using the sysctl command:
>>
>>> sysctl -w net.ipv4.tcp_sack=0
>>
>> It helped in our case, no way of telling if it will help you. As usual,
>> your mileage may vary.
>
> Googling around, I get the impression that disabling SACK might
> lead to other problems. Any thoughts on this?
>
> Thanks,
> Mike.

From decades of experience in many environments, I can tell you that
reliable transfer of large files with protocols that require
uninterrupted transfer is awkward. The larger the file, the larger the
chance that any interruption at any point between the repository and
the client will break things, and with a lot of ISPs over-subscribing
their available bandwidth, such large transfers are, by their nature,
unreliable.

Consider fragmenting the large file: BitTorrent transfers do this
automatically; the old "shar" and "split" tools also work well, and
tools like "rsync" and the lftp "mirror" utility are very good at
mirroring directories of such split-up contents quite efficiently.
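The split-and-reassemble approach can be sketched with stock coreutils; every file name below is made up for the demo, and sha1sum plays the same verification role it did in the original post:

```shell
# Make a 4 MB demo file, split it into 1 MB chunks, reassemble,
# and confirm the checksum survives the round trip. In practice
# you would transfer the big.bin.part.* chunks individually.
dd if=/dev/urandom of=big.bin bs=1M count=4 2>/dev/null
split -b 1M -d big.bin big.bin.part.    # -> big.bin.part.00 .. .03
cat big.bin.part.* > rejoined.bin       # reassemble on the far side
sum1=$(sha1sum < big.bin)
sum2=$(sha1sum < rejoined.bin)
[ "$sum1" = "$sum2" ] && echo "checksums match"
```

Re-sending only the chunks that failed to arrive is what makes this more robust than one monolithic transfer.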
 
Old 11-21-2010, 02:02 PM
"Michael D. Berger"
 
Fail Transfer of Large Files

On Sun, 21 Nov 2010 06:47:04 -0500, Nico Kadel-Garcia wrote:

> On Sat, Nov 20, 2010 at 10:28 PM, Michael D. Berger
> <m_d_berger_1900@yahoo.com> wrote:
[...]
>
> From decades of experience in many environments, I can tell you that
> reliable transfer of large files with protocols that require
> uninterrupted transfer is awkward. The larger the file, the larger the
> chance that any interruption at any point between the repository and the
> client will break things, and with a lot of ISPs over-subscribing their
> available bandwidth, such large transfers are, by their nature,
> unreliable.
>
> Consider fragmenting the large file: BitTorrent transfers do this
> automatically; the old "shar" and "split" tools also work well, and
> tools like "rsync" and the lftp "mirror" utility are very good at
> mirroring directories of such split-up contents quite efficiently.

What, then, is the largest file size that you would consider
appropriate?

Thanks,
Mike.


 
Old 11-21-2010, 03:49 PM
Nico Kadel-Garcia
 
Fail Transfer of Large Files

On Sun, Nov 21, 2010 at 10:02 AM, Michael D. Berger
<m_d_berger_1900@yahoo.com> wrote:
> On Sun, 21 Nov 2010 06:47:04 -0500, Nico Kadel-Garcia wrote:
>
>> On Sat, Nov 20, 2010 at 10:28 PM, Michael D. Berger
>> <m_d_berger_1900@yahoo.com> wrote:
> [...]
>>
>> From decades of experience in many environments, I can tell you that
>> reliable transfer of large files with protocols that require
>> uninterrupted transfer is awkward. The larger the file, the larger the
>> chance that any interruption at any point between the repository and the
>> client will break things, and with a lot of ISPs over-subscribing their
>> available bandwidth, such large transfers are, by their nature,
>> unreliable.
>>
>> Consider fragmenting the large file: BitTorrent transfers do this
>> automatically; the old "shar" and "split" tools also work well, and
>> tools like "rsync" and the lftp "mirror" utility are very good at
>> mirroring directories of such split-up contents quite efficiently.
>
> What, then, is the largest file size that you would consider
> appropriate?

Good question. I don't have a hard rule of thumb, but I'd estimate
that any one file that takes more than 10 minutes to transfer is too
big. So transferring CD images over a high bandwidth local connection
at 1 MByte/second, sure, no problem! But for DSL that may have only 80
KB/second, 80 KB/second * 60 seconds/minute * 10 minutes = 48 Meg. So
splitting a CD down to lumps of, say, 50 Megs seems reasonable.
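The 10-minute budget above can be checked with a line of shell arithmetic; the 80 KB/s rate is the DSL assumption from the post, not a measurement:

```shell
# Chunk-size budget = link rate (KB/s) x time budget (seconds).
rate_kbps=80                  # assumed DSL rate from the post
budget_s=$((10 * 60))         # the 10-minute rule of thumb
echo "$((rate_kbps * budget_s)) KB"   # prints "48000 KB", i.e. ~48 MB
```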

If you look at how BitTorrent works, and at the old "shar" utilities used
for sending binaries as compressed text lumps over Usenet and email,
you'll see what I mean. Even commercial tools from the Windows world
like WinRAR do something like this.

>
> Thanks,
> Mike.
>
>