Linux Archive > Debian > Debian User

Old 06-22-2012, 11:57 AM
Bartek Krawczyk
 
how to increase throughput of LAN to 1GB

2012/6/22 Muhammad Yousuf Khan <sirtcp@gmail.com>:
>> Try using -u or f.i. -w 2M with TCP.
>> But your results are quite good already.
>
> UDP only
>
> root@nasbox:/# iperf -c 10.X.X.7 -u -r
> ------------------------------------------------------------
> Server listening on UDP port 5001
> Receiving 1470 byte datagrams
> UDP buffer size:  110 KByte (default)
> ------------------------------------------------------------
> ------------------------------------------------------------
> Client connecting to 10.X.X.7, UDP port 5001
> Sending 1470 byte datagrams
> UDP buffer size:  110 KByte (default)
> ------------------------------------------------------------
> [  4] local 10.X.X.15 port 34677 connected with 10.X.X.7 port 5001
> [ ID] Interval       Transfer     Bandwidth
> [  4]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec
> [  4] Sent 893 datagrams
> [  4] Server Report:
> [  4]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec   0.028 ms    0/  893 (0%)
> [  3] local 10.X.X.15 port 5001 connected with 10.X.X.7 port 44331
> [  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec   0.002 ms    0/  893 (0%)
>
>
>
> lion:/mnt/vmbk# iperf -s -u
> ------------------------------------------------------------
> Server listening on UDP port 5001
> Receiving 1470 byte datagrams
> UDP buffer size:  130 KByte (default)
> ------------------------------------------------------------
> [  3] local 10.X.X.7 port 5001 connected with 10.X.X.15 port 34677
> [ ID] Interval       Transfer     Bandwidth        Jitter   Lost/Total Datagrams
> [  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec   0.028 ms    0/  893 (0%)
> ------------------------------------------------------------
> Client connecting to 10.X.X.15, UDP port 5001
> Sending 1470 byte datagrams
> UDP buffer size:  130 KByte (default)
> ------------------------------------------------------------
> [  3] local 10.X.X.7 port 44331 connected with 10.X.X.15 port 5001
> [  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec
> [  3] Sent 893 datagrams
> [  3] Server Report:
> [  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec   0.001 ms    0/  893 (0%)
>

Use -b 1024M with -u. Forgot about that.


--
Bartek Krawczyk


--
To UNSUBSCRIBE, email to debian-user-REQUEST@lists.debian.org
with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
Archive: http://lists.debian.org/CAFp_H4tyyi8za4FSPYLY=4oL7SZAsafNSY32VLUvKXvUG20v1Q@mail.gmail.com
 
Old 06-22-2012, 11:58 AM
Muhammad Yousuf Khan
 
how to increase throughput of LAN to 1GB

TCP Result

iperf -c 10.X.X.7 -r -w 2M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 256 KByte (WARNING: requested 2.00 MByte)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.X.X.7, TCP port 5001
TCP window size: 256 KByte (WARNING: requested 2.00 MByte)
------------------------------------------------------------
[ 5] local 10.X.X.15 port 33832 connected with 10.X.X.7 port 5001
[ ID] Interval Transfer Bandwidth
[ 5] 0.0-10.0 sec 754 MBytes 633 Mbits/sec
[ 4] local 10.X.X.15 port 5001 connected with 10.X.X.7 port 56931
[ 4] 0.0-10.0 sec 856 MBytes 718 Mbits/sec




lion:/mnt/vmbk# iperf -s -w 2M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 256 KByte (WARNING: requested 2.00 MByte)
------------------------------------------------------------
[ 4] local 10.X.X.7 port 5001 connected with 10.X.X.15 port 33832
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 754 MBytes 632 Mbits/sec
------------------------------------------------------------
Client connecting to 10.X.X.15, TCP port 5001
TCP window size: 256 KByte (WARNING: requested 2.00 MByte)
------------------------------------------------------------
[ 4] local 10.X.X.7 port 56931 connected with 10.X.X.15 port 5001
Waiting for server threads to complete. Interrupt again to force quit.
[ 4] 0.0-10.0 sec 856 MBytes 718 Mbits/sec



> Try using -u or f.i. -w 2M with TCP.
> But your results are quite good already.
>
> Regards,
> --
> Bartek Krawczyk
>


--
Archive: http://lists.debian.org/CAGWVfMk124KKW5d37n=G2F56Qhphvsq2JsS5MOW1RYjC278nwQ@mail.gmail.com
 
Old 06-22-2012, 12:08 PM
Muhammad Yousuf Khan
 
how to increase throughput of LAN to 1GB

root@nasbox:/# iperf -c 10.X.X.7 -r -u -b 1024M
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
------------------------------------------------------------
Client connecting to 10.X.X.7, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 110 KByte (default)
------------------------------------------------------------
[ 4] local 10.X.X.15 port 59300 connected with 10.X.X.7 port 5001
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 777 MBytes 652 Mbits/sec
[ 4] Sent 554058 datagrams
[ 4] Server Report:
[ 4] 0.0-10.0 sec 773 MBytes 648 Mbits/sec 0.057 ms 2895/554057 (0.52%)
[ 4] 0.0-10.0 sec 1 datagrams received out-of-order
[ 3] local 10.X.X.15 port 5001 connected with 10.X.X.7 port 40630
[ 3] 0.0-10.0 sec 585 MBytes 490 Mbits/sec 0.053 ms 393567/810629 (49%)
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order


lion:/mnt/vmbk# iperf -s -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 130 KByte (default)
------------------------------------------------------------
[ 3] local 10.X.X.7 port 5001 connected with 10.X.X.15 port 59300
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0-10.0 sec 773 MBytes 648 Mbits/sec 0.058 ms 2895/554057 (0.52%)
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order
------------------------------------------------------------
Client connecting to 10.X.X.15, UDP port 5001
Sending 1470 byte datagrams
UDP buffer size: 130 KByte (default)
------------------------------------------------------------
[ 3] local 10.X.X.7 port 40630 connected with 10.X.X.15 port 5001
[ 3] 0.0-10.0 sec 1.11 GBytes 953 Mbits/sec
[ 3] Sent 810631 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 585 MBytes 490 Mbits/sec 0.053 ms 393567/810629 (49%)
[ 3] 0.0-10.0 sec 1 datagrams received out-of-order



On Fri, Jun 22, 2012 at 4:57 PM, Bartek Krawczyk
<bbartlomiej.mail@gmail.com> wrote:
> 2012/6/22 Muhammad Yousuf Khan <sirtcp@gmail.com>:
>> [...]
>
> Use -b 1024M with -u. Forgot about that.
>
> --
> Bartek Krawczyk


--
Archive: http://lists.debian.org/CAGWVfMmViuijMgUghB35noswQKOxcWZBWAnAgmmO=4vw7PZouw@mail.gmail.com
 
Old 06-22-2012, 12:14 PM
Bartek Krawczyk
 
how to increase throughput of LAN to 1GB

2012/6/22 Muhammad Yousuf Khan <sirtcp@gmail.com>:
> root@nasbox:/# iperf -c 10.X.X.7 -r -u -b 1024M
> [...]
> [  4]  0.0-10.0 sec   777 MBytes   652 Mbits/sec
> [  4] Server Report:
> [  4]  0.0-10.0 sec   773 MBytes   648 Mbits/sec   0.057 ms 2895/554057 (0.52%)
> [...]
> [  3]  0.0-10.0 sec  1.11 GBytes   953 Mbits/sec
> [  3] Server Report:
> [  3]  0.0-10.0 sec   585 MBytes   490 Mbits/sec  0.053 ms 393567/810629 (49%)

So you can get about 700 Mbps of TCP traffic and, ideally, 953 Mbit/s of
UDP traffic. Try tuning your server-client applications and the sysctl
parameters I posted in my first reply.

Regards,
--
Bartek Krawczyk


--
Archive: http://lists.debian.org/CAFp_H4vWmEJtnQ8ED3ZfFyfnb+X6qS4S1GmsUoprQnD5qt+jTg@mail.gmail.com
 
Old 06-22-2012, 12:27 PM
Muhammad Yousuf Khan
 
how to increase throughput of LAN to 1GB

>[ 3] 0.0-10.0 sec 1.11 GBytes 953 Mbits/sec
>[ 3] 0.0-10.0 sec 585 MBytes 490 Mbits/sec 0.053 ms 393567/810629

Can you please explain what these two lines in the output mean?
I can see the values themselves, but I don't understand what they
represent, e.g. what is 1.11 GBytes, what is 953, and so on.


> So you can get about 700 Mbps of TCP traffic and, ideally, 953 Mbit/s of
> UDP traffic. Try tuning your server-client applications and the sysctl
> parameters I posted in my first reply.
>

OK, I'll update you accordingly.

Thanks,

> Regards,
> --
> Bartek Krawczyk
>
>
> --
> Bartek Krawczyk


--
Archive: http://lists.debian.org/CAGWVfMn85AL8jdC4NYhjexcaYDikEdL-UDrDXcVukW-KX7chRw@mail.gmail.com
 
Old 06-22-2012, 12:32 PM
Bartek Krawczyk
 
how to increase throughput of LAN to 1GB

2012/6/22 Muhammad Yousuf Khan <sirtcp@gmail.com>:
>>[  3]  0.0-10.0 sec  1.11 GBytes   953 Mbits/sec
>>[  3]  0.0-10.0 sec   585 MBytes   490 Mbits/sec  0.053 ms 393567/810629
>
> can you please explain what these two lines mean in the output.
> i can understand the values but i can not understand it like what is
> 1.11GBytes and what is 953 and etc.

Those lines mean that your transfer wasn't stable (due to networking
or your hardware). In the first line you can see that the test took 10 s
(0.0-10.0 sec) and iperf transferred 1.11 GBytes, which works out to
about 953 Mbit/s. The second line is the same test run in the opposite
direction (because of the "-r" option in iperf). It can tell you that
one PC is sending or receiving data more slowly than the other, or
perhaps that your network simply can't sustain 1 Gbps throughput for
longer periods.

To get more reliable results, use -t 60 to test for 60 seconds; with
-i you can change the report interval, e.g. to "1".
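For anyone checking the arithmetic here: iperf reports byte counts in binary units (GBytes means GiB) but bit rates in decimal units, so 1.11 GBytes moved in 10 seconds converts like this (a quick sketch):

```shell
# 1.11 GBytes (binary) in 10 s, expressed in decimal Mbit/s:
#   bytes * 8 bits/byte / seconds / 1,000,000 bits-per-Mbit
awk 'BEGIN { printf "%.0f\n", 1.11 * 1024^3 * 8 / 10 / 1000000 }'
# prints 953
```

The same conversion applied to the 585 MBytes on the second line gives roughly its 490 Mbit/s figure.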

--
Bartek Krawczyk


--
Archive: http://lists.debian.org/CAFp_H4sfMrGJ2dnOPpP0KXpF8Va-ZTxg3TJgsj84L8AegVqpOg@mail.gmail.com
 
Old 06-22-2012, 12:44 PM
Muhammad Yousuf Khan
 
how to increase throughput of LAN to 1GB

On Fri, Jun 22, 2012 at 5:32 PM, Bartek Krawczyk
<bbartlomiej.mail@gmail.com> wrote:
> 2012/6/22 Muhammad Yousuf Khan <sirtcp@gmail.com>:
> [...]
>
> To get more reliable results use -t 60 to test for 60 seconds and with
> -i you can change the reports interval i.e. to "1".
>

>net.core.rmem_max = 16777216
>net.core.wmem_max = 16777216
>net.ipv4.tcp_rmem = 4096 87380 16777216
>net.ipv4.tcp_wmem = 4096 65536 16777216
>net.ipv4.tcp_window_scaling = 1
>net.ipv4.tcp_timestamps = 1
>net.ipv4.tcp_sack = 1
>net.ipv4.tcp_rfc1337 = 1

OK, I am pasting the above lines into /etc/sysctl.conf without reading
much, since I am testing; I'll read the details later and update you
with the results shortly.
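For reference, pasting values into /etc/sysctl.conf alone only takes effect at the next boot; something like the following (a sketch, assuming root) loads them immediately:

```shell
# Re-read /etc/sysctl.conf after adding the tuning values:
sysctl -p

# Or set a single value on the fly, then read it back to verify:
sysctl -w net.core.rmem_max=16777216
sysctl -n net.core.rmem_max
```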

Thanks,

> --
> Bartek Krawczyk


--
Archive: http://lists.debian.org/CAGWVfM=yxoJOQLvFemPzgvp==bZeqbycUQc3KODjAKxJL3o50Q@mail.gmail.com
 
Old 06-22-2012, 10:22 PM
Stan Hoeppner
 
how to increase throughput of LAN to 1GB

On 6/22/2012 5:45 AM, Muhammad Yousuf Khan wrote:

> [ ID] Interval Transfer Bandwidth
> [ 5] 0.0-10.0 sec 744 MBytes 624 Mbits/sec
> [ 4] 0.0-10.0 sec 876 MBytes 734 Mbits/sec

> [ ID] Interval Transfer Bandwidth
> [ 4] 0.0-10.0 sec 744 MBytes 623 Mbits/sec
> [ 4] 0.0-10.0 sec 876 MBytes 735 Mbits/sec

This shows sustained short duration transfer rates of 78MB/s and 91MB/s.
That's not bad, but can be higher. With good NICs, proper TCP tuning,
and jumbo frames, you should be able to hit a theoretical peak of around
117MB/s, or 936Mb/s. That's about the limit after all the protocol
overhead. And this assumes your PCI/e bus, mobo chipset, and host CPU
are up to the task.
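Those ceiling figures can be sanity-checked from per-frame overhead. With a 1500-byte MTU and TCP timestamps enabled, each segment carries 1448 payload bytes while the wire carries 1538 bytes per frame (payload + 32 bytes TCP header with options + 20 IP + 14 Ethernet + 4 FCS + 8 preamble + 12 inter-frame gap); the exact figure shifts a little with which options are negotiated, which is why quoted peaks range from roughly 936 to 949 Mbit/s. A rough worked version:

```shell
# TCP goodput ceiling on gigabit Ethernet, MTU 1500, timestamps on:
awk 'BEGIN {
    mbits = 1448 / 1538 * 1000   # payload fraction of each wire frame
    printf "%.0f Mbit/s (~%.0f MB/s)\n", mbits, mbits / 8
}'
# prints 941 Mbit/s (~118 MB/s)
```

Jumbo frames raise the payload fraction to about 99%, though real systems usually land a bit below these theoretical numbers.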

These test numbers are a bit meaningless in real-world use, however, as
most of your iSCSI/CIFS/etc traffic will comprise concurrent small IOs,
transactional in nature, as is the case with the vast majority of server
workloads.

So instead of concentrating on your raw point-to-point GbE bandwidth, you
need to concentrate on the IO latency of your iSCSI and virtualization
servers. Maximizing the random IO performance of these systems will do
far more for overall network performance than spending countless hours
trying to maximize point-to-point GbE throughput.

One of the few applications requiring long duration throughput is
network based backup. And even in this case you're not streaming large
files, but typically many small files. So again, system latency is a
bigger factor than throughput.

And in the event you do find yourself transferring very large files on a
regular basis, and need max throughput, it's most often much easier to
attain that throughput using LACP with two NICs than to spend days/weeks
attempting to maximize the performance of a single NIC.
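For anyone wanting to try the LACP route on Debian, a minimal bonding sketch (interface names and addresses are illustrative; assumes the ifenslave package is installed and the switch ports are configured for 802.3ad):

```text
# /etc/network/interfaces fragment
auto bond0
iface bond0 inet static
    address 10.X.X.15
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4
```

Keep in mind that LACP hashes per flow, so a single TCP stream still rides one physical link; the gain shows up with multiple concurrent transfers.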

--
Stan


--
Archive: http://lists.debian.org/4FE4F032.2020809@hardwarefreak.com
 
Old 06-25-2012, 08:08 AM
Muhammad Yousuf Khan
 
how to increase throughput of LAN to 1GB

On Sat, Jun 23, 2012 at 3:22 AM, Stan Hoeppner <stan@hardwarefreak.com> wrote:
> On 6/22/2012 5:45 AM, Muhammad Yousuf Khan wrote:
>
> [...]
> One of the few applications requiring long duration throughput is

Agreed, KVM backup is one of them. I face this problem often, so as a
workaround I back up all the VMs locally and then scp them to other
network storage.


> network based backup. And even in this case you're not streaming large
> files, but typically many small files. So again, system latency is a
> bigger factor than throughput.
>
> And in the event you do find yourself transferring very large files on a
> regular basis, and need max throughput, it's most often much easier to
> attain that throughput using LACP with two NICs than to spend days/weeks
> attempting to maximize the performance of a single NIC.
>
> --
> Stan


--
Archive: http://lists.debian.org/CAGWVfMnVpd80sA8J2JLM1GdK48JKAn--V9Cw0aGSTbOqD6XvSQ@mail.gmail.com
 
Old 06-25-2012, 05:06 PM
Stan Hoeppner
 
how to increase throughput of LAN to 1GB

On 6/25/2012 3:08 AM, Muhammad Yousuf Khan wrote:

> Agreed, KVM backup is one of them. I face this problem often, so as a
> workaround I back up all the VMs locally and then scp them to other
> network storage.

scp is not the proper application for this due to overhead, and the fact
that VM image files don't have security issues requiring encryption on
the wire. You should be using ftp or some other file transfer program
that doesn't use encryption.
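As an illustration of an unencrypted transfer (a sketch only; the host name, port, and paths are made up, and nc option syntax differs between netcat variants), tar over a raw TCP pipe avoids the scp cipher overhead entirely:

```shell
# On the storage host: listen on TCP 9000 and unpack the incoming stream
nc -l -p 9000 | tar -xf -

# On the KVM host: stream the backup directory over plain TCP
tar -cf - -C /var/backups vms | nc storagehost 9000
```

An rsync daemon (rsync:// rather than rsync over ssh) is another unencrypted option, and adds resumability.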

With VM images you'll need to snapshot them anyway due to state. And I
suppose that you've tried snapshotting directly to a Samba share with
poor performance. Snapping to an iSCSI target should help here,
assuming you do a good job with your self-made storage host exposing the
LUNs.

--
Stan


--
Archive: http://lists.debian.org/4FE89A82.40800@hardwarefreak.com
 
