Understanding tc qdisc and iperf
I'm trying to limit bandwidth with tc and check the results with iperf. I started like this:
# iperf -c 192.168.2.1
------------------------------------------------------------
Client connecting to 192.168.2.1, TCP port 5001
TCP window size: 23.5 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.2.7 port 35213 connected with 192.168.2.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 830 MBytes 696 Mbits/sec
The two instances are directly connected through Ethernet.
I then set up an htb qdisc with one default class to limit bandwidth to 1mbit/sec:
# tc qdisc add dev bond0 root handle 1: htb default 12
# tc class add dev bond0 parent 1: classid 1:12 htb rate 1mbit
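For reference, the resulting hierarchy and its counters can be inspected with the standard tc show commands:
# tc -s qdisc show dev bond0
# tc -s class show dev bond0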
But I don't get what I expect:
# iperf -c 192.168.2.1
------------------------------------------------------------
Client connecting to 192.168.2.1, TCP port 5001
TCP window size: 23.5 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.2.7 port 35217 connected with 192.168.2.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-12.8 sec 768 KBytes 491 Kbits/sec
If I double the rate, the measured bandwidth does not change. What am I missing? Why doesn't the measured bandwidth correspond to the 1mbit from the rate parameter? What parameters do I need to set to limit the bandwidth to an exact given rate?
However, the man page says that tbf should be the qdisc of choice for this task:
The Token Bucket Filter is suited for slowing traffic down to a precisely configured rate. Scales well to large bandwidths.
tbf requires the parameters rate, burst and (limit | latency). So I tried the following without understanding how burst and (limit | latency) affect the available bandwidth:
# tc qdisc add dev bond0 root tbf rate 1mbit limit 10k burst 10k
This got me a measured bandwidth of 113 Kbits/sec. Playing around with those parameters didn't change that much until I noticed that adding a value for mtu changes things drastically:
# tc qdisc add dev bond0 root tbf rate 1mbit limit 10k burst 10k mtu 5000
resulted in a measured bandwidth of 1.00 Mbits/sec.
What parameters would I need to set to limit the bandwidth to an exact given rate?
Should I use the htb or tbf queueing discipline for this?
EDIT:
Based on these resources, I have made some tests:
- https://help.ubuntu.com/community/UbuntuBonding
- https://help.ubuntu.com/community/LinkAggregation
- /usr/share/doc/ifenslave-2.6/README.Debian.gz
- http://lartc.org/
I have tried the following setups.
On a Physical Machine
/etc/network/interfaces:
auto lo
iface lo inet loopback
auto br0
iface br0 inet dhcp
bridge_ports eth0
Measurement with iperf:
# tc qdisc add dev eth0 root handle 1: htb default 12
# tc class add dev eth0 parent 1: classid 1:12 htb rate 1mbit
# iperf -c 192.168.2.1
------------------------------------------------------------
Client connecting to 192.168.2.1, TCP port 5001
TCP window size: 23.5 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.2.4 port 51804 connected with 192.168.2.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-11.9 sec 1.62 MBytes 1.14 Mbits/sec
Whereas the iperf server calculated a different bandwidth:
[ 4] local 192.168.2.1 port 5001 connected with 192.168.2.4 port 51804
[ 4] 0.0-13.7 sec 1.62 MBytes 993 Kbits/sec
On a Virtual Machine without Bonding
/etc/network/interfaces:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
Measurement with iperf:
# tc qdisc add dev eth0 root handle 1: htb default 12
# tc class add dev eth0 parent 1: classid 1:12 htb rate 1mbit
# iperf -c 192.168.2.1
------------------------------------------------------------
Client connecting to 192.168.2.1, TCP port 5001
TCP window size: 23.5 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.2.7 port 34347 connected with 192.168.2.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-11.3 sec 1.62 MBytes 1.21 Mbits/sec
Whereas the iperf server calculated a different bandwidth:
[ 4] local 192.168.2.1 port 5001 connected with 192.168.2.7 port 34347
[ 4] 0.0-14.0 sec 1.62 MBytes 972 Kbits/sec
On a Virtual Machine with Bonding (tc configured on eth0)
/etc/network/interfaces:
auto lo
iface lo inet loopback
auto eth0
allow-bond0 eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0 eth1
auto eth1
allow-bond0 eth1
iface eth1 inet manual
bond-master bond0
bond-primary eth0 eth1
auto bond0
iface bond0 inet dhcp
bond-slaves none
bond-mode 1
# bond-arp-interval 250
# bond-arp-ip-target 192.168.2.1
# bond-arp-validate 3
Measurement with iperf:
# tc qdisc add dev eth0 root handle 1: htb default 12
# tc class add dev eth0 parent 1: classid 1:12 htb rate 1mbit
# iperf -c 192.168.2.1
------------------------------------------------------------
Client connecting to 192.168.2.1, TCP port 5001
TCP window size: 23.5 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.2.9 port 49054 connected with 192.168.2.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-11.9 sec 1.62 MBytes 1.14 Mbits/sec
Whereas the iperf server calculated a different bandwidth:
[ 4] local 192.168.2.1 port 5001 connected with 192.168.2.9 port 49054
[ 4] 0.0-14.0 sec 1.62 MBytes 972 Kbits/sec
On a Virtual Machine with Bonding (tc configured on bond0)
/etc/network/interfaces:
auto lo
iface lo inet loopback
auto eth0
allow-bond0 eth0
iface eth0 inet manual
bond-master bond0
bond-primary eth0 eth1
auto eth1
allow-bond0 eth1
iface eth1 inet manual
bond-master bond0
bond-primary eth0 eth1
auto bond0
iface bond0 inet dhcp
bond-slaves none
bond-mode 1
# bond-arp-interval 250
# bond-arp-ip-target 192.168.2.1
# bond-arp-validate 3
Measurement with iperf:
# tc qdisc add dev bond0 root handle 1: htb default 12
# tc class add dev bond0 parent 1: classid 1:12 htb rate 1mbit
# iperf -c 192.168.2.1
------------------------------------------------------------
Client connecting to 192.168.2.1, TCP port 5001
TCP window size: 23.5 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.2.9 port 49055 connected with 192.168.2.1 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-13.3 sec 768 KBytes 475 Kbits/sec
Whereas the iperf server calculated a different bandwidth:
[ 4] local 192.168.2.1 port 5001 connected with 192.168.2.9 port 49055
[ 4] 0.0-14.1 sec 768 KBytes 446 Kbits/sec
The result does not change if I remove eth1 (the passive interface) from the bond.
Conclusion
Traffic Control on a bond interface does not work, or at least not as expected. I will have to investigate further.
As a workaround one could add the queueing disciplines directly to the interfaces belonging to the bond.
Strangely enough, this seems to have worked for this guy: blog.tinola.com/?e=22 – Matías E. Fernández, May 9 '12 at 18:33
I think with htb, you have to use tc filter to put the packets into classes. You may also need to change some of the htb parameters (tune it just like tbf). I suggest looking into tcng, which is a front-end to tc. (These are quick pointers...) – derobert, Oct 22 '12 at 15:37
I did not see any filters in your post. What commands are you using to match the traffic so that it can be rate limited? – user93961, Dec 5 '14 at 18:49
4 Answers
When you are unsure about how tc works, you can monitor tc and watch how the packets flow. You can use my script to monitor tc; run it in a terminal with elevated privileges. You can change wlan0 to another interface, and you also need grep and awk:
#!/bin/sh
INTERVAL=15
while sleep $INTERVAL
do
    # per-class statistics (sent bytes/packets, drops, overlimits) for the shaped interface
    /usr/sbin/tc -s -d class show dev wlan0
    uptime
    grep MemFree /proc/meminfo
    echo cache-name num-active-objs total-objs obj-size
    # columns 2-4 of the skbuff_head_cache slab entry: active objects, total objects, object size
    SKBUFF=`grep skbuff_head_cache /proc/slabinfo | awk '{ print $2, $3, $4 }'`
    echo skbuff_head_cache: $SKBUFF
done
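To use it, save the script under any name (for example tc-monitor.sh, an arbitrary choice) and start it with elevated privileges, e.g. sudo sh ./tc-monitor.sh. Every 15 seconds it prints the per-class counters, so you can see which class the iperf traffic actually ends up in.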
Try increasing the burst/limit values. The token bucket algorithms scale well, but have a limited accuracy/speed ratio.
Accuracy is achieved by using a small bucket, speed by increasing the size of the tokens. Large tokens mean the rate at which they are replenished is decreased (tokens per second = bytes per second / bytes per token).
The rate parameter gives the average rate that is not to be exceeded, the burst or limit parameters give the size of the averaging window. As sending out a packet at line speed exceeds the set rate for the time where the packet is transferred, the averaging window needs to be at least large enough that sending a single packet does not push the entire window over the limit; if more packets fit in the window, the algorithm will have a better chance of hitting the target exactly.
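To put numbers on this for the question's setup (an illustrative sketch, not part of the original answer): with rate 1mbit, a burst of 10k corresponds to an averaging window of roughly 10,000 × 8 / 1,000,000 ≈ 80 ms, and a single 1500-byte packet already accounts for about 12 ms of that budget. A somewhat larger burst together with a latency bound therefore gives tbf more headroom, for example:
# burst of roughly ten full-size packets plus a 50 ms latency bound;
# the exact values are assumptions to be tuned, not taken from the question
tc qdisc add dev bond0 root tbf rate 1mbit burst 15k latency 50ms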
Run this before adding a queueing discipline to the bonding interface (bond0 in this case):
ifconfig bond0 txqueuelen 1000
Otherwise it doesn't work, because a software virtual device such as a bonding interface has no default queue length.
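On systems that use iproute2 instead of net-tools, the equivalent command (added here as an aside to the original answer) is:
ip link set dev bond0 txqueuelen 1000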
Since bond devices don't have a defined queue, setting the qdisc size explicitly fixes the issue for me.
Here is an example of a leaf qdisc to be used under an HTB structure:
tc qdisc add dev $dev parent $parent handle $handle pfifo limit 1000
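Filled in for the question's setup, that might look like this (the 120: handle is an arbitrary choice; 1:12 is the default HTB class from the question):
tc qdisc add dev bond0 parent 1:12 handle 120: pfifo limit 1000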