iperf -s [options]
iperf -c server [options]
iperf -u -s [options]
iperf -u -c server [options]
iperf 2 is a testing tool that performs network traffic measurements using network sockets. The performance metrics supported include throughput and latency (or link capacity and responsiveness). Latency measurements include both one way delay (OWD) and round trip times (RTTs). Iperf can use both TCP and UDP sockets (or protocols). It supports unidirectional, full duplex (same socket) and bidirectional traffic, and supports multiple, simultaneous traffic streams. It supports multicast traffic including source specific multicast (SSM) joins. Its multi-threaded design allows for peak performance. The metrics displayed help to characterize host to host network performance. Setting the enhanced (-e) option provides all available metrics. Note: the metrics are measured at the level of socket reads and writes; they do not include the overhead associated with lower level protocol layer headers.
The user must establish both a server (to receive traffic) and a client (to generate and send traffic) for a test to occur. The client and server typically are on different hosts or computers but need not be.
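For example, a minimal TCP test runs the server on one host and the client on another:
iperf -s
iperf -c <host>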
TCP tests (client)
iperf -c <host> -e -i 1
------------------------------------------------------------
Client connecting to 192.168.1.35, TCP port 5001 with pid 256370 (1/0 flows/load)
Write buffer size: 131072 Byte
TCP congestion control using cubic
TOS set to 0x0 (dscp=0,ecn=0) (Nagle on)
TCP window size: 100 MByte (default)
------------------------------------------------------------
[ 1] local 192.168.1.103%enp4s0 port 41024 connected with 192.168.1.35 port 5001 (sock=3) (icwnd/mss/irtt=14/1448/158) (ct=0.21 ms) on 2024-03-26 10:48:47.867 (PDT)
[ ID] Interval Transfer Bandwidth Write/Err Rtry InF(pkts)/Cwnd(pkts)/RTT(var) NetPwr
[ 1] 0.00-1.00 sec 201 MBytes 1.68 Gbits/sec 1605/0 73 1531K(1083)/1566K(1108)/13336(112) us 15775
[ 1] 1.00-2.00 sec 101 MBytes 846 Mbits/sec 807/0 0 1670K(1181)/1689K(1195)/14429(83) us 7331
[ 1] 2.00-3.00 sec 101 MBytes 847 Mbits/sec 808/0 0 1790K(1266)/1790K(1266)/15325(97) us 6911
[ 1] 3.00-4.00 sec 134 MBytes 1.13 Gbits/sec 1075/0 0 1858K(1314)/1892K(1338)/16188(99) us 8704
[ 1] 4.00-5.00 sec 101 MBytes 846 Mbits/sec 807/0 1 1350K(955)/1370K(969)/11620(98) us 9103
[ 1] 5.00-6.00 sec 121 MBytes 1.01 Gbits/sec 966/0 0 1422K(1006)/1453K(1028)/12405(118) us 10207
[ 1] 6.00-7.00 sec 115 MBytes 962 Mbits/sec 917/0 0 1534K(1085)/1537K(1087)/13135(105) us 9151
[ 1] 7.00-8.00 sec 101 MBytes 844 Mbits/sec 805/0 0 1532K(1084)/1580K(1118)/13582(136) us 7769
[ 1] 8.00-9.00 sec 134 MBytes 1.13 Gbits/sec 1076/0 0 1603K(1134)/1619K(1145)/13858(105) us 10177
[ 1] 9.00-10.00 sec 101 MBytes 846 Mbits/sec 807/0 0 1602K(1133)/1650K(1167)/14113(105) us 7495
[ 1] 10.00-10.78 sec 128 KBytes 1.34 Mbits/sec 1/0 0 0K(0)/1681K(1189)/14424(111) us 11.64
[ 1] 0.00-10.78 sec 1.18 GBytes 941 Mbits/sec 9674/0 74 0K(0)/1681K(1189)/14424(111) us 8154
TCP tests (server)
iperf -s -e -i 1 -l 8K
------------------------------------------------------------
Server listening on TCP port 5001 with pid 13430
Read buffer size: 8.00 KByte
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 45.33.58.123 port 5001 connected with 45.56.85.133 port 49960
[ ID] Interval Transfer Bandwidth Reads Dist(bin=1.0K)
[ 4] 0.00-1.00 sec 124 MBytes 1.04 Gbits/sec 22249 798:2637:2061:767:2165:1563:589:11669
[ 4] 1.00-2.00 sec 136 MBytes 1.14 Gbits/sec 24780 946:3227:2227:790:2427:1888:641:12634
[ 4] 2.00-3.00 sec 137 MBytes 1.15 Gbits/sec 24484 1047:2686:2218:810:2195:1819:728:12981
[ 4] 3.00-4.00 sec 126 MBytes 1.06 Gbits/sec 20812 863:1353:1546:614:1712:1298:547:12879
[ 4] 4.00-5.00 sec 117 MBytes 984 Mbits/sec 20266 769:1886:1828:589:1866:1350:476:11502
[ 4] 5.00-6.00 sec 143 MBytes 1.20 Gbits/sec 24603 1066:1925:2139:822:2237:1827:744:13843
[ 4] 6.00-7.00 sec 126 MBytes 1.06 Gbits/sec 22635 834:2464:2249:724:2269:1646:608:11841
[ 4] 7.00-8.00 sec 110 MBytes 921 Mbits/sec 21107 842:2437:2747:592:2871:1903:496:9219
[ 4] 8.00-9.00 sec 126 MBytes 1.06 Gbits/sec 22804 1038:1784:2639:656:2738:1927:573:11449
[ 4] 9.00-10.00 sec 133 MBytes 1.11 Gbits/sec 23091 1088:1654:2105:710:2333:1928:723:12550
[ 4] 0.00-10.02 sec 1.25 GBytes 1.07 Gbits/sec 227306 9316:22088:21792:7096:22893:17193:6138:120790
TCP tests (server with --trip-times on client)
iperf -s -i 1 -w 4M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 MByte (WARNING: requested 4.00 MByte)
------------------------------------------------------------
[ 4] local 192.168.1.4%eth0 port 5001 connected with 192.168.1.7 port 44798 (trip-times) (MSS=1448) (peer 2.0.14-alpha)
[ ID] Interval Transfer Bandwidth Burst Latency avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
[ 4] 0.00-1.00 sec 19.0 MBytes 159 Mbits/sec 52.314/10.238/117.155/19.779 ms (151/131717) 1.05 MByte 380.19 781=306:253:129:48:18:15:8:4
[ 4] 1.00-2.00 sec 20.0 MBytes 168 Mbits/sec 53.863/21.264/79.252/12.277 ms (160/131080) 1.08 MByte 389.38 771=294:236:126:60:18:24:10:3
[ 4] 2.00-3.00 sec 18.2 MBytes 153 Mbits/sec 58.718/22.000/137.944/20.397 ms (146/130964) 1.06 MByte 325.64 732=299:231:98:52:18:19:10:5
[ 4] 3.00-4.00 sec 19.7 MBytes 165 Mbits/sec 50.448/ 8.921/82.728/14.627 ms (158/130588) 997 KByte 409.00 780=300:255:121:58:15:18:7:6
[ 4] 4.00-5.00 sec 18.8 MBytes 158 Mbits/sec 53.826/11.169/115.316/15.541 ms (150/131420) 1.02 MByte 366.24 761=302:226:134:52:22:17:7:1
[ 4] 5.00-6.00 sec 19.5 MBytes 164 Mbits/sec 50.943/11.922/76.134/14.053 ms (156/131276) 1.03 MByte 402.00 759=273:246:149:45:16:18:4:8
[ 4] 6.00-7.00 sec 18.5 MBytes 155 Mbits/sec 57.643/10.039/127.850/18.950 ms (148/130926) 1.05 MByte 336.16 710=262:228:133:37:16:20:8:6
[ 4] 7.00-8.00 sec 19.6 MBytes 165 Mbits/sec 52.498/12.900/77.045/12.979 ms (157/131003) 1.00 MByte 391.78 742=288:200:135:68:16:23:4:8
[ 4] 8.00-9.00 sec 18.0 MBytes 151 Mbits/sec 58.370/ 8.026/150.243/21.445 ms (144/131255) 1.06 MByte 323.81 716=268:241:108:51:20:17:8:3
[ 4] 9.00-10.00 sec 18.4 MBytes 154 Mbits/sec 56.112/12.419/79.790/13.668 ms (147/131194) 1.05 MByte 343.70 822=330:303:120:26:16:14:9:4
[ 4] 10.00-10.06 sec 1.03 MBytes 146 Mbits/sec 69.880/45.175/78.754/10.823 ms (9/119632) 1.74 MByte 260.40 62=26:30:5:1:0:0:0:0
[ 4] 0.00-10.06 sec 191 MBytes 159 Mbits/sec 54.183/ 8.026/150.243/16.781 ms (1526/131072) 1.03 MByte 366.98 7636=2948:2449:1258:498:175:185:75:48
TCP tests (with one way delay sync check -X and --trip-times on the client)
iperf -c 192.168.1.4 -X -e --trip-times -i 1 -t 2
------------------------------------------------------------
Client connecting to 192.168.1.4, TCP port 5001 with pid 16762 (1 flows)
Write buffer size: 131072 Byte
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 1] Clock sync check (ms): RTT/Half=(3.361/1.680) OWD-send/ack/asym=(2.246/1.115/1.131)
[ 1] local 192.168.1.1%ap0 port 47466 connected with 192.168.1.4 port 5001 (MSS=1448) (trip-times) (sock=3) (peer 2.1.4-master)
[ ID] Interval Transfer Bandwidth Write/Err Rtry Cwnd/RTT NetPwr
[ 1] 0.00-1.00 sec 9.50 MBytes 79.7 Mbits/sec 77/0 0 2309K/113914 us 87
[ 1] 1.00-2.00 sec 7.12 MBytes 59.8 Mbits/sec 57/0 0 2492K/126113 us 59
[ 1] 2.00-2.42 sec 128 KBytes 2.47 Mbits/sec 2/0 0 2492K/126113 us 2
[ 1] 0.00-2.42 sec 16.8 MBytes 58.0 Mbits/sec 136/0 0 2492K/126113 us 57
UDP tests (client)
iperf -c <host> -e -i 1 -u -b 10m
------------------------------------------------------------
Client connecting to <host>, UDP port 5001 with pid 5169
Sending 1470 byte datagrams, IPG target: 1176.00 us (kalman adjust)
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 45.56.85.133 port 32943 connected with 45.33.58.123 port 5001
[ ID] Interval Transfer Bandwidth Write/Err PPS
[ 3] 0.00-1.00 sec 1.19 MBytes 10.0 Mbits/sec 852/0 851 pps
[ 3] 1.00-2.00 sec 1.19 MBytes 10.0 Mbits/sec 850/0 850 pps
[ 3] 2.00-3.00 sec 1.19 MBytes 10.0 Mbits/sec 850/0 850 pps
[ 3] 3.00-4.00 sec 1.19 MBytes 10.0 Mbits/sec 851/0 850 pps
[ 3] 4.00-5.00 sec 1.19 MBytes 10.0 Mbits/sec 850/0 850 pps
[ 3] 5.00-6.00 sec 1.19 MBytes 10.0 Mbits/sec 850/0 850 pps
[ 3] 6.00-7.00 sec 1.19 MBytes 10.0 Mbits/sec 851/0 850 pps
[ 3] 7.00-8.00 sec 1.19 MBytes 10.0 Mbits/sec 850/0 850 pps
[ 3] 8.00-9.00 sec 1.19 MBytes 10.0 Mbits/sec 851/0 850 pps
[ 3] 0.00-10.00 sec 11.9 MBytes 10.0 Mbits/sec 8504/0 850 pps
[ 3] Sent 8504 datagrams
[ 3] Server Report:
[ 3] 0.00-10.00 sec 11.9 MBytes 10.0 Mbits/sec 0.047 ms 0/ 8504 (0%) 0.537/ 0.392/23.657/ 0.497 ms 850 pps 2329.37
UDP tests (server)
iperf -s -i 1 -w 4M -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 8.00 MByte (WARNING: requested 4.00 MByte)
------------------------------------------------------------
[ 3] local 192.168.1.4 port 5001 connected with 192.168.1.1 port 60027 (WARN: winsize=8.00 MByte req=4.00 MByte) (trip-times) (0.0) (peer 2.0.14-alpha)
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Latency avg/min/max/stdev PPS inP NetPwr
[ 3] 0.00-1.00 sec 44.5 MBytes 373 Mbits/sec 0.071 ms 52198/83938 (62%) 75.185/ 2.367/85.189/14.430 ms 31854 pps 3.64 MByte 620.58
[ 3] 1.00-2.00 sec 44.8 MBytes 376 Mbits/sec 0.015 ms 59549/143701 (41%) 79.609/75.603/85.757/ 1.454 ms 31954 pps 3.56 MByte 590.04
[ 3] 2.00-3.00 sec 44.5 MBytes 373 Mbits/sec 0.017 ms 59494/202975 (29%) 80.006/75.951/88.198/ 1.638 ms 31733 pps 3.56 MByte 583.07
[ 3] 3.00-4.00 sec 44.5 MBytes 373 Mbits/sec 0.019 ms 59586/262562 (23%) 79.939/75.667/83.857/ 1.145 ms 31767 pps 3.56 MByte 583.57
[ 3] 4.00-5.00 sec 44.5 MBytes 373 Mbits/sec 0.081 ms 59612/322196 (19%) 79.882/75.400/86.618/ 1.666 ms 31755 pps 3.55 MByte 584.40
[ 3] 5.00-6.00 sec 44.7 MBytes 375 Mbits/sec 0.064 ms 59571/381918 (16%) 79.767/75.571/85.339/ 1.556 ms 31879 pps 3.56 MByte 588.02
[ 3] 6.00-7.00 sec 44.6 MBytes 374 Mbits/sec 0.041 ms 58990/440820 (13%) 79.722/75.662/85.938/ 1.087 ms 31820 pps 3.58 MByte 586.73
[ 3] 7.00-8.00 sec 44.7 MBytes 375 Mbits/sec 0.027 ms 59679/500548 (12%) 79.745/75.704/84.731/ 1.094 ms 31869 pps 3.55 MByte 587.46
[ 3] 8.00-9.00 sec 44.3 MBytes 371 Mbits/sec 0.078 ms 59230/559499 (11%) 80.346/75.514/94.293/ 2.858 ms 31590 pps 3.58 MByte 577.97
[ 3] 9.00-10.00 sec 44.4 MBytes 373 Mbits/sec 0.073 ms 58782/618394 (9.5%) 79.125/75.511/93.638/ 1.643 ms 31702 pps 3.55 MByte 588.99
[ 3] 10.00-10.08 sec 3.53 MBytes 367 Mbits/sec 0.129 ms 6026/595236 (1%) 94.967/80.709/99.685/ 3.560 ms 31107 pps 3.58 MByte 483.12
[ 3] 0.00-10.08 sec 449 MBytes 374 Mbits/sec 0.129 ms 592717/913046 (65%) 79.453/ 2.367/99.685/ 5.200 ms 31776 pps (null) 587.91
Isochronous UDP tests (client)
iperf -c 192.168.100.33 -u -e -i 1 --isochronous=60:100m,10m --realtime
------------------------------------------------------------
Client connecting to 192.168.100.33, UDP port 5001 with pid 14971
UDP isochronous: 60 frames/sec mean= 100 Mbit/s, stddev=10.0 Mbit/s, Period/IPG=16.67/0.005 ms
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.100.76 port 42928 connected with 192.168.100.33 port 5001
[ ID] Interval Transfer Bandwidth Write/Err PPS frames:tx/missed/slips
[ 3] 0.00-1.00 sec 12.0 MBytes 101 Mbits/sec 8615/0 8493 pps 62/0/0
[ 3] 1.00-2.00 sec 12.0 MBytes 100 Mbits/sec 8556/0 8557 pps 60/0/0
[ 3] 2.00-3.00 sec 12.0 MBytes 101 Mbits/sec 8586/0 8586 pps 60/0/0
[ 3] 3.00-4.00 sec 12.1 MBytes 102 Mbits/sec 8687/0 8687 pps 60/0/0
[ 3] 4.00-5.00 sec 11.8 MBytes 99.2 Mbits/sec 8468/0 8468 pps 60/0/0
[ 3] 5.00-6.00 sec 11.9 MBytes 99.8 Mbits/sec 8519/0 8520 pps 60/0/0
[ 3] 6.00-7.00 sec 12.1 MBytes 102 Mbits/sec 8694/0 8694 pps 60/0/0
[ 3] 7.00-8.00 sec 12.1 MBytes 102 Mbits/sec 8692/0 8692 pps 60/0/0
[ 3] 8.00-9.00 sec 11.9 MBytes 100 Mbits/sec 8537/0 8537 pps 60/0/0
[ 3] 9.00-10.00 sec 11.8 MBytes 99.0 Mbits/sec 8450/0 8450 pps 60/0/0
[ 3] 0.00-10.01 sec 120 MBytes 100 Mbits/sec 85867/0 8574 pps 602/0/0
[ 3] Sent 85867 datagrams
[ 3] Server Report:
[ 3] 0.00-9.98 sec 120 MBytes 101 Mbits/sec 0.009 ms 196/85867 (0.23%) 0.665/ 0.083/ 1.318/ 0.174 ms 8605 pps 18903.85
Isochronous UDP tests (server)
iperf -s -e -u --udp-histogram=100u,2000 --realtime
------------------------------------------------------------
Server listening on UDP port 5001 with pid 5175
Receiving 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.100.33 port 5001 connected with 192.168.100.76 port 42928 isoch (peer 2.0.13-alpha)
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Latency avg/min/max/stdev PPS NetPwr Frames/Lost
[ 3] 0.00-9.98 sec 120 MBytes 101 Mbits/sec 0.010 ms 196/85867 (0.23%) 0.665/ 0.083/ 1.318/ 0.284 ms 8585 pps 18903.85 601/1
[ 3] 0.00-9.98 sec T8(f)-PDF: bin(w=100us):cnt(85671)=1:2,2:844,3:10034,4:8493,5:8967,6:8733,7:8823,8:9023,9:8901,10:8816,11:7730,12:4563,13:741,14:1 (5.00/95.00%=3/12,Outliers=0,obl/obu=0/0)
[ 3] 0.00-9.98 sec F8(f)-PDF: bin(w=100us):cnt(598)=15:2,16:1,17:27,18:68,19:125,20:136,21:103,22:83,23:22,24:23,25:5,26:3 (5.00/95.00%=17/24,Outliers=0,obl/obu=0/0)
Rate limiting: The -b option supports read and write rate limiting at the application level. On the client, -b also supports variable offered loads through the <mean>,<standard deviation> format, e.g. -b 100m,10m; the distribution used is log normal. The isochronous option accepts the same format. On the server, -b rate limits the reads. Socket based pacing is also supported using the --fq-rate long option, which works with the --reverse and --full-duplex options as well.
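For example, a variable offered load on a UDP client and socket pacing on a TCP client (rates are illustrative):
iperf -c <host> -u -e -i 1 -b 100m,10m
iperf -c <host> -e -i 1 --fq-rate 40m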
IP tos and dscp: Specifies the type-of-service or DSCP class for connections. Accepted values are af11, af12, af13, af21, af22, af23, af31, af32, af33, af41, af42, af43, cs0, cs1, cs2, cs3, cs4, cs5, cs6, cs7, ef, le, nqb, nqb2, ac_be, ac_bk, ac_vi, ac_vo, lowdelay, throughput, reliability, a numeric value, or none to use the operating system default. The ac_xx values are the four access categories defined in WMM for Wi-Fi, and they are aliases for DSCP values that will be mapped to the corresponding ACs under the assumption that the device uses the DSCP-to-UP mapping table specified in IETF RFC 8325.
One can set the tos byte using:
--dscp on the client
--tos or -S on the client or server though the server side setting has caveats (per next item)
--tos-override for server side reversed traffic (use case is for bleaching the return path)
The --dscp value on the client is merely a convenience: the user sets the DSCP field's six bits directly rather than computing the full TOS byte.
The --tos-override option allows one to test the reflected TOS feature of APs. This addresses the case where the network bleaches the client side TOS setting so the reverse traffic arrives with no TOS set; the AP is expected to apply the upstream traffic's TOS to the reverse traffic. In this use case --tos-override would be zero, simulating network bleaching of the TOS byte.
If there is no --tos setting then the setsockopt will never be called and the operating system default values are used.
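For example, setting the DSCP class on a client, the full TOS byte on a client, and a bleached return path on a server (a sketch; values are from the accepted list above, subject to the server side caveats noted):
iperf -c <host> -e --dscp af41
iperf -c <host> -e -S 0x10
iperf -s -e --tos-override 0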
--trip-times: The --trip-times option enables many one way delay (OWD) metrics. Also note that using --trip-times on a TCP client will cause --tcp-write-prefetch to be set to a small value if --tcp-write-prefetch hasn't also been set. This is done to reduce send side bloat latency (which is unrelated to network induced latency). Set --tcp-write-prefetch to zero to disable this (which will disable TCP_NOTSENT_LOWAT) and allow send side bloat.
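For example, the second invocation below disables the automatic write prefetch, re-allowing send side bloat:
iperf -c <host> -e -i 1 --trip-times
iperf -c <host> -e -i 1 --trip-times --tcp-write-prefetch 0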
Synchronized clocks: The --trip-times option indicates that the client's and server's clocks are synchronized to a common reference; Network Time Protocol (NTP) or Precision Time Protocol (PTP) are commonly used for this. The error of the reference clock(s) and of the synchronization protocol will affect the accuracy of any end to end latency measurements. See the bounceback note below on detecting unsynchronized clocks.
Histograms and non-parametric statistics: The --histograms option provides the raw data where nothing is averaged. This is useful for non-parametric distributions, e.g. latency. The standard output relies on the central limit theorem to produce average, minimum, maximum and variation, which loses information when the underlying distribution is not Gaussian. Histograms are supported so this information is made available.
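For example, a server collecting latency histograms (the 100 microsecond bin width and 2000 bin count are an assumed specification mirroring the --udp-histogram example above; the client must set --trip-times):
iperf -s -e --histograms=100u,2000
iperf -c <host> -e -i 1 -t 10 --trip-times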
Histogram output interpretation: Below is an example bounceback histogram and how to interpret it.
[ 1] 0.00-5.10 sec BB8-PDF:
bin(w=100us):cnt(50)=35:1,37:1,39:1,40:3,41:4,42:1,43:1,52:1,57:1,65:1,68:1,69:1,70:1,72:2,74:1,75:5,78:1,79:2,80:4,81:3,82:1,83:1,88:2,90:2,92:1,94:1,117:1,126:1,369:1,1000:1,1922:1,3710:1 (5.00/95.00/99.7%=39/1000/3710,Outliers=4,obl/obu=0/0)
Reading it: the bin width (w) is 100 microseconds and the total sample count (cnt) is 50. Each <bin>:<count> entry gives the number of samples landing in that bin, so 35:1 means one bounceback completed in roughly 3.5 ms and 3710:1 means one took roughly 371 ms. The trailing parenthetical gives the bins holding the 5th, 95th and 99.7th percentiles (39/1000/3710, i.e. roughly 3.9 ms, 100 ms and 371 ms), the outlier count, and the counts of outliers falling below and above the histogram bounds (obl/obu).
Binding is done at the logical level of port and IP address (layer 3) using the -B option with a colon as the separator between the IP address and port. Binding at the device (layer 2) level requires the percent (%) delimiter (on both the client and the server). An example for source IP address and port is -B 192.168.1.1:6001. To bind the source port only and let the operating system choose the source IP address, use 0.0.0.0, e.g. -B 0.0.0.0:6001.
On the client, the -B option affects the bind(2) system call and will set the source IP address and source port, e.g. iperf -c <host> -B 192.168.100.2:6002. This controls the packet's source values but not routing, which can be confusing: the route or device lookup may not resolve to the device holding the configured source IP. For example, if the IP address of eth0 is used for -B and the routing table resolves the output interface for the destination IP address to eth1, then the host will send the packet out device eth1 while using the source IP address of eth0 in the packet. To affect the physical output interface (e.g. on dual homed systems), either use -c <host>%<dev> (requires root), which bypasses this host route table lookup, or configure policy routing per each -B source address and set the output interface appropriately in the policy routes.
On the server or receive side, only packets destined to the -B IP address will be received. This is also useful for multicast. For example, iperf -s -B 224.0.0.1%eth0 will only accept IP multicast packets with destination IP 224.0.0.1 received on the eth0 interface, while iperf -s -B 224.0.0.1 will receive those packets on any interface. Finally, the device specifier is required for v6 link-local, e.g. -c [v6addr]%<dev> -V, to select the output interface.
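Summarizing the above as command sketches:
iperf -s -B 192.168.1.1:6001
iperf -c <host> -B 0.0.0.0:6001
iperf -c <host>%eth1
iperf -s -u -B 224.0.0.1%eth0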
Reverse, full-duplex, dualtest (-d) and tradeoff (-r): The --reverse (-R) and --full-duplex options can be confusing when compared to the older options --dualtest (-d) and --tradeoff (-r). The newer --reverse and --full-duplex open only one socket and read and write the same socket descriptor, i.e. they use the socket in full duplex mode. The older -d and -r open a second socket in the opposite direction and do not use a socket in full duplex mode. Note that full duplex applies to the socket and not to the network devices; full duplex sockets are supported by operating systems regardless of whether an underlying network supports full duplex transmission and reception. It's suggested to use --reverse (or -R on non-Windows systems) to test through a NAT firewall: it applies role reversal of the test after opening the full duplex socket. (Note: firewall piercing may be required to use -d and -r if a NAT gateway is in the path.)
Also, the --reverse -b <rate> setting behaves differently for TCP and UDP. For TCP it rate limits the read side, i.e. the iperf client (role reversed to act as a server) reading from the full duplex socket; this in turn flow controls the reverse traffic per standard TCP congestion control. For UDP, --reverse -b <rate> is applied on transmit (i.e. the server, role reversed to act as a client), since there is no flow control with UDP. There is no option to directly rate limit the writes in TCP testing when using --reverse.
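For example, with 10 Mbit/s as an illustrative rate, the first command rate limits the client's reads (TCP) while the second rate limits the server's transmits (UDP):
iperf -c <host> -e -i 1 --reverse -b 10m
iperf -c <host> -u -e -i 1 --reverse -b 10m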
Bounceback: The bounceback test allows one to measure network responsiveness, which in this test is an inverse of latency. The units are responses per second, or rps. Latency is merely delay in units of time, and latency metrics require one to know the delay of what's being measured; for bounceback it's a client write to a server read, followed by a server write and then the client read. The original write is bounced back. Iperf 2 sets up the socket with TCP_NODELAY and possibly TCP_QUICKACK (unless disabled). The client sends a small write (which defaults to 100 bytes unless -l is set) and issues a read, waiting for the "bounceback" from the server. The server waits for a read and then optionally delays before sending the payload back. This repeats until the traffic ends. Results are shown in units of rps and time delays.
The TCP_QUICKACK socket option will be enabled during bounceback tests when --bounceback-hold is set to a non-zero value. The socket option is applied after every read() on the server and before the hold delay call; it's also applied on the client. Use --bounceback-no-quickack to have TCP run in the socket's default mode (which most likely means TCP_QUICKACK is off).
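For example, a plain bounceback test and one with a server hold (the hold value of 10 is illustrative and assumed to be in milliseconds):
iperf -c <host> -e -i 1 --bounceback
iperf -c <host> -e -i 1 --bounceback --bounceback-hold 10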
Unsynchronized clock detection with --bounceback and --trip-times (as of March 19, 2023): Iperf 2 can detect when the clocks have synchronization errors larger than the bounceback RTT. This is done via the client's send timestamp (clock A), the server's receive timestamp (clock B) and the client's final receive timestamp (clock A). The check, done on each bounceback, is write(A) < read(B) < read(A), with a slight adjustment for the server's turnaround time: write(A) < read(B) < read(A) - (write(B) - read(B)). All timestamps are sampled at the initial write or read (not at their completion). Error output looks as shown below; there is no output for a zero count.
[ 1] 0.00-10.00 sec Clock sync error count = 100
TCP connect times: The TCP connect time (or three way handshake) can be seen on the iperf client when the -e (--enhanced) option is set. Look for the ct=<value> in the connected message, e.g. '[ 3] local 192.168.1.4 port 48736 connected with 192.168.1.1 port 5001 (ct=1.84 ms)' shows the 3WHS took 1.84 milliseconds.
Port-range: Port ranges are supported using the hyphen notation, e.g. 6001-6009, and cause multiple threads, one per port, on either the listener/server or the client. The user needs to ensure the ports in the range are available and not already in use on the operating system. The -P option is supported on the client and applies to each destination port within the range. Finally, this can serve as a workaround for Windows UDP with -P greater than 1, since Windows doesn't dispatch UDP packets per the server's connect and the full quintuple.
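For example, a sketch assuming the range is given to the port option:
iperf -s -e -p 6001-6008
iperf -c <host> -e -p 6001-6008 -P 2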
Packets per second (pps) calculation: The packets per second calculation is done as a derivative, i.e. the number of packets divided by time. The time is taken from the previous interval's last packet to the current interval's last packet; it is not the sample interval time. Because the last packet can land at different times within an interval, pps does not have to match received bytes divided by the sample interval. Also, with --trip-times set, the packet time on receive is set by the sender's write time, so pps indicates the end to end pps. The RX pps calculation is receive side only when -e is set and --trip-times is not set.
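As a worked example: if the previous interval's last packet arrived at t=0.999 s and the current interval's last packet at t=1.997 s, with 850 packets in between, then pps = 850 / 0.998 ≈ 852, even though the sample interval itself is exactly one second.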
Little's Law in queuing theory is a theorem that determines the average number of items (L) in a stationary queuing system based on the average waiting time (W) of an item within a system and the average number of items arriving at the system per unit of time (lambda). Mathematically, it's L = lambda * W. As used here, the units are bytes. The arrival rate is taken from the writes.
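As a worked check against the trip-times server example above: with about 159 Mbit/s (roughly 19.9e6 bytes/sec) arriving and an average delay W of about 54.2 ms, L = 19.9e6 * 0.0542 ≈ 1.08e6 bytes, i.e. about 1.03 MByte, consistent with the reported inP values.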
Network power: The network power (NetPwr) metric is experimental. It's a convenience function defined as throughput/delay. For TCP transmits, the delay is the sampled RTT; for TCP receives, the delay is the write to read latency; for UDP, the delay is the end to end latency. Don't confuse this with the physics definition of power (energy over time); it's better thought of as a desirable property divided by an undesirable one. Also note that one must use -i <interval> with TCP to get this, as that's what sets the RTT sampling rate. The metric is scaled to assist with human readability.
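As a rough check against the trip-times server example above (the exact display scaling is inferred here, not documented): 159 Mbit/s is about 19.9e6 bytes/sec, and dividing by the average latency of 54183 microseconds gives about 367, matching the reported NetPwr of 366.98.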
Multicast: Iperf 2 supports multicast with a couple of caveats. First, multicast streams cannot take advantage of the -P option. The server will serialize multicast streams. Also, it's highly encouraged to use a -t on a server that will be used for multicast clients. That is because the single end of traffic packet sent from client to server may get lost and there are no redundant end of traffic packets. Setting -t on the server will kill the server thread in the event this packet is indeed lost.
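For example (a sketch; the group address and durations are illustrative), -t on the server below ends the server thread even if the client's end of traffic packet is lost:
iperf -s -u -B 224.0.0.1 -i 1 -t 60
iperf -c 224.0.0.1 -u -b 1m -t 10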
TCP_QUICKACK: The TCP_QUICKACK socket option will be applied after every read() on the server such that TCP acks are sent immediately, rather than possibly delayed.
TCP_TX_DELAY (--tcp-tx-delay):
Iperf 2 flows can set different transmit delays, simulating real world conditions. Units are microseconds.
This requires the FQ packet scheduler or an EDT-enabled NIC.
Note that the FQ packet scheduler limits might need some tweaking:
man tc-fq
PARAMETERS
limit
Hard limit on the real queue size. When this limit is
reached, new packets are dropped. If the value is lowered,
packets are dropped so that the new limit is met. Default
is 10000 packets.
flow_limit
Hard limit on the maximum number of packets queued per
flow. Default value is 100.
Use of the TCP_TX_DELAY option will increase the number of skbs in the FQ qdisc, so packets will be dropped if either of the above limits is hit. Using big delays might very well trigger old bugs in TSO auto defer logic and/or sndbuf limited detection.
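For example (a sketch; the delay is assumed to be accepted in microseconds per the note above, and the tc limits are illustrative):
iperf -c <host> -e -i 1 --tcp-tx-delay 1000
tc qdisc replace dev eth0 root fq limit 20000 flow_limit 1000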
Fast Sampling: Use ./configure --enable-fastsampling and then compile from source to enable four digit (e.g. 1.0000) precision in reports' timestamps. Useful for sub-millisecond sampling.
Source code at http://sourceforge.net/projects/iperf2/
"Unix Network Programming, Volume 1: The Sockets Networking API (3rd Edition) 3rd Edition" by W. Richard Stevens (Author), Bill Fenner (Author), Andrew M. Rudoff (Author)