IPERF(1) User Manuals IPERF(1)
NAME
iperf - perform network traffic tests using network sockets. Metrics
include throughput and latency or link capacity and responsiveness.
SYNOPSIS
iperf -s [options]
iperf -c server [options]
iperf -u -s [options]
iperf -u -c server [options]
DESCRIPTION
iperf 2 is a testing tool which performs network traffic measurements
using network sockets. The performance metrics supported include
throughput and latency (or link capacity and responsiveness.) Latency
measurements include both one way delay (OWD) and round trip times
(RTTs.) Iperf can use both TCP and UDP sockets (or protocols.) It sup‐
ports unidirectional, full duplex (same socket) and bidirectional traf‐
fic, and supports multiple, simultaneous traffic streams. It supports
multicast traffic including source specific multicast (SSM) joins. Its
multi-threaded design allows for peak performance. Metrics displayed
help to characterize host to host network performance. Setting the
enhanced (-e) option provides all available metrics. Note: the metrics
are measured at socket-level reads and writes. They do not include the
overhead associated with lower level protocol layer headers.
The user must establish both a server (to receive traffic) and a client
(to generate and send traffic) for a test to occur. The client and
server typically are on different hosts or computers but need not be.
GENERAL OPTIONS
-b, --bandwidth
set the target bandwidth and optional standard deviation per
<mean>,[<stdev>] (See NOTES for suffixes) Setting the target
bitrate on the client to 0 will disable bitrate limits (particularly
useful for UDP tests). On the server, this will limit the read rate.
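How a UDP target bitrate maps onto packet timing can be sketched with a quick calculation (an illustration, not part of iperf; it assumes the default 1470-byte datagram and the -b 10m figure used in the UDP client example later in this page):

```shell
# Sketch: a -b 10m UDP target with the default 1470-byte datagram means
# 1470*8 bits per packet at 10 Mbit/s, i.e. an inter-packet gap (IPG) of
# 1176 us and roughly 850 packets per second, matching the example output.
awk 'BEGIN {
  bits_per_pkt = 1470 * 8          # default UDP payload size in bits
  rate_bps     = 10e6              # -b 10m
  ipg_us = bits_per_pkt / rate_bps * 1e6
  pps    = rate_bps / bits_per_pkt
  printf "IPG=%.2f us  PPS=%.1f\n", ipg_us, pps
}'
```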
-e, --enhanced
Display enhanced output in reports; otherwise use legacy report
(ver 2.0.5) formatting (see NOTES)
-f, --format [abkmgBKMG]
format to report: adaptive, bits, Bytes, Kbits, Mbits, Gbits,
KBytes, MBytes, GBytes (see NOTES for more)
-h, --help
print a help synopsis
--hide-ips
obscure ip addresses in output (useful for publishing results
without displaying the full ip addresses; IPv4 only)
-i, --interval < t | f >
sample or display interval reports every t seconds (default) or
every frame or burst, i.e. if f is used then the interval will
be each frame or burst. The frame interval reporting is experi‐
mental. Compiling with fast sampling is also suggested, i.e.
./configure --enable-fastsampling
-l, --len n[kmKM]
set read/write buffer size (TCP) or length (UDP) to n (TCP
default 128K, UDP default 1470)
--l2checks
perform layer 2 length checks on received UDP packets (requires
systems that support packet sockets, e.g. Linux)
-m, --print_mss
print TCP maximum segment size
--NUM_REPORT_STRUCTS <count>
Override the default shared memory size between the traffic
thread(s) and reporter thread in order to mitigate mutex lock
contentions. The default value of 5000 should be sufficient for
1Gb/s networks. Increase this upon seeing the Warning message of
reporter thread too slow. If the Warning message isn't seen,
then increasing this won't have any significant effect (other
than to use some additional memory.)
-o, --output filename
output the report or error message to this specified file
--permit-key [=<value>]
Set a key value that must match for the server to accept traffic
on a connection. If the option is given without a value on the
server a key value will be autogenerated and displayed in its
initial settings report. The lifetime of the key is set using
--permit-key-timeout and defaults to twenty seconds. The value
on clients requires the use of '=', e.g. --permit-key=password
(even though it's a required command line option.) The server
will auto-generate a value if '=password' is not given. The value
will also be used as part of the transfer id in reports. The
option set on the client but not the server will also cause the
server to reject the client's traffic. TCP only, no UDP support.
-p, --port m[-n]
set client or server port(s) to send or listen on per m (default
5001) w/optional port range per m-n (e.g. -p 6002-6008) (see
NOTES)
--sum-dstip
sum traffic threads based upon the destination IP address
(default is source ip address)
--sum-only
set the output to sum reports only. Useful for -P at large val‐
ues
--tcp-tx-delay n,[<prob>]
Set TCP_TX_DELAY on the socket. Delay units are milliseconds and
probability is prob >= 0 and prob <= 1. Values take floats. See
Notes for qdisc requirements.
-t, --time n
time in seconds to listen for new traffic connections, receive
traffic or send traffic
-u, --udp
use UDP rather than TCP
--utc
use coordinated universal time (UTC) when outputting time (oth‐
erwise use local time)
-w, --window n[kmKM]
TCP window size (socket buffer size)
-z, --realtime
Request real-time scheduler, if supported.
-B, --bind host[:port][%dev]
bind to host, ip address or multicast address, optional port or
device (see NOTES)
-C, --compatibility
for use with older versions; does not send extra msgs
-M, --mss n
set TCP maximum segment size using TCP_MAXSEG
-N, --nodelay
set TCP no delay, disabling Nagle's Algorithm
-v, --version
print version information and quit
-x, --reportexclude [CDMSV]
exclude C(connection) D(data) M(multicast) S(settings) V(server)
reports
-y, --reportstyle C|c
if set to C or c report results as CSV (comma separated values)
--tcp-cca
Set the congestion control algorithm to be used for TCP connec‐
tions. See SPECIFIC OPTIONS for more
--working-load-cca
Set the congestion control algorithm to be used for TCP working
loads. See SPECIFIC OPTIONS for more
-Z, --tcp-congestion
Set the default congestion control algorithm to be used for new
connections. Platforms must support setsockopt's TCP_CONGESTION.
(Notes: See sysctl and tcp_allowed_congestion_control for avail‐
able options. May require root privileges.)
SERVER SPECIFIC OPTIONS
-1, --singleclient
set the server to process only one client at a time
-b, --bandwidth n[kmgKMG]
set target read rate to n bits/sec. TCP only for the server.
-s, --server
run in server mode
--histograms[=binwidth[u],bincount,[lowerci],[upperci]]
enable latency histograms for udp packets (-u), for tcp writes
(with --trip-times), or for either udp or tcp with --isochronous
clients, or for --bounceback. The binning can be modified. Bin
widths (default 1 millisecond, append u for microseconds, m for
milliseconds) bincount is total bins (default 1000), ci is con‐
fidence interval between 0-100% (default lower 5%, upper 95%, 3
stdev 99.7%)
--jitter-histograms[=<binwidth>]
enable jitter histograms for udp packets (-u). Optional value is
the bin width where units are microseconds and defaults to 100
usecs
--permit-key [=<value>]
Set a key value that must match for the server to accept traffic
from a client (also set with --permit-key.) The server will
auto-generate a globally unique key when the option is given
without a value. This value will be displayed in the server's
initial settings report. The lifetime of the key is set using
--permit-key-timeout and defaults to twenty seconds. TCP only,
no UDP support.
--permit-key-timeout <value>
Set the lifetime of the permit key in seconds. Defaults to 20
seconds if not set. A value of zero will disable the timer.
--tap-dev <dev>
Set the receive interface to the TAP device as specified.
--tcp-rx-window-clamp n[kmKM]
Set the socket option of TCP_WINDOW_CLAMP, units is bytes.
--test-exchange-timeout <value>
Set the maximum wait time for a test exchange in seconds.
Defaults to 60 seconds if not set. A value of zero will disable
the timeout.
-t, --time n
time in seconds to listen for new traffic connections and/or
receive traffic (defaults to infinite)
--tos-override <val>
set the socket's IP_TOS value for reverse or full duplex traf‐
fic. Supported in versions 2.1.5 or greater. Previous versions
won't set IP_TOS on reverse traffic. See NOTES for values.
-B, --bind ip | ip%device
bind src ip addr and optional src device for receiving
-D, --daemon
run the server as a daemon. On Windows this will run the speci‐
fied command-line under the IPerfService, installing the service
if necessary. Note the service is not configured to auto-start
or restart - if you need a self-starting service you will need
to create an init script or use Windows "sc" commands.
-H, --ssm-host host
Set the source host (ip addr) per SSM multicast, i.e. the S of
the S,G
-R, --remove
remove the IPerfService (Windows only).
-U, --single_udp
run in single threaded UDP mode
-V, --ipv6_domain
Enable IPv6 reception by setting the domain and socket to
AF_INET6 (Can receive on both IPv4 and IPv6)
--tcp-cca
Set the congestion control algorithm to be used for TCP connec‐
tions - will override any client side settings (same as --tcp-
congestion)
--working-load
Enable support for TCP working loads on UDP traffic streams
--working-load-cca
Set the congestion control algorithm to be used for TCP working
loads - will override any client side settings
CLIENT SPECIFIC OPTIONS
-b, --bandwidth n[kmgKMG][,n[kmgKMG]] | n[kmgKMG]pps
set target bandwidth to n bits/sec (default 1 Mbit/sec) or n
packets per sec. This may be used with TCP or UDP. Optionally,
for variable loads, use format of mean,standard deviation
--bounceback[=n]
run a TCP bounceback or rps test with an optional number of
writes per burst given by n. The default is ten writes every period
and the default period is one second (Note: set size with
--bounceback-request). See NOTES on clock unsynchronized detec‐
tions.
--bounceback-hold n
request the server to insert a delay of n milliseconds between
its read and write (default is no delay)
--bounceback-no-quickack
request the server not set the TCP_QUICKACK socket option (dis‐
abling TCP ACK delays) during a bounceback test (see NOTES)
--bounceback-period[=n]
request the client schedule its send(s) every n seconds (default
is one second, use zero value for immediate or continuous back
to back)
--bounceback-request n
set the bounceback request size in units bytes. Default value is
100 bytes.
--bounceback-reply n
set the bounceback reply size in units bytes. This supports
asymmetric message sizes between the request and the reply.
Default value is zero, which uses the value of --bounceback-
request.
--bounceback-txdelay n
request the client to delay n seconds between the start of the
working load and the bounceback traffic (default is no delay)
--burst-period n
Set the burst period in seconds. Defaults to one second. (Note:
assumed use case is low duty cycle traffic bursts)
--burst-size n
Set the burst size in bytes. Defaults to 1M if no value is
given.
-c, --client host | host%device
run in client mode, connecting to host where the optional %dev
will SO_BINDTODEVICE that output interface (requires root and
see NOTES)
--connect-only[=n]
only perform a TCP connect (or 3WHS) without any data transfer,
useful to measure TCP connect() times. Optional value of n is
the total number of connects to do (zero means run forever.) Note
that -i will rate limit the connects where -P will create bursts
and -t will end the client and hence end its connect attempts.
--connect-retry-time n
time value in seconds for application level retries of TCP con‐
nect(s). See --connect-retry-timer for the retry time interval.
See operating system information for the details of system or
kernel TCP connect related settings. This is an application
level retry of the connect() call and not the system level con‐
nect.
--connect-retry-timer n
The minimum time value in seconds to wait before retrying the
connect. Note: This is a minimum time to wait between retries and
can be longer dependent upon the system connect time taken. See
operating system information for the details of system or kernel
TCP connect related settings.
--dscp
set the DSCP field (masking ECN bits) in the TOS byte (used by
IP_TOS & setsockopt)
-d, --dualtest
Do a simultaneous bidirectional test using two unidirectional
sockets
--fq-rate n[kmgKMG]
Set a rate to be used with fair-queuing based socket-level pac‐
ing, in bytes or bits per second. Only available on platforms
supporting the SO_MAX_PACING_RATE socket option. (Note: Here the
suffixes indicate bytes/sec or bits/sec per use of uppercase or
lowercase, respectively)
--fq-rate-step n[kmgKMG]
Set a step of rate to be used with fair-queuing based socket-
level pacing, in bytes or bits per second. Step occurs every fq-
rate-step-interval (defaults to one second)
--fq-rate-step-interval n
Time in seconds before stepping the fq-rate
--full-duplex
run a full duplex test, i.e. traffic in both transmit and
receive directions using the same socket
--histograms[=binwidth[u],bincount,[lowerci],[upperci]]
enable select()/write() histograms with --tcp-write-times or
--bounceback (these options are mutually exclusive.) The binning
can be modified. Bin widths (default 100 microseconds, append u
for microseconds, m for milliseconds) bincount is total bins
(default 10000), ci is confidence interval between 0-100%
(default lower 5%, upper 95%, 3 stdev 99.7%)
--ignore-shutdown
don't wait on the TCP shutdown or close (fin & finack); rather,
use the final write as the ending event
--incr-dstip
increment the destination ip address when using the parallel
(-P) or port range option
--incr-dstport
increment the destination port when using the parallel (-P) or
port range option
--incr-srcip
increment the source ip address when using the parallel (-P) or
port range option
--incr-srcport
increment the source port when using the parallel (-P) or
port range option, requires -B to set the src port
--ipg n
set the inter-packet gap to n (units of seconds) for packets or
within a frame/burst when --isochronous is set
--isochronous[=fps:mean,stdev]
send isochronous traffic with frequency frames per second and
load defined by mean and standard deviation using a log normal
distribution, defaults to 60:20m,0. (Note: Here the suffixes
indicate bytes/sec or bits/sec per use of uppercase or lower‐
case, respectively. Also the p suffix is supported to set the
burst size in packets, e.g. isochronous=2:25p will send two 25
packet bursts every second, or one 25 packet burst every 0.5
seconds.)
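What a setting such as --isochronous=60:100m,10m implies per frame can be sketched with a back-of-envelope check (an illustration, not part of iperf; it ignores the stdev term and uses only the fps and mean values):

```shell
# Sketch: --isochronous=60:100m implies a 1000/60 = 16.67 ms frame
# period and a mean frame size of 100e6/8/60 bytes (about 208 KB),
# before the log normal variation is applied. This matches the
# "Period/IPG=16.67/... ms" line in the isochronous example output.
awk 'BEGIN {
  fps      = 60       # frames per second
  mean_bps = 100e6    # 100m mean load
  period_ms        = 1000 / fps
  mean_frame_bytes = mean_bps / 8 / fps
  printf "period=%.2f ms  mean_frame=%.0f bytes\n", period_ms, mean_frame_bytes
}'
```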
--local-only[=1|0]
Set 1 to limit traffic to the local network only (through the
use of SO_DONTROUTE) set to zero otherwise with optional over‐
ride of compile time default (see configure --default-localonly)
--near-congestion[=n]
Enable TCP write rate limiting per the sampled RTT. The delay is
applied after the -l number of bytes have completed. The
optional value is the multiplier to the RTT and defines the time
delay. This value defaults to 0.5 if it is not set. Values less
than 1 are supported but the value cannot be negative. This is
an experimental feature. It is not likely stable on live net‐
works. Suggested use is over controlled test networks.
--no-connect-sync
By default, parallel traffic threads (per -P greater than 1)
will synchronize after their TCP connects and prior to each
sending traffic, i.e. all the threads first complete (or error)
the TCP 3WHS before any traffic thread will start sending. This
option disables that synchronization such that each traffic
thread will start sending immediately after completing its suc‐
cessful connect.
--no-udp-fin
Don't perform the UDP final server to client exchange which
means there won't be a final server report displayed on the
client. All packets per the test will be from the client to the
server and no packets should be sent in the other direction.
It's highly suggested that -t be set on the server if this
option is being used. This is because there will be only one
trigger ending packet sent from client to server and if it's
lost then the server will continue to run. (Requires ver 2.0.14
or better)
-n, --num n[kmKM]
number of bytes to transmit (instead of -t)
--permit-key [=<value>]
Set a key value that must match the server's value (also set
with --permit-key) in order for the server to accept traffic
from the client. TCP only, no UDP support.
--sync-transfer-id
Pass the clients' transfer id(s) to the server so both will use
the same id in their respective outputs
-r, --tradeoff
Do a bidirectional test individually - client-to-server, fol‐
lowed by a reversed test, server-to-client
--tcp-cca
Set the congestion control algorithm to be used for TCP connec‐
tions & exchange with the server (same as --tcp-congestion)
--tcp-quickack
Set TCP_QUICKACK on the socket
--tcp-write-prefetch n[kmKM]
Set TCP_NOTSENT_LOWAT on the socket and use event based writes
per select() on the socket.
--tcp-write-times
Measure the socket write times
-t, --time n|0
time in seconds to transmit traffic, use zero for infinite
(default is 10 secs)
--trip-times
enable the measurement of end to end write to read latencies
(client and server clocks must be synchronized.) See notes about
tcp-write-prefetch being enabled.
--txdelay-time
time in seconds to hold back or delay after the TCP connect and
prior to the socket writes. For UDP it's the delay between the
traffic thread starting and the first write.
--txstart-time n.n
set the txstart-time to n.n using unix or epoch time format
(supports microsecond resolution, e.g. 1536014418.123456) An
example to delay one second using command substitution is iperf
-c 192.168.1.10 --txstart-time $(expr $(date +%s) + 1).$(date
+%N)
-B, --bind ip | ip:port | ipv6 -V | [ipv6]:port -V
bind src ip addr and optional port as the source of traffic (see
NOTES)
-F, --fileinput name
input the data to be transmitted from a file
-I, --stdin
input the data to be transmitted from stdin
-L, --listenport n
port to receive bidirectional tests back on
-P, --parallel n
number of parallel client threads to run
-R, --reverse
reverse the traffic flow (useful for testing through firewalls,
see NOTES)
-S, --tos <val>
set the socket's IP_TOS value. Versions 2.1.5 or greater will
reflect this tos setting back with --reverse or --full-duplex
option. (Previous versions won't set tos on the reverse traf‐
fic.) Note: use server side --tos-override to override. See
NOTES for values.
-T, --ttl n
time-to-live, for multicast (default 1)
--working-load[=up|down|bidir][,n]
request a concurrent working load, currently TCP stream(s),
defaults to full duplex (or bidir) unless the up or down option
is provided. The number of TCP streams defaults to 1 and can be
changed via the n value, e.g. --working-load=down,4 will use
four TCP streams from server to the client as the working load.
The IP ToS will be BE (0x0) for working load traffic.
--working-load-cca
Set the congestion control algorithm to be used for TCP working
loads, exchange with the server
-V, --ipv6_domain
Set the domain to IPv6 (send packets over IPv6)
-X, --peerdetect
run peer version detection prior to traffic.
-Z, --linux-congestion algo
set TCP congestion control algorithm (Linux only)
EXAMPLES
TCP tests (client)
iperf -c <host> -e -i 1
------------------------------------------------------------
Client connecting to 192.168.1.35, TCP port 5001 with pid 256370 (1/0
flows/load)
Write buffer size: 131072 Byte
TCP congestion control using cubic
TOS set to 0x0 (dscp=0,ecn=0) (Nagle on)
TCP window size: 100 MByte (default)
------------------------------------------------------------
[ 1] local 192.168.1.103%enp4s0 port 41024 connected with 192.168.1.35
port 5001 (sock=3) (icwnd/mss/irtt=14/1448/158) (ct=0.21 ms) on
2024-03-26 10:48:47.867 (PDT)
[ ID] Interval Transfer Bandwidth Write/Err Rtry
InF(pkts)/Cwnd(pkts)/RTT(var) NetPwr
[ 1] 0.00-1.00 sec 201 MBytes 1.68 Gbits/sec 1605/0 73
1531K(1083)/1566K(1108)/13336(112) us 15775
[ 1] 1.00-2.00 sec 101 MBytes 846 Mbits/sec 807/0 0
1670K(1181)/1689K(1195)/14429(83) us 7331
[ 1] 2.00-3.00 sec 101 MBytes 847 Mbits/sec 808/0 0
1790K(1266)/1790K(1266)/15325(97) us 6911
[ 1] 3.00-4.00 sec 134 MBytes 1.13 Gbits/sec 1075/0 0
1858K(1314)/1892K(1338)/16188(99) us 8704
[ 1] 4.00-5.00 sec 101 MBytes 846 Mbits/sec 807/0 1
1350K(955)/1370K(969)/11620(98) us 9103
[ 1] 5.00-6.00 sec 121 MBytes 1.01 Gbits/sec 966/0 0
1422K(1006)/1453K(1028)/12405(118) us 10207
[ 1] 6.00-7.00 sec 115 MBytes 962 Mbits/sec 917/0 0
1534K(1085)/1537K(1087)/13135(105) us 9151
[ 1] 7.00-8.00 sec 101 MBytes 844 Mbits/sec 805/0 0
1532K(1084)/1580K(1118)/13582(136) us 7769
[ 1] 8.00-9.00 sec 134 MBytes 1.13 Gbits/sec 1076/0 0
1603K(1134)/1619K(1145)/13858(105) us 10177
[ 1] 9.00-10.00 sec 101 MBytes 846 Mbits/sec 807/0 0
1602K(1133)/1650K(1167)/14113(105) us 7495
[ 1] 10.00-10.78 sec 128 KBytes 1.34 Mbits/sec 1/0 0
0K(0)/1681K(1189)/14424(111) us 11.64
[ 1] 0.00-10.78 sec 1.18 GBytes 941 Mbits/sec 9674/0 74
0K(0)/1681K(1189)/14424(111) us 8154
where (per -e,)
ct= TCP connect time (or three way handshake time 3WHS)
Write/Err Total number of successful socket writes. Total number
of non-fatal socket write errors
Rtry Total number of TCP retries
InF(pkts)/Cwnd/RTT(var) (*nix only) TCP bytes and packets
in flight, congestion window and round trip time (sampled where
NA indicates no value). Inflight is in units of Kbytes and
packets where packets_in_flight = (tcp_info_buf.tcpi_unacked -
tcp_info_buf.tcpi_sacked - tcp_info_buf.tcpi_lost +
tcp_info_buf.tcpi_retrans). RTT (var) is RTT variance.
NetPwr (*nix only) Network power defined as (throughput / RTT)
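The NetPwr column can be cross-checked against the interval report above. This sketch assumes iperf divides bytes-per-second throughput by the RTT and scales by 1e-6; that assumption is matched only against these sample rows, and printed-field rounding makes the agreement approximate:

```shell
# Sketch: reproduce NetPwr for the 1.00-2.00 sec interval above
# (846 Mbit/s, RTT 14429 us). The result lands near the reported 7331;
# the small difference comes from rounding in the printed columns.
awk 'BEGIN {
  thr_bps = 846e6        # reported throughput, bits/sec
  rtt_s   = 14429e-6     # reported RTT, seconds
  netpwr  = (thr_bps / 8) / rtt_s * 1e-6
  printf "NetPwr~=%.0f\n", netpwr
}'
```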
iperf -c host.domain.com -i 1 --bounceback --permit-key=mytest --hide-
ips
------------------------------------------------------------
Client connecting to (**hidden**), TCP port 5001
Bursting: 100 Byte writes 10 times every 1.00 second(s)
Bounce-back test (size= 100 Byte) (server hold req=0 usecs)
TCP window size: 16.0 KByte (default)
------------------------------------------------------------
[mytest(1)] local *.*.*.96 port 38044 connected with *.*.*.123
port 5001 (bb len/hold=100/0) (icwnd/mss/irtt=14/1448/10605)
[ ID] Interval Transfer Bandwidth BB
cnt=avg/min/max/stdev Rtry Cwnd/RTT RPS
[mytest(1)] 0.00-1.00 sec 1.95 KBytes 16.0 Kbits/sec
10=11.949/9.662/19.597/3.127 ms 0 14K/10930 us 83 rps
[mytest(1)] 1.00-2.00 sec 1.95 KBytes 16.0 Kbits/sec
10=10.004/9.651/10.322/0.232 ms 0 14K/10244 us 99 rps
[mytest(1)] 2.00-3.00 sec 1.95 KBytes 16.0 Kbits/sec
10=10.582/9.720/14.831/1.573 ms 0 14K/10352 us 94 rps
[mytest(1)] 3.00-4.00 sec 1.95 KBytes 16.0 Kbits/sec
10=11.303/9.940/15.114/2.026 ms 0 14K/10832 us 88 rps
[mytest(1)] 4.00-5.00 sec 1.95 KBytes 16.0 Kbits/sec
10=11.148/9.671/14.803/1.837 ms 0 14K/10858 us 89 rps
[mytest(1)] 5.00-6.00 sec 1.95 KBytes 16.0 Kbits/sec
10=10.207/9.695/10.729/0.356 ms 0 14K/10390 us 97 rps
[mytest(1)] 6.00-7.00 sec 1.95 KBytes 16.0 Kbits/sec
10=10.871/9.770/14.387/1.547 ms 0 14K/10660 us 91 rps
[mytest(1)] 7.00-8.00 sec 1.95 KBytes 16.0 Kbits/sec
10=11.224/9.760/14.993/1.837 ms 0 14K/11027 us 89 rps
[mytest(1)] 8.00-9.00 sec 1.95 KBytes 16.0 Kbits/sec
10=10.719/9.887/14.553/1.455 ms 0 14K/10620 us 93 rps
[mytest(1)] 9.00-10.00 sec 1.95 KBytes 16.0 Kbits/sec
10=10.775/9.689/14.746/1.562 ms 0 14K/10596 us 92 rps
[mytest(1)] 0.00-10.02 sec 19.5 KBytes 16.0 Kbits/sec
100=10.878/9.651/19.597/1.743 ms 0 14K/11676 us 91 rps
[ 1] 0.00-10.02 sec BB8(f)-PDF:
bin(w=100us):cnt(100)=97:5,98:8,99:10,100:8,101:12,102:10,103:6,104:7,105:2,106:2,107:3,108:3,109:2,110:1,114:1,115:1,118:1,120:2,121:1,124:1,125:1,128:1,140:1,143:1,144:1,146:2,148:1,149:2,150:1,151:1,152:1,196:1
(5.00/95.00/99.7%=97/149/196,Outliers=0,obl/obu=0/0)
where BB cnt=avg/min/max/stdev Count of bouncebacks, average time,
minimum time, maximum time, standard deviation units of ms
Rtry Total number of TCP retries
Cwnd/RTT (*nix only) TCP congestion window and round trip time
(sampled where NA indicates no value)
RPS Responses per second
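The RPS column appears to be the reciprocal of the mean bounceback time; this is an inference checked only against the sample rows above, e.g. the 1.00-2.00 sec row with its 10.004 ms mean:

```shell
# Sketch: 10 bouncebacks averaging 10.004 ms each gives about
# 1000/10.004 responses per second, matching the reported 99 rps.
awk 'BEGIN {
  mean_ms = 10.004
  printf "RPS~=%d\n", 1000 / mean_ms
}'
```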
TCP tests (server)
iperf -s -e -i 1 -l 8K
------------------------------------------------------------
Server listening on TCP port 5001 with pid 13430
Read buffer size: 8.00 KByte
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[ 4] local 45.33.58.123 port 5001 connected with 45.56.85.133 port
49960
[ ID] Interval Transfer Bandwidth Reads
Dist(bin=1.0K)
[ 4] 0.00-1.00 sec 124 MBytes 1.04 Gbits/sec 22249
798:2637:2061:767:2165:1563:589:11669
[ 4] 1.00-2.00 sec 136 MBytes 1.14 Gbits/sec 24780
946:3227:2227:790:2427:1888:641:12634
[ 4] 2.00-3.00 sec 137 MBytes 1.15 Gbits/sec 24484
1047:2686:2218:810:2195:1819:728:12981
[ 4] 3.00-4.00 sec 126 MBytes 1.06 Gbits/sec 20812
863:1353:1546:614:1712:1298:547:12879
[ 4] 4.00-5.00 sec 117 MBytes 984 Mbits/sec 20266
769:1886:1828:589:1866:1350:476:11502
[ 4] 5.00-6.00 sec 143 MBytes 1.20 Gbits/sec 24603
1066:1925:2139:822:2237:1827:744:13843
[ 4] 6.00-7.00 sec 126 MBytes 1.06 Gbits/sec 22635
834:2464:2249:724:2269:1646:608:11841
[ 4] 7.00-8.00 sec 110 MBytes 921 Mbits/sec 21107
842:2437:2747:592:2871:1903:496:9219
[ 4] 8.00-9.00 sec 126 MBytes 1.06 Gbits/sec 22804
1038:1784:2639:656:2738:1927:573:11449
[ 4] 9.00-10.00 sec 133 MBytes 1.11 Gbits/sec 23091
1088:1654:2105:710:2333:1928:723:12550
[ 4] 0.00-10.02 sec 1.25 GBytes 1.07 Gbits/sec 227306
9316:22088:21792:7096:22893:17193:6138:120790
where (per -e,)
Reads Total number of socket reads
Dist(bin=size) Eight bin histogram of the socket reads returned
byte count. Bin width is set per size. Bins are separated by a
colon. In the example, the bins are 0-1K, 1K-2K, .., 7K-8K.
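The Dist column can be sanity-checked: the eight colon-separated bins should sum to the Reads column. Using the 0.00-1.00 sec interval above:

```shell
# Sketch: sum the eight read-size bins from the first interval; the
# total equals the Reads column for that interval (22249).
echo "798:2637:2061:767:2165:1563:589:11669" |
awk -F: '{ for (i = 1; i <= NF; i++) sum += $i; print sum }'
```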
TCP tests (server with --trip-times on client)
iperf -s -i 1 -w 4M
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 8.00 MByte (WARNING: requested 4.00 MByte)
------------------------------------------------------------
[ 4] local 192.168.1.4%eth0 port 5001 connected with 192.168.1.7 port
44798 (trip-times) (MSS=1448) (peer 2.0.14-alpha)
[ ID] Interval Transfer Bandwidth Burst Latency
avg/min/max/stdev (cnt/size) inP NetPwr Reads=Dist
[ 4] 0.00-1.00 sec 19.0 MBytes 159 Mbits/sec
52.314/10.238/117.155/19.779 ms (151/131717) 1.05 MByte 380.19
781=306:253:129:48:18:15:8:4
[ 4] 1.00-2.00 sec 20.0 MBytes 168 Mbits/sec
53.863/21.264/79.252/12.277 ms (160/131080) 1.08 MByte 389.38
771=294:236:126:60:18:24:10:3
[ 4] 2.00-3.00 sec 18.2 MBytes 153 Mbits/sec
58.718/22.000/137.944/20.397 ms (146/130964) 1.06 MByte 325.64
732=299:231:98:52:18:19:10:5
[ 4] 3.00-4.00 sec 19.7 MBytes 165 Mbits/sec 50.448/
8.921/82.728/14.627 ms (158/130588) 997 KByte 409.00
780=300:255:121:58:15:18:7:6
[ 4] 4.00-5.00 sec 18.8 MBytes 158 Mbits/sec
53.826/11.169/115.316/15.541 ms (150/131420) 1.02 MByte 366.24
761=302:226:134:52:22:17:7:1
[ 4] 5.00-6.00 sec 19.5 MBytes 164 Mbits/sec
50.943/11.922/76.134/14.053 ms (156/131276) 1.03 MByte 402.00
759=273:246:149:45:16:18:4:8
[ 4] 6.00-7.00 sec 18.5 MBytes 155 Mbits/sec
57.643/10.039/127.850/18.950 ms (148/130926) 1.05 MByte 336.16
710=262:228:133:37:16:20:8:6
[ 4] 7.00-8.00 sec 19.6 MBytes 165 Mbits/sec
52.498/12.900/77.045/12.979 ms (157/131003) 1.00 MByte 391.78
742=288:200:135:68:16:23:4:8
[ 4] 8.00-9.00 sec 18.0 MBytes 151 Mbits/sec 58.370/
8.026/150.243/21.445 ms (144/131255) 1.06 MByte 323.81
716=268:241:108:51:20:17:8:3
[ 4] 9.00-10.00 sec 18.4 MBytes 154 Mbits/sec
56.112/12.419/79.790/13.668 ms (147/131194) 1.05 MByte 343.70
822=330:303:120:26:16:14:9:4
[ 4] 10.00-10.06 sec 1.03 MBytes 146 Mbits/sec
69.880/45.175/78.754/10.823 ms (9/119632) 1.74 MByte 260.40
62=26:30:5:1:0:0:0:0
[ 4] 0.00-10.06 sec 191 MBytes 159 Mbits/sec 54.183/
8.026/150.243/16.781 ms (1526/131072) 1.03 MByte 366.98
7636=2948:2449:1258:498:175:185:75:48
where (per -e,)
Burst Latency One way TCP write() to read() latency in mean/min‐
imum/maximum/standard deviation format (Note: requires the
client's and server's system clocks to be synchronized to a com‐
mon reference, e.g. using precision time protocol PTP. A GPS
disciplined OCXO is a recommended reference.)
cnt Number of completed bursts received and used for the burst
latency calculations
size Average burst size in bytes (computed average and estimate
only)
inP inP, short for in progress, is the average number of bytes
in progress or in flight. This is taken from the application
level write to read perspective. Note this is a mean value. The
parenthesis value is the standard deviation from the mean.
(Requires --trip-times on client. See Little's law in NOTES.)
NetPwr Network power defined as (throughput / one way latency)
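Little's law (inP = throughput x delay, per the NOTES reference) can be checked against the final summary row above; the 1e-6 NetPwr scaling is an assumption matched only against this sample output:

```shell
# Sketch: for the 0.00-10.06 summary (159 Mbit/s, 54.183 ms mean
# latency), throughput x latency lands near the reported 1.03 MByte of
# in-progress data, and throughput/latency (scaled 1e-6) near the
# reported NetPwr of 366.98.
awk 'BEGIN {
  thr_Bps = 159e6 / 8      # reported throughput, bytes/sec
  lat_s   = 54.183e-3      # reported mean one way latency
  printf "inP~=%.2f MByte  NetPwr~=%.0f\n",
         thr_Bps * lat_s / 1048576, thr_Bps / lat_s * 1e-6
}'
```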
TCP tests (with one way delay sync check -X and --trip-times on the
client)
iperf -c 192.168.1.4 -X -e --trip-times -i 1 -t 2
------------------------------------------------------------
Client connecting to 192.168.1.4, TCP port 5001 with pid 16762 (1
flows)
Write buffer size: 131072 Byte
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 1] Clock sync check (ms): RTT/Half=(3.361/1.680) OWD-
send/ack/asym=(2.246/1.115/1.131)
[ 1] local 192.168.1.1%ap0 port 47466 connected with 192.168.1.4 port
5001 (MSS=1448) (trip-times) (sock=3) (peer 2.1.4-master)
[ ID] Interval Transfer Bandwidth Write/Err Rtry
Cwnd/RTT NetPwr
[ 1] 0.00-1.00 sec 9.50 MBytes 79.7 Mbits/sec 77/0 0
2309K/113914 us 87
[ 1] 1.00-2.00 sec 7.12 MBytes 59.8 Mbits/sec 57/0 0
2492K/126113 us 59
[ 1] 2.00-2.42 sec 128 KBytes 2.47 Mbits/sec 2/0 0
2492K/126113 us 2
[ 1] 0.00-2.42 sec 16.8 MBytes 58.0 Mbits/sec 136/0 0
2492K/126113 us 57
UDP tests (client)
iperf -c <host> -e -i 1 -u -b 10m
------------------------------------------------------------
Client connecting to <host>, UDP port 5001 with pid 5169
Sending 1470 byte datagrams, IPG target: 1176.00 us (kalman adjust)
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 45.56.85.133 port 32943 connected with 45.33.58.123 port
5001
[ ID] Interval Transfer Bandwidth Write/Err PPS
[ 3] 0.00-1.00 sec 1.19 MBytes 10.0 Mbits/sec 852/0 851 pps
[ 3] 1.00-2.00 sec 1.19 MBytes 10.0 Mbits/sec 850/0 850 pps
[ 3] 2.00-3.00 sec 1.19 MBytes 10.0 Mbits/sec 850/0 850 pps
[ 3] 3.00-4.00 sec 1.19 MBytes 10.0 Mbits/sec 851/0 850 pps
[ 3] 4.00-5.00 sec 1.19 MBytes 10.0 Mbits/sec 850/0 850 pps
[ 3] 5.00-6.00 sec 1.19 MBytes 10.0 Mbits/sec 850/0 850 pps
[ 3] 6.00-7.00 sec 1.19 MBytes 10.0 Mbits/sec 851/0 850 pps
[ 3] 7.00-8.00 sec 1.19 MBytes 10.0 Mbits/sec 850/0 850 pps
[ 3] 8.00-9.00 sec 1.19 MBytes 10.0 Mbits/sec 851/0 850 pps
[ 3] 0.00-10.00 sec 11.9 MBytes 10.0 Mbits/sec 8504/0 850 pps
[ 3] Sent 8504 datagrams
[ 3] Server Report:
[ 3] 0.00-10.00 sec 11.9 MBytes 10.0 Mbits/sec 0.047 ms 0/ 8504
(0%) 0.537/ 0.392/23.657/ 0.497 ms 850 pps 2329.37
where (per -e,)
Write/Err Total number of successful socket writes. Total number
of non-fatal socket write errors
PPS Transmit packet rate in packets per second
UDP tests (server)
iperf -s -i 1 -w 4M -u
------------------------------------------------------------
Server listening on UDP port 5001
Receiving 1470 byte datagrams
UDP buffer size: 8.00 MByte (WARNING: requested 4.00 MByte)
------------------------------------------------------------
[ 3] local 192.168.1.4 port 5001 connected with 192.168.1.1 port 60027
(WARN: winsize=8.00 MByte req=4.00 MByte) (trip-times) (0.0) (peer
2.0.14-alpha)
[ ID] Interval Transfer Bandwidth Jitter Lost/Total
Latency avg/min/max/stdev PPS inP NetPwr
[ 3] 0.00-1.00 sec 44.5 MBytes 373 Mbits/sec 0.071 ms 52198/83938
(62%) 75.185/ 2.367/85.189/14.430 ms 31854 pps 3.64 MByte 620.58
[ 3] 1.00-2.00 sec 44.8 MBytes 376 Mbits/sec 0.015 ms
59549/143701 (41%) 79.609/75.603/85.757/ 1.454 ms 31954 pps 3.56 MByte
590.04
[ 3] 2.00-3.00 sec 44.5 MBytes 373 Mbits/sec 0.017 ms
59494/202975 (29%) 80.006/75.951/88.198/ 1.638 ms 31733 pps 3.56 MByte
583.07
[ 3] 3.00-4.00 sec 44.5 MBytes 373 Mbits/sec 0.019 ms
59586/262562 (23%) 79.939/75.667/83.857/ 1.145 ms 31767 pps 3.56 MByte
583.57
[ 3] 4.00-5.00 sec 44.5 MBytes 373 Mbits/sec 0.081 ms
59612/322196 (19%) 79.882/75.400/86.618/ 1.666 ms 31755 pps 3.55 MByte
584.40
[ 3] 5.00-6.00 sec 44.7 MBytes 375 Mbits/sec 0.064 ms
59571/381918 (16%) 79.767/75.571/85.339/ 1.556 ms 31879 pps 3.56 MByte
588.02
[ 3] 6.00-7.00 sec 44.6 MBytes 374 Mbits/sec 0.041 ms
58990/440820 (13%) 79.722/75.662/85.938/ 1.087 ms 31820 pps 3.58 MByte
586.73
[ 3] 7.00-8.00 sec 44.7 MBytes 375 Mbits/sec 0.027 ms
59679/500548 (12%) 79.745/75.704/84.731/ 1.094 ms 31869 pps 3.55 MByte
587.46
[ 3] 8.00-9.00 sec 44.3 MBytes 371 Mbits/sec 0.078 ms
59230/559499 (11%) 80.346/75.514/94.293/ 2.858 ms 31590 pps 3.58 MByte
577.97
[ 3] 9.00-10.00 sec 44.4 MBytes 373 Mbits/sec 0.073 ms
58782/618394 (9.5%) 79.125/75.511/93.638/ 1.643 ms 31702 pps 3.55 MByte
588.99
[ 3] 10.00-10.08 sec 3.53 MBytes 367 Mbits/sec 0.129 ms
6026/595236 (1%) 94.967/80.709/99.685/ 3.560 ms 31107 pps 3.58 MByte
483.12
[ 3] 0.00-10.08 sec 449 MBytes 374 Mbits/sec 0.129 ms
592717/913046 (65%) 79.453/ 2.367/99.685/ 5.200 ms 31776 pps (null)
587.91
where (per -e,)
Latency End to end latency in mean/minimum/maximum/standard
deviation format (Note: requires the client's and server's sys‐
tem clocks to be synchronized to a common reference, e.g. using
precision time protocol PTP. A GPS disciplined OCXO is a recom‐
mended reference.)
PPS Received packet rate in packets per second
inP inP, short for in progress, is the average number of bytes
in progress or in flight. This is taken from an application
write to read perspective. (Requires --trip-times on client. See
Little's law in NOTES.)
NetPwr Network power defined as (throughput / latency)
Isochronous UDP tests (client)
iperf -c 192.168.100.33 -u -e -i 1 --isochronous=60:100m,10m --realtime
------------------------------------------------------------
Client connecting to 192.168.100.33, UDP port 5001 with pid 14971
UDP isochronous: 60 frames/sec mean= 100 Mbit/s, stddev=10.0 Mbit/s,
Period/IPG=16.67/0.005 ms
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.100.76 port 42928 connected with 192.168.100.33
port 5001
[ ID] Interval Transfer Bandwidth Write/Err PPS
frames:tx/missed/slips
[ 3] 0.00-1.00 sec 12.0 MBytes 101 Mbits/sec 8615/0 8493 pps
62/0/0
[ 3] 1.00-2.00 sec 12.0 MBytes 100 Mbits/sec 8556/0 8557 pps
60/0/0
[ 3] 2.00-3.00 sec 12.0 MBytes 101 Mbits/sec 8586/0 8586 pps
60/0/0
[ 3] 3.00-4.00 sec 12.1 MBytes 102 Mbits/sec 8687/0 8687 pps
60/0/0
[ 3] 4.00-5.00 sec 11.8 MBytes 99.2 Mbits/sec 8468/0 8468 pps
60/0/0
[ 3] 5.00-6.00 sec 11.9 MBytes 99.8 Mbits/sec 8519/0 8520 pps
60/0/0
[ 3] 6.00-7.00 sec 12.1 MBytes 102 Mbits/sec 8694/0 8694 pps
60/0/0
[ 3] 7.00-8.00 sec 12.1 MBytes 102 Mbits/sec 8692/0 8692 pps
60/0/0
[ 3] 8.00-9.00 sec 11.9 MBytes 100 Mbits/sec 8537/0 8537 pps
60/0/0
[ 3] 9.00-10.00 sec 11.8 MBytes 99.0 Mbits/sec 8450/0 8450 pps
60/0/0
[ 3] 0.00-10.01 sec 120 MBytes 100 Mbits/sec 85867/0 8574 pps
602/0/0
[ 3] Sent 85867 datagrams
[ 3] Server Report:
[ 3] 0.00-9.98 sec 120 MBytes 101 Mbits/sec 0.009 ms 196/85867
(0.23%) 0.665/ 0.083/ 1.318/ 0.174 ms 8605 pps 18903.85
where (per -e,)
frames:tx/missed/slips Total number of isochronous frames or
bursts. Total number of frame ids not sent. Total number of
frame slips
Isochronous UDP tests (server)
iperf -s -e -u --udp-histogram=100u,2000 --realtime
------------------------------------------------------------
Server listening on UDP port 5001 with pid 5175
Receiving 1470 byte datagrams
UDP buffer size: 208 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.100.33 port 5001 connected with 192.168.100.76 port
42928 isoch (peer 2.0.13-alpha)
[ ID] Interval Transfer Bandwidth Jitter Lost/Total
Latency avg/min/max/stdev PPS NetPwr Frames/Lost
[ 3] 0.00-9.98 sec 120 MBytes 101 Mbits/sec 0.010 ms 196/85867
(0.23%) 0.665/ 0.083/ 1.318/ 0.284 ms 8585 pps 18903.85 601/1
[ 3] 0.00-9.98 sec T8(f)-PDF:
bin(w=100us):cnt(85671)=1:2,2:844,3:10034,4:8493,5:8967,6:8733,7:8823,8:9023,9:8901,10:8816,11:7730,12:4563,13:741,14:1
(5.00/95.00%=3/12,Outliers=0,obl/obu=0/0)
[ 3] 0.00-9.98 sec F8(f)-PDF:
bin(w=100us):cnt(598)=15:2,16:1,17:27,18:68,19:125,20:136,21:103,22:83,23:22,24:23,25:5,26:3
(5.00/95.00%=17/24,Outliers=0,obl/obu=0/0)
where, Frames/lost Total number of frames (or bursts) received. Total
number of bursts lost or errored
T8-PDF(f) Latency histogram for packets
F8-PDF(f) Latency histogram for frames
ENVIRONMENT
Note: The environment variable option settings haven't been maintained
well. See the source code if these are of interest.
NOTES
Numeric options: Some numeric options support format characters per
'<value>c' (e.g. 10M) where the c format characters are k,m,g,K,M,G.
Lowercase format characters are 10^3 based and uppercase are 2^n based,
e.g. 1k = 1000, 1K = 1024, 1m = 1,000,000 and 1M = 1,048,576
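The suffix rules above can be sketched as a small helper (a hypothetical parser, not iperf's own code; it only mirrors the 10^3 lowercase / 2^n uppercase convention stated here):

```python
def parse_rate(text: str) -> int:
    # Lowercase suffixes are powers of 1000, uppercase powers of 1024.
    suffixes = {"k": 10**3, "m": 10**6, "g": 10**9,
                "K": 2**10, "M": 2**20, "G": 2**30}
    if text and text[-1] in suffixes:
        return int(float(text[:-1]) * suffixes[text[-1]])
    return int(text)

print(parse_rate("1k"))   # 1000
print(parse_rate("1K"))   # 1024
print(parse_rate("10M"))  # 10485760
```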
Rate limiting: The -b option supports read and write rate limiting at
the application level. The -b option on the client also supports vari‐
able offered loads through the <mean>,<standard deviation> format, e.g.
-b 100m,10m. The distribution used is log normal; the isochronous
option behaves similarly. The -b on the server rate limits the reads. Socket
based pacing is also supported using the --fq-rate long option. This
will work with the --reverse and --full-duplex options as well.
IP tos: Specifies the type-of-service or DSCP class for connections.
Accepted values are af11, af12, af13, af21, af22, af23, af31, af32,
af33, af41, af42, af43, cs0, cs1, cs2, cs3, cs4, cs5, cs6, cs7, ef, le,
nqb, nqb2, ac_be, ac_bk, ac_vi, ac_vo, lowdelay, throughput, reliabil‐
ity, a numeric value, or none to use the operating system default. The
ac_xx values are the four access categories defined in WMM for Wi-Fi,
and they are aliases for DSCP values that will be mapped to the corre‐
sponding ACs under the assumption that the device uses the DSCP-to-UP
mapping table specified in IETF RFC 8325.
--trip-times The --trip-times option enables many one way delay (OWD)
metrics. Also note that using --trip-times on a TCP client will cause
--tcp-write-prefetch to be set to a small value if tcp-write-prefetch
hasn't also been set. This is done to reduce send side bloat
latency (which is unrelated to network induced latency.) Set --tcp-
write-prefetch to zero to disable this (which will disable TCP_NOT‐
SENT_LOWAT) and will allow for send side bloat.
Synchronized clocks: The --trip-times option indicates that the
client's and server's clocks are synchronized to a common reference.
Network Time Protocol (NTP) or Precision Time Protocol (PTP) are com‐
monly used for this. The reference clock(s) error and the synchroniza‐
tion protocols will affect the accuracy of any end to end latency mea‐
surements. See the bounceback NOTES section on unsynchronized clock
detection.
Histograms and non-parametric statistics: The --histograms option pro‐
vides the raw data where nothing is averaged. This is useful for non-
parametric distributions, e.g. latency. The standard output does use
the central limit theorem to produce average, minimum, maximum and
variation. This loses information when the underlying distribution is
not Gaussian. Histograms are supported so this information is made
available.
Histogram output interpretation: Below is an example bounceback his‐
togram and how to interpret it
[ 1] 0.00-5.10 sec BB8-PDF:
bin(w=100us):cnt(50)=35:1,37:1,39:1,40:3,41:4,42:1,43:1,52:1,57:1,65:1,68:1,69:1,70:1,72:2,74:1,75:5,78:1,79:2,80:4,81:3,82:1,83:1,88:2,90:2,92:1,94:1,117:1,126:1,369:1,1000:1,1922:1,3710:1
(5.00/95.00/99.7%=39/1000/3710,Outliers=4,obl/obu=0/0)
where, [ 1] The traffic thread number
0.00-5.10 sec The time interval of the histogram
BB8-PDF BB8 is the histogram name and the PDF indicates a his‐
togram raw output
bin(w=100us) provides the bin width. The bin width of this his‐
togram is 100 microseconds
cnt(50) provides the total number of samples in the histogram.
There are 50 samples in this histogram
35:1 provides the bin number, then the number of samples in that
bin. Bin 35 with bin width 100us covers 3.4 ms - 3.5 ms, and one
sample landed there
5.00/95.00/99.7%=39/1000/3710 provides the bin confidence
intervals (per the integrated cumulative distribution function.) 5%
landed in 3.9 ms or better (recall bin number multiplied by bin
width.) 95% landed in 100 ms or better. 99.7%, or three standard
deviations, landed in 371 ms or better
Outliers=4 provides the outlier count, similar to 3IQR (3 times
the inter quartile range) but uses 10% and 90% for the inner and
outer fence posts, then 3 times that for outlier detection.
obl/obu=0/0, out of bounds lower and out of bounds upper, provides
the number of samples that could not be binned because the value
landed outside of all possible bins
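The bin-number-to-latency conversion described above can be sketched as follows (a hypothetical helper: it only applies the stated rule that a bin number times the bin width gives the latency bound):

```python
# Interpreting the confidence-interval bin numbers from a histogram
# line such as "(5.00/95.00/99.7%=39/1000/3710,...)" with w=100us.
BIN_WIDTH_US = 100

def bin_upper_ms(bin_no: int) -> float:
    # Upper edge of a bin, converted from microseconds to milliseconds.
    return bin_no * BIN_WIDTH_US / 1000.0

# The percentile values are bin numbers from the cumulative distribution:
for pct, bin_no in ((5.0, 39), (95.0, 1000), (99.7, 3710)):
    print(f"{pct}% of samples landed in {bin_upper_ms(bin_no)} ms or better")
```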
Binding is done at the logical level of port and ip address (or layer
3) using the -B option and a colon as the separator between port and
the ip addr. Binding at the device (or layer 2) level requires the per‐
cent (%) as the delimiter (for both the client and the server.) An
example for src port and ip address is -B 192.168.1.1:6001. To bind the
src port only and let the operating system choose the source ip address
use 0.0.0.0, e.g. -B 0.0.0.0:6001. On the client, the -B option
affects the bind(2) system call, and will set the source ip address and
the source port, e.g. iperf -c <host> -B 192.168.100.2:6002. This con‐
trols the packet's source values but not routing. These can be confus‐
ing in that a route or device lookup may not be that of the device with
the configured source IP. So, for example, if the IP address of eth0
is used for -B and the routing table for the destination IP address
resolves the output interface to be eth1, then the host will send the
packet out device eth1 while using the source IP address of eth0 in the
packet. To affect the physical output interface (e.g. dual homed sys‐
tems) either use -c <host>%<dev> (requires root) which bypasses this
host route table lookup, or configure policy routing per each -B source
address and set the output interface appropriately in the policy
routes. On the server or receive, only packets destined to -B IP
address will be received. It's also useful for multicast. For example,
iperf -s -B 224.0.0.1%eth0 will only accept ip multicast packets with
dest ip 224.0.0.1 that are received on the eth0 interface, while iperf
-s -B 224.0.0.1 will receive those packets on any interface. Finally,
the device specifier is required for v6 link-local, e.g. -c
[v6addr]%<dev> -V, to select the output interface.
Reverse, full-duplex, dualtest (-d) and tradeoff (-r): The --reverse
(-R) and --full-duplex options can be confusing when compared to the
older options of --dualtest (-d) and --tradeoff (-r). The newer options
of --reverse and --full-duplex only open one socket and read and write
to the same socket descriptor, i.e. use the socket in full duplex mode.
The older -d and -r open second sockets in the opposite direction and
do not use a socket in full duplex mode. Note that full duplex applies
to the socket and not to the network devices and that full duplex sock‐
ets are supported by the operating systems regardless if an underlying
network supports full duplex transmission and reception. It's
suggested to use --reverse (or -R on non-Windows systems) if you want
to test through a NAT firewall. This applies role reversal of the test
after opening the full duplex socket. (Note: Firewall piercing may be
required to use -d and -r if a NAT gateway is in the path.)
Also, the --reverse -b <rate> setting behaves differently for TCP and
UDP. For TCP it will rate limit the read side, i.e. the iperf client
(role reversed to act as a server) reading from the full duplex socket.
This will in turn flow control the reverse traffic per standard TCP
congestion control. The --reverse -b <rate> will be applied on transmit
(i.e. the server role reversed to act as a client) for UDP since there
is no flow control with UDP. There is no option to directly rate limit
the writes with TCP testing when using --reverse.
Bounceback The bounceback test allows one to measure network respon‐
siveness (which, in this test, is an inverse of latency.) The units
are responses per second or rps. Latency is merely delay in units of
time. Latency metrics require one to know the delay of what's being
measured. For bounceback it's a client write to a server read followed
by a server write and then the client read. The original write is
bounced back. Iperf 2 sets up the socket with TCP_NODELAY and possibly
TCP_QUICKACK (unless disabled). The client sends a small write (which
defaults to 100 bytes unless -l is set) and issues a read waiting for
the "bounceback" from the server. The server waits for a read and then
optionally delays before sending the payload back. This repeats until
the traffic ends. Results are shown in units of rps and time delays.
The TCP_QUICKACK socket option will be enabled during bounceback tests
when the bounceback-hold is set to a non-zero value. The socket option
is applied after every read() on the server and before the hold delay
call. It's also applied on the client. Use --bounceback-no-quickack to
have TCP run in default mode per the socket (which is most likely
TCP_QUICKACK being off.)
Unsynchronized clock detections with --bounceback and --trip-times (as
of March 19, 2023): Iperf 2 can detect when the clocks have synchro‐
nization errors larger than the bounceback RTT. This is done via the
client's send timestamp (clock A), the server's receive timestamp
(clock B) and the client's final receive timestamp (clock A.) The
check, done on each bounceback, is write(A) < read(B) < read(A). This
is supported in bounceback tests with a slight adjustment: clock
write(A) < clock read(B) < clock read(A) - (clock write(B) - clock
read(B)). All the timestamps are sampled on the initial write or read
(not the completion of.) Error output looks as shown below and there
is no output for a zero value.
[ 1] 0.00-10.00 sec Clock sync error count = 100
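The inequality check described above can be sketched as follows (a minimal illustration with hypothetical timestamps; the function name is mine, not iperf's):

```python
# Bounceback clock-sanity check: write(A) < read(B) < read(A) minus the
# server's hold time (write(B) - read(B)), per the adjusted inequality.
def clocks_look_synced(write_a: float, read_b: float,
                       write_b: float, read_a: float) -> bool:
    server_hold = write_b - read_b
    return write_a < read_b < (read_a - server_hold)

# Example timestamps in seconds (client clock A, server clock B):
print(clocks_look_synced(10.000, 10.002, 10.003, 10.006))  # True
print(clocks_look_synced(10.000, 9.998, 9.999, 10.006))    # False: B behind A
```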
TCP Connect times: The TCP connect time (or three way handshake) can be
seen on the iperf client when the -e (--enhanced) option is set. Look
for the ct=<value> in the connected message, e.g. in '[ 3] local
192.168.1.4 port 48736 connected with 192.168.1.1 port 5001 (ct=1.84
ms)' shows the 3WHS took 1.84 milliseconds.
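Extracting that connect time from a client log line can be sketched like this (the log line is copied from the example above; the regex is an assumption about the surrounding text, not iperf's own format guarantee):

```python
import re

# Pull the 3WHS connect time out of an iperf -e client "connected" line.
line = ("[  3] local 192.168.1.4 port 48736 connected with "
        "192.168.1.1 port 5001 (ct=1.84 ms)")
m = re.search(r"ct=([\d.]+)\s*ms", line)
if m:
    print(f"connect time: {float(m.group(1))} ms")  # connect time: 1.84 ms
```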
Port-range Port ranges are supported using the hyphen notation, e.g.
6001-6009. This will cause multiple threads, one per port, on either
the listener/server or the client. The user needs to take care that the
ports in the port range are available and not already in use per the
operating system. The -P is supported on the client and will apply to
each destination port within the port range. Finally, this can be used
as a workaround for Windows UDP and -P > 1, as Windows doesn't dispatch
UDP per a server's connect and the quintuple.
Packet per second (pps) calculation The packets per second calculation
is done as a derivative, i.e. number of packets divided by time. The
time is taken from the previous last packet to the current last packet.
It is not the sample interval time. The last packet can land at differ‐
ent times within an interval. This means that pps does not have to
match rx bytes divided by the sample interval. Also, with --trip-times
set, the packet time on receive is set by the sender's write time so
pps indicates the end to end pps with --trip-times. The RX pps calcula‐
tion is receive side only when -e is set and --trip-times is not set.
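The derivative calculation described above can be sketched as follows (hypothetical numbers; the point is that the denominator is packet-time based, not the sample interval):

```python
# pps = packets divided by (time of this interval's last packet minus
# time of the previous interval's last packet).
def pps(packets: int, prev_last_t: float, last_t: float) -> float:
    return packets / (last_t - prev_last_t)

# 850 packets whose last packet landed 0.9985 s after the previous
# interval's last packet report ~851 pps even in a 1.0 s sample interval:
print(round(pps(850, 2.0000, 2.9985)))  # 851
```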
Little's Law in queuing theory is a theorem that determines the average
number of items (L) in a stationary queuing system based on the average
waiting time (W) of an item within a system and the average number of
items arriving at the system per unit of time (lambda). Mathematically,
it's L = lambda * W. As used here, the units are bytes. The arrival
rate is taken from the writes.
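Applied to the inP metric, L = lambda * W works out as below (a sketch with hypothetical numbers chosen to resemble the UDP server example earlier in this page):

```python
# Little's law in bytes: lambda is the write (arrival) rate in bytes/s,
# W is the average write-to-read delay in seconds, L the bytes in flight.
def bytes_in_progress(write_rate_bps: float, delay_s: float) -> float:
    return (write_rate_bps / 8.0) * delay_s

# e.g. a 373 Mbit/s offered load with ~80 ms write-to-read latency:
inp = bytes_in_progress(373e6, 0.080)
print(f"{inp / 2**20:.2f} MByte in flight")  # 3.56 MByte in flight
```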
Network power: The network power (NetPwr) metric is experimental. It's
a convenience function defined as throughput/delay. For TCP transmits,
the delay is the sampled RTT times. For TCP receives, the delay is the
write to read latency. For UDP the delay is the end/end latency.
Don't confuse this with the physics definition of power (delta
energy/delta time); it's more a measure of a desirable property
divided by an undesirable property. Also note, one must use -i interval
with TCP to get this as that's what sets the RTT sampling rate. The
metric is scaled to assist with human readability.
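The ratio can be sketched as below. The scale factor here is arbitrary and mine, not iperf's (the page only says the metric is scaled for readability); the values are hypothetical:

```python
# Network power = throughput / delay, scaled for readability.
def net_power(throughput_bps: float, delay_s: float,
              scale: float = 1e-7) -> float:
    return throughput_bps / delay_s * scale

# Higher throughput or lower delay both increase network power:
print(net_power(374e6, 0.0795) > net_power(374e6, 0.0900))  # True
```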
Multicast: Iperf 2 supports multicast with a couple of caveats. First,
multicast streams cannot take advantage of the -P option. The server
will serialize multicast streams. Also, it's highly encouraged to use a
-t on a server that will be used for multicast clients. That is because
the single end of traffic packet sent from client to server may get
lost and there are no redundant end of traffic packets. Setting -t on
the server will kill the server thread in the event this packet is
indeed lost.
TCP_QUICKACK: The TCP_QUICKACK socket option will be applied after
every read() on the server such that TCP acks are sent immediately,
rather than possibly delayed.
TCP_TX_DELAY (--tcp-tx-delay): Iperf 2 flows can set different delays,
simulating real world conditions. Units are microseconds. This requires
the FQ packet scheduler or an EDT-enabled NIC. Note that FQ packet
scheduler limits might need some tweaking
man tc-fq
PARAMETERS
limit
Hard limit on the real queue size. When this limit is
reached, new packets are dropped. If the value is lowered,
packets are dropped so that the new limit is met. Default
is 10000 packets.
flow_limit
Hard limit on the maximum number of packets queued per
flow. Default value is 100.
Use of the TCP_TX_DELAY option will increase the number of skbs in the
FQ qdisc, so packets would be dropped if either of the previous limits
is hit. Using
big delays might very well trigger old bugs in TSO auto defer logic
and/or sndbuf limited detection.
Fast Sampling: Use ./configure --enable-fastsampling and then compile
from source to enable four digit (e.g. 1.0000) precision in reports'
timestamps. Useful for sub-millisecond sampling.
DIAGNOSTICS
Use ./configure --enable-thread-debug and then compile from source to
enable both asserts and advanced debugging of the tool itself.
BUGS
See https://sourceforge.net/p/iperf2/tickets/
AUTHORS
Iperf2, based on iperf (originally written by Mark Gates and Alex
Warshavsky), has a goal of maintenance with some feature enhancement.
Other contributions from Ajay Tirumala, Jim Ferguson, Jon Dugan <jdugan
at x1024 dot net>, Feng Qin, Kevin Gibbs, John Estabrook <jestabro at
ncsa.uiuc.edu>, Andrew Gallatin <gallatin at gmail.com>, Stephen Hem‐
minger <shemminger at linux-foundation.org>, Tim Auckland <tim.auckland
at gmail.com>, Robert J. McMahon <rjmcmahon at rjmcmahon.com>
SEE ALSO
accept(2),bind(2),close(2),connect(2),fcntl(2),getpeername(2),getsock‐
name(2),getsockopt(2),listen(2),read(2),recv(2),select(2),send(2),set‐
sockopt(2),shutdown(2),write(2),ip(7),socket(7),tcp(7),udp(7)
Source code at http://sourceforge.net/projects/iperf2/
"Unix Network Programming, Volume 1: The Sockets Networking API (3rd
Edition) 3rd Edition" by W. Richard Stevens (Author), Bill Fenner
(Author), Andrew M. Rudoff (Author)
NLANR/DAST March 2024 IPERF(1)