iPerf is an open source network testing and troubleshooting tool. The purpose of iPerf is to measure end-to-end network throughput and link quality within data centers and across internet connections. iPerf is also used to baseline cloud performance and troubleshoot network issues. Note that iPerf does not provide reliable latency testing or application metrics.
The software is based on a client/server model where TCP or UDP traffic streams are generated between client and server sockets. This is referred to as memory-to-memory socket-level communication for network testing. iPerf3 also supports disk read/write tests that can identify server hardware, rather than the network, as the performance bottleneck. It is also a popular tool for troubleshooting network and application problems.
Use Case Examples
- network throughput testing
- voice and video link quality
- troubleshooting performance
- cloud throughput
- network stress testing
- wireless throughput
- web application tuning
iPerf3 vs Speedtest
iPerf3 is Linux-based and is not recommended or officially supported on Windows for various reasons. It is the most recent version and is not installed by default on most Linux distributions.
iPerf3 is not compatible with iPerf2, whether running on Linux or Windows. The two versions support overlapping but different feature sets, which affects test results and the reports generated. iPerf2 is an older testing platform that is supported on both Windows and Linux.
Speedtest is a popular online tool for measuring throughput that has both advantages and disadvantages. It is used primarily for testing the upload and download speed of your internet connection from a web browser. Speedtest also reports idle and loaded latency based on the server location that you select.
Speedtest does not test throughput between host endpoints within your own network or administrative domain. For example, you cannot use Speedtest to measure throughput between your data center and a cloud server or another data center server. This is an important distinction from iPerf3, which reports actual performance metrics for your network.
Speedtest is an effective tool for estimating bandwidth delay product (BDP) on an internet link, where most latency occurs. Keep in mind that Speedtest can only estimate performance to, for example, a data center in Los Angeles based on a nearby Speedtest server. Your actual internet connection would terminate at an ISP in Los Angeles, where throughput and latency could vary. Other factors also affect results since this is not end-to-end testing to your data center.
It should be noted that many performance testing tools, such as Ookla Speedtest, report higher throughput than iPerf since they include protocol headers. iPerf does not include protocol headers, so the reported throughput is actual data payload. For example, iPerf would report roughly 950 Mbps at best when testing Gigabit Ethernet (1000 Mbps theoretical speed). Approximately 5% is deducted for Ethernet, IP, and TCP headers, which reduces actual throughput.
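The header overhead above can be sanity-checked with simple shell arithmetic. This is a rough sketch assuming a standard 1500-byte MTU, where each full-size Ethernet frame carries 1460 bytes of TCP payload out of roughly 1538 bytes on the wire:

```shell
# Approximate goodput on Gigabit Ethernet (1000 Mbps line rate).
# Assumes a 1500-byte MTU: 1460 bytes TCP payload per ~1538 bytes on the wire
# (20 TCP + 20 IP headers, 18 Ethernet header/FCS, 20 preamble + inter-frame gap).
payload=1460
wire=1538
line_rate_mbps=1000
goodput_mbps=$(( line_rate_mbps * payload / wire ))
echo "$goodput_mbps"   # roughly 949 Mbps of actual data payload
```

This lines up with the ~950 Mbps figure iPerf typically reports on a Gigabit link.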
Throughput vs Latency
TCP-based applications comprise most internet and data center traffic. TCP window size has the greatest effect on maximum throughput. This is important to understand when analyzing iPerf or Wireshark reports with TCP attributes such as MSS, window size, scaling factor, and throughput. Application developers and DevOps teams can use iPerf and Speedtest when fine-tuning memory buffers beyond the Windows and Linux default settings.
TCP throughput is most affected by network latency, since latency is used to calculate the bandwidth delay product (BDP). Network engineers must minimize latency and packet loss, particularly across internet links where network performance is most affected. The TCP window can expand when packet loss is minimal. UDP is much less affected by network latency since it is connectionless, with less protocol overhead than TCP. It sends datagrams as fast as possible based on average latency and any rate limiting applied.
You might intuitively conclude that UDP throughput is higher than TCP; however, packet loss with UDP could cause similar or even lower effective throughput. Fine-tuning UDP for a larger packet size could increase throughput while balancing acceptable packet loss. Network engineers should also verify that window scaling is working correctly and fine-tune memory buffers. This is most applicable to a long fat network (LFN), which has both high bandwidth and high latency.
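To experiment with the window tuning described above, iPerf3 lets you set the socket buffer (and therefore the effective TCP window) with the -w option. A hypothetical example against a server at 198.51.100.10 (substitute your own server address):

```shell
# Test with a 1 MB socket buffer; useful on long fat networks (LFNs)
# where the OS default window can limit throughput.
iperf3 -c 198.51.100.10 -w 1M

# Compare against a run using the OS default window to see the effect of tuning.
iperf3 -c 198.51.100.10
```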
Throughput Calculation
You have an internet connection to Los Angeles that Speedtest reports as 150 Mbps download speed and 60 ms idle latency. What is the average throughput per second you could expect when doing a file transfer?
BDP = bandwidth (bps) x RTT latency (sec)
    = 150,000,000 bps x 0.060 sec
    = 9,000,000 bits
    = 9,000,000 bits / 8 bits/byte = 1,125,000 bytes (1.125 MB) -> window scaling required (> 64 KB)
Throughput = TCP window size / RTT latency
    = 1,125,000 bytes / 0.060 sec
    = 18,750,000 bytes/sec (18.75 MB/sec)
    = 18,750,000 x 8 bits/byte = 150,000,000 bits/sec (150 Mbps)
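The same calculation can be scripted. A minimal sketch using shell integer arithmetic with the numbers above (150 Mbps bandwidth, 60 ms idle RTT):

```shell
# Bandwidth delay product and expected throughput for 150 Mbps / 60 ms RTT.
bw_bps=150000000      # bandwidth in bits per second
rtt_ms=60             # round-trip time in milliseconds

bdp_bits=$(( bw_bps * rtt_ms / 1000 ))   # 9,000,000 bits
bdp_bytes=$(( bdp_bits / 8 ))            # 1,125,000 bytes (window scaling required)

# Throughput = TCP window size / RTT; with the window sized to the BDP,
# this recovers the full 150 Mbps link rate.
tput_Bps=$(( bdp_bytes * 1000 / rtt_ms ))  # 18,750,000 bytes/sec
tput_bps=$(( tput_Bps * 8 ))               # 150,000,000 bits/sec
echo "$bdp_bytes $tput_Bps $tput_bps"
```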
Speedtest provides bandwidth and idle RTT latency to its nearest test server for an estimate only. You could also use MTR or ping for RTT latency and an iPerf report for bandwidth to calculate the bandwidth delay product (BDP).
iPerf reports data transfer in MBytes (MB) and throughput (bitrate) in bps. This is based on the report interval (-i) you select, with the default being 1 second. The data transfer value represents how many bytes were sent per time interval. Reporting bytes is convenient since videos, files, and web pages are sized in bytes instead of bits.
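For example, a client run that reports every 2 seconds during a 10-second test (the server address is a placeholder):

```shell
# -i 2 : report interval of 2 seconds (default is 1 second)
# -t 10: run the test for 10 seconds (the default duration)
iperf3 -c 198.51.100.10 -i 2 -t 10
```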
What Causes Latency?
Network latency is the most important performance metric affecting both throughput and application performance. There are five sources of latency: propagation delay, transmission delay, queuing delay, processing delay, and protocol delay. Any reported latency represents the sum of all sources as a single RTT value. The effect of propagation delay is by far the biggest contributor. It is based on the distance between source and destination and varies with network media type (fiber, copper, wireless) and number of hops.
This also explains the popularity of CDN services that provide local access points where content is cached. The other significant contributor to latency is transmission delay, which is a function of interface speed (bandwidth). A bandwidth upgrade will reduce transmission delay, but it is not a quick-fix solution since there are multiple causes of latency. Protocol delay is inherent to TCP, with its flow control and retransmissions. DNS and SSL/TLS handshakes are a common source of protocol delay as well with web-based applications.
Most performance tools report network latency as round-trip time (RTT). Measure latency in both directions, since it is not necessarily symmetrical and the reverse path could be higher. This can result from a different reverse path with more hops, lower-bandwidth links, or queuing delays. Jitter, the amount of variation in latency, is also reported by iPerf and MTR. It is most relevant to delay-sensitive voice and video applications.

How to Install iPerf3
There are various methods available to install iPerf on your client and server machines. The following Linux commands update the package index and then install iPerf3. Do not install iPerf3 on Windows since it is not supported.
sudo apt update
sudo apt -y install iperf3
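After installing, you can verify the version and start a server instance. Run the server command on the machine that will receive test traffic:

```shell
iperf3 --version   # confirm the installed iPerf3 version
iperf3 -s          # start in server mode, listening on port 5201
```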
How iPerf3 Works
The iPerf3 client creates a control channel on TCP port 5201 to start and stop tests. The iPerf3 server listens on the same TCP or UDP port for test data, based on the selected protocol. This means your firewall must allow the same TCP and UDP port number for testing with iPerf3.

iPerf3 test parameters from the client are sent across the control channel to the server, along with test results from the server. The iPerf3 server listens on TCP port 5201 for a client connection by default. The client is randomly assigned an ephemeral (dynamic) port greater than 1023. You can assign a non-default port to the server instead of 5201 with the -p option. The client command must include the new destination port as well, since the client will send packets to the default port if not specified.
Tests will sometimes end with data still in flight between client and server. This effect can be significant for tests shorter than 5 seconds, but it is often negligible for longer tests. iPerf commands on the TCP control channel can terminate testing before all of the test data has been processed. The result is a mismatch between the data (bytes) sent from the client and what was received at the server (data transfer), or vice versa. Server buffer overflow could also account for more data sent than received.
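For example, running the server on a hypothetical alternate port 5000 requires the same -p value on both sides:

```shell
# On the server:
iperf3 -s -p 5000

# On the client (198.51.100.10 is a placeholder server address);
# without -p, the client would try the default port 5201 and fail to connect.
iperf3 -c 198.51.100.10 -p 5000
```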
Tips and Tricks
- Stop server mode service after testing to prevent unauthorized client connections.
- Identify any host-based and/or network firewalls to request open ports TCP/UDP 5201 (iPerf3) or select a common port (TCP 443).
- Reverse mode supports bidirectional testing across NAT and firewalls.
- Client network interface speed determines the maximum bitrate available for testing.
- Update to the most current version of iPerf for best results.
- RDP can initiate tests remotely from different locations to test servers or laptops.
- iPerf reports actual data payload throughput and not theoretical capacity.
Bidirectional Mode
Most applications (HTTP, FTP, DNS, etc.) are bidirectional. Ethernet and serial interfaces support multiple full-duplex bidirectional application sessions across a single physical link. The notable exception is wireless networking, which is bidirectional but half-duplex.
Each application session creates a single socket on the client and server. This is referred to as bidirectional same-socket communication and is typical of TCP. The exception is an application such as FTP that has separate sockets for the control channel and data channel. Most bandwidth usage, however, is across the data channel.
UDP creates unidirectional sessions since it is connectionless, which makes it well-suited to voice and video traffic. UDP sends traffic in a single direction and supports bidirectional traffic by using a separate socket for the return direction. Each side of a voice call, for example, would have a separate UDP socket.
iPerf3 supports reverse mode (-R) testing that measures throughput from server to client. The default for iPerf is testing throughput from client to server. The advantage of reverse mode is same-socket communication initiated from the client, which enables NAT and firewall traversal. A firewall, for example, would not permit a new socket initiated from the server in the reverse direction. Bidirectional mode (--bidir) with iPerf3 also supports NAT and firewall traversal since the client initiates the socket.
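The three client modes can be sketched as follows (placeholder server address; --bidir requires iPerf 3.7 or later):

```shell
# Default: client sends, server receives (upload from the client).
iperf3 -c 198.51.100.10

# Reverse mode: server sends to the client (download to the client);
# the client still initiates the connection, so NAT/firewall traversal works.
iperf3 -c 198.51.100.10 -R

# Bidirectional mode: measure both directions simultaneously.
iperf3 -c 198.51.100.10 --bidir
```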
Symmetrical vs Asymmetrical
Throughput test results are directly affected by the ISP-assigned bitrate for upload and download speed. Symmetrical bandwidth refers to a network link with the same upload and download speed, such as Ethernet WAN and fiber services.
Asymmetrical bandwidth refers to a network link assigned different upload and download bitrates. This is typical of broadband services such as DSL, LTE, and cable, where download speed is often faster than upload speed. This aligns with applications that make requests and download large amounts of data, such as video streaming, database applications, and bulk transfers.
By default, iPerf reports a unidirectional flow from client to server (upload speed from the client's perspective) unless you select reverse mode or bidirectional testing. Throughput on a symmetrical link should be identical in both directions unless rate limiting or throttling exists between client and server. The same testing applies to asymmetrical links, where iPerf will report different upload and download speeds. The reports refer to data transfer (number of bytes) and throughput as bitrate (bps), along with the following metrics:
- average throughput (bps)
- data transfer (bytes)
- packet loss (%)
- jitter (ms)
- retransmissions
- congestion window (cwnd)
- MSS (maximum segment size)
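Packet loss and jitter in the list above are UDP-specific metrics; TCP tests report retransmissions and congestion window (cwnd) instead. An illustrative UDP run (placeholder server address):

```shell
# -u     : use UDP instead of TCP
# -b 50M : target bitrate of 50 Mbps (the UDP default is only about 1 Mbps)
# The end-of-test summary includes jitter (ms) and packet loss (%).
iperf3 -c 198.51.100.10 -u -b 50M
```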
