Bandwidth != Network Performance

You might think that if you want faster internet performance, you can simply get a connection to the internet that has higher bandwidth. When you get a “faster” internet connection you may indeed observe faster downloads. But for interactive web performance, it’s usually reduced latency, not additional bandwidth, that actually makes the difference. This post explains why.

First of all, let’s review some definitions:

  • Bandwidth: The amount of data that can be passed along a communications channel in a given period of time.
  • Latency: The time it takes for a packet to cross a network connection, from sender to receiver.
  • Speed: Fast and rapid moving, going, traveling, proceeding, or performing; swiftness.
  • Throughput: The quantity of data transmitted by a computer network over a given period of time.

Now, all of these terms are related, and I want to highlight some of the minutiae here:


Bandwidth: The higher the bandwidth of a network connection, the more data it’s capable of transmitting in a given period of time. Higher bandwidth is better.

Latency: This is critically important, because latency effectively limits the amount of bandwidth you can actually consume when you are using a synchronous data transmission, like a TCP/IP download. Lower latency is better, and will yield faster speed.

Throughput: Throughput is another way of expressing speed. The higher the throughput, the faster your network communications will be. Note that your maximum possible throughput is your bandwidth; actual throughput is equal to or less than your bandwidth.

Speed: If your network is high speed, you should observe high bandwidth, low latency, and high throughput.

Latency and Throughput are Inversely Related

For TCP/IP transmissions, the higher your latency is, the lower your throughput will be. Let’s explore why. The most common use of TCP/IP is for the web, which uses the HTTP protocol. HTTP works by making a TCP/IP connection to a remote server, issuing a request for a document, and then receiving the response. The protocol is text based. A simple HTTP transmission is illustrated below.

Client Request:

GET / HTTP/1.1
User-Agent: Wget

Server Response:

HTTP/1.1 200 OK
Server: Apache/2.2.3 (Red Hat)
Last-Modified: Tue, 15 Nov 2005 13:24:10 GMT
ETag: "b300b4-1b6-4059a80bfd280"
Accept-Ranges: bytes
Content-Type: text/html; charset=UTF-8
Connection: Keep-Alive
Date: Wed, 18 Nov 2009 22:36:34 GMT
Age: 1010
Content-Length: 438

  Example Web Page

You have reached this web page by typing "",
  or "" into your web browser.

These domain names are reserved for use in documentation and are not available
  for registration. See <a href="">RFC
  2606</a>, Section 3.
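As a quick sketch, here is how the client request shown above could be assembled by hand in Python. The hostname is a placeholder, since the real names are omitted in this post:

```python
# Build the raw HTTP/1.1 client request shown above.
# HOST is a placeholder; the real hostname is not given in this post.
HOST = "www.example.com"

request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"       # a Host header is required by HTTP/1.1
    "User-Agent: Wget\r\n"
    "\r\n"                    # blank line terminates the headers
)
print(request.encode("ascii"))
```

Sending those bytes over a TCP socket to port 80 is all a minimal HTTP client does; the server’s text response comes back over the same connection.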

Here is a trace of the TCP/IP packets that make up that request:

14:57:47.146665 IP > S 3717672264:3717672264(0) win 5840
14:57:47.220092 IP > . ack 1 win 183
14:57:47.220309 IP > P 1:123(122) ack 1 win 183  (GET Request)
14:57:47.300962 IP > P 1:728(727) ack 123 win 4502  (200 OK Response)
14:57:47.300993 IP > . ack 728 win 228
14:57:47.302035 IP > F 123:123(0) ack 728 win 228
14:57:47.375475 IP > . ack 124 win 4502
14:57:47.375499 IP > F 728:728(0) ack 124 win 4502
14:57:47.375510 IP > . ack 729 win 228

Notice that there are 10 packets in the above trace. It’s a three-way handshake to set up the TCP session, then a round trip to send the data, then two more round trips to close down the connection. Each time the server receives a packet from the client, the connection may wait in the server’s connection queue before being processed, which further increases the effective latency of the protocol. Consider the impact of high latency on a connection like this. Suppose each round trip takes 0.2 seconds. The exchange requires four round trips, so 727 bytes are downloaded in 0.8 seconds, a rate of about 909 bytes/sec. Your internet connection might be rated at 15 Mb/sec, but that bandwidth did not matter here: latency caused the throughput to be low.
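The arithmetic above is easy to check; the round-trip count and payload size come from the trace:

```python
# Effective throughput of the HTTP exchange traced above, assuming
# 0.2 s per round trip and 4 round trips total (handshake, the
# request/response, and two more to tear down the connection).
rtt = 0.2            # seconds per round trip
round_trips = 4
payload = 727        # bytes in the 200 OK response

elapsed = rtt * round_trips      # 0.8 seconds total
throughput = payload / elapsed   # ~909 bytes/sec
print(f"{throughput:.0f} bytes/sec")
```

Doubling the bandwidth changes nothing in this calculation; halving the round-trip time doubles the throughput.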

Now, you might be wondering why we can’t just improve networking technology to lower latency. We can, but it won’t help much, because we are still bounded by the speed of light, among other factors. The speed of light is slow when you consider the distance it has to travel to cross continents on the earth. Let’s look at some math to explain that:

  • The speed of light in vacuum is 299,792,458 m/s.
  • The speed of light in fiber optic cable is ~200,000,000 m/s.
  • The distance from Anaheim, CA to New York is 4,494,898 meters
  • The one-way latency to New York is 4,494,898 / 200,000,000 = 22.47ms
  • The round-trip time between Anaheim, CA and New York is 44.95ms
  • The current ping time from Anaheim, CA to New York is 72 ms
  • Tracing the route to (
      1 ( 0 msec ( 0 msec ( 4 msec
      2 ( 28 msec ( 28 msec ( 28 msec
      3 ( 40 msec ( 40 msec ( 40 msec
      4 ( 52 msec ( 56 msec ( 52 msec
      5 ( 72 msec ( 72 msec ( 72 msec
      6 ( 72 msec * ( 72 msec

This round trip time includes all of the switching and routing needed to get the packet through its full round trip. That means that even if all switching and routing were instantaneous, and we had a perfectly straight fiber path between all points on the earth, we could only reduce latency by about 40%. We cannot accelerate the speed of light, so without a significant advance in data transmission technology (perhaps a quantum physics approach) we must accept the speed of light as a performance boundary.
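The speed-of-light bound can be sketched in a few lines, using the figures from the list above:

```python
# Lower bound on round-trip time imposed by the speed of light in fiber,
# using the Anaheim-to-New-York figures from the list above.
C_FIBER = 200_000_000     # m/s, approximate speed of light in fiber
distance = 4_494_898      # meters, Anaheim, CA to New York
measured_ms = 72          # observed ping time between the two

one_way_ms = distance / C_FIBER * 1000   # ~22.47 ms
round_trip_ms = 2 * one_way_ms           # ~44.95 ms

# Fraction of the measured latency that could be shaved off even with
# instantaneous routing and a perfectly straight fiber path.
headroom = 1 - round_trip_ms / measured_ms
print(f"{round_trip_ms:.2f} ms minimum, {headroom:.0%} headroom")
```

The roughly 38% of headroom is the “about 40%” figure: everything below the 44.95 ms floor is physics, not engineering.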

Making Web Sites Faster

If you’re a web content publisher, you can set up your systems to work around these natural limitations. One way to make interactive web performance faster is to place copies of your data in various geographic locations that are physically closer to your end users; using a CDN for your media content is one way to do this. You can also tune your web server so that dynamically generated content is produced as quickly as possible; using memcached to speed up your web application can help. Also, take a look at some best practices for web developers for good performance.
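As an illustration of the caching idea, here is a minimal cache-aside sketch in Python. A plain dict stands in for a memcached client, and render_page is a hypothetical stand-in for expensive dynamic rendering:

```python
# Cache-aside pattern: check the cache first, render and store on a miss.
# A dict stands in for memcached here; render_page is hypothetical.
cache = {}

def render_page(path):
    # Pretend this is an expensive database query plus template render.
    return f"<html>content for {path}</html>"

def get_page(path):
    if path not in cache:             # cache miss: do the slow work once
        cache[path] = render_page(path)
    return cache[path]                # cache hit: skip rendering entirely
```

In a real deployment the dict would be replaced by a memcached client and each entry given an expiry time, but the control flow is the same: every cache hit removes an expensive rendering step from the request path.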
