Bandwidth != Network Performance
You might think that if you want faster internet performance, you can simply get a connection to the internet that has higher bandwidth. When you get a “faster” internet connection you may observe faster downloads. But for interactive web performance, it is usually reduced latency, not additional bandwidth, that actually makes things faster. This post explains why.
First of all, let’s review some definitions:
- Bandwidth: The amount of data that can be passed along a communications channel in a given period of time.
- Latency: The time it takes for a packet to cross a network connection, from sender to receiver.
- Speed: Informally, how quickly network operations complete from the user’s perspective; the combined effect of bandwidth and latency.
- Throughput: The quantity of data actually transmitted over a computer network in a given period of time.
Now, all of these terms are related, and I want to highlight some of the minutiae here:
The higher the bandwidth is on a network connection, the more data it’s capable of transmitting in a given period of time. Higher bandwidth is better.
Latency is very important, because it effectively limits the amount of bandwidth you can consume if you are using a synchronous, request–response transmission, like a TCP/IP download. Lower latency is better, and will yield faster speed.
Throughput is another way of expressing speed. The higher the throughput, the faster your network communications will be. Note that your bandwidth is your maximum possible throughput: actual throughput is always equal to or less than your bandwidth.
If your network is high speed, you should observe high bandwidth, low latency, and high throughput.
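One common way to model this interplay: a TCP sender can keep at most one receive window of unacknowledged data in flight, so throughput is bounded both by the window size divided by the round-trip time and by the link bandwidth. A minimal sketch (the function name and the example numbers are mine, for illustration):

```python
def tcp_throughput_bound(window_bytes, rtt_seconds, bandwidth_bps):
    # A TCP sender can have at most one window of unacknowledged data
    # in flight, so throughput is capped by window / RTT, and it can
    # never exceed the link's bandwidth.
    return min(window_bytes / rtt_seconds, bandwidth_bps / 8)

# A 64 KB window on an 80 ms round trip caps throughput near
# 819 KB/s, even though a 15 Mb/s link could carry ~1.87 MB/s.
print(tcp_throughput_bound(65536, 0.08, 15_000_000))
```

Notice that on this model, shrinking the RTT raises the cap even when the bandwidth stays fixed.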
Latency and Throughput are Inversely Related
For TCP/IP transmissions, the higher your latency is, the lower your throughput will be. Let’s explore why. The most common use of TCP/IP is the web, which uses the HTTP protocol. HTTP works by making a TCP/IP connection to a remote server, issuing a request for a document, and then receiving the response. The protocol is text-based. A simple HTTP transmission is illustrated below.
GET / HTTP/1.1
User-Agent: Wget
Host: www.example.com
HTTP/1.1 200 OK
Server: Apache/2.2.3 (Red Hat)
Last-Modified: Tue, 15 Nov 2005 13:24:10 GMT
ETag: "b300b4-1b6-4059a80bfd280"
Accept-Ranges: bytes
Content-Type: text/html; charset=UTF-8
Connection: Keep-Alive
Date: Wed, 18 Nov 2009 22:36:34 GMT
Age: 1010
Content-Length: 438

Example Web Page

You have reached this web page by typing "example.com", "example.net", or "example.org" into your web browser. These domain names are reserved for use in documentation and are not available for registration. See <a href="http://www.rfc-editor.org/rfc/rfc2606.txt">RFC 2606</a>, Section 3.
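An exchange like the one above can be reproduced with a plain TCP socket. The sketch below is illustrative: `build_get_request` and `fetch` are names of my own, and it adds a `Connection: close` header (not in the example above) so the read loop terminates.

```python
import socket

def build_get_request(host, path="/"):
    # HTTP/1.1 is plain text: a request line, headers, and a blank line.
    return (f"GET {path} HTTP/1.1\r\n"
            f"User-Agent: Wget\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n"
            "\r\n").encode("ascii")

def fetch(host, port=80):
    # Open a TCP connection, send the request, read the whole response.
    with socket.create_connection((host, port)) as conn:
        conn.sendall(build_get_request(host))
        chunks = []
        while data := conn.recv(4096):
            chunks.append(data)
    return b"".join(chunks)

# fetch("www.example.com") would return a response like the 200 OK above.
```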
Here is a trace of the TCP/IP packets that make up that request:
14:57:47.146665 IP 192.168.144.2.39556 > 18.104.22.168.80: S 3717672264:3717672264(0) win 5840
14:57:47.220092 IP 192.168.144.2.39556 > 22.214.171.124.80: . ack 1 win 183
14:57:47.220309 IP 192.168.144.2.39556 > 126.96.36.199.80: P 1:123(122) ack 1 win 183 (GET Request)
14:57:47.300962 IP 188.8.131.52.80 > 192.168.144.2.39556: P 1:728(727) ack 123 win 4502 (200 OK Response)
14:57:47.300993 IP 192.168.144.2.39556 > 184.108.40.206.80: . ack 728 win 228
14:57:47.302035 IP 192.168.144.2.39556 > 220.127.116.11.80: F 123:123(0) ack 728 win 228
14:57:47.375475 IP 18.104.22.168.80 > 192.168.144.2.39556: . ack 124 win 4502
14:57:47.375499 IP 22.214.171.124.80 > 192.168.144.2.39556: F 728:728(0) ack 124 win 4502
14:57:47.375510 IP 192.168.144.2.39556 > 126.96.36.199.80: . ack 729 win 228
Notice the packets in the above trace: a three-way handshake to set up the TCP session, then a round trip to send the request and receive the data, then two more round trips to close down the connection. Each time the server receives a packet from the client, the connection may also wait in the server’s connection queue to be processed, which further increases the interactive latency. Now consider the impact of high latency on a connection like this. Suppose each round trip takes 0.2 seconds. The connection then delivers 727 bytes in 0.8 seconds, a throughput of about 909 bytes/sec. Maybe your internet connection has 15 Mb/sec of bandwidth; it did not matter. Latency caused the throughput to be low.
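The arithmetic can be checked directly, using the numbers from the example above:

```python
rtt = 0.2         # seconds per round trip (hypothetical high-latency link)
round_trips = 4   # handshake, request/response, and connection teardown
payload = 727     # bytes in the 200 OK response

total_time = round_trips * rtt      # 0.8 seconds spent waiting on the network
throughput = payload / total_time   # ~909 bytes/sec, regardless of bandwidth
print(total_time, throughput)
```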
Now, you might be wondering why we can’t just improve networking technology to make latency lower. We can, but that’s not going to help much, because we are still bounded by the speed of light, among other factors. The speed of light is slow when you consider the distance it has to travel to cross continents on the earth. Let’s look at some math to explain that:
- The speed of light in vacuum is 299,792,458 m/s.
- The speed of light in fiber optic cable is ~200,000,000 m/s.
- The distance from Anaheim, CA to New York is 4,494,898 meters
- The one-way latency to New York is 4,494,898 / 200,000,000 = 22.47ms
- The round-trip time between Anaheim, CA and New York is 44.95ms
- The actual, measured ping time from Anaheim, CA to New York is 72 ms:
Tracing the route to sl-gw33-nyc.sprintlink.net (188.8.131.52)
 1 sl-crs1-ana-0-14-2-0.sprintlink.net (184.108.40.206) 0 msec
   sl-crs2-ana-0-14-2-0.sprintlink.net (220.127.116.11) 0 msec
   sl-crs1-ana-0-14-2-0.sprintlink.net (18.104.22.168) 4 msec
 2 sl-crs2-fw-0-13-3-0.sprintlink.net (22.214.171.124) 28 msec
   sl-crs2-fw-0-9-5-0.sprintlink.net (126.96.36.199) 28 msec
   sl-crs1-fw-0-3-3-0.sprintlink.net (188.8.131.52) 28 msec
 3 sl-crs2-kc-0-0-0-2.sprintlink.net (184.108.40.206) 40 msec
   220.127.116.11 40 msec
   sl-crs1-kc-0-5-5-0.sprintlink.net (18.104.22.168) 40 msec
 4 sl-crs2-chi-0-13-5-0.sprintlink.net (22.214.171.124) 52 msec
   sl-crs1-chi-0-1-0-3.sprintlink.net (126.96.36.199) 56 msec
   sl-crs2-chi-0-15-2-0.sprintlink.net (188.8.131.52) 52 msec
 5 sl-crs1-nyc-0-8-0-3.sprintlink.net (184.108.40.206) 72 msec
   sl-crs2-nyc-0-8-0-1.sprintlink.net (220.127.116.11) 72 msec
   sl-crs1-chi-0-10-3-0.sprintlink.net (18.104.22.168) 72 msec
 6 sl-gw33-nyc-14-0-0.sprintlink.net (22.214.171.124) 72 msec *
   sl-gw33-nyc-15-0-0.sprintlink.net (126.96.36.199) 72 msec
This round-trip time includes all of the switching and routing needed to carry the packet through its full round trip. That means that even if all switching and routing were instantaneous, and we had a perfectly straight fiber path between all points on the earth, we could only reduce latency by about 40%. We cannot accelerate the speed of light, so without a significant advance in data transmission technology (perhaps a quantum physics approach) we must accept the speed of light as a performance boundary.
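The figures above can be reproduced in a few lines, using the distance and fiber speed listed earlier:

```python
FIBER_LIGHT_SPEED = 200_000_000  # m/s, approximate speed of light in fiber
distance = 4_494_898             # meters, Anaheim, CA to New York

one_way = distance / FIBER_LIGHT_SPEED  # ~22.47 ms
round_trip = 2 * one_way                # ~44.95 ms
measured = 0.072                        # the observed 72 ms ping

# Even a perfectly straight, instantly switched path only cuts
# latency by roughly 40%; the rest is the speed of light itself.
possible_reduction = 1 - round_trip / measured
print(round(one_way * 1000, 2), round(round_trip * 1000, 2),
      round(possible_reduction, 2))
```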
Making Web Sites Faster
If you’re a web content publisher, you can set up your systems to work around these natural limitations. One way to make interactive web performance faster is to place copies of your data in geographic locations that are physically closer to your end users; using a CDN for your media content is one way to do this. You can also make your web server itself as fast as possible, so that dynamically generated content is produced quickly; using memcached to speed up your web application can help. Also, take a look at some best practices for web developers for good performance.
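The memcached suggestion boils down to the cache-aside pattern: generate a page once, then serve the stored copy until it expires. A minimal sketch, using a plain dict in place of a real memcached client, with `render_page` as a hypothetical stand-in for your page generation:

```python
import time

cache = {}  # stands in for a memcached client: path -> (expires_at, page)
TTL = 60    # seconds to keep a generated page before regenerating it

def render_page(path):
    # Hypothetical stand-in for expensive dynamic page generation.
    time.sleep(0.01)
    return f"<html>content for {path}</html>"

def get_page(path):
    # Cache-aside: serve from cache when possible; otherwise generate
    # the page and store it for subsequent requests.
    entry = cache.get(path)
    if entry and entry[0] > time.time():
        return entry[1]
    page = render_page(path)
    cache[path] = (time.time() + TTL, page)
    return page
```

A real deployment would swap the dict for a memcached (or similar) client shared across web servers, but the lookup/generate/store flow is the same.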