A notch above a monkey

Latency, bandwidth and speed of applications - part II

This is the second post on this subject. You can find the first part here.

So, how do latency and bandwidth relate to the speed of execution?

As mentioned, latency is the time needed for any piece of information to travel a certain distance, and bandwidth is the amount of data that can flow through the connection at the same time.

Let me give you a real-life example. If you imagine people walking down a corridor, then latency is the amount of time one person needs to walk from one end to the other, and bandwidth is the number of people who can do this at the same time.

You can measure latency quite easily with ping, a tool available on every major OS, which will give you the round-trip time from client to server (2+5 from our original 7 intervals). The only problem is that it tells you only your own latency at that moment, which can vary significantly for people on different networks and even for you at different times. There are similar tools for measuring bandwidth, and they share more or less the same problems.
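
If you need a rough number from inside the application itself, you can also just time a small request. A minimal sketch in modern javascript (which this post predates) could look like this; the /ping URL is an assumption and only needs to be something small your server answers quickly:

// Rough latency probe: time a tiny request and treat the elapsed time
// as an upper bound on round-trip latency plus server processing time.
async function estimateRoundTrip(url) {
    const start = performance.now();
    await fetch(url, { method: 'HEAD', cache: 'no-store' });
    return performance.now() - start; // milliseconds
}

estimateRoundTrip('/ping').then(ms =>
    console.log('round trip took about ' + ms.toFixed(0) + ' ms'));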

However, sometimes you can tell a bit about what to expect. For example, if you’re building an application aimed at mobile users on GPRS, then it’s quite reasonable to assume that bandwidth will be scarce and latency high. On local networks it’s usually the opposite.

In general, latency rises with the distance between server and client, and bandwidth can’t be wider than the narrowest link along the way. So the question is: how do we react to our situation?

Basic rules are:

  • if latency is high, then we should make as few round-trips to the server as possible
  • if bandwidth is wide, then we can transfer more data with each trip

In practice we should always aim for the most balanced approach between these two factors. If latency is fairly high, but bandwidth is not severely limited, then we might also transfer data we don’t need immediately, if it might spare us another request later on. If latency is low and bandwidth is narrow, then it might make more sense to transfer only what’s needed, but make those requests more often.

Another way we can sometimes cut down response time, when bandwidth is not the limiting factor, is to multiplex connections, which is a fancy way of saying that the client opens multiple connections to the server simultaneously. It’s worth keeping in mind that browsers like Internet Explorer limit the number of simultaneous HTTP connections to a server (the HTTP/1.1 standard recommends no more than two), but you are free to do what you want if you write your own program. It’s also worth keeping in mind that if the order in which responses arrive is important, you’ll have to handle that yourself.
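
As a sketch of what this looks like from the client, here is a minimal example in modern javascript (again, something this post predates): the requests go out in parallel, and Promise.all hands back the responses in the order they were asked for, regardless of which connection finished first. The /item/ URL scheme is made up:

// Open several requests at once, but keep the results in request order.
function fetchAllInOrder(ids) {
    const requests = ids.map(id =>
        fetch('/item/' + id).then(response => response.json()));
    return Promise.all(requests); // results come back in the order of ids
}

fetchAllInOrder([1, 2, 3]).then(items => console.log(items));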

A different approach is to simulate multiplexing over one connection. In this case the client sends multiple commands inside one network request and receives the answers to all of them at once. It’s easier to handle ordering, since you get all the data at the same time, but you give up a possibly faster response for some of the data.
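
A minimal sketch, assuming a hypothetical /batch endpoint that accepts a JSON array of commands and replies with an array of answers in the same order:

// Send many commands in one request; the server answers all of them at once.
async function sendBatch(commands) {
    const response = await fetch('/batch', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(commands)
    });
    return response.json(); // one answer per command, already in order
}

sendBatch([{ op: 'get', key: 'a' }, { op: 'get', key: 'b' }])
    .then(answers => console.log(answers));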

There are other factors that can influence our decisions, such as possibly increased load on the server from more requests, or the speed with which we can generate a request and process a reply if we need to pack and unpack more data. But these are issues worth exploring in some other post, if interest justifies it.

Latency, bandwidth and speed of applications - part I

It seems we’re back to discussing the effect of bandwidth and latency on the perceived speed of applications. In a way it’s amazing that after ten years of widespread use of the Internet and local networks, we developers are still discussing these issues.

So, what contributes to the speed of execution in network applications?

Each network request can be divided into seven time intervals:

  1. time needed to send a request
  2. latency, the time any piece of data needs to travel from client to server
  3. data transfer time for the request
  4. time needed to process the request, form a response and send it
  5. latency again, but this time in the other direction
  6. data transfer time for the response
  7. time needed to process the response and present the result to the user

The sum of all these parts is the time elapsed between the issued command and the presented result. How fast this needs to be depends on the application and our expectations. For example, we expect a faster response to our key presses in an SSH client than we do waiting for search results.

However, if this sum is under a quarter of a second, it will generally be regarded as instantaneous, but you should really aim for a tenth of a second to satisfy even the most twitchy users.

Usually, we get to control all parts of this equation only when client and server are located on a local network and we develop both of them. So a well-behaved application needs to take into account the environment in which it will run and act accordingly.

This means, among other things, giving the user feedback that something is happening whenever there is even a remote chance that a response might take a while, reacting sensibly to network disruptions (such as timeouts), and offering a way to abort actions.
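
To make that concrete, here is a minimal sketch using today’s fetch and AbortController (neither existed when this post was written); the spinner and cancel elements and the /search URL are assumptions:

// Show a spinner, give up after five seconds, and let the user cancel.
async function searchWithFeedback(query) {
    const spinner = document.getElementById('spinner');  // assumed element
    const cancel = document.getElementById('cancel');    // assumed button
    const controller = new AbortController();
    const timeout = setTimeout(() => controller.abort(), 5000);
    cancel.onclick = () => controller.abort();

    spinner.style.display = 'inline';
    try {
        const response = await fetch('/search?q=' + encodeURIComponent(query),
                                     { signal: controller.signal });
        return await response.json();
    } catch (err) {
        if (err.name === 'AbortError') return null; // timed out or cancelled
        throw err;
    } finally {
        clearTimeout(timeout);
        spinner.style.display = 'none';
    }
}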

So, how do we speed up the application itself?

Absolutely the best way to speed up a network request is to not make one at all. The usual way to do this is to cache the result for its life span and use the cached copy when possible.
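
A minimal sketch of such a cache, assuming results can safely be reused for a fixed time to live:

// Remember each answer for a while and skip the network while it's fresh.
const cache = new Map(); // url -> { value, expires }

async function cachedFetch(url, ttlMs = 60000) {
    const hit = cache.get(url);
    if (hit && hit.expires > Date.now()) {
        return hit.value; // the best network request is the one never made
    }
    const value = await fetch(url).then(response => response.json());
    cache.set(url, { value, expires: Date.now() + ttlMs });
    return value;
}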

The other way, lately often mentioned in relation to AJAX, is to make our requests asynchronously. Unlike synchronous requests, which are direct results of user actions, asynchronous requests are those where data is transferred in expectation of future user actions.

An example would be a mail program that transfers the headers of still unread mail in the background before it actually needs to display them, since it’s fairly likely the user will want to check them in the near future.
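
In modern javascript such a prefetch could look roughly like this; the /headers?unread=1 endpoint is made up, and the point is only that the data arrives before any user action demands it:

// Quietly load headers of unread mail while the user is busy elsewhere,
// so they're already local if and when the user asks for them.
let prefetchedHeaders = null;

function prefetchUnreadHeaders() {
    fetch('/headers?unread=1')
        .then(response => response.json())
        .then(headers => { prefetchedHeaders = headers; })
        .catch(() => { /* a failed prefetch costs nothing; fetch on demand */ });
}

// Start only after the main view has rendered, not before.
window.addEventListener('load', () => setTimeout(prefetchUnreadHeaders, 1000));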

There are also downsides to this approach. It can be difficult to predict what to transfer next if no user action is more probable than the rest. It can also incur significant costs if bandwidth is expensive, as it often is on GPRS networks in Europe. And when the number of simultaneous connections is limited (the HTTP/1.1 standard recommends no more than two per server, though not all browsers comply), a prefetch can occupy a connection just when you need one.

Still, it’s a useful tool when it can be applied.

More about saving time with our seven intervals in part two due tomorrow.

Update: the second part has been posted.

Hide email address from spammers with Javascript

Update: I published a new, safer but less friendly version of this script.

Wouldn’t it be nice if you could post your email address on your web site without worrying that spammers will pick it up?

Now you can, by applying a little bit of javascript to your web page. Just import this javascript file in the head of your document and call mangle() inside your onload handler. What it does is replace elements of the form

<span class="change">billg at microsoft dot com</span>

with

<a href="mailto:billg@microsoft.com">billg@microsoft.com</a>

There are a few caveats to its use. You’re not allowed to use any HTML tags inside span blocks whose class is set to change. The script also expects such blocks to have only one class; if change is only part of the class attribute value, the script won’t work.
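
To show the idea (a sketch only, not the actual script linked above), the replacement could look roughly like this in modern javascript:

// Find spans whose class is exactly "change", turn the "at"/"dot" words
// back into @ and . (every "dot", so subdomains work), and swap in a link.
function mangle() {
    // Copy the list first, since we replace nodes while walking it.
    const spans = Array.from(document.getElementsByTagName('span'))
        .filter(span => span.className === 'change');
    for (const span of spans) {
        const address = span.textContent.trim()
            .replace(/\s+at\s+/, '@')     // the first "at" becomes @
            .replace(/\s+dot\s+/g, '.');  // every "dot" becomes .
        const link = document.createElement('a');
        link.href = 'mailto:' + address;
        link.textContent = address;
        span.parentNode.replaceChild(link, span);
    }
}

window.onload = mangle;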

You are free to use the script as you please and to make any changes necessary. But if you choose to replace “at” and “dot” as delimiters, pick replacements that keep the address easily recognizable for those who don’t use javascript.

Note: The reason this script works is that spammers use programs that search for email-like patterns in a page. They don’t interpret pages with a javascript interpreter, since that would make collecting addresses significantly slower and more expensive.

Update: Jay Samec let me know there was a bug in my code: it didn’t handle emails with multiple dots (e.g. those with subdomains). The script has now been fixed.

Update 2: Holger Rindermann pointed out another bug and provided a patch to fix it that is now a part of the script.