⚠️ Warning: this is an old article and may include information that’s out of date. ⚠️


Ever since Steve Souders started evangelizing web performance, it’s been pounded into our heads that extra HTTP requests add significant overhead, and that combining them where possible can dramatically decrease the load time of our web pages.

The practical implication of this has been to combine our JavaScript and CSS files, which is relatively easy and straightforward, but the harder question has been what to do with images.


Image sprites

Image sprites are a concept taken from video games: the idea is to cram a ton of image assets into one file, and rearrange a “viewport” of sorts to view only specific pieces of that file at a time. For instance, a simple sprite that holds two images might have one viewport that only looks at the top half of the sprite (image #1), and another viewport that only looks at the bottom half (image #2).
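In CSS, the “viewport” is just the element’s box combined with background-position; a minimal sketch (the sprite file name, dimensions, and class names here are made up for illustration):

```css
/* sprite.png is assumed to be 16px wide and 32px tall:
   two 16x16 icons stacked vertically */
.icon {
  background-image: url(sprite.png);
  width: 16px;
  height: 16px;
  background-repeat: no-repeat;
}
.icon-first  { background-position: 0 0; }      /* top half of the sprite */
.icon-second { background-position: 0 -16px; }  /* bottom half */
```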

On the web side of things, this means that those multiple requests have now been combined into one request. This is nice because it saves both the overhead of additional HTTP requests as well as the overhead of setting up an image’s file header each time.

But there are a few drawbacks to using image sprites:

Data URIs and Base64 encoding

Data URIs (see this, this, and this) and Base64 encoding go hand-in-hand. This method allows you to embed images right in your HTML, CSS, or JavaScript.
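Building a data URI is just a matter of base64-encoding the raw bytes and prepending the scheme and MIME type; a quick sketch in Python (in practice the bytes would come from reading an image file):

```python
import base64

def to_data_uri(image_bytes, mime="image/png"):
    """Encode raw image bytes as a data URI suitable for a CSS url(...)."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# The 8-byte PNG file signature stands in for a real image here;
# normally you'd use open("icon.png", "rb").read()
uri = to_data_uri(b"\x89PNG\r\n\x1a\n")
print(uri[:30])  # data:image/png;base64,iVBORw0K
```

Notice the `iVBORw0K` prefix: every PNG starts with the same file signature, which is why every base64-encoded PNG (including the one in the CSS example below) starts with those characters.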

Just like sprites, you save HTTP requests, but there are also some drawbacks:

The “33% larger” claim is generally accepted truth now, despite the fact that the figure varies wildly depending on the type of content. This is exactly what I wanted to test, albeit in a pretty limited and nonscientific way.
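The 33% figure comes straight from the encoding itself: base64 maps every 3 bytes of input to 4 ASCII characters, so the encoded output is 4/3 the size of the input before any compression enters the picture. A quick check:

```python
import base64
import os

raw = os.urandom(3000)           # 3,000 bytes of arbitrary binary data
encoded = base64.b64encode(raw)  # 4 output characters per 3 input bytes
ratio = len(encoded) / len(raw)
print(ratio)  # 1.3333... — the source of the "33% larger" figure
```

What varies by content type is not this raw ratio but how much of the overhead gzip claws back afterwards, which is what the tests below measure.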

Before I tested, I wanted to keep in mind a few unverified intuitions (which aren’t entirely my own, but seem to be ideas floating around out there). Here are a few questions I had before going to test:

There’s something else I wanted to test: whether Gzipping binary image data made much difference. I know text compresses well, but is it even worth compressing JPEG files with Gzip, for instance?
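Gzip’s benefit depends entirely on redundancy in the input: JPEG data is already compressed, so there is little left for gzip to squeeze out, while repetitive text compresses very well. A rough illustration with stand-in data (random bytes as a proxy for already-compressed image data):

```python
import gzip
import os

text = b"background-repeat: no-repeat;\n" * 100  # repetitive, CSS-like text
binary = os.urandom(3000)  # stands in for already-compressed JPEG bytes

text_ratio = len(gzip.compress(text)) / len(text)
binary_ratio = len(gzip.compress(binary)) / len(binary)

print(round(text_ratio, 2))    # tiny — repetitive text compresses very well
print(round(binary_ratio, 2))  # near (or above) 1.0 — nothing left to compress
```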

I ran three tests: one with a set of small UI icons, one with a set of small photographs, and one with a set of the same photographs in a larger size. Though my tests were by no means extensive, they do show that care should be taken in making assumptions about base64.

Just a note about the tables: they compare the binary form (the original PNG or JPEG) with the base64 form as it would appear in a CSS stylesheet, and compare each of those with its gzipped form, which is most likely how it would be sent down the wire. All sizes are in bytes. The CSS representation includes a few practical declarations and looks something like this:

.address-book--arrow {
  background-image: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAp1JREFUeNqEU21IU1EYfu7unW5Ty6aBszYs6MeUjGVYokHYyH5E1B9rZWFEFPQnAwmy6Hc/oqhfJsRKSSZGH1JIIX3MNCsqLTD9o1Oj6ebnnDfvvefezrnbdCHhCw/n433P8z7nPe/hBEEAtX0U7hc164uwuvVSXKwZLoOmaRDim+7m9vZa0WiEKSUFFpNpCWlmMyypqTDRuYn6t3k8vmQ2gRDCxs0t9fW45F52aBTROJLtZl7nEZad2m+KtoQCQ0FBARyOCGRZ/q92I1WgqqXlfdd95VsrK8/pChIEqqpCkiQsiCII0aBQZZoWl8lzFDwsFjMl0DBLY8Lj41hBwK4jSQrWOIphL6xYyhwJDWGo6wFSaH1Y3PTCAsITE1oyAa8flhWkbSiCLX8vun11eiGIpiJ/z2nYdx5HqLdVV7elrOzsuqysL3rmBIGiKPizKCHHWY4PLVeQbnXAdegqdhy+hu8dDTBnbqQJZJ1A7u+vz7RaiymWCZgCRSF6Edk8b9cx+B/W6WuVxPaZnyiqXoPpyUmVYvkKTIFClHigEieKjYuSvETUllaF4GAUM1NT6ooaJDKx+aDfC9fByxj90REb+9ppmIoAscH/6leg8MS9DJXPAM9xHCM443K57C6biMjcHDaVVCHw9RmCA2/RGC5C00AqXk/m4p20HZK4CM/J3Zk9n0ecMBhDQnJHcrTisyMfdQXOilrdMfxcwoHq/fg5R59TiQV3hYGKo6X2J/c7LyQIjOx9GXhOw/zoJ8wEevRGyp53o/lGMNYsBgPtEwLecwov7/jGDKa1twT6o3KpL4MdZgGsWZLtfPr7f1q58k1JNHy7YYaM+J+K3Y2PmAIbRavX66229hrGVvvL5uzsHDEUvUu+NT1my78CDAAMK1a8/QaZCgAAAABJRU5ErkJggg==);
  width: 16px;
  height: 16px;
  background-repeat: no-repeat;
}
Ok, onto the tests!

Test #1: Five 16×16 icons from the Fugue Icon set (PNG)

| File | Binary | Binary, gzipped | CSS + base64 | CSS + base64, gzipped |
| --- | --- | --- | --- | --- |
| abacus | 1,443 | 1,179 | 2,043 | 1,395 |
| acorn | 1,770 | 1,522 | 2,478 | 1,728 |
| address-book-arrow | 763 | 810 | 1,153 | 948 |
| address-book-exclamation | 795 | 848 | 1,199 | 988 |
| address-book-minus | 734 | 781 | 1,113 | 919 |
| Total | 5,505 | 5,140 | 7,986 | 5,978 |
| Combined file | (5,505) | (4,128) | 7,986 | 4,423 |
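For Test #1, the trade-off works out like this (using the totals above):

```python
binary_gzipped_total = 5140  # five separate requests, each gzipped
base64_css_gzipped = 4423    # one request: combined CSS + base64, gzipped

savings = (binary_gzipped_total - base64_css_gzipped) / binary_gzipped_total
print(f"{savings:.1%}")  # 13.9% fewer bytes, plus 4 fewer HTTP requests
```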


Practically, the developer has two options: deliver 5,140 bytes to the user in 5 separate HTTP requests, or 4,423 bytes in one HTTP request (CSS with base64 encoded image data). Base64 is the clear winner here, and seems to confirm that small icons compress extremely well.

Test #2: Five Flickr 75×75 Pictures (JPEG)

| File | Binary | Binary, gzipped | CSS + base64 | CSS + base64, gzipped |
| --- | --- | --- | --- | --- |
| 1 | 6,734 | 5,557 | 9,095 | 6,010 |
| 2 | 5,379 | 4,417 | 7,287 | 4,781 |
| 3 | 25,626 | 18,387 | 34,283 | 20,103 |
| 4 | 7,031 | 6,399 | 9,491 | 6,702 |
| 5 | 5,847 | 4,655 | 7,911 | 5,077 |
| Total | 50,617 | 39,415 | 68,067 | 42,673 |
| Combined file | (50,617) | (36,838) | 68,067 | 40,312 |


Practically, the developer can deliver 39,415 bytes in 5 separate requests, or 40,312 bytes in one request. Not much filesize difference here, but one request seems preferable when we’re talking about 40 KB.

Test #3: Five Flickr 240×160 Pictures (JPEG)

| File | Binary | Binary, gzipped | CSS + base64 | CSS + base64, gzipped |
| --- | --- | --- | --- | --- |
| 1 | 24,502 | 23,403 | 32,789 | 23,982 |
| 2 | 20,410 | 19,466 | 27,333 | 19,954 |
| 3 | 43,833 | 36,729 | 58,561 | 38,539 |
| 4 | 31,776 | 31,180 | 42,485 | 31,686 |
| 5 | 21,348 | 20,208 | 28,581 | 20,761 |
| Total | 141,869 | 130,986 | 189,749 | 134,922 |
| Combined file | (141,869) | (129,307) | 189,749 | 133,615 |


The developer must choose between delivering 130,986 bytes in 5 HTTP requests, or 133,615 bytes in one HTTP request. Any good Souders follower would opt for the one request, BUT I would be careful here…

Caution: things aren’t always as they seem

There’s a huge caveat here: it may actually be more beneficial for perceived performance to deliver the images in 5 separate requests.

Why? Because 133,615 bytes is a lot to deliver in one package to an end user who will be staring at blank placeholders for the duration. If the 5 base64 images all come in one request, that request has to complete before ANYTHING is shown on the screen; only then do all 5 images go from blank placeholders to decoded and displayed, essentially all at once.

Compare this with 5 requests, which are most likely made in parallel and give the user a visual indication that actual image content is being downloaded, since parts of each image appear as they arrive (you could also try a throwback to progressive JPEGs; really, anything is better than a blank screen). That’s why it might actually be better for perceived performance to load images the good old-fashioned way. They will most likely load in parallel anyway, so the extra HTTP requests may not make much of a difference. Not to mention it’s easier to let the browser manage the cache for each file than to have your JavaScript manage a cache of its own and prevent re-downloading an image that’s already stored away in localStorage or sessionStorage.

This being said, it’s generally advisable to inline your common UI icons as base64 in your CSS, then let that whole chunk get cached by the browser. Those are usually clean vector-style icons as well, which seem to compress quite well (see Test #1).
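A build step that inlines only small UI icons is easy to sketch; here is one in Python (the size cutoff, file layout, and class-naming scheme are all assumptions for illustration, not from the article):

```python
import base64
import os

INLINE_LIMIT = 4096  # only inline icons under ~4 KB; larger files stay as URLs

def css_rule(path):
    """Emit a CSS rule for one PNG icon, inlining it as a data URI if small enough."""
    name = os.path.splitext(os.path.basename(path))[0]
    with open(path, "rb") as f:
        data = f.read()
    if len(data) <= INLINE_LIMIT:
        url = "data:image/png;base64," + base64.b64encode(data).decode("ascii")
    else:
        url = path  # fall back to a normal request for big images
    return ".%s { background-image: url(%s); }" % (name, url)
```

Run over the five icons from Test #1, something like this would produce one cacheable stylesheet in place of five separate image requests.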

But for image content, where there is nothing to be saved but HTTP requests, you should definitely think twice about base64 encoding. Yes, you will save a few HTTP requests, but you won’t really be saving bytes, and the user might actually perceive the experience as slower because they can’t see the image content as it’s being downloaded. Even if you shave off a few milliseconds of wait time, perceived performance is what matters most.

(EDIT: changed the wording of the “unverified intuitions” section from “not verified” to actual questions, to make it clearer)

EDIT May 2013: added an example of sprite bleedthrough