Archive for the ‘Performance’ Category

Mozilla Reconsiders, May Support WebP Image Format

WebP versus JPEG. Click the image to see the full size examples on Google’s WebP comparison page. Image: Google

Want your website to load faster? Slim your images. According to the HTTP Archive, images account for roughly 60 percent of total page size. That means the single biggest thing most sites can do to slim down is to shrink their images.

We recently covered how you can cut down your website’s page load times using Google’s image-shrinking WebP format. Unfortunately, one of the downsides to WebP is that only Opera and Chrome support it. But that may be about to change — Firefox is reconsidering its decision to reject WebP.

The change of heart makes sense since most of the objections Firefox developers initially raised about WebP have since been addressed. However, Firefox hasn’t committed to WebP just yet. As Firefox developer Jeff Muizelaar writes on the re-opened bug report, “just to be clear, no decision on adopting WebP has been made. The only thing that has changed is that we’ve just received some more interest from large non-Google web properties which we never really had before.”

Whatever the case, if Firefox does land support for WebP it would help the fledgling format cross the line where more browsers support it than don’t, which tends to be the threshold for wider adoption.

If you’d like to experiment with WebP today, while still providing fallbacks for browsers that don’t support it, be sure to check out our earlier write-up.
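If you want a sense of what a fallback can look like, here’s a minimal sketch using the HTML picture element. This isn’t necessarily the approach from our earlier write-up, and the file names are placeholders: browsers that understand WebP pick up the first source, while everything else quietly falls back to the plain JPEG img.

```html
<!-- Sketch: offer WebP where it's supported, JPEG everywhere else.
     hero.webp and hero.jpg are placeholder file names. -->
<picture>
  <source srcset="hero.webp" type="image/webp">
  <img src="hero.jpg" alt="Example image" width="800" height="450">
</picture>
```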

Scaling on a Shoestring, Lessons from NewsBlur

NewsBlur survives a traffic surge after news of Google Reader’s pending demise gets around.
Image: NewsBlur.

One of the more interesting stories to emerge from the demise of Google Reader is that of NewsBlur, a previously small, but very nice, open source alternative RSS reader.

NewsBlur is a one-man operation that was humming along quite nicely, but when Google announced Reader would shut down, NewsBlur saw a massive traffic spike — in a few short days NewsBlur more than doubled its user base. How NewsBlur developer Samuel Clay handled the influx of new users should be required reading for anyone working on a small site without loads of funding or armies of developers.

“I was able to handle the 1,500 users who were using the service everyday,” writes Clay, “but when 50,000 users hit an uncachable and resource intensive backend, unless you’ve done your homework and load tested the living crap out of your entire stack, there’s going to be trouble brewing.”

Having tested NewsBlur a few times right after Google announced Reader was closing, I can vouch for the fact that there were times when the site was reduced to a crawl, but it came back to life remarkably quickly for a one-man operation.

In his postmortem, Clay details the moves he had to make to keep NewsBlur functioning under the heavy load — switching to new servers, adding a new mailing service (which then accidentally mailed Clay 250,000 error reports) and weathering other moments of rapid, awkward growth.

It’s also worth noting that Clay credits the ability to scale to his premium subscription model, writing that, “the immediate benefits of revenue have been very clear over the past few days.”

As for the future, Clay says he plans to work on “scaling, scaling, scaling,” launching a visual refresh (which you can preview at dev.newsblur.com) and listening to feedback from the service’s host of new users.

If you’re looking for a Google Reader replacement, give NewsBlur a try. There’s a free version you can test out (the number of feeds is limited). A premium account runs $24/year and you can also host NewsBlur on your own server if you prefer.

The Return of the Progressive JPEG

Unlike progressive JPEGs, you just never know what a baseline image is going to be until it loads. Image: rickastley.co.uk

Everything old eventually becomes new again and lately that’s meant a revival of interest in something most web developers probably abandoned long ago — progressive JPEG images.

Progressive JPEGs offer some advantages over their more common “baseline” counterparts, including potentially smaller file sizes and faster perceived load times. But there are trade-offs to bear in mind before you start converting your back catalog of images.

If you happen to have missed the pixelated image loads of the circa-1999 web, here’s a brief refresher: there are two primary types of JPEG images, baseline and progressive. These days the vast majority of photos you encounter are baseline JPEGs, which means they render at full quality from the top of the image down, drawing in the rest of the image as the data is received.

Progressive JPEGs, on the other hand, load the full photo right off the bat, but with only some of the pixel data. That means the image briefly looks pixelated and then appears to sharpen into focus as the rest of the data loads. This was the generally recommended way to optimize images back in the days when 56K dial-up was considered smoking fast.

Lately, with mobile devices bringing bandwidth limitations back to the web, there’s been something of a resurgence of interest in progressive JPEGs. The Web Performance Advent Calendar even ran a piece entitled “Progressive JPEGs: a new best practice.” Here’s developer Ann Robson’s take on why you should use progressive JPEGs instead of baseline:

Progressive JPEGs are better because they are faster. Appearing faster is being faster, and perceived speed is more important than actual speed. Even if we are being greedy about what we are trying to deliver, progressive JPEGs give us as much as possible as soon as possible.

If you’re building responsive websites, progressive JPEGs are also appealing because they help you avoid the content reflow that happens when baseline images finish loading after the text content. Because some image data arrives right off the bat, the text doesn’t jump around. (For non-responsive images you can avoid that reflow by specifying the image dimensions in your markup, as in the sketch below.)
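Here’s a minimal sketch of the dimensions trick (the file name and the 640x480 size are placeholders, not anything from the articles mentioned here):

```html
<!-- Reserving the image's space up front keeps the text below it
     from reflowing when the JPEG finishes loading. -->
<img src="photo.jpg" alt="Example photo" width="640" height="480">
```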

Be sure to read Robson’s full article for some important caveats regarding progressive JPEGs, including the fact that browser support is less than ideal. All browsers will render progressive JPEGs just fine, but many of them — Safari, Mobile Safari, Opera and IE 8 — render progressive images just like baseline JPEGs, meaning there is no speed difference.

Another strike against progressive JPEGs is that they must be rendered multiple times as more data arrives. So while they may be marginally faster and possibly make users feel like the page has loaded faster, they hit the CPU pretty hard. That makes them potentially slower than baseline JPEGs in one of the use cases they’re supposed to be ideal for — underpowered mobile devices.

But perhaps the most questionable aspect of progressive JPEGs is whether users actually perceive a fully loaded but blurry image that eventually comes into focus as faster than an image that takes longer but renders all at once. Unfortunately, I haven’t been able to find any actual usability studies addressing that question. I suspect that how you feel about progressive JPEGs is, among other things, a good indicator of how long you’ve been using the web. If you’re all too familiar with watching progressive JPEGs slowly sharpen into focus over painfully slow dial-up, it’s hard to see them as anything but an annoying anachronism.

So, should you switch to progressive JPEGs? As with most things in web design there is no right answer. First, look at your site’s stats, see which browsers and devices your visitors are using and check whether those browsers even render progressive JPEGs progressively. Assuming they do and you want to test progressive JPEGs, check out this old, but still very relevant, post from Yahoo YSlow developer Stoyan Stefanov, who has some data on when, where and how to use progressive JPEGs.

File Under: CSS, HTML, Performance

GitHub’s Tips for Building Faster Websites

Social code hosting service GitHub isn’t just a free, easy way to host and share your code; it’s also a huge CSS and HTML testing ground, with plenty of experience writing fast, scalable code.

So what has GitHub learned from running a hugely successful site? That surprisingly small changes to both HTML and CSS can have a huge impact on performance.

GitHub’s Jon Rohan gave a talk about some of the service’s performance problems and solutions at the CSS Dev Conference in Honolulu earlier this year. (The slides are available on Speaker Deck.) The whole video is worth watching, but the key takeaway is that the right small changes in your code can have a huge impact on performance.

Many of Rohan’s suggestions for faster CSS will be familiar to anyone who’s used YSlow and other performance tools: get rid of unnecessary tag qualifiers in your CSS (for example, div.menu becomes just .menu), eliminate ancestor selectors where possible and avoid chaining your CSS selectors.
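As a rough before-and-after sketch of that kind of cleanup (the selectors and values here are made up for illustration, not taken from GitHub’s actual stylesheets):

```css
/* Before: tag-qualified, deeply nested, chained selectors. */
div.menu { padding: 10px; }
ul.nav li.item a.link:hover { text-decoration: underline; }

/* After: flat class selectors that are cheaper for the browser to match. */
.menu { padding: 10px; }
.nav-link:hover { text-decoration: underline; }
```

The second version assumes the markup puts a class directly on the element being styled, which is part of the trade-off: simpler selectors often mean a few more classes in your HTML.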

On the HTML side — and Rohan says it’s here that GitHub really saw performance improvements — he suggests reducing the amount of matched HTML on the page. That is, look at your pages in a profiler, figure out which tags are being matched and look for ways to simplify the layout to avoid bottlenecks. Among the more depressing things Rohan presents is how much page load times dropped after switching from anchor links to a JavaScript solution that, while faster, is considerably less accessible.
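The specifics will vary from site to site, but the general idea looks something like this hypothetical before-and-after (invented markup, not GitHub’s actual templates): fewer wrapper elements means fewer nodes for every selector to match.

```html
<!-- Before: wrapper elements that exist only as styling hooks. -->
<div class="commit">
  <div class="commit-inner">
    <div class="commit-meta">
      <span class="author"><a href="/username" class="author-link">username</a></span>
    </div>
  </div>
</div>

<!-- After: the same information with far fewer nodes to match and style. -->
<div class="commit">
  <a href="/username" class="commit-author">username</a>
</div>
```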

GitHub is undeniably different from most websites — especially pages like the Git diff views, which involve considerably more code than most pages will need. But, while GitHub may be an extreme example, in many cases the same small changes can help speed up much simpler pages as well.