The team behind Nginx (pronounced "engine-x") has released version 1.4, which brings a number of new features, most notably support for the SPDY protocol.
SPDY, Google's proposed successor to HTTP, promises to speed up website load times by up to 40 percent. Given that Nginx is the second most popular server on the web — powering big-name sites like Facebook and WordPress — the new SPDY support should prove a boon for the nascent protocol. Apache, still far and away the most popular server on the web, also has a mod_spdy module.
SPDY support should also help make Nginx more appealing, not that it needs much help. Nginx’s winning combination of light weight and speed has made it the darling of the web in recent years, with everyone from Facebook to Dropbox relying on it in one form or another.
Indeed, part of Nginx’s success lies in its versatility. The server can be used for everything from a traditional high-performance web server to a load balancer, a caching engine, a mail proxy or an HTTP streaming server.
SPDY isn’t the only new feature in Nginx 1.4; there’s also support for proxying WebSocket connections and a new Gunzip module that decompresses gzipped responses for clients that don’t support gzip encoding.
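To see how those pieces fit together, here’s a minimal, hypothetical server block sketching all three features — the certificate paths, `/ws/` location and `backend` upstream name are placeholders, not anything Nginx ships with:

```nginx
server {
    # SPDY runs over TLS, so it's enabled alongside ssl on the listen directive
    listen 443 ssl spdy;
    ssl_certificate     /etc/nginx/cert.pem;
    ssl_certificate_key /etc/nginx/cert.key;

    # Proxying WebSocket connections requires HTTP/1.1 and
    # forwarding the Upgrade/Connection handshake headers
    location /ws/ {
        proxy_pass http://backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    # The new Gunzip module decompresses gzipped responses
    # for clients that can't handle gzip themselves
    gunzip on;
}
```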
For more details and to grab the latest Nginx source, head on over to the Nginx website.
Amazon’s S3 file storage service started life as just that — a simple way to store static files and pay for only the data you used. When you don’t need an always-on server, S3 fits the bill.
But if you can store static files, why not whole static websites? In 2011 Amazon began allowing you to point your own domain to an S3 “bucket”, a folder in Amazon parlance. Custom domain support made it simple to host entire static sites; the catch was that you needed to use a subdomain — for example, www.
Now the www restriction has been lifted and you can point any root domain at S3 and serve your files directly. The only catch is that Amazon has created its own non-standard DNS workaround, which means you must use Amazon’s Route 53 service to host the DNS data for your domain.
Unfortunately, while the new root domain support is great news for anyone using a static blog generator like Jekyll, Amazon’s documentation leaves much to be desired. To help you get started with S3 hosting, here’s a quick guide to setting up S3 to serve files from a root domain (rather than making the root domain redirect to www.mydomain.com, as the Amazon blog post instructions do).
First, register a domain name and point your DNS records to Amazon’s Route 53 service (the Route 53 docs have detailed instructions on how to do this). The next step is to create an S3 bucket for your domain. In other words, a bucket named mydomain.com.
Now click the Properties button, select the Website tab, enable website hosting and set the Index Document to index.html. You’ll also need to click the Permissions tab and set a bucket policy that allows public reads (you can use this basic example from Amazon).
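For reference, Amazon’s basic public-read policy looks like this — substitute your own bucket name for mydomain.com:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mydomain.com/*"
    }
  ]
}
```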
Now upload your site to that bucket and head back to Route 53. Here comes the magic. To make this work you need to create an A “Alias” DNS record. Make sure you name it the same as your domain name. Sticking with the earlier example, that would be mydomain.com. Now click the Alias Target field and select the S3 endpoint you created earlier when you set up the bucket.
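If you’d rather script this than click through the console, the same alias record can be expressed as a Route 53 change batch. This is a sketch assuming a bucket in us-east-1; the HostedZoneId here is Amazon’s published ID for that region’s S3 website endpoint (not your own zone’s ID), so check Amazon’s endpoints table if your bucket lives elsewhere:

```json
{
  "Comment": "Point the root domain at the S3 website endpoint",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "mydomain.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z3AQBSTGFYJSTF",
          "DNSName": "s3-website-us-east-1.amazonaws.com.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

Saved as alias.json, this could be applied with the AWS CLI via `aws route53 change-resource-record-sets --hosted-zone-id <your-zone-id> --change-batch file://alias.json`.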
And that’s it. Behind the scenes the Route 53 “Alias” record looks like a normal DNS A record, which means things like email will continue to work for your domain while Route 53 directs web requests to your S3 bucket. If you want www to redirect to the root domain you can either set that up through Route 53 (see Amazon’s instructions) or handle it through another service.
Amazon’s Glacier file storage service costs less than a penny per gigabyte per month. It’s hard to think of a cheaper, better way to create and store an offsite backup of your files.
Of course, backups are only useful if you actually create them on a regular basis. Unfortunately, getting your files into Glacier’s dirt-cheap storage requires either manual effort on your part or some scripting-fu to automate your own system.
Back when Glacier first launched we speculated that it would be a perfect fit for a backup utility like the OS X backup app Arq. Now Arq 3 has been released and among its new features is built-in support for Amazon Glacier. Arq 3 costs $29 per computer; upgrading from v2 is $15.
Arq creator Stefan Reitshamer sent over a preview of Arq 3 a while back and, having used it for the better part of a week now, I can attest that it, combined with Glacier, does indeed make for a near-perfect low-cost off-site backup solution.
Using Arq 3 with Glacier is simple. Just sign up for an Amazon Web Services account and create a set of access keys. Then fire up Arq, enter your keys and select which files you want to back up. Choose Glacier for the storage type and then make any customizations you’d like (for example, excluding folders and files you don’t want backed up).
That’s all there is to it; close Arq and it will back up your files in the background. By default Arq 3 is set to make Glacier backups every day at 12 a.m., but you can change that in the preferences.
Should disaster strike and you need to get your files out of Glacier (or S3), just fire up Arq, select the files you need and click “restore.” Arq will give you an estimate of your costs and you can adjust the download speed — the slower the download the cheaper it is to pull files out of Glacier. There’s also an open source command line client available on GitHub in the event that the Arq app is no longer around when you need to get your files back.
Estimating costs with Arq’s Glacier restore screen. Image: Screenshot/Webmonkey
Existing Arq users should note that Amazon currently doesn’t offer an API for moving from S3 to Glacier (though the company says one is in the works). That means if you want to switch any current S3 backups to Glacier you’ll need to first remove the folder from Arq and then re-add it to trigger the storage type dialog.
In order to get the most out of Arq 3 and Glacier it helps to understand how Glacier works. Unlike Amazon S3, which is designed for cheap but accessible file storage, Glacier is, as the name implies, playing the long, slow game. Glacier is intended for long-term storage that’s not accessed frequently. If you need to grab your files on a regular basis Glacier will likely end up costing you more than S3, but for secondary (or tertiary) backups of large files like images, videos or databases Glacier works wonderfully.
My backup scenario works like this: For local backups I have two external drives. One is nearly always connected and makes a Time Machine backup every night. Once a week I clone my entire drive off to the second external drive. For offsite backups I use rsync and cron to back up key documents to my own server (most are also stored in Dropbox, which is not really a backup service, but can, in a pinch, be used like one).
But my server was running out of space. Photo and video libraries are only getting bigger and most web hosting services tend to get very expensive once you pass the 100GB mark. That’s where Arq and Glacier come in. It took a while, but I now have all 120GB of my photos backed up to Glacier, which will cost me $1.20/month.
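The arithmetic behind that figure is simple flat-rate storage pricing. A quick sketch, assuming Glacier’s 2013-era rate of roughly $0.01 per gigabyte per month (current prices differ, so treat the rate as an assumption):

```python
# Back-of-the-envelope Glacier storage cost: size times the per-GB monthly rate.
# The $0.01/GB-month rate reflects 2013-era pricing and is an assumption here.
def monthly_storage_cost(gigabytes, price_per_gb=0.01):
    """Return the flat monthly storage cost in dollars."""
    return gigabytes * price_per_gb

print(f"${monthly_storage_cost(120):.2f}/month")  # → $1.20/month
```

Retrieval fees are a separate matter, as the next section notes.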
The only catch to using Glacier is that getting the data back out can take some time. There are also some additional charges for pulling down your data, but as noted above, Arq will give you an estimate of your costs and you can adjust the download speed to make things cheaper. The slow speeds aren’t ideal when you actually need your data, but these are secondary, worst-case scenario backups anyway. If my laptop drive dies, I can just copy the clone or Time Machine backup drive to get my files back. The Glacier backup is only there if my house burns down or floods or something else destroys my local backups. While it would, according to Arq’s estimate, cost about $60 and take over four days to get my data out of Glacier, that would likely seem like a bargain when I’d have otherwise lost everything.
After nearly two years of testing and improving, Google is removing the beta label and releasing mod_pagespeed 1.0. The mod_pagespeed tool is Google’s open source effort to speed up websites running on the popular Apache web server. Pagespeed automatically optimizes pages and their resources, making websites load faster.
Despite the beta label that’s been attached to it for two years, Google says that over 120,000 websites are already using mod_pagespeed, including big-name web hosting companies like Dreamhost and content delivery networks like EdgeCast.
Today’s web shows up on a tremendous variety of screens — desktops, televisions, tablets, phones and lately “phablets” (whatever those are). Testing your site on even a fraction of the devices available can seem like a full-time job. Tools like Adobe Shadow simplify the process somewhat, refreshing your local site across devices with the click of a button. But Shadow has limitations; for instance, it only works with WebKit browsers.
If you’ve got a wide array of devices to test with you’ll probably want a local network solution — that is, serve your site over your local network so every test device can reach the same virtual host domain.
Unfortunately, setting up a local network and connecting to it can be a pain, which is where the curiously named Xip.io comes in. Xip.io is a wildcard DNS service that makes it drop-dead simple to set up a network and connect any device to your local test site.
The service is really just a custom DNS server you can easily tap into. So, for example, if your machine’s LAN IP address is 10.0.0.1, then with Xip.io the name mysite.10.0.0.1.xip.io resolves to 10.0.0.1. With the DNS taken care of you can access virtual hosts on your local development server from any device on your local network, zero configuration required.
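The trick is purely lexical: the answer to every lookup is the dotted quad embedded in the hostname itself. Here’s an illustrative Python sketch of that name-to-address mapping — not Xip.io’s actual server code, just the scheme it implements:

```python
import re

def xip_resolve(hostname):
    """Mimic Xip.io's wildcard DNS trick: return the IPv4 address
    embedded in a *.xip.io hostname, or None if there isn't one."""
    m = re.search(r'(\d{1,3}(?:\.\d{1,3}){3})\.xip\.io$', hostname)
    return m.group(1) if m else None

print(xip_resolve("mysite.10.0.0.1.xip.io"))  # → 10.0.0.1
```

Because any prefix works, mysite.10.0.0.1.xip.io and admin.10.0.0.1.xip.io can map to different virtual hosts on the same development machine.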
Xip.io is a free service from 37signals, whose Sam Stephenson says, “we were tired of jumping through hoops to test our apps on other devices and decided to solve the problem once and for all.” Xip.io might not work for everyone, but if you’ve ever struggled and failed to set up and test sites on a local network, Xip.io might be able to help.