All posts tagged ‘amazon s3’

File Under: Backend, servers, Web Services

Host Your Static Website on Amazon S3, No WWW Necessary

Amazon’s S3 file storage service started life as just that — a simple way to store static files and pay for only the data you used. When you don’t need an always-on server, S3 fits the bill.

But if you can store static files, why not whole static websites? In 2011 Amazon began allowing you to point your own domain to an S3 “bucket”, a folder in Amazon parlance. Custom domain support made it simple to host entire static sites; the catch was that you needed to use a subdomain — for example, www.

Now the www restriction has been lifted and you can point any root domain at S3 and serve your files directly. The only catch is that Amazon has created its own non-standard DNS workaround, which means you must use Amazon’s Route 53 service to host the DNS data for your domain.

Unfortunately, while the new root domain support is great news for anyone using a static blog generator like Jekyll, Amazon’s documentation leaves much to be desired. To help you get started with S3 hosting, here’s a quick guide to setting up S3 to serve files from a root domain (rather than making the root domain redirect to www.mydomain.com, as the Amazon blog post instructions do).

First, register a domain name and point your DNS records to Amazon’s Route 53 service (the Route 53 docs have detailed instructions on how to do this). The next step is to create an S3 bucket for your domain. In other words, a bucket named mydomain.com.
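If you'd rather script that step than click through the console, here's a minimal sketch using the boto3 SDK. The domain mydomain.com is just the placeholder used throughout this guide, and the example assumes a bucket in us-east-1 (other regions also need a CreateBucketConfiguration argument).

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# The bucket name must exactly match the root domain you plan to serve.
# "mydomain.com" is a placeholder; outside us-east-1 you'd also pass
# CreateBucketConfiguration={"LocationConstraint": "<region>"}.
s3.create_bucket(Bucket="mydomain.com")
```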

Now click the Properties button, select the Website tab, and make sure website hosting is enabled with the Index Document set to index.html. You’ll also need to click the Permissions tab and set a bucket policy that allows public reads (you can use this basic example from Amazon).
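The same configuration can be applied from code. Here's a hedged boto3 sketch, reusing the placeholder bucket from the previous step; the 404.html error document is an optional extra, and the policy follows Amazon's basic public-read example.

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "mydomain.com"  # placeholder domain from this guide

# Enable static website hosting with index.html as the index document.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "404.html"},  # optional custom error page
    },
)

# Bucket policy allowing anonymous reads, modeled on Amazon's basic example.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::%s/*" % bucket,
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```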

Now upload your site to that bucket and head back to Route 53. Here comes the magic. To make this work you need to create an A “Alias” DNS record. Make sure you name it the same as your domain name. Sticking with the earlier example, that would be mydomain.com. Now click the Alias Target field and select the S3 endpoint you created earlier when you set up the bucket.
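For the curious, here's roughly what that alias record looks like when created through the Route 53 API instead of the console. This is a sketch: the hosted zone ID is hypothetical, and the S3 website endpoint and its fixed zone ID shown here are the ones for us-east-1, which you'd swap for your bucket's region.

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # hypothetical: your domain's Route 53 hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "mydomain.com.",
                "Type": "A",
                "AliasTarget": {
                    # Assumed values for a bucket in us-east-1; both the website
                    # endpoint and its hosted zone ID vary by region.
                    "DNSName": "s3-website-us-east-1.amazonaws.com.",
                    "HostedZoneId": "Z3AQBSTGFYJSTF",
                    "EvaluateTargetHealth": False,
                },
            },
        }],
    },
)
```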

And that’s it. Behind the scenes that Route 53 “Alias” record resolves like a normal DNS A record rather than a CNAME, so the other records at the root of your domain — the MX records that handle email, for instance — keep working while Route 53 directs web requests to your S3 bucket. If you want to make www redirect to the root domain you can either set that up through Route 53 (see Amazon’s instructions) or handle it through another service.
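If you go the Route 53 route for the www redirect, the usual pattern is a second bucket that does nothing but redirect. A short sketch under the same placeholder domain:

```python
import boto3

s3 = boto3.client("s3")

# A second bucket whose only job is to bounce www traffic to the root domain.
s3.create_bucket(Bucket="www.mydomain.com")
s3.put_bucket_website(
    Bucket="www.mydomain.com",
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {"HostName": "mydomain.com", "Protocol": "http"},
    },
)
# You'd then add a second Route 53 alias record for www.mydomain.com pointing
# at this bucket's website endpoint, just as you did for the root domain.
```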

File Under: APIs, Backend, Web Services

Google Drive’s New ‘Site Publishing’ Takes on Amazon, Dropbox

Google’s demo site, served entirely by Google Drive. Image: Screenshot/Webmonkey

Google has unveiled a new feature dubbed “site publishing” for the company’s Drive cloud storage service. Drive’s new site publishing sits somewhere between a full-featured static file hosting service like Amazon S3 and Dropbox’s public folders, which can make hosted files available on the web.

Google has set up a simple demo site served entirely from Google Drive to give you an idea of what’s possible with the site publishing feature. Essentially site publishing gives your public folders a URL on the web — anything you drop in that folder can then be referenced relative to the root URL. It’s unclear from the announcement how these new features fit with Google’s existing answer to Amazon S3, Google Cloud Storage.

The API behind site publishing works a lot like what you’ll find in Amazon’s S3 offering. If you use the Drive API’s files.insert method to upload a file to Drive, it will return a webViewLink attribute, something like https://googledrive.com/host/A1B2C3D4E5F6G7H8J. That ugly but functional URL becomes the base URL for your content. So, if you uploaded a folder named images with a file named kittens.jpg, you could access it on the web at https://googledrive.com/host/A1B2C3D4E5F6G7H8J/images/kittens.jpg.
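To see the shape of those calls, here's a rough sketch using the Python client for the Drive API of that era (v2). Treat it as an assumption-laden illustration: the creds object, folder and file names are placeholders, and the point is simply that files.insert uploads the content while a public folder's webViewLink becomes the base URL.

```python
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

# Assumes `creds` is an already-authorized OAuth 2.0 credentials object
# (the auth flow is omitted here for brevity).
drive = build("drive", "v2", credentials=creds)

# Create a folder to act as the site root and make it publicly readable.
folder = drive.files().insert(body={
    "title": "site",
    "mimeType": "application/vnd.google-apps.folder",
}).execute()
drive.permissions().insert(
    fileId=folder["id"],
    body={"type": "anyone", "role": "reader"},
).execute()

# Upload a file into the folder via files.insert.
drive.files().insert(
    body={"title": "kittens.jpg", "parents": [{"id": folder["id"]}]},
    media_body=MediaFileUpload("kittens.jpg", mimetype="image/jpeg"),
).execute()

# The public folder's webViewLink is the https://googledrive.com/host/... base URL.
print(drive.files().get(fileId=folder["id"]).execute().get("webViewLink"))
```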

There’s one drawback, though: Drive’s site publishing doesn’t appear to support custom domains. That means it works fine for assets like images, CSS or JavaScript, but unless you don’t mind serving your site from some funky URLs, it’s probably not the best choice for hosting an entire site.

There are already numerous static file hosting options on the web, including Dropbox and Amazon’s S3, as well as whole publishing systems that use Dropbox and S3 to host files. But if you’d prefer a Google-based solution, now you have one.

For more details on the new API see the Google Apps Developer Blog and be sure to read through the Drive SDK docs. If you need help, Google is answering questions over on Stack Overflow.

File Under: Programming, Web Services

Amazon S3 Storage Now Handles Entire Websites

Cheap, cloud-hosted web servers are a key component of a distributed web. But sometimes you don’t need a server, you just need a cheap way to host your static files, like images and videos. That’s the gap Amazon’s S3 service has long filled — offering a simple and cheap way to serve up static files without paying for an always-on server.

Now, thanks to an update, you can host not just a few image files, but a complete static website on Amazon S3.

Previously, S3 wouldn’t work for an entire site because requesting the root level of your Amazon S3 “bucket” (as storage containers are called in Amazon parlance) returned an XML document rather than a web page. For entire websites you needed to use Amazon EC2, even if your site was purely static content.

But Amazon has changed the way S3 works. Now, an S3 bucket can be accessed as a website, making it possible to host static sites on the service. If an error occurs, your visitors will see a normal HTML error document instead of the old XML error message.

Amazon CTO Werner Vogels is eating his own dog food and has a helpful post on how he moved his blog to S3. Like most blogs, Vogels’ site is mainly static content, so serving it from S3 is simply a matter of uploading the files and changing the domain’s CNAME record to point to the S3 bucket. Of course, for those elements of the site that aren’t static — editing posts, managing comments and searching — Vogels still relies on a web server.

For those using static publishing systems like Jekyll, the revamped S3 makes for a cheap hosting option.
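As a rough illustration of what "uploading the files" looks like for a generated site, here's a hedged boto3 sketch that walks a Jekyll _site output directory and pushes each file to the bucket with a guessed Content-Type. The bucket name and paths are placeholders, not part of the original article.

```python
import mimetypes
import os
import boto3

s3 = boto3.client("s3")
bucket = "mydomain.com"   # placeholder bucket named after the domain
site_dir = "_site"        # Jekyll's default output directory

for root, _dirs, files in os.walk(site_dir):
    for name in files:
        path = os.path.join(root, name)
        key = os.path.relpath(path, site_dir)
        content_type = mimetypes.guess_type(path)[0] or "application/octet-stream"
        # Upload each generated file, preserving its relative path as the key.
        s3.upload_file(path, bucket, key, ExtraArgs={"ContentType": content_type})
```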

Amazon S3 still has some notable oversights — like the lack of support for gzip/deflate — but now that it can handle whole static websites there’s no need to pay for a server when you don’t need one.


File Under: Other

S3 Outage Makes Developers Consider Redundancy

At least the downtime gods picked a notoriously low-traffic day to punish Amazon’s S3 storage service. The darling of many web apps, including the popular Twitter, was down for eight hours Sunday.

Amazon’s service provides storage and transfer of data for a small fee, so that developers can let their own servers focus on more important issues. For small upstart sites, it’s cheaper to buy data a tiny slice at a time than to invest in a lot of hardware. S3 is especially popular for hosting images, which can be large files in comparison to trim HTML. Also, there are often many images per web page, putting extra strain on a server.

Among the revelations for some developers after a third of a day without S3 is that no single service can be counted on for 100% uptime. Of course, there’s oodles of redundancy built in to Amazon’s service. Yet it can still go down, leaving many sites that count on it with a single point of failure.

Dave Winer sees a business opportunity in S3 redundancy:

“It would be easy to hook up an external service to S3, and for a fee, keep a mirror on another server. Then it would be a matter of redirecting domains to point at the other server when S3 goes down.”

Developers could achieve the same result Winer mentions on their own. Robert Accettura notes how WordPress.com weathered the S3 outage gracefully:

“They have (slower) back up’s in house for when S3 is down and can failover if S3 has a problem. This means they can leverage S3 to their advantage, but aren’t down because of S3.”
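The failover idea is simple enough to sketch. Here's a hypothetical Python example (the URLs and the requests dependency are assumptions, not anything WordPress.com has described) that tries the S3-hosted copy of an asset first and falls back to an in-house mirror if S3 isn't answering:

```python
import requests

# Hypothetical locations: the S3-hosted asset and a slower in-house mirror.
PRIMARY = "https://my-bucket.s3.amazonaws.com/images/logo.png"
FALLBACK = "https://mirror.example.com/images/logo.png"

def fetch_asset():
    """Return the asset bytes, preferring S3 but surviving an S3 outage."""
    try:
        resp = requests.get(PRIMARY, timeout=2)
        resp.raise_for_status()
        return resp.content
    except requests.RequestException:
        # S3 is down or slow; serve from the in-house mirror instead.
        resp = requests.get(FALLBACK, timeout=5)
        resp.raise_for_status()
        return resp.content
```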

As many have noted, true web scalability and redundancy can be a tough sheep to shear. While Larry Dignan questions whether S3 is too complicated, I think the larger issue is that it is too simple. Too many have viewed it as a silver bullet, with Amazon doing the dirty work for them. This outage (and another back in February) has shown that S3 and services like it can help us a lot, but we still need to do our own work.
