Amazon has quietly joined the ranks of cloud-based file syncing services like Dropbox, Google Drive and Microsoft’s SkyDrive. The company’s Amazon Cloud Drive — previously limited to a rather primitive web-based interface — now offers desktop file syncing tools like those found in Dropbox.
To test out the new Cloud Drive syncing, grab the desktop app for Windows or OS X (sorry, Linux fans, there's currently no desktop client for Linux).
Once you’ve installed the new Cloud Drive app, you’ll find a new folder on your drive — drop whichever files you’d like to sync into that folder and they’ll automatically be sent to Amazon’s servers. You’ll then have access to them on any computer with Cloud Drive installed and through the Cloud Drive web interface, though what you can do with files in the web interface is extremely limited.
It’s worth noting that the Cloud Drive app requires Java. As our friends at Ars Technica point out, that means users with newer Macs will be prompted to install Java as well (the Windows app comes with Java bundled).
There are also no mobile apps for any platform (there is an Android photo app, but all it does is send photos from your phone to Cloud Drive). In fact, while Cloud Drive will sync files between desktops, beyond that there isn't much to see yet.
Part of the appeal of any web-based sync tool is ubiquitous access, not just via the web but in your favorite mobile apps as well, and in that space Dropbox clearly has a huge lead over Cloud Drive.
Amazon offers 5GB of Cloud Drive storage for free, with additional storage available at roughly $0.50/GB, down from the $1/GB price when Cloud Drive first launched. That's on par with SkyDrive's pricing and roughly half the price of Dropbox. In this case though (at least right now) you get what you pay for. Amazon has the makings of a Dropbox competitor, but it still has a lot of catching up to do.
Transcoding video is the process of taking a user-uploaded video and converting it to a video format that works on the web, typically MP4 and WebM. Consumer video services like YouTube and Vimeo handle this for you behind the scenes. But if you want to actually build the next Vimeo or YouTube you're going to have to transcode video.
Open source tools like ffmpeg simplify the video transcoding process, but require considerable server power to operate at scale. And server power is something Amazon has in spades.
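As a rough sketch of that ffmpeg step, driven from Python (file names here are placeholders; libx264, libvpx, aac and libvorbis are ffmpeg's usual encoders for these formats):

```python
def transcode_cmd(src, dst, fmt):
    """Build an ffmpeg command line for a web-friendly output format."""
    if fmt == "mp4":       # H.264 video + AAC audio in an MP4 container
        codecs = ["-c:v", "libx264", "-c:a", "aac"]
    elif fmt == "webm":    # VP8 video + Vorbis audio in a WebM container
        codecs = ["-c:v", "libvpx", "-c:a", "libvorbis"]
    else:
        raise ValueError("unsupported format: %s" % fmt)
    return ["ffmpeg", "-i", src] + codecs + [dst]

# To actually run it (requires ffmpeg on the machine):
# subprocess.run(transcode_cmd("upload.mov", "out.mp4", "mp4"), check=True)
```

Feeding the same source through both branches gives you the MP4 and WebM variants browsers expect.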
Amazon's is hardly the first cloud-powered video transcoding service — Zencoder is another popular option (and runs on Amazon servers) — but Amazon's offering is marginally cheaper and well-integrated with the company's other services.
The Amazon Elastic Transcoder works in conjunction with the company’s other cloud offerings like S3 file storage. You send a video from one S3 “bucket” to Transcoder, which then converts it to the formats you need and writes the resulting files to another S3 bucket.
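Sketched as the raw parameters you'd hand to the Transcoder's CreateJob operation (via one of the AWS SDKs); this is a trimmed-down sketch, and every ID and key below is a placeholder:

```python
# The pipeline (configured separately) names the input and output S3
# buckets; the preset ID selects an output profile. Placeholders throughout.
job = {
    "PipelineId": "1111111111111-abcde1",
    "Input": {"Key": "uploads/raw-video.mov"},   # object in the input bucket
    "Output": {
        "Key": "encoded/video.mp4",              # written to the output bucket
        "PresetId": "1351620000001-000010",      # an MP4 output preset
    },
}
```

Submit that job and poll (or subscribe to a notification) for completion; the transcoded file lands in the pipeline's output bucket.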
For now the Elastic Transcoder will only output MP4 video containers with Apple-friendly H.264 video and AAC audio. The new Transcoder options in the Amazon Web Services control panel allow you to create various quality presets if, for example, you’re delivering video to both mobile and desktop clients.
As with all Amazon Web Services, the new Transcoder has a pay-as-you-go pricing model, with rates starting at $0.015 per minute for standard definition video (less than 720p) and $0.030 per minute for HD video. That means transcoding a 10-minute video (the max on YouTube) would cost you $0.15 for SD output and $0.30 for HD, which sounds cheap until you start looking at transcoding several hundred 10-minute videos a day (200 would set you back $60 a day for HD). Amazon's free usage tier gets you 20 minutes of SD video or 10 minutes of HD video encoded for free each month.
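That arithmetic, using the posted rates:

```python
SD_RATE, HD_RATE = 0.015, 0.030   # dollars per minute of output video

def cost(minutes, rate, videos=1):
    """Transcoding cost in dollars, rounded to the cent."""
    return round(minutes * rate * videos, 2)

print(cost(10, SD_RATE))        # 0.15 -- one 10-minute SD video
print(cost(10, HD_RATE))        # 0.3  -- the same video in HD
print(cost(10, HD_RATE, 200))   # 60.0 -- 200 HD videos in a day
```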
Amazon’s rates are marginally cheaper than Zencoder, which charges $0.020/minute for SD and double that for HD. Zencoder does have a considerable edge when it comes to output format though, offering pretty much anything you’d need for the web, including live streaming, while, at least for now, Amazon’s offering is limited to MP4.
Amazon’s Glacier file storage service costs less than a penny per gigabyte per month. It’s hard to think of a cheaper, better way to create and store an offsite backup of your files.
Of course backups are only useful if you actually create them on a regular basis. Unfortunately, getting your files into Glacier’s dirt-cheap storage requires either a manual effort on your part or some scripting-fu to automate your own system.
Back when Glacier first launched we speculated that it would be a perfect fit for a backup utility like the OS X backup app Arq. Now Arq 3 has been released and among its new features is built-in support for Amazon Glacier. Arq 3 is $29 per computer, upgrading from v2 is $15.
Arq creator Stefan Reitshamer sent over a preview of Arq 3 a while back and, having used it for the better part of a week now, I can attest that it, combined with Glacier, does indeed make for a near-perfect low-cost off-site backup solution.
Using Arq 3 with Glacier is simple. Just sign up for an Amazon Web Services account and create a set of access keys. Then fire up Arq, enter your keys and select which files you want to back up. Choose Glacier for the storage type and then make any customizations you'd like (for example, excluding folders and files you don't want backed up).
That’s all there is to it; close Arq and it will back up your files in the background. By default Arq 3 is set to make Glacier backups every day at 12 a.m., but you can change that in the preferences.
Should disaster strike and you need to get your files out of Glacier (or S3), just fire up Arq, select the files you need and click “restore.” Arq will give you an estimate of your costs and you can adjust the download speed — the slower the download the cheaper it is to pull files out of Glacier. There’s also an open source command line client available on GitHub in the event that the Arq app is no longer around when you need to get your files back.
Estimating costs with Arq’s Glacier restore screen. Image: Screenshot/Webmonkey
Existing Arq users should note that Amazon currently doesn’t offer an API for moving from S3 to Glacier (though the company says one is in the works). That means if you want to switch any current S3 backups to Glacier you’ll need to first remove the folder from Arq and then re-add it to trigger the storage type dialog.
In order to get the most out of Arq 3 and Glacier it helps to understand how Glacier works. Unlike Amazon S3, which is designed for cheap but accessible file storage, Glacier is, as the name implies, playing the long, slow game. Glacier is intended for long-term storage that’s not accessed frequently. If you need to grab your files on a regular basis Glacier will likely end up costing you more than S3, but for secondary (or tertiary) backups of large files like images, videos or databases Glacier works wonderfully.
My backup scenario works like this: For local backups I have two external drives. One is nearly always connected and makes a Time Machine backup every night. Once a week I clone my entire drive off to the second external drive. For offsite backups I use rsync and cron to back up key documents to my own server (most are also stored in Dropbox, which is not really a backup service, but can, in a pinch, be used like one).
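The rsync-plus-cron piece can be as small as a single crontab entry; the host and paths below are placeholders:

```shell
# Nightly at 1 a.m., mirror ~/Documents to a remote server over SSH.
# Add via `crontab -e`.
0 1 * * * rsync -az --delete "$HOME/Documents/" user@example.com:backups/documents/
```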
But my server was running out of space. Photo and video libraries are only getting bigger and most web hosting services tend to get very expensive once you pass the 100GB mark. That’s where Arq and Glacier come in. It took a while, but I now have all 120GB of my photos backed up to Glacier, which will cost me $1.20/month.
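A quick sanity check of that figure, assuming Glacier's roughly one-cent-per-gigabyte monthly rate:

```python
GLACIER_RATE = 0.01   # dollars per GB per month (approximate posted rate)

# Monthly cost for a 120GB photo library
print(round(120 * GLACIER_RATE, 2))   # 1.2
```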
The only catch to using Glacier is that getting the data back out can take some time. There are also some additional charges for pulling down your data, but as noted above, Arq will give you an estimate of your costs and you can adjust the download speed to make things cheaper. The slow speeds aren’t ideal when you actually need your data, but these are secondary, worst-case scenario backups anyway. If my laptop drive dies, I can just copy the clone or Time Machine backup drive to get my files back. The Glacier backup is only there if my house burns down or floods or something else destroys my local backups. While it would, according to Arq’s estimate, cost about $60 and take over four days to get my data out of Glacier, that would likely seem like a bargain when I’d have otherwise lost everything.
Glacier is intended for data you don't need to get to often — database backups, image archives and the like. In the press release Amazon also says that Glacier data is intended to last, as in "centuries."
Here’s how it works:
To store data in Glacier, you start by creating a named vault. You can have up to 1000 vaults per region in your AWS account. Once you have created the vault, you simply upload your data (an archive in Glacier terminology). Each archive can contain up to 40 Terabytes of data and you can use multipart uploading or AWS Import/Export to optimize the upload process. Glacier will encrypt your data using AES-256 and will store it durably in an immutable form.
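One Glacier-specific wrinkle in that upload step: the API requires a SHA-256 "tree hash" of each archive, computed over 1 MiB chunks. A minimal sketch of the computation in Python:

```python
import hashlib

MB = 1024 * 1024  # Glacier hashes the payload in 1 MiB chunks

def tree_hash(data: bytes) -> str:
    """SHA-256 tree hash of the kind Glacier expects on archive uploads."""
    # Hash each 1 MiB chunk of the payload...
    chunks = [hashlib.sha256(data[i:i + MB]).digest()
              for i in range(0, max(len(data), 1), MB)]
    # ...then pair adjacent digests and re-hash until one digest remains.
    while len(chunks) > 1:
        paired = [hashlib.sha256(a + b).digest()
                  for a, b in zip(chunks[0::2], chunks[1::2])]
        if len(chunks) % 2:          # an odd final digest carries up unhashed
            paired.append(chunks[-1])
        chunks = paired
    return chunks[0].hex()
```

For anything under 1 MiB this reduces to a plain SHA-256; for multipart uploads the same scheme covers each part and the whole archive.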
While there's an obvious use case for enterprise web services and any digital archiving project, Glacier could also be used as a cheap way to create an off-site backup of your files using something like Arq. That would make Glacier not just a long-term storage partner for S3, but a competitor to backup services like CrashPlan or Backblaze.
[Update: Stefan Reitshamer, creator of Arq, tells Webmonkey that he’s looking into adding support for Glacier to a future version of Arq. However, he also points out a couple of potential gotchas to using Glacier for personal backups, namely the possibility of very expensive transfer fees (see the discussion on Hacker News for more on this) and fees for deleting data less than 3 months old. It’s also worth mentioning that Amazon’s own blog notes that in some cases it may still be cheaper to use S3.]
Amazon also says that an S3-to-Glacier file moving tool for automated backups is in the works.
It’s important to note that getting your data out of Glacier is priced a bit differently than what you might be used to with S3. With Glacier you can retrieve up to 5 percent of your average monthly storage, pro-rated daily, for free each month. After that prices start at $0.01/GB. For full pricing details check out the Glacier pricing page.
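As a sketch of that allowance (assuming the 5 percent is prorated over a 30-day month):

```python
def free_retrieval_gb_per_day(stored_gb, days_in_month=30):
    """Free daily retrieval: 5% of average monthly storage, pro-rated daily."""
    return stored_gb * 0.05 / days_in_month

# A 120GB archive earns about 0.2 GB of free retrieval per day
print(round(free_retrieval_gb_per_day(120), 2))   # 0.2
```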
The new Python support means that popular web frameworks like Django (which powers Instagram, Everyblock and other popular sites) are easier to deploy across Amazon’s suite of cloud services.
It also means that Amazon and Google App Engine are once again going head to head, this time over Google App Engine’s territory. Thanks to its Python-friendly environment, App Engine has been a favorite with Python developers looking to deploy apps on hosted services.
While it’s always been possible to host Python apps on Amazon, setting up and configuring apps can be a pain. That’s where Beanstalk comes in. For those who haven’t tried it, Elastic Beanstalk greatly simplifies the process of deploying your app to Amazon’s various cloud services, including setting up new EC2 instances, load balancing with Elastic Load Balancing, as well as scaling and managing your app after it’s deployed. Beanstalk also integrates with Git and virtualenv.
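For a sense of what Beanstalk's Python container actually runs: it serves your code over WSGI, and by default it looks for a module exposing a callable named `application` (the names here reflect that convention; a real Django app would point Beanstalk at Django's own WSGI handler instead). A minimal example:

```python
# A bare-bones WSGI app of the kind Elastic Beanstalk's Python
# container can serve directly.
def application(environ, start_response):
    body = b"Hello from Elastic Beanstalk"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Dependencies go in a `requirements.txt` alongside the app, which Beanstalk installs on each new instance.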