Archive for the ‘Web Services’ Category

File Under: Security, Web Services

Users Scramble as GitHub Search Exposes Passwords, Security Details

Inspectocat says “never store private stuff in public places.” Image: GitHub

GitHub has temporarily shut down some parts of the site-wide search update it launched yesterday. As we mentioned in our earlier post, the new search tools made it much easier to find passwords, private ssh keys and security tokens stored in GitHub repos.

GitHub hasn’t officially addressed the issue, but it appears to be blocking some of the security-related searches that were posted earlier in this Hacker News thread.

GitHub’s status site also says that “search remains unavailable,” though in my testing searching worked just fine so long as you weren’t entering words like “RSA,” “password,” “secret_token” or the like.

Most of the passwords and other security data exposed were personal — typically private SSH keys to someone’s server or a Gmail password — which is bad enough, but at least one appeared to reveal a password for an account on the repository that holds the source code for Google’s open-source web browser. Another reportedly exposed an SSH password to a production server of a “major, MAJOR website in China.”

Unfortunately for people who have been storing their private security credentials in public GitHub repos, what GitHub’s search engine revealed is nothing new. Google long ago indexed that data, and a targeted search will turn up the same exposed security info, which makes GitHub’s temporarily crippled search a token gesture at best.

If you accidentally stored sensitive data on GitHub, the most important thing to do is change your passwords, keys and tokens. Once you’ve created new security credentials for any exposed servers and accounts, you can go back and delete the old data from GitHub.

Given that Git, the version control system behind GitHub, is specifically designed to prevent data from disappearing, deleting your sensitive data takes more than just the Git command rm. GitHub has full details on how to get your sensitive data off the site. As GitHub’s instructions say, “if you committed a password, change it! If you committed a key, generate a new one. Once the commit has been pushed you should consider the data to be compromised.”
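To see what that scrubbing actually involves, here’s a minimal sketch using git filter-branch, the tool GitHub’s guide recommended at the time. The repository, the secrets.txt file and its contents are all hypothetical stand-ins; the sketch builds a throwaway repo just so the whole sequence is runnable end to end:

```shell
set -e
# Newer versions of git print a warning (and pause) before filter-branch runs;
# this quiets it. Harmless on older versions.
export FILTER_BRANCH_SQUELCH_WARNING=1

# Build a throwaway repo with an accidentally committed credentials file.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email "demo@example.com"
git config user.name "Demo"
echo "hunter2" > secrets.txt
git add secrets.txt && git commit -qm "oops: commit credentials"
echo "print('hello')" > app.py
git add app.py && git commit -qm "add app"

# Rewrite every commit on every ref, dropping secrets.txt from each one.
git filter-branch --force --index-filter \
  'git rm --cached --ignore-unmatch secrets.txt' \
  --prune-empty --tag-name-filter cat -- --all

# Expire the backup refs and garbage-collect the now-unreachable objects.
rm -rf .git/refs/original
git reflog expire --expire=now --all
git gc --prune=now

# Prints nothing: no commit in the remaining history touches secrets.txt.
git log --all --oneline -- secrets.txt
```

After rewriting, you still need to force-push the cleaned history — and, as the quote above says, treat anything that was ever pushed as compromised regardless.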

Find the Droids You’re Looking for With GitHub’s Powerful New Search Tools

GitHub’s Octobi Wan Catnobi. Image: GitHub

Open source is about building on the work of others and not having to reinvent the wheel. But if you can’t find the code you need then you’re stuck reinventing the wheel. Again.

To help you find exactly the wheels your project needs, code hosting giant GitHub has announced a new, much more powerful search tool that peers inside GitHub repositories and offers dozens of filters to help you discover the code you need.

The new search further cements GitHub’s place as the go-to source not just for publishing, but also discovering, code on the web.

While GitHub’s new search lacks the web-wide reach of more general code search engines like Google’s once-mighty Code Search (now a hollow shell of its former self), it’s likely to return more useful results thanks to some nice extras like the ability to see recent activity and narrow results by the number of users, stars and forks.

GitHub’s advanced search page now supports operators like @username to limit results to just your repositories (or another user’s repos), repo:name to search the code of a single repository, and path: to match code from a particular path within a repo. You can also limit by file extension, repo size, number of forks, number of stars, number of followers, number of repos and user location.
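A few illustrative queries built from those operators (the repository names and thresholds here are just placeholders, not endorsements of any particular syntax beyond what GitHub documents):

```
@defunkt modal                 # search only repos belonging to user defunkt
repo:twitter/bootstrap modal   # search inside a single repository
extension:coffee push          # match only files with a given extension
stars:>1000 forks:>100 parser  # narrow repositories by stars and forks
```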

While the advanced operators offer a quick way to search, there’s no need to memorize them all. The new advanced search form allows you to craft your query using multiple fields, while it displays the shorthand version at the top of the page so you learn as you go.

Under the hood, GitHub’s new search is powered by an Elasticsearch cluster that live-indexes your code as you push it to GitHub. The results you see will include any public repositories, as well as any private repositories you have access to.

The GitHub blog also notes that, “to ensure better relevancy, we’re being conservative in what we add to the search index.” That means, for example, that forks will not be in search results (unless the fork has more stars than the parent repository). While that may mean you occasionally miss a bit of code, it goes a long way toward reducing a problem that plagues many other code search engines — the overwhelming amount of duplicate results.

GitHub’s more powerful search has turned up one unintended consequence — exposed data. It’s much easier to search for anything on the site, including, say, usernames and passwords. As it turns out many people seem to have everything from SSH keys to Gmail passwords stored in public GitHub repos. There’s a discussion about the issue over on Hacker News. The ability to find things like exposed passwords isn’t new, but the new search tool does make it easier than ever. Let this be a reminder of something that’s hopefully obvious to Webmonkey readers — never store passwords or private keys on a public site. And if you find someone doing that, do the right thing and let them know.

For more details on everything that’s new in GitHub’s search page, head on over to the GitHub blog.

File Under: APIs, Web Services

Google’s Cloud Platform Floats Over to GitHub

Google’s Cloud Platform tools are now available on GitHub. The move will make it easier for developers already using GitHub to get started with Google’s various Cloud Platform offerings.

Thus far most of the repositories in Google’s GitHub account consist of code samples and projects related to offerings like App Engine, BigQuery, Compute Engine, Cloud SQL, and Cloud Storage.

The Google Open Source Blog says that most of Google Cloud Platform’s existing open source tools will be migrated to the new GitHub organization “over time.”

For now, though, you can get started building apps on Google Cloud Platform just by forking one of the demo repositories and tweaking the code to fit your project. Sample apps like the guestbook demos for Python and Java, along with the OAuth 2 helper apps, are a good place to start if you’ve never built anything on Google’s cloud before.

File Under: Software, Web Services

It’s Official: Microsoft to Kill Off Windows Live Messenger in March

Image: Screenshot/Webmonkey

Attention fans of Windows Live Messenger (née MSN Messenger), Microsoft is shutting the service down for good March 15, 2013.

The company sent out an email this week informing Windows Live Messenger users that the service will be going the way of Clippy. Instead users (and their contact lists) will be migrated to Skype, which Microsoft acquired in May 2011.

As with most service shutdowns, expect this one to be bumpy, especially given the relatively short notice and the fact that Skype lacks a number of features Messenger offers, including controlling a remote screen, custom emoticons and offline messages. There are already numerous threads on the Skype community forums complaining about the features lost in the move to Skype.

But thus far, complaining hasn’t stopped the transition. To get started making the switch, download the Skype client app and log in with your Microsoft account; from there you should have access to all your Windows Live Messenger contacts. If you’re already a Skype user as well, you can log in with your Skype account and link it to your Messenger account.

According to Microsoft’s FAQ, between now and the cutoff date Messenger will continue to work as it always has, though you’ll see a banner encouraging you to download Skype (provided you’re using a newer version of Messenger). If you click the banner and follow the install instructions, Messenger will be uninstalled once Skype is ready to go.

After March 15, you’ll no longer be able to sign into Messenger.

File Under: Backend, Servers, Web Services

Host Your Static Website on Amazon S3, No WWW Necessary

Amazon’s S3 file storage service started life as just that — a simple way to store static files and pay for only the data you used. When you don’t need an always-on server, S3 fits the bill.

But if you can store static files, why not whole static websites? In 2011 Amazon began allowing you to point your own domain to an S3 “bucket”, a folder in Amazon parlance. Custom domain support made it simple to host entire static sites; the catch was that you needed to use a subdomain — for example, www.

Now the www restriction has been lifted and you can point any root domain at S3 and serve your files directly. The only catch is that Amazon has created its own non-standard DNS workaround, which means you must use Amazon’s Route 53 service to host the DNS data for your domain.
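The reason a workaround is needed at all: the DNS spec forbids a CNAME at the zone apex, because the apex must also carry SOA and NS (and often MX) records, and a CNAME may not coexist with any other record type at the same name. A sketch of the conflict, with hypothetical record values:

```
; The zone apex (example.com.) must carry SOA and NS records, and often
; MX for mail -- but a CNAME cannot coexist with any other record type
; at the same name (RFC 1034, section 3.6.2).
example.com.   IN  NS     ns-1.awsdns-00.org.
example.com.   IN  MX     10 mail.example.com.
example.com.   IN  CNAME  mybucket.s3-website-us-east-1.amazonaws.com.  ; not allowed
```

Route 53’s “Alias” records sidestep this by answering with ordinary A records that Amazon resolves to the S3 endpoint behind the scenes.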

Unfortunately, while the new root domain support is great news for anyone using a static blog generator like Jekyll, Amazon’s documentation leaves much to be desired. To help you get started with S3 hosting, here’s a quick guide to setting up S3 to serve files from a root domain (rather than redirecting the root domain to the www subdomain, as the Amazon blog post’s instructions do).

First, register a domain name and point your DNS records to Amazon’s Route 53 service (the Route 53 docs have detailed instructions on how to do this). The next step is to create an S3 bucket named exactly after your domain; if your domain is example.com, the bucket must be named example.com.

Now click the Properties button, select the Website tab and make sure website hosting is enabled with the Index Document set to index.html. You’ll also need to click the Permissions tab and set a bucket policy (you can use this basic example from Amazon).
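Amazon’s basic example policy simply grants anonymous read access to every object in the bucket. A sketch, assuming the bucket is named example.com:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example.com/*"
    }
  ]
}
```

Note the `/*` on the Resource line: the policy applies to the objects inside the bucket, not to the bucket itself.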

Now upload your site to that bucket and head back to Route 53. Here comes the magic. To make this work you need to create an A “Alias” DNS record. Make sure you name it the same as your domain name (for example, example.com). Now click the Alias Target field and select the S3 endpoint you created earlier when you set up the bucket.
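If you’d rather script that last step than click through the console, the same alias record can be expressed as a Route 53 change batch. This is a sketch assuming the example.com domain and a bucket in the us-east-1 region; Z3AQBSTGFYJSTF is the fixed hosted-zone ID Amazon publishes for the s3-website-us-east-1.amazonaws.com endpoint:

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "example.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z3AQBSTGFYJSTF",
          "DNSName": "s3-website-us-east-1.amazonaws.com.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```

Each S3 region has its own website endpoint and hosted-zone ID, so check Amazon’s endpoint table for any region other than us-east-1.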

And that’s it. Behind the scenes, that Route 53 “Alias” record looks like a normal DNS A record, so things like email will continue to work for your domain while Route 53 directs web requests to your S3 bucket. If you want www to redirect to the root domain, you can either set that up through Route 53 (see Amazon’s instructions) or handle it through another service.