Dutch artist Sebastian Schmieg has elevated the Google Image search from its humble intent, creating a short film that strings together a series of image searches. The result oscillates between the prosaic and profound, and feels more like a grand homage to humanity than a collection of random images.
To create the image sequence, Schmieg fed a single transparent PNG into Google Images and used the “visually similar” feature to recursively loop through the results. Schmieg’s movie of the results, entitled Search by Image, Recursively, Transparent PNG, #1, is a truly hypnotic (and slightly NSFW) algorithmic tour of life as Google Images knows it.
In all there are some 2,951 images in the video. The “visually similar” option in Google Image Search tends to get stuck in loops when used the way Schmieg used it, so if an image had already appeared in the sequence, he skipped to the next image in the results. But otherwise the sequence is entirely algorithmic. Beware pareidolia.
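The process can be sketched in a few lines. Here the similar() callback is a stand-in for Google’s “visually similar” results, stubbed over a toy graph, since Schmieg’s actual lookups were manual; the image names and the toy result graph are invented for illustration.

```python
def recursive_similar_walk(start, similar, max_steps):
    """Follow the first not-yet-seen 'visually similar' result each step."""
    sequence = [start]
    seen = {start}
    current = start
    for _ in range(max_steps):
        # Skip results already used, to avoid the loops Schmieg ran into.
        candidates = [img for img in similar(current) if img not in seen]
        if not candidates:
            break
        current = candidates[0]
        sequence.append(current)
        seen.add(current)
    return sequence

# Toy stand-in for Google's "visually similar" feature.
toy_results = {
    "transparent.png": ["a.jpg", "b.jpg"],
    "a.jpg": ["transparent.png", "c.jpg"],  # would loop without the dedup
    "c.jpg": ["a.jpg"],
}
walk = recursive_similar_walk("transparent.png",
                              lambda img: toy_results.get(img, []),
                              max_steps=10)
print(walk)  # ['transparent.png', 'a.jpg', 'c.jpg']
```

The dedup check is the manual step Schmieg describes; everything else is the algorithm doing the walking.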
For more info about the movie and some other, similar efforts, be sure to check out Schmieg’s website.
Personalized Google Search (image courtesy of Google)
Google has announced a new personalized search the company calls “Search, plus Your World.” The update turns the classic Google search results page from an anonymous collection of webpages into something more personal, mining your Google+ network for results related to you. Rather than just scouring the web for webpages related to your search queries, Google will also now find conversations and images posted by your friends.
Call it the “plusification” of Google Search, but, unlike the way Google has forced Plus into many of its services, on the search page it’s easy to toggle it on and off — now you have Plus results, now you don’t.
To see the new customized search results just log in to your Google account and head over to the secure version of Google’s search page. If you’re not seeing the options shown in the screenshot above be patient. Google says it will be rolling out custom search to users over the next few days.
It’s entirely possible to continue using Google’s search page without ever using any of the personalization features. Indeed, there are probably many queries for which results from your social network friends would be irrelevant. Thankfully, Google has made it easy to toggle the Plus features on and off: just click the respective icon to show and hide Plus results.
For example, say you want to find information about SOPA, the much-maligned Stop Online Piracy Act. Search Google for SOPA and click the personalize icon. If anyone in any of your Google+ circles has posted something about SOPA, their posts will appear in the search results. If any of your contacts have posted SOPA-related images to Picasa, those will show up as well. If you decide you don’t care what your friends think of SOPA, just click on the globe icon and the Plus results are gone.
In addition to the info drawn from your Google Plus circles, Google now includes profiles in search results, making it easier to find people. It also helps narrow the results to the particular person you’re looking for — search for John Smith and Google will return your friend John Smith, skipping the millions of other John Smiths in the world.
If you’re not seeing the new Google Search personalization features just yet, check out the video below from Google which shows the new features in action.
For the next 60 days Google searches for the words “browser,” “Chrome” or even “Chrome browser” will not include a link to the main Google Chrome download page. Google removed the Chrome download page from its search results after it discovered that one of its own sponsored post campaigns had violated its webmaster guidelines.
Because no one likes spammy links in Google search results — least of all Google — the company has penalized its own Chrome browser just like it would any other company using the same tactics. Searching Google for these terms will still bring up links that can eventually lead users to the Chrome download page, but there is no direct link (there are links to the Chrome beta download page in some results).
Search Engine Land’s Danny Sullivan discovered the suspicious links in Google’s search results and pointed out that they seem to violate Google’s webmaster guidelines, which prohibit “buying or selling links that pass PageRank.” All of the pages in question clearly stated that they were sponsored posts (created with Google’s implicit blessing as part of a campaign from Unruly Media), which means, according to Google’s webmaster guidelines, all the links should have been using rel="nofollow". Most did use nofollow, but one did not.
We did find one sponsored post that linked to www.google.com/chrome in a way that flowed PageRank. Even though the intent of the campaign was to get people to watch videos — not link to Google — and even though we only found a single sponsored post that actually linked to Google’s Chrome page and passed PageRank, that’s still a violation of our quality guidelines, which you can find at http://support.google.com/webmasters/bin/answer.py?hl=en&answer=35769#3 .
In response, the webspam team has taken manual action to demote www.google.com/chrome for at least 60 days. After that, someone on the Chrome side can submit a reconsideration request documenting their clean-up just like any other company would. During the 60 days, the PageRank of www.google.com/chrome will also be lowered to reflect the fact that we also won’t trust outgoing links from that page.
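The guideline at issue comes down to a single attribute. A sketch of the difference (the link text is illustrative; only the URL comes from the campaign):

```html
<!-- Sponsored link that passes PageRank -- the guideline violation: -->
<a href="http://www.google.com/chrome">Download Chrome</a>

<!-- The same link with rel="nofollow", as the guidelines require: -->
<a href="http://www.google.com/chrome" rel="nofollow">Download Chrome</a>
```

Most of the sponsored posts carried the second form; the one that carried the first triggered the demotion.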
While Google’s response may seem extreme, it’s not the first time the company has punished its own. Google previously banned BeatThatQuote (one of its own companies) over almost the same issue last year. And of course it also deranked JC Penney and Forbes for similarly shady tactics.
Clearly Google doesn’t have a double standard when it comes to violating its own guidelines, but, as Sullivan points out, that the company paid Unruly Media to run the ad campaign in the first place is troubling. “Google’s paying to produce a lot of garbage,” writes Sullivan, “the same type of garbage that its Panda Update was designed to penalize.”
The “Panda Update” involved tweaks to the way Google’s algorithms rank search results which heavily penalized so-called “content farms.” Google defines content farms as “sites with shallow or low-quality content.” In other words, sites just like the ones Google was paying Unruly Media to create.
Mind what you say in Facebook comments: Google will soon be indexing them and serving them up as part of the company’s standard search results. Google’s all-seeing search robots still can’t find comments on private pages within Facebook, but now any time you use a Facebook comment form on another site, or on a public page within Facebook, those comments will be indexed by Google.
Typically when Google announces it’s going to expand its search index in some way everyone is happy — sites get more searchable content into Google and users can find more of what they’re looking for — but that’s not the case with the latest changes to Google’s indexing policy.
Developers are upset because Google is no longer the passive crawler it once was and users will likely become upset once they realize that comments about drunken parties, embarrassing moments or what they thought were private details are going to start showing up next to their names in Google’s search results.
For now most of the ire seems limited to concerned web developers worried that Google’s new indexing plan ignores the HTML specification and breaks the web’s underlying architecture. To understand what Google is planning to do and why it breaks one of the fundamental gentleman’s agreements of the web, you first have to understand how various web requests work.
There are two primary request methods on the web — GET and POST. In a nutshell, GET requests are intended for reading data, POST for changing or adding data. That’s why search engine robots like Google’s have always stuck to GET crawling. There’s no danger of the Googlebot altering a site’s data with GET; it just reads the page, without ever touching the actual data. Now that Google is crawling POST pages, the Googlebot is no longer a passive observer, it’s actually interacting with — and potentially altering — the websites it crawls.
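The distinction can be sketched with a toy in-memory “site” (the handler below is purely illustrative, not any real server API): a GET hands back the stored data unchanged, while a POST — the kind of request the Googlebot may now send — writes to it.

```python
# Toy comment store standing in for a site's database.
comments = ["first!"]

def handle_request(method, body=None):
    """Illustrative request handler: GET is read-only, POST mutates state."""
    if method == "GET":
        # Safe for a crawler: returns the page, never touches stored data.
        return list(comments)
    if method == "POST":
        # Not safe for a crawler: this request adds a comment.
        comments.append(body)
        return list(comments)
    raise ValueError("unsupported method")

before = handle_request("GET")
handle_request("POST", body="crawler was here")
after = handle_request("GET")
print(before)  # ['first!']
print(after)   # ['first!', 'crawler was here']
```

A crawler that only ever issues the GET branch can hit the site a million times without changing anything; one that issues the POST branch leaves a mark every time.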
While it’s unlikely that the new Googlebot will alter a site’s data — as the Google Webmaster Blog writes, “Googlebot may now perform POST requests when we believe it’s safe and appropriate” — it’s certainly possible now and that’s what worries some developers. As any webmaster knows, mistakes happen, especially when robots are involved, and no one wants to wake up one day to discover that the Googlebot has wreaked havoc across their site.
If you’d like to stop the Googlebot from crawling your site’s forms, Google suggests using the robots.txt file to disallow the Googlebot on any POST URLs your site might have. So long as you’re surfacing your content in other ways — and you should be, provided you want it indexed — there shouldn’t be any harm in blocking the Googlebot from POST requests.
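If your forms post to predictable URLs, that block is a short robots.txt rule. A minimal sketch — the /comments/submit path here is hypothetical; substitute whatever URLs your site’s forms actually POST to:

```
User-agent: Googlebot
Disallow: /comments/submit
```

Note that robots.txt rules match URLs rather than HTTP methods, so this keeps the Googlebot away from those URLs entirely — which is exactly what you want for endpoints that only exist to receive form submissions.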
If, on the other hand, you’d like to stop the Googlebot from indexing any embarrassing comments you may have left on the web, well, you’re out of luck.