Archive for the ‘servers’ Category

OpenDNS and Google Working with CDNs on DNS Speedup

A group of DNS providers and content delivery network (CDN) companies have devised a new extension to the DNS protocol that aims to more effectively direct users to the closest CDN endpoint. Google, OpenDNS, BitGravity, EdgeCast, and CDNetworks are among the companies participating in the initiative, which they are calling the Global Internet Speedup.

The new DNS protocol extension, which is documented in an IETF draft, specifies a means for including part of the user’s IP address in DNS requests so that the nameserver can more accurately pinpoint the destination that is topologically closest to the user. Ensuring that traffic is directed to CDN endpoints that are close to the user could potentially reduce latency and congestion for high-impact network services like video streaming.
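For a concrete sense of how this works, here’s a minimal sketch using the dnspython library. The query name, the truncated client prefix (192.0.2.0/24, a documentation range) and the choice of resolver are placeholder values for illustration, not anything prescribed by the draft:

```python
# Minimal sketch: attach a truncated client prefix to a DNS query using
# dnspython's EDNS client-subnet option, so the authoritative nameserver can
# pick a CDN endpoint near that network rather than near the resolver.
import dns.edns
import dns.message
import dns.query

# 192.0.2.0/24 is a placeholder documentation prefix; a resolver would pass
# along the leading bits of the real client address instead.
ecs = dns.edns.ECSOption("192.0.2.0", srclen=24)

query = dns.message.make_query("www.example.com", "A", use_edns=0, options=[ecs])
response = dns.query.udp(query, "8.8.8.8", timeout=5)  # Google Public DNS

print(response.answer)
```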

The new protocol extension has already been implemented by OpenDNS and Google’s Public DNS, and it works with the CDN services that have signed on to participate in the effort. Google and OpenDNS hope to make the protocol extension an official IETF standard, but other potential adopters, such as ISPs, are free to implement it from the draft specification.

It’s not yet clear how much impact this will have on network performance in practice. It’s worth noting that some authoritative DNS servers already use GeoIP lookups for location-aware routing; the new protocol extension reportedly addresses some of the limitations of those earlier approaches.

This article originally appeared on Ars Technica, Wired’s sister site for in-depth technology news.

File Under: Backend, servers, Web Basics

Move Over, HTTP. Say ‘Hello World’ to SPDY

Google plans to introduce a new protocol for web transactions it says is more than 50 percent faster than HTTP.

A post on Google’s Chromium blog describes the new protocol, SPDY, pronounced “Speedy”:

SPDY is at its core an application-layer protocol for transporting content over the web. It is designed specifically for minimizing latency through features such as multiplexed streams, request prioritization and HTTP header compression.

The Chromium team, which is in charge of developing the Chrome browser and its associated technologies, reports that SPDY has been able to load web pages 55 percent faster than the HTTP protocol in lab conditions using simulated home-network connections. The team says its goal is to make SPDY eventually run twice as fast as HTTP.
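Of those features, header compression is the easiest to get a feel for. The toy sketch below just runs a block of made-up but typical request headers through Python’s zlib; SPDY’s real scheme goes further by keeping the compression context alive across a whole session, so repeated headers cost almost nothing after the first request:

```python
# Toy illustration only: compress a block of typical, repetitive HTTP request
# headers with zlib to see how much redundancy plain-text headers carry.
import zlib

headers = (
    b"Host: www.example.com\r\n"
    b"User-Agent: Mozilla/5.0 (X11; Linux x86_64) ExampleBrowser/1.0\r\n"
    b"Accept: text/html,application/xhtml+xml,application/xml;q=0.9\r\n"
    b"Accept-Language: en-US,en;q=0.8\r\n"
    b"Accept-Encoding: gzip,deflate\r\n"
    b"Cookie: session=abc123; theme=dark\r\n"
)

compressed = zlib.compress(headers)
print(f"{len(headers)} bytes of headers -> {len(compressed)} bytes compressed")
```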

HTTP is the language currently used by servers and browsers for the vast majority of common tasks on the web. When you request a web page or a file from a server, chances are your browser sends that request using HTTP. The server answers using HTTP, too. This is why “http” appears at the beginning of most web addresses.
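For the curious, here’s roughly what that conversation looks like on the wire, sketched with a raw socket and example.com standing in for a real site:

```python
# Minimal sketch: speak HTTP/1.1 by hand over a TCP socket to show the
# plain-text request/response exchange browsers and servers normally handle.
import socket

request = (
    b"GET / HTTP/1.1\r\n"
    b"Host: example.com\r\n"
    b"Connection: close\r\n"
    b"\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request)
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# Print just the status line and response headers.
print(response.decode("latin-1").split("\r\n\r\n", 1)[0])
```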

So, Google’s proposal would involve rewriting the web’s most commonly used and baked-in transaction method.

“HTTP has served the web incredibly well,” the post’s authors write. “We want to continue building on the web’s tradition of experimentation and optimization, to further support the evolution of websites and browsers.”

If such a massive shift were to ever take place (and nobody’s promising it will at this point), it would require a whole lot of buy-in from outside Google. To that end, the company is releasing its early-stage documentation and code for SPDY along with a call for feedback.

It may seem like a brash move, but the Chromium team seems to enjoy ruffling feathers. In September, the same group released the Chrome Frame plug-in for Internet Explorer, which essentially embeds Google’s browser inside Microsoft’s, giving IE the ability to render websites it wouldn’t normally be able to handle.

To contribute to the SPDY discussion, visit the Chromium Google Group.

Image: Warner Brothers

File Under: Programming, servers

Cool Tutorial: Django in the Real World

Django’s big sell is that it’s easy.

Compared to other open-source (or even proprietary) frameworks for building specialized, database-driven websites, Django makes the core tasks remarkably easy and fast to complete. A developer with working knowledge of databases and Python can get a site up and running in less than an hour.
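As a taste of what that looks like, here’s a minimal sketch of a database-driven page. The Entry model and the template path are hypothetical, and in a real project the pieces would live in an app’s models.py and views.py:

```python
# Hypothetical minimal Django app: a model plus a view that lists records.
from django.db import models
from django.shortcuts import render


class Entry(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    published = models.DateTimeField(auto_now_add=True)


def entry_list(request):
    # Fetch the newest entries and hand them to a template for rendering.
    entries = Entry.objects.order_by("-published")[:20]
    return render(request, "entries/list.html", {"entries": entries})
```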

But once your code is written, what comes next?

That question forms the basis of a talk given Tuesday by Jacob Kaplan-Moss, one of Django’s lead developers, at the OSCON open source convention in San Jose, California. His slides are now online (PDF, 1.7MB).

The talk concentrates on testing, staging and deployment, and it also offers recommendations for fine-tuning performance. Even if you don’t know Python and have never used Django, the presentation is still helpful, since it’s full of general advice about building and deploying web applications.
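On the testing side, Django ships with a test framework built on Python’s unittest, including a test client that exercises views without running a server. A bare-bones example, assuming a hypothetical /entries/ URL, might look like this:

```python
# Bare-bones Django test: the /entries/ URL is hypothetical and assumes a
# matching view is wired up in urls.py.
from django.test import TestCase


class EntryListTests(TestCase):
    def test_entry_list_renders(self):
        # The built-in test client pushes a request through the full
        # request/response cycle without starting a web server.
        response = self.client.get("/entries/")
        self.assertEqual(response.status_code, 200)
```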

Also, there’s a great series of tutorials hosted by the Django Project itself, and there’s a beginner’s Django tutorial right here on Webmonkey.

Illustration: Stefan Imhoff

File Under: Business, servers

Register.com Victimized by DDoS Attack

Register.com is having a rough week. The popular domain registrar and website hosting company has been having coughing fits for a few days. Service has been intermittent, with some users complaining of outages on the company’s web servers as well as its e-mail and data storage services.

Turns out the problem was the result of a DDoS attack.

Today, the company sent out this e-mail and posted the same note on its website:

For the past three days Register.com has been experiencing intermittent service disruptions as a result of a distributed denial of service (DDoS) attack – an intentionally malicious flooding of our systems from various points across the internet. We know the disruption of business this has caused our customers is unacceptable, and we are working round the clock to combat it. (For more information about DDoS attacks, please see http://en.wikipedia.org/wiki/Denial-of-service_attack.)

While we are still under attack, our counter-measures are currently minimizing the disruption to your services. We are using all available means to halt this criminal attack on our business and our customers’ business.

We are committed to updating you in as timely manner as possible, please continue to check back here for additional updates or go to www.twitter.com/Register_com.

Thank you for your patience.

File Under: Events, servers

PDC 2008: SensorMap Is Some Hot (and Cool) Tech

Los Angeles — Microsoft has already left its mark on software, consumer devices, gaming and the web. Next, the company is turning its attention to green technology, environmental research and the effects of climate change.

At its annual Professional Developers Conference, the company debuted some new distributed computing technology the Microsoft Research team has created to collect data on energy use, transportation efficiency and global climate change.

During Wednesday morning’s keynote, Microsoft Research’s Feng Zhao showed off the pocket-sized devices his MSR SenseWeb team created to monitor any number of environmental factors. Microsoft has deployed hundreds of these sensors, each no bigger than a cell phone, around downtown Seattle, Singapore and Taipei, in the mountains near Davos, Switzerland, and on glaciers near Juneau, Alaska.

Each sensor records information about wind speed, temperature, humidity and other metrics. The devices are customized for each location — the sensors in Davos are connected to high-tension power lines, and they measure shortwave radiation. The ones in Seattle have cameras and study traffic patterns. The sensors on the Alaskan glacier measure water discharge.

Anyone can go to the SensorMap website to dig in to the sensor data, view time-based graphs and generate custom reports. The site will remain a public source of data for tracking changes in the environment.

As Microsoft Research’s director Rick Rashid quipped, “We’re using the cloud to keep track of clouds.”

It seems purely altruistic, but there’s a practical reason for Microsoft’s investment — the company is using the same tech to monitor the data centers which will power its new Windows Azure cloud computing platform. As the company builds the physical infrastructure for Azure, it’s also been installing sensors and feeding data into what it calls the Data Center Genome project. The sensors measure energy usage, heat and power distribution and efficiency within the warehouse-sized server complexes.

Zhao showed what the Genome data looks like. He revealed that hundreds of sensors had been deployed throughout the Los Angeles Convention Center, which is hosting the conference. On the stage’s jumbo screen, he showed a satellite photo of the building with an overlaid grid marking the energy-sensor array.

He played back the collected data as an animated heat map, sped up over time, to show heat rising in parts of the building as attendees filed in to view the previous day’s Windows 7 keynote, then fanned out into the sessions and the expo hall afterwards.

It gave a detailed view of exactly how efficiently the air-conditioning system cooled the building, including huge blue spots where the HVAC vents were pumping chilled air into areas of the hall that had been completely empty for hours.
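For a rough sense of how time-stamped readings like these become heat-map frames, here’s a toy sketch; the readings, grid size and time window are all invented for illustration and have nothing to do with the actual Genome data:

```python
# Toy sketch: bin made-up (minute, zone_x, zone_y, temperature) readings into
# per-interval grids, the raw material for an animated heat map.
from collections import defaultdict

readings = [
    (0, 1, 1, 21.0), (0, 3, 2, 20.5),
    (5, 1, 1, 23.4), (5, 3, 2, 20.7),
    (10, 1, 1, 25.1), (10, 3, 2, 20.8),
]

FRAME_MINUTES = 5       # one heat-map frame per five-minute window
GRID_W, GRID_H = 4, 4   # hypothetical 4x4 grid of floor zones

frames = defaultdict(lambda: [[None] * GRID_W for _ in range(GRID_H)])
for minute, x, y, temp in readings:
    frames[minute // FRAME_MINUTES][y][x] = temp

for frame in sorted(frames):
    print(f"frame {frame}:")
    for row in frames[frame]:
        print("  " + " ".join(f"{t:5.1f}" if t is not None else "  ---" for t in row))
```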