
File Under: servers, Web Basics

What to Do When Your Website Is Hacked

All it takes is one open lock. Photo: David Bleasdale/Flickr

One drawback to the otherwise awesome sauce of the do-it-yourself web is that you’re also responsible for fixing it yourself when something goes wrong — call it the FIY corollary to the DIY web.

For example, what happens if the bad guys attack your website?

In some cases your web hosting service may be able to help, but most of the time undoing the damage is your responsibility. Websites are attacked every day; well-tested though they may be, frameworks and publishing tools inevitably have security flaws, and eventually you may be bitten by one. Or the tools might not be the problem at all; the culprit can be something far less obvious. Developer Martin Sutherland’s server was recently hacked because one file on a shared server had the wrong file permissions.

Sutherland’s write-up of how he discovered and fixed the attack on his server is well worth a read and makes an excellent primer on how to handle being hacked. While Sutherland’s situation may be specific to the attack that his site suffered, his diagnostic steps make an excellent starting point even if you use a completely different publishing system. (Sutherland uses Movable Type.)

Sutherland’s strategy (once he realizes he’s been hacked) is to scan through all the files on his server to see which have recently been changed. He then filters that list, ignoring files that should have changed (log files, for example) and narrowing it down to suspicious changes.
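If you want to try a similar scan on a typical Linux server, a minimal sketch might look like the following (the web root and the seven-day window are assumptions for illustration, not anything Sutherland prescribes):

    # Hypothetical scan: list files under the web root changed in the last
    # seven days, excluding log and cache directories, newest first.
    find /var/www -type f -mtime -7 \
        ! -path '*/logs/*' ! -path '*/cache/*' \
        -printf '%TY-%Tm-%Td %TH:%TM %p\n' | sort -r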

How much this approach will tell you if your own site is hacked depends on what the attacker has done and what your server setup looks like, but it should help you get moving in the right direction. Read through the full post for the specific command-line tools Sutherland uses to inspect his files. If you’re not comfortable on the command line, or don’t have shell access to your server, you may be able to use something like Exploit Scanner (if you’re using WordPress) or a similar tool for your publishing system.

Once you know what happened and which files were affected, it’s just a matter of rolling back the changes using your backups. You do have backups, right? As Sutherland writes, “it’s not a matter of if something goes wrong, it’s a matter of when.” Remember: backups are only useful if you have them before you need them.

We sincerely hope your site is never hacked; sadly, it happens all too frequently. As Sutherland’s write-up illustrates, one of the keys to recovering quickly is having good backups. Do yourself a favor and spend a few minutes creating an automated backup system before something goes wrong. Now excuse me while I go make sure my pg_dump cron script is running properly.
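For the record, an automated backup along those lines can be as simple as a cron job. Here’s a hypothetical crontab entry (the database name, output path and schedule are all assumptions):

    # Hypothetical crontab entry: dump the "blog" PostgreSQL database every
    # night at 2:30 a.m. into a dated, compressed file.
    30 2 * * * pg_dump blog | gzip > /var/backups/blog-$(date +\%F).sql.gz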

File Under: servers

Apache 2.4: A Major Update for the Web’s Most Popular Server

The Apache Software Foundation has announced the release of version 2.4 of its namesake Apache HTTP Server. The new version is the first major release for Apache since 2005. During that time, several new servers, including the increasingly popular Nginx server, have emerged to challenge Apache’s dominance. However, while Nginx may have surpassed Microsoft IIS to become the second most used server on the web, it still trails well behind Apache, which powers nearly 400 million websites.

To upgrade your servers to the latest release, head over to the Apache HTTP Server Project and download a copy of Apache 2.4.

Much of the focus in Apache 2.4 is on performance. The Apache Software Foundation blog touts reduced memory usage and better concurrency among the improvements in this release. Apache 2.4 also offers better support for asynchronous read/write operations and much more powerful Multi-Processing Module (MPM) support. Multiple MPMs can now be built as loadable modules at compile time and the MPM of choice can be configured at run time, making Apache 2.4 considerably more flexible than its predecessors.
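In practice, that means the MPM becomes a configuration decision rather than a build-time one. A hypothetical httpd.conf excerpt (module paths and worker counts vary by platform and workload):

    # Load the event MPM as a module instead of compiling it in statically.
    LoadModule mpm_event_module modules/mod_mpm_event.so

    <IfModule mpm_event_module>
        ThreadsPerChild    25
        MaxRequestWorkers 400
    </IfModule>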

There have also been numerous updates for Apache’s various modules, as well as a host of new modules that are available with this release — including mod_proxy_fcgi, a FastCGI protocol backend for mod_proxy.
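As a rough illustration, mod_proxy_fcgi lets Apache pass requests to an external FastCGI backend such as PHP-FPM; the backend address and document root below are assumptions:

    # Hypothetical: hand all .php requests to a FastCGI backend on port 9000.
    LoadModule proxy_module      modules/mod_proxy.so
    LoadModule proxy_fcgi_module modules/mod_proxy_fcgi.so

    ProxyPassMatch ^/(.*\.php)$ fcgi://127.0.0.1:9000/var/www/$1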

For a complete rundown of everything that’s new in Apache 2.4, be sure to check out the documentation.

File Under: Security, servers, Web Basics

Google, Microsoft, Yahoo, PayPal Go After Phishers With New E-Mail Authentication Effort

Major e-mail providers, including Google, Microsoft, and Yahoo, are teaming up with PayPal, Facebook, LinkedIn, and others to implement a new system for authenticating e-mail senders, in an effort to curb fraudulent spam and phishing messages.

The protocol that powers e-mail, SMTP, dates back to a more trusting era; a time when the only people who sent you e-mails were people you wanted to send you e-mails. SMTP servers are willing to accept pretty much any e-mail destined for a mailbox they know about (which is, admittedly, an improvement on how things used to be, when they’d accept e-mails even for mailboxes they didn’t know about), a fact which spammers and phishers exploit daily.

Making any fundamental changes to SMTP itself is nigh impossible; there are too many e-mail servers, and they all have to interoperate with each other, an insurmountable hurdle for any major change. So what we’re left with is all manner of additional systems that are designed to give SMTP servers a bit more information about the person sending the e-mail, so that they can judge whether or not they really want to accept the message.

The two main systems in use today are called SPF (Sender Policy Framework) and DKIM (DomainKeys Identified Mail). Both systems use DNS to publish extra information about the e-mail sender’s domain. SPF tells the receiving server which outgoing servers are allowed to send mail for a given domain; if the receiving server receives mail from a server not on the list, it should assume that the mail is fraudulent. DKIM adds a cryptographic signature to e-mail messages, along with an indication of which DNS entry to examine. The receiving server can then look up the DNS entry and use the data it finds to verify the signature.
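Both pieces of information live in ordinary DNS TXT records. A hypothetical zone-file sketch for example.com (the IP range, selector name and public key are invented for illustration):

    ; SPF: only servers in 192.0.2.0/24 may send mail for example.com.
    example.com.                 IN TXT "v=spf1 ip4:192.0.2.0/24 -all"

    ; DKIM: public key for signatures made with the selector "mail".
    mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSq..."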

These systems are not perfect; though both are widely used, neither has been adopted universally. Some legitimate mail will arrive without SPF or DKIM DNS entries, so mail servers can’t depend on their presence. Common legitimate operations can also break them: many mailing list programs add footers to messages, which invalidates the DKIM signature, and forwarding an e-mail causes the SPF check to fail. As a result, failing one or the other test is not a good reason to reject a message.

These systems also make it hard to diagnose misconfigurations; receiving servers will typically just swallow or ignore mails sent by systems with bad SPF or DKIM configurations.

The large group of companies, which includes the biggest webmail providers and some of the most common corporate targets of phishing attempts, is proposing a new scheme, DMARC (“Domain-based Message Authentication, Reporting & Conformance”), in an attempt to tackle these problems. DMARC fills some of the gaps in SPF and DKIM, making them more trustworthy.

DMARC's position within the mail receipt process (illustration by dmarc.org)

DMARC is based on work done by PayPal in conjunction with Yahoo, and later extended to Gmail. This initial work resulted in a substantial reduction in the number of PayPal phishing attempts seen by users of those mail providers, and DMARC is an attempt to extend that to more organizations. As with SPF and DKIM, DMARC depends on storing extra information about the sender in DNS. This information tells receiving mail servers how to handle messages that fail the SPF or DKIM tests, and how critical the two tests are. The sender can tell recipient servers to reject messages that fail SPF and DKIM outright, to quarantine them somehow (for example, putting them into a spam folder), or to accept the mail normally and send a report of the failure back to the sender.

In turn, this makes SPF and DKIM much safer for organizations to deploy. They can start with the “notification” mode, confident that no mail will be lost if they have made a mistake, and use the information learned to repair any errors. DMARC also allows recipients to know if a domain should be using SPF and DKIM in the first place.
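A hypothetical DMARC record for that cautious starting mode might look like this (the reporting address is made up):

    ; DMARC: take no action on failing messages yet; just send aggregate
    ; reports to the address given in rua.
    _dmarc.example.com. IN TXT "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"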

Without a global rollout, DMARC can’t solve all phishing and spam problems. The companies that have signed up to support the project include major recipients of phishing attempts—the various free e-mail providers—and sites against which phishing attacks are regularly made. Mail sent between the organizations will be verified using the SPF/DKIM/DMARC trifecta. Anyone using the major mail providers and the major services should see a substantial reduction in fraudulent mail. Senders and recipients who want to receive similar protection can implement DMARC themselves by following the specification that the DMARC group is working on.

Given the constraints imposed by SMTP, we may never get an e-mail system that is entirely free of malicious and annoying junk. SMTP e-mail was never designed to be trustworthy, and systems like SPF and DKIM are constrained by the inadequacies of SMTP’s design. Nonetheless, mechanisms such as DMARC can still make a big difference, and with the support of these major companies, e-mail might get that little bit safer.

This article originally appeared on Ars Technica, Wired’s sister site for in-depth technology news.


Protest SOPA: Black Out Your Website the Google-Friendly Way

On Wednesday, Jan. 18, Reddit, Wikipedia and many other websites will black out their content in protest of the Stop Online Piracy Act (SOPA), the Protect Intellectual Property Act (PIPA) and the Online Protection and Enforcement of Digital Trade Act (OPEN). Organizers of the SOPA Strike are asking interested sites to black out their content for 12 hours and display a message encouraging users to contact their congressional representatives and urge them to oppose the legislation.

Although it was rumored that Google might join in the protest, that does not appear to be the case. The search giant does, however, have some advice for anyone who would like to black out their site and ensure that doing so doesn’t harm their Google search rank or indexed content. [Update: It appears Google will be participating in some fashion. A Google spokesperson tells Ars Technica that “tomorrow [Google] will be joining many other tech companies to highlight this issue on our U.S. home page.” WordPress and Scribd will also be participating. You can read the full story on Ars Technica.]

Writing on Google+, Google’s Pierre Far offers some practical tips in a post entitled, “Website Outages and Blackouts the Right Way.” The advice mirrors Google’s previous best practices for planned downtime, but warrants a closer look from anyone thinking of taking their site offline to protest the SOPA/PIPA/OPEN legislation.

Far’s main advice is to make sure that any URLs participating in the blackout return an HTTP 503 status. The 503 status tells Google’s crawlers that your site is temporarily unavailable; that way, the blackout won’t affect your Google ranking, nor will any protest content be indexed as part of your site. If you use Google’s Webmaster Tools you will see crawler errors, but that’s exactly what you want: your site is deliberately unavailable, so an error is the correct response.

Implementing a 503 response isn’t too difficult, though the details will vary according to which technologies power your site. If you’re using WordPress, there’s a SOPA Blackout plugin available that can handle the blackout for you. It’s also pretty easy to set up a 503 redirect at the server level. If you use Apache, make sure the Rewrite module is enabled, then add something like the following to your root .htaccess file (the RewriteCond excludes the error page itself so the rule doesn’t loop):

    RewriteCond %{REQUEST_URI} !^/path/to/file/myerror503page\.php$
    RewriteRule .* /path/to/file/myerror503page.php [L]

That will rewrite every request on your site (apart from the error page itself) to the 503 page. Now just make sure that myerror503page.php actually returns a 503 status. Assuming you’re using PHP, something like this will do the trick:

    header('HTTP/1.1 503 Service Temporarily Unavailable');
    // Tell crawlers when to retry; Retry-After takes an HTTP date or seconds.
    header('Retry-After: Thu, 19 Jan 2012 00:00:00 GMT');

For more details, be sure to read up on the HTTP 503 status code and see the rest of Far’s Google+ post to learn how to handle robots.txt, along with a few things you should definitely not do (like changing your robots.txt file to block Google for the day, which could keep Google away for far more than a day). Even if you aren’t planning to participate in the anti-SOPA blackout tomorrow, Far’s advice holds true any time you need to take some or all of your site offline, whether for routine server maintenance, a software upgrade or a political protest.

[Image by SOPAStrike.com]

File Under: servers

Open Source Upstart Nginx Surpasses Microsoft Server

For the first time since it sprang onto the web in 2004, Nginx (pronounced “engine-ex”), the lightweight open source web server that could, has overtaken Microsoft IIS to become the second most used server on the web.

Nginx currently powers some 12.18 percent of the web’s active sites, including big names like Facebook and WordPress, which puts it just barely ahead of Microsoft IIS at 12.14 percent. While Apache is still far ahead of both with over 57 percent of the market, of the top three only Nginx continues to grow in market share.

These market share numbers come from Netcraft, which has been tracking data like server type and server operating system since 1995. It’s worth noting that Nginx is only ahead in the “active sites” survey, which throws out results like parked domains and registrar placeholder pages (full details of Netcraft’s methodology can be had here).

Unlike Apache, which is robust and powerful but comparatively resource-hungry, Nginx was designed to be fast and lightweight. The server can handle a very large number of simultaneous connections without suffering a performance hit or requiring additional hardware.

The combination of light weight and speed has made Nginx the darling of the web in recent years, with everyone from Facebook to Dropbox relying on it in one form or another. Indeed, another part of Nginx’s success lies in its versatility. The server can be used for everything from a traditional high-performance web server to a load balancer, a caching engine, a mail proxy or an HTTP streaming server.
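To give a flavor of that versatility, here is a hypothetical nginx.conf server block that serves static files directly and proxies everything else to an application backend (the domain, paths and port are invented):

    # Serve static assets straight from disk; hand everything else to an
    # app server listening on port 8080.
    server {
        listen 80;
        server_name example.com;

        location /static/ {
            root /var/www;
        }

        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    }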

Having recently moved several primarily static websites to Nginx, I can also vouch for another of its strengths: outstanding documentation.

If you’d like to give Nginx a try, head on over to the official site and download a copy today.