Casual users of email are only mildly irritated, and even occasionally amused, by spam. “Just click delete!” they say. “One keypress and it’s gone! What could be easier?” The more of it you see, though, and the more wear your Delete key gets, the less tolerant you become. It’s like crazy people coming up to you on the street, perhaps. If you only ever see one, you laugh about his antics forever. If you see one a day, you start to think, “What a shame! Can’t something be done for these poor, poor people?” And if, everywhere you go, you are surrounded by crazy people raving in your ears and blocking your progress, it becomes impossible to get anything done. At that point, you’re basically working in Hollywood.
Spam, for the most part, is not profitable for the advertisers who pay to have it sent. It has an incredibly low success rate, and only seems like a good idea because it’s so cheap to reach millions of inboxes. The only ones who make a profit are the middlemen: the spamhouses that take money from hapless breast-enlargement-pill manufacturers in exchange for almost-worthless bulk mailings. They use shifty techniques like forged email headers, automated freemail accounts, and bulk-mailing software.
When you start getting a lot of spam, or when you manage email for a number of people, it becomes crucial to sort the noise out of the signal. Because sorting by hand is tedious and unfeasible on even a moderate scale, the key is, of course, finding a way that a computer can distinguish spam from non-spam. A number of interesting solutions to this problem have been attempted.
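One of the most successful of those solutions is statistical filtering: train on a pile of known spam and known legitimate mail, then score new messages by how their words were distributed in each pile. The sketch below is a minimal naive Bayes classifier over hypothetical toy data, not a production filter; real filters train on thousands of messages and do smarter tokenization.

```python
import math
from collections import Counter

# Hypothetical training data: (message, is_spam) pairs.
training = [
    ("cheap pills enlarge now", True),
    ("limited offer cheap meds", True),
    ("meeting notes attached", False),
    ("lunch tomorrow at noon", False),
]

spam_words, ham_words = Counter(), Counter()
spam_count = ham_count = 0
for text, is_spam in training:
    if is_spam:
        spam_words.update(text.split())
        spam_count += 1
    else:
        ham_words.update(text.split())
        ham_count += 1

def spam_probability(text):
    """Naive Bayes in log space, with add-one smoothing."""
    vocab = set(spam_words) | set(ham_words)
    log_spam = math.log(spam_count / (spam_count + ham_count))
    log_ham = math.log(ham_count / (spam_count + ham_count))
    for word in text.split():
        log_spam += math.log((spam_words[word] + 1) /
                             (sum(spam_words.values()) + len(vocab)))
        log_ham += math.log((ham_words[word] + 1) /
                            (sum(ham_words.values()) + len(vocab)))
    # Convert the log odds back into a probability.
    return 1 / (1 + math.exp(log_ham - log_spam))

print(spam_probability("cheap pills offer"))  # likely spam (> 0.5)
print(spam_probability("meeting at noon"))    # likely legitimate (< 0.5)
```

The appeal of this approach is that the filter adapts to *your* mail: words that spammers use against you specifically end up weighted accordingly.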
In this article, it is assumed that you are running a mail server like the one described here: Set Up IMAP on Your Mail Server. Many of the techniques described herein will still be applicable on any Unix system, even if it’s just a mail client machine, and the principles apply to any email-handling process.
File permissions on Unix and Linux are one of the most ubiquitous stumbling blocks for even regular users of those operating systems. The intricate structure of which users on a system are allowed to do what is one of the foundations of Unix, providing security and interoperability, but at times it can make working with the system a pain. Here’s a look at how permissions work and how to work with them.
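The core idea is that every file carries a mode: three octal digits granting read (4), write (2), and execute (1) to the file's owner, its group, and everyone else. A quick way to see this in action is from Python's standard library; the sketch below creates a scratch file and sets it to mode 640 (it assumes a POSIX system, where `chmod` is fully honored).

```python
import os
import stat
import tempfile

# Create a scratch file and set its permissions to rw-r----- (octal 640).
fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o640)

mode = os.stat(path).st_mode
perms = stat.filemode(mode)  # the symbolic form, e.g. '-rw-r-----'
print(perms, oct(stat.S_IMODE(mode)))

# Each octal digit is read(4) + write(2) + execute(1),
# for user, group, and other respectively: 6 = rw, 4 = r, 0 = nothing.
assert mode & stat.S_IRUSR          # the owner can read
assert not (mode & stat.S_IWOTH)    # others cannot write

os.remove(path)
```

The same symbolic string is what `ls -l` prints in its first column, so once you can decode one you can decode the other.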
Go outside and pop the hood of your car. You should see a thick metal barrier at the back of the engine compartment. This is called the firewall. To see how it works, poke a small hole in the fuel line so that a tiny amount of gasoline starts dripping on the engine block. Now close the hood, start the car, and head out on the highway. (Some of you may choose to save life and limb, and time, by merely visualizing this exercise.)
If you have positioned the puncture correctly, within a few minutes the escaped gasoline should ignite and cause a small engine fire. At this point you may see smoke emerge from the engine compartment. Continue driving. You should be able to proceed a considerable distance before the heat becomes uncomfortable and toxic fumes and flames start to enter the passenger compartment.
The reason you can drive so far with a flaming engine is because the firewall is a highly effective barrier between the engine compartment and the passenger compartment. If your car had no firewall, the engine fire would have already melted the dashboard electronics and plastic, destroyed the upholstery, and toasted you to a crisp.
Now. Pull over and very carefully extinguish the fire.
A similar principle can be applied to networked computers. Picture your machine as the cozy, tricked-out interior of your automobile, and the outside world as the dirty but powerful engine that makes it go. It won’t do to have the vulnerable components of your network exposed to the engine’s maliciously raging heat — it’s best to install a firewall.
Let us abandon our weakening metaphor here before it carries us into a ping-pong tournament without a paddle. A firewall, in the networking sense, is a machine that straddles the interface between a private network and the Internet at large, and follows predetermined rules for allowing certain traffic to pass, while blocking traffic that’s unwanted.
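The "predetermined rules" part is the heart of it: a firewall walks an ordered rule table and applies the first rule that matches a packet, falling back to a default policy (usually deny) when nothing matches. Here is a toy sketch of that first-match logic with a hypothetical rule table; real packet filters match on far more fields (source and destination addresses, connection state, interfaces) than this shows.

```python
# Hypothetical rule table: first matching rule wins, default deny.
RULES = [
    # (direction, protocol, port, action)
    ("in",  "tcp", 22,   "allow"),   # SSH from anywhere
    ("in",  "tcp", 80,   "allow"),   # web traffic
    ("in",  "tcp", 25,   "deny"),    # no inbound SMTP
    ("out", "any", None, "allow"),   # trust all outbound traffic
]

def filter_packet(direction, protocol, port):
    """Return 'allow' or 'deny' using first-match semantics."""
    for r_dir, r_proto, r_port, action in RULES:
        if r_dir != direction:
            continue
        if r_proto not in ("any", protocol):
            continue
        if r_port is not None and r_port != port:
            continue
        return action
    return "deny"  # anything not explicitly allowed is blocked

print(filter_packet("in", "tcp", 80))   # allow
print(filter_packet("in", "udp", 53))   # deny: no rule matches
```

Because rules are checked in order, putting a broad rule above a narrow one silently shadows the narrow one; this is the classic mistake when writing real firewall configurations.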
So, how to get yourself one of those disaster-averting firewalls? You can start by reading on.
Think that turning off cookies and turning on private browsing makes you invisible on the web? Think again.
The Electronic Frontier Foundation (EFF) has launched a new web app dubbed Panopticlick that reveals just how scarily easy it is to identify you out of millions of web users.
The problem is your digital fingerprint. Whenever you visit a site, your browser and any plug-ins you have installed can leak data. Some of it isn’t very personal, like your user agent string. Some of it is more personally revealing, like which fonts you have installed. But what if you put it all together? Would the results make you identifiable?
As the EFF says, “this information can create a kind of fingerprint — a signature that could be used to identify you and your computer.”
The EFF’s test suite highlights what most of us probably already suspect — we’re readily identifiable on the web. We ran the test on a Mac using Firefox, Safari and Google Chrome, all of which leaked enough data to make us identifiable according to the EFF’s privacy explanations.
The purpose of Panopticlick is to show you how much you have in common with other browsers. The more your configuration mirrors everyone else’s, the harder it would be to identify you. The irony is, the nerdier you are — using a unique OS, a less common browser, customizing your browser with plug-ins and other power-user habits — the more identifiable you are.
For example, say you’re running Firefox on Ubuntu with the Gnash plug-in instead of Flash — way to stick it to the man — but you’re also showing up with a rare combination of browser, OS, installed fonts, plug-ins and more, which together form a unique online fingerprint.
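The math behind this is simple: a trait shared by a fraction p of the population carries -log2(p) bits of identifying information, and (roughly assuming the traits are independent) the bits add up. The sketch below uses made-up frequency figures purely for illustration; real numbers come from measurements like the EFF's.

```python
import hashlib
import math

def fingerprint(attributes):
    """Hash a browser's observable attributes into one identifier."""
    blob = "|".join(f"{k}={v}" for k, v in sorted(attributes.items()))
    return hashlib.sha256(blob.encode()).hexdigest()[:16]

# Hypothetical attributes and the fraction of users sharing each value.
browser = {
    "user_agent": "Firefox on Ubuntu",   # say, ~2% of visitors
    "plugins": "Gnash",                  # ~0.1%
    "fonts": "stock plus 3 custom",      # ~1%
}
frequencies = [0.02, 0.001, 0.01]

# Self-information in bits: -log2(p) per trait, summed across traits.
bits = sum(-math.log2(p) for p in frequencies)
print(fingerprint(browser))
print(f"{bits:.1f} bits of identifying information")
```

With these illustrative numbers, three modest traits already yield about 22 bits — enough, in principle, to single out one user among roughly four million.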
So what can you do to make yourself less identifiable? Well, by disabling cookies, the Flash plug-in, the Java plug-in and most of our extensions we were able to blend in better. Actually, the fact that we didn’t have Java or Flash turned on made us more identifiable in those categories, but it also denied the test access to our installed fonts and other bits of data, so overall, less identifiable.
Obviously that approach has a downside — without Flash there’s not much in the way of online video, a lack of cookies will cause issues with logins, and without Java, you won’t be able to crash your browser or cause it to get hung up for hours.
In short, the disabling method isn’t much fun. Strange though it may seem, the best way to lose the unique online fingerprint is to blend in with the herd. As the EFF points out, mobile browsers are hardest to identify since there are few customization options and, for the most part, one version of Mobile Safari looks just like another.
By the same token, if you want to blend in, stick with stock system fonts, run Windows XP, use Firefox with no add-ons and turn off cookies. You’ll be much harder to identify.
We should point out that, no matter how well you blend in on the fingerprint test, you are of course still identifiable by your ISP. Advertisers and websites generally can’t access the information your ISP has on you, but of course governments — with the cooperation of your ISP — always can. So don’t think that no one knows who you are just because you’ve eliminated your fingerprints.
Mozilla’s Aza Raskin has proposed a standardized set of privacy icons for websites. Raskin likens the idea to how Firefox (and other browsers) currently handle phishing attack warnings, using visual icons and simple language.
For the active social web user, keeping track of which bits of your data are public and which are private on different sites is a chore. Some websites share your photos, status updates, your list of friends, who you’re following and other data on the open web by default. Some share nothing. The rest are somewhere in the middle.
Part of the problem is the privacy policies themselves. They are complex, mind-numbingly long legal documents. We routinely ignore them, breezing past them by clicking “I agree.” Dangerous behavior, indeed.
Raskin and his supporters have borrowed some ideas from the way Creative Commons licensing works, and the way licensing options are denoted on content sites. Originally, the idea was to create a Creative Commons model for privacy policies — that is, a common, readable, reusable set of policies much like the Creative Commons licenses for content — but that plan was abandoned because policies differ too much from site to site. There’s no easy boilerplate for privacy like there is for content publishing.
Raskin is very clear that, so far, this is a work in progress. There are, as of yet, no icons designed, and the details of how they would be implemented remain vague. Nor has Mozilla made any official announcement that it would support such a system.
However, recent events have proven there’s clearly a need for a standardized, front-and-center privacy notification system. In December, Facebook began a shift towards looser default privacy settings that encourage users to share more of their data. Just last week, Facebook CEO Mark Zuckerberg, in an interview with TechCrunch’s Mike Arrington, noted that people’s notions of privacy on the social web evolve often, and that social web sites will have to continually update their own privacy policies to reflect those changes. As a result, Facebook’s new defaults will offer less privacy. Zuckerberg’s words set off a fierce debate on the topic, with Marshall Kirkpatrick of ReadWriteWeb presenting the clearest counterargument that changing social mores should not lead to looser default privacy settings on the social web.
We’ve often said the browser is the most logical place to display identity and privacy information to the user. As people surf from site to site, they should be able to see, at a glance, what level of privacy they’re currently working with. Raskin’s model sounds like a pretty good plan, though implementing it might be a bit more difficult.
One obvious problem: What’s to stop a site from using icons that are totally different from what the written policy actually says? Raskin and crew want the icons to supersede the written policy, so in that scenario the icons would trump the text and the user would retain their rights. Whether or not an icon can legally trump a written document is something Raskin doesn’t directly address, and, as one commenter points out, the situation gets much more complex when you start considering international legal systems.
If you’ve got ideas or would like to participate in the discussion, head over to Raskin’s blog or sign up for the upcoming privacy workshop hosted at Mozilla on Jan. 27 (see Aza’s post for full details).