Keeping track of the ever-evolving HTML5 and CSS 3 support in today’s web browsers can be an overwhelming task. Sure, you can use CSS animations to create some whiz-bang effects, but should you? Which browsers support them? What should you do about older browsers?
The first question can be answered by When Can I Use, which tracks browser support for HTML5 and CSS 3. You can then add tools like Modernizr to detect whether a feature is supported, so that you can gracefully degrade or provide an alternate solution for browsers that don’t support the features you’re using. But just what are those alternate solutions and polyfills? That’s what the new (somewhat poorly named) HTML5 Please site is designed to help with.
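Feature detection of this sort boils down to probing the browser itself rather than sniffing user-agent strings. As a rough sketch of the idea (the helper name and property list below are illustrative, not Modernizr’s actual API), detecting CSS transition support might look like this:

```javascript
// Minimal feature-detection sketch in the spirit of Modernizr.
// Probes a test element's style object for the transition property
// (including vendor prefixes) instead of sniffing the user agent.
function supportsCssTransitions(doc) {
  // Guard for non-browser environments (e.g. server-side code).
  if (!doc || typeof doc.createElement !== 'function') return false;
  var el = doc.createElement('div');
  var props = ['transition', 'WebkitTransition', 'MozTransition',
               'OTransition', 'msTransition'];
  for (var i = 0; i < props.length; i++) {
    if (props[i] in el.style) return true;
  }
  return false;
}

// Usage: branch to a fallback when the feature is missing.
// if (!supportsCssTransitions(document)) { /* fall back to JS animation */ }
```

Polyfill loaders build on exactly this kind of check: detect first, then load the fallback script only for browsers that need it.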
HTML5 Please offers a list of HTML5 elements and CSS 3 rules with an overview of browser support and any polyfills for each element listed (CSS 3 is the much more heavily documented of the two, which is why the HTML5 emphasis in the name is perhaps not the best choice). The creators of the site then go a step further and offer recommendations, “so you can decide if and how to put each of these features to use.”
The goal is to help you “use the new and shiny responsibly.”
HTML5 Please was created by Paul Irish, head of Google Chrome developer relations, and Divya Manian, Web Opener at Opera Software, along with many others. The creators point out that the recommendations offered on the site “represent the collective knowledge of developers who have been deep in the HTML5 trenches.”
The recommendations for HTML5 and CSS 3 features are divided into three groups — “use”, “use with caution” and “avoid”. The result is a site that makes it easy to figure out which new elements are safe to use (with polyfills) and which are still probably too new for mainstream work. If the misleading name bothers you, there’s also Browser Support, which offers similar data.
If you’d like to contribute to the project, head over to the GitHub repo.
The Google Webmaster blog has posted an overview of how to use the often overlooked HTML link elements rel="next" and rel="prev" to let Google’s spiders know that something on your site is part of a paginated series.
What’s a “paginated series”? As the Webmaster blog writes:
Throughout the web, a paginated series of content may take many shapes—it can be an article divided into several component pages, or a product category with items spread across several pages, or a forum thread divided into a sequence of URLs.
The first example, article pagination, is generally not a good idea, particularly if you’re trying to make a reader-friendly website. However, the other two use cases, for example a blog’s category archives or a long forum thread, make Google’s rel="next" and rel="prev" support much more useful.
If you’d like to add rel="next" and rel="prev" to your site, it’s not hard to do. All you need to do is add the link rel tag to the <head> section of your paginated content. For example, suppose your blog had paginated category archives. On page two of the archive the head tags would look something like this:
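A minimal sketch, with placeholder URLs standing in for the archive’s actual pages:

```html
<head>
  <title>Category Archive: Page 2</title>
  <!-- Point to the previous and next pages in the series -->
  <link rel="prev" href="http://example.com/category/page/1/">
  <link rel="next" href="http://example.com/category/page/3/">
</head>
```

The first page of the series would omit rel="prev", and the last page would omit rel="next", so Google’s spiders can tell where the sequence begins and ends.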
If you’re using WordPress you may have noticed that it outputs a number of link rel tags, including rel="start", rel="index" and others, all of which have been dropped from the HTML5 spec. WordPress plans to drop support for these extraneous rel tags when version 3.3 arrives. However, while most link rel tags have been purged from the spec, rel="next" and rel="prev" remain part of HTML5.
For more details, including how to handle the “view all” page option some websites use, head over to the Google Webmaster blog.
Adobe has released a preview version of a new HTML animation tool dubbed Edge. Together with Wallaby, Adobe’s Flash-to-HTML conversion app, Edge is part of Adobe’s push to remind the web that the company is more than just its much-maligned Flash plugin.
Edge has been released as a free public beta preview and is available for download through the Adobe Labs website.
HTML, especially some of the new elements in HTML5, combined with CSS 3’s animation syntax, offers web designers a way to create sophisticated animations without requiring users to have the Flash plugin installed. That’s a good thing, since no iOS user is going to have the Flash plugin.
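The CSS 3 animation syntax these tools generate is, at heart, just keyframes applied to ordinary elements. A minimal hand-written sketch (class and animation names are illustrative; vendor prefixes, which most 2011-era browsers still require, are omitted for brevity):

```css
/* Slide an element across the screen over two seconds,
   bouncing back and forth forever. */
@keyframes slide {
  from { transform: translateX(0); }
  to   { transform: translateX(300px); }
}

.animated-box {
  animation: slide 2s ease-in-out infinite alternate;
}
```

Tools like Edge and Hype produce markup, CSS and JavaScript along these lines from a Flash-style timeline interface, so designers never have to write the keyframes by hand.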
Like Hype (see our review) and other HTML animation apps out there, Edge looks and behaves much like Adobe’s Flash development environment with a timeline, keyframes and editing tools that will look familiar to Flash developers. If you know how to use Flash, you’ll be up to speed with Edge in no time.
The Edge interface should look familiar to anyone who has used Flash.
Despite Adobe’s marketing efforts, there’s almost nothing about Edge that is HTML5. Adobe is hardly alone in its misleading use of the HTML5 moniker. Both Hype and Sencha Animator claim to be “HTML5” animation apps and, like Adobe, neither generates much of anything that isn’t in the HTML4 spec.
Why go with div and CSS-based animations when there’s Canvas and SVG? Well, for one thing, this is a very early preview, and Adobe claims that eventually Edge will support canvas and SVG (in fact, Edge already has some support for importing SVG files). A Mozilla developer raised this question in the Adobe forums and Adobe’s Mark Anders chimed in to say that, “we seriously considered canvas, but current performance on mobile browsers (especially iOS) is very bad.”
Anders goes on to note that iOS 5 will remedy much of iOS’s canvas performance woes, and Adobe is clearly looking for developer feedback on where to go with Edge. If you’ve got strong feelings about where Edge should focus its efforts, head over to the forums and let Adobe know.
You wouldn’t write your username and password on a postcard and mail it for the world to see, so why are you doing it online? Every time you log in to Twitter, Facebook or any other service that uses a plain HTTP connection, that’s essentially what you’re doing.
There is a better way, the secure version of HTTP — HTTPS. That extra “S” in the URL means your connection is secure, and it’s much harder for anyone else to see what you’re doing. But if HTTPS is more secure, why doesn’t the entire web use it?
HTTPS has been around nearly as long as the web, but it’s primarily used by sites that handle money — your bank’s website or shopping carts that capture credit card data. Even many sites that do use HTTPS use it only for the portions of their websites that need it — like shopping carts or account pages.
Web security got a shot in the arm last year when the FireSheep network-sniffing tool made it easy for anyone to detect your login info over insecure networks — your local coffeeshop’s hotspot or public Wi-Fi at the library. That prompted a number of large sites to begin offering encrypted versions of their services on HTTPS connections.
Lately even sites like Twitter (which has almost entirely public data anyway) are nevertheless offering HTTPS connections. You might not mind anyone sniffing and reading your Twitter messages en route to the server, but most people don’t want someone also reading their username and password info. That’s why Twitter recently announced a new option to force HTTPS connections (note that Twitter’s HTTPS option only works with a desktop browser, not the mobile site, which still requires manually entering the HTTPS address).
So, with the web clearly moving toward more HTTPS connections, why not just make everything HTTPS?
That’s the question I put to Yves Lafon, one of the resident experts on HTTP(S) at the W3C. There are some practical issues most web developers are probably aware of, such as the high cost of secure certificates, but obviously that’s not as much of an issue for large web services with millions of dollars.
The real problem, according to Lafon, is that with HTTPS you lose the ability to cache. “Not really an issue when servers and clients are in the same region (meaning continent),” writes Lafon in an e-mail to Webmonkey, “but people in Australia (for example) love when something can be cached and served without a huge response time.”
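The caching Lafon is referring to is shared caching by intermediaries: over plain HTTP, a proxy between the user and a far-away server can store any response marked cacheable and serve it locally. A sketch of response headers that permit this (values are illustrative):

```http
HTTP/1.1 200 OK
Cache-Control: public, max-age=3600
Content-Type: text/html
```

Over HTTPS the connection is encrypted end to end, so an intermediary proxy never sees the response and can’t cache it; only the browser’s own private cache still applies, which is why distant users feel the difference most.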
Lafon also notes that there’s another small performance hit when using HTTPS, since “the SSL initial key exchange adds to the latency.” In other words, a purely security-focused, HTTPS-only web would, with today’s technology, be slower.
For sites that don’t have any reason to encrypt anything — in other words, you never log in, so there’s nothing to protect — the overhead and loss of caching that comes with HTTPS just doesn’t make sense. However, for big sites like Facebook, Google Apps or Twitter, many users might be willing to take the slight performance hit in exchange for a more secure connection. And the fact that more and more websites are adding support for HTTPS shows that users do value security over speed, so long as the speed difference is minimal.
Another problem with running an HTTPS site is the cost of operations. “Although servers are faster, and implementations of SSL more optimized, it still costs more than doing plain HTTP,” writes Lafon. While less of a concern for smaller sites with little traffic, the cost of HTTPS can add up if your site suddenly becomes popular.
Perhaps the main reason most of us are not using HTTPS to serve our websites is simply that it doesn’t work with virtual hosts. Virtual hosts, which are what most cheap web-hosting providers offer, allow the web host to serve multiple websites from the same physical server — hundreds of websites all with the same IP address. That works just fine with regular HTTP connections, but it doesn’t work at all with traditional HTTPS, because the server has to present the right certificate before it knows which site the browser is asking for.
There is a way to make virtual hosting and HTTPS work together — the Server Name Indication (SNI) extension to TLS — but Lafon notes that, so far, it’s only partially implemented. Of course that’s not an issue for big sites, which often have entire server farms behind them. But until that spec — or something similar — is widely supported, HTTPS isn’t going to work for small, virtually hosted websites.
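With SNI, the browser sends the hostname it wants during the TLS handshake, so the server can pick the matching certificate before encryption begins. A sketch of what that looks like in practice, assuming an nginx server (hostnames and certificate paths are placeholders):

```nginx
# Two HTTPS sites sharing one IP address, distinguished by SNI.
# The client names the host it wants in the TLS handshake, letting
# the server choose the right certificate for each site.
server {
    listen 443 ssl;
    server_name alpha.example.com;
    ssl_certificate     /etc/ssl/alpha.example.com.crt;
    ssl_certificate_key /etc/ssl/alpha.example.com.key;
}

server {
    listen 443 ssl;
    server_name beta.example.com;
    ssl_certificate     /etc/ssl/beta.example.com.crt;
    ssl_certificate_key /etc/ssl/beta.example.com.key;
}
```

The catch is on the client side: older browsers and operating systems that don’t send SNI get whichever certificate the server defaults to, producing security warnings — which is exactly the partial-implementation problem Lafon describes.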
In the end there is no real reason the whole web couldn’t use HTTPS. There are practical reasons why it isn’t happening today, but eventually the practical hurdles will fall away. Broadband speeds will improve, which will make caching less of a concern, and improved servers will be further optimized for secure connections.
In the web of the future the main concern won’t just be how fast a site loads, but how well it safeguards you and protects your data once it does load.
The World Wide Web Consortium (W3C), the standards body that oversees HTML, CSS and other web technologies, has released a rough draft specification for touch events on touch-screen devices. The spec is far from complete, but eventually it could give developers a set of standards for creating touch-based interfaces.
Thus far, touch-screen devices have primarily mimicked mouse behaviors. But the rise of multi-touch gestures and the larger screens available on tablets mean that touch screens of the future may offer design possibilities far beyond the mouse-based world that exists on today’s web. The goal of the W3C’s touch-based spec is to help define standard behaviors and events that developers can translate into touch-friendly interfaces.
Like much of the W3C’s work, the new touch-screen spec starts with existing specs, in this case Apple’s iOS touch event spec. The W3C’s draft adds several more properties, like X and Y radii for touch areas and a “force” property. The latter, while rather vague at the moment, could give developers a way to emulate mouse-rollover events. For example, a light touch could trigger a rollover, while a firmer touch clicks a link.
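The property names radiusX, radiusY and force come from the W3C draft; everything else in this sketch — the helper name, the fallbacks, the force threshold — is illustrative, not part of any API:

```javascript
// Sketch of reading the draft spec's extended touch properties.
// Takes a touch object shaped like the W3C draft's Touch interface.
function describeTouch(touch) {
  return {
    x: touch.pageX,
    y: touch.pageY,
    // radiusX/radiusY and force are new in the draft; no units are
    // specified yet, so treat them as relative values and fall back
    // to 0 in browsers that don't supply them.
    radiusX: touch.radiusX || 0,
    radiusY: touch.radiusY || 0,
    force: touch.force || 0
  };
}

// Usage in a browser that implements the draft events:
// document.addEventListener('touchstart', function (e) {
//   var t = describeTouch(e.touches[0]);
//   if (t.force > 0.5) { /* treat as a "hard" touch */ }
// });
```

The light-touch-versus-hard-touch rollover idea from the paragraph above would hinge entirely on that force value, which is why the missing units matter.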
Mobile platform consultant Peter-Paul Koch calls out a few minor problems and undecided issues — for example, no units are specified for the radius or force properties — but overall says the spec is a step in the right direction.
The Touch Events Specification is a long way from done; it doesn’t even have a real URL on the W3C site yet. And, other than the events cloned from Apple, the spec is not supported anywhere in the wild. Still, touch screens clearly need an expanded set of standards to go along with desktop standards and it’s nice to see the W3C stepping up to the plate.