Archive for the ‘Web Standards’ Category

File Under: Interview, Web Standards

Interview | Lea Verou on Why Web Standards Matter and How You Can Help

Image: Lea Verou

This is the first in an upcoming series of interviews with web developers. We’re excited to start with Lea Verou, a front-end web developer from Greece who has not only made lots of cool stuff we’ve linked to, but has also recently joined the W3C to help work on web standards.

Webmonkey: You joined the W3C Developer Relations last year, which is a relatively new thing at the W3C, actively reaching out to web designers and developers. What does the day to day work of a W3C Developer Relations person look like?

Lea Verou: You’re absolutely right, Scott; it’s a new initiative that Doug Schepers started last year. W3C had been interested in outreach for years, but there was no official Developer Relations activity before.

My role is pretty mixed: I help organize W3Conf, our conference for web designers and developers; I help develop and promote WebPlatform.org; I present at conferences around the world; I write articles about web standards in industry media; and I do many other things.

WM: You were interested in web standards very early on. What was it that made standards important to you?

Verou: When I started developing for the web, IE6 was the most widely used browser. As you probably remember, making websites that worked cross-browser was way harder back then than it is today. We had to rely on browser detection, ugly hacks and whatnot. I wished browsers could just agree on some common ground and implement that. A couple of years later, I discovered that this is actually a thing and it’s called web standards. Since then, I’ve made it one of my personal goals to raise awareness among fellow web developers, to get browsers to implement standards and to advance the standards themselves for the common good.

WM: Of course many of those ugly hacks remain, especially for developers still wrestling with IE7 (IE6 seems to have been put to rest for the most part). What’s your take on supporting older browsers? Is that an important thing to do or is it time we leave them behind because they’re holding back the web?

Verou: I’m a big supporter of progressive enhancement and graceful degradation. Websites should be usable, if possible, in older browsers, but they don’t need to have all the bling. However, graceful degradation is not black & white. Everyone seems to have a different definition of what is “graceful” and what is “enhancement”.

Is a solid color an acceptable fallback for a pattern? What if your lightbox has no overlay? What if your stripes become a solid color? What if your transitions are not there? What if your code has no syntax highlighting? I tend to lean towards being more permissive rather than chasing perfection in older browsers, especially on websites targeted at a more technical audience. I will provide fallbacks, but I will not go out of my way and use proprietary IE7-specific stuff to make something look good there. With a global market share below 0.5%, it’s just not worth it.
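
To make that concrete, here’s a minimal sketch of the kind of layered fallback Verou is describing (the class name and colors are ours, purely for illustration): browsers that don’t understand the striped gradient simply ignore it and keep the solid color declared first.

<style>
  .banner {
    /* Fallback: older browsers keep the plain color... */
    background-color: #2a6496;
    /* ...while newer ones paint the stripes on top of it. */
    background-image: repeating-linear-gradient(
      45deg,
      #2a6496, #2a6496 10px,
      #1d4a70 10px, #1d4a70 20px
    );
  }
</style>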

WM: A lot of the developers I know have a kind of love-hate relationship with the W3C. But you wrote on your site that working for the W3C was “a dream of mine ever since I learned what a web standard is.” Can you talk a little bit about what makes the W3C great and why you wanted to work there?

Verou: Like I said before, promoting web standards was something I was already doing for years anyway. I felt that working for W3C itself would enable me to do it more systematically and have a bigger impact. For instance, one of my main tasks has been helping organize W3Conf — happening this February in San Francisco — which is aimed at showing web professionals that web standards are not some utopian ideal but practical to adhere to in everyday work, as well as educate them about recent developments that they can use today. Connecting those two worlds is a fun challenge!

WM: Standards do at times feel less than practical, especially because they’ve been changing a lot lately — e.g. WebSockets got a rewrite after it had already shipped in multiple browsers, ditto CSS Flexbox. So there are these seemingly rapid changes, and on the other hand it seems like we’ve been waiting forever for other things. I know the W3C recently launched WebPlatform.org for developers, but what other resources would you suggest for web professionals who’d like to educate themselves about web standards and, more importantly, stay up to date?

Verou: W3C is well aware of the fact that sometimes we can be slow, and we are trying to speed things up to meet developer needs. This is why we are encouraging implementors (like browser vendors) to implement earlier in the process, so we can get feedback from developers, and why we’re putting more emphasis on testing, which is going to improve interoperability.

All this means that experimental features will ship while they still need work. Having shipped in browsers is not an indication of stability. Browsers often ship experimental features so that developers can play with them and give feedback. This doesn’t mean the feature is frozen — quite the opposite, it means we need feedback to make it better.
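
As a rough illustration of what “experimental” looks like in day-to-day CSS (the property here is just a common example from this era, not anything Verou prescribes), prefixed forms target the in-progress implementations, and the unprefixed, standard form comes last so it wins once the final syntax ships:

<style>
  .card {
    -webkit-transform: rotate(3deg); /* experimental, prefixed implementations */
    -moz-transform: rotate(3deg);
    -ms-transform: rotate(3deg);
    transform: rotate(3deg);         /* the standard form takes over when supported */
  }
</style>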

Verou: Regarding resources, I know I’m weird, but I often read about new features in the specs themselves. I search for a feature, come across the spec, take a look at what else is there and then see if it’s implemented anywhere. I also often find good information on Twitter and by looking at other people’s code. There are also a number of websites with good information on web standards.

WM: Now that you’re actually working for the W3C has your perspective on standards changed? Is there anything that looks different from the other side of the fence?

Verou: I was involved in standards before I joined W3C, so many things already looked different. For instance, many developers tend to blame W3C for being slow with standardization, whereas the reality is that implementors are often just busy with other things (we need multiple implementations for a spec to exit the Candidate Recommendation stage) or spec editors are focusing their attention elsewhere.

Another common misconception is that spec editors and Working Group members are exclusively or mostly W3C staff. While many W3C team members do edit specs and participate in WGs, the majority of spec editors are employees of member companies, as is evident in most specifications (you can see a list of the editors in the header). W3C is not some authority that dictates standards from on high, but merely a forum for interested parties to get together and collaborate on advancing the web.

WM: How can developers who aren’t (yet) well-known contribute to the process or give feedback about what works and what doesn’t?

Verou: Participating in web standards is a matter of joining the conversation. W3C is very open. Technical discussion happens in the public mailing lists and IRC, which you can join. Pragmatic feedback from anybody is welcome, especially from people who have tried using the feature in question. Experiment, try to make it work for you and share experiences — good or bad — about it. It might seem at first that you’re just one voice among many, but if your feedback is good, your voice is going to be heard. Technical arguments are judged on their merit and not their origin.

WM: I get tired just looking at your GitHub page: Pattern Gallery, -prefix-free, Dabblet, Prism and a bunch of other useful tools. Where do you find the time to build all this cool stuff? And are you going to be able to keep doing it now that you’re at the W3C?

Verou: I actually released another tool after I joined W3C: Contrast Ratio. W3C supports me in making tools to help developers use open web technologies more effectively. In fact, improving Prism and Dabblet is one of my tasks at W3C since we are going to be using them in WebPlatform.org, our vendor-neutral documentation effort, where all the big players of the Web are working in harmony to create a valuable resource. However, I plan to slow down on releasing new things, so I can maintain the existing ones. Nobody likes to use abandoned scripts and tools, right? :)

WM: The first time I recall landing on your blog was for a post about CSS abuses, like making the Mona Lisa in pure CSS. Which is of course silly, but what caught my eye was that you wrote about how people should be using SVG instead, an awesome tool that almost no one seems to use (despite the fact that it often has better browser support than most CSS3 features and works great at every screen resolution). Why is SVG still the neglected stepchild of the web stack, and do you think that’s ever going to change?

Verou: SVG was held back significantly by a number of different factors. One was the lack of proper support in browsers for many years. Internet Explorer was promoting VML (a proprietary technology that influenced SVG) until IE8 and only implemented SVG in IE9, which is not that long ago. In addition, there are far more bugs in browsers’ SVG implementations; since fewer people use SVG, fewer of those bugs get reported and fixed.

Last but not least, there just aren’t many extensive resources for SVG documentation, a gap that WebPlatform.org is trying to fill (and since it’s a wiki, you can help too!).

However, SVG has certainly been picking up over the last few years, either directly, with people using the format itself, or indirectly, through many of its features being added to CSS. For example, CSS Transforms, CSS Filter Effects, Blending and Compositing, as well as CSS Masking, are all basically SVG applied to HTML with a simpler syntax.
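
As a rough sketch of that relationship (the blur values are arbitrary, chosen only for illustration), the same effect can be written as an SVG filter referenced from CSS, or with the newer CSS Filter Effects shorthand that grew out of it, which at the time still ships behind a -webkit- prefix:

<svg width="0" height="0">
  <filter id="soften">
    <feGaussianBlur stdDeviation="3"/>
  </filter>
</svg>

<style>
  .photo     { filter: url(#soften); }      /* reference the SVG filter */
  .photo-alt { -webkit-filter: blur(3px); } /* CSS Filter Effects shorthand */
</style>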

WM: Everyone has their pet standard; personally I’d like to see CSS Flexbox get better browser support and end the float insanity — what’s at the top of your web standards wish list?

Verou: As an editor of CSS Backgrounds & Borders Level 4, I can’t wait for it to get more attention. Regarding other specs, however, I’m very interested in the new SVG-inspired specs like Filter Effects, Compositing and Masking. They let us do things we’ve badly needed for years and, for the most part, they degrade pretty gracefully, unlike the new layout modules or the syntax improvements.

To keep up with Verou’s latest projects and musings on web standards, check out her blog and follow her on Twitter and GitHub.

File Under: APIs, HTML5, Web Standards

Improve Your Website’s Accessibility With the W3C’s ‘Guide to Using ARIA’

WAI-ARIA, the W3C’s specification for Accessible Rich Internet Applications, provides web developers with a means of annotating page elements with the roles, properties, and states that define exactly what those elements do. The added definitions help screen readers and other assistive devices navigate through your website.

We’ve looked at how you can use ARIA roles not just to improve your site’s accessibility but also to style elements. Now you can get the official word: the W3C has published the First Public Working Draft of Using WAI-ARIA in HTML.
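
As a quick refresher on that technique (the markup here is purely illustrative), a landmark role both labels the region for assistive technology and gives your CSS a hook via an attribute selector, so no extra class is needed:

<nav role="navigation">
  <a href="/">Home</a>
  <a href="/archive">Archive</a>
</nav>

<style>
  /* Style the landmark by its role rather than by a class name. */
  [role="navigation"] a { padding: 0.5em; }
</style>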

The W3C’s guide goes beyond the ARIA Landmark Roles that we’ve covered in the past, offering suggestions on how ARIA can help with HTML5 apps that load dynamic content or build entire interfaces with JavaScript. In fact, this is where the true power of ARIA comes into play since there is often no other way for assistive devices to get at your application’s data.
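
For script-driven interfaces, the pattern looks something like this hypothetical progress widget: without the role, value attributes and live-region hint, a screen reader has no way to know what the div represents or when it changes.

<!-- Update aria-valuenow from JavaScript as the upload progresses. -->
<div role="progressbar"
     aria-valuemin="0" aria-valuemax="100" aria-valuenow="60"
     aria-live="polite">
  Uploading: 60%
</div>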

Unfortunately, not everything in the ARIA spec works in every screen reader. Support for the landmark roles is pretty solid, but much of the rest remains a work in progress. As always, there’s no substitute for real-world testing.

File Under: Browsers, Web Standards

Think One Fewer Browser Means Less Work? Think Again

WebKit: not actually one ring, but many, many rings. Image: Screenshot/Webmonkey

Opera Software is abandoning its homegrown rendering engine in favor of the open source WebKit rendering engine. Many developers seem to think this means one fewer browser to test in, but unfortunately that’s not the case.

The problem with the dream of less testing because there’s more WebKit is that “WebKit” can mean many things. The WebKit in Safari does not have all the features you’ll find in the WebKit that powers Google Chrome. The situation gets even more complicated on mobile, where there are about as many different versions of WebKit as there are browsers.

As Mozilla’s Rob Hawkes and Robert Nyman point out in the post WebKit: An Objective View, that means “each browser will still have its own quirks, performance differences, design, and functionality. These should all be tested for.”

Worse, individual WebKit browsers can pick and choose which APIs to include in their final builds, which means that just because something is available in WebKit does not mean it’s available in, for example, both Chrome and Safari. Couple this with Safari’s relatively slow release schedule, and just the two major desktop WebKit variants are going to require testing to make sure everything works.
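
One concrete example of why that matters: Chrome exposes the WebKit-only FileSystem API, while Safari never shipped it. The safer habit is to test for the feature itself rather than for “WebKit”. Here’s a minimal sketch:

<script>
  // Don't assume "it's WebKit, so it's there": test for the API itself.
  if (window.webkitRequestFileSystem) {
    // Chrome exposes the prefixed FileSystem API; Safari does not.
  } else {
    // Fall back to another storage mechanism, such as localStorage.
  }
</script>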

Throwing a WebKit-based Opera in the mix just means another WebKit browser that needs to be part of your testing.

There’s nothing wrong with this state of affairs, and it won’t change all that much once Opera is on WebKit as well, but the switch won’t mean less testing, nor is it going to make web developers’ lives any easier (especially since most of them weren’t testing in Opera anyway).

Testing will always be a necessary part of web development, but the danger that Hawkes and Nyman foresee is that developers will test less because they assume that if something works in one version of WebKit it will work in all of them. While that hasn’t happened yet, the CSS prefix debacle certainly doesn’t bode well for the WebKit-heavy future.

DRM for the Web? Say It Ain’t So

So far it ain’t so, but some form of DRM in HTML is becoming a more likely possibility every day.

The W3C’s HTML Working Group recently decided that a proposal to add DRM to HTML media elements — formally known as the Encrypted Media Extensions proposal — is indeed within its purview and the group will be working on it.

That doesn’t mean that the Encrypted Media Extensions proposal will become a standard as is, but it does up the chances that some sort of DRM system will make its way into HTML.

The Encrypted Media Extensions proposal — which is backed by the likes of Google, Microsoft, Netflix and dozens of other media giants — technically does not add DRM to HTML. Instead it defines a framework for bringing a DRM system, or “protected media content” as the current draft puts it, to the web.

If the idea of DRM seems antithetical to the inherently open nature of HTML, you’re not alone. Ian Hickson, former editor of the W3C’s HTML spec, has called the Encrypted Media Extensions proposal “unethical.” Hickson is no longer in charge of the W3C’s HTML spec, but HTML WG member Manu Sporny has already asked the WG not to publish the first working draft because the “specification does not solve the problem the authors are attempting to solve.”

There are numerous problems with the Encrypted Media Extensions proposal, including the basic fact that, historically, DRM doesn’t work.

Other problems specific to the current draft of the proposal include the fact that it might well be impossible for open source web browsers to implement without relying on closed source components. Then there are the gaping security flaws that would make it trivially easy to defeat the currently defined system.

But Sporny raises a far more ominous objection — that the proposal in its current form does not actually define a DRM system. Instead it proposes a common API, which would most likely lead to a proliferation of DRM plugins. Here’s Sporny’s take:

The EME specification does not specify a DRM scheme in the specification, rather it explains the architecture for a DRM plug-in mechanism. This will lead to plug-in proliferation on the Web. Plugins are something that are detrimental to inter-operability because it is inevitable that the DRM plugin vendors will not be able to support all platforms at all times. So, some people will be able to view content, others will not.

That sounds a lot like the bad old days when you needed Flash, Real Player, Windows Media Player and dozens of other little plugins installed just to watch a video.

That’s a web no user wants to return to.

At the same time, there continue to be companies that believe DRM is essential to their bottom line and that the web offers them no solution. That’s why Flash, Silverlight and other DRM-friendly plugins remain the media players of choice for many content providers.

So the question of DRM on the web boils down to this: should the W3C continue to work on a spec that defines some kind of DRM system or should the interested companies go off and do their own work? For its part the W3C clearly wants to be part of the process, though it remains unclear what, if any, value a standards-based DRM system might have for web users.

File Under: HTML, Web Standards

‘Main’ Element Lands a Starring Role in HTML

Original Image by Christian Haugen/Flickr

HTML5 introduces several new tags that give HTML more semantic meaning. There’s <nav> for navigation elements, <header> for headers, <footer> for footers, and now a new element has been added to the HTML draft spec: <main>, to wrap around, well, the main content on a page.

As we reported back when the W3C’s HTML Working Group first considered adding it to the list of HTML elements, the primary purpose of the main element is to map the WAI-ARIA landmark role “main” to an HTML element.

Thanks to developer Steve Faulkner, who wrote up the proposal for <main> and did much of the hard work of convincing the Working Group that it was worth adding to the spec, you can start using the main element today. In fact, <main> is already natively supported in nightly builds of Firefox and Chrome.

There’s an ongoing debate as to whether more than one <main> element should be allowed on the page. Currently the W3C’s draft of the spec explicitly prohibits more than one <main> per page, but the WHATWG’s version of the spec is less specific.

It might sound counter-intuitive to have more than one <main> per page, but the argument is that the rest of the new block-level tags have no such restrictions. In other words, there can be more than one <header>, more than one <footer> and more than one <nav>, so why not more than one <main>? Developer Jeremy Keith has a post on why more than one <main> could be a good idea. [Update: There’s also been some discussion on the HTML WG mailing list and a call for supporting data. As Steve Faulkner notes in the comments below, “the discussion continues, but at this stage there is no evidence that such a change will bring a benefit to users and may well complicate the usage of the feature and dilute its meaning and benefit for users.”]

For now we suggest sticking to just one main element per page, which simplifies using <main>. Chances are you have something like <div id="main"> in your code right now. To use the new main element, just rewrite that to be <main role="main">.

The role="main" may seem redundant, and someday it will be, but right now it acts as a polyfill for older web browsers, ensuring that they map the element to accessibility APIs. Older browsers will also need to be told about the element’s block level status with main {display:block;}. HTML5 shiv, a popular way to add support for the new elements to older browsers, has already been updated to support <main>.