File Under: Interview, Web Standards

Interview | Lea Verou on Why Web Standards Matter and How You Can Help

Image: Lea Verou

This is the first in a new series of interviews with web developers. We’re excited to start with Lea Verou, a front-end web developer from Greece who has not only made lots of cool stuff we’ve linked to, but has also recently joined the W3C to help work on web standards.

Webmonkey: You joined W3C Developer Relations last year, which is a relatively new effort at the W3C to actively reach out to web designers and developers. What does the day-to-day work of a W3C Developer Relations person look like?

Lea Verou: You’re absolutely right, Scott. It’s a new initiative that Doug Schepers started last year. W3C was interested in outreach for years, but there was no official Developer Relations activity before.

My role is pretty mixed. I help organize W3Conf, our conference for web designers and developers; I help develop and promote WebPlatform.org; I present at conferences around the world; I write articles about web standards in industry media; and many other things.

WM: You were interested in web standards very early on. What was it that made standards important to you?

Verou: When I started developing for the web, IE6 was the most widely used browser. As you probably remember, making websites that work cross-browser was way harder back then than it is today. We had to rely on browser detection, ugly hacks and whatnot. I wished browsers could just agree on some common ground and implement that. A couple of years later, I discovered that this is actually a thing, and it’s called web standards. Since then, I’ve made it one of my personal goals to raise awareness among fellow web developers, get browsers to implement standards and advance the standards themselves for the common good.

WM: Of course many of those ugly hacks remain, especially for developers still wrestling with IE7 (IE6 seems to have been put to rest for the most part). What’s your take on supporting older browsers? Is that an important thing to do or is it time we leave them behind because they’re holding back the web?

Verou: I’m a big supporter of progressive enhancement and graceful degradation. Websites should be usable, if possible, in older browsers, but they don’t need to have all the bling. However, graceful degradation is not black & white. Everyone seems to have a different definition of what is “graceful” and what is “enhancement”.

Is a solid color an acceptable fallback for a pattern? What if your lightbox has no overlay? What if your stripes become a solid color? What if your transitions are not there? What if your code has no syntax highlighting? I tend to lean towards being more permissive instead of looking for perfection in older browsers, especially on websites targeted at a more technical audience. I will provide fallbacks, but I will not go out of my way and use proprietary IE7-specific stuff to make something look good there. With a < 0.5% global market share, it’s just not worth it.
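The enhancement-with-fallback approach Verou describes relies on how CSS error handling works: a browser ignores declarations it doesn’t understand and keeps the last value it could parse. A minimal sketch (the selector and colors here are illustrative, not from any particular site):

```css
.banner {
  /* Solid-color fallback: every browser understands this. */
  background-color: #3a6ea5;
  /* Enhancement: browsers without gradient support ignore this
     declaration entirely and keep the solid color above. */
  background-image: linear-gradient(#3a6ea5, #16324f);
}
```

Because the fallback comes first, no browser detection is needed; each browser simply applies as much of the stylesheet as it can.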

WM: A lot of the developers I know have a kind of love-hate relationship with the W3C. But you wrote on your site that working for the W3C was “a dream of mine ever since I learned what a web standard is.” Can you talk a little bit about what makes the W3C great and why you wanted to work there?

Verou: Like I said before, promoting web standards was something I had already been doing for years anyway. I felt that working for W3C itself would enable me to do it more systematically and have a bigger impact. For instance, one of my main tasks has been helping organize W3Conf (happening this February in San Francisco), which is aimed at showing web professionals that web standards are not some utopian ideal but practical to adhere to in everyday work, as well as educating them about recent developments they can use today. Connecting those two worlds is a fun challenge!

WM: Standards do at times feel less than practical, especially because they’ve been changing a lot lately (e.g. WebSockets got a rewrite after it had already shipped in multiple browsers, ditto CSS Flexbox). So there are these seemingly rapid changes, and then on the other hand it seems like we’ve been waiting forever for other things. I know the W3C recently launched WebPlatform.org for developers, but what other resources would you suggest for web professionals who’d like to educate themselves about web standards and, more importantly, stay up-to-date?

Verou: W3C is well aware of the fact that sometimes we can be slow, and we are trying to speed things up to meet developer needs. This is why we are encouraging implementors (like browser vendors) to implement earlier in the process so we can get feedback from developers, and why we’re putting more emphasis on testing, which is going to improve interoperability.

All this means that experimental features will ship which still need work. Having shipped in browsers is not an indication of stability. Browsers often ship experimental features so that developers can play with them and give feedback. This doesn’t mean the feature is frozen — quite the opposite, it means we need feedback to make it better.

Regarding resources, I know I’m weird, but I often read about new features in the specs themselves. I search for a feature, come across the spec, look at what else is there and then see if it’s implemented anywhere. I also often find good information on Twitter and by looking at others’ code. There are also many websites with good information.

WM: Now that you’re actually working for the W3C has your perspective on standards changed? Is there anything that looks different from the other side of the fence?

Verou: I was involved in standards before I joined W3C, so many things already looked different. For instance, many developers tend to blame W3C for being slow with standardization, whereas the reality is that often implementors are just busy with other things (we need multiple interoperable implementations for a spec to exit the Candidate Recommendation stage) or spec editors are focusing their attention elsewhere.

Another common misconception is that spec editors and Working Group members are exclusively or mostly W3C staff. While many W3C Team members do edit specs and participate in WGs, the majority of spec editors are employees of member companies, as is evident in most specifications (you can see the list of editors in the header). W3C is not some authority that dictates standards from on high, but merely a forum for interested parties to get together and collaborate on advancing the web.

WM: How can developers who aren’t (yet) well-known contribute to the process or give feedback about what works and what doesn’t?

Verou: Participating in web standards is a matter of joining the conversation. W3C is very open. Technical discussion happens in the public mailing lists and IRC, which you can join. Pragmatic feedback from anybody is welcome, especially from people who have tried using the feature in question. Experiment, try to make it work for you and share experiences — good or bad — about it. It might seem at first that you’re just one voice among many, but if your feedback is good, your voice is going to be heard. Technical arguments are judged on their merit and not their origin.

WM: I get tired just looking at your GitHub page: Pattern Gallery, -prefix-free, Dabblet, Prism and a bunch of other useful tools. Where do you find the time to build all this cool stuff? And are you going to be able to keep doing it now that you’re at the W3C?

Verou: I actually released another tool after I joined W3C: Contrast Ratio. W3C supports me in making tools to help developers use open web technologies more effectively. In fact, improving Prism and Dabblet is one of my tasks at W3C since we are going to be using them in WebPlatform.org, our vendor-neutral documentation effort, where all the big players of the Web are working in harmony to create a valuable resource. However, I plan to slow down on releasing new things, so I can maintain the existing ones. Nobody likes to use abandoned scripts and tools, right? :)

WM: The first time I recall landing on your blog was for a post about CSS abuses, like making the Mona Lisa in pure CSS. Which is of course silly, but what caught my eye was that you wrote about how people should be using SVG instead, an awesome tool that almost no one seems to use (despite the fact that it often has better browser support than most CSS3 features and works great at every screen resolution). Why is SVG still the neglected stepchild of the web stack, and do you think that’s ever going to change?

Verou: SVG was significantly held back by a number of different factors. One was the lack of proper support in browsers for many years. Internet Explorer was promoting VML (a proprietary technology that influenced SVG) until IE8, and only implemented SVG in IE9, which is not that long ago. In addition, there are far more bugs in browsers’ SVG implementations, since fewer people use SVG, so fewer of those bugs get reported and fixed.

Last but not least, there just aren’t many extensive resources for SVG documentation, a gap that WebPlatform.org is trying to fill (and since it’s a wiki, you can help too!).

However, SVG has certainly been picking up over the last few years, either directly, with people using the format, or indirectly, through many of its features being added to CSS. For example, CSS Transforms, CSS Filter Effects, Blending and Compositing, as well as CSS Masking, are all basically SVG applied to HTML with a simpler syntax.
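As a rough illustration of those SVG-derived features landing in CSS (the selectors, values and the mask file name are illustrative, and at the time of this interview most of these properties still shipped behind vendor prefixes):

```css
img.thumb {
  transform: rotate(-3deg);   /* CSS Transforms */
  filter: grayscale(60%);     /* CSS Filter Effects, originally SVG filters */
}
.overlay {
  mix-blend-mode: multiply;   /* Blending and Compositing */
  mask-image: url(mask.svg);  /* CSS Masking, here with a hypothetical SVG mask */
}
```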

WM: Everyone has their pet standard. Personally, I’d like to see CSS Flexbox get better browser support and end the float insanity. What’s at the top of your web standards wish list?

Verou: As an editor of CSS Backgrounds & Borders Level 4, I can’t wait for it to get more attention. Regarding other specs, however, I’m very interested in the new SVG-inspired specs like Filter Effects, Compositing and Masking. They allow us to do things we have badly needed for years and, for the most part, they degrade pretty gracefully, unlike the new layout modules or the syntax improvements.

To keep up with Verou’s latest projects and musings on web standards, check out her blog and follow her on Twitter and GitHub.

A Brave New Web Will Be Here Soon, But Browsers Must Improve

The great promise of HTML5 is that it will turn the web into a full-fledged computing platform awash with video, animation and real-time interactions, yet free of the hacks and plug-ins common today.

While the language itself is almost fully baked, HTML5 won’t fully arrive for at least another two years, according to one of the men charged with its design.

“I don’t expect to see full implementation of HTML5 across all the major browsers until the end of 2011 at least,” says Philippe Le Hegaret, interaction domain leader for the World Wide Web Consortium (W3C), who oversees the development of HTML5.

He tells Webmonkey the specification outlining the long-promised rewrite of the web’s underlying language will be ready towards the end of 2010, but because of varying levels of support across different browsers, especially in the areas of video and animation, we’re in for a longer wait.

Most web pages are currently written in HTML 4.01, which has been around since the late 1990s. The web was mostly made up of static pages when HTML was born, and it has grown by leaps and bounds since then. Now, we favor complex web applications written in JavaScript like Gmail and Facebook, we stream videos in high-definition, we consume news in real-time feeds and generally push our browsers as far as they’ll go. These developments have left HTML drastically outdated, and web authors have resorted to using a variety of hacks and plug-ins to make everything work properly.

HTML5 — which is actually a combination of languages, APIs and other technologies to make scripted applications more powerful — promises to solve many of the problems of its predecessor, and do so without the hacks and plug-ins.

We’re already close. All the major browsers are providing some level of support for HTML5.

“There’s strong support already in Firefox and Safari. Even Microsoft IE8 has some partial support,” says Le Hegaret, referring to some code within HTML5 that enables the browser to pass information between pages.
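The cross-page feature Le Hegaret alludes to is most likely HTML5 cross-document messaging (window.postMessage), which even IE8 supports. A minimal sketch, with hypothetical element IDs and origins, showing both sides of the exchange:

```html
<script>
  /* In the outer page (say, at https://example.org): send a string to
     an embedded iframe, naming the origin we expect the frame to have. */
  var frame = document.getElementById('widget');
  frame.contentWindow.postMessage('hello', 'https://example.com');

  /* In the framed page: listen for messages, checking the sender's
     origin before trusting the data. */
  window.onmessage = function (event) {
    if (event.origin === 'https://example.org') {
      console.log('Received: ' + event.data);
    }
  };
</script>
```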

Browser makers are approaching support incrementally, adding features little by little with every subsequent release. Some, like Mozilla, can build new features into the next release in a matter of months. For others, like Microsoft, it takes much longer.

Google Chrome is maturing extremely quickly and already supports most of HTML5. This is mostly because Google didn’t start from scratch: the company chose to use the open source WebKit rendering engine, the same one used by Safari. Still, this doesn’t mean both browsers support HTML5 equally.

“Video support between Safari and Chrome, despite the fact that they are both using the same underlying engine, is totally different because video support is not part of the WebKit project at the moment,” says Le Hegaret.

It’s actually this very issue — support for playing videos inside the browser — that continues to be one of the main factors blocking the broad adoption of HTML5.

The way the specification is written now, website authors will have the ability to link to a video file as simply as an image file. The video plays in the browser without using a plug-in, and the author can create a player wrapper with controls.
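That markup pattern looks roughly like this (file names here are hypothetical): the browser plays the first source it can decode, and anything else inside the element is shown only by browsers with no native video support.

```html
<video controls width="640" height="360">
  <source src="clip.ogv" type="video/ogg">  <!-- Ogg Theora -->
  <source src="clip.mp4" type="video/mp4">  <!-- H.264 -->
  <!-- Fallback for browsers without the video element,
       e.g. a Flash-based player -->
  <object data="player.swf" type="application/x-shockwave-flash">
    <param name="movie" value="player.swf">
  </object>
</video>
```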

But browser vendors are stuck arguing over which video format to support. Mozilla, Google and Opera are interested in the open source Ogg Theora video format. Apple has substantial investments in its QuickTime technology, so it’s pushing for the QuickTime-backed H.264 format. Microsoft wants people to use its Silverlight plug-in, so Internet Explorer isn’t supporting native video playback in the browser at all.

Google has voiced support for Ogg, but it has also recently made a bid to purchase On2, a company that makes a competing video technology. Rumor has it Google might release On2’s video technology under an open source license once the sale is complete.

Until these issues are sorted out, consumers and content providers alike are forced to rely on plug-ins. Le Hegaret says that while these plug-ins have certainly helped the web arrive where it is today, they continue to be a burden on the user.

Setting up any browser to support both H.264 and Ogg Theora requires at least one plug-in, which harms the user experience.

“It’s hard today to ask people to install a plug-in unless the payoff is huge,” he says. “What’s driving the most successful plug-in, which is Flash, is video support. If you can’t see YouTube, your life on the web is pretty miserable. You’re missing a lot.”

Plug-ins aren’t just hard on web users; they’re hard on web developers, too.

“Building with Flash or Silverlight in a way that lets you share information between the content appearing inside the plug-in and the rest of the page presents some challenges,” says Le Hegaret.

Unlike its predecessor, HTML5 has been designed with web applications in mind. The current HTML5 specification includes a media API that makes it easier to connect animations or video and audio elements — things traditionally presented within a Flash player — with the rest of the content on the page.
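A sketch of what that integration looks like in practice, using the standard media element API (the element IDs and file name are hypothetical):

```html
<video id="clip" src="clip.ogv"></video>
<button id="toggle">Play / Pause</button>
<script>
  var video  = document.getElementById('clip');
  var button = document.getElementById('toggle');
  /* Because the player is just another element on the page, the same
     script layer controls it directly: no plug-in boundary to cross. */
  button.onclick = function () {
    if (video.paused) { video.play(); } else { video.pause(); }
  };
  /* Media events let the surrounding page react to playback state. */
  video.onended = function () { button.textContent = 'Replay'; };
</script>
```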

“You get a smoother application if you use HTML5. You’re not crossing a software layer. It’s all part of the same application.”

Unfortunately, the YouTubes of the world aren’t going to make a baseline switch from Flash to HTML5 unless they know there’s strong support for it in the browsers.

But they are testing the waters: Wikipedia is experimenting with HTML5 video support by serving Ogg Theora video to browsers that can handle it, and Flash to everyone else. YouTube and the video site Dailymotion have also set up special demo pages using this technique.

Le Hegaret says we’ll be in this period of transition, a dual-experience web where content sites serve HTML5 video along with a Flash fallback, for a while.

“Web developers will continue to have to understand that not everyone is using the latest generation web browser, and that’s OK in the short term.”

As far as being able to make the switch to a pure HTML5 web altogether, Le Hegaret says that’s only possible once browser vendors sort out their differences.

Once that day arrives, the final switch to HTML5 will be in the hands of the content providers. It’s up to them to begin coding for HTML5 standards and ditching support for old browsers.

“There are still a significant amount of people out there using IE6,” says Le Hegaret. “As a developer right now, you can’t really ignore it. Hopefully, in two or three years, you will be able to start ignoring IE6.”

File Under: Other

Adobe’s Kevin Lynch on AIR’s Open-Source Road to the Desktop

Photo: Michael Calore

This week, Adobe released version 1.0 of its Adobe Integrated Runtime (or AIR for short), a mechanism that allows applications created for the internet to run on the desktop, completely independent of the web browser and across multiple operating systems. You can read our initial coverage of AIR’s release on the Compiler blog.

We got the chance to talk with Adobe’s Kevin Lynch prior to AIR’s release on Monday. Under his previous title of chief software architect, Lynch led the development of AIR from its beginnings under the code name “Apollo” through its year-long public beta stage. During the development process, Lynch (who was recently promoted to chief technology officer of Adobe) remained vocal about the fact that his team was both using and contributing to open-source technologies.

When we spoke to him over the telephone on the eve of AIR’s release, Lynch was eager to talk about how open source guided AIR’s development and how it has influenced the growth of web apps in general. We also talked about the state of Flash and AIR on Linux, and AIR’s potential to bring new applications to the Linux desktop.

We started out by asking Kevin what role open-source played in AIR’s development.

Kevin Lynch: We’re really working to increase our usage of open-source technologies and to contribute to open-source. Because we developed AIR so openly, we were able to share it with developers and make sure we were developing the right things.

This is a huge release for us. We’ve got a cross-operating system runtime that really works on Mac and Windows. We’re in the midst of Linux development right now, and that one’s starting to shape up. That will be out later this year. It’s looking really good already, but we’re still working on it. We’re actually looking for Linux testers, so if you or anyone else wants to get involved, let us know.

(At an Adobe press event Monday, Lynch showed a version of AIR running on an IBM ThinkPad with Ubuntu Linux installed. It was a little slower than the Mac version, but fully functional.)


Ogg’s Creator On Why Open Media Formats Still Rule

In the past few weeks, the Ogg family of patent-free media formats has received something of a boost. Some developments playing to Ogg’s advantage have been unintentional, such as Microsoft’s renewed claims of patent ownership over open-source software and online music retailers’ shift towards DRM-free sales. Others, such as the Free Software Foundation’s launch of a new Ogg awareness campaign at PlayOgg.org, have been directly proactive. Read more about the rekindled interest in Ogg in the Wired News story, “How to Live an Open-Source Musical Life with Ogg Vorbis.”

We got the chance to ask Ogg creator and Xiph.org co-founder Chris "Monty" Montgomery a few questions about these developments and what they mean for the future of Ogg, Digital Rights Management technologies and those ubiquitous little music machines.

Monty’s answers, sent to us over e-mail (he says he was far too busy hacking code for a phone call), are presented here verbatim.


File Under: Other

Stepan Pachikov Wants to Study Your Handwriting

I just had an hour-long conversation with Stepan Pachikov, a developer at the forefront of handwriting recognition technology in computer software. He’s probably most famous for his company, ParaGraph, which he co-founded and which provided the handwriting recognition technology used in the Apple Newton. He has also developed similar handwriting recognition technology for Silicon Graphics, Microsoft and the U.S. Postal Service. In his spare time, Stepan is writing a book of old Russian jokes.

Stepan recounted an amusing anecdote about the Newton experience. Apple wanted to license the technology for a product they were working on, but they wouldn’t tell the ParaGraph team what it was. ParaGraph agreed to the license anyway. When the Newton arrived months later, Stepan sensed a missed opportunity. He would have been able to make the handwriting recognition much better if he had been able to get his hands on the device and customize the software.

That’s one argument against closed, secretive development.
