Mozilla Demos MediaStream Processing, Audio Mixing in Firefox
Mozilla’s Robert O’Callahan, the author of the MediaStream Processing API proposal draft, has released experimental Firefox builds that include MediaStream Processing support. He has also published a set of demos (note: you need to run the experimental build to see them) that illustrate some of the functionality defined by the specification.
The demos show how the APIs can be used to perform tasks like rendering a visualization of a video’s audio track in a Canvas element while the video is playing. They also show how the APIs can be used for mixing tasks, like implementing a cross-fade between two audio streams, dynamically adjusting the volume of a video, and programmatically generating audio streams.
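To give a sense of the math behind one of those mixing tasks, here is a minimal sketch of an equal-power cross-fade applied to two buffers of audio samples. This is only an illustration of the technique the demos exercise, not the MediaStream Processing API itself (whose calls were still in flux at the time); the function names are hypothetical.

```typescript
// Equal-power cross-fade of a single sample pair.
// `t` runs from 0 (all of `a`) to 1 (all of `b`).
// Using cosine/sine gains keeps the combined power roughly constant,
// avoiding the audible dip a plain linear fade produces in the middle.
function crossfadeSample(a: number, b: number, t: number): number {
  const gainA = Math.cos(t * 0.5 * Math.PI); // fades 1 -> 0
  const gainB = Math.sin(t * 0.5 * Math.PI); // fades 0 -> 1
  return a * gainA + b * gainB;
}

// Mix two equal-length sample buffers, fading from `a` to `b`
// across the length of the buffer.
function crossfadeBuffers(a: Float32Array, b: Float32Array): Float32Array {
  const out = new Float32Array(a.length);
  for (let i = 0; i < a.length; i++) {
    const t = a.length > 1 ? i / (a.length - 1) : 0;
    out[i] = crossfadeSample(a[i], b[i], t);
  }
  return out;
}
```

In an API like the one O’Callahan proposed, a function along these lines would run inside a Web Worker that receives input sample buffers and writes the mixed output.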
One of the characteristics that distinguishes the MediaStream Processing API from previous web audio API proposals is that it aims to interoperate better with existing web standards. For example, it relies on the MediaStream interface in the WebRTC specification. It also allows users to take advantage of Web Workers for threading and will work with getUserMedia to eventually support real-time manipulation of streams from microphones and webcams.
The current implementation of the specification focuses on audio capabilities. As O’Callahan explained this week in a blog post, support for video manipulation will be added in the future when the necessary graphics APIs are accessible through Web Workers. MediaStream Processing on video will be useful for doing things like QR code recognition and augmented reality in web applications, he said.
So, when will this functionality be available in a stable Firefox release? It might take some time. According to O’Callahan, the patch needs some cleanup before the functionality can land in trunk and make it into regular nightly builds. Even then, the MediaStream Processing functionality likely won’t be generally available until the specification has solidified.
“The biggest limitation is that it’s not shipping in Firefox yet. My giant patch is messy and a lot of cleanup needs to be done. I have a plan to split the patch up, clean up the pieces and land them piecemeal. In particular I need to get some of the infrastructure landed ASAP to help the WebRTC team make progress,” he wrote. “When we ship it, much or all of the API will probably be disabled by default, behind a hidden pref, until the standards situation is resolved.”
MediaStream Processing is definitely going to be worth the wait. Some fantastic capabilities are going to be unlocked when the specification is fully implemented. It will open the door for using native web standards to perform some sophisticated real-time media processing tasks that were previously only possible with browser plugins.
This article originally appeared on Ars Technica, Wired’s sister site for in-depth technology news.