Quick update for those looking for the slides! https://brionv.com/misc/video-wikiconusa-2015.pdf
One of the really cool ‘web maker’ projects that Mozilla sponsored in the last few years was Popcorn Maker, an in-browser video editor that could take video clips directly or pull them from YouTube and other sources, and let you remix them to your heart’s content.
Unfortunately Mozilla has to shuffle priorities, and Popcorn Maker is on the outs.
Fortunately, there’s enough interest in the Wikimedia video world that we’re helping to pick it up!
If you’re interested in helping out, we’re going to have some work sprints at WikiConference USA next weekend in Washington, DC. Please come and help out!
This first generation uses the Ogg Theora video codec, which we started using on Wikipedia “back in the day” before WebM and MP4/H.264 started fighting it out for dominance of HTML5 video. In fact, Ogg Theora/Vorbis were originally proposed as the baseline standard codecs for the HTML5 video and audio elements, but Apple and Microsoft refused to implement them and the standard ended up dropping the baseline requirement altogether.
Ah, standards. There are so many to choose from!
I’ve got preliminary support for WebM in ogv.js; it needs more work but the real blocker is performance. WebM’s VP8 and newer VP9 video codecs provide much better quality/compression ratios, but require more CPU horsepower to decode than Theora… On a fast MacBook Pro, Safari can play back ‘Llama Drama’ in 1080p Theora but only hits 480p in VP8.
That’s about a 5x performance gap in terms of how many pixels we can push… For now, the performance boost from using an older codec is worth it, as it gets older computers and 64-bit mobile devices into the game.
But it also means that to match quality, we have to double the bitrate — and thus bandwidth — of Theora output versus VP8 at the same resolution. So in the longer term, it’d be nice to get VP8 — or the newer VP9 which halves bitrate again — working well enough on ogv.js.
Awesome town. But what are the limitations and pain points?
JavaScript has a single Number type (a double-precision float), which is “convenient” for classic scripting in that you don’t have to worry about picking the right numeric type, but it has several horrible, horrible consequences:
Did I say “luckily”? 😛
So this leads to one more ugly consequence:
This actually performs well once it’s through the JIT compiler, but it bloats the .js code that we have to ship to the browser.
Emscripten provides a C-like memory model by using Typed Arrays: a single ArrayBuffer provides a heap that can be read/written directly as various integer and floating point types.
Currently I’m simply copying the input packets into emscripten’s heap in a wrapper function, then calling the decoder on the copy. This works, but the extra copy makes me sad. It’s also relatively slow in Internet Explorer, where the copy implementation using Uint8Array.set() seems to be pretty inefficient.
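For the curious, a minimal sketch of what that wrapper does, assuming a hypothetical _decode_packet export (the real ogv.js entry points have different names) — Module._malloc, Module._free, and the HEAPU8 view are standard emscripten exports, and HEAP32/HEAPF32 etc. are just other typed-array views over the same ArrayBuffer:

```js
// Hypothetical sketch of the copy-in wrapper; _decode_packet stands in
// for the real decoder entry point.
function decodePacket(Module, packetBytes /* Uint8Array from the demuxer */) {
  // Allocate space inside the emscripten heap...
  var ptr = Module._malloc(packetBytes.length);
  // ...and copy the packet bytes into it -- the extra copy that makes me sad,
  // and the Uint8Array.set() call that IE handles slowly.
  Module.HEAPU8.set(packetBytes, ptr);
  try {
    return Module._decode_packet(ptr, packetBytes.length);
  } finally {
    Module._free(ptr);
  }
}
```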
Getting data out can be done “zero-copy” if you’re careful, by creating a typed-array subview of the emscripten heap; this can be used for instance to upload a WebGL texture directly from the decoder. Neat!
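Roughly like this, where framePtr, frameWidth, and frameHeight are stand-ins for whatever the real decoder reports for the luma plane:

```js
// Hypothetical sketch: wrap the decoded luma plane in a subview of the
// emscripten heap and upload it straight to a WebGL texture -- no extra copy.
var lumaBytes = Module.HEAPU8.subarray(framePtr, framePtr + frameWidth * frameHeight);
gl.bindTexture(gl.TEXTURE_2D, lumaTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE,
              frameWidth, frameHeight, 0,
              gl.LUMINANCE, gl.UNSIGNED_BYTE,
              lumaBytes);
```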
But, that trick doesn’t work when you need to pass data between worker threads.
Parallel computing is here: these days just about everything from your high-end desktop to your low-end smartphone has at least two CPU cores and can perform multiple tasks in parallel.
Unfortunately, despite half a century of computer science research and a good decade of multi-core hardware in the marketplace, doing parallel programming well is still a Hard Problem.
This is actually a really nice model that reduces the number of ways you can shoot yourself in the foot with multithreading!
But, it maps very poorly to C/C++ threads, where you start with shared memory and foot-shooting and try to build better abstractions on top of that.
So, we’re not yet able to make use of any multithreaded capabilities in the actual decoders.
But, we can run the decoders themselves in Worker threads, as long as they’re factored into separate emscripten subprograms. This keeps the main thread humming smoothly even when video decoding is a significant portion of wall-clock time, and can provide a little bit of actual parallelism by running video and audio decoding at the same time.
The Theora and VP8 decoders currently have no inherent multithreading available, but VP9 does, so that’s worth keeping an eye on for the future…
Some browser makers are working on providing an “opt-in” shared-memory threading model through an extended ‘SharedArrayBuffer’ that emscripten can make use of, but this is not yet available in any of my target browsers (Safari, IE, Edge).
Modern CPUs provide SIMD instructions (“Single Instruction Multiple Data”) which can really optimize multimedia operations where you need to do the same thing a lot of times on parallel data.
But, my main targets (Safari, IE, Edge) don’t support SIMD in JS yet so I haven’t started…
The obvious next thing to ask is “Hey what about the GPU?” Modern computers come with amazing high-throughput parallel-processing graphics units, and it’s become quite the rage to GPU accelerate everything from graphics to spreadsheets.
The good news is that current versions of all main browsers support WebGL, and ogv.js uses it if available to accelerate drawing and YCbCr-RGB colorspace conversion.
The bad news is that’s all we use it for so far — the actual video decoding is all on the CPU.
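For reference, the conversion we do offload is conceptually a fragment shader along these lines — a simplified sketch using full-range BT.601 math, not the exact shader ogv.js ships (the real one also has to deal with studio swing and chroma subsampling):

```js
var fragmentShaderSource = [
  'precision mediump float;',
  'uniform sampler2D uTextureY;',
  'uniform sampler2D uTextureCb;',
  'uniform sampler2D uTextureCr;',
  'varying vec2 vTexCoord;',
  'void main() {',
  '  float Y  = texture2D(uTextureY,  vTexCoord).r;',
  '  float Cb = texture2D(uTextureCb, vTexCoord).r - 0.5;',
  '  float Cr = texture2D(uTextureCr, vTexCoord).r - 0.5;',
  '  gl_FragColor = vec4(Y + 1.402 * Cr,',
  '                      Y - 0.344 * Cb - 0.714 * Cr,',
  '                      Y + 1.772 * Cb,',
  '                      1.0);',
  '}'
].join('\n');
```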
It should be possible to use the GPU for at least parts of the video decoding steps. But, it’s going to require jumping through some hoops…
libvpx at least has a fork with some OpenCL and RenderScript support — it’s worth investigating. But no idea if this is really feasible in WebGL.
In the meantime, I’ve got lots of other things to fix in Wikipedia’s video support so will be concentrating on that stuff, but will keep on improving this as the JS platform evolves!
Soft launch of ogv.js on Wikipedia and Wikimedia Commons has begun! This initial deployment covers the desktop view only, so iPhones and iPads won’t get the media player yet in mobile view.
See list of pending fixes for additional improvements that should go out next week, after which I’ll make wider announcements.
Or, if the integration code’s automatic benchmark simply overestimated your browser’s speed, running too much decoding just made everything choppy.
Luckily, the HTML5 web platform has a solution — Web Workers.
The limitation is that scripts running in a Worker have no direct access to the code or data running in the web page’s main thread — you can communicate only by sending messages with ‘raw’ data types. Folks working with lots of DOM nodes thus can’t get much benefit, but for buffer-driven compute tasks like media decoding it’s perfect!
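A minimal sketch of that messaging model (the worker file name, the message shapes, and the drawFrame helper are all made up for illustration):

```js
var worker = new Worker('decoder-worker.js');

// Messages can only carry 'raw' data -- numbers, strings, plain objects,
// typed arrays -- never DOM nodes or live objects from the page.
worker.onmessage = function (event) {
  if (event.data.type === 'frame') {
    drawFrame(event.data.frameData); // hypothetical page-side display helper
  }
};
worker.postMessage({ type: 'packet', data: packetBytes }); // packetBytes: a Uint8Array
```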
Threading comms overhead
My first attempt was to take the existing decoder class (an emscripten module containing the Ogg demuxer, Theora video decoder, and Vorbis audio decoder, etc) and run it in the worker thread, with a proxy object sending data and updated object properties back and forth.
This required a little refactoring to make the decoder interfaces asynchronous, taking callbacks instead of returning results immediately.
It worked pretty well, but there was a lot of overhead due to the demuxer requiring frequent back-and-forth calls — after every processing churn, we had to wait for the demuxer to return its updated status to us on the main thread.
This only took a fraction of a millisecond each time, but a bunch of those add up when your per-frame budget is 1/30 (or even 1/60) second!
I had been intending a bigger refactor of the code anyway to use separate emscripten modules for the demuxer and audio/video decoders — this means you don’t have to load code you won’t need, like the Opus audio decoder when you’re only playing Vorbis files.
It also means I could change the coupling, keeping the demuxer on the main thread and moving just the audio/video decoders to workers.
This gives me full speed going back-and-forth on the demuxer, while the decoders can switch to a more “streaming” behavior, sending packets down to be decoded and then displaying the frames or queueing the audio whenever it comes back, without having to wait on it for the next processing iteration.
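In very rough terms the new coupling looks like this — demuxer, videoDecoderWorker, and drawFrame are illustrative names, not the actual ogv.js API:

```js
// The demuxer stays on the main thread, so pulling the next packet is a
// cheap synchronous call with no cross-thread round trip.
function pumpVideo() {
  var packet = demuxer.dequeueVideoPacket();
  if (packet) {
    // Fire-and-forget: the packet is posted to the decoder worker and the
    // main loop moves on without waiting for the result.
    videoDecoderWorker.postMessage({ type: 'decode', packet: packet });
  }
}

// Decoded frames come back whenever they're ready and get drawn (or queued)
// on arrival instead of blocking the next processing iteration.
videoDecoderWorker.onmessage = function (event) {
  if (event.data.type === 'frame') {
    drawFrame(event.data.frame);
  }
};
```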
The result is pretty awesome — in particular on older Windows machines, IE 11 has to use the Flash plugin to do audio and I was previously seeing a lot of “stuttery” behavior when the video decode blocked the Flash audio queueing or vice versa… now it’s much smoother.
The main bug left in the worker mode is that my audio/video sync handling code doesn’t properly handle the case where video decoding is consistently too slow — when we were on the main thread, this caused the audio to halt due to the main thread being blocked; now the audio just keeps on going and the video keeps playing as fast as it can and never catches up.
However this should be easy to fix, and having it be wrong but NOT FREEZING YOUR BROWSER is an improvement over having sync but FREEZING YOUR BROWSER.
In cleaning it up for release, I’ve noticed some performance regressions on IE and Edge due to cleaning out old code I thought was no longer needed.
It turns out that uploading single-channel textures as LUMINANCE or ALPHA formats is vveerryy ssllooww in IE 11 update 1 and Edge, compared to uploading the exact same data blob as an RGBA texture…
As it turns out, I had had some older code to stuff things into RGBA textures and then unpack them in a shader for IE 10 and the original release of IE 11, which did not support LUMINANCE or ALPHA texture uploads! I had removed this code to simplify my WebGL code paths since LUMINANCE got added in IE 11 update 1, but hadn’t originally noticed the performance regression.
Unfortunately this adds a user-agent sniff to my ogv.js code… I prefer to use the LUMINANCE textures directly on other browsers where they perform well, because the textures can be scaled more cleanly in the case of source files with non-square pixels.
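Roughly, the upload path now looks something like this — the sniff condition and the plane variables are illustrative, and the RGBA path assumes the plane width is a multiple of 4 so four luma bytes pack into each texel (the shader unpacks them again):

```js
var isIEorEdge = /Trident|Edge/.test(navigator.userAgent); // the unfortunate sniff
if (isIEorEdge) {
  // Pack the single-channel plane into an RGBA texture: same bytes, but the
  // upload is much faster on IE 11 update 1 and Edge.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA,
                planeWidth / 4, planeHeight, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, planeBytes);
} else {
  // Elsewhere, upload as LUMINANCE directly so non-square pixels scale cleanly.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE,
                planeWidth, planeHeight, 0,
                gl.LUMINANCE, gl.UNSIGNED_BYTE, planeBytes);
}
```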
There were a fair number of folks interested in video chatting at Wikimania! A few quick updates:
An experimental ‘Schnittserver’ (‘Clip server’) project has been in the works for a while with some funding from ze Germans; currently sitting at http://wikimedia.meltvideo.
The Schnittserver can also do server-side rendering of projects using the ‘melt’ format such as those created with Kdenlive and Shotcut — this allows uploading your original footage (usually in some sort of MP4/H.264 flavor) and sharing the editing project via WebM proxy clips, without generational loss on the final rendering.
Once rendered, your final WebM output can be published up to Commons.
I would love to see some more support for this project, including adding a better web front-end for managing projects/clips and even editing…
Mozilla has an in-browser media editor thing called Popcorn.js; they’re unfortunately reducing investment in the project, but there’s some talk among people working on it and on our end that Wikimedia might be interested in helping adapt it to work with the Schnittserver or some future replacement for it.
Unfortunately I missed the session with the person working on Popcorn.js, will have to catch up later on it!
Recently fixed some major sound sync bugs on slower devices, and am finishing up controls which will be used in the mobile view (when not using the full TimedMediaHandler / MwEmbedPlayer interface which we still have on the desktop).
Demo of playback at https://brionv.com/misc/
A slightly older version of ogv.js is also running on https://ogvjs-testing.wmflabs.org/
I had a talk with Faidon about video requirements on the low-level infrastructure layer; there are some things we need to work on before we really push video:
– seeking/streaming a file with Range subsets causes requests to bypass the Varnish cache layer, potentially causing huge performance problems if there’s a usage spike!
– very large files can’t be sharded cleanly over multiple servers, which makes for further performance bottlenecks on popular files again
– VERY large files (>4GB or so) can’t be stored at all, which is a problem for high-quality uploads of things like long Wikimania talks!
For derivative transcodes, we can bypass some of these problems by chunking the output into multiple files of limited length and rigging up ‘gapless playback’, as can be done for HLS or MPEG-DASH-style live streaming. I’m pretty sure I can work out how to do this in the ogv.js player (for Safari and IE) as well as in the native <video> element playback for Chrome and Firefox via Media Source Extensions. Assuming it works with the standard DASH profile for WebM, this is something we can easily make work on Android as well using Google’s ExoPlayer.
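For the native-playback side, here’s a minimal sketch of the Media Source Extensions plumbing, assuming an existing <video> element and a made-up list of chunk URLs — it glosses over the timestampOffset / DASH-segment bookkeeping needed for truly gapless joins:

```js
var chunkUrls = ['talk-part1.webm', 'talk-part2.webm', 'talk-part3.webm']; // illustrative
var mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener('sourceopen', function () {
  var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8, vorbis"');
  var index = 0;

  function appendNext() {
    if (index >= chunkUrls.length) {
      mediaSource.endOfStream();
      return;
    }
    fetch(chunkUrls[index++])
      .then(function (response) { return response.arrayBuffer(); })
      .then(function (buffer) { sourceBuffer.appendBuffer(buffer); });
  }

  // Append the next chunk each time the previous append finishes.
  sourceBuffer.addEventListener('updateend', appendNext);
  appendNext();
});
```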
DASH playback will also make it easier to use adaptive source switching to handle limited bandwidth or CPU resources.
However we still need to be able to deal with source files which may be potentially quite large…
List and phab projects!
As a reminder there’s a wikivideo-l list: https://lists.wikimedia.
and a Wikimedia-Video project tag in phabricator: https://
Folks who are interested in pushing further work on video, please feel free to join up. There’s a lot of potential awesomeness!
So I fixed a really annoying bug in my video player program where it sometimes just hung waiting on network input.
I thought it was going to be a really hairy multi-threaded concurrency issue, as I was building a blocking i/o interface on top of a non-blocking URL-fetching layer in order to reuse existing libraries.
I scoured my code looking for incorrect ordering, bad locking, or a problem with my semaphores…. Bit by bit the code was instrumented, logged, inspected, cleared, cleaned, and set back in place better than it was before. But the bug remained, untamed — nay untouched — by my onslaught.
At last I found the part of the code where the bug was hiding: if the input buffer ran dry and the last network fetch had already completed, the i/o thread would die and the blocking request would time out.
But why was it broken? Because… um… I forgot to call the function that sends a follow-up request for more data.
I’ve been cleaning up some of my old test code for running Ogg media on iOS, adding WebM support and turning it into OGVKit, a (soon-to-be) reusable library that we can use to finally add video and audio playback to our Wikipedia iPhone app.
Of course decoding VP8 or Theora video on the CPU is going to be more expensive in terms of energy usage than decoding H.264 in dedicated silicon… but how much more?
The iOS 9 beta SDK supports enhanced energy monitoring in Xcode 7 beta… let’s try it out! The diagnostic detail screen looks like so:
Whoa! That’s a little overwhelming. What’s actually going on here?
First, what’s going on here?
I’ve got my OGVKit demo app playing this video “Curiosity’s Seven Minutes of Terror” found on Wikimedia Commons, on two devices running iOS 9 beta: an iPod Touch (the lowest-end currently sold iDevice) and an iPad Air (one generation behind the highest-end currently sold iDevice).
The iPod Touch is playing a modest 360p WebM transcode, while the iPad Air is playing a higher-resolution 720p WebM transcode with its beefier 64-bit CPU:
At first, the energy usage looks pretty high:
Now what’s the spot-meter look like?
See approximate reported energy usage levels for all transcode formats (Ogg Theora and WebM at various resolutions) if you like! Ogg Theora is a little faster to decode but WebM looks significantly better at the bitrates we use.
Ok, but how does that compare to native H.264 playback?
Good question. I’m about to try it and find out.
Ok here’s what we got:
The native AVPlayer downloads smaller chunks more slowly, but similarly shows higher CPU and energy usage during download. Once playing only, reported CPU usage dives to a percent or two and the reported energy impact is “Zero”.
Now, I’m not sure I believe “Zero”… 😉
I suppose I’ll have to rig up some kind of ‘run until the battery dies’ test to compare how reasonable this looks for non-trivial playback times… but the ‘Low’ reportage for WebM at reasonable resolutions makes me happier than ‘Very High’ would have!
Mirror of mailing list post on wikitech-l and related lists
I’ve been passing the last few days feverishly working on audio/video stuff, cause it’s been driving me nuts that it’s not quite in working shape.
TL;DR: Major fixes in the works for Android, Safari (iOS and Mac), and IE/Edge (Windows). Need testers and patch reviewers.
ogv.js for Safari/IE/Edge
I’ll want to update it to work with Video.js later, but I’d love to get this version reviewed and deployed in the meantime.
Please head over to https://ogvjs-testing.wmflabs.org/ in Safari 6.1+ or IE 10+ (or ‘Project Spartan’ on Windows 10 preview) and try it out! Particularly interested in cases where it doesn’t work or messes up.
I’ve found that Safari on iOS supports QuickTime movies with Motion-JPEG video and mu-law PCM audio. JPEG and PCM are, as it happens, old and not so much patented. \o/
However these get really bad compression ratios, so to keep bandwidth down similar to the 360p Ogg and WebM versions I had to reduce quality and resolution significantly. Hold an iPhone at arm’s length and it’s maybe ok, but zoom full-screen on your iPad and you’ll hate the giant blurry pixels!
This should also provide a working basic audio/video experience in our Wikipedia iOS app, until such time as we integrate Ogg or WebM decoding natively into the app.
Note that it seems tricky to bulk-run new transcodes on old files with TimedMediaHandler. I assume there’s a convenient way to do it that I just haven’t found in the extension maint scripts…
In progress: mobile video fixes
Audio has worked on Android for a while — the .ogg files show up in native <audio> elements and Just Work.
But video has often been broken, with TimedMediaHandler’s “popup transforms” reducing most video embeds to a thumbnail and a link to the original file — which might play if it’s WebM (not if it’s Ogg Theora), but it might also be a 1080p original which you don’t want to pull down on 3G! And neither audio nor video has worked on iOS.
This patch adds a simple mobile target for TMH, which fixes the popup transforms to look better and actually work by loading up an embedded-size player with the appropriately playable transcodes (WebM, Ogg, and the MJPEG last-ditch fallback).
ogv.js is used if available and necessary, for instance in iOS Safari when the CPU is fast enough. (Known to work only on 64-bit models.)
Future: codec.js and WebM and OGVKit
For the future, I’m also working on extending ogv.js to support WebM for better quality (especially in high-motion scenes) — once that stabilizes I’ll rename the combined package codec.js. Performance of WebM is not yet good enough to deploy, and some features like seeking are still missing, but breaking out the codec modules means I can develop the codecs in parallel and keep the high-level player logic in common.
Browser infrastructure improvements like SIMD, threading, and more GPU access should continue to make WebM decoding faster in the future as well.
I’d also like to finish up my OGVKit package for iOS, so we can embed a basic audio/video player at full quality into the Wikipedia iOS app. This needs some more cleanup work still.
Phew! Ok that’s about it.