The ogv.js distribution includes Ogg Theora video decoding, which we use on Wikipedia to play back our media files in Safari, IE, and Edge. It also has an experimental mode for WebM files with VP8 video.
WebM/VP8 is better supported by many tools, and provides higher quality video at lower bandwidth usage than Ogg Theora… There are two major reasons I haven’t taken WebM out of “experimental” status:
- The demuxer library (nestegg) was hacked in quickly, and I need to add support for seeking, "not crashing", etc.
- Decoding VP8 video is several times slower than decoding Theora at the same resolution.
The first is a simple engineering problem; I can either hack up nestegg or replace it with something that better fits the I/O model I'm working with.
The second is an intrinsic complexity problem: although the two formats are broadly similar technologies, VP8 requires more computation to decode a frame than Theora does.
Add to this the fact that we have some environmental limitations on common CPU optimizations for parallelizable code in the C libraries:
- SIMD ("Single Instruction, Multiple Data") processing is not available in Safari, and not yet production-enabled in Edge.
- Shared-memory multithreading isn't available to the compiled C code either; we can spin up JavaScript Worker threads, but they don't share memory with the decoder.
So, if we can’t use SIMD instructions to parallelize tiny bits of processing, and we can’t simply crank up multithreading to use a now-ubiquitous second CPU core, how can we split up the work?
The first step, I think, is to determine some data boundaries where we might hope to be able to export data from the libvpx library before it’s fully done processing.
The VP8 decoder and bitstream format are defined in RFC 6386; roughly, decoding divides into four stages:
- decode the entropy-encoded input stream
- reconstruct the base signal by applying motion vectors against a reference frame
- reconstruct the residual signal by applying the inverse DCT
- apply the loop filter
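To make the data flow between those stages concrete, here's a toy sketch of the pipeline operating on flat arrays of pixel values. Every function body is an illustrative stand-in, not real VP8 logic, and the names are mine rather than libvpx's.

```javascript
// Toy sketch of the four decode stages; all bodies are stand-ins.
function entropyDecode(bits) {
  // In a real decoder this is interleaved with every later stage.
  return { motionVectors: bits.mv, coefficients: bits.coeffs };
}

function applyMotionPrediction(mv, ref) {
  // Stand-in: shift the reference frame by a single toy motion vector.
  return ref.map((_, i) => ref[(i + mv) % ref.length]);
}

function inverseDCT(coeffs) {
  // Stand-in: pretend the coefficients are already spatial-domain residuals.
  return coeffs;
}

function loopFilter(pixels) {
  // Stand-in: average each pixel with its right neighbor to soften edges.
  return pixels.map((v, i) => (v + (pixels[i + 1] ?? v)) / 2);
}

function decodeFrame(bitstream, referenceFrame) {
  const { motionVectors, coefficients } = entropyDecode(bitstream);
  const base = applyMotionPrediction(motionVectors, referenceFrame);
  const residual = inverseDCT(coefficients);
  const reconstructed = base.map((v, i) => v + residual[i]);
  return loopFilter(reconstructed);
}

const frame = decodeFrame({ mv: 1, coeffs: [0, 0, 0, 0] }, [10, 20, 30, 40]);
console.log(frame); // [25, 35, 25, 10]
```

The point of the sketch is just the ordering: prediction and the inverse DCT both have to finish before the loop filter runs over the reconstructed frame.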
The entropy encoding isn't really a separate stage; it happens as data gets read in and fed into each of the other stages. It's not really parallelizable itself, either.
So before I go researching more on trying to optimize some of these steps, what’s actually the biggest, slowest stuff to work on?
I did a quick profiling run playing back some 480p WebM in Chrome (pretty good compiler, and the profiling tools support profiling just one worker thread, which isolates the VP8 decoder nicely). I broke down the resulting function self-time list by rough category and found:
(“Other” included any function I didn’t tag, as well as general overhead.)
Ouch! The loop filter looks like a good first candidate to split out into a separate step.
Possible directions – staying on CPU
If we stick with the CPU, we can create further Worker threads and send them blocks of data to be filtered. Even when processing from one macroblock to the next is hard to parallelize because of data dependencies, there is natural parallelism in the color system — the Y ("luma", or brightness) plane can be processed by one thread while the U and V ("chroma", or color) planes are processed independently by another.
However, splitting processing between luma and chroma has a limited benefit: there's twice as much luma data as chroma, so you save at most 33% of the runtime here.
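The 33% figure falls out of a little arithmetic, assuming filtering cost is proportional to sample count (in 4:2:0 video, luma has twice the samples of both chroma planes combined):

```javascript
// Two threads: one filters luma, the other filters both chroma planes.
const lumaWork = 2;                                  // Y plane
const chromaWork = 1;                                // U + V planes together
const serialTime = lumaWork + chromaWork;            // one thread: 3 units
const parallelTime = Math.max(lumaWork, chromaWork); // two threads: 2 units
const saving = 1 - parallelTime / serialTime;
console.log(saving.toFixed(3)); // 0.333 — the luma thread is the bottleneck
```

Since the luma thread always has twice the work, the chroma thread finishes early and idles; no second-thread strategy on this split can do better.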
Macroblocks and subblocks
VP8 divides most processing & data into “macroblocks” of 16×16 pixels, composed of 24 “subblocks” of 4×4 pixels (that’s 16 subblocks of luma data, plus 4 each from the two lower-resolution chroma planes).
In many cases we can parallelize at the subblock level: 24 subblocks divide evenly across 2 cores, or even 4 or 8! But the most performance-sensitive devices usually have only 2 CPU cores available, capping us at a 2× speedup, i.e. a 50% cut in runtime.
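Under the (optimistic) assumption that every subblock costs the same to filter, the potential saving per core count is easy to tabulate:

```javascript
// Hypothetical best case: equal-cost subblocks spread across N cores;
// the busiest core determines the parallel runtime.
function parallelSaving(subblocks, cores) {
  const serial = subblocks;                     // one core does everything
  const parallel = Math.ceil(subblocks / cores); // the busiest core's share
  return 1 - parallel / serial;
}

console.log(parallelSaving(24, 2)); // 0.5   — 12 subblocks per core
console.log(parallelSaving(24, 4)); // 0.75  —  6 per core
console.log(parallelSaving(24, 8)); // 0.875 —  3 per core
```

Real numbers will be worse — data dependencies between subblocks and the cost of shipping data between Workers both eat into this — but it shows why 2 cores is the limiting case.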
Going crazy – GPU time?
Graphics processors are much more aggressively multithreaded, with multiple cores and the ability to handle many more simultaneous operations.
If we run more operations on the GPU, we might actually have a chance to run 24 subblocks all in parallel!
However there’s a lot more fuzziness involved in getting there:
- we may have to rewrite code from C into GLSL
- we have to figure out how to express the operations as fragment or vertex shaders
- we have to figure out how to get data in and out efficiently
- oh, and you can't use WebGL from a Worker thread, so we'd have to bounce data up to the main thread and back down to the codec
- once all that's done, what's the actual per-operation performance of the GPU vs the CPU? No idea 😀
So, there would seem to be great scaling potential on the GPU side, but there are a lot of unknowns.
Worth investigating but we’ll see what’s really feasible…