Brain dump: JavaScript sandboxing

Another thing I’ve been researching is safe, sandboxed embedding of user-created JavaScript widgets… my last attempt in this direction was the EmbedScript extension (examples currently down, but code is still around).

User-level problems to solve:

  • “Content”
    • Diagrams, graphs, and maps would be more fun and educational if you could manipulate them more
    • What if you could graph those equations on all those math & physics articles?
  • Interactive programming sandboxes
  • Customizations to editor & reading UI features
    • Gadgets, site JS, shared user JS are potentially dangerous right now, requiring either admin review or review-it-yourself
    • Narrower interfaces and APIs could allow for easier sharing of tools that don’t require full script access to the root UI
  • Make scriptable extensions safer
    • Use same techniques to isolate scripts used for existing video, graphs/maps, etc?
    • Frame-based tool embedding + data injection could make export of rich interactive stuff as easy as InstantCommons…

Low-level problems to solve:

  • Isolating user-provided script from main web context
  • Isolating user-provided script from outside world
    • loading off-site resources is a security issue
    • want to ensure that wiki resources are self-contained and won’t break if off-site dependencies change or are unavailable
  • Providing a consistent execution environment
    • browsers shift and change over time…
  • Communicating between safe and sandboxed environments
    • injecting parameters in safely?
    • two-way comms for allowing privileged operations like navigating page?
    • two-way comms for gadget/extension-like behavior?
    • how to arrange things like fullscreen zoom?
  • Potential offline issues
    • offline cacheability in browser?
    • how to use in Wikipedia mobile apps?
  • Third-party site issues
    • making our scripts usable on third-party wikis like InstantCommons
    • making it easy for third-party wikis to use these techniques internally

Meta-level problems to solve:

  • How & how much to review code before letting it loose?
  • What new problems do we create in misuse/abuse vectors?

Isolating user-provided scripts

One way to isolate user-provided scripts is to run them in an interpreter! This is potentially very slow, but allows for all kinds of extra tricks.

JS-Interpreter

I stumbled on JS-Interpreter, sometimes used with the Blockly project to step through code generated from visual blocks. JS-Interpreter implements a rough ES5 interpreter in native JS; it’s quite a bit slower than native (though some speedups are possible; the author and I have made some recent tweaks improving the interpreter loop) but is interesting because it allows single-stepping the interpreter, which opens up the potential for an in-browser debugger. The project is under active development and could use a good regression test suite, if anyone wants to send some PRs. 🙂

The interpreter is also fairly small, weighing in around 24kb minified and gzipped.

The single-stepping interpreter design protects against infinite loops, as you can implement your own time limit around the step loop.
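For example, the wrapper can step the interpreter in small slices and abandon the script when it exceeds a budget. This is just a sketch using JS-Interpreter’s Interpreter constructor and step() method; the budget numbers are arbitrary:

// Sketch: step a JS-Interpreter instance with a time budget so an infinite
// loop can't hang the page. Assumes the JS-Interpreter library is loaded.
function runWithTimeLimit(code, totalBudgetMs, onDone) {
  var interpreter = new Interpreter(code);
  var deadline = Date.now() + totalBudgetMs;

  function runChunk() {
    var sliceEnd = Date.now() + 10; // work ~10ms at a time, then yield to the UI
    while (Date.now() < sliceEnd) {
      if (Date.now() > deadline) {
        onDone(new Error('script exceeded its time limit'));
        return;
      }
      if (!interpreter.step()) {
        onDone(null); // ran to completion
        return;
      }
    }
    setTimeout(runChunk, 0); // let the browser breathe, then keep stepping
  }
  runChunk();
}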

For pure-computation exercises and interactive prompts this might be really awesome, but the limited performance and lack of any built-in graphical display mean it’s probably not great for hooking it up to an SVG to make it interactive. (Any APIs you add are your own responsibility, and security might be a concern for API design that does anything sensitive.)

Caja

An old project that’s still around is Google Caja, a heavyweight solution for embedding foreign HTML+JS using a server-side Java-based transpiler for the JS and JavaScript-side proxy objects that let you manipulate a subset of the DOM safely.

There are a number of security advisories in Caja’s history; some of them are transpiler failures which allow sandboxed code to directly access the raw DOM, others are failures in injected APIs that allow sandboxed code to directly access the raw DOM. Either way, it’s not something I’d want to inject directly into my main environment.

There’s no protection against loops or simple resource usage like exhausting memory.

Iframe isolation and CSP

I’ve looked at using cross-origin <iframe>s to isolate user code for some time, but was never quite happy with the results. Yes, the “same-origin policy” of HTML/JS means your code running in a cross-origin frame can’t touch your main site’s code or data, but that code is still able to load images, scripts, and other resources from other sites. That creates problems ranging from easy spamming to user information disclosure to simply breaking if required offsite resources change or disappear.

Content-Security-Policy to the rescue! Modern browsers can lock down things like network access using CSP directives on the iframe page.
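As a rough illustration, the sandbox domain could serve its frame pages with headers along these lines (a sketch using Node’s built-in http module; the directive values and file names are placeholders, not a finished policy):

// Sketch: serve the sandboxed frame page with a restrictive Content-Security-Policy.
const http = require('http');

http.createServer((req, res) => {
  res.setHeader('Content-Security-Policy', [
    "default-src 'none'",   // block all loads by default
    "script-src 'self'",    // scripts only from the sandbox domain itself
    "style-src 'self'",
    "img-src 'self'",
    "connect-src 'none'"    // no fetch/XHR out to the wider internet
  ].join('; '));
  res.setHeader('Content-Type', 'text/html');
  res.end('<!DOCTYPE html><html><body><script src="/widget.js"></script></body></html>');
}).listen(8080);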

CSP’s restrictions on loading resources still leave an information disclosure vector in navigation — links or document.location can be used to navigate the frame to a URL on a third domain. This can be locked down with CSP’s child-src directive on the parent document — or an intermediate “double” iframe — to only allow the desired target domain (say, “*.wikipedia-embed.org” or even “item12345678.wikipedia-embed.org”). Then attempts to navigate the frame to a different domain from the inside are blocked.

So in principle we can have a rectangular region of the page with its own isolated HTML or SVG user interface, with its own isolated JavaScript running its own private DOM, with only the ability to access data and resources granted to it by being hosted on its private domain.

Further interactivity with the host page can be created by building on the postMessage API, including injecting additional resources or data sets. Note that postMessage is asynchronous, so you’re limited in simulating function calls to the host environment.
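A minimal sketch of that channel, with made-up origins and message shapes (the origin checks are the important part):

// Host page side: push data into the widget frame, listen for privileged requests back.
const frame = document.getElementById('widget-frame');
const WIDGET_ORIGIN = 'https://item12345678.wikipedia-embed.org'; // hypothetical sandbox origin

frame.contentWindow.postMessage({ type: 'init', dataset: [1, 2, 3] }, WIDGET_ORIGIN);

window.addEventListener('message', (event) => {
  if (event.origin !== WIDGET_ORIGIN) return;   // ignore messages from anywhere else
  if (event.data.type === 'request-fullscreen') {
    frame.requestFullscreen();                  // privileged action performed by the host
  }
});

// Widget (iframe) side: receive data, reply asynchronously.
window.addEventListener('message', (event) => {
  // In a real widget, also check event.origin against the expected host wiki.
  if (event.data.type === 'init') {
    event.source.postMessage({ type: 'ready' }, event.origin);
  }
});

Since the messages are fire-and-forget, anything resembling a function call into the host needs a request/response convention like the type field above.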

There is one big remaining security issue, which is that JS in an iframe can still block the UI for the whole page (or consume memory and other resources), either accidentally with an infinite loop or on purpose. The browser will eventually time out from a long loop and give you the chance to kill it, but it’s not pleasant (and might just be followed by another super-long loop!)

This means denial of service attacks against readers and editors are possible. “Autoplay” of unreviewed embedded widgets is still a bad idea for this reason.

Additionally, older browser versions don’t always support CSP — IE is a mess, for instance. So defenses against cross-origin loads either need to somehow prevent loading in older browsers (poorer compatibility) or accept the risk of information exposure (poorer security). However, the most popular browsers do enforce CSP, so widgets are unlikely to end up relying on off-site materials just to function; preventing that reliance is one of our goals.

Worker isolation

There’s one more trick, just for fun, which is to run the isolated code in a Web Worker background thread. This would still allow resource consumption but would prevent infinite loops from blocking the parent page.
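A sketch of that pattern, building the worker from a blob and terminating it on a deadline (the timeout value is arbitrary):

// Sketch: run untrusted code in a Web Worker and kill it if it runs too long.
function runInWorker(untrustedCode, timeoutMs) {
  const blob = new Blob([untrustedCode], { type: 'application/javascript' });
  const worker = new Worker(URL.createObjectURL(blob));

  const timer = setTimeout(() => {
    worker.terminate();                       // stops even a hard infinite loop
    console.warn('sandboxed script timed out');
  }, timeoutMs);

  worker.onmessage = (event) => {
    clearTimeout(timer);
    console.log('result from sandbox:', event.data);
  };
}

terminate() stops a runaway loop cold, but it does nothing to cap memory use in the meantime.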

However you’re back to the interpreter’s problem of having no DOM or user interface, and must build a UI proxy of some kind.

Additionally, there’s a complication with running Workers in sandboxed iframes: if you apply sandbox=allow-scripts, you may not be able to load JS into a Worker at all.

Non-JavaScript languages

Note that if you can run JavaScript, you can run just about anything thanks to emscripten. 😉 A cross-compiled Lua interpreter weighs in around 150-180kb gzipped (depending on library inclusion).

Big chart

Here, have a big chart I made for reference:

Offline considerations

In principle the embedding sites can be offline-cached… this bears further consideration.

App considerations

The iframes could be loaded in a webview in apps, though consider the offline + app issues!

Data model

A widget (or whatever you call it) would have one or more subresources, like a Gadget does today, plus more:

  • HTML or SVG backing document
  • JS/CSS module(s), probably with a dependency-loading system
  • possibly registration for images and other resources?
    • depending on implementation it may be necessary to inject images as blobs or some weird thing
  • for non-content stuff, some kind of registry for menu/tab setup, trigger events, etc

Widgets likely should be instantiable with input parameters like templates and Lua modules are; this would be useful for things like reusing common code with different input data, like showing a physics demo with different constant values.

There should be a human-manageable UI for editing and testing these things. 🙂 See jsfiddle etc for prior art.

How to build the iframe target site

Possibilities:

  • Subdomain per instance
    • actually serve out the target resources on a second domain, ideally with each ‘widget instance’ living in a separate random subdomain for best isolation
    • base HTML or SVG can load even if no JS. Is that good or bad, if interactivity was the goal?
    • If browser has no CSP support, the base HTML/CSS/JS might violate constraints.
    • can right-click and open frame in new window
    • …but now you have another out of context view of data, with new URLs. Consider legal, copyright, fairuse, blah blah
    • have to maintain and run that second domain and hook it up to your main wiki
    • how to deal with per-instance data input? Pre-publish? postMessage just that in?
      • injecting data over postMessage maybe best for the InstantCommons-style scenario, since sites can use our scripts but inject data
    • probably easier debugging based on URLs
  • Subdomain per service provider, inject resources and instance data
    • Inject all HTML/SVG/JS/CSS at runtime via postMessage (trusting the parent site origin); see the sketch after this list. Images/media could either be injected as blobs or whitelisted by URL.
    • The service provider could potentially be just a static HTML file served with certain strict CSP headers.
    • If injecting all resources, then could use a common provider for third-party wikis.
      • third-party wikis could host their own scripts using this technique using our frame broker. not sure if this is good idea or not!
    • No separate content files to host, nothing to take down in case of legal issues.
    • Downside: right-clicking a frame to open it in new window won’t give useful resources. Possible workarounds with providing a link-back in a location hash.
    • Script can check against a user-agent blacklist before offering to load stuff.
    • Downside: CSP header may need to be ‘loose’ to allow script injection, so could open you back up to XSS on parameters. But you’re not able to access outside the frame so pssssh!
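Here’s a very rough sketch of what such a static frame-broker page might do; everything about it is hypothetical, and as noted above the page’s CSP would have to allow blob: script sources for this to work:

// Sketch: static "broker" page on the sandbox domain, built entirely from
// resources injected by the embedding page via postMessage.
const TRUSTED_PARENT = 'https://en.wikipedia.org'; // hypothetical embedding origin

window.addEventListener('message', (event) => {
  if (event.origin !== TRUSTED_PARENT) return;
  const { html, css, js } = event.data;

  document.body.innerHTML = html;                    // injected markup

  const style = document.createElement('style');     // injected styles
  style.textContent = css;
  document.head.appendChild(style);

  const script = document.createElement('script');   // injected script, loaded via a blob: URL
  script.src = URL.createObjectURL(new Blob([js], { type: 'application/javascript' }));
  document.head.appendChild(script);
});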

Abuse and evil possibilities

Even with the security guarantees of origin restrictions and CSP, there are new and exciting threat models…

Simple denial of service is easy — looping scripts in an iframe can lock up the main UI thread for the tab (or the whole browser, depending on the browser) until it eventually times out with an error. At which point it can potentially go right back into a loop. Or you can allocate tons of memory, slowing down and eventually perhaps crashing the browser. Even tiny programs can have huge performance impact, and it’s hard to predict what will be problematic. Thus a script on a page could make it hard for other editors and admins to get back in to fix the page… For this reason I would recommend against autoplay of arbitrary unreviewed code in Wikipedia articles.

There are also possible trolling patterns: hide a shock image in a data set or inside a seemingly safe image file, then display it in a scriptable widget, bypassing existing image review.

Advanced widgets could do all kinds of fun and educational things like run emulators for old computer and game systems. That brings with it the potential for copyright issues with the software being run, or for newer systems patent issues with the system being emulated.

For that matter you could run programs that are covered under software patents, such as decoding or encoding certain video file formats. I guess you could try that in Lua modules too, but JS would allow you to play or save result files to disk directly from the browser.

WP:BEANS may apply to further thoughts on this road, beware. 😉

Ideas from Jupyter: frontend/backend separation

Going back to Jupyter/IPython as an inspiration source: Jupyter has a separation between a frontend that takes interactive input and displays output, and a backend kernel which runs the actual computation server-side. To make for fancier interactive displays, the output can include a widget which runs some sort of JavaScript component in the frontend notebook page’s environment, and can interact with the user (via HTML controls), with other widgets (via declared linkages), and with the kernel code (via events).

We could use a model like this that distinguishes trusted (or semi-trusted) frontend widget code from untrusted backend code: the frontend code can do anything it can do in its iframe, but must be either pre-reviewed or opted into. Frontend widgets that pass review should have well-understood behavior, good documentation, stable interfaces for injecting data, etc.

The frontend widget can and should still be origin-isolated & CSP-restricted for policy enforcement even if code is reviewed — defense in depth is important!

Such widgets could either be invoked from a template or lua module with a fixed data set, or could be connected to untrusted backend code running in an even more restricted sandbox.

The two main ‘more restricted sandbox’ possibilities are to run an interpreter that handles loops safely and applies resource limits, or to run in a worker thread that doesn’t block the main UI and can be terminated after a timeout… but even that may be able to exhaust system resources via memory allocation.

I think it would be very interesting to extend Jupyter in two specific ways:

  • iframe-sandboxing the widget implementations to make loading foreign-defined widgets safer
  • implementing a client-side kernel that runs JS or Lua code in an interpreter, or JS in a sandboxed Worker, instead of maintaining a server connection to a Python/etc kernel

It might actually be interesting to adopt, or at least learn from, the communication & linkage model for the Jupyter widgets (which is backbone.js-based, I believe) and consider the possibilities for declarative linkage of widgets to create controllable diagrams/visualizations from common parts.

An interpreter-based Jupyter/IPython kernel that works with the notebooks model could be interesting for code examples on Wikipedia, Wikibooks etc. Math potential as well.

Short-term takeaways

  • Interpreters look useful in niche areas, but native JS in iframe+CSP probably main target for interactive things.
  • “Content widgets” imply new abuse vectors & thus review mechanisms. Consider short-term concentration on other areas of use:
    • sandboxing big JS libraries already used in things like Maps/Graphs/TimedMediaHandler that have to handle user-provided input
    • opt-in Gadget/user-script tools that can adapt to a “plugin”-like model
    • making those things invocable cross-wiki, including to third-party sites
  • Start a conversation about content widgets.
    • Consider starting with strict-review-required.
    • Get someone to make the next generation ‘Graphs’ or whatever cool tool as one of these instead of a raw MW extension…?
    • …slowly plan world domination.

ogv.js 1.4.0 released

ogv.js 1.4.0 is now released, available as a .zip build or via npm. Will try to push it to Wikimedia next week or so.

Live demo available as always.

New A/V sync

The main improvement is much smoother performance on slower machines, mainly from changing the A/V sync method to prioritize audio smoothness. This is based on recommendations I’d received from video engineers at conferences: choppy audio is noticed by users much more strongly than choppy or out-of-sync video.

Previously, when ogv.js playback detected that video was getting behind audio, it would halt audio until the video caught up. This played all audio, and showed all frames, but could be very choppy if performance wasn’t good (such as in Internet Explorer 11 on an old PC!)

The new sync method instead keeps audio rock-solid, and allows video to get behind a little… if the video catches back up within a few frames, chances are the user won’t even notice. If it stays behind, we look ahead for the next keyframe… when the audio reaches that point, any remaining late frames are dropped. Suddenly we find ourselves back in sync, usually with not a single discontinuity in the audio track.
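In sketch form the policy looks something like this; the names and thresholds are invented for illustration, not the actual ogv.js internals:

// Sketch of the audio-first sync policy: audio is never interrupted; slightly
// late video frames still draw, and badly late ones are dropped until the next keyframe.
const SMALL_SLIP = 0.1;             // seconds of video lag we tolerate while hoping to catch up
let droppingUntilKeyframe = false;

function maybeDrawFrame(frame, audioClock, draw) {
  const lateness = audioClock - frame.timestamp;
  if (droppingUntilKeyframe) {
    if (frame.isKeyframe && frame.timestamp >= audioClock) {
      droppingUntilKeyframe = false;  // back in sync: resume drawing at the keyframe
      draw(frame);
    }
    // otherwise: silently drop the late frame; the audio track never skips
  } else if (lateness > SMALL_SLIP) {
    droppingUntilKeyframe = true;     // too far behind: skip ahead to the next keyframe
  } else {
    draw(frame);                      // on time, or close enough that nobody notices
  }
}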

fastSeek()

The HTMLMediaElement API supports a fastSeek() method which is supposed to seek to the nearest keyframe before the requested time, thus getting back to playback faster than a precise seek via setting the currentTime property.

Previously this was stubbed out with a slow precise seek; now it is actually fast. This enables a much better “scrubbing” experience given a suitable control widget, as can be seen in the demo by grabbing the progress thumb and moving it around the bar.
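Usage looks the same as with a native media element; since not every browser implements fastSeek(), a scrubber will typically feature-detect it (a small sketch):

// Sketch: prefer fastSeek() while scrubbing, fall back to a precise (slower) seek.
function scrubTo(player, seconds) {
  if (typeof player.fastSeek === 'function') {
    player.fastSeek(seconds);      // lands on the nearest earlier keyframe, quickly
  } else {
    player.currentTime = seconds;  // exact position, but takes longer to resume
  }
}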

VP9 playback

WebM videos using the newer, more advanced VP9 codec can use a lot less bandwidth than VP8 or Theora videos, making it attractive for streaming uses. A VP9 decoder is now included for WebM, initially supporting profile 0 only (other profiles may or may not explode) — that means 8-bit, 4:2:0 subsampling.

Other subsampling formats will be supported in the future; I can probably eventually figure out something to do with 10-bit, but don’t expect those to perform well. 🙂

The VP9 decoder is moderately slower than the VP8 decoder for equivalent files.

Note that WebM is still slightly experimental; the next version of ogv.js will make further improvements and enable it by default.

WebAssembly

Firefox and Chrome have recently shipped support for code modules in the WebAssembly format, which provides a more efficient binary encoding for cross-compiled code than JavaScript. Experimental wasm versions are now included, but not yet used by default.

Multithreaded video decoding

Safari 10.1 has shipped support for the SharedArrayBuffer and Atomics APIs, which allow fully multithreaded code to be produced from the emscripten cross-compiler.
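Whether a threaded build can run at all comes down to a feature check along these lines (the file names below are placeholders; ogv.js does its own build selection internally):

// Sketch: only pick a pthreads-enabled emscripten decoder build when the
// browser exposes SharedArrayBuffer and Atomics.
const canUseThreads =
  typeof SharedArrayBuffer !== 'undefined' &&
  typeof Atomics !== 'undefined';

const decoderScript = canUseThreads
  ? 'decoder-vp9-threaded.js'   // placeholder name for a multithreaded build
  : 'decoder-vp9.js';           // single-threaded fallback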

Experimental multithreaded versions of the VP8 and VP9 decoders are included, which can use up to 4 CPU cores to significantly increase speed on suitably encoded files (using the -slices option in ffmpeg for VP8, or -tile_columns for VP9). This works reasonably well in Safari and Chrome on Mac or Windows desktops; there are performance problems in Firefox due to deoptimization of the multithreaded code.

This actually works in iOS 10.3 as well — however Safari on iOS seems to aggressively limit how much code can be compiled in a single web page, and the multithreading means there’s more code and it’s copied across multiple threads, leading to often much worse behavior as the code can end up running without optimization.

Future versions of WebAssembly should bring multithreading there as well, and likely with better performance characteristics regarding code compilation.

Note that existing WebM transcodes on Wikimedia Commons do not include the suitable options for multithreading, but this will be enabled on future builds.

Misc fixes

Various bits. Check out the readme and stuff. 🙂

What’s next for ogv.js?

Plans for future include:

  • replace the emscripten’d nestegg demuxer with Brian Parra’s jswebm
  • fix the scaling of non-exact display dimensions on Windows w/ WebGL
  • enable WebM by default
  • use wasm by default when available
  • clean up internal interfaces to…
  • …create official plugin API for demuxers & decoders
  • split the demo harness & control bar to separate packages
  • split the decoder modules out to separate packages
  • Media Source Extensions-alike API for DASH support…

Those’ll take some time to get all done and I’ve got plenty else on my plate, so it’ll probably come in several smaller versions over the next months. 🙂

I really want to get a plugin interface so people who want/need them and worry less about the licensing than me can make plugins for other codecs! And to make it easier to test Brian Parra’s jsvpx hand-ported VP8 decoder.

An MSE API will be the final ‘holy grail’ piece of the puzzle toward moving Wikimedia Commons’ video playback to adaptive streaming using WebM VP8 and/or VP9, with full native support in most browsers but still working with ogv.js in Safari, IE, and Edge.

Limitations of AVSampleBufferDisplayLayer on iOS

In my last post I described using AVSampleBufferDisplayLayer to output manually-uncompressed YUV video frames in an iOS app, for playing WebM and Ogg files from Wikimedia Commons. After further experimentation I’ve decided to instead stick with using OpenGL ES directly, and here’s why…

  • 640×360 output regularly displays with a weird horizontal offset corruption on iPad Pro 9.7″. Bug filed as rdar://29810344
  • Can’t get any pixel format with 4:4:4 subsampling to display. Theora and VP9 both support 4:4:4 subsampling, so that made some files unplayable.
  • Core Video pixel buffers for 4:2:2 and 4:4:4 are packed formats, and it prefers 4:2:0 to be a weird biplanar semi-packed format. This requires conversion from the planar output I already have, which may be cheap with Neon instructions but isn’t free.

Instead, I’m treating each plane as a separate one-channel grayscale image, which works for any chroma subsampling ratios. I’m using some Core Video bits (CVPixelBufferPool and CVOpenGLESTextureCache) to do texture setup instead of manually calling glTexImage2D with a raw source blob, which improves a few things:

  • Can do CPU->GPU memory copy off main thread easily, without worrying about locking my GL context.
  • No pixel format conversions, so straight memcpy for each line…
  • Buffer pools are tied to the video buffer’s format object, and get swapped out automatically when the format changes (new file, or file changes resolution).
  • Don’t have to manually account for stride != width in the texture setup!

It could be more efficient still if I could pre-allocate CVPixelBuffers with on-GPU memory and hand them to libvpx and libtheora to decode into… but they currently lack sufficient interfaces to accept frame buffers with GPU-allocated memory.

A few other oddities I noticed:

  • The clean aperture rectangle setting doesn’t seem to be preserved when creating a CVPixelBuffer via CVPixelBufferPool; I have to re-set it when creating new buffers.
  • For grayscale buffers, the clean aperture doesn’t seem to be picked up by CVOpenGLESTextureGetCleanTexCoords. Not sure if this is only supposed to work with Y’CbCr buffer types or what… however I already have all these numbers in my format object and just pull from there. 🙂

I also fell down a rabbit hole researching color space issues after noticing that some of the video formats support multiple colorspace variants that may imply different RGB conversion matrices… and maybe gamma… and what do R, G, and B mean anyway? 🙂 Deserves another post sometime.


Drawing uncompressed YUV frames on iOS with AVSampleBufferDisplayLayer

One of my little projects is OGVKit, a library for playing Ogg and WebM media on iOS, which at some point I want to integrate into the Wikipedia app to fix audio/video playback in articles. (We don’t use MP4/H.264 due to patent licensing concerns, but Apple doesn’t support Ogg or WebM, so we have to jump through some hoops…)

A quirk of working with digital video is that video frames are usually processed, compressed, and stored using the YUV (aka Y’CbCr) colorspace instead of the RGB used in the rest of the digital display pipeline.

This means that you can’t just take the output from a video decoder and blit it to the screen — you need to know how to dig out the pixel data and recombine it into RGB first.

Currently OGVKit draws frames using OpenGL ES, manually attaching the YUV planes as separate textures and doing conversion to RGB in a shader — I actually ported it over from ogv.js‘s WebGL drawing code. But surely a system like iOS with pervasive hardware-accelerated video playback already has some handy way to draw YUV frames?

While researching working with system-standard CMSampleBuffer objects to replace my custom OGVVideoBuffer class, I discovered that iOS 8 and later (and macOS version something) do have such a handy output path: AVSampleBufferDisplayLayer. This guy has three special tricks:

  • CMSampleBuffer objects go in, pretty pictures on screen come out!
  • Can manage a queue of buffers, synchronizing display times to a provided clock!
  • If you pass compressed H.264 buffers, it handles decompression transparently!

I’m decompressing from a format AVFoundation doesn’t grok so the transparent decompression isn’t interesting to me, but since it claimed to accept uncompressed buffers too I figured this might simplify my display output path…

The queue system sounds like it might simplify my timing and state management, but is a bigger change to my code to make so I haven’t tried it yet. You can also tell it to display one frame at a time, which means I can use my existing timing code for now.

There are however two major caveats:

  • AVSampleBufferDisplayLayer isn’t available on tvOS… so I’ll probably end up repackaging the OpenGL output path as an AVSampleBufferDisplayLayer lookalike eventually to try an Apple TV port. 🙂
  • Uncompressed frames must be in a very particular format or you get no visible output and no error messages.

Specifically, it wants a CMSampleBuffer backed by a CVPixelBuffer that’s IOSurface-backed, using a bi-planar YUV 4:2:0 pixel format (kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange or kCVPixelFormatType_420YpCbCr8BiPlanarFullRange). However libtheora and libvpx produce output in traditional tri-planar format, with separate Y, U and V planes. This meant I had to create buffers in the appropriate format with appropriate backing memory, copy the Y plane, and then interleave the U and V planes into a single chroma muddle.

My first super-naive attempt took 10ms per 1080p frame to copy on an iPad Pro, which pretty solidly negated any benefits of using a system utility. Then I realized I had a really crappy loop around every pixel. 😉

Using memcpy — a highly optimized system function — to copy the luma lines cut the time down to 3-4ms per frame. A little loop unrolling on the chroma interleave brought it to 2-3ms, and I was able to get it down to about 1ms per frame using a couple ARM-specific vector intrinsic functions, inspired by assembly code I found googling around for YUV layout conversions.

It turns out you can interleave 8 pixels at a time in three instructions using two vector reads and one write, and I didn’t even have to dive into actual assembly:

// Requires <arm_neon.h> for the vld1_u8/vst2_u8 intrinsics on ARM builds.
static inline void interleave_chroma(unsigned char *chromaCbIn, unsigned char *chromaCrIn, unsigned char *chromaOut) {
#if defined(__arm64) || defined(__arm)
    // Load 8 Cb bytes and 8 Cr bytes, then store them interleaved (Cb Cr Cb Cr …) with one vector write.
    uint8x8x2_t tmp = { val: { vld1_u8(chromaCbIn), vld1_u8(chromaCrIn) } };
    vst2_u8(chromaOut, tmp);
#else
    // Portable fallback: interleave 8 Cb/Cr pairs one byte at a time.
    chromaOut[0] = chromaCbIn[0];
    chromaOut[1] = chromaCrIn[0];
    chromaOut[2] = chromaCbIn[1];
    chromaOut[3] = chromaCrIn[1];
    chromaOut[4] = chromaCbIn[2];
    chromaOut[5] = chromaCrIn[2];
    chromaOut[6] = chromaCbIn[3];
    chromaOut[7] = chromaCrIn[3];
    chromaOut[8] = chromaCbIn[4];
    chromaOut[9] = chromaCrIn[4];
    chromaOut[10] = chromaCbIn[5];
    chromaOut[11] = chromaCrIn[5];
    chromaOut[12] = chromaCbIn[6];
    chromaOut[13] = chromaCrIn[6];
    chromaOut[14] = chromaCbIn[7];
    chromaOut[15] = chromaCrIn[7];
#endif
}

This might be even faster if copying is done on a “slice” basis during decoding, while the bits of the frame being copied are in cache, but I haven’t tried this yet.

With the more efficient copies, the AVSampleBufferDisplayLayer-based output doesn’t seem to use more CPU than the OpenGL version, and using CMSampleBuffers should allow me to take output from the Ogg and WebM decoders and feed it directly into an AVAssetWriter for conversion into MP4… from there it’s a hop, skip and a jump to going the other way, converting on-device MP4 videos into WebM for upload to Wikimedia Commons…

Testing in-browser video transcoding with MediaRecorder

A few months ago I made a quick test transcoding video from MP4 (or whatever else the browser can play) into WebM using the in-browser MediaRecorder API.

I’ve updated it to work in Chrome, using a <canvas> element as an intermediary recording surface, since captureStream() isn’t yet available on <video> elements there.
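The heart of the test is pretty small. Here’s a simplified sketch (audio capture and error handling omitted; the element IDs are made up):

// Sketch: paint a playing <video> onto a <canvas>, capture the canvas as a
// MediaStream, and record it to WebM with MediaRecorder.
const video = document.getElementById('source');    // the MP4 (or whatever) input
const canvas = document.getElementById('surface');
const ctx = canvas.getContext('2d');

function drawLoop() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  if (!video.ended) requestAnimationFrame(drawLoop);
}

const recorder = new MediaRecorder(canvas.captureStream(30), { mimeType: 'video/webm' });
const chunks = [];
recorder.ondataavailable = (e) => chunks.push(e.data);
recorder.onstop = () => {
  const webm = new Blob(chunks, { type: 'video/webm' });
  // webm now holds the re-encoded output, ready to save or upload
};

video.play();
drawLoop();
recorder.start();
video.onended = () => recorder.stop();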

Live demo: https://brionv.com/misc/browser-transcode-test/capture.html

There are a couple advantages of re-encoding a file this way versus trying to do all the encoding in JavaScript, but also some disadvantages…

Pros

  • actual encoding should use much less CPU than JavaScript cross-compile
  • less code to maintain!
  • don’t have to jump through hoops to get at raw video or audio data

Cons

  • MediaRecorder is realtime-oriented:
    • will never decode or encode faster than realtime
    • if encoding is slower than realtime, lots of frames are dropped
    • on my MacBook Pro, realtime encoding tops out around 720p30, but eg phone camera videos will often be 1080p30 these days.
  • browser must actually support WebM encoding or it won’t work (eg, won’t work in Edge unless they add it in future, and no support at all in Safari)
  • Firefox and Chrome both seem to be missing Vorbis audio recording needed for base-level WebM (but do let you mix Opus with VP8, which works…)

So to get frame-rate-accurate transcoding, and to support higher resolutions, it may be necessary to jump through further hoops and try JS encoding.

I know this can be done — there are some projects compiling the entire ffmpeg package in emscripten and wrapping it in a converter tool — but we’d have to avoid shipping an H.264 or AAC decoder for patent reasons.

So we’d have to draw the source <video> to a <canvas>, pull the RGB bits out, convert to YUV, and run through lower-level encoding and muxing… oh did I forget to mention audio? Audio data can be pulled via Web Audio, but only in realtime.

So it may be necessary to do separate audio (realtime) and video (non-realtime) capture/encode passes, then combine into a muxed stream.

Canvas, Web Audio, MediaStream oh my!

I’ve often wished that for ogv.js I could send my raw video and audio output directly to a “real” <video> element for rendering instead of drawing on a <canvas> and playing sound separately to a Web Audio context.

In particular, things I want:

  • Not having to convert YUV to RGB myself
  • Not having to replicate the behavior of a <video> element’s sizing!
  • The warm fuzzy feeling of semantic correctness
  • Making use of browser extensions like control buttons for an active video element
  • Being able to use browser extensions like sending output to ChromeCast or AirPlay
  • Disabling screen dimming/lock during playback

This last is especially important for videos of non-trivial length, especially on phones which often have very aggressive screen dimming timeouts.

Well, in some browsers (Chrome and Firefox) now you can do at least some of this. 🙂

I’ve done a quick experiment using the <canvas> element’s captureStream() method to capture the video output — plus a capture node on the Web Audio graph — combining the two separate streams into a single MediaStream, and then piping that into a <video> for playback. Still have to do YUV to RGB conversion myself, but final output goes into an honest-to-gosh <video> element.
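The plumbing for that is roughly as follows (simplified; the real player code is structured differently):

// Sketch: merge canvas video output and Web Audio output into one MediaStream
// and hand it to a real <video> element.
const canvas = document.getElementById('frame-sink');   // where decoded frames get drawn
const audioContext = new AudioContext();
const audioDest = audioContext.createMediaStreamDestination();
// ...the player's audio output node would connect into audioDest here...

const combined = new MediaStream([
  ...canvas.captureStream().getVideoTracks(),
  ...audioDest.stream.getAudioTracks()
]);

const videoElement = document.getElementById('player');
videoElement.srcObject = combined;   // honest-to-gosh <video> output
videoElement.play();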

To my great pleasure it works! Though in Firefox I have some flickering that may be a bug, I’ll have to track it down.

Some issues:

  • Flickering on Firefox. Might just be my GPU, might be something else.
  • The <video> doesn’t have insight into things like duration, seeking, etc., so you can’t rely on the native controls or API of the <video> alone to behave like a native <video> with a file source.
  • Pretty sure there are inefficiencies. Have not tested performance or checked if there’s double YUV->RGB->YUV->RGB going on.

Of course, Chrome and Firefox are the browsers I don’t need ogv.js for, for Wikipedia’s current usage, since they play WebM and Ogg natively already. But if Safari and Edge adopt the necessary interfaces and WebRTC-related infrastructure for MediaStreams, it might become possible to use Safari’s full screen view, AirPlay mirroring, and picture-in-picture with ogv.js-driven playback of Ogg, WebM, and potentially other custom or legacy or niche formats.

Unfortunately I can’t test whether casting to a ChromeCast works in Chrome as I’m traveling and don’t have one handy just now. Hoping to find out soon! 😀

Exploring VP9 as a progressive still image codec

At Wikipedia we have long articles containing many images, some of which need a lot of detail and others which will be scrolled past or missed entirely. We’re looking into lazy-loading, alternate formats such as WebP, and other ways to balance display density vs network speed.

I noticed that VP9 supports scaling of reference frames from different resolutions, so a frame that changes video resolution doesn’t have to be a key frame.

This means that a VP9-based still image format (unlike VP8-based WebP) could encode multiple resolutions to be loaded and decoded progressively, at each step encoding only the differences from the previous resolution level.

So to load an image at 2x “Retina” display density, we’d load up a series of smaller, lower density frames, decoding and updating the display until reaching the full size (say, 640×360). If the user scrolls away before we reach 2x, we can simply stop loading — and if they scroll back, we can pick up right where we left off.

I tried hacking up vpxenc to accept a stream of concatenated PNG images as input, and it seems plausible…

Demo page with a few sample images (not yet optimized for network load; requires Firefox or Chrome):
https://media-streaming.wmflabs.org/pic9/

Compared to loading a series of intra-coded JPEG or WebP images, the total data payload to reach resolution X is significantly smaller. Compared against only loading the final resolution in WebP or JPEG, without any attempt at tuning I found my total payloads with VP9 to be about halfway between the two formats, and with tuning I can probably beat WebP.

Currently the demo loads the entire .webm file containing frames up to 4x resolution, seeking to the frame with the target density. Eventually I’ll try repacking the frames into separately loadable files which can be fed into Media Source Extensions or decoded via JavaScript… That should prevent buffering of unused high resolution frames.

Some issues:

Changing resolutions always forces a keyframe unless doing single-pass encoding with frame lag set to 1. This is not super obvious, but is neatly enforced in encoder_set_config in vp9_cx_iface.h! Use the --passes=1 --lag-in-frames=1 options to vpxenc.

Keyframes are also forced if width/height go above the “initial” width/height, so I had to start the encode with a stub frame of the largest size (solid color, so still compact). I’m a bit unclear on whether there’s any other way to force the ‘initial’ frame size to be larger, or if I just have to encode one frame at the large size…

There’s also a validity check on resized frames that forces a keyframe if the new frame is twice or more the size of the reference frame. I used smaller than 2x steps to work around this (tested with steps at 1/8, 1/6, 1/4, 1/2, 2/3, 1, 3/2, 2, 3, 4x of the base resolution).

I had to force updates of the golden & altref on every frame to make sure every frame ref’d against the previous, or the decoder would reject the output. --min-gf-interval=1 isn’t enough; I hacked vpxenc to set the flags on the frame encode to VP8_EFLAG_FORCE_GF | VP8_EFLAG_FORCE_ARF.

I’m having trouble loading the VP9 webm files in Chrome on Android; I’m not sure if this is because I’m doing something too “extreme” for the decoder on my Nexus 5x or if something else is wrong…

Thoughts on Ogg adaptive streaming

So I’d like to use adaptive streaming for video playback on Wikipedia and Wikimedia Commons, automatically selecting the appropriate source format and resolution at runtime based on bandwidth and CPU availability.

For Safari, Edge, and IE users, that means figuring out how to rig a Media Source Extensions-like interface into ogv.js to let the streaming handler inject its buffered data into the demuxer and codecs instead of letting the player handle its own buffering.

It also means I have to figure out how to do adaptive stream switching for Ogg streams and Theora video, since WebM VP8 still decodes too slowly in ogv.js to rely on for deployment…

Theory vs Theora

At its base, adaptive streaming relies on the ability to feed the decoders with data from another stream without them freaking out and demanding a pause or reset. We can either read packets from a subset of a monolithic file for each source, or from a bunch of tiny segmented files.

In order to do this, generally you need to switch on video keyframe boundaries: each keyframe represents a point in the data stream where the video decoder can reset its state.

For WebM with VP8 and VP9 codecs, the decoders are pretty good at this. As long as you came in on a keyframe boundary you can just start feeding it packets at a new resolution and it’ll happily output frames at the new resolution.

For Ogg Theora, there are a few major impediments.

Ogg stream serial numbers

At the Ogg stream level: each Ogg logical bitstream gets a random serial number; those serial numbers will not match across separate encodings at different resolutions.

Ogg explicitly allows for “chaining” of complete bitstreams, where one ends and you just tack another on, but we’re not quite doing that here… We want to be able to switch partway through with minimal interruption.

For Vorbis audio, this might require some work if pulling audio+video together from combined .ogv files, but it gets simpler if there’s one .oga audio stream and separate video-only .ogv streams — we’d essentially have separate demuxer contexts for audio and video, and would not need to meddle with the audio.

For the Theora video stream this is probably ok too, since when we reach a switch boundary we also need to feed the decoder with…

Header packets

Every Theora video stream sets up start codes at the beginning of the stream in its three header packets. This means that encodings of the same video at different resolutions will have different header setup.

So, when we switch sources we’ll need to reinitialize the Theora decoder with the header packets from the target stream; then it should be safe to feed new packets into it from our arbitrary start position.

This isn’t a super exotic requirement; I’ve seen some provision for ‘start codes’ for MP4 adaptive streaming too.

Keyframe timing

More worrisome is that keyframe timing is not predictable in a Theora stream. This is actually due to the libtheora encoder internals — it allows you to specify a maximum keyframe interval, but it may decide at any time to insert a keyframe on its own if it thinks it’s more efficient to store a frame that way, at which point the interval starts counting from there instead of the last scheduled keyframe.

Since this heuristic is determined based on actual frame data, the early keyframes will appear at different times and places for renderings at different resolutions… And so will every keyframe following them.

This means you don’t have switch points that are consistent between sources, breaking the whole model!

It looks like a keyframe can be forced by changing the keyframe interval to 1 right before a desired keyframe, then changing it back to the desired value after. This would result in still getting some early keyframes at unpredictable times, but then also getting predictable ones. As long as the switchover points aren’t too frequent, that’s probably fine — just keep decoding over the extra keyframes, but only switch/segment on the predictable ones.

Streams vs split files

Another note: it’s possible to either store data as one long file per source, or to split it up into small chunk files at each keyframe boundary.

Chunk files are nice because they can be streamed easily without use of the HTTP ‘Range’ header and they’re friendly to cache layers. Long files can be easier to manage on the server, but Wikimedia ops folks have told me that the way large files are stored doesn’t always interact ideally with the caching layer and they’d be much happier with split chunk files!

A downside of chunks is that it’s harder to download a complete copy of a file at a given resolution for offline playback. But, if we split audio and video tracks we’re in a world where that’s hard anyway… Can either just say “download the full resolution source then!” Or provide a remuxer to produce combined files for download on the fly from the chunks… 🙂

The keyframe timing seems the ugliest issue to deal with; may need to patch ffmpeg2theora or ffmpeg to work around it, but at least shouldn’t have to mess with libtheora itself…

WebM seeking coming in ogv.js 1.1.2

Seeking in WebM playback will finally be supported in ogv.js 1.1.2. Yay! Try it out!

I seeked in this WebM file! Yeah really!

This took some fancy footwork, as the demuxer library I’m using (nestegg) only provides a synchronous i/o-using interface for seeking: on the first seek, it needed to be able to first do a seek to the location of the cues in the file (they’re usually at the end in WebM, not at the beginning!), then read in the cue data, and then run a second seek to the start of a cluster.

On examining the innards of the library I found that it’s fairly safe to ‘restart’ the operation directly after the seek attempts, which saved me the trouble of patching the library code; though I may come up with a patch to more cleanly & reliably do this.

For the initial hack, I have my i/o callbacks detect that an attempt was made to seek outside the buffer range, and then when the nestegg library function fails out, the demuxer sees the seek attempt and passes it back to the caller (OGVPlayer) which is able to trigger a seek on the actual, asynchronous i/o layer. Once the new data starts arriving, we call back into the demuxer to read the cues or confirm that we’ve seeked to the right place and can continue decoding. As an additional protection against the library freaking out, I make sure that the entire cues element has been buffered before attempting to read it.

I’ve also fixed a bug that caused WebM playback to occasionally die with an out of memory error. This one was caused by not freeing packet data in the demuxer. *headdesk* At least that bug was easier. 😉

This gets WebM VP8 playback working about as well as Ogg Theora playback, except for the decoding being about 5x slower! Well, I’ve got plans for that. 😀

VP8 parallelization continued

Following up on earlier musings… Found an interesting analysis by someone who did some work in this area a few years ago.

Based on the way data flows from macroblocks adjacent above or to the left, it is safe to parallelize multiple macroblocks in diagonal lines running from the top-right towards the bottom-left. This means you start with a batch of one macroblock, then a batch of two, then three… up to the maximum diagonal dimension (30 blocks at 480p or 68 at 1080p), then back down again to one at the far bottom-right corner. Hmm, not exactly diagonal, actually half diagonal?

(Based on my reading the filter stage doesn’t need to access the above-right block which simplifies things, but I could be wrong… Intraframe prediction looks scarier though and may need the different angle cut. Anyway the half diagonal order should work for the entire set of all operations…).
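Grouping the macroblocks into those wavefront batches is simple enough; here’s a sketch, assuming (per the notes above) that each block only depends on its neighbors above and to the left:

// Sketch: group a frame's macroblocks into anti-diagonal "wavefront" batches.
// Every block in a batch depends only on blocks from earlier batches, so the
// blocks within a batch can be processed in parallel.
function wavefrontBatches(mbCols, mbRows) {
  const batches = [];
  for (let d = 0; d < mbCols + mbRows - 1; d++) {
    const batch = [];
    for (let y = 0; y < mbRows; y++) {
      const x = d - y;                       // blocks with x + y == d form one diagonal
      if (x >= 0 && x < mbCols) batch.push({ x, y });
    }
    batches.push(batch);
  }
  return batches;
}

// 1080p is 120x68 macroblocks: batch sizes ramp 1, 2, 3, … up to 68 and back down.
const batches = wavefrontBatches(120, 68);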

This breakdown should be suitable both for CPU worker-based threading (breaking the batches into smaller sub-batches per core) and for WebGL shader based GPU work (where each large batch would issue several draw calls, each processing up to the full batch size of macroblocks).

For CPU work there could also be pipelining between the data stages, though if doing full batching that may not be necessary.

Data locality and latency

On modern computers, accessing memory is often the slowest part of a naively written calculation. If the memory you’re working with isn’t in cache, fetching it can be verrrry slow. And if you need to transfer data between the CPU and GPU things get even nastier.

Unpacking the entropy-coded data structures for an entire frame’s worth of macroblocks and then processing them in a different order sounds like it might be expensive. But it shouldn’t be insanely so; most processing will be local to each block.

For GPU usage, data also flows mostly one way from the CPU into textures and arrays uploaded into the GPU, where random texel access should be fast. We shouldn’t need to read any of that data back to the CPU until the end of the loop filter, when we read the YUV buffers back to feed into the next frame’s predictions and output to the player.

What we do need for the GPU is to read the results of each batch’s computations back into the next batch. As long as I can render into a texture and use that texture as a source for the next call I think that all works as I need it.

Loop filter madness

The loop filter stage reduces artifacting at the edges of macroblocks and sub-blocks from motion prediction and DCT fun. Because it’s very precisely specced, and the output feeds back into the next frame’s predictions, it’s important to get this right.

Per spec, each macroblock may apply up to four filter passes:

  • left edge
  • subblock left edges
  • top edge
  • subblock top edges

Along each edge, for each pixel along the edge the boundaries are tested for a threshold and a filter may or may not be applied, variously to one, two, or three pixels deep from the edge. For the macroblock edges, when the filter applies it also modifies the adjacent block data!

It’s all pretty funky, but it looks parallelizable within limits.

Ah, now to find time to research this further while still getting other stuff done. 😉