Using Web Worker threading in ogv.js for smoother playback

I’ve been cleaning up MediaWiki’s “TimedMediaHandler” extension in preparation for merging the integration of my ogv.js JavaScript media player, which kicks in on Safari and IE/Edge when no native WebM or Ogg playback is available. It’s coming along well, but one thing continued to worry me about performance: when things worked it was great, but if the video decode was too slow, it could make your browser very sluggish.

In addition to simply selecting too high a resolution for your CPU, this can strike if you accidentally open a debug console during playback (Safari, IE), or more seriously if you’re on an obscure platform that doesn’t have a JavaScript JIT compiler… or are using an app’s embedded browser on your iPhone.

Or simply if the integration code’s automatic benchmark overestimated your browser speed; trying to do too much decoding just made everything sluggish.

Luckily, the HTML5 web platform has a solution — Web Workers.

Workers provide a limited ability to do multithreading in JavaScript, which means the decoder thread can take as long as it wants — the main browser thread can keep responding to user input in the meantime.

The limitation is that scripts running in a Worker have no direct access to your code or data running in the web page’s main thread — you can communicate only by sending messages with ‘raw’ data types. Folks working with lots of browser DOM nodes thus can’t get much benefit, but for buffer-driven compute tasks like media decoding it’s perfect!
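For example, the page can hand a compressed packet to the worker as a transferable ArrayBuffer and get the decoded output posted back the same way. A minimal sketch (hypothetical message and function names, not the real ogv.js protocol; decodePacket and drawFrame stand in for the actual decode and paint steps, and the frame is assumed to be an ArrayBuffer of pixel data):

    // main.js
    var worker = new Worker('decoder-worker.js');  // hypothetical worker script name
    worker.onmessage = function (event) {
      if (event.data.type === 'frame') {
        drawFrame(event.data.frame);  // paint the decoded frame back on the page
      }
    };
    // Transfer the packet's buffer to the worker instead of copying it.
    worker.postMessage({type: 'packet', packet: packetBuffer}, [packetBuffer]);

    // decoder-worker.js
    self.onmessage = function (event) {
      if (event.data.type === 'packet') {
        var frame = decodePacket(event.data.packet);  // the slow part, now off the main thread
        self.postMessage({type: 'frame', frame: frame}, [frame]);
      }
    };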

Threading comms overhead

My first attempt was to take the existing decoder class (an emscripten module containing the Ogg demuxer, Theora video decoder, and Vorbis audio decoder, etc) and run it in the worker thread, with a proxy object sending data and updated object properties back and forth.

This required a little refactoring to make the decoder interfaces asynchronous, taking callbacks instead of returning results immediately.
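In practice that means a call that used to return a frame directly now takes a callback, so the result can arrive later from the worker. A tiny sketch (hypothetical method names, not the actual ogv.js interfaces):

    // Before: synchronous, blocks the caller until the frame is ready.
    // var frame = decoder.decodeFrame(packet);
    // useFrame(frame);

    // After: asynchronous, result delivered via callback whenever it's done.
    decoder.decodeFrame(packet, function (frame) {
      useFrame(frame);
    });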

It worked pretty well, but there was a lot of overhead because the demuxer required frequent back-and-forth calls — after every processing iteration, we had to wait for the demuxer to send its updated status back to us on the main thread.

This only took a fraction of a millisecond each time, but a bunch of those add up when your per-frame budget is 1/30 (or even 1/60) of a second!

 

I had been intending a bigger refactor of the code anyway to use separate emscripten modules for the demuxer and audio/video decoders — this means you don’t have to load code you won’t need, like the Opus audio decoder when you’re only playing Vorbis files.

It also means I could change the coupling, keeping the demuxer on the main thread and moving just the audio/video decoders to workers.

This gives me full speed going back-and-forth on the demuxer, while the decoders can switch to a more “streaming” behavior, sending packets down to be decoded and then displaying the frames or queueing the audio whenever it comes back, without having to wait on it for the next processing iteration.
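Schematically, the flow ends up something like this (a sketch with made-up names; the real code has more bookkeeping):

    // Main thread: demux synchronously, fire packets at the decoder worker,
    // and queue frames for display whenever they come back.
    function processInput(chunk) {
      var packets = demuxer.processData(chunk);  // fast, stays on the main thread
      packets.forEach(function (packet) {
        videoDecoderProxy.decodeFrame(packet, function (frame) {
          frameQueue.push(frame);  // drawn on the next display iteration
        });
      });
    }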

The result is pretty awesome — in particular on older Windows machines, where IE 11 has to use the Flash plugin to do audio, I was previously seeing a lot of “stuttery” behavior when the video decode blocked the Flash audio queueing or vice versa… now it’s much smoother.

The main bug left in the worker mode is that my audio/video sync handling code doesn’t properly handle the case where video decoding is consistently too slow — when we were on the main thread, this caused the audio to halt due to the main thread being blocked; now the audio just keeps on going and the video keeps playing as fast as it can and never catches up. :)

However this should be easy to fix, and having it be wrong but NOT FREEZING YOUR BROWSER is an improvement over having sync but FREEZING YOUR BROWSER. :)
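The likely fix is to let the audio clock drive playback and drop frames that come back too late, rather than drawing every one. A rough sketch (hypothetical names, not the actual ogv.js sync logic):

    // When a decoded frame arrives, compare its timestamp to the audio clock.
    videoDecoderProxy.decodeFrame(packet, function (frame) {
      var lateness = audioClock.currentTime() - frame.timestamp;
      if (lateness > maxFrameLag) {
        droppedFrames++;       // too far behind: skip drawing so playback can catch up
      } else {
        drawFrame(frame);
      }
    });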

WebGL performance tricks on MS IE and Edge

One of my pet projects is ogv.js, a video/audio decoder and player in JavaScript powered by codec libraries ported from C with Mozilla’s emscripten transpiler. I’m getting pretty close to a 1.0 release and deploying it to Wikimedia Commons to provide plugin-free Ogg (and experimentally WebM) playback on Apple’s Safari and Microsoft’s Internet Explorer and Edge browsers (the only major browsers lacking built-in WebM video support).

In cleaning it up for release, I’ve noticed some performance regressions on IE and Edge due to cleaning out old code I thought was no longer needed.

In particular, I found that drawing and YUV-RGB colorspace conversion using WebGL, which works fantastically fast in Safari, Chrome, and Firefox, was about as slow as on-CPU JavaScript color conversion in IE 11 and Edge — luckily I had a hack in store that works around the bottleneck.

It turns out that uploading single-channel textures as LUMINANCE or ALPHA formats is vveerryy ssllooww in IE 11 update 1 and Edge, compared to uploading the exact same data blob as an RGBA texture…

(Screenshot: ogv.js playback running faster on Windows 10)

As it turns out, I had had some older code to stuff things into RGBA textures and then unpack them in a shader for IE 10 and the original release of IE 11, which did not support LUMINANCE or ALPHA texture uploads! I had removed this code to simplify my WebGL code paths since LUMINANCE got added in IE 11 update 1, but hadn’t originally noticed the performance regression.
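The workaround is to upload each plane as an RGBA texture a quarter as wide, with four luma samples packed into each texel, and have the fragment shader pick the right byte back out. Roughly (a sketch; it assumes the plane’s stride is a multiple of four):

    // Slow path on IE 11 update 1 / Edge: one byte per pixel as a LUMINANCE texture.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.LUMINANCE,
                  width, height, 0,
                  gl.LUMINANCE, gl.UNSIGNED_BYTE, yPlaneBytes);

    // Fast path: the same bytes packed four-to-a-texel as an RGBA texture a quarter as wide.
    // The fragment shader then has to select the correct channel for each output pixel.
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA,
                  width / 4, height, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, yPlaneBytes);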

Unfortunately this adds a user-agent sniff to my ogv.js code… I prefer to use the LUMINANCE textures directly on other browsers where they perform well, because the textures can be scaled more cleanly in the case of source files with non-square pixels.
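Something along these lines (a hypothetical check, not the exact test ogv.js ships):

    // Only use the packed-RGBA path on IE/Edge, where LUMINANCE uploads are slow.
    var isIEOrEdge = /Trident\/|Edge\//.test(navigator.userAgent);
    var useLuminanceTextures = !isIEOrEdge;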

Wikimedia video community editing tools & infrastructure status

There were a fair number of folks interested in video chatting at Wikimania! A few quick updates:

An experimental ‘Schnittserver’ (‘clip server’) project has been in the works for a while with some funding from ze Germans; it’s currently sitting at http://wikimedia.meltvideo.com/ (uses OAuth, has a temporary SSL cert, and the UI is very primitive!). It’s already usable for converting MP4 and other source footage to WebM!

The Schnittserver can also do server-side rendering of projects using the ‘melt’ format such as those created with Kdenlive and Shotcut — this allows uploading your original footage (usually in some sort of MP4/H.264 flavor) and sharing the editing project via WebM proxy clips, without generational loss on the final rendering.

Once rendered, your final WebM output can be published up to Commons.

I would love to see some more support for this project, including adding a better web front-end for managing projects/clips and even editing…

Mozilla has an in-browser media editor thing called Popcorn.js; they’re unfortunately reducing investment in the project, but there’s some talk among people working on it and on our end that Wikimedia might be interested in helping adapt it to work with the Schnittserver or some future replacement for it.

Unfortunately I missed the session with the person working on Popcorn.js; I’ll have to catch up on it later!

I’m very close to what I consider a 1.0 release of ogv.js, my JavaScript shim to play Ogg (and experimentally WebM) video and audio in Safari and MS IE/Edge without plugins.

Recently fixed some major sound sync bugs on slower devices, and am finishing up controls which will be used in the mobile view (when not using the full TimedMediaHandler / MwEmbedPlayer interface which we still have on the desktop).

Demo of playback at https://brionv.com/misc/ogv.js/demo2/

A slightly older version of ogv.js is also running on https://ogvjs-testing.wmflabs.org/ with integration into TimedMediaHandler; I’ll update those patches with my 1.0 release next week or so.

Infrastructure issues:

I had a talk with Faidon about video requirements on the low-level infrastructure layer; there are some things we need to work on before we really push video:

– seeking/streaming a file with Range subsets causes requests to bypass the Varnish cache layer, potentially causing huge performance problems if there’s a usage spike!

– very large files can’t be sharded cleanly over multiple servers, which makes for further performance bottlenecks on popular files again

– VERY large files (>4GB or so) can’t be stored at all, which is a problem for high-quality uploads of things like long Wikimania talks!

For derivative transcodes, we can bypass some of these problems by chunking the output into multiple files of limited length and rigging up ‘gapless playback’, as can be done for HLS or MPEG-DASH-style live streaming. I’m pretty sure I can work out how to do this in the ogv.js player (for Safari and IE) as well as in the native <video> element playback for Chrome and Firefox via Media Source Extensions. Assuming it works with the standard DASH profile for WebM, this is something we can easily make work on Android as well using Google’s ExoPlayer.
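With Media Source Extensions, gapless playback of the chunked output amounts to appending each segment into one SourceBuffer in order. A minimal sketch (hypothetical chunk URLs, and assuming the browser accepts this WebM codec string):

    var video = document.querySelector('video');
    var chunkUrls = ['chunk0.webm', 'chunk1.webm', 'chunk2.webm'];  // hypothetical segment list

    var mediaSource = new MediaSource();
    video.src = URL.createObjectURL(mediaSource);

    mediaSource.addEventListener('sourceopen', function () {
      var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8, vorbis"');
      var chunkIndex = 0;

      function appendNextChunk() {
        if (chunkIndex >= chunkUrls.length) {
          mediaSource.endOfStream();  // all segments buffered
          return;
        }
        fetch(chunkUrls[chunkIndex++])
          .then(function (response) { return response.arrayBuffer(); })
          .then(function (buffer) { sourceBuffer.appendBuffer(buffer); });
      }

      // Append the next segment as soon as the previous one has finished buffering.
      sourceBuffer.addEventListener('updateend', appendNextChunk);
      appendNextChunk();
    });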

DASH playback will also make it easier to use adaptive source switching to handle limited bandwidth or CPU resources.

However we still need to be able to deal with source files which may be potentially quite large…

List and phab projects!

As a reminder there’s a wikivideo-l list: https://lists.wikimedia.org/mailman/listinfo/wikivideo-l

and a Wikimedia-Video project tag in phabricator: https://phabricator.wikimedia.org/tag/wikimedia-video/

Folks who are interested in pushing further work on video, please feel free to join up. There’s a lot of potential awesomeness!

One bug down, pass it around

So I fixed a really annoying bug in my video player program where it sometimes just hung waiting on network input.

I thought it was going to be a really hairy multi-threaded concurrency issue, as I was building a blocking i/o interface on top of a non-blocking URL-fetching layer in order to reuse existing libraries.

I scoured my code looking for incorrect ordering, bad locking, or a problem with my semaphores…. Bit by bit the code was instrumented, logged, inspected, cleared, cleaned, and set back in place better than it was before. But the bug remained, untamed — nay untouched — by my onslaught.

At last I found the part of the code the bug was hiding: if the input buffer ran dry and the last network fetch had already completed, the i/o thread would die and the blocking request would time out.

But why was it broken? Because… um… I forgot to call the function that sends a follow-up request for more data.

headdesk

WebM and Ogg energy usage on iOS 9 beta with OGVKit

I’ve been cleaning up some of my old test code for running Ogg media on iOS, adding WebM support and turning it into OGVKit, a (soon-to-be) reusable library that we can use to finally add video and audio playback to our Wikipedia iPhone app.

Of course decoding VP8 or Theora video on the CPU is going to be more expensive in terms of energy usage than decoding H.264 in dedicated silicon… but how much more?

The iOS 9 beta SDK supports enhanced energy monitoring in Xcode 7 beta… let’s try it out! The diagnostic detail screen looks like so:

(Screenshot: the Xcode energy diagnostics detail screen)

Whoa! That’s a little overwhelming. What’s actually going on here?

First, what’s going on here

I’ve got my OGVKit demo app playing this video “Curiosity’s Seven Minutes of Terror” found on Wikimedia Commons, on two devices running iOS 9 beta: an iPod Touch (the lowest-end currently sold iDevice) and an iPad Air (one generation behind the highest-end currently sold iDevice).

The iPod Touch is playing a modest 360p WebM transcode, while the iPad Air is playing a higher-resolution 720p WebM transcode with its beefier 64-bit CPU:

First look: the cost of networking

At first, the energy usage looks pretty high:

(Screenshot: energy usage is high during the initial download)

This, however, is because in addition to media playback we’re buffering umpty-ump megabytes over HTTPS over wifi — as fast as a 150 Mbps cable connection will allow.

(Screenshots: energy usage before and after the download completes)

Once the download completes, the CPU usage from SSL decoding goes down, the wifi reduces its power consumption, and our energy usage flattens out.

Now what’s the spot-meter look like?

(Screenshot: the energy impact spot-meter reading “Low”)

Pretty cool, right!?

See approximate reported energy usage levels for all transcode formats (Ogg Theora and WebM at various resolutions) if you like! Ogg Theora is a little faster to decode but WebM looks significantly better at the bitrates we use.

Ok but how’s that compare to native H.264 playback?

Good question. I’m about to try it and find out.

….

Ok here’s what we got:

(Screenshot: energy diagnostics during native H.264 playback)

The native AVPlayer downloads smaller chunks more slowly, but similarly shows higher CPU and energy usage during the download. Once it’s just playing, reported CPU usage dives to a percent or two and the reported energy impact is “Zero”.

Now, I’m not sure I believe “Zero”… 😉

I suppose I’ll have to rig up some kind of ‘run until the battery dies’ test to compare how reasonable this looks for non-trivial playback times… but the ‘Low’ reportage for WebM at reasonable resolutions makes me happier than ‘Very High’ would have!

MediaWiki audio/video support updates

Mirror of mailing list post on wikitech-l and related lists

I’ve spent the last few days feverishly working on audio/video stuff, because it’s been driving me nuts that it’s not quite in working shape.

TL;DR: Major fixes in the works for Android, Safari (iOS and Mac), and IE/Edge (Windows). Need testers and patch reviewers.

ogv.js for Safari/IE/Edge

In recent versions of Safari, Internet Explorer, and Microsoft’s upcoming Edge browser, there’s still no default Ogg or WebM support but JavaScript has gotten fast enough to run an Ogg Theora/Vorbis decoder with CPU to spare for drawing and outputting sound in real time.

The ogv.js decoder/player has been one of my fun projects for some time, and I think I’m finally happy with my TimedMediaHandler/MwEmbedPlayer integration patch for the desktop MediaWiki interface.

I’ll want to update it to work with Video.js later, but I’d love to get this version reviewed and deployed in the meantime.

Please head over to https://ogvjs-testing.wmflabs.org/ in Safari 6.1+ or IE 10+ (or ‘Project Spartan’ on Windows 10 preview) and try it out! Particularly interested in cases where it doesn’t work or messes up.

Non-JavaScript fallback for iOS

I’ve found that Safari on iOS supports QuickTime movies with Motion-JPEG video and mu-law PCM audio. JPEG and PCM are, as it happens, old and not so much patented. \o/

As such this should work as a fallback for basic audio and video on older iPhones and iPads that can’t run ogv.js well, or in web views in apps that use Apple’s older web embedding APIs where JavaScript is slow (for example, Chrome for iOS).

However, these formats get really bad compression ratios, so to keep bandwidth comparable to the 360p Ogg and WebM versions I had to reduce quality and resolution significantly. Hold an iPhone at arm’s length and it’s maybe ok, but zoom full-screen on your iPad and you’ll hate the giant blurry pixels!

This should also provide a working basic audio/video experience in our Wikipedia iOS app, until such time as we integrate Ogg or WebM decoding natively into the app.

Note that it seems tricky to bulk-run new transcodes on old files with TimedMediaHandler. I assume there’s a convenient way to do it that I just haven’t found in the extension maint scripts…

In progress: mobile video fixes

Audio has worked on Android for a while — the .ogg files show up in native <audio> elements and Just Work.

But video has often been broken, with TimedMediaHandler’s “popup transforms” reducing most video embeds to a thumbnail and a link to the original file — which might play if it’s WebM (not if it’s Ogg Theora), but it might also be a 1080p original which you don’t want to pull down over 3G! And neither audio nor video has worked on iOS.

This patch adds a simple mobile target for TMH, which fixes the popup transforms to look better and actually work by loading up an embedded-size player with the appropriately playable transcodes (WebM, Ogg, and the MJPEG last-ditch fallback).

ogv.js is used if available and necessary, for instance in iOS Safari when the CPU is fast enough. (Known to work only on 64-bit models.)

Future: codec.js and WebM and OGVKit

For the future, I’m also working on extending ogv.js to support WebM for better quality (especially in high-motion scenes) — once that stabilizes I’ll rename the combined package codec.js. Performance of WebM is not yet good enough to deploy, and some features like seeking are still missing, but breaking out the codec modules means I can develop the codecs in parallel and keep the high-level player logic in common.

Browser infrastructure improvements like SIMD, threading, and more GPU access should continue to make WebM decoding faster in the future as well.

I’d also like to finish up my OGVKit package for iOS, so we can embed a basic audio/video player at full quality into the Wikipedia iOS app. This needs some more cleanup work still.

Phew! Ok that’s about it.

im in ur javascript, decoding ur webm

Been a while since I posted; working on various bits and bobs.

Most exciting at the moment: initial work on pure-JavaScript WebM playback support for ogv.js (soon to become codec.js!)

This is currently very incomplete; video doesn’t come out right and audio’s not hooked up yet, and most notably you only get a few frames before it craps out. :)

Update: Got audio working after some sleep, and playback is more thorough. Yay! Still no seeking yet, and playback may not always reach completion.

I’ve seen a couple of other attempts to build WebM decoders in JS using emscripten (e.g. Route9), but nobody’s gotten them hooked up to a full-blown player with audio yet. With a little more work on the decoder side, I should be able to leverage all of my investment in ogv.js’s player logic.

(Screenshot: WebM decoding work in progress; live preview of the current experimental build)

How’s it work?

I’m using nestegg for the WebM container parsing; currently this is giving me some impedance mismatch with the first-generation ogv.js code due to different i/o models.

ogv.js’s C-side Ogg container parsing was set up with entirely asynchronous i/o due to the needs of using async XHR to fetch data over the network. Nestegg, however, uses synchronous i/o callbacks in some places, and I really don’t want to jump through all the hoops to do WebM cue seeking without using the library. (Already did that with Ogg and it was horrid!)

I may be able to adapt the ogv.js C-side code to use synchronous i/o callbacks and let them get mapped to async i/o on the JS side, thanks to some fancy features in emscripten — either Asyncify or the newer emterpreter mode for parts of the high-level code, while keeping the low-level decoder in pure asm.js. Need to do a little more research… :)
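The idea is that a C-side callback which looks synchronous can suspend the compiled code until an async JavaScript fetch completes. A sketch of what that could look like with emscripten’s Asyncify support (assuming Asyncify.handleSleep is available in the build; ogvjs_read_bytes and fetchMoreData are made-up names):

    // JS library function called from C as a blocking read; Asyncify pauses the
    // compiled code until wakeUp() delivers the data.
    function ogvjs_read_bytes(length) {
      return Asyncify.handleSleep(function (wakeUp) {
        fetchMoreData(length, function (bytes) {  // hypothetical async XHR wrapper
          wakeUp(bytes);
        });
      });
    }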

Possibilities

Performance: speed of WebM/VP8 decoding won’t be as good as the Ogg/Theora decoder for now, so it may not be suitable for mobile devices, but it should work fine on many laptop/desktop machines.

VP9: untested so far, but should work… though again, likely to be still slower than VP8.

Threading: currently threading is disabled in the build. There’s work ongoing at Mozilla to support pthreads for emscripten programs, using a new SharedArrayBuffer interface to share memory between Web Worker threads. Might experiment with that later as libvpx has some threading options (may only be useful for VP9 multi-slice decoding for now) but it’ll be a while before most browsers support it. However this might be a good use case for asking Microsoft and Apple to add support. :)

SIMD: there’s also ongoing work on SIMD instruction abstraction in JavaScript, with Mozilla and Microsoft partnering. I’ll have to test with autovectorization enabled in the compiler and see if it makes any difference to decode speed, but I don’t know yet how complete the support in Firefox and Edge is.

ogv.js MediaWiki integration updates

Over the last few weekends I’ve continued to poke at ogv.js, both the core library and the experimental MediaWiki integration. It’s getting pretty close to merge-ready!

Recent improvements to ogv.js player (gerrit changeset):

  • Audio no longer super-choppy in background tabs
  • The ‘ended’ event is no longer unreasonably delayed
  • various code cleanup
  • ogvjs-version.js with build timestamp available for use as a cache-buster helper

Fixes to the MediaWiki TimedMediaHandler desktop player integration (gerrit changeset):

  • Post-playback behavior is now the same as when using native playback
  • Various code cleanup

Fixes to the MediaWiki MobileFrontend mobile player integration (gerrit changeset):

  • Autoplay now working with native playback in Chrome and Firefox
  • Updated to work with current MobileFrontend (internal API changes)
  • Mobile media overlay now directly inherits from the MobileFrontend photo overlay class instead of duplicating it
  • Slow-CPU check is now applied on mobile player — this gets ogv.js video at 160p working on an old iPhone 4S running iOS 7! Fast A7-based iPhones/iPads still get 360p.

While we’re at it, Microsoft is opening up a public ‘suggestion box’ for Internet Explorer — folks might want to put in their votes for native Ogg Vorbis/Theora and WebM playback.

Flame on! Trying out Firefox OS at 2.1…

Ever since I heard about Mozilla’s ‘Boot2Gecko’ project a few years back, I’ve been excited about the eventual possibility of Firefox-powered phones running a truly free operating system, with apps provided through the open web instead of the walled gardens of platform lock-in.

It’s been a long journey though, and often a painful one. Early versions of Firefox OS were pretty rough, it was hard to get phones that weren’t severely underpowered, and actually upgrading to the latest versions on a release phone was….. often not really possible.

So I finally gave in and picked up the Flame, which is the officially recommended Firefox OS reference device. Current builds are actually, like, published for it!

I immediately flashed the device to the current base image (v180, with a low-level ‘Gonk’ layer based on Android 4.4’s low-level Linux layers) and updated to the almost-ready-for-release Firefox OS 2.1.

Version 2.1 finally does away with the old crappy browser app and treats web site browsing on the same level as installed ‘apps’. Graphics are pretty smooth, using hardware compositing, and in general it’s a HUGE improvement over 1.x.

The Flame

Hardware notes

    • The Flame is meant to be representative of the next generation of Firefox OS release phones which are targeting developing markets, so it’s not as fancy as the latest Android or iOS devices.
      • The screen is only 1.5x density, versus 3x on my Nexus 5. But it’s still a big improvement over the older 1x 320×480 devices.
      • Decent 1GB RAM — can be configured lower to simulate lower-end devices, which I have not attempted. Eek!
      • There’s a limited amount of internal storage, and a micro-SD card slot where you’re expected to store additional files such as media. I only had a 4GB card handy from an old phone so I’m using that for now, but will replace it with a 32GB card later.
    • The Flame has 2 SIM slots, both full-size. This meant I needed a micro-SIM-to-full-size-SIM adapter to get my main phone line running on the Flame. The micro-SIM kept popping out of the adapter while I was trying to insert it, but I eventually got it in intact and it’s working fine. (T-Mobile US, HSDPA speeds. No LTE support on the Flame.) Conveniently, the adapter kit also included the adapter needed to move my backup/testing phone line from my iPhone 5s (nano-SIM) to the Nexus 5. Why can’t we all just use the same damn size SIM?

The camera seems kinda awful; video framerate is bad. Not sure if this is a software bug or a hardware limitation but it’s a bit of a bummer.

Back to the web: de-appifying

The most common apps I use on my Nexus 5 are:

    • Gmail
    • Google Maps
    • Facebook
    • Twitter
    • Feedly
    • Kindle
    • Amazon Music
    • Wikipedia

These are all available on the web, but with some caveats and limitations:

  • Gmail shows in a really crappy old-school mobile web interface instead of the nice modern HTML5 one you get on an Android or iOS device. I can’t seem to use it for multiple accounts either, which makes it a non-starter since I have both personal (gmail) and work (gapps) accounts. I’ve been using the Firefox OS built-in Email app instead for now, which seems to work better than in old versions but isn’t really optimized for my ‘archive everything’ workflow.
  • Google Maps shows the web interface, which is kinda ugly but seems to work including geolocation and transit directions. YAYYY
  • Facebook web seems pretty decent at least for reading, but I don’t get notifications of replies and have to check manually.
  • Twitter web seems pretty good, though the pull-to-refresh is a little flaky and again no notifications.
  • Feedly’s web interface is designed for desktop screens and doesn’t scale down properly to a smartphone screen. BOOO
  • Kindle Cloud Reader actually runs — it downloads and views books and everything. But again, it’s designed for desktop and tablet screens and the UI doesn’t scale down. You can only see the top-left corner of the page and can’t actually read anything. BOOOOO
  • Amazon Cloud Player for online-stored music….. amazingly this works, but the interface is desktop-oriented and distinctly NOT mobile friendly. (It also prompts for Adobe Flash, but doesn’t seem to require it for playback.) However since playback stops when you switch away from the app, it’s kind of a bummer to use. BOOOO
  • We have a Firefox OS Wikipedia reader app based on our old PhoneGap app — it works fine, but hasn’t been maintained much and needs an overhaul. Meanwhile our mobile web site also works pretty well on Firefox OS, and now supports editing and all kinds of cute stuff. YAYYYY

Now, some things I can’t get at all:

  • Uber
  • Skype
  • Walgreens

:(

  • There’s really nothing in Uber that needs to be a native app for a customer — they could just as easily have a web app with all the capabilities of looking up a car, calling one, watching the map, etc. I can’t even successfully log in to their web interface for viewing my past rides, but even if I could, there’s no way to call a cab there.
  • I occasionally use Skype, mainly when XBox Live’s chat system breaks. *cough* Microsoft owns them both *cough*. That’s all native apps and has no web option.
  • The Walgreens app on iOS/Android lets you scan the barcode on your medication to schedule a refill, it’s pretty handy! Their web site has no equivalent that I can find… but I can work around it by renewing via email notification instead.

So I’ll be carrying the Flame around as my main phone line for at least a bit, but I’m gonna keep the Nexus 5 around for a few things.

We’ll see how long it takes before I switch the main line back to Android or if I stick with it. 😀

A British driving adventure in five parts (or, Google Maps Can Suck It)

So I’m doing a little post-Wikimania traveling with my wife and my parents. Yesterday we drove from London to Cardiff via Stonehenge. It was… quite the experience for my first time driving in the UK.

Part one: Escaping London

Our adventure begins in the London Docklands, where the 72nd World Science Fiction Convention was held at the Excel Centre (loncon3.org). I was able to hire a car at the Europcar branch in the convention center, made it over to our hotel, and we just managed to squeeze our luggage into the back of this Skoda something or other.

Google Maps wanted to route us through the London city center to get out to the M4 motorway, but everyone I asked assured me this was a terrible idea and I should get to the M25 “orbital” highway that circles the city. A13 runs east from the docklands to the M25 and was pretty easy to get to; after some initial confusion getting used to driving on the left and being on the right side of the car I more or less adjusted, and we stopped for a quick lunch at a rest stop (“services centre”) off M25.

Part two: reaching Stonehenge

From there the route to Stonehenge was very simple: go south and west on the M25 orbital until the M3 branches off, then take A303 out to Amesbury and follow the signs to Stonehenge. This route was great; mostly big modern highways, well labeled, in the middle of the day. My main difficulty was adjusting to properly centering the car in the lane when I’m sitting on the “wrong” side of the car.


Part three: English country back road hell

When I planned out the route I didn’t do enough research on how to get back to the main motorway; it looked clear enough on Google Maps and I just turned on navigation on my phone and followed the directions a while.

The phone losing GPS signal at first was a bad sign, but in retrospect the route was bad to begin with. We ended up taking A360 sorta northwestward toward the M4, which leads straight to Cardiff. As it turns out, while A303 was mostly a pretty comfortable minor highway, A360 is actually a series of tiny country and village back roads.

Often it narrows to one lane, has no shoulder, squirrels around and makes weird turns, etc. This was a somewhat harrowing experience, especially as signage was nearly nonexistent and I had a poor idea of how far I was from the main highway.

Part four: finding M4

Eventually we reached the entrance to M4… and I missed the exit from the roundabout and ended up on the wrong road. Google Maps rerouted us… down another country road, which eventually took us back to M4, much much later than I had hoped to be on the main road.

Once on M4 we were back in a world of wide lanes, divided highways, good signage, etc. Life was good again. We kept going west, crossing the Severn bridge to Wales. Interestingly this is a toll bridge westbound, but the toll collection is a good few miles past the bridge instead of before it like San Francisco’s bridges.

Part five: diversion hell

Then, as we got to about 20 miles from Cardiff, the damn motorway closed down for “works” — possibly related to the upcoming NATO summit and security measures being put in place around town.

I tried to follow the diversion signs but ended up taking the wrong exit from the roundabout and got stuck going north on A449. Unlike our old friend A360 this was a very nice modern highway, but there’s no place to turn around for 10 miles… so it took a while to get back and try again.

Following the diversion signs we ended up back on M4 but eastbound, back towards London. Argh! We stopped at the next services centre for another break and to regroup.

Google Maps just kept routing us to the closed section of M4 so was of limited help. I called the hotel in Cardiff to ask for a recommended alternate route, but they knew nothing about the closure. I called Europcar but they couldn’t give me anything useful either. Finally, a nice lady at the Costa coffee place overheard our dilemma and offered a route through Newport which would take us around the closure and pick up M4 again. Thanks Becky!

Unfortunately I made a wrong turn and picked up M4 too early, right back at the closure and diversion… And ended up going north on A449 again. We stopped at the first exit to recheck the maps and determined that if we headed back south to the barista’s recommended route and kept going through Newport correctly it would work… But we had to go the 10 miles to the turnaround first, which was very frustrating. Back on the alternate route, another A highway, we entered ROUNDABOUT HELL.

I’m still having nightmares of the Google Maps voice calling out “in 800 feet, at the roundabout, take the second exit to go straight ahead”. Every … fricking … intersection. The alternate route eventually turned out, we think, to be the recommended diversion route — there were yellow signs with a black circle and an arrow pointing which way to go, which lined up with our route, and we stuck with that until we returned to the blessed, blessed M4. Finally, we got into Cardiff and Google Maps was relatively sane again, leading us to the hotel. We arrived before midnight, but not by much.

Afterword

Lesson learned: When using your satnav in Britain, research your route first. You can’t tell whether an A road will be comfortable or horrible unless you check it on Wikipedia or something. Gah!

A well-deserved post-drive treat.