Defender of the Realm

I showed some iterations of ScriptinScript’s proposed object value representation, using native JS objects with a custom prototype chain to isolate the guest world’s JS objects. The more I looked, the more corner cases I saw… I thought of the long list of security issues with the old Caja transpiling embedding system, and decided it would be best to change course.

Not only are there a lot of things to get right to avoid leaking host objects, it’s simply a lot of work to create a mostly spec-compliant JavaScript implementation, and then to maintain it. Instead I plan to let the host JavaScript implementation run almost the entire show, using realms.

What’s a Realm?

Astute readers may have clicked on that link and noticed that the ECMAScript committee’s realms proposal is still experimental, with no real implementations yet… But realms are actually a part of JS already, there’s just no standard way to manipulate them! Every function is associated with a realm that it runs in, which holds the global object and the intrinsic objects we take for granted — say, Object. Each realm has its own instance of each of these intrinsics, so if an object from one realm does make its way to another realm, their prototype chains will compare differently.
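You can see this today with a plain same-origin <iframe> (a quick illustration of realm separation, not the sandboxing approach itself):

// Each frame is its own realm, with its own intrinsics.
let frame = document.createElement('iframe');
document.body.appendChild(frame);
let FrameObject = frame.contentWindow.Object;

console.log(FrameObject === Object);              // false
console.log(new FrameObject() instanceof Object); // false -- different chain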

That sounds like what we were manually setting up last time, right? The difference is that when native host operations happen (a built-in function throwing an exception, a primitive value being auto-boxed into an object, etc.), the created Error or String instance will have the realm-specific prototype without us having to guard for it and swap it around.

If we have a separate realm for the guest environment, then there are a lot fewer places we have to guard against getting host objects.

Getting a realm

There are a few possible ways we can manage to get ahold of a separate realm for our guest code:

  • Un-sandboxed <iframe>
  • Sandboxed <iframe>
  • Web Worker thread
  • ‘vm’ module for Node.js

It should be possible to combine some of these techniques, such as using the future-native Realm inside a Worker inside a sandboxed iframe, which can be further locked down with Content-Security-Policy headers!
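As a concrete example of the last option, here’s roughly what grabbing a fresh realm via Node’s ‘vm’ module looks like (a minimal sketch; note that ‘vm’ by itself is not a hard security boundary, which is part of why the layering above matters):

// Node.js: the context object becomes the guest realm's global.
const vm = require('vm');
let guestGlobal = vm.createContext(Object.create(null));

// Code runs against the guest realm's own intrinsics.
let result = vm.runInContext('[1, 2, 3]', guestGlobal);
console.log(Array.isArray(result));   // true -- isArray works across realms
console.log(result instanceof Array); // false -- it's the guest realm's Array!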

Note that using sandboxed or cross-origin <iframe>s or Workers requires asynchronous messaging between host and guest, but it’s much safer than a native Realm or a same-origin <iframe> because it prevents all direct object leakage.

Similar techniques are used in existing projects like Oasis, to seemingly good effect.

Keep it secret! Keep it safe!

To keep the internal API for the guest environment clean and prevent surprise leakages to the host realm, it’s probably wise to clean up the global object namespace and the contents of the accessible intrinsics.

This is less important if cross-origin isolation and Content-Security-Policy are locked down carefully, but probably still a good idea.

For instance you probably want to hide some things from guest code:

  • the global message-passing handlers for postMessage to implement host APIs
  • fetch and XMLHttpRequest for network access
  • indexedDB for local-origin info
  • etc

In an <iframe> you would probably want to hide the entire DOM to create a fresh realm… But if it’s same-origin I don’t quite feel confident that intrinsics/globals can be safely cleaned up enough to avoid escapes. I strongly, strongly recommend using cross-origin or sandboxed <iframe> only! And a Worker that’s loaded from an <iframe> might be best.
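From the embedding page, that setup starts out something like this (a sketch; the sandbox URL is hypothetical, and the framed page would spin up the Worker and relay messages):

// Everything crosses the boundary via postMessage only.
let frame = document.createElement('iframe');
frame.setAttribute('sandbox', 'allow-scripts'); // no same-origin access
frame.src = 'https://sandbox.example/guest.html'; // hypothetical separate origin
document.body.appendChild(frame);

window.addEventListener('message', (event) => {
    if (event.source === frame.contentWindow) {
        // handle host API requests coming back from the guest side
    }
});
frame.addEventListener('load', () => {
    frame.contentWindow.postMessage({ api: 'loadScript', src: '...' }, '*');
});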

In principle the realm can be “de-fanged” by walking through the global object graph and removing any property not on an allow list. Often you can also replace a constructor or method with an alternate implementation… as long as its intrinsic version won’t come creeping back somewhere. Engine code may throw exceptions of certain types, for instance, so they may need pruning in their details as well as pruning from the global tree itself.
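Inside a Worker, that walk might look something like this (a minimal sketch; the allow list is illustrative, not a vetted set):

// Illustrative only -- a real allow list needs careful auditing!
let allowed = new Set([
    'Object', 'Array', 'Function', 'String', 'Number', 'Boolean',
    'Symbol', 'Math', 'JSON', 'Error', 'TypeError', 'RangeError',
]);

// Walk the global object and its prototype chain, pruning extras.
for (let obj = self; obj; obj = Object.getPrototypeOf(obj)) {
    for (let key of Reflect.ownKeys(obj)) {
        if (typeof key === 'symbol' || allowed.has(key)) {
            continue;
        }
        let desc = Object.getOwnPropertyDescriptor(obj, key);
        if (desc && desc.configurable) {
            delete obj[key];
        }
        // Non-configurable leftovers need case-by-case handling.
    }
}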

In order to provide host APIs over postMessage, keep local copies of the global’s postMessage and addEventListener in a closure and set them up before cleaning the globals. Be careful in the messaging API to use only local variable references, no globals, to avoid guest code interfering with the messaging code.
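Something along these lines (a sketch; the dispatch table and handler names are hypothetical):

// Capture the messaging primitives as local bindings *before*
// scrubbing the globals.
const post = self.postMessage.bind(self);
const listen = self.addEventListener.bind(self);

// Hypothetical host API dispatch table.
const handlers = Object.create(null);
handlers.ping = (payload) => post({ reply: 'pong', payload });

listen('message', (event) => {
    // Only local bindings in here -- nothing guest code can replace.
    let handler = handlers[event.data.api];
    if (handler) {
        handler(event.data.payload);
    }
});

// ... now run the globals cleanup, then load the guest code.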

Whither transpiling?

At this point, with the guest environment in a separate realm *and* probably a separate thread *and* with its globals and intrinsics squeaky clean… do we need to do any transpiling still?

It’s actually, I think, safe at that point to just pass in JS code for strict-mode or non-strict-mode functions and execute it once the messaging kernel is set up. You should even be able to create runtime code with eval and the Function constructor without leaking anything to/from the host context!

Do we still even need to parse/transpile? Yes!

But the reason isn’t for safety, it’s more for API clarity, bundling, and module support… Currently there’s no way to load JS module code (with native import/export syntax) in a Worker, and there’s no way to override module URL-to-code resolution in <script type="module"> in an <iframe>.

So to support modern JS modules for guest code, you’d need some kind of bundling… which is probably desirable anyway for fetching common libraries and such, and which may be needed to combine the messaging kernel / globals-cleanup bootstrap code with the guest code.

There’s plenty of prior art on JS module -> bundle conversion, so this can either make use of existing tools or be inspired by them.

Debugging

If code is simply executed in the host engine, this means two things:

One, it’s hard to debug from within the web page because there aren’t tools for stopping the other thread and introspecting it.

Two, it’s easy to debug from within the web browser because the host debugger Just Works.

So this is probably good for Tools For Web Developers To Embed Stuff, but may be more difficult for Beginner’s Programming Tools (like the BASIC and LOGO environments of my youth) where you want to present a slimmed-down custom interface on the debugger.

Conclusions

Given a modern-browser target that supports workers, sandboxed iframes, etc, using those native host tools to implement sandboxing looks like a much, much better return on investment than continuing to implement a full-on interpreter or transpiler for in-process code.

This is in some ways a return to older plans I had, but the picture’s made a LOT clearer by not worrying about old browsers or in-process execution. Setting a minimal level of ES2017 support is something I’d like to do to expose a module-oriented system for libraries and APIs, async, etc., but this isn’t strictly required.

I’m going to re-work ScriptinScript in four directions:

First, the embedding system using <iframe>s and workers for web or ‘vm’ for Node, with a messaging kernel and global rewriter.

Second, a module bundling frontend that produces ready-to-load-in-worker JS, that can be used client-side for interactive editing or server-side for pre-rendering. I would like to get the semantics of native JS modules right, but may approximate them as a simplification measure.

Third, a “Turtle World” demo implementing a much smaller interpreter for a LOGO-like language, connected to a host API implementing turtle graphics in SVG or <canvas>. This will scratch my itch to write an interpreter, but be a lot simpler to create and maintain. ;)

Finally, a MediaWiki extension that allows storing the host API and guest code for Turtle World in a custom wiki namespace and embedding them as media in articles.

I think this is a much more tractable plan, and can be tackled bit by bit.

ScriptinScript value representation

As part of my long-running side quest to make a safe, usable environment for user-contributed scripted widgets for Wikipedia and other web sites, I’ve started working on ScriptinScript, a modern JavaScript interpreter written in modern JavaScript.

It’ll be a while before I have it fully working, as I’m moving from a seat-of-the-pants proof of concept into something actually based on the language spec… After poking a lot at the spec details of how primitives and objects work, I’m pretty sure I have a good idea of how to represent guest JavaScript values using host JavaScript values in a safe, spec-compliant way.

Primitives

JavaScript primitive types — numbers, strings, booleans, symbols, null, and undefined — are suitable to represent themselves; pretty handy! They’re copyable and don’t expose any host environment details.

Note that when you do things like read str.length or call str.charCodeAt(index), per spec the engine actually boxes the primitive value into a String object and then calls the method on that! The primitive string value itself has no properties or methods.
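In host terms, roughly:

let str = 'abc';

// Per spec, this method call...
str.charCodeAt(0);

// ...behaves like this: box the primitive, call the method, discard the box.
(new String(str)).charCodeAt(0);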

Objects

Objects, though. Ah, now that’s tricky. A JavaScript object is roughly a hash map of properties indexed by string or symbol primitives, plus some internal metadata such as a prototype chain relationship with other objects.

The prototype chain is similar to, but oddly unlike, the class-based inheritance typical in many other languages.

Somehow we need to implement the semantics of JavaScript objects as JavaScript objects, though the actual API visible to other script implementations could be quite different.

First draft: spec-based

My initial design modeled the spec behavior pretty literally, with prototype chains and property descriptors to be followed step by step in the interpreter.

Guest property descriptors live as properties of a this.props sub-object created with a null prototype, so things on the host Object prototype or the custom VMObject wrapper class don’t leak in.

If a property doesn’t exist on this.props when looking it up, the interpreter will follow the chain down through this.Prototype. Once a property descriptor is found, it has to be examined for the value or get/set callables, and handled manually.

// VMObject is a regular class
[VMObject] {
    // "Internal slots" and implementation details
    // as properties directly on the object
    machine: [Machine],
    Prototype: [VMObject] || null,

    // props contains only own properties
    // so prototype lookups must follow this.Prototype
    props: [nullproto] {
        // prop values are virtual property descriptors
        // like you would pass to Object.defineProperty()
        aDataProp: {
            value: [VMObject],
            writable: true,
            enumerable: true,
            configurable: true,
        },
        anAccessorProp: {
            get: [VMFunction],
            set: [VMFunction],
            enumerable: true,
            configurable: true,
        },
    },
}
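Property lookup then has to walk the chain by hand, something like this (a sketch; callGuest is a hypothetical entry point back into the interpreter):

// Sketch of a spec-style [[Get]] over the layout above.
function getProp(vmobj, key) {
    for (let o = vmobj; o !== null; o = o.Prototype) {
        let desc = o.props[key]; // own-only: props has a null prototype
        if (desc !== undefined) {
            if ('value' in desc) {
                return desc.value;
            }
            if (desc.get !== undefined) {
                return vmobj.machine.callGuest(desc.get, vmobj); // hypothetical
            }
            return undefined; // accessor with no getter
        }
    }
    return undefined;
}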

Prototype chains

Handling of prototype chains in property lookups can be simplified by using native host prototype chains on the props object that holds the property descriptors.

Instead of Object.create(null) to make props, use Object.create(this.Prototype ? this.Prototype.props : null).

The object layout looks about the same as above, except that props itself has a prototype chain.
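With that change, the manual walk above collapses to a single lookup, since the native chain on props mirrors the guest chain (same hypothetical callGuest bridge):

function getProp(vmobj, key) {
    // The host engine walks the guest prototype chain for us.
    let desc = vmobj.props[key];
    if (desc === undefined) {
        return undefined;
    }
    if ('value' in desc) {
        return desc.value;
    }
    if (desc.get !== undefined) {
        return vmobj.machine.callGuest(desc.get, vmobj); // hypothetical
    }
    return undefined;
}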

Property descriptors

We can go a step further and use native property descriptors, which lets us model property accesses as direct loads and stores, etc.

Object.defineProperty can be used directly on this.props to add native property descriptors including support for accessors by using closure functions to wrap calls into the interpreter.
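For instance, defining a guest accessor might look like this (a sketch, again with the hypothetical callGuest bridge; getterFn and setterFn are guest VMFunctions):

// Inside a VMObject method:
Object.defineProperty(this.props, 'anAccessorProp', {
    get: () => this.machine.callGuest(getterFn, this),
    set: (val) => this.machine.callGuest(setterFn, this, [val]),
    enumerable: true,
    configurable: true,
});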

This should make property gets and sets faster and awesomer!

Proper behavior should be retained as long as operations that can affect property descriptor handling are forwarded to props, such as calling Object.preventExtensions(this.props) when the equivalent guest operation is called on the VMObject.

Native objects

At this point, our inner props object is pretty much the “real” guest object, with all its properties and an inheritance chain.

We could instead have a single object which holds both “internal slots” and the guest properties…

let MachineRef = Symbol('MachineRef');

// VMObject is prototyped on a null-prototype object
// that does not descend from host Object, and which
// is named 'Object' as well from what guest can see.
// Null-proto objects can also be used, as long as
// they have the marker slots.
let VMObject = function Object(val) {
    return VMObject[MachineRef].ToObject(val);
};
VMObject[MachineRef] = machine;
VMObject.prototype = Object.create(null);
VMObject.prototype[MachineRef] = machine;
VMObject.prototype.constructor = VMObject;

[VMObject] || [nullproto] {
    // "Internal slots" and implementation details
    // as properties indexed by special symbols.
    // These will be excluded from enumeration and
    // the guest's view of own properties.
    [MachineRef]: [Machine],

    // prop values are stored directly on the object
    aDataProp: [VMObject],
    // use native prop descriptors, with accessors
    // as closures wrapping the interpreter.
    get anAccessorProp: [Function],
    set anAccessorProp: [Function],
}

The presence of the symbol-indexed [MachineRef] property tells host code in the engine that a given object belongs to the guest and is safe to use — this should be checked at various points in the interpreter like setting properties and making calls, to prevent dangerous scenarios like exposing the native Function constructor to create new host functions, or script injection via DOM innerHTML properties.
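The check itself can stay cheap; here’s a sketch of the isGuestVal test used later in this post:

// Primitives are always safe to share; objects and functions
// must carry our realm marker.
function isGuestVal(val) {
    if (val === null ||
        (typeof val !== 'object' && typeof val !== 'function')
    ) {
        return true;
    }
    return val[MachineRef] === machine;
}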

Functions

There’s an additional difficulty, which is function objects.

Various properties will want to be host-callable functions — things like valueOf and toString. You may also want to expose guest functions directly to host code… but if we use VMObject instances for guest function objects, then there’s no way to make them directly callable by the host.

Function re-prototyping

One possibility is to outright represent guest function objects using host function objects! They’d be closures wrapping the interpreter, and ‘just work’ from host code (though they might need care in how they accept input).

However we’d need a function object that has a custom prototype, and there’s no way to create a function object that way… but you can change the prototype of a function that has already been instantiated.

Everyone says don’t do this, but you can. ;)

let MachineRef = Symbol('MachineRef');

// Create our own prototype chain...
let VMObjectPrototype = Object.create(null);
let VMFunctionPrototype = Object.create(VMObjectPrototype);

function guestFunc(func) {
    // ... and attach it to the given closure function!
    // Note: we use VMFunctionPrototype directly, since the
    // VMFunction binding below doesn't exist yet at this point.
    Reflect.setPrototypeOf(func, VMFunctionPrototype);

    // Also save our internal marker property.
    // ('machine' is the interpreter instance, assumed in scope.)
    func[MachineRef] = machine;
    return func;
}

// Create our constructors, which do not descend from
// the host Function but rather from VMFunction!
let VMObject = guestFunc(function Object(val) {
    let machine = VMObject[MachineRef];
    return machine.ToObject(val);
});

let VMFunction = guestFunc(function Function(src) {
    throw new Error('Function constructor not yet supported');
});

VMFunction.prototype = VMFunctionPrototype;
VMFunctionPrototype.constructor = VMFunction;

VMObject.prototype = VMObjectPrototype;
VMObjectPrototype.constructor = VMObject;

This seems to work but feels a bit … freaky.

Function proxying

An alternative is to use JavaScript’s Proxy feature to make guest function objects into a composite object that works transparently from the outside:

let MachineRef = Symbol('MachineRef');

// Helper function to create guest objects
function createObj(proto) {
    let obj = Object.create(proto);
    obj[MachineRef] = machine;
    return obj;
}

// We still create our own prototype chain...
let VMObjectPrototype = createObj(null);
let VMFunctionPrototype = createObj(VMObjectPrototype);

// Wrap our host implementation functions...
function guestFunc(func) {
    // Create a separate VMFunction instance instead of
    // modifying the original function.
    //
    // This object is not callable, but will hold the
    // custom prototype chain and non-function properties.
    let obj = createObj(VMFunctionPrototype);

    // ... now wrap the func and the obj together!
    return new Proxy(func, {
        // In order to make the proxy object callable,
        // the proxy target is the native function.
        //
        // The proxy automatically forwards function calls
        // to the target, so there's no need to include an
        // 'apply' or 'construct' handler.
        //
        // However we have to divert everything else to
        // the VMFunction guest object.
        defineProperty: function(target, key, descriptor) {
            if (target.hasOwnProperty(key)) {
                return Reflect.defineProperty(target, key, descriptor);
            }
            return Reflect.defineProperty(obj, key, descriptor);
        },
        deleteProperty: function(target, key) {
            if (target.hasOwnProperty(key)) {
                return Reflect.deleteProperty(target, key);
            }
            return Reflect.deleteProperty(obj, key);
        },
        get: function(target, key) {
            if (target.hasOwnProperty(key)) {
                return Reflect.get(target, key);
            }
            return Reflect.get(obj, key);
        },
        getOwnPropertyDescriptor: function(target, key) {
            if (target.hasOwnProperty(key)) {
                return Reflect.getOwnPropertyDescriptor(target, key);
            }
            return Reflect.getOwnPropertyDescriptor(obj, key);
        },
        getPrototypeOf: function(target) {
            return Reflect.getPrototypeOf(obj);
        },
        has: function(target, key) {
            if (target.hasOwnProperty(key)) {
                return Reflect.has(target, key);
            }
            return Reflect.has(obj, key);
        },
        isExtensible: function(target) {
            return Reflect.isExtensible(obj);
        },
        ownKeys: function(target) {
            return Reflect.ownKeys(target).concat(
                Reflect.ownKeys(obj)
            );
        },
        preventExtensions: function(target) {
            return Reflect.preventExtensions(target) &&
                Reflect.preventExtensions(obj);
        },
        set: function(target, key, val, receiver) {
            if (target.hasOwnProperty(key)) {
                return Reflect.set(target, key, val, receiver);
            }
            return Reflect.set(obj, key, val, receiver);
        },
        setPrototypeOf: function(target, proto) {
            return Reflect.setPrototypeOf(obj, proto);
        },
    });
}

// Create our constructors, which now do not descend from
// the host Function but rather from VMFunction!
let VMObject = guestFunc(function Object(val) {
    // The actual behavior of Object() is more complex ;)
    return VMObject[MachineRef].ToObject(val);
});

let VMFunction = guestFunc(function Function(args, src) {
    // Could have the engine parse and compile a new guest func...
    throw new Error('Function constructor not yet supported');
});

// Set up the circular reference between
// the constructors and prototypes.
VMFunction.prototype = VMFunctionPrototype;
VMFunctionPrototype.constructor = VMFunction;
VMObject.prototype = VMObjectPrototype;
VMObjectPrototype.constructor = VMObject;

There are more details to work out, like filling out the VMObject and VMFunction prototypes, ensuring that created functions always have a guest prototype property, etc.

Note that implementing the engine in JS’s “strict mode” means we don’t have to worry about bridging the old-fashioned arguments and caller properties, which otherwise couldn’t be replaced by the proxy because they’re non-configurable.

My main worry with this layout is that it’ll be hard to tell host from guest objects in the debugger, since the internal constructor names are the same as the external constructor names… the [MachineRef] marker property should help, though.

And secondarily, it’s easier to accidentally inject a host object into a guest object’s properties or a guest function’s arguments…

Blocking host objects

We could protect guest objects from injection of host objects using another Proxy:

function wrapObj(obj) {
    return new Proxy(obj, {
        defineProperty: function(target, key, descriptor) {
            let machine = target[MachineRef];
            if (!machine.isGuestVal(descriptor.value) ||
                !machine.isGuestVal(descriptor.get) ||
                !machine.isGuestVal(descriptor.set)
            ) {
                throw new TypeError('Cannot define property with host object as value or accessors');
            }
            return Reflect.defineProperty(target, key, descriptor);
        },
        set: function(target, key, val, receiver) {
            // invariant: key is a string or symbol
            let machine = target[MachineRef];
            if (!machine.isGuestVal(val)) {
                throw new TypeError('Cannot set property to host object');
            }
            return Reflect.set(target, key, val, receiver);
        },
        setPrototypeOf: function(target, proto) {
            let machine = target[MachineRef];
            if (!machine.isGuestVal(proto)) {
                throw new TypeError('Cannot set prototype to host object');
            }
            return Reflect.setPrototypeOf(target, proto);
        },
    });
}

This may slow down access to the object, however. Need to benchmark and test some more and decide whether it’s worth it.

For functions, we can also include the `apply` and `construct` traps to check for host objects in arguments:

function guestFunc(func) {
    let obj = createObj(VMFunctionPrototype);
    return new Proxy(func, {
        //
        // ... all the same traps as wrapObj and also:
        //
        apply: function(target, thisValue, args) {
            let machine = target[MachineRef];
            if (!machine.isGuestVal(thisValue)) {
                throw new TypeError('Cannot call with host object as "this" value');
            }
            for (let arg of args) {
                if (!machine.isGuestVal(arg)) {
                    throw new TypeError('Cannot call with host object as argument');
                }
            }
            return Reflect.apply(target, thisValue, args);
        },
        construct: function(target, args, newTarget) {
            let machine = target[MachineRef];
            for (let arg of args) {
                if (!machine.isGuestVal(arg)) {
                    throw new TypeError('Cannot construct with host object as argument');
                }
            }
            if (!machine.isGuestVal(newTarget)) {
                throw new TypeError('Cannot construct with host object as new.target');
            }
            return Reflect.construct(target, args, newTarget);
        },
    });
}

Exotic objects

There are also “exotic objects”, proxies, and other funky things like Arrays that need to handle properties differently from a native object… I’m pretty sure they can all be represented using proxies.

Next steps

I need to flesh out the code a bit more using the new object model, and start on spec-compliant versions of interpreter operations to get through a few simple test functions.

Once that’s done, I’ll start pushing up the working code and keep improving it. :)

Update (benchmarks)

I did some quick benchmarks and found that, at least in Node 11, swapping out the Function prototype doesn’t appear to harm call performance, while using a Proxy adds a fair amount of overhead to short calls.

$ node protobench.js 
empty in 22 ms
native in 119 ms
guest in 120 ms

$ node proxybench.js
empty in 18 ms
native in 120 ms
guest in 1075 ms

This may not be significant when functions have to go through the interpreter anyway, but I’ll consider whether the proxy is needed and weigh the options…

Update 2 (benchmarks)

Note that the above benchmarks don’t reflect another issue: de-optimization of call sites that accept user-provided callbacks. If you sometimes pass them regular functions and other times pass them re-prototyped or proxied objects, they can switch optimization modes and end up slightly slower even when passed regular functions.

If you know you’re going to pass a guest function somewhere that it may be interchangeable with a native host function, you can make a native wrapper closure around the guest call, which should avoid this.
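A sketch of that wrapper (the surrounding names are hypothetical):

// A plain host closure keeps the hot call site monomorphic: it always
// sees an ordinary Function, never a re-prototyped or proxied callee.
function wrapForHost(guestFn) {
    return function (...args) {
        return guestFn(...args);
    };
}

hostEventSource.addListener(wrapForHost(guestCallback)); // hypothetical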

ScriptinScript is coming

Got kinda sidetracked for the last week and ended up with a half-written JavaScript interpreter written in JavaScript, which I’m calling “ScriptinScript”. O_O

There are such things already in existence, but they all seem outdated, incomplete, unsafe, or some combination of those. I’ll keep working on this for my embeddable widgets project but have to get back to other projects for the majority of my work time for now… :)

I’ve gotten it to a stage where I understand more or less how the pieces go together, and have been documenting how the rest of it will be implemented. Most of it for now is a straightforward implementation of the language spec as native modern JS code, but I expect it can be optimized with some fancy tricks later on. I think it’s important to actually implement a real language spec rather than half-assing a custom “JS-like” language, so code behaves as you expect it to … and so we’re not stuck with some totally incompatible custom tool forever if we deploy things using it.

Will post the initial code some time in the next week or two, once I’ve got it running again after some major restructuring from initial proof of concept to proper spec-based behavior.

Old Xeon E5520 versus PS3 emulator RPCS3

I’ve been fixing up my old Dell Precision T5500 workstation, which had been repurposed to run (slightly older) games, so it can handle both work stuff and more current games if possible. Although the per-core performance of a 2009-era Xeon E5520 processor is not great, with 8 cores / 16 threads total (dual CPUs!) it still packs a punch on multithreaded workloads compared to a laptop.

When I stumbled on the RPCS3 Playstation 3 emulator, I just had to try it too and see what happened… especially since I’ve been jonesing for a Katamari fix since getting rid of my old PS3 a couple years ago!

The result is surprisingly decent graphics, but badly garbled audio.

My current theory, based on reading a bunch in their support forums, is that the per-thread performance is too low for the thread doing audio processing, so the audio garbles/stutters at short intervals. Between the low clock speed (2.26 GHz / 2.5 GHz boost), the older processor tech, AND the emulation overhead, one Xeon core is probably not going to be as fast as one PS3 Cell SPE unit; so if that thread was only just fast enough on the PS3, it’ll be too slow on my PC…

Windows’ Task Manager shows a spread of work over 8-9 logical processors, but not fully utilized. Threads that are woken/slept frequently, like audio or video processing, tend to get broken up on this kind of graph (switching processors on each wake), so you can’t easily tell from the graph whether one *OS* thread is maxed out.

This all leads me to believe the emulator’s inherently CPU-bound here, and really would do better with a more modern 4-core or 6-core CPU in the 3ish-to-4ish GHz range. I’ll try some of the other games I have discs still lying around for just for fun, but I expect similar results.

This is probably something to shelve until I’ve got a more modern PC, which probably means a big investment (either new CPU+RAM+motherboard or just a whole new PC) so no rush. :)

If you get grounded in virtual reality, you get grounded in real life

I keep researching the new VR stuff and thinking “this could be fun, but it’s still too expensive and impractical”. Finally took a quick jaunt to the Portland Microsoft store this morning hoping for a demo of the HTC Vive VR headset to see how it feels in practice.

Turns out they don’t have the Vive set up for demos right now because it requires a lot of setup and space for the room-scale tracking, so instead I did a quick demo with the Oculus Rift, which doesn’t require as much space set aside.

First impressions:
* They make you sign a waiver in case of motion sickness…
* Might have been able to fit over my glasses, but I opted to just go without (my uncorrected blurry vision is still slightly better than the resolution of the Rift anyway)
* Turned it on – hey that’s pretty cool!
* Screen door effect from the pixel grid is very visible at first but kinda goes away.
* Limited resolution is also noticeable but not that big a deal for the demo content. I imagine this is a bigger problem for user interfaces with text though.
* Head tracking is totally smooth and feels natural – just look around!
* Demos where I sit still and either watch something or interact with the controllers were great.
* Complete inability to see the real world gave a feeling of helplessness when I had to, say, put on the controllers…
* Once controllers in hand, visualization of hand/controller helped a lot.
* Shooting gallery demo was very natural with the rift controllers.
* Mine car roller coaster demo instantly made me nauseated; I couldn’t have taken more than a couple minutes of that.

For FPS-style games and similar immersion, motion without causing motion sickness is going to be the biggest problem — compared to a fixed screen, the brain in VR is much more sensitive to mismatches between visual cues and your inner ear’s accelerometer…

I think I’m going to wait on the PC VR end for now; it’s a young space, the early sets are expensive, and I need to invest in a new more powerful PC anyway. Microsoft is working on some awesome “mixed reality” integration for Windows 10, which could be interesting to watch but the hardware and software are still in flux. Apple is just starting to get into it, mostly concentrating (so far as we know) on AR views on iPhones and iPads, but that could become something else some day.

Google’s Daydream VR platform for Android is interesting, but I need a newer phone for that — and again I probably should wait for the Pixel 2 later this year rather than buy last year’s model just to play with VR.

So for the meantime, I ordered a $15 Google Cardboard viewer that’ll run some VR apps on my current phone, as long as I physically hold it up to my face. That should tide me over with some games and demos, and gives me the chance to experiment with making my own 3D scenes via either Unity (building to a native Android app) or something like BabylonJS (running in Chrome with WebGL/WebVR support).

Dell P2415Q 24″ UHD monitor review

Last year I got two Dell P2415Q 24″ Ultra-HD monitors, replacing my old and broken 1080p monitor, to use with my MacBook Pro. Since the model’s still available, thought I’d finally post my experience.

tl;dr:

Picture quality: great.
Price: good for what you get, and they’re cheaper now than they were last year.
Functionality: mixed; some problems need workarounds for me.

So first the good: the P2415Q is the “right size, right resolution” for me; with an operating system that handles 200% display scaling correctly (such as Mac OS X, Windows 10, or some Linux environments), it feels like a 24″ 1080p monitor that shows much, much sharper text and images. When using the external monitors with my 13″ MacBook Pro, the display density is about the same as the internal display, and the color reproduction seems consistent enough to my untrained eye that it’s not distracting to move windows between the laptop and external screens.

Two side by side plus the laptop makes for a vveerryy wwiiddee desktop, which can be very nice when developing & testing stuff since I’ve got chat, documentation, terminal, code, browser window, and debugger all visible at once. :)

The monitor accepts DisplayPort input via either full-size or mini connectors, and also accepts HDMI (limited to 30 Hz at the full resolution, or full 60 Hz at 1080p), which makes it possible to hook up random devices like phones and game consoles.

There’s also a built-in USB hub, which works well enough, but the ports are awkward to reach.

The bad: there are three major pain points for me, in decreasing order of WTF:

  1. Sometimes the display goes black when using DisplayPort; the only way to resolve it seems to be to disconnect the power and hard-reset the monitor. Unplugging and replugging the DisplayPort cable has no effect. Switching cables has no effect. Rebooting computer has no effect. Switching the monitor’s power on and off has no effect. Have to reach back and yank out the power.
  2. There are neither speakers nor audio passthrough connectors, but devices like game consoles and phones connecting over HDMI will attempt to route audio to the monitor, sending all your audio down a black hole. The workaround is to manually re-route audio back to the default output, or to attach a USB audio output path to the connected device.
  3. Even though the monitor can tell if there’s something connected to each input or not, it won’t automatically switch to the only active input. After unplugging my MacBook from the DisplayPort and plugging a tablet in over HDMI, I still have to bring up the on-screen menu and switch inputs.

The first problem is so severe it can make the unit appear dead, but is easily worked around. The second and third may or may not bother you depending on your needs.

So, happy enough to use ’em, but there’s real early-adopter pain in this particular model of monitor.

US political parties are aconstitutional. Let’s fix that.

If you skim through the US Constitution you’ll find there are zero mentions of political parties. Party politics are, at best, an “aconstitutional” concept whose powers and influence in our country are worrisome, and I think as a nation we should question our assumptions about how legislatures work, vote, and campaign.

A lot of my fellow geeks lean libertarian and prefer the idea that parties should have less of a role, so that individual government representatives can both serve their personal missions and represent their constituents as directly as possible.

I think we should rather embrace that people feel the need to organize into blocs to advance their common interests, and think about ways to make political parties serve the people better.

One of the most basic is to consider changing the House of Representatives and/or the Senate to have proportional representation. That is, instead of pretending that each congressional district holds its own isolated election to represent the largest local bloc of voters, we recognize that people are distributed across and within physical districts. So let’s not let a 51%/49% split in every district lead to a 100% victory for one party — let it lead to a 51%/49% representation in the ongoing work of the legislature.

Proportional representation is also more amenable to multiple parties — the current system strongly favors giant parties because if you’re not #1 you have no voice, and only #2 ever has a chance of a voter turnout surge bringing it back to #1. If there’s always a few % dedicated to smaller parties, those parties and the people they represent have an opportunity to actually be heard in session, and the possibility of shifting party coalitions can help to correct imbalances of power between election cycles.

Our current system basically flip-flops between favoring one of two parties as each election has a couple percentage points difference from the last. I don’t think it’s healthy.

Chromebook Pixel first look


So, I gave in and picked up a Chromebook Pixel. I admit, I’m seduced by the high-resolution 2560×1700 screen. Nom nom nom so many tiny pixels!

The browser works like you’d expect — all the usual web stuff seems to work, just like Chrome on Linux or Mac or Windows. Like the newer MacBook Pros it has a very high-density display, which looks fantastic. Wikipedia looks great; we properly detect the density and load enhanced-resolution images. (We still have to make the logo high-res, we know that’s a problem. :)

The machine also correctly handles mixed-resolution situations when you plug in an external monitor. (The plug is Mini DisplayPort, conveniently compatible with your existing MacBook VGA, DVI, or HDMI adapters. Yay!) Hook up a regular 1080p monitor and drag a browser window over — it’ll automatically switch to low-density and everything appears the correct size. Move the window back to the main screen, and it pops back into beautiful high resolution. The main limitation is that windows can’t span screens; except during the move operation itself, they display only on one monitor or the other.

But of course you’re all wondering about Chrome OS and its suitability for a medium-high-end laptop. Is it good or bad? Hard to say so far, I’m still exploring it… but be aware the machine isn’t limited to Chrome: it’s easy to unlock to developer mode and either mess with the underlying Linux system or install a stock OS distribution like Ubuntu.

Just to prove it to myself, I went ahead and followed the directions on switching to developer mode, enabling USB and legacy booting, and was able to boot an Ubuntu installer stick into the GUI. (I was stuck for a while unable to boot, but it turned out to be because I had an incomplete .iso download. Whoops!) Unfortunately the trackpad isn’t supported in the stock distro yet; some people have been working on drivers, but I might wait a bit for it to be better integrated. Ubuntu’s Unity desktop also isn’t quite “retina-ready”, and needs some more loving for high-density screens.

In the meantime, I’m trying out Chrome OS as she was meant to be spoken. As a fan of Firefox OS, the idea of a browser-centric OS already appeals to me (though they are very differently implemented under the hood)… but I also know that there are limitations.

In regular (non-developer) mode, you don’t have access to a low-level Linux shell. There is a terminal emulator (press control+alt+T) which can do ssh, so if you do all your development on a server in a shell, that might be good enough for you. :)

You can’t install anything that doesn’t run in the browser… but it’s a pretty good browser, and it’s extended in several ways:

  • all the usual pure HTML5 suspects we know and love
  • Flash plugin
  • special plugin for Netflix — you don’t get that on stock Linux :(
  • PDF viewer
  • NaCl plugin

NaCl is interesting because it allows running sandboxed native code at full speed, within the existing HTML/JavaScript security model. Sorta like Java applets, but precompiled on the server. The downside is that it requires compiling for multiple platforms (x86, x86-64, and ARM), but the upside is you can apparently run some pretty fast stuff, including access to OpenGL ES for graphics. This should be pretty good for games, if developers are willing to port… A low-end example is NaClBox, a port of DOSBox running in the NaCl environment.

(Mozilla meanwhile is pushing emscripten as a platform-neutral alternative to NaCl. This compiles C and C++ programs to JavaScript through a clang/LLVM layer. The overhead of JavaScript compilation and type-safety slows it down compared to NaCl, but it achieves reasonably good performance on modern JS engines and works in more browsers. Combined with WebGL, this is also a way to port C/C++ games to the web. There are some nice examples like the BananaBread FPS demo, which *almost* works on the Chromebook… graphics are lovely but the mouse movement seems to be misdetected.)

As for getting “real work” done… thanks to Apple’s limitations I can’t do iOS development on anything but an actual Mac OS X machine, so I won’t be using it for my current main project. But it can serve well for secondary tasks: poking the wikis, email, calendaring, chat, Google Docs and Hangouts, notes in Etherpad, etc. If I can rig up an SSH key, I should be able to ssh into my own or work servers in the terminal to do some maintenance there. In theory, I can do web development through an in-browser IDE like Cloud9 — I’ll try it out on MediaWiki and see what I can report.

I’m having trouble finding a good web-based IRC chat. Freenode’s web chat interface is usable but just… not very good. I tried Kiwi IRC, which has a better UI, but I’m still not quite satisfied with it. Maybe I’ll go back to the terminal. ;)

To be continued…

Microsoft Surface / Windows RT initial review

I’m a sucker for gadgets and like having new things to test and develop on, so I preordered myself a Microsoft Surface tablet. It arrived yesterday, and most importantly I’ve confirmed that it runs my Windows 8/RT Wikipedia app correctly. :)

First things first

Screen resolution is noticeably lower than the Retina iPad. I’ve run Windows 8 at full res on my Retina MacBook Pro so you can’t fool me, I know how much better it would look. But it’s adequate enough, and I know more devices are coming with 1080p panels, hopefully to be followed by 2560×1440 panels… we’ll forgive this for a first-gen product perhaps.

As a tablet

I’ve already grown accustomed to the swipe and touch gestures on Windows 8; little in Windows RT is a surprise here. Switching and launching apps, tuning settings, the onscreen keyboard, all that’s pretty good.

The current application availability is limited; some players are there like Netflix and Evernote, while apps like Pandora and Gmail remain missing. Expect to use the browser and web apps as stopgaps.

There seem to be enough games to keep me occupied, both new ones for Windows 8 and ports like Cut the Rope and Angry Birds Space.

As a desktop

One of the advantages of Windows 8 over iOS and Android is the classic Windows desktop, available awkwardly on a touchscreen or in its full glory with a keyboard, mouse, and monitor attached.

The Surface has a real USB port and Bluetooth support for keyboards and mice, and a micro-HDMI port that can be hooked up to HDMI, DVI, or VGA monitors with an adaptor (not included, but I already own some). So you might be tempted to use this Windows RT machine the same way as a “real PC” when docked.

Unfortunately this is where Windows RT’s limitations strike. There’s no compatibility with x86 Windows apps, and ARM-based software for the desktop is limited to what Microsoft chose to ship for you:

  • Internet Explorer
  • Explorer
  • Notepad
  • MS Paint
  • Office
  • Task Manager

And that’s about it. You can’t install Chrome or Firefox. You can’t install LibreOffice. You can’t install git. You can’t even install an IRC client that’s not a full-screen Metro app.

Here’s where the Surface RT falls down for me as something I could use for work:

  • No ability to install native programming environments. Maybe web IDEs work for some purposes…
  • IE doesn’t allow plugins except Flash, so can’t be used for Google+ Hangout video chats. We use these extensively in Wikimedia’s mobile team, which is distributed.
  • Gerrit, the code review tool we use at Wikimedia, barfs on IE 10 due to sloppy version checks. I can’t read diffs or make reviews and can’t just switch to another browser.
  • Pandora runs in IE, but to run music in the background you have to open it on the desktop explicitly. Metro IE stops playback when you switch away.

In general, Metro-style apps are nice on the small tablet screen but get more awkward to work with on an external monitor. Evernote is just icky at 1920 pixels wide!

For people whose school or work requirements fit with what Word, Excel, and PowerPoint provide, Windows RT may be an adequate ‘dockable tablet’ to work on. For me it’ll mostly be for web surfing, games, media and testing.

Of course in theory people could create IDE apps for the Metro environment that ship a mini web server, editor, PHP, and goodies, and make it run on ARM… but so far that doesn’t appear to exist.


If you expect to do software development on a Surface or other Windows tablet, *do not* get a Windows RT device: you will be disappointed. Wait for the Windows 8 pro version or otherwise get something with an Intel inside.


The touch keyboard cover

This was one of the unique selling points of the Surface when it was announced: you can get it with a magnetically attached folding cover which doubles as a keyboard and trackpad. I’m mixed on this; it works, but the keyboard is not great: my accuracy is about as bad as with the onscreen keyboard, but with worse spelling correction applied. Possibly getting used to it would improve my typing.

On the other hand, it takes up no room on screen, which has a certain advantage!

The touch cover only really works when you have the tablet down on a flat surface using the kickstand; sitting on the couch, you’ll really only be able to use the cover as a cover.

I’m curious to see how this ensemble fares on an airplane: will there be room for the cover and kickstand on my little tray table? We shall see.


Until next time!

Top Ten Good Things About Taking Off Your Glasses

  1. helps avoid visual distractions: you can’t worry about what you can’t see
  2. make your games run faster by cranking the resolution down
  3. gives excuse to play with browser’s zoom feature
  4. final step of all ’80s movie makeovers
  5. get to bellyache about them whippersnappers with their tiny displays that are so hard to read
  6. dovetails with nostalgia for ’80s PC and video game graphics
  7. blurry icons in your mobile app no longer concern you
  8. looks sweet if you do it all dramatic
  9. avoid that annoying line where the edge of your glasses cuts off the bottom inch of your monitor in one eye when you’re parked on the couch with your laptop
  10. leave secret identity behind to become Superman