There are two key differences between this in-progress code and how the stock parser does it:
- The stock parser expands a preprocessor node tree by flattening it into another string (e.g. for later parsing steps on the compound markup) — the experimental parser produces another node tree instead.
Later-stage parsing will handle re-assembling start and end tokens when producing HTML or other output, but within the tree structure rather than a flat stream of string tokens. That lets the output stage mark where each piece of output came from in the input, so rendered pieces can be hooked up to editing activation.
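To make the tree-versus-string distinction concrete, here's a minimal sketch in Python — the node types, field names, and `expand` function are all hypothetical stand-ins, not the actual experimental parser's structures. The key idea it shows: an expanded template stays a subtree that keeps its original source offsets, so later output stages can map rendered pieces back to the input.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Union

# Hypothetical node types for illustration; the real data structures differ.
@dataclass
class TextNode:
    text: str
    start: int  # offset into the original source text
    end: int

@dataclass
class TemplateNode:
    name: str
    start: int
    end: int
    expansion: List["Node"] = field(default_factory=list)

Node = Union[TextNode, TemplateNode]

def expand(node: Node, templates: Dict[str, str]) -> Node:
    """Expand a template into another *tree* rather than flattening it
    into a string the way the stock parser does."""
    if isinstance(node, TemplateNode):
        body = templates.get(node.name, "")
        # The expansion becomes child nodes, while the template node's
        # own start/end still point at the original markup -- that's
        # what lets output be hooked back up to editing activation.
        return TemplateNode(node.name, node.start, node.end,
                            [TextNode(body, node.start, node.end)])
    return node

tree = [TextNode("Hello ", 0, 6), TemplateNode("greeting", 6, 20)]
expanded = [expand(n, {"greeting": "world"}) for n in tree]
print(expanded[1].expansion[0].text)      # world
print((expanded[1].start, expanded[1].end))  # (6, 20)
```

A flattening expander would instead return `"Hello world"` and throw away the offsets, which is exactly the information a visual editor needs to keep.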
Currently this is fairly naive and suspends the entire iterator, meaning that all work gets serialized and every network round-trip adds wait time. A smarter next step could be to keep iterating over other nodes, then come back to the one that was waiting once it's ready — that could be a win when working with many templates under high network latency. (Another win could be to pre-fetch things you know you're going to need in bulk!)
I'm actually a bit intrigued by the idea of more aggressive asynchronous work in the PHP-side code now; while latency is usually low within the data center, it still adds up. Running multiple data centers for failover and load balancing may make slow data fetches more likely even on Wikimedia's own sites, and features like InstantCommons can already require waiting for potentially slow remote fetches from third-party sites sharing resources over the internet.
This is still very much in-progress code (and an in-progress data structure), but do feel free to take a peek and give feedback.