Civilization V cross-play is dead

PSA for Civilization V aficionados: the Windows and Mac versions are no longer compatible for online multiplayer.

The game’s online state management appears to pass around raw in-memory game structures, and some type sizes differ between 32-bit and 64-bit builds: the Windows version of the game is 32-bit, but the Mac version was updated to 64-bit last year so it could run on recent versions of macOS, which dropped 32-bit support.

It’s unclear whether updating the Windows version to 64-bit would resolve the incompatibility, since macOS and Windows define some types differently even at 64-bit.


This was an avoidable problem: either device-independent serialization or device-independent core representations would have prevented it. And it was exacerbated by Apple dropping 32-bit compatibility, forcing developers to make rushed decisions about supporting or abandoning legacy products.
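As a toy illustration of the difference (in Python’s `struct` module rather than the game’s actual code, with made-up values): native packing bakes the platform’s own type sizes into the bytes, while an explicit fixed-width format produces identical bytes on every platform.

```python
import struct

# Native packing ("@") uses the platform's own sizes and padding. A C
# "long" is 4 bytes on 32-bit Windows but 8 bytes on 64-bit macOS, so
# two builds exchanging raw structs will disagree about the layout.
native = struct.calcsize('@l')   # platform-dependent: 4 or 8

# An explicit fixed-width format ("<" = little-endian, no padding)
# gives the same byte layout everywhere -- the "device-independent
# serialization" that would have kept cross-play working.
portable = struct.pack('<ii', 42, 7)   # always exactly 8 bytes
a, b = struct.unpack('<ii', portable)
```

The same principle applies regardless of language: serialize to an explicitly sized wire format instead of memcpy-ing whatever the compiler happened to lay out.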

One bug down, pass it around

So I fixed a really annoying bug in my video player program where it sometimes just hung waiting on network input.

I thought it was going to be a really hairy multi-threaded concurrency issue, since I was building a blocking I/O interface on top of a non-blocking URL-fetching layer in order to reuse existing libraries.

I scoured my code looking for incorrect ordering, bad locking, or a problem with my semaphores… Bit by bit the code was instrumented, logged, inspected, cleared, cleaned, and set back in place better than it was before. But the bug remained, untamed — nay, untouched — by my onslaught.

At last I found the part of the code where the bug was hiding: if the input buffer ran dry and the last network fetch had already completed, the I/O thread would die and the blocking request would time out.

But why was it broken? Because… um… I forgot to call the function that sends a follow-up request for more data.
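The shape of the bug (and the one-line fix) looks something like this sketch — the class and the `fetch_next(callback)` fetcher interface are my illustration, not the player’s actual code:

```python
import queue

class BufferedReader:
    """Blocking read() built on top of an async, callback-style fetcher.
    A sketch of the pattern, not real code from the player."""

    def __init__(self, fetcher):
        self.fetcher = fetcher          # assumed: has fetch_next(callback)
        self.chunks = queue.Queue()
        self.fetch_pending = False
        self._request_more()

    def _request_more(self):
        # This is the call I'd forgotten: without it, once the buffer
        # ran dry no new fetch was ever issued, and read() just hung
        # until it timed out.
        self.fetch_pending = True
        self.fetcher.fetch_next(self._on_data)

    def _on_data(self, chunk):
        self.fetch_pending = False
        self.chunks.put(chunk)

    def read(self, timeout=30):
        # If the buffer is empty and nothing is in flight, ask for more
        # data *before* blocking on the queue.
        if self.chunks.empty() and not self.fetch_pending:
            self._request_more()
        return self.chunks.get(timeout=timeout)
```

The moral: after all the exotic race-condition theories, the hang came down to never kicking off the next request.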


Release late, release rarely?

Apple’s iPhone & iPad platforms are pretty and delicious, and still miles ahead of the competition in smooth, attractive user interfaces.

But there are plenty of things to irk you as well. For web applications and Open Source developers like me, the most irksome are the limitations Apple places on software development and distribution.

Developers in the FOSS community have long lived by the motto “release early, release often”. Getting your programs into a lot of people’s hands early gives more opportunity for feedback; that includes direct code contributions, but bug reports and UX feedback are just as important, even from people who aren’t contributing code! Releasing updates often gets bug fixes and new features into people’s hands quickly, which encourages the feedback cycle and makes for happy users.

Web developers have had the additional advantage of being able to roll out many software upgrades near-instantly to all their users: update the files on your servers and *poof* everyone using your app (“visiting your web page”) is using the latest code. If something went wrong and you introduced a bug, you can update to a fixed version or roll back to the last-good version just as quickly.

When you start pushing apps out for iPhone or iPad though… these doors are closed to you. A small fraction of users will have jailbroken the system so they can run arbitrary apps (and bully for them!) but to reach a general audience, you need to go through Apple as a middleman. To run any program on your phone, it either has to be obtained through their online store, or you need to jump through hoops with special limited-access development keys.

Aside from the ethical issues of whether it’s ok for one company to interfere in independent commerce between customers and developers, it also breaks FOSS and web-style development processes badly.

The pre-1.0 cycle becomes less useful because it’s harder to get people involved in testing and bug fixing during your early stages. You can’t even test a program on your own phone without paying Apple for access to the code-signing system, and letting other people test it who aren’t also registered iPhone developers requires collecting their device IDs ahead of each new release.

When you’re ready to push out to the App Store for wide distribution, you have to wait an arbitrary amount of time for review. That’s generally about a week of waiting, then an Apple reviewer pokes at your app for an hour or two. If you’re lucky, the reviewer confirms it looks ok and pushes it up for sale. If you’re unlucky, the nice Apple employee tells you there’s something you need to fix, and you fix it and wait another week.

If you’re extra unlucky, the nice Apple employee looks at it, decides it’s ok, and pushes it up for distribution… but it turns out you actually introduced a horrible bug that breaks the app completely for actual users. (Whoops!)

In the meantime, I’ve had to pull StatusNet Mobile from the iTunes App Store altogether to keep people from installing the broken version until the fix gets through review. We’re hoping that review will go faster this time, or that Apple can revert our distribution to the last known-good release until it’s complete, but neither option is under our direct control; I’ve asked, and now simply have to wait and hope.

In contrast, a web app breaking like this could have been fixed immediately, with the fix instantly visible to all users. An Android app breaking like this could be updated in the Market immediately, with users seeing update notifications within a day. A Windows, Linux, or Mac app could have a new version pushed out immediately, with somewhat slower uptake depending on how you distribute your updates.

To be fair, Apple’s iTunes App Store system is remarkably liberal by the standards of game consoles and many older “mobile apps” models, where you need a lot more up-front negotiation and money with the hardware manufacturer or carrier. $99/year plus a cut of sales for a platform-exclusive sales channel isn’t awful; I’m not even sure how to find out how many hoops I’d have to jump through as an independent developer to whip up some little goodie for the Wii or PlayStation 3 and get it out for legitimate sale to end users.

But compared to what we’re used to on personal computers and the web, it’s extremely onerous for developers, and the side effects harm our users in directly visible ways.

Update 2010-10-25: Our StatusNet Mobile 1.0.3 update has been approved after 9 days sitting in the “in review” state without comment. It should appear in the App Store again shortly.

Firefox font rendering differences between Ubuntu, Mozilla packages?

Has anybody else encountered this? Subpixel antialiasing on text seems to be a lot more aggressive when running Mozilla’s packages of Firefox and Thunderbird than when running the Ubuntu packages:

I first noticed it when running Mozilla’s Thunderbird 3 packages on Ubuntu 9.10, but chalked it up to “weird stuff from 32-bit apps sometimes acting weird”; now both the 32-bit and 64-bit Firefox 4.0b1 packages are doing it to me on Ubuntu 10.04.

Is there some difference in the bundled libraries, or some custom Ubuntu or Debian patch that changes the behavior? And can I change it? I’m liking Firefox 4 so far, but this text is just awful on my eyes.

Updated 2010-07-13: A commenter pointed to a bug report that looks like it may be the culprit.

Freedom, compromise, and geek fights

I’ve seen a rash of complaints lately about some absolutist flame wars and trollfests in various parts of the free & open source software community, and it leaves me kind of sad when I see people whose work I respect jumping around and saying hurtful things to each other.

I understand, of course… us geeks tend to like absolutes. Absolutes are often very handy in an engineering context — this algorithm is more efficient with our data sets under these constraints; that algorithm is less efficient. This hard drive performs better for this server load; that one is worse. We unfortunately have a tendency to apply the same sort of arguments when we don’t have a clear-cut context… and there may be legitimately different answers for different people. Which programming language is best? (The one I’m most productive in!) Which mobile gadget is best? (The one that I would buy for my needs!) Which operating system is best? (The one I like to run my applications or tune to my preferences!) Which voting system is best? Which political system is best? Which religion is best?

We quickly fall into unwinnable circular arguments where the participants talk past each other. Not only are these unproductive; they can create very angry, adversarial communities that tend to drive away new members. Especially where participation is self-selected and involves both technical and ideological goals — like free software and Wikipedia communities — there’s a constant danger of ugly geekfights.

I’ll admit I’ve flamed my share of people who disagreed with me on the Internet — more so at 21 than at 31! — but I’ve always tried to keep myself in check by reminding myself of an incident in my youth…

When I was a young lad, I was raised in what is sometimes called a Post-Christian environment. As middle-class white Americans, we inherited some of the outside trappings of the old Christian civilization of medieval Europe, but we were never really religious. We celebrated Christmas and Easter, assumed “Yahweh” when someone said “God” instead of asking “which god?”, and understood that the “Bible” is the default holy book, with one section where GOD HATES SHRIMP and another where JESUS LOVES YOU. But we only had a token prayer at dinner, and only went to churches as tourists or funeral-goers; the one time I got dragged to my grandparents’ regular Sunday services at a Lutheran church I found the whole thing incomprehensible. Bible stories sometimes got presented to me as cultural background, but no more so than other religious tales like the similarly-ancient Greco-Roman myths, which nobody believes are literal truth.

As a 14-year-old or so, I assumed that this was the normal, natural way that everyone in our post-Enlightenment, science-based Western culture was raised. Someone who believed in any particular religion — so concluded my adolescent brain — must then be either ignorant or stupid. If they were ignorant, then surely explaining the true facts to them would make them give a quick facepalm and finally join the 18th century. If they didn’t get the explanation, then either my explanation wasn’t good enough (let’s try it again!) or they were just stupid and it was time to write them off entirely.

Eventually I started realizing that my assumptions didn’t actually hold. One day, a schoolyard discussion about science and philosophy (as only 9th-graders can philosophize… poorly!) resulted in a classmate declaring that “Darwin was a jerk!” for putting forth his theories on biological evolution. Yes, one of my honors-level classmates wasn’t just religious, he was a creationist. I knew he wasn’t an idiot — he was a bright kid who did great in math, science, literature, and history. I knew his parents weren’t idiots — they were smart, successful people. But this smart, successful family believed things I found to range from the odd to the silly to the downright insane.

I’ve never been convinced about religion — and definitely not creationism! — but that day I started to learn that believing things I find to be obviously wrong doesn’t make someone an unintelligent or malicious person, even if I can point to a heap of evidence that totally convinces me how wrong they are.

At best I could accuse him of being wrong and not having the same set of assumptions and values in his decision-making process that made the opposite conclusion so obvious to me. Given time, education, and a changing environment, he might change his mind, or he might not. But my arguments weren’t doing it, and weren’t going to do it, yet I couldn’t dismiss him entirely as an idiot.

I was instead going to have to just deal with someone being wrong.

This was probably the most important lesson I’ve ever learned. It’s hard, and I mean hard, to practice it, especially as a techie geek.

But it’s one of the foundations of our modern pluralistic democracies, and basically comes down to the social contract of “don’t oppress me, and I won’t oppress you”. My freedom to be an agnostic/atheist comes with the responsibility to tolerate Christians, Jews, Muslims, Buddhists, Hindus, etc… and even Creationists, to an extent. I’m willing to accept that compromise because they’re bound to it, too — it keeps “them” from ostracizing me as a heretic, burning me at the stake, stoning me to death, or just refusing to let me vote, own property, or run for public office just for being an agnostic/atheist, Christian, Jew, Muslim, Buddhist, Hindu, etc. We just have to work out a reasonable compromise — we teach the actual state of science in public-school science class, and it’s up to each religious group to explain to their children the specific ways, if any, that their religious worldview differs from centuries of evidence-based scientific research so that even the creationist kids still learn the cultural context of how our post-Enlightenment society works, even if they disagree with it.

So please… before you go flaming people for being traitors to the cause, or not getting it, or whatever… consider whether what you’re saying is actually going to add anything useful to the conversation, or if you’re just piling more noise on a never-ending geekfight. If we can avoid killing our neighbors over fundamental religious differences, we really ought to be able to live with someone else occasionally saying something nice about a product line you dislike.

Printing from Mac to Linux: the cheat sheet

For the last few years, desktop Linux and Mac OS X alike have used CUPS (Common Unix Printing System) as their native printing subsystem. In an ideal world, this would mean it’s really easy to set them up to talk to each other.

In the world we live in, however, things are not so easy. Configuring CUPS remains a black art, compounded by the absolutely abysmal reporting of errors in the printer UI on both OSes. To have any clue what’s going on, you have to seek out the log files…

I can understand that on Linux, but really, how did Steve Jobs let this out the door on his precious Macintosh? Heck, Apple even bought the company that developed CUPS a few years back. Stop making our iPods smaller for a couple minutes and fix your printing error messages! ;)

The situation:

I have a relatively straightforward setup: an Ubuntu Linux desktop PC (stormcloud.local) with a well-supported USB printer hooked up, and a Mac laptop (nimbus.local) which roams the world. When at home, it’s nice to be able to print directly from the Mac rather than print to PDF, copy the file, and then print.

The cheat sheet:

First the basics — make sure printer sharing is enabled on Linux; this much you should be able to do through the regular GUI.

Now the voodoo! Add to /etc/cups/cupsd.conf on Linux:

    # Allow remote access
    ServerAlias *
    Port 631

And restart cupsd:

    sudo /etc/init.d/cups restart

Now, you can add the printer on the Mac; be sure to fill everything out!

Several gotchas I discovered:

Listening isn’t enough

Very early in my journey I made sure that the Linux box’s cupsd.conf was set to listen on the network as well as to itself:

    BAD: Listen localhost:631
    GOOD: Port 631

But when I’d try to hit the CUPS web administration pages I’d just get a “400 Bad Request”. After some experimentation, I found that it actually responds just fine… as long as in the HTTP headers I call it “localhost” instead of by its proper local network name.

To get it working (so eg http://stormcloud.local:631/ would actually pull something up!) I had to add this to cupsd.conf:

    GOOD: ServerAlias *

No, setting the name I wanted in ServerName mysteriously wasn’t enough.

Pick a queue, any queue

The Mac’s IPP printer setup dialog says you can leave the “Queue” field “blank for default queue”. This is a lie! Despite having only one printer available, I could only get printing working if I listed the queue explicitly.

To add insult to injury, you need to include the “printers/” prefix. This is easiest if you find the printer on the web interface and copy-paste the path from the URL…

Now I can print my dang Fandango tickets, which I’m pretty happy about!


I’ve been using the native driver for the printer on the Mac side. It should also work to just leave it at Generic PostScript as long as the Linux box has a driver, but I feel safer with it there. ;)

Don’t spam me, bro

I got a pretty HTML spam email from Microsoft about the opening of their Mission Viejo, CA retail store today. This was probably not a smart marketing decision…

  1. Spam is bad; I don’t remember ever asking for this
  2. I haven’t lived in the area for 3 years. What old product reg info were they pulling from?
  3. All links in the mail run through the email marketing partner’s domain, so I’m never going to click them… haven’t they heard of phishing?

Although I have to admit, I’m tempted to swing by when I’m down that way visiting the folks for Christmas, just to see a Microsoft store… :)

Compound document formats

I’m generally appalled at the state of compound documents…

In the Apple world, Mac apps like Keynote love to use bundled directories that look like flat files at the UI level. Cute, but Thunderbird gleefully destroys them as attachments… Apple’s own Mail transparently packages them into .zip archives for you, but Thunderbird just gives you a file with a directory listing, which naturally fails to open when you download it. Nice!

OpenOffice and the latest MS Office stick their XML documents into a .zip archive, packaging image files etc. into that same archive. These actually do act like flat files, so attachments and uploads work. :) But it makes file type detection and validation a little harder: you have to verify which document type your zip actually is, and whether extra files have been slipped in…
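As a rough sketch of what that detection involves (my own illustration, not any particular project’s validator), you end up peeking inside the archive for format-specific marker entries:

```python
import zipfile

def sniff_zip_document(f):
    """Guess what kind of document a zip archive really is.
    A sketch; entry names are the real markers, the return labels are mine."""
    with zipfile.ZipFile(f) as z:
        names = z.namelist()
        # OpenDocument files carry their MIME type in an entry literally
        # named "mimetype" (stored uncompressed, first in the archive).
        if 'mimetype' in names:
            return z.read('mimetype').decode('ascii')
        # Office Open XML packages declare part types in a manifest instead.
        if '[Content_Types].xml' in names:
            return 'ooxml-package'   # placeholder label, not a real MIME type
    return None
```

And even after identifying the type, a strict validator would still walk the archive checking for unexpected extra entries — exactly the “slipped in” problem above.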

And then you get fun cases like Scribus, which just gives you an XML file referencing all your external image files by path, leaving no way to transfer your entire document without manually managing a directory structure and sending around or archiving multiple files by hand.

We had a request to allow Scribus uploads to Wikimedia sites for things like PR materials… sounds great, except that any actually relevant document will need its image files packed into the same directory, which you can’t do. D’oh!