San Francisco Dent Event

We had a “dent event” meetup of open microblogging fans the other day here in San Francisco, as identi.ca’s Evan Prodromou is wandering about after the recent RecentChangesCamp in Portland.

There was wacky fun! Beers were drunk, fish and chips were eaten, netbooks were compared, and there was a nice little turnout of Wikimedians and Creative Commoners in the group.

There was talk about some of the ways microblogging feeds are being used in the Wikimedia world (there are Twitter and Identica feeds of Wikinews updates, as well as a feed we set up recently with server administration log updates [and at Twitter]), and some talk of possibly setting up localization work for the Laconica software at Betawiki, where most of MediaWiki’s localizations are maintained.

We also discussed the necessity of setting firmer plans ahead of time for our next San Francisco Wikimedia meetups. ;)

Also of note — there’s a Laconica hackfest planned in Berkeley tomorrow (Saturday, February 28), which may be of interest to some Bay Area coders.

PDF live testing on English Wikipedia

I’ve enabled the article-to-PDF generation extension on en.wikipedia.org for live production testing. This is still slightly experimental, so the PDF download link and the multi-article book collection feature are currently limited to logged-in users only, to let the load grow gradually.

We’re going to be keeping an eye on load on the PDF generation server in case we need to pull it temporarily. :)

Update: the server died for a few minutes as we had a memory leak go very bad. :) We’ve re-installed a watchdog script which automatically reboots it. The PediaPress devs are looking into whether they can prevent the leak. :D
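(For the curious: a watchdog like that doesn’t have to be anything fancy. Here’s a minimal sketch of the general idea in Python; the process name, memory limit, and restart command are all made-up placeholders, not what we actually run.)

```python
#!/usr/bin/env python3
"""Minimal memory-watchdog sketch (illustrative only, not our actual
script): poll a process's resident memory via /proc and restart it
when it grows past a limit."""
import subprocess
import time

PROCESS_NAME = "mw-render"   # hypothetical renderer process name
RSS_LIMIT_KB = 1_500_000     # hypothetical cap, roughly 1.5 GB

def rss_kb(pid):
    """Resident set size in kB, read from /proc (Linux only)."""
    try:
        with open("/proc/%d/status" % pid) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])
    except OSError:
        pass  # process went away between listing and reading
    return 0

def find_pids(name):
    """PIDs of processes matching `name`, via pgrep."""
    out = subprocess.run(["pgrep", "-f", name],
                         capture_output=True, text=True)
    return [int(p) for p in out.stdout.split()]

while True:
    for pid in find_pids(PROCESS_NAME):
        if rss_kb(pid) > RSS_LIMIT_KB:
            # The leak has gone bad: bounce the service before the
            # whole box falls over.
            subprocess.run(["service", PROCESS_NAME, "restart"])
            break
    time.sleep(30)
```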

I’ve been asked to point out where to report problems with page rendering. Please report them directly to the PediaPress developers’ bug tracker, or, in a pinch, file them on our Bugzilla under the “Collection” extension component and they’ll reach the developers as well.

Update 2009-02-27: More helpful links for folks:

Wikimedia data dump update

Quick update on data dump status:

Dumps are back up and running on srv31, the old dump batch host.

Please note that unlike the wiki sites themselves, dump activity is not considered time-critical; there is no emergency requirement to get them running again as soon as possible.

Getting dumps running again after a few days is nearly as good as getting them running again immediately. Yes, it sucks when it takes longer than we’d like. No, it’s not the end of the world.

Dump runner redesign is in progress.

I’ve chatted a bit with Tim in the past about rearranging the architecture of the dump system to allow for horizontal scaling, which will make the big history dumps much, much faster by distributing the work across multiple CPUs or hosts, where it’s currently limited to a single thread per wiki.

We seem to be in agreement on the basic architecture, and Tomasz is now in charge of making this happen; he’ll be poking at infrastructure for this over the next few days, using his past experience with distributed index-build systems at Amazon to guide his research, and will report to y’all later this week with some more concrete details.
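To give a flavor of what horizontal scaling means here, a rough sketch of the general shape in Python follows; this is my own illustration, not the actual design, and the chunking scheme and file names are assumptions:

```python
"""Sketch of a horizontally scaled dump run: the page-ID space is
split into chunks that a pool of workers can process independently,
instead of one thread grinding through the whole wiki."""
import bz2
from multiprocessing import Pool

def dump_chunk(chunk):
    """Dump one range of page IDs to its own compressed file.
    The body here is a stand-in for the real fetch-and-serialize work."""
    start, end = chunk
    path = "history-%09d-%09d.xml.bz2" % (start, end)
    with bz2.open(path, "wt") as out:
        out.write("<!-- revisions for pages %d..%d would go here -->\n"
                  % (start, end))
    return path

def parallel_dump(max_page_id, workers=8, chunk_size=100_000):
    chunks = [(lo, min(lo + chunk_size - 1, max_page_id))
              for lo in range(1, max_page_id + 1, chunk_size)]
    with Pool(workers) as pool:
        # map() returns results in chunk order, so concatenating the
        # per-chunk files afterwards preserves page ordering.
        return pool.map(dump_chunk, chunks)

if __name__ == "__main__":
    print(parallel_dump(250_000, workers=4))
```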

Dump format changes are in progress.

Robert Rohde’s proof-of-concept code for diff-based dumps is in our SVN and available for testing.

We’ll be looking at the possibility of integrating this, to see what effect it has on dump performance; currently performance and reliability are our primary concerns, rather than output file size, but the two can intersect since bzip2 data compression is a significant time factor.

This will be pushed back to later if we don’t see an immediate generation-speed improvement, but it’s very much a desired project since it will make the full-history dump files much smaller.
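For a feel of the core idea, here’s a toy illustration using Python’s difflib; this isn’t Robert’s actual format (that’s in SVN), just the principle of storing the first revision in full and only deltas after that:

```python
"""Toy illustration of diff-based history storage: keep revision 1 in
full, then only unified diffs between consecutive revisions."""
import bz2
import difflib

def diff_history(revisions):
    if not revisions:
        return []
    entries = [revisions[0]]
    for prev, curr in zip(revisions, revisions[1:]):
        # Store only what changed between consecutive revisions.
        entries.append("".join(difflib.unified_diff(
            prev.splitlines(keepends=True),
            curr.splitlines(keepends=True), n=0)))
    return entries

revisions = [
    "Hello world.\n",
    "Hello world.\nNow with a second line.\n",
    "Hello there.\nNow with a second line.\n",
]
full = bz2.compress("".join(revisions).encode())
delta = bz2.compress("".join(diff_history(revisions)).encode())
# On a toy example the sizes are close; on real page histories, where
# thousands of near-identical revisions repeat, the delta stream is
# dramatically smaller.
print(len(full), len(delta))
```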

Wikipedia downtime resolved

More internal NFS troubles — some of our redundancy was not as complete as we thought. :(

Sites down for approx 1 hr between 00:50 and 01:50 UTC, February 21:

[Image: borked]

The main sites are back up and working. The SSL interface, blog aggregator, and a couple of other little things took a little longer, as that server was temporarily being used to move other files around.

We’re bumping up the priority of moving internal file usage onto non-NFS protocols. Sigh…

Mobile browser links

We’re trying to get some more traffic onto the new mobile gateway for testing — and figuring out how best to get people to the mobile-optimized site if they hit a regular Wikipedia link while on their mobile phone.

For the moment I’ve slipped some JavaScript into English Wikipedia which (intermittently, for now) pops up a big link if it detects you’re on an iPhone, iPod Touch, or Android-based device:

[Screenshot: Barack Obama with link]

The link takes you to the same page on the mobile-optimized gateway site:

[Screenshots: Barack Obama on mobile site; Barack on mobile, next page]

Text is rearranged for comfortable viewing without a lot of zooming, images are sized for niceness, and sections are collapsed and expandable as you need them. Neat!
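The device detection itself is nothing fancy, just user-agent sniffing for the handful of platforms targeted so far. The real thing is a few lines of client-side JavaScript; sketched in Python, the gist is roughly this (the pattern list is my own approximation):

```python
import re

# Rough approximation of the device check; the real snippet is
# client-side JavaScript and may match differently.
MOBILE_UA = re.compile(r"iPhone|iPod|Android")

def wants_mobile_link(user_agent):
    """True for the handful of devices the gateway targets so far."""
    return bool(MOBILE_UA.search(user_agent))

print(wants_mobile_link(
    "Mozilla/5.0 (iPhone; U; CPU iPhone OS 2_2 like Mac OS X)"))  # True
```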

Hampton Catlin’s working on getting some awesome stuff going with cross-platform native client front-ends that wrap this basic view for iPhone, Android, and potentially a few other platforms… updates to come. :)

Note that the gateway is not English-only — German, Polish, and a few others have support so far, with others coming. (A few localized bits are needed for the front page interface and other navigation.)

Please report any issues you find with the mobile interface to our bug tracker!

Your donations at work: new servers for Wikipedia

Between our high traffic, our wacky insane number of edits, new software features, and the ever-growing amount of stuff on Wikipedia and friends, the demand on our servers is always going up.

Our stack has several layers, from the MySQL database backend to the geographically distributed Squid caches, but the heavy lifting of all that wiki page formatting and editing logic is handled by (at last count) 156 Apache/PHP servers running MediaWiki.

We last did a major expansion of these application servers in mid-2007, which ended up lasting us a lot longer than we’d originally anticipated. In the last couple of weeks we’ve finally started bumping up against some capacity limits at peak times (especially Mondays, around the afternoon in North America and evening in Europe), making everything horribly slow.

We’ve recently retired some of our oldest web servers to free up space for a full rack of newer, faster ones: an “energy-efficient” variant of the Dell PowerEdge 1950, packing 8 cores of awesome at 2.5 GHz each. (We’ll follow up on the fate of the retired boxes in another post…)

With just 9 of the 36 new boxes installed so far, we’ve already seen a visible decrease in page service times today:

[Graph: moar-cpu-plz (page service times)]

According to Mark’s estimates while tweaking the load balancer, these 9 boxes alone are serving as much traffic as our 41 oldest boxes still in service, with lower electricity usage, making them cheaper to run. Moore’s Law wins for you again!
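(Back-of-the-envelope arithmetic on those numbers, assuming the load balancer gives equal traffic shares to boxes within each group:)

```python
# Rough per-box comparison from the load-balancer numbers above.
new_boxes, old_boxes_equivalent = 9, 41
print("each new box ~%.1fx an old one"
      % (old_boxes_equivalent / new_boxes))   # ~4.6x
```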

The remaining 27 new boxes will be going into service over the next couple days, improving performance further.