GParted rocks

Did some upgrades on my girlfriend’s Windows PC today… The techs who originally set up her computer gave her an unconscionably small C: drive, a tiny 10 gig slice of an already-modest 40 gig drive. Even with careful discipline about putting things on the D: partition, 10 gigs doesn’t go very far. Shared DLL installs, gobs of temporary files, cached updaters for all manner of software, etc. all add up, and it was constantly running out of room.

My secret weapon to fix this was to be an Ubuntu Linux live CD, which conveniently comes with GParted.

I took a 200 gig drive left over from my dear departed Linux box and hooked it up, figuring I could back up the old data over the network, overwrite it with a raw disk image from the 40 gig drive, and then resize the NTFS partitions to a livable size.

Easy!

Well, sort of. :)

It turns out I could have saved myself some trouble at the command line by copying the partitions across drives with GParted itself instead of goin’ at it all old-school with dd. (Neat!)
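For the record, the old-school route goes roughly like this. Just a sketch: /dev/sdX stands in for the 40 gig drive and /dev/sdY for the 200 gig one, both made-up names you’d want to double-check with fdisk -l before copying anything.

# Clone the whole 40 gig drive, partition table and all, onto the bigger one.
sudo dd if=/dev/sdX of=/dev/sdY bs=1M
# Then point GParted (or ntfsresize + fdisk by hand) at /dev/sdY and grow
# the NTFS partitions into the leftover space.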

I had two sticking points, though.

First, it didn’t seem to let me move the extended (D:) partition to a different place on the drive. That meant there was no room to expand the C: partition, which was the point of the exercise.

I ended up having to create a copy of the D: partition, which it let me put in the middle of the drive, and then delete the old partitions. Kind of roundabout, and it changed the partition type from extended to primary, but Windows doesn’t seem to care about that, so keep those fingers crossed…

My second snag was due to Ubuntu’s user-friendliness. As soon as the new partition was created, the system mounted it — which caused the NTFS cloning process to abort, warning that it can’t work on a mounted filesystem.

Nice.

Had to go into the system settings and disable automatic mounting of removable media… luckily that’s easy to find in the menus. If you know it’s going to be there, at least. :)
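If you’d rather skip the menu spelunking, the same toggle can probably be flipped from a terminal with gconftool-2. The key names below are from memory and may well differ between releases, so treat them as a guess rather than gospel:

# gnome-volume-manager settings (key names are a guess; check gconf-editor).
gconftool-2 --type bool --set /desktop/gnome/volume_manager/automount_drives false
gconftool-2 --type bool --set /desktop/gnome/volume_manager/automount_media false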

Distributed time zones

Working with a distributed team, such as Wikimedia’s tech team, has its advantages and disadvantages. One aspect that’s both irksome and useful is the different time zones people live in.

In the early days, our time zone distribution looked roughly like this:

[Image: Classic Wikipedia admin timezones]

With Tim in Australia, Mark and others in Europe, and me in California, our timezones were nearly evenly spaced. If we all worked the same hours (local 9-to-5s for June are marked above), we’d almost never be online at the same time. Of course we all worked irregular hours, so there tended to be some overlap.

For most of 2007, though, we’ve had something more like this:

[Image: Compressed timezones]

Tim moved to England, I moved to Florida, and suddenly our time zones are much more compressed, with a much larger overlap.

On the one hand this is nice — we have more “face time” for real-time interaction in the chat channels.

On the other hand this leaves a big portion of the day when none of the core tech team is “on duty”, which reduces our ability to respond quickly to crises. Luckily we’ve had far fewer problems this year, since we’ve gotten a lot of old problems fixed up and our hardware capacity has generally stayed at or ahead of the growth curve.

For 2008 it looks like we’ll be going back to a more spread-out team:

[Image: New timezones]

Tim’s moving back to Australia, and I’ll be heading back to California when the Wikimedia Foundation sets up its new offices in the San Francisco bay area. We’ll also have Rob still active with the servers in Tampa, filling in some holes in coverage in the middle.

There’s some concern that this’ll reduce our ability to work directly with each other by IRC, but that’s not necessarily a bad thing. Relying too much on chat introduces problems of its own:

  • Those who aren’t constantly available online get marginalized…

    When important decisions are made in chat, you don’t get to participate if you dare to sleep, have a day job, go to class, have a life… :)

  • Records are poorer compared with a mailing list or wiki — not only did you miss the boat, you don’t get to see what the boat looked like. You may not even know there was a boat…

    We try to combat this by keeping a detailed server admin log and announcing details of big outages or updates on the lists.

Putting more emphasis on mailing list and wiki communication could make it easier to embrace new developers who can’t all be online at the same time… and paying more attention to our own wikis might help with dogfooding. ;)

Updated: Corrected Melbourne to Sydney in 2008 time zone map.

So you wanna be a MediaWiki coder?

Some easy bugs to cut your teeth on…

  • Bug 1600 – clean up accidental == header markup == in new sections. (Note — there’s an unrelated patch which got posted on this bug by mistake ages ago; just ignore it. :)
  • Bug 11389 – current diff views probably should clear watchlist update notifications generally, as they do for talk page notifications.
  • Bug 11380 – the ‘Go’ search shortcut needs some namespace option lovin’…

Or maybe you’d prefer to clean up an old patch and get it ready to go?

  • Bug 900 – Fix category column spacing. Since letter headers take up more space than individual lines, we get oddly balanced columns if some letters are better represented than others…
    Age: 2 years, 7 months. Ouch! :)
    Patch status: Applied with only minor cleanup; this function hasn’t changed much! There seems to be something wrong with the algorithm, though: while it balances a bit better, I sometimes see items dropped off the end of the list. Needs more work.
  • Bug 1433 – HTML meta info for interlanguage links.
    Age: 2 years, 7 months.
    Patch status: Applied after minor changes, but doesn’t seem compatible. Provided an alternate version which seems to work with SeaMonkey and Lynx. Is this an appropriate thing, and how do we i18nize the link text?

rsync 3.0 crashy :( [fixed!]

rsync 3.0 may rock, but it’s also kinda crashy. :(

It’s died a couple of times while syncing up our ~2TB file upload backup. I’ve attached some gdb processes to try and at least get some backtraces next time it goes.
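Nothing fancy about that, by the way; roughly this, with a placeholder PID (use whatever ps shows for the long-running rsync):

# Attach to the running rsync, let it keep going, and grab a stack trace
# when it finally blows up. 12345 is a stand-in PID.
sudo gdb -p 12345
(gdb) continue
... wait for the crash ...
(gdb) bt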

Update: Got a backtrace…

#0  0x00002b145663886b in free () from /lib/libc.so.6
#1  0x000000000043277b in pool_free_old (p=0x578730, addr=<value optimized out>) at lib/pool_alloc.c:266
#2  0x0000000000404374 in flist_free (flist=0x89e960) at flist.c:2239
#3  0x000000000040ddbc in check_for_finished_files (itemizing=1, code=FLOG, check_redo=0) at generator.c:1851
#4  0x000000000040e189 in generate_files (f_out=4, local_name=<value optimized out>) at generator.c:1960
#5  0x000000000041753c in do_recv (f_in=5, f_out=4, local_name=0x0) at main.c:774
#6  0x00000000004177ac in client_run (f_in=5, f_out=<value optimized out>, pid=25539, argc=<value optimized out>, 
    argv=0x56cf98) at main.c:1021
#7  0x0000000000418706 in main (argc=2, argv=0x56cf90) at main.c:1185

Looks like others may have seen it in the wild, but a fix doesn’t seem to be around yet. Some sort of bug in the extent allocation pool freeing changes done in May 2007, I think.

Found and patched a probably unrelated bug in pool_alloc.c.

Further updated next day:

The crashy bug should be fixed now. Yay!

PHP 5.2.4 error reporting changes

Noticed a couple of neat bits combing through the changelogs for the PHP 5.2.4 release candidate…

  • Changed “display_errors” php.ini option to accept “stderr” as value which makes the error messages to be outputted to STDERR instead of STDOUT with CGI and CLI SAPIs (FR #22839). (Jani)

This warms the cockles of my heart! We do a lot of command-line maintenance scripts for MediaWiki, and it’s rather annoying to have error output spew to stdout by default. Being able to direct it to stderr, where it won’t interfere with the main output stream, should be very nice.
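A toy example of the difference (not one of our real scripts, just an illustration):

# Today: the warning lands in results.txt right next to the real output.
php -d display_errors=1 -r 'echo "real output\n"; trigger_error("oops");' > results.txt
# With 5.2.4: errors go to stderr, stdout stays clean.
php -d display_errors=stderr -r 'echo "real output\n"; trigger_error("oops");' > results.txt 2> errors.log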

  • Changed error handler to send HTTP 500 instead of blank page on PHP errors. (Dmitry, Andrei Nigmatulin)

This in theory should give nicer results when the software appears to *just die* for no reason — with display_errors off, if you hit a PHP fatal error the code just stops and nothing else gets output. In an app that does its processing before any output, the result is a blank page with no cues to the user as to what happened.

Unfortunately it looks like it’s only going to be a help to machine processing, and even then only for the blank-page case. :(

In my quick testing, I only get the 500 error when no output has been sent… and it *still* returns blank output; it just comes with the 500 result code.

The plus side is this should keep blank error pages out of Google and other search indexes; the minus side is it won’t help with fatal errors that come in the middle of output, or with the rather common case of sites that leave display_errors on… because then the error message gets output, so you don’t get a 500 result code.
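For the curious, my quick test boiled down to something like this; fatal.php is a hypothetical script that echoes nothing and then calls an undefined function, served from a local test box with display_errors off:

# With no output sent before the fatal error, 5.2.4 now answers with a 500...
# but the body is still empty.
curl -si http://localhost/fatal.php | head -n 1
# HTTP/1.1 500 Internal Server Error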

rsync 3.0 rocks!

Wikimedia’s public image and media file uploads archive has been growing and growing and growing over the years, nowadays easily eating 1.5 TB or so.

This has made it harder to provide publicly downloadable copies, as well as to maintain our own internal backup copies — and not having backups in a hurricane zone is considered bad form.

In the terabyte range, making a giant tar archive is kind of… difficult. Not only is it insanely hard to download the whole thing if you want it, but it multiplies our space requirements — you need space for every complete and variant archive as well as all the original files. Plus it just takes forever to make them.

rsync seems like a natural fit for updating and then synchronizing a large set of medium-size files, but as we’ve grown it’s become slower and slower and slower.

The problem was that rsync worked in two steps:

First, it scanned the directory trees to list all the files it might have to transfer.

Then, once that was done, it transferred the ones which needed to be updated.

Great for a few thousand files — rotten for a few million! On an internal test, I found that after a full day the rsync process was still transferring nothing but filenames — and its in-memory file list was using 2.6 *gigabytes* of RAM.

Not ideal. :)

Searching around, I stumbled upon the interesting fact that the upcoming rsync 3.0 does “incremental recursion” — that is, it does that little “list, then transfer” cycle for each individual directory instead of for the entire file set at once.

I grabbed a development tree from CVS, compiled it up, and gave it a try — within seconds I started to see files actually being transferred.
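The invocation itself was nothing exotic; something in this general direction, with the host and paths made up for illustration:

# Both ends need to be rsync 3.0 for incremental recursion to kick in;
# files start flowing per-directory instead of after one giant scan.
/usr/local/bin/rsync -av --delete remotehost:/data/uploads/ /backup/uploads/
# (--no-inc-recursive brings back the old whole-tree-first behavior if you want it.)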

Hallelujah! rsync works again…

We’re now in the process of getting our internal backup synced up again, and will see about getting an off-site backup and maybe a public rsync 3.0 server set up.
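The daemon side of a public mirror wouldn’t need much, either. A minimal sketch of what an rsyncd.conf module might look like, with invented module name and paths, and read-only of course:

# Write a bare-bones read-only module and start the daemon.
# (Heredoc just to keep it all in shell; paths are placeholders.)
cat <<'EOF' | sudo tee /etc/rsyncd.conf
[uploads]
    path = /data/upload-archive
    comment = public image/media upload mirror
    read only = yes
    list = yes
EOF
sudo rsync --daemon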