InnoDB and memcached

I had a quick look at the source tree for the memcached interface to InnoDB that’s up as a tarball on labs.mysql.com (I haven’t compiled it, just read the source – that’s what I do. I challenge any C/C++ compiler to keep up with my brain!). A few quick thoughts:

  • Where’s the Bazaar tree on launchpad? I hate pulling tarballs, following the dev tree is much more interesting from a tech perspective (especially for early development releases). I note that the NDB memcached stuff is up on launchpad now, so yay there. I would love it if the InnoDB team in general was much more open with development, especially with having source trees up on launchpad.
  • It embeds a copy of the memcached server engines branch into the MySQL tree. This is probably the correct way to go. There is no real sense in re-implementing the protocol and network stack (that’s about half of what memcached is anyway).
  • The copy of the memcached engine branch seems to be a few months old.
  • The current documentation appears to be the source code.
  • The innodb_memcached plugin embeds a memcached server inside the MySQL server process, using an API to InnoDB (basically so it can access the same instance of InnoDB as the running MySQL server – see the sketch after this list).
  • There’s a bit of something that kind-of looks similar to the Embedded InnoDB (now HailDB) API being used to link InnoDB and memcached together. I can understand why they didn’t go through the MySQL handler interface… getting that correct would be bracing, to say the least. Going through InnoDB APIs is much more likely to mean fewer bugs.
  • If this accepted JSON and spat it back out… how fast would MongoDB die? weeks? months?
  • The above dot point would be a lot more likely if adding a column to an InnoDB table didn’t involve epic amounts of IO.
  • I’ve been wanting a good memcached protocol inside Drizzle, but we have, of course, focused on the stability of what we do have first. That being said… upgrade my flight home so I can open a laptop and it’d probably be done well before I land… (assuming I don’t get to it among the 15 other awesome things I want to hack on this week)
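
Since it speaks the normal memcached protocol, any stock memcached client should be able to talk to it. Purely as illustration, here is a minimal libmemcached client sketch (assuming the embedded server listens on the default memcached port, 11211 – I haven’t checked what the plugin actually defaults to):

// hypothetical client for the memcached server embedded in mysqld
#include <libmemcached/memcached.h>
#include <cstdio>
#include <cstdlib>

int main(void)
{
  memcached_st *memc= memcached_create(NULL);
  memcached_server_add(memc, "localhost", 11211);

  /* a set here should end up as a row in an InnoDB table... */
  memcached_set(memc, "key1", 4, "value1", 6, 0 /* expiration */, 0 /* flags */);

  /* ...and a get reads it back from the same InnoDB instance the
     running MySQL server is using */
  size_t value_length;
  uint32_t flags;
  memcached_return_t rc;
  char *value= memcached_get(memc, "key1", 4, &value_length, &flags, &rc);
  if (value != NULL)
  {
    printf("key1 = %.*s\n", (int) value_length, value);
    free(value);
  }

  memcached_free(memc);
  return 0;
}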

Drizzle Developer Day

As Lee mentioned on the Drizzle blog, we have an upcoming Drizzle Developer Day just after the MySQL Conference and Expo. Sign up here to make sure we have enough space and can help with planning.

The last couple of developer days have been great – helping people get started with Drizzle, discussing improvements that could be made (both big and small), operations concerns and, new to this one, a GA release.

Having a GA release out is really exciting. I’m hoping that this developer day brings people who are looking at developing with Drizzle, not just hacking on internals.

Moar users nom nom nom!

xtrabackup for Drizzle merge request

Follow it over on launchpad.

After having fixed an incredibly odd compiler warning (and, with the -Werror we build with, error) on OSX (die die die) – xtrabackup for Drizzle is ready to be merged. This will bring it into our next milestone: fremont. Over the next few weeks you should see some good tests merged in for backup and restore too.

While not absolutely final, I’m thinking that the installed binary name will be drizzlebackup.innobase – a simple naming scheme for the various backup tools that are Drizzle specific. This also casually paves the way for a drizzlebackup tool that can co-ordinate all of these (like the innobackupex script does).

Drizzle online backup with xtrabackup

For backups, the MySQL world has historically had mysqldump (a SQL dump, which means that on restore you have to rebuild indexes), InnoDB Hot Backup (proprietary, but it takes a copy of the InnoDB data files, so restore is much quicker), LVM snapshots (various scripts exist; larger IO impact, requires LVM) and, more recently, xtrabackup. Xtrabackup essentially does the same thing as InnoDB Hot Backup, except that it’s free and open source software.

Many people have been using xtrabackup successfully for quite a while now.

In Drizzle7, our default storage engine is InnoDB. There have been a few changes, but it is still very much InnoDB. This leaves us with the question of backup solutions. We have drizzledump (the Drizzle equivalent of mysqldump – although with fewer gotchas), you could always use LVM snapshots, and the probability of Oracle releasing InnoDB Hot Backup for Drizzle is rather minimal.

So enter xtrabackup as a possible solution… I had thought about porting xtrabackup across for a while. Last weekend, while waiting for one of my iterations of catalog support to compile, I decided to give it a go, and to see how far I could get with it in that one weekend.

I was successful – there’s a tree up at lp:~stewart/drizzle/xtrabackup that produces an xtrabackup binary built for Drizzle (it’s not quite ready for merging yet; there are some obvious bugs around command line option parsing… but a backup and restore did work).

I wanted the following:

  • build to be integrated with Drizzle, using the same innobase build that we use to build the server
  • build with strict compiler warnings and -Werror (which we do for Drizzle)
  • build with a C++ compiler (as we do with innobase in Drizzle)
  • not re-add parts of mysys into the Drizzle build just for xtrabackup

I’ve already submitted merge requests to upstream xtrabackup containing the compiler fixes and the added compiler warnings (they’ve by now been merged into xtrabackup), so my work has already improved the quality of xtrabackup for everyone. Some of the warnings were fixed slightly differently in xtrabackup than in my Drizzle tree, but I plan to merge those changes.

One issue was that the command line parsing library xtrabackup uses – my_getopt, which is part of mysys (the portability library inside MySQL) – is long since gone from Drizzle. We currently use Boost::program_options. Thanks to the heroic efforts of Andrew Hutchings, xtrabackup in Drizzle is also using boost::program_options. This was a brilliant “hey, can you have a look at this conversion” followed by handing him a tree that did not even remotely compile, followed by an “I have to take the kids somewhere, here’s a tree – it may compile”. Amazingly enough, it pretty much did compile once I fixed the other issues.
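
For flavour, this is roughly what the boost::program_options style looks like (the option names here are illustrative, not the exact xtrabackup set):

#include <boost/program_options.hpp>
#include <iostream>
#include <string>

namespace po= boost::program_options;

int main(int argc, char **argv)
{
  std::string target_dir;

  po::options_description desc("xtrabackup options");
  desc.add_options()
    ("help", "display this help message")
    ("target-dir",
     po::value<std::string>(&target_dir)->default_value("./backup"),
     "destination directory for the backup")
    ("prepare", "apply the log to a backup rather than take one");

  po::variables_map vm;
  po::store(po::parse_command_line(argc, argv, desc), vm);
  po::notify(vm);

  if (vm.count("help"))
  {
    std::cout << desc << std::endl;
    return 0;
  }

  std::cout << "backing up to " << target_dir << std::endl;
  return 0;
}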

An unresolved issue is how to deal with this going forward – my guess is that upstream xtrabackup doesn’t want to require Boost.

One solution could be to factor the command line options out into a separate file that we can ignore for Drizzle and replace with our own. The other option could be to use a different command line option parsing library (perhaps from CCAN, as it’s then maintained by somebody else and doesn’t require heaps and heaps of other stuff).

Another issue I had to tackle is the patch to innobase that’s required to build xtrabackup.

I took a very minimal approach for the Drizzle patch. We are currently based on innobase 1.1.4 from MySQL 5.5 – so I mostly looked at the xtradb55 patch. I think it would be great if, instead of one giant patch, this were a series of patches to apply (à la quilt) to a) make it easier to maintain and b) make it easier for myself to work out the exact reasoning behind each bit (also, generating the patches with -p would help a fair bit too).

So how did I do it?

Step 0
was removing support for old innobase – we totally don’t need it for Drizzle.

Step 1
was creating a srv_read_only option for Drizzle’s innobase. This was fairly easy. The one thing I did have to change was adding a check in os_file_lock() so that we don’t attempt to write-lock the ibdata files when in read only mode (otherwise backups can’t be taken while drizzled is running). I’m a little surprised that this wasn’t hit in 5.5 at all.
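
The shape of the change is roughly this (a sketch only – the real InnoDB function differs in its types and return conventions):

#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

bool srv_read_only= false;  /* set when innobase is opened for backup */

int os_file_lock(int fd, const char *name)
{
  /* when read only, don't take the write lock on the ibdata files --
     a running drizzled already holds it, and we will only ever read */
  if (srv_read_only)
    return 0;

  struct flock lk;
  lk.l_type= F_WRLCK;
  lk.l_whence= SEEK_SET;
  lk.l_start= 0;
  lk.l_len= 0;
  if (fcntl(fd, F_SETLK, &lk) == -1)
  {
    fprintf(stderr, "Unable to lock %s\n", name);
    return -1;
  }
  return 0;
}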

Step 2
was implementing srv_fake_write. I’m pretty sure I’ve gotten this right in the Drizzle implementation, but the patch wasn’t as easy to read as I’d really like. I probably need to do a bit more of a code audit to confirm that this is actually correct (I may try to come up with an LD_PRELOAD library that will scream loudly if writes are made to files matching a pattern).
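
That LD_PRELOAD library doesn’t exist yet, but a minimal sketch of the idea could look like this (SCREAM_PATTERN is a made-up environment variable; a fuller version would also wrap writev(), pwritev() and friends):

/* build: g++ -shared -fPIC -o scream.so scream.cc -ldl
   use:   SCREAM_PATTERN='*ibdata*' LD_PRELOAD=./scream.so xtrabackup ... */
#include <dlfcn.h>
#include <fnmatch.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

static void scream_if_matched(int fd)
{
  const char *pattern= getenv("SCREAM_PATTERN");
  if (pattern == NULL)
    return;

  /* resolve the fd back to a path so we can match it */
  char link[64], path[4096];
  snprintf(link, sizeof(link), "/proc/self/fd/%d", fd);
  ssize_t len= readlink(link, path, sizeof(path) - 1);
  if (len <= 0)
    return;
  path[len]= '\0';

  if (fnmatch(pattern, path, 0) == 0)
  {
    fprintf(stderr, "SCREAM: write attempted to %s\n", path);
    abort();
  }
}

extern "C" ssize_t write(int fd, const void *buf, size_t count)
{
  static ssize_t (*real_write)(int, const void *, size_t)=
    (ssize_t (*)(int, const void *, size_t)) dlsym(RTLD_NEXT, "write");
  scream_if_matched(fd);
  return real_write(fd, buf, count);
}

extern "C" ssize_t pwrite(int fd, const void *buf, size_t count, off_t offset)
{
  static ssize_t (*real_pwrite)(int, const void *, size_t, off_t)=
    (ssize_t (*)(int, const void *, size_t, off_t)) dlsym(RTLD_NEXT, "pwrite");
  scream_if_matched(fd);
  return real_pwrite(fd, buf, count, offset);
}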

Step 3
was implementing srv_apply_log_only. I’m pretty sure I have this right but, again, more testing will be required. Why? Because I’m that paranoid about getting things very, very right.
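
As a sketch of what the flag means (the function names below are placeholders, not the actual innobase ones):

bool srv_apply_log_only= false;

extern void rollback_incomplete_transactions(void);  /* placeholder */

/* placeholder: called once crash recovery has applied the redo log */
void recovery_finished(void)
{
  if (srv_apply_log_only)
  {
    /* stop here: the data files are now consistent with the log, and
       uncommitted transactions are deliberately NOT rolled back, so
       further (incremental) logs can still be applied later */
    return;
  }

  rollback_incomplete_transactions();
}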

Step 4
was to go through all the functions that xtrabackup needed to not be static. Instead of having prototypes for them in xtrabackup.cc, I added an xtrabackup_api.h header to innobase and included it where needed (including in xtrabackup). I’d recommend this way going forward for xtrabackup too, as it could be a lot less problematic to maintain (and it makes the xtrabackup source a bit easier to read).
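
Something along these lines (the function names are placeholders – the real list is whatever xtrabackup actually calls):

/* xtrabackup_api.h */
#ifndef XTRABACKUP_API_H
#define XTRABACKUP_API_H

#ifdef __cplusplus
extern "C" {
#endif

/* formerly-static innobase functions that xtrabackup calls directly,
   prototyped once here instead of ad-hoc in xtrabackup.cc */
int xb_read_checkpoint_info(void);      /* placeholder */
void xb_apply_log_record(void *rec);    /* placeholder */

#ifdef __cplusplus
}
#endif

#endif /* XTRABACKUP_API_H */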

Step 5
was fixing up a few skeleton functions that were needed to make our innobase happy. It may not be a bad idea to split the skeleton functions out into a separate source file so they’re a bit easier to track (with some #ifdefs around those not needed for certain releases).
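
i.e. something like this in its own file (again, a placeholder function – not necessarily one our innobase actually needs):

/* xb_skeletons.cc: empty bodies for server callbacks that innobase
   references but that are never reached when running as xtrabackup */
extern "C" void thd_example_server_callback(void *session)  /* placeholder */
{
  (void) session;   /* never called during a backup */
}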

I’m hoping to work with the upstream xtrabackup devs on the various points I’ve made above.
Another thought of mine is to port xtrabackup into HailDB, where we can use much neater API functions to create good tests for xtrabackup.

Thanks go out to all who’ve worked on xtrabackup. It honestly wasn’t too hard getting it ported across to Drizzle – and with a bit of collaboration I think we can make it easy to keep up to date.

What’s the future for Xtrabackup in Drizzle? It’ll likely end up being a binary named drizzlebackup-innobase or similar (this means that there is a clear difference between xtrabackup for MySQL and what we have in Drizzle – which is more accurately defined as based on xtrabackup). We’ll also probably want a nice wrapper or integration with a backup tool to deal with everything Drizzle related. We shall also introduce a lot of testing; backups are important.

Xtrabackup is topical; check out the latest OurSQL podcast and the Percona Xtrabackup website for more info!

Multi-tenancy Drizzle

My previous post focused on some of the problems of doing multi-tenant MySQL.

One of the reasons why I started hacking on Drizzle was that the multi-tenancy options for MySQL just weren’t very good (this is also the reason why I run my blog in a VM and not a shared hosting solution).

What you really want is to be able to give your users access to a virtual database server. What you don’t want is to be administering a separate database server for each of your users. What you want are CATALOGs.

A CATALOG is a collection of SCHEMAs (which have TABLEs in them). Each CATALOG is isolated from all the others. Once you connect to a catalog, that’s it. They are entirely separate units. There are no cross-catalog queries or CHANGE CATALOG commands. It is as if each catalog is its own database server.

You can easily imagine a world where there are per-catalog resource limits too (max connections, max temp tables etc).

For the Drizzle7 release, we got in some preliminary support to ensure that the upgrade path would be easy. You’ll notice that all the schemas you create are in the ‘local’ catalog (you can even spot this on the file system in the datadir).

For the next Drizzle milestone, the intent has been to flesh out this support to enable a very elegant multi-tenant solution for Drizzle.

One of the things we worked on a little while ago now is introducing TableIdentifier and SchemaIdentifier objects into Drizzle. Historically (and still in the MySQL codebase) tables would be referenced by a string in the format “database/table_name” (except sometimes when it could be “database\table_name”). There were also various times when this was the name as entered by the user and times when this was converted into a safe form for storing on disk (and comparing to one another).

Everywhere in Drizzle where we have to deal with the path to a table, we call a method on a TableIdentifier for it. There is also a method for getting a series of bytes that can be used as a key in a data structure (e.g. table definition cache). It’s even used to work out what path on the file system to store the data file in.
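
A condensed sketch of the idea (Drizzle’s actual classes do rather more than this):

#include <string>

class SchemaIdentifier
{
  std::string db;
public:
  explicit SchemaIdentifier(const std::string &db_arg) : db(db_arg) {}
  const std::string &getSchemaName() const { return db; }
};

class TableIdentifier
{
  SchemaIdentifier schema;
  std::string table_name;
public:
  TableIdentifier(const SchemaIdentifier &schema_arg,
                  const std::string &table_arg)
    : schema(schema_arg), table_name(table_arg) {}

  /* the one place that knows how a table maps to a path on disk */
  std::string getPath() const
  {
    return schema.getSchemaName() + "/" + table_name;
  }

  /* a stable series of bytes usable as a key in, say, the table
     definition cache */
  std::string getKey() const
  {
    return schema.getSchemaName() + '\0' + table_name;
  }
};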

I would say that the use of TableIdentifier and SchemaIdentifier has prevented many bugs from creeping into the server during development. It took several aborted goes to get something like this into the codebase, just because of how the database name, table name and table path strings were being used and abused around the server.

So, with our cleaned up codebase using TableIdentifier and SchemaIdentifier, how hard would it really be to introduce the new level of CATALOG?

So along comes an epic hacking session last weekend. Basically I wanted to see how close I could get in such a short time. I introduced a CatalogIdentifier and then forced the SchemaIdentifier constructor to require one as a parameter….

The great benefit? Every place in the code that I needed to change was a compile error. The overwhelming majority of places I had to fix were shown to me by GCC. I cannot express how wonderful this is.
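
In sketch form, continuing the identifier example from above (again, not the exact Drizzle code):

#include <string>

class CatalogIdentifier
{
  std::string catalog;
public:
  explicit CatalogIdentifier(const std::string &name_arg) : catalog(name_arg) {}
  const std::string &getName() const { return catalog; }
};

class SchemaIdentifier
{
  CatalogIdentifier catalog;
  std::string db;
public:
  /* requiring a CatalogIdentifier here is the whole trick: every
     existing call site that constructs a SchemaIdentifier is now a
     compile error until it says which catalog it means */
  SchemaIdentifier(const CatalogIdentifier &catalog_arg,
                   const std::string &db_arg)
    : catalog(catalog_arg), db(db_arg) {}

  std::string getPath() const
  {
    return catalog.getName() + "/" + db;
  }
};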

Anyway, by going through all of these places and fixing up a few things that also needed fixing (including just throwing in a few temporary hacks), I got it working!

I have a tree up on launchpad (lp:~stewart/drizzle/multitenant) where you can create multiple catalogs, connect to any of the catalogs you’ve created (or local), and create schemas and tables in them. Each catalog is separate, so you can have one server with two schemas called ‘test’ and two tables called ‘t1’ (one in each catalog).

This is a pretty early development tree… and it comes with some limitations (that will need to be fixed):

  • “local” is the default catalog (as it is in Drizzle7)
  • “local” must exist
  • I_S and DATA_DICTIONARY only show up in local (i.e. SHOW TABLES doesn’t even work in other catalogs yet)
  • HailDB is “local” only
  • testing equals approximately zero
  • currently the only protocol plugin that supports connecting to another catalog is the console plugin

But, as a result of only a weekend of hacking, pretty cool.

Paving the way for easy Database-as-a-Service.

The problems with multi-tenant MySQL

Just about every web host in the world gives people “mysql databases”. Usually one. Why? Because that’s how permissions work in MySQL and it’s a relatively simple way to set things up (each of your web host clients is a user with access to their one database).

This has a lot of limitations for the end user:

  • You only get one user (that can do anything). This means you can forget about enhancing the security of your web app by not letting the front end DROP TABLE (or DROP SCHEMA). This is probably one of the main reasons so few web apps use any of the access controls available in MySQL.
  • You only get one database. Typically this means you get to run one application. Or, you start to get applications that allow you to specify a table name prefix as a lovely hack around this kind of situation (ick).
  • If you get more than one database, they’re all going to be prefixed with your username – and creating/dropping them will be done via a web UI, not the normal way.
  • You’ll never get replication.
  • If you need to scale, it means migrating hosting solutions – there isn’t an easy-to-use “now I need read-only slaves” button.
  • Backups are only ever going to be SQL dumps from the master (if you don’t completely trust your hosting provider).
  • If your provider does do replication, one other bad user in the system could introduce EPIC amounts of replication latency.

One solution is to give everybody their own MySQL instance. Ick. Why ick? Well… you now have a lot more MySQL servers to administer. That’s not the real problem, however: you’ve just screwed yourself on IOPs – each mysqld gets to compete to sync its data to disk.

The next step people go to is running MySQL inside a virtual machine. You are again screwing yourself on IOPs. Every VM on the box can now fight the others for a limited number of sync operations to make the data safe on disk. Forget about whether group_commit works for your MySQL version – having many VMs running MySQL on the same physical box will screw you much more than a lack of group_commit ever will. You can probably kiss consistent performance and latency goodbye too (this will largely depend on how the VMs are being run by your hosting provider).

The best way to get screwed is to get “free” extra CPU cycles and IOPs that are excess on the physical machine, and then suddenly switch to not getting any “free” ones – only the ones you pay for. Wonder why your site is suddenly slow to respond when the number of visitors is the same and you’ve changed NOTHING?

Even running MySQL inside a VM that is the only VM on the box has a performance impact. You want to be using each physical machine to its fullest. If you’ve got a bunch of MySQL servers running inside VMs – you are not doing that.

(you can substitute just about any other database server for “MySQL” in all of the above… interestingly enough I have been told that a certain proprietary database server has a very low performance drop when run inside a VM)

Things I’ve done in Drizzle

When writing my “Dropping ACID: Eating Data in a Web 2.0 Cloud World” talk for LCA2011, I came to the realisation that I had forgotten a lot of the things I had worked on in MySQL and MySQL Cluster. So, as a bit of a retrospective as part of the Drizzle7 GA release, I thought I might try and write down an (incomplete) list of the various things I’ve worked on in Drizzle.

I noticed I did a lot of code removal; that’s all fine and dandy, but maybe I won’t list all of that… except perhaps my first branch that was merged :)

2008:

  • First ever branch that was merged: some mysys removal (using POSIX functions instead of wrappers that sometimes have different semantics than their POSIX counterparts), some removal of NETWARE support, build scripts that weren’t helpful (i.e. weren’t what any build team ever used to build a release) and some other dead code removal.
  • Improve ‘make test’ time – transactions FTW! (this would be a theme for me over the years, I always want build and test to be faster)
  • Started moving functions out into their own plugins, removing the difference between UDFs (User Defined Functions) and builtin functions. One API to rule them all.
  • Ahhh, compiler warnings (we now build with -Werror) and unchecked return codes from system calls (we now check these in some of our own methods too, finding and fixing even more bugs). We did enable a lot of compiler warnings and OMG fix a lot of them.
  • Removal of FRM – use a protobuf message to describe the table. The first branch that implemented any of this was merged mid-November. It was pretty minimal, and we still had the FRM around for a little while yet – but it was the beginning of the end for features that couldn’t be implemented due to limitations in the FRM file format. I wrote a few blog entries about this.
  • A lot of test fixes for Drizzle
  • After relating the story of ☃.test (hint: it broke the ability to check out the source tree on Solaris) to Anthony Baxter, he suggested trying something rather nasty… a unicode character above 2^16. I chose 𝄢 – which at the time didn’t even render on Ubuntu – making this the test case you could only see correctly on MacOS X at the time (or some less broken Linux distro, I guess). Some time later, I was actually able to view the test file on Ubuntu correctly.
  • I think it was November 1st when I started to work on Drizzle full time – this was rather awesome, although a really difficult decision as I did rather enjoy working with all the NDB guys.

2009:

  • January sparked the beginning of reading table information from the table protobuf message instead of the FRM file. The code around the FRM file was lovely and convoluted in places – to this day I’m surprised at the low bug count all that effort resulted in. One day I may write up a list of bugs and exploits probably available through the FRM code.
  • My hate for C++ fstream started in Feb 2009
  • In Feb I finally removed the code to work out (and store in the FRM file) how to display a set of fields for entering data into the table on a VT100 80×24 screen.
  • I filed my first GCC bug. The morning started off with a phone call and Brian asking me to look at some strange bug and ended with Intel processor manuals (mmm… the 387 is fun), the C language standard and (legitimately) finding it was a real compiler bug. Two hours and two minutes after filing the bug there was a patch to GCC fixing it – I was impressed.
  • By the end of Feb, the FRM was gone.
  • I spent a bit of time making Drizzle on linux-sparc work. We started building with compiler options that should be much friendlier to fast execution on SPARC processors (all to do with aligning things to word boundaries). -Wcast-align is an interesting gcc flag
  • Moved DDL commands to be in StorageEngine rather than handler for an instance of an open table (now known as Cursor)
  • MyISAM and CSV as temporary table only engines – this means we save a bunch of mucking about in the server.
  • Amazingly enough, sprintf is dangerous.
  • Moved to an API around what tables exist in a database so that the Storage Engines can own their own metadata.
  • Move to have Storage Engines deal with the protobuf table message directly.
  • Found out you should never assume that your process never calls fork() – if you use libuuid, it may. If it wasn’t for this, and if we had param-build, my porting of mtr2 to Drizzle probably would have gone in.
  • There was this thing called pack_flag – its removal was especially painful.
  • Many improvements to the table protobuf message (FRM replacement) code and format – moving towards having a file format that could realistically be produced by code that wasn’t the Drizzle database kernel.
  • Many bug fixes: some caused by us, others exposed by us, others that had always been there.

2010:

  • embedded_innodb storage engine (now HailDB). This spurred many bug and API fixes in the Storage Engine interface. This was an education in all the corner cases of the various interfaces, where the obvious way to do things was almost always not fully correct (but mostly worked).
  • Engine and Schema custom KEY=VALUE options.
  • Found a bug in the pthread_mutex implementation of the atomics<> template we had. That was fun tracking down.
  • Started writing more test code for the storage engine interface and upper layer. With our (much improved) storage engine interface, it became relatively easy to implement a storage engine to test specific bits of the upper layer.
  • Wrote a CREATE TABLE query that would take over four minutes to run. Fixed the execution time too. This only existed because of a (hidden and undocumented) limitation in the FRM file format to do with ENUM columns.
  • Characters versus bytes is an important difference, and one that not all of the code really appreciated (or dealt with cleanly)
  • FOREIGN KEY information now stored in the table protobuf message (this was never stored in the FRM).
  • SHOW CREATE TABLE now uses a library that reads the table protobuf message (this will later be used in the replication code as well)
  • Started the HailDB project
  • Updated the innobase plugin to be based on the current InnoDB versions.
  • Amazingly, noticed that the READ_COMMITTED isolation level was never tested (even in the simplest way you would ever explain READ_COMMITTED).
  • Created a storage engine for testing called storage_engine_api_tester (or SEAPITester). It’s a dummy engine that checks we’re calling things correctly. It exposed even more bugs and strangeness that we weren’t aware of.
  • Fixed a lot of cases in the code where we were using a large stack frame in a function (greater than 32kb).
  • An initial patch using the internal InnoDB API to store the replication log

2011:

  • This can be detailed later, it’s still in progress :)
  • The big highlight: a release.

Drizzle7

We’ve released Drizzle7! Not only that, we’re now calling it Generally Available – a GA release.

What does this mean? What does this GA label mean?

You could view the GA label as meaning “we’re pretty confident people aren’t going to en masse ask for our heads when they start using it”… which isn’t too bad a description. We also plan to maintain it: there could be future releases in this series that just include bug fixes – we won’t immediately tell you to go and use the latest tarball or bzr tree instead. This release series is a good one to use.

Drizzle7 is something that can be packaged in Linux distros. It’s no longer something where the best bet is to add the PPA and upgrade every two weeks or build from source yourself. If you’re looking to deploy Drizzle (or develop against it) – you can rely on this release.

I’ll never use the words “production ready” to describe a release – it’s never up to me. It’s up to each person or organisation looking to deploy a piece of software to decide if that bit of software is production ready for them.

Personally, I’m looking forward to seeing how people can break it. While Drizzle is the best tested FOSS SQL RDBMS server, I’m sure there are new and interesting ways it can be broken now that we’re saying we’re ready for a much larger crowd to hammer on it.

Overall, I think we’ve managed to take the now defunct MySQL 6.0 tree (way back in 2008) and release something that can truly live up to the line “database for cloud”. Drizzle is modern, modular, rather solid and understandable. The future is bright, there is so much more to do to make the ultimate database for cloud. Drizzle7 is a great platform to build on – both for us (developers) and us (people who use relational databases).

Fixed in Drizzle: No more “GOTCHA’s”



At the upcoming O’Reilly MySQL Conference & Expo 2011, I’m going to give a Thursday afternoon (2pm) session entitled Fixed in Drizzle: No more “GOTCHA’s”. I plan to have a lot of fun with this session.

If you go back to the very start of when I began submitting code to Drizzle (June 2008), I was fixing some of my favourite “gotcha’s” inside the code: BUILD/ scripts that didn’t build the way releases would, wrappers on POSIX functions with different (and inconsistent) semantics, NETWARE support, a non-thread-safe client lib, my_errno (different to errno) etc. I won’t really be talking about internals like this – it may give me a happy, but it really isn’t the latest awesome in databases.

I’ll instead be going over the way more awesome user- and DBA-visible things we’ve fixed/added/removed from Drizzle that make it a database with as few GOTCHA’s as possible.

Authentication (pluggable, LDAP), Logging (to syslog, gearman), DATA_DICTIONARY, INFORMATION_SCHEMA, engines owning their own metadata, STRICT mode by default, removing global mutexes, improving the Storage Engine API, improving the replication log, including code such as PBXT and PBMS Blob Streaming, filesystem engine (read files from disk like a table), pluggable protocol, UTF8 by default, ENUM data type, auto_increment behaviour.

All this and more is “Fixed in Drizzle”.

(oh, and there’s no 24bit integer or a BLOB that can only be 255 bytes)

Undocumented ALTER TABLE that does *nothing* (useful)

(at least since MySQL 5.1.42)

alter table t1 force;

Pretty neat huh? In fact, in Drizzle this will end up doing a copying alter table. Not useful.

There’s an over four year old bug report in MySQL (Bug#24091).

I’m just going to remove that bit from the parser in Drizzle – it makes no sense.

SQL Oddity: ALTER TABLE and default values

So, the MySQL (and Drizzle) ALTER TABLE syntax allows you to easily change the default value of a column. For example:

CREATE TABLE t1 (answer int);
ALTER TABLE t1 ALTER answer SET DEFAULT 42;

So, you create a TIMESTAMP column and forget to set the default value to CURRENT_TIMESTAMP. Easy, just ALTER TABLE:

create table t1 (a timestamp);
alter table t1 alter a set default CURRENT_TIMESTAMP;

(What this will do is left as another exercise for the reader – again, maybe not what you expect.)

ALTER TABLE RENAME RENAME RENAME

Here’s a nice challenge for you. What does the following do (or error out on?):

CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int);
ALTER TABLE t1 RENAME t3, RENAME t2, RENAME t4;

I’d be interested to know what a) you think it does and then b) if you were surprised when you went and typed it into your RDBMS of choice.

Timing queries in the 21st century (with LD_PRELOAD and sed)

So… Baron blogged about wanting higher precision timers from the mysql binary, and how running sed on the binary wasn’t cutting it. However… I am not one to give up that easily!

This is what LD_PRELOAD was made for! Evil nasty hacks to make your life easier!

By looking at the mysql.cc source code, I can easily work out how this works… I just have to override two calls: sysconf() (to fake how many ticks per second there are) and times() (to return a much higher precision number).
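
A minimal sketch of such a preload library (close in spirit to the junkcode linked below, though not line-for-line identical):

/* build: g++ -shared -fPIC -o hptimer.so hptimer.cc -ldl
   use:   LD_PRELOAD=./hptimer.so mysql ... */
#include <sys/times.h>
#include <sys/time.h>
#include <unistd.h>
#include <dlfcn.h>
#include <cstring>

/* __THROW keeps these definitions matching glibc's own declarations */
extern "C" long sysconf(int name) __THROW
{
  /* lie to mysql: pretend there are a million clock ticks per second */
  if (name == _SC_CLK_TCK)
    return 1000000L;

  static long (*real_sysconf)(int)=
    (long (*)(int)) dlsym(RTLD_NEXT, "sysconf");
  return real_sysconf(name);
}

extern "C" clock_t times(struct tms *buf) __THROW
{
  /* return wall clock time in microsecond "ticks", matching the lie above */
  struct timeval tv;
  gettimeofday(&tv, NULL);
  if (buf != NULL)
    memset(buf, 0, sizeof(*buf));  /* the mysql client only uses the return value */
  return (clock_t) tv.tv_sec * 1000000 + tv.tv_usec;
}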

Combined with the sed hack on the binary to change the sprintf call to print out the higher precision number, we have:

mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|   710720 |
+----------+
1 row in set (1.080110 sec)

Get it from my junkcode: http://www.flamingspork.com/junkcode/high_precision_mysql_timer.c

(or you can get it from http://paste.drizzle.org/show/364/)

Implicit COMMIT considered harmful.

If you execute the following, what does your RDBMS do?

CREATE TABLE t1 (a int);
START TRANSACTION;
INSERT INTO t1 (a) VALUES (1);
START TRANSACTION;
INSERT INTO t1 (a) VALUES (2);
ROLLBACK;
SELECT * FROM t1;

The answer may surprise you.

linux.conf.au 2011 Monday wrap-up

This year was a lot of fun. My talk was on the Thursday, which meant that there was most of the week spent going “eep” and making sure I was ready for it.

Total hats off to the organisers – you may have heard that a small area of Queensland (about the area of France and Germany combined – or roughly twice that of Texas) was declared a disaster area a couple of weeks before the conference. Honestly, if you hadn’t heard about it, you wouldn’t have noticed.

Sunday night we had a gathering of linux.conf.au Ghosts of Conferences Past. We refer to those previously involved in running an LCA as “Ghosts” – a wealth of knowledge for future teams. One of the most amusing tales was John Ferlito asking people what they would have done if, just under two weeks out, they had to change venues. The consensus: curl up in a corner and cry, hoping it would all just go away.

The lca2011 organisers didn’t.

So on the Monday I managed to catch a few good talks. In the morning I seemed to stalk Pia in her two talks: “Applying martial arts to the workplace: your guide to kicking arse” and “Cloud computing: finding the silver lining for government” in the Haeksen and Freedom In The Cloud miniconfs respectively.

If you didn’t manage to catch the martial arts talk, check out Pia’s blog post on it. I also strongly recommend a good martial arts school (contact me if you want).

The big take-away for me from the Cloud in Government talk was that I really should go and read the AGIMO Cloud Strategy document. The other take-away was that while government is looking at using new technology, it is not currently thinking at all about individuals’ rights (such as privacy), security and safety in the cloud.

I have been known to rock up to talks with no real idea of the content but in the hope that it could end up being cool. This was the next one I went to. The title, “Open Source: Saving the World”, sounded like a lovely idea, and the speaker, Noirin, is pretty cool. It turns out there’s some really awesome open source software out there for dealing with natural disasters, incidents of violence and education. Go check out the Lanka Software Foundation, the SAHANA Disaster Management System, Ushahidi (coming out of Kenya, for tracking incidents of violence) and the Talking Book (a little yellow box that records and plays back audio).

I checked out Aeriana’s talk on overcomplicating home networks… and a bit too much of it was familiar :) I also have ideas.

Daniel Stone’s talk on input in X and everything related was interesting – there is a lot of historical baggage around and some truly bizarre things. Think: grabs are what grab all input and make you sad, and OMG Multi-Pointer-X exists and it’s all backwards compatible OMG. I haven’t even mentioned the bits that allow your keyboard to have a key that makes demons fly from the screen, and a harpsichord to play tunes while they do.

I managed to catch Matthew Garrett’s “License compliance in Open Source business” talk. The fact that the very first store he walked into after arriving in Australia was selling a device that violates his copyright is, well, not a good sign.

At the end of the Monday I managed to catch about the last half of “Training Allies” that was the last session in the Haecksen miniconf. This was a pretty good session with a rather packed room (standing room only). This is rather promising I think.

Big Day Out 2011

Yesterday, I headed to the Big Day Out in Melbourne with Leah, Hayden and Michael. This is after Leah and I had spent the last week at linux.conf.au – which (as anyone who’s ever been knows) is wonderful and tiring. I am amazed that this conference has continued to be so incredibly awesome and am still amazed that I speak at it and that my talks are often well received.

[Photos: Leah at Big Day Out; Hayden and Michael at BDO]

First off we managed to see the Deftones. Somebody seriously needed smacking around the head with the levels… it just didn’t quite sound right. I do like the Deftones though, and was glad to catch them.

By this time in the day, it was making days on the playa seem like child’s play. Lots of water and really quite warm. Of the 52,000 at BDO this year, 1000 were treated for dehydration. I think there need to be giant “Piss Clear” signs here too.

I managed to see Paul Dempsey, who I’ve seen solo once before – playing Something For Kate material, several years ago now – and it was truly one of the most awesome shows I’ve seen. I really enjoyed his performance at BDO and he certainly has to be one of my favourite male vocalists.

[Photo: Paul Dempsey]

Next up? Wolfmother! That’s right kids, I missed half of Iggy and the Stooges to see Wolfmother. They didn’t disappoint – it was rather awesome and I sooo totally nearly ditched everyone in the shade to run in jumping about.

[Photos: Wolfmother]

Iggy and the Stooges (well, the last half): I want to be doing this when I’m 63. Seriously. One year Hayden said that Iggy Pop was the best act he saw… and I can believe it. Again, awesome.

[Photo: Iggy]

Then it did get explodingly awesome. Yes, this means Rammstein.

[Photos: Rammstein]

There should certainly be more bands using explosions as instruments and, midway through, firing a flamethrower high enough that shorter people in the middle of the crowd can see it – how considerate! The heat from the fire could be felt from where we were – AWESOME.

Finally….. Tool.

[Photo: Tool]

I saw them last time they were at Big Day Out (and when they played in Melbourne the next week). I’m doing the same this time around. I think finishing with Stinkfist gave a lovely end to the night – and I loved how the crowd was right into Aenima. I cannot wait to see them on Wednesday night.

They took my Kodachrome away

It’s done. It’s gone. You can still find rolls on eBay and the like, but you’re not going to get them developed as colour slides. Dwayne’s Photo stopped accepting Kodachrome on December 30th 2010. I have a set on flickr of my Kodachrome shots. The scans do not do them justice. They look way better projected. I will also talk about the demise of Kodachrome without mentioning the Paul Simon song. oh, wait. fuck.

I’ve now gotten all my Kodachrome back. The last package arrived while I was away at linux.conf.au 2011 in Brisbane.

It is certainly the end of an era, and the last of my shots will go up on flickr.