A couple of weeks ago, I attended the Open Source Developers Conference (OSDC) in Sydney where I gave the dinner keynote. I had previously given the dinner keynote at OSDC 2010 in Melbourne, where I explored a number of interesting topics “that I wasn’t really qualified to talk about.”
In writing the dinner keynote for 2010, I took the idea that people come to conferences to hear from experts in the field and decided that I should instead do the opposite: talk about all the things that I think are interesting but am not an expert in. So in 2010 I covered: the Drizzle database server (the only thing I was actually qualified to talk about), developing your own film, how much effort it takes to write a book, brewing your own beer, BlueHackers (and mental health in general), security (it was the time of Stuxnet and oppressive border security), censorship (and how government claims that the internet is both different from and just like publishing a book), Wikileaks, how perhaps we should go after child pornographers rather than waste money on totalitarian filters, feminism, codes of conduct at conferences, homophobia, a history of marriage and the notion of ‘traditional marriage’, the concept of freedom itself, and a few pictures of vegetables made to look like faces. In the words of some attendees, “there was something to make everyone at least slightly uncomfortable at some point”.
My 2010 talk went really well; there was much applause, and it inspired at least one person to go and brew their own beer (in itself a victory). Many thanks to Donna for spending a non-trivial amount of time helping me polish the final talk and helping ensure that some of my most important points were communicated properly.
So for 2012, I felt I had some big shoes to fill. Picking a topic (and writing the talk itself) for a dinner keynote is tricky. You have a captive audience with a wide variety of interests (and likely a few partners of attendees who aren’t at all technically minded). I wanted a topic that allowed for a good amount of humour (after all, we’re at dinner, relaxing and chatting) as well as a serious message that would speak to all the developers in the room (after all, this was the Open Source Developers Conference). Needing a title well in advance of when I would actually start writing the talk, I started thinking along the lines of “Those who do not know UNIX are doomed to re-implement it – poorly” and “Those who do not know the past are doomed to repeat it”, figuring that there must be some good lessons I’ve learned over the past years that could be turned into a dinner talk. I ended up settling on “Those who do not know the future are doomed to repeat it”.
I, of course, left most of the specifics to be determined much closer to the conference itself, as procrastination seems to be an integral part of writing a talk. Fast forward a while, and if you were nearby you would have heard me exclaim “Who had this dumb-ass idea for a talk?” and “well, it seemed like a good idea at the time.” Setting yourself constraints is good, and it at least narrowed the search space for constructing something that’d go down well. Next came “How on earth do I construct a cohesive narrative around that?” A whole bunch of fun anecdotes about what people in the past considered the future is great, but how do you weave a story around them? In thinking about what used to be the future (and indeed, researching it), I realised that this was itself a really good story, and a vehicle to talk about how to produce better software.
And so I solidified a set of laws, which, for mostly humorous purposes, I’ve called “Stewart’s Laws”. We started with:
Those who do not know the future are doomed to repeat it.
Stewart’s 0th Law
Because in computing, we start counting from zero.
I then went on a grand tour of how we got the PC: early personal computers were iterative improvements on the technology that came before; packaging technology as an appealing product helps adoption; and no matter how good something is, if it’s too expensive, it’s never going to be mainstream. This last point was a homage to the great Hitchhiker’s Guide to the Galaxy, which won out over the great Encyclopaedia Galactica for two reasons, one of which was that “it was slightly cheaper”.
The platform that is more open will eventually win out over the ones that are more closed. (This really should have been a law… but I missed the opportunity.) One example was Mozilla: the initial source release was way back in 1998, and this “quirky open source project” took a very long time to deliver a useful web browser (and that’s excluding all the internal Netscape development on this complete rewrite of the browser).
All complete rewrites of any sufficiently complex software take at least five years to be remotely usable.
Stewart’s 1st Law
With the insight that the more free platforms (the PC, Windows, the web, Mozilla) eventually win out, and this being a talk about the future, I could not possibly not cover “The Year of the Linux Desktop”. This was a useful excuse to cover the install and user experience of Debian 2.2 (potato). This was Linux in the year 2000, complete with IPv6 support (and with World IPv6 Day only six months ago, that is certainly the future). It was not friendly.
But then there was KNOPPIX, which built on what came before and showed the way, and other distributions followed; there are now many distributions of Linux that make running a free desktop something that is no longer masochistic, but decidedly pleasant.
I (of course) had to cover the freedom in your pants: the cell phone. Specifically, there is now more free software running on the computer that fits in our pants pocket than there was storage in the computers we grew up with. It doesn’t matter whether Android is better than the iPhone or not; the more open, free and cheaper platform always wins. But really, it’s just iterative improvement on what came before.
All innovation is really just iterative improvement.
Stewart’s 2nd Law
Very rarely (if ever) is there a “eureka” moment that doesn’t build upon the work of others. Find your giant to stand upon so that you can see further.
We can, of course, get it wrong. I used the example of New Coke, and wondered if Unity or GNOME 3 are our “New Coke”, or if Windows 8 is the new Vista. But really, it’s not making a mistake that is bad; it’s failing to realise it and correct course. What we need is CI. Not Continuous Integration (although that is part of it): Continuous Improvement.
Anybody who took a “Software Engineering” course at university will have read about, studied, and parroted things about “the waterfall model”, “software prototyping”, the “incremental build model”, the “spiral model”, and maybe even “SCRUM” or XP (which sounds like it should involve jumping off cliffs and yelling at fish). You probably had to do an assignment where you wrote “we’re going to use the X model” and then had to stick to it, quickly finding that it just didn’t quite work that way.
This is because any static model of software development methodology is a bunch of dairy production byproduct, otherwise known as BOOLSHIT. There is no one way written in stone, and there certainly isn’t “one true way”.
The best battle plans don’t survive first contact with the compiler.
Stewart’s 3rd Law
This law is obviously stolen, which leads me to:
Stealing good ideas is itself a good idea, that you should steal.
Stewart’s 4th Law
Software development is evolution by natural selection: mutations in software battle it out and the fittest survive. This is even more true in free software development, as anyone is free to fork the product, mutate it and compete. In this way, free software accelerates the free market: it forces companies to continually add value rather than rely on vendor lock-in.
Our development processes also evolve. We try new things and keep what works. There may be a “state of the art” that we think exists, but really what matters is continuing to improve your development process. You don’t have to suddenly catch up, just improve.
- Revision control: We’ve had RCS, CVS and Subversion. We’ve had bzr, hg and git. Distributed is obviously the current state of the art.
- Code review: And improving how we do code review. Could you review code better? Could we have automated code review?
- assert(), making the compiler do the work, defensive coding: Write code to do some of your code review for you (see the sketch after this list).
- Exploding at compile time rather than at runtime (i.e. not user visible): Detecting problems earlier is better.
- Extensive unit testing: Test each component, and have components be components, not spaghetti.
- Extensive testing: Test the system as a whole.
- Running the test suite: Actually run the test suite.
- A reliable test suite: Have the test suite be reliable, so that a failure really is a failure and not a false negative.
- Continuous Integration: Always test how things go together.
- Testing before integration: Test before pushing to trunk, ensuring even further that trunk is always releasable.
- A merge captain: Takes approved code and merges it. These are variants on the Linus model.
- Automated merges: Take the manual steps out; we can automate them (who needs to type ten version control commands when one will do?).
- An always releasable trunk: “Release early, release often” refined to “release something that isn’t crap”.
- Release checklists: There are probably different things you want to do upon release; check that you do them. For awesome new features, you want your marketing department to know about it. For awesome bug fixes, you want your support staff to know about it.
- Continuous Deployment: There is no environment like the production environment.
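To make the assert()/compile-time items concrete, here is a minimal C++ sketch (my own illustration, not code from any project mentioned here; the OnDiskHeader struct is hypothetical) of pushing a check back from runtime to compile time. The static_assert explodes when a developer builds the code, while the assert() only fires when the code actually runs:

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>

// Hypothetical on-disk record header, purely for illustration.
struct OnDiskHeader {
    std::uint32_t magic;
    std::uint32_t version;
    std::uint64_t row_count;
};

// Compile-time explosion: if anyone changes the struct's size, the
// build fails for the developer, long before a user ever runs it.
static_assert(sizeof(OnDiskHeader) == 16,
              "OnDiskHeader changed size: bump the on-disk format version");

// Runtime defensive check: assert() does part of the code review for
// us by making a violated assumption fail loudly during testing.
static std::uint64_t rows_per_page(std::uint64_t page_size,
                                   std::uint64_t row_size) {
    assert(row_size > 0 && "row_size must be non-zero");
    return page_size / row_size;
}

int main() {
    std::printf("%llu rows per page\n",
                static_cast<unsigned long long>(rows_per_page(4096, 64)));
    return 0;
}
```

The point isn’t this particular check; it’s that the compiler and the assert are doing a slice of your code review for free, and doing it earlier than any human reviewer could.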
Thinking about deployment led me to two more laws:
Any system of sufficient size will have several versions of each component deployed simultaneously.
Stewart’s 5th Law
Constructing software is itself a system of sufficient size.
Stewart’s 5th Law, part B
This applies both to software you deploy yourself and to software you release as a tarball (or however you distribute it). Even if we don’t like to think about it, when we release a software package we are slightly involved in deploying it, and we can certainly make it easier or harder to deploy. There will always be OS and library updates out there, so your software will always be running in different environments.
Not only will the people using your software run it in different environments; the people developing it will too. No two developers have the same development environment. One will use a different editor, a different shell, a slightly different version of the compiler (maybe they haven’t applied an update yet), and so on. We can’t program to the “one true environment”, because no such thing exists.
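As a small concrete example of living with several versions at once, here is a C++ sketch (using zlib purely as a stand-in for any versioned dependency; the first-character comparison mirrors the compatibility check zlib’s own documentation suggests) that compares the version the binary was compiled against with the version actually loaded at runtime:

```cpp
#include <cstdio>
#include <zlib.h>  // zlib here is just a stand-in for any versioned dependency

int main() {
    // The version this binary was compiled against (from the header)...
    const char *compiled = ZLIB_VERSION;
    // ...and the version of the library actually loaded at runtime,
    // which on a real deployment may well be newer or older.
    const char *runtime = zlibVersion();

    std::printf("compiled against zlib %s, running with zlib %s\n",
                compiled, runtime);

    // Per zlib's documentation, only the major version (first character)
    // needs to match; treating any difference as fatal would be too strict.
    if (compiled[0] != runtime[0]) {
        std::fprintf(stderr, "incompatible zlib loaded: rebuild or upgrade\n");
        return 1;
    }
    return 0;
}
```

The compiled-against and running-against versions differing isn’t an error condition to stamp out; it’s the normal state of any deployed system, and software that checks for it degrades gracefully instead of mysteriously.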
So, what’s next?
Some of our older problems have good solutions, but many of the newer ones do not. How do we get the state of the art in software development to more people? What’s the next step to explore?
I encourage you to constantly think about your development process and what the future holds for it. After all, it is adapt or perish – the past is littered with the technological corpses of things that were “the future” but failed to innovate any further.