I’ve published the schedule for the Developer, Testing, Release and Continuous Integration miniconf at LCA2014. Enjoy and I hope to see you there!
(I hope the LCA website will be updated shortly with the schedule)
I’ve sneakily kept the CFP open for a while after the “deadline” and will be closing it for good on Wednesday, December 4th. SUBMIT NOW!
I have just opened the Call For Papers for the Developer, Testing, Release and Continuous Integration Automation miniconf at linux.conf.au 2014.
This miniconf is all about improving the way we produce, collaborate, test and release software.
We want to cover tools and techniques to improve the way we work together to produce higher quality software:
– code review tools and techniques (e.g. gerrit)
– continuous integration tools (e.g. jenkins)
– CI techniques (e.g. gated trunk, zuul)
– testing tools and techniques (e.g. subunit, fuzz testing tools)
– release tools and techniques: daily builds, interacting with distributions, ensuring you test the software that you ship.
All sessions are 30 minutes unless there is prior arrangement. Typically there is a VGA plug at the front of the room but if you have any specialized A/V requirements please enter them as notes at the end and we’ll see what we can do.
Submissions are open until November 20th, with notifications going out over the following 1-2 weeks.
So, I started this thought experiment: let’s assume for the moment that government is completely trustworthy, only has your interests at heart and doesn’t secretly sell you out to whoever they feel like. Now, on top of that, what about the agreements you enter into with corporations? How long are they and could you properly understand all the implications to your privacy and give informed consent?
So… I started with when I left home. I got on a Virgin Flight, they have a privacy policy which is eight pages. I then arrived in New Zealand and filled out a customs form. I could not find anything about what the New Zealand customs service could do with that information, but let’s just assume they’re publishing it all on the internet and selling it to the highest bidder. The other alternative is that they follow the New Zealand Privacy act, which is a mere 182 pages.
Once getting through customs I turned on my phone. The basics are probably covered by the New Zealand Telecommunications Privacy Code (35 pages) and since I was on Vodafone NZ, their three page privacy policy likely applies. Of course, I’m roaming, so the Vodafone Australia three page privacy policy also likely applies (of course, under a completely different legal framework). There’s likely things in the other agreements I have with Vodafone, the standard agreement summary is a mere 4 pages and the complete agreement is 84 pages.
I arrived at my hotel and the Langham privacy policy is two pages. I then log into Facebook, 30 pages of important things there, into Twitter, another 11 pages. My phone is all hooked up to Google Play, so that’s another 10 pages. I walk into the conference, the code of conduct is a single page which was a pleasant relief. I then log into work mail, and the GMail terms of service is three pages with a four page privacy policy.
If I were someone who used iTunes, it would be reasonable that I would watch something in the hotel room – another 24 pages of agreement – before deciding to call home, carefully reading the full 20 pages of Skype terms of service and privacy policy.
In total, that’s 428 pages.
This excludes any license agreements to the operating system you’re using on your laptop, phone and all the application software. It also excludes whatever agreement you enter into about the CCTV footage of you in the taxi to and from the airport.
So, my question to the panel at OSDC was: how on earth is the average consumer meant to be able to make an informed decision and give their informed consent to any of this?
A thought from a good discussion with François at OSDC today: what is the carbon footprint of various languages? He mentioned that the carbon footprint of a new Haskell compiler release is remarkably non-trivial due to every Haskell package in Debian needing to be rebuilt.
So, I thought, what’s the impact of something like Python? (or Perl). Every machine running the code has to do the bytecode compilation/JIT/interpretation of that code so when, say, Ubuntu ships some new version of $random_desktop_thing_written_in_python, we’re actually compiling it well over 20 million times. That’s a remarkably non-trivial amount of CPU time (and thus CO2 emissions).
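As a rough sketch of that per-machine cost: Debian and Ubuntu byte-compile Python packages at install time, which is essentially the following, repeated on every installing machine for every shipped .py file (the module here is a made-up throwaway for illustration):

```python
import os
import py_compile
import tempfile

# A throwaway module standing in for some shipped .py file
src = os.path.join(tempfile.mkdtemp(), "random_desktop_thing.py")
with open(src, "w") as f:
    f.write("print('hello')\n")

# This is the work every installing machine repeats: parse the
# source and write out cached bytecode (a .pyc file)
pyc = py_compile.compile(src)
print(pyc)  # path of the generated bytecode file
```

Multiply that parse-and-compile step by tens of millions of installs and it stops being free.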
So, program in compiled languages such as C or C++ as doing so will save polar bears.
I’ll be in Hong Kong for the upcoming OpenStack Summit Nov 5-8. I’d be thrilled to talk database things with others present, especially around Trove DBaaS (DataBase as a Service) and high availability MySQL for OpenStack deployments.
I was last in Hong Kong in 2010 when I worked for Rackspace. The closest office to me was in Hong Kong so that’s where I did my HR onboarding training. I remember telling friends on the Sunday night before leaving for Hong Kong that I may be able to make dinner later in the week purely depending on if somebody got back to me on if I was going to Hong Kong that week. I was, and I went. I took some photos while there.
Walking from the hotel where we were staying to the Rackspace office could be done pretty much entirely through buildings without going outside. There were bits of art around too, which is just kind of awesome – I’m always in favour of random art.
The photo below was the view from my hotel room. The OpenStack summit is just by the airport rather than in the middle of town, so the views will be decidedly different to this, but still probably quite spectacular if you’re around the right place (I plan to take camera gear, so shout if you want to journey too).
There are some pretty awesome markets around Hong Kong offering just about everything you’d want, including a lot just out on the street.
Nighttime was pretty awesome; having people from around the world journey out into the night was great.
I was there during the World Cup, and the streets were wonderfully decorated. I’m particularly proud of this photo as it was handheld, at night, after beer.
Last night I opened this wonderful bottle that I acquired on Friday (the only reason I didn’t open it then was the 10k Melbourne Marathon event I ran early on Sunday morning).
It was delicious. The fig really came through and lent a nice sweetness to contrast with the bitterness. I’m now wondering why I don’t make things with figs more often….
The other week Leah and I went to the Royal Melbourne Show (she won free tickets which makes it a lot easier to swallow than the $35/head otherwise) and I picked up some coffee beans while there (why not!). These beans are called “The Guji” and are from Cartel Coffee Roasters down in Geelong. I opened them the other day and as an increasing number of my Percona colleagues can attest to, I’ve been raving about them. These are some seriously good beans.
Over a year ago now, I announced the first Percona Server 5.6 alpha on the Percona MySQL Performance Blog (Announcing Percona Server 5.6 Alpha). That was way back on August 14th, 2012 and it was based on MySQL 5.6.5 released in April.
I’m really happy now to point to the release of the first GA release of Percona Server 5.6 along with some really interesting benchmarks. We’ve certainly come a long way from that first alpha and I’m really happy that we’ve also managed to continue to release Percona Server 5.5 and Percona Server 5.1 releases on time and of high quality.
Over the same time frame that we’ve been working on Percona Server 5.6 we’ve increased the size of the company, improved development practices and grown enough that we’ve reorganised how development of software is managed to make it scale better. One thing I’m really, really pleased about is a culture of quality we’ve managed to nurture.
Keeping a culture of quality alive is something that requires constant nurturing. All too often I’ve seen pressure to ship sooner rather than stabler (yes, I just invented that word), and yes, we initially planned the GA of PS 5.6 earlier than we ended up shipping it, but we instead took the time to round out features and stability to ship something much better.
Now comes the effort of continuing to produce good releases, promoting it, and preparing a webinar to give next week.
It’s getting close to time to head to Auckland for OSDC, and a few days ago I blogged about how I’m speaking there. I’ll be speaking on MySQL In the Cloud, As A Service and all of the challenges that can entail, as well as on The Agony and Ecstasy of Continuous Integration. Both of these talks draw heavily on the experience of Percona (my employer): helping customers with all sorts of MySQL deployments and producing our own high quality software.
I was in Auckland earlier this year, so thought I’d share some pictures of the wonderful city in which OSDC is being held.
Firstly, New Zealand has some pretty awesome wildlife. This is possibly not the best example of it ever as there are way more odd looking birds than this one:
The waterfront is quite nice, and when we were there earlier in the year it was awfully nice weather for it:
I’m pretty sure there isn’t going to be a triathlon in Auckland for OSDC, but I’m still hoping to get out for a run while there (anybody else up for one?). We left home at something like 3:30 in the morning and got some silly early flight (6am or before) and were totally walking around the city a little like zombies, realising that we simultaneously wanted to go for a run and sleep.
We were meeting friends from Seattle and managed to spot this coffee place down by the water. I didn’t try it myself, but I’ve certainly had good coffee at other places in New Zealand.
Streets at night:
And if I haven’t already convinced you that Auckland would be a great place to be, here’s a crappy cell-phone snapshot of a variety of New Zealand beers – a tiny, tiny fraction of the beer you can get in New Zealand (the microbrewery scene is amazing).
Go register for OSDC 2013 right now: http://osdc.org.nz/tickets/
I’ve finally gotten around to uploading a bunch of photos I’ve had sitting around for quite a while now. Recently I finally got around to shooting some Velvia 50, after several years of meaning to. This is all 35mm with a Nikon F80 and likely all with the 50mm lens. I’m quite pleased with some of the results, slightly more so for the ones not in full direct bright sunlight – that was much better light for photos anyway. The downside of Velvia 50? Not a portrait film at all.
It’s been over ten years since the last linux.conf.au in Perth but don’t worry, this upcoming January, we’re back in Perth for linux.conf.au 2014. I’m really looking forward to getting back to Perth as I’ve only been there very, very briefly since 2003 and would love to explore the city a bit more.
Perth 2003 was the first linux.conf.au I ever went to and I’ve been to every single one since (2004 in Adelaide, 2005 in Canberra, 2006 in Dunedin, 2007 in Sydney, 2008 in Melbourne, 2009 in Hobart, 2010 in Wellington, 2011 in Brisbane, 2012 in Ballarat and 2013 in Canberra – each one of them absolutely brilliant). A few things were different back then, for example, there was a terminal room with actual terminals where you could use cutting edge technologies such as telnet.
As a surprise to many, 2003 was the first year that Linus came to an LCA, arriving in the fashion of the time (a penguin suit).
I have many fond memories of LCA back in 2003 and with the list of speakers and miniconfs for this year mostly up already, it’s looking to be an excellent conference in January 2014 – just a few short months away.
Early bird registrations finish soon so head on over to https://lca2014.linux.org.au/registration/new to register now.
I just replaced the old Pandora boost m4 macros in a project with boost.m4 from https://github.com/tsuna/boost.m4 and it basically just solved all my problems with Boost and the whole set of distributions that I build for (everything from CentOS/RHEL 5 to Debian unstable).
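For anyone making the same switch, a minimal configure.ac fragment using boost.m4 looks something like this (macro names as documented by boost.m4; the minimum version and which library checks you need are assumptions for illustration):

```
# Check for Boost headers, requiring at least this version
BOOST_REQUIRE([1.40])
# Then one macro per Boost library you actually link against
BOOST_THREADS
BOOST_FILESYSTEM
BOOST_PROGRAM_OPTIONS
```

Drop boost.m4 into your m4 macro directory and the macros handle the header and library detection across distributions for you.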
I like things that other people maintain.
I’ll be speaking at the upcoming OSDC conference in Auckland, New Zealand! It’s on October 21st-23rd and you should go here right now and register. I’m giving two talks at OSDC this year:
In case you need to quickly justify to your boss why you should go to OSDC, the conference organisers have helpfully provided a page of hints on just that subject.
I’ve used the Bazaar (bzr) version control system since roughly 2005. The focus on usability was fantastic and the team at Canonical managed to get the entire MySQL BitKeeper history into Bazaar – facilitating the switch from BitKeeper to Bazaar.
There were some things that weren’t so great. Early on when we were looking at Bazaar for MySQL it was certainly not the fastest thing when it came to dealing with a repository as large as MySQL. Doing an initial branch over the internet was painful and a much worse experience than BitKeeper was. The work-around that we all ended up using was downloading a tarball of a recent Bazaar repository and then “bzr pull” to get the latest. This was much quicker than letting bzr just do it. Performance for initial branch improved a lot since then, but even today it’s still not great – but at least it isn’t terrible like it once was.
The integration with Launchpad was brilliant. We never really used it for MySQL but for Drizzle the combination was crucial and helped us get releases out the door, track tasks and bugs and do code review. Parts of Launchpad saw great development (stability and performance improved immensely) and others did not (has anything at all changed in blueprints in the past 5+ years?). Not running your own bugs db was always a win and I’m really sad to say that I still think Launchpad is the best bug tracker out there.
For both Drizzle and Percona, Bazaar was the right option as it was what MySQL was using, so people in the community already knew the tools. These days, however, Git is the tool that there’s large familiarity with – even to the extent that Twitter maintains their MySQL branch in Git rather than in bzr.

Is Bazaar really no longer being developed? Here are graphs (from GitHub, actually) on the activity on Bazaar itself over the years:

You can easily see the drop off in commits and code changes. The last commit to trunk was 2 months ago and although there was the 2.6.0 release in August, in my opinion it wasn’t a very strong one (the first one I’ve had problems with in years).

So… git is the obvious successor, and with such a strong community around GitHub, it kinda makes sense. I’m not saying that GitHub has caught up to Launchpad in terms of features or anything – it’s just that with Bazaar clearly no longer really being developed, it may be the only option.

In fact, in my experiment of putting a mirror of Percona Server on GitHub, we already have a pull request mere days after I blogged about it. Migrating all of Percona development over to Git and GitHub may take some time, but it’s certainly time that we kicked the tyres on it and worked out how we’d do it without interrupting releases or development.

I’ve also thrown up a Drizzle tree and although it required some munging to get the conversion to happen, I’m kind of optimistic about it, and I think that after a round of merging things I’m tempted to very strongly advocate for us switching (which I don’t think there’ll be any opposition to).

When will Oracle move over their MySQL development? This I cannot say (as I don’t know and don’t make that call for them). There is a lot of renewed interest in code contribution by Oracle, and moving to Git and GitHub may well be a very good way to encourage people.
The downside of git? Well… With BZR you could get away with not understanding pretty much every single bit of the internals. With git, I wish I was so lucky.
I’ve been mirroring a bunch of projects that have their source control in BZR up onto github recently. This turns out to be a bit harder than it sounds for a bunch of reasons that aren’t particularly interesting (although having a commit in the bzr repo where the name of the committer has a newline in it is among the more interesting).
Run on over to https://github.com/stewartsmith/drizzle to check it out. I’ve put up Drizzle 7.0, 7.1 and 7.2 branches.
For MySQL 5.1, 5.5 and 5.6 in the same repository, after repacking:
bzr: 269MB (217MB pack, 52MB indices)
git: 177MB repo (152MB pack)
One thing I’ll say is that bzr is always more chatty over the network and substantially slower than git at pulling a fresh copy.
The other night I made this marinade with some tofu (pressed in a TofuXpress to get more water out).
The recipe for the marinade is:
I marinated the tofu (I used a really firm tofu – which is also really high in protein) and fried it in a really hot skillet (which gave it a lovely colour). It was delicious. I served it with some broccoli (quickly fried with garlic and ginger and a tablespoon or two of the marinade) and some cous cous.
Actually… I think I’ll have to make this again soon. Really simple to do (although pressing the tofu and marinating takes time, but time you can do other things) and really tasty.
So, around the time one would reasonably expect an extra tap to have been put on our ADSL line, we started getting relatively frequent dropouts. This was somewhat resolved for a while until a few months ago… when ADSL dropouts started occurring several times a day.
Internode then informed me that “frequent dropouts” actually meant something like 5 or 7 per day rather than “more than once a day” and “this just started happening recently”. Colour me not impressed already. Anyway, it got a bit worse and I managed to convince them there was an actual fault. There were some tests done (which all failed somehow) and it ended with a technician coming out and checking things, then heading to the telephone exchange to do further tests before calling me (they didn’t… and after a few hours I gave up waiting and went for lunch).
So.. things mysteriously got a little bit better for a little while (a week maybe?) and then I went on vacation for a few weeks so didn’t really care what state my home ADSL connection was in.
Now, back from vacation and there were still dropouts. Then, on Friday things just went dead. I called Internode as I have done before, hoping for some actual action this time (it’s now months into this). They would not even log a fault without me listening for a dialtone. There is one problem with this, I don’t have a landline phone and haven’t wanted one for at least seven years. Yes, for seven years I have been paying at least $20/month for a service I did not want (that’s at least $1680 for those playing at home).
I explained that I did not have a phone, but still, no fault would be logged without listening for a dialtone. I asked what would happen if I had Naked DSL (as this is what I have wanted for SEVEN FUCKING YEARS but have been unable to get) and got some rather weird answer about it just being “more difficult to diagnose problems”. In a previous call with Internode before I went on vacation, I was informed that there would be a downtime of 7-10 days if I were to switch to Naked DSL and it wouldn’t be any cheaper than what I pay now (so I opted to forgo that, as it seems silly to pay the same amount of money for just an added long outage).
So, I expressed my dissatisfaction at having to go out and buy a landline phone that I didn’t fucking want… but $25 later I could ring back and say “there’s no dial tone” and then they had the balls to ask if I had another phone I could plug in and test. Seriously.
Finally though, a fault could be logged – and it was only a few hours out of my Friday and a further $25 out of my pocket for something I don’t want. Then there was the “good” news of when the problem would be fixed. In 24-48 hours there’d be an update – not a resolution. Then, it may take a couple of business days to have the phone line work again. Grr. Luckily we’re not old and in risk of having a heart attack and needing to call an ambulance as the prospect of 4 days without a dial tone would then be terrifying.
Luckily I have a 3g dongle for backup! Also, the Billion router I have has a USB port that will fail over from ADSL to 3g! Fantastic! For the lowly sum of $15/month Internode will give you 1GB of 3G data. It’s kinda useful (cheaper than hotel internet too) and useful to have a backup. You can also purchase extra data blocks if 1GB isn’t enough. It turns out that 1GB doesn’t last very long in our house while I’m working (even when being frugal with bandwidth). The price of a 1GB data block is now $39.90. This is a tad exorbitant even for 3G data prices… so I started to search elsewhere.
Amaysim looked like a good deal: $99.90 for 10GB of data to use within a year. A much better deal, in fact, a quarter of the price of Internode! So, knowing that the USB dongle I bought from Internode a few years ago is just a generic one that’s unlocked, I went and bought an Amaysim SIM card and loaded it up. It worked on my laptop (Fedora) just fine. It did not work in the Billion router. It just didn’t get an IP address.
An hour of fiddling with things, convinced I had something wrong, I rang Amaysim. I was on hold. They said I could “chat now” with someone on their web site. I tried that, it said 25 minute wait… so I figured that staying on hold couldn’t possibly be that long. Once I reached 45 minutes on hold, I also started the website “live chat” thing – it said 29 minutes. After an hour on hold, the phone went silent. It stayed silent. About that time I got through on the web site and the person was rather useless in helping debug the problem. The best I got was “with routers things get more complicated”. I don’t care about complicated, but at least they could refund the money (in 3-5 business days though) and close the account. The explanation offered: provider not compatible with device. This I’d never heard of.
Also, the Billion router has sweet fuck all diagnostics as to why something may not be working. I am convinced that what I really want is an actual Linux box where I can run Debian and, say, look at the PPP logs.
I then tried Virgin Mobile. Exact same thing, except that the guy at the Virgin Mobile store said that if it didn’t work I could just bring it back and when I called their tech support I was on hold for maybe 3 seconds. Even though it didn’t work, at least I can go back and return it tomorrow. I heartily recommend them as my experience has been rather positive and I kinda wish it had worked.
So, five hours out of my Saturday and I still didn’t have a working solution, and have ended up paying crazy amounts of money for data as at least the Internode SIM works when the dongle is connected to both the router and my laptop and not just my laptop.
Maybe, sometime next week, I’ll have a proper internet connection that works… and hey, with luck, perhaps we’ll have this NBN thing at some point that actually delivers more bandwidth to my house than we get to Mars.
Oh, and if anybody knows of an ISP that is like how Internode used to be, let me know.
First I find out the first commit that is in 5.7 that isn’t in 5.6 (using bzr missing) and then look at the authors of all of those commits. Measuring the number of commits is a really poor metric as it does not measure the complexity of the code committed, and if your workflow is to revise a patchset before committing, you get much fewer commits than if you commit 10 times a day.
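The tallying itself is simple; here’s a sketch of the counting step (the committer names below are invented for illustration – in practice you’d feed in the author of each commit that `bzr missing` reports):

```python
from collections import Counter

# Invented sample of per-commit author names, standing in for the
# authors of the commits that are in 5.7 but not in 5.6
authors = ["alice", "bob", "alice", "carol", "alice", "bob"]

counts = Counter(authors)
# Print in the "count. number-of-commits name" layout used below
for rank, (name, commits) in enumerate(counts.most_common(), start=1):
    print(f"{rank}. {commits} {name}")
```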
There are a good number of people who are committing a lot of code to the latest MySQL development tree. (Sorry for the annoying layout of “count. number-of-commits name”)
There’s also a good number who have 50-100 commits:
And there’s even more with less than 50:
There’s also a good number with fewer than 10 (31 names actually), which is encouraging, as these are likely people who are not involved every day in the development of new code (maybe QA, build, etc.), which probably means that (at least internally) contributing code isn’t really a big problem (and as I’ve shown previously, the barriers to external contributions between Oracle MySQL and MariaDB appear to result in roughly the same amount of code from people outside those companies).
There are 125 names here in total, with 19 having over 100 commits, 22 with 50-100 commits, another 53 with 10-50 commits and 31 with <10. So it’s possible to say that there are at least 125 people at Oracle working on MySQL – and I know there are awesome people who are missing from this list as their work doesn’t result in committing code directly to the tree.