Adverts or prettiness?

I normally read Planet MySQL from RSS, but for some reason I ended up on the actual site today in a Web Browser (Epiphany to be exact) and saw this:

planetmysql-ugly.jpg

And thought, “wow, ugly”. I don’t keep my browser “maximised” because I think it’s a stupid way to work – I often switch between tasks or like to have an editor open while referring to something in a browser (e.g. some tech details of some source module), or monitor IRC or IM. I remembered that Epiphany has an Ad Blocking extension, so in an effort to de-uglify, I enabled it. I now see:

planetmysql-pretty.jpg

Hrrm… much better. Notice how the links on the left to the most active blogs are actually useful now (I can read them).

Note that this isn’t a rant on adverts on web sites – I can handle them (the google ones which aren’t obtrusive) – I’m against the uglyweb.

pluggable NDB

Spoke with Brian the other day on what was required to get NDB to be a pluggable engine – and started hacking.

The tricky bits involve dependencies of things like mysqldump and ndb_restore on some headers to determine which tables shouldn't be dumped (hint: the cluster database used for replication).

Also, all those command line parameters and global variables – they’re fun too. It turns out InnoDB and PBXT are also waiting on this. In the meantime, I’ve done a hack that puts config options in a table.

Currently blocked on getting the embedded server (libmysqld) to build properly – but I now have a sql/mysqld binary with pluggable NDB. All libtool foo too.

Hopefully I'll be able to follow up soon with an "it works" post.

CREATE, INSERT, SELECT, DROP benchmark

Inspired by PeterZ’s Opening Tables scalability post, I decided to try a little benchmark. This benchmark involved the following:

  • Create 50,000 tables
  • CREATE TABLE t{$i} (i int primary key)
  • Insert one row into each table
  • select * from each table
  • drop each table
I wanted to test file system impact on this benchmark. So, I created a new LVM volume, 10GB in size. I extracted a 'make bin-dist' of a recent MySQL 5.1 tree, did a "mysql-test-run.pl --start-and-exit" and ran my script, timing real time with time.
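
For reference, here's a rough sketch of what the benchmark script does. This isn't the original script, just a minimal Python version of the steps listed above; it assumes the MySQLdb module, and the socket path, user and database are placeholders to adjust to whatever server mysql-test-run.pl started. The engine used comes from the server's default storage engine (MyISAM or InnoDB below). Run the whole thing under time to get the real (elapsed) figures.

#!/usr/bin/env python
# Rough sketch of the CREATE/INSERT/SELECT/DROP benchmark -- not the original
# script. Assumes the MySQLdb module; socket path and credentials are guesses.
import MySQLdb

TABLES = 50000

conn = MySQLdb.connect(unix_socket='/tmp/mysql.sock', user='root', db='test')
cur = conn.cursor()

# create 50,000 tables: CREATE TABLE t{$i} (i int primary key)
for i in xrange(TABLES):
    cur.execute("CREATE TABLE t%d (i int primary key)" % i)

# insert one row into each table
for i in xrange(TABLES):
    cur.execute("INSERT INTO t%d VALUES (%d)" % (i, i))

# select * from each table
for i in xrange(TABLES):
    cur.execute("SELECT * FROM t%d" % i)
    cur.fetchall()

# drop each table
for i in xrange(TABLES):
    cur.execute("DROP TABLE t%d" % i)

conn.close()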

    For a default ext3 file system creating MyISAM tables, the test took 15min 8sec.

For a default xfs file system creating MyISAM tables, the test took 7min 20sec.

    For an XFS file system with a 100MB Version 2 log creating MyISAM tables, the test took 7min 32sec – which is within repeatability of the default XFS file system. So log size and version made no real difference.

    For a default reiserfs (v3) file system creating MyISAM tables, the test took 9m 44sec.

For an ext3 file system with the dir_index option enabled creating MyISAM tables, the test took 14min 21sec.

For an approximate measure of the CREATE performance… ext3 and reiserfs averaged about 100 tables/second (although after the 20,000 mark, reiserfs seemed to speed up a little). XFS averaged about 333 tables/second. I credit this to XFS performing the check for whether the files already exist with a B-tree lookup once the directory reaches a certain size.

Interestingly, dropping the tables was amazingly fast on ext3 – about 2500/sec. XFS managed about 1000/sec. So ext3 can destroy faster than it can create, while XFS keeps pace with itself.

    What about InnoDB tables? Well…

    ext3(default): 21m 11s

    xfs(default): 12m 48s

    ext3(dir_index): 21m 11s

    Interestingly the create rate for XFS was around 500 tables/second – half that of MyISAM tables.

    These are interesting results for those who use a lot of temporary tables or do lots of create/drop tables as part of daily life.

All tests performed on a Western Digital 250GB 7200rpm drive in a 2.8GHz 800MHz FSB P4 with 2GB memory running Ubuntu 6.10 with HT enabled.

At the end of the test, the ibdata1 file had grown to a little over 800MB – still small enough to fit in memory. If we increased this to maybe 200,000 tables (presumably about a 3.2GB file) it wouldn't fit in cache, and then the extents XFS uses would probably make INSERT and SELECT queries perform better than with the block lists that ext3 uses. This is because the Linux kernel caches the in-memory-block to disk-block mapping, making the file system's efficiency at that lookup irrelevant for data sets smaller than memory.

    So go tell your friends: XFS is still the coolest kid on the block.

    Disk allocation, XFS, NDB Disk Data and more…

I've talked about disk space allocation previously, mainly revolving around XFS (namely because it's what I use, it's a sensible choice for large file systems and large files, and it has a nice suite of tools for digging into what's going on).

Most people write software that just calls write(2) (or libc things like fwrite or fprintf) to do file IO – including space allocation. Probably 99% of file IO is fine to do like this and the allocators for your file system get it mostly right (some more right than others). Remember, disk seeks are really really expensive, so the less you have to do, the better (i.e. fragmentation==bad).

I recently (finally) wrote my patch to use xfsctl to get better allocation for NDB disk data files (datafiles and undofiles).
    patch at:
    http://lists.mysql.com/commits/15088

    This actually ends up giving us a rather nice speed boost in some of the test suite runs.

    The problem is:
    – two cluster nodes on 1 host (in the case of the mysql-test-run script)
    – each node has a complete copy of the database
    – ALTER TABLESPACE ADD DATAFILE / ALTER LOGFILEGROUP ADD UNDOFILE creates files on *both* nodes. We want to zero these out.
    – files are opened with O_SYNC (IIRC)

    The patch I committed uses XFS_IOC_RESVSP64 to allocate (unwritten) extents and then posix_fallocate to zero out the file (the glibc implementation of this call just writes zeros out).

    Now, ideally it would be beneficial (and probably faster) to have XFS do this in kernel. Asynchronously would be pretty cool too.. but hey :)

    The reason we don’t want unwritten extents is that NDB has some realtime properties, and futzing about with extents and the like in the FS during transactions isn’t such a good idea.

    So, this would lead me to try XFS_IOC_ALLOCSP64 – which doesn’t have the “unwritten extents” warning that RESVSP64 does. However, with the two processes writing the files out, I get heavy fragmentation. Even with a RESVSP followed by ALLOCSP I get the same result.

    So it seems that ALLOCSP re-allocates extents (even if it doesn’t have to) and really doesn’t give you much (didn’t do too much timing to see if it was any quicker).

I've asked if this is expected behaviour on the XFS list… we'll see what the response is (I haven't had time yet to go read the code… I should though).

So what improvement does this patch make? Well, I'll quote my commit comments:

    BUG#24143 Heavy file fragmentation with multiple ndbd on single fs
    
    If we have the XFS headers (at build time) we can use XFS specific ioctls
    (once testing the file is on XFS) to better allocate space.
    
    This dramatically improves performance of mysql-test-run cases as well:
    
    e.g.
    number of extents for ndb_dd_basic tablespaces and log files
    BEFORE this patch: 57, 13, 212, 95, 17, 113
    WITH this patch  :  ALL 1 or 2 extents
    
    (results are consistent over multiple runs. BEFORE always has several files
    with lots of extents).
    
    As for timing of test run:
    BEFORE
    ndb_dd_basic                   [ pass ]         107727
    real    3m2.683s
    user    0m1.360s
    sys     0m1.192s
    
    AFTER
    ndb_dd_basic                   [ pass ]          70060
    real    2m30.822s
    user    0m1.220s
    sys     0m1.404s
    
    (results are again consistent over various runs)
    
    similar for other tests (BEFORE and AFTER):
    ndb_dd_alter                   [ pass ]         245360
    ndb_dd_alter                   [ pass ]         211632

    So what about the patch? It’s actually really tiny:

    
    --- 1.388/configure.in	2006-11-01 23:25:56 +11:00
    +++ 1.389/configure.in	2006-11-10 01:08:33 +11:00
    @@ -697,6 +697,8 @@
    sys/ioctl.h malloc.h sys/malloc.h sys/ipc.h sys/shm.h linux/config.h \
    sys/resource.h sys/param.h)
    
    +AC_CHECK_HEADERS([xfs/xfs.h])
    +
     #--------------------------------------------------------------------
    # Check for system libraries. Adds the library to $LIBS
    # and defines HAVE_LIBM etc
    
    --- 1.36/storage/ndb/src/kernel/blocks/ndbfs/AsyncFile.cpp	2006-11-03 02:18:41 +11:00
    +++ 1.37/storage/ndb/src/kernel/blocks/ndbfs/AsyncFile.cpp	2006-11-10 01:08:33 +11:00
    @@ -18,6 +18,10 @@
    #include
    #include
    
    +#ifdef HAVE_XFS_XFS_H
    +#include <xfs/xfs.h>
    +#endif
    +
     #include "AsyncFile.hpp"
    
    #include
    @@ -459,6 +463,18 @@
    Uint32 index = 0;
    Uint32 block = refToBlock(request->theUserReference);
    
    +#ifdef HAVE_XFS_XFS_H
    +    if(platform_test_xfs_fd(theFd))
    +    {
    +      ndbout_c("Using xfsctl(XFS_IOC_RESVSP64) to allocate disk space");
    +      xfs_flock64_t fl;
    +      fl.l_whence= 0;
    +      fl.l_start= 0;
    +      fl.l_len= (off64_t)sz;
    +      if(xfsctl(NULL, theFd, XFS_IOC_RESVSP64, &fl) < 0)
    +        ndbout_c("failed to optimally allocate disk space");
    +    }
    +#endif
     #ifdef HAVE_POSIX_FALLOCATE
    posix_fallocate(theFd, 0, sz);
    #endif

    So get building your MySQL Cluster with the XFS headers installed and run on XFS for sweet, sweet disk allocation.

    Programme – linux.conf.au 2007

    The Programme for linux.conf.au 2007 has hit the streets (err.. web) and it’s looking pretty neat.

    I’m glad to see the MySQL and PostgreSQL miniconfs on different days – means I should be able to pop into the PostgreSQL one as well. Kernel could be interesting too… I guess it can depend on the sessions and stuff though.

    Greg Banks’ session on “Making NFS Suck Faster” should be interesting. Tridge’s session on “clustering tdb – a little database meets big iron” should be really interesting (after all, I hack on a clustered database for a crust). After lunch, I’m a bit torn between a few sessions – but Matthew Garrett‘s “Fixing suspend for fun and profit” could be a laugh.

The next session slot will involve last-minute jitters for my own session, which is up next: "eat my data: how everybody gets file IO wrong" – it will be great fun, as there will no doubt be a bunch of smart people about, ready to expand and clarify things.

By the end of the day I'll be torn between Keith Packard's "X Monitor Hotplugging Sweetness" (hopefully the extension will be called XBLING – I keep trying to convince him to call an X extension that) and Garbage Collection in LogFS by Jorn Engel.

On Thursday, I'll want to be in all the sessions at once – including Practical MythTV as presented by Mikal Still and myself. If you're not in our session (and damn you for not being :) you should check out the no doubt other great things on: Dave Miller on Routing and IPSEC Lookup scaling in the Linux kernel should be great fun, OzDMCA by Kim Weatherall will no doubt bring a tear to the eye, Rasmus is on Faster and Richer Web Apps with PHP 5 (apparently the aim when coding PHP is to not suck… so a lot of PHP "programmers" should take note – and ask how fast he can down a beer), and Andrew Cowie is talking on writing rad GTK apps (always fun when you can see something from your coding efforts). The photographer side of my brain is telling me to go to the GIMP Tutorial too. Hrrm… busy day (but our MythTV tute will ROCK – so show up and be converted).

After a morning berocca (err… tea), the NUMA sessions sound interesting (especially on memory mapped files – going to be thinking about this and databases, oddly enough). Lunch, then the Heartbeat tutorial sounds interesting (from a "we have an internal one and I wonder what this does" PoV).

    Ted Ts’o is on enterprise real time… could be interesting as Ted’s a fun guy.

On Friday, Ted's ext4 talk is a must see – especially for poking him in the ribs about what would be neat from a DB PoV (and as a reminder of the real performance boost we see in benchmarks with XFS versus ext3).

While I want to be a cool kid like Rusty, Disk Encryption also sounds interesting, and Robert Collins could be talking about some interesting stuff (although the title "do it our way" isn't giving much away).

    So, I’ve pretty much just planned a week in January down to the hour. If you’re not already going – get booked for linux.conf.au 2007 now – sure to sell out quickly. Going to be totally kick-ass.

    mysql NDB team trees up on bkbits.net

    If you head over here: mysql on bkbits.net you can get a copy of the NDB team trees. This is where we push stuff before it hits the main MySQL trees so that we can get some extra testing in (also for when pulling from the main tree). So you can be relatively assured that this is going to work fairly well for NDB and have the latest bug fixes.

    Of course, if anything is going to break here – it’s going to be NDB :)

    This should allow you to get easy access to the latest-and-greatest NDB code.

    At some point soon I’ll update my scripts that generate doxygen output (and builds) to do the -ndb trees.

    enjoy!

    svn pisses me off

    $ svn ci
    Connection closed by $MAGIC_IP_ADDR
    svn: Commit failed (details follow):
    svn: Connection closed unexpectedly
    svn: Your commit message was left in a temporary file:
    svn:    '/full/path/to/parent/directory/where/no/changed/files/svn-commit.tmp'
    $ svn ci
    Log message unchanged or not specified
    a)bort, c)ontinue, e)dit
    a

It could have at least saved the commit message somewhere useful. Like, say, somewhere such that when I type "svn ci" again it reads the commit message back in.

    grrr


    weekly builds

    Saturn’s autoweb

    I’ve hacked my scripts that generate doxygen docs to also build MySQL 4.1, 5.0 and 5.1 for AMD64 (the box that it’s running on) with Cluster. This is to help my idea of running Gallery at home with NDB disk data tables in very recent MySQL builds.

    How’s it going so far? Well… I’ve found some bugs and some seemingly strange behaviour here and there. However, bug reports will come, and I’m currently running a bit of an older build.

I'll make the URL of the Gallery public at some point too.

Recent happenings and releases…

So a bunch of stuff has happened (or is happening) that I've been wanting to blog about for a bit. Some of it had to wait; for the rest it's just been me being slack.

Anyway, anyone who hangs closely around the MySQL circles probably now knows about MySQL Enterprise. There's been a fair bit of talk about this internally for a little while now. When it was being talked about a bit wider within the company some of the initial communication was (in my mind) rather unclear. So I took the "what's the worst way somebody could interpret this" viewpoint and replied with my thoughts. The idea behind this was to simulate what some of the loud-mouthed trolls of the non-shifted question mark e on a qwerty keyboard mapped to dvorak kind may do.

After a few phone calls (some at strange hours) my worst fears were not realised – we were still not being insane.

So I hope I've been of some use in making sure that communication has been clear and any possible fears put to rest.

There is also an increased willingness to make things saner for getting non-MySQL AB authored code into the main trees (err… now labelled Community).

We're also getting geared up for another 5.1 release – the Cluster team has recently chased down some failures: from out-of-disk on a build machine (why is it us who had to find that out?) to an "actual bug".

    Kudos goes out to Jonas who has recently found a few bugs that have been Can’t Repeat since about the year 2000 – ones that were real hard to hit, but naturally, somebody has.

    I also added some new things to the Cluster Management Server (ndb_mgmd) in 5.1 that should help with debugging in the future. I basically just exposed the MgmApiSession stuff a bit, giving each session a unique id (64 bit int) that you could then check if the session had gone away or not (or list all sessions). This gives us a test case for bug 13987 which is pretty neat.

I have also geared up a change to the handler API to fix bug 19914 – and, being a good boy, I've mailed the public internals list so that people are ready for the building of out-of-tree storage engines to break (on 4.1 and up!). The good news, however, is that this is a real fix and that any errors on COUNT(*) will be reported back to the user (a customer was affected by this).

Also, I updated how engines fill out the INFORMATION_SCHEMA.FILES table to make it a bit nicer (Brian wants to add support for it to some of the other engines). He also pointed out a really obvious bug of mine in a recent push to that code (which probably showed up in a compiler warning, come to think of it…). Paul is looking at it for PBXT too (or at least thinks it's cool :).

I also had a bit of an ask-around of the Cluster team about whether making the team trees (VERSION-ndb) public (up on bkbits.net etc.) was a good idea. Nobody seems to have any objections, so I will (as soon as I get a minute) pursue that. Basically it'll let people get access to the latest NDB bug fixes in source-tree form (certainly not recommended for production, but could be useful in testing environments).

I've also been thinking about talks for the MySQL UC next year, as Cluster tends to be a popular topic (we had a rather full room this year).

There's probably more to talk about too, but I'm getting sleepy.

    Rusty on LCA talks and other stuff…

As email is *sooo* non-"Web 2.0", I reply in blog form….
    Rusty’s Bleeding Edge Page talks about a “Writing an x86 hypervisor: all the cool kids are doing it!” session that sounds really cool (better not be on at the same time as my talk… :)

    I don’t (currently) intend to be one of the cool kids though.

    He also mentions a session entitled “First-timer’s Introduction to LCA”. A couple of possible suggestions (or thoughts, and stuff I’ve seen):

    • be careful if you intend to bitch endlessly about a piece of software – it’s quite likely you’re talking to the person who wrote it (or a chunk of it)
    • sometimes it can be really good to just listen and ask a few good questions to understand. there are a lot of really smart people about
    • you will (at some point) ask a really dumb question (that you’ll only realise is dumb a few months later). Don’t panic – we all do it.
    • Don’t be scared – nobody bites too hard.
    • when staying in the halls, odds are the coffee isn’t that good – be prepared to bring your own or go out every morning.
    • do not be afraid to go up and start talking to people – it’s a great way to meet interesting characters and cool hackers.
    • wash
    • use deodorant
    • encourage others to do the above 2
    • read the summary of a session, not just the title. sometimes you can be misled by the title (for example, not everybody thinks of the same thing when “hacking BLAH” is the title of a session)
• especially if you're talking, bring backups, back up (without erasing old backups) and back up again. Also, be sure restore works.
• While a lot of people do enjoy downing a few (or more than a few) ales, it's not compulsory. There are people attending LCA who don't drink (and who may or may not join others at the pub even though they don't drink alcohol). It's also okay to not drink too much – in fact, it's often recommended.
    • Don’t be afraid to ask people who they are, what they do etc. Even if you then immediately recognise the name, it’s good to put a face to the name.
    • You will never see everything you want to.
    • do join the IRC channels – great way of meeting people and organising groups to go do things (like get food, go to pub etc).
    • do talk to people around the dorms – great way of meeting people
    • expect to want a day of rest afterwards
    • there are some “in” jokes – but don’t be afraid to ask what they’re about, strange traditions are part of the LCA experience

I wonder what should/could be written about going all fanboy/fangirl over favourite hackers? And about taking photos (or asking to have photos taken)?

The last thing Rusty talks about is the "Hacking in groups" tutorial. I really liked his and Robert Love's tutorial in Canberra (Kernel Hacking – where you wrote a PCI driver for the excellent Love Rusty 3000, a device with real specifications, coffee cup stain and all). I've gotten a bit of a mixed feeling about it from Rusty since then, but I reckon it was seriously one of the best tutorials I have ever attended. I also took the hands-on approach as great inspiration for various MySQL Cluster tutorials I've given since (and people have commented on how great the hands-on part is).

    I guess the thing about the kernel hacking tute was that not everybody in the room was at the same skill level (which is something you totally run the risk of with hands-on). Also, if you hadn’t done the prep material, you were probably going to be in trouble.

    But anyway, the idea of having 20 talented coders with 5 people in the tute for each of them and working on some project could be interesting – although rather ambitious. I worry that people without a good enough skillset would rock up and not get much out of it. Although those with adequate skill would do well.

    Picking a project that could be doable in a handful of hours (or a day) is tricky – as it’d probably be an extension to some existing project, which requires learning of it. Or, starting something from scratch can be equally as hard (to end up anywhere useful).

    Some ideas for projects could include:

    • linux file system driver (perhaps read only) for a simple file system (mkfs provided)
    • MySQL table handler for some simple format (indexes get trickier… but maybe simple bitmapped index… or just an in memory table handler)
    • fsck for some file format/file system format

    These have the benefit of being able to run existing good test suites against the software and see how well people did. They’d probably also help people land jobs :)

Another interesting one would be implementing a library for journaling writes to a file – i.e. instead of the usual write-to-temp, sync, rename dance (sketched below), do journaling. This would let people easily write apps that do safe updates to large files. You could then use this to implement other things (like a really simple crash-safe storage engine, a FUSE file system or something).
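
For reference, the write-to-temp, sync, rename dance that such a journaling library would replace looks roughly like this (a minimal Python sketch, not anybody's actual library). Note that it rewrites and syncs the whole file even for a tiny change, which is exactly what hurts for large files:

import os

def safe_replace(path, data):
    # write the new contents to a temporary file alongside the original
    tmp = path + '.tmp'
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0644)
    try:
        os.write(fd, data)
        os.fsync(fd)        # make sure the data is really on disk first
    finally:
        os.close(fd)
    os.rename(tmp, path)    # atomically replace the old file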

I'm just not sure how much in the way of "cool tricks" could really happen in that time (instead of just getting the job done). 20 coders talking about their neat tricks would probably make a good book though…

    Saturn comes back around…

For certain evil purposes last week, I assembled the old Saturn with a hard disk I found when cleaning up a little while ago (I have that kind of tech stuff – you clean up and find 40GB disks – I'm pretty sure I have an 8.4GB one bumming around somewhere too).

    Saturn comes back around

    I ended up being able to do the evil I needed to, but I could tell that the room was a bit warmer due to the extra box being alive. I was also lazy and couldn’t be bothered going downstairs for the D200, so this was shot with my old and trusty Coolpix 4500.

I used the box to get remote access to a customer's test setup to do some diagnosis on a bug (that's notoriously hard to reproduce). I think I have a fair idea of what it is now though (timing related – not fun).

    Remember kids, threads are evil.

Also, an interesting thing to note is that there is, in fact, a limit not on the number of fds you can pass to the select(2) system call, but on the fd values themselves (on my Ubuntu box here, passing an fd of, say, 2000 is probably going to lead to trouble, since FD_SETSIZE is 1024). This has nothing to do with the previously mentioned bug, but it's an interesting point.
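
A quick way to see it is something like the following Python sketch (assuming you've first raised the per-process open file limit, e.g. ulimit -n 4096, so that descriptors above 1024 can actually be handed out):

import select, tempfile

# open enough files that the last descriptor is above FD_SETSIZE (1024)
files = [tempfile.TemporaryFile() for i in range(1100)]
print files[-1].fileno()               # something greater than 1024

# select(2) can't represent fds >= FD_SETSIZE, so this raises
# "ValueError: filedescriptor out of range in select()"
select.select([files[-1]], [], [], 0)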

Storing Passwords (securely) in MySQL

    Frank talks about Storing Passwords in MySQL. He does, however, miss something that’s really, really important. I’m talking about the salting of passwords.

If I want to find out what 5d41402abc4b2a76b9719d911017c592 or 015f28b9df1bdd36427dd976fb73b29d mean as MD5s, the first thing I'm going to try is a dictionary attack (especially if I've seen a table with only user and password columns). Guess what? A list of words and their MD5 sums can be used to very quickly find what these hashes represent.

    I’ll probably have this dictionary in a MySQL database with an index as well. Try it yourself – you’ll probably find a dictionary with the words “hello” and “fire” in it to help. In fact, do this:

mysql> create table words (word varchar(100));
Query OK, 0 rows affected (0.13 sec)

mysql> load data local infile '/usr/share/dict/words' into table words;
Query OK, 98326 rows affected (0.85 sec)
Records: 98326  Deleted: 0  Skipped: 0  Warnings: 0

mysql> alter table words add column md5hash char(32);
Query OK, 98326 rows affected (0.39 sec)
Records: 98326  Duplicates: 0  Warnings: 0

mysql> update words set md5hash=md5(word);
Query OK, 98326 rows affected (3.19 sec)
Rows matched: 98326  Changed: 98326  Warnings: 0

mysql> alter table words add index md5_idx (md5hash);
Query OK, 98326 rows affected (2.86 sec)
Records: 98326  Duplicates: 0  Warnings: 0

mysql> select * from words where md5hash='5d41402abc4b2a76b9719d911017c592';
+-------+----------------------------------+
| word  | md5hash                          |
+-------+----------------------------------+
| hello | 5d41402abc4b2a76b9719d911017c592 |
+-------+----------------------------------+
1 row in set (0.11 sec)

mysql> select * from words where md5hash='015f28b9df1bdd36427dd976fb73b29d';
+------+----------------------------------+
| word | md5hash                          |
+------+----------------------------------+
| fire | 015f28b9df1bdd36427dd976fb73b29d |
+------+----------------------------------+
1 row in set (0.00 sec)

    $EXCLAMATION I hear you go.

Yes, this is not a good way to "secure" passwords. Oddly enough, people have known about this for a long time and there's a real easy solution. It's called salting.

    Salting is prepending a random string to the start of the password when you store it (and when you check it).

    So, let’s look at how our new password table may look:

mysql> select * from passwords;
+------+--------+----------------------------------+
| user | salt   | md5pass                          |
+------+--------+----------------------------------+
| u1   | ntuk24 | ce6ac665c753714cb3df2aa525943a12 |
| u2   | drc,3  | 7f573abbb9e086ccc4a85d8b66731ac8 |
+------+--------+----------------------------------+
2 rows in set (0.00 sec)

As you can see, the MD5s are different from before. If we look these up in our dictionary, we won't find a match.

mysql> select * from words where md5hash='ce6ac665c753714cb3df2aa525943a12';
Empty set (0.01 sec)

Instead, we'd have to get the salt, take the MD5 of the salt concatenated with each dictionary word, and see if the MD5 matches. Guess what: no index for that! And with all the possible values for the salt, we've substantially increased the problem space for constructing a dictionary (I won't go into the maths here).

mysql> create view v as select word, md5(CONCAT('ntuk24',word)) as salted from words;
Query OK, 0 rows affected (0.05 sec)

mysql> select * from v where salted='ce6ac665c753714cb3df2aa525943a12';
+-------+----------------------------------+
| word  | salted                           |
+-------+----------------------------------+
| hello | ce6ac665c753714cb3df2aa525943a12 |
+-------+----------------------------------+
1 row in set (2.04 sec)

mysql> create or replace view v as select word, md5(CONCAT('drc,3',word)) as salted from words;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from v where salted='7f573abbb9e086ccc4a85d8b66731ac8';
+------+----------------------------------+
| word | salted                           |
+------+----------------------------------+
| fire | 7f573abbb9e086ccc4a85d8b66731ac8 |
+------+----------------------------------+
1 row in set (2.12 sec)

So we've gone from essentially instantaneous retrieval to now taking about 2 seconds. Even if I assume that one of your users is going to be stupid enough to have a dictionary password, it's going to take me 2 seconds to check each user – as the salt is different for each user! So it could take me hours just to find that user. Think about how many users are in your user table – with 1000 users, it's over half an hour. For larger systems, it's going to be hours.
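
On the application side, the same idea looks something like this (a minimal Python sketch using hashlib – how you generate the salt doesn't matter much, as long as it's random and stored alongside the hash):

import hashlib, random, string

def make_salt(length=6):
    # purely illustrative salt generator -- any random string will do
    return ''.join(random.choice(string.letters + string.digits)
                   for i in range(length))

def hash_password(salt, password):
    # store both the salt and this hash in the passwords table
    return hashlib.md5(salt + password).hexdigest()

def check_password(salt, stored_hash, attempt):
    return hash_password(salt, attempt) == stored_hash

salt = make_salt()
stored = hash_password(salt, 'hello')
print check_password(salt, stored, 'hello')   # True
print check_password(salt, stored, 'fire')    # False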

    Welcome to Beijing (day 1)

I've just come back from lunch. I've managed to eat Chinese food, in China, with chopsticks and not totally embarrass myself. Ate some new food, new vegetables and a seemingly different type of seaweed than I have eaten before. It tasted good though. I even think Kit would have liked some of it (once she got over the fact that it looked different and some things were green).

I arrived safely after a flight that was fine (except for getting up rather early to get to Sydney in order to then take a sanely timed flight). Beijing seems to be a bit like the Firefly world, except with fewer flying cars. You've got heaps of stuff in English and Chinese. It could be really interesting to live here and experience things.

    There’s a national English language newspaper which is fairly up to date on world events – the fact that our dear Mr Howard is going to go to the election seems to be news here! It’s not packed with local news, which would be interesting to read (although I think I’ll have to learn to read first).

    The hotel is a short walk from the office (down the street, across the road). Oh, the roads are at least 7 lanes – they’re big!

    Hotel is pretty nice, probably about half the price of what I’d expect to pay back home. Breakfast was good – some totally delicious watermelon. Honestly thinking of just having watermelon for breakfast tomorrow :)

    Although it’s rather obvious that the hotel is aimed at western visitors. At breakfast you could only really tell you’re in China by: looking out the front window at all the Chinese writing or looking at the waiters and waitresses and noticing they all a) spoke Chinese to each other and b) were Chinese. About 5 languages before my first coffee – what a way to start the day!

    At some point I’m going to have to have some Chinese tea – it seems like a real obvious must-do. Although maybe I should give in at some point and buy coffee from starbucks as well….

    Heroes in Tyrol

23rd Mostra – "Heroes in Tyrol", by Niki List (Austria-Sweden-Germany)

I managed to see most of this film a few years ago. Anybody know where or how I can get a DVD of it (with English subtitles)? I know somebody in the wider community has to know where (hence why I'll put this entry in the MySQL category – I know somebody there has to know something about this film).

    Besides – it has drinking songs, and MySQLers will get the connection.

    MySQL Bug Deskbar plugin

    Over at my junkcode section, I have mysqlbug.py which is a plugin for the GNOME deskbar panel applet.

    If you’ve used Quicksilver on MacOSX, then you know the kind of app that Deskbar Applet is.

    This one lets you type “bug 1234” and be given the action of “open mysql bug 1234”. If you type “edit bug 1234” it gives you the option of editing that bug number.

    We’ll see if this proves useful.

    Many thanks to kamstrup (one of the Deskbar developers) on #deskbar on gimpnet for helping me out with the plugin.

    I totally heart Deskbar. It’s awesome.