Inspired by PeterZ’s Opening Tables scalability post, I decided to try a little benchmark. This benchmark involved the following:
- Create 50,000 tables
- Each created with CREATE TABLE t{$i} (i int primary key)
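The post doesn't reproduce the actual benchmark script, but the statement generation is simple enough to sketch. Here's a minimal Python version (the generator itself is my illustration; only the table names and DDL come from the post):

```python
def create_statements(n=50000):
    """Yield the CREATE TABLE statements used in the benchmark."""
    for i in range(1, n + 1):
        yield f"CREATE TABLE t{i} (i int primary key)"

if __name__ == "__main__":
    # Pipe the output into the mysql client, e.g.:
    #   python3 gen.py | mysql test
    for stmt in create_statements():
        print(stmt + ";")
```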
I wanted to test the file system's impact on this benchmark. So, I created a new LVM volume, 10GB in size. I extracted a 'make bin-dist' of a recent MySQL 5.1 tree, did a "mysql-test-run.pl --start-and-exit" and ran my script, measuring wall-clock time with 'time'.
For a default ext3 file system creating MyISAM tables, the test took 15min 8sec.
For a default XFS file system creating MyISAM tables, the test took 7min 20sec.
For an XFS file system with a 100MB version 2 log creating MyISAM tables, the test took 7min 32sec, which is within run-to-run variation of the default XFS result. So log size and version made no real difference.
For a default reiserfs (v3) file system creating MyISAM tables, the test took 9min 44sec.
For an ext3 file system with the dir_index option enabled creating MyISAM tables, the test took 14min 21sec.
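For reference, creating the volume and the file system variants tested above looks roughly like this (the volume group and volume names are my own placeholders, not from the post):

```shell
# Hypothetical LVM volume; substitute your own volume group.
lvcreate -L 10G -n bench vg0

mkfs.ext3 /dev/vg0/bench                           # default ext3
mkfs.ext3 -O dir_index /dev/vg0/bench              # ext3 with dir_index
mkfs.xfs -f /dev/vg0/bench                         # default XFS
mkfs.xfs -f -l version=2,size=100m /dev/vg0/bench  # XFS, 100MB version 2 log
mkfs.reiserfs /dev/vg0/bench                       # reiserfs v3
```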
For an approximate measure of the CREATE performance: ext3 and reiserfs averaged about 100 tables/second (although after the 20,000 mark, reiserfs seemed to speed up a little), while XFS averaged about 333 tables/second. I credit this to XFS checking whether a file exists via a B-tree directory lookup once the directory reaches a certain size.
Interestingly, DROPPING the tables was amazingly fast on ext3: about 2500/sec, versus about 1000/sec for XFS. So ext3 can destroy faster than it can create, while XFS keeps roughly the same pace either way.
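The per-interval rates quoted above are easy to collect from a driver loop. A minimal sketch of how one might measure them (my own illustration, not the post's script):

```python
import time

def timed_rate(n_ops, do_op, report_every=1000):
    """Run n_ops operations (e.g. one CREATE or DROP each), printing the
    instantaneous rate every report_every operations.
    Returns the overall average rate in ops/second."""
    start = last = time.monotonic()
    for i in range(1, n_ops + 1):
        do_op(i)
        if i % report_every == 0:
            now = time.monotonic()
            print(f"{i}: {report_every / (now - last):.0f} ops/sec")
            last = now
    return n_ops / (time.monotonic() - start)
```

In the real benchmark, do_op would issue a CREATE TABLE or DROP TABLE against the running server.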
What about InnoDB tables? Well…
ext3(default): 21m 11s
xfs(default): 12m 48s
ext3(dir_index): 21m 11s
Interestingly, the create rate for XFS was around 500 tables/second, half that of MyISAM tables.
These are interesting results for those who use a lot of temporary tables or do lots of create/drop tables as part of daily life.
All tests were performed on a Western Digital 250GB 7200RPM drive in a 2.8GHz P4 (800MHz FSB, HT enabled) with 2GB of memory, running Ubuntu 6.10.
At the end of the test, the ibdata1 file had grown to a little over 800MB, still small enough to fit in memory. If we increased this to maybe 200,000 tables (presumably about a 3.2GB file) so the data no longer fit in cache, the extent-based allocation of XFS would probably make it perform better for INSERT and SELECT queries than the list of blocks ext3 uses. For data sets smaller than memory this doesn't matter, because the Linux kernel caches the in-memory-block-to-disk-block mapping, making the file system's own lookup efficiency irrelevant.
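The 3.2GB projection is just linear scaling of the observed ibdata1 size; the arithmetic works out to roughly 16KB of ibdata1 per table:

```python
tables = 50_000
ibdata_mb = 800  # observed ibdata1 size after the test, from the post

per_table_kb = ibdata_mb * 1024 / tables      # ~16 KB per table
projected_mb = 200_000 * per_table_kb / 1024  # ~3200 MB, i.e. about 3.2GB

print(f"{per_table_kb:.1f} KB/table, projected {projected_mb:.0f} MB for 200,000 tables")
```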
So go tell your friends: XFS is still the coolest kid on the block.