Stored Procedures/Functions for Drizzle

Previously, in “Thoughts on Thoughts on Drizzle” I theorized that one of the major reasons why we did not see lots of people jumping at stored procedures in MySQL was that it wasn’t in their native language (for lack of a better term). We’ve seen External Language Stored Procedures for MySQL that let you write stored procedures in some other languages… but I felt something was missing.

Firstly, I wanted a language I was really familiar with and comfortable writing complex things in.

Secondly, it should be compiled so that it runs as fast as possible.

Thirdly, it shouldn’t just be linking to a pre-compiled library (drizzle function plugins do that already).

So… the obvious choice was C.

I have a really, really, really early prototype:

drizzle> SELECT LIBTCC("int foo(char* s) { s[0]='4'; s[1]='2'; s[2]=0; return 2; }") AS RESULT;

+--------+
| RESULT |
+--------+
| 42     |
+--------+
1 row in set (0 sec)

or… a bit more sophisticated:

drizzle> SELECT LIBTCC("#include <string.h>\nint foo(char* s) { strcpy(s,\"Hello World!\");; return strlen(s); }") AS RESULT;

+--------------+
| RESULT       |
+--------------+
| Hello World! |
+--------------+
1 row in set (0 sec)

I’m using a function as a bit of a cheat… but the string is passed to libtcc (modified so it’s a shared library so I can load it into drizzle), where it is compiled into native object code (in my case x86-64) and then run.
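
The guts of it look roughly like this: a minimal sketch of the libtcc API usage, not the actual plugin code (note that tcc_relocate’s signature varies between tcc versions):

#include <libtcc.h>

/* Compile a string of C to native code in memory and run foo() from it. */
static int compile_and_run(const char *code, char *result_buffer)
{
  TCCState *s= tcc_new();
  int (*foo)(char *);
  int ret;

  tcc_set_output_type(s, TCC_OUTPUT_MEMORY);  /* JIT straight to memory */

  if (tcc_compile_string(s, code) == -1)
    return -1;

  if (tcc_relocate(s) < 0)  /* lay out and link the object code */
    return -1;

  foo= (int (*)(char *)) tcc_get_symbol(s, "foo");
  if (foo == NULL)
    return -1;

  ret= foo(result_buffer);  /* run the freshly compiled function */
  tcc_delete(s);
  return ret;
}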

With the right bits of foo… I could allow calling of all sorts of server functions… such as those to execute SQL inside the current transaction context.

There are a number of reasons why this is Pure Evil(TM):

  • It executes inside the address space of your database server:
    one null pointer dereference and your database server is all gone.
  • It’s arbitrary code injection by design.
    Exactly how insane are you? Security--;
  • While great for me and my C hacking friends, possibly not for web app developers, who likely aren’t writing their apps in C every day.
  • See the first reason. Is that not reason enough? Memory protection is a good thing yo.

Anyway, you can see the code up on launchpad in the drizzle-libtcc-function branch. You’ll need to modify your tcc source so that the Makefile snippet for libtcc.o looks like this:

# libtcc generation and test
libtcc.o: $(NATIVE_FILES)
        $(CC) -fPIC -o $@ -c libtcc.c $(NATIVE_TARGET) $(CFLAGS)

libtcc.a: libtcc.o
        $(AR) rcs $@ $^

libtcc.so: libtcc.o
        $(CC) -shared -Wl,-soname,libtcc.so.1 -o $@ libtcc.o

Embedded InnoDB: InnoDB Status

Using the Embedded InnoDB plugin I’m working on and the INNODB_STATUS table function in the data_dictionary, you can do pretty neat things.

For example, we can see that each autocommit transaction causes an fsync, and that if you insert multiple rows in a single statement, you still only get one fsync:

drizzle> SELECT * FROM DATA_DICTIONARY.INNODB_STATUS
    -> WHERE name="fsync_req_done";
+----------------+-------+
| NAME           | VALUE |
+----------------+-------+
| fsync_req_done |    25 |
+----------------+-------+
1 row in set (0 sec)

drizzle> insert into t1 values (1);
Query OK, 1 row affected (0.05 sec)

drizzle> SELECT * FROM DATA_DICTIONARY.INNODB_STATUS WHERE name="fsync_req_done";
+----------------+-------+
| NAME           | VALUE |
+----------------+-------+
| fsync_req_done |    26 |
+----------------+-------+
1 row in set (0 sec)

drizzle> insert into t1 values (1),(2),(3),(4);
Query OK, 4 rows affected (0 sec)
Records: 4  Duplicates: 0  Warnings: 0

drizzle> SELECT * FROM DATA_DICTIONARY.INNODB_STATUS WHERE name="fsync_req_done";
+----------------+-------+
| NAME           | VALUE |
+----------------+-------+
| fsync_req_done |    27 |
+----------------+-------+
1 row in set (0 sec)

Embedded InnoDB: querying the configuration

I am rather excited about being able to do awesome things such as this to get the current configuration of your server:

drizzle> SELECT NAME,VALUE 
    -> FROM DATA_DICTIONARY.INNODB_CONFIGURATION
    ->  WHERE NAME IN ("data_file_path", "data_home_dir");
+----------------+-------+
| NAME           | VALUE |
+----------------+-------+
| data_file_path | NULL  | 
| data_home_dir  | ./    | 
+----------------+-------+
2 rows in set (0 sec)

drizzle> SELECT NAME,VALUE 
    -> FROM DATA_DICTIONARY.INNODB_CONFIGURATION 
    -> WHERE NAME = "io_capacity";
+-------------+-------+
| NAME        | VALUE |
+-------------+-------+
| io_capacity | 200   | 
+-------------+-------+
1 row in set (0 sec)

Coming soon: status in a table.

(this is for the upcoming embedded_innodb plugin, which uses the API provided by Embedded InnoDB to implement a Storage Engine for Drizzle)

Thoughts on Thoughts on Drizzle :)

Mark has some good thoughts on drizzle. I think they’re all valid… and I have some extra thoughts too:

“I have problems to solve today”. This is (of course) an active concern in my brain… If we don’t have something out that solves some set of problems with reasonable stability and reliability (and soon), then we are failing. I feel we’re getting there, and will have a solid foundation to build upon.

Drizzle replication, MySQL replication: “I can’t compare the two until Drizzle replication is running in production.” Completely agree. We need to only say replication is stable and reliable when it really is. Realistic test suites are needed. Very defensive programming of the replication system is needed (you want to know when something has gone wrong). We also need to have it constantly be verifying the right thing is going on. We want our problems to be user visible, not silent and invisible. Having high standards will hopefully pay off when people start running it in production….

3 byte int: “Does this mean that some of my tables will grow from 3GB to 4GB on disk?” I think we’re moving the responsibility down to the engines. The 3 byte int type says two things: use less storage space, limit the maximum value. Often you want the former, not the latter. There are many ways to more efficiently pack integers for storage when they are usually smaller than the maximum you want. The protobuf library does a good job of it.

I think it is the job of storage engines to do better here. Once you’re in memory, 3 byte numbers are horrible to work with: copy out of a row buffer, convert into a 32-bit number and then do foo. Modern CPUs favor 32 or 64-bit alignment of data a *lot*. 3 byte numbers do not align to 32 or 64 bits very well, making things much slower for the common case of using cached data.
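
For illustration, reading a little-endian 3 byte int out of a row buffer looks something like MySQL’s uint3korr macro, sketched here as a function:

#include <stdint.h>

/* Assemble a 32-bit value from three unaligned bytes: three loads,
   two shifts and two ORs, versus a single aligned 32-bit load. */
static inline uint32_t read_uint3(const unsigned char *ptr)
{
  return  (uint32_t) ptr[0]
        | ((uint32_t) ptr[1] << 8)
        | ((uint32_t) ptr[2] << 16);
}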

“I need stored procedures. They are required for high-performance OLTP as they minimize transaction duration for multi-statement transactions.” The reduction of network round trips is always crucial. I think a lot of round trips could go away if you could issue multiple statements at once (not via semicolon separating them, by protocol awesomeness).

There should be a way to send a set of statements that should be executed. There should also be a way to specify that if no error occurred, commit. This could then be (in the common case) a single round trip to the database. You then only have to make round-trips when what statement to issue next depends on the result of a previous one. The next step being to reduce these round trips… which can either be solved by executing something inside the database server (e.g. stored procedures) or something closer to the database server so that the round trips aren’t as large. This would be where Gearman enters.

I’m interested to see where these two approaches (issuing in batches and executing closer to the DB server) fall down… I know that latency may not be as good… but throughput should be a lot better.

I take heart with “I have yet to use them in MySQL” though. I have my own theories as to why this is… my biggest thought is that it’s because the many, many programmers writing SQL that Mark sees aren’t SQL Stored Procedure programmers. They spend their days in a couple of languages (maybe Perl, Python, PHP, Java, C, C++) and never programmed SQL:2003 Stored Procedures and it just doesn’t come as quickly (or as bug free) as writing code in the languages you use every day.

“Long Running insert, update and delete statements consume too many resources in InnoDB.” I wonder if this desire for MyISAM could be filled by PBXT or BlitzDB? The main reason that MyISAM is currently a temporary table only engine is that MyISAM and the server core were never that well separated.

My ultimate wish is that all engine authors take the approach that there is an API to their engine and the Storage Engine is merely glue between the database server and their API.

The BlitzDB engine has this, Innobase partially does (and my Embedded InnoDB work goes the whole way) and MySQL Cluster is likely the oldest example.

As a side note, the BlitzDB plugin should go into the main Drizzle tree fairly soon. One of the joys of having an optional plugin that doesn’t touch the core of the server is that we can do this without much worry at all.

“Does Drizzle build on Windows?” Well… no. Funnily enough though, it is increasingly easy to make a Windows port. All the platform specific things are increasingly just plugins. The build system is a sticking point… and no, we’re not going to switch to CMake. The C stands for something, and it’s something that even I may not print here… (I had never thought that being able to open up automake generated Makefiles and look at them would be a feature).

This next Drizzle milestone release should be exciting though…

I look forward to having Drizzle widely deployed and relied upon… I think we’ll do well..

Drizzle Developer Day 2010

Hi one and all!

Interested in database systems? Interested because you use them? Because you manage them? Write SQL that goes to them? Or are you one of the people of questionable sanity like myself who develops them?

Well… do we have the offer for you.

Friday, April 16th. Right after the MySQL Conference and Expo at the Santa Clara Convention Center, you can come along to the Drizzle Developer Day.

You will want to add your name to this wiki page: http://drizzle.org/wiki/Drizzle_Developer_Day_2010_signup

Suggest topics over at:
http://drizzle.org/wiki/Drizzle_Developer_Day_2010

Hope to see you there!

Writing A Storage Engine for Drizzle, Part 2: CREATE TABLE

The DDL code paths for Drizzle are increasingly different from MySQL. For example, the embedded_innodb StorageEngine CREATE TABLE code path is completely different than what it would have to be for MySQL. This is because of a number of reasons, the primary one being that Drizzle uses a protobuf message to describe the table format instead of several data structures and a FRM file.

We are pretty close to having the table protobuf message format being final (there’s a few bits left to clean up, but expect them done Real Soon Now (TM)). You can see the definition (which is pretty simple to follow) in drizzled/message/table.proto. Also check out my series of blog posts on the table message (more posts coming, I promise!).

Drizzle allows either your StorageEngine or the Drizzle kernel to take care of storage of table metadata. You tell the Drizzle kernel that your engine will take care of metadata itself by specifying HTON_HAS_DATA_DICTIONARY to the StorageEngine constructor. If you don’t specify HTON_HAS_DATA_DICTIONARY, the Drizzle kernel stores the serialized Table protobuf message in a “table_name.dfe” file in a directory named after the database. If you have specified that you have a data dictionary, you’ll also have to implement some other methods in your StorageEngine. We’ll cover these in a later post.
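
As a rough sketch (the real StorageEngine constructor takes more arguments, so treat this as illustrative only), declaring that your engine owns its own metadata looks something like:

class MyEngine : public drizzled::plugin::StorageEngine
{
public:
  MyEngine(const std::string &name_arg)
    /* HTON_HAS_DATA_DICTIONARY tells the kernel not to write
       table_name.dfe files; this engine stores metadata itself. */
    : drizzled::plugin::StorageEngine(name_arg, HTON_HAS_DATA_DICTIONARY)
  {}
};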

If you ever dealt with creating a table in MySQL, you may recognize this method:

virtual int create(const char *name, TABLE *form, HA_CREATE_INFO *info)=0;

This is not how we do things in Drizzle. We now have this function in StorageEngine that you have to implement:

int doCreateTable(Session* session, const char *path,
                  Table& table_obj,
                  drizzled::message::Table& table_message)

The existence of the Table parameter is largely historic and at some point will go away. In the Embedded InnoDB engine, we don’t use the Table parameter at all. Shortly we’ll also get rid of the path parameter, instead having the table schema in the Table message and helper functions to construct path names.

Methods named “doFoo” (such as doCreateTable()) mean that there is a method named foo() (such as createTable()) in the base class. It does some base work (such as making sure the table_message is filled out and handling any errors) while the “real” work is done by your StorageEngine in the doCreateTable() method.
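
In other words, it’s the classic non-virtual interface pattern. A simplified sketch of the shape of it (not the actual kernel code):

class StorageEngine
{
public:
  /* The kernel calls this. It does the common work, then hands the
     engine-specific part off to doCreateTable(). */
  int createTable(Session &session, const char *path,
                  Table &table, message::Table &table_message)
  {
    /* ... make sure table_message is filled out, common checks ... */
    int err= doCreateTable(&session, path, table, table_message);
    /* ... map and report any errors ... */
    return err;
  }

private:
  virtual int doCreateTable(Session *session, const char *path,
                            Table &table_obj,
                            message::Table &table_message)= 0;
};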

The Embedded InnoDB engine goes through the table message and constructs a data structure for the Embedded InnoDB library to create a table. The ARCHIVE storage engine is much simpler, and it pretty much just creates the header of the ARZ file, mostly ignoring the format of the table. The best bet is to look at the code from one of these engines, depending on what type of engine you’re working on. This code, along with the table message definition should be more than enough.
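
For a flavour of what “goes through the table message” means, here is a minimal sketch (not the actual embedded_innodb code) using the protobuf-generated accessors:

static int build_columns(const drizzled::message::Table &table_message)
{
  for (int i= 0; i < table_message.field_size(); i++)
  {
    const drizzled::message::Table::Field &field= table_message.field(i);

    switch (field.type())
    {
    case drizzled::message::Table::Field::INTEGER:
      /* add a 32-bit integer column named field.name() ... */
      break;
    case drizzled::message::Table::Field::VARCHAR:
      /* the length lives in field.string_options().length() ... */
      break;
    default:
      /* ... remaining types ... */
      break;
    }
  }
  return 0;
}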

This blog post (but not the whole blog) is published under the Creative Commons Attribution-Share Alike License. Attribution is by linking back to this post and mentioning my name (Stewart Smith).

Continuing the journey

A couple of months ago (December 1st for those playing along at home) it marked five years to the day that I started at MySQL AB (now Sun, now Oracle). A good part of me is really surprised it was for that long and other parts surprised it wasn’t longer. Through MySQL and Sun, I met some pretty amazing people, worked with some really smart ones and formed really solid and awesome friendships. Of course, not everything was perfect (sometimes not even close), but we did have some fun.

Up until November 2008 (that’s 3 years and 11 months for those playing at home) I worked on MySQL Cluster. Still love the product and love how much better we’re making Drizzle so it’ll be the best SQL interface to NDB :)

The ideas behind Drizzle had been talked about for a while… and with my experience with internals of the MySQL server, I thought that some change and dramatic improvement was sorely needed.

Then, in 2008, Brian created a tree. I was soon sending in patches at nights, we announced to the whole world at OSCON and it captured a lot of attention.

Since November 2008 I’ve been working on Drizzle full time. It was absolutely awesome that I had the opportunity to spend all my days hacking on Drizzle – both directly with fantastic people and for fantastic people.

But… the Sun set… which was exciting and sad at the same time.

Never to fear! There were plenty of places wanting Drizzle hackers (and MySQL hackers). For me, it came down to this: “real artists ship”. While there were other places where I would no doubt be happy and work on something really cool, the only way I could end up working out where I should really be was: what is the best way to have Drizzle make a stable release that we’d see be suitable for deployment? So, Where Am I Now?

Rackspace.

Where I’ll again be spending all my time hacking Drizzle.

Writing A Storage Engine for Drizzle, Part 1: Plugin basics

So, you’ve decided to write a Storage Engine for Drizzle. This is excellent news! The API is continually being improved and if you’ve worked on a Storage Engine for MySQL, you’ll notice quite a few differences in some areas.

The first step is to create a skeleton StorageEngine plugin.

You can see my skeleton embedded_innodb StorageEngine plugin in its merge request.

The important steps are:

1. Create the plugin directory

e.g. mkdir plugin/embedded_innodb

2. Create the plugin.ini file describing the plugin

Create the plugin.ini file in the plugin directory (so it’s plugin/plugin_name/plugin.ini).
An example plugin.ini for embedded_innodb is:

[plugin]
title=InnoDB Storage Engine using the Embedded InnoDB library
description=Work in progress engine using libinnodb instead of including it in tree.
sources=embedded_innodb_engine.cc
headers=embedded_innodb_engine.h

This gives us a title and description, along with telling the build system what sources to compile and what headers to make sure to include in any source distribution.

3. Add plugin dependencies

Your plugin may require extra libraries on the system. For example, the embedded_innodb plugin uses the Embedded InnoDB library (libinnodb).

Other examples include the MD5 function requiring either openssl or gnutls, the gearman related plugins requiring gearman libraries, the UUID() function requiring libuuid and BlitzDB requiring Tokyo Cabinet libraries.

For embedded_innodb, pandora-build has a macro for finding libinnodb on the system. We want to run this configure check, so we create a plugin.ac file in the plugin directory (i.e. plugin/plugin_name/plugin.ac) and add the check to it.

For embedded_innodb, the plugin.ac file just contains this one line:

PANDORA_HAVE_LIBINNODB

We also want to add two things to plugin.ini; one to tell the build system only to build our plugin if libinnodb was found and the other to link our plugin with libinnodb. For embedded_innodb, it’s these two lines:

build_conditional="x${ac_cv_libinnodb}" = "xyes"
ldflags=${LTLIBINNODB}

Not too hard at all! This should look relatively familiar for those who have seen autoconf and automake in the past.

Some plugins (such as the md5 function) have a bit more custom auto-foo in plugin.ini and plugin.ac (as one of two libraries can be used). You can do pretty much anything with the plugin system, but you’re a lot more likely to keep it simple like we have here.

4. Add skeleton source code for your StorageEngine

While this will change a little bit over time (and is a little long to just paste into here), you can see what I did for embedded_innodb in the skeleton-embedded-innodb-engine tree.

5. Build!

You will need to re-run ./config/autorun.sh so the build system picks up your new plugin. When you run ./configure --help afterwards, you should see options for building with/without your new plugin.

6. Add a test

You will probably want to add a test to see that your plugin loads successfully. When your plugin is built, the test suite automatically picks up any tests you have in the plugin/plugin_name/tests directory. This is in the same format as general MySQL and Drizzle tests: tests go in a t/ directory, expected results in a r/ directory.

Since we are loading a plugin, we will also need some server options to make sure that plugin is loaded. These are stored in the rather inappropriately named test-master.opt file (that’s the test name with “-master.opt” appended to the end instead of “.test“). For the embedded_innodb plugin_load test, we have a plugin/embedded_innodb/tests/t/plugin_load-master.opt file with the following content:

--plugin_add=embedded_innodb

You can have pretty much anything in the plugin_load.test file… if you’re fancy, you’ll have a SELECT query on data_dictionary.plugins to check that the plugin really is there. Be sure to also add an r/plugin_load.result file (my preferred method is to just create an empty result file, run the test suite and examine the rejected output before renaming the .reject file to .result).
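
A minimal plugin_load.test along those lines might look like this (the data_dictionary.plugins column names here are from memory, so treat them as an assumption and check against your tree):

SELECT PLUGIN_NAME, IS_ACTIVE
FROM DATA_DICTIONARY.PLUGINS
WHERE PLUGIN_NAME LIKE "%InnoDB%";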

Once you’ve added your test, you can run it either by just typing “make test” (which will run the whole test suite), or you can go into the main tests/ directory and run ./test-run.pl --suite=plugin_name (which will just run the tests for your plugin).

7. Check the code in, feel good about self

and you’re done. Well… the start of a Storage Engine plugin is done :)

This blog post (but not the whole blog) is published under the Creative Commons Attribution-Share Alike License. Attribution is by linking back to this post and mentioning my name (Stewart Smith).

Shocked and Stunned (that code exists and does work)

#define READ_ALL		1	/* openfrm: Read all parameters */
#define EXTRA_RECORD		8	/* Reservera plats f|r extra record */

and later on….

  if (prgflag & (READ_ALL+EXTRA_RECORD))
    records++;

Feel free to think about that for a second.

(I have an urge to add this to questions asked in a job interview…)

The Table protobuf message format

If you’ve ever opened up drizzled/message/table.proto in the Drizzle source tree you will have seen what’s in the table message: the structure that describes a database table in Drizzle. Previously I’ve talked about the Table message more generally, giving a fair bit of history of the FRM file and how we’ve replaced it with both the Table protobuf message and an infrastructure inside Drizzle so that Storage Engines own their own metadata.

Yesterday I talked about the Schema protobuf message format in more detail, and this time I’m talking about the Table protobuf message in a similar amount.

The first time we were loading (then only part of) the table definition out of a protobuf message was way back in January 2009 (I blogged about it too). It was an adventure untangling all sorts of things to get to a much nicer place (where we are now). The code in the server is not perfect… I’ll be the first to admit that some of it is rather strange, but that’s mostly all behind the scenes for people interested in the protobuf Table message!

The Table message has several embedded messages in it too. We need to have information on the Storage Engine, Fields and Indexes (and each of those can have other properties). It is much more complex than the simple Schema message.

Let’s have a look at the basic structure of the Table message:

message Table {

    /* *SNIP* (Here goes the definitions for TableType, StorageEngine, Field, Index, ForeignKeyConstraint, TableOptions and TableStats) */

  required string name = 1;
  required TableType type = 5;
  required StorageEngine engine = 2;
  repeated Field field = 3;
  repeated Index indexes = 4;

  repeated ForeignKeyConstraint fk_constraint = 8;
  optional TableOptions options = 9;
  optional TableStats stats = 10;
}

(We’ve skipped the definitions for the embedded messages for now)

This seems all pretty logical; a table has a name, a type, is in a Storage Engine, has Fields, may have Indexes, may have foreign key constraints, it has some options and statistics (the statistics may go away at some point “soon”).

Let’s have a look at the TableType message definition:

  enum TableType {
    STANDARD = 0;
    TEMPORARY = 1;
    INTERNAL = 2;
  }

It’s pretty simple, the table type is either a standard table (what you get from CREATE TABLE), a temporary table (what you get from CREATE TEMPORARY TABLE) or an INTERNAL table (what you get when Drizzle uses a temporary table during query execution).

Next, the StorageEngine message:

  message StorageEngine {

    message EngineOption {
      enum EngineOptionType {
        BOOL = 0;
        INTEGER = 1;
        STRING = 2;
      }

      required string option_name = 1;
      required string option_value = 2;
      required EngineOptionType option_type = 3;
    }

    required string name = 1;
    repeated EngineOption option = 2;
  }

The main part is the “name” member, which is just the name of the storage engine (e.g. “PBXT”, “INNODB”, “ARCHIVE”). We do however have support specified in the StorageEngine message for engine-specific options (in key/value form). Expect these to be used more in the near future.
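
In the text format of the message, an engine-specific option would look something like this (ROW_FORMAT is a made-up option name here, purely to show the key/value shape):

engine {
  name: "INNODB"
  option {
    option_name: "ROW_FORMAT"
    option_value: "COMPRESSED"
    option_type: STRING
  }
}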

Specifying Fields is probably the most complex part of the table message. The Field message looks like this (with many embedded messages):

message Field {
    required string name = 1;
    required FieldType type = 2;
    optional FieldFormatType format = 3;
    optional FieldOptions options = 4;
    optional FieldConstraints constraints = 5;
    optional NumericFieldOptions numeric_options = 6;
    optional StringFieldOptions string_options = 7;

    optional string comment = 16; /* Reserve 0-15 for frequently accessed attributes */
    optional SetFieldOptions set_options = 17;
    optional TimestampFieldOptions timestamp_options = 18;
}

So… what does this all mean? Well, Fields have a type, they’re stored in a format, there’s options attached to them, there may be constraints as well as field type specific options.

The different field types should be fairly familiar by now:

    enum FieldType {
      DOUBLE = 0;
      VARCHAR = 1;
      BLOB = 2;
      ENUM = 3;
      INTEGER = 4;
      BIGINT = 5;
      DECIMAL = 6;
      DATE = 7;
      TIME = 8;
      TIMESTAMP = 9;
      DATETIME = 10;
    }

We also allow fields in different formats. Currently, these are default, fixed and dynamic. The idea is you can tell the engine (or the engine can tell you) how it’s storing the field. This is currently here as a nicety and the users for this are few and far between.

    enum FieldFormatType {
      DefaultFormat= 0;
      FixedFormat= 1;
      DynamicFormat= 2;
    }

The FieldOptions get interesting though:

    message FieldOptions {
      optional string default_value = 1;
      optional string update_value = 2;
      optional bool default_null = 3 [default = false];
      optional bytes default_bin_value = 4;
    }

You’ll no doubt be intrigued by the existence of both “default_value” and “default_bin_value”. Ordinarily, using a string to contain a textual representation of the default value (e.g. “foo” or “42”) is fine. However, for BLOB columns, you can have defaults that aren’t representable in a text string; you need binary data (e.g. when the default value contains ‘\0’).

For TIMESTAMP columns, we continue to support DEFAULT NOW() and the ability to update the timestamp column on UPDATE. How is this represented in the table message? Well… default_value will be “NOW()” and update_value will be “NOW()”. It is intended that in the future it will be possible to have arbitrary SQL expressions for these. This does, of course, require support in the Drizzle server.
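
So, in text form, the options for a classic auto-updating TIMESTAMP column come out like this (the field name is purely for illustration):

field {
  name: "last_updated"
  type: TIMESTAMP
  options {
    default_value: "NOW()"
    update_value: "NOW()"
  }
}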

The default_null bool should be rather obvious :)

Well… that’s enough for today. Next time: more of the Field message!

The Schema protobuf message: Drizzle’s metadata on a schema

I’ve previously talked about table metadata in Drizzle and how we use the table protobuf message to describe a table (see Drizzle FRM Replacement and others). The model in Drizzle is that the engine is responsible for its metadata. For schemas (you may be thinking ‘database’ but we’re moving to the Schema terminology in Drizzle) we also have a small amount of metadata.

The protobuf message is specified in drizzled/message/schema.proto and is incredibly short. In fact, here it is in its entirety:

package drizzled.message;
option optimize_for = SPEED;

message Schema {
  required string name = 1;
  optional string collation = 2;
}

We don’t keep an awful lot of metadata about schemas. A Schema has a name and it has a default collation.

You can also read the db.opt file directly using the provided (and very simple) schema_reader utility.
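
Parsing the message programmatically is about as short as the message itself. Here is a rough sketch of the kind of thing schema_reader does, using the standard protobuf-generated API (the header path is an assumption):

#include <fstream>
#include <iostream>
#include "drizzled/message/schema.pb.h"

int main(int argc, char *argv[])
{
  drizzled::message::Schema schema;

  if (argc < 2)
    return 1;

  /* Parse the serialised Schema message straight off disk. */
  std::fstream input(argv[1], std::ios::in | std::ios::binary);
  if (! schema.ParseFromIstream(&input))
  {
    std::cerr << "Failed to parse schema message." << std::endl;
    return 1;
  }

  std::cout << "name: " << schema.name() << std::endl;
  if (schema.has_collation())
    std::cout << "collation: " << schema.collation() << std::endl;

  return 0;
}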

In the near future, we could have CREATE DATABASE and CREATE SCHEMA replicated via this protobuf message. This would make things extremely easy for utilities parsing the replication stream.

We’ll also (rather shortly) have key/value pairs for options to CREATE SCHEMA/CREATE DATABASE. More on that later :)

Drizzle FRM replacement: the table proto

Drizzle originally inherited the FRM file from MySQL (which inherited it from UNIREG). The FRM file stores metadata about a table: what columns it has, what type those columns are, what indexes, any default values, comments etc. are all stored in the FRM. In the days of MyISAM, this worked relatively well. The row data was stored in table.MYD, indexes on top of it in table.MYI and information about the format of the row was in table.FRM. Since MyISAM itself wasn’t crash safe, it didn’t really matter that creating/deleting the FRM file along with the table wasn’t either.

As more sophisticated engines were introduced (e.g. InnoDB) that had their own data dictionary, there started to be more of a problem. There were now two places storing information about a table: the FRM file and the data dictionary specific to the engine. Even if the data dictionary of the storage engine was crash safe, the FRM file was not plugged into that, so you could end up in a situation where the storage engine
recovered from a crash okay, but the FRM was incorrect for what the engine recovered to. This would always require manual intervention to find out what went wrong and then fix it (in some rather unusual ways).

When the MySQL Cluster (NDB) engine was introduced, a new set of problems arose. Now the MySQL server was connecting to an existing database, where tables could be created on other nodes connected to the cluster. You now not only had the problems of crash recovery, but the problems of keeping the FRM files in sync across many nodes, requiring
all sorts of interesting solutions that, for the most part, do work.

The “obvious” solution to some of these problems would be for an engine to write out an FRM file itself. This is much easier said than done. The file format was never created to be read and written by multiple pieces of software, the code that did the reading and writing inside the server was not reusable elsewhere and the only documentation (that
wasn’t a decent chunk of the MySQL source tree) is the rather incomplete definition in the MySQL Internals wiki (http://forge.mysql.com/wiki/MySQL_Internals_File_Formats) – not nearly enough to write a correct FRM file as the specifics are very, very odd.

Our goals for reworking the metadata system in Drizzle were: to allow engines to own their own metadata (removing any opportunity to have inconsistencies between the engine and the ‘FRM’) and for engines without their own data dictionary, to replace the FRM file format with something simple and well documented.

One option was to use SQL as the standard storage format, but it is rather non-trivial and expensive to parse – especially if we were to use it as the preferred way of talking table definitions with storage engines. We had been looking at the protobuf library
(http://code.google.com/p/protobuf/) ever since its first release and it has a number of very nice characteristics: a description language of a data structure that is then used to generate APIs for reading and writing it in a number of programming languages and a standard (documented) way to serialize the data structure.

After a bit of discussion, we arrived at a good outline for the table definition proto. The current one can always be found in the Drizzle source tree at drizzled/message/table.proto. The current format is very close to final (i.e. one that we’ll support upgrades from).

The process of modifying the Drizzle code base so that it would write (and read) a file format different to the FRM isn’t worth going too much into here although there were some interesting hurdles to overcome. An interesting one was the FRM file contains a binary image of the default row for the table (which is in the row format that the server uses); we now store the default value for each column in the proto and generate the default row when we read the proto. Another interesting one was removing and refactoring “pack_flag” – the details of which should only be extracted from Jay or Stewart with a liberal application of fine ale.

The end result is that we now have storage engines that are completely responsible for their own metadata. One example is the ARCHIVE engine. In the CREATE TABLE code path, the ARCHIVE storage engine gets the table definition in an object that represents the table proto. It can examine the parameters it needs to and then either store the proto directly, or convert it into its own format. Since ARCHIVE is simple, it just serialises the table proto (using a standard function provided by the protobuf library) and stores it in the .ARZ data file for the table. This instantly makes the ARCHIVE storage engine crash safe for CREATE and DROP table, as there is only one file on disk, so there are no two files to get out of sync.

If an engine does not have its own data dictionary, it can still use the default implementation which just stores the serialised table proto in a file on disk.

We can also now use this interface to move INFORMATION_SCHEMA into its own storage engine. This means we can remove a lot of special case code throughout the server for INFORMATION_SCHEMA and instead just have an INFORMATION_SCHEMA storage engine that says it has the following tables in the INFORMATION_SCHEMA database. Because the table definition is now in a documented format with a standard API, this becomes a relatively trivial exercise.

What we’re all looking forward to is when the InnoDB data dictionary is linked into the new interface and we can have a truly crash safe database server.

Another wonderful side effect is since we now have a standard data structure for representing a table definition, we can integrate this with the replication system. In the “near” future, we can represent a CREATE TABLE in the replication stream as a table proto and not the raw SQL. If you were wanting to apply the replication stream to a different database server, you then only have to write a table proto to SQL
converter. If the target database system doesn’t do SQL at all, you could generate API calls to create the table.

So we now have a rather flexible system in place, with the code implementing it being increasingly simple and possible to be “obviously correct”.

Things that easily fall out of this work that people have written about:
– CREATE TABLE LIKE with ENGINE clause
http://krow.livejournal.com/671235.html
– table_raw_reader – looking at the raw representation of table metadata
http://www.flamingspork.com/blog/2009/10/01/table_raw_reader-reading-the-table-proto-from-disk-and-examining-everything/
– Table discovery
http://www.flamingspork.com/blog/2009/07/29/table-discovery-for-drizzle-take-2-now-merged/

Some more info:
http://krow.livejournal.com/642329.html

Learning Drizzle

Wanting to learn Drizzle? My friend Kent is doing just that, and documenting everything as he goes. It’s a great time to learn, as we head up to the bell milestone (which will be awesome) and big user visible changes are certainly slowing.

So, head on over to Learning Drizzle and learn Drizzle with Kent. Should be awesome.

Return of the “Top 5 MySQL Wishlist” and looking at Drizzle

It’s coming up on a year since I started working full time on Drizzle. So, I got a bit reflective…

Have we done things that I (and others) really wanted done? Back in 2007, I wrote my top 5 wishlist for the MySQL Server.

I am not going to pretend I speak for the MySQL development team; I’m just trying to evaluate how Drizzle is doing against some wishlists that (to me) embodied some of the reasons we started Drizzle.

Please think of this as “database server wishlists” and comparing them against Drizzle….

My wishlist was:

5. Six-monthly release cycles

Done. Not only does Drizzle have milestone releases, but we’re also dropping tarballs every two weeks (currently for the bell milestone). We’re also doing a decent job of keeping trunk free of massive breakage.

4. Much more in depth automated testing

Done (and in progress). We have drizzle-automation running things all the time. Hudson (and buildbot) test across many platforms (what pushbuild did and more when inside MySQL) before code hits trunk. We also have regular performance benchmarks that we compare across versions, crash-me, the random query generator as well as checking that we don’t regress in code size (via sloccount).

3. Sane build system

(slightly distorting my original words). Well, we’re not quite at the ready for packaging in distributions stage, but a debug or non-debug build is just -g and optimization level for the compiler, and plugins are using autofoo to work out if they can be built… so yeah, this is pretty sane.

We also are building with -Werror (and more!) which increases code quality no end.

So, mostly done.

3.5. (yes, I have a 3.5): Kill HPUX

Done.

2. Increased liberal use of asserts

An in-progress thing, but the better compiler warnings have won us a lot.

1. Pluggable data dictionary

Not only that, but done away with FRM totally. Really happy with this.

What about other people’s wishlists?

Kostya had one:

1. Remove excessive fuss.

i.e. “just do it”.

I think we’re doing really well with this for Drizzle. Plugins are pretty easy to get merged, and if your patches to the kernel are good, they’re also easy. Big changes can be harder, but in the end it has turned out well.

2. Open the development process.

Done. There is no internal wiki, there are no “committers” versus “non-committers”… everything is judged on merit of the idea/code. Sometimes the most valuable contribution is somebody telling you their real world experience.

3. Get to a normal release schedule.

Done.

4. Establish productive relationship with the majority of users.

I think the drizzle-discuss mailing list is doing quite well in this regard. It’s quite active with discussion.

5. Find a way to do incompatible changes with minimal pain for users.

We’ll see how we go :)

Ronald also had a list:

1. Real time Query Monitoring

With gearman logging and my recent experimentation with using CPU performance counters I think we’ll end up somewhere rather awesome.

If you’re looking for MySQL monitoring though, the MySQL Enterprise query monitoring stuff looks pretty good to me.

2. Consistent Release Cycles

We’re doing pretty well so far!

3. INFORMATION_SCHEMA Extensions

We’ve inherited the architecture from MySQL 5.1 (and 6.0) of being able to add INFORMATION_SCHEMA tables pretty easily, and improved it. We also have ongoing work on an INFORMATION_SCHEMA storage engine, which means that you won’t have to have the I_S tables be materialized every time you query them.

4. Online table maintenance

All progress has been due to Storage Engine authors. With the data dictionary work though, this gets easier and saner to do.

5. Published benchmarks

We’re encouraging others who will be more objective :) Although we also do regular performance regression tests as part of our standard development process.

Dormando also had a list (complete with “there is no five”):

1) Logical separation of connections from threads

We have this in Drizzle through plugins. Interesting ones are pool_of_threads (fixed number of threads), multithread (thread per connection) and single_thread (one thread).

2) A more modular core

We’re very much doing well here. It’s a long process, but I’m quite impressed by our progress.

3) Better replication (better replication management/protocol?)

The work being done on Drizzle replication is really exciting. I love the fact that modularity is encouraged and the ability to replace any bit you want easily, as well as read the replication stream in about any language you want.

4) Better test suite

I may never be 100% happy with a test suite, but we’re doing good…

PeterZ‘s view is always interesting, and he had one too:

1. Be Pluggable

Check. (and also, of course, in progress)

2. Be Scalable

We’ve done a lot of work scaling on many CPUs and many connections. Really, 8 concurrent database connections just isn’t interesting. We even run up to 2048 concurrent connections as part of our regression suite.

3. Be Distributed

With the new protocol work to have built in sharding, plugins for logging and replication via Gearman, we’re getting better.

4. Be Solid

This will be a test for us. I think we should end up pretty good because of a number of reasons:

    • clearer, easier to understand code without nasty side effects or really odd things (e.g. relying on a bool storing the value 2)
    • Better modularity (a module you don’t use and don’t load cannot screw you up)
    • Smaller core and removal of problematic features.
    • All the testing stuff I’ve previously mentioned.

So I hope we’re going to be okay here.

5. Don’t forget about the roots

The group of us working at Sun on Drizzle have said we want to focus on being awesome for large scale Web apps while enabling others to make Drizzle good for other things. I think this is the right approach to not forget our roots (and target users) while allowing it to be awesome for any use somebody wants to have of it.

From a Storage Engine author PoV, Paul had some insights while thinking about PBXT:

1. A generic engine test suite

We’re doing pretty well… the whole Drizzle test suite runs with InnoDB, and doesn’t require much change to get going with another transactional engine. The proof is in the Drizzle PBXT branch! But I also think we could do better and have a test suite more directed at each part of an engine (including error cases!).

2. Internal APIs

Paul mentions FRMs, which are gone :) In their place is a simple interface that engines can implement for ever increased functionality (i.e. they own their own metadata). We’re getting better in other places too.

3. Customizable table and column attributes

MariaDB has this now, and we have space in our table definition proto, but not at the parser level (yet).

4. Push-down restrict and join conditions.

Not yet, and not for a little while.

5. Custom data types

It’ll be great when we rework the type system even more so that this really is as easy as it should be – not only from a SQL level but also for adding new types as server modules.

Finally, Antony’s wish list:

1. Modular Architecture

Covered.

2. libmysys as a separate project

We’ve removed it where we can, and are using gnulib where we can. It very much improves the situation when you ditch weird-ass platforms and assume some level of POSIX.

3. New/modular parser

We’re getting close to a stage where you could load a different parser… not there, but relatively close. It would still be messy, but a lot better than even 6 months ago.

4. Unit tests for server components

With our move towards modularity, this is actually getting possible!

5. Aggregate Stored functions and External Stored Procedures

We don’t currently have either, we have decent thoughts though.

Antony also cheated and added a few more:

A new Recursive Descent parser

The required work to be able to replace the parser is in progress.

Abstract Syntax Trees

See above.. getting the pre-work done.

SCTP and/or link aggregation

We’ll see improvements around this with the new drizzle protocol.

Parsing within the client

We’ve had some very good discussion on the drizzle-discuss list. We’ll no doubt have something to help remove more of the cycles used in executing a query.

Integrated Federation

That’s the game plan :)

Elimination of FRM Files

I never get tired of saying this is done :)

Elimination of errmsg.sys

We’re now just using gettext, like every other free software project on the planet. Although I think we could take a few steps in making errors more easily parsable by code.

So how do we stack up?

I think we’re doing pretty well. There’s still a lot of work to get where we want to be, but it’s amazing how much progress we’ve made in the short time we’ve been around.

I also just realized I missed Jay’s list… but we’re doing pretty well there too.

How many CPU cycles does a SQL query take? (or pagefaults caused… or L2 cache misses… or CPU migrations…)

I like profilers. I use them when trying to make software (such as Drizzle) faster. Many profilers suck – and pretty much all of them are impossible to attach to a running system. Two notable exceptions are oprofile and dtrace (for Linux and Solaris respectively). The downside of oprofile is that it is non-trivial to configure and get running, and it is pretty much all or nothing. Dtrace has the major disadvantage that it is Solaris specific, so it is only available to a minority of our users (and developers).

The new Linux Performance Events interface (perf_event) presents to userspace a nice abstraction of the hardware Performance Monitoring Unit inside the CPU. Typically these are processor specific (i.e. the one in a Core is different than the one in a Core 2) and can only be used by one thing at a time. The perf_events interface lets multiple applications/threads use the PMU (switching state at context switch as needed), even giving us ratios of how much time we got events for so we can do realistic estimates. It also provides some common defines to ask for things like CPU Cycles (the value you program into the PMU differs per CPU architecture for that, so an abstraction is rather welcome: it means we don’t need to have #ifdef __powerpc__ in our userspace code to support PowerPC, just a kernel that does).

Since perf_events gives us an interface to only get counts for our thread, we can map this onto connected sessions to Drizzle (and if we were using a pool_of_threads type scheduler in Drizzle, we’d need a bit of extra code to get things right, but with a thread per connection scheduler, we get it for free). A simple use of this could be to answer the question “How many CPU cycles does this SQL query take?” with the condition that you do not want how many CPU cycles were spent executing other things (e.g. the 1,000 other SQL queries currently being executed on your database server).

Many of you will now point out the RDTSC instruction for the x86 and ask why I’m just not using it. With RDTSC you’re only getting “how many cycles since reboot”. So using two values from the TSC and finding the difference only tells you how many cycles were between the two reads, not how many cycles were spent executing your thread/process. So the value of “cycles executed” gathered with RDTSC varies between a loaded and non-loaded system. With perf_events, it does not.
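
To give an idea of what this looks like at the syscall level, here is a minimal sketch of counting cycles for just the calling thread (error handling omitted; perf_event_open has no glibc wrapper, so you invoke the syscall directly):

#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static long perf_event_open(struct perf_event_attr *attr, pid_t pid,
                            int cpu, int group_fd, unsigned long flags)
{
  return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

/* Count CPU cycles spent in the calling thread while work() runs. */
static uint64_t cycles_for(void (*work)(void))
{
  struct perf_event_attr attr;
  uint64_t count= 0;
  int fd;

  memset(&attr, 0, sizeof(attr));
  attr.type= PERF_TYPE_HARDWARE;
  attr.size= sizeof(attr);
  attr.config= PERF_COUNT_HW_CPU_CYCLES;
  attr.disabled= 1;

  /* pid 0, cpu -1: this thread, whichever CPU it runs on. */
  fd= perf_event_open(&attr, 0, -1, -1, 0);

  ioctl(fd, PERF_EVENT_IOC_RESET, 0);
  ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
  work();
  ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

  read(fd, &count, sizeof(count));
  close(fd);
  return count;
}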

So… after talking to paulus about perf_events, I decided to see how I could plug this into Drizzle to start getting interesting numbers out. Sure enough, a little bit of work later and I have my first proof of concept implementation over in lp:~stewart-flamingspork/drizzle/perf-events. That tree has a perf_udf() function that is like BENCHMARK except it returns the number of CPU cycles spent executing the expression N times. For example, how many CPU cycles does it take for the SQL expression MD5(‘pants’) to be evaluated?

drizzle> select perf_udf(1, MD5('pants'));
+---------------------------+
| perf_udf(1, MD5('pants')) |
+---------------------------+
|                     43540 |
+---------------------------+
1 row in set (0 sec)

So on my laptop, it’s about 40,000 cycles (over multiple runs I saw it vary between about 39,000 to 44,000). The really neat thing about using the perf_events interface is that if you run this on a different CPU architecture that has perf_events available in the kernel you’re currently running, it “just works”. e.g. if I ran this on a POWER5 box, I’d find out how many cycles it took there! No special code in Drizzle, yay!

The really neat next thing I tried was to run the same perf_udf query while running drizzleslap on the same database server, completely hammering it. I get exactly the same result (within normal variance)!

That isn’t the best part though. The best part is the other bits of information we can get out of the PMU:

  • PERF_COUNT_HW_CPU_CYCLES
  • PERF_COUNT_HW_INSTRUCTIONS
  • PERF_COUNT_HW_CACHE_REFERENCES
  • PERF_COUNT_HW_CACHE_MISSES
  • PERF_COUNT_HW_BRANCH_INSTRUCTIONS
  • PERF_COUNT_HW_BRANCH_MISSES
  • PERF_COUNT_HW_BUS_CYCLES

So the same way we can use the new ‘perf’ utility to see what a process is doing to the machine, we can enable people to do exactly the same thing with specific SQL queries (and through a bit of extra code, you could aggregate for users/applications). Not only that, but we could write a plugin for Drizzle to occasionally sample queries running through the database server and build up a quite complete profile of what’s going on over time.

We can also get software events out of perf_events such as:

  • PERF_COUNT_SW_CPU_CLOCK
  • PERF_COUNT_SW_TASK_CLOCK
  • PERF_COUNT_SW_PAGE_FAULTS
  • PERF_COUNT_SW_CONTEXT_SWITCHES
  • PERF_COUNT_SW_CPU_MIGRATIONS
  • PERF_COUNT_SW_PAGE_FAULTS_MIN
  • PERF_COUNT_SW_PAGE_FAULTS_MAJ

So for engines that do memory mapping of files on disk, we can find out which of your SQL queries are causing the page faults! We should also be able to find out if the operating system kernel is bouncing your execution threads around CPUs a lot.

The biggest possibility for awesomeness comes from the perf_event ability to get periodic call traces (you specify how often) including down into the kernel. This means that we could, on demand and only when we enable it, profile where time is being spent for that specific SQL query. The most important thing to note is that when this gathering is not enabled, the overhead is zero. Even when enabled for one query, this should have minimal impact on other things currently running (you’re going to use some extra CPU doing the profile, but we’re only profiling that one SQL query, not the 1000 other ones executing at the same time). So we could tell you that your query spends 9/10ths of its time in filesort() without ever adding any extra instructions to the filesort() codepath (and anywhere else where you may want to find out how much CPU time was spent). We could even tell you how much time was being spent in the kernel doing specific IO operations!

So I’m thinking that this could be used to make something pretty useful that could easily be used on production systems due to the zero overhead when not enabled and the small overhead when enabled.

table_raw_reader – reading the table proto from disk and examining everything

Ever wondered exactly *what* was in a Drizzle Table proto? Well, wonder no more. A while back this little utility called table_raw_reader hit the drizzle codebase. It’s a simple command line utility that takes a serialised table proto file (such as a .dfe) as an argument, reads it off disk and then prints out a text representation using the TextFormat class of the protobuf library.

An example:

stewart@willster:~/drizzle/jay-and-stewart-remove-pack_flag$ ./drizzled/message/table_raw_reader ./tests/var/master-data/test/t1.dfe
name: "t1"
engine {
  name: "InnoDB"
}
field {
  name: "id"
  type: INTEGER
  format: DefaultFormat
  options {
    length: 11
  }
}
field {
  name: "padding"
  type: VARCHAR
  format: DefaultFormat
  options {
    length: 800
  }
  string_options {
    length: 200
    collation_id: 45
    collation: "utf8_general_ci"
  }
}
indexes {
  name: "PRIMARY"
  is_primary: true
  is_unique: true
  type: UNKNOWN_INDEX
  key_length: 4
  index_part {
    fieldnr: 0
    compare_length: 4
    key_type: 8
  }
  options {
  }
}
type: STANDARD
options {
  collation: "utf8_general_ci"
  collation_id: 45
  pack_record: true
  row_type: ROW_TYPE_DEFAULT
}

This can help a lot when debugging; it has already helped me a lot. It’s left as an exercise for the reader to create a utility that lets you modify the table proto in this text based format.

pack_flag is dead!

(At least in Drizzle)

Brian merged my jay-and-stewart-remove-pack_flag branch and it’s now in trunk. Removing pack_flag was a task for the bell milestone and, in true collaborative effort, it took more than one person to unravel its dark mysteries. Hats go off to Jay who bravely ripped more of it out with the aid of excellent Seattle coffee. I finished it off by removing the last remnants and fixing an outstanding bug (and at the same time making us look at the DECIMAL code and shudder in fear of what it may mean).

With pack_flag gone, we can now do insane things like remove the last bits of TINYINT from the code; before pack_flag was removed, doing that broke things.

This also means we are one more step towards a table proto that is maintainable long term and doesn’t have incomprehensible magic.