Ghosts of MySQL Past, Part 7.1: Hey look, I found an old business card

Digging through piles of old stuff in the house, I came across a nice little artifact of MySQL history: an old business card. One benefit of a small company is that you tend to get your first name @company.com, which, of course, I had.

I wonder which other MySQL AB employees still have a business card lying around somewhere?

Ghosts of MySQL Past, Part 7: PBXT

Recently, I’ve been writing based on my linux.conf.au 2014 talk, which you can watch the recording of. Also see Part 1, Part 2, Part 3, Part 4, Part 5 and Part 6. My feed fell off Planet MySQL for a bit, so you may have missed those posts – feel free to go on a trip down memory lane before returning here for, well, more of a trip down memory lane.

At the start of 2005, Paul McCullagh started working on a new transactional storage engine for MySQL. He announced it in early 2006, so the majority of initial development was done before Oracle bought Innobase Oy.

It’s at this point I should correct myself from Part 5, where I was talking about when Maria started – it was actually at the start of 2005 that the project began, about 10 months before InnoDB Friday. I think it was mostly small-scale work at first, with a larger focus coming months later.

But anyway, let’s talk about PBXT! It had a different architecture from InnoDB and thus could have an interesting set of performance characteristics. The shortest way to describe the original architecture is “kind of log based”: rows are written to a log file, and there’s a handle file that points to where the rows are in the log.

The design also split each row into a fixed-size part and a variable-size part. The fixed-size part was stored alongside the handle in the record data file, so retrieving it was incredibly quick.

Not having to write data twice and having really quick access to some columns gave PBXT quite a decent performance advantage for many workloads.

The BLOB Streaming feature that was also worked on (originally just for PBXT, then for any engine) was quite ahead of its time. Think HandlerSocket, but done more correctly. HandlerSocket is an awful binary protocol that supports very limited operations and uses two TCP ports (one for reads, one for writes). BLOB streaming used HTTP – something anyone could use, manipulate and proxy however they wanted.

If we look at Paul’s Top 5 Wishes for storage engines, an engine test suite and sane (or at least documented) APIs would clearly have helped. Having to do API tracing to discover when you should start a transaction is probably not the best way to attract developers – the amount of time that better interfaces in the server could have saved was immense (and not just for PBXT developers).

So, why aren’t we all using PBXT now? Well… if MySQL had shipped with PBXT, it could have been a different story. There was (and indeed is) a very uneven playing field for MySQL storage engines. If you’re in the main MySQL tree, you’re everywhere. If not, you’ve got a heck of a lot of work on packaging and keeping up with various API changes (not to mention the ABI, which – since we’re talking C++, and since we’re talking MySQL – changed a lot). With InnoDB being “good enough” for many users, actually being forced to improve by challengers such as PBXT, and sitting in the main tree, it simply kept the majority of the users. And it’s not easy making money as a storage engine vendor – developing a storage engine costs money.

With no more than a couple of people working on it at any one time, it’s amazing that PBXT had such a big influence – and we can likely credit many InnoDB performance improvements to PBXT being so much better in some areas.

An interesting side story: back when MySQL belonged to Sun, I was sending build fixes, compiler warning fixes and other minimal patches to Paul, and he mentioned that perhaps it was time he had a contributor agreement. I pointed out the Sun contributor agreement as an example of one that could be used – maybe he could just replace “Sun Microsystems” with “PrimeBase” and I could submit that to the Sun system. At the very least, it would be amusing. Paul said that sounded like a great idea, so I submitted Sun’s own contributor agreement (which it expected outside contributors to agree to) to Sun for them to agree to.

Well… a few weeks later I got a response, and Sun wasn’t exactly keen on it – until I pointed out a few times that it was, in fact, their own agreement. Then, finally, it was all okay.

If your company wants people to agree to a contributor agreement: see if they’d agree to it if it was for someone else. It was an interesting exercise.


Ghosts of MySQL Past, Part 6: The engine revs

This week I’ve been writing based on my linux.conf.au 2014 talk, which you can watch the recording of. Also see Part 1, Part 2, Part 3, Part 4 and Part 5. My feed fell off Planet MySQL for a bit, so you may have missed those posts.

Netfrastructure was many things apart from just a database engine, but the part that interested MySQL was the storage engine. One notable feature of Netfrastructure was the inbuilt Java VM, which meant you could do all sorts of neat tricks right near the data. With Jim and Ann coming from much more feature-complete databases, some parts of MySQL were going to be a bit of a shock.

There was still work to be done on the Falcon engine itself; it wasn’t completely finished and ready to just drop in, even though you may have heard things like “to be completed later this year”.

There was also another experiment to see if MySQL could get a storage engine that wasn’t owned by somebody else: resurrecting Gemini. Remember, it was GPL licensed now, and it could conceivably be updated to the latest MySQL versions. You can go and see the code for the Amira project on Launchpad. Ultimately this went nowhere, but it could have been an interesting story had the project received more resources. Instead, it was Falcon that got all the attention and effort.

This was going to be the ultimate triumph of the Pluggable Storage Engine Architecture, something that had been touted in presentations and marketing material for years. What really happened? Well, it took NDB (now MySQL Cluster) somewhere between two and four years before you could really say it was stable and free of many “obvious” bugs/unintended deviations in behavior. Would Falcon be any different?

In reality, the Falcon project unearthed all of the warts of the storage engine interface. There were many exchanges of “Is it really like that?”, “Yes, yes it is, sorry.”

The APIs (and I use the term loosely) between a storage engine and the database server were originally built to plug in ISAM and then MyISAM. The first transactional store (InnoDB) required a bunch of hacks to work, and the API was never cleaned up. When the third transactional engine came along (MySQL Cluster) – which was also a distributed engine – there were again no real fixes in the API, just more of the same workarounds and oddness. So by the time Falcon came along, if you were lucky the API was merely not what you’d expect. There were many, many situations where not only was the official documentation inaccurate, but you’d have to commit several gross layering violations just to get something rather fundamental to a transactional database working.

Arguably the worst thing was foreign keys, a requirement for the Falcon project: you had to implement your own SQL parser to get any information on foreign keys. Let’s not even get started on auto_increment, for which the correct behavior is pretty much completely undocumented.

Next, we’ll look at PBXT.

Google Tests Homegrown Power8 Servers

http://www.enterprisetech.com/2014/02/13/google-tests-homegrown-power8-servers/

Having joined IBM now and working on Linux on Power, I’m allowed to be all happy and gleeful about a non-x86 CPU architecture again – and one where Linux and free software really are a big deal.

Some of my now-colleagues talked about POWER8-related things at linux.conf.au, so you should go and check out their talks!

Ghosts of MySQL Past Part 5: The Era of Acquisitions

This week I’ve been writing based on my linux.conf.au 2014 talk, which you can watch the recording of.

Also see Part 1, Part 2, Part 3 and Part 4. My feed fell off Planet MySQL for a bit, so you may have missed those posts.

Now we head into the era of acquisitions… there have been a few in MySQL history, and 2005 brought the second (the first was MySQL AB acquiring Alzato for NDB). In what was to become known as “InnoDB Friday”, Innobase Oy – the makers of InnoDB – was acquired by Oracle. That very same month….

MySQL 5.0 GA. The first GA release of MySQL 5.0 is infamous. It was nowhere near ready and everybody who tried to use 5.0 in the early GA days has a story about something obvious that was broken. Basically, the majority of the new features simply didn’t work. It took many point releases before people would consider 5.0 ready.

The real measure of 5.0 quality was that it took MySQL AB over a year before we started to use it for our support database.

At the end of 2005, the Maria project was started: a project to create a transactional storage engine. This should not be confused with MariaDB, which would come years later – this is Maria, now called Aria. The basic idea was to fork MyISAM and add features to it. In hindsight, it’s easy to see that when you have a quality problem with your main product, you should probably not take a bunch of senior engineers and put them on a different project. IIRC there was an initial estimate of a GA release by the end of 2007. It’s now eight years since the project started, and there’s still no stable release.

There were other efforts to get a transactional storage engine not owned by Oracle, and in 2006 MySQL AB acquired Netfrastructure and along with it Jim Starkey and Ann Harrison came to work for MySQL AB.

Originally named JSTAR, this would become known as Falcon (probably something to do with the Swedish beer by the same name).

Ghosts of MySQL Past: Part 4, A million features for Enterprise

Continuing on from Part 3….

SAP is all about Enterprises and, as such, used all the Enterprise features of databases. This was a very different kind of application from anything MySQL had served so far (mostly web applications, increasingly being used at scale). This was “MySQL focuses on Enterprise”.

This is 2003: before views, before stored procedures, before triggers, before cursors, before prepared statements, before precision math, before SQL access to logs, before any real deadlock info, before you could use a / in a table name, before any performance statistics, before full Unicode, before partitioning, row-level binary logs and fractional seconds in temporal types, and before EXPLAIN for INSERT/UPDATE/DELETE (which we still don’t have). There was a long way to go before MySQL would fit with SAP. A long, long way.

But how were we going to run our cell phone networks on MySQL? There was an answer to that: MySQL AB acquired an Ericsson spinoff, Alzato, for its clustered database server, NDB, which would become MySQL Cluster. This is separate from replication, and was originally designed for GSM HLR databases. Early versions had two types of availability: five nines (99.999%) or zero nines (0.000%) – it either worked or it exploded. Taking a database engine that wasn’t fully mature and integrating it with MySQL is not a task for the faint of heart. It wouldn’t be the last time this was attempted, and it would prove to be the most successful.

In October of 2004, MySQL 4.1 went GA – the first version with MySQL Cluster and incidentally, just before I joined MySQL AB on the MySQL Cluster team.

By this time, MySQL AB had hired just about all the outside contributors – well, what better job interview than a patch? The struggle to have a good relationship with outside contributors started a long time ago; it’s certainly nothing new.

2005 was the year of MySQL 5.0 instability. At the time, 5.0 was the development version and it was plagued by instability. It was really hard for development teams (such as the SAP team) to work on deliverables when every time they’d rebase their work on a new 5.0 trunk version, the server would explode in new and interesting ways.

In fact, the situation was so bad that Continuous Integration was re-invented. Basically, it was feature++; stability--. The fact that at the same time even less stable development trees existed did not bode well.

Ghosts of MySQL Past: Part 3

See Part 1 and Part 2.

We rejoin our story with a lawsuit. While MySQL suing Progress NuSphere is perhaps not the first GPL lawsuit that comes to mind, it was the first time the GPL was tested in court. Basically, the GEMINI storage engine was a proprietary storage engine bundled with a copy of MySQL. Guess what? The GPL was found to be valid, GEMINI was eventually GPLed, and it didn’t really go anywhere after that. Why? Probably some business reasons, and also the fact that InnoDB was actually rather good – and there wasn’t a lawsuit to enforce the GPL there, which made business relationships remarkably easier.

In 2003 there was a second round of VC funding. The development team increased in size. One thing MySQL AB did was invest heavily in technology, and I think this is what gave the company a lot of value: you need to spend money developing technology if you wish to be seen as a giant and to provide a high level of service to customers.

MySQL 4.0 went GA in March 2003, while at the same time there were 4.1 and 5.0 development trees. Three concurrent development trees may seem like too many – and of course, it was. But these were heady days of working on features MySQL was missing, ever wanting to gain users and market share. Could all these extra features actually be added to MySQL? Time would tell…

The big news of 2003 for MySQL? A partnership with SAP, built around one idea: “run SAP on MySQL”. This would push the MySQL Server in a bit of an odd direction. For a start, the bootstrap SQL script for SAP created something like 10,000 tables and loaded gigabytes of data – before you even started setting it up. In 2003, on MySQL 4.0, this didn’t go so well. Why was SAP interested? Well, then you’d be able to run SAP without paying for Oracle licenses!

Ghosts of MySQL Past: Part 2

This continues on from my post yesterday and also contains content from my linux.conf.au 2014 talk (view video here).

Way back in May of the year 2000, a feature was added to MySQL that would keep many people employed for many years: replication. As of 3.23.15 you could replicate from one MySQL instance to another. This is commonly cited as the result of two weeks of work by one developer. The idea is simple: create a log of all the SQL statements that modify the database, then replay them on a slave. Remember, this was before there was concurrency and everything was ISAM or MyISAM, so this worked (for certain definitions of worked).

The key things to remember about MySQL replication: it was easy to use, easy to set up and built into the MySQL Server. This is why it won. You have to fast-forward to September 2010 before PostgreSQL caught up! It was only with PostgreSQL 9.0 that you could have queryable read-only slaves with stock-standard PostgreSQL.

If you want to know why MySQL was so much bigger than PostgreSQL, this built-in, easy-to-use replication was a huge reason. There is the age of a decent scotch between read-only slaves for MySQL and PostgreSQL (although I don’t think I’ve ever pointed that out to my PostgreSQL friends when having scotch with them… I shall have to!)

In 2001, when space was an odyssey, the first GA (General Availability) release of MySQL 3.23 hit the streets (quite literally, this was back in the day of software that came in actual physical boxes, so it quite probably was literally hitting the streets).

For a good piece of trivia, it’s 3.23.22-beta that is the first release in the current bzr tree, which means that it was around this time that BitKeeper first came into use for MySQL source code.

We also saw the integration of InnoDB in 2001. What was supremely interesting is that the transactional storage engine was not from MySQL AB; it was from Innobase Oy. The internals of the MySQL server were certainly not set up for transactions, and for many years (in fact, to this day) we have talked about how a transactional engine was shoehorned in there. Every transactional engine since has had to do the same odd things to, say, find out when a transaction is being started. The exception here is Drizzle, where we finally cleaned up a bunch of this mess.

Having a major component of the MySQL server owned and controlled by another company was an interesting situation, and one that would prove interesting in a few years time.

We also saw Mårten Mickos become CEO in 2001, a role he would hold through the Sun acquisition – an acquisition that definitively proved you can build an open source company and sell it for a *lot* of money. It was also the year MySQL AB accepted its first round of VC funding, which would (of course) have interesting implications: some good, some less ideal.

(We’ll continue tomorrow with Part 3!)

Past, Present and future of MySQL and variants Part 1: Ghosts of MySQL Past

You can watch the video of my linux.conf.au 2014 talk here: http://mirror.linux.org.au/linux.conf.au/2014/Wednesday/28-Past_Present_and_future_of_MySQL_and_variants_-_Stewart_Smith.mp4

But let’s talk about things in blog form rather than video form :)

Back in 1979, there was UNIREG: a text UI to records (rows) in a database (err, table). The reason I mention UNIREG is that it had FoRMs, which – as you may have guessed from my capitalization – is where the FRM file comes from.

In 1986, UNIREG came to UNIX. That’s right kids, the 80×24 VT100 interface to ISAM (Indexed Sequential Access Method – basically, rows are written in insert order and indexes point to them) came to UNIX. There was no generic query language, just FoRMs and reports. In fact, to this day, that 80×24 text interface is stored in the FRM file by MySQL and never, ever used (I’ve written about this before).

Then, around the 1990s, there was this mSQL thing: a small SQL server that came with source but wasn’t FOSS. Originally, Monty W plugged his ISAM engine into it, but it wasn’t quite the right fit… so in 1995 we had MySQL 1.0, and MySQL AB was founded.

Fast forward a bit and in 1996 we had MySQL 3.19 and development continued. It managed to gain features, performance, ports to different operating systems and CPU architectures and, of course, stability.

It wasn’t until the year 2000 that MySQL adopted the GPL. This turned out to be a huge step in the right direction for increased adoption, but at the time it was a huge risk for the company – essentially betting all of its revenue on making the software more free.

This was the birth of the dual licensing business model. You see, the client library (libmysql) was also GPL, which meant it was easy to use if your application was also GPL; but if you were going to distribute your application and it wasn’t under a GPL-compatible license (there was also a FOSS exception so that things like PHP could use it), then you needed a license.

Revenue from licensing was to be significant throughout the entire history of MySQL AB.

(We’ll continue this in part 2 tomorrow)

The accurately named “best tofu dish ever”: Mapo Tofu

I have been having a craving for Mapo Tofu for a few days now and I finally could be bothered attempting to make it for the first time. It’s odd that for a dish I really do quite like, this was my very first time attempting to make it.

Well… I was googling around and found this recipe: http://www.kitchenriffs.com/2011/03/best-darn-tofu-dish-ever-mapo-tofu.html. I didn’t have fermented black beans, so I left that out. I used the broth from the dried shiitake mushrooms (with a tiny bit of Massel stock added) and did take the option of adding pepper (chilli) flakes.

It was absolutely delicious and really quite simple to make.

The simmering of the tofu (I just used the Woolworths Macro Classic Tofu that you can pick up at any Woolworths store… because I’m often just plain lazy when it comes to acquiring tofu) was something I haven’t done before. It turns out that makes the outside of it more likely to keep its shape, which it rather did. Neat trick!

Did I mention this was delicious? I wish there was more!

$3 Kaiser Baas keyring photo frame and linux

I picked one up for $3 from OfficeWorks and wanted to put pictures on it.

So… I could list (and get) the default photos with gphoto2 (shipped with Fedora 20, it just worked): gphoto2 -L to list, gphoto2 -P to get them all.

It turns out the images are 128×128 PNGs! Easy: change the resolution of a picture I want in GIMP, then “gphoto2 -u filename.png” sends it to the device.
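The whole round trip looks roughly like this (a sketch: holiday.jpg is a made-up filename, and ImageMagick’s convert stands in for the GIMP step so it all stays on the command line):

    gphoto2 -L    # list the files on the frame
    gphoto2 -P    # download them all
    # scale and centre-crop to the frame's native 128x128, then upload
    convert holiday.jpg -resize 128x128^ -gravity center -extent 128x128 holiday.png
    gphoto2 -u holiday.png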

Wheee, it works! Not bad for $3

and now for something completely different…

As many of you know, I’ve been working in the MySQL world for quite a while now. In fact, it was nearly 10 years ago that I first started hacking on MySQL Cluster at MySQL AB.

Most recently, I was at Percona, which was a wonderful journey: over my nearly three years there, the company at least doubled in size, launched several new software products and greatly improved the quality and frequency of its releases.

However, the time has come for something completely different. The MySQL world is rather mature, the future of Percona software is bright and, well, I could do with poking into something rather different.

So a couple of weeks ago I started at IBM in the Linux Technology Centre working on KVM on POWER and related things. No doubt there’ll be interesting things to blog about as time goes on, but it’s about time I posted my change of employment :)

Converting MySQL trees to git

I have put up a set of scripts on GitHub: https://github.com/stewartsmith/bzr-to-git-conversion-scripts. Why do I need these? Well… if only “bzr fast-export | git fast-import” worked flawlessly for large, complex and old trees. It doesn’t.

Basically, when you clone this repo you can run “./sync-BLAH.sh” and it’ll pull the BZR trees for the project, convert them to git and clean things up a bit. You will likely have to edit the sync-BLAH.sh scripts, as I have them pointed at branches on my own machine (not having to pull fresh BZR branches of MySQL trees over the network is a feature – it’s never been fast). You’ll also want to edit the git remotes to point wherever you want the git trees to end up.

I’ve done it for several trees (there’s a sync script for each).

What problems did I hit? Well… the first is performance: things are slow unless you tweak a bunch of knobs, and then they’re merely rather slow instead of painfully slow. So in the empty git repo I set core.compression=1, which makes zlib a whole lot faster.
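That bit is simply (mysql-git being a stand-in name for the target repo):

    git init mysql-git && cd mysql-git
    # zlib level 1: much faster than the default, at the cost of larger objects
    git config core.compression 1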

I naturally give the correct incantation to bzr fast-export to munge tag names appropriately, set a git branch name (each BZR branch ends up as a git branch) and use a marks file (this speeds up incremental syncs).

For one of the branches I was importing, BZR had allowed the invalid committer “billy-earney billy.earney@gmail.com\n <>” – yes, a newline in the committer. This messes up the fast-import format, so I have to run the entire fast-export output through sed to clean it up.

We then pass the output through bzr fast-import-filter to apply a user map – me having looked at the appropriate committers and cleaned them up, so we get better attribution in the resulting git trees, as well as fixing some errors in the bzr tree so that git likes them (most notably, a missing < or > – one, not both – around email addresses). The user map is fairly Percona-specific, but there are at least one or two entries for Oracle committers too.

Next, I pass the output through pv(1) to do two things: monitor the stream to see that it’s still going, and act as a transfer buffer so that git fast-import doesn’t stall waiting for input – amazingly enough, this gave a decent boost to import speed.
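Putting the whole pipeline together, a single branch’s sync looks something like this sketch (branch names, paths and the exact sed expression are illustrative, and I’m going from memory on the bzr-fastimport flag spellings – the authoritative incantations live in the sync-BLAH.sh scripts):

    BRANCH=mysql-5.6    # hypothetical branch name
    bzr fast-export --marks=marks/$BRANCH.bzr --git-branch=$BRANCH \
        --rewrite-tag-names ../bzr/$BRANCH |
      # join the split committer line and give billy-earney a valid email field
      sed -e '/^committer billy-earney/{N;s/\n <>/ <billy.earney@gmail.com>/;}' |
      bzr fast-import-filter --user-map=../user-map |
      pv |
      git fast-import --import-marks-if-exists=marks/$BRANCH.git \
          --export-marks=marks/$BRANCH.git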

Finally, when we’re done importing all of the revisions for all of the bzr branches, if this is our first run we set the HEAD ref to the last BZR branch name and then do a git repack. Through experimentation, I’ve found that “git repack -AdfF --depth=100 --window=500” is what gives me the smallest size possible.

My lca2014 talk video: Past, Present and Future of MySQL and variants

Last Wednesday morning I gave my talk at linux.conf.au 2014. You can now view and download the recording of it here:

http://mirror.linux.org.au/linux.conf.au/2014/Wednesday/28-Past_Present_and_future_of_MySQL_and_variants_-_Stewart_Smith.mp4

(Hopefully more free formats will come soon – the all-volunteer AV team has been absolutely amazing, getting things up this quickly.)

CFP:Developer, Testing, Release and CI Automation miniconf @ linux.conf.au 2014

I have just opened the Call For Papers for the Developer, Testing, Release and Continuous Integration Automation miniconf at linux.conf.au 2014.

This miniconf is all about improving the way we produce, collaborate, test and release software.

We want to cover tools and techniques to improve the way we work together to produce higher quality software:

– code review tools and techniques (e.g. gerrit)
– continuous integration tools (e.g. jenkins)
– CI techniques (e.g. gated trunk, zuul)
– testing tools and techniques (e.g. subunit, fuzz testing tools)
– release tools and techniques: daily builds, interacting with distributions, ensuring you test the software that you ship.

All sessions are 30 minutes unless there is prior arrangement. Typically there is a VGA plug at the front of the room, but if you have any specialized A/V requirements, please enter them as notes at the end of your submission and we’ll see what we can do.

Submissions are open until November 20th, with notifications going out over the following 1-2 weeks.

Submit now.

How many pages of ToS and Privacy Policies?

So, I started this thought experiment: let’s assume for the moment that government is completely trustworthy, only has your interests at heart and doesn’t secretly sell you out to whoever it feels like. Now, on top of that, what about the agreements you enter into with corporations? How long are they, and could you properly understand all the implications to your privacy and give informed consent?

So… I started with when I left home. I got on a Virgin flight; they have a privacy policy, which is eight pages. I then arrived in New Zealand and filled out a customs form. I could not find anything about what the New Zealand customs service could do with that information, but let’s just assume they’re publishing it all on the internet and selling it to the highest bidder. The other alternative is that they follow the New Zealand Privacy Act, which is a mere 182 pages.

Once through customs, I turned on my phone. The basics are probably covered by the New Zealand Telecommunications Privacy Code (35 pages), and since I was on Vodafone NZ, their three-page privacy policy likely applies. Of course, I’m roaming, so the Vodafone Australia three-page privacy policy also likely applies (under a completely different legal framework, of course). There are likely things in the other agreements I have with Vodafone too: the standard agreement summary is a mere 4 pages and the complete agreement is 84 pages.

I arrived at my hotel; the Langham privacy policy is two pages. I then logged into Facebook – 30 pages of important things there – and into Twitter, another 11 pages. My phone is all hooked up to Google Play, so that’s another 10 pages. I walked into the conference, where the code of conduct is a single page – a pleasant relief. I then logged into work mail: the GMail terms of service are three pages, with a four-page privacy policy.

If I were someone who used iTunes, it would be reasonable to watch something in the hotel room – another 24 pages of agreement – before deciding to call home, carefully reading the full 20 pages of the Skype terms of service and privacy policy first.

In total, that’s 428 pages.

This excludes any license agreements to the operating system you’re using on your laptop, phone and all the application software. It also excludes whatever agreement you enter into about the CCTV footage of you in the taxi to and from the airport.

So, my question to the panel at OSDC was: how on earth is the average consumer meant to be able to make an informed decision and give their informed consent to any of this?

Carbon footprint of interpreted languages

A thought from a good discussion with François at OSDC today: what is the carbon footprint of various languages? He mentioned that the carbon footprint of a new Haskell compiler release is remarkably non-trivial, due to every Haskell package in Debian needing to be rebuilt.

So, I thought, what’s the impact of something like Python (or Perl)? Every machine running the code has to do the bytecode compilation/JIT/interpretation of that code, so when, say, Ubuntu ships some new version of $random_desktop_thing_written_in_python, we’re actually compiling it well over 20 million times. That’s a remarkably non-trivial amount of CPU time (and thus CO2 emissions).

So, program in compiled languages such as C or C++ as doing so will save polar bears.

Hong Kong (OpenStack Summit)

I’ll be in Hong Kong for the upcoming OpenStack Summit Nov 5-8. I’d be thrilled to talk database things with others present, especially around Trove DBaaS (DataBase as a Service) and high availability MySQL for OpenStack deployments.

I was last in Hong Kong in 2010, when I worked for Rackspace. The closest office to me was in Hong Kong, so that’s where I did my HR onboarding training. I remember telling friends on the Sunday night before leaving that I might be able to make dinner later in the week, depending purely on whether somebody got back to me about whether I was going to Hong Kong that week. I was, and I went. I took some photos while there.

Walking from the hotel where we were staying to the Rackspace office could be done pretty much entirely through buildings without going outside. There were bits of art around too, which is just kind of awesome – I’m always in favour of random art.
Statues in walkways

The photo below was the view from my hotel room. The OpenStack summit is out by the airport rather than in the middle of town, so the views will be decidedly different to this, but still probably quite spectacular if you’re in the right place. (I plan to take camera gear, so shout if you want to journey out too.)
Hotel Window (Hong Kong)

There are some pretty awesome markets around Hong Kong offering just about everything you’d want, including a lot just out on the street.
Java Road
Hong Kong Street Market

Nighttime was pretty awesome too: having people from around the world journey out into the night was great.
Rackers walking Hong Kong at Night

I was there during the World Cup, and the streets were wonderfully decorated. I’m particularly proud of this photo as it was handheld, at night, after beer.
Hong Kong streetlife