19 January 2009

Tokutek challenge vs. 128GB RAM

Like a few other folks, I decided to play with iiBench and see how fast I could insert 1B rows into an indexed InnoDB table. Mark Callaghan published an excellent writeup of the theoretical and practical limits when index size >> buffer pool... but I bent the rules a bit ;)

The platform I ran iiBench on has four dual-core 3.2GHz Xeons, an internal 8-disk 10K RPM RAID 10, and 128GB of RAM. That's a lot, I know, but it was built precisely to avoid InnoDB's performance drop when an index does not fit in memory. It took just 22 hours to run a single iiBench process and 10 hours to run four processes in parallel (each inserting 250M rows)! This was achieved without modifications to the iiBench code, using standard MySQL 5.1.30 binaries with the following InnoDB configuration parameters:
  innodb_buffer_pool_size=100G
  innodb_max_dirty_pages_pct=50
  innodb_support_xa=0
  innodb_flush_log_at_trx_commit=1
  innodb_flush_method=O_DIRECT
and binary logging enabled (had to purge master logs very frequently!). 
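For anyone who hasn't seen iiBench, the table it populates looks roughly like the sketch below. The table, column, and index names are my approximation from memory of the published benchmark (not copied from my run), and the sample values are made up; the point is that each client issues multi-row INSERTs with random values, so every row dirties random leaf pages in three secondary indexes. Something like the last statement is what "purging master logs" amounts to.

  CREATE TABLE purchases_index (
    transactionid   INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    dateandtime     DATETIME,
    cashregisterid  INT NOT NULL,
    customerid      INT NOT NULL,
    productid       INT NOT NULL,
    price           FLOAT NOT NULL,
    KEY marketsegment   (price, customerid),
    KEY registersegment (cashregisterid, price, customerid),
    KEY pdc             (price, dateandtime, customerid)
  ) ENGINE=InnoDB;

  -- Each client inserts batches of rows with random values, e.g.:
  INSERT INTO purchases_index
    (dateandtime, cashregisterid, customerid, productid, price)
  VALUES
    (NOW(), 711, 92403, 10538, 21.99),
    (NOW(), 315, 55210, 80411, 3.49);

  -- Keep the binary logs from filling the disk between runs:
  PURGE MASTER LOGS BEFORE NOW();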

Here's a graph of inserts/sec and CPU usage during the single-process test ...


... and during the multi-process test ...

I'll leave any analysis of these results to those more knowledgeable than myself, but close with a question: why do the graphs of insert performance decrease linearly even when all data should be in memory?

09 January 2009

Twitter

With the new year and all the changes right now - namely, moving to the Olympic Peninsula (WA) - I've decided to try Twitter. I seem to have lots of ideas to blog about: I scratch them down, then never finish the post. I've seen posts by a few other bloggers I follow(ed) who said the same thing before migrating (and they haven't come back...), so I think I'll try the same.