23 August 2005

Distributed database, a few tables at a time

We've got six dedicated web servers reading from and writing to just one database server. All fine and good - until something goes wrong. Over the past few months, lots of little things have been going wrong ... some developer runs a "bad" query that locks the tables the web servers need; a new ad campaign generates far more traffic than any previous one, and the database can't handle that many requests at once. Of course, I would love it if the web developers never accessed the database without first showing me their SQL statements, and if we could afford the server downtime needed to completely restructure the tables (convert them to InnoDB and add much better indexes). However, I came up with another way to alleviate some (not all) of the stress on the database...

Distribute the most critical tables to a tiny mysqld process on each web server, so that the Apache/PHP processes no longer depend on the central database server to generate those pages.
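As a rough sketch of what that setup could look like: the master writes changes to its binary log, and each web server's local mysqld replicates only the critical tables from it. The database, table, and server IDs below are placeholders for illustration, not our actual configuration.

    # master: /etc/my.cnf (sketch - names are placeholders)
    [mysqld]
    server-id = 1
    log-bin   = mysql-bin

    # local mysqld on each web server: /etc/my.cnf
    [mysqld]
    server-id          = 11               # must be unique per server
    replicate-do-table = site.pages       # replicate only the critical tables
    replicate-do-table = site.templates
    read-only          = 1                # all writes still go to the master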

This diagram shows the path of replication within MySQL. The web servers draw their data from either their local mysqld process or from the master, depending on the table; all writes are directed to the master.
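In the PHP layer, the routing can be as simple as choosing a connection per table: reads on the locally replicated tables use the local mysqld, everything else (and every write) goes to the master. A minimal sketch, with hypothetical host names, credentials, and table names:

    <?php
    // Tables replicated to the local mysqld on each web server
    // (hypothetical names, for illustration only).
    $local_tables = array('pages', 'templates', 'nav');

    // Connection for a read on $table: local copy if the table is
    // replicated here, otherwise the central master.
    function db_for_read($table, $local_tables) {
        $host = in_array($table, $local_tables) ? 'localhost' : 'db-master';
        $conn = mysql_connect($host, 'webapp', 'secret');
        mysql_select_db('site', $conn);
        return $conn;
    }

    // Writes always go straight to the master, never to a local replica.
    function db_for_write() {
        $conn = mysql_connect('db-master', 'webapp', 'secret');
        mysql_select_db('site', $conn);
        return $conn;
    }

    // Example: serve a page from the local copy, log the hit on the master.
    $read = db_for_read('pages', $local_tables);
    $page = mysql_fetch_assoc(mysql_query("SELECT * FROM pages WHERE id = 1", $read));

    $write = db_for_write();
    mysql_query("INSERT INTO hits (page_id) VALUES (1)", $write);
    ?>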
