Reading around on mailing lists, in blog comments, on forums and so on about effective caching methods, I’ve seen people dismiss any form of caching, such as memcache, with the argument that MySQL already has a query cache.
This is what I have to say about that:
mysql> show status like "QCache_hits";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Qcache_hits   | 0     |
+---------------+-------+
1 row in set (0.01 sec)

mysql> select txt from a where id=2;
+------+
| txt  |
+------+
| data |
+------+
1 row in set (0.00 sec)

mysql> select txt from a where id=2;
+------+
| txt  |
+------+
| data |
+------+
1 row in set (0.00 sec)

mysql> show status like "QCache_hits";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Qcache_hits   | 1     |
+---------------+-------+
1 row in set (0.01 sec)

mysql> insert into a values (5,"other unrelated data");
Query OK, 1 row affected (0.00 sec)

mysql> select txt from a where id=2;
+------+
| txt  |
+------+
| data |
+------+
1 row in set (0.00 sec)

mysql> show status like "QCache_hits";
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| Qcache_hits   | 1     |
+---------------+-------+
1 row in set (0.00 sec)
In plain English: they’re not really comparable. They don’t even do the same thing. The MySQL query cache is invalidated as soon as the table is modified in any shape or form, be it by an UPDATE, INSERT or DELETE. That’s why the last SELECT above didn’t increment Qcache_hits: the unrelated INSERT flushed the cached result. This is not a problem with memcache, where no unrelated action will change the state of an already cached object.

Sure, the MySQL query cache is a really good feature to have, despite its limitations and quirks. However, if you’re working on a typical OLTP site and find yourself at a point where you are considering (or are more or less forced into) implementing caching, you probably have frequent enough table modifications that the query cache won’t buy you much.
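To make that concrete, here is a minimal sketch of the read-through pattern I mean, using the python-memcached client. The key format, the five-minute expiry and the query_db() helper are assumptions for illustration only, not anything the original setup prescribes:

import memcache

# Assumed local memcached instance; adjust host/port for your setup.
mc = memcache.Client(['127.0.0.1:11211'])

def get_txt(row_id):
    key = 'a:txt:%d' % row_id
    txt = mc.get(key)                 # cache hit: no database round trip at all
    if txt is None:
        # query_db() is a hypothetical helper standing in for your DB layer.
        txt = query_db('select txt from a where id = %s', row_id)
        mc.set(key, txt, time=300)    # keep the object for 5 minutes
    return txt

# An unrelated INSERT into table `a` does not touch this cached object;
# with the query cache, the same INSERT would flush every cached SELECT
# on the table.

The cached object lives for however long you decide it should, regardless of what else happens to the table in the meantime.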
Also, if you run MySQL in a multi-server setup, the query cache is not distributed in any way. So if your application or load balancer sends the same query to another instance the next time around, you lose out again. This is not true with multiple instances of memcached, thanks to its brilliantly simple means of clustering.
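That clustering is nothing more than client-side hashing over a server list, so every application server that uses the same list looks for a given key on the same memcached instance. A quick sketch with python-memcached; the host names are made up:

import memcache

# Each key is hashed to one of the listed servers, so the cache is shared
# across all web/application nodes configured with the same list.
mc = memcache.Client(['cache1:11211', 'cache2:11211', 'cache3:11211'])

mc.set('a:txt:2', 'data')   # stored on whichever server the key hashes to
mc.get('a:txt:2')           # any node with the same server list finds it there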
There are obviously more arguments for this, but I believe this is enough to get the point across.