the log_slow_slave_statements option, so you can see which queries are slow when
they're replicated. Percona Server and MariaDB let you enable and disable this without
restarting the server.
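In Percona Server, for example, toggling it at runtime might look something like the following (a sketch; check your server version's variable list to confirm it is dynamic there):

mysql> SET GLOBAL log_slow_slave_statements = ON;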
There's not much you can tweak or tune on a replica that can't keep up, aside from buying faster disks and CPUs (solid-state drives can help tremendously; see Chapter 9 for details). Most of the options involve disabling things that cause extra work on the replica, to reduce its load. One easy change is to configure InnoDB to flush changes to disk less frequently, so transactions commit more quickly. You can accomplish this by setting innodb_flush_log_at_trx_commit to 2. You can also disable binary logging on the replica, set innodb_locks_unsafe_for_binlog to 1, and set delay_key_write to ALL for MyISAM. These settings trade safety for speed, though. If you promote a replica to be a master, make sure to reset them to safe values.
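Taken together, the relaxed settings might look like this in the replica's configuration file (a sketch: binary logging is disabled simply by omitting log-bin, and all of these should be reverted before promoting the replica):

   [mysqld]
   # Flush the InnoDB log to disk about once per second, not at every commit
   innodb_flush_log_at_trx_commit = 2
   # Relax InnoDB's next-key locking (unsafe for statement-based binary logging)
   innodb_locks_unsafe_for_binlog = 1
   # Buffer MyISAM index writes in memory for all tables
   delay_key_write = ALL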
Don't duplicate the expensive part of writes
Rearchitecting your application and/or optimizing your queries is often the best way
to help the replicas keep up. Try to minimize the amount of work that has to be duplicated through your system. Any write that's expensive on the master will be replayed on every replica. If you can move the work off the master onto a replica, only one of the replicas will have to do the work. You can then push the write results back up to the master, for example, with LOAD DATA INFILE.
Here's an example. Suppose you have a very large table that you summarize into a
smaller table for frequent processing:
mysql> REPLACE INTO main_db.summary_table (col1, col2, ...)
    -> SELECT col1, sum(col2, ...)
    -> FROM main_db.enormous_table GROUP BY col1;
If you perform that operation on the master, every replica will have to repeat the enor-
mous GROUP BY query. If you do enough of this, the replicas will not be able to keep up.
Moving the number crunching to one of the replicas can help. On the replica, perhaps
in a special database reserved for the purpose of avoiding conflicts with the data being
replicated from the master, you can run the following:
mysql> REPLACE INTO summary_db.summary_table (col1, col2, ...)
    -> SELECT col1, sum(col2, ...)
    -> FROM main_db.enormous_table GROUP BY col1;
Now you can use SELECT INTO OUTFILE, followed by LOAD DATA INFILE on the master, to move the results back up to the master. Voilà: the duplicated work is reduced to a simple LOAD DATA INFILE. If you have N replicas, you have just saved N - 1 enormous GROUP BY queries.
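Concretely, the round trip might look like the following (the file path and column names are placeholders; the server's secure_file_priv setting must permit writing to and reading from that directory). On the replica:

mysql> SELECT col1, col2 INTO OUTFILE '/tmp/summary.txt'
    -> FROM summary_db.summary_table;

Copy the file to the master, then load it there:

mysql> LOAD DATA INFILE '/tmp/summary.txt'
    -> REPLACE INTO TABLE main_db.summary_table (col1, col2);

The REPLACE modifier makes the load overwrite rows with matching unique keys, mirroring the REPLACE INTO that built the summary.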
The problem with this strategy is dealing with stale data. Sometimes it's hard to get
consistent results by reading on the replica and writing on the master (a problem we
address in detail in the following chapters). If it's hard to do the read on the replica,
you can simplify and still save your replicas a lot of work. If you separate the REPLACE