% mv query.log query.log.1
% mysqladmin flush-logs
The first few times you execute the command sequence, the initial mv commands are unneeded because the corresponding query.log.N files do not yet exist.
Successive executions of that command sequence rotate query.log through the names query.log.1, query.log.2, and query.log.3; then query.log.3 is overwritten and its contents are lost. To maintain an archive copy, include the rotated files in your filesystem backups before removing them.
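The rotation just described can be collected into a short script. A minimal sketch, assuming a three-file archive depth as above; the LOGDIR location and the mysqladmin guard are assumptions, so point LOGDIR at the directory that actually holds query.log:

```shell
#!/bin/sh
# Rotate the general query log through three numbered archives.
# LOGDIR is an assumption; set it to your server's log directory.
LOGDIR=${LOGDIR:-/var/lib/mysql}

rotate_query_log() {
    # Shift older archives up one slot. The [ -f ] guards skip
    # files that do not exist yet (as on the first few runs).
    # The oldest archive, query.log.3, is overwritten and lost.
    [ -f "$LOGDIR/query.log.2" ] && mv "$LOGDIR/query.log.2" "$LOGDIR/query.log.3"
    [ -f "$LOGDIR/query.log.1" ] && mv "$LOGDIR/query.log.1" "$LOGDIR/query.log.2"
    [ -f "$LOGDIR/query.log" ]   && mv "$LOGDIR/query.log"   "$LOGDIR/query.log.1"
    # Ask the server to close the old log and open a fresh query.log.
    # Guarded so the sketch also runs where mysqladmin is absent.
    command -v mysqladmin >/dev/null 2>&1 && mysqladmin flush-logs
    return 0
}
```

Each call shifts every archive up one slot, so running the function on a schedule keeps at most three generations on disk.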
Rotating the binary log
The server creates binary logfiles in numbered sequence. To expire them, you need only
arrange that it removes files when they're old enough. Several factors affect how many
files the server creates and maintains:
• The frequency of server restarts and log-flushing operations: one new file is generated each time either of those occurs.
• The size to which files can grow: larger sizes lead to fewer files. To control this size,
set the max_binlog_size system variable.
• How old files are permitted to become: longer expiration times lead to more files.
To control this age, set the expire_logs_days system variable. The server makes
expiration checks at server startup and when it opens a new binary logfile.
The following settings enable the binary log, set the maximum file size to 4GB, and
expire files after four days:
[mysqld]
log-bin=binlog
max_binlog_size=4G
expire_logs_days=4
You can also remove binary logfiles manually with the PURGE BINARY LOGS statement.
For example, to remove all files up to and including the one named binlog.001028, do
this:
PURGE BINARY LOGS TO 'binlog.001028';
If your server is a replication master, don't be too aggressive about removing binary
logfiles. No file should be removed until you're certain its contents have been completely
transmitted to all slaves.
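One way to find a safe purge boundary is to check, on each slave, which binary logfile it is currently reading (the Relay_Master_Log_File value shown by SHOW SLAVE STATUS) and purge only up to the earliest of those names. A minimal sketch of that comparison; the helper name and the example file names are hypothetical:

```shell
#!/bin/sh
# Hypothetical helper: given the binary-log file name each slave is
# currently reading, print the earliest one. That name is the purge
# boundary; no file at or after it should be removed.
earliest_binlog() {
    # Zero-padded numeric suffixes sort correctly with a text sort.
    printf '%s\n' "$@" | sort | head -n 1
}

# Example: with slaves at these (hypothetical) positions, only files
# older than binlog.001025 are safe to remove:
#   earliest_binlog binlog.001028 binlog.001025 binlog.001031
```

Given the result, you would issue PURGE BINARY LOGS TO with that file name, which leaves the named file and everything newer in place.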
Automating logfile rotation
To make it easier to perform a rotation operation, put the commands that implement it in a shell script. To run the rotation automatically, arrange to execute the script from a job scheduler such as cron. The script will need to access