That's pretty cryptic, we admit. If you have to do this kind of work to restore data, your
backups are poorly designed. With a little planning, it's possible to prevent a situation
in which you're panicked and trying to figure out how sed works. Just back up each
table into its own file, or, better yet, back up the data and schema separately.
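If you follow that advice, restores stay simple. As a rough illustration (the paths mirror the examples later in this section, and the exact options depend on your setup), mysqldump's --tab option writes each table's schema and data to separate files:
$ mysqldump --tab=/tmp/backup/default/sakila sakila
This creates a payment.sql file holding the CREATE TABLE statement and a payment.txt file holding tab-delimited data, and likewise for every other table. The server itself writes the .txt files via SELECT INTO OUTFILE, so the directory must be writable by the MySQL server.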
Loading delimited files
If you dumped the data via SELECT INTO OUTFILE , you'll have to use LOAD DATA INFILE
with the same parameters to restore it. You can also use mysqlimport , which is a wrapper
around LOAD DATA INFILE . It relies on naming conventions to determine where to load
a file's data.
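For example, given the data file from the previous step (an illustrative path), mysqlimport strips the filename's extension and loads the data into the table of that name in the database you specify:
$ mysqlimport --local sakila /tmp/backup/default/sakila/payment.txt
Here payment.txt is loaded into sakila.payment. The --local option makes the client read the file and send it to the server; omit it if the file lives on the server host and you want the server to read it directly.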
We hope you dumped your schema, not just your data. If so, it's a SQL dump, and you
can use the techniques outlined in the previous section to load it.
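For instance, if each table's schema went into its own .sql file, something like this (path illustrative) recreates the table before you load its data:
$ mysql sakila < /tmp/backup/default/sakila/payment.sql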
There's a great optimization you can use with LOAD DATA INFILE . It must read directly
from a file, so you might think you have to decompress the file before loading it, which
is very slow and disk-intensive. However, there's a way around that, at least on systems
that support FIFO “named pipe” files, such as GNU/Linux. First, create a named pipe
and stream the decompressed data into it:
$ mkfifo /tmp/backup/default/sakila/payment.fifo
$ chmod 666 /tmp/backup/default/sakila/payment.fifo
$ gunzip -c /tmp/backup/default/sakila/payment.txt.gz \
    > /tmp/backup/default/sakila/payment.fifo &
Notice we're using a greater-than character (>) to redirect the decompressed output
into the payment.fifo file—not a pipe symbol, which creates anonymous pipes between
programs. The payment.fifo file is a named pipe, so there's no need for an anonymous
one.
The pipe will wait until some program opens it for reading from the other end, which
is why we backgrounded gunzip with the trailing ampersand (&). Here's the neat part:
the MySQL server can read the decompressed data from the pipe, just like any other
file. Don't forget to disable binary logging if appropriate:
mysql> SET SQL_LOG_BIN = 0; -- Optional
-> LOAD DATA INFILE '/tmp/backup/default/sakila/payment.fifo'
-> INTO TABLE sakila.payment;
Query OK, 16049 rows affected (2.29 sec)
Records: 16049 Deleted: 0 Skipped: 0 Warnings: 0
Once MySQL is done loading the data, gunzip will exit, and you can delete the named
pipe. You can use this same technique to load compressed files from within the MySQL
command-line client with the SOURCE command. The pt-fifo-split program in Percona
Toolkit can help you load large files in chunks rather than in a single large
transaction, which can be much more efficient.
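Here's a sketch of the chunked approach, based on pt-fifo-split's documented usage (the chunk size is arbitrary, and the file and table names are the same illustrative ones as above; by default the tool writes each chunk to the named pipe /tmp/pt-fifo-split):
$ pt-fifo-split --lines 500000 /tmp/backup/default/sakila/payment.txt &
$ while [ -e /tmp/pt-fifo-split ]; do mysql -e "LOAD DATA INFILE '/tmp/pt-fifo-split' INTO TABLE sakila.payment"; done
Each iteration loads one chunk in its own transaction, so the server never holds a huge transaction open, and a failure costs at most one chunk's worth of work.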
 