I spent some time looking at the mysqldump command in detail this afternoon with a view to backing up some in-house MySQL databases which were configured in a simple replicated environment (single master, single slave). I knew that executing something like:
# mysqldump -u user --password=xxxx dbname > dumpfile.sql
on the replication slave would do most of what I needed, but I wanted to be sure I was getting a consistent backup of the data nonetheless. At first, I thought I might have to stop MySQL before and after the dump, but then I recalled, of course, that MySQL actually needs to be running to do a dump. My Subversion hat was duly removed for the afternoon!
Then I started to think about locking and flushing databases and tables: a simple read lock taken on the database before the dump and released afterwards should do the trick. Fortunately, this is taken care of for me by the mysqldump utility through the --opt parameter. This parameter is a shorthand way of enabling several other options, most notably --lock-tables, which locks the tables in the database being dumped for the duration of the dump itself.
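For anyone who likes to see the locking spelled out, the options bundled into --opt (as listed in the MySQL documentation) can be passed explicitly instead. The script below is just a sketch that builds the equivalent command; the database, account, and file names are the placeholders from above, and nothing here is specific to my setup:

```shell
#!/bin/sh
# Hypothetical wrapper: spell out what --opt is shorthand for,
# per the MySQL docs, so the locking behaviour is explicit.
DB="dbname"              # assumed database name
USER="user"              # assumed account
DUMPFILE="dumpfile.sql"  # assumed output file

# --lock-tables is the piece that gives a consistent snapshot here;
# the rest are output/performance options bundled into --opt.
OPTS="--add-drop-table --add-locks --create-options --disable-keys \
--extended-insert --lock-tables --quick --set-charset"

# Print rather than run, so the command can be reviewed first.
echo "mysqldump -u $USER -p $OPTS $DB > $DUMPFILE"
```

Running mysqldump with that explicit option list should behave the same as passing --opt on its own.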
Interestingly though, replication is also effectively suspended while mysqldump holds its locks, and this briefly misled me into believing that all databases had been locked rather than just the one being dumped.
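To reassure myself about the scope of the locks, a quick check from a second session does the job. This is only a sketch: it prints the mysql invocation one would run while the dump is in progress, and the database and account names are assumptions carried over from the command above:

```shell
#!/bin/sh
# While the dump runs, ask the server which tables are actually locked.
# SHOW OPEN TABLES reports In_use > 0 for tables held by a lock, and on
# the slave SHOW SLAVE STATUS shows Seconds_Behind_Master climbing while
# the SQL thread waits on those locks.
SQL="SHOW OPEN TABLES FROM dbname WHERE In_use > 0; SHOW SLAVE STATUS\G"

# Print the command for review rather than executing it here.
echo "mysql -u user -p -e \"$SQL\""
```

If only the tables from the dumped database show up as in use, the other databases on the slave were never locked at all.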
So, in essence, the simple command I started out with above does everything I need.
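Since the point of all this was unattended backups, a small wrapper that timestamps each dump file is the obvious next step. This is a hypothetical sketch: the destination directory and naming scheme are my own assumptions, not anything from the setup above:

```shell
#!/bin/sh
# Hypothetical nightly wrapper around the dump command: timestamped
# file names so older backups are not overwritten.
DB="dbname"                  # assumed database name
DEST="/var/backups/mysql"    # assumed backup directory
STAMP=$(date +%Y%m%d-%H%M%S) # e.g. 20061103-020000
DUMPFILE="$DEST/$DB-$STAMP.sql"

# Print the command for review; a cron job would execute it instead.
echo "mysqldump -u user -p --opt $DB > $DUMPFILE"
```

Dropped into cron on the slave, something along these lines gives a dated, consistent dump each night without ever touching the master.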