This is in regards to piping mysqlbinlog output from one MySQL server into the mysql client to execute on another; the post I linked above discusses doing so for switching to Amazon RDS. The basic strategy is to minimize downtime: load a database dump from the source onto the destination, then use mysqlbinlog on the source and the mysql client to feed updates from the source to the destination. The updates can be faster to load than a fresh dump; and when it's time to switch servers, it's a matter of stopping the database clients, shutting down the source mysqld, sending the final binlog updates, pointing the clients at the destination server, and turning the clients back on, as opposed to waiting for a whole dump to load while the clients are off.

It seems by sheer luck that I stumbled on a way to take care of everything. I save a copy of the interpreted binlog as it flows through the pipe:
mysqlbinlog ... | tee binlog-play.sql | mysql ...
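The tee in the middle is what makes this recoverable: it writes the interpreted statements to binlog-play.sql while still passing them downstream. A toy stand-in (echo playing mysqlbinlog's role, cat playing mysql's; the statement is made up) shows the copy surviving the pipe:

```shell
# Toy stand-in for the pipeline above: echo plays mysqlbinlog, cat plays mysql.
# tee saves a copy of everything flowing through, without disturbing the stream.
echo 'UPDATE t SET c = 1;' | tee binlog-play.sql | cat > /dev/null

# The saved copy is intact and ready for later inspection:
cat binlog-play.sql
```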
Then if I get an error message, mysql will tell me e.g. "Error ... at line 42100". Running "vim +42100 binlog-play.sql" lets me inspect the stream to see what went wrong in detail.
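Extracting that line number can be scripted too; a small sketch, using a made-up error message of the general shape the mysql client prints:

```shell
# Hypothetical error text of the shape the mysql client prints on stderr.
err='ERROR 1054 (42S22) at line 42100: Unknown column'

# Pull out the line number so we can jump straight to it in the saved copy.
line=$(printf '%s\n' "$err" | sed -n 's/.* at line \([0-9]*\).*/\1/p')
echo "$line"    # 42100

# Then inspect in context (needs the saved file, so left commented here):
# vim +"$line" binlog-play.sql
```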
Inside binlog-play.sql, the "# at 112294949" comments can be passed as e.g. "--start-position=112294949" to the next mysqlbinlog command, to retry the statement after I fix the problem. (Alternatively, end_log_pos seems to give the position of the next statement, if I need to skip the one that failed; e.g. I was testing out CREATE FUNCTION, and it was logged as "CREATE DEFINER=... FUNCTION", which RDS refuses.)
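To make the two positions concrete, here is a sketch over a fragment shaped like mysqlbinlog's comments (the positions, timestamp, and statement are all made up):

```shell
# A fragment shaped like mysqlbinlog's output (hypothetical values throughout).
cat > sample.sql <<'EOF'
# at 112294949
#111019 12:00:00 server id 1  end_log_pos 112295100  Query
UPDATE t SET c = 1 WHERE id = 7;
EOF

# "# at N" is the statement's own position: retry from here after fixing things.
start=$(sed -n 's/^# at \([0-9]*\)$/\1/p' sample.sql)
# end_log_pos is where the next event begins: resume from here to skip this one.
next=$(sed -n 's/.*end_log_pos \([0-9]*\).*/\1/p' sample.sql)
echo "$start $next"    # 112294949 112295100

# To retry: mysqlbinlog --start-position="$start" <binlog file> | mysql ...
# To skip:  mysqlbinlog --start-position="$next"  <binlog file> | mysql ...
```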
The final piece of the puzzle is that executing "FLUSH LOGS;" (or running "mysqladmin flush-logs") pushes mysqld on to the next binlog file, so you can safely play out the one you want. Once you've finished processing a file through mysqlbinlog, you can just remember the file boundary, and flush mysql's logs whenever you want to process the one it's presently writing to.
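The wrap-up might look like the sketch below. The server commands are commented out since they need a live mysqld, and the SHOW BINARY LOGS output is simulated with made-up file names and sizes, just to show the boundary bookkeeping:

```shell
# Rotate to a fresh binlog so the current file is closed and safe to play out:
#   mysqladmin flush-logs        # or: mysql -e 'FLUSH LOGS;'

# Simulated SHOW BINARY LOGS output (hypothetical names and sizes); after the
# flush, the just-closed file is the next-to-last entry in the list:
closed=$(printf 'mysql-bin.000041\t112295100\nmysql-bin.000042\t107\n' \
  | tail -n 2 | head -n 1 | cut -f1)
echo "$closed"    # mysql-bin.000041: the file to feed through mysqlbinlog next
```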
Wednesday, October 19, 2011
Notes on using mysqlbinlog for copying updates
I commented on this post, but for posterity: