Got fatal error 1236: 'Client requested master to start replication from impossible position' from master when reading data from binary log
This might happen, for example, if there is a power failure on the master and it goes down before flushing the last transactions to its binary log; the slave then asks for a position past the end of the file and its I/O thread stops. Depending on where you look, the same problem may show up in the slave's error log as "Error reading packet from server: Client requested master to start replication from impossible position (server_errno=1236)".
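If you want to confirm that this is really what happened before touching anything, compare the position the slave is asking for with the actual size of that log file on the master. A rough check (the datadir path below is a guess, adjust it to your log-bin setting):

# On the slave: which file and position is it trying to read?
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Master_Log_File|Read_Master_Log_Pos'

# On the master: how large is that file really?
ls -l /var/lib/mysql/box162-bin.013

If Read_Master_Log_Pos is larger than the file size, the slave really is asking for an "impossible position".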
The proper way to fix it
Start replication over from scratch. Follow the normal procedure: take the main database down (or lock the tables), record the master's log file and position, make a snapshot, copy it over to the slave, and start everything back up.
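If your tables are InnoDB, one way to do this without a full file-level snapshot is mysqldump with --master-data, which records the correct log file and position for you. A minimal sketch, assuming root access on both boxes and reusing the same host/user names as the CHANGE MASTER example below (the slave-host name and paths are placeholders):

# On the master: dump all databases plus the binlog coordinates.
# --single-transaction gives a consistent snapshot for InnoDB;
# --master-data=1 embeds a CHANGE MASTER TO statement with the
# correct MASTER_LOG_FILE / MASTER_LOG_POS in the dump itself.
mysqldump -u root -p --all-databases --single-transaction --master-data=1 > snapshot.sql

# Copy the dump to the slave (slave-host is a placeholder).
scp snapshot.sql slave-host:/tmp/

# On the slave: stop replication, load the dump, re-point at the master, go.
mysql -u root -p -e "STOP SLAVE"
mysql -u root -p < /tmp/snapshot.sql
mysql -u root -p -e "CHANGE MASTER TO MASTER_HOST='sunabc.net', MASTER_USER='repluser', MASTER_PASSWORD='replPass'; START SLAVE"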
A lazier (but clever) approach
Assumption: let's assume that your slave was up to date with the master when things went sour. This should usually be the case on a properly configured, not-overly-loaded replication setup.
First, let's look at the master status on the master:
mysql> show master status;
+----------------+----------+--------------+------------------+
| File           | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+----------------+----------+--------------+------------------+
| box162-bin.014 | 29510700 |              |                  |
+----------------+----------+--------------+------------------+
1 row in set (0.00 sec)
Okay, let's assume that when you rebooted the master, or brought it back up after the power failure, it started this new binary log, number 014.
My slave was stuck on number 013, and giving the "Client requested master to start replication from impossible position" error.
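For reference, SHOW SLAVE STATUS on the stuck slave looked roughly like this; the position number is illustrative, not copied from that box (on newer MySQL versions the 1236 message also shows up in the Last_IO_Error field):

mysql> SHOW SLAVE STATUS\G
...
          Master_Log_File: box162-bin.013
      Read_Master_Log_Pos: 98765432
         Slave_IO_Running: No
        Slave_SQL_Running: Yes
...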
How to fix? Well, again, with the big assumption that the slave was generally up-to-date anyway, I simply went to the slave and did:
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='sunabc.net', MASTER_USER='repluser', MASTER_PASSWORD='replPass',
    MASTER_LOG_FILE='box162-bin.014', MASTER_LOG_POS=4;
START SLAVE;
(MASTER_LOG_FILE should be the new file reported by SHOW MASTER STATUS on the master, and position 4 is the first valid position in a binary log file, i.e. the very beginning.)
And there we have it! The slave will start replicating on the new file. When power goes out and you have incomplete replication logs like this, we're really hoping that things are close enough in sync that this works.
Just run SHOW SLAVE STATUS on your slave and make sure Slave_IO_Running and Slave_SQL_Running both say Yes.
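On a healthy slave that output looks something like this (the Seconds_Behind_Master value is illustrative); if it is not healthy, the Last_Errno and Last_Error fields will tell you why:

mysql> SHOW SLAVE STATUS\G
...
         Slave_IO_Running: Yes
        Slave_SQL_Running: Yes
...
    Seconds_Behind_Master: 0
...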
If you get errors about duplicate keys or other SQL inconsistencies, then sorry, you were not as lucky this time. Fall back to the proper fix above and rebuild the slave from a fresh snapshot.