Handle replicator instance start time during upgrades better

During cluster upgrades from 3.2 to 3.3, when the instance start time switched
from always being `0` to an actual timestamp, replication jobs would crash as
endpoints were upgraded. A replication job could start while an endpoint still
emitted `0`; once the endpoint began returning a non-`0` value, the next
checkpoint attempt would crash.

After the crash, jobs restart and continue where they left off without
rewinding. However, they make a logging mess while they crash: all four
workers exit with a `{checkpoint_commit_failure,...}` error. This commit makes
the checkpoint logic ignore mismatches if one of the instance start times is `0`.
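
The relaxed comparison can be sketched as follows. This is a minimal
illustration in Python, not the actual Erlang implementation; the helper name
`start_times_compatible` is hypothetical:

```python
def start_times_compatible(recorded, current):
    """Decide whether a checkpoint may proceed, given the instance start
    time recorded when the job started and the value the endpoint reports
    now.

    A mismatch is tolerated when either side is 0, since 0 was the
    placeholder value emitted by pre-3.3 endpoints; otherwise the values
    must match exactly.
    """
    if recorded == 0 or current == 0:
        return True
    return recorded == current
```

With this rule, a job that recorded `0` before the upgrade no longer crashes
when the upgraded endpoint starts reporting a real timestamp, while a genuine
mismatch between two non-`0` values is still detected.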