When a {% pop bookie %} crashes, all {% pop ledgers %} on that bookie become under-replicated. In order to bring all ledgers in your BookKeeper cluster back to full replication, you'll need to recover the data from any offline bookies. There are two ways to recover bookies' data:
You can manually recover failed bookies using the `bookkeeper` command-line tool. You need to specify the `shell recover` option, along with the ZooKeeper connection string and the IP and port of the failed bookie. Here's an example:
```shell
$ bookkeeper-server/bin/bookkeeper shell recover \
    zk1.example.com:2181 \ # IP and port for ZooKeeper
    192.168.1.10:3181      # IP and port for the failed bookie
```
If you wish, you can also specify which bookie you'd like to rereplicate to. Here's an example:
```shell
$ bookkeeper-server/bin/bookkeeper shell recover \
    zk1.example.com:2181 \ # IP and port for ZooKeeper
    192.168.1.10:3181 \    # IP and port for the failed bookie
    192.168.1.11:3181      # IP and port for the bookie to rereplicate to
```
When you initiate a manual recovery process, the following happens:
AutoRecovery is a process that:
AutoRecovery can be run in two ways:
You can start up AutoRecovery using the `autorecovery` command of the `bookkeeper` CLI tool.
```shell
$ bookkeeper-server/bin/bookkeeper autorecovery
```
The most important thing to ensure when starting up AutoRecovery is that the ZooKeeper connection string specified by the `zkServers` parameter points to the right ZooKeeper cluster.
If you start up AutoRecovery on a machine that is already running a bookie, then the AutoRecovery process will run alongside the bookie on a separate thread.
You can also start up AutoRecovery on a fresh machine if you'd like to create a dedicated cluster of AutoRecovery nodes.
There are a handful of AutoRecovery-related configs in the `bk_server.conf` configuration file. For a listing of those configs, see AutoRecovery settings.
You can disable AutoRecovery at any time, for example during maintenance. Disabling AutoRecovery ensures that bookies' data isn't unnecessarily rereplicated when a bookie is only taken down for a short period of time, for example when the bookie is being updated or its configuration is being changed.
You can disable AutoRecovery using the `bookkeeper` CLI tool:
```shell
$ bookkeeper-server/bin/bookkeeper shell autorecovery -disable
```
Once disabled, you can re-enable AutoRecovery using the `enable` shell command:
```shell
$ bookkeeper-server/bin/bookkeeper shell autorecovery -enable
```
AutoRecovery has two components:

1. The auditor (the `Auditor` class) is a singleton node that watches bookies to see if they fail and creates rereplication tasks for the ledgers on failed bookies.
2. The replication worker (the `ReplicationWorker` class) runs on each bookie and executes rereplication tasks provided by the auditor.

Both of these components run as threads in the `AutoRecoveryMain` process, which runs on each bookie in the cluster. All recovery nodes participate in leader election, using ZooKeeper, to decide which node becomes the auditor. Nodes that fail to become the auditor watch the elected auditor and run the election process again if they see that the auditor node has failed.
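The standard ZooKeeper leader-election recipe uses ephemeral sequential znodes: each candidate creates one, the candidate holding the lowest sequence number wins, and survivors re-elect when the winner's znode disappears. The following is a minimal simulation of that recipe in Python, not BookKeeper's actual implementation; the data and helper are purely illustrative.

```python
# Simulated ZooKeeper-style leader election (illustrative sketch only; the
# real logic lives inside BookKeeper's AutoRecovery process).

def elect_auditor(candidates):
    """Each candidate holds an ephemeral sequential znode; the candidate
    with the lowest sequence number becomes the auditor."""
    return min(candidates, key=lambda node: node[1])

# Three recovery nodes, each paired with the sequence number ZooKeeper
# assigned to its ephemeral node under the election znode.
nodes = [("bookie-a", 12), ("bookie-b", 10), ("bookie-c", 11)]
auditor = elect_auditor(nodes)
print(auditor[0])  # bookie-b holds the lowest sequence number

# If the auditor dies, its ephemeral node disappears and the surviving
# nodes run the election again.
nodes.remove(auditor)
print(elect_auditor(nodes)[0])  # bookie-c takes over
```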
The auditor watches all bookies in the cluster that are registered with ZooKeeper. Bookies register with ZooKeeper at startup. If the bookie crashes or is killed, the bookie's registration in ZooKeeper disappears and the auditor is notified of the change in the list of registered bookies.
When the auditor sees that a bookie has disappeared, it immediately scans the complete {% pop ledger %} list to find ledgers that have data stored on the failed bookie. Once it has a list of ledgers for that bookie, the auditor will publish a rereplication task for each ledger under the `/underreplicated/` znode in ZooKeeper.
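The auditor's scan can be sketched as a filter over the ledger metadata: keep every ledger whose ensemble includes the failed bookie, then publish one task per match. The helper and data below are hypothetical simplifications, not BookKeeper's API.

```python
# Illustrative sketch of the auditor's ledger scan (simplified data model).

def ledgers_to_rereplicate(ledger_ensembles, failed_bookie):
    """ledger_ensembles maps ledger ID -> set of bookies storing that ledger.
    Returns the IDs of ledgers with data on the failed bookie."""
    return sorted(
        ledger_id
        for ledger_id, ensemble in ledger_ensembles.items()
        if failed_bookie in ensemble
    )

ledgers = {
    1: {"192.168.1.10:3181", "192.168.1.11:3181", "192.168.1.12:3181"},
    2: {"192.168.1.11:3181", "192.168.1.12:3181", "192.168.1.13:3181"},
    3: {"192.168.1.10:3181", "192.168.1.12:3181", "192.168.1.13:3181"},
}
tasks = ledgers_to_rereplicate(ledgers, "192.168.1.10:3181")

# One rereplication task per under-replicated ledger, published as a znode.
print([f"/underreplicated/ledger-{lid}" for lid in tasks])
```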
Each replication worker watches for tasks being published by the auditor on the `/underreplicated/` znode in ZooKeeper. When a new task appears, the replication worker will try to get a lock on it. If it cannot acquire the lock, it will try the next entry. The locks are implemented using ZooKeeper ephemeral znodes.
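The try-lock loop above can be sketched as follows. In BookKeeper the lock is an ephemeral znode whose creation fails if it already exists; here a plain dict stands in for ZooKeeper, so this is a behavioral sketch, not real client code.

```python
# Illustrative sketch of a replication worker claiming a task. A shared
# dict stands in for ZooKeeper; creating a lock entry models creating an
# ephemeral znode (which fails if the znode already exists).

def try_claim_task(tasks, locks, worker):
    """Walk the published tasks in order; claim the first unlocked one.
    Returns the claimed ledger ID, or None if every task is locked."""
    for ledger_id in tasks:
        if ledger_id not in locks:     # lock znode does not exist yet
            locks[ledger_id] = worker  # create the ephemeral lock
            return ledger_id
    return None

tasks = [1, 3, 7]
locks = {1: "worker-a"}  # ledger 1 is already claimed by another worker
claimed = try_claim_task(tasks, locks, "worker-b")
print(claimed)  # worker-b skips locked ledger 1 and claims ledger 3
```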
The replication worker will scan through the rereplication task's ledger for fragments of which its local bookie is not a member. When it finds fragments matching this criterion, it will replicate the entries of those fragments to the local bookie. If, after this process, the ledger is fully replicated, the ledger's entry under `/underreplicated/` is deleted, and the lock is released. If there is a problem replicating, or there are still fragments in the ledger that are underreplicated (because the local bookie is already part of the ensemble for those fragments), then the lock is simply released.
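The fragment scan reduces to a membership check per fragment: only fragments whose ensemble does not already include the local bookie are candidates for copying. A minimal sketch, using a simplified `(fragment_id, ensemble)` model rather than BookKeeper's classes:

```python
# Illustrative sketch of the replication worker's fragment scan.

def fragments_needing_copy(fragments, local_bookie):
    """fragments is a list of (fragment_id, ensemble) pairs. Returns the
    IDs of fragments the local bookie is not yet a member of."""
    return [fid for fid, ensemble in fragments if local_bookie not in ensemble]

ledger_fragments = [
    (0, {"bookie-1", "bookie-2"}),
    (1, {"bookie-2", "bookie-3"}),  # local bookie already a member
    (2, {"bookie-1", "bookie-4"}),
]
todo = fragments_needing_copy(ledger_fragments, "bookie-3")
print(todo)  # fragments 0 and 2 get copied locally; fragment 1 is skipped
# If the ledger is fully replicated afterwards, the worker deletes the
# ledger's /underreplicated/ entry; otherwise it just releases the lock.
```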
If the replication worker finds a fragment which needs rereplication, but does not have a defined endpoint (i.e. the final fragment of a ledger currently being written to), it will wait for a grace period before attempting rereplication. If the fragment needing rereplication still does not have a defined endpoint, the ledger is fenced and rereplication then takes place.
This avoids the situation in which a client is writing to a ledger and one of the bookies goes down, but the client has not written an entry to that bookie before rereplication takes place. The client could continue writing to the old fragment, even though the ensemble for the fragment had changed. This could lead to data loss. Fencing prevents this scenario from happening. In the normal case, the client will try to write to the failed bookie within the grace period, and will have started a new fragment before rereplication starts.
You can configure this grace period using the `openLedgerRereplicationGracePeriod` parameter.
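The grace-period rule can be summarized as a three-way decision. The `openLedgerRereplicationGracePeriod` parameter is real; the helper below is a hypothetical simplification of the worker's behavior, not BookKeeper code.

```python
# Illustrative sketch of the grace-period rule for open-ended fragments.

def decide_action(fragment_has_end, waited_grace_period):
    """A fragment with no defined endpoint is still being written to."""
    if fragment_has_end:
        return "rereplicate"
    if not waited_grace_period:
        return "wait"  # give the writer time to start a new fragment
    return "fence-then-rereplicate"  # no writer showed up: fence the ledger

print(decide_action(True, False))   # closed fragment: rereplicate immediately
print(decide_action(False, False))  # open fragment: wait out the grace period
print(decide_action(False, True))   # still open after waiting: fence first
```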
The ledger rereplication process happens in these steps: