<div class="wiki-content maincontent"><structured-macro ac:macro-id="9bd9729a-2a60-451b-8c3b-fcf0ab16dfcb" ac:name="warning" ac:schema-version="1"><parameter ac:name="title">Warning</parameter><rich-text-body><p>The LevelDB store has been deprecated and is no longer supported or recommended for use. The recommended store is <link><page ri:content-title="KahaDB"></page></link></p></rich-text-body></structured-macro><h2>Synopsis</h2><p>The Replicated LevelDB Store uses Apache ZooKeeper to pick a master from a set of broker nodes configured to replicate a LevelDB Store. Then synchronizes all slave LevelDB Stores with the master keeps them up to date by replicating all updates from the master.</p><p>The Replicated LevelDB Store uses the same data files as a LevelDB Store, so you can switch a broker configuration between replicated and non replicated whenever you want.</p><structured-macro ac:macro-id="2430bb56-885a-4fe1-a01a-dad92e2e86c4" ac:name="info" ac:schema-version="1"><parameter ac:name="title">Version Compatibility</parameter><rich-text-body><p>Available as of ActiveMQ 5.9.0.</p></rich-text-body></structured-macro><h2>How it works.</h2><p><image><attachment ri:filename="replicated-leveldb-store.png"></attachment></image></p><p>It uses <a shape="rect" href="http://zookeeper.apache.org/">Apache ZooKeeper</a> to coordinate which node in the cluster becomes the master. The elected master broker node starts and accepts client connections. The other nodes go into slave mode and connect the the master and synchronize their persistent state /w it. The slave nodes do not accept client connections. All persistent operations are replicated to the connected slaves. If the master dies, the slaves with the latest update gets promoted to become the master. The failed node can then be brought back online and it will go into slave mode.</p><p>All messaging operations which require a sync to disk will wait for the update to be replicated to a quorum of the nodes before completing. So if you configure the store with <code>replicas="3"</code> then the quorum size is <code>(3/2+1)=2</code>. The master will store the update locally and wait for 1 other slave to store the update before reporting success. Another way to think about it is that store will do synchronous replication to a quorum of the replication nodes and asynchronous replication replication to any additional nodes.</p><p>When a new master is elected, you also need at least a quorum of nodes online to be able to find a node with the lastest updates. The node with the lastest updates will become the new master. Therefore, it's recommend that you run with at least 3 replica nodes so that you can take one down without suffering a service outage.</p><h3>Deployment Tips</h3><p>Clients should be using the <link><page ri:content-title="Failover Transport Reference"></page><plain-text-link-body>Failover Transport</plain-text-link-body></link> to connect to the broker nodes in the replication cluster. e.g. using a URL something like the following:</p><structured-macro ac:macro-id="6f40f660-59d9-47bd-9621-f557dcb71e22" ac:name="code" ac:schema-version="1"><plain-text-body>failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
<p>You should run at least 3 ZooKeeper server nodes so that the ZooKeeper service is highly available. Don't overcommit your ZooKeeper servers. An overworked ZooKeeper might start thinking live replication nodes have gone offline due to delays in processing their 'keep-alive' messages.</p><p>For best results, make sure you explicitly configure the <code>hostname</code> attribute with a hostname or IP address that the other cluster members can use to reach this node. The automatically determined hostname is not always accessible by the other cluster members, which results in slaves not being able to establish a replication session with the master.</p><h2>Configuration</h2><p>You can configure ActiveMQ to use the replicated LevelDB store for its persistence adapter as shown below:</p><structured-macro ac:macro-id="28174a1b-24fc-496a-8a73-c48b7004b0a6" ac:name="code" ac:schema-version="1"><plain-text-body> &lt;broker brokerName="broker" ... &gt;
...
&lt;persistenceAdapter&gt;
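   &lt;!-- Descriptive notes (illustrative; see the property tables below for details):
        bind="tcp://0.0.0.0:0" lets the elected master pick a dynamic port for the replication protocol,
        and hostname should be a name or IP address that the other cluster members can reach. --&gt;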
&lt;replicatedLevelDB
directory="activemq-data"
replicas="3"
bind="tcp://0.0.0.0:0"
zkAddress="zoo1.example.org:2181,zoo2.example.org:2181,zoo3.example.org:2181"
zkPassword="password"
zkPath="/activemq/leveldb-stores"
hostname="broker1.example.org"
/&gt;
&lt;/persistenceAdapter&gt;
...
&lt;/broker&gt;
</plain-text-body></structured-macro><h3>Replicated LevelDB Store Properties</h3><p>All the broker nodes that are part of the same replication set should have matching <code>brokerName</code> XML attributes. The following configuration properties should be the same on all the broker nodes that are part of the same replication set:</p><table><tbody><tr><th colspan="1" rowspan="1"><p>property name</p></th><th colspan="1" rowspan="1"><p>default value</p></th><th colspan="1" rowspan="1"><p>Comments</p></th></tr><tr><td colspan="1" rowspan="1"><p><code>replicas</code></p></td><td colspan="1" rowspan="1"><p><code>3</code></p></td><td colspan="1" rowspan="1"><p>The number of nodes that will exist in the cluster. At least (replicas/2)+1 nodes must be online to avoid a service outage.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>securityToken</code></p></td><td colspan="1" rowspan="1"><p>&#160;</p></td><td colspan="1" rowspan="1"><p>A security token which must match on all replication nodes for them to accept each other's replication requests.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>zkAddress</code></p></td><td colspan="1" rowspan="1"><p><code>127.0.0.1:2181</code></p></td><td colspan="1" rowspan="1"><p>A comma-separated list of ZooKeeper servers.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>zkPassword</code></p></td><td colspan="1" rowspan="1"><p>&#160;</p></td><td colspan="1" rowspan="1"><p>The password to use when connecting to the ZooKeeper server.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>zkPath</code></p></td><td colspan="1" rowspan="1"><p><code>/default</code></p></td><td colspan="1" rowspan="1"><p>The path to the ZooKeeper directory where Master/Slave election information will be exchanged.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>zkSessionTimeout</code></p></td><td colspan="1" rowspan="1"><p><code>2s</code></p></td><td colspan="1" rowspan="1"><p>How quickly a node failure will be detected by ZooKeeper. (Prior to 5.11 this attribute was misspelled <code>zkSessionTmeout</code>.)</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>sync</code></p></td><td colspan="1" rowspan="1"><p><code>quorum_mem</code></p></td><td colspan="1" rowspan="1"><p>Controls where updates must reside before being considered complete. This setting is a comma separated list of the following options: <code>local_mem</code>, <code>local_disk</code>, <code>remote_mem</code>, <code>remote_disk</code>, <code>quorum_mem</code>, <code>quorum_disk</code>. If you combine two settings for a target, the stronger guarantee is used. For example, configuring <code>local_mem, local_disk</code> is the same as just using <code>local_disk</code>. <code>quorum_mem</code> is the same as <code>local_mem, remote_mem</code> and <code>quorum_disk</code> is the same as <code>local_disk, remote_disk</code>.</p></td></tr></tbody></table><p>Different replication sets can share the same <code>zkPath</code> as long as they have different <code>brokerName</code> values.</p>
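<p>For example, if every update should be persisted to disk on a quorum of the nodes before a messaging operation completes, the <code>sync</code> attribute could be set as follows (an illustrative fragment; the remaining attributes from the configuration example above are omitted for brevity):</p><structured-macro ac:name="code" ac:schema-version="1"><plain-text-body> &lt;replicatedLevelDB
     directory="activemq-data"
     replicas="3"
     sync="quorum_disk"
     zkAddress="zoo1.example.org:2181,zoo2.example.org:2181,zoo3.example.org:2181"
     /&gt;
</plain-text-body></structured-macro>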
<p>The following configuration properties can be unique per node:</p><table><tbody><tr><th colspan="1" rowspan="1"><p>property name</p></th><th colspan="1" rowspan="1"><p>default value</p></th><th colspan="1" rowspan="1"><p>Comments</p></th></tr><tr><td colspan="1" rowspan="1"><p><code>bind</code></p></td><td colspan="1" rowspan="1"><p><code>tcp://0.0.0.0:61619</code></p></td><td colspan="1" rowspan="1"><p>When this node becomes a master, it will bind to the configured address and port to service the replication protocol. Dynamic ports are also supported; just configure <code>tcp://0.0.0.0:0</code>.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>hostname</code></p></td><td colspan="1" rowspan="1"><p>&#160;</p></td><td colspan="1" rowspan="1"><p>The host name used to advertise the replication service when this node becomes the master. If not set, it will be determined automatically.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>weight</code></p></td><td colspan="1" rowspan="1"><p>1</p></td><td colspan="1" rowspan="1"><p>Of the replication nodes that have the latest update, the one with the highest weight will become the master. Used to give preference to some nodes over others when electing a master.</p></td></tr></tbody></table><p>The store also supports the same configuration properties as a standard <link><page ri:content-title="LevelDB Store"></page></link>, but it does not support the pluggable storage lockers:</p><h3>Standard LevelDB Store Properties</h3><table><tbody><tr><th colspan="1" rowspan="1"><p>property name</p></th><th colspan="1" rowspan="1"><p>default value</p></th><th colspan="1" rowspan="1"><p>Comments</p></th></tr><tr><td colspan="1" rowspan="1"><p><code>directory</code></p></td><td colspan="1" rowspan="1"><p><code>LevelDB</code></p></td><td colspan="1" rowspan="1"><p>The directory which the store will use to hold its data files. The store will create the directory if it does not already exist.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>readThreads</code></p></td><td colspan="1" rowspan="1"><p><code>10</code></p></td><td colspan="1" rowspan="1"><p>The number of concurrent IO read threads to allow.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>logSize</code></p></td><td colspan="1" rowspan="1"><p><code>104857600</code> (100 MB)</p></td><td colspan="1" rowspan="1"><p>The max size (in bytes) of each data log file before log file rotation occurs.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>verifyChecksums</code></p></td><td colspan="1" rowspan="1"><p><code>false</code></p></td><td colspan="1" rowspan="1"><p>Set to true to force checksum verification of all data that is read from the file system.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>paranoidChecks</code></p></td><td colspan="1" rowspan="1"><p><code>false</code></p></td><td colspan="1" rowspan="1"><p>Make the store error out as soon as possible if it detects internal corruption.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>indexFactory</code></p></td><td colspan="1" rowspan="1"><p><code>org.fusesource.leveldbjni.JniDBFactory, org.iq80.leveldb.impl.Iq80DBFactory</code></p></td><td colspan="1" rowspan="1"><p>The factory classes to use when creating the LevelDB indexes.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>indexMaxOpenFiles</code></p></td><td colspan="1" rowspan="1"><p><code>1000</code></p></td><td colspan="1" rowspan="1"><p>Number of open files that can be used by the index.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>indexBlockRestartInterval</code></p></td><td colspan="1" rowspan="1"><p><code>16</code></p></td><td colspan="1" rowspan="1"><p>Number of keys between restart points for delta encoding of keys.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>indexWriteBufferSize</code></p></td><td colspan="1" rowspan="1"><p><code>6291456</code> (6 MB)</p></td><td colspan="1" rowspan="1"><p>Amount of index data to build up in memory before converting to a sorted on-disk file.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>indexBlockSize</code></p></td><td colspan="1" rowspan="1"><p><code>4096</code> (4 KB)</p></td>
<td colspan="1" rowspan="1"><p>The size of index data packed per block.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>indexCacheSize</code></p></td><td colspan="1" rowspan="1"><p><code>268435456</code> (256 MB)</p></td><td colspan="1" rowspan="1"><p>The maximum amount of off-heap memory to use to cache index blocks.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>indexCompression</code></p></td><td colspan="1" rowspan="1"><p><code>snappy</code></p></td><td colspan="1" rowspan="1"><p>The type of compression to apply to the index blocks. Can be <code>snappy</code> or <code>none</code>.</p></td></tr><tr><td colspan="1" rowspan="1"><p><code>logCompression</code></p></td><td colspan="1" rowspan="1"><p><code>none</code></p></td><td colspan="1" rowspan="1"><p>The type of compression to apply to the log records. Can be <code>snappy</code> or <code>none</code>.</p></td></tr></tbody></table><structured-macro ac:macro-id="64752425-9c11-44f7-9161-355edf9feb3b" ac:name="warning" ac:schema-version="1"><parameter ac:name="title">Caveats</parameter><rich-text-body><p>The LevelDB store does not yet support storing data associated with <link><page ri:content-title="Delay and Schedule Message Delivery"></page></link>. Those are stored in separate, non-replicated KahaDB data files. Unexpected results will occur if you use <link><page ri:content-title="Delay and Schedule Message Delivery"></page></link> with the replicated LevelDB store, since that data will not be there when the master fails over to a slave.</p></rich-text-body></structured-macro></div>