Warning

The LevelDB store has been deprecated and is no longer supported or recommended for use. The recommended store is KahaDB.

Synopsis

The Replicated LevelDB Store uses Apache ZooKeeper to pick a master from a set of broker nodes configured to replicate a LevelDB Store. It then synchronizes all slave LevelDB Stores with the master and keeps them up to date by replicating all updates from the master. The Replicated LevelDB Store uses the same data files as a LevelDB Store, so you can switch a broker configuration between replicated and non-replicated whenever you want.

Version Compatibility

Available as of ActiveMQ 5.9.0.

How it works

It uses Apache ZooKeeper to coordinate which node in the cluster becomes the master. The elected master broker node starts and accepts client connections. The other nodes go into slave mode, connect to the master, and synchronize their persistent state with it. The slave nodes do not accept client connections. All persistent operations are replicated to the connected slaves. If the master dies, the slave with the latest updates gets promoted to become the master. The failed node can then be brought back online and it will go into slave mode.

All messaging operations which require a sync to disk will wait for the update to be replicated to a quorum of the nodes before completing. For example, if you configure the store with replicas="3", the quorum size is (3/2 + 1) = 2: the master stores the update locally and waits for one slave to store it before reporting success. When a new master is elected, you also need at least a quorum of nodes online to be able to find a node with the latest updates. The node with the latest updates will become the new master. Therefore, it is recommended that you run with at least 3 replica nodes so that you can take one down without suffering a service outage.

Deployment Tips

Clients should use the Failover Transport to connect to the broker nodes in the replication cluster, e.g.
using a URL something like the following:

failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)

You should run at least 3 ZooKeeper server nodes so that the ZooKeeper service is highly available. Don't overcommit your ZooKeeper servers: an overworked ZooKeeper might start thinking live replication nodes have gone offline due to delays in processing their 'keep-alive' messages.

For best results, make sure you explicitly configure the hostname attribute with a hostname or IP address that the other cluster members can use to access the machine. The automatically determined hostname is not always accessible to the other cluster members, which results in slaves not being able to establish a replication session with the master.

Configuration

You can configure ActiveMQ to use a replicated LevelDB store for its persistence adapter like below:

  <broker brokerName="broker" ... >
    ...
    <persistenceAdapter>
      <replicatedLevelDB
        directory="activemq-data"
        replicas="3"
        bind="tcp://0.0.0.0:0"
        zkAddress="zoo1.example.org:2181,zoo2.example.org:2181,zoo3.example.org:2181"
        zkPassword="password"
        zkPath="/activemq/leveldb-stores"
        hostname="broker1.example.org"
        />
    </persistenceAdapter>
    ...
  </broker>

Replicated LevelDB Store Properties

All the broker nodes that are part of the same replication set should have matching brokerName XML attributes, and settings such as replicas, zkAddress, zkPassword, and zkPath should be consistent across the set.
Different replication sets can share the same zkPath as long as they use different brokerName values. The following configuration properties can be unique per node: bind and hostname.
The store also supports the same configuration properties as a standard LevelDB Store, but it does not support the pluggable storage lockers: see Standard LevelDB Store Properties.
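The quorum rule described in "How it works" can be sketched with a small helper. This is an illustration only; the quorumSize and tolerableFailures functions are hypothetical and not part of the ActiveMQ API:

```java
public class QuorumMath {
    // Quorum is a strict majority of the configured replicas.
    static int quorumSize(int replicas) {
        return replicas / 2 + 1;
    }

    // Number of nodes that can be down while the set stays writable.
    static int tolerableFailures(int replicas) {
        return replicas - quorumSize(replicas);
    }

    public static void main(String[] args) {
        // With replicas="3": an update needs 2 acks (master + 1 slave),
        // so one node can be down without a service outage.
        System.out.println(quorumSize(3));        // 2
        System.out.println(tolerableFailures(3)); // 1
        // With replicas="5": an update needs 3 acks; 2 nodes can be down.
        System.out.println(quorumSize(5));        // 3
    }
}
```

This is why a 2-node replication set gains little availability: its quorum is also 2, so losing either node halts the cluster.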
Caveats

The LevelDB store does not yet support storing data associated with Delay and Schedule Message Delivery. Those are stored in separate, non-replicated KahaDB data files. Unexpected results will occur if you use Delay and Schedule Message Delivery with the replicated LevelDB store, since that data will not be there when the master fails over to a slave.
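To tie the Deployment Tips together, the failover URL clients should use can be assembled from the list of broker hosts in the replication cluster. The buildFailoverUrl helper below is hypothetical (not an ActiveMQ API); only the failover:(...) URL format itself comes from the documentation above:

```java
import java.util.List;
import java.util.stream.Collectors;

public class FailoverUrl {
    // Builds a failover: URL listing every broker in the replication
    // cluster. Since slaves refuse client connections, the failover
    // transport ends up connected to whichever node is currently master.
    static String buildFailoverUrl(List<String> hosts, int port) {
        return hosts.stream()
                .map(h -> "tcp://" + h + ":" + port)
                .collect(Collectors.joining(",", "failover:(", ")"));
    }

    public static void main(String[] args) {
        String url = buildFailoverUrl(
                List.of("broker1", "broker2", "broker3"), 61616);
        System.out.println(url);
        // failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
    }
}
```

A JMS client would pass this URL to its connection factory; on master failure the transport automatically reconnects to the newly promoted node.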