This mode is risky: if the broker restarts or goes down, the whole service becomes unavailable. It is not recommended for production environments, but it can be used for local testing.
```shell
### start the Name Server
$ nohup sh mqnamesrv &

### check whether the Name Server started successfully
$ tail -f ~/logs/rocketmqlogs/namesrv.log
The Name Server boot success...
```
```shell
### start the Broker
$ nohup sh bin/mqbroker -n localhost:9876 &

### check whether the Broker started successfully, eg: Broker's IP is 192.168.1.2, Broker's name is broker-a
$ tail -f ~/logs/rocketmqlogs/broker.log
The broker[broker-a, 192.168.1.2:10911] boot success...
```
The cluster contains Master nodes only, with no Slave nodes, eg: 2 or 3 Master nodes. The advantages and disadvantages of this mode are shown below:
Advantages: simple configuration; an outage or restart of a single Master node does not impact applications. With a RAID10 disk configuration, messages are not lost even if a machine goes down and cannot recover, because of RAID10's high reliability (asynchronous flush to disk loses a small number of messages; synchronous flush loses none). This mode delivers the highest performance.
Disadvantages: during a machine's downtime, messages that have not been consumed on that machine cannot be subscribed to until it recovers, which impacts message real-time delivery.
The NameServer should be started before the Brokers. In production environments, we recommend starting 3 NameServer nodes for high availability. The startup command is the same on each node, as shown below:
```shell
### start the Name Server
$ nohup sh mqnamesrv &

### check whether the Name Server started successfully
$ tail -f ~/logs/rocketmqlogs/namesrv.log
The Name Server boot success...
```
```shell
### start the first Master on machine A, eg: NameServer's IP is 192.168.1.1
$ nohup sh mqbroker -n 192.168.1.1:9876 -c $ROCKETMQ_HOME/conf/2m-noslave/broker-a.properties &

### start the second Master on machine B, eg: NameServer's IP is 192.168.1.1
$ nohup sh mqbroker -n 192.168.1.1:9876 -c $ROCKETMQ_HOME/conf/2m-noslave/broker-b.properties &
...
```
The above commands assume a single NameServer. For a multi-NameServer cluster, the `-n` argument of the broker start command takes multiple addresses joined by semicolons.
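As a minimal sketch (the IP addresses below are illustrative), the semicolon-separated list must be quoted so the shell does not treat `;` as a command separator:

```shell
# Illustrative list of three NameServer addresses; the quotes keep ';'
# from being interpreted by the shell as a command separator.
NAMESRV_LIST='192.168.1.1:9876;192.168.1.2:9876;192.168.1.3:9876'

# The broker would then be started with, for example:
#   nohup sh mqbroker -n "$NAMESRV_LIST" -c $ROCKETMQ_HOME/conf/2m-noslave/broker-a.properties &
echo "$NAMESRV_LIST"
```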
Each Master node is paired with one Slave node, so this mode has multiple Master-Slave groups. HA uses asynchronous replication, and the Slave lags slightly (at the millisecond level) behind the Master. The advantages and disadvantages of this mode are shown below:
Advantages: very few messages are lost even if a disk is broken, and message real-time delivery is not affected. Consumers can still consume from the Slave when the Master is down; this process is transparent to users and requires no human intervention. Performance is almost equal to the multi-Master mode.
Disadvantages: a small amount of message data is lost when the Master goes down and its disk is broken.
```shell
### start the Name Server
$ nohup sh mqnamesrv &

### check whether the Name Server started successfully
$ tail -f ~/logs/rocketmqlogs/namesrv.log
The Name Server boot success...
```
```shell
### start the first Master on machine A, eg: NameServer's IP is 192.168.1.1
$ nohup sh mqbroker -n 192.168.1.1:9876 -c $ROCKETMQ_HOME/conf/2m-2s-async/broker-a.properties &

### start the second Master on machine B, eg: NameServer's IP is 192.168.1.1
$ nohup sh mqbroker -n 192.168.1.1:9876 -c $ROCKETMQ_HOME/conf/2m-2s-async/broker-b.properties &

### start the first Slave on machine C, eg: NameServer's IP is 192.168.1.1
$ nohup sh mqbroker -n 192.168.1.1:9876 -c $ROCKETMQ_HOME/conf/2m-2s-async/broker-a-s.properties &

### start the second Slave on machine D, eg: NameServer's IP is 192.168.1.1
$ nohup sh mqbroker -n 192.168.1.1:9876 -c $ROCKETMQ_HOME/conf/2m-2s-async/broker-b-s.properties &
```
Each Master node is paired with one Slave node, so this mode has multiple Master-Slave groups. HA uses synchronous double write: the application's write operation is reported successful only after both the Master and the Slave have written it successfully. The advantages and disadvantages of this mode are shown below:
Advantages: neither data nor service has a single point of failure. There is no message latency even if the Master is down; both service availability and data availability are very high.
Disadvantages: performance is about 10% lower than the asynchronous replication mode, and sending latency is slightly higher. In the current version, there is no automatic Master-Slave switchover when the Master goes down.
```shell
### start the Name Server
$ nohup sh mqnamesrv &

### check whether the Name Server started successfully
$ tail -f ~/logs/rocketmqlogs/namesrv.log
The Name Server boot success...
```
```shell
### start the first Master on machine A, eg: NameServer's IP is 192.168.1.1
$ nohup sh mqbroker -n 192.168.1.1:9876 -c $ROCKETMQ_HOME/conf/2m-2s-sync/broker-a.properties &

### start the second Master on machine B, eg: NameServer's IP is 192.168.1.1
$ nohup sh mqbroker -n 192.168.1.1:9876 -c $ROCKETMQ_HOME/conf/2m-2s-sync/broker-b.properties &

### start the first Slave on machine C, eg: NameServer's IP is 192.168.1.1
$ nohup sh mqbroker -n 192.168.1.1:9876 -c $ROCKETMQ_HOME/conf/2m-2s-sync/broker-a-s.properties &

### start the second Slave on machine D, eg: NameServer's IP is 192.168.1.1
$ nohup sh mqbroker -n 192.168.1.1:9876 -c $ROCKETMQ_HOME/conf/2m-2s-sync/broker-b-s.properties &
```
The above Brokers are matched into Master-Slave pairs by specifying the same BrokerName. The Master's BrokerId must be 0, and a Slave's BrokerId must be larger than 0. Besides, a Master can have multiple Slaves, each with a distinct BrokerId. $ROCKETMQ_HOME indicates RocketMQ's install directory; users need to set this environment variable.
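As an illustrative sketch (the values are examples, not a complete configuration), the pairing for the synchronous double-write mode might look like this: `broker-a.properties` on the Master and `broker-a-s.properties` on the Slave share the same `brokerName` and differ in `brokerId` and `brokerRole`:

```properties
# broker-a.properties (Master)
brokerClusterName=DefaultCluster
brokerName=broker-a
brokerId=0
brokerRole=SYNC_MASTER

# broker-a-s.properties (Slave of broker-a)
brokerClusterName=DefaultCluster
brokerName=broker-a
brokerId=1
brokerRole=SLAVE
```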
Notes:
- Execute commands with: `./mqadmin {command} {args}`
- Almost all commands need `-n` to indicate the NameServer address, in the format `ip:port`.
- Almost all commands can print help info with `-h`.
- If a command contains both the Broker address (`-b`) and the cluster name (`-c`), the Broker address takes precedence. If a command does not contain a Broker address, it is executed on all hosts in the cluster. Only one Broker host is supported. The `-b` format is `ip:port`; the default port is 10911.
- There are many commands under the tools module, but not all of them can be used; only the commands registered in MQAdminStartup are available. You can modify this class to add or define your own commands.
- Because of version updates, a few commands may not be updated in time. Please refer to the source code directly when an error occurs.
Question description: executing mqadmin throws the exception below after deploying a RocketMQ cluster.
```
org.apache.rocketmq.remoting.exception.RemotingConnectException: connect to <null> failed
```
Solution: execute `export NAMESRV_ADDR=ip:9876` (where ip is the NameServer's IP address), then execute the mqadmin commands.
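A minimal sketch (the IP below is illustrative):

```shell
# Point mqadmin at the NameServer (illustrative IP address).
export NAMESRV_ADDR='192.168.1.1:9876'

# Verify the variable is set before rerunning the mqadmin command.
echo "$NAMESRV_ADDR"
```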
Question description: a producer produces messages; consumer A can consume them but consumer B cannot, and the RocketMQ console prints:
```
Not found the consumer group consume stats, because return offset table is empty, maybe the consumer not consume any message.
```
Solution: make sure the producer and the consumers use the same version of rocketmq-client.
Question description: when a new consumer group starts, it consumes from the current offset and does not fetch the oldest messages.
Solution: RocketMQ's default policy is to consume from the latest offset, that is, to skip the oldest messages. If you want to consume the oldest messages, you need to set org.apache.rocketmq.client.consumer.DefaultMQPushConsumer#setConsumeFromWhere. The following are three common configurations:
```java
consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_LAST_OFFSET);
consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_FIRST_OFFSET);
consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_TIMESTAMP);
```
In some cases, a consumer needs to reset its offset back a day or two. If the Master Broker has limited memory, reading that far back in the CommitLog causes a high IO load, which then impacts other message reads and writes on that broker. When slaveReadEnable=true is set and a consumer's offset lag exceeds accessMessageInMemoryMaxRatio=40%, the Master Broker will recommend that the consumer consume from the Slave Broker to lower the Master Broker's IO.
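A sketch of the corresponding broker configuration entries (40 is the default ratio; treat the values as illustrative):

```properties
# Allow consumers to be redirected to the Slave when the Master lags.
slaveReadEnable=true
# If a consumer's lag exceeds this percentage of in-memory messages,
# the Master recommends consuming from the Slave (default: 40).
accessMessageInMemoryMaxRatio=40
```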
- A spin lock is recommended for asynchronous disk flush and a reentrant lock for synchronous disk flush; the configuration item is `useReentrantLockWhenPutMessage`, and the default is `false`.
- Enabling `transientStorePoolEnable` is recommended when using asynchronous disk flush.
- Closing `transferMsgByHeap` is recommended to improve fetch efficiency.
- Set a somewhat larger `sendMessageThreadPoolNums` when using synchronous disk flush.
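Put together, these recommendations might look like the following broker configuration fragment (a sketch, not a definitive tuning; the thread pool size is an illustrative value to be tuned for your hardware):

```properties
# Asynchronous flush: spin lock (default), off-heap store pool,
# and direct (non-heap) message transfer.
flushDiskType=ASYNC_FLUSH
useReentrantLockWhenPutMessage=false
transientStorePoolEnable=true
transferMsgByHeap=false

# For synchronous flush, use a reentrant lock and a larger send thread pool:
# flushDiskType=SYNC_FLUSH
# useReentrantLockWhenPutMessage=true
# sendMessageThreadPoolNums=64
```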
You will usually see the following log message after sending a message with the RocketMQ SDK.
```
SendResult [sendStatus=SEND_OK, msgId=0A42333A0DC818B4AAC246C290FD0000, offsetMsgId=0A42333A00002A9F000000000134F1F5, messageQueue=MessageQueue [topic=topicTest1, BrokerName=mac.local, queueId=3], queueOffset=4]
```
The client calls MessageClientIDSetter.createUniqIDBuffer() to generate the unique msgId.
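The offsetMsgId in the log above is generated by the broker and encodes, in hexadecimal, the broker's IP address (4 bytes), port (4 bytes), and CommitLog offset (8 bytes). A small sketch decoding the example value under that assumed layout:

```shell
# Decode the offsetMsgId from the SendResult above:
# chars 0-7 = broker IP, 8-15 = port, 16-31 = CommitLog offset (all hex).
offsetMsgId='0A42333A00002A9F000000000134F1F5'
ip_hex=${offsetMsgId:0:8}
ip="$((16#${ip_hex:0:2})).$((16#${ip_hex:2:2})).$((16#${ip_hex:4:2})).$((16#${ip_hex:6:2}))"
port=$((16#${offsetMsgId:8:8}))
offset=$((16#${offsetMsgId:16}))
echo "broker=$ip:$port commitLogOffset=$offset"
```

Note that the decoded port is 10911, the broker's default listen port mentioned above.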