RocketMQ-Flink

RocketMQ integration for Apache Flink. This module provides a RocketMQ source and sink, allowing a Flink job to read messages from RocketMQ topics and write messages back to a topic.

RocketMQSourceFunction

To use the RocketMQSourceFunction, construct an instance by passing a KeyValueDeserializationSchema instance and a Properties instance that contains the RocketMQ configuration:

RocketMQSourceFunction(KeyValueDeserializationSchema<OUT> schema, Properties props)

The RocketMQSourceFunction is based on the RocketMQ pull consumer mode and provides exactly-once reliability guarantees when checkpoints are enabled. Otherwise, the source doesn't provide any reliability guarantees.
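
For instance, assuming a Properties object named consumerProps that already holds the name server address, consumer group, and topic (as in the full example further below), a minimal construction sketch looks like this:

// Minimal sketch; consumerProps must contain the name server address, consumer group and topic.
// SimpleKeyValueDeserializationSchema maps each message key/value to the "id" and "address" fields of a Map.
RocketMQSourceFunction source =
        new RocketMQSourceFunction(new SimpleKeyValueDeserializationSchema("id", "address"), consumerProps);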

KeyValueDeserializationSchema

The main API for deserializing message keys and values is the org.apache.rocketmq.flink.legacy.common.serialization.KeyValueDeserializationSchema interface. rocketmq-flink includes a general-purpose implementation called SimpleKeyValueDeserializationSchema.

public interface KeyValueDeserializationSchema<T> extends ResultTypeQueryable<T>, Serializable {
    T deserializeKeyAndValue(byte[] key, byte[] value);
}
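
If you need a different record type, you can implement the interface yourself. Below is a rough sketch (the class name and logic are hypothetical, not part of the module) that turns each message into a single "key:value" String; note that the interface extends ResultTypeQueryable, so getProducedType() must be implemented as well:

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.rocketmq.flink.legacy.common.serialization.KeyValueDeserializationSchema;

import java.nio.charset.StandardCharsets;

// Hypothetical example schema: concatenates the message key and value into "key:value".
public class StringConcatDeserializationSchema implements KeyValueDeserializationSchema<String> {

    @Override
    public String deserializeKeyAndValue(byte[] key, byte[] value) {
        String k = key != null ? new String(key, StandardCharsets.UTF_8) : "";
        String v = value != null ? new String(value, StandardCharsets.UTF_8) : "";
        return k + ":" + v;
    }

    // Required because KeyValueDeserializationSchema extends ResultTypeQueryable<T>.
    @Override
    public TypeInformation<String> getProducedType() {
        return TypeInformation.of(String.class);
    }
}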

RocketMQSink

To use the RocketMQSink, construct an instance by passing a KeyValueSerializationSchema instance, a TopicSelector instance, and a Properties instance that contains the RocketMQ configuration:

RocketMQSink(KeyValueSerializationSchema<IN> schema, TopicSelector<IN> topicSelector, Properties props)

The RocketMQSink provides at-least-once reliability guarantees when checkpoints are enabled and withBatchFlushOnCheckpoint(true) is set. Otherwise, the sink's reliability guarantees depend on the RocketMQ producer's retry policy; in that case messages are sent synchronously by default, but you can switch to asynchronous sending by invoking withAsync(true).
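
For instance, a sink that flushes batched messages on every checkpoint can be built like this (a sketch; producerProps only needs the name server address):

// Minimal sketch; without checkpointing, call withAsync(true) instead to switch
// from synchronous to asynchronous sending.
RocketMQSink sink = new RocketMQSink(
        new SimpleKeyValueSerializationSchema("id", "province"),
        new DefaultTopicSelector("flink-sink2"),
        producerProps)
    .withBatchFlushOnCheckpoint(true);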

KeyValueSerializationSchema

The main API for serializing message keys and values is the org.apache.rocketmq.flink.legacy.common.serialization.KeyValueSerializationSchema interface. rocketmq-flink includes a general-purpose implementation called SimpleKeyValueSerializationSchema.

public interface KeyValueSerializationSchema<T> extends Serializable {

    byte[] serializeKey(T tuple);

    byte[] serializeValue(T tuple);
}
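
For other record types you can provide your own implementation. The following is a sketch (the class name is hypothetical) that serializes Map records by picking one field as the message key and another as the message body:

import org.apache.rocketmq.flink.legacy.common.serialization.KeyValueSerializationSchema;

import java.nio.charset.StandardCharsets;
import java.util.Map;

// Hypothetical example schema: serializes Map records using fixed key/value field names.
public class MapFieldSerializationSchema implements KeyValueSerializationSchema<Map<String, String>> {

    private final String keyField;
    private final String valueField;

    public MapFieldSerializationSchema(String keyField, String valueField) {
        this.keyField = keyField;
        this.valueField = valueField;
    }

    @Override
    public byte[] serializeKey(Map<String, String> tuple) {
        return tuple.getOrDefault(keyField, "").getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public byte[] serializeValue(Map<String, String> tuple) {
        return tuple.getOrDefault(valueField, "").getBytes(StandardCharsets.UTF_8);
    }
}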

TopicSelector

The main API for selecting the topic and tags is the org.apache.rocketmq.flink.legacy.common.selector.TopicSelector interface. rocketmq-flink includes two general-purpose implementations, DefaultTopicSelector and SimpleTopicSelector.

public interface TopicSelector<T> extends Serializable {

    String getTopic(T tuple);

    String getTag(T tuple);
}
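
A custom selector can route records dynamically. The sketch below (the class name is hypothetical) reads the target topic from a field of the record and falls back to a fixed topic and tag:

import org.apache.rocketmq.flink.legacy.common.selector.TopicSelector;

import java.util.Map;

// Hypothetical example selector: routes each Map record to the topic named in one of its
// fields, with a fixed fallback topic and a fixed tag.
public class FieldBasedTopicSelector implements TopicSelector<Map<String, String>> {

    private final String topicField;
    private final String defaultTopic;
    private final String tag;

    public FieldBasedTopicSelector(String topicField, String defaultTopic, String tag) {
        this.topicField = topicField;
        this.defaultTopic = defaultTopic;
        this.tag = tag;
    }

    @Override
    public String getTopic(Map<String, String> tuple) {
        return tuple.getOrDefault(topicField, defaultTopic);
    }

    @Override
    public String getTag(Map<String, String> tuple) {
        return tag;
    }
}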

Examples

The following example receives messages from RocketMQ brokers, processes them, and writes the results back to a RocketMQ topic.

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// enable checkpoint
env.enableCheckpointing(3000);

Properties consumerProps = new Properties();
consumerProps.setProperty(RocketMQConfig.NAME_SERVER_ADDR, "localhost:9876");
consumerProps.setProperty(RocketMQConfig.CONSUMER_GROUP, "c002");
consumerProps.setProperty(RocketMQConfig.CONSUMER_TOPIC, "flink-source2");

Properties producerProps = new Properties();
producerProps.setProperty(RocketMQConfig.NAME_SERVER_ADDR, "localhost:9876");

env.addSource(new RocketMQSourceFunction(new SimpleKeyValueDeserializationSchema("id", "address"), consumerProps))
    .name("rocketmq-source")
    .setParallelism(2)
    .process(new ProcessFunction<Map, Map>() {
        @Override
        public void processElement(Map in, Context ctx, Collector<Map> out) throws Exception {
            HashMap result = new HashMap();
            result.put("id", in.get("id"));
            String[] arr = in.get("address").toString().split("\\s+");
            result.put("province", arr[arr.length - 1]);
            out.collect(result);
        }
    })
    .name("upper-processor")
    .setParallelism(2)
    .addSink(new RocketMQSink(new SimpleKeyValueSerializationSchema("id", "province"),
        new DefaultTopicSelector("flink-sink2"), producerProps).withBatchFlushOnCheckpoint(true))
    .name("rocketmq-sink")
    .setParallelism(2);

try {
    env.execute("rocketmq-flink-example");
} catch (Exception e) {
    e.printStackTrace();
}

Configurations

The following configurations are all from the class org.apache.rocketmq.flink.legacy.RocketMQConfig.

Producer Configurations

NAME | DESCRIPTION | DEFAULT
---- | ----------- | -------
nameserver.address | name server address (required) | null
nameserver.poll.interval | name server poll topic info interval | 30000
brokerserver.heartbeat.interval | broker server heartbeat interval | 30000
producer.group | producer group | UUID.randomUUID().toString()
producer.retry.times | producer send messages retry times | 3
producer.timeout | producer send messages timeout | 3000

Consumer Configurations

NAME | DESCRIPTION | DEFAULT
---- | ----------- | -------
nameserver.address | name server address (required) | null
nameserver.poll.interval | name server poll topic info interval | 30000
brokerserver.heartbeat.interval | broker server heartbeat interval | 30000
consumer.group | consumer group (required) | null
consumer.topic | consumer topic (required) | null
consumer.tag | consumer topic tag | *
consumer.offset.reset.to | what to do when there is no initial offset on the server | latest/earliest/timestamp
consumer.offset.from.timestamp | the timestamp used when consumer.offset.reset.to=timestamp is set | System.currentTimeMillis()
consumer.offset.persist.interval | auto commit offset interval | 5000
consumer.pull.thread.pool.size | consumer pull thread pool size | 20
consumer.batch.size | consumer messages batch size | 32
consumer.delay.when.message.not.found | the delay time when messages were not found | 10
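
For reference, here is a sketch of overriding a few of these consumer options through the raw property keys listed above (the keys are taken verbatim from the table; RocketMQConfig also exposes constants for them, as shown in the example earlier):

Properties consumerProps = new Properties();
consumerProps.setProperty("nameserver.address", "localhost:9876"); // required
consumerProps.setProperty("consumer.group", "c002");               // required
consumerProps.setProperty("consumer.topic", "flink-source2");      // required
consumerProps.setProperty("consumer.tag", "tagA");                 // default: *
consumerProps.setProperty("consumer.offset.reset.to", "earliest"); // latest/earliest/timestamp
consumerProps.setProperty("consumer.batch.size", "64");            // default: 32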

RocketMQ SQL Connector

How to create a RocketMQ table

The example below shows how to create a RocketMQ table:

CREATE TABLE rocketmq_source (
  `user_id` BIGINT,
  `item_id` BIGINT,
  `behavior` STRING
) WITH (
  'connector' = 'rocketmq',
  'topic' = 'user_behavior',
  'consumerGroup' = 'behavior_consumer_group',
  'nameServerAddress' = '127.0.0.1:9876'
);

CREATE TABLE rocketmq_sink (
  `user_id` BIGINT,
  `item_id` BIGINT,
  `behavior` STRING
) WITH (
  'connector' = 'rocketmq',
  'topic' = 'user_behavior',
  'produceGroup' = 'behavior_produce_group',
  'nameServerAddress' = '127.0.0.1:9876'
);
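
As a sketch, the same DDL can be executed from a Java program through Flink's Table API, and the two tables can be wired together with a simple INSERT (standard Flink classes, nothing specific to this connector):

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

// Register the source and sink tables using the DDL shown above.
tableEnv.executeSql(
        "CREATE TABLE rocketmq_source (`user_id` BIGINT, `item_id` BIGINT, `behavior` STRING) WITH ("
                + "'connector' = 'rocketmq', 'topic' = 'user_behavior',"
                + " 'consumerGroup' = 'behavior_consumer_group', 'nameServerAddress' = '127.0.0.1:9876')");
tableEnv.executeSql(
        "CREATE TABLE rocketmq_sink (`user_id` BIGINT, `item_id` BIGINT, `behavior` STRING) WITH ("
                + "'connector' = 'rocketmq', 'topic' = 'user_behavior',"
                + " 'produceGroup' = 'behavior_produce_group', 'nameServerAddress' = '127.0.0.1:9876')");

// Copy records from the source topic to the sink topic.
tableEnv.executeSql("INSERT INTO rocketmq_sink SELECT user_id, item_id, behavior FROM rocketmq_source");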

Available Metadata

The following connector metadata can be accessed as metadata columns in a table definition.

The R/W column defines whether a metadata field is readable (R) and/or writable (W). Read-only columns must be declared VIRTUAL to exclude them during an INSERT INTO operation.

KEY | DATA TYPE | DESCRIPTION | R/W
--- | --------- | ----------- | ---
topic | STRING NOT NULL | Topic name of the RocketMQ record. | R

The extended CREATE TABLE example demonstrates the syntax for exposing these metadata fields:

CREATE TABLE rocketmq_source (
  `topic` STRING METADATA VIRTUAL,
  `user_id` BIGINT,
  `item_id` BIGINT,
  `behavior` STRING
) WITH (
  'connector' = 'rocketmq',
  'topic' = 'user_behavior',
  'consumerGroup' = 'behavior_consumer_group',
  'nameServerAddress' = '127.0.0.1:9876'
);
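
Reusing the tableEnv from the sketch above, the metadata column can then be read like any other field:

// The virtual `topic` column is available in queries but excluded from INSERT INTO.
tableEnv.executeSql("SELECT `topic`, user_id, behavior FROM rocketmq_source").print();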

License

Licensed to the Apache Software Foundation (ASF) under one or more contributor license agreements. See the NOTICE file distributed with this work for additional information regarding copyright ownership. The ASF licenses this file to you under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.