Installing the connector follows the same process as other Kafka connectors. For now, we will follow the Kafka Connect guide for Manual Installation.
In summary, we will use the standalone worker for this example.
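As a rough sketch of the manual installation step, assuming the connector is built from source with Gradle (suggested by the `build/libs` path used below); the exact build task and paths may differ in your environment:

```sh
# Build the connector jar from the project root; Gradle places the artifact under build/libs/.
./gradlew build

# The standalone worker locates the jar through plugin.path (configured below),
# so it is enough for plugin.path to point at this directory.
ls build/libs/
```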
Modify the standalone worker configuration (for example, `config/connect-standalone.properties` in the Kafka installation) so that `plugin.path` points to the directory containing the connector jar:

```properties
plugin.path=/Users/jhuynh/Pivotal/geode-kafka-connector/build/libs/
```
Create a sink connector configuration (for example, a `connect-geode-sink.properties` file) with the following properties:

```properties
name=geode-kafka-sink
connector.class=GeodeKafkaSink
tasks.max=1
topicToRegions=[someTopicToSinkFrom:someRegionToConsume]
locators=localHost[10334]
```
Create a source connector configuration (for example, a `connect-geode-source.properties` file) with the following properties:

```properties
name=geode-kafka-source
connector.class=GeodeKafkaSource
tasks.max=1
regionToTopics=[someRegionToSourceFrom:someTopicToConsume]
locators=localHost[10334]
```
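With those files in place, the standalone worker is started with the worker configuration followed by the connector configurations. A minimal sketch, assuming the file names used above and a Kafka installation directory as the working directory:

```sh
# Start a standalone Connect worker with the worker config plus both connector configs.
# File names match the examples above; adjust paths for your environment.
bin/connect-standalone.sh config/connect-standalone.properties \
    connect-geode-sink.properties \
    connect-geode-source.properties
```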
GeodeKafkaSink Properties

Property | Required | Description | Default |
---|---|---|---|
locators | no | A comma separated list of locators the connector should connect to. Not strictly required, but the default only works when a locator is running locally on the default port | localhost[10334] |
topicToRegions | yes | A comma separated list of "one topic to many regions" bindings. Each binding is surrounded by brackets, for example `[topicName:regionName],[anotherTopic:regionName,anotherRegion]` | None. This must be set in the sink connector properties |
security-client-auth-init | no | Fully qualified name of a class that implements the AuthInitialize interface | None |
nullValuesMeanRemove | no | If true, a SinkRecord with a null value is converted to an operation similar to region.remove instead of putting a null value into the region | true |
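For illustration, a sink configuration sketch that binds one topic to a single region and a second topic to two regions, and spells out the null-handling default; the topic and region names are placeholders:

```properties
name=geode-kafka-sink
connector.class=GeodeKafkaSink
tasks.max=1
# One topic to one region, and a second topic fanned out to two regions.
topicToRegions=[ordersTopic:ordersRegion],[auditTopic:auditRegion,archiveRegion]
locators=localhost[10334]
# Treat null-valued SinkRecords as removes rather than putting nulls (this is the default).
nullValuesMeanRemove=true
```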
GeodeKafkaSource Properties

Property | Required | Description | Default |
---|---|---|---|
locators | no | A comma separated list of locators the connector should connect to. Not strictly required, but the default only works when a locator is running locally on the default port | localhost[10334] |
regionToTopics | yes | A comma separated list of "one region to many topics" mappings. Each mapping is surrounded by brackets, for example `[regionName:topicName],[anotherRegion:topicName,anotherTopic]` | None. This must be set in the source connector properties |
security-client-auth-init | no | Fully qualified name of a class that implements the AuthInitialize interface | None |
geodeConnectorBatchSize | no | Maximum number of records to return on each poll | 100 |
geodeConnectorQueueSize | no | Maximum number of entries in the connector queue before the Geode CQ listeners sharing the task queue back up | 10000 |
loadEntireRegion | no | Whether to queue up all entries that currently exist in a region, so that existing region data is copied to the topic. These entries are replayed whenever a task has to re-register a CQ | true |
durableClientIdPrefix | no | Prefix string that each task appends its identifier to when registering as a durable client. If empty, tasks do not register as durable clients | "" |
durableClientTimeout | no | How long, in milliseconds, values persist in Geode's durable queue before the queue is invalidated | 60000 |
cqPrefix | no | Prefix string used to identify the connector's CQs on a Geode server | cqForGeodeKafka |
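Similarly, a source configuration sketch that maps one region to a single topic and a second region to two topics, registers tasks as durable clients, and spells out the default batch and queue sizes; the region, topic, and prefix names are placeholders:

```properties
name=geode-kafka-source
connector.class=GeodeKafkaSource
tasks.max=1
# One region to one topic, and a second region fanned out to two topics.
regionToTopics=[ordersRegion:ordersTopic],[auditRegion:auditTopic,archiveTopic]
locators=localhost[10334]
# Copy data already in the regions when the CQs are (re)registered.
loadEntireRegion=true
# A non-empty prefix makes each task register as a durable client.
durableClientIdPrefix=geodeKafkaDurable
durableClientTimeout=60000
# Defaults shown explicitly for reference.
geodeConnectorBatchSize=100
geodeConnectorQueueSize=10000
```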
Extra Details