= Camel-Kafka-connector AWS2 SQS Source
This is an example of the Camel-Kafka-connector AWS2-SQS source connector.
== Standalone
=== What is needed
- An AWS SQS queue
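If you don't have a queue yet and have the AWS CLI installed and configured (an assumption of this sketch, not a requirement), you can create one matching the configuration used below:
[source]
----
aws sqs create-queue --queue-name camel-1 --region eu-west-1
----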
=== Running Kafka
[source]
----
$KAFKA_HOME/bin/zookeeper-server-start.sh $KAFKA_HOME/config/zookeeper.properties
$KAFKA_HOME/bin/kafka-server-start.sh $KAFKA_HOME/config/server.properties
$KAFKA_HOME/bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic mytopic
----
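You can verify that the topic was created by listing the topics:
[source]
----
$KAFKA_HOME/bin/kafka-topics.sh --list --bootstrap-server localhost:9092
----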
=== Download the connector package
Download the connector package tar.gz and extract the content to a directory. In this example we'll use `/home/oscerd/connectors/`:
[source]
----
> cd /home/oscerd/connectors/
> wget https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector/camel-aws2-sqs-kafka-connector/0.10.1/camel-aws2-sqs-kafka-connector-0.10.1-package.tar.gz
> tar -xzf camel-aws2-sqs-kafka-connector-0.10.1-package.tar.gz
----
=== Configuring Kafka Connect
You'll need to set up the `plugin.path` property in your Kafka Connect configuration.
Open `$KAFKA_HOME/config/connect-standalone.properties` and set the `plugin.path` property to your chosen location:
[source]
----
...
plugin.path=/home/oscerd/connectors
...
----
=== Setup the connectors
Open the AWS2 SQS configuration file at `$EXAMPLES/aws2-sqs/aws2-sqs-source/config/CamelAWS2SQSSourceConnector.properties`
[source]
----
name=CamelAWS2SQSSourceConnector
connector.class=org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSourceConnector
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
# maximum time in milliseconds a single poll of the Camel endpoint may take
camel.source.maxPollDuration=10000
topics=mytopic
camel.source.path.queueNameOrArn=camel-1
# keep messages on the queue after reading; SQS will re-deliver them once the visibility timeout expires
camel.source.endpoint.deleteAfterRead=false
camel.component.aws2-sqs.accessKey=xxxx
camel.component.aws2-sqs.secretKey=yyyy
camel.component.aws2-sqs.region=eu-west-1
----
and add the correct credentials for AWS.
=== Running the example
Run the kafka connect with the SQS Source connector:
[source]
----
$KAFKA_HOME/bin/connect-standalone.sh $KAFKA_HOME/config/connect-standalone.properties $EXAMPLES/aws2-sqs/aws2-sqs-source/config/CamelAWS2SQSSourceConnector.properties
----
Now send a message to the `camel-1` queue, for example through the AWS Console.
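Alternatively, if you have the AWS CLI installed and configured (an assumption, the console works just as well), you can send a test message from the command line:
[source]
----
# look up the queue URL, then send a test message
aws sqs send-message \
  --queue-url "$(aws sqs get-queue-url --queue-name camel-1 --query QueueUrl --output text)" \
  --message-body "hello from SQS"
----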
In a different terminal, run the Kafka console consumer and you should see the messages from the SQS queue arriving through the Kafka broker:
[source]
----
$KAFKA_HOME/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic mytopic --from-beginning
<your message 1>
<your message 2>
...
----
== Openshift
=== What is needed
- An AWS SQS queue
- An Openshift instance
=== Running Kafka using Strimzi Operator
First we install the Strimzi operator and use it to deploy the Kafka broker and Kafka Connect into our OpenShift project.
We need to create security objects as part of the installation, so it is necessary to switch to an admin user.
If you use Minishift, you can do it with the following command:
[source,bash,options="nowrap"]
----
oc login -u system:admin
----
We will use OpenShift project `myproject`.
If it doesn't exist yet, you can create it using the following command:
[source,bash,options="nowrap"]
----
oc new-project myproject
----
If the project already exists, you can switch to it with:
[source,bash,options="nowrap"]
----
oc project myproject
----
We can now install the Strimzi operator into this project:
[source,bash,options="nowrap",subs="attributes"]
----
oc apply -f https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.20.1/strimzi-cluster-operator-0.20.1.yaml
----
Next we will deploy a Kafka broker cluster and a Kafka Connect cluster and then create a Kafka Connect image with the SQS connectors installed:
[source,bash,options="nowrap",subs="attributes"]
----
# Deploy a single node Kafka broker
oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.20.1/examples/kafka/kafka-persistent-single.yaml
# Deploy a single instance of Kafka Connect with no plug-in installed
oc apply -f https://github.com/strimzi/strimzi-kafka-operator/raw/0.20.1/examples/connect/kafka-connect-s2i-single-node-kafka.yaml
----
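Before continuing, you may want to wait until the broker reports ready; assuming the example's default cluster name `my-cluster`, something like the following should work:
[source,bash,options="nowrap"]
----
oc wait kafka/my-cluster --for=condition=Ready --timeout=300s
----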
In the OpenShift environment, you can instantiate the Kafka Connectors in two ways, either using the Kafka Connect API, or through an OpenShift custom resource.
If you want to use the custom resources, you need to add the following annotation to the Kafka Connect S2I custom resource:
[source,bash,options="nowrap"]
----
oc annotate kafkaconnects2is my-connect-cluster strimzi.io/use-connector-resources=true
----
=== Add Camel Kafka connector binaries
Strimzi uses `Source2Image` builds to allow users to add their own connectors to the existing Strimzi Docker images.
We now need to build the connectors and add them to the image.
If you have built the whole `Camel Kafka Connector` project (`mvn clean package`), extract the connectors you need into a folder (e.g. `my-connectors/`)
so that each one sits in its own subfolder.
Alternatively, you can download the latest officially released and packaged connectors from Maven:
[source]
----
> cd my-connectors/
> wget https://repo1.maven.org/maven2/org/apache/camel/kafkaconnector/camel-aws2-sqs-kafka-connector/0.10.1/camel-aws2-sqs-kafka-connector-0.10.1-package.tar.gz
> tar -xzf camel-aws2-sqs-kafka-connector-0.10.1-package.tar.gz
----
Now we can start the build:
[source,bash,options="nowrap"]
----
oc start-build my-connect-cluster-connect --from-dir=./my-connectors/ --follow
----
We should now wait for the rollout of the new image to finish and the replica set with the new connector to become ready.
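One simple way to follow the rollout is to watch the Connect pods, using the same label selector as the commands below:
[source,bash,options="nowrap"]
----
oc get pods -l strimzi.io/name=my-connect-cluster-connect -w
----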
Once it is done, we can check that the connectors are available in our Kafka Connect cluster.
Strimzi runs Kafka Connect in distributed mode.
To check the available connector plugins, you can run the following command:
[source,bash,options="nowrap"]
----
oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connector-plugins
----
You should see something like this:
[source,json,options="nowrap"]
----
[{"class":"org.apache.camel.kafkaconnector.CamelSinkConnector","type":"sink","version":"0.10.1"},{"class":"org.apache.camel.kafkaconnector.CamelSourceConnector","type":"source","version":"0.10.1"},{"class":"org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSinkConnector","type":"sink","version":"0.10.1"},{"class":"org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSourceConnector","type":"source","version":"0.10.1"},{"class":"org.apache.kafka.connect.file.FileStreamSinkConnector","type":"sink","version":"2.5.0"},{"class":"org.apache.kafka.connect.file.FileStreamSourceConnector","type":"source","version":"2.5.0"},{"class":"org.apache.kafka.connect.mirror.MirrorCheckpointConnector","type":"source","version":"1"},{"class":"org.apache.kafka.connect.mirror.MirrorHeartbeatConnector","type":"source","version":"1"},{"class":"org.apache.kafka.connect.mirror.MirrorSourceConnector","type":"source","version":"1"}]
----
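The response comes back as a single JSON line; if you have `jq` installed locally (an assumption, it is not required), you can list just the connector classes:
[source,bash,options="nowrap"]
----
oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connector-plugins | jq '.[].class'
----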
=== Set the AWS credentials as OpenShift secret (optional)
Credentials to your AWS account can be specified directly in the connector instance definition in plain text, or you can create an OpenShift secret object beforehand and then reference the secret.
If you want to use the secret, you'll need to edit the file `$EXAMPLES/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-cred.properties` with the correct credentials and then create the secret with the following command:
[source,bash,options="nowrap"]
----
oc create secret generic aws2-sqs --from-file=$EXAMPLES/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-cred.properties
----
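For reference, the property names in `aws2-sqs-cred.properties` must match the keys referenced by the `${file:...}` placeholders used below (`accessKey`, `secretKey`, `region`); a minimal sketch of its content, with placeholder values:
[source,bash,options="nowrap"]
----
cat > $EXAMPLES/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-cred.properties <<'EOF'
accessKey=xxxx
secretKey=yyyy
region=eu-west-1
EOF
----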
Then you need to edit the `KafkaConnectS2I` custom resource to reference the secret. You can do that either in the OpenShift console or using the `oc edit KafkaConnectS2I` command.
Add the following configuration to the custom resource:
[source,yaml,options="nowrap"]
----
spec:
  # ...
  config:
    config.providers: file
    config.providers.file.class: org.apache.kafka.common.config.provider.FileConfigProvider
  # ...
  externalConfiguration:
    volumes:
      - name: aws-credentials
        secret:
          secretName: aws2-sqs
----
This way, the secret `aws2-sqs` will be mounted as a volume at the path `/opt/kafka/external-configuration/aws-credentials/`.
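Once the Connect pods have rolled out with the new configuration, you can verify the mount, reusing the pod lookup from the earlier commands:
[source,bash,options="nowrap"]
----
oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- ls /opt/kafka/external-configuration/aws-credentials/
----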
=== Create connector instance
If you have enabled the connector custom resources using the `use-connector-resources` annotation, you can create the connector instance by creating a specific custom resource:
[source,bash,options="nowrap"]
----
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  name: sqs-source-connector
  namespace: myproject
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSourceConnector
  tasksMax: 1
  config:
    key.converter: org.apache.kafka.connect.storage.StringConverter
    value.converter: org.apache.kafka.connect.storage.StringConverter
    topics: sqs-topic
    camel.source.path.queueNameOrArn: camel-connector-test
    camel.source.maxPollDuration: 10000
    camel.component.aws2-sqs.accessKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:accessKey}
    camel.component.aws2-sqs.secretKey: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:secretKey}
    camel.component.aws2-sqs.region: ${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:region}
EOF
----
If you don't want to use the OpenShift secret for storing the credentials, replace the `${file:...}` placeholders in the custom resource with the actual values.
Otherwise, you can now create the custom resource using:
[source]
----
oc apply -f $EXAMPLES/aws2-sqs/aws2-sqs-source/config/openshift/aws2-sqs-source-connector.yaml
----
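Either way, a quick sanity check is to list the `KafkaConnector` resources and confirm the connector appears:
[source,bash,options="nowrap"]
----
oc get kafkaconnectors -n myproject
----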
The other option, if you are not using the custom resources, is to create the AWS2 SQS source connector instance through the Kafka Connect API:
[source,bash,options="nowrap"]
----
oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -X POST \
    -H "Accept:application/json" \
    -H "Content-Type:application/json" \
    http://my-connect-cluster-connect-api:8083/connectors -d @- <<'EOF'
{
  "name": "sqs-source-connector",
  "config": {
    "connector.class": "org.apache.camel.kafkaconnector.aws2sqs.CamelAws2sqsSourceConnector",
    "tasks.max": "1",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "topics": "sqs-topic",
    "camel.source.path.queueNameOrArn": "camel-connector-test",
    "camel.source.maxPollDuration": 10000,
    "camel.component.aws2-sqs.accessKey": "${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:accessKey}",
    "camel.component.aws2-sqs.secretKey": "${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:secretKey}",
    "camel.component.aws2-sqs.region": "${file:/opt/kafka/external-configuration/aws-credentials/aws2-sqs-cred.properties:region}"
  }
}
EOF
----
Again, if you don't use the OpenShift secret, replace the properties with your actual AWS credentials.
You can check the status of the connector using:
[source,bash,options="nowrap"]
----
oc exec -i `oc get pods --field-selector status.phase=Running -l strimzi.io/name=my-connect-cluster-connect -o=jsonpath='{.items[0].metadata.name}'` -- curl -s http://my-connect-cluster-connect-api:8083/connectors/sqs-source-connector/status
----
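A healthy connector reports a `RUNNING` state for both the connector and its task; the response follows the standard Kafka Connect status format and typically looks like this (values illustrative):
[source,json,options="nowrap"]
----
{"name":"sqs-source-connector","connector":{"state":"RUNNING","worker_id":"10.128.2.25:8083"},"tasks":[{"id":0,"state":"RUNNING","worker_id":"10.128.2.25:8083"}],"type":"source"}
----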
Then you can connect to your AWS Console and send a message to the `camel-connector-test` queue.
=== Check received messages
You can also run the Kafka console consumer to see the messages received from the topic:
[source,bash,options="nowrap"]
----
oc exec -i -c kafka my-cluster-kafka-0 -- bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic sqs-topic --from-beginning
<your message 1>
<your message 2>
...
----