This document contains a usage guide as well as examples for the docker image. Docker compose files for the example use cases are provided in this directory.
The Kafka server can be started in the following ways:-
- Default configs
- File input
- Environment variables

Default configs:-
- If no user provided configs are passed to the docker container, or the configs provided are empty, default configs will be used (the configs packaged in the kafka tarball).
- If any user provided config is supplied, the default configs will not be used.
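For instance, starting the container with no config input at all runs the broker purely on the bundled defaults:

```
# No file input or environment variables: the configs packaged in the
# kafka tarball are used as-is
$ docker run -p 9092:9092 apache/kafka:latest
```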
File input:-
- Kafka property files can be supplied by mounting the folder containing them to /mnt/shared/config in the docker container.
- docker run --volume path/to/property/folder:/mnt/shared/config -p 9092:9092 apache/kafka:latest can be used to mount the folder containing property files.

Environment variables:-
- A Kafka property defined via an environment variable will override the value of that property defined in file input and in the default configs.
- If properties are provided via environment variables only, the default configs will be replaced by the user provided properties.
- To construct the environment variable name for a server.properties config, the following steps can be followed:-
  - Replace . with _
  - Replace _ with __ (double underscore)
  - Replace - with ___ (triple underscore)
  - Prefix the result with KAFKA_
  - For example, log.dirs becomes KAFKA_LOG_DIRS and offsets.topic.replication.factor becomes KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR.
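Putting the naming steps together (the property and value here are only illustrative):

```
# log.retention.hours -> KAFKA_LOG_RETENTION_HOURS (replace . with _, prefix with KAFKA_)
# Note: per the point above, env-only input replaces the default configs, so a
# real run would need the full set of required properties (see the examples below)
$ docker run -p 9092:9092 --env KAFKA_LOG_RETENTION_HOURS=48 apache/kafka:latest
```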
- To provide configs for the log4j property files, the following points should be considered:-
  - log4j properties provided via environment variables will be appended to the default properties file (the log4j properties files bundled with kafka).
  - KAFKA_LOG4J_ROOT_LOGLEVEL can be provided to set the value of log4j.rootLogger in log4j.properties and tools-log4j.properties.
  - log4j loggers can be added to log4j.properties by setting them in the KAFKA_LOG4J_LOGGERS environment variable as a single comma separated string (see the sketch below).
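A sketch of both log4j variables in use (the logger names are illustrative):

```
# Raise the root log level and add two loggers as one comma separated string
$ docker run -p 9092:9092 \
    --env KAFKA_LOG4J_ROOT_LOGLEVEL=DEBUG \
    --env KAFKA_LOG4J_LOGGERS="kafka.controller=TRACE,kafka.request.logger=WARN" \
    apache/kafka:latest
```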
- Other environment variables commonly used with Kafka that are not server.properties configs, such as CLUSTER_ID, can also be provided.
- docker run --env CONFIG_NAME=CONFIG_VALUE -p 9092:9092 apache/kafka:latest can be used to provide environment variables to the docker container.
- Note that it is recommended to use docker compose files to provide configs via environment variables, as in the sketch below.
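A minimal sketch of that recommendation, modeled loosely on the bundled jvm/single-node/plaintext example (all values are illustrative; the actual compose file in this repo is the authoritative reference):

```
# Write a minimal single-node KRaft compose file and bring it up
$ cat > docker-compose.yml <<'EOF'
services:
  broker:
    image: apache/kafka:latest
    ports:
      - "9092:9092"
    environment:
      # Env-only input replaces the default configs, so a full working
      # set of properties is supplied here
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: "broker,controller"
      KAFKA_LISTENERS: "PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093"
      KAFKA_ADVERTISED_LISTENERS: "PLAINTEXT://localhost:9092"
      KAFKA_CONTROLLER_LISTENER_NAMES: "CONTROLLER"
      KAFKA_CONTROLLER_QUORUM_VOTERS: "1@localhost:9093"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
EOF
$ docker compose up
```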
Running in SSL mode:-
- The recommended way to run kafka in SSL mode is to keep the certs in a folder, mount that folder to /etc/kafka/secrets in the docker container, and provide the following configs through environment variables (KAFKA_SSL_KEYSTORE_FILENAME, KAFKA_SSL_KEYSTORE_CREDENTIALS, KAFKA_SSL_KEY_CREDENTIALS, KAFKA_SSL_TRUSTSTORE_FILENAME and KAFKA_SSL_TRUSTSTORE_CREDENTIALS) to let the docker image scripts extract the passwords and populate the correct paths in server.properties.
- Make sure that KAFKA_ADVERTISED_LISTENERS is provided through environment variables to enable SSL mode in the Kafka server, i.e. it should contain an SSL listener.
- See the example in jvm/single-node/file-input for better clarity.

Examples:-
- The jvm directory contains docker compose files for some example configs to run the apache/kafka docker image.
- Check out the single-node examples for quick small examples to play around with.
- The cluster directory contains multi node examples, for combined mode as well as isolated mode.

Single node:-
Single node examples are present in the jvm/single-node directory.

Plaintext:-
- In this example KAFKA_LISTENERS is supplied explicitly. Had it not been provided, defaulting would have kicked in and KAFKA_ADVERTISED_LISTENERS would have been used to generate KAFKA_LISTENERS, by replacing the host with 0.0.0.0.
- CLUSTER_ID is also provided, but it is not mandatory as a default cluster id is present in the container.
- KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR is set explicitly to 1; if it were not provided, kafka's default of 3 would be used, which a single node cannot satisfy.
- To start the server:

```
# Run from root of the repo
$ docker compose -f docker/examples/jvm/single-node/plaintext/docker-compose.yml up
```
Once the server is up, produce messages from another terminal:

```
# Run from root of the repo
$ bin/kafka-console-producer.sh --topic test --bootstrap-server localhost:9092
```
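To read the messages back, the standard console consumer can be pointed at the same broker (an illustrative companion step, not part of the bundled example):

```
# Run from root of the repo
$ bin/kafka-console-consumer.sh --topic test --from-beginning --bootstrap-server localhost:9092
```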
SSL:-
- Note that the secrets folder is mounted to /etc/kafka/secrets in the docker container; the full paths of the cert files are derived from it, as only file names are provided in the other SSL configs.
- To start the server:

```
# Run from root of the repo
$ docker compose -f docker/examples/jvm/single-node/ssl/docker-compose.yml up
```
Produce messages over the SSL listener by passing the bundled client SSL config:

```
# Run from root of the repo
$ bin/kafka-console-producer.sh --topic test --bootstrap-server localhost:9093 --producer.config ./docker/examples/fixtures/client-secrets/client-ssl.properties
```
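For reference, a client SSL config of this shape usually contains entries like the following (a hedged sketch; the actual client-ssl.properties bundled under docker/examples/fixtures/client-secrets may differ):

```
# Illustrative client-ssl.properties; all values are placeholders
security.protocol=SSL
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=<truststore-password>
ssl.keystore.location=/path/to/kafka.keystore.jks
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>
```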
File input:-
- This example provides the broker configs through a mounted property file instead of environment variables; the SSL setup itself is the same as above.
- To start the server:

```
# Run from root of the repo
$ docker compose -f docker/examples/jvm/single-node/file-input/docker-compose.yml up
```
As before, produce messages using the SSL client config:

```
# Run from root of the repo
$ bin/kafka-console-producer.sh --topic test --bootstrap-server localhost:9093 --producer.config ./docker/examples/fixtures/client-secrets/client-ssl.properties
```
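Recall from the file input section that environment variables take precedence, so a property set in a mounted file can still be overridden at run time (a sketch; the folder name and property are illustrative):

```
# server.properties in ./my-kafka-config sets log.retention.hours, but the
# env var below wins because env vars override file input and defaults
$ docker run --volume "$PWD/my-kafka-config:/mnt/shared/config" \
    --env KAFKA_LOG_RETENTION_HOURS=24 \
    -p 9092:9092 apache/kafka:latest
```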
Cluster:-
These examples are for real world use cases where multiple kafka nodes are required.
Combined:-
Examples for combined mode, where each node acts as both broker and controller, are present in the jvm/cluster/combined directory.

Plaintext:-
To start the cluster:

```
# Run from root of the repo
$ docker compose -f docker/examples/jvm/cluster/combined/plaintext/docker-compose.yml up
```
Produce messages to the cluster:

```
# Run from root of the repo
$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:29092
```
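To check how the topic is laid out across the nodes, it can be described (an illustrative check, not part of the example):

```
# Shows partition leaders and replicas across the cluster
$ bin/kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:29092
```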
SSL:-
- Note that KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM is set to empty because no hostname was set in the example credentials; this won't be needed in production use cases.
- To start the cluster:

```
# Run from root of the repo
$ docker compose -f docker/examples/jvm/cluster/combined/ssl/docker-compose.yml up
```
Produce messages using the SSL client config:

```
# Run from root of the repo
$ bin/kafka-console-producer.sh --topic test --bootstrap-server localhost:29093 --producer.config ./docker/examples/fixtures/client-secrets/client-ssl.properties
```
Isolated:-
Examples for isolated mode, where brokers and controllers run on separate nodes, are present in the jvm/cluster/isolated directory.

Plaintext:-
To start the cluster:

```
# Run from root of the repo
$ docker compose -f docker/examples/jvm/cluster/isolated/plaintext/docker-compose.yml up
```
Produce messages to the cluster:

```
# Run from root of the repo
$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:29092
```
SSL:-
- Note that SSL-INTERNAL is used only for inter broker communication; the controllers use PLAINTEXT.
- To start the cluster:

```
# Run from root of the repo
$ docker compose -f docker/examples/jvm/cluster/isolated/ssl/docker-compose.yml up
```
Produce messages using the SSL client config:

```
# Run from root of the repo
$ bin/kafka-console-producer.sh --topic test --bootstrap-server localhost:29093 --producer.config ./docker/examples/fixtures/client-secrets/client-ssl.properties
```
Note that the examples are meant to be tried one at a time; make sure you shut down one example's servers before starting another to avoid port conflicts.
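For example, to tear down the last example cleanly before starting the next (substitute whichever compose file you started):

```
# Stop and remove the containers started by an example
$ docker compose -f docker/examples/jvm/single-node/plaintext/docker-compose.yml down
```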