This document contains a usage guide as well as examples for the Apache Kafka Docker image. Docker Compose files for the example use cases are provided in this directory.
The Kafka server can be started in the following ways:

- Default configs
- File input
- Environment variables
Default configs:

If no user-provided configuration (file input or environment variables) is passed to the Docker container, the default KRaft configuration for a single combined-mode node is used. This default configuration is packaged in the Kafka tarball, so a plain `docker run -p 9092:9092 apache/kafka:latest` starts a working single-node server.
File input:

`docker run --volume /path/to/property/folder:/mnt/shared/config -p 9092:9092 apache/kafka:latest` can be used to mount the folder containing the property files.

Environment variables:

When using environment variables, you need to set all properties required to start the KRaft node. Therefore, the recommended way to use environment variables is via Docker Compose, which allows users to set all the needed properties in one place. It is also possible to use the input file for a common set of configurations, and then override specific per-node properties using environment variables.
A Kafka property defined via an environment variable overrides the value of that property defined in the user-provided property file.
If properties are provided via environment variables only, all required properties must be specified.
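For instance, a minimal Docker Compose service that configures a single combined-mode KRaft node entirely through environment variables might look like this (the service name, node id, ports and listener names are illustrative, not taken from this repo's Compose files):

```
services:
  broker:
    image: apache/kafka:latest
    ports:
      - "9092:9092"
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@broker:9093
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```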
The following rules must be used to construct the environment variable key name:
- Replace `.` with `_`
- Replace `_` with `__` (double underscore)
- Replace `-` with `___` (triple underscore)
- Uppercase the result and prefix it with `KAFKA_`

Examples:

- For `abc.def`, use `KAFKA_ABC_DEF`
- For `abc-def`, use `KAFKA_ABC___DEF`
- For `abc_def`, use `KAFKA_ABC__DEF`
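As a sanity check, the mapping can be expressed as a tiny shell function (`to_env_key` is a hypothetical helper for illustration, not part of the image):

```shell
#!/bin/sh
# Map a Kafka property name to its Docker environment variable name.
# Order matters: existing '_' must be escaped to '__' before '.' is
# translated to '_', otherwise the newly created underscores would be
# doubled as well.
to_env_key() {
  printf '%s' "$1" \
    | sed -e 's/_/__/g' -e 's/-/___/g' -e 's/\./_/g' \
    | tr '[:lower:]' '[:upper:]' \
    | sed -e 's/^/KAFKA_/'
}

echo "$(to_env_key abc.def)"   # prints KAFKA_ABC_DEF
```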
To provide configs to the log4j property files, the following points should be considered:

- `KAFKA_LOG4J_ROOT_LOGLEVEL` can be provided to set the value of `log4j.rootLogger` in `log4j.properties` and `tools-log4j.properties`.
- Additional loggers can be passed via the `KAFKA_LOG4J_LOGGERS` environment variable as a single comma-separated string. For example, if `KAFKA_LOG4J_LOGGERS='property1=value1,property2=value2'` is provided to the Docker container, `log4j.logger.property1=value1` and `log4j.logger.property2=value2` will be added to the `log4j.properties` file inside the Docker container.

Running in SSL mode:

The recommended way is to mount the secrets to `/etc/kafka/secrets` in the Docker container and provide the configs through environment variables (`KAFKA_SSL_KEYSTORE_FILENAME`, `KAFKA_SSL_KEYSTORE_CREDENTIALS`, `KAFKA_SSL_KEY_CREDENTIALS`, `KAFKA_SSL_TRUSTSTORE_FILENAME` and `KAFKA_SSL_TRUSTSTORE_CREDENTIALS`) to let the Docker image scripts extract the passwords and populate the correct paths in `server.properties`.

Make sure `KAFKA_ADVERTISED_LISTENERS` is provided through environment variables when enabling SSL mode in the Kafka server, i.e. it should contain an SSL listener. See the example in `docker-compose-files/single-node/file-input` for better clarity.

Examples:

The `docker-compose-files` directory contains Docker Compose files for some example configs to run the `apache/kafka` OR `apache/kafka-native` Docker image. Pass the `IMAGE` variable with the Docker Compose command to specify which Docker image to use for bringing up the containers:

```
# to bring up containers using apache/kafka docker image
IMAGE=apache/kafka:latest <docker compose command>

# to bring up containers using apache/kafka-native docker image
IMAGE=apache/kafka-native:latest <docker compose command>
```
Check the `single-node` examples for quick, small examples to play around with; the `cluster` directory contains multi-node examples, for combined mode as well as isolated mode.

Single node:

Examples are present in the `docker-compose-files/single-node` directory.

Plaintext:

- Note that `KAFKA_LISTENERS` is supplied here. If it were not provided, defaulting would kick in and `KAFKA_LISTENERS` would be generated from `KAFKA_ADVERTISED_LISTENERS` by replacing the host with `0.0.0.0`.
- `CLUSTER_ID` is provided, but it's not mandatory, as there is a default cluster id present in the container.
- `KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR` is provided and set explicitly to 1, because if it is not provided, Kafka's default value of 3 is used.

To run the example:

```
# Run from root of the repo

# JVM based Apache Kafka Docker Image
$ IMAGE=apache/kafka:latest docker compose -f docker/examples/docker-compose-files/single-node/plaintext/docker-compose.yml up

# GraalVM based Native Apache Kafka Docker Image
$ IMAGE=apache/kafka-native:latest docker compose -f docker/examples/docker-compose-files/single-node/plaintext/docker-compose.yml up
```
To produce messages using client scripts:

```
# Run from root of the repo
$ bin/kafka-console-producer.sh --topic test --bootstrap-server localhost:9092
```
SSL:

Note that the secrets are mounted to the `/etc/kafka/secrets` folder in the Docker container; the full paths of the files are derived from it, as only file names are supplied in the other SSL configs.

To run the example:

```
# Run from root of the repo

# JVM based Apache Kafka Docker Image
$ IMAGE=apache/kafka:latest docker compose -f docker/examples/docker-compose-files/single-node/ssl/docker-compose.yml up

# GraalVM based Native Apache Kafka Docker Image
$ IMAGE=apache/kafka-native:latest docker compose -f docker/examples/docker-compose-files/single-node/ssl/docker-compose.yml up
```
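Inside such an SSL Compose file, the secrets mount and the SSL environment variables typically wire together roughly like this (a sketch; the host path and file names are placeholders, not this repo's actual fixtures):

```
services:
  broker:
    image: apache/kafka:latest
    volumes:
      - ./fixtures/secrets:/etc/kafka/secrets
    environment:
      KAFKA_SSL_KEYSTORE_FILENAME: kafka.keystore.jks
      KAFKA_SSL_KEYSTORE_CREDENTIALS: kafka_keystore_creds
      KAFKA_SSL_KEY_CREDENTIALS: kafka_ssl_key_creds
      KAFKA_SSL_TRUSTSTORE_FILENAME: kafka.truststore.jks
      KAFKA_SSL_TRUSTSTORE_CREDENTIALS: kafka_truststore_creds
      KAFKA_ADVERTISED_LISTENERS: SSL://localhost:9093
```

The scripts in the image read the credential files, extract the passwords, and fill in the full keystore/truststore paths in `server.properties`.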
To produce messages using client scripts:

```
# Run from root of the repo
$ bin/kafka-console-producer.sh --topic test --bootstrap-server localhost:9093 --producer.config ./docker/examples/fixtures/client-secrets/client-ssl.properties
```
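The `client-ssl.properties` file referenced above generally contains client-side SSL settings along these lines (a sketch; the location and password are placeholders):

```
security.protocol=SSL
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=<truststore-password>
```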
File input:

To run the example:

```
# Run from root of the repo

# JVM based Apache Kafka Docker Image
$ IMAGE=apache/kafka:latest docker compose -f docker/examples/docker-compose-files/single-node/file-input/docker-compose.yml up

# GraalVM based Native Apache Kafka Docker Image
$ IMAGE=apache/kafka-native:latest docker compose -f docker/examples/docker-compose-files/single-node/file-input/docker-compose.yml up
```
To produce messages using client scripts:

```
# Run from root of the repo
$ bin/kafka-console-producer.sh --topic test --bootstrap-server localhost:9093 --producer.config ./docker/examples/fixtures/client-secrets/client-ssl.properties
```
Multi-node cluster:

These examples are for real-world use cases where multiple Kafka nodes are required.
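The two modes differ in each node's `process.roles` setting in KRaft, shown here as plain server properties for reference:

```
# Combined mode: every node acts as both broker and controller
process.roles=broker,controller

# Isolated mode: each node has a dedicated role
process.roles=broker      # on broker nodes
process.roles=controller  # on controller nodes
```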
Combined:
Examples are present in the `docker-compose-files/cluster/combined` directory.

Plaintext:

To run the example:

```
# Run from root of the repo

# JVM based Apache Kafka Docker Image
$ IMAGE=apache/kafka:latest docker compose -f docker/examples/docker-compose-files/cluster/combined/plaintext/docker-compose.yml up

# GraalVM based Native Apache Kafka Docker Image
$ IMAGE=apache/kafka-native:latest docker compose -f docker/examples/docker-compose-files/cluster/combined/plaintext/docker-compose.yml up
```
To produce messages using client scripts:

```
# Run from root of the repo
$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:29092
```
SSL:

Note that `KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM` is set to empty because no hostname was set in the credentials; this won't be needed in production use cases.

To run the example:

```
# Run from root of the repo

# JVM based Apache Kafka Docker Image
$ IMAGE=apache/kafka:latest docker compose -f docker/examples/docker-compose-files/cluster/combined/ssl/docker-compose.yml up

# GraalVM based Native Apache Kafka Docker Image
$ IMAGE=apache/kafka-native:latest docker compose -f docker/examples/docker-compose-files/cluster/combined/ssl/docker-compose.yml up
```
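In Compose terms, disabling hostname verification as noted above corresponds to an entry like this in the service environment (a sketch):

```
environment:
  # An empty value disables server hostname verification on SSL connections
  KAFKA_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM: ""
```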
To produce messages using client scripts:

```
# Run from root of the repo
$ bin/kafka-console-producer.sh --topic test --bootstrap-server localhost:29093 --producer.config ./docker/examples/fixtures/client-secrets/client-ssl.properties
```
Isolated:
Examples are present in the `docker-compose-files/cluster/isolated` directory.

Plaintext:

To run the example:

```
# Run from root of the repo

# JVM based Apache Kafka Docker Image
$ IMAGE=apache/kafka:latest docker compose -f docker/examples/docker-compose-files/cluster/isolated/plaintext/docker-compose.yml up

# GraalVM based Native Apache Kafka Docker Image
$ IMAGE=apache/kafka-native:latest docker compose -f docker/examples/docker-compose-files/cluster/isolated/plaintext/docker-compose.yml up
```
To produce messages using client scripts:

```
# Run from root of the repo
$ bin/kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:29092
```
SSL:

Note that `SSL-INTERNAL` is used only for inter-broker communication; the controllers use `PLAINTEXT`.

To run the example:

```
# Run from root of the repo

# JVM based Apache Kafka Docker Image
$ IMAGE=apache/kafka:latest docker compose -f docker/examples/docker-compose-files/cluster/isolated/ssl/docker-compose.yml up

# GraalVM based Native Apache Kafka Docker Image
$ IMAGE=apache/kafka-native:latest docker compose -f docker/examples/docker-compose-files/cluster/isolated/ssl/docker-compose.yml up
```
To produce messages using client scripts:

```
# Run from root of the repo
$ bin/kafka-console-producer.sh --topic test --bootstrap-server localhost:29093 --producer.config ./docker/examples/fixtures/client-secrets/client-ssl.properties
```
Note that the examples are meant to be tried one at a time; make sure you shut down one example's containers before starting another to avoid port conflicts.