<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<script id="security-template" type="text/x-handlebars-template">
<h3 class="anchor-heading">
<a class="anchor-link" id="security_overview" href="#security_overview"></a>
<a href="#security_overview">7.1 Security Overview
</a></h3>
In release 0.9.0.0, the Kafka community added a number of features that, used either separately or together, increase security in a Kafka cluster. The following security measures are currently supported:
<ol>
<li>Authentication of connections to brokers from clients (producers and consumers), other brokers and tools, using either SSL or SASL. Kafka supports the following SASL mechanisms:
<ul>
<li>SASL/GSSAPI (Kerberos) - starting at version 0.9.0.0</li>
<li>SASL/PLAIN - starting at version 0.10.0.0</li>
<li>SASL/SCRAM-SHA-256 and SASL/SCRAM-SHA-512 - starting at version 0.10.2.0</li>
<li>SASL/OAUTHBEARER - starting at version 2.0</li>
</ul></li>
<li>Authentication of connections from brokers to ZooKeeper</li>
<li>Encryption of data transferred between brokers and clients, between brokers, or between brokers and tools using SSL (Note that there is a performance degradation when SSL is enabled, the magnitude of which depends on the CPU type and the JVM implementation.)</li>
<li>Authorization of read / write operations by clients</li>
<li>Authorization is pluggable and integration with external authorization services is supported</li>
</ol>
It's worth noting that security is optional - non-secured clusters are supported, as well as a mix of authenticated, unauthenticated, encrypted and non-encrypted clients.
The guides below explain how to configure and use the security features in both clients and brokers.
<h3 class="anchor-heading">
<a class="anchor-link" id="security_ssl" href="#security_ssl"></a>
<a href="#security_ssl">7.2 Encryption and Authentication using SSL
</a></h3>
Apache Kafka allows clients to connect over SSL. By default, SSL is disabled but can be turned on as needed.
<ol>
<li><h4 class="anchor-heading">
<a class="anchor-link" id="security_ssl_key" href="#security_ssl_key"></a>
<a href="#security_ssl_key">Generate SSL key and certificate for each Kafka broker
</a></h4>
The first step of deploying one or more brokers with SSL support is to generate the key and the certificate for each machine in the cluster. You can use Java's keytool utility to accomplish this task.
We will generate the key into a temporary keystore initially so that we can export and sign it later with the CA.
<pre class="line-numbers"><code class="language-bash"> keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey -keyalg RSA</code></pre>
You need to specify two parameters in the above command:
<ol>
<li>keystore: the keystore file that stores the certificate. The keystore file contains the private key of the certificate; therefore, it needs to be kept safely.</li>
<li>validity: the validity period of the certificate, in days.</li>
</ol>
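For example, to script the key generation non-interactively, the distinguished name and passwords may also be supplied on the command line; the values below are illustrative:
<pre class="line-numbers"><code class="language-bash"> keytool -keystore server.keystore.jks -alias localhost -validity 365 -genkey -keyalg RSA \
      -storepass test1234 -keypass test1234 \
      -dname "CN=kafka1.hostname.com, OU=org, O=org, L=Santa Clara, ST=CA, C=US"</code></pre>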
<br>
<h5 class="anchor-heading">
<a class="anchor-link" id="security_confighostname" href="#security_confighostname"></a>
<a href="#security_confighostname">Configuring Host Name Verification
</a></h5>
From Kafka version 2.0.0 onwards, host name verification of servers is enabled by default for client connections
as well as inter-broker connections to prevent man-in-the-middle attacks. Server host name verification may be disabled
by setting <code>ssl.endpoint.identification.algorithm</code> to an empty string. For example,
<pre class="line-numbers"><code class="language-text"> ssl.endpoint.identification.algorithm=</code></pre>
For dynamically configured broker listeners, hostname verification may be disabled using <code>kafka-configs.sh</code>.
For example,
<pre class="line-numbers"><code class="language-text"> bin/kafka-configs.sh --bootstrap-server localhost:9093 --entity-type brokers --entity-name 0 --alter --add-config "listener.name.internal.ssl.endpoint.identification.algorithm="</code></pre>
For older versions of Kafka, <code>ssl.endpoint.identification.algorithm</code> is not defined by default, so host name
verification is not performed. The property should be set to <code>HTTPS</code> to enable host name verification.
<pre class="line-numbers"><code class="language-text"> ssl.endpoint.identification.algorithm=HTTPS </code></pre>
Host name verification must be enabled to prevent man-in-the-middle attacks if server endpoints are not validated
externally.
<h5 class="anchor-heading">
<a class="anchor-link" id="security_configcerthostname" href="#security_configcerthostname"></a>
<a href="#security_configcerthostname">Configuring Host Name In Certificates
</a></h5>
If host name verification is enabled, clients will verify the server's fully qualified domain name (FQDN) against one of
the following two fields:
<ol>
<li>Common Name (CN)
<li>Subject Alternative Name (SAN)
</ol>
<br>
Both fields are valid; however, RFC 2818 recommends the use of SAN. SAN is also more flexible, allowing for multiple DNS entries to be declared. Another advantage is that the CN can then be set to a more meaningful value for authorization purposes. To add a SAN field, append the following argument <code> -ext SAN=DNS:{FQDN} </code> to the keytool command:
<pre class="line-numbers"><code class="language-bash"> keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey -keyalg RSA -ext SAN=DNS:{FQDN}</code></pre>
The following command can be run afterwards to verify the contents of the generated certificate:
<pre class="line-numbers"><code class="language-bash"> keytool -list -v -keystore server.keystore.jks</code></pre>
</li>
<li><h4 class="anchor-heading">
<a class="anchor-link" id="security_ssl_ca" href="#security_ssl_ca"></a>
<a href="#security_ssl_ca">Creating your own CA
</a></h4>
After the first step, each machine in the cluster has a public-private key pair, and a certificate to identify the machine. The certificate, however, is unsigned, which means that an attacker can create such a certificate to pretend to be any machine.<p>
Therefore, it is important to prevent forged certificates by signing them for each machine in the cluster. A certificate authority (CA) is responsible for signing certificates. A CA works like a government that issues passports: the government stamps (signs) each passport so that the passport becomes difficult to forge. Other governments verify the stamps to ensure the passport is authentic. Similarly, the CA signs the certificates, and the cryptography guarantees that a signed certificate is computationally difficult to forge. Thus, as long as the CA is a genuine and trusted authority, the clients have high assurance that they are connecting to the authentic machines.
<pre class="line-numbers"><code class="language-bash"> openssl req -new -x509 -keyout ca-key -out ca-cert -days 365</code></pre>
The generated CA is simply a public-private key pair and certificate, and it is intended to sign other certificates.<br>
The next step is to add the generated CA to the <b>clients' truststore</b> so that the clients can trust this CA:
<pre class="line-numbers"><code class="language-bash"> keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert</code></pre>
<b>Note:</b> If you configure the Kafka brokers to require client authentication by setting <code>ssl.client.auth</code> to "requested" or "required" in the <a href="#config_broker">Kafka brokers config</a>, then you must provide a truststore for the Kafka brokers as well, and it should contain all the CA certificates that clients' keys were signed by.
<pre class="line-numbers"><code class="language-bash"> keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert</code></pre>
In contrast to the keystore in step 1 that stores each machine's own identity, the truststore of a client stores all the certificates that the client should trust. Importing a certificate into one's truststore also means trusting all certificates that are signed by that certificate. As in the analogy above, trusting the government (CA) also means trusting all passports (certificates) that it has issued. This attribute is called the chain of trust, and it is particularly useful when deploying SSL on a large Kafka cluster. You can sign all certificates in the cluster with a single CA, and have all machines share the same truststore that trusts the CA. That way all machines can authenticate all other machines.</li>
<li><h4 class="anchor-heading">
<a class="anchor-link" id="security_ssl_signing" href="#security_ssl_signing"></a>
<a href="#security_ssl_signing">Signing the certificate
</a></h4>
The next step is to sign all certificates generated by step 1 with the CA generated in step 2. First, you need to export the certificate from the keystore:
<pre class="line-numbers"><code class="language-bash"> keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file</code></pre>
Then sign it with the CA:
<pre class="line-numbers"><code class="language-bash"> openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}</code></pre>
Finally, you need to import both the certificate of the CA and the signed certificate into the keystore:
<pre class="line-numbers"><code class="language-bash"> keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed</code></pre>
The definitions of the parameters are the following:
<ol>
<li>keystore: the location of the keystore</li>
<li>ca-cert: the certificate of the CA</li>
<li>ca-key: the private key of the CA</li>
<li>ca-password: the passphrase of the CA</li>
<li>cert-file: the exported, unsigned certificate of the server</li>
<li>cert-signed: the signed certificate of the server</li>
</ol>
Here is an example of a bash script with all of the above steps. Note that one of the commands assumes a password of <code>test1234</code>, so either use that password or edit the command before running it.
<pre class="line-numbers"><code class="language-bash">#!/bin/bash
#Step 1
keytool -keystore server.keystore.jks -alias localhost -validity 365 -keyalg RSA -genkey
#Step 2
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
#Step 3
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:test1234
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed</code></pre></li>
<li><h4 class="anchor-heading">
<a class="anchor-link" id="security_configbroker" href="#security_configbroker"></a>
<a href="#security_configbroker">Configuring Kafka Brokers
</a></h4>
Kafka Brokers support listening for connections on multiple ports.
We need to configure the <code>listeners</code> property in server.properties, which must have one or more comma-separated values.
If SSL is not enabled for inter-broker communication (see below for how to enable it), both PLAINTEXT and SSL ports will be necessary.
<pre class="line-numbers"><code class="language-text"> listeners=PLAINTEXT://host.name:port,SSL://host.name:port</code></pre>
The following SSL configs are needed on the broker side:
<pre class="line-numbers"><code class="language-text"> ssl.keystore.location=/var/private/ssl/server.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/var/private/ssl/server.truststore.jks
ssl.truststore.password=test1234</code></pre>
Note: <code>ssl.truststore.password</code> is technically optional but highly recommended. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
Optional settings that are worth considering (a combined example follows this list):
<ol>
<li>ssl.client.auth=none ("required" => client authentication is required, "requested" => client authentication is requested and clients without certs can still connect. The usage of "requested" is discouraged as it provides a false sense of security and misconfigured clients will still connect successfully.)</li>
<li>ssl.cipher.suites (Optional). A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. (Default is an empty list)</li>
<li>ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1 (list out the SSL protocols that you are going to accept from clients. Do note that SSL is deprecated in favor of TLS and using SSL in production is not recommended)</li>
<li>ssl.keystore.type=JKS</li>
<li>ssl.truststore.type=JKS</li>
<li>ssl.secure.random.implementation=SHA1PRNG</li>
</ol>
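Putting the required and optional settings together, a broker-side SSL section of server.properties might look like the following sketch; the paths, ports and passwords are illustrative:
<pre class="line-numbers"><code class="language-text"> listeners=PLAINTEXT://host.name:9092,SSL://host.name:9093
    ssl.keystore.location=/var/private/ssl/server.keystore.jks
    ssl.keystore.password=test1234
    ssl.key.password=test1234
    ssl.truststore.location=/var/private/ssl/server.truststore.jks
    ssl.truststore.password=test1234
    ssl.client.auth=required
    ssl.enabled.protocols=TLSv1.2</code></pre>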
If you want to enable SSL for inter-broker communication, add the following to the server.properties file (it defaults to PLAINTEXT):
<pre class="line-numbers"><code class="language-text"> security.inter.broker.protocol=SSL</code></pre>
<p>
Due to import regulations in some countries, the Oracle implementation limits the strength of cryptographic algorithms available by default. If stronger algorithms are needed (for example, AES with 256-bit keys), the <a href="http://www.oracle.com/technetwork/java/javase/downloads/index.html">JCE Unlimited Strength Jurisdiction Policy Files</a> must be obtained and installed in the JDK/JRE. See the
<a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/SunProviders.html">JCA Providers Documentation</a> for more information.
</p>
<p>
The JRE/JDK will have a default pseudo-random number generator (PRNG) that is used for cryptography operations, so it is not required to configure the
implementation used with <code>ssl.secure.random.implementation</code>. However, there are performance issues with some implementations (notably, the
default chosen on Linux systems, <code>NativePRNG</code>, utilizes a global lock). In cases where performance of SSL connections becomes an issue,
consider explicitly setting the implementation to be used. The <code>SHA1PRNG</code> implementation is non-blocking, and has shown very good performance
characteristics under heavy load (50 MB/sec of produced messages, plus replication traffic, per broker).
</p>
Once you start the broker, you should be able to see the following in server.log:
<pre class="line-numbers"><code class="language-text"> with addresses: PLAINTEXT -> EndPoint(192.168.64.1,9092,PLAINTEXT),SSL -> EndPoint(192.168.64.1,9093,SSL)</code></pre>
To quickly check if the server keystore and truststore are set up properly, you can run the following command:
<pre class="line-numbers"><code class="language-bash"> openssl s_client -debug -connect localhost:9093 -tls1</code></pre>
(Note: TLSv1 should be listed under <code>ssl.enabled.protocols</code>)<br>
In the output of this command you should see the server's certificate:
<pre class="line-numbers"><code class="language-text">-----BEGIN CERTIFICATE-----
{variable sized random bytes}
-----END CERTIFICATE-----
subject=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=Sriharsha Chintalapani
issuer=/C=US/ST=CA/L=Santa Clara/O=org/OU=org/CN=kafka/emailAddress=test@test.com</code></pre>
If the certificate does not show up or if there are any other error messages, then your keystore is not set up properly.
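To inspect just the subject, issuer and validity dates of the certificate the broker presents, the <code>openssl</code> output can be piped through <code>openssl x509</code> (a sketch, assuming a standard openssl installation):
<pre class="line-numbers"><code class="language-bash"> openssl s_client -connect localhost:9093 &lt;/dev/null 2&gt;/dev/null | openssl x509 -noout -subject -issuer -dates</code></pre></li>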
<li><h4 class="anchor-heading">
<a class="anchor-link" id="security_configclients" href="#security_configclients"></a>
<a href="#security_configclients">Configuring Kafka Clients
</a></h4>
SSL is supported only for the new Kafka producer and consumer; the older API is not supported. The configs for SSL will be the same for both producer and consumer.<br>
If client authentication is not required in the broker, then the following is a minimal configuration example:
<pre class="line-numbers"><code class="language-text"> security.protocol=SSL
ssl.truststore.location=/var/private/ssl/client.truststore.jks
ssl.truststore.password=test1234</code></pre>
Note: <code>ssl.truststore.password</code> is technically optional but highly recommended. If a password is not set, access to the truststore is still available, but integrity checking is disabled.
If client authentication is required, then a keystore must be created as in step 1 and the following must also be configured:
<pre class="line-numbers"><code class="language-text"> ssl.keystore.location=/var/private/ssl/client.keystore.jks
ssl.keystore.password=test1234
ssl.key.password=test1234</code></pre>
Other configuration settings that may also be needed depending on your requirements and the broker configuration:
<ol>
<li>ssl.provider (Optional). The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</li>
<li>ssl.cipher.suites (Optional). A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol.</li>
<li>ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1. It should list at least one of the protocols configured on the broker side</li>
<li>ssl.truststore.type=JKS</li>
<li>ssl.keystore.type=JKS</li>
</ol>
<br>
Examples using console-producer and console-consumer:
<pre class="line-numbers"><code class="language-bash"> kafka-console-producer.sh --bootstrap-server localhost:9093 --topic test --producer.config client-ssl.properties
kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic test --consumer.config client-ssl.properties</code></pre>
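These examples assume a <code>client-ssl.properties</code> file containing the client settings shown above; a minimal sketch of creating it from the shell:
<pre class="line-numbers"><code class="language-bash"> printf '%s\n' \
      'security.protocol=SSL' \
      'ssl.truststore.location=/var/private/ssl/client.truststore.jks' \
      'ssl.truststore.password=test1234' &gt; client-ssl.properties</code></pre>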
</li>
</ol>
<h3 class="anchor-heading">
<a class="anchor-link" id="security_sasl" href="#security_sasl"></a>
<a href="#security_sasl">7.3 Authentication using SASL
</a></h3>
<ol>
<li><h4 class="anchor-heading">
<a class="anchor-link" id="security_sasl_jaasconfig" href="#security_sasl_jaasconfig"></a>
<a href="#security_sasl_jaasconfig">JAAS configuration
</a></h4>
<p>Kafka uses the Java Authentication and Authorization Service
(<a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jaas/JAASRefGuide.html">JAAS</a>)
for SASL configuration.</p>
<ol>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_jaas_broker"
href="#security_jaas_broker"></a>
JAAS configuration for Kafka brokers
</h5>
<p><tt>KafkaServer</tt> is the section name in the JAAS file used by each
KafkaServer/Broker. This section provides SASL configuration options
for the broker including any SASL client connections made by the broker
for inter-broker communication. If multiple listeners are configured to use
SASL, the section name may be prefixed with the listener name in lower-case
followed by a period, e.g. <tt>sasl_ssl.KafkaServer</tt>.</p>
<p>The <tt>Client</tt> section is used to authenticate a SASL connection with
ZooKeeper. It also allows the brokers to set a SASL ACL on ZooKeeper
nodes, which locks these nodes down so that only the brokers can
modify them. It is necessary to have the same principal name across all
brokers. If you want to use a section name other than Client, set the
system property <tt>zookeeper.sasl.clientconfig</tt> to the appropriate
name (<i>e.g.</i>, <tt>-Dzookeeper.sasl.clientconfig=ZkClient</tt>).</p>
<p>ZooKeeper uses "zookeeper" as the service name by default. If you
want to change this, set the system property
<tt>zookeeper.sasl.client.username</tt> to the appropriate name
(<i>e.g.</i>, <tt>-Dzookeeper.sasl.client.username=zk</tt>).</p>
<p>Brokers may also configure JAAS using the broker configuration property <code>sasl.jaas.config</code>.
The property name must be prefixed with the listener prefix including the SASL mechanism,
i.e. <code>listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config</code>. Only one
login module may be specified in the config value. If multiple mechanisms are configured on a
listener, configs must be provided for each mechanism using the listener and mechanism prefix.
For example,
<pre class="line-numbers"><code class="language-text"> listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="admin" \
password="admin-secret";
listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="admin" \
password="admin-secret" \
user_admin="admin-secret" \
user_alice="alice-secret";</code></pre>
If JAAS configuration is defined at different levels, the order of precedence used is:
<ul>
<li>Broker configuration property <code>listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config</code></li>
<li><code>{listenerName}.KafkaServer</code> section of static JAAS configuration</li>
<li><code>KafkaServer</code> section of static JAAS configuration</li>
</ul>
Note that ZooKeeper JAAS config may only be configured using static JAAS configuration.
<p>See <a href="#security_sasl_kerberos_brokerconfig">GSSAPI (Kerberos)</a>,
<a href="#security_sasl_plain_brokerconfig">PLAIN</a>,
<a href="#security_sasl_scram_brokerconfig">SCRAM</a> or
<a href="#security_sasl_oauthbearer_brokerconfig">OAUTHBEARER</a> for example broker configurations.</p></li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_jaas_client"
href="#security_jaas_client"></a>JAAS configuration for Kafka clients
</h5>
<p>Clients may configure JAAS using the client configuration property
<a href="#security_client_dynamicjaas">sasl.jaas.config</a>
or using the <a href="#security_client_staticjaas">static JAAS config file</a>
similar to brokers.</p>
<ol>
<li><h6 class="anchor-heading">
<a class="anchor-link" id="security_client_dynamicjaas"
href="#security_client_dynamicjaas"></a>
JAAS configuration using client configuration property
</h6>
<p>Clients may specify JAAS configuration as a producer or consumer property without
creating a physical configuration file. This mode also enables different producers
and consumers within the same JVM to use different credentials by specifying
different properties for each client. If both static JAAS configuration system property
<code>java.security.auth.login.config</code> and client property <code>sasl.jaas.config</code>
are specified, the client property will be used.</p>
<p>See <a href="#security_sasl_kerberos_clientconfig">GSSAPI (Kerberos)</a>,
<a href="#security_sasl_plain_clientconfig">PLAIN</a>,
<a href="#security_sasl_scram_clientconfig">SCRAM</a> or
<a href="#security_sasl_oauthbearer_clientconfig">OAUTHBEARER</a> for example configurations.</p></li>
<li><h6 class="anchor-heading">
<a class="anchor-link" id="security_client_staticjaas" href="#security_client_staticjaas"></a>
<a href="#security_client_staticjaas">JAAS configuration using static config file
</a></h6>
To configure SASL authentication on the clients using static JAAS config file:
<ol>
<li>Add a JAAS config file with a client login section named <tt>KafkaClient</tt>. Configure
a login module in <tt>KafkaClient</tt> for the selected mechanism as described in the examples
for setting up <a href="#security_sasl_kerberos_clientconfig">GSSAPI (Kerberos)</a>,
<a href="#security_sasl_plain_clientconfig">PLAIN</a>,
<a href="#security_sasl_scram_clientconfig">SCRAM</a> or
<a href="#security_sasl_oauthbearer_clientconfig">OAUTHBEARER</a>.
For example, <a href="#security_sasl_gssapi_clientconfig">GSSAPI</a>
credentials may be configured as:
<pre class="line-numbers"><code class="language-text"> KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_client.keytab"
principal="kafka-client-1@EXAMPLE.COM";
};</code></pre>
</li>
<li>Pass the JAAS config file location as JVM parameter to each client JVM. For example:
<pre class="line-numbers"><code class="language-bash"> -Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf</code></pre></li>
</ol>
</li>
</ol>
</li>
</ol>
</li>
<li><h4 class="anchor-heading">
<a class="anchor-link" id="security_sasl_config"
href="#security_sasl_config"></a>
SASL configuration
</h4>
<p>SASL may be used with PLAINTEXT or SSL as the transport layer using the
security protocol SASL_PLAINTEXT or SASL_SSL respectively. If SASL_SSL is
used, then <a href="#security_ssl">SSL must also be configured</a>.</p>
<ol>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_mechanism"
href="#security_sasl_mechanism"></a>
SASL mechanisms
</h5>
Kafka supports the following SASL mechanisms:
<ul>
<li><a href="#security_sasl_kerberos">GSSAPI</a> (Kerberos)</li>
<li><a href="#security_sasl_plain">PLAIN</a></li>
<li><a href="#security_sasl_scram">SCRAM-SHA-256</a></li>
<li><a href="#security_sasl_scram">SCRAM-SHA-512</a></li>
<li><a href="#security_sasl_oauthbearer">OAUTHBEARER</a></li>
</ul>
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_brokerconfig"
href="#security_sasl_brokerconfig"></a>
SASL configuration for Kafka brokers
</h5>
<ol>
<li>Configure a SASL port in server.properties, by adding at least one of
SASL_PLAINTEXT or SASL_SSL to the <i>listeners</i> parameter, which
contains one or more comma-separated values:
<pre class="line-numbers"><code class="language-text"> listeners=SASL_PLAINTEXT://host.name:port</code></pre>
If you are only configuring a SASL port (or if you want
the Kafka brokers to authenticate each other using SASL) then make sure
you set the same SASL protocol for inter-broker communication:
<pre class="line-numbers"><code class="language-text"> security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)</code></pre></li>
<li>Select one or more <a href="#security_sasl_mechanism">supported mechanisms</a>
to enable in the broker and follow the steps to configure SASL for the mechanism.
To enable multiple mechanisms in the broker, follow the steps
<a href="#security_sasl_multimechanism">here</a>.</li>
</ol>
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_clientconfig"
href="#security_sasl_clientconfig"></a>
SASL configuration for Kafka clients
</h5>
<p>SASL authentication is only supported for the new Java Kafka producer and
consumer; the older API is not supported.</p>
<p>To configure SASL authentication on the clients, select a SASL
<a href="#security_sasl_mechanism">mechanism</a> that is enabled in
the broker for client authentication and follow the steps to configure SASL
for the selected mechanism.</p></li>
</ol>
</li>
<li><h4 class="anchor-heading">
<a class="anchor-link" id="security_sasl_kerberos" href="#security_sasl_kerberos"></a>
<a href="#security_sasl_kerberos">Authentication using SASL</a>/Kerberos
</h4>
<ol>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_kerberos_prereq" href="#security_sasl_kerberos_prereq"></a>
<a href="#security_sasl_kerberos_prereq">Prerequisites
</a></h5>
<ol>
<li><b>Kerberos</b><br>
If your organization is already using a Kerberos server (for example, by using Active Directory), there is no need to install a new server just for Kafka. Otherwise you will need to install one; your Linux vendor likely has packages for Kerberos and a short guide on how to install and configure it (<a href="https://help.ubuntu.com/community/Kerberos">Ubuntu</a>, <a href="https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Managing_Smart_Cards/installing-kerberos.html">Red Hat</a>). Note that if you are using Oracle Java, you will need to download JCE policy files for your Java version and copy them to $JAVA_HOME/jre/lib/security.</li>
<li><b>Create Kerberos Principals</b><br>
If you are using the organization's Kerberos or Active Directory server, ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).<br>
If you have installed your own Kerberos, you will need to create these principals yourself using the following commands (a filled-in example follows this list):
<pre class="line-numbers"><code class="language-bash"> sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"</code></pre></li>
<li><b>Make sure all hosts are reachable using hostnames</b> - it is a Kerberos requirement that all your hosts can be resolved with their FQDNs.</li>
</ol>
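For example, for a broker host <i>kafka1.hostname.com</i> in realm <i>EXAMPLE.COM</i> (matching the JAAS examples below), the filled-in commands might look like:
<pre class="line-numbers"><code class="language-bash"> sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/kafka1.hostname.com@EXAMPLE.COM'
    sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka_server.keytab kafka/kafka1.hostname.com@EXAMPLE.COM"</code></pre>
</li>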
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_kerberos_brokerconfig" href="#security_sasl_kerberos_brokerconfig"></a>
<a href="#security_sasl_kerberos_brokerconfig">Configuring Kafka Brokers
</a></h5>
<ol>
<li>Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example (note that each broker should have its own keytab):
<pre class="line-numbers"><code class="language-text"> KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_server.keytab"
principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};
// Zookeeper client authentication
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_server.keytab"
principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};</code></pre>
</li>
The <tt>KafkaServer</tt> section in the JAAS file tells the broker which principal to use and the location of the keytab where this principal is stored. It
allows the broker to log in using the keytab specified in this section. See <a href="#security_sasl_brokernotes">notes</a> for more details on ZooKeeper SASL configuration.
<li>Pass the JAAS and optionally the krb5 file locations as JVM parameters to each Kafka broker (see <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html">here</a> for more details):
<pre class="line-numbers"><code class="language-bash"> -Djava.security.krb5.conf=/etc/kafka/krb5.conf
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</code></pre>
</li>
<li>Make sure the keytabs configured in the JAAS file are readable by the operating system user who is starting the Kafka broker.</li>
<li>Configure SASL port and SASL mechanisms in server.properties as described <a href="#security_sasl_brokerconfig">here</a>. For example:
<pre class="line-numbers"><code class="language-text"> listeners=SASL_PLAINTEXT://host.name:port
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI</code></pre>
We must also configure the service name in server.properties, which should match the principal name of the Kafka brokers. In the above example, the principal is "kafka/kafka1.hostname.com@EXAMPLE.COM", so:
<pre class="line-numbers"><code class="language-text"> sasl.kerberos.service.name=kafka</code></pre></li>
</ol></li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_kerberos_clientconfig" href="#security_sasl_kerberos_clientconfig"></a>
<a href="#security_sasl_kerberos_clientconfig">Configuring Kafka Clients
</a></h5>
To configure SASL authentication on the clients:
<ol>
<li>
Clients (producers, consumers, connect workers, etc) will authenticate to the cluster with their
own principal (usually with the same name as the user running the client), so obtain or create
these principals as needed. Then configure the JAAS configuration property for each client.
Different clients within a JVM may run as different users by specifying different principals.
The property <code>sasl.jaas.config</code> in producer.properties or consumer.properties describes
how clients such as the producer and consumer connect to the Kafka broker. The following is an example
configuration for a client using a keytab (recommended for long-running processes):
<pre class="line-numbers"><code class="language-text"> sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
useKeyTab=true \
storeKey=true \
keyTab="/etc/security/keytabs/kafka_client.keytab" \
principal="kafka-client-1@EXAMPLE.COM";</code></pre>
For command-line utilities like kafka-console-consumer or kafka-console-producer, kinit can be used
along with "useTicketCache=true" as in:
<pre class="line-numbers"><code class="language-text"> sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
useTicketCache=true;</code></pre>
JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
<tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</li>
<li>Make sure the keytabs configured in the JAAS configuration are readable by the operating system user who is starting the Kafka client.</li>
<li>Optionally pass the krb5 file locations as JVM parameters to each client JVM (see <a href="https://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/KerberosReq.html">here</a> for more details):
<pre class="line-numbers"><code class="language-bash"> -Djava.security.krb5.conf=/etc/kafka/krb5.conf</code></pre></li>
<li>Configure the following properties in producer.properties or consumer.properties:
<pre class="line-numbers"><code class="language-text"> security.protocol=SASL_PLAINTEXT (or SASL_SSL)
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka</code></pre></li>
</ol>
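As a quick end-to-end check, a console client can then be started with these settings; the <code>KAFKA_OPTS</code> value and the properties file name are illustrative:
<pre class="line-numbers"><code class="language-bash"> KAFKA_OPTS="-Djava.security.krb5.conf=/etc/kafka/krb5.conf" \
      kafka-console-consumer.sh --bootstrap-server kafka1.hostname.com:9092 --topic test \
      --consumer.config client-kerberos.properties</code></pre>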
</li>
</ol>
</li>
<li><h4 class="anchor-heading">
<a class="anchor-link" id="security_sasl_plain" href="#security_sasl_plain"></a>
<a href="#security_sasl_plain">Authentication using SASL</a>/PLAIN
</h4>
<p>SASL/PLAIN is a simple username/password authentication mechanism that is typically used with TLS for encryption to implement secure authentication.
Kafka supports a default implementation for SASL/PLAIN which can be extended for production use as described <a href="#security_sasl_plain_production">here</a>.</p>
The username is used as the authenticated <code>Principal</code> for configuration of ACLs etc.
<ol>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_plain_brokerconfig" href="#security_sasl_plain_brokerconfig"></a>
<a href="#security_sasl_plain_brokerconfig">Configuring Kafka Brokers
</a></h5>
<ol>
<li>Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example:
<pre class="line-numbers"><code class="language-text"> KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};</code></pre>
This configuration defines two users (<i>admin</i> and <i>alice</i>). The properties <tt>username</tt> and <tt>password</tt>
in the <tt>KafkaServer</tt> section are used by the broker to initiate connections to other brokers. In this example,
<i>admin</i> is the user for inter-broker communication. The set of properties <tt>user_<i>userName</i></tt> defines
the passwords for all users that connect to the broker and the broker validates all client connections including
those from other brokers using these properties.</li>
<li>Pass the JAAS config file location as JVM parameter to each Kafka broker:
<pre class="line-numbers"><code class="language-bash"> -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</code></pre></li>
<li>Configure SASL port and SASL mechanisms in server.properties as described <a href="#security_sasl_brokerconfig">here</a>. For example:
<pre class="line-numbers"><code class="language-text"> listeners=SASL_SSL://host.name:port
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN</code></pre></li>
</ol>
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_plain_clientconfig" href="#security_sasl_plain_clientconfig"></a>
<a href="#security_sasl_plain_clientconfig">Configuring Kafka Clients
</a></h5>
To configure SASL authentication on the clients:
<ol>
<li>Configure the JAAS configuration property for each client in producer.properties or consumer.properties.
The login module describes how clients such as the producer and consumer connect to the Kafka broker.
The following is an example configuration for a client for the PLAIN mechanism:
<pre class="line-numbers"><code class="language-text"> sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="alice" \
password="alice-secret";</code></pre>
<p>The options <tt>username</tt> and <tt>password</tt> are used by clients to configure
the user for client connections. In this example, clients connect to the broker as user <i>alice</i>.
Different clients within a JVM may connect as different users by specifying different user names
and passwords in <code>sasl.jaas.config</code>.</p>
<p>JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
<tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</p></li>
<li>Configure the following properties in producer.properties or consumer.properties:
<pre class="line-numbers"><code class="language-text"> security.protocol=SASL_SSL
sasl.mechanism=PLAIN</code></pre></li>
</ol>
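Combining the two steps above, a complete client-side sketch for SASL/PLAIN might look like the following; the truststore settings mirror the <a href="#security_ssl">SSL section</a> and the values are illustrative:
<pre class="line-numbers"><code class="language-text"> security.protocol=SASL_SSL
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
        username="alice" \
        password="alice-secret";
    ssl.truststore.location=/var/private/ssl/client.truststore.jks
    ssl.truststore.password=test1234</code></pre>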
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_plain_production" href="#security_sasl_plain_production"></a>
<a href="#security_sasl_plain_production">Use of SASL</a>/PLAIN in production
</h5>
<ul>
<li>SASL/PLAIN should be used only with SSL as the transport layer to ensure that clear passwords are not transmitted on the wire without encryption.</li>
<li>The default implementation of SASL/PLAIN in Kafka specifies usernames and passwords in the JAAS configuration file as shown
<a href="#security_sasl_plain_brokerconfig">here</a>. From Kafka version 2.0 onwards, you can avoid storing clear passwords on disk
by configuring your own callback handlers that obtain username and password from an external source using the configuration options
<code>sasl.server.callback.handler.class</code> and <code>sasl.client.callback.handler.class</code>.</li>
<li>In production systems, external authentication servers may implement password authentication. From Kafka version 2.0 onwards,
you can plug in your own callback handlers that use external authentication servers for password verification by configuring
<code>sasl.server.callback.handler.class</code>; a configuration sketch follows this list.</li>
</ul>
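As a sketch, the broker-side handler from the last item might be wired in as follows; the class name <code>com.example.PlainServerCallbackHandler</code> is hypothetical, and note that broker-side SASL configs are prefixed with the listener name and SASL mechanism in lower-case:
<pre class="line-numbers"><code class="language-text"> listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=com.example.PlainServerCallbackHandler</code></pre>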
</li>
</ol>
</li>
<li><h4 class="anchor-heading">
<a class="anchor-link" id="security_sasl_scram" href="#security_sasl_scram"></a>
<a href="#security_sasl_scram">Authentication using SASL</a>/SCRAM
</h4>
<p>Salted Challenge Response Authentication Mechanism (SCRAM) is a family of SASL mechanisms that
addresses the security concerns with traditional mechanisms that perform username/password authentication
like PLAIN and DIGEST-MD5. The mechanism is defined in <a href="https://tools.ietf.org/html/rfc5802">RFC 5802</a>.
Kafka supports <a href="https://tools.ietf.org/html/rfc7677">SCRAM-SHA-256</a> and SCRAM-SHA-512 which
can be used with TLS to perform secure authentication. The username is used as the authenticated
<code>Principal</code> for configuration of ACLs etc. The default SCRAM implementation in Kafka
stores SCRAM credentials in Zookeeper and is suitable for use in Kafka installations where Zookeeper
is on a private network. Refer to <a href="#security_sasl_scram_security">Security Considerations</a>
for more details.</p>
<ol>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_scram_credentials" href="#security_sasl_scram_credentials"></a>
<a href="#security_sasl_scram_credentials">Creating SCRAM Credentials
</a></h5>
<p>The SCRAM implementation in Kafka uses Zookeeper as a credential store. Credentials can be created in
Zookeeper using <tt>kafka-configs.sh</tt>. For each SCRAM mechanism enabled, credentials must be created
by adding a config with the mechanism name. Credentials for inter-broker communication must be created
before Kafka brokers are started. Client credentials may be created and updated dynamically and updated
credentials will be used to authenticate new connections.</p>
<p>Create SCRAM credentials for user <i>alice</i> with password <i>alice-secret</i>:
<pre class="line-numbers"><code class="language-bash"> > bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name alice</code></pre>
<p>The default iteration count of 4096 is used if iterations are not specified. A random salt is created
and the SCRAM identity consisting of salt, iterations, StoredKey and ServerKey is stored in Zookeeper.
See <a href="https://tools.ietf.org/html/rfc5802">RFC 5802</a> for details on SCRAM identity and the individual fields.
<p>The following examples also require a user <i>admin</i> for inter-broker communication which can be created using:
<pre class="line-numbers"><code class="language-bash"> > bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin</code></pre>
<p>Existing credentials may be listed using the <i>--describe</i> option:
<pre class="line-numbers"><code class="language-bash"> > bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type users --entity-name alice</code></pre>
<p>Credentials may be deleted for one or more SCRAM mechanisms using the <i>--delete</i> option:
<pre class="line-numbers"><code class="language-bash"> > bin/kafka-configs.sh --zookeeper localhost:2181 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name alice</code></pre>
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_scram_brokerconfig" href="#security_sasl_scram_brokerconfig"></a>
<a href="#security_sasl_scram_brokerconfig">Configuring Kafka Brokers
</a></h5>
<ol>
<li>Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example:
<pre class="line-numbers"><code class="language-text"> KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-secret";
};</code></pre>
The properties <tt>username</tt> and <tt>password</tt> in the <tt>KafkaServer</tt> section are used by
the broker to initiate connections to other brokers. In this example, <i>admin</i> is the user for
inter-broker communication.</li>
<li>Pass the JAAS config file location as JVM parameter to each Kafka broker:
<pre class="line-numbers"><code class="language-bash"> -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</code></pre></li>
<li>Configure SASL port and SASL mechanisms in server.properties as described <a href="#security_sasl_brokerconfig">here</a>. For example:
<pre class="line-numbers"><code class="language-text"> listeners=SASL_SSL://host.name:port
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256 (or SCRAM-SHA-512)
sasl.enabled.mechanisms=SCRAM-SHA-256 (or SCRAM-SHA-512)</code></pre></li>
</ol>
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_scram_clientconfig" href="#security_sasl_scram_clientconfig"></a>
<a href="#security_sasl_scram_clientconfig">Configuring Kafka Clients
</a></h5>
To configure SASL authentication on the clients:
<ol>
<li>Configure the JAAS configuration property for each client in producer.properties or consumer.properties.
The login module describes how clients such as the producer and consumer connect to the Kafka broker.
The following is an example configuration for a client for the SCRAM mechanisms:
<pre class="line-numbers"><code class="language-text"> sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="alice" \
password="alice-secret";</code></pre>
<p>The options <tt>username</tt> and <tt>password</tt> are used by clients to configure
the user for client connections. In this example, clients connect to the broker as user <i>alice</i>.
Different clients within a JVM may connect as different users by specifying different user names
and passwords in <code>sasl.jaas.config</code>.</p>
<p>JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
<tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</p></li>
<li>Configure the following properties in producer.properties or consumer.properties:
<pre class="line-numbers"><code class="language-text"> security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256 (or SCRAM-SHA-512)</code></pre></li>
</ol>
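As a quick check that the SCRAM setup works end to end, the console producer can be run with these client settings; the properties file name <code>client-scram.properties</code> is illustrative:
<pre class="line-numbers"><code class="language-bash"> kafka-console-producer.sh --bootstrap-server localhost:9093 --topic test --producer.config client-scram.properties</code></pre>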
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_scram_security" href="#security_sasl_scram_security"></a>
<a href="#security_sasl_scram_security">Security Considerations for SASL</a>/SCRAM
</h5>
<ul>
<li>The default implementation of SASL/SCRAM in Kafka stores SCRAM credentials in Zookeeper. This
is suitable for production use in installations where Zookeeper is secure and on a private network.</li>
<li>Kafka supports only the strong hash functions SHA-256 and SHA-512 with a minimum iteration count
of 4096. Strong hash functions combined with strong passwords and high iteration counts protect
against brute force attacks if Zookeeper security is compromised.</li>
<li>SCRAM should be used only with TLS encryption to prevent interception of SCRAM exchanges. This
protects against dictionary or brute force attacks and against impersonation if Zookeeper is compromised.</li>
<li>From Kafka version 2.0 onwards, the default SASL/SCRAM credential store may be overridden using custom callback handlers
by configuring <code>sasl.server.callback.handler.class</code> in installations where Zookeeper is not secure.</li>
<li>For more details on security considerations, refer to
<a href="https://tools.ietf.org/html/rfc5802#section-9">RFC 5802</a>.</li>
</ul>
</li>
</ol>
</li>
<li><h4 class="anchor-heading">
<a class="anchor-link" id="security_sasl_oauthbearer" href="#security_sasl_oauthbearer"></a>
<a href="#security_sasl_oauthbearer">Authentication using SASL</a>/OAUTHBEARER
</h4>
<p>The <a href="https://tools.ietf.org/html/rfc6749">OAuth 2 Authorization Framework</a> "enables a third-party application to obtain limited access to an HTTP service,
either on behalf of a resource owner by orchestrating an approval interaction between the resource owner and the HTTP
service, or by allowing the third-party application to obtain access on its own behalf." The SASL OAUTHBEARER mechanism
enables the use of the framework in a SASL (i.e. a non-HTTP) context; it is defined in <a href="https://tools.ietf.org/html/rfc7628">RFC 7628</a>.
The default OAUTHBEARER implementation in Kafka creates and validates <a href="https://tools.ietf.org/html/rfc7515#appendix-A.5">Unsecured JSON Web Tokens</a>
and is only suitable for use in non-production Kafka installations. Refer to <a href="#security_sasl_oauthbearer_security">Security Considerations</a>
for more details.</p>
<ol>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_oauthbearer_brokerconfig" href="#security_sasl_oauthbearer_brokerconfig"></a>
<a href="#security_sasl_oauthbearer_brokerconfig">Configuring Kafka Brokers
</a></h5>
<ol>
<li>Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory, let's call it kafka_server_jaas.conf for this example:
<pre class="line-numbers"><code class="language-text"> KafkaServer {
org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
unsecuredLoginStringClaim_sub="admin";
};</code></pre>
The property <tt>unsecuredLoginStringClaim_sub</tt> in the <tt>KafkaServer</tt> section is used by
the broker when it initiates connections to other brokers. In this example, <i>admin</i> will appear in the
subject (<tt>sub</tt>) claim and will be the user for inter-broker communication.</li>
<li>Pass the JAAS config file location as JVM parameter to each Kafka broker:
<pre class="line-numbers"><code class="language-bash"> -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf</code></pre></li>
<li>Configure SASL port and SASL mechanisms in server.properties as described <a href="#security_sasl_brokerconfig">here</a>. For example:
<pre class="line-numbers"><code class="language-text"> listeners=SASL_SSL://host.name:port (or SASL_PLAINTEXT if non-production)
security.inter.broker.protocol=SASL_SSL (or SASL_PLAINTEXT if non-production)
sasl.mechanism.inter.broker.protocol=OAUTHBEARER
sasl.enabled.mechanisms=OAUTHBEARER</code></pre></li>
</ol>
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_oauthbearer_clientconfig" href="#security_sasl_oauthbearer_clientconfig"></a>
<a href="#security_sasl_oauthbearer_clientconfig">Configuring Kafka Clients
</a></h5>
To configure SASL authentication on the clients:
<ol>
<li>Configure the JAAS configuration property for each client in producer.properties or consumer.properties.
The login module describes how clients such as the producer and consumer connect to the Kafka broker.
The following is an example configuration for a client for the OAUTHBEARER mechanism:
<pre class="line-numbers"><code class="language-text"> sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
unsecuredLoginStringClaim_sub="alice";</code></pre>
<p>The option <tt>unsecuredLoginStringClaim_sub</tt> is used by clients to configure
the subject (<tt>sub</tt>) claim, which determines the user for client connections.
In this example, clients connect to the broker as user <i>alice</i>.
Different clients within a JVM may connect as different users by specifying different subject (<tt>sub</tt>)
claims in <code>sasl.jaas.config</code>.</p>
<p>JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
<tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</p></li>
<li>Configure the following properties in producer.properties or consumer.properties:
<pre class="line-numbers"><code class="language-text"> security.protocol=SASL_SSL (or SASL_PLAINTEXT if non-production)
sasl.mechanism=OAUTHBEARER</code></pre></li>
<li>The default implementation of SASL/OAUTHBEARER depends on the jackson-databind library.
Since it's an optional dependency, users have to configure it as a dependency via their build tool.</li>
</ol>
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_oauthbearer_unsecured_retrieval" href="#security_sasl_oauthbearer_unsecured_retrieval"></a>
<a href="#security_sasl_oauthbearer_unsecured_retrieval">Unsecured Token Creation Options for SASL</a>/OAUTHBEARER
</h5>
<ul>
<li>The default implementation of SASL/OAUTHBEARER in Kafka creates and validates <a href="https://tools.ietf.org/html/rfc7515#appendix-A.5">Unsecured JSON Web Tokens</a>.
While suitable only for non-production use, it does provide the flexibility to create arbitrary tokens in a DEV or TEST environment.</li>
<li>Here are the various supported JAAS module options on the client side (and on the broker side if OAUTHBEARER is the inter-broker protocol):
<table class="data-table">
<tr>
<th>JAAS Module Option for Unsecured Token Creation</th>
<th>Documentation</th>
</tr>
<tr>
<td><tt>unsecuredLoginStringClaim_&lt;claimname&gt;="value"</tt></td>
<td>Creates a <tt>String</tt> claim with the given name and value. Any valid
claim name can be specified except '<tt>iat</tt>' and '<tt>exp</tt>' (these are
automatically generated).</td>
</tr>
<tr>
<td><tt>unsecuredLoginNumberClaim_&lt;claimname&gt;="value"</tt></td>
<td>Creates a <tt>Number</tt> claim with the given name and value. Any valid
claim name can be specified except '<tt>iat</tt>' and '<tt>exp</tt>' (these are
automatically generated).</td>
</tr>
<tr>
<td><tt>unsecuredLoginListClaim_&lt;claimname&gt;="value"</tt></td>
<td>Creates a <tt>String List</tt> claim with the given name and values parsed
from the given value where the first character is taken as the delimiter. For
example: <tt>unsecuredLoginListClaim_fubar="|value1|value2"</tt>. Any valid
claim name can be specified except '<tt>iat</tt>' and '<tt>exp</tt>' (these are
automatically generated).</td>
</tr>
<tr>
<td><tt>unsecuredLoginExtension_&lt;extensionname&gt;="value"</tt></td>
<td>Creates a <tt>String</tt> extension with the given name and value.
For example: <tt>unsecuredLoginExtension_traceId="123"</tt>. A valid extension name
is any sequence of lowercase or uppercase alphabet characters. In addition, the "auth" extension name is reserved.
A valid extension value is any combination of characters with ASCII codes 1-127.</td>
</tr>
<tr>
<td><tt>unsecuredLoginPrincipalClaimName</tt></td>
<td>Set to a custom claim name if you wish the name of the <tt>String</tt>
claim holding the principal name to be something other than '<tt>sub</tt>'.</td>
</tr>
<tr>
<td><tt>unsecuredLoginLifetimeSeconds</tt></td>
<td>Set to an integer value if the token expiration is to be set to something
other than the default value of 3600 seconds (which is 1 hour). The
'<tt>exp</tt>' claim will be set to reflect the expiration time.</td>
</tr>
<tr>
<td><tt>unsecuredLoginScopeClaimName</tt></td>
<td>Set to a custom claim name if you wish the name of the <tt>String</tt> or
<tt>String List</tt> claim holding any token scope to be something other than
'<tt>scope</tt>'.</td>
</tr>
</table>
</li>
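<li>As a sketch, several of these options can be combined in a single client <code>sasl.jaas.config</code>; the claim values below are illustrative:
<pre class="line-numbers"><code class="language-text"> sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
        unsecuredLoginStringClaim_sub="alice" \
        unsecuredLoginListClaim_scope="|read|write" \
        unsecuredLoginLifetimeSeconds="7200";</code></pre></li>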
</ul>
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_oauthbearer_unsecured_validation" href="#security_sasl_oauthbearer_unsecured_validation"></a>
<a href="#security_sasl_oauthbearer_unsecured_validation">Unsecured Token Validation Options for SASL</a>/OAUTHBEARER
</h5>
<ul>
<li>Here are the various supported JAAS module options on the broker side for <a href="https://tools.ietf.org/html/rfc7515#appendix-A.5">Unsecured JSON Web Token</a> validation:
<table class="data-table">
<tr>
<th>JAAS Module Option for Unsecured Token Validation</th>
<th>Documentation</th>
</tr>
<tr>
<td><tt>unsecuredValidatorPrincipalClaimName="value"</tt></td>
<td>Set to a non-empty value if you wish a particular <tt>String</tt> claim
holding a principal name to be checked for existence; the default is to check
for the existence of the '<tt>sub</tt>' claim.</td>
</tr>
<tr>
<td><tt>unsecuredValidatorScopeClaimName="value"</tt></td>
<td>Set to a custom claim name if you wish the name of the <tt>String</tt> or
<tt>String List</tt> claim holding any token scope to be something other than
'<tt>scope</tt>'.</td>
</tr>
<tr>
<td><tt>unsecuredValidatorRequiredScope="value"</tt></td>
<td>Set to a space-delimited list of scope values if you wish the
<tt>String/String List</tt> claim holding the token scope to be checked to
make sure it contains certain values.</td>
</tr>
<tr>
<td><tt>unsecuredValidatorAllowableClockSkewMs="value"</tt></td>
<td>Set to a positive integer value if you wish to allow up to some number of
positive milliseconds of clock skew (the default is 0).</td>
</tr>
</table>
</li>
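<li>As a sketch, a broker's static JAAS configuration might combine token creation (for inter-broker login) with validation options as follows; the scope and clock skew values are illustrative:
<pre class="line-numbers"><code class="language-text"> KafkaServer {
        org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
        unsecuredLoginStringClaim_sub="admin"
        unsecuredLoginListClaim_scope="|read|write"
        unsecuredValidatorRequiredScope="read write"
        unsecuredValidatorAllowableClockSkewMs="2000";
    };</code></pre></li>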
<li>The default unsecured SASL/OAUTHBEARER implementation may be overridden (and must be overridden in production environments)
using custom login and SASL Server callback handlers.</li>
<li>For more details on security considerations, refer to <a href="https://tools.ietf.org/html/rfc6749#section-10">RFC 6749, Section 10</a>.</li>
</ul>
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_oauthbearer_refresh" href="#security_sasl_oauthbearer_refresh"></a>
<a href="#security_sasl_oauthbearer_refresh">Token Refresh for SASL</a>/OAUTHBEARER
</h5>
Kafka periodically refreshes any token before it expires so that the client can continue to make
connections to brokers. The parameters that impact how the refresh algorithm
operates are specified as part of the producer/consumer/broker configuration
and are as follows. See the documentation for these properties elsewhere for
details. The default values are usually reasonable, in which case these
configuration parameters would not need to be explicitly set.
<table class="data-table">
<tr>
<th>Producer/Consumer/Broker Configuration Property</th>
</tr>
<tr>
<td><tt>sasl.login.refresh.window.factor</tt></td>
</tr>
<tr>
<td><tt>sasl.login.refresh.window.jitter</tt></td>
</tr>
<tr>
<td><tt>sasl.login.refresh.min.period.seconds</tt></td>
</tr>
<tr>
<td><tt>sasl.login.refresh.buffer.seconds</tt></td>
</tr>
</table>
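For reference, a sketch of these properties set to what are believed to be their default values; consult the broker and client configuration reference for the authoritative defaults:
<pre class="line-numbers"><code class="language-text"> sasl.login.refresh.window.factor=0.8
    sasl.login.refresh.window.jitter=0.05
    sasl.login.refresh.min.period.seconds=60
    sasl.login.refresh.buffer.seconds=300</code></pre>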
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_oauthbearer_prod" href="#security_sasl_oauthbearer_prod"></a>
<a href="#security_sasl_oauthbearer_prod">Secure</a>/Production Use of SASL/OAUTHBEARER
</h5>
Production use cases will require writing an implementation of
<tt>org.apache.kafka.common.security.auth.AuthenticateCallbackHandler</tt> that can handle an instance of
<tt>org.apache.kafka.common.security.oauthbearer.OAuthBearerTokenCallback</tt> and declaring it via either the
<tt>sasl.login.callback.handler.class</tt> configuration option for a
non-broker client or the
<tt>listener.name.sasl_ssl.oauthbearer.sasl.login.callback.handler.class</tt>
configuration option for brokers (when SASL/OAUTHBEARER is the inter-broker
protocol).
<p>
Production use cases will also require writing an implementation of
<tt>org.apache.kafka.common.security.auth.AuthenticateCallbackHandler</tt> that can handle an instance of
<tt>org.apache.kafka.common.security.oauthbearer.OAuthBearerValidatorCallback</tt> and declaring it via the
<tt>listener.name.sasl_ssl.oauthbearer.sasl.server.callback.handler.class</tt>
broker configuration option.
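<p>
For illustration, here is a minimal sketch of such a login callback handler. The class name and the
<tt>fetchTokenFromAuthServer</tt> helper are hypothetical; a real implementation would retrieve a
token from your OAuth 2 authorization server rather than fabricate one locally.
</p>
<pre class="line-numbers"><code class="language-java">import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.UnsupportedCallbackException;
import javax.security.auth.login.AppConfigurationEntry;
import org.apache.kafka.common.security.auth.AuthenticateCallbackHandler;
import org.apache.kafka.common.security.oauthbearer.OAuthBearerToken;
import org.apache.kafka.common.security.oauthbearer.OAuthBearerTokenCallback;

public class MyOAuthBearerLoginCallbackHandler implements AuthenticateCallbackHandler {

    @Override
    public void configure(Map&lt;String, ?&gt; configs, String saslMechanism,
                          List&lt;AppConfigurationEntry&gt; jaasConfigEntries) {
        // Read any JAAS module options needed to reach the token endpoint.
    }

    @Override
    public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
        for (Callback callback : callbacks) {
            if (callback instanceof OAuthBearerTokenCallback)
                // Attach a freshly retrieved token to the callback.
                ((OAuthBearerTokenCallback) callback).token(fetchTokenFromAuthServer());
            else
                throw new UnsupportedCallbackException(callback);
        }
    }

    @Override
    public void close() {}

    // Hypothetical helper: contact the authorization server and wrap the result
    // in the OAuthBearerToken interface that Kafka expects.
    private OAuthBearerToken fetchTokenFromAuthServer() {
        final String compactSerialization = "...token value retrieved from the server...";
        final long nowMs = System.currentTimeMillis();
        return new OAuthBearerToken() {
            public String value() { return compactSerialization; }
            public Set&lt;String&gt; scope() { return Collections.emptySet(); }
            public long lifetimeMs() { return nowMs + 3_600_000L; } // expiry, in ms since the epoch
            public String principalName() { return "kafka-client"; }
            public Long startTimeMs() { return nowMs; }
        };
    }
}</code></pre>
The handler is then declared, using its fully-qualified name, via the configuration options named
above, e.g. <tt>sasl.login.callback.handler.class</tt> on a non-broker client.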
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_oauthbearer_security" href="#security_sasl_oauthbearer_security"></a>
<a href="#security_sasl_oauthbearer_security">Security Considerations for SASL</a>/OAUTHBEARER
</h5>
<ul>
<li>The default implementation of SASL/OAUTHBEARER in Kafka creates and validates <a href="https://tools.ietf.org/html/rfc7515#appendix-A.5">Unsecured JSON Web Tokens</a>.
This is suitable only for non-production use.</li>
<li>OAUTHBEARER should be used in production environments only with TLS encryption to prevent interception of tokens.</li>
<li>The default unsecured SASL/OAUTHBEARER implementation may be overridden (and must be overridden in production environments)
using custom login and SASL Server callback handlers as described above.</li>
<li>For more details on OAuth 2 security considerations in general, refer to <a href="https://tools.ietf.org/html/rfc6749#section-10">RFC 6749, Section 10</a>.</li>
</ul>
</li>
</ol>
</li>
<li><h4 class="anchor-heading">
<a class="anchor-link" id="security_sasl_multimechanism" href="#security_sasl_multimechanism"></a>
<a href="#security_sasl_multimechanism">Enabling multiple SASL mechanisms in a broker
</a></h4>
<ol>
<li>Specify configuration for the login modules of all enabled mechanisms in the <tt>KafkaServer</tt> section of the JAAS config file. For example:
<pre>
KafkaServer {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/security/keytabs/kafka_server.keytab"
principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin-secret"
user_admin="admin-secret"
user_alice="alice-secret";
};</pre></li>
<li>Enable the SASL mechanisms in server.properties: <pre> sasl.enabled.mechanisms=GSSAPI,PLAIN,SCRAM-SHA-256,SCRAM-SHA-512,OAUTHBEARER</pre></li>
<li>Specify the SASL security protocol and mechanism for inter-broker communication in server.properties if required:
<pre>
security.inter.broker.protocol=SASL_PLAINTEXT (or SASL_SSL)
sasl.mechanism.inter.broker.protocol=GSSAPI (or one of the other enabled mechanisms)</pre></li>
<li>Follow the mechanism-specific steps in <a href="#security_sasl_kerberos_brokerconfig">GSSAPI (Kerberos)</a>,
<a href="#security_sasl_plain_brokerconfig">PLAIN</a>,
<a href="#security_sasl_scram_brokerconfig">SCRAM</a> and <a href="#security_sasl_oauthbearer_brokerconfig">OAUTHBEARER</a>
to configure SASL for the enabled mechanisms.</li>
</ol>
</li>
<li><h4 class="anchor-heading">
<a class="anchor-link" id="saslmechanism_rolling_upgrade" href="#saslmechanism_rolling_upgrade"></a>
<a href="#saslmechanism_rolling_upgrade">Modifying SASL mechanism in a Running Cluster
</a></h4>
<p>The SASL mechanism can be modified in a running cluster using the following sequence:</p>
<ol>
<li>Enable the new SASL mechanism by adding it to <tt>sasl.enabled.mechanisms</tt> in server.properties for each broker. Update the JAAS config file to include both
mechanisms as described <a href="#security_sasl_multimechanism">here</a>. Incrementally bounce the cluster nodes.</li>
<li>Restart clients using the new mechanism.</li>
<li>To change the mechanism of inter-broker communication (if this is required), set <tt>sasl.mechanism.inter.broker.protocol</tt> in server.properties to the new mechanism and
incrementally bounce the cluster again (the example after this list shows the resulting configuration).</li>
<li>To remove the old mechanism (if this is required), remove it from <tt>sasl.enabled.mechanisms</tt> in server.properties and remove the entries for the
old mechanism from the JAAS config file. Incrementally bounce the cluster again.</li>
</ol>
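As an illustration only, a broker being migrated from GSSAPI to PLAIN for inter-broker communication
would, after step 3 above, carry settings along these lines in server.properties (GSSAPI is then
removed from <tt>sasl.enabled.mechanisms</tt> in step 4):
<pre>
sasl.enabled.mechanisms=GSSAPI,PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN</pre>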
</li>
<li><h4 class="anchor-heading">
<a class="anchor-link" id="security_delegation_token" href="#security_delegation_token"></a>
<a href="#security_delegation_token">Authentication using Delegation Tokens
</a></h4>
<p>Delegation token based authentication is a lightweight authentication mechanism to complement existing SASL/SSL
methods. Delegation tokens are shared secrets between Kafka brokers and clients. Delegation tokens help processing
frameworks distribute the workload to available workers in a secure environment without the added cost of distributing
Kerberos TGTs/keytabs or keystores when 2-way SSL is used. See <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka">KIP-48</a>
for more details.</p>
<p>Typical steps for delegation token usage are:</p>
<ol>
<li>The user authenticates with the Kafka cluster via SASL or SSL and obtains a delegation token. This can be done
using the Admin APIs or the <tt>kafka-delegation-tokens.sh</tt> script.</li>
<li>The user securely passes the delegation token to Kafka clients for authenticating with the Kafka cluster.</li>
<li>The token owner/renewer can renew or expire the delegation tokens.</li>
</ol>
<ol>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_token_management" href="#security_token_management"></a>
<a href="#security_token_management">Token Management
</a></h5>
<p> A master key/secret is used to generate and verify delegation tokens. It is supplied using the config
option <tt>delegation.token.master.key</tt>. The same secret key must be configured across all the brokers.
If the secret is not set, or is set to an empty string, brokers will disable delegation token authentication.</p>
<p>In the current implementation, token details are stored in ZooKeeper, which is suitable for Kafka installations where
ZooKeeper is on a private network. Also, the master key/secret is currently stored as plain text in the server.properties
config file. We intend to make these configurable in a future Kafka release.</p>
<p>A token has a current life and a maximum renewable life. By default, tokens must be renewed once every 24 hours
for up to 7 days. These can be configured using the <tt>delegation.token.expiry.time.ms</tt>
and <tt>delegation.token.max.lifetime.ms</tt> config options, as in the example below.</p>
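<p>For example, a broker configuration that enables delegation tokens and makes the default lifetimes
explicit might look like the following (the secret value is a placeholder; use your own long random string):</p>
<pre>
# must be the same on every broker
delegation.token.master.key=a-long-random-secret
# tokens must be renewed every 24 hours (the default)...
delegation.token.expiry.time.ms=86400000
# ...for a maximum life of 7 days (the default)
delegation.token.max.lifetime.ms=604800000</pre>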
<p>Tokens can also be cancelled explicitly. If a token is not renewed by the token's expiration time, or if the token
is beyond its maximum lifetime, it will be deleted from all broker caches as well as from ZooKeeper.</p>
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_sasl_create_tokens" href="#security_sasl_create_tokens"></a>
<a href="#security_sasl_create_tokens">Creating Delegation Tokens
</a></h5>
<p>Tokens can be created by using the Admin APIs or the <tt>kafka-delegation-tokens.sh</tt> script.
Delegation token requests (create/renew/expire/describe) should be issued only on SASL or SSL authenticated channels.
Tokens cannot be requested if the initial authentication is done through a delegation token.
<tt>kafka-delegation-tokens.sh</tt> script examples are given below.</p>
<p>Create a delegation token:
<pre class="line-numbers"><code class="language-bash"> > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --create --max-life-time-period -1 --command-config client.properties --renewer-principal User:user1</code></pre>
<p>Renew a delegation token:
<pre class="line-numbers"><code class="language-bash"> > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --renew --renew-time-period -1 --command-config client.properties --hmac ABCDEFGHIJK</code></pre>
<p>Expire a delegation token:
<pre class="line-numbers"><code class="language-bash"> > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --expire --expiry-time-period -1 --command-config client.properties --hmac ABCDEFGHIJK</code></pre>
<p>Existing tokens can be described using the --describe option:
<pre class="line-numbers"><code class="language-bash"> > bin/kafka-delegation-tokens.sh --bootstrap-server localhost:9092 --describe --command-config client.properties --owner-principal User:user1</code></pre>
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_token_authentication" href="#security_token_authentication"></a>
<a href="#security_token_authentication">Token Authentication
</a></h5>
<p>Delegation token authentication piggybacks on the current SASL/SCRAM authentication mechanism. We must enable
a SASL/SCRAM mechanism on the Kafka cluster as described <a href="#security_sasl_scram">here</a>.</p>
<p>Configuring Kafka Clients:</p>
<ol>
<li>Configure the JAAS configuration property for each client in producer.properties or consumer.properties.
The login module describes how clients such as the producer and consumer connect to the Kafka broker.
The following is an example client configuration for token authentication:
<pre class="line-numbers"><code class="language-text"> sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="tokenID123" \
password="lAYYSFmLs4bTjf+lTZ1LCHR/ZZFNA==" \
tokenauth="true";</code></pre>
<p>The options <tt>username</tt> and <tt>password</tt> are used by clients to configure the token ID and
token HMAC, and the option <tt>tokenauth</tt> indicates to the server that this is token authentication.
In this example, clients connect to the broker using token ID <i>tokenID123</i>. Different clients within a
JVM may connect using different tokens by specifying different token details in <code>sasl.jaas.config</code>.</p>
<p>JAAS configuration for clients may alternatively be specified as a JVM parameter similar to brokers
as described <a href="#security_client_staticjaas">here</a>. Clients use the login section named
<tt>KafkaClient</tt>. This option allows only one user for all client connections from a JVM.</p></li>
</ol>
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_token_secret_rotation" href="#security_token_secret_rotation"></a>
<a href="#security_token_secret_rotation">Procedure to manually rotate the secret</a>:
</h5>
<p>Rotating the secret requires a re-deployment. During this process, already-connected clients
will continue to work, but any new connection requests and renew/expire requests with old tokens can fail. The steps are given below.</p>
<ol>
<li>Expire all existing tokens.</li>
<li>Rotate the secret via a rolling upgrade.</li>
<li>Generate new tokens.</li>
</ol>
<p>We intend to automate this in a future Kafka release.</p>
</li>
<li><h5 class="anchor-heading">
<a class="anchor-link" id="security_token_notes" href="#security_token_notes"></a>
<a href="#security_token_notes">Notes on Delegation Tokens
</a></h5>
<ul>
<li>Currently, a user can create delegation tokens only for themselves. Owners/renewers can renew or expire tokens, and
owners/renewers can always describe their own tokens. To describe other users' tokens, DESCRIBE permission on the Token resource must be added.</li>
</ul>
</li>
</ol>
</li>
</ol>
<h3 class="anchor-heading">
<a class="anchor-link" id="security_authz" href="#security_authz"></a>
<a href="#security_authz">7.4 Authorization and ACLs
</a></h3>
Kafka ships with a pluggable Authorizer and an out-of-the-box authorizer implementation that uses ZooKeeper to store all the acls. The Authorizer is configured by setting <tt>authorizer.class.name</tt> in server.properties. To enable the out-of-the-box implementation, use:
<pre>authorizer.class.name=kafka.security.authorizer.AclAuthorizer</pre>
Kafka acls are defined in the general format of "Principal P is [Allowed/Denied] Operation O From Host H on any Resource R matching ResourcePattern RP". You can read more about the acl structure in KIP-11 and resource patterns in KIP-290. In order to add, remove or list acls you can use the Kafka authorizer CLI. By default, if no ResourcePatterns match a specific Resource R, then R has no associated acls, and therefore no one other than super users is allowed to access R. If you want to change that behavior, you can include the following in server.properties.
<pre>allow.everyone.if.no.acl.found=true</pre>
One can also add super users in server.properties like the following (note that the delimiter is a semicolon since SSL user names may contain commas). The default PrincipalType string "User" is case-sensitive.
<pre>super.users=User:Bob;User:Alice</pre>
<h5 class="anchor-heading">
<a class="anchor-link" id="security_authz_ssl" href="#security_authz_ssl"></a>
<a href="#security_authz_ssl">Customizing SSL User Name
</a></h5>
By default, the SSL user name will be of the form "CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown". One can change that by setting <code>ssl.principal.mapping.rules</code> to a customized rule in server.properties.
This config allows a list of rules for mapping X.500 distinguished name to short name. The rules are evaluated in order and the first rule that matches a distinguished name is used to map it to a short name. Any later rules in the list are ignored.
<br>The format of <code>ssl.principal.mapping.rules</code> is a list where each rule starts with "RULE:" and contains an expression in one of the following formats. The default rule returns the
string representation of the X.500 certificate distinguished name. If the distinguished name matches the pattern, then the replacement command is run over the name.
This also supports lowercase/uppercase options, to force the translated result to be all lowercase or all uppercase. This is done by adding a "/L" or "/U" to the end of the rule.
<pre>
RULE:pattern/replacement/
RULE:pattern/replacement/[LU]</pre>
Example <code>ssl.principal.mapping.rules</code> values are:
<pre>
RULE:^CN=(.*?),OU=ServiceUsers.*$/$1/,
RULE:^CN=(.*?),OU=(.*?),O=(.*?),L=(.*?),ST=(.*?),C=(.*?)$/$1@$2/L,
RULE:^.*[Cc][Nn]=([a-zA-Z0-9.]*).*$/$1/L,
DEFAULT</pre>
The above rules translate the distinguished name "CN=serviceuser,OU=ServiceUsers,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" to "serviceuser"
and "CN=adminUser,OU=Admin,O=Unknown,L=Unknown,ST=Unknown,C=Unknown" to "adminuser@admin".
<br>For advanced use cases, one can customize the name by setting a custom PrincipalBuilder in server.properties, like the following.
<pre>principal.builder.class=CustomizedPrincipalBuilderClass</pre>
<h5 class="anchor-heading">
<a class="anchor-link" id="security_authz_sasl" href="#security_authz_sasl"></a>
<a href="#security_authz_sasl">Customizing SASL User Name
</a></h5>
By default, the SASL user name will be the primary part of the Kerberos principal. One can change that by setting <code>sasl.kerberos.principal.to.local.rules</code> to a customized rule in server.properties.
The format of <code>sasl.kerberos.principal.to.local.rules</code> is a list where each rule works in the same way as auth_to_local in the <a href="http://web.mit.edu/Kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html">Kerberos configuration file (krb5.conf)</a>. This also supports an additional lowercase/uppercase option, to force the translated result to be all lowercase or all uppercase. This is done by adding a "/L" or "/U" to the end of the rule.
Each rule starts with RULE: and contains an expression in one of the following formats. See the Kerberos documentation for more details.
<pre>
RULE:[n:string](regexp)s/pattern/replacement/
RULE:[n:string](regexp)s/pattern/replacement/g
RULE:[n:string](regexp)s/pattern/replacement//L
RULE:[n:string](regexp)s/pattern/replacement/g/L
RULE:[n:string](regexp)s/pattern/replacement//U
RULE:[n:string](regexp)s/pattern/replacement/g/U</pre>
An example of adding a rule to properly translate user@MYDOMAIN.COM to user while also keeping the default rule in place is:
<pre>sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@MYDOMAIN.COM)s/@.*//,DEFAULT</pre>
<h4 class="anchor-heading">
<a class="anchor-link" id="security_authz_cli" href="#security_authz_cli"></a>
<a href="#security_authz_cli">Command Line Interface
</a></h4>
The Kafka authorization management CLI can be found under the bin directory with all the other CLIs. The CLI script is called <b>kafka-acls.sh</b>. The following lists all the options that the script supports:
<p></p>
<table class="data-table">
<tr>
<th>Option</th>
<th>Description</th>
<th>Default</th>
<th>Option type</th>
</tr>
<tr>
<td>--add</td>
<td>Indicates to the script that the user is trying to add an acl.</td>
<td></td>
<td>Action</td>
</tr>
<tr>
<td>--remove</td>
<td>Indicates to the script that the user is trying to remove an acl.</td>
<td></td>
<td>Action</td>
</tr>
<tr>
<td>--list</td>
<td>Indicates to the script that the user is trying to list acls.</td>
<td></td>
<td>Action</td>
</tr>
<tr>
<td>--authorizer</td>
<td>Fully qualified class name of the authorizer.</td>
<td>kafka.security.authorizer.AclAuthorizer</td>
<td>Configuration</td>
</tr>
<tr>
<td>--authorizer-properties</td>
<td>key=val pairs that will be passed to the authorizer for initialization. For the default authorizer, an example value is: zookeeper.connect=localhost:2181</td>
<td></td>
<td>Configuration</td>
</tr>
<tr>
<td>--bootstrap-server</td>
<td>A list of host/port pairs to use for establishing the connection to the Kafka cluster. Only one of the --bootstrap-server or --authorizer options may be specified.</td>
<td></td>
<td>Configuration</td>
</tr>
<tr>
<td>--command-config</td>
<td>A property file containing configs to be passed to the Admin Client. This option can only be used with the --bootstrap-server option.</td>
<td></td>
<td>Configuration</td>
</tr>
<tr>
<td>--cluster</td>
<td>Indicates to the script that the user is trying to interact with acls on the singular cluster resource.</td>
<td></td>
<td>ResourcePattern</td>
</tr>
<tr>
<td>--topic [topic-name]</td>
<td>Indicates to the script that the user is trying to interact with acls on topic resource pattern(s).</td>
<td></td>
<td>ResourcePattern</td>
</tr>
<tr>
<td>--group [group-name]</td>
<td>Indicates to the script that the user is trying to interact with acls on consumer-group resource pattern(s)</td>
<td></td>
<td>ResourcePattern</td>
</tr>
<tr>
<td>--transactional-id [transactional-id]</td>
<td>The transactionalId to which ACLs should be added or removed. A value of * indicates the ACLs should apply to all transactionalIds.</td>
<td></td>
<td>ResourcePattern</td>
</tr>
<tr>
<td>--delegation-token [delegation-token]</td>
<td>Delegation token to which ACLs should be added or removed. A value of * indicates ACL should apply to all tokens.</td>
<td></td>
<td>ResourcePattern</td>
</tr>
<tr>
<td>--resource-pattern-type [pattern-type]</td>
<td>Indicates to the script the type of resource pattern (for --add) or resource pattern filter (for --list and --remove) the user wishes to use.<br>
When adding acls, this should be a specific pattern type, e.g. 'literal' or 'prefixed'.<br>
When listing or removing acls, a specific pattern type filter can be used to list or remove acls from a specific type of resource pattern,
or the filter values of 'any' or 'match' can be used, where 'any' will match any pattern type, but will match the resource name exactly,
and 'match' will perform pattern matching to list or remove all acls that affect the supplied resource(s).<br>
WARNING: 'match', when used in combination with the '--remove' switch, should be used with care.
</td>
<td>literal</td>
<td>Configuration</td>
</tr>
<tr>
<td>--allow-principal</td>
<td>The principal, in PrincipalType:name format, that will be added to an ACL with Allow permission. The default PrincipalType string "User" is case-sensitive. <br>You can specify multiple --allow-principal options in a single command.</td>
<td></td>
<td>Principal</td>
</tr>
<tr>
<td>--deny-principal</td>
<td>The principal, in PrincipalType:name format, that will be added to an ACL with Deny permission. The default PrincipalType string "User" is case-sensitive. <br>You can specify multiple --deny-principal options in a single command.</td>
<td></td>
<td>Principal</td>
</tr>
<tr>
<td>--principal</td>
<td>The principal, in PrincipalType:name format, that will be used along with the --list option. The default PrincipalType string "User" is case-sensitive. This will list the ACLs for the specified principal. <br>You can specify multiple --principal options in a single command.</td>
<td></td>
<td>Principal</td>
</tr>
<tr>
<td>--allow-host</td>
<td>IP address from which principals listed in --allow-principal will have access.</td>
<td>If --allow-principal is specified, defaults to *, which translates to "all hosts".</td>
<td>Host</td>
</tr>
<tr>
<td>--deny-host</td>
<td>IP address from which principals listed in --deny-principal will be denied access.</td>
<td>If --deny-principal is specified, defaults to *, which translates to "all hosts".</td>
<td>Host</td>
</tr>
<tr>
<td>--operation</td>
<td>Operation that will be allowed or denied.<br>
Valid values are:
<ul>
<li>Read</li>
<li>Write</li>
<li>Create</li>
<li>Delete</li>
<li>Alter</li>
<li>Describe</li>
<li>ClusterAction</li>
<li>DescribeConfigs</li>
<li>AlterConfigs</li>
<li>IdempotentWrite</li>
<li>All</li>
</ul>
</td>
<td>All</td>
<td>Operation</td>
</tr>
<tr>
<td>--producer</td>
<td>Convenience option to add/remove acls for the producer role. This will generate acls that allow WRITE,
DESCRIBE and CREATE on the topic.</td>
<td></td>
<td>Convenience</td>
</tr>
<tr>
<td>--consumer</td>
<td>Convenience option to add/remove acls for the consumer role. This will generate acls that allow READ and
DESCRIBE on the topic and READ on the consumer group.</td>
<td></td>
<td>Convenience</td>
</tr>
<tr>
<td>--idempotent</td>
<td>Enable idempotence for the producer. This should be used in combination with the --producer option.<br>
Note that idempotence is enabled automatically if the producer is authorized to a particular transactional-id.
</td>
<td></td>
<td>Convenience</td>
</tr>
<tr>
<td>--force</td>
<td>Convenience option to assume yes to all queries and not prompt.</td>
<td></td>
<td>Convenience</td>
</tr>
<tr>
<td>--zk-tls-config-file</td>
<td> Identifies the file where ZooKeeper client TLS connectivity properties for the authorizer are defined.
Any properties other than the following (with or without an "authorizer." prefix) are ignored:
zookeeper.clientCnxnSocket, zookeeper.ssl.cipher.suites, zookeeper.ssl.client.enable,
zookeeper.ssl.crl.enable, zookeeper.ssl.enabled.protocols, zookeeper.ssl.endpoint.identification.algorithm,
zookeeper.ssl.keystore.location, zookeeper.ssl.keystore.password, zookeeper.ssl.keystore.type,
zookeeper.ssl.ocsp.enable, zookeeper.ssl.protocol, zookeeper.ssl.truststore.location,
zookeeper.ssl.truststore.password, zookeeper.ssl.truststore.type
</td>
<td></td>
<td>Configuration</td>
</tr>
</table>
<h4 class="anchor-heading">
<a class="anchor-link" id="security_authz_examples" href="#security_authz_examples"></a>
<a href="#security_authz_examples">Examples
</a></h4>
<ul>
<li><b>Adding Acls</b><br>
Suppose you want to add an acl "Principals User:Bob and User:Alice are allowed to perform Operation Read and Write on Topic Test-topic from IP 198.51.100.0 and IP 198.51.100.1". You can do that by executing the CLI with the following options:
<pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic</code></pre>
By default, all principals that don't have an explicit acl allowing access for an operation to a resource are denied. In rare cases where an allow acl is defined that allows access to all but some principals, we will have to use the --deny-principal and --deny-host options. For example, if we want to allow all users to Read from Test-topic but deny only User:BadBob from IP 198.51.100.3, we can do so using the following command:
<pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:* --allow-host * --deny-principal User:BadBob --deny-host 198.51.100.3 --operation Read --topic Test-topic</code></pre>
Note that <code>--allow-host</code> and <code>--deny-host</code> only support IP addresses (hostnames are not supported).
The above examples add acls to a topic by specifying --topic [topic-name] as the resource pattern option. Similarly, one can add acls to a cluster by specifying --cluster and to a consumer group by specifying --group [group-name].
You can add acls on any resource of a certain type, e.g. suppose you wanted to add an acl "Principal User:Peter is allowed to produce to any Topic from IP 198.51.200.1".
You can do that by using the wildcard resource '*', e.g. by executing the CLI with the following options:
<pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Peter --allow-host 198.51.200.1 --producer --topic *</code></pre>
You can add acls on prefixed resource patterns, e.g. suppose you want to add an acl "Principal User:Jane is allowed to produce to any Topic whose name starts with 'Test-' from any host".
You can do that by executing the CLI with the following options:
<pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Jane --producer --topic Test- --resource-pattern-type prefixed</code></pre>
Note, --resource-pattern-type defaults to 'literal', which only affects resources with the exact same name or, in the case of the wildcard resource name '*', a resource with any name.</li>
<li><b>Removing Acls</b><br>
Removing acls is pretty much the same. The only difference is that instead of the --add option, users will have to specify the --remove option. To remove the acls added by the first example above, we can execute the CLI with the following options:
<pre class="line-numbers"><code class="language-bash"> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:Bob --allow-principal User:Alice --allow-host 198.51.100.0 --allow-host 198.51.100.1 --operation Read --operation Write --topic Test-topic </code></pre>
If you want to remove the acl added to the prefixed resource pattern above, we can execute the CLI with the following options:
<pre class="line-numbers"><code class="language-bash"> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:Jane --producer --topic Test- --resource-pattern-type Prefixed</code></pre></li>
<li><b>List Acls</b><br>
We can list acls for any resource by specifying the --list option with the resource. To list all acls on the literal resource pattern Test-topic, we can execute the CLI with following options:
<pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic Test-topic</code></pre>
However, this will only return the acls that have been added to this exact resource pattern. Other acls can exist that affect access to the topic,
e.g. any acls on the topic wildcard '*', or any acls on prefixed resource patterns. Acls on the wildcard resource pattern can be queried explicitly:
<pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic *</code></pre>
However, it is not necessarily possible to explicitly query for acls on prefixed resource patterns that match Test-topic as the name of such patterns may not be known.
We can list <i>all</i> acls affecting Test-topic by using '--resource-pattern-type match', e.g.
<pre class="line-numbers"><code class="language-bash">bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic Test-topic --resource-pattern-type match</code></pre>
This will list acls on all matching literal, wildcard and prefixed resource patterns.</li>
<li><b>Adding or removing a principal as producer or consumer</b><br>
The most common use case for acl management is adding/removing a principal as a producer or consumer, so we added convenience options to handle these cases. In order to add User:Bob as a producer of Test-topic, we can execute the following command:
<pre class="line-numbers"><code class="language-bash"> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --producer --topic Test-topic</code></pre>
Similarly, to add User:Bob as a consumer of Test-topic with consumer group Group-1, we just have to pass the --consumer option:
<pre class="line-numbers"><code class="language-bash"> bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:Bob --consumer --topic Test-topic --group Group-1 </code></pre>
Note that for the --consumer option, we must also specify the consumer group.
To remove a principal from a producer or consumer role, we just need to pass the --remove option. </li>
<li><b>Admin API based acl management</b><br>
Users having Alter permission on the Cluster resource can use the Admin API for ACL management. The kafka-acls.sh script supports the AdminClient API to manage ACLs without interacting with ZooKeeper/the authorizer directly.
All the above examples can be executed by using the <b>--bootstrap-server</b> option. For example:
<pre class="line-numbers"><code class="language-bash"> bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config /tmp/adminclient-configs.conf --add --allow-principal User:Bob --producer --topic Test-topic
bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config /tmp/adminclient-configs.conf --add --allow-principal User:Bob --consumer --topic Test-topic --group Group-1
bin/kafka-acls.sh --bootstrap-server localhost:9092 --command-config /tmp/adminclient-configs.conf --list --topic Test-topic</code></pre></li>
</ul>
<h4 class="anchor-heading">
<a class="anchor-link" id="security_authz_primitives" href="#security_authz_primitives"></a>
<a href="#security_authz_primitives">Authorization Primitives
</a></h4>
<p>Protocol calls usually perform operations on certain resources in Kafka. Knowing these
operations and resources is required to set up effective protection. In this section we'll list the
operations and resources, then list their combinations with the protocols to see the valid scenarios.</p>
<h5 class="anchor-heading">
<a class="anchor-link" id="operations_in_kafka" href="#operations_in_kafka"></a>
<a href="#operations_in_kafka">Operations in Kafka
</a></h5>
<p>There are a few operation primitives that can be used to build up privileges. These can be matched up with
certain resources to allow specific protocol calls for a given user. These are:</p>
<ul>
<li>Read</li>
<li>Write</li>
<li>Create</li>
<li>Delete</li>
<li>Alter</li>
<li>Describe</li>
<li>ClusterAction</li>
<li>DescribeConfigs</li>
<li>AlterConfigs</li>
<li>IdempotentWrite</li>
<li>All</li>
</ul>
<h5 class="anchor-heading">
<a class="anchor-link" id="resources_in_kafka" href="#resources_in_kafka"></a>
<a href="#resources_in_kafka">Resources in Kafka
</a></h5>
<p>The operations above can be applied on certain resources which are described below.</p>
<ul>
<li><b>Topic:</b> this simply represents a Topic. All protocol calls that act on topics (such as reading or
writing them) require the corresponding privilege to be added. If there is an authorization error with a
topic resource, then a TOPIC_AUTHORIZATION_FAILED (error code: 29) will be returned.</li>
<li><b>Group:</b> this represents the consumer groups in the brokers. All protocol calls that work with
consumer groups, like joining a group, must have privileges for the group in question. If the privilege is not
given then a GROUP_AUTHORIZATION_FAILED (error code: 30) will be returned in the protocol response.</li>
<li><b>Cluster:</b> this resource represents the cluster. Operations that affect the whole cluster, like
controlled shutdown, are protected by privileges on the Cluster resource. If there is an authorization problem
on a cluster resource, then a CLUSTER_AUTHORIZATION_FAILED (error code: 31) will be returned.</li>
<li><b>TransactionalId:</b> this resource represents actions related to transactions, such as committing.
If any error occurs, then a TRANSACTIONAL_ID_AUTHORIZATION_FAILED (error code: 53) will be returned by brokers.</li>
<li><b>DelegationToken:</b> this represents the delegation tokens in the cluster. Actions such as describing
delegation tokens can be protected by a privilege on the DelegationToken resource. Since these objects have
somewhat special behavior in Kafka, it is recommended to read
<a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka#KIP-48DelegationtokensupportforKafka-DescribeDelegationTokenRequest">KIP-48</a>
and the related upstream documentation at <a href="#security_delegation_token">Authentication using Delegation Tokens</a>.</li>
</ul>
<h5 class="anchor-heading">
<a class="anchor-link" id="operations_resources_and_protocols" href="#operations_resources_and_protocols"></a>
<a href="#operations_resources_and_protocols">Operations and Resources on Protocols
</a></h5>
<p>The table below lists the valid operations on resources that are executed by the Kafka API protocols.</p>
<table class="data-table">
<thead>
<tr>
<th>Protocol (API key)</th>
<th>Operation</th>
<th>Resource</th>
<th>Note</th>
</tr>
</thead>
<tbody>
<tr>
<td>PRODUCE (0)</td>
<td>Write</td>
<td>TransactionalId</td>
<td>A transactional producer which has its transactional.id set requires this privilege.</td>
</tr>
<tr>
<td>PRODUCE (0)</td>
<td>IdempotentWrite</td>
<td>Cluster</td>
<td>An idempotent produce action requires this privilege.</td>
</tr>
<tr>
<td>PRODUCE (0)</td>
<td>Write</td>
<td>Topic</td>
<td>This applies to a normal produce action.</td>
</tr>
<tr>
<td>FETCH (1)</td>
<td>ClusterAction</td>
<td>Cluster</td>
<td>A follower must have ClusterAction on the Cluster resource in order to fetch partition data.</td>
</tr>
<tr>
<td>FETCH (1)</td>
<td>Read</td>
<td>Topic</td>
<td>Regular Kafka consumers need READ permission on each partition they are fetching.</td>
</tr>
<tr>
<td>LIST_OFFSETS (2)</td>
<td>Describe</td>
<td>Topic</td>
<td></td>
</tr>
<tr>
<td>METADATA (3)</td>
<td>Describe</td>
<td>Topic</td>
<td></td>
</tr>
<tr>
<td>METADATA (3)</td>
<td>Create</td>
<td>Cluster</td>
<td>If topic auto-creation is enabled, then the broker-side API will check for the existence of a Cluster
level privilege. If it's found then it'll allow creating the topic, otherwise it'll iterate through the
Topic level privileges (see the next one).</td>
</tr>
<tr>
<td>METADATA (3)</td>
<td>Create</td>
<td>Topic</td>
<td>This authorizes auto topic creation if enabled but the given user doesn't have a cluster level
permission (above).</td>
</tr>
<tr>
<td>LEADER_AND_ISR (4)</td>
<td>ClusterAction</td>
<td>Cluster</td>
<td></td>
</tr>
<tr>
<td>STOP_REPLICA (5)</td>
<td>ClusterAction</td>
<td>Cluster</td>
<td></td>
</tr>
<tr>
<td>UPDATE_METADATA (6)</td>
<td>ClusterAction</td>
<td>Cluster</td>
<td></td>
</tr>
<tr>
<td>CONTROLLED_SHUTDOWN (7)</td>
<td>ClusterAction</td>
<td>Cluster</td>
<td></td>
</tr>
<tr>
<td>OFFSET_COMMIT (8)</td>
<td>Read</td>
<td>Group</td>
<td>An offset can only be committed if it's authorized to the given group and the topic too (see below).
Group access is checked first, then Topic access.</td>
</tr>
<tr>
<td>OFFSET_COMMIT (8)</td>
<td>Read</td>
<td>Topic</td>
<td>Since offset commit is part of the consuming process, it needs privileges for the read action.</td>
</tr>
<tr>
<td>OFFSET_FETCH (9)</td>
<td>Describe</td>
<td>Group</td>
<td>Similarly to OFFSET_COMMIT, the application must have privileges on both the group and topic level to be able
to fetch. However, in this case it requires Describe access instead of Read. Group access is checked first,
then Topic access.</td>
</tr>
<tr>
<td>OFFSET_FETCH (9)</td>
<td>Describe</td>
<td>Topic</td>
<td></td>
</tr>
<tr>
<td>FIND_COORDINATOR (10)</td>
<td>Describe</td>
<td>Group</td>
<td>The FIND_COORDINATOR request can be of "Group" type, in which case it is looking for consumer group
coordinators. This privilege would represent the Group mode.</td>
</tr>
<tr>
<td>FIND_COORDINATOR (10)</td>
<td>Describe</td>
<td>TransactionalId</td>
<td>This applies only to transactional producers and is checked when a producer tries to find the transaction
coordinator.</td>
</tr>
<tr>
<td>JOIN_GROUP (11)</td>
<td>Read</td>
<td>Group</td>
<td></td>
</tr>
<tr>
<td>HEARTBEAT (12)</td>
<td>Read</td>
<td>Group</td>
<td></td>
</tr>
<tr>
<td>LEAVE_GROUP (13)</td>
<td>Read</td>
<td>Group</td>
<td></td>
</tr>
<tr>
<td>SYNC_GROUP (14)</td>
<td>Read</td>
<td>Group</td>
<td></td>
</tr>
<tr>
<td>DESCRIBE_GROUPS (15)</td>
<td>Describe</td>
<td>Group</td>
<td></td>
</tr>
<tr>
<td>LIST_GROUPS (16)</td>
<td>Describe</td>
<td>Cluster</td>
<td>When the broker checks to authorize a list_groups request it first checks for this cluster
level authorization. If none is found then it proceeds to check the groups individually. This operation
doesn't return CLUSTER_AUTHORIZATION_FAILED.</td>
</tr>
<tr>
<td>LIST_GROUPS (16)</td>
<td>Describe</td>
<td>Group</td>
<td>If none of the groups are authorized, then just an empty response will be sent back instead
of an error. This operation doesn't return CLUSTER_AUTHORIZATION_FAILED. This is applicable from the
2.1 release.</td>
</tr>
<tr>
<td>SASL_HANDSHAKE (17)</td>
<td></td>
<td></td>
<td>The SASL handshake is part of the authentication process and therefore it's not possible to
apply any kind of authorization here.</td>
</tr>
<tr>
<td>API_VERSIONS (18)</td>
<td></td>
<td></td>
<td>The API_VERSIONS request is part of the Kafka protocol handshake and happens on connection
and before any authentication. Therefore it's not possible to control this with authorization.</td>
</tr>
<tr>
<td>CREATE_TOPICS (19)</td>
<td>Create</td>
<td>Cluster</td>
<td>If there is no cluster level authorization, then it won't return CLUSTER_AUTHORIZATION_FAILED but will
fall back to using topic level authorization (just below), which will throw an error if there is a problem.</td>
</tr>
<tr>
<td>CREATE_TOPICS (19)</td>
<td>Create</td>
<td>Topic</td>
<td>This is applicable from the 2.0 release.</td>
</tr>
<tr>
<td>DELETE_TOPICS (20)</td>
<td>Delete</td>
<td>Topic</td>
<td></td>
</tr>
<tr>
<td>DELETE_RECORDS (21)</td>
<td>Delete</td>
<td>Topic</td>
<td></td>
</tr>
<tr>
<td>INIT_PRODUCER_ID (22)</td>
<td>Write</td>
<td>TransactionalId</td>
<td></td>
</tr>
<tr>
<td>INIT_PRODUCER_ID (22)</td>
<td>IdempotentWrite</td>
<td>Cluster</td>
<td></td>
</tr>
<tr>
<td>OFFSET_FOR_LEADER_EPOCH (23)</td>
<td>ClusterAction</td>
<td>Cluster</td>
<td>If there is no cluster level privilege for this operation, then it'll check for a topic level one.</td>
</tr>
<tr>
<td>OFFSET_FOR_LEADER_EPOCH (23)</td>
<td>Describe</td>
<td>Topic</td>
<td>This is applicable from the 2.1 release.</td>
</tr>
<tr>
<td>ADD_PARTITIONS_TO_TXN (24)</td>
<td>Write</td>
<td>TransactionalId</td>
<td>This API is only applicable to transactional requests. It first checks for the Write action on the
TransactionalId resource, then it checks the Topic in subject (below).</td>
</tr>
<tr>
<td>ADD_PARTITIONS_TO_TXN (24)</td>
<td>Write</td>
<td>Topic</td>
<td></td>
</tr>
<tr>
<td>ADD_OFFSETS_TO_TXN (25)</td>
<td>Write</td>
<td>TransactionalId</td>
<td>Similarly to ADD_PARTITIONS_TO_TXN, this is only applicable to transactional requests. It first checks
for the Write action on the TransactionalId resource, then it checks whether it can Read on the given group
(below).</td>
</tr>
<tr>
<td>ADD_OFFSETS_TO_TXN (25)</td>
<td>Read</td>
<td>Group</td>
<td></td>
</tr>
<tr>
<td>END_TXN (26)</td>
<td>Write</td>
<td>TransactionalId</td>
<td></td>
</tr>
<tr>
<td>WRITE_TXN_MARKERS (27)</td>
<td>ClusterAction</td>
<td>Cluster</td>
<td></td>
</tr>
<tr>
<td>TXN_OFFSET_COMMIT (28)</td>
<td>Write</td>
<td>TransactionalId</td>
<td></td>
</tr>
<tr>
<td>TXN_OFFSET_COMMIT (28)</td>
<td>Read</td>
<td>Group</td>
<td></td>
</tr>
<tr>
<td>TXN_OFFSET_COMMIT (28)</td>
<td>Read</td>
<td>Topic</td>
<td></td>
</tr>
<tr>
<td>DESCRIBE_ACLS (29)</td>
<td>Describe</td>
<td>Cluster</td>
<td></td>
</tr>
<tr>
<td>CREATE_ACLS (30)</td>
<td>Alter</td>
<td>Cluster</td>
<td></td>
</tr>
<tr>
<td>DELETE_ACLS (31)</td>
<td>Alter</td>
<td>Cluster</td>
<td></td>
</tr>
<tr>
<td>DESCRIBE_CONFIGS (32)</td>
<td>DescribeConfigs</td>
<td>Cluster</td>
<td>If broker configs are requested, then the broker will check cluster level privileges.</td>
</tr>
<tr>
<td>DESCRIBE_CONFIGS (32)</td>
<td>DescribeConfigs</td>
<td>Topic</td>
<td>If topic configs are requested, then the broker will check topic level privileges.</td>
</tr>
<tr>
<td>ALTER_CONFIGS (33)</td>
<td>AlterConfigs</td>
<td>Cluster</td>
<td>If broker configs are altered, then the broker will check cluster level privileges.</td>
</tr>
<tr>
<td>ALTER_CONFIGS (33)</td>
<td>AlterConfigs</td>
<td>Topic</td>
<td>If topic configs are altered, then the broker will check topic level privileges.</td>
</tr>
<tr>
<td>ALTER_REPLICA_LOG_DIRS (34)</td>
<td>Alter</td>
<td>Cluster</td>
<td></td>
</tr>
<tr>
<td>DESCRIBE_LOG_DIRS (35)</td>
<td>Describe</td>
<td>Cluster</td>
<td>An empty response will be returned on authorization failure.</td>
</tr>
<tr>
<td>SASL_AUTHENTICATE (36)</td>
<td></td>
<td></td>
<td>SASL_AUTHENTICATE is part of the authentication process and therefore it's not possible to
apply any kind of authorization here.</td>
</tr>
<tr>
<td>CREATE_PARTITIONS (37)</td>
<td>Alter</td>
<td>Topic</td>
<td></td>
</tr>
<tr>
<td>CREATE_DELEGATION_TOKEN (38)</td>
<td></td>
<td></td>
<td>Creating delegation tokens has special rules; please see the
<a href="#security_delegation_token">Authentication using Delegation Tokens</a> section.</td>
</tr>
<tr>
<td>RENEW_DELEGATION_TOKEN (39)</td>
<td></td>
<td></td>
<td>Renewing delegation tokens has special rules; please see the
<a href="#security_delegation_token">Authentication using Delegation Tokens</a> section.</td>
</tr>
<tr>
<td>EXPIRE_DELEGATION_TOKEN (40)</td>
<td></td>
<td></td>
<td>Expiring delegation tokens has special rules; please see the
<a href="#security_delegation_token">Authentication using Delegation Tokens</a> section.</td>
</tr>
<tr>
<td>DESCRIBE_DELEGATION_TOKEN (41)</td>
<td>Describe</td>
<td>DelegationToken</td>
<td>Describing delegation tokens has special rules; please see the
<a href="#security_delegation_token">Authentication using Delegation Tokens</a> section.</td>
</tr>
<tr>
<td>DELETE_GROUPS (42)</td>
<td>Delete</td>
<td>Group</td>
<td></td>
</tr>
<tr>
<td>ELECT_PREFERRED_LEADERS (43)</td>
<td>ClusterAction</td>
<td>Cluster</td>
<td></td>
</tr>
<tr>
<td>INCREMENTAL_ALTER_CONFIGS (44)</td>
<td>AlterConfigs</td>
<td>Cluster</td>
<td>If broker configs are altered, then the broker will check cluster level privileges.</td>
</tr>
<tr>
<td>INCREMENTAL_ALTER_CONFIGS (44)</td>
<td>AlterConfigs</td>
<td>Topic</td>
<td>If topic configs are altered, then the broker will check topic level privileges.</td>
</tr>
<tr>
<td>ALTER_PARTITION_REASSIGNMENTS (45)</td>
<td>Alter</td>
<td>Cluster</td>
<td></td>
</tr>
<tr>
<td>LIST_PARTITION_REASSIGNMENTS (46)</td>
<td>Describe</td>
<td>Cluster</td>
<td></td>
</tr>
<tr>
<td>OFFSET_DELETE (47)</td>
<td>Delete</td>
<td>Group</td>
<td></td>
</tr>
<tr>
<td>OFFSET_DELETE (47)</td>
<td>Read</td>
<td>Topic</td>
<td></td>
</tr>
</tbody>
</table>
<h3 class="anchor-heading">
<a class="anchor-link" id="security_rolling_upgrade" href="#security_rolling_upgrade"></a>
<a href="#security_rolling_upgrade">7.5 Incorporating Security Features in a Running Cluster
</a></h3>
You can secure a running cluster via one or more of the supported protocols discussed previously. This is done in phases:
<p></p>
<ul>
<li>Incrementally bounce the cluster nodes to open additional secured port(s).</li>
<li>Restart clients using the secured rather than PLAINTEXT port (assuming you are securing the client-broker connection).</li>
<li>Incrementally bounce the cluster again to enable broker-to-broker security (if this is required).</li>
<li>A final incremental bounce to close the PLAINTEXT port.</li>
</ul>
<p></p>
The specific steps for configuring SSL and SASL are described in sections <a href="#security_ssl">7.2</a> and <a href="#security_sasl">7.3</a>.
Follow these steps to enable security for your desired protocol(s).
<p></p>
The security implementation lets you configure different protocols for both broker-client and broker-broker communication.
These must be enabled in separate bounces. A PLAINTEXT port must be left open throughout so brokers and/or clients can continue to communicate.
<p></p>
When performing an incremental bounce, stop the brokers cleanly via SIGTERM. It's also good practice to wait for restarted replicas to return to the ISR list before moving on to the next node.
<p></p>
As an example, say we wish to encrypt both broker-client and broker-broker communication with SSL. In the first incremental bounce, an SSL port is opened on each node:
<pre>
listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092</pre>
We then restart the clients, changing their config to point at the newly opened, secured port:
<pre>
bootstrap.servers = [broker1:9092,...]
security.protocol = SSL
...etc</pre>
In the second incremental server bounce we instruct Kafka to use SSL as the broker-broker protocol (which will use the same SSL port):
<pre>
listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092
security.inter.broker.protocol=SSL</pre>
In the final bounce we secure the cluster by closing the PLAINTEXT port:
<pre>
listeners=SSL://broker1:9092
security.inter.broker.protocol=SSL</pre>
Alternatively we might choose to open multiple ports so that different protocols can be used for broker-broker and broker-client communication. Say we wished to use SSL encryption throughout (i.e. for broker-broker and broker-client communication) but we'd also like to add SASL authentication to the broker-client connection. We would achieve this by opening two additional ports during the first bounce:
<pre>
listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093</pre>
We would then restart the clients, changing their config to point at the newly opened, SASL & SSL secured port:
<pre>
bootstrap.servers = [broker1:9093,...]
security.protocol = SASL_SSL
...etc</pre>
The second server bounce would switch the cluster to use encrypted broker-broker communication via the SSL port we previously opened (port 9092):
<pre>
listeners=PLAINTEXT://broker1:9091,SSL://broker1:9092,SASL_SSL://broker1:9093
security.inter.broker.protocol=SSL</pre>
The final bounce secures the cluster by closing the PLAINTEXT port.
<pre>
listeners=SSL://broker1:9092,SASL_SSL://broker1:9093
security.inter.broker.protocol=SSL</pre>
ZooKeeper can be secured independently of the Kafka cluster. The steps for doing this are covered in section <a href="#zk_authz_migration">7.6.2</a>.
<h3 class="anchor-heading">
<a class="anchor-link" id="zk_authz" href="#zk_authz"></a>
<a href="#zk_authz">7.6 ZooKeeper Authentication
</a></h3>
ZooKeeper supports mutual TLS (mTLS) authentication beginning with the 3.5.x versions.
Kafka supports authenticating to ZooKeeper with SASL and mTLS -- either individually or both together --
beginning with version 2.5. See
<a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-515%3A+Enable+ZK+client+to+use+the+new+TLS+supported+authentication">KIP-515: Enable ZK client to use the new TLS supported authentication</a>
for more details.
<p>When using mTLS alone, every broker and any CLI tool (such as the <a href="#zk_authz_migration">ZooKeeper Security Migration Tool</a>)
should identify itself with the same Distinguished Name (DN) because it is the DN that is ACL'ed.
This can be changed as described below, but it involves writing and deploying a custom ZooKeeper authentication provider.
Generally each certificate should have the same DN but a different Subject Alternative Name (SAN)
so that hostname verification of the brokers and any CLI tools by ZooKeeper will succeed.</p>
<p>
When using SASL authentication to ZooKeeper together with mTLS, both the SASL identity and
either the DN that created the znode (i.e. the creating broker's certificate)
or the DN of the Security Migration Tool (if migration was performed after the znode was created)
will be ACL'ed, and all brokers and CLI tools will be authorized even if they all use different DNs
because they will all use the same ACL'ed SASL identity.
It is only when using mTLS authentication alone that all the DNs must match (and SANs become critical --
again, in the absence of writing and deploying a custom ZooKeeper authentication provider as described below).
</p>
<p>
Use the broker properties file to set TLS configs for brokers as described below.
</p>
<p>
Use the <tt>--zk-tls-config-file &lt;file&gt;</tt> option to set TLS configs in the ZooKeeper Security Migration Tool.
The <tt>kafka-acls.sh</tt> and <tt>kafka-configs.sh</tt> CLI tools also support the <tt>--zk-tls-config-file &lt;file&gt;</tt> option.
</p>
<p>
Use the <tt>-zk-tls-config-file &lt;file&gt;</tt> option (note the single-dash rather than double-dash)
to set TLS configs for the <tt>zookeeper-shell.sh</tt> CLI tool.
</p>
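<p>
As an illustration, such a file simply contains the relevant ZooKeeper TLS properties
(the paths and passwords here are placeholders):
</p>
<pre>
zookeeper.ssl.client.enable=true
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.keystore.location=/path/to/kafka/keystore.jks
zookeeper.ssl.keystore.password=kafka-ks-passwd
zookeeper.ssl.truststore.location=/path/to/kafka/truststore.jks
zookeeper.ssl.truststore.password=kafka-ts-passwd</pre>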
<h4 class="anchor-heading">
<a class="anchor-link" id="zk_authz_new" href="#zk_authz_new"></a>
<a href="#zk_authz_new">7.6.1 New clusters
</a></h4>
<h5 class="anchor-heading">
<a class="anchor-link" id="zk_authz_new_sasl" href="#zk_authz_new_sasl"></a>
<a href="#zk_authz_new_sasl">7.6.1.1 ZooKeeper SASL Authentication
</a></h5>
To enable ZooKeeper SASL authentication on brokers, there are two necessary steps:
<ol>
<li> Create a JAAS login file and set the appropriate system property to point to it as described above (see the example following this list)</li>
<li> Set the configuration property <tt>zookeeper.set.acl</tt> in each broker to true</li>
</ol>
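For example (a sketch only; the login module and credentials depend on the SASL mechanism your
ZooKeeper ensemble is configured for), a JAAS login file for DIGEST-MD5 would contain a
<tt>Client</tt> section such as:
<pre>
Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka"
    password="kafka-secret";
};</pre>
Each broker would then be started with <tt>-Djava.security.auth.login.config=/path/to/jaas.conf</tt>
and have <tt>zookeeper.set.acl=true</tt> in its server.properties.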
The metadata stored in ZooKeeper for the Kafka cluster is world-readable, but can only be modified by the brokers. The rationale behind this decision is that the data stored in ZooKeeper is not sensitive, but inappropriate manipulation of that data can cause cluster disruption. We also recommend limiting the access to ZooKeeper via network segmentation (only brokers and some admin tools need access to ZooKeeper).
<h5 class="anchor-heading">
<a class="anchor-link" id="zk_authz_new_mtls" href="#zk_authz_new_mtls"></a>
<a href="#zk_authz_new_mtls">7.6.1.2 ZooKeeper Mutual TLS Authentication
</a></h5>
ZooKeeper mTLS authentication can be enabled with or without SASL authentication. As mentioned above,
when using mTLS alone, every broker and any CLI tools (such as the <a href="#zk_authz_migration">ZooKeeper Security Migration Tool</a>)
must generally identify itself with the same Distinguished Name (DN) because it is the DN that is ACL'ed, which means
each certificate should have an appropriate Subject Alternative Name (SAN) so that
hostname verification of the brokers and any CLI tool by ZooKeeper will succeed.
<p>
It is possible to use something other than the DN for the identity of mTLS clients by writing a class that
extends <tt>org.apache.zookeeper.server.auth.X509AuthenticationProvider</tt> and overrides the method
<tt>protected String getClientId(X509Certificate clientCert)</tt>.
Choose a scheme name and set <tt>authProvider.[scheme]</tt> in ZooKeeper to be the fully-qualified class name
of the custom implementation; then set <tt>ssl.authProvider=[scheme]</tt> to use it.
</p>
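For illustration, here is a minimal sketch of such a provider. The class name, the scheme name and
the CN-extraction logic are hypothetical, and the no-argument superclass constructor (which builds
its key and trust managers from the ZooKeeper ssl.* configuration) is an assumption to verify
against your ZooKeeper version.
<pre class="line-numbers"><code class="language-java">import java.security.cert.X509Certificate;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.zookeeper.common.X509Exception;
import org.apache.zookeeper.server.auth.X509AuthenticationProvider;

// Hypothetical example: identify mTLS clients by the CN attribute of the
// certificate subject rather than the full DN.
public class CnX509AuthenticationProvider extends X509AuthenticationProvider {

    private static final Pattern CN = Pattern.compile("CN=([^,]+)");

    public CnX509AuthenticationProvider() throws X509Exception {
        super(); // assumed no-arg constructor; see lead-in note
    }

    @Override
    protected String getClientId(X509Certificate clientCert) {
        String dn = clientCert.getSubjectX500Principal().getName();
        Matcher m = CN.matcher(dn);
        // Fall back to the full DN if no CN attribute is present.
        return m.find() ? m.group(1) : dn;
    }
}</code></pre>
It would then be enabled in ZooKeeper with, e.g., <tt>authProvider.cn=CnX509AuthenticationProvider</tt>
(fully qualified) and <tt>ssl.authProvider=cn</tt>.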
Here is a sample (partial) ZooKeeper configuration for enabling TLS authentication.
These configurations are described in the
<a href="https://zookeeper.apache.org/doc/r3.5.7/zookeeperAdmin.html#sc_authOptions">ZooKeeper Admin Guide</a>.
<pre>
secureClientPort=2182
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
authProvider.x509=org.apache.zookeeper.server.auth.X509AuthenticationProvider
ssl.keyStore.location=/path/to/zk/keystore.jks
ssl.keyStore.password=zk-ks-passwd
ssl.trustStore.location=/path/to/zk/truststore.jks
ssl.trustStore.password=zk-ts-passwd</pre>
<strong>IMPORTANT</strong>: ZooKeeper does not support setting the key password in the ZooKeeper server keystore
to a value different from the keystore password itself.
Be sure to set the key password to be the same as the keystore password.
<p>Here is a sample (partial) Kafka Broker configuration for connecting to ZooKeeper with mTLS authentication.
These configurations are described above in <a href="#brokerconfigs">Broker Configs</a>.
</p>
<pre>
# connect to the ZooKeeper port configured for TLS
zookeeper.connect=zk1:2182,zk2:2182,zk3:2182
# required to use TLS to ZooKeeper (default is false)
zookeeper.ssl.client.enable=true
# required to use TLS to ZooKeeper
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
# define key/trust stores to use TLS to ZooKeeper; ignored unless zookeeper.ssl.client.enable=true
zookeeper.ssl.keystore.location=/path/to/kafka/keystore.jks
zookeeper.ssl.keystore.password=kafka-ks-passwd
zookeeper.ssl.truststore.location=/path/to/kafka/truststore.jks
zookeeper.ssl.truststore.password=kafka-ts-passwd
# tell broker to create ACLs on znodes
zookeeper.set.acl=true</pre>
<strong>IMPORTANT</strong>: ZooKeeper does not support setting the key password in the ZooKeeper client (i.e. broker) keystore
to a value different from the keystore password itself.
Be sure to set the key password to be the same as the keystore password.
<h4 class="anchor-heading">
<a class="anchor-link" id="zk_authz_migration" href="#zk_authz_migration"></a>
<a href="#zk_authz_migration">7.6.2 Migrating clusters
</a></h4>
If you are running a version of Kafka that does not support security, or simply have security disabled, and you want to make the cluster secure, then you need to execute the following steps to enable ZooKeeper authentication with minimal disruption to your operations:
<ol>
<li>Enable SASL and/or mTLS authentication on ZooKeeper. If enabling mTLS, you would now have both a non-TLS port and a TLS port, like this:
<pre>
clientPort=2181
secureClientPort=2182
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
authProvider.x509=org.apache.zookeeper.server.auth.X509AuthenticationProvider
ssl.keyStore.location=/path/to/zk/keystore.jks
ssl.keyStore.password=zk-ks-passwd
ssl.trustStore.location=/path/to/zk/truststore.jks
ssl.trustStore.password=zk-ts-passwd</pre>
</li>
<li>Perform a rolling restart of brokers setting the JAAS login file and/or defining ZooKeeper mutual TLS configurations (including connecting to the TLS-enabled ZooKeeper port) as required, which enables brokers to authenticate to ZooKeeper. At the end of the rolling restart, brokers are able to manipulate znodes with strict ACLs, but they will not create znodes with those ACLs.</li>
<li>If you enabled mTLS, disable the non-TLS port in ZooKeeper</li>
<li>Perform a second rolling restart of brokers, this time setting the configuration parameter <tt>zookeeper.set.acl</tt> to true, which enables the use of secure ACLs when creating znodes</li>
<li>Execute the ZkSecurityMigrator tool. To execute the tool, run the <tt>./bin/zookeeper-security-migration.sh</tt> script with <tt>zookeeper.acl</tt> set to secure. This tool traverses the corresponding sub-trees changing the ACLs of the znodes. Use the <code>--zk-tls-config-file &lt;file&gt;</code> option if you enabled mTLS.</li>
</ol>
<p>It is also possible to turn off authentication in a secure cluster. To do it, follow these steps:</p>
<ol>
<li>Perform a rolling restart of brokers setting the JAAS login file and/or defining ZooKeeper mutual TLS configurations, which enables brokers to authenticate, but setting <tt>zookeeper.set.acl</tt> to false. At the end of the rolling restart, brokers stop creating znodes with secure ACLs, but are still able to authenticate and manipulate all znodes</li>
<li>Execute the ZkSecurityMigrator tool by running the script <tt>./bin/zookeeper-security-migration.sh</tt> with <tt>zookeeper.acl</tt> set to <tt>unsecure</tt>. This tool traverses the corresponding sub-trees, changing the ACLs of the znodes. Use the <code>--zk-tls-config-file &lt;file&gt;</code> option if you need to set TLS configurations</li>
<li>If you are disabling mTLS, enable the non-TLS port in ZooKeeper</li>
<li>Perform a second rolling restart of brokers, this time omitting the system property that sets the JAAS login file and/or removing ZooKeeper mutual TLS configuration (including connecting to the non-TLS-enabled ZooKeeper port) as required</li>
<li>If you are disabling mTLS, disable the TLS port in ZooKeeper</li>
</ol>
Here is an example of how to run the migration tool:
<pre class="line-numbers"><code class="language-bash"> ./bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=localhost:2181</code></pre>
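When mTLS is enabled, the TLS settings for the tool's ZooKeeper connection are supplied in a separate properties file. A sketch, reusing the broker-side <tt>zookeeper.ssl.*</tt> configs shown earlier (the file name and the path/password values are placeholders):
<pre class="line-numbers"><code class="language-text"> # zk-tls-config.properties (hypothetical file name)
zookeeper.ssl.client.enable=true
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.keystore.location=/path/to/kafka/keystore.jks
zookeeper.ssl.keystore.password=kafka-ks-passwd
zookeeper.ssl.truststore.location=/path/to/kafka/truststore.jks
zookeeper.ssl.truststore.password=kafka-ts-passwd</code></pre>
<pre class="line-numbers"><code class="language-bash"> ./bin/zookeeper-security-migration.sh --zookeeper.acl=secure --zookeeper.connect=zk1:2182,zk2:2182,zk3:2182 --zk-tls-config-file zk-tls-config.properties</code></pre>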
<p>Run this to see the full list of parameters:</p>
<pre class="line-numbers"><code class="language-bash"> ./bin/zookeeper-security-migration.sh --help</code></pre>
<h4 class="anchor-heading">
<a class="anchor-link" id="zk_authz_ensemble" href="#zk_authz_ensemble"></a>
<a href="#zk_authz_ensemble">7.6.3 Migrating the ZooKeeper ensemble
</a></h4>
It is also necessary to enable SASL and/or mTLS authentication on the ZooKeeper ensemble itself. To do so, perform a rolling restart of the ZooKeeper servers and set a few properties. See above for mTLS information, and refer to the ZooKeeper documentation for more detail:
<ol>
<li><a href="https://zookeeper.apache.org/doc/r3.5.7/zookeeperProgrammers.html#sc_ZooKeeperAccessControl">Apache ZooKeeper documentation</a></li>
<li><a href="https://cwiki.apache.org/confluence/display/ZOOKEEPER/Zookeeper+and+SASL">Apache ZooKeeper wiki</a></li>
</ol>
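As an illustration only (the exact properties depend on your ZooKeeper version and the chosen SASL mechanism; the username and password are placeholders), a SASL/DIGEST-MD5-enabled ZooKeeper server might add an authentication provider to <tt>zoo.cfg</tt> and define the permitted credentials in a JAAS file:
<pre class="line-numbers"><code class="language-text"> # zoo.cfg: enable the SASL authentication provider
authProvider.sasl=org.apache.zookeeper.server.auth.SASLAuthenticationProvider</code></pre>
<pre class="line-numbers"><code class="language-text"> // zk_jaas.conf, passed to the ZooKeeper JVM via -Djava.security.auth.login.config;
// each user_&lt;name&gt; entry defines credentials a client (broker) may present
Server {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    user_kafka="kafka-secret";
};</code></pre>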
<h4 class="anchor-heading">
<a class="anchor-link" id="zk_authz_quorum" href="#zk_authz_quorum"></a>
<a href="#zk_authz_quorum">7.6.4 ZooKeeper Quorum Mutual TLS Authentication
</a></h4>
It is possible to enable mTLS authentication between the ZooKeeper servers themselves.
Please refer to the <a href="https://zookeeper.apache.org/doc/r3.5.7/zookeeperAdmin.html#Quorum+TLS">ZooKeeper documentation</a> for more detail.
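As a rough sketch, assuming ZooKeeper 3.5.7+ (paths and passwords are placeholders; see the linked documentation for the authoritative property list), each server's <tt>zoo.cfg</tt> would gain entries such as:
<pre class="line-numbers"><code class="language-text"> # enable TLS for quorum (server-to-server) communication
sslQuorum=true
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
ssl.quorum.keyStore.location=/path/to/zk/keystore.jks
ssl.quorum.keyStore.password=zk-ks-passwd
ssl.quorum.trustStore.location=/path/to/zk/truststore.jks
ssl.quorum.trustStore.password=zk-ts-passwd</code></pre>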
<h3 class="anchor-heading">
<a class="anchor-link" id="zk_encryption" href="#zk_encryption"></a>
<a href="#zk_encryption">7.7 ZooKeeper Encryption
</a></h3>
ZooKeeper connections that use mutual TLS are encrypted.
Beginning with ZooKeeper version 3.5.7 (the version shipped with Kafka version 2.5), ZooKeeper supports a server-side config
<tt>ssl.clientAuth</tt> (case-insensitive; <tt>want</tt>/<tt>need</tt>/<tt>none</tt> are the valid options, and the default is <tt>need</tt>).
Setting this value to <tt>none</tt> in ZooKeeper allows clients to connect via a TLS-encrypted connection
without presenting their own certificate. Here is a sample (partial) Kafka Broker configuration for connecting to ZooKeeper with just TLS encryption.
These configurations are described above in <a href="#brokerconfigs">Broker Configs</a>.
<pre class="line-numbers"><code class="language-text">
# connect to the ZooKeeper port configured for TLS
zookeeper.connect=zk1:2182,zk2:2182,zk3:2182
# required to use TLS to ZooKeeper (default is false)
zookeeper.ssl.client.enable=true
# required to use TLS to ZooKeeper
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
# define trust stores to use TLS to ZooKeeper; ignored unless zookeeper.ssl.client.enable=true
# no need to set keystore information assuming ssl.clientAuth=none on ZooKeeper
zookeeper.ssl.truststore.location=/path/to/kafka/truststore.jks
zookeeper.ssl.truststore.password=kafka-ts-passwd
# tell broker to create ACLs on znodes (if using SASL authentication, otherwise do not set this)
zookeeper.set.acl=true</code></pre>
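For reference, a sketch of the matching ZooKeeper-side configuration that serves TLS without requiring client certificates (paths and passwords are placeholders):
<pre class="line-numbers"><code class="language-text"> secureClientPort=2182
serverCnxnFactory=org.apache.zookeeper.server.NettyServerCnxnFactory
ssl.keyStore.location=/path/to/zk/keystore.jks
ssl.keyStore.password=zk-ks-passwd
# allow TLS connections without a client certificate
ssl.clientAuth=none</code></pre>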
</script>
<div class="p-security"></div>