MINOR: fixes for intro page and various docs page headings (#287)

diff --git a/25/generated/admin_client_config.html b/25/generated/admin_client_config.html
index 5802dcc..ff71b04 100644
--- a/25/generated/admin_client_config.html
+++ b/25/generated/admin_client_config.html
@@ -1,8 +1,8 @@
 <ul class="config-list">
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="bootstrap.servers" href="#bootstrap.servers"></a>
-      bootstrap.servers
+   <a class="anchor-link" id="bootstrap.servers"></a>
+   <a href="#bootstrap.servers">bootstrap.servers</a>
 </h4>
 <p>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).</p>
 <table class="data-table"><tbody>
@@ -14,8 +14,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.key.password" href="#ssl.key.password"></a>
-      ssl.key.password
+   <a class="anchor-link" id="ssl.key.password"></a>
+   <a href="#ssl.key.password">ssl.key.password</a>
 </h4>
 <p>The password of the private key in the key store file. This is optional for clients.</p>
 <table class="data-table"><tbody>
@@ -27,8 +27,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.keystore.location" href="#ssl.keystore.location"></a>
-      ssl.keystore.location
+   <a class="anchor-link" id="ssl.keystore.location"></a>
+   <a href="#ssl.keystore.location">ssl.keystore.location</a>
 </h4>
 <p>The location of the key store file. This is optional for clients and can be used for two-way authentication of the client.</p>
 <table class="data-table"><tbody>
@@ -40,8 +40,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.keystore.password" href="#ssl.keystore.password"></a>
-      ssl.keystore.password
+   <a class="anchor-link" id="ssl.keystore.password"></a>
+   <a href="#ssl.keystore.password">ssl.keystore.password</a>
 </h4>
 <p>The store password for the key store file. This is optional for clients and only needed if ssl.keystore.location is configured.</p>
 <table class="data-table"><tbody>
@@ -53,8 +53,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.truststore.location" href="#ssl.truststore.location"></a>
-      ssl.truststore.location
+   <a class="anchor-link" id="ssl.truststore.location"></a>
+   <a href="#ssl.truststore.location">ssl.truststore.location</a>
 </h4>
 <p>The location of the trust store file. </p>
 <table class="data-table"><tbody>
@@ -66,8 +66,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.truststore.password" href="#ssl.truststore.password"></a>
-      ssl.truststore.password
+   <a class="anchor-link" id="ssl.truststore.password"></a>
+   <a href="#ssl.truststore.password">ssl.truststore.password</a>
 </h4>
 <p>The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.</p>
 <table class="data-table"><tbody>
@@ -79,8 +79,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="client.dns.lookup" href="#client.dns.lookup"></a>
-      client.dns.lookup
+   <a class="anchor-link" id="client.dns.lookup"></a>
+   <a href="#client.dns.lookup">client.dns.lookup</a>
 </h4>
 <p>Controls how the client uses DNS lookups. If set to <code>use_all_dns_ips</code> then, when the lookup returns multiple IP addresses for a hostname, the client will attempt to connect to each of them in turn before failing the connection. Applies to both bootstrap and advertised servers. If the value is <code>resolve_canonical_bootstrap_servers_only</code>, each entry will be resolved and expanded into a list of canonical names.</p>
 <table class="data-table"><tbody>
@@ -92,8 +92,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="client.id" href="#client.id"></a>
-      client.id
+   <a class="anchor-link" id="client.id"></a>
+   <a href="#client.id">client.id</a>
 </h4>
 <p>An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.</p>
 <table class="data-table"><tbody>
@@ -105,8 +105,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="connections.max.idle.ms" href="#connections.max.idle.ms"></a>
-      connections.max.idle.ms
+   <a class="anchor-link" id="connections.max.idle.ms"></a>
+   <a href="#connections.max.idle.ms">connections.max.idle.ms</a>
 </h4>
 <p>Close idle connections after the number of milliseconds specified by this config.</p>
 <table class="data-table"><tbody>
@@ -118,8 +118,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="default.api.timeout.ms" href="#default.api.timeout.ms"></a>
-      default.api.timeout.ms
+   <a class="anchor-link" id="default.api.timeout.ms"></a>
+   <a href="#default.api.timeout.ms">default.api.timeout.ms</a>
 </h4>
 <p>Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operations that do not specify a <code>timeout</code> parameter.</p>
 <table class="data-table"><tbody>
@@ -131,8 +131,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="receive.buffer.bytes" href="#receive.buffer.bytes"></a>
-      receive.buffer.bytes
+   <a class="anchor-link" id="receive.buffer.bytes"></a>
+   <a href="#receive.buffer.bytes">receive.buffer.bytes</a>
 </h4>
 <p>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.</p>
 <table class="data-table"><tbody>
@@ -144,8 +144,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="request.timeout.ms" href="#request.timeout.ms"></a>
-      request.timeout.ms
+   <a class="anchor-link" id="request.timeout.ms"></a>
+   <a href="#request.timeout.ms">request.timeout.ms</a>
 </h4>
 <p>The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.</p>
 <table class="data-table"><tbody>
@@ -157,8 +157,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.client.callback.handler.class" href="#sasl.client.callback.handler.class"></a>
-      sasl.client.callback.handler.class
+   <a class="anchor-link" id="sasl.client.callback.handler.class"></a>
+   <a href="#sasl.client.callback.handler.class">sasl.client.callback.handler.class</a>
 </h4>
 <p>The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.</p>
 <table class="data-table"><tbody>
@@ -170,8 +170,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.jaas.config" href="#sasl.jaas.config"></a>
-      sasl.jaas.config
+   <a class="anchor-link" id="sasl.jaas.config"></a>
+   <a href="#sasl.jaas.config">sasl.jaas.config</a>
 </h4>
 <p>JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html">here</a>.
       The format for the value is: '<code>loginModuleClass controlFlag (optionName=optionValue)*;</code>'. For brokers, the config must be prefixed with the listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;</p>
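
The format string above is easiest to read next to a concrete value. A minimal sketch of client-side SASL/SCRAM settings, assuming the broker has SCRAM-SHA-256 enabled; the credentials are placeholders:

```properties
# Hypothetical client settings; ScramLoginModule ships with Kafka,
# the username/password here are placeholders.
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
    username="alice" \
    password="alice-secret";
```

Note the trailing semicolon, which the JAAS grammar requires.
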
@@ -184,8 +184,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.kerberos.service.name" href="#sasl.kerberos.service.name"></a>
-      sasl.kerberos.service.name
+   <a class="anchor-link" id="sasl.kerberos.service.name"></a>
+   <a href="#sasl.kerberos.service.name">sasl.kerberos.service.name</a>
 </h4>
 <p>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</p>
 <table class="data-table"><tbody>
@@ -197,8 +197,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.login.callback.handler.class" href="#sasl.login.callback.handler.class"></a>
-      sasl.login.callback.handler.class
+   <a class="anchor-link" id="sasl.login.callback.handler.class"></a>
+   <a href="#sasl.login.callback.handler.class">sasl.login.callback.handler.class</a>
 </h4>
 <p>The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler</p>
 <table class="data-table"><tbody>
@@ -210,8 +210,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.login.class" href="#sasl.login.class"></a>
-      sasl.login.class
+   <a class="anchor-link" id="sasl.login.class"></a>
+   <a href="#sasl.login.class">sasl.login.class</a>
 </h4>
 <p>The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin</p>
 <table class="data-table"><tbody>
@@ -223,8 +223,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.mechanism" href="#sasl.mechanism"></a>
-      sasl.mechanism
+   <a class="anchor-link" id="sasl.mechanism"></a>
+   <a href="#sasl.mechanism">sasl.mechanism</a>
 </h4>
 <p>SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.</p>
 <table class="data-table"><tbody>
@@ -236,8 +236,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="security.protocol" href="#security.protocol"></a>
-      security.protocol
+   <a class="anchor-link" id="security.protocol"></a>
+   <a href="#security.protocol">security.protocol</a>
 </h4>
 <p>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</p>
 <table class="data-table"><tbody>
@@ -249,8 +249,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="send.buffer.bytes" href="#send.buffer.bytes"></a>
-      send.buffer.bytes
+   <a class="anchor-link" id="send.buffer.bytes"></a>
+   <a href="#send.buffer.bytes">send.buffer.bytes</a>
 </h4>
 <p>The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.</p>
 <table class="data-table"><tbody>
@@ -262,8 +262,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.enabled.protocols" href="#ssl.enabled.protocols"></a>
-      ssl.enabled.protocols
+   <a class="anchor-link" id="ssl.enabled.protocols"></a>
+   <a href="#ssl.enabled.protocols">ssl.enabled.protocols</a>
 </h4>
 <p>The list of protocols enabled for SSL connections.</p>
 <table class="data-table"><tbody>
@@ -275,8 +275,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.keystore.type" href="#ssl.keystore.type"></a>
-      ssl.keystore.type
+   <a class="anchor-link" id="ssl.keystore.type"></a>
+   <a href="#ssl.keystore.type">ssl.keystore.type</a>
 </h4>
 <p>The file format of the key store file. This is optional for clients.</p>
 <table class="data-table"><tbody>
@@ -288,8 +288,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.protocol" href="#ssl.protocol"></a>
-      ssl.protocol
+   <a class="anchor-link" id="ssl.protocol"></a>
+   <a href="#ssl.protocol">ssl.protocol</a>
 </h4>
 <p>The SSL protocol used to generate the SSLContext. Default setting is TLSv1.2, which is fine for most cases. Allowed values in recent JVMs are TLSv1.2 and TLSv1.3. TLS, TLSv1.1, SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.</p>
 <table class="data-table"><tbody>
@@ -301,8 +301,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.provider" href="#ssl.provider"></a>
-      ssl.provider
+   <a class="anchor-link" id="ssl.provider"></a>
+   <a href="#ssl.provider">ssl.provider</a>
 </h4>
 <p>The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</p>
 <table class="data-table"><tbody>
@@ -314,8 +314,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.truststore.type" href="#ssl.truststore.type"></a>
-      ssl.truststore.type
+   <a class="anchor-link" id="ssl.truststore.type"></a>
+   <a href="#ssl.truststore.type">ssl.truststore.type</a>
 </h4>
 <p>The file format of the trust store file.</p>
 <table class="data-table"><tbody>
@@ -327,8 +327,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="metadata.max.age.ms" href="#metadata.max.age.ms"></a>
-      metadata.max.age.ms
+   <a class="anchor-link" id="metadata.max.age.ms"></a>
+   <a href="#metadata.max.age.ms">metadata.max.age.ms</a>
 </h4>
 <p>The period of time in milliseconds after which we force a refresh of metadata, even if we haven't seen any partition leadership changes, in order to proactively discover any new brokers or partitions.</p>
 <table class="data-table"><tbody>
@@ -340,8 +340,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="metric.reporters" href="#metric.reporters"></a>
-      metric.reporters
+   <a class="anchor-link" id="metric.reporters"></a>
+   <a href="#metric.reporters">metric.reporters</a>
 </h4>
 <p>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</p>
 <table class="data-table"><tbody>
@@ -353,8 +353,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="metrics.num.samples" href="#metrics.num.samples"></a>
-      metrics.num.samples
+   <a class="anchor-link" id="metrics.num.samples"></a>
+   <a href="#metrics.num.samples">metrics.num.samples</a>
 </h4>
 <p>The number of samples maintained to compute metrics.</p>
 <table class="data-table"><tbody>
@@ -366,8 +366,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="metrics.recording.level" href="#metrics.recording.level"></a>
-      metrics.recording.level
+   <a class="anchor-link" id="metrics.recording.level"></a>
+   <a href="#metrics.recording.level">metrics.recording.level</a>
 </h4>
 <p>The highest recording level for metrics.</p>
 <table class="data-table"><tbody>
@@ -379,8 +379,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="metrics.sample.window.ms" href="#metrics.sample.window.ms"></a>
-      metrics.sample.window.ms
+   <a class="anchor-link" id="metrics.sample.window.ms"></a>
+   <a href="#metrics.sample.window.ms">metrics.sample.window.ms</a>
 </h4>
 <p>The window of time a metrics sample is computed over.</p>
 <table class="data-table"><tbody>
@@ -392,8 +392,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="reconnect.backoff.max.ms" href="#reconnect.backoff.max.ms"></a>
-      reconnect.backoff.max.ms
+   <a class="anchor-link" id="reconnect.backoff.max.ms"></a>
+   <a href="#reconnect.backoff.max.ms">reconnect.backoff.max.ms</a>
 </h4>
 <p>The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.</p>
 <table class="data-table"><tbody>
@@ -405,8 +405,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="reconnect.backoff.ms" href="#reconnect.backoff.ms"></a>
-      reconnect.backoff.ms
+   <a class="anchor-link" id="reconnect.backoff.ms"></a>
+   <a href="#reconnect.backoff.ms">reconnect.backoff.ms</a>
 </h4>
 <p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</p>
 <table class="data-table"><tbody>
@@ -431,8 +431,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="retry.backoff.ms" href="#retry.backoff.ms"></a>
-      retry.backoff.ms
+   <a class="anchor-link" id="retry.backoff.ms"></a>
+   <a href="#retry.backoff.ms">retry.backoff.ms</a>
 </h4>
 <p>The amount of time to wait before attempting to retry a failed request. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table class="data-table"><tbody>
@@ -444,8 +444,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.kerberos.kinit.cmd" href="#sasl.kerberos.kinit.cmd"></a>
-      sasl.kerberos.kinit.cmd
+   <a class="anchor-link" id="sasl.kerberos.kinit.cmd"></a>
+   <a href="#sasl.kerberos.kinit.cmd">sasl.kerberos.kinit.cmd</a>
 </h4>
 <p>Kerberos kinit command path.</p>
 <table class="data-table"><tbody>
@@ -457,8 +457,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.kerberos.min.time.before.relogin" href="#sasl.kerberos.min.time.before.relogin"></a>
-      sasl.kerberos.min.time.before.relogin
+   <a class="anchor-link" id="sasl.kerberos.min.time.before.relogin"></a>
+   <a href="#sasl.kerberos.min.time.before.relogin">sasl.kerberos.min.time.before.relogin</a>
 </h4>
 <p>Login thread sleep time between refresh attempts.</p>
 <table class="data-table"><tbody>
@@ -470,8 +470,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.kerberos.ticket.renew.jitter" href="#sasl.kerberos.ticket.renew.jitter"></a>
-      sasl.kerberos.ticket.renew.jitter
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.jitter"></a>
+   <a href="#sasl.kerberos.ticket.renew.jitter">sasl.kerberos.ticket.renew.jitter</a>
 </h4>
 <p>Percentage of random jitter added to the renewal time.</p>
 <table class="data-table"><tbody>
@@ -483,8 +483,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.kerberos.ticket.renew.window.factor" href="#sasl.kerberos.ticket.renew.window.factor"></a>
-      sasl.kerberos.ticket.renew.window.factor
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.window.factor"></a>
+   <a href="#sasl.kerberos.ticket.renew.window.factor">sasl.kerberos.ticket.renew.window.factor</a>
 </h4>
 <p>Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.</p>
 <table class="data-table"><tbody>
@@ -496,8 +496,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.login.refresh.buffer.seconds" href="#sasl.login.refresh.buffer.seconds"></a>
-      sasl.login.refresh.buffer.seconds
+   <a class="anchor-link" id="sasl.login.refresh.buffer.seconds"></a>
+   <a href="#sasl.login.refresh.buffer.seconds">sasl.login.refresh.buffer.seconds</a>
 </h4>
 <p>The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -509,8 +509,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.login.refresh.min.period.seconds" href="#sasl.login.refresh.min.period.seconds"></a>
-      sasl.login.refresh.min.period.seconds
+   <a class="anchor-link" id="sasl.login.refresh.min.period.seconds"></a>
+   <a href="#sasl.login.refresh.min.period.seconds">sasl.login.refresh.min.period.seconds</a>
 </h4>
 <p>The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -522,8 +522,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.login.refresh.window.factor" href="#sasl.login.refresh.window.factor"></a>
-      sasl.login.refresh.window.factor
+   <a class="anchor-link" id="sasl.login.refresh.window.factor"></a>
+   <a href="#sasl.login.refresh.window.factor">sasl.login.refresh.window.factor</a>
 </h4>
 <p>Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -535,8 +535,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.login.refresh.window.jitter" href="#sasl.login.refresh.window.jitter"></a>
-      sasl.login.refresh.window.jitter
+   <a class="anchor-link" id="sasl.login.refresh.window.jitter"></a>
+   <a href="#sasl.login.refresh.window.jitter">sasl.login.refresh.window.jitter</a>
 </h4>
 <p>The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -548,8 +548,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="security.providers" href="#security.providers"></a>
-      security.providers
+   <a class="anchor-link" id="security.providers"></a>
+   <a href="#security.providers">security.providers</a>
 </h4>
 <p>A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the <code>org.apache.kafka.common.security.auth.SecurityProviderCreator</code> interface.</p>
 <table class="data-table"><tbody>
@@ -561,8 +561,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.cipher.suites" href="#ssl.cipher.suites"></a>
-      ssl.cipher.suites
+   <a class="anchor-link" id="ssl.cipher.suites"></a>
+   <a href="#ssl.cipher.suites">ssl.cipher.suites</a>
 </h4>
 <p>A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.</p>
 <table class="data-table"><tbody>
@@ -574,8 +574,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.endpoint.identification.algorithm" href="#ssl.endpoint.identification.algorithm"></a>
-      ssl.endpoint.identification.algorithm
+   <a class="anchor-link" id="ssl.endpoint.identification.algorithm"></a>
+   <a href="#ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a>
 </h4>
 <p>The endpoint identification algorithm to validate server hostname using server certificate. </p>
 <table class="data-table"><tbody>
@@ -587,8 +587,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.keymanager.algorithm" href="#ssl.keymanager.algorithm"></a>
-      ssl.keymanager.algorithm
+   <a class="anchor-link" id="ssl.keymanager.algorithm"></a>
+   <a href="#ssl.keymanager.algorithm">ssl.keymanager.algorithm</a>
 </h4>
 <p>The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.</p>
 <table class="data-table"><tbody>
@@ -600,8 +600,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.secure.random.implementation" href="#ssl.secure.random.implementation"></a>
-      ssl.secure.random.implementation
+   <a class="anchor-link" id="ssl.secure.random.implementation"></a>
+   <a href="#ssl.secure.random.implementation">ssl.secure.random.implementation</a>
 </h4>
 <p>The SecureRandom PRNG implementation to use for SSL cryptography operations. </p>
 <table class="data-table"><tbody>
@@ -613,8 +613,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.trustmanager.algorithm" href="#ssl.trustmanager.algorithm"></a>
-      ssl.trustmanager.algorithm
+   <a class="anchor-link" id="ssl.trustmanager.algorithm"></a>
+   <a href="#ssl.trustmanager.algorithm">ssl.trustmanager.algorithm</a>
 </h4>
 <p>The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.</p>
 <table class="data-table"><tbody>
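
In practice the admin client options in this file are collected into a single properties file, e.g. one passed to CLI tools such as kafka-topics.sh via --command-config. A minimal sketch for a TLS-secured cluster; hostnames, ports, and file paths are placeholders:

```properties
# Hypothetical admin client configuration; hosts and paths are placeholders.
bootstrap.servers=broker1:9093,broker2:9093
security.protocol=SSL
ssl.truststore.location=/etc/kafka/client.truststore.jks
ssl.truststore.password=changeit
request.timeout.ms=30000
default.api.timeout.ms=60000
```
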
diff --git a/25/generated/connect_config.html b/25/generated/connect_config.html
index 734faa0..e1752e7 100644
--- a/25/generated/connect_config.html
+++ b/25/generated/connect_config.html
@@ -1,8 +1,8 @@
 <ul class="config-list">
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="config.storage.topic" href="#config.storage.topic"></a>
-      config.storage.topic
+   <a class="anchor-link" id="config.storage.topic"></a>
+   <a href="#config.storage.topic">config.storage.topic</a>
 </h4>
 <p>The name of the Kafka topic where connector configurations are stored</p>
 <table class="data-table"><tbody>
@@ -14,8 +14,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="group.id" href="#group.id"></a>
-      group.id
+   <a class="anchor-link" id="group.id"></a>
+   <a href="#group.id">group.id</a>
 </h4>
 <p>A unique string that identifies the Connect cluster group this worker belongs to.</p>
 <table class="data-table"><tbody>
@@ -27,8 +27,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="key.converter" href="#key.converter"></a>
-      key.converter
+   <a class="anchor-link" id="key.converter"></a>
+   <a href="#key.converter">key.converter</a>
 </h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.</p>
 <table class="data-table"><tbody>
@@ -40,8 +40,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="offset.storage.topic" href="#offset.storage.topic"></a>
-      offset.storage.topic
+   <a class="anchor-link" id="offset.storage.topic"></a>
+   <a href="#offset.storage.topic">offset.storage.topic</a>
 </h4>
 <p>The name of the Kafka topic where connector offsets are stored</p>
 <table class="data-table"><tbody>
@@ -53,8 +53,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="status.storage.topic" href="#status.storage.topic"></a>
-      status.storage.topic
+   <a class="anchor-link" id="status.storage.topic"></a>
+   <a href="#status.storage.topic">status.storage.topic</a>
 </h4>
 <p>The name of the Kafka topic where connector and task status are stored</p>
 <table class="data-table"><tbody>
@@ -66,8 +66,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="value.converter" href="#value.converter"></a>
-      value.converter
+   <a class="anchor-link" id="value.converter"></a>
+   <a href="#value.converter">value.converter</a>
 </h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.</p>
 <table class="data-table"><tbody>
@@ -79,8 +79,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="bootstrap.servers" href="#bootstrap.servers"></a>
-      bootstrap.servers
+   <a class="anchor-link" id="bootstrap.servers"></a>
+   <a href="#bootstrap.servers">bootstrap.servers</a>
 </h4>
 <p>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).</p>
 <table class="data-table"><tbody>
@@ -92,8 +92,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="heartbeat.interval.ms" href="#heartbeat.interval.ms"></a>
-      heartbeat.interval.ms
+   <a class="anchor-link" id="heartbeat.interval.ms"></a>
+   <a href="#heartbeat.interval.ms">heartbeat.interval.ms</a>
 </h4>
 <p>The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than <code>session.timeout.ms</code>, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.</p>
 <table class="data-table"><tbody>
@@ -105,8 +105,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="rebalance.timeout.ms" href="#rebalance.timeout.ms"></a>
-      rebalance.timeout.ms
+   <a class="anchor-link" id="rebalance.timeout.ms"></a>
+   <a href="#rebalance.timeout.ms">rebalance.timeout.ms</a>
 </h4>
 <p>The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout is exceeded, then the worker will be removed from the group, which will cause offset commit failures.</p>
 <table class="data-table"><tbody>
@@ -118,8 +118,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="session.timeout.ms" href="#session.timeout.ms"></a>
-      session.timeout.ms
+   <a class="anchor-link" id="session.timeout.ms"></a>
+   <a href="#session.timeout.ms">session.timeout.ms</a>
 </h4>
 <p>The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by <code>group.min.session.timeout.ms</code> and <code>group.max.session.timeout.ms</code>.</p>
 <table class="data-table"><tbody>
@@ -131,8 +131,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.key.password" href="#ssl.key.password"></a>
-      ssl.key.password
+   <a class="anchor-link" id="ssl.key.password"></a>
+   <a href="#ssl.key.password">ssl.key.password</a>
 </h4>
 <p>The password of the private key in the key store file. This is optional for clients.</p>
 <table class="data-table"><tbody>
@@ -144,8 +144,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.keystore.location" href="#ssl.keystore.location"></a>
-      ssl.keystore.location
+   <a class="anchor-link" id="ssl.keystore.location"></a>
+   <a href="#ssl.keystore.location">ssl.keystore.location</a>
 </h4>
 <p>The location of the key store file. This is optional for clients and can be used for two-way authentication of the client.</p>
 <table class="data-table"><tbody>
@@ -157,8 +157,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.keystore.password" href="#ssl.keystore.password"></a>
-      ssl.keystore.password
+   <a class="anchor-link" id="ssl.keystore.password"></a>
+   <a href="#ssl.keystore.password">ssl.keystore.password</a>
 </h4>
 <p>The store password for the key store file. This is optional for clients and only needed if ssl.keystore.location is configured.</p>
 <table class="data-table"><tbody>
@@ -170,8 +170,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.truststore.location" href="#ssl.truststore.location"></a>
-      ssl.truststore.location
+   <a class="anchor-link" id="ssl.truststore.location"></a>
+   <a href="#ssl.truststore.location">ssl.truststore.location</a>
 </h4>
 <p>The location of the trust store file. </p>
 <table class="data-table"><tbody>
@@ -183,8 +183,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.truststore.password" href="#ssl.truststore.password"></a>
-      ssl.truststore.password
+   <a class="anchor-link" id="ssl.truststore.password"></a>
+   <a href="#ssl.truststore.password">ssl.truststore.password</a>
 </h4>
 <p>The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.</p>
 <table class="data-table"><tbody>
@@ -196,8 +196,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="client.dns.lookup" href="#client.dns.lookup"></a>
-      client.dns.lookup
+   <a class="anchor-link" id="client.dns.lookup"></a>
+   <a href="#client.dns.lookup">client.dns.lookup</a>
 </h4>
 <p>Controls how the client uses DNS lookups. If set to <code>use_all_dns_ips</code> then, when the lookup returns multiple IP addresses for a hostname, the client will attempt to connect to each of them in turn before failing the connection. Applies to both bootstrap and advertised servers. If the value is <code>resolve_canonical_bootstrap_servers_only</code>, each entry will be resolved and expanded into a list of canonical names.</p>
 <table class="data-table"><tbody>
@@ -209,8 +209,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="connections.max.idle.ms" href="#connections.max.idle.ms"></a>
-      connections.max.idle.ms
+   <a class="anchor-link" id="connections.max.idle.ms"></a>
+   <a href="#connections.max.idle.ms">connections.max.idle.ms</a>
 </h4>
 <p>Close idle connections after the number of milliseconds specified by this config.</p>
 <table class="data-table"><tbody>
@@ -222,8 +222,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="connector.client.config.override.policy" href="#connector.client.config.override.policy"></a>
-      connector.client.config.override.policy
+   <a class="anchor-link" id="connector.client.config.override.policy"></a>
+   <a href="#connector.client.config.override.policy">connector.client.config.override.policy</a>
 </h4>
 <p>Class name or alias of an implementation of <code>ConnectorClientConfigOverridePolicy</code>. Defines what client configurations can be overridden by the connector. The default implementation is <code>None</code>. The other possible policies in the framework include <code>All</code> and <code>Principal</code>.</p>
 <table class="data-table"><tbody>
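
When the policy permits it, connector configurations can override the worker's client settings by prefixing keys with producer.override. or consumer.override.. A sketch, assuming the worker sets connector.client.config.override.policy=All; the connector name and class are hypothetical:

```properties
# Hypothetical connector config; the name and connector class are placeholders.
name=example-sink
connector.class=com.example.ExampleSinkConnector
tasks.max=1
# Honored only if the worker's override policy allows it:
consumer.override.max.poll.records=100
```
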
@@ -235,8 +235,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="receive.buffer.bytes" href="#receive.buffer.bytes"></a>
-      receive.buffer.bytes
+   <a class="anchor-link" id="receive.buffer.bytes"></a>
+   <a href="#receive.buffer.bytes">receive.buffer.bytes</a>
 </h4>
 <p>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.</p>
 <table class="data-table"><tbody>
@@ -248,8 +248,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="request.timeout.ms" href="#request.timeout.ms"></a>
-      request.timeout.ms
+   <a class="anchor-link" id="request.timeout.ms"></a>
+   <a href="#request.timeout.ms">request.timeout.ms</a>
 </h4>
 <p>The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.</p>
 <table class="data-table"><tbody>
@@ -261,8 +261,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.client.callback.handler.class" href="#sasl.client.callback.handler.class"></a>
-      sasl.client.callback.handler.class
+   <a class="anchor-link" id="sasl.client.callback.handler.class"></a>
+   <a href="#sasl.client.callback.handler.class">sasl.client.callback.handler.class</a>
 </h4>
 <p>The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.</p>
 <table class="data-table"><tbody>
@@ -274,8 +274,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.jaas.config" href="#sasl.jaas.config"></a>
-      sasl.jaas.config
+   <a class="anchor-link" id="sasl.jaas.config"></a>
+   <a href="#sasl.jaas.config">sasl.jaas.config</a>
 </h4>
 <p>JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html">here</a>.
       The format for the value is: '<code>loginModuleClass controlFlag (optionName=optionValue)*;</code>'. For brokers, the config must be prefixed with the listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;</p>
@@ -288,8 +288,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.kerberos.service.name" href="#sasl.kerberos.service.name"></a>
-      sasl.kerberos.service.name
+   <a class="anchor-link" id="sasl.kerberos.service.name"></a>
+   <a href="#sasl.kerberos.service.name">sasl.kerberos.service.name</a>
 </h4>
 <p>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</p>
 <table class="data-table"><tbody>
@@ -301,8 +301,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.login.callback.handler.class" href="#sasl.login.callback.handler.class"></a>
-      sasl.login.callback.handler.class
+   <a class="anchor-link" id="sasl.login.callback.handler.class"></a>
+   <a href="#sasl.login.callback.handler.class">sasl.login.callback.handler.class</a>
 </h4>
 <p>The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler</p>
 <table class="data-table"><tbody>
@@ -314,8 +314,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.login.class" href="#sasl.login.class"></a>
-      sasl.login.class
+   <a class="anchor-link" id="sasl.login.class"></a>
+   <a href="#sasl.login.class">sasl.login.class</a>
 </h4>
 <p>The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin</p>
 <table class="data-table"><tbody>
@@ -327,8 +327,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.mechanism" href="#sasl.mechanism"></a>
-      sasl.mechanism
+   <a class="anchor-link" id="sasl.mechanism"></a>
+   <a href="#sasl.mechanism">sasl.mechanism</a>
 </h4>
 <p>SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.</p>
 <table class="data-table"><tbody>
@@ -340,8 +340,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="security.protocol" href="#security.protocol"></a>
-      security.protocol
+   <a class="anchor-link" id="security.protocol"></a>
+   <a href="#security.protocol">security.protocol</a>
 </h4>
 <p>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</p>
 <table class="data-table"><tbody>
@@ -353,8 +353,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="send.buffer.bytes" href="#send.buffer.bytes"></a>
-      send.buffer.bytes
+   <a class="anchor-link" id="send.buffer.bytes"></a>
+   <a href="#send.buffer.bytes">send.buffer.bytes</a>
 </h4>
 <p>The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.</p>
 <table class="data-table"><tbody>
@@ -366,8 +366,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.enabled.protocols" href="#ssl.enabled.protocols"></a>
-      ssl.enabled.protocols
+   <a class="anchor-link" id="ssl.enabled.protocols"></a>
+   <a href="#ssl.enabled.protocols">ssl.enabled.protocols</a>
 </h4>
 <p>The list of protocols enabled for SSL connections.</p>
 <table class="data-table"><tbody>
@@ -379,8 +379,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.keystore.type" href="#ssl.keystore.type"></a>
-      ssl.keystore.type
+   <a class="anchor-link" id="ssl.keystore.type"></a>
+   <a href="#ssl.keystore.type">ssl.keystore.type</a>
 </h4>
 <p>The file format of the key store file. This is optional for clients.</p>
 <table class="data-table"><tbody>
@@ -392,8 +392,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.protocol" href="#ssl.protocol"></a>
-      ssl.protocol
+   <a class="anchor-link" id="ssl.protocol"></a>
+   <a href="#ssl.protocol">ssl.protocol</a>
 </h4>
 <p>The SSL protocol used to generate the SSLContext. Default setting is TLSv1.2, which is fine for most cases. Allowed values in recent JVMs are TLSv1.2 and TLSv1.3. TLS, TLSv1.1, SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.</p>
 <table class="data-table"><tbody>
@@ -405,8 +405,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.provider" href="#ssl.provider"></a>
-      ssl.provider
+   <a class="anchor-link" id="ssl.provider"></a>
+   <a href="#ssl.provider">ssl.provider</a>
 </h4>
 <p>The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</p>
 <table class="data-table"><tbody>
@@ -418,8 +418,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.truststore.type" href="#ssl.truststore.type"></a>
-      ssl.truststore.type
+   <a class="anchor-link" id="ssl.truststore.type"></a>
+   <a href="#ssl.truststore.type">ssl.truststore.type</a>
 </h4>
 <p>The file format of the trust store file.</p>
 <table class="data-table"><tbody>
@@ -431,8 +431,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="worker.sync.timeout.ms" href="#worker.sync.timeout.ms"></a>
-      worker.sync.timeout.ms
+   <a class="anchor-link" id="worker.sync.timeout.ms"></a>
+   <a href="#worker.sync.timeout.ms">worker.sync.timeout.ms</a>
 </h4>
 <p>When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining.</p>
 <table class="data-table"><tbody>
@@ -444,8 +444,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="worker.unsync.backoff.ms" href="#worker.unsync.backoff.ms"></a>
-      worker.unsync.backoff.ms
+   <a class="anchor-link" id="worker.unsync.backoff.ms"></a>
+   <a href="#worker.unsync.backoff.ms">worker.unsync.backoff.ms</a>
 </h4>
 <p>When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining.</p>
 <table class="data-table"><tbody>
@@ -457,8 +457,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="access.control.allow.methods" href="#access.control.allow.methods"></a>
-      access.control.allow.methods
+   <a class="anchor-link" id="access.control.allow.methods"></a>
+   <a href="#access.control.allow.methods">access.control.allow.methods</a>
 </h4>
 <p>Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD.</p>
 <table class="data-table"><tbody>
@@ -470,8 +470,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="access.control.allow.origin" href="#access.control.allow.origin"></a>
-      access.control.allow.origin
+   <a class="anchor-link" id="access.control.allow.origin"></a>
+   <a href="#access.control.allow.origin">access.control.allow.origin</a>
 </h4>
 <p>Value to set the Access-Control-Allow-Origin header to for REST API requests. To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API.</p>
 <table class="data-table"><tbody>
@@ -483,8 +483,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="admin.listeners" href="#admin.listeners"></a>
-      admin.listeners
+   <a class="anchor-link" id="admin.listeners"></a>
+   <a href="#admin.listeners">admin.listeners</a>
 </h4>
 <p>List of comma-separated URIs the Admin REST API will listen on. The supported protocols are HTTP and HTTPS. An empty or blank string will disable this feature. The default behavior is to use the regular listener (specified by the 'listeners' property).</p>
 <table class="data-table"><tbody>
@@ -496,8 +496,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="client.id" href="#client.id"></a>
-      client.id
+   <a class="anchor-link" id="client.id"></a>
+   <a href="#client.id">client.id</a>
 </h4>
 <p>An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.</p>
 <table class="data-table"><tbody>
@@ -509,8 +509,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="config.providers" href="#config.providers"></a>
-      config.providers
+   <a class="anchor-link" id="config.providers"></a>
+   <a href="#config.providers">config.providers</a>
 </h4>
 <p>Comma-separated names of <code>ConfigProvider</code> classes, loaded and used in the order specified. Implementing the interface  <code>ConfigProvider</code> allows you to replace variable references in connector configurations, such as for externalized secrets. </p>
 <table class="data-table"><tbody>
@@ -522,8 +522,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="config.storage.replication.factor" href="#config.storage.replication.factor"></a>
-      config.storage.replication.factor
+   <a class="anchor-link" id="config.storage.replication.factor"></a>
+   <a href="#config.storage.replication.factor">config.storage.replication.factor</a>
 </h4>
 <p>Replication factor used when creating the configuration storage topic</p>
 <table class="data-table"><tbody>
@@ -535,8 +535,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="connect.protocol" href="#connect.protocol"></a>
-      connect.protocol
+   <a class="anchor-link" id="connect.protocol"></a>
+   <a href="#connect.protocol">connect.protocol</a>
 </h4>
 <p>Compatibility mode for Kafka Connect Protocol</p>
 <table class="data-table"><tbody>
@@ -548,8 +548,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="header.converter" href="#header.converter"></a>
-      header.converter
+   <a class="anchor-link" id="header.converter"></a>
+   <a href="#header.converter">header.converter</a>
 </h4>
 <p>HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.</p>
 <table class="data-table"><tbody>
@@ -561,8 +561,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="inter.worker.key.generation.algorithm" href="#inter.worker.key.generation.algorithm"></a>
-      inter.worker.key.generation.algorithm
+   <a class="anchor-link" id="inter.worker.key.generation.algorithm"></a>
+   <a href="#inter.worker.key.generation.algorithm">inter.worker.key.generation.algorithm</a>
 </h4>
 <p>The algorithm to use for generating internal request keys</p>
 <table class="data-table"><tbody>
@@ -574,8 +574,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="inter.worker.key.size" href="#inter.worker.key.size"></a>
-      inter.worker.key.size
+   <a class="anchor-link" id="inter.worker.key.size"></a>
+   <a href="#inter.worker.key.size">inter.worker.key.size</a>
 </h4>
 <p>The size of the key to use for signing internal requests, in bits. If null, the default key size for the key generation algorithm will be used.</p>
 <table class="data-table"><tbody>
@@ -587,8 +587,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="inter.worker.key.ttl.ms" href="#inter.worker.key.ttl.ms"></a>
-      inter.worker.key.ttl.ms
+   <a class="anchor-link" id="inter.worker.key.ttl.ms"></a>
+   <a href="#inter.worker.key.ttl.ms">inter.worker.key.ttl.ms</a>
 </h4>
 <p>The TTL of generated session keys used for internal request validation (in milliseconds)</p>
 <table class="data-table"><tbody>
@@ -600,8 +600,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="inter.worker.signature.algorithm" href="#inter.worker.signature.algorithm"></a>
-      inter.worker.signature.algorithm
+   <a class="anchor-link" id="inter.worker.signature.algorithm"></a>
+   <a href="#inter.worker.signature.algorithm">inter.worker.signature.algorithm</a>
 </h4>
 <p>The algorithm used to sign internal requests</p>
 <table class="data-table"><tbody>
@@ -613,8 +613,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="inter.worker.verification.algorithms" href="#inter.worker.verification.algorithms"></a>
-      inter.worker.verification.algorithms
+   <a class="anchor-link" id="inter.worker.verification.algorithms"></a>
+   <a href="#inter.worker.verification.algorithms">inter.worker.verification.algorithms</a>
 </h4>
 <p>A list of permitted algorithms for verifying internal requests</p>
 <table class="data-table"><tbody>
@@ -626,8 +626,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="internal.key.converter" href="#internal.key.converter"></a>
-      internal.key.converter
+   <a class="anchor-link" id="internal.key.converter"></a>
+   <a href="#internal.key.converter">internal.key.converter</a>
 </h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation. Deprecated; will be removed in an upcoming version.</p>
 <table class="data-table"><tbody>
@@ -639,8 +639,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="internal.value.converter" href="#internal.value.converter"></a>
-      internal.value.converter
+   <a class="anchor-link" id="internal.value.converter"></a>
+   <a href="#internal.value.converter">internal.value.converter</a>
 </h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation. Deprecated; will be removed in an upcoming version.</p>
 <table class="data-table"><tbody>
@@ -665,8 +665,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="metadata.max.age.ms" href="#metadata.max.age.ms"></a>
-      metadata.max.age.ms
+   <a class="anchor-link" id="metadata.max.age.ms"></a>
+   <a href="#metadata.max.age.ms">metadata.max.age.ms</a>
 </h4>
 <p>The period of time in milliseconds after which we force a refresh of metadata, even if we haven't seen any partition leadership changes, in order to proactively discover any new brokers or partitions.</p>
 <table class="data-table"><tbody>
@@ -678,8 +678,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="metric.reporters" href="#metric.reporters"></a>
-      metric.reporters
+   <a class="anchor-link" id="metric.reporters"></a>
+   <a href="#metric.reporters">metric.reporters</a>
 </h4>
 <p>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</p>
 <table class="data-table"><tbody>
@@ -691,8 +691,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="metrics.num.samples" href="#metrics.num.samples"></a>
-      metrics.num.samples
+   <a class="anchor-link" id="metrics.num.samples"></a>
+   <a href="#metrics.num.samples">metrics.num.samples</a>
 </h4>
 <p>The number of samples maintained to compute metrics.</p>
 <table class="data-table"><tbody>
@@ -704,8 +704,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="metrics.recording.level" href="#metrics.recording.level"></a>
-      metrics.recording.level
+   <a class="anchor-link" id="metrics.recording.level"></a>
+   <a href="#metrics.recording.level">metrics.recording.level</a>
 </h4>
 <p>The highest recording level for metrics.</p>
 <table class="data-table"><tbody>
@@ -717,8 +717,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="metrics.sample.window.ms" href="#metrics.sample.window.ms"></a>
-      metrics.sample.window.ms
+   <a class="anchor-link" id="metrics.sample.window.ms"></a>
+   <a href="#metrics.sample.window.ms">metrics.sample.window.ms</a>
 </h4>
 <p>The window of time a metrics sample is computed over.</p>
 <table class="data-table"><tbody>
@@ -730,8 +730,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="offset.flush.interval.ms" href="#offset.flush.interval.ms"></a>
-      offset.flush.interval.ms
+   <a class="anchor-link" id="offset.flush.interval.ms"></a>
+   <a href="#offset.flush.interval.ms">offset.flush.interval.ms</a>
 </h4>
 <p>Interval at which to try committing offsets for tasks.</p>
 <table class="data-table"><tbody>
@@ -743,8 +743,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="offset.flush.timeout.ms" href="#offset.flush.timeout.ms"></a>
-      offset.flush.timeout.ms
+   <a class="anchor-link" id="offset.flush.timeout.ms"></a>
+   <a href="#offset.flush.timeout.ms">offset.flush.timeout.ms</a>
 </h4>
 <p>Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data so it can be committed in a future attempt.</p>
 <table class="data-table"><tbody>
@@ -756,8 +756,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="offset.storage.partitions" href="#offset.storage.partitions"></a>
-      offset.storage.partitions
+   <a class="anchor-link" id="offset.storage.partitions"></a>
+   <a href="#offset.storage.partitions">offset.storage.partitions</a>
 </h4>
 <p>The number of partitions used when creating the offset storage topic.</p>
 <table class="data-table"><tbody>
@@ -769,8 +769,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="offset.storage.replication.factor" href="#offset.storage.replication.factor"></a>
-      offset.storage.replication.factor
+   <a class="anchor-link" id="offset.storage.replication.factor"></a>
+   <a href="#offset.storage.replication.factor">offset.storage.replication.factor</a>
 </h4>
 <p>Replication factor used when creating the offset storage topic.</p>
 <table class="data-table"><tbody>
@@ -782,8 +782,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="plugin.path" href="#plugin.path"></a>
-      plugin.path
+   <a class="anchor-link" id="plugin.path"></a>
+   <a href="#plugin.path">plugin.path</a>
 </h4>
 <p>List of paths separated by commas (,) that contain plugins (connectors, converters, transformations). The list should consist of top-level directories that include any combination of: <br>a) directories immediately containing jars with plugins and their dependencies<br>b) uber-jars with plugins and their dependencies<br>c) directories immediately containing the package directory structure of classes of plugins and their dependencies<br>Note: symlinks will be followed to discover dependencies or plugins.<br>Examples: plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors</p>
 <table class="data-table"><tbody>
@@ -795,8 +795,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="reconnect.backoff.max.ms" href="#reconnect.backoff.max.ms"></a>
-      reconnect.backoff.max.ms
+   <a class="anchor-link" id="reconnect.backoff.max.ms"></a>
+   <a href="#reconnect.backoff.max.ms">reconnect.backoff.max.ms</a>
 </h4>
 <p>The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.</p>
 <table class="data-table"><tbody>
@@ -808,8 +808,8 @@
 </li>
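As a worked example of the documented schedule (a sketch, not the client's actual implementation), assume reconnect.backoff.ms=50 and reconnect.backoff.max.ms=1000:

```java
public class BackoffSchedule {
    public static void main(String[] args) {
        long base = 50;   // reconnect.backoff.ms (assumed value)
        long max = 1000;  // reconnect.backoff.max.ms (assumed value)
        for (int failures = 0; failures < 6; failures++) {
            // Exponential growth per consecutive failure, capped at the maximum...
            long backoff = Math.min(max, base * (1L << failures)); // 50, 100, 200, 400, 800, 1000
            // ...then 20% random jitter to avoid connection storms.
            double jittered = backoff * (0.8 + Math.random() * 0.4);
            System.out.printf("failure %d -> ~%.0f ms%n", failures + 1, jittered);
        }
    }
}
```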
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="reconnect.backoff.ms" href="#reconnect.backoff.ms"></a>
-      reconnect.backoff.ms
+   <a class="anchor-link" id="reconnect.backoff.ms"></a>
+   <a href="#reconnect.backoff.ms">reconnect.backoff.ms</a>
 </h4>
 <p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</p>
 <table class="data-table"><tbody>
@@ -821,8 +821,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="rest.advertised.host.name" href="#rest.advertised.host.name"></a>
-      rest.advertised.host.name
+   <a class="anchor-link" id="rest.advertised.host.name"></a>
+   <a href="#rest.advertised.host.name">rest.advertised.host.name</a>
 </h4>
 <p>If set, this is the hostname that will be given out to other workers to connect to.</p>
 <table class="data-table"><tbody>
@@ -834,8 +834,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="rest.advertised.listener" href="#rest.advertised.listener"></a>
-      rest.advertised.listener
+   <a class="anchor-link" id="rest.advertised.listener"></a>
+   <a href="#rest.advertised.listener">rest.advertised.listener</a>
 </h4>
 <p>Sets the advertised listener (HTTP or HTTPS) which will be given to other workers to use.</p>
 <table class="data-table"><tbody>
@@ -847,8 +847,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="rest.advertised.port" href="#rest.advertised.port"></a>
-      rest.advertised.port
+   <a class="anchor-link" id="rest.advertised.port"></a>
+   <a href="#rest.advertised.port">rest.advertised.port</a>
 </h4>
 <p>If set, this is the port that will be given out to other workers to connect to.</p>
 <table class="data-table"><tbody>
@@ -860,8 +860,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="rest.extension.classes" href="#rest.extension.classes"></a>
-      rest.extension.classes
+   <a class="anchor-link" id="rest.extension.classes"></a>
+   <a href="#rest.extension.classes">rest.extension.classes</a>
 </h4>
 <p>Comma-separated names of <code>ConnectRestExtension</code> classes, loaded and called in the order specified. Implementing the <code>ConnectRestExtension</code> interface allows you to inject user-defined resources, such as filters, into Connect's REST API. Typically used to add custom capabilities like logging and security.</p>
 <table class="data-table"><tbody>
@@ -873,8 +873,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="rest.host.name" href="#rest.host.name"></a>
-      rest.host.name
+   <a class="anchor-link" id="rest.host.name"></a>
+   <a href="#rest.host.name">rest.host.name</a>
 </h4>
 <p>Hostname for the REST API. If this is set, it will only bind to this interface.</p>
 <table class="data-table"><tbody>
@@ -886,8 +886,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="rest.port" href="#rest.port"></a>
-      rest.port
+   <a class="anchor-link" id="rest.port"></a>
+   <a href="#rest.port">rest.port</a>
 </h4>
 <p>Port for the REST API to listen on.</p>
 <table class="data-table"><tbody>
@@ -899,8 +899,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="retry.backoff.ms" href="#retry.backoff.ms"></a>
-      retry.backoff.ms
+   <a class="anchor-link" id="retry.backoff.ms"></a>
+   <a href="#retry.backoff.ms">retry.backoff.ms</a>
 </h4>
 <p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table class="data-table"><tbody>
@@ -912,8 +912,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.kerberos.kinit.cmd" href="#sasl.kerberos.kinit.cmd"></a>
-      sasl.kerberos.kinit.cmd
+   <a class="anchor-link" id="sasl.kerberos.kinit.cmd"></a>
+   <a href="#sasl.kerberos.kinit.cmd">sasl.kerberos.kinit.cmd</a>
 </h4>
 <p>Kerberos kinit command path.</p>
 <table class="data-table"><tbody>
@@ -925,8 +925,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.kerberos.min.time.before.relogin" href="#sasl.kerberos.min.time.before.relogin"></a>
-      sasl.kerberos.min.time.before.relogin
+   <a class="anchor-link" id="sasl.kerberos.min.time.before.relogin"></a>
+   <a href="#sasl.kerberos.min.time.before.relogin">sasl.kerberos.min.time.before.relogin</a>
 </h4>
 <p>Login thread sleep time between refresh attempts.</p>
 <table class="data-table"><tbody>
@@ -938,8 +938,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.kerberos.ticket.renew.jitter" href="#sasl.kerberos.ticket.renew.jitter"></a>
-      sasl.kerberos.ticket.renew.jitter
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.jitter"></a>
+   <a href="#sasl.kerberos.ticket.renew.jitter">sasl.kerberos.ticket.renew.jitter</a>
 </h4>
 <p>Percentage of random jitter added to the renewal time.</p>
 <table class="data-table"><tbody>
@@ -951,8 +951,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.kerberos.ticket.renew.window.factor" href="#sasl.kerberos.ticket.renew.window.factor"></a>
-      sasl.kerberos.ticket.renew.window.factor
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.window.factor"></a>
+   <a href="#sasl.kerberos.ticket.renew.window.factor">sasl.kerberos.ticket.renew.window.factor</a>
 </h4>
 <p>The login thread will sleep until the specified window factor of time from the last refresh to the ticket's expiry has been reached, at which time it will try to renew the ticket.</p>
 <table class="data-table"><tbody>
@@ -964,8 +964,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.login.refresh.buffer.seconds" href="#sasl.login.refresh.buffer.seconds"></a>
-      sasl.login.refresh.buffer.seconds
+   <a class="anchor-link" id="sasl.login.refresh.buffer.seconds"></a>
+   <a href="#sasl.login.refresh.buffer.seconds">sasl.login.refresh.buffer.seconds</a>
 </h4>
 <p>The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -977,8 +977,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.login.refresh.min.period.seconds" href="#sasl.login.refresh.min.period.seconds"></a>
-      sasl.login.refresh.min.period.seconds
+   <a class="anchor-link" id="sasl.login.refresh.min.period.seconds"></a>
+   <a href="#sasl.login.refresh.min.period.seconds">sasl.login.refresh.min.period.seconds</a>
 </h4>
 <p>The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -990,8 +990,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.login.refresh.window.factor" href="#sasl.login.refresh.window.factor"></a>
-      sasl.login.refresh.window.factor
+   <a class="anchor-link" id="sasl.login.refresh.window.factor"></a>
+   <a href="#sasl.login.refresh.window.factor">sasl.login.refresh.window.factor</a>
 </h4>
 <p>The login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -1003,8 +1003,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="sasl.login.refresh.window.jitter" href="#sasl.login.refresh.window.jitter"></a>
-      sasl.login.refresh.window.jitter
+   <a class="anchor-link" id="sasl.login.refresh.window.jitter"></a>
+   <a href="#sasl.login.refresh.window.jitter">sasl.login.refresh.window.jitter</a>
 </h4>
 <p>The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -1016,8 +1016,8 @@
 </li>
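Taken together, the sasl.login.refresh.* settings above determine when the refresh thread wakes up. A worked example under the stated defaults, assuming a credential with a 3600-second lifetime:

```java
public class RefreshWindow {
    public static void main(String[] args) {
        double lifetime = 3600;      // credential lifetime in seconds (assumed)
        double windowFactor = 0.8;   // sasl.login.refresh.window.factor default
        double windowJitter = 0.05;  // sasl.login.refresh.window.jitter default

        // Sleep until 80% of the lifetime has passed, plus up to 5% random jitter:
        double refreshAt = lifetime * (windowFactor + Math.random() * windowJitter);
        System.out.printf("refresh after ~%.0f s (between 2880 and 3060 s)%n", refreshAt);

        // sasl.login.refresh.buffer.seconds (default 300) would move the refresh
        // earlier only if it landed within 300 s of expiry; 3060 <= 3300 here, so
        // the buffer does not apply.
    }
}
```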
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="scheduled.rebalance.max.delay.ms" href="#scheduled.rebalance.max.delay.ms"></a>
-      scheduled.rebalance.max.delay.ms
+   <a class="anchor-link" id="scheduled.rebalance.max.delay.ms"></a>
+   <a href="#scheduled.rebalance.max.delay.ms">scheduled.rebalance.max.delay.ms</a>
 </h4>
 <p>The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period, the connectors and tasks of the departed workers remain unassigned.</p>
 <table class="data-table"><tbody>
@@ -1029,8 +1029,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.cipher.suites" href="#ssl.cipher.suites"></a>
-      ssl.cipher.suites
+   <a class="anchor-link" id="ssl.cipher.suites"></a>
+   <a href="#ssl.cipher.suites">ssl.cipher.suites</a>
 </h4>
 <p>A list of cipher suites. A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithms used to negotiate the security settings for a network connection using the TLS or SSL network protocol. By default all the available cipher suites are supported.</p>
 <table class="data-table"><tbody>
@@ -1042,8 +1042,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.client.auth" href="#ssl.client.auth"></a>
-      ssl.client.auth
+   <a class="anchor-link" id="ssl.client.auth"></a>
+   <a href="#ssl.client.auth">ssl.client.auth</a>
 </h4>
 <p>Configures the Kafka broker to request client authentication. The following settings are common:</p><ul> <li><code>ssl.client.auth=required</code> Client authentication is required.</li> <li><code>ssl.client.auth=requested</code> Client authentication is optional. Unlike <code>required</code>, with this option the client can choose not to provide authentication information about itself.</li> <li><code>ssl.client.auth=none</code> Client authentication is not needed.</li></ul>
 <table class="data-table"><tbody>
@@ -1055,8 +1055,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.endpoint.identification.algorithm" href="#ssl.endpoint.identification.algorithm"></a>
-      ssl.endpoint.identification.algorithm
+   <a class="anchor-link" id="ssl.endpoint.identification.algorithm"></a>
+   <a href="#ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a>
 </h4>
 <p>The endpoint identification algorithm used to validate the server hostname against the server certificate.</p>
 <table class="data-table"><tbody>
@@ -1068,8 +1068,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.keymanager.algorithm" href="#ssl.keymanager.algorithm"></a>
-      ssl.keymanager.algorithm
+   <a class="anchor-link" id="ssl.keymanager.algorithm"></a>
+   <a href="#ssl.keymanager.algorithm">ssl.keymanager.algorithm</a>
 </h4>
 <p>The algorithm used by the key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.</p>
 <table class="data-table"><tbody>
@@ -1081,8 +1081,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.secure.random.implementation" href="#ssl.secure.random.implementation"></a>
-      ssl.secure.random.implementation
+   <a class="anchor-link" id="ssl.secure.random.implementation"></a>
+   <a href="#ssl.secure.random.implementation">ssl.secure.random.implementation</a>
 </h4>
 <p>The SecureRandom PRNG implementation to use for SSL cryptography operations. </p>
 <table class="data-table"><tbody>
@@ -1094,8 +1094,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="ssl.trustmanager.algorithm" href="#ssl.trustmanager.algorithm"></a>
-      ssl.trustmanager.algorithm
+   <a class="anchor-link" id="ssl.trustmanager.algorithm"></a>
+   <a href="#ssl.trustmanager.algorithm">ssl.trustmanager.algorithm</a>
 </h4>
 <p>The algorithm used by the trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.</p>
 <table class="data-table"><tbody>
@@ -1107,8 +1107,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="status.storage.partitions" href="#status.storage.partitions"></a>
-      status.storage.partitions
+   <a class="anchor-link" id="status.storage.partitions"></a>
+   <a href="#status.storage.partitions">status.storage.partitions</a>
 </h4>
 <p>The number of partitions used when creating the status storage topic.</p>
 <table class="data-table"><tbody>
@@ -1120,8 +1120,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="status.storage.replication.factor" href="#status.storage.replication.factor"></a>
-      status.storage.replication.factor
+   <a class="anchor-link" id="status.storage.replication.factor"></a>
+   <a href="#status.storage.replication.factor">status.storage.replication.factor</a>
 </h4>
 <p>Replication factor used when creating the status storage topic.</p>
 <table class="data-table"><tbody>
@@ -1133,8 +1133,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="task.shutdown.graceful.timeout.ms" href="#task.shutdown.graceful.timeout.ms"></a>
-      task.shutdown.graceful.timeout.ms
+   <a class="anchor-link" id="task.shutdown.graceful.timeout.ms"></a>
+   <a href="#task.shutdown.graceful.timeout.ms">task.shutdown.graceful.timeout.ms</a>
 </h4>
 <p>Amount of time to wait for tasks to shut down gracefully. This is the total amount of time, not per task. Shutdown is triggered for all tasks, and then they are waited on sequentially.</p>
 <table class="data-table"><tbody>
@@ -1146,8 +1146,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="topic.tracking.allow.reset" href="#topic.tracking.allow.reset"></a>
-      topic.tracking.allow.reset
+   <a class="anchor-link" id="topic.tracking.allow.reset"></a>
+   <a href="#topic.tracking.allow.reset">topic.tracking.allow.reset</a>
 </h4>
 <p>If set to true, it allows user requests to reset the set of active topics per connector.</p>
 <table class="data-table"><tbody>
@@ -1159,8 +1159,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-  <a class="anchor-link" id="topic.tracking.enable" href="#topic.tracking.enable"></a>
-      topic.tracking.enable
+   <a class="anchor-link" id="topic.tracking.enable"></a>
+   <a href="#topic.tracking.enable">topic.tracking.enable</a>
 </h4>
 <p>Enable tracking the set of active topics per connector during runtime.</p>
 <table class="data-table"><tbody>
diff --git a/25/generated/connect_transforms.html b/25/generated/connect_transforms.html
index e7ab19e..efa516d 100644
--- a/25/generated/connect_transforms.html
+++ b/25/generated/connect_transforms.html
@@ -5,9 +5,9 @@
 <ul class="config-list">
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="offset.field" href="#offset.field"></a>
-      offset.field
-    </h4>
+   <a class="anchor-link" id="offset.field"></a>
+   <a href="#offset.field">offset.field</a>
+</h4>
 <p>Field name for Kafka offset - only applicable to sink connectors.<br/>Suffix with <code>!</code> to make this a required field, or <code>?</code> to keep it optional (the default).</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -18,9 +18,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="partition.field" href="#partition.field"></a>
-      partition.field
-    </h4>
+   <a class="anchor-link" id="partition.field"></a>
+   <a href="#partition.field">partition.field</a>
+</h4>
 <p>Field name for Kafka partition. Suffix with <code>!</code> to make this a required field, or <code>?</code> to keep it optional (the default).</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -31,9 +31,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="static.field" href="#static.field"></a>
-      static.field
-    </h4>
+   <a class="anchor-link" id="static.field"></a>
+   <a href="#static.field">static.field</a>
+</h4>
 <p>Field name for static data field. Suffix with <code>!</code> to make this a required field, or <code>?</code> to keep it optional (the default).</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -44,9 +44,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="static.value" href="#static.value"></a>
-      static.value
-    </h4>
+   <a class="anchor-link" id="static.value"></a>
+   <a href="#static.value">static.value</a>
+</h4>
 <p>Static field value, if field name configured.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -57,9 +57,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="timestamp.field" href="#timestamp.field"></a>
-      timestamp.field
-    </h4>
+   <a class="anchor-link" id="timestamp.field"></a>
+   <a href="#timestamp.field">timestamp.field</a>
+</h4>
 <p>Field name for record timestamp. Suffix with <code>!</code> to make this a required field, or <code>?</code> to keep it optional (the default).</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -70,9 +70,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="topic.field" href="#topic.field"></a>
-      topic.field
-    </h4>
+   <a class="anchor-link" id="topic.field"></a>
+   <a href="#topic.field">topic.field</a>
+</h4>
 <p>Field name for Kafka topic. Suffix with <code>!</code> to make this a required field, or <code>?</code> to keep it optional (the default).</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -222,9 +222,9 @@
 <ul class="config-list">
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="schema.name" href="#schema.name"></a>
-      schema.name
-    </h4>
+   <a class="anchor-link" id="schema.name"></a>
+   <a href="#schema.name">schema.name</a>
+</h4>
 <p>Schema name to set.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -235,9 +235,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="schema.version" href="#schema.version"></a>
-      schema.version
-    </h4>
+   <a class="anchor-link" id="schema.version"></a>
+   <a href="#schema.version">schema.version</a>
+</h4>
 <p>Schema version to set.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -256,9 +256,9 @@
 <ul class="config-list">
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="timestamp.format" href="#timestamp.format"></a>
-      timestamp.format
-    </h4>
+   <a class="anchor-link" id="timestamp.format"></a>
+   <a href="#timestamp.format">timestamp.format</a>
+</h4>
 <p>Format string for the timestamp that is compatible with <code>java.text.SimpleDateFormat</code>.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -269,9 +269,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="topic.format" href="#topic.format"></a>
-      topic.format
-    </h4>
+   <a class="anchor-link" id="topic.format"></a>
+   <a href="#topic.format">topic.format</a>
+</h4>
 <p>Format string which can contain <code>${topic}</code> and <code>${timestamp}</code> as placeholders for the topic and timestamp, respectively.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
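A sketch of how the two placeholders resolve, assuming topic.format=${topic}-${timestamp} and timestamp.format=yyyyMMdd (the transform performs the equivalent substitution internally):

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class TopicFormatDemo {
    public static void main(String[] args) {
        String topicFormat = "${topic}-${timestamp}";            // assumed topic.format
        SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMdd"); // assumed timestamp.format
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));

        String topic = "orders";               // incoming record's topic
        long recordTimestamp = 1577836800000L; // 2020-01-01T00:00:00Z

        String routed = topicFormat
                .replace("${topic}", topic)
                .replace("${timestamp}", fmt.format(new Date(recordTimestamp)));
        System.out.println(routed); // orders-20200101
    }
}
```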
@@ -366,9 +366,9 @@
 <ul class="config-list">
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="target.type" href="#target.type"></a>
-      target.type
-    </h4>
+   <a class="anchor-link" id="target.type"></a>
+   <a href="#target.type">target.type</a>
+</h4>
 <p>The desired timestamp representation: string, unix, Date, Time, or Timestamp.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
diff --git a/25/generated/consumer_config.html b/25/generated/consumer_config.html
index 7815bfd..0977ec7 100644
--- a/25/generated/consumer_config.html
+++ b/25/generated/consumer_config.html
@@ -1,8 +1,8 @@
 <ul class="config-list">
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="key.deserializer" href="#key.deserializer"></a>
-      key.deserializer
+   <a class="anchor-link" id="key.deserializer"></a>
+   <a href="#key.deserializer">key.deserializer</a>
 </h4>
 <p>Deserializer class for key that implements the <code>org.apache.kafka.common.serialization.Deserializer</code> interface.</p>
 <table class="data-table"><tbody>
@@ -14,8 +14,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="value.deserializer" href="#value.deserializer"></a>
-      value.deserializer
+   <a class="anchor-link" id="value.deserializer"></a>
+   <a href="#value.deserializer">value.deserializer</a>
 </h4>
 <p>Deserializer class for value that implements the <code>org.apache.kafka.common.serialization.Deserializer</code> interface.</p>
 <table class="data-table"><tbody>
@@ -27,8 +27,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="bootstrap.servers" href="#bootstrap.servers"></a>
-      bootstrap.servers
+   <a class="anchor-link" id="bootstrap.servers"></a>
+   <a href="#bootstrap.servers">bootstrap.servers</a>
 </h4>
 <p>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).</p>
 <table class="data-table"><tbody>
@@ -40,8 +40,8 @@
 </li>
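The deserializer and bootstrap settings above are the minimum needed to construct a client. A minimal sketch (the servers, group, and topic names are placeholders):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MinimalConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "host1:9092,host2:9092"); // placeholder hosts
        props.put("group.id", "example-group");                  // needed for subscribe()
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("%s: %s%n", record.key(), record.value());
        }
    }
}
```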
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="fetch.min.bytes" href="#fetch.min.bytes"></a>
-      fetch.min.bytes
+   <a class="anchor-link" id="fetch.min.bytes"></a>
+   <a href="#fetch.min.bytes">fetch.min.bytes</a>
 </h4>
 <p>The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate which can improve server throughput a bit at the cost of some additional latency.</p>
 <table class="data-table"><tbody>
@@ -53,8 +53,8 @@
 </li>
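For example, a throughput-oriented consumer might raise this together with fetch.max.wait.ms (described later in this list). A configuration fragment with illustrative values, not recommendations:

```java
Properties props = new Properties();
// Ask the broker to wait for 64 KB of data, or until fetch.max.wait.ms elapses,
// before answering a fetch: fewer, larger responses at the cost of some latency.
props.put("fetch.min.bytes", 65536);
props.put("fetch.max.wait.ms", 500);
```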
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="group.id" href="#group.id"></a>
-      group.id
+   <a class="anchor-link" id="group.id"></a>
+   <a href="#group.id">group.id</a>
 </h4>
 <p>A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using <code>subscribe(topic)</code> or the Kafka-based offset management strategy.</p>
 <table class="data-table"><tbody>
@@ -66,8 +66,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="heartbeat.interval.ms" href="#heartbeat.interval.ms"></a>
-      heartbeat.interval.ms
+   <a class="anchor-link" id="heartbeat.interval.ms"></a>
+   <a href="#heartbeat.interval.ms">heartbeat.interval.ms</a>
 </h4>
 <p>The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than <code>session.timeout.ms</code>, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.</p>
 <table class="data-table"><tbody>
@@ -79,8 +79,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="max.partition.fetch.bytes" href="#max.partition.fetch.bytes"></a>
-      max.partition.fetch.bytes
+   <a class="anchor-link" id="max.partition.fetch.bytes"></a>
+   <a href="#max.partition.fetch.bytes">max.partition.fetch.bytes</a>
 </h4>
 <p>The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config). See fetch.max.bytes for limiting the consumer request size.</p>
 <table class="data-table"><tbody>
@@ -92,8 +92,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="session.timeout.ms" href="#session.timeout.ms"></a>
-      session.timeout.ms
+   <a class="anchor-link" id="session.timeout.ms"></a>
+   <a href="#session.timeout.ms">session.timeout.ms</a>
 </h4>
 <p>The timeout used to detect client failures when using Kafka's group management facility. The client sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this client from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by <code>group.min.session.timeout.ms</code> and <code>group.max.session.timeout.ms</code>.</p>
 <table class="data-table"><tbody>
@@ -105,8 +105,8 @@
 </li>
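A pairing consistent with the one-third guideline above (illustrative values; session.timeout.ms must also fall within the broker's group.min.session.timeout.ms/group.max.session.timeout.ms range):

```java
Properties props = new Properties();
props.put("session.timeout.ms", 30000);    // 30 s before the broker declares the consumer dead
props.put("heartbeat.interval.ms", 10000); // one third of session.timeout.ms
```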
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.key.password" href="#ssl.key.password"></a>
-      ssl.key.password
+   <a class="anchor-link" id="ssl.key.password"></a>
+   <a href="#ssl.key.password">ssl.key.password</a>
 </h4>
 <p>The password of the private key in the key store file. This is optional for the client.</p>
 <table class="data-table"><tbody>
@@ -118,8 +118,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.keystore.location" href="#ssl.keystore.location"></a>
-      ssl.keystore.location
+   <a class="anchor-link" id="ssl.keystore.location"></a>
+   <a href="#ssl.keystore.location">ssl.keystore.location</a>
 </h4>
 <p>The location of the key store file. This is optional for the client and can be used for two-way client authentication.</p>
 <table class="data-table"><tbody>
@@ -131,8 +131,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.keystore.password" href="#ssl.keystore.password"></a>
-      ssl.keystore.password
+   <a class="anchor-link" id="ssl.keystore.password"></a>
+   <a href="#ssl.keystore.password">ssl.keystore.password</a>
 </h4>
 <p>The store password for the key store file. This is optional for the client and only needed if ssl.keystore.location is configured.</p>
 <table class="data-table"><tbody>
@@ -144,8 +144,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.truststore.location" href="#ssl.truststore.location"></a>
-      ssl.truststore.location
+   <a class="anchor-link" id="ssl.truststore.location"></a>
+   <a href="#ssl.truststore.location">ssl.truststore.location</a>
 </h4>
 <p>The location of the trust store file. </p>
 <table class="data-table"><tbody>
@@ -157,8 +157,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.truststore.password" href="#ssl.truststore.password"></a>
-      ssl.truststore.password
+   <a class="anchor-link" id="ssl.truststore.password"></a>
+   <a href="#ssl.truststore.password">ssl.truststore.password</a>
 </h4>
 <p>The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.</p>
 <table class="data-table"><tbody>
@@ -170,8 +170,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="allow.auto.create.topics" href="#allow.auto.create.topics"></a>
-      allow.auto.create.topics
+   <a class="anchor-link" id="allow.auto.create.topics"></a>
+   <a href="#allow.auto.create.topics">allow.auto.create.topics</a>
 </h4>
 <p>Allow automatic topic creation on the broker when subscribing to or assigning a topic. A topic being subscribed to will be automatically created only if the broker allows for it via the <code>auto.create.topics.enable</code> broker configuration. This configuration must be set to <code>false</code> when using brokers older than 0.11.0.</p>
 <table class="data-table"><tbody>
@@ -183,8 +183,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="auto.offset.reset" href="#auto.offset.reset"></a>
-      auto.offset.reset
+   <a class="anchor-link" id="auto.offset.reset"></a>
+   <a href="#auto.offset.reset">auto.offset.reset</a>
 </h4>
 <p>What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted): <ul><li>earliest: automatically reset the offset to the earliest offset</li><li>latest: automatically reset the offset to the latest offset</li><li>none: throw exception to the consumer if no previous offset is found for the consumer's group</li><li>anything else: throw exception to the consumer.</li></ul></p>
 <table class="data-table"><tbody>
@@ -196,8 +196,8 @@
 </li>
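For instance, a new consumer group that should start from the beginning of each partition rather than from the end (a configuration fragment; earliest is one of the values listed above):

```java
Properties props = new Properties();
// With no committed offset for this group, begin at the earliest available record.
props.put("auto.offset.reset", "earliest");
```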
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="client.dns.lookup" href="#client.dns.lookup"></a>
-      client.dns.lookup
+   <a class="anchor-link" id="client.dns.lookup"></a>
+   <a href="#client.dns.lookup">client.dns.lookup</a>
 </h4>
 <p>Controls how the client uses DNS lookups. If set to <code>use_all_dns_ips</code>, then when the lookup returns multiple IP addresses for a hostname, connection attempts will be made to all of them in turn before the connection is failed. Applies to both bootstrap and advertised servers. If the value is <code>resolve_canonical_bootstrap_servers_only</code>, each entry will be resolved and expanded into a list of canonical names.</p>
 <table class="data-table"><tbody>
@@ -209,8 +209,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="connections.max.idle.ms" href="#connections.max.idle.ms"></a>
-      connections.max.idle.ms
+   <a class="anchor-link" id="connections.max.idle.ms"></a>
+   <a href="#connections.max.idle.ms">connections.max.idle.ms</a>
 </h4>
 <p>Close idle connections after the number of milliseconds specified by this config.</p>
 <table class="data-table"><tbody>
@@ -222,8 +222,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="default.api.timeout.ms" href="#default.api.timeout.ms"></a>
-      default.api.timeout.ms
+   <a class="anchor-link" id="default.api.timeout.ms"></a>
+   <a href="#default.api.timeout.ms">default.api.timeout.ms</a>
 </h4>
 <p>Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operations that do not specify a <code>timeout</code> parameter.</p>
 <table class="data-table"><tbody>
@@ -235,8 +235,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="enable.auto.commit" href="#enable.auto.commit"></a>
-      enable.auto.commit
+   <a class="anchor-link" id="enable.auto.commit"></a>
+   <a href="#enable.auto.commit">enable.auto.commit</a>
 </h4>
 <p>If true, the consumer's offset will be periodically committed in the background.</p>
 <table class="data-table"><tbody>
@@ -248,8 +248,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="exclude.internal.topics" href="#exclude.internal.topics"></a>
-      exclude.internal.topics
+   <a class="anchor-link" id="exclude.internal.topics"></a>
+   <a href="#exclude.internal.topics">exclude.internal.topics</a>
 </h4>
 <p>Whether internal topics matching a subscribed pattern should be excluded from the subscription. It is always possible to explicitly subscribe to an internal topic.</p>
 <table class="data-table"><tbody>
@@ -261,8 +261,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="fetch.max.bytes" href="#fetch.max.bytes"></a>
-      fetch.max.bytes
+   <a class="anchor-link" id="fetch.max.bytes"></a>
+   <a href="#fetch.max.bytes">fetch.max.bytes</a>
 </h4>
 <p>The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config). Note that the consumer performs multiple fetches in parallel.</p>
 <table class="data-table"><tbody>
@@ -274,8 +274,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="group.instance.id" href="#group.instance.id"></a>
-      group.instance.id
+   <a class="anchor-link" id="group.instance.id"></a>
+   <a href="#group.instance.id">group.instance.id</a>
 </h4>
 <p>A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability (e.g. process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior.</p>
 <table class="data-table"><tbody>
@@ -287,8 +287,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="isolation.level" href="#isolation.level"></a>
-      isolation.level
+   <a class="anchor-link" id="isolation.level"></a>
+   <a href="#isolation.level">isolation.level</a>
 </h4>
 <p>Controls how to read messages written transactionally. If set to <code>read_committed</code>, consumer.poll() will only return transactional messages which have been committed. If set to <code>read_uncommitted</code> (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode.</p><p>Messages will always be returned in offset order. Hence, in <code>read_committed</code> mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular, any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, <code>read_committed</code> consumers will not be able to read up to the high watermark when there are in-flight transactions.</p><p>Further, when in <code>read_committed</code> mode, the seekToEnd method will return the LSO.</p>
 <table class="data-table"><tbody>
@@ -300,8 +300,8 @@
 </li>
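A fragment for a consumer that should only ever see committed transactional messages:

```java
Properties props = new Properties();
// poll() will not return records past the last stable offset (LSO) while a
// transaction is still open on the partition.
props.put("isolation.level", "read_committed");
```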
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="max.poll.interval.ms" href="#max.poll.interval.ms"></a>
-      max.poll.interval.ms
+   <a class="anchor-link" id="max.poll.interval.ms"></a>
+   <a href="#max.poll.interval.ms">max.poll.interval.ms</a>
 </h4>
 <p>The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. For consumers using a non-null <code>group.instance.id</code> which reach this timeout, partitions will not be immediately reassigned. Instead, the consumer will stop sending heartbeats and partitions will be reassigned after expiration of <code>session.timeout.ms</code>. This mirrors the behavior of a static consumer which has shut down.</p>
 <table class="data-table"><tbody>
@@ -313,8 +313,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="max.poll.records" href="#max.poll.records"></a>
-      max.poll.records
+   <a class="anchor-link" id="max.poll.records"></a>
+   <a href="#max.poll.records">max.poll.records</a>
 </h4>
 <p>The maximum number of records returned in a single call to poll().</p>
 <table class="data-table"><tbody>
@@ -326,8 +326,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="partition.assignment.strategy" href="#partition.assignment.strategy"></a>
-      partition.assignment.strategy
+   <a class="anchor-link" id="partition.assignment.strategy"></a>
+   <a href="#partition.assignment.strategy">partition.assignment.strategy</a>
 </h4>
 <p>A list of class names or class types, ordered by preference, of supported assignors responsible for the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used. Implementing the <code>org.apache.kafka.clients.consumer.ConsumerPartitionAssignor</code> interface allows you to plug in a custom assignment strategy.</p>
 <table class="data-table"><tbody>
@@ -339,8 +339,8 @@
 </li>
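For example, to opt into the cooperative sticky assignor shipped with the client (a fragment; if this setting is left alone the default range assignor is used):

```java
Properties props = new Properties();
// Use incremental (cooperative) rebalancing instead of the default range assignor.
props.put("partition.assignment.strategy",
        "org.apache.kafka.clients.consumer.CooperativeStickyAssignor");
```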
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="receive.buffer.bytes" href="#receive.buffer.bytes"></a>
-      receive.buffer.bytes
+   <a class="anchor-link" id="receive.buffer.bytes"></a>
+   <a href="#receive.buffer.bytes">receive.buffer.bytes</a>
 </h4>
 <p>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.</p>
 <table class="data-table"><tbody>
@@ -352,8 +352,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="request.timeout.ms" href="#request.timeout.ms"></a>
-      request.timeout.ms
+   <a class="anchor-link" id="request.timeout.ms"></a>
+   <a href="#request.timeout.ms">request.timeout.ms</a>
 </h4>
 <p>This configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary or fail the request if retries are exhausted.</p>
 <table class="data-table"><tbody>
@@ -365,8 +365,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.client.callback.handler.class" href="#sasl.client.callback.handler.class"></a>
-      sasl.client.callback.handler.class
+   <a class="anchor-link" id="sasl.client.callback.handler.class"></a>
+   <a href="#sasl.client.callback.handler.class">sasl.client.callback.handler.class</a>
 </h4>
 <p>The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.</p>
 <table class="data-table"><tbody>
@@ -378,8 +378,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.jaas.config" href="#sasl.jaas.config"></a>
-      sasl.jaas.config
+   <a class="anchor-link" id="sasl.jaas.config"></a>
+   <a href="#sasl.jaas.config">sasl.jaas.config</a>
 </h4>
 <p>JAAS login context parameters for SASL connections in the format used by JAAS configuration files. The JAAS configuration file format is described <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html">here</a>. The format for the value is: '<code>loginModuleClass controlFlag (optionName=optionValue)*;</code>'. For brokers, the config must be prefixed with the listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;</p>
 <table class="data-table"><tbody>
@@ -391,8 +391,8 @@
 </li>
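For a client using SASL/PLAIN, the whole value is a single JAAS line. A sketch, where the login module is the one shipped with Kafka and the credentials are placeholders:

```java
Properties props = new Properties();
props.put("security.protocol", "SASL_PLAINTEXT");
props.put("sasl.mechanism", "PLAIN");
// loginModuleClass controlFlag (optionName=optionValue)*; -- note the trailing semicolon.
props.put("sasl.jaas.config",
        "org.apache.kafka.common.security.plain.PlainLoginModule required "
        + "username=\"alice\" password=\"alice-secret\";");
```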
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.service.name" href="#sasl.kerberos.service.name"></a>
-      sasl.kerberos.service.name
+   <a class="anchor-link" id="sasl.kerberos.service.name"></a>
+   <a href="#sasl.kerberos.service.name">sasl.kerberos.service.name</a>
 </h4>
 <p>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</p>
 <table class="data-table"><tbody>
@@ -404,8 +404,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.callback.handler.class" href="#sasl.login.callback.handler.class"></a>
-      sasl.login.callback.handler.class
+   <a class="anchor-link" id="sasl.login.callback.handler.class"></a>
+   <a href="#sasl.login.callback.handler.class">sasl.login.callback.handler.class</a>
 </h4>
 <p>The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, the login callback handler config must be prefixed with the listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler</p>
 <table class="data-table"><tbody>
@@ -417,8 +417,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.class" href="#sasl.login.class"></a>
-      sasl.login.class
+   <a class="anchor-link" id="sasl.login.class"></a>
+   <a href="#sasl.login.class">sasl.login.class</a>
 </h4>
 <p>The fully qualified name of a class that implements the Login interface. For brokers, the login config must be prefixed with the listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin</p>
 <table class="data-table"><tbody>
@@ -430,8 +430,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.mechanism" href="#sasl.mechanism"></a>
-      sasl.mechanism
+   <a class="anchor-link" id="sasl.mechanism"></a>
+   <a href="#sasl.mechanism">sasl.mechanism</a>
 </h4>
 <p>SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.</p>
 <table class="data-table"><tbody>
@@ -443,8 +443,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="security.protocol" href="#security.protocol"></a>
-      security.protocol
+   <a class="anchor-link" id="security.protocol"></a>
+   <a href="#security.protocol">security.protocol</a>
 </h4>
 <p>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</p>
 <table class="data-table"><tbody>
@@ -456,8 +456,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="send.buffer.bytes" href="#send.buffer.bytes"></a>
-      send.buffer.bytes
+   <a class="anchor-link" id="send.buffer.bytes"></a>
+   <a href="#send.buffer.bytes">send.buffer.bytes</a>
 </h4>
 <p>The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.</p>
 <table class="data-table"><tbody>
@@ -469,8 +469,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.enabled.protocols" href="#ssl.enabled.protocols"></a>
-      ssl.enabled.protocols
+   <a class="anchor-link" id="ssl.enabled.protocols"></a>
+   <a href="#ssl.enabled.protocols">ssl.enabled.protocols</a>
 </h4>
 <p>The list of protocols enabled for SSL connections.</p>
 <table class="data-table"><tbody>
@@ -482,8 +482,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.keystore.type" href="#ssl.keystore.type"></a>
-      ssl.keystore.type
+   <a class="anchor-link" id="ssl.keystore.type"></a>
+   <a href="#ssl.keystore.type">ssl.keystore.type</a>
 </h4>
 <p>The file format of the key store file. This is optional for the client.</p>
 <table class="data-table"><tbody>
@@ -495,8 +495,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.protocol" href="#ssl.protocol"></a>
-      ssl.protocol
+   <a class="anchor-link" id="ssl.protocol"></a>
+   <a href="#ssl.protocol">ssl.protocol</a>
 </h4>
 <p>The SSL protocol used to generate the SSLContext. Default setting is TLSv1.2, which is fine for most cases. Allowed values in recent JVMs are TLSv1.2 and TLSv1.3. TLS, TLSv1.1, SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.</p>
 <table class="data-table"><tbody>
@@ -508,8 +508,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.provider" href="#ssl.provider"></a>
-      ssl.provider
+   <a class="anchor-link" id="ssl.provider"></a>
+   <a href="#ssl.provider">ssl.provider</a>
 </h4>
 <p>The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</p>
 <table class="data-table"><tbody>
@@ -521,8 +521,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.truststore.type" href="#ssl.truststore.type"></a>
-      ssl.truststore.type
+   <a class="anchor-link" id="ssl.truststore.type"></a>
+   <a href="#ssl.truststore.type">ssl.truststore.type</a>
 </h4>
 <p>The file format of the trust store file.</p>
 <table class="data-table"><tbody>
@@ -534,8 +534,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="auto.commit.interval.ms" href="#auto.commit.interval.ms"></a>
-      auto.commit.interval.ms
+   <a class="anchor-link" id="auto.commit.interval.ms"></a>
+   <a href="#auto.commit.interval.ms">auto.commit.interval.ms</a>
 </h4>
 <p>The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if <code>enable.auto.commit</code> is set to <code>true</code>.</p>
 <table class="data-table"><tbody>
@@ -547,8 +547,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="check.crcs" href="#check.crcs"></a>
-      check.crcs
+   <a class="anchor-link" id="check.crcs"></a>
+   <a href="#check.crcs">check.crcs</a>
 </h4>
 <p>Automatically check the CRC32 of the records consumed. This check ensures that no on-the-wire or on-disk corruption of the messages has occurred. It adds some overhead, so it may be disabled in cases seeking extreme performance.</p>
 <table class="data-table"><tbody>
@@ -560,8 +560,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="client.id" href="#client.id"></a>
-      client.id
+   <a class="anchor-link" id="client.id"></a>
+   <a href="#client.id">client.id</a>
 </h4>
 <p>An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.</p>
 <table class="data-table"><tbody>
@@ -573,8 +573,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="client.rack" href="#client.rack"></a>
-      client.rack
+   <a class="anchor-link" id="client.rack"></a>
+   <a href="#client.rack">client.rack</a>
 </h4>
 <p>A rack identifier for this client. This can be any string value which indicates where this client is physically located. It corresponds to the broker config <code>broker.rack</code>.</p>
 <table class="data-table"><tbody>
@@ -586,8 +586,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="fetch.max.wait.ms" href="#fetch.max.wait.ms"></a>
-      fetch.max.wait.ms
+   <a class="anchor-link" id="fetch.max.wait.ms"></a>
+   <a href="#fetch.max.wait.ms">fetch.max.wait.ms</a>
 </h4>
 <p>The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.</p>
 <table class="data-table"><tbody>
@@ -599,8 +599,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="interceptor.classes" href="#interceptor.classes"></a>
-      interceptor.classes
+   <a class="anchor-link" id="interceptor.classes"></a>
+   <a href="#interceptor.classes">interceptor.classes</a>
 </h4>
 <p>A list of classes to use as interceptors. Implementing the <code>org.apache.kafka.clients.consumer.ConsumerInterceptor</code> interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors.</p>
 <table class="data-table"><tbody>
@@ -612,8 +612,8 @@
 </li>
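A sketch of an interceptor implementing the <code>org.apache.kafka.clients.consumer.ConsumerInterceptor</code> interface named above; the class and its counting behavior are hypothetical:

```java
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerInterceptor;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Hypothetical interceptor that counts records as they are returned from poll().
public class CountingInterceptor implements ConsumerInterceptor<String, String> {
    private long total = 0;

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public ConsumerRecords<String, String> onConsume(ConsumerRecords<String, String> records) {
        total += records.count();
        return records; // records could also be filtered or mutated here
    }

    @Override
    public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) {
        System.out.println("committed " + offsets.size() + " partitions after " + total + " records");
    }

    @Override
    public void close() { }
}
```

It would be enabled by listing its fully qualified name in <code>interceptor.classes</code>.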
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metadata.max.age.ms" href="#metadata.max.age.ms"></a>
-      metadata.max.age.ms
+   <a class="anchor-link" id="metadata.max.age.ms"></a>
+   <a href="#metadata.max.age.ms">metadata.max.age.ms</a>
 </h4>
 <p>The period of time in milliseconds after which a refresh of metadata is forced, even if no partition leadership changes have been seen, in order to proactively discover any new brokers or partitions.</p>
 <table class="data-table"><tbody>
@@ -625,8 +625,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metric.reporters" href="#metric.reporters"></a>
-      metric.reporters
+   <a class="anchor-link" id="metric.reporters"></a>
+   <a href="#metric.reporters">metric.reporters</a>
 </h4>
 <p>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</p>
 <table class="data-table"><tbody>
@@ -638,8 +638,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metrics.num.samples" href="#metrics.num.samples"></a>
-      metrics.num.samples
+   <a class="anchor-link" id="metrics.num.samples"></a>
+   <a href="#metrics.num.samples">metrics.num.samples</a>
 </h4>
 <p>The number of samples maintained to compute metrics.</p>
 <table class="data-table"><tbody>
@@ -651,8 +651,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metrics.recording.level" href="#metrics.recording.level"></a>
-      metrics.recording.level
+   <a class="anchor-link" id="metrics.recording.level"></a>
+   <a href="#metrics.recording.level">metrics.recording.level</a>
 </h4>
 <p>The highest recording level for metrics.</p>
 <table class="data-table"><tbody>
@@ -664,8 +664,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metrics.sample.window.ms" href="#metrics.sample.window.ms"></a>
-      metrics.sample.window.ms
+   <a class="anchor-link" id="metrics.sample.window.ms"></a>
+   <a href="#metrics.sample.window.ms">metrics.sample.window.ms</a>
 </h4>
 <p>The window of time a metrics sample is computed over.</p>
 <table class="data-table"><tbody>
@@ -677,8 +677,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="reconnect.backoff.max.ms" href="#reconnect.backoff.max.ms"></a>
-      reconnect.backoff.max.ms
+   <a class="anchor-link" id="reconnect.backoff.max.ms"></a>
+   <a href="#reconnect.backoff.max.ms">reconnect.backoff.max.ms</a>
 </h4>
 <p>The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.</p>
 <table class="data-table"><tbody>
@@ -690,8 +690,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="reconnect.backoff.ms" href="#reconnect.backoff.ms"></a>
-      reconnect.backoff.ms
+   <a class="anchor-link" id="reconnect.backoff.ms"></a>
+   <a href="#reconnect.backoff.ms">reconnect.backoff.ms</a>
 </h4>
 <p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</p>
 <table class="data-table"><tbody>
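To make the interaction between reconnect.backoff.ms and reconnect.backoff.max.ms concrete, here is a rough sketch of the documented doubling-with-jitter behaviour; the exact internal formula may differ, and the 50 ms / 1000 ms values are the usual client defaults, assumed here for illustration.

```java
// Approximate per-host reconnect delays after consecutive failures.
public class BackoffSketch {
    public static void main(String[] args) {
        long baseMs = 50;    // reconnect.backoff.ms
        long maxMs = 1_000;  // reconnect.backoff.max.ms
        for (int failures = 0; failures < 8; failures++) {
            long backoff = Math.min(maxMs, baseMs * (1L << failures)); // exponential growth, capped
            System.out.printf("failure %d: ~%d ms plus up to %.0f ms jitter (20%%)%n",
                    failures, backoff, backoff * 0.2);
        }
    }
}
```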
@@ -703,8 +703,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="retry.backoff.ms" href="#retry.backoff.ms"></a>
-      retry.backoff.ms
+   <a class="anchor-link" id="retry.backoff.ms"></a>
+   <a href="#retry.backoff.ms">retry.backoff.ms</a>
 </h4>
 <p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table class="data-table"><tbody>
@@ -716,8 +716,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.kinit.cmd" href="#sasl.kerberos.kinit.cmd"></a>
-      sasl.kerberos.kinit.cmd
+   <a class="anchor-link" id="sasl.kerberos.kinit.cmd"></a>
+   <a href="#sasl.kerberos.kinit.cmd">sasl.kerberos.kinit.cmd</a>
 </h4>
 <p>Kerberos kinit command path.</p>
 <table class="data-table"><tbody>
@@ -729,8 +729,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.min.time.before.relogin" href="#sasl.kerberos.min.time.before.relogin"></a>
-      sasl.kerberos.min.time.before.relogin
+   <a class="anchor-link" id="sasl.kerberos.min.time.before.relogin"></a>
+   <a href="#sasl.kerberos.min.time.before.relogin">sasl.kerberos.min.time.before.relogin</a>
 </h4>
 <p>Login thread sleep time between refresh attempts.</p>
 <table class="data-table"><tbody>
@@ -742,8 +742,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.ticket.renew.jitter" href="#sasl.kerberos.ticket.renew.jitter"></a>
-      sasl.kerberos.ticket.renew.jitter
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.jitter"></a>
+   <a href="#sasl.kerberos.ticket.renew.jitter">sasl.kerberos.ticket.renew.jitter</a>
 </h4>
 <p>Percentage of random jitter added to the renewal time.</p>
 <table class="data-table"><tbody>
@@ -755,8 +755,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.ticket.renew.window.factor" href="#sasl.kerberos.ticket.renew.window.factor"></a>
-      sasl.kerberos.ticket.renew.window.factor
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.window.factor"></a>
+   <a href="#sasl.kerberos.ticket.renew.window.factor">sasl.kerberos.ticket.renew.window.factor</a>
 </h4>
 <p>Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.</p>
 <table class="data-table"><tbody>
@@ -768,8 +768,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.refresh.buffer.seconds" href="#sasl.login.refresh.buffer.seconds"></a>
-      sasl.login.refresh.buffer.seconds
+   <a class="anchor-link" id="sasl.login.refresh.buffer.seconds"></a>
+   <a href="#sasl.login.refresh.buffer.seconds">sasl.login.refresh.buffer.seconds</a>
 </h4>
 <p>The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds, then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -781,8 +781,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.refresh.min.period.seconds" href="#sasl.login.refresh.min.period.seconds"></a>
-      sasl.login.refresh.min.period.seconds
+   <a class="anchor-link" id="sasl.login.refresh.min.period.seconds"></a>
+   <a href="#sasl.login.refresh.min.period.seconds">sasl.login.refresh.min.period.seconds</a>
 </h4>
 <p>The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -794,8 +794,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.refresh.window.factor" href="#sasl.login.refresh.window.factor"></a>
-      sasl.login.refresh.window.factor
+   <a class="anchor-link" id="sasl.login.refresh.window.factor"></a>
+   <a href="#sasl.login.refresh.window.factor">sasl.login.refresh.window.factor</a>
 </h4>
 <p>Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -807,8 +807,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.refresh.window.jitter" href="#sasl.login.refresh.window.jitter"></a>
-      sasl.login.refresh.window.jitter
+   <a class="anchor-link" id="sasl.login.refresh.window.jitter"></a>
+   <a href="#sasl.login.refresh.window.jitter">sasl.login.refresh.window.jitter</a>
 </h4>
 <p>The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
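The four sasl.login.refresh.* settings above interact, so a small worked example may help. This sketch assumes a credential with a 3600-second lifetime and the documented defaults (window factor 0.8, jitter 0.05, buffer 300 s, min period 60 s):

```java
// Illustrative timing only; the real login thread applies these rules internally.
public class RefreshTimingSketch {
    public static void main(String[] args) {
        double lifetime = 3600;                        // assumed credential lifetime, seconds
        double windowFactor = 0.8, windowJitter = 0.05;
        double bufferSeconds = 300, minPeriodSeconds = 60;

        double earliest = lifetime * windowFactor;                   // nominal refresh point: 2880 s
        double latest = lifetime * (windowFactor + windowJitter);    // with maximum jitter: 3060 s
        double ceiling = lifetime - bufferSeconds;                   // buffer keeps refresh before 3300 s
        boolean rulesIgnored = bufferSeconds + minPeriodSeconds > lifetime; // both ignored if sum exceeds lifetime

        System.out.printf("refresh between %.0f and %.0f s; buffer ceiling %.0f s; rules ignored: %b%n",
                earliest, latest, ceiling, rulesIgnored);
    }
}
```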
@@ -820,8 +820,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="security.providers" href="#security.providers"></a>
-      security.providers
+   <a class="anchor-link" id="security.providers"></a>
+   <a href="#security.providers">security.providers</a>
 </h4>
 <p>A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the <code>org.apache.kafka.common.security.auth.SecurityProviderCreator</code> interface.</p>
 <table class="data-table"><tbody>
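A minimal sketch of the creator class described above; the anonymous Provider subclass is a stand-in for a real provider such as Bouncy Castle, and the class name is hypothetical.

```java
import java.security.Provider;
import java.util.Map;
import org.apache.kafka.common.security.auth.SecurityProviderCreator;

// Register via security.providers=<fully-qualified name of this class>.
public class ExampleProviderCreator implements SecurityProviderCreator {
    @Override
    public void configure(Map<String, ?> configs) { } // no custom settings needed

    @Override
    public Provider getProvider() {
        // Provider's constructors are protected, hence the anonymous subclass.
        return new Provider("example-provider", 1.0, "illustrative provider") { };
    }
}
```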
@@ -833,8 +833,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.cipher.suites" href="#ssl.cipher.suites"></a>
-      ssl.cipher.suites
+   <a class="anchor-link" id="ssl.cipher.suites"></a>
+   <a href="#ssl.cipher.suites">ssl.cipher.suites</a>
 </h4>
 <p>A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.</p>
 <table class="data-table"><tbody>
@@ -846,8 +846,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.endpoint.identification.algorithm" href="#ssl.endpoint.identification.algorithm"></a>
-      ssl.endpoint.identification.algorithm
+   <a class="anchor-link" id="ssl.endpoint.identification.algorithm"></a>
+   <a href="#ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a>
 </h4>
 <p>The endpoint identification algorithm to validate the server hostname using the server certificate.</p>
 <table class="data-table"><tbody>
@@ -859,8 +859,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.keymanager.algorithm" href="#ssl.keymanager.algorithm"></a>
-      ssl.keymanager.algorithm
+   <a class="anchor-link" id="ssl.keymanager.algorithm"></a>
+   <a href="#ssl.keymanager.algorithm">ssl.keymanager.algorithm</a>
 </h4>
 <p>The algorithm used by the key manager factory for SSL connections. The default value is the key manager factory algorithm configured for the Java Virtual Machine.</p>
 <table class="data-table"><tbody>
@@ -872,8 +872,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.secure.random.implementation" href="#ssl.secure.random.implementation"></a>
-      ssl.secure.random.implementation
+   <a class="anchor-link" id="ssl.secure.random.implementation"></a>
+   <a href="#ssl.secure.random.implementation">ssl.secure.random.implementation</a>
 </h4>
 <p>The SecureRandom PRNG implementation to use for SSL cryptography operations. </p>
 <table class="data-table"><tbody>
@@ -885,8 +885,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.trustmanager.algorithm" href="#ssl.trustmanager.algorithm"></a>
-      ssl.trustmanager.algorithm
+   <a class="anchor-link" id="ssl.trustmanager.algorithm"></a>
+   <a href="#ssl.trustmanager.algorithm">ssl.trustmanager.algorithm</a>
 </h4>
 <p>The algorithm used by the trust manager factory for SSL connections. The default value is the trust manager factory algorithm configured for the Java Virtual Machine.</p>
 <table class="data-table"><tbody>
diff --git a/25/generated/kafka_config.html b/25/generated/kafka_config.html
index 9d43d37..8c253e4 100644
--- a/25/generated/kafka_config.html
+++ b/25/generated/kafka_config.html
@@ -1,8 +1,8 @@
 <ul class="config-list">
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.connect" href="#zookeeper.connect"></a>
-      zookeeper.connect
+   <a class="anchor-link" id="zookeeper.connect"></a>
+   <a href="#zookeeper.connect">zookeeper.connect</a>
 </h4>
 <p>Specifies the ZooKeeper connection string in the form <code>hostname:port</code> where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down, you can also specify multiple hosts in the form <code>hostname1:port1,hostname2:port2,hostname3:port3</code>.<br>The server can also have a ZooKeeper chroot path as part of its ZooKeeper connection string which puts its data under some path in the global ZooKeeper namespace. For example, to give a chroot path of <code>/chroot/path</code> you would give the connection string as <code>hostname1:port1,hostname2:port2,hostname3:port3/chroot/path</code>.</p>
 <table class="data-table"><tbody>
@@ -15,8 +15,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="advertised.host.name" href="#advertised.host.name"></a>
-      advertised.host.name
+   <a class="anchor-link" id="advertised.host.name"></a>
+   <a href="#advertised.host.name">advertised.host.name</a>
 </h4>
 <p>DEPRECATED: only used when <code>advertised.listeners</code> or <code>listeners</code> are not set. Use <code>advertised.listeners</code> instead. <br>Hostname to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, it will use the value for <code>host.name</code> if configured. Otherwise it will use the value returned from java.net.InetAddress.getCanonicalHostName().</p>
 <table class="data-table"><tbody>
@@ -29,8 +29,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="advertised.listeners" href="#advertised.listeners"></a>
-      advertised.listeners
+   <a class="anchor-link" id="advertised.listeners"></a>
+   <a href="#advertised.listeners">advertised.listeners</a>
 </h4>
 <p>Listeners to publish to ZooKeeper for clients to use, if different than the <code>listeners</code> config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for <code>listeners</code> will be used. Unlike <code>listeners</code> it is not valid to advertise the 0.0.0.0 meta-address.</p>
 <table class="data-table"><tbody>
@@ -43,8 +43,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="advertised.port" href="#advertised.port"></a>
-      advertised.port
+   <a class="anchor-link" id="advertised.port"></a>
+   <a href="#advertised.port">advertised.port</a>
 </h4>
 <p>DEPRECATED: only used when <code>advertised.listeners</code> or <code>listeners</code> are not set. Use <code>advertised.listeners</code> instead. <br>The port to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the port to which the broker binds. If this is not set, it will publish the same port that the broker binds to.</p>
 <table class="data-table"><tbody>
@@ -57,8 +57,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="auto.create.topics.enable" href="#auto.create.topics.enable"></a>
-      auto.create.topics.enable
+   <a class="anchor-link" id="auto.create.topics.enable"></a>
+   <a href="#auto.create.topics.enable">auto.create.topics.enable</a>
 </h4>
 <p>Enable auto creation of topics on the server</p>
 <table class="data-table"><tbody>
@@ -71,8 +71,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="auto.leader.rebalance.enable" href="#auto.leader.rebalance.enable"></a>
-      auto.leader.rebalance.enable
+   <a class="anchor-link" id="auto.leader.rebalance.enable"></a>
+   <a href="#auto.leader.rebalance.enable">auto.leader.rebalance.enable</a>
 </h4>
 <p>Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable by <code>leader.imbalance.check.interval.seconds</code>. If the leader imbalance exceeds <code>leader.imbalance.per.broker.percentage</code>, a leader rebalance to the preferred leader for partitions is triggered.</p>
 <table class="data-table"><tbody>
@@ -85,8 +85,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="background.threads" href="#background.threads"></a>
-      background.threads
+   <a class="anchor-link" id="background.threads"></a>
+   <a href="#background.threads">background.threads</a>
 </h4>
 <p>The number of threads to use for various background processing tasks</p>
 <table class="data-table"><tbody>
@@ -99,8 +99,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="broker.id" href="#broker.id"></a>
-      broker.id
+   <a class="anchor-link" id="broker.id"></a>
+   <a href="#broker.id">broker.id</a>
 </h4>
 <p>The broker id for this server. If unset, a unique broker id will be generated. To avoid conflicts between ZooKeeper-generated broker ids and user-configured broker ids, generated broker ids start from reserved.broker.max.id + 1.</p>
 <table class="data-table"><tbody>
@@ -113,8 +113,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="compression.type" href="#compression.type"></a>
-      compression.type
+   <a class="anchor-link" id="compression.type"></a>
+   <a href="#compression.type">compression.type</a>
 </h4>
 <p>Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer.</p>
 <table class="data-table"><tbody>
@@ -127,8 +127,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="control.plane.listener.name" href="#control.plane.listener.name"></a>
-      control.plane.listener.name
+   <a class="anchor-link" id="control.plane.listener.name"></a>
+   <a href="#control.plane.listener.name">control.plane.listener.name</a>
 </h4>
 <p>Name of the listener used for communication between the controller and brokers. A broker will use the control.plane.listener.name to locate the endpoint in the listeners list, to listen for connections from the controller. For example, if a broker's config is:<br>listeners = INTERNAL://192.1.1.8:9092, EXTERNAL://10.1.1.5:9093, CONTROLLER://192.1.1.8:9094<br>listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL<br>control.plane.listener.name = CONTROLLER<br>On startup, the broker will start listening on "192.1.1.8:9094" with security protocol "SSL".<br>On the controller side, when it discovers a broker's published endpoints through ZooKeeper, it will use the control.plane.listener.name to find the endpoint, which it will use to establish a connection to the broker.<br>For example, if the broker's published endpoints on ZooKeeper are:<br>"endpoints" : ["INTERNAL://broker1.example.com:9092","EXTERNAL://broker1.example.com:9093","CONTROLLER://broker1.example.com:9094"]<br>and the controller's config is:<br>listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL<br>control.plane.listener.name = CONTROLLER<br>then the controller will use "broker1.example.com:9094" with security protocol "SSL" to connect to the broker.<br>If not explicitly configured, the default value will be null and there will be no dedicated endpoints for controller connections.</p>
 <table class="data-table"><tbody>
@@ -141,8 +141,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="delete.topic.enable" href="#delete.topic.enable"></a>
-      delete.topic.enable
+   <a class="anchor-link" id="delete.topic.enable"></a>
+   <a href="#delete.topic.enable">delete.topic.enable</a>
 </h4>
 <p>Enables topic deletion. Deleting a topic through the admin tool will have no effect if this config is turned off</p>
 <table class="data-table"><tbody>
@@ -155,8 +155,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="host.name" href="#host.name"></a>
-      host.name
+   <a class="anchor-link" id="host.name"></a>
+   <a href="#host.name">host.name</a>
 </h4>
 <p>DEPRECATED: only used when <code>listeners</code> is not set. Use <code>listeners</code> instead. <br>Hostname of the broker. If this is set, the broker will only bind to this address. If not set, it will bind to all interfaces</p>
 <table class="data-table"><tbody>
@@ -169,8 +169,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="leader.imbalance.check.interval.seconds" href="#leader.imbalance.check.interval.seconds"></a>
-      leader.imbalance.check.interval.seconds
+   <a class="anchor-link" id="leader.imbalance.check.interval.seconds"></a>
+   <a href="#leader.imbalance.check.interval.seconds">leader.imbalance.check.interval.seconds</a>
 </h4>
 <p>The frequency with which the partition rebalance check is triggered by the controller</p>
 <table class="data-table"><tbody>
@@ -183,8 +183,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="leader.imbalance.per.broker.percentage" href="#leader.imbalance.per.broker.percentage"></a>
-      leader.imbalance.per.broker.percentage
+   <a class="anchor-link" id="leader.imbalance.per.broker.percentage"></a>
+   <a href="#leader.imbalance.per.broker.percentage">leader.imbalance.per.broker.percentage</a>
 </h4>
 <p>The ratio of leader imbalance allowed per broker. The controller triggers a leader rebalance if the imbalance goes above this value per broker. The value is specified as a percentage.</p>
 <table class="data-table"><tbody>
@@ -211,8 +211,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.dir" href="#log.dir"></a>
-      log.dir
+   <a class="anchor-link" id="log.dir"></a>
+   <a href="#log.dir">log.dir</a>
 </h4>
 <p>The directory in which the log data is kept (supplemental to the log.dirs property)</p>
 <table class="data-table"><tbody>
@@ -225,8 +225,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.dirs" href="#log.dirs"></a>
-      log.dirs
+   <a class="anchor-link" id="log.dirs"></a>
+   <a href="#log.dirs">log.dirs</a>
 </h4>
 <p>The directories in which the log data is kept. If not set, the value in log.dir is used</p>
 <table class="data-table"><tbody>
@@ -239,8 +239,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.flush.interval.messages" href="#log.flush.interval.messages"></a>
-      log.flush.interval.messages
+   <a class="anchor-link" id="log.flush.interval.messages"></a>
+   <a href="#log.flush.interval.messages">log.flush.interval.messages</a>
 </h4>
 <p>The number of messages accumulated on a log partition before messages are flushed to disk </p>
 <table class="data-table"><tbody>
@@ -253,8 +253,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.flush.interval.ms" href="#log.flush.interval.ms"></a>
-      log.flush.interval.ms
+   <a class="anchor-link" id="log.flush.interval.ms"></a>
+   <a href="#log.flush.interval.ms">log.flush.interval.ms</a>
 </h4>
 <p>The maximum time in ms that a message in any topic is kept in memory before being flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used</p>
 <table class="data-table"><tbody>
@@ -267,8 +267,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.flush.offset.checkpoint.interval.ms" href="#log.flush.offset.checkpoint.interval.ms"></a>
-      log.flush.offset.checkpoint.interval.ms
+   <a class="anchor-link" id="log.flush.offset.checkpoint.interval.ms"></a>
+   <a href="#log.flush.offset.checkpoint.interval.ms">log.flush.offset.checkpoint.interval.ms</a>
 </h4>
 <p>The frequency with which we update the persistent record of the last flush, which acts as the log recovery point</p>
 <table class="data-table"><tbody>
@@ -281,8 +281,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.flush.scheduler.interval.ms" href="#log.flush.scheduler.interval.ms"></a>
-      log.flush.scheduler.interval.ms
+   <a class="anchor-link" id="log.flush.scheduler.interval.ms"></a>
+   <a href="#log.flush.scheduler.interval.ms">log.flush.scheduler.interval.ms</a>
 </h4>
 <p>The frequency in ms that the log flusher checks whether any log needs to be flushed to disk</p>
 <table class="data-table"><tbody>
@@ -295,8 +295,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.flush.start.offset.checkpoint.interval.ms" href="#log.flush.start.offset.checkpoint.interval.ms"></a>
-      log.flush.start.offset.checkpoint.interval.ms
+   <a class="anchor-link" id="log.flush.start.offset.checkpoint.interval.ms"></a>
+   <a href="#log.flush.start.offset.checkpoint.interval.ms">log.flush.start.offset.checkpoint.interval.ms</a>
 </h4>
 <p>The frequency with which we update the persistent record of log start offset</p>
 <table class="data-table"><tbody>
@@ -309,8 +309,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.retention.bytes" href="#log.retention.bytes"></a>
-      log.retention.bytes
+   <a class="anchor-link" id="log.retention.bytes"></a>
+   <a href="#log.retention.bytes">log.retention.bytes</a>
 </h4>
 <p>The maximum size of the log before deleting it</p>
 <table class="data-table"><tbody>
@@ -323,8 +323,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.retention.hours" href="#log.retention.hours"></a>
-      log.retention.hours
+   <a class="anchor-link" id="log.retention.hours"></a>
+   <a href="#log.retention.hours">log.retention.hours</a>
 </h4>
 <p>The number of hours to keep a log file before deleting it, tertiary to the log.retention.ms property</p>
 <table class="data-table"><tbody>
@@ -337,8 +337,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.retention.minutes" href="#log.retention.minutes"></a>
-      log.retention.minutes
+   <a class="anchor-link" id="log.retention.minutes"></a>
+   <a href="#log.retention.minutes">log.retention.minutes</a>
 </h4>
 <p>The number of minutes to keep a log file before deleting it, secondary to the log.retention.ms property. If not set, the value in log.retention.hours is used</p>
 <table class="data-table"><tbody>
@@ -351,8 +351,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.retention.ms" href="#log.retention.ms"></a>
-      log.retention.ms
+   <a class="anchor-link" id="log.retention.ms"></a>
+   <a href="#log.retention.ms">log.retention.ms</a>
 </h4>
 <p>The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.</p>
 <table class="data-table"><tbody>
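The three log.retention.* settings form a precedence chain: ms over minutes over hours. The sketch below is illustrative pseudologic, not broker code, with 168 hours assumed as the log.retention.hours default.

```java
// ms wins over minutes, which wins over hours.
public class RetentionPrecedence {
    static long effectiveRetentionMs(Long ms, Integer minutes, Integer hours) {
        if (ms != null) return ms;                       // log.retention.ms, if set
        if (minutes != null) return minutes * 60_000L;   // else log.retention.minutes
        return hours * 3_600_000L;                       // else log.retention.hours
    }

    public static void main(String[] args) {
        System.out.println(effectiveRetentionMs(86_400_000L, null, 168)); // 86400000: ms wins
        System.out.println(effectiveRetentionMs(null, 30, 168));          // 1800000: minutes win
        System.out.println(effectiveRetentionMs(null, null, 168));        // 604800000: hours apply
    }
}
```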
@@ -365,8 +365,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.roll.hours" href="#log.roll.hours"></a>
-      log.roll.hours
+   <a class="anchor-link" id="log.roll.hours"></a>
+   <a href="#log.roll.hours">log.roll.hours</a>
 </h4>
 <p>The maximum time before a new log segment is rolled out (in hours), secondary to log.roll.ms property</p>
 <table class="data-table"><tbody>
@@ -379,8 +379,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.roll.jitter.hours" href="#log.roll.jitter.hours"></a>
-      log.roll.jitter.hours
+   <a class="anchor-link" id="log.roll.jitter.hours"></a>
+   <a href="#log.roll.jitter.hours">log.roll.jitter.hours</a>
 </h4>
 <p>The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to log.roll.jitter.ms property</p>
 <table class="data-table"><tbody>
@@ -393,8 +393,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.roll.jitter.ms" href="#log.roll.jitter.ms"></a>
-      log.roll.jitter.ms
+   <a class="anchor-link" id="log.roll.jitter.ms"></a>
+   <a href="#log.roll.jitter.ms">log.roll.jitter.ms</a>
 </h4>
 <p>The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used</p>
 <table class="data-table"><tbody>
@@ -407,8 +407,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.roll.ms" href="#log.roll.ms"></a>
-      log.roll.ms
+   <a class="anchor-link" id="log.roll.ms"></a>
+   <a href="#log.roll.ms">log.roll.ms</a>
 </h4>
 <p>The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used</p>
 <table class="data-table"><tbody>
@@ -421,8 +421,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.segment.bytes" href="#log.segment.bytes"></a>
-      log.segment.bytes
+   <a class="anchor-link" id="log.segment.bytes"></a>
+   <a href="#log.segment.bytes">log.segment.bytes</a>
 </h4>
 <p>The maximum size of a single log file</p>
 <table class="data-table"><tbody>
@@ -435,8 +435,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.segment.delete.delay.ms" href="#log.segment.delete.delay.ms"></a>
-      log.segment.delete.delay.ms
+   <a class="anchor-link" id="log.segment.delete.delay.ms"></a>
+   <a href="#log.segment.delete.delay.ms">log.segment.delete.delay.ms</a>
 </h4>
 <p>The amount of time to wait before deleting a file from the filesystem</p>
 <table class="data-table"><tbody>
@@ -449,8 +449,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="message.max.bytes" href="#message.max.bytes"></a>
-      message.max.bytes
+   <a class="anchor-link" id="message.max.bytes"></a>
+   <a href="#message.max.bytes">message.max.bytes</a>
 </h4>
 <p>The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. This can be set per topic with the topic level <code>max.message.bytes</code> config.</p>
 <table class="data-table"><tbody>
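Since the description notes the per-topic max.message.bytes override, here is a hedged sketch of setting it with the Java admin client; the bootstrap address, topic name, and 2 MiB value are placeholders.

```java
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class RaiseTopicMaxMessageBytes {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder address
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "big-records");
            AlterConfigOp op = new AlterConfigOp(
                    new ConfigEntry("max.message.bytes", "2097152"), AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> updates = Map.of(topic, List.of(op));
            admin.incrementalAlterConfigs(updates).all().get(); // applies the topic-level override
        }
    }
}
```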
@@ -463,8 +463,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="min.insync.replicas" href="#min.insync.replicas"></a>
-      min.insync.replicas
+   <a class="anchor-link" id="min.insync.replicas"></a>
+   <a href="#min.insync.replicas">min.insync.replicas</a>
 </h4>
 <p>When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).<br>When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write.</p>
 <table class="data-table"><tbody>
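The "typical scenario" above maps directly onto producer code. A minimal sketch, assuming a topic named orders created with replication factor 3, min.insync.replicas=2 set on the broker or topic, and a placeholder bootstrap address:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DurableProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // wait for all in-sync replicas
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The send eventually fails with NotEnoughReplicas if fewer than
            // min.insync.replicas replicas are in sync for the partition.
            producer.send(new ProducerRecord<>("orders", "key", "value"));
        }
    }
}
```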
@@ -477,8 +477,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="num.io.threads" href="#num.io.threads"></a>
-      num.io.threads
+   <a class="anchor-link" id="num.io.threads"></a>
+   <a href="#num.io.threads">num.io.threads</a>
 </h4>
 <p>The number of threads that the server uses for processing requests, which may include disk I/O</p>
 <table class="data-table"><tbody>
@@ -491,8 +491,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="num.network.threads" href="#num.network.threads"></a>
-      num.network.threads
+   <a class="anchor-link" id="num.network.threads"></a>
+   <a href="#num.network.threads">num.network.threads</a>
 </h4>
 <p>The number of threads that the server uses for receiving requests from the network and sending responses to the network</p>
 <table class="data-table"><tbody>
@@ -505,8 +505,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="num.recovery.threads.per.data.dir" href="#num.recovery.threads.per.data.dir"></a>
-      num.recovery.threads.per.data.dir
+   <a class="anchor-link" id="num.recovery.threads.per.data.dir"></a>
+   <a href="#num.recovery.threads.per.data.dir">num.recovery.threads.per.data.dir</a>
 </h4>
 <p>The number of threads per data directory to be used for log recovery at startup and flushing at shutdown</p>
 <table class="data-table"><tbody>
@@ -519,8 +519,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="num.replica.alter.log.dirs.threads" href="#num.replica.alter.log.dirs.threads"></a>
-      num.replica.alter.log.dirs.threads
+   <a class="anchor-link" id="num.replica.alter.log.dirs.threads"></a>
+   <a href="#num.replica.alter.log.dirs.threads">num.replica.alter.log.dirs.threads</a>
 </h4>
 <p>The number of threads that can move replicas between log directories, which may include disk I/O</p>
 <table class="data-table"><tbody>
@@ -533,8 +533,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="num.replica.fetchers" href="#num.replica.fetchers"></a>
-      num.replica.fetchers
+   <a class="anchor-link" id="num.replica.fetchers"></a>
+   <a href="#num.replica.fetchers">num.replica.fetchers</a>
 </h4>
 <p>Number of fetcher threads used to replicate messages from a source broker. Increasing this value can increase the degree of I/O parallelism in the follower broker.</p>
 <table class="data-table"><tbody>
@@ -547,8 +547,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="offset.metadata.max.bytes" href="#offset.metadata.max.bytes"></a>
-      offset.metadata.max.bytes
+   <a class="anchor-link" id="offset.metadata.max.bytes"></a>
+   <a href="#offset.metadata.max.bytes">offset.metadata.max.bytes</a>
 </h4>
 <p>The maximum size for a metadata entry associated with an offset commit</p>
 <table class="data-table"><tbody>
@@ -561,8 +561,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="offsets.commit.required.acks" href="#offsets.commit.required.acks"></a>
-      offsets.commit.required.acks
+   <a class="anchor-link" id="offsets.commit.required.acks"></a>
+   <a href="#offsets.commit.required.acks">offsets.commit.required.acks</a>
 </h4>
 <p>The required acks before the commit can be accepted. In general, the default (-1) should not be overridden</p>
 <table class="data-table"><tbody>
@@ -575,8 +575,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="offsets.commit.timeout.ms" href="#offsets.commit.timeout.ms"></a>
-      offsets.commit.timeout.ms
+   <a class="anchor-link" id="offsets.commit.timeout.ms"></a>
+   <a href="#offsets.commit.timeout.ms">offsets.commit.timeout.ms</a>
 </h4>
 <p>Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout.</p>
 <table class="data-table"><tbody>
@@ -589,8 +589,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="offsets.load.buffer.size" href="#offsets.load.buffer.size"></a>
-      offsets.load.buffer.size
+   <a class="anchor-link" id="offsets.load.buffer.size"></a>
+   <a href="#offsets.load.buffer.size">offsets.load.buffer.size</a>
 </h4>
 <p>Batch size for reading from the offsets segments when loading offsets into the cache (soft-limit, overridden if records are too large).</p>
 <table class="data-table"><tbody>
@@ -603,8 +603,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="offsets.retention.check.interval.ms" href="#offsets.retention.check.interval.ms"></a>
-      offsets.retention.check.interval.ms
+   <a class="anchor-link" id="offsets.retention.check.interval.ms"></a>
+   <a href="#offsets.retention.check.interval.ms">offsets.retention.check.interval.ms</a>
 </h4>
 <p>Frequency at which to check for stale offsets</p>
 <table class="data-table"><tbody>
@@ -617,8 +617,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="offsets.retention.minutes" href="#offsets.retention.minutes"></a>
-      offsets.retention.minutes
+   <a class="anchor-link" id="offsets.retention.minutes"></a>
+   <a href="#offsets.retention.minutes">offsets.retention.minutes</a>
 </h4>
 <p>After a consumer group loses all its consumers (i.e. becomes empty) its offsets will be kept for this retention period before getting discarded. For standalone consumers (using manual assignment), offsets will be expired after the time of last commit plus this retention period.</p>
 <table class="data-table"><tbody>
@@ -631,8 +631,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="offsets.topic.compression.codec" href="#offsets.topic.compression.codec"></a>
-      offsets.topic.compression.codec
+   <a class="anchor-link" id="offsets.topic.compression.codec"></a>
+   <a href="#offsets.topic.compression.codec">offsets.topic.compression.codec</a>
 </h4>
 <p>Compression codec for the offsets topic - compression may be used to achieve "atomic" commits</p>
 <table class="data-table"><tbody>
@@ -645,8 +645,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="offsets.topic.num.partitions" href="#offsets.topic.num.partitions"></a>
-      offsets.topic.num.partitions
+   <a class="anchor-link" id="offsets.topic.num.partitions"></a>
+   <a href="#offsets.topic.num.partitions">offsets.topic.num.partitions</a>
 </h4>
 <p>The number of partitions for the offset commit topic (should not change after deployment)</p>
 <table class="data-table"><tbody>
@@ -659,8 +659,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="offsets.topic.replication.factor" href="#offsets.topic.replication.factor"></a>
-      offsets.topic.replication.factor
+   <a class="anchor-link" id="offsets.topic.replication.factor"></a>
+   <a href="#offsets.topic.replication.factor">offsets.topic.replication.factor</a>
 </h4>
 <p>The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.</p>
 <table class="data-table"><tbody>
@@ -673,8 +673,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="offsets.topic.segment.bytes" href="#offsets.topic.segment.bytes"></a>
-      offsets.topic.segment.bytes
+   <a class="anchor-link" id="offsets.topic.segment.bytes"></a>
+   <a href="#offsets.topic.segment.bytes">offsets.topic.segment.bytes</a>
 </h4>
 <p>The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads</p>
 <table class="data-table"><tbody>
@@ -701,8 +701,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="queued.max.requests" href="#queued.max.requests"></a>
-      queued.max.requests
+   <a class="anchor-link" id="queued.max.requests"></a>
+   <a href="#queued.max.requests">queued.max.requests</a>
 </h4>
 <p>The number of queued requests allowed for the data plane before blocking the network threads</p>
 <table class="data-table"><tbody>
@@ -715,8 +715,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="quota.consumer.default" href="#quota.consumer.default"></a>
-      quota.consumer.default
+   <a class="anchor-link" id="quota.consumer.default"></a>
+   <a href="#quota.consumer.default">quota.consumer.default</a>
 </h4>
 <p>DEPRECATED: Used only when dynamic default quotas are not configured for <user>, <client-id> or <user, client-id> in ZooKeeper. Any consumer distinguished by clientId/consumer group will get throttled if it fetches more bytes than this value per second</p>
 <table class="data-table"><tbody>
@@ -729,8 +729,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="quota.producer.default" href="#quota.producer.default"></a>
-      quota.producer.default
+   <a class="anchor-link" id="quota.producer.default"></a>
+   <a href="#quota.producer.default">quota.producer.default</a>
 </h4>
 <p>DEPRECATED: Used only when dynamic default quotas are not configured for <user>, <client-id> or <user, client-id> in ZooKeeper. Any producer distinguished by clientId will get throttled if it produces more bytes than this value per second</p>
 <table class="data-table"><tbody>
@@ -743,8 +743,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="replica.fetch.min.bytes" href="#replica.fetch.min.bytes"></a>
-      replica.fetch.min.bytes
+   <a class="anchor-link" id="replica.fetch.min.bytes"></a>
+   <a href="#replica.fetch.min.bytes">replica.fetch.min.bytes</a>
 </h4>
 <p>Minimum bytes expected for each fetch response. If not enough bytes are available, wait up to replica.fetch.wait.max.ms</p>
 <table class="data-table"><tbody>
@@ -757,8 +757,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="replica.fetch.wait.max.ms" href="#replica.fetch.wait.max.ms"></a>
-      replica.fetch.wait.max.ms
+   <a class="anchor-link" id="replica.fetch.wait.max.ms"></a>
+   <a href="#replica.fetch.wait.max.ms">replica.fetch.wait.max.ms</a>
 </h4>
 <p>The max wait time for each fetcher request issued by follower replicas. This value should always be less than replica.lag.time.max.ms to prevent frequent shrinking of the ISR for low-throughput topics</p>
 <table class="data-table"><tbody>
@@ -771,8 +771,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="replica.high.watermark.checkpoint.interval.ms" href="#replica.high.watermark.checkpoint.interval.ms"></a>
-      replica.high.watermark.checkpoint.interval.ms
+   <a class="anchor-link" id="replica.high.watermark.checkpoint.interval.ms"></a>
+   <a href="#replica.high.watermark.checkpoint.interval.ms">replica.high.watermark.checkpoint.interval.ms</a>
 </h4>
 <p>The frequency with which the high watermark is saved out to disk</p>
 <table class="data-table"><tbody>
@@ -785,8 +785,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="replica.lag.time.max.ms" href="#replica.lag.time.max.ms"></a>
-      replica.lag.time.max.ms
+   <a class="anchor-link" id="replica.lag.time.max.ms"></a>
+   <a href="#replica.lag.time.max.ms">replica.lag.time.max.ms</a>
 </h4>
 <p>If a follower hasn't sent any fetch requests or hasn't consumed up to the leader's log end offset for at least this time, the leader will remove the follower from the ISR</p>
 <table class="data-table"><tbody>
@@ -799,8 +799,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="replica.socket.receive.buffer.bytes" href="#replica.socket.receive.buffer.bytes"></a>
-      replica.socket.receive.buffer.bytes
+   <a class="anchor-link" id="replica.socket.receive.buffer.bytes"></a>
+   <a href="#replica.socket.receive.buffer.bytes">replica.socket.receive.buffer.bytes</a>
 </h4>
 <p>The socket receive buffer for network requests</p>
 <table class="data-table"><tbody>
@@ -813,8 +813,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="replica.socket.timeout.ms" href="#replica.socket.timeout.ms"></a>
-      replica.socket.timeout.ms
+   <a class="anchor-link" id="replica.socket.timeout.ms"></a>
+   <a href="#replica.socket.timeout.ms">replica.socket.timeout.ms</a>
 </h4>
 <p>The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms</p>
 <table class="data-table"><tbody>
@@ -827,8 +827,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="request.timeout.ms" href="#request.timeout.ms"></a>
-      request.timeout.ms
+   <a class="anchor-link" id="request.timeout.ms"></a>
+   <a href="#request.timeout.ms">request.timeout.ms</a>
 </h4>
 <p>This configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary or fail the request if retries are exhausted.</p>
 <table class="data-table"><tbody>
@@ -841,8 +841,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="socket.receive.buffer.bytes" href="#socket.receive.buffer.bytes"></a>
-      socket.receive.buffer.bytes
+   <a class="anchor-link" id="socket.receive.buffer.bytes"></a>
+   <a href="#socket.receive.buffer.bytes">socket.receive.buffer.bytes</a>
 </h4>
 <p>The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.</p>
 <table class="data-table"><tbody>
@@ -855,8 +855,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="socket.request.max.bytes" href="#socket.request.max.bytes"></a>
-      socket.request.max.bytes
+   <a class="anchor-link" id="socket.request.max.bytes"></a>
+   <a href="#socket.request.max.bytes">socket.request.max.bytes</a>
 </h4>
 <p>The maximum number of bytes in a socket request</p>
 <table class="data-table"><tbody>
@@ -869,8 +869,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="socket.send.buffer.bytes" href="#socket.send.buffer.bytes"></a>
-      socket.send.buffer.bytes
+   <a class="anchor-link" id="socket.send.buffer.bytes"></a>
+   <a href="#socket.send.buffer.bytes">socket.send.buffer.bytes</a>
 </h4>
 <p>The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.</p>
 <table class="data-table"><tbody>
@@ -883,8 +883,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="transaction.max.timeout.ms" href="#transaction.max.timeout.ms"></a>
-      transaction.max.timeout.ms
+   <a class="anchor-link" id="transaction.max.timeout.ms"></a>
+   <a href="#transaction.max.timeout.ms">transaction.max.timeout.ms</a>
 </h4>
 <p>The maximum allowed timeout for transactions. If a client's requested transaction time exceeds this, then the broker will return an error in InitProducerIdRequest. This prevents a client from using too large a timeout, which can stall consumers reading from topics included in the transaction.</p>
 <table class="data-table"><tbody>
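From the client's perspective, this cap constrains the producer-side transaction.timeout.ms. A sketch, assuming a placeholder bootstrap address and transactional id, and the common 15-minute broker default for the cap:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "demo-txn");         // placeholder id
        props.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, 60_000);          // must be <= transaction.max.timeout.ms
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions(); // the broker rejects this if the timeout exceeds its cap
        }
    }
}
```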
@@ -897,8 +897,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="transaction.state.log.load.buffer.size" href="#transaction.state.log.load.buffer.size"></a>
-      transaction.state.log.load.buffer.size
+   <a class="anchor-link" id="transaction.state.log.load.buffer.size"></a>
+   <a href="#transaction.state.log.load.buffer.size">transaction.state.log.load.buffer.size</a>
 </h4>
 <p>Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache (soft-limit, overridden if records are too large).</p>
 <table class="data-table"><tbody>
@@ -911,8 +911,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="transaction.state.log.min.isr" href="#transaction.state.log.min.isr"></a>
-      transaction.state.log.min.isr
+   <a class="anchor-link" id="transaction.state.log.min.isr"></a>
+   <a href="#transaction.state.log.min.isr">transaction.state.log.min.isr</a>
 </h4>
 <p>Overridden min.insync.replicas config for the transaction topic.</p>
 <table class="data-table"><tbody>
@@ -925,8 +925,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="transaction.state.log.num.partitions" href="#transaction.state.log.num.partitions"></a>
-      transaction.state.log.num.partitions
+   <a class="anchor-link" id="transaction.state.log.num.partitions"></a>
+   <a href="#transaction.state.log.num.partitions">transaction.state.log.num.partitions</a>
 </h4>
 <p>The number of partitions for the transaction topic (should not change after deployment).</p>
 <table class="data-table"><tbody>
@@ -939,8 +939,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="transaction.state.log.replication.factor" href="#transaction.state.log.replication.factor"></a>
-      transaction.state.log.replication.factor
+   <a class="anchor-link" id="transaction.state.log.replication.factor"></a>
+   <a href="#transaction.state.log.replication.factor">transaction.state.log.replication.factor</a>
 </h4>
 <p>The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.</p>
 <table class="data-table"><tbody>
@@ -953,8 +953,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="transaction.state.log.segment.bytes" href="#transaction.state.log.segment.bytes"></a>
-      transaction.state.log.segment.bytes
+   <a class="anchor-link" id="transaction.state.log.segment.bytes"></a>
+   <a href="#transaction.state.log.segment.bytes">transaction.state.log.segment.bytes</a>
 </h4>
 <p>The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads</p>
 <table class="data-table"><tbody>
@@ -967,8 +967,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="transactional.id.expiration.ms" href="#transactional.id.expiration.ms"></a>
-      transactional.id.expiration.ms
+   <a class="anchor-link" id="transactional.id.expiration.ms"></a>
+   <a href="#transactional.id.expiration.ms">transactional.id.expiration.ms</a>
 </h4>
 <p>The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transaction before expiring its transactional id. This setting also influences producer id expiration - producer ids are expired once this time has elapsed after the last write with the given producer id. Note that producer ids may expire sooner if the last write from the producer id is deleted due to the topic's retention settings.</p>
 <table class="data-table"><tbody>
@@ -981,8 +981,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="unclean.leader.election.enable" href="#unclean.leader.election.enable"></a>
-      unclean.leader.election.enable
+   <a class="anchor-link" id="unclean.leader.election.enable"></a>
+   <a href="#unclean.leader.election.enable">unclean.leader.election.enable</a>
 </h4>
 <p>Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss</p>
 <table class="data-table"><tbody>
@@ -995,8 +995,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.connection.timeout.ms" href="#zookeeper.connection.timeout.ms"></a>
-      zookeeper.connection.timeout.ms
+   <a class="anchor-link" id="zookeeper.connection.timeout.ms"></a>
+   <a href="#zookeeper.connection.timeout.ms">zookeeper.connection.timeout.ms</a>
 </h4>
 <p>The max time that the client waits to establish a connection to ZooKeeper. If not set, the value in zookeeper.session.timeout.ms is used</p>
 <table class="data-table"><tbody>
@@ -1009,8 +1009,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.max.in.flight.requests" href="#zookeeper.max.in.flight.requests"></a>
-      zookeeper.max.in.flight.requests
+   <a class="anchor-link" id="zookeeper.max.in.flight.requests"></a>
+   <a href="#zookeeper.max.in.flight.requests">zookeeper.max.in.flight.requests</a>
 </h4>
 <p>The maximum number of unacknowledged requests the client will send to ZooKeeper before blocking.</p>
 <table class="data-table"><tbody>
@@ -1023,8 +1023,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.session.timeout.ms" href="#zookeeper.session.timeout.ms"></a>
-      zookeeper.session.timeout.ms
+   <a class="anchor-link" id="zookeeper.session.timeout.ms"></a>
+   <a href="#zookeeper.session.timeout.ms">zookeeper.session.timeout.ms</a>
 </h4>
 <p>ZooKeeper session timeout</p>
 <table class="data-table"><tbody>
@@ -1037,8 +1037,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.set.acl" href="#zookeeper.set.acl"></a>
-      zookeeper.set.acl
+   <a class="anchor-link" id="zookeeper.set.acl"></a>
+   <a href="#zookeeper.set.acl">zookeeper.set.acl</a>
 </h4>
 <p>Set the client to use secure ACLs</p>
 <table class="data-table"><tbody>
@@ -1051,8 +1051,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="broker.id.generation.enable" href="#broker.id.generation.enable"></a>
-      broker.id.generation.enable
+   <a class="anchor-link" id="broker.id.generation.enable"></a>
+   <a href="#broker.id.generation.enable">broker.id.generation.enable</a>
 </h4>
 <p>Enable automatic broker id generation on the server. When enabled the value configured for reserved.broker.max.id should be reviewed.</p>
 <table class="data-table"><tbody>
@@ -1065,8 +1065,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="broker.rack" href="#broker.rack"></a>
-      broker.rack
+   <a class="anchor-link" id="broker.rack"></a>
+   <a href="#broker.rack">broker.rack</a>
 </h4>
 <p>Rack of the broker. This will be used in rack-aware replication assignment for fault tolerance. Examples: <code>RACK1</code>, <code>us-east-1d</code></p>
 <table class="data-table"><tbody>
@@ -1079,8 +1079,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="connections.max.idle.ms" href="#connections.max.idle.ms"></a>
-      connections.max.idle.ms
+   <a class="anchor-link" id="connections.max.idle.ms"></a>
+   <a href="#connections.max.idle.ms">connections.max.idle.ms</a>
 </h4>
 <p>Idle connection timeout: the server socket processor threads close connections that have been idle for longer than this</p>
 <table class="data-table"><tbody>
@@ -1093,8 +1093,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="connections.max.reauth.ms" href="#connections.max.reauth.ms"></a>
-      connections.max.reauth.ms
+   <a class="anchor-link" id="connections.max.reauth.ms"></a>
+   <a href="#connections.max.reauth.ms">connections.max.reauth.ms</a>
 </h4>
 <p>When explicitly set to a positive number (the default is 0, not a positive number), a session lifetime that will not exceed the configured value will be communicated to v2.2.0 or later clients when they authenticate. The broker will disconnect any such connection that is not re-authenticated within the session lifetime and that is then subsequently used for any purpose other than re-authentication. Configuration names can optionally be prefixed with the listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.oauthbearer.connections.max.reauth.ms=3600000</p>
 <table class="data-table"><tbody>
@@ -1107,8 +1107,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="controlled.shutdown.enable" href="#controlled.shutdown.enable"></a>
-      controlled.shutdown.enable
+   <a class="anchor-link" id="controlled.shutdown.enable"></a>
+   <a href="#controlled.shutdown.enable">controlled.shutdown.enable</a>
 </h4>
 <p>Enable controlled shutdown of the server</p>
 <table class="data-table"><tbody>
@@ -1121,8 +1121,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="controlled.shutdown.max.retries" href="#controlled.shutdown.max.retries"></a>
-      controlled.shutdown.max.retries
+   <a class="anchor-link" id="controlled.shutdown.max.retries"></a>
+   <a href="#controlled.shutdown.max.retries">controlled.shutdown.max.retries</a>
 </h4>
 <p>Controlled shutdown can fail for multiple reasons. This determines the number of retries when such a failure happens</p>
 <table class="data-table"><tbody>
@@ -1135,8 +1135,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="controlled.shutdown.retry.backoff.ms" href="#controlled.shutdown.retry.backoff.ms"></a>
-      controlled.shutdown.retry.backoff.ms
+   <a class="anchor-link" id="controlled.shutdown.retry.backoff.ms"></a>
+   <a href="#controlled.shutdown.retry.backoff.ms">controlled.shutdown.retry.backoff.ms</a>
 </h4>
 <p>Before each retry, the system needs time to recover from the state that caused the previous failure (controller failover, replica lag, etc.). This config determines the amount of time to wait before retrying.</p>
 <table class="data-table"><tbody>
@@ -1149,8 +1149,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="controller.socket.timeout.ms" href="#controller.socket.timeout.ms"></a>
-      controller.socket.timeout.ms
+   <a class="anchor-link" id="controller.socket.timeout.ms"></a>
+   <a href="#controller.socket.timeout.ms">controller.socket.timeout.ms</a>
 </h4>
 <p>The socket timeout for controller-to-broker channels</p>
 <table class="data-table"><tbody>
@@ -1163,8 +1163,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="default.replication.factor" href="#default.replication.factor"></a>
-      default.replication.factor
+   <a class="anchor-link" id="default.replication.factor"></a>
+   <a href="#default.replication.factor">default.replication.factor</a>
 </h4>
 <p>The default replication factor for automatically created topics</p>
 <table class="data-table"><tbody>
@@ -1177,8 +1177,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="delegation.token.expiry.time.ms" href="#delegation.token.expiry.time.ms"></a>
-      delegation.token.expiry.time.ms
+   <a class="anchor-link" id="delegation.token.expiry.time.ms"></a>
+   <a href="#delegation.token.expiry.time.ms">delegation.token.expiry.time.ms</a>
 </h4>
 <p>The token validity time in milliseconds before the token needs to be renewed. Default value is 1 day.</p>
 <table class="data-table"><tbody>
@@ -1191,8 +1191,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="delegation.token.master.key" href="#delegation.token.master.key"></a>
-      delegation.token.master.key
+   <a class="anchor-link" id="delegation.token.master.key"></a>
+   <a href="#delegation.token.master.key">delegation.token.master.key</a>
 </h4>
 <p>Master/secret key to generate and verify delegation tokens. The same key must be configured across all the brokers. If the key is not set or set to an empty string, brokers will disable delegation token support.</p>
 <table class="data-table"><tbody>
@@ -1205,8 +1205,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="delegation.token.max.lifetime.ms" href="#delegation.token.max.lifetime.ms"></a>
-      delegation.token.max.lifetime.ms
+   <a class="anchor-link" id="delegation.token.max.lifetime.ms"></a>
+   <a href="#delegation.token.max.lifetime.ms">delegation.token.max.lifetime.ms</a>
 </h4>
 <p>The token has a maximum lifetime beyond which it cannot be renewed anymore. The default value is 7 days.</p>
 <table class="data-table"><tbody>
@@ -1219,8 +1219,8 @@
 </li>
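 <p>A hedged sketch of enabling delegation tokens: the secret below is a placeholder that must be replaced (and must be identical on every broker), and the two durations spell out the documented 1-day and 7-day defaults:</p>
 <pre><code># Same placeholder secret on every broker; replace before use.
 delegation.token.master.key=change-me-shared-secret
 # Tokens must be renewed at least once a day...
 delegation.token.expiry.time.ms=86400000
 # ...and cannot be renewed past 7 days.
 delegation.token.max.lifetime.ms=604800000</code></pre>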
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="delete.records.purgatory.purge.interval.requests" href="#delete.records.purgatory.purge.interval.requests"></a>
-      delete.records.purgatory.purge.interval.requests
+   <a class="anchor-link" id="delete.records.purgatory.purge.interval.requests"></a>
+   <a href="#delete.records.purgatory.purge.interval.requests">delete.records.purgatory.purge.interval.requests</a>
 </h4>
 <p>The purge interval (in number of requests) of the delete records request purgatory</p>
 <table class="data-table"><tbody>
@@ -1233,8 +1233,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="fetch.max.bytes" href="#fetch.max.bytes"></a>
-      fetch.max.bytes
+   <a class="anchor-link" id="fetch.max.bytes"></a>
+   <a href="#fetch.max.bytes">fetch.max.bytes</a>
 </h4>
 <p>The maximum number of bytes we will return for a fetch request. Must be at least 1024.</p>
 <table class="data-table"><tbody>
@@ -1247,8 +1247,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="fetch.purgatory.purge.interval.requests" href="#fetch.purgatory.purge.interval.requests"></a>
-      fetch.purgatory.purge.interval.requests
+   <a class="anchor-link" id="fetch.purgatory.purge.interval.requests"></a>
+   <a href="#fetch.purgatory.purge.interval.requests">fetch.purgatory.purge.interval.requests</a>
 </h4>
 <p>The purge interval (in number of requests) of the fetch request purgatory</p>
 <table class="data-table"><tbody>
@@ -1261,8 +1261,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="group.initial.rebalance.delay.ms" href="#group.initial.rebalance.delay.ms"></a>
-      group.initial.rebalance.delay.ms
+   <a class="anchor-link" id="group.initial.rebalance.delay.ms"></a>
+   <a href="#group.initial.rebalance.delay.ms">group.initial.rebalance.delay.ms</a>
 </h4>
 <p>The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins.</p>
 <table class="data-table"><tbody>
@@ -1275,8 +1275,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="group.max.session.timeout.ms" href="#group.max.session.timeout.ms"></a>
-      group.max.session.timeout.ms
+   <a class="anchor-link" id="group.max.session.timeout.ms"></a>
+   <a href="#group.max.session.timeout.ms">group.max.session.timeout.ms</a>
 </h4>
 <p>The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.</p>
 <table class="data-table"><tbody>
@@ -1289,8 +1289,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="group.max.size" href="#group.max.size"></a>
-      group.max.size
+   <a class="anchor-link" id="group.max.size"></a>
+   <a href="#group.max.size">group.max.size</a>
 </h4>
 <p>The maximum number of consumers that a single consumer group can accommodate.</p>
 <table class="data-table"><tbody>
@@ -1303,8 +1303,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="group.min.session.timeout.ms" href="#group.min.session.timeout.ms"></a>
-      group.min.session.timeout.ms
+   <a class="anchor-link" id="group.min.session.timeout.ms"></a>
+   <a href="#group.min.session.timeout.ms">group.min.session.timeout.ms</a>
 </h4>
 <p>The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources.</p>
 <table class="data-table"><tbody>
@@ -1317,8 +1317,8 @@
 </li>
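 <p>To make the group-coordinator knobs above concrete, a sketch using the shipped default values (illustrative only, not a recommendation):</p>
 <pre><code># Session timeouts requested by consumers must fall in this range.
 group.min.session.timeout.ms=6000
 group.max.session.timeout.ms=1800000
 # Wait 3 seconds for more members before the first rebalance.
 group.initial.rebalance.delay.ms=3000</code></pre>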
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="inter.broker.listener.name" href="#inter.broker.listener.name"></a>
-      inter.broker.listener.name
+   <a class="anchor-link" id="inter.broker.listener.name"></a>
+   <a href="#inter.broker.listener.name">inter.broker.listener.name</a>
 </h4>
 <p>Name of listener used for communication between brokers. If this is unset, the listener name is defined by security.inter.broker.protocol. It is an error to set this and security.inter.broker.protocol properties at the same time.</p>
 <table class="data-table"><tbody>
@@ -1331,8 +1331,8 @@
 </li>
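 <p>A minimal sketch of selecting the inter-broker listener by name; the listener names INTERNAL and EXTERNAL are illustrative assumptions:</p>
 <pre><code>listeners=INTERNAL://:9092,EXTERNAL://:9093
 listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
 inter.broker.listener.name=INTERNAL
 # Do not also set security.inter.broker.protocol; setting both is an error.</code></pre>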
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="inter.broker.protocol.version" href="#inter.broker.protocol.version"></a>
-      inter.broker.protocol.version
+   <a class="anchor-link" id="inter.broker.protocol.version"></a>
+   <a href="#inter.broker.protocol.version">inter.broker.protocol.version</a>
 </h4>
 <p>Specify which version of the inter-broker protocol will be used.<br> This is typically bumped after all brokers have been upgraded to a new version.<br> Some examples of valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1. Check ApiVersion for the full list.</p>
 <table class="data-table"><tbody>
@@ -1345,8 +1345,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.cleaner.backoff.ms" href="#log.cleaner.backoff.ms"></a>
-      log.cleaner.backoff.ms
+   <a class="anchor-link" id="log.cleaner.backoff.ms"></a>
+   <a href="#log.cleaner.backoff.ms">log.cleaner.backoff.ms</a>
 </h4>
 <p>The amount of time to sleep when there are no logs to clean</p>
 <table class="data-table"><tbody>
@@ -1359,8 +1359,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.cleaner.dedupe.buffer.size" href="#log.cleaner.dedupe.buffer.size"></a>
-      log.cleaner.dedupe.buffer.size
+   <a class="anchor-link" id="log.cleaner.dedupe.buffer.size"></a>
+   <a href="#log.cleaner.dedupe.buffer.size">log.cleaner.dedupe.buffer.size</a>
 </h4>
 <p>The total memory used for log deduplication across all cleaner threads</p>
 <table class="data-table"><tbody>
@@ -1373,8 +1373,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.cleaner.delete.retention.ms" href="#log.cleaner.delete.retention.ms"></a>
-      log.cleaner.delete.retention.ms
+   <a class="anchor-link" id="log.cleaner.delete.retention.ms"></a>
+   <a href="#log.cleaner.delete.retention.ms">log.cleaner.delete.retention.ms</a>
 </h4>
 <p>How long delete records are retained.</p>
 <table class="data-table"><tbody>
@@ -1387,8 +1387,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.cleaner.enable" href="#log.cleaner.enable"></a>
-      log.cleaner.enable
+   <a class="anchor-link" id="log.cleaner.enable"></a>
+   <a href="#log.cleaner.enable">log.cleaner.enable</a>
 </h4>
 <p>Enable the log cleaner process to run on the server. It should be enabled if using any topics with cleanup.policy=compact, including the internal offsets topic. If disabled, those topics will not be compacted and will continually grow in size.</p>
 <table class="data-table"><tbody>
@@ -1401,8 +1401,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.cleaner.io.buffer.load.factor" href="#log.cleaner.io.buffer.load.factor"></a>
-      log.cleaner.io.buffer.load.factor
+   <a class="anchor-link" id="log.cleaner.io.buffer.load.factor"></a>
+   <a href="#log.cleaner.io.buffer.load.factor">log.cleaner.io.buffer.load.factor</a>
 </h4>
 <p>The log cleaner dedupe buffer load factor, i.e. the percentage full the dedupe buffer can become. A higher value allows more log to be cleaned at once but leads to more hash collisions.</p>
 <table class="data-table"><tbody>
@@ -1415,8 +1415,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.cleaner.io.buffer.size" href="#log.cleaner.io.buffer.size"></a>
-      log.cleaner.io.buffer.size
+   <a class="anchor-link" id="log.cleaner.io.buffer.size"></a>
+   <a href="#log.cleaner.io.buffer.size">log.cleaner.io.buffer.size</a>
 </h4>
 <p>The total memory used for log cleaner I/O buffers across all cleaner threads</p>
 <table class="data-table"><tbody>
@@ -1429,8 +1429,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.cleaner.io.max.bytes.per.second" href="#log.cleaner.io.max.bytes.per.second"></a>
-      log.cleaner.io.max.bytes.per.second
+   <a class="anchor-link" id="log.cleaner.io.max.bytes.per.second"></a>
+   <a href="#log.cleaner.io.max.bytes.per.second">log.cleaner.io.max.bytes.per.second</a>
 </h4>
 <p>The log cleaner will be throttled so that the sum of its read and write I/O will be less than this value on average.</p>
 <table class="data-table"><tbody>
@@ -1443,8 +1443,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.cleaner.max.compaction.lag.ms" href="#log.cleaner.max.compaction.lag.ms"></a>
-      log.cleaner.max.compaction.lag.ms
+   <a class="anchor-link" id="log.cleaner.max.compaction.lag.ms"></a>
+   <a href="#log.cleaner.max.compaction.lag.ms">log.cleaner.max.compaction.lag.ms</a>
 </h4>
 <p>The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted.</p>
 <table class="data-table"><tbody>
@@ -1457,8 +1457,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.cleaner.min.cleanable.ratio" href="#log.cleaner.min.cleanable.ratio"></a>
-      log.cleaner.min.cleanable.ratio
+   <a class="anchor-link" id="log.cleaner.min.cleanable.ratio"></a>
+   <a href="#log.cleaner.min.cleanable.ratio">log.cleaner.min.cleanable.ratio</a>
 </h4>
 <p>The minimum ratio of dirty log to total log for a log to be eligible for cleaning. If the log.cleaner.max.compaction.lag.ms or the log.cleaner.min.compaction.lag.ms configurations are also specified, then the log compactor considers the log eligible for compaction as soon as either: (i) the dirty-ratio threshold has been met and the log has had dirty (uncompacted) records for at least the log.cleaner.min.compaction.lag.ms duration, or (ii) the log has had dirty (uncompacted) records for at most the log.cleaner.max.compaction.lag.ms period.</p>
 <table class="data-table"><tbody>
@@ -1471,8 +1471,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.cleaner.min.compaction.lag.ms" href="#log.cleaner.min.compaction.lag.ms"></a>
-      log.cleaner.min.compaction.lag.ms
+   <a class="anchor-link" id="log.cleaner.min.compaction.lag.ms"></a>
+   <a href="#log.cleaner.min.compaction.lag.ms">log.cleaner.min.compaction.lag.ms</a>
 </h4>
 <p>The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.</p>
 <table class="data-table"><tbody>
@@ -1485,8 +1485,8 @@
 </li>
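 <p>Putting the three compaction-eligibility settings side by side; the values are the shipped defaults, shown only to make the interaction described above concrete:</p>
 <pre><code># A log becomes eligible once 50% of it is dirty...
 log.cleaner.min.cleanable.ratio=0.5
 # ...but never before records have been dirty this long...
 log.cleaner.min.compaction.lag.ms=0
 # ...and always once records have been dirty this long (default: effectively never).
 log.cleaner.max.compaction.lag.ms=9223372036854775807</code></pre>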
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.cleaner.threads" href="#log.cleaner.threads"></a>
-      log.cleaner.threads
+   <a class="anchor-link" id="log.cleaner.threads"></a>
+   <a href="#log.cleaner.threads">log.cleaner.threads</a>
 </h4>
 <p>The number of background threads to use for log cleaning</p>
 <table class="data-table"><tbody>
@@ -1499,8 +1499,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.cleanup.policy" href="#log.cleanup.policy"></a>
-      log.cleanup.policy
+   <a class="anchor-link" id="log.cleanup.policy"></a>
+   <a href="#log.cleanup.policy">log.cleanup.policy</a>
 </h4>
 <p>The default cleanup policy for segments beyond the retention window. A comma-separated list of valid policies. Valid policies are: "delete" and "compact".</p>
 <table class="data-table"><tbody>
@@ -1513,8 +1513,8 @@
 </li>
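 <p>Since the value is a comma-separated list, both policies can be combined; a minimal example:</p>
 <pre><code># Compact the log and also delete segments past the retention window.
 log.cleanup.policy=compact,delete</code></pre>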
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.index.interval.bytes" href="#log.index.interval.bytes"></a>
-      log.index.interval.bytes
+   <a class="anchor-link" id="log.index.interval.bytes"></a>
+   <a href="#log.index.interval.bytes">log.index.interval.bytes</a>
 </h4>
 <p>The interval with which we add an entry to the offset index</p>
 <table class="data-table"><tbody>
@@ -1527,8 +1527,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.index.size.max.bytes" href="#log.index.size.max.bytes"></a>
-      log.index.size.max.bytes
+   <a class="anchor-link" id="log.index.size.max.bytes"></a>
+   <a href="#log.index.size.max.bytes">log.index.size.max.bytes</a>
 </h4>
 <p>The maximum size in bytes of the offset index</p>
 <table class="data-table"><tbody>
@@ -1541,8 +1541,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.message.format.version" href="#log.message.format.version"></a>
-      log.message.format.version
+   <a class="anchor-link" id="log.message.format.version"></a>
+   <a href="#log.message.format.version">log.message.format.version</a>
 </h4>
 <p>Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0; check ApiVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are at or below the specified version. Setting this value incorrectly will cause consumers with older versions to break, as they will receive messages with a format that they don't understand.</p>
 <table class="data-table"><tbody>
@@ -1555,8 +1555,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.message.timestamp.difference.max.ms" href="#log.message.timestamp.difference.max.ms"></a>
-      log.message.timestamp.difference.max.ms
+   <a class="anchor-link" id="log.message.timestamp.difference.max.ms"></a>
+   <a href="#log.message.timestamp.difference.max.ms">log.message.timestamp.difference.max.ms</a>
 </h4>
 <p>The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If log.message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime. The maximum timestamp difference allowed should be no greater than log.retention.ms to avoid unnecessarily frequent log rolling.</p>
 <table class="data-table"><tbody>
@@ -1569,8 +1569,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.message.timestamp.type" href="#log.message.timestamp.type"></a>
-      log.message.timestamp.type
+   <a class="anchor-link" id="log.message.timestamp.type"></a>
+   <a href="#log.message.timestamp.type">log.message.timestamp.type</a>
 </h4>
 <p>Define whether the timestamp in the message is message create time or log append time. The value should be either <code>CreateTime</code> or <code>LogAppendTime</code>.</p>
 <table class="data-table"><tbody>
@@ -1583,8 +1583,8 @@
 </li>
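 <p>A sketch tying the two timestamp settings together; the values shown are the defaults, not a recommendation:</p>
 <pre><code># Use the producer-supplied timestamp...
 log.message.timestamp.type=CreateTime
 # ...and (by default) accept any CreateTime, however far from broker time.
 log.message.timestamp.difference.max.ms=9223372036854775807</code></pre>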
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.preallocate" href="#log.preallocate"></a>
-      log.preallocate
+   <a class="anchor-link" id="log.preallocate"></a>
+   <a href="#log.preallocate">log.preallocate</a>
 </h4>
 <p>Whether to preallocate the file when creating a new segment. If you are using Kafka on Windows, you probably need to set this to true.</p>
 <table class="data-table"><tbody>
@@ -1597,8 +1597,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.retention.check.interval.ms" href="#log.retention.check.interval.ms"></a>
-      log.retention.check.interval.ms
+   <a class="anchor-link" id="log.retention.check.interval.ms"></a>
+   <a href="#log.retention.check.interval.ms">log.retention.check.interval.ms</a>
 </h4>
 <p>The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion</p>
 <table class="data-table"><tbody>
@@ -1611,8 +1611,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="max.connections" href="#max.connections"></a>
-      max.connections
+   <a class="anchor-link" id="max.connections"></a>
+   <a href="#max.connections">max.connections</a>
 </h4>
 <p>The maximum number of connections we allow in the broker at any time. This limit is applied in addition to any per-IP limits configured using max.connections.per.ip. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, <code>listener.name.internal.max.connections</code>. The broker-wide limit should be configured based on broker capacity, while listener limits should be configured based on application requirements. New connections are blocked if either the listener or broker limit is reached. Connections on the inter-broker listener are permitted even if the broker-wide limit is reached; in that case, the least recently used connection on another listener is closed.</p>
 <table class="data-table"><tbody>
@@ -1625,8 +1625,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="max.connections.per.ip" href="#max.connections.per.ip"></a>
-      max.connections.per.ip
+   <a class="anchor-link" id="max.connections.per.ip"></a>
+   <a href="#max.connections.per.ip">max.connections.per.ip</a>
 </h4>
 <p>The maximum number of connections we allow from each IP address. This can be set to 0 if there are overrides configured using the max.connections.per.ip.overrides property. New connections from the IP address are dropped if the limit is reached.</p>
 <table class="data-table"><tbody>
@@ -1639,8 +1639,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="max.connections.per.ip.overrides" href="#max.connections.per.ip.overrides"></a>
-      max.connections.per.ip.overrides
+   <a class="anchor-link" id="max.connections.per.ip.overrides"></a>
+   <a href="#max.connections.per.ip.overrides">max.connections.per.ip.overrides</a>
 </h4>
 <p>A comma-separated list of per-IP or hostname overrides to the default maximum number of connections. An example value is "hostName:100,127.0.0.1:200".</p>
 <table class="data-table"><tbody>
@@ -1653,8 +1653,8 @@
 </li>
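 <p>An illustrative sketch of layering the connection limits described above; all numbers are arbitrary, and the listener name <code>internal</code> is an assumption:</p>
 <pre><code># Broker-wide cap, sized to broker capacity.
 max.connections=2000
 # Tighter cap for one listener.
 listener.name.internal.max.connections=500
 # Per-IP cap, with per-host overrides (example from the description above).
 max.connections.per.ip=100
 max.connections.per.ip.overrides=hostName:100,127.0.0.1:200</code></pre>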
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="max.incremental.fetch.session.cache.slots" href="#max.incremental.fetch.session.cache.slots"></a>
-      max.incremental.fetch.session.cache.slots
+   <a class="anchor-link" id="max.incremental.fetch.session.cache.slots"></a>
+   <a href="#max.incremental.fetch.session.cache.slots">max.incremental.fetch.session.cache.slots</a>
 </h4>
 <p>The maximum number of incremental fetch sessions that we will maintain.</p>
 <table class="data-table"><tbody>
@@ -1667,8 +1667,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="num.partitions" href="#num.partitions"></a>
-      num.partitions
+   <a class="anchor-link" id="num.partitions"></a>
+   <a href="#num.partitions">num.partitions</a>
 </h4>
 <p>The default number of log partitions per topic</p>
 <table class="data-table"><tbody>
@@ -1681,8 +1681,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="password.encoder.old.secret" href="#password.encoder.old.secret"></a>
-      password.encoder.old.secret
+   <a class="anchor-link" id="password.encoder.old.secret"></a>
+   <a href="#password.encoder.old.secret">password.encoder.old.secret</a>
 </h4>
 <p>The old secret that was used for encoding dynamically configured passwords. This is required only when the secret is updated. If specified, all dynamically encoded passwords are decoded using this old secret and re-encoded using password.encoder.secret when the broker starts up.</p>
 <table class="data-table"><tbody>
@@ -1695,8 +1695,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="password.encoder.secret" href="#password.encoder.secret"></a>
-      password.encoder.secret
+   <a class="anchor-link" id="password.encoder.secret"></a>
+   <a href="#password.encoder.secret">password.encoder.secret</a>
 </h4>
 <p>The secret used for encoding dynamically configured passwords for this broker.</p>
 <table class="data-table"><tbody>
@@ -1709,8 +1709,8 @@
 </li>
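 <p>A hedged sketch of rotating the password-encoder secret; both values are placeholders:</p>
 <pre><code># New secret used for re-encoding at startup.
 password.encoder.secret=change-me-new-secret
 # Old secret, needed only for the rotation; remove afterwards.
 password.encoder.old.secret=change-me-old-secret</code></pre>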
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="principal.builder.class" href="#principal.builder.class"></a>
-      principal.builder.class
+   <a class="anchor-link" id="principal.builder.class"></a>
+   <a href="#principal.builder.class">principal.builder.class</a>
 </h4>
 <p>The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, which is used to build the KafkaPrincipal object used during authorization. This config also supports the deprecated PrincipalBuilder interface which was previously used for client authentication over SSL. If no principal builder is defined, the default behavior depends on the security protocol in use. For SSL authentication, the principal will be derived using the rules defined by <code>ssl.principal.mapping.rules</code> applied on the distinguished name from the client certificate if one is provided; otherwise, if client authentication is not required, the principal name will be ANONYMOUS. For SASL authentication, the principal will be derived using the rules defined by <code>sasl.kerberos.principal.to.local.rules</code> if GSSAPI is in use, and the SASL authentication ID for other mechanisms. For PLAINTEXT, the principal will be ANONYMOUS.</p>
 <table class="data-table"><tbody>
@@ -1723,8 +1723,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="producer.purgatory.purge.interval.requests" href="#producer.purgatory.purge.interval.requests"></a>
-      producer.purgatory.purge.interval.requests
+   <a class="anchor-link" id="producer.purgatory.purge.interval.requests"></a>
+   <a href="#producer.purgatory.purge.interval.requests">producer.purgatory.purge.interval.requests</a>
 </h4>
 <p>The purge interval (in number of requests) of the producer request purgatory</p>
 <table class="data-table"><tbody>
@@ -1737,8 +1737,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="queued.max.request.bytes" href="#queued.max.request.bytes"></a>
-      queued.max.request.bytes
+   <a class="anchor-link" id="queued.max.request.bytes"></a>
+   <a href="#queued.max.request.bytes">queued.max.request.bytes</a>
 </h4>
 <p>The number of queued bytes allowed before no more requests are read</p>
 <table class="data-table"><tbody>
@@ -1751,8 +1751,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="replica.fetch.backoff.ms" href="#replica.fetch.backoff.ms"></a>
-      replica.fetch.backoff.ms
+   <a class="anchor-link" id="replica.fetch.backoff.ms"></a>
+   <a href="#replica.fetch.backoff.ms">replica.fetch.backoff.ms</a>
 </h4>
 <p>The amount of time to sleep when a fetch partition error occurs.</p>
 <table class="data-table"><tbody>
@@ -1765,8 +1765,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="replica.fetch.max.bytes" href="#replica.fetch.max.bytes"></a>
-      replica.fetch.max.bytes
+   <a class="anchor-link" id="replica.fetch.max.bytes"></a>
+   <a href="#replica.fetch.max.bytes">replica.fetch.max.bytes</a>
 </h4>
 <p>The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config).</p>
 <table class="data-table"><tbody>
@@ -1779,8 +1779,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="replica.fetch.response.max.bytes" href="#replica.fetch.response.max.bytes"></a>
-      replica.fetch.response.max.bytes
+   <a class="anchor-link" id="replica.fetch.response.max.bytes"></a>
+   <a href="#replica.fetch.response.max.bytes">replica.fetch.response.max.bytes</a>
 </h4>
 <p>Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config).</p>
 <table class="data-table"><tbody>
@@ -1793,8 +1793,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="replica.selector.class" href="#replica.selector.class"></a>
-      replica.selector.class
+   <a class="anchor-link" id="replica.selector.class"></a>
+   <a href="#replica.selector.class">replica.selector.class</a>
 </h4>
 <p>The fully qualified class name that implements ReplicaSelector. This is used by the broker to find the preferred read replica. By default, we use an implementation that returns the leader.</p>
 <table class="data-table"><tbody>
@@ -1807,8 +1807,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="reserved.broker.max.id" href="#reserved.broker.max.id"></a>
-      reserved.broker.max.id
+   <a class="anchor-link" id="reserved.broker.max.id"></a>
+   <a href="#reserved.broker.max.id">reserved.broker.max.id</a>
 </h4>
 <p>The maximum number that can be used for a broker.id.</p>
 <table class="data-table"><tbody>
@@ -1821,8 +1821,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.client.callback.handler.class" href="#sasl.client.callback.handler.class"></a>
-      sasl.client.callback.handler.class
+   <a class="anchor-link" id="sasl.client.callback.handler.class"></a>
+   <a href="#sasl.client.callback.handler.class">sasl.client.callback.handler.class</a>
 </h4>
 <p>The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.</p>
 <table class="data-table"><tbody>
@@ -1835,8 +1835,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.enabled.mechanisms" href="#sasl.enabled.mechanisms"></a>
-      sasl.enabled.mechanisms
+   <a class="anchor-link" id="sasl.enabled.mechanisms"></a>
+   <a href="#sasl.enabled.mechanisms">sasl.enabled.mechanisms</a>
 </h4>
 <p>The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default.</p>
 <table class="data-table"><tbody>
@@ -1849,8 +1849,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.jaas.config" href="#sasl.jaas.config"></a>
-      sasl.jaas.config
+   <a class="anchor-link" id="sasl.jaas.config"></a>
+   <a href="#sasl.jaas.config">sasl.jaas.config</a>
 </h4>
 <p>JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html">here</a>. The format for the value is: '<code>loginModuleClass controlFlag (optionName=optionValue)*;</code>'. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;</p>
 <table class="data-table"><tbody>
@@ -1863,8 +1863,8 @@
 </li>
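 <p>A fuller sketch of the listener-prefixed JAAS value, assuming SCRAM-SHA-256 on a listener named <code>SASL_SSL</code>; the module class is Kafka's standard ScramLoginModule and the credentials are placeholders:</p>
 <pre><code>listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
     username="admin" \
     password="change-me";</code></pre>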
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.kinit.cmd" href="#sasl.kerberos.kinit.cmd"></a>
-      sasl.kerberos.kinit.cmd
+   <a class="anchor-link" id="sasl.kerberos.kinit.cmd"></a>
+   <a href="#sasl.kerberos.kinit.cmd">sasl.kerberos.kinit.cmd</a>
 </h4>
 <p>Kerberos kinit command path.</p>
 <table class="data-table"><tbody>
@@ -1877,8 +1877,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.min.time.before.relogin" href="#sasl.kerberos.min.time.before.relogin"></a>
-      sasl.kerberos.min.time.before.relogin
+   <a class="anchor-link" id="sasl.kerberos.min.time.before.relogin"></a>
+   <a href="#sasl.kerberos.min.time.before.relogin">sasl.kerberos.min.time.before.relogin</a>
 </h4>
 <p>Login thread sleep time between refresh attempts.</p>
 <table class="data-table"><tbody>
@@ -1891,8 +1891,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.principal.to.local.rules" href="#sasl.kerberos.principal.to.local.rules"></a>
-      sasl.kerberos.principal.to.local.rules
+   <a class="anchor-link" id="sasl.kerberos.principal.to.local.rules"></a>
+   <a href="#sasl.kerberos.principal.to.local.rules">sasl.kerberos.principal.to.local.rules</a>
 </h4>
 <p>A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format please see <a href="#security_authz"> security authorization and acls</a>. Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the <code>principal.builder.class</code> configuration.</p>
 <table class="data-table"><tbody>
@@ -1905,8 +1905,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.service.name" href="#sasl.kerberos.service.name"></a>
-      sasl.kerberos.service.name
+   <a class="anchor-link" id="sasl.kerberos.service.name"></a>
+   <a href="#sasl.kerberos.service.name">sasl.kerberos.service.name</a>
 </h4>
 <p>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</p>
 <table class="data-table"><tbody>
@@ -1919,8 +1919,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.ticket.renew.jitter" href="#sasl.kerberos.ticket.renew.jitter"></a>
-      sasl.kerberos.ticket.renew.jitter
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.jitter"></a>
+   <a href="#sasl.kerberos.ticket.renew.jitter">sasl.kerberos.ticket.renew.jitter</a>
 </h4>
 <p>Percentage of random jitter added to the renewal time.</p>
 <table class="data-table"><tbody>
@@ -1933,8 +1933,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.ticket.renew.window.factor" href="#sasl.kerberos.ticket.renew.window.factor"></a>
-      sasl.kerberos.ticket.renew.window.factor
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.window.factor"></a>
+   <a href="#sasl.kerberos.ticket.renew.window.factor">sasl.kerberos.ticket.renew.window.factor</a>
 </h4>
 <p>Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.</p>
 <table class="data-table"><tbody>
@@ -1947,8 +1947,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.callback.handler.class" href="#sasl.login.callback.handler.class"></a>
-      sasl.login.callback.handler.class
+   <a class="anchor-link" id="sasl.login.callback.handler.class"></a>
+   <a href="#sasl.login.callback.handler.class">sasl.login.callback.handler.class</a>
 </h4>
 <p>The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler</p>
 <table class="data-table"><tbody>
@@ -1961,8 +1961,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.class" href="#sasl.login.class"></a>
-      sasl.login.class
+   <a class="anchor-link" id="sasl.login.class"></a>
+   <a href="#sasl.login.class">sasl.login.class</a>
 </h4>
 <p>The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin</p>
 <table class="data-table"><tbody>
@@ -1975,8 +1975,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.refresh.buffer.seconds" href="#sasl.login.refresh.buffer.seconds"></a>
-      sasl.login.refresh.buffer.seconds
+   <a class="anchor-link" id="sasl.login.refresh.buffer.seconds"></a>
+   <a href="#sasl.login.refresh.buffer.seconds">sasl.login.refresh.buffer.seconds</a>
 </h4>
 <p>The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds, then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -1989,8 +1989,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.refresh.min.period.seconds" href="#sasl.login.refresh.min.period.seconds"></a>
-      sasl.login.refresh.min.period.seconds
+   <a class="anchor-link" id="sasl.login.refresh.min.period.seconds"></a>
+   <a href="#sasl.login.refresh.min.period.seconds">sasl.login.refresh.min.period.seconds</a>
 </h4>
 <p>The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -2003,8 +2003,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.refresh.window.factor" href="#sasl.login.refresh.window.factor"></a>
-      sasl.login.refresh.window.factor
+   <a class="anchor-link" id="sasl.login.refresh.window.factor"></a>
+   <a href="#sasl.login.refresh.window.factor">sasl.login.refresh.window.factor</a>
 </h4>
 <p>Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -2017,8 +2017,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.refresh.window.jitter" href="#sasl.login.refresh.window.jitter"></a>
-      sasl.login.refresh.window.jitter
+   <a class="anchor-link" id="sasl.login.refresh.window.jitter"></a>
+   <a href="#sasl.login.refresh.window.jitter">sasl.login.refresh.window.jitter</a>
 </h4>
 <p>The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -2031,8 +2031,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.mechanism.inter.broker.protocol" href="#sasl.mechanism.inter.broker.protocol"></a>
-      sasl.mechanism.inter.broker.protocol
+   <a class="anchor-link" id="sasl.mechanism.inter.broker.protocol"></a>
+   <a href="#sasl.mechanism.inter.broker.protocol">sasl.mechanism.inter.broker.protocol</a>
 </h4>
 <p>SASL mechanism used for inter-broker communication. Default is GSSAPI.</p>
 <table class="data-table"><tbody>
@@ -2045,8 +2045,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.server.callback.handler.class" href="#sasl.server.callback.handler.class"></a>
-      sasl.server.callback.handler.class
+   <a class="anchor-link" id="sasl.server.callback.handler.class"></a>
+   <a href="#sasl.server.callback.handler.class">sasl.server.callback.handler.class</a>
 </h4>
 <p>The fully qualified name of a SASL server callback handler class that implements the AuthenticateCallbackHandler interface. Server callback handlers must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=com.example.CustomPlainCallbackHandler.</p>
 <table class="data-table"><tbody>
@@ -2059,8 +2059,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="security.inter.broker.protocol" href="#security.inter.broker.protocol"></a>
-      security.inter.broker.protocol
+   <a class="anchor-link" id="security.inter.broker.protocol"></a>
+   <a href="#security.inter.broker.protocol">security.inter.broker.protocol</a>
 </h4>
 <p>Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error to set this and inter.broker.listener.name properties at the same time.</p>
 <table class="data-table"><tbody>
@@ -2073,8 +2073,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.cipher.suites" href="#ssl.cipher.suites"></a>
-      ssl.cipher.suites
+   <a class="anchor-link" id="ssl.cipher.suites"></a>
+   <a href="#ssl.cipher.suites">ssl.cipher.suites</a>
 </h4>
 <p>A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.</p>
 <table class="data-table"><tbody>
@@ -2087,8 +2087,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.client.auth" href="#ssl.client.auth"></a>
-      ssl.client.auth
+   <a class="anchor-link" id="ssl.client.auth"></a>
+   <a href="#ssl.client.auth">ssl.client.auth</a>
 </h4>
 <p>Configures the Kafka broker to request client authentication. The following settings are common: <ul> <li><code>ssl.client.auth=required</code> If set to required, client authentication is required. <li><code>ssl.client.auth=requested</code> This means client authentication is optional. Unlike required, with this option the client can choose not to provide authentication information about itself. <li><code>ssl.client.auth=none</code> This means client authentication is not needed.</ul></p>
 <table class="data-table"><tbody>
@@ -2101,8 +2101,8 @@
 </li>
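 <p>A minimal sketch of enforcing client authentication; the truststore path is illustrative and the password a placeholder:</p>
 <pre><code>ssl.client.auth=required
 # The broker must be able to verify client certificates.
 ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
 ssl.truststore.password=change-me</code></pre>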
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.enabled.protocols" href="#ssl.enabled.protocols"></a>
-      ssl.enabled.protocols
+   <a class="anchor-link" id="ssl.enabled.protocols"></a>
+   <a href="#ssl.enabled.protocols">ssl.enabled.protocols</a>
 </h4>
 <p>The list of protocols enabled for SSL connections.</p>
 <table class="data-table"><tbody>
@@ -2115,8 +2115,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.key.password" href="#ssl.key.password"></a>
-      ssl.key.password
+   <a class="anchor-link" id="ssl.key.password"></a>
+   <a href="#ssl.key.password">ssl.key.password</a>
 </h4>
 <p>The password of the private key in the key store file. This is optional for client.</p>
 <table class="data-table"><tbody>
@@ -2129,8 +2129,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.keymanager.algorithm" href="#ssl.keymanager.algorithm"></a>
-      ssl.keymanager.algorithm
+   <a class="anchor-link" id="ssl.keymanager.algorithm"></a>
+   <a href="#ssl.keymanager.algorithm">ssl.keymanager.algorithm</a>
 </h4>
 <p>The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.</p>
 <table class="data-table"><tbody>
@@ -2143,8 +2143,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.keystore.location" href="#ssl.keystore.location"></a>
-      ssl.keystore.location
+   <a class="anchor-link" id="ssl.keystore.location"></a>
+   <a href="#ssl.keystore.location">ssl.keystore.location</a>
 </h4>
 <p>The location of the key store file. This is optional for client and can be used for two-way authentication for client.</p>
 <table class="data-table"><tbody>
@@ -2157,8 +2157,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.keystore.password" href="#ssl.keystore.password"></a>
-      ssl.keystore.password
+   <a class="anchor-link" id="ssl.keystore.password"></a>
+   <a href="#ssl.keystore.password">ssl.keystore.password</a>
 </h4>
 <p>The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. </p>
 <table class="data-table"><tbody>
@@ -2171,8 +2171,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.keystore.type" href="#ssl.keystore.type"></a>
-      ssl.keystore.type
+   <a class="anchor-link" id="ssl.keystore.type"></a>
+   <a href="#ssl.keystore.type">ssl.keystore.type</a>
 </h4>
 <p>The file format of the key store file. This is optional for client.</p>
 <table class="data-table"><tbody>
@@ -2185,8 +2185,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.protocol" href="#ssl.protocol"></a>
-      ssl.protocol
+   <a class="anchor-link" id="ssl.protocol"></a>
+   <a href="#ssl.protocol">ssl.protocol</a>
 </h4>
 <p>The SSL protocol used to generate the SSLContext. Default setting is TLSv1.2, which is fine for most cases. Allowed values in recent JVMs are TLSv1.2 and TLSv1.3. TLS, TLSv1.1, SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.</p>
 <table class="data-table"><tbody>
@@ -2199,8 +2199,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.provider" href="#ssl.provider"></a>
-      ssl.provider
+   <a class="anchor-link" id="ssl.provider"></a>
+   <a href="#ssl.provider">ssl.provider</a>
 </h4>
 <p>The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</p>
 <table class="data-table"><tbody>
@@ -2213,8 +2213,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.trustmanager.algorithm" href="#ssl.trustmanager.algorithm"></a>
-      ssl.trustmanager.algorithm
+   <a class="anchor-link" id="ssl.trustmanager.algorithm"></a>
+   <a href="#ssl.trustmanager.algorithm">ssl.trustmanager.algorithm</a>
 </h4>
 <p>The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.</p>
 <table class="data-table"><tbody>
@@ -2227,8 +2227,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.truststore.location" href="#ssl.truststore.location"></a>
-      ssl.truststore.location
+   <a class="anchor-link" id="ssl.truststore.location"></a>
+   <a href="#ssl.truststore.location">ssl.truststore.location</a>
 </h4>
 <p>The location of the trust store file. </p>
 <table class="data-table"><tbody>
@@ -2241,8 +2241,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.truststore.password" href="#ssl.truststore.password"></a>
-      ssl.truststore.password
+   <a class="anchor-link" id="ssl.truststore.password"></a>
+   <a href="#ssl.truststore.password">ssl.truststore.password</a>
 </h4>
 <p>The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.</p>
 <table class="data-table"><tbody>
@@ -2255,8 +2255,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.truststore.type" href="#ssl.truststore.type"></a>
-      ssl.truststore.type
+   <a class="anchor-link" id="ssl.truststore.type"></a>
+   <a href="#ssl.truststore.type">ssl.truststore.type</a>
 </h4>
 <p>The file format of the trust store file.</p>
 <table class="data-table"><tbody>
@@ -2269,8 +2269,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.clientCnxnSocket" href="#zookeeper.clientCnxnSocket"></a>
-      zookeeper.clientCnxnSocket
+   <a class="anchor-link" id="zookeeper.clientCnxnSocket"></a>
+   <a href="#zookeeper.clientCnxnSocket">zookeeper.clientCnxnSocket</a>
 </h4>
 <p>Typically set to <code>org.apache.zookeeper.ClientCnxnSocketNetty</code> when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the same-named <code>zookeeper.clientCnxnSocket</code> system property.</p>
 <table class="data-table"><tbody>
@@ -2283,8 +2283,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.ssl.client.enable" href="#zookeeper.ssl.client.enable"></a>
-      zookeeper.ssl.client.enable
+   <a class="anchor-link" id="zookeeper.ssl.client.enable"></a>
+   <a href="#zookeeper.ssl.client.enable">zookeeper.ssl.client.enable</a>
 </h4>
 <p>Set client to use TLS when connecting to ZooKeeper. An explicit value overrides any value set via the <code>zookeeper.client.secure</code> system property (note the different name). Defaults to false if neither is set; when true, <code>zookeeper.clientCnxnSocket</code> must be set (typically to <code>org.apache.zookeeper.ClientCnxnSocketNetty</code>); other values to set may include <code>zookeeper.ssl.cipher.suites</code>, <code>zookeeper.ssl.crl.enable</code>, <code>zookeeper.ssl.enabled.protocols</code>, <code>zookeeper.ssl.endpoint.identification.algorithm</code>, <code>zookeeper.ssl.keystore.location</code>, <code>zookeeper.ssl.keystore.password</code>, <code>zookeeper.ssl.keystore.type</code>, <code>zookeeper.ssl.ocsp.enable</code>, <code>zookeeper.ssl.protocol</code>, <code>zookeeper.ssl.truststore.location</code>, <code>zookeeper.ssl.truststore.password</code>, <code>zookeeper.ssl.truststore.type</code></p>
 <table class="data-table"><tbody>
@@ -2297,8 +2297,8 @@
 </li>
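 <p>A hedged sketch of turning on TLS to ZooKeeper, combining this flag with the companion settings it requires; the path and password are placeholders:</p>
 <pre><code>zookeeper.ssl.client.enable=true
 # Required when TLS to ZooKeeper is enabled.
 zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
 zookeeper.ssl.truststore.location=/var/private/ssl/zk.truststore.jks
 zookeeper.ssl.truststore.password=change-me</code></pre>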
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.ssl.keystore.location" href="#zookeeper.ssl.keystore.location"></a>
-      zookeeper.ssl.keystore.location
+   <a class="anchor-link" id="zookeeper.ssl.keystore.location"></a>
+   <a href="#zookeeper.ssl.keystore.location">zookeeper.ssl.keystore.location</a>
 </h4>
 <p>Keystore location when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the <code>zookeeper.ssl.keyStore.location</code> system property (note the camelCase).</p>
 <table class="data-table"><tbody>
@@ -2311,8 +2311,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.ssl.keystore.password" href="#zookeeper.ssl.keystore.password"></a>
-      zookeeper.ssl.keystore.password
+   <a class="anchor-link" id="zookeeper.ssl.keystore.password"></a>
+   <a href="#zookeeper.ssl.keystore.password">zookeeper.ssl.keystore.password</a>
 </h4>
 <p>Keystore password when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the <code>zookeeper.ssl.keyStore.password</code> system property (note the camelCase). Note that ZooKeeper does not support a key password different from the keystore password, so be sure to set the key password in the keystore to be identical to the keystore password; otherwise the connection attempt to ZooKeeper will fail.</p>
 <table class="data-table"><tbody>
@@ -2325,8 +2325,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.ssl.keystore.type" href="#zookeeper.ssl.keystore.type"></a>
-      zookeeper.ssl.keystore.type
+   <a class="anchor-link" id="zookeeper.ssl.keystore.type"></a>
+   <a href="#zookeeper.ssl.keystore.type">zookeeper.ssl.keystore.type</a>
 </h4>
 <p>Keystore type when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the <code>zookeeper.ssl.keyStore.type</code> system property (note the camelCase). The default value of <code>null</code> means the type will be auto-detected based on the filename extension of the keystore.</p>
 <table class="data-table"><tbody>
@@ -2339,8 +2339,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.ssl.truststore.location" href="#zookeeper.ssl.truststore.location"></a>
-      zookeeper.ssl.truststore.location
+   <a class="anchor-link" id="zookeeper.ssl.truststore.location"></a>
+   <a href="#zookeeper.ssl.truststore.location">zookeeper.ssl.truststore.location</a>
 </h4>
 <p>Truststore location when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the <code>zookeeper.ssl.trustStore.location</code> system property (note the camelCase).</p>
 <table class="data-table"><tbody>
@@ -2353,8 +2353,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.ssl.truststore.password" href="#zookeeper.ssl.truststore.password"></a>
-      zookeeper.ssl.truststore.password
+   <a class="anchor-link" id="zookeeper.ssl.truststore.password"></a>
+   <a href="#zookeeper.ssl.truststore.password">zookeeper.ssl.truststore.password</a>
 </h4>
 <p>Truststore password when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the <code>zookeeper.ssl.trustStore.password</code> system property (note the camelCase).</p>
 <table class="data-table"><tbody>
@@ -2367,8 +2367,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.ssl.truststore.type" href="#zookeeper.ssl.truststore.type"></a>
-      zookeeper.ssl.truststore.type
+   <a class="anchor-link" id="zookeeper.ssl.truststore.type"></a>
+   <a href="#zookeeper.ssl.truststore.type">zookeeper.ssl.truststore.type</a>
 </h4>
 <p>Truststore type when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the <code>zookeeper.ssl.trustStore.type</code> system property (note the camelCase). The default value of <code>null</code> means the type will be auto-detected based on the filename extension of the truststore.</p>
 <table class="data-table"><tbody>
@@ -2381,8 +2381,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="alter.config.policy.class.name" href="#alter.config.policy.class.name"></a>
-      alter.config.policy.class.name
+   <a class="anchor-link" id="alter.config.policy.class.name"></a>
+   <a href="#alter.config.policy.class.name">alter.config.policy.class.name</a>
 </h4>
 <p>The alter configs policy class that should be used for validation. The class should implement the <code>org.apache.kafka.server.policy.AlterConfigPolicy</code> interface.</p>
 <table class="data-table"><tbody>
@@ -2395,8 +2395,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="alter.log.dirs.replication.quota.window.num" href="#alter.log.dirs.replication.quota.window.num"></a>
-      alter.log.dirs.replication.quota.window.num
+   <a class="anchor-link" id="alter.log.dirs.replication.quota.window.num"></a>
+   <a href="#alter.log.dirs.replication.quota.window.num">alter.log.dirs.replication.quota.window.num</a>
 </h4>
 <p>The number of samples to retain in memory for alter log dirs replication quotas</p>
 <table class="data-table"><tbody>
@@ -2409,8 +2409,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="alter.log.dirs.replication.quota.window.size.seconds" href="#alter.log.dirs.replication.quota.window.size.seconds"></a>
-      alter.log.dirs.replication.quota.window.size.seconds
+   <a class="anchor-link" id="alter.log.dirs.replication.quota.window.size.seconds"></a>
+   <a href="#alter.log.dirs.replication.quota.window.size.seconds">alter.log.dirs.replication.quota.window.size.seconds</a>
 </h4>
 <p>The time span of each sample for alter log dirs replication quotas</p>
 <table class="data-table"><tbody>
@@ -2423,8 +2423,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="authorizer.class.name" href="#authorizer.class.name"></a>
-      authorizer.class.name
+   <a class="anchor-link" id="authorizer.class.name"></a>
+   <a href="#authorizer.class.name">authorizer.class.name</a>
 </h4>
 <p>The fully qualified name of a class that implements the org.apache.kafka.server.authorizer.Authorizer interface, which is used by the broker for authorization. This config also supports authorizers that implement the deprecated kafka.security.auth.Authorizer trait which was previously used for authorization.</p>
 <table class="data-table"><tbody>
@@ -2437,8 +2437,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="client.quota.callback.class" href="#client.quota.callback.class"></a>
-      client.quota.callback.class
+   <a class="anchor-link" id="client.quota.callback.class"></a>
+   <a href="#client.quota.callback.class">client.quota.callback.class</a>
 </h4>
 <p>The fully qualified name of a class that implements the ClientQuotaCallback interface, which is used to determine quota limits applied to client requests. By default, &lt;user, client-id&gt;, &lt;user&gt; or &lt;client-id&gt; quotas stored in ZooKeeper are applied. For any given request, the most specific quota that matches the user principal of the session and the client-id of the request is applied.</p>
 <table class="data-table"><tbody>
@@ -2451,8 +2451,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="connection.failed.authentication.delay.ms" href="#connection.failed.authentication.delay.ms"></a>
-      connection.failed.authentication.delay.ms
+   <a class="anchor-link" id="connection.failed.authentication.delay.ms"></a>
+   <a href="#connection.failed.authentication.delay.ms">connection.failed.authentication.delay.ms</a>
 </h4>
 <p>Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on authentication failure. This must be configured to be less than connections.max.idle.ms to prevent connection timeout.</p>
 <table class="data-table"><tbody>
@@ -2465,8 +2465,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="create.topic.policy.class.name" href="#create.topic.policy.class.name"></a>
-      create.topic.policy.class.name
+   <a class="anchor-link" id="create.topic.policy.class.name"></a>
+   <a href="#create.topic.policy.class.name">create.topic.policy.class.name</a>
 </h4>
 <p>The create topic policy class that should be used for validation. The class should implement the <code>org.apache.kafka.server.policy.CreateTopicPolicy</code> interface.</p>
 <table class="data-table"><tbody>
@@ -2479,8 +2479,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="delegation.token.expiry.check.interval.ms" href="#delegation.token.expiry.check.interval.ms"></a>
-      delegation.token.expiry.check.interval.ms
+   <a class="anchor-link" id="delegation.token.expiry.check.interval.ms"></a>
+   <a href="#delegation.token.expiry.check.interval.ms">delegation.token.expiry.check.interval.ms</a>
 </h4>
 <p>Scan interval to remove expired delegation tokens.</p>
 <table class="data-table"><tbody>
@@ -2493,8 +2493,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="kafka.metrics.polling.interval.secs" href="#kafka.metrics.polling.interval.secs"></a>
-      kafka.metrics.polling.interval.secs
+   <a class="anchor-link" id="kafka.metrics.polling.interval.secs"></a>
+   <a href="#kafka.metrics.polling.interval.secs">kafka.metrics.polling.interval.secs</a>
 </h4>
 <p>The metrics polling interval (in seconds) which can be used in kafka.metrics.reporters implementations.</p>
 <table class="data-table"><tbody>
@@ -2507,8 +2507,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="kafka.metrics.reporters" href="#kafka.metrics.reporters"></a>
-      kafka.metrics.reporters
+   <a class="anchor-link" id="kafka.metrics.reporters"></a>
+   <a href="#kafka.metrics.reporters">kafka.metrics.reporters</a>
 </h4>
 <p>A list of classes to use as Yammer metrics custom reporters. The reporters should implement <code>kafka.metrics.KafkaMetricsReporter</code> trait. If a client wants to expose JMX operations on a custom reporter, the custom reporter needs to additionally implement an MBean trait that extends <code>kafka.metrics.KafkaMetricsReporterMBean</code> trait so that the registered MBean is compliant with the standard MBean convention.</p>
 <table class="data-table"><tbody>
@@ -2521,8 +2521,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="listener.security.protocol.map" href="#listener.security.protocol.map"></a>
-      listener.security.protocol.map
+   <a class="anchor-link" id="listener.security.protocol.map"></a>
+   <a href="#listener.security.protocol.map">listener.security.protocol.map</a>
 </h4>
 <p>Map between listener names and security protocols. This must be defined for the same security protocol to be usable on more than one port or IP. For example, internal and external traffic can be separated even if SSL is required for both. Concretely, the user could define listeners with names INTERNAL and EXTERNAL and this property as: `INTERNAL:SSL,EXTERNAL:SSL`. As shown, key and value are separated by a colon and map entries are separated by commas. Each listener name should only appear once in the map. Different security (SSL and SASL) settings can be configured for each listener by adding a normalised prefix (the listener name is lowercased) to the config name. For example, to set a different keystore for the INTERNAL listener, a config with name <code>listener.name.internal.ssl.keystore.location</code> would be set. If the config for the listener name is not set, the config will fall back to the generic config (i.e. <code>ssl.keystore.location</code>).</p>
 <table class="data-table"><tbody>
@@ -2535,8 +2535,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="log.message.downconversion.enable" href="#log.message.downconversion.enable"></a>
-      log.message.downconversion.enable
+   <a class="anchor-link" id="log.message.downconversion.enable"></a>
+   <a href="#log.message.downconversion.enable">log.message.downconversion.enable</a>
 </h4>
 <p>This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to <code>false</code>, the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with an <code>UNSUPPORTED_VERSION</code> error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers.</p>
 <table class="data-table"><tbody>
@@ -2549,8 +2549,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metric.reporters" href="#metric.reporters"></a>
-      metric.reporters
+   <a class="anchor-link" id="metric.reporters"></a>
+   <a href="#metric.reporters">metric.reporters</a>
 </h4>
 <p>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</p>
 <table class="data-table"><tbody>
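As a hedged sketch of the <code>MetricsReporter</code> plug-in point described above (the class name and logging behaviour are illustrative, not prescribed by this config):

```java
import java.util.List;
import java.util.Map;
import org.apache.kafka.common.metrics.KafkaMetric;
import org.apache.kafka.common.metrics.MetricsReporter;

// Illustrative reporter: prints metric lifecycle events to stdout.
public class LoggingMetricsReporter implements MetricsReporter {

    @Override
    public void configure(Map<String, ?> configs) {}

    @Override
    public void init(List<KafkaMetric> metrics) {
        // Called once with all metrics that exist when the reporter is registered.
        metrics.forEach(m -> System.out.println("existing: " + m.metricName()));
    }

    @Override
    public void metricChange(KafkaMetric metric) {
        System.out.println("added/updated: " + metric.metricName());
    }

    @Override
    public void metricRemoval(KafkaMetric metric) {
        System.out.println("removed: " + metric.metricName());
    }

    @Override
    public void close() {}
}
```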
@@ -2563,8 +2563,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metrics.num.samples" href="#metrics.num.samples"></a>
-      metrics.num.samples
+   <a class="anchor-link" id="metrics.num.samples"></a>
+   <a href="#metrics.num.samples">metrics.num.samples</a>
 </h4>
 <p>The number of samples maintained to compute metrics.</p>
 <table class="data-table"><tbody>
@@ -2577,8 +2577,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metrics.recording.level" href="#metrics.recording.level"></a>
-      metrics.recording.level
+   <a class="anchor-link" id="metrics.recording.level"></a>
+   <a href="#metrics.recording.level">metrics.recording.level</a>
 </h4>
 <p>The highest recording level for metrics.</p>
 <table class="data-table"><tbody>
@@ -2591,8 +2591,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metrics.sample.window.ms" href="#metrics.sample.window.ms"></a>
-      metrics.sample.window.ms
+   <a class="anchor-link" id="metrics.sample.window.ms"></a>
+   <a href="#metrics.sample.window.ms">metrics.sample.window.ms</a>
 </h4>
 <p>The window of time a metrics sample is computed over.</p>
 <table class="data-table"><tbody>
@@ -2605,8 +2605,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="password.encoder.cipher.algorithm" href="#password.encoder.cipher.algorithm"></a>
-      password.encoder.cipher.algorithm
+   <a class="anchor-link" id="password.encoder.cipher.algorithm"></a>
+   <a href="#password.encoder.cipher.algorithm">password.encoder.cipher.algorithm</a>
 </h4>
 <p>The Cipher algorithm used for encoding dynamically configured passwords.</p>
 <table class="data-table"><tbody>
@@ -2619,8 +2619,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="password.encoder.iterations" href="#password.encoder.iterations"></a>
-      password.encoder.iterations
+   <a class="anchor-link" id="password.encoder.iterations"></a>
+   <a href="#password.encoder.iterations">password.encoder.iterations</a>
 </h4>
 <p>The iteration count used for encoding dynamically configured passwords.</p>
 <table class="data-table"><tbody>
@@ -2633,8 +2633,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="password.encoder.key.length" href="#password.encoder.key.length"></a>
-      password.encoder.key.length
+   <a class="anchor-link" id="password.encoder.key.length"></a>
+   <a href="#password.encoder.key.length">password.encoder.key.length</a>
 </h4>
 <p>The key length used for encoding dynamically configured passwords.</p>
 <table class="data-table"><tbody>
@@ -2647,8 +2647,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="password.encoder.keyfactory.algorithm" href="#password.encoder.keyfactory.algorithm"></a>
-      password.encoder.keyfactory.algorithm
+   <a class="anchor-link" id="password.encoder.keyfactory.algorithm"></a>
+   <a href="#password.encoder.keyfactory.algorithm">password.encoder.keyfactory.algorithm</a>
 </h4>
 <p>The SecretKeyFactory algorithm used for encoding dynamically configured passwords. Default is PBKDF2WithHmacSHA512 if available and PBKDF2WithHmacSHA1 otherwise.</p>
 <table class="data-table"><tbody>
@@ -2661,8 +2661,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="quota.window.num" href="#quota.window.num"></a>
-      quota.window.num
+   <a class="anchor-link" id="quota.window.num"></a>
+   <a href="#quota.window.num">quota.window.num</a>
 </h4>
 <p>The number of samples to retain in memory for client quotas</p>
 <table class="data-table"><tbody>
@@ -2675,8 +2675,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="quota.window.size.seconds" href="#quota.window.size.seconds"></a>
-      quota.window.size.seconds
+   <a class="anchor-link" id="quota.window.size.seconds"></a>
+   <a href="#quota.window.size.seconds">quota.window.size.seconds</a>
 </h4>
 <p>The time span of each sample for client quotas</p>
 <table class="data-table"><tbody>
@@ -2689,8 +2689,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="replication.quota.window.num" href="#replication.quota.window.num"></a>
-      replication.quota.window.num
+   <a class="anchor-link" id="replication.quota.window.num"></a>
+   <a href="#replication.quota.window.num">replication.quota.window.num</a>
 </h4>
 <p>The number of samples to retain in memory for replication quotas</p>
 <table class="data-table"><tbody>
@@ -2703,8 +2703,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="replication.quota.window.size.seconds" href="#replication.quota.window.size.seconds"></a>
-      replication.quota.window.size.seconds
+   <a class="anchor-link" id="replication.quota.window.size.seconds"></a>
+   <a href="#replication.quota.window.size.seconds">replication.quota.window.size.seconds</a>
 </h4>
 <p>The time span of each sample for replication quotas</p>
 <table class="data-table"><tbody>
@@ -2717,8 +2717,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="security.providers" href="#security.providers"></a>
-      security.providers
+   <a class="anchor-link" id="security.providers"></a>
+   <a href="#security.providers">security.providers</a>
 </h4>
 <p>A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the <code>org.apache.kafka.common.security.auth.SecurityProviderCreator</code> interface.</p>
 <table class="data-table"><tbody>
@@ -2731,8 +2731,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.endpoint.identification.algorithm" href="#ssl.endpoint.identification.algorithm"></a>
-      ssl.endpoint.identification.algorithm
+   <a class="anchor-link" id="ssl.endpoint.identification.algorithm"></a>
+   <a href="#ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a>
 </h4>
 <p>The endpoint identification algorithm to validate the server hostname using the server certificate.</p>
 <table class="data-table"><tbody>
@@ -2745,8 +2745,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.principal.mapping.rules" href="#ssl.principal.mapping.rules"></a>
-      ssl.principal.mapping.rules
+   <a class="anchor-link" id="ssl.principal.mapping.rules"></a>
+   <a href="#ssl.principal.mapping.rules">ssl.principal.mapping.rules</a>
 </h4>
 <p>A list of rules for mapping from the distinguished name of the client certificate to a short name. The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, the distinguished name of the X.500 certificate will be the principal. For more details on the format please see <a href="#security_authz"> security authorization and acls</a>. Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the <code>principal.builder.class</code> configuration.</p>
 <table class="data-table"><tbody>
@@ -2759,8 +2759,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.secure.random.implementation" href="#ssl.secure.random.implementation"></a>
-      ssl.secure.random.implementation
+   <a class="anchor-link" id="ssl.secure.random.implementation"></a>
+   <a href="#ssl.secure.random.implementation">ssl.secure.random.implementation</a>
 </h4>
 <p>The SecureRandom PRNG implementation to use for SSL cryptography operations. </p>
 <table class="data-table"><tbody>
@@ -2773,8 +2773,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="transaction.abort.timed.out.transaction.cleanup.interval.ms" href="#transaction.abort.timed.out.transaction.cleanup.interval.ms"></a>
-      transaction.abort.timed.out.transaction.cleanup.interval.ms
+   <a class="anchor-link" id="transaction.abort.timed.out.transaction.cleanup.interval.ms"></a>
+   <a href="#transaction.abort.timed.out.transaction.cleanup.interval.ms">transaction.abort.timed.out.transaction.cleanup.interval.ms</a>
 </h4>
 <p>The interval at which to rollback transactions that have timed out</p>
 <table class="data-table"><tbody>
@@ -2787,8 +2787,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="transaction.remove.expired.transaction.cleanup.interval.ms" href="#transaction.remove.expired.transaction.cleanup.interval.ms"></a>
-      transaction.remove.expired.transaction.cleanup.interval.ms
+   <a class="anchor-link" id="transaction.remove.expired.transaction.cleanup.interval.ms"></a>
+   <a href="#transaction.remove.expired.transaction.cleanup.interval.ms">transaction.remove.expired.transaction.cleanup.interval.ms</a>
 </h4>
 <p>The interval at which to remove transactions that have expired due to <code>transactional.id.expiration.ms</code> passing</p>
 <table class="data-table"><tbody>
@@ -2801,8 +2801,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.ssl.cipher.suites" href="#zookeeper.ssl.cipher.suites"></a>
-      zookeeper.ssl.cipher.suites
+   <a class="anchor-link" id="zookeeper.ssl.cipher.suites"></a>
+   <a href="#zookeeper.ssl.cipher.suites">zookeeper.ssl.cipher.suites</a>
 </h4>
 <p>Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the <code>zookeeper.ssl.ciphersuites</code> system property (note the single word "ciphersuites"). The default value of <code>null</code> means the list of enabled cipher suites is determined by the Java runtime being used.</p>
 <table class="data-table"><tbody>
@@ -2815,8 +2815,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.ssl.crl.enable" href="#zookeeper.ssl.crl.enable"></a>
-      zookeeper.ssl.crl.enable
+   <a class="anchor-link" id="zookeeper.ssl.crl.enable"></a>
+   <a href="#zookeeper.ssl.crl.enable">zookeeper.ssl.crl.enable</a>
 </h4>
 <p>Specifies whether to enable Certificate Revocation List in the ZooKeeper TLS protocols. Overrides any explicit value set via the <code>zookeeper.ssl.crl</code> system property (note the shorter name).</p>
 <table class="data-table"><tbody>
@@ -2829,8 +2829,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.ssl.enabled.protocols" href="#zookeeper.ssl.enabled.protocols"></a>
-      zookeeper.ssl.enabled.protocols
+   <a class="anchor-link" id="zookeeper.ssl.enabled.protocols"></a>
+   <a href="#zookeeper.ssl.enabled.protocols">zookeeper.ssl.enabled.protocols</a>
 </h4>
 <p>Specifies the enabled protocol(s) in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the <code>zookeeper.ssl.enabledProtocols</code> system property (note the camelCase). The default value of <code>null</code> means the enabled protocol will be the value of the <code>zookeeper.ssl.protocol</code> configuration property.</p>
 <table class="data-table"><tbody>
@@ -2843,8 +2843,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.ssl.endpoint.identification.algorithm" href="#zookeeper.ssl.endpoint.identification.algorithm"></a>
-      zookeeper.ssl.endpoint.identification.algorithm
+   <a class="anchor-link" id="zookeeper.ssl.endpoint.identification.algorithm"></a>
+   <a href="#zookeeper.ssl.endpoint.identification.algorithm">zookeeper.ssl.endpoint.identification.algorithm</a>
 </h4>
 <p>Specifies whether to enable hostname verification in the ZooKeeper TLS negotiation process, with (case-insensitively) "https" meaning ZooKeeper hostname verification is enabled and an explicit blank value meaning it is disabled (disabling it is only recommended for testing purposes). An explicit value overrides any "true" or "false" value set via the <code>zookeeper.ssl.hostnameVerification</code> system property (note the different name and values; true implies https and false implies blank).</p>
 <table class="data-table"><tbody>
@@ -2857,8 +2857,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.ssl.ocsp.enable" href="#zookeeper.ssl.ocsp.enable"></a>
-      zookeeper.ssl.ocsp.enable
+   <a class="anchor-link" id="zookeeper.ssl.ocsp.enable"></a>
+   <a href="#zookeeper.ssl.ocsp.enable">zookeeper.ssl.ocsp.enable</a>
 </h4>
 <p>Specifies whether to enable Online Certificate Status Protocol in the ZooKeeper TLS protocols. Overrides any explicit value set via the <code>zookeeper.ssl.ocsp</code> system property (note the shorter name).</p>
 <table class="data-table"><tbody>
@@ -2871,8 +2871,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.ssl.protocol" href="#zookeeper.ssl.protocol"></a>
-      zookeeper.ssl.protocol
+   <a class="anchor-link" id="zookeeper.ssl.protocol"></a>
+   <a href="#zookeeper.ssl.protocol">zookeeper.ssl.protocol</a>
 </h4>
 <p>Specifies the protocol to be used in ZooKeeper TLS negotiation. An explicit value overrides any value set via the same-named <code>zookeeper.ssl.protocol</code> system property.</p>
 <table class="data-table"><tbody>
@@ -2885,8 +2885,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="zookeeper.sync.time.ms" href="#zookeeper.sync.time.ms"></a>
-      zookeeper.sync.time.ms
+   <a class="anchor-link" id="zookeeper.sync.time.ms"></a>
+   <a href="#zookeeper.sync.time.ms">zookeeper.sync.time.ms</a>
 </h4>
 <p>How far a ZK follower can be behind a ZK leader</p>
 <table class="data-table"><tbody>
diff --git a/25/generated/producer_config.html b/25/generated/producer_config.html
index 0fa891b..5d97b76 100644
--- a/25/generated/producer_config.html
+++ b/25/generated/producer_config.html
@@ -1,8 +1,8 @@
 <ul class="config-list">
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="key.serializer" href="#key.serializer"></a>
-      key.serializer
+   <a class="anchor-link" id="key.serializer"></a>
+   <a href="#key.serializer">key.serializer</a>
 </h4>
 <p>Serializer class for keys that implements the <code>org.apache.kafka.common.serialization.Serializer</code> interface.</p>
 <table class="data-table"><tbody>
@@ -14,8 +14,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="value.serializer" href="#value.serializer"></a>
-      value.serializer
+   <a class="anchor-link" id="value.serializer"></a>
+   <a href="#value.serializer">value.serializer</a>
 </h4>
 <p>Serializer class for values that implements the <code>org.apache.kafka.common.serialization.Serializer</code> interface.</p>
 <table class="data-table"><tbody>
@@ -40,8 +40,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="bootstrap.servers" href="#bootstrap.servers"></a>
-      bootstrap.servers
+   <a class="anchor-link" id="bootstrap.servers"></a>
+   <a href="#bootstrap.servers">bootstrap.servers</a>
 </h4>
 <p>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).</p>
 <table class="data-table"><tbody>
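A minimal producer sketch wiring up <code>bootstrap.servers</code> together with the two mandatory serializers documented above; the host names and topic are placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class MinimalProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Two entries suffice for bootstrapping; the full cluster is discovered afterwards.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "host1:9092,host2:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}
```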
@@ -53,8 +53,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="buffer.memory" href="#buffer.memory"></a>
-      buffer.memory
+   <a class="anchor-link" id="buffer.memory"></a>
+   <a href="#buffer.memory">buffer.memory</a>
 </h4>
 <p>The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will block for <code>max.block.ms</code> after which it will throw an exception.</p><p>This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests.</p>
 <table class="data-table"><tbody>
@@ -66,8 +66,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="compression.type" href="#compression.type"></a>
-      compression.type
+   <a class="anchor-link" id="compression.type"></a>
+   <a href="#compression.type">compression.type</a>
 </h4>
 <p>The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid  values are <code>none</code>, <code>gzip</code>, <code>snappy</code>, <code>lz4</code>, or <code>zstd</code>. Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression).</p>
 <table class="data-table"><tbody>
@@ -92,8 +92,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.key.password" href="#ssl.key.password"></a>
-      ssl.key.password
+   <a class="anchor-link" id="ssl.key.password"></a>
+   <a href="#ssl.key.password">ssl.key.password</a>
 </h4>
 <p>The password of the private key in the key store file. This is optional for clients.</p>
 <table class="data-table"><tbody>
@@ -105,8 +105,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.keystore.location" href="#ssl.keystore.location"></a>
-      ssl.keystore.location
+   <a class="anchor-link" id="ssl.keystore.location"></a>
+   <a href="#ssl.keystore.location">ssl.keystore.location</a>
 </h4>
 <p>The location of the key store file. This is optional for clients and can be used for two-way client authentication.</p>
 <table class="data-table"><tbody>
@@ -118,8 +118,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.keystore.password" href="#ssl.keystore.password"></a>
-      ssl.keystore.password
+   <a class="anchor-link" id="ssl.keystore.password"></a>
+   <a href="#ssl.keystore.password">ssl.keystore.password</a>
 </h4>
 <p>The store password for the key store file. This is optional for clients and only needed if ssl.keystore.location is configured.</p>
 <table class="data-table"><tbody>
@@ -131,8 +131,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.truststore.location" href="#ssl.truststore.location"></a>
-      ssl.truststore.location
+   <a class="anchor-link" id="ssl.truststore.location"></a>
+   <a href="#ssl.truststore.location">ssl.truststore.location</a>
 </h4>
 <p>The location of the trust store file. </p>
 <table class="data-table"><tbody>
@@ -144,8 +144,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.truststore.password" href="#ssl.truststore.password"></a>
-      ssl.truststore.password
+   <a class="anchor-link" id="ssl.truststore.password"></a>
+   <a href="#ssl.truststore.password">ssl.truststore.password</a>
 </h4>
 <p>The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.</p>
 <table class="data-table"><tbody>
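Taken together, the SSL settings above translate into client properties like the following sketch (paths and passwords are placeholders; the keystore lines are only needed when the broker requires two-way authentication):

```java
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SslConfigs;

public class SslClientProps {
    public static Properties sslProps() {
        Properties props = new Properties();
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/etc/kafka/client.truststore.jks");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "truststore-secret");
        // Only needed for two-way (mutual) TLS:
        props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "/etc/kafka/client.keystore.jks");
        props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "keystore-secret");
        props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "key-secret");
        return props;
    }
}
```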
@@ -157,8 +157,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="batch.size" href="#batch.size"></a>
-      batch.size
+   <a class="anchor-link" id="batch.size"></a>
+   <a href="#batch.size">batch.size</a>
 </h4>
 <p>The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes.</p><p>No attempt will be made to batch records larger than this size.</p><p>Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent.</p><p>A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records.</p>
 <table class="data-table"><tbody>
@@ -170,8 +170,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="client.dns.lookup" href="#client.dns.lookup"></a>
-      client.dns.lookup
+   <a class="anchor-link" id="client.dns.lookup"></a>
+   <a href="#client.dns.lookup">client.dns.lookup</a>
 </h4>
 <p>Controls how the client uses DNS lookups. If set to <code>use_all_dns_ips</code> then, when the lookup returns multiple IP addresses for a hostname, the client will attempt to connect to each of them in turn before failing the connection. Applies to both bootstrap and advertised servers. If the value is <code>resolve_canonical_bootstrap_servers_only</code>, each entry will be resolved and expanded into a list of canonical names.</p>
 <table class="data-table"><tbody>
@@ -183,8 +183,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="client.id" href="#client.id"></a>
-      client.id
+   <a class="anchor-link" id="client.id"></a>
+   <a href="#client.id">client.id</a>
 </h4>
 <p>An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.</p>
 <table class="data-table"><tbody>
@@ -196,8 +196,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="connections.max.idle.ms" href="#connections.max.idle.ms"></a>
-      connections.max.idle.ms
+   <a class="anchor-link" id="connections.max.idle.ms"></a>
+   <a href="#connections.max.idle.ms">connections.max.idle.ms</a>
 </h4>
 <p>Close idle connections after the number of milliseconds specified by this config.</p>
 <table class="data-table"><tbody>
@@ -209,8 +209,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="delivery.timeout.ms" href="#delivery.timeout.ms"></a>
-      delivery.timeout.ms
+   <a class="anchor-link" id="delivery.timeout.ms"></a>
+   <a href="#delivery.timeout.ms">delivery.timeout.ms</a>
 </h4>
 <p>An upper bound on the time to report success or failure after a call to <code>send()</code> returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures. The producer may report failure to send a record earlier than this config if either an unrecoverable error is encountered, the retries have been exhausted, or the record is added to a batch which reached an earlier delivery expiration deadline. The value of this config should be greater than or equal to the sum of <code>request.timeout.ms</code> and <code>linger.ms</code>.</p>
 <table class="data-table"><tbody>
@@ -222,8 +222,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="linger.ms" href="#linger.ms"></a>
-      linger.ms
+   <a class="anchor-link" id="linger.ms"></a>
+   <a href="#linger.ms">linger.ms</a>
 </h4>
 <p>The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay&mdash;that is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get <code>batch.size</code> worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting <code>linger.ms=5</code>, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load.</p>
 <table class="data-table"><tbody>
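A hedged tuning sketch combining <code>linger.ms</code> with the related batching settings described above (the values are illustrative, not recommendations):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class BatchingProps {
    public static Properties batchingProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "65536");     // 64 KiB batches
        props.put(ProducerConfig.LINGER_MS_CONFIG, "5");          // wait up to 5 ms to fill a batch
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // fuller batches compress better
        // Keep delivery.timeout.ms >= request.timeout.ms + linger.ms, as required above.
        props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, "120000");
        return props;
    }
}
```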
@@ -235,8 +235,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="max.block.ms" href="#max.block.ms"></a>
-      max.block.ms
+   <a class="anchor-link" id="max.block.ms"></a>
+   <a href="#max.block.ms">max.block.ms</a>
 </h4>
 <p>The configuration controls how long <code>KafkaProducer.send()</code> and <code>KafkaProducer.partitionsFor()</code> will block. These methods can be blocked either because the buffer is full or metadata is unavailable. Blocking in the user-supplied serializers or partitioner will not be counted against this timeout.</p>
 <table class="data-table"><tbody>
@@ -248,8 +248,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="max.request.size" href="#max.request.size"></a>
-      max.request.size
+   <a class="anchor-link" id="max.request.size"></a>
+   <a href="#max.request.size">max.request.size</a>
 </h4>
 <p>The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. This is also effectively a cap on the maximum uncompressed record batch size. Note that the server has its own cap on the record batch size (after compression if compression is enabled) which may be different from this.</p>
 <table class="data-table"><tbody>
@@ -261,8 +261,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="partitioner.class" href="#partitioner.class"></a>
-      partitioner.class
+   <a class="anchor-link" id="partitioner.class"></a>
+   <a href="#partitioner.class">partitioner.class</a>
 </h4>
 <p>Partitioner class that implements the <code>org.apache.kafka.clients.producer.Partitioner</code> interface.</p>
 <table class="data-table"><tbody>
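A hedged sketch of a custom <code>Partitioner</code> (the class name and routing rule are hypothetical) that would be plugged in via <code>partitioner.class</code>:

```java
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

// Hypothetical partitioner: null keys go to partition 0, everything else is
// hashed over the topic's partitions.
public class KeyHashPartitioner implements Partitioner {

    @Override
    public void configure(Map<String, ?> configs) {}

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null)
            return 0;
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    @Override
    public void close() {}
}
```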
@@ -274,8 +274,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="receive.buffer.bytes" href="#receive.buffer.bytes"></a>
-      receive.buffer.bytes
+   <a class="anchor-link" id="receive.buffer.bytes"></a>
+   <a href="#receive.buffer.bytes">receive.buffer.bytes</a>
 </h4>
 <p>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.</p>
 <table class="data-table"><tbody>
@@ -287,8 +287,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="request.timeout.ms" href="#request.timeout.ms"></a>
-      request.timeout.ms
+   <a class="anchor-link" id="request.timeout.ms"></a>
+   <a href="#request.timeout.ms">request.timeout.ms</a>
 </h4>
 <p>The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. This should be larger than <code>replica.lag.time.max.ms</code> (a broker configuration) to reduce the possibility of message duplication due to unnecessary producer retries.</p>
 <table class="data-table"><tbody>
@@ -300,8 +300,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.client.callback.handler.class" href="#sasl.client.callback.handler.class"></a>
-      sasl.client.callback.handler.class
+   <a class="anchor-link" id="sasl.client.callback.handler.class"></a>
+   <a href="#sasl.client.callback.handler.class">sasl.client.callback.handler.class</a>
 </h4>
 <p>The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.</p>
 <table class="data-table"><tbody>
@@ -313,8 +313,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.jaas.config" href="#sasl.jaas.config"></a>
-      sasl.jaas.config
+   <a class="anchor-link" id="sasl.jaas.config"></a>
+   <a href="#sasl.jaas.config">sasl.jaas.config</a>
 </h4>
 <p>JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html">here</a>. The format for the value is: '<code>loginModuleClass controlFlag (optionName=optionValue)*;</code>'. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;</p>
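For example, a client using SASL/PLAIN might set the property as in this sketch (the mechanism choice and credentials are placeholders):

```java
import java.util.Properties;
import org.apache.kafka.clients.CommonClientConfigs;
import org.apache.kafka.common.config.SaslConfigs;

public class SaslClientProps {
    public static Properties saslProps() {
        Properties props = new Properties();
        props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "PLAIN");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"alice\" password=\"alice-secret\";");
        return props;
    }
}
```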
@@ -327,8 +327,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.service.name" href="#sasl.kerberos.service.name"></a>
-      sasl.kerberos.service.name
+   <a class="anchor-link" id="sasl.kerberos.service.name"></a>
+   <a href="#sasl.kerberos.service.name">sasl.kerberos.service.name</a>
 </h4>
 <p>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</p>
 <table class="data-table"><tbody>
@@ -340,8 +340,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.callback.handler.class" href="#sasl.login.callback.handler.class"></a>
-      sasl.login.callback.handler.class
+   <a class="anchor-link" id="sasl.login.callback.handler.class"></a>
+   <a href="#sasl.login.callback.handler.class">sasl.login.callback.handler.class</a>
 </h4>
 <p>The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler</p>
 <table class="data-table"><tbody>
@@ -353,8 +353,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.class" href="#sasl.login.class"></a>
-      sasl.login.class
+   <a class="anchor-link" id="sasl.login.class"></a>
+   <a href="#sasl.login.class">sasl.login.class</a>
 </h4>
 <p>The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin</p>
 <table class="data-table"><tbody>
@@ -366,8 +366,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.mechanism" href="#sasl.mechanism"></a>
-      sasl.mechanism
+   <a class="anchor-link" id="sasl.mechanism"></a>
+   <a href="#sasl.mechanism">sasl.mechanism</a>
 </h4>
 <p>SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.</p>
 <table class="data-table"><tbody>
@@ -379,8 +379,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="security.protocol" href="#security.protocol"></a>
-      security.protocol
+   <a class="anchor-link" id="security.protocol"></a>
+   <a href="#security.protocol">security.protocol</a>
 </h4>
 <p>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</p>
 <table class="data-table"><tbody>
@@ -392,8 +392,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="send.buffer.bytes" href="#send.buffer.bytes"></a>
-      send.buffer.bytes
+   <a class="anchor-link" id="send.buffer.bytes"></a>
+   <a href="#send.buffer.bytes">send.buffer.bytes</a>
 </h4>
 <p>The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.</p>
 <table class="data-table"><tbody>
@@ -405,8 +405,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.enabled.protocols" href="#ssl.enabled.protocols"></a>
-      ssl.enabled.protocols
+   <a class="anchor-link" id="ssl.enabled.protocols"></a>
+   <a href="#ssl.enabled.protocols">ssl.enabled.protocols</a>
 </h4>
 <p>The list of protocols enabled for SSL connections.</p>
 <table class="data-table"><tbody>
@@ -418,8 +418,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.keystore.type" href="#ssl.keystore.type"></a>
-      ssl.keystore.type
+   <a class="anchor-link" id="ssl.keystore.type"></a>
+   <a href="#ssl.keystore.type">ssl.keystore.type</a>
 </h4>
 <p>The file format of the key store file. This is optional for clients.</p>
 <table class="data-table"><tbody>
@@ -431,8 +431,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.protocol" href="#ssl.protocol"></a>
-      ssl.protocol
+   <a class="anchor-link" id="ssl.protocol"></a>
+   <a href="#ssl.protocol">ssl.protocol</a>
 </h4>
 <p>The SSL protocol used to generate the SSLContext. Default setting is TLSv1.2, which is fine for most cases. Allowed values in recent JVMs are TLSv1.2 and TLSv1.3. TLS, TLSv1.1, SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.</p>
 <table class="data-table"><tbody>
@@ -444,8 +444,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.provider" href="#ssl.provider"></a>
-      ssl.provider
+   <a class="anchor-link" id="ssl.provider"></a>
+   <a href="#ssl.provider">ssl.provider</a>
 </h4>
 <p>The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</p>
 <table class="data-table"><tbody>
@@ -457,8 +457,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.truststore.type" href="#ssl.truststore.type"></a>
-      ssl.truststore.type
+   <a class="anchor-link" id="ssl.truststore.type"></a>
+   <a href="#ssl.truststore.type">ssl.truststore.type</a>
 </h4>
 <p>The file format of the trust store file.</p>
 <table class="data-table"><tbody>
@@ -470,8 +470,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="enable.idempotence" href="#enable.idempotence"></a>
-      enable.idempotence
+   <a class="anchor-link" id="enable.idempotence"></a>
+   <a href="#enable.idempotence">enable.idempotence</a>
 </h4>
 <p>When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream. Note that enabling idempotence requires <code>max.in.flight.requests.per.connection</code> to be less than or equal to 5, <code>retries</code> to be greater than 0, and <code>acks</code> to be 'all'. If these values are not explicitly set by the user, suitable values will be chosen. If incompatible values are set, a <code>ConfigException</code> will be thrown.</p>
 <table class="data-table"><tbody>
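A sketch that makes the constraints spelled out above explicit in client properties (when these are left unset, compatible values are chosen automatically):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class IdempotentProps {
    public static Properties idempotentProps() {
        Properties props = new Properties();
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");
        // Shown only to make the documented constraints explicit:
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5");
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE));
        return props;
    }
}
```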
@@ -483,8 +483,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="interceptor.classes" href="#interceptor.classes"></a>
-      interceptor.classes
+   <a class="anchor-link" id="interceptor.classes"></a>
+   <a href="#interceptor.classes">interceptor.classes</a>
 </h4>
 <p>A list of classes to use as interceptors. Implementing the <code>org.apache.kafka.clients.producer.ProducerInterceptor</code> interface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster. By default, there are no interceptors.</p>
 <table class="data-table"><tbody>
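A hedged sketch of a <code>ProducerInterceptor</code> (the class name and counting behaviour are illustrative; note that <code>onAcknowledgement</code> runs on the producer's I/O thread, so a real implementation should be thread-safe and fast):

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;
import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Hypothetical interceptor: counts sends and failed acknowledgements.
public class CountingInterceptor implements ProducerInterceptor<String, String> {

    private final AtomicLong sent = new AtomicLong();
    private final AtomicLong failed = new AtomicLong();

    @Override
    public void configure(Map<String, ?> configs) {}

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        sent.incrementAndGet();
        return record; // a mutated copy could be returned instead
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        if (exception != null)
            failed.incrementAndGet();
    }

    @Override
    public void close() {
        System.out.println("sent=" + sent.get() + " failed=" + failed.get());
    }
}
```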
@@ -496,8 +496,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="max.in.flight.requests.per.connection" href="#max.in.flight.requests.per.connection"></a>
-      max.in.flight.requests.per.connection
+   <a class="anchor-link" id="max.in.flight.requests.per.connection"></a>
+   <a href="#max.in.flight.requests.per.connection">max.in.flight.requests.per.connection</a>
 </h4>
 <p>The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled).</p>
 <table class="data-table"><tbody>
@@ -509,8 +509,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metadata.max.age.ms" href="#metadata.max.age.ms"></a>
-      metadata.max.age.ms
+   <a class="anchor-link" id="metadata.max.age.ms"></a>
+   <a href="#metadata.max.age.ms">metadata.max.age.ms</a>
 </h4>
 <p>The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.</p>
 <table class="data-table"><tbody>
@@ -522,8 +522,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metadata.max.idle.ms" href="#metadata.max.idle.ms"></a>
-      metadata.max.idle.ms
+   <a class="anchor-link" id="metadata.max.idle.ms"></a>
+   <a href="#metadata.max.idle.ms">metadata.max.idle.ms</a>
 </h4>
 <p>Controls how long the producer will cache metadata for a topic that's idle. If the elapsed time since a topic was last produced to exceeds the metadata idle duration, then the topic's metadata is forgotten and the next access to it will force a metadata fetch request.</p>
 <table class="data-table"><tbody>
@@ -535,8 +535,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metric.reporters" href="#metric.reporters"></a>
-      metric.reporters
+   <a class="anchor-link" id="metric.reporters"></a>
+   <a href="#metric.reporters">metric.reporters</a>
 </h4>
 <p>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</p>
 <table class="data-table"><tbody>
@@ -548,8 +548,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metrics.num.samples" href="#metrics.num.samples"></a>
-      metrics.num.samples
+   <a class="anchor-link" id="metrics.num.samples"></a>
+   <a href="#metrics.num.samples">metrics.num.samples</a>
 </h4>
 <p>The number of samples maintained to compute metrics.</p>
 <table class="data-table"><tbody>
@@ -561,8 +561,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metrics.recording.level" href="#metrics.recording.level"></a>
-      metrics.recording.level
+   <a class="anchor-link" id="metrics.recording.level"></a>
+   <a href="#metrics.recording.level">metrics.recording.level</a>
 </h4>
 <p>The highest recording level for metrics.</p>
 <table class="data-table"><tbody>
@@ -574,8 +574,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metrics.sample.window.ms" href="#metrics.sample.window.ms"></a>
-      metrics.sample.window.ms
+   <a class="anchor-link" id="metrics.sample.window.ms"></a>
+   <a href="#metrics.sample.window.ms">metrics.sample.window.ms</a>
 </h4>
 <p>The window of time a metrics sample is computed over.</p>
 <table class="data-table"><tbody>
@@ -587,8 +587,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="reconnect.backoff.max.ms" href="#reconnect.backoff.max.ms"></a>
-      reconnect.backoff.max.ms
+   <a class="anchor-link" id="reconnect.backoff.max.ms"></a>
+   <a href="#reconnect.backoff.max.ms">reconnect.backoff.max.ms</a>
 </h4>
 <p>The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.</p>
 <table class="data-table"><tbody>
@@ -600,8 +600,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="reconnect.backoff.ms" href="#reconnect.backoff.ms"></a>
-      reconnect.backoff.ms
+   <a class="anchor-link" id="reconnect.backoff.ms"></a>
+   <a href="#reconnect.backoff.ms">reconnect.backoff.ms</a>
 </h4>
 <p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</p>
 <table class="data-table"><tbody>
@@ -613,8 +613,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="retry.backoff.ms" href="#retry.backoff.ms"></a>
-      retry.backoff.ms
+   <a class="anchor-link" id="retry.backoff.ms"></a>
+   <a href="#retry.backoff.ms">retry.backoff.ms</a>
 </h4>
 <p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table class="data-table"><tbody>
@@ -626,8 +626,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.kinit.cmd" href="#sasl.kerberos.kinit.cmd"></a>
-      sasl.kerberos.kinit.cmd
+   <a class="anchor-link" id="sasl.kerberos.kinit.cmd"></a>
+   <a href="#sasl.kerberos.kinit.cmd">sasl.kerberos.kinit.cmd</a>
 </h4>
 <p>Kerberos kinit command path.</p>
 <table class="data-table"><tbody>
@@ -639,8 +639,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.min.time.before.relogin" href="#sasl.kerberos.min.time.before.relogin"></a>
-      sasl.kerberos.min.time.before.relogin
+   <a class="anchor-link" id="sasl.kerberos.min.time.before.relogin"></a>
+   <a href="#sasl.kerberos.min.time.before.relogin">sasl.kerberos.min.time.before.relogin</a>
 </h4>
 <p>Login thread sleep time between refresh attempts.</p>
 <table class="data-table"><tbody>
@@ -652,8 +652,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.ticket.renew.jitter" href="#sasl.kerberos.ticket.renew.jitter"></a>
-      sasl.kerberos.ticket.renew.jitter
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.jitter"></a>
+   <a href="#sasl.kerberos.ticket.renew.jitter">sasl.kerberos.ticket.renew.jitter</a>
 </h4>
 <p>Percentage of random jitter added to the renewal time.</p>
 <table class="data-table"><tbody>
@@ -665,8 +665,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.kerberos.ticket.renew.window.factor" href="#sasl.kerberos.ticket.renew.window.factor"></a>
-      sasl.kerberos.ticket.renew.window.factor
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.window.factor"></a>
+   <a href="#sasl.kerberos.ticket.renew.window.factor">sasl.kerberos.ticket.renew.window.factor</a>
 </h4>
 <p>Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.</p>
 <table class="data-table"><tbody>
@@ -678,8 +678,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.refresh.buffer.seconds" href="#sasl.login.refresh.buffer.seconds"></a>
-      sasl.login.refresh.buffer.seconds
+   <a class="anchor-link" id="sasl.login.refresh.buffer.seconds"></a>
+   <a href="#sasl.login.refresh.buffer.seconds">sasl.login.refresh.buffer.seconds</a>
 </h4>
 <p>The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -691,8 +691,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.refresh.min.period.seconds" href="#sasl.login.refresh.min.period.seconds"></a>
-      sasl.login.refresh.min.period.seconds
+   <a class="anchor-link" id="sasl.login.refresh.min.period.seconds"></a>
+   <a href="#sasl.login.refresh.min.period.seconds">sasl.login.refresh.min.period.seconds</a>
 </h4>
 <p>The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -704,8 +704,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.refresh.window.factor" href="#sasl.login.refresh.window.factor"></a>
-      sasl.login.refresh.window.factor
+   <a class="anchor-link" id="sasl.login.refresh.window.factor"></a>
+   <a href="#sasl.login.refresh.window.factor">sasl.login.refresh.window.factor</a>
 </h4>
 <p>Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -717,8 +717,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="sasl.login.refresh.window.jitter" href="#sasl.login.refresh.window.jitter"></a>
-      sasl.login.refresh.window.jitter
+   <a class="anchor-link" id="sasl.login.refresh.window.jitter"></a>
+   <a href="#sasl.login.refresh.window.jitter">sasl.login.refresh.window.jitter</a>
 </h4>
 <p>The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table class="data-table"><tbody>
@@ -730,8 +730,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="security.providers" href="#security.providers"></a>
-      security.providers
+   <a class="anchor-link" id="security.providers"></a>
+   <a href="#security.providers">security.providers</a>
 </h4>
 <p>A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the <code>org.apache.kafka.common.security.auth.SecurityProviderCreator</code> interface.</p>
 <table class="data-table"><tbody>
@@ -743,8 +743,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.cipher.suites" href="#ssl.cipher.suites"></a>
-      ssl.cipher.suites
+   <a class="anchor-link" id="ssl.cipher.suites"></a>
+   <a href="#ssl.cipher.suites">ssl.cipher.suites</a>
 </h4>
 <p>A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.</p>
 <table class="data-table"><tbody>
@@ -756,8 +756,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.endpoint.identification.algorithm" href="#ssl.endpoint.identification.algorithm"></a>
-      ssl.endpoint.identification.algorithm
+   <a class="anchor-link" id="ssl.endpoint.identification.algorithm"></a>
+   <a href="#ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a>
 </h4>
 <p>The endpoint identification algorithm to validate the server hostname using the server certificate.</p>
 <table class="data-table"><tbody>
@@ -769,8 +769,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.keymanager.algorithm" href="#ssl.keymanager.algorithm"></a>
-      ssl.keymanager.algorithm
+   <a class="anchor-link" id="ssl.keymanager.algorithm"></a>
+   <a href="#ssl.keymanager.algorithm">ssl.keymanager.algorithm</a>
 </h4>
 <p>The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.</p>
 <table class="data-table"><tbody>
@@ -782,8 +782,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.secure.random.implementation" href="#ssl.secure.random.implementation"></a>
-      ssl.secure.random.implementation
+   <a class="anchor-link" id="ssl.secure.random.implementation"></a>
+   <a href="#ssl.secure.random.implementation">ssl.secure.random.implementation</a>
 </h4>
 <p>The SecureRandom PRNG implementation to use for SSL cryptography operations. </p>
 <table class="data-table"><tbody>
@@ -795,8 +795,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="ssl.trustmanager.algorithm" href="#ssl.trustmanager.algorithm"></a>
-      ssl.trustmanager.algorithm
+   <a class="anchor-link" id="ssl.trustmanager.algorithm"></a>
+   <a href="#ssl.trustmanager.algorithm">ssl.trustmanager.algorithm</a>
 </h4>
 <p>The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.</p>
 <table class="data-table"><tbody>
@@ -808,8 +808,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="transaction.timeout.ms" href="#transaction.timeout.ms"></a>
-      transaction.timeout.ms
+   <a class="anchor-link" id="transaction.timeout.ms"></a>
+   <a href="#transaction.timeout.ms">transaction.timeout.ms</a>
 </h4>
 <p>The maximum amount of time in ms that the transaction coordinator will wait for a transaction status update from the producer before proactively aborting the ongoing transaction. If this value is larger than the transaction.max.timeout.ms setting in the broker, the request will fail with an <code>InvalidTransactionTimeout</code> error.</p>
 <table class="data-table"><tbody>
@@ -821,8 +821,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="transactional.id" href="#transactional.id"></a>
-      transactional.id
+   <a class="anchor-link" id="transactional.id"></a>
+   <a href="#transactional.id">transactional.id</a>
 </h4>
 <p>The TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions since it allows the client to guarantee that transactions using the same TransactionalId have been completed prior to starting any new transactions. If no TransactionalId is provided, then the producer is limited to idempotent delivery. Note that <code>enable.idempotence</code> must be enabled if a TransactionalId is configured. The default is <code>null</code>, which means transactions cannot be used. Note that, by default, transactions require a cluster of at least three brokers, which is the recommended setting for production; for development you can change this by adjusting the broker setting <code>transaction.state.log.replication.factor</code>.</p>
 <table class="data-table"><tbody>
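Putting <code>transactional.id</code> to use, a minimal transactional send loop could look like this sketch (the topic names and id are placeholders; fatal errors such as <code>ProducerFencedException</code> require closing the producer rather than aborting):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "host1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-transactional-id");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            try {
                producer.beginTransaction();
                producer.send(new ProducerRecord<>("topic-a", "k", "v1"));
                producer.send(new ProducerRecord<>("topic-b", "k", "v2"));
                producer.commitTransaction();
            } catch (KafkaException e) {
                // Abortable error: roll the transaction back and rethrow.
                producer.abortTransaction();
                throw e;
            }
        }
    }
}
```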
diff --git a/25/generated/sink_connector_config.html b/25/generated/sink_connector_config.html
index a25a047..7eca7d1 100644
--- a/25/generated/sink_connector_config.html
+++ b/25/generated/sink_connector_config.html
@@ -13,8 +13,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="connector.class" href="#connector.class"></a>
-      connector.class</h4>
+   <a class="anchor-link" id="connector.class"></a>
+   <a href="#connector.class">connector.class</a>
+</h4>
 <p>Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name, or use "FileStreamSink" or "FileStreamSinkConnector" to make the configuration a bit shorter.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -25,8 +26,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="tasks.max" href="#tasks.max"></a>
-      tasks.max</h4>
+   <a class="anchor-link" id="tasks.max"></a>
+   <a href="#tasks.max">tasks.max</a>
+</h4>
 <p>Maximum number of tasks to use for this connector.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -49,8 +51,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="topics.regex" href="#topics.regex"></a>
-      topics.regex</h4>
+   <a class="anchor-link" id="topics.regex"></a>
+   <a href="#topics.regex">topics.regex</a>
+</h4>
 <p>Regular expression giving topics to consume. Under the hood, the regex is compiled to a <code>java.util.regex.Pattern</code>. Only one of topics or topics.regex should be specified.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -61,8 +64,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="key.converter" href="#key.converter"></a>
-      key.converter</h4>
+   <a class="anchor-link" id="key.converter"></a>
+   <a href="#key.converter">key.converter</a>
+</h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -73,8 +77,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="value.converter" href="#value.converter"></a>
-      value.converter</h4>
+   <a class="anchor-link" id="value.converter"></a>
+   <a href="#value.converter">value.converter</a>
+</h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -85,8 +90,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="header.converter" href="#header.converter"></a>
-      header.converter</h4>
+   <a class="anchor-link" id="header.converter"></a>
+   <a href="#header.converter">header.converter</a>
+</h4>
 <p>HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -97,8 +103,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="config.action.reload" href="#config.action.reload"></a>
-      config.action.reload</h4>
+   <a class="anchor-link" id="config.action.reload"></a>
+   <a href="#config.action.reload">config.action.reload</a>
+</h4>
 <p>The action that Connect should take on the connector when changes in external configuration providers result in a change in the connector's configuration properties. A value of 'none' indicates that Connect will do nothing. A value of 'restart' indicates that Connect should restart/reload the connector with the updated configuration properties. The restart may actually be scheduled in the future if the external configuration provider indicates that a configuration value will expire in the future.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -121,8 +128,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="errors.retry.timeout" href="#errors.retry.timeout"></a>
-      errors.retry.timeout</h4>
+   <a class="anchor-link" id="errors.retry.timeout"></a>
+   <a href="#errors.retry.timeout">errors.retry.timeout</a>
+</h4>
 <p>The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be attempted. Use -1 for infinite retries.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -133,8 +141,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="errors.retry.delay.max.ms" href="#errors.retry.delay.max.ms"></a>
-      errors.retry.delay.max.ms</h4>
+   <a class="anchor-link" id="errors.retry.delay.max.ms"></a>
+   <a href="#errors.retry.delay.max.ms">errors.retry.delay.max.ms</a>
+</h4>
 <p>The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reached to prevent thundering herd issues.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -145,8 +154,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="errors.tolerance" href="#errors.tolerance"></a>
-      errors.tolerance</h4>
+   <a class="anchor-link" id="errors.tolerance"></a>
+   <a href="#errors.tolerance">errors.tolerance</a>
+</h4>
 <p>Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in an immediate connector task failure; 'all' changes the behavior to skip over problematic records.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -157,8 +167,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="errors.log.enable" href="#errors.log.enable"></a>
-      errors.log.enable</h4>
+   <a class="anchor-link" id="errors.log.enable"></a>
+   <a href="#errors.log.enable">errors.log.enable</a>
+</h4>
 <p>If true, write each error and the details of the failed operation and problematic record to the Connect application log. This is 'false' by default, so that only errors that are not tolerated are reported.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -169,8 +180,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="errors.log.include.messages" href="#errors.log.include.messages"></a>
-      errors.log.include.messages</h4>
+   <a class="anchor-link" id="errors.log.include.messages"></a>
+   <a href="#errors.log.include.messages">errors.log.include.messages</a>
+</h4>
 <p>Whether to include in the log the Connect record that resulted in a failure. This is 'false' by default, which will prevent record keys, values, and headers from being written to log files, although some information such as topic and partition number will still be logged.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -181,8 +193,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="errors.deadletterqueue.topic.name" href="#errors.deadletterqueue.topic.name"></a>
-      errors.deadletterqueue.topic.name</h4>
+   <a class="anchor-link" id="errors.deadletterqueue.topic.name"></a>
+   <a href="#errors.deadletterqueue.topic.name">errors.deadletterqueue.topic.name</a>
+</h4>
 <p>The name of the topic to be used as the dead letter queue (DLQ) for messages that result in an error when processed by this sink connector, or its transformations or converters. The topic name is blank by default, which means that no messages are to be recorded in the DLQ.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -193,8 +206,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="errors.deadletterqueue.topic.replication.factor" href="#errors.deadletterqueue.topic.replication.factor"></a>
-      errors.deadletterqueue.topic.replication.factor</h4>
+   <a class="anchor-link" id="errors.deadletterqueue.topic.replication.factor"></a>
+   <a href="#errors.deadletterqueue.topic.replication.factor">errors.deadletterqueue.topic.replication.factor</a>
+</h4>
 <p>Replication factor used to create the dead letter queue topic when it doesn't already exist.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -205,8 +219,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="errors.deadletterqueue.context.headers.enable" href="#errors.deadletterqueue.context.headers.enable"></a>
-      errors.deadletterqueue.context.headers.enable</h4>
+   <a class="anchor-link" id="errors.deadletterqueue.context.headers.enable"></a>
+   <a href="#errors.deadletterqueue.context.headers.enable">errors.deadletterqueue.context.headers.enable</a>
+</h4>
 <p>If true, add headers containing error context to the messages written to the dead letter queue. To avoid clashing with headers from the original record, all error context header keys will start with <code>__connect.errors.</code></p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
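To show how the error-handling properties above combine, here is a hedged sketch of a sink connector configuration assembled as a plain Java map; the connector alias, topic pattern, and DLQ topic name are invented for illustration, and the same keys would appear in the JSON submitted to the Connect REST API.

```java
import java.util.HashMap;
import java.util.Map;

public class SinkConnectorConfigSketch {
    public static Map<String, String> config() {
        Map<String, String> config = new HashMap<>();
        config.put("connector.class", "FileStreamSink"); // short alias for the full class name
        config.put("tasks.max", "2");
        config.put("topics.regex", "orders-.*"); // use either topics or topics.regex, not both
        // Skip bad records instead of failing the task, but keep a paper trail.
        config.put("errors.tolerance", "all");
        config.put("errors.log.enable", "true");
        config.put("errors.deadletterqueue.topic.name", "orders-dlq"); // invented topic name
        config.put("errors.deadletterqueue.topic.replication.factor", "3");
        config.put("errors.deadletterqueue.context.headers.enable", "true"); // adds __connect.errors.* headers
        return config;
    }
}
```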
diff --git a/25/generated/source_connector_config.html b/25/generated/source_connector_config.html
index 55e6354..6ebf56e 100644
--- a/25/generated/source_connector_config.html
+++ b/25/generated/source_connector_config.html
@@ -13,8 +13,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="connector.class" href="#connector.class"></a>
-      connector.class</h4>
+   <a class="anchor-link" id="connector.class"></a>
+   <a href="#connector.class">connector.class</a>
+</h4>
 <p>Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name, or use "FileStreamSink" or "FileStreamSinkConnector" to make the configuration a bit shorter.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -25,8 +26,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="tasks.max" href="#tasks.max"></a>
-      tasks.max</h4>
+   <a class="anchor-link" id="tasks.max"></a>
+   <a href="#tasks.max">tasks.max</a>
+</h4>
 <p>Maximum number of tasks to use for this connector.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -37,8 +39,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="key.converter" href="#key.converter"></a>
-      key.converter</h4>
+   <a class="anchor-link" id="key.converter"></a>
+   <a href="#key.converter">key.converter</a>
+</h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -49,8 +52,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="value.converter" href="#value.converter"></a>
-      value.converter</h4>
+   <a class="anchor-link" id="value.converter"></a>
+   <a href="#value.converter">value.converter</a>
+</h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -61,8 +65,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="header.converter" href="#header.converter"></a>
-      header.converter</h4>
+   <a class="anchor-link" id="header.converter"></a>
+   <a href="#header.converter">header.converter</a>
+</h4>
 <p>HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -73,8 +78,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="config.action.reload" href="#config.action.reload"></a>
-      config.action.reload</h4>
+   <a class="anchor-link" id="config.action.reload"></a>
+   <a href="#config.action.reload">config.action.reload</a>
+</h4>
 <p>The action that Connect should take on the connector when changes in external configuration providers result in a change in the connector's configuration properties. A value of 'none' indicates that Connect will do nothing. A value of 'restart' indicates that Connect should restart/reload the connector with the updated configuration properties. The restart may actually be scheduled in the future if the external configuration provider indicates that a configuration value will expire in the future.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -97,8 +103,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="errors.retry.timeout" href="#errors.retry.timeout"></a>
-      errors.retry.timeout</h4>
+   <a class="anchor-link" id="errors.retry.timeout"></a>
+   <a href="#errors.retry.timeout">errors.retry.timeout</a>
+</h4>
 <p>The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be attempted. Use -1 for infinite retries.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -109,8 +116,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="errors.retry.delay.max.ms" href="#errors.retry.delay.max.ms"></a>
-      errors.retry.delay.max.ms</h4>
+   <a class="anchor-link" id="errors.retry.delay.max.ms"></a>
+   <a href="#errors.retry.delay.max.ms">errors.retry.delay.max.ms</a>
+</h4>
 <p>The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reached to prevent thundering herd issues.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -121,8 +129,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="errors.tolerance" href="#errors.tolerance"></a>
-      errors.tolerance</h4>
+   <a class="anchor-link" id="errors.tolerance"></a>
+   <a href="#errors.tolerance">errors.tolerance</a>
+</h4>
 <p>Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in an immediate connector task failure; 'all' changes the behavior to skip over problematic records.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -133,8 +142,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="errors.log.enable" href="#errors.log.enable"></a>
-      errors.log.enable</h4>
+   <a class="anchor-link" id="errors.log.enable"></a>
+   <a href="#errors.log.enable">errors.log.enable</a>
+</h4>
 <p>If true, write each error and the details of the failed operation and problematic record to the Connect application log. This is 'false' by default, so that only errors that are not tolerated are reported.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -145,8 +155,9 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="errors.log.include.messages" href="#errors.log.include.messages"></a>
-      errors.log.include.messages</h4>
+   <a class="anchor-link" id="errors.log.include.messages"></a>
+   <a href="#errors.log.include.messages">errors.log.include.messages</a>
+</h4>
 <p>Whether to include in the log the Connect record that resulted in a failure. This is 'false' by default, which will prevent record keys, values, and headers from being written to log files, although some information such as topic and partition number will still be logged.</p>
 <table class="data-table"><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
diff --git a/25/generated/streams_config.html b/25/generated/streams_config.html
index 5679901..bbb3bc6 100644
--- a/25/generated/streams_config.html
+++ b/25/generated/streams_config.html
@@ -1,8 +1,8 @@
 <ul class="config-list">
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="application.id" href="#application.id"></a>
-      application.id
+   <a class="anchor-link" id="application.id"></a>
+   <a href="#application.id">application.id</a>
 </h4>
 <p>An identifier for the stream processing application. Must be unique within the Kafka cluster. It is used as 1) the default client-id prefix, 2) the group-id for membership management, 3) the changelog topic prefix.</p>
 <table class="data-table"><tbody>
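As a hedged sketch of how <code>application.id</code> and a few of the settings documented below are supplied in code (the application name and bootstrap address are invented):

```java
import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsConfigSketch {
    public static Properties config() {
        Properties props = new Properties();
        // Unique per application; also used as the client-id prefix, group id, and changelog topic prefix.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-app"); // invented name
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3); // for changelog/repartition topics
        return props;
    }
}
```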
@@ -14,8 +14,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="bootstrap.servers" href="#bootstrap.servers"></a>
-      bootstrap.servers
+   <a class="anchor-link" id="bootstrap.servers"></a>
+   <a href="#bootstrap.servers">bootstrap.servers</a>
 </h4>
 <p>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).</p>
 <table class="data-table"><tbody>
@@ -27,8 +27,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="replication.factor" href="#replication.factor"></a>
-      replication.factor
+   <a class="anchor-link" id="replication.factor"></a>
+   <a href="#replication.factor">replication.factor</a>
 </h4>
 <p>The replication factor for change log topics and repartition topics created by the stream processing application.</p>
 <table class="data-table"><tbody>
@@ -40,8 +40,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="state.dir" href="#state.dir"></a>
-      state.dir
+   <a class="anchor-link" id="state.dir"></a>
+   <a href="#state.dir">state.dir</a>
 </h4>
 <p>Directory location for state store. This path must be unique for each streams instance sharing the same underlying filesystem.</p>
 <table class="data-table"><tbody>
@@ -53,8 +53,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="cache.max.bytes.buffering" href="#cache.max.bytes.buffering"></a>
-      cache.max.bytes.buffering
+   <a class="anchor-link" id="cache.max.bytes.buffering"></a>
+   <a href="#cache.max.bytes.buffering">cache.max.bytes.buffering</a>
 </h4>
 <p>Maximum number of memory bytes to be used for buffering across all threads.</p>
 <table class="data-table"><tbody>
@@ -66,8 +66,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="client.id" href="#client.id"></a>
-      client.id
+   <a class="anchor-link" id="client.id"></a>
+   <a href="#client.id">client.id</a>
 </h4>
 <p>An ID prefix string used for the client IDs of internal consumer, producer and restore-consumer, with pattern '&lt;client.id&gt;-StreamThread-&lt;threadSequenceNumber&gt;-&lt;consumer|producer|restore-consumer&gt;'.</p>
 <table class="data-table"><tbody>
@@ -79,8 +79,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="default.deserialization.exception.handler" href="#default.deserialization.exception.handler"></a>
-      default.deserialization.exception.handler
+   <a class="anchor-link" id="default.deserialization.exception.handler"></a>
+   <a href="#default.deserialization.exception.handler">default.deserialization.exception.handler</a>
 </h4>
 <p>Exception handling class that implements the <code>org.apache.kafka.streams.errors.DeserializationExceptionHandler</code> interface.</p>
 <table class="data-table"><tbody>
@@ -92,8 +92,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="default.key.serde" href="#default.key.serde"></a>
-      default.key.serde
+   <a class="anchor-link" id="default.key.serde"></a>
+   <a href="#default.key.serde">default.key.serde</a>
 </h4>
 <p>Default serializer / deserializer class for keys that implements the <code>org.apache.kafka.common.serialization.Serde</code> interface. Note that when a windowed serde class is used, one needs to set the inner serde class that implements the <code>org.apache.kafka.common.serialization.Serde</code> interface via 'default.windowed.key.serde.inner' or 'default.windowed.value.serde.inner' as well.</p>
 <table class="data-table"><tbody>
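A brief sketch of configuring the default serdes described here, assuming string keys and long values:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;

public class DefaultSerdeSketch {
    public static void configure(Properties props) {
        // Used by any topology stage that does not specify its own serde.
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.Long().getClass());
    }
}
```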
@@ -105,8 +105,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="default.production.exception.handler" href="#default.production.exception.handler"></a>
-      default.production.exception.handler
+   <a class="anchor-link" id="default.production.exception.handler"></a>
+   <a href="#default.production.exception.handler">default.production.exception.handler</a>
 </h4>
 <p>Exception handling class that implements the <code>org.apache.kafka.streams.errors.ProductionExceptionHandler</code> interface.</p>
 <table class="data-table"><tbody>
@@ -118,8 +118,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="default.timestamp.extractor" href="#default.timestamp.extractor"></a>
-      default.timestamp.extractor
+   <a class="anchor-link" id="default.timestamp.extractor"></a>
+   <a href="#default.timestamp.extractor">default.timestamp.extractor</a>
 </h4>
 <p>Default timestamp extractor class that implements the <code>org.apache.kafka.streams.processor.TimestampExtractor</code> interface.</p>
 <table class="data-table"><tbody>
@@ -131,8 +131,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="default.value.serde" href="#default.value.serde"></a>
-      default.value.serde
+   <a class="anchor-link" id="default.value.serde"></a>
+   <a href="#default.value.serde">default.value.serde</a>
 </h4>
 <p>Default serializer / deserializer class for values that implements the <code>org.apache.kafka.common.serialization.Serde</code> interface. Note that when a windowed serde class is used, one needs to set the inner serde class that implements the <code>org.apache.kafka.common.serialization.Serde</code> interface via 'default.windowed.key.serde.inner' or 'default.windowed.value.serde.inner' as well.</p>
 <table class="data-table"><tbody>
@@ -144,8 +144,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="max.task.idle.ms" href="#max.task.idle.ms"></a>
-      max.task.idle.ms
+   <a class="anchor-link" id="max.task.idle.ms"></a>
+   <a href="#max.task.idle.ms">max.task.idle.ms</a>
 </h4>
 <p>Maximum amount of time a stream task will stay idle when not all of its partition buffers contain records, to avoid potential out-of-order record processing across multiple input streams.</p>
 <table class="data-table"><tbody>
@@ -157,8 +157,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="num.standby.replicas" href="#num.standby.replicas"></a>
-      num.standby.replicas
+   <a class="anchor-link" id="num.standby.replicas"></a>
+   <a href="#num.standby.replicas">num.standby.replicas</a>
 </h4>
 <p>The number of standby replicas for each task.</p>
 <table class="data-table"><tbody>
@@ -170,8 +170,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="num.stream.threads" href="#num.stream.threads"></a>
-      num.stream.threads
+   <a class="anchor-link" id="num.stream.threads"></a>
+   <a href="#num.stream.threads">num.stream.threads</a>
 </h4>
 <p>The number of threads to execute stream processing.</p>
 <table class="data-table"><tbody>
@@ -183,8 +183,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="processing.guarantee" href="#processing.guarantee"></a>
-      processing.guarantee
+   <a class="anchor-link" id="processing.guarantee"></a>
+   <a href="#processing.guarantee">processing.guarantee</a>
 </h4>
 <p>The processing guarantee that should be used. Possible values are <code>at_least_once</code> (default) and <code>exactly_once</code>. Note that exactly-once processing requires a cluster of at least three brokers by default, which is the recommended setting for production; for development you can change this by adjusting the broker settings <code>transaction.state.log.replication.factor</code> and <code>transaction.state.log.min.isr</code>.</p>
 <table class="data-table"><tbody>
@@ -196,8 +196,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="security.protocol" href="#security.protocol"></a>
-      security.protocol
+   <a class="anchor-link" id="security.protocol"></a>
+   <a href="#security.protocol">security.protocol</a>
 </h4>
 <p>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</p>
 <table class="data-table"><tbody>
@@ -209,8 +209,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="topology.optimization" href="#topology.optimization"></a>
-      topology.optimization
+   <a class="anchor-link" id="topology.optimization"></a>
+   <a href="#topology.optimization">topology.optimization</a>
 </h4>
 <p>A configuration telling Kafka Streams whether it should optimize the topology; disabled by default.</p>
 <table class="data-table"><tbody>
@@ -222,8 +222,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="application.server" href="#application.server"></a>
-      application.server
+   <a class="anchor-link" id="application.server"></a>
+   <a href="#application.server">application.server</a>
 </h4>
 <p>A host:port pair pointing to a user-defined endpoint that can be used for state store discovery and interactive queries on this KafkaStreams instance.</p>
 <table class="data-table"><tbody>
@@ -235,8 +235,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="buffered.records.per.partition" href="#buffered.records.per.partition"></a>
-      buffered.records.per.partition
+   <a class="anchor-link" id="buffered.records.per.partition"></a>
+   <a href="#buffered.records.per.partition">buffered.records.per.partition</a>
 </h4>
 <p>Maximum number of records to buffer per partition.</p>
 <table class="data-table"><tbody>
@@ -248,8 +248,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="built.in.metrics.version" href="#built.in.metrics.version"></a>
-      built.in.metrics.version
+   <a class="anchor-link" id="built.in.metrics.version"></a>
+   <a href="#built.in.metrics.version">built.in.metrics.version</a>
 </h4>
 <p>Version of the built-in metrics to use.</p>
 <table class="data-table"><tbody>
@@ -261,8 +261,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="commit.interval.ms" href="#commit.interval.ms"></a>
-      commit.interval.ms
+   <a class="anchor-link" id="commit.interval.ms"></a>
+   <a href="#commit.interval.ms">commit.interval.ms</a>
 </h4>
 <p>The frequency with which to save the position of the processor. (Note: if <code>processing.guarantee</code> is set to <code>exactly_once</code>, the default value is <code>100</code>; otherwise the default value is <code>30000</code>.)</p>
 <table class="data-table"><tbody>
@@ -274,8 +274,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="connections.max.idle.ms" href="#connections.max.idle.ms"></a>
-      connections.max.idle.ms
+   <a class="anchor-link" id="connections.max.idle.ms"></a>
+   <a href="#connections.max.idle.ms">connections.max.idle.ms</a>
 </h4>
 <p>Close idle connections after the number of milliseconds specified by this config.</p>
 <table class="data-table"><tbody>
@@ -287,8 +287,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metadata.max.age.ms" href="#metadata.max.age.ms"></a>
-      metadata.max.age.ms
+   <a class="anchor-link" id="metadata.max.age.ms"></a>
+   <a href="#metadata.max.age.ms">metadata.max.age.ms</a>
 </h4>
 <p>The period of time in milliseconds after which we force a refresh of metadata, even if we haven't seen any partition leadership changes, to proactively discover any new brokers or partitions.</p>
 <table class="data-table"><tbody>
@@ -300,8 +300,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metric.reporters" href="#metric.reporters"></a>
-      metric.reporters
+   <a class="anchor-link" id="metric.reporters"></a>
+   <a href="#metric.reporters">metric.reporters</a>
 </h4>
 <p>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</p>
 <table class="data-table"><tbody>
@@ -313,8 +313,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metrics.num.samples" href="#metrics.num.samples"></a>
-      metrics.num.samples
+   <a class="anchor-link" id="metrics.num.samples"></a>
+   <a href="#metrics.num.samples">metrics.num.samples</a>
 </h4>
 <p>The number of samples maintained to compute metrics.</p>
 <table class="data-table"><tbody>
@@ -326,8 +326,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metrics.recording.level" href="#metrics.recording.level"></a>
-      metrics.recording.level
+   <a class="anchor-link" id="metrics.recording.level"></a>
+   <a href="#metrics.recording.level">metrics.recording.level</a>
 </h4>
 <p>The highest recording level for metrics.</p>
 <table class="data-table"><tbody>
@@ -339,8 +339,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="metrics.sample.window.ms" href="#metrics.sample.window.ms"></a>
-      metrics.sample.window.ms
+   <a class="anchor-link" id="metrics.sample.window.ms"></a>
+   <a href="#metrics.sample.window.ms">metrics.sample.window.ms</a>
 </h4>
 <p>The window of time a metrics sample is computed over.</p>
 <table class="data-table"><tbody>
@@ -352,8 +352,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="partition.grouper" href="#partition.grouper"></a>
-      partition.grouper
+   <a class="anchor-link" id="partition.grouper"></a>
+   <a href="#partition.grouper">partition.grouper</a>
 </h4>
 <p>Partition grouper class that implements the <code>org.apache.kafka.streams.processor.PartitionGrouper</code> interface. WARNING: This config is deprecated and will be removed in the 3.0.0 release.</p>
 <table class="data-table"><tbody>
@@ -365,8 +365,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="poll.ms" href="#poll.ms"></a>
-      poll.ms
+   <a class="anchor-link" id="poll.ms"></a>
+   <a href="#poll.ms">poll.ms</a>
 </h4>
 <p>The amount of time in milliseconds to block waiting for input.</p>
 <table class="data-table"><tbody>
@@ -378,8 +378,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="receive.buffer.bytes" href="#receive.buffer.bytes"></a>
-      receive.buffer.bytes
+   <a class="anchor-link" id="receive.buffer.bytes"></a>
+   <a href="#receive.buffer.bytes">receive.buffer.bytes</a>
 </h4>
 <p>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.</p>
 <table class="data-table"><tbody>
@@ -391,8 +391,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="reconnect.backoff.max.ms" href="#reconnect.backoff.max.ms"></a>
-      reconnect.backoff.max.ms
+   <a class="anchor-link" id="reconnect.backoff.max.ms"></a>
+   <a href="#reconnect.backoff.max.ms">reconnect.backoff.max.ms</a>
 </h4>
 <p>The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.</p>
 <table class="data-table"><tbody>
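Reading the two reconnect settings together, the wait before the nth consecutive reconnect attempt to a failed broker can be sketched (roughly, per the description above; the doubling factor is an assumption, not stated in this entry) as:

```latex
\mathrm{backoff}(n) \approx \min\left(\texttt{reconnect.backoff.ms} \cdot 2^{\,n},\ \texttt{reconnect.backoff.max.ms}\right) \times (1 \pm 0.2)
```

where the final factor reflects the 20% random jitter added to avoid connection storms.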
@@ -404,8 +404,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="reconnect.backoff.ms" href="#reconnect.backoff.ms"></a>
-      reconnect.backoff.ms
+   <a class="anchor-link" id="reconnect.backoff.ms"></a>
+   <a href="#reconnect.backoff.ms">reconnect.backoff.ms</a>
 </h4>
 <p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</p>
 <table class="data-table"><tbody>
@@ -417,8 +417,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="request.timeout.ms" href="#request.timeout.ms"></a>
-      request.timeout.ms
+   <a class="anchor-link" id="request.timeout.ms"></a>
+   <a href="#request.timeout.ms">request.timeout.ms</a>
 </h4>
 <p>The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.</p>
 <table class="data-table"><tbody>
@@ -443,8 +443,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="retry.backoff.ms" href="#retry.backoff.ms"></a>
-      retry.backoff.ms
+   <a class="anchor-link" id="retry.backoff.ms"></a>
+   <a href="#retry.backoff.ms">retry.backoff.ms</a>
 </h4>
 <p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table class="data-table"><tbody>
@@ -456,8 +456,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="rocksdb.config.setter" href="#rocksdb.config.setter"></a>
-      rocksdb.config.setter
+   <a class="anchor-link" id="rocksdb.config.setter"></a>
+   <a href="#rocksdb.config.setter">rocksdb.config.setter</a>
 </h4>
 <p>A RocksDB config setter class or class name that implements the <code>org.apache.kafka.streams.state.RocksDBConfigSetter</code> interface.</p>
 <table class="data-table"><tbody>
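A hedged sketch of a setter implementing this interface; the tuning values are arbitrary examples, not recommendations:

```java
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.Options;

public class CustomRocksDBConfig implements RocksDBConfigSetter {
    @Override
    public void setConfig(String storeName, Options options, Map<String, Object> configs) {
        BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockSize(16 * 1024L); // arbitrary example value
        options.setTableFormatConfig(tableConfig);
        options.setMaxWriteBufferNumber(2); // arbitrary example value
    }
}
```

It would then be registered via <code>props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class)</code>.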
@@ -469,8 +469,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="send.buffer.bytes" href="#send.buffer.bytes"></a>
-      send.buffer.bytes
+   <a class="anchor-link" id="send.buffer.bytes"></a>
+   <a href="#send.buffer.bytes">send.buffer.bytes</a>
 </h4>
 <p>The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.</p>
 <table class="data-table"><tbody>
@@ -482,8 +482,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="state.cleanup.delay.ms" href="#state.cleanup.delay.ms"></a>
-      state.cleanup.delay.ms
+   <a class="anchor-link" id="state.cleanup.delay.ms"></a>
+   <a href="#state.cleanup.delay.ms">state.cleanup.delay.ms</a>
 </h4>
 <p>The amount of time in milliseconds to wait before deleting state when a partition has migrated. Only state directories that have not been modified for at least <code>state.cleanup.delay.ms</code> will be removed.</p>
 <table class="data-table"><tbody>
@@ -495,8 +495,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="upgrade.from" href="#upgrade.from"></a>
-      upgrade.from
+   <a class="anchor-link" id="upgrade.from"></a>
+   <a href="#upgrade.from">upgrade.from</a>
 </h4>
 <p>Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2.0, 2.3] to 2.4+. When upgrading from 2.4 to a newer version it is not required to specify this config. Default is null. Accepted values are "0.10.0", "0.10.1", "0.10.2", "0.11.0", "1.0", "1.1", "2.0", "2.1", "2.2", "2.3" (for upgrading from the corresponding old version).</p>
 <table class="data-table"><tbody>
@@ -508,8 +508,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="windowstore.changelog.additional.retention.ms" href="#windowstore.changelog.additional.retention.ms"></a>
-      windowstore.changelog.additional.retention.ms
+   <a class="anchor-link" id="windowstore.changelog.additional.retention.ms"></a>
+   <a href="#windowstore.changelog.additional.retention.ms">windowstore.changelog.additional.retention.ms</a>
 </h4>
 <p>Added to a window's maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift. Default is 1 day.</p>
 <table class="data-table"><tbody>
diff --git a/25/generated/topic_config.html b/25/generated/topic_config.html
index a5dbffd..3135468 100644
--- a/25/generated/topic_config.html
+++ b/25/generated/topic_config.html
@@ -1,8 +1,8 @@
 <ul class="config-list">
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="cleanup.policy" href="#cleanup.policy"></a>
-      cleanup.policy
+   <a class="anchor-link" id="cleanup.policy"></a>
+   <a href="#cleanup.policy">cleanup.policy</a>
 </h4>
 <p>A string that is either "delete" or "compact" or both. This string designates the retention policy to use on old log segments. The default policy ("delete") will discard old segments when their retention time or size limit has been reached. The "compact" setting will enable <a href="#compaction">log compaction</a> on the topic.</p>
 <table class="data-table"><tbody>
@@ -15,8 +15,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="compression.type" href="#compression.type"></a>
-      compression.type
+   <a class="anchor-link" id="compression.type"></a>
+   <a href="#compression.type">compression.type</a>
 </h4>
 <p>Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer.</p>
 <table class="data-table"><tbody>
@@ -29,8 +29,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="delete.retention.ms" href="#delete.retention.ms"></a>
-      delete.retention.ms
+   <a class="anchor-link" id="delete.retention.ms"></a>
+   <a href="#delete.retention.ms">delete.retention.ms</a>
 </h4>
 <p>The amount of time to retain delete tombstone markers for <a href="#compaction">log compacted</a> topics. This setting also gives a bound on the time in which a consumer must complete a read if it begins from offset 0, to ensure that it gets a valid snapshot of the final stage (otherwise delete tombstones may be collected before it completes its scan).</p>
 <table class="data-table"><tbody>
@@ -43,8 +43,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="file.delete.delay.ms" href="#file.delete.delay.ms"></a>
-      file.delete.delay.ms
+   <a class="anchor-link" id="file.delete.delay.ms"></a>
+   <a href="#file.delete.delay.ms">file.delete.delay.ms</a>
 </h4>
 <p>The time to wait before deleting a file from the filesystem.</p>
 <table class="data-table"><tbody>
@@ -57,8 +57,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="flush.messages" href="#flush.messages"></a>
-      flush.messages
+   <a class="anchor-link" id="flush.messages"></a>
+   <a href="#flush.messages">flush.messages</a>
 </h4>
 <p>This setting allows specifying an interval at which we will force an fsync of data written to the log. For example, if this was set to 1 we would fsync after every message; if it were 5 we would fsync after every five messages. In general we recommend that you not set this and instead use replication for durability, allowing the operating system's background flush capabilities, as that is more efficient. This setting can be overridden on a per-topic basis (see <a href="#topicconfigs">the per-topic configuration section</a>).</p>
 <table class="data-table"><tbody>
@@ -71,8 +71,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="flush.ms" href="#flush.ms"></a>
-      flush.ms
+   <a class="anchor-link" id="flush.ms"></a>
+   <a href="#flush.ms">flush.ms</a>
 </h4>
 <p>This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example, if this was set to 1000 we would fsync after 1000 ms had passed. In general we recommend that you not set this and instead use replication for durability, allowing the operating system's background flush capabilities, as that is more efficient.</p>
 <table class="data-table"><tbody>
@@ -85,8 +85,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="follower.replication.throttled.replicas" href="#follower.replication.throttled.replicas"></a>
-      follower.replication.throttled.replicas
+   <a class="anchor-link" id="follower.replication.throttled.replicas"></a>
+   <a href="#follower.replication.throttled.replicas">follower.replication.throttled.replicas</a>
 </h4>
 <p>A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId],... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.</p>
 <table class="data-table"><tbody>
@@ -99,8 +99,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="index.interval.bytes" href="#index.interval.bytes"></a>
-      index.interval.bytes
+   <a class="anchor-link" id="index.interval.bytes"></a>
+   <a href="#index.interval.bytes">index.interval.bytes</a>
 </h4>
 <p>This setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a message roughly every 4096 bytes. More indexing allows reads to jump closer to the exact position in the log but makes the index larger. You probably don't need to change this.</p>
 <table class="data-table"><tbody>
@@ -113,8 +113,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="leader.replication.throttled.replicas" href="#leader.replication.throttled.replicas"></a>
-      leader.replication.throttled.replicas
+   <a class="anchor-link" id="leader.replication.throttled.replicas"></a>
+   <a href="#leader.replication.throttled.replicas">leader.replication.throttled.replicas</a>
 </h4>
 <p>A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId],... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.</p>
 <table class="data-table"><tbody>
@@ -127,8 +127,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="max.compaction.lag.ms" href="#max.compaction.lag.ms"></a>
-      max.compaction.lag.ms
+   <a class="anchor-link" id="max.compaction.lag.ms"></a>
+   <a href="#max.compaction.lag.ms">max.compaction.lag.ms</a>
 </h4>
 <p>The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted.</p>
 <table class="data-table"><tbody>
@@ -141,8 +141,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="max.message.bytes" href="#max.message.bytes"></a>
-      max.message.bytes
+   <a class="anchor-link" id="max.message.bytes"></a>
+   <a href="#max.message.bytes">max.message.bytes</a>
 </h4>
 <p>The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.</p>
 <table class="data-table"><tbody>
@@ -155,8 +155,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="message.format.version" href="#message.format.version"></a>
-      message.format.version
+   <a class="anchor-link" id="message.format.version"></a>
+   <a href="#message.format.version">message.format.version</a>
 </h4>
 <p>Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0; check ApiVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller than or equal to the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand.</p>
 <table class="data-table"><tbody>
@@ -169,8 +169,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="message.timestamp.difference.max.ms" href="#message.timestamp.difference.max.ms"></a>
-      message.timestamp.difference.max.ms
+   <a class="anchor-link" id="message.timestamp.difference.max.ms"></a>
+   <a href="#message.timestamp.difference.max.ms">message.timestamp.difference.max.ms</a>
 </h4>
 <p>The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if message.timestamp.type=LogAppendTime.</p>
 <table class="data-table"><tbody>
@@ -183,8 +183,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="message.timestamp.type" href="#message.timestamp.type"></a>
-      message.timestamp.type
+   <a class="anchor-link" id="message.timestamp.type"></a>
+   <a href="#message.timestamp.type">message.timestamp.type</a>
 </h4>
 <p>Define whether the timestamp in the message is message create time or log append time. The value should be either <code>CreateTime</code> or <code>LogAppendTime</code>.</p>
 <table class="data-table"><tbody>
@@ -197,8 +197,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="min.cleanable.dirty.ratio" href="#min.cleanable.dirty.ratio"></a>
-      min.cleanable.dirty.ratio
+   <a class="anchor-link" id="min.cleanable.dirty.ratio"></a>
+   <a href="#min.cleanable.dirty.ratio">min.cleanable.dirty.ratio</a>
 </h4>
 <p>This configuration controls how frequently the log compactor will attempt to clean the log (assuming <a href="#compaction">log compaction</a> is enabled). By default we will avoid cleaning a log where more than 50% of the log has been compacted. This ratio bounds the maximum space wasted in the log by duplicates (at 50%, at most 50% of the log could be duplicates). A higher ratio will mean fewer, more efficient cleanings but will mean more wasted space in the log. If the max.compaction.lag.ms or the min.compaction.lag.ms configurations are also specified, then the log compactor considers the log to be eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the min.compaction.lag.ms duration, or (ii) the log has had dirty (uncompacted) records for at most the max.compaction.lag.ms period.</p>
 <table class="data-table"><tbody>
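In other words, with the dirty head of the log being the portion written since the last cleaning, compaction becomes eligible (subject to the lag settings above) once

```latex
\text{dirty ratio} = \frac{\text{bytes in the uncompacted (dirty) head}}{\text{total bytes in the log}} \ \ge\ \texttt{min.cleanable.dirty.ratio}
```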
@@ -211,8 +211,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="min.compaction.lag.ms" href="#min.compaction.lag.ms"></a>
-      min.compaction.lag.ms
+   <a class="anchor-link" id="min.compaction.lag.ms"></a>
+   <a href="#min.compaction.lag.ms">min.compaction.lag.ms</a>
 </h4>
 <p>The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.</p>
 <table class="data-table"><tbody>
@@ -225,8 +225,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="min.insync.replicas" href="#min.insync.replicas"></a>
-      min.insync.replicas
+   <a class="anchor-link" id="min.insync.replicas"></a>
+   <a href="#min.insync.replicas">min.insync.replicas</a>
 </h4>
 <p>When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).<br>When used together, <code>min.insync.replicas</code> and <code>acks</code> allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set <code>min.insync.replicas</code> to 2, and produce with <code>acks</code> of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write.</p>
 <table class="data-table"><tbody>
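To ground the replication-factor-3, <code>min.insync.replicas</code>=2 scenario above, here is a hedged Java sketch that creates such a topic through the admin client; the topic name, partition count, and bootstrap address are invented.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateDurableTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("orders", 6, (short) 3) // invented topic, 6 partitions, RF 3
                    .configs(Map.of("min.insync.replicas", "2"));
            admin.createTopics(Collections.singleton(topic)).all().get(); // blocks until created
        }
    }
}
```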
@@ -253,8 +253,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="retention.bytes" href="#retention.bytes"></a>
-      retention.bytes
+   <a class="anchor-link" id="retention.bytes"></a>
+   <a href="#retention.bytes">retention.bytes</a>
 </h4>
 <p>This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit, only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic retention in bytes.</p>
 <table class="data-table"><tbody>
@@ -267,8 +267,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="retention.ms" href="#retention.ms"></a>
-      retention.ms
+   <a class="anchor-link" id="retention.ms"></a>
+   <a href="#retention.ms">retention.ms</a>
 </h4>
 <p>This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data. If set to -1, no time limit is applied.</p>
 <table class="data-table"><tbody>
@@ -281,8 +281,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="segment.bytes" href="#segment.bytes"></a>
-      segment.bytes
+   <a class="anchor-link" id="segment.bytes"></a>
+   <a href="#segment.bytes">segment.bytes</a>
 </h4>
 <p>This configuration controls the segment file size for the log. Retention and cleaning are always done a file at a time, so a larger segment size means fewer files but less granular control over retention.</p>
 <table class="data-table"><tbody>
@@ -295,8 +295,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="segment.index.bytes" href="#segment.index.bytes"></a>
-      segment.index.bytes
+   <a class="anchor-link" id="segment.index.bytes"></a>
+   <a href="#segment.index.bytes">segment.index.bytes</a>
 </h4>
 <p>This configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink it only after log rolls. You generally should not need to change this setting.</p>
 <table class="data-table"><tbody>
@@ -309,8 +309,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="segment.jitter.ms" href="#segment.jitter.ms"></a>
-      segment.jitter.ms
+   <a class="anchor-link" id="segment.jitter.ms"></a>
+   <a href="#segment.jitter.ms">segment.jitter.ms</a>
 </h4>
 <p>The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling.</p>
 <table class="data-table"><tbody>
@@ -323,8 +323,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="segment.ms" href="#segment.ms"></a>
-      segment.ms
+   <a class="anchor-link" id="segment.ms"></a>
+   <a href="#segment.ms">segment.ms</a>
 </h4>
 <p>This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data.</p>
 <table class="data-table"><tbody>
@@ -337,8 +337,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="unclean.leader.election.enable" href="#unclean.leader.election.enable"></a>
-      unclean.leader.election.enable
+   <a class="anchor-link" id="unclean.leader.election.enable"></a>
+   <a href="#unclean.leader.election.enable">unclean.leader.election.enable</a>
 </h4>
 <p>Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss.</p>
 <table class="data-table"><tbody>
@@ -351,8 +351,8 @@
 </li>
 <li>
 <h4 class="anchor-heading">
-      <a class="anchor-link" id="message.downconversion.enable" href="#message.downconversion.enable"></a>
-      message.downconversion.enable
+   <a class="anchor-link" id="message.downconversion.enable"></a>
+   <a href="#message.downconversion.enable">message.downconversion.enable</a>
 </h4>
 <p>This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to <code>false</code>, the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with an <code>UNSUPPORTED_VERSION</code> error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers.</p>
 <table class="data-table"><tbody>
diff --git a/26/generated/admin_client_config.html b/26/generated/admin_client_config.html
index c53fdbc..e863c8c 100644
--- a/26/generated/admin_client_config.html
+++ b/26/generated/admin_client_config.html
@@ -1,6 +1,9 @@
 <ul class="config-list">
 <li>
-<h4><a id="bootstrap.servers" href="#bootstrap.servers">bootstrap.servers</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="bootstrap.servers"></a>
+   <a href="#bootstrap.servers">bootstrap.servers</a>
+</h4>
 <p>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -10,7 +13,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.key.password" href="#ssl.key.password">ssl.key.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.key.password"></a>
+   <a href="#ssl.key.password">ssl.key.password</a>
+</h4>
 <p>The password of the private key in the key store file. This is optional for clients.</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -20,7 +26,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keystore.location" href="#ssl.keystore.location">ssl.keystore.location</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keystore.location"></a>
+   <a href="#ssl.keystore.location">ssl.keystore.location</a>
+</h4>
 <p>The location of the key store file. This is optional for clients and can be used for two-way client authentication.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -30,7 +39,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keystore.password" href="#ssl.keystore.password">ssl.keystore.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keystore.password"></a>
+   <a href="#ssl.keystore.password">ssl.keystore.password</a>
+</h4>
 <p>The store password for the key store file. This is optional for clients and only needed if ssl.keystore.location is configured.</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -40,7 +52,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.truststore.location" href="#ssl.truststore.location">ssl.truststore.location</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.truststore.location"></a>
+   <a href="#ssl.truststore.location">ssl.truststore.location</a>
+</h4>
 <p>The location of the trust store file. </p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -50,7 +65,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.truststore.password" href="#ssl.truststore.password">ssl.truststore.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.truststore.password"></a>
+   <a href="#ssl.truststore.password">ssl.truststore.password</a>
+</h4>
 <p>The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -60,7 +78,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="client.dns.lookup" href="#client.dns.lookup">client.dns.lookup</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="client.dns.lookup"></a>
+   <a href="#client.dns.lookup">client.dns.lookup</a>
+</h4>
 <p>Controls how the client uses DNS lookups. If set to <code>use_all_dns_ips</code>, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (note, however, that both the JVM and the OS cache DNS name lookups). If set to <code>resolve_canonical_bootstrap_servers_only</code>, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as <code>use_all_dns_ips</code>. If set to <code>default</code> (deprecated), attempt to connect to the first IP address returned by the lookup, even if the lookup returns multiple IP addresses.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -70,7 +91,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="client.id" href="#client.id">client.id</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="client.id"></a>
+   <a href="#client.id">client.id</a>
+</h4>
 <p>An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -80,7 +104,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="connections.max.idle.ms" href="#connections.max.idle.ms">connections.max.idle.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="connections.max.idle.ms"></a>
+   <a href="#connections.max.idle.ms">connections.max.idle.ms</a>
+</h4>
 <p>Close idle connections after the number of milliseconds specified by this config.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -90,7 +117,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="default.api.timeout.ms" href="#default.api.timeout.ms">default.api.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="default.api.timeout.ms"></a>
+   <a href="#default.api.timeout.ms">default.api.timeout.ms</a>
+</h4>
 <p>Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operations that do not specify a <code>timeout</code> parameter.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -100,7 +130,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="receive.buffer.bytes" href="#receive.buffer.bytes">receive.buffer.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="receive.buffer.bytes"></a>
+   <a href="#receive.buffer.bytes">receive.buffer.bytes</a>
+</h4>
 <p>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -110,7 +143,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="request.timeout.ms" href="#request.timeout.ms">request.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="request.timeout.ms"></a>
+   <a href="#request.timeout.ms">request.timeout.ms</a>
+</h4>
 <p>The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -120,7 +156,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.client.callback.handler.class" href="#sasl.client.callback.handler.class">sasl.client.callback.handler.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.client.callback.handler.class"></a>
+   <a href="#sasl.client.callback.handler.class">sasl.client.callback.handler.class</a>
+</h4>
 <p>The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -130,7 +169,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.jaas.config" href="#sasl.jaas.config">sasl.jaas.config</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.jaas.config"></a>
+   <a href="#sasl.jaas.config">sasl.jaas.config</a>
+</h4>
 <p>JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html">here</a>. The format for the value is: '<code>loginModuleClass controlFlag (optionName=optionValue)*;</code>'. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -140,7 +182,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.service.name" href="#sasl.kerberos.service.name">sasl.kerberos.service.name</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.service.name"></a>
+   <a href="#sasl.kerberos.service.name">sasl.kerberos.service.name</a>
+</h4>
 <p>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -150,7 +195,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.callback.handler.class" href="#sasl.login.callback.handler.class">sasl.login.callback.handler.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.callback.handler.class"></a>
+   <a href="#sasl.login.callback.handler.class">sasl.login.callback.handler.class</a>
+</h4>
 <p>The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -160,7 +208,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.class" href="#sasl.login.class">sasl.login.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.class"></a>
+   <a href="#sasl.login.class">sasl.login.class</a>
+</h4>
 <p>The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -170,7 +221,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.mechanism" href="#sasl.mechanism">sasl.mechanism</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.mechanism"></a>
+   <a href="#sasl.mechanism">sasl.mechanism</a>
+</h4>
 <p>SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -180,7 +234,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="security.protocol" href="#security.protocol">security.protocol</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="security.protocol"></a>
+   <a href="#security.protocol">security.protocol</a>
+</h4>
 <p>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -190,7 +247,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="send.buffer.bytes" href="#send.buffer.bytes">send.buffer.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="send.buffer.bytes"></a>
+   <a href="#send.buffer.bytes">send.buffer.bytes</a>
+</h4>
 <p>The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -200,7 +260,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.enabled.protocols" href="#ssl.enabled.protocols">ssl.enabled.protocols</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.enabled.protocols"></a>
+   <a href="#ssl.enabled.protocols">ssl.enabled.protocols</a>
+</h4>
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -210,7 +273,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keystore.type" href="#ssl.keystore.type">ssl.keystore.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keystore.type"></a>
+   <a href="#ssl.keystore.type">ssl.keystore.type</a>
+</h4>
 <p>The file format of the key store file. This is optional for client.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -220,7 +286,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.protocol" href="#ssl.protocol">ssl.protocol</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.protocol"></a>
+   <a href="#ssl.protocol">ssl.protocol</a>
+</h4>
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -230,7 +299,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.provider" href="#ssl.provider">ssl.provider</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.provider"></a>
+   <a href="#ssl.provider">ssl.provider</a>
+</h4>
 <p>The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -240,7 +312,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.truststore.type" href="#ssl.truststore.type">ssl.truststore.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.truststore.type"></a>
+   <a href="#ssl.truststore.type">ssl.truststore.type</a>
+</h4>
 <p>The file format of the trust store file.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -250,7 +325,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metadata.max.age.ms" href="#metadata.max.age.ms">metadata.max.age.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metadata.max.age.ms"></a>
+   <a href="#metadata.max.age.ms">metadata.max.age.ms</a>
+</h4>
 <p>The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -260,7 +338,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metric.reporters" href="#metric.reporters">metric.reporters</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metric.reporters"></a>
+   <a href="#metric.reporters">metric.reporters</a>
+</h4>
 <p>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -270,7 +351,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.num.samples" href="#metrics.num.samples">metrics.num.samples</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.num.samples"></a>
+   <a href="#metrics.num.samples">metrics.num.samples</a>
+</h4>
 <p>The number of samples maintained to compute metrics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -280,7 +364,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.recording.level" href="#metrics.recording.level">metrics.recording.level</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.recording.level"></a>
+   <a href="#metrics.recording.level">metrics.recording.level</a>
+</h4>
 <p>The highest recording level for metrics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -290,7 +377,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.sample.window.ms" href="#metrics.sample.window.ms">metrics.sample.window.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.sample.window.ms"></a>
+   <a href="#metrics.sample.window.ms">metrics.sample.window.ms</a>
+</h4>
 <p>The window of time a metrics sample is computed over.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -300,7 +390,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="reconnect.backoff.max.ms" href="#reconnect.backoff.max.ms">reconnect.backoff.max.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="reconnect.backoff.max.ms"></a>
+   <a href="#reconnect.backoff.max.ms">reconnect.backoff.max.ms</a>
+</h4>
 <p>The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -310,7 +403,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="reconnect.backoff.ms" href="#reconnect.backoff.ms">reconnect.backoff.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="reconnect.backoff.ms"></a>
+   <a href="#reconnect.backoff.ms">reconnect.backoff.ms</a>
+</h4>
 <p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -330,7 +426,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="retry.backoff.ms" href="#retry.backoff.ms">retry.backoff.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="retry.backoff.ms"></a>
+   <a href="#retry.backoff.ms">retry.backoff.ms</a>
+</h4>
 <p>The amount of time to wait before attempting to retry a failed request. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -340,7 +439,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.kinit.cmd" href="#sasl.kerberos.kinit.cmd">sasl.kerberos.kinit.cmd</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.kinit.cmd"></a>
+   <a href="#sasl.kerberos.kinit.cmd">sasl.kerberos.kinit.cmd</a>
+</h4>
 <p>Kerberos kinit command path.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -350,7 +452,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.min.time.before.relogin" href="#sasl.kerberos.min.time.before.relogin">sasl.kerberos.min.time.before.relogin</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.min.time.before.relogin"></a>
+   <a href="#sasl.kerberos.min.time.before.relogin">sasl.kerberos.min.time.before.relogin</a>
+</h4>
 <p>Login thread sleep time between refresh attempts.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -360,7 +465,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.ticket.renew.jitter" href="#sasl.kerberos.ticket.renew.jitter">sasl.kerberos.ticket.renew.jitter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.jitter"></a>
+   <a href="#sasl.kerberos.ticket.renew.jitter">sasl.kerberos.ticket.renew.jitter</a>
+</h4>
 <p>Percentage of random jitter added to the renewal time.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -370,7 +478,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.ticket.renew.window.factor" href="#sasl.kerberos.ticket.renew.window.factor">sasl.kerberos.ticket.renew.window.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.window.factor"></a>
+   <a href="#sasl.kerberos.ticket.renew.window.factor">sasl.kerberos.ticket.renew.window.factor</a>
+</h4>
 <p>Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -380,7 +491,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.buffer.seconds" href="#sasl.login.refresh.buffer.seconds">sasl.login.refresh.buffer.seconds</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.buffer.seconds"></a>
+   <a href="#sasl.login.refresh.buffer.seconds">sasl.login.refresh.buffer.seconds</a>
+</h4>
 <p>The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of  300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -390,7 +504,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.min.period.seconds" href="#sasl.login.refresh.min.period.seconds">sasl.login.refresh.min.period.seconds</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.min.period.seconds"></a>
+   <a href="#sasl.login.refresh.min.period.seconds">sasl.login.refresh.min.period.seconds</a>
+</h4>
 <p>The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified.  This value and  sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -400,7 +517,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.window.factor" href="#sasl.login.refresh.window.factor">sasl.login.refresh.window.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.window.factor"></a>
+   <a href="#sasl.login.refresh.window.factor">sasl.login.refresh.window.factor</a>
+</h4>
 <p>Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -410,7 +530,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.window.jitter" href="#sasl.login.refresh.window.jitter">sasl.login.refresh.window.jitter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.window.jitter"></a>
+   <a href="#sasl.login.refresh.window.jitter">sasl.login.refresh.window.jitter</a>
+</h4>
 <p>The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -420,7 +543,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="security.providers" href="#security.providers">security.providers</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="security.providers"></a>
+   <a href="#security.providers">security.providers</a>
+</h4>
 <p>A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the <code>org.apache.kafka.common.security.auth.SecurityProviderCreator</code> interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -430,7 +556,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.cipher.suites" href="#ssl.cipher.suites">ssl.cipher.suites</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.cipher.suites"></a>
+   <a href="#ssl.cipher.suites">ssl.cipher.suites</a>
+</h4>
 <p>A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -440,7 +569,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.endpoint.identification.algorithm" href="#ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.endpoint.identification.algorithm"></a>
+   <a href="#ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a>
+</h4>
 <p>The endpoint identification algorithm to validate server hostname using server certificate. </p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -450,7 +582,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.engine.factory.class" href="#ssl.engine.factory.class">ssl.engine.factory.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.engine.factory.class"></a>
+   <a href="#ssl.engine.factory.class">ssl.engine.factory.class</a>
+</h4>
 <p>The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -460,7 +595,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keymanager.algorithm" href="#ssl.keymanager.algorithm">ssl.keymanager.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keymanager.algorithm"></a>
+   <a href="#ssl.keymanager.algorithm">ssl.keymanager.algorithm</a>
+</h4>
 <p>The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -470,7 +608,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.secure.random.implementation" href="#ssl.secure.random.implementation">ssl.secure.random.implementation</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.secure.random.implementation"></a>
+   <a href="#ssl.secure.random.implementation">ssl.secure.random.implementation</a>
+</h4>
 <p>The SecureRandom PRNG implementation to use for SSL cryptography operations. </p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -480,7 +621,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.trustmanager.algorithm" href="#ssl.trustmanager.algorithm">ssl.trustmanager.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.trustmanager.algorithm"></a>
+   <a href="#ssl.trustmanager.algorithm">ssl.trustmanager.algorithm</a>
+</h4>
 <p>The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
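
That closes out 26/generated/admin_client_config.html. For orientation, every key on that page is an ordinary property passed to `Admin.create`. Here is a minimal sketch wiring several of them together, assuming a SASL_SSL listener with SCRAM-SHA-256; the broker hostnames, truststore path, and credentials are placeholders:

```java
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class AdminClientSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // bootstrap.servers only seeds discovery; the client learns the full cluster from these hosts.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9093,broker2:9093");
        // security.protocol: SASL authentication over TLS.
        props.put(AdminClientConfig.SECURITY_PROTOCOL_CONFIG, "SASL_SSL");
        props.put("sasl.mechanism", "SCRAM-SHA-256");
        props.put("sasl.jaas.config",
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"admin\" password=\"admin-secret\";");
        // ssl.truststore.*: used to verify the broker certificate (placeholder path/password).
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        // request.timeout.ms: per-request cap before retry or failure, as documented above.
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, "30000");

        try (Admin admin = Admin.create(props)) {
            // Smoke test: list the brokers the admin client can see.
            admin.describeCluster().nodes().get()
                 .forEach(node -> System.out.println(node.idString() + " @ " + node.host()));
        }
    }
}
```
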
diff --git a/26/generated/connect_config.html b/26/generated/connect_config.html
index 1d1f0ad..bf19b6f 100644
--- a/26/generated/connect_config.html
+++ b/26/generated/connect_config.html
@@ -1,6 +1,9 @@
 <ul class="config-list">
 <li>
-<h4><a id="config.storage.topic" href="#config.storage.topic">config.storage.topic</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="config.storage.topic"></a>
+   <a href="#config.storage.topic">config.storage.topic</a>
+</h4>
 <p>The name of the Kafka topic where connector configurations are stored</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -10,7 +13,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="group.id" href="#group.id">group.id</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="group.id"></a>
+   <a href="#group.id">group.id</a>
+</h4>
 <p>A unique string that identifies the Connect cluster group this worker belongs to.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -20,7 +26,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="key.converter" href="#key.converter">key.converter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="key.converter"></a>
+   <a href="#key.converter">key.converter</a>
+</h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -30,7 +39,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="offset.storage.topic" href="#offset.storage.topic">offset.storage.topic</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="offset.storage.topic"></a>
+   <a href="#offset.storage.topic">offset.storage.topic</a>
+</h4>
 <p>The name of the Kafka topic where connector offsets are stored</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -40,7 +52,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="status.storage.topic" href="#status.storage.topic">status.storage.topic</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="status.storage.topic"></a>
+   <a href="#status.storage.topic">status.storage.topic</a>
+</h4>
 <p>The name of the Kafka topic where connector and task status are stored</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -50,7 +65,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="value.converter" href="#value.converter">value.converter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="value.converter"></a>
+   <a href="#value.converter">value.converter</a>
+</h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -60,7 +78,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="bootstrap.servers" href="#bootstrap.servers">bootstrap.servers</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="bootstrap.servers"></a>
+   <a href="#bootstrap.servers">bootstrap.servers</a>
+</h4>
 <p>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -70,7 +91,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="heartbeat.interval.ms" href="#heartbeat.interval.ms">heartbeat.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="heartbeat.interval.ms"></a>
+   <a href="#heartbeat.interval.ms">heartbeat.interval.ms</a>
+</h4>
 <p>The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than <code>session.timeout.ms</code>, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -80,7 +104,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="rebalance.timeout.ms" href="#rebalance.timeout.ms">rebalance.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="rebalance.timeout.ms"></a>
+   <a href="#rebalance.timeout.ms">rebalance.timeout.ms</a>
+</h4>
 <p>The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout is exceeded, then the worker will be removed from the group, which will cause offset commit failures.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -90,7 +117,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="session.timeout.ms" href="#session.timeout.ms">session.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="session.timeout.ms"></a>
+   <a href="#session.timeout.ms">session.timeout.ms</a>
+</h4>
 <p>The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by <code>group.min.session.timeout.ms</code> and <code>group.max.session.timeout.ms</code>.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -100,7 +130,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.key.password" href="#ssl.key.password">ssl.key.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.key.password"></a>
+   <a href="#ssl.key.password">ssl.key.password</a>
+</h4>
 <p>The password of the private key in the key store file. This is optional for client.</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -110,7 +143,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keystore.location" href="#ssl.keystore.location">ssl.keystore.location</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keystore.location"></a>
+   <a href="#ssl.keystore.location">ssl.keystore.location</a>
+</h4>
 <p>The location of the key store file. This is optional for client and can be used for two-way authentication for client.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -120,7 +156,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keystore.password" href="#ssl.keystore.password">ssl.keystore.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keystore.password"></a>
+   <a href="#ssl.keystore.password">ssl.keystore.password</a>
+</h4>
 <p>The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. </p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -130,7 +169,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.truststore.location" href="#ssl.truststore.location">ssl.truststore.location</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.truststore.location"></a>
+   <a href="#ssl.truststore.location">ssl.truststore.location</a>
+</h4>
 <p>The location of the trust store file. </p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -140,7 +182,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.truststore.password" href="#ssl.truststore.password">ssl.truststore.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.truststore.password"></a>
+   <a href="#ssl.truststore.password">ssl.truststore.password</a>
+</h4>
 <p>The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -150,7 +195,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="client.dns.lookup" href="#client.dns.lookup">client.dns.lookup</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="client.dns.lookup"></a>
+   <a href="#client.dns.lookup">client.dns.lookup</a>
+</h4>
 <p>Controls how the client uses DNS lookups. If set to <code>use_all_dns_ips</code>, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (note, however, that both the JVM and the OS cache DNS name lookups). If set to <code>resolve_canonical_bootstrap_servers_only</code>, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as <code>use_all_dns_ips</code>. If set to <code>default</code> (deprecated), attempt to connect to the first IP address returned by the lookup, even if the lookup returns multiple IP addresses.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -160,7 +208,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="connections.max.idle.ms" href="#connections.max.idle.ms">connections.max.idle.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="connections.max.idle.ms"></a>
+   <a href="#connections.max.idle.ms">connections.max.idle.ms</a>
+</h4>
 <p>Close idle connections after the number of milliseconds specified by this config.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -170,7 +221,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="connector.client.config.override.policy" href="#connector.client.config.override.policy">connector.client.config.override.policy</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="connector.client.config.override.policy"></a>
+   <a href="#connector.client.config.override.policy">connector.client.config.override.policy</a>
+</h4>
 <p>Class name or alias of implementation of <code>ConnectorClientConfigOverridePolicy</code>. Defines what client configurations can be overridden by the connector. The default implementation is `None`. The other possible policies in the framework include `All` and `Principal`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -180,7 +234,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="receive.buffer.bytes" href="#receive.buffer.bytes">receive.buffer.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="receive.buffer.bytes"></a>
+   <a href="#receive.buffer.bytes">receive.buffer.bytes</a>
+</h4>
 <p>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -190,7 +247,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="request.timeout.ms" href="#request.timeout.ms">request.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="request.timeout.ms"></a>
+   <a href="#request.timeout.ms">request.timeout.ms</a>
+</h4>
 <p>The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -200,7 +260,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.client.callback.handler.class" href="#sasl.client.callback.handler.class">sasl.client.callback.handler.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.client.callback.handler.class"></a>
+   <a href="#sasl.client.callback.handler.class">sasl.client.callback.handler.class</a>
+</h4>
 <p>The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -210,7 +273,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.jaas.config" href="#sasl.jaas.config">sasl.jaas.config</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.jaas.config"></a>
+   <a href="#sasl.jaas.config">sasl.jaas.config</a>
+</h4>
 <p>JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html">here</a>. The format for the value is: '<code>loginModuleClass controlFlag (optionName=optionValue)*;</code>'. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -220,7 +286,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.service.name" href="#sasl.kerberos.service.name">sasl.kerberos.service.name</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.service.name"></a>
+   <a href="#sasl.kerberos.service.name">sasl.kerberos.service.name</a>
+</h4>
 <p>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -230,7 +299,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.callback.handler.class" href="#sasl.login.callback.handler.class">sasl.login.callback.handler.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.callback.handler.class"></a>
+   <a href="#sasl.login.callback.handler.class">sasl.login.callback.handler.class</a>
+</h4>
 <p>The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -240,7 +312,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.class" href="#sasl.login.class">sasl.login.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.class"></a>
+   <a href="#sasl.login.class">sasl.login.class</a>
+</h4>
 <p>The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -250,7 +325,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.mechanism" href="#sasl.mechanism">sasl.mechanism</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.mechanism"></a>
+   <a href="#sasl.mechanism">sasl.mechanism</a>
+</h4>
 <p>SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -260,7 +338,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="security.protocol" href="#security.protocol">security.protocol</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="security.protocol"></a>
+   <a href="#security.protocol">security.protocol</a>
+</h4>
 <p>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -270,7 +351,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="send.buffer.bytes" href="#send.buffer.bytes">send.buffer.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="send.buffer.bytes"></a>
+   <a href="#send.buffer.bytes">send.buffer.bytes</a>
+</h4>
 <p>The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -280,7 +364,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.enabled.protocols" href="#ssl.enabled.protocols">ssl.enabled.protocols</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.enabled.protocols"></a>
+   <a href="#ssl.enabled.protocols">ssl.enabled.protocols</a>
+</h4>
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -290,7 +377,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keystore.type" href="#ssl.keystore.type">ssl.keystore.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keystore.type"></a>
+   <a href="#ssl.keystore.type">ssl.keystore.type</a>
+</h4>
 <p>The file format of the key store file. This is optional for client.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -300,7 +390,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.protocol" href="#ssl.protocol">ssl.protocol</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.protocol"></a>
+   <a href="#ssl.protocol">ssl.protocol</a>
+</h4>
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -310,7 +403,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.provider" href="#ssl.provider">ssl.provider</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.provider"></a>
+   <a href="#ssl.provider">ssl.provider</a>
+</h4>
 <p>The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -320,7 +416,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.truststore.type" href="#ssl.truststore.type">ssl.truststore.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.truststore.type"></a>
+   <a href="#ssl.truststore.type">ssl.truststore.type</a>
+</h4>
 <p>The file format of the trust store file.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -330,7 +429,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="worker.sync.timeout.ms" href="#worker.sync.timeout.ms">worker.sync.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="worker.sync.timeout.ms"></a>
+   <a href="#worker.sync.timeout.ms">worker.sync.timeout.ms</a>
+</h4>
 <p>When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -340,7 +442,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="worker.unsync.backoff.ms" href="#worker.unsync.backoff.ms">worker.unsync.backoff.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="worker.unsync.backoff.ms"></a>
+   <a href="#worker.unsync.backoff.ms">worker.unsync.backoff.ms</a>
+</h4>
 <p>When the worker is out of sync with other workers and  fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -350,7 +455,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="access.control.allow.methods" href="#access.control.allow.methods">access.control.allow.methods</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="access.control.allow.methods"></a>
+   <a href="#access.control.allow.methods">access.control.allow.methods</a>
+</h4>
 <p>Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -360,7 +468,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="access.control.allow.origin" href="#access.control.allow.origin">access.control.allow.origin</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="access.control.allow.origin"></a>
+   <a href="#access.control.allow.origin">access.control.allow.origin</a>
+</h4>
 <p>Value to set the Access-Control-Allow-Origin header to for REST API requests. To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -370,7 +481,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="admin.listeners" href="#admin.listeners">admin.listeners</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="admin.listeners"></a>
+   <a href="#admin.listeners">admin.listeners</a>
+</h4>
 <p>List of comma-separated URIs the Admin REST API will listen on. The supported protocols are HTTP and HTTPS. An empty or blank string will disable this feature. The default behavior is to use the regular listener (specified by the 'listeners' property).</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -380,7 +494,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="client.id" href="#client.id">client.id</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="client.id"></a>
+   <a href="#client.id">client.id</a>
+</h4>
 <p>An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -390,7 +507,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="config.providers" href="#config.providers">config.providers</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="config.providers"></a>
+   <a href="#config.providers">config.providers</a>
+</h4>
 <p>Comma-separated names of <code>ConfigProvider</code> classes, loaded and used in the order specified. Implementing the interface  <code>ConfigProvider</code> allows you to replace variable references in connector configurations, such as for externalized secrets. </p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -400,7 +520,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="config.storage.replication.factor" href="#config.storage.replication.factor">config.storage.replication.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="config.storage.replication.factor"></a>
+   <a href="#config.storage.replication.factor">config.storage.replication.factor</a>
+</h4>
 <p>Replication factor used when creating the configuration storage topic</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -410,7 +533,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="connect.protocol" href="#connect.protocol">connect.protocol</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="connect.protocol"></a>
+   <a href="#connect.protocol">connect.protocol</a>
+</h4>
 <p>Compatibility mode for Kafka Connect Protocol</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -420,7 +546,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="header.converter" href="#header.converter">header.converter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="header.converter"></a>
+   <a href="#header.converter">header.converter</a>
+</h4>
 <p>HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -430,7 +559,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="inter.worker.key.generation.algorithm" href="#inter.worker.key.generation.algorithm">inter.worker.key.generation.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="inter.worker.key.generation.algorithm"></a>
+   <a href="#inter.worker.key.generation.algorithm">inter.worker.key.generation.algorithm</a>
+</h4>
 <p>The algorithm to use for generating internal request keys</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -440,7 +572,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="inter.worker.key.size" href="#inter.worker.key.size">inter.worker.key.size</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="inter.worker.key.size"></a>
+   <a href="#inter.worker.key.size">inter.worker.key.size</a>
+</h4>
 <p>The size of the key to use for signing internal requests, in bits. If null, the default key size for the key generation algorithm will be used.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -450,7 +585,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="inter.worker.key.ttl.ms" href="#inter.worker.key.ttl.ms">inter.worker.key.ttl.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="inter.worker.key.ttl.ms"></a>
+   <a href="#inter.worker.key.ttl.ms">inter.worker.key.ttl.ms</a>
+</h4>
 <p>The TTL of generated session keys used for internal request validation (in milliseconds)</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -460,7 +598,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="inter.worker.signature.algorithm" href="#inter.worker.signature.algorithm">inter.worker.signature.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="inter.worker.signature.algorithm"></a>
+   <a href="#inter.worker.signature.algorithm">inter.worker.signature.algorithm</a>
+</h4>
 <p>The algorithm used to sign internal requests</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -470,7 +611,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="inter.worker.verification.algorithms" href="#inter.worker.verification.algorithms">inter.worker.verification.algorithms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="inter.worker.verification.algorithms"></a>
+   <a href="#inter.worker.verification.algorithms">inter.worker.verification.algorithms</a>
+</h4>
 <p>A list of permitted algorithms for verifying internal requests</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -480,7 +624,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="internal.key.converter" href="#internal.key.converter">internal.key.converter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="internal.key.converter"></a>
+   <a href="#internal.key.converter">internal.key.converter</a>
+</h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation. Deprecated; will be removed in an upcoming version.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -490,7 +637,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="internal.value.converter" href="#internal.value.converter">internal.value.converter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="internal.value.converter"></a>
+   <a href="#internal.value.converter">internal.value.converter</a>
+</h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation. Deprecated; will be removed in an upcoming version.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -510,7 +660,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metadata.max.age.ms" href="#metadata.max.age.ms">metadata.max.age.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metadata.max.age.ms"></a>
+   <a href="#metadata.max.age.ms">metadata.max.age.ms</a>
+</h4>
 <p>The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -520,7 +673,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metric.reporters" href="#metric.reporters">metric.reporters</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metric.reporters"></a>
+   <a href="#metric.reporters">metric.reporters</a>
+</h4>
 <p>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -530,7 +686,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.num.samples" href="#metrics.num.samples">metrics.num.samples</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.num.samples"></a>
+   <a href="#metrics.num.samples">metrics.num.samples</a>
+</h4>
 <p>The number of samples maintained to compute metrics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -540,7 +699,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.recording.level" href="#metrics.recording.level">metrics.recording.level</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.recording.level"></a>
+   <a href="#metrics.recording.level">metrics.recording.level</a>
+</h4>
 <p>The highest recording level for metrics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -550,7 +712,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.sample.window.ms" href="#metrics.sample.window.ms">metrics.sample.window.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.sample.window.ms"></a>
+   <a href="#metrics.sample.window.ms">metrics.sample.window.ms</a>
+</h4>
 <p>The window of time a metrics sample is computed over.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -560,7 +725,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="offset.flush.interval.ms" href="#offset.flush.interval.ms">offset.flush.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="offset.flush.interval.ms"></a>
+   <a href="#offset.flush.interval.ms">offset.flush.interval.ms</a>
+</h4>
 <p>Interval at which to try committing offsets for tasks.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -570,7 +738,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="offset.flush.timeout.ms" href="#offset.flush.timeout.ms">offset.flush.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="offset.flush.timeout.ms"></a>
+   <a href="#offset.flush.timeout.ms">offset.flush.timeout.ms</a>
+</h4>
 <p>Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -580,7 +751,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="offset.storage.partitions" href="#offset.storage.partitions">offset.storage.partitions</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="offset.storage.partitions"></a>
+   <a href="#offset.storage.partitions">offset.storage.partitions</a>
+</h4>
 <p>The number of partitions used when creating the offset storage topic.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -590,7 +764,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="offset.storage.replication.factor" href="#offset.storage.replication.factor">offset.storage.replication.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="offset.storage.replication.factor"></a>
+   <a href="#offset.storage.replication.factor">offset.storage.replication.factor</a>
+</h4>
 <p>Replication factor used when creating the offset storage topic.</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -600,7 +777,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="plugin.path" href="#plugin.path">plugin.path</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="plugin.path"></a>
+   <a href="#plugin.path">plugin.path</a>
+</h4>
 <p>List of paths separated by commas (,) that contain plugins (connectors, converters, transformations). The list should consist of top level directories that include any combination of: <br>a) directories immediately containing jars with plugins and their dependencies<br>b) uber-jars with plugins and their dependencies<br>c) directories immediately containing the package directory structure of classes of plugins and their dependencies<br>Note: symlinks will be followed to discover dependencies or plugins.<br>Examples: plugin.path=/usr/local/share/java,/usr/local/share/kafka/plugins,/opt/connectors<br>Do not use config provider variables in this property, since the raw path is used by the worker's scanner before config providers are initialized and used to replace variables.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -610,7 +790,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="reconnect.backoff.max.ms" href="#reconnect.backoff.max.ms">reconnect.backoff.max.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="reconnect.backoff.max.ms"></a>
+   <a href="#reconnect.backoff.max.ms">reconnect.backoff.max.ms</a>
+</h4>
 <p>The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -620,7 +803,10 @@
 </tbody></table>
 </li>
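+<p>An illustrative sketch of the documented backoff policy (this is not Kafka's internal implementation, and the exact jitter distribution is an assumption):</p>
+<pre><code>// Exponential growth from reconnect.backoff.ms, capped at
+// reconnect.backoff.max.ms, with roughly 20% random jitter on top.
+static long backoffMs(int consecutiveFailures) {
+    long base = 50L;   // reconnect.backoff.ms default
+    long max  = 1000L; // reconnect.backoff.max.ms default
+    double capped = Math.min(base * Math.pow(2, consecutiveFailures), max);
+    return (long) (capped * (0.8 + Math.random() * 0.4)); // assume +/-20% jitter
+}</code></pre>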
 <li>
-<h4><a id="reconnect.backoff.ms" href="#reconnect.backoff.ms">reconnect.backoff.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="reconnect.backoff.ms"></a>
+   <a href="#reconnect.backoff.ms">reconnect.backoff.ms</a>
+</h4>
 <p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -630,7 +816,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="response.http.headers.config" href="#response.http.headers.config">response.http.headers.config</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="response.http.headers.config"></a>
+   <a href="#response.http.headers.config">response.http.headers.config</a>
+</h4>
 <p>Rules for REST API HTTP response headers.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -640,7 +829,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="rest.advertised.host.name" href="#rest.advertised.host.name">rest.advertised.host.name</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="rest.advertised.host.name"></a>
+   <a href="#rest.advertised.host.name">rest.advertised.host.name</a>
+</h4>
 <p>If this is set, this is the hostname that will be given out to other workers to connect to.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -650,7 +842,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="rest.advertised.listener" href="#rest.advertised.listener">rest.advertised.listener</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="rest.advertised.listener"></a>
+   <a href="#rest.advertised.listener">rest.advertised.listener</a>
+</h4>
 <p>Sets the advertised listener (HTTP or HTTPS) which will be given to other workers to use.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -660,7 +855,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="rest.advertised.port" href="#rest.advertised.port">rest.advertised.port</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="rest.advertised.port"></a>
+   <a href="#rest.advertised.port">rest.advertised.port</a>
+</h4>
 <p>If this is set, this is the port that will be given out to other workers to connect to.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -670,7 +868,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="rest.extension.classes" href="#rest.extension.classes">rest.extension.classes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="rest.extension.classes"></a>
+   <a href="#rest.extension.classes">rest.extension.classes</a>
+</h4>
 <p>Comma-separated names of <code>ConnectRestExtension</code> classes, loaded and called in the order specified. Implementing the <code>ConnectRestExtension</code> interface allows you to inject user-defined resources, such as filters, into Connect's REST API. Typically used to add custom capabilities like logging, security, etc.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -680,7 +881,10 @@
 </tbody></table>
 </li>
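+<p>For illustration, a minimal sketch of such an extension (the class name and behavior are hypothetical; the <code>ConnectRestExtension</code> interface and its methods are from the Connect API):</p>
+<pre><code>package com.example;
+
+import java.util.Map;
+import org.apache.kafka.connect.rest.ConnectRestExtension;
+import org.apache.kafka.connect.rest.ConnectRestExtensionContext;
+
+public class LoggingRestExtension implements ConnectRestExtension {
+    @Override public void register(ConnectRestExtensionContext context) {
+        // Register custom JAX-RS resources or filters with the REST API here.
+    }
+    @Override public void configure(Map&lt;String, ?&gt; configs) { }
+    @Override public void close() { }
+    @Override public String version() { return "1.0"; }
+}</code></pre>
+<p>It would then be enabled with <code>rest.extension.classes=com.example.LoggingRestExtension</code> in the worker config.</p>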
 <li>
-<h4><a id="rest.host.name" href="#rest.host.name">rest.host.name</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="rest.host.name"></a>
+   <a href="#rest.host.name">rest.host.name</a>
+</h4>
 <p>Hostname for the REST API. If this is set, it will only bind to this interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -690,7 +894,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="rest.port" href="#rest.port">rest.port</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="rest.port"></a>
+   <a href="#rest.port">rest.port</a>
+</h4>
 <p>Port for the REST API to listen on.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -700,7 +907,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="retry.backoff.ms" href="#retry.backoff.ms">retry.backoff.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="retry.backoff.ms"></a>
+   <a href="#retry.backoff.ms">retry.backoff.ms</a>
+</h4>
 <p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -710,7 +920,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.kinit.cmd" href="#sasl.kerberos.kinit.cmd">sasl.kerberos.kinit.cmd</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.kinit.cmd"></a>
+   <a href="#sasl.kerberos.kinit.cmd">sasl.kerberos.kinit.cmd</a>
+</h4>
 <p>Kerberos kinit command path.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -720,7 +933,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.min.time.before.relogin" href="#sasl.kerberos.min.time.before.relogin">sasl.kerberos.min.time.before.relogin</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.min.time.before.relogin"></a>
+   <a href="#sasl.kerberos.min.time.before.relogin">sasl.kerberos.min.time.before.relogin</a>
+</h4>
 <p>Login thread sleep time between refresh attempts.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -730,7 +946,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.ticket.renew.jitter" href="#sasl.kerberos.ticket.renew.jitter">sasl.kerberos.ticket.renew.jitter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.jitter"></a>
+   <a href="#sasl.kerberos.ticket.renew.jitter">sasl.kerberos.ticket.renew.jitter</a>
+</h4>
 <p>Percentage of random jitter added to the renewal time.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -740,7 +959,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.ticket.renew.window.factor" href="#sasl.kerberos.ticket.renew.window.factor">sasl.kerberos.ticket.renew.window.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.window.factor"></a>
+   <a href="#sasl.kerberos.ticket.renew.window.factor">sasl.kerberos.ticket.renew.window.factor</a>
+</h4>
 <p>Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -750,7 +972,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.buffer.seconds" href="#sasl.login.refresh.buffer.seconds">sasl.login.refresh.buffer.seconds</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.buffer.seconds"></a>
+   <a href="#sasl.login.refresh.buffer.seconds">sasl.login.refresh.buffer.seconds</a>
+</h4>
 <p>The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -760,7 +985,10 @@
 </tbody></table>
 </li>
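+<p>A worked example of the buffer rule (numbers are illustrative; <code>sasl.login.refresh.min.period.seconds</code> is ignored for simplicity):</p>
+<pre><code>long lifetimeSecs = 600, plannedRefreshSecs = 560, bufferSecs = 300;
+// 600 - 560 = 40 s to expiry is less than the 300 s buffer, so the
+// refresh is moved up to preserve as much of the buffer as possible:
+long refreshAtSecs = Math.min(plannedRefreshSecs, lifetimeSecs - bufferSecs); // 300</code></pre>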
 <li>
-<h4><a id="sasl.login.refresh.min.period.seconds" href="#sasl.login.refresh.min.period.seconds">sasl.login.refresh.min.period.seconds</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.min.period.seconds"></a>
+   <a href="#sasl.login.refresh.min.period.seconds">sasl.login.refresh.min.period.seconds</a>
+</h4>
 <p>The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -770,7 +998,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.window.factor" href="#sasl.login.refresh.window.factor">sasl.login.refresh.window.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.window.factor"></a>
+   <a href="#sasl.login.refresh.window.factor">sasl.login.refresh.window.factor</a>
+</h4>
 <p>Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -780,7 +1011,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.window.jitter" href="#sasl.login.refresh.window.jitter">sasl.login.refresh.window.jitter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.window.jitter"></a>
+   <a href="#sasl.login.refresh.window.jitter">sasl.login.refresh.window.jitter</a>
+</h4>
 <p>The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -790,7 +1024,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="scheduled.rebalance.max.delay.ms" href="#scheduled.rebalance.max.delay.ms">scheduled.rebalance.max.delay.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="scheduled.rebalance.max.delay.ms"></a>
+   <a href="#scheduled.rebalance.max.delay.ms">scheduled.rebalance.max.delay.ms</a>
+</h4>
 <p>The maximum delay that is scheduled in order to wait for the return of one or more departed workers before rebalancing and reassigning their connectors and tasks to the group. During this period the connectors and tasks of the departed workers remain unassigned.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -800,7 +1037,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.cipher.suites" href="#ssl.cipher.suites">ssl.cipher.suites</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.cipher.suites"></a>
+   <a href="#ssl.cipher.suites">ssl.cipher.suites</a>
+</h4>
 <p>A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -810,7 +1050,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.client.auth" href="#ssl.client.auth">ssl.client.auth</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.client.auth"></a>
+   <a href="#ssl.client.auth">ssl.client.auth</a>
+</h4>
 <p>Configures the Kafka broker to request client authentication. The following settings are common: <ul><li><code>ssl.client.auth=required</code> Client authentication is required.</li><li><code>ssl.client.auth=requested</code> Client authentication is optional. Unlike <code>required</code>, with this option the client can choose not to provide authentication information about itself.</li><li><code>ssl.client.auth=none</code> Client authentication is not needed.</li></ul></p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -820,7 +1063,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.endpoint.identification.algorithm" href="#ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.endpoint.identification.algorithm"></a>
+   <a href="#ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a>
+</h4>
 <p>The endpoint identification algorithm to validate the server hostname using the server certificate.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -830,7 +1076,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.engine.factory.class" href="#ssl.engine.factory.class">ssl.engine.factory.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.engine.factory.class"></a>
+   <a href="#ssl.engine.factory.class">ssl.engine.factory.class</a>
+</h4>
 <p>The class of type <code>org.apache.kafka.common.security.auth.SslEngineFactory</code> to provide SSLEngine objects. Default value is <code>org.apache.kafka.common.security.ssl.DefaultSslEngineFactory</code>.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -840,7 +1089,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keymanager.algorithm" href="#ssl.keymanager.algorithm">ssl.keymanager.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keymanager.algorithm"></a>
+   <a href="#ssl.keymanager.algorithm">ssl.keymanager.algorithm</a>
+</h4>
 <p>The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -850,7 +1102,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.secure.random.implementation" href="#ssl.secure.random.implementation">ssl.secure.random.implementation</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.secure.random.implementation"></a>
+   <a href="#ssl.secure.random.implementation">ssl.secure.random.implementation</a>
+</h4>
 <p>The SecureRandom PRNG implementation to use for SSL cryptography operations. </p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -860,7 +1115,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.trustmanager.algorithm" href="#ssl.trustmanager.algorithm">ssl.trustmanager.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.trustmanager.algorithm"></a>
+   <a href="#ssl.trustmanager.algorithm">ssl.trustmanager.algorithm</a>
+</h4>
 <p>The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -870,7 +1128,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="status.storage.partitions" href="#status.storage.partitions">status.storage.partitions</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="status.storage.partitions"></a>
+   <a href="#status.storage.partitions">status.storage.partitions</a>
+</h4>
 <p>The number of partitions used when creating the status storage topic.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -880,7 +1141,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="status.storage.replication.factor" href="#status.storage.replication.factor">status.storage.replication.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="status.storage.replication.factor"></a>
+   <a href="#status.storage.replication.factor">status.storage.replication.factor</a>
+</h4>
 <p>Replication factor used when creating the status storage topic.</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -890,7 +1154,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="task.shutdown.graceful.timeout.ms" href="#task.shutdown.graceful.timeout.ms">task.shutdown.graceful.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="task.shutdown.graceful.timeout.ms"></a>
+   <a href="#task.shutdown.graceful.timeout.ms">task.shutdown.graceful.timeout.ms</a>
+</h4>
 <p>Amount of time to wait for tasks to shut down gracefully. This is the total amount of time, not per task. Shutdown is triggered for all tasks, and then they are waited on sequentially.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -900,7 +1167,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="topic.creation.enable" href="#topic.creation.enable">topic.creation.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="topic.creation.enable"></a>
+   <a href="#topic.creation.enable">topic.creation.enable</a>
+</h4>
 <p>Whether to allow automatic creation of topics used by source connectors, when source connectors are configured with `topic.creation.` properties. Each task will use an admin client to create its topics and will not depend on the Kafka brokers to create topics automatically.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -910,7 +1180,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="topic.tracking.allow.reset" href="#topic.tracking.allow.reset">topic.tracking.allow.reset</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="topic.tracking.allow.reset"></a>
+   <a href="#topic.tracking.allow.reset">topic.tracking.allow.reset</a>
+</h4>
 <p>If set to true, it allows user requests to reset the set of active topics per connector.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -920,7 +1193,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="topic.tracking.enable" href="#topic.tracking.enable">topic.tracking.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="topic.tracking.enable"></a>
+   <a href="#topic.tracking.enable">topic.tracking.enable</a>
+</h4>
 <p>Enable tracking the set of active topics per connector during runtime.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
diff --git a/26/generated/consumer_config.html b/26/generated/consumer_config.html
index 2ef6030..bcb349e 100644
--- a/26/generated/consumer_config.html
+++ b/26/generated/consumer_config.html
@@ -1,6 +1,9 @@
 <ul class="config-list">
 <li>
-<h4><a id="key.deserializer" href="#key.deserializer">key.deserializer</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="key.deserializer"></a>
+   <a href="#key.deserializer">key.deserializer</a>
+</h4>
 <p>Deserializer class for key that implements the <code>org.apache.kafka.common.serialization.Deserializer</code> interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -10,7 +13,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="value.deserializer" href="#value.deserializer">value.deserializer</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="value.deserializer"></a>
+   <a href="#value.deserializer">value.deserializer</a>
+</h4>
 <p>Deserializer class for value that implements the <code>org.apache.kafka.common.serialization.Deserializer</code> interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -20,7 +26,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="bootstrap.servers" href="#bootstrap.servers">bootstrap.servers</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="bootstrap.servers"></a>
+   <a href="#bootstrap.servers">bootstrap.servers</a>
+</h4>
 <p>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -30,7 +39,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="fetch.min.bytes" href="#fetch.min.bytes">fetch.min.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="fetch.min.bytes"></a>
+   <a href="#fetch.min.bytes">fetch.min.bytes</a>
+</h4>
 <p>The minimum amount of data the server should return for a fetch request. If insufficient data is available, the request will wait for that much data to accumulate before answering. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate, which can improve server throughput a bit at the cost of some additional latency.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -40,7 +52,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="group.id" href="#group.id">group.id</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="group.id"></a>
+   <a href="#group.id">group.id</a>
+</h4>
 <p>A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using <code>subscribe(topic)</code> or the Kafka-based offset management strategy.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -50,7 +65,10 @@
 </tbody></table>
 </li>
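+<p>A minimal sketch of a consumer that joins a group via <code>subscribe</code> (servers, group, and topic names are placeholders):</p>
+<pre><code>import java.util.Collections;
+import java.util.Properties;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+
+Properties props = new Properties();
+props.put("bootstrap.servers", "localhost:9092");
+props.put("group.id", "my-group"); // identifies the consumer group
+props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+KafkaConsumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;&gt;(props);
+consumer.subscribe(Collections.singletonList("my-topic")); // group management path</code></pre>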
 <li>
-<h4><a id="heartbeat.interval.ms" href="#heartbeat.interval.ms">heartbeat.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="heartbeat.interval.ms"></a>
+   <a href="#heartbeat.interval.ms">heartbeat.interval.ms</a>
+</h4>
 <p>The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than <code>session.timeout.ms</code>, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -60,7 +78,10 @@
 </tbody></table>
 </li>
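+<p>A worked example of the sizing rule above (values are illustrative, continuing the <code>props</code> sketch from <code>group.id</code>):</p>
+<pre><code>// With a 10 s session timeout, the broker evicts the consumer after 10 s
+// without a heartbeat; 1/3 of that gives a 3 s heartbeat interval, so a
+// couple of missed heartbeats are tolerated before the session expires.
+props.put("session.timeout.ms", "10000");
+props.put("heartbeat.interval.ms", "3000");</code></pre>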
 <li>
-<h4><a id="max.partition.fetch.bytes" href="#max.partition.fetch.bytes">max.partition.fetch.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="max.partition.fetch.bytes"></a>
+   <a href="#max.partition.fetch.bytes">max.partition.fetch.bytes</a>
+</h4>
 <p>The maximum amount of data per-partition the server will return. Records are fetched in batches by the consumer. If the first record batch in the first non-empty partition of the fetch is larger than this limit, the batch will still be returned to ensure that the consumer can make progress. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config). See fetch.max.bytes for limiting the consumer request size.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -70,7 +91,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="session.timeout.ms" href="#session.timeout.ms">session.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="session.timeout.ms"></a>
+   <a href="#session.timeout.ms">session.timeout.ms</a>
+</h4>
 <p>The timeout used to detect client failures when using Kafka's group management facility. The client sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this client from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by <code>group.min.session.timeout.ms</code> and <code>group.max.session.timeout.ms</code>.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -80,7 +104,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.key.password" href="#ssl.key.password">ssl.key.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.key.password"></a>
+   <a href="#ssl.key.password">ssl.key.password</a>
+</h4>
 <p>The password of the private key in the key store file. This is optional for client.</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -90,7 +117,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keystore.location" href="#ssl.keystore.location">ssl.keystore.location</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keystore.location"></a>
+   <a href="#ssl.keystore.location">ssl.keystore.location</a>
+</h4>
 <p>The location of the key store file. This is optional for client and can be used for two-way authentication for client.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -100,7 +130,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keystore.password" href="#ssl.keystore.password">ssl.keystore.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keystore.password"></a>
+   <a href="#ssl.keystore.password">ssl.keystore.password</a>
+</h4>
 <p>The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. </p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -110,7 +143,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.truststore.location" href="#ssl.truststore.location">ssl.truststore.location</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.truststore.location"></a>
+   <a href="#ssl.truststore.location">ssl.truststore.location</a>
+</h4>
 <p>The location of the trust store file. </p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -120,7 +156,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.truststore.password" href="#ssl.truststore.password">ssl.truststore.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.truststore.password"></a>
+   <a href="#ssl.truststore.password">ssl.truststore.password</a>
+</h4>
 <p>The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -130,7 +169,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="allow.auto.create.topics" href="#allow.auto.create.topics">allow.auto.create.topics</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="allow.auto.create.topics"></a>
+   <a href="#allow.auto.create.topics">allow.auto.create.topics</a>
+</h4>
 <p>Allow automatic topic creation on the broker when subscribing to or assigning a topic. A topic being subscribed to will be automatically created only if the broker allows for it using `auto.create.topics.enable` broker configuration. This configuration must be set to `false` when using brokers older than 0.11.0.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -140,7 +182,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="auto.offset.reset" href="#auto.offset.reset">auto.offset.reset</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="auto.offset.reset"></a>
+   <a href="#auto.offset.reset">auto.offset.reset</a>
+</h4>
 <p>What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted): <ul><li>earliest: automatically reset the offset to the earliest offset</li><li>latest: automatically reset the offset to the latest offset</li><li>none: throw exception to the consumer if no previous offset is found for the consumer's group</li><li>anything else: throw exception to the consumer.</li></ul></p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -150,7 +195,10 @@
 </tbody></table>
 </li>
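+<p>For example, a new consumer group that should start from the beginning of the log would set (continuing the <code>props</code> sketch above):</p>
+<pre><code>props.put("auto.offset.reset", "earliest"); // no committed offset yet: start at the earliest offset</code></pre>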
 <li>
-<h4><a id="client.dns.lookup" href="#client.dns.lookup">client.dns.lookup</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="client.dns.lookup"></a>
+   <a href="#client.dns.lookup">client.dns.lookup</a>
+</h4>
 <p>Controls how the client uses DNS lookups. If set to <code>use_all_dns_ips</code>, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to <code>resolve_canonical_bootstrap_servers_only</code>, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as <code>use_all_dns_ips</code>. If set to <code>default</code> (deprecated), attempt to connect to the first IP address returned by the lookup, even if the lookup returns multiple IP addresses.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -160,7 +208,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="connections.max.idle.ms" href="#connections.max.idle.ms">connections.max.idle.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="connections.max.idle.ms"></a>
+   <a href="#connections.max.idle.ms">connections.max.idle.ms</a>
+</h4>
 <p>Close idle connections after the number of milliseconds specified by this config.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -170,7 +221,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="default.api.timeout.ms" href="#default.api.timeout.ms">default.api.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="default.api.timeout.ms"></a>
+   <a href="#default.api.timeout.ms">default.api.timeout.ms</a>
+</h4>
 <p>Specifies the timeout (in milliseconds) for client APIs. This configuration is used as the default timeout for all client operations that do not specify a <code>timeout</code> parameter.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -180,7 +234,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="enable.auto.commit" href="#enable.auto.commit">enable.auto.commit</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="enable.auto.commit"></a>
+   <a href="#enable.auto.commit">enable.auto.commit</a>
+</h4>
 <p>If true the consumer's offset will be periodically committed in the background.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -190,7 +247,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="exclude.internal.topics" href="#exclude.internal.topics">exclude.internal.topics</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="exclude.internal.topics"></a>
+   <a href="#exclude.internal.topics">exclude.internal.topics</a>
+</h4>
 <p>Whether internal topics matching a subscribed pattern should be excluded from the subscription. It is always possible to explicitly subscribe to an internal topic.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -200,7 +260,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="fetch.max.bytes" href="#fetch.max.bytes">fetch.max.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="fetch.max.bytes"></a>
+   <a href="#fetch.max.bytes">fetch.max.bytes</a>
+</h4>
 <p>The maximum amount of data the server should return for a fetch request. Records are fetched in batches by the consumer, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that the consumer can make progress. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config). Note that the consumer performs multiple fetches in parallel.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -210,7 +273,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="group.instance.id" href="#group.instance.id">group.instance.id</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="group.instance.id"></a>
+   <a href="#group.instance.id">group.instance.id</a>
+</h4>
 <p>A unique identifier of the consumer instance provided by the end user. Only non-empty strings are permitted. If set, the consumer is treated as a static member, which means that only one instance with this ID is allowed in the consumer group at any time. This can be used in combination with a larger session timeout to avoid group rebalances caused by transient unavailability (e.g. process restarts). If not set, the consumer will join the group as a dynamic member, which is the traditional behavior.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -220,7 +286,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="isolation.level" href="#isolation.level">isolation.level</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="isolation.level"></a>
+   <a href="#isolation.level">isolation.level</a>
+</h4>
 <p>Controls how to read messages written transactionally. If set to <code>read_committed</code>, consumer.poll() will only return transactional messages which have been committed. If set to <code>read_uncommitted</code> (the default), consumer.poll() will return all messages, even transactional messages which have been aborted. Non-transactional messages will be returned unconditionally in either mode. <p>Messages will always be returned in offset order. Hence, in <code>read_committed</code> mode, consumer.poll() will only return messages up to the last stable offset (LSO), which is the one less than the offset of the first open transaction. In particular any messages appearing after messages belonging to ongoing transactions will be withheld until the relevant transaction has been completed. As a result, <code>read_committed</code> consumers will not be able to read up to the high watermark when there are in flight transactions.</p><p>Further, when in <code>read_committed</code> the seekToEnd method will return the LSO.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -230,7 +299,10 @@
 </tbody></table>
 </li>
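+<p>For example, to consume only committed transactional messages (continuing the <code>props</code> sketch above):</p>
+<pre><code>props.put("isolation.level", "read_committed"); // aborted transactional messages are filtered out</code></pre>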
 <li>
-<h4><a id="max.poll.interval.ms" href="#max.poll.interval.ms">max.poll.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="max.poll.interval.ms"></a>
+   <a href="#max.poll.interval.ms">max.poll.interval.ms</a>
+</h4>
 <p>The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member. For consumers using a non-null <code>group.instance.id</code> which reach this timeout, partitions will not be immediately reassigned. Instead, the consumer will stop sending heartbeats and partitions will be reassigned after expiration of <code>session.timeout.ms</code>. This mirrors the behavior of a static consumer which has shut down.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -240,7 +312,10 @@
 </tbody></table>
 </li>
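+<p>A minimal sketch of the poll loop this timeout bounds (continuing the consumer sketch above; <code>process</code> is a hypothetical handler):</p>
+<pre><code>while (true) {
+    // Each iteration must complete within max.poll.interval.ms, or the
+    // consumer is considered failed and its partitions are reassigned.
+    ConsumerRecords&lt;String, String&gt; records = consumer.poll(Duration.ofMillis(100));
+    for (ConsumerRecord&lt;String, String&gt; record : records) {
+        process(record);
+    }
+}</code></pre>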
 <li>
-<h4><a id="max.poll.records" href="#max.poll.records">max.poll.records</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="max.poll.records"></a>
+   <a href="#max.poll.records">max.poll.records</a>
+</h4>
 <p>The maximum number of records returned in a single call to poll().</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -250,7 +325,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="partition.assignment.strategy" href="#partition.assignment.strategy">partition.assignment.strategy</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="partition.assignment.strategy"></a>
+   <a href="#partition.assignment.strategy">partition.assignment.strategy</a>
+</h4>
 <p>A list of class names or class types, ordered by preference, of supported partition assignment strategies that the client will use to distribute partition ownership amongst consumer instances when group management is used.<p>In addition to the default class specified below, you can use the <code>org.apache.kafka.clients.consumer.RoundRobinAssignor</code> class for round robin assignments of partitions to consumers.</p><p>Implementing the <code>org.apache.kafka.clients.consumer.ConsumerPartitionAssignor</code> interface allows you to plug in a custom assignment strategy.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -260,7 +338,10 @@
 </tbody></table>
 </li>
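+<p>For example, to prefer round robin assignment (continuing the <code>props</code> sketch above):</p>
+<pre><code>props.put("partition.assignment.strategy",
+    "org.apache.kafka.clients.consumer.RoundRobinAssignor");</code></pre>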
 <li>
-<h4><a id="receive.buffer.bytes" href="#receive.buffer.bytes">receive.buffer.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="receive.buffer.bytes"></a>
+   <a href="#receive.buffer.bytes">receive.buffer.bytes</a>
+</h4>
 <p>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -270,7 +351,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="request.timeout.ms" href="#request.timeout.ms">request.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="request.timeout.ms"></a>
+   <a href="#request.timeout.ms">request.timeout.ms</a>
+</h4>
 <p>The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -280,7 +364,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.client.callback.handler.class" href="#sasl.client.callback.handler.class">sasl.client.callback.handler.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.client.callback.handler.class"></a>
+   <a href="#sasl.client.callback.handler.class">sasl.client.callback.handler.class</a>
+</h4>
 <p>The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -290,7 +377,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.jaas.config" href="#sasl.jaas.config">sasl.jaas.config</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.jaas.config"></a>
+   <a href="#sasl.jaas.config">sasl.jaas.config</a>
+</h4>
 <p>JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html">here</a>. The format for the value is: '<code>loginModuleClass controlFlag (optionName=optionValue)*;</code>'. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -300,7 +390,10 @@
 </tbody></table>
 </li>
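+<p>For example, a JAAS value for the PLAIN mechanism (credentials are placeholders; continuing the <code>props</code> sketch above):</p>
+<pre><code>props.put("sasl.jaas.config",
+    "org.apache.kafka.common.security.plain.PlainLoginModule required " +
+    "username=\"alice\" password=\"alice-secret\";");</code></pre>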
 <li>
-<h4><a id="sasl.kerberos.service.name" href="#sasl.kerberos.service.name">sasl.kerberos.service.name</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.service.name"></a>
+   <a href="#sasl.kerberos.service.name">sasl.kerberos.service.name</a>
+</h4>
 <p>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -310,7 +403,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.callback.handler.class" href="#sasl.login.callback.handler.class">sasl.login.callback.handler.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.callback.handler.class"></a>
+   <a href="#sasl.login.callback.handler.class">sasl.login.callback.handler.class</a>
+</h4>
 <p>The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -320,7 +416,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.class" href="#sasl.login.class">sasl.login.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.class"></a>
+   <a href="#sasl.login.class">sasl.login.class</a>
+</h4>
 <p>The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -330,7 +429,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.mechanism" href="#sasl.mechanism">sasl.mechanism</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.mechanism"></a>
+   <a href="#sasl.mechanism">sasl.mechanism</a>
+</h4>
 <p>SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -340,7 +442,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="security.protocol" href="#security.protocol">security.protocol</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="security.protocol"></a>
+   <a href="#security.protocol">security.protocol</a>
+</h4>
 <p>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -350,7 +455,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="send.buffer.bytes" href="#send.buffer.bytes">send.buffer.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="send.buffer.bytes"></a>
+   <a href="#send.buffer.bytes">send.buffer.bytes</a>
+</h4>
 <p>The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -360,7 +468,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.enabled.protocols" href="#ssl.enabled.protocols">ssl.enabled.protocols</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.enabled.protocols"></a>
+   <a href="#ssl.enabled.protocols">ssl.enabled.protocols</a>
+</h4>
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -370,7 +481,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keystore.type" href="#ssl.keystore.type">ssl.keystore.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keystore.type"></a>
+   <a href="#ssl.keystore.type">ssl.keystore.type</a>
+</h4>
 <p>The file format of the key store file. This is optional for client.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -380,7 +494,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.protocol" href="#ssl.protocol">ssl.protocol</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.protocol"></a>
+   <a href="#ssl.protocol">ssl.protocol</a>
+</h4>
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -390,7 +507,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.provider" href="#ssl.provider">ssl.provider</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.provider"></a>
+   <a href="#ssl.provider">ssl.provider</a>
+</h4>
 <p>The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -400,7 +520,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.truststore.type" href="#ssl.truststore.type">ssl.truststore.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.truststore.type"></a>
+   <a href="#ssl.truststore.type">ssl.truststore.type</a>
+</h4>
 <p>The file format of the trust store file.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -410,7 +533,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="auto.commit.interval.ms" href="#auto.commit.interval.ms">auto.commit.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="auto.commit.interval.ms"></a>
+   <a href="#auto.commit.interval.ms">auto.commit.interval.ms</a>
+</h4>
 <p>The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if <code>enable.auto.commit</code> is set to <code>true</code>.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -420,7 +546,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="check.crcs" href="#check.crcs">check.crcs</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="check.crcs"></a>
+   <a href="#check.crcs">check.crcs</a>
+</h4>
 <p>Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -430,7 +559,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="client.id" href="#client.id">client.id</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="client.id"></a>
+   <a href="#client.id">client.id</a>
+</h4>
 <p>An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -440,7 +572,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="client.rack" href="#client.rack">client.rack</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="client.rack"></a>
+   <a href="#client.rack">client.rack</a>
+</h4>
 <p>A rack identifier for this client. This can be any string value which indicates where this client is physically located. It corresponds with the broker config 'broker.rack'.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -450,7 +585,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="fetch.max.wait.ms" href="#fetch.max.wait.ms">fetch.max.wait.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="fetch.max.wait.ms"></a>
+   <a href="#fetch.max.wait.ms">fetch.max.wait.ms</a>
+</h4>
 <p>The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -460,7 +598,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="interceptor.classes" href="#interceptor.classes">interceptor.classes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="interceptor.classes"></a>
+   <a href="#interceptor.classes">interceptor.classes</a>
+</h4>
 <p>A list of classes to use as interceptors. Implementing the <code>org.apache.kafka.clients.consumer.ConsumerInterceptor</code> interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -470,7 +611,10 @@
 </tbody></table>
 </li>
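+<p>A minimal sketch of an interceptor (class name and counting logic are hypothetical; the interface methods are from the consumer API):</p>
+<pre><code>import java.util.Map;
+import org.apache.kafka.clients.consumer.ConsumerInterceptor;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.OffsetAndMetadata;
+import org.apache.kafka.common.TopicPartition;
+
+public class CountingInterceptor implements ConsumerInterceptor&lt;String, String&gt; {
+    @Override public ConsumerRecords&lt;String, String&gt; onConsume(ConsumerRecords&lt;String, String&gt; records) {
+        System.out.println("fetched " + records.count() + " records");
+        return records; // may also return a modified copy
+    }
+    @Override public void onCommit(Map&lt;TopicPartition, OffsetAndMetadata&gt; offsets) { }
+    @Override public void configure(Map&lt;String, ?&gt; configs) { }
+    @Override public void close() { }
+}</code></pre>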
 <li>
-<h4><a id="metadata.max.age.ms" href="#metadata.max.age.ms">metadata.max.age.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metadata.max.age.ms"></a>
+   <a href="#metadata.max.age.ms">metadata.max.age.ms</a>
+</h4>
 <p>The period of time in milliseconds after which we force a refresh of metadata, even if we haven't seen any partition leadership changes, in order to proactively discover any new brokers or partitions.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -480,7 +624,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metric.reporters" href="#metric.reporters">metric.reporters</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metric.reporters"></a>
+   <a href="#metric.reporters">metric.reporters</a>
+</h4>
 <p>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -490,7 +637,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.num.samples" href="#metrics.num.samples">metrics.num.samples</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.num.samples"></a>
+   <a href="#metrics.num.samples">metrics.num.samples</a>
+</h4>
 <p>The number of samples maintained to compute metrics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -500,7 +650,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.recording.level" href="#metrics.recording.level">metrics.recording.level</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.recording.level"></a>
+   <a href="#metrics.recording.level">metrics.recording.level</a>
+</h4>
 <p>The highest recording level for metrics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -510,7 +663,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.sample.window.ms" href="#metrics.sample.window.ms">metrics.sample.window.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.sample.window.ms"></a>
+   <a href="#metrics.sample.window.ms">metrics.sample.window.ms</a>
+</h4>
 <p>The window of time a metrics sample is computed over.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -520,7 +676,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="reconnect.backoff.max.ms" href="#reconnect.backoff.max.ms">reconnect.backoff.max.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="reconnect.backoff.max.ms"></a>
+   <a href="#reconnect.backoff.max.ms">reconnect.backoff.max.ms</a>
+</h4>
 <p>The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -530,7 +689,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="reconnect.backoff.ms" href="#reconnect.backoff.ms">reconnect.backoff.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="reconnect.backoff.ms"></a>
+   <a href="#reconnect.backoff.ms">reconnect.backoff.ms</a>
+</h4>
 <p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -540,7 +702,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="retry.backoff.ms" href="#retry.backoff.ms">retry.backoff.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="retry.backoff.ms"></a>
+   <a href="#retry.backoff.ms">retry.backoff.ms</a>
+</h4>
 <p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -550,7 +715,10 @@
 </tbody></table>
 </li>
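<p>The two reconnect settings bound an exponential backoff per host, while <code>retry.backoff.ms</code> governs request retries. A sketch with illustrative values:</p>
<pre><code># illustrative values only
reconnect.backoff.ms=50        # first reconnect attempt waits about 50 ms
reconnect.backoff.max.ms=1000  # exponential backoff is capped at 1 s (plus 20% jitter)
retry.backoff.ms=100           # failed requests to a partition are retried after about 100 ms
</code></pre>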
 <li>
-<h4><a id="sasl.kerberos.kinit.cmd" href="#sasl.kerberos.kinit.cmd">sasl.kerberos.kinit.cmd</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.kinit.cmd"></a>
+   <a href="#sasl.kerberos.kinit.cmd">sasl.kerberos.kinit.cmd</a>
+</h4>
 <p>Kerberos kinit command path.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -560,7 +728,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.min.time.before.relogin" href="#sasl.kerberos.min.time.before.relogin">sasl.kerberos.min.time.before.relogin</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.min.time.before.relogin"></a>
+   <a href="#sasl.kerberos.min.time.before.relogin">sasl.kerberos.min.time.before.relogin</a>
+</h4>
 <p>Login thread sleep time between refresh attempts.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -570,7 +741,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.ticket.renew.jitter" href="#sasl.kerberos.ticket.renew.jitter">sasl.kerberos.ticket.renew.jitter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.jitter"></a>
+   <a href="#sasl.kerberos.ticket.renew.jitter">sasl.kerberos.ticket.renew.jitter</a>
+</h4>
 <p>Percentage of random jitter added to the renewal time.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -580,7 +754,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.ticket.renew.window.factor" href="#sasl.kerberos.ticket.renew.window.factor">sasl.kerberos.ticket.renew.window.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.window.factor"></a>
+   <a href="#sasl.kerberos.ticket.renew.window.factor">sasl.kerberos.ticket.renew.window.factor</a>
+</h4>
 <p>Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -590,7 +767,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.buffer.seconds" href="#sasl.login.refresh.buffer.seconds">sasl.login.refresh.buffer.seconds</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.buffer.seconds"></a>
+   <a href="#sasl.login.refresh.buffer.seconds">sasl.login.refresh.buffer.seconds</a>
+</h4>
 <p>The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds, then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -600,7 +780,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.min.period.seconds" href="#sasl.login.refresh.min.period.seconds">sasl.login.refresh.min.period.seconds</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.min.period.seconds"></a>
+   <a href="#sasl.login.refresh.min.period.seconds">sasl.login.refresh.min.period.seconds</a>
+</h4>
 <p>The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -610,7 +793,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.window.factor" href="#sasl.login.refresh.window.factor">sasl.login.refresh.window.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.window.factor"></a>
+   <a href="#sasl.login.refresh.window.factor">sasl.login.refresh.window.factor</a>
+</h4>
 <p>Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -620,7 +806,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.window.jitter" href="#sasl.login.refresh.window.jitter">sasl.login.refresh.window.jitter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.window.jitter"></a>
+   <a href="#sasl.login.refresh.window.jitter">sasl.login.refresh.window.jitter</a>
+</h4>
 <p>The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -630,7 +819,10 @@
 </tbody></table>
 </li>
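<p>The four <code>sasl.login.refresh.*</code> settings cooperate: the refresh thread sleeps until the window factor (plus jitter) of the credential's lifetime has elapsed, while respecting the buffer before expiry and the minimum period between refreshes. Restating the defaults documented above as a properties fragment:</p>
<pre><code># the defaults documented above; applies only to OAUTHBEARER
sasl.login.refresh.window.factor=0.8      # refresh at ~80% of the credential lifetime
sasl.login.refresh.window.jitter=0.05     # plus up to 5% random jitter
sasl.login.refresh.buffer.seconds=300     # keep at least 5 minutes of buffer before expiry
sasl.login.refresh.min.period.seconds=60  # wait at least 1 minute between refreshes
</code></pre>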
 <li>
-<h4><a id="security.providers" href="#security.providers">security.providers</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="security.providers"></a>
+   <a href="#security.providers">security.providers</a>
+</h4>
 <p>A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the <code>org.apache.kafka.common.security.auth.SecurityProviderCreator</code> interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -640,7 +832,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.cipher.suites" href="#ssl.cipher.suites">ssl.cipher.suites</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.cipher.suites"></a>
+   <a href="#ssl.cipher.suites">ssl.cipher.suites</a>
+</h4>
 <p>A list of cipher suites. A cipher suite is a named combination of authentication, encryption, MAC and key exchange algorithms used to negotiate the security settings for a network connection using the TLS or SSL network protocol. By default all the available cipher suites are supported.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -650,7 +845,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.endpoint.identification.algorithm" href="#ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.endpoint.identification.algorithm"></a>
+   <a href="#ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a>
+</h4>
 <p>The endpoint identification algorithm to validate server hostname using server certificate. </p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -660,7 +858,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.engine.factory.class" href="#ssl.engine.factory.class">ssl.engine.factory.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.engine.factory.class"></a>
+   <a href="#ssl.engine.factory.class">ssl.engine.factory.class</a>
+</h4>
 <p>The class of type <code>org.apache.kafka.common.security.auth.SslEngineFactory</code> to provide SSLEngine objects. The default value is <code>org.apache.kafka.common.security.ssl.DefaultSslEngineFactory</code>.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -670,7 +871,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keymanager.algorithm" href="#ssl.keymanager.algorithm">ssl.keymanager.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keymanager.algorithm"></a>
+   <a href="#ssl.keymanager.algorithm">ssl.keymanager.algorithm</a>
+</h4>
 <p>The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -680,7 +884,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.secure.random.implementation" href="#ssl.secure.random.implementation">ssl.secure.random.implementation</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.secure.random.implementation"></a>
+   <a href="#ssl.secure.random.implementation">ssl.secure.random.implementation</a>
+</h4>
 <p>The SecureRandom PRNG implementation to use for SSL cryptography operations. </p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -690,7 +897,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.trustmanager.algorithm" href="#ssl.trustmanager.algorithm">ssl.trustmanager.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.trustmanager.algorithm"></a>
+   <a href="#ssl.trustmanager.algorithm">ssl.trustmanager.algorithm</a>
+</h4>
 <p>The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
diff --git a/26/generated/kafka_config.html b/26/generated/kafka_config.html
index 2e683a4..cdc2a34 100644
--- a/26/generated/kafka_config.html
+++ b/26/generated/kafka_config.html
@@ -1,6 +1,9 @@
 <ul class="config-list">
 <li>
-<h4><a id="zookeeper.connect" href="#zookeeper.connect">zookeeper.connect</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.connect"></a>
+   <a href="#zookeeper.connect">zookeeper.connect</a>
+</h4>
 <p>Specifies the ZooKeeper connection string in the form <code>hostname:port</code> where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down you can also specify multiple hosts in the form <code>hostname1:port1,hostname2:port2,hostname3:port3</code>.<br>The server can also have a ZooKeeper chroot path as part of its ZooKeeper connection string which puts its data under some path in the global ZooKeeper namespace. For example to give a chroot path of <code>/chroot/path</code> you would give the connection string as <code>hostname1:port1,hostname2:port2,hostname3:port3/chroot/path</code>.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -11,7 +14,10 @@
 </tbody></table>
 </li>
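<p>Restating the chroot example from the description as a properties fragment:</p>
<pre><code># three ZooKeeper hosts sharing a chroot path
zookeeper.connect=hostname1:port1,hostname2:port2,hostname3:port3/chroot/path
</code></pre>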
 <li>
-<h4><a id="advertised.host.name" href="#advertised.host.name">advertised.host.name</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="advertised.host.name"></a>
+   <a href="#advertised.host.name">advertised.host.name</a>
+</h4>
 <p>DEPRECATED: only used when <code>advertised.listeners</code> or <code>listeners</code> are not set. Use <code>advertised.listeners</code> instead. <br>Hostname to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, it will use the value for <code>host.name</code> if configured. Otherwise it will use the value returned from java.net.InetAddress.getCanonicalHostName().</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -22,7 +28,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="advertised.listeners" href="#advertised.listeners">advertised.listeners</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="advertised.listeners"></a>
+   <a href="#advertised.listeners">advertised.listeners</a>
+</h4>
 <p>Listeners to publish to ZooKeeper for clients to use, if different than the <code>listeners</code> config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for <code>listeners</code> will be used. Unlike <code>listeners</code> it is not valid to advertise the 0.0.0.0 meta-address.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -33,7 +42,10 @@
 </tbody></table>
 </li>
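<p>A common IaaS pattern, sketched with hypothetical addresses: bind to the internal interface but advertise the externally reachable name:</p>
<pre><code># hypothetical addresses, for illustration only
listeners=PLAINTEXT://10.0.0.5:9092                        # interface the broker binds to
advertised.listeners=PLAINTEXT://broker1.example.com:9092  # address published to ZooKeeper for clients
</code></pre>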
 <li>
-<h4><a id="advertised.port" href="#advertised.port">advertised.port</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="advertised.port"></a>
+   <a href="#advertised.port">advertised.port</a>
+</h4>
 <p>DEPRECATED: only used when <code>advertised.listeners</code> or <code>listeners</code> are not set. Use <code>advertised.listeners</code> instead. <br>The port to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the port to which the broker binds. If this is not set, it will publish the same port that the broker binds to.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -44,7 +56,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="auto.create.topics.enable" href="#auto.create.topics.enable">auto.create.topics.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="auto.create.topics.enable"></a>
+   <a href="#auto.create.topics.enable">auto.create.topics.enable</a>
+</h4>
 <p>Enable auto creation of topics on the server</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -55,7 +70,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="auto.leader.rebalance.enable" href="#auto.leader.rebalance.enable">auto.leader.rebalance.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="auto.leader.rebalance.enable"></a>
+   <a href="#auto.leader.rebalance.enable">auto.leader.rebalance.enable</a>
+</h4>
 <p>Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable by `leader.imbalance.check.interval.seconds`. If the leader imbalance exceeds `leader.imbalance.per.broker.percentage`, a leader rebalance to the preferred leaders of the partitions is triggered.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -66,7 +84,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="background.threads" href="#background.threads">background.threads</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="background.threads"></a>
+   <a href="#background.threads">background.threads</a>
+</h4>
 <p>The number of threads to use for various background processing tasks</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -77,7 +98,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="broker.id" href="#broker.id">broker.id</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="broker.id"></a>
+   <a href="#broker.id">broker.id</a>
+</h4>
 <p>The broker id for this server. If unset, a unique broker id will be generated. To avoid conflicts between ZooKeeper-generated broker ids and user-configured broker ids, generated broker ids start from reserved.broker.max.id + 1.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -88,7 +112,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="compression.type" href="#compression.type">compression.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="compression.type"></a>
+   <a href="#compression.type">compression.type</a>
+</h4>
 <p>Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -99,7 +126,10 @@
 </tbody></table>
 </li>
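<p>For example, to keep whatever codec the producer used (an illustrative fragment):</p>
<pre><code># retain the original compression codec set by the producer
compression.type=producer
# other accepted values: gzip, snappy, lz4, zstd, uncompressed
</code></pre>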
 <li>
-<h4><a id="control.plane.listener.name" href="#control.plane.listener.name">control.plane.listener.name</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="control.plane.listener.name"></a>
+   <a href="#control.plane.listener.name">control.plane.listener.name</a>
+</h4>
 <p>Name of the listener used for communication between the controller and brokers. A broker will use the control.plane.listener.name to locate the endpoint in the listeners list, on which to listen for connections from the controller. For example, if a broker's config is:<br>listeners = INTERNAL://192.1.1.8:9092, EXTERNAL://10.1.1.5:9093, CONTROLLER://192.1.1.8:9094<br>listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL<br>control.plane.listener.name = CONTROLLER<br>On startup, the broker will start listening on "192.1.1.8:9094" with security protocol "SSL".<br>On the controller side, when it discovers a broker's published endpoints through ZooKeeper, it will use the control.plane.listener.name to find the endpoint, which it will use to establish a connection to the broker.<br>For example, if the broker's published endpoints on ZooKeeper are:<br>"endpoints" : ["INTERNAL://broker1.example.com:9092","EXTERNAL://broker1.example.com:9093","CONTROLLER://broker1.example.com:9094"]<br> and the controller's config is:<br>listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL<br>control.plane.listener.name = CONTROLLER<br>then the controller will use "broker1.example.com:9094" with security protocol "SSL" to connect to the broker.<br>If not explicitly configured, the default value will be null and there will be no dedicated endpoints for controller connections.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -110,7 +140,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="delete.topic.enable" href="#delete.topic.enable">delete.topic.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="delete.topic.enable"></a>
+   <a href="#delete.topic.enable">delete.topic.enable</a>
+</h4>
 <p>Enables delete topic. Delete topic through the admin tool will have no effect if this config is turned off</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -121,7 +154,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="host.name" href="#host.name">host.name</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="host.name"></a>
+   <a href="#host.name">host.name</a>
+</h4>
 <p>DEPRECATED: only used when <code>listeners</code> is not set. Use <code>listeners</code> instead. <br>The hostname of the broker. If this is set, the broker will bind only to this address. If it is not set, the broker will bind to all interfaces.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -132,7 +168,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="leader.imbalance.check.interval.seconds" href="#leader.imbalance.check.interval.seconds">leader.imbalance.check.interval.seconds</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="leader.imbalance.check.interval.seconds"></a>
+   <a href="#leader.imbalance.check.interval.seconds">leader.imbalance.check.interval.seconds</a>
+</h4>
 <p>The frequency with which the partition rebalance check is triggered by the controller</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -143,7 +182,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="leader.imbalance.per.broker.percentage" href="#leader.imbalance.per.broker.percentage">leader.imbalance.per.broker.percentage</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="leader.imbalance.per.broker.percentage"></a>
+   <a href="#leader.imbalance.per.broker.percentage">leader.imbalance.per.broker.percentage</a>
+</h4>
 <p>The ratio of leader imbalance allowed per broker. The controller triggers a leader rebalance if the imbalance goes above this value for a broker. The value is specified as a percentage.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -165,7 +207,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.dir" href="#log.dir">log.dir</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.dir"></a>
+   <a href="#log.dir">log.dir</a>
+</h4>
 <p>The directory in which the log data is kept (supplemental to the log.dirs property)</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -176,7 +221,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.dirs" href="#log.dirs">log.dirs</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.dirs"></a>
+   <a href="#log.dirs">log.dirs</a>
+</h4>
 <p>The directories in which the log data is kept. If not set, the value in log.dir is used</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -187,7 +235,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.flush.interval.messages" href="#log.flush.interval.messages">log.flush.interval.messages</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.flush.interval.messages"></a>
+   <a href="#log.flush.interval.messages">log.flush.interval.messages</a>
+</h4>
 <p>The number of messages accumulated on a log partition before messages are flushed to disk </p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -198,7 +249,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.flush.interval.ms" href="#log.flush.interval.ms">log.flush.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.flush.interval.ms"></a>
+   <a href="#log.flush.interval.ms">log.flush.interval.ms</a>
+</h4>
 <p>The maximum time in ms that a message in any topic is kept in memory before being flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -209,7 +263,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.flush.offset.checkpoint.interval.ms" href="#log.flush.offset.checkpoint.interval.ms">log.flush.offset.checkpoint.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.flush.offset.checkpoint.interval.ms"></a>
+   <a href="#log.flush.offset.checkpoint.interval.ms">log.flush.offset.checkpoint.interval.ms</a>
+</h4>
 <p>The frequency with which we update the persistent record of the last flush, which acts as the log recovery point</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -220,7 +277,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.flush.scheduler.interval.ms" href="#log.flush.scheduler.interval.ms">log.flush.scheduler.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.flush.scheduler.interval.ms"></a>
+   <a href="#log.flush.scheduler.interval.ms">log.flush.scheduler.interval.ms</a>
+</h4>
 <p>The frequency in ms that the log flusher checks whether any log needs to be flushed to disk</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -231,7 +291,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.flush.start.offset.checkpoint.interval.ms" href="#log.flush.start.offset.checkpoint.interval.ms">log.flush.start.offset.checkpoint.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.flush.start.offset.checkpoint.interval.ms"></a>
+   <a href="#log.flush.start.offset.checkpoint.interval.ms">log.flush.start.offset.checkpoint.interval.ms</a>
+</h4>
 <p>The frequency with which we update the persistent record of log start offset</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -242,7 +305,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.retention.bytes" href="#log.retention.bytes">log.retention.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.retention.bytes"></a>
+   <a href="#log.retention.bytes">log.retention.bytes</a>
+</h4>
 <p>The maximum size of the log before deleting it</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -253,7 +319,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.retention.hours" href="#log.retention.hours">log.retention.hours</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.retention.hours"></a>
+   <a href="#log.retention.hours">log.retention.hours</a>
+</h4>
 <p>The number of hours to keep a log file before deleting it, tertiary to the log.retention.ms property</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -264,7 +333,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.retention.minutes" href="#log.retention.minutes">log.retention.minutes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.retention.minutes"></a>
+   <a href="#log.retention.minutes">log.retention.minutes</a>
+</h4>
 <p>The number of minutes to keep a log file before deleting it, secondary to the log.retention.ms property. If not set, the value in log.retention.hours is used</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -275,7 +347,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.retention.ms" href="#log.retention.ms">log.retention.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.retention.ms"></a>
+   <a href="#log.retention.ms">log.retention.ms</a>
+</h4>
 <p>The number of milliseconds to keep a log file before deleting it. If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -286,7 +361,10 @@
 </tbody></table>
 </li>
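<p>The three retention settings form a precedence chain (ms over minutes over hours); only the most specific one set takes effect. A sketch with illustrative values, all equal to 7 days:</p>
<pre><code># illustrative values; log.retention.ms wins when set
log.retention.hours=168      # fallback: 7 days
log.retention.minutes=10080  # overrides hours when set
log.retention.ms=604800000   # overrides minutes when set; -1 disables the time limit
</code></pre>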
 <li>
-<h4><a id="log.roll.hours" href="#log.roll.hours">log.roll.hours</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.roll.hours"></a>
+   <a href="#log.roll.hours">log.roll.hours</a>
+</h4>
 <p>The maximum time before a new log segment is rolled out (in hours), secondary to the log.roll.ms property</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -297,7 +375,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.roll.jitter.hours" href="#log.roll.jitter.hours">log.roll.jitter.hours</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.roll.jitter.hours"></a>
+   <a href="#log.roll.jitter.hours">log.roll.jitter.hours</a>
+</h4>
 <p>The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to the log.roll.jitter.ms property</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -308,7 +389,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.roll.jitter.ms" href="#log.roll.jitter.ms">log.roll.jitter.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.roll.jitter.ms"></a>
+   <a href="#log.roll.jitter.ms">log.roll.jitter.ms</a>
+</h4>
 <p>The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -319,7 +403,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.roll.ms" href="#log.roll.ms">log.roll.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.roll.ms"></a>
+   <a href="#log.roll.ms">log.roll.ms</a>
+</h4>
 <p>The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -330,7 +417,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.segment.bytes" href="#log.segment.bytes">log.segment.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.segment.bytes"></a>
+   <a href="#log.segment.bytes">log.segment.bytes</a>
+</h4>
 <p>The maximum size of a single log file</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -341,7 +431,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.segment.delete.delay.ms" href="#log.segment.delete.delay.ms">log.segment.delete.delay.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.segment.delete.delay.ms"></a>
+   <a href="#log.segment.delete.delay.ms">log.segment.delete.delay.ms</a>
+</h4>
 <p>The amount of time to wait before deleting a file from the filesystem</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -352,7 +445,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="message.max.bytes" href="#message.max.bytes">message.max.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="message.max.bytes"></a>
+   <a href="#message.max.bytes">message.max.bytes</a>
+</h4>
 <p>The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. This can be set per topic with the topic level <code>max.message.bytes</code> config.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -363,7 +459,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="min.insync.replicas" href="#min.insync.replicas">min.insync.replicas</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="min.insync.replicas"></a>
+   <a href="#min.insync.replicas">min.insync.replicas</a>
+</h4>
 <p>When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).<br>When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -374,7 +473,10 @@
 </tbody></table>
 </li>
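<p>Restating the typical durability scenario from the description, split across broker and producer configuration:</p>
<pre><code># broker (or topic-level) setting: a write needs 2 in-sync acknowledgements
min.insync.replicas=2
# producer setting: wait for all in-sync replicas
acks=all
# with a topic replication factor of 3, the producer raises an exception
# (NotEnoughReplicas) if fewer than 2 replicas can acknowledge the write
</code></pre>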
 <li>
-<h4><a id="num.io.threads" href="#num.io.threads">num.io.threads</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="num.io.threads"></a>
+   <a href="#num.io.threads">num.io.threads</a>
+</h4>
 <p>The number of threads that the server uses for processing requests, which may include disk I/O</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -385,7 +487,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="num.network.threads" href="#num.network.threads">num.network.threads</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="num.network.threads"></a>
+   <a href="#num.network.threads">num.network.threads</a>
+</h4>
 <p>The number of threads that the server uses for receiving requests from the network and sending responses to the network</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -396,7 +501,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="num.recovery.threads.per.data.dir" href="#num.recovery.threads.per.data.dir">num.recovery.threads.per.data.dir</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="num.recovery.threads.per.data.dir"></a>
+   <a href="#num.recovery.threads.per.data.dir">num.recovery.threads.per.data.dir</a>
+</h4>
 <p>The number of threads per data directory to be used for log recovery at startup and flushing at shutdown</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -407,7 +515,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="num.replica.alter.log.dirs.threads" href="#num.replica.alter.log.dirs.threads">num.replica.alter.log.dirs.threads</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="num.replica.alter.log.dirs.threads"></a>
+   <a href="#num.replica.alter.log.dirs.threads">num.replica.alter.log.dirs.threads</a>
+</h4>
 <p>The number of threads that can move replicas between log directories, which may include disk I/O</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -418,7 +529,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="num.replica.fetchers" href="#num.replica.fetchers">num.replica.fetchers</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="num.replica.fetchers"></a>
+   <a href="#num.replica.fetchers">num.replica.fetchers</a>
+</h4>
 <p>Number of fetcher threads used to replicate messages from a source broker. Increasing this value can increase the degree of I/O parallelism in the follower broker.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -429,7 +543,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="offset.metadata.max.bytes" href="#offset.metadata.max.bytes">offset.metadata.max.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="offset.metadata.max.bytes"></a>
+   <a href="#offset.metadata.max.bytes">offset.metadata.max.bytes</a>
+</h4>
 <p>The maximum size for a metadata entry associated with an offset commit</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -440,7 +557,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="offsets.commit.required.acks" href="#offsets.commit.required.acks">offsets.commit.required.acks</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="offsets.commit.required.acks"></a>
+   <a href="#offsets.commit.required.acks">offsets.commit.required.acks</a>
+</h4>
 <p>The required acks before the commit can be accepted. In general, the default (-1) should not be overridden</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -451,7 +571,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="offsets.commit.timeout.ms" href="#offsets.commit.timeout.ms">offsets.commit.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="offsets.commit.timeout.ms"></a>
+   <a href="#offsets.commit.timeout.ms">offsets.commit.timeout.ms</a>
+</h4>
 <p>Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -462,7 +585,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="offsets.load.buffer.size" href="#offsets.load.buffer.size">offsets.load.buffer.size</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="offsets.load.buffer.size"></a>
+   <a href="#offsets.load.buffer.size">offsets.load.buffer.size</a>
+</h4>
 <p>Batch size for reading from the offsets segments when loading offsets into the cache (soft-limit, overridden if records are too large).</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -473,7 +599,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="offsets.retention.check.interval.ms" href="#offsets.retention.check.interval.ms">offsets.retention.check.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="offsets.retention.check.interval.ms"></a>
+   <a href="#offsets.retention.check.interval.ms">offsets.retention.check.interval.ms</a>
+</h4>
 <p>Frequency at which to check for stale offsets</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -484,7 +613,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="offsets.retention.minutes" href="#offsets.retention.minutes">offsets.retention.minutes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="offsets.retention.minutes"></a>
+   <a href="#offsets.retention.minutes">offsets.retention.minutes</a>
+</h4>
 <p>After a consumer group loses all its consumers (i.e. becomes empty) its offsets will be kept for this retention period before getting discarded. For standalone consumers (using manual assignment), offsets will be expired after the time of last commit plus this retention period.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -495,7 +627,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="offsets.topic.compression.codec" href="#offsets.topic.compression.codec">offsets.topic.compression.codec</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="offsets.topic.compression.codec"></a>
+   <a href="#offsets.topic.compression.codec">offsets.topic.compression.codec</a>
+</h4>
 <p>Compression codec for the offsets topic - compression may be used to achieve "atomic" commits</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -506,7 +641,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="offsets.topic.num.partitions" href="#offsets.topic.num.partitions">offsets.topic.num.partitions</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="offsets.topic.num.partitions"></a>
+   <a href="#offsets.topic.num.partitions">offsets.topic.num.partitions</a>
+</h4>
 <p>The number of partitions for the offset commit topic (should not change after deployment)</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -517,7 +655,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="offsets.topic.replication.factor" href="#offsets.topic.replication.factor">offsets.topic.replication.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="offsets.topic.replication.factor"></a>
+   <a href="#offsets.topic.replication.factor">offsets.topic.replication.factor</a>
+</h4>
 <p>The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -528,7 +669,10 @@
 </tbody></table>
 </li>
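<p>For illustration, a small production cluster might pin down the internal offsets topic like this (values are illustrative; the partition count should not be changed after deployment):</p>
<pre><code># illustrative settings for the internal offsets topic
offsets.topic.num.partitions=50
offsets.topic.replication.factor=3  # internal topic creation fails until at least 3 brokers exist
</code></pre>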
 <li>
-<h4><a id="offsets.topic.segment.bytes" href="#offsets.topic.segment.bytes">offsets.topic.segment.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="offsets.topic.segment.bytes"></a>
+   <a href="#offsets.topic.segment.bytes">offsets.topic.segment.bytes</a>
+</h4>
 <p>The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -550,7 +694,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="queued.max.requests" href="#queued.max.requests">queued.max.requests</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="queued.max.requests"></a>
+   <a href="#queued.max.requests">queued.max.requests</a>
+</h4>
 <p>The number of queued requests allowed for the data plane before blocking the network threads</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -561,7 +708,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="quota.consumer.default" href="#quota.consumer.default">quota.consumer.default</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="quota.consumer.default"></a>
+   <a href="#quota.consumer.default">quota.consumer.default</a>
+</h4>
 <p>DEPRECATED: Used only when dynamic default quotas are not configured for <user>, <client-id> or <user, client-id> in Zookeeper. Any consumer distinguished by clientId/consumer group will get throttled if it fetches more bytes than this value per second</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -572,7 +722,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="quota.producer.default" href="#quota.producer.default">quota.producer.default</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="quota.producer.default"></a>
+   <a href="#quota.producer.default">quota.producer.default</a>
+</h4>
 <p>DEPRECATED: Used only when dynamic default quotas are not configured for <user>, <client-id> or <user, client-id> in Zookeeper. Any producer distinguished by clientId will get throttled if it produces more bytes than this value per second</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -583,7 +736,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="replica.fetch.min.bytes" href="#replica.fetch.min.bytes">replica.fetch.min.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="replica.fetch.min.bytes"></a>
+   <a href="#replica.fetch.min.bytes">replica.fetch.min.bytes</a>
+</h4>
 <p>Minimum bytes expected for each fetch response. If not enough bytes are available, the follower waits up to replica.fetch.wait.max.ms</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -594,7 +750,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="replica.fetch.wait.max.ms" href="#replica.fetch.wait.max.ms">replica.fetch.wait.max.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="replica.fetch.wait.max.ms"></a>
+   <a href="#replica.fetch.wait.max.ms">replica.fetch.wait.max.ms</a>
+</h4>
 <p>The maximum wait time for each fetcher request issued by follower replicas. This value should always be less than replica.lag.time.max.ms to prevent frequent shrinking of the ISR for low-throughput topics</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -605,7 +764,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="replica.high.watermark.checkpoint.interval.ms" href="#replica.high.watermark.checkpoint.interval.ms">replica.high.watermark.checkpoint.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="replica.high.watermark.checkpoint.interval.ms"></a>
+   <a href="#replica.high.watermark.checkpoint.interval.ms">replica.high.watermark.checkpoint.interval.ms</a>
+</h4>
 <p>The frequency with which the high watermark is saved out to disk</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -616,7 +778,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="replica.lag.time.max.ms" href="#replica.lag.time.max.ms">replica.lag.time.max.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="replica.lag.time.max.ms"></a>
+   <a href="#replica.lag.time.max.ms">replica.lag.time.max.ms</a>
+</h4>
 <p>If a follower hasn't sent any fetch requests or hasn't consumed up to the leader's log end offset for at least this time, the leader will remove the follower from the ISR</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -627,7 +792,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="replica.socket.receive.buffer.bytes" href="#replica.socket.receive.buffer.bytes">replica.socket.receive.buffer.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="replica.socket.receive.buffer.bytes"></a>
+   <a href="#replica.socket.receive.buffer.bytes">replica.socket.receive.buffer.bytes</a>
+</h4>
 <p>The socket receive buffer for network requests</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -638,7 +806,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="replica.socket.timeout.ms" href="#replica.socket.timeout.ms">replica.socket.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="replica.socket.timeout.ms"></a>
+   <a href="#replica.socket.timeout.ms">replica.socket.timeout.ms</a>
+</h4>
 <p>The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -649,7 +820,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="request.timeout.ms" href="#request.timeout.ms">request.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="request.timeout.ms"></a>
+   <a href="#request.timeout.ms">request.timeout.ms</a>
+</h4>
 <p>This configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary or fail the request if retries are exhausted.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -660,7 +834,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="socket.receive.buffer.bytes" href="#socket.receive.buffer.bytes">socket.receive.buffer.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="socket.receive.buffer.bytes"></a>
+   <a href="#socket.receive.buffer.bytes">socket.receive.buffer.bytes</a>
+</h4>
 <p>The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -671,7 +848,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="socket.request.max.bytes" href="#socket.request.max.bytes">socket.request.max.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="socket.request.max.bytes"></a>
+   <a href="#socket.request.max.bytes">socket.request.max.bytes</a>
+</h4>
 <p>The maximum number of bytes in a socket request</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -682,7 +862,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="socket.send.buffer.bytes" href="#socket.send.buffer.bytes">socket.send.buffer.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="socket.send.buffer.bytes"></a>
+   <a href="#socket.send.buffer.bytes">socket.send.buffer.bytes</a>
+</h4>
 <p>The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -693,7 +876,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="transaction.max.timeout.ms" href="#transaction.max.timeout.ms">transaction.max.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="transaction.max.timeout.ms"></a>
+   <a href="#transaction.max.timeout.ms">transaction.max.timeout.ms</a>
+</h4>
 <p>The maximum allowed timeout for transactions. If a client's requested transaction timeout exceeds this, the broker will return an error in InitProducerIdRequest. This prevents a client from using too large a timeout, which can stall consumers reading from topics included in the transaction.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -704,7 +890,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="transaction.state.log.load.buffer.size" href="#transaction.state.log.load.buffer.size">transaction.state.log.load.buffer.size</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="transaction.state.log.load.buffer.size"></a>
+   <a href="#transaction.state.log.load.buffer.size">transaction.state.log.load.buffer.size</a>
+</h4>
 <p>Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache (soft-limit, overridden if records are too large).</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -715,7 +904,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="transaction.state.log.min.isr" href="#transaction.state.log.min.isr">transaction.state.log.min.isr</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="transaction.state.log.min.isr"></a>
+   <a href="#transaction.state.log.min.isr">transaction.state.log.min.isr</a>
+</h4>
 <p>Overridden min.insync.replicas config for the transaction topic.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -726,7 +918,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="transaction.state.log.num.partitions" href="#transaction.state.log.num.partitions">transaction.state.log.num.partitions</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="transaction.state.log.num.partitions"></a>
+   <a href="#transaction.state.log.num.partitions">transaction.state.log.num.partitions</a>
+</h4>
 <p>The number of partitions for the transaction topic (should not change after deployment).</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -737,7 +932,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="transaction.state.log.replication.factor" href="#transaction.state.log.replication.factor">transaction.state.log.replication.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="transaction.state.log.replication.factor"></a>
+   <a href="#transaction.state.log.replication.factor">transaction.state.log.replication.factor</a>
+</h4>
 <p>The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement.</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -748,7 +946,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="transaction.state.log.segment.bytes" href="#transaction.state.log.segment.bytes">transaction.state.log.segment.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="transaction.state.log.segment.bytes"></a>
+   <a href="#transaction.state.log.segment.bytes">transaction.state.log.segment.bytes</a>
+</h4>
 <p>The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -759,7 +960,10 @@
 </tbody></table>
 </li>
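<p>The transaction state topic takes the analogous set of knobs; an illustrative fragment:</p>
<pre><code># illustrative settings for the internal transaction state topic
transaction.state.log.num.partitions=50
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2                # overrides min.insync.replicas for this topic
transaction.state.log.segment.bytes=104857600  # keep segments small for faster compaction
</code></pre>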
 <li>
-<h4><a id="transactional.id.expiration.ms" href="#transactional.id.expiration.ms">transactional.id.expiration.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="transactional.id.expiration.ms"></a>
+   <a href="#transactional.id.expiration.ms">transactional.id.expiration.ms</a>
+</h4>
 <p>The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transaction before expiring its transactional id. This setting also influences producer id expiration - producer ids are expired once this time has elapsed after the last write with the given producer id. Note that producer ids may expire sooner if the last write from the producer id is deleted due to the topic's retention settings.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -770,7 +974,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="unclean.leader.election.enable" href="#unclean.leader.election.enable">unclean.leader.election.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="unclean.leader.election.enable"></a>
+   <a href="#unclean.leader.election.enable">unclean.leader.election.enable</a>
+</h4>
 <p>Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -781,7 +988,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.connection.timeout.ms" href="#zookeeper.connection.timeout.ms">zookeeper.connection.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.connection.timeout.ms"></a>
+   <a href="#zookeeper.connection.timeout.ms">zookeeper.connection.timeout.ms</a>
+</h4>
 <p>The maximum time that the client waits to establish a connection to ZooKeeper. If not set, the value in zookeeper.session.timeout.ms is used</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -792,7 +1002,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.max.in.flight.requests" href="#zookeeper.max.in.flight.requests">zookeeper.max.in.flight.requests</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.max.in.flight.requests"></a>
+   <a href="#zookeeper.max.in.flight.requests">zookeeper.max.in.flight.requests</a>
+</h4>
 <p>The maximum number of unacknowledged requests the client will send to Zookeeper before blocking.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -803,7 +1016,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.session.timeout.ms" href="#zookeeper.session.timeout.ms">zookeeper.session.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.session.timeout.ms"></a>
+   <a href="#zookeeper.session.timeout.ms">zookeeper.session.timeout.ms</a>
+</h4>
 <p>Zookeeper session timeout</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -814,7 +1030,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.set.acl" href="#zookeeper.set.acl">zookeeper.set.acl</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.set.acl"></a>
+   <a href="#zookeeper.set.acl">zookeeper.set.acl</a>
+</h4>
 <p>Set client to use secure ACLs</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -825,7 +1044,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="broker.id.generation.enable" href="#broker.id.generation.enable">broker.id.generation.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="broker.id.generation.enable"></a>
+   <a href="#broker.id.generation.enable">broker.id.generation.enable</a>
+</h4>
 <p>Enable automatic broker id generation on the server. When enabled the value configured for reserved.broker.max.id should be reviewed.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -836,7 +1058,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="broker.rack" href="#broker.rack">broker.rack</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="broker.rack"></a>
+   <a href="#broker.rack">broker.rack</a>
+</h4>
 <p>Rack of the broker. This will be used in rack aware replication assignment for fault tolerance. Examples: `RACK1`, `us-east-1d`</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -847,7 +1072,10 @@
 </tbody></table>
 </li>
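<p>Using one of the examples from the description:</p>
<pre><code># place this broker in the us-east-1d rack for rack-aware replica assignment
broker.rack=us-east-1d
</code></pre>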
 <li>
-<h4><a id="connections.max.idle.ms" href="#connections.max.idle.ms">connections.max.idle.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="connections.max.idle.ms"></a>
+   <a href="#connections.max.idle.ms">connections.max.idle.ms</a>
+</h4>
 <p>Idle connection timeout: the server socket processor threads close connections that have been idle for longer than this</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -858,7 +1086,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="connections.max.reauth.ms" href="#connections.max.reauth.ms">connections.max.reauth.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="connections.max.reauth.ms"></a>
+   <a href="#connections.max.reauth.ms">connections.max.reauth.ms</a>
+</h4>
 <p>When explicitly set to a positive number (the default is 0, not a positive number), a session lifetime that will not exceed the configured value will be communicated to v2.2.0 or later clients when they authenticate. The broker will disconnect any such connection that is not re-authenticated within the session lifetime and that is then subsequently used for any purpose other than re-authentication. Configuration names can optionally be prefixed with the listener prefix and SASL mechanism name in lower case. For example, listener.name.sasl_ssl.oauthbearer.connections.max.reauth.ms=3600000</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -869,7 +1100,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="controlled.shutdown.enable" href="#controlled.shutdown.enable">controlled.shutdown.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="controlled.shutdown.enable"></a>
+   <a href="#controlled.shutdown.enable">controlled.shutdown.enable</a>
+</h4>
 <p>Enable controlled shutdown of the server</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -880,7 +1114,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="controlled.shutdown.max.retries" href="#controlled.shutdown.max.retries">controlled.shutdown.max.retries</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="controlled.shutdown.max.retries"></a>
+   <a href="#controlled.shutdown.max.retries">controlled.shutdown.max.retries</a>
+</h4>
 <p>Controlled shutdown can fail for multiple reasons. This determines the number of retries when such a failure happens</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -891,7 +1128,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="controlled.shutdown.retry.backoff.ms" href="#controlled.shutdown.retry.backoff.ms">controlled.shutdown.retry.backoff.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="controlled.shutdown.retry.backoff.ms"></a>
+   <a href="#controlled.shutdown.retry.backoff.ms">controlled.shutdown.retry.backoff.ms</a>
+</h4>
 <p>Before each retry, the system needs time to recover from the state that caused the previous failure (controller failover, replica lag, etc.). This config determines the amount of time to wait before retrying.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -902,7 +1142,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="controller.socket.timeout.ms" href="#controller.socket.timeout.ms">controller.socket.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="controller.socket.timeout.ms"></a>
+   <a href="#controller.socket.timeout.ms">controller.socket.timeout.ms</a>
+</h4>
 <p>The socket timeout for controller-to-broker channels</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -913,7 +1156,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="default.replication.factor" href="#default.replication.factor">default.replication.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="default.replication.factor"></a>
+   <a href="#default.replication.factor">default.replication.factor</a>
+</h4>
 <p>The default replication factor for automatically created topics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -924,7 +1170,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="delegation.token.expiry.time.ms" href="#delegation.token.expiry.time.ms">delegation.token.expiry.time.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="delegation.token.expiry.time.ms"></a>
+   <a href="#delegation.token.expiry.time.ms">delegation.token.expiry.time.ms</a>
+</h4>
 <p>The token validity time in milliseconds before the token needs to be renewed. Default value is 1 day.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -935,7 +1184,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="delegation.token.master.key" href="#delegation.token.master.key">delegation.token.master.key</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="delegation.token.master.key"></a>
+   <a href="#delegation.token.master.key">delegation.token.master.key</a>
+</h4>
 <p>Master/secret key to generate and verify delegation tokens. The same key must be configured across all the brokers. If the key is not set or is set to an empty string, brokers will disable delegation token support.</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -946,7 +1198,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="delegation.token.max.lifetime.ms" href="#delegation.token.max.lifetime.ms">delegation.token.max.lifetime.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="delegation.token.max.lifetime.ms"></a>
+   <a href="#delegation.token.max.lifetime.ms">delegation.token.max.lifetime.ms</a>
+</h4>
 <p>The token has a maximum lifetime beyond which it cannot be renewed anymore. Default value is 7 days.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -957,7 +1212,10 @@
 </tbody></table>
 </li>
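As a sketch of how the delegation-token settings above fit together (the secret is a placeholder; the two intervals spell out the documented defaults of 1 and 7 days):

    # server.properties (illustrative values)
    delegation.token.master.key=change-me-same-on-every-broker   # placeholder secret
    delegation.token.expiry.time.ms=86400000                     # renew at least daily (1-day default)
    delegation.token.max.lifetime.ms=604800000                   # absolute cap of 7 days (default)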
 <li>
-<h4><a id="delete.records.purgatory.purge.interval.requests" href="#delete.records.purgatory.purge.interval.requests">delete.records.purgatory.purge.interval.requests</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="delete.records.purgatory.purge.interval.requests"></a>
+   <a href="#delete.records.purgatory.purge.interval.requests">delete.records.purgatory.purge.interval.requests</a>
+</h4>
 <p>The purge interval (in number of requests) of the delete records request purgatory</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -968,7 +1226,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="fetch.max.bytes" href="#fetch.max.bytes">fetch.max.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="fetch.max.bytes"></a>
+   <a href="#fetch.max.bytes">fetch.max.bytes</a>
+</h4>
 <p>The maximum number of bytes we will return for a fetch request. Must be at least 1024.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -979,7 +1240,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="fetch.purgatory.purge.interval.requests" href="#fetch.purgatory.purge.interval.requests">fetch.purgatory.purge.interval.requests</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="fetch.purgatory.purge.interval.requests"></a>
+   <a href="#fetch.purgatory.purge.interval.requests">fetch.purgatory.purge.interval.requests</a>
+</h4>
 <p>The purge interval (in number of requests) of the fetch request purgatory</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -990,7 +1254,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="group.initial.rebalance.delay.ms" href="#group.initial.rebalance.delay.ms">group.initial.rebalance.delay.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="group.initial.rebalance.delay.ms"></a>
+   <a href="#group.initial.rebalance.delay.ms">group.initial.rebalance.delay.ms</a>
+</h4>
 <p>The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1001,7 +1268,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="group.max.session.timeout.ms" href="#group.max.session.timeout.ms">group.max.session.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="group.max.session.timeout.ms"></a>
+   <a href="#group.max.session.timeout.ms">group.max.session.timeout.ms</a>
+</h4>
 <p>The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1012,7 +1282,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="group.max.size" href="#group.max.size">group.max.size</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="group.max.size"></a>
+   <a href="#group.max.size">group.max.size</a>
+</h4>
 <p>The maximum number of consumers that a single consumer group can accommodate.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1023,7 +1296,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="group.min.session.timeout.ms" href="#group.min.session.timeout.ms">group.min.session.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="group.min.session.timeout.ms"></a>
+   <a href="#group.min.session.timeout.ms">group.min.session.timeout.ms</a>
+</h4>
 <p>The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1034,7 +1310,10 @@
 </tbody></table>
 </li>
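A minimal sketch of how the group-coordination settings above interact; the numbers are assumptions, not recommendations. A consumer whose session.timeout.ms falls outside the min/max bounds is rejected when it tries to join:

    # server.properties (illustrative values)
    group.initial.rebalance.delay.ms=3000   # wait 3s for more members before the first rebalance
    group.min.session.timeout.ms=6000       # reject session timeouts shorter than 6s
    group.max.session.timeout.ms=300000     # reject session timeouts longer than 5 minutes
    group.max.size=500                      # cap a single consumer group at 500 members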
 <li>
-<h4><a id="inter.broker.listener.name" href="#inter.broker.listener.name">inter.broker.listener.name</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="inter.broker.listener.name"></a>
+   <a href="#inter.broker.listener.name">inter.broker.listener.name</a>
+</h4>
 <p>Name of listener used for communication between brokers. If this is unset, the listener name is defined by security.inter.broker.protocol. It is an error to set this and security.inter.broker.protocol properties at the same time.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1045,7 +1324,10 @@
 </tbody></table>
 </li>
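Because setting both inter.broker.listener.name and security.inter.broker.protocol is an error, a broker uses one style or the other. A hedged sketch of the listener-name style (the listener names INTERNAL/EXTERNAL are assumptions):

    # server.properties (illustrative values)
    listeners=INTERNAL://:9092,EXTERNAL://:9093
    listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SASL_SSL
    inter.broker.listener.name=INTERNAL
    # security.inter.broker.protocol must NOT also be set; the two are mutually exclusive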
 <li>
-<h4><a id="inter.broker.protocol.version" href="#inter.broker.protocol.version">inter.broker.protocol.version</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="inter.broker.protocol.version"></a>
+   <a href="#inter.broker.protocol.version">inter.broker.protocol.version</a>
+</h4>
 <p>Specify which version of the inter-broker protocol will be used.<br> This is typically bumped after all brokers have been upgraded to a new version.<br> Examples of some valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1. Check ApiVersion for the full list.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1056,7 +1338,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.cleaner.backoff.ms" href="#log.cleaner.backoff.ms">log.cleaner.backoff.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.cleaner.backoff.ms"></a>
+   <a href="#log.cleaner.backoff.ms">log.cleaner.backoff.ms</a>
+</h4>
 <p>The amount of time to sleep when there are no logs to clean</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -1067,7 +1352,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.cleaner.dedupe.buffer.size" href="#log.cleaner.dedupe.buffer.size">log.cleaner.dedupe.buffer.size</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.cleaner.dedupe.buffer.size"></a>
+   <a href="#log.cleaner.dedupe.buffer.size">log.cleaner.dedupe.buffer.size</a>
+</h4>
 <p>The total memory used for log deduplication across all cleaner threads</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -1078,7 +1366,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.cleaner.delete.retention.ms" href="#log.cleaner.delete.retention.ms">log.cleaner.delete.retention.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.cleaner.delete.retention.ms"></a>
+   <a href="#log.cleaner.delete.retention.ms">log.cleaner.delete.retention.ms</a>
+</h4>
 <p>How long delete records are retained.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -1089,7 +1380,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.cleaner.enable" href="#log.cleaner.enable">log.cleaner.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.cleaner.enable"></a>
+   <a href="#log.cleaner.enable">log.cleaner.enable</a>
+</h4>
 <p>Enable the log cleaner process to run on the server. Should be enabled if using any topics with a cleanup.policy=compact including the internal offsets topic. If disabled those topics will not be compacted and continually grow in size.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -1100,7 +1394,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.cleaner.io.buffer.load.factor" href="#log.cleaner.io.buffer.load.factor">log.cleaner.io.buffer.load.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.cleaner.io.buffer.load.factor"></a>
+   <a href="#log.cleaner.io.buffer.load.factor">log.cleaner.io.buffer.load.factor</a>
+</h4>
 <p>Log cleaner dedupe buffer load factor. The percentage full the dedupe buffer can become. A higher value will allow more log to be cleaned at once but will lead to more hash collisions</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -1111,7 +1408,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.cleaner.io.buffer.size" href="#log.cleaner.io.buffer.size">log.cleaner.io.buffer.size</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.cleaner.io.buffer.size"></a>
+   <a href="#log.cleaner.io.buffer.size">log.cleaner.io.buffer.size</a>
+</h4>
 <p>The total memory used for log cleaner I/O buffers across all cleaner threads</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1122,7 +1422,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.cleaner.io.max.bytes.per.second" href="#log.cleaner.io.max.bytes.per.second">log.cleaner.io.max.bytes.per.second</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.cleaner.io.max.bytes.per.second"></a>
+   <a href="#log.cleaner.io.max.bytes.per.second">log.cleaner.io.max.bytes.per.second</a>
+</h4>
 <p>The log cleaner will be throttled so that the sum of its read and write I/O will be less than this value on average.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -1133,7 +1436,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.cleaner.max.compaction.lag.ms" href="#log.cleaner.max.compaction.lag.ms">log.cleaner.max.compaction.lag.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.cleaner.max.compaction.lag.ms"></a>
+   <a href="#log.cleaner.max.compaction.lag.ms">log.cleaner.max.compaction.lag.ms</a>
+</h4>
 <p>The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -1144,7 +1450,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.cleaner.min.cleanable.ratio" href="#log.cleaner.min.cleanable.ratio">log.cleaner.min.cleanable.ratio</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.cleaner.min.cleanable.ratio"></a>
+   <a href="#log.cleaner.min.cleanable.ratio">log.cleaner.min.cleanable.ratio</a>
+</h4>
 <p>The minimum ratio of dirty log to total log for a log to be eligible for cleaning. If the log.cleaner.max.compaction.lag.ms or the log.cleaner.min.compaction.lag.ms configurations are also specified, then the log compactor considers the log eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the log.cleaner.min.compaction.lag.ms duration, or (ii) the log has had dirty (uncompacted) records for at most the log.cleaner.max.compaction.lag.ms period.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -1155,7 +1464,10 @@
 </tbody></table>
 </li>
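To make the either/or rule above concrete, an illustrative tuning: a log becomes eligible for compaction once it is 50% dirty and has held dirty records for at least 10 minutes, or unconditionally once dirty records are 24 hours old:

    # server.properties (illustrative values)
    log.cleaner.enable=true
    log.cleaner.min.cleanable.ratio=0.5
    log.cleaner.min.compaction.lag.ms=600000     # 10 minutes
    log.cleaner.max.compaction.lag.ms=86400000   # 24 hours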
 <li>
-<h4><a id="log.cleaner.min.compaction.lag.ms" href="#log.cleaner.min.compaction.lag.ms">log.cleaner.min.compaction.lag.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.cleaner.min.compaction.lag.ms"></a>
+   <a href="#log.cleaner.min.compaction.lag.ms">log.cleaner.min.compaction.lag.ms</a>
+</h4>
 <p>The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -1166,7 +1478,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.cleaner.threads" href="#log.cleaner.threads">log.cleaner.threads</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.cleaner.threads"></a>
+   <a href="#log.cleaner.threads">log.cleaner.threads</a>
+</h4>
 <p>The number of background threads to use for log cleaning</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1177,7 +1492,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.cleanup.policy" href="#log.cleanup.policy">log.cleanup.policy</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.cleanup.policy"></a>
+   <a href="#log.cleanup.policy">log.cleanup.policy</a>
+</h4>
 <p>The default cleanup policy for segments beyond the retention window. A comma separated list of valid policies. Valid policies are: "delete" and "compact"</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -1188,7 +1506,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.index.interval.bytes" href="#log.index.interval.bytes">log.index.interval.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.index.interval.bytes"></a>
+   <a href="#log.index.interval.bytes">log.index.interval.bytes</a>
+</h4>
 <p>The interval with which we add an entry to the offset index</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1199,7 +1520,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.index.size.max.bytes" href="#log.index.size.max.bytes">log.index.size.max.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.index.size.max.bytes"></a>
+   <a href="#log.index.size.max.bytes">log.index.size.max.bytes</a>
+</h4>
 <p>The maximum size in bytes of the offset index</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1210,7 +1534,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.message.format.version" href="#log.message.format.version">log.message.format.version</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.message.format.version"></a>
+   <a href="#log.message.format.version">log.message.format.version</a>
+</h4>
 <p>Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0; check ApiVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller than or equal to the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1221,7 +1548,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.message.timestamp.difference.max.ms" href="#log.message.timestamp.difference.max.ms">log.message.timestamp.difference.max.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.message.timestamp.difference.max.ms"></a>
+   <a href="#log.message.timestamp.difference.max.ms">log.message.timestamp.difference.max.ms</a>
+</h4>
 <p>The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If log.message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime. The maximum timestamp difference allowed should be no greater than log.retention.ms to avoid unnecessarily frequent log rolling.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -1232,7 +1562,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.message.timestamp.type" href="#log.message.timestamp.type">log.message.timestamp.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.message.timestamp.type"></a>
+   <a href="#log.message.timestamp.type">log.message.timestamp.type</a>
+</h4>
 <p>Define whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or `LogAppendTime`</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1243,7 +1576,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.preallocate" href="#log.preallocate">log.preallocate</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.preallocate"></a>
+   <a href="#log.preallocate">log.preallocate</a>
+</h4>
 <p>Whether to preallocate the file when creating a new segment. If you are using Kafka on Windows, you probably need to set this to true.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -1254,7 +1590,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.retention.check.interval.ms" href="#log.retention.check.interval.ms">log.retention.check.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.retention.check.interval.ms"></a>
+   <a href="#log.retention.check.interval.ms">log.retention.check.interval.ms</a>
+</h4>
 <p>The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -1265,7 +1604,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="max.connections" href="#max.connections">max.connections</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="max.connections"></a>
+   <a href="#max.connections">max.connections</a>
+</h4>
 <p>The maximum number of connections we allow in the broker at any time. This limit is applied in addition to any per-ip limits configured using max.connections.per.ip. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, <code>listener.name.internal.max.connections</code>. The broker-wide limit should be configured based on broker capacity, while listener limits should be configured based on application requirements. New connections are blocked if either the listener or broker limit is reached. Connections on the inter-broker listener are permitted even if the broker-wide limit is reached; in that case, the least recently used connection on another listener is closed.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1276,7 +1618,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="max.connections.per.ip" href="#max.connections.per.ip">max.connections.per.ip</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="max.connections.per.ip"></a>
+   <a href="#max.connections.per.ip">max.connections.per.ip</a>
+</h4>
 <p>The maximum number of connections we allow from each ip address. This can be set to 0 if there are overrides configured using max.connections.per.ip.overrides property. New connections from the ip address are dropped if the limit is reached.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1287,7 +1632,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="max.connections.per.ip.overrides" href="#max.connections.per.ip.overrides">max.connections.per.ip.overrides</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="max.connections.per.ip.overrides"></a>
+   <a href="#max.connections.per.ip.overrides">max.connections.per.ip.overrides</a>
+</h4>
 <p>A comma-separated list of per-ip or hostname overrides to the default maximum number of connections. An example value is "hostName:100,127.0.0.1:200"</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1298,7 +1646,10 @@
 </tbody></table>
 </li>
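The three connection limits compose as follows; the counts are assumptions, and the override string is the description's own example:

    # server.properties (illustrative values)
    max.connections=1000                                          # broker-wide cap
    max.connections.per.ip=50                                     # default per-IP cap
    max.connections.per.ip.overrides=hostName:100,127.0.0.1:200   # named exceptions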
 <li>
-<h4><a id="max.incremental.fetch.session.cache.slots" href="#max.incremental.fetch.session.cache.slots">max.incremental.fetch.session.cache.slots</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="max.incremental.fetch.session.cache.slots"></a>
+   <a href="#max.incremental.fetch.session.cache.slots">max.incremental.fetch.session.cache.slots</a>
+</h4>
 <p>The maximum number of incremental fetch sessions that we will maintain.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1309,7 +1660,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="num.partitions" href="#num.partitions">num.partitions</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="num.partitions"></a>
+   <a href="#num.partitions">num.partitions</a>
+</h4>
 <p>The default number of log partitions per topic</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1320,7 +1674,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="password.encoder.old.secret" href="#password.encoder.old.secret">password.encoder.old.secret</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="password.encoder.old.secret"></a>
+   <a href="#password.encoder.old.secret">password.encoder.old.secret</a>
+</h4>
 <p>The old secret that was used for encoding dynamically configured passwords. This is required only when the secret is updated. If specified, all dynamically encoded passwords are decoded using this old secret and re-encoded using password.encoder.secret when broker starts up.</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -1331,7 +1688,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="password.encoder.secret" href="#password.encoder.secret">password.encoder.secret</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="password.encoder.secret"></a>
+   <a href="#password.encoder.secret">password.encoder.secret</a>
+</h4>
 <p>The secret used for encoding dynamically configured passwords for this broker.</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -1342,7 +1702,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="principal.builder.class" href="#principal.builder.class">principal.builder.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="principal.builder.class"></a>
+   <a href="#principal.builder.class">principal.builder.class</a>
+</h4>
 <p>The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, which is used to build the KafkaPrincipal object used during authorization. This config also supports the deprecated PrincipalBuilder interface which was previously used for client authentication over SSL. If no principal builder is defined, the default behavior depends on the security protocol in use. For SSL authentication,  the principal will be derived using the rules defined by <code>ssl.principal.mapping.rules</code> applied on the distinguished name from the client certificate if one is provided; otherwise, if client authentication is not required, the principal name will be ANONYMOUS. For SASL authentication, the principal will be derived using the rules defined by <code>sasl.kerberos.principal.to.local.rules</code> if GSSAPI is in use, and the SASL authentication ID for other mechanisms. For PLAINTEXT, the principal will be ANONYMOUS.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -1353,7 +1716,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="producer.purgatory.purge.interval.requests" href="#producer.purgatory.purge.interval.requests">producer.purgatory.purge.interval.requests</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="producer.purgatory.purge.interval.requests"></a>
+   <a href="#producer.purgatory.purge.interval.requests">producer.purgatory.purge.interval.requests</a>
+</h4>
 <p>The purge interval (in number of requests) of the producer request purgatory</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1364,7 +1730,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="queued.max.request.bytes" href="#queued.max.request.bytes">queued.max.request.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="queued.max.request.bytes"></a>
+   <a href="#queued.max.request.bytes">queued.max.request.bytes</a>
+</h4>
 <p>The number of queued bytes allowed before no more requests are read</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -1375,7 +1744,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="replica.fetch.backoff.ms" href="#replica.fetch.backoff.ms">replica.fetch.backoff.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="replica.fetch.backoff.ms"></a>
+   <a href="#replica.fetch.backoff.ms">replica.fetch.backoff.ms</a>
+</h4>
 <p>The amount of time to sleep when a fetch partition error occurs.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1386,7 +1758,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="replica.fetch.max.bytes" href="#replica.fetch.max.bytes">replica.fetch.max.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="replica.fetch.max.bytes"></a>
+   <a href="#replica.fetch.max.bytes">replica.fetch.max.bytes</a>
+</h4>
 <p>The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum; if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config).</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1397,7 +1772,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="replica.fetch.response.max.bytes" href="#replica.fetch.response.max.bytes">replica.fetch.response.max.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="replica.fetch.response.max.bytes"></a>
+   <a href="#replica.fetch.response.max.bytes">replica.fetch.response.max.bytes</a>
+</h4>
 <p>Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via <code>message.max.bytes</code> (broker config) or <code>max.message.bytes</code> (topic config).</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1408,7 +1786,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="replica.selector.class" href="#replica.selector.class">replica.selector.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="replica.selector.class"></a>
+   <a href="#replica.selector.class">replica.selector.class</a>
+</h4>
 <p>The fully qualified class name that implements ReplicaSelector. This is used by the broker to find the preferred read replica. By default, we use an implementation that returns the leader.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1419,7 +1800,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="reserved.broker.max.id" href="#reserved.broker.max.id">reserved.broker.max.id</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="reserved.broker.max.id"></a>
+   <a href="#reserved.broker.max.id">reserved.broker.max.id</a>
+</h4>
 <p>The maximum number that can be used for a broker.id.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1430,7 +1814,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.client.callback.handler.class" href="#sasl.client.callback.handler.class">sasl.client.callback.handler.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.client.callback.handler.class"></a>
+   <a href="#sasl.client.callback.handler.class">sasl.client.callback.handler.class</a>
+</h4>
 <p>The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -1441,7 +1828,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.enabled.mechanisms" href="#sasl.enabled.mechanisms">sasl.enabled.mechanisms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.enabled.mechanisms"></a>
+   <a href="#sasl.enabled.mechanisms">sasl.enabled.mechanisms</a>
+</h4>
 <p>The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -1452,7 +1842,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.jaas.config" href="#sasl.jaas.config">sasl.jaas.config</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.jaas.config"></a>
+   <a href="#sasl.jaas.config">sasl.jaas.config</a>
+</h4>
 <p>JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html">here</a>. The format for the value is: '<code>loginModuleClass controlFlag (optionName=optionValue)*;</code>'. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -1463,7 +1856,10 @@
 </tbody></table>
 </li>
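A hedged sketch of the documented '<code>loginModuleClass controlFlag (optionName=optionValue)*;</code>' format with the listener/mechanism prefix, using the standard PLAIN login module; the usernames and passwords are placeholders:

    # server.properties (illustrative values)
    listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
        username="admin" \
        password="admin-secret" \
        user_admin="admin-secret";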
 <li>
-<h4><a id="sasl.kerberos.kinit.cmd" href="#sasl.kerberos.kinit.cmd">sasl.kerberos.kinit.cmd</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.kinit.cmd"></a>
+   <a href="#sasl.kerberos.kinit.cmd">sasl.kerberos.kinit.cmd</a>
+</h4>
 <p>Kerberos kinit command path.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1474,7 +1870,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.min.time.before.relogin" href="#sasl.kerberos.min.time.before.relogin">sasl.kerberos.min.time.before.relogin</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.min.time.before.relogin"></a>
+   <a href="#sasl.kerberos.min.time.before.relogin">sasl.kerberos.min.time.before.relogin</a>
+</h4>
 <p>Login thread sleep time between refresh attempts.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -1485,7 +1884,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.principal.to.local.rules" href="#sasl.kerberos.principal.to.local.rules">sasl.kerberos.principal.to.local.rules</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.principal.to.local.rules"></a>
+   <a href="#sasl.kerberos.principal.to.local.rules">sasl.kerberos.principal.to.local.rules</a>
+</h4>
 <p>A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format please see <a href="#security_authz"> security authorization and acls</a>. Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the <code>principal.builder.class</code> configuration.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -1496,7 +1898,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.service.name" href="#sasl.kerberos.service.name">sasl.kerberos.service.name</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.service.name"></a>
+   <a href="#sasl.kerberos.service.name">sasl.kerberos.service.name</a>
+</h4>
 <p>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1507,7 +1912,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.ticket.renew.jitter" href="#sasl.kerberos.ticket.renew.jitter">sasl.kerberos.ticket.renew.jitter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.jitter"></a>
+   <a href="#sasl.kerberos.ticket.renew.jitter">sasl.kerberos.ticket.renew.jitter</a>
+</h4>
 <p>Percentage of random jitter added to the renewal time.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -1518,7 +1926,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.ticket.renew.window.factor" href="#sasl.kerberos.ticket.renew.window.factor">sasl.kerberos.ticket.renew.window.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.window.factor"></a>
+   <a href="#sasl.kerberos.ticket.renew.window.factor">sasl.kerberos.ticket.renew.window.factor</a>
+</h4>
 <p>Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -1529,7 +1940,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.callback.handler.class" href="#sasl.login.callback.handler.class">sasl.login.callback.handler.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.callback.handler.class"></a>
+   <a href="#sasl.login.callback.handler.class">sasl.login.callback.handler.class</a>
+</h4>
 <p>The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -1540,7 +1954,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.class" href="#sasl.login.class">sasl.login.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.class"></a>
+   <a href="#sasl.login.class">sasl.login.class</a>
+</h4>
 <p>The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -1551,7 +1968,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.buffer.seconds" href="#sasl.login.refresh.buffer.seconds">sasl.login.refresh.buffer.seconds</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.buffer.seconds"></a>
+   <a href="#sasl.login.refresh.buffer.seconds">sasl.login.refresh.buffer.seconds</a>
+</h4>
 <p>The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of  300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -1562,7 +1982,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.min.period.seconds" href="#sasl.login.refresh.min.period.seconds">sasl.login.refresh.min.period.seconds</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.min.period.seconds"></a>
+   <a href="#sasl.login.refresh.min.period.seconds">sasl.login.refresh.min.period.seconds</a>
+</h4>
 <p>The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified.  This value and  sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -1573,7 +1996,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.window.factor" href="#sasl.login.refresh.window.factor">sasl.login.refresh.window.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.window.factor"></a>
+   <a href="#sasl.login.refresh.window.factor">sasl.login.refresh.window.factor</a>
+</h4>
 <p>Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -1584,7 +2010,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.window.jitter" href="#sasl.login.refresh.window.jitter">sasl.login.refresh.window.jitter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.window.jitter"></a>
+   <a href="#sasl.login.refresh.window.jitter">sasl.login.refresh.window.jitter</a>
+</h4>
 <p>The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -1595,7 +2024,10 @@
 </tbody></table>
 </li>
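The four OAUTHBEARER refresh knobs above work together; writing out the documented defaults makes the relationship explicit:

    # server.properties (documented defaults, spelled out)
    sasl.login.refresh.buffer.seconds=300      # keep a 5-minute buffer before credential expiry
    sasl.login.refresh.min.period.seconds=60   # never refresh more often than once a minute
    sasl.login.refresh.window.factor=0.8       # target a refresh at 80% of the credential lifetime
    sasl.login.refresh.window.jitter=0.05      # plus up to 5% random jitter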
 <li>
-<h4><a id="sasl.mechanism.inter.broker.protocol" href="#sasl.mechanism.inter.broker.protocol">sasl.mechanism.inter.broker.protocol</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.mechanism.inter.broker.protocol"></a>
+   <a href="#sasl.mechanism.inter.broker.protocol">sasl.mechanism.inter.broker.protocol</a>
+</h4>
 <p>SASL mechanism used for inter-broker communication. Default is GSSAPI.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1606,7 +2038,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.server.callback.handler.class" href="#sasl.server.callback.handler.class">sasl.server.callback.handler.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.server.callback.handler.class"></a>
+   <a href="#sasl.server.callback.handler.class">sasl.server.callback.handler.class</a>
+</h4>
 <p>The fully qualified name of a SASL server callback handler class that implements the AuthenticateCallbackHandler interface. Server callback handlers must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=com.example.CustomPlainCallbackHandler.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -1617,7 +2052,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="security.inter.broker.protocol" href="#security.inter.broker.protocol">security.inter.broker.protocol</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="security.inter.broker.protocol"></a>
+   <a href="#security.inter.broker.protocol">security.inter.broker.protocol</a>
+</h4>
 <p>Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error to set this and inter.broker.listener.name properties at the same time.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1628,7 +2066,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.cipher.suites" href="#ssl.cipher.suites">ssl.cipher.suites</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.cipher.suites"></a>
+   <a href="#ssl.cipher.suites">ssl.cipher.suites</a>
+</h4>
 <p>A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -1639,7 +2080,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.client.auth" href="#ssl.client.auth">ssl.client.auth</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.client.auth"></a>
+   <a href="#ssl.client.auth">ssl.client.auth</a>
+</h4>
 <p>Configures the Kafka broker to request client authentication. The following settings are common: <ul> <li><code>ssl.client.auth=required</code> If set to required, client authentication is required. <li><code>ssl.client.auth=requested</code> This means client authentication is optional. Unlike required, if this option is set a client can choose not to provide authentication information about itself. <li><code>ssl.client.auth=none</code> This means client authentication is not needed.</ul></p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1650,7 +2094,10 @@
 </tbody></table>
 </li>
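The three documented modes side by side; a broker sets exactly one:

    ssl.client.auth=required    # handshake fails if the client presents no certificate
    ssl.client.auth=requested   # a certificate is requested, but the client may decline
    ssl.client.auth=none        # no client authentication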
 <li>
-<h4><a id="ssl.enabled.protocols" href="#ssl.enabled.protocols">ssl.enabled.protocols</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.enabled.protocols"></a>
+   <a href="#ssl.enabled.protocols">ssl.enabled.protocols</a>
+</h4>
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -1661,7 +2108,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.key.password" href="#ssl.key.password">ssl.key.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.key.password"></a>
+   <a href="#ssl.key.password">ssl.key.password</a>
+</h4>
 <p>The password of the private key in the key store file. This is optional for client.</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -1672,7 +2122,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keymanager.algorithm" href="#ssl.keymanager.algorithm">ssl.keymanager.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keymanager.algorithm"></a>
+   <a href="#ssl.keymanager.algorithm">ssl.keymanager.algorithm</a>
+</h4>
 <p>The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1683,7 +2136,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keystore.location" href="#ssl.keystore.location">ssl.keystore.location</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keystore.location"></a>
+   <a href="#ssl.keystore.location">ssl.keystore.location</a>
+</h4>
 <p>The location of the key store file. This is optional for client and can be used for two-way authentication for client.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1694,7 +2150,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keystore.password" href="#ssl.keystore.password">ssl.keystore.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keystore.password"></a>
+   <a href="#ssl.keystore.password">ssl.keystore.password</a>
+</h4>
 <p>The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. </p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -1705,7 +2164,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keystore.type" href="#ssl.keystore.type">ssl.keystore.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keystore.type"></a>
+   <a href="#ssl.keystore.type">ssl.keystore.type</a>
+</h4>
 <p>The file format of the key store file. This is optional for client.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1716,7 +2178,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.protocol" href="#ssl.protocol">ssl.protocol</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.protocol"></a>
+   <a href="#ssl.protocol">ssl.protocol</a>
+</h4>
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1727,7 +2192,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.provider" href="#ssl.provider">ssl.provider</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.provider"></a>
+   <a href="#ssl.provider">ssl.provider</a>
+</h4>
 <p>The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1738,7 +2206,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.trustmanager.algorithm" href="#ssl.trustmanager.algorithm">ssl.trustmanager.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.trustmanager.algorithm"></a>
+   <a href="#ssl.trustmanager.algorithm">ssl.trustmanager.algorithm</a>
+</h4>
 <p>The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1749,7 +2220,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.truststore.location" href="#ssl.truststore.location">ssl.truststore.location</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.truststore.location"></a>
+   <a href="#ssl.truststore.location">ssl.truststore.location</a>
+</h4>
 <p>The location of the trust store file. </p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1760,7 +2234,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.truststore.password" href="#ssl.truststore.password">ssl.truststore.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.truststore.password"></a>
+   <a href="#ssl.truststore.password">ssl.truststore.password</a>
+</h4>
 <p>The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -1771,7 +2248,10 @@
 </tbody></table>
 </li>
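A hedged two-way-TLS sketch combining the key/trust store settings above; every path and password is a placeholder:

    # server.properties (illustrative values)
    ssl.client.auth=required
    ssl.keystore.location=/var/private/ssl/kafka.server.keystore.jks
    ssl.keystore.password=keystore-secret
    ssl.key.password=key-secret
    ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
    ssl.truststore.password=truststore-secret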
 <li>
-<h4><a id="ssl.truststore.type" href="#ssl.truststore.type">ssl.truststore.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.truststore.type"></a>
+   <a href="#ssl.truststore.type">ssl.truststore.type</a>
+</h4>
 <p>The file format of the trust store file.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1782,7 +2262,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.clientCnxnSocket" href="#zookeeper.clientCnxnSocket">zookeeper.clientCnxnSocket</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.clientCnxnSocket"></a>
+   <a href="#zookeeper.clientCnxnSocket">zookeeper.clientCnxnSocket</a>
+</h4>
 <p>Typically set to <code>org.apache.zookeeper.ClientCnxnSocketNetty</code> when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the same-named <code>zookeeper.clientCnxnSocket</code> system property.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1793,7 +2276,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.ssl.client.enable" href="#zookeeper.ssl.client.enable">zookeeper.ssl.client.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.ssl.client.enable"></a>
+   <a href="#zookeeper.ssl.client.enable">zookeeper.ssl.client.enable</a>
+</h4>
 <p>Set client to use TLS when connecting to ZooKeeper. An explicit value overrides any value set via the <code>zookeeper.client.secure</code> system property (note the different name). Defaults to false if neither is set; when true, <code>zookeeper.clientCnxnSocket</code> must be set (typically to <code>org.apache.zookeeper.ClientCnxnSocketNetty</code>); other values to set may include <code>zookeeper.ssl.cipher.suites</code>, <code>zookeeper.ssl.crl.enable</code>, <code>zookeeper.ssl.enabled.protocols</code>, <code>zookeeper.ssl.endpoint.identification.algorithm</code>, <code>zookeeper.ssl.keystore.location</code>, <code>zookeeper.ssl.keystore.password</code>, <code>zookeeper.ssl.keystore.type</code>, <code>zookeeper.ssl.ocsp.enable</code>, <code>zookeeper.ssl.protocol</code>, <code>zookeeper.ssl.truststore.location</code>, <code>zookeeper.ssl.truststore.password</code>, <code>zookeeper.ssl.truststore.type</code></p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -1804,7 +2290,10 @@
 </tbody></table>
 </li>
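Following the description above, a minimal sketch of TLS to ZooKeeper; the truststore path and password are placeholders:

    # server.properties (illustrative values)
    zookeeper.ssl.client.enable=true
    zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty   # required when TLS is enabled
    zookeeper.ssl.truststore.location=/var/private/ssl/zk.truststore.jks
    zookeeper.ssl.truststore.password=zk-trust-secret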
 <li>
-<h4><a id="zookeeper.ssl.keystore.location" href="#zookeeper.ssl.keystore.location">zookeeper.ssl.keystore.location</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.ssl.keystore.location"></a>
+   <a href="#zookeeper.ssl.keystore.location">zookeeper.ssl.keystore.location</a>
+</h4>
 <p>Keystore location when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the <code>zookeeper.ssl.keyStore.location</code> system property (note the camelCase).</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1815,7 +2304,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.ssl.keystore.password" href="#zookeeper.ssl.keystore.password">zookeeper.ssl.keystore.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.ssl.keystore.password"></a>
+   <a href="#zookeeper.ssl.keystore.password">zookeeper.ssl.keystore.password</a>
+</h4>
 <p>Keystore password when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the <code>zookeeper.ssl.keyStore.password</code> system property (note the camelCase). Note that ZooKeeper does not support a key password different from the keystore password, so be sure to set the key password in the keystore to be identical to the keystore password; otherwise the connection attempt to ZooKeeper will fail.</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -1826,7 +2318,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.ssl.keystore.type" href="#zookeeper.ssl.keystore.type">zookeeper.ssl.keystore.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.ssl.keystore.type"></a>
+   <a href="#zookeeper.ssl.keystore.type">zookeeper.ssl.keystore.type</a>
+</h4>
 <p>Keystore type when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the <code>zookeeper.ssl.keyStore.type</code> system property (note the camelCase). The default value of <code>null</code> means the type will be auto-detected based on the filename extension of the keystore.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1837,7 +2332,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.ssl.truststore.location" href="#zookeeper.ssl.truststore.location">zookeeper.ssl.truststore.location</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.ssl.truststore.location"></a>
+   <a href="#zookeeper.ssl.truststore.location">zookeeper.ssl.truststore.location</a>
+</h4>
 <p>Truststore location when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the <code>zookeeper.ssl.trustStore.location</code> system property (note the camelCase).</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1848,7 +2346,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.ssl.truststore.password" href="#zookeeper.ssl.truststore.password">zookeeper.ssl.truststore.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.ssl.truststore.password"></a>
+   <a href="#zookeeper.ssl.truststore.password">zookeeper.ssl.truststore.password</a>
+</h4>
 <p>Truststore password when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the <code>zookeeper.ssl.trustStore.password</code> system property (note the camelCase).</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -1859,7 +2360,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.ssl.truststore.type" href="#zookeeper.ssl.truststore.type">zookeeper.ssl.truststore.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.ssl.truststore.type"></a>
+   <a href="#zookeeper.ssl.truststore.type">zookeeper.ssl.truststore.type</a>
+</h4>
 <p>Truststore type when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the <code>zookeeper.ssl.trustStore.type</code> system property (note the camelCase). The default value of <code>null</code> means the type will be auto-detected based on the filename extension of the truststore.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1870,7 +2374,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="alter.config.policy.class.name" href="#alter.config.policy.class.name">alter.config.policy.class.name</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="alter.config.policy.class.name"></a>
+   <a href="#alter.config.policy.class.name">alter.config.policy.class.name</a>
+</h4>
 <p>The alter configs policy class that should be used for validation. The class should implement the <code>org.apache.kafka.server.policy.AlterConfigPolicy</code> interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -1881,7 +2388,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="alter.log.dirs.replication.quota.window.num" href="#alter.log.dirs.replication.quota.window.num">alter.log.dirs.replication.quota.window.num</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="alter.log.dirs.replication.quota.window.num"></a>
+   <a href="#alter.log.dirs.replication.quota.window.num">alter.log.dirs.replication.quota.window.num</a>
+</h4>
 <p>The number of samples to retain in memory for alter log dirs replication quotas</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1892,7 +2402,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="alter.log.dirs.replication.quota.window.size.seconds" href="#alter.log.dirs.replication.quota.window.size.seconds">alter.log.dirs.replication.quota.window.size.seconds</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="alter.log.dirs.replication.quota.window.size.seconds"></a>
+   <a href="#alter.log.dirs.replication.quota.window.size.seconds">alter.log.dirs.replication.quota.window.size.seconds</a>
+</h4>
 <p>The time span of each sample for alter log dirs replication quotas</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1903,7 +2416,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="authorizer.class.name" href="#authorizer.class.name">authorizer.class.name</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="authorizer.class.name"></a>
+   <a href="#authorizer.class.name">authorizer.class.name</a>
+</h4>
 <p>The fully qualified name of a class that implements the <code>org.apache.kafka.server.authorizer.Authorizer</code> interface, which is used by the broker for authorization. This config also supports authorizers that implement the deprecated kafka.security.auth.Authorizer trait which was previously used for authorization.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1914,7 +2430,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="client.quota.callback.class" href="#client.quota.callback.class">client.quota.callback.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="client.quota.callback.class"></a>
+   <a href="#client.quota.callback.class">client.quota.callback.class</a>
+</h4>
 <p>The fully qualified name of a class that implements the ClientQuotaCallback interface, which is used to determine quota limits applied to client requests. By default, &lt;user, client-id&gt;, &lt;user&gt; or &lt;client-id&gt; quotas stored in ZooKeeper are applied. For any given request, the most specific quota that matches the user principal of the session and the client-id of the request is applied.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -1925,7 +2444,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="connection.failed.authentication.delay.ms" href="#connection.failed.authentication.delay.ms">connection.failed.authentication.delay.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="connection.failed.authentication.delay.ms"></a>
+   <a href="#connection.failed.authentication.delay.ms">connection.failed.authentication.delay.ms</a>
+</h4>
 <p>Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on authentication failure. This must be configured to be less than connections.max.idle.ms to prevent connection timeout.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1936,7 +2458,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="create.topic.policy.class.name" href="#create.topic.policy.class.name">create.topic.policy.class.name</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="create.topic.policy.class.name"></a>
+   <a href="#create.topic.policy.class.name">create.topic.policy.class.name</a>
+</h4>
 <p>The create topic policy class that should be used for validation. The class should implement the <code>org.apache.kafka.server.policy.CreateTopicPolicy</code> interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -1947,7 +2472,10 @@
 </tbody></table>
 </li>
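Along the same lines as the alter-config policy above, a sketch of a creation policy might look like the following; the minimum-replication-factor check is illustrative, not part of Kafka.

```java
import java.util.Map;

import org.apache.kafka.common.errors.PolicyViolationException;
import org.apache.kafka.server.policy.CreateTopicPolicy;

// Hypothetical policy: require new topics to be created with replication factor >= 3.
public class MinReplicationCreateTopicPolicy implements CreateTopicPolicy {

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void validate(RequestMetadata requestMetadata) throws PolicyViolationException {
        // replicationFactor() is null when the request supplies explicit replica assignments.
        Short replicationFactor = requestMetadata.replicationFactor();
        if (replicationFactor != null && replicationFactor < 3)
            throw new PolicyViolationException("Topic " + requestMetadata.topic()
                + " must have a replication factor of at least 3");
    }

    @Override
    public void close() { }
}
```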
 <li>
-<h4><a id="delegation.token.expiry.check.interval.ms" href="#delegation.token.expiry.check.interval.ms">delegation.token.expiry.check.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="delegation.token.expiry.check.interval.ms"></a>
+   <a href="#delegation.token.expiry.check.interval.ms">delegation.token.expiry.check.interval.ms</a>
+</h4>
 <p>Scan interval to remove expired delegation tokens.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -1958,7 +2486,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="kafka.metrics.polling.interval.secs" href="#kafka.metrics.polling.interval.secs">kafka.metrics.polling.interval.secs</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="kafka.metrics.polling.interval.secs"></a>
+   <a href="#kafka.metrics.polling.interval.secs">kafka.metrics.polling.interval.secs</a>
+</h4>
 <p>The metrics polling interval (in seconds) which can be used in kafka.metrics.reporters implementations.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -1969,7 +2500,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="kafka.metrics.reporters" href="#kafka.metrics.reporters">kafka.metrics.reporters</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="kafka.metrics.reporters"></a>
+   <a href="#kafka.metrics.reporters">kafka.metrics.reporters</a>
+</h4>
 <p>A list of classes to use as Yammer metrics custom reporters. The reporters should implement <code>kafka.metrics.KafkaMetricsReporter</code> trait. If a client wants to expose JMX operations on a custom reporter, the custom reporter needs to additionally implement an MBean trait that extends <code>kafka.metrics.KafkaMetricsReporterMBean</code> trait so that the registered MBean is compliant with the standard MBean convention.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -1980,7 +2514,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="listener.security.protocol.map" href="#listener.security.protocol.map">listener.security.protocol.map</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="listener.security.protocol.map"></a>
+   <a href="#listener.security.protocol.map">listener.security.protocol.map</a>
+</h4>
 <p>Map between listener names and security protocols. This must be defined for the same security protocol to be usable on more than one port or IP. For example, internal and external traffic can be separated even if SSL is required for both. Concretely, the user could define listeners with names INTERNAL and EXTERNAL and this property as: `INTERNAL:SSL,EXTERNAL:SSL`. As shown, key and value are separated by a colon and map entries are separated by commas. Each listener name should only appear once in the map. Different security (SSL and SASL) settings can be configured for each listener by adding a normalised prefix (the listener name is lowercased) to the config name. For example, to set a different keystore for the INTERNAL listener, a config with name <code>listener.name.internal.ssl.keystore.location</code> would be set. If the config for the listener name is not set, the config will fall back to the generic config (i.e. <code>ssl.keystore.location</code>). </p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -1991,7 +2528,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="log.message.downconversion.enable" href="#log.message.downconversion.enable">log.message.downconversion.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="log.message.downconversion.enable"></a>
+   <a href="#log.message.downconversion.enable">log.message.downconversion.enable</a>
+</h4>
 <p>This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to <code>false</code>, the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with <code>UNSUPPORTED_VERSION</code> error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -2002,7 +2542,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metric.reporters" href="#metric.reporters">metric.reporters</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metric.reporters"></a>
+   <a href="#metric.reporters">metric.reporters</a>
+</h4>
 <p>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -2013,7 +2556,10 @@
 </tbody></table>
 </li>
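As a sketch of this plug-in surface, a reporter that simply logs metric lifecycle events could look like the following; the class name and the stdout logging are illustrative.

```java
import java.util.List;
import java.util.Map;

import org.apache.kafka.common.metrics.KafkaMetric;
import org.apache.kafka.common.metrics.MetricsReporter;

// Illustrative reporter that prints metric lifecycle events.
public class LoggingMetricsReporter implements MetricsReporter {

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public void init(List<KafkaMetric> metrics) {
        // Called once with all metrics that already exist when the reporter is registered.
        metrics.forEach(m -> System.out.println("init: " + m.metricName()));
    }

    @Override
    public void metricChange(KafkaMetric metric) {
        // Called whenever a metric is added or updated.
        System.out.println("change: " + metric.metricName());
    }

    @Override
    public void metricRemoval(KafkaMetric metric) {
        System.out.println("removed: " + metric.metricName());
    }

    @Override
    public void close() { }
}
```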
 <li>
-<h4><a id="metrics.num.samples" href="#metrics.num.samples">metrics.num.samples</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.num.samples"></a>
+   <a href="#metrics.num.samples">metrics.num.samples</a>
+</h4>
 <p>The number of samples maintained to compute metrics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -2024,7 +2570,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.recording.level" href="#metrics.recording.level">metrics.recording.level</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.recording.level"></a>
+   <a href="#metrics.recording.level">metrics.recording.level</a>
+</h4>
 <p>The highest recording level for metrics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -2035,7 +2584,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.sample.window.ms" href="#metrics.sample.window.ms">metrics.sample.window.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.sample.window.ms"></a>
+   <a href="#metrics.sample.window.ms">metrics.sample.window.ms</a>
+</h4>
 <p>The window of time a metrics sample is computed over.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -2046,7 +2598,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="password.encoder.cipher.algorithm" href="#password.encoder.cipher.algorithm">password.encoder.cipher.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="password.encoder.cipher.algorithm"></a>
+   <a href="#password.encoder.cipher.algorithm">password.encoder.cipher.algorithm</a>
+</h4>
 <p>The Cipher algorithm used for encoding dynamically configured passwords.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -2057,7 +2612,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="password.encoder.iterations" href="#password.encoder.iterations">password.encoder.iterations</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="password.encoder.iterations"></a>
+   <a href="#password.encoder.iterations">password.encoder.iterations</a>
+</h4>
 <p>The iteration count used for encoding dynamically configured passwords.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -2068,7 +2626,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="password.encoder.key.length" href="#password.encoder.key.length">password.encoder.key.length</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="password.encoder.key.length"></a>
+   <a href="#password.encoder.key.length">password.encoder.key.length</a>
+</h4>
 <p>The key length used for encoding dynamically configured passwords.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -2079,7 +2640,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="password.encoder.keyfactory.algorithm" href="#password.encoder.keyfactory.algorithm">password.encoder.keyfactory.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="password.encoder.keyfactory.algorithm"></a>
+   <a href="#password.encoder.keyfactory.algorithm">password.encoder.keyfactory.algorithm</a>
+</h4>
 <p>The SecretKeyFactory algorithm used for encoding dynamically configured passwords. Default is PBKDF2WithHmacSHA512 if available and PBKDF2WithHmacSHA1 otherwise.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -2090,7 +2654,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="quota.window.num" href="#quota.window.num">quota.window.num</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="quota.window.num"></a>
+   <a href="#quota.window.num">quota.window.num</a>
+</h4>
 <p>The number of samples to retain in memory for client quotas</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -2101,7 +2668,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="quota.window.size.seconds" href="#quota.window.size.seconds">quota.window.size.seconds</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="quota.window.size.seconds"></a>
+   <a href="#quota.window.size.seconds">quota.window.size.seconds</a>
+</h4>
 <p>The time span of each sample for client quotas</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -2112,7 +2682,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="replication.quota.window.num" href="#replication.quota.window.num">replication.quota.window.num</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="replication.quota.window.num"></a>
+   <a href="#replication.quota.window.num">replication.quota.window.num</a>
+</h4>
 <p>The number of samples to retain in memory for replication quotas</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -2123,7 +2696,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="replication.quota.window.size.seconds" href="#replication.quota.window.size.seconds">replication.quota.window.size.seconds</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="replication.quota.window.size.seconds"></a>
+   <a href="#replication.quota.window.size.seconds">replication.quota.window.size.seconds</a>
+</h4>
 <p>The time span of each sample for replication quotas</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -2134,7 +2710,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="security.providers" href="#security.providers">security.providers</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="security.providers"></a>
+   <a href="#security.providers">security.providers</a>
+</h4>
 <p>A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the <code>org.apache.kafka.common.security.auth.SecurityProviderCreator</code> interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -2145,7 +2724,10 @@
 </tbody></table>
 </li>
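A minimal creator might look like the sketch below, assuming the interface exposes a single getProvider() method alongside configure(); the provider returned here is an empty stand-in, not a usable security provider.

```java
import java.security.Provider;
import java.util.Map;

import org.apache.kafka.common.security.auth.SecurityProviderCreator;

// Illustrative creator that supplies a hypothetical custom JCA provider.
public class CustomSecurityProviderCreator implements SecurityProviderCreator {

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public Provider getProvider() {
        // Any java.security.Provider works here; this anonymous one is a placeholder
        // that registers no algorithms.
        return new Provider("custom", 1.0, "Example provider for illustration") { };
    }
}
```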
 <li>
-<h4><a id="ssl.endpoint.identification.algorithm" href="#ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.endpoint.identification.algorithm"></a>
+   <a href="#ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a>
+</h4>
 <p>The endpoint identification algorithm to validate server hostname using server certificate. </p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -2156,7 +2738,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.engine.factory.class" href="#ssl.engine.factory.class">ssl.engine.factory.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.engine.factory.class"></a>
+   <a href="#ssl.engine.factory.class">ssl.engine.factory.class</a>
+</h4>
 <p>The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -2167,7 +2752,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.principal.mapping.rules" href="#ssl.principal.mapping.rules">ssl.principal.mapping.rules</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.principal.mapping.rules"></a>
+   <a href="#ssl.principal.mapping.rules">ssl.principal.mapping.rules</a>
+</h4>
 <p>A list of rules for mapping the distinguished name from the client certificate to a short name. The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, the distinguished name of the X.500 certificate will be the principal. For more details on the format please see <a href="#security_authz"> security authorization and acls</a>. Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the <code>principal.builder.class</code> configuration.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -2178,7 +2766,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.secure.random.implementation" href="#ssl.secure.random.implementation">ssl.secure.random.implementation</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.secure.random.implementation"></a>
+   <a href="#ssl.secure.random.implementation">ssl.secure.random.implementation</a>
+</h4>
 <p>The SecureRandom PRNG implementation to use for SSL cryptography operations. </p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -2189,7 +2780,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="transaction.abort.timed.out.transaction.cleanup.interval.ms" href="#transaction.abort.timed.out.transaction.cleanup.interval.ms">transaction.abort.timed.out.transaction.cleanup.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="transaction.abort.timed.out.transaction.cleanup.interval.ms"></a>
+   <a href="#transaction.abort.timed.out.transaction.cleanup.interval.ms">transaction.abort.timed.out.transaction.cleanup.interval.ms</a>
+</h4>
 <p>The interval at which to rollback transactions that have timed out</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -2200,7 +2794,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="transaction.remove.expired.transaction.cleanup.interval.ms" href="#transaction.remove.expired.transaction.cleanup.interval.ms">transaction.remove.expired.transaction.cleanup.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="transaction.remove.expired.transaction.cleanup.interval.ms"></a>
+   <a href="#transaction.remove.expired.transaction.cleanup.interval.ms">transaction.remove.expired.transaction.cleanup.interval.ms</a>
+</h4>
 <p>The interval at which to remove transactions that have expired due to <code>transactional.id.expiration.ms</code> passing</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -2211,7 +2808,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.ssl.cipher.suites" href="#zookeeper.ssl.cipher.suites">zookeeper.ssl.cipher.suites</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.ssl.cipher.suites"></a>
+   <a href="#zookeeper.ssl.cipher.suites">zookeeper.ssl.cipher.suites</a>
+</h4>
 <p>Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the <code>zookeeper.ssl.ciphersuites</code> system property (note the single word "ciphersuites"). The default value of <code>null</code> means the list of enabled cipher suites is determined by the Java runtime being used.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -2222,7 +2822,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.ssl.crl.enable" href="#zookeeper.ssl.crl.enable">zookeeper.ssl.crl.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.ssl.crl.enable"></a>
+   <a href="#zookeeper.ssl.crl.enable">zookeeper.ssl.crl.enable</a>
+</h4>
 <p>Specifies whether to enable Certificate Revocation List in the ZooKeeper TLS protocols. Overrides any explicit value set via the <code>zookeeper.ssl.crl</code> system property (note the shorter name).</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -2233,7 +2836,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.ssl.enabled.protocols" href="#zookeeper.ssl.enabled.protocols">zookeeper.ssl.enabled.protocols</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.ssl.enabled.protocols"></a>
+   <a href="#zookeeper.ssl.enabled.protocols">zookeeper.ssl.enabled.protocols</a>
+</h4>
 <p>Specifies the enabled protocol(s) in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the <code>zookeeper.ssl.enabledProtocols</code> system property (note the camelCase). The default value of <code>null</code> means the enabled protocol will be the value of the <code>zookeeper.ssl.protocol</code> configuration property.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -2244,7 +2850,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.ssl.endpoint.identification.algorithm" href="#zookeeper.ssl.endpoint.identification.algorithm">zookeeper.ssl.endpoint.identification.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.ssl.endpoint.identification.algorithm"></a>
+   <a href="#zookeeper.ssl.endpoint.identification.algorithm">zookeeper.ssl.endpoint.identification.algorithm</a>
+</h4>
 <p>Specifies whether to enable hostname verification in the ZooKeeper TLS negotiation process, with (case-insensitively) "https" meaning ZooKeeper hostname verification is enabled and an explicit blank value meaning it is disabled (disabling it is only recommended for testing purposes). An explicit value overrides any "true" or "false" value set via the <code>zookeeper.ssl.hostnameVerification</code> system property (note the different name and values; true implies https and false implies blank).</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -2255,7 +2864,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.ssl.ocsp.enable" href="#zookeeper.ssl.ocsp.enable">zookeeper.ssl.ocsp.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.ssl.ocsp.enable"></a>
+   <a href="#zookeeper.ssl.ocsp.enable">zookeeper.ssl.ocsp.enable</a>
+</h4>
 <p>Specifies whether to enable Online Certificate Status Protocol in the ZooKeeper TLS protocols. Overrides any explicit value set via the <code>zookeeper.ssl.ocsp</code> system property (note the shorter name).</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -2266,7 +2878,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.ssl.protocol" href="#zookeeper.ssl.protocol">zookeeper.ssl.protocol</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.ssl.protocol"></a>
+   <a href="#zookeeper.ssl.protocol">zookeeper.ssl.protocol</a>
+</h4>
 <p>Specifies the protocol to be used in ZooKeeper TLS negotiation. An explicit value overrides any value set via the same-named <code>zookeeper.ssl.protocol</code> system property.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -2277,7 +2892,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="zookeeper.sync.time.ms" href="#zookeeper.sync.time.ms">zookeeper.sync.time.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="zookeeper.sync.time.ms"></a>
+   <a href="#zookeeper.sync.time.ms">zookeeper.sync.time.ms</a>
+</h4>
 <p>How far a ZK follower can be behind a ZK leader</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
diff --git a/26/generated/producer_config.html b/26/generated/producer_config.html
index a1ddc45..a08b31e 100644
--- a/26/generated/producer_config.html
+++ b/26/generated/producer_config.html
@@ -1,6 +1,9 @@
 <ul class="config-list">
 <li>
-<h4><a id="key.serializer" href="#key.serializer">key.serializer</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="key.serializer"></a>
+   <a href="#key.serializer">key.serializer</a>
+</h4>
 <p>Serializer class for key that implements the <code>org.apache.kafka.common.serialization.Serializer</code> interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -10,7 +13,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="value.serializer" href="#value.serializer">value.serializer</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="value.serializer"></a>
+   <a href="#value.serializer">value.serializer</a>
+</h4>
 <p>Serializer class for value that implements the <code>org.apache.kafka.common.serialization.Serializer</code> interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -30,7 +36,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="bootstrap.servers" href="#bootstrap.servers">bootstrap.servers</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="bootstrap.servers"></a>
+   <a href="#bootstrap.servers">bootstrap.servers</a>
+</h4>
 <p>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -40,7 +49,10 @@
 </tbody></table>
 </li>
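Putting the three mandatory settings together, a minimal producer might be configured like this; hostnames, topic, and record contents are placeholders. Two bootstrap entries are listed so discovery still works if one broker is down.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MinimalProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // try-with-resources closes (and flushes) the producer on exit.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "key", "value"));
        }
    }
}
```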
 <li>
-<h4><a id="buffer.memory" href="#buffer.memory">buffer.memory</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="buffer.memory"></a>
+   <a href="#buffer.memory">buffer.memory</a>
+</h4>
 <p>The total bytes of memory the producer can use to buffer records waiting to be sent to the server. If records are sent faster than they can be delivered to the server the producer will block for <code>max.block.ms</code> after which it will throw an exception.<p>This setting should correspond roughly to the total memory the producer will use, but is not a hard bound since not all memory the producer uses is used for buffering. Some additional memory will be used for compression (if compression is enabled) as well as for maintaining in-flight requests.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -50,7 +62,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="compression.type" href="#compression.type">compression.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="compression.type"></a>
+   <a href="#compression.type">compression.type</a>
+</h4>
 <p>The compression type for all data generated by the producer. The default is none (i.e. no compression). Valid  values are <code>none</code>, <code>gzip</code>, <code>snappy</code>, <code>lz4</code>, or <code>zstd</code>. Compression is of full batches of data, so the efficacy of batching will also impact the compression ratio (more batching means better compression).</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -70,7 +85,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.key.password" href="#ssl.key.password">ssl.key.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.key.password"></a>
+   <a href="#ssl.key.password">ssl.key.password</a>
+</h4>
 <p>The password of the private key in the key store file. This is optional for client.</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -80,7 +98,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keystore.location" href="#ssl.keystore.location">ssl.keystore.location</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keystore.location"></a>
+   <a href="#ssl.keystore.location">ssl.keystore.location</a>
+</h4>
 <p>The location of the key store file. This is optional for client and can be used for two-way authentication for client.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -90,7 +111,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keystore.password" href="#ssl.keystore.password">ssl.keystore.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keystore.password"></a>
+   <a href="#ssl.keystore.password">ssl.keystore.password</a>
+</h4>
 <p>The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. </p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -100,7 +124,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.truststore.location" href="#ssl.truststore.location">ssl.truststore.location</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.truststore.location"></a>
+   <a href="#ssl.truststore.location">ssl.truststore.location</a>
+</h4>
 <p>The location of the trust store file. </p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -110,7 +137,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.truststore.password" href="#ssl.truststore.password">ssl.truststore.password</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.truststore.password"></a>
+   <a href="#ssl.truststore.password">ssl.truststore.password</a>
+</h4>
 <p>The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled.</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -120,7 +150,10 @@
 </tbody></table>
 </li>
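Taken together, the SSL settings above might be wired up as follows; the paths, passwords, and port are placeholders.

```java
import java.util.Properties;

class SslClientProps {
    static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9093"); // TLS listener port is a placeholder
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/etc/kafka/client.truststore.jks");
        props.put("ssl.truststore.password", "changeit");
        // Keystore settings are only needed for two-way (mutual) TLS:
        props.put("ssl.keystore.location", "/etc/kafka/client.keystore.jks");
        props.put("ssl.keystore.password", "changeit");
        props.put("ssl.key.password", "changeit");
        return props;
    }
}
```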
 <li>
-<h4><a id="batch.size" href="#batch.size">batch.size</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="batch.size"></a>
+   <a href="#batch.size">batch.size</a>
+</h4>
 <p>The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition. This helps performance on both the client and the server. This configuration controls the default batch size in bytes. <p>No attempt will be made to batch records larger than this size. <p>Requests sent to brokers will contain multiple batches, one for each partition with data available to be sent. <p>A small batch size will make batching less common and may reduce throughput (a batch size of zero will disable batching entirely). A very large batch size may use memory a bit more wastefully as we will always allocate a buffer of the specified batch size in anticipation of additional records.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -130,7 +163,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="client.dns.lookup" href="#client.dns.lookup">client.dns.lookup</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="client.dns.lookup"></a>
+   <a href="#client.dns.lookup">client.dns.lookup</a>
+</h4>
 <p>Controls how the client uses DNS lookups. If set to <code>use_all_dns_ips</code>, connect to each returned IP address in sequence until a successful connection is established. After a disconnection, the next IP is used. Once all IPs have been used once, the client resolves the IP(s) from the hostname again (both the JVM and the OS cache DNS name lookups, however). If set to <code>resolve_canonical_bootstrap_servers_only</code>, resolve each bootstrap address into a list of canonical names. After the bootstrap phase, this behaves the same as <code>use_all_dns_ips</code>. If set to <code>default</code> (deprecated), attempt to connect to the first IP address returned by the lookup, even if the lookup returns multiple IP addresses.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -140,7 +176,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="client.id" href="#client.id">client.id</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="client.id"></a>
+   <a href="#client.id">client.id</a>
+</h4>
 <p>An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -150,7 +189,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="connections.max.idle.ms" href="#connections.max.idle.ms">connections.max.idle.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="connections.max.idle.ms"></a>
+   <a href="#connections.max.idle.ms">connections.max.idle.ms</a>
+</h4>
 <p>Close idle connections after the number of milliseconds specified by this config.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -160,7 +202,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="delivery.timeout.ms" href="#delivery.timeout.ms">delivery.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="delivery.timeout.ms"></a>
+   <a href="#delivery.timeout.ms">delivery.timeout.ms</a>
+</h4>
 <p>An upper bound on the time to report success or failure after a call to <code>send()</code> returns. This limits the total time that a record will be delayed prior to sending, the time to await acknowledgement from the broker (if expected), and the time allowed for retriable send failures. The producer may report failure to send a record earlier than this config if either an unrecoverable error is encountered, the retries have been exhausted, or the record is added to a batch which reached an earlier delivery expiration deadline. The value of this config should be greater than or equal to the sum of <code>request.timeout.ms</code> and <code>linger.ms</code>.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -170,7 +215,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="linger.ms" href="#linger.ms">linger.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="linger.ms"></a>
+   <a href="#linger.ms">linger.ms</a>
+</h4>
 <p>The producer groups together any records that arrive in between request transmissions into a single batched request. Normally this occurs only under load when records arrive faster than they can be sent out. However in some circumstances the client may want to reduce the number of requests even under moderate load. This setting accomplishes this by adding a small amount of artificial delay&mdash;that is, rather than immediately sending out a record the producer will wait for up to the given delay to allow other records to be sent so that the sends can be batched together. This can be thought of as analogous to Nagle's algorithm in TCP. This setting gives the upper bound on the delay for batching: once we get <code>batch.size</code> worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up. This setting defaults to 0 (i.e. no delay). Setting <code>linger.ms=5</code>, for example, would have the effect of reducing the number of requests sent but would add up to 5ms of latency to records sent in the absence of load.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -180,7 +228,10 @@
 </tbody></table>
 </li>
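To make the batching trade-off concrete, a throughput-oriented producer might combine batch.size, linger.ms, and compression.type as below; the specific values are illustrative, not recommendations.

```java
import java.util.Properties;

class BatchingProps {
    static void tune(Properties props) {
        props.put("batch.size", 65536);       // allow up to 64 KiB per partition batch
        props.put("linger.ms", 10);           // wait up to 10 ms for more records to batch
        props.put("compression.type", "lz4"); // larger batches generally compress better
    }
}
```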
 <li>
-<h4><a id="max.block.ms" href="#max.block.ms">max.block.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="max.block.ms"></a>
+   <a href="#max.block.ms">max.block.ms</a>
+</h4>
 <p>The configuration controls how long <code>KafkaProducer.send()</code> and <code>KafkaProducer.partitionsFor()</code> will block. These methods can be blocked either because the buffer is full or metadata is unavailable. Blocking in the user-supplied serializers or partitioner will not be counted against this timeout.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -190,7 +241,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="max.request.size" href="#max.request.size">max.request.size</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="max.request.size"></a>
+   <a href="#max.request.size">max.request.size</a>
+</h4>
 <p>The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests. This is also effectively a cap on the maximum uncompressed record batch size. Note that the server has its own cap on the record batch size (after compression if compression is enabled) which may be different from this.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -200,7 +254,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="partitioner.class" href="#partitioner.class">partitioner.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="partitioner.class"></a>
+   <a href="#partitioner.class">partitioner.class</a>
+</h4>
 <p>Partitioner class that implements the <code>org.apache.kafka.clients.producer.Partitioner</code> interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -210,7 +267,10 @@
 </tbody></table>
 </li>
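A sketch of a custom partitioner follows; the routing rule (null keys to partition 0, keyed records hashed over the remaining partitions) is purely illustrative.

```java
import java.util.Map;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

// Illustrative partitioner: unkeyed records go to partition 0,
// keyed records are hashed over partitions 1..N-1.
public class ExamplePartitioner implements Partitioner {

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null || numPartitions == 1)
            return 0;
        return 1 + Utils.toPositive(Utils.murmur2(keyBytes)) % (numPartitions - 1);
    }

    @Override
    public void close() { }
}
```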
 <li>
-<h4><a id="receive.buffer.bytes" href="#receive.buffer.bytes">receive.buffer.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="receive.buffer.bytes"></a>
+   <a href="#receive.buffer.bytes">receive.buffer.bytes</a>
+</h4>
 <p>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -220,7 +280,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="request.timeout.ms" href="#request.timeout.ms">request.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="request.timeout.ms"></a>
+   <a href="#request.timeout.ms">request.timeout.ms</a>
+</h4>
 <p>The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. This should be larger than <code>replica.lag.time.max.ms</code> (a broker configuration) to reduce the possibility of message duplication due to unnecessary producer retries.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -230,7 +293,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.client.callback.handler.class" href="#sasl.client.callback.handler.class">sasl.client.callback.handler.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.client.callback.handler.class"></a>
+   <a href="#sasl.client.callback.handler.class">sasl.client.callback.handler.class</a>
+</h4>
 <p>The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -240,7 +306,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.jaas.config" href="#sasl.jaas.config">sasl.jaas.config</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.jaas.config"></a>
+   <a href="#sasl.jaas.config">sasl.jaas.config</a>
+</h4>
 <p>JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described <a href="http://docs.oracle.com/javase/8/docs/technotes/guides/security/jgss/tutorials/LoginConfigFile.html">here</a>. The format for the value is: '<code>loginModuleClass controlFlag (optionName=optionValue)*;</code>'. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;</p>
 <table><tbody>
 <tr><th>Type:</th><td>password</td></tr>
@@ -250,7 +319,10 @@
 </tbody></table>
 </li>
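For a client using SASL/PLAIN, the inline JAAS configuration might look like this; the credentials are placeholders, and note the trailing semicolon the format requires.

```java
import java.util.Properties;

class SaslPlainProps {
    static void apply(Properties props) {
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        // Credentials are placeholders; the value ends with the required ';'.
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required "
                + "username=\"alice\" password=\"alice-secret\";");
    }
}
```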
 <li>
-<h4><a id="sasl.kerberos.service.name" href="#sasl.kerberos.service.name">sasl.kerberos.service.name</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.service.name"></a>
+   <a href="#sasl.kerberos.service.name">sasl.kerberos.service.name</a>
+</h4>
 <p>The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -260,7 +332,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.callback.handler.class" href="#sasl.login.callback.handler.class">sasl.login.callback.handler.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.callback.handler.class"></a>
+   <a href="#sasl.login.callback.handler.class">sasl.login.callback.handler.class</a>
+</h4>
 <p>The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -270,7 +345,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.class" href="#sasl.login.class">sasl.login.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.class"></a>
+   <a href="#sasl.login.class">sasl.login.class</a>
+</h4>
 <p>The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -280,7 +358,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.mechanism" href="#sasl.mechanism">sasl.mechanism</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.mechanism"></a>
+   <a href="#sasl.mechanism">sasl.mechanism</a>
+</h4>
 <p>SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -290,7 +371,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="security.protocol" href="#security.protocol">security.protocol</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="security.protocol"></a>
+   <a href="#security.protocol">security.protocol</a>
+</h4>
 <p>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -300,7 +384,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="send.buffer.bytes" href="#send.buffer.bytes">send.buffer.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="send.buffer.bytes"></a>
+   <a href="#send.buffer.bytes">send.buffer.bytes</a>
+</h4>
 <p>The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -310,7 +397,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.enabled.protocols" href="#ssl.enabled.protocols">ssl.enabled.protocols</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.enabled.protocols"></a>
+   <a href="#ssl.enabled.protocols">ssl.enabled.protocols</a>
+</h4>
 <p>The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for `ssl.protocol`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -320,7 +410,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keystore.type" href="#ssl.keystore.type">ssl.keystore.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keystore.type"></a>
+   <a href="#ssl.keystore.type">ssl.keystore.type</a>
+</h4>
 <p>The file format of the key store file. This is optional for client.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -330,7 +423,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.protocol" href="#ssl.protocol">ssl.protocol</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.protocol"></a>
+   <a href="#ssl.protocol">ssl.protocol</a>
+</h4>
 <p>The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -340,7 +436,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.provider" href="#ssl.provider">ssl.provider</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.provider"></a>
+   <a href="#ssl.provider">ssl.provider</a>
+</h4>
 <p>The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -350,7 +449,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.truststore.type" href="#ssl.truststore.type">ssl.truststore.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.truststore.type"></a>
+   <a href="#ssl.truststore.type">ssl.truststore.type</a>
+</h4>
 <p>The file format of the trust store file.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -360,7 +462,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="enable.idempotence" href="#enable.idempotence">enable.idempotence</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="enable.idempotence"></a>
+   <a href="#enable.idempotence">enable.idempotence</a>
+</h4>
 <p>When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream. Note that enabling idempotence requires <code>max.in.flight.requests.per.connection</code> to be less than or equal to 5, <code>retries</code> to be greater than 0 and <code>acks</code> must be 'all'. If these values are not explicitly set by the user, suitable values will be chosen. If incompatible values are set, a <code>ConfigException</code> will be thrown.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -370,7 +475,10 @@
 </tbody></table>
 </li>
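To make the constraints above explicit, an idempotent producer could pin the related settings as follows; if they were left unset, the producer would choose compatible values itself.

```java
import java.util.Properties;

class IdempotentProducerProps {
    static void apply(Properties props) {
        props.put("enable.idempotence", "true");
        // Values satisfying the requirements described above:
        props.put("acks", "all");
        props.put("retries", Integer.MAX_VALUE);
        props.put("max.in.flight.requests.per.connection", 5);
    }
}
```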
 <li>
-<h4><a id="interceptor.classes" href="#interceptor.classes">interceptor.classes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="interceptor.classes"></a>
+   <a href="#interceptor.classes">interceptor.classes</a>
+</h4>
 <p>A list of classes to use as interceptors. Implementing the <code>org.apache.kafka.clients.producer.ProducerInterceptor</code> interface allows you to intercept (and possibly mutate) the records received by the producer before they are published to the Kafka cluster. By default, there are no interceptors.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -380,7 +488,10 @@
 </tbody></table>
 </li>
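As a sketch, an interceptor that counts sends and logs failed acknowledgements might look like this; the class name and behaviour are illustrative.

```java
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Illustrative interceptor: counts intercepted records, reports send failures.
public class CountingInterceptor implements ProducerInterceptor<String, String> {
    private long sent = 0;

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        sent++;        // called before serialization and partitioning
        return record; // the record may be mutated or replaced; here it passes through
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // Runs on the producer I/O thread, so keep this fast.
        if (exception != null)
            System.err.println("send failed: " + exception.getMessage());
    }

    @Override
    public void close() {
        System.out.println("records intercepted: " + sent);
    }
}
```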
 <li>
-<h4><a id="max.in.flight.requests.per.connection" href="#max.in.flight.requests.per.connection">max.in.flight.requests.per.connection</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="max.in.flight.requests.per.connection"></a>
+   <a href="#max.in.flight.requests.per.connection">max.in.flight.requests.per.connection</a>
+</h4>
 <p>The maximum number of unacknowledged requests the client will send on a single connection before blocking. Note that if this setting is set to be greater than 1 and there are failed sends, there is a risk of message re-ordering due to retries (i.e., if retries are enabled).</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -390,7 +501,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metadata.max.age.ms" href="#metadata.max.age.ms">metadata.max.age.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metadata.max.age.ms"></a>
+   <a href="#metadata.max.age.ms">metadata.max.age.ms</a>
+</h4>
 <p>The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -400,7 +514,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metadata.max.idle.ms" href="#metadata.max.idle.ms">metadata.max.idle.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metadata.max.idle.ms"></a>
+   <a href="#metadata.max.idle.ms">metadata.max.idle.ms</a>
+</h4>
 <p>Controls how long the producer will cache metadata for a topic that's idle. If the elapsed time since a topic was last produced to exceeds the metadata idle duration, then the topic's metadata is forgotten and the next access to it will force a metadata fetch request.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -410,7 +527,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metric.reporters" href="#metric.reporters">metric.reporters</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metric.reporters"></a>
+   <a href="#metric.reporters">metric.reporters</a>
+</h4>
 <p>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -420,7 +540,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.num.samples" href="#metrics.num.samples">metrics.num.samples</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.num.samples"></a>
+   <a href="#metrics.num.samples">metrics.num.samples</a>
+</h4>
 <p>The number of samples maintained to compute metrics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -430,7 +553,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.recording.level" href="#metrics.recording.level">metrics.recording.level</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.recording.level"></a>
+   <a href="#metrics.recording.level">metrics.recording.level</a>
+</h4>
 <p>The highest recording level for metrics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -440,7 +566,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.sample.window.ms" href="#metrics.sample.window.ms">metrics.sample.window.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.sample.window.ms"></a>
+   <a href="#metrics.sample.window.ms">metrics.sample.window.ms</a>
+</h4>
 <p>The window of time a metrics sample is computed over.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -450,7 +579,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="reconnect.backoff.max.ms" href="#reconnect.backoff.max.ms">reconnect.backoff.max.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="reconnect.backoff.max.ms"></a>
+   <a href="#reconnect.backoff.max.ms">reconnect.backoff.max.ms</a>
+</h4>
 <p>The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -460,7 +592,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="reconnect.backoff.ms" href="#reconnect.backoff.ms">reconnect.backoff.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="reconnect.backoff.ms"></a>
+   <a href="#reconnect.backoff.ms">reconnect.backoff.ms</a>
+</h4>
 <p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -470,7 +605,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="retry.backoff.ms" href="#retry.backoff.ms">retry.backoff.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="retry.backoff.ms"></a>
+   <a href="#retry.backoff.ms">retry.backoff.ms</a>
+</h4>
 <p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -480,7 +618,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.kinit.cmd" href="#sasl.kerberos.kinit.cmd">sasl.kerberos.kinit.cmd</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.kinit.cmd"></a>
+   <a href="#sasl.kerberos.kinit.cmd">sasl.kerberos.kinit.cmd</a>
+</h4>
 <p>Kerberos kinit command path.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -490,7 +631,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.min.time.before.relogin" href="#sasl.kerberos.min.time.before.relogin">sasl.kerberos.min.time.before.relogin</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.min.time.before.relogin"></a>
+   <a href="#sasl.kerberos.min.time.before.relogin">sasl.kerberos.min.time.before.relogin</a>
+</h4>
 <p>Login thread sleep time between refresh attempts.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -500,7 +644,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.ticket.renew.jitter" href="#sasl.kerberos.ticket.renew.jitter">sasl.kerberos.ticket.renew.jitter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.jitter"></a>
+   <a href="#sasl.kerberos.ticket.renew.jitter">sasl.kerberos.ticket.renew.jitter</a>
+</h4>
 <p>Percentage of random jitter added to the renewal time.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -510,7 +657,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.kerberos.ticket.renew.window.factor" href="#sasl.kerberos.ticket.renew.window.factor">sasl.kerberos.ticket.renew.window.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.kerberos.ticket.renew.window.factor"></a>
+   <a href="#sasl.kerberos.ticket.renew.window.factor">sasl.kerberos.ticket.renew.window.factor</a>
+</h4>
 <p>Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -520,7 +670,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.buffer.seconds" href="#sasl.login.refresh.buffer.seconds">sasl.login.refresh.buffer.seconds</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.buffer.seconds"></a>
+   <a href="#sasl.login.refresh.buffer.seconds">sasl.login.refresh.buffer.seconds</a>
+</h4>
 <p>The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -530,7 +683,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.min.period.seconds" href="#sasl.login.refresh.min.period.seconds">sasl.login.refresh.min.period.seconds</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.min.period.seconds"></a>
+   <a href="#sasl.login.refresh.min.period.seconds">sasl.login.refresh.min.period.seconds</a>
+</h4>
 <p>The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -540,7 +696,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.window.factor" href="#sasl.login.refresh.window.factor">sasl.login.refresh.window.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.window.factor"></a>
+   <a href="#sasl.login.refresh.window.factor">sasl.login.refresh.window.factor</a>
+</h4>
 <p>Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -550,7 +709,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="sasl.login.refresh.window.jitter" href="#sasl.login.refresh.window.jitter">sasl.login.refresh.window.jitter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="sasl.login.refresh.window.jitter"></a>
+   <a href="#sasl.login.refresh.window.jitter">sasl.login.refresh.window.jitter</a>
+</h4>
 <p>The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
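
Taken together, the four sasl.login.refresh.* settings above define a single refresh schedule. A minimal client-side sketch, assuming an OAUTHBEARER listener is already configured (the bootstrap address is a placeholder, and the values shown are simply the documented defaults):

```java
import java.util.Properties;

public class OauthRefreshConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");   // placeholder
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "OAUTHBEARER");
        // Refresh at ~80% of the credential's lifetime, plus up to 5% random jitter...
        props.put("sasl.login.refresh.window.factor", "0.8");
        props.put("sasl.login.refresh.window.jitter", "0.05");
        // ...but wait at least 60s between refreshes and keep a 300s buffer before
        // expiry (both are ignored if their sum exceeds the remaining lifetime).
        props.put("sasl.login.refresh.min.period.seconds", "60");
        props.put("sasl.login.refresh.buffer.seconds", "300");
        System.out.println(props);
    }
}
```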
@@ -560,7 +722,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="security.providers" href="#security.providers">security.providers</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="security.providers"></a>
+   <a href="#security.providers">security.providers</a>
+</h4>
 <p>A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the <code>org.apache.kafka.common.security.auth.SecurityProviderCreator</code> interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -570,7 +735,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.cipher.suites" href="#ssl.cipher.suites">ssl.cipher.suites</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.cipher.suites"></a>
+   <a href="#ssl.cipher.suites">ssl.cipher.suites</a>
+</h4>
 <p>A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -580,7 +748,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.endpoint.identification.algorithm" href="#ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.endpoint.identification.algorithm"></a>
+   <a href="#ssl.endpoint.identification.algorithm">ssl.endpoint.identification.algorithm</a>
+</h4>
 <p>The endpoint identification algorithm to validate server hostname using server certificate. </p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -590,7 +761,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.engine.factory.class" href="#ssl.engine.factory.class">ssl.engine.factory.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.engine.factory.class"></a>
+   <a href="#ssl.engine.factory.class">ssl.engine.factory.class</a>
+</h4>
 <p>The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -600,7 +774,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.keymanager.algorithm" href="#ssl.keymanager.algorithm">ssl.keymanager.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.keymanager.algorithm"></a>
+   <a href="#ssl.keymanager.algorithm">ssl.keymanager.algorithm</a>
+</h4>
 <p>The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -610,7 +787,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.secure.random.implementation" href="#ssl.secure.random.implementation">ssl.secure.random.implementation</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.secure.random.implementation"></a>
+   <a href="#ssl.secure.random.implementation">ssl.secure.random.implementation</a>
+</h4>
 <p>The SecureRandom PRNG implementation to use for SSL cryptography operations. </p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -620,7 +800,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="ssl.trustmanager.algorithm" href="#ssl.trustmanager.algorithm">ssl.trustmanager.algorithm</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="ssl.trustmanager.algorithm"></a>
+   <a href="#ssl.trustmanager.algorithm">ssl.trustmanager.algorithm</a>
+</h4>
 <p>The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -630,7 +813,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="transaction.timeout.ms" href="#transaction.timeout.ms">transaction.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="transaction.timeout.ms"></a>
+   <a href="#transaction.timeout.ms">transaction.timeout.ms</a>
+</h4>
 <p>The maximum amount of time in ms that the transaction coordinator will wait for a transaction status update from the producer before proactively aborting the ongoing transaction. If this value is larger than the transaction.max.timeout.ms setting in the broker, the request will fail with an <code>InvalidTransactionTimeout</code> error.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -640,7 +826,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="transactional.id" href="#transactional.id">transactional.id</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="transactional.id"></a>
+   <a href="#transactional.id">transactional.id</a>
+</h4>
 <p>The TransactionalId to use for transactional delivery. This enables reliability semantics which span multiple producer sessions since it allows the client to guarantee that transactions using the same TransactionalId have been completed prior to starting any new transactions. If no TransactionalId is provided, then the producer is limited to idempotent delivery. If a TransactionalId is configured, <code>enable.idempotence</code> is implied. By default the TransactionalId is not configured, which means transactions cannot be used. Note that, by default, transactions require a cluster of at least three brokers, which is the recommended setting for production; for development you can change this by adjusting the broker setting <code>transaction.state.log.replication.factor</code>.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
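
To make the transactional.id / transaction.timeout.ms pairing above concrete, here is a minimal transactional-producer sketch (the topic name, id, and broker address are hypothetical placeholders, and error handling is elided):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");      // placeholder
        props.put("transactional.id", "orders-processor-1"); // hypothetical id
        props.put("transaction.timeout.ms", "60000");        // must not exceed broker transaction.max.timeout.ms
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions(); // fences older sessions with the same transactional.id
            producer.beginTransaction();
            producer.send(new ProducerRecord<>("orders", "key", "value"));
            producer.commitTransaction(); // or abortTransaction() on failure
        }
    }
}
```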
diff --git a/26/generated/sink_connector_config.html b/26/generated/sink_connector_config.html
index 9c81d28..dbd13b0 100644
--- a/26/generated/sink_connector_config.html
+++ b/26/generated/sink_connector_config.html
@@ -10,7 +10,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="connector.class" href="#connector.class">connector.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="connector.class"></a>
+   <a href="#connector.class">connector.class</a>
+</h4>
 <p>Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name, or use "FileStreamSink" or "FileStreamSinkConnector" to make the configuration a bit shorter.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -20,7 +23,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="tasks.max" href="#tasks.max">tasks.max</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="tasks.max"></a>
+   <a href="#tasks.max">tasks.max</a>
+</h4>
 <p>Maximum number of tasks to use for this connector.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -40,7 +46,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="topics.regex" href="#topics.regex">topics.regex</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="topics.regex"></a>
+   <a href="#topics.regex">topics.regex</a>
+</h4>
 <p>Regular expression giving topics to consume. Under the hood, the regex is compiled to a <code>java.util.regex.Pattern</code>. Only one of topics or topics.regex should be specified.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -50,7 +59,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="key.converter" href="#key.converter">key.converter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="key.converter"></a>
+   <a href="#key.converter">key.converter</a>
+</h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -60,7 +72,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="value.converter" href="#value.converter">value.converter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="value.converter"></a>
+   <a href="#value.converter">value.converter</a>
+</h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -70,7 +85,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="header.converter" href="#header.converter">header.converter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="header.converter"></a>
+   <a href="#header.converter">header.converter</a>
+</h4>
 <p>HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -80,7 +98,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="config.action.reload" href="#config.action.reload">config.action.reload</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="config.action.reload"></a>
+   <a href="#config.action.reload">config.action.reload</a>
+</h4>
 <p>The action that Connect should take on the connector when changes in external configuration providers result in a change in the connector's configuration properties. A value of 'none' indicates that Connect will do nothing. A value of 'restart' indicates that Connect should restart/reload the connector with the updated configuration properties. The restart may actually be scheduled in the future if the external configuration provider indicates that a configuration value will expire in the future.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -110,7 +131,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="errors.retry.timeout" href="#errors.retry.timeout">errors.retry.timeout</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="errors.retry.timeout"></a>
+   <a href="#errors.retry.timeout">errors.retry.timeout</a>
+</h4>
 <p>The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be attempted. Use -1 for infinite retries.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -120,7 +144,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="errors.retry.delay.max.ms" href="#errors.retry.delay.max.ms">errors.retry.delay.max.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="errors.retry.delay.max.ms"></a>
+   <a href="#errors.retry.delay.max.ms">errors.retry.delay.max.ms</a>
+</h4>
 <p>The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reached to prevent thundering herd issues.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -130,7 +157,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="errors.tolerance" href="#errors.tolerance">errors.tolerance</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="errors.tolerance"></a>
+   <a href="#errors.tolerance">errors.tolerance</a>
+</h4>
 <p>Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in an immediate connector task failure; 'all' changes the behavior to skip over problematic records.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -140,7 +170,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="errors.log.enable" href="#errors.log.enable">errors.log.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="errors.log.enable"></a>
+   <a href="#errors.log.enable">errors.log.enable</a>
+</h4>
 <p>If true, write each error and the details of the failed operation and problematic record to the Connect application log. This is 'false' by default, so that only errors that are not tolerated are reported.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -150,7 +183,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="errors.log.include.messages" href="#errors.log.include.messages">errors.log.include.messages</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="errors.log.include.messages"></a>
+   <a href="#errors.log.include.messages">errors.log.include.messages</a>
+</h4>
 <p>Whether to include in the log the Connect record that resulted in a failure. This is 'false' by default, which will prevent record keys, values, and headers from being written to log files, although some information such as topic and partition number will still be logged.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -160,7 +196,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="errors.deadletterqueue.topic.name" href="#errors.deadletterqueue.topic.name">errors.deadletterqueue.topic.name</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="errors.deadletterqueue.topic.name"></a>
+   <a href="#errors.deadletterqueue.topic.name">errors.deadletterqueue.topic.name</a>
+</h4>
 <p>The name of the topic to be used as the dead letter queue (DLQ) for messages that result in an error when processed by this sink connector, or its transformations or converters. The topic name is blank by default, which means that no messages are to be recorded in the DLQ.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -170,7 +209,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="errors.deadletterqueue.topic.replication.factor" href="#errors.deadletterqueue.topic.replication.factor">errors.deadletterqueue.topic.replication.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="errors.deadletterqueue.topic.replication.factor"></a>
+   <a href="#errors.deadletterqueue.topic.replication.factor">errors.deadletterqueue.topic.replication.factor</a>
+</h4>
 <p>Replication factor used to create the dead letter queue topic when it doesn't already exist.</p>
 <table><tbody>
 <tr><th>Type:</th><td>short</td></tr>
@@ -180,7 +222,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="errors.deadletterqueue.context.headers.enable" href="#errors.deadletterqueue.context.headers.enable">errors.deadletterqueue.context.headers.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="errors.deadletterqueue.context.headers.enable"></a>
+   <a href="#errors.deadletterqueue.context.headers.enable">errors.deadletterqueue.context.headers.enable</a>
+</h4>
 <p>If true, add headers containing error context to the messages written to the dead letter queue. To avoid clashing with headers from the original record, all error context header keys will start with <code>__connect.errors.</code></p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
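
Pulling the errors.* properties above together, a sketch of a sink-connector configuration built as a plain Java map (the connector and topic names are hypothetical; in practice this configuration would typically live in a properties file or a Connect REST request):

```java
import java.util.HashMap;
import java.util.Map;

public class SinkConnectorConfigExample {
    public static void main(String[] args) {
        Map<String, String> config = new HashMap<>();
        config.put("name", "orders-file-sink");          // hypothetical connector name
        config.put("connector.class", "FileStreamSink"); // short alias, per connector.class docs
        config.put("tasks.max", "1");
        config.put("topics", "orders");
        // Tolerate bad records instead of failing the task, log them,
        // and route them to a dead letter queue with error-context headers.
        config.put("errors.tolerance", "all");
        config.put("errors.log.enable", "true");
        config.put("errors.log.include.messages", "true");
        config.put("errors.deadletterqueue.topic.name", "orders-dlq");
        config.put("errors.deadletterqueue.topic.replication.factor", "1"); // dev-only value
        config.put("errors.deadletterqueue.context.headers.enable", "true");
        config.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```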
diff --git a/26/generated/source_connector_config.html b/26/generated/source_connector_config.html
index 7fad0a6..10c396b 100644
--- a/26/generated/source_connector_config.html
+++ b/26/generated/source_connector_config.html
@@ -10,7 +10,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="connector.class" href="#connector.class">connector.class</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="connector.class"></a>
+   <a href="#connector.class">connector.class</a>
+</h4>
 <p>Name or alias of the class for this connector. Must be a subclass of org.apache.kafka.connect.connector.Connector. If the connector is org.apache.kafka.connect.file.FileStreamSinkConnector, you can either specify this full name, or use "FileStreamSink" or "FileStreamSinkConnector" to make the configuration a bit shorter.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -20,7 +23,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="tasks.max" href="#tasks.max">tasks.max</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="tasks.max"></a>
+   <a href="#tasks.max">tasks.max</a>
+</h4>
 <p>Maximum number of tasks to use for this connector.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -30,7 +36,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="key.converter" href="#key.converter">key.converter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="key.converter"></a>
+   <a href="#key.converter">key.converter</a>
+</h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -40,7 +49,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="value.converter" href="#value.converter">value.converter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="value.converter"></a>
+   <a href="#value.converter">value.converter</a>
+</h4>
 <p>Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -50,7 +62,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="header.converter" href="#header.converter">header.converter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="header.converter"></a>
+   <a href="#header.converter">header.converter</a>
+</h4>
 <p>HeaderConverter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the header values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. By default, the SimpleHeaderConverter is used to serialize header values to strings and deserialize them by inferring the schemas.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -60,7 +75,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="config.action.reload" href="#config.action.reload">config.action.reload</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="config.action.reload"></a>
+   <a href="#config.action.reload">config.action.reload</a>
+</h4>
 <p>The action that Connect should take on the connector when changes in external configuration providers result in a change in the connector's configuration properties. A value of 'none' indicates that Connect will do nothing. A value of 'restart' indicates that Connect should restart/reload the connector with the updated configuration properties. The restart may actually be scheduled in the future if the external configuration provider indicates that a configuration value will expire in the future.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -90,7 +108,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="errors.retry.timeout" href="#errors.retry.timeout">errors.retry.timeout</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="errors.retry.timeout"></a>
+   <a href="#errors.retry.timeout">errors.retry.timeout</a>
+</h4>
 <p>The maximum duration in milliseconds that a failed operation will be reattempted. The default is 0, which means no retries will be attempted. Use -1 for infinite retries.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -100,7 +121,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="errors.retry.delay.max.ms" href="#errors.retry.delay.max.ms">errors.retry.delay.max.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="errors.retry.delay.max.ms"></a>
+   <a href="#errors.retry.delay.max.ms">errors.retry.delay.max.ms</a>
+</h4>
 <p>The maximum duration in milliseconds between consecutive retry attempts. Jitter will be added to the delay once this limit is reached to prevent thundering herd issues.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -110,7 +134,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="errors.tolerance" href="#errors.tolerance">errors.tolerance</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="errors.tolerance"></a>
+   <a href="#errors.tolerance">errors.tolerance</a>
+</h4>
 <p>Behavior for tolerating errors during connector operation. 'none' is the default value and signals that any error will result in an immediate connector task failure; 'all' changes the behavior to skip over problematic records.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -120,7 +147,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="errors.log.enable" href="#errors.log.enable">errors.log.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="errors.log.enable"></a>
+   <a href="#errors.log.enable">errors.log.enable</a>
+</h4>
 <p>If true, write each error and the details of the failed operation and problematic record to the Connect application log. This is 'false' by default, so that only errors that are not tolerated are reported.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -130,7 +160,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="errors.log.include.messages" href="#errors.log.include.messages">errors.log.include.messages</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="errors.log.include.messages"></a>
+   <a href="#errors.log.include.messages">errors.log.include.messages</a>
+</h4>
 <p>Whether to include in the log the Connect record that resulted in a failure. This is 'false' by default, which will prevent record keys, values, and headers from being written to log files, although some information such as topic and partition number will still be logged.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -140,7 +173,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="topic.creation.groups" href="#topic.creation.groups">topic.creation.groups</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="topic.creation.groups"></a>
+   <a href="#topic.creation.groups">topic.creation.groups</a>
+</h4>
 <p>Groups of configurations for topics created by source connectors.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
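
For symmetry, a sketch of a minimal source-connector configuration using the common properties above (the connector name, file, and topic values are hypothetical; note that `file` and `topic` are FileStreamSource-specific settings rather than part of the common config list):

```java
import java.util.HashMap;
import java.util.Map;

public class SourceConnectorConfigExample {
    public static void main(String[] args) {
        Map<String, String> config = new HashMap<>();
        config.put("name", "lines-file-source");           // hypothetical connector name
        config.put("connector.class", "FileStreamSource"); // short alias
        config.put("tasks.max", "1");
        config.put("file", "/tmp/input.txt");              // FileStreamSource-specific
        config.put("topic", "lines");                      // FileStreamSource-specific
        // Plain-string converters instead of the worker defaults.
        config.put("key.converter", "org.apache.kafka.connect.storage.StringConverter");
        config.put("value.converter", "org.apache.kafka.connect.storage.StringConverter");
        config.forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```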
diff --git a/26/generated/streams_config.html b/26/generated/streams_config.html
index 804cb74..2c5e92a 100644
--- a/26/generated/streams_config.html
+++ b/26/generated/streams_config.html
@@ -1,6 +1,9 @@
 <ul class="config-list">
 <li>
-<h4><a id="application.id" href="#application.id">application.id</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="application.id"></a>
+   <a href="#application.id">application.id</a>
+</h4>
 <p>An identifier for the stream processing application. Must be unique within the Kafka cluster. It is used as 1) the default client-id prefix, 2) the group-id for membership management, 3) the changelog topic prefix.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -10,7 +13,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="bootstrap.servers" href="#bootstrap.servers">bootstrap.servers</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="bootstrap.servers"></a>
+   <a href="#bootstrap.servers">bootstrap.servers</a>
+</h4>
 <p>A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping&mdash;this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form <code>host1:port1,host2:port2,...</code>. Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -20,7 +26,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="replication.factor" href="#replication.factor">replication.factor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="replication.factor"></a>
+   <a href="#replication.factor">replication.factor</a>
+</h4>
 <p>The replication factor for change log topics and repartition topics created by the stream processing application.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -30,7 +39,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="state.dir" href="#state.dir">state.dir</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="state.dir"></a>
+   <a href="#state.dir">state.dir</a>
+</h4>
 <p>Directory location for state store. This path must be unique for each streams instance sharing the same underlying filesystem.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -40,7 +52,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="acceptable.recovery.lag" href="#acceptable.recovery.lag">acceptable.recovery.lag</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="acceptable.recovery.lag"></a>
+   <a href="#acceptable.recovery.lag">acceptable.recovery.lag</a>
+</h4>
 <p>The maximum acceptable lag (number of offsets to catch up) for a client to be considered caught-up for an active task. Should correspond to a recovery time of well under a minute for a given workload. Must be at least 0.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -50,7 +65,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="cache.max.bytes.buffering" href="#cache.max.bytes.buffering">cache.max.bytes.buffering</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="cache.max.bytes.buffering"></a>
+   <a href="#cache.max.bytes.buffering">cache.max.bytes.buffering</a>
+</h4>
 <p>Maximum number of memory bytes to be used for buffering across all threads.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -60,7 +78,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="client.id" href="#client.id">client.id</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="client.id"></a>
+   <a href="#client.id">client.id</a>
+</h4>
 <p>An ID prefix string used for the client IDs of internal consumer, producer and restore-consumer, with pattern '&lt;client.id&gt;-StreamThread-&lt;threadSequenceNumber&gt;-&lt;consumer|producer|restore-consumer&gt;'.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -70,7 +91,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="default.deserialization.exception.handler" href="#default.deserialization.exception.handler">default.deserialization.exception.handler</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="default.deserialization.exception.handler"></a>
+   <a href="#default.deserialization.exception.handler">default.deserialization.exception.handler</a>
+</h4>
 <p>Exception handling class that implements the <code>org.apache.kafka.streams.errors.DeserializationExceptionHandler</code> interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -80,7 +104,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="default.key.serde" href="#default.key.serde">default.key.serde</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="default.key.serde"></a>
+   <a href="#default.key.serde">default.key.serde</a>
+</h4>
 <p>Default serializer / deserializer class for key that implements the <code>org.apache.kafka.common.serialization.Serde</code> interface. Note that when a windowed serde class is used, one needs to set the inner serde class that implements the <code>org.apache.kafka.common.serialization.Serde</code> interface via 'default.windowed.key.serde.inner' or 'default.windowed.value.serde.inner' as well.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -90,7 +117,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="default.production.exception.handler" href="#default.production.exception.handler">default.production.exception.handler</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="default.production.exception.handler"></a>
+   <a href="#default.production.exception.handler">default.production.exception.handler</a>
+</h4>
 <p>Exception handling class that implements the <code>org.apache.kafka.streams.errors.ProductionExceptionHandler</code> interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -100,7 +130,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="default.timestamp.extractor" href="#default.timestamp.extractor">default.timestamp.extractor</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="default.timestamp.extractor"></a>
+   <a href="#default.timestamp.extractor">default.timestamp.extractor</a>
+</h4>
 <p>Default timestamp extractor class that implements the <code>org.apache.kafka.streams.processor.TimestampExtractor</code> interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -110,7 +143,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="default.value.serde" href="#default.value.serde">default.value.serde</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="default.value.serde"></a>
+   <a href="#default.value.serde">default.value.serde</a>
+</h4>
 <p>Default serializer / deserializer class for value that implements the <code>org.apache.kafka.common.serialization.Serde</code> interface. Note that when a windowed serde class is used, one needs to set the inner serde class that implements the <code>org.apache.kafka.common.serialization.Serde</code> interface via 'default.windowed.key.serde.inner' or 'default.windowed.value.serde.inner' as well.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -120,7 +156,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="max.task.idle.ms" href="#max.task.idle.ms">max.task.idle.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="max.task.idle.ms"></a>
+   <a href="#max.task.idle.ms">max.task.idle.ms</a>
+</h4>
 <p>Maximum amount of time a stream task will stay idle when not all of its partition buffers contain records, to avoid potential out-of-order record processing across multiple input streams.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -130,7 +169,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="max.warmup.replicas" href="#max.warmup.replicas">max.warmup.replicas</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="max.warmup.replicas"></a>
+   <a href="#max.warmup.replicas">max.warmup.replicas</a>
+</h4>
 <p>The maximum number of warmup replicas (extra standbys beyond the configured num.standbys) that can be assigned at once for the purpose of keeping the task available on one instance while it is warming up on another instance it has been reassigned to. Used to throttle how much extra broker traffic and cluster state can be used for high availability. Must be at least 1.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -140,7 +182,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="num.standby.replicas" href="#num.standby.replicas">num.standby.replicas</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="num.standby.replicas"></a>
+   <a href="#num.standby.replicas">num.standby.replicas</a>
+</h4>
 <p>The number of standby replicas for each task.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -150,7 +195,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="num.stream.threads" href="#num.stream.threads">num.stream.threads</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="num.stream.threads"></a>
+   <a href="#num.stream.threads">num.stream.threads</a>
+</h4>
 <p>The number of threads to execute stream processing.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -160,7 +208,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="processing.guarantee" href="#processing.guarantee">processing.guarantee</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="processing.guarantee"></a>
+   <a href="#processing.guarantee">processing.guarantee</a>
+</h4>
 <p>The processing guarantee that should be used. Possible values are <code>at_least_once</code> (default), <code>exactly_once</code> (requires brokers version 0.11.0 or higher), and <code>exactly_once_beta</code> (requires brokers version 2.5 or higher). Note that exactly-once processing requires a cluster of at least three brokers by default, which is the recommended setting for production; for development you can change this by adjusting the broker settings <code>transaction.state.log.replication.factor</code> and <code>transaction.state.log.min.isr</code>.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -170,7 +221,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="security.protocol" href="#security.protocol">security.protocol</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="security.protocol"></a>
+   <a href="#security.protocol">security.protocol</a>
+</h4>
 <p>Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -180,7 +234,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="topology.optimization" href="#topology.optimization">topology.optimization</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="topology.optimization"></a>
+   <a href="#topology.optimization">topology.optimization</a>
+</h4>
 <p>A configuration telling Kafka Streams if it should optimize the topology. Disabled by default.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -190,7 +247,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="application.server" href="#application.server">application.server</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="application.server"></a>
+   <a href="#application.server">application.server</a>
+</h4>
 <p>A host:port pair pointing to a user-defined endpoint that can be used for state store discovery and interactive queries on this KafkaStreams instance.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -200,7 +260,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="buffered.records.per.partition" href="#buffered.records.per.partition">buffered.records.per.partition</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="buffered.records.per.partition"></a>
+   <a href="#buffered.records.per.partition">buffered.records.per.partition</a>
+</h4>
 <p>Maximum number of records to buffer per partition.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -210,7 +273,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="built.in.metrics.version" href="#built.in.metrics.version">built.in.metrics.version</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="built.in.metrics.version"></a>
+   <a href="#built.in.metrics.version">built.in.metrics.version</a>
+</h4>
 <p>Version of the built-in metrics to use.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -220,7 +286,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="commit.interval.ms" href="#commit.interval.ms">commit.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="commit.interval.ms"></a>
+   <a href="#commit.interval.ms">commit.interval.ms</a>
+</h4>
 <p>The frequency with which to save the position of the processor. (Note: if <code>processing.guarantee</code> is set to <code>exactly_once</code>, the default value is <code>100</code>; otherwise the default value is <code>30000</code>.)</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -230,7 +299,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="connections.max.idle.ms" href="#connections.max.idle.ms">connections.max.idle.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="connections.max.idle.ms"></a>
+   <a href="#connections.max.idle.ms">connections.max.idle.ms</a>
+</h4>
 <p>Close idle connections after the number of milliseconds specified by this config.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -240,7 +312,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metadata.max.age.ms" href="#metadata.max.age.ms">metadata.max.age.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metadata.max.age.ms"></a>
+   <a href="#metadata.max.age.ms">metadata.max.age.ms</a>
+</h4>
 <p>The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -250,7 +325,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metric.reporters" href="#metric.reporters">metric.reporters</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metric.reporters"></a>
+   <a href="#metric.reporters">metric.reporters</a>
+</h4>
 <p>A list of classes to use as metrics reporters. Implementing the <code>org.apache.kafka.common.metrics.MetricsReporter</code> interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -260,7 +338,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.num.samples" href="#metrics.num.samples">metrics.num.samples</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.num.samples"></a>
+   <a href="#metrics.num.samples">metrics.num.samples</a>
+</h4>
 <p>The number of samples maintained to compute metrics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -270,7 +351,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.recording.level" href="#metrics.recording.level">metrics.recording.level</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.recording.level"></a>
+   <a href="#metrics.recording.level">metrics.recording.level</a>
+</h4>
 <p>The highest recording level for metrics.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -280,7 +364,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="metrics.sample.window.ms" href="#metrics.sample.window.ms">metrics.sample.window.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="metrics.sample.window.ms"></a>
+   <a href="#metrics.sample.window.ms">metrics.sample.window.ms</a>
+</h4>
 <p>The window of time a metrics sample is computed over.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -290,7 +377,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="partition.grouper" href="#partition.grouper">partition.grouper</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="partition.grouper"></a>
+   <a href="#partition.grouper">partition.grouper</a>
+</h4>
 <p>Partition grouper class that implements the <code>org.apache.kafka.streams.processor.PartitionGrouper</code> interface. WARNING: This config is deprecated and will be removed in the 3.0.0 release.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -300,7 +390,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="poll.ms" href="#poll.ms">poll.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="poll.ms"></a>
+   <a href="#poll.ms">poll.ms</a>
+</h4>
 <p>The amount of time in milliseconds to block waiting for input.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -310,7 +403,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="probing.rebalance.interval.ms" href="#probing.rebalance.interval.ms">probing.rebalance.interval.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="probing.rebalance.interval.ms"></a>
+   <a href="#probing.rebalance.interval.ms">probing.rebalance.interval.ms</a>
+</h4>
 <p>The maximum time to wait before triggering a rebalance to probe for warmup replicas that have finished warming up and are ready to become active. Probing rebalances will continue to be triggered until the assignment is balanced. Must be at least 1 minute.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -320,7 +416,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="receive.buffer.bytes" href="#receive.buffer.bytes">receive.buffer.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="receive.buffer.bytes"></a>
+   <a href="#receive.buffer.bytes">receive.buffer.bytes</a>
+</h4>
 <p>The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -330,7 +429,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="reconnect.backoff.max.ms" href="#reconnect.backoff.max.ms">reconnect.backoff.max.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="reconnect.backoff.max.ms"></a>
+   <a href="#reconnect.backoff.max.ms">reconnect.backoff.max.ms</a>
+</h4>
 <p>The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect. If provided, the backoff per host will increase exponentially for each consecutive connection failure, up to this maximum. After calculating the backoff increase, 20% random jitter is added to avoid connection storms.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -340,7 +442,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="reconnect.backoff.ms" href="#reconnect.backoff.ms">reconnect.backoff.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="reconnect.backoff.ms"></a>
+   <a href="#reconnect.backoff.ms">reconnect.backoff.ms</a>
+</h4>
 <p>The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -350,7 +455,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="request.timeout.ms" href="#request.timeout.ms">request.timeout.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="request.timeout.ms"></a>
+   <a href="#request.timeout.ms">request.timeout.ms</a>
+</h4>
 <p>The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -370,7 +478,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="retry.backoff.ms" href="#retry.backoff.ms">retry.backoff.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="retry.backoff.ms"></a>
+   <a href="#retry.backoff.ms">retry.backoff.ms</a>
+</h4>
 <p>The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -380,7 +491,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="rocksdb.config.setter" href="#rocksdb.config.setter">rocksdb.config.setter</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="rocksdb.config.setter"></a>
+   <a href="#rocksdb.config.setter">rocksdb.config.setter</a>
+</h4>
 <p>A RocksDB config setter class or class name that implements the <code>org.apache.kafka.streams.state.RocksDBConfigSetter</code> interface.</p>
 <table><tbody>
 <tr><th>Type:</th><td>class</td></tr>
@@ -390,7 +504,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="send.buffer.bytes" href="#send.buffer.bytes">send.buffer.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="send.buffer.bytes"></a>
+   <a href="#send.buffer.bytes">send.buffer.bytes</a>
+</h4>
 <p>The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -400,7 +517,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="state.cleanup.delay.ms" href="#state.cleanup.delay.ms">state.cleanup.delay.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="state.cleanup.delay.ms"></a>
+   <a href="#state.cleanup.delay.ms">state.cleanup.delay.ms</a>
+</h4>
 <p>The amount of time in milliseconds to wait before deleting state when a partition has migrated. Only state directories that have not been modified for at least <code>state.cleanup.delay.ms</code> will be removed.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -410,7 +530,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="upgrade.from" href="#upgrade.from">upgrade.from</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="upgrade.from"></a>
+   <a href="#upgrade.from">upgrade.from</a>
+</h4>
 <p>Allows upgrading in a backward compatible way. This is needed when upgrading from [0.10.0, 1.1] to 2.0+, or when upgrading from [2.0, 2.3] to 2.4+. When upgrading from 2.4 to a newer version it is not required to specify this config. Default is `null`. Accepted values are "0.10.0", "0.10.1", "0.10.2", "0.11.0", "1.0", "1.1", "2.0", "2.1", "2.2", "2.3" (for upgrading from the corresponding old version).</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -420,7 +543,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="windowstore.changelog.additional.retention.ms" href="#windowstore.changelog.additional.retention.ms">windowstore.changelog.additional.retention.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="windowstore.changelog.additional.retention.ms"></a>
+   <a href="#windowstore.changelog.additional.retention.ms">windowstore.changelog.additional.retention.ms</a>
+</h4>
 <p>Added to a window's maintainMs to ensure data is not deleted from the log prematurely. Allows for clock drift. Default is 1 day.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
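
A minimal Kafka Streams configuration sketch tying several of the settings above together (the application id, broker address, and state directory are placeholders):

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsConfigExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-app");     // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");   // placeholder
        props.put(StreamsConfig.STATE_DIR_CONFIG, "/var/lib/kafka-streams"); // placeholder
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2);
        // Replication factor for changelog and repartition topics.
        props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3);
        // exactly_once requires brokers >= 0.11.0 and, by default, at least 3 brokers.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        System.out.println(props);
    }
}
```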
diff --git a/26/generated/topic_config.html b/26/generated/topic_config.html
index c9437be..54352f9 100644
--- a/26/generated/topic_config.html
+++ b/26/generated/topic_config.html
@@ -1,6 +1,9 @@
 <ul class="config-list">
 <li>
-<h4><a id="cleanup.policy" href="#cleanup.policy">cleanup.policy</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="cleanup.policy"></a>
+   <a href="#cleanup.policy">cleanup.policy</a>
+</h4>
 <p>A string that is either "delete" or "compact" or both. This string designates the retention policy to use on old log segments. The default policy ("delete") will discard old segments when their retention time or size limit has been reached. The "compact" setting will enable <a href="#compaction">log compaction</a> on the topic.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -11,7 +14,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="compression.type" href="#compression.type">compression.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="compression.type"></a>
+   <a href="#compression.type">compression.type</a>
+</h4>
 <p>Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -22,7 +28,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="delete.retention.ms" href="#delete.retention.ms">delete.retention.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="delete.retention.ms"></a>
+   <a href="#delete.retention.ms">delete.retention.ms</a>
+</h4>
 <p>The amount of time to retain delete tombstone markers for <a href="#compaction">log compacted</a> topics. This setting also gives a bound on the time in which a consumer must complete a read if they begin from offset 0 to ensure that they get a valid snapshot of the final stage (otherwise delete tombstones may be collected before they complete their scan).</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -33,7 +42,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="file.delete.delay.ms" href="#file.delete.delay.ms">file.delete.delay.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="file.delete.delay.ms"></a>
+   <a href="#file.delete.delay.ms">file.delete.delay.ms</a>
+</h4>
 <p>The time to wait before deleting a file from the filesystem.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -44,7 +56,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="flush.messages" href="#flush.messages">flush.messages</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="flush.messages"></a>
+   <a href="#flush.messages">flush.messages</a>
+</h4>
 <p>This setting allows specifying an interval at which we will force an fsync of data written to the log. For example if this was set to 1 we would fsync after every message; if it were 5 we would fsync after every five messages. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient. This setting can be overridden on a per-topic basis (see <a href="#topicconfigs">the per-topic configuration section</a>).</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -55,7 +70,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="flush.ms" href="#flush.ms">flush.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="flush.ms"></a>
+   <a href="#flush.ms">flush.ms</a>
+</h4>
 <p>This setting allows specifying a time interval at which we will force an fsync of data written to the log. For example if this was set to 1000 we would fsync after 1000 ms had passed. In general we recommend you not set this and use replication for durability and allow the operating system's background flush capabilities as it is more efficient.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -66,7 +84,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="follower.replication.throttled.replicas" href="#follower.replication.throttled.replicas">follower.replication.throttled.replicas</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="follower.replication.throttled.replicas"></a>
+   <a href="#follower.replication.throttled.replicas">follower.replication.throttled.replicas</a>
+</h4>
 <p>A list of replicas for which log replication should be throttled on the follower side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -77,7 +98,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="index.interval.bytes" href="#index.interval.bytes">index.interval.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="index.interval.bytes"></a>
+   <a href="#index.interval.bytes">index.interval.bytes</a>
+</h4>
 <p>This setting controls how frequently Kafka adds an index entry to its offset index. The default setting ensures that we index a message roughly every 4096 bytes. More indexing allows reads to jump closer to the exact position in the log but makes the index larger. You probably don't need to change this.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -88,7 +112,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="leader.replication.throttled.replicas" href="#leader.replication.throttled.replicas">leader.replication.throttled.replicas</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="leader.replication.throttled.replicas"></a>
+   <a href="#leader.replication.throttled.replicas">leader.replication.throttled.replicas</a>
+</h4>
 <p>A list of replicas for which log replication should be throttled on the leader side. The list should describe a set of replicas in the form [PartitionId]:[BrokerId],[PartitionId]:[BrokerId]:... or alternatively the wildcard '*' can be used to throttle all replicas for this topic.</p>
 <table><tbody>
 <tr><th>Type:</th><td>list</td></tr>
@@ -99,7 +126,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="max.compaction.lag.ms" href="#max.compaction.lag.ms">max.compaction.lag.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="max.compaction.lag.ms"></a>
+   <a href="#max.compaction.lag.ms">max.compaction.lag.ms</a>
+</h4>
 <p>The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -110,7 +140,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="max.message.bytes" href="#max.message.bytes">max.message.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="max.message.bytes"></a>
+   <a href="#max.message.bytes">max.message.bytes</a>
+</h4>
 <p>The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -121,7 +154,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="message.format.version" href="#message.format.version">message.format.version</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="message.format.version"></a>
+   <a href="#message.format.version">message.format.version</a>
+</h4>
 <p>Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0; check ApiVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller than or equal to the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -132,7 +168,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="message.timestamp.difference.max.ms" href="#message.timestamp.difference.max.ms">message.timestamp.difference.max.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="message.timestamp.difference.max.ms"></a>
+   <a href="#message.timestamp.difference.max.ms">message.timestamp.difference.max.ms</a>
+</h4>
 <p>The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if message.timestamp.type=LogAppendTime.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -143,7 +182,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="message.timestamp.type" href="#message.timestamp.type">message.timestamp.type</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="message.timestamp.type"></a>
+   <a href="#message.timestamp.type">message.timestamp.type</a>
+</h4>
 <p>Define whether the timestamp in the message is message create time or log append time. The value should be either `CreateTime` or `LogAppendTime`.</p>
 <table><tbody>
 <tr><th>Type:</th><td>string</td></tr>
@@ -154,7 +196,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="min.cleanable.dirty.ratio" href="#min.cleanable.dirty.ratio">min.cleanable.dirty.ratio</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="min.cleanable.dirty.ratio"></a>
+   <a href="#min.cleanable.dirty.ratio">min.cleanable.dirty.ratio</a>
+</h4>
 <p>This configuration controls how frequently the log compactor will attempt to clean the log (assuming <a href="#compaction">log compaction</a> is enabled). By default we will avoid cleaning a log where more than 50% of the log has been compacted. This ratio bounds the maximum space wasted in the log by duplicates (at 50% at most 50% of the log could be duplicates). A higher ratio will mean fewer, more efficient cleanings but will mean more wasted space in the log. If the max.compaction.lag.ms or the min.compaction.lag.ms configurations are also specified, then the log compactor considers the log to be eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the min.compaction.lag.ms duration, or (ii) if the log has had dirty (uncompacted) records for at most the max.compaction.lag.ms period.</p>
 <table><tbody>
 <tr><th>Type:</th><td>double</td></tr>
@@ -165,7 +210,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="min.compaction.lag.ms" href="#min.compaction.lag.ms">min.compaction.lag.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="min.compaction.lag.ms"></a>
+   <a href="#min.compaction.lag.ms">min.compaction.lag.ms</a>
+</h4>
 <p>The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -176,7 +224,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="min.insync.replicas" href="#min.insync.replicas">min.insync.replicas</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="min.insync.replicas"></a>
+   <a href="#min.insync.replicas">min.insync.replicas</a>
+</h4>
 <p>When a producer sets acks to "all" (or "-1"), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend).<br>When used together, <code>min.insync.replicas</code> and <code>acks</code> allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set <code>min.insync.replicas</code> to 2, and produce with <code>acks</code> of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -198,7 +249,10 @@
 </tbody></table>
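+<p>As a rough sketch of the typical scenario above (the topic name, bootstrap address, and message contents are assumptions for illustration, not part of this reference):</p>
+<pre><code class="language-java">
+import java.util.Collections;
+import java.util.Properties;
+import org.apache.kafka.clients.admin.Admin;
+import org.apache.kafka.clients.admin.NewTopic;
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.Producer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+
+public class DurableWriteSketch {
+    public static void main(String[] args) throws Exception {
+        Properties props = new Properties();
+        props.put("bootstrap.servers", "localhost:9092");
+
+        // Create a topic with replication factor 3 and min.insync.replicas=2.
+        try (Admin admin = Admin.create(props)) {
+            NewTopic topic = new NewTopic("durable-topic", 1, (short) 3)
+                .configs(Collections.singletonMap("min.insync.replicas", "2"));
+            admin.createTopics(Collections.singleton(topic)).all().get();
+        }
+
+        // acks=all: a write succeeds only once at least 2 in-sync replicas have it.
+        props.put("acks", "all");
+        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+        try (Producer&lt;String, String&gt; producer = new KafkaProducer&lt;&gt;(props)) {
+            producer.send(new ProducerRecord&lt;&gt;("durable-topic", "key", "value")).get();
+        }
+    }
+}
+</code></pre>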
 </li>
 <li>
-<h4><a id="retention.bytes" href="#retention.bytes">retention.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="retention.bytes"></a>
+   <a href="#retention.bytes">retention.bytes</a>
+</h4>
 <p>This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the "delete" retention policy. By default there is no size limit, only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic retention in bytes.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -209,7 +263,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="retention.ms" href="#retention.ms">retention.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="retention.ms"></a>
+   <a href="#retention.ms">retention.ms</a>
+</h4>
 <p>This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the "delete" retention policy. This represents an SLA on how soon consumers must read their data. If set to -1, no time limit is applied.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -220,7 +277,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="segment.bytes" href="#segment.bytes">segment.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="segment.bytes"></a>
+   <a href="#segment.bytes">segment.bytes</a>
+</h4>
 <p>This configuration controls the segment file size for the log. Retention and cleaning are always done one file at a time, so a larger segment size means fewer files but less granular control over retention.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -231,7 +291,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="segment.index.bytes" href="#segment.index.bytes">segment.index.bytes</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="segment.index.bytes"></a>
+   <a href="#segment.index.bytes">segment.index.bytes</a>
+</h4>
 <p>This configuration controls the size of the index that maps offsets to file positions. We preallocate this index file and shrink it only after log rolls. You generally should not need to change this setting.</p>
 <table><tbody>
 <tr><th>Type:</th><td>int</td></tr>
@@ -242,7 +305,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="segment.jitter.ms" href="#segment.jitter.ms">segment.jitter.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="segment.jitter.ms"></a>
+   <a href="#segment.jitter.ms">segment.jitter.ms</a>
+</h4>
 <p>The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -253,7 +319,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="segment.ms" href="#segment.ms">segment.ms</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="segment.ms"></a>
+   <a href="#segment.ms">segment.ms</a>
+</h4>
 <p>This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data.</p>
 <table><tbody>
 <tr><th>Type:</th><td>long</td></tr>
@@ -264,7 +333,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="unclean.leader.election.enable" href="#unclean.leader.election.enable">unclean.leader.election.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="unclean.leader.election.enable"></a>
+   <a href="#unclean.leader.election.enable">unclean.leader.election.enable</a>
+</h4>
 <p>Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
@@ -275,7 +347,10 @@
 </tbody></table>
 </li>
 <li>
-<h4><a id="message.downconversion.enable" href="#message.downconversion.enable">message.downconversion.enable</a></h4>
+<h4 class="anchor-heading">
+   <a class="anchor-link" id="message.downconversion.enable"></a>
+   <a href="#message.downconversion.enable">message.downconversion.enable</a>
+</h4>
 <p>This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to <code>false</code>, the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with an <code>UNSUPPORTED_VERSION</code> error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers.</p>
 <table><tbody>
 <tr><th>Type:</th><td>boolean</td></tr>
diff --git a/26/introduction.html b/26/introduction.html
index 3ff44cf..da79386 100644
--- a/26/introduction.html
+++ b/26/introduction.html
@@ -18,198 +18,203 @@
 <script><!--#include virtual="js/templateData.js" --></script>
 
 <script id="introduction-template" type="text/x-handlebars-template">
-  <h3> Apache Kafka&reg; is <i>a distributed streaming platform</i>. What exactly does that mean?</h3>
-  <p>A streaming platform has three key capabilities:</p>
-  <ul>
-    <li>Publish and subscribe to streams of records, similar to a message queue or enterprise messaging system.
-    <li>Store streams of records in a fault-tolerant durable way.
-    <li>Process streams of records as they occur.
-  </ul>
-  <p>Kafka is generally used for two broad classes of applications:</p>
-  <ul>
-    <li>Building real-time streaming data pipelines that reliably get data between systems or applications
-    <li>Building real-time streaming applications that transform or react to the streams of data
-  </ul>
-  <p>To understand how Kafka does these things, let's dive in and explore Kafka's capabilities from the bottom up.</p>
-  <p>First a few concepts:</p>
-  <ul>
-    <li>Kafka is run as a cluster on one or more servers that can span multiple datacenters.
-      <li>The Kafka cluster stores streams of <i>records</i> in categories called <i>topics</i>.
-    <li>Each record consists of a key, a value, and a timestamp.
-  </ul>
-  <p>Kafka has five core APIs:</p>
-  <div style="overflow: hidden;">
-      <ul style="float: left; width: 40%;">
-      <li>The <a href="/documentation.html#producerapi">Producer API</a> allows an application to publish a stream of records to one or more Kafka topics.
-      <li>The <a href="/documentation.html#consumerapi">Consumer API</a> allows an application to subscribe to one or more topics and process the stream of records produced to them.
-    <li>The <a href="/documentation/streams">Streams API</a> allows an application to act as a <i>stream processor</i>, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams to output streams.
-    <li>The <a href="/documentation.html#connect">Connector API</a> allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems. For example, a connector to a relational database might capture every change to a table.
-    <li>The <a href="/documentation.html#adminapi">Admin API</a> allows managing and inspecting topics, brokers and other Kafka objects.
-  </ul>
-      <img src="/{{version}}/images/kafka-apis.png" style="float: right; width: 50%;">
-      </div>
+  <h4 class="anchor-heading">
+    <a class="anchor-link" id="intro_streaming" href="#intro_streaming"></a>
+    <a href="#intro_streaming">What is event streaming?</a>
+  </h4>
   <p>
-  In Kafka the communication between the clients and the servers is done with a simple, high-performance, language agnostic <a href="https://kafka.apache.org/protocol.html">TCP protocol</a>. This protocol is versioned and maintains backwards compatibility with older versions. We provide a Java client for Kafka, but clients are available in <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">many languages</a>.</p>
-
-  <h4 class="anchor-heading"><a id="intro_topics" class="anchor-link"></a><a href="#intro_topics">Topics and Logs</a></h4>
-  <p>Let's first dive into the core abstraction Kafka provides for a stream of records&mdash;the topic.</p>
-  <p>A topic is a category or feed name to which records are published. Topics in Kafka are always multi-subscriber; that is, a topic can have zero, one, or many consumers that subscribe to the data written to it.</p>
-  <p> For each topic, the Kafka cluster maintains a partitioned log that looks like this: </p>
-  <img class="centered" src="/{{version}}/images/log_anatomy.png">
-
-  <p> Each partition is an ordered, immutable sequence of records that is continually appended to&mdash;a structured commit log. The records in the partitions are each assigned a sequential id number called the <i>offset</i> that uniquely identifies each record within the partition.
+    Event streaming is the digital equivalent of the human body's central nervous system. It is the
+    technological foundation for the 'always-on' world where businesses are increasingly software-defined 
+    and automated, and where the user of software is more software.
   </p>
   <p>
-  The Kafka cluster durably persists all published records&mdash;whether or not they have been consumed&mdash;using a configurable retention period. For example, if the retention policy is set to two days, then for the two days after a record is published, it is available for consumption, after which it will be discarded to free up space. Kafka's performance is effectively constant with respect to data size so storing data for a long time is not a problem.
-  </p>
-  <img class="centered" src="/{{version}}/images/log_consumer.png" style="width:400px">
-  <p>
-  In fact, the only metadata retained on a per-consumer basis is the offset or position of that consumer in the log. This offset is controlled by the consumer: normally a consumer will advance its offset linearly as it reads records, but, in fact, since the position is controlled by the consumer it can consume records in any order it likes. For example a consumer can reset to an older offset to reprocess data from the past or skip ahead to the most recent record and start consuming from "now".
-  </p>
-  <p>
-  This combination of features means that Kafka consumers are very cheap&mdash;they can come and go without much impact on the cluster or on other consumers. For example, you can use our command line tools to "tail" the contents of any topic without changing what is consumed by any existing consumers.
-  </p>
-  <p>
-  The partitions in the log serve several purposes. First, they allow the log to scale beyond a size that will fit on a single server. Each individual partition must fit on the servers that host it, but a topic may have many partitions so it can handle an arbitrary amount of data. Second they act as the unit of parallelism&mdash;more on that in a bit.
+    Technically speaking, event streaming is the practice of capturing data in real-time from event sources
+    like databases, sensors, mobile devices, cloud services, and software applications in the form of streams
+    of events; storing these event streams durably for later retrieval; manipulating, processing, and reacting
+    to the event streams in real-time as well as retrospectively; and routing the event streams to different
+    destination technologies as needed. Event streaming thus ensures a continuous flow and interpretation of
+    data so that the right information is at the right place, at the right time.
   </p>
 
-  <h4 class="anchor-heading"><a id="intro_distribution" class="anchor-link"></a><a href="#intro_distribution">Distribution</a></h4>
-
+  <h4 class="anchor-heading">
+    <a class="anchor-link" id="intro_usage" href="#intro_usage"></a>
+    <a href="#intro_usage">What can I use event streaming for?</a>
+  </h4>
   <p>
-  The partitions of the log are distributed over the servers in the Kafka cluster with each server handling data and requests for a share of the partitions. Each partition is replicated across a configurable number of servers for fault tolerance.
-  </p>
-  <p>
-  Each partition has one server which acts as the "leader" and zero or more servers which act as "followers". The leader handles all read and write requests for the partition while the followers passively replicate the leader. If the leader fails, one of the followers will automatically become the new leader. Each server acts as a leader for some of its partitions and a follower for others so load is well balanced within the cluster.
-  </p>
-
-  <h4 class="anchor-heading"><a id="intro_geo-replication" class="anchor-link"></a><a href="#intro_geo-replication">Geo-Replication</a></h4>
-
-  <p>Kafka MirrorMaker provides geo-replication support for your clusters. With MirrorMaker, messages are replicated across multiple datacenters or cloud regions. You can use this in active/passive scenarios for backup and recovery; or in active/active scenarios to place data closer to your users, or support data locality requirements. </p>
-
-  <h4 class="anchor-heading"><a id="intro_producers" class="anchor-link"></a><a href="#intro_producers">Producers</a></h4>
-  <p>
-  Producers publish data to the topics of their choice. The producer is responsible for choosing which record to assign to which partition within the topic. This can be done in a round-robin fashion simply to balance load or it can be done according to some semantic partition function (say based on some key in the record). More on the use of partitioning in a second!
-  </p>
-
-  <h4 class="anchor-heading"><a id="intro_consumers" class="anchor-link"></a><a href="#intro_consumers">Consumers</a></h4>
-
-  <p>
-  Consumers label themselves with a <i>consumer group</i> name, and each record published to a topic is delivered to one consumer instance within each subscribing consumer group. Consumer instances can be in separate processes or on separate machines.
-  </p>
-  <p>
-  If all the consumer instances have the same consumer group, then the records will effectively be load balanced over the consumer instances.</p>
-  <p>
-  If all the consumer instances have different consumer groups, then each record will be broadcast to all the consumer processes.
-  </p>
-  <img class="centered" src="/{{version}}/images/consumer-groups.png">
-  <p>
-    A two server Kafka cluster hosting four partitions (P0-P3) with two consumer groups. Consumer group A has two consumer instances and group B has four.
-  </p>
-
-  <p>
-  More commonly, however, we have found that topics have a small number of consumer groups, one for each "logical subscriber". Each group is composed of many consumer instances for scalability and fault tolerance. This is nothing more than publish-subscribe semantics where the subscriber is a cluster of consumers instead of a single process.
-  </p>
-  <p>
-  The way consumption is implemented in Kafka is by dividing up the partitions in the log over the consumer instances so that each instance is the exclusive consumer of a "fair share" of partitions at any point in time. This process of maintaining membership in the group is handled by the Kafka protocol dynamically. If new instances join the group they will take over some partitions from other members of the group; if an instance dies, its partitions will be distributed to the remaining instances.
-  </p>
-  <p>
-  Kafka only provides a total order over records <i>within</i> a partition, not between different partitions in a topic. Per-partition ordering combined with the ability to partition data by key is sufficient for most applications. However, if you require a total order over records this can be achieved with a topic that has only one partition, though this will mean only one consumer process per consumer group.
-  </p>
-  <h4 class="anchor-heading"><a id="intro_multi-tenancy" class="anchor-link"></a><a href="#intro_multi-tenancy">Multi-tenancy</a></h4>
-  <p>You can deploy Kafka as a multi-tenant solution. Multi-tenancy is enabled by configuring which topics can produce or consume data. There is also operations support for quotas.  Administrators can define and enforce quotas on requests to control the broker resources that are used by clients.  For more information, see the <a href="https://kafka.apache.org/documentation/#security">security documentation</a>. </p>
-  <h4 class="anchor-heading"><a id="intro_guarantees" class="anchor-link"></a><a href="#intro_guarantees">Guarantees</a></h4>
-  <p>
-  At a high-level Kafka gives the following guarantees:
+    Event streaming is applied to a <a href="/powered-by">wide variety of use cases</a>
+    across a plethora of industries and organizations. Examples include:
   </p>
   <ul>
-    <li>Messages sent by a producer to a particular topic partition will be appended in the order they are sent. That is, if a record M1 is sent by the same producer as a record M2, and M1 is sent first, then M1 will have a lower offset than M2 and appear earlier in the log.
-    <li>A consumer instance sees records in the order they are stored in the log.
-    <li>For a topic with replication factor N, we will tolerate up to N-1 server failures without losing any records committed to the log.
+    <li>
+      To process payments and financial transactions in real-time, such as in stock exchanges, banks, and insurance companies.
+    </li>
+    <li>
+      To track and monitor cars, trucks, fleets, and shipments in real-time, such as in logistics and the automotive industry.
+    </li>
+    <li>
+      To continuously capture and analyze sensor data from IoT devices or other equipment, such as in factories and wind farms.
+    </li>
+    <li>
+      To collect and immediately react to customer interactions and orders, such as in retail, the hotel and travel industry, and mobile applications.
+    </li>
+    <li>
+      To monitor patients in hospital care and predict changes in condition to ensure timely treatment in emergencies.
+    </li>
+    <li>
+      To connect, store, and make available data produced by different divisions of a company.
+    </li>
+    <li>
+      To serve as the foundation for data platforms, event-driven architectures, and microservices.
+    </li>
+  </ul>
+
+  <h4 class="anchor-heading">
+    <a class="anchor-link" id="intro_platform" href="#intro_platform"></a>
+    <a href="#intro_platform">Apache Kafka&reg; is an event streaming platform. What does that mean?</a>
+  </h4>
+  <p>
+    Kafka combines three key capabilities so you can implement
+    <a href="/powered-by">your use cases</a>
+    for event streaming end-to-end with a single battle-tested solution:
+  </p>
+  <ol>
+    <li>
+      To <strong>publish</strong> (write) and <strong>subscribe to</strong> (read) streams of events, including continuous import/export of
+      your data from other systems.
+    </li>
+    <li>
+      To <strong>store</strong> streams of events durably and reliably for as long as you want.
+    </li>
+    <li>
+      To <strong>process</strong> streams of events as they occur or retrospectively.
+    </li>
+  </ol>
+  <p>
+    And all this functionality is provided in a distributed, highly scalable, elastic, fault-tolerant, and
+    secure manner. Kafka can be deployed on bare-metal hardware, virtual machines, and containers, and on-premises
+    as well as in the cloud. You can choose between self-managing your Kafka environments and using fully managed
+    services offered by a variety of vendors.
+  </p>
+
+  <h4 class="anchor-heading">
+    <a class="anchor-link" id="intro_nutshell" href="#intro_nutshell"></a>
+    <a href="#intro_nutshell">How does Kafka work in a nutshell?</a>
+  </h4>
+  <p>
+    Kafka is a distributed system consisting of <strong>servers</strong> and <strong>clients</strong> that
+    communicate via a high-performance <a href="/protocol.html">TCP network protocol</a>.
+    It can be deployed on bare-metal hardware, virtual machines, and containers in on-premises as well as cloud
+    environments.
+  </p>
+  <p>
+    <strong>Servers</strong>: Kafka is run as a cluster of one or more servers that can span multiple datacenters
+    or cloud regions. Some of these servers form the storage layer, called the brokers. Other servers run
+    <a href="/documentation/#connect">Kafka Connect</a> to continuously import and export
+    data as event streams to integrate Kafka with your existing systems such as relational databases as well as
+    other Kafka clusters. To let you implement mission-critical use cases, a Kafka cluster is highly scalable
+    and fault-tolerant: if any of its servers fails, the other servers will take over its work to ensure
+    continuous operations without any data loss.
+  </p>
+  <p>
+    <strong>Clients</strong>: They allow you to write distributed applications and microservices that read, write,
+    and process streams of events in parallel, at scale, and in a fault-tolerant manner even in the case of network
+    problems or machine failures. Kafka ships with some such clients included, which are augmented by
+    <a href="https://cwiki.apache.org/confluence/display/KAFKA/Clients">dozens of clients</a> provided by the Kafka
+    community: clients are available for Java and Scala including the higher-level
+    <a href="/documentation/streams/">Kafka Streams</a> library, for Go, Python, C/C++, and
+    many other programming languages as well as REST APIs.
+  </p>
+
+  <h4 class="anchor-heading">
+    <a class="anchor-link" id="intro_concepts_and_terms" href="#intro_concepts_and_terms"></a>
+    <a href="#intro_concepts_and_terms">Main Concepts and Terminology</a>
+  </h4>
+  <p>
+    An <strong>event</strong> records the fact that "something happened" in the world or in your business. It is also called a record or a message in the documentation. When you read or write data to Kafka, you do this in the form of events. Conceptually, an event has a key, value, timestamp, and optional metadata headers. Here's an example event:
+  </p>
+  <ul>
+    <li>
+      Event key: "Alice"
+    </li>
+    <li>
+      Event value: "Made a payment of $200 to Bob"
+    </li>
+    <li>
+      Event timestamp: "Jun. 25, 2020 at 2:06 p.m."
+    </li>
   </ul>
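+  <p>
+    To make this concrete, here is a hedged sketch of writing such an event with the Java producer client; the
+    topic name "payments", the bootstrap address, and the serializer settings are assumptions for illustration:
+  </p>
+  <pre><code class="language-java">
+import java.util.Properties;
+import org.apache.kafka.clients.producer.KafkaProducer;
+import org.apache.kafka.clients.producer.Producer;
+import org.apache.kafka.clients.producer.ProducerRecord;
+
+public class PaymentEventSketch {
+    public static void main(String[] args) {
+        Properties props = new Properties();
+        props.put("bootstrap.servers", "localhost:9092");
+        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+
+        try (Producer&lt;String, String&gt; producer = new KafkaProducer&lt;&gt;(props)) {
+            // Key "Alice", value describing what happened; the timestamp is
+            // assigned automatically unless one is set explicitly on the record.
+            producer.send(new ProducerRecord&lt;&gt;("payments", "Alice", "Made a payment of $200 to Bob"));
+        }
+    }
+}
+</code></pre>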
   <p>
-  More details on these guarantees are given in the design section of the documentation.
-  </p>
-  <h4 class="anchor-heading"><a id="kafka_mq" class="anchor-link"></a><a href="#kafka_mq">Kafka as a Messaging System</a></h4>
-  <p>
-  How does Kafka's notion of streams compare to a traditional enterprise messaging system?
+    <strong>Producers</strong> are those client applications that publish (write) events to Kafka, and <strong>consumers</strong> are those that subscribe to (read and process) these events. In Kafka, producers and consumers are fully decoupled and agnostic of each other, which is a key design element to achieve the high scalability that Kafka is known for. For example, producers never need to wait for consumers. Kafka provides various <a href="/documentation/#intro_guarantees">guarantees</a> such as the ability to process events exactly-once.
   </p>
   <p>
-  Messaging traditionally has two models: <a href="http://en.wikipedia.org/wiki/Message_queue">queuing</a> and <a href="http://en.wikipedia.org/wiki/Publish%E2%80%93subscribe_pattern">publish-subscribe</a>. In a queue, a pool of consumers may read from a server and each record goes to one of them; in publish-subscribe the record is broadcast to all consumers. Each of these two models has a strength and a weakness. The strength of queuing is that it allows you to divide up the processing of data over multiple consumer instances, which lets you scale your processing. Unfortunately, queues aren't multi-subscriber&mdash;once one process reads the data it's gone. Publish-subscribe allows you broadcast data to multiple processes, but has no way of scaling processing since every message goes to every subscriber.
+    Events are organized and durably stored in <strong>topics</strong>. In simplified terms, a topic is similar to a folder in a filesystem, and the events are the files in that folder. An example topic name could be "payments". Topics in Kafka are always multi-producer and multi-subscriber: a topic can have zero, one, or many producers that write events to it, as well as zero, one, or many consumers that subscribe to these events. Events in a topic can be read as often as needed; unlike traditional messaging systems, events are not deleted after consumption. Instead, you define how long Kafka should retain your events through a per-topic configuration setting, after which old events will be discarded. Kafka's performance is effectively constant with respect to data size, so storing data for a long time is perfectly fine.
   </p>
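+  <p>
+    For example, the retention period of a topic can be adjusted at runtime through the Admin API. A minimal
+    sketch, assuming a topic named "payments" and a seven-day retention target:
+  </p>
+  <pre><code class="language-java">
+import java.util.Collections;
+import java.util.Properties;
+import org.apache.kafka.clients.admin.Admin;
+import org.apache.kafka.clients.admin.AlterConfigOp;
+import org.apache.kafka.clients.admin.ConfigEntry;
+import org.apache.kafka.common.config.ConfigResource;
+
+public class RetentionSketch {
+    public static void main(String[] args) throws Exception {
+        Properties props = new Properties();
+        props.put("bootstrap.servers", "localhost:9092");
+        try (Admin admin = Admin.create(props)) {
+            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "payments");
+            AlterConfigOp setRetention = new AlterConfigOp(
+                new ConfigEntry("retention.ms", "604800000"), // 7 days in milliseconds
+                AlterConfigOp.OpType.SET);
+            admin.incrementalAlterConfigs(
+                Collections.singletonMap(topic, Collections.singletonList(setRetention))).all().get();
+        }
+    }
+}
+</code></pre>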
   <p>
-  The consumer group concept in Kafka generalizes these two concepts. As with a queue the consumer group allows you to divide up processing over a collection of processes (the members of the consumer group). As with publish-subscribe, Kafka allows you to broadcast messages to multiple consumer groups.
+    Topics are <strong>partitioned</strong>, meaning a topic is spread over a number of "buckets" located on different Kafka brokers. This distributed placement of your data is very important for scalability because it allows client applications to both read and write the data from/to many brokers at the same time. When a new event is published to a topic, it is actually appended to one of the topic's partitions. Events with the same event key (e.g., a customer or vehicle ID) are written to the same partition, and Kafka <a href="/documentation/#intro_guarantees">guarantees</a> that any consumer of a given topic-partition will always read that partition's events in exactly the same order as they were written.
+  </p>
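+  <p>
+    A small sketch of this ordering guarantee, reusing a producer configured as in the earlier example: two
+    events with the same key land in the same partition (the topic name is again an assumption):
+  </p>
+  <pre><code class="language-java">
+// Blocking on send() returns RecordMetadata, which reveals the chosen partition.
+RecordMetadata first  = producer.send(new ProducerRecord&lt;&gt;("payments", "Alice", "payment-1")).get();
+RecordMetadata second = producer.send(new ProducerRecord&lt;&gt;("payments", "Alice", "payment-2")).get();
+
+// With the default partitioner, equal keys always hash to the same partition
+// (as long as the number of partitions stays the same).
+System.out.println(first.partition() == second.partition()); // prints "true"
+</code></pre>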
+  <figure class="figure">
+    <img src="/images/streams-and-tables-p1_p4.png" class="figure-image" />
+    <figcaption class="figure-caption">
+      Figure: This example topic has four partitions P1–P4. Two different producer clients are publishing,
+      independently from each other, new events to the topic by writing events over the network to the topic's
+      partitions. Events with the same key (denoted by their color in the figure) are written to the same
+      partition. Note that both producers can write to the same partition if appropriate.
+    </figcaption>
+  </figure>
+  <p>
+    To make your data fault-tolerant and highly available, every topic can be <strong>replicated</strong>, even across geo-regions or datacenters, so that there are always multiple brokers that have a copy of the data in case things go wrong or you need to do maintenance on the brokers. A common production setting is a replication factor of 3, i.e., there will always be three copies of your data. This replication is performed at the level of topic-partitions.
   </p>
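+  <p>
+    As a sketch, a topic laid out like the figure above (four partitions with a replication factor of 3)
+    could be created through the Admin API; the topic name is an assumption:
+  </p>
+  <pre><code class="language-java">
+import java.util.Collections;
+import java.util.Properties;
+import org.apache.kafka.clients.admin.Admin;
+import org.apache.kafka.clients.admin.NewTopic;
+
+public class CreateTopicSketch {
+    public static void main(String[] args) throws Exception {
+        Properties props = new Properties();
+        props.put("bootstrap.servers", "localhost:9092");
+        try (Admin admin = Admin.create(props)) {
+            // 4 partitions, replication factor 3: three copies of each partition.
+            NewTopic payments = new NewTopic("payments", 4, (short) 3);
+            admin.createTopics(Collections.singleton(payments)).all().get();
+        }
+    }
+}
+</code></pre>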
   <p>
-  The advantage of Kafka's model is that every topic has both these properties&mdash;it can scale processing and is also multi-subscriber&mdash;there is no need to choose one or the other.
-  </p>
-  <p>
-  Kafka has stronger ordering guarantees than a traditional messaging system, too.
-  </p>
-  <p>
-  A traditional queue retains records in-order on the server, and if multiple consumers consume from the queue then the server hands out records in the order they are stored. However, although the server hands out records in order, the records are delivered asynchronously to consumers, so they may arrive out of order on different consumers. This effectively means the ordering of the records is lost in the presence of parallel consumption. Messaging systems often work around this by having a notion of "exclusive consumer" that allows only one process to consume from a queue, but of course this means that there is no parallelism in processing.
-  </p>
-  <p>
-  Kafka does it better. By having a notion of parallelism&mdash;the partition&mdash;within the topics, Kafka is able to provide both ordering guarantees and load balancing over a pool of consumer processes. This is achieved by assigning the partitions in the topic to the consumers in the consumer group so that each partition is consumed by exactly one consumer in the group. By doing this we ensure that the consumer is the only reader of that partition and consumes the data in order. Since there are many partitions this still balances the load over many consumer instances. Note however that there cannot be more consumer instances in a consumer group than partitions.
+    This primer should be sufficient for an introduction. If you are interested, the <a href="/documentation/#design">Design</a> section of the documentation explains Kafka's various concepts in full detail.
   </p>
 
-  <h4 id="kafka_storage">Kafka as a Storage System</h4>
+  <h4 class="anchor-heading">
+    <a class="anchor-link" id="intro_apis" href="#intro_apis"></a>
+    <a href="#intro_apis">Kafka APIs</a>
+  </h4>
+  <p>
+    In addition to command line tooling for management and administration tasks, Kafka has five core APIs for Java and Scala:
+  </p>
+  <ul>
+    <li>
+      The <a href="/documentation.html#adminapi">Admin API</a> to manage and inspect topics, brokers, and other Kafka objects.
+    </li>
+    <li>
+      The <a href="/documentation.html#producerapi">Producer API</a> to publish (write) a stream of events to one or more Kafka topics.
+    </li>
+    <li>
+      The <a href="/documentation.html#consumerapi">Consumer API</a> to subscribe to (read) one or more topics and to process the stream of events produced to them.
+    </li>
+    <li>
+      The <a href="/documentation/streams">Kafka Streams API</a> to implement stream processing applications and microservices. It provides higher-level functions to process event streams, including transformations, stateful operations like aggregations and joins, windowing, processing based on event-time, and more. Input is read from one or more topics in order to generate output to one or more topics, effectively transforming the input streams to output streams.
+    </li>
+    <li>
+      The <a href="/documentation.html#connect">Kafka Connect API</a> to build and run reusable data import/export connectors that consume (read) or produce (write) streams of events from and to external systems and applications so they can integrate with Kafka. For example, a connector to a relational database like PostgreSQL might capture every change to a set of tables. However, in practice, you typically don't need to implement your own connectors because the Kafka community already provides hundreds of ready-to-use connectors.
+    </li>
+  </ul>
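+  <p>
+    As a hedged illustration of the Consumer API mentioned above, here is a minimal poll loop; the group name,
+    topic, bootstrap address, and deserializer settings are assumptions for this sketch:
+  </p>
+  <pre><code class="language-java">
+import java.time.Duration;
+import java.util.Collections;
+import java.util.Properties;
+import org.apache.kafka.clients.consumer.Consumer;
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+import org.apache.kafka.clients.consumer.ConsumerRecords;
+import org.apache.kafka.clients.consumer.KafkaConsumer;
+
+public class PaymentConsumerSketch {
+    public static void main(String[] args) {
+        Properties props = new Properties();
+        props.put("bootstrap.servers", "localhost:9092");
+        props.put("group.id", "payment-processors"); // consumer group name (assumed)
+        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
+
+        try (Consumer&lt;String, String&gt; consumer = new KafkaConsumer&lt;&gt;(props)) {
+            consumer.subscribe(Collections.singleton("payments"));
+            while (true) {
+                // Poll for new events and process each one as it arrives.
+                ConsumerRecords&lt;String, String&gt; records = consumer.poll(Duration.ofMillis(100));
+                for (ConsumerRecord&lt;String, String&gt; record : records)
+                    System.out.printf("key=%s value=%s%n", record.key(), record.value());
+            }
+        }
+    }
+}
+</code></pre>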
 
-  <p>
-  Any message queue that allows publishing messages decoupled from consuming them is effectively acting as a storage system for the in-flight messages. What is different about Kafka is that it is a very good storage system.
-  </p>
-  <p>
-  Data written to Kafka is written to disk and replicated for fault-tolerance. Kafka allows producers to wait on acknowledgement so that a write isn't considered complete until it is fully replicated and guaranteed to persist even if the server written to fails.
-  </p>
-  <p>
-  The disk structures Kafka uses scale well&mdash;Kafka will perform the same whether you have 50 KB or 50 TB of persistent data on the server.
-  </p>
-  <p>
-  As a result of taking storage seriously and allowing the clients to control their read position, you can think of Kafka as a kind of special purpose distributed filesystem dedicated to high-performance, low-latency commit log storage, replication, and propagation.
-  </p>
-  <p>
-  For details about the Kafka's commit log storage and replication design, please read <a href="https://kafka.apache.org/documentation/#design">this</a> page.
-  </p>
-  <h4>Kafka for Stream Processing</h4>
-  <p>
-  It isn't enough to just read, write, and store streams of data, the purpose is to enable real-time processing of streams.
-  </p>
-  <p>
-  In Kafka a stream processor is anything that takes continual streams of  data from input topics, performs some processing on this input, and produces continual streams of data to output topics.
-  </p>
-  <p>
-  For example, a retail application might take in input streams of sales and shipments, and output a stream of reorders and price adjustments computed off this data.
-  </p>
-  <p>
-  It is possible to do simple processing directly using the producer and consumer APIs. However for more complex transformations Kafka provides a fully integrated <a href="/documentation/streams">Streams API</a>. This allows building applications that do non-trivial processing that compute aggregations off of streams or join streams together.
-  </p>
-  <p>
-  This facility helps solve the hard problems this type of application faces: handling out-of-order data, reprocessing input as code changes, performing stateful computations, etc.
-  </p>
-  <p>
-  The streams API builds on the core primitives Kafka provides: it uses the producer and consumer APIs for input, uses Kafka for stateful storage, and uses the same group mechanism for fault tolerance among the stream processor instances.
-  </p>
-  <h4>Putting the Pieces Together</h4>
-  <p>
-  This combination of messaging, storage, and stream processing may seem unusual but it is essential to Kafka's role as a streaming platform.
-  </p>
-  <p>
-  A distributed file system like HDFS allows storing static files for batch processing. Effectively a system like this allows storing and processing <i>historical</i> data from the past.
-  </p>
-  <p>
-  A traditional enterprise messaging system allows processing future messages that will arrive after you subscribe. Applications built in this way process future data as it arrives.
-  </p>
-  <p>
-  Kafka combines both of these capabilities, and the combination is critical both for Kafka usage as a platform for streaming applications as well as for streaming data pipelines.
-  </p>
-  <p>
-  By combining storage and low-latency subscriptions, streaming applications can treat both past and future data the same way. That is a single application can process historical, stored data but rather than ending when it reaches the last record it can keep processing as future data arrives. This is a generalized notion of stream processing that subsumes batch processing as well as message-driven applications.
-  </p>
-  <p>
-  Likewise for streaming data pipelines the combination of subscription to real-time events make it possible to use Kafka for very low-latency pipelines; but the ability to store data reliably make it possible to use it for critical data where the delivery of data must be guaranteed or for integration with offline systems that load data only periodically or may go down for extended periods of time for maintenance. The stream processing facilities make it possible to transform data as it arrives.
-  </p>
-  <p>
-  For more information on the guarantees, APIs, and capabilities Kafka provides see the rest of the <a href="/documentation.html">documentation</a>.
-  </p>
+  <!-- TODO: add new section once supporting page is written -->
+
+  <h4 class="anchor-heading">
+    <a class="anchor-link" id="intro_more" href="#intro_more"></a>
+    <a href="#intro_more">Where to go from here</a>
+  </h4>
+  <ul>
+    <li>
+      To get hands-on experience with Kafka, follow the <a href="/quickstart">Quickstart</a>.
+    </li>
+    <li>
+      To understand Kafka in more detail, read the <a href="/documentation/">Documentation</a>.
+      You also have your choice of <a href="/books-and-papers">Kafka books and academic papers</a>.
+    </li>
+    <li>
+      Browse through the <a href="/powered-by">Use Cases</a> to learn how other users in our world-wide community are getting value out of Kafka.
+    </li>
+    <li>
+      Join a <a href="/events">local Kafka meetup group</a> and
+      <a href="https://kafka-summit.org/past-events/">watch talks from Kafka Summit</a>, the main conference of the Kafka community.
+    </li>
+  </ul>
 </script>
 
 <div class="p-introduction"></div>