<!DOCTYPE html>
<html lang="en">
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<head>
<meta charset="utf-8" />
<title>PublishKafkaRecord</title>
<link rel="stylesheet" href="../../../../../css/component-usage.css" type="text/css" />
</head>
<body>
<h2>Description</h2>
<p>
This Processor puts the contents of a FlowFile to a Topic in
<a href="http://kafka.apache.org/">Apache Kafka</a> using KafkaProducer API available
with Kafka 2.6 API. The contents of the incoming FlowFile will be read using the
configured Record Reader. Each record will then be serialized using the configured
Record Writer, and this serialized form will be the content of a Kafka message.
This message is optionally assigned a key by using the 'Message Key Field' Property.
</p>
<h2>Security Configuration</h2>
<p>
The Security Protocol property allows the user to specify the protocol for communicating
with the Kafka broker. The following sections describe each of the protocols in further detail.
</p>
<h3>PLAINTEXT</h3>
<p>
This option provides an unsecured connection to the broker, with no client authentication and no encryption.
In order to use this option the broker must be configured with a listener of the form:
<pre>
PLAINTEXT://host.name:port
</pre>
</p>
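<p>
For reference, a minimal broker-side sketch of this case (in the broker's server.properties; host name and port are placeholders) might look like:
<pre>
listeners=PLAINTEXT://host.name:9092
</pre>
On the NiFi side, only the Security Protocol property needs to be set to PLAINTEXT; no SSL Context Service or JAAS configuration is required.
</p>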
<h3>SSL</h3>
<p>
This option provides an encrypted connection to the broker, with optional client authentication. In order
to use this option the broker must be configured with a listener of the form:
<pre>
SSL://host.name:port
</pre>
In addition, the processor must have an SSL Context Service selected.
</p>
<p>
If the broker specifies ssl.client.auth=none, or does not specify ssl.client.auth, then the client will
not be required to present a certificate. In this case, the SSL Context Service selected may specify only
a truststore containing the public key of the certificate authority used to sign the broker's key.
</p>
<p>
If the broker specifies ssl.client.auth=required then the client will be required to present a certificate.
In this case, the SSL Context Service must also specify a keystore containing a client key, in addition to
a truststore as described above.
</p>
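<p>
As an illustration only, a broker listener that requires client authentication might be configured in server.properties along these lines (paths and passwords are placeholders):
<pre>
listeners=SSL://host.name:9093
ssl.keystore.location=/path/to/broker.keystore.jks
ssl.keystore.password=broker-keystore-password
ssl.truststore.location=/path/to/broker.truststore.jks
ssl.truststore.password=broker-truststore-password
ssl.client.auth=required
</pre>
With ssl.client.auth=required, the SSL Context Service selected in the processor must provide both a keystore (the client certificate and key) and a truststore (the certificate authority that signed the broker's certificate).
</p>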
<h3>SASL_PLAINTEXT</h3>
<p>
This option uses SASL with a PLAINTEXT transport layer to authenticate to the broker. In order to use this
option the broker must be configured with a listener of the form:
<pre>
SASL_PLAINTEXT://host.name:port
</pre>
In addition, the Kerberos Service Name must be specified in the processor.
</p>
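<p>
As a sketch, a broker that accepts Kerberos (GSSAPI) authentication over this listener is typically configured in server.properties with properties similar to the following (values are placeholders):
<pre>
listeners=SASL_PLAINTEXT://host.name:9092
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
</pre>
The broker's sasl.kerberos.service.name value is the value that must be entered as the Kerberos Service Name in the processor.
</p>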
<h4>SASL_PLAINTEXT - GSSAPI</h4>
<p>
If the SASL mechanism is GSSAPI, then the client must provide a JAAS configuration to authenticate. The
JAAS configuration can be provided by specifying the java.security.auth.login.config system property in
NiFi's bootstrap.conf, such as:
<pre>
java.arg.16=-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
</pre>
</p>
<p>
An example of the JAAS config file would be the following:
<pre>
KafkaClient {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/path/to/nifi.keytab"
serviceName="kafka"
principal="nifi@YOURREALM.COM";
};
</pre>
<b>NOTE:</b> The serviceName in the JAAS file must match the Kerberos Service Name in the processor.
</p>
<p>
Alternatively, the JAAS
configuration when using GSSAPI can be provided by specifying the Kerberos Principal and Kerberos Keytab
directly in the processor properties. This will dynamically create a JAAS configuration like above, and
will take precedence over the java.security.auth.login.config system property.
</p>
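<p>
For example, setting the processor properties as follows (values are placeholders matching the JAAS file shown above) is equivalent to that JAAS configuration, with no changes to bootstrap.conf required:
</p>
<table border="thin">
<tr>
<th>Processor Property</th>
<th>Configured Value</th>
</tr>
<tr>
<td>Kerberos Service Name</td>
<td>kafka</td>
</tr>
<tr>
<td>Kerberos Principal</td>
<td>nifi@YOURREALM.COM</td>
</tr>
<tr>
<td>Kerberos Keytab</td>
<td>/path/to/nifi.keytab</td>
</tr>
</table>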
<h4>SASL_PLAINTEXT - PLAIN</h4>
<p>
If the SASL mechanism is PLAIN, then the client must provide a JAAS configuration to authenticate, but
the JAAS configuration must use Kafka's PlainLoginModule. An example of the JAAS config file would
be the following:
<pre>
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="nifi"
password="nifi-password";
};
</pre>
The JAAS configuration can be provided in either of the following ways:
<ol type="1">
<li>specify the java.security.auth.login.config system property in
NiFi's bootstrap.conf. This limits you to a single set of user credentials across the cluster.</li>
<pre>
java.arg.16=-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
</pre>
<li>add the dynamic property 'sasl.jaas.config' to the processor configuration. This method allows different processors to use different user credentials and gives the flexibility to publish to multiple Kafka clusters.</li>
<pre>
sasl.jaas.config : org.apache.kafka.common.security.plain.PlainLoginModule required
username="nifi"
password="nifi-password";
</pre>
<b>NOTE:</b> The dynamic properties of this processor are not secured and as a result the password entered when utilizing sasl.jaas.config will be stored in the flow.xml.gz file in plain-text, and will be saved to NiFi Registry if using versioned flows.
</ol>
</p>
<p>
<b>NOTE:</b> It is not recommended to use a SASL mechanism of PLAIN with SASL_PLAINTEXT, as it would transmit
the username and password unencrypted.
</p>
<p>
<b>NOTE:</b> The Kerberos Service Name is not required for the PLAIN SASL mechanism. However, the processor will warn that this property must be set to a non-empty string; any placeholder value, such as "null", may be used.
</p>
<p>
<b>NOTE:</b> Using the PlainLoginModule will cause it to be registered in the JVM's static list of Providers, making
it visible to components in other NARs that may access the providers. There is currently a known issue
where Kafka processors using the PlainLoginModule will cause HDFS processors with Kerberos to no longer work.
</p>
<h4>SASL_PLAINTEXT - SCRAM</h4>
<p>
If the SASL mechanism is SCRAM, then the client must provide a JAAS configuration to authenticate, but
the JAAS configuration must use Kafka's ScramLoginModule. Ensure that you add the user-defined property 'sasl.mechanism' and assign it 'SCRAM-SHA-256' or 'SCRAM-SHA-512', depending on the Kafka broker's configuration. An example of the JAAS config file would
be the following:
<pre>
KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="nifi"
password="nifi-password";
};
</pre>
The JAAS configuration can be provided in either of the following ways:
<ol type="1">
<li>specify the java.security.auth.login.config system property in
NiFi's bootstrap.conf. This limits you to a single set of user credentials across the cluster.</li>
<pre>
java.arg.16=-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
</pre>
<li>add the dynamic property 'sasl.jaas.config' to the processor configuration. This method allows different processors to use different user credentials and gives the flexibility to publish to multiple Kafka clusters.</li>
<pre>
sasl.jaas.config : org.apache.kafka.common.security.scram.ScramLoginModule required
username="nifi"
password="nifi-password";
</pre>
<b>NOTE:</b> The dynamic properties of this processor are not secured and as a result the password entered when utilizing sasl.jaas.config will be stored in the flow.xml.gz file in plain-text, and will be saved to NiFi Registry if using versioned flows.
</ol>
<b>NOTE:</b> The Kerberos Service Name is not required for the SCRAM-SHA-256 or SCRAM-SHA-512 SASL mechanisms. However, the processor will warn that this property must be set to a non-empty string; any placeholder value, such as "null", may be used.
</p>
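<p>
On the broker side, SCRAM credentials for the user referenced above must already exist. As a sketch only (the exact command and options depend on the Kafka version and installation), such credentials can be created with the kafka-configs tool, for example:
<pre>
bin/kafka-configs.sh --zookeeper zookeeper.host:2181 --alter \
  --add-config 'SCRAM-SHA-512=[password=nifi-password]' \
  --entity-type users --entity-name nifi
</pre>
The mechanism created here (SCRAM-SHA-512 in this sketch) must match the value assigned to the 'sasl.mechanism' property in the processor.
</p>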
<h3>SASL_SSL</h3>
<p>
This option uses SASL with an SSL/TLS transport layer to authenticate to the broker. In order to use this
option the broker must be configured with a listener of the form:
<pre>
SASL_SSL://host.name:port
</pre>
</p>
<p>
See the SASL_PLAINTEXT section for a description of how to provide the proper JAAS configuration
depending on the SASL mechanism (GSSAPI, PLAIN, or SCRAM).
</p>
<p>
See the SSL section for a description of how to configure the SSL Context Service based on the
ssl.client.auth property.
</p>
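<p>
As a sketch, a SASL_SSL configuration that uses GSSAPI might combine the settings described in those two sections, for example:
</p>
<table border="thin">
<tr>
<th>Processor Property</th>
<th>Configured Value</th>
</tr>
<tr>
<td>Security Protocol</td>
<td>SASL_SSL</td>
</tr>
<tr>
<td>Kerberos Service Name</td>
<td>kafka</td>
</tr>
<tr>
<td>SSL Context Service</td>
<td>an SSL Context Service configured with a truststore (and a keystore, if the broker requires client authentication)</td>
</tr>
</table>
<p>
along with a JAAS configuration (or the Kerberos Principal and Kerberos Keytab properties) as described in the SASL_PLAINTEXT - GSSAPI section.
</p>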
<h2>Publish Strategy</h2>
<div>
<p>This processor includes optional properties that control how a Kafka Record's key and headers are determined:</p>
<ul>
<li>'Publish Strategy'</li>
<li>'Record Key Writer'</li>
</ul>
<p>'Publish Strategy' controls the mode used to convert the FlowFile record into a Kafka record.</p>
<ul>
<li>'Use Content as Record Value' (the default) - the content of the FlowFile Record becomes the content of the Kafka record. The Kafka record's key is determined by the 'Message Key
Field' property, and the Kafka record's headers are determined by the 'Attributes to Send as Headers (Regex)' property.</li>
<li>'Use Wrapper' - the content of the FlowFile record is a wrapper consisting of the Kafka record's key, value, headers, and metadata (topic and partition).</li>
</ul>
<p>
If Publish Strategy is set to 'Use Wrapper', two additional processor configuration properties are
made available: 'Record Key Writer' and 'Record Metadata Strategy'.
</p>
<p>
The 'Record Key Writer' property determines the Record Writer that should be used to serialize the Kafka record's key.
This may be used to emit the key as JSON, Avro, XML, or some other data format. If this property is not set, and the NiFi Record indicates that the key itself
is a Record, the FlowFile will be routed to the 'failure' relationship. If this property is not set and the NiFi Record's key is a Byte Array or a String, the
Kafka record's key will still be set accordingly (Strings are encoded as UTF-8).
</p>
<p>
The 'Record Metadata Strategy' specifies whether the Kafka Topic and partition should come from the configured 'Topic Name' property and 'Partition' / 'Partitioner class' properties,
or if they should come from the Record's optional <code>metadata</code> field. If the value is set to 'Metadata From Record', the incoming FlowFile record is expected to have a field
named 'metadata'. That field is expected to be a Record with a 'topic' and a 'partition' field. If these fields are missing or invalid, the processor's 'Topic Name' and 'Partition' /
'Partitioner class' properties will still be used.
</p>
<p>
Using the <code>metadata</code> field to convey the topic and partition has two advantages. Firstly, it pairs well with the ConsumeKafkaRecord_* processor, which produces
this same schema. This means that if data is consumed from one topic and pushed to another topic (or Kafka cluster), the data can be easily pinned to the same partition and topic name.
If the data should be pushed to a different topic, it can be easily updated using an UpdateRecord processor, for instance.
</p>
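<p>
As a sketch of that approach, an UpdateRecord processor placed upstream of this processor could be configured with a single user-defined property whose name is a RecordPath and whose value is the new topic name (with the Replacement Value Strategy left at 'Literal Value'); the value below is a placeholder:
</p>
<table border="thin">
<tr>
<th>Processor Property</th>
<th>Configured Value</th>
</tr>
<tr>
<td>/metadata/topic</td>
<td>my-other-topic</td>
</tr>
</table>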
<p>
Additionally, because a single FlowFile can be sent as a single Kafka transaction, this allows sending records to multiple Kafka topics in a single transaction.
</p>
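<p>
For example, assuming a JsonTreeReader is configured as the Record Reader, transactions are enabled in the processor, and the 'Record Metadata Strategy' is set to 'Metadata From Record', a single FlowFile containing the two wrapper records below (topic names and values are placeholders) would result in one record being published to topic1 and one to topic2 within the same Kafka transaction:
<pre>
<code>[
  {
    "value": { "id": 1 },
    "metadata": { "topic": "topic1" }
  },
  {
    "value": { "id": 2 },
    "metadata": { "topic": "topic2" }
  }
]</code>
</pre>
</p>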
<h3>Examples</h3>
<p>
The below examples illustrate what will be sent to Kafka, given different configurations and FlowFile contents. These examples
all assume that JsonRecordSetWriter and JsonTreeReader will be used for the Record Readers and Writers.
</p>
<h4>Publish Strategy = 'Use Content as Record Value'</h4>
<p>
Given the processor configuration:
</p>
<table border="thin">
<tr>
<th>Processor Property</th>
<th>Configured Value</th>
</tr>
<tr>
<td>Message Key Field</td>
<td>account</td>
</tr>
<tr>
<td>Attributes to Send as Headers (Regex)</td>
<td>attribute.*</td>
</tr>
</table>
<p>
And a FlowFile with the content:
</p>
<pre>
<code>{"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}</code>
</pre>
<p>
And attributes:
</p>
<table border="thin">
<tr>
<th>Attribute Name</th>
<th>Attribute Value</th>
</tr>
<tr>
<td>attributeA</td>
<td>valueA</td>
</tr>
<tr>
<td>attributeB</td>
<td>valueB</td>
</tr>
<tr>
<td>otherAttribute</td>
<td>otherValue</td>
</tr>
</table>
<p>
The record that is produced to Kafka will have the following characteristics:
</p>
<table border="thin">
<tr>
<th>Record Key</th>
<td><code>{"name":"Acme","number":"AC1234"}</code></td>
</tr>
<tr>
<th>Record Value</th>
<td><code>{"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}</code></td>
</tr>
<tr>
<th>Record Headers</th>
<td>
<table border="thin">
<tr>
<td>Header Name</td>
<td>Header Value</td>
</tr>
<tr>
<td>attributeA</td>
<td>valueA</td>
</tr>
<tr>
<td>attributeB</td>
<td>valueB</td>
</tr>
</table>
</td>
</tr>
</table>
<h4>Publish Strategy = 'Use Wrapper'</h4>
<p>
When the Publish Strategy is configured to 'Use Wrapper', each FlowFile Record is expected to adhere to a specific schema.
The Record must have three fields: <code>key</code>, <code>value</code>, and <code>headers</code>. There is a fourth, optional field named <code>metadata</code>.
The <code>key</code> may be a String, a byte array, or a Record. The <code>value</code> can be any Record. The <code>headers</code>
is a Map where the values are Strings. The <code>metadata</code> field is a Record that has two fields of interest: <code>topic</code> and <code>partition</code>. If these
fields are specified, they will take precedence over the configured 'Topic Name' and 'Partition' and 'Partitioner class' processor properties.
</p>
<h5>Example 1 - Key as String</h5>
<p>
Given a FlowFile with the content:
</p>
<pre>
<code>{
"key": "Acme Holdings",
"value": {
"address": "1234 First Street",
"zip": "12345",
"account": {
"name": "Acme",
"number":"AC1234"
}
},
"headers": {
"accountType": "enterprise",
"test": "true"
}
}</code>
</pre>
<p>
The record that is produced to Kafka will have the following characteristics:
</p>
<table border="thin">
<tr>
<th>Record Key</th>
<td><code>Acme Holdings</code></td>
</tr>
<tr>
<th>Record Value</th>
<td><code>{"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}</code></td>
</tr>
<tr>
<th>Record Headers</th>
<td>
<table border="thin">
<tr>
<td>Header Name</td>
<td>Header Value</td>
</tr>
<tr>
<td>accountType</td>
<td>enterprise</td>
</tr>
<tr>
<td>test</td>
<td>true</td>
</tr>
</table>
</td>
</tr>
</table>
<p>
Note that in this case, the headers and key come directly from the Record, not from FlowFile attributes.
If there is a desire to include some FlowFile attributes in the headers, this should be accomplished by using a Processor
upstream in order to inject those values into the <code>headers</code> field. For example, an UpdateRecord processor could be used
to easily add new fields to the <code>headers</code> Map.
</p>
<h5>Example 2 - Key as Record</h5>
<p>
Additionally, we may choose to use a more complex value for the record key. The key itself may be a record. This is sometimes used to write the record key
either as JSON or as Avro. In this example, we assume that the 'Record Key Writer' property is set to a JsonRecordSetWriter.
</p>
<p>
Given a FlowFile with the content:
</p>
<pre>
<code>{
"key": {
"accountName": "Acme Holdings",
"accountHolder": "John Doe",
"accountId": "280182830-A009"
},
"value": {
"address": "1234 First Street",
"zip": "12345",
"account": {
"name": "Acme",
"number":"AC1234"
}
}
}</code>
</pre>
<p>
The record that is produced to Kafka will have the following characteristics:
</p>
<table border="thin">
<tr>
<th>Record Key</th>
<td><code>{"accountName":"Acme Holdings","accountHolder":"John Doe","accountId":"280182830-A009"}</code></td>
</tr>
<tr>
<th>Record Value</th>
<td><code>{"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}</code></td>
</tr>
<tr>
<th>Record Headers</th>
<td></td>
</tr>
</table>
<p>
Note here that the Record Key is JSON, as the 'Record Key Writer' property is configured to write JSON. It could just as easily be Avro.
</p>
<p>
Also note that if the 'Record Key Writer' had not been set, the FlowFile would have been routed to the 'failure' relationship because the key is a Record.
</p>
<p>
Finally, note here that the <code>headers</code> field is missing. This is acceptable and no headers will be added to the Kafka record.
</p>
<h5>Example 3 - Key as Byte Array</h5>
<p>
We can also have a Record whose <code>key</code> field is an array of bytes. In this case, the 'Record Key Writer' property is not used.
</p>
<p>
Given a FlowFile with the content:
</p>
<pre>
<code>{
"key": [65, 27, 10, 20, 11, 57, 88, 19, 65],
"value": {
"address": "1234 First Street",
"zip": "12345",
"account": {
"name": "Acme",
"number":"AC1234"
}
},
"otherField": {
"a": "b"
}
}</code>
</pre>
<p>
The record that is produced to Kafka will have the following characteristics:
</p>
<table border="thin">
<tr>
<th>Record Key</th>
<td><code>0x411b0a140b39581341</code></td>
</tr>
<tr>
<th>Record Value</th>
<td><code>{"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}</code></td>
</tr>
<tr>
<th>Record Headers</th>
<td></td>
</tr>
</table>
<p>
In this case, the byte array that is specified for the key is provided to the Kafka Record as a byte array without changes (in the table, it is simply represented as Hex).
</p>
<p>
Finally, note here that the <code>headers</code> field is missing and an extraneous field, <code>otherField</code>, is present.
This is acceptable and no headers will be added to the Kafka record. The <code>otherField</code> is simply ignored.
</p>
<h5>Example 4 - No Key</h5>
<p>
We can also have a Record whose <code>key</code> field is null or missing. In this case, the 'Record Key Writer' property is not used.
</p>
<p>
Given a FlowFile with the content:
</p>
<pre>
<code>{
"value": {
"address": "1234 First Street",
"zip": "12345",
"account": {
"name": "Acme",
"number":"AC1234"
}
},
"headers": {
"a": "b",
"c": {
"d": "e"
}
}
}</code>
</pre>
<p>
The record that is produced to Kafka will have the following characteristics:
</p>
<table border="thin">
<tr>
<th>Record Key</th>
<td></td>
</tr>
<tr>
<th>Record Value</th>
<td><code>{"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}</code></td>
</tr>
<tr>
<th>Record Headers</th>
<td>
<table border="thin">
<tr>
<td>Header Name</td>
<td>Header Value</td>
</tr>
<tr>
<td>a</td>
<td>b</td>
</tr>
<tr>
<td>c</td>
<td>MapRecord[{d=e}]</td>
</tr>
</table>
</td>
</tr>
</table>
<p>
In this case, the key is not present, so the Kafka record that is produced has no key associated with it.
</p>
<p>
Note also that the <code>headers</code> field has the expected value for the <code>a</code> header but the <code>c</code>
header has an expected value of <code>MapRecord[{d=e}]</code>. This is because the <code>headers</code> field is expected always to be a Map
with String values. By providing a Record for the <code>c</code> element, we have violated the contract. NiFi attempts to compensate for this
by creating a String representation of the Record, even if it is unlikely to be the representation that the user expects.
</p>
<h5>Example 5 - Topic provided in Record</h5>
<p>
If the Metadata field is provided in the FlowFile's Record, it will be used to determine the Topic and the Partition that the Records are written to.
</p>
<p>
Given a FlowFile with the content:
</p>
<pre>
<code>{
"value": {
"address": "1234 First Street",
"zip": "12345",
"account": {
"name": "Acme",
"number":"AC1234"
}
},
"headers": {
"a": "b"
},
"metadata": {
"topic": "topic1"
}
}</code>
</pre>
<p>
And considering that the processor properties are configured as:
</p>
<table border="thin">
<tr>
<th>Property Name</th>
<th>Property Value</th>
</tr>
<tr>
<td>Topic Name</td>
<td>My Topic</td>
</tr>
<tr>
<td>Partition</td>
<td>2</td>
</tr>
<tr>
<td>Record Metadata Strategy</td>
<td>Metadata From Record</td>
</tr>
</table>
<p>
The record that is produced to Kafka will have the following characteristics:
</p>
<table border="thin">
<tr>
<th>Kafka Topic</th>
<td>topic1</td>
</tr>
<tr>
<th>Topic Partition</th>
<td>2</td>
</tr>
<tr>
<th>Record Key</th>
<td></td>
</tr>
<tr>
<th>Record Value</th>
<td><code>{"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}</code></td>
</tr>
<tr>
<th>Record Headers</th>
<td>
<table border="thin">
<tr>
<td>Header Name</td>
<td>Header Value</td>
</tr>
<tr>
<td>a</td>
<td>b</td>
</tr>
</table>
</td>
</tr>
</table>
<p>
Note that the topic name comes directly from the FlowFile record, and the configured topic name ("My Topic") is ignored. However, if either the "metadata" field or its "topic" sub-field
were missing, the configured topic name ("My Topic") would be used.
</p>
<h5>Example 6 - Partition provided in Record</h5>
<p>
Given a FlowFile with the content:
</p>
<pre>
<code>{
"value": {
"address": "1234 First Street",
"zip": "12345",
"account": {
"name": "Acme",
"number":"AC1234"
}
},
"headers": {
"a": "b"
},
"metadata": {
"partition": 6
}
}</code>
</pre>
<p>
And considering that the processor properties are configured as:
</p>
<table border="thin">
<tr>
<th>Property Name</th>
<th>Property Value</th>
</tr>
<tr>
<td>Topic Name</td>
<td>My Topic</td>
</tr>
<tr>
<td>Partition</td>
<td>2</td>
</tr>
<tr>
<td>Record Metadata Strategy</td>
<td>Metadata From Record</td>
</tr>
</table>
<p>
The record that is produced to Kafka will have the following characteristics:
</p>
<table border="thin">
<tr>
<th>Kafka Topic</th>
<td>My Topic</td>
</tr>
<tr>
<th>Topic Partition</th>
<td>6</td>
</tr>
<tr>
<th>Record Key</th>
<td></td>
</tr>
<tr>
<th>Record Value</th>
<td><code>{"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}</code></td>
</tr>
<tr>
<th>Record Headers</th>
<td>
<table border="thin">
<tr>
<td>Header Name</td>
<td>Header Value</td>
</tr>
<tr>
<td>a</td>
<td>b</td>
</tr>
</table>
</td>
</tr>
</table>
<h5>Example 7 - Topic and Partition provided in Record</h5>
<p>
If the Metadata field is provided in the FlowFile's Record, it will be used to determine the Topic and the Partition that the Records are written to.
</p>
<p>
Given a FlowFile with the content:
</p>
<pre>
<code>{
"value": {
"address": "1234 First Street",
"zip": "12345",
"account": {
"name": "Acme",
"number":"AC1234"
}
},
"headers": {
"a": "b"
},
"metadata": {
"topic": "topic1",
"partition": 0
}
}</code>
</pre>
<p>
And considering that the processor properties are configured as:
</p>
<table border="thin">
<tr>
<th>Property Name</th>
<th>Property Value</th>
</tr>
<tr>
<td>Topic Name</td>
<td>My Topic</td>
</tr>
<tr>
<td>Partition</td>
<td>2</td>
</tr>
<tr>
<td>Record Metadata Strategy</td>
<td>Metadata From Record</td>
</tr>
</table>
<p>
The record that is produced to Kafka will have the following characteristics:
</p>
<table border="thin">
<tr>
<th>Kafka Topic</th>
<td>topic1</td>
</tr>
<tr>
<th>Topic Partition</th>
<td>0</td>
</tr>
<tr>
<th>Record Key</th>
<td></td>
</tr>
<tr>
<th>Record Value</th>
<td><code>{"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}</code></td>
</tr>
<tr>
<th>Record Headers</th>
<td>
<table border="thin">
<tr>
<td>Header Name</td>
<td>Header Value</td>
</tr>
<tr>
<td>a</td>
<td>b</td>
</tr>
</table>
</td>
</tr>
</table>
<p>
In this case, both the topic name and the partition are explicitly defined within the incoming Record, and those will be used.
</p>
<h5>Example 8 - Invalid metadata provided in Record</h5>
<p>
Given a FlowFile with the content:
</p>
<pre>
<code>{
"value": {
"address": "1234 First Street",
"zip": "12345",
"account": {
"name": "Acme",
"number":"AC1234"
}
},
"headers": {
"a": "b"
},
"metadata": "hello"
}</code>
</pre>
<p>
And considering that the processor properties are configured as:
</p>
<table border="thin">
<tr>
<th>Property Name</th>
<th>Property Value</th>
</tr>
<tr>
<td>Topic Name</td>
<td>My Topic</td>
</tr>
<tr>
<td>Partition</td>
<td>2</td>
</tr>
<tr>
<td>Record Metadata Strategy</td>
<td>Metadata From Record</td>
</tr>
</table>
<p>
The record that is produced to Kafka will have the following characteristics:
</p>
<table border="thin">
<tr>
<th>Kafka Topic</th>
<td>My Topic</td>
</tr>
<tr>
<th>Topic Partition</th>
<td>2</td>
</tr>
<tr>
<th>Record Key</th>
<td></td>
</tr>
<tr>
<th>Record Value</th>
<td><code>{"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}</code></td>
</tr>
<tr>
<th>Record Headers</th>
<td>
<table border="thin">
<tr>
<td>Header Name</td>
<td>Header Value</td>
</tr>
<tr>
<td>a</td>
<td>b</td>
</tr>
</table>
</td>
</tr>
</table>
<p>
In this case, the "metadata" field in the Record is ignored because it is not itself a Record.
</p>
<h5>Example 9 - Use Configured Values for Metadata</h5>
<p>
Given a FlowFile with the content:
</p>
<pre>
<code>{
"value": {
"address": "1234 First Street",
"zip": "12345",
"account": {
"name": "Acme",
"number":"AC1234"
}
},
"headers": {
"a": "b"
},
"metadata": {
"topic": "topic1",
"partition": 6
}
}</code>
</pre>
<p>
And considering that the processor properties are configured as:
</p>
<table border="thin">
<tr>
<th>Property Name</th>
<th>Property Value</th>
</tr>
<tr>
<td>Topic Name</td>
<td>My Topic</td>
</tr>
<tr>
<td>Partition</td>
<td>2</td>
</tr>
<tr>
<td>Record Metadata Strategy</td>
<td>Use Configured Values</td>
</tr>
</table>
<p>
The record that is produced to Kafka will have the following characteristics:
</p>
<table border="thin">
<tr>
<th>Kafka Topic</th>
<td>My Topic</td>
</tr>
<tr>
<th>Topic Partition</th>
<td>2</td>
</tr>
<tr>
<th>Record Key</th>
<td></td>
</tr>
<tr>
<th>Record Value</th>
<td><code>{"address":"1234 First Street","zip":"12345","account":{"name":"Acme","number":"AC1234"}}</code></td>
</tr>
<tr>
<th>Record Headers</th>
<td>
<table border="thin">
<tr>
<td>Header Name</td>
<td>Header Value</td>
</tr>
<tr>
<td>a</td>
<td>b</td>
</tr>
</table>
</td>
</tr>
</table>
<p>
In this case, the "metadata" field specifies both the topic and the partition. However, it is ignored in favor of the processor properties 'Topic' and 'Partition' because
the property 'Record Metadata Strategy' is set to 'Use Configured Values'.
</p>
</div>
</body>
</html>