| |
| <!DOCTYPE html> |
| <!--[if lt IE 7]> <html class="no-js lt-ie9 lt-ie8 lt-ie7"> <![endif]--> |
| <!--[if IE 7]> <html class="no-js lt-ie9 lt-ie8"> <![endif]--> |
| <!--[if IE 8]> <html class="no-js lt-ie9"> <![endif]--> |
| <!--[if gt IE 8]><!--> <html class="no-js"> <!--<![endif]--> |
| <head> |
| <meta charset="utf-8"> |
| <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> |
| <title>Structured Streaming + Kafka Integration Guide (Kafka broker version 0.10.0 or higher) - Spark 2.0.2 Documentation</title> |
| |
| |
| |
| |
| <link rel="stylesheet" href="css/bootstrap.min.css"> |
| <style> |
| body { |
| padding-top: 60px; |
| padding-bottom: 40px; |
| } |
| </style> |
| <meta name="viewport" content="width=device-width"> |
| <link rel="stylesheet" href="css/bootstrap-responsive.min.css"> |
| <link rel="stylesheet" href="css/main.css"> |
| |
| <script src="js/vendor/modernizr-2.6.1-respond-1.1.0.min.js"></script> |
| |
| <link rel="stylesheet" href="css/pygments-default.css"> |
| |
| |
| |
| |
| </head> |
| <body> |
| |
| <div class="container-wrapper"> |
| |
| |
| <div class="content" id="content"> |
| |
| <h1 class="title">Structured Streaming + Kafka Integration Guide (Kafka broker version 0.10.0 or higher)</h1> |
| |
| |
<p>Structured Streaming integration for Kafka 0.10 to read data from Kafka.</p>
| |
| <h3 id="linking">Linking</h3> |
| <p>For Scala/Java applications using SBT/Maven project definitions, link your application with the following artifact:</p> |
| |
| <pre><code>groupId = org.apache.spark |
| artifactId = spark-sql-kafka-0-10_2.11 |
| version = 2.0.2 |
| </code></pre> |
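<p>With SBT, for example, this corresponds to a single dependency line (a sketch; it assumes
<code>scalaVersion</code> is set to 2.11 so that <code>%%</code> resolves to the <code>_2.11</code> artifact):</p>

<pre><code>// build.sbt -- sketch; assumes scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.0.2"
</code></pre>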
| |
<p>For Python applications, you need to add this library and its dependencies when deploying your
application. See the <a href="#deploying">Deploying</a> subsection below.</p>
| |
| <h3 id="creating-a-kafka-source-stream">Creating a Kafka Source Stream</h3> |
| |
| <div class="codetabs"> |
| <div data-lang="scala"> |
| |
| <pre><code>// Subscribe to 1 topic |
| val ds1 = spark |
| .readStream |
| .format("kafka") |
| .option("kafka.bootstrap.servers", "host1:port1,host2:port2") |
| .option("subscribe", "topic1") |
| .load() |
| ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)") |
| .as[(String, String)] |
| |
| // Subscribe to multiple topics |
| val ds2 = spark |
| .readStream |
| .format("kafka") |
| .option("kafka.bootstrap.servers", "host1:port1,host2:port2") |
| .option("subscribe", "topic1,topic2") |
| .load() |
| ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)") |
| .as[(String, String)] |
| |
| // Subscribe to a pattern |
| val ds3 = spark |
| .readStream |
| .format("kafka") |
| .option("kafka.bootstrap.servers", "host1:port1,host2:port2") |
| .option("subscribePattern", "topic.*") |
| .load() |
| ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)") |
| .as[(String, String)] |
| </code></pre> |
| |
| </div> |
| <div data-lang="java"> |
| |
<pre><code>// Subscribe to 1 topic
Dataset<Row> ds1 = spark
  .readStream()
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribe", "topic1")
  .load();
ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

// Subscribe to multiple topics
Dataset<Row> ds2 = spark
  .readStream()
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribe", "topic1,topic2")
  .load();
ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

// Subscribe to a pattern
Dataset<Row> ds3 = spark
  .readStream()
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribePattern", "topic.*")
  .load();
ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");
</code></pre>
| |
| </div> |
| <div data-lang="python"> |
| |
<pre><code># Subscribe to 1 topic
ds1 = spark \
  .readStream \
  .format("kafka") \
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
  .option("subscribe", "topic1") \
  .load()
ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

# Subscribe to multiple topics
ds2 = spark \
  .readStream \
  .format("kafka") \
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
  .option("subscribe", "topic1,topic2") \
  .load()
ds2.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")

# Subscribe to a pattern
ds3 = spark \
  .readStream \
  .format("kafka") \
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2") \
  .option("subscribePattern", "topic.*") \
  .load()
ds3.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
</code></pre>
| |
| </div> |
| </div> |
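<p>These snippets only define a streaming DataFrame; as with any Structured Streaming query, nothing
is consumed from Kafka until a sink is attached and <code>start()</code> is called. A minimal sketch
(Scala, using the built-in console sink for debugging):</p>

<pre><code>// Print the decoded records to the console; the query runs until stopped
val query = ds1.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
  .writeStream
  .format("console")
  .start()

query.awaitTermination()
</code></pre>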
| |
| <p>Each row in the source has the following schema:</p> |
| <table class="table"> |
| <tr><th>Column</th><th>Type</th></tr> |
| <tr> |
| <td>key</td> |
| <td>binary</td> |
| </tr> |
| <tr> |
| <td>value</td> |
| <td>binary</td> |
| </tr> |
| <tr> |
| <td>topic</td> |
| <td>string</td> |
| </tr> |
| <tr> |
| <td>partition</td> |
| <td>int</td> |
| </tr> |
| <tr> |
| <td>offset</td> |
| <td>long</td> |
| </tr> |
| <tr> |
| <td>timestamp</td> |
| <td>long</td> |
| </tr> |
| <tr> |
| <td>timestampType</td> |
| <td>int</td> |
| </tr> |
| </table> |
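<p>For instance, inspecting the untyped result confirms this shape before any casts are applied (a
sketch against the <code>ds1</code> stream above; the commented lines show the expected output, with
nullability flags omitted):</p>

<pre><code>// The Kafka source always produces this fixed seven-column schema
ds1.printSchema()
// root
//  |-- key: binary
//  |-- value: binary
//  |-- topic: string
//  |-- partition: integer
//  |-- offset: long
//  |-- timestamp: long
//  |-- timestampType: integer
</code></pre>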
| |
| <p>The following options must be set for the Kafka source.</p> |
| |
| <table class="table"> |
<tr><th>Option</th><th>Value</th><th>Meaning</th></tr>
| <tr> |
| <td>assign</td> |
| <td>json string {"topicA":[0,1],"topicB":[2,4]}</td> |
<td>Specific TopicPartitions to consume.
Only one of "assign", "subscribe" or "subscribePattern"
options can be specified for the Kafka source.</td>
| </tr> |
| <tr> |
| <td>subscribe</td> |
| <td>A comma-separated list of topics</td> |
<td>The topic list to subscribe to.
Only one of "assign", "subscribe" or "subscribePattern"
options can be specified for the Kafka source.</td>
| </tr> |
| <tr> |
| <td>subscribePattern</td> |
| <td>Java regex string</td> |
<td>The pattern used to subscribe to topic(s).
Only one of "assign", "subscribe" or "subscribePattern"
options can be specified for the Kafka source.</td>
| </tr> |
| <tr> |
| <td>kafka.bootstrap.servers</td> |
| <td>A comma-separated list of host:port</td> |
| <td>The Kafka "bootstrap.servers" configuration.</td> |
| </tr> |
| </table> |
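<p>For example, the <code>assign</code> option, which none of the snippets above use, takes a JSON
string mapping each topic to a list of partitions (a sketch; the topic names and partition numbers
are illustrative):</p>

<pre><code>// Consume only partitions 0 and 1 of topicA and partition 2 of topicB
val ds = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("assign", """{"topicA":[0,1],"topicB":[2]}""")
  .load()
</code></pre>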
| |
| <p>The following configurations are optional:</p> |
| |
| <table class="table"> |
<tr><th>Option</th><th>Value</th><th>Default</th><th>Meaning</th></tr>
| <tr> |
| <td>startingOffsets</td> |
| <td>earliest, latest, or json string |
| {"topicA":{"0":23,"1":-1},"topicB":{"0":-2}} |
| </td> |
| <td>latest</td> |
<td>The start point when a query is started: either "earliest", which starts from the earliest offsets,
"latest", which starts from just the latest offsets, or a json string specifying a starting offset for
each TopicPartition. In the json, an offset of -2 can be used to refer to earliest, -1 to latest.
Note: this only applies when a new streaming query is started; resuming will always pick
up from where the query left off. Partitions newly discovered during a query will start at
earliest.</td>
| </tr> |
| <tr> |
| <td>failOnDataLoss</td> |
| <td>true or false</td> |
| <td>true</td> |
<td>Whether to fail the query when it's possible that data has been lost (e.g., topics are deleted, or
offsets are out of range). This may be a false alarm; you can disable it when it doesn't work
as expected.</td>
| </tr> |
| <tr> |
| <td>kafkaConsumer.pollTimeoutMs</td> |
| <td>long</td> |
| <td>512</td> |
| <td>The timeout in milliseconds to poll data from Kafka in executors.</td> |
| </tr> |
| <tr> |
| <td>fetchOffset.numRetries</td> |
| <td>int</td> |
| <td>3</td> |
<td>Number of times to retry before giving up fetching the latest Kafka offsets.</td>
| </tr> |
| <tr> |
| <td>fetchOffset.retryIntervalMs</td> |
| <td>long</td> |
| <td>10</td> |
<td>Milliseconds to wait before retrying to fetch Kafka offsets.</td>
| </tr> |
| <tr> |
| <td>maxOffsetsPerTrigger</td> |
| <td>long</td> |
| <td>none</td> |
| <td>Rate limit on maximum number of offsets processed per trigger interval. The specified total number of offsets will be proportionally split across topicPartitions of different volume.</td> |
| </tr> |
| </table> |
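<p>As an example, <code>startingOffsets</code> accepts the JSON form with per-partition offsets and
the -2/-1 sentinels mixed freely (a sketch; the offsets are illustrative):</p>

<pre><code>// topicA: partition 0 starts at offset 23, partition 1 at latest (-1);
// topicB: partition 0 starts at earliest (-2)
val ds = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  .option("subscribe", "topicA,topicB")
  .option("startingOffsets", """{"topicA":{"0":23,"1":-1},"topicB":{"0":-2}}""")
  .load()
</code></pre>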
| |
<p>Kafka’s own configurations can be set via <code>DataStreamReader.option</code> with the <code>kafka.</code> prefix, e.g.,
<code>stream.option("kafka.bootstrap.servers", "host:port")</code>. For the possible Kafka parameters, see the
<a href="http://kafka.apache.org/documentation.html#newconsumerconfigs">Kafka consumer config docs</a>.</p>
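<p>Any such consumer property is forwarded by prefixing its name with <code>kafka.</code>; for
instance (a sketch; the chosen property and value are illustrative, not a recommendation):</p>

<pre><code>val ds = spark
  .readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "host1:port1,host2:port2")
  // forwarded to the Kafka consumer as max.partition.fetch.bytes
  .option("kafka.max.partition.fetch.bytes", "1048576")
  .option("subscribe", "topic1")
  .load()
</code></pre>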
| |
<p>Note that the following Kafka params cannot be set and the Kafka source will throw an exception:</p>
<ul>
  <li><strong>group.id</strong>: Kafka source will create a unique group id for each query automatically.</li>
  <li><strong>auto.offset.reset</strong>: Set the source option <code>startingOffsets</code> to specify
  where to start instead. Structured Streaming manages which offsets are consumed internally, rather
  than relying on the Kafka consumer to do it. This ensures that no data is missed when new
  topics/partitions are dynamically subscribed. Note that <code>startingOffsets</code> only applies when a new
  streaming query is started; resuming will always pick up from where the query left off.</li>
  <li><strong>key.deserializer</strong>: Keys are always deserialized as byte arrays with ByteArrayDeserializer. Use
  DataFrame operations to explicitly deserialize the keys.</li>
  <li><strong>value.deserializer</strong>: Values are always deserialized as byte arrays with ByteArrayDeserializer.
  Use DataFrame operations to explicitly deserialize the values.</li>
  <li><strong>enable.auto.commit</strong>: Kafka source doesn’t commit any offset.</li>
  <li><strong>interceptor.classes</strong>: Kafka source always reads keys and values as byte arrays. It’s not safe to
  use ConsumerInterceptor as it may break the query.</li>
</ul>
| |
| <h3 id="deploying">Deploying</h3> |
| |
<p>As with any Spark application, <code>spark-submit</code> is used to launch your application. <code>spark-sql-kafka-0-10_2.11</code>
and its dependencies can be added to <code>spark-submit</code> directly using <code>--packages</code>, e.g.,</p>
| |
| <pre><code>./bin/spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.0.2 ... |
| </code></pre> |
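<p>The same <code>--packages</code> flag works for the interactive shells, which is one way to
satisfy the dependency note for Python applications above (a sketch):</p>

<pre><code>./bin/pyspark --packages org.apache.spark:spark-sql-kafka-0-10_2.11:2.0.2
</code></pre>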
| |
| <p>See <a href="submitting-applications.html">Application Submission Guide</a> for more details about submitting |
| applications with external dependencies.</p> |
| |
| |
| </div> |
| |
| <!-- /container --> |
| </div> |
| |
| <script src="js/vendor/jquery-1.8.0.min.js"></script> |
| <script src="js/vendor/bootstrap.min.js"></script> |
| <script src="js/vendor/anchor.min.js"></script> |
| <script src="js/main.js"></script> |
| |
| </body> |
| </html> |