| --- |
| date: "2017-08-05T09:00:00Z" |
| title: Apache Flink 1.3.2 Released |
| aliases: |
| - /news/2017/08/05/release-1.3.2.html |
| --- |
| |
| The Apache Flink community released the second bugfix version of the Apache Flink 1.3 series. |
| |
This release includes more than 60 fixes and minor improvements for Flink 1.3.1. The list below details all fixes and improvements.
| |
We highly recommend that all users upgrade to Flink 1.3.2.
| |
| |
| <div class="alert alert-warning"> |
| Important Notice: |
| |
<p>A user reported a bug in the FlinkKafkaConsumer
(<a href="https://issues.apache.org/jira/browse/FLINK-7143">FLINK-7143</a>) that causes
incorrect partition assignment in large Kafka deployments when broker metadata is inconsistent.
In that case, multiple parallel instances of the FlinkKafkaConsumer may read from the same topic
partition, leading to data duplication. Flink 1.3.2 fixes this bug, but incorrect assignments
made by Flink 1.3.0 and 1.3.1 cannot be repaired automatically by upgrading to Flink 1.3.2 via a
savepoint, because the upgraded version would resume the wrong partition assignment from the
savepoint. If you believe you are affected by this bug (i.e., you see messages from some
partitions duplicated), please refer to the JIRA issue for an upgrade path that works around the
problem.</p>
| |
<p>Before attempting the more elaborate upgrade path, we suggest checking whether you are
actually affected by this bug. We did not manage to reproduce it in various testing clusters,
and according to the reporting user it only appeared in rare cases on their very large setup.
This leads us to believe that most likely only a minority of setups are affected.</p>
| </div> |
| |
| Notable changes: |
| |
| - The default Kafka version for Flink Kafka Consumer 0.10 was bumped from 0.10.0.1 to 0.10.2.1. |
- Some default values for the AWS API call behavior configurations of the Flink Kinesis Consumer
were adapted for better out-of-the-box consumption performance: 1) the `SHARD_GETRECORDS_MAX`
default changed to 10,000, and 2) the `SHARD_GETRECORDS_INTERVAL_MILLIS` default changed to
200 ms. See the sketch after this list for how to override these values.
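
If you prefer to tune these settings explicitly rather than rely on the new defaults, the following minimal sketch shows how the corresponding consumer properties can be overridden when constructing a `FlinkKinesisConsumer`. The stream name, region, credentials, and the chosen values are illustrative placeholders, not part of this release.

```java
import java.util.Properties;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kinesis.FlinkKinesisConsumer;
import org.apache.flink.streaming.connectors.kinesis.config.ConsumerConfigConstants;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KinesisDefaultsExample {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties consumerConfig = new Properties();
        // Region and credentials are placeholders for this sketch.
        consumerConfig.setProperty(ConsumerConfigConstants.AWS_REGION, "us-east-1");
        consumerConfig.setProperty(ConsumerConfigConstants.AWS_ACCESS_KEY_ID, "aws_access_key_id");
        consumerConfig.setProperty(ConsumerConfigConstants.AWS_SECRET_ACCESS_KEY, "aws_secret_access_key");

        // The 1.3.2 defaults are 10,000 records per GetRecords call and 200 ms between calls.
        // If those are too aggressive for your shard limits, set more conservative values:
        consumerConfig.setProperty(ConsumerConfigConstants.SHARD_GETRECORDS_MAX, "5000");
        consumerConfig.setProperty(ConsumerConfigConstants.SHARD_GETRECORDS_INTERVAL_MILLIS, "500");

        DataStream<String> records = env.addSource(
                new FlinkKinesisConsumer<>("example-stream", new SimpleStringSchema(), consumerConfig));

        records.print();
        env.execute("Kinesis consumer defaults example");
    }
}
```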
| |
| Updated Maven dependencies: |
| |
| ```xml |
| <dependency> |
| <groupId>org.apache.flink</groupId> |
| <artifactId>flink-java</artifactId> |
| <version>1.3.2</version> |
| </dependency> |
| <dependency> |
| <groupId>org.apache.flink</groupId> |
| <artifactId>flink-streaming-java_2.10</artifactId> |
| <version>1.3.2</version> |
| </dependency> |
| <dependency> |
| <groupId>org.apache.flink</groupId> |
| <artifactId>flink-clients_2.10</artifactId> |
| <version>1.3.2</version> |
| </dependency> |
| ``` |
| |
| You can find the binaries on the updated [Downloads page](http://flink.apache.org/downloads.html). |
| |
| List of resolved issues: |
| |
<h2>Sub-task</h2>
| <ul> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6665'>FLINK-6665</a>] - Pass a ScheduledExecutorService to the RestartStrategy |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6667'>FLINK-6667</a>] - Pass a callback type to the RestartStrategy, rather than the full ExecutionGraph |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6680'>FLINK-6680</a>] - App & Flink migration guide: updates for the 1.3 release |
| </li> |
| </ul> |
| |
<h2>Bug</h2>
| <ul> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-5488'>FLINK-5488</a>] - yarnClient should be closed in AbstractYarnClusterDescriptor for error conditions |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6376'>FLINK-6376</a>] - when deploy flink cluster on the yarn, it is lack of hdfs delegation token. |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6541'>FLINK-6541</a>] - Jar upload directory not created |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6654'>FLINK-6654</a>] - missing maven dependency on "flink-shaded-hadoop2-uber" in flink-dist |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6655'>FLINK-6655</a>] - Misleading error message when HistoryServer path is empty |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6742'>FLINK-6742</a>] - Improve error message when savepoint migration fails due to task removal |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6774'>FLINK-6774</a>] - build-helper-maven-plugin version not set |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6806'>FLINK-6806</a>] - rocksdb is not listed as state backend in doc |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6843'>FLINK-6843</a>] - ClientConnectionTest fails on travis |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6867'>FLINK-6867</a>] - Elasticsearch 1.x ITCase still instable due to embedded node instability |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6918'>FLINK-6918</a>] - Failing tests: ChainLengthDecreaseTest and ChainLengthIncreaseTest |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6945'>FLINK-6945</a>] - TaskCancelAsyncProducerConsumerITCase.testCancelAsyncProducerAndConsumer instable test case |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6964'>FLINK-6964</a>] - Fix recovery for incremental checkpoints in StandaloneCompletedCheckpointStore |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6965'>FLINK-6965</a>] - Avro is missing snappy dependency |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6987'>FLINK-6987</a>] - TextInputFormatTest fails when run in path containing spaces |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6996'>FLINK-6996</a>] - FlinkKafkaProducer010 doesn't guarantee at-least-once semantic |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7005'>FLINK-7005</a>] - Optimization steps are missing for nested registered tables |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7011'>FLINK-7011</a>] - Instable Kafka testStartFromKafkaCommitOffsets failures on Travis |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7025'>FLINK-7025</a>] - Using NullByteKeySelector for Unbounded ProcTime NonPartitioned Over |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7034'>FLINK-7034</a>] - GraphiteReporter cannot recover from lost connection |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7038'>FLINK-7038</a>] - Several misused "KeyedDataStream" term in docs and Javadocs |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7041'>FLINK-7041</a>] - Deserialize StateBackend from JobCheckpointingSettings with user classloader |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7132'>FLINK-7132</a>] - Fix BulkIteration parallelism |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7133'>FLINK-7133</a>] - Fix Elasticsearch version interference |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7137'>FLINK-7137</a>] - Flink table API defaults top level fields as nullable and all nested fields within CompositeType as non-nullable |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7143'>FLINK-7143</a>] - Partition assignment for Kafka consumer is not stable |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7154'>FLINK-7154</a>] - Missing call to build CsvTableSource example |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7158'>FLINK-7158</a>] - Wrong test jar dependency in flink-clients |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7177'>FLINK-7177</a>] - DataSetAggregateWithNullValuesRule fails creating null literal for non-nullable type |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7178'>FLINK-7178</a>] - Datadog Metric Reporter Jar is Lacking Dependencies |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7180'>FLINK-7180</a>] - CoGroupStream perform checkpoint failed |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7195'>FLINK-7195</a>] - FlinkKafkaConsumer should not respect fetched partitions to filter restored partition states |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7216'>FLINK-7216</a>] - ExecutionGraph can perform concurrent global restarts to scheduling |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7225'>FLINK-7225</a>] - Cutoff exception message in StateDescriptor |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7226'>FLINK-7226</a>] - REST responses contain invalid content-encoding header |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7231'>FLINK-7231</a>] - SlotSharingGroups are not always released in time for new restarts |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7234'>FLINK-7234</a>] - Fix CombineHint documentation |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7241'>FLINK-7241</a>] - Fix YARN high availability documentation |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7255'>FLINK-7255</a>] - ListStateDescriptor example uses wrong constructor |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7258'>FLINK-7258</a>] - IllegalArgumentException in Netty bootstrap with large memory state segment size |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7266'>FLINK-7266</a>] - Don't attempt to delete parent directory on S3 |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7268'>FLINK-7268</a>] - Zookeeper Checkpoint Store interacting with Incremental State Handles can lead to loss of handles |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7281'>FLINK-7281</a>] - Fix various issues in (Maven) release infrastructure |
| </li> |
| </ul> |
| |
<h2>Improvement</h2>
| <ul> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6365'>FLINK-6365</a>] - Adapt default values of the Kinesis connector |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6575'>FLINK-6575</a>] - Disable all tests on Windows that use HDFS |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6682'>FLINK-6682</a>] - Improve error message in case parallelism exceeds maxParallelism |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6789'>FLINK-6789</a>] - Remove duplicated test utility reducer in optimizer |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6874'>FLINK-6874</a>] - Static and transient fields ignored for POJOs |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6898'>FLINK-6898</a>] - Limit size of operator component in metric name |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6937'>FLINK-6937</a>] - Fix link markdown in Production Readiness Checklist doc |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6940'>FLINK-6940</a>] - Clarify the effect of configuring per-job state backend |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-6998'>FLINK-6998</a>] - Kafka connector needs to expose metrics for failed/successful offset commits in the Kafka Consumer callback |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7004'>FLINK-7004</a>] - Switch to Travis Trusty image |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7032'>FLINK-7032</a>] - Intellij is constantly changing language level of sub projects back to 1.6 |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7069'>FLINK-7069</a>] - Catch exceptions for each reporter separately |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7149'>FLINK-7149</a>] - Add checkpoint ID to 'sendValues()' in GenericWriteAheadSink |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7164'>FLINK-7164</a>] - Extend integration tests for (externalised) checkpoints, checkpoint store |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7174'>FLINK-7174</a>] - Bump dependency of Kafka 0.10.x to the latest one |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7211'>FLINK-7211</a>] - Exclude Gelly javadoc jar from release |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7224'>FLINK-7224</a>] - Incorrect Javadoc description in all Kafka consumer versions |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7228'>FLINK-7228</a>] - Harden HistoryServerStaticFileHandlerTest |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7233'>FLINK-7233</a>] - TaskManagerHeapSizeCalculationJavaBashTest failed on Travis |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7287'>FLINK-7287</a>] - test instability in Kafka010ITCase.testCommitOffsetsToKafka |
| </li> |
| <li>[<a href='https://issues.apache.org/jira/browse/FLINK-7290'>FLINK-7290</a>] - Make release scripts modular |
| </li> |
| </ul> |