Rebuild website
diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index d0c54b9..2f8e703 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,357 @@
 <atom:link href="https://flink.apache.org/blog/feed.xml" rel="self" type="application/rss+xml" />
 
 <item>
+<title>Apache Flink 1.11.0 Release Announcement</title>
+<description>&lt;p&gt;The Apache Flink community is proud to announce the release of Flink 1.11.0! More than 200 contributors worked on over 1.3k issues to bring significant improvements to usability as well as new features to Flink users across the whole API stack. Some highlights that we’re particularly excited about are:&lt;/p&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;The core engine is introducing &lt;strong&gt;unaligned checkpoints&lt;/strong&gt;, a major change to Flink’s fault tolerance mechanism that improves checkpointing performance under heavy backpressure.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;A &lt;strong&gt;new Source API&lt;/strong&gt; that simplifies the implementation of (custom) sources by unifying batch and streaming execution, as well as offloading internals such as event-time handling, watermark generation or idleness detection to Flink.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Flink SQL is introducing &lt;strong&gt;Support for Change Data Capture (CDC)&lt;/strong&gt; to easily consume and interpret database changelogs from tools like Debezium. The renewed &lt;strong&gt;FileSystem Connector&lt;/strong&gt; also expands the set of use cases and formats supported in the Table API/SQL, enabling scenarios like streaming data directly from Kafka to Hive.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Multiple performance optimizations to PyFlink, including support for &lt;strong&gt;vectorized User-defined Functions (Pandas UDFs)&lt;/strong&gt;. This improves interoperability with libraries like Pandas and NumPy, making Flink more powerful for data science and ML workloads.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;p&gt;Read on for all major new features and improvements, important changes to be aware of and what to expect moving forward!&lt;/p&gt;
+
+&lt;div class=&quot;page-toc&quot;&gt;
+&lt;ul id=&quot;markdown-toc&quot;&gt;
+  &lt;li&gt;&lt;a href=&quot;#new-features-and-improvements&quot; id=&quot;markdown-toc-new-features-and-improvements&quot;&gt;New Features and Improvements&lt;/a&gt;    &lt;ul&gt;
+      &lt;li&gt;&lt;a href=&quot;#unaligned-checkpoints-beta&quot; id=&quot;markdown-toc-unaligned-checkpoints-beta&quot;&gt;Unaligned Checkpoints (Beta)&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#unified-watermark-generators&quot; id=&quot;markdown-toc-unified-watermark-generators&quot;&gt;Unified Watermark Generators&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#new-data-source-api-beta&quot; id=&quot;markdown-toc-new-data-source-api-beta&quot;&gt;New Data Source API (Beta)&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#application-mode-deployments&quot; id=&quot;markdown-toc-application-mode-deployments&quot;&gt;Application Mode Deployments&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#other-improvements&quot; id=&quot;markdown-toc-other-improvements&quot;&gt;Other Improvements&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#table-apisql-support-for-change-data-capture-cdc&quot; id=&quot;markdown-toc-table-apisql-support-for-change-data-capture-cdc&quot;&gt;Table API/SQL: Support for Change Data Capture (CDC)&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#table-apisql-jdbc-catalog-interface-and-postgres-catalog&quot; id=&quot;markdown-toc-table-apisql-jdbc-catalog-interface-and-postgres-catalog&quot;&gt;Table API/SQL: JDBC Catalog Interface and Postgres Catalog&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#table-apisql-filesystem-connector-with-support-for-avro-orc-and-parquet&quot; id=&quot;markdown-toc-table-apisql-filesystem-connector-with-support-for-avro-orc-and-parquet&quot;&gt;Table API/SQL: FileSystem Connector with Support for Avro, ORC and Parquet&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#table-apisql-support-for-python-udfs&quot; id=&quot;markdown-toc-table-apisql-support-for-python-udfs&quot;&gt;Table API/SQL: Support for Python UDFs&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#other-improvements-to-the-table-apisql&quot; id=&quot;markdown-toc-other-improvements-to-the-table-apisql&quot;&gt;Other Improvements to the Table API/SQL&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#pyflink-support-for-pandas-udfs&quot; id=&quot;markdown-toc-pyflink-support-for-pandas-udfs&quot;&gt;PyFlink: Support for Pandas UDFs&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#other-improvements-to-pyflink&quot; id=&quot;markdown-toc-other-improvements-to-pyflink&quot;&gt;Other Improvements to PyFlink&lt;/a&gt;&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#important-changes&quot; id=&quot;markdown-toc-important-changes&quot;&gt;Important Changes&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#release-notes&quot; id=&quot;markdown-toc-release-notes&quot;&gt;Release Notes&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#list-of-contributors&quot; id=&quot;markdown-toc-list-of-contributors&quot;&gt;List of Contributors&lt;/a&gt;&lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;/div&gt;
+
+&lt;p&gt;The binary distribution and source artifacts are now available on the updated &lt;a href=&quot;/downloads.html&quot;&gt;Downloads page&lt;/a&gt; of the Flink website, and the most recent distribution of PyFlink is available on &lt;a href=&quot;https://pypi.org/project/apache-flink/&quot;&gt;PyPI&lt;/a&gt;. Please review the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/release-notes/flink-1.11.html&quot;&gt;release notes&lt;/a&gt; carefully, and check the complete &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12346364&amp;amp;styleName=Html&amp;amp;projectId=12315522&quot;&gt;release changelog&lt;/a&gt; and &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/&quot;&gt;updated documentation&lt;/a&gt; for more details.&lt;/p&gt;
+
+&lt;p&gt;We encourage you to download the release and share your feedback with the community through the &lt;a href=&quot;https://flink.apache.org/community.html#mailing-lists&quot;&gt;Flink mailing lists&lt;/a&gt; or &lt;a href=&quot;https://issues.apache.org/jira/projects/FLINK/summary&quot;&gt;JIRA&lt;/a&gt;.&lt;/p&gt;
+
+&lt;h2 id=&quot;new-features-and-improvements&quot;&gt;New Features and Improvements&lt;/h2&gt;
+
+&lt;h3 id=&quot;unaligned-checkpoints-beta&quot;&gt;Unaligned Checkpoints (Beta)&lt;/h3&gt;
+
+&lt;p&gt;Triggering a checkpoint in Flink will cause a &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/internals/stream_checkpointing.html#barriers&quot;&gt;checkpoint barrier&lt;/a&gt; to flow from the sources of your topology all the way towards the sinks. For operators that receive more than one input stream, the barriers flowing through each channel need to be aligned before the operator can snapshot its state and forward the checkpoint barrier — typically, this alignment will take just a few milliseconds to complete, but it can become a bottleneck in backpressured pipelines as:&lt;/p&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;Checkpoint barriers will flow much slower through backpressured channels, effectively blocking the remaining channels and their upstream operators during checkpointing;&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Slow checkpoint barrier propagation leads to longer checkpointing times and can, in the worst case, result in little to no progress in the application.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;p&gt;To improve the performance of checkpointing under backpressure scenarios, the community is rolling out the first iteration of unaligned checkpoints (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-76%3A+Unaligned+Checkpoints&quot;&gt;FLIP-76&lt;/a&gt;) with Flink 1.11. Compared to the original checkpointing mechanism (Fig. 1), this approach doesn’t wait for barrier alignment across input channels, instead allowing barriers to overtake in-flight records (i.e., data stored in buffers) and forwarding them downstream before the synchronous part of the checkpoint takes place (Fig. 2).&lt;/p&gt;
+
+&lt;div style=&quot;line-height:60%;&quot;&gt;
+    &lt;br /&gt;
+&lt;/div&gt;
+
+&lt;div class=&quot;row&quot;&gt;
+  &lt;div class=&quot;col-lg-6&quot;&gt;
+    &lt;div class=&quot;text-center&quot;&gt;
+      &lt;figure&gt;
+		&lt;img src=&quot;/img/blog/2020-07-06-release-1.11.0/image1.gif&quot; width=&quot;600px&quot; alt=&quot;Aligned Checkpoints&quot; /&gt;
+		&lt;br /&gt;&lt;br /&gt;
+		&lt;figcaption&gt;&lt;i&gt;&lt;b&gt;Fig.1:&lt;/b&gt; Aligned Checkpoints&lt;/i&gt;&lt;/figcaption&gt;
+	  &lt;/figure&gt;
+    &lt;/div&gt;
+  &lt;/div&gt;
+  &lt;div class=&quot;col-lg-6&quot;&gt;
+    &lt;div class=&quot;text-center&quot;&gt;
+      &lt;figure&gt;
+		&lt;img src=&quot;/img/blog/2020-07-06-release-1.11.0/image2.png&quot; width=&quot;600px&quot; alt=&quot;Unaligned Checkpoints&quot; /&gt;
+		&lt;br /&gt;&lt;br /&gt;
+		&lt;figcaption&gt;&lt;i&gt;&lt;b&gt;Fig.2:&lt;/b&gt; Unaligned Checkpoints&lt;/i&gt;&lt;/figcaption&gt;
+	  &lt;/figure&gt;
+    &lt;/div&gt;
+  &lt;/div&gt;
+&lt;/div&gt;
+
+&lt;div style=&quot;line-height:150%;&quot;&gt;
+    &lt;br /&gt;
+&lt;/div&gt;
+
+&lt;p&gt;Because in-flight records have to be persisted as part of the snapshot, unaligned checkpoints will lead to increased checkpoint sizes. On the upside, &lt;strong&gt;checkpointing times are heavily reduced&lt;/strong&gt;, so users will see more progress (even in unstable environments), as more up-to-date checkpoints lighten the recovery process. You can learn more about the current limitations of unaligned checkpoints in the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/state/checkpoints.html#unaligned-checkpoints&quot;&gt;documentation&lt;/a&gt;, and track the improvement work planned for this feature in &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14551&quot;&gt;FLINK-14551&lt;/a&gt;.&lt;/p&gt;
+
+&lt;p&gt;As with any beta feature, we appreciate early feedback that you might want to share with the community after giving unaligned checkpoints a try!&lt;/p&gt;
+
+&lt;p&gt;&lt;span class=&quot;label label-info&quot;&gt;Info&lt;/span&gt; To enable this feature, you need to configure the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/api/java/org/apache/flink/streaming/api/environment/CheckpointConfig.html&quot;&gt;&lt;code&gt;enableUnalignedCheckpoints&lt;/code&gt;&lt;/a&gt; option in your &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/stream/state/checkpointing.html#enabling-and-configuring-checkpointing&quot;&gt;checkpoint config&lt;/a&gt;. Please note that unaligned checkpoints can only be enabled if &lt;code&gt;checkpointingMode&lt;/code&gt; is set to &lt;code&gt;CheckpointingMode.EXACTLY_ONCE&lt;/code&gt;.&lt;/p&gt;
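+
+&lt;p&gt;As a minimal sketch (assuming an existing streaming job and an illustrative checkpoint interval), enabling the feature from the DataStream API could look like this:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import org.apache.flink.streaming.api.CheckpointingMode;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+// Unaligned checkpoints require exactly-once checkpointing mode.
+env.enableCheckpointing(60_000L, CheckpointingMode.EXACTLY_ONCE);
+env.getCheckpointConfig().enableUnalignedCheckpoints();&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;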
+
+&lt;h3 id=&quot;unified-watermark-generators&quot;&gt;Unified Watermark Generators&lt;/h3&gt;
+
+&lt;p&gt;So far, watermark generation (previously also called &lt;em&gt;assignment&lt;/em&gt;) relied on two different interfaces, &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/api/java/org/apache/flink/streaming/api/functions/AssignerWithPunctuatedWatermarks.html&quot;&gt;&lt;code&gt;AssignerWithPunctuatedWatermarks&lt;/code&gt;&lt;/a&gt; and &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/api/java/org/apache/flink/streaming/api/functions/AssignerWithPeriodicWatermarks.html&quot;&gt;&lt;code&gt;AssignerWithPeriodicWatermarks&lt;/code&gt;&lt;/a&gt;, that were closely intertwined with timestamp extraction. This made it difficult to implement long-requested features like idleness detection, and it also led to code duplication and maintenance burden. With &lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-126%3A+Unify+%28and+separate%29+Watermark+Assigners&quot;&gt;FLIP-126&lt;/a&gt;, the legacy watermark assigners are unified into a single interface, the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/api/java/org/apache/flink/api/common/eventtime/WatermarkGenerator.html&quot;&gt;&lt;code&gt;WatermarkGenerator&lt;/code&gt;&lt;/a&gt;, which is also detached from the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/api/java/org/apache/flink/api/common/eventtime/TimestampAssigner.html&quot;&gt;&lt;code&gt;TimestampAssigner&lt;/code&gt;&lt;/a&gt;.&lt;/p&gt;
+
+&lt;p&gt;This gives users more control over watermark emission and simplifies the implementation of new connectors that need to support watermark assignment and timestamp extraction at the source (see &lt;em&gt;&lt;a href=&quot;#new-data-source-api-beta&quot;&gt;New Data Source API&lt;/a&gt;&lt;/em&gt;). Multiple &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/event_timestamps_watermarks.html#introduction-to-watermark-strategies&quot;&gt;strategies for watermarking&lt;/a&gt; are available out of the box as convenience methods in Flink 1.11 (e.g. &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/api/java/org/apache/flink/api/common/eventtime/WatermarkStrategy.html#forBoundedOutOfOrderness-java.time.Duration-&quot;&gt;&lt;code&gt;forBoundedOutOfOrderness&lt;/code&gt;&lt;/a&gt;, &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/api/java/org/apache/flink/api/common/eventtime/WatermarkStrategy.html#forMonotonousTimestamps--&quot;&gt;&lt;code&gt;forMonotonousTimestamps&lt;/code&gt;&lt;/a&gt;), though you can also choose to customize your own.&lt;/p&gt;
+
+&lt;p&gt;&lt;strong&gt;Support for Watermark Idleness Detection&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;The &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/api/java/org/apache/flink/api/common/eventtime/WatermarkStrategy.html#withIdleness-java.time.Duration-&quot;&gt;&lt;code&gt;WatermarkStrategy.withIdleness()&lt;/code&gt;&lt;/a&gt; method allows you to mark a stream as idle if no events arrive within a configured time (i.e. a timeout duration), which in turn allows handling event time skew properly and preventing idle partitions from holding back the event time progress of the entire application. Users can already benefit from &lt;strong&gt;per-partition idleness detection&lt;/strong&gt; in the Kafka connector, which has been adapted to use the new interfaces (&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-17669&quot;&gt;FLINK-17669&lt;/a&gt;).&lt;/p&gt;
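+
+&lt;p&gt;As an illustrative sketch (the &lt;code&gt;MyEvent&lt;/code&gt; type and its timestamp accessor are hypothetical), a strategy that combines bounded out-of-orderness, timestamp extraction and idleness detection could look like this:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import java.time.Duration;
+import org.apache.flink.api.common.eventtime.WatermarkStrategy;
+
+WatermarkStrategy&amp;lt;MyEvent&amp;gt; strategy = WatermarkStrategy
+    // Tolerate events arriving up to 20 seconds out of order.
+    .&amp;lt;MyEvent&amp;gt;forBoundedOutOfOrderness(Duration.ofSeconds(20))
+    // Extract the event timestamp from the record itself.
+    .withTimestampAssigner((event, recordTimestamp) -&amp;gt; event.getEventTime())
+    // Mark the stream as idle after 1 minute without events.
+    .withIdleness(Duration.ofMinutes(1));&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;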
+
+&lt;p&gt;&lt;span class=&quot;label label-info&quot;&gt;Note&lt;/span&gt; &lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-126%3A+Unify+%28and+separate%29+Watermark+Assigners&quot;&gt;FLIP-126&lt;/a&gt; introduces no breaking changes, but we recommend that users give preference to the new &lt;code&gt;WatermarkGenerator&lt;/code&gt; interface moving forward, in preparation for the deprecation of the legacy watermark assigners in future releases.&lt;/p&gt;
+
+&lt;h3 id=&quot;new-data-source-api-beta&quot;&gt;New Data Source API (Beta)&lt;/h3&gt;
+
+&lt;p&gt;Up to this point, writing a production-grade source connector for Flink was a non-trivial task that required users to be somewhat familiar with Flink internals and account for implementation details like event time assignment, watermark generation or idleness detection in their code. Flink 1.11 introduces a new Data Source API (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface&quot;&gt;FLIP-27&lt;/a&gt;) to overcome these limitations, as well as the need to rewrite separate code for batch and streaming execution.&lt;/p&gt;
+
+&lt;center&gt;
+	&lt;figure&gt;
+	&lt;img src=&quot;/img/blog/2020-07-06-release-1.11.0/image3.png&quot; width=&quot;600px&quot; alt=&quot;Data Source API&quot; /&gt;
+	&lt;/figure&gt;
+&lt;/center&gt;
+
+&lt;div style=&quot;line-height:150%;&quot;&gt;
+    &lt;br /&gt;
+&lt;/div&gt;
+
+&lt;p&gt;Separating the work of split discovery from the actual reading of the consumed data (i.e. the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/stream/sources.html#data-source-concepts&quot;&gt;&lt;em&gt;splits&lt;/em&gt;&lt;/a&gt;) into two components, the &lt;code&gt;SplitEnumerator&lt;/code&gt; and the &lt;code&gt;SourceReader&lt;/code&gt;, allows mixing and matching different enumeration strategies and split readers.&lt;/p&gt;
+
+&lt;p&gt;As an example, the existing Kafka connector has multiple strategies for partition discovery that are intermingled with the rest of the code. With the new interfaces in place, it would only need a single reader implementation and there could be several split enumerators for the different partition discovery strategies.&lt;/p&gt;
+
+&lt;p&gt;&lt;strong&gt;Batch and Streaming Unification&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;Source connectors implemented using the Data Source API will be able to work both as a bounded (&lt;em&gt;batch&lt;/em&gt;) and unbounded (&lt;em&gt;streaming&lt;/em&gt;) source. The difference between both cases is minimal: for bounded input, the &lt;code&gt;SplitEnumerator&lt;/code&gt; will generate a fixed set of splits and each split is finite; for unbounded input, either the splits are not finite or the &lt;code&gt;SplitEnumerator&lt;/code&gt; keeps generating new splits.&lt;/p&gt;
+
+&lt;p&gt;&lt;strong&gt;Implicit Watermark and Event Time Handling&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;The &lt;code&gt;TimestampAssigner&lt;/code&gt; and &lt;code&gt;WatermarkGenerator&lt;/code&gt; run transparently as part of the &lt;code&gt;SourceReader&lt;/code&gt; component, so users also don’t have to implement any timestamp extraction or watermark generation code.&lt;/p&gt;
+
+&lt;p&gt;&lt;span class=&quot;label label-info&quot;&gt;Note&lt;/span&gt; The existing source connectors have not yet been reimplemented using the Data Source API — this is planned for upcoming releases. If you’re looking to implement a new source, please refer to the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/stream/sources.html#data-sources&quot;&gt;Data Source documentation&lt;/a&gt; and &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/stream/sources.html#the-data-source-api&quot;&gt;the tips on source development&lt;/a&gt;.&lt;/p&gt;
+
+&lt;h3 id=&quot;application-mode-deployments&quot;&gt;Application Mode Deployments&lt;/h3&gt;
+
+&lt;p&gt;Prior to Flink 1.11, jobs in a Flink application could either be submitted to a long-running &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/concepts/flink-architecture.html#flink-session-cluster&quot;&gt;Flink Session Cluster&lt;/a&gt; (&lt;em&gt;session mode&lt;/em&gt;) or a dedicated &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/concepts/flink-architecture.html#flink-job-cluster&quot;&gt;Flink Job Cluster&lt;/a&gt; (&lt;em&gt;job mode&lt;/em&gt;). For both these modes, the &lt;code&gt;main()&lt;/code&gt; method of user programs runs on the &lt;em&gt;client&lt;/em&gt; side. This presents some challenges: on one hand, if the client is part of a large installation, it can easily become a bottleneck for &lt;code&gt;JobGraph&lt;/code&gt; generation; and on the other, it’s not a good fit for containerized environments like Docker or Kubernetes.&lt;/p&gt;
+
+&lt;p&gt;From this release on, Flink gets an additional deployment mode: &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/deployment/#application-mode&quot;&gt;Application Mode&lt;/a&gt; (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-85+Flink+Application+Mode&quot;&gt;FLIP-85&lt;/a&gt;), where the &lt;code&gt;main()&lt;/code&gt; method runs on the cluster rather than on the client. Job submission becomes a one-step process: you package your application logic and dependencies into an executable job JAR, and the cluster entrypoint (&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/api/java/org/apache/flink/client/deployment/application/ApplicationClusterEntryPoint.html&quot;&gt;&lt;code&gt;ApplicationClusterEntryPoint&lt;/code&gt;&lt;/a&gt;) is responsible for calling the &lt;code&gt;main()&lt;/code&gt; method to extract the &lt;code&gt;JobGraph&lt;/code&gt;.&lt;/p&gt;
+
+&lt;p&gt;In Flink 1.11, the community added initial support for &lt;em&gt;application mode&lt;/em&gt; on Kubernetes (&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10934&quot;&gt;FLINK-10934&lt;/a&gt;).&lt;/p&gt;
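+
+&lt;p&gt;As an example of what a submission could look like (the cluster ID, image name and JAR path are placeholders), deploying an application to a native Kubernetes cluster is a single CLI invocation:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;# Launch a Flink application cluster on Kubernetes; main() runs on the cluster.
+./bin/flink run-application \
+    -t kubernetes-application \
+    -Dkubernetes.cluster-id=my-application \
+    -Dkubernetes.container.image=my-custom-flink-image \
+    local:///opt/flink/usrlib/my-flink-job.jar&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;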
+
+&lt;h3 id=&quot;other-improvements&quot;&gt;Other Improvements&lt;/h3&gt;
+
+&lt;p&gt;&lt;strong&gt;Unified Memory Configuration for JobManagers (&lt;a href=&quot;https://jira.apache.org/jira/browse/FLINK-16614&quot;&gt;FLIP-116&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;Following the work started in Flink 1.10 to improve memory management and configuration, this release introduces a new memory model that aligns the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/memory/mem_setup_master.html&quot;&gt;JobManagers’ configuration options&lt;/a&gt; and terminology with those introduced in &lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-49%3A+Unified+Memory+Configuration+for+TaskExecutors&quot;&gt;FLIP-49&lt;/a&gt; for TaskManagers. This affects all deployment types: standalone, YARN, Mesos and the new active Kubernetes integration.&lt;/p&gt;
+
+&lt;p&gt;&lt;span class=&quot;label label-danger&quot;&gt;Attention&lt;/span&gt; Reusing a previous Flink configuration without any adjustments can result in differently computed memory parameters for the JVM and, as a result, performance changes or even failures. Make sure to check the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-master/ops/memory/mem_migration.html#migrate-job-manager-memory-configuration&quot;&gt;migration guide&lt;/a&gt; if you’re planning to update to the latest version.&lt;/p&gt;
+
+&lt;p&gt;&lt;strong&gt;Improvements to the Flink WebUI (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-75%3A+Flink+Web+UI+Improvement+Proposal&quot;&gt;FLIP-75&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;In Flink 1.11, the community kicked off a series of improvements to the Flink WebUI. The first to roll out are a better TaskManager and JobManager log display (&lt;a href=&quot;https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=147427143&quot;&gt;FLIP-103&lt;/a&gt;) and a new thread dump utility (&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14816&quot;&gt;FLINK-14816&lt;/a&gt;). Additional work planned for upcoming releases includes better backpressure detection, more flexible and configurable exception display and support for displaying the history of subtask failure attempts.&lt;/p&gt;
+
+&lt;p&gt;&lt;strong&gt;Docker Image Unification (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-111%3A+Docker+image+unification&quot;&gt;FLIP-111&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;With this release, all Docker-related resources have been consolidated into &lt;a href=&quot;https://github.com/apache/flink-docker&quot;&gt;apache/flink-docker&lt;/a&gt; and the entry point script has been extended to allow users to run the default Docker image in different modes without the need to create a custom image. The &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-master/ops/deployment/docker.html#customize-flink-image&quot;&gt;updated documentation&lt;/a&gt; describes in detail how to use and customize the official Flink Docker image for different environments and deployment modes.&lt;/p&gt;
+
+&lt;hr /&gt;
+
+&lt;h3 id=&quot;table-apisql-support-for-change-data-capture-cdc&quot;&gt;Table API/SQL: Support for Change Data Capture (CDC)&lt;/h3&gt;
+
+&lt;p&gt;Change Data Capture (CDC) has become a popular pattern to capture committed changes from a database and propagate those changes to downstream consumers, for example to keep multiple datastores in sync and avoid common pitfalls such as &lt;a href=&quot;https://thorben-janssen.com/dual-writes/&quot;&gt;dual writes&lt;/a&gt;. Being able to easily ingest and interpret these changelogs into the Table API/SQL has been a highly demanded feature in the Flink community — and it’s now possible with Flink 1.11.&lt;/p&gt;
+
+&lt;p&gt;To extend the scope of the Table API/SQL to use cases like CDC, Flink 1.11 introduces new table source and sink interfaces with &lt;strong&gt;changelog mode&lt;/strong&gt; (see &lt;em&gt;&lt;a href=&quot;#other-improvements-to-the-table-apisql&quot;&gt;New TableSource and TableSink Interfaces&lt;/a&gt;&lt;/em&gt;) and support for the &lt;a href=&quot;https://debezium.io/&quot;&gt;Debezium&lt;/a&gt; and &lt;a href=&quot;https://github.com/alibaba/canal&quot;&gt;Canal&lt;/a&gt; formats (&lt;a href=&quot;https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=147427289&quot;&gt;FLIP-105&lt;/a&gt;). This means that dynamic table sources are no longer limited to append-only operations and can ingest these external changelogs (&lt;code&gt;INSERT&lt;/code&gt; events), interpret them into change operations (&lt;code&gt;INSERT&lt;/code&gt;, &lt;code&gt;UPDATE&lt;/code&gt;, &lt;code&gt;DELETE&lt;/code&gt; events) and emit them downstream with the change type.&lt;/p&gt;
+
+&lt;center&gt;
+	&lt;figure&gt;
+	&lt;img src=&quot;/img/blog/2020-07-06-release-1.11.0/image4.png&quot; width=&quot;500px&quot; alt=&quot;CDC&quot; /&gt;
+	&lt;/figure&gt;
+&lt;/center&gt;
+
+&lt;div style=&quot;line-height:150%;&quot;&gt;
+    &lt;br /&gt;
+&lt;/div&gt;
+
+&lt;p&gt;Users have to specify either &lt;code&gt;'format' = 'debezium-json'&lt;/code&gt; or &lt;code&gt;'format' = 'canal-json'&lt;/code&gt; in their &lt;code&gt;CREATE TABLE&lt;/code&gt; statement to consume changelogs using SQL DDL.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;my_table&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
+  &lt;span class=&quot;p&quot;&gt;...&lt;/span&gt;
+&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;WITH&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
+  &lt;span class=&quot;s1&quot;&gt;&amp;#39;connector&amp;#39;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&amp;#39;...&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;-- e.g. &amp;#39;kafka&amp;#39;&lt;/span&gt;
+  &lt;span class=&quot;s1&quot;&gt;&amp;#39;format&amp;#39;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&amp;#39;debezium-json&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+  &lt;span class=&quot;s1&quot;&gt;&amp;#39;debezium-json.schema-include&amp;#39;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&amp;#39;true&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;-- default: false (Debezium can be configured to include or exclude the message schema)&lt;/span&gt;
+  &lt;span class=&quot;s1&quot;&gt;&amp;#39;debezium-json.ignore-parse-errors&amp;#39;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&amp;#39;true&amp;#39;&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;-- default: false&lt;/span&gt;
+&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;Out of the box, Flink 1.11 supports only Kafka as a changelog source and only JSON-encoded changelogs, with Avro (Debezium) and Protobuf (Canal) planned for future releases. There are also plans to support MySQL binlogs and Kafka compacted topics as sources, as well as to extend changelog support to batch execution.&lt;/p&gt;
+
+&lt;p&gt;&lt;span class=&quot;label label-danger&quot;&gt;Attention&lt;/span&gt; There is a known issue (&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-18461&quot;&gt;FLINK-18461&lt;/a&gt;) that prevents changelog sources from being used to write to upsert sinks (e.g. MySQL, HBase, Elasticsearch). This will be fixed in the next patch release (1.11.1).&lt;/p&gt;
+
+&lt;h3 id=&quot;table-apisql-jdbc-catalog-interface-and-postgres-catalog&quot;&gt;Table API/SQL: JDBC Catalog Interface and Postgres Catalog&lt;/h3&gt;
+
+&lt;p&gt;Flink 1.11 introduces a generic JDBC catalog interface (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-93%3A+JDBC+catalog+and+Postgres+catalog&quot;&gt;FLIP-93&lt;/a&gt;) that enables users of the Table API/SQL to &lt;strong&gt;derive table schemas automatically&lt;/strong&gt; from connections to relational databases over &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connect.html#jdbc-connector&quot;&gt;JDBC&lt;/a&gt;. This eliminates the previous need for manual schema definition and type conversion, and also makes it possible to check for schema errors at compile time instead of at runtime.&lt;/p&gt;
+
+&lt;p&gt;The first implementation, rolling out with the new release, is the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/catalogs.html#postgrescatalog&quot;&gt;Postgres catalog&lt;/a&gt;.&lt;/p&gt;
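+
+&lt;p&gt;As a sketch (the connection details are placeholders), registering and using a Postgres catalog from the Table API could look like the following:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import org.apache.flink.connector.jdbc.catalog.JdbcCatalog;
+import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.TableEnvironment;
+
+EnvironmentSettings settings = EnvironmentSettings.newInstance().useBlinkPlanner().build();
+TableEnvironment tableEnv = TableEnvironment.create(settings);
+
+// Table schemas are derived automatically from the Postgres metadata.
+JdbcCatalog catalog = new JdbcCatalog(
+    &quot;mypg&quot;, &quot;mydb&quot;, &quot;username&quot;, &quot;password&quot;, &quot;jdbc:postgresql://localhost:5432/&quot;);
+tableEnv.registerCatalog(&quot;mypg&quot;, catalog);
+tableEnv.useCatalog(&quot;mypg&quot;);&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;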
+
+&lt;h3 id=&quot;table-apisql-filesystem-connector-with-support-for-avro-orc-and-parquet&quot;&gt;Table API/SQL: FileSystem Connector with Support for Avro, ORC and Parquet&lt;/h3&gt;
+
+&lt;p&gt;To improve the user experience for end-to-end streaming ETL use cases, the Flink community worked on a new FileSystem Connector for the Table API/SQL (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-115%3A+Filesystem+connector+in+Table&quot;&gt;FLIP-115&lt;/a&gt;). The implementation is based on Flink’s &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/ops/filesystems/index.html&quot;&gt;FileSystem abstraction&lt;/a&gt; and reuses &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/connectors/streamfile_sink.html&quot;&gt;StreamingFileSink&lt;/a&gt; to ensure the same set of capabilities and consistent behaviour with the DataStream API.&lt;/p&gt;
+
+&lt;p&gt;This also means that Table API/SQL users can now make use of all formats already supported by StreamingFileSink, like (Avro) Parquet, as well as the new formats introduced with this release, like Avro (&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11395&quot;&gt;FLINK-11395&lt;/a&gt;) and ORC (&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10114&quot;&gt;FLINK-10114&lt;/a&gt;).&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;my_table&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;column_name1&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;INT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;column_name2&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;STRING&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+  &lt;span class=&quot;p&quot;&gt;...&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;part_name1&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;INT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;part_name2&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;STRING&lt;/span&gt;
+&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;PARTITIONED&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;BY&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;part_name1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;part_name2&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;WITH&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
+  &lt;span class=&quot;s1&quot;&gt;&amp;#39;connector&amp;#39;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;filesystem&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+  &lt;span class=&quot;s1&quot;&gt;&amp;#39;path&amp;#39;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;file:///path/to/file&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+  &lt;span class=&quot;s1&quot;&gt;&amp;#39;format&amp;#39;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;...&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;  &lt;span class=&quot;c1&quot;&gt;-- supported formats: Avro, ORC, Parquet, CSV, JSON&lt;/span&gt;
+  &lt;span class=&quot;p&quot;&gt;...&lt;/span&gt;
+&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;The new all-rounder FileSystem Connector transparently handles batch and streaming execution, provides exactly-once guarantees and has full partition support, greatly expanding the scope of usage of the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/connect.html#file-system-connector&quot;&gt;legacy connector&lt;/a&gt;. This allows users to easily implement common use cases like &lt;strong&gt;directly streaming data from Kafka to Hive&lt;/strong&gt;.&lt;/p&gt;
+
+&lt;p&gt;You can track the upcoming improvements to the FileSystem Connector in &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-17778&quot;&gt;FLINK-17778&lt;/a&gt;.&lt;/p&gt;
+
+&lt;h3 id=&quot;table-apisql-support-for-python-udfs&quot;&gt;Table API/SQL: Support for Python UDFs&lt;/h3&gt;
+
+&lt;p&gt;Prior to this release, users of the Table API/SQL were limited to defining UDFs in either Java or Scala. In Flink 1.11, the community worked on expanding the usage scope of the Python language beyond PyFlink and providing support for Python UDFs in the SQL DDL syntax (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-106%3A+Support+Python+UDF+in+SQL+Function+DDL&quot;&gt;FLIP-106&lt;/a&gt;), as well as the SQL Client (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-114%3A+Support+Python+UDF+in+SQL+Client&quot;&gt;FLIP-114&lt;/a&gt;). Users can also register Python UDFs in the system catalog via SQL DDL or the Java Catalog API, so that functions can be shared between jobs.&lt;/p&gt;
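+
+&lt;p&gt;As a sketch (the module and function names are hypothetical, and the Python files must be made available to the cluster), registering and using a Python UDF via SQL DDL could look like this:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;-- 'my_udfs.multiply' refers to a function in a Python module shipped with the job.
+CREATE TEMPORARY FUNCTION multiply AS 'my_udfs.multiply' LANGUAGE PYTHON;
+
+SELECT multiply(a, b) FROM my_table;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;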
+
+&lt;h3 id=&quot;other-improvements-to-the-table-apisql&quot;&gt;Other Improvements to the Table API/SQL&lt;/h3&gt;
+
+&lt;p&gt;&lt;strong&gt;DDL and DML Compatibility for the Hive Connector (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-123%3A+DDL+and+DML+compatibility+for+Hive+connector&quot;&gt;FLIP-123&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;Starting from Flink 1.11, users can write SQL statements directly using Hive syntax (HiveQL) in the Table API/SQL and the SQL Client. For this purpose, an additional dialect was introduced, and users can now dynamically switch between the Flink (&lt;code&gt;default&lt;/code&gt;) and Hive (&lt;code&gt;hive&lt;/code&gt;) dialects on a per-statement basis. For a complete list of supported DDL and DML statements, check the Hive dialect &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/hive/hive_dialect.html#hive-dialect&quot;&gt;documentation&lt;/a&gt;.&lt;/p&gt;
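+
+&lt;p&gt;In the SQL Client, for example, the dialect can be switched with a simple &lt;code&gt;SET&lt;/code&gt; statement (a sketch; the statements in between are placeholders):&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;SET table.sql-dialect=hive;
+-- ... run HiveQL DDL/DML statements here ...
+SET table.sql-dialect=default;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;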
+
+&lt;p&gt;&lt;strong&gt;Extensions and Improvements to the Flink SQL Syntax&lt;/strong&gt;&lt;/p&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;Flink 1.11 introduces the concept of &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/sql/create.html#create-table&quot;&gt;primary key constraints&lt;/a&gt; to leverage runtime optimizations in Flink SQL DDL (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP+87%3A+Primary+key+constraints+in+Table+API&quot;&gt;FLIP-87&lt;/a&gt;);&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;View objects are now fully supported in SQL DDL using the &lt;code&gt;CREATE&lt;/code&gt;/&lt;code&gt;ALTER&lt;/code&gt;/&lt;code&gt;DROP VIEW&lt;/code&gt; statements (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-71%3A+E2E+View+support+in+FLINK+SQL&quot;&gt;FLIP-71&lt;/a&gt;);&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Users can now specify or override table options in their DQL/DML statements using &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-master/dev/table/sql/hints.html#dynamic-table-options&quot;&gt;dynamic table options&lt;/a&gt; (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-113%3A+Supports+Dynamic+Table+Options+for+Flink+SQL&quot;&gt;FLIP-113&lt;/a&gt;).&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;To make connector properties less verbose and improve exception handling, some key properties have been refactored (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-122%3A+New+Connector+Property+Keys+for+New+Factory&quot;&gt;FLIP-122&lt;/a&gt;). This change does not break compatibility, so users can still use the old property keys.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;p&gt;&lt;strong&gt;New TableSource and TableSink Interfaces (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-95%3A+New+TableSource+and+TableSink+interfaces&quot;&gt;FLIP-95&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;Flink 1.11 introduces new table source and sink interfaces (&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/api/java/org/apache/flink/table/connector/source/DynamicTableSource.html&quot;&gt;&lt;code&gt;DynamicTableSource&lt;/code&gt;&lt;/a&gt; and &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/api/java/org/apache/flink/table/connector/sink/DynamicTableSink.html&quot;&gt;&lt;code&gt;DynamicTableSink&lt;/code&gt;&lt;/a&gt;, respectively) that unify batch and streaming execution, provide more efficient data processing with the Blink planner and offer support for handling changelogs (see &lt;em&gt;&lt;a href=&quot;#table-apisql-support-for-change-data-capture-cdc&quot;&gt;Support for Change Data Capture (CDC)&lt;/a&gt;&lt;/em&gt;). The new interfaces also make it easier for users to &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/sourceSinks.html#full-stack-example&quot;&gt;implement custom connectors&lt;/a&gt; or modify existing ones. For an end-to-end example of how to implement a custom scan table source with a decoding format that supports changelog semantics, check out the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/sourceSinks.html#full-stack-example&quot;&gt;documentation&lt;/a&gt;.&lt;/p&gt;
+
+&lt;p&gt;&lt;span class=&quot;label label-info&quot;&gt;Note&lt;/span&gt; Although compatibility is not immediately affected, we recommend that Table API/SQL users update any sources and sinks to the new interface stack.&lt;/p&gt;
+
+&lt;p&gt;&lt;strong&gt;Refactored TableEnvironment Interface (&lt;a href=&quot;https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=134745878&quot;&gt;FLIP-84&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;The semantics to describe similar behaviours in the &lt;code&gt;TableEnvironment&lt;/code&gt; and &lt;code&gt;Table&lt;/code&gt; interfaces have diverged over time, leading to an inconsistent and sometimes unclear user experience. To improve this and make programming more fluent in the Table API/SQL, Flink 1.11 introduces new methods that unify behaviours like execution triggering (e.g. &lt;code&gt;executeSql()&lt;/code&gt;) and result representation (e.g. &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/api/java/org/apache/flink/table/api/TableResult.html#print--&quot;&gt;&lt;code&gt;print()&lt;/code&gt;&lt;/a&gt;, &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/api/java/org/apache/flink/table/api/TableResult.html#collect--&quot;&gt;&lt;code&gt;collect()&lt;/code&gt;&lt;/a&gt;), and also lay the groundwork for important features like &lt;a href=&quot;https://lists.apache.org/thread.html/r076e63bf6c8ed42d1b9ed2b406029696274a3a90cc360bc3a03e65d2%40%3Cdev.flink.apache.org%3E&quot;&gt;multi-statement execution support&lt;/a&gt; in future releases.&lt;/p&gt;
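+
+&lt;p&gt;As a small sketch (assuming an existing &lt;code&gt;TableEnvironment&lt;/code&gt; named &lt;code&gt;tableEnv&lt;/code&gt; and an illustrative table), executing a statement and inspecting its result now takes two unified calls:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import org.apache.flink.table.api.TableResult;
+
+// executeSql() triggers execution directly; no separate execute() call is needed.
+TableResult result = tableEnv.executeSql(
+    &quot;SELECT name, COUNT(*) AS cnt FROM people GROUP BY name&quot;);
+
+// Print the result to the console, or iterate over it with collect().
+result.print();&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;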
+
+&lt;p&gt;&lt;span class=&quot;label label-info&quot;&gt;Note&lt;/span&gt; The methods deprecated with FLIP-84 will not be immediately removed, but we recommend that users adopt the newly introduced methods. For a complete list of new and deprecated methods, check the “Summary” section of &lt;a href=&quot;https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=134745878&quot;&gt;FLIP-84&lt;/a&gt;.&lt;/p&gt;
+
+&lt;p&gt;&lt;strong&gt;New Type Inference for Table API UDFs (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-65%3A+New+type+inference+for+Table+API+UDFs&quot;&gt;FLIP-65&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;In Flink 1.9, the community started working on a &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/types.html#data-types&quot;&gt;new data type system&lt;/a&gt; for the Table API to improve its compliance with the SQL standard (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-37%3A+Rework+of+the+Table+API+Type+System&quot;&gt;FLIP-37&lt;/a&gt;). This work is now close to being completed in Flink 1.11, with the exposure of Table API UDFs to the new type system (scalar and table functions, with aggregate functions planned for the next release).&lt;/p&gt;
+
+&lt;hr /&gt;
+
+&lt;h3 id=&quot;pyflink-support-for-pandas-udfs&quot;&gt;PyFlink: Support for Pandas UDFs&lt;/h3&gt;
+
+&lt;p&gt;Up to this release, Python UDFs in PyFlink only supported scalar values of standard Python types. This presented some limitations:&lt;/p&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;High serialization/deserialization overhead in the process of transferring data between the JVM and the Python processes;&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Difficulty integrating with common Python libraries for high-performance numerical processing, like pandas and NumPy.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;p&gt;To overcome these limitations, the community introduced support for (scalar) &lt;strong&gt;vectorized Python UDFs&lt;/strong&gt; based on &lt;a href=&quot;https://pandas.pydata.org/pandas-docs/stable/getting_started/overview.html&quot;&gt;pandas&lt;/a&gt; in Flink 1.11 (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-97%3A+Support+Scalar+Vectorized+Python+UDF+in+PyFlink&quot;&gt;FLIP-97&lt;/a&gt;). The performance of vectorized UDFs is usually much higher, as the serialization/deserialization overhead is minimized by relying on &lt;a href=&quot;https://arrow.apache.org/&quot;&gt;Apache Arrow&lt;/a&gt;, and handling &lt;code&gt;pandas.Series&lt;/code&gt; as input/output makes it possible to take full advantage of the pandas and NumPy libraries. This makes Pandas UDFs a popular solution to parallelize Machine Learning and other large-scale, distributed data science workloads (e.g. feature engineering, distributed model application).&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;&lt;span class=&quot;nd&quot;&gt;@udf&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;input_types&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;DataTypes&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;BIGINT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(),&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;DataTypes&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;BIGINT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;()],&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;result_type&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;DataTypes&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;BIGINT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(),&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;udf_type&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;pandas&amp;quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;add&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;j&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;):&lt;/span&gt;
+  &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;j&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;To mark a UDF as a Pandas UDF, you only need to add an extra parameter &lt;code&gt;udf_type=&quot;pandas&quot;&lt;/code&gt; in the &lt;code&gt;udf&lt;/code&gt; decorator, as described in the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-master/dev/table/python/vectorized_python_udfs.html#vectorized-user-defined-functions&quot;&gt;documentation&lt;/a&gt;.&lt;/p&gt;
+
+&lt;h3 id=&quot;other-improvements-to-pyflink&quot;&gt;Other Improvements to PyFlink&lt;/h3&gt;
+
+&lt;p&gt;&lt;strong&gt;Conversion fromPandas/toPandas (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-120%3A+Support+conversion+between+PyFlink+Table+and+Pandas+DataFrame&quot;&gt;FLIP-120&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;Arrow is also supported as an optimization to convert between PyFlink tables and &lt;a href=&quot;https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html&quot;&gt;&lt;code&gt;pandas.DataFrames&lt;/code&gt;&lt;/a&gt;, enabling users to switch processing engines seamlessly without the need for an intermediate connector. For examples on how to use the new &lt;code&gt;fromPandas()&lt;/code&gt; and &lt;code&gt;toPandas()&lt;/code&gt; methods in PyFlink, check out the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/python/conversion_of_pandas.html#conversions-between-pyflink-table-and-pandas-dataframe&quot;&gt;documentation&lt;/a&gt;.&lt;/p&gt;
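+
+&lt;p&gt;A minimal sketch of the round trip (assuming a batch &lt;code&gt;TableEnvironment&lt;/code&gt; created with the Blink planner):&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;import pandas as pd
+
+from pyflink.table import BatchTableEnvironment, EnvironmentSettings
+
+settings = EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
+t_env = BatchTableEnvironment.create(environment_settings=settings)
+
+pdf = pd.DataFrame({&quot;a&quot;: [1, 2, 3], &quot;b&quot;: [&quot;x&quot;, &quot;y&quot;, &quot;z&quot;]})
+
+table = t_env.from_pandas(pdf)  # pandas DataFrame -&amp;gt; PyFlink Table
+result = table.to_pandas()      # PyFlink Table -&amp;gt; pandas DataFrame&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;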
+
+&lt;p&gt;&lt;strong&gt;Support for User-defined Table Functions (UDTFs) (&lt;a href=&quot;https://jira.apache.org/jira/browse/FLINK-14500&quot;&gt;FLINK-14500&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;From Flink 1.11, you can define and register custom &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/python/python_udfs.html#table-functions&quot;&gt;UDTFs&lt;/a&gt; in PyFlink. Similar to a Python UDF, a UDTF takes zero, one or multiple scalar values as input, but can return an arbitrary number of rows as output instead of a single value.&lt;/p&gt;
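+
+&lt;p&gt;A minimal sketch of a generator-style UDTF (the function itself is hypothetical):&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyflink.table import DataTypes
+from pyflink.table.udf import udtf
+
+# Emits one row per number from 0 to n-1: a single input value fans out into many rows.
+@udtf(input_types=DataTypes.BIGINT(), result_types=DataTypes.BIGINT())
+def generate_series(n):
+    for i in range(n):
+        yield i&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;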
+
+&lt;p&gt;&lt;strong&gt;Cython Performance Optimization for UDFs (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-121%3A+Support+Cython+Optimizing+Python+User+Defined+Function&quot;&gt;FLIP-121&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;&lt;a href=&quot;https://cython.readthedocs.io/en/latest/src/quickstart/cythonize.html&quot;&gt;Cython&lt;/a&gt; is a compiled superset of the Python language that is often used to improve the performance of large-scale numeric processing in Python, as it optimizes execution to machine code-level speed and pairs well with popular C-based libraries like NumPy. From Flink 1.11, you can build &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/flinkDev/building.html#build-pyflink&quot;&gt;PyFlink with Cython support&lt;/a&gt; and “Cythonize” your Python UDFs to substantially improve code execution speed (up to 30x faster, compared to Python UDFs in Flink 1.10).&lt;/p&gt;
+
+&lt;p&gt;&lt;strong&gt;User-defined Metrics in Python UDFs (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-112%3A+Support+User-Defined+Metrics+in++Python+UDF&quot;&gt;FLIP-112&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+&lt;p&gt;To make it easier for users to monitor and debug the execution of Python UDFs, PyFlink now allows gathering and exposing metrics to external systems, as well as defining user scopes and variables. You can access the metrics system from a UDF by calling &lt;code&gt;function_context.get_metric_group()&lt;/code&gt; in the &lt;code&gt;open&lt;/code&gt; method, as described in the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-master/dev/table/python/metrics.html#registering-metrics&quot;&gt;documentation&lt;/a&gt;.&lt;/p&gt;
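+
+&lt;p&gt;As a sketch, a UDF that counts its own invocations (the metric name is arbitrary) could be defined as follows:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-python&quot;&gt;from pyflink.table import DataTypes
+from pyflink.table.udf import ScalarFunction, udf
+
+class CountedLength(ScalarFunction):
+    def open(self, function_context):
+        # Register a user-defined counter under an arbitrary metric name.
+        self.counter = function_context.get_metric_group().counter(&quot;invocations&quot;)
+
+    def eval(self, s):
+        self.counter.inc()
+        return len(s)
+
+counted_length = udf(CountedLength(), input_types=DataTypes.STRING(), result_type=DataTypes.BIGINT())&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;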
+
+&lt;hr /&gt;
+
+&lt;h2 id=&quot;important-changes&quot;&gt;Important Changes&lt;/h2&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;[&lt;a href=&quot;https://jira.apache.org/jira/browse/FLINK-17339&quot;&gt;FLINK-17339&lt;/a&gt;] The Blink planner is the &lt;strong&gt;default&lt;/strong&gt; in the Table API/SQL starting from Flink 1.11. This was already the case for the SQL Client since Flink 1.10. The old Flink planner is still supported, but not actively developed.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-5763&quot;&gt;FLINK-5763&lt;/a&gt;] Savepoints now contain all their state inside a single directory (both metadata and program state). This makes it straightforward to figure out which files make up the state of a savepoint and allows users to &lt;strong&gt;relocate savepoints&lt;/strong&gt; by simply moving a directory.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-16408&quot;&gt;FLINK-16408&lt;/a&gt;] To reduce pressure on the JVM metaspace, the user code class loader is now reused by a &lt;code&gt;TaskExecutor&lt;/code&gt; as long as there is at least one slot allocated for the respective job. This changes Flink’s recovery behaviour slightly, in that static fields will not be reloaded.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11086&quot;&gt;FLINK-11086&lt;/a&gt;] Flink now supports Hadoop versions above &lt;strong&gt;Hadoop 3.0.0&lt;/strong&gt;. Note that the Flink project does not provide any updated &lt;code&gt;flink-shaded-hadoop-*&lt;/code&gt; jars. Users need to provide Hadoop dependencies through the &lt;code&gt;HADOOP_CLASSPATH&lt;/code&gt; environment variable (recommended) or the &lt;code&gt;lib/&lt;/code&gt; folder.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-16963&quot;&gt;FLINK-16963&lt;/a&gt;] All &lt;code&gt;MetricReporters&lt;/code&gt; that come with Flink have been converted to plugins. These should no longer be placed into &lt;code&gt;/lib&lt;/code&gt; (which may result in dependency conflicts), but &lt;code&gt;/plugins/&amp;lt;some_directory&amp;gt;&lt;/code&gt; instead.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12639&quot;&gt;FLINK-12639&lt;/a&gt;] The Flink &lt;strong&gt;documentation&lt;/strong&gt; is undergoing some &lt;strong&gt;rework&lt;/strong&gt;, so you might notice that the navigation and organization of content look slightly different starting from Flink 1.11.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;h2 id=&quot;release-notes&quot;&gt;Release Notes&lt;/h2&gt;
+
+&lt;p&gt;Please review the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.11/release-notes/flink-1.11.html&quot;&gt;release notes&lt;/a&gt; carefully for a detailed list of changes and new features if you plan to upgrade your setup to Flink 1.11. This version is API-compatible with previous 1.x releases for APIs annotated with the &lt;code&gt;@Public&lt;/code&gt; annotation.&lt;/p&gt;
+
+&lt;h2 id=&quot;list-of-contributors&quot;&gt;List of Contributors&lt;/h2&gt;
+
+&lt;p&gt;The Apache Flink community would like to thank all the 200+ contributors that have made this release possible:&lt;/p&gt;
+
+&lt;p&gt;Aitozi, Alexander Fedulov, Alexey Trenikhin, Aljoscha Krettek, Andrey Zagrebin, Arvid Heise, Ayush Saxena, Bairos, Bartosz Krasinski, Benchao Li, Benoit Hanotte, Benoît Paris, Bhagavan Das, Canbin Zheng, Cedric Chen, Chesnay Schepler, Colm O hEigeartaigh, Congxian Qiu, CrazyTomatoOo, Danish Amjad, Danny Chan, David Anderson, Dawid Wysakowicz, Dian Fu, Dominik Wosiński, Echo Lee, Ethan Marsh, Etienne Chauchot, Fabian Hueske, Fabian Paul, Flavio Pompermaier, Gao Yun, Gary Yao, Ghildiyal, Grebennikov Roman, GuoWei Ma, Guru Prasad, Gyula Fora, Hequn Cheng, Hu Guang, HuFeiHu, HuangXingBo, Igal Shilman, Ismael Juma, Jacob Sevart, Jark Wu, Jaskaran Bindra, Jason K, Jeff Yang, Jeff Zhang, Jerry Wang, Jiangjie (Becket) Qin, Jiayi, Jiayi Liao, Jiayi-Liao, Jincheng Sun, Jing Zhang, Jingsong Lee, JingsongLi, Jun Qin, JunZhang, Jörn Kottmann, Kevin Bohinski, Konstantin Knauf, Kostas Kloudas, Kurt Young, Leonard Xu, Lining Jing, Liupengcheng, LululuAlu, Marta Paes Moreira, Matt Welke, Max Kuklinski, Maximilian Michels, Nico Kruber, Niels Basjes, Oleksandr Nitavskyi, Paul Lam, Paul Lin, PengFei Li, PengchengLiu, Piotr Nowojski, Prem Santosh, Qingsheng Ren, Rafi Aroch, Raymond Farrelly, Richard Deurwaarder, Robert Metzger, RocMarshal, Roey Shem Tov, Roman, Roman Khachatryan, Rong Rong, RoyRuan, Rui Li, Seth Wiesman, Shaobin.Ou, Shengkai, Shuiqiang Chen, Shuo Cheng, Sivaprasanna, Sivaprasanna S, SteNicholas, Stefan Richter, Stephan Ewen, Steve OU, Steve Whelan, Tartarus, Terry Wang, Thomas Weise, Till Rohrmann, Timo Walther, TsReaper, Tzu-Li (Gordon) Tai, Victor Wong, Wei Zhong, Weike DONG, Xiaogang Zhou, Xintong Song, Xu Bai, Xuannan, Yadong Xie, Yang Wang, Yangze Guo, Yichao Yang, Ying, Yu Li, Yuan Mei, Yun Gao, Yun Tang, Yuval Itzchakov, Zakelly, Zhao, Zhenghua Gao, Zhijiang, Zhu Zhu, acqua.csq, austin ce, azagrebin, bdine, bowen.li, caoyingjie, caozhen, caozhen1937, chaojianok, chen, chendonglin, comsir, cpugputpu, czhang2, dianfu, edu05, eduardowt, fangliang, felixzheng, fmyblack, gauss, gk0916, godfrey he, godfreyhe, guliziduo, guowei.mgw, hehuiyuan, hequn8128, hpeter, huangxingbo, huzheng, ifndef-SleePy, jingwen-ywb, jrthe42, kevin.cyj, klion26, lamber-ken, leesf, libenchao, lijiewang.wlj, liuyongvs, lsy, lumen, machinedoll, mans2singh, molsionmo, oliveryunchang, openinx, paul8263, ptmagic, qqibrow, sev7e0, shuai-xu, shuai.xu, shuiqiangchen, snuyanzin, spafka, sunhaibotb, sunjincheng121, testfixer, tison, vinoyang, vthinkxie, wangtong, wangxianghu, wangxiyuan, wangxlong, wangyang0918, wenlong.lwl, whlwanghailong, william, windWheel, wooplevip, wuxuyang, xushiwei, xuyang1706, yanghua, yangyichao-mango, yuzhao.cyz, zentol, zhanglibing, zhangmang, zhangzhanchun, zhengcanbin, zhengshuli, zhenxianyimeng, zhijiang, zhongyong jin, zhule, zhuxiaoshang, zjuwangg, zoudan, zoudaokoulife, zzchun, “lzh576177775”, 骚sir, 厉颖, 张军, 曹建华, 漫步云端&lt;/p&gt;
+</description>
+<pubDate>Mon, 06 Jul 2020 10:00:00 +0200</pubDate>
+<link>https://flink.apache.org/news/2020/07/06/release-1.11.0.html</link>
+<guid isPermaLink="true">/news/2020/07/06/release-1.11.0.html</guid>
+</item>
+
+<item>
 <title>Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</title>
 <description>&lt;p&gt;In a previous post, we introduced the basics of Flink on Zeppelin and how to do Streaming ETL. In this second part of the “Flink on Zeppelin” series of posts, I will share how to 
 perform streaming data visualization via Flink on Zeppelin and how to use Apache Flink UDFs in Zeppelin.&lt;/p&gt;
@@ -16426,462 +16777,5 @@
 <guid isPermaLink="true">/news/2015/09/01/release-0.9.1.html</guid>
 </item>
 
-<item>
-<title>Introducing Gelly: Graph Processing with Apache Flink</title>
-<description>&lt;p&gt;This blog post introduces &lt;strong&gt;Gelly&lt;/strong&gt;, Apache Flink’s &lt;em&gt;graph-processing API and library&lt;/em&gt;. Flink’s native support
-for iterations makes it a suitable platform for large-scale graph analytics.
-By leveraging delta iterations, Gelly is able to map various graph processing models such as
-vertex-centric or gather-sum-apply to Flink dataflows.&lt;/p&gt;
-
-&lt;p&gt;Gelly allows Flink users to perform end-to-end data analysis in a single system.
-Gelly can be seamlessly used with Flink’s DataSet API,
-which means that pre-processing, graph creation, analysis, and post-processing can be done
-in the same application. At the end of this post, we will go through a step-by-step example
-in order to demonstrate that loading, transformation, filtering, graph creation, and analysis
-can be performed in a single Flink program.&lt;/p&gt;
-
-&lt;p&gt;&lt;strong&gt;Overview&lt;/strong&gt;&lt;/p&gt;
-
-&lt;ol&gt;
-  &lt;li&gt;&lt;a href=&quot;#what-is-gelly&quot;&gt;What is Gelly?&lt;/a&gt;&lt;/li&gt;
-  &lt;li&gt;&lt;a href=&quot;#graph-representation-and-creation&quot;&gt;Graph Representation and Creation&lt;/a&gt;&lt;/li&gt;
-  &lt;li&gt;&lt;a href=&quot;#transformations-and-utilities&quot;&gt;Transformations and Utilities&lt;/a&gt;&lt;/li&gt;
-  &lt;li&gt;&lt;a href=&quot;#iterative-graph-processing&quot;&gt;Iterative Graph Processing&lt;/a&gt;&lt;/li&gt;
-  &lt;li&gt;&lt;a href=&quot;#library-of-graph-algorithms&quot;&gt;Library of Graph Algorithms&lt;/a&gt;&lt;/li&gt;
-  &lt;li&gt;&lt;a href=&quot;#use-case-music-profiles&quot;&gt;Use-Case: Music Profiles&lt;/a&gt;&lt;/li&gt;
-  &lt;li&gt;&lt;a href=&quot;#ongoing-and-future-work&quot;&gt;Ongoing and Future Work&lt;/a&gt;&lt;/li&gt;
-&lt;/ol&gt;
-
-&lt;p&gt;&lt;a href=&quot;#top&quot;&gt;&lt;/a&gt;&lt;/p&gt;
-
-&lt;h2 id=&quot;what-is-gelly&quot;&gt;What is Gelly?&lt;/h2&gt;
-
-&lt;p&gt;Gelly is a Graph API for Flink. It is currently available in both Java and Scala.
-The Scala methods are implemented as wrappers on top of the basic Java operations.
-The API contains a set of utility functions for graph analysis, supports iterative graph
-processing and introduces a library of graph algorithms.&lt;/p&gt;
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/flink-stack.png&quot; style=&quot;width:90%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;p&gt;&lt;a href=&quot;#top&quot;&gt;Back to top&lt;/a&gt;&lt;/p&gt;
-
-&lt;h2 id=&quot;graph-representation-and-creation&quot;&gt;Graph Representation and Creation&lt;/h2&gt;
-
-&lt;p&gt;In Gelly, a graph is represented by a DataSet of vertices and a DataSet of edges.
-A vertex is defined by its unique ID and a value, whereas an edge is defined by its source ID,
-target ID, and value. A vertex or edge for which a value is not specified will simply have the
-value type set to &lt;code&gt;NullValue&lt;/code&gt;.&lt;/p&gt;
-
-&lt;p&gt;A graph can be created from:&lt;/p&gt;
-
-&lt;ol&gt;
-  &lt;li&gt;&lt;strong&gt;DataSet of edges&lt;/strong&gt; and an optional &lt;strong&gt;DataSet of vertices&lt;/strong&gt; using &lt;code&gt;Graph.fromDataSet()&lt;/code&gt;&lt;/li&gt;
-  &lt;li&gt;&lt;strong&gt;DataSet of Tuple3&lt;/strong&gt; and an optional &lt;strong&gt;DataSet of Tuple2&lt;/strong&gt; using &lt;code&gt;Graph.fromTupleDataSet()&lt;/code&gt;&lt;/li&gt;
-  &lt;li&gt;&lt;strong&gt;Collection of edges&lt;/strong&gt; and an optional &lt;strong&gt;Collection of vertices&lt;/strong&gt; using &lt;code&gt;Graph.fromCollection()&lt;/code&gt;&lt;/li&gt;
-&lt;/ol&gt;
-
-&lt;p&gt;In all three cases, if the vertices are not provided,
-Gelly will automatically produce the vertex IDs from the edge source and target IDs.&lt;/p&gt;
-
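-&lt;p&gt;As a minimal sketch of the first variant, assuming an existing edge DataSet named &lt;code&gt;edges&lt;/code&gt; and an execution environment &lt;code&gt;env&lt;/code&gt; (both names illustrative), a graph without explicit vertex values can be created as follows:&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// no vertices provided: the IDs are derived from the edges and
-// the vertex value type defaults to NullValue
-Graph&amp;lt;String, NullValue, Integer&amp;gt; graph = Graph.fromDataSet(edges, env);&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-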
-&lt;p&gt;&lt;a href=&quot;#top&quot;&gt;Back to top&lt;/a&gt;&lt;/p&gt;
-
-&lt;h2 id=&quot;transformations-and-utilities&quot;&gt;Transformations and Utilities&lt;/h2&gt;
-
-&lt;p&gt;These are methods of the Graph class and include common graph metrics, transformations
-and mutations as well as neighborhood aggregations.&lt;/p&gt;
-
-&lt;h4 id=&quot;common-graph-metrics&quot;&gt;Common Graph Metrics&lt;/h4&gt;
-&lt;p&gt;These methods can be used to retrieve several graph metrics and properties, such as the number
-of vertices and edges, and the node degrees.&lt;/p&gt;
-
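-&lt;p&gt;As a minimal sketch (the variable names are illustrative, not from the original post):&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// count vertices and edges, and compute the degree of each vertex
-long numberOfVertices = graph.numberOfVertices();
-long numberOfEdges = graph.numberOfEdges();
-DataSet&amp;lt;Tuple2&amp;lt;String, Long&amp;gt;&amp;gt; vertexDegrees = graph.getDegrees();&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-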
-&lt;h4 id=&quot;transformations&quot;&gt;Transformations&lt;/h4&gt;
-&lt;p&gt;The transformation methods enable several Graph operations, using high-level functions similar to
-the ones provided by the batch processing API. These transformations can be applied one after the
-other, yielding a new Graph after each step, in a fashion similar to operators on DataSets:&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;inputGraph&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getUndirected&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;mapEdges&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;CustomEdgeMapper&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;());&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-
-&lt;p&gt;Transformations can be applied on:&lt;/p&gt;
-
-&lt;ol&gt;
-  &lt;li&gt;&lt;strong&gt;Vertices&lt;/strong&gt;: &lt;code&gt;mapVertices&lt;/code&gt;, &lt;code&gt;joinWithVertices&lt;/code&gt;, &lt;code&gt;filterOnVertices&lt;/code&gt;, &lt;code&gt;addVertex&lt;/code&gt;, …&lt;/li&gt;
-  &lt;li&gt;&lt;strong&gt;Edges&lt;/strong&gt;: &lt;code&gt;mapEdges&lt;/code&gt;, &lt;code&gt;filterOnEdges&lt;/code&gt;, &lt;code&gt;removeEdge&lt;/code&gt;, …&lt;/li&gt;
-  &lt;li&gt;&lt;strong&gt;Triplets&lt;/strong&gt; (source vertex, target vertex, edge): &lt;code&gt;getTriplets&lt;/code&gt;&lt;/li&gt;
-&lt;/ol&gt;
-
-&lt;h4 id=&quot;neighborhood-aggregations&quot;&gt;Neighborhood Aggregations&lt;/h4&gt;
-
-&lt;p&gt;Neighborhood methods allow vertices to perform an aggregation on their first-hop neighborhood.
-This provides a vertex-centric view, where each vertex can access its neighboring edges and neighbor values.&lt;/p&gt;
-
-&lt;p&gt;&lt;code&gt;reduceOnEdges()&lt;/code&gt; provides access to the neighboring edges of a vertex,
-i.e. the edge value and the vertex ID of the edge endpoint. In order to also access the
-neighboring vertices’ values, one should call the &lt;code&gt;reduceOnNeighbors()&lt;/code&gt; function.
-The scope of the neighborhood is defined by the EdgeDirection parameter, which can be IN, OUT or ALL,
-to gather incoming, outgoing or all edges (neighbors) of a vertex.&lt;/p&gt;
-
-&lt;p&gt;The two neighborhood
-functions mentioned above can only be used when the aggregation function is associative and commutative.
-If the function does not comply with these restrictions, or if it is desirable to return zero,
-one or more values per vertex, the more general &lt;code&gt;groupReduceOnEdges()&lt;/code&gt; and
-&lt;code&gt;groupReduceOnNeighbors()&lt;/code&gt; functions must be called.&lt;/p&gt;
-
-&lt;p&gt;Consider the following graph, for instance:&lt;/p&gt;
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/neighborhood.png&quot; style=&quot;width:60%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;p&gt;Assume you want to compute the sum of the values of all incoming neighbors for each vertex.
-We call the &lt;code&gt;reduceOnNeighbors()&lt;/code&gt; aggregation method, since summation is associative and commutative and the neighbors’ values are needed:&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;graph&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;reduceOnNeighbors&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;SumValues&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(),&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;EdgeDirection&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;IN&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-
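-&lt;p&gt;The &lt;code&gt;SumValues&lt;/code&gt; function itself is not shown here; assuming Long vertex values, a minimal sketch of it could look as follows:&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// pairwise reduce the values of all (incoming) neighbors into their sum
-class SumValues implements ReduceNeighborsFunction&amp;lt;Long&amp;gt; {
-    public Long reduceNeighbors(Long firstNeighbor, Long secondNeighbor) {
-        return firstNeighbor + secondNeighbor;
-    }
-}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-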
-&lt;p&gt;The vertex with id 1 is the only node that has no incoming edges. The result is therefore:&lt;/p&gt;
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/reduce-on-neighbors.png&quot; style=&quot;width:90%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;p&gt;&lt;a href=&quot;#top&quot;&gt;Back to top&lt;/a&gt;&lt;/p&gt;
-
-&lt;h2 id=&quot;iterative-graph-processing&quot;&gt;Iterative Graph Processing&lt;/h2&gt;
-
-&lt;p&gt;During the past few years, many different programming models for distributed graph processing
-have been introduced: &lt;a href=&quot;http://delivery.acm.org/10.1145/2490000/2484843/a22-salihoglu.pdf?ip=141.23.53.206&amp;amp;id=2484843&amp;amp;acc=ACTIVE%20SERVICE&amp;amp;key=2BA2C432AB83DA15.0F42380CB8DD3307.4D4702B0C3E38B35.4D4702B0C3E38B35&amp;amp;CFID=706313474&amp;amp;CFTOKEN=60107876&amp;amp;__acm__=1440408958_b131e035942130653e5782409b5c0cde&quot;&gt;vertex-centric&lt;/a&gt;,
-&lt;a href=&quot;http://researcher.ibm.com/researcher/files/us-ytian/giraph++.pdf&quot;&gt;partition-centric&lt;/a&gt;, &lt;a href=&quot;http://www.eecs.harvard.edu/cs261/notes/gonzalez-2012.htm&quot;&gt;gather-apply-scatter&lt;/a&gt;,
-&lt;a href=&quot;http://infoscience.epfl.ch/record/188535/files/paper.pdf&quot;&gt;edge-centric&lt;/a&gt;, &lt;a href=&quot;http://www.vldb.org/pvldb/vol7/p1673-quamar.pdf&quot;&gt;neighborhood-centric&lt;/a&gt;.
-Each one of these models targets a specific class of graph applications and each corresponding
-system implementation optimizes the runtime respectively. In Gelly, we would like to exploit the
-flexible dataflow model and the efficient iterations of Flink, to support multiple distributed
-graph processing models on top of the same system.&lt;/p&gt;
-
-&lt;p&gt;Currently, Gelly has methods for writing vertex-centric programs and provides support for programs
-implemented using the gather-sum(accumulate)-apply model. We are also considering offering support
-for the partition-centric computation model, using Flink’s &lt;code&gt;mapPartition()&lt;/code&gt; operator.
-This model exposes the partition structure to the user and allows exploiting the local graph structure
-inside a partition to avoid unnecessary communication.&lt;/p&gt;
-
-&lt;h4 id=&quot;vertex-centric&quot;&gt;Vertex-centric&lt;/h4&gt;
-
-&lt;p&gt;Gelly wraps Flink’s &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-0.8/spargel_guide.html&quot;&gt;Spargel API&lt;/a&gt; to 
-support the vertex-centric, Pregel-like programming model. Gelly’s &lt;code&gt;runVertexCentricIteration&lt;/code&gt; method accepts two user-defined functions:&lt;/p&gt;
-
-&lt;ol&gt;
-  &lt;li&gt;&lt;strong&gt;MessagingFunction:&lt;/strong&gt; defines what messages a vertex sends out for the next superstep.&lt;/li&gt;
-  &lt;li&gt;&lt;strong&gt;VertexUpdateFunction:&lt;/strong&gt; defines how a vertex will update its value based on the received messages.&lt;/li&gt;
-&lt;/ol&gt;
-
-&lt;p&gt;The method will execute the vertex-centric iteration on the input Graph and return a new Graph, with updated vertex values.&lt;/p&gt;
-
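-&lt;p&gt;An invocation could look like the following sketch, where &lt;code&gt;VertexDistanceUpdater&lt;/code&gt; and &lt;code&gt;MinDistanceMessenger&lt;/code&gt; are hypothetical implementations of the two functions above:&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// run at most 20 supersteps of a vertex-centric iteration
-Graph result = graph.runVertexCentricIteration(
-        new VertexDistanceUpdater(),  // the VertexUpdateFunction
-        new MinDistanceMessenger(),   // the MessagingFunction
-        20);&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-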
-&lt;p&gt;Gelly’s vertex-centric programming model exploits Flink’s efficient delta iteration operators.
-Many iterative graph algorithms expose non-uniform behavior, where some vertices converge to
-their final value faster than others. In such cases, the number of vertices that need to be
-recomputed during an iteration decreases as the algorithm moves towards convergence.&lt;/p&gt;
-
-&lt;p&gt;For example, consider a Single Source Shortest Paths problem on the following graph, where S
-is the source node, i is the iteration counter and the edge values represent distances between nodes:&lt;/p&gt;
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/sssp.png&quot; style=&quot;width:90%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;p&gt;In each iteration, a vertex receives distances from its neighbors and adopts the minimum of
-these distances and its current distance as the new value. Then, it propagates its new value
-to its neighbors. If a vertex does not change value during an iteration, there is no need for
-it to propagate its old distance to its neighbors, as they have already taken it into account.&lt;/p&gt;
-
-&lt;p&gt;Flink’s &lt;code&gt;IterateDelta&lt;/code&gt; operator permits exploitation of this property as well as the
-execution of computations solely on the active parts of the graph. The operator receives two inputs:&lt;/p&gt;
-
-&lt;ol&gt;
-  &lt;li&gt;the &lt;strong&gt;Solution Set&lt;/strong&gt;, which represents the current state of the input and&lt;/li&gt;
-  &lt;li&gt;the &lt;strong&gt;Workset&lt;/strong&gt;, which determines which parts of the graph will be recomputed in the next iteration.&lt;/li&gt;
-&lt;/ol&gt;
-
-&lt;p&gt;In the SSSP example above, the Workset contains the vertices which update their distances.
-The user-defined iterative function is applied on these inputs to produce state updates.
-These updates are efficiently applied on the state, which is kept in memory.&lt;/p&gt;
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/iteration.png&quot; style=&quot;width:60%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;p&gt;Internally, a vertex-centric iteration is a Flink delta iteration, where the initial Solution Set
-is the vertex set of the input graph and the Workset is created by selecting the active vertices,
-i.e. the ones that updated their value in the previous iteration. The messaging and vertex-update
-functions are user-defined functions wrapped inside coGroup operators. In each superstep,
-the active vertices (Workset) are coGrouped with the edges to generate the neighborhoods for
-each vertex. The messaging function is then applied on each neighborhood. Next, the result of the
-messaging function is coGrouped with the current vertex values (Solution Set) and the user-defined
-vertex-update function is applied on the result. The output of this coGroup operator is finally
-used to update the Solution Set and create the Workset input for the next iteration.&lt;/p&gt;
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/vertex-centric-plan.png&quot; style=&quot;width:40%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;h4 id=&quot;gather-sum-apply&quot;&gt;Gather-Sum-Apply&lt;/h4&gt;
-
-&lt;p&gt;Gelly supports a variation of the popular Gather-Sum-Apply-Scatter computation model,
-introduced by PowerGraph. In GSA, a vertex pulls information from its neighbors, as opposed to the
-vertex-centric approach, where updates are pushed from the incoming neighbors.
-The &lt;code&gt;runGatherSumApplyIteration()&lt;/code&gt; method accepts three user-defined functions:&lt;/p&gt;
-
-&lt;ol&gt;
-  &lt;li&gt;&lt;strong&gt;GatherFunction:&lt;/strong&gt; gathers neighboring partial values along in-edges.&lt;/li&gt;
-  &lt;li&gt;&lt;strong&gt;SumFunction:&lt;/strong&gt; accumulates/reduces the values into a single one.&lt;/li&gt;
-  &lt;li&gt;&lt;strong&gt;ApplyFunction:&lt;/strong&gt; uses the result computed in the sum phase to update the current vertex’s value.&lt;/li&gt;
-&lt;/ol&gt;
-
-&lt;p&gt;Similarly to vertex-centric, GSA leverages Flink’s delta iteration operators as, in many cases,
-vertex values do not need to be recomputed during an iteration.&lt;/p&gt;
-
-&lt;p&gt;Let us reconsider the Single Source Shortest Paths algorithm. In each iteration, a vertex:&lt;/p&gt;
-
-&lt;ol&gt;
-  &lt;li&gt;&lt;strong&gt;Gather&lt;/strong&gt; retrieves the distances of its neighbors, each summed up with the corresponding edge value;&lt;/li&gt;
-  &lt;li&gt;&lt;strong&gt;Sum&lt;/strong&gt; compares the newly obtained distances in order to extract the minimum;&lt;/li&gt;
-  &lt;li&gt;&lt;strong&gt;Apply&lt;/strong&gt; finally adopts the minimum distance computed in the sum step,
-provided that it is lower than its current value. If a vertex’s value does not change during
-an iteration, it no longer propagates its distance. (A sketch of these three functions follows the list.)&lt;/li&gt;
-&lt;/ol&gt;
-
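-&lt;p&gt;A minimal sketch of these three functions for SSSP, with illustrative class names and type parameters skipped as in the other snippets:&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// gather: neighbor distance plus the connecting edge value
-class CalculateDistances extends GatherFunction {
-    public Long gather(Neighbor neighbor) {
-        return neighbor.getNeighborValue() + neighbor.getEdgeValue();
-    }
-}
-
-// sum: keep the minimum of the gathered distances
-class ChooseMinDistance extends SumFunction {
-    public Long sum(Long newValue, Long currentValue) {
-        return Math.min(newValue, currentValue);
-    }
-}
-
-// apply: update the vertex only if the new distance is smaller
-class UpdateDistance extends ApplyFunction {
-    public void apply(Long newDistance, Long oldDistance) {
-        if (newDistance &amp;lt; oldDistance) {
-            setResult(newDistance);
-        }
-    }
-}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-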
-&lt;p&gt;Internally, a Gather-Sum-Apply Iteration is a Flink delta iteration where the initial solution
-set is the vertex input set and the workset is created by selecting the active vertices.&lt;/p&gt;
-
-&lt;p&gt;The three functions (gather, sum and apply) are user-defined functions wrapped in map, reduce
-and join operators, respectively. In each superstep, the active vertices are joined with the
-edges in order to create neighborhoods for each vertex. The gather function is then applied on
-the neighborhood values via a map function. Afterwards, the result is grouped by the vertex ID
-and reduced using the sum function. Finally, the outcome of the sum phase is joined with the
-current vertex values (solution set), the values are updated, thus creating a new workset that
-serves as input for the next iteration.&lt;/p&gt;
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/GSA-plan.png&quot; style=&quot;width:40%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;p&gt;&lt;a href=&quot;#top&quot;&gt;Back to top&lt;/a&gt;&lt;/p&gt;
-
-&lt;h2 id=&quot;library-of-graph-algorithms&quot;&gt;Library of Graph Algorithms&lt;/h2&gt;
-
-&lt;p&gt;We are building a library of graph algorithms in Gelly to easily analyze large-scale graphs.
-These algorithms extend the &lt;code&gt;GraphAlgorithm&lt;/code&gt; interface and can simply be executed on
-the input graph by calling the &lt;code&gt;run()&lt;/code&gt; method.&lt;/p&gt;
-
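-&lt;p&gt;Mirroring the Label Propagation call shown later in this post, running a library method then becomes a one-liner (type parameters skipped):&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// run the Label Propagation library method for 30 iterations
-DataSet&amp;lt;Vertex&amp;gt; labeledVertices = graph
-        .run(new LabelPropagation(30))
-        .getVertices();&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-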
-&lt;p&gt;We currently have implementations of the following algorithms:&lt;/p&gt;
-
-&lt;ol&gt;
-  &lt;li&gt;PageRank&lt;/li&gt;
-  &lt;li&gt;Single-Source-Shortest-Paths&lt;/li&gt;
-  &lt;li&gt;Label Propagation&lt;/li&gt;
-  &lt;li&gt;Community Detection (based on &lt;a href=&quot;http://arxiv.org/pdf/0808.2633.pdf&quot;&gt;this paper&lt;/a&gt;)&lt;/li&gt;
-  &lt;li&gt;Connected Components&lt;/li&gt;
-  &lt;li&gt;GSA Connected Components&lt;/li&gt;
-  &lt;li&gt;GSA PageRank&lt;/li&gt;
-  &lt;li&gt;GSA Single-Source-Shortest-Paths&lt;/li&gt;
-&lt;/ol&gt;
-
-&lt;p&gt;Gelly also offers implementations of common graph algorithms through &lt;a href=&quot;https://github.com/apache/flink/tree/master/flink-staging/flink-gelly/src/main/java/org/apache/flink/graph/example&quot;&gt;examples&lt;/a&gt;.
-Among them, one can find graph weighting schemes, like Jaccard Similarity and Euclidean Distance Weighting, 
-as well as computation of common graph metrics.&lt;/p&gt;
-
-&lt;p&gt;&lt;a href=&quot;#top&quot;&gt;Back to top&lt;/a&gt;&lt;/p&gt;
-
-&lt;h2 id=&quot;use-case-music-profiles&quot;&gt;Use-Case: Music Profiles&lt;/h2&gt;
-
-&lt;p&gt;In the following section, we go through a use-case scenario that combines the Flink DataSet API
-with Gelly in order to process users’ music preferences to suggest additions to their playlist.&lt;/p&gt;
-
-&lt;p&gt;First, we read a user’s music profile, which is in the form of user-id, song-id and the number of
-plays of each song. We then filter out the songs the users do not wish to see in their
-playlist. Next, we compute the top songs per user (i.e. the songs a user listened to the most).
-Finally, as a separate use-case on the same data set, we create a user-user similarity graph based
-on the common songs and use this resulting graph to detect communities by calling Gelly’s Label Propagation
-library method.&lt;/p&gt;
-
-&lt;p&gt;For running the example implementation, please use the 0.10-SNAPSHOT version of Flink as a
-dependency. The full example code base can be found &lt;a href=&quot;https://github.com/apache/flink/blob/master/flink-staging/flink-gelly/src/main/java/org/apache/flink/graph/example/MusicProfiles.java&quot;&gt;here&lt;/a&gt;. The public data set used for testing
-can be found &lt;a href=&quot;http://labrosa.ee.columbia.edu/millionsong/tasteprofile&quot;&gt;here&lt;/a&gt;. This data set contains &lt;strong&gt;48,373,586&lt;/strong&gt; real user-id, song-id and
-play-count triplets.&lt;/p&gt;
-
-&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The code snippets in this post try to reduce verbosity by skipping type parameters of generic functions. Please have a look at &lt;a href=&quot;https://github.com/apache/flink/blob/master/flink-staging/flink-gelly/src/main/java/org/apache/flink/graph/example/MusicProfiles.java&quot;&gt;the full example&lt;/a&gt; for the correct and complete code.&lt;/p&gt;
-
-&lt;h4 id=&quot;filtering-out-bad-records&quot;&gt;Filtering out Bad Records&lt;/h4&gt;
-
-&lt;p&gt;After reading the &lt;code&gt;(user-id, song-id, play-count)&lt;/code&gt; triplets from a CSV file and after parsing a
-text file in order to retrieve the list of songs that a user would not want to include in a
-playlist, we use a coGroup function to filter out the mismatches.&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// read the user-song-play triplets.&lt;/span&gt;
-&lt;span class=&quot;n&quot;&gt;DataSet&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple3&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Integer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;triplets&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;
-    &lt;span class=&quot;n&quot;&gt;getUserSongTripletsData&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
-
-&lt;span class=&quot;c1&quot;&gt;// read the mismatches dataset and extract the songIDs&lt;/span&gt;
-&lt;span class=&quot;n&quot;&gt;DataSet&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple3&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Integer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;validTriplets&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;triplets&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;coGroup&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;mismatches&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;).&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;where&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;).&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;equalTo&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;with&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;CoGroupFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-                &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;coGroup&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Iterable&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;triplets&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Iterable&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;invalidSongs&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Collector&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-                        &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(!&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;invalidSongs&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;iterator&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;hasNext&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;())&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-                            &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple3&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;triplet&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;triplets&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;// valid triplet&lt;/span&gt;
-                                &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;collect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;triplet&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
-                            &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
-                        &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
-                    &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
-                &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-
-&lt;p&gt;The coGroup matches each triplet’s song-id (second field) against the song-ids in the
-mismatches list (first field). If the mismatch iterator is empty for a certain triplet, meaning that
-no mismatches were found, the triplet associated with that song is collected.&lt;/p&gt;
-
-&lt;h4 id=&quot;compute-the-top-songs-per-user&quot;&gt;Compute the Top Songs per User&lt;/h4&gt;
-
-&lt;p&gt;As a next step, we would like to see which songs each user played most often. To this end, we
-build a weighted, bipartite user-song graph in which edge source vertices are users, edge target
-vertices are songs and the edge weight represents the number of times the user listened to that
-song.&lt;/p&gt;
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/user-song-graph.png&quot; style=&quot;width:90%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// create a user -&amp;gt; song weighted bipartite graph where the edge weights&lt;/span&gt;
-&lt;span class=&quot;c1&quot;&gt;// correspond to play counts&lt;/span&gt;
-&lt;span class=&quot;n&quot;&gt;Graph&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;NullValue&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Integer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;userSongGraph&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Graph&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;fromTupleDataSet&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;validTriplets&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-
-&lt;p&gt;Consult the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-master/dev/libs/gelly/&quot;&gt;Gelly guide&lt;/a&gt; for guidelines 
-on how to create a graph from a given DataSet of edges or from a collection.&lt;/p&gt;
-
-&lt;p&gt;To retrieve the top songs per user, we call the groupReduceOnEdges function, as it performs an
-aggregation over the first-hop neighborhood, taking just the edges into consideration. We
-iterate through the edge values and collect the target (song) of the maximum-weight edge.&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;c1&quot;&gt;//get the top track (most listened to) for each user&lt;/span&gt;
-&lt;span class=&quot;n&quot;&gt;DataSet&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;usersWithTopTrack&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;userSongGraph&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;groupReduceOnEdges&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;GetTopSongPerUser&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(),&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;EdgeDirection&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;OUT&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
-
-&lt;span class=&quot;kd&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;GetTopSongPerUser&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;implements&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;EdgesFunctionWithVertexValue&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-    &lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;iterateEdges&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Vertex&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;vertex&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Iterable&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Edge&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;edges&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-        &lt;span class=&quot;kt&quot;&gt;int&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;maxPlaycount&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
-        &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;topSong&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
-
-        &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Edge&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;edge&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;edges&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-            &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;edge&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getValue&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;maxPlaycount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-                &lt;span class=&quot;n&quot;&gt;maxPlaycount&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;edge&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getValue&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
-                &lt;span class=&quot;n&quot;&gt;topSong&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;edge&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getTarget&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
-            &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
-        &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;vertex&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getId&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(),&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;topSong&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
-    &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
-&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-
-&lt;h4 id=&quot;creating-a-user-user-similarity-graph&quot;&gt;Creating a User-User Similarity Graph&lt;/h4&gt;
-
-&lt;p&gt;Clustering users based on common interests (in this case, common top songs) could prove to be
-very useful for advertisements or for recommending new musical compilations. In a user-user graph,
-two users who listen to the same song will simply be linked together through an edge as depicted
-in the figure below.&lt;/p&gt;
-
-&lt;center&gt;
-&lt;img src=&quot;/img/blog/user-song-to-user-user.png&quot; style=&quot;width:90%;margin:15px&quot; /&gt;
-&lt;/center&gt;
-
-&lt;p&gt;To form the user-user graph in Flink, we will simply take the edges from the user-song graph
-(left-hand side of the image), group them by song-id, and then add all the users (source vertex ids)
-to an ArrayList.&lt;/p&gt;
-
-&lt;p&gt;We then pair up users who listened to the same song, creating a new edge between each pair to mark their
-common interest (right-hand side of the image).&lt;/p&gt;
-
-&lt;p&gt;Afterwards, we perform a &lt;code&gt;distinct()&lt;/code&gt; operation to avoid creating duplicate edges.
-Now that we have the DataSet of edges of interest, creating a graph is as
-straightforward as a call to the &lt;code&gt;Graph.fromDataSet()&lt;/code&gt; method.&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// create a user-user similarity graph:&lt;/span&gt;
-&lt;span class=&quot;c1&quot;&gt;// two users that listen to the same song are connected&lt;/span&gt;
-&lt;span class=&quot;n&quot;&gt;DataSet&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Edge&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;similarUsers&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;userSongGraph&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getEdges&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
-        &lt;span class=&quot;c1&quot;&gt;// filter out user-song edges that are below the playcount threshold&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;filter&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;FilterFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Edge&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Integer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-            	&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;boolean&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;filter&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Edge&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Integer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;edge&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-                    &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;edge&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getValue&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;playcountThreshold&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
-                &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;})&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;groupBy&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;reduceGroup&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;GroupReduceFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-                &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;reduce&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Iterable&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Edge&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;edges&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Collector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Edge&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-                    &lt;span class=&quot;n&quot;&gt;List&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;users&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;ArrayList&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
-                    &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Edge&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;edge&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;edges&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-                        &lt;span class=&quot;n&quot;&gt;users&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;add&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;edge&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getSource&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;());&lt;/span&gt;
-                        &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;kt&quot;&gt;int&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;users&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;size&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;++)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-                            &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;kt&quot;&gt;int&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;j&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;+&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;j&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;users&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;size&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;j&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;++)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;// pair user i with every later user in the list&lt;/span&gt;
-                                &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;collect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;Edge&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;users&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;i&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;),&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;users&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;j&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)));&lt;/span&gt;
-                            &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
-                        &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
-                    &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
-                &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;})&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;distinct&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
-
-&lt;span class=&quot;n&quot;&gt;Graph&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;similarUsersGraph&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Graph&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;fromDataSet&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;similarUsers&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;).&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getUndirected&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-
-&lt;p&gt;Having created a user-user graph, it makes sense to detect the various communities
-formed. To do so, we first initialize each vertex with a numeric label using the
-&lt;code&gt;joinWithVertices()&lt;/code&gt; function, which takes a DataSet of Tuple2 as a parameter, joins
-the id of each vertex with the first element of the tuple and then applies a map function.
-Finally, we call the &lt;code&gt;run()&lt;/code&gt; method with the LabelPropagation library method passed
-as a parameter. In the end, the vertices will be updated to contain the most frequent label
-among their neighbors.&lt;/p&gt;
-
-&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// detect user communities using label propagation&lt;/span&gt;
-&lt;span class=&quot;c1&quot;&gt;// initialize each vertex with a unique numeric label&lt;/span&gt;
-&lt;span class=&quot;n&quot;&gt;DataSet&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;idsWithInitialLabels&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;DataSetUtils&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;zipWithUniqueId&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;similarUsersGraph&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getVertexIds&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;())&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;map&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;MapFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-                &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
-                &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;map&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;throws&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Exception&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-                    &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;f1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;f0&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
-                &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;});&lt;/span&gt;
-
-&lt;span class=&quot;c1&quot;&gt;// update the vertex values and run the label propagation algorithm&lt;/span&gt;
-&lt;span class=&quot;n&quot;&gt;DataSet&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Vertex&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;verticesWithCommunity&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;similarUsersGraph&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;joinWithVertices&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;idsWithInitialLabels&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;MapFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-                &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;map&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;idWithLabel&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
-                    &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;idWithLabel&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;f1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
-                &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;})&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;run&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;LabelPropagation&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;numIterations&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt;
-        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getVertices&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
-
-&lt;p&gt;&lt;a href=&quot;#top&quot;&gt;Back to top&lt;/a&gt;&lt;/p&gt;
-
-&lt;h2 id=&quot;ongoing-and-future-work&quot;&gt;Ongoing and Future Work&lt;/h2&gt;
-
-&lt;p&gt;Currently, Gelly matches the basic functionalities provided by most state-of-the-art graph
-processing systems. Our vision is to turn Gelly into more than “yet another library for running
-PageRank-like algorithms” by supporting generic iterations, implementing graph partitioning,
-providing bipartite graph support and offering numerous other features.&lt;/p&gt;
-
-&lt;p&gt;We are also enriching Flink Gelly with a set of operators suitable for highly skewed graphs
-as well as a Graph API built on Flink Streaming.&lt;/p&gt;
-
-&lt;p&gt;In the near future, we would like to see how Gelly can be integrated with graph visualization
-tools, graph database systems and sampling techniques.&lt;/p&gt;
-
-&lt;p&gt;Curious? Read more about our plans for Gelly in the &lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/Flink+Gelly&quot;&gt;roadmap&lt;/a&gt;.&lt;/p&gt;
-
-&lt;p&gt;&lt;a href=&quot;#top&quot;&gt;Back to top&lt;/a&gt;&lt;/p&gt;
-
-&lt;h2 id=&quot;links&quot;&gt;Links&lt;/h2&gt;
-&lt;p&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-master/dev/libs/gelly/&quot;&gt;Gelly Documentation&lt;/a&gt;&lt;/p&gt;
-</description>
-<pubDate>Mon, 24 Aug 2015 00:00:00 +0200</pubDate>
-<link>https://flink.apache.org/news/2015/08/24/introducing-flink-gelly.html</link>
-<guid isPermaLink="true">/news/2015/08/24/introducing-flink-gelly.html</guid>
-</item>
-
 </channel>
 </rss>
diff --git a/content/blog/index.html b/content/blog/index.html
index fa55ee8..c75001f 100644
--- a/content/blog/index.html
+++ b/content/blog/index.html
@@ -196,6 +196,19 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2020/07/06/release-1.11.0.html">Apache Flink 1.11.0 Release Announcement</a></h2>
+
+      <p>06 Jul 2020
+       Marta Paes (<a href="https://twitter.com/morsapaes">@morsapaes</a>)</p>
+
+      <p>The Apache Flink community is proud to announce the release of Flink 1.11.0! More than 200 contributors worked on over 1.3k issues to bring significant improvements to usability as well as new features to Flink users across the whole API stack. We're particularly excited about unaligned checkpoints to cope with high backpressure scenarios, a new source API that simplifies and unifies the implementation of (custom) sources, and support for Change Data Capture (CDC) and other common use cases in the Table API/SQL. Read on for all major new features and improvements, important changes to be aware of and what to expect moving forward!</p>
+
+      <p><a href="/news/2020/07/06/release-1.11.0.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/ecosystem/2020/06/23/flink-on-zeppelin-part2.html">Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</a></h2>
 
       <p>23 Jun 2020
@@ -325,19 +338,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2020/04/15/flink-serialization-tuning-vol-1.html">Flink Serialization Tuning Vol. 1: Choosing your Serializer — if you can</a></h2>
-
-      <p>15 Apr 2020
-       Nico Kruber </p>
-
-      <p>Serialization is a crucial element of your Flink job. This article is the first in a series of posts that will highlight Flink’s serialization stack, and looks at the different ways Flink can serialize your data types.</p>
-
-      <p><a href="/news/2020/04/15/flink-serialization-tuning-vol-1.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -370,6 +370,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/07/06/release-1.11.0.html">Apache Flink 1.11.0 Release Announcement</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/ecosystem/2020/06/23/flink-on-zeppelin-part2.html">Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</a></li>
 
       
diff --git a/content/blog/page10/index.html b/content/blog/page10/index.html
index 5bc32c9..9da2914 100644
--- a/content/blog/page10/index.html
+++ b/content/blog/page10/index.html
@@ -196,6 +196,21 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2016/03/08/release-1.0.0.html">Announcing Apache Flink 1.0.0</a></h2>
+
+      <p>08 Mar 2016
+      </p>
+
+      <p><p>The Apache Flink community is pleased to announce the availability of the 1.0.0 release. The community put significant effort into improving and extending Apache Flink since the last release, focusing on improving the experience of writing and executing data stream processing pipelines in production.</p>
+
+</p>
+
+      <p><a href="/news/2016/03/08/release-1.0.0.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2016/02/11/release-0.10.2.html">Flink 0.10.2 Released</a></h2>
 
       <p>11 Feb 2016
@@ -328,24 +343,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2015/08/24/introducing-flink-gelly.html">Introducing Gelly: Graph Processing with Apache Flink</a></h2>
-
-      <p>24 Aug 2015
-      </p>
-
-      <p><p>This blog post introduces <strong>Gelly</strong>, Apache Flink’s <em>graph-processing API and library</em>. Flink’s native support
-for iterations makes it a suitable platform for large-scale graph analytics.
-By leveraging delta iterations, Gelly is able to map various graph processing models such as
-vertex-centric or gather-sum-apply to Flink dataflows.</p>
-
-</p>
-
-      <p><a href="/news/2015/08/24/introducing-flink-gelly.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -378,6 +375,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/07/06/release-1.11.0.html">Apache Flink 1.11.0 Release Announcement</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/ecosystem/2020/06/23/flink-on-zeppelin-part2.html">Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</a></li>
 
       
diff --git a/content/blog/page11/index.html b/content/blog/page11/index.html
index 2b4dfdf..ce6d1cc 100644
--- a/content/blog/page11/index.html
+++ b/content/blog/page11/index.html
@@ -196,6 +196,24 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2015/08/24/introducing-flink-gelly.html">Introducing Gelly: Graph Processing with Apache Flink</a></h2>
+
+      <p>24 Aug 2015
+      </p>
+
+      <p><p>This blog post introduces <strong>Gelly</strong>, Apache Flink’s <em>graph-processing API and library</em>. Flink’s native support
+for iterations makes it a suitable platform for large-scale graph analytics.
+By leveraging delta iterations, Gelly can map various graph-processing models, such as
+vertex-centric or gather-sum-apply, to Flink dataflows.</p>
+
+</p>
+
+      <p><a href="/news/2015/08/24/introducing-flink-gelly.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2015/06/24/announcing-apache-flink-0.9.0-release.html">Announcing Apache Flink 0.9.0</a></h2>
 
       <p>24 Jun 2015
@@ -337,21 +355,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2015/01/21/release-0.8.html">Apache Flink 0.8.0 available</a></h2>
-
-      <p>21 Jan 2015
-      </p>
-
-      <p><p>We are pleased to announce the availability of Flink 0.8.0. This release includes new user-facing features as well as performance and bug fixes, extends the support for filesystems and introduces the Scala API and flexible windowing semantics for Flink Streaming. A total of 33 people have contributed to this release, a big thanks to all of them!</p>
-
-</p>
-
-      <p><a href="/news/2015/01/21/release-0.8.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -384,6 +387,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/07/06/release-1.11.0.html">Apache Flink 1.11.0 Release Announcement</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/ecosystem/2020/06/23/flink-on-zeppelin-part2.html">Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</a></li>
 
       
diff --git a/content/blog/page12/index.html b/content/blog/page12/index.html
index 8d3a1d2..27f2425 100644
--- a/content/blog/page12/index.html
+++ b/content/blog/page12/index.html
@@ -196,6 +196,21 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2015/01/21/release-0.8.html">Apache Flink 0.8.0 available</a></h2>
+
+      <p>21 Jan 2015
+      </p>
+
+      <p><p>We are pleased to announce the availability of Flink 0.8.0. This release includes new user-facing features as well as performance and bug fixes, extends support for filesystems, and introduces the Scala API and flexible windowing semantics for Flink Streaming. A total of 33 people contributed to this release; a big thanks to all of them!</p>
+
+</p>
+
+      <p><a href="/news/2015/01/21/release-0.8.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2015/01/06/december-in-flink.html">December 2014 in the Flink community</a></h2>
 
       <p>06 Jan 2015
@@ -320,6 +335,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/07/06/release-1.11.0.html">Apache Flink 1.11.0 Release Announcement</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/ecosystem/2020/06/23/flink-on-zeppelin-part2.html">Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</a></li>
 
       
diff --git a/content/blog/page2/index.html b/content/blog/page2/index.html
index bad923d..f634033 100644
--- a/content/blog/page2/index.html
+++ b/content/blog/page2/index.html
@@ -196,6 +196,19 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2020/04/15/flink-serialization-tuning-vol-1.html">Flink Serialization Tuning Vol. 1: Choosing your Serializer — if you can</a></h2>
+
+      <p>15 Apr 2020
+       Nico Kruber </p>
+
+      <p>Serialization is a crucial element of your Flink job. This article is the first in a series of posts highlighting Flink’s serialization stack, and it looks at the different ways Flink can serialize your data types.</p>
+
+      <p><a href="/news/2020/04/15/flink-serialization-tuning-vol-1.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/2020/04/09/pyflink-udf-support-flink.html">PyFlink: Introducing Python Support for UDFs in Flink's Table API</a></h2>
 
       <p>09 Apr 2020
@@ -319,21 +332,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2020/01/30/release-1.9.2.html">Apache Flink 1.9.2 Released</a></h2>
-
-      <p>30 Jan 2020
-       Hequn Cheng (<a href="https://twitter.com/HequnC">@HequnC</a>)</p>
-
-      <p><p>The Apache Flink community released the second bugfix version of the Apache Flink 1.9 series.</p>
-
-</p>
-
-      <p><a href="/news/2020/01/30/release-1.9.2.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -366,6 +364,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/07/06/release-1.11.0.html">Apache Flink 1.11.0 Release Announcement</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/ecosystem/2020/06/23/flink-on-zeppelin-part2.html">Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</a></li>
 
       
diff --git a/content/blog/page3/index.html b/content/blog/page3/index.html
index 7de6587..1a1760b 100644
--- a/content/blog/page3/index.html
+++ b/content/blog/page3/index.html
@@ -196,6 +196,21 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2020/01/30/release-1.9.2.html">Apache Flink 1.9.2 Released</a></h2>
+
+      <p>30 Jan 2020
+       Hequn Cheng (<a href="https://twitter.com/HequnC">@HequnC</a>)</p>
+
+      <p><p>The Apache Flink community released the second bugfix version of the Apache Flink 1.9 series.</p>
+
+</p>
+
+      <p><a href="/news/2020/01/30/release-1.9.2.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2020/01/29/state-unlocked-interacting-with-state-in-apache-flink.html">State Unlocked: Interacting with State in Apache Flink</a></h2>
 
       <p>29 Jan 2020
@@ -318,22 +333,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2019/08/22/release-1.9.0.html">Apache Flink 1.9.0 Release Announcement</a></h2>
-
-      <p>22 Aug 2019
-      </p>
-
-      <p><p>The Apache Flink community is proud to announce the release of Apache Flink
-1.9.0.</p>
-
-</p>
-
-      <p><a href="/news/2019/08/22/release-1.9.0.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -366,6 +365,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/07/06/release-1.11.0.html">Apache Flink 1.11.0 Release Announcement</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/ecosystem/2020/06/23/flink-on-zeppelin-part2.html">Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</a></li>
 
       
diff --git a/content/blog/page4/index.html b/content/blog/page4/index.html
index 6d53835..f3b4b2f 100644
--- a/content/blog/page4/index.html
+++ b/content/blog/page4/index.html
@@ -196,6 +196,22 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2019/08/22/release-1.9.0.html">Apache Flink 1.9.0 Release Announcement</a></h2>
+
+      <p>22 Aug 2019
+      </p>
+
+      <p><p>The Apache Flink community is proud to announce the release of Apache Flink
+1.9.0.</p>
+
+</p>
+
+      <p><a href="/news/2019/08/22/release-1.9.0.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/2019/07/23/flink-network-stack-2.html">Flink Network Stack Vol. 2: Monitoring, Metrics, and that Backpressure Thing</a></h2>
 
       <p>23 Jul 2019
@@ -322,19 +338,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/features/2019/03/11/prometheus-monitoring.html">Flink and Prometheus: Cloud-native monitoring of streaming applications</a></h2>
-
-      <p>11 Mar 2019
-       Maximilian Bode, TNG Technology Consulting (<a href="https://twitter.com/mxpbode">@mxpbode</a>)</p>
-
-      <p>This blog post describes how developers can leverage Apache Flink's built-in metrics system together with Prometheus to observe and monitor streaming applications in an effective way.</p>
-
-      <p><a href="/features/2019/03/11/prometheus-monitoring.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -367,6 +370,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/07/06/release-1.11.0.html">Apache Flink 1.11.0 Release Announcement</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/ecosystem/2020/06/23/flink-on-zeppelin-part2.html">Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</a></li>
 
       
diff --git a/content/blog/page5/index.html b/content/blog/page5/index.html
index c970070..85fc481 100644
--- a/content/blog/page5/index.html
+++ b/content/blog/page5/index.html
@@ -196,6 +196,19 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/features/2019/03/11/prometheus-monitoring.html">Flink and Prometheus: Cloud-native monitoring of streaming applications</a></h2>
+
+      <p>11 Mar 2019
+       Maximilian Bode, TNG Technology Consulting (<a href="https://twitter.com/mxpbode">@mxpbode</a>)</p>
+
+      <p>This blog post describes how developers can leverage Apache Flink's built-in metrics system together with Prometheus to monitor streaming applications effectively.</p>
+
+      <p><a href="/features/2019/03/11/prometheus-monitoring.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2019/03/06/ffsf-preview.html">What to expect from Flink Forward San Francisco 2019</a></h2>
 
       <p>06 Mar 2019
@@ -326,21 +339,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2018/10/29/release-1.6.2.html">Apache Flink 1.6.2 Released</a></h2>
-
-      <p>29 Oct 2018
-      </p>
-
-      <p><p>The Apache Flink community released the second bugfix version of the Apache Flink 1.6 series.</p>
-
-</p>
-
-      <p><a href="/news/2018/10/29/release-1.6.2.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -373,6 +371,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/07/06/release-1.11.0.html">Apache Flink 1.11.0 Release Announcement</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/ecosystem/2020/06/23/flink-on-zeppelin-part2.html">Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</a></li>
 
       
diff --git a/content/blog/page6/index.html b/content/blog/page6/index.html
index 9b8bdd5..8d3489c 100644
--- a/content/blog/page6/index.html
+++ b/content/blog/page6/index.html
@@ -196,6 +196,21 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2018/10/29/release-1.6.2.html">Apache Flink 1.6.2 Released</a></h2>
+
+      <p>29 Oct 2018
+      </p>
+
+      <p><p>The Apache Flink community released the second bugfix version of the Apache Flink 1.6 series.</p>
+
+</p>
+
+      <p><a href="/news/2018/10/29/release-1.6.2.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2018/10/29/release-1.5.5.html">Apache Flink 1.5.5 Released</a></h2>
 
       <p>29 Oct 2018
@@ -330,21 +345,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2018/03/08/release-1.4.2.html">Apache Flink 1.4.2 Released</a></h2>
-
-      <p>08 Mar 2018
-      </p>
-
-      <p><p>The Apache Flink community released the second bugfix version of the Apache Flink 1.4 series.</p>
-
-</p>
-
-      <p><a href="/news/2018/03/08/release-1.4.2.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -377,6 +377,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/07/06/release-1.11.0.html">Apache Flink 1.11.0 Release Announcement</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/ecosystem/2020/06/23/flink-on-zeppelin-part2.html">Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</a></li>
 
       
diff --git a/content/blog/page7/index.html b/content/blog/page7/index.html
index 7aa5f1d..0c80454 100644
--- a/content/blog/page7/index.html
+++ b/content/blog/page7/index.html
@@ -196,6 +196,21 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2018/03/08/release-1.4.2.html">Apache Flink 1.4.2 Released</a></h2>
+
+      <p>08 Mar 2018
+      </p>
+
+      <p><p>The Apache Flink community released the second bugfix version of the Apache Flink 1.4 series.</p>
+
+</p>
+
+      <p><a href="/news/2018/03/08/release-1.4.2.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/features/2018/03/01/end-to-end-exactly-once-apache-flink.html">An Overview of End-to-End Exactly-Once Processing in Apache Flink (with Apache Kafka, too!)</a></h2>
 
       <p>01 Mar 2018
@@ -327,21 +342,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2017/06/01/release-1.3.0.html">Apache Flink 1.3.0 Release Announcement</a></h2>
-
-      <p>01 Jun 2017 by Robert Metzger (<a href="https://twitter.com/">@rmetzger_</a>)
-      </p>
-
-      <p><p>The Apache Flink community is pleased to announce the 1.3.0 release. Over the past 4 months, the Flink community has been working hard to resolve more than 680 issues. See the <a href="/blog/release_1.3.0-changelog.html">complete changelog</a> for more detail.</p>
-
-</p>
-
-      <p><a href="/news/2017/06/01/release-1.3.0.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -374,6 +374,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/07/06/release-1.11.0.html">Apache Flink 1.11.0 Release Announcement</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/ecosystem/2020/06/23/flink-on-zeppelin-part2.html">Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</a></li>
 
       
diff --git a/content/blog/page8/index.html b/content/blog/page8/index.html
index 0b5facc..c5e623e 100644
--- a/content/blog/page8/index.html
+++ b/content/blog/page8/index.html
@@ -196,6 +196,21 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2017/06/01/release-1.3.0.html">Apache Flink 1.3.0 Release Announcement</a></h2>
+
+      <p>01 Jun 2017 by Robert Metzger (<a href="https://twitter.com/rmetzger_">@rmetzger_</a>)
+      </p>
+
+      <p><p>The Apache Flink community is pleased to announce the 1.3.0 release. Over the past 4 months, the Flink community has been working hard to resolve more than 680 issues. See the <a href="/blog/release_1.3.0-changelog.html">complete changelog</a> for more detail.</p>
+
+</p>
+
+      <p><a href="/news/2017/06/01/release-1.3.0.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2017/05/16/official-docker-image.html">Introducing Docker Images for Apache Flink</a></h2>
 
       <p>16 May 2017 by Patrick Lucas (Data Artisans) and Ismaël Mejía (Talend) (<a href="https://twitter.com/">@iemejia</a>)
@@ -323,21 +338,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2016/09/05/release-1.1.2.html">Apache Flink 1.1.2 Released</a></h2>
-
-      <p>05 Sep 2016
-      </p>
-
-      <p><p>The Apache Flink community released another bugfix version of the Apache Flink 1.1. series.</p>
-
-</p>
-
-      <p><a href="/news/2016/09/05/release-1.1.2.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -370,6 +370,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/07/06/release-1.11.0.html">Apache Flink 1.11.0 Release Announcement</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/ecosystem/2020/06/23/flink-on-zeppelin-part2.html">Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</a></li>
 
       
diff --git a/content/blog/page9/index.html b/content/blog/page9/index.html
index c8497bc..406ccf6 100644
--- a/content/blog/page9/index.html
+++ b/content/blog/page9/index.html
@@ -196,6 +196,21 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2016/09/05/release-1.1.2.html">Apache Flink 1.1.2 Released</a></h2>
+
+      <p>05 Sep 2016
+      </p>
+
+      <p><p>The Apache Flink community released another bugfix version of the Apache Flink 1.1 series.</p>
+
+</p>
+
+      <p><a href="/news/2016/09/05/release-1.1.2.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2016/08/24/ff16-keynotes-panels.html">Flink Forward 2016: Announcing Schedule, Keynotes, and Panel Discussion</a></h2>
 
       <p>24 Aug 2016
@@ -327,21 +342,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2016/03/08/release-1.0.0.html">Announcing Apache Flink 1.0.0</a></h2>
-
-      <p>08 Mar 2016
-      </p>
-
-      <p><p>The Apache Flink community is pleased to announce the availability of the 1.0.0 release. The community put significant effort into improving and extending Apache Flink since the last release, focusing on improving the experience of writing and executing data stream processing pipelines in production.</p>
-
-</p>
-
-      <p><a href="/news/2016/03/08/release-1.0.0.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -374,6 +374,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2020/07/06/release-1.11.0.html">Apache Flink 1.11.0 Release Announcement</a></li>
+
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/ecosystem/2020/06/23/flink-on-zeppelin-part2.html">Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</a></li>
 
       
diff --git a/content/index.html b/content/index.html
index 9c9f7dc..e5086a1 100644
--- a/content/index.html
+++ b/content/index.html
@@ -568,6 +568,9 @@
 
   <dl>
       
+        <dt> <a href="/news/2020/07/06/release-1.11.0.html">Apache Flink 1.11.0 Release Announcement</a></dt>
+        <dd>The Apache Flink community is proud to announce the release of Flink 1.11.0! More than 200 contributors worked on over 1.3k issues to bring significant improvements to usability as well as new features to Flink users across the whole API stack. We're particularly excited about unaligned checkpoints to cope with high backpressure scenarios, a new source API that simplifies and unifies the implementation of (custom) sources, and support for Change Data Capture (CDC) and other common use cases in the Table API/SQL. Read on for all major new features and improvements, important changes to be aware of and what to expect moving forward!</dd>
+      
         <dt> <a href="/ecosystem/2020/06/23/flink-on-zeppelin-part2.html">Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</a></dt>
         <dd><p>In a previous post, we introduced the basics of Flink on Zeppelin and how to do Streaming ETL. In this second part of the “Flink on Zeppelin” series of posts, I will share how to 
 perform streaming data visualization via Flink on Zeppelin and how to use Apache Flink UDFs in Zeppelin.</p>
@@ -588,11 +591,6 @@
         <dd><p>The Apache Flink community is happy to announce the release of Stateful Functions (StateFun) 2.1.0! This release introduces new features around state expiration and performance improvements for co-located deployments, as well as other important changes that improve the stability and testability of the project. As the community around StateFun grows, the release cycle will follow this pattern of smaller and more frequent releases to incorporate user feedback and allow for faster iteration.</p>
 
 </dd>
-      
-        <dt> <a href="/news/2020/05/12/release-1.10.1.html">Apache Flink 1.10.1 Released</a></dt>
-        <dd><p>The Apache Flink community released the first bugfix version of the Apache Flink 1.10 series.</p>
-
-</dd>
     
   </dl>
 
diff --git a/content/zh/index.html b/content/zh/index.html
index d44e4bc..77c874b 100644
--- a/content/zh/index.html
+++ b/content/zh/index.html
@@ -565,6 +565,9 @@
 
   <dl>
       
+        <dt> <a href="/news/2020/07/06/release-1.11.0.html">Apache Flink 1.11.0 Release Announcement</a></dt>
+        <dd>The Apache Flink community is proud to announce the release of Flink 1.11.0! More than 200 contributors worked on over 1.3k issues to bring significant improvements to usability as well as new features to Flink users across the whole API stack. We're particularly excited about unaligned checkpoints to cope with high backpressure scenarios, a new source API that simplifies and unifies the implementation of (custom) sources, and support for Change Data Capture (CDC) and other common use cases in the Table API/SQL. Read on for all major new features and improvements, important changes to be aware of and what to expect moving forward!</dd>
+      
         <dt> <a href="/ecosystem/2020/06/23/flink-on-zeppelin-part2.html">Flink on Zeppelin Notebooks for Interactive Data Analysis - Part 2</a></dt>
         <dd><p>In a previous post, we introduced the basics of Flink on Zeppelin and how to do Streaming ETL. In this second part of the “Flink on Zeppelin” series of posts, I will share how to 
 perform streaming data visualization via Flink on Zeppelin and how to use Apache Flink UDFs in Zeppelin.</p>
@@ -585,11 +588,6 @@
         <dd><p>The Apache Flink community is happy to announce the release of Stateful Functions (StateFun) 2.1.0! This release introduces new features around state expiration and performance improvements for co-located deployments, as well as other important changes that improve the stability and testability of the project. As the community around StateFun grows, the release cycle will follow this pattern of smaller and more frequent releases to incorporate user feedback and allow for faster iteration.</p>
 
 </dd>
-      
-        <dt> <a href="/news/2020/05/12/release-1.10.1.html">Apache Flink 1.10.1 Released</a></dt>
-        <dd><p>The Apache Flink community released the first bugfix version of the Apache Flink 1.10 series.</p>
-
-</dd>
     
   </dl>