Apply suggestions from code review

Co-authored-by: morsapaes <marta.paes.moreira@gmail.com>
diff --git a/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md b/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md
index 14450a5..0950b45 100644
--- a/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md
+++ b/_posts/2020-05-03-flink-sql-demo-building-e2e-streaming-application.md
@@ -1,8 +1,7 @@
 ---
 layout: post
 title: "Flink SQL Demo: Building an End-to-End Streaming Application"
-date: 2020-05-03T12:00:00.000Z
-categories: news
+date: 2020-07-28T12:00:00.000Z
 authors:
 - jark:
   name: "Jark Wu"
@@ -12,10 +11,10 @@
 
 Apache Flink 1.11.0 was released with many exciting new features, including many developments in Flink SQL, which is evolving at a fast pace. This article takes a closer look at how to quickly build streaming applications with Flink SQL from a practical point of view.
 
-In the following sections, we describe how to integrate Kafka, MySQL, Elasticsearch, and Kibana with Flink SQL to analyze e-commerce user behavior in real-time. All exercises in this article are performed in the Flink SQL CLI, while the entire process uses standard SQL syntax, without a single line of Java or Scala code or IDE installation. The final result of this demo is shown in the following figure:
+In the following sections, we describe how to integrate Kafka, MySQL, Elasticsearch, and Kibana with Flink SQL to analyze e-commerce user behavior in real-time. All exercises in this blogpost are performed in the Flink SQL CLI, and the entire process uses standard SQL syntax, without a single line of Java/Scala code or IDE installation. The final result of this demo is shown in the following figure:
 
 <center>
-<img src="{{ site.baseurl }}/img/blog/2020-05-03-flink-sql-demo/image1.gif" width="800px" alt="Demo Overview"/>
+<img src="{{ site.baseurl }}/img/blog/2020-05-03-flink-sql-demo/image1.gif" width="650px" alt="Demo Overview"/>
 </center>
 <br>
 
@@ -23,7 +22,7 @@
 
 Prepare a Linux or macOS computer with Docker installed.
 
-## Use Docker Compose to Start Demo Environment
+## Starting the Demo Environment
 
 The components required in this demo are all managed in containers, so we will use `docker-compose` to start them. First, download the `docker-compose.yml` file that defines the demo environment, for example by running the following commands:
 
@@ -34,16 +33,19 @@
 
 The Docker Compose environment consists of the following containers:
 
-- **Flink SQL CLI:** It's used to submit queries and visualize their results.
-- **Flink Cluster:** A Flink master and a Flink worker container to execute queries.
-- **MySQL:** MySQL 5.7 and a `category` table in the database. The `category` table will be joined with data in Kafka to enrich the real-time data.
-- **Kafka:** It is mainly used as a data source. The DataGen component automatically writes data into a Kafka topic.
-- **Zookeeper:** This component is required by Kafka.
-- **Elasticsearch:** It is mainly used as a data sink.
-- **Kibana:** It's used to visualize the data in Elasticsearch.
-- **DataGen:** It is the data generator. After the container is started, user behavior data is automatically generated and sent to the Kafka topic. By default, 2000 data entries are generated each second for about 1.5 hours. You can modify datagen's `speedup` parameter in `docker-compose.yml` to adjust the generation rate (which takes effect after docker compose is restarted).
+- **Flink SQL CLI:** used to submit queries and visualize their results.
+- **Flink Cluster:** a Flink JobManager and a Flink TaskManager container to execute queries.
+- **MySQL:** MySQL 5.7 and a pre-populated `category` table in the database. The `category` table will be joined with data in Kafka to enrich the real-time data.
+- **Kafka:** mainly used as a data source. The DataGen component automatically writes data into a Kafka topic.
+- **Zookeeper:** this component is required by Kafka.
+- **Elasticsearch:** mainly used as a data sink.
+- **Kibana:** used to visualize the data in Elasticsearch.
+- **DataGen:** the data generator. After the container is started, user behavior data is automatically generated and sent to the Kafka topic. By default, 2000 data entries are generated each second for about 1.5 hours. You can modify DataGen's `speedup` parameter in `docker-compose.yml` to adjust the generation rate (which takes effect after Docker Compose is restarted).
 
-**Important:** Before starting the containers, we recommend configuring Docker so that sufficient resources are available and the environment does not become unresponsive. We suggest running Docker at 3-4 GB memory and 3-4 CPU cores.
+<div class="alert alert-danger" markdown="1">
+<span class="label label-danger" style="display: inline-block"> Note </span>
+Before starting the containers, we recommend configuring Docker so that sufficient resources are available and the environment does not become unresponsive. We suggest running Docker with 3-4 GB of memory and 3-4 CPU cores.
+</div>
 
 To start all containers, run the following command in the directory that contains the `docker-compose.yml` file.
 
@@ -71,11 +73,11 @@
 You should see the welcome screen of the CLI client.
 
 <center>
-<img src="{{ site.baseurl }}/img/blog/2020-05-03-flink-sql-demo/image3.png" width="700px" alt="Flink SQL CLI welcome page"/>
+<img src="{{ site.baseurl }}/img/blog/2020-05-03-flink-sql-demo/image3.png" width="500px" alt="Flink SQL CLI welcome page"/>
 </center>
 <br>
 
-# Create a Kafka table using DDL
+## Creating a Kafka table using DDL
 
 The DataGen container continuously writes events into the Kafka `user_behavior` topic. This data contains the user behavior on the day of November 27, 2017 (behaviors include “click”, “like”, “purchase” and “add to shopping cart” events). Each row represents a user behavior event, with the user ID, product ID, product category ID, event type, and timestamp in JSON format. Note that the dataset is from the [Alibaba Cloud Tianchi public dataset](https://tianchi.aliyun.com/dataset/dataDetail?dataId=649).
 
@@ -109,18 +111,18 @@
 );
 ```
 
-The above snippet declares five fields based on the data format. In addition, it uses the computed column syntax and built-in `PROCTIME()` function to declare a virtual column that generates the processing-time attribute. It also uses the WATERMARK syntax to declare the watermark strategy on the `ts` field (tolerate 5-seconds out-of-order). Therefore, the `ts` field becomes an event-time attribute. For more information about time attributes and DDL syntax, see the following official documents:
+The above snippet declares five fields based on the data format. In addition, it uses the computed column syntax and the built-in `PROCTIME()` function to declare a virtual column that generates the processing-time attribute. It also uses the `WATERMARK` syntax to declare a watermark strategy on the `ts` field, tolerating events that are up to 5 seconds out-of-order. As a result, the `ts` field becomes an event-time attribute (a short, illustrative sketch of these two clauses follows the links below). For more information about time attributes and DDL syntax, see the following official documents:
 
-- [For time attributes](https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/streaming/time_attributes.html)
-- [For SQL DDL](https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/sql/create.html#create-table)
+- [Time attributes in Flink’s Table API & SQL](https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/streaming/time_attributes.html)
+- [DDL Syntax in Flink SQL](https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/sql/create.html#create-table)
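+
+As a short, hedged illustration (not the demo's exact DDL), the following minimal statement shows how the two clauses fit into a `CREATE TABLE`. The table name and connector options are placeholders for whatever source you point it at:
+
+```sql
+CREATE TABLE time_attributes_sketch (
+    user_id  BIGINT,
+    behavior STRING,
+    ts       TIMESTAMP(3),
+    -- computed column: declares a processing-time attribute
+    proctime AS PROCTIME(),
+    -- watermark: marks `ts` as an event-time attribute, tolerating 5 seconds of out-of-orderness
+    WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
+) WITH (
+    -- illustrative connector options only
+    'connector' = 'kafka',
+    'topic' = 'user_behavior',
+    'properties.bootstrap.servers' = 'localhost:9092',
+    'scan.startup.mode' = 'earliest-offset',
+    'format' = 'json'
+);
+```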
 
-After creating the `user_behavior` table in the SQL CLI, run `show tables;` and `describe user_behavior;` to see registered tables and table details. Also, run the command `SELECT * FROM user_behavior;` directly in the SQL CLI to preview the data (press `q` to exit).
+After creating the `user_behavior` table in the SQL CLI, run `SHOW TABLES;` and `DESCRIBE user_behavior;` to see registered tables and table details. Also, run the command `SELECT * FROM user_behavior;` directly in the SQL CLI to preview the data (press `q` to exit).
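+
+For convenience, these inspection commands as they would be typed in the SQL CLI:
+
+```sql
+-- list the tables registered in the current catalog and database
+SHOW TABLES;
+-- print the schema of the table we just created
+DESCRIBE user_behavior;
+-- preview the streaming data (press `q` to exit the result view)
+SELECT * FROM user_behavior;
+```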
 
 Next, we discover more about Flink SQL through three real-world scenarios.
 
 # Hourly Trading Volume
 
-## Create Elasticsearch table using DDL
+## Creating an Elasticsearch table using DDL
 
 Let’s create an Elasticsearch result table in the SQL CLI. We need two columns in this case: `hour_of_day` and `buy_cnt` (trading volume).
 
@@ -137,9 +139,9 @@
 
 There is no need to create the `buy_cnt_per_hour` index in Elasticsearch in advance since Elasticsearch will automatically create the index if it does not exist.
 
-## Submit a Query
+## Submitting a Query
 
-The hourly trading volume is the number of "buy" behaviors completed each hour. Therefore, we can use a TUMBLE window function to assign data into hourly windows. Then, we count the number of “buy” records in each window. To implement this, we can filter out the "buy" data first and then apply `COUNT(*)`.
+The hourly trading volume is the number of "buy" behaviors completed each hour. Therefore, we can use a `TUMBLE` window function to assign data into hourly windows. Then, we count the number of "buy" records in each window. To implement this, we can filter for the "buy" records first and then apply `COUNT(*)`.
 
 ```sql
 INSERT INTO buy_cnt_per_hour
@@ -151,7 +153,7 @@
 
 Here, we use the built-in `HOUR` function to extract the value for each hour in the day from a `TIMESTAMP` column. Use `INSERT INTO` to start a Flink SQL job that continuously writes results into the Elasticsearch `buy_cnt_per_hour` index. The Elasticsearch result table can be seen as a materialized view of the query. You can find more information about Flink’s window aggregation in the [Apache Flink documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/sql/queries.html#group-windows).
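+
+As a quick, illustrative sanity check of the `HOUR` function (runnable as-is in the SQL CLI):
+
+```sql
+-- extracts the hour of the day from a timestamp literal; returns 10
+SELECT HOUR(TIMESTAMP '2017-11-27 10:30:00');
+```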
 
-After running the previous query in the Flink SQL CLI, we can observe the submitted task on the Flink Web UI. This task is a streaming task and therefore runs continuously.
+After running the previous query in the Flink SQL CLI, we can observe the submitted task on the [Flink Web UI](http://localhost:8081). This task is a streaming task and therefore runs continuously.
 
 <center>
 <img src="{{ site.baseurl }}/img/blog/2020-05-03-flink-sql-demo/image4.jpg" width="800px" alt="Flink Dashboard"/>
@@ -162,7 +164,10 @@
 
 Access Kibana at [http://localhost:5601](http://localhost:5601). First, configure an index pattern by clicking "Management" in the left-side toolbar and finding "Index Patterns". Next, click "Create Index Pattern" and enter the full index name `buy_cnt_per_hour` to create the index pattern. After creating the index pattern, we can explore data in Kibana.
 
-Note: since we are using the TUMBLE window of one hour here, it might take about four minutes between the time that containers started and until the first row is emitted. Until then the index does not exist and Kibana is unable to find the index.
+<div class="alert alert-info" markdown="1">
+<span class="label label-info" style="display: inline-block"><span class="glyphicon glyphicon-info-sign" aria-hidden="true"></span> Note </span>
+Since we are using a one-hour `TUMBLE` window here, it might take about four minutes from the time the containers start until the first row is emitted. Until then, the index does not exist and Kibana is unable to find it.
+</div>
 
 Click "Discover" in the left-side toolbar. Kibana lists the content of the created index.
 
@@ -187,7 +192,7 @@
 Another interesting visualization is the cumulative number of unique visitors (UV). For example, the number of UV at 10:00 represents the total number of UV from 00:00 to 10:00. Therefore, the curve is monotonically increasing.
 
 Let’s create another Elasticsearch table in the SQL CLI to store the UV results. This table contains 3 columns: date, time and cumulative UVs.
-The `date_str` and `time_str` column are defined as primary key, Elasticsearch sink will use them to calculate the document id and work in upsert mode to update UV values under the document id.
+The `date_str` and `time_str` columns are defined as the primary key; the Elasticsearch sink will use them to calculate the document ID and work in upsert mode to update the UV values under that document ID.
 
 ```sql
 CREATE TABLE cumulative_uv (
@@ -202,7 +207,7 @@
 );
 ```
 
-We can extract the date and time using `DATE_FORMAT` function based on the `ts` field. As the section title describes, we only need to report every 10 minute. So we can use `SUBSTR` and the string concat function `||` to convert the time value into a 10-minute interval time string, such as `12:00`, `12:10`.
+We can extract the date and time using the `DATE_FORMAT` function based on the `ts` field. As the section title describes, we only need to report every 10 minutes. So, we can use `SUBSTR` and the string concat function `||` to convert the time value into a 10-minute interval time string, such as `12:00`, `12:10`.
 Next, we group data by `date_str` and perform a `COUNT DISTINCT` aggregation on `user_id` to get the current cumulative UV for this day. Additionally, we perform a `MAX` aggregation on the `time_str` field to get the current stream time: the maximum event time observed so far.
 As the maximum time is also part of the primary key of the sink, the final result is that we will insert a new point into Elasticsearch every 10 minutes, and the latest point will be continuously updated until the next 10-minute point is generated.
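+
+To make the truncation trick concrete, here is a small, illustrative expression (runnable directly in the SQL CLI):
+
+```sql
+-- 'HH:mm' formats to '12:34', SUBSTR keeps '12:3', and appending '0' yields the bucket '12:30'
+SELECT SUBSTR(DATE_FORMAT(TIMESTAMP '2017-11-27 12:34:56', 'HH:mm'), 1, 4) || '0';
+```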
 
@@ -218,7 +223,7 @@
 GROUP BY date_str;
 ```
 
-After submitting this query, we create a `cumulative_uv` index pattern in Kibana. We then create a "Line" (line graph) on the dashboard, by selecting the `cumulative_uv` index, and drawing the cumulative UV curve according to the configuration on the left side of the following figure before finally, saving the curve.
+After submitting this query, we create a `cumulative_uv` index pattern in Kibana. We then create a "Line" (line graph) on the dashboard by selecting the `cumulative_uv` index and drawing the cumulative UV curve according to the configuration on the left side of the following figure, before finally saving the curve.
 
 <center>
 <img src="{{ site.baseurl }}/img/blog/2020-05-03-flink-sql-demo/image7.jpg" width="800px" alt="Cumulative Unique Visitors every 10-min"/>
@@ -246,7 +251,7 @@
 );
 ```
 
-The underlying of JDBC connectors implements `LookupTableSource` interface, so the created JDBC table `category_dim` can be used as a temporal table (aka. lookup table) out-of-the-box in the data enrichment.
+The underlying JDBC connector implements the `LookupTableSource` interface, so the created JDBC table `category_dim` can be used as a temporal table (i.e. a lookup table) out-of-the-box for data enrichment.
 
 In addition, create an Elasticsearch table to store the category statistics.
 
@@ -261,7 +266,7 @@
 );
 ```
 
-In order to enrich the category names, we use Flink SQL’s temporal table joins to join a dimension table. You can access more information about [temporal joins](https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/streaming/joins.html#join-with-a-temporal-table) in the Flink documentation:
+In order to enrich the category names, we use Flink SQL’s temporal table joins to join a dimension table. You can access more information about [temporal joins](https://ci.apache.org/projects/flink/flink-docs-release-1.11/dev/table/streaming/joins.html#join-with-a-temporal-table) in the Flink documentation.
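+
+As a hedged sketch of the join syntax this refers to (assuming the processing-time attribute from the Kafka DDL is named `proctime`; the join columns match the view defined below), a lookup join against `category_dim` takes this general shape:
+
+```sql
+-- FOR SYSTEM_TIME AS OF pins each probe-side row to the lookup table's state at processing time
+SELECT U.*, C.*
+FROM user_behavior AS U
+LEFT JOIN category_dim FOR SYSTEM_TIME AS OF U.proctime AS C
+ON U.category_id = C.sub_category_id;
+```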
 
 Additionally, we use the `CREATE VIEW` syntax to register the query as a logical view, allowing us to easily reference this query in subsequent queries and simplify nested queries. Please note that creating a logical view does not trigger the execution of the job and the view results are not persisted. Therefore, this statement is lightweight and does not have additional overhead.
 
@@ -272,7 +277,7 @@
 ON U.category_id = C.sub_category_id;
 ```
 
-Finally, we group the dimensional table by category name to count the number of `buy` events and we write the result to Elasticsearch’s `top_category` index.
+Finally, we group the dimensional table by category name to count the number of `buy` events and write the result to Elasticsearch’s `top_category` index.
 
 ```sql
 INSERT INTO top_category
@@ -291,8 +296,10 @@
 
 As illustrated in the diagram, the clothing and shoes categories far exceed the other categories on the e-commerce website.
 
-We have now implemented three practical applications and created charts for them. We now return to the dashboard page and to drag and drop each view and give our dashboard a more formal and intuitive style, as illustrated at the beginning of this article. Of course, Kibana also provides a rich set of graphics and visualization features, and the user_behavior logs contain a lot more interesting information to explore. With the use of Flink SQL you can analyze data in more dimensions, while using Kibana allows you to display more views and observe real-time changes in its charts!
+<hr>
+
+We have now implemented three practical applications and created charts for them. We can now return to the dashboard page and drag-and-drop each view to give our dashboard a more formal and intuitive style, as illustrated at the beginning of the blogpost. Of course, Kibana also provides a rich set of graphics and visualization features, and the `user_behavior` logs contain a lot more interesting information to explore. Using Flink SQL, you can analyze data in more dimensions, while using Kibana allows you to display more views and observe real-time changes in its charts!
 
 # Summary
 
-In the previous sections we described how to use Flink SQL to integrate Kafka, MySQL, Elasticsearch, and Kibana to quickly build a real-time analytics application. The entire process can be completed using standard SQL syntax, without a line of Java or Scala code. We hope that this article provides some clear and practical examples of the convenience and power of Flink SQL, featuring, among others, an easy connection to various external systems, native support for event time and out-of-order handling, dimension table joins and a wide range of built-in functions. We hope you have fun following the examples in the article!
+In the previous sections, we described how to use Flink SQL to integrate Kafka, MySQL, Elasticsearch, and Kibana to quickly build a real-time analytics application. The entire process can be completed using standard SQL syntax, without a line of Java or Scala code. We hope that this article provides some clear and practical examples of the convenience and power of Flink SQL, featuring an easy connection to various external systems, native support for event time and out-of-order handling, dimension table joins and a wide range of built-in functions. We hope you have fun following the examples in this blogpost!