Merge pull request #1920 from apache/3.0.0-release

ShardingSphere ElasticJob 3.0.0 released
diff --git a/docs/content/faq/_index.cn.md b/docs/content/faq/_index.cn.md
index ce9b54e..9561e9c 100644
--- a/docs/content/faq/_index.cn.md
+++ b/docs/content/faq/_index.cn.md
@@ -91,7 +91,7 @@
 ## 11. Why is there a task scheduling delay on the first startup?
 
 Answer:
-ElasticJob obtains the local IP when executing tasks, and the first lookup may be slow. Try setting -Djava.net.preferIPv4Stack=true.
+ElasticJob obtains the local IP when executing tasks, and the first lookup may be slow. Try setting `-Djava.net.preferIPv4Stack=true`.
 
 
 ## 12. In a Windows environment, running ShardingSphere-ElasticJob-UI reports that the main class org.apache.shardingsphere.elasticjob.lite.ui.Bootstrap could not be found or loaded. How can this be solved?
@@ -104,7 +104,9 @@
 
 Open cmd.exe and execute the following command:
 
+```bash
 tar zxvf apache-shardingsphere-elasticjob-${RELEASE.VERSION}-lite-ui-bin.tar.gz
+```
 
 ## 13. The Cloud Scheduler keeps logging "Elastic job: IP:PORT has leadership" and cannot run properly
 
@@ -115,3 +117,11 @@
 For example, if the Mesos libraries are located under `/usr/local/lib`, set `-Djava.library.path=/usr/local/lib` before starting the Cloud Scheduler.
 
 For Mesos-related information, please refer to [Apache Mesos](https://mesos.apache.org/).
+
+## 14. Unable to obtain a suitable IP when there are multiple network interfaces
+
+Answer:
+
+The network interface can be specified via the system property `elasticjob.preferred.network.interface`.
+
+For example, to specify the interface eno1: `-Delasticjob.preferred.network.interface=eno1`.
diff --git a/docs/content/faq/_index.en.md b/docs/content/faq/_index.en.md
index cb2df34..befb735 100644
--- a/docs/content/faq/_index.en.md
+++ b/docs/content/faq/_index.en.md
@@ -92,7 +92,7 @@
 
 Answer:
 
-ElasticJob will obtain the local IP when performing task scheduling, and it may be slow to obtain the IP for the first time. Try to set -Djava.net.preferIPv4Stack=true.
+ElasticJob will obtain the local IP when performing task scheduling, and the first lookup may be slow. Try setting `-Djava.net.preferIPv4Stack=true`.
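+
+As a minimal sketch (the jar name below is only a placeholder for your own application), the flag can be added to the JVM command line that starts the job process:
+
+```bash
+# Tell the JVM to prefer the IPv4 stack when resolving the local address
+java -Djava.net.preferIPv4Stack=true -jar my-elasticjob-app.jar
+```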
 
 
 ## 12. In a Windows environment, running ShardingSphere-ElasticJob-UI reports "could not find or load main class org.apache.shardingsphere.elasticjob.lite.ui.Bootstrap". Why?
@@ -103,9 +103,11 @@
 
 Open cmd.exe and execute the following command:
 
+```bash
 tar zxvf apache-shardingsphere-elasticjob-${RELEASE.VERSION}-lite-ui-bin.tar.gz
+```
 
-## 13. Unable to startup Cloud Scheduler. Continuously output "Elastic job: IP:PORT has leadership"gg
+## 13. Unable to start up the Cloud Scheduler; it continuously outputs "Elastic job: IP:PORT has leadership"
 
 Answer: 
 
@@ -114,3 +116,11 @@
 For instance, if the Mesos native libraries are under `/usr/local/lib`, the property `-Djava.library.path=/usr/local/lib` needs to be set to start the Cloud Scheduler.
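+
+As a minimal sketch (the jar name is a placeholder, and the scheduler is assumed here to be started as a plain Java process), the property goes on the JVM command line:
+
+```bash
+# Let the JVM locate the Mesos native library (e.g. libmesos.so) under /usr/local/lib
+java -Djava.library.path=/usr/local/lib -jar elasticjob-cloud-scheduler.jar
+```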
 
 About Apache Mesos, please refer to [Apache Mesos](https://mesos.apache.org/).
+
+## 14. Unable to obtain a suitable IP in the case of multiple network interfaces
+
+Answer: 
+
+You may specify the network interface via the system property `elasticjob.preferred.network.interface`.
+
+For example, specify the interface eno1: `-Delasticjob.preferred.network.interface=eno1`.
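+
+A minimal sketch of passing the property at startup (again, the jar name is only a placeholder):
+
+```bash
+# Make ElasticJob resolve its local IP from the eno1 interface
+java -Delasticjob.preferred.network.interface=eno1 -jar my-elasticjob-app.jar
+```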
diff --git a/docs/content/features/failover.cn.md b/docs/content/features/failover.cn.md
index 297ed6d..25768c2 100644
--- a/docs/content/features/failover.cn.md
+++ b/docs/content/features/failover.cn.md
@@ -8,8 +8,6 @@
 ElasticJob does not re-shard during the current execution; instead, it waits until before the next scheduling to start the re-sharding process.
 When a server goes down during job execution, failover allows the unfinished task to be compensated and executed on another job node.

-Failover takes effect only when it is enabled together with monitoring of the job runtime status.

 ## Concept

 Failover is a temporary compensation execution mechanism for the currently running job. When the job runs next time, the current job allocation will be adjusted through re-sharding.
diff --git a/docs/content/features/failover.en.md b/docs/content/features/failover.en.md
index dc55d53..c6fc5bf 100644
--- a/docs/content/features/failover.en.md
+++ b/docs/content/features/failover.en.md
@@ -8,8 +8,6 @@
 ElasticJob will not re-shard during this execution, but wait for the next scheduling before starting the re-sharding process.
 When the server is down during job execution, failover allows the unfinished task to be compensated and executed on another job node.
 
-Enable failover and monitorExecution together to take effect.
-
 ## Concept
 
 Failover is a temporary compensation execution mechanism for the currently executed job. When the next job is run, the current job allocation will be adjusted through resharding.
diff --git a/docs/content/user-manual/elasticjob-lite/configuration/_index.cn.md b/docs/content/user-manual/elasticjob-lite/configuration/_index.cn.md
index 5df6223..364a783 100644
--- a/docs/content/user-manual/elasticjob-lite/configuration/_index.cn.md
+++ b/docs/content/user-manual/elasticjob-lite/configuration/_index.cn.md
@@ -76,10 +76,6 @@
 Since it is a transient state, there is no need to monitor it. Users should add their own data-backlog monitoring. Because it cannot be guaranteed that data will not be fetched repeatedly, idempotency should be implemented in the job.
 If each job execution takes a long time and the interval between executions is also long, it is recommended to monitor the job runtime status, which ensures that data will not be fetched repeatedly.
 
-**failover:**
-
-It takes effect only when enabled together with monitorExecution.
-
 **maxTimeDiffSeconds:**
 
 If the time difference exceeds the configured number of seconds, an exception will be thrown when the job starts.
diff --git a/docs/content/user-manual/elasticjob-lite/configuration/_index.en.md b/docs/content/user-manual/elasticjob-lite/configuration/_index.en.md
index 0e91d8c..ae5ba39 100644
--- a/docs/content/user-manual/elasticjob-lite/configuration/_index.en.md
+++ b/docs/content/user-manual/elasticjob-lite/configuration/_index.en.md
@@ -75,10 +75,6 @@
 There is no need to monitor it because it is a transient state. Users can add their own data accumulation monitoring. Since it cannot be guaranteed that data will not be fetched repeatedly, idempotency should be implemented in the job.
 If both the job execution time and the interval are long, it is recommended to monitor the job runtime status, which ensures that data will not be fetched repeatedly.
 
-**failover:**
-
-Enable failover and monitorExecution together to take effect.
-
 **maxTimeDiffSeconds:**
 
 If the time difference exceeds the configured number of seconds, an exception will be thrown when the job starts.
diff --git a/elasticjob-ecosystem/elasticjob-tracing/elasticjob-tracing-rdb/src/main/resources/META-INF/sql/MySQL.properties b/elasticjob-ecosystem/elasticjob-tracing/elasticjob-tracing-rdb/src/main/resources/META-INF/sql/MySQL.properties
index 8be48c5..7b5cc53 100644
--- a/elasticjob-ecosystem/elasticjob-tracing/elasticjob-tracing-rdb/src/main/resources/META-INF/sql/MySQL.properties
+++ b/elasticjob-ecosystem/elasticjob-tracing/elasticjob-tracing-rdb/src/main/resources/META-INF/sql/MySQL.properties
@@ -15,7 +15,23 @@
 # limitations under the License.
 #
 
-JOB_EXECUTION_LOG.TABLE.CREATE=CREATE TABLE IF NOT EXISTS JOB_EXECUTION_LOG (id VARCHAR(40) NOT NULL, job_name VARCHAR(100) NOT NULL, task_id VARCHAR(255) NOT NULL, hostname VARCHAR(255) NOT NULL, ip VARCHAR(50) NOT NULL, sharding_item INT NOT NULL, execution_source VARCHAR(20) NOT NULL, failure_cause VARCHAR(4000) NULL, is_success INT NOT NULL, start_time TIMESTAMP NULL, complete_time TIMESTAMP NULL, PRIMARY KEY (id))
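+# DDL for the job execution trace table; executed only when the table does not already exist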
+JOB_EXECUTION_LOG.TABLE.CREATE=CREATE TABLE \
+    IF NOT EXISTS JOB_EXECUTION_LOG (   \
+        auto_id int NOT NULL AUTO_INCREMENT,    \
+        id VARCHAR (40) NOT NULL,               \
+        job_name VARCHAR (100) NOT NULL,        \
+        task_id VARCHAR (255) NOT NULL,         \
+        hostname VARCHAR (255) NOT NULL,        \
+        ip VARCHAR (50) NOT NULL,               \
+        sharding_item INT NOT NULL,             \
+        execution_source VARCHAR (20) NOT NULL, \
+        failure_cause VARCHAR (4000) NULL,      \
+        is_success INT NOT NULL,                \
+        start_time TIMESTAMP NULL,              \
+        complete_time TIMESTAMP NULL,           \
+        PRIMARY KEY (auto_id),                  \
+        UNIQUE KEY `id` (`id`)                  \
+    ) ENGINE=InnoDB
 
 JOB_EXECUTION_LOG.INSERT=INSERT INTO JOB_EXECUTION_LOG (id, job_name, task_id, hostname, ip, sharding_item, execution_source, is_success, start_time) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
 JOB_EXECUTION_LOG.INSERT_COMPLETE=INSERT INTO JOB_EXECUTION_LOG (id, job_name, task_id, hostname, ip, sharding_item, execution_source, is_success, start_time, complete_time) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
@@ -23,9 +39,25 @@
 JOB_EXECUTION_LOG.UPDATE=UPDATE JOB_EXECUTION_LOG SET is_success = ?, complete_time = ? WHERE id = ?
 JOB_EXECUTION_LOG.UPDATE_FAILURE=UPDATE JOB_EXECUTION_LOG SET is_success = ?, complete_time = ?, failure_cause = ? WHERE id = ?
 
-JOB_STATUS_TRACE_LOG.TABLE.CREATE=CREATE TABLE IF NOT EXISTS JOB_STATUS_TRACE_LOG (id VARCHAR(40) NOT NULL, job_name VARCHAR(100) NOT NULL, original_task_id VARCHAR(255) NOT NULL, task_id VARCHAR(255) NOT NULL, slave_id VARCHAR(50) NOT NULL, source VARCHAR(50) NOT NULL, execution_type VARCHAR(20) NOT NULL, sharding_item VARCHAR(100) NOT NULL, state VARCHAR(20) NOT NULL, message VARCHAR(4000) NULL, creation_time TIMESTAMP NULL, PRIMARY KEY (id))
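+# DDL for the job status trace table; executed only when the table does not already exist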
+JOB_STATUS_TRACE_LOG.TABLE.CREATE=CREATE TABLE \
+    IF NOT EXISTS JOB_STATUS_TRACE_LOG ( \
+        auto_id int NOT NULL AUTO_INCREMENT,     \
+        id VARCHAR (40) NOT NULL,                \
+        job_name VARCHAR (100) NOT NULL,         \
+        original_task_id VARCHAR (255) NOT NULL, \
+        task_id VARCHAR (255) NOT NULL,          \
+        slave_id VARCHAR (50) NOT NULL,          \
+        source VARCHAR (50) NOT NULL,            \
+        execution_type VARCHAR (20) NOT NULL,    \
+        sharding_item VARCHAR (100) NOT NULL,    \
+        state VARCHAR (20) NOT NULL,             \
+        message VARCHAR (4000) NULL,             \
+        creation_time TIMESTAMP NULL,            \
+        PRIMARY KEY (auto_id)                    \
+    ) ENGINE=InnoDB
+
 TASK_ID_STATE_INDEX.INDEX.CREATE=CREATE INDEX TASK_ID_STATE_INDEX ON JOB_STATUS_TRACE_LOG (task_id(128), state)
 
 JOB_STATUS_TRACE_LOG.INSERT=INSERT INTO JOB_STATUS_TRACE_LOG (id, job_name, original_task_id, task_id, slave_id, source, execution_type, sharding_item, state, message, creation_time) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
-JOB_STATUS_TRACE_LOG.SELECT=SELECT * FROM JOB_STATUS_TRACE_LOG WHERE task_id = ?
+JOB_STATUS_TRACE_LOG.SELECT=SELECT id, job_name, original_task_id, task_id, slave_id, source, execution_type, sharding_item, state, message, creation_time FROM JOB_STATUS_TRACE_LOG WHERE task_id = ?
 JOB_STATUS_TRACE_LOG.SELECT_ORIGINAL_TASK_ID=SELECT original_task_id FROM JOB_STATUS_TRACE_LOG WHERE task_id = ? and state= 'TASK_STAGING' LIMIT 1