cn: format table & fix typo (#150)

diff --git a/.github/workflows/hugo.yml b/.github/workflows/hugo.yml
index a7892ee..9fd8b99 100644
--- a/.github/workflows/hugo.yml
+++ b/.github/workflows/hugo.yml
@@ -41,13 +41,13 @@
         uses: actions/setup-node@v2.4.0
         with:
           node-version: "16"
-            
+
       - name: Setup Hugo
         uses: peaceiris/actions-hugo@v2
         with:
           hugo-version: '0.102.3'
           extended: true
-          
+
       - uses: actions/cache@v2
         with:
           path: /tmp/hugo_cache
diff --git a/.gitignore b/.gitignore
index b507d68..bf7cbe3 100644
--- a/.gitignore
+++ b/.gitignore
@@ -14,3 +14,4 @@
 # Datasource local storage ignored files
 /dataSources/
 /dataSources.local.xml
+.idea/
diff --git a/content/cn/docs/BUILDING.md b/content/cn/docs/BUILDING.md
deleted file mode 100644
index 0607141..0000000
--- a/content/cn/docs/BUILDING.md
+++ /dev/null
@@ -1,24 +0,0 @@
-HugeDoc Installation
-
-HugeDoc use [GitBook](https://github.com/GitbookIO/gitbook) to convert markdown to static website, 
-and use GitBook with NodeJs to server as web server.
-
-### How To use
-
-Install GitBook is via **NPM**:
-
-```
-$ npm install gitbook-cli -g
-```
-
-Preview and serve your book using:
-
-```
-$ gitbook serve
-```
-
-Or build the static website using:
-
-```
-$ gitbook build
-```
diff --git a/content/cn/docs/_index.md b/content/cn/docs/_index.md
index f4f549e..dbffc40 100755
--- a/content/cn/docs/_index.md
+++ b/content/cn/docs/_index.md
@@ -7,4 +7,5 @@
   main:
     weight: 20
 ---
+
 欢迎阅读HugeGraph文档
diff --git a/content/cn/docs/clients/hugegraph-client.md b/content/cn/docs/clients/hugegraph-client.md
index 83f30cc..fb95070 100644
--- a/content/cn/docs/clients/hugegraph-client.md
+++ b/content/cn/docs/clients/hugegraph-client.md
@@ -57,39 +57,38 @@
 
 - name: 属性的名字,用来区分不同的 PropertyKey,不允许有同名的属性;
 
-interface                | param | must set
------------------------- | ----- | --------
-propertyKey(String name) | name  | y
+| interface                | param | must set |
+|--------------------------|-------|----------|
+| propertyKey(String name) | name  | y        |
 
 - datatype:属性值类型,必须从下表中选择符合具体业务场景的一项显式设置;
 
-interface     | Java Class
-------------- | ----------
-asText()      | String
-asInt()       | Integer
-asDate()      | Date
-asUuid()      | UUID
-asBoolean()   | Boolean
-asByte()      | Byte
-asBlob()      | Byte[]
-asDouble()    | Double
-asFloat()     | Float
-asLong()      | Long
+| interface   | Java Class |
+|-------------|------------|
+| asText()    | String     |
+| asInt()     | Integer    |
+| asDate()    | Date       |
+| asUuid()    | UUID       |
+| asBoolean() | Boolean    |
+| asByte()    | Byte       |
+| asBlob()    | Byte[]     |
+| asDouble()  | Double     |
+| asFloat()   | Float      |
+| asLong()    | Long       |
 
 - cardinality:属性值是单值还是多值,多值的情况下又分为允许有重复值和不允许有重复值,该项默认为 single,如有必要可从下表中选择一项设置;
 
-interface     | cardinality | description
-------------- | ----------- | -------------------------------------------
-valueSingle() | single      | single value
-valueList()   | list        | multi-values that allow duplicate value
-valueSet()    | set         | multi-values that not allow duplicate value
+| interface     | cardinality | description                                     |
+|---------------|-------------|-------------------------------------------------|
+| valueSingle() | single      | single value                                    |
+| valueList()   | list        | multi-values that allow duplicate values        |
+| valueSet()    | set         | multi-values that do not allow duplicate values |
 
 - userdata:用户可以自己添加一些约束或额外信息,然后自行检查传入的属性是否满足约束,或者必要的时候提取出额外信息
 
-interface                          | description
----------------------------------- | ----------------------------------------------
-userdata(String key, Object value) | The same key, the latter will cover the former
-
+| interface                          | description                                                 |
+|------------------------------------|-------------------------------------------------------------|
+| userdata(String key, Object value) | For the same key, the latter value will override the former |
 
 ##### 2.2.2 创建 PropertyKey
 
@@ -136,59 +135,59 @@
 
 - name: 属性的名字,用来区分不同的 VertexLabel,不允许有同名的属性;
 
-interface                | param | must set
------------------------- | ----- | --------
-vertexLabel(String name) | name  | y
+| interface                | param | must set |
+|--------------------------|-------|----------|
+| vertexLabel(String name) | name  | y        |
 
 - idStrategy: 每一个 VertexLabel 都可以选择自己的 Id 策略,目前有三种策略供选择,即 Automatic(自动生成)、Customize(用户传入)和 PrimaryKey(主属性键)。其中 Automatic 使用 Snowflake 算法生成 Id,Customize 需要用户自行传入字符串或数字类型的 Id,PrimaryKey 则允许用户从 VertexLabel 的属性中选择若干主属性作为区分的依据,HugeGraph 内部会根据主属性的值拼接生成 Id。idStrategy 默认使用 Automatic的,但如果用户没有显式设置 idStrategy 又调用了 primaryKeys(...) 方法设置了主属性,则 idStrategy 将自动使用 PrimaryKey;
 
-interface             | idStrategy        | description
---------------------- | ----------------- | ------------------------------------------------------
-useAutomaticId        | AUTOMATIC         | generate id automaticly by Snowflake algorithom
-useCustomizeStringId  | CUSTOMIZE_STRING  | passed id by user, must be string type
-useCustomizeNumberId  | CUSTOMIZE_NUMBER  | passed id by user, must be number type
-usePrimaryKeyId       | PRIMARY_KEY       | choose some important prop as primary key to splice id
+| interface            | idStrategy       | description                                            |
+|----------------------|------------------|--------------------------------------------------------|
+| useAutomaticId       | AUTOMATIC        | generate id automatically by Snowflake algorithm       |
+| useCustomizeStringId | CUSTOMIZE_STRING | id passed by user, must be string type                 |
+| useCustomizeNumberId | CUSTOMIZE_NUMBER | id passed by user, must be number type                 |
+| usePrimaryKeyId      | PRIMARY_KEY      | choose some important prop as primary key to splice id |
 
 - properties: 定义顶点的属性,传入的参数是 PropertyKey 的 name
 
-interface                        | description
--------------------------------- | -------------------------
-properties(String... properties) | allow to pass multi props
+| interface                        | description               |
+|----------------------------------|---------------------------|
+| properties(String... properties) | allow to pass multi props |
 
 - primaryKeys: 当用户选择了 PrimaryKey 的 Id 策略时,需要从 VertexLabel 的属性中选择若干主属性作为区分的依据;
 
-interface                   | description
---------------------------- | -----------------------------------------
-primaryKeys(String... keys) | allow to choose multi prop as primaryKeys
+| interface                   | description                               |
+|-----------------------------|-------------------------------------------|
+| primaryKeys(String... keys) | allow to choose multi prop as primaryKeys |
 
 需要注意的是,Id 策略的选择与 primaryKeys 的设置有一些相互约束,不能随意调用,约束关系见下表:
 
-|                   | useAutomaticId | useCustomizeStringId | useCustomizeNumberId | usePrimaryKeyId
-| ----------------- | -------------- | -------------------- | -------------------- | ---------------
-| unset primaryKeys | AUTOMATIC      | CUSTOMIZE_STRING     | CUSTOMIZE_NUMBER     | ERROR
-| set primaryKeys   | ERROR          | ERROR                | ERROR                | PRIMARY_KEY
+|                   | useAutomaticId | useCustomizeStringId | useCustomizeNumberId | usePrimaryKeyId |
+|-------------------|----------------|----------------------|----------------------|-----------------|
+| unset primaryKeys | AUTOMATIC      | CUSTOMIZE_STRING     | CUSTOMIZE_NUMBER     | ERROR           |
+| set primaryKeys   | ERROR          | ERROR                | ERROR                | PRIMARY_KEY     |
 
 - nullableKeys: 对于通过 properties(...) 方法设置过的属性,默认全都是不可为空的,也就是在创建顶点时该属性必须赋值,这样可能对用户数据提出了太过严格的完整性要求。为避免这样的强约束,用户可以通过
 本方法设置若干属性为可空的,这样添加顶点时该属性可以不赋值。
 
-interface                          | description
----------------------------------- | -------------------------
-nullableKeys(String... properties) | allow to pass multi props
+| interface                          | description               |
+|------------------------------------|---------------------------|
+| nullableKeys(String... properties) | allow to pass multi props |
 
 注意:primaryKeys 和 nullableKeys 不能有交集,因为一个属性不能既作为主属性,又是可空的。
 
 - enableLabelIndex:用户可以指定是否需要为label创建索引。不创建则无法全局搜索指定label的顶点和边,创建则可以全局搜索,做类似于`g.V().hasLabel('person'), g.E().has('label', 'person')`这样的查询,
 但是插入数据时性能上会更加慢,并且需要占用更多的存储空间。此项默认为 true。
 
-interface                          | description
----------------------------------- | -------------------------------
-enableLabelIndex(boolean enable)   | Whether to create a label index
+| interface                        | description                     |
+|----------------------------------|---------------------------------|
+| enableLabelIndex(boolean enable) | Whether to create a label index |
 
 - userdata:用户可以自己添加一些约束或额外信息,然后自行检查传入的属性是否满足约束,或者必要的时候提取出额外信息
 
-interface                          | description
----------------------------------- | ----------------------------------------------
-userdata(String key, Object value) | The same key, the latter will cover the former
+| interface                          | description                                                 |
+|------------------------------------|-------------------------------------------------------------|
+| userdata(String key, Object value) | For the same key, the latter value will override the former |
 
 ##### 2.3.2 创建 VertexLabel
 
@@ -246,37 +245,37 @@
 
 - name: 属性的名字,用来区分不同的 EdgeLabel,不允许有同名的属性;
 
-interface              | param | must set
----------------------- | ----- | --------
-edgeLabel(String name) | name  | y
+| interface              | param | must set |
+|------------------------|-------|----------|
+| edgeLabel(String name) | name  | y        |
 
 - sourceLabel: 边连接的源顶点类型名,只允许设置一个;
 
 - targetLabel: 边连接的目标顶点类型名,只允许设置一个;
 
-interface                 | param | must set
-------------------------- | ----- | --------
-sourceLabel(String label) | label | y
-targetLabel(String label) | label | y
+| interface                 | param | must set |
+|---------------------------|-------|----------|
+| sourceLabel(String label) | label | y        |
+| targetLabel(String label) | label | y        |
 
 - frequency: 字面意思是频率,表示在两个具体的顶点间某个关系出现的次数,可以是单次(single)或多次(frequency),默认为single;
 
-interface    | frequency | description
------------- | --------- | -----------------------------------
-singleTime() | single    | a relationship can only occur once
-multiTimes() | multiple  | a relationship can occur many times
+| interface    | frequency | description                         |
+|--------------|-----------|-------------------------------------|
+| singleTime() | single    | a relationship can only occur once  |
+| multiTimes() | multiple  | a relationship can occur many times |
 
 - properties: 定义边的属性
 
-interface                        | description
--------------------------------- | -------------------------
-properties(String... properties) | allow to pass multi props
+| interface                        | description               |
+|----------------------------------|---------------------------|
+| properties(String... properties) | allow to pass multi props |
 
 - sortKeys: 当 EdgeLabel 的 frequency 为 multiple 时,需要某些属性来区分这多次的关系,故引入了 sortKeys(排序键);
 
-interface                | description
------------------------- | --------------------------------------
-sortKeys(String... keys) | allow to choose multi prop as sortKeys
+| interface                | description                            |
+|--------------------------|----------------------------------------|
+| sortKeys(String... keys) | allow to choose multi prop as sortKeys |
 
 - nullableKeys: 与顶点中的 nullableKeys 概念一致,不再赘述
 
@@ -286,9 +285,9 @@
 
 - userdata:用户可以自己添加一些约束或额外信息,然后自行检查传入的属性是否满足约束,或者必要的时候提取出额外信息
 
-interface                          | description
----------------------------------- | ----------------------------------------------
-userdata(String key, Object value) | The same key, the latter will cover the former
+| interface                          | description                                                 |
+|------------------------------------|-------------------------------------------------------------|
+| userdata(String key, Object value) | For the same key, the latter value will override the former |
 
 ##### 2.4.2 创建 EdgeLabel
 
@@ -332,28 +331,28 @@
 
 IndexLabel 用来定义索引类型,描述索引的约束信息,主要是为了方便查询。
 
-IndexLabel 允许定义的约束信息包括:name、baseType、baseValue、indexFeilds、indexType,下面逐一介绍。
+IndexLabel 允许定义的约束信息包括:name、baseType、baseValue、indexFields、indexType,下面逐一介绍。
 
 - name: 属性的名字,用来区分不同的 IndexLabel,不允许有同名的属性;
 
-interface               | param | must set
------------------------ | ----- | --------
-indexLabel(String name) | name  | y
+| interface               | param | must set |
+|-------------------------|-------|----------|
+| indexLabel(String name) | name  | y        |
 
 - baseType: 表示要为 VertexLabel 还是 EdgeLabel 建立索引, 与下面的 baseValue 配合使用;
 
 - baseValue: 指定要建立索引的 VertexLabel 或 EdgeLabel 的名称;
 
-interface             | param     | description
---------------------- | --------- | ----------------------------------------
-onV(String baseValue) | baseValue | build index for VertexLabel: 'baseValue'
-onE(String baseValue) | baseValue | build index for EdgeLabel: 'baseValue'
+| interface             | param     | description                              |
+|-----------------------|-----------|------------------------------------------|
+| onV(String baseValue) | baseValue | build index for VertexLabel: 'baseValue' |
+| onE(String baseValue) | baseValue | build index for EdgeLabel: 'baseValue'   |
 
-- indexFeilds: 要在哪些属性上建立索引,可以是为多列建立联合索引;
+- indexFields: 要在哪些属性上建立索引,可以是为多列建立联合索引;
 
-interface            | param | description
--------------------- | ----- | ---------------------------------------------------------
-by(String... fields) | files | allow to build index for multi fields for secondary index
+| interface            | param  | description                                               |
+|----------------------|--------|-----------------------------------------------------------|
+| by(String... fields) | fields | allow to build index for multi fields for secondary index |
 
 - indexType: 建立的索引类型,目前支持五种,即 Secondary、Range、Search、Shard 和 Unique。
     - Secondary 支持精确匹配的二级索引,允许建立联合索引,联合索引支持索引前缀搜索
@@ -382,13 +381,13 @@
     - Unique 支持属性值唯一性约束,即可以限定属性的值不重复,允许联合索引,但不支持查询
         - 单个或者多个属性的唯一性索引,不可用来查询,只可对属性的值进行限定,当出现重复值时将报错
 
-interface   | indexType | description
------------ | --------- | ---------------------------------------
-secondary() | Secondary | support prefix search
-range()     | Range     | support range(numeric or date type) search
-search()    | Search    | support full text search
-shard()     | Shard     | support prefix + range(numeric or date type) search
-unique()    | Unique    | support unique props value, not support search
+| interface   | indexType | description                                          |
+|-------------|-----------|------------------------------------------------------|
+| secondary() | Secondary | support prefix search                                |
+| range()     | Range     | support range (numeric or date type) search          |
+| search()    | Search    | support full text search                             |
+| shard()     | Shard     | support prefix + range (numeric or date type) search |
+| unique()    | Unique    | support unique prop values, does not support search  |
 
 ##### 2.5.2 创建 IndexLabel
 
@@ -450,7 +449,7 @@
 ```
 
 - 由(源)顶点来调用添加边的函数,函数第一个参数为边的label,第二个参数是目标顶点,这两个参数的位置和顺序是固定的。后续的参数就是`key1 -> val1, key2 -> val2 ···`的顺序排列,设置边的属性,键值对顺序自由。
-- 源顶点和目标顶点必须符合 EdgeLabel 中 sourcelabel 和 targetlabel 的定义,不能随意添加。
+- 源顶点和目标顶点必须符合 EdgeLabel 中 source-label 和 target-label 的定义,不能随意添加。
 - 对于非 nullableKeys 的属性,必须要赋值。
 
 **注意:当frequency为multiple时必须要设置sortKeys对应属性类型的值。**
diff --git a/content/cn/docs/clients/restful-api/edge.md b/content/cn/docs/clients/restful-api/edge.md
index 7794117..8207447 100644
--- a/content/cn/docs/clients/restful-api/edge.md
+++ b/content/cn/docs/clients/restful-api/edge.md
@@ -370,18 +370,18 @@
 
 属性键值对由JSON格式的属性名称和属性值组成,允许多个属性键值对作为查询条件,属性值支持精确匹配和范围匹配,精确匹配时形如`properties={"weight":0.8}`,范围匹配时形如`properties={"age":"P.gt(0.8)"}`,范围匹配支持的表达式如下:
 
-表达式           | 说明
----------------- | -------
-P.eq(number)     | 属性值等于number的边
-P.neq(number)    | 属性值不等于number的边
-P.lt(number)     | 属性值小于number的边
-P.lte(number)    | 属性值小于等于number的边
-P.gt(number)     | 属性值大于number的边
-P.gte(number)    | 属性值大于等于number的边
-P.between(number1,number2)            | 属性值大于等于number1且小于number2的边
-P.inside(number1,number2)             | 属性值大于number1且小于number2的边
-P.outside(number1,number2)            | 属性值小于number1且大于number2的边
-P.within(value1,value2,value3,...)    | 属性值等于任何一个给定value的边
+| 表达式                                | 说明                         |
+|------------------------------------|----------------------------|
+| P.eq(number)                       | 属性值等于number的边              |
+| P.neq(number)                      | 属性值不等于number的边             |
+| P.lt(number)                       | 属性值小于number的边              |
+| P.lte(number)                      | 属性值小于等于number的边            |
+| P.gt(number)                       | 属性值大于number的边              |
+| P.gte(number)                      | 属性值大于等于number的边            |
+| P.between(number1,number2)         | 属性值大于等于number1且小于number2的边 |
+| P.inside(number1,number2)          | 属性值大于number1且小于number2的边   |
+| P.outside(number1,number2)         | 属性值小于number1且大于number2的边   |
+| P.within(value1,value2,value3,...) | 属性值等于任何一个给定value的边         |
 
 **查询与顶点 person:josh(vertex_id="1:josh") 相连且 label 为 created 的边**
 
diff --git a/content/cn/docs/clients/restful-api/vertex.md b/content/cn/docs/clients/restful-api/vertex.md
index 71b9f22..0db38d0 100644
--- a/content/cn/docs/clients/restful-api/vertex.md
+++ b/content/cn/docs/clients/restful-api/vertex.md
@@ -8,13 +8,13 @@
 
 顶点类型中的 Id 策略决定了顶点的 Id 类型,其对应关系如下:
 
-Id_Strategy      | id type
----------------- | -------
-AUTOMATIC        | number
-PRIMARY_KEY      | string
-CUSTOMIZE_STRING | string
-CUSTOMIZE_NUMBER | number
-CUSTOMIZE_UUID   | uuid
+| Id_Strategy      | id type |
+|------------------|---------|
+| AUTOMATIC        | number  |
+| PRIMARY_KEY      | string  |
+| CUSTOMIZE_STRING | string  |
+| CUSTOMIZE_NUMBER | number  |
+| CUSTOMIZE_UUID   | uuid    |
 
 顶点的 `GET/PUT/DELETE` API 中 url 的 id 部分传入的应是带有类型信息的 id 值,这个类型信息用 json 串是否带引号表示,也就是说:
 
@@ -387,18 +387,18 @@
 
 属性键值对由JSON格式的属性名称和属性值组成,允许多个属性键值对作为查询条件,属性值支持精确匹配和范围匹配,精确匹配时形如`properties={"age":29}`,范围匹配时形如`properties={"age":"P.gt(29)"}`,范围匹配支持的表达式如下:
 
-表达式           | 说明
----------------- | -------
-P.eq(number)     | 属性值等于number的顶点
-P.neq(number)    | 属性值不等于number的顶点
-P.lt(number)     | 属性值小于number的顶点
-P.lte(number)    | 属性值小于等于number的顶点
-P.gt(number)     | 属性值大于number的顶点
-P.gte(number)    | 属性值大于等于number的顶点
-P.between(number1,number2)            | 属性值大于等于number1且小于number2的顶点
-P.inside(number1,number2)             | 属性值大于number1且小于number2的顶点
-P.outside(number1,number2)            | 属性值小于number1且大于number2的顶点
-P.within(value1,value2,value3,...)    | 属性值等于任何一个给定value的顶点
+| 表达式                                | 说明                          |
+|------------------------------------|-----------------------------|
+| P.eq(number)                       | 属性值等于number的顶点              |
+| P.neq(number)                      | 属性值不等于number的顶点             |
+| P.lt(number)                       | 属性值小于number的顶点              |
+| P.lte(number)                      | 属性值小于等于number的顶点            |
+| P.gt(number)                       | 属性值大于number的顶点              |
+| P.gte(number)                      | 属性值大于等于number的顶点            |
+| P.between(number1,number2)         | 属性值大于等于number1且小于number2的顶点 |
+| P.inside(number1,number2)          | 属性值大于number1且小于number2的顶点   |
+| P.outside(number1,number2)         | 属性值小于number1且大于number2的顶点   |
+| P.within(value1,value2,value3,...) | 属性值等于任何一个给定value的顶点         |
 
 **查询所有 age 为 20 且 label 为 person 的顶点**
 
diff --git a/content/cn/docs/config/config-option.md b/content/cn/docs/config/config-option.md
index 0588c0a..dae587b 100644
--- a/content/cn/docs/config/config-option.md
+++ b/content/cn/docs/config/config-option.md
@@ -8,269 +8,269 @@
 
 对应配置文件`gremlin-server.yaml`
 
-config option           | default value                                                                                                | descrition
------------------------ | ------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------
-host                    | 127.0.0.1                                                                                                    | The host or ip of Gremlin Server.
-port                    | 8182                                                                                                         | The listening port of Gremlin Server.
-graphs                  | hugegraph: conf/hugegraph.properties                                                                         | The map of graphs with name and config file path.
-scriptEvaluationTimeout | 30000                                                                                                        | The timeout for gremlin script execution(millisecond).
-channelizer             | org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer                                                  | Indicates the protocol which the Gremlin Server provides service.
-authentication          | authenticator: com.baidu.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties} | The authenticator and config(contains tokens path) of authentication mechanism.
+| config option           | default value                                                                                                | description                                                                      |
+|-------------------------|--------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------|
+| host                    | 127.0.0.1                                                                                                    | The host or ip of Gremlin Server.                                               |
+| port                    | 8182                                                                                                         | The listening port of Gremlin Server.                                           |
+| graphs                  | hugegraph: conf/hugegraph.properties                                                                         | The map of graphs with name and config file path.                               |
+| scriptEvaluationTimeout | 30000                                                                                                        | The timeout for gremlin script execution(millisecond).                          |
+| channelizer             | org.apache.tinkerpop.gremlin.server.channel.HttpChannelizer                                                  | Indicates the protocol which the Gremlin Server provides service.               |
+| authentication          | authenticator: com.baidu.hugegraph.auth.StandardAuthenticator, config: {tokens: conf/rest-server.properties} | The authenticator and config(contains tokens path) of authentication mechanism. |
 
 ### Rest Server & API 配置项
 
 对应配置文件`rest-server.properties`
 
-config option                      | default value                                    | descrition
----------------------------------- | ------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------
-graphs                             | [hugegraph:conf/hugegraph.properties]            | The map of graphs' name and config file.
-server.id                          | server-1                                         | The id of rest server, used for license verification.
-server.role                        | master                                           | The role of nodes in the cluster, available types are [master, worker, computer]
-restserver.url                     | http://127.0.0.1:8080                            | The url for listening of rest server.
-ssl.keystore_file                  | server.keystore                                  | The path of server keystore file used when https protocol is enabled.
-ssl.keystore_password              |                                                  | The password of the path of the server keystore file used when the https protocol is enabled.
-restserver.max_worker_threads      | 2 * CPUs                                         | The maximum worker threads of rest server.
-restserver.min_free_memory         | 64                                               | The minimum free memory(MB) of rest server, requests will be rejected when the available memory of system is lower than this value.
-restserver.request_timeout         | 30                                               | The time in seconds within which a request must complete, -1 means no timeout.
-restserver.connection_idle_timeout | 30                                               | The time in seconds to keep an inactive connection alive, -1 means no timeout.
-restserver.connection_max_requests | 256                                              | The max number of HTTP requests allowed to be processed on one keep-alive connection, -1 means unlimited.
-gremlinserver.url                  | http://127.0.0.1:8182                            | The url of gremlin server.
-gremlinserver.max_route            | 8                                                | The max route number for gremlin server.
-gremlinserver.timeout              | 30                                               | The timeout in seconds of waiting for gremlin server.
-batch.max_edges_per_batch          | 500                                              | The maximum number of edges submitted per batch.
-batch.max_vertices_per_batch       | 500                                              | The maximum number of vertices submitted per batch.
-batch.max_write_ratio              | 50                                               | The maximum thread ratio for batch writing, only take effect if the batch.max_write_threads is 0.
-batch.max_write_threads            | 0                                                | The maximum threads for batch writing, if the value is 0, the actual value will be set to batch.max_write_ratio * restserver.max_worker_threads.
-auth.authenticator                 |                                                  | The class path of authenticator implemention. e.g., com.baidu.hugegraph.auth.StandardAuthenticator, or com.baidu.hugegraph.auth.ConfigAuthenticator.
-auth.admin_token                   | 162f7848-0b6d-4faf-b557-3a0797869c55             | Token for administrator operations, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
-auth.graph_store                   | hugegraph                                        | The name of graph used to store authentication information, like users, only for com.baidu.hugegraph.auth.StandardAuthenticator.
-auth.user_tokens                   | [hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31] | The map of user tokens with name and password, only for com.baidu.hugegraph.auth.ConfigAuthenticator.
-auth.audit_log_rate                | 1000.0                                           | The max rate of audit log output per user, default value is 1000 records per second.
-auth.cache_capacity                | 10240                                            | The max cache capacity of each auth cache item.
-auth.cache_expire                  | 600                                              | The expiration time in seconds of vertex cache.
-auth.remote_url                    |                                                  | If the address is empty, it provide auth service, otherwise it is auth client and also provide auth service through rpc forwarding. The remote url can be set to multiple addresses, which are concat by ','.
-auth.token_expire                  | 86400                                            | The expiration time in seconds after token created
-auth.token_secret                  | FXQXbJtbCLxODc6tGci732pkH1cyf8Qg                 | Secret key of HS256 algorithm.
-exception.allow_trace              | false                                            | Whether to allow exception trace stack.
+| config option                      | default value                                    | description                                                                                                                                                                                                    |
+|------------------------------------|--------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| graphs                             | [hugegraph:conf/hugegraph.properties]            | The map of graphs' name and config file.                                                                                                                                                                      |
+| server.id                          | server-1                                         | The id of rest server, used for license verification.                                                                                                                                                         |
+| server.role                        | master                                           | The role of nodes in the cluster, available types are [master, worker, computer]                                                                                                                              |
+| restserver.url                     | http://127.0.0.1:8080                            | The url for listening of rest server.                                                                                                                                                                         |
+| ssl.keystore_file                  | server.keystore                                  | The path of server keystore file used when https protocol is enabled.                                                                                                                                         |
+| ssl.keystore_password              |                                                  | The password of the path of the server keystore file used when the https protocol is enabled.                                                                                                                 |
+| restserver.max_worker_threads      | 2 * CPUs                                         | The maximum worker threads of rest server.                                                                                                                                                                    |
+| restserver.min_free_memory         | 64                                               | The minimum free memory(MB) of rest server, requests will be rejected when the available memory of system is lower than this value.                                                                           |
+| restserver.request_timeout         | 30                                               | The time in seconds within which a request must complete, -1 means no timeout.                                                                                                                                |
+| restserver.connection_idle_timeout | 30                                               | The time in seconds to keep an inactive connection alive, -1 means no timeout.                                                                                                                                |
+| restserver.connection_max_requests | 256                                              | The max number of HTTP requests allowed to be processed on one keep-alive connection, -1 means unlimited.                                                                                                     |
+| gremlinserver.url                  | http://127.0.0.1:8182                            | The url of gremlin server.                                                                                                                                                                                    |
+| gremlinserver.max_route            | 8                                                | The max route number for gremlin server.                                                                                                                                                                      |
+| gremlinserver.timeout              | 30                                               | The timeout in seconds of waiting for gremlin server.                                                                                                                                                         |
+| batch.max_edges_per_batch          | 500                                              | The maximum number of edges submitted per batch.                                                                                                                                                              |
+| batch.max_vertices_per_batch       | 500                                              | The maximum number of vertices submitted per batch.                                                                                                                                                           |
+| batch.max_write_ratio              | 50                                               | The maximum thread ratio for batch writing, only takes effect if batch.max_write_threads is 0.                                                                                                                |
+| batch.max_write_threads            | 0                                                | The maximum threads for batch writing, if the value is 0, the actual value will be set to batch.max_write_ratio * restserver.max_worker_threads.                                                              |
+| auth.authenticator                 |                                                  | The class path of authenticator implementation. e.g., com.baidu.hugegraph.auth.StandardAuthenticator, or com.baidu.hugegraph.auth.ConfigAuthenticator.                                                          |
+| auth.admin_token                   | 162f7848-0b6d-4faf-b557-3a0797869c55             | Token for administrator operations, only for com.baidu.hugegraph.auth.ConfigAuthenticator.                                                                                                                    |
+| auth.graph_store                   | hugegraph                                        | The name of graph used to store authentication information, like users, only for com.baidu.hugegraph.auth.StandardAuthenticator.                                                                              |
+| auth.user_tokens                   | [hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31] | The map of user tokens with name and password, only for com.baidu.hugegraph.auth.ConfigAuthenticator.                                                                                                         |
+| auth.audit_log_rate                | 1000.0                                           | The max rate of audit log output per user, default value is 1000 records per second.                                                                                                                          |
+| auth.cache_capacity                | 10240                                            | The max cache capacity of each auth cache item.                                                                                                                                                               |
+| auth.cache_expire                  | 600                                              | The expiration time in seconds of auth cache.                                                                                                                                                                 |
+| auth.remote_url                    |                                                  | If this address is empty, the node provides auth service itself; otherwise it acts as an auth client and also provides auth service through rpc forwarding. The remote url can be set to multiple addresses, concatenated by ','. |
+| auth.token_expire                  | 86400                                            | The expiration time in seconds after a token is created.                                                                                                                                                      |
+| auth.token_secret                  | FXQXbJtbCLxODc6tGci732pkH1cyf8Qg                 | Secret key of HS256 algorithm.                                                                                                                                                                                |
+| exception.allow_trace              | false                                            | Whether to allow exception trace stack.                                                                                                                                                                       |
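+
+As an illustration (not part of the upstream defaults), a `rest-server.properties` fragment enabling https together with the ConfigAuthenticator described above might look like this; the url, keystore password and token are placeholder values:
+
+```properties
+# url the rest server listens on (https enabled)
+restserver.url=https://127.0.0.1:8443
+# keystore used when the https protocol is enabled
+ssl.keystore_file=server.keystore
+ssl.keystore_password=changeit
+# simple token-based auth via ConfigAuthenticator
+auth.authenticator=com.baidu.hugegraph.auth.ConfigAuthenticator
+auth.user_tokens=[hugegraph:9fd95c9c-711b-415b-b85f-d4df46ba5c31]
+```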
 
 ### 基本配置项
 
 基本配置项及后端配置项对应配置文件:{graph-name}.properties,如`hugegraph.properties`
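+
+For example, a minimal `hugegraph.properties` overriding a few of the basic options listed below might look like this (values are illustrative and must match your deployment):
+
+```properties
+# Gremlin entrance to create graph
+gremlin.graph=com.baidu.hugegraph.HugeFactory
+# backend store type and its serializer
+backend=rocksdb
+serializer=binary
+# database name, like a Cassandra Keyspace
+store=hugegraph
+```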
 
-config option                    | default value                   | descrition
--------------------------------- | ------------------------------- | -----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-gremlin.graph	                 | com.baidu.hugegraph.HugeFactory | Gremlin entrance to create graph.
-backend                          | rocksdb                         | The data store type, available values are [memory, rocksdb, cassandra, scylladb, hbase, mysql].
-serializer                       | binary                          | The serializer for backend store, available values are [text, binary, cassandra, hbase, mysql].
-store                            | hugegraph                       | The database name like Cassandra Keyspace.
-store.connection_detect_interval | 600                             | The interval in seconds for detecting connections, if the idle time of a connection exceeds this value, detect it and reconnect if needed before using, value 0 means detecting every time.
-store.graph                      | g                               | The graph table name, which store vertex, edge and property.
-store.schema                     | m                               | The schema table name, which store meta data.
-store.system                     | s                               | The system table name, which store system data.
-schema.illegal_name_regex	     | .*\s+$|~.*	               | The regex specified the illegal format for schema name.
-schema.cache_capacity            | 10000                           | The max cache size(items) of schema cache.
-vertex.cache_type                | l2                              | The type of vertex cache, allowed values are [l1, l2].
-vertex.cache_capacity            | 10000000                        | The max cache size(items) of vertex cache.
-vertex.cache_expire              | 600                             | The expire time in seconds of vertex cache.
-vertex.check_customized_id_exist | false                           | Whether to check the vertices exist for those using customized id strategy.
-vertex.default_label             | vertex                          | The default vertex label.
-vertex.tx_capacity               | 10000                           | The max size(items) of vertices(uncommitted) in transaction.
-vertex.check_adjacent_vertex_exist | false                         | Whether to check the adjacent vertices of edges exist.
-vertex.lazy_load_adjacent_vertex | true                            | Whether to lazy load adjacent vertices of edges.
-vertex.part_edge_commit_size     | 5000                            | Whether to enable the mode to commit part of edges of vertex, enabled if commit size > 0, 0 means disabled.
-vertex.encode_primary_key_number | true                            | Whether to encode number value of primary key in vertex id.
-vertex.remove_left_index_at_overwrite | false                      | Whether remove left index at overwrite.
-edge.cache_type                  | l2                              | The type of edge cache, allowed values are [l1, l2].
-edge.cache_capacity              | 1000000                         | The max cache size(items) of edge cache.
-edge.cache_expire                | 600                             | The expiration time in seconds of edge cache.
-edge.tx_capacity                 | 10000                           | The max size(items) of edges(uncommitted) in transaction.
-query.page_size                  | 500                             | The size of each page when querying by paging.
-query.batch_size                 | 1000                            | The size of each batch when querying by batch.
-query.ignore_invalid_data        | true                            | Whether to ignore invalid data of vertex or edge.
-query.index_intersect_threshold  | 1000                            | The maximum number of intermediate results to intersect indexes when querying by multiple single index properties.
-query.ramtable_edges_capacity    | 20000000                        | The maximum number of edges in ramtable, include OUT and IN edges.
-query.ramtable_enable            | false                           | Whether to enable ramtable for query of adjacent edges.
-query.ramtable_vertices_capacity | 10000000                        | The maximum number of vertices in ramtable, generally the largest vertex id is used as capacity.
-query.optimize_aggregate_by_index| false                           | Whether to optimize aggregate query(like count) by index.
-oltp.concurrent_depth            | 10                              | The min depth to enable concurrent oltp algorithm.
-oltp.concurrent_threads          | 10                              | Thread number to concurrently execute oltp algorithm.
-oltp.collection_type             | EC                              | The implementation type of collections used in oltp algorithm.
-rate_limit.read                  | 0                               | The max rate(times/s) to execute query of vertices/edges.
-rate_limit.write                 | 0                               | The max rate(items/s) to add/update/delete vertices/edges.
-task.wait_timeout                | 10                              | Timeout in seconds for waiting for the task to complete,such as when truncating or clearing the backend.
-task.input_size_limit            | 16777216                        | The job input size limit in bytes.
-task.result_size_limit           | 16777216                        | The job result size limit in bytes.
-task.sync_deletion               | false                           | Whether to delete schema or expired data synchronously.
-task.ttl_delete_batch            | 1                               | The batch size used to delete expired data.
-computer.config                  | /conf/computer.yaml             | The config file path of computer job.
-search.text_analyzer             | ikanalyzer                      | Choose a text analyzer for searching the vertex/edge properties, available type are [word, ansj, hanlp, smartcn, jieba, jcseg, mmseg4j, ikanalyzer].
-search.text_analyzer_mode        | smart                           | Specify the mode for the text analyzer, the available mode of analyzer are {word: [MaximumMatching, ReverseMaximumMatching, MinimumMatching, ReverseMinimumMatching, BidirectionalMaximumMatching, BidirectionalMinimumMatching, BidirectionalMaximumMinimumMatching, FullSegmentation, MinimalWordCount, MaxNgramScore, PureEnglish], ansj: [BaseAnalysis, IndexAnalysis, ToAnalysis, NlpAnalysis], hanlp: [standard, nlp, index, nShort, shortest, speed], smartcn: [], jieba: [SEARCH, INDEX], jcseg: [Simple, Complex], mmseg4j: [Simple, Complex, MaxWord], ikanalyzer: [smart, max_word]}.
-snowflake.datecenter_id          | 0                               | The datacenter id of snowflake id generator.
-snowflake.force_string           | false                           | Whether to force the snowflake long id to be a string.
-snowflake.worker_id              | 0                               | The worker id of snowflake id generator.
-raft.mode                        | false                           | Whether the backend storage works in raft mode.
-raft.safe_read                   | false                           | Whether to use linearly consistent read.
-raft.use_snapshot                | false                           | Whether to use snapshot.
-raft.endpoint                    | 127.0.0.1:8281                  | The peerid of current raft node.
-raft.group_peers                 | 127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283 | The peers of current raft group.
-raft.path                        | ./raft-log                      | The log path of current raft node.
-raft.use_replicator_pipeline     | true                            | Whether to use replicator line, when turned on it multiple logs can be sent in parallel, and the next log doesn't have to wait for the ack message of the current log to be sent.
-raft.election_timeout            | 10000                           | Timeout in milliseconds to launch a round of election.
-raft.snapshot_interval           | 3600                            | The interval in seconds to trigger snapshot save.
-raft.backend_threads             | current CPU vcores              | The thread number used to apply task to bakcend.
-raft.read_index_threads          | 8                               | The thread number used to execute reading index.
-raft.apply_batch                 | 1                               | The apply batch size to trigger disruptor event handler.
-raft.queue_size                  | 16384                           | The disruptor buffers size for jraft RaftNode, StateMachine and LogManager.
-raft.queue_publish_timeout       | 60                              | The timeout in second when publish event into disruptor.
-raft.rpc_threads                 | 80                              | The rpc threads for jraft RPC layer.
-raft.rpc_connect_timeout         | 5000                            | The rpc connect timeout for jraft rpc.
-raft.rpc_timeout                 | 60000                           | The rpc timeout for jraft rpc.
-raft.rpc_buf_low_water_mark      | 10485760                        | The ChannelOutboundBuffer's low water mark of netty, when buffer size less than this size, the method ChannelOutboundBuffer.isWritable() will return true, it means that low downstream pressure or good network.
-raft.rpc_buf_high_water_mark     | 20971520                        | The ChannelOutboundBuffer's high water mark of netty, only when buffer size exceed this size, the method ChannelOutboundBuffer.isWritable() will return false, it means that the downstream pressure is too great to process the request or network is very congestion, upstream needs to limit rate at this time.
-raft.read_strategy               | ReadOnlyLeaseBased              | The linearizability of read strategy.
+| config option                         | default value                                | description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
+|---------------------------------------|----------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| gremlin.graph                         | com.baidu.hugegraph.HugeFactory              | Gremlin entrance to create graph.                                                                                                                                                                                                                                                                |
+| backend                               | rocksdb                                      | The data store type, available values are [memory, rocksdb, cassandra, scylladb, hbase, mysql].                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
+| serializer                            | binary                                       | The serializer for backend store, available values are [text, binary, cassandra, hbase, mysql].                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
+| store                                 | hugegraph                                    | The database name like Cassandra Keyspace.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
+| store.connection_detect_interval      | 600                                          | The interval in seconds for detecting connections, if the idle time of a connection exceeds this value, detect it and reconnect if needed before using, value 0 means detecting every time.                                                                                                                                                                                                                                                                                                                                                                                                      |
+| store.graph                           | g                                            | The graph table name, which stores vertices, edges and properties.                                                                                                                                                                                                                               |
+| store.schema                          | m                                            | The schema table name, which stores metadata.                                                                                                                                                                                                                                                    |
+| store.system                          | s                                            | The system table name, which stores system data.                                                                                                                                                                                                                                                 |
+| schema.illegal_name_regex             | .*\s+$\|~.*                                  | The regex that specifies the illegal format for schema names.                                                                                                                                                                                                                                    |
+| schema.cache_capacity                 | 10000                                        | The max cache size(items) of schema cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
+| vertex.cache_type                     | l2                                           | The type of vertex cache, allowed values are [l1, l2].                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
+| vertex.cache_capacity                 | 10000000                                     | The max cache size(items) of vertex cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
+| vertex.cache_expire                   | 600                                          | The expiration time in seconds of vertex cache.                                                                                                                                                                                                                                                  |
+| vertex.check_customized_id_exist      | false                                        | Whether to check the vertices exist for those using customized id strategy.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
+| vertex.default_label                  | vertex                                       | The default vertex label.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
+| vertex.tx_capacity                    | 10000                                        | The max size(items) of vertices(uncommitted) in transaction.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
+| vertex.check_adjacent_vertex_exist    | false                                        | Whether to check the adjacent vertices of edges exist.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
+| vertex.lazy_load_adjacent_vertex      | true                                         | Whether to lazy load adjacent vertices of edges.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
+| vertex.part_edge_commit_size          | 5000                                         | Whether to enable the mode to commit part of edges of vertex, enabled if commit size > 0, 0 means disabled.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
+| vertex.encode_primary_key_number      | true                                         | Whether to encode number value of primary key in vertex id.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
+| vertex.remove_left_index_at_overwrite | false                                        | Whether to remove the left index at overwrite.                                                                                                                                                                                                                                                   |
+| edge.cache_type                       | l2                                           | The type of edge cache, allowed values are [l1, l2].                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
+| edge.cache_capacity                   | 1000000                                      | The max cache size(items) of edge cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
+| edge.cache_expire                     | 600                                          | The expiration time in seconds of edge cache.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                    |
+| edge.tx_capacity                      | 10000                                        | The max size(items) of edges(uncommitted) in transaction.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
+| query.page_size                       | 500                                          | The size of each page when querying by paging.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
+| query.batch_size                      | 1000                                         | The size of each batch when querying by batch.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
+| query.ignore_invalid_data             | true                                         | Whether to ignore invalid data of vertex or edge.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
+| query.index_intersect_threshold       | 1000                                         | The maximum number of intermediate results to intersect indexes when querying by multiple single index properties.                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
+| query.ramtable_edges_capacity         | 20000000                                     | The maximum number of edges in ramtable, including both OUT and IN edges. |
+| query.ramtable_enable                 | false                                        | Whether to enable ramtable for query of adjacent edges.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
+| query.ramtable_vertices_capacity      | 10000000                                     | The maximum number of vertices in ramtable, generally the largest vertex id is used as capacity.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
+| query.optimize_aggregate_by_index     | false                                        | Whether to optimize aggregate query(like count) by index.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
+| oltp.concurrent_depth                 | 10                                           | The min depth to enable concurrent oltp algorithm.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
+| oltp.concurrent_threads               | 10                                           | Thread number to concurrently execute oltp algorithm.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
+| oltp.collection_type                  | EC                                           | The implementation type of collections used in oltp algorithm.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
+| rate_limit.read                       | 0                                            | The max rate(times/s) to execute query of vertices/edges.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                        |
+| rate_limit.write                      | 0                                            | The max rate(items/s) to add/update/delete vertices/edges.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                       |
+| task.wait_timeout                     | 10                                           | Timeout in seconds for waiting for the task to complete, such as when truncating or clearing the backend. |
+| task.input_size_limit                 | 16777216                                     | The job input size limit in bytes.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
+| task.result_size_limit                | 16777216                                     | The job result size limit in bytes.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              |
+| task.sync_deletion                    | false                                        | Whether to delete schema or expired data synchronously.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
+| task.ttl_delete_batch                 | 1                                            | The batch size used to delete expired data.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
+| computer.config                       | /conf/computer.yaml                          | The config file path of computer job.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                            |
+| search.text_analyzer                  | ikanalyzer                                   | Choose a text analyzer for searching the vertex/edge properties, available types are [word, ansj, hanlp, smartcn, jieba, jcseg, mmseg4j, ikanalyzer]. |
+| search.text_analyzer_mode             | smart                                        | Specify the mode for the text analyzer, the available modes for each analyzer are {word: [MaximumMatching, ReverseMaximumMatching, MinimumMatching, ReverseMinimumMatching, BidirectionalMaximumMatching, BidirectionalMinimumMatching, BidirectionalMaximumMinimumMatching, FullSegmentation, MinimalWordCount, MaxNgramScore, PureEnglish], ansj: [BaseAnalysis, IndexAnalysis, ToAnalysis, NlpAnalysis], hanlp: [standard, nlp, index, nShort, shortest, speed], smartcn: [], jieba: [SEARCH, INDEX], jcseg: [Simple, Complex], mmseg4j: [Simple, Complex, MaxWord], ikanalyzer: [smart, max_word]}. |
+| snowflake.datecenter_id               | 0                                            | The datacenter id of snowflake id generator.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
+| snowflake.force_string                | false                                        | Whether to force the snowflake long id to be a string.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
+| snowflake.worker_id                   | 0                                            | The worker id of snowflake id generator.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
+| raft.mode                             | false                                        | Whether the backend storage works in raft mode.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  |
+| raft.safe_read                        | false                                        | Whether to use linearly consistent read.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
+| raft.use_snapshot                     | false                                        | Whether to use snapshot.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
+| raft.endpoint                         | 127.0.0.1:8281                               | The peer id of the current raft node. |
+| raft.group_peers                      | 127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283 | The peers of current raft group.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
+| raft.path                             | ./raft-log                                   | The log path of current raft node.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                               |
+| raft.use_replicator_pipeline          | true                                         | Whether to use the replicator pipeline; when enabled, multiple logs can be sent in parallel, and the next log doesn't have to wait for the ack of the current log before being sent. |
+| raft.election_timeout                 | 10000                                        | Timeout in milliseconds to launch a round of election.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
+| raft.snapshot_interval                | 3600                                         | The interval in seconds to trigger snapshot save.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                |
+| raft.backend_threads                  | current CPU v-cores                          | The thread number used to apply task to backend.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
+| raft.read_index_threads               | 8                                            | The thread number used to execute reading index.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
+| raft.apply_batch                      | 1                                            | The apply batch size to trigger disruptor event handler.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
+| raft.queue_size                       | 16384                                        | The disruptor buffers size for jraft RaftNode, StateMachine and LogManager.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                      |
+| raft.queue_publish_timeout            | 60                                           | The timeout in seconds when publishing an event into the disruptor. |
+| raft.rpc_threads                      | 80                                           | The rpc threads for jraft RPC layer.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                             |
+| raft.rpc_connect_timeout              | 5000                                         | The rpc connect timeout for jraft rpc.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           |
+| raft.rpc_timeout                      | 60000                                        | The rpc timeout for jraft rpc.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
+| raft.rpc_buf_low_water_mark           | 10485760                                     | The ChannelOutboundBuffer's low water mark of netty; when the buffer size is less than this value, ChannelOutboundBuffer.isWritable() returns true, indicating low downstream pressure or a good network. |
+| raft.rpc_buf_high_water_mark          | 20971520                                     | The ChannelOutboundBuffer's high water mark of netty; only when the buffer size exceeds this value does ChannelOutboundBuffer.isWritable() return false, indicating that the downstream cannot keep up with the requests or the network is congested, and the upstream needs to limit its rate. |
+| raft.read_strategy                    | ReadOnlyLeaseBased                           | The strategy used for linearizable reads. |
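+
+下面是一个最小示例(假设的 `conf/hugegraph.properties` 片段,仅作演示,并非生产推荐配置),展示如何设置上表中的部分选项,取值均来自表中的默认值或示例地址:
+
+```properties
+# Enable raft mode with a 3-node group (the addresses are examples)
+raft.mode=true
+raft.endpoint=127.0.0.1:8281
+raft.group_peers=127.0.0.1:8281,127.0.0.1:8282,127.0.0.1:8283
+
+# Edge cache tuning (values shown are the defaults)
+edge.cache_type=l2
+edge.cache_capacity=1000000
+edge.cache_expire=600
+
+# Rate limits: 0 means unlimited
+rate_limit.read=0
+rate_limit.write=0
+```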
 
 ### RPC server 配置
 
-config option                  | default value  | descrition
------------------------------- | -------------- | ------------------------------------------------------------------
-rpc.client_connect_timeout     | 20             | The timeout(in seconds) of rpc client connect to rpc server.
-rpc.client_load_balancer       | consistentHash | The rpc client uses a load-balancing algorithm to access multiple rpc servers in one cluster. Default value is 'consistentHash', means forwording by request parameters.
-rpc.client_read_timeout        | 40             | The timeout(in seconds) of rpc client read from rpc server.
-rpc.client_reconnect_period    | 10             | The period(in seconds) of rpc client reconnect to rpc server.
-rpc.client_retries             | 3              | Failed retry number of rpc client calls to rpc server.
-rpc.config_order               | 999            | Sofa rpc configuration file loading order, the larger the more later loading.
-rpc.logger_impl                | com.alipay.sofa.rpc.log.SLF4JLoggerImpl | Sofa rpc log implementation class.
-rpc.protocol                   | bolt           | Rpc communication protocol, client and server need to be specified the same value.
-rpc.remote_url                 |                | The remote urls of rpc peers, it can be set to multiple addresses, which are concat by ',', empty value means not enabled.
-rpc.server_adaptive_port       | false          | Whether the bound port is adaptive, if it's enabled, when the port is in use, automatically +1 to detect the next available port. Note that this process is not atomic, so there may still be port conflicts.
-rpc.server_host                |                | The hosts/ips bound by rpc server to provide services, empty value means not enabled.
-rpc.server_port                | 8090           | The port bound by rpc server to provide services.
-rpc.server_timeout             | 30             | The timeout(in seconds) of rpc server execution.
+| config option               | default value                           | description                                                                                                                                                                                                   |
+|-----------------------------|-----------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| rpc.client_connect_timeout  | 20                                      | The timeout(in seconds) of rpc client connect to rpc server.                                                                                                                                                  |
+| rpc.client_load_balancer    | consistentHash                          | The rpc client uses a load-balancing algorithm to access multiple rpc servers in one cluster. Default value is 'consistentHash', means forwarding by request parameters.                                      |
+| rpc.client_read_timeout     | 40                                      | The timeout(in seconds) of rpc client read from rpc server.                                                                                                                                                   |
+| rpc.client_reconnect_period | 10                                      | The period(in seconds) of rpc client reconnect to rpc server.                                                                                                                                                 |
+| rpc.client_retries          | 3                                       | Failed retry number of rpc client calls to rpc server.                                                                                                                                                        |
+| rpc.config_order            | 999                                     | Sofa rpc configuration file loading order, larger values are loaded later.                                                                                                                                    |
+| rpc.logger_impl             | com.alipay.sofa.rpc.log.SLF4JLoggerImpl | Sofa rpc log implementation class.                                                                                                                                                                            |
+| rpc.protocol                | bolt                                    | Rpc communication protocol, the client and server must specify the same value.                                                                                                                                |
+| rpc.remote_url              |                                         | The remote urls of rpc peers, it can be set to multiple addresses concatenated by ',', an empty value means not enabled.                                                                                      |
+| rpc.server_adaptive_port    | false                                   | Whether the bound port is adaptive, if it's enabled, when the port is in use, automatically +1 to detect the next available port. Note that this process is not atomic, so there may still be port conflicts. |
+| rpc.server_host             |                                         | The hosts/ips bound by rpc server to provide services, empty value means not enabled.                                                                                                                         |
+| rpc.server_port             | 8090                                    | The port bound by rpc server to provide services.                                                                                                                                                             |
+| rpc.server_timeout          | 30                                      | The timeout(in seconds) of rpc server execution.                                                                                                                                                              |
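+
+For example, a minimal sketch of enabling the RPC server in a properties file might look like the following (the hosts and ports below are made up; adjust them to your cluster):
+
+```properties
+# Bind the rpc server and point clients at the peer list
+rpc.server_host=192.168.1.10
+rpc.server_port=8091
+# Comma-concatenated peer urls; an empty value means rpc is not enabled
+rpc.remote_url=192.168.1.11:8091,192.168.1.12:8091
+```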
 
 ### Cassandra 后端配置项
 
-config option                  | default value  | descrition
------------------------------- | -------------- | ------------------------------------------------------------------
-backend                        |                | Must be set to `cassandra`.
-serializer                     |                | Must be set to `cassandra`.
-cassandra.host                 | localhost      | The seeds hostname or ip address of cassandra cluster.
-cassandra.port                 | 9042           | The seeds port address of cassandra cluster.
-cassandra.connect_timeout      | 5              | The cassandra driver connect server timeout(seconds).
-cassandra.read_timeout         | 20             | The cassandra driver read from server timeout(seconds).
-cassandra.keyspace.strategy    | SimpleStrategy | The replication strategy of keyspace, valid value is SimpleStrategy or NetworkTopologyStrategy.
-cassandra.keyspace.replication | [3]            | The keyspace replication factor of SimpleStrategy, like '[3]'.Or replicas in each datacenter of NetworkTopologyStrategy, like '[dc1:2,dc2:1]'.
-cassandra.username             |                | The username to use to login to cassandra cluster.
-cassandra.password             |                | The password corresponding to cassandra.username.
-cassandra.compression_type     | none           | The compression algorithm of cassandra transport: none/snappy/lz4.
-cassandra.jmx_port=7199        | 7199           | The port of JMX API service for cassandra.
-cassandra.aggregation_timeout  | 43200          | The timeout in seconds of waiting for aggregation.
+| config option                  | default value  | description                                                                                                                                     |
+|--------------------------------|----------------|------------------------------------------------------------------------------------------------------------------------------------------------|
+| backend                        |                | Must be set to `cassandra`.                                                                                                                    |
+| serializer                     |                | Must be set to `cassandra`.                                                                                                                    |
+| cassandra.host                 | localhost      | The seeds hostname or ip address of cassandra cluster.                                                                                         |
+| cassandra.port                 | 9042           | The seeds port address of cassandra cluster.                                                                                                   |
+| cassandra.connect_timeout      | 5              | The cassandra driver connect server timeout(seconds).                                                                                          |
+| cassandra.read_timeout         | 20             | The cassandra driver read from server timeout(seconds).                                                                                        |
+| cassandra.keyspace.strategy    | SimpleStrategy | The replication strategy of keyspace, valid value is SimpleStrategy or NetworkTopologyStrategy.                                                |
+| cassandra.keyspace.replication | [3]            | The keyspace replication factor of SimpleStrategy, like '[3]'. Or replicas in each datacenter of NetworkTopologyStrategy, like '[dc1:2,dc2:1]'.                |
+| cassandra.username             |                | The username to use to login to cassandra cluster.                                                                                             |
+| cassandra.password             |                | The password corresponding to cassandra.username.                                                                                              |
+| cassandra.compression_type     | none           | The compression algorithm of cassandra transport: none/snappy/lz4.                                                                             |
+| cassandra.jmx_port             | 7199           | The port of JMX API service for cassandra.                                                                                                                     |
+| cassandra.aggregation_timeout  | 43200          | The timeout in seconds of waiting for aggregation.                                                                                             |
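+
+Putting the options above together, a minimal Cassandra backend section in the graph's properties file (typically `hugegraph.properties`) could look like the following sketch; the host, credentials and replication settings are illustrative only:
+
+```properties
+backend=cassandra
+serializer=cassandra
+cassandra.host=cassandra-seed.example.com
+cassandra.port=9042
+cassandra.username=hugegraph
+cassandra.password=changeme
+# SimpleStrategy keyspace with a replication factor of 3
+cassandra.keyspace.strategy=SimpleStrategy
+cassandra.keyspace.replication=3
+```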
 
 ### ScyllaDB 后端配置项
 
-config option                  | default value | descrition
------------------------------- | ------------- | ------------------------------------------------------------------------------------------------
-backend                        |               | Must be set to `scylladb`.
-serializer                     |               | Must be set to `scylladb`.
+| config option | default value | description                |
+|---------------|---------------|----------------------------|
+| backend       |               | Must be set to `scylladb`. |
+| serializer    |               | Must be set to `scylladb`. |
 
 其它与 Cassandra 后端一致。
 
 ### RocksDB 后端配置项
 
-config option                                   | default value                                                                                                                        | descrition
------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
-backend                                         |                                                                                                                                      | Must be set to `rocksdb`.
-serializer                                      |                                                                                                                                      | Must be set to `binary`.
-rocksdb.data_disks                              | []                                                                                                                                   | The optimized disks for storing data of RocksDB. The format of each element: `STORE/TABLE: /path/disk`.Allowed keys are [g/vertex, g/edge_out, g/edge_in, g/vertex_label_index, g/edge_label_index, g/range_int_index, g/range_float_index, g/range_long_index, g/range_double_index, g/secondary_index, g/search_index, g/shard_index, g/unique_index, g/olap]
-rocksdb.data_path                               | rocksdb-data                                                                                                                         | The path for storing data of RocksDB.
-rocksdb.wal_path                                | rocksdb-data                                                                                                                         | The path for storing WAL of RocksDB.
-rocksdb.allow_mmap_reads                        | false                                                                                                                                | Allow the OS to mmap file for reading sst tables.
-rocksdb.allow_mmap_writes                       | false                                                                                                                                | Allow the OS to mmap file for writing.
-rocksdb.block_cache_capacity                    | 8388608                                                                                                                              | The amount of block cache in bytes that will be used by RocksDB, 0 means no block cache.
-rocksdb.bloom_filter_bits_per_key               | -1                                                                                                                                   | The bits per key in bloom filter, a good value is 10, which yields a filter with ~ 1% false positive rate, -1 means no bloom filter.
-rocksdb.bloom_filter_block_based_mode           | false                                                                                                                                | Use block based filter rather than full filter.
-rocksdb.bloom_filter_whole_key_filtering        | true                                                                                                                                 | True if place whole keys in the bloom filter, else place the prefix of keys.
-rocksdb.bottommost_compression                  | NO_COMPRESSION                                                                                                                       | The compression algorithm for the bottommost level of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
-rocksdb.bulkload_mode                           | false                                                                                                                                | Switch to the mode to bulk load data into RocksDB.
-rocksdb.cache_index_and_filter_blocks           | false                                                                                                                                | Indicating if we'd put index/filter blocks to the block cache.
-rocksdb.compaction_style                        | LEVEL                                                                                                                                | Set compaction style for RocksDB: LEVEL/UNIVERSAL/FIFO.
-rocksdb.compression                             | SNAPPY_COMPRESSION                                                                                                                   | The compression algorithm for compressing blocks of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
-rocksdb.compression_per_level                   | [NO_COMPRESSION, NO_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION] | The compression algorithms for different levels of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.
-rocksdb.delayed_write_rate                      | 16777216                                                                                                                             | The rate limit in bytes/s of user write requests when need to slow down if the compaction gets behind.
-rocksdb.log_level                               | INFO                                                                                                                                 | The info log level of RocksDB.
-rocksdb.max_background_jobs                     | 8                                                                                                                                    | Maximum number of concurrent background jobs, including flushes and compactions.
-rocksdb.level_compaction_dynamic_level_bytes    | false                                                                                                                                | Whether to enable level_compaction_dynamic_level_bytes, if it's enabled we give max_bytes_for_level_multiplier a priority against max_bytes_for_level_base, the bytes of base level is dynamic for a more predictable LSM tree, it is useful to limit worse case space amplification. Turning this feature on/off for an existing DB can cause unexpected LSM tree structure so it's not recommended.
-rocksdb.max_bytes_for_level_base                | 536870912                                                                                                                            | The upper-bound of the total size of level-1 files in bytes.
-rocksdb.max_bytes_for_level_multiplier          | 10.0                                                                                                                                 | The ratio between the total size of level (L+1) files and the total size of level L files for all L.
-rocksdb.max_open_files                          | -1                                                                                                                                   | The maximum number of open files that can be cached by RocksDB, -1 means no limit.
-rocksdb.max_subcompactions                      | 4                                                                                                                                    | The value represents the maximum number of threads per compaction job.
-rocksdb.max_write_buffer_number                 | 6                                                                                                                                    | The maximum number of write buffers that are built up in memory.
-rocksdb.max_write_buffer_number_to_maintain     | 0                                                                                                                                    | The total maximum number of write buffers to maintain in memory.
-rocksdb.min_write_buffer_number_to_merge        | 2                                                                                                                                    | The minimum number of write buffers that will be merged together.
-rocksdb.num_levels                              | 7                                                                                                                                    | Set the number of levels for this database.
-rocksdb.optimize_filters_for_hits               | false                                                                                                                                | This flag allows us to not store filters for the last level.
-rocksdb.optimize_mode                           | true                                                                                                                                 | Optimize for heavy workloads and big datasets.
-rocksdb.pin_l0_filter_and_index_blocks_in_cache | false                                                                                                                                | Indicating if we'd put index/filter blocks to the block cache.
-rocksdb.sst_path                                |                                                                                                                                      | The path for ingesting SST file into RocksDB.
-rocksdb.target_file_size_base                   | 67108864                                                                                                                             | The target file size for compaction in bytes.
-rocksdb.target_file_size_multiplier             | 1                                                                                                                                    | The size ratio between a level L file and a level (L+1) file.
-rocksdb.use_direct_io_for_flush_and_compaction  | false                                                                                                                                | Enable the OS to use direct read/writes in flush and compaction.
-rocksdb.use_direct_reads                        | false                                                                                                                                | Enable the OS to use direct I/O for reading sst tables.
-rocksdb.write_buffer_size                       | 134217728                                                                                                                            | Amount of data in bytes to build up in memory.
-rocksdb.max_manifest_file_size                  | 104857600                                                                                                                            | The max size of manifest file in bytes.
-rocksdb.skip_stats_update_on_db_open            | false                                                                                                                                | Whether to skip statistics update when opening the database, setting this flag true allows us to not update statistics.
-rocksdb.max_file_opening_threads                | 16                                                                                                                                   | The max number of threads used to open files.
-rocksdb.max_total_wal_size                      | 0                                                                                                                                    | Total size of WAL files in bytes. Once WALs exceed this size, we will start forcing the flush of column families related, 0 means no limit.
-rocksdb.db_write_buffer_size                    | 0                                                                                                                                    | Total size of write buffers in bytes across all column families, 0 means no limit.
-rocksdb.delete_obsolete_files_period            | 21600                                                                                                                                | The periodicity in seconds when obsolete files get deleted, 0 means always do full purge.
-rocksdb.hard_pending_compaction_bytes_limit     | 274877906944                                                                                                                         | The hard limit to impose on pending compaction in bytes.
-rocksdb.level0_file_num_compaction_trigger      | 2                                                                                                                                    | Number of files to trigger level-0 compaction.
-rocksdb.level0_slowdown_writes_trigger          | 20                                                                                                                                   | Soft limit on number of level-0 files for slowing down writes.
-rocksdb.level0_stop_writes_trigger              | 36                                                                                                                                   | Hard limit on number of level-0 files for stopping writes.
-rocksdb.soft_pending_compaction_bytes_limit     | 68719476736                                                                                                                          | The soft limit to impose on pending compaction in bytes.
+| config option                                   | default value                                                                                                                        | description                                                                                                                                                                                                                                                                                                                                                                                            |
+|-------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| backend                                         |                                                                                                                                      | Must be set to `rocksdb`.                                                                                                                                                                                                                                                                                                                                                                             |
+| serializer                                      |                                                                                                                                      | Must be set to `binary`.                                                                                                                                                                                                                                                                                                                                                                              |
+| rocksdb.data_disks                              | []                                                                                                                                   | The optimized disks for storing data of RocksDB. The format of each element: `STORE/TABLE: /path/disk`. Allowed keys are [g/vertex, g/edge_out, g/edge_in, g/vertex_label_index, g/edge_label_index, g/range_int_index, g/range_float_index, g/range_long_index, g/range_double_index, g/secondary_index, g/search_index, g/shard_index, g/unique_index, g/olap]                                      |
+| rocksdb.data_path                               | rocksdb-data                                                                                                                         | The path for storing data of RocksDB.                                                                                                                                                                                                                                                                                                                                                                 |
+| rocksdb.wal_path                                | rocksdb-data                                                                                                                         | The path for storing WAL of RocksDB.                                                                                                                                                                                                                                                                                                                                                                  |
+| rocksdb.allow_mmap_reads                        | false                                                                                                                                | Allow the OS to mmap file for reading sst tables.                                                                                                                                                                                                                                                                                                                                                     |
+| rocksdb.allow_mmap_writes                       | false                                                                                                                                | Allow the OS to mmap file for writing.                                                                                                                                                                                                                                                                                                                                                                |
+| rocksdb.block_cache_capacity                    | 8388608                                                                                                                              | The amount of block cache in bytes that will be used by RocksDB, 0 means no block cache.                                                                                                                                                                                                                                                                                                              |
+| rocksdb.bloom_filter_bits_per_key               | -1                                                                                                                                   | The bits per key in bloom filter, a good value is 10, which yields a filter with ~ 1% false positive rate, -1 means no bloom filter.                                                                                                                                                                                                                                                                  |
+| rocksdb.bloom_filter_block_based_mode           | false                                                                                                                                | Use block based filter rather than full filter.                                                                                                                                                                                                                                                                                                                                                       |
+| rocksdb.bloom_filter_whole_key_filtering        | true                                                                                                                                 | If true, place whole keys in the bloom filter; otherwise place only the prefix of keys.                                                                                                                                                                                                                                                                                                               |
+| rocksdb.bottommost_compression                  | NO_COMPRESSION                                                                                                                       | The compression algorithm for the bottommost level of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.                                                                                                                                                                                                                                                                          |
+| rocksdb.bulkload_mode                           | false                                                                                                                                | Switch to the mode to bulk load data into RocksDB.                                                                                                                                                                                                                                                                                                                                                    |
+| rocksdb.cache_index_and_filter_blocks           | false                                                                                                                                | Whether to put index/filter blocks into the block cache.                                                                                                                                                                                                                                                                                                                                              |
+| rocksdb.compaction_style                        | LEVEL                                                                                                                                | Set compaction style for RocksDB: LEVEL/UNIVERSAL/FIFO.                                                                                                                                                                                                                                                                                                                                               |
+| rocksdb.compression                             | SNAPPY_COMPRESSION                                                                                                                   | The compression algorithm for compressing blocks of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.                                                                                                                                                                                                                                                                            |
+| rocksdb.compression_per_level                   | [NO_COMPRESSION, NO_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION, SNAPPY_COMPRESSION] | The compression algorithms for different levels of RocksDB, allowed values are none/snappy/z/bzip2/lz4/lz4hc/xpress/zstd.                                                                                                                                                                                                                                                                             |
+| rocksdb.delayed_write_rate                      | 16777216                                                                                                                             | The rate limit in bytes/s applied to user write requests when they need to be slowed down because compaction falls behind.                                                                                                                                                                                                                                                                            |
+| rocksdb.log_level                               | INFO                                                                                                                                 | The info log level of RocksDB.                                                                                                                                                                                                                                                                                                                                                                        |
+| rocksdb.max_background_jobs                     | 8                                                                                                                                    | Maximum number of concurrent background jobs, including flushes and compactions.                                                                                                                                                                                                                                                                                                                      |
+| rocksdb.level_compaction_dynamic_level_bytes    | false                                                                                                                                | Whether to enable level_compaction_dynamic_level_bytes. If enabled, max_bytes_for_level_multiplier takes priority over max_bytes_for_level_base and the size of the base level becomes dynamic, yielding a more predictable LSM tree; this is useful to limit worst-case space amplification. Toggling this feature on/off for an existing DB can cause an unexpected LSM tree structure, so it is not recommended. |
+| rocksdb.max_bytes_for_level_base                | 536870912                                                                                                                            | The upper-bound of the total size of level-1 files in bytes.                                                                                                                                                                                                                                                                                                                                          |
+| rocksdb.max_bytes_for_level_multiplier          | 10.0                                                                                                                                 | The ratio between the total size of level (L+1) files and the total size of level L files for all L.                                                                                                                                                                                                                                                                                                  |
+| rocksdb.max_open_files                          | -1                                                                                                                                   | The maximum number of open files that can be cached by RocksDB, -1 means no limit.                                                                                                                                                                                                                                                                                                                    |
+| rocksdb.max_subcompactions                      | 4                                                                                                                                    | The value represents the maximum number of threads per compaction job.                                                                                                                                                                                                                                                                                                                                |
+| rocksdb.max_write_buffer_number                 | 6                                                                                                                                    | The maximum number of write buffers that are built up in memory.                                                                                                                                                                                                                                                                                                                                      |
+| rocksdb.max_write_buffer_number_to_maintain     | 0                                                                                                                                    | The total maximum number of write buffers to maintain in memory.                                                                                                                                                                                                                                                                                                                                      |
+| rocksdb.min_write_buffer_number_to_merge        | 2                                                                                                                                    | The minimum number of write buffers that will be merged together.                                                                                                                                                                                                                                                                                                                                     |
+| rocksdb.num_levels                              | 7                                                                                                                                    | Set the number of levels for this database.                                                                                                                                                                                                                                                                                                                                                           |
+| rocksdb.optimize_filters_for_hits               | false                                                                                                                                | This flag allows us to not store filters for the last level.                                                                                                                                                                                                                                                                                                                                          |
+| rocksdb.optimize_mode                           | true                                                                                                                                 | Optimize for heavy workloads and big datasets.                                                                                                                                                                                                                                                                                                                                                        |
+| rocksdb.pin_l0_filter_and_index_blocks_in_cache | false                                                                                                                                | Whether to pin level-0 index/filter blocks in the block cache.                                                                                                                                                                                                                                                                                                                                        |
+| rocksdb.sst_path                                |                                                                                                                                      | The path for ingesting SST file into RocksDB.                                                                                                                                                                                                                                                                                                                                                         |
+| rocksdb.target_file_size_base                   | 67108864                                                                                                                             | The target file size for compaction in bytes.                                                                                                                                                                                                                                                                                                                                                         |
+| rocksdb.target_file_size_multiplier             | 1                                                                                                                                    | The size ratio between a level L file and a level (L+1) file.                                                                                                                                                                                                                                                                                                                                         |
+| rocksdb.use_direct_io_for_flush_and_compaction  | false                                                                                                                                | Enable the OS to use direct read/writes in flush and compaction.                                                                                                                                                                                                                                                                                                                                      |
+| rocksdb.use_direct_reads                        | false                                                                                                                                | Enable the OS to use direct I/O for reading sst tables.                                                                                                                                                                                                                                                                                                                                               |
+| rocksdb.write_buffer_size                       | 134217728                                                                                                                            | Amount of data in bytes to build up in memory.                                                                                                                                                                                                                                                                                                                                                        |
+| rocksdb.max_manifest_file_size                  | 104857600                                                                                                                            | The max size of manifest file in bytes.                                                                                                                                                                                                                                                                                                                                                               |
+| rocksdb.skip_stats_update_on_db_open            | false                                                                                                                                | Whether to skip the statistics update when opening the database; setting this flag to true avoids updating statistics on open.                                                                                                                                                                                                                                                                        |
+| rocksdb.max_file_opening_threads                | 16                                                                                                                                   | The max number of threads used to open files.                                                                                                                                                                                                                                                                                                                                                         |
+| rocksdb.max_total_wal_size                      | 0                                                                                                                                    | Total size of WAL files in bytes. Once WALs exceed this size, we will start forcing the flush of column families related, 0 means no limit.                                                                                                                                                                                                                                                           |
+| rocksdb.db_write_buffer_size                    | 0                                                                                                                                    | Total size of write buffers in bytes across all column families, 0 means no limit.                                                                                                                                                                                                                                                                                                                    |
+| rocksdb.delete_obsolete_files_period            | 21600                                                                                                                                | The periodicity in seconds when obsolete files get deleted, 0 means always do full purge.                                                                                                                                                                                                                                                                                                             |
+| rocksdb.hard_pending_compaction_bytes_limit     | 274877906944                                                                                                                         | The hard limit to impose on pending compaction in bytes.                                                                                                                                                                                                                                                                                                                                              |
+| rocksdb.level0_file_num_compaction_trigger      | 2                                                                                                                                    | Number of files to trigger level-0 compaction.                                                                                                                                                                                                                                                                                                                                                        |
+| rocksdb.level0_slowdown_writes_trigger          | 20                                                                                                                                   | Soft limit on number of level-0 files for slowing down writes.                                                                                                                                                                                                                                                                                                                                        |
+| rocksdb.level0_stop_writes_trigger              | 36                                                                                                                                   | Hard limit on number of level-0 files for stopping writes.                                                                                                                                                                                                                                                                                                                                            |
+| rocksdb.soft_pending_compaction_bytes_limit     | 68719476736                                                                                                                          | The soft limit to impose on pending compaction in bytes.                                                                                                                                                                                                                                                                                                                                              |
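+
+The options above go in the graph's properties file. A minimal illustrative snippet (the values are examples rather than recommendations, and the `backend`/`serializer` values shown are assumptions based on a common RocksDB setup):
+
+```properties
+backend=rocksdb
+serializer=binary
+# 8 MB block cache; 0 disables the block cache
+rocksdb.block_cache_capacity=8388608
+# 10 bits per key gives ~1% false positive rate; -1 disables the bloom filter
+rocksdb.bloom_filter_bits_per_key=10
+# maximum concurrent background jobs (flushes + compactions)
+rocksdb.max_background_jobs=8
+rocksdb.compression=SNAPPY_COMPRESSION
+```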
 
 ### HBase 后端配置项
 
-config option            | default value               | descrition
------------------------- | --------------------------- | -------------------------------------------------------------------------------
-backend                  |                             | Must be set to `hbase`.
-serializer               |                             | Must be set to `hbase`.
-hbase.hosts              | localhost                   | The hostnames or ip addresses of HBase zookeeper, separated with commas. 
-hbase.port               | 2181                        | The port address of HBase zookeeper.
-hbase.threads_max        | 64                          | The max threads num of hbase connections.
-hbase.znode_parent       | /hbase                      | The znode parent path of HBase zookeeper.
-hbase.zk_retry           | 3                           | The recovery retry times of HBase zookeeper.
-hbase.aggregation_timeout |  43200                     | The timeout in seconds of waiting for aggregation.
-hbase.kerberos_enable    |  false                      | Is Kerberos authentication enabled for HBase.
-hbase.kerberos_keytab    |                             | The HBase's key tab file for kerberos authentication.
-hbase.kerberos_principal |                             | The HBase's principal for kerberos authentication.
-hbase.krb5_conf          |  etc/krb5.conf              | Kerberos configuration file, including KDC IP, default realm, etc.
-hbase.hbase_site         | /etc/hbase/conf/hbase-site.xml| The HBase's configuration file
-hbase.enable_partition   | true                           | Is pre-split partitions enabled for HBase.
-hbase.vertex_partitions  | 10                             | The number of partitions of the HBase vertex table.
-hbase.edge_partitions    | 30                             | The number of partitions of the HBase edge table.
+| config option             | default value                  | description                                                             |
+|---------------------------|--------------------------------|-------------------------------------------------------------------------|
+| backend                   |                                | Must be set to `hbase`.                                                 |
+| serializer                |                                | Must be set to `hbase`.                                                 |
+| hbase.hosts               | localhost                      | The hostnames or IP addresses of HBase ZooKeeper, separated by commas.  |
+| hbase.port                | 2181                           | The port of HBase ZooKeeper.                                            |
+| hbase.threads_max         | 64                             | The maximum number of threads for HBase connections.                    |
+| hbase.znode_parent        | /hbase                         | The parent znode path of HBase ZooKeeper.                               |
+| hbase.zk_retry            | 3                              | The number of recovery retries for HBase ZooKeeper.                     |
+| hbase.aggregation_timeout | 43200                          | The timeout in seconds when waiting for aggregation.                    |
+| hbase.kerberos_enable     | false                          | Whether Kerberos authentication is enabled for HBase.                   |
+| hbase.kerberos_keytab     |                                | The HBase keytab file for Kerberos authentication.                      |
+| hbase.kerberos_principal  |                                | The HBase principal for Kerberos authentication.                        |
+| hbase.krb5_conf           | etc/krb5.conf                  | Kerberos configuration file, including KDC IP, default realm, etc.      |
+| hbase.hbase_site          | /etc/hbase/conf/hbase-site.xml | The HBase configuration file.                                           |
+| hbase.enable_partition    | true                           | Whether pre-split partitions are enabled for HBase.                     |
+| hbase.vertex_partitions   | 10                             | The number of partitions of the HBase vertex table.                     |
+| hbase.edge_partitions     | 30                             | The number of partitions of the HBase edge table.                       |
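+
+As a hedged example, an HBase backend section of the properties file could look like the following (the ZooKeeper hostnames are hypothetical placeholders):
+
+```properties
+backend=hbase
+serializer=hbase
+# comma-separated HBase ZooKeeper hosts
+hbase.hosts=zk1,zk2,zk3
+hbase.port=2181
+hbase.znode_parent=/hbase
+# pre-split partitions for the vertex/edge tables
+hbase.enable_partition=true
+hbase.vertex_partitions=10
+hbase.edge_partitions=30
+```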
 
 ### MySQL & PostgreSQL 后端配置项
 
-config option            | default value               | descrition
------------------------- | --------------------------- | -------------------------------------------------------------------------------
-backend                  |                             | Must be set to `mysql`.
-serializer               |                             | Must be set to `mysql`.
-jdbc.driver              | com.mysql.jdbc.Driver       | The JDBC driver class to connect database.
-jdbc.url                 | jdbc:mysql://127.0.0.1:3306 | The url of database in JDBC format.
-jdbc.username            | root                        | The username to login database.
-jdbc.password            | ******                      | The password corresponding to jdbc.username.
-jdbc.ssl_mode            | false                       | The SSL mode of connections with database.
-jdbc.reconnect_interval  | 3                           | The interval(seconds) between reconnections when the database connection fails.
-jdbc.reconnect_max_times | 3                           | The reconnect times when the database connection fails.
-jdbc.storage_engine      | InnoDB                      | The storage engine of backend store database, like InnoDB/MyISAM/RocksDB for MySQL.
-jdbc.postgresql.connect_database | template1           | The database used to connect when init store, drop store or check store exist.
+| config option                    | default value               | description                                                                                     |
+|----------------------------------|-----------------------------|-------------------------------------------------------------------------------------------------|
+| backend                          |                             | Must be set to `mysql`.                                                                         |
+| serializer                       |                             | Must be set to `mysql`.                                                                         |
+| jdbc.driver                      | com.mysql.jdbc.Driver       | The JDBC driver class used to connect to the database.                                          |
+| jdbc.url                         | jdbc:mysql://127.0.0.1:3306 | The URL of the database in JDBC format.                                                         |
+| jdbc.username                    | root                        | The username used to log in to the database.                                                    |
+| jdbc.password                    | ******                      | The password corresponding to jdbc.username.                                                    |
+| jdbc.ssl_mode                    | false                       | The SSL mode of connections with the database.                                                  |
+| jdbc.reconnect_interval          | 3                           | The interval (in seconds) between reconnections when the database connection fails.             |
+| jdbc.reconnect_max_times         | 3                           | The maximum number of reconnection attempts when the database connection fails.                 |
+| jdbc.storage_engine              | InnoDB                      | The storage engine of the backend database, e.g. InnoDB/MyISAM/RocksDB for MySQL.               |
+| jdbc.postgresql.connect_database | template1                   | The database used to connect when initializing, dropping, or checking the existence of a store. |
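+
+For instance, a MySQL backend section built from the defaults above might read as follows (the credentials are placeholders):
+
+```properties
+backend=mysql
+serializer=mysql
+jdbc.driver=com.mysql.jdbc.Driver
+jdbc.url=jdbc:mysql://127.0.0.1:3306
+jdbc.username=root
+jdbc.password=******
+# retry behavior when the database connection fails
+jdbc.reconnect_interval=3
+jdbc.reconnect_max_times=3
+```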
 
 ### PostgreSQL 后端配置项
 
-config option            | default value               | descrition
------------------------- | --------------------------- | -------------------------------------------------------------------------------
-backend                  |                             | Must be set to `postgresql`.
-serializer               |                             | Must be set to `postgresql`.
+| config option | default value | description                   |
+|---------------|---------------|------------------------------|
+| backend       |               | Must be set to `postgresql`. |
+| serializer    |               | Must be set to `postgresql`. |
 
 其它与 MySQL 后端一致。
 
diff --git a/content/cn/docs/download/download.md b/content/cn/docs/download/download.md
index c4093e4..c6b6cd6 100644
--- a/content/cn/docs/download/download.md
+++ b/content/cn/docs/download/download.md
@@ -8,26 +8,26 @@
 
 The latest HugeGraph: **0.12.0**, released on _2021-12-31_.
 
-components       | description          | download
----------------- | -------------------- | ----------------------------------------------------------------------------------------------------------------
-HugeGraph-Server | HugeGraph的主程序      | [0.12.0](https://github.com/hugegraph/hugegraph/releases/download/v0.12.0/hugegraph-0.12.0.tar.gz)
-HugeGraph-Hubble | 基于Web的可视化图形界面  | [1.6.0](https://github.com/hugegraph/hugegraph-hubble/releases/download/v1.6.0/hugegraph-hubble-1.6.0.tar.gz)
-HugeGraph-Loader | 数据导入工具            | [0.12.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.12.0/hugegraph-loader-0.12.0.tar.gz)
-HugeGraph-Tools  | 命令行工具集            | [1.6.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.6.0/hugegraph-tools-1.6.0.tar.gz)
+| components       | description                      | download                                                                                                         |
+|------------------|----------------------------------|------------------------------------------------------------------------------------------------------------------|
+| HugeGraph-Server | The main program of HugeGraph    | [0.12.0](https://github.com/hugegraph/hugegraph/releases/download/v0.12.0/hugegraph-0.12.0.tar.gz)               |
+| HugeGraph-Hubble | Web-based visual graph interface | [1.6.0](https://github.com/hugegraph/hugegraph-hubble/releases/download/v1.6.0/hugegraph-hubble-1.6.0.tar.gz)    |
+| HugeGraph-Loader | Data import tool                 | [0.12.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.12.0/hugegraph-loader-0.12.0.tar.gz) |
+| HugeGraph-Tools  | Command-line toolset             | [1.6.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.6.0/hugegraph-tools-1.6.0.tar.gz)      |
 
 ### Versions mapping
 
-server                                                                                           | client | loader                                                                                                                                                                      | hubble                                                                                                             | common | tools |
------------------------------------------------------------------------------------------------- | ------ | --------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------ | -----  | -----------------------------------------------------------------------------------------------------------
-[0.12.0](https://github.com/hugegraph/hugegraph/releases/download/v0.12.0/hugegraph-0.12.0.tar.gz)  | [2.0.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/2.0.1)  | [0.12.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.12.0/hugegraph-loader-0.12.0.tar.gz)   | [1.6.0](https://github.com/hugegraph/hugegraph-hubble/releases/download/v1.6.0/hugegraph-hubble-1.6.0.tar.gz)       | [2.0.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-common/2.0.1)  | [1.6.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.6.0/hugegraph-tools-1.6.0.tar.gz)
-[0.11.2](https://github.com/hugegraph/hugegraph/releases/download/v0.11.2/hugegraph-0.11.2.tar.gz)  | [1.9.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.9.1)  | [0.11.1](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.11.1/hugegraph-loader-0.11.1.tar.gz)   | [1.5.0](https://github.com/hugegraph/hugegraph-hubble/releases/download/v1.5.0/hugegraph-hubble-1.5.0.tar.gz)       | [1.8.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-common/1.8.1)  | [1.5.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.5.0/hugegraph-tools-1.5.0.tar.gz)
-[0.10.4](https://github.com/hugegraph/hugegraph/releases/download/v0.10.4/hugegraph-0.10.4.tar.gz)  | [1.8.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.8.0)  | [0.10.1](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.10.1/hugegraph-loader-0.10.1.tar.gz)   |  [0.10.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.10.0/hugegraph-studio-0.10.0.tar.gz)      | [1.6.16](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-common/1.6.16)  | [1.4.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.4.0/hugegraph-tools-1.4.0.tar.gz)
-[0.9.2](https://github.com/hugegraph/hugegraph/releases/download/v0.9.2/hugegraph-0.9.2.tar.gz)  | [1.7.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.7.0)  | [0.9.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.9.0/hugegraph-loader-0.9.0.tar.gz)   | [0.9.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.9.0/hugegraph-studio-0.9.0.tar.gz)               | [1.6.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-common/1.6.0)  | [1.3.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.3.0/hugegraph-tools-1.3.0.tar.gz)
-[0.8.0](https://github.com/hugegraph/hugegraph/releases/download/v0.8.0/hugegraph-0.8.0.tar.gz)  | [1.6.4](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.6.4)  | [0.8.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.8.0/hugegraph-loader-0.8.0.tar.gz)   | [0.8.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.8.0/hugegraph-studio-0.8.0.tar.gz)               | [1.5.3](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-common/1.5.3)  | [1.2.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.2.0/hugegraph-tools-1.2.0.tar.gz)
-[0.7.4](https://github.com/hugegraph/hugegraph/releases/download/v0.7.4/hugegraph-0.7.4.tar.gz)  | [1.5.8](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.5.8)  | [0.7.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.7.0/hugegraph-loader-0.7.0.tar.gz)   |  [0.7.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.7.0/hugegraph-studio-0.7.0.tar.gz)               | [1.4.9](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-common/1.4.9)  | [1.1.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.1.0/hugegraph-tools-1.1.0.tar.gz)
-[0.6.1](https://github.com/hugegraph/hugegraph/releases/download/v0.6.1/hugegraph-0.6.1.tar.gz)  | [1.5.6](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.5.6)  | [0.6.1](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.6.1/hugegraph-loader-0.6.1.tar.gz)   | [0.6.1](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.6.1/hugegraph-studio-0.6.1.tar.gz)               | [1.4.3](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-common/1.4.3)  | [1.0.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.0.0/hugegraph-tools-1.0.0.tar.gz)
-[0.5.6](https://hugegraph.github.io/hugegraph-downloads/hugegraph-release-0.5.6-SNAPSHOT.tar.gz) | [1.5.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.5.0)  | [0.5.6](https://hugegraph.github.io/hugegraph-downloads/hugegraph-loader/hugegraph-loader-0.5.6-bin.tar.gz)     | [0.5.0](https://hugegraph.github.io/hugegraph-downloads/hugegraph-studio/hugestudio-release-0.5.0-SNAPSHOT.tar.gz) | 1.4.0  |
-[0.4.5](https://hugegraph.github.io/hugegraph-downloads/hugegraph-release-0.4.5-SNAPSHOT.tar.gz) | [1.4.7](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.4.7)  | [0.2.2](https://hugegraph.github.io/hugegraph-downloads/hugegraph-loader/hugegraph-loader-0.2.2-bin.tar.gz)     | [0.4.1](https://hugegraph.github.io/hugegraph-downloads/hugegraph-studio/hugestudio-release-0.4.1-SNAPSHOT.tar.gz) | 1.3.12 |
+| server                                                                                             | client                                                                                 | loader                                                                                                           | hubble                                                                                                             | common                                                                                   | tools                                                                                                       |
+|----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------------------------|
+| [0.12.0](https://github.com/hugegraph/hugegraph/releases/download/v0.12.0/hugegraph-0.12.0.tar.gz) | [2.0.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/2.0.1) | [0.12.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.12.0/hugegraph-loader-0.12.0.tar.gz) | [1.6.0](https://github.com/hugegraph/hugegraph-hubble/releases/download/v1.6.0/hugegraph-hubble-1.6.0.tar.gz)      | [2.0.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-common/2.0.1)   | [1.6.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.6.0/hugegraph-tools-1.6.0.tar.gz) |
+| [0.11.2](https://github.com/hugegraph/hugegraph/releases/download/v0.11.2/hugegraph-0.11.2.tar.gz) | [1.9.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.9.1) | [0.11.1](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.11.1/hugegraph-loader-0.11.1.tar.gz) | [1.5.0](https://github.com/hugegraph/hugegraph-hubble/releases/download/v1.5.0/hugegraph-hubble-1.5.0.tar.gz)      | [1.8.1](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-common/1.8.1)   | [1.5.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.5.0/hugegraph-tools-1.5.0.tar.gz) |
+| [0.10.4](https://github.com/hugegraph/hugegraph/releases/download/v0.10.4/hugegraph-0.10.4.tar.gz) | [1.8.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.8.0) | [0.10.1](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.10.1/hugegraph-loader-0.10.1.tar.gz) | [0.10.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.10.0/hugegraph-studio-0.10.0.tar.gz)   | [1.6.16](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-common/1.6.16) | [1.4.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.4.0/hugegraph-tools-1.4.0.tar.gz) |
+| [0.9.2](https://github.com/hugegraph/hugegraph/releases/download/v0.9.2/hugegraph-0.9.2.tar.gz)    | [1.7.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.7.0) | [0.9.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.9.0/hugegraph-loader-0.9.0.tar.gz)    | [0.9.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.9.0/hugegraph-studio-0.9.0.tar.gz)      | [1.6.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-common/1.6.0)   | [1.3.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.3.0/hugegraph-tools-1.3.0.tar.gz) |
+| [0.8.0](https://github.com/hugegraph/hugegraph/releases/download/v0.8.0/hugegraph-0.8.0.tar.gz)    | [1.6.4](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.6.4) | [0.8.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.8.0/hugegraph-loader-0.8.0.tar.gz)    | [0.8.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.8.0/hugegraph-studio-0.8.0.tar.gz)      | [1.5.3](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-common/1.5.3)   | [1.2.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.2.0/hugegraph-tools-1.2.0.tar.gz) |
+| [0.7.4](https://github.com/hugegraph/hugegraph/releases/download/v0.7.4/hugegraph-0.7.4.tar.gz)    | [1.5.8](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.5.8) | [0.7.0](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.7.0/hugegraph-loader-0.7.0.tar.gz)    | [0.7.0](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.7.0/hugegraph-studio-0.7.0.tar.gz)      | [1.4.9](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-common/1.4.9)   | [1.1.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.1.0/hugegraph-tools-1.1.0.tar.gz) |
+| [0.6.1](https://github.com/hugegraph/hugegraph/releases/download/v0.6.1/hugegraph-0.6.1.tar.gz)    | [1.5.6](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.5.6) | [0.6.1](https://github.com/hugegraph/hugegraph-loader/releases/download/v0.6.1/hugegraph-loader-0.6.1.tar.gz)    | [0.6.1](https://github.com/hugegraph/hugegraph-studio/releases/download/v0.6.1/hugegraph-studio-0.6.1.tar.gz)      | [1.4.3](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-common/1.4.3)   | [1.0.0](https://github.com/hugegraph/hugegraph-tools/releases/download/v1.0.0/hugegraph-tools-1.0.0.tar.gz) |
+| [0.5.6](https://hugegraph.github.io/hugegraph-downloads/hugegraph-release-0.5.6-SNAPSHOT.tar.gz)   | [1.5.0](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.5.0) | [0.5.6](https://hugegraph.github.io/hugegraph-downloads/hugegraph-loader/hugegraph-loader-0.5.6-bin.tar.gz)      | [0.5.0](https://hugegraph.github.io/hugegraph-downloads/hugegraph-studio/hugestudio-release-0.5.0-SNAPSHOT.tar.gz) | 1.4.0                                                                                    |                                                                                                             |
+| [0.4.5](https://hugegraph.github.io/hugegraph-downloads/hugegraph-release-0.4.5-SNAPSHOT.tar.gz)   | [1.4.7](https://mvnrepository.com/artifact/com.baidu.hugegraph/hugegraph-client/1.4.7) | [0.2.2](https://hugegraph.github.io/hugegraph-downloads/hugegraph-loader/hugegraph-loader-0.2.2-bin.tar.gz)      | [0.4.1](https://hugegraph.github.io/hugegraph-downloads/hugegraph-studio/hugestudio-release-0.4.1-SNAPSHOT.tar.gz) | 1.3.12                                                                                   |                                                                                                             |
 
 > Note: the latest graph analysis and visualization platform is hubble, which supports server versions 0.10 and later; studio is the analysis and visualization platform for server versions 0.10.x and earlier, and its features are no longer updated as of 0.10.
 ### Release Notes
diff --git a/content/cn/docs/guides/custom-plugin.md b/content/cn/docs/guides/custom-plugin.md
index 59a750a..7a74ae7 100644
--- a/content/cn/docs/guides/custom-plugin.md
+++ b/content/cn/docs/guides/custom-plugin.md
@@ -259,7 +259,7 @@
 }
 ```
  
-#### 3 实现插件接口,并进行注册
+#### 3. Implement the plugin interface and register it
 
 The plugin registration entry point is `HugeGraphPlugin.register()`; a custom plugin must implement this interface method and register the extensions defined above inside it.
 The interface `com.baidu.hugegraph.plugin.HugeGraphPlugin` is defined as follows:
@@ -304,13 +304,13 @@
 }
 ```
 
-#### 4 配置SPI入口
+#### 4. Configure the SPI entry point
 
 1. Make sure the services directory exists: hugegraph-plugin-demo/resources/META-INF/services
 2. Create a text file named com.baidu.hugegraph.plugin.HugeGraphPlugin under the services directory
 3. The file content is: com.baidu.hugegraph.plugin.DemoPlugin
  
-#### 5 打Jar包
+#### 5. Build the Jar package
 
 Package with Maven by running `mvn package` in the project directory; the Jar file will be generated under the target directory.
 To use it, copy the Jar into the `plugins` directory and restart the service for it to take effect.
\ No newline at end of file
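
The SPI entry from step 4 boils down to one provider-configuration file; a shell sketch of creating and verifying it (paths assume the demo project layout named in that step):

```shell
# Create the SPI services directory and the provider file that maps the
# plugin interface to the concrete implementation class
mkdir -p hugegraph-plugin-demo/resources/META-INF/services
echo "com.baidu.hugegraph.plugin.DemoPlugin" \
  > hugegraph-plugin-demo/resources/META-INF/services/com.baidu.hugegraph.plugin.HugeGraphPlugin
cat hugegraph-plugin-demo/resources/META-INF/services/com.baidu.hugegraph.plugin.HugeGraphPlugin
```

At runtime, Java's `ServiceLoader` reads this file from the Jar to discover and instantiate the plugin class.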
diff --git a/content/cn/docs/guides/faq.md b/content/cn/docs/guides/faq.md
index 750493f..9503ea8 100644
--- a/content/cn/docs/guides/faq.md
+++ b/content/cn/docs/guides/faq.md
@@ -38,7 +38,7 @@
 
 - After the service starts successfully, querying all vertices with `curl` returns garbled output
 
-  服务端返回的批量顶点/边是压缩(gzip)过的,可以使用管道重定向至gunzip进行解压(`curl http://example | gunzip`),也可以用`Firefox`的`postman`或者`Chrome`浏览器的`restlet`插件发请求,会自动解压缩响应数据。
+  The batched vertices/edges returned by the server are gzip-compressed. You can pipe the response to `gunzip` to decompress it (`curl http://example | gunzip`), or send the request with `Firefox`'s `postman` or `Chrome`'s `restlet` plugin, which decompresses the response automatically.
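
  The pipe-to-`gunzip` step can be checked locally without a running server; a sketch (the JSON body is made up for illustration):

  ```shell
  # Compress a fake response body the way the server would, then decompress
  # it through a pipe, mirroring `curl http://example | gunzip`
  printf '{"vertices": []}' | gzip | gunzip
  ```

  `curl --compressed` achieves the same effect by asking curl to negotiate and decode the content encoding itself.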
 
 - Querying a vertex by Id via the `RESTful API` returns empty, but the vertex does exist
 
diff --git a/content/cn/docs/language/hugegraph-example.md b/content/cn/docs/language/hugegraph-example.md
index a103665..050b8ca 100644
--- a/content/cn/docs/language/hugegraph-example.md
+++ b/content/cn/docs/language/hugegraph-example.md
@@ -35,20 +35,20 @@
 
 There are two kinds of vertices in this relationship graph, characters (character) and locations (location), as shown in the table below:
 
-名称        | 类型     | 属性
---------- | ------ | -------------
-character | vertex | name,age,type
-location  | vertex | name
+| name      | type   | properties    |
+|-----------|--------|---------------|
+| character | vertex | name,age,type |
+| location  | vertex | name          |
 
 There are six kinds of relationships: father, mother, brother, battled, lives, and pet (owning a pet). The details of the relationship graph are as follows:
 
-名称      | 类型   | source vertex label | target vertex label | 属性
-------- | ---- | ------------------- | ------------------- | ------
-father  | edge | character           | character           | -
-mother  | edge | character           | character           | -
-brother | edge | character           | character           | -
-pet     | edge | character           | character           | -
-lives   | edge | character           | location            | reason
+| name    | type | source vertex label | target vertex label | properties |
+|---------|------|---------------------|---------------------|------------|
+| father  | edge | character           | character           | -      |
+| mother  | edge | character           | character           | -      |
+| brother | edge | character           | character           | -      |
+| pet     | edge | character           | character           | -      |
+| lives   | edge | character           | location            | reason |
 
 In HugeGraph, each edge label can only apply to one pair of source vertex label and target vertex label. That is, if a graph defines a relationship father connecting character to character, then father cannot connect any other vertex labels.
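
As a cross-check, the two tables above can be written out with the HugeGraph schema API; a groovy sketch (the property-key data types and the choice of `name` as primary key are assumptions, not stated in the tables):

```groovy
schema.propertyKey('name').asText().ifNotExist().create()
schema.propertyKey('age').asInt().ifNotExist().create()
schema.propertyKey('type').asText().ifNotExist().create()
schema.propertyKey('reason').asText().ifNotExist().create()

schema.vertexLabel('character').properties('name', 'age', 'type')
      .primaryKeys('name').ifNotExist().create()
schema.vertexLabel('location').properties('name')
      .primaryKeys('name').ifNotExist().create()

// each edge label binds exactly one (source, target) label pair
schema.edgeLabel('father').sourceLabel('character').targetLabel('character')
      .ifNotExist().create()
schema.edgeLabel('lives').sourceLabel('character').targetLabel('location')
      .properties('reason').ifNotExist().create()
```

The remaining edge labels (mother, brother, pet) follow the same pattern as father.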
 
@@ -125,7 +125,7 @@
 
 #### 3.1 Traversal Query
 
-**1\. Find the grand father of hercules**
+**1\. Find the grandfather of hercules**
 
 ```groovy
 g.V().hasLabel('character').has('name','hercules').out('father').out('father')
diff --git a/content/cn/docs/language/hugegraph-gremlin.md b/content/cn/docs/language/hugegraph-gremlin.md
index d231ba3..47b9ac6 100644
--- a/content/cn/docs/language/hugegraph-gremlin.md
+++ b/content/cn/docs/language/hugegraph-gremlin.md
@@ -18,107 +18,107 @@
 
 ### Graph Features
 
-Name                 | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                   | Support
--------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------
-Computer             | Determines if the {@code Graph} implementation supports {@link GraphComputer} based processing                                                                                                                                                                                                                                                                                                                                                                | false
-Transactions         | Determines if the {@code Graph} implementations supports transactions.                                                                                                                                                                                                                                                                                                                                                                                        | true
-Persistence          | Determines if the {@code Graph} implementation supports persisting it's contents natively to disk.This feature does not refer to every graph's ability to write to disk via the Gremlin IO packages(.e.g. GraphML), unless the graph natively persists to disk via those options somehow. For example,TinkerGraph does not support this feature as it is a pure in-sideEffects graph.                                                                         | true
-ThreadedTransactions | Determines if the {@code Graph} implementation supports threaded transactions which allow a transactionto be executed across multiple threads via {@link Transaction#createThreadedTx()}.                                                                                                                                                                                                                                                                     | false
-ConcurrentAccess     | Determines if the {@code Graph} implementation supports more than one connection to the same instance at the same time. For example, Neo4j embedded does not support this feature because concurrent access to the same database files by multiple instances is not possible. However, Neo4j HA could support this feature as each new {@code Graph} instance coordinates with the Neo4j cluster allowing multiple instances to operate on the same database. | false
+| Name                 | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                   | Support |
+|----------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
+| Computer             | Determines if the {@code Graph} implementation supports {@link GraphComputer} based processing                                                                                                                                                                                                                                                                                                                                                                | false   |
+| Transactions         | Determines if the {@code Graph} implementation supports transactions.                                                                                                                                                                                                                                                                                                          | true    |
+| Persistence          | Determines if the {@code Graph} implementation supports persisting its contents natively to disk. This feature does not refer to every graph's ability to write to disk via the Gremlin IO packages (e.g. GraphML), unless the graph natively persists to disk via those options somehow. For example, TinkerGraph does not support this feature as it is a pure in-sideEffects graph. | true    |
+| ThreadedTransactions | Determines if the {@code Graph} implementation supports threaded transactions which allow a transaction to be executed across multiple threads via {@link Transaction#createThreadedTx()}.                                                                                                                                                                                      | false   |
+| ConcurrentAccess     | Determines if the {@code Graph} implementation supports more than one connection to the same instance at the same time. For example, Neo4j embedded does not support this feature because concurrent access to the same database files by multiple instances is not possible. However, Neo4j HA could support this feature as each new {@code Graph} instance coordinates with the Neo4j cluster allowing multiple instances to operate on the same database. | false   |
 
 ### Vertex Features
 
-Name                     | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     | Support
------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------
-UserSuppliedIds          | Determines if an {@link Element} can have a user defined identifier. Implementation that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x} then this feature should return true. In this case, {@code x} is assumed to be an identifier datat ype that the {@link Graph} will accept. | false
-NumericIds               | Determines if an {@link Element} has numeric identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a numeric value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                          | false
-StringIds                | Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                           | false
-UuidIds                  | Determines if an {@link Element} has UUID identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a {@link UUID} value then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                         | false
-CustomIds                | Determines if an {@link Element} has a specific custom object as their internal representation.In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementations, such as OrientDB's {@code Rid}, then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                         | false
-AnyIds                   | Determines if an {@link Element} any Java object is a suitable identifier. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}.                                        | false
-AddProperty              | Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting "data types" and refers to support of calls to {@link Element#property(String, Object)}.                                                                                                                                                                                                                                                                                      | true
-RemoveProperty           | Determines if an {@link Element} allows properties to be removed.                                                                                                                                                                                                                                                                                                                                                                                                                               | true
-AddVertices              | Determines if a {@link Vertex} can be added to the {@code Graph}.                                                                                                                                                                                                                                                                                                                                                                                                                               | true
-MultiProperties          | Determines if a {@link Vertex} can support multiple properties with the same key.                                                                                                                                                                                                                                                                                                                                                                                                               | false
-DuplicateMultiProperties | Determines if a {@link Vertex} can support non-unique values on the same key. For this valueto be {@code true}, then {@link #supportsMetaProperties()} must also return true. By default this method, just returns what {@link #supportsMultiProperties()} returns.                                                                                                                                                                                                                             | false
-MetaProperties           | Determines if a {@link Vertex} can support properties on vertex properties. It is assumed that a graph will support all the same data types for meta-properties that are supported for regular properties.                                                                                                                                                                                                                                                                                      | false
-RemoveVertices           | Determines if a {@link Vertex} can be removed from the {@code Graph}.                                                                                                                                                                                                                                                                                                                                                                                                                           | true
+| Name                     | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     | Support |
+|--------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
+| UserSuppliedIds          | Determines if an {@link Element} can have a user-defined identifier. Implementations that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x}, then this feature should return true. In this case, {@code x} is assumed to be an identifier data type that the {@link Graph} will accept. | false   |
+| NumericIds               | Determines if an {@link Element} has numeric identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a numeric value, then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                           | false   |
+| StringIds                | Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value, then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                             | false   |
+| UuidIds                  | Determines if an {@link Element} has UUID identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a {@link UUID} value, then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                         | false   |
+| CustomIds                | Determines if an {@link Element} has a specific custom object as their internal representation. In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementation, such as OrientDB's {@code Rid}, then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                           | false   |
+| AnyIds                   | Determines if any Java object is a suitable identifier for an {@link Element}. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}.                                     | false   |
+| AddProperty              | Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting "data types" and refers to support of calls to {@link Element#property(String, Object)}.                                                                                                                                                                                                                                                                                      | true    |
+| RemoveProperty           | Determines if an {@link Element} allows properties to be removed.                                                                                                                                                                                                                                                                                                                                                                                                                               | true    |
+| AddVertices              | Determines if a {@link Vertex} can be added to the {@code Graph}.                                                                                                                                                                                                                                                                                                                                                                                                                               | true    |
+| MultiProperties          | Determines if a {@link Vertex} can support multiple properties with the same key.                                                                                                                                                                                                                                                                                                                                                                                                               | false   |
+| DuplicateMultiProperties | Determines if a {@link Vertex} can support non-unique values on the same key. For this value to be {@code true}, {@link #supportsMetaProperties()} must also return {@code true}. By default, this method just returns what {@link #supportsMultiProperties()} returns.                                                                                                                                                                                                                         | false   |
+| MetaProperties           | Determines if a {@link Vertex} can support properties on vertex properties. It is assumed that a graph will support all the same data types for meta-properties that are supported for regular properties.                                                                                                                                                                                                                                                                                      | false   |
+| RemoveVertices           | Determines if a {@link Vertex} can be removed from the {@code Graph}.                                                                                                                                                                                                                                                                                                                                                                                                                           | true    |
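
The feature flags above are best checked at run time before relying on optional behaviour such as multi-properties. A minimal sketch, assuming a hypothetical `VertexFeatures` holder whose defaults mirror the table above (in real TinkerPop code, `graph.features().vertex()` plays this role):

```java
// Hypothetical stand-in for the vertex feature flags listed in the table;
// defaults match the "Support" column above.
class VertexFeatures {
    boolean supportsMultiProperties() { return false; }
    boolean supportsMetaProperties()  { return false; }
    boolean supportsAddVertices()     { return true; }
    boolean supportsRemoveVertices()  { return true; }
}

public class FeatureCheck {
    public static void main(String[] args) {
        VertexFeatures features = new VertexFeatures();
        // Guard optional behaviour instead of assuming support.
        if (features.supportsMultiProperties()) {
            System.out.println("multi-valued properties are available");
        } else {
            System.out.println("each property key holds a single value");
        }
    }
}
```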
 
 ### Edge Features
 
-Name            | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     | Support
---------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------
-UserSuppliedIds | Determines if an {@link Element} can have a user defined identifier. Implementation that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x} then this feature should return true. In this case, {@code x} is assumed to be an identifier datat ype that the {@link Graph} will accept. | false
-NumericIds      | Determines if an {@link Element} has numeric identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a numeric value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                          | false
-StringIds       | Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value then this method should be return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                           | false
-UuidIds         | Determines if an {@link Element} has UUID identifiers as their internal representation. In other words,if the value returned from {@link Element#id()} is a {@link UUID} value then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                         | false
-CustomIds       | Determines if an {@link Element} has a specific custom object as their internal representation.In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementations, such as OrientDB's {@code Rid}, then this method should be return {@code true}.Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                         | false
-AnyIds          | Determines if an {@link Element} any Java object is a suitable identifier. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}.                                        | false
-AddProperty     | Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting "data types" and refers to support of calls to {@link Element#property(String, Object)}.                                                                                                                                                                                                                                                                                      | true
-RemoveProperty  | Determines if an {@link Element} allows properties to be removed.                                                                                                                                                                                                                                                                                                                                                                                                                               | true
-AddEdges        | Determines if an {@link Edge} can be added to a {@code Vertex}.                                                                                                                                                                                                                                                                                                                                                                                                                                 | true
-RemoveEdges     | Determines if an {@link Edge} can be removed from a {@code Vertex}.                                                                                                                                                                                                                                                                                                                                                                                                                             | true
+| Name            | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     | Support |
+|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
+| UserSuppliedIds | Determines if an {@link Element} can have a user-defined identifier. Implementations that do not support this feature will be expected to auto-generate unique identifiers. In other words, if the {@link Graph} allows {@code graph.addVertex(id,x)} to work and thus set the identifier of the newly added {@link Vertex} to the value of {@code x}, then this feature should return true. In this case, {@code x} is assumed to be an identifier data type that the {@link Graph} will accept. | false   |
+| NumericIds      | Determines if an {@link Element} has numeric identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a numeric value, then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                           | false   |
+| StringIds       | Determines if an {@link Element} has string identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a string value, then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                             | false   |
+| UuidIds         | Determines if an {@link Element} has UUID identifiers as their internal representation. In other words, if the value returned from {@link Element#id()} is a {@link UUID} value, then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                                                                                         | false   |
+| CustomIds       | Determines if an {@link Element} has a specific custom object as their internal representation. In other words, if the value returned from {@link Element#id()} is a type defined by the graph implementation, such as OrientDB's {@code Rid}, then this method should return {@code true}. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite.                                                                           | false   |
+| AnyIds          | Determines if any Java object is a suitable identifier for an {@link Element}. TinkerGraph is a good example of a {@link Graph} that can support this feature, as it can use any {@link Object} as a value for the identifier. Note that this feature is most generally used for determining the appropriate tests to execute in the Gremlin Test Suite. This setting should only return {@code true} if {@link #supportsUserSuppliedIds()} is {@code true}.                                     | false   |
+| AddProperty     | Determines if an {@link Element} allows properties to be added. This feature is set independently from supporting "data types" and refers to support of calls to {@link Element#property(String, Object)}.                                                                                                                                                                                                                                                                                      | true    |
+| RemoveProperty  | Determines if an {@link Element} allows properties to be removed.                                                                                                                                                                                                                                                                                                                                                                                                                               | true    |
+| AddEdges        | Determines if an {@link Edge} can be added to a {@code Vertex}.                                                                                                                                                                                                                                                                                                                                                                                                                                 | true    |
+| RemoveEdges     | Determines if an {@link Edge} can be removed from a {@code Vertex}.                                                                                                                                                                                                                                                                                                                                                                                                                             | true    |
 
 ### Data Type Features
 
-Name               | Description                                                                                                                                                                                                                                                         | Support
------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -------
-BooleanValues      |                                                                                                                                                                                                                                                                     | true
-ByteValues         |                                                                                                                                                                                                                                                                     | true
-DoubleValues       |                                                                                                                                                                                                                                                                     | true
-FloatValues        |                                                                                                                                                                                                                                                                     | true
-IntegerValues      |                                                                                                                                                                                                                                                                     | true
-LongValues         |                                                                                                                                                                                                                                                                     | true
-MapValues          | Supports setting of a {@code Map} value. The assumption is that the {@code Map} can containarbitrary serializable values that may or may not be defined as a feature itself                                                                                         | false
-MixedListValues    | Supports setting of a {@code List} value. The assumption is that the {@code List} can containarbitrary serializable values that may or may not be defined as a feature itself. As this{@code List} is "mixed" it does not need to contain objects of the same type. | false
-BooleanArrayValues |                                                                                                                                                                                                                                                                     | false
-ByteArrayValues    |                                                                                                                                                                                                                                                                     | true
-DoubleArrayValues  |                                                                                                                                                                                                                                                                     | false
-FloatArrayValues   |                                                                                                                                                                                                                                                                     | false
-IntegerArrayValues |                                                                                                                                                                                                                                                                     | false
-LongArrayValues    |                                                                                                                                                                                                                                                                     | false
-SerializableValues |                                                                                                                                                                                                                                                                     | false
-StringArrayValues  |                                                                                                                                                                                                                                                                     | false
-StringValues       |                                                                                                                                                                                                                                                                     | true
-UniformListValues  | Supports setting of a {@code List} value. The assumption is that the {@code List} can containarbitrary serializable values that may or may not be defined as a feature itself. As this{@code List} is "uniform" it must contain objects of the same type.           | false
+| Name               | Description                                                                                                                                                                                                                                                         | Support |
+|--------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------|
+| BooleanValues      |                                                                                                                                                                                                                                                                     | true    |
+| ByteValues         |                                                                                                                                                                                                                                                                     | true    |
+| DoubleValues       |                                                                                                                                                                                                                                                                     | true    |
+| FloatValues        |                                                                                                                                                                                                                                                                     | true    |
+| IntegerValues      |                                                                                                                                                                                                                                                                     | true    |
+| LongValues         |                                                                                                                                                                                                                                                                     | true    |
+| MapValues          | Supports setting of a {@code Map} value. The assumption is that the {@code Map} can contain arbitrary serializable values that may or may not be defined as a feature itself.                                                                                        | false   |
+| MixedListValues    | Supports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this {@code List} is "mixed", it does not need to contain objects of the same type. | false   |
+| BooleanArrayValues |                                                                                                                                                                                                                                                                     | false   |
+| ByteArrayValues    |                                                                                                                                                                                                                                                                     | true    |
+| DoubleArrayValues  |                                                                                                                                                                                                                                                                     | false   |
+| FloatArrayValues   |                                                                                                                                                                                                                                                                     | false   |
+| IntegerArrayValues |                                                                                                                                                                                                                                                                     | false   |
+| LongArrayValues    |                                                                                                                                                                                                                                                                     | false   |
+| SerializableValues |                                                                                                                                                                                                                                                                     | false   |
+| StringArrayValues  |                                                                                                                                                                                                                                                                     | false   |
+| StringValues       |                                                                                                                                                                                                                                                                     | true    |
+| UniformListValues  | Supports setting of a {@code List} value. The assumption is that the {@code List} can contain arbitrary serializable values that may or may not be defined as a feature itself. As this {@code List} is "uniform", it must contain objects of the same type.           | false   |
 
 ### Gremlin Steps
 
 HugeGraph supports all Gremlin steps. For a complete reference on Gremlin, please refer to the [official Gremlin documentation](http://tinkerpop.apache.org/docs/current/reference/).
 
-步骤         | 说明                                                                                              | 文档
----------- | ----------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------
-addE       | 在两个顶点之间添加边                                                                                      | [addE step](http://tinkerpop.apache.org/docs/current/reference/#addedge-step)
-addV       | 将顶点添加到图形                                                                                        | [addV step](http://tinkerpop.apache.org/docs/current/reference/#addvertex-step)
-and        | 确保所有遍历都返回值                                                                                      | [and step](http://tinkerpop.apache.org/docs/current/reference/#add-step)
-as         | 用于向步骤的输出分配变量的步骤调制器                                                                              | [as step](http://tinkerpop.apache.org/docs/current/reference/#as-step)
-by         | 与`group`和`order`配合使用的步骤调制器                                                                      | [by step](http://tinkerpop.apache.org/docs/current/reference/#by-step)
-coalesce   | 返回第一个返回结果的遍历                                                                                    | [coalesce step](http://tinkerpop.apache.org/docs/current/reference/#coalesce-step)
-constant   | 返回常量值。 与`coalesce`配合使用                                                                          | [constant step](http://tinkerpop.apache.org/docs/current/reference/#constant-step)
-count      | 从遍历返回计数                                                                                         | [count step](http://tinkerpop.apache.org/docs/current/reference/#addedge-step)
-dedup      | 返回已删除重复内容的值                                                                                     | [dedup step](http://tinkerpop.apache.org/docs/current/reference/#dedup-step)
-drop       | 丢弃值(顶点/边缘)                                                                                      | [drop step](http://tinkerpop.apache.org/docs/current/reference/#drop-step)
-fold       | 充当用于计算结果聚合值的屏障                                                                                  | [fold step](http://tinkerpop.apache.org/docs/current/reference/#fold-step)
-group      | 根据指定的标签将值分组                                                                                     | [group step](http://tinkerpop.apache.org/docs/current/reference/#group-step)
-has        | 用于筛选属性、顶点和边缘。 支持`hasLabel`、`hasId`、`hasNot` 和 `has` 变体                                          | [has step](http://tinkerpop.apache.org/docs/current/reference/#has-step)
-inject     | 将值注入流中                                                                                          | [inject step](http://tinkerpop.apache.org/docs/current/reference/#inject-step)
-is         | 用于通过布尔表达式执行筛选器                                                                                  | [is step](http://tinkerpop.apache.org/docs/current/reference/#is-step)
-limit      | 用于限制遍历中的项数                                                                                      | [limit step](http://tinkerpop.apache.org/docs/current/reference/#limit-step)
-local      | 本地包装遍历的某个部分,类似于子查询                                                                              | [local step](http://tinkerpop.apache.org/docs/current/reference/#local-step)
-not        | 用于生成筛选器的求反结果                                                                                    | [not step](http://tinkerpop.apache.org/docs/current/reference/#not-step)
-optional   | 如果生成了某个结果,则返回指定遍历的结果,否则返回调用元素                                                                   | [optional step](http://tinkerpop.apache.org/docs/current/reference/#optional-step)
-or         | 确保至少有一个遍历会返回值                                                                                   | [or step](http://tinkerpop.apache.org/docs/current/reference/#or-step)
-order      | 按指定的排序顺序返回结果                                                                                    | [order step](http://tinkerpop.apache.org/docs/current/reference/#order-step)
-path       | 返回遍历的完整路径                                                                                       | [path step](http://tinkerpop.apache.org/docs/current/reference/#addedge-step)
-project    | 将属性投影为映射                                                                                        | [project step](http://tinkerpop.apache.org/docs/current/reference/#project-step)
-properties | 返回指定标签的属性                                                                                       | [properties step](http://tinkerpop.apache.org/docs/current/reference/#properties-step)
-range      | 根据指定的值范围进行筛选                                                                                    | [range step](http://tinkerpop.apache.org/docs/current/reference/#range-step)
-repeat     | 将步骤重复指定的次数。 用于循环                                                                                | [repeat step](http://tinkerpop.apache.org/docs/current/reference/#repeat-step)
-sample     | 用于对遍历返回的结果采样                                                                                    | [sample step](http://tinkerpop.apache.org/docs/current/reference/#sample-step)
-select     | 用于投影遍历返回的结果                                                                                     | [select step](http://tinkerpop.apache.org/docs/current/reference/#select-step)
-store      | 用于遍历返回的非阻塞聚合                                                                                    | [store step](http://tinkerpop.apache.org/docs/current/reference/#store-step)
-tree       | 将顶点中的路径聚合到树中                                                                                    | [tree step](http://tinkerpop.apache.org/docs/current/reference/#tree-step)
-unfold     | 将迭代器作为步骤展开                                                                                      | [unfold step](http://tinkerpop.apache.org/docs/current/reference/#unfold-step)
-union      | 合并多个遍历返回的结果                                                                                     | [union step](http://tinkerpop.apache.org/docs/current/reference/#union-step)
-V          | 包括顶点与边之间的遍历所需的步骤:`V`、`E`、`out`、`in`、`both`、`outE`、`inE`、`bothE`、`outV`、`inV`、`bothV` 和 `otherV` | [order step](http://tinkerpop.apache.org/docs/current/reference/#vertex-steps)
-where      | 用于筛选遍历返回的结果。 支持 `eq`、`neq`、`lt`、`lte`、`gt`、`gte` 和 `between` 运算符                                | [where step](http://tinkerpop.apache.org/docs/current/reference/#where-step)
+| 步骤         | 说明                                                                                              | 文档                                                                                     |
+|------------|-------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------|
+| addE       | 在两个顶点之间添加边                                                                                      | [addE step](http://tinkerpop.apache.org/docs/current/reference/#addedge-step)          |
+| addV       | 将顶点添加到图形                                                                                        | [addV step](http://tinkerpop.apache.org/docs/current/reference/#addvertex-step)        |
+| and        | 确保所有遍历都返回值                                                                                      | [and step](http://tinkerpop.apache.org/docs/current/reference/#and-step)               |
+| as         | 用于向步骤的输出分配变量的步骤调制器                                                                              | [as step](http://tinkerpop.apache.org/docs/current/reference/#as-step)                 |
+| by         | 与`group`和`order`配合使用的步骤调制器                                                                      | [by step](http://tinkerpop.apache.org/docs/current/reference/#by-step)                 |
+| coalesce   | 返回第一个返回结果的遍历                                                                                    | [coalesce step](http://tinkerpop.apache.org/docs/current/reference/#coalesce-step)     |
+| constant   | 返回常量值。 与`coalesce`配合使用                                                                          | [constant step](http://tinkerpop.apache.org/docs/current/reference/#constant-step)     |
+| count      | 从遍历返回计数                                                                                         | [count step](http://tinkerpop.apache.org/docs/current/reference/#count-step)           |
+| dedup      | 返回已删除重复内容的值                                                                                     | [dedup step](http://tinkerpop.apache.org/docs/current/reference/#dedup-step)           |
+| drop       | 丢弃值(顶点/边缘)                                                                                      | [drop step](http://tinkerpop.apache.org/docs/current/reference/#drop-step)             |
+| fold       | 充当用于计算结果聚合值的屏障                                                                                  | [fold step](http://tinkerpop.apache.org/docs/current/reference/#fold-step)             |
+| group      | 根据指定的标签将值分组                                                                                     | [group step](http://tinkerpop.apache.org/docs/current/reference/#group-step)           |
+| has        | 用于筛选属性、顶点和边缘。 支持`hasLabel`、`hasId`、`hasNot` 和 `has` 变体                                          | [has step](http://tinkerpop.apache.org/docs/current/reference/#has-step)               |
+| inject     | 将值注入流中                                                                                          | [inject step](http://tinkerpop.apache.org/docs/current/reference/#inject-step)         |
+| is         | 用于通过布尔表达式执行筛选器                                                                                  | [is step](http://tinkerpop.apache.org/docs/current/reference/#is-step)                 |
+| limit      | 用于限制遍历中的项数                                                                                      | [limit step](http://tinkerpop.apache.org/docs/current/reference/#limit-step)           |
+| local      | 本地包装遍历的某个部分,类似于子查询                                                                              | [local step](http://tinkerpop.apache.org/docs/current/reference/#local-step)           |
+| not        | 用于生成筛选器的求反结果                                                                                    | [not step](http://tinkerpop.apache.org/docs/current/reference/#not-step)               |
+| optional   | 如果生成了某个结果,则返回指定遍历的结果,否则返回调用元素                                                                   | [optional step](http://tinkerpop.apache.org/docs/current/reference/#optional-step)     |
+| or         | 确保至少有一个遍历会返回值                                                                                   | [or step](http://tinkerpop.apache.org/docs/current/reference/#or-step)                 |
+| order      | 按指定的排序顺序返回结果                                                                                    | [order step](http://tinkerpop.apache.org/docs/current/reference/#order-step)           |
+| path       | 返回遍历的完整路径                                                                                       | [path step](http://tinkerpop.apache.org/docs/current/reference/#path-step)             |
+| project    | 将属性投影为映射                                                                                        | [project step](http://tinkerpop.apache.org/docs/current/reference/#project-step)       |
+| properties | 返回指定标签的属性                                                                                       | [properties step](http://tinkerpop.apache.org/docs/current/reference/#properties-step) |
+| range      | 根据指定的值范围进行筛选                                                                                    | [range step](http://tinkerpop.apache.org/docs/current/reference/#range-step)           |
+| repeat     | 将步骤重复指定的次数。 用于循环                                                                                | [repeat step](http://tinkerpop.apache.org/docs/current/reference/#repeat-step)         |
+| sample     | 用于对遍历返回的结果采样                                                                                    | [sample step](http://tinkerpop.apache.org/docs/current/reference/#sample-step)         |
+| select     | 用于投影遍历返回的结果                                                                                     | [select step](http://tinkerpop.apache.org/docs/current/reference/#select-step)         |
+| store      | 用于遍历返回的非阻塞聚合                                                                                    | [store step](http://tinkerpop.apache.org/docs/current/reference/#store-step)           |
+| tree       | 将顶点中的路径聚合到树中                                                                                    | [tree step](http://tinkerpop.apache.org/docs/current/reference/#tree-step)             |
+| unfold     | 将迭代器作为步骤展开                                                                                      | [unfold step](http://tinkerpop.apache.org/docs/current/reference/#unfold-step)         |
+| union      | 合并多个遍历返回的结果                                                                                     | [union step](http://tinkerpop.apache.org/docs/current/reference/#union-step)           |
+| V          | 包括顶点与边之间的遍历所需的步骤:`V`、`E`、`out`、`in`、`both`、`outE`、`inE`、`bothE`、`outV`、`inV`、`bothV` 和 `otherV` | [vertex steps](http://tinkerpop.apache.org/docs/current/reference/#vertex-steps)       |
+| where      | 用于筛选遍历返回的结果。 支持 `eq`、`neq`、`lt`、`lte`、`gt`、`gte` 和 `between` 运算符                                | [where step](http://tinkerpop.apache.org/docs/current/reference/#where-step)           |
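+
+以上步骤可以自由组合成一条完整的 Gremlin 查询。下面是一个简单的组合示例(仅为示意,其中 `person`、`age`、`name` 等 schema 名称为假设,需替换为实际图中已定义的标签和属性):
+
+```groovy
+// 组合 has、order/by、limit、values 等步骤的查询示例
+g.V().hasLabel('person').      // has:按顶点标签筛选
+      has('age', gt(20)).      // has:按属性条件筛选
+      order().by('age', desc). // order + by:按 age 降序排列
+      limit(3).                // limit:限制返回的结果数
+      values('name')           // 取出 name 属性的值
+```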
diff --git a/content/cn/docs/performance/hugegraph-benchmark-0.4.4.md b/content/cn/docs/performance/hugegraph-benchmark-0.4.4.md
index 2e4bd58..08e951d 100644
--- a/content/cn/docs/performance/hugegraph-benchmark-0.4.4.md
+++ b/content/cn/docs/performance/hugegraph-benchmark-0.4.4.md
@@ -2,9 +2,9 @@
 
 #### 1.1 硬件信息
 
-CPU                                          | Memory | 网卡        | 磁盘
--------------------------------------------- | ------ | --------- | ---------
-48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G   | 10000Mbps | 750GB SSD
+| CPU                                          | Memory | 网卡        | 磁盘        |
+|----------------------------------------------|--------|-----------|-----------|
+| 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G   | 10000Mbps | 750GB SSD |
 
 #### 1.2 软件信息
 
@@ -40,16 +40,16 @@
 
 ###### 本测试用到的数据集规模
 
-名称                      | vertex数目  | edge数目    | 文件大小
------------------------ | --------- | --------- | ------
-email-enron.txt         | 36,691    | 367,661   | 4MB
-com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB
-amazon0601.txt          | 403,393   | 3,387,388 | 47.9MB
+| 名称                      | vertex数目  | edge数目    | 文件大小   |
+|-------------------------|-----------|-----------|--------|
+| email-enron.txt         | 36,691    | 367,661   | 4MB    |
+| com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB |
+| amazon0601.txt          | 403,393   | 3,387,388 | 47.9MB |
 
 #### 1.3 服务配置
 
 - HugeGraph版本:0.4.4,RestServer和Gremlin Server和backends都在同一台服务器上
-- Cassandra版本:cassandra-3.10,commitlog和data共用SSD
+- Cassandra版本:cassandra-3.10,commit-log 和 data 共用SSD
 - RocksDB版本:rocksdbjni-5.8.6
 - Titan版本:0.5.4, 使用thrift+Cassandra模式
 
@@ -59,12 +59,12 @@
 
 #### 2.1 Batch插入性能
 
-Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
---------- | ---------------- | ---------------- | -------------------------
-Titan     | 9.516            | 88.123           | 111.586
-RocksDB   | 2.345            | 14.076           | 16.636
-Cassandra | 11.930           | 108.709          | 101.959
-Memory    | 3.077            | 15.204           | 13.841
+| Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
+|-----------|------------------|------------------|---------------------------|
+| Titan     | 9.516            | 88.123           | 111.586                   |
+| RocksDB   | 2.345            | 14.076           | 16.636                    |
+| Cassandra | 11.930           | 108.709          | 101.959                   |
+| Memory    | 3.077            | 15.204           | 13.841                    |
 
 _说明_
 
@@ -86,12 +86,12 @@
 
 ##### 2.2.2 FN性能
 
-Backend   | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w)
---------- | ----------------- | --------------- | -------------------------
-Titan     | 7.724             | 70.935          | 128.884
-RocksDB   | 8.876             | 65.852          | 63.388
-Cassandra | 13.125            | 126.959         | 102.580
-Memory    | 22.309            | 207.411         | 165.609
+| Backend   | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) |
+|-----------|-------------------|-----------------|---------------------------|
+| Titan     | 7.724             | 70.935          | 128.884                   |
+| RocksDB   | 8.876             | 65.852          | 63.388                    |
+| Cassandra | 13.125            | 126.959         | 102.580                   |
+| Memory    | 22.309            | 207.411         | 165.609                   |
 
 _说明_
 
@@ -101,12 +101,12 @@
 
 ##### 2.2.3 FA性能
 
-Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
---------- | ---------------- | ---------------- | -------------------------
-Titan     | 7.119            | 63.353           | 115.633
-RocksDB   | 6.032            | 64.526           | 52.721
-Cassandra | 9.410            | 102.766          | 94.197
-Memory    | 12.340           | 195.444          | 140.89
+| Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
+|-----------|------------------|------------------|---------------------------|
+| Titan     | 7.119            | 63.353           | 115.633                   |
+| RocksDB   | 6.032            | 64.526           | 52.721                    |
+| Cassandra | 9.410            | 102.766          | 94.197                    |
+| Memory    | 12.340           | 195.444          | 140.89                    |
 
 _说明_
 
@@ -128,12 +128,12 @@
 
 ##### FS性能
 
-Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w)
---------- | ---------------- | ---------------- | -------------------------
-Titan     | 11.333           | 0.313            | 376.06
-RocksDB   | 44.391           | 2.221            | 268.792
-Cassandra | 39.845           | 3.337            | 331.113
-Memory    | 35.638           | 2.059            | 388.987
+| Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) |
+|-----------|------------------|------------------|---------------------------|
+| Titan     | 11.333           | 0.313            | 376.06                    |
+| RocksDB   | 44.391           | 2.221            | 268.792                   |
+| Cassandra | 39.845           | 3.337            | 331.113                   |
+| Memory    | 35.638           | 2.059            | 388.987                   |
 
 _说明_
 
@@ -180,12 +180,12 @@
 
 #### 2.4 图综合性能测试-CW
 
-数据库             | 规模1000 | 规模5000   | 规模10000  | 规模20000
---------------- | ------ | -------- | -------- | --------
-Titan           | 45.943 | 849.168  | 2737.117 | 9791.46
-Memory(core)    | 41.077 | 1825.905 | *        | *
-Cassandra(core) | 39.783 | 862.744  | 2423.136 | 6564.191
-RcoksDB(core)   | 33.383 | 199.894  | 763.869  | 1677.813
+| 数据库             | 规模1000 | 规模5000   | 规模10000  | 规模20000  |
+|-----------------|--------|----------|----------|----------|
+| Titan           | 45.943 | 849.168  | 2737.117 | 9791.46  |
+| Memory(core)    | 41.077 | 1825.905 | *        | *        |
+| Cassandra(core) | 39.783 | 862.744  | 2423.136 | 6564.191 |
+| RocksDB(core)   | 33.383 | 199.894  | 763.869  | 1677.813 |
 
 _说明_
 
diff --git a/content/cn/docs/performance/hugegraph-benchmark-0.5.6.md b/content/cn/docs/performance/hugegraph-benchmark-0.5.6.md
index d8a0ed6..bb3db47 100644
--- a/content/cn/docs/performance/hugegraph-benchmark-0.5.6.md
+++ b/content/cn/docs/performance/hugegraph-benchmark-0.5.6.md
@@ -8,9 +8,9 @@
 
 #### 1.1 硬件信息
 
-CPU                                          | Memory | 网卡        | 磁盘
--------------------------------------------- | ------ | --------- | ---------
-48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G   | 10000Mbps | 750GB SSD
+| CPU                                          | Memory | 网卡        | 磁盘        |
+|----------------------------------------------|--------|-----------|-----------|
+| 48 Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz | 128G   | 10000Mbps | 750GB SSD |
 
 #### 1.2 软件信息
 
@@ -46,12 +46,12 @@
 
 ###### 本测试用到的数据集规模
 
-名称                      | vertex数目  | edge数目    | 文件大小
------------------------ | --------- | --------- | ------
-email-enron.txt         | 36,691    | 367,661   | 4MB
-com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB
-amazon0601.txt          | 403,393   | 3,387,388 | 47.9MB
-com-lj.ungraph.txt      | 3997961   | 34681189  | 479MB
+| 名称                      | vertex数目  | edge数目    | 文件大小   |
+|-------------------------|-----------|-----------|--------|
+| email-enron.txt         | 36,691    | 367,661   | 4MB    |
+| com-youtube.ungraph.txt | 1,157,806 | 2,987,624 | 38.7MB |
+| amazon0601.txt          | 403,393   | 3,387,388 | 47.9MB |
+| com-lj.ungraph.txt      | 3997961   | 34681189  | 479MB  |
 
 #### 1.3 服务配置
 
@@ -61,7 +61,7 @@
 
 - Titan版本:0.5.4, 使用thrift+Cassandra模式
 
-  - Cassandra版本:cassandra-3.10,commitlog和data共用SSD
+  - Cassandra版本:cassandra-3.10,commit-log 和 data 共用SSD
 
 - Neo4j版本:2.0.1
 
@@ -71,11 +71,11 @@
 
 #### 2.1 Batch插入性能
 
-Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
---------- | ---------------- | ---------------- | ------------------------- | ---------------------
-HugeGraph | 0.629            | 5.711            | 5.243                     | 67.033
-Titan     | 10.15            | 108.569          | 150.266                   | 1217.944
-Neo4j     | 3.884            | 18.938           | 24.890                    | 281.537
+| Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
+|-----------|------------------|------------------|---------------------------|-----------------------|
+| HugeGraph | 0.629            | 5.711            | 5.243                     | 67.033                |
+| Titan     | 10.15            | 108.569          | 150.266                   | 1217.944              |
+| Neo4j     | 3.884            | 18.938           | 24.890                    | 281.537               |
 
 _说明_
 
@@ -96,11 +96,11 @@
 
 ##### 2.2.2 FN性能
 
-Backend   | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) | com-lj.ungraph(400w)
---------- | ----------------- | --------------- | ------------------------- | ---------------------
-HugeGraph | 4.072             | 45.118          | 66.006     | 609.083
-Titan     | 8.084             | 92.507          | 184.543    | 1099.371
-Neo4j     | 2.424             | 10.537          | 11.609     | 106.919
+| Backend   | email-enron(3.6w) | amazon0601(40w) | com-youtube.ungraph(120w) | com-lj.ungraph(400w) |
+|-----------|-------------------|-----------------|---------------------------|----------------------|
+| HugeGraph | 4.072             | 45.118          | 66.006                    | 609.083              |
+| Titan     | 8.084             | 92.507          | 184.543                   | 1099.371             |
+| Neo4j     | 2.424             | 10.537          | 11.609                    | 106.919              |
 
 _说明_
 
@@ -110,11 +110,11 @@
 
 ##### 2.2.3 FA性能
 
-Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
---------- | ---------------- | ---------------- | ------------------------- | ---------------------
-HugeGraph | 1.540             | 10.764          | 11.243     | 151.271
-Titan     | 7.361             | 93.344          | 169.218    | 1085.235
-Neo4j     | 1.673             | 4.775           | 4.284      | 40.507
+| Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
+|-----------|------------------|------------------|---------------------------|-----------------------|
+| HugeGraph | 1.540            | 10.764           | 11.243                    | 151.271               |
+| Titan     | 7.361            | 93.344           | 169.218                   | 1085.235              |
+| Neo4j     | 1.673            | 4.775            | 4.284                     | 40.507                |
 
 _说明_
 
@@ -136,11 +136,11 @@
 
 ##### FS性能
 
-Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w)
---------- | ---------------- | ---------------- | ------------------------- | ---------------------
-HugeGraph | 0.494            | 0.103            | 3.364      | 8.155
-Titan     | 11.818           | 0.239            | 377.709    | 575.678
-Neo4j     | 1.719            | 1.800            | 1.956      | 8.530
+| Backend   | email-enron(30w) | amazon0601(300w) | com-youtube.ungraph(300w) | com-lj.ungraph(3000w) |
+|-----------|------------------|------------------|---------------------------|-----------------------|
+| HugeGraph | 0.494            | 0.103            | 3.364                     | 8.155                 |
+| Titan     | 11.818           | 0.239            | 377.709                   | 575.678               |
+| Neo4j     | 1.719            | 1.800            | 1.956                     | 8.530                 |
 
 _说明_
 
@@ -187,11 +187,11 @@
 
 #### 2.4 图综合性能测试-CW
 
-数据库             | 规模1000 | 规模5000   | 规模10000  | 规模20000
---------------- | ------ | -------- | -------- | --------
-HugeGraph(core) | 20.804 | 242.099  |  744.780 | 1700.547
-Titan           | 45.790 | 820.633  | 2652.235 | 9568.623
-Neo4j           |  5.913 |  50.267  |  142.354 |  460.880
+| 数据库             | 规模1000 | 规模5000  | 规模10000  | 规模20000  |
+|-----------------|--------|---------|----------|----------|
+| HugeGraph(core) | 20.804 | 242.099 | 744.780  | 1700.547 |
+| Titan           | 45.790 | 820.633 | 2652.235 | 9568.623 |
+| Neo4j           | 5.913  | 50.267  | 142.354  | 460.880  |
 
 _说明_
 
diff --git a/content/cn/docs/quickstart/hugegraph-loader.md b/content/cn/docs/quickstart/hugegraph-loader.md
index 49ba937..4c07eda 100644
--- a/content/cn/docs/quickstart/hugegraph-loader.md
+++ b/content/cn/docs/quickstart/hugegraph-loader.md
@@ -6,7 +6,7 @@
 
 ### 1 HugeGraph-Loader概述
 
-HugeGraph-Loader 是 HugeGragh 的数据导入组件,能够将多种数据源的数据转化为图的顶点和边并批量导入到图数据库中。
+HugeGraph-Loader 是 HugeGraph 的数据导入组件,能够将多种数据源的数据转化为图的顶点和边并批量导入到图数据库中。
 
 目前支持的数据源包括:
 
@@ -576,7 +576,7 @@
 
 - type: 输入源类型,必须填 hdfs 或 HDFS,必填; 
 - path: HDFS 文件或目录的路径,必须是 HDFS 的绝对路径,必填; 
-- core_site_path: HDFS 集群的 core-site.xml 文件路径,重点要指明 namenode 的地址(fs.default.name),以及文件系统的实现(fs.hdfs.impl);
+- core_site_path: HDFS 集群的 core-site.xml 文件路径,重点要指明 NameNode 的地址(`fs.default.name`),以及文件系统的实现(`fs.hdfs.impl`);
 
 ###### 3.3.2.3 JDBC 输入源
 
@@ -716,37 +716,37 @@
 
 ##### 3.4.1 参数说明
 
-参数                 | 默认值        | 是否必传 | 描述信息
-------------------- | ------------ | ------- | -----------------------
--f 或 --file    |              |    Y    | 配置脚本的路径
--g 或 --graph   |              |    Y    | 图数据库空间
--s 或 --schema  |              |    Y    | schema文件路径
--h 或 --host    | localhost    |         | HugeGraphServer 的地址
--p 或 --port    | 8080         |         | HugeGraphServer 的端口号
---username          | null         |         | 当 HugeGraphServer 开启了权限认证时,当前图的 username
---token             | null         |         | 当 HugeGraphServer 开启了权限认证时,当前图的 token 
---protocol          | http         |         | 向服务端发请求的协议,可选 http 或 https
---trust-store-file  |              |         | 请求协议为 https 时,客户端的证书文件路径
---trust-store-password |           |         | 请求协议为 https 时,客户端证书密码
---clear-all-data    | false        |         | 导入数据前是否清除服务端的原有数据
---clear-timeout     | 240          |         | 导入数据前清除服务端的原有数据的超时时间
---incremental-mode  | false        |         | 是否使用断点续导模式,仅输入源为 FILE 和 HDFS 支持该模式,启用该模式能从上一次导入停止的地方开始导
---failure-mode      | false        |         | 失败模式为 true 时,会导入之前失败了的数据,一般来说失败数据文件需要在人工更正编辑好后,再次进行导入
---batch-insert-threads | CPUs      |         | 批量插入线程池大小 (CPUs是当前OS可用**逻辑核**个数) 
---single-insert-threads | 8        |         | 单条插入线程池的大小
---max-conn          | 4 * CPUs     |         | HugeClient 与 HugeGraphServer 的最大 HTTP 连接数,**调整线程**的时候建议同时调整此项 
---max-conn-per-route| 2 * CPUs     |         | HugeClient 与 HugeGraphServer 每个路由的最大 HTTP 连接数,**调整线程**的时候建议同时调整此项 
---batch-size        | 500          |         | 导入数据时每个批次包含的数据条数
---max-parse-errors  | 1            |         | 最多允许多少行数据解析错误,达到该值则程序退出
---max-insert-errors | 500          |         | 最多允许多少行数据插入错误,达到该值则程序退出
---timeout           | 60           |         | 插入结果返回的超时时间(秒)
---shutdown-timeout  | 10           |         | 多线程停止的等待时间(秒)
---retry-times       | 0            |         | 发生特定异常时的重试次数
---retry-interval    | 10           |         | 重试之前的间隔时间(秒)
---check-vertex      | false        |         | 插入边时是否检查边所连接的顶点是否存在
---print-progress    | true         |         | 是否在控制台实时打印导入条数
---dry-run           | false        |         | 打开该模式,只解析不导入,通常用于测试
---help              | false        |         | 打印帮助信息
+| 参数                      | 默认值       | 是否必传 | 描述信息                                                              |
+|-------------------------|-----------|------|-------------------------------------------------------------------|
+| -f 或 --file             |           | Y    | 配置脚本的路径                                                           |
+| -g 或 --graph            |           | Y    | 图数据库空间                                                            |
+| -s 或 --schema           |           | Y    | schema文件路径                                                        |
+| -h 或 --host             | localhost |      | HugeGraphServer 的地址                                               |
+| -p 或 --port             | 8080      |      | HugeGraphServer 的端口号                                              |
+| --username              | null      |      | 当 HugeGraphServer 开启了权限认证时,当前图的 username                          |
+| --token                 | null      |      | 当 HugeGraphServer 开启了权限认证时,当前图的 token                             |
+| --protocol              | http      |      | 向服务端发请求的协议,可选 http 或 https                                        |
+| --trust-store-file      |           |      | 请求协议为 https 时,客户端的证书文件路径                                          |
+| --trust-store-password  |           |      | 请求协议为 https 时,客户端证书密码                                             |
+| --clear-all-data        | false     |      | 导入数据前是否清除服务端的原有数据                                                 |
+| --clear-timeout         | 240       |      | 导入数据前清除服务端的原有数据的超时时间                                              |
+| --incremental-mode      | false     |      | 是否使用断点续导模式,仅输入源为 FILE 和 HDFS 支持该模式,启用该模式能从上一次导入停止的地方开始导           |
+| --failure-mode          | false     |      | 失败模式为 true 时,会导入之前失败了的数据,一般来说失败数据文件需要在人工更正编辑好后,再次进行导入             |
+| --batch-insert-threads  | CPUs      |      | 批量插入线程池大小 (CPUs是当前OS可用**逻辑核**个数)                                  |
+| --single-insert-threads | 8         |      | 单条插入线程池的大小                                                        |
+| --max-conn              | 4 * CPUs  |      | HugeClient 与 HugeGraphServer 的最大 HTTP 连接数,**调整线程**的时候建议同时调整此项     |
+| --max-conn-per-route    | 2 * CPUs  |      | HugeClient 与 HugeGraphServer 每个路由的最大 HTTP 连接数,**调整线程**的时候建议同时调整此项 |
+| --batch-size            | 500       |      | 导入数据时每个批次包含的数据条数                                                  |
+| --max-parse-errors      | 1         |      | 最多允许多少行数据解析错误,达到该值则程序退出                                           |
+| --max-insert-errors     | 500       |      | 最多允许多少行数据插入错误,达到该值则程序退出                                           |
+| --timeout               | 60        |      | 插入结果返回的超时时间(秒)                                                    |
+| --shutdown-timeout      | 10        |      | 多线程停止的等待时间(秒)                                                     |
+| --retry-times           | 0         |      | 发生特定异常时的重试次数                                                      |
+| --retry-interval        | 10        |      | 重试之前的间隔时间(秒)                                                      |
+| --check-vertex          | false     |      | 插入边时是否检查边所连接的顶点是否存在                                               |
+| --print-progress        | true      |      | 是否在控制台实时打印导入条数                                                    |
+| --dry-run               | false     |      | 打开该模式,只解析不导入,通常用于测试                                               |
+| --help                  | false     |      | 打印帮助信息                                                            |
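
The table's own defaults tie `--max-conn` and `--max-conn-per-route` to the CPU count, and the notes advise adjusting them together with the thread pools. A minimal sketch of deriving those values, assuming a POSIX shell; the graph name and file paths in the commented invocation are placeholders, not values from the docs:

```bash
# Sketch: derive the suggested pool sizes from the logical-core count
# (defaults above: --max-conn = 4 * CPUs, --max-conn-per-route = 2 * CPUs).
CPUS=$(getconf _NPROCESSORS_ONLN)
MAX_CONN=$((4 * CPUS))
MAX_CONN_PER_ROUTE=$((2 * CPUS))
echo "batch threads=${CPUS} max-conn=${MAX_CONN} per-route=${MAX_CONN_PER_ROUTE}"
# Hypothetical invocation (graph name and paths are placeholders):
# bin/hugegraph-loader -g hugegraph -f ./struct.json -s ./schema.groovy \
#     --batch-insert-threads "$CPUS" --max-conn "$MAX_CONN" \
#     --max-conn-per-route "$MAX_CONN_PER_ROUTE"
```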
 
 ##### 3.4.2 断点续导模式
 
@@ -783,7 +783,7 @@
 
 ##### 3.4.4 执行命令
 
-运行 bin/hugeloader 并传入参数
+运行 bin/hugegraph-loader 并传入参数
 
 ```bash
bin/hugegraph-loader -g ${GRAPH_NAME} -f ${INPUT_DESC_FILE} -s ${SCHEMA_FILE} -h ${HOST} -p ${PORT}
diff --git a/content/cn/docs/quickstart/hugegraph-server.md b/content/cn/docs/quickstart/hugegraph-server.md
index 0618d83..c5ab910 100644
--- a/content/cn/docs/quickstart/hugegraph-server.md
+++ b/content/cn/docs/quickstart/hugegraph-server.md
@@ -337,7 +337,7 @@
 _说明_
 
 1. 由于图的点和边很多,对于 list 型的请求,比如获取所有顶点,获取所有边等,Server 会将数据压缩再返回,
-所以使用 curl 时得到一堆乱码,可以重定向至 gunzip 进行解压。推荐使用 Chrome 浏览器 + Restlet 插件发送 HTTP 请求进行测试。
+所以使用 curl 时得到一堆乱码,可以重定向至 `gunzip` 进行解压。推荐使用 Chrome 浏览器 + Restlet 插件发送 HTTP 请求进行测试。
 
     ```
     curl "http://localhost:8080/graphs/hugegraph/graph/vertices" | gunzip
diff --git a/content/cn/docs/quickstart/hugegraph-spark.md b/content/cn/docs/quickstart/hugegraph-spark.md
index 483f1a1..787dd7e 100644
--- a/content/cn/docs/quickstart/hugegraph-spark.md
+++ b/content/cn/docs/quickstart/hugegraph-spark.md
@@ -4,9 +4,9 @@
 weight: 7
 ---
 
-> Note: This project is archived now, consider use hugegraph-computer instead
+> Note: HugeGraph-Spark 已经停止维护, 不再更新, 请转向使用 hugegraph-computer, 感谢理解
 
-### 1 HugeGraph-Spark概述
+### 1 HugeGraph-Spark概述 (Deprecated)
 
 HugeGraph-Spark 是一个连接 HugeGraph 和 Spark GraphX 的工具,能够读取 HugeGraph 中的数据并转换成 Spark GraphX 的 RDD,然后执行 GraphX 中的各种图算法。
 
diff --git a/content/cn/docs/quickstart/hugegraph-studio.md b/content/cn/docs/quickstart/hugegraph-studio.md
index 9a7b6d8..ab7661b 100644
--- a/content/cn/docs/quickstart/hugegraph-studio.md
+++ b/content/cn/docs/quickstart/hugegraph-studio.md
@@ -7,7 +7,7 @@
 
 > Note: Studio 已经停止维护, 不再更新, 请转向使用 hubble, 感谢理解
 
-### 1 HugeGraph-Studio概述
+### 1 HugeGraph-Studio概述 (Deprecated)
 
 HugeGraph-Studio是HugeGraph的前端展示工具,是基于Web的图形化IDE环境。
 通过HugeGraph-Studio,用户可以执行Gremlin语句,并及时获得图形化的展示结果。
@@ -244,23 +244,23 @@
 
 ##### 4.4.1 自定义VertexLabel 样式
 
-属性                         | 默认值       | 类型     | 说明
-:------------------------- | :-------- | :----- | :--------------------------------------------------------------------------------------------------------------
-`vis.size`                 | `25`      | number | 顶点大小
-`vis.scaling.min`          | `10`      | number | 根据标签内容调整节点大小,优先级比vis.size高
-`vis.scaling.max`          | `30`      | number | 根据标签内容调整节点大小,优先级比vis.size高
-`vis.shape`                | dot       | string | 形状,包括ellipse, circle, database, box, text,diamond, dot, star, triangle, triangleDown, hexagon, square and icon.
-`vis.border`               | #00ccff   | string | 顶点边框颜色
-`vis.background`           | #00ccff   | string | 顶点背景颜色
-`vis.hover.border`         | #00ccff   | string | 鼠标悬浮时,顶点边框颜色
-`vis.hover.background`     | #ec3112   | string | 鼠标悬浮时,顶点背景颜色
-`vis.highlight.border`     | #fb6a02   | string | 选中时,顶点边框颜色
-`vis.highlight.background` | #fb6a02   | string | 选中时,顶点背景颜色
-`vis.font.color`           | #343434   | string | 顶点类型字体颜色
-`vis.font.size`            | `12`      | string | 顶点类型字体大小
-`vis.icon.code`            | `\uf111`  | string | FontAwesome 图标编码,目前支持4.7.5版本的图标
-`vis.icon.color`           | `#2B7CE9` | string | 图标颜色,优先级比vis.background高
-`vis.icon.size`            | 50        | string | icon大小,优先级比vis.size高
+| 属性                         | 默认值       | 类型     | 说明                                                                                                              |
+|:---------------------------|:----------|:-------|:----------------------------------------------------------------------------------------------------------------|
+| `vis.size`                 | `25`      | number | 顶点大小                                                                                                            |
+| `vis.scaling.min`          | `10`      | number | 根据标签内容调整节点大小,优先级比vis.size高                                                                                      |
+| `vis.scaling.max`          | `30`      | number | 根据标签内容调整节点大小,优先级比vis.size高                                                                                      |
+| `vis.shape`                | dot       | string | 形状,包括ellipse, circle, database, box, text, diamond, dot, star, triangle, triangleDown, hexagon, square and icon. |
+| `vis.border`               | #00ccff   | string | 顶点边框颜色                                                                                                          |
+| `vis.background`           | #00ccff   | string | 顶点背景颜色                                                                                                          |
+| `vis.hover.border`         | #00ccff   | string | 鼠标悬浮时,顶点边框颜色                                                                                                    |
+| `vis.hover.background`     | #ec3112   | string | 鼠标悬浮时,顶点背景颜色                                                                                                    |
+| `vis.highlight.border`     | #fb6a02   | string | 选中时,顶点边框颜色                                                                                                      |
+| `vis.highlight.background` | #fb6a02   | string | 选中时,顶点背景颜色                                                                                                      |
+| `vis.font.color`           | #343434   | string | 顶点类型字体颜色                                                                                                        |
+| `vis.font.size`            | `12`      | string | 顶点类型字体大小                                                                                                        |
+| `vis.icon.code`            | `\uf111`  | string | FontAwesome 图标编码,目前支持4.7.5版本的图标                                                                                 |
+| `vis.icon.color`           | `#2B7CE9` | string | 图标颜色,优先级比vis.background高                                                                                        |
+| `vis.icon.size`            | 50        | string | icon大小,优先级比vis.size高                                                                                            |
 
 示例:
 
diff --git a/content/cn/docs/quickstart/hugegraph-tools.md b/content/cn/docs/quickstart/hugegraph-tools.md
index bda90c9..5f860d3 100644
--- a/content/cn/docs/quickstart/hugegraph-tools.md
+++ b/content/cn/docs/quickstart/hugegraph-tools.md
@@ -6,7 +6,7 @@
 
 ### 1 HugeGraph-Tools概述
 
-HugeGraph-Tools 是 HugeGragh 的自动化部署、管理和备份/还原组件。
+HugeGraph-Tools 是 HugeGraph 的自动化部署、管理和备份/还原组件。
 
 ### 2 获取 HugeGraph-Tools
 
@@ -73,15 +73,15 @@
 上述全局变量,也可以通过环境变量来设置。一种方式是在命令行使用 export 设置临时环境变量,在该命令行关闭之前均有效
 
 
-全局变量      | 环境变量                | 示例                                           
------------- | --------------------- | ------------------------------------------
---url        | HUGEGRAPH_URL         | export HUGEGRAPH_URL=http://127.0.0.1:8080
---graph      | HUGEGRAPH_GRAPH       | export HUGEGRAPH_GRAPH=hugegraph 
---user       | HUGEGRAPH_USERNAME    | export HUGEGRAPH_USERNAME=admin
---password   | HUGEGRAPH_PASSWORD    | export HUGEGRAPH_PASSWORD=test
---timeout    | HUGEGRAPH_TIMEOUT     | export HUGEGRAPH_TIMEOUT=30
---trust-store-file | HUGEGRAPH_TRUST_STORE_FILE | export HUGEGRAPH_TRUST_STORE_FILE=/tmp/trust-store
---trust-store-password | HUGEGRAPH_TRUST_STORE_PASSWORD | export HUGEGRAPH_TRUST_STORE_PASSWORD=xxxx
+| 全局变量                   | 环境变量                           | 示例                                                 |
+|------------------------|--------------------------------|----------------------------------------------------|
+| --url                  | HUGEGRAPH_URL                  | export HUGEGRAPH_URL=http://127.0.0.1:8080         |
+| --graph                | HUGEGRAPH_GRAPH                | export HUGEGRAPH_GRAPH=hugegraph                   |
+| --user                 | HUGEGRAPH_USERNAME             | export HUGEGRAPH_USERNAME=admin                    |
+| --password             | HUGEGRAPH_PASSWORD             | export HUGEGRAPH_PASSWORD=test                     |
+| --timeout              | HUGEGRAPH_TIMEOUT              | export HUGEGRAPH_TIMEOUT=30                        |
+| --trust-store-file     | HUGEGRAPH_TRUST_STORE_FILE     | export HUGEGRAPH_TRUST_STORE_FILE=/tmp/trust-store |
+| --trust-store-password | HUGEGRAPH_TRUST_STORE_PASSWORD | export HUGEGRAPH_TRUST_STORE_PASSWORD=xxxx         |
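
The first approach (temporary environment variables via `export`, effective until the shell is closed) can be sketched as follows; the values are examples only:

```bash
# Set temporary environment variables for this shell session (example values):
export HUGEGRAPH_URL=http://127.0.0.1:8080
export HUGEGRAPH_GRAPH=hugegraph
export HUGEGRAPH_TIMEOUT=30
# They stay in effect until this shell session ends:
echo "${HUGEGRAPH_URL} ${HUGEGRAPH_GRAPH} ${HUGEGRAPH_TIMEOUT}"
```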
 
 另一种方式是在 bin/hugegraph 脚本中设置环境变量:
 
diff --git a/content/en/docs/clients/restful-api/edge.md b/content/en/docs/clients/restful-api/edge.md
index 7794117..8207447 100644
--- a/content/en/docs/clients/restful-api/edge.md
+++ b/content/en/docs/clients/restful-api/edge.md
@@ -370,18 +370,18 @@
 
 属性键值对由JSON格式的属性名称和属性值组成,允许多个属性键值对作为查询条件,属性值支持精确匹配和范围匹配,精确匹配时形如`properties={"weight":0.8}`,范围匹配时形如`properties={"age":"P.gt(0.8)"}`,范围匹配支持的表达式如下:
 
-表达式           | 说明
----------------- | -------
-P.eq(number)     | 属性值等于number的边
-P.neq(number)    | 属性值不等于number的边
-P.lt(number)     | 属性值小于number的边
-P.lte(number)    | 属性值小于等于number的边
-P.gt(number)     | 属性值大于number的边
-P.gte(number)    | 属性值大于等于number的边
-P.between(number1,number2)            | 属性值大于等于number1且小于number2的边
-P.inside(number1,number2)             | 属性值大于number1且小于number2的边
-P.outside(number1,number2)            | 属性值小于number1且大于number2的边
-P.within(value1,value2,value3,...)    | 属性值等于任何一个给定value的边
+| 表达式                                | 说明                         |
+|------------------------------------|----------------------------|
+| P.eq(number)                       | 属性值等于number的边              |
+| P.neq(number)                      | 属性值不等于number的边             |
+| P.lt(number)                       | 属性值小于number的边              |
+| P.lte(number)                      | 属性值小于等于number的边            |
+| P.gt(number)                       | 属性值大于number的边              |
+| P.gte(number)                      | 属性值大于等于number的边            |
+| P.between(number1,number2)         | 属性值大于等于number1且小于number2的边 |
+| P.inside(number1,number2)          | 属性值大于number1且小于number2的边   |
+| P.outside(number1,number2)         | 属性值小于number1且大于number2的边   |
+| P.within(value1,value2,value3,...) | 属性值等于任何一个给定value的边         |
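
A hedged sketch of using one of these range expressions in a query URL: the predicate string contains characters (`{`, `"`, `(`) that must be percent-encoded in the query string. The server address below is an assumption for illustration:

```bash
# Build a range-match condition and percent-encode it for the query string.
PROPS=$(python3 -c 'import urllib.parse; print(urllib.parse.quote("{\"weight\":\"P.gt(0.8)\"}"))')
echo "$PROPS"
# Hypothetical request (assumes a local HugeGraphServer):
# curl "http://localhost:8080/graphs/hugegraph/graph/edges?label=created&properties=${PROPS}"
```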
 
 **查询与顶点 person:josh(vertex_id="1:josh") 相连且 label 为 created 的边**
 
diff --git a/content/en/docs/clients/restful-api/vertex.md b/content/en/docs/clients/restful-api/vertex.md
index 71b9f22..0db38d0 100644
--- a/content/en/docs/clients/restful-api/vertex.md
+++ b/content/en/docs/clients/restful-api/vertex.md
@@ -8,13 +8,13 @@
 
 顶点类型中的 Id 策略决定了顶点的 Id 类型,其对应关系如下:
 
-Id_Strategy      | id type
----------------- | -------
-AUTOMATIC        | number
-PRIMARY_KEY      | string
-CUSTOMIZE_STRING | string
-CUSTOMIZE_NUMBER | number
-CUSTOMIZE_UUID   | uuid
+| Id_Strategy      | id type |
+|------------------|---------|
+| AUTOMATIC        | number  |
+| PRIMARY_KEY      | string  |
+| CUSTOMIZE_STRING | string  |
+| CUSTOMIZE_NUMBER | number  |
+| CUSTOMIZE_UUID   | uuid    |
 
 顶点的 `GET/PUT/DELETE` API 中 url 的 id 部分传入的应是带有类型信息的 id 值,这个类型信息用 json 串是否带引号表示,也就是说:
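
As a sketch of the quoting rule above: a string id keeps its JSON quotes in the URL (percent-encoded as `%22`), while a number id is passed bare. The id values and server address are illustrative assumptions:

```bash
# A string id must carry its JSON quotes, percent-encoded for the URL path:
STRING_ID='"1:josh"'
ENCODED=$(python3 -c 'import urllib.parse,sys; print(urllib.parse.quote(sys.argv[1]))' "$STRING_ID")
echo "$ENCODED"   # %221%3Ajosh%22
# Hypothetical requests (assume a local HugeGraphServer):
# curl "http://localhost:8080/graphs/hugegraph/graph/vertices/${ENCODED}"   # string id
# curl "http://localhost:8080/graphs/hugegraph/graph/vertices/123"          # number id
```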
 
@@ -387,18 +387,18 @@
 
 属性键值对由JSON格式的属性名称和属性值组成,允许多个属性键值对作为查询条件,属性值支持精确匹配和范围匹配,精确匹配时形如`properties={"age":29}`,范围匹配时形如`properties={"age":"P.gt(29)"}`,范围匹配支持的表达式如下:
 
-表达式           | 说明
----------------- | -------
-P.eq(number)     | 属性值等于number的顶点
-P.neq(number)    | 属性值不等于number的顶点
-P.lt(number)     | 属性值小于number的顶点
-P.lte(number)    | 属性值小于等于number的顶点
-P.gt(number)     | 属性值大于number的顶点
-P.gte(number)    | 属性值大于等于number的顶点
-P.between(number1,number2)            | 属性值大于等于number1且小于number2的顶点
-P.inside(number1,number2)             | 属性值大于number1且小于number2的顶点
-P.outside(number1,number2)            | 属性值小于number1且大于number2的顶点
-P.within(value1,value2,value3,...)    | 属性值等于任何一个给定value的顶点
+| 表达式                                | 说明                          |
+|------------------------------------|-----------------------------|
+| P.eq(number)                       | 属性值等于number的顶点              |
+| P.neq(number)                      | 属性值不等于number的顶点             |
+| P.lt(number)                       | 属性值小于number的顶点              |
+| P.lte(number)                      | 属性值小于等于number的顶点            |
+| P.gt(number)                       | 属性值大于number的顶点              |
+| P.gte(number)                      | 属性值大于等于number的顶点            |
+| P.between(number1,number2)         | 属性值大于等于number1且小于number2的顶点 |
+| P.inside(number1,number2)          | 属性值大于number1且小于number2的顶点   |
+| P.outside(number1,number2)         | 属性值小于number1且大于number2的顶点   |
+| P.within(value1,value2,value3,...) | 属性值等于任何一个给定value的顶点         |
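
A small sketch for the vertex case, using `P.between` from the table above; the `(`, `)` and `,` characters of the expression also need percent-encoding, and the server address is assumed:

```bash
# Percent-encode a between() range match (age in [20, 30)):
PROPS=$(python3 -c 'import urllib.parse; print(urllib.parse.quote("{\"age\":\"P.between(20,30)\"}"))')
echo "$PROPS"
# Hypothetical request (assumes a local HugeGraphServer):
# curl "http://localhost:8080/graphs/hugegraph/graph/vertices?label=person&properties=${PROPS}"
```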
 
 **查询所有 age 为 20 且 label 为 person 的顶点**
 
diff --git a/content/en/docs/guides/faq.md b/content/en/docs/guides/faq.md
index b0d4c8b..9503ea8 100644
--- a/content/en/docs/guides/faq.md
+++ b/content/en/docs/guides/faq.md
@@ -38,7 +38,7 @@
 
 - 服务启动成功后,使用`curl`查询所有顶点时返回乱码
 
-  服务端返回的批量顶点/边是压缩(gzip)过的,可以使用管道重定向至 gunzip 进行解压(`curl http://example | gunzip`),也可以用`Firefox`的`postman`或者`Chrome`浏览器的`restlet`插件发请求,会自动解压缩响应数据。
+  服务端返回的批量顶点/边是压缩(gzip)过的,可以使用管道重定向至 `gunzip` 进行解压(`curl http://example | gunzip`),也可以用`Firefox`的`postman`或者`Chrome`浏览器的`restlet`插件发请求,会自动解压缩响应数据。
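
  The decompression step can be demonstrated locally without a running server; a sketch that applies the same transformation `gunzip` performs on the server's gzip-compressed response (the JSON payload here is a placeholder):

  ```bash
  # gzip a small JSON payload and decompress it back through a pipe,
  # mimicking what `curl http://example | gunzip` does to the response body:
  printf '{"vertices":[]}' | gzip | gunzip
  ```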
 
 - 使用顶点Id通过`RESTful API`查询顶点时返回空,但是顶点确实是存在的