Add: Jenkins-driven sysbench test document
diff --git a/.env b/.env
new file mode 100644
index 0000000..2dacaac
--- /dev/null
+++ b/.env
@@ -0,0 +1,41 @@
+# the directory for benchmark activity
+BASE_PATH=/opt/sphere-ex
+SHARDINGSPHERE_PROJECT_NAME=shardingsphere
+FOLDER_COUNT=15
+
+GIT_MIRROR=https://hub.fastgit.org/apache/shardingsphere.git
+
+PGSQL=pgsql
+MYSQL=mysql
+OPENGAUSS=opengauss
+PROXY_MYSQL=proxy-mysql
+PROXY_PGSQL=proxy-pgsql
+PROXY_OPENGAUSS=proxy-opengauss
+PROXY_TAR_NAME=apache-shardingsphere-*-shardingsphere-proxy-bin.tar.gz
+PROXY_DIRECTORY_NAME=apache-shardingsphere-*-shardingsphere-proxy-bin
+PROXY_START_BASH_FILE=start.sh
+PROXY_STOP_BASH_FILE=stop.sh
+PREPARED_CONF_PATH=prepared-conf
+MYSQL_DRIVER=mysql-connector-java-8.0.24.jar
+
+SYSBENCH_RESULT=sysbench-result
+SYSBENCH_GRAPH=graph
+SYSBENCH_GRAPH_PYTHON_FILE=plot_graph.py
+
+PREPARED_CONF=prepared-conf
+SYSBENCH_PROXY_PGSQL_SCRIPT=sysbench-proxy-pgsql-script.sh
+SYSBENCH_PGSQL_SCRIPT=sysbench-postgresql-script.sh
+SYSBENCH_MYSQL_SCRIPT=sysbench-mysql-script.sh
+SYSBENCH_OPENGAUSS_SCRIPT=sysbench-opengauss-script.sh
+SYSBENCH_PROXY_MYSQL_SCRIPT=sysbench-proxy-mysql-script.sh
+SYSBENCH_PURE_MYSQL_SCRIPT=sysbench-pure-mysql-script.sh
+SYSBENCH_PURE_PGSQL_SCRIPT=sysbench-pgsql-script.sh
+SYSBENCH_PROXY_OPENGAUSS_SCRIPT=sysbench-proxy-opengauss-script.sh
+SYSBENCH_TEST_FUNCTION=sysbench-test-function.sh
+BUILD_NUMBER_FILE=.build_number.txt
+
+SHARDING=sharding
+READWRITE_SPLITTING=readwrite-splitting
+SHADOW=shadow
+ENCRYPT=encrypt
+
diff --git a/resources/image/jenkins-sysbench-pipeline.png b/resources/image/jenkins-sysbench-pipeline.png
new file mode 100644
index 0000000..719d823
--- /dev/null
+++ b/resources/image/jenkins-sysbench-pipeline.png
Binary files differ
diff --git a/resources/image/sysbench-distributed-arch.png b/resources/image/sysbench-distributed-arch.png
new file mode 100644
index 0000000..13893dd
--- /dev/null
+++ b/resources/image/sysbench-distributed-arch.png
Binary files differ
diff --git a/resources/image/sysbench_result_img.png b/resources/image/sysbench_result_img.png
new file mode 100644
index 0000000..299afb7
--- /dev/null
+++ b/resources/image/sysbench_result_img.png
Binary files differ
diff --git a/sysbench/README_ZH.md b/sysbench/README_ZH.md
new file mode 100644
index 0000000..758b50c
--- /dev/null
+++ b/sysbench/README_ZH.md
@@ -0,0 +1,111 @@
+## Sysbench benchmark toolkit for ShardingSphere
+
+
+### About sysbench
+
+Sysbench is an open-source, scriptable benchmark suite based on LuaJIT, commonly used to measure CPU, memory, and I/O performance. Its bundled scripts are well suited to benchmarking OLTP databases.
+
+ShardingSphere adopts sysbench as part of its performance-testing toolkit: it can benchmark ShardingSphere-Proxy in front of MySQL, PostgreSQL, and other databases, and compare those results against connecting to the databases directly.
+
+### Test environment
+
+ShardingSphere currently runs these benchmarks as a daily scheduled Jenkins job, on the following hardware:
+
+| Host           | CPU     | Memory | IP          |
+| ------         | ------  | ------ | ----------- |
+| Jenkins        | 4 core  | 8G     | 10.0.100.10 |
+| Sysbench       | 8 core  | 16G    | 10.0.100.20 |
+| Proxy          | 32 core | 32G    | 10.0.100.30 |
+| MySQL or PGSQL | 32 core | 32G    | 10.0.100.40 |
+| MySQL or PGSQL | 32 core | 32G    | 10.0.100.41 |
+
+All hosts are connected over 10 Gb Ethernet.
+
+![](resources/image/sysbench-distributed-arch.png)
+
+Assume five hosts with the IPs and specifications shown above.
+
+Install Jenkins on `10.0.100.10` and configure two nodes on it.
+
+Install sysbench on `10.0.100.20`, then download the agent from the Jenkins master and start it on both `10.0.100.20` and `10.0.100.30`.
+
+
+### How the Jenkins pipeline is organized
+
+Sysbench does not depend on Jenkins or any other CI/CD tool; its scripts can even be run by hand from the command line. Jenkins is used here to manage the benchmark workflow as a pipeline and make it transparent and automated.
+
+Jenkins drives the benchmark through the following stages:
+
+  1. Install the proxy
+  2. Prepare sysbench
+  3. Prepare the proxy
+  4. Test the proxy
+
+#### Install the proxy
+
+In this stage, Jenkins uses an agent to create the directories the proxy needs, clones the ShardingSphere source with git, builds it, and unpacks the compiled proxy distribution.
+
+#### Prepare sysbench
+
+In this stage, Jenkins uses an agent to create the directories for the sysbench result sets and stages the scripts and parameters sysbench needs.
+
+#### Prepare the proxy
+
+In this stage, Jenkins stages the configuration files and drivers the proxy needs (for example, the MySQL driver), copies them into the corresponding directories of the proxy, and starts the proxy.
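The "prepare the proxy" step boils down to copying configuration and the driver into the unpacked distribution, then starting it. The sketch below illustrates the idea with a throwaway directory standing in for the real proxy home; the commented-out lines show the driver copy and startup that the real stage performs:

```shell
# Illustration of the "prepare proxy" stage. A temporary directory stands in
# for the unpacked apache-shardingsphere-*-shardingsphere-proxy-bin distribution.
PROXY_HOME=$(mktemp -d)
PREPARED_CONF=$(mktemp -d)   # stand-in for the repo's prepared-conf directory
mkdir -p "${PROXY_HOME}/conf" "${PROXY_HOME}/lib"
printf 'schemaName: sharding_db\n' > "${PREPARED_CONF}/config-sharding.yaml"

# copy the prepared yaml into the proxy's conf directory
cp "${PREPARED_CONF}"/*.yaml "${PROXY_HOME}/conf/"

# the real stage additionally copies the MySQL driver and starts the proxy:
#   cp prepared-conf/mysql-connector-java-8.0.24.jar "${PROXY_HOME}/lib/"
#   "${PROXY_HOME}/bin/start.sh"
ls "${PROXY_HOME}/conf"
```
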
+
+#### Test the proxy
+
+In this stage, Jenkins launches sysbench against the proxy with the parameters from the configuration file. When the run finishes, a Python script merges the results of the different runs into a chart, so the trend across builds can be viewed in the Jenkins report.
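The build-number bookkeeping that drives this stage (implemented in `sysbench-test-function.sh`) can be condensed into a self-contained sketch, with a temporary directory in place of the paths from `.env`:

```shell
# Condensed version of the build-number logic in sysbench-test-function.sh,
# using a temporary directory instead of the configured result paths.
RESULT_DIR=$(mktemp -d)
BUILD_NUMBER_FILE="${RESULT_DIR}/.build_number.txt"
touch "${BUILD_NUMBER_FILE}"

# the last line of the file is the most recent build number (empty on first run)
BUILD_NUMBER=$(awk 'END {print}' "${BUILD_NUMBER_FILE}")
[ -z "${BUILD_NUMBER}" ] && BUILD_NUMBER=0
BUILD_NUMBER=$((BUILD_NUMBER + 1))

# each build gets its own numbered folder; only the newest 14 are tracked
mkdir -p "${RESULT_DIR}/${BUILD_NUMBER}"
ls -v "${RESULT_DIR}" | tail -n14 > "${BUILD_NUMBER_FILE}"
echo "build ${BUILD_NUMBER}"
```
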
+
+### The sysbench report
+
+Each sysbench run writes its result to a .txt file similar to the following:
+
+```
+SQL statistics:
+    queries performed:
+        read:                            28147663
+        write:                           0
+        other:                           0
+        total:                           28147663
+    transactions:                        28147663 (156336.98 per sec.)
+    queries:                             28147663 (156336.98 per sec.)
+    ignored errors:                      0      (0.00 per sec.)
+    reconnects:                          0      (0.00 per sec.)
+
+General statistics:
+    total time:                          180.0437s
+    total number of events:              28147663
+
+Latency (ms):
+         min:                                    0.59
+         avg:                                    2.46
+         max:                                  281.08
+         99th percentile:                        5.28
+         sum:                             69103716.45
+
+Threads fairness:
+    events (avg/stddev):           73301.2057/786.12
+    execution time (avg/stddev):   179.9576/0.01
+```
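A single headline figure can be pulled out of such a file with standard tools; here `sample.txt` stands in for a real `oltp_*.master.txt`:

```shell
# Extract the per-second transaction rate from a sysbench result file.
# sample.txt is a stand-in for a real oltp_*.master.txt output.
printf '    transactions:                        28147663 (156336.98 per sec.)\n' > sample.txt
TPS=$(sed -n 's/.*transactions:.*(\(.*\) per sec.*/\1/p' sample.txt)
echo "${TPS}"   # -> 156336.98
```
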
+
+The Python script in this project turns the sysbench results into a chart like this:
+
+![](resources/image/sysbench_result_img.png)
+
+### Running the same tests with your own Jenkins
+
+Fork this project, create a Jenkins pipeline, and choose `Pipeline script from SCM` in the `Pipeline` section. Use this project's address as the SCM git URL: `https://github.com/apache/shardingsphere-benchmark.git`
+
+Pick the script path that matches the database combination you want to test:
+
+- MySQL: `sysbench/mysql/sysbench-mysql.pipeline`
+- PGSQL: `sysbench/pgsql/sysbench-pgsql.pipeline`
+- proxy + MySQL: `sysbench/proxy-mysql/sysbench-proxy-mysql.pipeline`
+- proxy + PGSQL: `sysbench/proxy-pgsql/sysbench-proxy-pgsql.pipeline`
+
+Then adjust the `.env` file and the matching YAML configuration under `prepared-conf` to fit your test requirements.
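All of the scripts in this project read these `.env` keys through `toolkit/read-constant-from-file.sh`, which is not included in this diff; a minimal, hypothetical equivalent of what it does looks like:

```shell
# Hypothetical minimal equivalent of toolkit/read-constant-from-file.sh:
# print the value of a KEY from a KEY=VALUE file. The real script is not
# part of this diff, so treat this as an illustration only.
read_constant() {
  grep "^${2}=" "$1" | head -n1 | cut -d'=' -f2-
}

printf 'BASE_PATH=/opt/sphere-ex\nFOLDER_COUNT=15\n' > sample.env
read_constant sample.env BASE_PATH    # -> /opt/sphere-ex
```
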
+
+As shown below:
+
+![](resources/image/jenkins-sysbench-pipeline.png)
\ No newline at end of file
diff --git a/sysbench/mysql/prepare-sysbench.sh b/sysbench/mysql/prepare-sysbench.sh
new file mode 100644
index 0000000..357a7c0
--- /dev/null
+++ b/sysbench/mysql/prepare-sysbench.sh
@@ -0,0 +1,32 @@
+#!/bin/bash
+
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+SYSBENCH_RESULT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_RESULT")
+SYSBENCH_GRAPH=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_GRAPH")
+SYSBENCH_GRAPH_PYTHON_FILE=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_GRAPH_PYTHON_FILE")
+SYSBENCH_PURE_MYSQL_SCRIPT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_PURE_MYSQL_SCRIPT")
+SYSBENCH_TEST_FUNCTION=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_TEST_FUNCTION")
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh .env "MYSQL")
+FOLDER_COUNT=$(sh toolkit/read-constant-from-file.sh .env "FOLDER_COUNT")
+echo "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}"
+
+if [ ! -d "${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}" ]; then
+  # debug info
+  echo "start to mkdir the sysbench related directories"
+  mkdir -p ${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}
+  mkdir -p ${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}
+  mkdir -p ${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${DATABASE_TYPE}
+  mkdir ${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/{1..15}
+  mkdir -p ${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}/${DATABASE_TYPE}
+  cp toolkit/${SYSBENCH_GRAPH_PYTHON_FILE} ${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH_PYTHON_FILE}
+fi
+
+# todo , change pure mysql to mysql.
+if [ ! -f "${BASE_PATH}/pure-mysql/${SYSBENCH_PURE_MYSQL_SCRIPT}" ]; then
+  cp sysbench/mysql/${SYSBENCH_PURE_MYSQL_SCRIPT} ${BASE_PATH}/pure-mysql/${SYSBENCH_PURE_MYSQL_SCRIPT}
+  cp sysbench/mysql/${SYSBENCH_TEST_FUNCTION} ${BASE_PATH}/pure-mysql/${SYSBENCH_TEST_FUNCTION}
+fi
+
+chmod +x ${BASE_PATH}/pure-mysql/${SYSBENCH_TEST_FUNCTION}
+chmod +x ${BASE_PATH}/pure-mysql/${SYSBENCH_PURE_MYSQL_SCRIPT}
+chmod +x ${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH_PYTHON_FILE}
\ No newline at end of file
diff --git a/sysbench/mysql/sysbench-mysql.pipeline b/sysbench/mysql/sysbench-mysql.pipeline
new file mode 100644
index 0000000..36ded8d
--- /dev/null
+++ b/sysbench/mysql/sysbench-mysql.pipeline
@@ -0,0 +1,39 @@
+#!groovy
+
+pipeline {
+  agent any
+
+  stages {
+
+    // generate the corresponding directories and make the script read on sysbench node.
+    stage('Prepare Sysbench') {
+      agent {
+        label 'sysbench'
+      }
+      steps {
+        echo '..... prepare the directories for sysbench ......'
+        git url: 'https://gitee.com/taojintianxia/gittee-database-sysbench.git'
+        sh "chmod +x -R ${env.WORKSPACE}"
+        sh 'sysbench/mysql/prepare-sysbench.sh pure-mysql'
+        sh 'sysbench/mysql/sysbench-test-function.sh'
+      }
+    }
+
+    stage("Generate Report for MySQL") {
+      agent {
+        label 'sysbench'
+      }
+      steps {
+        echo "generate report for MySQL"
+        publishHTML target: [
+          allowMissing: true,
+          alwaysLinkToLastBuild: true,
+          keepAll: true,
+          reportDir: "/opt/sphere-ex/pure-mysql/sysbench-result/graph/mysql/",
+          reportFiles: '*.png',
+          reportName: "MySQL Report"
+        ]
+      }
+    }
+  }
+}
\ No newline at end of file
diff --git a/sysbench/mysql/sysbench-pure-mysql-script.sh b/sysbench/mysql/sysbench-pure-mysql-script.sh
new file mode 100644
index 0000000..82b84c6
--- /dev/null
+++ b/sysbench/mysql/sysbench-pure-mysql-script.sh
@@ -0,0 +1,25 @@
+# mysql replica
+MYSQL_HOST=10.12.3.80
+
+sysbench oltp_read_only --mysql-host=${MYSQL_HOST} --mysql-port=3306 --mysql-user=root --mysql-password='sphereEx@2021' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=10 --time=120 --threads=32 --max-requests=0 --percentile=99 --mysql-ignore-errors="all" --rand-type=uniform --range_selects=off --auto_inc=off cleanup
+
+sysbench oltp_read_only --mysql-host=${MYSQL_HOST} --mysql-port=3306 --mysql-user=root --mysql-password='sphereEx@2021' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=10 --time=360 --threads=32 --max-requests=0 --percentile=99 --mysql-ignore-errors="all" --rand-type=uniform --range_selects=off --auto_inc=off prepare
+
+
+sysbench oltp_read_only        --mysql-host=${MYSQL_HOST} --mysql-port=3306 --mysql-user=root --mysql-password='sphereEx@2021' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run
+
+sysbench oltp_read_only        --mysql-host=${MYSQL_HOST} --mysql-port=3306 --mysql-user=root --mysql-password='sphereEx@2021' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_read_only.master.txt
+
+sysbench oltp_point_select     --mysql-host=${MYSQL_HOST} --mysql-port=3306 --mysql-user=root --mysql-password='sphereEx@2021' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_point_select.master.txt
+
+sysbench oltp_read_write        --mysql-host=${MYSQL_HOST} --mysql-port=3306 --mysql-user=root --mysql-password='sphereEx@2021' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_readwrite.master.txt
+
+sysbench oltp_write_only       --mysql-host=${MYSQL_HOST} --mysql-port=3306 --mysql-user=root --mysql-password='sphereEx@2021' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_write_only.master.txt
+
+sysbench oltp_update_index     --mysql-host=${MYSQL_HOST} --mysql-port=3306 --mysql-user=root --mysql-password='sphereEx@2021' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_update_index.master.txt
+
+sysbench oltp_update_non_index --mysql-host=${MYSQL_HOST} --mysql-port=3306 --mysql-user=root --mysql-password='sphereEx@2021' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_update_non_index.master.txt
+
+sysbench oltp_delete           --mysql-host=${MYSQL_HOST} --mysql-port=3306 --mysql-user=root --mysql-password='sphereEx@2021' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_delete.master.txt
+
+sysbench oltp_read_only --mysql-host=${MYSQL_HOST} --mysql-port=3306 --mysql-user=root --mysql-password='sphereEx@2021' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=10 --time=120 --threads=10 --max-requests=0 --percentile=99 --mysql-ignore-errors="all" --rand-type=uniform --range_selects=off --auto_inc=off cleanup
diff --git a/sysbench/mysql/sysbench-test-function.sh b/sysbench/mysql/sysbench-test-function.sh
new file mode 100644
index 0000000..a667c26
--- /dev/null
+++ b/sysbench/mysql/sysbench-test-function.sh
@@ -0,0 +1,56 @@
+#!/bin/sh
+
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh .env "MYSQL")
+SYSBENCH_RESULT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_RESULT")
+BUILD_NUMBER_FILE=$(sh toolkit/read-constant-from-file.sh .env "BUILD_NUMBER_FILE")
+SYSBENCH_PURE_MYSQL_SCRIPT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_PURE_MYSQL_SCRIPT")
+BUILD_NUMBER_FILE_PATH="${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER_FILE}"
+
+if [ ! -f "${BUILD_NUMBER_FILE_PATH}" ]; then
+  mkdir -p ${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/
+  touch ${BUILD_NUMBER_FILE_PATH}
+fi
+# debug
+echo "build number file path is : ${BUILD_NUMBER_FILE_PATH}"
+
+BUILD_NUMBER=$(cat "${BUILD_NUMBER_FILE_PATH}" | awk 'END {print}')
+# debug info
+echo "build number is : ${BUILD_NUMBER}"
+
+## if there is no build number, then create the first folder
+if [ -z "${BUILD_NUMBER}" ]; then
+  BUILD_NUMBER=0
+fi
+
+BUILD_NUMBER=$((BUILD_NUMBER + 1))
+
+#debug info
+echo "now build number is : ${BUILD_NUMBER}"
+
+
+## if there is a failure test, then delete the folder
+if [ -d "${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}"  ]; then
+  rm -rf "${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}"
+fi
+
+mkdir -p "${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}"
+#debug info
+echo "create the number folder : ${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}"
+
+cp ${BASE_PATH}/pure-mysql/${SYSBENCH_PURE_MYSQL_SCRIPT} ${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}/${SYSBENCH_PURE_MYSQL_SCRIPT}
+cd ${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}
+
+#Debug
+echo "path : ${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}"
+echo "the execution script : ${SYSBENCH_PURE_MYSQL_SCRIPT}"
+sh ${SYSBENCH_PURE_MYSQL_SCRIPT}
+
+rm -rf ${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}/${SYSBENCH_PURE_MYSQL_SCRIPT}
+cd ${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}/${DATABASE_TYPE}
+ls -v | tail -n14 > ${BUILD_NUMBER_FILE_PATH}
+
+
+cd ${BASE_PATH}/pure-mysql/${SYSBENCH_RESULT}
+python3 plot_graph.py mysql
+
diff --git a/sysbench/opengauss/prepare-sysbench.sh b/sysbench/opengauss/prepare-sysbench.sh
new file mode 100644
index 0000000..e57a9ac
--- /dev/null
+++ b/sysbench/opengauss/prepare-sysbench.sh
@@ -0,0 +1,31 @@
+#!/bin/bash
+
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+SYSBENCH_RESULT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_RESULT")
+SYSBENCH_GRAPH=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_GRAPH")
+SYSBENCH_GRAPH_PYTHON_FILE=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_GRAPH_PYTHON_FILE")
+SYSBENCH_OPENGAUSS_SCRIPT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_OPENGAUSS_SCRIPT")
+SYSBENCH_TEST_FUNCTION=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_TEST_FUNCTION")
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh .env "OPENGAUSS")
+FOLDER_COUNT=$(sh toolkit/read-constant-from-file.sh .env "FOLDER_COUNT")
+echo "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}"
+
+if [ ! -d "${BASE_PATH}/opengauss/${SYSBENCH_RESULT}" ]; then
+  # debug info
+  echo "start to mkdir the sysbench related directories"
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${DATABASE_TYPE}
+  mkdir ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${DATABASE_TYPE}/{1..15}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}/${DATABASE_TYPE}
+  cp toolkit/${SYSBENCH_GRAPH_PYTHON_FILE} ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH_PYTHON_FILE}
+fi
+
+if [ ! -f "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_OPENGAUSS_SCRIPT}" ]; then
+  cp sysbench/${DATABASE_TYPE}/${SYSBENCH_OPENGAUSS_SCRIPT} ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_OPENGAUSS_SCRIPT}
+  cp sysbench/${DATABASE_TYPE}/${SYSBENCH_TEST_FUNCTION} ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_TEST_FUNCTION}
+fi
+
+chmod +x ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_TEST_FUNCTION}
+chmod +x ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_OPENGAUSS_SCRIPT}
+chmod +x ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH_PYTHON_FILE}
\ No newline at end of file
diff --git a/sysbench/opengauss/sysbench-opengauss-script.sh b/sysbench/opengauss/sysbench-opengauss-script.sh
new file mode 100644
index 0000000..70eebe6
--- /dev/null
+++ b/sysbench/opengauss/sysbench-opengauss-script.sh
@@ -0,0 +1,22 @@
+PROXY_HOST=10.12.3.182
+
+sysbench oltp_read_only --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=40000 --pgsql-user=test --pgsql-password='Huawei@123' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=10 --time=120 --threads=10 --max-requests=0 --percentile=99 --rand-type=uniform --range_selects=off --auto_inc=off cleanup
+
+sysbench oltp_read_only --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=40000 --pgsql-user=test --pgsql-password='Huawei@123' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=10 --time=360 --threads=10 --max-requests=0 --percentile=99 --rand-type=uniform --range_selects=off --auto_inc=off prepare
+
+
+sysbench oltp_read_only         --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=40000 --pgsql-user=test --pgsql-password='Huawei@123' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run
+
+sysbench oltp_read_only        --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=40000 --pgsql-user=test --pgsql-password='Huawei@123' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_read_only.master.txt
+
+sysbench oltp_point_select     --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=40000 --pgsql-user=test --pgsql-password='Huawei@123' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_point_select.master.txt
+
+sysbench oltp_read_write        --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=40000 --pgsql-user=test --pgsql-password='Huawei@123' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_readwrite.master.txt
+
+sysbench oltp_write_only       --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=40000 --pgsql-user=test --pgsql-password='Huawei@123' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_write_only.master.txt
+
+sysbench oltp_update_index     --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=40000 --pgsql-user=test --pgsql-password='Huawei@123' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_update_index.master.txt
+
+sysbench oltp_update_non_index --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=40000 --pgsql-user=test --pgsql-password='Huawei@123' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_update_non_index.master.txt
+
+sysbench oltp_delete           --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=40000 --pgsql-user=test --pgsql-password='Huawei@123' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_delete.master.txt
diff --git a/sysbench/opengauss/sysbench-opengauss.pipeline b/sysbench/opengauss/sysbench-opengauss.pipeline
new file mode 100644
index 0000000..be7a957
--- /dev/null
+++ b/sysbench/opengauss/sysbench-opengauss.pipeline
@@ -0,0 +1,39 @@
+#!groovy
+
+pipeline {
+  agent any
+
+  stages {
+
+    // generate the corresponding directories and make the script read on sysbench node.
+    stage('Start Sysbench') {
+      agent {
+        label 'sysbench'
+      }
+      steps {
+        echo '..... prepare the directories for sysbench ......'
+        git url: 'https://gitee.com/taojintianxia/gittee-database-sysbench.git'
+        sh "chmod +x -R ${env.WORKSPACE}"
+        sh 'sysbench/opengauss/prepare-sysbench.sh opengauss'
+        sh 'sysbench/opengauss/sysbench-test-function.sh'
+      }
+    }
+
+    stage("Generate Report for openGauss") {
+      agent {
+        label 'sysbench'
+      }
+      steps {
+        echo "generate report for openGauss"
+        publishHTML target: [
+          allowMissing: true,
+          alwaysLinkToLastBuild: true,
+          keepAll: true,
+          reportDir: "/opt/sphere-ex/opengauss/sysbench-result/graph/opengauss/",
+          reportFiles: '*.png',
+          reportName: "openGauss Report"
+        ]
+      }
+    }
+  }
+}
\ No newline at end of file
diff --git a/sysbench/opengauss/sysbench-test-function.sh b/sysbench/opengauss/sysbench-test-function.sh
new file mode 100644
index 0000000..7206207
--- /dev/null
+++ b/sysbench/opengauss/sysbench-test-function.sh
@@ -0,0 +1,56 @@
+#!/bin/sh
+
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh .env "OPENGAUSS")
+SYSBENCH_RESULT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_RESULT")
+BUILD_NUMBER_FILE=$(sh toolkit/read-constant-from-file.sh .env "BUILD_NUMBER_FILE")
+SYSBENCH_OPENGAUSS_SCRIPT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_OPENGAUSS_SCRIPT")
+BUILD_NUMBER_FILE_PATH="${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER_FILE}"
+
+if [ ! -f "${BUILD_NUMBER_FILE_PATH}" ]; then
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${DATABASE_TYPE}/
+  touch ${BUILD_NUMBER_FILE_PATH}
+fi
+
+# debug
+echo "build number file path is : ${BUILD_NUMBER_FILE_PATH}"
+
+BUILD_NUMBER=$(cat "${BUILD_NUMBER_FILE_PATH}" | awk 'END {print}')
+# debug info
+echo "build number is : ${BUILD_NUMBER}"
+
+## if there is no build number, then create the first folder
+if [ -z "${BUILD_NUMBER}" ]; then
+  BUILD_NUMBER=0
+fi
+
+BUILD_NUMBER=$((BUILD_NUMBER + 1))
+
+#debug info
+echo "now build number is : ${BUILD_NUMBER}"
+
+## if there is a failure test, then delete the folder
+if [ -d "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}"  ]; then
+  rm -rf "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}"
+fi
+
+mkdir -p "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}"
+#debug info
+echo "create the number folder : ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}"
+
+cp ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_OPENGAUSS_SCRIPT} ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}/${SYSBENCH_OPENGAUSS_SCRIPT}
+cd ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}
+
+#Debug
+echo "path : ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}"
+echo "the execution script : ${SYSBENCH_OPENGAUSS_SCRIPT}"
+sh ${SYSBENCH_OPENGAUSS_SCRIPT}
+
+rm -rf ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}/${SYSBENCH_OPENGAUSS_SCRIPT}
+cd ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${DATABASE_TYPE}
+ls -v | tail -n14 > ${BUILD_NUMBER_FILE_PATH}
+
+
+cd ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}
+python3 plot_graph.py opengauss
+
diff --git a/sysbench/pgsql/prepare-sysbench.sh b/sysbench/pgsql/prepare-sysbench.sh
new file mode 100644
index 0000000..037668c
--- /dev/null
+++ b/sysbench/pgsql/prepare-sysbench.sh
@@ -0,0 +1,32 @@
+#!/bin/bash
+
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+SYSBENCH_RESULT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_RESULT")
+SYSBENCH_GRAPH=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_GRAPH")
+SYSBENCH_GRAPH_PYTHON_FILE=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_GRAPH_PYTHON_FILE")
+SYSBENCH_PURE_PGSQL_SCRIPT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_PURE_PGSQL_SCRIPT")
+SYSBENCH_TEST_FUNCTION=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_TEST_FUNCTION")
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh .env "PGSQL")
+FOLDER_COUNT=$(sh toolkit/read-constant-from-file.sh .env "FOLDER_COUNT")
+echo "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}"
+
+if [ ! -d "${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}" ]; then
+  # debug info
+  echo "start to mkdir the sysbench related directories"
+  mkdir -p ${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}
+  mkdir -p ${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}
+  mkdir -p ${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${DATABASE_TYPE}
+  mkdir ${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/{1..15}
+  mkdir -p ${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}/${DATABASE_TYPE}
+  cp toolkit/${SYSBENCH_GRAPH_PYTHON_FILE} ${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH_PYTHON_FILE}
+fi
+
+# todo , change pure pgsql to pgsql.
+if [ ! -f "${BASE_PATH}/pure-pgsql/${SYSBENCH_PURE_PGSQL_SCRIPT}" ]; then
+  cp sysbench/pgsql/${SYSBENCH_PURE_PGSQL_SCRIPT} ${BASE_PATH}/pure-pgsql/${SYSBENCH_PURE_PGSQL_SCRIPT}
+  cp sysbench/pgsql/${SYSBENCH_TEST_FUNCTION} ${BASE_PATH}/pure-pgsql/${SYSBENCH_TEST_FUNCTION}
+fi
+
+chmod +x ${BASE_PATH}/pure-pgsql/${SYSBENCH_TEST_FUNCTION}
+chmod +x ${BASE_PATH}/pure-pgsql/${SYSBENCH_PURE_PGSQL_SCRIPT}
+chmod +x ${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH_PYTHON_FILE}
\ No newline at end of file
diff --git a/sysbench/pgsql/sysbench-pgsql-script.sh b/sysbench/pgsql/sysbench-pgsql-script.sh
new file mode 100644
index 0000000..a08f75e
--- /dev/null
+++ b/sysbench/pgsql/sysbench-pgsql-script.sh
@@ -0,0 +1,22 @@
+PROXY_HOST=10.12.3.28
+
+sysbench oltp_read_only --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=5432 --pgsql-user=postgres --pgsql-password='sphereEx@2021' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=10 --time=120 --threads=10 --max-requests=0 --percentile=99 --rand-type=uniform --range_selects=off --auto_inc=off cleanup
+
+sysbench oltp_read_only --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=5432 --pgsql-user=postgres --pgsql-password='sphereEx@2021' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=10 --time=360 --threads=10 --max-requests=0 --percentile=99 --rand-type=uniform --range_selects=off --auto_inc=off prepare
+
+
+sysbench oltp_read_only         --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=5432 --pgsql-user=postgres --pgsql-password='sphereEx@2021' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run
+
+sysbench oltp_read_only        --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=5432 --pgsql-user=postgres --pgsql-password='sphereEx@2021' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_read_only.master.txt
+
+sysbench oltp_point_select     --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=5432 --pgsql-user=postgres --pgsql-password='sphereEx@2021' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_point_select.master.txt
+
+sysbench oltp_read_write        --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=5432 --pgsql-user=postgres --pgsql-password='sphereEx@2021' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_readwrite.master.txt
+
+sysbench oltp_write_only       --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=5432 --pgsql-user=postgres --pgsql-password='sphereEx@2021' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_write_only.master.txt
+
+sysbench oltp_update_index     --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=5432 --pgsql-user=postgres --pgsql-password='sphereEx@2021' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_update_index.master.txt
+
+sysbench oltp_update_non_index --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=5432 --pgsql-user=postgres --pgsql-password='sphereEx@2021' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_update_non_index.master.txt
+
+sysbench oltp_delete           --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=5432 --pgsql-user=postgres --pgsql-password='sphereEx@2021' --pgsql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_delete.master.txt
diff --git a/sysbench/pgsql/sysbench-pgsql.pipeline b/sysbench/pgsql/sysbench-pgsql.pipeline
new file mode 100644
index 0000000..bb7594b
--- /dev/null
+++ b/sysbench/pgsql/sysbench-pgsql.pipeline
@@ -0,0 +1,39 @@
+#!groovy
+
+pipeline {
+  agent any
+
+  stages {
+
+    // generate the corresponding directories and make the scripts ready on the sysbench node.
+    stage('Prepare Sysbench') {
+      agent {
+        label 'sysbench'
+      }
+      steps {
+        echo '..... prepare the directories for sysbench ......'
+        git url: 'https://gitee.com/taojintianxia/gittee-database-sysbench.git'
+        sh "chmod +x -R ${env.WORKSPACE}"
+        sh 'sysbench/pgsql/prepare-sysbench.sh pure-pgsql'
+        sh 'sysbench/pgsql/sysbench-test-function.sh'
+      }
+    }
+
+    stage("Generate Report for PGSQL") {
+      agent {
+        label 'sysbench'
+      }
+      steps {
+        echo "generate report for PGSQL"
+        publishHTML target: [
+          allowMissing: true,
+          alwaysLinkToLastBuild: true,
+          keepAll: true,
+          reportDir: "/opt/sphere-ex/pure-pgsql/sysbench-result/graph/pgsql/",
+          reportFiles: '*.png',
+          reportName: "PGSQL Report"
+        ]
+      }
+    }
+  }
+}
\ No newline at end of file
diff --git a/sysbench/pgsql/sysbench-test-function.sh b/sysbench/pgsql/sysbench-test-function.sh
new file mode 100644
index 0000000..b89a909
--- /dev/null
+++ b/sysbench/pgsql/sysbench-test-function.sh
@@ -0,0 +1,56 @@
+#!/bin/sh
+
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh .env "PGSQL")
+SYSBENCH_RESULT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_RESULT")
+BUILD_NUMBER_FILE=$(sh toolkit/read-constant-from-file.sh .env "BUILD_NUMBER_FILE")
+SYSBENCH_PURE_PGSQL_SCRIPT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_PURE_PGSQL_SCRIPT")
+BUILD_NUMBER_FILE_PATH="${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER_FILE}"
+
+if [ ! -f "${BUILD_NUMBER_FILE_PATH}" ]; then
+  mkdir -p ${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/
+  touch ${BUILD_NUMBER_FILE_PATH}
+fi
+# debug
+echo "build number file path is : ${BUILD_NUMBER_FILE_PATH}"
+
+BUILD_NUMBER=$(awk 'END {print}' "${BUILD_NUMBER_FILE_PATH}")
+# debug info
+echo "build number is : ${BUILD_NUMBER}"
+
+## if there is no previous build number, start counting from zero
+if [ -z "${BUILD_NUMBER}" ]; then
+  BUILD_NUMBER=0
+fi
+
+# use POSIX arithmetic; 'let' is a bashism and this script runs under /bin/sh
+BUILD_NUMBER=$((BUILD_NUMBER + 1))
+
+#debug info
+echo "now build number is : ${BUILD_NUMBER}"
+
+
+## if a previous test failed, delete the leftover folder
+if [ -d "${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}"  ]; then
+  rm -rf "${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}"
+fi
+
+mkdir -p "${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}"
+#debug info
+echo "create the number folder : ${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}"
+
+cp ${BASE_PATH}/pure-pgsql/${SYSBENCH_PURE_PGSQL_SCRIPT} ${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}/${SYSBENCH_PURE_PGSQL_SCRIPT}
+cd ${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}
+
+#Debug
+echo "path : ${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}"
+echo "the execution script : ${SYSBENCH_PURE_PGSQL_SCRIPT}"
+sh ${SYSBENCH_PURE_PGSQL_SCRIPT}
+
+rm -rf ${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${DATABASE_TYPE}/${BUILD_NUMBER}/${SYSBENCH_PURE_PGSQL_SCRIPT}
+cd ${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}/${DATABASE_TYPE}
+ls -v | tail -n14 > ${BUILD_NUMBER_FILE_PATH}
+
+
+cd ${BASE_PATH}/pure-pgsql/${SYSBENCH_RESULT}
+python3 plot_graph.py pgsql
+
diff --git a/sysbench/proxy-mysql/install-proxy.sh b/sysbench/proxy-mysql/install-proxy.sh
new file mode 100644
index 0000000..2944241
--- /dev/null
+++ b/sysbench/proxy-mysql/install-proxy.sh
@@ -0,0 +1,64 @@
+#!/bin/bash
+
+CURRENT_FILE_PATH=$(cd `dirname $0` ; pwd)
+CURRENT_PATH=$(pwd)
+
+cd "${CURRENT_FILE_PATH}/../../"
+
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh .env "PROXY_MYSQL")
+PROXY_DIRECTORY_NAME=$(sh toolkit/read-constant-from-file.sh .env "PROXY_DIRECTORY_NAME")
+PROXY_STOP_BASH_FILE=$(sh toolkit/read-constant-from-file.sh .env "PROXY_STOP_BASH_FILE")
+PROXY_TAR_NAME=$(sh toolkit/read-constant-from-file.sh .env "PROXY_TAR_NAME")
+SHARDINGSPHERE_PROJECT_NAME=$(sh toolkit/read-constant-from-file.sh .env "SHARDINGSPHERE_PROJECT_NAME")
+GIT_MIRROR=$(sh toolkit/read-constant-from-file.sh .env "GIT_MIRROR")
+
+# 1. create proxy MySQL directory
+echo "start to mkdir ${BASE_PATH}/${DATABASE_TYPE}"
+if [ ! -d "${BASE_PATH}/${DATABASE_TYPE}" ]; then
+    mkdir -p "${BASE_PATH}/${DATABASE_TYPE}"
+fi
+
+# 2. stop & clean proxy
+echo "start to clean proxy"
+cd "${BASE_PATH}/${DATABASE_TYPE}"
+PROXY_DIRECTORIES_COUNT=$(ls "${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}" 2> /dev/null | wc -l)
+if [ "${PROXY_DIRECTORIES_COUNT}" != 0 ]; then
+  DEPLOY_PATH="${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}"
+  PIDS=`ps -ef | grep java | grep proxy | grep "${DEPLOY_PATH}" | grep -v grep |awk '{print $2}'`
+  if [ -n "$PIDS" ]; then
+    echo "stop the previous proxy ......"
+    kill $PIDS
+  fi
+fi
+
+# 3. git clone shardingsphere
+if [ -d "${BASE_PATH}/${DATABASE_TYPE}/${SHARDINGSPHERE_PROJECT_NAME}" ]; then
+    cd "${BASE_PATH}/${DATABASE_TYPE}/${SHARDINGSPHERE_PROJECT_NAME}"
+    git pull
+else
+  rm -rf ${SHARDINGSPHERE_PROJECT_NAME}
+
+  # clone from a faster mirror first, then switch the remotes back to the original GitHub repository
+  echo "start to clone shardingsphere"
+  git clone "${GIT_MIRROR}" "${BASE_PATH}/${DATABASE_TYPE}/${SHARDINGSPHERE_PROJECT_NAME}"
+  cd "${BASE_PATH}/${DATABASE_TYPE}/${SHARDINGSPHERE_PROJECT_NAME}"
+  git remote set-url --push origin https://github.com/apache/shardingsphere.git
+  git remote set-url origin https://github.com/apache/shardingsphere.git
+  git pull
+fi
+
+rm -rf ${PROXY_DIRECTORY_NAME}
+
+# 4. install proxy
+echo "start to maven install and extract the distribution"
+cd "${BASE_PATH}/${DATABASE_TYPE}/${SHARDINGSPHERE_PROJECT_NAME}"
+./mvnw clean install -Dmaven.javadoc.skip=true -B -Drat.skip=true -Djacoco.skip=true -Dmaven.test.skip=true -Prelease
+cp "${BASE_PATH}/${DATABASE_TYPE}/${SHARDINGSPHERE_PROJECT_NAME}/shardingsphere-distribution/shardingsphere-proxy-distribution/target/"${PROXY_TAR_NAME} "${BASE_PATH}/${DATABASE_TYPE}/"
+cd "${BASE_PATH}/${DATABASE_TYPE}"
+tar -zxvf ${PROXY_TAR_NAME}
+rm -rf ${PROXY_TAR_NAME}
+rm -rf ${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}/conf/*.yaml
+
+
+cd "${CURRENT_PATH}"
diff --git a/sysbench/proxy-mysql/prepare-proxy.sh b/sysbench/proxy-mysql/prepare-proxy.sh
new file mode 100644
index 0000000..6096319
--- /dev/null
+++ b/sysbench/proxy-mysql/prepare-proxy.sh
@@ -0,0 +1,21 @@
+#!/bin/bash
+
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh .env "PROXY_MYSQL")
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+PREPARED_CONF_PATH=$(sh toolkit/read-constant-from-file.sh .env "PREPARED_CONF_PATH")
+PROXY_DIRECTORY_NAME=$(sh toolkit/read-constant-from-file.sh .env "PROXY_DIRECTORY_NAME")
+MYSQL_DRIVER=$(sh toolkit/read-constant-from-file.sh .env "MYSQL_DRIVER")
+
+# TODO: this step may be unnecessary
+if [ ! -d "${BASE_PATH}/${DATABASE_TYPE}/${PREPARED_CONF_PATH}" ]; then
+  cp -R sysbench/${DATABASE_TYPE}/${PREPARED_CONF_PATH} ${BASE_PATH}/${DATABASE_TYPE}/
+fi
+
+# debug info
+CURRENTPATH=`pwd`
+echo "current path is ${CURRENTPATH}"
+if [ ! -f ${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}/lib/${MYSQL_DRIVER} ]; then
+  cp sysbench/${DATABASE_TYPE}/${PREPARED_CONF_PATH}/${MYSQL_DRIVER} ${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}/lib/
+fi
+
+
diff --git a/sysbench/proxy-mysql/prepare-sysbench.sh b/sysbench/proxy-mysql/prepare-sysbench.sh
new file mode 100644
index 0000000..da8fa50
--- /dev/null
+++ b/sysbench/proxy-mysql/prepare-sysbench.sh
@@ -0,0 +1,56 @@
+#!/bin/bash
+
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+SYSBENCH_RESULT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_RESULT")
+SYSBENCH_GRAPH=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_GRAPH")
+SHARDING=$(sh toolkit/read-constant-from-file.sh .env "SHARDING")
+READWRITE_SPLITTING=$(sh toolkit/read-constant-from-file.sh .env "READWRITE_SPLITTING")
+SHADOW=$(sh toolkit/read-constant-from-file.sh .env "SHADOW")
+ENCRYPT=$(sh toolkit/read-constant-from-file.sh .env "ENCRYPT")
+SYSBENCH_GRAPH_PYTHON_FILE=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_GRAPH_PYTHON_FILE")
+SYSBENCH_MYSQL_SCRIPT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_PROXY_MYSQL_SCRIPT")
+SYSBENCH_TEST_FUNCTION=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_TEST_FUNCTION")
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh .env "PROXY_MYSQL")
+FOLDER_COUNT=$(sh toolkit/read-constant-from-file.sh .env "FOLDER_COUNT")
+BUILD_NUMBER_FILE=$(sh toolkit/read-constant-from-file.sh .env "BUILD_NUMBER_FILE")
+
+echo "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}"
+CURRENT_PATH=`pwd`
+
+if [ ! -d "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}" ]; then
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SHARDING}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${READWRITE_SPLITTING}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SHADOW}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${ENCRYPT}
+
+  mkdir ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SHARDING}/{1..15}
+  mkdir ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${READWRITE_SPLITTING}/{1..15}
+  mkdir ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SHADOW}/{1..15}
+  mkdir ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${ENCRYPT}/{1..15}
+
+
+  cp toolkit/${SYSBENCH_GRAPH_PYTHON_FILE} ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH_PYTHON_FILE}
+  chmod +x ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH_PYTHON_FILE}
+
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}/${SHARDING}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}/${READWRITE_SPLITTING}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}/${SHADOW}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}/${ENCRYPT}
+
+  touch ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SHARDING}/${BUILD_NUMBER_FILE}
+  touch ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${READWRITE_SPLITTING}/${BUILD_NUMBER_FILE}
+  touch ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SHADOW}/${BUILD_NUMBER_FILE}
+  touch ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${ENCRYPT}/${BUILD_NUMBER_FILE}
+
+  ls -v ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SHARDING}/ | tail -n14 > ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SHARDING}/${BUILD_NUMBER_FILE}
+  ls -v ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${READWRITE_SPLITTING}/ | tail -n14 > ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${READWRITE_SPLITTING}/${BUILD_NUMBER_FILE}
+  ls -v ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SHADOW}/ | tail -n14 > ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SHADOW}/${BUILD_NUMBER_FILE}
+  ls -v ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${ENCRYPT}/ | tail -n14 > ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${ENCRYPT}/${BUILD_NUMBER_FILE}
+fi
+
+if [ ! -f "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_MYSQL_SCRIPT}" ]; then
+  cp sysbench/${DATABASE_TYPE}/${SYSBENCH_MYSQL_SCRIPT} ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_MYSQL_SCRIPT}
+  cp sysbench/${DATABASE_TYPE}/${SYSBENCH_TEST_FUNCTION} ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_TEST_FUNCTION}
+fi
\ No newline at end of file
diff --git a/sysbench/proxy-mysql/prepared-conf/encrypt/config-encrypt.yaml b/sysbench/proxy-mysql/prepared-conf/encrypt/config-encrypt.yaml
new file mode 100644
index 0000000..b67dc71
--- /dev/null
+++ b/sysbench/proxy-mysql/prepared-conf/encrypt/config-encrypt.yaml
@@ -0,0 +1,70 @@
+schemaName: sbtest
+
+dataSources:
+  ds_0:
+    url: jdbc:mysql://10.12.3.28:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
+    password: sphereEx@2021
+    connectionTimeoutMilliseconds: 30000
+    idleTimeoutMilliseconds: 60000
+    maxLifetimeMilliseconds: 1800000
+    maxPoolSize: 256
+    minPoolSize: 256
+    maintenanceIntervalMilliseconds: 30000
+
+rules:
+- !ENCRYPT
+  encryptors:
+    md5_encryptor:
+      type: MD5
+  tables:
+    sbtest1:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest2:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest3:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest4:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest5:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest6:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest7:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest8:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest9:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest10:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
diff --git a/sysbench/proxy-mysql/prepared-conf/prepare-for-function.sh b/sysbench/proxy-mysql/prepared-conf/prepare-for-function.sh
new file mode 100644
index 0000000..d71485e
--- /dev/null
+++ b/sysbench/proxy-mysql/prepared-conf/prepare-for-function.sh
@@ -0,0 +1,22 @@
+#!/bin/bash
+
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh .env "PROXY_MYSQL")
+PREPARED_CONF=$(sh toolkit/read-constant-from-file.sh .env "PREPARED_CONF")
+PROXY_DIRECTORY_NAME=$(sh toolkit/read-constant-from-file.sh .env "PROXY_DIRECTORY_NAME")
+PROXY_START_BASH_FILE=$(sh toolkit/read-constant-from-file.sh .env "PROXY_START_BASH_FILE")
+TEST_FUNCTION=$1
+
+cd ${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}
+
+DEPLOY_PATH=`pwd`
+PIDS=`ps -ef | grep java | grep proxy | grep "${DEPLOY_PATH}" | grep -v grep |awk '{print $2}'`
+if [ -n "$PIDS" ]; then
+    kill -9 $PIDS
+fi
+
+rm -rf ${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}/conf/*.yaml
+cp ${BASE_PATH}/${DATABASE_TYPE}/${PREPARED_CONF}/server.yaml ${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}/conf/
+cp ${BASE_PATH}/${DATABASE_TYPE}/${PREPARED_CONF}/${TEST_FUNCTION}/config-${TEST_FUNCTION}.yaml ${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}/conf/
+sudo sh ${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}/bin/${PROXY_START_BASH_FILE}
+
diff --git a/sysbench/proxy-mysql/prepared-conf/readwrite-splitting/config-readwrite-splitting.yaml b/sysbench/proxy-mysql/prepared-conf/readwrite-splitting/config-readwrite-splitting.yaml
new file mode 100644
index 0000000..bbee262
--- /dev/null
+++ b/sysbench/proxy-mysql/prepared-conf/readwrite-splitting/config-readwrite-splitting.yaml
@@ -0,0 +1,31 @@
+schemaName: sbtest
+
+dataSources:
+  write_ds:
+    url: jdbc:mysql://10.12.3.28:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
+    password: sphereEx@2021
+    connectionTimeoutMilliseconds: 30000
+    idleTimeoutMilliseconds: 60000
+    maxLifetimeMilliseconds: 1800000
+    maxPoolSize: 128
+    minPoolSize: 128
+    maintenanceIntervalMilliseconds: 30000
+
+  read_ds:
+    url: jdbc:mysql://10.12.3.80:3306/sbtest?serverTimezone=UTC&useSSL=false
+    username: root
+    password: sphereEx@2021
+    connectionTimeoutMilliseconds: 30000
+    idleTimeoutMilliseconds: 60000
+    maxLifetimeMilliseconds: 1800000
+    maxPoolSize: 128
+    minPoolSize: 128
+    maintenanceIntervalMilliseconds: 30000
+
+rules:
+- !READWRITE_SPLITTING
+  dataSources:
+    pr_ds:
+      writeDataSourceName: write_ds
+      readDataSourceNames: [read_ds]
\ No newline at end of file
diff --git a/sysbench/proxy-mysql/prepared-conf/server.yaml b/sysbench/proxy-mysql/prepared-conf/server.yaml
new file mode 100644
index 0000000..9d0240b
--- /dev/null
+++ b/sysbench/proxy-mysql/prepared-conf/server.yaml
@@ -0,0 +1,61 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+######################################################################################################
+# 
+# If you want to configure governance, authorization and proxy properties, please refer to this file.
+# 
+######################################################################################################
+#
+#governance:
+#  name: governance_ds
+#  registryCenter:
+#    type: ZooKeeper
+#    serverLists: localhost:2181
+#    props:
+#      retryIntervalMilliseconds: 500
+#      timeToLiveSeconds: 60
+#      maxRetries: 3
+#      operationTimeoutMilliseconds: 500
+#  overwrite: false
+
+rules:
+  - !AUTHORITY
+    users:
+      - root@:root
+    provider:
+      type: NATIVE
+
+#scaling:
+#  blockQueueSize: 10000
+#  workerThread: 40
+
+props:
+ max-connections-size-per-query: 1
+ executor-size: 16  # Infinite by default.
+ proxy-frontend-flush-threshold: 128  # The default value is 128.
+   # LOCAL: Proxy will run with LOCAL transaction.
+   # XA: Proxy will run with XA transaction.
+   # BASE: Proxy will run with B.A.S.E transaction.
+ proxy-transaction-type: LOCAL
+ xa-transaction-manager-type: Atomikos
+ proxy-opentracing-enabled: false
+ proxy-hint-enabled: false
+ query-with-cipher-column: true
+ sql-show: false
+ check-table-metadata-enabled: false
+ lock-wait-timeout-milliseconds: 50000 # The maximum time to wait for a lock
diff --git a/sysbench/proxy-mysql/prepared-conf/sharding/config-sharding.yaml b/sysbench/proxy-mysql/prepared-conf/sharding/config-sharding.yaml
new file mode 100644
index 0000000..83469dd
--- /dev/null
+++ b/sysbench/proxy-mysql/prepared-conf/sharding/config-sharding.yaml
@@ -0,0 +1,175 @@
+schemaName: sbtest
+
+dataSources:
+ ds_0:
+   url: jdbc:mysql://10.12.3.28:3306/sbtest?serverTimezone=UTC&useSSL=false
+   username: root
+   password: sphereEx@2021
+   connectionTimeoutMilliseconds: 30000
+   idleTimeoutMilliseconds: 60000
+   maxLifetimeMilliseconds: 1800000
+   maxPoolSize: 50
+   minPoolSize: 1
+   maintenanceIntervalMilliseconds: 30000
+ ds_1:
+   url: jdbc:mysql://10.12.3.182:3306/sbtest?serverTimezone=UTC&useSSL=false
+   username: root
+   password: sphereEx@2021
+   connectionTimeoutMilliseconds: 30000
+   idleTimeoutMilliseconds: 60000
+   maxLifetimeMilliseconds: 1800000
+   maxPoolSize: 50
+   minPoolSize: 1
+   maintenanceIntervalMilliseconds: 30000
+
+
+
+rules:
+- !SHARDING
+  tables:
+    sbtest1:
+      actualDataNodes: ds_${0..1}.sbtest1_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_1
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest2:
+      actualDataNodes: ds_${0..1}.sbtest2_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_2
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest3:
+      actualDataNodes: ds_${0..1}.sbtest3_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_3
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest4:
+      actualDataNodes: ds_${0..1}.sbtest4_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_4
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest5:
+      actualDataNodes: ds_${0..1}.sbtest5_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_5
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest6:
+      actualDataNodes: ds_${0..1}.sbtest6_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_6
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest7:
+      actualDataNodes: ds_${0..1}.sbtest7_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_7
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest8:
+      actualDataNodes: ds_${0..1}.sbtest8_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_8
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest9:
+      actualDataNodes: ds_${0..1}.sbtest9_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_9
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest10:
+      actualDataNodes: ds_${0..1}.sbtest10_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_10
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+
+  defaultDatabaseStrategy:
+    standard:
+      shardingColumn: id
+      shardingAlgorithmName: database_inline
+
+  shardingAlgorithms:
+    database_inline:
+      type: INLINE
+      props:
+        algorithm-expression: ds_${id % 2}
+    table_inline_1:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest1_${id % 100}
+    table_inline_2:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest2_${id % 100}
+    table_inline_3:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest3_${id % 100}
+    table_inline_4:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest4_${id % 100}
+    table_inline_5:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest5_${id % 100}
+    table_inline_6:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest6_${id % 100}
+    table_inline_7:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest7_${id % 100}
+    table_inline_8:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest8_${id % 100}
+    table_inline_9:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest9_${id % 100}
+    table_inline_10:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest10_${id % 100}
+  keyGenerators:
+    snowflake:
+      type: SNOWFLAKE
+      props:
+        worker-id: 123
diff --git a/sysbench/proxy-mysql/sysbench-proxy-mysql-script.sh b/sysbench/proxy-mysql/sysbench-proxy-mysql-script.sh
new file mode 100644
index 0000000..43a1a44
--- /dev/null
+++ b/sysbench/proxy-mysql/sysbench-proxy-mysql-script.sh
@@ -0,0 +1,22 @@
+PROXY_HOST=10.12.3.160
+
+sysbench oltp_read_only --mysql-host=${PROXY_HOST} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=10 --time=120 --threads=32 --max-requests=0 --percentile=99 --mysql-ignore-errors="all" --rand-type=uniform --range_selects=off --auto_inc=off cleanup
+
+sysbench oltp_read_only --mysql-host=${PROXY_HOST} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=10 --time=360 --threads=32 --max-requests=0 --percentile=99 --mysql-ignore-errors="all" --rand-type=uniform --range_selects=off --auto_inc=off prepare
+
+
+sysbench oltp_read_only        --mysql-host=${PROXY_HOST} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run
+
+sysbench oltp_read_only        --mysql-host=${PROXY_HOST} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_read_only.master.txt
+
+sysbench oltp_point_select     --mysql-host=${PROXY_HOST} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_point_select.master.txt
+
+sysbench oltp_read_write        --mysql-host=${PROXY_HOST} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_readwrite.master.txt
+
+sysbench oltp_write_only       --mysql-host=${PROXY_HOST} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_write_only.master.txt
+
+sysbench oltp_update_index     --mysql-host=${PROXY_HOST} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_update_index.master.txt
+
+sysbench oltp_update_non_index --mysql-host=${PROXY_HOST} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_update_non_index.master.txt
+
+sysbench oltp_delete           --mysql-host=${PROXY_HOST} --mysql-port=3307 --mysql-user=root --mysql-password='root' --mysql-db=sbtest --tables=10 --table-size=100000 --report-interval=30  --time=180 --threads=32 --max-requests=0 --percentile=99  --mysql-ignore-errors="all" --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_delete.master.txt
\ No newline at end of file
diff --git a/sysbench/proxy-mysql/sysbench-proxy-mysql.pipeline b/sysbench/proxy-mysql/sysbench-proxy-mysql.pipeline
new file mode 100644
index 0000000..0edc5d4
--- /dev/null
+++ b/sysbench/proxy-mysql/sysbench-proxy-mysql.pipeline
@@ -0,0 +1,96 @@
+#!groovy
+
+def DATABASE_TYPE
+def TEST_FUNCTION_LIST
+
+pipeline {
+  agent any
+
+  stages {
+    stage('Init Jenkins Pipeline Environment') {
+      steps {
+        script {
+          DATABASE_TYPE = 'mysql'
+//          TEST_FUNCTION_LIST = ['sharding', 'readwrite-splitting', 'encrypt']
+          TEST_FUNCTION_LIST = ['sharding', 'encrypt']
+        }
+      }
+    }
+
+    // clone the code, compile it, and extract the proxy distribution on the proxy node
+    stage('Install Sharding Proxy') {
+      agent {
+        label 'proxy'
+      }
+      steps {
+        echo '...... Installing Sharding Proxy ...... '
+        git url: 'https://gitee.com/taojintianxia/gittee-database-sysbench.git'
+        sh "chmod +x -R ${env.WORKSPACE}"
+        sh 'sysbench/proxy-mysql/install-proxy.sh'
+      }
+    }
+
+    // generate the corresponding directories and make the scripts ready on the sysbench node.
+    stage('Prepare Sysbench') {
+      agent {
+        label 'sysbench'
+      }
+      steps {
+        echo '..... prepare the directories for sysbench ......'
+        git url: 'https://gitee.com/taojintianxia/gittee-database-sysbench.git'
+        sh "chmod +x -R ${env.WORKSPACE}"
+        sh 'sysbench/proxy-mysql/prepare-sysbench.sh mysql'
+      }
+    }
+
+    stage('Prepare Proxy') {
+      agent {
+        label 'proxy'
+      }
+      steps {
+        echo '..... prepare the directories for proxy ......'
+        git url: 'https://gitee.com/taojintianxia/gittee-database-sysbench.git'
+        sh "chmod +x -R ${env.WORKSPACE}"
+        sh 'sysbench/proxy-mysql/prepare-proxy.sh'
+      }
+    }
+
+    stage('Test In a Loop') {
+      steps {
+        script {
+          for (int i = 0; i < TEST_FUNCTION_LIST.size(); i++) {
+            stage("Prepare Proxy Config for ${TEST_FUNCTION_LIST[i]}") {
+              node('proxy') {
+                echo "prepare the config for ${TEST_FUNCTION_LIST[i]}"
+                sh "sysbench/proxy-mysql/prepared-conf/prepare-for-function.sh ${TEST_FUNCTION_LIST[i]}"
+              }
+            }
+
+            stage("Sysbench Start to Test ${TEST_FUNCTION_LIST[i]}") {
+              node('sysbench') {
+                echo "sysbench start to test ${TEST_FUNCTION_LIST[i]}"
+                echo "sleep 10 seconds to wait for the proxy to start"
+                sh "sleep 10"
+                sh "sysbench/proxy-mysql/sysbench-test-function.sh ${TEST_FUNCTION_LIST[i]}"
+              }
+            }
+
+            stage("Generate Report for ${TEST_FUNCTION_LIST[i]}") {
+              node('sysbench') {
+                echo "generate report for ${TEST_FUNCTION_LIST[i]}"
+                publishHTML target: [
+                  allowMissing: true,
+                  alwaysLinkToLastBuild: true,
+                  keepAll: true,
+                  reportDir: "/opt/sphere-ex/proxy-mysql/sysbench-result/graph/${TEST_FUNCTION_LIST[i]}/",
+                  reportFiles: '*.png',
+                  reportName: "${TEST_FUNCTION_LIST[i]} Report"
+                ]
+              }
+            }
+          }
+        }
+      }
+    }
+  }
+}
\ No newline at end of file
diff --git a/sysbench/proxy-mysql/sysbench-test-function.sh b/sysbench/proxy-mysql/sysbench-test-function.sh
new file mode 100644
index 0000000..f5bab77
--- /dev/null
+++ b/sysbench/proxy-mysql/sysbench-test-function.sh
@@ -0,0 +1,51 @@
+#!/bin/sh
+
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh .env "PROXY_MYSQL")
+SYSBENCH_RESULT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_RESULT")
+BUILD_NUMBER_FILE=$(sh toolkit/read-constant-from-file.sh .env "BUILD_NUMBER_FILE")
+SYSBENCH_SCRIPT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_PROXY_MYSQL_SCRIPT")
+TEST_FUNCTION=$1
+
+BUILD_NUMBER_FILE_PATH="${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER_FILE}"
+
+BUILD_NUMBER=$(awk 'END {print}' "${BUILD_NUMBER_FILE_PATH}")
+# debug info
+echo "build number is : ${BUILD_NUMBER}"
+
+## if there is no build number yet, start counting from zero
+if [ -z "${BUILD_NUMBER}" ]; then
+  BUILD_NUMBER=0
+fi
+
+# POSIX arithmetic; `let` is bash-only and this script runs under /bin/sh
+BUILD_NUMBER=$((BUILD_NUMBER + 1))
+
+#debug info
+echo "now build number is : ${BUILD_NUMBER}"
+
+
+## if a previous run failed partway, delete its leftover folder
+if [ -d "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER}"  ]; then
+  # "${BASE_PATH:?}" aborts the expansion when the variable is empty, so rm -rf can never target /
+  rm -rf "${BASE_PATH:?}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER}"
+fi
+
+mkdir -p "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER}"
+#debug info
+echo "create the number folder : ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER}"
+
+cp ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_SCRIPT} ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER}/${SYSBENCH_SCRIPT}
+cd ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER}
+
+#Debug
+echo "path : ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER}"
+echo "the execution script : ${SYSBENCH_SCRIPT}"
+sh ${SYSBENCH_SCRIPT}
+
+rm -rf ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER}/${SYSBENCH_SCRIPT}
+cd ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}
+# keep only the latest 14 build folder names (version-sorted) in the tracking file
+ls -v | tail -n14 > ${BUILD_NUMBER_FILE_PATH}
+
+
+cd ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}
+python3 plot_graph.py ${TEST_FUNCTION}
+
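The build-number rotation in the script above (read the last recorded number, increment it, create the result folder, rewrite the tracking file) can be exercised in isolation. This is a minimal POSIX-sh sketch under a scratch directory; the paths are illustrative only, not the real `/opt/sphere-ex` layout:

```shell
# Minimal sketch of the build-number rotation (scratch paths, for illustration).
WORK_DIR=$(mktemp -d)
BUILD_NUMBER_FILE_PATH="${WORK_DIR}/.build_number.txt"

# Read the last recorded build number; an absent or empty file yields "".
BUILD_NUMBER=$(awk 'END {print}' "${BUILD_NUMBER_FILE_PATH}" 2>/dev/null)
if [ -z "${BUILD_NUMBER}" ]; then
  BUILD_NUMBER=0
fi

# POSIX arithmetic works in /bin/sh, unlike the bash-only `let`.
BUILD_NUMBER=$((BUILD_NUMBER + 1))

mkdir -p "${WORK_DIR}/${BUILD_NUMBER}"

# Record the latest (up to 14) build folder names, as the real script does.
ls -v "${WORK_DIR}" | tail -n14 > "${BUILD_NUMBER_FILE_PATH}"
```

Running it a second time against the same tracking file advances the folder number, which is exactly how consecutive Jenkins builds accumulate result directories.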
diff --git a/sysbench/proxy-pgsql/install-proxy.sh b/sysbench/proxy-pgsql/install-proxy.sh
new file mode 100644
index 0000000..3bf0f99
--- /dev/null
+++ b/sysbench/proxy-pgsql/install-proxy.sh
@@ -0,0 +1,63 @@
+#!/bin/bash
+
+CURRENT_FILE_PATH=$(cd "$(dirname "$0")"; pwd)
+CURRENT_PATH=$(pwd)
+
+cd "${CURRENT_FILE_PATH}/../../"
+
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh .env "PROXY_PGSQL")
+PROXY_DIRECTORY_NAME=$(sh toolkit/read-constant-from-file.sh .env "PROXY_DIRECTORY_NAME")
+PROXY_STOP_BASH_FILE=$(sh toolkit/read-constant-from-file.sh .env "PROXY_STOP_BASH_FILE")
+PROXY_TAR_NAME=$(sh toolkit/read-constant-from-file.sh .env "PROXY_TAR_NAME")
+SHARDINGSPHERE_PROJECT_NAME=$(sh toolkit/read-constant-from-file.sh .env "SHARDINGSPHERE_PROJECT_NAME")
+GIT_MIRROR=$(sh toolkit/read-constant-from-file.sh .env "GIT_MIRROR")
+
+# 1. create base directory
+echo "start to mkdir ${BASE_PATH}"
+if [ ! -d "${BASE_PATH}" ]; then
+    mkdir -p "${BASE_PATH}"
+fi
+
+# 2. create pg directory
+echo "start to mkdir ${BASE_PATH}/${DATABASE_TYPE}"
+if [ ! -d "${BASE_PATH}/${DATABASE_TYPE}" ]; then
+    mkdir -p "${BASE_PATH}/${DATABASE_TYPE}"
+fi
+
+# 3. stop & clean proxy
+echo "start to clean proxy"
+cd "${BASE_PATH}/${DATABASE_TYPE}"
+PROXY_DIRECTORIES_COUNT=$(ls "${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}" 2> /dev/null | wc -l)
+if [ "${PROXY_DIRECTORIES_COUNT}" != 0 ]; then
+  DEPLOY_PATH="${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}"
+  PIDS=`ps -ef | grep java | grep proxy | grep "${DEPLOY_PATH}" | grep -v grep |awk '{print $2}'`
+  if [ -n "$PIDS" ]; then
+    sh "${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}/bin/${PROXY_STOP_BASH_FILE}"
+  fi
+fi
+
+rm -rf ${PROXY_DIRECTORY_NAME}
+rm -rf ${SHARDINGSPHERE_PROJECT_NAME}
+
+# 4. git clone shardingsphere
+# clone from a faster mirror first, then point the remote back at the original GitHub repository
+echo "start to clone shardingsphere"
+git clone "${GIT_MIRROR}" "${BASE_PATH}/${DATABASE_TYPE}/${SHARDINGSPHERE_PROJECT_NAME}"
+cd "${BASE_PATH}/${DATABASE_TYPE}/${SHARDINGSPHERE_PROJECT_NAME}"
+git remote set-url --push origin https://github.com/apache/shardingsphere.git
+git remote set-url origin https://github.com/apache/shardingsphere.git
+git pull
+
+# 5. install proxy
+echo "start to maven install and extract the distribution"
+cd "${BASE_PATH}/${DATABASE_TYPE}/${SHARDINGSPHERE_PROJECT_NAME}"
+./mvnw -q -Dmaven.javadoc.skip=true -Djacoco.skip=true -DskipITs -DskipTests clean install -T1C -Prelease
+cp "${BASE_PATH}/${DATABASE_TYPE}/${SHARDINGSPHERE_PROJECT_NAME}/shardingsphere-distribution/shardingsphere-proxy-distribution/target/"${PROXY_TAR_NAME} "${BASE_PATH}/${DATABASE_TYPE}/"
+cd "${BASE_PATH}/${DATABASE_TYPE}"
+tar -zxvf ${PROXY_TAR_NAME}
+rm -rf ${PROXY_TAR_NAME}
+rm -rf ${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}/conf/*.yaml
+
+
+cd "${CURRENT_PATH}"
diff --git a/sysbench/proxy-pgsql/prepare-proxy.sh b/sysbench/proxy-pgsql/prepare-proxy.sh
new file mode 100644
index 0000000..c8cf9d6
--- /dev/null
+++ b/sysbench/proxy-pgsql/prepare-proxy.sh
@@ -0,0 +1,12 @@
+#!/bin/bash
+
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh .env "PROXY_PGSQL")
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+PREPARED_CONF_PATH=$(sh toolkit/read-constant-from-file.sh .env "PREPARED_CONF_PATH")
+PROXY_DIRECTORY_NAME=$(sh toolkit/read-constant-from-file.sh .env "PROXY_DIRECTORY_NAME")
+
+if [ ! -d "${BASE_PATH}/${DATABASE_TYPE}/${PREPARED_CONF_PATH}" ]; then
+  cp -R sysbench/${DATABASE_TYPE}/${PREPARED_CONF_PATH} ${BASE_PATH}/${DATABASE_TYPE}/
+fi
+
+
diff --git a/sysbench/proxy-pgsql/prepare-sysbench.sh b/sysbench/proxy-pgsql/prepare-sysbench.sh
new file mode 100644
index 0000000..d15bbc0
--- /dev/null
+++ b/sysbench/proxy-pgsql/prepare-sysbench.sh
@@ -0,0 +1,42 @@
+#!/bin/bash
+
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+SYSBENCH_RESULT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_RESULT")
+SYSBENCH_GRAPH=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_GRAPH")
+SHARDING=$(sh toolkit/read-constant-from-file.sh .env "SHARDING")
+READWRITE_SPLITTING=$(sh toolkit/read-constant-from-file.sh .env "READWRITE_SPLITTING")
+SHADOW=$(sh toolkit/read-constant-from-file.sh .env "SHADOW")
+ENCRYPT=$(sh toolkit/read-constant-from-file.sh .env "ENCRYPT")
+SYSBENCH_GRAPH_PYTHON_FILE=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_GRAPH_PYTHON_FILE")
+SYSBENCH_PGSQL_SCRIPT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_PROXY_PGSQL_SCRIPT")
+SYSBENCH_TEST_FUNCTION=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_TEST_FUNCTION")
+DATABASE_TYPE=$1
+FOLDER_COUNT=$(sh toolkit/read-constant-from-file.sh .env "FOLDER_COUNT")
+echo "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}"
+
+if [ ! -d "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}" ]; then
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SHARDING}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${READWRITE_SPLITTING}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SHADOW}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${ENCRYPT}
+
+  mkdir ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SHARDING}/{1..15}
+  mkdir ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${READWRITE_SPLITTING}/{1..15}
+  mkdir ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SHADOW}/{1..15}
+  mkdir ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${ENCRYPT}/{1..15}
+
+  cp toolkit/${SYSBENCH_GRAPH_PYTHON_FILE} ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH_PYTHON_FILE}
+  chmod +x ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH_PYTHON_FILE}
+
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}/${SHARDING}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}/${READWRITE_SPLITTING}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}/${SHADOW}
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${SYSBENCH_GRAPH}/${ENCRYPT}
+fi
+
+if [ ! -f "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_PGSQL_SCRIPT}" ]; then
+  cp sysbench/${DATABASE_TYPE}/${SYSBENCH_PGSQL_SCRIPT} ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_PGSQL_SCRIPT}
+  cp sysbench/${DATABASE_TYPE}/${SYSBENCH_TEST_FUNCTION} ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_TEST_FUNCTION}
+fi
\ No newline at end of file
diff --git a/sysbench/proxy-pgsql/prepared-conf/encrypt/config-encrypt.yaml b/sysbench/proxy-pgsql/prepared-conf/encrypt/config-encrypt.yaml
new file mode 100644
index 0000000..f5cc55f
--- /dev/null
+++ b/sysbench/proxy-pgsql/prepared-conf/encrypt/config-encrypt.yaml
@@ -0,0 +1,70 @@
+schemaName: sbtest
+
+dataSources:
+  ds_0:
+    url: jdbc:postgresql://10.12.3.28:5432/sbtest?serverTimezone=UTC&useSSL=false
+    username: postgres
+    password: sphereEx@2021
+    connectionTimeoutMilliseconds: 30000
+    idleTimeoutMilliseconds: 60000
+    maxLifetimeMilliseconds: 1800000
+    maxPoolSize: 256
+    minPoolSize: 256
+    maintenanceIntervalMilliseconds: 30000
+
+rules:
+- !ENCRYPT
+  encryptors:
+    md5_encryptor:
+      type: MD5
+  tables:
+    sbtest1:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest2:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest3:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest4:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest5:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest6:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest7:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest8:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest9:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
+    sbtest10:
+      columns:
+        pad:
+          cipherColumn: pad
+          encryptorName: md5_encryptor
diff --git a/sysbench/proxy-pgsql/prepared-conf/prepare-for-function.sh b/sysbench/proxy-pgsql/prepared-conf/prepare-for-function.sh
new file mode 100644
index 0000000..4c670a5
--- /dev/null
+++ b/sysbench/proxy-pgsql/prepared-conf/prepare-for-function.sh
@@ -0,0 +1,22 @@
+#!/bin/bash
+
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh .env "PROXY_PGSQL")
+PREPARED_CONF=$(sh toolkit/read-constant-from-file.sh .env "PREPARED_CONF")
+PROXY_DIRECTORY_NAME=$(sh toolkit/read-constant-from-file.sh .env "PROXY_DIRECTORY_NAME")
+PROXY_START_BASH_FILE=$(sh toolkit/read-constant-from-file.sh .env "PROXY_START_BASH_FILE")
+TEST_FUNCTION=$1
+
+cd ${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}
+
+DEPLOY_PATH=`pwd`
+PIDS=`ps -ef | grep java | grep proxy | grep "${DEPLOY_PATH}" | grep -v grep |awk '{print $2}'`
+if [ -n "$PIDS" ]; then
+    kill -9 $PIDS
+fi
+
+rm -rf ${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}/conf/*.yaml
+cp ${BASE_PATH}/${DATABASE_TYPE}/${PREPARED_CONF}/server.yaml ${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}/conf/
+cp ${BASE_PATH}/${DATABASE_TYPE}/${PREPARED_CONF}/${TEST_FUNCTION}/config-${TEST_FUNCTION}.yaml ${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}/conf/
+sudo sh ${BASE_PATH}/${DATABASE_TYPE}/${PROXY_DIRECTORY_NAME}/bin/${PROXY_START_BASH_FILE}
+
diff --git a/sysbench/proxy-pgsql/prepared-conf/readwrite-splitting/config-readwrite-splitting.yaml b/sysbench/proxy-pgsql/prepared-conf/readwrite-splitting/config-readwrite-splitting.yaml
new file mode 100644
index 0000000..7eae060
--- /dev/null
+++ b/sysbench/proxy-pgsql/prepared-conf/readwrite-splitting/config-readwrite-splitting.yaml
@@ -0,0 +1,31 @@
+schemaName: sbtest
+
+dataSources:
+  write_ds:
+    url: jdbc:postgresql://10.12.3.28:5432/sbtest?serverTimezone=UTC&useSSL=false
+    username: postgres
+    password: sphereEx@2021
+    connectionTimeoutMilliseconds: 30000
+    idleTimeoutMilliseconds: 60000
+    maxLifetimeMilliseconds: 1800000
+    maxPoolSize: 128
+    minPoolSize: 128
+    maintenanceIntervalMilliseconds: 30000
+
+  read_ds:
+    url: jdbc:postgresql://10.12.3.182:5432/sbtest?serverTimezone=UTC&useSSL=false
+    username: postgres
+    password: sphereEx@2021
+    connectionTimeoutMilliseconds: 30000
+    idleTimeoutMilliseconds: 60000
+    maxLifetimeMilliseconds: 1800000
+    maxPoolSize: 128
+    minPoolSize: 128
+    maintenanceIntervalMilliseconds: 30000
+
+rules:
+- !READWRITE_SPLITTING
+  dataSources:
+    pr_ds:
+      writeDataSourceName: write_ds
+      readDataSourceNames: [read_ds]
\ No newline at end of file
diff --git a/sysbench/proxy-pgsql/prepared-conf/server.yaml b/sysbench/proxy-pgsql/prepared-conf/server.yaml
new file mode 100644
index 0000000..9d0240b
--- /dev/null
+++ b/sysbench/proxy-pgsql/prepared-conf/server.yaml
@@ -0,0 +1,61 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
+######################################################################################################
+# 
+# If you want to configure governance, authorization and proxy properties, please refer to this file.
+# 
+######################################################################################################
+#
+#governance:
+#  name: governance_ds
+#  registryCenter:
+#    type: ZooKeeper
+#    serverLists: localhost:2181
+#    props:
+#      retryIntervalMilliseconds: 500
+#      timeToLiveSeconds: 60
+#      maxRetries: 3
+#      operationTimeoutMilliseconds: 500
+#  overwrite: false
+
+rules:
+  - !AUTHORITY
+    users:
+      - root@:root
+    provider:
+      type: NATIVE
+
+#scaling:
+#  blockQueueSize: 10000
+#  workerThread: 40
+
+props:
+ max-connections-size-per-query: 1
+ executor-size: 16  # Infinite by default.
+ proxy-frontend-flush-threshold: 128  # The default value is 128.
+   # LOCAL: Proxy will run with LOCAL transaction.
+   # XA: Proxy will run with XA transaction.
+   # BASE: Proxy will run with B.A.S.E transaction.
+ proxy-transaction-type: LOCAL
+ xa-transaction-manager-type: Atomikos
+ proxy-opentracing-enabled: false
+ proxy-hint-enabled: false
+ query-with-cipher-column: true
+ sql-show: false
+ check-table-metadata-enabled: false
+ lock-wait-timeout-milliseconds: 50000 # The maximum time to wait for a lock
diff --git a/sysbench/proxy-pgsql/prepared-conf/sharding/config-sharding.yaml b/sysbench/proxy-pgsql/prepared-conf/sharding/config-sharding.yaml
new file mode 100644
index 0000000..61905e8
--- /dev/null
+++ b/sysbench/proxy-pgsql/prepared-conf/sharding/config-sharding.yaml
@@ -0,0 +1,175 @@
+schemaName: sbtest
+
+dataSources:
+ ds_0:
+   url: jdbc:postgresql://10.12.3.28:5432/sbtest?serverTimezone=UTC&useSSL=false
+   username: postgres
+   password: sphereEx@2021
+   connectionTimeoutMilliseconds: 30000
+   idleTimeoutMilliseconds: 60000
+   maxLifetimeMilliseconds: 1800000
+   maxPoolSize: 50
+   minPoolSize: 1
+   maintenanceIntervalMilliseconds: 30000
+ ds_1:
+   url: jdbc:postgresql://10.12.3.182:5432/sbtest?serverTimezone=UTC&useSSL=false
+   username: postgres
+   password: sphereEx@2021
+   connectionTimeoutMilliseconds: 30000
+   idleTimeoutMilliseconds: 60000
+   maxLifetimeMilliseconds: 1800000
+   maxPoolSize: 50
+   minPoolSize: 1
+   maintenanceIntervalMilliseconds: 30000
+
+
+
+rules:
+- !SHARDING
+  tables:
+    sbtest1:
+      actualDataNodes: ds_${0..1}.sbtest1_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_1
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest2:
+      actualDataNodes: ds_${0..1}.sbtest2_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_2
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest3:
+      actualDataNodes: ds_${0..1}.sbtest3_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_3
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest4:
+      actualDataNodes: ds_${0..1}.sbtest4_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_4
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest5:
+      actualDataNodes: ds_${0..1}.sbtest5_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_5
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest6:
+      actualDataNodes: ds_${0..1}.sbtest6_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_6
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest7:
+      actualDataNodes: ds_${0..1}.sbtest7_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_7
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest8:
+      actualDataNodes: ds_${0..1}.sbtest8_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_8
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest9:
+      actualDataNodes: ds_${0..1}.sbtest9_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_9
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+    sbtest10:
+      actualDataNodes: ds_${0..1}.sbtest10_${0..99}
+      tableStrategy:
+        standard:
+          shardingColumn: id
+          shardingAlgorithmName: table_inline_10
+      keyGenerateStrategy:
+        column: id
+        keyGeneratorName: snowflake
+
+  defaultDatabaseStrategy:
+    standard:
+      shardingColumn: id
+      shardingAlgorithmName: database_inline
+
+  shardingAlgorithms:
+    database_inline:
+      type: INLINE
+      props:
+        algorithm-expression: ds_${id % 2}
+    table_inline_1:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest1_${id % 100}
+    table_inline_2:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest2_${id % 100}
+    table_inline_3:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest3_${id % 100}
+    table_inline_4:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest4_${id % 100}
+    table_inline_5:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest5_${id % 100}
+    table_inline_6:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest6_${id % 100}
+    table_inline_7:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest7_${id % 100}
+    table_inline_8:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest8_${id % 100}
+    table_inline_9:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest9_${id % 100}
+    table_inline_10:
+      type: INLINE
+      props:
+        algorithm-expression: sbtest10_${id % 100}
+  keyGenerators:
+    snowflake:
+      type: SNOWFLAKE
+      props:
+        worker-id: 123
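The `INLINE` expressions in the sharding config above are row-value templates: a row's `id` selects the data source via `id % 2` (`database_inline`) and the physical table via `id % 100` (`table_inline_N`). A small sketch (a hypothetical helper, not part of ShardingSphere) shows where a given row lands:

```python
# Sketch of how the inline sharding expressions above route a row.
# ds_${id % 2} picks the data source; <table>_${id % 100} picks the actual table.
def route(logic_table: str, row_id: int) -> str:
    data_source = f"ds_{row_id % 2}"                 # database_inline
    actual_table = f"{logic_table}_{row_id % 100}"   # table_inline_N
    return f"{data_source}.{actual_table}"
```

For example, row id 205 of `sbtest1` routes to `ds_1.sbtest1_5`, so reads and writes for that id always hit the same shard.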
diff --git a/sysbench/proxy-pgsql/sysbench-proxy-pgsql-script.sh b/sysbench/proxy-pgsql/sysbench-proxy-pgsql-script.sh
new file mode 100644
index 0000000..b4573c0
--- /dev/null
+++ b/sysbench/proxy-pgsql/sysbench-proxy-pgsql-script.sh
@@ -0,0 +1,22 @@
+PROXY_HOST=10.12.3.160
+
+sysbench oltp_read_only --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=3307 --pgsql-user=root --pgsql-password='root' --pgsql-db=sbtest --tables=10 --table-size=100 --report-interval=10 --time=120 --threads=10 --max-requests=0 --percentile=99 --rand-type=uniform --range_selects=off --auto_inc=off cleanup
+
+sysbench oltp_read_only --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=3307 --pgsql-user=root --pgsql-password='root' --pgsql-db=sbtest --tables=10 --table-size=100 --report-interval=10 --time=360 --threads=10 --max-requests=0 --percentile=99 --rand-type=uniform --range_selects=off --auto_inc=off prepare
+
+
+# the first run's output is not recorded (likely a warm-up pass)
+sysbench oltp_read_only         --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=3307 --pgsql-user=root --pgsql-password='root' --pgsql-db=sbtest --tables=10 --table-size=100 --report-interval=30  --time=15 --threads=16 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run
+
+sysbench oltp_read_only        --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=3307 --pgsql-user=root --pgsql-password='root' --pgsql-db=sbtest --tables=10 --table-size=100 --report-interval=30  --time=15 --threads=16 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_read_only.master.txt
+
+sysbench oltp_point_select     --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=3307 --pgsql-user=root --pgsql-password='root' --pgsql-db=sbtest --tables=10 --table-size=100 --report-interval=30  --time=15 --threads=16 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_point_select.master.txt
+
+sysbench oltp_read_write        --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=3307 --pgsql-user=root --pgsql-password='root' --pgsql-db=sbtest --tables=10 --table-size=100 --report-interval=60  --time=21600 --threads=16 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_readwrite.master.txt
+
+sysbench oltp_write_only       --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=3307 --pgsql-user=root --pgsql-password='root' --pgsql-db=sbtest --tables=10 --table-size=100 --report-interval=60  --time=21600 --threads=16 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_write_only.master.txt
+
+sysbench oltp_update_index     --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=3307 --pgsql-user=root --pgsql-password='root' --pgsql-db=sbtest --tables=10 --table-size=100 --report-interval=30  --time=15 --threads=16 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_update_index.master.txt
+
+sysbench oltp_update_non_index --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=3307 --pgsql-user=root --pgsql-password='root' --pgsql-db=sbtest --tables=10 --table-size=100 --report-interval=30  --time=15 --threads=16 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_update_non_index.master.txt
+
+sysbench oltp_delete           --db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=3307 --pgsql-user=root --pgsql-password='root' --pgsql-db=sbtest --tables=10 --table-size=100 --report-interval=30  --time=15 --threads=16 --max-requests=0 --percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_delete.master.txt
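The `run` invocations above differ only in the oltp case name and a few timing parameters. If the duplication ever becomes a maintenance burden, one possible refactor is a loop over the shared arguments; this sketch uses a placeholder host and only echoes the commands (dry run) instead of invoking sysbench:

```shell
# Possible refactor of the repeated run commands: one loop, shared arguments.
PROXY_HOST=127.0.0.1  # placeholder; the real script defines its own host constant
COMMON_ARGS="--db-driver=pgsql --pgsql-host=${PROXY_HOST} --pgsql-port=3307 \
--pgsql-user=root --pgsql-password=root --pgsql-db=sbtest --tables=10 \
--table-size=100 --report-interval=30 --time=15 --threads=16 --max-requests=0 \
--percentile=99 --range_selects=off --rand-type=uniform --auto_inc=off"

CMD_COUNT=0
for CASE in oltp_read_only oltp_point_select oltp_update_index \
            oltp_update_non_index oltp_delete; do
  # dry run: print the command that would be executed
  echo "sysbench ${CASE} ${COMMON_ARGS} run | tee ${CASE}.master.txt"
  CMD_COUNT=$((CMD_COUNT + 1))
done
```

The trade-off is that per-case overrides (the longer `--time` used for `oltp_read_write` and `oltp_write_only` above) would need a lookup table, so the flat form is a defensible choice too.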
diff --git a/sysbench/proxy-pgsql/sysbench-proxy-pgsql.pipeline b/sysbench/proxy-pgsql/sysbench-proxy-pgsql.pipeline
new file mode 100644
index 0000000..8355f0b
--- /dev/null
+++ b/sysbench/proxy-pgsql/sysbench-proxy-pgsql.pipeline
@@ -0,0 +1,95 @@
+#!groovy
+
+def DATABASE_TYPE
+def TEST_FUNCTION_LIST
+
+pipeline {
+  agent any
+
+  stages {
+    stage('Init Jenkins Pipeline Environment') {
+      steps {
+        script {
+          DATABASE_TYPE = 'pgsql'
+          TEST_FUNCTION_LIST = ['sharding', 'readwrite-splitting', 'encrypt']
+        }
+      }
+    }
+
+    // clone code, compile code and unzip proxy distribution on proxy node
+    stage('Install Sharding Proxy') {
+      agent {
+        label 'proxy'
+      }
+      steps {
+        echo '...... Installing Sharding Proxy ...... '
+        git url: 'https://gitee.com/taojintianxia/gittee-database-sysbench.git'
+        sh "chmod +x -R ${env.WORKSPACE}"
+        sh 'sysbench/proxy-pgsql/install-proxy.sh'
+      }
+    }
+
+    // generate the result directories and copy the scripts onto the sysbench node.
+    stage('Prepare Sysbench') {
+      agent {
+        label 'sysbench'
+      }
+      steps {
+        echo '..... prepare the directories for sysbench ......'
+        git url: 'https://gitee.com/taojintianxia/gittee-database-sysbench.git'
+        sh "chmod +x -R ${env.WORKSPACE}"
+        sh 'sysbench/proxy-pgsql/prepare-sysbench.sh proxy-pgsql'
+      }
+    }
+
+    stage('Prepare Proxy') {
+      agent {
+        label 'proxy'
+      }
+      steps {
+        echo '..... prepare the directories for proxy ......'
+        git url: 'https://gitee.com/taojintianxia/gittee-database-sysbench.git'
+        sh "chmod +x -R ${env.WORKSPACE}"
+        sh 'sysbench/proxy-pgsql/prepare-proxy.sh'
+      }
+    }
+
+    stage('Test In a Loop') {
+      steps {
+        script {
+          for (int i = 0; i < TEST_FUNCTION_LIST.size(); i++) {
+            stage("Prepare Proxy Config for ${TEST_FUNCTION_LIST[i]}") {
+              node('proxy') {
+                echo "prepare the config for ${TEST_FUNCTION_LIST[i]}"
+                sh "sysbench/proxy-pgsql/prepared-conf/prepare-for-function.sh ${TEST_FUNCTION_LIST[i]}"
+              }
+            }
+
+            stage("Sysbench Start to Test ${TEST_FUNCTION_LIST[i]}") {
+              node('sysbench') {
+                echo "sysbench start to test ${TEST_FUNCTION_LIST[i]}"
+                echo "sleep 10 sencods for waiting proxy start"
+                sh "sleep 10"
+                sh "sysbench/proxy-pgsql/sysbench-test-function.sh ${TEST_FUNCTION_LIST[i]}"
+              }
+            }
+
+            stage("Generate Report for ${TEST_FUNCTION_LIST[i]}") {
+              node('sysbench') {
+                echo "generate report for ${TEST_FUNCTION_LIST[i]}"
+                publishHTML target: [
+                  allowMissing: true,
+                  alwaysLinkToLastBuild: true,
+                  keepAll: true,
+                  reportDir: "/opt/sphere-ex/pgsql/sysbench-result/graph/${TEST_FUNCTION_LIST[i]}/",
+                  reportFiles: '*.png',
+                  reportName: "${TEST_FUNCTION_LIST[i]} Report"
+                ]
+              }
+            }
+          }
+        }
+      }
+    }
+  }
+}
\ No newline at end of file
diff --git a/sysbench/proxy-pgsql/sysbench-test-function.sh b/sysbench/proxy-pgsql/sysbench-test-function.sh
new file mode 100644
index 0000000..af124bc
--- /dev/null
+++ b/sysbench/proxy-pgsql/sysbench-test-function.sh
@@ -0,0 +1,54 @@
+#!/bin/sh
+
+BASE_PATH=$(sh toolkit/read-constant-from-file.sh .env "BASE_PATH")
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh .env "PROXY_PGSQL")
+SYSBENCH_RESULT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_RESULT")
+BUILD_NUMBER_FILE=$(sh toolkit/read-constant-from-file.sh .env "BUILD_NUMBER_FILE")
+SYSBENCH_PGSQL_SCRIPT=$(sh toolkit/read-constant-from-file.sh .env "SYSBENCH_PROXY_PGSQL_SCRIPT")
+TEST_FUNCTION=$1
+
+
+BUILD_NUMBER_FILE_PATH="${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER_FILE}"
+
+# debug
+echo "ctouch path is : ${BUILD_NUMBER_FILE_PATH}"
+
+if [ ! -f "${BUILD_NUMBER_FILE_PATH}" ]; then
+  mkdir -p ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/
+  touch ${BUILD_NUMBER_FILE_PATH}
+fi
+
+BUILD_NUMBER=$(awk 'END {print}' "${BUILD_NUMBER_FILE_PATH}")
+
+## if there is no build number yet, start counting from zero
+if [ -z "${BUILD_NUMBER}" ]; then
+  BUILD_NUMBER=0
+fi
+
+# POSIX arithmetic; `let` is bash-only and this script runs under /bin/sh
+BUILD_NUMBER=$((BUILD_NUMBER + 1))
+
+## if a previous run failed partway, delete its leftover folder
+if [ -d "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER}"  ]; then
+  # "${BASE_PATH:?}" aborts the expansion when the variable is empty, so rm -rf can never target /
+  rm -rf "${BASE_PATH:?}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER}"
+fi
+
+mkdir -p "${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER}"
+cp ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_PGSQL_SCRIPT} ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER}/${SYSBENCH_PGSQL_SCRIPT}
+cd ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER}
+
+#Debug
+echo "path : ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER}"
+echo "python file path : ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}"
+
+echo "the execution script : ${SYSBENCH_PGSQL_SCRIPT}"
+sh ${SYSBENCH_PGSQL_SCRIPT}
+
+rm -rf ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}/${BUILD_NUMBER}/${SYSBENCH_PGSQL_SCRIPT}
+cd ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}/${TEST_FUNCTION}
+# keep only the latest 14 build folder names (version-sorted) in the tracking file
+ls -v | tail -n14 > ${BUILD_NUMBER_FILE_PATH}
+
+
+cd ${BASE_PATH}/${DATABASE_TYPE}/${SYSBENCH_RESULT}
+python3 plot_graph.py ${TEST_FUNCTION}
+
diff --git a/toolkit/plot_graph.py b/toolkit/plot_graph.py
new file mode 100644
index 0000000..11c0b51
--- /dev/null
+++ b/toolkit/plot_graph.py
@@ -0,0 +1,79 @@
+import sys
+import matplotlib.pyplot as plt
+import numpy as np
+
+
+def generate_graph(path, case_name):
+    dataset = {
+        'build_num': [],
+        'master_version': [],
+        'master_xa': [],
+        '4.1.1_version': [],
+        '3.0.0_version': [],
+        'mysql_server': []
+    }
+    with open(path + '/.build_number.txt') as builds:
+        for line in builds:
+            dataset['build_num'].append(int(line))
+    generate_data(path, case_name, dataset)
+    print(dataset)
+    fig, ax = plt.subplots()
+    ax.grid(True)
+    plt.title(case_name)
+
+    data = [dataset['master_version'][-7:], dataset['master_xa'][-7:], dataset['4.1.1_version'][-7:], dataset['3.0.0_version'][-7:], dataset['mysql_server'][-7:]]
+    columns = dataset['build_num'][-7:]
+    rows = ['master', 'xa', '4.1.1', '3.0.0', 'mysql']
+    rcolors = plt.cm.BuPu(np.full(len(rows), 0.1))
+    ccolors = plt.cm.BuPu(np.full(len(columns), 0.1))
+    the_table = plt.table(cellText=data, rowLabels=rows, colLabels=columns, rowColours=rcolors, colColours=ccolors,
+                          loc='bottom', bbox=[0.0, -0.50, 1, .28])
+    plt.subplots_adjust(left=0.15, bottom=0.3, right=0.98)
+
+    plt.xticks(range(14))
+    ax.set_xticklabels(dataset['build_num'])
+    plt.plot(dataset['master_version'], 'o-', color='magenta', label='master_version')
+    plt.plot(dataset['master_xa'], 'o-', color='darkviolet', label='master_xa')
+    plt.plot(dataset['4.1.1_version'], 'r--', color='blue', label='4.1.1_version')
+    plt.plot(dataset['3.0.0_version'], 'r--', color='orange', label='3.0.0_version')
+    plt.plot(dataset['mysql_server'], 'r--', color='lime', label='mysql_server')
+    plt.xlim()
+    plt.legend()
+    plt.xlabel('build_num')
+    plt.ylabel('transactions per second')
+    plt.savefig('graph/' + path + '/' + case_name)
+    plt.close(fig)  # no plt.show(): the Jenkins agent runs headless
+
+
+def generate_data(path, case_name, dataset):
+    for build in dataset['build_num']:
+        fill_dataset(build, case_name, dataset, path, 'master_version', '.master.txt')
+        fill_dataset(build, case_name, dataset, path, 'master_xa', '.xa.txt')
+        fill_dataset(build, case_name, dataset, path, '4.1.1_version', '.4_1_1.txt')
+        fill_dataset(build, case_name, dataset, path, '3.0.0_version', '.3_0_0.txt')
+        fill_dataset(build, case_name, dataset, path, 'mysql_server', '.mysql.txt')
+
+
+def fill_dataset(build, case_name, dataset, path, version, suffix):
+    try:
+        with open(path + '/' + str(build) + '/' + case_name + suffix) as version_master:
+            value = 0
+            for line in version_master:
+                if 'transactions:' in line:
+                    items = line.split('(')  # e.g. "transactions: 72490  (402.67 per sec.)"
+                    value = float(items[1][:-10])  # drop the trailing " per sec.)"
+            dataset[version].append(value)
+    except FileNotFoundError:
+        dataset[version].append(0)
+
+
+if __name__ == '__main__':
+    path = sys.argv[1]
+    generate_graph(path, 'oltp_point_select')
+    generate_graph(path, 'oltp_read_only')
+    generate_graph(path, 'oltp_write_only')
+    generate_graph(path, 'oltp_readwrite')
+    generate_graph(path, 'oltp_update_index')
+    generate_graph(path, 'oltp_update_non_index')
+    generate_graph(path, 'oltp_delete')
+
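The `fill_dataset` helper above extracts the transactions-per-second figure from sysbench's end-of-run summary. A minimal sketch of that parsing on a sample line (the numbers are illustrative):

```python
# a summary line in the shape sysbench prints at the end of a run
line = "    transactions:                        72490  (402.67 per sec.)"

# same logic as fill_dataset: split at '(' and drop the trailing " per sec.)"
items = line.split('(')
value = float(items[1][:-10])
print(value)  # → 402.67
```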
diff --git a/toolkit/read-constant-from-file.sh b/toolkit/read-constant-from-file.sh
new file mode 100644
index 0000000..e90c1aa
--- /dev/null
+++ b/toolkit/read-constant-from-file.sh
@@ -0,0 +1,29 @@
+#!/bin/bash
+
+FILE_NAME=$1
+CONSTANT_NAME=$2
+
+if [ -z "${FILE_NAME}" ]; then
+    echo "file name is empty"
+    exit
+fi
+
+if [ ! -f "${FILE_NAME}" ]; then
+    echo "file ${FILE_NAME} doesn't exist"
+    exit
+fi
+
+if [ -z "${CONSTANT_NAME}" ]; then
+    echo "constant name is empty"
+    exit
+fi
+
+while read -r line
+do
+    KEY=$(echo "$line" | cut -f1 -d"=")
+    if [ "${CONSTANT_NAME}" = "${KEY}" ]; then
+        echo "$line" | cut -f2 -d"="
+    fi
+done < "${FILE_NAME}"
+
+
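`read-constant-from-file.sh` is essentially a lookup over `KEY=VALUE` lines. A quick illustration against a throwaway env file (the file name and keys are invented for the example):

```shell
# throwaway KEY=VALUE file
cat > /tmp/demo.env <<'EOF'
DATABASE_TYPE=mysql
DATABASE_PORT=3307
EOF

# the same extraction the script performs for a requested key
while read -r line; do
    KEY=$(echo "$line" | cut -f1 -d"=")
    if [ "DATABASE_PORT" = "${KEY}" ]; then
        echo "$line" | cut -f2 -d"="   # prints 3307
    fi
done < /tmp/demo.env
```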
diff --git a/toolkit/sysbench-script.sh b/toolkit/sysbench-script.sh
new file mode 100644
index 0000000..15d4869
--- /dev/null
+++ b/toolkit/sysbench-script.sh
@@ -0,0 +1,38 @@
+#!/bin/sh
+
+ENV_FILE_PATH=$1
+
+DATABASE_TYPE=$(sh toolkit/read-constant-from-file.sh "${ENV_FILE_PATH}" "DATABASE_TYPE")
+DATABASE_HOST=$(sh toolkit/read-constant-from-file.sh "${ENV_FILE_PATH}" "DATABASE_HOST")
+DATABASE_PORT=$(sh toolkit/read-constant-from-file.sh "${ENV_FILE_PATH}" "DATABASE_PORT")
+DATABASE_USER_NAME=$(sh toolkit/read-constant-from-file.sh "${ENV_FILE_PATH}" "DATABASE_USER_NAME")
+DATABASE_PASSWORD=$(sh toolkit/read-constant-from-file.sh "${ENV_FILE_PATH}" "DATABASE_PASSWORD")
+DATABASE_NAME=$(sh toolkit/read-constant-from-file.sh "${ENV_FILE_PATH}" "DATABASE_NAME")
+TABLES=$(sh toolkit/read-constant-from-file.sh "${ENV_FILE_PATH}" "TABLES")
+TABLE_SIZE=$(sh toolkit/read-constant-from-file.sh "${ENV_FILE_PATH}" "TABLE_SIZE")
+REPORT_INTERVAL=$(sh toolkit/read-constant-from-file.sh "${ENV_FILE_PATH}" "REPORT_INTERVAL")
+TIME=$(sh toolkit/read-constant-from-file.sh "${ENV_FILE_PATH}" "TIME")
+THREADS=$(sh toolkit/read-constant-from-file.sh "${ENV_FILE_PATH}" "THREADS")
+MAX_REQUESTS=$(sh toolkit/read-constant-from-file.sh "${ENV_FILE_PATH}" "MAX_REQUESTS")
+PERCENTILE=$(sh toolkit/read-constant-from-file.sh "${ENV_FILE_PATH}" "PERCENTILE")
+
+sysbench oltp_read_only --db-driver=${DATABASE_TYPE} --${DATABASE_TYPE}-host=${DATABASE_HOST} --${DATABASE_TYPE}-port=${DATABASE_PORT} --${DATABASE_TYPE}-user=${DATABASE_USER_NAME} --${DATABASE_TYPE}-password=${DATABASE_PASSWORD} --${DATABASE_TYPE}-db=${DATABASE_NAME} --tables=${TABLES} --table-size=${TABLE_SIZE} --report-interval=${REPORT_INTERVAL} --time=${TIME} --threads=${THREADS} --max-requests=${MAX_REQUESTS} --percentile=${PERCENTILE} --rand-type=uniform --range_selects=off --auto_inc=off cleanup
+
+sysbench oltp_read_only --db-driver=${DATABASE_TYPE} --${DATABASE_TYPE}-host=${DATABASE_HOST} --${DATABASE_TYPE}-port=${DATABASE_PORT} --${DATABASE_TYPE}-user=${DATABASE_USER_NAME} --${DATABASE_TYPE}-password=${DATABASE_PASSWORD} --${DATABASE_TYPE}-db=${DATABASE_NAME} --tables=${TABLES} --table-size=${TABLE_SIZE} --report-interval=${REPORT_INTERVAL} --time=${TIME} --threads=${THREADS} --max-requests=${MAX_REQUESTS} --percentile=${PERCENTILE} --rand-type=uniform --range_selects=off --auto_inc=off prepare
+
+# warm-up run; output is discarded before the measured runs below
+sysbench oltp_read_only        --db-driver=${DATABASE_TYPE} --${DATABASE_TYPE}-host=${DATABASE_HOST} --${DATABASE_TYPE}-port=${DATABASE_PORT} --${DATABASE_TYPE}-user=${DATABASE_USER_NAME} --${DATABASE_TYPE}-password=${DATABASE_PASSWORD} --${DATABASE_TYPE}-db=${DATABASE_NAME} --tables=${TABLES} --table-size=${TABLE_SIZE} --report-interval=${REPORT_INTERVAL} --time=${TIME} --threads=${THREADS} --max-requests=${MAX_REQUESTS} --percentile=${PERCENTILE} --range_selects=off --rand-type=uniform --auto_inc=off run
+
+sysbench oltp_read_only        --db-driver=${DATABASE_TYPE} --${DATABASE_TYPE}-host=${DATABASE_HOST} --${DATABASE_TYPE}-port=${DATABASE_PORT} --${DATABASE_TYPE}-user=${DATABASE_USER_NAME} --${DATABASE_TYPE}-password=${DATABASE_PASSWORD} --${DATABASE_TYPE}-db=${DATABASE_NAME} --tables=${TABLES} --table-size=${TABLE_SIZE} --report-interval=${REPORT_INTERVAL} --time=${TIME} --threads=${THREADS} --max-requests=${MAX_REQUESTS} --percentile=${PERCENTILE} --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_read_only.master.txt
+
+sysbench oltp_point_select     --db-driver=${DATABASE_TYPE} --${DATABASE_TYPE}-host=${DATABASE_HOST} --${DATABASE_TYPE}-port=${DATABASE_PORT} --${DATABASE_TYPE}-user=${DATABASE_USER_NAME} --${DATABASE_TYPE}-password=${DATABASE_PASSWORD} --${DATABASE_TYPE}-db=${DATABASE_NAME} --tables=${TABLES} --table-size=${TABLE_SIZE} --report-interval=${REPORT_INTERVAL} --time=${TIME} --threads=${THREADS} --max-requests=${MAX_REQUESTS} --percentile=${PERCENTILE} --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_point_select.master.txt
+
+sysbench oltp_read_write       --db-driver=${DATABASE_TYPE} --${DATABASE_TYPE}-host=${DATABASE_HOST} --${DATABASE_TYPE}-port=${DATABASE_PORT} --${DATABASE_TYPE}-user=${DATABASE_USER_NAME} --${DATABASE_TYPE}-password=${DATABASE_PASSWORD} --${DATABASE_TYPE}-db=${DATABASE_NAME} --tables=${TABLES} --table-size=${TABLE_SIZE} --report-interval=${REPORT_INTERVAL} --time=${TIME} --threads=${THREADS} --max-requests=${MAX_REQUESTS} --percentile=${PERCENTILE} --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_readwrite.master.txt
+
+sysbench oltp_write_only       --db-driver=${DATABASE_TYPE} --${DATABASE_TYPE}-host=${DATABASE_HOST} --${DATABASE_TYPE}-port=${DATABASE_PORT} --${DATABASE_TYPE}-user=${DATABASE_USER_NAME} --${DATABASE_TYPE}-password=${DATABASE_PASSWORD} --${DATABASE_TYPE}-db=${DATABASE_NAME} --tables=${TABLES} --table-size=${TABLE_SIZE} --report-interval=${REPORT_INTERVAL} --time=${TIME} --threads=${THREADS} --max-requests=${MAX_REQUESTS} --percentile=${PERCENTILE} --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_write_only.master.txt
+
+sysbench oltp_update_index     --db-driver=${DATABASE_TYPE} --${DATABASE_TYPE}-host=${DATABASE_HOST} --${DATABASE_TYPE}-port=${DATABASE_PORT} --${DATABASE_TYPE}-user=${DATABASE_USER_NAME} --${DATABASE_TYPE}-password=${DATABASE_PASSWORD} --${DATABASE_TYPE}-db=${DATABASE_NAME} --tables=${TABLES} --table-size=${TABLE_SIZE} --report-interval=${REPORT_INTERVAL} --time=${TIME} --threads=${THREADS} --max-requests=${MAX_REQUESTS} --percentile=${PERCENTILE} --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_update_index.master.txt
+
+sysbench oltp_update_non_index --db-driver=${DATABASE_TYPE} --${DATABASE_TYPE}-host=${DATABASE_HOST} --${DATABASE_TYPE}-port=${DATABASE_PORT} --${DATABASE_TYPE}-user=${DATABASE_USER_NAME} --${DATABASE_TYPE}-password=${DATABASE_PASSWORD} --${DATABASE_TYPE}-db=${DATABASE_NAME} --tables=${TABLES} --table-size=${TABLE_SIZE} --report-interval=${REPORT_INTERVAL} --time=${TIME} --threads=${THREADS} --max-requests=${MAX_REQUESTS} --percentile=${PERCENTILE} --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_update_non_index.master.txt
+
+sysbench oltp_delete           --db-driver=${DATABASE_TYPE} --${DATABASE_TYPE}-host=${DATABASE_HOST} --${DATABASE_TYPE}-port=${DATABASE_PORT} --${DATABASE_TYPE}-user=${DATABASE_USER_NAME} --${DATABASE_TYPE}-password=${DATABASE_PASSWORD} --${DATABASE_TYPE}-db=${DATABASE_NAME} --tables=${TABLES} --table-size=${TABLE_SIZE} --report-interval=${REPORT_INTERVAL} --time=${TIME} --threads=${THREADS} --max-requests=${MAX_REQUESTS} --percentile=${PERCENTILE} --range_selects=off --rand-type=uniform --auto_inc=off run | tee oltp_delete.master.txt
\ No newline at end of file