docs: move docs/fsm to docs/spec-saga-akka and update parameters for the 0.7.0 saga-akka spec
diff --git a/alpha/alpha-benchmark/README.md b/alpha/alpha-benchmark/README.md
index 32d01e7..2765847 100644
--- a/alpha/alpha-benchmark/README.md
+++ b/alpha/alpha-benchmark/README.md
@@ -70,7 +70,7 @@
-Dcom.sun.management.jmxremote.port=9090 \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false \
- -jar alpha-server-0.5.0-SNAPSHOT-exec.jar \
+ -jar alpha-server-0.7.0-SNAPSHOT-exec.jar \
--spring.datasource.username=saga-user \
--spring.datasource.password=saga-password \
--spring.datasource.url="jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false" \
diff --git a/docs/fsm/eventchannel_zh.md b/docs/fsm/eventchannel_zh.md
deleted file mode 100644
index 207eb93..0000000
--- a/docs/fsm/eventchannel_zh.md
+++ /dev/null
@@ -1,56 +0,0 @@
-# 事件通道
-
-Alpha 收到 Omega 发送的事件后放入事件通道等待 Akka 处理。事件通道有三种实现方式:内存通道、Kafka 通道和 Rabbit 通道
-
-| 通道类型 | 模式 | 说明 |
-| -------- | ---- | ------------------------------------------------------------ |
-| memory | 单例 | 使用内存作为数据通道,不建议在生产环境使用 |
-| kafka | 集群 | 使用 Kafka 作为数据通道,使用全局事务ID作为分区策略,集群中的所有节点同时工作,可水平扩展,当配置了 spring.profiles.active=prd,cluster 参数后默认就使用 kafka 通道 |
-| rabbit   | 集群 | 使用 Rabbit 作为数据通道,使用全局事务ID作为分区策略,由于 Rabbit 原生不支持分区,所以引入了 [spring-cloud-stream](https://github.com/spring-cloud/spring-cloud-stream-binder-rabbit) |
-
- 可以使用参数 `alpha.feature.akka.channel.type` 配置通道类型
-
-- Memory 通道参数
-
-| 参数名 | 参数值 | 说明 |
-| -------------------------------------- | ------ | ------------------------------------------- |
-| alpha.feature.akka.channel.type | memory | |
-| alpha.feature.akka.channel.memory.size | -1     | memory 类型时内存队列大小,-1 表示 Integer.MAX_VALUE |
-
-- Kafka 通道参数
-
-| 参数名 | 参数值 | 说明 |
-| --------------------------------------- | -------- | ------------------------------------------- |
-| alpha.feature.akka.channel.type | kafka | |
-| spring.kafka.bootstrap-servers          | localhost:9092 | Kafka 服务地址,多个地址逗号分隔 |
-| spring.kafka.producer.batch-size | 16384 | |
-| spring.kafka.producer.retries | 0 | |
-| spring.kafka.producer.buffer.memory | 33554432 | |
-| spring.kafka.consumer.auto.offset.reset | earliest | |
-| spring.kafka.listener.pollTimeout | 1500 | |
-| kafka.numPartitions | 6 | |
-| kafka.replicationFactor | 1 | |
-
-- Rabbit 通道参数
-
-| 参数名 | 参数值 | 说明 |
-| --------------------------------------- | -------- | ------------------------------------------- |
-| alpha.feature.akka.channel.type | rabbit | |
-| spring.cloud.stream.instance-index | 0 | 分区索引|
-| spring.cloud.stream.instance-count | 1 | |
-| spring.cloud.stream.bindings.service-comb-pack-producer.producer.partition-count| 1|分区数量,需要与 alpha-server 保持一致|
-| spring.cloud.stream.binders.defaultRabbit.environment.spring.rabbitmq.virtual-host| servicecomb-pack | |
-| spring.cloud.stream.binders.defaultRabbit.environment.spring.rabbitmq.host | rabbitmq.servicecomb.io | |
-| spring.cloud.stream.binders.defaultRabbit.environment.spring.rabbitmq.username | servicecomb-pack | |
-| spring.cloud.stream.binders.defaultRabbit.environment.spring.rabbitmq.password | H123213PWD ||
-|spring.cloud.stream.binders.defaultRabbit.type|rabbit|
-|spring.cloud.stream.bindings.service-comb-pack-producer.destination|exchange-service-comb-pack||
-|spring.cloud.stream.bindings.service-comb-pack-producer.content-type|application/json|
-|spring.cloud.stream.bindings.service-comb-pack-producer.producer.partition-key-expression|headers['partitionKey'] | 分区表达式
-|spring.cloud.stream.bindings.service-comb-pack-consumer.group|group-pack|
-|spring.cloud.stream.bindings.service-comb-pack-consumer.content-type|application/json|
-|spring.cloud.stream.bindings.service-comb-pack-consumer.destination|exchange-service-comb-pack|
-spring.cloud.stream.bindings.service-comb-pack-consumer.consumer.partitioned|true|
-
-
-
diff --git a/docs/fsm/fsm_manual.md b/docs/fsm/fsm_manual.md
deleted file mode 100755
index da8327c..0000000
--- a/docs/fsm/fsm_manual.md
+++ /dev/null
@@ -1,298 +0,0 @@
-# 状态机模式
-
-ServiceComb Pack 0.5.0 版本开始我们尝试使用状态机模型解决分布式事务中复杂的事件和状态关系,我们将 Alpha 看作一个可以记录每个全局事务不同状态的盒子,Alpha 收到 Omega 发送的事务消息(全局事务启动、全局事务停止、全局事务失败、子事务启动、子事务停止、子事务失败等)后完成一些动作(等待、补偿、超时)和状态切换。
-
-分布式事务的事件使我们面临很复杂的情况,我们希望可以通过一种DSL来清晰的定义状态机,并且能够解决状态机本身的持久化和分布式问题,再经过尝试后我们觉得 [Akka](https://github.com/akka/akka) 是一个不错的选择。下面请跟我一起体验一下这个新功能。
-
-## 重大更新
-
-* 使用 Akka 状态机代替基于表扫描的状态判断
-* 性能提升一个数量级,事件吞吐量每秒1.8w+,全局事务处理量每秒1.2k+
-* 内置健康指标采集器,可清晰了解系统瓶颈
-* 支持分布式集群
-* 向前兼容原有 gRPC 协议
-* 全新的可视化监控界面
-* 开放全新的 API
-
-## 快速开始
-
-ServiceComb Pack 0.5.0 开始支持 Saga 状态机模式,你只需要在启动 Alpha 和 Omega 端程序时增加 `alpha.feature.akka.enabled=true` 参数。你可以在 [docker hub](https://hub.docker.com/r/coolbeevip/servicecomb-pack) 找到一个 docker-compose 文件,也可以按照以下方式部署。
-
-**注意:** 启用状态机模式后,Saga事务会工作在状态机模式,TCC依然采用数据库方式
-**注意:** 0.6.0+ 版本 Omega 端程序不需要配置 `alpha.feature.akka.enabled=true` 参数
-
-* 启动 PostgreSQL
-
- ```bash
- docker run -d -e "POSTGRES_DB=saga" -e "POSTGRES_USER=saga" -e "POSTGRES_PASSWORD=password" -p 5432:5432 postgres
- ```
-
-* 启动 Elasticsearch
-
- ```bash
- docker run --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.17.1
- ```
-
-* 启动 Alpha
-
- ```bash
- java -jar alpha-server-${version}-exec.jar \
- --spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
- --spring.datasource.username=saga \
- --spring.datasource.password=password \
- --alpha.feature.akka.enabled=true \
- --alpha.feature.akka.transaction.repository.type=elasticsearch \
- --spring.elasticsearch.rest.uris=http://127.0.0.1:9200 \
- --spring.profiles.active=prd
- ```
-
-* Alpha WEB 管理界面
-
- 浏览器中打开 http://localhost:8090/admin
-
-
-### WEB 管理界面
-
-状态机模式开启后,我们还提供了一个简单的管理界面,你可以在这个界面上看到 Alpha 的性能指标、全局事务统计和事务明细
-
-#### 仪表盘
-
-![ui-dashboard](assets/ui-dashboard.png)
-
-* Dashboard 仪表盘上方显示已经结束的全局事务数量
-
- TOTAL:总事务数
-
- SUCCESSFUL:成功结束事务数
-
- COMPENSATED:成功补偿结束事务数
-
- FAILED:失败(挂起)事务数
-
-* Active Transactions 活动事务计数器
-
- COMMITTED:从启动到现在累计成功结束的事务数
-
- COMPENSATED:从启动到现在累计补偿的事务数
-
- SUSPENDED:从启动到现在累计挂起的事务数
-
-* Active Transactions 组件计数器
-
- Events、Actors、Sagas、DB是一组计数器,分别显示Alpha系统中每个处理组件的处理前、处理后计数器以及平均处理时间,通过跟踪这些指标可以了解系统当前的性能以及瓶颈
-
- Events:显示 Alpha 收到的事件数量、受理的事件数量、拒绝的事件数量、平均每个事件的处理时间
-
- Actors:显示状态机收到的事件数量、受理的事件数量、拒绝的事件数量、平均每个事件的处理时间
-
- Sagas:显示开始的全局事务数量、结束的全局事务数量、平均每个全局事务的处理时间
-
- DB:显示持久化ES组件收到的已结束全局事务数量、持久化到ES中的全局事务数量
-
-* Top 10 Slow Transactions
-
- 显示最慢的前十个事务,点击后可以看到这个慢事务的明细信息
-
-* System Info
-
- 显示了当前 Alpha 服务的系统,JVM,线程等信息
-
-**注意:** Active Transactions 中的指标值重启后自动归零
-
-#### Saga 事务查询列表
-
-![ui-transactions-list](assets/ui-transactions-list.png)
-
-事务查询列表可以显示所有已经结束的全局事务的服务ID,服务实例ID,全局事务ID,包含子事务数量、事务耗时、最终结束状态(成功提交、成功补偿、挂起)等,点击一个全局事务后可以查看这个全局事务的明细信息,你也可以在顶部搜索框中输入全局事务ID后快速定位到这个事务。
-
-#### 全局事务明细
-
-全局事务明细页面显示了这个全局事务的事件信息和子事务信息
-
-Events 面板:本事务收到的事件类型、事件时间、发送事件的服务ID和实例ID等详细信息
-
-Sub Transactions 面板:本事务包含的子事务ID,子事务状态,子事务耗时等详细信息
-
-##### 全局事务成功结束
-
-![ui-transaction-details-successful](assets/ui-transaction-details-successful.png)
-
-事件卡片右下角的下箭头点击后可以看到子事务的补偿方法信息
-
-##### 全局事务成功补偿结束
-
-![ui-transaction-details-compensated](assets/ui-transaction-details-compensated.png)
-
-红色字体显示收到了一个失败事件,点击右侧红色下箭头可以看到失败的错误堆栈
-
-##### 全局事务失败结束-超时挂起
-
-![ui-transaction-details-failed-timeout](assets/ui-transaction-details-failed-timeout.png)
-
-红色字体显示这个事务由于设置了5秒钟超时,并且在5秒钟后没有收到结束事件而处于挂起状态
-
-##### 全局事务失败结束-非预期事件挂起
-
-![ui-transaction-details-failed-unpredictable](assets/ui-transaction-details-failed-unpredictable.png)
-
-因为并没有收到子事务的任何事件,这并不符合状态机预期,所以红色字体显示不可预期挂起
-
-## 集群
-
-> 需要下载主干代码后自己编译 0.6.0 版本
-
-依赖 Kafka 和 Redis 我们可以部署一个具有分布式处理能力的 Alpha 集群。Alpha 集群基于 Akka Cluster Sharding 和 Akka Persistence 实现动态计算和故障自愈。
-
-![image-20190927150455006](assets/alpha-cluster-architecture.png)
-
-上边是 Alpha 集群的工作架构图,表示部署了两个 Alpha 节点,分别是 8070 和 8071(这两个编号是 [Gossip](https://en.wikipedia.org/wiki/Gossip_protocol) 协议的通信端口)。Omega 消息被发送到 Kafka,并使用 globalTxId 作为分区策略,这保证了同一个全局事务下的子事务可以被有序地消费。KafkaConsumer 负责从 Kafka 中读取事件并发送给集群分片器 ShardingCoordinator,ShardingCoordinator 负责在 Alpha 集群中创建 SagaActor 并发送这个消息。运行中的 SagaActor 接收到消息后会持久化到 Redis 中,当集群中的节点崩溃后可以在集群其他节点恢复 SagaActor 以及它的状态。当 SagaActor 结束后就会将这一笔全局事务的数据存储到 ES。
-
-启动 Alpha 集群非常容易,首先启动集群需要用到的中间件 Kafka、Redis、PostgreSQL/MySQL、ElasticSearch,你可以使用 Docker 启动它们(在生产环境建议使用更可靠的部署方式),下边提供了一个 docker compose 文件 servicecomb-pack-middleware.yml,你可以直接使用命令 `docker-compose -f servicecomb-pack-middleware.yml up -d` 启动它。
-
-```yaml
-version: '3.2'
-services:
- postgres:
- image: postgres:9.6
- hostname: postgres
- container_name: postgres
- ports:
- - '5432:5432'
- environment:
- - POSTGRES_DB=saga
- - POSTGRES_USER=saga
- - POSTGRES_PASSWORD=password
-
- elasticsearch:
- image: elasticsearch:7.17.1
- hostname: elasticsearch
- container_name: elasticsearch
- environment:
- - "ES_JAVA_OPTS=-Xmx256m -Xms256m"
- - "discovery.type=single-node"
- - "cluster.routing.allocation.disk.threshold_enabled=false"
- ulimits:
- memlock:
- soft: -1
- hard: -1
- ports:
- - 9200:9200
- - 9300:9300
-
- zookeeper:
- image: coolbeevip/alpine-zookeeper:3.4.14
- hostname: zookeeper
- container_name: zookeeper
- ports:
- - 2181:2181
-
- kafka:
- image: coolbeevip/alpine-kafka:2.2.1-2.12
- hostname: kafka
- container_name: kafka
- environment:
- KAFKA_ADVERTISED_HOST_NAME: 10.50.8.3
- KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
- ports:
- - 9092:9092
- links:
- - zookeeper:zookeeper
- depends_on:
- - zookeeper
-
- redis:
- image: redis:5.0.5-alpine
- hostname: redis
- container_name: redis
- ports:
- - 6379:6379
-```
-
-**注意:** KAFKA_ADVERTISED_HOST_NAME 一定要配置成服务器的真实 IP 地址,不能配置成 127.0.0.1 或者 localhost
-
-然后我们启动一个具有两个 Alpha 节点的集群,因为我是在一台机器上启动两个节点,所以它们必须使用不同的端口
-
-* 端口规划
-
- | 节点 | gRPC 端口 | REST 端口 | Gossip 端口 |
- | ------- | --------- | --------- | ----------- |
- | Alpha 1 | 8080 | 8090 | 8070 |
- | Alpha 2 | 8081 | 8091 | 8071 |
-
-* 集群参数
-
- | 参数名 | 说明 |
- | ------------------------------------------------ | ------------------------------------------------------------ |
- | server.port | REST 端口,默认值 8090 |
- | alpha.server.port | gRPC 端口,默认值 8080 |
- | akkaConfig.akka.remote.artery.canonical.port | Gossip 端口,默认值 8070 |
- | spring.kafka.bootstrap-servers                   | Kafka 地址                                                   |
- | akkaConfig.akka-persistence-redis.redis.host | Redis Host IP |
- | akkaConfig.akka-persistence-redis.redis.port | Redis Port |
- | akkaConfig.akka-persistence-redis.redis.database | Redis Database |
- | akkaConfig.akka.cluster.seed-nodes[N] | Gossip seed 节点地址,如果有多个 seed 节点,那么就写多行这个参数,每行的序号 N 从 0 开始采用递增方式 |
- | spring.profiles.active | 必须填写 prd,cluster |
-
-* 启动 Alpha 1
-
- ```bash
- java -jar alpha-server-0.6.0-SNAPSHOT-exec.jar \
- --server.port=8090 \
- --server.host=127.0.0.1 \
- --alpha.server.port=8080 \
- --alpha.feature.akka.enabled=true \
- --spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/saga?useSSL=false \
- --spring.datasource.username=saga \
- --spring.datasource.password=password \
- --spring.kafka.bootstrap-servers=127.0.0.1:9092 \
- --spring.elasticsearch.rest.uris=http://127.0.0.1:9200 \
- --akkaConfig.akka.remote.artery.canonical.port=8070 \
- --akkaConfig.akka.cluster.seed-nodes[0]="akka://alpha-cluster@127.0.0.1:8070" \
- --akkaConfig.akka-persistence-redis.redis.host=127.0.0.1 \
- --akkaConfig.akka-persistence-redis.redis.port=6379 \
- --spring.profiles.active=prd,cluster
- ```
-
-* 启动 Alpha 2
-
- ```bash
- java -jar alpha-server-0.6.0-SNAPSHOT-exec.jar \
- --server.port=8091 \
- --server.host=127.0.0.1 \
- --alpha.server.port=8081 \
- --alpha.feature.akka.enabled=true \
- --spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/saga?useSSL=false \
- --spring.datasource.username=saga \
- --spring.datasource.password=password \
- --spring.kafka.bootstrap-servers=127.0.0.1:9092 \
- --spring.elasticsearch.rest.uris=http://127.0.0.1:9200 \
- --akkaConfig.akka.remote.artery.canonical.port=8071 \
- --akkaConfig.akka.cluster.seed-nodes[0]="akka://alpha-cluster@127.0.0.1:8070" \
- --akkaConfig.akka-persistence-redis.redis.host=127.0.0.1 \
- --akkaConfig.akka-persistence-redis.redis.port=6379 \
- --spring.profiles.active=prd,cluster
- ```
-
-## 动态扩容
-
-- Alpha 支持通过动态增加节点的方式实现在线处理能力扩容
-- Alpha 默认创建的 Kafka Topic 分区数量是 6,也就是说 Alpha 集群节点大于6个时将不能再提升处理性能,你可以根据规划在初次启动的时候使用 `kafka.numPartitions` 参数修改自动创建的 Topic 分区数
-
-## 附件
-
-[事件通道](eventchannel_zh.md)
-
-[持久化](persistence_zh.md)
-
-[Akka 配置](akka_zh.md)
-
-[APIs](apis_zh.md)
-
-[设计文档](design_fsm_zh.md)
-
-[基准测试报告](benchmark_zh.md)
-
-
-
-
-
diff --git a/docs/fsm/akka_zh.md b/docs/spec-saga-akka/akka_zh.md
similarity index 100%
rename from docs/fsm/akka_zh.md
rename to docs/spec-saga-akka/akka_zh.md
diff --git a/docs/fsm/apis_zh.md b/docs/spec-saga-akka/apis_zh.md
similarity index 100%
rename from docs/fsm/apis_zh.md
rename to docs/spec-saga-akka/apis_zh.md
diff --git a/docs/fsm/assets/alpha-cluster-architecture.png b/docs/spec-saga-akka/assets/alpha-cluster-architecture.png
similarity index 100%
rename from docs/fsm/assets/alpha-cluster-architecture.png
rename to docs/spec-saga-akka/assets/alpha-cluster-architecture.png
Binary files differ
diff --git a/docs/fsm/assets/benchmark-alpha-1.png b/docs/spec-saga-akka/assets/benchmark-alpha-1.png
similarity index 100%
rename from docs/fsm/assets/benchmark-alpha-1.png
rename to docs/spec-saga-akka/assets/benchmark-alpha-1.png
Binary files differ
diff --git a/docs/fsm/assets/benchmark-alpha-2.png b/docs/spec-saga-akka/assets/benchmark-alpha-2.png
similarity index 100%
rename from docs/fsm/assets/benchmark-alpha-2.png
rename to docs/spec-saga-akka/assets/benchmark-alpha-2.png
Binary files differ
diff --git a/docs/fsm/assets/cmd-0.4.0-1w-100.png b/docs/spec-saga-akka/assets/cmd-0.4.0-1w-100.png
similarity index 100%
rename from docs/fsm/assets/cmd-0.4.0-1w-100.png
rename to docs/spec-saga-akka/assets/cmd-0.4.0-1w-100.png
Binary files differ
diff --git a/docs/fsm/assets/cmd-0.4.0-1w-500.png b/docs/spec-saga-akka/assets/cmd-0.4.0-1w-500.png
similarity index 100%
rename from docs/fsm/assets/cmd-0.4.0-1w-500.png
rename to docs/spec-saga-akka/assets/cmd-0.4.0-1w-500.png
Binary files differ
diff --git a/docs/fsm/assets/cmd-0.5.0-1w-100.png b/docs/spec-saga-akka/assets/cmd-0.5.0-1w-100.png
similarity index 100%
rename from docs/fsm/assets/cmd-0.5.0-1w-100.png
rename to docs/spec-saga-akka/assets/cmd-0.5.0-1w-100.png
Binary files differ
diff --git a/docs/fsm/assets/cmd-0.5.0-1w-1000.png b/docs/spec-saga-akka/assets/cmd-0.5.0-1w-1000.png
similarity index 100%
rename from docs/fsm/assets/cmd-0.5.0-1w-1000.png
rename to docs/spec-saga-akka/assets/cmd-0.5.0-1w-1000.png
Binary files differ
diff --git a/docs/fsm/assets/cmd-0.5.0-1w-2000.png b/docs/spec-saga-akka/assets/cmd-0.5.0-1w-2000.png
similarity index 100%
rename from docs/fsm/assets/cmd-0.5.0-1w-2000.png
rename to docs/spec-saga-akka/assets/cmd-0.5.0-1w-2000.png
Binary files differ
diff --git a/docs/fsm/assets/cmd-0.5.0-1w-500.png b/docs/spec-saga-akka/assets/cmd-0.5.0-1w-500.png
similarity index 100%
rename from docs/fsm/assets/cmd-0.5.0-1w-500.png
rename to docs/spec-saga-akka/assets/cmd-0.5.0-1w-500.png
Binary files differ
diff --git a/docs/fsm/assets/cmd-0.5.0-5w-1000.png b/docs/spec-saga-akka/assets/cmd-0.5.0-5w-1000.png
similarity index 100%
rename from docs/fsm/assets/cmd-0.5.0-5w-1000.png
rename to docs/spec-saga-akka/assets/cmd-0.5.0-5w-1000.png
Binary files differ
diff --git a/docs/fsm/assets/cmd-0.5.0-5w-2000.png b/docs/spec-saga-akka/assets/cmd-0.5.0-5w-2000.png
similarity index 100%
rename from docs/fsm/assets/cmd-0.5.0-5w-2000.png
rename to docs/spec-saga-akka/assets/cmd-0.5.0-5w-2000.png
Binary files differ
diff --git a/docs/fsm/assets/cmd-0.5.0-5w-3000.png b/docs/spec-saga-akka/assets/cmd-0.5.0-5w-3000.png
similarity index 100%
rename from docs/fsm/assets/cmd-0.5.0-5w-3000.png
rename to docs/spec-saga-akka/assets/cmd-0.5.0-5w-3000.png
Binary files differ
diff --git a/docs/fsm/assets/fsm.png b/docs/spec-saga-akka/assets/fsm.png
similarity index 100%
rename from docs/fsm/assets/fsm.png
rename to docs/spec-saga-akka/assets/fsm.png
Binary files differ
diff --git a/docs/fsm/assets/saga_state_diagram.png b/docs/spec-saga-akka/assets/saga_state_diagram.png
similarity index 100%
rename from docs/fsm/assets/saga_state_diagram.png
rename to docs/spec-saga-akka/assets/saga_state_diagram.png
Binary files differ
diff --git a/docs/fsm/assets/state_table.png b/docs/spec-saga-akka/assets/state_table.png
similarity index 100%
rename from docs/fsm/assets/state_table.png
rename to docs/spec-saga-akka/assets/state_table.png
Binary files differ
diff --git a/docs/fsm/assets/ui-dashboard.png b/docs/spec-saga-akka/assets/ui-dashboard.png
similarity index 100%
rename from docs/fsm/assets/ui-dashboard.png
rename to docs/spec-saga-akka/assets/ui-dashboard.png
Binary files differ
diff --git a/docs/fsm/assets/ui-transaction-details-compensated.png b/docs/spec-saga-akka/assets/ui-transaction-details-compensated.png
similarity index 100%
rename from docs/fsm/assets/ui-transaction-details-compensated.png
rename to docs/spec-saga-akka/assets/ui-transaction-details-compensated.png
Binary files differ
diff --git a/docs/fsm/assets/ui-transaction-details-failed-timeout.png b/docs/spec-saga-akka/assets/ui-transaction-details-failed-timeout.png
similarity index 100%
rename from docs/fsm/assets/ui-transaction-details-failed-timeout.png
rename to docs/spec-saga-akka/assets/ui-transaction-details-failed-timeout.png
Binary files differ
diff --git a/docs/fsm/assets/ui-transaction-details-failed-unpredictable.png b/docs/spec-saga-akka/assets/ui-transaction-details-failed-unpredictable.png
similarity index 100%
rename from docs/fsm/assets/ui-transaction-details-failed-unpredictable.png
rename to docs/spec-saga-akka/assets/ui-transaction-details-failed-unpredictable.png
Binary files differ
diff --git a/docs/fsm/assets/ui-transaction-details-failed.png b/docs/spec-saga-akka/assets/ui-transaction-details-failed.png
similarity index 100%
rename from docs/fsm/assets/ui-transaction-details-failed.png
rename to docs/spec-saga-akka/assets/ui-transaction-details-failed.png
Binary files differ
diff --git a/docs/fsm/assets/ui-transaction-details-successful.png b/docs/spec-saga-akka/assets/ui-transaction-details-successful.png
similarity index 100%
rename from docs/fsm/assets/ui-transaction-details-successful.png
rename to docs/spec-saga-akka/assets/ui-transaction-details-successful.png
Binary files differ
diff --git a/docs/fsm/assets/ui-transactions-list.png b/docs/spec-saga-akka/assets/ui-transactions-list.png
similarity index 100%
rename from docs/fsm/assets/ui-transactions-list.png
rename to docs/spec-saga-akka/assets/ui-transactions-list.png
Binary files differ
diff --git a/docs/fsm/assets/vm-0.4.0-1w-100.png b/docs/spec-saga-akka/assets/vm-0.4.0-1w-100.png
similarity index 100%
rename from docs/fsm/assets/vm-0.4.0-1w-100.png
rename to docs/spec-saga-akka/assets/vm-0.4.0-1w-100.png
Binary files differ
diff --git a/docs/fsm/assets/vm-0.4.0-1w-500.png b/docs/spec-saga-akka/assets/vm-0.4.0-1w-500.png
similarity index 100%
rename from docs/fsm/assets/vm-0.4.0-1w-500.png
rename to docs/spec-saga-akka/assets/vm-0.4.0-1w-500.png
Binary files differ
diff --git a/docs/fsm/assets/vm-0.5.0-1w-100.png b/docs/spec-saga-akka/assets/vm-0.5.0-1w-100.png
similarity index 100%
rename from docs/fsm/assets/vm-0.5.0-1w-100.png
rename to docs/spec-saga-akka/assets/vm-0.5.0-1w-100.png
Binary files differ
diff --git a/docs/fsm/assets/vm-0.5.0-1w-1000.png b/docs/spec-saga-akka/assets/vm-0.5.0-1w-1000.png
similarity index 100%
rename from docs/fsm/assets/vm-0.5.0-1w-1000.png
rename to docs/spec-saga-akka/assets/vm-0.5.0-1w-1000.png
Binary files differ
diff --git a/docs/fsm/assets/vm-0.5.0-1w-2000.png b/docs/spec-saga-akka/assets/vm-0.5.0-1w-2000.png
similarity index 100%
rename from docs/fsm/assets/vm-0.5.0-1w-2000.png
rename to docs/spec-saga-akka/assets/vm-0.5.0-1w-2000.png
Binary files differ
diff --git a/docs/fsm/assets/vm-0.5.0-1w-500.png b/docs/spec-saga-akka/assets/vm-0.5.0-1w-500.png
similarity index 100%
rename from docs/fsm/assets/vm-0.5.0-1w-500.png
rename to docs/spec-saga-akka/assets/vm-0.5.0-1w-500.png
Binary files differ
diff --git a/docs/fsm/assets/vm-0.5.0-5w-1000.png b/docs/spec-saga-akka/assets/vm-0.5.0-5w-1000.png
similarity index 100%
rename from docs/fsm/assets/vm-0.5.0-5w-1000.png
rename to docs/spec-saga-akka/assets/vm-0.5.0-5w-1000.png
Binary files differ
diff --git a/docs/fsm/assets/vm-0.5.0-5w-2000.png b/docs/spec-saga-akka/assets/vm-0.5.0-5w-2000.png
similarity index 100%
rename from docs/fsm/assets/vm-0.5.0-5w-2000.png
rename to docs/spec-saga-akka/assets/vm-0.5.0-5w-2000.png
Binary files differ
diff --git a/docs/fsm/assets/vm-0.5.0-5w-3000.png b/docs/spec-saga-akka/assets/vm-0.5.0-5w-3000.png
similarity index 100%
rename from docs/fsm/assets/vm-0.5.0-5w-3000.png
rename to docs/spec-saga-akka/assets/vm-0.5.0-5w-3000.png
Binary files differ
diff --git a/docs/fsm/benchmark_zh.md b/docs/spec-saga-akka/benchmark_zh.md
similarity index 95%
rename from docs/fsm/benchmark_zh.md
rename to docs/spec-saga-akka/benchmark_zh.md
index 256f4c0..44432d1 100644
--- a/docs/fsm/benchmark_zh.md
+++ b/docs/spec-saga-akka/benchmark_zh.md
@@ -67,13 +67,12 @@
-Dcom.sun.management.jmxremote.port=9090 \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false \
- -jar alpha-server-0.5.0-SNAPSHOT-exec.jar \
- --spring.datasource.username=saga-user \
- --spring.datasource.password=saga-password \
- --spring.datasource.url="jdbc:postgresql://10.22.1.234:5432/saga?useSSL=false" \
+ -jar alpha-server-0.7.0-SNAPSHOT-exec.jar \
+ --alpha.spec.names=saga-akka \
+ --alpha.spec.saga.akka.repository.name=elasticsearch \
+ --alpha.spec.saga.akka.repository.elasticsearch.uris=http://127.0.0.1:9200 \
--spring.profiles.active=prd \
- --alpha.feature.nativetransport=true \
- --alpha.feature.akka.enabled=true
+ --alpha.feature.nativetransport=true
```
## 测试报告
diff --git a/docs/fsm/design_fsm_zh.md b/docs/spec-saga-akka/design_fsm_zh.md
similarity index 100%
rename from docs/fsm/design_fsm_zh.md
rename to docs/spec-saga-akka/design_fsm_zh.md
diff --git a/docs/spec-saga-akka/eventchannel_zh.md b/docs/spec-saga-akka/eventchannel_zh.md
new file mode 100644
index 0000000..34cdc48
--- /dev/null
+++ b/docs/spec-saga-akka/eventchannel_zh.md
@@ -0,0 +1,56 @@
+# 事件通道
+
+Alpha 收到 Omega 发送的事件后放入事件通道等待 Akka 处理。事件通道有三种实现方式:内存通道、Kafka 通道和 Rabbit 通道
+
+| 通道类型 | 模式 | 说明 |
+| -------- | ---- | ------------------------------------------------------------ |
+| memory | 单例 | 使用内存作为数据通道,不建议在生产环境使用 |
+| kafka | 集群 | 使用 Kafka 作为数据通道,使用全局事务ID作为分区策略,集群中的所有节点同时工作,可水平扩展,当配置了 spring.profiles.active=prd,cluster 参数后默认就使用 kafka 通道 |
+| rabbit   | 集群 | 使用 Rabbit 作为数据通道,使用全局事务ID作为分区策略,由于 Rabbit 原生不支持分区,所以引入了 [spring-cloud-stream](https://github.com/spring-cloud/spring-cloud-stream-binder-rabbit) |
+
+ 可以使用参数 `alpha.spec.saga.akka.channel.name` 配置通道类型
+
+- Memory 通道参数
+
+| 参数名 | 参数值 | 说明 |
+| -------------------------------------- | ------ | ------------------------------------------- |
+| alpha.spec.saga.akka.channel.name | memory | |
+| alpha.spec.saga.akka.channel.max-length | -1     | memory 类型时内存队列大小,-1 表示 Integer.MAX_VALUE |
+
+- Kafka 通道参数
+
+| 参数名 | 参数值 | 说明 |
+|---------------------------------------------------------------| -------- | ------------------------------------------- |
+| alpha.spec.saga.akka.channel.name | kafka | |
+| alpha.spec.saga.akka.channel.kafka.bootstrap-servers          | localhost:9092 | Kafka 服务地址,多个地址逗号分隔 |
+| alpha.spec.saga.akka.channel.kafka.producer.batch-size | 16384 | |
+| alpha.spec.saga.akka.channel.kafka.producer.retries | 0 | |
+| alpha.spec.saga.akka.channel.kafka.producer.buffer.memory | 33554432 | |
+| alpha.spec.saga.akka.channel.kafka.consumer.auto.offset.reset | earliest | |
+| alpha.spec.saga.akka.channel.kafka.listener.pollTimeout | 1500 | |
+| alpha.spec.saga.akka.channel.kafka.numPartitions | 6 | |
+| alpha.spec.saga.akka.channel.kafka.replicationFactor | 1 | |
+
+- Rabbit 通道参数
+
+| 参数名 | 参数值 | 说明 |
+|-------------------------------------------------------------------------------------------|----------------------------|------------------------------|
+| alpha.spec.saga.akka.channel.name | rabbit | |
+| spring.cloud.stream.instance-index | 0 | 分区索引 |
+| spring.cloud.stream.instance-count | 1 | |
+| spring.cloud.stream.bindings.service-comb-pack-producer.producer.partition-count          | 1                          | 分区数量,需要与 alpha-server 保持一致 |
+| spring.cloud.stream.binders.defaultRabbit.environment.spring.rabbitmq.virtual-host | servicecomb-pack | |
+| spring.cloud.stream.binders.defaultRabbit.environment.spring.rabbitmq.host | rabbitmq.servicecomb.io | |
+| spring.cloud.stream.binders.defaultRabbit.environment.spring.rabbitmq.username | servicecomb-pack | |
+| spring.cloud.stream.binders.defaultRabbit.environment.spring.rabbitmq.password | H123213PWD | |
+| spring.cloud.stream.binders.defaultRabbit.type | rabbit | |
+| spring.cloud.stream.bindings.service-comb-pack-producer.destination | exchange-service-comb-pack | |
+| spring.cloud.stream.bindings.service-comb-pack-producer.content-type | application/json | |
+| spring.cloud.stream.bindings.service-comb-pack-producer.producer.partition-key-expression | headers['partitionKey'] | 分区表达式 |
+| spring.cloud.stream.bindings.service-comb-pack-consumer.group | group-pack | |
+| spring.cloud.stream.bindings.service-comb-pack-consumer.content-type | application/json | |
+| spring.cloud.stream.bindings.service-comb-pack-consumer.destination | exchange-service-comb-pack | |
+| spring.cloud.stream.bindings.service-comb-pack-consumer.consumer.partitioned | true | |
+
+
+
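The kafka and rabbit channels above both partition events by the global transaction ID so that all events of one saga are consumed in order by a single node. The effect can be sketched with a stable checksum (an illustration only, not the actual partitioner: Kafka's default partitioner hashes the record key with murmur2, and `kafka.numPartitions` defaults to 6):

```bash
# Illustration: map a globalTxId to one of 6 partitions with a stable
# checksum, mimicking the partition-by-transaction-id strategy.
num_partitions=6
global_tx_id="demo-tx-1"   # hypothetical transaction id
crc=$(printf '%s' "$global_tx_id" | cksum | cut -d' ' -f1)
partition=$(( crc % num_partitions ))
echo "partition=$partition"
```

Because the mapping is deterministic, every event carrying the same `globalTxId` lands in the same partition, which is what keeps one global transaction's events ordered.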
diff --git a/docs/fsm/fsm_manual_zh.md b/docs/spec-saga-akka/fsm_manual.md
similarity index 88%
copy from docs/fsm/fsm_manual_zh.md
copy to docs/spec-saga-akka/fsm_manual.md
index 31a0367..3626e03 100755
--- a/docs/fsm/fsm_manual_zh.md
+++ b/docs/spec-saga-akka/fsm_manual.md
@@ -16,7 +16,7 @@
## 快速开始
-ServiceComb Pack 0.5.0 开始支持 Saga 状态机模式,你只需要在启动 Alpha 和 Omega 端程序时增加 `alpha.feature.akka.enabled=true` 参数。你可以在 [docker hub](https://hub.docker.com/r/coolbeevip/servicecomb-pack) 找到一个 docker-compose 文件,也可以按照以下方式部署。
+ServiceComb Pack 0.5.0 开始支持 Saga 状态机模式,你只需要在启动 Alpha 时增加 `alpha.spec.names=saga-akka` 参数,在启动 Omega 端程序时增加 `omega.spec.names=saga` 参数。你可以在 [docker hub](https://hub.docker.com/r/coolbeevip/servicecomb-pack) 找到一个 docker-compose 文件,也可以按照以下方式部署。
**注意:** 启用状态机模式后,Saga事务会工作在状态机模式,TCC依然采用数据库方式
**注意:** 0.6.0+ 版本 Omega 端程序不需要配置 `alpha.feature.akka.enabled=true` 参数
@@ -40,9 +40,10 @@
--spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
--spring.datasource.username=saga \
--spring.datasource.password=password \
- --alpha.feature.akka.enabled=true \
- --alpha.feature.akka.transaction.repository.type=elasticsearch \
- --spring.elasticsearch.rest.uris=http://127.0.0.1:9200 \
+ --alpha.spec.names=saga-akka \
+ --alpha.spec.saga.akka.channel.name=memory \
+ --alpha.spec.saga.akka.repository.name=elasticsearch \
+ --alpha.spec.saga.akka.repository.elasticsearch.uris=http://127.0.0.1:9200 \
--spring.profiles.active=prd
```
@@ -95,7 +96,7 @@
* System Info
- 显示了当前 Alpha 服务的系统,JVM,线程等信息
+ 显示了当前 Alpha 服务的系统,JVM,线程等信息
**注意:**Active Transactions 中的指标值重启后自动归零
@@ -215,14 +216,14 @@
* 端口规划
| 节点 | gRPC 端口 | REST 端口 | Gossip 端口 |
- | ------- | --------- | --------- | ----------- |
+ | ------- | --------- | --------- | ----------- |
| Alpha 1 | 8080 | 8090 | 8070 |
| Alpha 2 | 8081 | 8091 | 8071 |
* 集群参数
| 参数名 | 说明 |
- | ------------------------------------------------ | ------------------------------------------------------------ |
+ | ------------------------------------------------ | ------------------------------------------------------------ |
| server.port | REST 端口,默认值 8090 |
| alpha.server.port | gRPC 端口,默认值 8080 |
| akkaConfig.akka.remote.artery.canonical.port | Gossip 端口,默认值 8070 |
@@ -236,16 +237,15 @@
* 启动 Alpha 1
```bash
- java -jar alpha-server-0.6.0-SNAPSHOT-exec.jar \
+ java -jar alpha-server-0.7.0-SNAPSHOT-exec.jar \
--server.port=8090 \
--server.host=127.0.0.1 \
--alpha.server.port=8080 \
- --alpha.feature.akka.enabled=true \
- --spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/saga?useSSL=false \
- --spring.datasource.username=saga \
- --spring.datasource.password=password \
- --spring.kafka.bootstrap-servers=127.0.0.1:9092 \
- --spring.elasticsearch.rest.uris=http://127.0.0.1:9200 \
+ --alpha.spec.names=saga-akka \
+ --alpha.spec.saga.akka.repository.name=elasticsearch \
+ --alpha.spec.saga.akka.repository.elasticsearch.uris=http://127.0.0.1:9200 \
+ --alpha.spec.saga.akka.channel.name=kafka \
+ --alpha.spec.saga.akka.channel.kafka.bootstrap-servers=127.0.0.1:9092 \
--akkaConfig.akka.remote.artery.canonical.port=8070 \
--akkaConfig.akka.cluster.seed-nodes[0]="akka://alpha-cluster@127.0.0.1:8070" \
--akkaConfig.akka-persistence-redis.redis.host=127.0.0.1 \
@@ -256,16 +256,15 @@
* 启动 Alpha 2
```bash
- java -jar alpha-server-0.6.0-SNAPSHOT-exec.jar \
+ java -jar alpha-server-0.7.0-SNAPSHOT-exec.jar \
--server.port=8091 \
--server.host=127.0.0.1 \
--alpha.server.port=8081 \
- --alpha.feature.akka.enabled=true \
- --spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/saga?useSSL=false \
- --spring.datasource.username=saga \
- --spring.datasource.password=password \
- --spring.kafka.bootstrap-servers=127.0.0.1:9092 \
- --spring.elasticsearch.rest.uris=http://127.0.0.1:9200 \
+ --alpha.spec.names=saga-akka \
+ --alpha.spec.saga.akka.repository.name=elasticsearch \
+ --alpha.spec.saga.akka.repository.elasticsearch.uris=http://127.0.0.1:9200 \
+ --alpha.spec.saga.akka.channel.name=kafka \
+ --alpha.spec.saga.akka.channel.kafka.bootstrap-servers=127.0.0.1:9092 \
--akkaConfig.akka.remote.artery.canonical.port=8071 \
--akkaConfig.akka.cluster.seed-nodes[0]="akka://alpha-cluster@127.0.0.1:8070" \
--akkaConfig.akka-persistence-redis.redis.host=127.0.0.1 \
diff --git a/docs/fsm/fsm_manual_zh.md b/docs/spec-saga-akka/fsm_manual_zh.md
similarity index 90%
rename from docs/fsm/fsm_manual_zh.md
rename to docs/spec-saga-akka/fsm_manual_zh.md
index 31a0367..2a4323d 100755
--- a/docs/fsm/fsm_manual_zh.md
+++ b/docs/spec-saga-akka/fsm_manual_zh.md
@@ -16,7 +16,7 @@
## 快速开始
-ServiceComb Pack 0.5.0 开始支持 Saga 状态机模式,你只需要在启动 Alpha 和 Omega 端程序时增加 `alpha.feature.akka.enabled=true` 参数。你可以在 [docker hub](https://hub.docker.com/r/coolbeevip/servicecomb-pack) 找到一个 docker-compose 文件,也可以按照以下方式部署。
+ServiceComb Pack 0.5.0 开始支持 Saga 状态机模式,你只需要在启动 Alpha 时增加 `alpha.spec.names=saga-akka` 参数,在启动 Omega 端程序时增加 `omega.spec.names=saga` 参数。你可以在 [docker hub](https://hub.docker.com/r/coolbeevip/servicecomb-pack) 找到一个 docker-compose 文件,也可以按照以下方式部署。
**注意:** 启用状态机模式后,Saga事务会工作在状态机模式,TCC依然采用数据库方式
**注意:** 0.6.0+ 版本 Omega 端程序不需要配置 `alpha.feature.akka.enabled=true` 参数
@@ -40,9 +40,10 @@
--spring.datasource.url=jdbc:postgresql://0.0.0.0:5432/saga?useSSL=false \
--spring.datasource.username=saga \
--spring.datasource.password=password \
- --alpha.feature.akka.enabled=true \
- --alpha.feature.akka.transaction.repository.type=elasticsearch \
- --spring.elasticsearch.rest.uris=http://127.0.0.1:9200 \
+ --alpha.spec.names=saga-akka \
+ --alpha.spec.saga.akka.channel.name=memory \
+ --alpha.spec.saga.akka.repository.name=elasticsearch \
+ --alpha.spec.saga.akka.repository.elasticsearch.uris=http://127.0.0.1:9200 \
--spring.profiles.active=prd
```
@@ -236,16 +237,15 @@
* 启动 Alpha 1
```bash
- java -jar alpha-server-0.6.0-SNAPSHOT-exec.jar \
+ java -jar alpha-server-0.7.0-SNAPSHOT-exec.jar \
--server.port=8090 \
--server.host=127.0.0.1 \
--alpha.server.port=8080 \
- --alpha.feature.akka.enabled=true \
- --spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/saga?useSSL=false \
- --spring.datasource.username=saga \
- --spring.datasource.password=password \
- --spring.kafka.bootstrap-servers=127.0.0.1:9092 \
- --spring.elasticsearch.rest.uris=http://127.0.0.1:9200 \
+ --alpha.spec.names=saga-akka \
+ --alpha.spec.saga.akka.repository.name=elasticsearch \
+ --alpha.spec.saga.akka.repository.elasticsearch.uris=http://127.0.0.1:9200 \
+ --alpha.spec.saga.akka.channel.name=kafka \
+ --alpha.spec.saga.akka.channel.kafka.bootstrap-servers=127.0.0.1:9092 \
--akkaConfig.akka.remote.artery.canonical.port=8070 \
--akkaConfig.akka.cluster.seed-nodes[0]="akka://alpha-cluster@127.0.0.1:8070" \
--akkaConfig.akka-persistence-redis.redis.host=127.0.0.1 \
@@ -256,16 +256,15 @@
* 启动 Alpha 2
```bash
- java -jar alpha-server-0.6.0-SNAPSHOT-exec.jar \
+ java -jar alpha-server-0.7.0-SNAPSHOT-exec.jar \
--server.port=8091 \
--server.host=127.0.0.1 \
--alpha.server.port=8081 \
- --alpha.feature.akka.enabled=true \
- --spring.datasource.url=jdbc:postgresql://127.0.0.1:5432/saga?useSSL=false \
- --spring.datasource.username=saga \
- --spring.datasource.password=password \
- --spring.kafka.bootstrap-servers=127.0.0.1:9092 \
- --spring.elasticsearch.rest.uris=http://127.0.0.1:9200 \
+ --alpha.spec.names=saga-akka \
+ --alpha.spec.saga.akka.repository.name=elasticsearch \
+ --alpha.spec.saga.akka.repository.elasticsearch.uris=http://127.0.0.1:9200 \
+ --alpha.spec.saga.akka.channel.name=kafka \
+ --alpha.spec.saga.akka.channel.kafka.bootstrap-servers=127.0.0.1:9092 \
--akkaConfig.akka.remote.artery.canonical.port=8071 \
--akkaConfig.akka.cluster.seed-nodes[0]="akka://alpha-cluster@127.0.0.1:8070" \
--akkaConfig.akka-persistence-redis.redis.host=127.0.0.1 \
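The `--akkaConfig.*` flags used above map onto nested yaml keys, so the Akka cluster settings for Alpha 2 could instead live in `application.yaml`; a sketch with the same values as the command line (assuming the yaml form simply mirrors the flag names):

```yaml
akkaConfig:
  akka:
    remote:
      artery:
        canonical:
          port: 8071                                # artery port of this node
    cluster:
      seed-nodes:
        - "akka://alpha-cluster@127.0.0.1:8070"     # Alpha 1 acts as the seed node
  akka-persistence-redis:
    redis:
      host: 127.0.0.1
```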
diff --git a/docs/fsm/persistence_zh.md b/docs/spec-saga-akka/persistence_zh.md
similarity index 90%
rename from docs/fsm/persistence_zh.md
rename to docs/spec-saga-akka/persistence_zh.md
index c49eed9..225fb20 100644
--- a/docs/fsm/persistence_zh.md
+++ b/docs/spec-saga-akka/persistence_zh.md
@@ -15,12 +15,12 @@
### Persistence parameters
-| Parameter | Default | Description |
-| ------------------------------------------------------------ | ------ |------------------------------------------|
-| alpha.feature.akka.transaction.repository.type | | Persistence type; currently the only option is elasticsearch. If unset, events are not stored |
-| alpha.feature.akka.transaction.repository.elasticsearch.batchSize | 100 | elasticsearch bulk insert batch size |
-| alpha.feature.akka.transaction.repository.elasticsearch.refreshTime | 5000 | Interval (ms) for periodically flushing to ES |
-| spring.elasticsearch.rest.uris | | ES node address, e.g. http://localhost:9200; separate multiple addresses with commas |
+| Parameter | Default | Description |
+|---------------------------------------------------------------|-------|-------------------------|
+| alpha.spec.saga.akka.repository.name | | Persistence type; currently the only option is elasticsearch. If unset, events are not stored |
+| alpha.spec.saga.akka.repository.elasticsearch.batch-size | 100 | elasticsearch bulk insert batch size |
+| alpha.spec.saga.akka.repository.elasticsearch.refresh-time | 5000 | Interval (ms) for periodically flushing to ES |
+| alpha.spec.saga.akka.repository.elasticsearch.uris | | ES node address, e.g. http://localhost:9200; separate multiple addresses with commas |
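Put together in `application.yaml`, the parameters above correspond to a configuration like the following sketch (defaults shown explicitly; the ES address is a placeholder):

```yaml
alpha:
  spec:
    saga:
      akka:
        repository:
          name: elasticsearch             # unset = events are not persisted
          elasticsearch:
            uris: http://localhost:9200   # comma-separate multiple nodes
            batch-size: 100               # bulk insert batch size (default)
            refresh-time: 5000            # flush interval in ms (default)
```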
### Elasticsearch Index
diff --git a/docs/fsm/plantuml/saga-state-diagram.puml b/docs/spec-saga-akka/plantuml/saga-state-diagram.puml
similarity index 100%
rename from docs/fsm/plantuml/saga-state-diagram.puml
rename to docs/spec-saga-akka/plantuml/saga-state-diagram.puml
diff --git a/docs/user_guide.md b/docs/user_guide.md
index bfca281..f7b2f9b 100644
--- a/docs/user_guide.md
+++ b/docs/user_guide.md
@@ -106,11 +106,19 @@
cluster:
address: alpha-server.servicecomb.io:8080
```
+4. Add the omega.spec.names parameter
-4. Repeat step 2 for the `transferIn` service.
+ ```yaml
+ omega:
+ spec:
+ names: saga
+ ```
+
+5. Repeat step 2 for the `transferIn` service.
-5. Since pack-0.3.0, you can access the [OmegaContext](https://github.com/apache/servicecomb-pack/blob/master/omega/omega-context/src/main/java/org/apache/servicecomb/pack/omega/context/OmegaContext.java) for the globalTxId and localTxId in the @Compensable annotated method or the cancel method.
+6. Since pack-0.3.0, you can access the [OmegaContext](https://github.com/apache/servicecomb-pack/blob/master/omega/omega-context/src/main/java/org/apache/servicecomb/pack/omega/context/OmegaContext.java) for the globalTxId and localTxId in the @Compensable annotated method or the cancel method.
+7. Since pack-0.7.0, you can change the distributed transaction specification through the `alpha.spec.names` parameter; currently supported modes are saga-db (default), tcc-db, and saga-akka
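Switching Alpha to another specification is then a one-line change; a sketch of the yaml form (values taken from the list above):

```yaml
alpha:
  spec:
    names: saga-akka   # alternatives: saga-db (default), tcc-db
```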
#### <a name="explicit-tx-context-passing"></a>Passing transaction context explicitly
In most cases, Omega passes the transaction context for you transparently (see [Inter-Service Communication](design.md#comm) for details). Transaction context passing is implemented by injecting transaction context information on the sender side and extracting it on the receiver side. Below is an example illustrating this process:
@@ -286,8 +294,15 @@
cluster:
address: alpha-server.servicecomb.io:8080
```
+ 4. Add the omega.spec.names parameter
- 4. Repeat step 2 for the `transferIn` service.
+ ```yaml
+ omega:
+ spec:
+ names: tcc
+ ```
+
+ 5. Repeat step 2 for the `transferIn` service.
#### Passing transaction context explicitly
@@ -323,6 +338,9 @@
alpha:
cluster:
address: {alpha.cluster.addresses}
+ omega:
+ spec:
+ names: saga
```
Then you can start your micro-services and access all saga events via http://${alpha-server:port}/saga/events.
@@ -821,9 +839,9 @@
Alpha enables JNI transport support with `alpha.feature.nativetransport=true`. These JNI transports add features specific to a particular platform, generate less garbage, and generally improve performance compared to the NIO-based transport.
-## Experiment
+## Pack Distributed Transaction Specifications
-[State Machine Mode](fsm/fsm_manual.md)
+[Saga-Akka Specifications](spec-saga-akka/fsm_manual.md)
## Upgrade Guide
diff --git a/docs/user_guide_zh.md b/docs/user_guide_zh.md
index a8265df..61e8ae0 100644
--- a/docs/user_guide_zh.md
+++ b/docs/user_guide_zh.md
@@ -106,10 +106,19 @@
cluster:
address: alpha-server.servicecomb.io:8080
```
+4. Add the omega.spec.names parameter
-4. Repeat step 2 for the transferIn service.
+ ```yaml
+ omega:
+ spec:
+ names: saga
+ ```
+
+5. Repeat step 2 for the transferIn service.
-5. Since pack-0.3.0, you can access the [OmegaContext](https://github.com/apache/servicecomb-pack/blob/master/omega/omega-context/src/main/java/org/apache/servicecomb/pack/omega/context/OmegaContext.java) for the globalTxId and localTxId in the service method or the cancel method.
+6. Since pack-0.3.0, you can access the [OmegaContext](https://github.com/apache/servicecomb-pack/blob/master/omega/omega-context/src/main/java/org/apache/servicecomb/pack/omega/context/OmegaContext.java) for the globalTxId and localTxId in the service method or the cancel method.
+
+7. Since pack-0.7.0, you can change the distributed transaction specification through the alpha.spec.names parameter; currently supported modes are saga-db (default), tcc-db, and saga-akka
#### <a name="explicit-tx-context-passing"></a>Passing transaction context explicitly
@@ -285,9 +294,17 @@
address: alpha-server.servicecomb.io:8080
```
-4. Repeat step 2 for the transferIn service.
+4. Add the omega.spec.names parameter
-5. Since pack-0.3.0, you can access the [OmegaContext](https://github.com/apache/servicecomb-pack/blob/master/omega/omega-context/src/main/java/org/apache/servicecomb/pack/omega/context/OmegaContext.java) for the globalTxId and localTxId in the service method or the cancel method.
+ ```yaml
+ omega:
+ spec:
+ names: tcc
+ ```
+
+5. Repeat step 2 for the transferIn service.
+
+6. Since pack-0.3.0, you can access the [OmegaContext](https://github.com/apache/servicecomb-pack/blob/master/omega/omega-context/src/main/java/org/apache/servicecomb/pack/omega/context/OmegaContext.java) for the globalTxId and localTxId in the service method or the cancel method.
#### Passing transaction context explicitly
@@ -326,6 +343,9 @@
alpha:
cluster:
address: {alpha.cluster.addresses}
+ omega:
+ spec:
+ names: saga
```
Then you can start the related micro-services and access all saga events via http://${alpha-server:port}/saga/events.
@@ -808,9 +828,9 @@
[src-TransactionClientHttpRequestInterceptor]: ../omega/omega-transport/omega-transport-resttemplate/src/main/java/org/apache/servicecomb/pack/omega/transport/resttemplate/TransactionClientHttpRequestInterceptor.java
[src-TransactionHandlerInterceptor]: ../omega/omega-transport/omega-transport-resttemplate/src/main/java/org/apache/servicecomb/pack/omega/transport/resttemplate/TransactionHandlerInterceptor.java
-## Experiment
+## Pack Distributed Transaction Specifications
-[State Machine Mode](fsm/fsm_manual_zh.md)
+[Saga-Akka Distributed State Machine Specification](spec-saga-akka/fsm_manual_zh.md)
## 升级指南