seata-k8s is a Kubernetes operator for deploying and managing Apache Seata distributed transaction servers. It provides a streamlined way to deploy Seata Server clusters on Kubernetes with automatic scaling, persistence management, and operational simplicity.
To deploy Seata Server using the Operator method, follow these steps:
```bash
git clone https://github.com/apache/incubator-seata-k8s.git
cd incubator-seata-k8s
```
Deploy the controller, CRD, RBAC, and other required resources:
```bash
make deploy
```
Verify the deployment:
```bash
kubectl get deployment -n seata-k8s-controller-manager
kubectl get pods -n seata-k8s-controller-manager
```
Create a SeataServer resource. Here's an example based on seata-server-cluster.yaml:
```yaml
apiVersion: operator.seata.apache.org/v1alpha1
kind: SeataServer
metadata:
  name: seata-server
  namespace: default
spec:
  serviceName: seata-server-cluster
  replicas: 3
  image: apache/seata-server:latest
  persistence:
    volumeReclaimPolicy: Retain
    spec:
      resources:
        requests:
          storage: 5Gi
```
Apply it to your cluster:
```bash
kubectl apply -f seata-server.yaml
```
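To check progress, you can look at the custom resource and the Pods the operator creates (a quick sketch: the plural resource name `seataservers` follows the usual CRD convention, and the Pod names assume this example spec):

```bash
# Inspect the SeataServer custom resource
kubectl get seataservers -n default

# Watch the Pods come up; seata-server-0 .. seata-server-2 should become Ready
kubectl get pods -n default -w
```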
If everything is working correctly, the operator will create a StatefulSet with 3 Seata Server Pods and a headless Service named `seata-server-cluster`.

Access the Seata Server cluster within your Kubernetes network:
```
seata-server-0.seata-server-cluster.default.svc
seata-server-1.seata-server-cluster.default.svc
seata-server-2.seata-server-cluster.default.svc
```
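To confirm those addresses resolve, you can query them from a throwaway Pod (a quick sketch assuming the default `cluster.local` cluster domain):

```bash
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.36 -- \
  nslookup seata-server-0.seata-server-cluster.default.svc
```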
For complete CRD definitions, see seataservers_crd.yaml.
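Once the CRD is installed, the same schema can also be browsed from the cluster (the plural name `seataservers` follows the usual CRD convention):

```bash
kubectl explain seataservers.spec
```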
| Property | Description | Default | Example |
|---|---|---|---|
| `serviceName` | Name of the Headless Service | - | `seata-server-cluster` |
| `replicas` | Number of Seata Server replicas | 1 | 3 |
| `image` | Seata Server container image | - | `apache/seata-server:latest` |
| `ports.consolePort` | Console port | 7091 | 7091 |
| `ports.servicePort` | Service port | 8091 | 8091 |
| `ports.raftPort` | Raft consensus port | 9091 | 9091 |
| `resources` | Container resource requests/limits | - | See example below |
| `persistence.volumeReclaimPolicy` | Volume reclaim policy | Retain | `Retain` or `Delete` |
| `persistence.spec.resources.requests.storage` | Persistent volume size | - | 5Gi |
| `env` | Environment variables | - | See example below |
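For reference, here is a minimal sketch of the `resources` field mentioned above; it follows the standard Kubernetes `ResourceRequirements` layout, and the values are only illustrative:

```yaml
spec:
  resources:
    requests:
      cpu: 500m
      memory: 1Gi
    limits:
      cpu: "1"
      memory: 2Gi
```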
Configure Seata Server settings using environment variables and Kubernetes Secrets:
```yaml
apiVersion: operator.seata.apache.org/v1alpha1
kind: SeataServer
metadata:
  name: seata-server
  namespace: default
spec:
  image: apache/seata-server:latest
  replicas: 1
  persistence:
    spec:
      resources:
        requests:
          storage: 5Gi
  env:
    - name: console.user.username
      value: seata
    - name: console.user.password
      valueFrom:
        secretKeyRef:
          name: seata-credentials
          key: password
---
apiVersion: v1
kind: Secret
metadata:
  name: seata-credentials
  namespace: default
type: Opaque
stringData:
  password: your-secure-password
```
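To sanity-check the credentials, you can port-forward the console port (7091 by default) and log in with the username and password configured above; the Pod name below assumes the single-replica example:

```bash
kubectl port-forward pod/seata-server-0 7091:7091
# then open http://localhost:7091 and sign in as seata / your-secure-password
```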
To debug and develop this operator locally, we recommend using Minikube or a similar local Kubernetes environment.
Modify the code and rebuild the controller image:
```bash
# Start minikube and set docker environment
minikube start
eval $(minikube docker-env)

# Build and deploy
make docker-build deploy

# Verify deployment
kubectl get deployment -n seata-k8s-controller-manager
```
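If the controller misbehaves after a rebuild, tailing its logs is usually the quickest feedback loop (substitute the deployment name reported by `kubectl get deployment` above):

```bash
kubectl logs -n seata-k8s-controller-manager deployment/<controller-deployment-name> -f
```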
Use Telepresence to debug locally without building container images.
Prerequisites:
- A Kubernetes cluster reachable via kubectl
- The Telepresence CLI installed locally
- A local Go toolchain (to run the controller with `go run`)
Steps:
```bash
telepresence connect
telepresence status  # Verify connection
```
```bash
make manifests generate fmt vet
go run .
```
Now your local development environment has access to the Kubernetes cluster's DNS and services.
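For example, with Telepresence connected you can reach in-cluster endpoints by their Service DNS names directly from your machine (the address below assumes the SeataServer example deployed earlier and the default `cluster.local` domain):

```bash
curl http://seata-server-0.seata-server-cluster.default.svc.cluster.local:7091
```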
This method deploys Seata Server directly using Kubernetes manifests without the operator. Note that Seata Docker images currently require link-mode for container communication.
Deploy Seata server, Nacos, and MySQL:
```bash
kubectl apply -f deploy/seata-deploy.yaml
kubectl apply -f deploy/seata-service.yaml
```
```bash
kubectl get service  # Note the NodePort IPs and ports for Seata and Nacos
```
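NodePort services are reached at `<node-ip>:<node-port>`; the node IP addresses can be listed with:

```bash
kubectl get nodes -o wide   # see the INTERNAL-IP / EXTERNAL-IP columns
```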
Update example/example-deploy.yaml with the NodePort IP addresses obtained above.
```bash
# Connect to MySQL and import Seata table schema
# Replace CLUSTER_IP with your MySQL service IP
mysql -h <CLUSTER_IP> -u root -p < path/to/seata-db-schema.sql
```
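You can confirm the import by listing the tables; the database name is a placeholder here, and the exact table set depends on the schema file (Seata's db storage mode typically creates `global_table`, `branch_table`, and `lock_table`):

```bash
mysql -h <CLUSTER_IP> -u root -p -e 'SHOW TABLES;' <database-name>
```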
Deploy the sample microservices:
```bash
# Deploy account and storage services
kubectl apply -f example/example-deploy.yaml
kubectl apply -f example/example-service.yaml

# Deploy order service
kubectl apply -f example/order-deploy.yaml
kubectl apply -f example/order-service.yaml

# Deploy business service
kubectl apply -f example/business-deploy.yaml
kubectl apply -f example/business-service.yaml
```
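Before checking Nacos, wait until all of the example Pods are Running:

```bash
kubectl get pods -w
```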
Open Nacos console to verify service registration:
http://localhost:8848/nacos/
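If the console is not already reachable on localhost (for example, when Nacos is only exposed as a NodePort), you can forward it first; the Service name here is an assumption, so substitute the one shown by `kubectl get service`:

```bash
kubectl port-forward service/nacos 8848:8848
```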
Check that all services (account, storage, order, and business) are registered.
Test the distributed transaction scenarios using the following curl commands:
curl -H "Content-Type: application/json" \ -X POST \ --data '{"id":1,"userId":"1","amount":100}' \ http://<CLUSTER_IP>:8102/account/dec_account
curl -H "Content-Type: application/json" \ -X POST \ --data '{"commodityCode":"C201901140001","count":100}' \ http://<CLUSTER_IP>:8100/storage/dec_storage
curl -H "Content-Type: application/json" \ -X POST \ --data '{"userId":"1","commodityCode":"C201901140001","orderCount":10,"orderAmount":100}' \ http://<CLUSTER_IP>:8101/order/create_order
curl -H "Content-Type: application/json" \ -X POST \ --data '{"userId":"1","commodityCode":"C201901140001","count":10,"amount":100}' \ http://<CLUSTER_IP>:8104/business/dubbo/buy
Replace <CLUSTER_IP> with the actual NodePort IP address of your service.