This example demonstrates how to deploy a stateful function application written in Python to Kubernetes.
This example creates a stateful function application that consumes `LoginEvent`s from a `logins` Kafka topic and produces a seen count per user into a `seen` Kafka topic.
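Conceptually, the per-user seen count reduces to the following logic. This is a plain-Python sketch, not the actual stateful function: in the real application this state is kept as persisted function state, and the names `seen_counts` and `on_login_event` are illustrative.

```python
from collections import defaultdict

# Illustrative stand-in for the function's persisted per-user state.
seen_counts = defaultdict(int)

def on_login_event(user_id: str) -> int:
    """Handle one login event and return the updated seen count for that user."""
    seen_counts[user_id] += 1
    return seen_counts[user_id]

for user in ["alice", "bob", "alice"]:
    on_login_event(user)

print(seen_counts["alice"])  # 2
```

In the deployed application, each update to the count is emitted as a `SeenCount` message to the `seen` topic.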
The main example components are:

This example consumes `LoginEvent`s from the `logins` topic, and produces `SeenCount`s to the `seen` topic.
```
./kafka-topics.sh --create --topic logins --zookeeper <zookeeper address>:2181 --partitions 1 --replication-factor 1
./kafka-topics.sh --create --topic seen --zookeeper <zookeeper address>:2181 --partitions 1 --replication-factor 1
```
Make sure that the ingress and egress sections of your `module.yaml` point to your Kafka cluster.
```yaml
ingresses:
  - ingress:
      ...
      spec:
        address: kafka-service:9092
        ...
egresses:
  - egress:
      ...
      spec:
        address: kafka-service:9092
```
This example creates two different Docker images: one for the Python remote worker (`k8s-demo-python-worker`) and one for the StateFun cluster (`k8s-demo-statefun`).

If you use a remote Docker registry (e.g. `gcr.io/<project-name>`), make sure to update the relevant `image:` sections in `resources/values.yaml`.
Modify `resources/values.yaml` and set the value of `checkpoint.dir` to a filesystem / object store path. For example:

```yaml
checkpoint:
  dir: gcs://my-project/my-bucket
```
Assuming all the prerequisites were completed, run:

```
build-example.sh
```
This should create the Docker images and generate a `k8s-demo.yaml` file.

```
kubectl create -f k8s-demo.yaml
kubectl create -f python-worker-deployment.yaml
kubectl create -f python-worker-service.yaml
```
Run:

```
pip3 install kafka-python
python3 event_generator.py --address <kafka address> --events 1000
```

This will generate 1,000 login events into the `logins` topic.
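For a sense of what such a generator does, here is a minimal sketch. This is not the actual `event_generator.py`: the JSON event shape, the user-id pool, and the helper names are assumptions made for illustration.

```python
import json
from random import Random

def make_login_event(user_id: str) -> bytes:
    # Assumed JSON encoding; the real script's wire format may differ.
    return json.dumps({"user": user_id}).encode("utf-8")

def generate_events(n: int, seed: int = 42) -> list:
    """Build n serialized login events for a small pool of example users."""
    rng = Random(seed)
    users = [f"user-{i}" for i in range(10)]
    return [make_login_event(rng.choice(users)) for _ in range(n)]

# Sending requires a reachable broker (kafka-python's KafkaProducer):
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers="<kafka address>")
# for event in generate_events(1000):
#     producer.send("logins", event)
```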