Stateful Functions is an API that simplifies the building of distributed stateful applications with a runtime built for serverless architectures. It brings together the benefits of stateful stream processing - the processing of large datasets with low latency and bounded resource constraints - along with a runtime for modeling stateful entities that supports location transparency, concurrency, scaling, and resiliency.

It is designed to work with modern architectures, like cloud-native deployments and popular event-driven FaaS platforms like AWS Lambda and Knative, and to provide out-of-the-box consistent state and messaging while preserving the serverless experience and elasticity of these platforms.

Stateful Functions is developed under the umbrella of Apache Flink.

This README is meant as a brief walkthrough on the core concepts and how to set things up to get yourself started with Stateful Functions.

For fully detailed documentation, please visit the official docs.

For code examples, please take a look at the examples.

Core Concepts


A Stateful Functions application consists of the following primitives: stateful functions, ingresses, routers and egresses.

Stateful functions

  • A stateful function is a small piece of logic/code that is invoked through a message. Each stateful function exists as a uniquely invokable virtual instance of a function type. Each instance is addressed by its type, as well as a unique ID (a string) within its type.

  • Stateful functions may be invoked from ingresses or any other stateful function (including itself). The caller simply needs to know the address of the target function.

  • Function instances are virtual, because they are not all active in memory at the same time. At any point in time, only a small set of functions and their state exists as actual objects. When a virtual instance receives a message, one of the objects is configured and loaded with the state of that virtual instance and then processes the message. Similar to virtual memory, the state of many functions might be “swapped out” at any point in time.

  • Each virtual instance of a function has its own state, which can be accessed in local variables. That state is private and local to that instance.

If you know Apache Flink’s DataStream API, you can think of stateful functions a bit like a lightweight KeyedProcessFunction. The function type is the process function transformation, while the ID is the key. The difference is that functions are not assembled in a Directed Acyclic Graph (DAG) that defines the flow of data (the streaming topology), but rather send events arbitrarily to all other functions using addresses.
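To make the addressing model concrete, here is a stdlib-only sketch (not the StateFun SDK; all names are illustrative). Each virtual instance is identified by its function type plus an ID, and owns private state that is looked up when a message arrives, similar to how the runtime "swaps in" instance state on demand:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of virtual function instances: state is keyed by
// (function type, id), and only instances that have received a
// message exist as actual entries.
public class VirtualInstances {
    private final Map<String, Integer> state = new HashMap<>();

    // The address of a virtual instance: its type plus its ID within that type.
    private static String address(String type, String id) {
        return type + "/" + id;
    }

    // Invoking an instance loads its private state, applies the
    // function logic (here: summing), and writes the state back.
    public int invoke(String type, String id, int message) {
        String key = address(type, id);
        int updated = state.getOrDefault(key, 0) + message;
        state.put(key, updated);
        return updated;
    }
}
```

Note how two IDs of the same function type hold completely independent state, which is the property that makes instances safely concurrent.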

Ingresses and Egresses

  • Ingresses are the way that events initially arrive in a Stateful Functions application. Ingresses can be message queues, logs or HTTP servers — anything that produces an event to be handled by the application.

  • Routers are attached to ingresses to determine which function instance should handle an event initially.

  • Egresses are a way to send events out from the application in a standardized way. Egresses are optional; it is also possible that no events leave the application and functions sink events or directly make calls to other applications.
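The router's job can be sketched in a few lines (stdlib-only, not the StateFun SDK; the event format and names are made up for illustration): it inspects an ingress event and decides which function instance, i.e. which (type, ID) address, should handle it.

```java
// Toy router sketch: turn a raw ingress event into a routed envelope
// addressed to a specific function instance.
public class UserRouter {
    public static final class Envelope {
        public final String functionType;
        public final String id;
        public final String payload;

        Envelope(String functionType, String id, String payload) {
            this.functionType = functionType;
            this.id = id;
            this.payload = payload;
        }
    }

    // Route every ingress event of the form "user:payload" to the
    // "greeter" function instance identified by that user.
    public static Envelope route(String event) {
        String[] parts = event.split(":", 2);
        return new Envelope("greeter", parts[0], parts[1]);
    }
}
```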


Modules

A module is the entry point for adding the core building block primitives to a Stateful Functions application, i.e. ingresses, egresses, routers and stateful functions.

A single application may be a combination of multiple modules, each contributing a part of the whole application. This allows different parts of the application to be contributed by different modules; for example, one module may provide ingresses and egresses, while other modules may individually contribute specific parts of the business logic as stateful functions. This facilitates working in independent teams, but still deploying into the same larger application.
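The composition idea can be sketched as follows (stdlib-only, not the StateFun SDK; the interface and bindings are illustrative): each module contributes its own bindings, and the application is the union of what all modules register.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of module composition: independent modules each contribute
// building blocks (ingresses, functions, ...) to one shared application.
public class Modules {
    public interface Module {
        void configure(List<String> bindings);
    }

    // One module contributes I/O, another contributes business logic.
    public static final Module io = bindings -> bindings.add("ingress:names");
    public static final Module logic = bindings -> bindings.add("function:greeter");

    // The application is simply the union of all module contributions.
    public static List<String> assemble(Module... modules) {
        List<String> bindings = new ArrayList<>();
        for (Module m : modules) {
            m.configure(bindings);
        }
        return bindings;
    }
}
```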


Runtime

The Stateful Functions runtime is designed to provide a set of properties similar to what characterizes serverless functions, but applied to stateful problems.

The runtime is built on Apache Flink®, with the following design principles:

  • Logical Compute/State Co-location: Messaging, state access/updates and function invocations are managed tightly together. This ensures a high-level of consistency out-of-the-box.

  • Physical Compute/State Separation: Functions can be executed remotely, with message and state access provided as part of the invocation request. This way, functions can be managed like stateless processes and support rapid scaling, rolling upgrades and other common operational patterns.

  • Language Independence: Function invocations use a simple HTTP/gRPC-based protocol so that functions can be easily implemented in various languages.

This makes it possible to execute functions on a Kubernetes deployment, a FaaS platform or behind a (micro)service, while providing consistent state and lightweight messaging between functions.

Getting Started

Follow the steps below to get started right away with Stateful Functions.

This guide will walk you through setting up a project to start developing and testing your own Stateful Functions (Java) application, and running an existing example. If you prefer to get started with Python, have a look at the StateFun Python SDK and the Python Greeter example.

Project Setup


Prerequisites:

  • Docker

  • Maven 3.5.x or above

  • Java 8 or above

You can quickly get started building Stateful Functions applications using the provided quickstart Maven archetype:

mvn archetype:generate \
  -DarchetypeGroupId=org.apache.flink \
  -DarchetypeArtifactId=statefun-quickstart

This allows you to name your newly created project. It will interactively ask you for the GroupId, ArtifactId and package name. There will be a new directory with the same name as your ArtifactId.

We recommend you import this project into your IDE to develop and test it. IntelliJ IDEA supports Maven projects out of the box. If you use Eclipse, the m2e plugin allows you to import Maven projects. Some Eclipse bundles include that plugin by default, others require you to install it manually.

Building the Project

If you want to build/package your project, go to your project directory and run the mvn clean package command. You will find a JAR file that contains your application, plus any libraries that you may have added as dependencies to the application: target/<artifact-id>-<version>.jar.

Running from the IDE

To test out your application, you can directly run it in the IDE without any further packaging or deployments.

Please see the Harness example on how to do that.

Running a full example

As a simple demonstration, we will be going through the steps to run the Greeter example.

Before anything else, make sure that you have locally built the project as well as the base Stateful Functions Docker image. Then, follow the next steps to run the example:

cd statefun-examples/statefun-greeter-example
docker-compose build
docker-compose up

This example contains a very basic stateful function with a Kafka ingress and a Kafka egress.
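As a rough, stdlib-only sketch of what the example's function does (this is not the actual example code, whose messages and types live in statefun-examples): for each name arriving on the names topic, it keeps a per-name counter in function state and emits a greeting to the greetings topic.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified greeter sketch: per-name state counts how many times a
// name has been seen; the returned string stands in for the egress message.
public class GreeterSketch {
    private final Map<String, Integer> seenCount = new HashMap<>();

    public String greet(String name) {
        int count = seenCount.merge(name, 1, Integer::sum);
        return "Hello " + name + "! You have been greeted " + count + " time(s).";
    }
}
```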

To see the example in action, send some messages to the topic names, and see what comes out of the topic greetings:

docker-compose exec kafka-broker kafka-console-producer.sh \
     --broker-list localhost:9092 \
     --topic names
docker-compose exec kafka-broker kafka-console-consumer.sh \
     --bootstrap-server localhost:9092 \
     --isolation-level read_committed \
     --from-beginning \
     --topic greetings

Deploying Applications

Stateful Functions applications can be packaged as either standalone applications or Flink jobs that can be submitted to a Flink cluster.

Deploying with a Docker image

Below is an example Dockerfile for building a Stateful Functions image with an embedded module (Java) for an application called statefun-example.

FROM flink-statefun[:version-tag]

RUN mkdir -p /opt/statefun/modules/statefun-example

COPY target/statefun-example*jar /opt/statefun/modules/statefun-example/

Deploying as a Flink job

If you prefer to package your Stateful Functions application as a Flink job to submit to an existing Flink cluster, simply include statefun-flink-distribution as a dependency of your application.


It includes all the runtime dependencies and configures the application's main entry-point. You do not need to take any action beyond adding the dependency to your POM file.

${FLINK_DIR}/bin/flink run ./statefun-example.jar


Contributing

There are multiple ways to enhance the Stateful Functions API for different types of applications; the runtime and operations will also evolve with the developments in Apache Flink.

You can learn more about how to contribute in the Apache Flink website. For code contributions, please read carefully the Contributing Code section and check the Stateful Functions component in Jira for an overview of ongoing community work.


License

The code in this repository is licensed under the Apache Software License, Version 2.0.