---
layout: section
title: "Portability Framework Roadmap"
permalink: /roadmap/portability/
section_menu: section-menu/roadmap.html
redirect_from: /contribute/portability/
---

# Portability Framework Roadmap

## Overview

Interoperability between SDKs and runners is a key aspect of Apache Beam. So far, however, the reality is that most runners support the Java SDK only, because each SDK-runner combination requires non-trivial work on both sides. All runners are also currently written in Java, which makes support of non-Java SDKs far more expensive. The portability framework aims to rectify this situation and provide full interoperability across the Beam ecosystem.

The portability framework introduces well-defined, language-neutral data structures and protocols between the SDK and runner. This interop layer -- called the portability API -- ensures that SDKs and runners can work with each other uniformly, reducing the interoperability burden for both SDKs and runners to a constant effort. It notably ensures that new SDKs automatically work with existing runners and vice versa. The framework introduces a new runner, the Universal Local Runner (ULR), as a practical reference implementation that complements the direct runners. Finally, it enables cross-language pipelines (sharing I/O or transformations across SDKs) and user-customized execution environments ("custom containers").

The portability API consists of a set of smaller contracts that isolate SDKs and runners for job submission, management and execution. These contracts use protobufs and gRPC for broad language support.

- **Job submission and management**: The Runner API defines a language-neutral pipeline representation, with transformations specifying the execution environment as a Docker container image. The latter both allows the execution side to set up the right environment and opens the door for custom containers and cross-environment pipelines. The Job API allows pipeline execution and configuration to be managed uniformly (a submission sketch follows this list).

- **Job execution**: The SDK harness is an SDK-provided program responsible for executing user code, and it runs separately from the runner. The Fn API defines an execution-time binary contract between the SDK harness and the runner that describes how execution tasks are managed and how data is transferred. In addition, the runner needs to handle progress and monitoring in an efficient and language-neutral way. SDK harness initialization relies on the Provision and Artifact APIs for obtaining staged files, pipeline options, and environment information. Docker provides isolation between the runner and SDK/user environments, to the benefit of both, as defined by the container contract. The containerization of the SDK gives it (and the user, unless the SDK is closed) full control over its own environment without risk of dependency conflicts. The runner has significant freedom regarding how it manages the SDK harness containers.

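To make the submission contract concrete, here is a minimal sketch of submitting a pipeline through the Job API from the Python SDK. It assumes a portable job server is already listening on `localhost:8099`; the endpoint and environment settings are illustrative, not fixed values.

```python
# Minimal sketch: submit a pipeline through the Job API via the
# PortableRunner. Assumes a portable job server (e.g. one started for
# Flink or Spark) is listening on localhost:8099 -- an example endpoint.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    "--runner=PortableRunner",        # serialize the pipeline to the Runner API protos
    "--job_endpoint=localhost:8099",  # gRPC endpoint of the Job API service
    "--environment_type=LOOPBACK",    # run the SDK harness in-process (local testing)
])

with beam.Pipeline(options=options) as pipeline:
    (pipeline
     | beam.Create(["to be", "or not", "to be"])
     | beam.FlatMap(str.split)
     | beam.combiners.Count.PerElement()
     | beam.Map(print))
```

When submitted this way, the SDK serializes the pipeline into the language-neutral Runner API representation and hands it to the job server over gRPC; the runner then starts an SDK harness per the Fn API contract to execute the user code.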
The goal is that all (non-direct) runners and SDKs eventually support the portability API, perhaps exclusively.

If you are interested in digging into the designs, you can find them on the Beam developers' wiki.

## Milestones

The portability framework is a substantial effort that touches every Beam component. In addition to the sheer magnitude, a major challenge is engineering an interop layer that does not significantly compromise performance due to the additional serialization overhead of a language-neutral protocol.

The proposed project phases are roughly as follows and are not strictly sequential, as various components will likely move at different speeds. Additionally, there have been (and continue to be) supporting refactorings that are not always tracked as part of the portability effort. Work already done is not tracked here either.

- **P1 [MVP]**: Implement the fundamental plumbing for portable SDKs and runners for batch and streaming, including containers and the ULR [BEAM-2899]. Each SDK and runner should use the portability framework at least to the extent that wordcount [BEAM-2896] and windowed wordcount [BEAM-2941] run portably.

- **P2 [Feature complete]**: Design and implement portability support for remaining execution-side features, so that any pipeline from any SDK can run portably on any runner. These features include side inputs [BEAM-2863], user state [BEAM-2862], user timers [BEAM-2925], Splittable DoFn [BEAM-2896], and more. Each SDK and runner should use the portability framework at least to the extent that the mobile gaming examples [BEAM-2940] run portably.

- **P3 [Performance]**: Measure and tune the performance of portable pipelines using benchmarks such as Nexmark. Features such as progress reporting [BEAM-2940], combiner lifting [BEAM-2937], and fusion are expected to be needed.

- **P4 [Cross language]**: Design and implement cross-language pipeline support, including how the ecosystem of shared transforms should work.

## Issues

The portability effort touches every component, so the "portability" label is used to identify all portability-related issues. Pure design or proto definitions should use the "beam-model" component. A common pattern for new portability features is that the overall feature is in "beam-model" with subtasks for each SDK and runner in their respective components.

JIRA: [query](https://issues.apache.org/jira/issues/?jql=project%20%3D%20BEAM%20AND%20resolution%20%3D%20Unresolved%20AND%20labels%20%3D%20portability%20order%20by%20priority%20DESC%2Cupdated%20DESC)

## Status

The MVP is done, and feature completeness is nearly achieved (Splittable DoFn and timers are still missing) for the SDKs, the Python ULR, and the shared Java runners library. Currently, the Flink and Spark runners support portable pipeline execution. See the Portability support table for details.

Prerequisites: Docker, Python, Java 8

### Running Python wordcount on Flink

The Beam Flink runner can run Python pipelines in batch and streaming modes. Please see the [Flink Runner page]({{ site.baseurl }}/documentation/runners/flink/) for more information on how to run portable pipelines on top of Flink.
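As an illustration, the following hedged sketch runs the bundled Python wordcount example against a Flink job server; the endpoint and file paths are assumptions, and the job server must already be running (see the Flink Runner page for how to start it).

```python
# Sketch: run the bundled Python wordcount example portably on Flink.
# Assumes a Flink job server is already listening on localhost:8099;
# the endpoint and the input/output paths below are examples only.
from apache_beam.examples import wordcount

wordcount.run([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",
    "--environment_type=LOOPBACK",   # in-process harness, for local testing
    "--input=/tmp/kinglear.txt",
    "--output=/tmp/counts",
])
```

The same submission pattern applies to the Spark runner described next, pointed at a Spark job server instead.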

### Running Python wordcount on Spark

The Beam Spark runner can run Python pipelines in batch mode. Please see the [Spark Runner page]({{ site.baseurl }}/documentation/runners/spark/) for more information on how to run portable pipelines on top of Spark.

Python streaming mode is not yet supported on Spark.

## SDK Harness Configuration

The Beam Python SDK allows configuration of the SDK harness to accommodate varying cluster setups; a configuration sketch follows the option list below.

- `environment_type` determines where user code will be executed.
  - `LOOPBACK`: User code is executed within the same process that submitted the pipeline. This option is useful for local testing. However, it is not suitable for a production environment, as it requires a connection between the original Python process and the worker nodes, and it performs work on the machine the job originated from rather than on the worker nodes.
  - `PROCESS`: User code is executed by processes that are automatically started by the runner on each worker node.
  - `DOCKER` (default): User code is executed within a container started on each worker node. This requires Docker to be installed on the worker nodes. For more information, see [here]({{ site.baseurl }}/documentation/runtime/environments/).
- `environment_config` configures the environment depending on the value of `environment_type`.
  - When `environment_type=DOCKER`: URL for the Docker container image.
  - When `environment_type=PROCESS`: JSON of the form `{"os": "<OS>", "arch": "<ARCHITECTURE>", "command": "<process to execute>", "env": {"<ENV_VAR>": "<ENV_VAL>"}}`. All fields in the JSON are optional except `command`.
- `sdk_worker_parallelism` sets the number of SDK workers that will run on each worker node.
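To tie these options together, here is a hedged configuration sketch in Python; the container image tag, job endpoint, and boot command path are assumptions to be replaced with values matching your cluster.

```python
# Sketch: configuring the SDK harness through pipeline options.
# Image tag, endpoint, and command path are illustrative assumptions.
from apache_beam.options.pipeline_options import PipelineOptions

# DOCKER (default): point the runner at an SDK container image.
docker_options = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",
    "--environment_type=DOCKER",
    "--environment_config=apache/beam_python3.7_sdk:2.19.0",  # hypothetical tag
    "--sdk_worker_parallelism=2",  # two SDK workers per worker node
])

# PROCESS: have the runner start harness processes directly.
# Only "command" is required in the JSON; other fields are optional.
process_options = PipelineOptions([
    "--runner=PortableRunner",
    "--job_endpoint=localhost:8099",
    "--environment_type=PROCESS",
    '--environment_config={"command": "/opt/apache/beam/boot"}',  # example path
])
```

Either options object can then be passed to `beam.Pipeline(options=...)` as in the submission sketch earlier on this page.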