
RunInference Benchmarks

This module contains benchmarks that test the performance of the RunInference transform when running inference with common models and frameworks. Each benchmark is explained in detail below. Beam's performance over time can be viewed on the Python ML RunInference benchmarks dashboard: http://s.apache.org/beam-community-metrics/d/ZpS8Uf44z/python-ml-runinference-benchmarks?orgId=1

Pytorch RunInference Image Classification 50K

The Pytorch RunInference Image Classification 50K benchmark runs an example image classification pipeline using several ResNet image classification models (the benchmarks on Beam's dashboard display resnet101 and resnet152) against 50,000 example images from the Open Images dataset. The benchmarks produce the following metrics:

  • Mean Inference Requested Batch Size - the average batch size that RunInference groups the images into for batch prediction
  • Mean Inference Batch Latency - the average amount of time it takes to perform inference on a given batch of images
  • Mean Load Model Latency - the average amount of time it takes to load a model. This is done once per DoFn instance on worker startup, so the cost is amortized across the pipeline.

These metrics are published to InfluxDB and BigQuery.
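For orientation, here is a minimal sketch of the kind of pipeline this benchmark runs. The gs:// paths, the preprocessing recipe, and the handler configuration are illustrative assumptions; the real pipeline lives in pytorch_image_classification_benchmarks.py.

```python
# Hedged sketch of a PyTorch image classification RunInference pipeline.
# All gs:// paths are placeholders; preprocessing is assumed to be the
# standard ImageNet recipe, which may differ from the benchmark's.
import io

import apache_beam as beam
import torch
from apache_beam.io.filesystems import FileSystems
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.pytorch_inference import PytorchModelHandlerTensor
from PIL import Image
from torchvision import models, transforms


def preprocess(data: bytes) -> torch.Tensor:
  transform = transforms.Compose([
      transforms.Resize(256),
      transforms.CenterCrop(224),
      transforms.ToTensor(),
  ])
  return transform(Image.open(io.BytesIO(data)).convert('RGB'))


model_handler = PytorchModelHandlerTensor(
    state_dict_path='gs://my-bucket/resnet101.pth',  # placeholder
    model_class=models.resnet101,
    model_params={'num_classes': 1000})

with beam.Pipeline() as p:
  _ = (
      p
      | 'ReadFileNames' >> beam.io.ReadFromText('gs://my-bucket/images.txt')  # placeholder
      | 'ReadBytes' >> beam.Map(lambda path: FileSystems.open(path).read())
      | 'Preprocess' >> beam.Map(preprocess)
      | 'RunInference' >> RunInference(model_handler))
```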

Pytorch Image Classification Tests

  • Pytorch Image Classification with Resnet 101.

    • machine_type: n1-standard-2
    • num_workers: 75
    • autoscaling_algorithm: NONE
    • disk_size_gb: 50
  • Pytorch Image Classification with Resnet 152.

    • machine_type: n1-standard-2
    • num_workers: 75
    • autoscaling_algorithm: NONE
    • disk_size_gb: 50
  • Pytorch Imagenet Classification with Resnet 152 on a Tesla T4 GPU.

    • machine_type:
      • CPU: n1-standard-2
      • GPU: NVIDIA Tesla T4
    • num_workers: 75
    • autoscaling_algorithm: NONE
    • disk_size_gb: 50
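
The settings above map onto standard Beam/Dataflow pipeline options. Below is a minimal sketch of one CPU configuration, with project, region, and bucket names as placeholders; the GPU service option shown in the comment is an assumption about how the Tesla T4 variant attaches its accelerator.

```python
# Hedged sketch: one of the CPU test configurations expressed as standard
# Dataflow pipeline options. Project, region, and temp bucket are placeholders.
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    '--runner=DataflowRunner',
    '--project=my-project',                # placeholder
    '--region=us-central1',                # placeholder
    '--temp_location=gs://my-bucket/tmp',  # placeholder
    '--machine_type=n1-standard-2',
    '--num_workers=75',
    '--autoscaling_algorithm=NONE',
    '--disk_size_gb=50',
    # For the Tesla T4 variant, a GPU can be attached with a Dataflow
    # service option along these lines (exact value is an assumption):
    # '--dataflow_service_options=worker_accelerator='
    # 'type:nvidia-tesla-t4;count:1;install-nvidia-driver',
])
```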

Approximate size of the models used in the tests

  • resnet101: 170.5 MB
  • resnet152: 230.4 MB

Pytorch RunInference Language Modeling

The Pytorch RunInference Language Modeling benchmark runs an example language modeling pipeline using the Bert large uncased and Bert base uncased models on a dataset of 50,000 manually generated sentences. The benchmarks produce the following metrics:

  • Mean Inference Requested Batch Size - the average batch size that RunInference groups the input sentences into for batch prediction
  • Mean Inference Batch Latency - the average amount of time it takes to perform inference on a given batch of sentences
  • Mean Load Model Latency - the average amount of time it takes to load a model. This is done once per DoFn instance on worker startup, so the cost is amortized across the pipeline.

These metrics are published to InfluxDB and BigQuery.
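As with the image benchmark, a minimal sketch of this kind of pipeline follows. The tokenization parameters and gs:// paths are illustrative assumptions; the real pipeline lives in pytorch_language_modeling_benchmarks.py.

```python
# Hedged sketch of a BERT masked-language-modeling RunInference pipeline.
# The state_dict path and tokenizer settings are placeholders/assumptions.
import apache_beam as beam
import torch
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.pytorch_inference import PytorchModelHandlerKeyedTensor
from transformers import BertConfig, BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')


def tokenize(sentence: str) -> dict:
  # A keyed-tensor handler expects one Dict[str, torch.Tensor] per example,
  # so squeeze away the batch dimension the tokenizer adds.
  tokens = tokenizer(
      sentence, return_tensors='pt', padding='max_length', max_length=128)
  return {k: torch.squeeze(v) for k, v in tokens.items()}


model_handler = PytorchModelHandlerKeyedTensor(
    state_dict_path='gs://my-bucket/bert-base-uncased.pth',  # placeholder
    model_class=BertForMaskedLM,
    model_params={'config': BertConfig.from_pretrained('bert-base-uncased')})

with beam.Pipeline() as p:
  _ = (
      p
      | 'ReadSentences' >> beam.io.ReadFromText('gs://my-bucket/sentences.txt')  # placeholder
      | 'Tokenize' >> beam.Map(tokenize)
      | 'RunInference' >> RunInference(model_handler))
```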

Pytorch Language Modeling Tests

  • Pytorch Language Modeling using the Hugging Face bert-base-uncased model.

    • machine_type: n1-standard-2
    • num_workers: 250
    • autoscaling_algorithm: NONE
    • disk_size_gb: 50
  • Pytorch Language Modeling using the Hugging Face bert-large-uncased model.

    • machine_type: n1-standard-2
    • num_workers: 250
    • autoscaling_algorithm: NONE
    • disk_size_gb: 50
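
Both suites derive their dashboard metrics from RunInference's built-in counters and distributions, which can also be read directly off a PipelineResult. A minimal sketch follows; the metric names below match RunInference's internal metrics collector at the time of writing and should be treated as assumptions.

```python
# Hedged sketch: pulling RunInference's raw latency distributions off a
# finished pipeline's PipelineResult. Metric names are assumptions.
from apache_beam.metrics.metric import MetricsFilter
from apache_beam.runners.runner import PipelineResult


def report_inference_metrics(result: PipelineResult) -> None:
  for name in ('inference_batch_latency_micro_secs',
               'load_model_latency_milli_secs'):
    query = result.metrics().query(MetricsFilter().with_name(name))
    for dist in query['distributions']:
      # Each DistributionResult carries a count, sum, and mean.
      print(f'{name}: mean={dist.committed.mean}')
```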

Approximate size of the models used in the tests

  • bert-base-uncased: 417.7 MB
  • bert-large-uncased: 1.2 GB

All of the performance tests are defined in job_InferenceBenchmarkTests_Python.groovy.