| { |
| "nbformat": 4, |
| "nbformat_minor": 0, |
| "metadata": { |
| "colab": { |
| "provenance": [] |
| }, |
| "kernelspec": { |
| "name": "python3", |
| "display_name": "Python 3" |
| }, |
| "language_info": { |
| "name": "python" |
| } |
| }, |
| "cells": [ |
| { |
| "cell_type": "code", |
| "execution_count": null, |
| "metadata": { |
| "id": "K2MpsIa-ncMZ" |
| }, |
| "outputs": [], |
| "source": [ |
| "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n", |
| "\n", |
| "# Licensed to the Apache Software Foundation (ASF) under one\n", |
| "# or more contributor license agreements. See the NOTICE file\n", |
| "# distributed with this work for additional information\n", |
| "# regarding copyright ownership. The ASF licenses this file\n", |
| "# to you under the Apache License, Version 2.0 (the\n", |
| "# \"License\"); you may not use this file except in compliance\n", |
| "# with the License. You may obtain a copy of the License at\n", |
| "#\n", |
| "# http://www.apache.org/licenses/LICENSE-2.0\n", |
| "#\n", |
| "# Unless required by applicable law or agreed to in writing,\n", |
| "# software distributed under the License is distributed on an\n", |
| "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n", |
| "# KIND, either express or implied. See the License for the\n", |
| "# specific language governing permissions and limitations\n", |
| "# under the License" |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "source": [ |
| "# Use windowing with RunInference predictions \n", |
| "\n", |
| "<table align=\"left\">\n", |
| " <td>\n", |
| " <a target=\"_blank\" href=\"https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_windowing.ipynb\"><img src=\"https://raw.githubusercontent.com/google/or-tools/main/tools/colab_32px.png\" />Run in Google Colab</a>\n", |
| " </td>\n", |
| " <td>\n", |
| " <a target=\"_blank\" href=\"https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_windowing.ipynb\"><img src=\"https://raw.githubusercontent.com/google/or-tools/main/tools/github_32px.png\" />View source on GitHub</a>\n", |
| " </td>\n", |
| "</table>\n" |
| ], |
| "metadata": { |
| "id": "fKxfINuCPsh9" |
| } |
| }, |
| { |
| "cell_type": "markdown", |
| "source": [ |
| "This notebook shows how to use the RunInference transform with [windowing](https://beam.apache.org/documentation/programming-guide/#windowing) in a streaming pipeline. Windowing is useful when your data arrives within a particular timeframe and can be divided by timestamp, or when you want to see trends before all the data is processed. In this example, the pipeline predicts the quality of milk samples and classifies them as `good`, `bad`, or `medium`. The pipeline then aggregates the predictions for each window. To make predictions, the pipeline uses the XGBoost model handler. For more information about the RunInference API, see the [Machine Learning section of the Apache Beam documentation](https://beam.apache.org/documentation/ml/overview/).\n", |
| "\n", |
| "With RunInference, a model handler manages batching, vectorization, and prediction optimization for your XGBoost pipeline or model.\n", |
| "\n", |
| "This notebook demonstrates the following common RunInference patterns:\n", |
| "\n", |
| "- Generate predictions for all samples in a window.\n", |
| "- Aggregate the results per window after running inference.\n", |
| "- Print the aggregations." |
| ], |
| "metadata": { |
| "id": "knGVsVR6P_nZ" |
| } |
| }, |
| { |
| "cell_type": "markdown", |
| "source": [ |
| "## Before you begin\n", |
| "Complete the following setup steps:\n", |
| "- Install dependencies for Apache Beam.\n", |
| "- Install XGBoost.\n", |
| "- Download the [Milk Quality Prediction dataset from Kaggle](https://www.kaggle.com/datasets/cpluzshrijayan/milkquality). Name the dataset `milk_quality.csv`, and put it in the current directory. Use the CSV file format for the dataset. If using colab, you will need to [upload it to the colab filesystem](https://neptune.ai/blog/google-colab-dealing-with-files)." |
| ], |
| "metadata": { |
| "id": "s5PPNo9HRRe1" |
| } |
| }, |
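| { |
| "cell_type": "markdown", |
| "source": [ |
| "The next cell is an optional, minimal sketch of one way to get `milk_quality.csv` into the Colab filesystem. It assumes you are running in Colab and select the downloaded CSV file in the upload dialog; skip it if the file is already in the current directory." |
| ], |
| "metadata": {} |
| }, |
| { |
| "cell_type": "code", |
| "source": [ |
| "# Optional: upload milk_quality.csv when running in Colab.\n", |
| "# Skip this cell if the file is already in the current directory.\n", |
| "try:\n", |
| "  from google.colab import files\n", |
| "  uploaded = files.upload()  # Select milk_quality.csv in the dialog.\n", |
| "  print(list(uploaded))\n", |
| "except ImportError:\n", |
| "  print('Not running in Colab; place milk_quality.csv in the current directory.')" |
| ], |
| "metadata": {}, |
| "execution_count": null, |
| "outputs": [] |
| }, |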
| { |
| "cell_type": "code", |
| "source": [ |
| "!pip install apache-beam>=2.47.0\n", |
| "!pip install xgboost\n", |
| "# You may need to install a different version of Datatable directly depending on environment\n", |
| "!pip install datatable" |
| ], |
| "metadata": { |
| "id": "YiPD9-j_RRNC" |
| }, |
| "execution_count": null, |
| "outputs": [] |
| }, |
| { |
| "cell_type": "markdown", |
| "source": [ |
| "## About the dataset\n", |
| "\n", |
| "This dataset is a CSV file that contains seven columns: `pH`, `temperature`, `taste`, `odor`, `fat`, `turbidity`, and `color`. The dataset also contains a column that labels the quality of each sample as `good`, `bad`, or `medium`." |
| ], |
| "metadata": { |
| "id": "Uz9BcQg_Qbva" |
| } |
| }, |
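| { |
| "cell_type": "markdown", |
| "source": [ |
| "The following optional cell is a minimal sketch for inspecting the first few rows of the dataset. It assumes the file is named `milk_quality.csv` and is located in the current directory." |
| ], |
| "metadata": {} |
| }, |
| { |
| "cell_type": "code", |
| "source": [ |
| "import pandas\n", |
| "\n", |
| "# Optional: preview the first rows of the dataset to verify the columns.\n", |
| "pandas.read_csv('milk_quality.csv').head()" |
| ], |
| "metadata": {}, |
| "execution_count": null, |
| "outputs": [] |
| }, |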
| { |
| "cell_type": "code", |
| "source": [ |
| "import argparse\n", |
| "import logging\n", |
| "import time\n", |
| "from typing import NamedTuple\n", |
| "\n", |
| "import pandas\n", |
| "from sklearn.model_selection import train_test_split\n", |
| "\n", |
| "import apache_beam as beam\n", |
| "import xgboost\n", |
| "from apache_beam import window\n", |
| "from apache_beam.ml.inference import RunInference\n", |
| "from apache_beam.ml.inference.base import PredictionResult\n", |
| "from apache_beam.ml.inference.xgboost_inference import XGBoostModelHandlerPandas\n", |
| "from apache_beam.options.pipeline_options import PipelineOptions\n", |
| "from apache_beam.options.pipeline_options import SetupOptions\n", |
| "from apache_beam.runners.runner import PipelineResult\n", |
| "from apache_beam.testing.test_stream import TestStream" |
| ], |
| "metadata": { |
| "id": "sHDrJ1nTPqUv" |
| }, |
| "execution_count": null, |
| "outputs": [] |
| }, |
| { |
| "cell_type": "markdown", |
| "source": [ |
| "## Load the dataset and train the XGBoost model\n", |
| "This section demonstrates the following steps:\n", |
| "1. Load the Milk Quality Prediction dataset from Kaggle.\n", |
| "2. Split the data into a training set and a test set.\n", |
| "2. Train the XGBoost classifier to predict the quality of milk.\n", |
| "3. Save the model in a JSON file using `mode.save_model`. For more information, see [Introduction to Model IO](https://xgboost.readthedocs.io/en/stable/tutorials/saving_model.html) in the XGBoost documentation. \n" |
| ], |
| "metadata": { |
| "id": "kpXjNoVgRpOb" |
| } |
| }, |
| { |
| "cell_type": "code", |
| "source": [ |
| "# Replace with the path to milk_quality.csv\n", |
| "DATASET = \"milk_quality.csv\"\n\n", |
| "TRAINING_SET = \"training_set.csv\"\n", |
| "TEST_SET = \"test_set.csv\"\n", |
| "LABELS = \"labels.csv\"\n", |
| "MODEL_STATE = \"model.json\"" |
| ], |
| "metadata": { |
| "id": "cnH5lTahY6Ty" |
| }, |
| "execution_count": null, |
| "outputs": [] |
| }, |
| { |
| "cell_type": "markdown", |
| "source": [ |
| "### Split the data into training and test sets \n", |
| "Use the following preprocessing helper functions to split the dataset into a training set and a test set." |
| ], |
| "metadata": { |
| "id": "KNRkuUQ9aA62" |
| } |
| }, |
| { |
| "cell_type": "code", |
| "source": [ |
| "def preprocess_data(\n", |
| " dataset_path: str,\n", |
| " training_set_path: str,\n", |
| " labels_path: str,\n", |
| " test_set_path: str):\n", |
| " \"\"\"\n", |
| " Helper function to split the dataset into a training set\n", |
| " and its labels and a test set. The training set and\n", |
| " its labels are used to train a lightweight model.\n", |
| " The test set is used to create a test streaming pipeline.\n", |
| " Args:\n", |
| " dataset_path: path to the CSV file containing the Kaggle\n", |
| " Milk Quality Prediction dataset\n", |
| " training_set_path: path to output the training samples\n", |
| " labels_path: path to output the labels for the training set\n", |
| " test_set_path: path to output the test samples\n", |
| " \"\"\"\n", |
| " df = pandas.read_csv(dataset_path)\n", |
| " df['Grade'].replace(['low', 'medium', 'high'], [0, 1, 2], inplace=True)\n", |
| " x = df.drop(columns=['Grade'])\n", |
| " y = df['Grade']\n", |
| " x_train, x_test, y_train, _ = \\\n", |
| " train_test_split(x, y, test_size=0.60, random_state=99)\n", |
| " x_train.to_csv(training_set_path, index=False)\n", |
| " y_train.to_csv(labels_path, index=False)\n", |
| " x_test.to_csv(test_set_path, index=False)\n", |
| "\n", |
| "\n", |
| "def train_model(\n", |
| " samples_path: str, labels_path: str, model_state_output_path: str):\n", |
| " \"\"\"Function to train the XGBoost model.\n", |
| " Args:\n", |
| " samples_path: path to the CSV file containing the training data\n", |
| " labels_path: path to the CSV file containing the labels for the training data\n", |
| " model_state_output_path: path to store the trained model\n", |
| " \"\"\"\n", |
| " samples = pandas.read_csv(samples_path)\n", |
| " labels = pandas.read_csv(labels_path)\n", |
| " xgb = xgboost.XGBClassifier(max_depth=3)\n", |
| " xgb.fit(samples, labels)\n", |
| " xgb.save_model(model_state_output_path)\n", |
| " return xgb" |
| ], |
| "metadata": { |
| "id": "MUUq_j6NXu41" |
| }, |
| "execution_count": null, |
| "outputs": [] |
| }, |
| { |
| "cell_type": "markdown", |
| "source": [ |
| "### Preprocess the data and train the model\n", |
| "\n", |
| "Split the dataset into three files with data: a training dataset, a test dataset, and labels for the training dataset. Use the test set as input data for the test stream and to validate the trained model's performance. Use the training set to train the XGBoost model. Store the model in a JSON file so that the model handler can load the model." |
| ], |
| "metadata": { |
| "id": "h5NKrWvlaV4T" |
| } |
| }, |
| { |
| "cell_type": "code", |
| "source": [ |
| "preprocess_data(\n", |
| " dataset_path=DATASET,\n", |
| " training_set_path=TRAINING_SET,\n", |
| " labels_path=LABELS,\n", |
| " test_set_path=TEST_SET)\n", |
| "\n", |
| "train_model(\n", |
| " samples_path=TRAINING_SET,\n", |
| " labels_path=LABELS,\n", |
| " model_state_output_path=MODEL_STATE)" |
| ], |
| "metadata": { |
| "id": "QNLbrXfEYtZP" |
| }, |
| "execution_count": null, |
| "outputs": [] |
| }, |
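| { |
| "cell_type": "markdown", |
| "source": [ |
| "The next cell is an optional sanity check, shown here as a minimal sketch: it reloads the saved model state from `model.json` and prints predicted classes for a few rows of the test set (0 = low, 1 = medium, 2 = high, matching the mapping in `preprocess_data`). It assumes the preceding cells completed successfully." |
| ], |
| "metadata": {} |
| }, |
| { |
| "cell_type": "code", |
| "source": [ |
| "# Optional sanity check: reload the saved model state and run a few local predictions.\n", |
| "sanity_model = xgboost.XGBClassifier()\n", |
| "sanity_model.load_model(MODEL_STATE)\n", |
| "print(sanity_model.predict(pandas.read_csv(TEST_SET).head()))" |
| ], |
| "metadata": {}, |
| "execution_count": null, |
| "outputs": [] |
| }, |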
| { |
| "cell_type": "code", |
| "source": [ |
| "# Named tuple to store the number of good, bad, and medium quality samples in a window\n", |
| "class MilkQualityAggregation(NamedTuple):\n", |
| " bad_quality_measurements: int\n", |
| " medium_quality_measurements: int\n", |
| " high_quality_measurements: int" |
| ], |
| "metadata": { |
| "id": "1RcM3F66aqSU" |
| }, |
| "execution_count": null, |
| "outputs": [] |
| }, |
| { |
| "cell_type": "markdown", |
| "source": [ |
| "### Count the samples by quality for each window \n", |
| "Use the helper `CombineFn` to aggregate the results of a window. The function tracks of the number of good, bad, and medium quality samples in the stream. " |
| ], |
| "metadata": { |
| "id": "uIy51BJmjuJv" |
| } |
| }, |
| { |
| "cell_type": "code", |
| "source": [ |
| "class AggregateMilkQualityResults(beam.CombineFn):\n", |
| " \"\"\"Simple aggregation to keep track of the number\n", |
| " of samples with good, bad, and medium quality milk.\"\"\"\n", |
| " def create_accumulator(self):\n", |
| " return MilkQualityAggregation(0, 0, 0)\n", |
| "\n", |
| " def add_input(\n", |
| " self, accumulator: MilkQualityAggregation, element: PredictionResult):\n", |
| " quality = element.inference[0]\n", |
| " if quality == 0:\n", |
| " return MilkQualityAggregation(\n", |
| " accumulator.bad_quality_measurements + 1,\n", |
| " accumulator.medium_quality_measurements,\n", |
| " accumulator.high_quality_measurements)\n", |
| " elif quality == 1:\n", |
| " return MilkQualityAggregation(\n", |
| " accumulator.bad_quality_measurements,\n", |
| " accumulator.medium_quality_measurements + 1,\n", |
| " accumulator.high_quality_measurements)\n", |
| " else:\n", |
| " return MilkQualityAggregation(\n", |
| " accumulator.bad_quality_measurements,\n", |
| " accumulator.medium_quality_measurements,\n", |
| " accumulator.high_quality_measurements + 1)\n", |
| "\n", |
| " def merge_accumulators(self, accumulators: MilkQualityAggregation):\n", |
| " return MilkQualityAggregation(\n", |
| " sum(\n", |
| " aggregation.bad_quality_measurements\n", |
| " for aggregation in accumulators),\n", |
| " sum(\n", |
| " aggregation.medium_quality_measurements\n", |
| " for aggregation in accumulators),\n", |
| " sum(\n", |
| " aggregation.high_quality_measurements\n", |
| " for aggregation in accumulators),\n", |
| " )\n", |
| "\n", |
| " def extract_output(self, accumulator: MilkQualityAggregation):\n", |
| " return accumulator" |
| ], |
| "metadata": { |
| "id": "Piezn0pfjs0E" |
| }, |
| "execution_count": null, |
| "outputs": [] |
| }, |
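| { |
| "cell_type": "markdown", |
| "source": [ |
| "To see how the aggregation behaves outside of a pipeline, the following optional cell is a small sketch that feeds two hand-built `PredictionResult` values into `AggregateMilkQualityResults`. The `example=None` placeholder is only for illustration; in the pipeline, RunInference fills in the actual input sample." |
| ], |
| "metadata": {} |
| }, |
| { |
| "cell_type": "code", |
| "source": [ |
| "# Optional: exercise the CombineFn directly with two hand-built predictions.\n", |
| "# The example=None placeholder is only for illustration.\n", |
| "agg = AggregateMilkQualityResults()\n", |
| "acc = agg.create_accumulator()\n", |
| "predictions = [\n", |
| "  PredictionResult(example=None, inference=[0]),\n", |
| "  PredictionResult(example=None, inference=[2]),\n", |
| "]\n", |
| "for prediction in predictions:\n", |
| "  acc = agg.add_input(acc, prediction)\n", |
| "\n", |
| "# Expect one bad (class 0) and one high (class 2) quality measurement.\n", |
| "print(agg.extract_output(acc))" |
| ], |
| "metadata": {}, |
| "execution_count": null, |
| "outputs": [] |
| }, |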
| { |
| "cell_type": "markdown", |
| "source": [ |
| "### Create a streaming pipeline using the test data\n", |
| "\n", |
| "Construct a [`TestStream`](https://beam.apache.org/releases/javadoc/2.1.0/org/apache/beam/sdk/testing/TestStream.html) class that contains all samples from the test set. A test stream is testing input that generates an unbounded [`PCollection`](https://beam.apache.org/releases/javadoc/2.1.0/org/apache/beam/sdk/values/PCollection.html) of elements, advancing the watermark and processing time as elements are emitted." |
| ], |
| "metadata": { |
| "id": "QOwlJga1ir4Y" |
| } |
| }, |
| { |
| "cell_type": "code", |
| "source": [ |
| "milk_quality_data = pandas.read_csv(TEST_SET)\n", |
| "\n", |
| "start = time.mktime(time.strptime('2023/06/29 10:00:00', '%Y/%m/%d %H:%M:%S'))\n", |
| "\n", |
| "# Create a test stream\n", |
| "test_stream = TestStream()\n", |
| "\n", |
| "# Watermark is set to 10:00:00\n", |
| "test_stream.advance_watermark_to(start)\n", |
| "\n", |
| "# Split the DataFrame into individual samples\n", |
| "samples = [\n", |
| " milk_quality_data.iloc[i:i + 1] for i in range(len(milk_quality_data))\n", |
| "]\n", |
| "\n", |
| "for watermark_offset, sample in enumerate(samples, 1):\n", |
| " test_stream.advance_watermark_to(start + watermark_offset)\n", |
| " test_stream.add_elements([sample])\n", |
| "\n", |
| "test_stream.advance_watermark_to_infinity()" |
| ], |
| "metadata": { |
| "id": "HeSbuBdtfqmf" |
| }, |
| "execution_count": null, |
| "outputs": [] |
| }, |
| { |
| "cell_type": "markdown", |
| "source": [ |
| "### Create the XGBoost model handler and run the pipeline\n", |
| "This section demonstrates first how to create a model handler, and then how to run the pipeline." |
| ], |
| "metadata": { |
| "id": "caxXDiPgjRdM" |
| } |
| }, |
| { |
| "cell_type": "code", |
| "source": [ |
| "model_handler = XGBoostModelHandlerPandas(\n", |
| " model_class=xgboost.XGBClassifier, model_state=MODEL_STATE)\n", |
| "\n", |
| "pipeline_options = PipelineOptions().from_dictionary({})\n", |
| "\n", |
| "with beam.Pipeline() as p:\n", |
| " _ = (\n", |
| " p | test_stream\n", |
| " | 'window' >> beam.WindowInto(window.SlidingWindows(30, 5))\n", |
| " | \"RunInference\" >> RunInference(model_handler)\n", |
| " | 'Count number of elements in window' >> beam.CombineGlobally(\n", |
| " AggregateMilkQualityResults()).without_defaults()\n", |
| " | 'Print' >> beam.Map(print))" |
| ], |
| "metadata": { |
| "id": "2SK5wUkdi_e_" |
| }, |
| "execution_count": null, |
| "outputs": [] |
| } |
| ] |
| } |