| { |
| "cells": [ |
| { |
| "cell_type": "code", |
| "execution_count": 1, |
| "metadata": { |
| "id": "BrKf6TQ98qIJ" |
| }, |
| "outputs": [], |
| "source": [ |
| "# @title ###### Licensed to the Apache Software Foundation (ASF), Version 2.0 (the \"License\")\n", |
| "\n", |
| "# Licensed to the Apache Software Foundation (ASF) under one\n", |
| "# or more contributor license agreements. See the NOTICE file\n", |
| "# distributed with this work for additional information\n", |
| "# regarding copyright ownership. The ASF licenses this file\n", |
| "# to you under the Apache License, Version 2.0 (the\n", |
| "# \"License\"); you may not use this file except in compliance\n", |
| "# with the License. You may obtain a copy of the License at\n", |
| "#\n", |
| "# http://www.apache.org/licenses/LICENSE-2.0\n", |
| "#\n", |
| "# Unless required by applicable law or agreed to in writing,\n", |
| "# software distributed under the License is distributed on an\n", |
| "# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n", |
| "# KIND, either express or implied. See the License for the\n", |
| "# specific language governing permissions and limitations\n", |
"# under the License."
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "metadata": { |
| "id": "hHg4SoUr8qIK" |
| }, |
| "source": [ |
| "# Run inference with a Gemma open model\n", |
| "\n", |
| "<table align=\"left\">\n", |
| " <td>\n", |
| " <a target=\"_blank\" href=\"https://colab.research.google.com/github/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_gemma.ipynb\"><img src=\"https://raw.githubusercontent.com/google/or-tools/main/tools/colab_32px.png\" />Run in Google Colab</a>\n", |
| " </td>\n", |
| " <td>\n", |
| " <a target=\"_blank\" href=\"https://github.com/apache/beam/blob/master/examples/notebooks/beam-ml/run_inference_gemma.ipynb\"><img src=\"https://raw.githubusercontent.com/google/or-tools/main/tools/github_32px.png\" />View source on GitHub</a>\n", |
| " </td>\n", |
| "</table>" |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "metadata": { |
| "id": "hmvdEYwD8qIK" |
| }, |
| "source": [ |
"Gemma is a family of lightweight, state-of-the-art open models built from research and technology used to create the Gemini models.\n",
| "You can use Gemma models in your Apache Beam inference pipelines with the `RunInference` transform.\n", |
| "\n", |
"This notebook demonstrates how to load the preconfigured Gemma 2B model and then use it in your Apache Beam inference pipeline. The pipeline runs an example prompt by using a custom model handler that calls the model's `generate` method.\n",
| "\n", |
| "For more information about using RunInference, see [Get started with AI/ML pipelines](https://beam.apache.org/documentation/ml/overview/) in the Apache Beam documentation." |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "metadata": { |
| "id": "prD5iCdl8qIL" |
| }, |
| "source": [ |
| "## Requirements\n", |
| "\n", |
"Serving and using Gemma models requires a substantial amount of RAM. To run this example, we recommend that you use a notebook instance with GPUs. At a minimum, use a machine that has the T4 GPU type. This configuration provides sufficient memory for running inference with a saved model.\n",
| "\n", |
| "**Note:** When you complete this workflow in Google Colab, if you don't have Colab Enterprise, you might run into resource constraints." |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "metadata": { |
| "id": "jwfLcFSm8qIL" |
| }, |
| "source": [ |
| "## Before you begin" |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "metadata": { |
| "id": "-muc6QB58qIL" |
| }, |
| "source": [ |
| "- To use a fine-tuned version of the model, follow the steps in [Gemma fine-tuning](https://ai.google.dev/gemma/docs/lora_tuning).\n", |
"- For testing this workflow, we recommend using the instruction-tuned model in your Apache Beam workflow. For example, if you use the Gemma 2B model in your pipeline, when you load the model, change the `GemmaCausalLM.from_preset()` argument from `gemma_2b_en`\n",
| "to `gemma_instruct_2b_en`. For more information, see [Create a model](https://ai.google.dev/gemma/docs/get_started#create_a_model) in \"Get started with Gemma using KerasNLP\". For a list of models, see [Gemma models](https://www.kaggle.com/models/keras/gemma)." |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "metadata": { |
| "id": "4DZaCmvi8qIL" |
| }, |
| "source": [ |
"## Install dependencies\n",
"To use the `RunInference` transform with the built-in TensorFlow model handler, install Apache Beam version 2.46.0 or later. The Gemma model class is provided by version 0.8.0 or later of the Keras natural language processing (NLP) package, `keras_nlp`."
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": 1, |
| "metadata": { |
| "colab": { |
| "base_uri": "https://localhost:8080/" |
| }, |
| "id": "wYvcwsj88qIL", |
| "outputId": "29a54410-c516-4b3a-b34a-0a1481b408dd" |
| }, |
| "outputs": [ |
| { |
| "name": "stdout", |
| "output_type": "stream", |
| "text": [ |
| "\u001b[?25l \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m0.0/294.6 kB\u001b[0m \u001b[31m?\u001b[0m eta \u001b[36m-:--:--\u001b[0m\r\u001b[2K \u001b[91m━━━━━━━━━━━\u001b[0m\u001b[90m╺\u001b[0m\u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m81.9/294.6 kB\u001b[0m \u001b[31m2.3 MB/s\u001b[0m eta \u001b[36m0:00:01\u001b[0m\r\u001b[2K \u001b[91m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m\u001b[91m╸\u001b[0m\u001b[90m━\u001b[0m \u001b[32m286.7/294.6 kB\u001b[0m \u001b[31m4.4 MB/s\u001b[0m eta \u001b[36m0:00:01\u001b[0m\r\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m294.6/294.6 kB\u001b[0m \u001b[31m3.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n", |
| "\u001b[?25h\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n", |
| "tensorflow 2.15.0 requires keras<2.16,>=2.15.0, but you have keras 3.0.5 which is incompatible.\n", |
| "tensorflow-metadata 1.14.0 requires protobuf<4.21,>=3.20.3, but you have protobuf 4.25.3 which is incompatible.\u001b[0m\u001b[31m\n", |
| "\u001b[0m\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n", |
| "tensorflow 2.15.0 requires keras<2.16,>=2.15.0, but you have keras 3.0.5 which is incompatible.\u001b[0m\u001b[31m\n", |
| "\u001b[0m" |
| ] |
| } |
| ], |
| "source": [ |
| "!pip install -q -U protobuf\n", |
"!pip install -q -U \"apache_beam[interactive,gcp]\"\n",
"!pip install -q -U \"keras_nlp>=0.8.0\"\n",
"!pip install -q -U \"keras>3\"\n",
| "\n", |
| "# To use the newly installed versions, restart the runtime.\n", |
| "exit()" |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "metadata": { |
| "id": "Uw5g3jHnBBO-" |
| }, |
| "source": [ |
| "## Authenticate with Kaggle" |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "metadata": { |
| "id": "1FQdEMq8GEpl" |
| }, |
| "source": [ |
| "The pipeline defined here automatically pulls the model weights from Kaggle. First, accept the terms of use for Gemma models on the Keras [Gemma](https://www.kaggle.com/models/keras/gemma) page. Next, generate an API token by following the instructions in [How to use Kaggle](https://www.kaggle.com/docs/api). Provide your username and token." |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": 1, |
| "metadata": { |
| "colab": { |
| "base_uri": "https://localhost:8080/", |
| "height": 84, |
| "referenced_widgets": [ |
| "09f730cf36e74993883f6e69732c8022", |
| "e77abce708bc464484463dd9289c3bbe", |
| "77240ae07f4d4cd18ff227fb9b6d4582", |
| "a9bd321c50d2492a8bccf25a428b502f", |
| "7f9f1bf26e9748f68f2b1bf880fa8433", |
| "9233e5c7f34d4641b8d7ab49876f3b65", |
| "4b255639d5d2403386d940c832eb75bd", |
| "ac5fb2c208924d8a88e589558c54c3ac", |
| "aef5a638469641d1a2ba1c46a49f74c9", |
| "30798183af4f42daaa5991898859a387", |
| "d5902391f0da466f87cc9a6c98578c7c", |
| "49084c1479a24f55ae1eefd10d69a6a4", |
| "4e141a0bb15d46e0b0d403710bd8c0ae", |
| "10b7f02094d547aeaad1ddbe51f65e61", |
| "48e028cf0c33408b9e15962a32913f13", |
| "dd37580b4d1640a2b1147303c16d6c36", |
| "4d3ebcbcfc2d43178c30e9f4feac070b", |
| "cdcc2b811fb34d0893c6085098c61250", |
| "ddcb89b91bd64218974129e67f35b379", |
| "86a301e485934df9a9a8b035b61cd688", |
| "9fdf13286ba54a42a9f8c3f16948e3a1", |
| "52258607316047eb844abc9f2f2e167c", |
| "b9f2aca5a9b340349dee7065b0a22f64" |
| ] |
| }, |
| "id": "dm9Ij8PzBBgi", |
| "outputId": "4f85a246-ae55-4579-c8ef-5784acd8ad9d" |
| }, |
| "outputs": [ |
| { |
| "data": { |
| "application/vnd.jupyter.widget-view+json": { |
| "model_id": "09f730cf36e74993883f6e69732c8022", |
| "version_major": 2, |
| "version_minor": 0 |
| }, |
| "text/plain": [ |
| "VBox(children=(HTML(value='<center> <img\\nsrc=https://www.kaggle.com/static/images/site-logo.png\\nalt=\\'Kaggle…" |
| ] |
| }, |
| "metadata": {}, |
| "output_type": "display_data" |
| }, |
| { |
| "name": "stderr", |
| "output_type": "stream", |
| "text": [ |
| "Kaggle credentials set.\n", |
| "Kaggle credentials successfully validated.\n" |
| ] |
| } |
| ], |
| "source": [ |
| "import kagglehub\n", |
| "\n", |
| "kagglehub.login()" |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "metadata": { |
| "id": "c_ez0S408qIM" |
| }, |
| "source": [ |
| "## Import dependencies and provide a model preset\n", |
| "Use the following code to import dependencies.\n", |
| "\n", |
"Replace the value of the `model_preset` variable with the name of the Gemma preset to use. For example, to use the default English weights, use the value `gemma_2b_en`. This example uses the instruction-tuned preset `gemma_instruct_2b_en`. Optionally, to run the model at half-precision and reduce GPU memory usage, set the Keras global floating-point type to `bfloat16`."
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": 3, |
| "metadata": { |
| "id": "tOFdxn3s8qIM" |
| }, |
| "outputs": [], |
| "source": [ |
| "import numpy as np\n", |
| "\n", |
| "import apache_beam as beam\n", |
| "import keras_nlp\n", |
| "import keras\n", |
| "from apache_beam.ml.inference import utils\n", |
| "from apache_beam.ml.inference.base import RunInference\n", |
| "from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerNumpy\n", |
| "from apache_beam.options.pipeline_options import PipelineOptions\n", |
| "\n", |
| "model_preset = \"gemma_instruct_2b_en\"\n", |
| "# Optionally set the model to run at half-precision\n", |
| "# (recommended for smaller GPUs)\n", |
| "keras.config.set_floatx(\"bfloat16\")" |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "metadata": { |
| "id": "1D9C6YPn8qIM" |
| }, |
| "source": [ |
| "## Run the pipeline\n", |
| "\n", |
| "To run the pipeline, use a custom model handler.\n", |
| "\n", |
| "### Provide a custom model handler\n", |
"To simplify model loading, this notebook defines a custom model handler that loads the model by pulling the model weights directly from Kaggle presets. To customize the behavior of the handler, implement `load_model`, `validate_inference_args`, and `share_model_across_processes`. The Keras implementation of the Gemma models has a `generate` method that generates text from a prompt. To route the prompts properly, call this method in the `run_inference` method."
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": 6, |
| "metadata": { |
| "id": "xe9r7dK_9fJo" |
| }, |
| "outputs": [], |
| "source": [ |
| "# To load the model and perform the inference, define `GemmaModelHandler`.\n", |
| "\n", |
| "from apache_beam.ml.inference.base import ModelHandler\n", |
| "from apache_beam.ml.inference.base import PredictionResult\n", |
| "from typing import Any\n", |
| "from typing import Dict\n", |
| "from typing import Iterable\n", |
| "from typing import Optional\n", |
| "from typing import Sequence\n", |
| "from keras_nlp.src.models.gemma.gemma_causal_lm import GemmaCausalLM\n", |
| "\n", |
"class GemmaModelHandler(ModelHandler[str, PredictionResult, GemmaCausalLM]):\n",
| " def __init__(\n", |
| " self,\n", |
| " model_name: str = \"gemma_2b_en\",\n", |
| " ):\n", |
| " \"\"\" Implementation of the ModelHandler interface for Gemma using text as input.\n", |
| "\n", |
| " Example Usage::\n", |
| "\n", |
| " pcoll | RunInference(GemmaModelHandler())\n", |
| "\n", |
| " Args:\n", |
| " model_name: The Gemma model preset. Default is gemma_2b_instruct_en.\n", |
| " \"\"\"\n", |
| " self._model_name = model_name\n", |
| " self._env_vars = {}\n", |
| " def share_model_across_processes(self) -> bool:\n", |
| " return True\n", |
| "\n", |
| " def load_model(self) -> GemmaCausalLM:\n", |
| " \"\"\"Loads and initializes a model for processing.\"\"\"\n", |
| " return keras_nlp.models.GemmaCausalLM.from_preset(self._model_name)\n", |
| "\n", |
| " def validate_inference_args(self, inference_args: Optional[Dict[str, Any]]):\n", |
| " \"\"\"Validates the inference arguments.\"\"\"\n", |
| " for key, value in inference_args.items():\n", |
| " if key != \"max_length\":\n", |
| " raise ValueError(f\"Invalid inference argument: {key}\")\n", |
| "\n", |
| " def run_inference(\n", |
| " self,\n", |
| " batch: Sequence[str],\n", |
| " model: GemmaCausalLM,\n", |
| " inference_args: Optional[Dict[str, Any]] = None\n", |
| " ) -> Iterable[PredictionResult]:\n", |
| " \"\"\"Runs inferences on a batch of text strings.\n", |
| "\n", |
| " Args:\n", |
| " batch: A sequence of examples as text strings.\n", |
| " model:\n", |
| " inference_args: Any additional arguments for an inference.\n", |
| "\n", |
| " Returns:\n", |
| " An Iterable of type PredictionResult.\n", |
| " \"\"\"\n", |
| " # Loop each text string, and use a tuple to store the inference results.\n", |
| " predictions = []\n", |
| " for one_text in batch:\n", |
| " result = model.generate(one_text, **inference_args)\n", |
| " predictions.append(result)\n", |
| " return utils._convert_to_result(batch, predictions, self._model_name)\n" |
| ] |
| }, |
| { |
| "cell_type": "markdown", |
| "metadata": { |
| "id": "zFwZgoB48qIM" |
| }, |
| "source": [ |
| "### Execute the pipeline\n", |
"Use the following code to run the pipeline. The code passes the Gemma model preset to the custom model handler. This cell can take a few minutes to run, because the model is downloaded and then loaded onto the worker. This delay is a one-time cost per worker.\n",
| "\n", |
"The `max_length` argument determines the maximum length of the response from Gemma. Because the response includes your input, `max_length` covers both the prompt and the generated output. For longer prompts, use a larger maximum length, but note that longer lengths require more time to generate.\n",
| "\n", |
| "**Note:** When the pipeline completes, the memory used to load the model in the pipeline isn't freed automatically. As a result, if you run the pipeline more than once, your pipeline might fail with an out of memory (OOM) error." |
| ] |
| }, |
| { |
| "cell_type": "code", |
| "execution_count": 7, |
| "metadata": { |
| "colab": { |
| "base_uri": "https://localhost:8080/", |
| "height": 106 |
| }, |
| "id": "ibnsNUBw8qIM", |
| "outputId": "0d375964-be10-4b0a-dddd-2a6fd4ff7620" |
| }, |
| "outputs": [ |
| { |
| "name": "stderr", |
| "output_type": "stream", |
| "text": [ |
| "WARNING:apache_beam.runners.interactive.interactive_environment:Dependencies required for Interactive Beam PCollection visualization are not available, please use: `pip install apache-beam[interactive]` to install necessary dependencies to enable all data visualization features.\n" |
| ] |
| }, |
| { |
| "data": { |
| "application/javascript": "\n if (typeof window.interactive_beam_jquery == 'undefined') {\n var jqueryScript = document.createElement('script');\n jqueryScript.src = 'https://code.jquery.com/jquery-3.4.1.slim.min.js';\n jqueryScript.type = 'text/javascript';\n jqueryScript.onload = function() {\n var datatableScript = document.createElement('script');\n datatableScript.src = 'https://cdn.datatables.net/1.10.20/js/jquery.dataTables.min.js';\n datatableScript.type = 'text/javascript';\n datatableScript.onload = function() {\n window.interactive_beam_jquery = jQuery.noConflict(true);\n window.interactive_beam_jquery(document).ready(function($){\n \n });\n }\n document.head.appendChild(datatableScript);\n };\n document.head.appendChild(jqueryScript);\n } else {\n window.interactive_beam_jquery(document).ready(function($){\n \n });\n }" |
| }, |
| "metadata": {}, |
| "output_type": "display_data" |
| }, |
| { |
| "name": "stdout", |
| "output_type": "stream", |
| "text": [ |
| "Input: Tell me the sentiment of the phrase 'I like pizza': , Output: Tell me the sentiment of the phrase 'I like pizza': \n", |
| "\n", |
| "The sentiment of the phrase \"I like pizza\" is positive. It expresses a personal\n" |
| ] |
| } |
| ], |
| "source": [ |
| "class FormatOutput(beam.DoFn):\n", |
| " def process(self, element, *args, **kwargs):\n", |
| " yield \"Input: {input}, Output: {output}\".format(input=element.example, output=element.inference)\n", |
| "\n", |
| "# Instantiate a NumPy array of string prompts for the model.\n", |
| "examples = np.array([\"Tell me the sentiment of the phrase 'I like pizza': \"])\n", |
"# Specify the custom model handler, providing the model preset name.\n",
| "model_handler = GemmaModelHandler(model_preset)\n", |
| "with beam.Pipeline() as p:\n", |
| " _ = (p | beam.Create(examples) # Create a PCollection of the prompts.\n", |
| " | RunInference(model_handler, inference_args={'max_length': 32}) # Send the prompts to the model and get responses.\n", |
| " | beam.ParDo(FormatOutput()) # Format the output.\n", |
| " | beam.Map(print) # Print the formatted output.\n", |
| " )" |
| ] |
| } |
| ], |
| "metadata": { |
| "accelerator": "GPU", |
| "colab": { |
| "gpuType": "T4", |
| "provenance": [] |
| }, |
| "kernelspec": { |
| "display_name": "Python 3", |
| "name": "python3" |
| }, |
| "language_info": { |
| "name": "python" |
| }, |
| "widgets": { |
| "application/vnd.jupyter.widget-state+json": { |
| "09f730cf36e74993883f6e69732c8022": { |
| "model_module": "@jupyter-widgets/controls", |
| "model_module_version": "1.5.0", |
| "model_name": "VBoxModel", |
| "state": { |
| "_dom_classes": [], |
| "_model_module": "@jupyter-widgets/controls", |
| "_model_module_version": "1.5.0", |
| "_model_name": "VBoxModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/controls", |
| "_view_module_version": "1.5.0", |
| "_view_name": "VBoxView", |
| "box_style": "", |
| "children": [ |
| "IPY_MODEL_9fdf13286ba54a42a9f8c3f16948e3a1" |
| ], |
| "layout": "IPY_MODEL_4b255639d5d2403386d940c832eb75bd" |
| } |
| }, |
| "10b7f02094d547aeaad1ddbe51f65e61": { |
| "model_module": "@jupyter-widgets/base", |
| "model_module_version": "1.2.0", |
| "model_name": "LayoutModel", |
| "state": { |
| "_model_module": "@jupyter-widgets/base", |
| "_model_module_version": "1.2.0", |
| "_model_name": "LayoutModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/base", |
| "_view_module_version": "1.2.0", |
| "_view_name": "LayoutView", |
| "align_content": null, |
| "align_items": null, |
| "align_self": null, |
| "border": null, |
| "bottom": null, |
| "display": null, |
| "flex": null, |
| "flex_flow": null, |
| "grid_area": null, |
| "grid_auto_columns": null, |
| "grid_auto_flow": null, |
| "grid_auto_rows": null, |
| "grid_column": null, |
| "grid_gap": null, |
| "grid_row": null, |
| "grid_template_areas": null, |
| "grid_template_columns": null, |
| "grid_template_rows": null, |
| "height": null, |
| "justify_content": null, |
| "justify_items": null, |
| "left": null, |
| "margin": null, |
| "max_height": null, |
| "max_width": null, |
| "min_height": null, |
| "min_width": null, |
| "object_fit": null, |
| "object_position": null, |
| "order": null, |
| "overflow": null, |
| "overflow_x": null, |
| "overflow_y": null, |
| "padding": null, |
| "right": null, |
| "top": null, |
| "visibility": null, |
| "width": null |
| } |
| }, |
| "30798183af4f42daaa5991898859a387": { |
| "model_module": "@jupyter-widgets/base", |
| "model_module_version": "1.2.0", |
| "model_name": "LayoutModel", |
| "state": { |
| "_model_module": "@jupyter-widgets/base", |
| "_model_module_version": "1.2.0", |
| "_model_name": "LayoutModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/base", |
| "_view_module_version": "1.2.0", |
| "_view_name": "LayoutView", |
| "align_content": null, |
| "align_items": null, |
| "align_self": null, |
| "border": null, |
| "bottom": null, |
| "display": null, |
| "flex": null, |
| "flex_flow": null, |
| "grid_area": null, |
| "grid_auto_columns": null, |
| "grid_auto_flow": null, |
| "grid_auto_rows": null, |
| "grid_column": null, |
| "grid_gap": null, |
| "grid_row": null, |
| "grid_template_areas": null, |
| "grid_template_columns": null, |
| "grid_template_rows": null, |
| "height": null, |
| "justify_content": null, |
| "justify_items": null, |
| "left": null, |
| "margin": null, |
| "max_height": null, |
| "max_width": null, |
| "min_height": null, |
| "min_width": null, |
| "object_fit": null, |
| "object_position": null, |
| "order": null, |
| "overflow": null, |
| "overflow_x": null, |
| "overflow_y": null, |
| "padding": null, |
| "right": null, |
| "top": null, |
| "visibility": null, |
| "width": null |
| } |
| }, |
| "48e028cf0c33408b9e15962a32913f13": { |
| "model_module": "@jupyter-widgets/controls", |
| "model_module_version": "1.5.0", |
| "model_name": "ButtonStyleModel", |
| "state": { |
| "_model_module": "@jupyter-widgets/controls", |
| "_model_module_version": "1.5.0", |
| "_model_name": "ButtonStyleModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/base", |
| "_view_module_version": "1.2.0", |
| "_view_name": "StyleView", |
| "button_color": null, |
| "font_weight": "" |
| } |
| }, |
| "49084c1479a24f55ae1eefd10d69a6a4": { |
| "model_module": "@jupyter-widgets/base", |
| "model_module_version": "1.2.0", |
| "model_name": "LayoutModel", |
| "state": { |
| "_model_module": "@jupyter-widgets/base", |
| "_model_module_version": "1.2.0", |
| "_model_name": "LayoutModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/base", |
| "_view_module_version": "1.2.0", |
| "_view_name": "LayoutView", |
| "align_content": null, |
| "align_items": null, |
| "align_self": null, |
| "border": null, |
| "bottom": null, |
| "display": null, |
| "flex": null, |
| "flex_flow": null, |
| "grid_area": null, |
| "grid_auto_columns": null, |
| "grid_auto_flow": null, |
| "grid_auto_rows": null, |
| "grid_column": null, |
| "grid_gap": null, |
| "grid_row": null, |
| "grid_template_areas": null, |
| "grid_template_columns": null, |
| "grid_template_rows": null, |
| "height": null, |
| "justify_content": null, |
| "justify_items": null, |
| "left": null, |
| "margin": null, |
| "max_height": null, |
| "max_width": null, |
| "min_height": null, |
| "min_width": null, |
| "object_fit": null, |
| "object_position": null, |
| "order": null, |
| "overflow": null, |
| "overflow_x": null, |
| "overflow_y": null, |
| "padding": null, |
| "right": null, |
| "top": null, |
| "visibility": null, |
| "width": null |
| } |
| }, |
| "4b255639d5d2403386d940c832eb75bd": { |
| "model_module": "@jupyter-widgets/base", |
| "model_module_version": "1.2.0", |
| "model_name": "LayoutModel", |
| "state": { |
| "_model_module": "@jupyter-widgets/base", |
| "_model_module_version": "1.2.0", |
| "_model_name": "LayoutModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/base", |
| "_view_module_version": "1.2.0", |
| "_view_name": "LayoutView", |
| "align_content": null, |
| "align_items": "center", |
| "align_self": null, |
| "border": null, |
| "bottom": null, |
| "display": "flex", |
| "flex": null, |
| "flex_flow": "column", |
| "grid_area": null, |
| "grid_auto_columns": null, |
| "grid_auto_flow": null, |
| "grid_auto_rows": null, |
| "grid_column": null, |
| "grid_gap": null, |
| "grid_row": null, |
| "grid_template_areas": null, |
| "grid_template_columns": null, |
| "grid_template_rows": null, |
| "height": null, |
| "justify_content": null, |
| "justify_items": null, |
| "left": null, |
| "margin": null, |
| "max_height": null, |
| "max_width": null, |
| "min_height": null, |
| "min_width": null, |
| "object_fit": null, |
| "object_position": null, |
| "order": null, |
| "overflow": null, |
| "overflow_x": null, |
| "overflow_y": null, |
| "padding": null, |
| "right": null, |
| "top": null, |
| "visibility": null, |
| "width": "50%" |
| } |
| }, |
| "4d3ebcbcfc2d43178c30e9f4feac070b": { |
| "model_module": "@jupyter-widgets/controls", |
| "model_module_version": "1.5.0", |
| "model_name": "DescriptionStyleModel", |
| "state": { |
| "_model_module": "@jupyter-widgets/controls", |
| "_model_module_version": "1.5.0", |
| "_model_name": "DescriptionStyleModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/base", |
| "_view_module_version": "1.2.0", |
| "_view_name": "StyleView", |
| "description_width": "" |
| } |
| }, |
| "4e141a0bb15d46e0b0d403710bd8c0ae": { |
| "model_module": "@jupyter-widgets/controls", |
| "model_module_version": "1.5.0", |
| "model_name": "DescriptionStyleModel", |
| "state": { |
| "_model_module": "@jupyter-widgets/controls", |
| "_model_module_version": "1.5.0", |
| "_model_name": "DescriptionStyleModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/base", |
| "_view_module_version": "1.2.0", |
| "_view_name": "StyleView", |
| "description_width": "" |
| } |
| }, |
| "52258607316047eb844abc9f2f2e167c": { |
| "model_module": "@jupyter-widgets/base", |
| "model_module_version": "1.2.0", |
| "model_name": "LayoutModel", |
| "state": { |
| "_model_module": "@jupyter-widgets/base", |
| "_model_module_version": "1.2.0", |
| "_model_name": "LayoutModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/base", |
| "_view_module_version": "1.2.0", |
| "_view_name": "LayoutView", |
| "align_content": null, |
| "align_items": null, |
| "align_self": null, |
| "border": null, |
| "bottom": null, |
| "display": null, |
| "flex": null, |
| "flex_flow": null, |
| "grid_area": null, |
| "grid_auto_columns": null, |
| "grid_auto_flow": null, |
| "grid_auto_rows": null, |
| "grid_column": null, |
| "grid_gap": null, |
| "grid_row": null, |
| "grid_template_areas": null, |
| "grid_template_columns": null, |
| "grid_template_rows": null, |
| "height": null, |
| "justify_content": null, |
| "justify_items": null, |
| "left": null, |
| "margin": null, |
| "max_height": null, |
| "max_width": null, |
| "min_height": null, |
| "min_width": null, |
| "object_fit": null, |
| "object_position": null, |
| "order": null, |
| "overflow": null, |
| "overflow_x": null, |
| "overflow_y": null, |
| "padding": null, |
| "right": null, |
| "top": null, |
| "visibility": null, |
| "width": null |
| } |
| }, |
| "77240ae07f4d4cd18ff227fb9b6d4582": { |
| "model_module": "@jupyter-widgets/controls", |
| "model_module_version": "1.5.0", |
| "model_name": "TextModel", |
| "state": { |
| "_dom_classes": [], |
| "_model_module": "@jupyter-widgets/controls", |
| "_model_module_version": "1.5.0", |
| "_model_name": "TextModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/controls", |
| "_view_module_version": "1.5.0", |
| "_view_name": "TextView", |
| "continuous_update": true, |
| "description": "Username:", |
| "description_tooltip": null, |
| "disabled": false, |
| "layout": "IPY_MODEL_30798183af4f42daaa5991898859a387", |
| "placeholder": "", |
| "style": "IPY_MODEL_d5902391f0da466f87cc9a6c98578c7c", |
| "value": "jrmccluskeygoogle" |
| } |
| }, |
| "7f9f1bf26e9748f68f2b1bf880fa8433": { |
| "model_module": "@jupyter-widgets/controls", |
| "model_module_version": "1.5.0", |
| "model_name": "ButtonModel", |
| "state": { |
| "_dom_classes": [], |
| "_model_module": "@jupyter-widgets/controls", |
| "_model_module_version": "1.5.0", |
| "_model_name": "ButtonModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/controls", |
| "_view_module_version": "1.5.0", |
| "_view_name": "ButtonView", |
| "button_style": "", |
| "description": "Login", |
| "disabled": false, |
| "icon": "", |
| "layout": "IPY_MODEL_10b7f02094d547aeaad1ddbe51f65e61", |
| "style": "IPY_MODEL_48e028cf0c33408b9e15962a32913f13", |
| "tooltip": "" |
| } |
| }, |
| "86a301e485934df9a9a8b035b61cd688": { |
| "model_module": "@jupyter-widgets/controls", |
| "model_module_version": "1.5.0", |
| "model_name": "DescriptionStyleModel", |
| "state": { |
| "_model_module": "@jupyter-widgets/controls", |
| "_model_module_version": "1.5.0", |
| "_model_name": "DescriptionStyleModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/base", |
| "_view_module_version": "1.2.0", |
| "_view_name": "StyleView", |
| "description_width": "" |
| } |
| }, |
| "9233e5c7f34d4641b8d7ab49876f3b65": { |
| "model_module": "@jupyter-widgets/controls", |
| "model_module_version": "1.5.0", |
| "model_name": "HTMLModel", |
| "state": { |
| "_dom_classes": [], |
| "_model_module": "@jupyter-widgets/controls", |
| "_model_module_version": "1.5.0", |
| "_model_name": "HTMLModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/controls", |
| "_view_module_version": "1.5.0", |
| "_view_name": "HTMLView", |
| "description": "", |
| "description_tooltip": null, |
| "layout": "IPY_MODEL_dd37580b4d1640a2b1147303c16d6c36", |
| "placeholder": "", |
| "style": "IPY_MODEL_4d3ebcbcfc2d43178c30e9f4feac070b", |
| "value": "\n<b>Thank You</b></center>" |
| } |
| }, |
| "9fdf13286ba54a42a9f8c3f16948e3a1": { |
| "model_module": "@jupyter-widgets/controls", |
| "model_module_version": "1.5.0", |
| "model_name": "LabelModel", |
| "state": { |
| "_dom_classes": [], |
| "_model_module": "@jupyter-widgets/controls", |
| "_model_module_version": "1.5.0", |
| "_model_name": "LabelModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/controls", |
| "_view_module_version": "1.5.0", |
| "_view_name": "LabelView", |
| "description": "", |
| "description_tooltip": null, |
| "layout": "IPY_MODEL_52258607316047eb844abc9f2f2e167c", |
| "placeholder": "", |
| "style": "IPY_MODEL_b9f2aca5a9b340349dee7065b0a22f64", |
| "value": "Kaggle credentials successfully validated." |
| } |
| }, |
| "a9bd321c50d2492a8bccf25a428b502f": { |
| "model_module": "@jupyter-widgets/controls", |
| "model_module_version": "1.5.0", |
| "model_name": "PasswordModel", |
| "state": { |
| "_dom_classes": [], |
| "_model_module": "@jupyter-widgets/controls", |
| "_model_module_version": "1.5.0", |
| "_model_name": "PasswordModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/controls", |
| "_view_module_version": "1.5.0", |
| "_view_name": "PasswordView", |
| "continuous_update": true, |
| "description": "Token:", |
| "description_tooltip": null, |
| "disabled": false, |
| "layout": "IPY_MODEL_49084c1479a24f55ae1eefd10d69a6a4", |
| "placeholder": "", |
| "style": "IPY_MODEL_4e141a0bb15d46e0b0d403710bd8c0ae", |
| "value": "" |
| } |
| }, |
| "ac5fb2c208924d8a88e589558c54c3ac": { |
| "model_module": "@jupyter-widgets/base", |
| "model_module_version": "1.2.0", |
| "model_name": "LayoutModel", |
| "state": { |
| "_model_module": "@jupyter-widgets/base", |
| "_model_module_version": "1.2.0", |
| "_model_name": "LayoutModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/base", |
| "_view_module_version": "1.2.0", |
| "_view_name": "LayoutView", |
| "align_content": null, |
| "align_items": null, |
| "align_self": null, |
| "border": null, |
| "bottom": null, |
| "display": null, |
| "flex": null, |
| "flex_flow": null, |
| "grid_area": null, |
| "grid_auto_columns": null, |
| "grid_auto_flow": null, |
| "grid_auto_rows": null, |
| "grid_column": null, |
| "grid_gap": null, |
| "grid_row": null, |
| "grid_template_areas": null, |
| "grid_template_columns": null, |
| "grid_template_rows": null, |
| "height": null, |
| "justify_content": null, |
| "justify_items": null, |
| "left": null, |
| "margin": null, |
| "max_height": null, |
| "max_width": null, |
| "min_height": null, |
| "min_width": null, |
| "object_fit": null, |
| "object_position": null, |
| "order": null, |
| "overflow": null, |
| "overflow_x": null, |
| "overflow_y": null, |
| "padding": null, |
| "right": null, |
| "top": null, |
| "visibility": null, |
| "width": null |
| } |
| }, |
| "aef5a638469641d1a2ba1c46a49f74c9": { |
| "model_module": "@jupyter-widgets/controls", |
| "model_module_version": "1.5.0", |
| "model_name": "DescriptionStyleModel", |
| "state": { |
| "_model_module": "@jupyter-widgets/controls", |
| "_model_module_version": "1.5.0", |
| "_model_name": "DescriptionStyleModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/base", |
| "_view_module_version": "1.2.0", |
| "_view_name": "StyleView", |
| "description_width": "" |
| } |
| }, |
| "b9f2aca5a9b340349dee7065b0a22f64": { |
| "model_module": "@jupyter-widgets/controls", |
| "model_module_version": "1.5.0", |
| "model_name": "DescriptionStyleModel", |
| "state": { |
| "_model_module": "@jupyter-widgets/controls", |
| "_model_module_version": "1.5.0", |
| "_model_name": "DescriptionStyleModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/base", |
| "_view_module_version": "1.2.0", |
| "_view_name": "StyleView", |
| "description_width": "" |
| } |
| }, |
| "cdcc2b811fb34d0893c6085098c61250": { |
| "model_module": "@jupyter-widgets/controls", |
| "model_module_version": "1.5.0", |
| "model_name": "LabelModel", |
| "state": { |
| "_dom_classes": [], |
| "_model_module": "@jupyter-widgets/controls", |
| "_model_module_version": "1.5.0", |
| "_model_name": "LabelModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/controls", |
| "_view_module_version": "1.5.0", |
| "_view_name": "LabelView", |
| "description": "", |
| "description_tooltip": null, |
| "layout": "IPY_MODEL_ddcb89b91bd64218974129e67f35b379", |
| "placeholder": "", |
| "style": "IPY_MODEL_86a301e485934df9a9a8b035b61cd688", |
| "value": "Connecting..." |
| } |
| }, |
| "d5902391f0da466f87cc9a6c98578c7c": { |
| "model_module": "@jupyter-widgets/controls", |
| "model_module_version": "1.5.0", |
| "model_name": "DescriptionStyleModel", |
| "state": { |
| "_model_module": "@jupyter-widgets/controls", |
| "_model_module_version": "1.5.0", |
| "_model_name": "DescriptionStyleModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/base", |
| "_view_module_version": "1.2.0", |
| "_view_name": "StyleView", |
| "description_width": "" |
| } |
| }, |
| "dd37580b4d1640a2b1147303c16d6c36": { |
| "model_module": "@jupyter-widgets/base", |
| "model_module_version": "1.2.0", |
| "model_name": "LayoutModel", |
| "state": { |
| "_model_module": "@jupyter-widgets/base", |
| "_model_module_version": "1.2.0", |
| "_model_name": "LayoutModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/base", |
| "_view_module_version": "1.2.0", |
| "_view_name": "LayoutView", |
| "align_content": null, |
| "align_items": null, |
| "align_self": null, |
| "border": null, |
| "bottom": null, |
| "display": null, |
| "flex": null, |
| "flex_flow": null, |
| "grid_area": null, |
| "grid_auto_columns": null, |
| "grid_auto_flow": null, |
| "grid_auto_rows": null, |
| "grid_column": null, |
| "grid_gap": null, |
| "grid_row": null, |
| "grid_template_areas": null, |
| "grid_template_columns": null, |
| "grid_template_rows": null, |
| "height": null, |
| "justify_content": null, |
| "justify_items": null, |
| "left": null, |
| "margin": null, |
| "max_height": null, |
| "max_width": null, |
| "min_height": null, |
| "min_width": null, |
| "object_fit": null, |
| "object_position": null, |
| "order": null, |
| "overflow": null, |
| "overflow_x": null, |
| "overflow_y": null, |
| "padding": null, |
| "right": null, |
| "top": null, |
| "visibility": null, |
| "width": null |
| } |
| }, |
| "ddcb89b91bd64218974129e67f35b379": { |
| "model_module": "@jupyter-widgets/base", |
| "model_module_version": "1.2.0", |
| "model_name": "LayoutModel", |
| "state": { |
| "_model_module": "@jupyter-widgets/base", |
| "_model_module_version": "1.2.0", |
| "_model_name": "LayoutModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/base", |
| "_view_module_version": "1.2.0", |
| "_view_name": "LayoutView", |
| "align_content": null, |
| "align_items": null, |
| "align_self": null, |
| "border": null, |
| "bottom": null, |
| "display": null, |
| "flex": null, |
| "flex_flow": null, |
| "grid_area": null, |
| "grid_auto_columns": null, |
| "grid_auto_flow": null, |
| "grid_auto_rows": null, |
| "grid_column": null, |
| "grid_gap": null, |
| "grid_row": null, |
| "grid_template_areas": null, |
| "grid_template_columns": null, |
| "grid_template_rows": null, |
| "height": null, |
| "justify_content": null, |
| "justify_items": null, |
| "left": null, |
| "margin": null, |
| "max_height": null, |
| "max_width": null, |
| "min_height": null, |
| "min_width": null, |
| "object_fit": null, |
| "object_position": null, |
| "order": null, |
| "overflow": null, |
| "overflow_x": null, |
| "overflow_y": null, |
| "padding": null, |
| "right": null, |
| "top": null, |
| "visibility": null, |
| "width": null |
| } |
| }, |
| "e77abce708bc464484463dd9289c3bbe": { |
| "model_module": "@jupyter-widgets/controls", |
| "model_module_version": "1.5.0", |
| "model_name": "HTMLModel", |
| "state": { |
| "_dom_classes": [], |
| "_model_module": "@jupyter-widgets/controls", |
| "_model_module_version": "1.5.0", |
| "_model_name": "HTMLModel", |
| "_view_count": null, |
| "_view_module": "@jupyter-widgets/controls", |
| "_view_module_version": "1.5.0", |
| "_view_name": "HTMLView", |
| "description": "", |
| "description_tooltip": null, |
| "layout": "IPY_MODEL_ac5fb2c208924d8a88e589558c54c3ac", |
| "placeholder": "", |
| "style": "IPY_MODEL_aef5a638469641d1a2ba1c46a49f74c9", |
| "value": "<center> <img\nsrc=https://www.kaggle.com/static/images/site-logo.png\nalt='Kaggle'> <br> Create an API token from <a\nhref=\"https://www.kaggle.com/settings/account\" target=\"_blank\">your Kaggle\nsettings page</a> and paste it below along with your Kaggle username. <br> </center>" |
| } |
| } |
| } |
| } |
| }, |
| "nbformat": 4, |
| "nbformat_minor": 0 |
| } |