---
layout: page
title: "Apache Hadoop Submarine Interpreter for Apache Zeppelin"
description: "Hadoop Submarine is the latest machine learning framework subproject in the Hadoop 3.1 release. It allows Hadoop to support TensorFlow, MXNet, Caffe, Spark, etc."
group: interpreter
---

{% include JB/setup %}

Submarine Interpreter for Apache Zeppelin

Hadoop Submarine is the latest machine learning framework subproject in the Hadoop 3.1 release. It allows Hadoop to support TensorFlow, MXNet, Caffe, Spark, and other frameworks. These deep learning frameworks provide a full-featured system for machine learning algorithm development, distributed model training, model management, and model publishing; combined with Hadoop's intrinsic data storage and data processing capabilities, they enable data scientists to better mine and extract value from their data.

A deep learning algorithm project requires data acquisition, data processing, data cleaning, interactive visual programming and parameter tuning, algorithm testing, algorithm publishing, algorithm job scheduling, offline model training, online model serving, and many other steps. Zeppelin is a web-based notebook that supports interactive data analysis. You can use SQL, Scala, Python, and other languages to create data-driven, interactive, collaborative documents.

You can use the more than twenty interpreters in Zeppelin (for example Spark, Hive, Cassandra, Elasticsearch, Kylin, HBase) to collect, clean, and extract features from the data in Hadoop, completing the data preprocessing needed before machine learning model training.

By integrating Submarine into Zeppelin, we use Zeppelin's data discovery, data analysis, data visualization, and collaboration capabilities to visualize the results of algorithm development and parameter tuning during machine learning model training.

Architecture

The figure above shows, from a system architecture perspective, how Submarine works with Zeppelin to develop and train machine learning algorithms.

After installing and deploying Hadoop 3.1+ and Zeppelin, Submarine creates a fully separate Zeppelin Submarine interpreter Docker container for each user in YARN. This container contains the development and runtime environment for TensorFlow. Zeppelin Server connects to the Zeppelin Submarine interpreter Docker container in YARN, allowing algorithm engineers to perform algorithm development and data visualization in TensorFlow's standalone environment in a Zeppelin notebook.

After the algorithm is developed, the algorithm engineer can submit it directly from Zeppelin to YARN for offline model training, and follow the training progress in real time through the TensorBoard that Submarine provides for each algorithm engineer.

You can not only complete the model training of the algorithm, you can also use the more than twenty interpreters in Zeppelin to complete the data preprocessing for the model. For example, you can perform data extraction, filtering, and feature extraction through the Spark interpreter in the algorithm note.
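For instance, a hedged sketch of such a preprocessing paragraph is shown below, assuming Zeppelin's Spark interpreter group with PySpark is configured; the HDFS paths and column names are hypothetical:

```
%spark.pyspark

# Hypothetical preprocessing step: load raw data from HDFS, filter out invalid rows,
# and keep only the columns needed for model training.
raw = spark.read.csv("hdfs:///data/raw/train.csv", header=True, inferSchema=True)

features = (raw
            .filter(raw["label"].isNotNull())
            .select("feature_a", "feature_b", "label"))

# Write the preprocessed data back to HDFS for the training paragraphs to consume.
features.write.mode("overwrite").parquet("hdfs:///data/preprocessed/train")
```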

In the future, you will also be able to use Zeppelin's upcoming Workflow orchestration service: complete Spark and Hive data processing and TensorFlow model training in one note, visually organize the steps into a workflow, and schedule the jobs in a production environment.

Overview

The figure above shows, from the perspective of the internal implementation, how Submarine combines with Zeppelin for machine learning algorithm development and model training.

  1. The algorithm engineer creates a TensorFlow notebook (left image) in Zeppelin by using the Submarine interpreter.

    It is important to note that you need to complete the development of the entire algorithm within a single note.

  2. You can use Spark for data preprocessing in some paragraphs of the note.

  3. Use Python in other paragraphs of the note to develop and debug the TensorFlow algorithm. Submarine creates a Zeppelin Submarine interpreter Docker container for you in YARN, which contains the following features and services:

    • Shell command line tool: lets you view the system environment in the Zeppelin Submarine interpreter Docker container and install the extension tools or Python dependencies you need.
    • Kerberos lib: lets you perform Kerberos authentication and access Hadoop clusters with Kerberos authentication enabled.
    • TensorFlow environment: lets you develop TensorFlow algorithm code.
    • Python environment: lets you develop TensorFlow code.
    • Complete the development of a whole algorithm within a single note in Zeppelin. If the algorithm contains multiple modules, you can write the different algorithm modules in separate paragraphs of the note; the title of each paragraph is the name of the algorithm module, and the content of the paragraph is the code of that module (see the note layout sketch at the end of this overview).
    • HDFS client: the Zeppelin Submarine interpreter automatically submits the algorithm code you wrote in the note to HDFS.

    Submarine interpreter Docker image: Submarine provides you with an image file that supports TensorFlow (CPU and GPU versions) and comes with the Python libraries commonly used for algorithm development pre-installed. You can also install other development dependencies you need on top of the base image provided by Submarine.

  4. When you have completed the development of the algorithm modules, you can create a new paragraph in the note and type %submarine dashboard. Zeppelin will create a Submarine Dashboard. The machine learning algorithm written in this note can be submitted to YARN as a job by selecting the JOB RUN command option in the control panel. This creates a TensorFlow model training Docker container, which contains the following parts:

    • TensorFlow environment
    • HDFS client: automatically downloads the algorithm files from HDFS and mounts them into the container for distributed model training, placing them under the container's working directory.

    Submarine TensorFlow Docker image: Submarine provides you with an image file that supports TensorFlow (CPU and GPU versions) and comes with the Python libraries commonly used for algorithm development pre-installed. You can also install other development dependencies you need on top of the base image provided by Submarine.
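As a hedged illustration of the note layout described in step 3, the sketch below splits an algorithm into two paragraphs, one per module; the module names are hypothetical and the paragraph titles are set in the Zeppelin UI:

```
%submarine.python

# Paragraph titled "input_data" (hypothetical module name)
import tensorflow as tf

def input_fn():
    # Toy in-memory dataset standing in for a real input pipeline
    features = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    labels = tf.constant([3.0, 7.0])
    return features, labels

%submarine.python

# Paragraph titled "train" (hypothetical module name)
features, labels = input_fn()
w = tf.Variable(0.5)
loss = tf.reduce_mean(tf.square(tf.reduce_sum(features, axis=1) * w - labels))
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    _, current_loss = sess.run([train_op, loss])
    print("loss:", current_loss)
```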

Submarine shell

After creating a note with the Submarine interpreter in Zeppelin, you can add a paragraph to the note if you need it. Using the %submarine.sh identifier, you can use shell commands to perform various operations on the Submarine interpreter Docker container, such as the following (see the example paragraph after this list):

  1. View the Python version in the container
  2. View the system environment of the container
  3. Install any dependencies you need
  4. Perform Kerberos authentication with kinit
  5. Use Hadoop inside the container for HDFS operations, etc.
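A hedged example of such a paragraph is shown below; the package name, keytab path, and principal are placeholders:

```
%submarine.sh

# Check the Python version inside the Submarine interpreter container
python --version

# Inspect the container's system environment
env

# Install an additional Python dependency (package name is only an example)
pip install pandas

# Perform Kerberos authentication (keytab path and principal are placeholders)
kinit -kt /path/to/user.keytab user@EXAMPLE.COM

# Use the Hadoop client inside the container for HDFS operations
hdfs dfs -ls /user
```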

Submarine python

You can add one or more paragraphs to the note and write the TensorFlow algorithm modules in Python using the %submarine.python identifier.
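For example, a minimal paragraph to verify the TensorFlow environment might look like the sketch below, assuming the TensorFlow 1.x environment described above:

```
%submarine.python

import tensorflow as tf

# Build a trivial graph to confirm that TensorFlow works inside the container.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
total = tf.reduce_sum(x)

with tf.Session() as sess:
    print("TensorFlow version:", tf.__version__)
    print("sum:", sess.run(total))
```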

Submarine Dashboard

After writing the TensorFlow algorithm with %submarine.python, you can add a paragraph to the note, enter %submarine dashboard, and execute it. Zeppelin will create a Submarine Dashboard.
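For example, the dashboard is created from a paragraph that contains only the directive:

```
%submarine dashboard
```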

With the Submarine Dashboard you can perform all of Submarine's operational control, for example:

  1. Usage: Displays Submarine's command descriptions to help developers locate problems.

  2. Refresh: Zeppelin will clear all of your input in the Dashboard.

  3. Tensorboard: You will be redirected to the TensorBoard web interface that Submarine creates for each user. With TensorBoard you can view the status of TensorFlow model training in real time.

  4. Command

    • JOB RUN: Selecting JOB RUN displays the parameter input interface for submitting a job.
    • JOB STOP: Stops a TensorFlow model training task that has been submitted and is running.
    • TENSORBOARD START: Creates your TensorBoard Docker container.
    • TENSORBOARD STOP: Stops and destroys your TensorBoard Docker container.

  5. Run Command: Executes the action command of your choice.
  6. Clean Checkpoint: Checking this option will clear the data in this note's checkpoint path before each JOB RUN execution.

Configuration

The Zeppelin Submarine interpreter provides a set of properties for customizing the Submarine interpreter; you can adjust them in Zeppelin's interpreter settings.

Docker images

The Docker image files are stored in the zeppelin/scripts/docker/submarine directory.

  1. Submarine interpreter CPU version

  2. Submarine interpreter GPU version

  3. TensorFlow 1.10 & Hadoop 3.1.2 CPU version

  4. TensorFlow 1.10 & Hadoop 3.1.2 GPU version

Change Log

0.1.0 (Zeppelin 0.9.0):

  • Support distributed or standalone TensorFlow model training.
  • Support running the Submarine interpreter locally.
  • Support running the Submarine interpreter on YARN.
  • Support Docker on YARN 3.3.0; compatibility with lower versions of YARN is planned.

Bugs & Contacts

  • Submarine interpreter bugs: If you encounter a bug in this interpreter, please create a sub-task under the ZEPPELIN-3856 JIRA ticket.
  • Submarine runtime problems: If you encounter a problem with the Submarine runtime, please create an issue in hadoop-submarine-ecosystem.
  • YARN Submarine bugs: If you encounter a bug in YARN Submarine, please create a JIRA ticket in SUBMARINE.

Dependency

  1. YARN Submarine currently needs to run on Hadoop 3.3+.
  • The Hadoop version in the Hadoop Submarine team's git repository is periodically merged into the Hadoop code repository.
  • The Hadoop Submarine team's git repository is updated faster than the Hadoop release cycle.
  • You can use the Hadoop version from the Hadoop Submarine team's git repository.
  2. Submarine runtime environment: you can use submarine-installer (https://github.com/hadoopsubmarine) to deploy the Docker and network environments.

More

  • Hadoop Submarine Project: https://hadoop.apache.org/submarine
  • YouTube Submarine Channel: https://www.youtube.com/channel/UC4JBt8Y8VJ0BW0IM9YpdCyQ