{
"cells": [
{
"cell_type": "markdown",
"id": "250d8327",
"metadata": {},
"source": [
"<!--- Licensed to the Apache Software Foundation (ASF) under one -->\n",
"<!--- or more contributor license agreements. See the NOTICE file -->\n",
"<!--- distributed with this work for additional information -->\n",
"<!--- regarding copyright ownership. The ASF licenses this file -->\n",
"<!--- to you under the Apache License, Version 2.0 (the -->\n",
"<!--- \"License\"); you may not use this file except in compliance -->\n",
"<!--- with the License. You may obtain a copy of the License at -->\n",
"\n",
"<!--- http://www.apache.org/licenses/LICENSE-2.0 -->\n",
"\n",
"<!--- Unless required by applicable law or agreed to in writing, -->\n",
"<!--- software distributed under the License is distributed on an -->\n",
"<!--- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->\n",
"<!--- KIND, either express or implied. See the License for the -->\n",
"<!--- specific language governing permissions and limitations -->\n",
"<!--- under the License. -->\n",
"\n",
"# Exporting to ONNX format\n",
"\n",
"[Open Neural Network Exchange (ONNX)](https://github.com/onnx/onnx) provides an open source format for AI models. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types.\n",
"\n",
"In this tutorial, we will show how you can save MXNet models to the ONNX format.\n",
"\n",
"MXNet-ONNX operators coverage and features are updated regularly. Visit the [ONNX operator coverage](https://cwiki.apache.org/confluence/display/MXNET/ONNX+Operator+Coverage) page for the latest information.\n",
"\n",
"In this tutorial, we will learn how to use MXNet to ONNX exporter on pre-trained models.\n",
"\n",
"## Prerequisites\n",
"\n",
"To run the tutorial you will need to have installed the following python modules:\n",
"- [MXNet >= 2.0.0](https://mxnet.apache.org/get_started)\n",
"- [onnx]( https://github.com/onnx/onnx#user-content-installation) v1.7 & v1.8 (follow the install guide)\n",
"\n",
"*Note:* MXNet-ONNX importer and exporter follows version 12 & 13 of ONNX operator set which comes with ONNX v1.7 & v1.8."
]
},
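{
"cell_type": "markdown",
"id": "b3e1a0c7",
"metadata": {},
"source": [
"You can quickly verify the installed versions before proceeding; a minimal sanity-check sketch:\n",
"\n",
"```python\n",
"import onnx\n",
"import mxnet\n",
"\n",
"# The exporter targets opsets 12 and 13, which ship with onnx 1.7 and 1.8\n",
"print(onnx.__version__)\n",
"print(mxnet.__version__)\n",
"```"
]
},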
{
"cell_type": "code",
"execution_count": 1,
"id": "2a60d8fd",
"metadata": {},
"outputs": [],
"source": [
"import mxnet as mx\n",
"from mxnet import initializer as init, np, onnx as mxnet_onnx\n",
"from mxnet.gluon import nn\n",
"import logging\n",
"logging.basicConfig(level=logging.INFO)"
]
},
{
"cell_type": "markdown",
"id": "4457d45c",
"metadata": {},
"source": [
"## Create a model from the MXNet Gluon\n",
"\n",
"Let's build a concise model with [MXNet gluon](../../../api/gluon/index.rst) package. The model is multilayer perceptrons with two fully-connected layers. The first one is our hidden layer, which contains 256 hidden units and applies ReLU activation function. The second is our output layer."
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "8daad4f3",
"metadata": {},
"outputs": [],
"source": [
"net = nn.HybridSequential()\n",
"net.add(nn.Dense(256, activation='relu'), nn.Dense(10))"
]
},
{
"cell_type": "markdown",
"id": "7d180ff7",
"metadata": {},
"source": [
"Then we initialize the model and export it into symbol file and parameter file."
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "8873b949",
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"[03:51:26] /work/mxnet/src/storage/storage.cc:202: Using Pooled (Naive) StorageManager for CPU\n"
]
},
{
"data": {
"text/plain": [
"('mlp-symbol.json', 'mlp-0000.params')"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"net.initialize(init.Normal(sigma=0.01))\n",
"net.hybridize()\n",
"input = np.ones(shape=(50,), dtype=np.float32)\n",
"output = net(input)\n",
"net.export(\"mlp\")"
]
},
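{
"cell_type": "markdown",
"id": "c4f2d1a8",
"metadata": {},
"source": [
"As a quick sanity check, we can confirm that both files were written to disk; a minimal sketch using the file names returned by `net.export` above:\n",
"\n",
"```python\n",
"import os\n",
"\n",
"# net.export(\"mlp\") writes mlp-symbol.json and mlp-0000.params\n",
"print(os.path.exists('mlp-symbol.json'), os.path.exists('mlp-0000.params'))\n",
"```"
]
},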
{
"cell_type": "markdown",
"id": "da6e3fa5",
"metadata": {},
"source": [
"Now, we have exported the model symbol, params file on the disk.\n",
"\n",
"## MXNet to ONNX exporter API\n",
"\n",
"Let us describe the MXNet's `export_model` API."
]
},
{
"cell_type": "code",
"execution_count": 4,
"id": "fabc2b5a",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Help on function export_model in module mxnet.onnx.mx2onnx._export_model:\n",
"\n",
"export_model(sym, params, in_shapes=None, in_types=<class 'numpy.float32'>, onnx_file_path='model.onnx', verbose=False, dynamic=False, dynamic_input_shapes=None, run_shape_inference=False, input_type=None, input_shape=None, large_model=False)\n",
" Exports the MXNet model file, passed as a parameter, into ONNX model.\n",
" Accepts both symbol,parameter objects as well as json and params filepaths as input.\n",
" Operator support and coverage -\n",
" https://github.com/apache/mxnet/tree/v1.x/python/mxnet/onnx#user-content-operator-support-matrix\n",
" \n",
" Parameters\n",
" ----------\n",
" sym : str or symbol object\n",
" Path to the json file or Symbol object\n",
" params : str or dict or list of dict\n",
" str - Path to the params file\n",
" dict - params dictionary (Including both arg_params and aux_params)\n",
" list - list of length 2 that contains arg_params and aux_params\n",
" in_shapes : List of tuple\n",
" Input shape of the model e.g [(1,3,224,224)]\n",
" in_types : data type or list of data types\n",
" Input data type e.g. np.float32, or [np.float32, np.int32]\n",
" onnx_file_path : str\n",
" Path where to save the generated onnx file\n",
" verbose : Boolean\n",
" If True will print logs of the model conversion\n",
" dynamic: Boolean\n",
" If True will allow for dynamic input shapes to the model\n",
" dynamic_input_shapes: list of tuple\n",
" Specifies the dynamic input_shapes. If None then all dimensions are set to None\n",
" run_shape_inference : Boolean\n",
" If True will run shape inference on the model\n",
" input_type : data type or list of data types\n",
" This is the old name of in_types. We keep this parameter name for backward compatibility\n",
" input_shape : List of tuple\n",
" This is the old name of in_shapes. We keep this parameter name for backward compatibility\n",
" large_model : Boolean\n",
" Whether to export a model that is larger than 2 GB. If true will save param tensors in separate\n",
" files along with .onnx model file. This feature is supported since onnx 1.8.0\n",
" \n",
" Returns\n",
" -------\n",
" onnx_file_path : str\n",
" Onnx file path\n",
" \n",
" Notes\n",
" -----\n",
" This method is available when you ``import mxnet.onnx``\n",
"\n"
]
}
],
"source": [
"help(mxnet_onnx.export_model)"
]
},
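{
"cell_type": "markdown",
"id": "d5a3e2b9",
"metadata": {},
"source": [
"The docstring above also lists `dynamic` and `dynamic_input_shapes` for models whose input shapes are not fixed. Below is a minimal hedged sketch based solely on that docstring, using hypothetical file names and a hypothetical `(batch, 50)` input whose batch dimension should stay dynamic:\n",
"\n",
"```python\n",
"# 'my-symbol.json' and 'my-0000.params' are placeholder file names.\n",
"# A None entry in dynamic_input_shapes marks that dimension as dynamic.\n",
"mxnet_onnx.export_model('my-symbol.json', 'my-0000.params',\n",
"                        in_shapes=[(1, 50)], in_types=[np.float32],\n",
"                        onnx_file_path='my_dynamic.onnx',\n",
"                        dynamic=True, dynamic_input_shapes=[(None, 50)])\n",
"```"
]
},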
{
"cell_type": "markdown",
"id": "4fed0a90",
"metadata": {},
"source": [
"`export_model` API can accept the MXNet model in one of the following two ways.\n",
"\n",
"1. MXNet sym, params objects:\n",
" * This is useful if we are training a model. At the end of training, we just need to invoke the `export_model` function and provide sym and params objects as inputs with other attributes to save the model in ONNX format.\n",
"2. MXNet's exported json and params files:\n",
" * This is useful if we have pre-trained models and we want to convert them to ONNX format.\n",
"\n",
"Since we have downloaded pre-trained model files, we will use the `export_model` API by passing the path for symbol and params files.\n",
"\n",
"## How to use MXNet to ONNX exporter API\n",
"\n",
"We will use the downloaded pre-trained model files (sym, params) and define input variables."
]
},
{
"cell_type": "code",
"execution_count": 5,
"id": "0a8c771b",
"metadata": {},
"outputs": [],
"source": [
"# The input symbol and params files\n",
"sym = './mlp-symbol.json'\n",
"params = './mlp-0000.params'\n",
"\n",
"# Standard Imagenet input - 3 channels, 224*224\n",
"input_shape = (50,)\n",
"\n",
"# Path of the output file\n",
"onnx_file = './mxnet_exported_mlp.onnx'"
]
},
{
"cell_type": "markdown",
"id": "5a3239f2",
"metadata": {},
"source": [
"We have defined the input parameters required for the `export_model` API. Now, we are ready to covert the MXNet model into ONNX format."
]
},
{
"cell_type": "code",
"execution_count": 6,
"id": "ab08ade0",
"metadata": {},
"outputs": [],
"source": [
"# Invoke export model API. It returns path of the converted onnx model\n",
"converted_model_path = mxnet_onnx.export_model(sym, params, [input_shape], [np.float32], onnx_file)"
]
},
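{
"cell_type": "markdown",
"id": "e6b4f3ca",
"metadata": {},
"source": [
"Alternatively (option 1 above), `export_model` also accepts in-memory objects instead of file paths. A hedged sketch, assuming the legacy `mx.sym.load` and `mx.nd.load` helpers are available and that the exporter accepts the prefixed parameter dictionary written by `net.export`:\n",
"\n",
"```python\n",
"# Load the symbol and params back as objects, purely for illustration;\n",
"# passing the file paths, as above, is the simpler route.\n",
"sym_obj = mx.sym.load('./mlp-symbol.json')\n",
"params_dict = mx.nd.load('./mlp-0000.params')\n",
"mxnet_onnx.export_model(sym_obj, params_dict, [input_shape], [np.float32],\n",
"                        './mxnet_exported_mlp_from_objects.onnx')\n",
"```"
]
},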
{
"cell_type": "markdown",
"id": "a02b0e55",
"metadata": {},
"source": [
"This API returns path of the converted model which you can later use to import the model into other frameworks.\n",
"\n",
"## Check validity of ONNX model\n",
"\n",
"Now we can check validity of the converted ONNX model by using ONNX checker tool. The tool will validate the model by checking if the content contains valid protobuf:"
]
},
{
"cell_type": "code",
"execution_count": 7,
"id": "99266017",
"metadata": {},
"outputs": [],
"source": [
"from onnx import checker\n",
"import onnx\n",
"\n",
"# Load onnx model\n",
"model_proto = onnx.load_model(converted_model_path)\n",
"\n",
"# Check if converted ONNX protobuf is valid\n",
"checker.check_graph(model_proto.graph)"
]
},
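{
"cell_type": "markdown",
"id": "f7c5a4db",
"metadata": {},
"source": [
"Beyond the checker, you can inspect the exported graph directly through the standard `onnx` protobuf fields; a short sketch:\n",
"\n",
"```python\n",
"# List the graph's inputs and outputs, and the opset the model targets\n",
"for inp in model_proto.graph.input:\n",
"    print('input: ', inp.name)\n",
"for out in model_proto.graph.output:\n",
"    print('output:', out.name)\n",
"print('opset:', model_proto.opset_import[0].version)\n",
"```"
]
},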
{
"cell_type": "markdown",
"id": "0db18682",
"metadata": {},
"source": [
"If the converted protobuf format doesn't qualify to ONNX proto specifications, the checker will throw errors, but in this case it successfully passes.\n",
"\n",
"This method confirms exported model protobuf is valid. Now, the model is ready to be imported in other frameworks for inference!"
]
}
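,
{
"cell_type": "markdown",
"id": "a8d6b5ec",
"metadata": {},
"source": [
"As an optional last step, you can run the exported model in another framework. A hedged sketch using [ONNX Runtime](https://onnxruntime.ai/), which is not a prerequisite of this tutorial; note that ONNX Runtime expects plain NumPy arrays rather than `mxnet.np` arrays:\n",
"\n",
"```python\n",
"import numpy\n",
"import onnxruntime as ort  # assumed to be installed separately\n",
"\n",
"sess = ort.InferenceSession(converted_model_path,\n",
"                            providers=['CPUExecutionProvider'])\n",
"input_name = sess.get_inputs()[0].name\n",
"# Feed the same (50,) input shape the model was exported with\n",
"result = sess.run(None, {input_name: numpy.ones((50,), dtype=numpy.float32)})\n",
"print(result[0].shape)\n",
"```"
]
}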
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}