{
"cells": [
{
"cell_type": "markdown",
"id": "9118e42e",
"metadata": {},
"source": [
"<!--- Licensed to the Apache Software Foundation (ASF) under one -->\n",
"<!--- or more contributor license agreements. See the NOTICE file -->\n",
"<!--- distributed with this work for additional information -->\n",
"<!--- regarding copyright ownership. The ASF licenses this file -->\n",
"<!--- to you under the Apache License, Version 2.0 (the -->\n",
"<!--- \"License\"); you may not use this file except in compliance -->\n",
"<!--- with the License. You may obtain a copy of the License at -->\n",
"\n",
"<!--- http://www.apache.org/licenses/LICENSE-2.0 -->\n",
"\n",
"<!--- Unless required by applicable law or agreed to in writing, -->\n",
"<!--- software distributed under the License is distributed on an -->\n",
"<!--- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY -->\n",
"<!--- KIND, either express or implied. See the License for the -->\n",
"<!--- specific language governing permissions and limitations -->\n",
"<!--- under the License. -->\n",
"\n",
"# Install MXNet with oneDNN\n",
"\n",
"A better training and inference performance is expected to be achieved on Intel-Architecture CPUs with MXNet built with [oneDNN](https://github.com/oneapi-src/oneDNN) on multiple operating system, including Linux, Windows and MacOS.\n",
"In the following sections, you will find build instructions for MXNet with oneDNN on Linux, MacOS and Windows.\n",
"\n",
"The detailed performance data collected on Intel Xeon CPU with MXNet built with oneDNN can be found [here](https://mxnet.apache.org/api/faq/perf#intel-cpu).\n",
"\n",
"\n",
"<h2 id=\"0\">Contents</h2>\n",
"\n",
"* [1. Linux](#1)\n",
"* [2. MacOS](#2)\n",
"* [3. Windows](#3)\n",
"* [4. Verify MXNet with python](#4)\n",
"* [5. Enable MKL BLAS](#5)\n",
"* [6. Enable graph optimization](#6)\n",
"* [7. Quantization](#7)\n",
"* [8. Support](#8)\n",
"\n",
"<h2 id=\"1\">Linux</h2>\n",
"\n",
"### Prerequisites"
]
},
{
"cell_type": "markdown",
"id": "6978cd97",
"metadata": {},
"source": [
"```\n",
"sudo apt-get update\n",
"sudo apt-get install -y build-essential git\n",
"sudo apt-get install -y libopenblas-dev liblapack-dev\n",
"sudo apt-get install -y libopencv-dev\n",
"sudo apt-get install -y graphviz\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "fc1a1a8f",
"metadata": {},
"source": [
"### Clone MXNet sources"
]
},
{
"cell_type": "markdown",
"id": "fe2f0cf0",
"metadata": {},
"source": [
"```\n",
"git clone --recursive https://github.com/apache/mxnet.git\n",
"cd mxnet\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "a13d0ce5",
"metadata": {},
"source": [
"### Build MXNet with oneDNN\n",
"\n",
"To achieve better performance, the Intel OpenMP and llvm OpenMP are recommended as below instruction. Otherwise, default GNU OpenMP will be used and you may get the sub-optimal performance. If you don't have the full [MKL](https://software.intel.com/en-us/intel-mkl) library installation, you might use OpenBLAS as the blas library, by setting USE_BLAS=Open."
]
},
{
"cell_type": "markdown",
"id": "da5587af",
"metadata": {},
"source": [
"```\n",
"# build with llvm OpenMP and Intel MKL/OpenBlas\n",
"mkdir build && cd build\n",
"cmake -DUSE_CUDA=OFF -DUSE_ONEDNN=ON -DUSE_OPENMP=ON -DUSE_OPENCV=ON ..\n",
"make -j $(nproc)\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "de6f1a4a",
"metadata": {},
"source": [
"```\n",
"# build with Intel MKL and Intel OpenMP\n",
"mkdir build && cd build\n",
"cmake -DUSE_CUDA=OFF -DUSE_ONEDNN=ON -DUSE_BLAS=mkl ..\n",
"make -j $(nproc)\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "6835e62b",
"metadata": {},
"source": [
"```\n",
"# build with openblas and GNU OpenMP (sub-optimal performance)\n",
"mkdir build && cd build\n",
"cmake -DUSE_CUDA=OFF -DUSE_ONEDNN=ON -DUSE_BLAS=Open ..\n",
"make -j $(nproc)\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "a110f5e1",
"metadata": {},
"source": [
"<h2 id=\"2\">MacOS</h2>\n",
"\n",
"### Prerequisites\n",
"\n",
"Install the dependencies, required for MXNet, with the following commands:\n",
"\n",
"- [Homebrew](https://brew.sh/)\n",
"- llvm (clang in macOS does not support OpenMP)\n",
"- OpenCV (for computer vision operations)"
]
},
{
"cell_type": "markdown",
"id": "5bf8d693",
"metadata": {},
"source": [
"```\n",
"# Paste this command in Mac terminal to install Homebrew\n",
"/usr/bin/ruby -e \"$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)\"\n",
"\n",
"# install dependency\n",
"brew update\n",
"brew install pkg-config\n",
"brew install graphviz\n",
"brew tap homebrew/core\n",
"brew install opencv\n",
"brew tap homebrew/versions\n",
"brew install llvm\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "bac628d1",
"metadata": {},
"source": [
"### Clone MXNet sources"
]
},
{
"cell_type": "markdown",
"id": "603726e0",
"metadata": {},
"source": [
"```\n",
"git clone --recursive https://github.com/apache/mxnet.git\n",
"cd mxnet\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "eb8d401f",
"metadata": {},
"source": [
"### Build MXNet with oneDNN"
]
},
{
"cell_type": "markdown",
"id": "a255f7d1",
"metadata": {},
"source": [
"```\n",
"LIBRARY_PATH=$(brew --prefix llvm)/lib/ make -j $(sysctl -n hw.ncpu) CC=$(brew --prefix llvm)/bin/clang CXX=$(brew --prefix llvm)/bin/clang++ USE_OPENCV=1 USE_OPENMP=1 USE_ONEDNN=1 USE_BLAS=apple\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "f9b3b7ba",
"metadata": {},
"source": [
"<h2 id=\"3\">Windows</h2>\n",
"\n",
"On Windows, you can use [Micrsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/) and [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/) to compile MXNet with oneDNN.\n",
"[Micrsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/) is recommended.\n",
"\n",
"**Visual Studio 2015**\n",
"\n",
"To build and install MXNet yourself, you need the following dependencies. Install the required dependencies:\n",
"\n",
"1. If [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/) is not already installed, download and install it. You can download and install the free community edition.\n",
"2. Download and Install [CMake 3](https://cmake.org/files/v3.14/cmake-3.14.0-win64-x64.msi) if it is not already installed.\n",
"3. Download [OpenCV 3](https://sourceforge.net/projects/opencvlibrary/files/3.4.5/opencv-3.4.5-vc14_vc15.exe/download), and unzip the OpenCV package, set the environment variable ```OpenCV_DIR``` to point to the ```OpenCV build directory``` (e.g.,```OpenCV_DIR = C:\\opencv\\build ```). Also, add the OpenCV bin directory (```C:\\opencv\\build\\x64\\vc14\\bin``` for example) to the ``PATH`` variable.\n",
"4. If you have Intel Math Kernel Library (Intel MKL) installed, set ```MKLROOT``` environment variable to point to ```MKL``` directory that contains the ```include``` and ```lib```. If you want to use MKL blas, you should set ```-DUSE_BLAS=mkl``` when cmake. Typically, you can find the directory in ```C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries\\windows\\mkl```.\n",
"5. If you don't have the Intel Math Kernel Library (MKL) installed, download and install [OpenBLAS](http://sourceforge.net/projects/openblas/files/v0.2.14/), or build the latest version of OpenBLAS from source. Note that you should also download ```mingw64.dll.zip``` along with openBLAS and add them to PATH.\n",
"6. Set the environment variable ```OpenBLAS_HOME``` to point to the ```OpenBLAS``` directory that contains the ```include``` and ```lib``` directories. Typically, you can find the directory in ```C:\\Downloads\\OpenBLAS\\```.\n",
"\n",
"After you have installed all of the required dependencies, build the MXNet source code:\n",
"\n",
"1. Start a Visual Studio command prompt by click windows Start menu>>Visual Studio 2015>>VS2015 X64 Native Tools Command Prompt, and download the MXNet source code from [GitHub](https://github.com/apache/mxnet) by the command:"
]
},
{
"cell_type": "markdown",
"id": "c0f79aef",
"metadata": {},
"source": [
"```\n",
"git clone --recursive https://github.com/apache/mxnet.git\n",
"cd C:\\mxent\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "fa7a303b",
"metadata": {},
"source": [
"2. Enable oneDNN by -DUSE_ONEDNN=1. Use [CMake 3](https://cmake.org/) to create a Visual Studio solution in ```./build```. Make sure to specify the architecture in the\n",
"command:"
]
},
{
"cell_type": "markdown",
"id": "3021449c",
"metadata": {},
"source": [
"```\n",
">mkdir build\n",
">cd build\n",
">cmake -G \"Visual Studio 14 Win64\" .. -DUSE_CUDA=0 -DUSE_CUDNN=0 -DUSE_NVRTC=0 -DUSE_OPENCV=1 -DUSE_OPENMP=1 -DUSE_PROFILER=1 -DUSE_BLAS=Open -DUSE_LAPACK=1 -DUSE_DIST_KVSTORE=0 -DCUDA_ARCH_NAME=All -DUSE_ONEDNN=1 -DCMAKE_BUILD_TYPE=Release\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "d7d7f887",
"metadata": {},
"source": [
"3. Enable oneDNN and Intel MKL as BLAS library by the command:"
]
},
{
"cell_type": "markdown",
"id": "dec2edb6",
"metadata": {},
"source": [
"```\n",
">\"C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries\\windows\\mkl\\bin\\mklvars.bat\" intel64\n",
">cmake -G \"Visual Studio 14 Win64\" .. -DUSE_CUDA=0 -DUSE_CUDNN=0 -DUSE_NVRTC=0 -DUSE_OPENCV=1 -DUSE_OPENMP=1 -DUSE_PROFILER=1 -DUSE_BLAS=mkl -DUSE_LAPACK=1 -DUSE_DIST_KVSTORE=0 -DCUDA_ARCH_NAME=All -DUSE_ONEDNN=1 -DCMAKE_BUILD_TYPE=Release\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "3e480235",
"metadata": {},
"source": [
"4. After the CMake successfully completed, in Visual Studio, open the solution file ```.sln``` and compile it, or compile the MXNet source code by using following command:"
]
},
{
"cell_type": "markdown",
"id": "66bcb621",
"metadata": {},
"source": [
"```r\n",
"msbuild mxnet.sln /p:Configuration=Release;Platform=x64 /maxcpucount\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "db1b2740",
"metadata": {},
"source": [
"These commands produce mxnet library called ```libmxnet.dll``` in the ```./build/Release/``` or ```./build/Debug``` folder. Also ```libmkldnn.dll``` with be in the ```./build/3rdparty/onednn/src/Release/```\n",
"\n",
"5. Make sure that all the dll files used above(such as `libmkldnn.dll`, `libmklml*.dll`, `libiomp5.dll`, `libopenblas*.dll`, etc) are added to the system PATH. For convinence, you can put all of them to ```\\windows\\system32```. Or you will come across `Not Found Dependencies` when loading MXNet.\n",
"\n",
"**Visual Studio 2017**\n",
"\n",
"User can follow the same steps of Visual Studio 2015 to build MXNET with oneDNN, but change the version related command, for example,```C:\\opencv\\build\\x64\\vc15\\bin``` and build command is as below:"
]
},
{
"cell_type": "markdown",
"id": "36f80401",
"metadata": {},
"source": [
"```\n",
">cmake -G \"Visual Studio 15 Win64\" .. -DUSE_CUDA=0 -DUSE_CUDNN=0 -DUSE_NVRTC=0 -DUSE_OPENCV=1 -DUSE_OPENMP=1 -DUSE_PROFILER=1 -DUSE_BLAS=mkl -DUSE_LAPACK=1 -DUSE_DIST_KVSTORE=0 -DCUDA_ARCH_NAME=All -DUSE_ONEDNN=1 -DCMAKE_BUILD_TYPE=Release\n",
"\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "178f335e",
"metadata": {},
"source": [
"<h2 id=\"4\">Verify MXNet with python</h2>\n",
"\n",
"Preinstall python and some dependent modules:"
]
},
{
"cell_type": "markdown",
"id": "27e33d1a",
"metadata": {},
"source": [
"```\n",
"pip install numpy graphviz\n",
"set PYTHONPATH=[workdir]\\mxnet\\python\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "c40f89ca",
"metadata": {},
"source": [
"or install mxnet"
]
},
{
"cell_type": "markdown",
"id": "831145fa",
"metadata": {},
"source": [
"```\n",
"cd python\n",
"sudo python setup.py install\n",
"python -c \"import mxnet as mx;print((mx.nd.ones((2, 3))*2).asnumpy());\"\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "5c7b2757",
"metadata": {},
"source": [
"Expected Output:"
]
},
{
"cell_type": "markdown",
"id": "d23b7c1a",
"metadata": {},
"source": [
"```\n",
"[[ 2. 2. 2.]\n",
" [ 2. 2. 2.]]\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "85583841",
"metadata": {},
"source": [
"### Verify whether oneDNN works\n",
"\n",
"After MXNet is installed, you can verify if oneDNN backend works well with a single Convolution layer."
]
},
{
"cell_type": "markdown",
"id": "44b9dc44",
"metadata": {},
"source": [
"```\n",
"from mxnet import np\n",
"from mxnet.gluon import nn\n",
"\n",
"num_filter = 32\n",
"kernel = (3, 3)\n",
"pad = (1, 1)\n",
"shape = (32, 32, 256, 256)\n",
"\n",
"conv_layer = nn.Conv2D(channels=num_filter, kernel_size=kernel, padding=pad)\n",
"conv_layer.initialize()\n",
"\n",
"data = np.random.normal(size=shape)\n",
"o = conv_layer(data)\n",
"print(o)\n",
"```\n"
]
},
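{
"cell_type": "markdown",
"id": "1a2b3c4d",
"metadata": {},
"source": [
"Since the layer uses a 3x3 kernel with padding (1, 1) and the default stride of 1, the spatial dimensions are preserved and the channel dimension becomes `num_filter`. A quick sanity check (a hypothetical addition to the snippet above):"
]
},
{
"cell_type": "markdown",
"id": "5e6f7a8b",
"metadata": {},
"source": [
"```\n",
"# output height/width: (256 + 2*1 - 3)/1 + 1 = 256; channel dim becomes num_filter\n",
"assert o.shape == (32, num_filter, 256, 256)\n",
"```\n"
]
},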
{
"cell_type": "markdown",
"id": "25dec245",
"metadata": {},
"source": [
"More detailed debugging and profiling information can be logged by setting the environment variable 'DNNL_VERBOSE':"
]
},
{
"cell_type": "markdown",
"id": "26926415",
"metadata": {},
"source": [
"```\n",
"export DNNL_VERBOSE=1\n",
"```\n"
]
},
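{
"cell_type": "markdown",
"id": "9c0d1e2f",
"metadata": {},
"source": [
"If you work from a Python session rather than a shell, the same can be done with `os.environ` (a minimal sketch; note that the variable should be set before MXNet is imported, so that oneDNN reads it when the library is loaded):"
]
},
{
"cell_type": "markdown",
"id": "3a4b5c6d",
"metadata": {},
"source": [
"```\n",
"import os\n",
"# set before importing mxnet so oneDNN picks the value up at load time\n",
"os.environ['DNNL_VERBOSE'] = '1'\n",
"\n",
"import mxnet as mx\n",
"```\n"
]
},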
{
"cell_type": "markdown",
"id": "98f7fc16",
"metadata": {},
"source": [
"For example, by running above code snippet, the following debugging logs providing more insights on oneDNN primitives `convolution` and `reorder`. That includes: Memory layout, infer shape and the time cost of primitive execution."
]
},
{
"cell_type": "markdown",
"id": "22f7806f",
"metadata": {},
"source": [
"```\n",
"dnnl_verbose,info,oneDNN v2.3.2 (commit e2d45252ae9c3e91671339579e3c0f0061f81d49)\n",
"dnnl_verbose,info,cpu,runtime:OpenMP\n",
"dnnl_verbose,info,cpu,isa:Intel AVX-512 with Intel DL Boost\n",
"dnnl_verbose,info,gpu,runtime:none\n",
"dnnl_verbose,info,prim_template:operation,engine,primitive,implementation,prop_kind,memory_descriptors,attributes,auxiliary,problem_desc,exec_time\n",
"dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:acdb:f0,,,32x32x256x256,8.34912\n",
"dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:Acdb32a:f0,,,32x32x3x3,0.0229492\n",
"dnnl_verbose,exec,cpu,convolution,brgconv:avx512_core,forward_inference,src_f32::blocked:acdb:f0 wei_f32::blocked:Acdb32a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:acdb:f0,,alg:convolution_direct,mb32_ic32oc32_ih256oh256kh3sh1dh0ph1_iw256ow256kw3sw1dw0pw1,10.5898\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "390b3cf3",
"metadata": {},
"source": [
"You can find step-by-step guidance to do profiling for oneDNN primitives in [Profiling oneDNN Operators](https://mxnet.apache.org/api/python/docs/tutorials/performance/backend/profiler.html#Profiling-MKLDNN-Operators).\n",
"\n",
"<h2 id=\"5\">Enable MKL BLAS</h2>\n",
"\n",
"With MKL BLAS, the performace is expected to furtherly improved with variable range depending on the computation load of the models.\n",
"You can redistribute not only dynamic libraries but also headers, examples and static libraries on accepting the license [Intel Simplified license](https://software.intel.com/en-us/license/intel-simplified-software-license).\n",
"Installing the full MKL installation enables MKL support for all operators under the linalg namespace.\n",
"\n",
" 1. Download and install the latest full MKL version following instructions on the [intel website.](https://software.intel.com/en-us/mkl) You can also install MKL through [YUM](https://software.intel.com/content/www/us/en/develop/documentation/installation-guide-for-intel-oneapi-toolkits-linux/top/installation/install-using-package-managers/yum-dnf-zypper.html) or [APT](https://software.intel.com/content/www/us/en/develop/documentation/installation-guide-for-intel-oneapi-toolkits-linux/top/installation/install-using-package-managers/apt.html) Repository.\n",
"\n",
" 2. Create and navigate to build directory `mkdir build && cd build`\n",
"\n",
" 3. Run `cmake -DUSE_CUDA=OFF -DUSE_BLAS=mkl ..`\n",
"\n",
" 4. Run `make -j`\n",
"\n",
" 5. Navigate into the python directory\n",
"\n",
" 6. Run `sudo python setup.py install`\n",
"\n",
"### Verify whether MKL works\n",
"\n",
"After MXNet is installed, you can verify if MKL BLAS works well with a linear matrix solver."
]
},
{
"cell_type": "markdown",
"id": "095eb338",
"metadata": {},
"source": [
"```\n",
"from mxnet import np\n",
"coeff = np.array([[7, 0], [5, 2]])\n",
"y = np.array([14, 18])\n",
"x = np.linalg.solve(coeff, y)\n",
"print(x)\n",
"```\n"
]
},
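{
"cell_type": "markdown",
"id": "7e8f9a0b",
"metadata": {},
"source": [
"For reference, you can check the result by hand: the first equation gives `7*x1 = 14`, so `x1 = 2`, and substituting into the second gives `5*2 + 2*x2 = 18`, so `x2 = 4`. The printed result should therefore be `[2. 4.]`."
]
},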
{
"cell_type": "markdown",
"id": "86c2323a",
"metadata": {},
"source": [
"You can get the verbose log output from mkl library by setting environment variable:"
]
},
{
"cell_type": "markdown",
"id": "d33a4432",
"metadata": {},
"source": [
"```\n",
"export MKL_VERBOSE=1\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "5c42d03f",
"metadata": {},
"source": [
"Then by running above code snippet, you should get the similar output to message below (`SGESV` primitive from MKL was executed). Layout information and primitive execution performance are also demonstrated in the log message."
]
},
{
"cell_type": "markdown",
"id": "89e55be3",
"metadata": {},
"source": [
"```\n",
"mkl-service + Intel(R) MKL: THREADING LAYER: (null)\n",
"mkl-service + Intel(R) MKL: setting Intel(R) MKL to use INTEL OpenMP runtime\n",
"mkl-service + Intel(R) MKL: preloading libiomp5.so runtime\n",
"Intel(R) MKL 2020.0 Update 1 Product build 20200208 for Intel(R) 64 architecture Intel(R) Advanced Vector Extensions 512 (Intel(R) AVX-512) with support of Vector Neural Network Instructions enabled processors, Lnx 2.70GHz lp64 intel_thread\n",
"MKL_VERBOSE SGESV(2,1,0x7f74d4002780,2,0x7f74d4002798,0x7f74d4002790,2,0) 77.58us CNR:OFF Dyn:1 FastMM:1 TID:0 NThr:56\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "73077b82",
"metadata": {},
"source": [
"<h2 id=\"6\">Graph optimization</h2>\n",
"\n",
"To better utilise oneDNN potential, using graph optimizations is recommended. There are few limitations of this feature:\n",
"\n",
"- It works only for inference.\n",
"- Only subclasses of HybridBlock and Symbol can call optimize_for API.\n",
"- This feature will only run on the CPU, even if you're using a GPU-enabled build of MXNet.\n",
"\n",
"If your use case met above conditions, graph optimizations can be enabled by just simple call `optimize_for` API. Example below:"
]
},
{
"cell_type": "markdown",
"id": "b3cbbcac",
"metadata": {},
"source": [
"```\n",
"from mxnet import np\n",
"from mxnet.gluon import nn\n",
"\n",
"data = np.random.normal(size=(32,3,224,224))\n",
"\n",
"net = nn.HybridSequential()\n",
"net.add(nn.Conv2D(channels=64, kernel_size=(3,3)))\n",
"net.add(nn.Activation('relu'))\n",
"net.initialize()\n",
"print(\"=\" * 5, \" Not optimized \", \"=\" * 5)\n",
"o = net(data)\n",
"print(o)\n",
"\n",
"net.optimize_for(data, backend='ONEDNN')\n",
"print(\"=\" * 5, \" Optimized \", \"=\" * 5)\n",
"o = net(data)\n",
"print(o)\n",
"\n",
"```\n"
]
},
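{
"cell_type": "markdown",
"id": "1c2d3e4f",
"metadata": {},
"source": [
"To keep the optimized model for later deployment, the network can be exported after optimization (a sketch using the standard Gluon export API; the file name `optimized_net` is arbitrary):"
]
},
{
"cell_type": "markdown",
"id": "5a6b7c8d",
"metadata": {},
"source": [
"```\n",
"# writes optimized_net-symbol.json and optimized_net-0000.params\n",
"net.export('optimized_net', epoch=0)\n",
"```\n"
]
},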
{
"cell_type": "markdown",
"id": "a5f716fa",
"metadata": {},
"source": [
"Above code snippet should produce similar output to the following one (printed tensors are omitted) :"
]
},
{
"cell_type": "markdown",
"id": "1a9514b6",
"metadata": {},
"source": [
"```\n",
"===== Not optimized =====\n",
"[15:05:43] ../src/storage/storage.cc:202: Using Pooled (Naive) StorageManager for CPU\n",
"dnnl_verbose,info,oneDNN v2.3.2 (commit e2d45252ae9c3e91671339579e3c0f0061f81d49)\n",
"dnnl_verbose,info,cpu,runtime:OpenMP\n",
"dnnl_verbose,info,cpu,isa:Intel AVX-512 with AVX512BW, AVX512VL, and AVX512DQ extensions\n",
"dnnl_verbose,info,gpu,runtime:none\n",
"dnnl_verbose,info,prim_template:operation,engine,primitive,implementation,prop_kind,memory_descriptors,attributes,auxiliary,problem_desc,exec_time\n",
"dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:acdb:f0,,,32x3x224x224,8.87793\n",
"dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:Acdb64a:f0,,,64x3x3x3,0.00708008\n",
"dnnl_verbose,exec,cpu,convolution,brgconv:avx512_core,forward_inference,src_f32::blocked:acdb:f0 wei_f32::blocked:Acdb64a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:acdb:f0,,alg:convolution_direct,mb32_ic3oc64_ih224oh222kh3sh1dh0ph0_iw224ow222kw3sw1dw0pw0,91.511\n",
"dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:Acdb64a:f0,,,64x3x3x3,0.00610352\n",
"dnnl_verbose,exec,cpu,eltwise,jit:avx512_common,forward_inference,data_f32::blocked:acdb:f0 diff_undef::undef::f0,,alg:eltwise_relu alpha:0 beta:0,32x64x222x222,85.4392\n",
"===== Optimized =====\n",
"dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:Acdb64a:f0 dst_f32::blocked:abcd:f0,,,64x3x3x3,0.00610352\n",
"dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:Acdb64a:f0,,,64x3x3x3,0.00585938\n",
"dnnl_verbose,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd:f0 dst_f32::blocked:acdb:f0,,,32x3x224x224,3.98999\n",
"dnnl_verbose,exec,cpu,convolution,brgconv:avx512_core,forward_inference,src_f32::blocked:acdb:f0 wei_f32::blocked:Acdb64a:f0 bia_f32::blocked:a:f0 dst_f32::blocked:acdb:f0,attr-post-ops:eltwise_relu:0:1 ,alg:convolution_direct,mb32_ic3oc64_ih224oh222kh3sh1dh0ph0_iw224ow222kw3sw1dw0pw0,20.46\n",
"```\n"
]
},
{
"cell_type": "markdown",
"id": "02ec408a",
"metadata": {},
"source": [
"After optimization of Convolution + ReLU oneDNN executes both operations within single convolution primitive.\n",
"\n",
"<h2 id=\"7\">Quantization and Inference with INT8</h2>\n",
"\n",
"MXNet built with oneDNN brings outstanding performance improvement on quantization and inference with INT8 Intel CPU Platform on Intel Xeon Scalable Platform.\n",
"\n",
"- [CNN Quantization Examples](https://github.com/apache/mxnet/tree/master/example/quantization).\n",
"\n",
"- [Model Quantization for Production-Level Neural Network Inference](https://cwiki.apache.org/confluence/display/MXNET/MXNet+Graph+Optimization+and+Quantization+based+on+subgraph+and+MKL-DNN).\n",
"\n",
"<h2 id=\"8\">Next Steps and Support</h2>\n",
"\n",
"- For questions or support specific to MKL, visit the [Intel MKL](https://software.intel.com/en-us/mkl) website.\n",
"\n",
"- For questions or support specific to oneDNN, visit the [oneDNN](https://github.com/oneapi-src/oneDNN) website.\n",
"\n",
"- If you find bugs, please open an issue on GitHub for [MXNet with MKL](https://github.com/apache/mxnet/labels/MKL) or [MXNet with oneDNN](https://github.com/apache/mxnet/labels/MKLDNN)."
]
}
],
"metadata": {
"language_info": {
"name": "python"
}
},
"nbformat": 4,
"nbformat_minor": 5
}